Open-source AI employee that joins your team as a real colleague — with its own identity, memory, and judgment.
## What it does
Open Intern lives in your team's communication channels and works alongside you — with persistent memory and enterprise-grade safety.
- Lives in Lark, Discord, Slack, or a web dashboard, meeting your team where it already works.
- Three-layer memory (org, channel, personal) powered by pgvector remembers context like a real colleague.
- Manage all your AI agents from one unified interface; no more one-server-per-agent sprawl.
- Action classification, human approval workflows, and complete audit trails: your policies, enforced.
- Scales to zero when idle and bursts on demand; a sandboxed runtime keeps your data safe.
- MIT licensed, self-hosted on your infrastructure, with zero telemetry. Full control, no lock-in.
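To make the three-layer memory concrete, here is a minimal sketch of scoped recall. This is an illustration only: Open Intern stores embeddings in PostgreSQL with pgvector, while this sketch uses plain Python lists and cosine similarity, and the class and method names (`LayeredMemory`, `recall`) are hypothetical, not the project's actual API.

```python
from math import sqrt

def cosine(a, b):
    # Cosine similarity between two embedding vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = sqrt(sum(x * x for x in a))
    nb = sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

class LayeredMemory:
    """Toy stand-in for a pgvector-backed store with three scopes."""

    def __init__(self):
        # Each layer holds (embedding, text) entries.
        self.layers = {"org": [], "channel": [], "personal": []}

    def remember(self, layer, embedding, text):
        self.layers[layer].append((embedding, text))

    def recall(self, query, top_k=3):
        # Search all three layers and tag each hit with its scope,
        # so the agent can weigh personal context against org-wide facts.
        hits = []
        for scope, entries in self.layers.items():
            for emb, text in entries:
                hits.append((cosine(query, emb), scope, text))
        hits.sort(reverse=True)
        return hits[:top_k]

memory = LayeredMemory()
memory.remember("org", [1.0, 0.0], "Release cadence is every two weeks")
memory.remember("channel", [0.9, 0.1], "This channel owns the billing service")
memory.remember("personal", [0.0, 1.0], "Alice prefers bullet-point summaries")

for score, scope, text in memory.recall([1.0, 0.0]):
    print(f"{scope}: {text} ({score:.2f})")
```

In the real system the same shape is expressed as a vector-similarity query per scope; the point here is only that one lookup spans all three layers.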
## How it works
Three steps from clone to colleague.
1. Clone the repo and run `docker compose up`. PostgreSQL, pgvector, and the agent start automatically.
2. Run `open_intern init` to choose your platform, enter credentials, and define your agent's identity.
3. Your AI teammate joins the channel. Mention it, DM it, or let it proactively engage; it remembers everything.
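The steps above boil down to a handful of commands. The clone URL below is a placeholder; substitute the actual repository URL.

```shell
# 1. Clone and start the stack (PostgreSQL, pgvector, and the agent)
git clone <repository-url> open-intern   # placeholder URL
cd open-intern
docker compose up -d

# 2. Choose your platform, enter credentials, define the agent's identity
open_intern init

# 3. Mention or DM the agent in your channel; it is now a teammate
```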
## How it compares
Open Intern is the enterprise AI teammate — not a personal assistant, not a CLI tool.
| | Open Intern | OpenClaw | IronClaw |
|---|---|---|---|
| Target | Teams & enterprises | Individual users | Privacy-focused devs |
| Memory | 3-layer org memory | Per-user, flat | Single-user vector |
| Multi-Agent | Unified dashboard | One per instance | One per instance |
| Scaling | Elastic, scale to zero | One server each | One server each |
| Isolation | Sandboxed runtime | Runs on host | WASM sandbox |
| Telemetry | Zero | Opt-out | Zero |
## Get started
All you need is Docker.