
The Problem with One Big Brain
Most people’s first instinct with AI is to build a single, all-knowing assistant. Feed it everything. Give it every tool. Make it smart enough to handle any task.
The problem: context is finite. Memory degrades. A single agent trying to be a lawyer, a sysadmin, a teacher, a trader, and a parent scheduler simultaneously ends up being mediocre at all of them. It’s like hiring one person and expecting them to be your CFO, your attorney, your IT department, and your kids’ tutor.
The better model? A specialized team. Each agent knows their domain deeply, runs on the model best suited for their work, and operates without polluting every other conversation with unrelated context.
That’s the philosophy behind Mission Control.
What Is Mission Control?
Mission Control is a full-stack AI command center I built on top of OpenClaw — a personal AI gateway that handles session management, agent routing, cron scheduling, and multi-channel communication. It’s a Next.js dashboard (dark theme, left sidebar navigation, 20+ pages) running locally on my Mac mini M4 at mission.adammeeker.com:8443, accessible remotely via Tailscale.
But it’s not just a dashboard. It’s an operating system for a team of AI agents — each with their own identity, voice, model selection, skills, and area of responsibility.

The Inspiration
I’ve been building systems professionally for over two decades. The pattern that always works: break complex problems into specialized components, each with clear ownership and well-defined interfaces. What’s a microservices architecture but a team of focused specialists that communicate through contracts?
When I started taking AI seriously, I tried the obvious path — one big assistant with access to everything. It worked, sort of. But the same problem kept appearing: the more you ask of a single context window, the more it becomes a generalist that’s average at everything rather than expert at anything.
The mental model that changed things was treating AI not as a tool I pick up and put down, but as infrastructure that runs continuously. Like a network stack or an event bus — always listening, always ready to act, not waiting to be invoked.
From there the architecture followed naturally. If AI is infrastructure, then it should have the same properties I demand from any good infrastructure: modularity, specialization, and clear failure boundaries. One agent per domain. Each running on the model best suited to the work. Each with its own context, personality, and directives.
That’s the idea behind Mission Control.
The Team: 14 Specialized Agents
Why Specialization Matters
Each agent in Mission Control has:
- A focused domain — they don’t try to do everything
- A selected model — Opus for complex reasoning (Matlock), Haiku for fast triage (Inbox), Sonnet for balanced work (most others)
- A personality and voice — cloned via Chatterbox TTS running locally on the M4’s MPS GPU
- Explicit directives — instructions burned into their system prompt for their specific role
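As a rough sketch, an agent definition might look like the following. The field names, the `AgentSpec` shape, and Matlock's example directives are all illustrative assumptions, not the actual Mission Control schema:

```typescript
// Hypothetical shape of an agent definition (field names are illustrative,
// not the real Mission Control schema).
interface AgentSpec {
  name: string;
  domain: string;          // focused area of responsibility
  model: "opus" | "sonnet" | "haiku";
  voice: string | null;    // reference clip for Chatterbox; null = text-only
  directives: string[];    // burned into the system prompt
}

const matlock: AgentSpec = {
  name: "Matlock",
  domain: "legal",
  model: "opus",
  voice: "attenborough.wav",
  directives: [
    "Review every SOW for liability and IP clauses.",
    "Flag anything requiring human sign-off; never approve alone.",
  ],
};

// Render the directives into a system prompt preamble.
function systemPrompt(a: AgentSpec): string {
  return [`You are ${a.name}, the ${a.domain} specialist.`, ...a.directives].join("\n");
}
```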
Here’s the current roster:
| Agent | Role | Model | Voice |
|---|---|---|---|
| 👔 Barack | Chief of Staff | Claude Sonnet 4.6 | Obama (cloned) |
| ⚖️ Olivia | Executive Enforcer | Claude Sonnet 4.5 | Olivia Benson (cloned) |
| 🔬 Marcus | Research & Intelligence | Claude Sonnet 4.5 | Morgan Freeman (cloned) |
| ✍️ Riley | Communications & Writing | Claude Opus 4.5 | Custom voice |
| ⚖️ Matlock | Legal Counsel | Claude Opus 4.5 | David Attenborough (cloned) |
| 💼 Sterling | Business Operations | Claude Sonnet 4.5 | Custom voice |
| 📈 Vega | Quantitative Trading | Claude Sonnet 4.6 | Morgan Freeman |
| 🎓 Sage | Teaching Assistant | Claude Sonnet 4.5 | Taylor Swift (cloned) |
| 🔧 Forge | Infrastructure Engineer | Claude Sonnet 4.6 | Morgan Freeman |
| 📡 Maxwell | Communications & PR | Claude Opus 4.5 | Obama |
| 🎵 Taylor | Family & Personal | Claude Haiku 4.5 | Taylor Swift |
| 💻 Dev | Software Engineer | Claude Sonnet 4.6 | text-only |
| 🔍 Quinn | QA & Validation | Claude Haiku 4.5 | text-only |
| 📬 Inbox | Email Triage | Claude Haiku 4.5 | text-only |
The model choices aren’t arbitrary. Matlock (legal review) runs on Opus because contract analysis needs deep reasoning. Inbox (email triage) runs on Haiku because it needs to be fast and cheap — it’s processing dozens of emails, not drafting legal briefs. Vega (trading) runs on Sonnet because it needs good quantitative reasoning but also speed for market analysis.
This is the key insight: different work needs different tools. One giant super-agent on Opus for everything would be both slower and more expensive than running the right model for each job.
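The routing logic above can be sketched as a simple tier table. The tier names and the mapping are illustrative assumptions, not the actual routing code:

```typescript
// Illustrative tiered routing: map task complexity to the cheapest model
// that can handle it. Tier names and model ids are assumptions.
type Tier = "triage" | "balanced" | "deep";

const MODEL_FOR_TIER: Record<Tier, string> = {
  triage: "claude-haiku",     // fast, cheap: email classification
  balanced: "claude-sonnet",  // most day-to-day agent work
  deep: "claude-opus",        // contract analysis, long-form drafting
};

function pickModel(tier: Tier): string {
  return MODEL_FOR_TIER[tier];
}
```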
The Pages: 20 Dashboards for 20 Domains
Every page in Mission Control is a purpose-built interface for a specific area of life and work:
Work & Business
- 📋 Tasks — Kanban board with bidirectional Microsoft Planner sync (OAuth with automatic token refresh, full checklist/attachment support)
- 📅 Calendar — Weekly grid + OpenClaw cron job management (view, edit, run, delete scheduled agents)
- 🏗️ Projects — Portfolio view of active consulting engagements (Dakota Red, Klosterman Construction, Globus Medical, teaching courses)
- 💸 Invoices — Invoice Ninja integration for consulting billing
- 📝 Contracts — DocuSeal integration for client agreements
- ⚖️ Legal — Matlock’s domain: SOW review queue, compliance flags
Intelligence & Research
- 🧠 Briefing — Every morning: calendar, tasks, weather, email highlights, overnight agent activity, consulting pipeline, memory flash
- 📡 Feed — Unified real-time stream: emails, iMessages, agent actions, tasks, cron events
- 🔍 People — Personal CRM with 730 contacts, relationship strength metrics, enrichment via Marcus
Life & Home
- 👨‍👩‍👧‍👧 Family — Meeker family command center: upcoming events, birthday tracking, Tessa’s kindergarten registration deadline (March 16!)
- 🏠 Home — Full Home Assistant integration: toggle lights, adjust thermostats, lock doors, quick actions (Good Night, Movie Mode, Leaving)
- 📈 Trading — Vega’s dashboard: positions, P&L, decision rationale log, instruction input
Teaching
- 🎓 Teaching — Course management: BAIS 4150 and BAIS 6040 at University of Iowa, Kirkwood Programming Logic (Fall 2026), Canvas LMS integration, D2L/Brightspace placeholder
Infrastructure & Memory
- 🔧 System — Service health (6 services monitored), Microsoft Graph OAuth, config management, Forge’s monitoring panel
- 💾 Memory — Obsidian-style markdown editor: browse/edit all memory files and long-term MEMORY.md
- 📚 Docs — 373-document browser across Obsidian vault, workspace docs, and project files
- 👥 Team — Org chart of all 14 agents, grouped by function, with direct chat links
- 🤖 Agents — Full roster grid with per-agent profile pages (5 tabs: Profile, Activity, Config, Tasks, Voice Test)
- 💬 Channels — Multi-agent chat with streaming responses, action protocol, page context injection
The Voice System: Everyone Has a Voice
One of the most satisfying parts of this build is the voice system. Chatterbox TTS runs locally on the Mac mini’s M4 MPS GPU at localhost:4126. Each agent has a cloned voice — 30 seconds of reference audio, zero cloud API calls, zero ongoing cost.
Barack speaks in Obama’s voice. Matlock in David Attenborough’s. Sage in Taylor Swift’s. The voice test UI on every agent profile page lets you type custom text and hear it immediately.
The voice interface extends to Mission Control itself: a floating microphone button on every page lets you ask Barack anything verbally. He responds through the cloned voice. It feels like talking to your team.

The Home Assistant Integration
Another satisfying completion: Mission Control now has full bidirectional Home Assistant control. The /home page shows your entire home — lights, climate, locks, automations — and lets you control all of it.
Quick actions at the top:
- 🌙 Good Night — turns off all lights, locks the front door, triggers the night automation
- 🏠 I’m Home — welcome home routine
- 🚗 Leaving — lock everything, turn off lights, arm alarm
- 💡 Movie Mode — dim living room to 20%, kill the rest
Per-entity controls: brightness sliders, thermostat ±0.5°F, lock/unlock with confirmation. All changes show optimistic UI feedback and refresh the actual state after 1.5 seconds.
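The optimistic-update pattern described above can be sketched as below: apply the desired state immediately, then re-fetch authoritative state after a delay. The entity shape and function names are simplified assumptions:

```typescript
// Sketch of the optimistic-update flow: flip the UI state right away,
// then reconcile with the real device state after a delay.
type EntityState = { id: string; on: boolean; pending: boolean };

function applyOptimistic(s: EntityState, on: boolean): EntityState {
  return { ...s, on, pending: true }; // show the change immediately
}

function reconcile(s: EntityState, actual: boolean): EntityState {
  return { ...s, on: actual, pending: false }; // replace with real state
}

async function toggle(
  s: EntityState,
  send: (id: string, on: boolean) => Promise<void>,       // e.g. HA service call
  fetchActual: (id: string) => Promise<boolean>,          // e.g. HA state read
  delayMs = 1500,
): Promise<EntityState> {
  const optimistic = applyOptimistic(s, !s.on);
  await send(s.id, optimistic.on);
  await new Promise((r) => setTimeout(r, delayMs));
  return reconcile(optimistic, await fetchActual(s.id));
}
```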
The Technical Architecture
The whole thing runs on a few key pillars:
OpenClaw handles agent sessions, cron scheduling, and the bridge between the dashboard and Claude. Agents communicate via `openclaw agent --to <session-id> --message "..."` from the Next.js API routes. No direct LLM API calls from the frontend — everything routes through OpenClaw’s session management.
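A minimal sketch of how an API route might shell out to that CLI. The flags follow the command quoted above; the helper names and the `execFile` wiring are illustrative, not the actual bridge code:

```typescript
import { execFile } from "node:child_process";

// Build the argv for the OpenClaw CLI (matches the command shown above).
function openclawArgs(sessionId: string, message: string): string[] {
  return ["agent", "--to", sessionId, "--message", message];
}

// e.g. called from a Next.js route handler. execFile (not exec) passes the
// message as a literal argument, so user text can't inject shell syntax.
function sendToAgent(sessionId: string, message: string): Promise<string> {
  return new Promise((resolve, reject) => {
    execFile("openclaw", openclawArgs(sessionId, message), (err, stdout) =>
      err ? reject(err) : resolve(stdout),
    );
  });
}
```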
SQLite (data/agents.db) stores agents, tasks, channels, messages, system config, and Planner sync data. Lightweight, fast, zero infrastructure.
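A guess at a minimal slice of that schema — table and column names here are illustrative, not the real `agents.db` DDL:

```typescript
// Illustrative DDL for two of the tables mentioned above (agents, messages).
// The real schema also covers tasks, channels, config, and Planner sync.
const SCHEMA = `
CREATE TABLE IF NOT EXISTS agents (
  id    TEXT PRIMARY KEY,
  name  TEXT NOT NULL,
  model TEXT NOT NULL,
  voice TEXT
);
CREATE TABLE IF NOT EXISTS messages (
  id         INTEGER PRIMARY KEY AUTOINCREMENT,
  channel_id TEXT NOT NULL,
  agent_id   TEXT REFERENCES agents(id),
  body       TEXT NOT NULL,
  created_at TEXT DEFAULT (datetime('now'))
);
`;
```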
Chatterbox TTS (local) handles all voice synthesis. MPS-accelerated on the M4 chip. ElevenLabs-compatible API so OpenClaw’s TTS integration works out of the box.
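A sketch of what a request to that local endpoint could look like, following the ElevenLabs text-to-speech path shape (`/v1/text-to-speech/{voice_id}`). The voice id and payload are placeholders:

```typescript
// Build a fetch request for the local ElevenLabs-compatible TTS server.
// Endpoint path mirrors the ElevenLabs API; voice ids are placeholders.
function ttsRequest(voiceId: string, text: string) {
  return {
    url: `http://localhost:4126/v1/text-to-speech/${voiceId}`,
    init: {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ text }),
    },
  };
}

// Usage (not run here): const { url, init } = ttsRequest("barack", "Good morning.");
// const audio = await (await fetch(url, init)).arrayBuffer();
```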
Caddy handles HTTPS on port 8443 with Let’s Encrypt certificates via Cloudflare DNS challenge. Accessible from anywhere via Tailscale (subnet routing advertises the full 10.0.0.0/24 network).
LaunchAgent keeps Mission Control always running — auto-restarts on crash, starts at login.
Ollama runs locally on the Mac mini, providing open-weight LLMs (Llama 3, Mistral, Phi) as a dual-purpose layer. First: cost optimization — routine, low-stakes tasks like summarizing a calendar event, reformatting a note, or classifying an email don’t need Claude. A local 7B model is fast, free, and more than sufficient. Second: emergency fallback — if Anthropic, OpenAI, and every other frontier provider go dark simultaneously, the system doesn’t die. Agents fall back to the local model, performance degrades gracefully, and the infrastructure keeps running. It’s the difference between a power outage and a generator kicking in.
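That two-tier routing can be sketched in a few lines. The model ids and the decision function are illustrative assumptions:

```typescript
// Sketch of the dual-purpose routing described above: local model for
// low-stakes work, frontier model otherwise, local fallback when the
// cloud is unreachable. Model ids are placeholders.
function resolveModel(opts: { lowStakes: boolean; cloudUp: boolean }): string {
  if (opts.lowStakes) return "ollama/llama3";  // free, local, sufficient
  if (!opts.cloudUp) return "ollama/llama3";   // the generator kicking in
  return "anthropic/claude-sonnet";            // normal path
}
```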

What’s Coming Next
The current system is powerful but it’s still largely reactive — you come to it. The next phase is making it truly proactive:
Agent-to-agent communication — Dev spawns Quinn to review its PRs. Marcus spawns additional research threads. Matlock and Sterling collaborate on a proposal without human intervention. Agents hand off tasks among themselves.
The always-listening office assistant — A wake word daemon ("Hey Barack") → whisper.cpp local transcription → OpenClaw agent → Chatterbox voice response. No cloud transcription. Fully offline-capable.
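That pipeline could be wired as a chain of swappable async stages — the stage signatures below are assumptions about the planned design, and the implementations shown in usage are stubs:

```typescript
// Sketch of the planned wake-word pipeline: each stage (transcription,
// agent query, speech) is an injectable async function, so whisper.cpp,
// OpenClaw, and Chatterbox can each be swapped or mocked independently.
type Stage<I, O> = (input: I) => Promise<O>;

async function assistantLoop(
  transcribe: Stage<Uint8Array, string>, // whisper.cpp, fully offline
  ask: Stage<string, string>,            // OpenClaw agent session
  speak: Stage<string, void>,            // Chatterbox voice response
  audio: Uint8Array,
): Promise<string> {
  const text = await transcribe(audio);
  const reply = await ask(text);
  await speak(reply);
  return reply;
}
```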
The /config page — Edit SOUL.md, AGENTS.md, TOOLS.md, HEARTBEAT.md, and OpenClaw config directly from the browser. Change an agent’s personality and it takes effect immediately.
The /workflows page — A visual editor for cron jobs. Each workflow gets a markdown instruction editor, error logs for the last 10 runs, and a “Run Now” button. The calendar page already links to cron jobs — this closes the loop.
D2L/Brightspace integration — Sage needs access to Kirkwood Community College’s LMS for the Programming Logic course I’m teaching in Fall 2026. The OAuth skeleton is built; just needs credentials from Kirkwood IT.
Alpaca trading integration — Vega’s dashboard shows mock positions today. Once the Alpaca API keys are wired in, it becomes a live quantitative trading interface with real position management.
The Philosophy: Build the Team, Not the God
The biggest lesson from this project: resist the urge to build one agent that does everything.
The LLM context window is your most precious resource. Every token you spend on unrelated history is a token not spent on the actual problem at hand. Sage doesn’t need to know about Vega’s trading positions. Matlock doesn’t need to know about the family calendar. Barack coordinates them, but each one stays sharp within their lane.
The specialized team model also makes it much easier to choose the right model for the job. You wouldn’t use a sledgehammer to hang a picture frame. Haiku for fast triage, Sonnet for balanced work, Opus for deep reasoning — the cost and speed profile matches the actual complexity of each task.
And there’s something psychologically satisfying about it too. When I ask Matlock to review a contract, it doesn’t feel like I’m asking a chatbot. It feels like I’m asking my lawyer. The persona, the voice, the focused expertise — it all adds up to something that actually feels like a team.
Mission Control is built on OpenClaw — the personal AI gateway that makes all of this possible.
The full stack: OpenClaw + Claude (Anthropic) + Ollama (local fallback) + Next.js + SQLite + Chatterbox TTS + Home Assistant + Microsoft Planner + n8n + Tailscale. Running 24/7 on a Mac mini M4 + Proxmox cluster in my home office in Tiffin, Iowa.