
Architecture

Control plane, runtime, knowledge, memory, artifacts.

Koda is a control-plane-first agent platform. The repository ships both the operator-facing product surfaces and the runtime services required to execute, ground, and supervise configurable agents in production-style environments. This page walks through the full topology and how the layers communicate.

[Architecture diagram. Colour-coded transports: HTTP, gRPC, SQL/S3. Operators (dashboard, CLI, API) and end users (Telegram, web chat, custom) reach the Next.js web dashboard (:3000) and the control plane (:8090, /api/control-plane/*) and runtime API (:8090, /api/runtime/*). These call five internal gRPC services: runtime-kernel (:50061, tasks, env, terminals), retrieval (:50062, knowledge search), memory (:50063, recall, extract), artifact (:50064, ingest, evidence), and security (:50065, validate, redact). Model calls go to external providers (Claude, GPT, Gemini, Ollama). Durable state lives in Postgres (with pgvector); binaries live in SeaweedFS (:8333, S3-compatible). Docker Compose handles health, doctor, and lifecycle.]

The six domains

Koda is organised around six stable domains. Everything else is an implementation detail underneath one of them.

  • Control plane — setup, provider configuration, secrets, agent definitions, publication, and operator APIs.
  • Runtime — queue orchestration, execution supervision, runtime APIs, agent tools, and provider adapters.
  • Knowledge — retrieval, evidence sourcing, and operator-approved grounding context.
  • Memory — recall, extraction, curation, and durable semantic context.
  • Artifacts — ingestion, metadata, object-backed binaries, and evidence generation.
  • Infrastructure — Postgres, S3-compatible object storage, Docker Compose, health checks, and bootstrap tooling.
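The domain boundaries can be sketched as a closed union, with each internal gRPC service filed under the domain it implements. This is an illustrative sketch, not Koda source: the service names and ports come from this page, but the mapping shape (and the choice to file security under runtime as a supervision concern) is an assumption.

```typescript
// Illustrative sketch: the six stable domains as a closed union.
type Domain =
  | "control-plane"
  | "runtime"
  | "knowledge"
  | "memory"
  | "artifacts"
  | "infrastructure";

// Internal gRPC services mapped to the domain they implement.
// Service names/ports are from this page; filing security under
// "runtime" is an assumption (supervision is a runtime concern).
const serviceDomains: Record<string, Domain> = {
  "runtime-kernel:50061": "runtime",
  "retrieval:50062": "knowledge",
  "memory:50063": "memory",
  "artifact:50064": "artifacts",
  "security:50065": "runtime",
};
```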
Harness-oriented by design
Koda is deliberately not a single niche, assistant persona, or task domain. Multi-agent and multi-provider configurations are first-class operating patterns. Infrastructure bootstrap is separated from product configuration so operators can shape agents however they need.

How the layers talk

Four transport patterns cover everything. The architecture diagram above highlights each one in a different colour.

  • HTTP — the control plane serves /api/control-plane/* and the runtime surfaces /api/runtime/* on port 8090. The Next.js dashboard on port 3000 proxies everything it needs through these same routes.
  • gRPC (internal) — the control plane and runtime call five internal services: runtime-kernel:50061, retrieval:50062, memory:50063, artifact:50064, security:50065. None of these ports are exposed outside the compose network.
  • SQL — every service with durable state talks to a single Postgres instance (with pgvector) through typed repositories. No service owns its own SQLite or embedded store.
  • S3-compatible — binary artifacts travel over the S3 API to SeaweedFS (port 8333 internal). Swap in AWS S3, MinIO, or Cloudflare R2 by changing the endpoint; the contract is identical.
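Because the object-store contract is generic S3, swapping backends should only change the endpoint and addressing style. The sketch below illustrates that idea under stated assumptions: the config shape, the `koda` bucket name, and the MinIO hostname/port are hypothetical, not Koda's actual settings; only the SeaweedFS endpoint (`seaweedfs:8333`) comes from this page.

```typescript
// Hypothetical config shape for the S3-compatible contract.
// Only the endpoint and addressing style vary between backends.
interface ObjectStoreConfig {
  endpoint: string;        // where the S3 API is served
  forcePathStyle: boolean; // path-style addressing, typical for self-hosted stores
  bucket: string;          // "koda" is an assumed placeholder name
}

function objectStore(backend: "seaweedfs" | "minio" | "aws"): ObjectStoreConfig {
  switch (backend) {
    case "seaweedfs":
      // bundled default: internal compose hostname + port from this page
      return { endpoint: "http://seaweedfs:8333", forcePathStyle: true, bucket: "koda" };
    case "minio":
      return { endpoint: "http://minio:9000", forcePathStyle: true, bucket: "koda" };
    case "aws":
      return { endpoint: "https://s3.amazonaws.com", forcePathStyle: false, bucket: "koda" };
  }
}
```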

Deployment topology

The default installation brings up a single compose stack: four core services (web, app, postgres, seaweedfs, plus the one-shot seaweedfs-init) and five internal gRPC services. The local quickstart and the single-node VPS path use the same topology, so development stays close to production.

  • web (Next.js dashboard) and app (control plane + runtime HTTP) are the only services that should ever be reachable from outside the compose network.
  • postgres holds durable state for every service — runtime, control-plane, knowledge, memory, and audit all live here under separate schemas.
  • seaweedfs + seaweedfs-init provide the bundled S3-compatible object store. seaweedfs-init is a one-shot container that creates the default bucket on first boot.
  • The five internal gRPC services are started by the same compose file but bind only to the backend network.
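The topology above reduces to two sets: services that may be exposed and services that must stay on the backend network. A minimal sketch, using the service names from this page (the helper function is illustrative, not Koda code):

```typescript
// Illustrative inventory of the default compose stack.
// Only web and app should ever be reachable from outside the compose network.
const publicServices = new Set(["web", "app"]);
const internalServices = new Set([
  "postgres", "seaweedfs", "seaweedfs-init",
  "runtime-kernel", "retrieval", "memory", "artifact", "security",
]);

function isExternallyReachable(service: string): boolean {
  return publicServices.has(service);
}
```

A reverse proxy or firewall rule set in front of the stack only ever needs to know about the two public services; everything else binds to the backend network.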

State and storage

Koda uses durable storage by default. Local disk is treated as scratch only — if you lose a container, you lose nothing that matters.

  • Postgres is the source of truth for control-plane, runtime, memory, knowledge, and audit records. The same database, different schemas.
  • Object binaries and artifact payloads flow through a generic S3-compatible contract. SeaweedFS is the default, but the system doesn't care which backend serves the contract.
  • Local disk inside containers is scratch. Runtime workspaces, git worktrees, terminal scratch — all ephemeral.
Why control-plane-first
Product configuration (providers, agents, secrets, integrations) lives behind the control plane, not in per-agent .env files. Bootstrap infrastructure (Docker, Postgres, object storage) is separate. This keeps reverse proxies, Tailscale, and VPS platforms thin and focused on infrastructure concerns.

Public surfaces

Everything external ever sees comes through one of these entry points:

  • / and /control-plane — the Next.js operator dashboard.
  • /setup — first-boot compatibility bridge.
  • /api/control-plane/* — HTTP control-plane API.
  • /api/runtime/* — HTTP runtime API.
  • /docs/openapi/control-plane.json — the OpenAPI contract that the dashboard (and any external integration) is built against.
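The entry points above can be sketched as a small path classifier. The route prefixes come from this page; the classifier itself is illustrative, not the dashboard's actual router:

```typescript
// Sketch: classify an incoming path against the public surfaces.
type Surface =
  | "dashboard"          // / and /control-plane
  | "setup"              // first-boot compatibility bridge
  | "control-plane-api"  // /api/control-plane/*
  | "runtime-api"        // /api/runtime/*
  | "openapi"            // the OpenAPI contract
  | "unknown";

function classify(path: string): Surface {
  // check API prefixes before /control-plane, which they would shadow otherwise
  if (path.startsWith("/api/control-plane/")) return "control-plane-api";
  if (path.startsWith("/api/runtime/")) return "runtime-api";
  if (path === "/docs/openapi/control-plane.json") return "openapi";
  if (path === "/setup") return "setup";
  if (path === "/" || path.startsWith("/control-plane")) return "dashboard";
  return "unknown";
}
```

Anything that classifies as `unknown` is, by the design above, something the outside world should never see.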

Go deeper

  • Control plane — what operators configure and how the API is organised.
  • Runtime — the execution lifecycle and the five internal services.
  • Memory & knowledge — how recall and retrieval ground every task.