Docker Packaging

Spawnfile can generate Docker container artifacts as part of the compile process. This gives you a way to build and run compiled agents against real runtimes using standard Docker tooling.

One compile = one container.

The compiler walks the full graph from the root Spawnfile. Everything it resolves — agents, subagents, team members — lands in a single container image. This applies regardless of how many Spawnfiles are in the graph, how many agents are resolved, or how many distinct runtimes appear.

The compiler emits container artifacts at the compile output root alongside the runtime-specific output:

dist/
  Dockerfile
  entrypoint.sh
  .env.example
  container/
    rootfs/
      var/lib/spawnfile/instances/...
  runtimes/
    openclaw/agents/analyst/...
    picoclaw/agents/editor/...
  spawnfile-report.json
  • Dockerfile and entrypoint.sh are generated by the compiler based on the resolved graph.
  • runtimes/ is the human-inspectable adapter output.
  • container/rootfs/ is the final container filesystem for build-time placement into the runtime’s expected paths.

Each runtime adapter declares container metadata including:

  • A standalone base image or install strategy aligned with the pinned runtime ref
  • System dependencies
  • Expected config and workspace paths inside the container
  • The start command and required runtime environment

For single-runtime compiles, the Dockerfile uses that runtime’s base image directly. For multi-runtime compiles, the Dockerfile uses a common base and installs each runtime.
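As a rough illustration, a multi-runtime Dockerfile might take the following shape. The base image, install commands, and paths here are assumptions for the sketch, not the compiler's actual output:

```dockerfile
# Hypothetical multi-runtime Dockerfile shape; install commands and
# image names are illustrative.
FROM debian:bookworm-slim

# Install each resolved runtime at its pinned ref from runtimes.yaml.
RUN install-openclaw --ref <pinned-ref> \
 && install-picoclaw --ref <pinned-ref>

# Pre-placed compiled config and workspace files land at the runtimes'
# expected paths.
COPY container/rootfs/ /

COPY entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh

ENTRYPOINT ["/entrypoint.sh"]
```

A single-runtime compile would instead start FROM that runtime's declared base image and skip the install steps.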

The runtime version used in the Dockerfile matches the pinned ref from runtimes.yaml. This keeps the container aligned with the runtime version the adapters were written against.

The compile step does not require local runtime clones. The Docker build step is responsible for fetching or installing the pinned runtime artifact.

The entrypoint script handles:

  1. Validating required environment variables and files
  2. Materializing env-backed secret files when a runtime expects file-based auth
  3. Starting the runtime process(es)

For a single runtime, the compiler pre-places config and workspace files into final runtime paths under container/rootfs/. The entrypoint stays minimal:

  • Validate required env vars
  • Validate compiled config exists at the expected path
  • Write env-backed secret files when needed
  • exec the runtime’s start command
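The single-runtime entrypoint logic can be sketched as follows. Paths, variable names, and the start command are assumptions; here the command comes from "$@" and is stubbed with echo so the sketch can be exercised directly:

```shell
# Illustrative single-runtime entrypoint; not the generated script.
cat > /tmp/entrypoint-demo.sh <<'EOF'
#!/bin/sh
set -eu

# 1. Validate required env vars.
: "${ANTHROPIC_API_KEY:?ANTHROPIC_API_KEY is required}"

# 2. Validate the compiled config exists at the expected path.
[ -f "$CONFIG_PATH" ] || { echo "missing compiled config: $CONFIG_PATH" >&2; exit 1; }

# 3. Write env-backed secret files when needed (owner-readable only).
if [ -n "${SEARCH_API_KEY:-}" ]; then
  umask 077
  printf '%s' "$SEARCH_API_KEY" > "$SECRET_FILE"
fi

# 4. exec the runtime's start command so it replaces the shell
#    and receives signals directly.
exec "$@"
EOF
chmod +x /tmp/entrypoint-demo.sh

# Exercise the sketch with a stub config and `echo` standing in
# for the runtime start command; prints "runtime-started".
printf '{}' > /tmp/demo-config.json
CONFIG_PATH=/tmp/demo-config.json SECRET_FILE=/tmp/demo-secret \
  ANTHROPIC_API_KEY=placeholder SEARCH_API_KEY=placeholder-search \
  /tmp/entrypoint-demo.sh echo runtime-started
```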

For multiple runtimes in one container, the entrypoint:

  • Validates required env and config for each target
  • Writes env-backed secret files for each target
  • Starts each runtime process
  • Traps signals and forwards them to all child processes
  • Waits for all processes
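The supervision loop above can be sketched in POSIX shell. The runtime start commands are stand-ins (sleep here); a real entrypoint would launch each runtime's declared start command:

```shell
# Illustrative multi-runtime supervision: start each runtime in the
# background, forward termination signals, wait for all children.
set -eu
pids=""

start_runtime() {
  "$@" &
  pids="$pids $!"
}

# Forward TERM/INT to every child so runtimes shut down together.
forward_signals() {
  for pid in $pids; do kill -TERM "$pid" 2>/dev/null || true; done
}
trap forward_signals TERM INT

# One process per distinct runtime (stand-ins here).
start_runtime sleep 1
start_runtime sleep 1

# The container stays up until the last runtime process exits.
wait
echo "all runtimes exited" > /tmp/supervisor-status
```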

Project Type               Container Behavior
Single agent               One runtime process, one config
Agent with subagents       One runtime process — the runtime manages subagent delegation
Team on one runtime        One runtime process with multi-agent config, or one process per agent
Team on multiple runtimes  One process group, one runtime process per distinct runtime

The compiler emits a .env.example listing all required and optional environment variables:

  • Secrets declared in manifests (e.g. SEARCH_API_KEY)
  • Model auth variables for providers using api_key auth (e.g. ANTHROPIC_API_KEY)
  • Surface auth variables for declared communication surfaces (e.g. DISCORD_BOT_TOKEN, TELEGRAM_BOT_TOKEN, SLACK_BOT_TOKEN, SLACK_APP_TOKEN)
  • Runtime auth variables (e.g. OPENCLAW_GATEWAY_TOKEN)
  • Variables the entrypoint or runtime expects

Actual secret values are never emitted. The .env.example contains variable names with empty values and comments.
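An emitted .env.example might look roughly like this (variable names follow the examples above; the exact comments and grouping are assumptions):

```shell
# Secrets declared in manifests
SEARCH_API_KEY=

# Model auth (api_key providers)
ANTHROPIC_API_KEY=

# Surface auth for declared communication surfaces
DISCORD_BOT_TOKEN=

# Runtime auth
OPENCLAW_GATEWAY_TOKEN=
```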

If a runtime expects secret file references in its config, the adapter declares env-to-file bindings and the entrypoint materializes them before startup.

Spawnfile manages runtime and model auth through local auth profiles. This keeps secrets out of the build and injects them only at run time.

The primary happy path is spawnfile auth sync, which reads model auth intent from the project’s manifests and imports matching local credentials into a named profile:

spawnfile auth sync fixtures/single-agent --profile dev --env-file ./.env

This reads the declared auth methods on each model target and surface, then imports the matching material. For example, if the manifest declares auth.method: claude-code, the sync imports your local Claude Code CLI credentials. If it declares auth.method: api_key, it reads the key from the provided env file.

Lower-level commands are available for manual profile editing:

# Import a .env file into a profile
spawnfile auth import-env --profile dev --env-file ./.env
# Import Claude Code CLI credentials
spawnfile auth import-claude-code --profile dev
# Import Codex CLI credentials
spawnfile auth import-codex --profile dev
# List all auth profiles
spawnfile auth list
# Show details of a profile
spawnfile auth show --profile dev

Method        What Gets Imported
api_key       Provider API key from env file
claude-code   Local Claude Code CLI credential store
codex         Local Codex CLI credential store
none          Nothing — used for local models

claude-code and codex imports mount existing local CLI credential stores into the runtime home directories when spawnfile run starts the container. api_key is the primary path for provider API keys passed as environment variables.

The intended flow uses spawnfile build and spawnfile run for the happy path:

# Sync declared model auth into a local profile
spawnfile auth sync fixtures/single-agent --profile dev --env-file ./.env
# Compile and build the container
spawnfile build fixtures/single-agent --out ./bundle/single-agent --tag my-agent
# Run with the local auth profile
spawnfile run fixtures/single-agent --out ./bundle/single-agent --tag my-agent --auth-profile dev

For teams:

spawnfile auth sync fixtures/multi-runtime-team --profile dev --env-file ./.env
spawnfile build fixtures/multi-runtime-team --out ./bundle/team --tag my-team
spawnfile run fixtures/multi-runtime-team --out ./bundle/team --tag my-team --auth-profile dev

Same flow regardless of project complexity. One compile, one build, one run.

spawnfile build stays secrets-free by default. It compiles the project and then runs docker build against the emitted output directory. The generated Dockerfile installs pinned compiled runtime artifacts — it does not rebuild runtime sources during image build.

spawnfile run is the auth-aware wrapper over docker run. It validates declared model auth before container startup and mounts the right credential material from the selected profile.

Manual Docker remains valid against the compile output:

spawnfile compile fixtures/single-agent --out ./bundle/single-agent
cd ./bundle/single-agent
docker build -t my-agent .
cp .env.example .env
# Fill in secret values...
docker run --env-file .env -p 18789:18789 my-agent

The compile report includes a container section:

{
  "container": {
    "runtimes_installed": ["openclaw", "picoclaw"],
    "dockerfile": "Dockerfile",
    "entrypoint": "entrypoint.sh",
    "env_example": ".env.example",
    "secrets_required": ["SEARCH_API_KEY", "ANTHROPIC_API_KEY"],
    "ports": [3000]
  }
}
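Scripts can read this section to, for example, list the env vars a container requires before running it. A dependency-free sketch using grep (jq would be the more robust choice); the report content is stubbed here:

```shell
# Stub a report with the documented shape.
cat > /tmp/spawnfile-report.json <<'EOF'
{
  "container": {
    "secrets_required": ["SEARCH_API_KEY", "ANTHROPIC_API_KEY"],
    "ports": [3000]
  }
}
EOF

# List the required secret names (uppercase quoted strings);
# prints SEARCH_API_KEY and ANTHROPIC_API_KEY.
grep -o '"[A-Z][A-Z_]*"' /tmp/spawnfile-report.json | tr -d '"'
```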

These are out of scope for v0.1:

  • Docker Compose generation for multi-container topologies
  • Orchestration (Kubernetes, ECS, Fly, etc.)
  • Image publishing and registry
  • Runtime-native auth bootstrap (onboarding flows stay manual)
  • HEALTHCHECK instructions or readiness contracts
  • Volume management and persistence strategy
  • Network topology between containers
  • CI/CD integration