How to Safely Run OpenClaw (formerly Clawdbot and Moltbot) in Local Docker
Running OpenClaw directly on your host system is a bad idea. The service has full access to the filesystem and everything inside the environment where it runs. That’s powerful — and potentially destructive. Treat it like any other automation tool with shell-level capabilities.
The goal here is simple:
- run OpenClaw in isolation
- keep persistent data
- add SSH for debugging and development
- keep everything reproducible via docker-compose
This setup is intended for controlled local environments.
1. Directory Structure
$ tree -L 3 ./
./
|-- Dockerfile.ssh        <--- patch on top of default Dockerfile
|-- docker-compose.yml
|-- openclaw-01           <--- persistent container data
|-- openclaw-git          <--- git clone of https://github.com/openclaw/openclaw
`-- openclaw-src          <--- default structure
    `-- openclaw
        |-- openclaw.json
        `-- workspace
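The layout above can be scaffolded in one step. A minimal sketch (the repository URL is the one from the tree; `openclaw-01` holds the persistent data):

```shell
# Create the persistent-data and config directories used in the rest of this post.
mkdir -p openclaw-01/openclaw
mkdir -p openclaw-src/openclaw/workspace

# Clone the upstream sources next to them (commented out here; run it once):
# git clone https://github.com/openclaw/openclaw openclaw-git
```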
2. Basic openclaw.json
A minimal configuration that starts a fresh gateway instance and waits for bootstrap (openclaw onboard):
$ cat openclaw-src/openclaw/openclaw.json
{
  "gateway": {
    "mode": "local",
    "controlUi": {
      "allowedOrigins": ["*"],
      "allowInsecureAuth": true
    },
    "remote": {
      "token": "MyPassword"
    },
    "auth": {
      "mode": "token",
      "token": "MyPassword"
    }
  }
}
No external dependencies are required to start.
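Before wiring the file into the container, it's worth confirming it actually parses. A quick sketch using `python3 -m json.tool` (any JSON validator works; `jq` is installed into the image later anyway):

```shell
# Write the minimal config from above and check that it is valid JSON.
mkdir -p openclaw-src/openclaw
cat > openclaw-src/openclaw/openclaw.json <<'EOF'
{
  "gateway": {
    "mode": "local",
    "controlUi": { "allowedOrigins": ["*"], "allowInsecureAuth": true },
    "remote": { "token": "MyPassword" },
    "auth": { "mode": "token", "token": "MyPassword" }
  }
}
EOF
python3 -m json.tool openclaw-src/openclaw/openclaw.json > /dev/null && echo "config OK"
```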
3. Dockerfile Extension (Dockerfile.ssh)
Instead of modifying the upstream image, we extend it. Key points:
- additional npm packages (axios, cheerio)
- SSH server
- common utilities (cron, jq, mc, lynx, poppler-utils, etc.)
- optional root execution for debugging
FROM openclaw-lc:0.0.3

USER node
WORKDIR /app
RUN pnpm add -w \
      axios \
      cheerio

USER root
RUN chown node:node /app
RUN apt-get update && \
    apt-get install -y --no-install-recommends \
      openssh-server \
      cron \
      ca-certificates \
      jq \
      mc \
      lynx \
      sudo \
      poppler-utils \
      mailutils \
    && mkdir -p /var/run/sshd \
    && apt-get clean \
    && rm -rf /var/lib/apt/lists/*

# Base SSH config (non-strict)
RUN sed -i 's/#PermitRootLogin prohibit-password/PermitRootLogin yes/' /etc/ssh/sshd_config && \
    sed -i 's/#PasswordAuthentication yes/PasswordAuthentication yes/' /etc/ssh/sshd_config && \
    sed -i 's/#MaxAuthTries.*/MaxAuthTries 20/' /etc/ssh/sshd_config && \
    sed -i 's/#MaxSessions.*/MaxSessions 50/' /etc/ssh/sshd_config

# Fix to run commands as root
RUN ln -s /home/node/.openclaw /root/.openclaw && \
    ln -s /app/openclaw.mjs /bin/openclaw

# NOTE: with `bash -c`, the first CMD element arrives as $0 and the rest as $@,
# so the final exec must forward both ("$0" "$@"), not just "$@".
ENTRYPOINT ["/bin/bash","-c", "\
  if [ -n \"$SSH_ROOT_PASSWORD\" ]; then \
    echo \"root:$SSH_ROOT_PASSWORD\" | chpasswd; \
    echo '[entrypoint] Root SSH password set'; \
  fi; \
  \
  if case \"${SSH_SAFE,,}\" in 1|true|yes|on) true ;; *) false ;; esac; then \
    echo '[entrypoint] SSH safe mode enabled'; \
    echo 'ForceCommand /app/openclaw.mjs $SSH_ORIGINAL_COMMAND' >> /etc/ssh/sshd_config && \
    echo 'PermitTTY yes' >> /etc/ssh/sshd_config && \
    echo 'AllowTcpForwarding no' >> /etc/ssh/sshd_config && \
    echo 'X11Forwarding no' >> /etc/ssh/sshd_config; \
  fi; \
  \
  /usr/sbin/cron & \
  /usr/sbin/sshd & \
  exec \"$0\" \"$@\" \
"]
CMD ["node", "dist/index.js", "gateway", "--allow-unconfigured"]
Why SSH? In many cases OpenClaw runs locally, but it can also run on a separate machine — for example, on a dedicated host such as an Nvidia Spark box or a VM. Direct shell access simplifies inspection, debugging, and interaction with the runtime environment.
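The `SSH_SAFE` toggle in the entrypoint accepts several spellings. Here is a standalone sketch of the same check, written with `tr` instead of bash's `${VAR,,}` so it also runs under plain `sh`; the `is_enabled` function name is mine, not part of OpenClaw:

```shell
# Truthy-flag check equivalent to the entrypoint's `case "${SSH_SAFE,,}" in ...`:
# 1/true/yes/on (any case) enable the feature, everything else disables it.
is_enabled() {
  v=$(printf '%s' "$1" | tr '[:upper:]' '[:lower:]')
  case "$v" in
    1|true|yes|on) return 0 ;;
    *) return 1 ;;
  esac
}

is_enabled "YES" && echo "safe mode: on"
is_enabled "no"  || echo "safe mode: off"
```

Accepting several spellings matters here because compose may normalize unquoted YAML values such as `no` before they reach the container.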
4. docker-compose.yml
Everything is centralized in one place:
services:
  openclaw-gateway-01:
    # image: openclaw-lc:0.0.3
    # build:
    #   context: ./openclaw-git/
    #   dockerfile: Dockerfile
    image: openclaw-lc-ssh:0.2.1
    build:
      context: ./
      dockerfile: Dockerfile.ssh
    container_name: openclaw-gateway-01
    environment:
      HOME: /home/node
      TERM: xterm-256color
      NODE_ENV: production
      SSH_ROOT_PASSWORD: MyPassword
      SSH_SAFE: "no"   # quoted so YAML does not coerce it to a boolean
      OPENCLAW_NO_RESPAWN: 1
    privileged: true
    volumes:
      - /sys/fs/cgroup:/sys/fs/cgroup:ro
      - "./openclaw-01/openclaw:/home/node/.openclaw"
    ports:
      - "18100:18789"
      - "18122:22"
    init: true
    command:
      [
        "node",
        "dist/index.js",
        "gateway",
        "--bind", "lan",
        "--port", "18789",
        "--allow-unconfigured"
      ]
    restart: unless-stopped

  ollama:
    image: ollama/ollama:latest
    container_name: ollama
    ports:
      - "11434:11434"
    volumes:
      - "./openclaw-01/ollama-data:/root/.ollama/"
    environment:
      - OLLAMA_ALLOW_PULL=true
    restart: unless-stopped

  open-webui:
    image: ghcr.io/open-webui/open-webui:main
    container_name: open-webui
    depends_on:
      - ollama
    environment:
      - 'OLLAMA_BASE_URL=http://ollama:11434'
    ports:
      - "3000:8080"
    volumes:
      - "./openclaw-01/openwebui-data:/app/backend/data"
    restart: unless-stopped
5. Local LLM Stack (Ollama + Open WebUI)
The compose file also includes an example setup for running Ollama together with Open WebUI. This allows you to run everything fully locally and experiment without external APIs. If required, you can restrict outbound traffic at the firewall level to further isolate the environment.
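That firewall-style isolation can also be done inside Compose itself rather than on the host. A sketch of an override file (the `llm-internal` network name is my choice; `internal: true` is the Compose option that blocks outbound traffic while keeping container-to-container links intact):

```yaml
# docker-compose.override.yml (sketch): cut ollama/open-webui off from the internet.
# Note: with no outbound access, `ollama pull` will fail, so pull models first.
networks:
  llm-internal:
    internal: true

services:
  ollama:
    networks: [llm-internal]
  open-webui:
    networks: [llm-internal]
```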
6. Running the Stack
For quick startup and to keep the session persistent, I often use screen.
$ screen -Rda moltbolt
$ docker compose -f docker-compose.yml up
[+] Running 4/4
 ✔ Network moltbolt_default        Created   0.0s
 ✔ Container openclaw-gateway-01   Created   0.1s
 ✔ Container ollama                Created   0.0s
 ✔ Container open-webui            Created   0.0s
Attaching to ollama, open-webui, openclaw-gateway-01
...
This keeps the stack running even if you detach from the terminal.
7. Login and Initial Setup
$ ssh root@127.0.0.1 -t -p 18122
root@127.0.0.1's password:
root@daadbedd2383:~# openclaw onboard
🦞 OpenClaw 2026.3.2 (unknown) — Ah, the fruit tree company! 🍎
▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄
██░▄▄▄░██░▄▄░██░▄▄▄██░▀██░██░▄▄▀██░████░▄▄▀██░███░██
██░███░██░▀▀░██░▄▄▄██░█░█░██░█████░████░▀▀░██░█░█░██
██░▀▀▀░██░█████░▀▀▀██░██▄░██░▀▀▄██░▀▀░█░██░██▄▀▄▀▄██
▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀
🦞 OPENCLAW 🦞
┌ OpenClaw onboarding
...
8. Web UI Access
After onboarding, open your browser:
http://127.0.0.1:18100
Conclusion
This is not a production-ready setup; it is a controlled local playground for experimentation and development.
Useful. Powerful. Still a toy 😉