How to Safely Run OpenClaw (Ex-Clawd & MoltBolt)

Running OpenClaw directly on your host system is risky. The service has full access to your filesystem and the environment in which it runs. This is powerful — but potentially destructive. Treat it like any other automation tool with shell-level capabilities.

Modern AI agents are designed to read files, execute commands, and interact with network services. While this makes them extremely useful for automation, it also means that a misconfiguration, a prompt injection, or a malicious extension could compromise the entire system.

Because of this, the safest approach is to run such tools inside an isolated environment where mistakes cannot easily damage the host system.

The setup goal is simple:

  • Run OpenClaw in isolation
  • Keep persistent data
  • Add SSH for debugging and development
  • Maintain reproducibility via docker-compose

This configuration is intended for controlled local environments where you want flexibility for development but still maintain a reasonable level of isolation.

The approach described below uses Docker containers, persistent volumes and a minimal configuration that allows experimentation without giving the agent unrestricted access to the host operating system.

1. Directory Structure

The working directory contains three important parts: the OpenClaw source code, persistent runtime data, and the Docker configuration used to build and run the container.

$ tree -L 1 ./
./
|-- Dockerfile.ssh           <--- patch on top of default Dockerfile
|-- docker-compose.yml
|-- openclaw-01              <--- persistent container data
|-- openclaw-git             <--- git clone from https://github.com/openclaw/openclaw
`-- openclaw-src             <--- default structure
    `-- openclaw
        |-- openclaw.json
        `-- workspace

This layout keeps runtime data separate from the application source code. The container can be rebuilt at any time without losing user data or configuration.
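If you are recreating this layout from scratch, the directories can be sketched with a few shell commands. The clone URL comes from the tree listing above; adjust names to taste:

```shell
# Recreate the layout from the tree listing (run inside the project root).
mkdir -p openclaw-01/openclaw              # persistent container data
mkdir -p openclaw-src/openclaw/workspace   # default config structure
touch openclaw-src/openclaw/openclaw.json  # config file, filled in in section 2

# Source checkout, as referenced in the listing:
# git clone https://github.com/openclaw/openclaw openclaw-git
```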

2. Minimal openclaw.json

The configuration below enables the gateway in local mode and uses a simple token authentication mechanism. This is sufficient for local development environments.

$ cat openclaw-src/openclaw/openclaw.json
{
  "gateway": {
    "mode": "local",
    "controlUi": {
      "allowedOrigins": [
        "*"
      ],
      "allowInsecureAuth": true
    },
    "remote": {
      "token": "MyPassword"
    },
    "auth": {
      "mode": "token",
      "token": "MyPassword"
    }
  }
}

For real deployments you would normally restrict allowed origins and avoid permissive settings, but for isolated local testing this configuration keeps things simple.
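One small hardening step that costs nothing: swap the placeholder token for a random one. The sketch below assumes `jq` and `openssl` are available and uses the path from section 1; the JSON keys match the config shown above:

```shell
# Patch a random token into both token fields of openclaw.json.
CFG=openclaw-src/openclaw/openclaw.json
mkdir -p "$(dirname "$CFG")"
# Minimal skeleton if the file does not exist yet (same keys as above).
[ -f "$CFG" ] || echo '{"gateway":{"remote":{},"auth":{"mode":"token"}}}' > "$CFG"
TOKEN=$(openssl rand -hex 24)               # 48 hex characters
jq --arg t "$TOKEN" \
   '.gateway.remote.token = $t | .gateway.auth.token = $t' \
   "$CFG" > "$CFG.tmp" && mv "$CFG.tmp" "$CFG"
```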

3. Dockerfile Extension (Dockerfile.ssh)

The base OpenClaw container is extended to include additional tools useful for debugging and experimentation. SSH access makes it possible to interact with the environment directly.

Several small utilities are also added to simplify working inside the container.

FROM openclaw-lc:0.0.3

USER node
WORKDIR /app

RUN pnpm add -w \
    axios \
    cheerio

USER root

RUN chown node:node /app

RUN export DEBIAN_FRONTEND=noninteractive && \
    apt-get update && \
    apt-get install -y --no-install-recommends \
      openssh-server \
      cron \
      ca-certificates \
      jq \
      mc \
      lynx \
      sudo \
      poppler-utils \
      mailutils \
    && mkdir -p /var/run/sshd \
    && apt-get clean \
    && rm -rf /var/lib/apt/lists/*

RUN sed -i 's/#PermitRootLogin prohibit-password/PermitRootLogin yes/' /etc/ssh/sshd_config && \
    sed -i 's/#PasswordAuthentication yes/PasswordAuthentication yes/' /etc/ssh/sshd_config && \
    sed -i 's/#MaxAuthTries.*/MaxAuthTries 20/' /etc/ssh/sshd_config && \
    sed -i 's/#MaxSessions.*/MaxSessions 50/' /etc/ssh/sshd_config

RUN ln -s /home/node/.openclaw /root/.openclaw && \
    ln -s /app/openclaw.mjs /bin/openclaw

ENTRYPOINT ["/bin/bash","-c", "\
  if [ -n \"$SSH_ROOT_PASSWORD\" ]; then \
    echo \"root:$SSH_ROOT_PASSWORD\" | chpasswd; \
    echo '[entrypoint] Root SSH password set'; \
  fi; \
  if case \"${SSH_SAFE,,}\" in 1|true|yes|on) true ;; *) false ;; esac; then \
    echo '[entrypoint] SSH safe mode enabled'; \
    echo 'ForceCommand /app/openclaw.mjs $SSH_ORIGINAL_COMMAND' >> /etc/ssh/sshd_config && \
    echo 'PermitTTY yes' >> /etc/ssh/sshd_config && \
    echo 'AllowTcpForwarding no' >> /etc/ssh/sshd_config && \
    echo 'X11Forwarding no' >> /etc/ssh/sshd_config; \
  fi; \
  /usr/sbin/cron & \
  /usr/sbin/sshd & \
  exec \"$@\" "]

CMD ["node", "dist/index.js", "gateway", "--allow-unconfigured"]

The SSH configuration can also operate in a restricted mode where the OpenClaw process is forced as the executed command. This helps limit what remote sessions can do inside the container.
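Assuming the base image `openclaw-lc:0.0.3` is already available locally, the extended image can be built with a tag matching the one used in the compose file:

```shell
# Build the SSH-enabled image on top of the base image.
docker build -f Dockerfile.ssh -t openclaw-lc-ssh:0.2.1 .
```

Strictly speaking this step is optional: docker compose will build the image itself via the `build:` section, but building it explicitly makes iteration on the Dockerfile faster.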

4. docker-compose.yml

The entire stack is defined in a single compose file. This makes the environment easy to recreate and ensures that all components start together.

services:

  openclaw-gateway-01:
    image: openclaw-lc-ssh:0.2.1
    build:
      context: ./
      dockerfile: Dockerfile.ssh
    container_name: openclaw-gateway-01
    environment:
      HOME: /home/node
      TERM: xterm-256color
      NODE_ENV: production
      SSH_ROOT_PASSWORD: MyPassword
      SSH_SAFE: "no"
      OPENCLAW_NO_RESPAWN: "1"
    privileged: true
    volumes:
      - /sys/fs/cgroup:/sys/fs/cgroup:ro
      - "./openclaw-01/openclaw:/home/node/.openclaw"
    ports:
      - "18100:18789"
      - "18122:22"
    init: true
    command:
      [
        "node",
        "dist/index.js",
        "gateway",
        "--bind",
        "lan",
        "--port",
        "18789",
        "--allow-unconfigured"
      ]
    restart: unless-stopped

  ollama:
    image: ollama/ollama:latest
    container_name: ollama
    ports:
      - "11434:11434"
    volumes:
      - "./openclaw-01/ollama-data:/root/.ollama/"
    environment:
      - OLLAMA_ALLOW_PULL=true
    restart: unless-stopped

  open-webui:
    image: ghcr.io/open-webui/open-webui:main
    container_name: open-webui
    depends_on:
      - ollama
    environment:
      - 'OLLAMA_BASE_URL=http://ollama:11434'
    ports:
      - "3000:8080"
    volumes:
      - "./openclaw-01/openwebui-data:/app/backend/data"
    restart: unless-stopped

With this configuration the OpenClaw gateway, local language model runtime and web interface all run together in a single reproducible environment.
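Before the first start you can sanity-check the compose file, and after startup verify that the published ports accept connections. The port numbers come from the mappings above; only TCP reachability is checked here, no application endpoints are assumed:

```shell
# Validate the compose file without starting anything.
docker compose -f docker-compose.yml config --quiet && echo "compose file OK"

# After startup: check that the published ports accept connections.
nc -z 127.0.0.1 18100 && echo "gateway port open"
nc -z 127.0.0.1 18122 && echo "ssh port open"
nc -z 127.0.0.1 11434 && echo "ollama port open"
```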

5. Local LLM Stack (Ollama + Open WebUI)

This setup allows you to run Ollama and Open WebUI fully locally without relying on external APIs. Running models locally can be useful for privacy, offline experimentation or avoiding API limits.
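Ollama starts with no models installed; one can be pulled into the persistent volume through the running container. The model name below is only an example, substitute whatever fits your hardware:

```shell
# Pull a model into ./openclaw-01/ollama-data (persists across restarts).
docker exec ollama ollama pull llama3.2

# Quick check that the model is available:
docker exec ollama ollama list
```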

Optional firewall rules can further isolate outbound traffic if you want the entire stack to remain self-contained.

If you want more control over firewall behaviour on Docker hosts, take a look at this article: Firewall Control on Docker Hosts Using the DOCKER-USER Chain.
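As a rough sketch of that approach, outbound traffic from the compose network can be dropped in the DOCKER-USER chain. The subnet below is an assumption; look up the real one first with `docker network inspect moltbolt_default`:

```shell
# Allow container-to-container traffic inside the subnet, drop everything else.
# NOTE: 172.18.0.0/16 is an assumed subnet; verify it on your host first.
iptables -I DOCKER-USER -s 172.18.0.0/16 -j DROP
iptables -I DOCKER-USER -s 172.18.0.0/16 -d 172.18.0.0/16 -j ACCEPT
```

Because `-I` inserts at the top of the chain, the ACCEPT rule ends up before the DROP rule. Note that with these rules in place `ollama pull` can no longer reach the internet, so pull any models you need before enabling them.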

6. Running the Stack

Once everything is prepared, start the environment using Docker Compose.

$ screen -Rda moltbolt
$ docker compose -f docker-compose.yml up
[+] Running 4/4
 ✔ Network moltbolt_default      Created               0.0s
 ✔ Container openclaw-gateway-01 Created               0.1s
 ✔ Container ollama              Created               0.0s
 ✔ Container open-webui          Created               0.0s
Attaching to ollama, open-webui, openclaw-gateway-01
...

The containers will start in the foreground so you can observe logs during the initial setup.
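For day-to-day use you will usually want the stack detached instead, with logs tailed on demand:

```shell
# Run in the background and follow logs for the gateway only.
docker compose -f docker-compose.yml up -d
docker compose -f docker-compose.yml logs -f openclaw-gateway-01
```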

7. Login and Initial Setup

After the gateway starts you can connect to the container via SSH and perform the OpenClaw onboarding process.

$ ssh root@127.0.0.1 -t -p 18122
root@127.0.0.1's password:
root@daadbedd2383:~# openclaw onboard

🦞 OpenClaw 2026.3.2 (unknown) — Ah, the fruit tree company! 🍎

▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄
██░▄▄▄░██░▄▄░██░▄▄▄██░▀██░██░▄▄▀██░████░▄▄▀██░███░██
██░███░██░▀▀░██░▄▄▄██░█░█░██░█████░████░▀▀░██░█░█░██
██░▀▀▀░██░█████░▀▀▀██░██▄░██░▀▀▄██░▀▀░█░██░██▄▀▄▀▄██
▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀

                  🦞 OPENCLAW 🦞

┌  OpenClaw onboarding
...

The onboarding process initializes the workspace and prepares the environment for running agents and skills.
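Once onboarding is done, the SSH port is also handy for one-shot commands. With `SSH_SAFE` enabled, the `ForceCommand` from the Dockerfile turns every session into an OpenClaw invocation; with it disabled you get a normal root shell. The `--version` flag below is only illustrative:

```shell
# One-shot command over SSH (normal mode, SSH_SAFE="no"):
ssh -p 18122 root@127.0.0.1 'openclaw --version'

# With SSH_SAFE enabled, the remote command is handed to OpenClaw itself
# via ForceCommand, so a plain `ssh -p 18122 root@127.0.0.1` drops you
# straight into the agent instead of a shell.
```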

8. Web UI Access

After onboarding, open your browser at:

http://127.0.0.1:18100

The web interface allows you to interact with the gateway, monitor activity and test workflows.

Conclusion

This setup is intended for controlled local experimentation and development. Running OpenClaw in a containerized environment helps reduce risk while still allowing full flexibility for testing and development.

Useful. Powerful. Still a toy 😉

Human Logic, AI Syntax... Note on Content: I'm a Systems Engineer, not a native English writer. To ensure my technical ideas are clear and accessible, I use AI tools to polish the grammar and style. The workflow is simple: I provide the logic, the code, and the real-world experience. The AI handles the "English-to-Human" translation layer. If you find a bug, that's on me. If you find a perfectly placed comma, that's probably the AI.
