Coding agents in secured VS Code dev containers

By Daniel Demmel, software engineer with 21 years of professional experience

Why do we need sandboxed agents?

Simon Willison describes the lethal trifecta as the combination of:

  1. Access to private data – credentials, source code, anything worth stealing
  2. Exposure to untrusted content – web pages, npm packages, user files that could contain prompt injections
  3. The ability to communicate externally – a channel through which data can be exfiltrated or harmful actions triggered

When all three are present, you've got a recipe for trouble. A malicious npm package or a cleverly crafted webpage could inject prompts that convince the agent to execute harmful commands. Even without malicious intent, the agent could make mistakes with destructive consequences.

The goal isn't to eliminate risk entirely – it's to limit the blast radius. If something goes wrong, we want it contained to the sandbox rather than having full access to your machine, credentials, and the ability to push malicious code to your repositories.

The future might be cloud, but it's not here yet

Claude Code Web is useful for exploration, but it's not yet capable or flexible enough to run the specific test harnesses that let Claude verify its work. This is of course not specific to CC Web and is solvable, but I'm not ready to pay full LLM API inference costs plus a custom containerised infra provider, and to invest in working around their quirks.

VS Code dev container is team sharing heaven

I'd like my teammates to benefit from my tooling side quests. For example, I created a browser testing Skill using browser-debugger-cli (which wraps Chrome DevTools access in a CLI rather than an MCP server for agents), but it only helps if the scripts are zero setup – otherwise Claude Code will flail around in other people's sessions until they invest in setting up and debugging the tools themselves.

VS Code dev containers aren't designed for coding agents – the implementation prioritises convenience shortcuts over sandboxing – but they're a perfect starting point for reproducible dev environments. Another important piece for me was wrapping the IDE "backend" in a container, to eliminate false positive linter / type errors caused by people forgetting to install new dependencies, and the like.

I'd know – I've been harping on about containerising dev environments for more than a decade! Dev containers have been around for years too; I gave them a go a few times but never got to a point where they were good enough. Now, finally – with a bit of LLM help to keep up the momentum, and with the implementation's kinks ironed out over the years – I've managed to set things up in a way that makes the dev experience better rather than compromising it!

The basic structure

A minimal secured dev container setup needs three files:

.devcontainer/devcontainer.json – the main configuration that VS Code reads:

{
  "name": "Secured Dev Container",
  "dockerComposeFile": "docker-compose.yml",
  "service": "app",
  "workspaceFolder": "/app",
  "remoteUser": "vscode",
  "shutdownAction": "stopCompose",
  "remoteEnv": {
    "SSH_AUTH_SOCK": "",
    "GPG_AGENT_INFO": "",
    "BROWSER": "",
    "VSCODE_IPC_HOOK_CLI": null,
    "VSCODE_GIT_IPC_HANDLE": null,
    "GIT_ASKPASS": null,
    "VSCODE_GIT_ASKPASS_MAIN": null,
    "VSCODE_GIT_ASKPASS_NODE": null,
    "VSCODE_GIT_ASKPASS_EXTRA_ARGS": null,
    "REMOTE_CONTAINERS_IPC": null,
    "REMOTE_CONTAINERS_SOCKETS": null,
    "REMOTE_CONTAINERS_DISPLAY_SOCK": null,
    "WAYLAND_DISPLAY": null
  },
  // postStartCommand: clean up sockets created before VS Code attaches
  "postStartCommand": "find /tmp -maxdepth 2 \\( -name 'vscode-ssh-auth-*.sock' -o -name 'vscode-remote-containers-ipc-*.sock' -o -name 'vscode-remote-containers-*.js' \\) -delete 2>/dev/null || true",
  // IPC socket cleanup (vscode-ipc-*.sock, vscode-git-*.sock) is handled by a background
  // loop in the Docker Compose command – see the "Socket file deletion" section below.
  "customizations": {
    "vscode": {
      "settings": {
        "dev.containers.dockerCredentialHelper": false,
        "dev.containers.copyGitConfig": false
      }
    }
  }
}

.devcontainer/docker-compose.yml – defines the dev container and the Docker socket proxy. Note the socket cleanup loop in the command – this is where IPC socket deletion happens (explained in the VS Code IPC hardening section below):

services:
  docker-proxy:
    image: tecnativa/docker-socket-proxy:latest
    environment:
      # Read-only operations - allowed
      CONTAINERS: 1
      IMAGES: 1
      INFO: 1
      NETWORKS: 1
      VOLUMES: 1
      # Dangerous operations - blocked
      POST: 0
      BUILD: 0
      COMMIT: 0
      EXEC: 0
      SWARM: 0
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
    networks:
      - dev

  app:
    build:
      context: ..
      dockerfile: .devcontainer/Dockerfile
    cap_drop:
      - ALL
    security_opt:
      - no-new-privileges:true
    volumes:
      - ..:/app:cached
    environment:
      DOCKER_HOST: tcp://docker-proxy:2375
    networks:
      - dev
    depends_on:
      - docker-proxy
    # Socket cleanup loop: 10 passes at 30s intervals (~5 minutes) to catch all
    # VS Code IPC sockets, including late-created ones. Runs as a child of the
    # container's own bash process – not subject to VS Code lifecycle management.
    command: >
      bash -c '
      (for i in 1 2 3 4 5 6 7 8 9 10; do sleep 30;
        find /tmp -maxdepth 2 \( -name "vscode-ipc-*.sock" -o -name "vscode-git-*.sock" \) -delete 2>/dev/null;
      done) &
      sleep infinity'

networks:
  dev:
    external: true

.devcontainer/Dockerfile – crucially, without sudo, and with a shell hardening script (more on why in a moment):

FROM node:lts

# Use bash with pipefail
SHELL ["/bin/bash", "-o", "pipefail", "-c"]

# Install useful tools (sudo intentionally omitted for security)
RUN apt-get update && apt-get install -y --no-install-recommends \
    git \
    vim \
    ripgrep \
    fd-find \
    docker-cli \
    && rm -rf /var/lib/apt/lists/*

# Create non-root user
ARG USERNAME=vscode
ARG USER_UID=1000
ARG USER_GID=$USER_UID

RUN groupadd --gid $USER_GID $USERNAME \
    && useradd --uid $USER_UID --gid $USER_GID -m $USERNAME

USER $USERNAME

# Security hardening script – sourced from .bashrc before the interactive guard
# See the "VS Code IPC hardening" section for why this is needed
RUN mkdir -p /home/vscode/.config && cat << 'HARDEN' > /home/vscode/.config/security-harden.sh
unset VSCODE_IPC_HOOK_CLI VSCODE_GIT_IPC_HANDLE GIT_ASKPASS \
      VSCODE_GIT_ASKPASS_MAIN VSCODE_GIT_ASKPASS_NODE VSCODE_GIT_ASKPASS_EXTRA_ARGS \
      REMOTE_CONTAINERS_IPC REMOTE_CONTAINERS_SOCKETS REMOTE_CONTAINERS_DISPLAY_SOCK \
      WAYLAND_DISPLAY
export BROWSER= SSH_AUTH_SOCK= GPG_AGENT_INFO=
HARDEN

RUN sed -i '1i source ~/.config/security-harden.sh 2>/dev/null || true' ~/.bashrc

WORKDIR /app

CMD ["sleep", "infinity"]

The external network (dev in this example) allows your dev container to communicate with sibling services like databases or emulators that you might have running in other containers. Because it's declared external, Docker Compose won't create it for you – run docker network create dev once on the host before the first start.

Securing dev containers

Right, so here's where it gets interesting. VS Code dev containers are designed to be convenient, not secure. They actively work against you by injecting various – otherwise helpful – features that happen to be security holes when you're running an autonomous coding agent.

The threat model

What are we actually protecting against?

  • Malicious npm packages – supply chain attacks that execute arbitrary code during npm install (via the postinstall hook) or at runtime
  • Prompt injection – malicious content in files, URLs, or API responses that manipulates the agent into executing harmful commands
  • AI mistakes – even without malicious intent, the agent could make errors with destructive consequences

The goal is a sandbox where the agent can work freely while limiting the blast radius of any compromise.

Docker socket proxy – preventing container escape

The most obvious attack vector is Docker itself. With direct socket access, escaping the container is trivial:

docker run -it --privileged --pid=host -v /:/host alpine chroot /host

That's complete host access in a single command. Not ideal...

The Tecnativa docker-socket-proxy intercepts Docker API calls and blocks dangerous operations. With POST: 0 and EXEC: 0, the agent can still view container logs (useful for debugging sibling services) but can't create new containers or execute commands in existing ones.

What the agent can do:

  • docker ps – list running containers
  • docker logs <container> – view container logs
  • docker inspect <container> – inspect container details

What the agent cannot do:

  • docker run – create new containers
  • docker exec – execute commands in other containers
  • docker build – build images

Why not just remove Docker access entirely? Being able to view logs of sibling containers (postgres, emulators, etc.) is genuinely useful for debugging. The proxy preserves this capability while blocking escape vectors.

Privilege escalation prevention

This one has three parts:

No sudo – don't install it. A non-root user with sudo access can escalate to root and bypass container restrictions.

$ sudo -l
bash: sudo: command not found

If your agent needs another tool, add it to the Dockerfile instead.

Drop all Linux capabilities – cap_drop: [ALL] in docker-compose.yml removes all kernel capabilities from the container. The container only runs Node.js, git, and bash – none of which need special capabilities at runtime. If something breaks, specific capabilities can be added back with cap_add.

Prevent new privileges – security_opt: no-new-privileges:true prevents privilege escalation via setuid/setgid binaries. Combined with removing sudo, this ensures no process in the container can gain elevated privileges.

claude-code:
  cap_drop:
    - ALL
  security_opt:
    - no-new-privileges:true

Together these strengthen the container boundary itself – even if an attacker finds a setuid binary or a way to invoke a privileged operation, the kernel will refuse.

Git push prevention – no SSH keys

Malicious code could (force-)push itself to a remote repository, establishing persistence or spreading to other systems. The fix is straightforward: don't mount SSH keys into the container. You'll also need to prevent VS Code from injecting them – more on that shortly.

One credential you can't avoid is your agent's own – unless you're happy to log in every time – so keep it in a gitignored directory on the host:

# In .devcontainer/docker-compose.yml – no ~/.ssh mount
volumes:
  - ../.claude-docker/.claude.json:/home/vscode/.claude.json

What the agent can do:

  • git log, git status, git diff – full read access
  • git commit, git branch – local commits and branches
  • git stash, git checkout – local operations

What the agent cannot do:

  • git push – fails with SSH authentication error
  • git fetch from private repos – no credentials

Changes are still tracked by git, so you can review everything before pushing yourself. I find this an excellent compromise: the agent can use all the basic git functions but can't run destructive remote commands.
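The split is easy to demonstrate: local git plumbing needs no credentials at all – only talking to a remote does. A throwaway sketch (scratch repo and inline identity are made up for the demo):

```shell
# Local commits work with zero credentials – identity passed inline for the demo
tmp=$(mktemp -d) && cd "$tmp"
git init -q
git -c user.email=agent@example.invalid -c user.name=agent \
    commit -q --allow-empty -m "local work"

# History is fully usable without any SSH key mounted
git log --oneline

# Pushing would fail here anyway: no remote configured, no SSH keys available
```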

VS Code IPC hardening – the lesser-known attack surface

This is the sneaky one. VS Code's remote development model creates multiple Unix sockets in /tmp that enable communication between the container and host. Research by The Red Guild demonstrates these can be abused for container escape:

| Socket | Purpose | Attack vector |
|---|---|---|
| vscode-ssh-auth-*.sock | SSH agent forwarding | Use host SSH keys without authorisation |
| vscode-ipc-*.sock | CLI integration | Execute commands on host via the code CLI |
| vscode-remote-containers-ipc-*.sock | Host–container RPC | Extension command execution bridge |
| vscode-git-*.sock | Git extension IPC | Git credential access |

VS Code also injects environment variables pointing to these sockets and to helper scripts that can trigger actions on the host:

| Variable | Points to | Risk |
|---|---|---|
| VSCODE_IPC_HOOK_CLI | vscode-ipc-*.sock | Host command execution via the code CLI |
| VSCODE_GIT_IPC_HANDLE | vscode-git-*.sock (in /tmp/user/1000/) | Git credential access |
| GIT_ASKPASS / VSCODE_GIT_ASKPASS_* | VS Code's credential helper scripts | HTTPS Git credential leakage |
| REMOTE_CONTAINERS_IPC | vscode-remote-containers-ipc-*.sock | Extension command execution |
| REMOTE_CONTAINERS_SOCKETS | JSON array of socket paths | Enumerates multiple escape vectors |
| BROWSER | VS Code's browser.sh helper | Host-side execution via --openExternal |

The remoteEnv trap

The natural first instinct is to clear these in devcontainer.json's remoteEnv:

{
  "remoteEnv": {
    "SSH_AUTH_SOCK": "",
    "BROWSER": "",
    "VSCODE_IPC_HOOK_CLI": null
  }
}

This works for some variables (SSH_AUTH_SOCK, GPG_AGENT_INFO), but VS Code re-injects its own variables (BROWSER, VSCODE_IPC_HOOK_CLI, GIT_ASKPASS, etc.) when spawning new processes. You can verify this yourself:

# Inside the container, despite remoteEnv clearing these:
$ echo $BROWSER
/vscode/vscode-server/bin/linux-x64/.../bin/helpers/browser.sh

$ echo $VSCODE_IPC_HOOK_CLI
/tmp/vscode-ipc-<uuid>.sock

So remoteEnv alone is insufficient. We need a layered approach.

The .bashrc subtlety

The obvious fix seems to be clearing these variables in .bashrc. But there's a catch: coding agents like Claude Code invoke bash as a non-interactive login shell. You can verify this:

shopt login_shell    # on – it IS a login shell
echo $-              # hmtBc – no 'i' flag, NOT interactive

For login shells, bash sources ~/.profile, which sources ~/.bashrc. But Debian's default .bashrc has an interactive guard near the top:

# If not running interactively, don't do anything
case $- in
    *i*) ;;
      *) return;;
esac

This means anything after this guard is invisible to the coding agent. The fix is to source a hardening script before the guard – on line 1 of .bashrc.
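You can reproduce the guard's effect in isolation with a throwaway rc file standing in for ~/.bashrc (file name and echo markers are invented for the demo):

```shell
# Simulate Debian's interactive guard with a scratch rc file
tmp=$(mktemp -d)
printf '%s\n' \
  'case $- in *i*) ;; *) return;; esac' \
  'echo after-guard' > "$tmp/bashrc"

# Non-interactive shell (how a coding agent invokes bash): nothing prints,
# because the guard returns before reaching the echo
bash -c "source '$tmp/bashrc'"

# Insert a line at the very top, as the Dockerfile does with sed '1i'
sed -i '1i echo before-guard' "$tmp/bashrc"

# Now the same non-interactive shell prints "before-guard" – lines above the
# guard run for every shell, lines below it only for interactive ones
bash -c "source '$tmp/bashrc'"
```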

Three-layer defence

Layer 1 – remoteEnv (first line of defence, partially effective):

{
  "remoteEnv": {
    "SSH_AUTH_SOCK": "",
    "GPG_AGENT_INFO": "",
    "BROWSER": "",
    "VSCODE_IPC_HOOK_CLI": null,
    "VSCODE_GIT_IPC_HANDLE": null,
    "GIT_ASKPASS": null,
    "VSCODE_GIT_ASKPASS_MAIN": null,
    "REMOTE_CONTAINERS_IPC": null,
    "REMOTE_CONTAINERS_SOCKETS": null,
    "REMOTE_CONTAINERS_DISPLAY_SOCK": null,
    "WAYLAND_DISPLAY": null
  }
}

Layer 2 – Shell hardening script (primary defence, baked into Dockerfile):

# Security hardening script – sourced from .bashrc before the interactive guard
RUN mkdir -p /home/vscode/.config && cat << 'HARDEN' > /home/vscode/.config/security-harden.sh
# VS Code IPC sockets – can execute commands on the host
unset VSCODE_IPC_HOOK_CLI

# VS Code Git extension IPC – credential access via host
unset VSCODE_GIT_IPC_HANDLE \
      GIT_ASKPASS \
      VSCODE_GIT_ASKPASS_MAIN \
      VSCODE_GIT_ASKPASS_NODE \
      VSCODE_GIT_ASKPASS_EXTRA_ARGS

# Remote Containers extension IPC – host command execution bridge
unset REMOTE_CONTAINERS_IPC \
      REMOTE_CONTAINERS_SOCKETS \
      REMOTE_CONTAINERS_DISPLAY_SOCK

# GUI forwarding (low risk but unnecessary)
unset WAYLAND_DISPLAY

# Browser helper – can trigger actions on host via --openExternal
# Set to empty rather than unset to prevent fallback to defaults
export BROWSER=

# Agent forwarding – set to empty to prevent fallback to default socket paths
export SSH_AUTH_SOCK=
export GPG_AGENT_INFO=
HARDEN

# Source it BEFORE the interactive guard in .bashrc
RUN sed -i '1i source ~/.config/security-harden.sh 2>/dev/null || true' ~/.bashrc

Layer 3 – Socket file deletion (defence in depth):

Two mechanisms handle different socket creation timings:

  • postStartCommand in devcontainer.json catches early sockets (SSH auth, remote-containers) before VS Code attaches
  • A background cleanup loop in the Docker Compose command catches IPC and git sockets created during and after VS Code attach

{
  // In devcontainer.json – postStartCommand catches early sockets
  "postStartCommand": "find /tmp -maxdepth 2 \\( -name 'vscode-ssh-auth-*.sock' -o -name 'vscode-remote-containers-ipc-*.sock' -o -name 'vscode-remote-containers-*.js' \\) -delete 2>/dev/null || true"
}
# In docker-compose.yml – background loop catches IPC/git sockets.
# 10 passes at 30s intervals (~5 minutes) to catch all late-created sockets.
# Runs as a child of the container's own bash process, NOT a VS Code lifecycle command.
command: >
  bash -c '
  (for i in 1 2 3 4 5 6 7 8 9 10; do sleep 30;
    find /tmp -maxdepth 2 \( -name "vscode-ipc-*.sock" -o -name "vscode-git-*.sock" \) -delete 2>/dev/null;
  done) &
  sleep infinity'

Note -maxdepth 2 to catch sockets in /tmp/user/1000/ as well. vscode-ipc-*.sock and vscode-git-*.sock are not recreated after deletion – the IDE continues to work without them. However, VS Code creates IPC sockets at multiple times during startup – some 60+ seconds after attach. A single cleanup pass misses these late-created sockets. The 10-pass approach (every 30s for ~5 minutes) ensures all sockets are caught regardless of when VS Code creates them.
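Before trusting the cleanup with /tmp, the find invocation can be sanity-checked on a scratch directory (all paths and file names below are invented for the demo):

```shell
# Recreate a socket-like layout in a scratch dir
tmp=$(mktemp -d)
mkdir -p "$tmp/user"
touch "$tmp/vscode-ipc-aaaa.sock"       # depth 1 – matched by the pattern
touch "$tmp/user/vscode-git-bbbb.sock"  # depth 2 – matched by the pattern
touch "$tmp/user/keep.txt"              # doesn't match either glob

find "$tmp" -maxdepth 2 \
  \( -name 'vscode-ipc-*.sock' -o -name 'vscode-git-*.sock' \) -delete

ls -R "$tmp"   # only keep.txt survives – the globs spare everything else
```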

Why Docker Compose command instead of postAttachCommand? This was a hard-won lesson. VS Code's postAttachCommand is unreliable for background processes – VS Code appears to use cgroup-based cleanup that kills ALL processes spawned during lifecycle commands, regardless of nohup, setsid, double-fork, or any other daemonisation technique. The Docker Compose command runs as the container's own process tree, which is not subject to VS Code's lifecycle management – making it the right place for long-running background tasks.
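The survival of that backgrounded loop is plain POSIX process behaviour: a background child keeps running after its parent bash exits, because nothing in the container tears down its process tree. A tiny self-contained sketch of the same `( … ) & ` shape (marker file name invented):

```shell
tmp=$(mktemp -d)

# The parent bash -c exits immediately; the backgrounded subshell is
# reparented and finishes its work on its own – the same shape as the
# compose command's "(for i in …; do …; done) & sleep infinity" loop
bash -c "(sleep 0.2; echo survived > '$tmp/marker') &"

sleep 0.5
cat "$tmp/marker"   # → survived
```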

Why three layers? Each has limitations: remoteEnv is overridden by VS Code for its own variables; the shell hardening only applies to bash processes (not direct socket access); socket deletion has a window while the cleanup passes complete (30s gaps). Together, they ensure that standard tools, shell commands, and opportunistic discovery all fail.

What each mitigation does:

| Variable | Effect when cleared | Trade-off |
|---|---|---|
| SSH_AUTH_SOCK | SSH tools can't find the agent | Can't use host SSH keys |
| GPG_AGENT_INFO | GPG can't find the agent | Can't sign with host GPG keys |
| BROWSER | xdg-open / open fail | Links won't open in the host browser |
| VSCODE_IPC_HOOK_CLI | code command fails | Can't open files in VS Code from the terminal |
| GIT_ASKPASS / VSCODE_GIT_ASKPASS_* | Git HTTPS credential helper disabled | No HTTPS git auth (SSH already blocked) |
| VSCODE_GIT_IPC_HANDLE | Git extension IPC disabled | VS Code Git panel may lose some features |
| REMOTE_CONTAINERS_* | Extension IPC disabled | Minor feature loss in container management |

You also want to disable VS Code's credential injection:

{
  "customizations": {
    "vscode": {
      "settings": {
        "dev.containers.dockerCredentialHelper": false,
        "dev.containers.copyGitConfig": false
      }
    }
  }
}

Remaining risk: during the window before a cleanup pass completes, a targeted attack could discover and connect directly to vscode-ipc-*.sock sockets using VS Code's internal IPC protocol. After each pass the sockets are permanently removed, so to exploit this vector a pre-existing script would need to perform the full attack within seconds of container startup (otherwise the socket closes on it), which is unlikely.

Triggering actions in sibling containers

With read-only Docker access, docker exec is blocked. So how does the agent interact with other services?

The answer is HTTP endpoints. If your agent needs to trigger actions in a database container or restart a service, expose an HTTP endpoint for that action. This is actually better design anyway – explicit, logged, and rate-limitable.

For example, instead of docker exec postgres pg_dump, have a small HTTP service that accepts a request and runs the backup.

Accepted risks

Not everything can be locked down without making development impractical. These are the trade-offs:

| Risk | Impact | Why accepted |
|---|---|---|
| Network egress | Data exfiltration possible | Development requires internet access |
| Workspace write access | Source code can be modified | Essential for development; git tracks changes |
| Claude credentials readable | OAuth token could be stolen | Token is revocable; limited blast radius |
| Environment variables | Secrets in .env accessible | Development requires env vars, no production keys |

The key insight is that these risks are containable. Network egress could be monitored, workspace changes are tracked by git, and tokens can be revoked.

Verification commands

Quick checks that security controls are working:

# Docker escape blocked
docker run alpine echo "test"  # Should fail with 403

# Sudo unavailable
sudo whoami  # Should fail

# Git push blocked
git push  # Should fail with SSH error

# All capabilities dropped
grep 'CapEff' /proc/self/status  # Should show 0000000000000000

# No-new-privileges enforced
grep 'NoNewPrivs' /proc/self/status  # Should show 1

# VS Code escape vectors cleared
echo $VSCODE_IPC_HOOK_CLI  # Should be empty
echo $BROWSER              # Should be empty
echo $GIT_ASKPASS          # Should be empty

# Read operations work
docker ps  # Should list containers
git log --oneline -5  # Should show history

Agents need tight feedback loops, or you get slop

Since LLMs are statistical next-token prediction machines (as hard as that is to believe reading some of the more impressive outputs), they cannot think through code as such – they have no way of verifying anything purely "in their head" the way humans can. So the only way to avoid playing the slop slot machine is to give them tools to verify their output in context. Once you do, the arc of a coding session bends towards something that works, rather than compounding errors from working blind.

People pushing the boundaries of agentic code generation have been working on increasingly ambitious orchestration platforms, but code generation volume isn't really the bottleneck, even with just one or a few agents. I think the most valuable investment in coding agent tooling is an ecosystem of skills tested and adapted for your particular project, with which any new piece of code can reliably be verified. I work mostly on web projects, so for me these are strict type checkers and linters, integration tests with high-fidelity emulators for backend pieces, and browser access for frontend and end-to-end testing. At work, perhaps as much as half of my time over the last half year has gone into "gold plating" our repositories with these tools.

What's brilliant is that these are just as useful for humans as they are for LLMs. While I can theoretically think through how code behaves in my head, it's a difficult and slow process, so all these guardrails help me spend less time on syntax and micro-decisions and direct my thinking and attention towards the architectural and system-level trade-offs I need to decide on.

Working code

Putting it all together, here's the (almost) full config from a Node app:

.devcontainer/devcontainer.json

The initializeCommand is a bit involved because it needs to work across multiple git worktree copies. The last few commands precreate .claude-docker/.bash_history and .claude-docker/.claude.json, because Docker has the annoying habit of creating a directory on the host when the source of a file bind mount doesn't exist. What's cool is that this runs on the host – as opposed to command in docker-compose.yml later – so you can generate env vars dynamically.
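Unrolled into a readable script, that one-liner does roughly the following (same commands, just formatted; run from the workspace root on the host):

```shell
# Generate per-worktree env vars for docker-compose (runs on the host)
mkdir -p .devcontainer
{
  echo "WORKTREE_NAME=$(basename "$PWD")"
  # Resolve the main repo path so git worktrees can mount the shared .git
  # directory; falls back to $PWD outside a repo
  echo "GIT_MAIN_REPO_PATH=$(realpath "$(git rev-parse --git-common-dir 2>/dev/null)/.." 2>/dev/null || echo "$PWD")"
  echo "LOCAL_WORKSPACE_FOLDER=$PWD"
  echo "HOST_HOME=$HOME"
  echo "HOST_UID=$(id -u)"
  echo "HOST_GID=$(id -g)"
} > .devcontainer/.env

# Precreate host-mounted files so Docker doesn't turn them into directories
mkdir -p .claude-docker
touch .claude-docker/.bash_history
[ -f .claude-docker/.claude.json ] || echo '{}' > .claude-docker/.claude.json
```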

{
  "name": "Claude Code",
  "dockerComposeFile": "docker-compose.yml",
  "service": "claude-code",
  "workspaceFolder": "/app",
  "remoteUser": "vscode",
  "shutdownAction": "stopCompose",
  "remoteEnv": {
    // First layer of defence – VS Code re-injects some of these, so the
    // security-harden.sh script sourced from .bashrc is the real safeguard.
    "SSH_AUTH_SOCK": "",
    "GPG_AGENT_INFO": "",
    "BROWSER": "",
    "VSCODE_IPC_HOOK_CLI": null,
    "VSCODE_GIT_IPC_HANDLE": null,
    "GIT_ASKPASS": null,
    "VSCODE_GIT_ASKPASS_MAIN": null,
    "VSCODE_GIT_ASKPASS_NODE": null,
    "VSCODE_GIT_ASKPASS_EXTRA_ARGS": null,
    "REMOTE_CONTAINERS_IPC": null,
    "REMOTE_CONTAINERS_SOCKETS": null,
    "REMOTE_CONTAINERS_DISPLAY_SOCK": null,
    "WAYLAND_DISPLAY": null
  },
  "initializeCommand": "bash -c 'mkdir -p .devcontainer && echo \"WORKTREE_NAME=$(basename \"$PWD\")\" > .devcontainer/.env && echo \"GIT_MAIN_REPO_PATH=$(realpath \"$(git rev-parse --git-common-dir 2>/dev/null)/..\" 2>/dev/null || echo \"$PWD\")\" >> .devcontainer/.env && echo \"LOCAL_WORKSPACE_FOLDER=$PWD\" >> .devcontainer/.env && echo \"HOST_HOME=$HOME\" >> .devcontainer/.env && echo \"HOST_UID=$(id -u)\" >> .devcontainer/.env && echo \"HOST_GID=$(id -g)\" >> .devcontainer/.env && mkdir -p .claude-docker && touch .claude-docker/.bash_history && [ -f .claude-docker/.claude.json ] || echo '{}' > .claude-docker/.claude.json'",
  // postStartCommand: clean up sockets created before VS Code attaches
  "postStartCommand": "find /tmp -maxdepth 2 \\( -name 'vscode-ssh-auth-*.sock' -o -name 'vscode-remote-containers-ipc-*.sock' -o -name 'vscode-remote-containers-*.js' \\) -delete 2>/dev/null || true",
  // IPC socket cleanup (vscode-ipc-*.sock, vscode-git-*.sock) is handled by a background
  // loop in the Docker Compose command – postAttachCommand is unreliable for background
  // processes due to VS Code's cgroup-based lifecycle cleanup.
  "customizations": {
    "vscode": {
      "settings": {
        "dev.containers.dockerCredentialHelper": false,
        "dev.containers.copyGitConfig": false,
        "terminal.integrated.defaultProfile.linux": "bash",
        "terminal.integrated.automationProfile.linux": {
          "path": "/bin/bash"
        },
        "terminal.integrated.profiles.linux": {
          "bash": {
            "path": "/bin/bash"
          }
        }
      },
      "extensions": [
        "dbaeumer.vscode-eslint",
        "biomejs.biome",
        "prisma.prisma",
        "zenstack.zenstack",
        "johnpapa.vscode-peacock",
        "anthropic.claude-code",
        "ms-azuretools.vscode-docker"
      ]
    }
  }
}

.devcontainer/docker-compose.yml

services:
  docker-proxy:
    image: tecnativa/docker-socket-proxy:latest
    environment:
      # Read-only operations - allowed
      CONTAINERS: 1
      IMAGES: 1
      INFO: 1
      NETWORKS: 1
      VOLUMES: 1
      # Dangerous operations - blocked
      POST: 0
      BUILD: 0
      COMMIT: 0
      EXEC: 0
      SWARM: 0
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
    networks:
      - dev

  # Forwards localhost:3000-3005 inside devcontainer to the app container
  # This allows Auth0 redirects to work with dynamic port assignment
  # (Auth0 has localhost:3000-3005 registered as allowed callback URLs)
  localhost-proxy:
    image: alpine/socat
    network_mode: "service:claude-code"
    entrypoint: ["/bin/sh", "-c"]
    command:
      - |
        APP_HOST="${WORKTREE_NAME:-platform-frontend-next}"
        for port in 3000 3001 3002 3003 3004 3005; do
          socat TCP-LISTEN:$$port,fork,reuseaddr TCP:$$APP_HOST:3000 &
        done
        wait
    restart: unless-stopped
    depends_on:
      - claude-code

  claude-code:
    build:
      context: ..
      dockerfile: .devcontainer/Dockerfile
      args:
        USER_UID: ${HOST_UID:-1000}
        USER_GID: ${HOST_GID:-1000}
    container_name: claude-code-${WORKTREE_NAME:-default}
    cap_drop:
      - ALL
    security_opt:
      - no-new-privileges:true
    volumes:
      # Workspace
      - ..:/app:cached
      # Isolated node_modules per worktree
      - node-modules:/app/node_modules
      # Git worktree support - mount main repo's .git to same absolute path
      - ${GIT_MAIN_REPO_PATH}/.git:${GIT_MAIN_REPO_PATH}/.git:cached
      # Claude config and logs
      - ${LOCAL_WORKSPACE_FOLDER}/.claude-docker/.claude.json:/home/vscode/.claude.json
      - ${LOCAL_WORKSPACE_FOLDER}/.claude-docker/.bash_history:/home/vscode/.bash_history
      # Shared pnpm store (macOS path with Linux fallback)
      - ${PNPM_STORE_PATH:-${HOST_HOME}/Library/pnpm/store}:/home/vscode/.local/share/pnpm/store:cached
      # Playwright browser cache (persists between container restarts)
      - playwright-browsers:/home/vscode/.cache/ms-playwright
    environment:
      DOCKER_HOST: tcp://docker-proxy:2375
      DATABASE_URL: postgresql://postgres:password@postgres:5432/onboarding-db
      DATABASE_HOST: postgres
      DATABASE_PORT: 5432
      BQ_EMULATOR_HOST: http://bigquery-emulator:9050
      PLAYWRIGHT_BROWSERS_PATH: /home/vscode/.cache/ms-playwright
    env_file:
      - .env
      - ../.env
    networks:
      - dev
    depends_on:
      - docker-proxy
    # These are running here instead of Dockerfile to ensure freshness and happen in the background while the IDE is already open.
    # The socket cleanup loop (10 passes at 30s intervals) catches VS Code IPC sockets
    # including late-created ones. Runs as a child of the container's own bash process –
    # not subject to VS Code's cgroup-based lifecycle cleanup.
    command: >
      bash -c '. /home/vscode/.bashrc &&
      curl -fsSL https://claude.ai/install.sh | bash &&
      pnpm config set store-dir /home/vscode/.local/share/pnpm/store &&
      pnpm install &&
      just platform-frontend playwright-ensure-browsers;
      (for i in 1 2 3 4 5 6 7 8 9 10; do sleep 30;
        find /tmp -maxdepth 2 \( -name "vscode-ipc-*.sock" -o -name "vscode-git-*.sock" \) -delete 2>/dev/null;
      done) &
      sleep infinity'

networks:
  dev:
    external: true

volumes:
  node-modules:
    name: claude-code-${WORKTREE_NAME:-default}-node-modules
  playwright-browsers:
    name: claude-code-${WORKTREE_NAME:-default}-playwright-browsers

Dockerfile

This is based on Debian and installs Node.js manually so the version can be tracked in .nvmrc. Note the security hardening script baked into the image – this is the primary defence against VS Code's re-injected environment variables.

FROM debian:trixie

# Use bash for the shell with pipefail
SHELL ["/bin/bash", "-o", "pipefail", "-c"]

# Install system dependencies (sudo intentionally omitted for security)
# To get Chromium deps via Playwright you can run:
# pnpm --filter platform-frontend-next exec playwright install chromium --with-deps --dry-run
RUN apt-get update && apt-get install -y --no-install-recommends \
    ca-certificates \
    curl \
    git \
    xz-utils \
    jq \
    vim \
    ripgrep \
    fd-find \
    htop \
    less \
    tree \
    docker-cli \
    wget \
    locales \
    just \
    unzip \
    libasound2t64 libatk-bridge2.0-0t64 libatk1.0-0t64 libatspi2.0-0t64 libcairo2 libcups2t64 libdbus-1-3 libdrm2 libgbm1 libglib2.0-0t64 libnspr4 libnss3 libpango-1.0-0 libx11-6 libxcb1 libxcomposite1 libxdamage1 libxext6 libxfixes3 libxkbcommon0 libxrandr2 xvfb fonts-noto-color-emoji fonts-unifont libfontconfig1 libfreetype6 xfonts-scalable fonts-liberation fonts-ipafont-gothic fonts-wqy-zenhei fonts-tlwg-loma-otf fonts-freefont-ttf \
    && rm -rf /var/lib/apt/lists/*

# Generate and configure locale
RUN sed -i '/en_US.UTF-8/s/^# //g' /etc/locale.gen && \
    locale-gen
ENV LANG=en_US.UTF-8
ENV LC_ALL=en_US.UTF-8

# Create non-root user using host user ID / GID numbers
ARG USERNAME=vscode
ARG USER_UID=1000
ARG USER_GID=$USER_UID

RUN if getent group $USER_GID >/dev/null; then \
        useradd --uid $USER_UID --gid $USER_GID -m $USERNAME; \
    else \
        groupadd --gid $USER_GID $USERNAME && \
        useradd --uid $USER_UID --gid $USER_GID -m $USERNAME; \
    fi

RUN mkdir -p /app/node_modules /app/apps/platform-frontend/node_modules \
    /usr/local/lib/node_modules && chown -R $USER_UID:$USER_GID /app

# Install Node.js using the version in .nvmrc
COPY .nvmrc /tmp/.nvmrc
RUN NODE_VERSION=$(cat /tmp/.nvmrc | tr -d '[:space:]') \
    && ARCH=$(uname -m | sed 's/x86_64/x64/' | sed 's/aarch64/arm64/') \
    && curl -fsSL "https://nodejs.org/dist/v${NODE_VERSION}/node-v${NODE_VERSION}-linux-${ARCH}.tar.xz" \
    | tar -xJ -C /usr/local --strip-components=1 \
    && rm /tmp/.nvmrc \
    && npm install -g pnpm@10.12.4

# Switch to non-root user for remaining setup
USER $USERNAME

# Set up shell environment
ENV SHELL=/bin/bash

# Security hardening: clear VS Code escape vectors injected into the container.
# This script is sourced from .bashrc BEFORE the interactive guard, so it runs
# for non-interactive login shells (which is how Claude Code invokes bash).
RUN mkdir -p /home/vscode/.config && cat << 'HARDEN' > /home/vscode/.config/security-harden.sh
# VS Code IPC sockets – can execute commands on the host
unset VSCODE_IPC_HOOK_CLI

# VS Code Git extension IPC – credential access via host
unset VSCODE_GIT_IPC_HANDLE \
      GIT_ASKPASS \
      VSCODE_GIT_ASKPASS_MAIN \
      VSCODE_GIT_ASKPASS_NODE \
      VSCODE_GIT_ASKPASS_EXTRA_ARGS

# Remote Containers extension IPC – host command execution bridge
unset REMOTE_CONTAINERS_IPC \
      REMOTE_CONTAINERS_SOCKETS \
      REMOTE_CONTAINERS_DISPLAY_SOCK

# GUI forwarding (low risk but unnecessary)
unset WAYLAND_DISPLAY

# Browser helper – can trigger actions on host via --openExternal
# Set to empty rather than unset to prevent fallback to defaults
export BROWSER=

# Agent forwarding – set to empty to prevent fallback to default socket paths
export SSH_AUTH_SOCK=
export GPG_AGENT_INFO=
HARDEN

RUN sed -i '1i source ~/.config/security-harden.sh 2>/dev/null || true' ~/.bashrc \
    && sed -i '2i export PATH="$HOME/.local/bin:$PATH"' ~/.bashrc \
    && mkdir -p /home/vscode/.local/bin /home/vscode/.cache/ms-playwright \
    /home/vscode/.local/share/pnpm/store /home/vscode/.local/share/pnpm/global

WORKDIR /app

# Note: Claude Code and dependencies are installed on container startup

CMD ["sleep", "infinity"]

.devcontainer/container-prompt.md

You might want to inject a custom prompt orienting your agent to prevent wasted tool calls. In my case this is an addendum to CLAUDE.md and I use it with: claude --append-system-prompt "$(cat .devcontainer/container-prompt.md)"

You are running inside a Docker DevContainer.

## Network - IMPORTANT
Use container hostnames, NOT localhost:
- `postgres` for PostgreSQL (port 5432)
- `bigquery-emulator` for BigQuery (port 9050)
- `fake-gcs` for GCS emulator (port 8000)
- The app container is named after the worktree directory (e.g., `platform-feature-branch`)

## Docker Access (Read-Only)
Docker access is via a socket proxy that only allows read operations:
- `docker ps` - List running containers
- `docker logs <container>` - View container logs
- `docker inspect <container>` - Inspect container details

**Blocked operations** (for security):
- `docker run` - Cannot create new containers
- `docker exec` - Cannot exec into containers
- `docker build` - Cannot build images

This prevents container escape attacks from malicious code or prompt injection.

## Git Access (Read + Local Commit Only)
- `git log`, `git status`, `git diff` - Full read access to history
- `git commit`, `git branch` - Can make local commits and branches
- `git push` - **BLOCKED** (no SSH keys mounted)

This prevents malicious code from pushing to remote repositories.

## Database
- DATABASE_URL is pre-configured to use `postgres` hostname
- BQ_EMULATOR_HOST points to the BigQuery emulator

## File System
- Project is at `/app` (same as app container)
- node_modules are in isolated Docker volumes (not synced to host)

## Privilege Restrictions
- No sudo access - cannot escalate privileges
- Non-root user (vscode) with limited capabilities

I hope this full example helps you get going quicker – it took me a while to hand-tune the small details to make sure dev container rebuilds and startup are quick.