
Claude Code Isolation and Burner Workflow 260211: Difference between revisions

From Game in the Brain Wiki
Add session notes 260222: firejail incompatibility fix, simplified launcher script
Rename csb.sh to claude_isolation.sh; add detailed comments with John/Mary examples
 
= Claude Code Isolation with Distrobox — Burner Workflow =


== Overview ==


This guide documents how to run Claude Code in an isolated environment using Distrobox containers. The core idea:


* Each project or task lives in its own '''container''' — isolated from the host system.
* Containers are '''persistent'''. You enter them, do work, and come back later.
* At any point you can '''save the current state''' as an image — a snapshot you can restore from or clone.
* A '''golden image''' (or template) is a clean, pre-configured base you clone new containers from.
* You delete containers and images on your own schedule, when you no longer need them.


Think of it like save points in a game: you can keep playing from where you left off, and save whenever you want a checkpoint.

This protects against malicious prompt injection by limiting what Claude Code can access — any damage from a bad agent run stays inside the container and does not touch the host.


== Command Context ==

Every command in this guide is prefixed with where it must be run:

{| class="wikitable"
|-
! Prefix !! Meaning
|-
| '''[HOST]''' || Run this in a terminal on your normal Linux desktop, outside any container
|-
| '''[DISTROBOX]''' || Run this inside the Distrobox container after entering it
|}

== Prerequisites ==
* A Linux host (Fedora, Ubuntu, Arch, etc.)
* [https://github.com/89luca89/distrobox Distrobox] installed on the host
* Podman installed on the host
* A Claude Code account and API access


== Naming Convention ==


Containers and images are named using a short prefix and a date in <code>YYMMDD</code> format. The date identifies when the container or save point was created, making it easy to track your working state over time.
 
{| class="wikitable"
|-
! Type !! Format !! Example
|-
| Working container || <code>PREFIX-YYMMDD</code> || <code>work-260220</code>
|-
| Saved image (save point) || <code>localhost/PREFIX-YYMMDD:latest</code> || <code>localhost/work-260220:latest</code>
|-
| Golden image (template) || <code>localhost/PREFIX-base:latest</code> || <code>localhost/work-base:latest</code>
|-
| Burner home directory || <code>~/sandbox-homes/PREFIX-YYMMDD</code> || <code>~/sandbox-homes/work-260220</code>
|}
 
Choose any short prefix that makes sense for your setup. Use the same prefix consistently so your image library stays organised.
 
'''Example timeline:'''
* You set up a container on the 20th → <code>work-260220</code>, saved as <code>localhost/work-260220:latest</code>
* On the 22nd you want a new save point → commit the running container as <code>localhost/work-260222:latest</code>
* Start a new container from that save point when needed → <code>work-260222</code>
* <code>work-260220</code> is still there — enter it again any time
* Delete whichever images or containers you no longer need
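The convention is easy to script. Here is a minimal dry-run sketch (a hypothetical helper, not part of the wiki's workflow) that derives today's names; it only prints the command, so you can paste it when it looks right:

```shell
#!/bin/bash
# newbox-name.sh: hypothetical dry-run helper for the PREFIX-YYMMDD convention.
# It only prints the derived names and the create command; it never calls distrobox.
PREFIX="${1:-work}"                    # short prefix, e.g. "work"
TODAY="$(date +%y%m%d)"                # YYMMDD, e.g. 260220
NAME="${PREFIX}-${TODAY}"
echo "container: $NAME"
echo "home:      $HOME/sandbox-homes/$NAME"
echo "create:    distrobox create --name $NAME --image localhost/${PREFIX}-base:latest --home $HOME/sandbox-homes/$NAME"
```

Run it as <code>./newbox-name.sh work</code>; the printed <code>distrobox create</code> line matches the workflow commands below.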
 
== One-Time Setup: Create the Golden Image ==
 
Run these steps once. The result is a golden image — a clean, pre-configured base you will clone all future work containers from.
 
=== Step 1: Install Distrobox ===
 
'''[HOST]'''


 sudo apt install distrobox    # Debian/Ubuntu
 sudo dnf install distrobox    # Fedora
 yay -S distrobox              # Arch (AUR)


=== Step 2: Create and Enter the Base Container ===
 
'''[HOST]''' Create a home directory for the base container. Run these as two separate commands:


 mkdir -p ~/sandbox-homes/work-base
 distrobox create --name work-base --image ubuntu:24.04 --home ~/sandbox-homes/work-base


'''[HOST]''' Enter the container:


  distrobox enter work-base
 
Your prompt will change to indicate you are now inside the container.
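If you want a check that does not rely on the prompt, you can probe for container markers. This is a sketch under two assumptions not stated in the guide: a podman backend (podman writes <code>/run/.containerenv</code> inside containers) and the <code>CONTAINER_ID</code> variable distrobox sets in entered sessions:

```shell
# Heuristic check: are we inside the container or on the host?
# Assumptions: podman backend (creates /run/.containerenv) and the
# CONTAINER_ID variable that distrobox exports in entered sessions.
if [ -f /run/.containerenv ] || [ -n "${CONTAINER_ID:-}" ]; then
  where="inside the container"
else
  where="on the host"
fi
echo "This shell is $where"
```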


=== Step 3: Install Claude Code ===


'''[DISTROBOX]'''


 curl -fsSL https://deb.nodesource.com/setup_lts.x | sudo -E bash -
 sudo apt install -y nodejs
 npm install -g @anthropic-ai/claude-code


'''[DISTROBOX]''' Log in and verify:


  claude


Complete the authentication flow. Your credentials are stored inside the container.


=== Step 4: Add the Launcher Script ===


'''[DISTROBOX]''' Create a project directory and the launcher script:


 mkdir -p ~/project
 nano ~/project/claude_isolation.sh


Contents:

<syntaxhighlight lang="bash">
#!/bin/bash
# =============================================================================
# claude_isolation.sh
# Launcher script for Claude Code inside a Distrobox container.
#
# Place this script in your project directory inside the container.
# Run it from there to start a Claude Code session scoped to that directory.
#
# Usage:
#   ./claude_isolation.sh
#   ./claude_isolation.sh --dangerously-skip-permissions
# =============================================================================

# -----------------------------------------------------------------------------
# WORK_DIR — the directory Claude Code will run in.
#
# The default below auto-detects the directory this script lives in.
# This works for most setups and requires no changes.
#
# If you need a fixed path regardless of where the script is called from,
# comment out the auto-detect line and set WORK_DIR manually instead.
#
# Examples:
#   John building a chatbot:  WORK_DIR="/home/john/projects/chatbot"
#   Mary running experiments: WORK_DIR="/home/mary/ai-lab/experiment-3"
# -----------------------------------------------------------------------------
WORK_DIR="$(cd "$(dirname "$0")" && pwd)"
# WORK_DIR="/home/USER/your-project-directory"   # uncomment to hardcode

# -----------------------------------------------------------------------------
# Move into the work directory.
# Claude Code will treat this as its root — all file reads and writes
# happen relative to here.
# -----------------------------------------------------------------------------
cd "$WORK_DIR"

# -----------------------------------------------------------------------------
# Optional: override the Claude config directory.
#
# By default Claude stores its config (login tokens, settings) in $HOME/.claude
# Inside a --home container, $HOME points to the burner directory, so a brand
# new container will not have credentials and you will need to log in once.
#
# If you want to reuse credentials from your host's real home directory,
# uncomment the export line and set the absolute path to your .claude folder.
#
# Examples:
#   John:  export CLAUDE_CONFIG_HOME="/home/john/.claude"
#   Mary:  export CLAUDE_CONFIG_HOME="/home/mary/.claude"
#
# Leave commented out to keep full isolation (recommended).
# Each new container will prompt you to log in once, then store credentials
# in its own burner home.
# -----------------------------------------------------------------------------
# export CLAUDE_CONFIG_HOME="/home/USER/.claude"

echo "Starting Claude Code..."
echo "  Working directory: $WORK_DIR"
echo ""

# -----------------------------------------------------------------------------
# Launch Claude Code.
#
# 'exec' replaces this shell process with claude — keeps the process tree clean.
# '$@' passes any arguments you gave this script directly through to claude.
#
# Common arguments:
#   --dangerously-skip-permissions   auto-approve all actions (use inside
#                                    a container only — never on bare host)
# -----------------------------------------------------------------------------
exec claude "$@"
</syntaxhighlight>

'''[DISTROBOX]''' Make it executable:

 chmod +x ~/project/claude_isolation.sh

=== Step 5: Save as the Golden Image ===

'''[HOST]''' Exit the container, then stop and commit it:

 exit
 distrobox stop work-base
 podman container commit work-base localhost/work-base:latest

'''[HOST]''' Verify:

 podman image ls


You now have a golden image. The base container can be kept or deleted — the image is self-contained.

== Daily Workflow ==

=== Starting a New Container ===

When starting fresh work, clone a container from the golden image (or any saved image). Use today's date in the name.

'''[HOST]''' Run these as two separate commands:

 mkdir -p ~/sandbox-homes/work-260220
 distrobox create --name work-260220 --image localhost/work-base:latest --home ~/sandbox-homes/work-260220

'''[HOST]''' Enter it:

 distrobox enter work-260220

'''[DISTROBOX]''' Launch Claude Code:

 cd ~/project
 ./claude_isolation.sh


=== Continuing an Existing Container ===

If the container already exists, just enter it again — it retains its full state:

'''[HOST]'''

 distrobox enter work-260220

'''[DISTROBOX]'''

 cd ~/project
 ./claude_isolation.sh


=== Saving a Save Point ===

At any point — before a risky change, after a milestone, or at the end of a working day — commit the container state as a named image.

'''[HOST]''' Stop the container:

 distrobox stop work-260220

'''[HOST]''' Commit to a dated image:

 podman container commit work-260220 localhost/work-260222:latest

'''[HOST]''' Start the container again:

 distrobox enter work-260220

The save point <code>localhost/work-260222:latest</code> is now available. Your original container <code>work-260220</code> is unchanged and still usable.
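The stop/commit pair can be wrapped in a small dry-run helper (hypothetical, called <code>savepoint.sh</code> here) that derives the dated tag from a <code>PREFIX-YYMMDD</code> container name and prints the commands for review:

```shell
#!/bin/bash
# savepoint.sh: hypothetical dry-run helper. Prints the save-point commands
# for a container named PREFIX-YYMMDD; it does not call podman or distrobox.
NAME="${1:-work-260220}"               # container to snapshot
PREFIX="${NAME%%-*}"                   # text before the first dash, e.g. "work"
TAG="localhost/${PREFIX}-$(date +%y%m%d):latest"
echo "distrobox stop $NAME"
echo "podman container commit $NAME $TAG"
echo "distrobox enter $NAME   # restart when the commit finishes"
```

Paste the printed lines into a host terminal once they match what you intend to save.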


=== Switching Between Save Points ===

You can branch off from any saved image. Both lines of work remain independent.

'''Example:''' You have been using <code>work-260220</code>. You save a point as <code>localhost/work-260222:latest</code>. Now you can:

* Keep using <code>work-260220</code> as-is
* Start a new container from the 260222 save point:

'''[HOST]'''

 mkdir -p ~/sandbox-homes/work-260222
 distrobox create --name work-260222 --image localhost/work-260222:latest --home ~/sandbox-homes/work-260222
 distrobox enter work-260222

* Go back to <code>work-260220</code> at any time:

 distrobox enter work-260220


=== Restoring from a Save Point ===

If a container is broken or you want a clean start from a previous state:

'''[HOST]''' Delete the current container:

 distrobox rm work-260222
 rm -rf ~/sandbox-homes/work-260222

'''[HOST]''' Re-create it from the save point image:

 mkdir -p ~/sandbox-homes/work-260222
 distrobox create --name work-260222 --image localhost/work-260222:latest --home ~/sandbox-homes/work-260222

=== Promoting to the Golden Image ===

If a container has reached a state you want all future containers to start from, promote it:

'''[HOST]'''

 distrobox stop work-260222
 podman container commit work-260222 localhost/work-base:latest

New containers cloned from <code>localhost/work-base:latest</code> will now include those changes.


=== Managing Your Image Library ===

'''[HOST]''' List all images:

 podman image ls

'''[HOST]''' Delete an image you no longer need:

 podman image rm localhost/work-260220:latest

'''[HOST]''' List all containers:

 podman ps -a

'''[HOST]''' Delete a container and its home when you are done:

 distrobox rm work-260220
 rm -rf ~/sandbox-homes/work-260220

== What Distrobox Isolation Provides ==


{| class="wikitable"
|-
! Surface !! Isolated? !! Notes
|-
| Host home directory || ✅ Yes || Container uses its own burner home via <code>--home</code>; <code>/home/USER</code> is never touched
|-
| Host filesystem via <code>/run/host</code> || ⚠️ Partial || Mounted read-write by default. Add <code>--additional-flags "--mount type=bind,source=/,target=/run/host,ro"</code> at container creation to make it read-only
|-
| System packages || ✅ Yes || Container uses its own overlay filesystem
|-
| Network || ❌ No || Container shares the host network namespace. Claude Code requires network access to reach the Anthropic API
|-
| Linux kernel || ❌ No || Rootless containers share the host kernel (acceptable for most threat models)
|-
| X11/Wayland display || ❌ No || GUI apps render on the host desktop
|}
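A quick way to test the <code>/run/host</code> row from inside a container is to probe the mount's writability. This is a sketch; it assumes the default distrobox mount point, and on a plain host shell the path is normally absent (which also reports as safe):

```shell
# Probe whether the host root is reachable read-write via /run/host.
# -w is false both for a read-only mount and for a missing path.
if [ -w /run/host ]; then
  hoststate="read-write (host files can be modified)"
else
  hoststate="read-only or not mounted"
fi
echo "/run/host: $hoststate"
```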


== Why the Burner Concept ==

The Burner Workflow is designed to give Claude Code extensive permissions — auto-allow mode, running system commands, installing packages — without risking your actual computer.

* '''Safety with high permissions:''' If Claude Code runs <code>rm -rf</code> or installs 50 packages, your main system is untouched. The damage stays inside the container.
* '''Dependency hygiene:''' Agents often install tools to complete tasks. Distrobox keeps this inside the box. Delete the container when you are done with the project.
* '''Save points for risky work:''' Before letting an agent attempt something uncertain, commit a save point. If it breaks the container, restore from the save point and try a different approach.
* '''Better integration than Docker:''' Unlike raw Docker, Distrobox integrates naturally with your terminal environment while still keeping the execution environment separate.


=== Can You Skip Distrobox? ===

* '''Yes, if:''' You are just testing Claude Code and will manually approve every command (the default safe mode).
* '''No, if:''' You want to run the agent in autonomous mode, skipping permission prompts, or let it freely install tools. In that case, skipping Distrobox is dangerous and defeats the purpose of this guide.


== References ==

* [https://github.com/89luca89/distrobox Distrobox]
* [https://claude.ai/code Claude Code]
* [https://github.com/netblue30/firejail Firejail GitHub]


== Session Notes 260222 — Testing & Fixes ==

When distrobox is created with <code>--home</code> pointing to a burner directory, the container's <code>$HOME</code> becomes that directory — not the real user home. Any launcher script variables using <code>$HOME</code> (like <code>CLAUDE_DIR</code> and <code>NVM_DIR</code>) will resolve to wrong paths.


'''Fix:''' Hardcode absolute paths in the launcher script if needed.


=== 3. Firejail is incompatible with distrobox <code>--home</code> workflow ===

Firejail fails with <code>no suitable ...bin/claude executable found</code> inside distrobox when using a custom <code>--home</code> directory. The cause is firejail's whitelist mode blocking Node.js runtime dependencies that Claude Code requires.


'''Fix:''' Drop firejail. Distrobox with <code>--home</code> provides sufficient filesystem isolation for the burner workflow. The <code>claude_isolation.sh</code> script above is the current recommended launcher.


=== 4. Backup script may produce duplicate image files ===

Latest revision as of 07:46, 22 February 2026

Claude Code Isolation with Distrobox — Burner Workflow

Overview

This guide documents how to run Claude Code in an isolated environment using Distrobox containers. The core idea:

  • Each project or task lives in its own container — isolated from the host system.
  • Containers are persistent. You enter them, do work, and come back later.
  • At any point you can save the current state as an image — a snapshot you can restore from or clone.
  • A golden image (or template) is a clean, pre-configured base you clone new containers from.
  • You delete containers and images on your own schedule, when you no longer need them.

Think of it like save points in a game: you can keep playing from where you left off, and save whenever you want a checkpoint.

This protects against malicious prompt injection by limiting what Claude Code can access — any damage from a bad agent run stays inside the container and does not touch the host.

Command Context

Every command in this guide is prefixed with where it must be run:

Prefix Meaning
[HOST] Run this in a terminal on your normal Linux desktop, outside any container
[DISTROBOX] Run this inside the Distrobox container after entering it

Prerequisites

  • A Linux host (Fedora, Ubuntu, Arch, etc.)
  • Distrobox installed on the host
  • Podman installed on the host
  • A Claude Code account and API access

Naming Convention

Containers and images are named using a short prefix and a date in YYMMDD format. The date identifies when the container or save point was created, making it easy to track your working state over time.

Type Format Example
Working container PREFIX-YYMMDD work-260220
Saved image (save point) localhost/PREFIX-YYMMDD:latest localhost/work-260220:latest
Golden image (template) localhost/PREFIX-base:latest localhost/work-base:latest
Burner home directory ~/sandbox-homes/PREFIX-YYMMDD ~/sandbox-homes/work-260220

Choose any short prefix that makes sense for your setup. Use the same prefix consistently so your image library stays organised.

Example timeline:

  • You set up a container on the 20th → work-260220, saved as localhost/work-260220:latest
  • On the 22nd you want a new save point → commit the running container as localhost/work-260222:latest
  • Start a new container from that save point when needed → work-260222
  • work-260220 is still there — enter it again any time
  • Delete whichever images or containers you no longer need

One-Time Setup: Create the Golden Image

Run these steps once. The result is a golden image — a clean, pre-configured base you will clone all future work containers from.

Step 1: Install Distrobox

[HOST]

sudo apt install distrobox    # Debian/Ubuntu
sudo dnf install distrobox    # Fedora
yay -S distrobox              # Arch (AUR)

Step 2: Create and Enter the Base Container

[HOST] Create a home directory for the base container. Run these as two separate commands:

mkdir -p ~/sandbox-homes/work-base
distrobox create --name work-base --image ubuntu:24.04 --home ~/sandbox-homes/work-base

[HOST] Enter the container:

distrobox enter work-base

Step 3: Install Claude Code

[DISTROBOX]

curl -fsSL https://deb.nodesource.com/setup_lts.x | sudo -E bash -
sudo apt install -y nodejs
npm install -g @anthropic-ai/claude-code

[DISTROBOX] Log in and verify:

claude

Complete the authentication flow. Your credentials are stored inside the container.

Step 4: Add the Launcher Script

[DISTROBOX] Create a project directory and the launcher script:

mkdir -p ~/project
nano ~/project/claude_isolation.sh

Contents:

#!/bin/bash
# =============================================================================
# claude_isolation.sh
# Launcher script for Claude Code inside a Distrobox container.
#
# Place this script in your project directory inside the container.
# Run it from there to start a Claude Code session scoped to that directory.
#
# Usage:
#   ./claude_isolation.sh
#   ./claude_isolation.sh --dangerously-skip-permissions
# =============================================================================


# -----------------------------------------------------------------------------
# WORK_DIR — the directory Claude Code will run in.
#
# The default below auto-detects the directory this script lives in.
# This works for most setups and requires no changes.
#
# If you need a fixed path regardless of where the script is called from,
# comment out the auto-detect line and set WORK_DIR manually instead.
#
# Examples:
#   John building a chatbot:  WORK_DIR="/home/john/projects/chatbot"
#   Mary running experiments: WORK_DIR="/home/mary/ai-lab/experiment-3"
# -----------------------------------------------------------------------------
WORK_DIR="$(cd "$(dirname "$0")" && pwd)"
# WORK_DIR="/home/USER/your-project-directory"   # uncomment to hardcode


# -----------------------------------------------------------------------------
# Move into the work directory.
# Claude Code will treat this as its root — all file reads and writes
# happen relative to here.
# -----------------------------------------------------------------------------
cd "$WORK_DIR"


# -----------------------------------------------------------------------------
# Optional: override the Claude config directory.
#
# By default Claude stores its config (login tokens, settings) in $HOME/.claude
# Inside a --home container, $HOME points to the burner directory, so a brand
# new container will not have credentials and you will need to log in once.
#
# If you want to reuse credentials from your host's real home directory,
# uncomment the export line and set the absolute path to your .claude folder.
#
# Examples:
#   John:  export CLAUDE_CONFIG_DIR="/home/john/.claude"
#   Mary:  export CLAUDE_CONFIG_DIR="/home/mary/.claude"
#
# Leave commented out to keep full isolation (recommended).
# Each new container will prompt you to log in once, then store credentials
# in its own burner home.
# -----------------------------------------------------------------------------
# export CLAUDE_CONFIG_DIR="/home/USER/.claude"


echo "Starting Claude Code..."
echo "  Working directory: $WORK_DIR"
echo ""


# -----------------------------------------------------------------------------
# Launch Claude Code.
#
# 'exec' replaces this shell process with claude — keeps the process tree clean.
# '$@' passes any arguments you gave this script directly through to claude.
#
# Common arguments:
#   --dangerously-skip-permissions   auto-approve all actions (use inside
#                                    a container only — never on bare host)
# -----------------------------------------------------------------------------
exec claude "$@"

[DISTROBOX] Make it executable:

chmod +x ~/project/claude_isolation.sh
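The WORK_DIR auto-detect line is the one piece of the script worth testing in isolation. A minimal sketch of what it does, using hypothetical temp paths (no distrobox required):

```shell
# Demonstrate the auto-detect used in claude_isolation.sh:
# "$(cd "$(dirname "$0")" && pwd)" resolves to the directory the script
# lives in, regardless of where it is invoked from.
tmpdir="$(mktemp -d)"
mkdir -p "$tmpdir/project"
cat > "$tmpdir/project/show_workdir.sh" <<'EOF'
#!/bin/bash
WORK_DIR="$(cd "$(dirname "$0")" && pwd)"
echo "$WORK_DIR"
EOF
chmod +x "$tmpdir/project/show_workdir.sh"

# Invoke from a different directory: the script still reports its own home.
cd /
"$tmpdir/project/show_workdir.sh"    # prints an absolute path ending in /project
```

This is why the script can simply live in the project directory and need no configuration.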

Step 5: Save as the Golden Image

[HOST] Exit the container, then stop and commit it:

exit
distrobox stop work-base
podman container commit work-base localhost/work-base:latest

[HOST] Verify:

podman image ls

You now have a golden image. The base container can be kept or deleted — the image is self-contained.

Daily Workflow

Starting a New Container

When starting fresh work, clone a container from the golden image (or any saved image). Use today's date in the name.

[HOST] Run these as two separate commands:

mkdir -p ~/sandbox-homes/work-260220
distrobox create --name work-260220 --image localhost/work-base:latest --home ~/sandbox-homes/work-260220

[HOST] Enter it:

distrobox enter work-260220

[DISTROBOX] Launch Claude Code:

cd ~/project
./claude_isolation.sh
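The clone steps above can be wrapped in a small helper so the dated name is generated rather than typed. `burner_name` and `new_burner` are hypothetical helper names for this guide's naming scheme, not distrobox commands:

```shell
# Pure helper: today's burner name in the work-YYMMDD scheme,
# e.g. work-260222 for 2026-02-22.
burner_name() {
    echo "work-$(date +%y%m%d)"
}

# Hypothetical wrapper around the clone steps (requires distrobox on the host).
# mkdir and distrobox create stay separate commands, per the session notes.
new_burner() {
    local name
    name="$(burner_name)"
    mkdir -p "$HOME/sandbox-homes/$name"
    distrobox create --name "$name" --image localhost/work-base:latest \
        --home "$HOME/sandbox-homes/$name"
}
```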

Continuing an Existing Container

If the container already exists, just enter it again — it retains its full state:

[HOST]

distrobox enter work-260220

[DISTROBOX]

cd ~/project
./claude_isolation.sh

Creating a Save Point

At any point — before a risky change, after a milestone, or at the end of a working day — commit the container state as a named image.

[HOST] Stop the container:

distrobox stop work-260220

[HOST] Commit to a dated image:

podman container commit work-260220 localhost/work-260222:latest

[HOST] Start the container again:

distrobox enter work-260220

The save point localhost/work-260222:latest is now available. Your original container work-260220 is unchanged and still usable.
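The stop-and-commit cycle can be scripted the same way. `save_tag` and `save_point` are hypothetical helper names; the tag format mirrors the dated scheme used above:

```shell
# Pure helper: dated image tag for today's save point.
save_tag() {
    echo "localhost/work-$(date +%y%m%d):latest"
}

# Hypothetical wrapper (requires distrobox and podman on the host).
# Usage: save_point work-260220
save_point() {
    local container="$1"
    distrobox stop "$container"
    podman container commit "$container" "$(save_tag)"
}
```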

Switching Between Save Points

You can branch off from any saved image. Both lines of work remain independent.

Example: You have been using work-260220. You save a point as localhost/work-260222:latest. Now you can:

  • Keep using work-260220 as-is.
  • Start a new container from the 260222 save point:

        mkdir -p ~/sandbox-homes/work-260222
        distrobox create --name work-260222 --image localhost/work-260222:latest --home ~/sandbox-homes/work-260222
        distrobox enter work-260222

  • Go back to work-260220 at any time:

        distrobox enter work-260220

Restoring from a Save Point

If a container is broken or you want a clean start from a previous state:

[HOST] Delete the current container:

distrobox rm work-260222
rm -rf ~/sandbox-homes/work-260222

[HOST] Re-create it from the save point image:

mkdir -p ~/sandbox-homes/work-260222
distrobox create --name work-260222 --image localhost/work-260222:latest --home ~/sandbox-homes/work-260222

Promoting to the Golden Image

If a container has reached a state you want all future containers to start from, promote it:

[HOST]

distrobox stop work-260222
podman container commit work-260222 localhost/work-base:latest

New containers cloned from localhost/work-base:latest will now include those changes.

Managing Your Image Library

[HOST] List all images:

podman image ls

[HOST] Delete an image you no longer need:

podman image rm localhost/work-260220:latest

[HOST] List all containers:

podman ps -a

[HOST] Delete a container and its home when you are done:

distrobox rm work-260220
rm -rf ~/sandbox-homes/work-260220
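When dated images pile up, a name-sorting pass can identify which ones to delete. This sketch is pure string filtering; wiring it to podman image ls and podman image rm is left as the step you review by hand:

```shell
# Given dated names (work-YYMMDD) on stdin, print all but the newest $1.
# YYMMDD names sort correctly as plain strings, so sort -r puts newest first.
prune_candidates() {
    local keep="$1"
    sort -r | tail -n +"$((keep + 1))"
}

# Example with hypothetical names, keeping the newest two:
printf 'work-260210\nwork-260220\nwork-260222\n' | prune_candidates 2
# -> work-260210
```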

What Distrobox Isolation Provides

Surface                          Isolated?    Notes
Host home directory              ✅ Yes       Container uses its own burner home via --home; /home/USER is never touched
Host filesystem via /run/host    ⚠️ Partial   Mounted read-write by default. Add --additional-flags "--mount type=bind,source=/,target=/run/host,ro" at container creation to make it read-only
System packages                  ✅ Yes       Container uses its own overlay filesystem
Network                          ❌ No        Container shares the host network namespace. Claude Code requires network access to reach the Anthropic API
Linux kernel                     ❌ No        Rootless containers share the host kernel (acceptable for most threat models)
X11/Wayland display              ❌ No        GUI apps render on the host desktop

Why the Burner Concept

The Burner Workflow is designed to give Claude Code extensive permissions — auto-allow mode, running system commands, installing packages — without risking your actual computer.

  • Safety with high permissions: If Claude Code runs rm -rf or installs 50 packages, your main system is untouched. The damage stays inside the container.
  • Dependency hygiene: Agents often install tools to complete tasks. Distrobox keeps this inside the box. Delete the container when you are done with the project.
  • Save points for risky work: Before letting an agent attempt something uncertain, commit a save point. If it breaks the container, restore from the save point and try a different approach.
  • Better integration than Docker: Unlike raw Docker, Distrobox integrates naturally with your terminal environment while still keeping the execution environment separate.

Can You Skip Distrobox?

  • Yes, if: You are just testing Claude Code and will manually approve every command (the default safe mode).
  • No, if: You want to use autonomous mode — skipping permission prompts or letting the agent freely install tools. In that case, skipping Distrobox is dangerous and defeats the purpose of this guide.

Session Notes 260222 — Testing & Fixes

1. mkdir -p and distrobox create must be run as separate commands

Pasting them as a single line fails silently. Always run them separately on the host.

2. $HOME resolves incorrectly inside --home containers

When distrobox is created with --home pointing to a burner directory, the container's $HOME becomes that directory — not the real user home. Any launcher script variables using $HOME (like CLAUDE_DIR and NVM_DIR) will resolve to wrong paths.

Fix: Hardcode absolute paths in the launcher script if needed.
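In script form, the fix can be expressed as a parameter-expansion default, so an explicit environment override still wins but $HOME is never consulted (`/home/USER` is a placeholder for your real host home):

```shell
# Inside a --home container, $HOME points at the burner directory, so
# "$HOME/.claude" would resolve to the wrong place. Fall back to a
# hardcoded absolute path instead; an exported CLAUDE_DIR still wins.
unset CLAUDE_DIR
CLAUDE_DIR="${CLAUDE_DIR:-/home/USER/.claude}"
echo "$CLAUDE_DIR"    # -> /home/USER/.claude
```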

3. Firejail is incompatible with distrobox --home workflow

Firejail fails with no suitable ...bin/claude executable found inside distrobox when using a custom --home directory. The cause is firejail's whitelist mode blocking Node.js runtime dependencies that Claude Code requires.

Fix: Drop firejail. Distrobox with --home provides sufficient filesystem isolation for the burner workflow. The claude_isolation.sh script above is the current recommended launcher.

4. Backup script may produce duplicate image files

Manual podman save and a skip-duplicates backup script may use different filename conventions (e.g. imagename_latest.tar.gz vs localhost_imagename_latest.tar.gz), resulting in duplicate files on the backup destination. Check for and remove duplicates after any manual save.
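One way to spot the duplicates: reduce both filename conventions to a common key by stripping the optional localhost_ prefix. `backup_key` is a hypothetical helper; the filenames are the examples from above:

```shell
# Both conventions reduce to the same key once the "localhost_" prefix
# is stripped, making duplicate backup files easy to detect.
backup_key() {
    basename "$1" | sed 's/^localhost_//'
}

backup_key imagename_latest.tar.gz            # -> imagename_latest.tar.gz
backup_key localhost_imagename_latest.tar.gz  # -> imagename_latest.tar.gz
```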

5. @reboot cron needs sleep 30

Network filesystem mounts (e.g. GVFS SMB) are not ready immediately on boot. Without a sleep delay, backup scripts triggered via @reboot cron will fail silently with a "destination not mounted" error. Add sleep 30 before the backup command.
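A polling loop is a more robust alternative to a fixed sleep 30: it proceeds as soon as the destination exists and gives up after a timeout. `wait_for_path` is a hypothetical helper; adjust the path and timeout to your mount:

```shell
# Wait up to $2 seconds (default 30) for path $1 to exist, polling once
# per second. Returns 0 when found, 1 on timeout.
wait_for_path() {
    local path="$1" timeout="${2:-30}" waited=0
    while [ ! -e "$path" ]; do
        [ "$waited" -ge "$timeout" ] && return 1
        sleep 1
        waited=$((waited + 1))
    done
    return 0
}

# In the @reboot cron entry, call this before the backup command instead
# of a bare sleep 30, so slow mounts wait longer and fast mounts don't.
```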