Claude Code Isolation and Burner Workflow 260211

Latest revision as of 07:46, 22 February 2026, by Justinaquino (rename <code>csb.sh</code> to <code>claude_isolation.sh</code>; add detailed comments with John/Mary examples).
= Claude Code Isolation with Distrobox — Burner Workflow =

== Overview ==
This guide documents how to run Claude Code in an isolated environment using Distrobox containers. The core idea:
* Each project or task lives in its own '''container''' — isolated from the host system.
* Containers are '''persistent'''. You enter them, do work, and come back later.
* At any point you can '''save the current state''' as an image — a snapshot you can restore from or clone.
* A '''golden image''' (or template) is a clean, pre-configured base you clone new containers from.
* You delete containers and images on your own schedule, when you no longer need them.

Think of it like save points in a game: you can keep playing from where you left off, and save whenever you want a checkpoint.

This protects against malicious prompt injection by limiting what Claude Code can access — any damage from a bad agent run stays inside the container and does not touch the host.
== Command Context ==
Every command in this guide is prefixed with where it must be run:

{| class="wikitable"
|-
! Prefix !! Meaning
|-
| <code>'''[HOST]'''</code> || Run this in a terminal on your normal Linux desktop, outside any container
|-
| <code>'''[DISTROBOX]'''</code> || Run this inside the Distrobox container after entering it
|}
== Prerequisites ==
* A Linux host (Fedora, Ubuntu, Arch, etc.)
* [https://github.com/89luca89/distrobox Distrobox] installed on the host
* Podman installed on the host
* A Claude Code account and API access
== Naming Convention ==
Containers and images are named using a short prefix and a date in <code>YYMMDD</code> format. The date identifies when the container or save point was created, making it easy to track your working state over time.

{| class="wikitable"
|-
! Type !! Format !! Example
|-
| Working container || <code>PREFIX-YYMMDD</code> || <code>work-260220</code>
|-
| Saved image (save point) || <code>localhost/PREFIX-YYMMDD:latest</code> || <code>localhost/work-260220:latest</code>
|-
| Golden image (template) || <code>localhost/PREFIX-base:latest</code> || <code>localhost/work-base:latest</code>
|-
| Burner home directory || <code>~/sandbox-homes/PREFIX-YYMMDD</code> || <code>~/sandbox-homes/work-260220</code>
|}

Choose any short prefix that makes sense for your setup. Use the same prefix consistently so your image library stays organised.

'''Example timeline:'''
* You set up a container on the 20th → <code>work-260220</code>, saved as <code>localhost/work-260220:latest</code>
* On the 22nd you want a new save point → commit the running container as <code>localhost/work-260222:latest</code>
* Start a new container from that save point when needed → <code>work-260222</code>
* <code>work-260220</code> is still there — enter it again any time
* Delete whichever images or containers you no longer need
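The dated names above can also be derived automatically. A minimal shell sketch, assuming the <code>work</code> prefix used throughout this guide:

```bash
# Derive today's container, image, and burner-home names from a prefix.
# "work" is the example prefix used throughout this guide.
PREFIX="work"
TODAY="$(date +%y%m%d)"                        # YYMMDD, e.g. 260220
CONTAINER="$PREFIX-$TODAY"                     # e.g. work-260220
SAVE_IMAGE="localhost/$CONTAINER:latest"       # e.g. localhost/work-260220:latest
BURNER_HOME="$HOME/sandbox-homes/$CONTAINER"   # e.g. ~/sandbox-homes/work-260220
echo "$CONTAINER -> $SAVE_IMAGE"
```

Pasting these lines at the start of a host shell session saves retyping the date in the commands that follow.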
== One-Time Setup: Create the Golden Image ==
Run these steps once. The result is a golden image — a clean, pre-configured base you will clone all future work containers from.

=== Step 1: Install Distrobox ===
'''[HOST]'''
 sudo apt install distrobox   # Debian/Ubuntu
 sudo dnf install distrobox   # Fedora
 yay -S distrobox             # Arch (AUR)
=== Step 2: Create and Enter the Base Container ===
'''[HOST]''' Create a home directory for the base container. Run these as two separate commands:
 mkdir -p ~/sandbox-homes/work-base
 distrobox create --name work-base --image ubuntu:24.04 --home ~/sandbox-homes/work-base
'''[HOST]''' Enter the container:
 distrobox enter work-base

=== Step 3: Install Claude Code ===
'''[DISTROBOX]'''
 curl -fsSL https://deb.nodesource.com/setup_lts.x | sudo -E bash -
 sudo apt install -y nodejs
 npm install -g @anthropic-ai/claude-code
'''[DISTROBOX]''' Log in and verify:
 claude
Complete the authentication flow. Your credentials are stored inside the container.

=== Step 4: Add the Launcher Script ===
'''[DISTROBOX]''' Create a project directory and the launcher script:
 mkdir -p ~/project
 nano ~/project/claude_isolation.sh
Contents:
<syntaxhighlight lang="bash">
#!/bin/bash
# =============================================================================
# claude_isolation.sh
# Launcher script for Claude Code inside a Distrobox container.
#
# Place this script in your project directory inside the container.
# Run it from there to start a Claude Code session scoped to that directory.
#
# Usage:
#   ./claude_isolation.sh
#   ./claude_isolation.sh --dangerously-skip-permissions
# =============================================================================

# -----------------------------------------------------------------------------
# WORK_DIR — the directory Claude Code will run in.
#
# The default below auto-detects the directory this script lives in.
# This works for most setups and requires no changes.
#
# If you need a fixed path regardless of where the script is called from,
# comment out the auto-detect line and set WORK_DIR manually instead.
#
# Examples:
#   John building a chatbot:  WORK_DIR="/home/john/projects/chatbot"
#   Mary running experiments: WORK_DIR="/home/mary/ai-lab/experiment-3"
# -----------------------------------------------------------------------------
WORK_DIR="$(cd "$(dirname "$0")" && pwd)"
# WORK_DIR="/home/USER/your-project-directory"   # uncomment to hardcode

# -----------------------------------------------------------------------------
# Move into the work directory.
# Claude Code will treat this as its root — all file reads and writes
# happen relative to here. Abort if the directory does not exist.
# -----------------------------------------------------------------------------
cd "$WORK_DIR" || exit 1

# -----------------------------------------------------------------------------
# Optional: override the Claude config directory.
#
# By default Claude stores its config (login tokens, settings) in $HOME/.claude.
# Inside a --home container, $HOME points to the burner directory, so a brand
# new container will not have credentials and you will need to log in once.
#
# If you want to reuse credentials from your host's real home directory,
# uncomment the export line and set the absolute path to your .claude folder.
#
# Examples:
#   John: export CLAUDE_CONFIG_HOME="/home/john/.claude"
#   Mary: export CLAUDE_CONFIG_HOME="/home/mary/.claude"
#
# Leave commented out to keep full isolation (recommended).
# Each new container will prompt you to log in once, then store credentials
# in its own burner home.
# -----------------------------------------------------------------------------
# export CLAUDE_CONFIG_HOME="/home/USER/.claude"

echo "Starting Claude Code..."
echo "  Working directory: $WORK_DIR"
echo ""

# -----------------------------------------------------------------------------
# Launch Claude Code.
#
# 'exec' replaces this shell process with claude — keeps the process tree clean.
# '$@' passes any arguments you gave this script directly through to claude.
#
# Common arguments:
#   --dangerously-skip-permissions   auto-approve all actions (use inside
#                                    a container only — never on bare host)
# -----------------------------------------------------------------------------
exec claude "$@"
</syntaxhighlight>
'''[DISTROBOX]''' Make it executable:
 chmod +x ~/project/claude_isolation.sh

=== Step 5: Save as the Golden Image ===
'''[DISTROBOX]''' Exit the container:
 exit
'''[HOST]''' Stop and commit it:
 distrobox stop work-base
 podman container commit work-base localhost/work-base:latest
'''[HOST]''' Verify:
 podman image ls
You now have a golden image. The base container can be kept or deleted — the image is self-contained.
== Daily Workflow ==

=== Starting a New Container ===
When starting fresh work, clone a container from the golden image (or any saved image). Use today's date in the name.
'''[HOST]''' Run these as two separate commands:
 mkdir -p ~/sandbox-homes/work-260220
 distrobox create --name work-260220 --image localhost/work-base:latest --home ~/sandbox-homes/work-260220
'''[HOST]''' Enter it:
 distrobox enter work-260220
'''[DISTROBOX]''' Launch Claude Code:
 cd ~/project
 ./claude_isolation.sh

=== Continuing an Existing Container ===
If the container already exists, just enter it again — it retains its full state:
'''[HOST]'''
 distrobox enter work-260220
'''[DISTROBOX]'''
 cd ~/project
 ./claude_isolation.sh

=== Saving a Save Point ===
At any point — before a risky change, after a milestone, or at the end of a working day — commit the container state as a named image.
'''[HOST]''' Stop the container:
 distrobox stop work-260220
'''[HOST]''' Commit to a dated image:
 podman container commit work-260220 localhost/work-260222:latest
'''[HOST]''' Start the container again:
 distrobox enter work-260220
The save point <code>localhost/work-260222:latest</code> is now available. Your original container <code>work-260220</code> is unchanged and still usable.
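The stop-and-commit pair can be wrapped in a small helper. This is a hypothetical sketch, not part of the original guide: <code>save_point</code> derives the image name from the container's prefix and today's date, and with <code>DRY_RUN=1</code> it only prints the commands it would run.

```bash
# save_point -- commit a container as a dated save-point image (run on the HOST).
# Hypothetical helper sketch; the function name and DRY_RUN switch are ours.
save_point() {
    container="$1"                    # e.g. work-260220
    prefix="${container%-*}"          # strip the trailing date: work-260220 -> work
    image="localhost/$prefix-$(date +%y%m%d):latest"
    if [ "${DRY_RUN:-0}" = "1" ]; then
        # Preview mode: print the commands instead of executing them.
        echo "+ distrobox stop $container"
        echo "+ podman container commit $container $image"
    else
        distrobox stop "$container"
        podman container commit "$container" "$image"
    fi
    echo "save point: $image"
}

# Preview what would run for the example container from this guide:
DRY_RUN=1 save_point work-260220
```

Remember to <code>distrobox enter</code> the container again afterwards, as in the steps above.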
=== Switching Between Save Points ===
You can branch off from any saved image. Both lines of work remain independent.

'''Example:''' You have been using <code>work-260220</code>. You save a point as <code>localhost/work-260222:latest</code>. Now you can:
* Keep using <code>work-260220</code> as-is
* Start a new container from the 260222 save point:
 mkdir -p ~/sandbox-homes/work-260222
 distrobox create --name work-260222 --image localhost/work-260222:latest --home ~/sandbox-homes/work-260222
 distrobox enter work-260222
* Go back to <code>work-260220</code> at any time:
 distrobox enter work-260220

=== Restoring from a Save Point ===
If a container is broken or you want a clean start from a previous state:
'''[HOST]''' Delete the current container:
 distrobox rm work-260222
 rm -rf ~/sandbox-homes/work-260222
'''[HOST]''' Re-create it from the save point image:
 mkdir -p ~/sandbox-homes/work-260222
 distrobox create --name work-260222 --image localhost/work-260222:latest --home ~/sandbox-homes/work-260222

=== Promoting to the Golden Image ===
If a container has reached a state you want all future containers to start from, promote it:
'''[HOST]'''
 distrobox stop work-260222
 podman container commit work-260222 localhost/work-base:latest
New containers cloned from <code>localhost/work-base:latest</code> will now include those changes.
=== Managing Your Image Library ===
'''[HOST]''' List all images:
 podman image ls
'''[HOST]''' Delete an image you no longer need:
 podman image rm localhost/work-260220:latest
'''[HOST]''' List all containers:
 podman ps -a
'''[HOST]''' Delete a container and its home when you are done:
 distrobox rm work-260220
 rm -rf ~/sandbox-homes/work-260220
== What Distrobox Isolation Provides ==
{| class="wikitable"
|-
! Surface !! Isolated? !! Notes
|-
| Host home directory || ✅ Yes || Container uses its own burner home via <code>--home</code>; <code>/home/USER</code> is never touched
|-
| Host filesystem via <code>/run/host</code> || ⚠️ Partial || Mounted read-write by default. Add <code>--additional-flags "--mount type=bind,source=/,target=/run/host,ro"</code> at container creation to make it read-only
|-
| System packages || ✅ Yes || Container uses its own overlay filesystem
|-
| Network || ❌ No || Container shares the host network namespace. Claude Code requires network access to reach the Anthropic API
|-
| Linux kernel || ❌ No || Rootless containers share the host kernel (acceptable for most threat models)
|-
| X11/Wayland display || ❌ No || GUI apps render on the host desktop
|}
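For reference, the read-only <code>/run/host</code> option from the table combines with this guide's usual create command like this (a sketch using the example names from the Naming Convention section):

```bash
# Create a work container whose view of the host root (/run/host) is read-only.
# Same create command as elsewhere in this guide, plus the extra podman flag.
mkdir -p ~/sandbox-homes/work-260220
distrobox create --name work-260220 \
    --image localhost/work-base:latest \
    --home ~/sandbox-homes/work-260220 \
    --additional-flags "--mount type=bind,source=/,target=/run/host,ro"
```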
== Why the Burner Concept ==
The Burner Workflow is designed to give Claude Code extensive permissions — auto-allow mode, running system commands, installing packages — without risking your actual computer.
* '''Safety with high permissions:''' If Claude Code runs <code>rm -rf</code> or installs 50 packages, your main system is untouched. The damage stays inside the container.
* '''Dependency hygiene:''' Agents often install tools to complete tasks. Distrobox keeps this inside the box. Delete the container when you are done with the project.
* '''Save points for risky work:''' Before letting an agent attempt something uncertain, commit a save point. If it breaks the container, restore from the save point and try a different approach.
* '''Better integration than Docker:''' Unlike raw Docker, Distrobox integrates naturally with your terminal environment while still keeping the execution environment separate.

=== Can You Skip Distrobox? ===
* '''Yes, if:''' You are just testing Claude Code and will manually approve every command (the default safe mode).
* '''No, if:''' You want to use autonomous mode — skipping permission prompts or letting the agent freely install tools. In that case, skipping Distrobox is dangerous and defeats the purpose of this guide.
== References ==
* [https://github.com/89luca89/distrobox Distrobox]
* [https://claude.ai/code Claude Code]
== Session Notes 260222 — Testing & Fixes ==

=== 1. <code>mkdir -p</code> and <code>distrobox create</code> must be run as separate commands ===
Pasting them as a single line fails silently. Always run them separately on the host.
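One way to sidestep the paste problem is to run both commands from a single function, so they always execute in order. A hypothetical sketch (the function name and <code>DRY_RUN</code> switch are ours, not from the original guide):

```bash
# new_container -- create the burner home and the container in one step (HOST).
# Running both commands from one function avoids the silent failure seen when
# pasting them as a single line.
new_container() {
    name="$1"                                  # e.g. work-260220
    image="$2"                                 # e.g. localhost/work-base:latest
    home_dir="$HOME/sandbox-homes/$name"
    mkdir -p "$home_dir" || return 1           # burner home must exist first
    if [ "${DRY_RUN:-0}" = "1" ]; then
        # Preview mode: print the create command instead of running it.
        echo "+ distrobox create --name $name --image $image --home $home_dir"
    else
        distrobox create --name "$name" --image "$image" --home "$home_dir"
    fi
}

# Preview with the example names from this guide:
DRY_RUN=1 new_container work-260222 localhost/work-base:latest
```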
=== 2. <code>$HOME</code> resolves incorrectly inside <code>--home</code> containers ===
When a distrobox is created with <code>--home</code> pointing to a burner directory, the container's <code>$HOME</code> becomes that directory — not the real user home. Any launcher script variables built from <code>$HOME</code> (like <code>CLAUDE_DIR</code> and <code>NVM_DIR</code>) will resolve to the wrong paths.

'''Fix:''' Hardcode absolute paths in the launcher script if needed.
=== 3. Firejail is incompatible with the distrobox <code>--home</code> workflow ===
Firejail fails with <code>no suitable ...bin/claude executable found</code> inside distrobox when using a custom <code>--home</code> directory. The cause is firejail's whitelist mode blocking Node.js runtime dependencies that Claude Code requires.

'''Fix:''' Drop firejail. Distrobox with <code>--home</code> provides sufficient filesystem isolation for the burner workflow. The <code>claude_isolation.sh</code> script above is the current recommended launcher.
=== 4. Backup script may produce duplicate image files ===
Manual <code>podman save</code> and a skip-duplicates backup script may use different filename conventions (e.g. <code>imagename_latest.tar.gz</code> vs <code>localhost_imagename_latest.tar.gz</code>), resulting in duplicate files on the backup destination. Check for and remove duplicates after any manual save.
=== 5. <code>@reboot</code> cron needs <code>sleep 30</code> ===
Network filesystem mounts (e.g. GVFS SMB) are not ready immediately on boot. Without a sleep delay, backup scripts triggered via <code>@reboot</code> cron will fail silently with a "destination not mounted" error. Add <code>sleep 30</code> before the backup command.
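As an illustration, a crontab entry following this advice might look like the line below. The backup script path is a placeholder, not a path from this guide:

```bash
# Example crontab entry (edit with `crontab -e` on the HOST).
# Waits 30 s after boot so network mounts are up before the backup runs.
# /home/USER/backup-images.sh is a placeholder for your own backup script.
@reboot sleep 30 && /home/USER/backup-images.sh >> /home/USER/backup.log 2>&1
```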