Opencode isolation and burner workflow 260216
From Game in the Brain Wiki

= Beginner's Guide to OpenCode Isolation and Burner Workflows =
Welcome! If you are using OpenCode (or similar AI coding agents), you are giving an AI the ability to run commands on your computer. While incredibly powerful, this comes with risks. A confused AI—or a malicious hidden instruction (prompt injection) in a downloaded file—could accidentally delete your personal files or mess up your system.


This guide teaches you how to use '''Distrobox''' to create "sandboxes" (isolated containers). By putting the AI in a sandbox, any damage it causes stays locked inside that box, keeping your real computer completely safe.

== Key Concepts (Think of it like a Video Game) ==
* '''Containers (The Sandbox):''' A mini, isolated operating system running inside your real computer.
* '''Golden Image (The Master Save File):''' A perfectly set-up container with all the tools installed. We copy this every time we start a new project so we don't have to install things twice.
* '''Save Points (Checkpoints):''' Just like saving your game before a boss fight, we can "commit" our container's state. If the AI breaks the code later, we can reload the save!
* '''Burner Home:''' A special, restricted folder we give to the AI instead of letting it see your real <code>Documents</code> or <code>Desktop</code> folders.
== 1. Hardware Reality Check ==
 
Before we begin, a note on hardware: AI agents need to "read" your code to understand it. The amount of code a model can consider at once is called its '''Context Window'''. To process a large context window, your graphics card (GPU) needs enough video memory ('''VRAM''').
Here is what you can expect based on your hardware:
{| class="wikitable"
! Your GPU VRAM !! Example Graphics Card !! Context Window (Memory) !! What this means for you
|-
| '''8 GB''' || Radeon RX 7600 || 8k–16k || Good for small scripts, but might crash on large projects.
|-
| '''16 GB''' || Radeon RX 9070 XT || ~32k || The minimum recommended for a smooth AI agent experience.
|-
| '''20 GB''' || Radeon RX 7900 XT || 64k–80k || The "Sweet Spot." Handles multiple large files easily.
|-
| '''32 GB+''' || Mac Studio / Pro GPUs || 128k+ || Can read entire massive codebases at once.
|}
''(Note: If you have a computer with "Unified Memory", such as an Apple Silicon Mac or a Ryzen AI Max+, the AI can borrow system RAM, which allows huge context sizes but generates tokens more slowly.)''
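To see why context size eats VRAM so quickly, here is a back-of-the-envelope estimate you can run yourself. The figures (28 layers, 512 KV dimensions, 2-byte fp16 values) are illustrative assumptions for a 7B-class model with grouped-query attention, not exact numbers for any specific model or card:

```shell
#!/bin/sh
# Rough KV-cache size estimate (illustrative assumptions, not exact):
#   bytes per token = 2 (K and V) * layers * kv_dim * 2 bytes (fp16)
# For an assumed 7B-class model: 2 * 28 * 512 * 2 = 57344 bytes/token.
kv_cache_mib() {
  context="$1"
  bytes_per_token=$((2 * 28 * 512 * 2))
  echo $((bytes_per_token * context / 1024 / 1024))
}

kv_cache_mib 32768   # a 32k context alone needs roughly 1792 MiB
kv_cache_mib 131072  # a 128k context needs roughly 7168 MiB
```

On an 8 GB card, the model weights plus a 32k cache already approach the limit, which is why the table above marks 8 GB as crash-prone on large projects.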


== 2. Setting up the "Golden Image" (One-Time Setup) ==
You only need to do this section once! We are going to build our "Master Save File" that has all the programming tools the AI needs.

=== Step 2.1: Install Distrobox on your Host Computer ===
First, we need the software that makes the sandboxes. Run the command for your computer's operating system:
<syntaxhighlight lang="bash">
# If you use Debian or Ubuntu:
sudo apt install distrobox

# If you use Fedora:
sudo dnf install distrobox

# If you use Arch Linux:
yay -S distrobox
</syntaxhighlight>


=== Step 2.2: Create the Base Sandbox ===
Now we create a brand-new, empty sandbox named <code>oc-base</code>. We also tell it to use a fake home directory (<code>~/sandbox-homes/oc-base</code>) so it can't see your real personal files.
<syntaxhighlight lang="bash">
# 1. Create the folder that will act as the fake home
mkdir -p ~/sandbox-homes/oc-base

# 2. Build the sandbox using Ubuntu as the base system
distrobox create --name oc-base --image ubuntu:24.04 --home ~/sandbox-homes/oc-base

# 3. Step inside the sandbox!
distrobox enter oc-base
</syntaxhighlight>
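A common slip is running <code>distrobox create</code> before the fake home folder exists (see the troubleshooting section). A tiny wrapper can guard against that. This is only a sketch: the <code>echo</code> stands in for the real <code>distrobox create</code> call so you can dry-run it safely:

```shell
#!/bin/sh
# Sketch: ensure the fake home exists before creating the sandbox.
# The echo is a dry-run stand-in for the real distrobox call.
safe_create() {
  name="$1"
  home_dir="$HOME/sandbox-homes/$name"
  mkdir -p "$home_dir" || return 1
  echo "distrobox create --name $name --image ubuntu:24.04 --home $home_dir"
}

safe_create oc-base
```

Once the printed command looks right, remove the <code>echo</code> (or copy the command out) to actually create the sandbox.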


=== Step 2.3: Equip the Sandbox with Tools ===
Now that you are ''inside'' the sandbox, let's install the tools the AI needs to write and test code (like Node.js, Python, and Git).
<syntaxhighlight lang="bash">
# Download and install Node.js, Git, and Python
curl -fsSL https://deb.nodesource.com/setup_lts.x | sudo -E bash -
sudo apt install -y nodejs git python3

# Install the OpenCode AI software globally
npm install -g opencode-ai

# Run OpenCode once to set up your API keys and authenticate
opencode
</syntaxhighlight>
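After the installs finish, it is worth confirming everything landed before letting the AI loose. This little checker is a generic sketch (not part of OpenCode or Distrobox) that reports any commands missing from the sandbox's <code>PATH</code>:

```shell
#!/bin/sh
# Report which of the given commands are missing from PATH.
check_tools() {
  missing=""
  for tool in "$@"; do
    command -v "$tool" >/dev/null 2>&1 || missing="$missing $tool"
  done
  if [ -z "$missing" ]; then
    echo "all tools present"
  else
    echo "missing:$missing"
  fi
}

# Inside the sandbox you would run:
check_tools node git python3 opencode
```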


=== Step 2.4: Create a Helper Script ===
To make starting projects easier later, we'll create a shortcut script. Still inside the sandbox, run these commands to create a file called <code>opencode_isolation.sh</code>:
<syntaxhighlight lang="bash">
# Create the project directory first
mkdir -p ~/project

# Create the script
cat << 'EOF' > ~/project/opencode_isolation.sh
#!/bin/bash
# This script starts OpenCode safely inside our current folder.
WORK_DIR="$(cd "$(dirname "$0")" && pwd)"
cd "$WORK_DIR"

echo "Starting OpenCode..."
echo "Working directory: $WORK_DIR"
echo ""

exec opencode "$@"
EOF

# Make the script executable (runnable)
chmod +x ~/project/opencode_isolation.sh
</syntaxhighlight>


=== Step 2.5: Save the Master Sandbox ===
Now we step out of the sandbox and save it as our "Golden Image" template.
<syntaxhighlight lang="bash">
# 1. Leave the sandbox and return to your real computer
exit

# 2. Turn off the sandbox
distrobox stop oc-base

# 3. Save it as a reusable template (image) named "oc-base:latest"
podman container commit oc-base localhost/oc-base:latest

# 4. Verify it was saved successfully
podman image ls
</syntaxhighlight>


== 3. Protect Your Identity: The GitHub "Burner" Account ==
'''Important:''' Do NOT give the AI access to your personal GitHub account! If the AI gets confused, it might delete your repositories or leak your private code.

# Go to GitHub and create a completely new, separate account (a "burner" account).
# Generate a '''Personal Access Token''' for this new account.
# Set the token to expire in 90 days.
# Give the token only the <code>repo</code> and <code>workflow</code> scopes.
# Use ''this'' account and token when setting up git inside your sandboxes.
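Once you have the burner token, configure git inside the sandbox with the burner identity only. The username and email below are hypothetical placeholders; substitute your own burner account details:

```shell
#!/bin/sh
# Configure git with the *burner* identity (placeholder values).
# Run this inside the sandbox, never on your host.
git config --global user.name  "oc-burner-bot"
git config --global user.email "oc-burner-bot@example.com"

# When git asks for a password over HTTPS, paste the Personal
# Access Token, not your account password. Cache it briefly so
# the agent isn't prompted on every push:
git config --global credential.helper "cache --timeout=3600"
```

Because the sandbox uses its own fake home, this <code>~/.gitconfig</code> lives inside the container and never touches your real git identity on the host.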


== 4. Daily Workflow: How to Use Your Sandboxes ==
Now that your Golden Image is ready, here is how you will actually work day-to-day. We use a date-based naming convention to keep things organized (e.g., <code>oc-260216</code> means an OpenCode project from Feb 16, 2026).
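The date-stamped names are easy to generate automatically. This small helper is a convenience sketch (it assumes GNU <code>date</code>, as found on most Linux hosts) that turns a date into the <code>oc-YYMMDD</code> convention:

```shell
#!/bin/sh
# Build a sandbox name from a date using the oc-YYMMDD convention.
sandbox_name() {
  # Pass a date string, or leave empty for today (GNU date).
  if [ -n "$1" ]; then
    date -d "$1" +oc-%y%m%d
  else
    date +oc-%y%m%d
  fi
}

sandbox_name "2026-02-16"   # prints oc-260216
sandbox_name                # today's sandbox name
```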


=== Scenario A: Starting a Brand New Project ===
We will copy the master save file to create a fresh workspace.
<syntaxhighlight lang="bash">
# On your main computer:
# 1. Make a folder for today's project
mkdir -p ~/sandbox-homes/oc-260216

# 2. Create a new sandbox cloned from your Golden Image
distrobox create --name oc-260216 --image localhost/oc-base:latest --home ~/sandbox-homes/oc-260216

# 3. Enter the new sandbox
distrobox enter oc-260216

# Inside the sandbox:
# 4. Navigate to the project folder and start the AI!
cd ~/project && ./opencode_isolation.sh
</syntaxhighlight>


=== Scenario B: Saving Your Progress (Checkpoint) ===
Before you ask the AI to do a massive, complicated refactor, save your container! If the AI ruins the code, you can easily go back.
<syntaxhighlight lang="bash">
# On your main computer:
distrobox stop oc-260216
podman container commit oc-260216 localhost/oc-260216:latest

# Now you can enter again and safely let the AI work
distrobox enter oc-260216
</syntaxhighlight>
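If you checkpoint often, tagging each commit with a timestamp keeps several restore points instead of overwriting <code>:latest</code> each time. This dry-run sketch (a hypothetical helper, not a distrobox feature) only prints the commands so you can inspect them before running:

```shell
#!/bin/sh
# Print (dry-run) the commands for a timestamped checkpoint.
checkpoint_cmds() {
  name="$1"
  tag="${2:-latest}"
  echo "distrobox stop $name"
  echo "podman container commit $name localhost/$name:$tag"
}

# Example: a checkpoint tagged with the current time of day
checkpoint_cmds oc-260216 "$(date +%H%M)"
```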


=== Scenario C: Oh no! The AI broke everything! (Restoring a Checkpoint) ===
If you saved a checkpoint (like in Scenario B) and want to go back to it. In this example, a later sandbox (<code>oc-260217</code>) is the one that got ruined, and we rebuild it from the <code>oc-260216</code> checkpoint:
<syntaxhighlight lang="bash">
# On your main computer:
# 1. Delete the ruined sandbox
distrobox rm oc-260217 && rm -rf ~/sandbox-homes/oc-260217

# 2. Recreate it from your last good save point!
mkdir -p ~/sandbox-homes/oc-260217
distrobox create --name oc-260217 --image localhost/oc-260216:latest --home ~/sandbox-homes/oc-260217
</syntaxhighlight>


=== The "DeepSeek V3" Alternative ===
== 5. Cleaning Up ==
Before spending ₱80k on hardware, consider the '''DeepSeek API'''.
Over time, these sandboxes will take up hard drive space. Here is how to clean them up when you are done with a project.
<code># See all your saved templates/images
podman image ls             
# Delete a specific saved image
podman image rm localhost/oc-260216:latest
# Delete a working sandbox and its fake home folder
distrobox rm oc-260216 && rm -rf ~/sandbox-homes/oc-260216</code>
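To spot forgotten sandbox homes worth cleaning, you can list any that haven't been touched in a while. This sketch only prints candidates and deletes nothing; it assumes GNU <code>find</code> and the <code>~/sandbox-homes</code> layout used throughout this guide:

```shell
#!/bin/sh
# List sandbox-home folders not modified in the last 30 days.
# Prints candidates only; review before deleting anything.
list_stale_homes() {
  base="${1:-$HOME/sandbox-homes}"
  [ -d "$base" ] || return 0
  find "$base" -mindepth 1 -maxdepth 1 -type d -mtime +30
}

list_stale_homes
```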


== 6. What is actually protected? (Isolation Coverage) ==
For transparency, here is exactly what this setup protects against:

{| class="wikitable"
! System Area !! Is it Protected? !! Explanation
|-
| '''Your Personal Files''' || ✅ Yes || The AI uses the fake <code>--home</code> folder and cannot see your real Documents or Desktop.
|-
| '''System Apps/Packages''' || ✅ Yes || If the AI tries to install a virus via <code>apt-get</code>, it only installs inside the disposable sandbox.
|-
| '''Host Filesystem''' || ⚠️ Partial || By default, the rest of your hard drive is readable. Advanced users can add <code>--additional-flags</code> to lock this down further.
|-
| '''Network/Internet''' || ❌ No || The AI shares your computer's internet connection (it needs this to access the OpenCode API).
|}

== 7. Common Beginner Issues (Troubleshooting) ==
* '''"I get cgroup warnings when starting a container!"'''
** ''Fix:'' Ignore it! This is perfectly normal for rootless sandboxes (containers running without administrator privileges). The container will still work fine.
* '''"My <code>distrobox create</code> command failed silently."'''
** ''Fix:'' Make sure you run the <code>mkdir</code> command to create the fake home folder ''before'' running <code>distrobox create</code>. If the folder doesn't exist, the creation will fail.
* '''"Inside the container, where is my script?"'''
** ''Fix:'' Because we use a fake home, the <code>$HOME</code> variable points to <code>~/sandbox-homes/oc-base</code>. Always use exact, absolute paths if your scripts seem to be getting lost.
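The first two issues above can be caught before you ever enter a sandbox. This preflight sketch checks the usual suspects; the function name and checks are this guide's convention, not a distrobox feature:

```shell
#!/bin/sh
# Preflight checks for the common failure modes listed above.
preflight() {
  name="$1"
  dir="$HOME/sandbox-homes/$name"
  if [ ! -d "$dir" ]; then
    echo "FAIL: fake home missing, run: mkdir -p $dir"
    return 1
  fi
  if ! command -v distrobox >/dev/null 2>&1; then
    echo "FAIL: distrobox is not installed on the host"
    return 1
  fi
  echo "OK: ready to create/enter $name"
}

# Usage (on the host): preflight oc-260216
```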

Latest revision as of 17:24, 23 February 2026