Opencode isolation and burner workflow 260216
OpenCode "Burner" Workflow & Hardware Reality Check
This guide establishes a Distrobox-based isolation workflow for running OpenCode. This prevents AI agents from accidentally modifying your host system files and allows you to "nuke" the environment if it gets corrupted or compromised.
Part 1: The "Burner" Philosophy
The Goal: Run OpenCode in a disposable container (opencode-burner) that shares only specific project folders, not your entire home directory.
Simpler than Claude Code: This workflow is significantly more streamlined than typical Claude Code isolation setups.
No Re-Authentication Loop: Unlike Claude, which often forces you to re-login via browser or copy keys every session, this container persists your "Burner Identity" (Git credentials & Configs).
Zero Boot Time: Distrobox shares your host kernel. There is no VM overhead; it launches instantly.
One-Step Launch: You don't need to manually start a Docker daemon, attach a shell, and then run a binary. The launcher script handles the context switch automatically.
The Loop:
Spin up a fresh Arch/Ubuntu container.
Inject the necessary tools (NodeJS, Git, OpenCode CLI, Firejail).
Mount only the target project.
Burn it (delete the container) when the project is done or the environment gets messy.
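The whole loop condenses to four commands (container and image names follow this guide; adjust to taste):

```shell
# 1. Spin up a fresh container (Arch shown; swap in ubuntu:latest if preferred).
# Caveat (worth knowing): by default distrobox shares your entire host $HOME;
# pass --home ~/burner-home at create time to give the burner its own home dir.
distrobox create --name opencode-burner --image archlinux:latest

# 2. Enter it and inject tooling (see Part 4 for the full package list)
distrobox enter opencode-burner

# 3. Work happens inside; exit back to the host when done

# 4. Burn it -- the host is untouched
distrobox stop opencode-burner
distrobox rm opencode-burner
```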
Part 2: The Hardware Reality (VRAM is the Limit)
Running local coding agents requires massive Context Windows. Codebases are large. The standard 4k or 8k context is useless for an agent reading multiple files.
The Golden Rule: You need at least 32k Context for a usable agent.
GPU Tier List (VRAM vs. Context)
| GPU Class | VRAM | Price (PHP) | Practical Limit | Verdict |
|---|---|---|---|---|
| RX 7600 | 8 GB | ~₱18,000 | 8k - 16k (32k unstable) | Entry Level. 8 GB is the hard floor. You can run a 7B model, but 32k context "sometimes works" and often crashes (OOM) because the OS + model leave almost no room for the cache. |
| RX 9070 XT | 16 GB | ~₱50,000 | 64k (unstable) | The Danger Zone. 16 GB is tight. You can force a 64k context, but it requires a custom Modelfile and is unstable: the model weights (~5 GB) + OS (~4 GB) leave little room for the KV cache. Expect OOM crashes. |
| RX 7900 XT | 20 GB | ~₱80,000 | 64k - 80k | The Sweet Spot. That extra 4 GB of VRAM is crucial. It creates enough headroom to run a quantized 7B or 14B model with a healthy 64k context window comfortably. |
| Workstation (e.g., W7800/R9700) | 32 GB | High | 128k+ | The AI King. Required if you want to run full 128k context locally. 32 GB of VRAM allows an uncompressed cache or larger models (e.g., 32B) with decent context. |
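The "room for the KV cache" verdicts above can be sanity-checked with a back-of-envelope formula: cache bytes = 2 (K and V) × layers × KV heads × head dim × bytes per value × context tokens. A sketch using illustrative figures for a Qwen2.5-7B-class model (28 layers, 4 KV heads via grouped-query attention, head dim 128, FP16 cache); these numbers are assumptions, not vendor specs:

```shell
# KV cache size at 64k context with the illustrative figures above
awk 'BEGIN { layers=28; kv_heads=4; head_dim=128; bytes=2; ctx=65536;
             printf "%.1f GB\n", 2*layers*kv_heads*head_dim*bytes*ctx/1e9 }'
# -> 3.8 GB
```

Roughly 3.8 GB of cache on top of ~5 GB of weights plus OS overhead is exactly why the 16 GB cards sit in the danger zone at 64k.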
Recommended High-Context Models
Select the model that fits your GPU tier.
7-8 Billion Parameter Models (Best for 12GB - 16GB VRAM)
Qwen2.5-Coder-7B-Instruct: The current gold standard. Supports up to 128k natively. Excellent for bug fixing and large codebase understanding on consumer hardware.
YaRN-Mistral-7b-64k: Specifically configured for 64k context using the YaRN extension method. Benchmarks show stable perplexity at long lengths.
OpenCodeReasoning-Nemotron-7B: Supports 64k context. Excels specifically at reasoning tasks for code generation.
14-16 Billion Parameter Models (Best for 20GB - 24GB VRAM)
Qwen2.5-Coder-14B-Instruct: The heavy hitter. Supports 128k natively. Provides significantly more capacity for complex, multi-file project analysis and agentic workflows than the 7B version.
DeepCoder-14B-Preview: Supports 64k context. Uses reinforcement learning to achieve performance comparable to much larger proprietary models.
The "DeepSeek V3" Alternative
Before spending ₱80k on hardware, consider the DeepSeek API.
Context: 64k (Output) / 128k (Input) natively.
Intelligence: DeepSeek V3 (671B MoE) is vastly smarter than any local Qwen 7B/14B.
Cost: ~₱7.80 ($0.14) per 1 Million tokens.
Strategy: Use Local 7B for small, private edits. Use DeepSeek V3 for "Build" agent tasks requiring long context.
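To put that price next to the ₱80k hardware option, a rough monthly-cost sketch (the usage figures are assumptions; real agent workloads skew toward input tokens, and cached-input pricing differs):

```shell
# Assumed: 2M tokens/day, 22 working days/month, $0.14 per 1M tokens,
# ~PHP 55.7 per USD (the doc's PHP 7.80 / $0.14 rate)
awk 'BEGIN { printf "~PHP %d/month\n", 2*22*0.14*55.7 }'
# -> ~PHP 343/month
```

Even at that pace, it would take years of API spend to match the price of a single 20 GB GPU.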
Part 3: GitHub Burner Identity
Agents need to pull/push code. Do not give them your main GitHub credentials (SSH keys) that have access to your private company/client repos.
The Strategy:
Start by testing with your main account (carefully) to verify the tool works.
IMMEDIATELY switch to a Burner GitHub Account for daily agentic work.
Generating a Token (PAT): You cannot use a simple password for Git over HTTPS. You need a Personal Access Token.
Log in to your Burner GitHub Account.
Click your Profile Picture (top right) → Settings.
Scroll to the very bottom left sidebar → Developer settings.
Click Personal access tokens → Tokens (classic).
Click Generate new token (classic).
Note: You may be asked for 2FA or email authentication.
Scopes: Select repo (Full control of private repositories) and workflow.
Expiration: Set to 90 days. (This is fine; we want these to expire so we don't leave loose ends).
Copy the token immediately. You will not see it again.
Inside the Burner Container: When OpenCode asks for Git authentication, use your Burner Username and paste this Token as the password.
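Inside the container, the burner identity boils down to three git settings plus the stored token (all values below are placeholders):

```shell
# [CONTAINER TERMINAL] -- identity for commits made by the agent
git config --global user.name "burner-username"
git config --global user.email "burner-username@users.noreply.github.com"

# Store HTTPS credentials in plaintext at ~/.git-credentials.
# Plaintext is normally a bad idea, but acceptable here: the account is
# disposable and the token expires in 90 days anyway.
git config --global credential.helper store

# Verify what got written
git config --global user.name
# -> burner-username
```

On the first `git push`, enter the burner username and paste the PAT as the password; the store helper remembers it from then on.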
Part 4: Setup Instructions (Manual / "Claude Style")
Use this method if you prefer to enter the terminal first and run commands manually.
1. Install Distrobox
[HOST TERMINAL]
curl -s https://raw.githubusercontent.com/89luca89/distrobox/main/install | sudo sh
2. Create the Burner Container
[HOST TERMINAL]
distrobox create --name opencode-burner --image archlinux:latest
3. Enter the Container
[HOST TERMINAL] → [CONTAINER TERMINAL] Run this to step inside. Your prompt will change.
distrobox enter opencode-burner
4. Install Dependencies & Security
[CONTAINER TERMINAL] Now that you are inside, install Node, Git, and Firejail.
Update and install tools
sudo pacman -Syu nodejs npm git base-devel python firejail
5. Install OpenCode
[CONTAINER TERMINAL]
npm install -g opencode-ai
(Note: the CLI is published on npm as opencode-ai; the command it installs is still opencode.)
6. Configure OpenCode (Manual Edit)
[CONTAINER TERMINAL]
We need to manually edit the config file. Since we are in a minimal terminal, we use nano.
Open the file:
mkdir -p ~/.config/opencode
nano ~/.config/opencode/opencode.json
Nano Shortcuts to Clear File: If the file already has content, use this sequence to clear it quickly:
Alt + \ : Go to the very first line (Top).
Alt + A : 'Mark' the text (Start selection).
Alt + / : Go to the very last line (End).
Ctrl + K : Cut/Remove the selected text.
Paste the Configuration:
Copy the JSON below and paste it into the terminal (usually Ctrl+Shift+V or Right Click).
{
  "provider": {
    "ollama": {
      "npm": "@ai-sdk/openai-compatible",
      "name": "Ollama Local",
      "options": {
        "baseURL": "http://127.0.0.1:11434/v1"
      },
      "models": {
        "qwen2.5-coder:7b": { "name": "Qwen 2.5 Coder (7B)" }
      }
    },
    "deepseek": {
      "npm": "@ai-sdk/openai-compatible",
      "name": "DeepSeek API",
      "options": {
        "baseURL": "https://api.deepseek.com/v1",
        "apiKey": "sk-YOUR_API_KEY"
      },
      "models": {
        "deepseek-chat": { "name": "DeepSeek V3 (Fast & Smart)" },
        "deepseek-reasoner": { "name": "DeepSeek R1 (Thinking Model)" }
      }
    }
  }
}
Save and Exit:
Ctrl + O (Write Out/Save) -> Press Enter to confirm.
Ctrl + X (Exit).
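A hand-pasted JSON file is the most common failure point here: one stray comma and OpenCode will reject the config. Before launching, a quick validity check (python was installed in step 4):

```shell
# Prints "config OK" or the exact parse error with its line number
CFG="$HOME/.config/opencode/opencode.json"
python3 - "$CFG" <<'EOF'
import json, sys
try:
    json.load(open(sys.argv[1]))
    print("config OK")
except Exception as e:
    print(f"config INVALID: {e}")
EOF
```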
7. Launch with Firejail
[CONTAINER TERMINAL] Run the application wrapped in Firejail to restrict its access even further within the container.
firejail opencode
Part 5: Automated Launcher (Optional)
If you get tired of typing distrobox enter every time, you can use the launch_burner.sh script (provided separately) from your [HOST TERMINAL]. It handles the context switching and Firejail wrapping automatically.
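If you would rather write the launcher yourself, its core is a single distrobox invocation. A minimal sketch (this is a guess at what such a script does, not the provided launch_burner.sh; it assumes the container name from Part 4):

```shell
# [HOST TERMINAL] Write a hypothetical minimal launcher to ~/launch_burner.sh
cat > "$HOME/launch_burner.sh" <<'EOF'
#!/usr/bin/env bash
set -euo pipefail
NAME="opencode-burner"

# Create the container on first run only
distrobox list | grep -q "$NAME" || \
  distrobox create --name "$NAME" --image archlinux:latest

# Enter the container and run opencode under firejail in one step
distrobox enter "$NAME" -- firejail opencode
EOF
chmod +x "$HOME/launch_burner.sh"
echo "launcher written"
```

`distrobox enter NAME -- CMD` runs a command inside the container directly, which is what collapses the three manual steps into one.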