# OpenCode
OpenCode is an open-source terminal AI interface that supports multiple AI models (cloud and local), includes session management, and provides timeline-based conversation history.
Key advantage: Model flexibility - use Grok (free), Claude Pro (subscription), local Ollama models, or any API-compatible model.
## Installation

```bash
# Install via npm
npm install -g opencode

# Or with sudo if needed
sudo npm install -g opencode
```
First launch:
```bash
opencode
```
This creates `~/.config/opencode/opencode.json` for configuration.
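To confirm the file was created, you can print it (the full schema is covered in the Configuration section below):
```bash
cat ~/.config/opencode/opencode.json
```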
## Free Grok Access

OpenCode has a partnership with Grok AI that provides free access to the Grok Fast model:
```bash
opencode
# Automatically launches with Grok Fast
```
No API key needed for initial use.
## Using Claude Pro Subscription

Log in with an existing Claude Pro account:
```bash
opencode auth login
```
- Choose "Anthropic (Claude)"
- A browser opens for authentication
- Paste the code when prompted
- You're now logged in
Switch to Claude model:
```bash
opencode
# Press / for the command menu
/model
# Select Claude Sonnet 4.5
```
Advantage: Pay $20/month for Claude Pro and use it in both the browser and the terminal (no separate API costs).
## Local Models with Ollama
Configure Ollama:
Edit `~/.config/opencode/opencode.json`:
```json
{
  "ollama": {
    "model": "llama3.2",
    "baseUrl": "http://localhost:11434"
  }
}
```
Install model:
```bash
ollama pull llama3.2
```
Use in OpenCode:
```bash
opencode
/model
# Select llama3.2
```
Available local models:
- `llama3.2` - Meta's latest
- `codellama` - Specialized for code
- `mistral` - Efficient, fast
- `phi` - Microsoft's small model
- Any Ollama-compatible model
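To try a few of these side by side, pull each one and confirm what's installed (`ollama pull` and `ollama list` are stock Ollama CLI commands):
```bash
# Pull additional models to compare
ollama pull codellama
ollama pull mistral

# Confirm what's installed locally
ollama list
```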
## Features

### Session Management
List all sessions:
```bash
opencode
/sessions
```
Resume previous session:
```bash
opencode -r
# or
opencode --resume
# Choose from the list
```
New session:
```bash
opencode
# Starts a fresh conversation
```
Advantage: All conversations are stored locally, searchable, and resumable.
### Timeline (Time Travel)
View conversation timeline:
```bash
# Inside OpenCode
/timeline
```
Shows:
- All conversation turns
- Timestamps
- Token usage per turn
- File operations
Restore to earlier point:
- Navigate timeline
- Select turn to restore to
- Conversation reverts (creates new branch)
Use cases:
- Undo bad decisions
- Branch from earlier point
- Review what AI did when
### Session Sharing
Share current session:
```bash
# Inside OpenCode
/share
```
Copies a URL to the clipboard; anyone with the link can view the session (read-only).
Use cases:
- Share debugging sessions
- Collaborate on prompts
- Document AI workflows
### Headless Server
Start server mode:
```bash
opencode server start --port 3000
```
Attach to server:
```bash
opencode attach
```
Use case: Long-running AI processes, background tasks, remote access.
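To keep the server running after you log out or disconnect, one option is to launch it inside tmux; a minimal sketch assuming the flags shown above:
```bash
# Start the server in a detached tmux session
tmux new-session -d -s opencode-server 'opencode server start --port 3000'

# From any other terminal (or SSH session), attach the client
opencode attach

# Inspect or stop the server later
tmux attach -t opencode-server
```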
### Multi-Model Workflows
Switch models mid-conversation:
```bash
/model
# Choose a different model
```
Compare outputs:
```markdown
# Ask question with Grok
What's the best NAS for home lab?

# Switch to Claude
/model → Claude Sonnet 4.5

# Ask same question
What's the best NAS for home lab?

# Switch to local Llama
/model → llama3.2

# Compare all three responses
```
## Configuration

Config file: `~/.config/opencode/opencode.json`

```json
{
  "ollama": {
    "model": "llama3.2",
    "baseUrl": "http://localhost:11434"
  },
  "anthropic": {
    "apiKey": "sk-ant-...",
    "model": "claude-sonnet-4.5"
  },
  "grok": {
    "enabled": true
  },
  "defaults": {
    "model": "grok-fast",
    "dangerousMode": false
  }
}
```

The `apiKey` field is optional if you've logged in with `opencode auth login`.
## Command Reference

| Command | Action |
|---|---|
| `/model` | Switch AI model |
| `/sessions` | List all sessions |
| `/timeline` | View conversation history |
| `/share` | Share session (read-only link) |
| `/export` | Export session as JSON |
| `/help` | Show all commands |
| `Ctrl-C` | Interrupt AI response |
## CLI Options

```bash
# Resume previous session
opencode -r
opencode --resume

# Start headless server
opencode server start --port 3000

# Attach to server
opencode attach

# Export session
opencode export <session-id>

# Help
opencode help
```
## Use Cases

### Budget-Conscious AI Work

- Free tier: Use Grok Fast for general queries
- Research tasks: Switch to Claude Pro when needed
- Code generation: Use local `codellama` (no API costs)

Result: Optimize costs by matching each task to model pricing.
### Privacy-Sensitive Projects
Local-only workflow:
```bash
# Configure Ollama, then:
opencode
/model
# Select llama3.2

# All processing stays on your machine
# No data sent to cloud APIs
```
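To double-check that the local backend is actually answering before you rely on it offline, query Ollama on localhost; `/api/tags` is Ollama's standard endpoint for listing installed models:
```bash
# Should return JSON listing your local models
curl http://localhost:11434/api/tags
```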
### Model Comparison
Same prompt, multiple models:
```markdown
1. Ask Grok
2. /model → Claude
3. Ask same question
4. /model → llama3.2
5. Ask same question
6. Compare quality/speed/cost
```
### Long-Running Projects
Session persistence:
```bash
# Day 1
opencode
# Work on project...

# Day 2
opencode -r
# Resume exactly where you left off
```
Timeline-based recovery:
```bash
# Made mistake 10 turns ago
/timeline
# Restore to that point
# Continue from there
```
## Comparison: OpenCode vs Claude Code
| Feature | OpenCode | Claude Code |
|---|---|---|
| Models | Grok, Claude, Ollama, any API | Claude only |
| Cost | Free (Grok/Ollama) or Claude Pro | Claude Pro or API |
| Agents | No | Yes (powerful) |
| Local models | Yes (Ollama) | No |
| Session management | Yes (with timeline) | Yes |
| Open source | Yes | No |
| Timeline/restore | Yes | No |
| Session sharing | Yes (read-only links) | No |
Recommendation:
- Use Claude Code if you need agents and complex workflows
- Use OpenCode if you want model flexibility and open-source control
- Use both - Claude Code for production, OpenCode for experimentation
## Tips
Model selection strategy:
- Grok Free: Quick questions, brainstorming
- Claude Pro: Complex tasks, writing, analysis
- Local Ollama: Privacy-sensitive, offline work, cost optimization
Session hygiene:
- Start new session per project
- Use timeline to prune bad branches
- Export important sessions as JSON backups (see the sketch below)
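For the export step, a minimal backup sketch, assuming `opencode export` writes JSON to stdout (replace `<session-id>` with an ID from `/sessions`):
```bash
opencode export <session-id> > "opencode-session-$(date +%F).json"
```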
Cost optimization:
- Default to Grok Free
- Switch to Claude only when needed
- Use local models for iteration/testing
## Links
OpenCode GitHub Repository
- URL: https://github.com/opencode-ai/opencode
- Summary: Open-source terminal AI interface with multi-model support
- Related: Claude Code with Agents, Ollama, Grok AI
NetworkChuck Video: You've Been Using AI the Hard Way
- URL: https://www.youtube.com/watch?v=MsQACpcuTkU
- Summary: Covers OpenCode alongside Claude Code and Gemini CLI for terminal-based AI workflows
- Related: Claude Code with Agents, Gemini CLI