

Path: Computer Tech/AI/ML/OpenCode.md | Updated: 2/3/2026

OpenCode

OpenCode is an open-source terminal AI interface that supports multiple AI models (cloud and local), includes session management, and provides timeline-based conversation history.

Key advantage: Model flexibility - use Grok (free), Claude Pro (subscription), local Ollama models, or any API-compatible model.

Installation

bash
# Install via npm
npm install -g opencode

# Or with sudo if needed
sudo npm install -g opencode

First launch:

bash
opencode

This creates ~/.config/opencode/opencode.json for configuration.
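If you want to confirm where the config landed (or seed an empty one before first launch), a quick shell check works. The path is the one mentioned above; seeding an empty `{}` is purely illustrative, not something OpenCode requires:

```shell
# Path created by OpenCode on first launch (see above)
CONFIG="$HOME/.config/opencode/opencode.json"
mkdir -p "$(dirname "$CONFIG")"
# Seed an empty config if one does not exist yet (illustrative only)
[ -f "$CONFIG" ] || printf '%s\n' '{}' > "$CONFIG"
# Pretty-print to confirm it is valid JSON
python3 -m json.tool "$CONFIG"
```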

Free Grok Access

OpenCode has a partnership with Grok AI that provides free access to the Grok Fast model:

bash
opencode
# Automatically launches with Grok Fast

No API key needed for initial use.

Using Claude Pro Subscription

Login with existing Claude Pro account:

bash
opencode auth login

Then:

  1. Choose "Anthropic (Claude)"
  2. A browser opens for authentication
  3. Paste the code when prompted
  4. You are now logged in

Switch to Claude model:

bash
opencode
# Press / for command menu
/model
# Select Claude Sonnet 4.5

Advantage: Pay $20/month for Claude Pro, use it in browser AND terminal (no separate API costs).

Local Models with Ollama

Configure Ollama:

Edit ~/.config/opencode/opencode.json:

json
{
  "ollama": {
    "model": "llama3.2",
    "baseUrl": "http://localhost:11434"
  }
}
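Before pointing OpenCode at Ollama, it is worth checking that the server behind baseUrl is actually up. /api/tags is Ollama's endpoint for listing installed models; a minimal probe:

```shell
# Probe the Ollama endpoint from the config above.
# /api/tags returns the locally installed models as JSON.
if curl -sf http://localhost:11434/api/tags > /dev/null 2>&1; then
  echo "Ollama is reachable"
else
  echo "Ollama is not running (try: ollama serve)"
fi
```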

Install model:

bash
ollama pull llama3.2

Use in OpenCode:

bash
opencode
/model
# Select llama3.2

Available local models:

  • llama3.2 - Meta's latest
  • codellama - Specialized for code
  • mistral - Efficient, fast
  • phi - Microsoft's small model
  • Any Ollama-compatible model

Features

Session Management

List all sessions:

bash
opencode
/sessions

Resume previous session:

bash
opencode -r
# or
opencode --resume
# Choose from list

New session:

bash
opencode
# Starts fresh conversation

Advantage: All conversations stored locally, searchable, resumable.

Timeline (Time Travel)

View conversation timeline:

bash
# Inside OpenCode
/timeline

Shows:

  • All conversation turns
  • Timestamps
  • Token usage per turn
  • File operations

Restore to earlier point:

  1. Navigate timeline
  2. Select turn to restore to
  3. Conversation reverts (creates new branch)

Use cases:

  • Undo bad decisions
  • Branch from earlier point
  • Review what AI did when

Session Sharing

Share current session:

bash
# Inside OpenCode
/share

Copies a URL to the clipboard; anyone with the link can view the session (read-only).

Use cases:

  • Share debugging sessions
  • Collaborate on prompts
  • Document AI workflows

Headless Server

Start server mode:

bash
opencode server start --port 3000

Attach to server:

bash
opencode attach

Use case: Long-running AI processes, background tasks, remote access.

Multi-Model Workflows

Switch models mid-conversation:

bash
/model
# Choose different model

Compare outputs:

markdown
# Ask question with Grok
What's the best NAS for home lab?

# Switch to Claude
/model → Claude Sonnet 4.5

# Ask same question
What's the best NAS for home lab?

# Switch to local Llama
/model → llama3.2

# Compare all three responses

Configuration

Config file: ~/.config/opencode/opencode.json

json
{
  "ollama": {
    "model": "llama3.2",
    "baseUrl": "http://localhost:11434"
  },
  "anthropic": {
    "apiKey": "sk-ant-...",
    "model": "claude-sonnet-4.5"
  },
  "grok": {
    "enabled": true
  },
  "defaults": {
    "model": "grok-fast",
    "dangerousMode": false
  }
}

Note: JSON does not allow inline comments, so keep notes out of the file itself. The apiKey entry is optional and can be omitted entirely if you authenticate with opencode auth login.
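One way to bootstrap this file from the shell is a heredoc plus a JSON validity check. The values below are the illustrative ones from the example above, so substitute your own before launching (note this overwrites any existing config):

```shell
# Write a minimal config (values are illustrative - edit to taste)
CONFIG="$HOME/.config/opencode/opencode.json"
mkdir -p "$(dirname "$CONFIG")"
cat > "$CONFIG" <<'EOF'
{
  "ollama": { "model": "llama3.2", "baseUrl": "http://localhost:11434" },
  "defaults": { "model": "grok-fast", "dangerousMode": false }
}
EOF
# Validate before launching OpenCode; json.tool exits non-zero on bad JSON
python3 -m json.tool "$CONFIG" > /dev/null && echo "config OK"
```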

Command Reference

| Command | Action |
| --- | --- |
| /model | Switch AI model |
| /sessions | List all sessions |
| /timeline | View conversation history |
| /share | Share session (read-only link) |
| /export | Export session as JSON |
| /help | Show all commands |
| Ctrl-C | Interrupt AI response |

CLI Options

bash
# Resume previous session
opencode -r
opencode --resume

# Start headless server
opencode server start --port 3000

# Attach to server
opencode attach

# Export session
opencode export <session-id>

# Help
opencode help

Use Cases

Budget-Conscious AI Work

Free tier: Use Grok Fast for general queries

Research tasks: Switch to Claude Pro when needed

Code generation: Use local Codellama (no API costs)

Result: Optimize costs by matching task to model pricing.

Privacy-Sensitive Projects

Local-only workflow:

bash
# Configure Ollama
opencode
/model → llama3.2

# All processing stays on your machine
# No data sent to cloud APIs
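A quick way to audit that setup is to check the config for cloud-provider entries. The key names below match the example config earlier in this article and are assumptions about your file's layout:

```shell
# List any cloud provider keys present in the OpenCode config
# (key names "anthropic" and "grok" follow the example config above)
python3 - <<'EOF'
import json, os
path = os.path.expanduser("~/.config/opencode/opencode.json")
cfg = json.load(open(path)) if os.path.exists(path) else {}
cloud = [k for k in ("anthropic", "grok") if k in cfg]
print("cloud providers configured:", ", ".join(cloud) if cloud else "none")
EOF
```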

Model Comparison

Same prompt, multiple models:

markdown
1. Ask Grok
  2. /model → Claude
  3. Ask same question
  4. /model → llama3.2
5. Ask same question
6. Compare quality/speed/cost

Long-Running Projects

Session persistence:

bash
# Day 1
opencode
# Work on project...

# Day 2
opencode -r
# Resume exactly where you left off

Timeline-based recovery:

bash
# Made mistake 10 turns ago
/timeline
# Restore to that point
# Continue from there

Comparison: OpenCode vs Claude Code

| Feature | OpenCode | Claude Code |
| --- | --- | --- |
| Models | Grok, Claude, Ollama, any API | Claude only |
| Cost | Free (Grok/Ollama) or Claude Pro | Claude Pro or API |
| Agents | No | Yes (powerful) |
| Local models | Yes (Ollama) | No |
| Session management | Yes (with timeline) | Yes |
| Open source | Yes | No |
| Timeline/restore | Yes | No |
| Session sharing | Yes (read-only links) | No |

Recommendation:

  • Use Claude Code if you need agents and complex workflows
  • Use OpenCode if you want model flexibility and open-source control
  • Use both - Claude Code for production, OpenCode for experimentation

Tips

Model selection strategy:

  • Grok Free: Quick questions, brainstorming
  • Claude Pro: Complex tasks, writing, analysis
  • Local Ollama: Privacy-sensitive, offline work, cost optimization

Session hygiene:

  • Start new session per project
  • Use timeline to prune bad branches
  • Export important sessions as JSON backup
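The export tip can be scripted with dated filenames. This sketch assumes you have already produced a JSON export (via the opencode export command from the CLI reference); the /tmp/session.json stand-in is only there so the copy logic runs on its own:

```shell
# Dated backups of exported sessions (paths and filenames are illustrative)
BACKUP_DIR="$HOME/opencode-backups"
mkdir -p "$BACKUP_DIR"
# Stand-in for a real `opencode export <session-id>` output file
printf '%s\n' '{"session": "placeholder"}' > /tmp/session.json
cp /tmp/session.json "$BACKUP_DIR/session-$(date +%Y%m%d).json"
ls "$BACKUP_DIR"
```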

Cost optimization:

  • Default to Grok Free
  • Switch to Claude only when needed
  • Use local models for iteration/testing

Links

OpenCode GitHub Repository

NetworkChuck Video: You've Been Using AI the Hard Way