1. Introduction
What is OCCode CLI?
OCCode CLI is a terminal-based AI coding assistant developed by OpenCan.ai. It brings the power of advanced language models directly to your command line with multi-provider support, real-time cost tracking, checkpoint-based undo, and over 60 interactive commands.
| Feature | Description |
|---|---|
| Multi-Provider AI | 11 providers, 40+ models including Claude, GPT, Gemini, DeepSeek, Ollama & more |
| Checkpoint System | Save and restore file states for safe undo/rollback of any change |
| Cost Tracking | Real-time token usage and cost display across all providers |
| Session Persistence | Resume conversations without losing context |
| Convergence Engine | Multi-model ensemble with Merge, Vote, Debate, and Review strategies |
| Git Integration | AI-generated commit messages, enhanced diffs, auto-stage |
System Requirements
| Requirement | Details |
|---|---|
| Node.js | 18.0 or higher (for npm install method) |
| OS | Windows 10+, macOS 12+, Linux (x64/arm64) |
| Disk Space | ~50MB (binary) or ~5MB (npm) |
| Network | Required for cloud AI providers; optional for Ollama/local |
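If you plan to install via npm, first confirm your Node.js version meets the 18.0 minimum:
node --version # should print v18.0.0 or newer for the npm install method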
2. Installation
Download (Account Required)
Download OCCode from the Install Guide. An opencan.ai account is required, and a valid license key is needed to activate the application.
| Platform | File | Size | SHA256 Checksum |
|---|---|---|---|
| Windows x64 | occode-windows-x64.zip | 47 MB | 7f4840e8355a2ec64adfe0e667f1486acf19c5313fbb49dcc3c750636307b85a |
| macOS Intel | occode-macos-x64.zip | 51 MB | b62c578b38e771961b53b75f3851edb816ce0339155251af7bf9cf0847640c24 |
| macOS Apple Silicon | occode-macos-arm64.zip | 51 MB | 53befd5d8d381e24d1bb85072e6332e565350973595cea4210bb7fd227a0e3b5 |
| Linux x64 | occode-linux-x64.zip | 50 MB | fe27a7e04d1d6aa8511de2e23436f6f2891ad01de8d8cffb84cead12f790b965 |
- Download the zip file for your platform from the Install Guide
- Obtain your license key from your opencan.ai dashboard
- Verify the SHA256 checksum to confirm the download was not corrupted
- Extract the zip file and place the binary in your PATH
- Activate with your license key
# Verify checksum (Linux)
sha256sum occode-linux-x64.zip
# Extract (Linux/macOS)
unzip occode-linux-x64.zip # or occode-macos-*.zip
sudo mv occode /usr/local/bin/
# Windows: right-click zip → Extract All, then add the folder to PATH
# Activate with your license key
occode activate --key YOUR-LICENSE-KEY
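To check the archive against the table above in one step, you can feed the expected hash to sha256sum -c (Linux x64 shown; substitute your platform's checksum):
echo "fe27a7e04d1d6aa8511de2e23436f6f2891ad01de8d8cffb84cead12f790b965  occode-linux-x64.zip" | sha256sum -c -
# prints "occode-linux-x64.zip: OK" when the hash matches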
Verify Installation
occode --version
occode --help
3. Quick Start
Step 1: Configure Your API Key
# Interactive configuration wizard
occode config
# Or set directly
occode config --set-key --provider anthropic
Step 2: Start an Interactive Session
occode
This launches the REPL (Read-Eval-Print Loop) where you can chat with the AI and give coding instructions.
Step 3: Run a One-Shot Task
# Execute a task and exit
occode run "Add unit tests for UserService"
# Autonomous mode (no approval prompts)
occode run "Fix all TypeScript errors" --yes
# With specific files in context
occode run "Refactor this component" -c src/App.tsx
Step 4: Understanding the Interface
# The REPL shows a timestamp prompt:
2026-02-10 14:30:12 EST your message here
# Use slash commands for quick actions:
/help # Show all commands
/cost # Show token costs
/model # Show current model
/status # Show session info
4. Core Commands
occode / occode chat — Interactive Mode
occode [chat]
-m, --model <model> # Model to use (e.g. claude-sonnet-4-20250514)
-p, --provider <provider> # AI provider (anthropic, openai, etc.)
-c, --context <files...> # Add files to context
--session <id> # Resume previous session
occode run <task> — Single Task Execution
occode run <task>
-y, --yes # Auto-approve all actions (autonomous)
-s, --supervised # Approve all file changes
-m, --model <model> # Model to use
-c, --context <files...> # Add files to context
--dry-run # Preview without executing
--max-turns <n> # Maximum agent turns (default: 50)
--timeout <seconds> # Timeout limit
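These flags compose; for example (task text and paths are illustrative):
# supervised run with scoped context, capped at 20 turns and a 5-minute timeout
occode run "Migrate src/api to async/await" -s -c src/api/*.ts --max-turns 20 --timeout 300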
occode watch — Watch Mode
occode watch
-p, --pattern <glob> # File pattern to watch
-i, --ignore <patterns> # Patterns to ignore
-t, --task <task> # Task to run on changes
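A typical setup pairs a watch pattern with a task to run on each change (values illustrative):
# re-run a task whenever TypeScript sources change, ignoring tests
occode watch -p "src/**/*.ts" -i "**/*.test.ts" -t "Fix any new type errors"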
occode config — Configuration
occode config
--set <key=value> # Set config value
--get <key> # Get config value
--list # List all settings
--reset # Reset to defaults
--set-key # Set API key securely (keychain)
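As a quick example, any key from the global config file (section 16) can be read or written this way:
occode config --set temperature=0.2 # key names match ~/.occode/config.json
occode config --get model
occode config --list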
occode commit — AI Git Commit
occode commit
-a, --all # Stage all changes
-p, --push # Push after commit
occode explain — Explain Code
occode explain [file]
-d, --detailed # Detailed explanation
--architecture # Explain project structure
occode review — Code Review
occode review [path]
--security # Focus on security
--performance # Focus on performance
--diff # Review staged changes
occode search — Semantic Search
occode search <query>
-n, --limit <n> # Max results
-t, --type <type> # Symbol type filter
occode generate — Generate Code/Docs
occode generate <type>
# Types: readme, tests, docs, types
occode checkpoint — Manage Checkpoints
occode checkpoint list # List all checkpoints
occode checkpoint create <msg> # Create named checkpoint
occode checkpoint restore <id> # Restore to checkpoint
occode checkpoint delete <id> # Delete checkpoint
occode undo — Undo Last Change
occode undo
occode history — Session History
occode history
-n, --limit <n> # Number of sessions
--clear # Clear all history
--export <file> # Export to file
--import <file> # Import from file
Other Commands
occode init # Initialize in current directory
occode status # Show current status
occode update # Check for updates
5. Interactive Mode (REPL)
While in chat mode, use these slash commands:
Provider & Model Commands
/provider [name] # Show current or switch provider
/model [name] # Show current or change model
/api_key [key] # Show status or set API key
/api_url [url] # Show or change custom API endpoint
/api_url reset # Reset to default endpoint
Model Profile Commands (12)
/profile # Show active profile and list all
/profile <name> # Activate a profile
/profile create <name> # Create new profile interactively
/profile template <name> # Create from built-in template
/profile templates # List available templates
/profile edit <name> # Edit existing profile
/profile delete <name> # Delete a profile
/profile off # Deactivate current profile
/profile default <name> # Set default profile (auto-activates on startup)
/profile prefix <name> <prefix> # Set message prefix trigger
/profile export # Export profiles to JSON
/profile import <file> # Import profiles from JSON
Convergence Commands (18)
/converge # Show convergence status
/converge on | enable # Enable convergence mode
/converge off | disable # Disable convergence mode
/converge strategy <name> # Set strategy (merge/vote/debate/review)
/converge add <alias> # Add model by short alias
/converge remove <name> # Remove a participant model
/converge aggregator <name> # Set aggregator model for synthesis
/converge preset <name> # Load preset configuration
/converge presets # List available presets
/converge rounds <n> # Set debate rounds (1-5)
/converge show on|off # Show/hide individual model outputs
/converge models # List configured participant models
/converge catalog # Browse all models in catalog
/converge available # Show models with API keys detected
/converge search <keyword> # Search models
/converge last # Show stats from last run
/converge reset # Reset to defaults
/converge export|import # Export/import configuration
Context Management (6)
/context # Show context overview with token usage
/context add <file> # Add file to context
/context remove <file> # Remove file from context
/pin <file> # Pin file to persistent context
/unpin <file> # Remove pin from file
/exclude <pattern> # Exclude files matching glob
/include <pattern> # Remove exclusion pattern
Session Management (7)
/clear # Clear conversation history
/status # Show session statistics
/cost # Show detailed cost breakdown
/compact # Compact history to save tokens
/export <file> # Export session to JSON
/debug # Toggle debug mode
/mode interactive|auto # Set execution mode
Git Integration (3)
/git # Show git status
/commit # Generate AI-powered commit message
/diff <files> # Enhanced diff with syntax highlighting
/diff --side-by-side <files> # Side-by-side comparison
Checkpoint & Undo (4)
/checkpoint [message] # Create named checkpoint
/checkpoint list # List all checkpoints
/checkpoint restore <id> # Restore to specific checkpoint
/undo # Undo last file change
Daemon & Indexing (5)
/index # Show indexing status
/index rebuild # Force rebuild codebase index
/daemon # Show daemon status
/daemon start|stop|restart # Manage background daemon
Subscription (3)
OCCode includes a 7-day free trial with full access. Create an opencan.ai account and choose a plan to start your trial — you won't be charged until day 8. After the trial, a license key is required. Cancel before day 8 to avoid charges. No refunds after billing.
Important: Your license key must be activated immediately upon purchase. Your free trial ends on day 7 and your subscription will be charged on day 8.
Team+ plans: For admin-generated license keys, the 7-day trial period begins at the time each key is generated. After 30 days from the initial Team+ subscription purchase, newly generated license keys will no longer include a trial period.
/subscription # Show subscription status (trial days remaining or plan info)
/subscription activate <key> # Activate license key
/subscription plans # View available plans
Feature Toggles
/features # Show all features and status
/features enable <feature> # Enable a feature
/features disable <feature> # Disable a feature
/features cost # Show token cost impact
/features reset # Reset to defaults
/tdg # Toggle Test-Driven Generation
/autofix # Toggle Auto-Fix in LSP Loop
Help & Transcripts
/help | /h | /? # Show all commands
/transcripts # Toggle transcript saving
/transcripts status # Show transcript configuration
/transcripts export [path] # Export to unencrypted JSON
/transcripts clear # Clear all entries
Keyboard Shortcuts
| Key | Action |
|---|---|
| Ctrl+C | Stop current operation |
| Ctrl+D | Exit session |
| Up/Down | Navigate command history |
| Tab | Auto-complete file paths and commands |
6. AI Providers
OCCode supports 11 AI providers with 40+ models.
| Provider | Speed | Quality | Cost | Key Required |
|---|---|---|---|---|
| Anthropic (Claude) | Fast | Excellent | $$$ | Yes |
| OpenAI (GPT) | Fast | Excellent | $$$ | Yes |
| DeepSeek | Fast | Excellent | $ | Yes |
| Google (Gemini) | Medium | Good | $$ | Yes |
| Mistral AI | Fast | Good | $$ | Yes |
| Groq | Ultra Fast | Good | Free* | Yes |
| Together AI | Fast | Good | $$ | Yes |
| OpenRouter | Varies | Varies | Varies | Yes |
| Ollama (Local) | Medium | Good | Free | No |
| OpenCan | Varies | Varies | Varies | Yes |
| Custom/Local | Varies | Varies | Varies | Optional |
Anthropic (Claude) Setup
occode config --set provider=anthropic
occode config --set model=claude-sonnet-4-20250514
occode config --set-key --provider anthropic
# Get key from: https://console.anthropic.com
OpenAI (GPT) Setup
occode config --set provider=openai
occode config --set model=gpt-4o
occode config --set-key --provider openai
# Get key from: https://platform.openai.com
DeepSeek Setup
occode config --set provider=deepseek
occode config --set model=deepseek-coder
occode config --set-key --provider deepseek
# Get key from: https://platform.deepseek.com
Ollama (Local - Free, Offline)
# 1. Install Ollama: https://ollama.ai/download
# 2. Pull a model:
ollama pull llama3.3
# 3. Configure OCCode:
occode config --set provider=local
occode config --set apiEndpoint=http://localhost:11434/v1
occode config --set model=llama3.3
OpenRouter (100+ Models via One Key)
occode config --set provider=openrouter
occode config --set model=anthropic/claude-sonnet-4
occode config --set-key --provider openrouter
# Get key from: https://openrouter.ai/keys
Recommended Setups
| User Type | Primary | Fallback |
|---|---|---|
| Professional Dev | Anthropic + Claude Sonnet | OpenRouter + DeepSeek R1 |
| Student/Learner | Groq + Llama 3.3 70B (free) | Ollama + Llama 3.3 (free) |
| Team/Enterprise | OpenCan (centralized billing) | Anthropic or OpenAI |
| Budget-Conscious | DeepSeek + deepseek-coder ($) | Ollama (free) |
7. Model Profiles
Profiles are named model configurations for quick switching.
Built-in Templates
| Template | Provider | Model | Use Case |
|---|---|---|---|
| fast | Anthropic | Claude Haiku | Quick responses, low cost |
| power | Anthropic | Claude Opus | Complex reasoning |
| creative | Anthropic | High-temp Sonnet | Creative tasks |
| gpt | OpenAI | GPT-4 Turbo | Alternative perspective |
| local | Ollama | Local models | Offline, private |
Usage
# Create from template
/profile template fast
# Activate
/profile fast
# Create custom profile
/profile create my-custom
# Set prefix trigger (e.g., "quick: your message" auto-activates fast profile)
/profile prefix fast quick
# Set a default profile
/profile default fast
# Export/import profiles for team sharing
/profile export
/profile import profiles.json
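With the prefix trigger set above, a message typed in the REPL like the following would be routed through the fast profile automatically (the message itself is illustrative):
quick: what does src/utils/format.ts export?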
8. Convergence Engine
The Convergence Engine runs your query through multiple AI models simultaneously and synthesizes the best response.
4 Strategies
| Strategy | How It Works | Best For |
|---|---|---|
| Merge (MoA) | Parallel generation + synthesis | Maximum quality |
| Vote | Democratic selection via model voting | Consensus decisions |
| Debate | Multi-round critique and refinement | Thorough analysis |
| Review | Generate + review + revise workflow | Code quality |
Built-in Presets
/converge preset duo-merge # 2 models with merge
/converge preset trio-merge # 3 models for max quality
/converge preset code-review # Cost-effective review workflow
/converge preset debate # Thorough analysis
/converge preset vote # Democratic consensus
/converge preset local-merge # Zero API cost (local models)
Custom Configuration
/converge on # Enable convergence
/converge strategy merge # Set strategy
/converge add sonnet # Add Claude Sonnet
/converge add gpt4 # Add GPT-4
/converge aggregator opus # Use Opus as synthesizer
/converge rounds 3 # Set 3 debate rounds
9. Context Management
Adding Files to Context
/context add src/api/routes.ts
/context add src/**/*.ts # Glob patterns supported
Pinning Files (Persistent Across Sessions)
/pin src/types.ts # Always in context
/unpin src/types.ts # Remove pin
Excluding Files
/exclude "*.test.ts"
/exclude "**/node_modules/**"
/include "*.test.ts" # Remove exclusion
Context Overview
/context
# Output:
# Context Overview:
# Working directory: /home/user/my-project
# Token usage: 12,450 / 128,000 (9.7%)
# [###.............................]
#
# Pinned files:
# src/types.ts (2,500 tokens)
# Active context files:
# src/api/routes.ts (1,800 tokens)
# ... and 15 more files
Automatic Detection
OCCode automatically detects project type (Node.js, Python, Rust, Go, Java), frameworks (React, Django, Express), and loads relevant context.
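A minimal way to see this in action, assuming occode status reports the detected project details:
cd my-react-app
occode init # initialize OCCode in the project
occode status # assumption: shows detected project type, framework, and loaded context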
10. Session Management
Automatic Persistence
Sessions are automatically saved to ~/.occode/sessions/. Each session gets a unique ID.
Resume a Previous Session
occode --session sess_abc123
Session Commands
/status # Messages, tokens, cost, turns
/clear # Reset conversation (current session)
/compact # Summarize to save tokens
/export session.json # Save session to file
History Management
occode history # View recent sessions
occode history -n 10 # Last 10 sessions
occode history --export sessions.json
occode history --clear # Delete all history
11. Checkpoints & Undo
Checkpoints save file states before changes so you can always roll back.
Creating Checkpoints
/checkpoint Before refactoring auth module
# or
occode checkpoint create "Before refactoring"
Restoring
/checkpoint list # See all saved states
/checkpoint restore ckpt_123 # Restore to that state
Quick Undo
/undo # Reverts last file change
# or
occode undo
12. Cost Tracking
Viewing Costs
/cost
# Output:
# Cost Report
# ────────────────────────────
# Provider: anthropic
# Model: claude-sonnet-4-20250514
#
# Token Usage:
# Input: 12,450 tokens
# Output: 3,200 tokens
# Total: 15,650 tokens
#
# Total Cost: $0.0856
Model Tiers
| Tier | Examples | Cost |
|---|---|---|
| Flagship | Opus, GPT-4, Gemini Pro | $$$ |
| Balanced | Sonnet, o3-mini | $$ |
| Fast | Haiku, GPT-4o-mini, Gemini Flash | $ |
| Economy | Ollama, local models | Free |
Tip: Use /profile fast for simple tasks and /profile power only when needed. Switch to Ollama for unlimited free usage on private/offline tasks.
13. Git Integration
AI-Powered Commits
/commit
# OCCode analyzes your diff and generates:
# "feat: Add user authentication with JWT tokens"
# Approve? [y/N]
Enhanced Diff
/diff src/auth.ts # Syntax-highlighted diff
/diff --side-by-side src/auth.ts # Side-by-side view
/diff --no-syntax src/auth.ts # Plain text diff
Git Status
/git
# Shows branch, staged files, unstaged changes
CLI Commit Command
occode commit # Interactive commit
occode commit -a # Stage all + commit
occode commit -a -p # Stage all + commit + push
14. Feature Toggles
OCCode includes powerful features that use additional AI tokens. Toggle them on/off to control costs.
High Token Impact
| Feature | Command | Default | Description |
|---|---|---|---|
| Test-Driven Generation | /tdg | Off | Generate tests + iterate until passing (3-10x tokens) |
| Visual UI Repair | /features enable visualUIRepair | Off | Screenshot analysis + auto-fix UI issues |
| Browser E2E Testing | /features enable browserTesting | Off | Puppeteer/Playwright auto-test generation |
| Auto Code Review | /features enable autoCodeReview | Off | Automatic PR review + suggestions |
Medium Token Impact
| Feature | Command | Default |
|---|---|---|
| Proactive Monitoring | /features enable proactiveMonitoring | Off |
| Coverage-Guided Tests | /features disable coverageGuidedTests | On |
| Auto Documentation | /features enable autoDocumentation | Off |
| Refactoring Suggestions | /features enable refactoringSuggestions | Off |
| Performance Optimization | /features enable performanceOptimization | Off |
Low Token Impact
| Feature | Command | Default |
|---|---|---|
| Auto-Fix (LSP Loop) | /autofix | On |
Token Cost Multiplier
/features cost
# Shows: Token Multiplier: 1.2x (baseline + coverage-guided tests)
# With TDG enabled: 5.0x
# With multiple features: up to 10x+
15. P2P Messaging
OCCode includes end-to-end encrypted (E2EE) peer-to-peer messaging directly in the CLI. Send secure messages to other OCCode users without leaving your terminal. All messages are encrypted client-side — the server stores only ciphertext and cannot read message content.
1. Run /dm keys to generate your encryption keys
2. Run /dm search username to find a user
3. Run /dm open <userId> to start a conversation
4. Type your message and press Enter; it's encrypted and sent automatically
5. Run /dm close to exit DM mode and return to AI chat
DM Commands Reference
| Command | Description |
|---|---|
| /dm keys | Generate or show your E2EE keypair and fingerprint |
| /dm rotate-keys | Generate a new keypair and revoke the old one |
| /dm search <query> | Search for users by name or email |
| /dm open <userId> | Open or create a conversation with a user |
| /dm list | List all active conversations with unread counts |
| /dm read [convId] | Show message history for a conversation |
| /dm send <text> | Send a message in the active conversation |
| /dm close | Close the active DM session and return to AI chat |
| /dm status | Show WebSocket connection status and offline queue |
| /dm transcript on|off|export | Enable, disable, or export DM transcripts |
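Putting these together, a first conversation could look like this; the username and user ID are hypothetical:
/dm keys # generate your keypair and note the fingerprint
/dm search alice # find a user (name is illustrative)
/dm open usr_1a2b3c # hypothetical user ID from the search results
hello there # in DM mode, typed text is encrypted and sent, not passed to the AI
/dm close # leave DM mode and return to AI chat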
DM Chat Mode
When you open a conversation with /dm open, OCCode enters DM chat mode. In this mode:
- Everything you type is sent as a message to the other user (not to the AI)
- Messages appear in real-time as the other user responds
- Typing indicators show when the other person is typing
- Type /dm close to exit DM mode and return to normal AI chat
End-to-End Encryption
OCCode uses industry-standard cryptographic algorithms for message security:
| Component | Algorithm | Purpose |
|---|---|---|
| Key Exchange | X25519 (Curve25519) | Establish shared secret between users |
| Message Encryption | XSalsa20-Poly1305 | Authenticated encryption (TweetNaCl) |
| Key Fingerprint | SHA-256 | Verify key identity |
Key Management
Your encryption keys are stored securely:
- Primary storage: OS keychain (macOS Keychain, Windows Credential Manager, Linux libsecret)
- Fallback: Encrypted file at ~/.occode/dm-keys.enc
Your private key never leaves your device. If you lose your keys (e.g., switch machines), you'll need to generate new keys with /dm keys or /dm rotate-keys. Previous messages remain readable as long as both parties' keys are available.
Key Fingerprint Verification
When you run /dm keys, OCCode displays your key fingerprint as a colon-separated hex string (e.g., a3:4f:b2:...). You can verify this fingerprint with your contact through an out-of-band channel to confirm you're communicating with the right person.
DM Transcripts
You can optionally save transcripts of your DM conversations:
/dm transcript on # Enable transcript logging
/dm transcript off # Disable transcript logging
/dm transcript export # Export decrypted transcript as JSON
Transcripts are stored locally at ~/.occode/dm-transcripts/ and encrypted with AES-256-GCM using a password-derived key.
Your organization administrator may require DM transcripts to be enabled for compliance. When enforced, you cannot disable transcripts; the /dm transcript off command will show a policy notice.
Offline Message Queue
If you send a message while disconnected, it's saved to a local queue (~/.occode/dm-offline-queue.json, max 100 messages). Messages are automatically sent when your connection is restored.
Check connection status anytime with /dm status.
Message Features
- Typing indicators: See when the other person is typing
- Delivery receipts: Know when your message was delivered
- Read receipts: Know when your message was read
- Message history: View past messages with /dm read
- WebRTC signaling: Foundation for future voice/video calls
15.1. Multi-Channel Communications
Beyond P2P messaging, OCCode includes a multi-channel communications module that lets you send and receive messages over 8 independent channels — from Bluetooth on your desk to LoRa radios and ham radio spanning continents. Each channel is optional and only activates when configured.
1. Install the npm package for your channel (e.g. npm install bluetooth-serial-port)
2. Edit ~/.occode/comms.json to enable and configure the channel
3. Run /comms status to verify the connection
4. See the full Communications Guide for detailed setup instructions
Channel Overview
| Channel | Description |
|---|---|
| Bluetooth | Serial Port Profile (SPP) over paired devices. Optional AES-256-GCM encryption. |
| WiFi LAN | mDNS peer discovery + TCP messaging on your local network. |
| Modem SMS | USB GSM modem with AT commands. Send/receive real SMS. |
| Telegram | Bot API with long polling and optional chat-ID whitelist. |
| Cloud SMS | Twilio REST API for outbound + webhook for inbound SMS. |
| LoRa Radio | Long-range radio (RYLR896/998, RAK, RN2483). 2–15 km range. |
| Meshtastic | LoRa mesh with multi-hop routing and built-in AES-256. |
| Ham Radio | AX.25 packet radio + JS8Call weak-signal HF. Requires license. |
Comms Commands Reference
| Command | Description |
|---|---|
| /comms status | Show all channel connection statuses |
| /comms bt scan|send | Bluetooth device scan and messaging |
| /comms wifi peers|send | WiFi LAN peer discovery and messaging |
| /comms modem ports|send|signal | GSM modem SMS and signal check |
| /comms tg send | Telegram bot messaging |
| /comms sms send|status | Cloud SMS via Twilio |
| /comms lora send|info | LoRa radio messaging |
| /comms mesh send|nodes|info | Meshtastic mesh networking |
| /comms ham send|freq|mode|info | Ham radio (AX.25 / JS8Call) |
| /comms history [n] | Recent messages across all channels |
| /comms config | Configuration instructions |
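Once a channel is configured, a quick end-to-end check follows the same pattern on every channel; the LoRa message below is illustrative, and it's assumed each send subcommand takes the message text as its argument:
/comms status # confirm the channel reports as connected
/comms lora send "base camp: radio check" # illustrative message (assumed argument form)
/comms history 10 # verify the message appears in recent history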
For detailed per-channel setup, configuration examples, hardware requirements, and testing instructions, see the OCCode Communications Guide.
15.5. Plugin System
OCCode CLI supports plugins — sandboxed extensions that add custom slash commands, hook into lifecycle events, and access AI and context features through a permission-gated API. Plugins run in isolated VM contexts and cannot access the filesystem or Node.js modules directly.
1. Install a plugin: /plugin install /path/to/my-plugin
2. Enable it: /plugin enable my-plugin
3. Use its commands: /my-plugin:command-name [args]
4. Disable when done: /plugin disable my-plugin
Plugin Management Commands
| Command | Description |
|---|---|
| /plugin or /plugin list | List all installed plugins with their current state |
| /plugin install <path> | Install a plugin from a local directory |
| /plugin enable <name> | Enable a plugin (loads it in its sandbox) |
| /plugin disable <name> | Disable a plugin (keeps it installed) |
| /plugin uninstall <name> | Uninstall and remove a plugin completely |
| /plugin info <name> | Show plugin details: version, permissions, commands |
| /plugin help | Show plugin command help |
Using Plugin Commands
Plugin commands use the syntax /<plugin-name>:<command-name> [arguments]:
# Run the "check" command from "my-linter" plugin
/my-linter:check src/
# Run the "format" command from "formatter" plugin
/formatter:format --style=prettier
# Run the "run" command from "test-runner" plugin
/test-runner:run --watch
Plugin States
| State | Description |
|---|---|
| installed | Plugin files copied to disk, not yet active |
| enabled | Plugin loaded in sandbox, commands available |
| disabled | Plugin installed but not active |
| error | Plugin failed to load (check /plugin info for details) |
Creating Plugins
Plugins are defined by a plugin.json manifest and a JavaScript entry point file.
Plugin Manifest (plugin.json)
{
"name": "my-plugin",
"version": "1.0.0",
"displayName": "My Plugin",
"description": "Does something useful",
"author": "Your Name",
"entryPoint": "dist/index.js",
"permissions": ["commands", "context:read"],
"commands": [
{
"name": "greet",
"description": "Say hello",
"handler": "greetHandler"
}
],
"hooks": [
{
"event": "command:execute",
"handler": "onCommandExecute"
}
]
}
Entry Point Example (dist/index.js)
// Export command handlers
module.exports.greetHandler = async function (args) {
  const name = args || 'World';
  return `Hello, ${name}!`;
};

// Export hook handlers
// `api` is the permission-gated plugin API object injected into the sandbox
// (see Available Permissions below for the methods it exposes)
module.exports.onCommandExecute = async function (data) {
  api.log('Command executed:', data);
};
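Installing and invoking this example would then look like the following (the plugin directory path is illustrative):
/plugin install ./my-plugin # directory containing plugin.json and dist/index.js
/plugin enable my-plugin
/my-plugin:greet Ada # the greet handler above returns "Hello, Ada!"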
Available Permissions
| Permission | What It Grants |
|---|---|
| commands | Define custom slash commands |
| context:read | Read workspace context files via api.context.read(path) |
| context:write | Write workspace context via api.context.write(path, content) |
| ai:call | Call AI completions via api.ai.complete(prompt) |
| messages:read | Read conversation messages via api.messages.read() |
| messages:write | Write messages via api.messages.write(content) |
| network | Make HTTPS requests via api.network.fetch(url) |
| hooks:pre | Register pre-execution hooks |
| hooks:post | Register post-execution hooks |
| ui | Modify display elements |
| dm:read | Read direct messages (may be blocked by org policy) |
| dm:write | Send direct messages (may be blocked by org policy) |
| transcripts:read | Read transcripts (may be blocked by org policy) |
Your organization administrator may restrict which plugins can be installed and which permissions are available. If a plugin requires a blocked permission, that capability will be silently disabled. Check with your admin if a plugin isn't working as expected.
Plugin File Structure
my-plugin/
plugin.json # Required: manifest with metadata and permissions
dist/
index.js # Required: entry point (referenced by entryPoint field)
helpers.js # Optional: additional code (bundled into entry point)
Plugins are stored at ~/.occode/plugins/ after installation. The plugin registry is tracked in ~/.occode/plugins/plugins.json.
Security Notes
- Plugins run in isolated VM sandboxes with a 10-second execution timeout
- No access to require, import, process, or the filesystem
- Network requests are HTTPS only with a 30-second timeout
- All API access is permission-checked — calling an API without the required permission throws an error
- Only review and install plugins from trusted sources
16. Configuration Reference
Global Config: ~/.occode/config.json
{
"provider": "anthropic",
"model": "claude-sonnet-4-20250514",
"maxTokens": 4096,
"temperature": 0.7,
"autoApprove": false,
"features": {
"tdg": false,
"autoFix": true,
"coverageGuidedTests": true
}
}
Project Config: .occode.json
{
"provider": "anthropic",
"model": "claude-sonnet-4-20250514",
"mode": "interactive",
"contextPatterns": ["src/**/*"],
"ignorePatterns": ["node_modules", "dist"]
}
Environment Variables
| Variable | Description | Example |
|---|---|---|
| OCCODE_PROVIDER | AI provider | anthropic |
| OCCODE_MODEL | Model name | claude-sonnet-4-20250514 |
| OCCODE_API_ENDPOINT | Custom endpoint | http://localhost:11434/v1 |
| OCCODE_MAX_TOKENS | Max output tokens | 4096 |
| OCCODE_TEMPERATURE | Temperature (0-1) | 0.7 |
| OCCODE_API_KEY | Fallback API key | — |
| ANTHROPIC_API_KEY | Anthropic key | sk-ant-... |
| OPENAI_API_KEY | OpenAI key | sk-... |
| GOOGLE_API_KEY | Google/Gemini key | — |
| DEEPSEEK_API_KEY | DeepSeek key | — |
| MISTRAL_API_KEY | Mistral key | — |
| GROQ_API_KEY | Groq key | — |
| TOGETHER_API_KEY | Together AI key | — |
| OPENROUTER_API_KEY | OpenRouter key | sk-or-v1-... |
| OPENCAN_API_KEY | OpenCan key | — |
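Environment variables are handy for one-off overrides; for example, pointing a single shell session at a local Ollama server (values taken from the Ollama setup in section 6):
export OCCODE_PROVIDER=local
export OCCODE_MODEL=llama3.3
export OCCODE_API_ENDPOINT=http://localhost:11434/v1
occode run "Summarize the repository structure"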
Configuration Priority (highest to lowest)
1. Command-line flags (--model, --provider)
2. Environment variables (OCCODE_MODEL)
3. Project config (.occode.json)
4. Global config (~/.occode/config.json)
5. Built-in defaults
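For example, even if .occode.json pins a model, a command-line flag takes precedence because it sits highest in this list:
# project config sets claude-sonnet-4-20250514, but this run uses gpt-4o
occode run "Add input validation" --model gpt-4o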
17. Troubleshooting
API Key Not Found
occode config --set-key --provider anthropic
# Or set via environment:
export ANTHROPIC_API_KEY="sk-ant-..."
Model Not Available
# Check configuration:
occode config --list
# Verify model name for your provider:
/model # Shows current model
/converge catalog # Browse all available models
Command Timeout
occode run "task" --timeout 600 # 10 minutes
Local Model Connection (Ollama)
# Ensure Ollama is running:
curl http://localhost:11434/api/tags
# Start if needed:
ollama serve
# Check available models:
ollama list
Session Won't Resume
# Check session ID:
occode history
# Sessions stored at:
ls ~/.occode/sessions/
High Token Usage
/features cost # Check token multiplier
/features # See which features are enabled
/compact # Compress conversation history
18. Tips & Best Practices
Add relevant files with /context add or the -c flag so the AI understands your code.
Check /cost regularly. Use /profile fast for simple tasks to save money.
Create a checkpoint before risky changes: /checkpoint "before big refactor".
Preview with --dry-run to see what the AI would do before executing.
Run /compact to summarize conversation history and reduce token usage.
Use profiles: fast for quick queries, power for complex work.