Using Han with OpenCode

How to use Han's validation pipeline and plugin ecosystem with OpenCode using the bridge plugin.

Han works with OpenCode through a bridge plugin that translates OpenCode's event system into Han hook executions. Your existing Han plugins - validation, context injection, skills, disciplines - work in OpenCode without modification.

How It Works

OpenCode uses a JS/TS plugin system. The Han bridge plugin runs inside OpenCode and connects:

  1. OpenCode events → Han hooks (PreToolUse, PostToolUse, Stop)
  2. System prompt injection → Core guidelines (professional honesty, no excuses, skill selection)
  3. Chat message hooks → Datetime injection (current time on every prompt)
  4. Custom tools → Skills & disciplines (400+ skills, 25 agent personas)

Example flow:

Agent edits src/app.ts
  -> OpenCode fires tool.execute.after
  -> Bridge matches PostToolUse hooks (biome, eslint, tsc)
  -> Runs hooks in parallel, collects results
  -> Agent sees validation errors and fixes them

Setup

1. Install Han and plugins

curl -fsSL https://han.guru/install.sh | bash
han plugin install --auto

2. Add the bridge to OpenCode

Add to your opencode.json:

{
  "$schema": "https://opencode.ai/config.json",
  "plugin": ["opencode-plugin-han"]
}

That's it. Your Han plugins now work in OpenCode.

Coverage Matrix

The bridge maps Claude Code's hook events to OpenCode's plugin API:

| Claude Code Hook | OpenCode Equivalent | Status | Notes |
|------------------|---------------------|--------|-------|
| PostToolUse | tool.execute.after | Implemented | Primary validation path - per-file linting/formatting |
| PreToolUse | tool.execute.before | Implemented | Pre-execution gates, subagent context injection |
| Stop | stop + session.idle | Implemented | Full project validation when agent finishes |
| SessionStart | experimental.chat.system.transform | Implemented | Core guidelines injected into system prompt |
| UserPromptSubmit | chat.message | Implemented | Current datetime injected on every prompt |
| SubagentPrompt | tool.execute.before (Task/agent) | Implemented | Discipline context injected into subagent prompts |
| Skills | tool registration | Implemented | 400+ skills via han_skills tool |
| Disciplines | tool + system.transform | Implemented | 25 agent personas via han_discipline tool |
| Event Logging | JSONL + coordinator | Implemented | Browse UI visibility for OpenCode sessions |
| SubagentStart/Stop | | Not available | No OpenCode equivalent |
| MCP tool events | | Not available | OpenCode doesn't fire events for MCP calls (#2319) |
| Permission denial | | Not available | tool.execute.before can't block tool execution |
| PreCompact | | Not available | No OpenCode equivalent |
| Session slug | | Not available | OpenCode uses its own session naming |

What Works

PostToolUse Validation (Primary)

The most important feature. When the agent edits a file, Han's per-file validation hooks fire:

| Plugin | What It Does |
|--------|--------------|
| biome | Lint and format JavaScript/TypeScript |
| eslint | JavaScript/TypeScript linting |
| prettier | Code formatting |
| typescript | Type checking |
| clippy | Rust linting |
| pylint | Python linting |

Results are delivered two ways:

  1. Inline: Appended directly to the tool output (agent sees immediately)
  2. Notification: Sent via client.session.prompt() (agent acts on next turn)
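
A minimal sketch of the inline path, in TypeScript. The `HookResult` and `ToolOutput` shapes and the `deliverInline` helper are illustrative assumptions, not the bridge's actual internals; the point is only that failed hook output gets appended to the tool result the agent reads.

```typescript
// Hypothetical shapes -- illustrative only, not the bridge's real types.
interface HookResult { plugin: string; status: "passed" | "failed"; output: string }
interface ToolOutput { output: string }

// Append failed-hook output inline so the agent sees it immediately.
function deliverInline(tool: ToolOutput, results: HookResult[]): ToolOutput {
  const failures = results.filter(r => r.status === "failed");
  if (failures.length === 0) return tool;
  const block = failures
    .map(r => `<han-validation plugin="${r.plugin}" status="failed">\n${r.output}\n</han-validation>`)
    .join("\n");
  return { output: `${tool.output}\n\n${block}` };
}

const after = deliverInline(
  { output: "Edited src/app.ts" },
  [{ plugin: "biome", status: "failed", output: "src/app.ts:10:5 noUnusedVariables" }],
);
// after.output is the original tool output followed by the validation block
```

The notification path would instead hand the same formatted block to client.session.prompt(), deferring the fix to the agent's next turn.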

PreToolUse Hooks

Run before a tool executes via tool.execute.before. Enables:

  • Pre-commit/pre-push validation gates (intercept git commands)
  • Subagent context injection (discipline context added to task tool prompts)
  • Input modification for tool calls
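
Since tool.execute.before cannot block execution (see Remaining Gaps), a git gate can only warn. A sketch of the matching logic, with a hypothetical `gitGateWarning` helper standing in for the bridge's own code:

```typescript
// Hypothetical helper -- sketches the pre-execution gate's matching logic.
// Returns a warning string for risky git commands, or null to stay silent.
function gitGateWarning(tool: string, command: string): string | null {
  if (tool !== "bash") return null; // only shell commands are inspected
  if (/\bgit\s+(commit|push)\b/.test(command)) {
    return "han: pre-commit validation hooks will run; fix reported issues before committing.";
  }
  return null;
}
```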

Stop Validation (Secondary)

When the agent finishes a turn, broader project-level hooks run:

  • Full project linting
  • Type checking across the codebase
  • Test suite execution

If issues are found, the bridge re-prompts the agent to fix them.

SessionStart Context (Guidelines)

Core guidelines are injected into every LLM call via experimental.chat.system.transform:

  • Professional honesty — Verify claims before accepting them
  • No time estimates — Use phase numbers and priority order instead
  • No excuses — Own every issue (Boy Scout Rule)
  • Date handling — Use injected datetime, never hardcode
  • Skill selection — Review available skills before starting work

These are the same guidelines that Claude Code sessions receive via the core plugin's SessionStart hook.

UserPromptSubmit Context (Datetime)

Current local datetime is injected on every user message via chat.message, mirroring Claude Code's UserPromptSubmit hook. This ensures the LLM always knows the current time for temporal assertions.
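
Conceptually the injection is just a prefix on the outgoing message. A sketch, assuming a tag name and handler shape that are illustrative rather than the bridge's exact format:

```typescript
// Sketch of a chat.message-style transform that prepends the current local
// datetime to each user prompt. Tag name is an assumption for illustration.
function injectDatetime(prompt: string, now: Date = new Date()): string {
  return `<current-datetime>${now.toString()}</current-datetime>\n\n${prompt}`;
}
```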

Result Format

Hook results are structured so the agent can parse and act on them:

<han-post-tool-validation files="src/app.ts">
The following validation hooks reported issues after your last edit.
Please fix these issues before continuing:

<han-validation plugin="biome" hook="lint-async" status="failed">
src/app.ts:10:5 lint/correctness/noUnusedVariables
  This variable is unused.
</han-validation>
</han-post-tool-validation>
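
Because the format is structured, a consumer can extract individual entries mechanically. A sketch of such a parser (the `parseValidations` helper is hypothetical; the attribute names match the example above):

```typescript
interface Validation { plugin: string; hook: string; status: string; body: string }

// Extract each <han-validation> entry from a result block.
function parseValidations(text: string): Validation[] {
  const re = /<han-validation plugin="([^"]+)" hook="([^"]+)" status="([^"]+)">\n?([\s\S]*?)<\/han-validation>/g;
  const out: Validation[] = [];
  let m: RegExpExecArray | null;
  while ((m = re.exec(text)) !== null) {
    out.push({ plugin: m[1], hook: m[2], status: m[3], body: m[4].trim() });
  }
  return out;
}
```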

Skills (400+)

The bridge registers a han_skills tool with OpenCode, giving the LLM on-demand access to Han's full skill library. Skills are discovered at startup from installed plugins' skills/*/SKILL.md files.

The LLM can search for skills and load their full content:

han_skills({ action: "list", filter: "react" })
→ Lists all React-related skills across plugins

han_skills({ action: "load", skill: "react-hooks-patterns" })
→ Loads full skill content into context

Disciplines (25 Agent Personas)

The bridge registers a han_discipline tool for activating specialized agent personas. When activated, the discipline's expertise is injected into every LLM call via system prompt.

han_discipline({ action: "list" })
→ Available: frontend, backend, sre, security, mobile, database...

han_discipline({ action: "activate", discipline: "frontend" })
→ System prompt now includes frontend expertise context

Available disciplines: frontend, backend, api, architecture, mobile, database, security, infrastructure, sre, performance, accessibility, quality, documentation, project-management, product, data-engineering, machine-learning, and more.

Remaining Gaps

Most of these are genuine platform limitations that cannot be bridged; the last is simply not yet implemented:

  • MCP tool events: OpenCode doesn't fire tool.execute.after for MCP tool calls (opencode#2319). Validation only runs for built-in tools (edit, write, bash).
  • Subagent hooks: No OpenCode equivalent for SubagentStart/SubagentStop. Discipline context is injected via tool.execute.before as a workaround.
  • Permission denial: OpenCode's tool.execute.before cannot block tool execution (no permissionDecision equivalent). PreToolUse hooks can warn but not deny.
  • PreCompact: No hook before context compaction.
  • Checkpoint filtering: Session-scoped checkpoint filtering (only validate files changed since last checkpoint) is not yet implemented in the bridge.

How Plugins Stay Compatible

Han plugins don't need modification to work with OpenCode. The bridge reads the same han-plugin.yml files that Claude Code uses:

# This config works in both Claude Code and OpenCode
hooks:
  lint-async:
    event: PostToolUse
    command: "npx -y @biomejs/biome check --write ${HAN_FILES}"
    tool_filter: [Edit, Write, NotebookEdit]
    file_filter: ["**/*.{js,jsx,ts,tsx}"]
    dirs_with: ["biome.json"]
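
Both runtimes substitute the edited file paths into the command before executing it. A sketch of that expansion, assuming the ${HAN_FILES} placeholder from the config above is replaced with shell-quoted paths (the exact quoting rules are an assumption):

```typescript
// Hypothetical expansion step: replace the ${HAN_FILES} placeholder with
// quoted file paths before the hook command is executed.
function expandHookCommand(template: string, files: string[]): string {
  return template.replace("${HAN_FILES}", files.map(f => JSON.stringify(f)).join(" "));
}

expandHookCommand("npx -y @biomejs/biome check --write ${HAN_FILES}", ["src/app.ts"]);
// -> npx -y @biomejs/biome check --write "src/app.ts"
```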

The difference is in who executes the hook:

  • Claude Code: Reads hooks.json, calls han hook run via shell
  • OpenCode: Bridge reads han-plugin.yml, runs the command directly as a promise

Same hook definition. Same validation. Different runtime.

Architecture

OpenCode Plugin Runtime
  |
  |-- experimental.chat.system.transform
  |     -> Core guidelines (professional honesty, no excuses, etc.)
  |     -> Active discipline context injection
  |     -> Skill/discipline capability summary
  |
  |-- chat.message ────────────> Datetime injection (every prompt)
  |
  |-- tool.execute.before ─────> PreToolUse hooks
  |                                -> Pre-execution validation gates
  |                                -> Discipline context for subagents
  |
  |-- tool.execute.after ──────> PostToolUse hooks (per-file validation)
  |                                -> discovery → matcher → executor → formatter
  |                                -> mutate tool output + notify agent
  |
  |-- session.idle / stop ─────> Stop hooks (full project validation)
  |                                -> client.session.prompt() if failures
  |
  |-- tool: han_skills ────────> Skill discovery (400+ coding skills)
  |                                -> list/search/load SKILL.md content
  |
  |-- tool: han_discipline ────> Agent disciplines (25 personas)
  |                                -> activate/deactivate/list
  |
  |-- JSONL event logger ──────> Browse UI visibility
                                   -> ~/.han/opencode/projects/{slug}/

The bridge discovers hooks at startup by reading:

  1. ~/.claude/settings.json and .claude/settings.json for enabled plugins
  2. .claude-plugin/marketplace.json for plugin path resolution
  3. Each plugin's han-plugin.yml for hook definitions

Event Logging

The bridge writes Han-format JSONL events to ~/.han/opencode/projects/. Each event includes provider: "opencode" to distinguish OpenCode sessions from Claude Code sessions.

On startup, the bridge launches the Han coordinator in the background. The coordinator watches the OpenCode events directory and indexes events into SQLite, making them visible in the Browse UI alongside Claude Code sessions.

Events logged:

  • hook_run / hook_result - Hook execution lifecycle
  • hook_file_change - File edits detected via tool events
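
An event is one JSON object per line. A sketch of the shape, with field names beyond `provider` (which the text above specifies) being illustrative assumptions rather than the bridge's exact schema:

```typescript
// Hypothetical event shape -- only the provider field is documented above.
interface HanEvent {
  event: "hook_run" | "hook_result" | "hook_file_change";
  provider: "opencode"; // distinguishes these sessions from Claude Code's
  session_id: string;
  timestamp: string;
}

// JSONL: one serialized event per line; append "\n" when writing to the file.
function toJsonlLine(e: HanEvent): string {
  return JSON.stringify(e);
}
```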

Environment variables set by the bridge:

  • HAN_PROVIDER=opencode - Identifies the provider for child processes
  • HAN_SESSION_ID=<uuid> - Session ID for event correlation

Next Steps