OpenClaw Coding Orchestration: OpenCode + SpecLedger
Building AI-first products at Ratrekt Labs means we write a lot of specs and feed them to coding agents. The process was manual, repetitive, and slow — until we wired three tools together into a pipeline where we decide what to build, and the tools handle the execution.
The Stack
There are three tools in this pipeline, and each has a clear role:
OpenClaw is your AI assistant. It lives in Telegram, Discord, or wherever you chat. You tell it what to build, and it coordinates the entire process. With the right skills installed, OpenClaw acts like a senior developer on your team — it breaks down problems, writes specs, and orchestrates implementation.
OpenCode (or Claude Code) is the coding sub-agent. OpenClaw can run terminal commands and load skills, but for coding work you want clean, dedicated agent sessions scoped to a specific workspace. That’s where OpenCode comes in. OpenClaw spawns OpenCode as a sub-agent, and OpenCode loads SpecLedger skills and commands to bridge SpecLedger into the OpenClaw workflow. This keeps your coding sessions isolated and clean while still being orchestrated from chat.
SpecLedger is the spec engine. It manages specifications, tracks every version and change, catches when code drifts from the spec, and gives you a web app to review and comment on specs before any code gets written. Lightweight SpecLedger CLI commands like sl init can be run by either OpenClaw or OpenCode. The heavier commands that read your codebase and write code (specify, plan, tasks, implements, verify) run through OpenCode as the dedicated coding sub-agent.
The dynamic is simple. You chat with OpenClaw about what you want to build. OpenClaw spawns OpenCode into your project to do the actual work. OpenCode uses SpecLedger to manage specs at every step. And you get review links along the way to stay in control.
The Problem
Our workflow every time we built a feature:
- Write a specification
- Feed it to an AI coding agent
- Review the output
- Iterate
Steps 1-2 were manual. For a solo founder with multiple projects, writing specs and hand-feeding them to a coding agent was eating hours every week.
We wanted to describe a feature in plain English, review the spec before any code gets written, and have the implementation run with our finger on the pulse at every phase.
Architecture
graph LR
You["You<br/>(Telegram/Discord)"] -->|"Decide what to build"| OC["OpenClaw<br/>(Assistant)"]
OC -->|"Spawn session"| OCode["OpenCode<br/>(Runner)"]
OCode -->|"SpecLedger CLI"| SL["SpecLedger<br/>(Blueprint)"]
SL -->|"Specs, plans, tasks"| OCode
OCode -->|"Implement code"| Project["Your<br/>Project"]
OCode -->|"Checkpoints"| OC
OC -->|"Review links<br/>+ notifications"| You
You -->|"Approve / Feedback"| OC
You decide what to build. OpenClaw turns that into a spec, sends you a link to review. You approve, OpenClaw moves to the next phase. At every step, you get a checkpoint — a SpecLedger app link where you can review, leave comments, and approve before the next phase starts. No surprises, no black box.
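The checkpoint mechanic is simple enough to sketch in a few lines of shell. This is an illustrative stub, not OpenClaw's actual implementation: the review link and the yes/no answer are plain function arguments here, whereas in the real flow they travel through chat, and the spec URL in the example is hypothetical.

```shell
# Illustrative checkpoint gate (not OpenClaw's real code). A phase only
# proceeds when the operator's answer is an explicit yes.
checkpoint() {
  link="$1"    # SpecLedger review link for this phase
  answer="$2"  # operator's reply; arrives via chat in the real flow
  echo "Review before next phase: $link"
  case "$answer" in
    y|yes) echo "approved" ;;
    *)     echo "stopped before any code was written"; return 1 ;;
  esac
}

checkpoint "https://app.specledger.io/specs/auth" "yes"   # hypothetical link
```

The important property is the early return: anything short of an explicit approval halts the pipeline, which is exactly the "no surprises, no black box" guarantee described above.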
Setup
Before we dive into the flow, here’s what you need:
OpenClaw — AI assistant platform. Runs on your server, connects to your messaging apps. Install via:
curl -fsSL https://openclaw.ai/install.sh | bash
openclaw onboard
OpenCode (or Claude Code) — coding agent for your terminal. The actual code runner.
# OpenCode
curl -fsSL https://opencode.ai/install | bash
# or Claude Code
npm install -g @anthropic-ai/claude-code
SpecLedger — spec management platform. Sign up at app.specledger.io (free tier available), then install the CLI:
curl -fsSL https://specledger.io/install.sh | bash
sl auth login
Once all three are installed, connect them by chatting to OpenClaw:
“Create a skill called coding-specledger that spawns OpenCode as a coding agent via ACP harness. The skill should be able to run SpecLedger CLI commands — sl init for project setup, specledger.specify to generate specs from feature descriptions, specledger.plan to create implementation plans, specledger.tasks to break plans into task breakdowns, specledger.implements to execute implementation, and specledger.verify to validate code against specs. After each phase, send me the SpecLedger app link so I can review and approve before proceeding to the next phase.”
OpenClaw will generate a coding-specledger skill file. With this skill, OpenClaw becomes a developer on your team — one that writes specs, coordinates implementation, and gives you checkpoints at every phase.
Development Flow
Step 1: Initialize a SpecLedger Project
“Run sl init in my project folder to create a new SpecLedger project”
OpenClaw spawns an OpenCode session, which runs the SpecLedger CLI in your project directory. Your project is now connected to SpecLedger.
Step 2: Specify a Feature
“Use specledger.specify to create a spec for a user authentication endpoint with JWT tokens, rate limiting, and refresh token rotation”
OpenCode runs the SpecLedger spec generator. The spec gets created with all the required fields — objective, inputs, outputs, constraints, edge cases, acceptance criteria.
Checkpoint: OpenClaw sends you the SpecLedger app link. Open it, review the spec, leave comments if anything needs to change. Approve when ready.
Step 3: Generate an Implementation Plan
“Run specledger.plan on the auth spec”
SpecLedger analyzes the spec and generates a step-by-step implementation plan.
Checkpoint: Review the plan in the SpecLedger app. Make sure the approach makes sense before any code gets written. Approve or send feedback.
Step 4: Create Task Breakdown
“Run specledger.tasks to break the plan into tasks”
SpecLedger generates a task breakdown from the implementation plan. Each task is a focused, deliverable unit of work.
Checkpoint: Review the tasks. Confirm priority and scope. Adjust if needed.
Step 5: Implement
“Run specledger.implements to start implementing”
OpenCode works through each task — writes code, runs tests, validates against the spec. You get a notification when each task completes.
Step 6: Validate
“Run specledger.verify to check if the implementation matches the spec”
SpecLedger validates the code against the spec. If something drifted, you get a detailed report.
Checkpoint: Review the verification results. If something’s off, update the spec or provide feedback to fix it.
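If you'd rather see the six steps above as one terminal sequence, it looks roughly like this. The phase names are taken from the skill prompt earlier in this post; whether they are literally invocable commands with these arguments is an assumption, so the sketch defaults to a dry run that prints each command instead of executing it.

```shell
#!/usr/bin/env sh
# Dry-run sketch of the six pipeline phases. Phase names come from the skill
# prompt above; the arguments are illustrative and unverified. With DRY_RUN=1
# (the default) each command is printed rather than executed, so the sequence
# can be inspected without any of the CLIs installed.
set -eu
DRY_RUN="${DRY_RUN:-1}"

run() {
  if [ "$DRY_RUN" = "1" ]; then
    echo "+ $*"          # print the command instead of running it
  else
    "$@"                 # execute for real once the CLIs are installed
  fi
}

run sl init
run specledger.specify "auth endpoint with JWT, rate limiting, refresh rotation"
run specledger.plan
run specledger.tasks
run specledger.implements
run specledger.verify
```

Set DRY_RUN=0 to execute the commands for real; in the orchestrated flow, OpenCode runs the equivalent sequence for you with a checkpoint between phases.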
The Full Pipeline
flowchart LR
You["You"] -->|"Decide"| OC["OpenClaw"]
OC -->|"Spawn"| OCode["OpenCode"]
subgraph Pipeline["SpecLedger"]
direction LR
S1["sl init"] --> S2["specify"]
S2 --> S3["plan"]
S3 --> S4["tasks"]
S4 --> S5["implements"]
S5 --> S6["verify"]
end
OCode --> Pipeline
S2 -->|"Review link"| You
S3 -->|"Review link"| You
S4 -->|"Review link"| You
S6 -->|"Results"| You
S6 -->|"Fail"| S2
Every phase has a checkpoint. You’re never out of the loop. OpenClaw gives you the SpecLedger app link at each step, you review and approve, then it moves forward. If something doesn’t look right, you stop it before any code is written.
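The fail edge at the bottom of the flowchart (verify fails, loop back to specify) is just a retry loop. Here is a runnable sketch with every phase stubbed out as an echo; the stub verify fails on the first attempt and passes on the second, standing in for a real verification run.

```shell
#!/usr/bin/env sh
# Runnable sketch of the flowchart's control flow: specify -> plan -> tasks ->
# implement -> verify, looping back to specify when verification fails.
# All phases are stubs; verify fails once, then passes.
set -u

attempt=0
verify() {
  attempt=$((attempt + 1))
  [ "$attempt" -ge 2 ]   # stub: fail the first attempt, pass the second
}

until
  echo "specify (attempt $((attempt + 1)))"
  echo "plan"; echo "tasks"; echo "implement"
  verify
do
  echo "verify failed -> back to specify"
done
echo "verify passed -> done after $attempt attempts"
```

In practice you would rarely let this loop run unattended; the checkpoints exist precisely so a failed verification comes back to you as a report before the spec is revised.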
Results After a Few Weeks
Speed: Features that took 2-3 hours now take 15-30 minutes end to end. Spec generation alone saves significant time.
Quality: Detailed, structured specs give OpenCode better context. Fewer hallucinated features — the AI sticks to what’s specified instead of guessing.
Control: With checkpoints at every phase, nothing gets implemented without your review. No black box, no surprises.
Consistency: SpecLedger tracks every version and delta. Code drift from the spec gets caught early, especially across multiple sessions.
Practical Tips
1. Write detailed specs. Output quality is proportional to spec quality. Vague specs produce vague code. Be explicit.
2. Review at every checkpoint. Don’t skip the review links. The whole point is keeping your finger on the pulse. A 30-second review saves hours of rework.
3. Start small. One spec per feature, one feature per pipeline run. Small independent specs beat monolithic ones every time.
4. Use version control. Every spec change in SpecLedger is tracked. Revert to a previous version if implementation goes wrong.
5. Iterate on the spec, not the code. When something doesn’t work, update the spec instead of manually fixing code. Keep the spec as single source of truth.
When This Works Best
Great for:
- Features with clear requirements
- Projects with test coverage
- Multi-repo development
- Teams that want review checkpoints before code gets written
Less effective for:
- Ambiguous or constantly changing requirements
- Projects with no test infrastructure
- Exploratory prototyping (vibe coding is fine here)
What’s Next
- Spec templates for common patterns (API endpoints, DB migrations, UI components)
- Automated spec validation that catches ambiguities before code generation
- Multi-repo spec orchestration (one spec spanning multiple codebases)
- CI/CD integration for automatic deployment after spec validation
The goal: you decide what to build, the pipeline handles the execution, and you stay in control at every phase. Spec-driven development with AI agents isn’t just faster — it’s more reliable, more consistent, and keeps you in the driver’s seat.
Ready to try it? Start with SpecLedger for spec management, OpenClaw for orchestration, and OpenCode for coding.
Building with AI? Follow us on X (@RatrektLabs) for more insights on AI tools, automation, and spec-driven development.