# Spec-Driven Frameworks

<SlideStart />

## OpenSpec

OpenSpec (openspec.dev) is a framework for writing machine-readable specifications. It defines a structured YAML/Markdown format that both tools and LLMs can parse consistently, without ambiguity about where to find the goal, the interfaces, or the acceptance criteria.

Every spec in OpenSpec follows a fixed schema: a goal section, interface definitions, behavior descriptions, and tests. That predictability is the point. When all your specs share the same structure, you can validate them automatically, diff them in pull requests, and feed them to any tool that understands the format.
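
As an illustration only, a spec in this style might look like the sketch below; the section names follow the schema described above, not necessarily OpenSpec's exact field names.

```yaml
# Hypothetical spec sketch; section names follow the structure described
# above, not necessarily OpenSpec's exact schema.
goal: >
  Deduplicate incoming customer records before they are loaded into the
  warehouse.

interfaces:
  - name: deduplicate_records
    input: list of customer records as JSON objects
    output: list of unique records, duplicates merged by email

behavior:
  - Normalize email addresses before comparison.
  - When two records conflict, keep the most recently updated one.

tests:
  - Two records sharing an email produce a single merged output record.
  - Records with distinct emails pass through unchanged.
```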

OpenSpec specs live in version control alongside the code they describe. The spec and the implementation evolve together, which makes it harder for documentation to drift out of sync.

This approach works best for teams that need consistency across multiple projects or contributors. If five engineers are each writing specs their own way, OpenSpec gives you a shared language without forcing a heavy process on anyone.

<SlideEnd />

<SlideStart />

## spec-kit

spec-kit (github.com/github/spec-kit) is GitHub's approach to spec-driven development. It was designed for teams using GitHub Copilot and centers on lightweight spec files stored in `.github/specs/`. Before any code is written, a spec file describing the feature lands in that directory and goes through pull request review.
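
A hypothetical spec under that convention might be as small as the file below; the file name and headings are illustrative, not a format spec-kit prescribes.

```markdown title=".github/specs/churn-scoring-endpoint.md"
<!-- Hypothetical example; spec-kit does not prescribe these exact headings -->
# Churn scoring endpoint

## Intent
Expose the trained churn model behind a POST /score endpoint.

## Scope
- Accepts a JSON payload of customer features.
- Returns a churn probability between 0 and 1.
- Out of scope: batch scoring and model retraining.

## Acceptance
- Invalid payloads return a 422 with a validation message.
- p95 latency stays under 200 ms against the staging dataset.
```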

The integration with GitHub's review process is the practical advantage here. Reviewers see the spec before they see the implementation. If the spec is wrong or incomplete, that gets caught before any code exists, when it is much cheaper to fix.

spec-kit suits teams that are already working inside GitHub and using Copilot for code generation. The tooling assumption is built in — specs are artifacts that Copilot can read to generate implementations that match the intent. If your team is not on that stack, the integration benefits disappear and you are left with a Markdown convention you could replicate yourself.

<SlideEnd />

<SlideStart />

## BMAD

BMAD (Breakthrough Method for Agile Development, bmad.fr) is the most structured of the four options. It defines four roles — Analyst, Architect, Developer, QA — and a corresponding set of artifact types: Brief, PRD, Architecture doc, and Stories.

The workflow moves linearly through these roles. The Analyst gathers requirements and produces a Brief and PRD. The Architect translates those into a technical Architecture doc. The Developer writes code against Stories derived from that architecture. QA validates the output against the acceptance criteria in those Stories.

<DiagramViewer title="BMAD workflow">
```d2
direction: right

analyst: Analyst
architect: Architect
developer: Developer
qa: QA

analyst -> architect: "Brief + PRD"
architect -> developer: Architecture doc
developer -> qa: "Stories + code"
```
</DiagramViewer>
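
To make the hand-offs concrete, here is a sketch of a single Story as it might reach the Developer; the headings are illustrative, not BMAD's exact template.

```markdown
<!-- Hypothetical Story sketch; BMAD's own templates differ in detail -->
# Story 003: Alert on feature drift

## Context
From the Architecture doc: drift detection runs as a nightly batch job.

## Acceptance criteria
- A drift score above the configured threshold on any monitored feature
  raises an alert.
- Alerts include the feature name, score, and reference window.

## Out of scope
- Automatic retraining.
```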

This workflow sounds heavyweight, and on a large team it is deliberately so. Each artifact traces back to the one before it, and that traceability is the point for organizations that need audit trails or where multiple people are building together with Claude.

For a solo data scientist, BMAD is worth understanding differently. You are not adopting four roles — you are adopting four thinking modes. Writing a brief before you architect, and an architecture before you code, enforces a discipline that prevents the most common failure mode in AI-assisted development: jumping to implementation before the problem is fully understood.

BMAD is a strong fit for MLOps projects where pipelines, models, APIs, and monitoring are separate concerns owned by different people, and where a requirement that was misunderstood at the start costs weeks to fix later.

<SlideEnd />

<SlideStart />

## Custom approaches

Many teams do not need a full framework. They need a template — something that ensures everyone asks the same questions before starting work.

The start-work project (github.com/prillcode/start-work) is a useful reference. It is a shell script combined with a Markdown template that generates a spec file at the start of any new Claude session. The automation is minimal; the value is in the habit it enforces.
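
The original is a shell script, but the idea is small enough to sketch in a few lines; the Python below is a hypothetical illustration of the mechanism, not start-work's actual code.

```python
# Hypothetical sketch of the start-work idea, not the project's actual code:
# copy the spec template into a dated file before opening a Claude session.
import shutil
from datetime import date
from pathlib import Path

def start_work(feature: str) -> Path:
    """Create a spec file for a new feature from the SPEC.md template."""
    spec = Path("specs") / f"{date.today():%Y-%m-%d}-{feature}.md"
    spec.parent.mkdir(exist_ok=True)
    shutil.copy("SPEC.md", spec)
    print(f"Fill in {spec} before starting the Claude session.")
    return spec

start_work("churn-model-api")
```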

The minimal version of this approach is a `SPEC.md` template committed to your repository. Before opening a new Claude session for a feature, fill it in. That single practice eliminates most of the context loss that happens when you start coding without a clear goal.

```markdown title="SPEC.md"
# Feature: [name]

## Goal
[One paragraph]

## Inputs
[Data types and formats]

## Outputs
[Data types and formats]

## Behavior
[Step by step]

## Constraints
[What it must NOT do]

## Done when
[Testable acceptance criteria]
```

The six sections cover what the feature does, what it touches, and how you know it is finished. The last one matters most: "Done when" forces you to write acceptance criteria before any code exists, and that single habit has the biggest impact on what Claude actually builds.

You can extend this template as your needs grow. Add a section for data schemas if you work with structured data. Add a rollback plan if you are deploying models. The format is yours to adapt.
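
For example, a data-focused extension might append sections like these; the wording is illustrative.

```markdown title="SPEC.md (extended)"
## Data schema
[Column names, dtypes, and expected value ranges for inputs and outputs]

## Rollback plan
[How to revert to the previous model version if validation fails]
```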

<SlideEnd />

<SlideStart />

## Comparison

| | OpenSpec | spec-kit | BMAD | Custom |
|---|---|---|---|---|
| Complexity | Medium | Low | High | Low |
| Tooling | Yes | Yes (GitHub) | Yes | None |
| Best for | Cross-team consistency | GitHub-centric teams | Large structured projects | Small teams, fast start |
| Learning curve | Medium | Low | High | Minimal |
| DS/ML fit | Good | Moderate | Good for MLOps | Best for experimentation |

<Callout type="tip">
  If you are a solo data scientist or a small team, start with a custom SPEC.md template. Adopt a framework only when the pain of inconsistency outweighs the cost of learning it.
</Callout>

<SlideEnd />