What is a skill?
A skill is a Markdown file stored in a subdirectory of .claude/skills/ that contains a reusable prompt template. On the surface that sounds similar to a custom command, and the two do share the same idea: write the instruction once, reuse it many times. The difference is how and when they are invoked.
Custom commands are triggered explicitly — you type /review and Claude runs it. Skills can be invoked the same way, but they can also be picked up without explicit invocation. Claude reads each skill’s description field alongside the current task and decides whether to apply the skill. That makes the description field the most important part of a well-written skill: write it precisely, and Claude will reach for the skill in the right situations. Write it vaguely, and the skill will rarely be used.
Skills are also richer than commands. They can carry longer instructions, structured output requirements, and domain-specific guidance that would be unwieldy to type in a prompt every time.
How are skills activated?
Skills are activated in two ways.
Manually — you reference the skill directly in a prompt, or you document it in CLAUDE.md so Claude knows to use it for a specific class of task:
```markdown
## Skills

When reviewing code quality, use the skill at `.claude/skills/code-review.md`.
```
With this in CLAUDE.md, any time you ask Claude to review code, it will pull in that skill’s instructions automatically, without you having to say “use the code-review skill” every time.
Implicitly — Claude reads the description field of each loaded skill and decides whether the skill is relevant to the current task. This is a judgment call by the model, not a keyword lookup. A precise description like “Review Python code for quality, type safety, and DS best practices” gives Claude enough signal to invoke the skill when you ask for a code review. A vague one like “helps with code” does not.
The two activation paths are complementary. CLAUDE.md references are explicit and reliable — use them for skills you want applied consistently. Implicit activation through the description field works well once the description is specific enough to leave no ambiguity.
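Before either path can fire, the skills have to be discoverable on disk. As a rough sketch of what that inventory looks like, the snippet below walks `.claude/skills/` and collects each skill's name and description from its frontmatter. Note that `list_skills` is a hypothetical helper written for illustration, not part of Claude Code; the tool's actual loading is internal.

```python
from pathlib import Path


def list_skills(root: str = ".claude/skills") -> dict[str, str]:
    """Map each skill's name to its description, read from YAML frontmatter.

    Illustrative only: assumes frontmatter is delimited by `---` lines
    and contains simple `key: value` pairs.
    """
    skills: dict[str, str] = {}
    for md in Path(root).glob("**/*.md"):
        lines = md.read_text().splitlines()
        if not lines or lines[0].strip() != "---":
            continue  # no frontmatter, not a skill file
        meta: dict[str, str] = {}
        for line in lines[1:]:
            if line.strip() == "---":
                break  # end of frontmatter
            key, sep, value = line.partition(":")
            if sep:
                meta[key.strip()] = value.strip()
        if meta.get("name") and meta.get("description"):
            skills[meta["name"]] = meta["description"]
    return skills
```

Printing this mapping is a quick way to audit whether your descriptions carry enough signal on their own, since the description is all Claude sees when deciding relevance.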
```d2
direction: right
slash: User types /command
exec: Claude runs command
prompt: User types a prompt
check: Read skill descriptions
invoke: Invoke skill automatically
direct: Respond without skill
slash -> exec
prompt -> check
check -> invoke: match found
check -> direct: no match
```

How to create a skill?
A skill file has YAML frontmatter with at minimum a name and a description, followed by the prompt body — the actual instructions Claude will follow when the skill is invoked.
Here is a complete example for a Python code review skill:
```markdown
---
name: code-review
description: Review Python code for quality, type safety, and DS best practices
---

Review the provided code for:

1. Type hints — all functions must have type annotations
2. Docstrings — public functions need a one-line summary
3. Error handling — no bare `except:` clauses
4. Data practices — no hardcoded paths, no global state for model objects

For each issue: file, line number, severity (high/medium/low), and suggested fix.
```
A few things worth noting about this structure. The description is written from Claude’s perspective — it describes what the skill does, not what you want to happen. That is what makes automatic activation work. The prompt body is numbered and specific: it tells Claude exactly what to check and exactly what format to use for the output. Vague instructions in the body produce vague output.
You can also add a paths field to the frontmatter to scope the skill to specific file patterns (e.g. paths: "src/**/*.py"), but for most use cases a well-written description is sufficient.
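If you want to catch structural mistakes before committing a skill, a small check along these lines can help. `validate_skill` is an illustrative helper under the assumptions above (frontmatter delimited by `---`, with `name` and `description` required); Claude Code itself defines what is actually valid.

```python
def validate_skill(text: str) -> list[str]:
    """Return a list of problems found in a skill file's frontmatter.

    Illustrative checks only, not Claude Code's own validation.
    """
    problems: list[str] = []
    lines = text.splitlines()
    if not lines or lines[0].strip() != "---":
        return ["file must start with a `---` frontmatter block"]
    meta: dict[str, str] = {}
    for line in lines[1:]:
        if line.strip() == "---":
            break  # frontmatter closed
        key, sep, value = line.partition(":")
        if sep:
            meta[key.strip()] = value.strip()
    else:
        return ["frontmatter is never closed with `---`"]
    for field in ("name", "description"):
        if not meta.get(field):
            problems.append(f"frontmatter is missing `{field}`")
    return problems
```

Running a check like this in pre-commit keeps malformed skills out of the repository before anyone tries to invoke them.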
Dos and don’ts
| Do | Don’t |
|---|---|
| Write a clear description so Claude knows when to invoke the skill | Leave description vague or empty |
| Keep each skill focused on one responsibility | Put multiple unrelated tasks in one skill |
| Commit skills to .claude/skills/ in git | Keep skills local if your team would benefit from them |
| Reference skills from CLAUDE.md for reliable activation | Rely only on automatic invocation for critical workflows |
| Test the skill on a real case before committing | Share untested skills with your team |
The single most common mistake is writing a skill with a weak description. If the description says something like “helps with code”, Claude has no basis for automatic activation. Compare that to “Review Python code for quality, type safety, and DS best practices” — that is specific enough for Claude to match against real tasks.
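The worst offenders can even be flagged mechanically. The heuristic below is deliberately crude and purely illustrative (Claude's actual relevance judgment is made by the model, not by rules like these), but it distinguishes the two descriptions just compared:

```python
# Words that tend to signal a description too generic for activation.
# The list and the length threshold are arbitrary illustrative choices.
VAGUE_WORDS = {"helps", "stuff", "things", "misc", "general"}


def description_looks_weak(description: str) -> bool:
    """Flag descriptions too short or too generic to drive activation."""
    words = description.lower().split()
    if len(words) < 4:  # too short to carry any real signal
        return True
    return bool(VAGUE_WORDS & set(words))
```

A check like this cannot confirm that a description is good, only that it is obviously bad; the real test remains whether Claude actually picks the skill up on a representative task.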
Scope is the second common problem. A skill that tries to handle data validation, model training, and documentation all at once is hard to invoke correctly and hard to maintain. One skill, one job.
Metaskill: skill-creator
Once you have a few skills in your project, a natural next step is to standardize how new ones get written. The most effective way to do that is with a skill that creates other skills — a skill-creator.
A skill-creator skill contains instructions that tell Claude how to author skill files: what frontmatter fields are required, how the description should be phrased, what level of detail belongs in the prompt body, and any project-specific standards your team follows. Once it exists, you can say “create a skill for validating dataset schemas” and Claude generates a properly structured .md file that matches your conventions.
```markdown
---
name: skill-creator
description: Create a new Claude Code skill file following project conventions
---

When asked to create a new skill, create a directory in `.claude/skills/` and place a `SKILL.md` file inside it:

Frontmatter:
- `name`: kebab-case, describes the action (e.g. `validate-schema`, `summarize-notebook`)
- `description`: one sentence, starts with a verb, specific enough for automatic activation

Body:
- Numbered steps for multi-part tasks
- Specify output format explicitly if the result needs a consistent structure
- No open-ended instructions — every step should have a clear stopping condition

Save the file and confirm the path. Do not activate the skill until the user reviews it.
```
Every new skill follows the same format. The collection stays consistent without anyone having to enforce it.
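The same conventions can also be enforced in plain code, for example as a scaffolding script run outside Claude entirely. The sketch below writes a minimal `SKILL.md` in the layout described above; `scaffold_skill` is a hypothetical helper shown only to make the output format concrete.

```python
from pathlib import Path

# Minimal SKILL.md layout: frontmatter, then the prompt body.
TEMPLATE = """---
name: {name}
description: {description}
---

{body}
"""


def scaffold_skill(name: str, description: str, body: str,
                   root: str = ".claude/skills") -> Path:
    """Write a new SKILL.md under root/<name>/ and return its path.

    A hypothetical helper, not part of Claude Code.
    """
    skill_dir = Path(root) / name
    skill_dir.mkdir(parents=True, exist_ok=True)
    path = skill_dir / "SKILL.md"
    path.write_text(TEMPLATE.format(name=name, description=description, body=body))
    return path
```

Whether the scaffolding lives in a script like this or in the skill-creator skill itself matters less than having exactly one source of truth for the format.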