Most AI workflows are one giant instruction set — brittle, expensive, and impossible to maintain. A Skills Graph replaces that with a modular, composable architecture where specialized SKILL.md files load on demand, reference each other, and evolve independently.
When AI workflows grow, most teams respond by making the system prompt longer. This works until it doesn't — context windows bloat, instructions conflict, and the model's attention dilutes across thousands of tokens it doesn't need for the current task.
The Skills Graph inverts this. Instead of front-loading every capability, you mount only the skills relevant to the task at hand. The system prompt becomes an index; execution reads the playbook.
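"The system prompt becomes an index" means the prompt carries only a name and a trigger description per mounted skill. A hypothetical three-entry excerpt, using skills that appear later in this article:

```markdown
## Mounted Skills
- chrome-extension-builder: builds Manifest V3 Chrome
  extensions. Use when building, debugging, or
  publishing extensions.
- pptx: generates slide decks via python-pptx. Use for
  any .pptx deliverable.
- cc-handoff: packages conversation context for Claude
  Code. Trigger: /cc-handoff or "package this for
  Claude Code".
```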
A Skills Graph is organized into three layers. Public skills are stable, domain-agnostic utilities anyone can use. User skills encode your personal workflow, voice, and project context. Example skills are patterns and templates for building new capabilities.
The graph visualization below shows a real implementation with 27 mounted skills; edges mark documented cross-references between skills.
The tier a skill belongs to determines its scope, audience, and maintenance ownership.
Public skills: domain-agnostic utilities maintained by a platform or team. Any workflow can mount these, and they never contain user-specific context.
User skills: workflow-specific skills that encode personal voice, project stacks, brand rules, and domain expertise. This is the most valuable tier, and the hardest to replicate.
Example skills: reference implementations and patterns for building new skills. This tier also includes specialized tools that are needed less often but are critical when triggered.
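On disk, the tiers are simply sibling directories. A minimal layout, using skill names drawn from the examples later in this article:

```
skills/
├── public/
│   ├── pptx/
│   └── pdf-reading/
├── user/
│   ├── chrome-extension-builder/
│   ├── design-elevation/
│   └── cc-handoff/
└── examples/
    └── skill-creator/
```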
Every skill is a plain Markdown file with a YAML frontmatter block and a structured workflow body. The model reads the file at invocation time — no preprocessing, no embedding, no separate retrieval step. The file is the skill.
Frontmatter: a YAML block with name and description fields. The description is the trigger signal; it's what the system matches against the user's request to decide whether to load this skill.
Philosophy: one paragraph establishing the mental model for the skill. It orients the model before it reaches the procedural instructions, and is often the most important section.
Workflow: step-by-step instructions broken into numbered phases. Each phase is discrete and testable. Complex skills reference external files rather than embedding everything inline.
References: external Markdown files the skill reads on demand. This is the primary token-optimization mechanism; detailed lookup tables, templates, and examples load only when the phase that needs them executes.
Quality gates: verification checklists embedded at the end of the workflow. The model self-reviews against these criteria before delivering output.
Assembled, a complete SKILL.md:

```markdown
---
name: chrome-extension-builder
description: Expert Chrome extension developer
  specializing in Manifest V3, service workers,
  and modern extension architecture. Use when
  building, debugging, or publishing extensions.
---

# Chrome Extension Builder

## Core Requirements

Target: Manifest V3 only. No MV2 patterns.
Output: Complete file tree, ready to load.
Testing: Include chrome://extensions steps.

## Workflow

### Step 1: Clarify Scope
Determine permissions, host access,
and background service worker needs.

### Step 2: Scaffold Architecture
See: references/mv3-patterns.md

### Step 3: Implement & Validate
See: references/debug-checklist.md

### Step 4: Quality Verification
☐ manifest.json passes validation
☐ No deprecated MV2 APIs used
☐ CSP headers declared correctly
☐ Permissions follow least-privilege
☐ Loads clean in chrome://extensions
```
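The See: lines point at files that live beside the skill, so this skill's folder looks like:

```
skills/user/chrome-extension-builder/
├── SKILL.md
└── references/
    ├── mv3-patterns.md
    └── debug-checklist.md
```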
How the system resolves a user request into skill invocations. Each trace shows which skills fire, which reference files load, and in what sequence — making the execution path auditable.
"Build me a Chrome extension that highlights todos on any page"
"Chrome extension" → loads skills/user/chrome-extension-builder/SKILL.md
Scope clarification — content script needed, no background worker, host_permissions: <all_urls>
Loads references/mv3-patterns.md — manifest scaffold, content script template, CSP rules
Generates complete file tree: manifest.json, content.js, popup.html, styles.css
Loads references/debug-checklist.md — chrome://extensions load steps, common MV3 errors
Quality gate: manifest validates, no MV2 APIs, CSP declared, least-privilege permissions
Complete extension folder — load unpacked at chrome://extensions and it runs
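Rendered as a file tree, the deliverable from step 4 would look something like this (the folder name is illustrative):

```
todo-highlighter/
├── manifest.json
├── content.js
├── popup.html
└── styles.css
```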
"Create a pitch deck for the 2 Squirrels AI consulting offer"
"deck" → matches pptx + "visual deliverable" → triggers design-elevation automatically
Reads skills/public/pptx/SKILL.md — establishes python-pptx workflow, slide structure, and file output pattern
Reads skills/user/design-elevation/SKILL.md — applies interrogation checklist before any code is written
design-elevation reads references/technique-catalog.md and selects slide-appropriate techniques
.pptx file — design-elevated, downloadable, with proper layout hierarchy applied
"/cc-handoff" or "package this for Claude Code"
Slash command /cc-handoff — direct match, no ambiguity. Loads skills/user/cc-handoff/SKILL.md
Scans conversation context — identifies project type, tech stack, scope from prior messages
Loads references/stack-defaults.md — pre-populated project profiles for auto-filling common fields
Loads references/output-template.md — 14-section document structure for the handoff prompt
Structured .md handoff document — paste-ready for Claude Code CLI, zero round-trips
"Create a new skill for writing technical RFCs in our company format"
"create a skill" → loads skills/examples/skill-creator/SKILL.md — the skill for building skills
skill-creator conducts intake: name, trigger phrases, output format, references needed, quality gates
Generates SKILL.md with frontmatter, philosophy block, phased workflow, and reference stubs
Complete skill file tree — ready to drop into skills/user/rfc-writer/ and mount
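As a sketch of what the generated skill might open with (the description wording and the reference stub are illustrative, not output from a real run):

```markdown
---
name: rfc-writer
description: Writes technical RFCs in the company
  format. Use when drafting, revising, or reviewing
  an RFC or design-proposal document.
---

# RFC Writer

## Workflow

### Step 1: Intake
Confirm audience, decision needed, and deadline.

### Step 2: Draft
See: references/rfc-template.md
```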
The most powerful aspect of a Skills Graph is cross-referencing — one skill explicitly invoking another as a sub-skill. This creates emergent capability from composition without duplicating instructions.
Cross-links are declared in the workflow body: → invoke: skills/public/pptx/SKILL.md before generating slides
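In context, a cross-link is just a line inside a workflow phase. A hypothetical phase from a deck-building skill might read:

```markdown
### Step 3: Produce the Deck
→ invoke: skills/public/pptx/SKILL.md before generating slides
→ invoke: skills/user/design-elevation/SKILL.md for visual treatment
```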
pptx → design-elevation: every slide deck triggers design elevation automatically; pptx handles structure, design-elevation handles aesthetics.
PDF creation → pdf-reading: the PDF-creation skill references pdf-reading when input files need parsing first.
chrome-extension-builder → mcp-builder: the extension builder references mcp-builder when the extension needs to communicate with an MCP server backend.
design-elevation → brand guidelines: the design skill reads the brand guidelines when the output needs to stay on-brand.
skill-creator → system-prompt-optimizer: when creating large skills, skill-creator invokes system-prompt-optimizer to reduce the token footprint.
cc-handoff → flutter-expert: when generating a handoff for a Flutter project, cc-handoff reads flutter-expert for stack context.
file router → pdf-reading: the generic file router delegates to pdf-reading when the uploaded file is a PDF.
co-authoring → docx: the co-authoring workflow chains to the docx skill for final document production.
These principles emerged from building and maintaining a 27-skill graph over several months. Violating any of them predictably leads to the same problems monolithic prompts create.
One skill, one purpose. A skill should have a single, clear purpose expressible in one sentence. If the name requires "and" to describe it, split it.
Descriptions are contracts. The description field determines when the skill fires; it's a contract between the skill author and the matching system. Treat it like a unit test: specific, unambiguous, tested (see the sketch after this list).
Keep SKILL.md lean. Put detailed lookup tables, templates, and examples in reference files. The main SKILL.md should read like a workflow spec, not a dictionary. Reference files load on demand.
Document every dependency. If skill A ever invokes skill B, that dependency must be documented in A's workflow body. Undocumented cross-dependencies are bugs waiting to surface at the worst moment.
Quality gates, not vibes. Every skill must end with a concrete verification checklist. "Does this look good?" is not a quality gate; specific, binary pass/fail criteria are.
Audit on a schedule. Skills drift: descriptions stop matching their actual behavior, and cross-links go stale. Build a periodic audit into your workflow; review the full graph, test trigger accuracy, prune dead skills.
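To make the contract concrete, compare a hypothetical weak description with the specific one from the chrome-extension-builder example above:

```yaml
# Weak: fires on almost anything browser-shaped
description: Helps with Chrome stuff.

# Strong: names the domain and the trigger verbs
description: Expert Chrome extension developer
  specializing in Manifest V3, service workers,
  and modern extension architecture. Use when
  building, debugging, or publishing extensions.
```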
You don't need special tooling. A Skills Graph is a directory structure, a naming convention, and a discipline.
1. Inventory your repeated tasks. List every repeated task you give an AI assistant and group them by domain. These are your proto-skills: any task you've explained more than three times should be a skill.
2. Scaffold the tiers. Create skills/public/, skills/user/, and skills/examples/. Start with user skills; they'll have the highest ROI since they encode your specific context.
3. Write your first skill. Start with a skill you use daily, and write the frontmatter description first. If you can't write a clear trigger description, you don't understand the skill's scope yet, and that's valuable information.
4. Extract references. Any section of a skill longer than 20 lines that could stand alone is a reference file. Create a references/ subdirectory per skill and move it there; the skill should reference it by path.
5. Map the graph. Once you have 5+ skills, draw the dependency graph. Which skills naturally chain? Document those relationships in the workflow bodies. Now you have a graph, not a collection.
6. Let the graph grow itself. Create a skill-creator skill that runs intake, generates the SKILL.md structure, and outputs a complete file tree.
7. Audit quarterly. A 27-skill graph without auditing becomes 27 independent experiments with conflicting assumptions. Every quarter: test every trigger description, retire unused skills, update cross-link documentation. A sample checklist follows.
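A quarterly audit can reuse the same checklist style the skills themselves end with; a minimal sketch:

```markdown
## Quarterly Graph Audit
☐ Every trigger description fires on its intended phrases
☐ Every cross-link resolves to an existing SKILL.md
☐ Unused skills identified and retired or archived
☐ Reference files still match the phases that load them
```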