
12. Bringing Skills into the Development Workflow

The real value of a skill does not appear when it is used in isolation. It appears when it is connected to key points in the development workflow and becomes part of a quality pipeline.

12.1 Local Development: Makefile-Driven Quality Gates

The go-makefile-writer skill helps us generate standardized Makefiles for Go projects. The core idea is simple: developers should be able to catch locally everything that CI would later catch.

Take the issue2md project as an example. Its ci target directly mirrors the CI flow:

ci: fmt-check ci-core  ## Local/CI required gate parity checks

ci-core: cover-check lint build-all  ## Run coverage gate, lint, build

Running make ci locally and running make ci COVER_MIN=80 in CI are really the same command path. This is what the go-ci-workflow skill calls "Local Parity."
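To make the parity concrete, here is a minimal sketch of what a cover-check recipe might do. The COVER_MIN variable comes from the Makefile above; the recipe body and the hardcoded coverage number are assumptions for illustration only (a real recipe would parse the total from `go tool cover -func`):

```shell
# Sketch of the coverage gate behind `make cover-check` (assumed recipe).
# COVER_MIN defaults locally; CI overrides it with: make ci COVER_MIN=80
COVER_MIN="${COVER_MIN:-70}"
total=73.5   # a real recipe would parse this from `go tool cover -func`
if awk -v t="$total" -v min="$COVER_MIN" 'BEGIN { exit (t + 0 >= min + 0) ? 0 : 1 }'; then
  result="coverage OK: ${total}% >= ${COVER_MIN}%"
else
  result="coverage ${total}% is below the ${COVER_MIN}% gate"
fi
echo "$result"
```

Because the same recipe runs locally and in CI, tightening the gate is a one-variable change on the CI side rather than a divergence between the two environments.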

12.2 CI Workflow: Skill-Driven Automated Gates

A CI workflow should be a thin wrapper around Makefile targets. Using the issue2md project as an example (adapt tool versions and target names for your own repository):

# .github/workflows/ci.yml (example; adapt as needed)
jobs:
  ci:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-go@v5
        with:
          go-version-file: go.mod
      - name: Install golangci-lint
        run: go install github.com/golangci/golangci-lint/v2/cmd/golangci-lint@v2.6.2
      - name: Run CI gate
        run: make ci COVER_MIN=80

  docker-build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Build image
        run: make docker-build

Each CI job delegates to one Makefile target. Developers run make ci locally, and CI runs make ci too. The workflow is consistent end to end.

Figure: issue2md CI pipeline run, triggered by the fix(github) commit from §8.1; all 6 jobs passed in 1m 29s.

This exact CI run was triggered by the commit created with the git-commit skill in §8.1. From local quality gates to all-green CI, the whole chain is skill-driven.

12.3 Code Review: AI-Driven PR Review

The go-code-reviewer skill can also be integrated into CI to run automated code review on every PR. Using the tcg-ucs project as an example (adapt the skill path, prompt text, and tool permissions for your own repository):

# .github/workflows/claude-code-review.yml (example; adapt as needed)
name: Claude Code Review
on:
  pull_request:
    branches: [master, main]
    types: [opened, synchronize]

jobs:
  review:
    runs-on: ubuntu-latest
    permissions:
      contents: read
      pull-requests: write
    steps:
      - uses: actions/checkout@v4
      - uses: anthropics/claude-code-action@v1.0.3
        with:
          anthropic_api_key: ${{ secrets.ANTHROPIC_API_KEY }}
          prompt: |
            Read .claude/skills/go-code-reviewer/SKILL.md,
            get the PR diff via `gh pr diff`,
            select review mode (Lite/Standard/Strict),
            execute the full review workflow,
            post findings as PR comments.

The workflow YAML only handles orchestration. All review logic lives inside the skill. If you want to update review standards, you update the skill file, not the CI workflow.
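The Lite/Standard/Strict selection mentioned in the prompt can be sketched as a simple diff-size heuristic. The thresholds below are assumptions for illustration, not the skill's actual values:

```shell
# Pick a review mode from diff size; thresholds are illustrative only.
lines=420   # in CI this might come from: gh pr diff | wc -l
if [ "$lines" -lt 200 ]; then
  mode=Lite       # small change: quick scan
elif [ "$lines" -lt 800 ]; then
  mode=Standard   # typical PR: full checklist
else
  mode=Strict     # large change: deep review
fi
echo "review mode: $mode"
```

Keeping this selection logic inside the skill (rather than the workflow YAML) is what lets the team retune thresholds without touching CI.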

12.4 The Full Quality Pipeline

When the stages above are connected, they form a complete quality pipeline from coding to merge:

Coding
make fmt / make lint (local quality checks, generated by go-makefile-writer)
git commit (git-commit skill: secret scan + quality gates + standardized message)
git push
create PR (create-pr skill: 8 gates + structured PR body)
CI triggered
  ├── make ci (format + test + lint + coverage + build)
  ├── make docker-build (container image validation)
  ├── Claude Code Review (go-code-reviewer skill: automated code review)
  └── govulncheck (dependency vulnerability scan)
Human review + merge

Each stage is backed by a corresponding skill, and local behavior stays aligned with CI behavior.
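As an illustration of the git commit stage above, the secret scan can be approximated as a pattern check over the staged diff. The patterns, variable names, and sample input here are assumptions for the sketch, not the git-commit skill's actual implementation:

```shell
# Toy secret scan, mirroring the git-commit skill's pre-commit check.
staged='export AWS_SECRET_ACCESS_KEY=abc123'   # a real hook would use: git diff --cached
case "$staged" in
  *SECRET*|*"PRIVATE KEY"*|*password=*) verdict="blocked: possible secret in staged changes" ;;
  *)                                    verdict="clean" ;;
esac
echo "$verdict"
```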

12.5 Team Adoption Strategy: Gradual Rollout and Handling Resistance

The technical integration of a skill is only half the problem. The other half is team buy-in. Even the best tool gets shelved if the rollout triggers resistance. The following graduated approach has been validated in practice. The guiding principle is simple: prove value first, then codify as process.

Phase 1: Individual pilot (weeks 1–2)

One or two team members who are already comfortable with AI tooling use the skill in their daily work, collecting real positive examples and genuine problem reports. The goal at this stage is not adoption — it is accumulating evidence that persuades the rest of the team. Without real-world proof, any evangelism rings hollow.

Phase 2: Opt-in availability (weeks 2–4)

Commit the skill to the project's .claude/skills/ directory, but do not mandate its use. At a team meeting, share the concrete examples gathered in Phase 1 — for instance, "the skill caught a credential leak before it reached the repo." The cardinal rule here is: persuade through evidence, not by mandate. Pushing adoption by decree tends to generate backlash and slows uptake.

Phase 3: CI gate integration

Once most team members are voluntarily using the skill, integrate it into the CI pipeline as a quality gate. At this point the team already has a mental model of how the skill behaves, so the integration does not create surprises. Adding the gate too early is a common mistake — setting a hard block before trust is established turns the skill into "that annoying obstacle" rather than a safety net.

Common objections and how to address them

| Objection | Response |
| --- | --- |
| "The skill slows down my commits" | First check whether false positives are high; tune the skill before broadening adoption |
| "I don't trust AI-generated review comments" | Start with advisory mode (output as suggestions, not blockers) during the early phases |
| "The rules are too rigid for my use case" | Add reasonable degradation and opt-out paths to the skill |
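The advisory mode suggested for the trust objection can be as simple as a flag that decides whether findings fail the build. The flag name and values below are assumptions for the sketch:

```shell
# Advisory vs blocking review gate; REVIEW_MODE is an assumed knob.
REVIEW_MODE="${REVIEW_MODE:-advisory}"
findings=2   # in CI this would come from the reviewer's output
echo "review found $findings issue(s)"
if [ "$REVIEW_MODE" = "blocking" ] && [ "$findings" -gt 0 ]; then
  exit 1     # hard gate: flip to this once the team trusts the tool
fi
echo "mode=$REVIEW_MODE: findings reported, build not failed"
```

Flipping the default from advisory to blocking is then the Phase 3 step, and it changes one variable rather than the review logic itself.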

Gradual adoption is ultimately an engineering problem requiring the same care as writing the skill itself: manage expectations, build trust incrementally, and reduce friction at every step.


13. How Skills Relate to Other Claude Code Features

Claude Code provides multiple extension mechanisms, each with its own purpose. Understanding the differences is the key to using them correctly.

13.1 Feature Comparison

| Feature | When It Loads | Best Use | Context Cost |
| --- | --- | --- | --- |
| CLAUDE.md | Auto-loaded in every session | Project-wide conventions: coding style, commit rules, tech stack notes, build commands | Paid on every request |
| Rules (.claude/rules/) | Every session, or when matching files are opened | Fine-grained rules for specific file types or directories | Conditional cost |
| Skill | Loaded on demand (description is always in context; body loads only when used) | Reusable domain knowledge and workflows | Low (unused skills cost nothing) |
| Sub-agent | When scheduled | Isolated parallel subtasks that prevent the main session from bloating | Separate context, does not affect the main session |
| MCP | When the session starts | Connecting to external services such as GitHub API, Gmail, or databases | Paid on every request |
| Hook | On events | Deterministic automation, such as formatting before commit or lint on save | Zero (never enters AI context) |
| Custom slash command | When the user types / | Largely absorbed by skills now; .claude/commands/ is still supported | Same as skill |

13.2 Selection Decision Tree

Does this knowledge/workflow need to apply in every session?
├── Yes → Is it shorter than 200 lines?
│   ├── Yes → Put it in CLAUDE.md
│   └── No → Split it into .claude/rules/ or extract it into a skill
├── No → Does it need explicit user invocation?
│   ├── Yes → Does it have side effects?
│   │   ├── Yes → Skill + disable-model-invocation: true
│   │   └── No → Skill (default)
│   └── No → Is it deterministic logic (such as formatting or lint)?
│       ├── Yes → Hook (zero AI context cost)
│       └── No → Does it need external services?
│           ├── Yes → MCP server
│           └── No → Skill + user-invocable: false (background knowledge)
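The leaf choices in the tree map directly onto skill frontmatter fields. A sketch of what the side-effect branch might look like; the skill name and description text are hypothetical, while the field names come from the tree above:

```yaml
# .claude/skills/db-migrate/SKILL.md frontmatter (hypothetical skill)
---
name: db-migrate
description: Run database schema migrations for this project.
disable-model-invocation: true   # side effects: only the user may invoke it
---
```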

13.3 Common Composition Patterns

These features are not mutually exclusive. In practice, the best results come from combining them:

| Combination | Effect |
| --- | --- |
| CLAUDE.md + Skill | CLAUDE.md defines global conventions (commit format, logging style), while the skill provides the concrete workflow (how git-commit should execute) |
| Skill + Sub-agent | Set context: fork in the skill so complex analysis runs in isolated context and does not blow up the main session |
| Skill + MCP | The skill defines the operating flow, while MCP provides the external tools (for example, the skill defines the PR creation process and MCP connects to GitHub) |
| Skill + Hook | The skill defines code review standards, while a hook runs formatting and static checks automatically before git commit |
| Skill + Skill | Skills can reference each other (for example, go-ci-workflow can invoke $go-makefile-writer to repair missing Makefile targets) |

13.4 One-Sentence Summary

CLAUDE.md is always-on background knowledge, a skill is on-demand professional capability, a hook is zero-cost deterministic protection, MCP is the bridge to external systems, and a sub-agent is an isolated parallel worker.

Put the right knowledge in the right place. That way, the AI has the context it needs at the right moment without wasting the context window on irrelevant information.


14. A Cross-Tool Comparison of AI Coding Assistant Customization

Skills are not the only way to customize an AI coding assistant. Looking at other tools in the market helps explain both the design philosophy behind skills and their unique strengths.

14.1 Customization Capabilities of Mainstream Tools

| Capability | Claude Code (Skills) | Cursor (Rules) | GitHub Copilot | Windsurf |
| --- | --- | --- | --- | --- |
| Config files | CLAUDE.md + .claude/rules/ + skills | .cursor/rules/ | .github/copilot-instructions.md | .windsurfrules |
| On-demand loading | Yes (description trigger + selective references) | Yes (glob matching) | No | No |
| Callable workflows | Yes (/skill-name + argument passing) | Limited | No | No |
| Sub-agent isolation | Yes (context: fork) | No | No | No |
| Dynamic context injection | Yes (!`command` preprocessing) | No | No | No |
| Script wrapping | Yes (scripts/ directory) | No | No | No |
| Tool-permission control | Yes (allowed-tools allowlist) | No | No | No |
| Cross-tool interoperability | Yes (Agent Skills Open Standard) | No | No | No |
| Distribution model | Enterprise / personal / project / plugin levels | Project level | Project level | Project level |

14.2 Comparing AI Code Review Tools

There are also several options in the CI-integrated code review space:

| Tool | Adoption | Depth of Customization | Typical Use |
| --- | --- | --- | --- |
| GitHub Copilot Code Review | 60M+ reviews, 12,000+ organizations | Low (customized through .github/copilot-instructions.md) | Works out of the box; now covers 20%+ of code review activity on GitHub |
| CodeRabbit | 2M repositories, 10,000+ paying customers | Medium (YAML config + rules) | Install the GitHub App and use it directly |
| Claude Code + custom skills | Early adoption stage | High (full system of gates, anti-examples, progressive disclosure, and more) | Requires engineering effort, but offers the highest ceiling for customization depth and review quality |

Copilot and CodeRabbit win on out-of-the-box adoption and scale. Claude Code + skills win on customization depth and domain fit. For teams with domain-specific review requirements, such as Go concurrency safety or resource-lifecycle management, the latter often has better return on investment.

14.3 Trend Outlook

AI code review has already moved from "novelty" to "infrastructure." Copilot Code Review now covers more than 20% of code review volume on GitHub. But in terms of depth of customization, the industry is still early:

  • Most teams: use Copilot or CodeRabbit with default settings and little deep customization
  • More advanced teams: pass project-level rules through configuration files such as .github/copilot-instructions.md
  • Leading-edge practice: build full domain review systems with custom skills, including gates, anti-examples, version awareness, and degradation strategies

The gap between "using AI" and "using AI well" is exactly the set of design patterns and engineering practices discussed in this guide.