Marketplace-Style Skill References for Copilot Workflows
An Open Letter to AI Coding Assistant Providers
GitHub revolutionized DevOps automation with Actions. The `uses:` directive, semantic versioning, and enterprise governance controls became the standard for sharing and composing workflows. Every major CI/CD platform followed suit.
Agent Skills are following the same trajectory. Anthropic launched them, GitHub Copilot adopted them, and the ecosystem is accelerating. But we’re missing the critical infrastructure that made Actions successful: marketplace-style references with versioning and governance.
Below is an open letter to all AI coding assistant providers, GitHub included, requesting formal consideration of marketplace-style skill references. The architecture is drawn from a feature request I submitted to GitHub, but the problem it solves applies to every platform running agentic workflows at enterprise scale.
The Problem: Scaling Skills Across 350+ Repositories
In enterprise environments with 350+ microservices, deploying a new capability, Agent Skill, or workflow pattern presents a consistent challenge: how do we distribute it consistently, version it safely, and audit its usage?
GitHub Actions solved this with `uses: owner/repo@v4`. You reference actions declaratively, pin to specific versions, and rely on marketplace discovery. Security teams enforce allowlists. Developers compose workflows without copying code.
Agent Skills have none of this. Skills live in `.github/skills/` or `.claude/skills/` directories and are discovered via filesystem scanning. Want to use the same skill across 50 repositories? Copy it manually or set up git submodules. Need to audit which AI capabilities are deployed? Scan every repository by hand. Want to update a skill? Hope you catch every instance.
The gap is clear:
| Capability | GitHub Actions | MCP Servers | Agent Skills |
|---|---|---|---|
| Declarative YAML reference | Yes: `uses: owner/repo@v1` | Yes: `gh aw mcp add` | No: manual copy |
| Version pinning | Yes: SHA/tag/branch | Yes: container tags | No |
| Marketplace/registry | Yes: GitHub Marketplace | Yes: MCP Registry | No |
| Enterprise allowlists | Yes: Actions policies | Yes: MCP policies | No |
| Cross-repo sharing | Yes: native | Yes: native | Only via submodules |
Claude Code, Cursor, and every other platform supporting Agent Skills face the same limitations. The open Agent Skills specification (agentskills.io) defines skill structure but says nothing about distribution, versioning, or governance.
The Solution: Declarative Skill References
The solution mirrors Actions: introduce a `skills:` block in workflow frontmatter that references skills from a registry, with semantic versioning and policy controls.
Here’s what it looks like:
```markdown
---
name: incident-response-workflow
on:
  issues:
    types: [labeled]
skills:
  - uses: anthropics/skills/pdf@v1
  - uses: anthropics/skills/docx@v1.2.0
  - uses: my-org/internal-skills/incident-response@main
  - uses: my-org/observability-skills/datadog-triage@sha-abc1234
mcp-servers:
  datadog:
    url: "https://mcp.datadoghq.com/sse"
tools:
  github:
    toolsets: [default, actions]
---

Analyze incidents and generate reports using standardized procedures.
```
Actions established this pattern. Applied to skills, the benefits compound:
Versioning and reproducibility: Pin skills to specific versions or SHA hashes. Generate lockfiles (`.github/skills.lock.yml`) for deterministic builds.
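A lockfile for this scheme might look like the following. This is a sketch only: no lockfile specification exists today, so the file format, field names, and the example SHAs below are all assumptions.

```yaml
# .github/skills.lock.yml (hypothetical format)
version: 1
skills:
  - uses: anthropics/skills/pdf@v1
    resolved: anthropics/skills/pdf@v1.3.2
    sha: 4f2a9c1d8e7b6a5f4e3d2c1b0a9f8e7d6c5b4a39
  - uses: my-org/internal-skills/incident-response@main
    resolved: my-org/internal-skills/incident-response@main
    sha: abc1234def5678901234567890abcdef12345678
```

The key property mirrors package-manager lockfiles: a floating reference like `@v1` resolves to an exact tag and commit SHA, so every repository consuming the skill builds against the same bytes.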
Discovery: Extend the GitHub MCP Registry or create platform-specific registries. Search for skills, browse by category, and add them with CLI commands:
```bash
gh aw skill search "pdf processing"
gh aw skill add my-workflow anthropics/skills/pdf --version v1
gh aw skill list my-workflow
```
Governance: Apply the same policy framework used for Actions and MCP servers. Allow all skills, registry-only, or explicit allowlists. Audit skill usage across the organization. Enforce SHA pinning in protected branches.
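An organization-level policy could mirror the existing Actions permissions model. The sketch below is illustrative only: the file name, keys, and `mode` values are assumptions, not an existing GitHub setting.

```yaml
# Hypothetical org-level skills policy
skills-policy:
  mode: allowlist            # allow-all | registry-only | allowlist
  require-sha-pinning:
    branches: [main, release/*]
  allow:
    - anthropics/skills/*            # verified publisher
    - enterprise-org/devops-skills/*
```

With a policy like this in place, a workflow referencing a skill outside the allowlist would fail validation before execution, the same failure mode Actions policies enforce today.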
Composition: Build complex workflows by combining skills from multiple sources. Skills remain modular, testable, and independently maintained.
Real-World Impact: DevOps at Scale
In production environments using Datadog for observability across 350+ services, consistent incident triage procedures are critical: query metrics, analyze logs, identify recent deployments, and escalate appropriately.
Today, the `incident-triage` skill must be copied to each repository that needs it. Version drift is inevitable. Updates require pull requests to 50+ repositories. Auditing which repositories have which version requires custom tooling.
With marketplace-style references, this becomes:
```yaml
skills:
  - uses: enterprise-org/devops-skills/incident-triage@v2
  - uses: enterprise-org/devops-skills/datadog-analysis@v2
```
New repositories inherit these skills automatically via organization-level policies. Updates propagate by bumping the version number in a template. Security audits query the registry API. Deployment time drops from 2 hours per repository to 2 minutes.
The business impact is measurable:
| Metric | Current State | With Marketplace References |
|---|---|---|
| Skill deployment time | 2 hours/repo (manual) | 2 minutes (declarative) |
| Version consistency | 60% (drift across repos) | 100% (lockfile enforced) |
| Security audit capability | Manual scanning | Automated reporting |
| Cross-team skill sharing | Ad-hoc | Governed marketplace |
Technical Architecture: Build on What Works
The implementation can leverage existing infrastructure:
Registry backend: Extend the GitHub MCP Registry architecture (`api.mcp.github.com`) to serve skill metadata. The API patterns, policy engine, and allowlist mechanisms already exist.
Skill resolution order: Prioritize explicit skills: references, fall back to repository-local skills (.github/skills/), then organization-level managed skills, and finally enterprise-level defaults.
Security model: Require SHA pinning for production branches. Support optional skill signing for verified publishers. Inherit the existing Copilot coding agent sandbox model for execution.
Cross-platform compatibility: Skills authored for Claude Code work identically in GitHub Copilot, Cursor, or any platform implementing the Agent Skills specification. The registry serves metadata; each platform handles execution according to its capabilities.
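For reference, the open specification already gives platforms a common on-disk format: a `SKILL.md` file with YAML frontmatter. A minimal skill might look like the sketch below (the body content is illustrative, and frontmatter fields beyond `name` and `description` vary by platform):

```markdown
---
name: incident-triage
description: Query metrics, analyze logs, and identify recent
  deployments when an incident issue is labeled.
---

# Incident Triage

1. Query the relevant service dashboards for error-rate anomalies.
2. Correlate log volume spikes with recent deployments.
3. Summarize findings and recommend an escalation level.
```

What a registry would add on top is precisely the metadata this file lacks: owner, version, and content SHA, which the specification currently leaves undefined.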
The underlying technology already exists. These patterns made Actions successful; applying them to the next generation of automation is the natural evolution.
Why This Matters Now
Agent Skills are at an inflection point. Anthropic’s open specification, GitHub’s Copilot integration, and community momentum (awesome-copilot, skillsmp.com, awesome-claude-skills) signal that skills will become ubiquitous.
But without marketplace-style distribution, we're headed toward the same chaos that existed before package managers: vendored dependencies, version drift, and no central authority. Every organization will build custom solutions. Governance will be an afterthought. Security teams will struggle to maintain visibility.
GitHub Actions didn't just enable workflow automation; it established the standard for sharing, versioning, and governing automation at scale. The industry followed because the architecture was sound: declarative references, semantic versioning, marketplace discovery, and enterprise controls.
Agent Skills need the same foundation. The open specification provides structure. The community is creating skills. What’s missing is the distribution and governance layer that makes it safe to run AI agents in production.
The Request
To GitHub, Anthropic, JetBrains, Microsoft, and every provider building AI coding assistants:
Consider implementing marketplace-style skill references using declarative YAML syntax analogous to GitHub Actions. Build on the existing MCP Registry infrastructure. Apply the proven policy framework from Actions. Establish the standard before fragmentation occurs.
For GitHub specifically: you defined how the industry shares automation. The gh-aw framework, MCP Registry, and Copilot Agent Skills support are strong foundations. Extending the registry to include skill references with versioning and governance controls is the natural next step.
The timeline needs to be aggressive given the pace of ecosystem evolution:
- Q1 2026: RFC and design document for public comment
- Q2 2026: Private preview for enterprise customers with registry support
- Q3 2026: General availability with governance controls
Enterprise teams running at scale are prepared to serve as design partners, private preview participants, and production validation environments. Real-world governance challenges demand solutions now, not later.
The Industry Follows Leaders
GitHub defined how the industry shares and governs automation with Actions. Agent Skills are the next frontier, and enterprises need the same versioning, discoverability, and policy controls before AI agents scale across production environments.
The architecture is clear. The use cases are proven. The infrastructure largely exists. What’s needed is formal roadmap consideration with an RFC target of Q1 2026.
The ecosystem is moving. Someone will establish the standard for skill distribution and governance. It should be the platforms that pioneered the underlying technology.
Robert Allen
Open Source Maintainer
zircote/swagger-php, zircote/git-adr, zircote/marketplace, zircote/structured-madr
The complete feature request with technical specifications is available at gist.github.com/zircote/c202e4d9f87215e4a5191b45543bb8cf