Friday Roundup - Week 2: The Tooling Flywheel
The AI development ecosystem doesn’t stand still. This week brought incremental improvements that, taken together, show where the industry is headed: modular, typed, and increasingly agentic.
Claude Code: Incremental Progress
Claude Code 2.10 and 2.12 landed quietly, but the changes matter if you’re using it daily. The releases focus on stability improvements and better context handling—less flashy than new features, but more valuable when you’re deep in a multi-step refactoring.
The hook system I wrote about last week continues to prove its value. As the base tool gets more stable, custom tooling built on top compounds those gains. That’s the flywheel effect: better foundations enable better extensions, which increase adoption, which justifies more foundation work.
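For readers who missed that post: a hook is just a shell command wired into the agent's lifecycle via settings. Here is a minimal sketch of a post-edit formatting hook in .claude/settings.json (schema from memory, so double-check it against the official docs; format.sh is a placeholder for whatever script you want to run):

{
  "hooks": {
    "PostToolUse": [
      {
        "matcher": "Edit|Write",
        "hooks": [
          { "type": "command", "command": ".claude/hooks/format.sh" }
        ]
      }
    ]
  }
}

Every time the agent edits or writes a file, the command runs automatically. Small, boring, and exactly the kind of extension that benefits from a more stable base.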
What’s notable isn’t what changed—it’s that the changes are becoming more surgical. Early releases introduced big swings in behavior. Recent updates feel like the work of a team that knows what it’s optimizing for.
Gemini CLI Skills: Modularity Done Right
Google’s Gemini CLI Skills feature introduces something other CLI tools should study: modular, on-demand expertise for AI agents.
Instead of front-loading every possible instruction into initial context (burning tokens and slowing response time), skills are discrete folders containing specialized workflows, examples, and optionally scripts or data files. The CLI discovers them automatically and loads only what’s needed.
# List available skills
gemini skills list
# Enable a skill for your project
gemini skills enable security-audit --scope project
# Create a custom skill
mkdir -p .gemini/skills/my-workflow
echo "# My Workflow Instructions" > .gemini/skills/my-workflow/SKILL.md
The architecture is simple: .gemini/skills/ directories at project, user, or extension scope, with precedence favoring project-specific over global. A SKILL.md file contains the instructions. That’s it.
Why this matters: As AI agents get more capable, context management becomes the bottleneck. Skills implement progressive disclosure: the agent sees skill descriptions initially, then loads full details only when relevant. This scales far better than monolithic instruction files.
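To make that concrete, here is roughly what the security-audit skill from the commands above might contain. The frontmatter fields reflect the convention I've seen in my own experiments; treat this as a sketch rather than Google's canonical format:

---
name: security-audit
description: Review changed files for common vulnerability patterns before commit.
---

# Security Audit Workflow

1. Scan the diff for string-concatenated SQL, use of eval, and disabled TLS verification.
2. Check new dependencies against the project's allowlist.
3. Report findings with file and line references, ordered by severity.

The description line is all the agent sees up front; the numbered workflow only enters context when the skill is actually invoked.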
I’ve started porting some of my Claude Code hooks to Gemini skills as an experiment. The mental model feels cleaner—each skill is self-contained, testable, and composable.
TypeScript Overtakes Python: AI Drives the Shift
GitHub’s Octoverse 2025 confirms what many suspected: TypeScript became the most-used language on GitHub in 2025, surpassing Python and JavaScript.
The driver? AI-generated code.
When AI assistants produce thousands of lines automatically, type systems become the safety net. According to research cited in GitHub’s Octoverse analysis, 94% of LLM-generated compilation errors were type-check failures—issues that strong typing surfaces before runtime.
The trend isn’t just TypeScript. Rust, Go, C#, and Java all show similar growth patterns. Dynamic languages still dominate prototyping and data science, but production codebases increasingly default to typed languages.
The shift in mental model: Type safety isn’t about catching developer mistakes anymore—it’s about validating AI output. The economics have changed. When a junior developer writes buggy code, you coach them. When an AI generates buggy code, you need automated checks.
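A toy example (mine, not from the cited study) of the class of error the type checker intercepts:

// user.ts: AI-generated API clients often type ids as strings
interface User {
  id: number;
  name: string;
}

function loadUser(raw: { id: string; name: string }): User {
  // Without the Number() conversion, tsc rejects this line with
  // "Type 'string' is not assignable to type 'number'" before anything runs.
  return { id: Number(raw.id), name: raw.name };
}

console.log(loadUser({ id: "42", name: "Ada" }));

Trivial in isolation, but multiplied across thousands of generated lines, that compile-time gate is the difference between a red CI run and a production incident.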
Major frameworks made this inevitable. React, Next.js, Angular, SvelteKit, and Astro all generate TypeScript by default now. The path of least resistance is typed. Tools like Vite and Bun hide the complexity, making TypeScript approachable even for beginners.
Nearly 80% of new developers use Copilot within their first week. They’re learning in a typed ecosystem from day one. That’s a generational shift.
2025 in Review: GitHub’s Top Posts
GitHub compiled its most-read developer posts of 2025, and three themes dominate:
1. Agentic AI
GitHub Copilot’s “agent mode” evolved from autocomplete to autonomous iteration. The agent recognizes errors, proposes fixes, and iterates—proactive problem-solving, not just suggestion.
Agent HQ, announced at GitHub Universe 2025, brought agents from Anthropic, OpenAI, Google, Cognition, and xAI directly into GitHub. Mixed teams of AI agents collaborating across organizations. That’s not a demo anymore; it’s shipping to paid subscribers.
2. Model Context Protocol (MCP)
MCP standardizes how AI agents communicate with tools and environments. The ecosystem grew fast in 2025, with more than 1,000 servers providing access to APIs, databases, and development tools.
Interoperability was the missing piece. Now your Claude Code hooks can use the same MCP servers as your Gemini CLI skills. The ecosystem stops fragmenting and starts composing.
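Concretely, registering a server is a few lines of configuration. Here is a sketch of a project-scoped .mcp.json for Claude Code using the official filesystem server (the paths and server name are illustrative; Gemini CLI takes an equivalent mcpServers entry in its own settings file):

{
  "mcpServers": {
    "project-files": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "./src"]
    }
  }
}

Same server, same wire protocol, different agents on the other end.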
3. Spec-Driven Development
Instead of coding first and documenting later, spec-driven development inverts the process: start with a clear specification so that humans and AI agents align from the outset.
GitHub’s Spec Kit toolkit codifies this approach. Write the spec, validate the plan, then implement. It’s TDD for architecture—and it works particularly well when AI agents are co-authors.
# Example: agents.md specification
name: API Refactoring Agent
goal: Refactor REST endpoints to use OpenAPI 3.1
constraints:
- Maintain backward compatibility
- Add comprehensive tests
- Update documentation inline
tools:
- git-adr
- swagger-php
- pytest
The industry moved from “AI writes code” to “AI collaborates on software design.” That’s a maturity inflection point.
What It Means
These aren’t disconnected updates—they’re the same pattern repeating across vendors:
- Modularization: Skills, hooks, MCP servers—the winning architecture is composable.
- Type safety: AI output needs automated validation; typed languages provide it.
- Agentic workflows: AI agents don’t just generate code—they iterate, test, and fix autonomously.
- Standardization: MCP and spec-driven approaches reduce fragmentation, enable ecosystems.
The flywheel is turning. Better tools enable more ambitious workflows, which surface new requirements, which drive better tools.
If you’re building AI tooling—or just using it—pay attention to modularity and interoperability. The vendors betting on composability (Gemini skills, Claude hooks, MCP) are moving faster than those building monoliths.
Looking Ahead
Next week I’ll dig into practical patterns for integrating MCP servers with existing CI/CD workflows. If you’re using GitHub Actions and want to add agentic validation, that’s the stack.
What tools are you experimenting with this week? Anything surprising? I’m particularly interested in novel uses of MCP servers outside the usual IDE integrations.