AI Agent Skills: Claude, Cursor & Antigravity Compared
Converting rules into skills—and why it matters for your workflow.
Remember when we thought "prompt engineering" would be the hard part? Turns out the real challenge is teaching AI to remember things between conversations. Enter: Skills.
Skills are essentially structured knowledge bundles that persist across sessions. Instead of re-explaining your stack, conventions, and preferences every time, you define them once and the AI references them automatically.
The Three Contenders
Right now, three major platforms support skills in meaningfully different ways:
1. Claude Skills (Anthropic)
Anthropic open-sourced their skills framework on GitHub. Skills are markdown files with YAML frontmatter defining triggers, descriptions, and structured instructions.
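To make that concrete, here's a minimal sketch of what such a skill file might look like. The `name` and `description` frontmatter fields match Anthropic's published format; the skill name and rules themselves are illustrative placeholders, not from their repo:

```markdown
---
name: commit-style
description: Enforces this team's commit message conventions. Use when writing or reviewing commit messages.
---

# Commit Style

- Write summaries in the imperative mood ("add", not "added").
- Keep the first line under 72 characters.
- Reference the issue number in the body when one exists.
```

The body is just markdown instructions; the frontmatter is what the model uses to decide when the skill is relevant.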
Pros: Open source, well-documented, integrates with Claude Projects.
Cons: Requires Claude API or Pro subscription.
2. Cursor Agent Skills
Cursor's approach is file-system based. You drop SKILL.md files in .agent/skills/ directories, and the agent discovers them automatically.
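A typical layout might look like this (the skill folder names are hypothetical examples, not a required convention):

```
your-repo/
├── .agent/
│   └── skills/
│       ├── api-conventions/
│       │   └── SKILL.md
│       └── testing-style/
│           └── SKILL.md
├── src/
└── package.json
```

Because the skills live alongside your code, they ride along in every clone, branch, and PR review.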
Pros: Lives in your repo, version controlled, discoverable.
Cons: Cursor-specific, no cross-platform portability.
3. Google Antigravity Skills
Antigravity, Google's agentic coding assistant, uses a similar file-based approach but with deeper integration into Gemini's reasoning capabilities.

Pros: Deep reasoning integration, multi-step verification, access to Google Search for grounding.
Cons: Newer, less documentation available.
Converting a Rule to a Skill
Here's the pattern I use to convert a .cursorrules file (or any system prompt) into a proper skill:
Step 1: Extract the Core Behavior
Most rules are a mix of "do this" instructions and "don't do that" constraints. Separate them:
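For example, a typical tangle of instructions might separate like this (the specific rules are placeholders — substitute your own):

```markdown
## Do
- Use TypeScript strict mode.
- Prefer named exports over default exports.
- Co-locate tests next to the module they cover.

## Don't
- Don't use the `any` type; use `unknown` and narrow it.
- Don't disable lint rules inline without a comment explaining why.
```

Splitting them matters because positive instructions shape generation, while constraints are what the model checks its output against.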
Step 2: Add Contextual Triggers
Skills work best when they activate automatically based on context:
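In the frontmatter-based formats, the `description` field doubles as the activation hint — it tells the agent *when* the skill applies, not just what it does. A hedged sketch (the skill name and paths are invented for illustration):

```markdown
---
name: api-conventions
description: REST API conventions for this codebase. Use when creating or
  modifying route handlers, or when working in files under src/api/.
---
```

The more specific the "use when" clause, the less often the skill fires on irrelevant tasks.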
Step 3: Include Examples
The difference between a mediocre skill and a great one? Concrete examples. The AI learns from patterns far more reliably than from abstract rules:
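A good/bad pair inside the skill file gives the model a pattern to imitate and a pattern to avoid. A sketch, continuing the hypothetical commit-style skill from earlier:

```markdown
## Examples

Good commit message:

    fix(auth): refresh tokens before expiry instead of after

Bad commit message:

    fixed stuff
```

One contrasting pair per rule is usually enough; a wall of examples dilutes the signal.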
Step 4: Add Self-Correction
This is the secret sauce. Force the AI to verify its own output:
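One way to do this is to end the skill with an explicit checklist the agent must run before answering. A sketch — the checks themselves are illustrative, tuned to a TypeScript project:

```markdown
## Before Responding, Verify

1. Does every new exported function have a corresponding test?
2. Did you introduce any `any` types? If so, replace them.
3. Would the change compile under strict mode? If unsure, say so
   explicitly rather than guessing.
```

Phrasing the checks as questions the model must answer (rather than rules it must follow) tends to produce visible self-review in the output, which is easier for you to audit.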
Which Should You Use?
| Factor | Claude | Cursor | Antigravity |
|---|---|---|---|
| Repo Integration | ⭐⭐ | ⭐⭐⭐ | ⭐⭐⭐ |
| Cross-Platform | ⭐⭐⭐ | ⭐ | ⭐⭐ |
| Reasoning Depth | ⭐⭐⭐ | ⭐⭐ | ⭐⭐⭐ |
| Grounding/Search | ❌ | ❌ | ⭐⭐⭐ |
My recommendation? Use what your team already has. If you're in VS Code with Cursor, go Cursor. If you're deep in the Google ecosystem, Antigravity is worth exploring.
Or just use Landi and let us handle the AI complexity while you focus on what actually matters. No skill configuration required. 😉