Understanding Antigravity Skills: The "Apps" for Agentic IDEs
Don't just prompt your AI: teach it. Learn how to package knowledge and workflows into reusable "Skills" that power the next generation of autonomous coding agents.
If you've been using Agentic IDEs like Antigravity, you know the power of giving an agent a complex task. But repeating the same instructions ("Use this specific linter config" or "Deploy using this exact script") gets tedious.
Enter Skills. Think of them as "plugins" or "apps" for your AI agent.
What are Skills?
Skills are an open standard (based on agentskills.io) for extending an agent's capabilities. They are essentially folders containing knowledge, instructions, and tools that help an agent perform specific tasks consistently.
Instead of typing a 50-line prompt every time you need to run a migration or review code, you package that knowledge into a Skill.
Skills can be Local (living in your project's `.agent/skills` folder) or Global (living in your user directory for use across all projects).
Why Reusability Matters
The transition from "Chatbot" to "Colleague" happens when the AI remembers how you work.
- Consistency: Ensure the agent always runs tests before committing, exactly how your team prefers.
- Speed: Stop explaining standard operating procedures. Just say "Run the deployment skill."
- Sharing: Check a skill into your repo, and every developer on your team (and their AI agents) instantly knows how to use it.
The Magic of Progressive Disclosure
You might worry: "If I have 100 skills, won't that confuse the agent?"
No. Antigravity uses a technique called Progressive Disclosure to manage context limits.
- Discovery: When you start a chat, the agent only sees a list of skill names and descriptions. It doesn't read the full content yet.
- Activation: If (and only if) you ask for a task that matches a description, the agent decides to "open" that skill.
- Execution: The agent reads the detailed `SKILL.md` file and executes the instructions.
This keeps the agent's context window clean and focused, loading heavy knowledge only when it's critically needed.
How to Create a Skill
Creating a skill is as simple as creating a folder.
1. The Folder Structure
```
.agent/skills/
└── my-awesome-skill/
    ├── SKILL.md      # The Brain (Required)
    ├── scripts/      # Helper scripts (Optional)
    ├── examples/     # Reference code (Optional)
    └── resources/    # Templates (Optional)
```
2. The SKILL.md File
This is the only mandatory file. It must start with YAML frontmatter.
```markdown
---
name: my-awesome-skill
description: "Deploys the application to the staging environment using the team's CI/CD scripts."
---

# Deployment Skill

## When to use this skill
Use this skill when the user asks to "deploy to staging" or "ship to test".

## Instructions
1. First, check that the git status is clean.
2. Run the validation script at `./scripts/validate.sh`.
3. If successful, run `npm run deploy:staging`.
```
Best Practices for Skills
- Clear Descriptions: The agent relies on the YAML `description` to know when to use the skill. Make it specific and action-oriented.
- Scripts > Text: Don't make the agent write long bash commands. Put complex logic in a script file and tell the agent to run it.
- One Job: Keep skills focused. A "Testing" skill and a "Deployment" skill are better than one giant "DevOps" skill.
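Because the description is the only thing the agent sees at discovery time, it is worth linting. Here is a minimal sketch of such a check; `check_frontmatter` is a hypothetical helper (Antigravity does not ship one), and the 20-character threshold is an arbitrary illustration.

```python
def check_frontmatter(skill_md_text: str) -> list[str]:
    """Return a list of problems found in a SKILL.md's YAML frontmatter."""
    problems = []
    lines = skill_md_text.splitlines()
    if not lines or lines[0].strip() != "---":
        return ["file must start with a '---' frontmatter block"]
    try:
        # Find the closing '---' that ends the frontmatter block.
        end = lines[1:].index("---") + 1
    except ValueError:
        return ["frontmatter block is never closed with '---'"]
    fields = {}
    for line in lines[1:end]:
        if ":" in line:
            key, value = line.split(":", 1)
            fields[key.strip()] = value.strip().strip('"')
    if not fields.get("name"):
        problems.append("missing 'name'")
    description = fields.get("description", "")
    if not description:
        problems.append("missing 'description'")
    elif len(description) < 20:
        problems.append("description too short to guide skill selection")
    return problems
```

Running a check like this in CI keeps every skill in the repo discoverable by the agent.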
End-to-End Example: The "Token Optimizer" Skill
In large-scale AI engineering, context is currency. LLMs charge by the token and have finite memory limits.
Feeding a 10,000-line file to an agent might cost $1.00 and cause it to forget instructions. We'll create a skill that acts as a safety guard, analyzing files before the agent reads them to prevent wasted money and context.
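That dollar figure is back-of-envelope arithmetic, and you can reproduce it yourself. Both rates below are illustrative assumptions, not real model pricing: roughly 10 tokens per line of code and $0.01 per 1,000 input tokens.

```python
def estimate_cost(lines: int, tokens_per_line: float = 10.0,
                  dollars_per_1k_tokens: float = 0.01) -> float:
    """Rough input-cost estimate; both rates are illustrative assumptions."""
    tokens = lines * tokens_per_line
    return tokens / 1000 * dollars_per_1k_tokens

# A 10,000-line file: ~100,000 tokens, so about $1.00 per read at these rates.
```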
Step 1: The Folder
Create .agent/skills/token-optimizer/ in your project root.
Step 2: The Logic (The Script)
We'll create a Python script that does the heavy lifting. Create .agent/skills/token-optimizer/scripts/analyze.py.
```python
# scripts/analyze.py
# Usage: python analyze.py <filepath>
import sys

import tiktoken

def analyze_file(filepath):
    enc = tiktoken.get_encoding("cl100k_base")
    with open(filepath, 'r') as f:
        content = f.read()
    tokens = len(enc.encode(content))
    print(f"File: {filepath}")
    print(f"Token Count: {tokens}")
    if tokens > 10000:
        print("RECOMMENDATION: File is too large. Consider splitting classes.")
    elif tokens < 100:
        print("RECOMMENDATION: File is tiny. Consider merging.")
    else:
        print("RECOMMENDATION: Size is optimal.")

if __name__ == "__main__":
    analyze_file(sys.argv[1])
```
Step 3: The Brain (SKILL.md)
Now, we teach the agent how and when to use this script. Create .agent/skills/token-optimizer/SKILL.md.
```markdown
---
name: token-optimizer
description: "Analyzes code files to calculate token usage and suggest optimizations for LLM context windows."
---

# Token Optimizer Skill

## When to use this skill
Use this skill when the user asks about:
- "How many tokens is this file?"
- "Is this file too big for the context window?"
- "Optimize this code for LLMs"

## Instructions
1. Identify the target file using the active document or the user's prompt.
2. Run the analysis script:
   `python .agent/skills/token-optimizer/scripts/analyze.py <absolute_file_path>`
3. Report the token count to the user.
4. If the script provides recommendations, explain *how* the user can implement them (e.g., "I can help you split Class X into a separate file if you'd like").
```
The Result
Now, whenever you ask "Is this file too big?", the agent won't guess. It will recognize the intent, trigger the token-optimizer skill, run your precise Python logic, and give you a calculated answer.
Ready to build your first Skill?
Start by creating a `.agent/skills` folder in your workspace.