06 - System Prompt & Skills
Control agent behavior with system prompts and skill files -- the two mechanisms that shape who your agent is and what it knows.
Why This Chapter Matters
So far, our agent has been a blank slate with a one-liner system prompt like "You are a helpful assistant." That works for demos, but real-world agents need a richer identity. Consider the difference between:
- "You are a helpful assistant." -- generic, no guardrails, no domain expertise
- A weather forecasting agent that always structures responses as weather reports, uses meteorological terminology, and cites data sources
The first is a chatbot. The second is a specialist. The difference is not in the model weights or the tools available -- it is in the instructions the agent receives before it ever sees a user message.
This chapter introduces two complementary mechanisms for shaping agent behavior:
- System prompts -- the foundational instructions that define the agent's personality, rules, and response format. Think of this as the agent's "job description."
- Skills -- modular Markdown files that inject domain-specific knowledge and behavioral rules. Think of these as "training manuals" that the agent can reference.
Together, they form a layered architecture: the system prompt sets the baseline identity, and skills add specialized capabilities on top. This is analogous to how a new employee might receive a company handbook (system prompt) plus role-specific training materials (skills).
What You'll Learn
- How systemPromptOverride injects a custom system prompt
- How loadSkillsFromDir() discovers .md files as skills
- How skillsOverride injects skills into the resource loader
- Skill frontmatter format (name, description, disable-model-invocation)
- Best practices for writing effective system prompts
- How skill composition works and why it matters
The Role of System Prompts
A system prompt is the first message in any LLM conversation. It is sent before any user messages and establishes the "rules of engagement" for the entire session. The model treats it as high-priority instructions that should govern all subsequent responses.
What Makes a Good System Prompt?
Effective system prompts share several characteristics:
Identity and role: Tell the agent who it is. "You are WeatherBot, a friendly weather assistant" is better than "You are an AI." A clear identity helps the model maintain consistent behavior.
Behavioral rules: Define what the agent should and should not do. "Always greet the user warmly" and "Never provide medical advice" are behavioral rules.
Response format: If you want structured responses, say so. "Always respond with bullet points" or "Structure weather reports with Temperature, Humidity, and Forecast sections."
Tool usage guidance: Tell the agent when to use its tools. "When asked about weather, use the get_weather tool first" prevents the model from hallucinating weather data.
The specificity principle: vague instructions produce vague behavior. Instead of "Be helpful," try "When the user asks a question you can answer from your training data, answer directly. When the user asks about current weather, always use the get_weather tool rather than guessing." The more specific your instructions, the more predictable your agent.
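Putting these characteristics together, a specific system prompt for the weather agent might read like this (the wording is illustrative, not the chapter's verbatim prompt):

```text
You are WeatherBot, a friendly weather assistant.

Rules:
- Always greet the user warmly.
- Never provide medical advice.
- When the user asks about current weather, use the get_weather tool
  first rather than guessing.
- Structure weather reports with Temperature, Humidity, and Forecast
  sections.
```

Note how each line is a concrete, checkable instruction rather than a vague aspiration like "Be helpful."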
System prompts are not security boundaries. A determined user can often coax the model into ignoring system prompt instructions through clever prompting. Do not put security-critical logic in the system prompt alone -- enforce it in code (like the confirmation pattern from Chapter 05).
Understanding Skills
What Are Skills?
Skills are Markdown files that extend the agent's knowledge and behavior. If the system prompt is the agent's "job description," skills are its "reference library" -- each skill is a self-contained document that teaches the agent about a specific domain or task.
Skills serve a different purpose than system prompts: the system prompt is a single, always-present statement of identity and rules, while skills are modular documents of domain knowledge that can be added, removed, and combined per configuration.
Skills as a Plugin System
If you have worked with plugin architectures in web frameworks (Gatsby plugins, Webpack loaders, VSCode extensions), the skill system will feel familiar. Each skill is a self-contained unit that:
- Declares itself via frontmatter (name, description)
- Provides content as Markdown that gets injected into the agent's context
- Can be discovered automatically from a directory structure
- Can be composed -- multiple skills work together without conflicts
This design means you can build a library of reusable skills and mix-and-match them for different agent configurations. A "code reviewer" agent might load skills for TypeScript best practices, security auditing, and performance optimization. A "customer support" agent might load skills for product knowledge, refund policies, and escalation procedures.
Skill File Format
Skills are Markdown files with YAML frontmatter:
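For example, a hypothetical weather-expert skill (field values and section names are illustrative):

```markdown
---
name: weather-expert
description: Formats responses as structured weather reports
---

# Weather Expert

When reporting weather, structure every response with these sections:

- **Temperature** -- the current reading
- **Humidity** -- relative humidity as a percentage
- **Forecast** -- a one-sentence outlook
- **Advisory** -- any warnings, or "None"

Always cite the data source for the reported conditions.
```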
Frontmatter Fields
The frontmatter supports three fields: name (the skill's identifier), description (a short summary of what the skill provides), and the optional disable-model-invocation flag.
What's Happening Under the Hood
When skills are loaded, the pi-coding-agent framework:
- Discovers Markdown files using the directory scan rules
- Parses the YAML frontmatter to extract metadata
- Injects the skill content into the agent's context window alongside the system prompt
- Labels each skill so the model knows which domain knowledge is available
The skill content becomes part of the "system context" that the LLM sees. This means the model can reference skill instructions when formulating responses, just as it references the system prompt. The key difference is that skills are additive -- each skill extends the agent's knowledge without replacing anything.
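To make the parsing step concrete, here is a simplified sketch of frontmatter extraction. This is an illustration of the idea only -- the real framework presumably uses a full YAML parser, while this version handles flat key: value pairs:

```typescript
// Simplified sketch of skill-file parsing: split the YAML frontmatter
// block from the Markdown body, then read flat "key: value" pairs.
interface ParsedSkill {
  meta: Record<string, string>; // frontmatter fields, e.g. name, description
  body: string;                 // the Markdown content injected into context
}

function parseSkillFile(text: string): ParsedSkill {
  const match = /^---\n([\s\S]*?)\n---\n?([\s\S]*)$/.exec(text);
  if (!match) {
    // No frontmatter block: treat the whole file as body with no metadata.
    return { meta: {}, body: text };
  }
  const meta: Record<string, string> = {};
  for (const line of match[1].split("\n")) {
    const sep = line.indexOf(":");
    if (sep > 0) {
      meta[line.slice(0, sep).trim()] = line.slice(sep + 1).trim();
    }
  }
  return { meta, body: match[2] };
}
```

A skill whose frontmatter fails to parse would surface as a diagnostic rather than a loaded skill, which is why checking diagnostics matters.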
Discovery Rules
The loadSkillsFromDir() function looks for skills in two places:
- Direct .md files in the skills directory root (e.g., skills/weather-expert.md)
- SKILL.md files in subdirectories, searched recursively (e.g., skills/weather/SKILL.md)
The subdirectory pattern is useful for complex skills that include additional assets:
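For example (file names are hypothetical):

```text
skills/
  weather-expert.md        # simple single-file skill
  weather/
    SKILL.md               # the skill document itself
    sample-reports.md      # supporting asset kept alongside the skill
```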
Use the single-file format for simple behavioral rules (like response formatting) and the directory format for skills that need supporting materials or when you want to keep the skill organized with related assets.
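The two discovery rules can be sketched as a predicate over paths relative to the skills root. This is a simplified stand-in for the real filesystem walk, not the framework's actual implementation:

```typescript
// Sketch of the discovery rules over skills-root-relative paths:
// rule 1 matches root-level .md files, rule 2 matches SKILL.md at any depth.
function isSkillPath(relPath: string): boolean {
  const parts = relPath.split("/");
  // Rule 1: a .md file directly in the skills directory root.
  if (parts.length === 1) return relPath.endsWith(".md");
  // Rule 2: a file named SKILL.md in a subdirectory, at any depth.
  return parts[parts.length - 1] === "SKILL.md";
}
```

Note that other .md files inside subdirectories (like supporting assets) are deliberately not picked up as skills.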
Loading Skills
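A call might look like this. The function name and return shape come from this chapter's prose, but the exact argument shape and import path are assumptions, not verified framework API:

```typescript
import { loadSkillsFromDir } from "pi-coding-agent"; // hypothetical import path

// Argument shape is an assumption; the chapter describes a directory
// to scan plus a `source` label for debugging.
const { skills, diagnostics } = await loadSkillsFromDir("./skills", {
  source: "tutorial",
});

// Surface parsing problems (e.g., invalid frontmatter) during development.
for (const diagnostic of diagnostics) {
  console.warn("skill diagnostic:", diagnostic);
}
```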
The diagnostics array contains any warnings or errors from skill parsing -- for example, a skill file with invalid frontmatter. Always check diagnostics in development to catch problems early.
The source parameter is a label that helps you track where skills came from when debugging. Use something descriptive like 'user-skills', 'bundled', or 'tutorial'.
Injecting Skills Into Resource Loader
Let's break down the key configuration points:
systemPromptOverride takes a function (not a string) that returns a string. This is a function so it can be re-evaluated -- useful if your system prompt includes dynamic content like the current date or user preferences.
noSkills: skills.length === 0 is a conditional toggle. If no skills were loaded from the directory, the expression is true and the skill system is disabled entirely, avoiding unnecessary processing. If skills were found, the expression evaluates to false and skill loading stays enabled.
skillsOverride is a function that returns the loaded skills and diagnostics. The spread operator ...(skills.length > 0 && { ... }) ensures we only set this option when skills exist.
await resourceLoader.reload() is essential -- the resource loader does not process its configuration until reload() is called. Forgetting this is a common source of "my system prompt isn't working" bugs.
Always call resourceLoader.reload() after creating a DefaultResourceLoader and before passing it to createAgentSession(). Without this call, neither your system prompt nor your skills will be active.
Writing Effective Skills
Here are guidelines for writing skills that produce reliable, high-quality agent behavior:
Be Explicit About Response Structure
Bad: "Describe the weather clearly and helpfully."
Good: "Structure every weather report with three labeled sections -- Temperature, Humidity, and Forecast -- and keep the Forecast to a single sentence."
Use Examples
Models learn well from examples. Include one or two sample interactions:
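For instance, a weather skill might embed a sample interaction like this (content is illustrative):

```markdown
## Example

User: What's the weather in Paris right now?

Assistant:
**Temperature**: 18°C
**Humidity**: 65%
**Forecast**: Partly cloudy, clearing by evening.
```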
Define Boundaries
Tell the skill what the agent should NOT do:
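For example, a weather skill's boundary section might read (illustrative):

```markdown
## Boundaries

- Do NOT provide medical advice about heat or cold exposure.
- Do NOT guess current conditions; if get_weather returns no data, say so.
- Do NOT comment on topics unrelated to weather.
```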
Skill Composition
When multiple skills are loaded, their contents are all injected into the context. This means skills can complement each other:
- Weather Expert skill defines how to format weather responses
- Friendly Assistant skill defines the overall tone and greeting behavior
- Safety Guidelines skill defines what topics to avoid
The model synthesizes instructions from all loaded skills simultaneously. This is powerful but requires care -- conflicting instructions across skills can confuse the model.
When designing a skill library, think in layers: base skills (tone, formatting, safety) that apply broadly, and domain skills (weather, coding, customer support) that apply to specific tasks. Avoid putting overlapping instructions in multiple skills.
Full Code
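Putting the pieces together, an end-to-end setup might look like this. Import path, function signatures, and option shapes are assumptions drawn from this chapter's prose, not verified framework API:

```typescript
import {
  loadSkillsFromDir,
  DefaultResourceLoader,
  createAgentSession,
} from "pi-coding-agent"; // hypothetical import path

const SYSTEM_PROMPT = `You are WeatherBot, a friendly weather assistant.
Always greet the user warmly. Never provide medical advice.
When asked about current weather, use the get_weather tool first.`;

// Discover skills and surface any parsing diagnostics early.
const { skills, diagnostics } = await loadSkillsFromDir("./skills", {
  source: "tutorial",
});
for (const d of diagnostics) console.warn("skill diagnostic:", d);

const resourceLoader = new DefaultResourceLoader({
  systemPromptOverride: () => SYSTEM_PROMPT,
  noSkills: skills.length === 0,
  ...(skills.length > 0 && {
    skillsOverride: () => ({ skills, diagnostics }),
  }),
});
await resourceLoader.reload(); // required, or neither prompt nor skills apply

const session = await createAgentSession({ resourceLoader });
```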
Run
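Assuming the example is saved in a file like index.ts (the name is hypothetical), it can be run with a TypeScript runner such as tsx:

```shell
npx tsx index.ts
```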
Expected Behavior
The agent responds as "WeatherBot" -- notice the personality and response structure:
- Identity: It greets the user warmly (from the system prompt)
- Tool usage: It calls get_weather before responding (from the system prompt's tool guidance)
- Response format: It structures the response with Temperature, Humidity, Forecast, and Advisory sections (from the weather-expert skill)
Without the skill, the agent would call the tool and give you a plain-text answer. With the skill, it formats the response like a professional weather report. This is the power of skill composition -- the system prompt defines who the agent is, and the skill defines how it presents domain-specific information.
Common Mistakes and Gotchas
Forgetting resourceLoader.reload(): The most common source of "my prompt isn't working." The DefaultResourceLoader is lazy -- it does not process configuration until reload() is called.
Conflicting instructions: If your system prompt says "Be extremely brief" but a skill says "Always provide detailed explanations," the model will be confused. Review your system prompt and skills as a unified set of instructions.
Overly long skills: Skills consume context window tokens. A 5,000-word skill leaves less room for conversation history. Keep skills focused and concise -- if a skill is longer than a page, consider splitting it into multiple skills.
Not testing skill discovery: Use the diagnostics return value from loadSkillsFromDir() to check for parsing errors. A skill with malformed frontmatter will silently fail to load.
Key Takeaways
- System prompts define identity: They set the agent's personality, rules, and tool usage guidance. Think of them as the agent's job description.
- Skills add domain knowledge: They are modular Markdown files that inject specialized instructions. Think of them as training manuals.
- The two work together: System prompt for who the agent is, skills for what the agent knows. This separation keeps your configuration modular and maintainable.
- Skills are a plugin system: You can build a library of reusable skills and compose them for different agent configurations, just like selecting plugins for a web framework.
- Be specific: Vague instructions produce vague behavior. The more explicit and structured your prompts and skills, the more reliable your agent.
Next
Chapter 07: Multi-Session -- manage multiple conversation sessions.