Day 16: Agent System Prompts - The Foundation of Agent Behavior
Master the art of system prompt engineering to guide agent behavior, optimize tool usage, and ensure structured outputs for reliable agent performance.
Day 16 challenge
Goal: master system prompt engineering for reliable agent behavior
Theme: context engineering week - prompt engineering fundamentals
Time investment: ~25 minutes
Welcome to Day 16 and Week 4! You’ve built sophisticated agents across multiple domains. Now you’ll master context engineering - starting with the foundation: system prompts. Today you’ll learn to craft prompts that reliably guide agent behavior, optimize tool usage, and produce structured outputs.

System prompts are the invisible foundation that determines whether your agent is helpful or frustrating, reliable or unpredictable.
Effective system prompts follow a clear hierarchy:
```
Identity: Who is the agent and what is their role?
Context: What environment do they operate in?
Capabilities: What can they do and how?
Constraints: What should they avoid or be careful about?
Output Format: How should they structure responses?
Examples: What does good performance look like?
```
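To make the hierarchy concrete, here is a minimal Python sketch that assembles a system prompt from those six sections. The helper and the example agent details are illustrative, not tied to any particular platform:

```python
# A minimal sketch: building a system prompt from the six-part hierarchy.
# Section order matters - identity first, examples last.
SECTIONS = ["Identity", "Context", "Capabilities", "Constraints",
            "Output Format", "Examples"]

def build_system_prompt(parts: dict) -> str:
    """Join the hierarchy sections in order, skipping any left empty."""
    blocks = [f"## {name}\n{parts[name].strip()}"
              for name in SECTIONS if parts.get(name)]
    return "\n\n".join(blocks)

prompt = build_system_prompt({
    "Identity": "You are a data analyst assistant for the finance team.",
    "Context": "You operate inside a shared company workspace.",
    "Capabilities": "You can read Google Sheets and summarize revenue data.",
    "Constraints": "Never modify source data; confirm before sending anything externally.",
    "Output Format": "Respond with an Executive Summary, Key Findings, and Recommendations.",
})
print(prompt)
```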
Language models respond differently to different prompt styles:
- Authoritative vs. Collaborative: “You are an expert analyst” vs. “You help users analyze data”
- Specific vs. General: “Analyze Q3 revenue trends” vs. “Help with business analysis”
- Process-oriented vs. Outcome-oriented: “Follow these steps” vs. “Achieve this goal”
Prompt engineering mindset: Think of prompts as job descriptions combined with training manuals. Be specific about both what to do and how to do it.
Let’s start by examining the prompts of agents you’ve built in previous weeks.

Access your agent’s system prompt:
1. Select one of your custom agents from the sidebar
2. Click the agent’s name to open the About section
3. Review the current instructions that define the agent’s behavior
Evaluate your current prompt:
```
I want to analyze my current system prompt for effectiveness. Here's my current prompt:

[Paste your agent's current system prompt]

Can you evaluate this prompt across these dimensions:
- Clarity of role and identity
- Specificity of instructions
- Tool usage guidance
- Output format specifications
- Potential ambiguities or gaps
```
Watch how your agent analyzes its own instructions and identifies improvement
opportunities.
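If you want to run this self-evaluation programmatically rather than in the chat UI, a small harness along these lines works. It assumes the OpenAI Python client and a model name purely for illustration - substitute whatever API your agent platform exposes:

```python
# A minimal sketch of automating prompt self-evaluation. The OpenAI client
# and model name are assumptions for illustration; any chat-completion API
# with a similar shape will do.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

EVAL_TEMPLATE = """I want to analyze my current system prompt for effectiveness. Here's my current prompt:

{prompt}

Can you evaluate this prompt across these dimensions:
- Clarity of role and identity
- Specificity of instructions
- Tool usage guidance
- Output format specifications
- Potential ambiguities or gaps"""

def evaluate_prompt(system_prompt: str, model: str = "gpt-4o") -> str:
    """Ask a model to critique a system prompt along the dimensions above."""
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": EVAL_TEMPLATE.format(prompt=system_prompt)}],
    )
    return response.choices[0].message.content

print(evaluate_prompt("You can use various tools to help users with their tasks."))
```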
Test a prompt refinement:

```
I want to test a prompt refinement. Here's a specific scenario where my agent isn't performing optimally:

[Describe the scenario and current behavior]

Current prompt section that might be the issue:
[Paste relevant prompt section]

Proposed improvement:
[Your suggested change]

Can you help me test this change and predict likely improvements?
```
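One way to test a refinement like this is a quick A/B run: hold the scenario fixed and compare the agent’s output under the old and new prompt. A hedged sketch, with the same assumed client as above and placeholder prompts:

```python
# A sketch of A/B testing a prompt refinement: same scenario, old vs. new
# prompt. Prompts, scenario, and model name are illustrative placeholders.
from openai import OpenAI

client = OpenAI()

OLD_PROMPT = "You can use various tools to help users with their tasks."
NEW_PROMPT = OLD_PROMPT + "\nAlways explain your tool choice before acting."

def run_scenario(system_prompt: str, scenario: str, model: str = "gpt-4o") -> str:
    response = client.chat.completions.create(
        model=model,
        messages=[
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": scenario},
        ],
    )
    return response.choices[0].message.content

scenario = "Summarize last week's sales and post the result to the team."
for label, prompt in [("old", OLD_PROMPT), ("new", NEW_PROMPT)]:
    print(f"--- {label} ---\n{run_scenario(prompt, scenario)}\n")
```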
Tool selection guidance:

```
When deciding which tools to use:
1. For data analysis tasks, prioritize Google Sheets for structured data
2. For research tasks, use web search first, then supplement with specific databases
3. For communication tasks, choose Slack for internal team updates, Gmail for external
4. Always explain your tool selection reasoning to the user
```
Tool sequencing instructions:
```
For complex workflows:
1. Gather all necessary information before taking actions
2. Confirm destructive actions (deleting, sending emails) with the user
3. Use the most reliable tool first, then fall back to alternatives
4. Report progress after each major tool usage
```
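Rule 2 is worth enforcing in code as well as in the prompt. A minimal sketch of a confirmation gate, with hypothetical tool names:

```python
# A sketch of enforcing "confirm destructive actions" outside the prompt.
# Tool names and the dispatch step are hypothetical.
DESTRUCTIVE_TOOLS = {"delete_file", "send_email", "drop_table"}

def execute_tool(name: str, args: dict, confirm=input) -> str:
    """Run a tool call, pausing for user confirmation on destructive actions."""
    if name in DESTRUCTIVE_TOOLS:
        answer = confirm(f"About to run {name} with {args}. Proceed? [y/N] ")
        if answer.strip().lower() != "y":
            return f"Skipped {name}: user declined."
    # Dispatch to the real tool implementation here (omitted in this sketch).
    return f"Ran {name} with {args}."
```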
Before optimization:

```
You can use various tools to help users with their tasks.
```
After optimization:
```
Tool Usage Guidelines:
- GitHub: Use for code review, repository management, and development workflows
- Google Sheets: Use for data analysis, reporting, and collaborative documentation
- Slack: Use for team communications and status updates (always prefix with "Agent:")
- Gmail: Use for external communications and formal correspondence

Always explain why you're choosing a specific tool and ask for confirmation before taking actions that affect external systems.
```
Structured output template:

```
For analysis tasks, use this format:

## Executive Summary
[2-3 sentence overview]

## Key Findings
- [Finding 1 with supporting data]
- [Finding 2 with supporting data]
- [Finding 3 with supporting data]

## Recommendations
1. [Priority 1 action with timeline]
2. [Priority 2 action with timeline]
3. [Priority 3 action with timeline]

## Next Steps
[Specific actions for follow-up]
```
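A template like this is also easy to regression-test: after each prompt change, check that responses still contain the required sections. A small sketch:

```python
# A sketch of checking an agent response against the analysis template.
# Useful as a lightweight regression test after prompt changes.
REQUIRED_SECTIONS = ["## Executive Summary", "## Key Findings",
                     "## Recommendations", "## Next Steps"]

def missing_sections(response: str) -> list:
    """Return the template headings the response failed to include."""
    return [s for s in REQUIRED_SECTIONS if s not in response]

report = "## Executive Summary\nRevenue grew 8%.\n\n## Key Findings\n- ..."
print(missing_sections(report))  # -> ['## Recommendations', '## Next Steps']
```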
Conditional formatting:
```
Response format depends on request type:
- For quick questions: Single paragraph answer
- For analysis requests: Use the structured template above
- For task completion: Bullet point summary of actions taken
- For errors: Clear explanation of what went wrong and suggested alternatives
```
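The same conditional logic can live in code, selecting a format instruction per request type and appending it to the system prompt at call time. The request-type labels below are illustrative:

```python
# A sketch of per-request-type format rules appended to a base prompt.
FORMAT_RULES = {
    "quick_question": "Answer in a single paragraph.",
    "analysis": "Use the structured analysis template.",
    "task_completion": "Summarize actions taken as bullet points.",
    "error": "Explain what went wrong and suggest alternatives.",
}

def with_format_rule(base_prompt: str, request_type: str) -> str:
    """Append the matching format rule, with a safe default for unknown types."""
    rule = FORMAT_RULES.get(request_type, "Use your best judgment on format.")
    return f"{base_prompt}\n\nResponse format: {rule}"
```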
When agents need to produce data for other systems:
```
For structured data requests, respond with valid JSON in this format:

{
  "status": "success|error",
  "data": {
    // Relevant data fields
  },
  "summary": "Human-readable explanation",
  "next_actions": ["suggested", "follow-up", "actions"]
}
```
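Before handing agent output to another system, validate it against this contract. A sketch using only the fields named in the template above:

```python
# A sketch of validating the structured-output contract before downstream use.
import json

REQUIRED_KEYS = {"status", "data", "summary", "next_actions"}

def parse_agent_json(raw: str) -> dict:
    """Parse and validate a structured agent response; raise on violations."""
    payload = json.loads(raw)  # raises json.JSONDecodeError on malformed output
    missing = REQUIRED_KEYS - payload.keys()
    if missing:
        raise ValueError(f"Response missing keys: {sorted(missing)}")
    if payload["status"] not in {"success", "error"}:
        raise ValueError(f"Unexpected status: {payload['status']!r}")
    return payload
```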
Conversation-aware responses:

```
Adapt your responses based on conversation history:
- For first interactions: Provide more background and explanation
- For ongoing conversations: Reference previous context and build on established understanding
- For complex topics: Break information into digestible chunks
```
Memory and state management:
```
Maintain awareness of:
- User preferences established in previous conversations
- Ongoing projects and their current status
- Recent actions taken and their outcomes
- Key relationships and context from user's work environment
```
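In practice that awareness has to be supplied explicitly: serialize the known state into a context block appended to the system prompt each turn. The state fields below mirror the list above; the storage shape is an assumption:

```python
# A sketch of injecting remembered state into the system prompt each turn.
# Field names mirror the awareness list above; storage details are assumed.
BASE_PROMPT = "You are a data analyst assistant for the finance team."

def render_context(state: dict) -> str:
    """Turn a state dict into a plain-text context block."""
    lines = ["Known context:"]
    lines += [f"- Preference: {p}" for p in state.get("preferences", [])]
    lines += [f"- Project: {name} ({status})"
              for name, status in state.get("projects", {}).items()]
    lines += [f"- Recent action: {a}" for a in state.get("recent_actions", [])]
    return "\n".join(lines)

state = {
    "preferences": ["prefers concise bullet-point summaries"],
    "projects": {"Q3 revenue report": "awaiting final numbers"},
    "recent_actions": ["posted draft summary to #finance"],
}
system_prompt = BASE_PROMPT + "\n\n" + render_context(state)
```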
Error handling instructions:

```
When encountering errors or limitations:
1. Clearly explain what went wrong
2. Suggest alternative approaches
3. Ask clarifying questions if the request was ambiguous
4. Offer to break complex tasks into smaller steps
```
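These instructions pair well with code-level fallbacks: try the preferred tool, fall back to an alternative, and surface a clear explanation if both fail. A sketch with hypothetical tool callables:

```python
# A sketch of a primary/fallback tool pattern with a clear failure message.
# The tool callables are hypothetical stand-ins.
def run_with_fallback(task: str, primary, fallback) -> str:
    try:
        return primary(task)
    except Exception as primary_err:
        try:
            return fallback(task)
        except Exception as fallback_err:
            return (f"Both tools failed. Primary: {primary_err}. "
                    f"Fallback: {fallback_err}. Try narrowing the request "
                    f"or splitting it into smaller steps.")
```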
In 25 minutes, you’ve mastered system prompt engineering:
- Prompt analysis skills: learned to evaluate and identify weaknesses in existing prompts
- Iterative refinement process: developed a systematic approach to prompt improvement
- Tool usage optimization: crafted prompts that guide effective tool selection and usage
- Structured output mastery: created templates for consistent, reliable agent responses
- Advanced techniques: implemented context management and error handling in prompts
Well-engineered prompts transform agent behavior:
- Before optimization: agents that are unpredictable, verbose, and make poor tool choices
- After optimization: agents that are reliable, focused, and strategically use tools to accomplish tasks

This foundation enables everything else in context engineering - retrieval, memory, and complex reasoning.
After optimizing your prompts, test them with edge cases:
```
Test my refined agent prompt with these challenging scenarios:
1. Ambiguous requests that could be interpreted multiple ways
2. Requests for information the agent doesn't have access to
3. Tasks that require multiple tools in sequence
4. Error conditions where tools fail or return unexpected results

How does the agent handle these situations with the new prompt?
```
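Those four scenarios can also become a repeatable test loop, so every prompt revision is checked against the same edge cases. A sketch, with the same assumed client as earlier and illustrative scenario texts:

```python
# A sketch of a repeatable edge-case harness for prompt revisions.
# Client, model name, and scenario texts are illustrative assumptions.
from openai import OpenAI

client = OpenAI()

EDGE_CASES = [
    "Book the thing we talked about.",                     # ambiguous request
    "What is our competitor's internal revenue forecast?", # inaccessible info
    "Pull Q3 numbers, chart them, and email the board.",   # multi-tool sequence
    "Query the analytics database (currently offline).",   # tool failure
]

def stress_test(system_prompt: str, model: str = "gpt-4o") -> None:
    """Run each edge case against the prompt and print the responses."""
    for case in EDGE_CASES:
        response = client.chat.completions.create(
            model=model,
            messages=[{"role": "system", "content": system_prompt},
                      {"role": "user", "content": case}],
        )
        print(f"CASE: {case}\n{response.choices[0].message.content}\n{'-' * 40}")
```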
This reveals remaining prompt gaps and helps you build truly robust agent behavior.

Time to complete: ~25 minutes
Skills learned: prompt structure analysis, iterative refinement, tool usage optimization, structured output design, advanced prompt engineering techniques
Next: day 17 - User message engineering and communication optimization
Remember: great prompts are invisible to users but obvious in their
effects. The best agents feel naturally intelligent because their prompts
guide behavior so effectively.