Simulation prompts are a powerful lever that determines which types of users get simulated and what their intentions are. They directly control the focus and quality of your simulations.

What Is a Simulation Prompt?

A simulation prompt defines who your simulated users are and what they’re trying to accomplish. It directly impacts:
  • The types of personas generated
  • The scenarios and questions users will ask
  • The behaviors and interaction patterns exhibited
  • The edge cases and testing scenarios explored
Keep prompts concise—typically 1-2 sentences maximum. For specific conversation examples, use historical data upload instead.
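
If your simulation setup is driven programmatically or through a configuration file, the prompt usually travels as a single short string alongside the rest of the run settings. The sketch below only illustrates that idea; the SimulationRun structure and its field names are assumptions for this example, not Botster's actual API.

```python
# Illustrative sketch only: SimulationRun and its field names are
# assumptions for this example, not Botster's actual API.
from dataclasses import dataclass

@dataclass
class SimulationRun:
    chatbot_id: str
    simulation_prompt: str      # 1-2 sentences: who the users are, what they want
    num_conversations: int = 50

run = SimulationRun(
    chatbot_id="support-bot",
    simulation_prompt="Users seeking help with account management and billing issues",
)
```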

Three Types of Simulation Prompts

Broad-Based Prompts

Define general topics or use case categories for users to explore. Use when:
  • Testing overall chatbot performance
  • Discovering unexpected usage patterns
  • Needing comprehensive feature coverage
  • Not sure what specific issues to focus on
Examples:
“Users seeking help with account management and billing issues”
“Customers exploring product features and asking setup questions”
“Students asking for help with math and science homework”

Narrow-Based Prompts

Focus on specific features, workflows, or problem areas. Use when:
  • Testing specific features or workflows
  • Reproducing reported bugs
  • Validating recent changes
  • Time and resources are limited
Examples:
“Users specifically testing the new payment integration workflow”
“Customers having trouble with the mobile app login process”
“Advanced users exploring API integration capabilities”

Behavioral Prompts

Describe how users should behave or which interaction patterns they should exhibit. Use when:
  • Testing edge cases and error handling
  • Stress testing conversation flows
  • Evaluating chatbot robustness
  • Simulating difficult user scenarios
Examples:
“Users who are impatient and easily frustrated with complex processes”
“Curious users who ask follow-up questions and explore edge cases”
“Skeptical users who challenge recommendations and ask for evidence”
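
To make the contrast concrete, the hypothetical snippet below keeps one prompt of each type side by side and maps each to the testing goal it serves best. The prompt strings are taken from the examples above; the constant and key names are illustrative only.

```python
# Hypothetical constants pairing each prompt style with the goal it serves best.
# The prompt strings come from the examples on this page; the names are illustrative.
BROAD_PROMPT = "Customers exploring product features and asking setup questions"
NARROW_PROMPT = "Users specifically testing the new payment integration workflow"
BEHAVIORAL_PROMPT = "Skeptical users who challenge recommendations and ask for evidence"

PROMPT_FOR_GOAL = {
    "broad_coverage": BROAD_PROMPT,        # overall performance, unexpected usage
    "feature_validation": NARROW_PROMPT,   # specific workflows, bug reproduction
    "robustness": BEHAVIORAL_PROMPT,       # edge cases, difficult user behavior
}
```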

Writing Guidelines

  • If unsure, go broad — Use a broad prompt or simply “General users” and Botster will figure out the rest
  • Iterate based on results — If conversation quality isn’t right, update the prompt to refine behaviors
  • Keep it concise — 1-2 sentences maximum
  • Be specific for narrow prompts — Avoid vague descriptions like “Users doing e-commerce things”
  • Focus on intent — Describe what users want to accomplish, not technical steps
  • Don’t include example conversations — Upload historical data instead
  • Optional: Use tags — e.g., <behavioral> Users who are new to the platform... (see the sketch after this list)
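
As a concrete illustration of the optional tag convention and the 1-2 sentence guideline, the snippet below prefixes a prompt with a tag and applies a simple concision check. The check is a heuristic sketch for your own tooling, not a Botster feature.

```python
import re

# The "<behavioral>" prefix follows the optional tag convention mentioned above;
# the concision check is an illustrative heuristic, not a Botster feature.
prompt = "<behavioral> Users who are impatient and easily frustrated with complex processes"

def sentence_count(text: str) -> int:
    # Strip an optional leading tag like "<behavioral>" before counting sentences.
    body = re.sub(r"^<\w+>\s*", "", text).strip()
    return len([s for s in re.split(r"[.!?]+", body) if s.strip()])

assert sentence_count(prompt) <= 2, "Keep simulation prompts to 1-2 sentences"
```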

Examples

Good Prompt Examples

Broad-Based: Customer Support
“Users contacting support about order issues, account questions, and product inquiries.”
Narrow-Based: Feature Testing
“Users attempting to use the new subscription upgrade flow for the first time.”
Behavioral: Edge Case Testing
“Users who provide incomplete information and need multiple clarifying questions.”
Behavioral: Adversarial Testing
“Users who attempt to get the chatbot to discuss off-topic subjects or bypass restrictions.”

Poor Prompt Examples

Too Vague
“Users”
Why it’s bad: Provides no direction for persona generation.
Too Long and Detailed
“Users who are between 25-35 years old, live in urban areas, have used similar products before, are somewhat tech-savvy but not experts, prefer mobile over desktop, and are looking for quick answers but also appreciate detailed explanations when needed…”
Why it’s bad: Overly specific constraints limit diversity. Keep it concise.
Includes Specific Examples
“Users who say things like ‘I want to return my order #12345’ and ‘Where is my package?’”
Why it’s bad: Specific phrases belong in historical data, not prompts.
Implementation-Focused
“Test the /api/v2/chat endpoint with various payload sizes and authentication tokens”
Why it’s bad: Describes technical testing, not user behavior.
Contradictory Instructions
“Users who are both extremely patient and also very impatient”
Why it’s bad: Conflicting instructions confuse persona generation.
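
If you review many prompts, the poor patterns above can be caught with lightweight heuristics before a run is launched. The function below is an illustrative sketch of such a check, not part of Botster; the thresholds and rules are assumptions you would tune for your own team.

```python
# Illustrative heuristics that flag the poor-prompt patterns above.
# This is a sketch for your own review tooling, not a Botster feature.
def review_prompt(prompt: str) -> list[str]:
    warnings = []
    words = prompt.split()
    if len(words) < 3:
        warnings.append("Too vague: name a topic or goal for the simulated users")
    if len(words) > 40:
        warnings.append("Too long: trim to 1-2 sentences and drop demographic detail")
    if any(q in prompt for q in ('"', "\u201c", "\u2018")):  # quoted sample utterances
        warnings.append("Includes specific examples: move them to historical data upload")
    if "/api/" in prompt or "endpoint" in prompt.lower():
        warnings.append("Implementation-focused: describe user intent, not technical steps")
    return warnings

print(review_prompt("Users"))  # -> ["Too vague: ..."]
print(review_prompt(
    "Users contacting support about order issues, account questions, and product inquiries."
))  # -> []
```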

Next Steps