
Prompting Guide

Introduction

Prompts are an integral part of the branchly platform, as we use generative AI extensively. Prompts are used for the following functions:

  • System Prompts: Defined by branchly to guide interactions across all customers.
  • Prompt Persona: Pre-configured or customizable by users to tailor responses to their use cases. This persona is used at multiple points during our AI process: for reformulating answers, for customizing responses, and during automatic evaluation of sessions.
  • Tool Names and Descriptions: Used across all tools to define when each tool should be used and how.

Think of defining your prompt persona as creating a manual for a new employee unfamiliar with your business. Effective prompts should be broad enough to cover necessary ground while being specific enough to provide clear guidance.

Best Practices

Effective prompt engineering is crucial for maximizing the potential of language models. By crafting well-structured prompts, you can ensure more accurate, efficient, and meaningful outputs, enhancing user experience and achieving desired outcomes.

  • Tone of voice: Prompts shape response style, behavior, and context. Ensure clarity and relevance. For example, specify the tone: "Respond in a professional manner suitable for a business audience."
  • Be Direct: Use language like “you must” or “your task is to …” or “your role is.” This aligns with the principle of using affirmative directives, which helps in achieving better results.
  • Meaning and Clarity: Consider the specific intent and meaning of your prompt to ensure it aligns with your goals. Use direct language and get straight to the point. Avoid repetition.
  • Formatting and Spelling: Adhere to basic markdown formatting rules and check spelling. Errors can significantly reduce prompt effectiveness. Use bullet points.
  • Leverage LLMs: Use large language models to refine and improve your prompts. They can help you phrase instructions in a simple, easy-to-understand manner.
  • Language: Use English for system prompts for consistency; branchly defines its system prompts in English, which performs best. However, you can give specific examples in your own language if you are not satisfied with a more general description.
  • Give in-context examples: Provide specific examples to guide the model's responses. This sets clear expectations and shows the model the format to follow. Example: "Here are two examples of effective summaries. Use a similar format."
  • Tool Names and Tool Descriptions: Treat the combination of tool name and tool description as a standalone prompt. Clearly specify when and how to use each tool. Tool descriptions should be specific enough that they do not overlap with any other tool description! (See the code sketch after this list for an illustration.)
    • Example:

      Imagine you have two different sources for event information, one internal page (part of the knowledge base) and one external page which you want the AI agent to access dynamically.

      • Weekend Events: Sourced from an external website's event calendar.
        • Good: A good tool name would be get_upcoming_weekend_events with a description of “Use this tool to access current events for the upcoming weekend.”
        • Bad: events with the description “Use this tool for event information”. This is bad because it does not convey that the tool covers only events on the upcoming weekend; it reads as if it covers all events. This can cause conflicts, because the “normal events” live in the knowledge base and are therefore handled by a different tool.
      • Normal Events: Sourced from your website, which is regularly crawled for updates. No specific description is needed if you are specific about the weekend events.
  • No default instructions: There is no need to include default instructions like “list of sources at the end” or “answer only based on context provided,” as these are inherent to our AI process.
  • Highlight Importance: For especially sensitive topics, you can give a notice to the LLM that this is especially important. E.g., “This is important!”
  • Don’t Overdo It: Don’t pack too much context and information into a prompt. If there is static information you need to include, consider adding it to the knowledge base or to your data sources (e.g., your website) instead.

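To make the weekend-events example concrete, here is a minimal sketch of a well-scoped tool definition next to an overly generic one. The dictionary structure and field names are assumptions loosely modeled on common function-calling schemas, not branchly's actual configuration format.

```python
# Minimal sketch of the two tool definitions discussed above.
# NOTE: the dict layout and field names ("name", "description") are assumptions
# modeled on common function-calling schemas; branchly's actual tool
# configuration interface may differ.

# Good: a specific name and a description that states exactly when to use the tool.
weekend_events_tool = {
    "name": "get_upcoming_weekend_events",
    "description": (
        "Use this tool to access current events for the upcoming weekend. "
        "Do not use it for general event questions; those are answered from "
        "the knowledge base."
    ),
}

# Bad: the name and description overlap with the regular event information in
# the knowledge base, so the agent cannot tell which source to use.
too_generic_tool = {
    "name": "events",
    "description": "Use this tool for event information.",
}
```
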
By following these best practices, you can create prompts that effectively leverage the capabilities of language models, leading to better interactions and results.

Further resources

Here is a collection of helpful articles: