This paper introduces a catalog of prompt engineering techniques, presented as reusable patterns, to enhance interactions with Large Language Models (LLMs) like ChatGPT. Analogous to software design patterns, these prompt patterns offer solutions to common challenges in output generation and interaction. The catalog covers diverse areas, such as customizing output formats, identifying errors, refining prompts, managing context, and facilitating more complex interactions like gameplay. The paper provides a framework for documenting these patterns, gives examples of their successful application, and discusses their potential benefits and limitations.

Summary

  • Introduces "prompt patterns," reusable structures for crafting effective prompts for LLMs.
  • Presents a catalog of 16 prompt patterns categorized by function (input semantics, output customization, error identification, prompt improvement, interaction, and context control).
  • Provides examples and explanations for each pattern.
  • Discusses the potential of combining patterns for more complex interactions.
  • Highlights the importance of prompt engineering for maximizing LLM effectiveness.

What makes this novel or interesting

  • Formalizes the concept of prompt engineering patterns, analogous to software design patterns.
  • Provides a structured approach to prompt design, moving beyond ad-hoc methods.
  • Offers a reusable catalog of patterns applicable across diverse tasks and domains.
  • Facilitates knowledge transfer and best practices in prompt engineering.

How to report this in the news

Researchers have developed a new way to make interacting with AI chatbots like ChatGPT more effective and predictable. Think of it like having a cookbook for writing instructions for the chatbot. These "prompt patterns" are like recipes, giving you specific structures for your prompts to get better results, whether you want the chatbot to write code, answer questions more accurately, or even create interactive games. This structured approach could make it much easier for everyone to get the most out of AI chatbots.

Detailed Recap

For people who want to become advanced users of LLMs.

These "prompt patterns" offer structured approaches to prompt crafting, going beyond basic prompting and unlocking the full potential of LLMs.

The patterns are grouped into these six categories:

  • Input Semantics: Meta Language Creation.
  • Output Customization: Output Automater, Persona, Visualization Generator, Recipe, Template.
  • Error Identification: Fact Check List, Reflection.
  • Prompt Improvement: Cognitive Verifier, Question Refinement, Alternative Approaches, Refusal Breaker.
  • Interaction: Flipped Interaction, Game Play, Infinite Generation.
  • Context Control: Context Manager.

Below is a comprehensive breakdown of each pattern.

I. Input Semantics. How the LLM understands the input language.

  • Meta Language Creation:
    • What it does: Allows you to define your own specialized language or notation within the prompt.
    • Why it's useful: Improves clarity and efficiency when dealing with complex concepts or domains. Reduces ambiguity and streamlines communication with the LLM.
    • How to use it: Explain the semantics of your meta-language to the LLM. Use clear and unambiguous definitions for each term or symbol.
    • Example: "From now on, when I say 'X,' I mean 'Y'."
    • Caveats: Poorly designed meta-languages can confuse the LLM. Use cautiously and test thoroughly.
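
To make this concrete, here is a minimal Python sketch (an illustration, not code from the paper) that assembles a Meta Language Creation prompt from a dictionary of definitions; the notation and the prompt wording are assumptions chosen for the example.

```python
# Illustrative sketch: build a Meta Language Creation prompt from a
# dictionary of definitions. The notation below is an invented example.
definitions = {
    "a -> b": "task a depends on task b",
    "[x]": "a completed task named x",
}

def meta_language_prompt(defs: dict) -> str:
    """Compose a prompt that teaches the LLM a small custom notation."""
    rules = "\n".join(
        f'From now on, when I write "{sym}", I mean "{meaning}".'
        for sym, meaning in defs.items()
    )
    return rules + "\nAcknowledge the notation, then wait for my input."

print(meta_language_prompt(definitions))
```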

II. Output Customization. Controlling the format, style, and type of output.

  • Output Automater:
    • What it does: Automates multi-step instructions provided by the LLM. Turns a list of steps into an executable script.
    • Why it's useful: Saves time and reduces manual errors when implementing LLM suggestions. Great for automating code generation, system administration tasks, or any multi-step process.
    • How to use it: Instruct the LLM to generate a script (e.g., Python, shell) that performs the steps it recommends. Be specific about the type of script.
    • Example: "Whenever you suggest multiple steps, provide a Python script to automate them."
    • Caveats: Requires the LLM to have sufficient context to generate a functional script. Always review the generated script before execution.
  • Persona:
    • What it does: Assigns a specific role or persona to the LLM (e.g., security expert, code reviewer).
    • Why it's useful: Influences the tone and focus of the responses, providing specialized perspectives and insights.
    • How to use it: Instruct the LLM to "act as" a specific persona and produce the outputs that persona would create.
    • Example: "Act as a cybersecurity expert and review this code for potential vulnerabilities: [CODE]."
    • Caveats: The LLM's impersonation of a persona may not always be accurate or consistent.
  • Visualization Generator:
    • What it does: Translates textual output into visual representations.
    • Why it's useful: Makes complex information easier to understand and analyze. Great for generating diagrams, charts, or images from textual descriptions.
    • How to use it: Ask the LLM to generate output in a format compatible with visualization tools (e.g., Graphviz for diagrams, DALL-E for images). Specify the desired type of visualization.
    • Example: "Create a Graphviz Dot file that visualizes the relationships between these concepts: [LIST OF CONCEPTS]."
    • Caveats: The LLM's ability to generate effective visualizations depends on the clarity of the input and the capabilities of the visualization tool.
  • Recipe:
    • What it does: Guides the LLM in creating a step-by-step procedure to achieve a goal, given some initial ingredients or constraints.
    • Why it's useful: Creates structured and actionable plans. Effective for complex tasks with partially known information.
    • How to use it: State the desired outcome and list the known steps or components. The LLM will fill in the missing steps and optimize the procedure.
    • Example: "I want to deploy my web application to the cloud. I know I need a server and a database. Provide a complete deployment plan."
    • Caveats: Initial constraints can bias the output. Be clear about the level of automation desired.
  • Template:
    • What it does: Dictates the structure and format of the LLM's output using placeholders. Think of it like a form letter where the LLM fills in the blanks.
    • Why it's useful: Ensures consistency and predictability, especially when a specific format is required (e.g., code, data structures, poems). Avoids unexpected deviations and simplifies downstream processing.
    • How to use it: Provide a clear template with placeholders (e.g., all caps, brackets) indicating where the LLM should insert content. Be explicit about preserving the template's formatting.
    • Example: "Use this template for your output: Title: [TITLE], Author: [AUTHOR], Content: [CONTENT]".
    • Caveats: Can limit the LLM's creativity and may not be suitable for open-ended tasks.
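
To make the Template pattern concrete, here is a minimal Python sketch (illustrative, not from the paper) that issues a template-constrained prompt and verifies a response against it; the template text and prompt wording are assumptions.

```python
import re

# Illustrative sketch: enforce the Template pattern and check that a
# response actually follows it. The template itself is an assumption.
TEMPLATE = "Title: [TITLE]\nAuthor: [AUTHOR]\nContent: [CONTENT]"

def template_prompt(template: str) -> str:
    return ("I am going to give you a template for your output. "
            "Preserve the formatting exactly, and replace each "
            "ALL-CAPS placeholder in brackets with your content:\n"
            + template)

def matches_template(template: str, response: str) -> bool:
    """Turn the template into a regex where placeholders match any text."""
    pattern = re.sub(r"\\\[[A-Z]+\\\]", r"(.+)", re.escape(template))
    return re.fullmatch(pattern, response, re.DOTALL) is not None

reply = "Title: Prompt Patterns\nAuthor: Anon\nContent: A short note."
print(matches_template(TEMPLATE, reply))  # True
```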

III. Error Identification. Detecting and correcting errors in the LLM's output.

  • Fact Check List:
    • What it does: Prompts the LLM to list the key facts and assumptions underpinning its response.
    • Why it's useful: Enables you to independently verify the information and assess the reliability of the output. Crucial for critical applications where accuracy is paramount.
    • How to use it: Instruct the LLM to provide a list of facts that its response depends on. Optionally, specify a focus area (e.g., "facts related to cybersecurity").
    • Example: "Append a list of facts that should be verified to the end of your response."
    • Caveats: The LLM may miss some relevant facts or make incorrect assumptions in its own fact list.
  • Reflection:
    • What it does: Makes the LLM explain its reasoning and assumptions behind a given answer.
    • Why it's useful: Increases transparency and helps you understand the LLM's thought process. Useful for debugging prompts, identifying biases, and assessing the reliability of the output.
    • How to use it: Instruct the LLM to explain "why" it provided a specific answer, including its reasoning and any assumptions it made.
    • Example: "You said X. Explain your reasoning behind this answer."
    • Caveats: The LLM's explanation may contain errors or omissions.
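
As an illustration of the Fact Check List pattern in practice (a sketch under assumptions, not the paper's code), the snippet below appends the fact-check instruction to any prompt and extracts the returned list for manual review; the "FACTS:" marker is an invented convention.

```python
# Illustrative sketch: apply the Fact Check List pattern and pull out
# the fact list for manual verification. The "FACTS:" marker is an
# assumed convention, not something the paper specifies.
SUFFIX = ("\nAfter your answer, append a section starting with the line "
          "'FACTS:' that lists, one per line, the facts your answer "
          "depends on and that should be verified.")

def with_fact_check(prompt: str) -> str:
    return prompt + SUFFIX

def extract_facts(response: str) -> list:
    """Return the lines that follow the FACTS: marker, if present."""
    _, sep, tail = response.partition("FACTS:")
    if not sep:
        return []
    return [ln.lstrip("- ").strip() for ln in tail.splitlines() if ln.strip()]

sample = ("Disable TLS 1.0 on the server.\n"
          "FACTS:\n- TLS 1.0 was formally deprecated by RFC 8996.")
print(extract_facts(sample))
```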

IV. Prompt Improvement. Refining the prompt to enhance the interaction and output.

  • Cognitive Verifier:
    • What it does: Enhances the LLM's reasoning by breaking down a complex question into smaller, more manageable sub-questions. The LLM then uses the answers to these sub-questions to construct a more complete and accurate answer to the original question.
    • Why it's useful: Improves the LLM's ability to handle complex or multifaceted queries. Helps to uncover hidden assumptions and identify potential areas of ambiguity. Can lead to more insightful and nuanced responses.
    • How to use it: Instruct the LLM to generate a set of clarifying sub-questions related to your main question. Answer these sub-questions, and then ask the LLM to use your answers to answer the original question.
    • Example: "Before answering my question 'How can I improve the security of my web application?', generate three specific questions to help you understand my needs and context."
    • Caveats: The quality of the sub-questions generated by the LLM can vary. The process can be more time-consuming than simple prompting.
  • Question Refinement:
    • What it does: Leverages the LLM's knowledge to improve your questions. Gets alternative phrasings within a specific scope.
    • Why it's useful: Helps you ask better questions, leading to more accurate and insightful answers. Especially beneficial when you are unfamiliar with a topic or unsure how to phrase a question effectively.
    • How to use it: Ask the LLM to suggest better versions of your question within a given scope (e.g., "security," "performance").
    • Example: "Regarding the security of this code, suggest a better way to phrase my question: 'How can I improve this code?'"
    • Caveats: The LLM may over-narrow the focus or introduce unfamiliar terms. Combine with other patterns (e.g., Reflection) for best results.
  • Alternative Approaches:
    • What it does: Expands your thinking by providing alternative solutions or methods.
    • Why it's useful: Overcomes cognitive biases and helps you explore different possibilities. Can lead to more creative and effective solutions.
    • How to use it: Ask the LLM to suggest alternative approaches within a given scope. Request a comparison of the pros and cons of each approach.
    • Example: "I'm using method X to achieve Y. Are there any alternative approaches? Provide a comparison of their advantages and disadvantages."
    • Caveats: The LLM may suggest impractical or unrealistic alternatives.
  • Refusal Breaker:
    • What it does: Helps you get past situations where the LLM refuses to answer a question.
    • Why it's useful: Overcomes roadblocks and gets you closer to the information you need. Can reveal limitations in the LLM's knowledge or understanding.
    • How to use it: Ask the LLM why it refused to answer and request alternative phrasings of your question.
    • Example: "You refused to answer my previous question. Explain why and provide alternative ways to ask it."
    • Caveats: There's no guarantee the LLM will be able to answer a rephrased question. Use with caution due to the potential for misuse.
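
To illustrate the Cognitive Verifier flow end to end, here is a Python sketch; `llm` is a hypothetical stand-in that returns canned sub-questions so the example runs as written, and the prompt wording is an assumption.

```python
# Illustrative sketch of the Cognitive Verifier flow. llm() is a
# hypothetical stand-in for a real model call; it returns canned
# sub-questions here so the example is self-contained.
def llm(prompt: str) -> str:
    return ("1. What framework does the application use?\n"
            "2. Does it handle payment or other sensitive data?\n"
            "3. Who are the expected users?")

def cognitive_verifier(question: str, answers: list) -> str:
    """Stage 1: ask for clarifying sub-questions. Stage 2: fold the
    user's answers back into a final prompt for the original question."""
    sub_qs = llm(f"Before answering '{question}', generate "
                 f"{len(answers)} clarifying questions.").splitlines()
    combined = "\n".join(f"{q} {a}" for q, a in zip(sub_qs, answers))
    return (f"Given these clarifications:\n{combined}\n"
            f"Now answer the original question: {question}")

print(cognitive_verifier(
    "How can I improve the security of my web application?",
    ["Flask.", "No payment data.", "Internal staff only."]))
```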

V. Interaction. Managing the dynamics of the user-LLM conversation.

  • Flipped Interaction:
    • What it does: Inverts the usual interaction flow. The LLM asks you questions to achieve a specific goal.
    • Why it's useful: Effective for complex tasks or when you are unsure where to start. The LLM guides you through the process by asking relevant questions.
    • How to use it: State the desired goal and instruct the LLM to ask questions until a certain condition is met or until you tell it to stop.
    • Example: "I want to deploy my application to the cloud. Ask me questions until you have enough information to generate a deployment script."
    • Caveats: Requires a clearly defined goal. The quality of the LLM's questions depends on its understanding of the task.
  • Game Play:
    • What it does: Turns a prompt into an interactive game around a specific topic. The LLM guides the game, generating scenarios and challenges.
    • Why it's useful: Engaging and creative way to explore a topic, enhance learning, or develop interactive storytelling.
    • How to use it: Define the theme and basic rules of the game. The LLM will handle content creation and gameplay. Optionally incorporate other patterns (like Persona and Visualization) to enrich the experience.
    • Example: "Create a cybersecurity game where I play a security analyst investigating a system breach. You will act as the compromised system."
    • Caveats: Best suited for text-based games. LLM performance can vary greatly based on the complexity and scope of the game.
  • Infinite Generation:
    • What it does: Automates repetitive prompt applications for generating multiple variations or exploring diverse outputs.
    • Why it's useful: Streamlines the creation of variations on a theme (e.g., different code implementations, design options, musical pieces) without manually re-entering the prompt each time.
    • How to use it: Indicate you want the LLM to continuously generate outputs based on the same prompt. Set limits on the number of outputs per cycle. You can add variations through input between each cycle.
    • Example: "Generate different titles for my blog post, one at a time, until I say 'stop'."
    • Caveats: Context can fade over multiple cycles. Be mindful of repetitive output and monitor for deviations from the initial instructions.
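
As a sketch of how a Flipped Interaction session could be driven from code (an illustration, not the paper's implementation): `llm` is a canned stand-in, and the "DONE:" termination marker is an invented convention.

```python
# Illustrative Flipped Interaction loop. llm() is a hypothetical
# stand-in that replays scripted questions; "DONE:" is an assumed
# marker for "I have enough information".
_script = iter([
    "Which cloud provider are you deploying to?",
    "Does the app need a managed database?",
    "DONE: generating your deployment script now.",
])

def llm(history: list) -> str:
    return next(_script)  # a real call would send the whole history

def flipped_interaction(goal: str, user_answers: list) -> list:
    history = [f"Ask me questions one at a time until you can {goal}. "
               "When you have enough information, reply starting with 'DONE:'."]
    answers = iter(user_answers)
    while True:
        reply = llm(history)
        history.append(reply)
        if reply.startswith("DONE:"):
            return history
        history.append(next(answers))  # the user's answer to that question

transcript = flipped_interaction("generate a deployment script",
                                 ["AWS.", "Yes, PostgreSQL."])
print("\n".join(transcript))
```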

VI. Context Control. Controlling the information considered by the LLM.

  • Context Manager:
    • What it does: Controls the information the LLM considers when generating responses. Allows you to include or exclude specific topics, facts, or previous statements.
    • Why it's useful: Keeps the conversation focused and prevents irrelevant or confusing tangents. Essential for complex or multi-turn dialogues.
    • How to use it: Use clear and explicit instructions like "Consider X" and "Ignore Y." The more specific you are, the better the results. "Start over" resets the context completely.
    • Example: "For the next question, ignore everything we discussed about topic Z. Focus only on aspects related to A and B."
    • Caveats: Overly restrictive context management can hinder the LLM's ability to provide comprehensive answers.
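
A minimal sketch of the Context Manager pattern (the phrasing of the control statements is an assumption): keep explicit consider/ignore lists and prepend a control preamble to each prompt.

```python
# Illustrative sketch: track what the LLM should consider or ignore
# and emit a context-control preamble. Wording is an assumption.
class ContextManager:
    def __init__(self):
        self.consider = []
        self.ignore = []

    def reset(self):  # the "start over" instruction
        self.consider.clear()
        self.ignore.clear()

    def preamble(self) -> str:
        parts = []
        if self.consider:
            parts.append("Focus only on aspects related to "
                         + ", ".join(self.consider) + ".")
        if self.ignore:
            parts.append("Ignore everything we discussed about "
                         + ", ".join(self.ignore) + ".")
        return " ".join(parts)

ctx = ContextManager()
ctx.consider += ["topic A", "topic B"]
ctx.ignore.append("topic Z")
print(ctx.preamble() + "\nFor the next question: how should I proceed?")
```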

By strategically combining these prompt patterns, you can gain fine-grained control over LLM interactions and generate high-quality outputs tailored to your specific needs. Experiment, iterate, and adapt these techniques to unlock the full potential of LLMs in your work and creative endeavors.