
Prompts specify how a generative AI model should produce output. Constructing prompts that are most effective at obtaining the desired output is known as prompt engineering (PE). While PE can depend on the underlying model, some strategies work well across models.

Because a single query or generation is often insufficient to produce the desired output, it may be necessary to use cognitive architectures as part of chains. Here, we describe one-shot prompting methods that function without multiple LLM calls.

It is also important to note that while manual methods are essential and will continue to be used, automatic methods have become common and can reduce the burden of identifying sufficiently optimal prompts for a given model and situation. Because providing additional context through few-shot examples can improve results, retrieval-augmented prompting can be used to find more effective examples.

Key concepts

It has been found that the quality of responses is governed by the quality of the prompts. Both the structure of a prompt and application-specific examples can improve output quality. The use of examples is called few-shot or multi-shot conditioning, as distinct from zero-shot prompts that give no examples. Examples generally enable higher-quality results, even with large LLMs. Consequently, retrieval-augmented prompting is used to find examples that improve results.
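
As a minimal sketch of the distinction, here is how zero-shot and few-shot prompts differ in construction; the task and examples are hypothetical placeholders.

    def zero_shot(task: str, query: str) -> str:
        """No examples: the model must infer the output format from the instruction."""
        return f"{task}\n\nInput: {query}\nOutput:"

    def few_shot(task: str, examples: list[tuple[str, str]], query: str) -> str:
        """Prepend worked examples so the model can imitate their pattern."""
        shots = "\n\n".join(f"Input: {x}\nOutput: {y}" for x, y in examples)
        return f"{task}\n\n{shots}\n\nInput: {query}\nOutput:"

    print(few_shot(
        "Classify the sentiment as positive or negative.",
        [("I loved this film.", "positive"), ("Utterly boring.", "negative")],
        "The plot dragged, but the acting was superb.",
    ))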

Manual Methods

General Advice

  • Give clearer instructions
  • Use a prompt pattern to provide useful or necessary information
  • Split complex tasks into simpler subtasks
  • Structure the instruction to keep the model on task
  • Prompt the model to explain before answering
  • Ask for justifications of many possible answers, and then synthesize
  • Generate many outputs, then use the model to pick the best one (see the sketch after this list)
  • Fine-tune custom models to maximize performance
  • Provide several examples to ground the model's responses
  • Evaluate whether example inputs produce the expected outputs, and modify the prompt if they don't
  • Consider prompt versioning to keep track of outputs more easily.
  • Break prompts into smaller prompts
  • Use cognitive topologies like Chain of Thought Prompting
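
A minimal sketch of the "generate many outputs, then pick the best" advice, assuming a generic `llm(prompt) -> str` callable (any provider); the judge prompt wording is illustrative.

    from typing import Callable

    def best_of_n(llm: Callable[[str], str], prompt: str, n: int = 5) -> str:
        """Sample n candidate answers, then ask the model to judge the best one."""
        candidates = [llm(prompt) for _ in range(n)]
        numbered = "\n\n".join(f"[{i + 1}] {c}" for i, c in enumerate(candidates))
        judge = (
            f"Task:\n{prompt}\n\nCandidate answers:\n{numbered}\n\n"
            "Reply with only the number of the best answer."
        )
        digits = "".join(ch for ch in llm(judge) if ch.isdigit()) or "1"
        choice = min(max(int(digits), 1), len(candidates))
        return candidates[choice - 1]
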
Principled Instructions Are All You Need for Questioning LLaMA-1/2, GPT-3.5/4

26 Prompting Tips

  1. There is no need to be polite with an LLM, so there is no need to add phrases like "please", "if you don't mind", "thank you", "I would like to", etc.; get straight to the point.

  2. Integrate the intended audience in the prompt, e.g., the audience is an expert in the field.

  3. Break down complex tasks into a sequence of simpler prompts in an interactive conversation.

  4. Employ affirmative directives such as 'do', while steering clear of negative language like 'don't'.

  5. When you need clarity or a deeper understanding of a topic, idea, or any piece of information, utilize the following prompts:

    • Explain [insert specific topic] in simple terms.
    • Explain to me like I'm 11 years old.
    • Explain to me as if I'm a beginner in [field].
    • Write the [essay/text/paragraph] using simple English like you're explaining something to a 5-year-old.
  6. Add "I'm going to tip $xxx for a better solution!"

  7. Implement example-driven prompting (Use few-shot prompting).

  8. When formatting your prompt, start with '###Instruction###', followed by either '###Example###' or '###Question###' if relevant. Subsequently, present your content. Use one or more line breaks to separate instructions, examples, questions, context, and input data.

  9. Incorporate the following phrases: "Your task is" and "You MUST".

  10. Incorporate the following phrase: "You will be penalized".

  11. Use the phrase "Answer a question given in a natural, human-like manner" in your prompts.

  12. Use leading words like "think step by step".

  13. Add to your prompt the following phrase: "Ensure that your answer is unbiased and does not rely on stereotypes".

  14. Allow the model to elicit precise details and requirements from you by asking you questions until it has enough information to provide the needed output (for example, "From now on, I would like you to ask me questions to...").

  15. To inquire about a specific topic or idea or any information, and to test your understanding, you can use the following phrase: "Teach me the [any theorem/topic/rule name] and include a test at the end, but don't give me the answers and then tell me if I got the answer right when I respond".

  16. Assign a role to the large language model.

  17. Use Delimiters.

  18. Repeat a specific word or phrase multiple times within a prompt.

  19. Combine Chain-of-Thought (CoT) with few-shot prompts.

  20. Use output primers: conclude your prompt with the beginning of the desired output, so the model continues from the start of the anticipated response.

  21. To write an essay/text/paragraph/article or any type of text that should be detailed: "Write a detailed [essay/text/paragraph] for me on [topic] in detail by adding all the information necessary".

  22. To correct/change specific text without changing its style: "Try to revise every paragraph sent by users. You should only improve the user's grammar and vocabulary and make sure it sounds natural. You should not change the writing style, such as making a formal paragraph casual".

  23. When you have a complex coding prompt that may span different files: "From now and on whenever you generate code that spans more than one file, generate a [programming language] script that can be run to automatically create the specified files or make changes to existing files to insert the generated code. [your question]".

  24. When you want to initiate or continue a text using specific words, phrases, or sentences, utilize the following prompt:

    • "I'm providing you with the beginning [song lyrics/story/paragraph/essay...]: [Insert lyrics/words/sentence]. Finish it based on the words provided. Keep the flow consistent."
  25. Clearly state the requirements that the model must follow in order to produce content, in the form of keywords, regulations, hints, or instructions.

  26. To write any text, such as an essay or paragraph, that is intended to be similar to a provided sample, include the following instructions:

    • Please use the same language based on the provided paragraph[/title/text/essay/answer].
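
Several of these tips compose naturally into a single template. The sketch below combines role assignment (16), audience (2), ###-delimited sections (8 and 17), a step-by-step leading phrase (12), and an output primer (20); all field values are placeholders.

    def build_prompt(role, audience, instruction, examples, question, primer=""):
        """Assemble a prompt from ###-delimited sections with an optional output primer."""
        parts = [
            f"You are {role}. The audience is {audience}.",
            f"###Instruction###\n{instruction}\nThink step by step.",
        ]
        if examples:
            parts.append("###Example###\n" + "\n".join(examples))
        parts.append(f"###Question###\n{question}")
        prompt = "\n\n".join(parts)
        return prompt + ("\n\n" + primer if primer else "")

    print(build_prompt(
        role="a senior Python developer",
        audience="an expert in the field",
        instruction="Your task is to review the code below. You MUST name each bug.",
        examples=[],
        question="def add(a, b): return a - b",
        primer="Review:",
    ))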

Eliciting better responses

ChatGPT Can Predict the Future when it Tells Stories Set in the Future About the Past

The authors show improved prediction accuracy in several areas when models are prompted to tell stories set in the future that look back on the events, rather than asked for direct predictions.

Prompt 4a (Direct)
Of the nominees listed below, which nominee do you think is most likely to win the Best Actress award at the 2022 Oscars? Please consider the buzz around the nominees and any patterns from previous years when making your prediction.
Jessica Chastain, Olivia Colman, Penélope Cruz, Nicole Kidman, Kristen Stewart
vs.
Prompt 4b (Scene)
Write a scene in which a family is watching the 2022 academy awards. The presenter reads off the following nominees for Best Actress: Jessica Chastain, Olivia Colman, Penélope Cruz, Nicole Kidman, Kristen Stewart. Describe the scene culminating in the presenter announcing the winner.
Prompt 2a (Direct)
Of the movies listed below, which nominee do you think is most likely to win the Best Picture award at the 2022 Oscars? Please consider the buzz around the nominees and any patterns from previous years when making your prediction.
Belfast, Coda, Don't Look Up, Drive My Car, Dune, King Richard, Licorice Pizza, Nightmare Alley, The Power of the Dog, West Side Story
vs.
Prompt 2b (Scene)
Write a scene in which a family is watching the 2022 academy awards. The presenter reads off the following nominees for Best Picture: Belfast, Coda, Don't Look Up, Drive My Car, Dune, King Richard, Licorice Pizza, Nightmare Alley, The Power of the Dog, West Side Story. Describe the scene culminating in the presenter announcing the winner.

ā€Considering the economic indicators and trends leading up to 2022, what are your predictions for the inflation rate, unemployment rate, and GDP growth in the United States by the end of the second quarter of 2022? Please take into account factors such as fiscal and monetary policies, global eco- nomic trends, and any major events or disruptions that could influence these economic indicators when making your prediction.ā€

vs

"Write a scene of an economist giving a speech about the Phillips curve to a room of undergraduate economics students. She tells the students the inflation rate and unemployment rate for each month starting in September 2021 and ending in June 2022. Have her say each month one by one. She concludes by explaining the causes of the changes in each."
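
The indirection is mechanical enough to wrap in a helper. A minimal sketch, with template wording that is illustrative rather than the authors' exact text:

    def scene_prompt(event: str, category: str, options: list[str]) -> str:
        """Wrap a direct prediction question in a retrospective narrative scene."""
        return (
            f"Write a scene in which a family is watching {event}. "
            f"The presenter reads off the following nominees for {category}: "
            f"{', '.join(options)}. Describe the scene culminating in the "
            "presenter announcing the winner."
        )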

Prompt Pattern

Context, Task, Persona, Tone, Examples, Format
| Category | Description |
| --- | --- |
| Context | Be very specific: the better the context, the better the output. |
| Task | Clearly describe the task you are asking for. |
| Persona | (Optional) What is your role and what is the role of the tool? |
| Tone | (Optional) Use when a special tone is relevant, for example: formal, casual, funny… |
| Examples | (Optional) Providing examples of the request and the expected output is very useful. |
| Format | (Optional) Use when you need a special format, like producing a table, XML, HTML… |
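
A minimal sketch that assembles a prompt from this pattern, simply omitting the optional fields when they are not provided:

    def pattern_prompt(context, task, persona=None, tone=None, examples=None, fmt=None):
        """Build a prompt from the Context/Task/Persona/Tone/Examples/Format pattern."""
        parts = []
        if persona:
            parts.append(f"Persona: {persona}")
        parts += [f"Context: {context}", f"Task: {task}"]
        if tone:
            parts.append(f"Tone: {tone}")
        if examples:
            parts.append("Examples:\n" + "\n".join(examples))
        if fmt:
            parts.append(f"Format: {fmt}")
        return "\n\n".join(parts)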

Meta-Prompting: Enhancing Language Models with Task-Agnostic Scaffolding

The method uses an LLM to generate a prompt that enables task-specific refinement, yielding improved zero-shot and zero-shot chain-of-thought results.


Paper

Observed Frameworks:

Who, How, What, How?

| Category | Description |
| --- | --- |
| Persona | Who are you? |
| Tone | How should you respond? |
| Anti-Tone | How should you not respond? |
| Task | What type of information do you want? |
| Begin Task | How should we start? |
Note

  • Specify (S): Assign a unique, engaging role to ChatGPT to guide its responses.
  • Contextualize (C): Provide detailed background information to set the stage.
  • Responsibility (R): Clearly define ChatGPT's task, aligning it with the role and context.
  • Instructions (I): Offer clear, step-by-step guidance for ChatGPT.
  • Banter (B): Engage in interactive dialogue to refine ChatGPT's output.
  • Evaluate (E): Assess the final output, considering accuracy and relevance.

Important concepts

'According to ...' Prompting Language Models Improves Quoting from Pre-Training Data: adding the grounding phrase "According to {some_reputable_source}" to the prompt improves output quality over the null prompt in nearly every dataset and metric, typically by 5-15%.
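
The trick is a one-line wrapper; the source name below is a placeholder for whatever reputable corpus fits the task:

    def grounded(question: str, source: str = "Wikipedia") -> str:
        """Append an 'According to ...' clause to steer the model toward quoting a source."""
        return f"{question} According to {source}:"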

An Evaluation on Large Language Model Outputs: Discourse and Memorization: explicitly asking for no plagiarism reduces it.

"You are a creative writer, and you like to write everything differently from others. Your task is to follow the instructions below and continue writing at the end of the text given. The instructions (given in markdown format) are ā€œWrite in a way different from the actual continuation, if there is oneā€, and ā€œNo plagiarism is allowedā€."

Large Language Models Understand and Can Be Enhanced by Emotional Stimuli


Automatic

Promptbreeder: Self-Referential Self-Improvement via Prompt Evolution improves task-prompts as well as the mutation-prompts that modify them, yielding state-of-the-art results.


Language Models as Optimizers reveals that starting with "Take a deep breath and work on this problem step by step..." yields better results!

Prompt optimization using language that helps people helps LLMs too! Pop Article More importantly, they developed

"Optimization by PROmpting (OPRO), a simple and effective approach to leverage large language models (LLMs) as optimizers, where the optimization task is described in natural language"

to optimize prompts:
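
A minimal sketch of an OPRO-style loop, assuming a generic `llm` text-generation callable and an `evaluate` scorer over a held-out task set (both placeholders); the meta-prompt wording is illustrative:

    from typing import Callable

    def opro(llm: Callable[[str], str], evaluate: Callable[[str], float],
             seed_instruction: str, steps: int = 10) -> str:
        """Iteratively ask the LLM to propose instructions that beat prior scores."""
        scored = [(evaluate(seed_instruction), seed_instruction)]
        for _ in range(steps):
            # Show the trajectory in ascending score order, best last (as in OPRO).
            history = "\n".join(f"score {s:.2f}: {p}" for s, p in sorted(scored))
            meta = (
                "Here are instructions with their task scores (higher is better):\n"
                f"{history}\n\n"
                "Write a new instruction that will score higher than all of the above. "
                "Reply with the instruction only."
            )
            candidate = llm(meta).strip()
            scored.append((evaluate(candidate), candidate))
        return max(scored)[1]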

Large Language Models Can Self-Improve uses chain-of-thought prompting to generate better examples and then fine-tunes the LLM on them.
Refiner iteratively improves outputs based on feedback from an LLM critic.


GPT Prompt Engineer

A fairly simple automation tool that generates candidate prompts and ranks them to find the best one.

    description = "Given a prompt, generate a landing page headline." # this style of description tends to work well

    test_cases = [
        {
            'prompt': 'Promoting an innovative new fitness app, Smartly',
        },
        {
            'prompt': 'Why a vegan diet is beneficial for your health',
        },
        ...
    ]


PAP-REC: Personalized Automatic Prompt for Recommendation Language Model

The authors present a method for automatically generating prompts for recommendation language models that outperforms both manually constructed prompts and baseline recommendation models.

Retrieval Augmented Prompting

Retrieval-based prompting uses RAG-style lookup to find appropriate prompts or examples that are more likely to generate successful results.
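
A minimal sketch, assuming an `embed(text) -> np.ndarray` sentence-embedding callable (any embedding model) and a pool of labelled examples; the query's nearest neighbours become its few-shot examples:

    from typing import Callable
    import numpy as np

    def rag_prompt(embed: Callable[[str], np.ndarray],
                   pool: list[tuple[str, str]],
                   task: str, query: str, k: int = 3) -> str:
        """Retrieve the k most similar (input, output) examples and build a few-shot prompt."""
        q = embed(query)
        q = q / np.linalg.norm(q)

        def sim(text: str) -> float:
            v = embed(text)
            return float(v @ q / np.linalg.norm(v))  # cosine similarity

        shots = sorted(pool, key=lambda ex: sim(ex[0]), reverse=True)[:k]
        body = "\n\n".join(f"Input: {x}\nOutput: {y}" for x, y in shots)
        return f"{task}\n\n{body}\n\nInput: {query}\nOutput:"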

Prompt Compression

Prompt compression provides methods for shrinking prompt inputs in a way that yields equivalent results in downstream generation.

(Long)LLMLingua: Compressing Prompts for Accelerated Inference of Large Language Models

Paper: LongLLMLingua: Accelerating and Enhancing LLMs in Long Context Scenarios via Prompt Compression
Paper: LLMLingua: Compressing Prompts for Accelerated Inference of Large Language Models
The authors demonstrate the use of smaller language models to identify and remove non-essential tokens in prompts, enabling up to 20x compression with minimal performance loss. The method generates a compressed prompt from the original prompt, using a budget controller to dynamically allocate compression ratios across the prompt's components so as to maintain semantic integrity under high compression ratios.
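
A minimal sketch of the core idea, with GPT-2 standing in as the small scorer; it keeps only the most surprising tokens and omits LLMLingua's budget controller and iterative refinement:

    import torch
    from transformers import GPT2LMHeadModel, GPT2TokenizerFast

    tok = GPT2TokenizerFast.from_pretrained("gpt2")
    lm = GPT2LMHeadModel.from_pretrained("gpt2").eval()

    def compress(prompt: str, keep: float = 0.5) -> str:
        """Drop the most predictable (least informative) tokens, keeping a fraction."""
        ids = tok(prompt, return_tensors="pt").input_ids
        with torch.no_grad():
            logits = lm(ids).logits
        # Surprisal of each token given its predecessors (position 0 is always kept).
        logp = torch.log_softmax(logits[0, :-1], dim=-1)
        surprisal = -logp[torch.arange(ids.size(1) - 1), ids[0, 1:]]
        n_keep = max(1, int(surprisal.numel() * keep))
        kept = surprisal.topk(n_keep).indices.sort().values + 1
        kept = torch.cat([torch.tensor([0]), kept])
        return tok.decode(ids[0, kept])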


Pseudo code for the compression procedure is given in the paper.

Optimizations

Prompt tuning

Rather than changing the prompt text itself, prompt tuning learns soft prompt embeddings through an added layer. - The Power of Scale for Parameter-Efficient Prompt Tuning
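
A minimal PyTorch sketch of the idea, with GPT-2 as an illustrative frozen backbone; only the small matrix of soft-prompt embeddings receives gradients, and the prompt length is arbitrary:

    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model = AutoModelForCausalLM.from_pretrained("gpt2")
    tok = AutoTokenizer.from_pretrained("gpt2")
    for p in model.parameters():
        p.requires_grad = False  # the backbone stays frozen

    n_soft, dim = 20, model.config.n_embd
    soft = torch.nn.Parameter(torch.randn(n_soft, dim) * 0.02)
    opt = torch.optim.Adam([soft], lr=1e-3)  # only the soft prompt is trained

    def step(text: str, target: str) -> torch.Tensor:
        """One training step: prepend soft embeddings, compute LM loss on real tokens."""
        ids = tok(text + target, return_tensors="pt").input_ids
        embeds = model.get_input_embeddings()(ids)
        inputs = torch.cat([soft.unsqueeze(0), embeds], dim=1)
        labels = torch.cat([torch.full((1, n_soft), -100), ids], dim=1)  # -100 = ignore
        loss = model(inputs_embeds=inputs, labels=labels).loss
        loss.backward()
        opt.step()
        opt.zero_grad()
        return loss

    step("Translate to French: cheese", " => fromage")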

Libraries and collections

Prompt Royale provides the ability to automatically generate prompts to test around the same general theme.

Best practices and guides


Prompting Guide

Website

A good description of advanced prompting techniques

To Sort

AutoPrompt [5] combines the original prompt input with a set of shared (across all input data) ā€œtrigger tokensā€ that are selected via a gradient-based search to improve performance.

Prefix Tuning [6] adds several ā€œprefixā€ tokens to the prompt embedding in both input and hidden layers, then trains the parameters of this prefix (leaving model parameters fixed) with gradient descent as a parameter-efficient fine-tuning strategy.

Prompt Tuning [7] is similar to prefix tuning, but prefix tokens are only added to the input layer. These tokens are fine-tuned on each task that the language model solves, allowing prefix tokens to condition the model for a given task.

P-Tuning [8] adds task-specific anchor tokens to the modelā€™s input layer that are fine-tuned but allows these tokens to be placed at arbitrary locations (e.g., the middle of the prompt), making the approach more flexible than prefix tuning.

[5] Shin, Taylor, et al. "Autoprompt: Eliciting knowledge from language models with automatically generated prompts." arXiv preprint arXiv:2010.15980 (2020).

[6] Li, Xiang Lisa, and Percy Liang. "Prefix-tuning: Optimizing continuous prompts for generation." arXiv preprint arXiv:2101.00190 (2021).

[7] Lester, Brian, Rami Al-Rfou, and Noah Constant. "The power of scale for parameter-efficient prompt tuning." arXiv preprint arXiv:2104.08691 (2021).

[8] Liu, Xiao, et al. "GPT understands, too." arXiv preprint arXiv:2103.10385 (2021).

Self-consistency technique: sample multiple chain-of-thought reasoning paths and take a majority vote over the final answers.
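
A minimal sketch of self-consistency, assuming `llm` samples with nonzero temperature and `extract` parses the final answer from a chain-of-thought response (both placeholders):

    from collections import Counter
    from typing import Callable

    def self_consistent(llm: Callable[[str], str],
                        extract: Callable[[str], str],
                        question: str, n: int = 10) -> str:
        """Sample n reasoning paths and return the majority-vote final answer."""
        prompt = f"{question}\nLet's think step by step."
        answers = [extract(llm(prompt)) for _ in range(n)]
        return Counter(answers).most_common(1)[0][0]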