Unlock Your AI’s Full Potential: 10 Advanced Prompt Engineering Hacks in 2023

Liat Ben-Zur
Artificial Intelligence in Plain English
5 min read · Oct 2, 2023


Photo by Andrea De Santis on Unsplash

Human-to-AI communication is a delicate dance that only a few have fully mastered. Engaging with Large Language Models (LLMs) requires speaking with the right linguistic nuance, a practice often known as prompt engineering.

Here are 10 lesser-known, advanced prompt engineering strategies that can significantly enhance your interactions with LLMs.

1. Constraint Injection

What’s This? Putting some friendly boundaries in your prompts to get more focused answers from your AI.

Good Examples:

  1. “Write a short story without using the letter ‘e’.”
  2. “Provide an explanation using only simple words.”
  3. “Generate a poem that follows an ABAB rhyme scheme.”

Bad Examples:

  1. “Write a short story.”
  2. “Provide an explanation.”
  3. “Generate a poem.”
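The pattern is easy to make repeatable. A minimal sketch in Python, assuming nothing beyond string formatting (`with_constraints` is an illustrative helper, not a library function):

```python
def with_constraints(base_task: str, constraints: list[str]) -> str:
    """Append an explicit constraint list to a base instruction."""
    if not constraints:
        return base_task
    lines = [base_task, "", "Constraints:"]
    lines += [f"- {c}" for c in constraints]
    return "\n".join(lines)

prompt = with_constraints(
    "Write a short story.",
    ["Do not use the letter 'e'.", "Keep it under 100 words."],
)
print(prompt)
```

The same vague prompt from the bad examples becomes a focused one as soon as the constraint list is non-empty.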

2. Temperature and Top-K Tuning

What’s This? Twisting these knobs can make your AI’s responses more predictable or wildly creative. Temperature controls the randomness of word selection, while Top-K restricts the model’s choices to the K most likely next words. Tuning these parameters balances creativity with coherence. It’s like tuning the personality of your AI!

Good Examples:

  1. Lower temperature (e.g., 0.2) for focused, deterministic responses.
  2. Higher temperature (e.g., 0.8) for creative, diverse responses.
  3. Top-k (e.g., 40) for a sweet spot between diversity and coherence.

Bad Examples:

  1. High temperature for factual queries.
  2. Low temperature for creative tasks.
  3. Not adjusting top-k, leading to overly narrow or random outputs.
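These knobs act on the model’s next-token distribution. A self-contained sketch of how temperature scaling and top-k filtering interact, using toy logits in place of a real model’s output:

```python
import math
import random

def sample_next_token(logits, temperature=1.0, top_k=40, seed=None):
    """Temperature-scale the logits, keep the top-k tokens, then sample.

    `logits` maps candidate tokens to raw scores (a toy stand-in for a
    real model's output layer).
    """
    # Top-k: keep only the k highest-scoring candidates.
    candidates = sorted(logits.items(), key=lambda kv: kv[1], reverse=True)[:top_k]
    # Temperature: divide the logits; <1 sharpens the distribution, >1 flattens it.
    scaled = [(tok, score / temperature) for tok, score in candidates]
    max_s = max(s for _, s in scaled)                  # subtract max for numerical stability
    weights = [math.exp(s - max_s) for _, s in scaled]
    rng = random.Random(seed)
    return rng.choices([tok for tok, _ in scaled], weights=weights, k=1)[0]

logits = {"the": 5.0, "a": 4.0, "banana": 0.5, "quantum": 0.1}
# Low temperature + small k: focused, near-deterministic choice.
focused = sample_next_token(logits, temperature=0.2, top_k=1)
# High temperature + full k: more diverse choices across runs.
diverse = sample_next_token(logits, temperature=0.8, top_k=4, seed=0)
```

Production APIs expose `temperature` and top-k (or top-p) as request parameters; the mechanics above are what those parameters control.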

3. Prompt Phrasing Variations

What’s This? Spice up how you ask questions to get different flavors of answers. Small changes in prompt phrasing can elicit more accurate or diverse responses, so experiment iteratively.

Good Examples:

  1. “Translate the following English text to French:” versus “How would you say the following in French:”.
  2. “Describe the process of photosynthesis” versus “Explain how plants make food through photosynthesis”.
  3. “Calculate the area of the circle” versus “What is the area of a circle with a given radius?”

Bad Examples:

  1. Not varying sentence structure, verbs, and voice.
  2. Always using “Describe” instead of varying with “Explain”, “Tell me about”, etc.
  3. Not experimenting with active versus passive voice in prompts.
  4. Using the same phrasing like “Translate this text:” repeatedly.

4. Reward Modelling

What’s This? Reward models provide feedback signals that help fine-tune the AI’s performance. Reward good behavior and course-correct mistakes, and the model improves over time.

Good Examples:

  1. Providing feedback for better accuracy on factual queries.
  2. Rewarding concise and clear answers.
  3. Encouraging creativity in generating content.

Bad Examples:

  1. No feedback on incorrect or vague responses.
  2. Ignoring the opportunity to reinforce good performance.
  3. Not utilizing reward signals for model fine-tuning.
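True reward modelling happens during training (as in RLHF), which you can’t do from a prompt box. The closest user-level analogue is best-of-n selection: score several candidate answers with a simple reward function and keep the winner. A toy sketch, with made-up scoring criteria:

```python
def reward(answer: str, max_len: int = 200) -> float:
    """Toy reward: prefer concise answers that avoid boilerplate filler."""
    score = 1.0
    if len(answer) > max_len:
        score -= 0.5          # penalize rambling
    if "as an ai" in answer.lower():
        score -= 0.3          # penalize hedging boilerplate
    return score

def best_of(candidates: list[str]) -> str:
    """Keep the candidate the reward function likes most."""
    return max(candidates, key=reward)

answers = [
    "As an AI, I think Paris is the capital of France, probably.",
    "Paris is the capital of France.",
]
print(best_of(answers))  # → "Paris is the capital of France."
```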

5. Domain Priming

What’s This? Set the stage so your AI knows the context and gives you relevant answers. Priming the AI by specifying a knowledge domain improves context and accuracy. Target your queries by establishing the right domain upfront.

Good Examples:

  1. “As a financial analyst, explain the concept of beta.”
  2. “In a medical context, define what systolic and diastolic blood pressures are.”
  3. “As a historian, describe the significance of the Battle of Waterloo.”

Bad Examples:

  1. “Explain the concept of beta.”
  2. “Define systolic and diastolic blood pressures.”
  3. “Describe the significance of the Battle of Waterloo.”
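Priming is just a prefix, so it is trivial to automate. A tiny illustrative helper (the name `primed` is my own, not a standard API):

```python
def primed(domain_role: str, question: str) -> str:
    """Prefix a question with a domain persona to prime the model."""
    return f"As a {domain_role}, {question[0].lower() + question[1:]}"

print(primed("financial analyst", "Explain the concept of beta."))
# → "As a financial analyst, explain the concept of beta."
```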

6. Sub-Modelling

What’s This? Split the big, scary questions into smaller, manageable chunks to get better answers. Breaking down complex requests into smaller steps helps avoid overwhelming the AI. Don’t go for a home run swing. Use divide and conquer instead.

Good Examples:

  1. Breaking down a financial analysis query into sub-queries about market trends, risks, and company performance.
  2. Dividing a complex math problem into smaller, solvable steps.
  3. Parsing a large text by segments for detailed analysis.

Bad Examples:

  1. Trying to analyze a complex financial scenario in one query.
  2. Attempting to solve a complex problem in a single step.
  3. Overloading the model with a large text for analysis without segmentation.
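Divide and conquer can be mechanized: split the input, query per chunk, then combine. A sketch where `ask` is a placeholder for whatever call sends a prompt to your model:

```python
def segment(text: str, max_words: int = 50) -> list[str]:
    """Split a large text into word-bounded chunks for step-by-step analysis."""
    words = text.split()
    return [" ".join(words[i:i + max_words])
            for i in range(0, len(words), max_words)]

def analyze_in_steps(text: str, ask, max_words: int = 50) -> list[str]:
    """Send each segment as its own sub-query instead of one giant prompt."""
    return [ask(f"Summarize this passage:\n{chunk}")
            for chunk in segment(text, max_words)]

# Mock `ask` just to show the flow; swap in a real model call.
summaries = analyze_in_steps(
    "one two three four five six",
    ask=lambda p: "summary",
    max_words=3,
)
```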

7. Explicit Negation

What’s This? Tell your AI what you don’t want in the answer, making sure it stays on track. Specifying exclusions and restrictions explicitly focuses the AI’s response.

Good Examples:

  1. “Explain the process without using technical jargon.”
  2. “Provide a summary without giving away the ending.”
  3. “Discuss the theory without referencing other theories.”

Bad Examples:

  1. “Explain the process.”
  2. “Provide a summary.”
  3. “Discuss the theory.”

8. Sequential Prompting

What’s This? Have a back-and-forth with your AI to refine the answers gradually. Refine responses through follow-up clarifying questions. Don’t settle after the first try. Iteratively request clarification.

Good Examples:

  1. Refining a translation by asking follow-up questions for ambiguous terms.
  2. Requesting further details on a topic in a step-by-step manner.
  3. Asking for clarification on answers that seem incorrect or incomplete.

Bad Examples:

  1. Accepting the first translation without questioning ambiguities.
  2. Requesting all details at once, leading to an overwhelming amount of information.
  3. Not seeking clarification on unclear or incorrect answers.
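The key mechanic is carrying the conversation history forward so each follow-up builds on earlier answers. A sketch with a mock model standing in for a real LLM call:

```python
def converse(model, turns: list[str]) -> list[dict]:
    """Run follow-up prompts against a model, carrying the full message
    history so each clarification builds on earlier answers.
    `model` is a placeholder: any callable taking the message list.
    """
    history: list[dict] = []
    for user_msg in turns:
        history.append({"role": "user", "content": user_msg})
        reply = model(history)
        history.append({"role": "assistant", "content": reply})
    return history

# Mock model that echoes the last message, just to show the shape of the loop:
echo = lambda msgs: f"(reply to: {msgs[-1]['content']})"
log = converse(echo, ["Translate 'bank' to French.", "I meant the river bank."])
```

The alternating user/assistant message list mirrors the format most chat APIs expect.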

9. Benchmarking and A/B Testing

What’s This? Test different strategies to find what works best — it’s like having a fitting room for your prompts! Compare approaches and fine-tune parameters to determine the optimal prompting strategy.

Good Examples:

  1. Finding the Best Translation Strategy

A: “Translate this English text to Spanish.”

B: “How would you say this text in Spanish?”

Run both and compare which one provides more accurate or context-appropriate translations.

  2. A/B testing various Creative Writing Styles:

A: Use a high temperature (e.g., 0.8) for a more creative, free-flowing story.

B: Use a low temperature (e.g., 0.2) for a focused, plot-driven narrative.

Compare the results to decide which temperature setting best suits your creative needs.

  3. Querying Financial Data

A: “Provide me the stock trends for the last month.”

B: “Analyze the stock trends for the past 30 days.”

Run both and assess which phrasing yields more comprehensive and understandable financial data.

Bad Examples:

  1. Using the same prompt strategy without comparison.
  2. Sticking to default temperature settings without testing.
  3. Not analyzing the impact of domain priming on the quality of responses.
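A minimal A/B harness needs three pieces: the prompt variants, a way to run them, and a scoring metric. A sketch with mocked outputs and a deliberately crude length-based score (use a real quality metric in practice):

```python
def ab_test(prompts: dict, run, score) -> str:
    """Run each prompt variant, score its output, return the winning name.
    `run` and `score` are placeholders for your model call and your metric.
    """
    results = {name: score(run(p)) for name, p in prompts.items()}
    return max(results, key=results.get)

# Mocked model outputs keyed by prompt, to show the mechanics:
fake_outputs = {
    "Translate this English text to Spanish.": "texto corto",
    "How would you say this text in Spanish?": "una traduccion mas completa y natural",
}
winner = ab_test(
    {"A": "Translate this English text to Spanish.",
     "B": "How would you say this text in Spanish?"},
    run=fake_outputs.get,
    score=len,  # toy metric: longer output wins; replace with a real judge
)
print(winner)  # → "B"
```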

10. Custom Tokenization

What’s This? Play around with how text is broken down to help your AI understand complex queries better. Strategic spacing and punctuation guide the AI’s parsing and comprehension. Mind your spaces, commas, and periods. Tokenization matters.

Good Examples:

  1. Adding whitespace around key terms to ensure correct tokenization. Scientific Query:

Good: “Tell me about E=mc^2.”

Better: “Tell me about E = mc ^ 2.”

The latter ensures the equation is parsed correctly, avoiding misinterpretation.

2. Using special characters to separate or emphasize elements within the prompt. Programming Question:

Good: “How to write a for loop in Python?”

Better: “How to write a `for` loop in Python?”

Using backticks around ‘for’ emphasizes that it’s a keyword, guiding the AI to provide a more programming-specific answer.

3. Poetry Analysis:

Good: “Analyze the line ‘To be or not to be.’”

Better: “Analyze the line: ‘To be, or not to be.’”

The latter, with correct punctuation, ensures the AI understands that it’s a specific line from a play, likely prompting a more nuanced analysis.
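Real LLM tokenizers are learned subword tokenizers (e.g., byte-pair encoding), so they behave more subtly than this, but a naive whitespace split is enough to illustrate why spacing changes the token sequence the model receives:

```python
def naive_tokens(text: str) -> list[str]:
    """Whitespace split: a crude stand-in for a real subword tokenizer."""
    return text.split()

print(naive_tokens("Tell me about E=mc^2."))
# ['Tell', 'me', 'about', 'E=mc^2.']
print(naive_tokens("Tell me about E = mc ^ 2."))
# ['Tell', 'me', 'about', 'E', '=', 'mc', '^', '2.']
```

The spaced version exposes each symbol of the equation as its own token instead of one opaque blob.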

Bad Examples:

  1. Ignoring tokenization issues, leading to misinterpretation.
  2. Not utilizing special characters to guide tokenization.
  3. Overlooking the impact of tokenization on the model’s understanding and output.

Bottom line

Prompt engineering is part art, part science. Creatively apply these advanced techniques to unlock your AI’s full potential!
