The Power of Prompt Engineering: Building Your Own Personal Assistant

CheeKean
Artificial Intelligence in Plain English
8 min read · May 12, 2023


Credit: Forbes.com

Introduction

Have you ever wished to turn a language model into a personal assistant that could communicate like a lawyer or a doctor? Or have you ever wanted to create an AI that can write like Shakespeare or compose music like Mozart? With prompt engineering, the possibilities are endless!

Prompt engineering is revolutionizing the way we interact with AI. Forbes has dubbed it “The Hot New Job That Pays Six Figures,” a sign of the vast potential and lucrative rewards awaiting those who pursue this field. The growing demand for sophisticated conversational AI across industries underscores the critical role of prompt engineering: it enables machines to interact with humans more naturally and seamlessly, enhancing user experience and driving business growth.

What is Prompt Engineering?

Language models such as GPT-3 are trained on vast amounts of text data, making them extremely powerful. However, these models can’t always produce the desired output for a given prompt. Prompt engineering is the practice of shaping a model’s input so that it produces a specific output, for example responding in a particular tone or context.

At its core, prompt engineering is about understanding the language and context of a specific problem and crafting prompts that will elicit the most useful responses from an AI model.

One such fascinating innovation is the use of prompt engineering to create chatbots that can mimic real-life personas. A great example of this is the AhbengGPT chatbot, which was created to sell iPhones in Malaysia.

Screenshot of the ahbeng_gpt chatbot in action on Instagram

This chatbot uses the language and tone of a stereotypical “Ahbeng,” a colloquial term in Malaysia for a young man who is brash and speaks in a particular slang. The chatbot’s creator used prompt engineering to program the chatbot to respond to potential customers in this specific tone and language, making it a more relatable and approachable option for customers. Feel free to visit AhbengGPT’s Instagram page and have a chat with the chatbot. You can even try to rob him and see how interesting his replies can be.

Snapshot of Replies from ahbeng_gpt (credit: weirdkaya.com)

This is just one example of the many exciting possibilities that prompt engineering can offer, from creating virtual assistants that talk like lawyers to chatbots that can sell products in a humorous and engaging way.

Responsible AI: Make AI safer and stronger

It’s imperative that we not only strive to improve the accuracy and effectiveness of AI systems but also ensure that any AI-generated content is suitable and safe for usage. Role-based prompt engineering is an effective way to achieve this. By defining clear boundaries and constraints on the types of questions and responses the model is expected to handle, we can ensure that the AI-generated content is relevant and accurate while minimizing the potential for inappropriate or harmful responses. For example, imagine we prompt the model with

You are a doctor and great at explaining medical stuff. You only answer questions about medical-related topics. If someone asks a question that is not about medicine, your response should let them know that you are unable to answer based on the type of AI you are.

With these clear boundaries, the model is less likely to generate irrelevant or potentially harmful responses to users’ queries, leading to a more responsible and trustworthy AI system.
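The doctor role prompt above can be packaged as a system message that is prepended to every user query. The sketch below is a minimal, hypothetical helper (`build_doctor_messages` is not from the article); the list it returns is the `messages` payload you would pass to the Chat Completions API used later in this post.

```python
# A minimal sketch of packaging the doctor role prompt as a system message.
# `build_doctor_messages` is a hypothetical helper name; the payload it
# returns is what you would pass to openai.ChatCompletion.create.

DOCTOR_ROLE = (
    "You are a doctor and great at explaining medical stuff. "
    "You only answer questions about medical-related topics. "
    "If someone asks a question that is not about medicine, "
    "your response should let them know that you are unable to answer "
    "based on the type of AI you are."
)

def build_doctor_messages(user_question):
    """Prepend the role-defining system message to the user's query."""
    return [
        {"role": "system", "content": DOCTOR_ROLE},
        {"role": "user", "content": user_question},
    ]

messages = build_doctor_messages("What causes high blood pressure?")
```

Passing this list as `messages` to `openai.ChatCompletion.create(model="gpt-3.5-turbo", messages=messages)` would then yield replies constrained to medical topics.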

Implementation

After exploring the concepts of prompt engineering and its power, it’s time to take the next step and dive into implementation. One compelling option is OpenAI’s language models. To gain access to the API, simply visit their website and create an account. New users receive a complimentary amount of free credits, enabling you to start testing your own AI-powered applications right away.

There are various approaches to crafting an effective prompt for a language model to generate a specific and desired output:

  • Zero-shot prompting asks the model to perform a task without any worked examples, and it is the approach most users start with: supply only a task description and the input, and the model generates text for any given context. For example, we can prompt the model with a task like “Rewrite the content in a different format”.
import openai
openai.api_key = "<YOUR_API_KEY>"
model_engine = "text-davinci-002"

original_text = f"""
Making a cup of tea is easy! First, you need to get some \
water boiling. While that's happening, \
grab a cup and put a tea bag in it. Once the water is \
hot enough, just pour it over the tea bag. \
Let it sit for a bit so the tea can steep. After a \
few minutes, take out the tea bag. If you \
like, you can add some sugar or milk to taste. \
And that's it! You've got yourself a delicious \
cup of tea to enjoy.
"""

prompt = f"""
You will be provided with text delimited by triple quotes.
If it contains a sequence of instructions, \
re-write those instructions in the following format:

Step 1 - ...
Step 2 - …

Step N - …

If the text does not contain a sequence of instructions, \
then simply write \"No steps provided.\"

\"\"\"{original_text}\"\"\"
"""

completions = openai.Completion.create(
    engine=model_engine,
    prompt=prompt,
    max_tokens=1024,
    n=1,
    stop=None,
    temperature=0.5,
)

message = completions.choices[0].text.strip()
print(message)
Output from the code above
  • Few-shot prompt engineering guides the model with just a handful of examples in the prompt itself, making its outputs more accurate without any additional training. For example, say we want to predict whether a baby girl will like a certain toy based on its features. We prompt the model with examples of toys she has liked or disliked in the past, then ask it to predict whether she will like a new toy. With just a few examples, the model can make accurate predictions about her preferences.
import openai
openai.api_key = "<YOUR_API_KEY>"

# few-shot examples
examples = [
    ["Will baby El like a soft doll that is red, blue, and does not make any sound? Why?",
     "No, baby El will not like this toy because it does not play music and does not have bright colors."],

    ["Will baby El like a stuffed elephant that is grey, has a pink bow, and plays a soothing melody? Why?",
     "Yes, baby El will like this toy because it has a soft texture, bright colors, and plays music."],
]
combined_ex = ''
for ex in examples:
    combined_ex += f"\n\nQ: {ex[0]}\nA: {ex[1]}"  # ex[1] is the answer, not the question

# final prompt
query = "Will baby El like a unicorn plushie that has a pink ribbon and produces a calming tune? Why?"
prompt = (
    f"{query}"
    "\nRefer to the past examples of baby El's preferences"
    f"\nText: '''{combined_ex}'''"
)

# Define the OpenAI completions parameters
prompt_completion = {
    "prompt": prompt,
    "temperature": 0.7,
    "max_tokens": 60,
    "n": 1,
}

response = openai.Completion.create(engine="text-davinci-003", **prompt_completion)
answer = response.choices[0].text.strip()
print(answer)
  • Role-based prompt engineering is a powerful technique that allows language models to adopt a distinct persona and generate responses that are consistent with the desired personality traits. This method is particularly useful for creating chatbots or virtual assistants with a specific character or tone. In this example, we will be using role-based prompt engineering to develop a chatbot with the persona of Deadpool, who is known for his unconventional and humorous style.
import os
import openai
openai.api_key = "<YOUR_API_KEY>"

def get_completion_from_messages(messages, model="gpt-3.5-turbo", temperature=0):
    response = openai.ChatCompletion.create(
        model=model,
        messages=messages,
        temperature=temperature,  # the degree of randomness of the model's output
    )
    return response.choices[0].message["content"]

messages = [
    {'role': 'system',  # the persona instruction belongs in the system role
     'content': "You are an assistant who talks like Deadpool from Marvel \
and curses at anyone who speaks to you about anything other than buying pizza. \
You get angry easily and start scolding people over tiny stuff. \
You are selling pizzas for $10 or more, \
and you aim to sell at as high a price as possible."
    },

    {'role': 'user', 'content': 'Who are you?'},

    {'role': 'assistant',
     'content': "I'm the assistant, Deadpool. \
And if you don't want to incur my wrath, \
you better buy some delicious pizzas from me right now!"
    },
]
text_input = 'Hi!'

while text_input != 'q':
    messages.append({'role': 'user', 'content': text_input})
    response = get_completion_from_messages(messages, temperature=1)
    messages.append({'role': 'assistant', 'content': response})
    print(response)
    text_input = input("User: ")

After we pass the modified messages as input to the gpt-3.5-turbo model, it is fascinating to observe the resulting output from the language generation process.

The output from the gpt-3.5-turbo model with modified prompt

The efficacy of instruction-following models can vary depending on their training methodology and the data they are trained on, thus certain prompt formats may be more suitable and effective for specific tasks. For optimal performance with OpenAI models, it is advisable to consult their guidelines on prompt engineering, which outline the best practices for creating prompts using the OpenAI API.

Conclusion

In conclusion, prompt engineering is a powerful technique used to customize the behavior of language models to solve specific tasks by providing them with a prompt or a hint to guide their response. However, prompt engineering can be a complex process and requires a good understanding of the underlying architecture and algorithms of the language model.

Additionally, the prompt should be carefully designed to minimize the risk of bias or unintended consequences. This requires an iterative process of trial and error to identify the optimal prompt that produces the desired output. It is also important to continually monitor and evaluate the performance of the language model to ensure that it is still producing the desired results and to adjust the prompt as necessary.
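That trial-and-error loop can be made concrete with a simple automated check. The sketch below is a hypothetical harness, not part of the examples above: it scores candidate prompts by testing whether the reply matches the "Step 1 - ..." format from the earlier zero-shot task, with `generate` standing in for a real API call.

```python
# A hypothetical sketch of the iterative prompt-evaluation loop described
# above: score each candidate prompt by checking the model's reply against
# a simple rule. `generate` is a stand-in for a real model call.
import re

def follows_step_format(reply):
    """Check that the reply is formatted as numbered steps."""
    return bool(re.search(r"Step 1 - ", reply))

candidate_prompts = [
    "Summarize the text.",
    "Re-write the instructions as 'Step 1 - ...', 'Step 2 - ...'.",
]

def evaluate(prompt, generate):
    reply = generate(prompt)  # replace with a real API request
    return follows_step_format(reply)

# A dummy generator to illustrate: only the explicit-format prompt passes.
dummy = lambda p: "Step 1 - boil water" if "Step 1" in p else "Boil water."
scores = {p: evaluate(p, dummy) for p in candidate_prompts}
```

In practice you would run each candidate prompt through the live model several times, track how often the check passes, and keep iterating on the prompt wording until the pass rate is acceptable.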
