Lecture Notes - Deeplearning Ai ChatGPT Prompt Engineering Course

Effective prompt engineering has emerged as a crucial skill for every profession that deals with text. The ChatGPT Prompt Engineering for Developers course provides an exploration of the art of crafting prompts that elicit accurate and relevant responses from ChatGPT.

The course is free and relatively short, with a duration of approximately 1 hour and 30 minutes. It caters to a wide audience, teaching anyone how to write effective prompts. While most of the course content is accessible to all, a small section specifically targets software developers, providing insights on integrating GPT into their own software.

The instructors for this course are Andrew Ng, a renowned Stanford professor and a leading figure at Deeplearning.ai, and Isa Fulford, a member of the technical staff at OpenAI. Isa has made valuable contributions to the OpenAI Cookbook, a resource that teaches the art of prompting.

You can attend the course here: https://learn.deeplearning.ai/chatgpt-prompt-eng

Here are my lecture notes from the course:

Table of Contents

Guidelines to Prompting

Principles of Prompting

    1. Write clear and specific instructions (clear !== short)
    2. Give the model time to think

Tactics

Tactic 1: Use delimiters to clearly indicate distinct parts of the input

  • Delimiters can be anything like: ```, """, < >, <tag> </tag>

This helps the model distinguish what is an instruction and what is your input.

Example Prompt:

rewrite the following text in pirate english
the text is delimited by ---

---
Hi, my name is bob!
---

Output

Avast, me hearties! Me name be Bob, arr!
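In application code, the delimiter wrapping above can be sketched as a small helper (the function name and layout here are illustrative, not from the course):

```python
def build_delimited_prompt(instruction: str, text: str, delimiter: str = "---") -> str:
    # Keep the instruction and the (possibly untrusted) user text clearly separated.
    return (
        f"{instruction}\n"
        f"the text is delimited by {delimiter}\n\n"
        f"{delimiter}\n{text}\n{delimiter}"
    )

prompt = build_delimited_prompt(
    "rewrite the following text in pirate english",
    "Hi, my name is bob!",
)
```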

Tactic 2: Ask for a structured output

  • Ask for output in a structured format such as JSON or HTML

Example Prompt:

rewrite the following text in pirate english
provide three variants in formal, casual and slang tone
the text is delimited by ---
output it in the following JSON format:
{
 formal: string,
 casual: string,
 slang: string
}

---
Hi, my name is bob!
---

Output

{
	"formal": "Greetings, good sir! My appellation be Bob.",
	"casual": "Hey there, matey! The name's Bob.",
	"slang": "Yo, me hearties! Call me Bob, savvy?"
}
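The advantage of a structured reply is that it is trivial to consume programmatically. A minimal sketch, where the `reply` string stands in for the model's output above:

```python
import json

# Parse the model's JSON reply into a regular Python dictionary.
reply = """{
  "formal": "Greetings, good sir! My appellation be Bob.",
  "casual": "Hey there, matey! The name's Bob.",
  "slang": "Yo, me hearties! Call me Bob, savvy?"
}"""
variants = json.loads(reply)
```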

Tactic 3: Ask the model to check whether conditions are satisfied

  • At its core this is an if-else statement that prevents nonsensical outputs

Example:

the text is delimited by ---

if bob is speaking: translate the text to pirate english
else output 'bob is not here'

---
hi, my name is alice!
---

Output:

Bob is not here.
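A fixed fallback string like this also lets your code detect when the condition was not met. A sketch with illustrative names (the normalisation is an assumption, since the model may vary casing and punctuation):

```python
FALLBACK = "bob is not here"

def translation_happened(reply: str) -> bool:
    # Normalise casing and trailing punctuation before comparing to the sentinel.
    return reply.strip().rstrip(".!").lower() != FALLBACK

print(translation_happened("Bob is not here."))      # False: condition not met
print(translation_happened("Ahoy, me name be Bob!")) # True: translation happened
```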

Tactic 4: “Few-shot” prompting

  • Provide the LLM with examples of the task done successfully

Example Prompt:

Your task is to answer in a consistent style.

<child>: Teach me about patience.

<grandparent>: The river that carves the deepest \
valley flows from a modest spring; the \
grandest symphony originates from a single note; \
the most intricate tapestry begins with a solitary thread.

<child>: Teach me about resilience.

Output:

<grandparent>: Resilience is the unwavering strength that resides within us, allowing us to bounce back from adversity and face life's challenges with determination. (...)
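When calling the API, few-shot examples map naturally onto the chat-message format (roles `system`, `user`, and `assistant`), with the example answer supplied as an assistant turn. A sketch:

```python
# Few-shot prompting in the chat-message format used by the OpenAI chat API:
# the worked example is given as a prior assistant reply.
messages = [
    {"role": "system", "content": "Your task is to answer in a consistent style."},
    {"role": "user", "content": "<child>: Teach me about patience."},
    {"role": "assistant", "content": "<grandparent>: The river that carves the deepest valley flows from a modest spring; ..."},
    {"role": "user", "content": "<child>: Teach me about resilience."},
]
```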

2. Give the model time to think

Tactic 1: Specify the steps required to complete a task

In essence, write out the specific steps needed to complete the task.

Example Prompt:

Here are your instructions
1. identify the speakers of the text
2. Translate the speaker bob to pirate english
3. Translate the speaker alice to formal british english
4. Do not translate other speakers
5. Output as a single text

---
Hi, I am bob!
Hi, I am alice!
Hi, I am tom!
---

Output:

Arr, I be Bob!
Good day, I am Alice!
Hi, I am Tom!
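If the steps live in your code, you can assemble the numbered prompt mechanically rather than hand-writing it. A sketch:

```python
# Build the numbered-instructions prompt from an explicit list of steps.
steps = [
    "identify the speakers of the text",
    "Translate the speaker bob to pirate english",
    "Translate the speaker alice to formal british english",
    "Do not translate other speakers",
    "Output as a single text",
]
prompt = "Here are your instructions\n" + "\n".join(
    f"{i}. {step}" for i, step in enumerate(steps, start=1)
)
```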

Tactic 2: Instruct the model to work out its own solution before rushing to a conclusion

Question:
Alice has one apple, Bob has two apples - how many apples are there?

Answers:
* Student 1: 2 + 2 = 4

Task:
First calculate your own answer.
Is the student's answer correct?
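The same check the prompt asks the model to perform can be done independently in code, which is a useful cross-check on the model's grading:

```python
# Recompute the answer ourselves instead of trusting the student's working.
alice_apples, bob_apples = 1, 2
correct_total = alice_apples + bob_apples   # 1 + 2 = 3
student_answer = 4                          # the student wrote "2 + 2 = 4"
print(student_answer == correct_total)      # False: the student is wrong
```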

Hallucinations

Even when given a nonsensical prompt, the model will still produce a confident answer.

Example:

Create the product specifications of the edible smartphone created by Bob

Output:

(...) The EdiFone will have a sleek, slim, and lightweight design, resembling a regular smartphone. (...)

Unfortunately, you do not always know beforehand what the model knows and understands, so you will have to verify its outputs. By being very specific you can get more predictable results. However, at present it is not possible to completely eliminate hallucinations.

Iterative Process

Your first prompt most likely will not deliver the expected results; you need to refine and rework it. To arrive at a final prompt for an application, the following loop is encouraged:

Idea > Implementation > Experimental Result > Error Analysis

Prompt Guidelines:

  • Be clear and specific
  • Analyse why results do not give desired output
  • Refine the idea and the prompt
  • Repeat

Summarizing Text

You can tell the LLM to summarize text

  • You can limit the length of the summary
  • You can tell that the summary should focus on a specific aspect of the text
  • You can tell the LLM to ‘extract’ specific information instead of ‘summarizing’ the text

Example Prompt:

Summarize the text, focus on booty
The summary should be around 10 words
Extract the exact amount of booty that was looted
Verify the calculation of the amount

---
Avast, ye landlubbers! Gather 'round and hear me tale o' plunder on the vast ocean! We set sail upon the treacherous seas, raiding not one, but multiple ships. Each vessel yielded a different bounty of gold doubloons. One ship bestowed upon us 10 gleaming doubloons, another 15, and a third bestowed a whopping 25! Our pirate hearts rejoiced as we counted our spoils—30 doubloons in total. The ocean be our playground, and the treasures be ours for the takin'!
---

Output:

description: Pirates looted multiple ships, collecting a total of 30 doubloons.

Exact amount of booty looted: 30 doubloons.

Calculation verification: The text explicitly states that each ship yielded a different bounty of doubloons: 10, 15, and 25. Adding these amounts together results in a total of 50 doubloons, not 30 as mentioned in the text. Therefore, the calculation of the amount appears to be incorrect.

Inferring

  • You can figure out what the sentiment of a text is.
  • Identify different types of emotions
  • Infer the various topics of the text

You can also tell the LLM to output the result as JSON with the fields you specify.

Example Prompt:

Given the following text
Identify the following:
- what is the sentiment of the text, output it as a single word 'positive' or 'negative'
- what topics does the text cover, identify a maximum of 10 topics, as an array
- what emotions did the author express, output them as an array

Output it as following JSON
{
	sentiment: 'negative' | 'positive',
	topics: string[],
	emotions: string[]
}
---

---
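Because the prompt fixes the schema, the reply can be parsed and validated in code. A sketch, with `reply` standing in for a model response:

```python
import json

# Validate the inferred-attributes JSON against the schema the prompt demands.
reply = '{"sentiment": "positive", "topics": ["pirates", "treasure"], "emotions": ["joy"]}'
result = json.loads(reply)
assert result["sentiment"] in ("positive", "negative")  # single-word sentiment
assert len(result["topics"]) <= 10                      # at most 10 topics
```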

Transforming

  • You can translate text into a different language, including ‘silly languages’ like ‘pirate english’
  • You can tell the LLM to transform your text into a different tone, or multiple tones like ‘formal’, ‘casual’, ‘slang’
  • It allows you to transform data structures, e.g. a Python dictionary to JSON
  • You can use the LLM to proofread and correct your text

Example Prompt:

Proofread and correct the following text and rewrite the corrected version.
Ensure that the tone is clear and easy to understand.

Provide two different suggestions.
---

---
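For the dictionary-to-JSON transformation mentioned above, Python's own serializer gives you a ground truth to check the model's output against:

```python
import json

# Convert a Python dictionary to JSON directly; a round-trip confirms fidelity.
record = {"name": "Bob", "role": "Pirate"}
as_json = json.dumps(record, indent=2)
```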

Expanding

  • You can take a short text and expand it into a longer one
  • You can take an input and tell the LLM to create a personalized response email

Prompt:

write a text in pirate english, around 50 words, about 30 doublones and 20 jewels that were looted

Output:

Arr, ye scurvy dogs! Listen to me tale o' glory on the high seas. We sailed 'cross stormy waters and raided a wealthy galleon. With cutlasses and cannons, we fought fierce battles. In the end, we plundered 30 shiny doublones and 20 precious jewels. Aye, the treasure be ours now, to spend like true pirates of legend!

Note: you can change the temperature setting to get more varied responses; with temperature 0 you will always get the most likely completion.
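A sketch of the request parameters involved (the parameter names follow the OpenAI chat completions API; the model name and defaults here are assumptions):

```python
def build_request(prompt: str, temperature: float = 0.0) -> dict:
    # Assemble the keyword arguments for a chat-completion call.
    return {
        "model": "gpt-3.5-turbo",
        "messages": [{"role": "user", "content": prompt}],
        "temperature": temperature,  # 0 = most likely completion; higher = more varied
    }

request = build_request("write a text in pirate english", temperature=0.7)
```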

Chatbot

When working with the API, you can set a system message that establishes the base behaviour of the LLM: you can define the format and the helpfulness of the bot. This message is not shown as part of the conversation; it only provides context.
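In the chat-message format, the system message is simply the first entry of the message list, sent with every request but never displayed to the user. A sketch:

```python
# The system message sets the bot's base behaviour; user turns follow it.
chat = [
    {"role": "system", "content": "You are a helpful assistant that answers in pirate english."},
    {"role": "user", "content": "Hi, my name is bob!"},
]
```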

Conclusion

In conclusion, I highly recommend taking the “ChatGPT Prompt Engineering for Developers” course to enhance your understanding and proficiency in prompt engineering. This course provides a valuable opportunity to learn and experiment at your own pace. By acquiring the skills taught in the course, you will gain the ability to craft prompts that elicit accurate and relevant responses from ChatGPT.