Example: One-shot / few-shot learning
It’s often helpful to use one-shot or few-shot learning when engineering prompts: you provide one or a few concrete examples to show the model what you’d like in its response.
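As a rough sketch, a few-shot prompt can be expressed as a chat-style message list in which each example is a user/assistant pair the model can imitate. The message format below follows the OpenAI chat API; the prompt text itself is an illustrative placeholder, not a prompt from this chapter.

```python
# A few-shot prompt as a chat-style message list. Each example is a
# user/assistant pair demonstrating the desired input/output shape.
# The content strings are illustrative, not the book's actual prompts.
few_shot_messages = [
    {"role": "system", "content": "You write practice problems for students."},
    # Example pair: shows the model the kind of answer we want.
    {"role": "user", "content": "Give me one practice problem on fractions."},
    {"role": "assistant", "content": "Compute 1/2 + 1/3 and simplify your answer."},
    # The real request comes last; the model follows the pattern above.
    {"role": "user", "content": "Give me one practice problem on percentages."},
]
```

The list could then be passed as the `messages` argument to a chat-completion call; the key idea is simply that the worked example precedes the real request.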
First, we’ll provide a naïve prompt, show where it goes astray, and then see how we can improve our results by providing a single example.
Say you’re teaching a first course in linear algebra, and you’d like some problems either for classroom demonstration or as an assignment.
The naïve prompt
ChatGPT’s response to the naïve prompt
Commentary on ChatGPT’s response to the naïve prompt
This is interesting. It’s not incorrect, and for some, this might be exactly the answer they were looking for: what kinds of problems can I solve using Gauss-Jordan elimination?
However, if the instructor is looking for a few elementary examples of solving systems of linear equations, some prompt engineering is in order.
Claude’s response to the naïve prompt
Commentary on Claude’s response to the naïve prompt
Claude’s response is remarkably similar to that of ChatGPT.
A better prompt
Adding a single example is sufficient to get a much more focused response.
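A one-shot prompt of this kind might be assembled as follows. This is a hypothetical sketch: the worked example system of equations here is ours for illustration, not necessarily the one used in the actual prompt.

```python
# One-shot prompt: a single worked example pins down the kind of
# problem we want. The example system below is illustrative only.
example = (
    "Example problem:\n"
    "Solve the following system using Gauss-Jordan elimination:\n"
    "  x + 2y = 5\n"
    "  3x - y = 1\n"
)
request = (
    "Write three more problems of the same kind, "
    "suitable for a first course in linear algebra."
)
prompt = example + "\n" + request
```

Without the example, the model must guess what “problems” means; with it, the desired difficulty and format are unambiguous.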
ChatGPT’s response
Claude’s response
Commentary on responses by ChatGPT and Claude
By providing a single example in the prompt, we get more or less exactly what we were after.
Claude’s response includes augmented matrices, which is a nice touch but not essential.