Few-Shot Prompting
Overview
While LLMs demonstrate remarkable zero-shot capabilities, they still fall short on more complex tasks. Few-shot prompting enables in-context learning by including demonstrations in the prompt that steer the model toward better performance.
The demonstrations serve as conditioning: the model sees completed input/output pairs and continues the pattern to generate a response for the final, unanswered input.
Example
From Brown et al. 2020:
Prompt:
A "whatpu" is a small, furry animal native to Tanzania. An example of a sentence that uses the word whatpu is:
We were traveling in Africa and we saw these very cute whatpus.
To do a "farduddle" means to jump up and down really fast. An example of a sentence that uses the word farduddle is:
Output:
When we won the game, we all started to farduddle in celebration.
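The pattern above can be assembled programmatically. A minimal sketch (the `build_few_shot_prompt` helper is illustrative, not from Brown et al.): completed demonstrations are concatenated, and the final input is left for the model to complete.

```python
def build_few_shot_prompt(demonstrations: list[tuple[str, str]], query: str) -> str:
    """Join (input, output) demonstration pairs, then append the unanswered query.

    The model sees the completed examples as conditioning and is expected
    to continue the pattern for the final input.
    """
    blocks = [f"{inp}\n{out}" for inp, out in demonstrations]
    blocks.append(query)  # the query has no output; the model supplies it
    return "\n\n".join(blocks)


demos = [
    ('A "whatpu" is a small, furry animal native to Tanzania. '
     "An example of a sentence that uses the word whatpu is:",
     "We were traveling in Africa and we saw these very cute whatpus."),
]
query = ('To do a "farduddle" means to jump up and down really fast. '
         "An example of a sentence that uses the word farduddle is:")

prompt = build_few_shot_prompt(demos, query)
```

The resulting string is exactly the prompt shown above and can be sent to any completion-style model API.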
When to Use Few-Shot
| Scenario | Approach |
|---|---|
| Simple, common tasks | Zero-shot first |
| Model struggles with zero-shot | Add 1-3 examples |
| Complex reasoning or formatting | 3-5+ examples |
| Domain-specific knowledge | Include domain examples |
Best Practices
- Diverse examples: Cover different cases/edge scenarios
- Consistent format: Use identical structure across examples
- Quality over quantity: Well-crafted examples beat many poor ones
- Order matters: Place similar examples near the query
- Label balance: Include examples of different output types
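Two of these practices, consistent format and label balance, can be made concrete with a small sketch. The sentiment examples below are hypothetical; the point is that every demonstration uses an identical structure and the labels are evenly represented.

```python
# Hypothetical demonstrations: identical "Text:/Sentiment:" structure,
# with an equal number of positive and negative labels.
EXAMPLES = [
    ("This movie was fantastic!", "positive"),
    ("The plot made no sense at all.", "negative"),
    ("I would happily watch it again.", "positive"),
    ("A waste of two hours.", "negative"),
]


def format_example(text: str, label: str = "") -> str:
    # Same template for every example; the query leaves the label blank,
    # so the model's continuation is the predicted label.
    return f"Text: {text}\nSentiment: {label}".rstrip()


def classification_prompt(query: str) -> str:
    blocks = [format_example(text, label) for text, label in EXAMPLES]
    blocks.append(format_example(query))
    return "\n\n".join(blocks)


prompt = classification_prompt("Loved every minute of it.")
```

Because the query block ends at `Sentiment:`, the model's most natural continuation is a label in the same format as the demonstrations.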
Limitations
- Token limit constrains number of examples
- Examples can introduce bias if not diverse
- May not generalize to very different inputs
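The token-limit constraint is usually handled by trimming demonstrations until the prompt fits a budget. A rough sketch, assuming a crude whitespace-based token count (a real tokenizer would be more accurate) and a drop-oldest-first policy:

```python
def fit_to_budget(demonstrations: list[str], query: str, max_tokens: int = 2048) -> str:
    """Drop the earliest demonstrations until the assembled prompt fits.

    Token counting here is a naive whitespace split; swap in a real
    tokenizer for production use.
    """
    def count(text: str) -> int:
        return len(text.split())

    kept = list(demonstrations)
    while kept and count("\n\n".join(kept + [query])) > max_tokens:
        kept.pop(0)  # discard the oldest example first
    return "\n\n".join(kept + [query])


# Ten 3-token demonstrations against a 10-token budget: only the
# most recent examples survive, and the query is always kept.
trimmed = fit_to_budget(["a b c"] * 10, "q", max_tokens=10)
```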
Related Techniques
| Technique | Description |
|---|---|
| Zero-Shot Prompting | No examples provided |
| Chain-of-Thought Prompting | Include reasoning steps |
| Self-Consistency | Sample multiple reasoning paths |
(c) No Clocks, LLC | 2024