Few-Shot Prompting

Overview

While LLMs demonstrate remarkable zero-shot capabilities, they still fall short on more complex tasks. Few-shot prompting enables in-context learning by providing demonstrations in the prompt that steer the model toward better performance.

The demonstrations serve as conditioning: the model uses them as a pattern when generating a response to the final, unanswered example.
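
This conditioning step can be sketched in code. The sketch below simply concatenates demonstration pairs into a single prompt string and leaves the final query unanswered for the model to complete; the function name and structure are illustrative, not part of any particular API.

```python
# Minimal sketch: build a few-shot prompt from (input, output)
# demonstration pairs. The final query is left without an output
# so the model completes it, conditioned on the demonstrations.

def build_few_shot_prompt(demonstrations, query):
    """Concatenate demonstrations, then append the unanswered query."""
    parts = []
    for task_input, task_output in demonstrations:
        parts.append(f"{task_input}\n{task_output}\n")
    parts.append(query)  # no output here: the model fills it in
    return "\n".join(parts)

demos = [
    ('A "whatpu" is a small, furry animal native to Tanzania. '
     "An example of a sentence that uses the word whatpu is:",
     "We were traveling in Africa and we saw these very cute whatpus."),
]
query = ('To do a "farduddle" means to jump up and down really fast. '
         "An example of a sentence that uses the word farduddle is:")

prompt = build_few_shot_prompt(demos, query)
```

The resulting string is exactly the kind of prompt shown in the example below: one completed demonstration followed by an open-ended query.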

Example

From Brown et al. 2020:

Prompt:

A "whatpu" is a small, furry animal native to Tanzania. An example of a sentence that uses the word whatpu is:
We were traveling in Africa and we saw these very cute whatpus.

To do a "farduddle" means to jump up and down really fast. An example of a sentence that uses the word farduddle is:

Output:

When we won the game, we all started to farduddle in celebration.

When to Use Few-Shot

| Scenario | Approach |
| --- | --- |
| Simple, common tasks | Zero-shot first |
| Model struggles with zero-shot | Add 1-3 examples |
| Complex reasoning or formatting | 3-5+ examples |
| Domain-specific knowledge | Include domain examples |

Best Practices

  1. Diverse examples: Cover different cases/edge scenarios
  2. Consistent format: Use identical structure across examples
  3. Quality over quantity: Well-crafted examples beat many poor ones
  4. Order matters: Place similar examples near the query
  5. Label balance: Include examples of different output types
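
The practices above can be sketched together in a small example. The sentiment-classification task, the `Text:`/`Sentiment:` template, and the example texts are all illustrative assumptions; the point is the identical structure across examples and the balanced labels.

```python
# Illustrative sketch of the best practices: every demonstration uses
# the same "Text: ... / Sentiment: ..." format (consistent format),
# and the labels cover positive, negative, and neutral (label balance).

EXAMPLES = [
    ("This movie was fantastic!", "positive"),
    ("The food was cold and bland.", "negative"),
    ("The package arrived on time.", "neutral"),
]

def make_prompt(text):
    lines = []
    for example_text, label in EXAMPLES:  # identical structure per example
        lines.append(f"Text: {example_text}")
        lines.append(f"Sentiment: {label}")
        lines.append("")
    lines.append(f"Text: {text}")
    lines.append("Sentiment:")  # the model completes the label
    return "\n".join(lines)

prompt = make_prompt("I can't wait to watch it again!")
```

Keeping the template byte-for-byte identical across demonstrations makes it easier for the model to lock onto the expected output format.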

Limitations

  • Token limit constrains number of examples
  • Examples can introduce bias if not diverse
  • May not generalize to very different inputs
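
The token-limit constraint can be handled with a simple budget check, sketched below. The roughly-4-characters-per-token figure is a crude heuristic for English text, not an exact tokenizer, and the helper names are illustrative.

```python
# Sketch of working within the context-window limit: keep only as many
# demonstrations as fit a rough token budget. estimate_tokens uses a
# crude ~4 characters/token heuristic; a real tokenizer would be exact.

def estimate_tokens(text):
    return len(text) // 4  # rough approximation, not a real tokenizer

def fit_examples(examples, budget_tokens):
    kept, used = [], 0
    for example in examples:
        cost = estimate_tokens(example)
        if used + cost > budget_tokens:
            break  # stop before exceeding the budget
        kept.append(example)
        used += cost
    return kept

examples = ["Q: 2+2?\nA: 4", "Q: 3+5?\nA: 8", "Q: 10-4?\nA: 6"]
kept = fit_examples(examples, budget_tokens=6)
```

Trimming from the end keeps the earliest examples; in practice you would drop the least informative demonstrations first.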

Related Techniques

| Technique | Description |
| --- | --- |
| Zero-Shot Prompting | No examples provided |
| Chain-of-Thought Prompting | Include reasoning steps |
| Self-Consistency | Sample multiple reasoning paths |

(c) No Clocks, LLC | 2024