# Zero-Shot Prompting
## Overview
Modern LLMs, trained on large amounts of data and tuned to follow instructions, can perform many tasks zero-shot, that is, without any examples provided in the prompt.
## Example
Prompt:

```
Classify the text into neutral, negative, or positive.

Text: I think the vacation is okay.
Sentiment:
```

Output:

```
neutral
```
The model correctly classifies sentiment without any prior examples.
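The sentiment example above can be sketched as a small prompt template. This is a minimal illustration of what a zero-shot prompt contains (instruction plus input, no worked examples); `build_prompt` is a hypothetical helper, not part of any library.

```python
# Hypothetical helper for illustration: a zero-shot prompt is just an
# instruction followed by the input, with no labeled examples.
def build_prompt(text: str) -> str:
    """Return a zero-shot sentiment-classification prompt."""
    return (
        "Classify the text into neutral, negative, or positive.\n"
        f"Text: {text}\n"
        "Sentiment:"
    )

prompt = build_prompt("I think the vacation is okay.")
print(prompt)
```

The resulting string would be sent to the model as-is; the model's completion after `Sentiment:` is the predicted label.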
## When Zero-Shot Works
| ✅ Works Well | ❌ May Struggle |
|---|---|
| Common tasks (summarization, classification) | Domain-specific terminology |
| Well-defined output formats | Complex multi-step reasoning |
| Tasks similar to training data | Novel or unusual formats |
| Simple instructions | Ambiguous requirements |
## When to Upgrade
If zero-shot prompting doesn't produce reliable results, try:
- Few-Shot Prompting - Provide 1-5 examples
- Chain-of-Thought Prompting - Ask for step-by-step reasoning
- Prompt refinement - More specific instructions
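As a sketch of the first upgrade path, a few-shot prompt can be built from the same zero-shot template by prepending labeled examples before the query. The function name and the example pairs below are illustrative, not from any particular library.

```python
INSTRUCTION = "Classify the text into neutral, negative, or positive."

def few_shot_prompt(examples: list[tuple[str, str]], text: str) -> str:
    """Build a few-shot prompt: 1-5 labeled examples, then the query.

    With an empty `examples` list this degenerates to the zero-shot case.
    """
    shots = "\n".join(
        f"Text: {t}\nSentiment: {label}" for t, label in examples
    )
    parts = [INSTRUCTION]
    if shots:
        parts.append(shots)
    parts.append(f"Text: {text}\nSentiment:")
    return "\n".join(parts)

# Illustrative labeled examples (hypothetical data).
examples = [
    ("The food was terrible.", "negative"),
    ("I loved the show!", "positive"),
]
print(few_shot_prompt(examples, "I think the vacation is okay."))
```

The examples give the model a concrete pattern to imitate, which often helps with domain-specific terminology or unusual output formats where zero-shot struggles.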
(c) No Clocks, LLC | 2024