Chain-of-Thought Prompting

A prompting technique that asks the model to show reasoning steps before the final answer, improving performance on arithmetic, logic, and commonsense tasks in sufficiently large models.

Chain-of-thought (CoT) prompting instructs language models to work through problems step by step rather than jumping directly to answers. The technique emerged from research showing that explicit reasoning traces improve accuracy on multi-step tasks. Zero-shot CoT uses a simple trigger such as "Let's think step by step," while few-shot CoT provides worked examples that demonstrate the reasoning process. CoT is an emergent ability: it reliably appears only in sufficiently large models, and it can hurt performance on simple tasks that don't require multi-step reasoning.
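The two prompt styles above can be sketched as plain string construction. This is a minimal illustration, not any library's API; the helper names `build_zero_shot_cot` and `build_few_shot_cot` are assumptions made for the example.

```python
# Illustrative sketch of zero-shot vs. few-shot CoT prompt construction.
# These helpers are hypothetical; a real system would send the resulting
# prompt to a language model.

ZERO_SHOT_TRIGGER = "Let's think step by step."

def build_zero_shot_cot(question: str) -> str:
    """Zero-shot CoT: append the trigger phrase after the question."""
    return f"Q: {question}\nA: {ZERO_SHOT_TRIGGER}"

def build_few_shot_cot(examples, question: str) -> str:
    """Few-shot CoT: prepend worked examples whose answers spell out
    the reasoning steps before stating the final answer."""
    blocks = [
        f"Q: {q}\nA: {reasoning} The answer is {answer}."
        for q, reasoning, answer in examples
    ]
    blocks.append(f"Q: {question}\nA:")  # model continues from here
    return "\n\n".join(blocks)

examples = [
    (
        "Roger has 5 tennis balls. He buys 2 cans of 3 balls each. "
        "How many balls does he have now?",
        "Roger started with 5 balls. 2 cans of 3 balls is 6 balls. "
        "5 + 6 = 11.",
        "11",
    ),
]

prompt = build_few_shot_cot(
    examples,
    "A cafeteria had 23 apples. They used 20 and bought 6 more. "
    "How many apples do they have?",
)
print(prompt)
```

The few-shot prompt ends with an open `A:` so the model imitates the demonstrated reasoning pattern before emitting its answer.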

Also known as

CoT, chain of thought, step-by-step reasoning, CoT prompting