Fine-tuning
A training technique that adapts a pre-trained model to specific tasks using smaller, task-specific datasets at dramatically lower cost than training from scratch.
Fine-tuning is the adaptation stage of the foundation model pipeline, in which a pre-trained model's behavior is adjusted for a particular application. Full fine-tuning updates all model parameters, but it demands substantial compute and risks catastrophic forgetting, where the model loses capabilities acquired during pre-training. Parameter-efficient methods such as LoRA (Low-Rank Adaptation) train only a small fraction of the weights, making adaptation accessible to teams without large GPU clusters. As a result, organizations can spend thousands of dollars on fine-tuning rather than hundreds of millions on pre-training.
Also known as
finetuning, model fine-tuning, task adaptation
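The parameter savings of a method like LoRA can be sketched in a few lines. This is an illustrative toy, not any library's implementation: the dimensions, `rank`, and `alpha` values are hypothetical, and the frozen weight matrix stands in for one layer of a pre-trained model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical layer dimensions and LoRA hyperparameters (illustrative only).
d_in, d_out, rank = 1024, 1024, 8
alpha = 16  # scaling factor applied to the low-rank update

W = rng.standard_normal((d_out, d_in))        # frozen pre-trained weights
A = rng.standard_normal((rank, d_in)) * 0.01  # trainable down-projection
B = np.zeros((d_out, rank))                   # trainable up-projection, zero-initialized

def lora_forward(x):
    # Effective weight is W + (alpha / rank) * (B @ A); only A and B are trained,
    # so at initialization (B = 0) the layer behaves exactly like the frozen model.
    return x @ (W + (alpha / rank) * (B @ A)).T

full_params = W.size             # what full fine-tuning would update
lora_params = A.size + B.size    # what LoRA actually trains
print(f"full fine-tuning trains {full_params:,} params")
print(f"LoRA trains {lora_params:,} params ({lora_params / full_params:.2%})")
```

For this layer, LoRA trains 16,384 parameters instead of 1,048,576, under 2% of the full count, which is the source of the cost gap between adaptation and pre-training described above.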