PEFT
Parameter-Efficient Fine-Tuning, a family of techniques that adapt large models by updating only a small subset of parameters.
PEFT methods include LoRA, adapters, prefix tuning, and other approaches that modify or add a small number of parameters rather than updating the entire model. These techniques dramatically reduce the compute, memory, and storage requirements for model adaptation, enabling organizations to fine-tune foundation models for specific tasks without the infrastructure required for full fine-tuning.
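To make the idea concrete, the following is a minimal sketch of one PEFT method, LoRA, using NumPy. The dimensions, scaling factor, and variable names are illustrative assumptions, not taken from any particular library: a frozen weight matrix W is augmented with a trainable low-rank update B·A, so only a small fraction of parameters is learned.

```python
import numpy as np

# Illustrative sketch of a LoRA-style low-rank update (all names and
# shapes here are assumptions for demonstration, not a library API).
rng = np.random.default_rng(0)
d_out, d_in, r = 16, 32, 4   # layer dims; rank r is much smaller than d_out, d_in

W = rng.standard_normal((d_out, d_in))     # frozen pretrained weight (never updated)
A = rng.standard_normal((r, d_in)) * 0.01  # trainable low-rank factor
B = np.zeros((d_out, r))                   # trainable; zero-init so the update starts at 0
alpha = 8.0                                # scaling hyperparameter

def lora_forward(x):
    # y = W x + (alpha / r) * B (A x); only A and B receive gradients in training.
    return W @ x + (alpha / r) * (B @ (A @ x))

x = rng.standard_normal(d_in)
y = lora_forward(x)
# With B zero-initialized, the adapted layer initially matches the frozen layer.
assert np.allclose(y, W @ x)

full_params = W.size           # parameters touched by full fine-tuning: 16 * 32 = 512
lora_params = A.size + B.size  # trainable LoRA parameters: 4*32 + 16*4 = 192
```

Even in this toy layer the trainable parameter count drops from 512 to 192, and the gap widens rapidly at realistic dimensions, since the low-rank factors grow linearly in the layer width while the full weight grows quadratically.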
Also known as
Parameter-Efficient Fine-Tuning, parameter-efficient methods