Foundation Model
A large AI model trained on broad data at scale that can be adapted to a wide range of downstream tasks, serving as the base for specialized applications.
Foundation models are large-scale AI systems trained on vast, diverse datasets that learn general-purpose representations applicable across many tasks. Rather than being trained for a single purpose, they serve as a starting point that can be fine-tuned or prompted for specific applications such as chatbots, coding assistants, or image generation. The term was coined in 2021 by Stanford researchers, who founded the Center for Research on Foundation Models (CRFM); it encompasses models like GPT-4, Claude, and Gemini.
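The adaptation pattern described above (one shared base reused by many downstream tasks) can be sketched with a toy stand-in. Everything here is illustrative: `ToyBaseModel` and the head functions are hypothetical, and the hand-written surface features merely stand in for the learned representations a real pretrained model would produce.

```python
class ToyBaseModel:
    """Stands in for a large pretrained base model: maps text to a
    general-purpose feature vector that downstream tasks can reuse."""

    def encode(self, text: str) -> list[float]:
        # A real foundation model would return learned embeddings;
        # here, simple surface statistics serve as placeholder "features".
        words = text.lower().split()
        return [
            float(len(words)),                                    # length
            sum(w in {"good", "great", "love"} for w in words),   # positive cues
            sum(w in {"bad", "awful", "hate"} for w in words),    # negative cues
        ]


def sentiment_head(features: list[float]) -> str:
    """Tiny task-specific 'head' adapted on top of the shared base."""
    _, pos, neg = features
    return "positive" if pos >= neg else "negative"


def length_head(features: list[float]) -> str:
    """A second head reusing the SAME base features for a different task."""
    return "short" if features[0] < 5 else "long"


# The base is "trained" once and reused by every downstream task.
base = ToyBaseModel()
feats = base.encode("I love this great product")
print(sentiment_head(feats))  # task 1: sentiment
print(length_head(feats))     # task 2: length classification
```

The point of the sketch is the division of labor: the expensive general-purpose component is built once, while each new application only adds a small task-specific layer on top, which is what makes foundation models an economical base for specialized applications.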
Also known as
base model, pretrained model, foundation AI