Interpretability
The degree to which humans can understand and trace an AI system's reasoning and outputs.
In AI, interpretability is the ability of researchers to evaluate, trace, and build upon a system's reasoning process. It is especially important in scientific contexts, where a prediction or recommendation that cannot be interrogated and understood is effectively useless for building reliable knowledge. Both the Allen Institute and HHMI partnerships emphasize interpretability as a core requirement for AI agents in research.
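As an illustrative sketch only (not drawn from either partnership), the simplest case of a traceable prediction is a linear model, where the output decomposes into per-feature contributions that a researcher can inspect term by term. The data, feature names, and model below are hypothetical.

    import numpy as np

    # Hypothetical data: 200 samples with 3 measurable features.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 3))
    true_w = np.array([2.0, -1.0, 0.5])                 # assumed ground-truth weights
    y = X @ true_w + rng.normal(scale=0.1, size=200)    # noisy observations

    # Fit a linear model by least squares; its weights can be read off directly.
    w, *_ = np.linalg.lstsq(X, y, rcond=None)

    # Trace one prediction: each feature contributes weight * value,
    # so the output can be interrogated rather than taken on faith.
    x_new = np.array([1.2, -0.4, 0.8])
    contributions = w * x_new
    for name, c in zip(["feature_a", "feature_b", "feature_c"], contributions):
        print(f"{name}: {c:+.3f}")
    print(f"prediction: {contributions.sum():+.3f}")

Each printed contribution shows how much a single feature pushed the prediction up or down; more complex models require dedicated interpretability methods to recover this kind of decomposition.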
Also known as
explainability, transparent AI