Brain-Score is a collection of benchmarks and models. The benchmarks consist of data and metrics that score any model on how brain-like it is. By following a unified BrainModel interface, all models can be treated as experimental subjects and tested on all benchmarks.

brainscore_vision.score(model_identifier: str, benchmark_identifier: str, conda_active: bool = False) → Score

Score the model referenced by the model_identifier on the benchmark referenced by the benchmark_identifier. The model needs to implement the BrainModel interface so that the benchmark can interact with it. The benchmark is looked up from the benchmark registry and evaluates the model (looked up from the model registry) on how brain-like it is under that benchmark’s experimental paradigm, primate measurements, comparison metric, and ceiling. This results in a quantitative Score ranging from 0 (least brain-like) to 1 (most brain-like under this benchmark).
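The ceiling normalization that produces this 0-to-1 range can be sketched as follows; the numbers are hypothetical and do not come from any real benchmark:

```python
# Hypothetical numbers for illustration only.
raw_score = 0.42  # unnormalized similarity between model predictions and primate data
ceiling = 0.60    # estimated noise ceiling: the best score any model could reach on this data

# A score of 1.0 after normalization means the model matches the data at ceiling level.
normalized_score = raw_score / ceiling
print(round(normalized_score, 2))  # 0.7
```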

Parameters:

  • model_identifier – the identifier for the model

  • benchmark_identifier – the identifier for the benchmark to test the model against


Returns:

  a Score of how brain-like the candidate model is under this benchmark. The score is normalized by this benchmark’s ceiling such that 1 means the model matches the data to ceiling level.
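A minimal usage sketch is shown below. The model and benchmark identifiers are examples and may not match the registries available in your installation; the import guard keeps the sketch runnable even when brainscore_vision is not installed:

```python
# Hedged sketch: the identifiers below are illustrative and assume those entries
# exist in your local model and benchmark registries.
result = None
try:
    from brainscore_vision import score

    # score() looks up the model and benchmark by identifier and returns a Score.
    result = score(model_identifier="alexnet",
                   benchmark_identifier="MajajHong2015public.IT-pls")
    print(float(result))  # ceiling-normalized score
except ImportError:
    print("brainscore_vision is not installed")
```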