Brain-Score
Brain-Score is a collection of benchmarks: combinations of data and metrics that score any model on how brain-like it is.
Data is organized in BrainIO, metrics and benchmarks are implemented in this repository, and standard models are implemented in candidate-models.
The primary method this library provides is the score_model function.
- brainscore.score_model(model_identifier, benchmark_identifier, model)
  Score a given model on a given benchmark. The model needs to implement the BrainModel interface so that the benchmark can interact with it. The benchmark will be looked up from the benchmark_pool and evaluates the model on how brain-like it is under that benchmark's experimental paradigm, primate measurements, comparison metric, and ceiling. This results in a quantitative Score ranging from 0 (least brain-like) to 1 (most brain-like under this benchmark). The results of this method are cached by default (according to the identifiers), so calling it twice with the same identifiers runs the evaluation only once.
  - Parameters
    - model_identifier – a unique identifier for this model
    - model – the model implementation following the BrainModel interface
    - benchmark_identifier – the identifier of the benchmark to test the model against
  - Returns
    a Score of how brain-like the candidate model is under this benchmark. The score is normalized by this benchmark's ceiling such that 1 means the model matches the data to ceiling level.
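A minimal usage sketch follows. The model class `MyModel` and the identifier `'my-model-v1'` are hypothetical placeholders (any class implementing the BrainModel interface works in their place), and the benchmark identifier shown is one example of a public benchmark identifier; substitute your own values.

```python
import brainscore

# Hypothetical model class from your own code; it must implement the
# BrainModel interface (see the BrainModel interface section below).
from mymodels import MyModel  # assumption: your own package/module

model = MyModel()

# Score the model on a benchmark looked up from the benchmark_pool.
score = brainscore.score_model(
    model_identifier='my-model-v1',  # hypothetical unique identifier
    benchmark_identifier='dicarlo.MajajHong2015public.IT-pls',  # example public benchmark
    model=model,
)
print(score)  # a ceiling-normalized Score between 0 and 1
```

Because results are cached by the identifiers, re-running the same call returns the stored Score instead of re-evaluating the model.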
Contents:
- Examples
- Model Tutorial
  - About
  - Quickstart
  - Overview
  - Install Brain-Score Repos and Dependencies
  - Submitting a Model to Brain-Score.org Part 1: Preparing the Model
  - Submitting a Model to Brain-Score.org Part 2: Upload
  - Submitting a Model to Brain-Score.org Part 3: Custom Model (Optional)
  - Common Errors: Setup
  - Common Errors: Submission
  - Frequently Asked Questions
- Benchmark Tutorial
- BrainModel interface
- Benchmarks
- Metrics
- Submission
- Utils
- API Reference