Brain-Score is a collection of benchmarks: combinations of data and metrics that score any model on how brain-like it is.
The primary method this library provides is the score_model function.
- brainscore.score_model(model_identifier, benchmark_identifier, model)
Score a given model on a given benchmark. The model needs to implement the
BrainModel interface so that the benchmark can interact with it. The benchmark is looked up from the
benchmark_pool and evaluates the model on how brain-like it is under that benchmark’s experimental paradigm, primate measurements, comparison metric, and ceiling. This results in a quantitative
Score ranging from 0 (least brain-like) to 1 (most brain-like under this benchmark).
The results of this method are cached by default (keyed on the identifiers): calling it twice with the same identifiers will invoke the underlying evaluation only once.
Parameters:
- model_identifier – a unique identifier for this model
- benchmark_identifier – the identifier of the benchmark to test the model against
- model – the model implementation following the BrainModel interface

Returns:
a Score of how brain-like the candidate model is under this benchmark. The score is normalized by this benchmark’s ceiling such that 1 means the model matches the data to ceiling level.
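The caching and normalization contract described above can be sketched with a small stand-in. This is not the Brain-Score implementation: the function body, the raw-score and ceiling numbers, and the identifiers are all placeholders, and the model argument is omitted since caching keys on the identifiers. In real use you would call brainscore.score_model with a model implementing the BrainModel interface.

```python
from functools import lru_cache

# Count how often the (expensive) evaluation actually runs, to show that
# repeated calls with the same identifiers are served from the cache.
invocations = {"count": 0}

@lru_cache(maxsize=None)  # cache keyed on the identifier arguments
def score_model(model_identifier: str, benchmark_identifier: str) -> float:
    """Hypothetical stand-in for brainscore.score_model.

    The real function looks the benchmark up from benchmark_pool and runs it
    against a BrainModel; here we just return a placeholder raw score
    normalized by a placeholder ceiling, so 1.0 means the model matches the
    data at ceiling level.
    """
    invocations["count"] += 1
    raw_score, ceiling = 0.42, 0.84  # made-up numbers for illustration
    return raw_score / ceiling

first = score_model("my-model", "some-benchmark")
second = score_model("my-model", "some-benchmark")  # same identifiers: cached
```

After both calls, the evaluation has run only once and the returned score lies in the normalized 0–1 range.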
- Model Tutorial
- Install Brain-Score Repos and Dependencies
- Submitting a Model to Brain-Score.org Part 1: Preparing the Model
- Submitting a Model to Brain-Score.org Part 2: Upload
- Submitting a Model to Brain-Score.org Part 3: Custom model (Optional)
- Common Errors: Setup
- Common Errors: Submission
- Frequently Asked Questions
- Benchmark Tutorial
- BrainModel interface
- API Reference