Helpers and Utils
The following modules provide generic helper classes.
- class brainscore_vision.metric_helpers.Defaults[source]
- expected_dims = ('presentation', 'neuroid')
- neuroid_coord = 'neuroid_id'
- neuroid_dim = 'neuroid'
- stimulus_coord = 'stimulus_id'
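These defaults describe the assembly layout the metric helpers expect: a two-dimensional assembly over presentation and neuroid, carrying stimulus_id and neuroid_id coordinates. Below is a minimal sketch of a conforming assembly, assuming the brainio.assemblies import path; the values and identifiers are made up.
```python
import numpy as np
from brainio.assemblies import NeuroidAssembly  # assumption: assembly classes come from brainio

# 3 presented stimuli x 4 units, laid out per Defaults.expected_dims
assembly = NeuroidAssembly(
    np.random.rand(3, 4),
    coords={
        'stimulus_id': ('presentation', ['img0', 'img1', 'img2']),  # Defaults.stimulus_coord
        'neuroid_id': ('neuroid', ['n0', 'n1', 'n2', 'n3']),        # Defaults.neuroid_coord on Defaults.neuroid_dim
    },
    dims=['presentation', 'neuroid'])
```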
- class brainscore_vision.benchmark_helpers.PrecomputedFeatures(features: DataAssembly | dict, visual_degrees)[source]
- __init__(features: DataAssembly | dict, visual_degrees)[source]
- Parameters:
features – The precomputed features: either an assembly of features indexable by stimulus_id, or a dictionary mapping from stimulus identifier to feature assemblies.
visual_degrees – The visual degrees to use for the precomputed features. Since the features are precomputed, this should only affect place_on_screen in the benchmark’s __call__ method.
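A minimal sketch of wrapping precomputed activations so they can be scored in place of a live model; the NeuroidAssembly import path, the feature values, and the visual_degrees value are assumptions for illustration.
```python
import numpy as np
from brainio.assemblies import NeuroidAssembly  # assumption: assembly classes come from brainio
from brainscore_vision.benchmark_helpers import PrecomputedFeatures

# hypothetical precomputed activations: 2 stimuli x 10 feature dimensions
features = NeuroidAssembly(
    np.random.rand(2, 10),
    coords={'stimulus_id': ('presentation', ['img0', 'img1']),
            'neuroid_id': ('neuroid', list(range(10)))},
    dims=['presentation', 'neuroid'])

candidate = PrecomputedFeatures(features, visual_degrees=8)
# the wrapper now stands in for a BrainModel, e.g. score = benchmark(candidate)
```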
- property identifier: str
The unique identifier for this model.
- Returns:
e.g. ‘CORnet-S’, or ‘alexnet’
- look_at(stimuli, number_of_trials=1)[source]
Digest a set of stimuli and return the requested outputs. Which outputs to return is determined by the start_task() and start_recording() methods; a usage sketch follows this entry.
- Parameters:
stimuli – A set of stimuli, passed as either a StimulusSet or a list of image file paths
number_of_trials – The number of repeated trials of the stimuli that the model should average over, e.g. 10 or 35. Non-stochastic models can likely ignore this parameter.
- Returns:
task behaviors or recordings as instructed
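A hypothetical usage sketch: the image paths are made up, and the top-level load_model loader and the 'alexnet' identifier are assumptions.
```python
from brainscore_vision import load_model  # assumption: top-level model loader

model = load_model('alexnet')                             # any BrainModel implementation works here
stimuli = ['/path/to/image1.png', '/path/to/image2.png']  # hypothetical paths; a StimulusSet also works
model.start_recording('IT', time_bins=[(70, 170)])        # instruct which outputs look_at should return
recordings = model.look_at(stimuli, number_of_trials=10)  # NeuroidAssembly of IT responses
```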
- start_recording(region, *args, **kwargs)[source]
Instructs the model to begin recording in a specified RecordingTarget and return the specified time_bins. For all following calls of look_at(), the model returns the corresponding recordings. These recordings are a NeuroidAssembly with exactly 3 dimensions (see the sketch after this entry):
presentation: the presented stimuli (cf. the stimuli argument of look_at()). If a StimulusSet was passed, the recordings should contain all of the StimulusSet columns as coordinates on this dimension. The stimulus_id coordinate is required in either case.
neuroid: the recorded neuroids (neurons or mixtures thereof). They should all be part of the specified RecordingTarget. The coordinates of this dimension should again include as much information as is available, at the very least a neuroid_id.
time_bins: the time bins of each recording slice. This dimension should contain at least 2 coordinates: time_bin_start and time_bin_end, where one time_bin is the bin between start and end. For instance, a 70-170ms time_bin would be marked as time_bin_start=70 and time_bin_end=170. If only one time_bin is requested, the model may choose to omit this dimension.
- Parameters:
recording_target – which location to record from
time_bins – which time_bins to record, as a list of integer tuples, e.g. [(50, 100), (100, 150), (150, 200)] or [(70, 170)]
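A hypothetical sketch of the resulting assembly layout, reusing the model and stimuli assumptions from the look_at sketch above.
```python
model.start_recording('IT', time_bins=[(70, 170), (170, 270)])  # RecordingTarget 'IT', two time bins
recordings = model.look_at(stimuli)
print(recordings.dims)                    # presentation, neuroid, and a time-bin dimension
print(recordings['stimulus_id'].values)   # required coordinate on the presentation dimension
print(recordings['neuroid_id'].values)    # required coordinate on the neuroid dimension
```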
- start_task(task, fitting_stimuli=None)[source]
Instructs the model to begin one of the tasks specified in Task. For all following calls of look_at(), the model returns the expected outputs for the specified task (see the sketch after this entry).
- Parameters:
task – The task the model should perform, and thus which outputs it should return
fitting_stimuli – A set of stimuli for the model to learn on, e.g. image-label pairs
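A hypothetical sketch for a behavioral task; the BrainModel import path and the model, fitting_stimuli, and test_stimuli objects are assumptions for illustration.
```python
from brainscore_vision.model_interface import BrainModel  # assumption: module defining the Task enum

model.start_task(BrainModel.Task.probabilities, fitting_stimuli)  # fit a readout on image-label pairs
probabilities = model.look_at(test_stimuli)                       # per-stimulus probabilities over the fitted labels
```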
- brainscore_vision.benchmark_helpers.bound_score(score: Score)[source]
Force the score value into the range [0, 1]: if the score is below 0, set it to 0; if it is above 1, set it to 1.
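A minimal sketch, assuming the Score import path below and that the score is clamped in place (as the "set to" wording suggests).
```python
from brainscore_core.metrics import Score  # assumption: Score is defined in brainscore_core.metrics
from brainscore_vision.benchmark_helpers import bound_score

score = Score(1.2)  # hypothetical out-of-range value, e.g. after ceiling normalization
bound_score(score)  # forces the value into [0, 1]; here it becomes 1
```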