Helpers and Utils

class brainscore_vision.metric_helpers.Defaults[source]
expected_dims = ('presentation', 'neuroid')
neuroid_coord = 'neuroid_id'
neuroid_dim = 'neuroid'
stimulus_coord = 'stimulus_id'
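
For orientation, a minimal sketch of an assembly in this default layout (values are illustrative; assumes the NeuroidAssembly class from brainio):

    import numpy as np
    from brainio.assemblies import NeuroidAssembly

    # 3 presentations x 2 neuroids, using the default dimension and coordinate names
    assembly = NeuroidAssembly(
        np.random.rand(3, 2),
        coords={'stimulus_id': ('presentation', ['im1', 'im2', 'im3']),
                'neuroid_id': ('neuroid', ['n1', 'n2'])},
        dims=['presentation', 'neuroid'])
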
class brainscore_vision.benchmark_helpers.PrecomputedFeatures(features: Union[DataAssembly, dict], visual_degrees)[source]
__init__(features: Union[DataAssembly, dict], visual_degrees)[source]
Parameters:
  • features – The precomputed features: either an assembly of features indexable by stimulus_id, or a dictionary mapping stimulus identifiers to feature assemblies.

  • visual_degrees – The visual degrees to assume for the precomputed features. Since the features are precomputed, this should only affect place_on_screen in the benchmark’s __call__ method.
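
A minimal sketch of wrapping precomputed features, reusing the illustrative assembly from above:

    from brainscore_vision.benchmark_helpers import PrecomputedFeatures

    # `assembly` is indexable by stimulus_id, as required
    model = PrecomputedFeatures(assembly, visual_degrees=8)
    # `model` can now stand in for a BrainModel, e.g. score = benchmark(model)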

look_at(stimuli, number_of_trials=1)[source]

Digest a set of stimuli and return the requested outputs. Which outputs are returned is determined by the preceding start_task() or start_recording() call.

Parameters:
  • stimuli – A set of stimuli, passed as either a StimulusSet or a list of image file paths

  • number_of_trials – The number of repeated trials of the stimuli that the model should average over. E.g. 10 or 35. Non-stochastic models can likely ignore this parameter.

Returns:

task behaviors or recordings as instructed
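
A usage sketch (the image paths are placeholders; what is returned depends on the preceding start_* call):

    # after start_task() or start_recording() has been called (see below)
    outputs = model.look_at(['im1.png', 'im2.png'], number_of_trials=10)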

start_recording(region, *args, **kwargs)[source]

Instructs the model to begin recording in a specified RecordingTarget and to return the specified time_bins. For all following calls of look_at(), the model returns the corresponding recordings. These recordings are a NeuroidAssembly with exactly 3 dimensions:

  • presentation: the presented stimuli (cf. stimuli argument of look_at()). If a StimulusSet was passed, the recordings should contain all of the StimulusSet columns as coordinates on this dimension. The stimulus_id coordinate is required in either case.

  • neuroid: the recorded neuroids (neurons or mixtures thereof). They should all be part of the specified RecordingTarget. The coordinates of this dimension should again include as much information as is available, at the very least a neuroid_id.

  • time_bins: the time bins of each recording slice. This dimension should contain at least 2 coordinates: time_bin_start and time_bin_end, where one time_bin is the bin between start and end. For instance, a 70-170ms time_bin would be marked as time_bin_start=70 and time_bin_end=170. If only one time_bin is requested, the model may choose to omit this dimension.

Parameters:
  • recording_target – which location to record from (bound to the region argument in the signature above)

  • time_bins – which time_bins to record as a list of integer tuples, e.g. [(50, 100), (100, 150), (150, 200)] or [(70, 170)]
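
A sketch of a recording run, assuming the BrainModel interface from brainscore_vision.model_interface and stimuli loaded elsewhere:

    from brainscore_vision.model_interface import BrainModel

    # record IT responses in a single 70-170 ms window
    model.start_recording(BrainModel.RecordingTarget.IT, time_bins=[(70, 170)])
    recordings = model.look_at(stimuli)  # presentation x neuroid; the single time_bin may be omitted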

start_task(task, fitting_stimuli=None)[source]

Instructs the model to begin one of the tasks specified in Task. For all following calls of look_at(), the model returns the expected outputs for the specified task.

Parameters:
  • task – The task the model should perform, and thus which outputs it should return

  • fitting_stimuli – A set of stimuli for the model to learn on, e.g. image-label pairs
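
A sketch of a behavioral run, assuming Task.probabilities and StimulusSets loaded elsewhere:

    from brainscore_vision.model_interface import BrainModel

    # fitting_stimuli: a StimulusSet of image-label pairs
    model.start_task(BrainModel.Task.probabilities, fitting_stimuli)
    probabilities = model.look_at(test_stimuli)  # per-stimulus choice probabilities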

visual_degrees() → int[source]

The visual degrees this model covers as a single scalar.

Returns:

e.g. 8 or 10

brainscore_vision.benchmark_helpers.bound_score(score: Score)[source]

Force the score value to lie between 0 and 1: if the score is below 0, set it to 0; if it is above 1, set it to 1.
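
A minimal sketch, assuming Score from brainscore_core.metrics and in-place modification (check the source for the exact return behavior):

    from brainscore_core.metrics import Score
    from brainscore_vision.benchmark_helpers import bound_score

    score = Score(1.2)  # e.g. inflated by a noisy ceiling division
    bound_score(score)  # score is now 1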

brainscore_vision.benchmark_helpers.check_standard_format(assembly, nans_expected=False)[source]

The brainscore_vision.utils module provides generic helper classes.

class brainscore_vision.utils.LazyLoad(load_fnc)[source]
__call__(*args, **kwargs)[source]

Call self as a function.

__init__(load_fnc)[source]
reload()[source]
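
A usage sketch, assuming LazyLoad caches the result of load_fnc after the first call (inferred from reload(); not confirmed by the docstrings above):

    from brainscore_vision.utils import LazyLoad

    def load_assembly():
        print('expensive load')  # placeholder for e.g. a download
        return [1, 2, 3]

    data = LazyLoad(load_assembly)
    data()         # 'expensive load' runs here, on first use
    data()         # presumably served from the cache
    data.reload()  # force the loader to run again
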
brainscore_vision.utils.combine_fields(objs, func)[source]
brainscore_vision.utils.fullname(obj)[source]

Resolve the full module-qualified name of an object. Typically used for logger naming.
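
For example, to name a logger after the defining class (the resolved name shown is illustrative):

    import logging
    from brainscore_vision.utils import fullname

    class MyBenchmark:
        def __init__(self):
            # e.g. 'mypackage.benchmarks.MyBenchmark'
            self._logger = logging.getLogger(fullname(self))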

brainscore_vision.utils.map_fields(obj, func)[source]
brainscore_vision.utils.recursive_dict_merge(dict1, dict2)[source]

Recursively merges dictionaries (of dictionaries). Preference is given to the second dict: if a key occurs in both dicts, the value from dict2 is used.
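
For example:

    from brainscore_vision.utils import recursive_dict_merge

    defaults = {'model': {'region': 'IT', 'time_bins': [(70, 170)]}}
    overrides = {'model': {'region': 'V4'}}
    merged = recursive_dict_merge(defaults, overrides)
    # {'model': {'region': 'V4', 'time_bins': [(70, 170)]}}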