BrainModel interface

class brainscore_vision.model_interface.BrainModel[source]

The BrainModel interface defines an API for models to follow. Benchmarks use this interface to treat any model as an experimental subject without needing to know the details of its implementation.

class RecordingTarget[source]

location to record from

IT = 'IT'
V1 = 'V1'
V2 = 'V2'
V4 = 'V4'
class Task[source]

task to perform

label = 'label'

Predict the label for each stimulus. Output a BehavioralAssembly with labels as the values.

The labeling domain can be specified in the second argument, e.g. 'imagenet' for 1,000 ImageNet synsets, or an explicit list of label strings. The model's choices must be part of the labeling domain.

Example:

Setting up a labeling task for ImageNet synsets with start_task(BrainModel.Task.label, 'imagenet') and calling look_at(…) could output

<xarray.BehavioralAssembly (presentation: 3, choice: 1)>
     array([['n02107574'], ['n02123045'], ['n02804414']])  # the ImageNet synsets
     Coordinates:
       * presentation  (presentation) MultiIndex
       - stimulus_id   (presentation) object 'hash1' 'hash2' 'hash3'
       - stimulus_path (presentation) object '/home/me/.brainio/demo_stimuli/image1.png' ...
       - logit         (presentation) int64 239 282 432
       - synset        (presentation) object 'n02107574' 'n02123045' 'n02804414'

Example:

Setting up a labeling task for 2 custom labels with start_task(BrainModel.Task.label, ['dog', 'cat']) and calling look_at(…) could output

<xarray.BehavioralAssembly (presentation: 3, choice: 1)>
     array([['dog'], ['cat'], ['cat']])  # the labels
     Coordinates:
       * presentation  (presentation) MultiIndex
       - stimulus_id   (presentation) object 'hash1' 'hash2' 'hash3'
       - stimulus_path (presentation) object '/home/me/.brainio/demo_stimuli/image1.png' ...
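
A minimal end-to-end sketch of the labeling task. The identifier 'alexnet' and loading via brainscore_vision.load_model are illustrative assumptions; any registered model works:

    from brainscore_vision import load_model
    from brainscore_vision.model_interface import BrainModel

    model = load_model('alexnet')  # hypothetical identifier of a registered model
    model.start_task(BrainModel.Task.label, ['dog', 'cat'])  # restrict choices to 2 labels
    predictions = model.look_at(['/path/to/image1.png', '/path/to/image2.png'])
    print(predictions.values)  # e.g. [['dog'], ['cat']]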
odd_one_out = 'odd_one_out'

Predict the odd-one-out elements for a list of triplets of stimuli.

The model must be supplied with a list of stimuli where every three consecutive stimuli are considered to form a triplet. The model is expected to output a one-dimensional assembly with each value corresponding to the index (0, 1, or 2) of the triplet element that is different from the other two.

Output a BehavioralAssembly with the choices as the values.

Example:

Setting up an odd-one-out task for a list of triplets with start_task(BrainModel.Task.odd_one_out) and calling

look_at(['image1.png', 'image2.png', 'image3.png',    # triplet 1
         'image1.png', 'image2.png', 'image4.png',    # triplet 2
         'image2.png', 'image3.png', 'image4.png',    # triplet 3
         ...
         'image4.png', 'image8.png', 'image10.png'])  # triplet 50

with 50 triplet trials and 10 unique stimuli could output

<xarray.BehavioralAssembly (presentation: 50, choice: 1)>
     array([[0], [2], [2], ..., [1]])  #  index of the odd-one-out per trial, i.e. 0, 1, or 2. (Each trial is one triplet of images.)
     Coordinates:
       * presentation  (presentation) MultiIndex
       - stimulus_id   (presentation) ['image1', 'image2', 'image3'], ..., ['image4', 'image8', 'image10']
       - stimulus_path (presentation) object '/home/me/.brainio/demo_stimuli/image1.png' ...
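
A feature-based model could implement this decision rule by comparing pairwise feature similarities within each triplet and reporting the element outside the most similar pair. A minimal sketch, assuming dot-product similarity over precomputed features (the helper name is illustrative, not part of the interface):

    import numpy as np

    def choose_odd_one_out(features: np.ndarray) -> np.ndarray:
        # features: shape (num_stimuli, num_features); every 3 consecutive
        # rows form one triplet (a, b, c)
        choices = []
        for i in range(0, len(features), 3):
            a, b, c = features[i:i + 3]
            # the most similar pair 'belongs together'; the remaining
            # element is the odd one out
            pair_similarities = [b @ c,  # pair (b, c) -> odd one is a, index 0
                                 a @ c,  # pair (a, c) -> odd one is b, index 1
                                 a @ b]  # pair (a, b) -> odd one is c, index 2
            choices.append(int(np.argmax(pair_similarities)))
        return np.array(choices)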
passive = 'passive'

Passive fixation, i.e. do not perform any task, but fixate on the center of the screen. Does not output anything, but can be useful to fully specify the experimental setup.

Example:

Setting up passive fixation with start_task(BrainModel.Task.passive) and calling look_at(…) could output

None
probabilities = 'probabilities'

Predict the per-label probabilities for each stimulus. Output a BehavioralAssembly with probabilities as the values.

The model must be supplied with fitting_stimuli in the second argument, which allow it to train a readout for a particular set of labels and image distribution. The fitting_stimuli are a StimulusSet and must include an image_label column, which provides the labels to fit to.

Example:

Setting up a probabilities task with start_task(BrainModel.Task.probabilities, <fitting_stimuli>) (where fitting_stimuli includes 5 distinct labels) and calling look_at(<test_stimuli>) could output

<xarray.BehavioralAssembly (presentation: 3, choice: 5)>
     array([[0.9 0.1 0.0 0.0 0.0]
            [0.0 0.0 0.8 0.0 0.2]
            [0.0 0.0 0.0 1.0 0.0]])  # the probabilities
     Coordinates:
       * presentation  (presentation) MultiIndex
       - stimulus_id   (presentation) object 'hash1' 'hash2' 'hash3'
       - stimulus_path (presentation) object '/home/me/.brainio/demo_stimuli/image1.png' ...
       - choice        (choice) object 'dog' 'cat' 'chair' 'flower' 'plane'
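
Internally, a model might satisfy this task by fitting a classifier readout on its own features of the fitting_stimuli. A minimal sketch, assuming scikit-learn and a stand-in feature extractor (both illustrative; the interface does not prescribe a readout mechanism):

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def extract_features(stimuli) -> np.ndarray:
        # hypothetical stand-in for the model's own feature extractor
        return np.random.rand(len(stimuli), 128)

    class ProbabilitiesReadout:
        def start_task(self, fitting_stimuli):
            # fit a readout from model features to the image_label column
            self.classifier = LogisticRegression(max_iter=1000)
            self.classifier.fit(extract_features(fitting_stimuli),
                                fitting_stimuli['image_label'])

        def look_at(self, test_stimuli):
            # per-label probabilities, shape (num_stimuli, num_labels)
            return self.classifier.predict_proba(extract_features(test_stimuli))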
property identifier: str

The unique identifier for this model.

Returns:

e.g. 'CORnet-S', or 'alexnet'

look_at(stimuli: Union[StimulusSet, List[str]], number_of_trials=1) → Union[BehavioralAssembly, NeuroidAssembly][source]

Digest a set of stimuli and return the requested outputs. Which outputs to return is instructed by the start_task() and start_recording() methods.

Parameters:
  • stimuli – A set of stimuli, passed as either a StimulusSet or a list of image file paths

  • number_of_trials – The number of repeated trials of the stimuli that the model should average over. E.g. 10 or 35. Non-stochastic models can likely ignore this parameter.

Returns:

task behaviors or recordings as instructed
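
For instance (reusing the hypothetical model from the sketches above; stimuli can be passed as a StimulusSet or as a plain list of paths):

    model.start_task(BrainModel.Task.label, 'imagenet')
    behavior = model.look_at(['/path/to/image1.png'], number_of_trials=10)  # BehavioralAssembly

    model.start_recording(BrainModel.RecordingTarget.IT, time_bins=[(70, 170)])
    recordings = model.look_at(['/path/to/image1.png'])  # NeuroidAssembly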

start_recording(recording_target: RecordingTarget, time_bins: List[Tuple[int]]) → None[source]

Instructs the model to begin recording from the specified RecordingTarget and to return the specified time_bins. For all following calls of look_at(), the model returns the corresponding recordings. These recordings are a NeuroidAssembly with exactly 3 dimensions:

  • presentation: the presented stimuli (cf. stimuli argument of look_at()). If a StimulusSet was passed, the recordings should contain all of the StimulusSet columns as coordinates on this dimension. The stimulus_id coordinate is required in either case.

  • neuroid: the recorded neuroids (neurons or mixtures thereof). They should all be part of the specified RecordingTarget. The coordinates of this dimension should again include as much information as is available, at the very least a neuroid_id.

  • time_bins: the time bins of each recording slice. This dimension should contain at least 2 coordinates: time_bin_start and time_bin_end, where one time_bin is the bin between start and end. For instance, a 70-170ms time_bin would be marked as time_bin_start=70 and time_bin_end=170. If only one time_bin is requested, the model may choose to omit this dimension.

Parameters:
  • recording_target – which location to record from

  • time_bins – which time_bins to record as a list of integer tuples, e.g. [(50, 100), (100, 150), (150, 200)] or [(70, 170)]
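
A minimal recording sketch (again using the hypothetical model from above; which coordinates come back depends on the model implementation):

    model.start_recording(BrainModel.RecordingTarget.IT, time_bins=[(70, 170)])
    recordings = model.look_at(['/path/to/image1.png', '/path/to/image2.png'])
    print(recordings.dims)                  # e.g. ('presentation', 'neuroid'); the single
                                            # requested time_bin may be omitted as a dimension
    print(recordings['neuroid_id'].values)  # identifiers of the recorded units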

start_task(task: Task, fitting_stimuli) → None[source]

Instructs the model to begin one of the tasks specified in Task. For all following calls of look_at(), the model returns the expected outputs for the specified task.

Parameters:
  • task – The task the model should perform, and thus which outputs it should return

  • fitting_stimuli – A set of stimuli for the model to learn on, e.g. image-label pairs

visual_degrees() → int[source]

The visual degrees this model covers as a single scalar.

Returns:

e.g. 8, or 10
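
Putting the pieces together, a new model plugs into benchmarks by implementing this interface. A skeleton sketch (the class name, identifier, and stub bodies are illustrative):

    from brainscore_vision.model_interface import BrainModel

    class MySubject(BrainModel):
        @property
        def identifier(self) -> str:
            return 'my-model'  # hypothetical identifier

        def visual_degrees(self) -> int:
            return 8  # stimuli are assumed to cover 8 degrees of visual angle

        def start_task(self, task: BrainModel.Task, fitting_stimuli=None):
            self.task = task  # remember what look_at() should output

        def start_recording(self, recording_target: BrainModel.RecordingTarget, time_bins):
            self.recording_target, self.time_bins = recording_target, time_bins

        def look_at(self, stimuli, number_of_trials=1):
            ...  # return a BehavioralAssembly or NeuroidAssembly as instructed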