hooks#

This module provides hooks that can be registered with layers and modules to collect statistics during a forward pass.

Hooks#

sinabs.hooks.firing_rate_hook(module: StatefulLayer, input_: Any, output: Tensor)[source]#

Forward hook to be registered with a spiking sinabs layer.

Calculate the mean firing rate per neuron per timestep.

The hook should be registered with the layer using its register_forward_hook method. It will be called automatically at each forward pass. Afterwards the data can be accessed with module.hook_data['firing_rate'].

Parameters:
  • module (StatefulLayer) – A spiking sinabs layer, such as IAF or LIF.

  • input_ (Any) – List of inputs to the layer. Ignored here.

  • output (Tensor) – The layer's output.

Effect:

If module does not already have a hook_data attribute, it will be added. The mean firing rate is stored under the key 'firing_rate' as a scalar value.
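Example: a minimal usage sketch. The IAF layer and the (batch, time, neurons) input shape are illustrative choices, not requirements of the hook.

    import torch
    import sinabs.layers as sl
    from sinabs.hooks import firing_rate_hook

    layer = sl.IAF()
    layer.register_forward_hook(firing_rate_hook)

    # Random binary spike raster: (batch=2, time=10, neurons=16)
    spikes = (torch.rand(2, 10, 16) > 0.8).float()
    layer(spikes)

    # Scalar: mean spikes per neuron per timestep
    print(layer.hook_data["firing_rate"])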

sinabs.hooks.firing_rate_per_neuron_hook(module: StatefulLayer, input_: Any, output: Tensor)[source]#

Forward hook to be registered with a spiking sinabs layer.

Calculate the mean firing rate per timestep for each neuron.

The hook should be registered with the layer using its register_forward_hook method. It will be called automatically at each forward pass. Afterwards the data can be accessed with module.hook_data['firing_rate_per_neuron'].

Parameters:
  • module (StatefulLayer) – A spiking sinabs layer, such as IAF or LIF.

  • input_ (Any) – List of inputs to the layer. Ignored here.

  • output (Tensor) – The layer's output.

Effect:

If module does not already have a hook_data attribute, it will be added. The mean firing rates are stored under the key 'firing_rate_per_neuron' as a tensor with the same shape as the layer's neuron states.
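The per-neuron variant is used the same way but stores one rate per neuron. A short sketch with illustrative shapes:

    import torch
    import sinabs.layers as sl
    from sinabs.hooks import firing_rate_per_neuron_hook

    layer = sl.IAF()
    layer.register_forward_hook(firing_rate_per_neuron_hook)

    spikes = (torch.rand(2, 10, 16) > 0.8).float()
    layer(spikes)

    # One mean rate per neuron, here a tensor over the 16 neurons
    print(layer.hook_data["firing_rate_per_neuron"].shape)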

sinabs.hooks.input_diff_hook(module: Conv2d | Linear, input_: List[Tensor], output: Tensor)[source]#

Forward hook to be registered with a Conv2d or Linear layer.

Calculate the difference between the output that would result if all weights were positive and the absolute value of the actual output. Regularizing this value during training of an SNN can help reduce discrepancies between simulation and deployment on asynchronous processors.

The hook should be registered with the layer using its register_forward_hook method. It will be called automatically at each forward pass. Afterwards the data can be accessed with module.hook_data['diff_output'].

Parameters:
  • module (Conv2d | Linear) – Either a torch.nn.Conv2d or a torch.nn.Linear layer.

  • input_ (List[Tensor]) – List of inputs to the layer. Should hold a single tensor.

  • output (Tensor) – The layer's output.

Effect:

If module does not already have a hook_data attribute, it will be added. The difference described above is stored under the key 'diff_output' as a tensor of the same shape as output.
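Example: a sketch of how the stored value might feed into a regularization term. The layer, shapes, and the reduction to a scalar are illustrative; whether the stored tensor remains attached to the autograd graph depends on the hook implementation.

    import torch
    from torch import nn
    from sinabs.hooks import input_diff_hook

    conv = nn.Conv2d(1, 4, kernel_size=3)
    conv.register_forward_hook(input_diff_hook)

    x = torch.rand(1, 1, 8, 8)
    out = conv(x)

    # Same shape as `out`; summarize into a scalar that could be
    # added to a training loss as a regularizer
    reg = conv.hook_data["diff_output"].mean()
    print(reg)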

sinabs.hooks.conv_layer_synops_hook(module: Conv2d, input_: List[Tensor], output: Tensor)[source]#

Forward hook to be registered with a Conv2d layer.

Calculate the mean number of synaptic operations per timestep. To be clear: synaptic operations are summed over neurons but averaged across batches and timesteps. Note that the hook assumes spike counts as inputs. Preceding average pooling layers, which scale the data, can distort the result and should be accounted for externally.

The hook should be registered with the layer using its register_forward_hook method. It will be called automatically at each forward pass. Afterwards the data can be accessed with module.hook_data['layer_synops_per_timestep'].

Parameters:
  • module (Conv2d) – A torch.nn.Conv2d layer.

  • input_ (List[Tensor]) – List of inputs to the layer. Must contain exactly one tensor.

  • output (Tensor) – The layer's output.

Effect:

If module does not already have a hook_data attribute, it will be added. The mean number of synaptic operations per timestep is stored under the key 'layer_synops_per_timestep' as a scalar value. The hook also stores a connectivity map under the key 'connection_map', which holds the fanout of each input neuron.
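Example: a sketch with illustrative shapes. The input is a binary spike raster; treating the leading dimension as batch / timesteps, over which the hook averages, is an assumption of this sketch.

    import torch
    from torch import nn
    from sinabs.hooks import conv_layer_synops_hook

    conv = nn.Conv2d(2, 4, kernel_size=3)
    conv.register_forward_hook(conv_layer_synops_hook)

    # Binary spike counts with shape (batch * time, C, H, W)
    spikes = (torch.rand(10, 2, 8, 8) > 0.9).float()
    conv(spikes)

    print(conv.hook_data["layer_synops_per_timestep"])  # scalar
    print(conv.hook_data["connection_map"].shape)       # fanout per input element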

sinabs.hooks.linear_layer_synops_hook(module: Linear, input_: List[Tensor], output: Tensor)[source]#

Forward hook to be registered with a Linear layer.

Calculate the mean number of synaptic operations per timestep. To be clear: synaptic operations are summed over neurons but averaged across batches and timesteps. Note that the hook assumes spike counts as inputs. Preceding average pooling layers, which scale the data, can distort the result and should be accounted for externally.

The hook should be registered with the layer using its register_forward_hook method. It will be called automatically at each forward pass. Afterwards the data can be accessed with module.hook_data['layer_synops_per_timestep'].

Parameters:
  • module (Linear) – A torch.nn.Linear layer.

  • input_ (List[Tensor]) – List of inputs to the layer. Must contain exactly one tensor.

  • output (Tensor) – The layer's output.

Effect:

If module does not already have a hook_data attribute, it will be added. The mean number of synaptic operations per timestep is stored under the key 'layer_synops_per_timestep' as a scalar value.
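The Linear variant follows the same pattern; a short sketch with illustrative sizes:

    import torch
    from torch import nn
    from sinabs.hooks import linear_layer_synops_hook

    fc = nn.Linear(16, 4)
    fc.register_forward_hook(linear_layer_synops_hook)

    # Binary spike counts; the leading dimension acts as batch / timesteps
    spikes = (torch.rand(10, 16) > 0.8).float()
    fc(spikes)

    print(fc.hook_data["layer_synops_per_timestep"])  # scalar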

class sinabs.hooks.ModelSynopsHook(dt: float | None = None)[source]#

Forward hook to be registered with a Sequential.

Calculate the mean synaptic operations per timestep for the Conv2d and Linear layers inside the Sequential. Synaptic operations are summed over neurons but averaged across batches and timesteps. Unlike the layer-wise synops hooks, this hook accounts for preceding average pooling layers, which scale the data.

To use this hook, conv_layer_synops_hook and linear_layer_synops_hook need to be registered with the layers inside the Sequential first. The hook should then be instantiated, with or without a dt, and registered with the Sequential using its register_forward_hook method. Alternatively, refer to the function register_synops_hooks for a more convenient way of setting up the hooks.

The hook will be called automatically at each forward pass. Afterwards the data can be accessed in several ways:

  • Each layer that has a synops hook registered will have an entry 'synops_per_timestep' in its hook_data. Unlike 'layer_synops_per_timestep', this entry takes preceding average pooling layers into account.

  • The same values can be accessed through a dict inside the hook_data of the Sequential, under the key 'synops_per_timestep'. The keys inside this dict correspond to the layer indices within the Sequential, e.g. sequential.hook_data['synops_per_timestep'][1].

  • The hook_data of the Sequential also contains a scalar entry 'total_synops_per_timestep', which sums the synops over all layers.

  • If dt is not None, for each of the entries listed above there will be a corresponding 'synops_per_second' entry, indicating the synaptic operations per second under the assumption that dt is the time step in seconds.

Parameters:

dt (float | None) – If not None, should be a float that indicates the simulation time step in seconds. The synaptic operations will then also be provided in terms of synops per second.
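Example: a usage sketch with manual hook registration. The architecture, shapes, and dt are illustrative; IAFSqueeze with a fixed batch size is assumed here so that the Conv2d and Linear layers receive flat (batch * time, ...) input, as in typical sinabs Sequential models.

    import torch
    from torch import nn
    import sinabs.layers as sl
    from sinabs.hooks import (
        ModelSynopsHook,
        conv_layer_synops_hook,
        linear_layer_synops_hook,
    )

    model = nn.Sequential(
        nn.Conv2d(1, 4, kernel_size=3),   # index 0
        sl.IAFSqueeze(batch_size=1),
        nn.AvgPool2d(2),
        nn.Flatten(),
        nn.Linear(36, 10),                # index 4
        sl.IAFSqueeze(batch_size=1),
    )

    # Register the layer-wise hooks first, then the model-level hook
    model[0].register_forward_hook(conv_layer_synops_hook)
    model[4].register_forward_hook(linear_layer_synops_hook)
    model.register_forward_hook(ModelSynopsHook(dt=1e-3))

    # One sample with 10 timesteps, flattened to (batch * time, C, H, W)
    spikes = (torch.rand(10, 1, 8, 8) > 0.9).float()
    model(spikes)

    print(model.hook_data["synops_per_timestep"][0])     # Conv2d layer
    print(model.hook_data["total_synops_per_timestep"])  # summed over layers
    # With dt set, corresponding 'synops_per_second' entries exist as well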

Helper functions#

sinabs.hooks.register_synops_hooks(module: Sequential, dt: float | None = None)[source]#

Convenience function to register all the necessary hooks to collect synops statistics in a sequential model.

This can be used instead of calling register_forward_hook on each layer individually.

Parameters:
  • module (Sequential) – Sequential model for which the hooks should be registered.

  • dt (float | None) – If not None, should be a float indicating the simulation time step in seconds. Will also calculate synaptic operations per second.
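The same setup as in the sketch above, condensed with the convenience function; architecture and shapes remain illustrative.

    import torch
    from torch import nn
    import sinabs.layers as sl
    from sinabs.hooks import register_synops_hooks

    model = nn.Sequential(
        nn.Conv2d(1, 4, kernel_size=3),
        sl.IAFSqueeze(batch_size=1),
        nn.Flatten(),
        nn.Linear(144, 10),
        sl.IAFSqueeze(batch_size=1),
    )

    # Registers the layer-wise hooks and the model-level hook in one call
    register_synops_hooks(model, dt=1e-3)

    spikes = (torch.rand(10, 1, 8, 8) > 0.9).float()
    model(spikes)
    print(model.hook_data["total_synops_per_timestep"])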

sinabs.hooks.get_hook_data_dict(module: Module) Dict[source]#

Convenience function to get the hook_data attribute of a module, creating it if it does not exist.

Parameters:

module (Module) – The module whose hook_data dict is to be fetched. If the module does not have an attribute of that name, an empty dict will be added.

Returns:

The hook_data attribute of module. Should be a Dict.

Return type:

Dict
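This helper is convenient when writing custom hooks that follow the same hook_data convention. A sketch; mean_output_hook is a hypothetical example hook, not part of sinabs.

    import torch
    from torch import nn
    from sinabs.hooks import get_hook_data_dict

    def mean_output_hook(module, input_, output):
        # Hypothetical custom hook using the hook_data convention
        data = get_hook_data_dict(module)
        data["mean_output"] = output.mean()

    layer = nn.Linear(8, 4)
    layer.register_forward_hook(mean_output_hook)
    layer(torch.rand(2, 8))
    print(layer.hook_data["mean_output"])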

sinabs.hooks.conv_connection_map(layer: Conv2d, input_shape: Size, output_shape: Size, device: None | device | str = None) Tensor[source]#

Generate a connectivity map for a convolutional layer. The map indicates, for each element of the layer input, how many postsynaptic neurons it connects to (i.e. its fanout).

Parameters:
  • layer (Conv2d) – Convolutional layer for which connectivity map is to be generated

  • input_shape (Size) – Shape of the input data (N, C, Y, X)

  • output_shape (Size) – Shape of layer output given input_shape

  • device (None | device | str) – Device on which the connectivity map should reside. Should be the same as that of the input to layer. If None, the device of the layer's weight will be used.

Returns:

Connectivity map indicating the fanout for each element in the input

Return type:

torch.Tensor
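A short sketch; the layer and input shape are illustrative.

    import torch
    from torch import nn
    from sinabs.hooks import conv_connection_map

    layer = nn.Conv2d(1, 4, kernel_size=3)
    x = torch.rand(1, 1, 8, 8)  # (N, C, Y, X)
    out = layer(x)

    fanout = conv_connection_map(layer, x.shape, out.shape)
    print(fanout.shape)  # fanout for each element of the input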

sinabs.hooks._extract_single_input(input_data: List[Any]) Any[source]#

Extract single element of a list.

Parameters:

input_data (List[Any]) – List that should have only one element

Returns:

The only element from the list

Raises:
  • ValueError – if input_data does not have exactly one element.

Return type:

Any