sinabs.utils.get_activations(torchanalog_model, tsrData, name_list=None)[source]#

Return the activations of a torch analog model for the specified layers.

sinabs.utils.get_network_activations(model: torch.nn.modules.module.Module, inp, name_list: Optional[List] = None, bRate: bool = False) List[numpy.ndarray][source]#

Return the activity of the neurons in each layer of the network.

  • model (torch.nn.modules.module.Module) – Model for which the activations are to be read out

  • inp – Input to the model

  • bRate (bool) – If True, returns the spike rate; otherwise returns the spike count

  • name_list (Optional[List]) – List of names of the layers whose activations are to be read out

Return type

  List[numpy.ndarray]

sinabs.utils.normalize_weights(ann: torch.nn.modules.module.Module, sample_data: torch.Tensor, output_layers: List[str], param_layers: List[str], percentile: float = 99)[source]#

Rescale the weights of the network, such that the activity of each specified layer is normalized.

The method implemented here roughly follows the paper: Conversion of Continuous-Valued Deep Networks to Efficient Event-Driven Networks for Image Classification by Rueckauer et al. https://www.frontiersin.org/article/10.3389/fnins.2017.00682

  • ann (torch.nn.modules.module.Module) – Torch module

  • sample_data (torch.Tensor) – Input data to normalize the network with

  • output_layers (List[str]) – List of layers whose activity is used to verify the normalization. Typically these are ReLU layers

  • param_layers (List[str]) – List of layers whose parameters precede output_layers

  • percentile (float) – A number between 0 and 100 that determines the activity to normalize by, where 100 corresponds to the maximum activity of the network. Defaults to 99.