utils
- sinabs.utils.reset_states(model: Module) → None [source]
Helper function to recursively reset all states of spiking layers within the model.
- Parameters:
model (Module) – The torch module whose spiking layers' states are to be reset
- Return type:
None
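A minimal usage sketch; the two-layer model, layer sizes and input shapes below are illustrative assumptions, not part of the API:

```python
import torch
import torch.nn as nn
import sinabs.layers as sl
from sinabs.utils import reset_states

# Small spiking model; the IAF layer keeps membrane state across forward calls.
model = nn.Sequential(nn.Linear(16, 8), sl.IAF())

seq_a = torch.rand(1, 100, 16)  # (batch, time, features)
seq_b = torch.rand(1, 100, 16)

out_a = model(seq_a)
reset_states(model)  # clear membrane potentials so seq_b starts from a blank state
out_b = model(seq_b)
```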
- sinabs.utils.zero_grad(model: Module) → None [source]
Helper function to recursively zero the gradients of all spiking layers within the model.
- Parameters:
model (Module) – The torch module whose spiking layers' gradients are to be zeroed
- Return type:
None
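A sketch of where zero_grad typically fits in a training loop, alongside reset_states, so that consecutive batches remain independent. The model, loss and tensor shapes are illustrative assumptions:

```python
import torch
import torch.nn as nn
import sinabs.layers as sl
from sinabs.utils import reset_states, zero_grad

model = nn.Sequential(nn.Linear(16, 8), sl.IAF())
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

data = torch.rand(4, 100, 16)        # (batch, time, features)
target_rate = torch.full((4, 8), 0.1)

optimizer.zero_grad()
out = model(data)                                  # spike output, same layout as input
loss = ((out.mean(1) - target_rate) ** 2).mean()   # toy rate-coding loss
loss.backward()
optimizer.step()

# Between batches: clear neuron states and any gradients held inside them,
# in addition to the optimizer's own zero_grad on the parameters.
reset_states(model)
zero_grad(model)
```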
- sinabs.utils.get_activations(torchanalog_model, tsrData, name_list=None)[source]
Return the activations of an analog (non-spiking) torch model for the specified layers.
- Parameters:
torchanalog_model – The analog torch model whose activations are to be read out
tsrData – Input tensor to the model
name_list – Names of the layers whose activations are to be returned
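A hedged usage sketch. The model below and the layer naming convention ("1" is the default name nn.Sequential gives its second submodule) are assumptions for illustration:

```python
import torch
import torch.nn as nn
from sinabs.utils import get_activations

ann = nn.Sequential(
    nn.Conv2d(1, 8, kernel_size=3),
    nn.ReLU(),
    nn.AvgPool2d(2),
)

data = torch.rand(1, 1, 28, 28)
# Read out the activations of the ReLU layer, named "1" by nn.Sequential.
activations = get_activations(ann, data, name_list=["1"])
```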
- sinabs.utils.get_network_activations(model: Module, inp, name_list: List = None, bRate: bool = False) → List[ndarray] [source]
Returns the activity of neurons in each layer of the network.
- Parameters:
model (Module) – Model for which the activations are to be read out
inp – Input to the model
bRate (bool) – If True, returns the spike rate; otherwise returns the spike count
name_list (List) – Names of the layers whose activations are to be read out
- Return type:
List[ndarray]
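A hedged usage sketch; the spiking model and the layer name "1" (default nn.Sequential naming) are illustrative assumptions:

```python
import torch
import torch.nn as nn
import sinabs.layers as sl
from sinabs.utils import get_network_activations

snn = nn.Sequential(nn.Linear(16, 8), sl.IAF())
inp = torch.rand(1, 100, 16)  # (batch, time, features)

# Spike counts of the IAF layer; pass bRate=True to get rates instead.
counts = get_network_activations(snn, inp, name_list=["1"], bRate=False)
```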
- sinabs.utils.normalize_weights(ann: Module, sample_data: Tensor, output_layers: List[str], param_layers: List[str], percentile: float = 99)[source]
Rescale the weights of the network, such that the activity of each specified layer is normalized.
The method implemented here roughly follows the paper "Conversion of Continuous-Valued Deep Networks to Efficient Event-Driven Networks for Image Classification" by Rueckauer et al.: https://www.frontiersin.org/article/10.3389/fnins.2017.00682
- Parameters:
ann (Module) – Torch module
sample_data (Tensor) – Input data to normalize the network with
output_layers (List[str]) – Names of the layers whose activity is used to verify the normalization. Typically these are ReLU layers
param_layers (List[str]) – Names of the layers whose parameters precede the corresponding output_layers
percentile (float) – A number between 0 and 100 that determines the activation value the network is normalized by, where 100 corresponds to the maximum activation of the network. Defaults to 99.
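A sketch of normalizing a small ANN before conversion to a spiking network. The architecture and the layer names (default nn.Sequential indices, pairing each parameterized layer with the ReLU that follows it, per the parameter descriptions above) are illustrative assumptions:

```python
import torch
import torch.nn as nn
from sinabs.utils import normalize_weights

ann = nn.Sequential(
    nn.Conv2d(1, 8, kernel_size=3),   # parameter layer "0"
    nn.ReLU(),                        # output layer "1"
    nn.Conv2d(8, 16, kernel_size=3),  # parameter layer "2"
    nn.ReLU(),                        # output layer "3"
)

sample_data = torch.rand(32, 1, 28, 28)  # representative input batch

# Rescale the weights in place so each ReLU's activity is normalized
# by its 99th-percentile activation on sample_data.
normalize_weights(
    ann,
    sample_data,
    output_layers=["1", "3"],
    param_layers=["0", "2"],
    percentile=99,
)
```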