This module defines the DynapcnnLayer class that is used to reproduce the behavior of a layer on the dynapcnn chip.

class sinabs.backend.dynapcnn.dynapcnn_layer.DynapcnnLayer(conv: Conv2d, spk: IAFSqueeze, in_shape: Tuple[int, int, int], pool: SumPool2d | None = None, discretize: bool = True, rescale_weights: int = 1)[source]#

Create a DynapcnnLayer object representing a dynapcnn layer.

Requires a convolutional layer, a sinabs spiking layer and an optional pooling value. The layers are used in the order conv -> spike -> pool.

Parameters:

  • conv (torch.nn.Conv2d or torch.nn.Linear) – Convolutional or linear layer (linear will be converted to convolutional)

  • spk (sinabs.layers.IAFSqueeze) – Sinabs IAF layer

  • in_shape (tuple of int) – The input shape, needed to create dynapcnn configs if the network does not contain an input layer. Convention: (features, height, width)

  • pool (int or None) – Integer representing the kernel size and stride of the sum pooling. If None, no pooling will be applied.

  • discretize (bool) – Whether to discretize parameters.

  • rescale_weights (int) – Layer weights will be divided by this value.
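The conv -> spike -> pool ordering determines the layer's output shape. As a rough illustration (not the library's implementation), the shape arithmetic for the (features, height, width) convention can be sketched in plain Python; `conv_output_shape` and `pool_output_shape` are hypothetical helper names:

```python
def conv_output_shape(in_shape, out_channels, kernel_size, stride=1, padding=0):
    # Standard convolution shape arithmetic, assuming square kernel,
    # stride and padding.
    _, h, w = in_shape
    h_out = (h + 2 * padding - kernel_size) // stride + 1
    w_out = (w + 2 * padding - kernel_size) // stride + 1
    return (out_channels, h_out, w_out)

def pool_output_shape(shape, pool):
    # SumPool2d with kernel == stride == pool; None means no pooling.
    c, h, w = shape
    if pool is None:
        return (c, h, w)
    return (c, h // pool, w // pool)

# in_shape follows the (features, height, width) convention.
in_shape = (2, 28, 28)
neuron_shape = conv_output_shape(in_shape, out_channels=8, kernel_size=3, padding=1)
out_shape = pool_output_shape(neuron_shape, pool=2)
print(neuron_shape)  # (8, 28, 28)
print(out_shape)     # (8, 14, 14)
```

The spiking layer preserves the spatial shape, so the neuron shape equals the convolution output shape, and pooling then downsamples height and width.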


forward(x)[source]#

Torch forward pass.

get_neuron_shape() → Tuple[int, int, int][source]#

Return the output shape of the neuron layer.

Returns:

(features, height, width)

Return type:

Tuple[int, int, int]


memory_summary()[source]#

Computes the amount of memory required for each of the components. Note that this is not necessarily the same as the number of parameters due to some architecture design constraints.

\[K_{MT} = c \cdot 2^{\lceil \log_2\left(k_xk_y\right) \rceil + \lceil \log_2\left(f\right) \rceil}\]
\[N_{MT} = f \cdot 2^{ \lceil \log_2\left(f_y\right) \rceil + \lceil \log_2\left(f_x\right) \rceil }\]

where \(c\) is the number of input features, \(f\) the number of output features, \(k_x, k_y\) the kernel dimensions, and \(f_x, f_y\) the output feature-map dimensions.
Returns:

A dictionary with keys kernel, neuron and bias and the corresponding memory sizes

zero_grad(set_to_none: bool = False) → None[source]#

Reset gradients of all model parameters.

See similar function under torch.optim.Optimizer for more context.


Parameters:

set_to_none (bool) – instead of setting to zero, set the grads to None. See torch.optim.Optimizer.zero_grad() for details.

Return type:

None