layers#

All the layers implemented in this package can be used like torch.nn layers in your own models.
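
For example, sinabs layers can be dropped into standard torch.nn containers. A minimal sketch, assuming sinabs.layers.IAFSqueeze (a spiking layer that takes input flattened over batch and time) together with the SumPool2d layer documented below:

import torch
import torch.nn as nn
import sinabs.layers as sl

batch_size, time_steps = 4, 50

# Spiking CNN mixing torch.nn and sinabs layers; the spiking layer operates
# on input flattened to (Batch*Time, Channels, Height, Width).
model = nn.Sequential(
    nn.Conv2d(1, 8, kernel_size=3, bias=False),
    sl.IAFSqueeze(batch_size=batch_size),
    sl.SumPool2d(kernel_size=2),
)

x = torch.rand(batch_size * time_steps, 1, 28, 28)
out = model(x)  # expected shape: (Batch*Time, 8, 13, 13)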

Main spiking layers#

These layers make use of the sinabs.activation module.

Non-spiking layers#

Pooling#

class sinabs.layers.SpikingMaxPooling2dLayer(pool_size: Union[numpy.ndarray, List, Tuple], strides: Optional[Union[numpy.ndarray, List, Tuple]] = None, padding: Union[numpy.ndarray, List, Tuple] = (0, 0, 0, 0))#
forward(binary_input)#

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

get_output_shape(input_shape: Tuple) → Tuple#

Returns the shape of the output, given an input to this layer.

Parameters

input_shape – (channels, height, width)

Returns

(channelsOut, height_out, width_out)
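
Example (a sketch; the output sizes are what one would expect for a 2x2 pooling window with the default stride):

import sinabs.layers as sl

pool = sl.SpikingMaxPooling2dLayer(pool_size=(2, 2))
print(pool.get_output_shape((16, 32, 32)))  # expected: (16, 16, 16)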

class sinabs.layers.SumPool2d(kernel_size, stride=None, ceil_mode=False)#

Non-spiking sumpooling layer to be used in analogue Torch models. It is identical to torch.nn.LPPool2d with p=1.

Parameters
  • kernel_size – the size of the window

  • stride – the stride of the window. Default value is kernel_size

  • ceil_mode – when True, will use ceil instead of floor to compute the output shape
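
Example (a sketch; each output element should be the sum over a 2x2 window):

import torch
import sinabs.layers as sl

pool = sl.SumPool2d(kernel_size=2)   # stride defaults to kernel_size
x = torch.ones(1, 1, 4, 4)
print(pool(x))                       # expected: a 2x2 map of 4s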

Conversion from images / analog signals#

The hybrid layers have inputs and outputs of different formats (e.g. they take analog values as inputs and produce spikes as outputs).

class sinabs.layers.Img2SpikeLayer(image_shape, tw: int = 100, max_rate: float = 1000, norm: float = 255.0, squeeze: bool = False, negative_spikes: bool = False)#

Layer to convert images to spikes.

forward(img_input)#

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
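
Example (a sketch, assuming a 28x28 single-channel image with pixel values in [0, 255]; the exact output shape depends on the squeeze setting):

import torch
import sinabs.layers as sl

img = torch.rand(1, 28, 28) * 255    # hypothetical input image

img2spk = sl.Img2SpikeLayer(image_shape=(1, 28, 28), tw=100, max_rate=1000.0)
spikes = img2spk(img)                # rate-coded spike tensor over tw time steps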

class sinabs.layers.Sig2SpikeLayer(channels_in, tw: int = 1, norm_level: float = 1, spk_out: bool = True)#

Layer to convert analog signals to spikes.

forward(signal)#

Convert a signal to the corresponding spikes

Parameters

signal – [Channel, Sample(t)]

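Example (a sketch, assuming a 4-channel analog signal with 100 samples):

import torch
import sinabs.layers as sl

signal = torch.rand(4, 100)          # [Channel, Sample(t)]

sig2spk = sl.Sig2SpikeLayer(channels_in=4, tw=1)
spikes = sig2spk(signal)             # spike train derived from the signal amplitudes
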
Parent layers#

Other Sinabs layers may inherit from these base classes.

class sinabs.layers.StatefulLayer(state_names: List[str])#

Pytorch implementation of a stateful layer, to be used as base class.

property does_spike: bool#

Return True if the layer has an activation function

forward(*args, **kwargs)#

Not implemented. You need to implement a forward method in the child class.

init_state_with_shape(shape, randomize: bool = False) → None#

Initialise the state/buffers with either zeros or a random tensor of the given shape.

is_state_initialised() → bool#

Checks whether any buffer still has shape 0 and returns True only if none of them do (i.e. all states have been initialised).

reset_states(randomize: bool = False, value_ranges: Optional[Dict[str, Tuple[float, float]]] = None)#

Reset the state/buffers in a layer.

Parameters
  • randomize (Bool) – If True, reset the states uniformly within the ranges provided. Otherwise, the states are reset to zero.

  • value_ranges (Optional[Dict[str, Tuple[float, float]]]) – A dictionary of key-value pairs: buffer_name -> (min, max) for each state that needs to be reset. The states are reset with a uniform distribution between the min and max values specified. Any state whose key is not defined in this dictionary will be reset between 0 and 1. This parameter is only used if randomize is set to True.

NOTE: If you would like to reset the state with a custom distribution, you can do this individually for each parameter as follows.

layer.<state_name>.data = <your desired data>; layer.<state_name>.detach_()
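
Example (a sketch, assuming a sinabs.layers.LIF layer whose state buffer is named v_mem and which accepts input shaped (batch, time, ...)):

import torch
import sinabs.layers as sl

lif = sl.LIF(tau_mem=20.0)
lif(torch.rand(1, 100, 10))          # forward pass initialises the state buffers

lif.reset_states()                   # reset all states to zero

# reset v_mem uniformly between 0 and 0.5; unspecified states default to (0, 1)
lif.reset_states(randomize=True, value_ranges={"v_mem": (0.0, 0.5)})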

state_has_shape(shape) → bool#

Checks if all states have the given shape.

zero_grad(set_to_none: bool = False) → None#

Zeroes the gradients for buffers/states along with the parameters. See torch.nn.Module.zero_grad() for details.
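
A minimal, hypothetical subclass sketch with a single state named v_mem that simply accumulates its input:

import torch
import sinabs.layers as sl

class Accumulator(sl.StatefulLayer):
    # Toy stateful layer: adds up all inputs it has seen so far.
    def __init__(self):
        super().__init__(state_names=["v_mem"])

    def forward(self, data: torch.Tensor) -> torch.Tensor:
        # (Re-)initialise the state buffer whenever the input shape changes
        if not self.is_state_initialised() or not self.state_has_shape(data.shape):
            self.init_state_with_shape(data.shape)
        self.v_mem = self.v_mem + data
        return self.v_mem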

class sinabs.layers.SqueezeMixin#

Utility mixin class that will wrap the __init__ and forward call of other classes. The wrapped __init__ will provide two additional parameters batch_size and num_timesteps and the wrapped forward will unpack and repack the first dimension into batch and time.
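
For instance, the squeezed layer variants in sinabs (such as IAFSqueeze) use this mixin: they are constructed with batch_size or num_timesteps and operate on input flattened to (Batch*Time, ...). A sketch:

import torch
import sinabs.layers as sl

layer = sl.IAFSqueeze(batch_size=2)
x = torch.rand(2 * 10, 4)            # (Batch*Time, features)
out = layer(x)                       # output keeps the flattened layout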

Auxiliary layers#

class sinabs.layers.Cropping2dLayer(cropping: Union[numpy.ndarray, List, Tuple] = ((0, 0), (0, 0)))#

Crop the input image by the amounts specified in cropping.

forward(binary_input)#

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

get_output_shape(input_shape: Tuple) → Tuple#

Returns the output dimensions.

Parameters

input_shape – (channels, height, width)

Returns

(channels, height, width)
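
Example (a sketch, assuming cropping is given as ((top, bottom), (left, right)) in pixels):

import torch
import sinabs.layers as sl

crop = sl.Cropping2dLayer(cropping=((2, 2), (4, 4)))
x = torch.rand(1, 3, 32, 32)
print(crop(x).shape)                         # expected: torch.Size([1, 3, 28, 24])
print(crop.get_output_shape((3, 32, 32)))    # expected: (3, 28, 24)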

class sinabs.layers.FlattenTime#

Utility layer which always flattens the first two dimensions. Meant to convert a tensor of dimensions (Batch, Time, Channels, Height, Width) into a tensor of (Batch*Time, Channels, Height, Width).

class sinabs.layers.UnflattenTime(batch_size: int)#

Utility layer which always unflattens (expands) the first dimension into two separate ones. Meant to convert a tensor of dimensions (Batch*Time, Channels, Height, Width) into a tensor of (Batch, Time, Channels, Height, Width).

forward(x)#

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
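
Together these two layers let time-agnostic torch.nn modules run on time-resolved data; a sketch:

import torch
import torch.nn as nn
import sinabs.layers as sl

batch_size, time_steps = 2, 10
x = torch.rand(batch_size, time_steps, 1, 28, 28)     # (Batch, Time, C, H, W)

model = nn.Sequential(
    sl.FlattenTime(),                                 # -> (Batch*Time, 1, 28, 28)
    nn.Conv2d(1, 8, kernel_size=3),
    sl.UnflattenTime(batch_size=batch_size),          # -> (Batch, Time, 8, 26, 26)
)
print(model(x).shape)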

ANN layers#

These are utility layers used in the training of ANNs, in order to provide specific features suitable for SNN conversion.

class sinabs.layers.NeuromorphicReLU(quantize=True, fanout=1, stochastic_rounding=False)#

NeuromorphicReLU layer. This layer is NOT used for Sinabs networks; it’s useful while training analogue PyTorch networks for future use with Sinabs.

Parameters
  • quantize – Whether or not to quantize the output (i.e. floor it to the integer below), in order to mimic spiking behavior.

  • fanout – Useful when computing the number of SynOps of a quantized NeuromorphicReLU. The activity can be accessed through NeuromorphicReLU.activity, and is multiplied by the value of fanout.

  • stochastic_rounding – Whether, upon quantization, the value should be rounded stochastically instead of floored. Stochastic rounding is only done during training; in evaluation mode, the value is simply floored.
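
Example (a sketch; with quantize=True and no stochastic rounding the output is floored, and activity holds the summed output scaled by fanout):

import torch
import sinabs.layers as sl

relu = sl.NeuromorphicReLU(quantize=True, fanout=10)
out = relu(torch.tensor([[-1.0, 0.4, 2.7]]))   # expected: tensor([[0., 0., 2.]])
print(relu.activity)                           # SynOps estimate scaled by fanout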

forward(inp)#

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

class sinabs.layers.QuantizeLayer(quantize=True)#

Layer that quantizes the input, i.e. returns floor(input).

Parameters

quantize – If False, this layer will do nothing.

forward(data)#

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
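
Example (a sketch):

import torch
import sinabs.layers as sl

quant = sl.QuantizeLayer()
print(quant(torch.tensor([0.2, 1.7, 3.0])))   # expected: tensor([0., 1., 3.])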