from_torch#

This module provides support for importing models from PyTorch into sinabs.

class sinabs.from_torch.SpkConverter(input_shape=None, spike_threshold=1.0, min_v_mem=-1.0, membrane_subtract=None, bias_rescaling=1.0, num_timesteps=None, batch_size=1, synops=False, add_spiking_output=False, backend='bptt', kwargs_backend=None)#

Converts a Torch model and returns a Sinabs network object. The modules in the model are analyzed, and a copy is returned, with all ReLUs, LeakyReLUs and NeuromorphicReLUs turned into SpikingLayers.

Parameters
  • input_shape – If provided, the layer dimensions are computed. Otherwise they will be computed at the first forward pass.

  • spike_threshold – The membrane potential threshold for spiking in convolutional and linear layers (same for all layers).

  • min_v_mem – The lower bound of the potential in convolutional and linear layers (same for all layers).

  • membrane_subtract – Value subtracted from the potential upon spiking for convolutional and linear layers (same for all layers).

  • bias_rescaling – Biases are divided by this value.

  • num_timesteps – Number of timesteps per sample. If None, batch_size must be provided to separate batch and time dimensions.

  • batch_size – Must be provided if num_timesteps is None and is ignored otherwise.

  • synops – If True (default: False), register hooks for counting synaptic operations during forward passes.

  • add_spiking_output – If True (default: False), add a spiking layer to the end of a sequential model if not present.

  • backend – String defining the simulation backend (currently sinabs or exodus)

  • kwargs_backend – Dict with additional kwargs for the simulation backend

convert(model)#

Converts the Torch model and returns a Sinabs network object.

Returns
  network – the Sinabs network object created by conversion.
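A minimal usage sketch of the converter follows. The network architecture, shapes, and the assumption that time steps are flattened into the batch dimension are illustrative only, not part of the documented API:

    import torch
    import torch.nn as nn
    from sinabs.from_torch import SpkConverter

    # Illustrative ANN with ReLU activations; each ReLU is turned into a
    # spiking layer by the converter.
    ann = nn.Sequential(
        nn.Conv2d(1, 8, kernel_size=3, padding=1),
        nn.ReLU(),
        nn.AvgPool2d(2),
        nn.Flatten(),
        nn.Linear(8 * 14 * 14, 10),
        nn.ReLU(),
    )

    converter = SpkConverter(
        input_shape=(1, 28, 28),  # layer dimensions are computed up front
        spike_threshold=1.0,
        batch_size=4,             # used to separate batch and time dimensions
    )
    snn = converter.convert(ann)

    # Assumed input layout: (batch_size * num_timesteps, channels, height, width)
    x = torch.rand(4 * 10, 1, 28, 28)
    out = snn(x)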

sinabs.from_torch.from_model(model, input_shape=None, spike_threshold=1.0, min_v_mem=-1.0, membrane_subtract=None, bias_rescaling=1.0, num_timesteps=None, batch_size=1, synops=False, add_spiking_output=False, backend='sinabs', kwargs_backend=None)#

Converts a Torch model and returns a Sinabs network object. The modules in the model are analyzed, and a copy is returned, with all ReLUs, LeakyReLUs and NeuromorphicReLUs turned into SpikingLayers.

Parameters
  • model – a Torch model

  • input_shape – If provided, the layer dimensions are computed. Otherwise they will be computed at the first forward pass.

  • spike_threshold – The membrane potential threshold for spiking in convolutional and linear layers (same for all layers).

  • min_v_mem – The lower bound of the potential in convolutional and linear layers (same for all layers).

  • membrane_subtract – Value subtracted from the potential upon spiking for convolutional and linear layers (same for all layers).

  • bias_rescaling – Biases are divided by this value.

  • num_timesteps – Number of timesteps per sample. If None, batch_size must be provided to separate batch and time dimensions.

  • batch_size – Must be provided if num_timesteps is None and is ignored otherwise.

  • synops – If True (default: False), register hooks for counting synaptic operations during forward passes.

  • add_spiking_output – If True (default: False), add a spiking layer to the end of a sequential model if not present.

  • backend – String defining the simulation backend (currently sinabs or exodus)

  • kwargs_backend – Dict with additional kwargs for the simulation backend
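
A minimal sketch of the functional interface, under the same assumptions as above (illustrative architecture and shapes; time steps assumed flattened into the batch dimension):

    import torch
    import torch.nn as nn
    from sinabs.from_torch import from_model

    # Illustrative ANN; the final Linear has no activation, so
    # add_spiking_output appends a spiking layer at the end.
    ann = nn.Sequential(
        nn.Conv2d(2, 16, kernel_size=3, stride=2),
        nn.ReLU(),
        nn.Flatten(),
        nn.Linear(16 * 31 * 31, 10),
    )

    snn = from_model(
        ann,
        input_shape=(2, 64, 64),
        num_timesteps=10,
        add_spiking_output=True,
        backend="sinabs",
    )

    # Assumed input layout: (batch * num_timesteps, channels, height, width)
    x = torch.rand(1 * 10, 2, 64, 64)
    out = snn(x)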