from_torch#
This module provides support for importing PyTorch models into sinabs.
- sinabs.from_torch.from_model(model: torch.nn.modules.module.Module, input_shape: typing.Optional[typing.Tuple[int, int, int]] = None, spike_threshold: float = 1.0, spike_fn: typing.Callable = <class 'sinabs.activation.spike_generation.MultiSpike'>, reset_fn: typing.Callable = MembraneSubtract(subtract_value=None), surrogate_grad_fn: typing.Callable = SingleExponential(grad_width=0.5, grad_scale=1.0), min_v_mem: float = -1.0, bias_rescaling: float = 1.0, batch_size: typing.Optional[int] = None, num_timesteps: typing.Optional[int] = None, synops: bool = False, add_spiking_output: bool = False, backend: str = 'sinabs', kwargs_backend: dict = {})[source]#
Converts a Torch model and returns a Sinabs network object. The modules in the model are analyzed, and a copy is returned, with all ReLUs, LeakyReLUs and NeuromorphicReLUs turned into SpikingLayers.
- Parameters
model (torch.nn.modules.module.Module) – Torch model
input_shape (Optional[Tuple[int, int, int]]) – If provided, the layer dimensions are computed. Otherwise they will be computed at the first forward pass.
spike_threshold (float) – The membrane potential threshold for spiking (same for all layers).
spike_fn (Callable) – The spike dynamics that determine the number of output spikes.
reset_fn (Callable) – The reset mechanism of the neuron (e.g. reset to zero, or subtract the spike threshold).
surrogate_grad_fn (Callable) – The surrogate gradient method for the spiking dynamics
min_v_mem (float) – The lower bound of the membrane potential (same for all layers).
bias_rescaling (float) – Biases are divided by this value.
batch_size (Optional[int]) – Must be provided if num_timesteps is None and is ignored otherwise.
num_timesteps (Optional[int]) – Number of timesteps per sample. If None, batch_size must be provided to separate batch and time dimensions.
synops (bool) – If True (default: False), register hooks for counting synaptic operations during forward passes.
add_spiking_output (bool) – If True (default: False), add a spiking layer to the end of a sequential model if not present.
backend (str) – String defining the simulation backend (currently sinabs or exodus)
kwargs_backend (dict) – Dict with additional kwargs for the simulation backend
- class sinabs.from_torch.SpkConverter(input_shape: typing.Optional[typing.Tuple[int, int, int]] = None, spike_threshold: float = 1.0, spike_fn: typing.Callable = <class 'sinabs.activation.spike_generation.MultiSpike'>, reset_fn: typing.Callable = MembraneSubtract(subtract_value=None), surrogate_grad_fn: typing.Callable = SingleExponential(grad_width=0.5, grad_scale=1.0), min_v_mem: float = -1.0, bias_rescaling: float = 1.0, batch_size: typing.Optional[int] = None, num_timesteps: typing.Optional[int] = None, synops: bool = False, add_spiking_output: bool = False, backend: str = 'sinabs', kwargs_backend: dict = <factory>)[source]#
Converts a Torch model and returns a Sinabs network object. The modules in the model are analyzed, and a copy is returned, with all ReLUs and NeuromorphicReLUs turned into SpikingLayers.
- Parameters
input_shape (Optional[Tuple[int, int, int]]) – If provided, the layer dimensions are computed. Otherwise they will be computed at the first forward pass.
spike_threshold (float) – The membrane potential threshold for spiking (same for all layers).
spike_fn (Callable) – The spike dynamics that determine the number of output spikes.
reset_fn (Callable) – The reset mechanism of the neuron (e.g. reset to zero, or subtract the spike threshold).
surrogate_grad_fn (Callable) – The surrogate gradient method for the spiking dynamics.
min_v_mem (float) – The lower bound of the membrane potential (same for all layers).
bias_rescaling (float) – Biases are divided by this value.
batch_size (Optional[int]) – Must be provided if num_timesteps is None and is ignored otherwise.
num_timesteps (Optional[int]) – Number of timesteps per sample. If None, batch_size must be provided to separate batch and time dimensions.
synops (bool) – If True (default: False), register hooks for counting synaptic operations during forward passes.
add_spiking_output (bool) – If True (default: False), add a spiking layer to the end of a sequential model if not present.
backend (str) – String defining the simulation backend (currently sinabs or exodus)
kwargs_backend (dict) – Dict with additional kwargs for the simulation backend
- Return type
None
- convert(model: torch.nn.modules.module.Module) → sinabs.network.Network[source]#
Converts the Torch model and returns a Sinabs network object.
- Parameters
model (torch.nn.modules.module.Module) – A torch module to be converted.
- Returns
network – The Sinabs network object created by conversion.
- Return type
sinabs.network.Network
- spike_fn: typing.Callable#