ExpLeak#

class sinabs.layers.ExpLeak(tau_mem: Union[float, torch.Tensor], shape: Optional[torch.Size] = None, train_alphas: bool = False, min_v_mem: Optional[float] = None, norm_input: bool = False, record_states: bool = False)[source]#

Leaky integrator layer, which is a special case of a LIF layer without an activation function.

Neuron dynamics in discrete time:

\[V_{mem}(t+1) = \alpha V_{mem}(t) + (1-\alpha)\sum z(t)\]

where \(\alpha = e^{-1/\tau_{mem}}\) and \(\sum z(t)\) represents the sum of all input currents at time \(t\).

Parameters
  • tau_mem (Union[float, torch.Tensor]) – Membrane potential time constant.

  • min_v_mem (Optional[float]) – Lower bound for membrane potential v_mem, clipped at every time step.

  • train_alphas (bool) – When True, the discrete decay factor exp(-1/tau) is used for training rather than tau itself.

  • shape (Optional[torch.Size]) – Optionally initialise the layer state with given shape. If None, will be inferred from input_size.

  • norm_input (bool) – When True, normalise input current by tau. This helps when training time constants.

  • record_states (bool) – When True, will record all internal states such as v_mem or i_syn in a dictionary attribute recordings. Default is False.
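The membrane dynamics above can be sketched in plain Python. This is a minimal illustration of the update rule, not the sinabs implementation; the helper name `exp_leak_step` is hypothetical:

```python
import math

def exp_leak_step(v_mem: float, z_sum: float, tau_mem: float) -> float:
    """One discrete-time leaky-integrator update:
    V(t+1) = alpha * V(t) + (1 - alpha) * z(t), with alpha = exp(-1/tau_mem)."""
    alpha = math.exp(-1.0 / tau_mem)
    return alpha * v_mem + (1.0 - alpha) * z_sum

# With a constant input current of 1.0, v_mem decays toward that input value.
v = 0.0
for _ in range(100):
    v = exp_leak_step(v, 1.0, tau_mem=10.0)
```

Because \(0 < \alpha < 1\), the state is a running exponential average of the input: with no input it decays by a factor of \(\alpha\) per step, and under constant input it converges to that input value.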