ExpLeak#
Exponential leaky layer that acts as a low-pass filter.
- class sinabs.layers.ExpLeak(tau_mem: Union[float, torch.Tensor], shape: Optional[torch.Size] = None, train_alphas: bool = False, min_v_mem: Optional[float] = None, norm_input: bool = False, record_states: bool = False)#
A Leaky Integrator layer.
Neuron dynamics in discrete time:
\[V_{mem}(t+1) = \alpha V_{mem}(t) + (1-\alpha)\sum z(t)\]
where \(\alpha = e^{-1/\tau_{mem}}\) and \(\sum z(t)\) represents the sum of all input currents at time \(t\).
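The recurrence above can be sketched in plain Python (a minimal illustration of the dynamics, not the library's implementation; the function name `exp_leak` is hypothetical):

```python
import math

def exp_leak(inputs, tau_mem, v0=0.0):
    """Discrete leaky integrator: V(t+1) = alpha * V(t) + (1 - alpha) * z(t),
    with alpha = exp(-1 / tau_mem). Returns the membrane trace over time."""
    alpha = math.exp(-1.0 / tau_mem)
    v, trace = v0, []
    for z in inputs:
        v = alpha * v + (1.0 - alpha) * z
        trace.append(v)
    return trace

# A constant input of 1.0 is low-pass filtered: the membrane potential
# rises smoothly from 0 toward 1 instead of jumping there immediately.
trace = exp_leak([1.0] * 50, tau_mem=10.0)
```

With zero input the state simply decays by a factor of \(\alpha\) per step, which is what makes the layer a low-pass filter.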
- Parameters
tau_mem (float) – Membrane potential time constant.
min_v_mem (float or None) – Lower bound for membrane potential v_mem, clipped at every time step.
train_alphas (bool) – When True, the discrete decay factor exp(-1/tau) is used for training rather than tau itself.
shape (torch.Size) – Optionally initialise the layer state with given shape. If None, will be inferred from input_size.
norm_input (bool) – When True, normalise input current by tau. This helps when training time constants.
record_states (bool) – When True, will record all internal states such as v_mem or i_syn in a dictionary attribute recordings. Default is False.
ExpLeakSqueeze#
- class sinabs.layers.ExpLeakSqueeze(batch_size=None, num_timesteps=None, **kwargs)#
Same as the parent ExpLeak class, but takes squeezed 4D input of shape (Batch*Time, Channel, Height, Width) instead of 5D input (Batch, Time, Channel, Height, Width), for compatibility with layers that only accept 4D input, such as convolutional and pooling layers.
- forward(input_data: torch.Tensor) → torch.Tensor#
Forward pass with given data.
- Parameters
input_data (torch.Tensor) – Data to be processed. Expected shape: (batch, time, …)
- Returns
torch.Tensor – Output data. Same shape as input_data.
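The squeezed layout can be illustrated with a plain reshape (numpy is used here purely to demonstrate the shape bookkeeping; inside a network, ExpLeakSqueeze needs batch_size or num_timesteps to undo the flattening):

```python
import numpy as np

batch, time, c, h, w = 2, 5, 3, 8, 8
x5 = np.arange(batch * time * c * h * w, dtype=float).reshape(batch, time, c, h, w)

# Flatten batch and time together so 4D-only layers (e.g. Conv2d, pooling)
# can process every time step as if it were an independent sample:
x4 = x5.reshape(batch * time, c, h, w)

# ...and restore the 5D layout afterwards; no data is reordered.
x5_back = x4.reshape(batch, time, c, h, w)
```

This is why ExpLeakSqueeze must know either batch_size or num_timesteps: from a 4D tensor alone, the Batch*Time dimension cannot be split back into its two factors.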