layers

Spiking

IAF(spike_threshold, spike_fn, reset_fn, ...)

Integrate and Fire neuron layer.
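
A minimal usage sketch, assuming a recent sinabs version where spiking layers take input shaped (batch, time, ...); the sizes and threshold value are illustrative:

    import torch
    from sinabs.layers import IAF

    layer = IAF(spike_threshold=1.0)   # spike once the membrane potential crosses 1.0
    x = torch.rand(2, 10, 16)          # assumed layout: (batch, time, features)
    spikes = layer(x)                  # spike raster with the same shape as x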

IAFSqueeze([batch_size, num_timesteps])

IAF layer with 4-dimensional input (Batch*Time, Channel, Height, Width).
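
A sketch of the Squeeze variant, which consumes input whose batch and time dimensions are already flattened together and therefore needs to be told one of the two sizes:

    import torch
    from sinabs.layers import IAFSqueeze

    layer = IAFSqueeze(batch_size=2)   # alternatively pass num_timesteps
    x = torch.rand(2 * 10, 3, 8, 8)    # (Batch*Time, Channel, Height, Width)
    spikes = layer(x)                  # output keeps the flattened layout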

IAFRecurrent(rec_connect, spike_threshold, ...)

Integrate and Fire neuron layer with recurrent connections.
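
Here rec_connect is a torch module that feeds the layer's output spikes back onto its input at the next time step. A sketch assuming a dense recurrent kernel; the layer width is illustrative:

    import torch
    import torch.nn as nn
    from sinabs.layers import IAFRecurrent

    rec = nn.Linear(16, 16, bias=False)    # recurrent weights: output spikes -> input current
    layer = IAFRecurrent(rec_connect=rec)
    x = torch.rand(2, 10, 16)              # (batch, time, features)
    spikes = layer(x)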

LIF(tau_mem, tau_syn, ...)

Leaky Integrate and Fire neuron layer.
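
A sketch with only the mandatory membrane time constant; the Squeeze and Recurrent variants below follow the same patterns as their IAF counterparts above. That tau_mem is expressed in time steps is an assumption:

    import torch
    from sinabs.layers import LIF

    layer = LIF(tau_mem=20.0)    # membrane time constant, assumed to be in time steps
    x = torch.rand(2, 10, 16)    # (batch, time, features)
    spikes = layer(x)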

LIFSqueeze([batch_size, num_timesteps])

LIF layer with 4-dimensional input (Batch*Time, Channel, Height, Width).

LIFRecurrent(tau_mem, ...)

Leaky Integrate and Fire neuron layer with recurrent connections.

ALIF(tau_mem, tau_adapt, ...)

Adaptive Leaky Integrate and Fire neuron layer.

ALIFRecurrent(tau_mem, ...)

Adaptive Leaky Integrate and Fire neuron layer with recurrent connections.
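
ALIF adds an adaptive threshold: spiking raises the threshold, which then decays back with tau_adapt. A sketch with illustrative constants:

    import torch
    from sinabs.layers import ALIF

    layer = ALIF(tau_mem=20.0, tau_adapt=40.0)   # adaptation decays slower than the membrane here
    x = torch.rand(2, 10, 16)                    # (batch, time, features)
    spikes = layer(x)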

Non-spiking

ExpLeak(tau_mem[, shape, train_alphas, ...])

Leaky Integrator layer.

ExpLeakSqueeze([batch_size, num_timesteps])

ExpLeak layer with 4-dimensional input (Batch*Time, Channel, Height, Width).
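
ExpLeak leak-integrates its input and returns the membrane state itself rather than spikes, which makes it usable as a learnable low-pass filter. A minimal sketch:

    import torch
    from sinabs.layers import ExpLeak

    layer = ExpLeak(tau_mem=10.0)   # decay time constant, assumed to be in time steps
    x = torch.rand(2, 10, 16)       # (batch, time, features)
    v_mem = layer(x)                # leaky-integrated (low-pass filtered) signal, no spikes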

Pooling

SpikingMaxPooling2dLayer(pool_size[, ...])

Torch implementation of SpikingMaxPooling.

SumPool2d(kernel_size[, stride, ceil_mode])

Non-spiking sum-pooling layer to be used in analog Torch models.
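
SumPool2d adds up the values in each window rather than averaging them, so total spike counts are preserved through the pooling stage; numerically it behaves like AvgPool2d scaled by the window area. A sketch:

    import torch
    from sinabs.layers import SumPool2d

    pool = SumPool2d(kernel_size=2)   # stride assumed to default to kernel_size
    x = torch.ones(1, 1, 4, 4)
    y = pool(x)                       # every output element sums a 2x2 window, giving 4.0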

Conversion from images / analog signals

Img2SpikeLayer(image_shape[, tw, max_rate, ...])

Layer to convert images to spikes.

Sig2SpikeLayer(channels_in[, tw, ...])

Layer to convert analog signals to spikes.
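
A sketch of rate-coding an image into a spike train; Sig2SpikeLayer plays the analogous role for analog channel data. That the layer emits tw time steps of spikes at rates proportional to pixel intensity, capped by max_rate, is an assumption based on the parameter names listed above:

    import torch
    from sinabs.layers import Img2SpikeLayer

    to_spikes = Img2SpikeLayer(image_shape=(1, 28, 28), tw=100, max_rate=1000.0)
    img = torch.rand(1, 28, 28)
    spike_train = to_spikes(img)   # assumed output: tw time steps of spikes per pixel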

Parent layers

StatefulLayer(state_names)

PyTorch implementation of a stateful layer, to be used as a base class.

SqueezeMixin()

Utility mixin class that wraps the __init__ and forward calls of other classes.
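
StatefulLayer keeps named state tensors alive across forward calls; SqueezeMixin is what gives the *Squeeze variants their flattened-input handling. A conceptual subclass sketch; the helper methods used here (is_state_initialised, init_state_with_shape) exist in recent sinabs versions, but treat the exact names as assumptions for your version:

    import torch
    from sinabs.layers import StatefulLayer

    class Accumulator(StatefulLayer):
        """Toy layer that sums everything it has ever seen."""

        def __init__(self):
            super().__init__(state_names=["v_mem"])   # declare one named state tensor

        def forward(self, x):
            if not self.is_state_initialised():       # assumed helper on StatefulLayer
                self.init_state_with_shape(x.shape)   # assumed helper on StatefulLayer
            self.v_mem = self.v_mem + x               # state persists between calls
            return self.v_mem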

Auxiliary

Cropping2dLayer([cropping])

Crop the input image by a given number of pixels on each side.
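
A sketch; that the cropping argument takes ((top, bottom), (left, right)) pixel counts is an assumption:

    import torch
    from sinabs.layers import Cropping2dLayer

    crop = Cropping2dLayer(cropping=((1, 1), (2, 2)))   # assumed ((top, bottom), (left, right))
    x = torch.rand(1, 3, 10, 10)
    y = crop(x)                                         # expected shape: (1, 3, 8, 6)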

FlattenTime()

Utility layer which always flattens the first two dimensions (batch and time) into one.

UnflattenTime(batch_size)

Utility layer which always unflattens (expands) the first dimension into two separate ones.
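
These two are typically used to sandwich time-agnostic layers (e.g. convolutions) inside a spiking model: merge batch and time, apply the layer, then expand again. A round-trip sketch:

    import torch
    from sinabs.layers import FlattenTime, UnflattenTime

    x = torch.rand(2, 10, 3, 8, 8)               # (batch, time, channel, height, width)
    flat = FlattenTime()(x)                      # -> (20, 3, 8, 8)
    back = UnflattenTime(batch_size=2)(flat)     # -> (2, 10, 3, 8, 8)
    assert back.shape == x.shape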

ANN layers

NeuromorphicReLU([quantize, fanout, ...])

NeuromorphicReLU layer.
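
NeuromorphicReLU is typically used when training an ANN that will later be converted to an SNN: with quantize=True the activations are floored to integers so they behave like spike counts, and fanout weights the activity when estimating synaptic operations. The semantics beyond the listed parameter names are assumptions; a sketch:

    import torch
    from sinabs.layers import NeuromorphicReLU

    relu = NeuromorphicReLU(quantize=True, fanout=1)
    x = torch.tensor([-0.5, 0.4, 1.7, 3.2])
    y = relu(x)   # expected: tensor([0., 0., 1., 3.]) -- ReLU, then floor (assumed)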

QuantizeLayer([quantize])

Layer that quantizes the input, i.e. returns floor(input).
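
The summary pins the forward behaviour down to floor(input); a sketch (whether a straight-through gradient is used in the backward pass is version-dependent and assumed here):

    import torch
    from sinabs.layers import QuantizeLayer

    q = QuantizeLayer(quantize=True)
    x = torch.tensor([0.2, 1.9, -0.7])
    y = q(x)   # floor(input): tensor([0., 1., -1.])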