How is Sinabas different?

There are many SNN simulators out there. What does Sinabs do differently? Sinabs is meant to extend PyTorch by adding stateful, spiking layers, which can then take full advantage of PyTorch's optimised, gradient-based training machinery. You might find that the design of our sequential models differs slightly from that of recurrent layers in PyTorch: we do not pass the state in as an input or return it as an output. We do this so that our layers remain compatible with the nn.Sequential architecture, without having to manually define the flow of tensors between layers.
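To illustrate the pattern (this is a toy sketch in plain PyTorch, not the actual Sinabs implementation, and `ToySpikingLayer` is a hypothetical name), a layer can keep its membrane state as an internal attribute instead of passing it in and out, which is what lets it slot directly into `nn.Sequential`:

```python
import torch
import torch.nn as nn


class ToySpikingLayer(nn.Module):
    """Illustrative stateful spiking layer (not the Sinabs implementation).

    The membrane potential is stored on the module itself, so forward()
    takes only the input tensor -- unlike PyTorch RNN layers, which pass
    hidden state in and out explicitly.
    """

    def __init__(self, threshold: float = 1.0):
        super().__init__()
        self.threshold = threshold
        self.v_mem = None  # internal state, never part of the call signature

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        if self.v_mem is None:
            self.v_mem = torch.zeros_like(x)
        # Integrate input, emit a spike where the threshold is crossed,
        # then subtract the threshold from the membrane potential.
        self.v_mem = self.v_mem + x
        spikes = (self.v_mem >= self.threshold).float()
        self.v_mem = self.v_mem - spikes * self.threshold
        return spikes


# Because state is internal, the layer composes like any other module:
model = nn.Sequential(nn.Linear(4, 4, bias=False), ToySpikingLayer())
```

Note that the hard threshold above has zero gradient almost everywhere; in practice spiking frameworks substitute a surrogate gradient for the spike function during backpropagation, which this sketch omits.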

What is the difference between Rockpool and Sinabs?

Rockpool provides multiple computational backends, such as Jax, NEST, or PyTorch, and wraps its own API around them. That allows for powerful abstractions and many additional features, such as graph tracing, support for continuous-time systems, and neuromorphic hardware. Sinabs, on the other hand, focuses on simplicity: built exclusively on PyTorch, it is meant to be a thin layer that adds the spiking functionality PyTorch lacks.

Traditionally, Sinabs has targeted SynSense's convolutional neuromorphic hardware (Dynap-CNN), while Rockpool focuses on SynSense's hardware for lower-dimensional signals (Xylo). You can read about both hardware architectures here. Because Dynap-CNN targets vision models, which often have strong spatial dependencies, Sinabs comes with built-in weight-transfer functionality that converts a pre-trained ANN into an SNN. Rockpool, on the other hand, adds support for analog audio frontends and exact hardware simulation in software.
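The core idea behind ANN-to-SNN weight transfer can be sketched as follows (a toy illustration, not the Sinabs conversion API: `ann_to_snn` and `IAF` here are hypothetical names): the trained weights of the convolutional or linear layers are kept as-is, and each ReLU activation is swapped for an integrate-and-fire unit, whose firing rate approximates the ReLU output.

```python
import copy

import torch
import torch.nn as nn


class IAF(nn.Module):
    """Minimal integrate-and-fire unit (illustrative only)."""

    def __init__(self, threshold: float = 1.0):
        super().__init__()
        self.threshold = threshold
        self.v_mem = None

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        if self.v_mem is None:
            self.v_mem = torch.zeros_like(x)
        self.v_mem = self.v_mem + x
        spikes = (self.v_mem >= self.threshold).float()
        self.v_mem = self.v_mem - spikes * self.threshold
        return spikes


def ann_to_snn(ann: nn.Module) -> nn.Module:
    """Toy ANN-to-SNN conversion: keep the trained weights, replace each
    ReLU with a spiking unit. Real converters also rescale weights so
    that firing rates match the original activations."""
    snn = copy.deepcopy(ann)
    for name, module in snn.named_children():
        if isinstance(module, nn.ReLU):
            setattr(snn, name, IAF())
    return snn


# Example: a trained (here, randomly initialised) ANN and its SNN twin.
ann = nn.Sequential(nn.Linear(2, 2), nn.ReLU())
snn = ann_to_snn(ann)
```

The weights of `snn` are identical to those of `ann`; only the activation functions differ, which is what makes the conversion cheap compared to training an SNN from scratch.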