Articles

Machine Learning with SNNs for low-power inference


Presentation at UWA

3rd May, 2024

LIF neuron model

An LIF neuron as a small recurrent unit

AI is extremely power-hungry. But brain-inspired computing units can perform machine learning inference at sub-milliwatt power. Learn about low-power computing architectures from SynSense, which use quantised spiking neurons for ML inference. I presented this slide deck at the UWA Computer Science seminar in Perth.
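To give a flavour of the figure above, here is a minimal sketch of an LIF neuron written as a tiny recurrent update. The leak factor, threshold and subtractive reset are illustrative assumptions on my part, not the SynSense hardware implementation:

```python
import numpy as np

def lif_step(v, x, alpha=0.9, threshold=1.0):
    """One discrete time step of a leaky integrate-and-fire neuron.

    v: membrane potentials; x: weighted input; alpha: leak factor.
    Parameter values here are arbitrary, for illustration only.
    """
    v = alpha * v + x                        # leaky integration: the recurrence
    spikes = (v >= threshold).astype(float)  # emit a spike where threshold is crossed
    v = v - spikes * threshold               # subtractive reset after spiking
    return v, spikes

# Drive a handful of neurons with random input for a few steps
v = np.zeros(4)
for _ in range(10):
    v, s = lif_step(v, np.random.rand(4))
```

The state `v` carried from step to step is what makes the neuron "a small recurrent unit": its output depends on its input history, not just the current input.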

Hands-on with Rockpool and Xylo


Presentation at OpenNeuromorphic

26th April, 2023

Learn how to build and deploy a spiking neural network with Rockpool and Xylo.
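As a taste of what the tutorial covers, here is a rough sketch of defining a small feed-forward SNN in Rockpool. The module names and call signatures (`Linear`, `LIF`, `Sequential`) are assumptions from my reading of the Rockpool docs, so check the current API before relying on them:

```python
# Rough sketch of a small SNN in Rockpool; module names and signatures
# are assumptions -- consult the current Rockpool documentation.
import numpy as np
from rockpool.nn.modules import Linear, LIF
from rockpool.nn.combinators import Sequential

# Two-layer network: 16 input channels -> 32 spiking neurons -> 8 outputs
net = Sequential(
    Linear((16, 32)),
    LIF((32,)),
    Linear((32, 8)),
    LIF((8,)),
)

# Evolve the network over 100 time steps of sparse random input spikes;
# Rockpool modules conventionally return (output, new_state, record_dict)
input_spikes = (np.random.rand(100, 16) < 0.1).astype(float)
output, state, recordings = net(input_spikes)
```

The same network definition can then be trained and mapped onto the Xylo inference processor, which is the workflow the talk walks through.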

Putting the Neural back into Networks


Part 3: I got 99 problems but a spike ain't one

13th January, 2020

Ocean scene

Photo by @tgerz on Unsplash

In the last post, we saw that spike generation can cause major issues with gradient descent, by completely zeroing the gradients. We also learned how to get around this issue by replacing the spiking output with a differentiable surrogate during the backward pass.
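As a preview of the trick the post discusses, here is one common way to implement a surrogate spiking function in PyTorch. This is my own minimal sketch, not necessarily the exact formulation used in the post: the forward pass emits hard spikes, while the backward pass substitutes a smooth derivative so gradients can flow.

```python
import torch

class SurrogateSpike(torch.autograd.Function):
    """Heaviside spike in the forward pass; smooth surrogate gradient backward."""

    @staticmethod
    def forward(ctx, v):
        ctx.save_for_backward(v)
        return (v > 0.0).float()  # hard spikes: true gradient is zero almost everywhere

    @staticmethod
    def backward(ctx, grad_output):
        (v,) = ctx.saved_tensors
        # Derivative of a "fast sigmoid" as a stand-in for the true gradient;
        # the slope constant 10.0 is an arbitrary illustrative choice
        surrogate = 1.0 / (1.0 + 10.0 * v.abs()) ** 2
        return grad_output * surrogate

# Gradients now flow through the spike generation step
v = torch.randn(5, requires_grad=True)
spikes = SurrogateSpike.apply(v)
spikes.sum().backward()
print(v.grad)  # non-zero, despite the step-function forward pass
```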

Putting the Neural back into Networks


Part 2: More spikes, more problems

12th January, 2020

A rock pool

Photo by @silasbaisch on Unsplash

In the last post, we learned how a spiking neuron differs from a common-or-garden Artificial Neuron. TL;DR: spiking neurons understand time and have internal dynamics; Artificial Neurons don't.
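To make that contrast concrete (my own paraphrase in standard textbook notation, not equations from the post): a classic artificial neuron is a memoryless function of its current inputs,

$$ y = \sigma\Big( \sum_i w_i x_i + b \Big), $$

whereas a leaky integrate-and-fire neuron carries a membrane state $V(t)$ that evolves continuously in time and fires on crossing a threshold $\theta$:

$$ \tau \frac{\mathrm{d}V}{\mathrm{d}t} = -V(t) + \sum_i w_i s_i(t), \qquad V(t) \geq \theta \;\Rightarrow\; \text{spike, then reset}. $$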

Putting the Neural back into Networks


Part 1: Why spikes?

11th January, 2020

A rock pool

Photo by @silasbaisch on Unsplash

Not long ago, one of the gods of modern machine learning made a slightly controversial statement. In the final slide of his ISSCC 2019 keynote [1], Yann LeCun [2, 3, 4] (that’s “Mr CNN” to you) said, almost as a throwaway remark, that he was skeptical about the usefulness of spiking neural networks.