Putting the Neural back into Networks


Part 2: More spikes, more problems

12th January, 2020

A rock pool

Photo by @silasbaisch on Unsplash

In the last post, we learned how a spiking neuron differs from a common-or-garden Artificial Neuron. TL;DR: spiking neurons understand time and have internal dynamics; Artificial Neurons don't.
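To make the "internal dynamics" point concrete, here is a minimal sketch of a leaky integrate-and-fire (LIF) neuron in plain NumPy. The function name and parameter values are illustrative choices for this sketch, not taken from the post. The membrane potential v is state that persists across time steps, which is exactly what a standard Artificial Neuron lacks.

```python
import numpy as np

# Membrane potential v is internal state that persists between time steps;
# this is the "internal dynamics" a standard Artificial Neuron doesn't have.
def lif_step(v, input_current, tau=20.0, v_thresh=1.0, dt=1.0):
    v = v + (dt / tau) * (-v + input_current)  # leaky integration towards the input
    spike = v >= v_thresh                      # emit a spike on crossing threshold
    v = np.where(spike, 0.0, v)                # reset the membrane after a spike
    return v, spike

v = np.zeros(1)
for t in range(100):
    v, spike = lif_step(v, input_current=1.5)  # constant drive above threshold
    if spike.any():
        print(f"spike at t = {t}")
```

Run it and you get a regular spike train: the neuron charges up, fires, resets, and charges again, all driven by a constant input.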

Machine Learning with SNNs for low-power inference


Presentation at UWA

3rd May, 2024

LIF neuron model

An LIF neuron as a small recurrent unit

AI is extremely power-hungry. But brain-inspired computing units can perform machine learning inference at sub-milliwatt power. Learn about low-power computing architectures from SynSense, which use quantised spiking neurons for ML inference. I presented this slide deck at the UWA Computer Science seminar in Perth.
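As a rough illustration of the quantisation idea, here is a sketch of an integer-arithmetic LIF update of the kind used in low-power neuromorphic inference. The leak-as-bit-shift, the threshold value, and the reset behaviour are all assumptions made for this sketch; they are not SynSense's actual hardware design.

```python
import numpy as np

# Illustrative only: an integer LIF update in the spirit of quantised
# neuromorphic inference. The leak shift, threshold and hard reset are
# assumptions for this sketch, not SynSense's actual design.
def quantised_lif_step(v, syn_input, v_thresh=128, leak_shift=4):
    v = v + syn_input - (v >> leak_shift)  # leak implemented as a right shift
    spike = v >= v_thresh                  # fire on crossing the integer threshold
    v = np.where(spike, 0, v)              # hard reset to zero
    return v, spike

rng = np.random.default_rng(0)
v = np.zeros(4, dtype=np.int32)
for x in rng.integers(0, 32, size=(50, 4)):  # random integer input events
    v, spike = quantised_lif_step(v, x)
```

Keeping everything in integers means no multipliers or floating-point units are needed for the state update, which is one reason quantised spiking hardware can run at such low power.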

Putting the Neural back into Networks


Part 3: I got 99 problems but a spike ain't one

13th January, 2020

Ocean scene

Photo by @tgerz on Unsplash

In the last post, we saw that spike generation can cause major problems for gradient descent by zeroing the gradients entirely. We also learned how to get around this issue by faking the neuron output with a differentiable surrogate.
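As a reminder of how that trick looks in code, here is a minimal surrogate-gradient sketch, assuming PyTorch. The forward pass emits a hard spike, exactly as the neuron behaves; the backward pass swaps in a smooth derivative (the fast-sigmoid shape used by SuperSpike, picked here as one common choice) so the gradient is no longer zero almost everywhere.

```python
import torch

# Sketch of a surrogate gradient, assuming PyTorch. Forward: a hard
# threshold, just like the spiking neuron. Backward: the zero-almost-
# everywhere derivative is replaced by a smooth fast-sigmoid surrogate.
class SurrogateSpike(torch.autograd.Function):
    @staticmethod
    def forward(ctx, v):
        ctx.save_for_backward(v)
        return (v >= 0.0).float()  # spike when membrane potential crosses zero

    @staticmethod
    def backward(ctx, grad_output):
        (v,) = ctx.saved_tensors
        beta = 10.0  # steepness of the surrogate (an assumed value)
        return grad_output / (beta * v.abs() + 1.0) ** 2

spike_fn = SurrogateSpike.apply

v = torch.randn(5, requires_grad=True)
spike_fn(v).sum().backward()
print(v.grad)  # non-zero gradients despite the hard threshold
```

The forward output is still a true spike, so the network behaves like an SNN at inference time; only the backward pass is "faked", which is what lets gradient descent keep working.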