Putting the Neural back into Networks


Part 2: More spikes, more problems

12th January, 2020

A rock pool. Photo by @silasbaisch on Unsplash.

In the last post, we learned how a spiking neuron differs from a common-or-garden artificial neuron. TL;DR: spiking neurons understand time and have internal dynamics; artificial neurons don't.
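That distinction can be sketched in a few lines of code: a leaky integrate-and-fire (LIF) neuron carries a membrane state across time steps, while a standard artificial neuron is a stateless function of its current input. This is a minimal illustrative sketch; the function names and parameter values are my own assumptions, not taken from the post.

```python
import numpy as np

def artificial_neuron(x, w):
    """Stateless: the output depends only on the current input."""
    return np.maximum(0.0, np.dot(w, x))  # ReLU activation

def lif_neuron(inputs, tau=20.0, v_th=1.0, dt=1.0):
    """Leaky integrate-and-fire: the membrane potential v is internal
    state that decays over time and resets after each spike."""
    alpha = np.exp(-dt / tau)   # leak factor per time step
    v, spikes = 0.0, []
    for i in inputs:
        v = alpha * v + i       # leaky integration of input current
        if v >= v_th:           # threshold crossing -> emit a spike
            spikes.append(1)
            v = 0.0             # reset the membrane potential
        else:
            spikes.append(0)
    return spikes

# The same total input, delivered with different timing,
# produces spikes at different times:
print(lif_neuron([0.6, 0.6, 0.0, 0.0]))  # -> [0, 1, 0, 0]
print(lif_neuron([0.6, 0.0, 0.6, 0.0]))  # -> [0, 0, 1, 0]
```

The artificial neuron would return the same value for any permutation of its inputs; the LIF neuron's output depends on *when* input arrives, because its state leaks away between events.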

Brains and Machines podcast

17th May, 2024
SynSense team retreat in Engelberg. Image credit: Dylan Muir.

I had the privilege of speaking with Sunny Bains on the Brains and Machines podcast. Listen in to learn about SynSense's low-power sensory processors, and the state of neuromorphics in 2024.

Machine Learning with SNNs for low-power inference


Presentation at UWA

3rd May, 2024

LIF neuron model: an LIF neuron as a small recurrent unit.

AI is extremely power-hungry. But brain-inspired computing units can perform machine learning inference at sub-milliwatt power. Learn about low-power computing architectures from SynSense, which use quantised spiking neurons for ML inference. I presented this slide deck at the UWA Computer Science seminar in Perth.
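As a rough illustration of the "small recurrent unit" framing above, here is a hypothetical discrete-time LIF update using only integer (quantised) state and shift-based decay, loosely in the spirit of low-power inference hardware. All names, bit-widths, and parameter values are illustrative assumptions of mine, not SynSense's actual design.

```python
def quantised_lif_step(v, x, w, decay_shift=1, v_th=128):
    """One discrete-time step of an LIF 'recurrent unit' using only
    integer arithmetic (no multiplier needed for the leak).

    v : int   membrane state carried between steps (the recurrence)
    x : list  incoming binary spikes (0/1)
    w : list  integer synaptic weights
    """
    v = v - (v >> decay_shift)                  # leak via bit-shift: v *= (1 - 2^-k)
    v += sum(wi for wi, xi in zip(w, x) if xi)  # accumulate weights of active inputs
    if v >= v_th:                               # integer threshold comparison
        return 0, 1                             # reset state, emit a spike
    return v, 0                                 # carry state forward, no spike

# Drive the unit with the same input spikes for a few steps;
# the membrane state builds up until the threshold is crossed:
v, out = 0, []
for _ in range(4):
    v, s = quantised_lif_step(v, [1, 1, 0], [40, 30, 25])
    out.append(s)
print(out)  # -> [0, 0, 0, 1]
```

Because each step only needs additions, a shift, and a comparison, a unit like this maps naturally onto low-power digital hardware, which is one reason quantised spiking neurons are attractive for sub-milliwatt inference.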