Articles

Putting the Neural back into Networks


Part 3: I got 99 problems but a spike ain't one

13th January, 2020

Ocean scene

Photo by @tgerz on Unsplash

In the last post, we saw that spike generation can cause major issues for gradient descent by completely zeroing the gradients. We also learned how to get around this issue by replacing the spiking output with a differentiable surrogate.

Putting the Neural back into Networks


Part 2: More spikes, more problems

12th January, 2020

A rock pool

Photo by @silasbaisch on Unsplash

In the last post, we learned about how a spiking neuron differs from a common-or-garden Artificial Neuron. TL;DR: Spiking neurons understand time and have internal dynamics, Artificial Neurons don’t.

Putting the Neural back into Networks


Part 1: Why spikes?

11th January, 2020

A rock pool

Photo by @silasbaisch on Unsplash

Not long ago, one of the gods of modern machine learning made a slightly controversial statement. In the final slide of his ISSCC 2019 keynote [1], Yann LeCun [2, 3, 4] (that’s “Mr CNN” to you) remarked, almost as a throwaway, that he was skeptical about the usefulness of spiking neural networks.

Schroedinger's importer

27th August, 2018

A Python package I'm working on combines several submodules with mixed licensing — some will be open source and redistributed, others will be proprietary and in-house only. I wanted to ease importing of the package by automatically detecting which submodules are present, and dynamically importing only those.
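The idea can be sketched with the standard library's `importlib`: probe for each optional submodule, and import only the ones that are actually installed. This is a minimal illustration, not the package's actual code; the submodule names below are stand-ins (stdlib modules so the sketch is runnable), where a real package would probe names like `f"{__name__}.{name}"` from its `__init__.py`.

```python
import importlib
import importlib.util

# Hypothetical list of optional submodules; stdlib names used as stand-ins
# so this sketch runs anywhere. One of them deliberately does not exist.
OPTIONAL_SUBMODULES = ["json", "csv", "proprietary_inhouse_module"]

available = {}
for name in OPTIONAL_SUBMODULES:
    # find_spec() checks whether the module can be found without importing it
    if importlib.util.find_spec(name) is not None:
        available[name] = importlib.import_module(name)

print(sorted(available))  # only the submodules actually present
```

Missing submodules are simply skipped rather than raising `ImportError`, so the same `__init__.py` works in both the open-source and in-house distributions.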

Visualised behaviour of dynamic neural networks

27th July, 2017
Behaviour of a random two-neuron dynamic neural network.

The dynamics of simple nonlinear systems can be dramatically beautiful. I visualised a large number of randomly-generated systems as an exploration.