Background

Taking inspiration from the brain to make AI more energy efficient

Sander Bohté is a senior researcher in neural computation at CWI, where he was a co-founder of the machine learning group. He also holds a professorship in cognitive neurobiology at the University of Amsterdam (UvA) and a professorship in cognitive computational neuroscience at the University of Groningen (RUG).

Reading time: 5 minutes

Thanks to a mathematical breakthrough achieved at CWI, AI applications like speech recognition, gesture recognition and electrocardiogram (ECG) classification can become a hundred to a thousand times more energy efficient. This makes it possible to embed much more elaborate AI in chips, enabling applications to run locally on a smartphone or smartwatch instead of in the cloud.

Since 2012, the field of artificial intelligence (AI) has made great strides thanks to a technique called deep learning. Deep learning has led to numerous practical applications: Apple uses it in Siri's voice recognition, Facebook uses it to automatically tag photos and Google Translate uses it to translate texts, to name just a few.

Deep learning is based on information processing by large artificial neural networks that have dozens or even hundreds of layers. It doesn't come for free, however. Since 2012, training the largest deep neural networks has become 300,000 times more computationally intensive, with training costs doubling every few months. The training of the text generator GPT-3, for example, which amazed the world in 2020 by writing human-style texts in all kinds of styles, consumed as much energy as 300 Dutch households use in a year (1 GWh), at a cost of 4.1 million euros on the electricity bill.
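As a quick sanity check of the figures quoted above, the 1 GWh energy budget divided over 300 households can be worked out directly. The assumed average annual consumption of roughly 3,000-3,500 kWh per Dutch household is not stated in the article; it is an assumption used only to verify the comparison.

```python
# Back-of-the-envelope check: does 1 GWh correspond to the annual
# consumption of about 300 Dutch households?
gpt3_training_energy_kwh = 1_000_000  # 1 GWh, as quoted in the article
households = 300

per_household_kwh = gpt3_training_energy_kwh / households
print(f"{per_household_kwh:.0f} kWh per household per year")
# About 3,333 kWh, consistent with an assumed average Dutch
# household consumption of roughly 3,000-3,500 kWh per year.
```

The comparison in the article checks out under that assumption about typical household usage.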
