The AI arms race spawns new hardware architectures
As society turns to artificial intelligence to solve problems across ever more domains, we’re seeing an arms race to create specialized hardware that can run deep learning models at higher speeds and lower power consumption.
Some recent breakthroughs in this race include new chip architectures that perform computations in ways that are fundamentally different from what we’ve seen before. Looking at their capabilities gives us an idea of the kinds of AI applications we could see emerging over the next couple of years.
Neural networks are key to deep learning. They are composed of thousands or millions of small units that each perform a simple calculation; together, these units accomplish complicated tasks such as detecting objects in images or converting speech to text.
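To make the "simple calculation" concrete, here is a minimal sketch of a single artificial neuron and a layer built from many of them. The function names and numbers are illustrative, not taken from any particular framework:

```python
import math

def neuron(inputs, weights, bias):
    # The simple calculation: a weighted sum of the inputs plus a bias...
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    # ...passed through a nonlinear activation (here, a sigmoid
    # that squashes the result into the range 0 to 1).
    return 1 / (1 + math.exp(-total))

def layer(inputs, weight_rows, biases):
    # A layer is just many neurons applied to the same inputs;
    # deep networks stack layers so each layer feeds the next.
    return [neuron(inputs, w, b) for w, b in zip(weight_rows, biases)]

# One neuron looking at two inputs with example weights:
print(neuron([1.0, 0.5], [0.4, -0.2], 0.1))
```

On a conventional CPU, each of these weighted sums is computed one after another (or in small batches); the sheer number of them is what makes specialized hardware attractive.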
But traditional computers are not optimized for neural network operations. Instead, they are built around one or a few powerful central processing units (CPUs). Neuromorphic computers use an alternative chip architecture to physically represent neural networks: their chips contain many physical artificial neurons that directly correspond to their software counterparts. This makes them especially fast at training and running neural networks.
The concept behind neuromorphic computing has existed since the 1980s, but it did not get much attention because neural networks were mostly dismissed as too inefficient. With renewed interest in deep learning and neural networks in the past few years, research on neuromorphic chips has also received new attention.