AlphaZero comes from DeepMind Technologies, a subsidiary of Alphabet, Google’s parent company. It can tackle not only chess but also shogi and Go, games that are at least as difficult, if not more challenging.
AlphaZero builds on years of research, succeeding last year’s AlphaGo Zero, the world’s strongest Go player. As with its predecessor, there was no human help involved: AlphaZero taught itself how to play from scratch.
The AI studied each of the three games using a neural network, a computing structure loosely inspired by the neurons in our brains: the computer takes in information and passes it through layers of simple calculations, something like one very complex equation. AlphaZero trained for nine hours on chess, twelve hours on shogi, and 13 days on Go, playing millions of games against itself and refining its judgment of the same positions over and over again. And it worked.
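To make the idea of learning through self-play concrete, here is a toy sketch in Python. This is not DeepMind’s actual method (which pairs a deep neural network with tree search on specialized hardware); it is a tabular stand-in on a made-up Nim-style game, where two players alternately remove one to three stones and whoever takes the last stone wins. The program plays against itself and gradually learns which moves win.

```python
import random

# Illustrative sketch only, NOT AlphaZero's real algorithm: a tiny
# self-play learner for a Nim-like game (take 1-3 stones; taking
# the last stone wins). All names here are hypothetical.

def train_self_play(pile=10, episodes=5000, eps=0.1, seed=0):
    rng = random.Random(seed)
    value = {}  # value[(stones_left, move)] -> estimated win rate for the mover
    for _ in range(episodes):
        history = []  # (state, move) for each turn, in order played
        stones = pile
        while stones > 0:
            moves = [m for m in (1, 2, 3) if m <= stones]
            if rng.random() < eps:
                move = rng.choice(moves)  # occasionally explore a random move
            else:
                move = max(moves, key=lambda m: value.get((stones, m), 0.5))
            history.append((stones, move))
            stones -= move
        # The player who made the last move won; walking backward through
        # the game, outcomes alternate between winner (1.0) and loser (0.0).
        for i, (s, m) in enumerate(reversed(history)):
            outcome = 1.0 if i % 2 == 0 else 0.0
            old = value.get((s, m), 0.5)
            value[(s, m)] = old + 0.1 * (outcome - old)  # nudge toward result

    return value

def best_move(value, stones):
    """Greedy move choice from the learned table."""
    moves = [m for m in (1, 2, 3) if m <= stones]
    return max(moves, key=lambda m: value.get((stones, m), 0.5))
```

After training, the table reliably learns the obvious endgame moves, such as taking all remaining stones to win immediately. AlphaZero does something analogous at vastly greater scale, replacing the lookup table with a deep neural network that generalizes across billions of possible positions.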
The hardware behind AlphaZero is intense (think a Mac Pro on steroids). It used 5,000 of Google’s tensor processing units, or TPUs, just to generate its training games. These chips are built specifically for AI and neural-network workloads; Google Photos employs them for the AI features within the app.
All of this shows how quickly computers are advancing. With neural-net AI inside, machines that make complex decisions on their own aren’t far off.