Why is solving for matrix multiplication efficiency such a big deal in computing

You have probably come across matrix multiplication in school textbooks. But did you know how important it is in almost every aspect of our daily lives, from processing images on our phones and recognizing speech commands to rendering graphics in computer games?

It’s at the heart of almost everything computational.

With AlphaTensor, an artificial intelligence system from DeepMind, researchers have shed new light on a 50-year-old mathematical question: what is the fastest way to multiply two matrices?

AlphaTensor has discovered new, more efficient algorithms for many sizes of matrices, DeepMind said in a statement.

AlphaTensor is an extension of AlphaZero, a single system that mastered board games (chess, Go, and shogi) from scratch without human input. The research shows that AlphaZero’s approach is powerful enough to reach beyond traditional games and help solve open problems in mathematics.

The problem at hand

Matrix multiplication is one of the simplest operations in mathematics, but it becomes demanding at the scale of the digital world. Almost anything solved numerically – from weather forecasting to data compression – relies on matrices. For example, you can read this article on your screen because its pixels are represented as a grid of numbers, updated with new information faster than your eyes can track.
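To see where the cost comes from, here is a minimal Python sketch (the function name is my own, for illustration) of the schoolbook method, which uses n × n × n scalar multiplications for two n×n matrices – 64 when n = 4:

```python
# Schoolbook matrix multiplication: every entry of the result is a dot
# product of a row of A with a column of B, so n*n*n = n^3 scalar
# multiplications in total.
def matmul_naive(A, B):
    n = len(A)
    mults = 0
    C = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            for k in range(n):
                C[i][j] += A[i][k] * B[k][j]
                mults += 1
    return C, mults

A = [[1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 11, 12], [13, 14, 15, 16]]
I = [[1 if i == j else 0 for j in range(4)] for i in range(4)]  # identity
C, mults = matmul_naive(A, I)
print(mults)  # 64 scalar multiplications for a pair of 4x4 matrices
```

Faster algorithms win by trading some of those multiplications for extra additions, which are much cheaper on modern hardware.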

Despite its ubiquity, the computation is still not well understood. No one knows the fastest way to carry it out, because there are infinitely many possible algorithms.

DeepMind Game Plan

Breakthroughs in machine learning have helped researchers with everything from creating art to predicting protein structures. Increasingly, researchers are also turning these algorithms on the tools of research itself, using AI to find and correct flaws in existing methods.

DeepMind’s researchers played to their strength: turning AI systems into game champions.

The team tackled the matrix multiplication problem by turning it into a 3D single-player board game called TensorGame. The game is extremely challenging because the number of possible algorithms, even for small cases of matrix multiplication, is greater than the number of atoms in the universe.

The 3D board represents the multiplication problem and each movement represents the next step in solving it. Thus, the series of movements made in the game is an algorithm.

To play the game, the researchers trained a new version of AlphaZero called AlphaTensor. Instead of learning the best moves in Go or chess, the system learned the best moves for multiplying matrices. Using DeepMind’s favorite technique, reinforcement learning, the system was rewarded for winning the game in the fewest moves possible.

The AI system discovered a way to multiply two 4×4 matrices (in modular arithmetic) using only 47 multiplications, instead of the 64 it would take if you painstakingly multiplied each row by each column in the standard way. That is also two multiplications fewer than the 49 obtained by applying Volker Strassen’s 1969 method, which had held the record for over 50 years.
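For context, Strassen’s record came from a clever 2×2 recipe that uses seven multiplications instead of eight; applied recursively to the four 2×2 blocks of a 4×4 matrix, it yields 7 × 7 = 49 multiplications. A minimal Python sketch of the 2×2 step (the function name is my own):

```python
# Strassen's 1969 trick at the 2x2 level: seven scalar multiplications
# instead of the eight the schoolbook method needs.
def strassen_2x2(A, B):
    (a11, a12), (a21, a22) = A
    (b11, b12), (b21, b22) = B
    # Seven carefully chosen products of sums/differences of entries...
    m1 = (a11 + a22) * (b11 + b22)
    m2 = (a21 + a22) * b11
    m3 = a11 * (b12 - b22)
    m4 = a22 * (b21 - b11)
    m5 = (a11 + a12) * b22
    m6 = (a21 - a11) * (b11 + b12)
    m7 = (a12 - a22) * (b21 + b22)
    # ...recombined using only additions and subtractions.
    return [[m1 + m4 - m5 + m7, m3 + m5],
            [m2 + m4, m1 - m2 + m3 + m6]]

print(strassen_2x2([[1, 2], [3, 4]], [[5, 6], [7, 8]]))
# [[19, 22], [43, 50]] -- same result as the schoolbook method
```

AlphaTensor’s discoveries are recipes of the same flavor, but found by search rather than by human ingenuity.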

What awaits us?

This discovery can boost some computation speeds by up to 20% on hardware such as the Nvidia V100 graphics processing unit (GPU) and Google’s tensor processing unit (TPU) v2, though there is no guarantee these gains will carry over to a smartphone or laptop.

DeepMind now plans to use AlphaTensor to research other types of algorithms. “While we may be able to push the boundaries a little further with this computational approach,” said Grey Ballard, a computer scientist at Wake Forest University in Winston-Salem, North Carolina, “I’m excited for theorists to begin analyzing the new algorithms it has discovered to find clues about where to look for the next breakthrough.”
