The AI Chip Arms Race: Will Google, Apple, Microsoft, or Amazon Kill The CPU?

Microprocessor chips are often likened to the “brain” of a computer. The two main types, CPUs and GPUs, are required to run software, render images, and execute tasks that involve many parallel processes, such as machine learning or data analysis. GPUs currently power the vast majority of AI workloads, and the dominant player in the space, Nvidia, has held that position for roughly the past five years. NewtonX interviews with executives at tech giants including Amazon and Microsoft, however, revealed that incumbents are not content to let Nvidia dominate the chip market, and a new arms race between Google, Amazon, Apple, and Microsoft has begun. The NewtonX interview series on the subject elucidated how the competition is playing out, and what each company hopes to gain from winning the new chip arms race.

The insights from this article are sourced from NewtonX surveys, panels, and expert consultations. To gain access to these services, visit newtonx.com.

What’s In A Chip? GPUs, CPUs, and TPUs Explained

CPUs (Central Processing Units) have been around for as long as personal computers have. CPUs are not synonymous with computers, however; rather, they are the brain of the computer. A CPU sits on a computer’s motherboard and is built by placing billions of microscopic transistors onto a single chip. Those transistors allow the chip to make the calculations it needs to run programs stored in the computer’s memory. Over time, these transistors have become smaller and smaller, resulting in increasingly fast CPU speeds.

GPUs (Graphics Processing Units) are a newer technology: Nvidia released the first GPU, the GeForce 256, in 1999. Essentially, GPUs allow for quick image rendering, offloading the burden of graphically intense applications from the CPU. GPUs have since become an essential component of AI computing, because the same massively parallel architecture that renders graphics in real time also suits the matrix math at the heart of machine learning. In fact, the Tianhe-1A in China, once the world’s fastest supercomputer, is a hybrid of CPUs and Nvidia GPUs.
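To make that distinction concrete, the sketch below offloads the same matrix multiplication from the CPU to a GPU. It is purely illustrative and assumes the PyTorch library and an Nvidia CUDA-capable GPU, neither of which is specified by the reporting above.

    import torch

    # Two large matrices; multiplying them is the kind of highly parallel
    # arithmetic that underlies both graphics rendering and neural networks.
    a = torch.randn(4096, 4096)
    b = torch.randn(4096, 4096)

    # On the CPU, the multiply is split across a handful of cores.
    cpu_result = a @ b

    # On an Nvidia GPU, if one is present, the same operation is spread
    # across thousands of smaller cores and typically finishes far faster.
    if torch.cuda.is_available():
        gpu_result = (a.cuda() @ b.cuda()).cpu()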

Many CPUs and GPUs, particularly those in mobile devices, are built using architecture licensed from British chip design firm ARM, the company behind virtually every chip in today’s smartphones.

TPUs (Tensor Processing Units) are AI accelerator chips developed by Google for neural network machine learning; they are not yet sold as standalone commercial hardware. Google has used TPUs in AlphaGo, in Google Street View text processing, in Google search results, and to process over 100 million photos a day in Google Photos. TPUs are not replacements for either GPUs or CPUs; they are used for high-volume computation (like categorizing millions of photos at once). TPUs accelerate the time to accuracy for large, complex neural network models: models that previously took weeks to train on CPU/GPU hardware platforms can be trained in a matter of hours on TPUs. Google was the first company to develop a viable AI-specific chip, but its competitors are not far behind.
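As a rough illustration of how developers tap that acceleration, here is a hedged sketch of pointing a TensorFlow training job at a Cloud TPU. The TPU name and the toy model are assumptions made for the example, not details reported by NewtonX.

    import tensorflow as tf

    # Connect to a Cloud TPU; "my-tpu" is a placeholder resource name.
    resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu="my-tpu")
    tf.config.experimental_connect_to_cluster(resolver)
    tf.tpu.experimental.initialize_tpu_system(resolver)
    strategy = tf.distribute.TPUStrategy(resolver)

    # Anything built inside the strategy's scope is replicated across TPU cores.
    with strategy.scope():
        model = tf.keras.Sequential([
            tf.keras.layers.Dense(128, activation="relu", input_shape=(784,)),
            tf.keras.layers.Dense(10, activation="softmax"),
        ])
        model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

    # model.fit(...) would then train on TPU cores rather than local CPUs/GPUs.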

A Bid For Processing Power: What Each Tech Giant Is Throwing Into the Mix

In early 2018, ARM unveiled two new chips: the ARM Machine Learning Processor, which increases compute speed for AI applications such as speech and facial recognition, and the ARM Object Detection Processor, which is optimized for detecting people and objects. The release was timely: that same month, Google said it would allow other companies to buy access to its TPUs through its cloud computing service. As with all of its chip architecture, ARM does not manufacture the chips itself, but instead licenses its designs to third parties.

And the third parties are already on board. Apple designed and built a neural engine chip as part of the iPhone X to handle the phone’s artificial neural networks for image and speech processing. NewtonX chip experts say that the neural engine will likely become a central fixture not only of the iPhone, but of all smartphones, as they become more heavily reliant on AR, image recognition, and speech recognition. Indeed, in 2017 the Chinese telecommunications company Huawei also announced a neural processing unit to accelerate machine learning.

The New York Times estimates that at least 45 startups are working on AI-dedicated chips, and experts say the true figure is even larger, given the number of stealth companies in China.

Google also entered the smartphone AI chip fray with its first custom imaging chip, the Pixel Visual Core, on the Pixel 2. The processor handles Google’s machine-learning-powered HDR+ photography five times faster than the phone’s main CPU, while using a tenth of the energy. (Google’s primary motivation in developing the TPU was likewise to increase compute power while lowering energy expenditure.) The chip also makes Google’s HDR+ available to third-party camera apps like Instagram.

Not to be left behind, Amazon has already started building its own AI chips for Alexa. In 2015, Amazon acquired Israeli chipmaker Annapurna Labs, and the company currently has almost 500 employees with deep chip expertise on staff.

Why So Chip Happy? What AI Chips Give The Tech Giants

Prior to the AI chip revolution, most AI-powered processing occurred in the cloud. This had two drawbacks: it increased response time, and it consumed massive amounts of energy in data centers. As smartphones and other verticals, including healthcare and transportation, become increasingly reliant on AI and machine learning, AI-specific chips that can carry the processing load on the device itself will be paramount. While much of the training for AI tasks will still occur in the cloud, the trained algorithm can then run directly on the device’s hardware, as the sketch below illustrates.
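The following is a minimal sketch of that split using TensorFlow Lite: a model assumed to have been trained in the cloud is converted into a compact format that a phone’s AI silicon can execute locally. The toy model and file name are illustrative assumptions.

    import tensorflow as tf

    # Stand-in for a model trained in the cloud; in practice the training
    # itself would run on GPU- or TPU-backed servers.
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(10, activation="softmax", input_shape=(784,)),
    ])

    # Convert the trained model into a compact on-device format.
    converter = tf.lite.TFLiteConverter.from_keras_model(model)
    tflite_model = converter.convert()

    # The resulting file ships inside an app and runs on the handset's
    # AI-specific hardware via tf.lite.Interpreter, with no round trip
    # to a data center.
    with open("model.tflite", "wb") as f:
        f.write(tflite_model)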

The entry of the tech giants into this space may spell trouble for chip-only manufacturers such as Nvidia and AMD, particularly if any one of the giants sells its AI-specific hardware to other companies, as Google appears poised to do.

