Intel Buys Deep Learning Startup Nervana To Bolster AI Business

The chipmaker says it will use Nervana Systems’ expertise in accelerating deep learning algorithms to expand Intel’s capabilities in the field of artificial intelligence.

Intel is getting serious about artificial intelligence, specifically as a driver of chip sales. On Tuesday, the company announced plans to acquire Nervana Systems, a deep learning company based in San Diego, Calif.

Financial terms were not disclosed, but Recode reports the acquisition is valued at around $408 million, citing “a source with knowledge of the deal.”

“With this acquisition, Intel is formally committing to pushing the forefront of AI technologies,” said Naveen Rao, CEO and cofounder of Nervana Systems, in a blog post.

Artificial intelligence is not science fiction, said Diane Bryant, EVP and general manager of the data center group at Intel, in a blog post. Rather, she said, it is all around us, enabling speech recognition, image recognition, fraud detection, and self-driving cars.

“Encompassing compute methods like advanced data analytics, computer vision, natural language processing and machine learning, artificial intelligence is transforming the way businesses operate and how people engage with the world,” said Bryant.

At the heart of the industry’s enthusiasm for AI is the massive amount of data available to train AI models, generated through online interaction, mobile devices, sensors, and various other inputs. With enough data, AI models can make good decisions; indeed, they may be necessary simply to make sense of data sets that large.

“The ability to analyze and derive value from that data is one of the most exciting opportunities for us all,” said Bryant. “Central to that opportunity is artificial intelligence.”

Intel’s Diane Bryant with Nervana’s cofounder Naveen Rao. (Image: Intel)

Intel says it intends to use Nervana’s technology to optimize the Intel Math Kernel Library, integrate it into industry-standard AI frameworks, and improve the deep learning performance of Intel Xeon and Intel Xeon Phi processors.
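Intel’s announcement does not include code, but the role MKL plays in deep learning is easy to illustrate: training and inference spend most of their time in dense linear algebra, and numerical libraries such as NumPy hand those operations to whatever BLAS backend they were built against, MKL included. The sketch below is a hypothetical illustration, not Intel’s or Nervana’s code; the layer sizes are arbitrary.

```python
import time
import numpy as np

# Hypothetical sizes for a single dense layer: a batch of 256 examples,
# 4096 inputs, and 4096 outputs.
batch, n_in, n_out = 256, 4096, 4096
activations = np.random.rand(batch, n_in).astype(np.float32)
weights = np.random.rand(n_in, n_out).astype(np.float32)

# The forward pass of a dense layer is essentially one matrix multiply.
# On a NumPy build linked against MKL, np.dot dispatches to MKL's optimized
# SGEMM routine, which is where the hardware-level speedup would come from.
start = time.time()
outputs = np.dot(activations, weights)
print("forward pass:", outputs.shape, "in %.3f s" % (time.time() - start))

# Reports which BLAS/LAPACK backend (e.g., MKL) this NumPy build uses.
np.show_config()
```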

The chipmaker aims to supply the hardware that powers the intensive computation behind AI software. It also has to move fast, because other technology companies are already developing silicon optimized for AI number crunching.

Bryant says that more than 97% of the servers handling machine learning computation rely on Intel processors, but Intel’s dominance may not last as other companies develop processors optimized for AI workloads.

[Curious about machine learning? Read Google Open Sources Machine Learning Library TensorFlow.]

At Google I/O in May, CEO Sundar Pichai revealed that Google has created an application-specific integrated circuit (ASIC) called the Tensor Processing Unit (TPU) solely to handle deep learning calculations made with the TensorFlow framework that Google released as an open source project last year.
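TensorFlow itself is open source, so the style of computation a TPU is built to accelerate is easy to sketch. The snippet below is a minimal, hypothetical example using the graph-based TensorFlow API of that era; it is not Google’s TPU code, just the kind of tensor math the ASIC runs.

```python
import tensorflow as tf  # graph-based TensorFlow API of the 2015-2016 era

# Build a graph describing the tensor operations to perform.
a = tf.constant([[1.0, 2.0], [3.0, 4.0]])
b = tf.constant([[5.0, 6.0], [7.0, 8.0]])
product = tf.matmul(a, b)  # dense matrix multiply, the core deep learning op

# The graph only describes the computation; a Session executes it on
# whatever device is available (CPU, GPU, or, inside Google, a TPU).
with tf.Session() as sess:
    print(sess.run(product))
```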

In April, NVIDIA unveiled its Tesla P100 GPU chip for deep learning computation. In December last year, Facebook open sourced the design for its Big Sur AI server, made with components from Quanta and NVIDIA. In February, MIT researchers showed off “Eyeriss,” a chip design they claim can handle AI processing about 10 times faster than current GPUs.

Overall, fewer than 10% of servers worldwide handled machine learning tasks last year, according to Bryant, but industry enthusiasm for AI appears bound to drive more usage. During Google’s Q3 2015 earnings call, Pichai described AI and its related disciplines as central to the company’s future.

Baidu, Facebook, Microsoft, and perhaps even Apple (which last week acquired AI startup Turi) have shown similar interest in the technology. Intel has finally recognized that AI isn’t a passing fad.