Intel has unveiled its Intel Nervana Neural Network Processors (NNP) for training (NNP-T1000) and inference (NNP-I1000). These purpose-built ASICs for complex deep learning would provide incredible scale and efficiency for cloud and data center customers. Intel has also unveiled its “next-generation” Intel Movidius Myriad Vision Processing Unit (VPU) for edge media, computer vision and inference applications.


These products would further strengthen Intel’s portfolio of Artificial Intelligence (AI) solutions, which is expected to generate more than $3.5 billion in revenue in 2019. Intel’s AI portfolio helps customers develop and deploy AI models at any scale, from massive clouds to tiny edge devices and everything in between.

“With this next phase of AI, we’re reaching a breaking point in terms of computational hardware and memory. Purpose-built hardware like Intel Nervana NNPs and Movidius Myriad VPUs are necessary to continue the incredible progress in AI,” said Naveen Rao, Intel corporate vice president and general manager of the Intel Artificial Intelligence Products Group. “Using more advanced forms of system-level AI will help us move from the conversion of data into information toward the transformation of information into knowledge.”

Intel Nervana NNP

Now in production and being delivered to customers, the new Intel Nervana NNPs are part of a systems-level AI approach offering a full software stack developed with open components and deep learning framework integration for maximum use.

According to Intel, the Intel Nervana NNP-T strikes the right balance between computing, communication and memory, allowing “near-linear, energy-efficient scaling from small clusters up to the largest pod supercomputers.” Intel’s Nervana NNP-I solution would be power- and budget-efficient and ideal for running intense, multimodal inference at real-world scale using flexible form factors. Both products were developed for the AI processing needs of leading-edge AI customers like Baidu and Facebook.

Intel Movidius VPU

Additionally, Intel’s “next-generation” Intel Movidius VPU, scheduled to be available in the first half of 2020, would incorporate unique, highly efficient architectural advances that are expected to deliver leading performance. It would deliver more than 10 times the inference performance of the previous generation.

Intel has also announced its new Intel DevCloud for the Edge. Along with the Intel Distribution of OpenVINO toolkit, it would address a key pain point for developers by allowing them to try, prototype and test AI solutions on a broad range of Intel processors before buying hardware.
