Intel's Naveen Rao: Not just CPUs or GPUs, enterprise-grade artificial intelligence requires a more comprehensive approach

[Introduction]: At the Intel Artificial Intelligence Developer Conference held in San Francisco on May 23, Intel demonstrated its latest artificial intelligence developments, with enterprise-grade artificial intelligence as the focus of the event.

At the Intel Artificial Intelligence Developer Conference in San Francisco on May 23, we introduced the latest on Intel's artificial intelligence portfolio and the Intel Nervana™ neural network processor. This is an exciting week: the conference brings together top talent in the field of artificial intelligence. We recognize that Intel must collaborate with the entire industry, including developers, academia, and the software ecosystem, to unlock the full potential of artificial intelligence. I am therefore very excited to be here with so many people from across the industry, including the developers who have joined us for demonstrations, research, and hands-on training, as well as many supporters from Google*, AWS*, Microsoft*, Novartis*, and C3 IoT*. It is this broad collaboration that lets us give the artificial intelligence community the hardware and software support it needs to accelerate innovation and progress in the field.

Naveen Rao speaks

As we accelerate the transition to an AI-driven future of computing, we need to provide comprehensive enterprise-grade solutions. That means solutions spanning the widest range of computing power, supporting architectures from milliwatts to kilowatts. Enterprise-grade artificial intelligence also means supporting and extending the tools, open frameworks, and infrastructure the industry has already invested in, so that researchers can work more effectively across different artificial intelligence workloads. For example, developers increasingly prefer to program directly against open-source frameworks rather than against a specific product's software platform, which makes development faster and more efficient. The news we announced at the conference covers all of these areas, including several new partnerships that will help developers and our customers benefit from artificial intelligence more quickly.

Intel's artificial intelligence portfolio expands for diverse artificial intelligence workloads

A recent Intel survey shows that more than 50% of our US corporate customers are turning to existing cloud solutions based on Intel® Xeon® processors to meet their initial artificial intelligence needs. This affirms Intel's approach of addressing the unique demands of artificial intelligence workloads with a wide range of enterprise-grade products, including Intel® Xeon® processors, Intel® Nervana™ and Intel® Movidius™ technologies, and Intel® FPGAs.

An important part of our discussion today is the optimization of Intel Xeon Scalable processors. Compared with the previous generation, these optimizations greatly improve both training and inference performance, letting more companies leverage their existing infrastructure and reduce overall cost in the initial stages of their move to artificial intelligence. We also shared updates on the Intel Nervana Neural Network Processor (NNP) family: the Intel Nervana NNP has explicit design goals of high compute utilization and true model parallelism via inter-chip interconnects. The industry talks a lot about theoretical peak performance, or TOP/s figures, but the reality is that much of that computation is meaningless unless the memory subsystem of the architecture can keep the compute units fully utilized. In addition, much of the performance data published in the industry uses large square matrices, which are rarely found in real neural networks.
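To make the point concrete, here is a minimal sketch, using entirely hypothetical accelerator numbers (40 TOP/s peak, 1 TB/s memory bandwidth; neither figure is from Intel), of why peak TOP/s alone is misleading: under a simple roofline model, a GEMM only approaches peak compute if its arithmetic intensity exceeds the machine's compute-to-bandwidth ratio, and skinny non-square matrices often do not.

```python
# Roofline sketch: attainable throughput is capped by either peak compute
# or memory bandwidth times arithmetic intensity, whichever is lower.

def gemm_arithmetic_intensity(m, n, k, bytes_per_elem=2):
    """FLOPs per byte moved for C(m, n) = A(m, k) @ B(k, n)."""
    flops = 2 * m * n * k                                  # multiply + add per term
    bytes_moved = bytes_per_elem * (m * k + k * n + m * n) # read A, B; write C
    return flops / bytes_moved

def attainable_tops(peak_tops, mem_bw_tb_per_s, intensity):
    """Roofline: min of the compute ceiling and the memory-bandwidth ceiling."""
    return min(peak_tops, mem_bw_tb_per_s * intensity)

peak, bw = 40.0, 1.0  # hypothetical: 40 TOP/s peak, 1 TB/s memory bandwidth
for shape in [(1536, 1536, 2048), (32, 1536, 2048)]:  # large vs. skinny GEMM
    ai = gemm_arithmetic_intensity(*shape)
    print(shape, f"intensity={ai:.1f} FLOP/B",
          f"attainable={attainable_tops(peak, bw, ai):.1f} TOP/s")
```

On these assumed numbers, the large GEMM is compute-bound (it can reach the 40 TOP/s ceiling), while the skinny GEMM is memory-bound at roughly 31 TOP/s, regardless of how high the theoretical peak is.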

Intel is committed to developing a balanced architecture for neural networks that also includes high-bandwidth, low-latency interconnects between chips. Preliminary performance benchmarks on our neural network processor family show highly competitive results in both utilization and connectivity. Specific details include:

Using matrix-matrix multiplication (GEMM) operations with A(1536, 2048) and B(2048, 1536) matrix sizes, we achieved compute utilization of more than 96.4% on a single chip. This corresponds to actual (not theoretical) performance of approximately 38 TOP/s on a single chip[1]. For A(6144, 2048) and B(2048, 1536) matrix sizes, multi-chip distributed GEMM operations supporting model-parallel training achieved near-linear scaling with 96.2% scaling efficiency[2], allowing multiple neural network processors to be connected together and to break through the memory limitations faced by other architectures.
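As a small worked check of the arithmetic behind those figures (the per-operation timing is not given in the source; this only reproduces the stated relationships):

```python
# Total operations for the cited single-chip GEMM shape, and the peak
# rate implied by ~38 TOP/s actual at 96.4% utilization.

M, K, N = 1536, 2048, 1536           # A(1536, 2048) @ B(2048, 1536)
ops = 2 * M * K * N                   # one multiply + one add per output term
print(f"GEMM operations: {ops / 1e9:.2f} G-ops")  # ~9.66 G-ops

actual_tops, utilization = 38.0, 0.964
implied_peak = actual_tops / utilization
print(f"implied peak: {implied_peak:.1f} TOP/s")  # ~39.4 TOP/s
```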

We measured unidirectional inter-chip transmission efficiency of 89.4% of theoretical bandwidth[3] at a latency of less than 790 nanoseconds, applied to a 2.4 Tb/s high-bandwidth, low-latency interconnect.
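A brief sketch of what those link numbers mean for model parallelism, using the stated bandwidth, efficiency, and latency; the tile size being transferred is a hypothetical fp16 example, not from the source:

```python
# Effective link bandwidth, and the time to move one hypothetical
# activation/weight tile between chips over that link.

link_tbps  = 2.4      # theoretical unidirectional bandwidth, Tb/s
efficiency = 0.894    # measured fraction of theoretical bandwidth
latency_s  = 790e-9   # stated per-transfer latency upper bound

effective_tbps = link_tbps * efficiency           # ~2.15 Tb/s
payload_bits = 1536 * 2048 * 16                   # hypothetical fp16 tile
transfer_s = latency_s + payload_bits / (effective_tbps * 1e12)
print(f"effective bandwidth: {effective_tbps:.2f} Tb/s")
print(f"tile transfer time: {transfer_s * 1e6:.1f} us")  # ~24 us
```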

All of this was achieved within a total single-chip power budget of under 210 watts, and this is only the Intel Nervana neural network processor prototype (Lake Crest), whose main purpose is to gather feedback from our early partners.

We are developing the first commercial neural network processor product, the Intel Nervana NNP-L1000 (Spring Crest), scheduled for release in 2019. Compared with the first-generation Lake Crest product, we expect the Intel Nervana NNP-L1000 to deliver 3 to 4 times the training performance. The Intel Nervana NNP-L1000 will also support bfloat16, a numerical format widely used across the industry for neural networks. Over time, Intel will extend bfloat16 support across our artificial intelligence product lines, including Intel Xeon processors and Intel FPGAs. This is part of a comprehensive strategy to bring leading artificial intelligence training capabilities to our silicon portfolio.
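For readers unfamiliar with bfloat16: it keeps float32's sign bit and 8-bit exponent (and therefore float32's dynamic range) but only 7 mantissa bits, so a float32 value can be converted simply by dropping the low 16 bits of its bit pattern. The sketch below uses plain truncation for clarity; hardware implementations typically round.

```python
# Minimal bfloat16 illustration: a bfloat16 value is the top 16 bits of
# the corresponding IEEE-754 float32 bit pattern.

import struct

def float32_to_bfloat16_bits(x: float) -> int:
    """Return the 16-bit bfloat16 pattern for a float32 value (truncating)."""
    (bits,) = struct.unpack("<I", struct.pack("<f", x))
    return bits >> 16

def bfloat16_bits_to_float32(b: int) -> float:
    """Widen a bfloat16 bit pattern back to float32 (exact, no rounding)."""
    (x,) = struct.unpack("<f", struct.pack("<I", b << 16))
    return x

for v in [1.0, 3.14159, 1e-3]:
    b = float32_to_bfloat16_bits(v)
    print(f"{v:>10} -> 0x{b:04x} -> {bfloat16_bits_to_float32(b):.6g}")
```

Running this shows, for example, that 3.14159 round-trips to 3.140625: precision drops, but unlike fp16 the representable range matches float32, which is why the format is attractive for neural network training.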

Artificial intelligence for the real world

The breadth of our product line lets organizations of every size easily begin their artificial intelligence journey with Intel. For example, Intel is working with Novartis to use deep neural networks to accelerate high-content screening, a key element of early drug discovery. The collaboration cut the time to train an image-analysis model from 11 hours to 31 minutes, an improvement of more than 20 times[4]. To help customers develop artificial intelligence and IoT applications more quickly, Intel and C3 IoT announced a collaboration on optimized artificial intelligence hardware and software solutions: a C3 IoT AI application based on Intel AI technology. In addition, we are integrating deep learning frameworks such as TensorFlow*, MXNet*, PaddlePaddle*, CNTK*, and ONNX* on top of nGraph, a framework-neutral deep neural network (DNN) model compiler. We also announced that the Intel Artificial Intelligence Lab has open-sourced a natural language processing library for Python* to help researchers get started with their own natural language processing algorithms.
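The quoted speedup follows directly from the two training times given above:

```python
# 11 hours down to 31 minutes: the improvement factor is the ratio.
before_min = 11 * 60   # 11 hours in minutes
after_min = 31
print(f"speedup: {before_min / after_min:.1f}x")  # ~21.3x, i.e. more than 20x
```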

The future of computing depends on our collective ability to deliver the enterprise-grade solutions through which companies can harness the full potential of artificial intelligence. We are eager to develop and deploy this transformative technology together with the community and our customers, and we look forward to more excitement throughout the Artificial Intelligence Developers Conference.
