Intel oneAPI Deep Neural Network Library

Develop faster deep learning frameworks with a library of optimized primitives and a single API that targets CPUs, GPUs, or both.

Vendor

Intel Corporation

Company Website

Product details

Develop Faster Deep Learning Frameworks and Applications

The Intel® oneAPI Deep Neural Network Library (oneDNN) provides highly optimized implementations of deep learning building blocks. With this open source, cross-platform library, deep learning application and framework developers can use the same API for CPUs, GPUs, or both—it abstracts out instruction sets and other complexities of performance optimization. Using this library, you can:

  • Improve performance of frameworks you already use, such as OpenVINO™ toolkit, AI Tools from Intel, PyTorch*, and TensorFlow*. 
  • Develop faster deep learning applications and frameworks using optimized building blocks.
  • Deploy applications optimized for Intel CPUs and GPUs without writing any target-specific code.

Features

  • Automatic Optimization
      - Use existing deep learning frameworks
      - Develop and deploy platform-independent deep learning applications with automatic detection of instruction set architecture (ISA) and ISA-specific optimization
  • Network Optimization
      - Identify performance bottlenecks using Intel® VTune™ Profiler
      - Use automatic memory format selection and propagation based on hardware and convolutional parameters
      - Fuse primitives with operations applied to the primitive's result, for instance, Conv+ReLU
      - Quantize primitives from FP32 to FP16, bf16, or int8 using Intel® Neural Compressor
  • Optimized Implementations of Key Building Blocks
      - Convolution
      - Matrix multiplication
      - Pooling
      - Batch normalization
      - Activation functions
      - Recurrent neural network (RNN) cells
      - Long short-term memory (LSTM) cells
  • Abstract Programming Model
      - Primitive: Any low-level operation from which more complex operations are constructed, such as convolution, data format reorder, and memory
      - Memory: Handles to memory allocated on a specific engine, tensor dimensions, data type, and memory format
      - Engine: A hardware processing unit, such as a CPU or GPU
      - Stream: A queue of primitive operations on an engine