Intel oneAPI Collective Communications Library
Intel Corporation

Distribute machine and deep learning model training across multiple nodes using a library of optimized communication patterns.

Vendor

Intel Corporation

Company Website

Product details

Implement Multi-Node Communication Patterns

The Intel® oneAPI Collective Communications Library (oneCCL) enables developers and researchers to more quickly train newer and deeper models. This is done by using optimized communication patterns to distribute model training across multiple nodes. The library is designed for easy integration into deep learning frameworks, whether you are implementing them from scratch or customizing existing ones.

  • Built on top of lower-level communication middleware: Message Passing Interface (MPI) and libfabric transparently support many interconnects, such as Cornelis Networks*, InfiniBand*, and Ethernet.
  • Optimized for high performance on Intel CPUs and GPUs. 
  • Allows the tradeoff of compute for communication performance to drive scalability of communication patterns.
  • Enables efficient implementations of collectives that are heavily used for neural network training, including all-gather, all-reduce, and reduce-scatter.
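The three collectives named above are related: a ring all-reduce is commonly built from a reduce-scatter followed by an all-gather. The sketch below simulates their semantics on in-memory "ranks" with Python lists; it is purely illustrative and does not use the oneCCL API.

```python
def all_reduce(rank_buffers):
    """All-reduce (sum): every rank receives the elementwise sum of all buffers."""
    total = [sum(vals) for vals in zip(*rank_buffers)]
    return [list(total) for _ in rank_buffers]

def reduce_scatter(rank_buffers):
    """Reduce-scatter: rank i receives the summed i-th chunk of the buffer."""
    n = len(rank_buffers)
    chunk = len(rank_buffers[0]) // n
    total = [sum(vals) for vals in zip(*rank_buffers)]
    return [total[i * chunk:(i + 1) * chunk] for i in range(n)]

def all_gather(rank_chunks):
    """All-gather: every rank receives the concatenation of all ranks' chunks."""
    concat = [x for chunk in rank_chunks for x in chunk]
    return [list(concat) for _ in rank_chunks]

# Two ranks, four elements each: reduce-scatter then all-gather
# reproduces the all-reduce result.
bufs = [[1, 2, 3, 4], [10, 20, 30, 40]]
scattered = reduce_scatter(bufs)          # rank 0: [11, 22], rank 1: [33, 44]
assert all_gather(scattered) == all_reduce(bufs)
```

In a real deployment each "rank" is a separate process or node and the chunks move over the interconnect, which is where the library's optimized communication patterns apply.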

Features

  • **Common APIs to Support Deep Learning Frameworks:** oneCCL exposes a collective API that supports:
      - Commonly used collective operations found in deep learning and machine learning workloads
      - Interoperability with SYCL* from the Khronos* Group
  • **Deep Learning Optimizations:** The runtime implementation enables several optimizations, including:
      - Asynchronous progress for compute-communication overlap
      - Dedication of one or more cores to ensure optimal network use
      - Message prioritization, persistence, and out-of-order execution
      - Collectives in low-precision data types
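The asynchronous-progress idea above follows a start/wait pattern: a collective is launched, communication progresses in the background (oneCCL can dedicate cores to this), and the caller overlaps independent compute before blocking on the result. The sketch below models that pattern with a Python thread; the names (`Request`, `start_allreduce`) are illustrative assumptions, not oneCCL APIs.

```python
import threading

class Request:
    """Illustrative request handle: the operation progresses on its own
    thread (standing in for a dedicated progress core); wait() blocks
    until the result is ready."""
    def __init__(self, fn):
        self.result = None
        self._thread = threading.Thread(target=self._run, args=(fn,))
        self._thread.start()

    def _run(self, fn):
        self.result = fn()

    def wait(self):
        self._thread.join()
        return self.result

def start_allreduce(rank_buffers):
    """Launch a (simulated) sum all-reduce and return immediately."""
    return Request(lambda: [sum(vals) for vals in zip(*rank_buffers)])

req = start_allreduce([[1, 2], [3, 4]])   # communication "in flight"
partial = 2 * 21                          # overlapped compute on other data
assert req.wait() == [4, 6]               # block only when the result is needed
```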