Merlin PyTorch Container

Enables preprocessing, feature engineering with NVTabular, and training deep-learning recommenders with PyTorch.

Vendor

NVIDIA

Company Website

Product details

The Merlin PyTorch container allows users to perform preprocessing and feature engineering with NVTabular, train deep-learning-based recommender system models with PyTorch, and serve the trained models on Triton Inference Server. This container is part of the NVIDIA Merlin framework, which accelerates the entire recommender systems pipeline on the GPU, from data ingestion and training to deployment. Merlin empowers data scientists, machine learning engineers, and researchers to build high-performing recommenders at scale. Each stage of the Merlin pipeline offers an easy-to-use API and is optimized to support hundreds of terabytes of data.

Features

  • NVTabular: Performs data preprocessing and feature engineering for tabular data, scaling to manipulate terabyte-scale datasets.
  • PyTorch: Used for training deep-learning-based recommender system models.
  • Triton Inference Server: Provides GPU-accelerated inference, simplifying the deployment of AI models at scale.
  • Multi-Arch Support: Compatible with Linux/amd64 and Linux/arm64 architectures.
  • Security: Signed images and comprehensive security scanning.
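To make the preprocessing step concrete, the sketch below shows in plain Python what a Categorify-style transform does conceptually: mapping raw categorical values to contiguous integer ids that a PyTorch embedding table can consume. This is an illustration only, not the container's GPU-accelerated NVTabular API; the function name and the out-of-vocabulary convention (reserving id 0) are assumptions chosen for the example.

```python
# Conceptual, pure-Python illustration of categorical encoding, the kind of
# transform NVTabular's Categorify op performs on the GPU at terabyte scale.
# Hypothetical helper for illustration; not the NVTabular API.

def categorify(column, start_index=1):
    """Map each distinct value to a contiguous integer id.

    Id 0 is implicitly reserved for unseen ("out-of-vocabulary") values,
    a common convention when the ids index an embedding table.
    """
    vocab = {}      # value -> integer id, built in order of first appearance
    encoded = []
    for value in column:
        if value not in vocab:
            vocab[value] = start_index + len(vocab)
        encoded.append(vocab[value])
    return encoded, vocab

# Example: encode an item-id column from a clickstream.
item_ids, vocab = categorify(["shoe", "hat", "shoe", "scarf"])
# item_ids -> [1, 2, 1, 3]; vocab -> {"shoe": 1, "hat": 2, "scarf": 3}
```

The resulting integer ids are what a downstream PyTorch model would feed into an embedding layer during training; in the actual container, NVTabular performs this mapping (plus normalization, filling, and other ops) on the GPU across partitioned datasets.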

Benefits

  • High Performance: Accelerates the entire recommender systems pipeline on the GPU.
  • Scalability: Supports large datasets, making it suitable for extensive data processing and model training.
  • Ease of Deployment: Simplifies the deployment of trained models with Triton Inference Server.
  • Versatility: Supports various AI frameworks and deployment environments.