
A high-performance quantum simulator for quantum machine learning, offering GPU acceleration and seamless integration with PennyLane.
Vendor
Xanadu
Lightning is a high-performance quantum simulator written in C++ and accessible from Python, designed for demanding quantum machine learning applications. It ships two simulators, lightning.qubit and lightning.gpu, enabling rapid simulation of large workflows on CPUs, GPUs, and supercomputers. The lightning.gpu simulator leverages NVIDIA's cuQuantum SDK for GPU-accelerated circuit simulation.

Lightning supports a wide variety of architectures, including x86, ARM, AARCH64, and PowerPC, making it usable on laptops, in the cloud, and at supercomputing centers. Its optimized C++ kernels for quantum gates and operations deliver fast simulation for applications ranging from quantum chemistry to machine learning.

Optimized for machine learning, Lightning includes built-in support for highly efficient computation of quantum gradients, allowing users to train and optimize quantum algorithms at scale. Seamless integration with PennyLane provides direct access to optimizers and algorithms, and connects large-scale simulations with machine learning frameworks such as PyTorch, TensorFlow, and JAX.
Features & Benefits
- Universal Architecture Support: Runs on x86, ARM, AARCH64, and PowerPC architectures, from laptops to supercomputers.
- GPU Acceleration: Utilizes NVIDIA's cuQuantum SDK for accelerated circuit simulation via lightning.gpu.
- Machine Learning Optimization: Built-in support for efficient quantum gradient computation, enabling training and optimization at scale.
- High Performance: Optimized C++ kernels for various quantum gates ensure fast circuit simulation.
- PennyLane Integration: Seamlessly works with PennyLane optimizers and algorithms, and connects to ML frameworks.