GPU
NVIDIA H100 and L40S GPU instances accelerate your applications across a wide range of artificial intelligence and high-performance computing workloads.
To boost the productivity of data scientists and roll out new AI services more quickly, you need to train increasingly complex models ever faster. Our NVIDIA H100 and L40S GPUs cut deep-learning training runs to just a few hours.
Fully compatible with Kubernetes, container systems and virtual machines, GPU technology simplifies access to computing resources for all users, whatever the type of workload.
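As a minimal sketch of that Kubernetes integration, a pod can request GPU resources through the NVIDIA device plugin. The pod name, container image and entrypoint below are illustrative, not part of any specific offering:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: gpu-training-job        # illustrative name
spec:
  restartPolicy: Never
  containers:
    - name: trainer
      image: nvcr.io/nvidia/pytorch:24.01-py3   # example NGC image
      command: ["python", "train.py"]           # hypothetical entrypoint
      resources:
        limits:
          nvidia.com/gpu: 1     # one GPU, exposed by the NVIDIA device plugin
```

The scheduler then places the pod only on nodes advertising a free `nvidia.com/gpu` resource.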
Adjust the number of H100 GPU instances to suit your needs: Multi-Instance GPU (MIG) technology lets you partition a GPU into up to seven separate, secure instances, each with 10 GB of dedicated memory. Every user gets the full benefit of GPU acceleration.
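For reference, MIG partitioning of this kind is typically driven with `nvidia-smi` on the host. A hedged sketch, assuming root access and an H100 with MIG support (the `1g.10gb` profile name matches NVIDIA's published H100 MIG profiles):

```
# Enable MIG mode on GPU 0 (requires root; takes effect after a GPU reset)
sudo nvidia-smi -i 0 -mig 1

# Create seven 1g.10gb GPU instances plus their default compute instances
sudo nvidia-smi mig -i 0 \
  -cgi 1g.10gb,1g.10gb,1g.10gb,1g.10gb,1g.10gb,1g.10gb,1g.10gb -C

# List the resulting MIG devices
nvidia-smi -L
```

Each listed MIG device can then be assigned to a different user or container independently.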
NVIDIA L40S technical data

FP32: 91.6 TFLOPS
TF32 Tensor Core: 366 TFLOPS
FP16 Tensor Core: 733 TFLOPS
FP8 Tensor Core: 1,466 TFLOPS
Memory: 48 GB
Maximum power consumption: 350 W
NVIDIA H100 technical data

FP32: 51 TFLOPS
TF32 Tensor Core: 756 TFLOPS
FP16 Tensor Core: 1,513 TFLOPS
FP8 Tensor Core: 3,026 TFLOPS
Memory: 80 GB
Maximum power consumption: 350 W
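To put the figures above in perspective, the ratio between FP8 Tensor Core and plain FP32 peak throughput shows why reduced precision matters for training speed. A small sketch using the vendor peak numbers from the tables (theoretical maxima; real workloads reach only a fraction of them):

```python
# Vendor peak throughput figures in TFLOPS, taken from the spec tables above.
l40s = {"FP32": 91.6, "FP8": 1466}
h100 = {"FP32": 51, "FP8": 3026}

for name, gpu in (("L40S", l40s), ("H100", h100)):
    speedup = gpu["FP8"] / gpu["FP32"]
    print(f"{name}: FP8 peak is {speedup:.0f}x the FP32 peak")
    # L40S: 16x, H100: 59x (theoretical upper bounds only)
```

These ratios are upper bounds on what mixed-precision training can gain; memory bandwidth and model structure usually limit the realized speedup.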