Compute
GPU
NVIDIA L40S GPU instances accelerate your application's computations across a wide range of artificial intelligence and high-performance computing workloads
Available Q2 2024
Speeding up processing
Putting pre-trained neural networks into production plays an essential role in generating responses and recommendations for AI services. Our GPUs deliver up to 27 times the inference performance of a single-socket server, significantly reducing operating costs.
Innovating with AI and Machine Learning
To boost the productivity of data scientists and roll out new AI services more quickly, you need to train increasingly complex models ever faster. Our NVIDIA L40S GPUs cut Deep Learning training times to just a few hours.
Improving efficiency
Fully compatible with Kubernetes, container systems, and virtual machines, L40S GPU technology simplifies access to computing resources for all users, whatever the type of workload.
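As an illustration of this Kubernetes compatibility, a pod can request a GPU through the standard NVIDIA device plugin resource. This is a minimal sketch, not part of the offer itself: the pod name and container image are placeholders, and the cluster is assumed to already run the NVIDIA device plugin.

```yaml
# Hypothetical pod spec: requests one GPU via the NVIDIA device plugin.
apiVersion: v1
kind: Pod
metadata:
  name: l40s-test                     # placeholder name
spec:
  restartPolicy: Never
  containers:
    - name: cuda-check
      image: nvidia/cuda:12.2.0-base-ubuntu22.04   # example CUDA base image
      command: ["nvidia-smi"]         # print the GPU visible to the container
      resources:
        limits:
          nvidia.com/gpu: 1           # one GPU, allocated by the device plugin
```

Because the GPU is requested as an ordinary Kubernetes resource limit, the scheduler places the pod on a GPU node automatically; users do not need node-level access to consume the hardware.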
Technical
specifications
FP32: 91.6 TFLOPS
TF32 Tensor Core: 366 TFLOPS
FP16 Tensor Core: 733 TFLOPS
FP8 Tensor Core: 1,466 TFLOPS
RT Core performance: 212 TFLOPS
Maximum power consumption: 350 W