Develop, train, and deploy AI applications in the cloud.
- 2.4x more memory capacity
- 1.6x more memory bandwidth
- 1.3x more FP16 TFLOPS
- Store larger models, such as Llama 3 70B, on a single GPU with no model parallelism required (see the sketch below)
- Full support for PyTorch and TensorFlow
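To illustrate what fitting a 70B-parameter model on one GPU looks like in practice, here is a minimal sketch using PyTorch. It assumes the Hugging Face `transformers` and `accelerate` libraries are installed and that the accelerator is visible to PyTorch as device 0; the model ID is illustrative, not a claim about what the platform preinstalls.

```python
# Minimal sketch: load a large language model in FP16 onto a single GPU.
# Assumes Hugging Face `transformers` + `accelerate`; model ID is illustrative.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Meta-Llama-3-70B"  # hypothetical/illustrative checkpoint

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # FP16 weights: roughly 70B params x 2 bytes ~ 140 GB
    device_map={"": 0},         # place the entire model on a single GPU, no sharding
)

prompt = "Large on-device memory lets you"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

The key point is the `device_map={"": 0}` placement: with enough GPU memory, the whole model lives on one device, so no tensor or pipeline parallelism setup is needed.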