The compute resources required to continue advancing state-of-the-art results are growing as neural network models and datasets get larger. To keep up, NVIDIA GPUs, available on AWS, have been adding features designed to accelerate neural network operations. Tensor Cores are one such feature, delivering roughly an order of magnitude higher matrix-math throughput than standard CUDA cores while requiring no changes to the model or its training hyperparameters. In this session, we'll discuss considerations and techniques for taking advantage of Tensor Cores, demonstrate how to use them in popular deep learning frameworks, and share results from a wide range of neural network tasks and models.
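
By way of illustration (not material from the session itself), one common way frameworks route training onto Tensor Cores is mixed-precision training. The sketch below uses PyTorch's torch.cuda.amp API with a toy model and random data standing in for a real workload; it assumes a CUDA-capable GPU with Tensor Cores (Volta or newer), and falls back to plain FP32 on CPU.

```python
import torch
import torch.nn as nn

# Illustrative model and data only; layer widths are chosen as
# multiples of 8, which FP16 matrix dimensions should be for
# Tensor Cores to engage.
device = "cuda" if torch.cuda.is_available() else "cpu"
model = nn.Sequential(
    nn.Linear(1024, 1024),
    nn.ReLU(),
    nn.Linear(1024, 8),
).to(device)
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()
scaler = torch.cuda.amp.GradScaler(enabled=(device == "cuda"))

inputs = torch.randn(64, 1024, device=device)
targets = torch.randint(0, 8, (64,), device=device)

for step in range(10):
    optimizer.zero_grad()
    # autocast runs Tensor Core-eligible ops (e.g., matmuls) in FP16
    # while keeping precision-sensitive ops in FP32.
    with torch.cuda.amp.autocast(enabled=(device == "cuda")):
        loss = loss_fn(model(inputs), targets)
    # Loss scaling prevents small FP16 gradients from flushing to zero.
    scaler.scale(loss).backward()
    scaler.step(optimizer)
    scaler.update()
```

The model code itself is unchanged; only the loss computation and backward pass are wrapped, which is what lets Tensor Cores accelerate training without touching the architecture or hyperparameters.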