Many deep learning practitioners start by adapting code from existing tutorials and examples found on the Internet. These examples are often written without GPU utilisation in mind. In this session we walk attendees through simple coding best practices that optimise GPU/CPU utilisation during training and inference, significantly reducing the cost of both, using Apache MXNet and Amazon SageMaker.
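As a taste of the kind of practice the session covers, the following is a minimal Gluon sketch (not taken from the session materials) of two common habits that help keep the GPU busy: hybridising the network so it runs as a compiled symbolic graph, and loading batches with worker processes so the device is not starved waiting for data.

```python
# Minimal sketch (illustrative only): hybridization + multi-worker data loading
# are two simple ways to improve GPU/CPU utilisation in MXNet Gluon.
import mxnet as mx
from mxnet import gluon, autograd

# Use a GPU if one is available, otherwise fall back to CPU.
ctx = mx.gpu(0) if mx.context.num_gpus() > 0 else mx.cpu()

# Toy data; in practice this would be a real dataset.
X = mx.nd.random.uniform(shape=(1024, 64))
y = mx.nd.random.uniform(shape=(1024, 1))
dataset = gluon.data.ArrayDataset(X, y)
# num_workers > 0 prefetches batches in background processes.
loader = gluon.data.DataLoader(dataset, batch_size=128,
                               shuffle=True, num_workers=2)

net = gluon.nn.HybridSequential()
net.add(gluon.nn.Dense(128, activation='relu'),
        gluon.nn.Dense(1))
net.initialize(ctx=ctx)
net.hybridize()  # compile the imperative graph for faster execution

loss_fn = gluon.loss.L2Loss()
trainer = gluon.Trainer(net.collect_params(), 'sgd', {'learning_rate': 0.01})

for data, label in loader:
    data, label = data.as_in_context(ctx), label.as_in_context(ctx)
    with autograd.record():
        loss = loss_fn(net(data), label)
    loss.backward()
    trainer.step(batch_size=data.shape[0])
```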
Speaker:
Cyrus Vahid, Principal Evangelist AI MXNet, Amazon Web Services
Resource: