Automate machine learning: From debugging deep learning to detecting model drift in production (Level

March 11, 2020
Machine learning (ML) involves more than just training models: developers also need to debug deep learning models and monitor their performance in production so that they continue to serve their intended business purpose. As the nature of the data changes, however, models can become outdated, causing model drift in production and producing irrelevant results. This type of model degradation tends to go undetected. In this session, we cover how to radically reduce troubleshooting time when building and training high-quality ML models, and how to identify and detect drift in your ML model post-deployment.

Speaker: Aparna Elangovan, Prototype Engineer, AI/ML, AWS
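The abstract above does not name a specific drift-detection technique, but one common approach is to compare the distribution of a feature in production against the distribution seen at training time. A minimal sketch of that idea, using the Population Stability Index (PSI) implemented with NumPy (the function name and thresholds here are illustrative, not from the session):

```python
import numpy as np

def population_stability_index(reference, current, bins=10):
    """Compare a live sample's distribution against a reference (training) sample.

    Common rule of thumb: PSI < 0.1 means little change, 0.1-0.25 moderate
    change, and > 0.25 significant drift worth investigating.
    """
    # Bin edges from reference quantiles, so each bin holds roughly equal
    # reference mass; open the outer edges to catch out-of-range live values.
    edges = np.quantile(reference, np.linspace(0.0, 1.0, bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf
    ref_frac = np.histogram(reference, bins=edges)[0] / len(reference)
    cur_frac = np.histogram(current, bins=edges)[0] / len(current)
    # Floor the fractions to avoid log(0) or division by zero in empty bins.
    ref_frac = np.clip(ref_frac, 1e-6, None)
    cur_frac = np.clip(cur_frac, 1e-6, None)
    return float(np.sum((cur_frac - ref_frac) * np.log(cur_frac / ref_frac)))

rng = np.random.default_rng(0)
train_feature = rng.normal(0.0, 1.0, 10_000)  # distribution at training time
live_same = rng.normal(0.0, 1.0, 10_000)      # production data, unchanged
live_shifted = rng.normal(1.0, 1.0, 10_000)   # production data after drift

print(population_stability_index(train_feature, live_same))     # small value
print(population_stability_index(train_feature, live_shifted))  # large value
```

Running a check like this per feature on a schedule is one simple way to surface the silent degradation the session describes before it affects business results.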