Machine learning deployment on AWS: Best practices to decide what, where, and how (Level 300)

March 11, 2020
Putting ML solutions into production requires knowing the what, where, and how of deploying ML models. One needs to know what the model is (its resource requirements in production) and what the business context is (input workload, output consumers, batch vs. real-time inference, etc.); where to deploy (cloud or edge, based on cost-effectiveness, fulfillment of business SLAs, etc.); and how to deploy (based on ease of deployment, scaling, A/B testing, etc.). Working backwards from these customer questions, AWS offers the broadest and deepest range of ML deployment options. This session covers these options and how they address the above questions from a best-practices perspective.

Speaker: Sujoy Roy, Senior Data Scientist, AWS
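The what/where/how questions above can be sketched as a simple decision helper. This is an illustrative sketch only, not material from the session: the `Workload` fields, thresholds, and deployment labels are all hypothetical placeholders for the kinds of criteria the abstract mentions (SLAs, batch vs. real-time inference, cloud vs. edge).

```python
from dataclasses import dataclass

@dataclass
class Workload:
    """Hypothetical workload profile capturing the 'what' questions."""
    latency_sla_ms: float        # business SLA for a single prediction
    needs_offline_scoring: bool  # predictions can be computed ahead of time
    runs_disconnected: bool      # must work without reliable network access

def choose_deployment(w: Workload) -> str:
    """Pick a deployment style from the workload profile (illustrative only)."""
    if w.runs_disconnected:
        # 'Where': no reliable connectivity pushes inference to the edge device.
        return "edge"
    if w.needs_offline_scoring and w.latency_sla_ms >= 1000:
        # 'How': relaxed SLA plus precomputable outputs suits scheduled batch jobs.
        return "cloud-batch"
    # Default: an always-on, autoscaling real-time endpoint in the cloud.
    return "cloud-realtime"
```

For example, a recommendation model whose scores can be refreshed nightly would map to `cloud-batch`, while a fraud check inside a checkout flow (tight latency SLA) would map to `cloud-realtime`.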