Building machine learning workflows with Kubernetes and Amazon SageMaker (Level 200) - AWS Innovate

March 8, 2020
Until recently, data scientists had to spend significant time on operational tasks, such as ensuring that frameworks, runtimes, and drivers for CPUs and GPUs worked well together. They also needed to design and build machine learning (ML) pipelines to orchestrate complex workflows for deploying ML models in production. In this session, we dive into Amazon SageMaker and container technologies and discuss how easy it is to integrate tasks such as model training and deployment into Kubernetes and Kubeflow-based ML pipelines. Further, we show how the new Amazon SageMaker Operators for Kubernetes make it easier to use Kubernetes to train, tune, and deploy ML models in Amazon SageMaker. Speaker: Arun Balaji, Partner Solutions Architect, AISPL
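
As a rough illustration of the workflow the session covers, the sketch below creates a TrainingJob custom resource from Python using the official Kubernetes client; once applied, the SageMaker Operators for Kubernetes reconcile it into an Amazon SageMaker training job. This is a minimal sketch, not the session's reference code: the API group, version, resource plural, and spec field names are assumptions about the operator's TrainingJob CRD (verify them against the CRDs installed in your cluster), and the role ARN, container image, and S3 paths are placeholders.

```python
# Minimal sketch: submit a SageMaker training job from Kubernetes by creating
# a TrainingJob custom resource that the SageMaker Operators for Kubernetes
# pick up and run in Amazon SageMaker.
#
# Assumptions (check against your cluster's installed CRDs):
#   - the operator registers a TrainingJob CRD under the group
#     "sagemaker.aws.amazon.com", version "v1", plural "trainingjobs"
#   - the spec fields mirror SageMaker's CreateTrainingJob parameters
#   - the role ARN, training image, and S3 URIs below are placeholders
from kubernetes import client, config

config.load_kube_config()  # use the current kubeconfig context

training_job = {
    "apiVersion": "sagemaker.aws.amazon.com/v1",
    "kind": "TrainingJob",
    "metadata": {"name": "xgboost-mnist-demo"},
    "spec": {
        "roleArn": "arn:aws:iam::111122223333:role/sagemaker-execution-role",
        "region": "us-east-1",
        "algorithmSpecification": {
            "trainingImage": "ACCOUNT.dkr.ecr.us-east-1.amazonaws.com/xgboost:1",
            "trainingInputMode": "File",
        },
        "outputDataConfig": {"s3OutputPath": "s3://my-bucket/output/"},
        "resourceConfig": {
            "instanceCount": 1,
            "instanceType": "ml.m5.xlarge",
            "volumeSizeInGB": 5,
        },
        "stoppingCondition": {"maxRuntimeInSeconds": 3600},
        "inputDataConfig": [
            {
                "channelName": "train",
                "dataSource": {
                    "s3DataSource": {
                        "s3DataType": "S3Prefix",
                        "s3Uri": "s3://my-bucket/train/",
                        "s3DataDistributionType": "FullyReplicated",
                    }
                },
            }
        ],
    },
}

# Equivalent to `kubectl apply -f trainingjob.yaml`: the operator watches the
# resource and creates and monitors the corresponding SageMaker training job.
api = client.CustomObjectsApi()
api.create_namespaced_custom_object(
    group="sagemaker.aws.amazon.com",
    version="v1",
    namespace="default",
    plural="trainingjobs",
    body=training_job,
)
```

After the resource is created, job progress surfaces back into the cluster, so it can be tracked with the usual Kubernetes tooling (for example, `kubectl get trainingjobs` and `kubectl describe trainingjob xgboost-mnist-demo`) rather than the SageMaker console alone.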