Optimising ML Inference on AWS Using Amazon SageMaker

Amazon SageMaker is a fully managed machine learning (ML) service that allows data scientists and developers to build, train, and deploy ML models for any use case, with fully managed infrastructure, tools, and workflows, helping reduce shadow IT when deploying ML models. In this session, we cover a range of Amazon SageMaker features, with a focus on its deployment capabilities. Amazon SageMaker provides multiple features to manage resources and optimise inference performance when deploying machine learning models, and we will see how these features can be leveraged to deploy and manage ML models at scale.
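
As a purely illustrative sketch of the deployment capabilities discussed, the code below deploys a trained model to a real-time inference endpoint using the SageMaker Python SDK. The container image URI, model artifact location, IAM role, and instance type are placeholders chosen for the example, not values from this session.

    # Minimal sketch: deploy a trained model to a real-time SageMaker endpoint.
    # All identifiers below (image URI, S3 path, role ARN, instance type) are
    # placeholders for illustration only.
    import sagemaker
    from sagemaker.model import Model

    session = sagemaker.Session()
    role = "arn:aws:iam::123456789012:role/SageMakerExecutionRole"  # hypothetical execution role

    model = Model(
        image_uri="<inference-container-image-uri>",       # inference container (built-in or custom)
        model_data="s3://<bucket>/<prefix>/model.tar.gz",   # trained model artifact in S3
        role=role,
        sagemaker_session=session,
    )

    # Provision a managed endpoint; instance type and count would be tuned
    # to the latency and throughput requirements of the workload.
    predictor = model.deploy(
        initial_instance_count=1,
        instance_type="ml.m5.xlarge",
    )

    # Invoke the endpoint, then delete it to avoid ongoing charges.
    # result = predictor.predict(payload)
    # predictor.delete_endpoint()

Beyond real-time endpoints, the same Model abstraction can also target serverless, asynchronous, or batch inference, which is typically where the resource-management and cost trade-offs mentioned above come into play.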

Speakers:

  • Mark Shoebridge, AI & ML Business Development ANZ, AWS
  • Romina Sharifpour, Senior AI/ML Specialist SA, AWS
  • Sara van de Moosdijk, Senior AI/ML Partner SA, AWS
  • Shahin Namin, Machine Learning/Computer Vision Consultant, DiUS

Target Audience: Data Scientist, Head of Analytics, DevOps
