Security vulnerabilities in ML workflows are often overlooked by cybersecurity personnel because they influence your broader systems and decision processes only indirectly. As attacks become more sophisticated, manipulating machine learning outputs to drive decisions that benefit the attacker will become more common, and prevention starts in your machine learning environment. Join Eliiza as they discuss the ways AWS can support the prevention of such adversarial attacks.
Speaker: Brendan Nicholls, Practice Lead, Machine Learning Engineering. To stay connected, join Eliiza at the monthly Melbourne MLOps Community Meetup or contact Brendan on LinkedIn.