Alejandro Saucedo
Alejandro is the Chief Scientist at the Institute for Ethical AI & Machine Learning, where he contributes to policy and industry standards on the responsible design, development and operation of AI, including the fields of explainability, GPU acceleration, privacy-preserving ML and other key machine learning research areas. He is also Director of Engineering at Seldon Technologies, where he leads teams of machine learning engineers focused on the scalability and extensibility of machine learning deployment and monitoring products. With over 10 years of software development experience, Alejandro has held technical leadership positions across hyper-growth scale-ups and has a strong track record of building cross-functional teams of software engineers. He currently serves as a governing council Member-at-Large at the Association for Computing Machinery and as Chairperson of the GPU Acceleration Kompute Committee at the Linux Foundation.
LinkedIn: https://linkedin.com/in/axsaucedo
Twitter: https://twitter.com/axsaucedo
Github: https://github.com/axsaucedo
Website: https://ethical.institute/
Session
Every phase of the end-to-end machine learning lifecycle exposes a plethora of security risks that often go unnoticed by machine learning practitioners. In this talk we uncover the most critical (and common) security risks in the machine learning lifecycle, covering both the underlying concepts and practical examples of how these risks can be exploited, resolved and mitigated (analogous to the OWASP Top 10 industry standard).
Throughout the talk we will use a hands-on example in which we train, package and deploy a model from scratch, outlining the key risk areas at each step together with the tools and practices that can be used to mitigate them. By the end of this talk, machine learning practitioners will have a robust intuition for the importance of security best practices throughout the machine learning lifecycle, together with tools and frameworks that can help mitigate undesirable outcomes caused by security flaws.
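As a taste of the kind of risk the hands-on example covers, consider model packaging: a Python model artifact serialized with pickle will execute arbitrary code when deserialized, so loading an untrusted artifact is itself an exploit path. The sketch below illustrates this and one common mitigation, pinning artifacts to a trusted digest; the class, digest value and helper function are hypothetical and not taken from the talk itself.

import hashlib
import pickle


class MaliciousPayload:
    # pickle calls __reduce__ when loading, so an attacker-crafted file
    # can run any command on the machine that deserializes it.
    def __reduce__(self):
        import os
        return (os.system, ("echo 'arbitrary code executed during model load'",))


# The attacker only needs to hand you a file; no model code is required.
tampered_artifact = pickle.dumps(MaliciousPayload())

# Mitigation sketch: record a digest of the artifact at training time and
# refuse to load anything that does not match it.
TRUSTED_SHA256 = "0" * 64  # placeholder for the digest recorded at training time


def load_model(artifact: bytes, expected_sha256: str):
    digest = hashlib.sha256(artifact).hexdigest()
    if digest != expected_sha256:
        raise ValueError(
            f"Artifact digest {digest} does not match the trusted digest; refusing to load"
        )
    return pickle.loads(artifact)


try:
    load_model(tampered_artifact, TRUSTED_SHA256)
except ValueError as err:
    print(err)  # the tampered artifact is rejected before deserialization

Digest pinning only guarantees integrity of a known-good artifact; the talk also covers complementary practices such as safer serialization formats and dependency scanning.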