The adoption of ML models is rising across all industries, and as a side effect, malicious ML models have emerged as a new threat: they can compromise your systems by executing code the moment they are loaded. In this session, you will learn how these attacks work and how to protect yourself from them. You will see the results of a large-scale scan of ML models from the Hugging Face repository and the impact of the malicious models that were found. You will also learn MLOps best practices for applying security controls, scanning, and remediation actions to safeguard your ML models and systems. This session is essential for anyone who works with ML models or maintains MLOps tooling, as these attacks pose a serious risk to any modern organisation.
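One common mechanism behind such attacks (not necessarily the specific cases covered in this session) is Python's pickle format, which many ML frameworks use for model serialization. A pickled object can specify an arbitrary callable to be invoked during deserialization, so merely loading an untrusted model file can run attacker-controlled code. A minimal, harmless sketch:

```python
import pickle

# pickle allows an object to define __reduce__, which names ANY callable
# (plus its arguments) to be invoked during deserialization.
class MaliciousModel:
    def __reduce__(self):
        # A real attack would return something like (os.system, ("...",));
        # we use print as a harmless stand-in to show code runs at load time.
        return (print, ("arbitrary code executed during model load!",))

payload = pickle.dumps(MaliciousModel())

# Simply *loading* the "model" triggers the embedded call:
pickle.loads(payload)
```

This is why scanning model artifacts before loading them, or preferring serialization formats that cannot embed executable code (such as safetensors), is a key MLOps safeguard.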
Harel Avissar
JFrog
Harel Avissar is Director of Product at JFrog, working on the next leaps in DevSecOps and particularly on how the JFrog Platform arms developers with the information and knowledge they need to build software they can trust. With over 15 years of cybersecurity and leadership experience, Harel has seen numerous times how new technologies collide with malicious actors waiting to take advantage of new opportunities. With the rise of AI, it would be naive to assume things will go differently, wouldn't it? So let's leap ahead of the next software supply chain attack!