How adversarial noise protects my selfies from the (AI-based deepfake) TikTok dance trend
2024-09-26, Main Track

"In the age of AI, where a single photo can make realistic deep fakes, how can you protect your personal photos?"

... were my thoughts when I saw the TikTok AI dance filter.

Join me as I use AI security techniques to prevent my photos from being made into AI-based deepfakes dancing to Jojo Siwa!

This session will cover the practice of AI security through the following example use case: protecting my personal photos against misuse by AI-based deepfake generation. Through an AI security lens, I will describe the different components of an AI-based deepfake model. Then I will walk you through an adversarial machine learning method that alters the photos so the deepfake models can't use them. We'll then apply what we learnt to leading deepfake models, including the famous TikTok dance filter, to see what happens!
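The abstract doesn't name the specific perturbation method the talk demonstrates, but the general idea can be sketched with a single FGSM-style step: add an imperceptibly small, signed-gradient nudge that pushes the photo's identity embedding somewhere a deepfake model won't expect. Everything here is invented for illustration (the linear `embed` stand-in for a face encoder, the random decoy embedding, the ε budget); a real attack would backpropagate through the actual deepfake model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for a deepfake model's face encoder:
# a fixed random linear map from pixels to an "identity embedding".
W = rng.normal(size=(8, 64))

def embed(x):
    return W @ x.ravel()

def fgsm_cloak(x, eps=0.03):
    """One FGSM-style step: nudge every pixel by at most eps so the
    embedding moves toward a random decoy identity instead of the real one."""
    decoy = rng.normal(size=W.shape[0])
    # Gradient of 0.5 * ||W x - decoy||^2 with respect to the pixels.
    grad = (W.T @ (embed(x) - decoy)).reshape(x.shape)
    # Step against the gradient, then keep pixels in the valid [0, 1] range.
    return np.clip(x - eps * np.sign(grad), 0.0, 1.0)

image = rng.uniform(size=(8, 8))   # toy 8x8 grayscale "selfie"
cloaked = fgsm_cloak(image)

# The change is bounded per pixel, yet the identity embedding has shifted.
print(np.max(np.abs(cloaked - image)))
print(np.linalg.norm(embed(cloaked) - embed(image)))
```

The per-pixel budget `eps` is what keeps the noise invisible to humans while still steering the model's internal representation, which is the core trade-off the session explores.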

As AI systems are rapidly developed and adopted, understanding how they can be exploited is important. With this deepfake example, I aim to show you that AI security techniques can provide an additional tool in our cyber toolbox.

This will be a practical and fun session for both users and developers of AI systems - and anyone else who is interested in learning about the surprising ways machine learning models can fail!

Tania Sadhani is an AI security researcher with Mileva Security Labs, where she investigates and addresses the unique vulnerabilities of machine learning systems.