Matthew Purcell
Matthew Purcell is a Senior Technical Trainer at Amazon Web Services specialising in artificial intelligence and machine learning. His passion is making advanced technologies approachable for everyone, which he does at AWS through teaching highly technical courses and also developing educational content - including for AWS DeepRacer, AI/ML workshops for AWS Summits and AWS re:Invent, authoring AI/ML content for Udacity, plus creating a variety of new AWS training courses providing approachable introductions to generative AI.
Matthew has been a long-time attendee at BSides Canberra, having attended every conference since its inception in 2016.
Session
Large Language Models (LLMs) have emerged as a transformative generative AI technology, powering a wide range of applications from conversational AI to content generation. However, as with any powerful tool, LLMs are not immune to vulnerabilities and potential exploitation. This talk delves into how prompts can be misused to extract sensitive information, inject malicious content, or manipulate the model's outputs in unintended ways. Importantly, we also discuss how you can mitigate these risks.
Through real examples and live demonstrations, we'll explore techniques like prompt injection attacks, data leakage exploits, and adversarial prompting. You'll witness firsthand how carefully crafted prompts can bypass security measures, access restricted information, or trigger unexpected behaviors, highlighting the critical need for robust security measures and responsible development practices.
This session will equip you with a deeper understanding of the potential vulnerabilities in LLMs, allowing you to stay ahead of emerging threats and learn best practices for securing these powerful models against exploitation...and avoid the prompting pandemonium that can ensue if LLMs are not appropriately secured.