In this video, Lakera AI showcases its new Policy Control Center for securing generative AI applications. The webinar demonstrates how Lakera Guard, the company's Gen AI security platform, integrates with applications to protect Large Language Models (LLMs) from prompt injections, inappropriate content, and data leaks through customizable policies and detectors. The Policy Control Center lets users fine-tune detector sensitivity, create custom detectors, and integrate with existing security workflows for comprehensive threat monitoring and analysis.
Metadata
- Type of Content: Webinar Recording
- Domain: youtube.com
- Date Published: October 16, 2024
- URL: https://www.youtube.com/watch?v=2yn7TjI7OAY
Summary
- Lakera launches its new Policy Control Center for securing generative AI applications.
- Lakera Guard acts as a security layer, screening both LLM inputs and outputs to detect threats such as prompt injections, jailbreaks, and sensitive data leaks (see the sketch after this list).
- The platform offers customizable policies with 16 out-of-the-box detectors across categories such as prompt defense, content moderation, PII detection, and unknown-link detection.
- Users can adjust the sensitivity of each detector and create custom ones with regular expressions to cover application-specific needs.
- Lakera Guard integrates seamlessly with any LLM provider, offering flexibility and future-proofing for evolving AI models.
- The platform runs in monitoring mode, surfacing insights so developers can build custom workflows around the threats it identifies.
- Lakera emphasizes a collaborative approach between security teams and developers, simplifying the security implementation process.
- Real-world examples demonstrate how to tailor policies for chatbots, RAG applications, summarization tools, and third-party LLM interactions.
- The webinar concludes by highlighting the ease of integration and the availability of resources like documentation and demos.
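To make the input/output screening and monitoring-mode points above concrete, here is a minimal Python sketch of wrapping an LLM call with a guard check on both sides. The endpoint path, request schema, and `flagged` response field are assumptions for illustration; consult Lakera's documentation for the actual API. Note that the block/allow decision deliberately lives in application code, since Lakera Guard never blocks on your behalf.

```python
import os
import requests

# Hypothetical endpoint and schema for illustration only -- consult
# Lakera's documentation for the real API path and response format.
LAKERA_GUARD_URL = "https://api.lakera.ai/v2/guard"  # assumed path
API_KEY = os.environ["LAKERA_GUARD_API_KEY"]         # assumed env var


def screen(text: str, role: str) -> dict:
    """Send one message to the guard service and return its verdict."""
    response = requests.post(
        LAKERA_GUARD_URL,
        json={"messages": [{"role": role, "content": text}]},  # assumed schema
        headers={"Authorization": f"Bearer {API_KEY}"},
        timeout=10,
    )
    response.raise_for_status()
    return response.json()


def guarded_chat(user_prompt: str, call_llm) -> str:
    """Screen the prompt, call any LLM provider, then screen the completion.

    The guard only reports what it detected (monitoring mode); whether to
    block is a decision this application code makes itself.
    """
    verdict = screen(user_prompt, role="user")
    if verdict.get("flagged"):  # assumed response field
        return "Sorry, that request was flagged by our security policy."

    completion = call_llm(user_prompt)  # model-agnostic: any provider works

    verdict = screen(completion, role="assistant")
    if verdict.get("flagged"):
        return "The model's answer was withheld by our security policy."
    return completion
```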
What makes this novel or interesting
- Fine-grained control over AI security: Lakera Guard's Policy Control Center allows for highly customized security measures, going beyond basic content moderation and addressing the unique challenges of Gen AI applications.
- Developer-first approach: The platform focuses on simplifying the integration process for developers, abstracting away the complexities of AI security and enabling them to focus on building their applications.
- Future-proof architecture: Lakera Guard is designed to adapt to the rapid evolution of AI models and emerging threats, providing a sustainable solution for securing Gen AI applications in the long run.
Verbatim Quotes
- Topic: Lakera Guard and its capabilities
- "At the core, Lakera Guard ensures secure inputs."
- "We're providing that total, holistic protection around your application and around the LLM to make sure that nothing bad is going in or out."
- "Lakera is completely agnostic to the underlying model that you're using."
- "Lakera is in monitoring mode. It's never going to block a prompt on your behalf. You will build workflows based on the intelligence it provides."
- Topic: Policy Control Center features
- "The major upgrades and updates here are related to policies and projects."
- "The first thing that you can do with policies is decide we want to run this set of defense detectors."
- "You can then build additional detectors for that using regular expression."
- "The easiest way to think about a project is perhaps on an application-by-application basis."
- Topic: Benefits and use cases
- "It's worth also pointing out, again, that we can protect both LLM inputs and LLM outputs. This is a really important part of creating a defense in depth."
- "One thing we—We also want to emphasize is this is a future-proof architecture. AI is moving and evolving really quickly, and so we deliberately designed this in a way that it's easy for us to add new detectors."
- "It's worth bearing in mind who the audience is here."
- "We recommend creating a custom detector with your system prompt and possibly even little sections of your system prompt, particularly the sensitive parts. So, you can create a filter there that's stops anyone trying to extract that from your model."
How to describe the concept
Imagine a security guard stationed between you and a powerful AI. This guard, Lakera Guard, checks everything you ask the AI and everything the AI tries to tell you, making sure nothing dangerous gets through. Lakera's new system gives you tools to customize what the guard looks for, so you can be sure your AI is used safely and responsibly.