The 2025 OWASP Top 10 Risks for AI Applications

20 Feb 2025
Highlights the most critical security risks affecting real-world LLM and generative AI applications
No longer a futuristic concept, artificial intelligence (AI) and large language models (LLMs) have rapidly become integral components of modern business operations. Offering unprecedented capabilities to enhance productivity, improve customer interactions and drive innovative solutions, the same power that makes LLMs transformative across numerous sectors also makes them prime targets for exploitation by malicious actors. As the widespread adoption of AI and LLMs continues, understanding the security risks associated with their implementation has never been more important.
The Open Worldwide Application Security Project (OWASP), a globally recognised authority on application security, has recently unveiled the 2025 version of the OWASP LLM Top 10, a key community-driven resource for understanding and navigating LLM security challenges.
The OWASP LLM Top 10 project provides a ranked list of the most critical security risks affecting real-world LLM and generative AI applications. Through raising awareness about the most significant vulnerabilities and offering actionable insights to mitigate the unique risks impacting LLMs, the project aims to empower organisations to safeguard their LLM implementations effectively.
The 2025 edition of the OWASP LLM Top 10 features significant revisions compared to its predecessor, including the introduction of several new categories of risk, revamped existing categories, and updated mitigation strategies. These updates are a welcome acknowledgement and necessary response to the rapid evolution and advancement in LLM technology. The updates reflect OWASP’s commitment to ensuring the guidance remains both relevant and effective in addressing the emerging threats posed by the expanding use of LLMs across industries.
One standout entry, retaining its position at the top of the OWASP LLM Top 10 for 2025, is prompt injection. Through providing the LLM, such as a customer service chatbot, with cleverly crafted instructions, a malicious actor can attempt to manipulate the chatbot into performing unintended actions. While the nature and severity of successful prompt injection attacks are largely dependent on the business context, the unintended behaviour could facilitate a malicious actor revealing sensitive information, overriding or bypassing safeguards like profanity filters, or disseminating misinformation through content manipulation.
Whether you are a decision-maker, developer, or other security-minded professional, the OWASP LLM Top 10 for 2025 serves as an invaluable informative resource. To explore the complete list, visit the official OWASP LLM Top 10 for 2025 website.
The OWASP LLM Top 10 is not the only framework helping to guide AI innovation. Several other standards and regulations that will play a complementary role include:
- The EU AI Act: The use of artificial intelligence in the EU will be regulated by the AI Act. This regulation categorises AI applications based on the level of risk they pose to users and imposes stringent compliance and transparency requirements accordingly, ensuring that AI is used responsibly across sectors within the EU.
- ISO 42001: This international standard specifies requirements for establishing, implementing, maintaining, and continually improving an Artificial Intelligence Management System (AIMS) within organisations, helping to implement robust governance structures to ensure the responsible development and deployment.
- NIST AI Risk Management Framework: A voluntary resource designed to help organisations manage the unique risks posed by generative AI. The framework aims to promote the trustworthy and responsible development and use of AI technologies, and consists of four core functions: govern, map, measure and manage.
Recent announcements of significant global investment in AI indicate continued growth and evolution in this space. We look forward to observing how the OWASP LLM Top 10 and other frameworks keep pace, supporting organisations in harnessing AI benefits whilst safeguarding against the potential risks.