Use Cases > Protect GenAI Apps from the OWASP Top 10 LLM Risks 

Protect GenAI Apps from Cyber Threats

With increased risks like sensitive data leakage and reputational harm, GenAI security is not a ‘nice-to-have’ – it’s essential.

Why is GenAI Security Important?

Bloomberg Intelligence has forecasted the explosive growth of GenAI to USD$1.3 trillion by 2032 with a CAGR of 42%.

While the rapid adoption of GenAI technologies such as LLMs by OpenAI, Meta and Anthropic will help to power transformative business and technologies impact over the next few years, it has also introduced new security and safety challenges.

GenAI Security Challenges

New Attack Surface

Threat actors now target GenAI applications at every interaction stage – prompt, output etc. Organisations need to ensure their GenAI security measures are comprehensive.

Sensitive Data Leakage

Improper handling of inputs and outputs can lead to leaks of sensitive information, including personally identifiable information (PII) or other regulated data.

Hallucination

LLMs are prone to generating inaccurate or hallucinated responses that could misinform users

Reputational Harm

Prohibited or harmful responses from GenAI applications can erode customer trust and negatively impact your organisation’s credibility.

Go Beyond Default LLM Protection Layers

LLM foundation models have safety and security measures embedded into the LLM model by AI model providers. For example, Meta’s Llama Guard 3 Vision, is designed to safeguard content for both LLM inputs and responses. These measures allow the model to maintain ethical behaviour and safety standards.

However, these measures are insufficient – all models are susceptible to vulnerabilities such as jailbreaks, prompt injection attacks and data leakage.

According to Easy Jailbreak, LLM models have an average breach probability of around 62%.

Protection Against the OWASP Top 10 for LLM Applications 2025

In November 2024, a global community-led security initiative released its OWASP Top 10 for LLM Applications and GenAI.  This highlights several new and significant security risks that organisations need to be aware of and prepared for as they launch new applications powered by LLMs or GenAI.

A quick summary of the ten major security risks are as follows:

LLM01: Prompt Injection
Vulnerabilities occur when malicious prompts alter LLM behaviour, leading to unauthorised actions or harmful outputs. The risks include data leakage, privilege escalation, or critical decision manipulation. Some security measures are input/output filtering, privilege control, adversarial testing, and human oversight.
LLM02: Sensitive Information Disclosure
LLMs unintentionally expose confidential data. The risks of this are leakage of proprietary or sensitive information from datasets or prompts. Preventive measures include data sanitisation, output validation, and privacy-preserving architectures.
LLM03: Supply Chain Vulnerabilities
Compromises through third-party libraries, plugins, or dependencies in LLM pipelines. The risks include system compromise or malicious data/model poisoning. Some security measures are dependency audits, provenance tracking, and secure integration practices.
LLM04: Data and Model Poisoning:
Attackers manipulate training datasets or fine-tuned models to introduce biases or vulnerabilities. The risks of this are skewed outputs or exploitable models. Preventive measures include dataset validation, robust training practices, and ongoing model evaluation.
LLM05: Improper Output Handling
Poor management of LLM outputs causes unintended or unsafe usage. The risks include generation of harmful or biased content. Some security measures are output constraints, format validation, and multi-layer moderation systems.
LLM06: Excessive Agency
Overly autonomous LLMs can perform actions beyond intended scope. The risks of this are risky actions, data manipulation, or unauthorised system access. Preventive measures include implementing the least privilege principle, human-in-the-loop mechanisms, and strict permission boundaries.
LLM07: System Prompt Leakage
Leakage of sensitive instructions or system-level prompts to users. The risks include exposed operational logic or critical data. Some security measures are secure prompt isolation, encryption, and robust access controls.
LLM08: Vector and Embedding Weaknesses
Vulnerabilities in embedding-based methods like RAG (Retrieval-Augmented Generation). The risks of this are manipulated outputs or semantic failures. Preventive measures include embedding validation, tamper-proof mechanisms, and attack simulations.
LLM09: Misinformation
LLMs generate misleading or false information. The risks include damage to credibility, operational errors. Some security measures are grounded training data, accuracy checks, and user education.
LLM10: Unbounded Consumption
Resource overuse by LLMs due to inadequate limits. The risks of this are denial of service or excessive costs. Preventive measures include rate limiting, resource monitoring, and fail-safe mechanisms.
Previous slide
Next slide

WebOrion® Protector Plus, our purpose-built GenAI Firewall, helps provide defences against the OWASP Top 10 for LLM Applications. 

How WebOrion® Protector Plus Secures Common GenAI Apps

WebOrion® Protector Plus is designed to secure a wide range of GenAI use cases, including:

While RAG applications can improve an LLM model’s response accuracy, they are still prone to security risks such as prompt injection, jailbreaking, sensitive data leakage and off-topic responses.

WebOrion® Protector Plus can mitigate these risks:

  • Prompt Injection and Jailbreaking: Cloudsine’s ShieldPrompt™ technology consists of multiple layers of protection against prompt injection attacks and prevents jailbreaking with retokenization.
  • Sensitive Data Leakage: WebOrion® Protector Plus has powerful content safeguards to stop unauthorised access and leakage of confidential information.
  • Content Moderation and Evaluation: WebOrion® Protector Plus checks if your RAG application correctly understands the context of the user prompt and moderates the response to eliminate off-topic, harmful or inaccurate outputs.

According to G2, 16% of organisations already utilise chatbots while 55% plan to adopt them. Furthermore, the global chatbot market is expected to grow almost 200%, from USD$7.01 billion in 2024 to USD$20.81 billion by 2029.

With the use of AI chatbots on the rise, one security concern organisations need to be aware of is: false or inaccurate information.

To overcome this risk, our ShieldPrompt™ technology comes with response validation, ensuring that the LLM output is factual, aligned with the context, and compliant with predefined constraints.

With the proliferation of AI Agents, organisations utilising these agents face multiple security risks such as indirect and direct prompt injection attacks, jailbreaking attempts, hallucination and harmful responses.

WebOrion® Protector Plus can mitigate these risks by detecting and blocking malicious prompts. Furthermore, it has content guardrails to prevent sensitive data leakage and harmful outputs from the AI agent.

Take the first step towards securing your GenAI apps