LLM02: Insecure Output Handling

In our previous post, we explored Prompt Injection vulnerabilities. Now we will shift our focus to Insecure Output Handling, the second of the most critical vulnerabilities frequently encountered in Large Language Model (LLM) applications according to the OWASP Top 10.

Continuing our analysis, we will use the same model, Microsoft's Phi-2. For those who wish to follow along, I am sharing the corresponding Jupyter Notebook. Given the small size of this model, it is possible to run all the examples in Google Colab using an Nvidia Tesla T4 GPU. Alternatively, you can also execute the notebook on your local machine.

Open in Google Colab
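
If you prefer to set things up yourself, the following is a minimal sketch of how Phi-2 can be loaded with the Hugging Face transformers library. The exact configuration in the notebook may differ (revision, dtype, generation parameters), so treat this as an illustrative starting point rather than the notebook's verbatim setup.

```python
# Minimal sketch: load microsoft/phi-2 and generate a short completion.
# Assumes transformers, accelerate and torch are installed; half precision
# is used so the model fits comfortably on a Tesla T4 in Google Colab.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "microsoft/phi-2"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # half precision to reduce GPU memory usage
    device_map="auto",          # place the model on the GPU if one is available
)

prompt = "Explain in one sentence what insecure output handling means."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=80)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```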
