LLM02: Insecure Output Handling
In our previous post, we explored Prompt Injection vulnerabilities. Now we shift our focus to Insecure Output Handling, the second entry in the OWASP Top 10 for Large Language Model (LLM) applications and one of the most critical vulnerabilities encountered in practice.
Continuing our analysis, we will use the same model, Microsoft's Phi-2. For those who wish to follow along, I am sharing the corresponding Jupyter Notebook. Given the small size of this model, all of the examples can be run in Google Colab on an Nvidia Tesla T4 GPU. Alternatively, you can also execute the notebook on your local machine.
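As a minimal sketch of that setup, the snippet below detects whether a CUDA-capable GPU (such as Colab's T4) is available and falls back to the CPU otherwise. The `microsoft/phi-2` checkpoint name and the commented-out `transformers` loading call reflect the standard Hugging Face API; the exact loading code in the accompanying notebook may differ.

```python
import importlib.util


def pick_device() -> str:
    """Return 'cuda' when a GPU-enabled torch install is present, else 'cpu'."""
    if importlib.util.find_spec("torch") is not None:
        import torch
        if torch.cuda.is_available():
            return "cuda"
    return "cpu"


# Loading Phi-2 itself (commented out here to avoid the multi-GB download):
# from transformers import AutoModelForCausalLM, AutoTokenizer
# tokenizer = AutoTokenizer.from_pretrained("microsoft/phi-2")
# model = AutoModelForCausalLM.from_pretrained(
#     "microsoft/phi-2",
#     torch_dtype="auto",          # pick fp16 on GPU, fp32 on CPU
#     device_map=pick_device(),    # 'cuda' on Colab's T4, 'cpu' locally
# )

print(pick_device())
```

On a Colab T4 runtime this prints `cuda`; on a GPU-less local machine it prints `cpu`.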