Academic Research
Compromising Real-World LLM-Integrated Applications with Indirect Prompt Injection
This research introduces Indirect Prompt Injection (IPI), a method to remotely manipulate Large Language Models (LLMs) via malicious prompts in data sources, risking data theft, misinformation, and much more, highlighting the need for stronger defenses.