
Featured Posts

Compromising Real-World LLM-Integrated Applications with Indirect Prompt Injection
Academic Research

This research introduces Indirect Prompt Injection (IPI), a method for remotely manipulating Large Language Models (LLMs) by planting malicious prompts in the data sources they process, enabling data theft, misinformation, and other attacks, and highlighting the need for stronger defenses.

More Featured Posts

Compromising Real-World LLM-Integrated Applications with Indirect Prompt Injection

Apr 2, 2025

Can LLMs Find and Fix Vulnerable Software?

Jun 1, 2024

Tips and Tricks to tackle your Bug Bounty Hunter exam (cBBH) by Hack The Box

May 5, 2024

Introduction to AI Security Course by Lakera AI

Apr 27, 2024

