Featured Posts

Indirect Prompt Injection: Manipulating LLMs Through Hidden Commands
AI Security

Exploring how attackers can manipulate LLMs through indirect prompt injection, with a hands-on walkthrough of PortSwigger's lab challenge.

More Featured Posts

Indirect Prompt Injection: Manipulating LLMs Through Hidden Commands

Apr 5, 2025

Compromising Real-World LLM-Integrated Applications with Indirect Prompt Injection

Apr 2, 2025

Can LLMs Find and Fix Vulnerable Software?

Jun 1, 2024

Tips and Tricks to tackle your Bug Bounty Hunter exam (cBBH) by Hack The Box

May 5, 2024
