At the Black Hat security conference, researchers from Zenity Labs revealed a serious flaw in OpenAI’s ChatGPT Connectors feature, which links the AI to external services such as Google Drive. The team demonstrated that a single “poisoned” document can trigger an indirect prompt injection attack that silently extracts sensitive information, with no user interaction required.

In their proof-of-concept exploit, dubbed AgentFlayer, the attacker embeds a malicious prompt as near-invisible white text inside what appears to be an ordinary document. When ChatGPT processes that file through Connectors, it follows the hidden instructions and pulls API keys and other secrets out of the victim’s connected Drive account. Because the exploit is “zero-click,” the victim never has to open or otherwise interact with the file for the data to leak.
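The hiding technique itself is simple. The sketch below illustrates it with the python-docx library; the instruction string and file name are placeholders for illustration, since Zenity Labs has not published its actual payload.

```python
# A minimal sketch of the hiding technique described above, built with the
# python-docx library. The instruction string and file name are placeholders;
# the actual AgentFlayer payload was not published.
from docx import Document
from docx.shared import Pt, RGBColor

HIDDEN_INSTRUCTION = "<attacker's instructions to the model would go here>"

doc = Document()
doc.add_paragraph("Q3 planning notes: everything on track.")  # visible decoy text

# White 1pt text is effectively invisible to a human reviewer, but the raw
# characters remain in the file and are ingested when the model reads it.
run = doc.add_paragraph().add_run(HIDDEN_INSTRUCTION)
run.font.color.rgb = RGBColor(0xFF, 0xFF, 0xFF)  # white text on a white page
run.font.size = Pt(1)

doc.save("poisoned.docx")
```

Any format that separates rendered appearance from raw text, such as HTML comments or PDF layers, supports the same trick.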

This vulnerability highlights the growing security risk as AI systems become more integrated with user data and external tools. Experts warn that connecting powerful language models to outside systems increases the attack surface and that robust protections against prompt injection are essential. OpenAI has since deployed mitigations for this specific exploit, but the incident underscores larger concerns about handling sensitive data in AI-driven environments.
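OpenAI’s mitigations have not been published, but one plausible defensive layer, sketched below under that assumption, is to scan inbound documents for visually hidden text before their contents ever reach the model. The file name and the 4pt size threshold are illustrative choices, not part of any documented countermeasure.

```python
# A hypothetical defensive check, not OpenAI's published mitigation:
# flag .docx runs that a human reader would effectively never see.
from docx import Document
from docx.shared import RGBColor

WHITE = RGBColor(0xFF, 0xFF, 0xFF)

def hidden_runs(path: str):
    """Yield text from runs rendered in white or below 4pt."""
    for paragraph in Document(path).paragraphs:
        for run in paragraph.runs:
            white = run.font.color.rgb == WHITE  # rgb is None when no explicit color is set
            tiny = run.font.size is not None and run.font.size.pt < 4
            if white or tiny:
                yield run.text

if __name__ == "__main__":
    for text in hidden_runs("poisoned.docx"):
        print("hidden text:", text)
```

A real filter would need to cover more evasions (tiny fonts in images, matching non-white backgrounds, other file formats), but even a crude scan like this shifts the attack from zero-effort to cat-and-mouse.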
