- Advertisement -spot_img

Top 5 This Week

Related Posts

Critical ChatGPT Vulnerability “ShadowLeak” Could Steal Gmail Data with a Single Email

Cybersecurity experts from Radware have uncovered a serious vulnerability in ChatGPT’s Deep Research tool, which could allow attackers to steal sensitive Gmail data with just one malicious email. The flaw, dubbed “ShadowLeak”, was patched by OpenAI on September 3, but researchers warn it highlights a broader risk in AI-powered integrations.

The ShadowLeak vulnerability worked by embedding hidden instructions in the HTML code of an email, often using white text on a white background, CSS tricks, or metadata manipulation. These elements were invisible to the human recipient but fully readable to the AI model. When Deep Research later analyzed the inbox, it would unknowingly execute the attacker’s commands, sending confidential data to a server controlled by the hacker.

What made ShadowLeak especially dangerous was its server-side execution. The exploit ran from OpenAI’s own infrastructure, meaning victims’ devices showed no suspicious outbound connections. To the end user, the only activity appeared to be a harmless AI request like “summarize today’s emails.” Behind the scenes, however, private information such as personal data, business records, legal correspondence, customer files, and even login credentials could be exfiltrated.

Radware stressed that this vulnerability wasn’t limited to Gmail. Any third-party integration that allows ChatGPT to scan private documents could theoretically be exposed to similar risks if input sanitization is insufficient. That expands the potential threat to enterprise tools, corporate communications, and even legal or medical data pipelines.

The timeline of events is also telling. Radware discovered the flaw on June 18, 2025, and responsibly disclosed it to OpenAI. The company then released a patch on September 3, closing the gap before ShadowLeak could be exploited at scale. While no large-scale attacks have been confirmed, the proof-of-concept demonstrates how AI can be manipulated in unexpected ways.

Conclusion:
The discovery of ShadowLeak underscores the growing security challenges of AI-driven platforms. As tools like ChatGPT become deeply embedded into corporate and personal workflows, they also become high-value targets for cybercriminals. The lesson is clear: AI security requires constant vigilance, robust input filtering, and proactive collaboration between AI providers and cybersecurity researchers. OpenAI’s swift response is encouraging, but ShadowLeak is a reminder that the next-generation cyber battlefield is not just in human systems—it’s in the AI systems we trust with our most private data.

Popular Articles