Researchers at Radware have discovered a new server-side data theft vulnerability, dubbed “ShadowLeak,” that targets OpenAI’s ChatGPT, specifically its Deep Research feature. This is a zero-click attack, meaning it requires no user interaction, making it a particularly stealthy and dangerous threat.
What is Server-Side Data Theft?
Server-side data theft refers to an attack where a threat actor manipulates a server into making unintended requests to internal or external resources on their behalf. The attack exploits a vulnerability in the server’s functionality, such as a Server-Side Request Forgery (SSRF), to steal data directly from the server.
Unlike client-side attacks that target a user’s browser, server-side attacks operate within the server’s trusted environment, making them difficult to detect. In the case of ShadowLeak, the attack leverages the autonomous capabilities of an AI agent to exfiltrate data from OpenAI’s cloud servers.

Key Points
Zero-Click Attack: The ShadowLeak attack does not require the user to click a malicious link or open an infected file. An attacker sends a specially crafted email that, when processed by the Deep Research agent, instructs it to silently collect and send data back to a hacker-controlled URL.
Target: The vulnerability specifically targets ChatGPT’s Deep Research capability, which has access to enterprise tools like Gmail, Google Drive, Outlook, and Microsoft Teams.
Stealthy Nature: The attack leaves no clear traces because the data transfer happens directly from OpenAI’s servers and does not pass through the user’s computer or network. This makes it a significant challenge for traditional security solutions to detect.
Enterprise Risk: The attack highlights a new class of risks for businesses that integrate AI models with their internal, sensitive data sources.
Impact
The vulnerability allows attackers to exfiltrate sensitive user data, including private information from emails and other enterprise services. This can lead to a significant data breach without the user ever being aware their information was compromised. The fact that the attack is zero-click and server-side means it can bypass standard security measures, posing a substantial threat to companies relying on these AI integrations.
Solutions
- Vulnerability Remediation: OpenAI was notified of the vulnerability on June 18 and has since patched it, confirming the attack no longer works.
- Agent Behavior Monitoring: Radware recommends that security teams implement continuous monitoring of AI agent behavior to ensure its actions and intent align with the user’s original goals. This proactive approach can help detect and block deviations from legitimate tasks.
- Security for AI Deployments: The security firm emphasizes the need for security leaders and developers to re-evaluate safeguards around their AI deployments and SaaS platforms, recognizing the new threat vectors posed by autonomous agents.
Company View
According to a SecurityBrief Australia article, OpenAI’s Vice President of Product for ChatGPT, Nick Turley, was quoted in an August 2025 interview saying the platform has 5 million paying business users, highlighting the wide potential exposure to this type of exploit if mitigations are not in place. The company acted promptly to fix the vulnerability after being notified.