Microsoft’s security researchers have uncovered a sophisticated new backdoor, dubbed SesameOp, that abuses the OpenAI Assistants API as a covert command-and-control (C2) channel. The discovery highlights a troubling trend in cybercrime: the repurposing of legitimate AI services as C2 infrastructure to slip past traditional security detection systems.
According to Microsoft’s Detection and Response Team (DART), the malware was first identified during an investigation into a July 2025 cyberattack. The SesameOp backdoor granted attackers persistent access to compromised environments while remaining nearly invisible for months. Instead of using conventional malicious infrastructure, which can be easily traced or blocked, the attackers cleverly used OpenAI’s Assistants API to manage infected devices remotely.
A New Era of AI-Driven Cyber Espionage
What makes SesameOp particularly alarming is its novel use of the OpenAI Assistants API as both a storage and relay mechanism for encrypted commands. Once installed, the malware retrieves compressed, encrypted instructions from the API, decrypts them locally, and executes them on targeted systems. The data stolen during these operations is re-encrypted and sent back through the same API channel, blending seamlessly with legitimate network traffic.
In its report, Microsoft explained, “Instead of relying on traditional C2 servers, the threat actor behind SesameOp uses OpenAI as a stealthy communication channel to orchestrate malicious activities within compromised environments.” This approach allows attackers to bypass firewalls and intrusion detection systems, as traffic between infected devices and OpenAI’s servers appears legitimate.
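To see why this traffic is so hard to single out, it helps to look at what a routine Assistants API call looks like from the network’s point of view. The minimal sketch below uses the official openai Python SDK to list messages from a thread; the thread ID is a placeholder, not anything tied to the attack. At the firewall, this is simply HTTPS to api.openai.com, the same endpoint any legitimate integration would contact.

```python
# Minimal sketch of a routine Assistants API read with the official
# openai Python SDK. Nothing at the network layer distinguishes this
# from the polling traffic a tool like SesameOp would generate.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# "thread_abc123" is a placeholder ID used only for illustration.
messages = client.beta.threads.messages.list(thread_id="thread_abc123")

for message in messages:
    # Each message is ordinary API data; whether its content is benign
    # text or an encrypted blob is invisible to perimeter defenses.
    print(message.id, message.role)
```

The point of the sketch is not the API call itself but its signature on the wire: encrypted traffic to a widely trusted cloud service, which is exactly what makes API-based C2 blend in.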
Technical Breakdown of the Attack Chain
The attack chain uncovered by DART included a heavily obfuscated loader and a .NET-based backdoor. The malware was deployed through .NET AppDomainManager injection into several Microsoft Visual Studio utilities, a technique that avoids detection by blending into normal developer processes. Once embedded, SesameOp maintained persistence through internal web shells and malicious processes disguised as legitimate ones. These techniques indicate the malware was built for long-term espionage rather than quick, opportunistic attacks.
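AppDomainManager injection typically relies on an application configuration file (or environment variables) that points a legitimate signed .NET executable at an attacker-supplied assembly. One way defenders can hunt for this is to look for config files that declare a custom AppDomainManager, as in the hedged Python sketch below. The search root and indicator strings are illustrative assumptions based on how the technique is generally configured, not details taken from Microsoft’s report.

```python
# Hedged hunting sketch: flag application config files that set a custom
# AppDomainManager, a known vector for .NET AppDomainManager injection.
# The search root is an example; adapt paths to your own environment.
import pathlib
import re

SEARCH_ROOT = pathlib.Path(r"C:\Program Files")  # example root, adjust as needed
PATTERN = re.compile(r"appDomainManager(Assembly|Type)", re.IGNORECASE)

def find_suspicious_configs(root: pathlib.Path):
    """Yield *.exe.config files that declare an AppDomainManager override."""
    for config in root.rglob("*.exe.config"):
        try:
            text = config.read_text(errors="ignore")
        except OSError:
            continue
        if PATTERN.search(text):
            yield config

if __name__ == "__main__":
    for hit in find_suspicious_configs(SEARCH_ROOT):
        # A custom AppDomainManager is rare in ordinary software; each hit
        # deserves manual review against a known-good baseline.
        print(hit)
```

Any match should be compared against a baseline of known-good software before being treated as an indicator of compromise.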
Importantly, Microsoft confirmed that SesameOp does not exploit any vulnerabilities within OpenAI’s systems. Instead, it abuses legitimate features of the Assistants API to hide its activity. Microsoft and OpenAI worked together to identify the compromised accounts and disable the API keys used in the attack, effectively cutting off the threat actors’ access.
Microsoft’s Response and Mitigation Guidance
Following the incident, Microsoft urged organizations to strengthen their endpoint and network defenses. Recommended actions include auditing firewall logs for suspicious traffic patterns, enabling tamper protection, and running endpoint detection and response (EDR) in block mode. Security teams should also closely monitor outbound connections to cloud-based APIs, since legitimate platforms can now serve as unintended channels for attack traffic.
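For the firewall-log review, one lightweight starting point is to compare observed outbound connections to AI service endpoints against the set of hosts that have a known business reason to make them. The Python sketch below assumes a CSV log export with src_host and dest_host columns and a hand-maintained allowlist; both are illustrative assumptions rather than part of Microsoft’s published guidance.

```python
# Hedged sketch: review exported proxy/firewall logs for hosts reaching
# api.openai.com that are not on an approved list. The CSV column names
# ("src_host", "dest_host") and the allowlist are assumptions; map them
# to whatever fields your own logging pipeline produces.
import csv

APPROVED_HOSTS = {"dev-workstation-01", "ml-build-server"}  # example allowlist
WATCHED_DOMAINS = {"api.openai.com"}

def unexpected_api_callers(log_path: str):
    """Yield (source host, destination) pairs worth investigating."""
    with open(log_path, newline="") as handle:
        for row in csv.DictReader(handle):
            src = row.get("src_host", "")
            dest = row.get("dest_host", "")
            if dest in WATCHED_DOMAINS and src not in APPROVED_HOSTS:
                yield src, dest

if __name__ == "__main__":
    for src, dest in unexpected_api_callers("firewall_export.csv"):
        print(f"review outbound traffic: {src} -> {dest}")
```

A hit from this kind of triage is not proof of compromise, but a server with no AI workload repeatedly calling an AI API is exactly the pattern Microsoft’s guidance asks teams to scrutinize.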
Conclusion: The Blurring Line Between AI and Cybercrime
The SesameOp malware marks a significant escalation in how threat actors exploit artificial intelligence platforms. By using the OpenAI Assistants API as a covert channel, attackers demonstrated that even legitimate AI tools can be turned into instruments of espionage. Microsoft’s quick collaboration with OpenAI shows the importance of industry partnerships in combating emerging AI-powered cyber threats. As organizations increasingly rely on AI for productivity and automation, the need for proactive security measures has never been greater.