For the first time in AI agent, security defects may allow the hacker to attack the user via email
Security researchers have discovered the first zero-click AI vulnerability in the Microsoft 365 Copilot AI agent, highlighting a way of stealing data through emails without user conversations for the attackers. The defect is now fixed.
Listen to the story

In short
- Zero-clicic vulnerability exposes sensitive data from Microsoft Copilot
- The attack exploited how AI recovered and processed commercial emails
- Microsoft decided the issue; No real world attacks were detected
In the first-first discovery, cyber security researchers have identified a major security defect in the Microsoft 365 Copilot AI agent. The vulnerability called Ecolak, sent only an email to the attackers allowed to quietly steal sensitive data from the user environment. No clicks, downloads or user tasks were required. The issue was exposed by researchers at AIM Labs in January 2025 and was reported to Microsoft. In May, the tech veteran fixed the blame server-side, which means that users did not need to take any action. Microsoft also confirmed that no customer was affected, and there is no evidence that the blame was used in real -world attacks.
Nevertheless, the search is a significant twist for AI safety, as Ecolak is considered the first zero-click AI vulnerability to affect a large language model-based assistant.
How Ecolak Attack works
Microsoft 365 Copilot is made in office apps such as Word, Excel, Outlook and Teams. It uses AI to generate materials, analyze data and answer questions using internal documents, emails and chats. It depends on Openai’s model and working on the microsoft graph. Ecolak targeted how it processes information from emails and documents while answering assistant user questions.
Here is how the attack worked:
-An attacker sends an email to the target like a business. The email has a text that looks normal but hides a particular signal, designed to confuse the AI ​​assistant.
-When the user later asks questions related to Copilot, the system reinforces the earlier email using its recovery-oriented generation (RAG) engine, thinking that it is relevant to the query.
At this point, the hidden signal is active. It directly instructs AI to extract internal data and keep it in a link or image.
When the email is displayed, the embedded link is automatically accessed by the browser -sending internal data to the useful server has gone wrong without the user.
Some Markdown image formats used in the attack are designed to send automated requests to browsers, making it possible to exfiltrate this data.
Although Microsoft uses material safety policies (CSP) to block requests for unknown websites, services such as Microsoft Teams and Sharepoint are rely default. This allowed the attackers to bypass some rescues.
A new kind of AI vulnerability
Ecolak is more than just one software bug – it introduces a new class of dangers known as LLM scope violations. The term refers to how big language models handle the information without direct instructions by a user and leak information. In its report, AIM Labs warned that such weaknesses are particularly dangerous in the enterprise environment, where AI agents are deeply integrated into internal systems.
“This attack chain shows a new exploitation technique … taking advantage of internal model mechanics,” said AIM Labs. The team believes that the same risk may be present in other raga-based AI systems, not only of Microsoft. Because Ecolak did not require any user interaction and could work completely automated methods, AIM Labs says it highlights the hazards that could be more common because AI becomes more embedded in business operations.
Microsoft labeled vulnerability as important, assigned it to CVE-2025–32711, and released a server-side fix in May. The company assured the users that there was no exploitation and the issue is now resolved.
Even if there was no loss, researchers say the warning is clear. The AIM Labs report stated that the increasing complexity and intensive integration of LLM applications in commercial workflows is already overshadowing traditional defense.