Microsoft’s Copilot AI Assistant Bug Exposes Confidential Emails
Microsoft has confirmed a significant security flaw that allowed its Copilot AI assistant to read and summarize customers’ confidential emails for weeks, bypassing data loss prevention (DLP) policies designed to protect sensitive information.
The bug, tracked internally as CW1226324, was first detected on January 21, 2026, and affected Microsoft 365 Copilot Chat users. The flaw allowed the AI tool to process emails carrying confidential sensitivity labels in users’ Sent Items and Drafts folders in the Outlook desktop app, even when organizations had implemented DLP policies specifically intended to prevent such access.
According to Microsoft’s service alert, a code issue incorrectly allowed Copilot Chat to pick up items with confidential labels that should have been excluded from AI processing. Microsoft began rolling out a fix in early February and confirmed on February 20 that the root cause had been addressed for most customers. A Microsoft spokesperson said that while the issue did not give anyone access to information they weren’t already authorized to see, the behavior fell short of the intended Copilot experience, which is designed to exclude protected content.
The incident raises concerns about AI integration into enterprise workflows, particularly as organizations rely on sensitivity labels and DLP policies to protect proprietary information, legal communications, and financial data. Microsoft has not disclosed how many customers were affected during the multi-week exposure window.
Source: TechCrunch
