Microsoft Addresses Critical ‘Reprompt’ Vulnerability in Copilot AI Assistant
Microsoft has recently patched a critical security vulnerability in its Copilot AI assistant. This flaw allowed attackers to stealthily exfiltrate user data with just a single click on a malicious link.
The exploit, known as “Reprompt”, was discovered by security researchers at Varonis Threat Labs. It could bypass Copilot’s built-in security controls and siphon off sensitive personal information without any user interaction beyond clicking a seemingly legitimate Microsoft link. The attack leveraged the ‘q’ URL parameter to inject malicious prompts, enabling hackers to maintain control even after the Copilot chat window was closed.
Through this vulnerability, attackers could potentially gain access to sensitive information including:
- File access history
- Location data
- Private conversations
This issue predominantly affected Microsoft Copilot Personal users. However, enterprise customers using Microsoft 365 Copilot were not impacted due to additional security layers.
Varonis responsibly disclosed the issue to Microsoft on August 31, 2025. In response, the company deployed a fix on January 13, 2026, as part of its Patch Tuesday update. This incident underscores the ongoing security challenges in AI-powered assistants and the importance of keeping systems updated.
Source: The Hacker News
