Unintentionally, Microsoft's AI is helping users hack its most important product: Windows 11.
⬤ Users have revealed that Copilot can easily be prompted to provide links to illegal tools for activating Windows 11.
⬤ The incident has sparked widespread debate about the effectiveness of Microsoft's controls in preventing its AI from promoting piracy.
⬤ Microsoft has not issued an official comment, raising questions about whether the issue was a temporary glitch or a deep-rooted flaw.
In a controversial development, Microsoft's AI assistant, Copilot, recently found itself at the center of a storm of criticism and ridicule after it was discovered to be providing users with instructions on how to activate Windows 11 illegally. The discovery first surfaced in a post by a former Microsoft engineer on X and quickly gained traction on Reddit.

Microsoft has long approached software piracy with measured pragmatism: while it publicly opposes illegal copying, speculation has swirled about a more flexible approach behind the scenes. In 1998, Bill Gates himself suggested that piracy could be an indirect means of market penetration, especially in countries like China, where the use of unlicensed software could foster user dependence and future profits once those users switched to paid models. This historical background adds an additional dimension to the recent Copilot incident.
However, what happened here went beyond tacit acceptance to active assistance. Users discovered that Copilot, when politely asked, would provide direct links to GitHub repositories hosting illegal activation tools.
It even supplied detailed instructions for using them to bypass legitimate Windows 11 activation procedures, in simple steps that could be completed in seconds. Independent investigations confirmed these findings, showing that accessing this information required no complex operations or advanced prompt manipulation, just a direct request.
