Microsoft’s AI division mistakenly exposed 38TB of sensitive employee information

Microsoft's Artificial Intelligence (AI) division accidentally exposed 38TB of sensitive employee data after sharing a repository of AI models on GitHub.

A publicly shared GitHub repository, intended to provide AI tools and models for image recognition work, included a URL that gave access to an internal storage account.

The URL contained a shared access signature (SAS) token that, far from restricting access to storage resources, allowed anyone to view the information stored in Azure, as Wiz researchers discovered.

In this way, 38TB of sensitive Microsoft employee information was exposed, including backups of workstation profiles and internal Microsoft Teams messages, as reported by TechCrunch.

Furthermore, Wiz researchers point out that the URL had been exposing the information since 2020 and was configured to grant full control, meaning anyone could modify the files or inject malicious content.
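To illustrate why a leaked SAS URL is so dangerous: the token travels inside the URL's query string, so anyone who sees the link holds the access it grants. The sketch below parses a hypothetical SAS URL (the host, container, and signature are invented for illustration) to show how the `sp` permission field can silently include write and delete rights and the `se` field can push expiry decades into the future, the two misconfigurations Wiz describes.

```python
from urllib.parse import urlparse, parse_qs

# Hypothetical SAS URL, similar in shape to real Azure Blob Storage SAS links.
# The query string *is* the token: whoever has the URL has the access it grants.
sas_url = (
    "https://example.blob.core.windows.net/models/data.zip"
    "?sv=2020-08-04"            # storage service version
    "&sp=racwdl"                # permissions: read, add, create, write, delete, list
    "&se=2051-10-01T00:00:00Z"  # expiry timestamp (decades in the future)
    "&sig=REDACTED"             # HMAC signature over the other fields
)

params = parse_qs(urlparse(sas_url).query)
permissions = params["sp"][0]

# 'w' (write) and 'd' (delete) in the permission string mean anyone holding
# this URL can modify or remove content, not just read it.
print("write access:", "w" in permissions)
print("delete access:", "d" in permissions)
print("expires:", params["se"][0])
```

A safer token for sharing public AI models would carry `sp=rl` (read and list only) and a short-lived `se` value, limiting what a leaked link can do.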

Microsoft has confirmed the security incident in a post on the Microsoft Security Response Center (MSRC) blog, where it denies that any customer data was affected.

Wiz notified Microsoft of the data exposure on June 22, after which MSRC teams revoked the SAS token and blocked external access to the storage account. Microsoft says the problem was mitigated by June 24 and that its subsequent investigation concluded the incident poses no risk to customers.

By Editor
