“Microsoft AI researchers accidentally exposed 38 terabytes of confidential data online, including personal backups, passwords, secret keys, and internal Microsoft Teams messages. The data was exposed through an overly permissive Azure Shared Access Signature (SAS) token. This is a significant data breach that could have serious consequences for Microsoft and its customers. Microsoft has taken steps to secure the data and investigate the incident.”
On September 18, 2023, cloud security startup Wiz disclosed that Microsoft AI researchers had accidentally exposed 38 terabytes of confidential data online. The breach could have serious consequences for Microsoft and its customers: the exposed data includes sensitive personal information that could be used for identity theft and fraud, as well as secret keys that could grant unauthorized access to Microsoft systems and data.
The data exposure occurred through an overly permissive Azure Shared Access Signature (SAS) token embedded in a URL published in a GitHub repository containing open-source code and AI models for image recognition. The token granted “full control” permissions over the storage account instead of the intended “read-only” access. As a result, anyone who found the URL could not only read the data but also delete it, replace it, or inject malicious content into it.
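The difference between read-only and full-control access is visible in the SAS token itself: its `sp` query parameter lists the granted permissions as single letters (`r` for read, `l` for list, and `a`/`c`/`w`/`d` for add, create, write, and delete). Below is a minimal stdlib-only sketch of this check; the helper name and URLs are hypothetical, not part of any Azure SDK:

```python
from urllib.parse import urlparse, parse_qs

def sas_grants_write(sas_url: str) -> bool:
    """Return True if a SAS URL's token permits more than read/list access."""
    query = parse_qs(urlparse(sas_url).query)
    permissions = query.get("sp", [""])[0]
    # "a" (add), "c" (create), "w" (write), "d" (delete) allow modification.
    return any(p in permissions for p in "acwd")

# A read-only token passes the check ...
print(sas_grants_write(
    "https://acct.blob.core.windows.net/data?sp=rl&se=2024-01-01&sig=abc"))      # False
# ... while a "full control" style token is flagged.
print(sas_grants_write(
    "https://acct.blob.core.windows.net/data?sp=racwdl&se=2024-01-01&sig=abc"))  # True
```

A token like the second one, published in a public repository, is exactly the scenario described above: anyone holding the URL can modify the underlying storage.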
Wiz discovered the exposed data during a scan for misconfigured storage and notified Microsoft, which invalidated the SAS token and launched an investigation into the incident. In a post from the Microsoft Security Response Center accompanying the disclosure, the company said that no customer data was exposed and that no other internal services were put at risk.
While the exposure was accidental, the incident raises questions about the security controls in place at Microsoft. The company must thoroughly investigate the incident and take steps to prevent similar incidents from happening in the future.
The Significance of the Data Breach
The exposed data includes disk backups of two Microsoft employees’ workstations, passwords to Microsoft services, secret keys, and over 30,000 internal Microsoft Teams messages from hundreds of employees. This sensitive information could be exploited for identity theft and fraud, potentially causing significant harm to the affected individuals.
Additionally, the exposed secret keys pose a major risk to Microsoft’s systems and data. With unauthorized access, malicious actors could potentially compromise Microsoft’s infrastructure, leading to further data breaches or even disruption of services.
Human Error and Robust Security Measures
The exposure was accidental, caused by human error rather than an attack. This incident highlights the fact that even companies with robust security measures in place are not immune to human mistakes. Microsoft, like any other organization, has a responsibility to its customers to protect their data and privacy.
Microsoft must conduct a thorough investigation into the incident to identify the root cause of the misconfiguration and implement measures to prevent similar incidents in the future. This may involve enhancing employee training, implementing stricter access controls, and conducting regular security audits.
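One concrete form such an audit can take is scanning code and documentation for credentials before they are published, since the exposed token here was sitting in a public GitHub repository. The sketch below is illustrative, not a production scanner: the pattern, helper name, and sample URL are all fabricated for the example. It exploits the fact that an Azure SAS URL is recognizable by its `sig=` signature parameter combined with an Azure blob storage hostname:

```python
import re

# Illustrative pattern: an Azure blob storage URL carrying a SAS signature.
SAS_URL_PATTERN = re.compile(
    r"https://[a-z0-9]+\.blob\.core\.windows\.net/\S*[?&]sig=[\w%]+",
    re.IGNORECASE,
)

def find_sas_urls(text: str) -> list[str]:
    """Return all Azure SAS URLs embedded in the given text."""
    return SAS_URL_PATTERN.findall(text)

# A README accidentally shipping a signed URL (fabricated example):
readme = """
Download the pretrained models here:
https://aimodels.blob.core.windows.net/models?sp=racwdl&se=2051-10-05&sig=abc123
"""
print(find_sas_urls(readme))  # the embedded SAS URL is flagged
```

In practice this kind of check is run as a pre-commit hook or CI step, so a signed URL is caught before it ever reaches a public repository.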
The accidental exposure of 38 terabytes of confidential data by Microsoft AI researchers is a significant data breach with serious potential consequences for both the company and its customers. The exposed data includes sensitive personal information and secret keys that could be exploited for malicious purposes. Microsoft needs to address this incident promptly and take proactive steps to prevent similar incidents from occurring in the future.