Microsoft AI research division accidentally exposed 38TB of sensitive data

Microsoft AI researchers accidentally exposed 38TB of sensitive data via a public GitHub repository, with the exposure dating back to July 2020. Cybersecurity firm Wiz discovered that the Microsoft AI research division leaked the data while publishing a bucket of open-source training data on GitHub. The exposed data included a disk backup of two employees’ workstations containing secrets, […]

