A recent report from VentureBeat reveals that HuggingFace, a prominent AI company specializing in pre-trained models and datasets, narrowly escaped a potentially devastating supply-chain attack. The incident underscores existing vulnerabilities in the rapidly expanding field of generative AI.
Lasso Security researchers conducted a security audit of GitHub and HuggingFace repositories, uncovering more than 1,600 exposed API tokens. If exploited, these tokens could have granted threat actors full access, allowing them to manipulate widely used AI models that millions of downstream applications depend on.
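Exposed tokens of this kind are typically found by scanning repository contents for strings that match a known credential format. The sketch below illustrates the idea with Python's standard `re` module; the token pattern (an `hf_` prefix followed by a long alphanumeric run) and the minimum length are assumptions for illustration, not Lasso's actual detection rules.

```python
import re

# Assumed pattern: HuggingFace user access tokens conventionally begin
# with "hf_" followed by a long alphanumeric string. The length bound
# here (30+) is an illustrative guess, not an official specification.
HF_TOKEN_PATTERN = re.compile(r"\bhf_[A-Za-z0-9]{30,}\b")

def find_candidate_tokens(text: str) -> list[str]:
    """Return substrings that look like hard-coded HuggingFace tokens."""
    return HF_TOKEN_PATTERN.findall(text)

# Example: a config snippet accidentally committed with a fake token.
fake_token = "hf_" + "A" * 34
snippet = f'api_token = "{fake_token}"  # oops, committed a secret'

print(find_candidate_tokens(snippet))   # the fake token is flagged
print(find_candidate_tokens("clean file, no secrets"))  # empty list
```

In practice, scanners of this sort run over every file and commit in a repository's history, since a token removed in a later commit remains recoverable from earlier ones.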
The Lasso research team emphasized the seriousness of the situation, stating: “With control over an organization boasting millions of downloads, we now possess the capability to manipulate existing models, potentially turning them into malicious entities.”
HuggingFace, known for its open-source Transformers library and a model hub hosting over 500,000 models, has become a high-value target due to its widespread use in natural language processing, computer vision, and other AI tasks. A compromise of HuggingFace's data and models could therefore ripple across every industry implementing AI.
Lasso's audit centered on API tokens, which act as keys for accessing proprietary models and sens
[…]
Content was cut in order to protect the source. Please visit the source for the rest of the article.
This article has been indexed from CySecurity News – Latest Information Security and Hacking Incidents