Security researcher Michael Bargury revealed serious flaws in Microsoft Copilot during the recent Black Hat USA conference, demonstrating how hackers might be able to use this AI-powered tool for malicious purposes. This revelation highlights the urgent need for organisations to rethink their security procedures when implementing AI technology such as Microsoft Copilot.
Bargury’s presentation highlighted numerous ways in which hackers could use Microsoft Copilot to carry out cyberattacks. One of the most significant findings was the use of Copilot plugins to install backdoors in other users’ interactions, allowing data theft and AI-driven social engineering attacks.
Hackers can use Copilot’s capabilities to discreetly search for and retrieve sensitive data, bypassing standard security measures that focus on file and data protection. This is accomplished via modifying Copilot’s behaviour using prompt injections, which alter the AI’s responses to fit the hacker’s goals.
One of the most concerning parts of this issue is its ability to enable AI-powered social engineering attacks. Hackers can
[…]
Content was cut in order to protect the source.Please visit the source for the rest of the article.
[…]
Content was cut in order to protect the source.Please visit the source for the rest of the article.
This article has been indexed from CySecurity News – Latest Information Security and Hacking Incidents
Read the original article: