IBM researchers have discovered a way to use generative AI tools to hijack live audio calls and manipulate what is being said without the speakers' knowledge. The "audio-jacking" technique – which chains large language models (LLMs), voice cloning, text-to-speech, and speech-to-text capabilities – could be used by bad actors to manipulate conversations for financial gain, Chenta…
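The chain described above can be illustrated with a minimal sketch. Everything here is a stand-in: `transcribe`, `llm_rewrite`, and `synthesize` are hypothetical placeholders for real speech-to-text, LLM, and voice-cloning TTS components, and the account-number swap is a simplified example of the kind of manipulation the research describes.

```python
import re

def transcribe(audio_text: str) -> str:
    """Stand-in for speech-to-text; here the 'audio' is already text."""
    return audio_text

def llm_rewrite(transcript: str, attacker_account: str) -> str:
    """Stand-in for the LLM step: swap any spoken bank account
    number for the attacker's, leaving the rest of the utterance intact."""
    return re.sub(r"\b\d{8,12}\b", attacker_account, transcript)

def synthesize(text: str) -> str:
    """Stand-in for voice-cloned text-to-speech; returns text to 'play'."""
    return text

def audio_jack(audio_text: str, attacker_account: str) -> str:
    # Intercept -> transcribe -> manipulate -> re-synthesize in the
    # original speaker's cloned voice, so neither party notices the change.
    return synthesize(llm_rewrite(transcribe(audio_text), attacker_account))

victim_utterance = "Please wire the funds to account 12345678."
print(audio_jack(victim_utterance, "99999999"))
```

The point of the sketch is the structure, not the components: because each stage operates on a live stream, the manipulated audio can be relayed in near real time while both callers believe they are hearing each other unaltered.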
The post IBM Shows How Generative AI Tools Can Hijack Live Calls appeared first on Security Boulevard.