Whose Voice Is It Anyway? AI-Powered Voice Spoofing for Next-Gen Vishing Attacks

Written by: Emily Astranova, Pascal Issa


 


Executive Summary

  • AI-powered voice cloning can now mimic human speech with uncanny precision, enabling far more realistic phishing schemes. 
  • According to news reports, scammers have leveraged voice cloning and deepfakes to steal over HK$200 million from an organization.
  • Attackers can use AI-powered voice cloning across the attack lifecycle, including initial access, lateral movement, and privilege escalation.
  • Mandiant’s Red Team uses AI-powered voice spoofing to test defenses, demonstrating the effectiveness of this increasingly sophisticated attack technique.
  • Organizations can defend against this threat by educating employees and using source verification, such as code words. 

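The code-word control mentioned above can be sketched in a few lines. The following is a minimal, hypothetical illustration (the function name and code word are our own, not from the article): a help desk agent compares a caller's spoken code word against a pre-shared value, using a constant-time comparison so the check itself leaks nothing via timing.

```python
import hmac

# Hypothetical pre-shared code word, agreed out of band (e.g., in person).
EXPECTED_CODE_WORD = "bluebird-42"

def caller_is_verified(spoken_code_word: str) -> bool:
    """Return True only if the caller supplies the agreed code word.

    hmac.compare_digest performs a timing-safe equality check.
    Normalizing case/whitespace tolerates minor transcription noise.
    """
    return hmac.compare_digest(
        spoken_code_word.strip().lower(),
        EXPECTED_CODE_WORD,
    )
```

In practice the code word would be stored and rotated through a proper secrets process rather than hard-coded; the point is that verification relies on something the caller knows, not on how the caller sounds.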
Introduction

Last year, Mandiant published a blog post on threat actor use of generative AI, exploring how attackers were using generative AI (gen AI) in phishing campaigns and information operations (IO), notably to craft more convincing content such as images and videos. We also shared insights into attackers’ use of large language models (LLMs) to develop malware. In that post, we emphasized that while attackers are interested in gen AI, their use of it has remained relatively limited.

This post builds on that initial research, diving into some new AI tactics, techniques, and procedures (TTPs) and tr

[…]
Content was cut in order to protect the source. Please visit the source for the rest of the article.
