The rise of artificial intelligence (AI) is becoming a critical trend to monitor, with the potential for malicious actors to exploit the technology as it advances, the Cyber Security Agency of Singapore (CSA) said on Tuesday (Jul 30). AI is increasingly being used to enhance various aspects of cyberattacks, including social engineering and reconnaissance.
The CSA’s Singapore Cyber Landscape 2023 report, released on Tuesday, highlights that malicious actors are leveraging generative AI for deepfake scams, bypassing biometric authentication, and identifying vulnerabilities in software. Deepfakes, which use AI techniques to alter or manipulate visual and audio content, have been employed for commercial and political purposes. This year, several Members of Parliament received extortion letters featuring manipulated images, and Senior Minister Lee Hsien Loong warned about deepfake videos misrepresenting his statements on international relations.
Traditional AI typically performs specific tasks based on predefined data, analysing and predicting outcomes without creating new content. Generative AI, by contrast, can produce new text, images, videos, and audio, as exemplified by ChatGPT, OpenAI's chatbot.
AI has also enabled malicious actors to scale up their operations. The CSA and its partners analyzed phishing emails from 2023, finding that about 13 percent contained AI-generated content, which was grammatically superior and more logically structured. These AI-generated emails aimed to reduce logical gaps and enhance legitimacy by adapting to vari
[…]
This article has been indexed from CySecurity News – Latest Information Security and Hacking Incidents