Artificial intelligence and machine learning (AI/ML) systems have proven effective at improving the sophistication of phishing lures, creating fake profiles, and developing basic malware. Security researchers have demonstrated that a complete attack chain can be built with these tools, and malicious hackers have already begun experimenting with AI-generated code.
The Check Point Research team used current AI tools to design an entire attack campaign: a phishing email generated with OpenAI's ChatGPT prompts the target to open an Excel document, while an Excel macro that downloads and runs malware from a URL, along with a Python script to infect the targeted system, was produced with OpenAI's Codex programming model.
The aim of this research is to evaluate the effectiveness of AI in data collection and in team responses to cyberattacks on vital systems and services, as well as to draw attention to the need for solutions that enhance human-machine collaboration and lower cyber risk.
In recent weeks, ChatGPT, a large language model (LLM) built on the third iteration of OpenAI's generative pre-trained transformer (GPT-3), has sparked a range of what-if scenarios about the possible uses of AI/ML. Because of the dual-use nature of AI/ML models, firms are looking for ways to use the technology to increase efficiency, while digital-rights campaigners are concerned about the effects the technology will have on businesses and employees.
However, other aspects of security and privacy are also being i[…]