Are Chatbots Making It Difficult to Trace Phishing Emails?

Chatbots are eroding a crucial line of defense against phishing emails by correcting the grammatical and spelling errors that have long helped recipients spot fraudulent messages, according to experts.

The warning follows an international advisory published by the law enforcement agency Europol concerning the potential criminal use of ChatGPT and other “large language models.”

How Do Chatbots Aid Phishing Campaigns?

Phishing emails are frequently used as bait by cybercriminals to lure victims into clicking links that download malicious software or into handing over sensitive information such as passwords or PINs.

According to the Office for National Statistics, half of all adults in England and Wales reported receiving a phishing email last year, making phishing one of the most common forms of cyber threat.

However, artificial intelligence (AI) chatbots can now correct the errors that trip spam filters or alert human readers, addressing a basic weakness in many phishing attempts: poor spelling and grammar.

According to Corey Thomas, chief executive of the US cybersecurity firm Rapid7: “Every hacker can now use AI that deals with all misspellings and poor grammar […] The idea that you can rely on looking for bad grammar or spelling in order to spot a phishing attack is no longer the case. We used to say that you could identify phishing attacks because the emails look a certain way. That no longer works.”
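
To illustrate the point Thomas is making, the sketch below shows a deliberately naive, hypothetical spelling-based heuristic of the kind some filters and readers have historically relied on. The word list, threshold, function names, and sample emails are all illustrative assumptions, not drawn from any real product; real spam filters weigh many more signals than spelling alone.

```python
# Illustrative sketch only: a toy spelling-based phishing heuristic.
# The word list, threshold, and sample emails below are hypothetical
# stand-ins; real spam filters are far more sophisticated.

KNOWN_WORDS = {
    "dear", "customer", "your", "account", "has", "been", "suspended",
    "please", "verify", "details",
}

def misspelling_ratio(text: str) -> float:
    """Return the fraction of alphabetic tokens not found in the dictionary."""
    words = [w.strip(".,!?").lower() for w in text.split()]
    words = [w for w in words if w.isalpha()]
    if not words:
        return 0.0
    unknown = sum(1 for w in words if w not in KNOWN_WORDS)
    return unknown / len(words)

def looks_suspicious(text: str, threshold: float = 0.25) -> bool:
    """Flag a message when too many of its words are misspelled or unknown."""
    return misspelling_ratio(text) >= threshold

# A classic error-riddled phishing email trips the heuristic...
clumsy = "Dear custmer, your acount has ben suspendd. Plese verfy your detials."
# ...but the same message, cleaned up by a chatbot, sails straight through.
polished = "Dear customer, your account has been suspended. Please verify your details."

print(looks_suspicious(clumsy))    # True
print(looks_suspicious(polished))  # False
```

As the example suggests, once an LLM polishes away surface errors, this entire class of check returns nothing, which is why experts say defenders must lean on other signals such as sender verification and link analysis.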
