Blog > News
11th Dec 2023
Author: Samir Yawar
4 min read

AI giving rise to new holiday phishing scams and email breaches

Holiday phishing scams have an unlikely ally in 2023: generative artificial intelligence tools like ChatGPT.

As consumers and companies spend a record-breaking $270 billion on online shopping, generative AI is handing phishing attackers a powerful new weapon.

In response, cybersecurity professionals are also leveraging AI to build innovative machine-learning tools that combat growing instances of phishing fraud.

Cybercriminals have long considered online holiday fraud to be a bonanza. In 2022, more than $73 million was gobbled up by online con artists, according to the FBI.

Researchers conclude that AI contributed to a 1,265% increase in phishing email scams last year.

ChatGPT and its evil clones power scam emails

ChatGPT and its malicious offshoots like FraudGPT and WormGPT are making phishing and spear phishing campaigns easier and faster to deploy against vulnerable targets. Researchers from IBM conclude that ChatGPT can write convincing emails almost as well as social engineering experts can, while taking a fraction of the time it takes a human to do so.

AI tools give scammers an edge by:

  • Eliminating the obvious spelling and grammar mistakes that typically give away scam texts and messages

  • Mimicking legitimate websites so closely that they are almost impossible to distinguish from the real thing

Experts agree that generative AI is boosting both the success rate and volume of online scams this year, powering tactics such as Amazon gift card scams, charity scams and fake delivery tracking links.

For its part, ChatGPT has certain safeguards built in to forbid malicious applications, but threat actors are using carefully worded prompts to sidestep them. Tools like WormGPT run on their own custom language models and data sources, which makes them ideal for cybercriminals.

What are security vendors doing to protect against AI and holiday phishing scams?

Even as these cyber scams have grown more conniving and convincing, cybersecurity tool makers are rising to the threat with the help of AI.

In November, Google released RETVec (Resilient & Efficient Text Vectorizer), an open-source tool that trains spam filter AI models to withstand the text manipulation tactics used by phishers.

These text manipulation methods can be classified into:

  • Homoglyphs: Characters that look alike but are actually different, such as the capital letter O and the number zero (0).

  • Invisible characters: Zero-width or blank characters inserted to break up suspicious words so they slip past spam filters.

  • Keyword stuffing: Padding emails with extra, innocuous words to trick spam detectors into classifying the message as legitimate.
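
To show how such normalization works in practice, here is a minimal, hypothetical Python sketch (it does not use Google's RETVec library) of the kind of pre-processing a spam filter might apply to blunt homoglyph and invisible-character tricks. The character tables are illustrative assumptions; real systems rely on much larger Unicode confusables lists.

```python
import unicodedata

# Hypothetical sketch: normalize text before it reaches a spam classifier so
# that homoglyph and invisible-character tricks collapse back to plain ASCII.
# The character tables below are illustrative; real filters use far larger
# Unicode "confusables" lists.

# Zero-width / invisible characters commonly inserted to split trigger words.
INVISIBLE_CHARS = {"\u200b", "\u200c", "\u200d", "\u2060", "\ufeff"}

# A few Cyrillic look-alikes mapped back to their Latin counterparts.
HOMOGLYPHS = {
    "\u0430": "a",  # Cyrillic small a
    "\u0435": "e",  # Cyrillic small ie
    "\u0456": "i",  # Cyrillic small Byelorussian-Ukrainian i
    "\u043e": "o",  # Cyrillic small o
    "\u0440": "p",  # Cyrillic small er
    "\u0441": "c",  # Cyrillic small es
}

def normalize(text: str) -> str:
    """Fold, strip, and remap text so look-alike tricks lose their effect."""
    # NFKC folds fullwidth and other compatibility variants to canonical forms.
    text = unicodedata.normalize("NFKC", text)
    # Drop invisible characters used to break up suspicious words.
    text = "".join(ch for ch in text if ch not in INVISIBLE_CHARS)
    # Map known look-alike characters back to Latin letters and lowercase.
    return "".join(HOMOGLYPHS.get(ch, ch) for ch in text).lower()

if __name__ == "__main__":
    tricked = "Fr\u200bee g\u0456ft c\u0430rd just f\u043er y\u043eu!"
    print(normalize(tricked))  # -> "free gift card just for you!"
```

Keyword stuffing, the third tactic above, is usually left to the classifier itself, since it involves word choice rather than character-level tricks.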

Google claims that its AI tool has:

  • Improved spam detection by 38%

  • Reduced false positives by 19.49%

  • Minimized false negatives by 17.71%

Cybersecurity software provider Norton has released its free Norton Genie app, part AI chatbot and part text analyzer. The app scans uploaded messages for signs of phishing and even answers users’ questions about any content they find suspicious.

Norton says that Genie is always learning and has been trained on the millions of scam messages users have uploaded so far. Using it is as easy as pasting text, uploading a screenshot, or sending a link to Norton’s free app.

Reports of new holiday phishing scams and AI-powered anti-phishing tools show that the fight against cybercrime is far from over.

Samir Yawar / Content Lead
Samir wants a world where people can instinctively whack online scams and feel accomplished without the need for psychic powers. As an ISC2 member, he is doing his bit to turn cybersecurity awareness training into a fun concept with simple, approachable and accessible content. Reach out to him at X @yawarsamir
Frequently Asked Questions
What do cybercriminals do once they obtain financial information?
Once cybercriminals obtain financial information, they can engage in various illegal activities to monetize it. This may include making unauthorized purchases, conducting identity theft, selling the information on the dark web, or even using the data for ransom purposes.