Researchers at Meta, the owner of Facebook, released a report this week indicating that since March 2023, Meta has "blocked and shared with our industry peers more than 1,000 malicious links from being shared across our technologies." These were unique ChatGPT-themed web addresses designed to deliver malicious software to users' devices.

According to Meta’s report, “to target businesses, malicious groups often first go after the personal accounts of people who manage or are connected to business pages and advertising accounts. Threat actors may design their malware to target a particular online platform, including building in more sophisticated forms of account compromise than what you’d typically expect from run-of-the-mill malware.”

In one recent campaign, the threat actor "leveraged people's interest in OpenAI's ChatGPT to lure them into installing malware…we've seen bad actors quickly pivot to other themes, including posing as Google Bard, TikTok marketing tools, pirated software and movies, and Windows utilities."

The Meta report provides useful guidance for guarding against these attacks and for responding in the event a device is compromised.

Bad actors will weaponize the newest technology. According to Cyberscoop, Meta researchers said "hackers are using the skyrocketing interest in artificial intelligence chatbots such as ChatGPT to convince people to click on phishing emails, to register malicious domains that contain ChatGPT information and develop bogus apps that resemble the generative AI software."
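To illustrate the lookalike-domain tactic described above, here is a minimal, hypothetical Python sketch of the kind of keyword heuristic a defender might use to flag ChatGPT-themed domains for review. The allow-list and keyword list are assumptions for the example only; they are not drawn from Meta's report, and real-world detection is far more sophisticated.

```python
# Illustrative sketch only: a naive heuristic for flagging AI-themed
# lookalike domains. The allow-list and keywords below are assumptions
# for this example, not from Meta's report.

OFFICIAL_DOMAINS = {"openai.com", "chat.openai.com"}  # hypothetical allow-list
LURE_KEYWORDS = ("chatgpt", "openai", "gpt")          # themes seen in lures

def looks_like_lure(domain: str) -> bool:
    """Flag domains that use AI-themed keywords but are not official."""
    domain = domain.lower().strip()
    if domain in OFFICIAL_DOMAINS:
        return False
    return any(keyword in domain for keyword in LURE_KEYWORDS)

# Example: only the non-official, AI-themed domains are flagged.
candidates = ["chat.openai.com", "chatgpt-desktop-app.example", "news.example"]
suspicious = [d for d in candidates if looks_like_lure(d)]
```

A heuristic like this would only be a first-pass filter; flagged domains would still need reputation checks or manual review before being blocked.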

With any new technology comes new risk. Staying abreast of these risks and understanding how threat actors can pivot from personal accounts to business accounts may prevent attacks against individuals and their companies.