I am not a huge fan of using chatbots, as I never end up getting my questions fully answered. I get the efficiency of using a chatbot for simple questions, but my questions are usually not so easily resolved, so I end up completely frustrated with the process and trying to find a human being to help. This happens a lot with my internet service provider. I start with the chatbot, don’t get very far and then yell, “Can’t you just let me talk to someone who can fix my problem?”

At any rate, it seems that lots of people use chatbots and are quite comfortable giving chatbots all sorts of information. Probably not a great idea after reading a summary of research done by Trustwave.

Bleeping Computer obtained research from Trustwave before publication which shows that threat actors are deploying phishing attacks “using automated chatbots to guide visitors through the process of handing over their login credentials to threat actors.” Using a chatbot “gives a sense of legitimacy to visitors of the malicious sites, as chatbots are commonly found on websites for legitimate brands.”

According to Bleeping Computer, the process begins with a phishing email claiming to have information about the delivery of a package (it’s an old trick that still works) from a well-known delivery company. After clicking on “Please follow our instructions” to learn why the package can’t be delivered, the victim is directed to a PDF file that contains links to a malicious phishing site. When the page loads, a chatbot appears to explain why the package couldn’t be delivered – the explanation usually being that the label was damaged – and shows the victim a picture of the parcel. Then the chatbot requests that the victim provide their personal information and confirms the scheduled delivery of the package.

The victim is then directed to a phishing page where the victim enters account credentials to pay for the shipping, including credit card information. The threat actors lend legitimacy to the process by sending a one-time password via SMS to the victim’s mobile phone number (which the victim gave the chatbot), so the victim believes the transaction is legit.

The moral of this story: continue to be suspicious of any emails, texts, or telephone calls (phishing, smishing, and vishing) – and now chatbots – asking for your personal or financial information.