ChatGPT is amazing! It is all the rage. Its capabilities are awe-inspiring (except to educators who are concerned their students will never write a term paper again). It has reportedly passed a bar exam and a physician board exam and has written sermons, research papers, and more.

But all amazing technology has its ups and downs. Not to take anything away from ChatGPT, but it is important to understand that precisely because it is so capable, we are not the only ones who want to use it; so do the bad guys.

Without getting into a much longer discussion of the ethical considerations of using AI (a topic worth researching on your own), there are some concerns being raised about the use of AI products, including ChatGPT, that are worth keeping an eye on.

According to Axios, researchers at Check Point Research recently discovered that hackers were using ChatGPT to “write malware, create data encryption tools and write code creating new dark web marketplaces.” It is also being used to generate phishing emails.

Similarly concerning is that some software developers are using AI to write code and are "creating more vulnerable code." Those using AI were "also more likely to believe they wrote secure code than those without access."

ChatGPT and other AI assistants are extremely helpful for everyday purposes but can also be turned against us by threat actors; they are just another tool in the attacker's toolbox. Being aware of how new technology can be used maliciously is an important way to stay vigilant and avoid becoming a victim.