Impersonation schemes are on the rise, and artificial intelligence (including deep fakes and voice cloning) will only make these schemes more difficult to detect.

Threat actors are emboldened, evidenced by the fact that the Cybersecurity and Infrastructure Security Agency (CISA) recently published an alert that threat actors are impersonating CISA employees in vishing attacks in order to obtain money. (View our previous related posts here.)

Threat actors impersonate government employees, including those of the IRS and the FTC, to try to scare individuals into providing information and making payments. The FTC has published numerous Scam Alerts on this subject, which can be accessed at www.ftc.gov.

CISA reminds us that “CISA staff will never contact you with a request to wire money, cash, cryptocurrency, or use gift cards and will never instruct you to keep the discussion secret.”

Remember that scammers are bold and unscrupulous. Heed the recommendations of CISA and the FTC on how to detect and guard against impersonation voice calls.

Wow! It’s hard to believe this blog marks the 400th Privacy Tip since I started writing many years ago. I hope the tips have been helpful over the years and that you have been able to share them with others to spread the word. 

I thought it would be fun to pick 10 (ok—technically, a few more than 10) Privacy Tips and re-publish them (in case you missed them) in honor of our 400th Privacy Tip milestone. We have published tips that are relevant to the hot issues of the time, but some are time-honored. It was really hard to pick, but here they are:


The Health Sector Cybersecurity Coordination Center (HC3) recently issued an Alert warning that threat actors are increasingly “employing advanced social engineering tactics to target IT help desks in the health sector and gain initial access to target organizations.”

The social engineering scheme starts with a telephone call to the IT help desk from “an area code local to the target organization, claiming to be an employee in a financial role (specifically in revenue cycle or administrator roles). The threat actor is able to provide the required sensitive information for identity verification, including the last four digits of the target employee’s social security number (SSN) and corporate ID number, along with other demographic details. These details were likely obtained from professional networking sites and other publicly available information sources, such as previous data breaches. The threat actor claimed that their phone was broken and, therefore, could not log in or receive MFA tokens. The threat actor then successfully convinced the IT help desk to enroll a new device in multi-factor authentication (MFA) to gain access to corporate resources.”

After gaining access, the threat actor targets login information related to payer websites and submits a form to make ACH changes for payer accounts. “Once access has been gained to employee email accounts, they sent instructions to payment processors to divert legitimate payments to attacker-controlled U.S. bank accounts. The funds were then transferred to overseas accounts. During the malicious campaign, the threat actor also registered a domain with a single letter variation of the target organization and created an account impersonating the target organization’s Chief Financial Officer (CFO).”

The threat actors are leveraging spearphishing voice techniques, also known as “vishing,” and impersonating employees. HC3 noted that “threat actors may also attempt to leverage AI voice impersonation techniques to social engineer targets, making remote identity verification increasingly difficult with these technological advancements. A recent global study found that out of 7,000 people surveyed, one in four said that they had experienced an AI voice cloning scam or knew someone who had.”

HC3 provides numerous mitigations to assist with the prevention of these vishing schemes, which are outlined in the Alert.

I am not a huge fan of using chatbots, as I never end up getting my questions fully answered. I get the efficiency of using a chatbot for simple questions, but my questions are usually not so easily resolved, so I end up completely frustrated with the process and trying to find a human being to help. This happens a lot with my internet service provider. I start with the chatbot, don’t get very far and then yell, “Can’t you just let me talk to someone who can fix my problem?”

At any rate, it seems that lots of people use chatbots and are quite comfortable giving them all sorts of information. That’s probably not a great idea after reading a summary of research done by Trustwave.

Bleeping Computer obtained research from Trustwave before publication which shows that threat actors are deploying phishing attacks “using automated chatbots to guide visitors through the process of handing over their login credentials to threat actors.” Using a chatbot “gives a sense of legitimacy to visitors of the malicious sites, as chatbots are commonly found on websites for legitimate brands.”

According to Bleeping Computer, the process begins with a phishing email claiming to have information about the delivery of a package (it’s an old trick that still works) from a well-known delivery company. After clicking on “Please follow our instructions” to figure out why your package can’t be delivered, the victim is directed to a PDF file that contains links to a malicious phishing site. When the page loads, a chatbot appears to explain why the package couldn’t be delivered – the explanation usually being that the label was damaged – and shows the victim a picture of the parcel. Then the chatbot requests that the victim provide their personal information and confirms the scheduled delivery of the package.

The victim is then directed to a phishing page where the victim enters account credentials, including credit card information, to pay for the shipping. The threat actors lend legitimacy to the process by sending a one-time password via SMS to the victim’s mobile phone number (which the victim gave the chatbot), so the victim believes the transaction is legitimate.

The moral of this story: continue to be suspicious of any emails, texts, or telephone calls (phishing, smishing, and vishing), and now chatbots, asking for your personal or financial information.

The FBI’s Internet Crime Complaint Center (IC3) recently issued a warning alerting consumers that scammers are using malicious QR codes to reroute unsuspecting users to malicious sites in an attempt to steal their data.

Also known as QRishing [view related post], this scheme takes advantage of our familiarity with QR codes, which we grew accustomed to using at restaurants and other establishments during the pandemic. The criminals embed malicious URLs into QR codes to redirect users to malicious sites and then attempt to get users to provide personal information, financial information, or other data that the criminals can use to perpetrate fraud or identity theft.

Embedding malicious code into a QR code is no different than embedding it into a link or attachment in a phishing email or a smishing text. However, consumers are not as alert to questioning QR codes as they are to spotting malicious emails and texts.

Hence, the alert from IC3. IC3 is warning consumers to check and re-check any URL generated by a QR code and to be cautious about using them for any form of payment.
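IC3’s advice to check and re-check a decoded URL can be partly automated. Below is a minimal sketch of such a check in Python; the expected domain is a hypothetical placeholder, and the heuristics are only examples of common red flags, not an exhaustive screen:

```python
from urllib.parse import urlparse

# Heuristic red-flag checks for a URL decoded from a QR code.
# EXPECTED_DOMAIN is a hypothetical placeholder; substitute the real
# domain of the brand you expect the QR code to point to.
EXPECTED_DOMAIN = "example-delivery.com"

def url_red_flags(url, expected_domain=EXPECTED_DOMAIN):
    flags = []
    parsed = urlparse(url)
    host = (parsed.hostname or "").lower()
    if parsed.scheme != "https":
        flags.append("not HTTPS")
    if host.replace(".", "").isdigit():
        flags.append("raw IP address instead of a domain name")
    if host.startswith("xn--") or ".xn--" in host:
        flags.append("punycode host (possible lookalike characters)")
    if host != expected_domain and not host.endswith("." + expected_domain):
        flags.append("host %r does not match the expected domain" % host)
    return flags
```

An empty list is no guarantee of safety; these checks catch only the crudest tricks, so IC3’s advice to review the URL manually still applies.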

QR codes should be viewed as suspiciously as emails and texts. Be cautious when asked to scan a QR code, and refuse to provide any type of personal information or financial information after scanning one.

2021 is behind us. Whether that is positive or negative for you, in my world, it was another record year. A record year of data breaches.

According to the Identity Theft Resource Center (ITRC), data breaches in 2021 surpassed the previous record year of 2020 by 17 percent. The incidents ranged from the theft of cryptocurrency (Livecoin went out of business following an attack) to ransomware attacks (Colonial Pipeline), to zero-day vulnerabilities against Microsoft Exchange Server, and finally, the big one: Log4j.

There is speculation that the Log4j vulnerability will last for years. The Log4j vulnerability is so concerning that the FTC issued a warning this week to companies declaring that if companies don’t mitigate the vulnerability, they could be subject to an enforcement action [view related posts here and here].

What does this all mean to us as consumers? Many of us roll our eyes and say “All of our information is out there anyway, so why bother trying to protect it?” I say, don’t give up. Here are a few tips that are still important for protecting your data and your privacy:

  • If your information is compromised, sign up for credit monitoring or a credit freeze if offered.
  • Continue to check your credit report, which you can get for free once a year, to help determine whether any fraudulent accounts have been opened in your name.
  • Protect your Social Security number and driver’s license number. Don’t just give them when asked or fill them in on a form.
  • Mind your cookies.
  • Check the privacy settings on your phone and update them frequently.
  • Opt-in to “do not track” options.
  • Use DuckDuckGo as your search engine.
  • Consider the Jumbo privacy app.
  • Read the privacy policies of apps and devices before you download or activate them.
  • Be aware of phishing, vishing, smishing, and qrishing.
  • Understand what IoT devices you have and activate unique passwords for them.
  • Change the default passwords on your home router and wi-fi.
  • Update the software on your devices as soon as you can.

And there are so many more! Check out all of our privacy tips at www.dataprivacyandsecurityinsider.com and don’t give up! Even though 2022 looks to be another whopper year for data breaches, if we don’t try to protect our privacy, then who will?

Although a security researcher has confirmed that LinkedIn users’ data, including full names, gender, email addresses, telephone numbers, and industry information, is for sale on RaidForums by a hacker self-dubbed “GOD User TomLiner,” LinkedIn has stated that the data did not come from a breach of its networks. According to LinkedIn, “[O]ur initial analysis indicates that the dataset includes information scraped from LinkedIn as well as information obtained from other sources….This was not a LinkedIn data breach and our investigation has determined that no private LinkedIn member data was exposed….”

No matter how the data ended up for sale on a hacker forum, if you are a LinkedIn user, you should be aware of it and understand how that information can be used against you. Valid email addresses and telephone numbers give hackers and scammers the ability to mount targeted phishing and vishing schemes and other social engineering scams. In addition, the information can be compiled into dossiers and aggregated with other publicly available information for targeted campaigns.

As a precaution, security experts are suggesting that LinkedIn users update their passwords and enable multi-factor authentication on their LinkedIn accounts.

Many individuals already use facial recognition technology to authenticate and authorize payment through their smartphones. According to Juniper Research, by 2025 (only four years away), 95 percent of smartphones will have biometric technology capabilities for authentication, including face, fingerprint, iris, and voice recognition. Juniper Research estimates that this will amount to the authentication of over $3 trillion in payment transactions per year.

Technology vendors are starting to use biometric information more and more to provide services to consumers. For instance, Spotify recently released its “Hey Spotify” feature for its app. If you use Spotify, and the new feature is rolled out to your device, you will see a pop-up with a big green button at the bottom that reads, “Turn on Hey Spotify” and a very small link in white that reads, “Maybe later.” Above the big green button in white is text that reads, “LEARN HOW WE USE VOICE DATA” and “When we hear ‘Hey Spotify’ your voice input and other information will be sent to Spotify.”

The big green button is very noticeable and the white text less so, but when you click on the “LEARN HOW” button, you are sent to a link that reads, “When you use voice features, your voice input and other information will be sent to Spotify.” Hmmm. What other information?

It continues, “This includes audio recording and transcripts of what you say, and other related information such as the content that was returned to you by Spotify.” This means that your biometric information–your voice–and what you actually say to Hey Spotify is collected by Spotify. Spoiler alert: you only have one voice and you are giving it to an app that is collecting it and sharing it with others, including unknown third parties.

The Spotify terms then explain that it will use your voice, audio recordings, transcripts and the other information that is collected “to help us provide you with advertising that is more relevant to you. It also includes sharing information, from time to time, with our service providers, such as cloud storage providers.” It then explains that you can “interact with advertisements on Spotify using your voice. During a voice-enabled ad, you will hear a voice prompt followed by an audible tone.” Of course, you should know that your response will then be recorded, collected, and shared.

In response to the question “Is Spotify recording all of my conversations?,” the terms state that “Spotify listens in short snippets of a few seconds which are deleted if the wake-word is not detected.” That means that it is listening frequently until you say, “Hey Spotify.” It doesn’t say how often the short snippets occur.
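The “short snippets” behavior described above is a common wake-word pattern: audio is held in a small rolling buffer and silently discarded unless the detector fires. A rough Python sketch of that pattern follows; it is illustrative only, not Spotify’s actual implementation, and the chunk count and string-matching detector are simplifying assumptions:

```python
import collections

SNIPPET_CHUNKS = 4  # rolling buffer size; a few seconds of audio chunks

def detect_wake_word(chunk):
    # Placeholder detector: real systems run a small on-device model.
    return "hey spotify" in chunk

def process_stream(chunks):
    # Old chunks fall off the deque automatically, i.e., are "deleted".
    buffer = collections.deque(maxlen=SNIPPET_CHUNKS)
    captured = []          # audio that would be kept (and sent upstream)
    listening = False
    for chunk in chunks:
        buffer.append(chunk)
        if not listening and detect_wake_word(chunk):
            listening = True
            captured.extend(buffer)   # keep the snippet containing the wake word
        elif listening:
            captured.append(chunk)    # everything after the wake word is kept
    return captured
```

Until the wake word is detected, nothing is retained; only afterward does buffered audio get kept and, in a real system, transmitted.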

Consumers can turn off the voice controls and voice ads by disabling their microphone. This is true for all apps that include access to the microphone, which is why it is important to frequently look at your privacy settings and see which apps have access to your microphone and to manage that capability (along with all of the apps in your privacy settings).

It is important to know which apps have access to your biometric information and who they share it with, because you cannot manage that information once you give it away. You don’t know how they are really using it, or how they are storing, securing, disclosing, or retaining it. Think about your Social Security number and how many times you have received a breach notification letter. You can try to protect your credit and your identity with credit monitoring and credit freezes, but those tools won’t help once your biometric information has been disclosed to scammers and fraudsters.

Your voice can be used for fraudulent purposes. It can be used for authentication to get into accounts and for vishing (see blog post on vishing here). Your voice is unique, and sharing it with apps or others without knowing how it is secured is worth thinking twice about. If the information is not secured and is subject to a security incident, it gives criminals another very potent tool to commit fraud against you and others.

Before providing your biometric information to any app, or anyone else for that matter, read the Privacy Policy and Terms of Use and understand what you are giving away merely for the convenience of using the app.

A cyber-attack against Bithumb, one of South Korea’s largest cryptocurrency exchanges and one of the five largest in the world, has yielded access to the data of 30,000 users and drained some of their accounts in the process. Bithumb is one of the biggest Ethereum exchanges by volume in South Korea, representing more than 44 percent of trading in that country.

The Korea Internet and Security Agency is investigating the incident that occurred on June 30 when an intruder obtained access to Bithumb’s system through the hacking of an employee’s home PC. The incident affected 3 percent of Bithumb’s users.

The data compromised in the incident included users’ names, mobile telephone numbers, and email addresses. In addition, the disposable (one-time) passwords that some users used in financial transactions were also compromised, which led to the draining of some of those users’ accounts.

The hackers used “voice phishing” (vishing), which is when the hacker directly contacts the company on the telephone, poses as an executive and tries to get information from an unsuspecting employee—including usernames, passwords and security codes or answers to security questions in order to gain access to the company’s system.

In this case, it is being reported that the attacker posed as an executive of Bithumb in a telephone call, claimed that suspicious activity was found on the account, and asked for the credentials so he could fix it. The victim complied and the hacker gained access to account information and thereafter drained multiple financial accounts of users.

Bithumb is offering to compensate victims and is continuing to investigate the incident.

The lesson is that hackers and criminals are very bold and are using new techniques to steal. We talk a lot about email phishing and spear phishing, but vishing should not be overlooked. Employee education is important in alerting staff to these sophisticated techniques.