CrowdStrike recently published its 2025 Global Threat Report, which, among other conclusions, emphasized that social engineering tactics aimed at stealing credentials grew an astounding 442% in the second half of 2024. Correspondingly, the use of stolen credentials to attack systems increased.

Other observations in the report include:

  • Adversaries are operating with unprecedented speed and adaptability;
  • China expanded its cyber espionage enterprise;
  • Stolen credential use is increasing;
  • Social engineering tactics aim to steal credentials;
  • Generative AI drives new adversary risks;
  • Cloud-conscious actors continue to innovate; and
  • Adversaries are exploiting vulnerabilities to gain access.

The details behind these conclusions include that breakout time—the time it takes an adversary to start moving through a network after gaining access—“reached an all-time low in the past year. The average fell to 48 minutes, and the fastest breakout time we observed dropped to a mere 51 seconds.” This means that threat actors are breaking in and moving swiftly within the system, making them difficult to detect, block, and stop.

Vishing “saw explosive growth—up 442% between the first and second half of 2024.”

CrowdStrike’s observations are instructive for planning and hardening defenses against these risks. Crucial pieces of that defense are:

  • Continued education and training of employees (including how social engineering schemes work);
  • The importance of protecting credentials; and
  • How credentials are used to enter a system.

Although we have been repeatedly educating employees on these themes, the statistics and real-life experience show that the message is not getting through. Addressing these specific risks through your training program may help stem the tide of successful social engineering campaigns.

I often get asked whether law enforcement is making any headway in catching cybercriminals. Although it is a challenging task, a recent example of a big win for law enforcement deserves celebration.

Authorities from 40 countries, territories, and regions came together to assist INTERPOL with a recent global cybercrime initiative known as Operation HAECHI-V. The initiative took place between July and November 2024, resulting in the arrest of 5,500 individuals and the seizure of over $400 million. The initiative culminated with “Korean and Beijing authorities jointly dismantl[ing] a widespread voice phishing syndicate responsible for financial losses totaling $1.1 billion and affecting over 1,900 victims.”

One scam, outlined in an INTERPOL Purple Notice, warns consumers:

[O]f an emerging cryptocurrency fraud practice called the USDT Token Approval Scam that allows bad actors to drain victims’ wallets by leveraging romance-themed baits to trick them into buying popular Tether stablecoins (USDT tokens) and investing them. Once the scammers have gained their trust, the victims are provided with a phishing link claiming to allow them to set up their investment account….In reality, by clicking they authorize full access to the scammers, who can then transfer funds out of their wallet without the victim’s knowledge.
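For readers curious about the mechanics, this scam abuses the standard token “allowance” feature: signing a single approval transaction lets a designated address spend the victim’s tokens later, with no further confirmation required. The sketch below is a simplified, self-contained Python model of that allowance logic; the names and amounts are illustrative only, and it is not the actual USDT contract code.

```python
# Simplified model of token allowance mechanics (illustrative only).
# In the real scam, the "phishing link" asks the victim's wallet to sign
# an approval transaction naming the scammer's address as the spender.

class Token:
    def __init__(self):
        self.balances = {}      # address -> token balance
        self.allowances = {}    # (owner, spender) -> approved amount

    def approve(self, owner, spender, amount):
        # The victim signs this, believing it "sets up an investment account".
        self.allowances[(owner, spender)] = amount

    def transfer_from(self, spender, owner, to, amount):
        # The scammer calls this later -- no action by the owner is needed.
        allowed = self.allowances.get((owner, spender), 0)
        if allowed < amount or self.balances.get(owner, 0) < amount:
            raise PermissionError("insufficient allowance or balance")
        self.allowances[(owner, spender)] = allowed - amount
        self.balances[owner] -= amount
        self.balances[to] = self.balances.get(to, 0) + amount

usdt = Token()
usdt.balances["victim"] = 1_000
# Victim clicks the phishing link and unknowingly approves an unlimited spend.
usdt.approve("victim", "scammer", 2**256 - 1)
# Later, the scammer drains the wallet without the victim's involvement.
usdt.transfer_from("scammer", "victim", "scammer", 1_000)
print(usdt.balances["victim"])   # 0 -- the wallet is empty
```

The key point for consumers: the theft happens at the moment of approval, not at the moment of transfer, which is why victims report funds disappearing “without their knowledge” long after they clicked.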

We love wins for law enforcement, and this win was significant, but it also informs consumers about how these schemes work. Pay attention to the techniques. This one included both phishing and vishing. Those techniques continue to be tried and true for international cyber criminals.

Everyone thinks they can spot a phishing email. If that were true, we would not see so many security incidents, data breaches, and ransomware attacks. The statistics overwhelmingly show that phishing emails are a significant cause of data breaches.

If everyone were able to spot a phishing email, threat actors would stop using them: it wouldn’t be worth their time, and they would use other methods to attack victims. Instead, because of their effectiveness, phishing attacks surged 40% in 2023, according to research by Egress.

One theory for why this is so is the use of artificial intelligence (AI). Threat actors are using generative AI to draft phishing emails that look and sound as though they were written in the victim’s native language. There are no grammatical errors or misspellings in the message, which used to make detection easier. In addition, threat actors use AI-generated deepfake videos or voiceovers in phishing attacks to lure victims into believing that the threat actor is someone they know, trust, or love. Further, AI can assist threat actors with writing the malware code for the attack itself.

Threat actors are also hiring other attackers to carry out phishing campaigns, a model known as Phishing-as-a-Service (PhaaS). This allows threat actors to conduct more campaigns and cast a wider net of potential victims.

According to The Hacker News, “While AI and PHaaS have made phishing easier, businesses and individuals can still defend against these threats. By understanding the tactics used by threat actors and implementing effective security measures, the risk of falling victim to phishing attacks can be reduced.”

Recognize that phishing (and smishing, vishing, and qrishing) campaigns are increasing. Stay abreast of the new tactics used, and stay vigilant in identifying and protecting yourself against them.

Impersonation schemes are on the rise, and artificial intelligence (including deep fakes and voice cloning) will only make these schemes more difficult to detect.

Threat actors are emboldened, as evidenced by a recent alert from the Cybersecurity and Infrastructure Security Agency (CISA) warning that threat actors are impersonating CISA employees in vishing attacks in order to obtain money. (View our previous related posts here.)

Threat actors impersonate government employees, including those at the IRS and the FTC, to try to scare individuals into providing information and payment. The FTC has published numerous Scam Alerts on this subject, which can be accessed at www.ftc.gov.

CISA reminds us that “CISA staff will never contact you with a request to wire money, cash, cryptocurrency, or use gift cards and will never instruct you to keep the discussion secret.”

Remember that scammers are bold and unscrupulous. Heed the recommendations of CISA and the FTC on how to detect and mitigate impersonation voice calls.

Wow! It’s hard to believe this blog marks the 400th Privacy Tip since I started writing many years ago. I hope the tips have been helpful over the years and that you have been able to share them with others to spread the word. 

I thought it would be fun to pick 10 (ok—technically, a few more than 10) Privacy Tips and re-publish them (in case you missed them) in honor of our 400th Privacy Tip milestone. We have published tips that are relevant to the hot issues of the time, but some are time-honored. It was really hard to pick, but here they are:


The Health Sector Cybersecurity Coordination Center (HC3) recently issued an Alert warning of a rise in “threat actors employing advanced social engineering tactics to target IT help desks in the health sector and gain initial access to target organizations.”

The social engineering scheme starts with a telephone call to the IT help desk from “an area code local to the target organization, claiming to be an employee in a financial role (specifically in revenue cycle or administrator roles). The threat actor is able to provide the required sensitive information for identity verification, including the last four digits of the target employee’s social security number (SSN) and corporate ID number, along with other demographic details. These details were likely obtained from professional networking sites and other publicly available information sources, such as previous data breaches. The threat actor claimed that their phone was broken and, therefore, could not log in or receive MFA tokens. The threat actor then successfully convinced the IT help desk to enroll a new device in multi-factor authentication (MFA) to gain access to corporate resources.”

After the threat actor gains access, they target login information for payer websites and submit a form to make ACH changes for payer accounts. “Once access has been gained to employee email accounts, they sent instructions to payment processors to divert legitimate payments to attacker-controlled U.S. bank accounts. The funds were then transferred to overseas accounts. During the malicious campaign, the threat actor also registered a domain with a single letter variation of the target organization and created an account impersonating the target organization’s Chief Financial Officer (CFO).”

The threat actors are leveraging spearphishing voice techniques and impersonating employees, a tactic also known as “vishing.” HC3 noted that “threat actors may also attempt to leverage AI voice impersonation techniques to social engineer targets, making remote identity verification increasingly difficult with these technological advancements. A recent global study found that out of 7,000 people surveyed, one in four said that they had experienced an AI voice cloning scam or knew someone who had.”

HC3 provides numerous mitigations to assist with the prevention of these vishing schemes, which are outlined in the Alert.

I am not a huge fan of using chatbots, as I never end up getting my questions fully answered. I get the efficiency of using a chatbot for simple questions, but my questions are usually not so easily resolved, so I end up completely frustrated with the process and trying to find a human being to help. This happens a lot with my internet service provider. I start with the chatbot, don’t get very far and then yell, “Can’t you just let me talk to someone who can fix my problem?”

At any rate, it seems that lots of people use chatbots and are quite comfortable giving them all sorts of information. That’s probably not a great idea, judging by a summary of research done by Trustwave.

Bleeping Computer obtained research from Trustwave before publication which shows that threat actors are deploying phishing attacks “using automated chatbots to guide visitors through the process of handing over their login credentials to threat actors.” Using a chatbot “gives a sense of legitimacy to visitors of the malicious sites, as chatbots are commonly found on websites for legitimate brands.”

According to Bleeping Computer, the process begins with a phishing email claiming to have information about the delivery of a package (it’s an old trick that still works) from a well-known delivery company. After clicking on “Please follow our instructions” to figure out why your package can’t be delivered, the victim is directed to a PDF file that contains links to a malicious phishing site. When the page loads, a chatbot appears to explain why the package couldn’t be delivered – the explanation usually being that the label was damaged – and shows the victim a picture of the parcel. Then the chatbot requests that the victim provide their personal information and confirms the scheduled delivery of the package.

The victim is then directed to a phishing page where the victim enters account credentials to pay for the shipping, including credit card information. The threat actors lend legitimacy to the process by sending a one-time password via SMS to the victim’s mobile phone number (which the victim gave the chatbot), so the victim believes the transaction is legit.

The moral of this story: continue to be suspicious of any emails, texts, or telephone calls (phishing, smishing, and vishing), and now chatbots, asking for your personal or financial information.

The FBI’s Internet Crime Complaint Center (IC3) recently issued a warning alerting consumers that scammers are using malicious QR codes to reroute unsuspecting consumers to malicious sites in an attempt to steal their data.

In a scheme also known as QRishing [view related post], criminals are taking advantage of our familiarity with QR codes, which grew when restaurants and other establishments adopted them during the pandemic, to commit crimes. The criminals embed malicious code into QR codes to redirect users to a malicious site and then attempt to get them to provide personal information, financial information, or other data that the criminals can use to perpetrate fraud or identity theft.

Embedding malicious code into a QR code is no different from embedding it into a link or attachment in a phishing email or a smishing text, but consumers are not as alert to questioning QR codes as they are to spotting malicious emails and texts.

Hence, the alert from IC3. IC3 is warning consumers to check and re-check any URL generated by a QR code and to be cautious about using them for any form of payment.
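A QR code is nothing exotic: it simply encodes a URL, so the same hostname checks you would apply to an email link apply here. The short Python sketch below illustrates the kind of check IC3 recommends; the domain names are made-up examples, not real sites.

```python
# Check whether a URL decoded from a QR code actually points at the domain
# it claims to be (e.g., a city parking-payment page). Domain names here
# are illustrative examples only.
from urllib.parse import urlparse

TRUSTED_DOMAINS = {"pay.example-city.gov"}

def looks_trustworthy(url: str) -> bool:
    parsed = urlparse(url)
    if parsed.scheme != "https":          # reject plain http and odd schemes
        return False
    host = (parsed.hostname or "").lower()
    # Require an exact match or a true subdomain; a look-alike such as
    # "pay.example-city.gov.evil.com" will not pass this check.
    return any(host == d or host.endswith("." + d) for d in TRUSTED_DOMAINS)

print(looks_trustworthy("https://pay.example-city.gov/meter/42"))        # True
print(looks_trustworthy("https://pay.example-city.gov.evil.com/meter"))  # False
print(looks_trustworthy("http://pay.example-city.gov/meter/42"))         # False
```

The second case is the one scammers count on: the trusted name appears at the start of the hostname, but the site actually belongs to whoever controls the final domain.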

QR codes should be viewed as suspiciously as emails and texts. Be cautious when asked to scan a QR code, and refuse to provide any type of personal information or financial information after scanning one.

2021 is behind us. Whether that is positive or negative for you, in my world, it was another record year. A record year of data breaches.

According to the Identity Theft Resource Center (ITRC), data breaches in 2021 surpassed the previous record year of 2020 by 17 percent. The incidents ranged from the theft of cryptocurrency (Livecoin went out of business following an attack) to ransomware attacks (Colonial Pipeline), zero-day vulnerabilities in Microsoft Exchange Server, and finally, the big one: Log4j.

There is speculation that the Log4j vulnerability will last for years. The Log4j vulnerability is so concerning that the FTC issued a warning this week to companies declaring that if companies don’t mitigate the vulnerability, they could be subject to an enforcement action [view related posts here and here].

What does this all mean to us as consumers? Many of us roll our eyes and say “All of our information is out there anyway, so why bother trying to protect it?” I say, don’t give up. Here are a few tips that are still important for protecting your data and your privacy:

  • If your information is compromised, sign up for credit monitoring or a credit freeze if offered.
  • Continue to check your credit report, which you can get for free once a year, to help determine whether any fraudulent accounts have been opened in your name.
  • Protect your Social Security number and driver’s license number. Don’t just give them when asked or fill them in on a form.
  • Mind your cookies.
  • Check the privacy settings on your phone and update them frequently.
  • Opt in to “do not track” options.
  • Use DuckDuckGo as your browser.
  • Consider the Jumbo privacy app.
  • Read the privacy policies of apps and devices before you download or activate them.
  • Be aware of phishing, vishing, smishing, and qrishing.
  • Understand what IoT devices you have and activate unique passwords for them.
  • Change the default passwords on your home router and wi-fi.
  • Update the software on your devices as soon as you can.

And there are so many more! Check out all of our privacy tips at www.dataprivacyandsecurityinsider.com and don’t give up! Even though 2022 looks to be another whopper year for data breaches, if we don’t try to protect our privacy, then who will?

Although a security researcher has confirmed that LinkedIn users’ data, including full names, gender, email addresses, telephone numbers, and industry information, is for sale on RaidForums by a hacker self-dubbed “GOD User TomLiner,” LinkedIn has stated that the data did not come from a breach of its networks. According to LinkedIn, “[O]ur initial analysis indicates that the dataset includes information scraped from LinkedIn as well as information obtained from other sources….This was not a LinkedIn data breach and our investigation has determined that no private LinkedIn member data was exposed….”

No matter how the data ended up for sale on a hacker forum, if you are a LinkedIn user, you should be aware of it and understand how that information can be used against you. Valid email addresses and telephone numbers give hackers and scammers the ability to use them in targeted phishing and vishing schemes and other social engineering scams. In addition, the information can be compiled into dossiers and aggregated with other publicly available information for targeted campaigns.

As a precaution, security experts are suggesting that LinkedIn users update their passwords and enable multi-factor authentication on their LinkedIn accounts.