On December 19, 2025, the Federal Bureau of Investigation (FBI) published an Alert warning the public, based on data dating back to 2023, that “malicious actors have impersonated senior U.S. state government, White House, and Cabinet level officials, as well as members of Congress to target individuals, including officials’ family members and personal acquaintances.”

The malicious actors send AI-generated voice messages in vishing campaigns and AI-generated text messages in smishing campaigns that impersonate officials. They establish communication with the victim on an encrypted messaging application to:

  • Discuss current events;
  • Ask about U.S. policy;
  • Propose a meeting with high-ranking officials;
  • Request copies of personal documents;
  • Request a wire transfer to an overseas financial institution;
  • Announce the victim’s appointment to a company’s board of directors;
  • Request an authentication code that allows the threat actor to sync their device with the victim’s contact list; and
  • Request the victim introduce the threat actor to a known associate.

The threat actor starts the communication with a text message and then asks the victim to move to an encrypted platform such as Signal, Telegram, or WhatsApp.

The Alert provides recommendations for spotting a fake message, including:

  • Verify the identity of the person calling you or sending text or voice messages. Before responding, research the originating number, organization, and/or person purporting to contact you. Then independently identify a phone number for the person and call to verify their authenticity.
  • Carefully examine the email address, messaging contact information, including phone numbers, URLs, and spelling used in any correspondence or communications. Scammers often use slight differences to deceive you and gain your trust. For instance, actors can incorporate publicly available photographs in text messages, use minor alterations in names and contact information, or use AI-generated voices to masquerade as a known contact.
  • Look for subtle imperfections in images and videos, such as distorted hands or feet, unrealistic facial features, indistinct or irregular faces, unrealistic accessories such as glasses or jewelry, inaccurate shadows, watermarks, voice call lag time, voice matching, and unnatural movements.
  • Listen closely to the tone and word choice to distinguish between a legitimate phone call or voice message from a known contact versus AI-generated voice cloning, as they can sound nearly identical.
  • AI-generated content has advanced to the point that it is often difficult to identify. When in doubt about the authenticity of someone wishing to communicate with you, contact your relevant security officials or the FBI for help.
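The Alert’s advice to examine addresses for “slight differences” can be partially automated. As a minimal sketch (the known-contact list and similarity threshold here are illustrative assumptions, not part of the FBI guidance), a similarity check using Python’s standard difflib can flag an address that is suspiciously close to, but not identical to, a known contact:

```python
from difflib import SequenceMatcher

# Hypothetical list of known-good addresses for illustration only.
KNOWN_CONTACTS = {"jane.doe@state.gov"}

def flag_lookalike(address: str, threshold: float = 0.85) -> bool:
    """Return True if address nearly matches a known contact but isn't one."""
    address = address.strip().lower()
    if address in KNOWN_CONTACTS:
        return False  # exact match: legitimate, not a lookalike
    return any(
        SequenceMatcher(None, address, known).ratio() >= threshold
        for known in KNOWN_CONTACTS
    )

print(flag_lookalike("jane.d0e@state.gov"))    # single-character swap: True
print(flag_lookalike("unrelated@example.com")) # dissimilar address: False
```

A check like this is only a first filter; the FBI’s core recommendation, independently verifying the contact through a known phone number, still applies.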

We have educated our readers about phishing, smishing, QRishing, and vishing scams, and now we’re warning you about what we have dubbed “snailing.” Yes, believe it or not, threat actors have gone retro and are using snail mail to try to extort victims. TechRadar is reporting that, according to GuidePoint Security, an organization received several letters in the mail, allegedly from the BianLian cybercriminal gang, stating:

“I regret to inform you that we have gained access to [REDACTED] systems and over the past several weeks have exported thousands of data files, including customer order and contact information, employee information with IDs, SSNs, payroll reports, and other sensitive HR documents, company financial documents, legal documents, investor and shareholder information, invoices, and tax documents.”

The letter alleges that the recipient’s network “is insecure and we were able to gain access and intercept your network traffic, leverage your personal email address, passwords, online accounts and other information to social engineer our way into [REDACTED] systems via your home network with the help of another employee.” The threat actors then demand $250,000-$350,000 in Bitcoin within ten days. They even offer a QR code in the letter that directs the recipient to the Bitcoin wallet.

It’s comical that the letters have a return address of an actual Boston office building.

GuidePoint Security says the letters and attacks mentioned in them are fake and are inconsistent with BianLian’s ransom notes. Apparently, these days, even threat actors get impersonated. Now you know—don’t get scammed by a snailing incident.

CrowdStrike recently published its 2025 Global Threat Report, which, among other conclusions, emphasized that social engineering tactics aimed at stealing credentials grew an astounding 442% in the second half of 2024. Correspondingly, the use of stolen credentials to attack systems increased.

Other observations in the report include:

  • Adversaries are operating with unprecedented speed and adaptability;
  • China expanded its cyber espionage enterprise;
  • Stolen credential use is increasing;
  • Social engineering tactics aim to steal credentials;
  • Generative AI drives new adversary risks;
  • Cloud-conscious actors continue to innovate; and
  • Adversaries are exploiting vulnerabilities to gain access.

The details behind these conclusions include that breakout time, the time it takes an adversary to start moving laterally through a network, “reached an all-time low in the past year. The average fell to 48 minutes, and the fastest breakout time we observed dropped to a mere 51 seconds.” This means that threat actors are breaking in and swiftly moving within the system, making them difficult to detect, block, and stop.

Vishing “saw explosive growth—up 442% between the first and second half of 2024.”

CrowdStrike’s observations are instructive for planning and hardening defenses against these risks. Crucial pieces of the defense are:

  • Continued education and training of employees, including how social engineering schemes work;
  • The importance of protecting credentials; and
  • How credentials are used to enter a system.

Although we have been repeatedly educating employees on these themes, the statistics and real-life experience show that the message is not getting through. Addressing these specific risks through your training program may help stem the tide of successful social engineering campaigns.

I often get asked whether law enforcement is making any headway in catching cybercriminals. Although it is a challenging task, a recent example of a big win for law enforcement deserves celebration.

Authorities from 40 countries, territories, and regions came together to assist INTERPOL with a recent global cybercrime initiative known as Operation HAECHI-V. The initiative took place between July and November 2024, resulting in the arrest of 5,500 individuals and the seizure of over $400 million. The initiative culminated with “Korean and Beijing authorities jointly dismantl[ing] a widespread voice phishing syndicate responsible for financial losses totaling $1.1 billion and affecting over 1,900 victims.”

One scam outlined in an INTERPOL purple notice warns consumers:

[O]f an emerging cryptocurrency fraud practice called the USDT Token Approval Scam that allows bad actors to drain victims’ wallets by leveraging romance-themed baits to trick them into buying popular Tether stablecoins (USDT tokens) and investing them. Once the scammers have gained their trust, the victims are provided with a phishing link claiming to allow them to set up their investment account…. In reality, by clicking they authorize full access to the scammers, who can then transfer funds out of their wallet without the victim’s knowledge.

We love wins for law enforcement, and this win was significant, but it also informs consumers about how these schemes work. Pay attention to the techniques. This one included both phishing and vishing. Those techniques continue to be tried and true for international cyber criminals.

Everyone thinks they can spot a phishing email. If that were true, we would not see so many security incidents, data breaches, and ransomware attacks. The statistics overwhelmingly show that phishing emails are a significant cause of data breaches.

If everyone were able to spot a phishing email, threat actors would stop using them; it wouldn’t be worth their time, and they would use other methods to attack victims. Instead, because of their effectiveness, phishing attacks actually surged 40% in 2023, according to research by Egress.

One theory for this surge is the use of artificial intelligence (AI). Threat actors are using generative AI to draft phishing emails that look and sound as if they were written in the victim’s native language. The messages contain none of the grammatical errors or misspellings that used to make detection easier. In addition, threat actors use AI-generated deepfake videos or voiceovers in phishing attacks to lure victims into believing that the threat actor is someone they know, trust, or love. Further, AI can assist threat actors with actually writing the malware code for an attack.

Threat actors are also hiring other attackers to carry out phishing campaigns, a model known as Phishing-as-a-Service (PhaaS). This allows threat actors to conduct more campaigns and cast a wider net of potential victims.

According to The Hacker News, “While AI and PHaaS have made phishing easier, businesses and individuals can still defend against these threats. By understanding the tactics used by threat actors and implementing effective security measures, the risk of falling victim to phishing attacks can be reduced.”

Recognize that phishing (and smishing, vishing, and QRishing) campaigns are increasing. Stay abreast of the new tactics used, and stay vigilant in identifying and protecting yourself against them.

Impersonation schemes are on the rise, and artificial intelligence (including deep fakes and voice cloning) will only make these schemes more difficult to detect.

Threat actors are emboldened, as evidenced by a recent alert from the Cybersecurity and Infrastructure Security Agency (CISA) warning that threat actors are impersonating CISA employees in vishing attacks in order to obtain money.

Threat actors impersonate government employees, including those of the IRS and the FTC, to try to scare individuals into providing information and making payments. The FTC has published numerous Scam Alerts on this subject, which can be accessed at www.ftc.gov.

CISA reminds us that “CISA staff will never contact you with a request to wire money, cash, cryptocurrency, or use gift cards and will never instruct you to keep the discussion secret.”

Remember that scammers are bold and unscrupulous. Heed the recommendations of CISA and the FTC on how to detect and mitigate impersonation voice calls.

Wow! It’s hard to believe this blog marks the 400th Privacy Tip since I started writing many years ago. I hope the tips have been helpful over the years and that you have been able to share them with others to spread the word. 

I thought it would be fun to pick 10 (ok—technically, a few more than 10) Privacy Tips and re-publish them (in case you missed them) in honor of our 400th Privacy Tip milestone. We have published tips that are relevant to the hot issues of the time, but some are time-honored. It was really hard to pick, but here they are:

Continue Reading Privacy Tip #400 – Best of First 400 Privacy Tips

The Health Sector Cybersecurity Coordination Center (HC3) recently issued an Alert warning that “threat actors employing advanced social engineering tactics to target IT help desks in the health sector and gain initial access to target organizations” have been on the rise.

The social engineering scheme starts with a telephone call to the IT help desk from “an area code local to the target organization, claiming to be an employee in a financial role (specifically in revenue cycle or administrator roles). The threat actor is able to provide the required sensitive information for identity verification, including the last four digits of the target employee’s social security number (SSN) and corporate ID number, along with other demographic details. These details were likely obtained from professional networking sites and other publicly available information sources, such as previous data breaches. The threat actor claimed that their phone was broken and, therefore, could not log in or receive MFA tokens. The threat actor then successfully convinced the IT help desk to enroll a new device in multi-factor authentication (MFA) to gain access to corporate resources.”

After gaining access, the threat actor targets login information for payer websites and submits forms to make ACH changes on payer accounts. “Once access has been gained to employee email accounts, they sent instructions to payment processors to divert legitimate payments to attacker-controlled U.S. bank accounts. The funds were then transferred to overseas accounts. During the malicious campaign, the threat actor also registered a domain with a single letter variation of the target organization and created an account impersonating the target organization’s Chief Financial Officer (CFO).”

The threat actors are impersonating employees using spearphishing voice techniques, also known as “vishing.” HC3 noted that “threat actors may also attempt to leverage AI voice impersonation techniques to social engineer targets, making remote identity verification increasingly difficult with these technological advancements. A recent global study found that out of 7,000 people surveyed, one in four said that they had experienced an AI voice cloning scam or knew someone who had.”

HC3 provides numerous mitigations to assist with the prevention of these vishing schemes, which are outlined in the Alert.

I am not a huge fan of using chatbots, as I never end up getting my questions fully answered. I get the efficiency of using a chatbot for simple questions, but my questions are usually not so easily resolved, so I end up completely frustrated with the process and trying to find a human being to help. This happens a lot with my internet service provider. I start with the chatbot, don’t get very far and then yell, “Can’t you just let me talk to someone who can fix my problem?”

At any rate, it seems that lots of people use chatbots and are quite comfortable giving them all sorts of information. That’s probably not a great idea, judging from a summary of research done by Trustwave.

Bleeping Computer obtained research from Trustwave before publication which shows that threat actors are deploying phishing attacks “using automated chatbots to guide visitors through the process of handing over their login credentials to threat actors.” Using a chatbot “gives a sense of legitimacy to visitors of the malicious sites, as chatbots are commonly found on websites for legitimate brands.”

According to Bleeping Computer, the process begins with a phishing email claiming to have information about the delivery of a package (an old trick that still works) from a well-known delivery company. After clicking on “Please follow our instructions” to find out why the package can’t be delivered, the victim is directed to a PDF file that contains links to a malicious phishing site. When the page loads, a chatbot appears to explain why the package couldn’t be delivered (usually because the label was damaged) and shows the victim a picture of the parcel. The chatbot then requests that the victim provide their personal information and confirms the scheduled delivery of the package.

The victim is then directed to a phishing page where they enter account credentials to pay for the shipping, including credit card information. The threat actors lend legitimacy to the process by sending a one-time password via SMS to the victim’s mobile phone number (which the victim gave the chatbot), so the victim believes the transaction is legitimate.

The moral of this story: continue to be suspicious of any emails, texts, or telephone calls (phishing, smishing, and vishing), and now chatbots, asking for your personal or financial information.

The FBI’s Internet Crime Complaint Center (IC3) recently issued a warning alerting consumers that scammers are using malicious QR Codes to reroute unsuspecting customers to malicious sites to try to steal their data.

In a tactic also known as QRishing, criminals are taking advantage of our familiarity with QR codes, which proliferated at restaurants and other establishments during the pandemic, to commit crimes. The criminals embed malicious code into QR codes to redirect users to malicious sites and then attempt to get users to provide personal information, financial information, or other data that the criminals can use to perpetrate fraud or identity theft.

Embedding malicious code into a QR code is no different from embedding it into a link or attachment in a phishing email or a smishing text. Consumers are not as quick to question QR codes as they are to spot malicious emails and texts.

Hence, the alert from IC3, which warns consumers to check and re-check any URL generated by a QR code and to be cautious about using QR codes for any form of payment.
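IC3’s advice to check any URL generated by a QR code can be illustrated with a short sketch. Assuming the QR code has already been decoded to a URL string (the expected-domain list below is hypothetical, used only for illustration), Python’s standard urllib.parse can verify that the link actually points at the domain you expect:

```python
from urllib.parse import urlparse

# Hypothetical legitimate domain for illustration only.
EXPECTED_DOMAINS = {"www.example-delivery.com"}

def is_expected_url(url: str) -> bool:
    """Check the scheme and exact hostname of a URL decoded from a QR code."""
    parsed = urlparse(url)
    # Require HTTPS and an exact hostname match; lookalike hosts fail the check.
    return parsed.scheme == "https" and parsed.hostname in EXPECTED_DOMAINS

print(is_expected_url("https://www.example-delivery.com/track"))  # True
print(is_expected_url("https://www.examp1e-delivery.com/track"))  # False: lookalike host
print(is_expected_url("http://www.example-delivery.com/track"))   # False: not HTTPS
```

Note that an exact hostname match is the point: a single-character variation in the domain, exactly the kind of trick described above, should fail the check rather than pass a fuzzy comparison.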

QR codes should be viewed as suspiciously as emails and texts. Be cautious when asked to scan a QR code, and refuse to provide any type of personal information or financial information after scanning one.