Scammers are always looking for new ways to dupe victims. If you struggle with your weight, you think about it often and are always looking for an easier way to lose a few pounds. There is no easy way, but the search for one never stops.

With the advent and popularity of GLP-1 weight loss drugs, the drugs are reportedly in high demand and more difficult to obtain. Scammers are exploiting this craze to find vulnerable victims desperate to obtain the drugs and scam them out of their money. According to McAfee’s Threat Research Team, “in the first four months of 2024, malicious phishing attempts centered around Ozempic, Wegovy, and Semaglutide increased 183%” compared to the last quarter of 2023.

The phishing schemes offer the drugs online and accept payment through Bitcoin, Zelle, Venmo, and Cash App. McAfee found “449 risky website URLs and 176,871 dangerous phishing attempts centered around these drugs” in the first quarter of 2024. McAfee also found that scammers are impersonating doctors from outside the U.S. on Facebook, promising to send weight loss drugs without a prescription. Scammers are also using Craigslist and other online marketplaces to post phony offers for these drugs; McAfee researchers found “207 scam postings for Ozempic” in just one day in April 2024.

Falling for these scams can have serious consequences: scammers can steal your information and your money, and the fake products can harm your health. In most instances, you pay and receive nothing; in others, you receive counterfeit drugs. McAfee explains that instead of receiving the correct drug, a victim may receive an “EpiPen with allergy medication, insulin pens, or pens loaded with a saline solution.” Unaware victims then inject these substances, which can cause serious harm.

McAfee provides the following tips to avoid online weight loss scams:

  • Remember that buying weight loss drugs without a prescription is illegal.
  • Only buy from reputable pharmacies.
  • Watch out for unreasonably low prices.
  • Keep an eye out for website errors and missing product details.
  • Look for misleading claims.
  • Consider AI-powered scam text protection through a text scam detector.
  • Stay vigilant.

Stay on top of the latest scams so you can protect yourself, and understand that scammers will always use the newest craze to find new victims.

On October 16, 2024, the New York Department of Financial Services (DFS) issued an Industry Letter to regulated entities entitled “Cybersecurity Risks Arising from Artificial Intelligence and Strategies to Combat Related Risks.”

The letter “is intended to be a tool to assist Covered Entities in understanding and assessing cybersecurity risks associated with the use of AI and the controls that may be used to mitigate those risks.” It does not impose additional compliance requirements beyond the DFS’ Cybersecurity Regulation but is designed to provide guidance on how the Regulation’s framework can be used to assess and mitigate risks arising from artificial intelligence (AI).

The guidance outlines risks such as AI-enabled social engineering, AI-enhanced cybersecurity attacks, exposure or theft of vast amounts of nonpublic information, and increased vulnerabilities due to third-party, vendor, and other supply chain dependencies.

The guidance provides suggestions on how organizations can use controls and measures to mitigate AI-related threats, including: risk assessments and risk-based programs, policies, procedures and plans; third-party service provider and vendor management; access controls; and cybersecurity training, monitoring, and data management. DFS suggests that AI threats are evolving, and “it is vital for Covered Entities to review and reevaluate their cybersecurity programs and controls at regular intervals, as required by Part 500.” Although the guidance does not impose any additional compliance obligations, in the event of a DFS audit, these basic measures will no doubt be evaluated. Whether your organization is a DFS-regulated entity or not, the guidance is basic cybersecurity hygiene for any organization when it comes to the risks of AI and mitigating those risks, so it’s worth a look.

Last week, we outlined the lawsuits against TikTok filed by New York, California, and North Carolina, which followed in the footsteps of Nebraska, Nevada (which filed suit against TikTok in February 2024), and Indiana (which filed suit in 2022). Since last week, at least 11 more jurisdictions have joined the fray, including Illinois, Kentucky, Louisiana, Massachusetts, Mississippi, New Jersey, Oregon, South Carolina, Vermont, Washington, and the District of Columbia.

The Attorneys General in the coalition have each filed suit against TikTok in their own state jurisdictions, alleging that TikTok misled the public about the safety of the platform, that the platform knowingly uses addictive features, and that it harms young people’s mental health.

Separately, the Texas Attorney General has filed suit against TikTok, alleging that it is violating Texas’ Securing Children Online through Parental Empowerment (SCOPE) Act, which went into effect on September 1, 2024. The law “bans social media companies from selling or even sharing a minor’s information unless it has the approval of a guardian of the minor.” We anticipate that more states will join the cause, piling onto TikTok’s current legal woes.

On October 11, 2024, following the filing of a lawsuit against TikTok by the Kentucky Attorney General, Senators Richard Blumenthal (D-CT) and Marsha Blackburn (R-TN), who authored the bipartisan “Kids Online Safety Act,” requested that TikTok CEO Shou Chew provide all “documents, communications, and research held by TikTok regarding the safety of minors on its platform.” The request is “in response to shocking revelations of TikTok’s awareness of, and indifference to, its platform’s substantial harm to children and teens.” The letter specifically requests that TikTok provide the Senators with the “documents and information previously produced to the Kentucky Attorney General’s Office and other states Attorneys General.” TikTok is required to produce the documents by October 25, 2024.

Unfortunately, when natural disasters hit innocent victims and good-natured people want to help those in need, scammers swoop in to exploit the bleak situation through fraud and price gouging.

Following Hurricanes Helene and Milton, the Federal Trade Commission (FTC), the Department of Justice, and the Consumer Financial Protection Bureau (CFPB) issued a warning to consumers about scammers trying to exploit these recent natural disasters to launch scams. The scams include:

  • Fraudulent charities soliciting donations for disaster victims that imitate the names of charities linked to the disaster.
  • Scammers impersonating government officials, offering disaster relief in exchange for personal information or money.
  • Scammers promoting non-existent businesses or investment opportunities related to disaster recovery, such as rebuilding or flood-proofing.
  • Price gouging for essential goods and services needed by disaster victims.

According to the warning:

To avoid scams and frauds while you’re recovering from a hurricane or another natural disaster, remember only scammers will insist you pay for services by wire transfer, gift card, payment app, cryptocurrency or in cash. Avoid anyone who promises they can help you qualify for relief from the Federal Emergency Management Agency (FEMA) ― for a fee. That’s a scam. FEMA will never require you to pay a fee to get disaster relief. Never sign your insurance check over to someone else. Be sure to research contractors and get estimates from more than one before signing a contract for work. Get a written contract for repairs and read it carefully before signing it.

The FTC has information for consumers on how to avoid scams and prepare for natural disasters, and the CFPB has published a disaster and emergencies toolkit; both provide tips and tools to follow during natural disasters. We all want to help victims of natural disasters, but be careful and help in a safe manner so resources go directly to those in need and not to scammers.

Following in the footsteps of Nebraska, the Attorneys General of North Carolina, California, and New Jersey filed complaints against TikTok and its owner, ByteDance, Ltd., on October 8, 2024.

The suits are lengthy and full of allegations that TikTok is responsible for a “profound mental health crisis” among American teenagers. They allege that TikTok designed its social media platform to target youth and “manipulates them into habitual use, and mines the data produced by their excessive and compulsive use for more and more profit.”

The New Jersey complaint alleges that, for American youth, TikTok imposes no limits on use and is designed to promote excessive use. By contrast, the Chinese equivalent of TikTok, known as Douyin, “limits which hours in the day young users can access it and for how long. Chinese youth are required to wait through a five-second pause between videos when they spend too much time on Douyin. Some are limited to 40 minutes of use per day.” This revelation is telling: the Chinese version of the platform restricts use by young users, while the American version imposes no such restrictions and, in fact, encourages unhealthy levels of use by American youth.

The complaints each outline in detail how TikTok lured Americans between the ages of 13 and 17 to become users and how it designed its platform “to promote excessive, compulsive, and addictive use,” succeeding in getting young users onto the platform “almost constantly.” The complaints also outline how the use of TikTok has harmed young users. California alleges that “TikTok designs and provides beauty filters that it knows harm its young users” and that “encourage unhealthy, negative social comparison, which, in turn, can cause body image issues and related mental and physical disorders.” In addition, it alleges that “the platform’s addictive qualities, and the resulting excessive use by minors, harms those minors’ mental and physical health. Among the harms suffered by TikTok’s younger users are abnormal neurological changes, insufficient sleep, inadequate socialization with others, and increased risk of mood disorders such as depression and anxiety.”

If that isn’t enough to get off TikTok, I don’t know what is. The federal government and some state governments prohibit employees from using TikTok. Montana attempted to prohibit its use and was sued by TikTok. In a rare bipartisan move, Congress passed, and President Biden signed, legislation prohibiting TikTok’s use in the U.S. That law is being challenged in litigation by TikTok, a China-based platform, on First Amendment grounds. Now, four state Attorneys General are sounding the alarm about the harmful effects of TikTok on young people in our country. How much is enough for people to understand the national security threat that TikTok poses and the threat it poses to our children?

This week, Marriott International, Inc. and its subsidiary Starwood Hotels & Resorts Worldwide LLC (collectively, Marriott) agreed to the terms of a settlement order with the Federal Trade Commission (FTC), resolving allegations that Marriott’s failure to implement reasonable security measures led to three data breaches between 2014 and 2020 affecting more than 344 million consumers across the globe. The types of data affected included names, passport information, payment card numbers, loyalty numbers, dates of birth, email addresses, and other personal information. Specifically, the FTC alleged that Marriott failed to implement appropriate password controls, access controls, firewall controls, or network segmentation; patch outdated software and systems; adequately log and monitor network environments; or deploy adequate multifactor authentication.

Pursuant to the Order, Marriott will:

  • Provide all U.S. consumers with a means to request deletion of their personal information;
  • Allow all U.S. consumers to review loyalty rewards accounts upon request and reinstate loyalty points if such points were stolen as a result of the breach(es);
  • Clearly and transparently disclose to consumers how Marriott collects, maintains, uses, deletes, and discloses consumers’ personal information;
  • Retain personal information only for as long as it is needed to fulfill the purpose for which it was collected;
  • Implement and maintain a comprehensive information security program and certify compliance to the FTC annually for 20 years; and,
  • Undergo an independent, third-party security risk assessment every two years.

The FTC does not have legal authority to require Marriott to pay civil penalties in this matter. Additionally, this week, Marriott agreed to pay a $52 million penalty to 49 states and the District of Columbia to resolve similar data security allegations made by state regulators.

A new report published by the software company Egress this month, the Phishing Threat Trends Report, is a must-read. It outlines the proliferation of phishing toolkits on the dark web (which basically allow any Tom, Dick, or Harry hacker to launch a successful phishing campaign), how “commodity phishing attacks are overwhelming security teams,” the anatomy of advanced persistent threats, the most prolific phishing tactic in 2024, and how AI-assisted attacks are becoming more challenging to detect.

Presently, I would like to focus on one piece of the Egress report that is near and dear to me: the latest phishing tactics. Phishing continues to be one of the most prevalent causes of security incidents and data breaches, and the report contains some fascinating statistics for all of us to process and internalize. First, the “most phished day of the year so far” was June 10, 2024, and the most common time to receive a phishing email is 12:37 p.m. This means we should all be hypervigilant when checking email during the lunch hour. Second, there was a 28% increase in phishing emails in the second quarter of 2024 compared to the first quarter. During that time frame, 44% of phishing emails were sent from already compromised accounts, which allowed threat actors to bypass authentication protocols; 23% included malicious attachments; 20% relied solely on social engineering; and 12% contained a QR code. Oh, those QR codes: please educate yourself and your users not to scan QR codes received in an email. We predict QRishing will continue to rise.

The top five words used in phishing attacks are “urgent,” “sign,” “password,” “document,” and “delivery.” This is helpful as well: users’ antennae should go up when these words appear in an email. The most impersonated brands are Adobe, Microsoft, Chase, and Meta. Finally, employees are “accurately reporting” only 29% of the phishing emails they receive.
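For security professionals who want to turn that word list into a quick awareness demo, here is a minimal sketch; only the five flag words come from the Egress report, and the function name and sample subject line are illustrative assumptions:

# A toy illustration, not a detection tool: scan a subject line for the five
# words the Egress report says appear most often in phishing emails.
PHISHING_FLAG_WORDS = ("urgent", "sign", "password", "document", "delivery")

def flag_words_in_subject(subject: str) -> list[str]:
    """Return any of the report's top phishing words found in the subject line."""
    lowered = subject.lower()
    return [word for word in PHISHING_FLAG_WORDS if word in lowered]

# Example: a subject line that should raise a reader's antennae.
print(flag_words_in_subject("URGENT: sign the attached document today"))
# ['urgent', 'sign', 'document']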

There’s a lot packed into the Egress report, and it is full of useful information. What I want to focus on here is the most prolific phishing tactic in 2024: impersonation.

According to the report, between January 1 and August 31, 2024, 26% of phishing emails detected appeared “to be sent from brands that are not connected to the recipient via an established business relationship.” In other words, be wary of any email you receive from a business with which you have no relationship. Next, 16% of phishing attacks involve emails that impersonate the company the recipient works for. “HR was the most impersonated department in these types of attacks, with cybercriminals taking advantage of employees being quick to click on fake benefit packages or similar bait.” This means we should be wary of emails that appear to come from HR and take measures to verify that the email actually came from the HR department. One big clue is whether the banner alerting users that the email is external is present on the message. If an email purports to come from your HR department but carries an external alert banner, that is a strong sign it is a malicious phishing email.
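As a rough illustration of that clue, here is a minimal sketch (not a vetted security control) that flags a message whose display name claims an internal department while the sender’s address is external; the example.com domain, the department keywords, and the function name are assumptions made for this example:

# Flag messages whose display name claims an internal department but whose
# sender domain is not the organization's own (the "external banner" idea).
from email.utils import parseaddr

INTERNAL_DOMAIN = "example.com"  # assumed internal domain
INTERNAL_DEPARTMENTS = ("hr", "human resources", "it", "finance")  # crude substring keywords

def looks_like_internal_impersonation(from_header: str) -> bool:
    """Return True if the display name claims an internal department
    but the sender's address is external to the organization."""
    display_name, address = parseaddr(from_header)
    claims_internal = any(dep in display_name.lower() for dep in INTERNAL_DEPARTMENTS)
    domain = address.rsplit("@", 1)[-1].lower() if "@" in address else ""
    return claims_internal and domain != INTERNAL_DOMAIN

# Display name says "HR Benefits," but the address is external: flag it.
print(looks_like_internal_impersonation('"HR Benefits" <benefits@hr-update.info>'))  # True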

The next most commonly impersonated departments are your employer’s IT and Finance departments. This makes sense, since these departments often ask people to respond with information or to fill out surveys. The report emphasizes that “two of the most impersonated internal systems were e-signatures and employee feedback surveys, and the Microsoft logo appeared in more impersonation attacks than any other (again tied to system use and credential theft, and hijacking legitimate SharePoint links as an obfuscation technique to get through reputation-based detection).”

Hackers are also singling out those who are new to an organization. Egress found that employees in their first two to seven weeks on the job were the most targeted. The phishing emails aimed at this group impersonated VIPs such as the CEO, CFO, and chief people officer. This reinforces the importance of building phishing training, including statistics like those above, into a company’s new employee orientation to emphasize the threat.

Finally, hackers are impersonating celebrities. Although I would love to get an email from Taylor Swift, if I get one, it’s probably not real. Why on earth do people fall for these? It’s called “authority bias”: people act more quickly and ignore their instincts (like the voice in their brain that says, “Really, Taylor Swift is not and would NEVER email me. Perhaps this is a phishing email?”). According to Egress, the four most frequently impersonated celebrities are Jeff Bezos, Elon Musk, Warren Buffett, and MacKenzie Scott. Really, folks, none of those individuals are emailing you either, so don’t fall for it. The Egress report is a great tool for catching up on the most recent phishing tactics, and, if you are a security professional, it is great material to incorporate into your next cybersecurity training for employees.

A new U.S. National Cybersecurity Alliance survey shows that over one-third (38%) of “employees share sensitive work information with artificial intelligence (AI) tools without their employer’s permission.” Not surprisingly, “Gen Z and millennial workers are more likely to share sensitive work information without getting permission.”

The problem with employees sharing workplace data with chatbots is that when a worker inputs sensitive personal information or proprietary information into the tool, that information can be used to train the underlying model. If another user later enters a query to which the original information is responsive, the sensitive or proprietary data may be provided in the response. That is how generative AI works: data disclosed to the model is used to teach it and is no longer private.
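One way to visualize a mitigation for this risk is a minimal sketch, assuming a few simple regular expressions, that redacts obvious sensitive patterns before a prompt ever leaves the organization; the patterns, the sample prompt, and the send_to_chatbot() call are illustrative assumptions, not a compliance control:

# Redact obvious sensitive patterns before text is sent to a generative AI tool.
import re

REDACTION_PATTERNS = {
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "CREDIT_CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def redact(text: str) -> str:
    """Replace matches of each sensitive pattern with a labeled placeholder."""
    for label, pattern in REDACTION_PATTERNS.items():
        text = pattern.sub(f"[REDACTED {label}]", text)
    return text

prompt = "Summarize: client John Doe, SSN 123-45-6789, card 4111 1111 1111 1111"
safe_prompt = redact(prompt)
print(safe_prompt)
# Summarize: client John Doe, SSN [REDACTED SSN], card [REDACTED CREDIT_CARD]
# send_to_chatbot(safe_prompt)  # hypothetical call to an approved AI tool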

According to Dark Reading, several cases illustrate how significant the risk of employees sharing confidential information with chatbots is:

“A financial services firm integrated a GenAI chatbot to assist with customer inquiries, …Employees inadvertently input client financial information for context, which the chatbot then stored in an unsecured manner. This not only led to a significant data breach, but also enabled attackers to access sensitive client information, demonstrating how easily confidential data can be compromised through the improper use of these tools.”

Another real example of the inadvertent disclosure of proprietary and confidential information by a misinformed employee is:

“An employee, for whom English was a second language, at a multinational company, took an assignment working in the US…. In order to improve his written communications with his US based colleagues, he innocently started using Grammarly to improve his written communications. Not knowing that the application was allowed to train on the employee’s data, the employee sometimes used Grammarly to improve communications around confidential and proprietary data. There was no malicious intent, but this scenario highlights the hidden risks of AI.”

These examples are more common than we think, and the percentage of employees using generative AI tools is only growing.

To combat the risk of inadvertent disclosure of company data by employees, it is essential for companies to develop and implement an AI Governance Program and an AI Acceptable Use Program, and to provide training to employees about the risks and appropriate uses of AI in the organization. According to the NCA survey, more than half of all employees have NOT been trained on the safe use of AI tools. According to the NCA, “this statistic suggests that many organizations may underestimate the importance of training.”

Employees’ use of unapproved generative AI tools poses a risk to organizations because IT professionals cannot adequately secure an environment against tools that fly under their radar. Now is the time to develop governance over AI use, determine which tools are appropriate and approved for employees, and train employees on the risks and safe use of AI in your organization.
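As a rough illustration of what that governance might look like on the IT side, here is a minimal sketch that compares outbound request domains against an allow-list of approved AI tools so that “shadow AI” usage can be spotted; the domain names, log format, and function name are assumptions invented for this example:

# Flag outbound requests to known AI tools that are not on the approved list.
from urllib.parse import urlparse

APPROVED_AI_DOMAINS = {"approved-ai.example.com"}  # assumed approved tool
KNOWN_AI_DOMAINS = {
    "approved-ai.example.com",
    "chatbot.example.net",       # assumed unapproved tools
    "ai-writer.example.org",
}

def unapproved_ai_requests(request_urls: list[str]) -> list[str]:
    """Return URLs that hit a known AI tool not on the approved list."""
    flagged = []
    for url in request_urls:
        domain = urlparse(url).netloc.lower()
        if domain in KNOWN_AI_DOMAINS and domain not in APPROVED_AI_DOMAINS:
            flagged.append(url)
    return flagged

logs = [
    "https://approved-ai.example.com/v1/chat",
    "https://chatbot.example.net/prompt?q=quarterly+financials",
]
print(unapproved_ai_requests(logs))  # only the unapproved tool is flagged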

This week, the Federal Communications Commission (FCC) announced a data protection and cybersecurity settlement with T-Mobile, resolving the FCC’s investigations related to the data breaches suffered by T-Mobile that affected millions of consumers in 2021, 2022, and 2023.

As part of the settlement, T-Mobile has agreed to:

  • Remediate security flaws;
  • Improve the company’s cyber hygiene;
  • Implement standard security safeguards, such as multi-factor authentication;
  • Implement stronger corporate governance, including regular reports to the board by T-Mobile’s Chief Information Security Officer;
  • Implement a modern zero trust architecture and segment its networks; and,
  • Consistently apply best-practice identity and access methods.

Pursuant to the settlement, T-Mobile has also agreed to invest $15.75 million in cybersecurity, in addition to paying a civil penalty of $15.75 million.

FCC Chairwoman Jessica Rosenworcel said, “Consumers’ data is too important and much too sensitive to receive anything less than the best cybersecurity protections.  We will continue to send a strong message to providers entrusted with this delicate information that they need to beef up their systems or there will be consequences.” This settlement exemplifies why security safeguards are just as important as privacy compliance—you can’t have privacy without security.