I continue to marvel at how many Americans are using TikTok but are oblivious to the fact that they are being duped by one of our foreign adversaries—the Chinese Communist Party. Folks, listen to and heed the warnings of both state and federal governments on the dangers that the use of TikTok poses to national security. Think about your country instead of yourself and stop using TikTok. It’s a matter of national security.

I am not an alarmist by nature, but the increased mention of TikTok in day-to-day conversations is very concerning, considering the overwhelming warnings about how the Chinese Communist Party is collecting information on Americans. The way to visualize it is to imagine there is a member of the Chinese Communist Party on your shoulder looking at everything you do, tracking your location, accessing your personal and health information and that of your children and other members of your family. We wouldn’t like it if our own government were surveilling us like that. Why are we comfortable with a foreign adversary doing it?

You don’t have to listen to me. Just scroll through the articles below, from both sides of the media aisle (this is truly a bipartisan issue), and get on the bandwagon to voluntarily give up TikTok nationwide. We can all do this together and spare the government from having to step in to stop us from harming ourselves or our national security.

The saga started in 2020, when President Trump attempted to ban TikTok in the U.S. with an executive order citing national security concerns. TikTok then pivoted to potentially selling its U.S. business to an American company. That strategy fizzled.

President Biden revoked Trump’s order but opened his own investigation into the security threats posed by TikTok. FCC Commissioner Brendan Carr has asked Apple and Google to remove TikTok from their app stores.

Commissioner Carr wants TikTok banned for all U.S. users, citing concerns over how TikTok handles the massive amounts of data it gathers from U.S. users and lingering doubts “that it’s not finding its way back into the hands of the [Chinese Communist Party].”

FBI Director Christopher Wray has testified before the Homeland Security Committee of the U.S. House of Representatives that the FBI has “national security concerns” about Americans’ use of TikTok. Wray testified that his concerns include “the possibility that the Chinese government could use it to control data collection on millions of users or control the recommendation algorithm, which could be used for influence operations if they so chose, or to control software on millions of devices, which gives it an opportunity to potentially technically compromise personal devices.”

U.S. federal agencies and departments, including the State Department, the Department of Defense (including the Pentagon and the U.S. military), the Transportation Security Administration, and the Department of Homeland Security, have already banned federal workers from using TikTok.

State governors are also getting in on the action by banning the use of TikTok by state workers. The Governor of South Dakota issued an executive order this week banning state workers and contractors from using the app or accessing TikTok’s website on state-issued devices. Enough is enough. Let’s start a grassroots movement to give up TikTok on our own. I urge you to join the movement.

The Health Sector Cybersecurity Coordination Center (HC3) recently released an Analyst’s Note to health care organizations providing information on a new variant of ransomware called Venus (also known as GOODGAME).

According to the Analyst’s Note, the threat actors “are known to target publicly exposed Remote Desktop Services to encrypt Windows devices.” The ransomware then “will attempt to terminate 39 processes associated with database servers and Microsoft Office applications” and will “delete event logs, Shadow Copy Volumes, and disable Data Execution Prevention.” It encrypts files using the AES and RSA algorithms and appends the “.venus” extension and a “goodgamer” filemarker to encrypted files.
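The behaviors described in the Analyst’s Note double as indicators of compromise. As a minimal, hypothetical sketch (only the “.venus” extension comes from the note; the function name and any paths you scan are illustrative), a defender could sweep a file share for the appended extension:

```python
# Hypothetical sweep for the ".venus" extension that the Analyst's Note says
# the ransomware appends to encrypted files. The extension is the only detail
# taken from the note; everything else here is illustrative.
from pathlib import Path

RANSOM_EXT = ".venus"  # extension reported in the Analyst's Note


def find_encrypted_files(root: str) -> list[Path]:
    """Recursively list files whose names end with the reported extension."""
    return [p for p in Path(root).rglob("*" + RANSOM_EXT) if p.is_file()]
```

A sweep like this is only a tripwire, of course; it detects encryption after the fact, so it complements, rather than replaces, locking down the exposed Remote Desktop Services the attackers target.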

According to reporting from The Verge and The Markup, several popular e-filing providers have been transmitting sensitive financial information to Meta through Meta Pixel. Meta Pixel is a free advertising analytics service offered by Meta that, like cookies and other persistent identifiers, collects data about how individual users interact with content across the Internet. The Meta Pixel service allows Meta to build advertising profiles for users regardless of whether they have a Facebook account.

According to the report, several popular e-filing sites used Meta Pixel code to collect information such as users’ filing status, adjusted gross income, and refund amount, and sent that information to Meta’s servers. Meta’s terms of service prohibit the use of Meta Pixel to collect sensitive information, and in many cases the code on the e-filing sites appeared to be misconfigured or left at its default settings.

Businesses that are considering partnering with an outside analytics provider may wish to carefully inventory what data can be collected and where it will be sent. Privacy laws such as the California Privacy Rights Act could expose a business to Attorney General investigations, fines, and private lawsuits for mishandling sensitive information. While use of these technologies is often free, the legal liability might not be.
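As a starting point for that inventory, a site owner can scan the rendered HTML of its own pages for known third-party tracker loaders. A minimal sketch, assuming a simple substring check is enough for a first pass (the Meta Pixel’s publicly documented loader URL is real; the second entry and the function name are illustrative):

```python
# Hypothetical audit helper: flag known third-party analytics loaders in a
# page's HTML. Treat the tracker list as illustrative, not exhaustive.
KNOWN_TRACKERS = {
    "Meta Pixel": "connect.facebook.net/en_US/fbevents.js",
    "Google tag": "googletagmanager.com/gtag/js",  # illustrative entry
}


def find_trackers(html: str) -> list[str]:
    """Return the names of known trackers whose loader URL appears in html."""
    return [name for name, url in KNOWN_TRACKERS.items() if url in html]
```

Finding a loader is only the first step: as the e-filing cases show, the real exposure turns on what events and fields the tracker is configured to send, which requires inspecting the configuration, not just the presence, of the code.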

Starting December 1, Facebook reportedly will remove several biographical details from user profiles, including “Religious views,” “Political views,” “Interested in” (indicating the user’s sexual orientation), and “Address.” Many state privacy laws, including the California Privacy Rights Act, restrict how businesses can collect and use these types of sensitive personal information. Facebook has not confirmed why it is removing these fields, or whether it will delete the information entirely or simply remove it from public profiles.

The French data protection authority (DPA), the CNIL, announced that it will fine Discord, Inc. 800,000 euros under the General Data Protection Regulation (GDPR). Discord is a social messaging platform popular with gamers, technology enthusiasts, and the LGBTQ+ community.

The alleged GDPR violations include failure to establish a written information security policy and a data retention schedule, failure to require sufficiently strong account passwords, and failure to conduct a data protection impact assessment. The regulator specifically called out the Discord app’s unusual practice of remaining active in the background, keeping the user connected to voice chat even after the user clicks the “close” button.

The DPA noted in its findings that Discord cooperated with its investigation and has taken steps to remediate the alleged violations.

This week, a lawsuit was filed in the U.S. District Court for the District of Massachusetts against the Commonwealth of Massachusetts over its use of a COVID-19 contact-tracing app for residents’ mobile phones. Very few residents voluntarily downloaded the app; the lawsuit alleges that Massachusetts responded by causing the app to be installed on certain residents’ mobile devices without their knowledge or consent. The complaint alleges that “on June 15, 2021, [the Massachusetts Department of Public Health (DPH)] worked with [a third party application developer] to secretly install the Contact Tracing App onto over one million Android mobile devices located in Massachusetts without the device owners’ knowledge or permission.” The complaint further alleges that “[w]hen some Android device owners discovered and subsequently deleted the App, DPH would re-install it onto their devices. The App causes an Android mobile device to constantly connect and exchange information with other nearby devices via Bluetooth and creates a record of such other connections. If a user opts in and reports being infected with COVID-19, an exposure notification is sent to other individuals on the infected user’s connection record.”

The complaint also alleges that the app collected information about users’ travel, social interactions, and internet usage, and that it was installed as a “settings feature” rather than as an “applications file” so it would go unnoticed.

The lawsuit alleges violations of the Fourth and Fifth Amendments to the U.S. Constitution, Articles XIV and X of the Massachusetts Declaration of Rights, and the Computer Fraud and Abuse Act. The class seeks an injunction against continued use of the alleged spyware and an order requiring DPH to remove it from users’ mobile devices, as well as attorneys’ fees and nominal damages of $1.

The City Council of Chula Vista, California (in the San Diego metropolitan area), has announced a new policy governing how the city and its law enforcement agencies may use surveillance technology and how the data that equipment collects must be handled, in order to protect residents’ privacy. The policy, which is now in effect, was developed by a city task force after the police department began using automated license plate readers in 2020, and it directly affects Chula Vista’s signature drone program. Under the policy, any technology that city officials or law enforcement intend to use must first be reviewed by the task force, which will assess its impact on the public, on city systems, and on resident privacy.

The task force consists of technology experts, financial auditors, public safety professionals, and government transparency activists. To streamline the review, all technology will fall under one of the following categories: general technology, which includes emails and cellphones; sensitive technology, such as drones and traffic signal cameras; and surveillance technology, such as the license plate readers. The highest level of oversight will apply to surveillance technology.

If the technology and its use are approved by the task force, the city manager will be required to report at least once every two years on how the technology has been used, any adverse impacts, and the status of the data collected. The goal is to keep government officials accountable. So, where does that leave drones? They will be subject to the policy (as noted above), but the policy allows the City Manager or the City Council to waive certain elements of it “in the event of exigent circumstances or other circumstances that make compliance impossible or infeasible.” Policies and task forces like Chula Vista’s are likely to continue to pop up as residents question the scope and oversight of government surveillance using new technologies, including drones.

Dark Reading reports that thousands of college and university students are being targeted by cyber-attackers who are using a legitimate domain to impersonate Instagram and steal users’ credentials. The attack is able to evade the security measures of Microsoft 365 and Exchange.

According to the report, “The socially engineered attack, which has targeted nearly 22,000 mailboxes, used the personalized handles of Instagram users in messages informing would-be victims that there was an ‘unusual login’ on their account.” The attackers also sent email messages to the victims from a valid email domain, which made it more difficult for users and security technology to identify it as malicious.

The email impersonating Instagram uses a familiar tactic to lure victims into believing it is genuine: a sense of urgency. The email appears to come from Instagram’s support team and includes the sender’s name, Instagram profile, and email address. The user is informed that “an unrecognized device from a specific location and machine…had logged in to their account” and is asked to click a link to “secure” their login details. The link, of course, redirects the user to a fraudulent landing page that allows the attackers to steal the user’s credentials.

The researchers from Armorblox who investigated the scam suggest that users watch out for social engineering cues, review all emails for any inconsistencies, and employ multifactor authentication and password-management best practices across both personal and professional accounts.
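One of those inconsistency checks can be automated: does the sender’s domain actually belong to the service the message claims to come from? A minimal sketch, assuming a per-service allowlist of sending domains (the domain entries and function name here are illustrative, not Instagram’s official list):

```python
# Hypothetical inconsistency check: compare the From: address domain against
# the domains a claimed service is expected to send from. The allowlist
# below is illustrative only.
EXPECTED_DOMAINS = {
    "instagram": {"instagram.com", "mail.instagram.com"},  # illustrative
}


def sender_matches_claimed_service(from_address: str, service: str) -> bool:
    """Return True if the sender's domain is on the service's expected list."""
    domain = from_address.rsplit("@", 1)[-1].lower()
    return domain in EXPECTED_DOMAINS.get(service, set())
```

A check like this would not have caught this particular campaign on its own, since the attackers sent from a valid domain, which is exactly why layered defenses such as multifactor authentication still matter.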

Palo Alto Networks’ Unit 42 recently issued a threat assessment outlining a new and unique phishing scam that has proven successful. The scam is believed to be carried out by the Luna Moth/Silent Ransom Group and targets businesses in the legal and retail sectors. Unit 42 predicts that the scam is “expanding in scope.”

According to the alert, the scam uses “legitimate trusted technology tools to carry out attacks…This threat actor has significantly invested in call centers and infrastructure that’s unique to each victim.” Education of users is critical to prevent the campaign from continuing to be successful.

The scam uses callback phishing, a social engineering attack that involves direct contact between the threat actor and the user. It starts with a phishing email to the user’s corporate email account that attaches an invoice for less than $1,000 and advises the user that their credit card has been charged for a service. The email is personalized to the user, contains no malicious code or malware, and is sent from a legitimate email service, with the invoice attached as a PDF. None of this appears suspicious to the user.

The invoice includes a unique ID and a telephone number padded with a few extra characters that are easy to miss, and when the user calls the number (which many users are told to do if something looks suspicious), the user is “routed to a threat actor-controlled call center and connected to a live agent.” The threat actor helps the user cancel the subscription and asks the user to download and run a remote tool granting the threat actor remote access to the user’s computer. The threat actor then downloads and installs a remote administration tool and uses that access to look for files to exfiltrate. Following exfiltration, the threat actor sends an extortion email to the victim demanding payment and threatening to release the files.

If the victim refuses to pay, the “attackers will threaten to contact victims’ customers and clients identified through the stolen data, to increase the pressure to comply.”

As users become better educated on these scams, threat actors are bobbing and weaving, trying to find new ways to infiltrate corporate systems and exfiltrate data. Keeping your users up to date on these schemes, and instilling in them a healthy dose of skepticism and caution, is one way to combat them. According to Unit 42, “if people targeted by these types of attacks reported these invoices to their organization’s purchasing department, the organization might be better able to spot the attack, particularly if a number of individuals report similar messages.” Protection of corporate data is a team sport. Be an active member of the team: report any suspicious messages to your IT professionals and view every unexpected email with suspicion.

As companies hustle to follow the new California Privacy Rights Act (CPRA) regulations, they’ve hit a substantial hiccup: there aren’t any yet. The California Privacy Protection Agency (CPPA), the newly created body with administrative authority over the CPRA’s implementation, has yet to release final regulations. The CPRA takes effect on January 1, 2023, and covered businesses are in the final stretch of completing their compliance programs.

The CPPA has released two draft proposals so far, and the more recent draft is in a public consultation period until November 21, 2022. To make matters even more opaque, the CPPA removed several requirements from the first draft to “simplify implementation at this time,” leaving businesses guessing as to which conditions they will eventually need to follow. Many of these proposed rules define technical requirements for websites and mobile applications, so companies will need a runway to achieve a seamless implementation. Luckily, the CPPA has signaled that it will give businesses a soft grace period before pursuing significant enforcement actions. The CPPA’s most recent draft proposal says that it may “consider all facts it determines to be relevant, including the amount of time between the effective date of the statutory or regulatory requirement(s) and the possible or alleged violation(s) of those requirements, and good faith efforts to comply with those requirements.” Responsible businesses, though, should proceed as if the most recent draft regulations are the law and plan to update once the final draft is released. Otherwise, they might find themselves scrambling to push out complicated technical updates against the January 1, 2023 deadline.