Information governance has evolved rapidly, with technology driving the pace of change. Looking ahead to 2024, we anticipate technology playing an even larger role in data management and protection. In this blog post, we’ll delve into the key predictions for information governance in 2024 and how they’ll impact businesses of all sizes.

  1. Embracing AI and Automation: Artificial intelligence and automation are reshaping information governance. Over the next few years, more companies will harness AI and automation for data analysis, classification, and management. This shift will sharpen risk identification and compliance while streamlining workflows and reducing administrative burdens. Organizations that embrace these advances will be better equipped to navigate the evolving data governance landscape and stay ahead in an increasingly competitive business environment.
  2. Prioritizing Data Privacy and Security: In recent years, data breaches and cyber-attacks have significantly increased concerns regarding the usage and protection of personal data. As we look ahead to 2024, the importance of data privacy and security will be paramount. This heightened emphasis is driven by regulatory measures such as the California Consumer Privacy Act (CCPA) and the European Union’s General Data Protection Regulation (GDPR). These regulations necessitate that businesses take proactive measures to protect sensitive data and provide transparency in their data practices. By doing so, businesses can instill trust in their customers and ensure the responsible handling of personal information.
  3. Fostering Collaboration Across Departments: In today’s rapidly evolving digital landscape, information governance has become a collective responsibility. Looking ahead to 2024, we can anticipate a significant shift towards closer collaboration between the legal, compliance, risk management, and IT departments. This collaborative effort aims to ensure comprehensive data management and robust protection practices across the entire organization. By adopting a holistic approach and providing cross-functional training, companies can empower their workforce to navigate the complexities of information governance with confidence, enabling them to make informed decisions and mitigate potential risks effectively. Embracing this collaborative mindset will be crucial for organizations to adapt and thrive in an increasingly data-driven world.
  4. Exploring Blockchain Technology: Blockchain technology, with its decentralized and immutable nature, has the tremendous potential to revolutionize information governance across industries. By 2024, as businesses continue to recognize the benefits, we can expect a significant increase in the adoption of blockchain for secure and transparent transaction ledgers. This transformative technology not only enhances data integrity but also mitigates the risks of tampering, ensuring trust and accountability in the digital age. With its ability to provide a robust and reliable framework for data management, blockchain is poised to reshape the way we handle and secure information, paving the way for a more efficient and trustworthy future.
  5. Prioritizing Data Ethics: As data-driven decision-making becomes increasingly central to business, the importance of ethical data usage cannot be overstated. In 2024, businesses will place even greater emphasis on data ethics, establishing clear guidelines and protocols for navigating ethical dilemmas as they arise. To keep data practices responsible, organizations will invest in data literacy across their workforce through education and training. Expect a growing focus on transparency in data collection and usage as businesses work to build trust and protect individual privacy while harnessing data for informed decision-making.

The future of information governance will be shaped by technology, regulations, and ethical considerations. Businesses that adapt to these changes will thrive in a data-driven world. By investing in AI and automation, prioritizing data privacy and security, fostering collaboration, exploring blockchain technology, and upholding data ethics, companies can prepare for the challenges and opportunities of 2024 and beyond.

Chinese authorities have arrested alleged hackers in what appears to be the first-ever reported case of hackers using AI to develop ransomware. These alleged hackers reportedly used ChatGPT to refine the code for their home-grown ransomware encryption tool. ChatGPT has been banned in China in favor of Chinese tools such as Baidu’s Ernie Bot. However, many residents avoid Chinese website restrictions through virtual private networks (VPNs).

Legitimate developers have been singing AI’s praises as a method to automate repetitive coding tasks, and many startups are offering AI-powered coding suites. Attackers using AI to automate and improve their code makes complete sense – ransomware is just malicious use of legitimate encryption techniques. It is hardly surprising that other legitimate tools are finding criminal uses. 

While AI has legitimate uses, it can also be used to automate and improve malicious activities. Experts in this nascent industry need to develop strategies to prevent and mitigate the misuse of AI, ensuring it benefits society rather than causes harm.

Montana’s legislature last year passed legislation, signed by the Governor, to ban the use of TikTok within the borders of the state, seeking to protect Montana consumers’ personal information and limit spying by the Chinese government through TikTok.

Some Montana users sued the state to block the ban. A federal judge in Montana issued a preliminary injunction blocking the ban, which was set to take effect on January 1, 2024. On January 2, 2024, the state filed a notice of appeal to the U.S. Court of Appeals for the Ninth Circuit.

Montana’s ban, which applied to anyone using TikTok in the state, was more expansive than previous bans by the federal government and other states, which prohibited use only by government employees.

We have repeatedly written about the risks and concerns about consumers’ use of TikTok. Its use continues to be a national security threat, and Montana’s appeal is one we will be watching closely.

On December 15, 2023, the Cybersecurity & Infrastructure Security Agency (CISA) issued a Secure by Design Alert and guidance on “How Manufacturers Can Protect Customers by Eliminating Default Passwords.”

The guidance was created by CISA to “urge technology manufacturers to proactively eliminate the risk of default password exploitation by implementing principles one and three of the joint guidance, ‘Shifting the Balance of Cybersecurity Risk: Principles and Approaches for Secure by Design Software’”:

  • Take ownership of customer security outcomes.
  • Build organizational structure and leadership to achieve these goals.

It is CISA’s conclusion that if software manufacturers implement these two principles, they “will prevent exploitation of static default passwords in their customers’ systems.” Since threat actors are exploiting default passwords, CISA is urging manufacturers to proactively eliminate them so customers can’t use them, and they can’t continue to be exploited. According to CISA, “Years of evidence have demonstrated that relying upon thousands of customers to change their passwords is insufficient, and only concerted action by technology manufacturers will appropriately address severe risks facing critical infrastructure organizations.”
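In practice, “eliminating default passwords” often means blocking device activation until the customer sets a unique credential. The sketch below illustrates that idea in Python; the default-password list, length threshold, and function names are illustrative assumptions, not any vendor’s actual firmware logic.

```python
# Minimal sketch of one "secure by design" approach CISA describes:
# instead of shipping a static default password, require the customer
# to set a unique, non-default credential on first boot.

KNOWN_DEFAULTS = {"admin", "password", "1234", "default"}  # illustrative list

def accept_initial_password(candidate: str) -> bool:
    """Reject empty, too-short, or well-known default passwords."""
    if len(candidate) < 12:          # minimum length is an assumed policy
        return False
    if candidate.lower() in KNOWN_DEFAULTS:
        return False
    return True

def first_boot_setup(get_password):
    """Block device activation until a non-default password is set."""
    while True:
        candidate = get_password()
        if accept_initial_password(candidate):
            return candidate  # device may now come online
```

The point is that the manufacturer, not the customer, enforces the rule: a device running this check can never operate with a factory default credential.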

May software developers listen and respond to CISA’s urging to help keep their customers safe from known threats.

On December 8, 2023, New York Attorney General Letitia James signed an Assurance of Discontinuance with third-party dental administrator Healthplex, settling the enforcement action for $400,000 and a litany of data privacy and security compliance requirements.

The AG’s investigation commenced following a November 24, 2021, successful phishing attack against Healthplex. The threat actor gained access to an email account of a Healthplex employee containing over twelve years of email, including enrollment information of insureds.

The AG’s investigation found that the threat actor had access to the employee’s account for less than one day but, during that time, may have accessed member data, including personal information. Healthplex had not enabled multi-factor authentication on its O365 account at the time, and the logs could not determine which emails the threat actor accessed. Healthplex notified the individuals whose information was included in the email account.

Following the incident, Healthplex enabled multi-factor authentication, upgraded its O365 license for enhanced logging capabilities, provided additional security training for employees and implemented a 90-day email retention policy.

Although Healthplex implemented these sound measures in response to the incident, the NYAG cites their absence before the incident and, in essence, relies on that gap for the settlement, along with a finding that Healthplex’s data security assessments failed to identify those very vulnerabilities.

As with other regulatory settlements, the Assurance of Discontinuance is worthy of a read by those responsible for compliance in an organization. If there is a security incident, and an organization responds to the incident with security measures that may have prevented it or are sound measures that could have been implemented before the incident, regulators will take note. In this case, the security measures of implementing MFA, data retention procedures, employee education, and enhanced logging for O365 are measures that organizations may wish to implement now if they are not already in place.
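Of the measures listed, the 90-day email retention policy is easy to express concretely. The sketch below shows the core of such a policy in Python; the mailbox representation (dicts with a `received_at` timestamp) and function names are assumptions for illustration, not Healthplex’s actual implementation.

```python
from datetime import datetime, timedelta, timezone

RETENTION_DAYS = 90  # the retention window Healthplex adopted post-incident

def is_expired(received_at: datetime, now: datetime) -> bool:
    """True if a message is past the 90-day retention window."""
    return now - received_at > timedelta(days=RETENTION_DAYS)

def purge(mailbox: list, now: datetime) -> list:
    """Return only the messages to keep; expired ones would be deleted."""
    return [m for m in mailbox if not is_expired(m["received_at"], now)]
```

A policy like this limits the blast radius of an account compromise: an attacker in a mailbox governed by this rule can reach at most 90 days of email, rather than the twelve-plus years exposed here.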

Last week, the California Privacy Protection Agency (CPPA) voted in favor of a legislative proposal that would require web browsers to include a feature that allows web users the ability to exercise their privacy rights under the California Consumer Privacy Act (CCPA) through opt-out preference signals.

Under the California Consumer Privacy Act (CCPA), businesses must adhere to a user’s opt-out preference signals as a valid request to opt out of the sale and/or sharing of their information. These signals are meant to be ‘global,’ meaning that through a single opt-out preference signal, a user can opt out of the sale and sharing of their information on every website they interact with, without making separate requests to each one. However, to exercise this right under the CCPA, a user must either use a browser that supports these opt-out preference signals or take the extra step of downloading a browser plugin. Currently, only a few browsers offer built-in support: Mozilla Firefox, DuckDuckGo, and Brave, which together hold only about 10 percent of the global desktop market share. Note, too, that none of these browsers comes preloaded on devices, so it is neither apparent nor easy for most users to take advantage of these built-in protections.
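On the wire, the best-known opt-out preference signal, Global Privacy Control (GPC), is just an HTTP request header: `Sec-GPC: 1` (with a matching `navigator.globalPrivacyControl` property in JavaScript). The minimal sketch below shows how a business’s server might detect and honor the signal; the function names and the framework-agnostic headers dict are assumptions for illustration, not a compliance implementation.

```python
# Minimal sketch: honoring an opt-out preference signal server-side.
# Per the Global Privacy Control spec, supporting browsers send the
# request header "Sec-GPC: 1" on behalf of the user.

def wants_opt_out(headers: dict) -> bool:
    """Return True if the request carries a GPC opt-out signal."""
    # HTTP header names are case-insensitive; normalize before lookup.
    normalized = {k.lower(): v.strip() for k, v in headers.items()}
    return normalized.get("sec-gpc") == "1"

def handle_request(headers: dict) -> str:
    if wants_opt_out(headers):
        # Treat the signal as a valid CCPA opt-out request:
        # skip any sale/sharing of this user's personal information.
        return "opt-out honored: no sale/sharing"
    return "default data practices apply"
```

The server-side check is the easy part; the CPPA proposal targets the other half of the problem, which is getting browsers to send the signal in the first place.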

If the California legislature adopts this proposal from the CPPA, California will be the first state to require browser vendors to support this type of signal.

The California Privacy Protection Agency (CPPA) recently met to discuss automated decision-making technology, privacy risk assessments, and cybersecurity audits under the California Consumer Privacy Act (CCPA) as amended by the California Privacy Rights Act (CPRA). However, the CPPA also stepped outside the anticipated agenda to discuss additional revisions to the existing regulations. Once again, changes are on the horizon. What kind of changes? Here are the key things that would change under the CCPA for your organization’s online privacy policy:

  • “Meaningful Understanding” of Sources and Sales/Sharing with Third Parties: the draft revisions would add a requirement for privacy policies to provide “meaningful understanding” of the sources that the business uses to collect personal information and the categories of third parties to which the business shares or sells personal information.
  • Clarifying Disclosures to Service Providers and Contractors: the draft revisions would remove an ambiguity related to the definition of a “third party” and require businesses to explicitly identify the categories of personal information disclosed to a service provider or contractor in the last 12 months.
  • Privacy Policy Links for Mobile Applications: the draft revisions would require mobile apps to include a link to their privacy policies in the settings menu of the app. This link would be in addition to the link on the website homepage and the app store download page.

After the CPPA finalizes the draft revisions, the proposed rule changes will be published for a 45-day public comment period. However, the CPPA did not provide an anticipated start date for that comment period yet.

On Monday, December 18, 2023, the Federal Trade Commission (FTC) released its report on takeaways gleaned from a public event it held in October with creative professionals, including artists and authors. The 43-page report, entitled “Generative Artificial Intelligence and the Creative Economy Staff Report: Perspectives and Takeaways,” provides an insider view of the FTC’s interest in the risks of generative AI to creative work. It is “intended as a useful resource for the legal, policy and academic communities who are considering the implications of generative AI.” It also serves to stake out the FTC’s jurisdiction over AI and the issues that “implicate the FTC’s enforcement and policy authority.”

We previously wrote about how toys, baby monitors, and other smart devices collect, use, and disclose personal information about children, and about the resulting risks to children’s privacy. As adults responsible for the safety of children in our care, we should make learning how these devices handle children’s personal information a top priority, just as we oversee their physical safety.

Although the Children’s Online Privacy Protection Act (COPPA) was a good first step toward regulating the collection of children’s information online, the law has not kept pace with rapidly developing technology and online tools. The last update to COPPA was in 2013, which is ancient history in the world of technology.

The Federal Trade Commission is proposing changes to COPPA that would strengthen restrictions on the use and disclosure of children’s personal information and would “further limit the ability of companies to condition access to services on monetizing children’s data. The proposal aims to shift the burden from parents to providers to ensure that digital services are safe and secure for children.”

For parents and guardians: although the FTC is working to protect children, we should too. Stay educated on how children’s privacy is affected by their online use, how their data is stored, used, and monetized, and educate children about how to protect themselves.

There was a big win for the good guys against the bad guys this week. On December 13, 2023, after obtaining an order from the federal court in the Southern District of New York to seize U.S. based infrastructure and take offline websites used by a group Microsoft identifies as Storm-1152, Microsoft’s Digital Crimes Unit disrupted:

  • Hotmailbox.me, a website selling fraudulent Microsoft Outlook accounts
  • 1stCAPTCHA, AnyCAPTCHA, and NoneCAPTCHA, websites that sold the tooling, infrastructure, and CAPTCHA-solving services used to bypass checks confirming that an account is set up by a real person. These sites also sold identity verification bypass tools for other technology platforms
  • The social media sites actively used to market these services

The takedown is impressive and outlined in detail by Amy Hogan-Burney, General Manager, Associate General Counsel, Cybersecurity Policy & Protection at Microsoft. According to Hogan-Burney, “Fraudulent online accounts act as the gateway to a host of cybercrime, including mass phishing, identity theft and fraud, and distributed denial of service (DDoS) attacks.” She issues a missive to cybercriminals, “We are sending a strong message to those who seek to create, sell or distribute fraudulent Microsoft products for cybercrime: We are watching, taking notice and will act to protect our customers.” You go, Microsoft!