Candid Color Systems Inc., based in Oklahoma, faces a class action lawsuit over alleged violations of the Illinois Biometric Information Privacy Act (BIPA). Candid Color offers marketing services to photographers, including photo-matching technology that allows consumers to identify all of the photos taken of a particular student at a graduation ceremony.

The complaint, filed in the U.S. District Court for the Western District of Oklahoma, alleges that Candid Color collected and used the biometric information of individuals at high school and college graduations without consent in violation of BIPA. Specifically, it alleges that Candid Color used students’ biometric identifiers to identify them without first informing the individuals and obtaining their consent before collection, as BIPA requires.

The complaint further alleges that Candid Color profited from the biometric data collected from the students in violation of BIPA and did not make available its biometric data collection and destruction policies.

This is an interesting lawsuit: it was filed a few days after a similar lawsuit against Candid Color was dismissed by the U.S. District Court for the Southern District of Illinois, which found that Candid Color did not have sufficient contacts with Illinois to support jurisdiction. The plaintiffs seek to represent a class of Illinois residents whose biometric data was collected by Candid Color, and they seek statutory damages of $5,000 per reckless or intentional BIPA violation and $1,000 per negligent violation. We’ll see whether this suit proceeds and how the court applies the recent amendments to BIPA signed by the Illinois Governor.

This week, Ken Paxton, the Texas Attorney General, filed suit against General Motors for allegedly violating the Texas Deceptive Trade Practices Act by collecting and selling drivers’ data to insurers without consumer consent.

In June, the Attorney General’s office announced an investigation into several car manufacturers for allegedly collecting massive amounts of data and later illegally selling it. General Motors is the first manufacturer to be hit with a lawsuit since the investigation began. The complaint alleges that in vehicle models from 2015 and newer, technology is used to “collect, record, analyze, and transmit highly detailed driving data about each time a driver used their vehicle.” The complaint further alleges that this information was then sold to other companies to generate “driving scores,” which were in turn sold to insurance companies. The complaint also alleges that General Motors “deceived” Texans by promoting enrollment in programs such as OnStar Smart Driver, which led consumers to unknowingly agree to the collection and sharing of the data gathered by their vehicles. As stated in the Attorney General’s report, “Despite lengthy and convoluted disclosures, General Motors never informed its customers of its actual conduct—the systematic collection and sale of their highly detailed driving data.” To read the filing, click here.

This week, the New York Attorney General issued two privacy guides—one for businesses and one for consumers—outlining online tracking and privacy controls for websites and browsers.

The investigation found that many websites’ consent-management tools failed to transmit users’ opt-out signals to the tag-management tool the site uses to deploy and manage its tags. As a result, certain tags (e.g., targeted advertising tags) remained active even after a user disabled the corresponding cookies through the consent-management tool.

Additionally, several websites’ tag privacy settings, which allow the operator to configure how much information a tag actually collects, were enabled only for states that have consumer privacy laws (e.g., California, Colorado). Moreover, based on the Attorney General’s review of the sites’ privacy policies and the statements about tracking technologies made in them, many website operators did not understand the purpose of a given tag or even the exact type of data it would collect from website users.
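In technical terms, the failure the Attorney General describes is a broken hand-off: the consent-management tool records the visitor’s opt-out, but the tag-management tool never checks that signal before firing tags. The sketch below illustrates, under simplified assumptions, what honoring that hand-off can look like; the category names, the Tag interface, and the fireConsentedTags helper are hypothetical stand-ins, since real consent platforms and tag managers expose their own APIs for this.

```typescript
// Hypothetical consent categories a consent-management tool might track.
type ConsentCategory = "strictly_necessary" | "analytics" | "targeted_advertising";

// Consent state as recorded by the consent-management tool (e.g., from the cookie banner).
type ConsentState = Record<ConsentCategory, boolean>;

// A tag as configured in the tag-management tool, labeled with the category it depends on.
interface Tag {
  id: string;
  category: ConsentCategory;
  fire: () => void; // loads the tracking script or pixel
}

// The missing hand-off: before firing any tag, the tag manager checks
// the current consent state for that tag's category.
function fireConsentedTags(tags: Tag[], consent: ConsentState): void {
  for (const tag of tags) {
    if (consent[tag.category]) {
      tag.fire();
    } else {
      // Opt-out honored: the tag never loads, so no data is collected.
      console.log(`Skipping tag ${tag.id}: user opted out of ${tag.category}`);
    }
  }
}

// Example: the visitor disabled targeted advertising in the banner.
const consent: ConsentState = {
  strictly_necessary: true,
  analytics: true,
  targeted_advertising: false,
};

const tags: Tag[] = [
  { id: "site-analytics", category: "analytics", fire: () => console.log("analytics tag fired") },
  { id: "ad-retargeting", category: "targeted_advertising", fire: () => console.log("ad tag fired") },
];

fireConsentedTags(tags, consent); // the ad-retargeting tag is skipped
```

The same check must re-run whenever the visitor changes their choices, and it cannot catch tags that are hardcoded into the page outside the tag manager, one of the mistakes discussed below.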

The business guide, available here, sets forth the “do’s and don’ts” of website tracking technology, covering the top mistakes businesses make and the practices that help avoid them, such as:

  • Not categorizing, or mis-categorizing, cookies and other trackers in the website’s consent-management tool;
  • Using tags that are hardcoded into websites such that consent-management tools cannot control them;
  • Understanding the functions of each tracker deployed by the business’ website;
  • Implementing a procedure to identify and prevent issues;
  • Designating a qualified individual(s) to be responsible for implementing and managing all tracking technologies used on the website (including appropriate training);
  • Implementing a process for investigating and identifying the types of data a tag will collect and how that data will be used and shared;
  • Conducting regular tests of how the trackers are functioning;
  • Reviewing tags regularly to ensure they are properly configured; and,
  • Aligning privacy controls and disclosures with New York privacy and consumer laws.

Additionally, the guide offers recommendations for the responsible use of website tracking technology:

  • Ensure that the disclosures and statements in the website’s privacy policy about the site’s privacy controls are accurate, and that those controls function properly and honor users’ selections as described;
  • Avoid using language that may create confusion/misinterpretations in tracker banners and pop-up boxes;
  • Design the user interface for privacy controls to be easy to use and not misleading;
  • Make opting out of online tracking just as simple as opting in;
  • Avoid large amounts of text that could overwhelm users;
  • Design website buttons that are intuitive to the user; and
  • Avoid using language to de-emphasize options to decline tracking.

This guide’s publication follows the Attorney General’s investigation of websites, which found that 13 high-traffic websites had dysfunctional and misleading privacy controls governing the use of cookies and web tags. The investigation determined that users who opted out of or turned off trackers on these websites continued to be tracked after making those choices. In the absence of detailed regulatory guidelines on the use of website tracking technology, state-level guidance like this from the New York Attorney General will be a welcome resource for businesses combing through the complicated web of online tracking and profiling.

Nebraska recently filed suit against TikTok, and the complaint details the harms that TikTok use poses to children. Although the complaint seeks redress only for Nebraskans, the allegations are relevant to parents in all states.

We expect our state and federal governments to make laws that protect children from the sale and marketing of harmful products, such as alcohol, tobacco, drugs, and pornography. According to the Nebraska Attorney General’s allegations, using TikTok is just as harmful to the health and well-being of children as alcohol, tobacco, and drugs. TikTok use by children is addictive. TikTok knows it, and it is marketing specifically to children because they are unable to understand the dangers of its use. TikTok is not restricting the content that children can view, including “mature and inappropriate content, content related to eating disorders, sadness and suicide, and pornography.” Parents: you can’t rely on TikTok to protect your child, particularly after reading Nebraska’s complaint. Take a hard look at the complaint so you can understand how TikTok is harming your child.

On July 29, 2024, the American Bar Association issued ABA Formal Opinion 512 titled “Generative Artificial Intelligence Tools.”

The opinion addresses the ethical considerations lawyers must take into account when using generative AI (GenAI) tools in the practice of law.

The opinion sets forth the ethical rules to consider, including the duties of competence, confidentiality, and client communication, the obligations to raise only meritorious claims and to show candor toward the tribunal, supervisory responsibilities, and the setting of fees.

Competence

The opinion reiterates previous ABA opinions holding that lawyers must have a reasonable understanding of the capabilities and limitations of the specific technologies they use, including remaining “vigilant” about the benefits and risks of technology, including GenAI tools. It specifically notes that attorneys must be aware of the risk of inaccurate output, or hallucinations, from GenAI tools and that independent verification is necessary when using them. According to the opinion, lawyers must evaluate the tool being used, analyze its output, avoid relying solely on the tool’s conclusions, and never substitute the tool’s judgment for their own.

Confidentiality

The opinion reminds lawyers that they are ethically required to make reasonable efforts to prevent inadvertent or unauthorized access to, or disclosure of, information relating to the representation of a client. It suggests that, before inputting data into a GenAI tool, a lawyer must evaluate not only the risk of unauthorized disclosure outside the firm, but also possible internal unauthorized disclosure in violation of an ethical wall or access controls. The opinion stresses that if client information is uploaded to a GenAI tool within the firm, the client data may be disclosed to and used by other lawyers in the firm, without the client’s consent, to benefit other clients. Client data input into a GenAI tool may also be used for self-learning or to train an algorithm that then discloses the client data without the client’s consent.

The opinion suggests that before submitting client data to a GenAI tool, lawyers must review the tool’s privacy policy, terms of use, and all contractual terms to determine how the GenAI tool will collect and use the data in the context of the ethical duty of confidentiality with clients.

Further, the opinion suggests that if lawyers intend to use GenAI tools to provide legal services to clients, they must obtain informed client consent before using the tool. The lawyer is required to inform the client of the use of the GenAI tool and the risks of its use, and then obtain the client’s informed consent prior to use. Importantly, the opinion states that “general, boiler-plate provisions [in an] engagement letter” are not sufficient to meet this requirement.

Communication

With regard to lawyers’ duty to communicate effectively and in the best interest of their client, the opinion notes that, depending on the circumstances, it may be in the client’s best interest to disclose the use of GenAI tools, particularly if the use will affect the fee charged to the client or the output of the GenAI tool will influence a significant decision in the representation. This communication can be included in the engagement letter, though it may be appropriate to communicate directly with the client before doing so.

Meritorious Claims + Candor Toward Tribunal

Lawyers are officers of the court and have an ethical obligation to put forth meritorious claims and to be candid with the tribunal before which those claims are presented. As stated above, without appropriate evaluation and supervision (including the exercise of independent professional judgment), the output of a GenAI tool can be erroneous or a “hallucination.” Therefore, consistent with the ethical duty of competence, lawyers are advised to independently evaluate any output provided by a GenAI tool.

In addition, some courts require that attorneys disclose whether GenAI tools have been used in court filings. It is important to research and follow local court rules and practices regarding disclosure of the use of GenAI tools before submitting filings.

Supervisory Responsibilities

Consistent with other ABA Opinions relevant to the use of technology, the opinion stresses that managerial responsibilities include providing clear policies to lawyers, non-lawyers, and staff about the use of GenAI in the practice of law. I think this is one of the most important messages of the opinion. Firms and law practices are required to develop and implement a GenAI governance program, evaluate the risk and benefit of the use of a GenAI tool, educate all individuals in the firm on the policies and guardrails put in place to use such tools, and supervise their use. This is a clear message that lawyers and law firms need to evaluate the use of GenAI tools and start working on developing and implementing their own AI governance program for all internal users.

Fees

The key takeaway of the fees section of Opinion 512 is that a lawyer can’t bill a client to learn how to use a GenAI tool. Consistent with other opinions relating to fees, only extraordinary costs associated with the use of GenAI tools are permitted to be billed to the client, with the client’s knowledge and consent. In addition, the opinion points out that any efficiencies gained by the use of GenAI tools, with the client’s consent, should benefit the client through reduced fees.

Conclusion

Although consistent with other ABA opinions related to the use of technology, ABA Opinion 512 is important to understand as GenAI tools become more ubiquitous. It is clear that there will be additional opinions on GenAI tools from the ABA as well as state bar associations, and that the topic will remain of interest in the context of adherence to ethical obligations. A clear message from Opinion 512 is that now is a good time to consider developing an AI governance program.

Information technology professionals—beware of SharpRhino, a malware variant attributed to cybercriminals associated with Hunters International. Hunters International is reported to be the “10th most active ransomware group in 2024” and has “claimed responsibility for 134 attacks in the first seven months of 2024.” It has been linked to the defunct Russian-based Hive ransomware group. Hunters International operates as a Ransomware-as-a-Service provider, which increases the risk that other threat actors will use its techniques.

The Quorum Cyber Incident Response Team has identified the SharpRhino malware, a Remote Access Trojan (RAT) written in the C# programming language and “delivered through a typosquatting domain impersonating the legitimate tool Angry IP Scanner.” The malware gives the threat actor remote access to the device and escalated privileges, allowing the attack to proceed without detection.

Quorum Cyber has outlined the tools, techniques, and procedures of SharpRhino and Hunters International in its post, including samples, hashes, signing information, how the malware is installed, the C# code, indicators of compromise (IOCs), and MITRE ATT&CK mapping. Since this malware targets IT professionals, consider giving your IT staff a heads-up.
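Because the delivery vector described above is a lookalike domain, one modest defensive measure an IT team can take is to flag domains that sit within a small edit distance of the legitimate domains of commonly downloaded admin tools. The sketch below is a generic illustration of that idea and is not drawn from Quorum Cyber’s guidance; the trusted-domain list, the distance threshold, and the isLikelyTyposquat helper are all hypothetical.

```typescript
// Legitimate download domains for admin tools your team uses (hypothetical list).
const trustedDomains = ["angryip.org", "putty.org", "wireshark.org"];

// Classic Levenshtein edit distance between two strings.
function editDistance(a: string, b: string): number {
  const dp: number[][] = Array.from({ length: a.length + 1 }, (_, i) =>
    Array.from({ length: b.length + 1 }, (_, j) => (i === 0 ? j : j === 0 ? i : 0))
  );
  for (let i = 1; i <= a.length; i++) {
    for (let j = 1; j <= b.length; j++) {
      const cost = a[i - 1] === b[j - 1] ? 0 : 1;
      dp[i][j] = Math.min(dp[i - 1][j] + 1, dp[i][j - 1] + 1, dp[i - 1][j - 1] + cost);
    }
  }
  return dp[a.length][b.length];
}

// Flag a domain that is suspiciously close to (but not exactly) a trusted domain.
function isLikelyTyposquat(domain: string, maxDistance = 3): string | null {
  for (const trusted of trustedDomains) {
    const distance = editDistance(domain.toLowerCase(), trusted);
    if (distance > 0 && distance <= maxDistance) {
      return trusted; // close match: worth blocking or reviewing before download
    }
  }
  return null;
}

console.log(isLikelyTyposquat("angryip.org")); // null (exact match is trusted)
console.log(isLikelyTyposquat("angrylp.org")); // "angryip.org" (one-character lookalike)
console.log(isLikelyTyposquat("example.com")); // null (not similar to any trusted domain)
```

An edit-distance check like this is only a heuristic; restricting downloads to a vetted software catalog and reviewing the IOCs in Quorum Cyber’s post remain the more reliable controls.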

Last week, Illinois Governor JB Pritzker signed S.B. 2979, which amends the Biometric Information Privacy Act (BIPA), effective immediately, to define the repeated collection of the same biometric data from the same person without consent as a SINGLE, COLLECTIVE violation of the Act. This is a significant change. The amendment alters the precedent set by the Illinois Supreme Court in February 2023 in Cothron v. White Castle Sys. Inc., which permitted plaintiffs to seek damages for “every scan or transmission” of biometric information made without consent. It will, in fact, reduce the amounts of damages sought by plaintiffs in BIPA class actions, and perhaps even the volume of BIPA litigation. With this change, companies will likely see lower sums sought in BIPA suits and a greater likelihood that their insurers will cover these claims. Of course, insurers may still be hesitant to pay BIPA claims after years of disagreement with businesses over the Illinois law.

What does BIPA require? The Act requires businesses to obtain prior written consent before collecting and storing biometric data from employees and consumers. The big difference between BIPA and other state privacy laws is that BIPA provides a private right of action, allowing consumers to seek $1,000 for each negligent violation and $5,000 for each intentional or reckless violation. In the first BIPA case to go to trial, the defendant paid $75 million to settle after the jury determined that it had violated the privacy rights of thousands of its employees. The amendment addresses how violations are counted for damages calculations, but it doesn’t change the fact that consumers can still seek up to $5,000 per violation. Further, the amendment doesn’t state whether the change applies retroactively, leaving the courts to decide that question.
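To see why the counting change matters so much, consider a rough, purely hypothetical illustration using the statutory amounts above; the scan counts are invented, and actual exposure depends on the facts and on how courts apply the amendment.

```typescript
// Statutory liquidated damages under BIPA: $1,000 per negligent violation
// ($5,000 per intentional or reckless violation).
const NEGLIGENT_PER_VIOLATION = 1000;

// Hypothetical facts: one employee clocks in and out by fingerprint scan
// twice a day, 250 workdays a year, for two years, without BIPA consent.
const scansPerDay = 2;
const workdaysPerYear = 250;
const years = 2;
const totalScans = scansPerDay * workdaysPerYear * years; // 1,000 scans

// Pre-amendment (Cothron v. White Castle): each scan could count as a separate violation.
const perScanExposure = totalScans * NEGLIGENT_PER_VIOLATION; // $1,000,000 for this one employee

// Post-amendment (S.B. 2979): repeated collection of the same biometric data
// from the same person is a single, collective violation.
const perPersonExposure = 1 * NEGLIGENT_PER_VIOLATION; // $1,000 for this one employee

console.log({ totalScans, perScanExposure, perPersonExposure });
```

Multiplying either figure across a class of thousands of employees or consumers shows why per-scan counting drove such large demands, and why the amendment is expected to lower them.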

As for insurers, there will still be questions about whether insurance policies cover BIPA claims. Many policies exclude coverage for federal or state law violations, which some insurers argue bars coverage of BIPA claims, while some cyber and employment practices liability policies are clearer on coverage for BIPA claims. So, while this amendment may not resolve those coverage questions, it could at least give insurers more clarity about expected damages in BIPA litigation, which will, in turn, make these claims easier to underwrite. Of course, as in the cyber insurance arena, the underwriting and application process will likely include more specific questions about compliance with BIPA and how the business obtains consent from employees and consumers. We’ll see how this amendment changes the trends.

On May 17, 2024, Colorado Governor Jared Polis signed, “with reservations,” Senate Bill 24-205, “Concerning Consumer Protections in Interactions with Artificial Intelligence Systems” (the Act). The first of its kind in the United States, the Act takes effect on February 1, 2026, and requires developers of artificial intelligence (AI) systems, and businesses that deploy high-risk AI systems, to adhere to certain transparency and AI governance requirements.

The Governor sent a letter to the Colorado General Assembly explaining his reservations about signing the Act. He noted that the bill “targets ‘high risk’ AI systems involved in making consequential decisions, and imposes a duty on developers and deployers to avoid ‘algorithmic discrimination’ in the use of such systems.” He encouraged the legislature to “reexamine” how the Act’s concept of algorithmic discrimination applies to the results of AI system use before the effective date in 2026.

If your company does business in Colorado and either develops or deploys AI systems, it may first need to determine whether the systems it uses qualify as high-risk AI systems. A “High-Risk AI System” means any AI system that, when deployed, makes, or is a substantial factor in making, a consequential decision. A “Consequential Decision” is a decision that has a material legal or similarly significant effect on the provision or denial of education enrollment or opportunity, an employment opportunity, a financial or lending service, an essential government service, health care services, housing, insurance, or a legal service.

Unlike other state consumer privacy laws, the Act does not have a threshold number of consumers that triggers applicability. Further, while both the Act and the Colorado Privacy Act (CPA) (like the California Consumer Privacy Act (CCPA)) use the term “consumers,” under this Act the term refers to any Colorado resident, whereas the CPA defines consumers as Colorado residents “acting only in an individual or household context,” excluding anyone acting in a commercial or employment context. Therefore, businesses that are not subject to the CPA may nonetheless have obligations under the Act.

The Act aims to prevent algorithmic discrimination in the development and use of AI systems. “Algorithmic discrimination” means any condition in which the use of an AI system results in unlawful differential treatment or impact that disfavors an individual or group of individuals based on their actual or perceived age, color, disability, ethnicity, genetic information, limited proficiency in the English language, national origin, race, religion, reproductive health, sex, veteran status, or other classification protected under state or federal law.

What are the requirements of the Act?

For Developers:

  • To avoid algorithmic discrimination in the development of high-risk artificial intelligence systems, developers must make available a statement describing the “reasonably foreseeable uses and known harmful or inappropriate uses of the system,” the type of data used to train the system, the risks of algorithmic discrimination, the purpose of the system, and the system’s intended benefits and uses.
  • Additionally, the developer must provide documentation with the AI product stating how the system was evaluated to mitigate algorithmic discrimination before it was made available for use, the data governance measures used in development, how the system should (and should not) be used, and how the system should be monitored when used for consequential decision-making. Developers are also required to update the statement no later than 90 days after modifying the system.
  • Developers must also disclose to the Colorado Attorney General any known or reasonably foreseeable risks of algorithmic discrimination arising from the system’s intended uses, without unreasonable delay but no later than 90 days after discovery (through ongoing testing and analysis or a credible report from a business).

For Businesses:

  • Businesses that use high-risk AI systems must implement a risk management policy and program to govern the system’s deployment. The Act sets out specific requirements for that policy and program and instructs businesses to consider the size and complexity of the company itself, the nature and scope of the systems, and the sensitivity and volume of data processed by the system. Businesses must also conduct an impact assessment for the system at least annually in accordance with the Act. However, there are some exemptions from this impact assessment requirement (e.g., fewer than 50 employees, does not use its own data to train the high-risk AI system, etc.).
  • Additionally, businesses must notify consumers that they are using an AI system to make a consequential decision before the decision is made. The Act sets forth the specific content requirements of the notice, such as how the business manages known or reasonably foreseeable risks of algorithmic discrimination that may arise from the system’s deployment. If the CPA applies to the business in addition to the Act, the company must also provide consumers the right to opt out of the processing of personal data by such AI systems for profiling purposes.
  • Businesses must also disclose to the Colorado Attorney General any known or reasonably foreseeable risks of algorithmic discrimination arising from the use of the system no later than 90 days after discovery.

The Act requires developers and businesses who deploy, offer, sell, lease, license, give, or otherwise make available an AI system that is intended to interact with consumers to disclose to each consumer who interacts with the system that the consumer is interacting with an AI system.

Although noting that the Act is “among the first in the country to attempt to regulate the burgeoning artificial intelligence industry on such a scale,” Colorado’s Governor stated in his letter to the legislature that “stakeholders, including industry leaders, must take the intervening two years before this measure takes effect to fine-tune the provisions and ensure that the final product does not hamper development and expansion of new technologies in Colorado that can improve the lives of individuals across our state.” He further noted:

“I want to be clear in my goal of ensuring Colorado remains home to innovative technologies and our consumers are able to fully access important AI-based products. Should the federal government not preempt this with a needed cohesive federal approach, I encourage the General Assembly to work closely with stakeholders to craft future legislation for my signature that will amend this bill to conform with evidence based findings and recommendations for the regulation of this industry.”

As we have seen with state consumer privacy rights laws, this new AI law may be a model that other states will follow but, based upon the Governor’s letter to the Colorado legislature, we anticipate that there will be additional iterations of the law before it becomes effective. Stay tuned.

On August 1, 2024, the Cybersecurity and Infrastructure Security Agency (CISA) announced the appointment of its first CISA Chief Artificial Intelligence Officer. The appointee, Lisa Einstein, served as CISA’s Senior Advisor for AI and as Executive Director of CISA’s Cybersecurity Advisory Committee, advising CISA on the reduction of risk to critical infrastructure. She earned a dual master’s degree in computer science and international cyber policy from Stanford.

According to CISA, the appointment of a Chief Artificial Intelligence Officer “reflects CISA’s commitment to responsibly use AI to advance its cyber defense mission and to support critical infrastructure owners and operators across the United States in the safe and secure development and adoption of AI.”

HealthEquity, an administrator of workplace benefits for more than 15 million people, is notifying 4.3 million individuals, starting on August 9, 2024, that their personal information was compromised. The compromised data includes names, addresses, phone numbers, employee IDs, employers, Social Security numbers, health card numbers, health plan member numbers, benefit types, dependent information, diagnosis information, prescription information, and payment card information.

The incident occurred when a third-party vendor’s user account was compromised and its password stolen. The vendor’s credentials were then used to access a data repository containing customers’ personal information. HealthEquity has posted a notice of the data breach on its website and will offer affected individuals two years of credit monitoring. If you have an account with HealthEquity, access its website here, which includes a toll-free number for questions.