Guest Commentary
Artificial intelligence (AI) continues to upend how businesses operate, and the life sciences field is no exception. Generative AI gives life sciences companies the ability to identify and learn from patterns in large data sets, with clear applications for the development of life-changing therapies. AI can help companies enhance marketing strategies, accelerate research and development, and make manufacturing more efficient, to name a few examples.
The increased adoption of AI technologies also invites unique regulatory compliance risks stemming from the growing automation of certain business functions. At the same time, AI offers companies the ability to innovate and improve their compliance functions.
AI compliance risks
The government is aware of the potential compliance risks accompanying the use of AI. The U.S. Department of Justice (DOJ) has a longstanding policy to consider the effectiveness of a company’s compliance program when exercising prosecutorial discretion—e.g., making charging decisions and determining what penalties to seek.
DOJ’s September 2024 update to its guidance on the Evaluation of Corporate Compliance Programs (ECCP) invites consideration of whether a corporate compliance program adequately accounts for the impact of emerging technologies, such as AI, on compliance with applicable rules and regulations. Life sciences companies should carefully consider updating their compliance programs to address such risks.
For example, in November 2024, the University of Colorado Health (UCHealth) agreed to pay a $23 million civil settlement, without admitting liability, to resolve whistleblower allegations that it violated the False Claims Act (FCA), a federal law prohibiting companies from defrauding the government that can be enforced both by the government and through civil actions for treble damages brought by private whistleblowers. The whistleblower alleged that UCHealth used automatic coding programs to improperly up-code certain Evaluation & Management claims submitted to Medicare and TRICARE, billing them with the highest-paying code associated with emergency room visits. The whistleblower further alleged that UCHealth knew its automatic coding rule did not satisfy Medicare and TRICARE billing requirements because the rule did not reflect the actual resources UCHealth used in connection with the claims submitted to the government.
Companies participating in federal healthcare programs may find AI appealing for exactly that task: automating the claims submission process. The UCHealth settlement illustrates how overreliance on such automated processes, without proper oversight and verification of data inputs, can lead to allegations of defrauding the government.
AI compliance solutions
FCA whistleblowers and the government may use AI to identify potential noncompliance, but companies can build AI into their compliance protocols to identify and address problematic trends and compliance risks before outsiders do. Compared with traditional compliance monitoring techniques, AI can deliver efficiencies that enable companies to identify potential issues in close to real time, even when dealing with vast amounts of data.
Examples include:
• Detection of potential compliance risk areas: To surface potential kickbacks, bribery, or other unethical behavior, AI-based detection logic can be applied to communications and financial transactions between employees and external parties to detect red-flag behavior (e.g., meals, entertainment, and disguised gifts) that violates relevant laws (the Stark Law, the Anti-Kickback Statute, and anti-bribery laws) as well as internal company policies; a minimal screening sketch appears after this list. Likewise, machine learning algorithms can use historical data to create employee and relationship profiles that assess the compliance risks of contemplated relationships and transactions before they are entered into.
• Enhanced outlier analyses: Particularly for healthcare companies and financial institutions, AI can enhance existing outlier-type analyses (e.g., Medicare G-code tiering or Medicare Advantage risk adjustment), both by automating the approach and by making the results readily available for more immediate decision-making and mitigation; see the second sketch below. AI algorithms can link to publicly available Medicare and Medicaid data, OIG reports, and industry data to analyze outlier potential across multiple data sets.
• Compliance investigations: AI can enhance compliance investigation reports through AI-driven document review and report compilation (e.g., suspicious activity reports). AI can also categorize internal compliance reports (e.g., HR, fraud, misuse of corporate resources), an area prone to human error; the third sketch below shows a simple triage approach. Similar AI techniques can be used for document production in government-initiated investigations, where authorities increasingly expect companies to use them.
• Evolving regulatory requirements: In heavily regulated industries such as healthcare and banking, AI can help companies stay current on evolving regulatory requirements. AI tools can monitor changes in the regulatory landscape and implement the necessary responses through automated internal controls and updated compliance policies and procedures; the final sketch below illustrates one monitoring approach.
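As a concrete illustration of the first example above, the sketch below shows rule-based red-flag screening of expense transactions in Python. The field names, dollar thresholds, and keyword list are hypothetical placeholders, not actual regulatory limits or any particular vendor’s schema; a production system would draw these from company policy and counsel’s guidance.

```python
# Minimal sketch of rule-based red-flag screening on expense data.
# Field names, thresholds, and keywords are hypothetical illustrations,
# not actual regulatory limits or any specific vendor's schema.
from dataclasses import dataclass

@dataclass
class Transaction:
    employee_id: str
    counterparty: str   # e.g., a healthcare professional
    category: str       # "meal", "entertainment", "gift", ...
    amount_usd: float
    description: str

def flag_transaction(txn: Transaction, limits: dict[str, float]) -> list[str]:
    """Return a list of red-flag reasons for one transaction."""
    reasons = []
    limit = limits.get(txn.category)
    if limit is not None and txn.amount_usd > limit:
        reasons.append(f"{txn.category} exceeds internal limit of ${limit:,.2f}")
    suspicious_terms = ("cash", "gift card", "per request")  # illustrative only
    if any(term in txn.description.lower() for term in suspicious_terms):
        reasons.append("description contains a term associated with disguised gifts")
    return reasons

# Example: hypothetical policy limits and one transaction
limits = {"meal": 125.0, "entertainment": 250.0, "gift": 0.0}
txn = Transaction("E-1042", "Dr. Example", "gift", 300.0, "holiday gift card")
for reason in flag_transaction(txn, limits):
    print(f"REVIEW {txn.employee_id} -> {txn.counterparty}: {reason}")
```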
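For the outlier analyses described above, a minimal version can be as simple as flagging providers whose share of highest-tier billing codes deviates sharply from that of their peers. The data and the two-standard-deviation cutoff below are hypothetical illustrations; a real analysis would benchmark against public Medicare utilization data.

```python
# Minimal sketch of an outlier analysis on billing-code distributions.
# All figures are hypothetical; the 2-standard-deviation cutoff is an
# illustrative screening threshold, not a regulatory standard.
import statistics

# Hypothetical share of claims each provider bills at the highest-paying tier
high_tier_share = {
    "provider_a": 0.18, "provider_b": 0.22, "provider_c": 0.20,
    "provider_d": 0.61,  # potential outlier
    "provider_e": 0.19,
}

mean = statistics.mean(high_tier_share.values())
stdev = statistics.stdev(high_tier_share.values())

for provider, share in high_tier_share.items():
    z = (share - mean) / stdev
    if z > 2.0:
        print(f"{provider}: {share:.0%} high-tier claims (z = {z:.1f}), flag for review")
```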
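For categorizing internal compliance reports, the sketch below uses simple keyword matching to route each report to a queue. The category names and keyword lists are hypothetical; a production system would typically use a trained text classifier rather than keyword matching, but the triage workflow is the same.

```python
# Minimal sketch of routing internal compliance reports to categories.
# Category names and keyword lists are hypothetical illustrations.
CATEGORIES = {
    "HR": ["harassment", "discrimination", "retaliation"],
    "fraud": ["kickback", "false claim", "upcoding", "bribe"],
    "resource_misuse": ["personal use", "expense abuse", "company card"],
}

def categorize(report_text: str) -> str:
    """Assign a report to the category with the most keyword hits."""
    text = report_text.lower()
    scores = {
        category: sum(term in text for term in terms)
        for category, terms in CATEGORIES.items()
    }
    best, hits = max(scores.items(), key=lambda item: item[1])
    return best if hits else "needs_human_triage"

print(categorize("Hotline report alleges upcoding of E&M claims"))   # fraud
print(categorize("Employee reports ongoing retaliation by manager"))  # HR
```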
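Finally, for monitoring evolving regulatory requirements, the sketch below polls the public Federal Register API for recently published documents matching a search term. The endpoint and query parameters follow that API’s published documentation, but treat the exact fields as assumptions to verify against the current documentation; wiring the results into internal controls is omitted.

```python
# Minimal sketch of regulatory-change monitoring via the public Federal
# Register API (https://www.federalregister.gov/developers/documentation/api/v1).
# Verify the query parameters against the current API documentation.
import requests

def recent_documents(term: str, since: str) -> list[dict]:
    """Fetch Federal Register documents matching a term, published on/after a date."""
    resp = requests.get(
        "https://www.federalregister.gov/api/v1/documents.json",
        params={
            "conditions[term]": term,
            "conditions[publication_date][gte]": since,  # YYYY-MM-DD
            "per_page": 20,
        },
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json().get("results", [])

for doc in recent_documents("Anti-Kickback Statute", "2025-01-01"):
    print(doc["publication_date"], doc["title"])
```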
As with any AI or machine learning system, the results depend on the quality of the input data. Data collection, segregation, and integrity are crucial to the success of any AI-driven compliance solution.
Takeaways
There are many compelling reasons for life sciences companies to explore AI in the development of therapies and technologies. But when doing so, companies should adopt the following mantra: trust but rigorously verify. Without adequate controls in place, the use of AI can quickly turn an apparent business improvement into a business nightmare.
Companies should also carefully consider how AI-driven innovations can enhance their own compliance monitoring and mitigate—or avoid altogether—the adverse consequences of regulatory noncompliance.
The co-authors practice law in the Litigation, Arbitration, and Employment Practice at Hogan Lovells. Gejaa Gobena is a partner, Emily Lyons is counsel, and Jesse Suh is an associate.