M&A transactions involving targets that leverage AI present complex legal and operational challenges. Acquirers and their advisors must address complexities related to intellectual property, data privacy, employment, regulatory compliance and post-closing integration. Once identified, such risks can be addressed or mitigated through careful due diligence, appropriately tailored acquisition agreements and thoughtful post-closing integration planning.
Risks in AI-Driven M&A
Intellectual property risks are among the most pressing legal issues faced by acquirers in AI-related M&A, as legal uncertainty continues to cloud the rights associated with training inputs and AI-generated outputs. Many AI systems are trained using datasets scraped from the internet or pulled from third-party sources, which raises the possibility that underlying content used in an AI model may be subject to copyright or trade secret protections. In addition, as a recent post by our firm noted, the US Copyright Office takes the position that content created by AI is not eligible for copyright protection unless it reflects “sufficient human control” over its expressive elements.[1] This position creates a critical gap in protection and complicates enforcement of intellectual property rights in AI-generated materials. Acquirers, therefore, must confirm not only that the target owns the outputs its AI models produce, but also that the target holds the proper licenses or the rights to use the data the AI model was trained on.
Data privacy and employment risks are equally critical. AI systems that use or interact with personal or sensitive data will require the target to comply with a patchwork of state, federal and international data privacy and employment laws. Buyers must examine the target’s data use and governance practices, verify the existence of proper data consents and ensure the target has data governance frameworks robust enough to meet current and evolving legal standards. Moreover, AI tools trained on historical data may be susceptible to biases that can give rise to claims of discriminatory employment practices.
Companies using AI in their operations also face significant regulatory risks in the US and abroad. For instance, the EU enacted the EU AI Act in early 2024 (with some obligations taking effect earlier this year and others becoming effective in 2026 and 2027), and nearly all US states have passed some form of legislation regulating the use of AI. Thus, from an operational perspective, an acquirer will need to examine whether the target’s AI activities have been (or will be) compliant with applicable laws regulating AI, as well as whether the target has engaged in any unfair or deceptive practices relating to its AI capabilities or offerings with the intent to mislead investors, customers or regulators.
Regulators remain concerned with companies overstating or exaggerating their AI capabilities or offerings (i.e., “AI washing”) or otherwise engaging in deceptive or misleading activities with respect to their AI capabilities. Fines levied against companies for failing to comply with these evolving regulatory requirements can be significant; therefore, a prudent acquirer will want to perform robust diligence with respect to the target’s compliance.
Regulators in the US and abroad will also continue to be active in reviewing transactions involving AI companies, not only from an antitrust/market competition perspective, but also from a national security perspective. Regulators are also focused on “Big Tech” companies leveraging their dominant market positions to acquire rival AI companies in ways that may limit competition, stifle innovation or negatively impact consumer welfare. Regulators are not only concerned with control transactions; they are equally concerned with the recent slate of “acqui-hire” transactions, whereby an acquirer hires a target’s AI team and takes a license to the target’s AI model. As such, pre-closing notice to or approval from regulators will need to be carefully assessed in control and minority investment transactions alike.
Finally, post-closing integration risks are often overlooked, but can easily derail an investment. Retaining technical talent is particularly important in AI-driven acquisitions, where the institutional knowledge and skill sets of engineers and researchers are a core asset. Without retention incentives and/or enforceable non-competition agreements in place, key personnel might leave and undermine the value proposition of the acquisition. Technical post-closing integration may pose another hurdle. AI systems often involve proprietary data pipelines, unique model architectures or incompatible software environments. Such technical differences between the acquirer’s and target’s respective systems can make it difficult to consolidate platforms after the closing, potentially delaying synergies and creating new vulnerabilities.
Risk Mitigation
To navigate such challenges, acquirers must adopt a rigorous approach to due diligence. Beyond traditional financial and legal reviews, AI-specific diligence should include audits of training data, licensing arrangements and data consents. Engaging technical experts to review source code, inspect model documentation and verify IP ownership should be standard practice.
Acquisition agreements can also assist in mitigating risks. Because generic representations and warranties in a standard acquisition agreement are unlikely to capture the full range of AI-related risks and liabilities, acquirers should include representations and warranties tailored to AI matters, addressing, among other things, the target’s legal right to use the data in its AI models, the originality of its algorithms and the absence of known model bias or misuse.
Moreover, interim operating covenants in acquisition agreements should limit the target’s ability to materially change its AI or data privacy practices and prevent the target from onboarding new AI tools. Finally, where known risks exist (e.g., pending litigation or regulatory investigations), acquirers will want to obtain special indemnities from the target’s stockholders.
Finally, employee retention should be prioritized early in the post-closing integration planning process. The market for AI talent is extremely competitive, and losing technical talent and leadership can significantly impair post-closing performance of the target. Thus, acquirers should consider implementing retention bonuses, equity-based incentives and career development plans to align key personnel’s goals with the long-term vision of the combined company.
To realize the full value of AI-driven M&A, acquirers must proactively identify and address the distinct legal, regulatory and operational risks of utilizing AI technologies. By adapting diligence processes, acquisition documents and integration strategies, dealmakers can position themselves to achieve a successful outcome in an environment defined by change and complexity.
For further insights and evolving best practices regarding AI from the Squire Patton Boggs Team, we encourage you to visit our AI Law & Policy Hub.