perea.ai Research · 1.0

EU AI Act Vendor Contract Clause Library: The 2026 Procurement Playbook

MCC-AI High-Risk + Light, Article 25 deployer-to-provider boundary, AI-DPA addenda, training-data clauses, Article 50 transparency hooks, GPAI Code-signatory warranties — what every B2B AI contract must say

Author: Dante Perea
Published: May 2026
Length: 6,815 words · 31 min read
Audience: In-house counsel, procurement leads, AI-governance officers, and DPOs negotiating vendor contracts for AI systems subject to the EU AI Act 2 August 2026 high-risk regime
License: CC BY 4.0

#Foreword

This is a working clause library for negotiating B2B AI vendor contracts under the EU AI Act, written for the in-house counsel and procurement teams that have to ship signed contracts before the high-risk regime applies on 2 August 2026.[1][2] The premise is simple: a SaaS MSA written in 2022 does not address what AI vendor contracts must address in 2026.[3][4][5] Standard GDPR Data Processing Agreements do not cover training-data permission, model-improvement rights, the deployer-to-provider re-qualification under Article 25, the supplier-side cooperation duties for fundamental rights impact assessments, the Article 50 marking and labelling cooperation, or the GPAI Code of Practice signatory representation that downstream integrators now want their suppliers to provide.[6][7][8][9]

The European Commission's Public Buyers Community has done the public-sector heavy lifting.

The Model Contractual Clauses for the Procurement of AI (MCC-AI), updated 5 March 2025 to align with the final AI Act and translated into all 24 EU languages by 16 June 2025, are now the canonical baseline a private-sector counsel can paste into a procurement template and adapt.[10][11][12][13]

Independent academic and law-firm guidance — Bird & Bird, Trowers & Hamlins, Burges Salmon, Slaughter and May, Covington / Inside Privacy, Snellman, Mishcon de Reya, Stephenson Harwood, ICTLC, Legalithm — has converged on a recognisable twelve-clause shape that this paper codifies.[14][15][16][17][18][19][20][21][22]

The vendor side has converged too: OpenAI, Anthropic, Mistral AI, and the second-tier of agent-platform vendors have published DPAs and supplier addenda whose structure the procurement side now negotiates against.[23][24][25]

#Executive Summary

The 2026 EU AI Act vendor contract has twelve negotiation surfaces, each anchored to a specific statutory hook:[14][17][26][27]

  1. Compliance-status warranty — the supplier represents whether the system is high-risk under Article 6(2) and Annex III and warrants compliance with Chapter III duties applicable to its provider role.[11][28]

  2. Data use and training restrictions — training, fine-tuning, or "service improvement" use of customer data is prohibited unless contractually opted in.[4][5][29]

  3. Output ownership — explicit assignment of inputs, outputs, and derived insights to the customer.[30][31][4]

  4. Liability and indemnities for hallucinations + IP infringement + bias — modification-exclusion kill-switch removed, indemnity scope explicitly extended to AI outputs.[4][5][32]

  5. Model change notification — minimum advance-notice window before substantial model versions or behaviour changes.[32][33]

  6. Log access for Article 26(6) — supplier-side log retention and access for the deployer's six-month-minimum logging duty.[28][34]

  7. Substantial-modification boundary — defined Article 25(1)(b)/(c) thresholds; what the deployer can and cannot change before re-qualification.[7][35][36]

  8. Audit rights — controller and authority audit cooperation, including under Article 26(7).[11][17]

  9. FRIA cooperation — supplier provides architecture, safety measures, data-handling, and model-provider dependencies for the deployer's Article 27 FRIA.[37][38][39]

  10. GPAI Code-signatory representation — supplier represents whether it (or its upstream GPAI provider) is a signatory of the Code, with an alternative-equivalent-measures fallback.[40][41][9]

  11. Article 50 transparency cooperation — supplier implements multilayered marking (C2PA + watermarking + fingerprinting/logging) per the Code of Practice and contractually prohibits removal of markings.[42][43][44][45]

  12. Termination and transition — data return + deletion certification + 90–180-day transition assistance + fine-tuned-model-weight extraction rights.[46][47][48]

The structural insight is that none of these clauses sit cleanly inside a traditional MSA. They live in an AI Addendum that is incorporated by reference into the MSA, supplements (not replaces) the GDPR DPA, and runs in parallel with any sector-specific data agreements (e.g., financial-services third-party risk, healthcare BAAs).[49][50] The MCC-AI High-Risk version is the closest public template; the twelve-clause structure in Part VII below is the operational refinement.

#Part I: The MCC-AI Baseline (March 2025)

The Public Buyers Community's MCC-AI is the closest the EU has come to publishing a model AI procurement contract.[10][11] It is not legally binding — the Commentary explicitly states the clauses are "working documents" and do not represent an official Commission position.[11] But for any organisation drafting a clause library against the EU AI Act, it is the natural baseline because it was drafted by 40+ public-sector experts, peer-reviewed across academia and industry, and updated to align with the finalised 13 June 2024 AI Act before being translated into all 24 EU languages.[14][51]

Structure. The MCC-AI ships in three pieces: a full version for procurement of AI systems classified as high-risk under the AI Act; a light version for procurement of AI systems that are not classified as high-risk but may still pose risks to health, safety, or fundamental rights; and a Commentary explaining how to use, customise, and apply the clauses in practice.[11][15][16][52] The full version is structured around five clause categories: essential requirements for the AI system, supplier obligations, rights to use data, AI register and audit, and costs.[15][16][53]

The light-version asymmetry. A subtle but important feature of the MCC-AI Light is that it imposes high-risk-style obligations on suppliers of non-high-risk AI systems — the supplier's quality-management, conformity-assessment, AI-register, and audit-rights obligations in the High-Risk version are deleted in the Light version, but the bulk of the substantive technical and transparency obligations are retained.[16][17] Slaughter and May notes that this is intended to "improve the trustworthiness of the AI system procured by the public body" and "may not be proportionate or appropriate in all circumstances," making the Light version a useful but negotiation-sensitive template.[16]

The cooperation-in-explanation duty. The MCC-AI Commentary highlights one provision that goes beyond the AI Act's Chapter III text: the supplier's contractual obligation to cooperate in explaining, at an individual level, how the AI system reached an outcome or decision.[11] This is the contractual scaffolding for the Article 86 right of explanation — building the supplier-side cooperation duty into the procurement contract from day one rather than discovering after a complaint that the supplier cannot or will not provide explanation data.[11][37]

The dynamic-document caveat. The Commentary's first page identifies it as a "Dynamic Working Document" maintained by Jeroen Naves with reviewers Anita Poort and Ivo Locatelli.[11] Procurement teams using the MCC-AI as a baseline should track its revisions — the Public Buyers Community has signalled additional updates as the AI Office finalises the Article 27(5) FRIA template, the Article 50 Code of Practice, and the harmonised standards under Article 40.[11][41][42]

Private-sector applicability. Although the MCC-AI was drafted for public organisations, both the Public Buyers Community and Covington's Inside Privacy explicitly note that "some clauses may also serve as a useful reference for private companies entering into contracts with external suppliers for AI systems."[17][18] In practice, the MCC-AI High-Risk operating-clauses framework — risk management, transparency, human oversight, data governance, accuracy/robustness/cybersecurity — generalises directly to private B2B procurement of high-risk AI.

#Part II: Article 25 — The Deployer-to-Provider Re-qualification Trigger

Article 25 of the EU AI Act is the most consequential — and most frequently misunderstood — provision in the AI vendor contract.[7][35][36] It defines the three circumstances under which a distributor, importer, deployer, or third party that is not the original developer is reclassified as a provider of a high-risk AI system and inherits the full set of provider obligations under Article 16 (quality management, conformity assessment, CE marking, technical documentation, registration in the EU database, post-market monitoring).[7][54]

Trigger 1 — Rebranding (Article 25(1)(a)). A party that puts its own name or trademark on a high-risk AI system already placed on the market becomes the provider of that system, "without prejudice to contractual arrangements stipulating that the obligations are otherwise allocated."[54][36] This is the only one of the three triggers where a written agreement can re-allocate provider obligations back to the original developer; the other two are functional re-qualifications that contracts cannot override.[7]

Trigger 2 — Substantial modification (Article 25(1)(b)). A party that makes a substantial modification to a high-risk AI system already placed on the market — such that it remains a high-risk system — becomes the new provider.[54][7] Article 3(23) defines substantial modification as "a change to the AI system after its placing on the market or putting into service, which is not foreseen or planned by the provider and as a result of which the compliance of the system with the applicable requirements is affected, or the intended purpose for which the system has been assessed is modified."[54] In practice the boundary is ambiguous: "Light parameter adjustments within the boundaries foreseen and documented by the original provider" remain within the deployer's domain; changes to core software architecture, replacement of the operating system, retraining on proprietary data that alters intended purpose, or fine-tuning that materially changes performance or behaviour cross into provider territory.[54][36]

Trigger 3 — Purpose change (Article 25(1)(c)). A party that modifies the intended purpose of an AI system (including a general-purpose AI system) that was not originally classified as high-risk, in such a way that the system becomes high-risk, becomes the provider of the newly high-risk system.[54][7] This trigger catches enterprises that take a general-purpose model and deploy it in an Annex III context (welfare eligibility, credit scoring, recruitment) the original provider did not contemplate.[54][35]

The Article 25(2) cooperation duty. When re-qualification occurs under triggers (b) or (c), the original provider is no longer the provider of that specific system for regulatory purposes — but Article 25(2) requires the original provider to "closely cooperate with new providers" and "make available the necessary information and provide the reasonably expected technical access and other assistance" required for the new provider to fulfil its obligations.[54][7] The exception: if the original provider has clearly specified that its AI system is not to be changed into a high-risk AI system, the cooperation duty does not apply.[7][36] This exception is what makes the supplier's "do not modify" representation a genuinely valuable contract clause — it lets the supplier opt out of the cooperation cost while keeping the deployer accountable for its modification choices.[7]

The Article 25(4) value-chain written-agreement obligation. The most-overlooked clause in Article 25 is paragraph 4, which obligates the high-risk-AI-system provider (as opposed to the deployer or third-party tool supplier) to enter into a written agreement with any third party that supplies AI systems, tools, services, components, or processes integrated into the high-risk system.[54][7] The written agreement must specify the necessary information, capabilities, technical access, and other assistance "based on the generally acknowledged state of the art" to enable the high-risk provider to comply.[54] Open-source third-party tools (other than GPAI models) are exempt.[54] The AI Office may publish voluntary model terms for these contracts, available free of charge in an easily usable electronic format — these would be the natural extension of the MCC-AI to upstream-tool agreements.[54][41]

Contractual recipe. Build Article 25 into both supplier-facing and customer-facing contracts. The supplier should have a "do not modify" clause that triggers Article 25(2)'s cooperation-exception. The deployer should have a "no rebranding without written agreement" clause that addresses Article 25(1)(a). And every party should have a substantial-modification-change-control workflow that documents which planned modifications fall within Article 25(1)(b) thresholds and which require notification + re-classification.[7][36]

#Part III: AI Data Processing Addendum — Beyond GDPR Article 28

A standard GDPR DPA does not cover what an AI vendor contract must cover.[49][55] GDPR Article 28(3) prescribes the minimum contract content (subject matter, duration, nature and purpose of processing, type of personal data, controller obligations and rights, documented-instructions processing, confidentiality, technical and organisational measures, sub-processor conditions, data-subject-rights assistance, breach assistance, deletion/return, audit rights),[55] but none of those provisions address training-data permission, model-improvement use, the nature of AI-output processing, or supplier cooperation with the deployer's Article 27 FRIA and Article 50 marking obligations.[49][39][45]

Vendor canonical practice. OpenAI, Anthropic, and Mistral AI publish DPAs that map the GDPR Article 28 frame onto AI-specific processing.[23][24][25] OpenAI's Customer DPA (February 2024) treats OpenAI as the Customer's processor for API Services and ChatGPT Enterprise, with Module Two/Three SCCs incorporated for non-EEA transfers; the Supplier DPA flips the role with OpenAI as controller and the supplier as processor or sub-processor.[23][56] Anthropic's DPA mirrors the structure with explicit subprocessor liability extension and an annual security-awareness + HIPAA training cadence.[24] Mistral AI's DPA is unusual in that Mistral acts as controller for training its AI models (with a customer opt-out + thumbs-up/-down feedback exception) — making it a counter-pattern that procurement teams should watch for in EU-based vendor contracts.[25][57]

The training-data clause. The most consequential and most overlooked AI-specific DPA provision is the clause that permits the vendor to use customer data to "improve the Services."[4][5] LawSnap's practitioner guide notes that "Improve the Services" was drafted before AI but now covers model training, and that under GDPR Article 28(3)(a) — which requires processing on "documented instructions from the controller" — a vendor-defined "improve" purpose is not a documented controller instruction.[4] The opt-out window often begins on the addendum's effective date (the date the vendor posted the update to a URL), not the date the customer received notice — which means processing may have begun before the customer had a practical opportunity to stop it.[4][58]

The "no training on customer data" representation. A growing pattern, exemplified by Elnora AI's DPA, is the contractual representation that "Elnora does not train artificial intelligence or machine learning models on Customer Personal Data" combined with a downstream representation that the foundation-model providers (Anthropic, Azure OpenAI, GCP Vertex) are themselves contractually prohibited from training on the data they receive.[59] This two-layer representation closes the OpenAI-DPA "improve the Services" gap and is the cleanest contractual posture for regulated-industry deployers.

The "vendor's own processing" clause — what to redline. Counter-pattern: BBos's DPA contains a clause that "BBos may Process Personal Data for BBos's own purposes" including training and improving AI models using de-identified or aggregated data, with the explicit caveat that "Once Personal Data is incorporated into BBos's AI models, analytics systems, or aggregated datasets, it becomes BBos's property and cannot be deleted."[60] Procurement should treat this clause as a deal-breaker on regulated workloads — it nullifies the controller's GDPR Article 17 right to erasure for any data the vendor has anonymised or aggregated, even when the customer can show the anonymisation is reversible in practice.[60][61]

The AI-related-assistance clause. The cleanest pattern for FRIA cooperation is Molin AI's "AI-Related Assistance" clause: "Molin shall, upon request, provide Customer with information about the architecture, safety measures, data-handling practices and model-provider dependencies of the relevant AI features, to the extent such information is reasonably required for Customer's AI-related impact assessments (including DPIAs) or compliance with the EU AI Act or equivalent legislation."[62][39] The trade-secret carve-out is appropriate: "This clause does not require Molin to disclose trade secrets, security-sensitive information or proprietary model weights."[62] This is the AI-DPA-side mirror of MCC-AI Article 14's individual-explanation cooperation duty, and is the natural location to add Article 26(7) FRIA-cooperation language.[11][37]

#Part IV: Training-Data, IP, and Indemnification Clauses

Of all the AI-specific clauses, the IP indemnification and the training-data restrictions are where the legal risk is most concentrated and where vendor-customer divergence is widest.[31][4][32]

The modification-exclusion kill switch. LawSnap's analysis catalogs the most consequential pattern in vendor IP indemnification: most AI vendor MSAs offer indemnification for AI outputs that infringe third-party IP, but the indemnification is "voided when output is 'modified'" — and any human edit to the output before publication counts as modification.[4] In practice, this means the work an enterprise actually publishes is rarely covered by any indemnification at all.[4][31] Procurement should redline the modification exclusion to a narrower trigger: indemnification preserved when modifications are limited to "ordinary editorial revisions consistent with the vendor's intended use of the output."[31][4]

Vendor-tier benchmark. VendorBenchmark's February 2026 analysis of 50+ enterprise AI contracts identifies a tiered pattern:[63]

  1. Standard click-through terms (under $250K/year) — training-data use allowed with opt-out; 30–90 day vendor-controlled output retention; AI-output indemnity disclaimed entirely.

  2. Negotiated terms ($250K–$1M/year) — customer-controlled retention; customer-owned fine-tuning data; 30-day advance subprocessor notice; indemnity for IP infringement in training data.

  3. Enterprise terms ($1M+/year) — contractual no-training-use language; customer-owned fine-tuning weights with deletion rights; 90-day subprocessor change notice with approval right; and the OpenAI Copyright Shield / Google IP Indemnity / Microsoft AI Indemnity programmes covering claims that model outputs infringe third-party IP due to training data.[63][31][32]

Hallucination liability — there is no broad indemnity. No major AI vendor offers contractual indemnification for factual errors in outputs.[63][32] Indemnification for hallucinations is not negotiable at any spend tier in 2026; the vendor posture is that customers are responsible for output verification in regulated applications.[63] Contracts should treat hallucinations through service-level commitments rather than indemnification: an agreed process for reporting and investigating model output errors, service credits for output-quality failures that exceed defined accuracy thresholds for production applications, and a vendor commitment to test for discrimination and bias before deployment.[32][31]

Output ownership pattern. Standard enterprise AI agreements explicitly state that all outputs generated using the customer's data, prompts, and system prompts are the customer's intellectual property.[46][31] Pertama Partners recommends explicit clause language: "AI outputs generated in response to Customer prompts and using Customer Data are owned by Customer. Vendor retains no rights to specific outputs. Vendor shall not grant any third party rights to outputs that conflict with Customer's ownership."[31][46] The carve-out trap: "derivative works" language can be exploited by vendor legal teams to claim rights in transformations of customer outputs; redline to "all copyright, trade secret, and other intellectual property rights without limitation or reservation."[46][31]

The "anti-training" output restriction. A separate clause type, common in vendor terms: a prohibition on the customer using outputs to train its own AI systems.[4] Microsoft Copilot's terms include: "you may not use Microsoft AI services, output, content, data or other information received or derived from any generative AI services to directly or indirectly create, train, or improve other artificial intelligence systems."[4] Procurement teams building internal AI tooling need to flag this clause and either negotiate it out, scope it narrowly to direct fine-tuning (allowing internal-use fine-tuning of distinct models), or factor it into vendor-selection.[4]

Liability-cap calibration. Andrew Bosin's 2026 checklist recommends rejecting liability caps of "1–3 months of fees" and exclusions that "remove vendor liability for AI errors" or "shift risk to the customer" via no-consequential-damages clauses.[46] The negotiated-tier pattern: aggregate-liability floor at 12 months of fees minimum; carve-outs from the cap for IP indemnification, data-protection breaches, and confidentiality breaches; super-cap (e.g., 2× annual fees) for violations of training-data restrictions where the harm is competitive-intelligence loss rather than direct damage.[46][32][63]

#Part V: Article 50 Transparency — Watermarking, C2PA, and Marking Cooperation

Article 50 of the EU AI Act applies on 2 August 2026 and imposes a dual transparency regime: providers of generative AI systems must mark outputs in a machine-readable format (Article 50(2)); deployers must label deepfakes and AI-generated text on matters of public interest (Article 50(4)).[42][64][65] The Code of Practice on marking and labelling, drafted by independent experts under the AI Office multi-stakeholder process, is the operational benchmark.[42][43][44][45]

Multilayered marking. The Code of Practice explicitly takes the position that "no single marking technique is currently sufficient" and recommends a multilayered approach: digitally signed metadata embedding (typically C2PA content credentials); imperceptible watermarking interwoven directly within the content (pixel-level for images, spectral-domain for audio/video, statistical-token for text); and fingerprinting or logging mechanisms as fallback where metadata can be stripped and watermarks can be defeated.[42][43][44][65] The first draft (December 2025) was streamlined in a second draft (March 2026), with finalisation expected by early June 2026; both drafts retain the multilayered requirement.[43][65]
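
A supplier-side marking pipeline that follows the multilayered recommendation might keep a per-asset record like the following sketch. The boolean flags stand in for calls to real C2PA and watermarking tooling, which this illustration does not depend on; only the fingerprint layer is computed for real:

```python
from dataclasses import dataclass
import hashlib

@dataclass
class MarkingRecord:
    """Provenance record for one generated asset, one field per marking layer."""
    asset_id: str
    c2pa_manifest: bool = False  # layer 1: signed metadata (strippable on re-encode)
    watermark: bool = False      # layer 2: imperceptible watermark (defeatable)
    fingerprint: str = ""        # layer 3: server-side hash log (survives both)

def mark_output(asset_id: str, content: bytes) -> MarkingRecord:
    # Layers 1 and 2 would call real tooling (a C2PA SDK, a watermark encoder);
    # the flags here merely record that those steps ran.
    record = MarkingRecord(asset_id, c2pa_manifest=True, watermark=True)
    # Layer 3: log a content fingerprint so provenance survives metadata stripping.
    record.fingerprint = hashlib.sha256(content).hexdigest()
    return record

def is_multilayered(record: MarkingRecord) -> bool:
    """The Code's position: all three layers present, not any single one."""
    return record.c2pa_manifest and record.watermark and bool(record.fingerprint)
```

The point of the structure is auditability: a supplier warranting Code-of-Practice alignment should be able to produce, per asset, evidence that every layer was applied.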

Contractual hooks. The supplier's Article 50 marking obligation is technical, but the contract must include three hooks. First, a representation that the supplier's AI system marks all generated content per the Code of Practice, including specifying which marking technologies are used (C2PA, SynthID, ForensicMark, or equivalent).[45][65] Second, a covenant that the supplier will update marking implementations as the Code is revised (the Code operates on an annual revision cycle).[41][45] Third, a prohibition on the customer or its end-users altering or removing marks — which Cooley's analysis notes the Code itself recommends incorporating "in terms of service and acceptable use policies."[44]

Deployer labelling cooperation. Deployer-side Article 50(4) labelling for deepfakes and public-interest text requires the deployer to know whether content is AI-generated when consuming or publishing it — which means the supplier must provide content-credential APIs or detection tools the deployer can integrate.[43][65] The MCC-AI Commentary's Article 14 individual-explanation duty is the natural place to extend this: a contractual obligation on the supplier to provide a detection API (or equivalent verification tool) accessible to the deployer's content workflow.[11][43] ForensicMark's API pattern — embedding both an invisible neural watermark and a C2PA manifest at generation, with detection available via API or open tool — is the operational template.[65]
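
On the deployer side, the integration reduces to a detection query plus an Article 50(4) labelling decision. The endpoint, URL, and response fields below are hypothetical — the real ones come from the supplier's detection-API documentation:

```python
import json
import urllib.request

DETECTION_ENDPOINT = "https://provider.example/v1/detect"  # hypothetical verification API

def query_detection_api(content_url: str) -> dict:
    """Ask the supplier's detection endpoint whether content carries its marks."""
    req = urllib.request.Request(
        DETECTION_ENDPOINT,
        data=json.dumps({"content_url": content_url}).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

def deployer_must_label(detection: dict, is_deepfake: bool,
                        public_interest_text: bool) -> bool:
    """Article 50(4) decision: label if AI-generated and in a covered category."""
    ai_generated = detection.get("watermark_found") or detection.get("c2pa_manifest_found")
    return bool(ai_generated) and (is_deepfake or public_interest_text)
```

The contractual hook is the first function: the supplier must commit to keeping a detection endpoint (or equivalent offline tool) available so the second function has trustworthy input.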

The €15M / 3% turnover penalty. Article 50 violations fall under the same operator-obligations tier as Article 27 violations: up to €15 million or 3% of total worldwide annual turnover, whichever is higher.[65][66] For large AI providers, 3% of global turnover is a significant exposure; for downstream integrators, the contract should ensure the supplier's marking obligations are warranted strongly enough that an Article 50 enforcement action against the deployer can be passed back to the supplier through indemnification.[32][65]

Open-weight model edge case. The Code's first draft suggested that open-weight model providers should "implement structural marking techniques encoded in the weights during training to facilitate downstream compliance."[44] Procurement of open-weight models therefore requires a representation that structural marking is in place, plus a cooperation duty for the deployer's downstream marking implementation — because the open-weight provider, by definition, is not the entity running inference and producing the marked output.[44][45]

#Part VI: GPAI Code of Practice — Signatory Status as a Procurement Filter

The General-Purpose AI Code of Practice, published 10 July 2025 in final form by the AI Office, is the voluntary mechanism by which providers of general-purpose AI models (GPAI) demonstrate compliance with Article 53 (transparency, copyright, downstream-provider cooperation) and Article 55 (additional duties for GPAI with Systemic Risk).[9][40][67] It went into application for new models on 2 August 2025; existing-model providers must comply by 2 August 2027; the Commission's enforcement powers begin on 2 August 2026.[41][9]

The Article 56(2) presumption-of-conformity. For signatories, Article 56(2) creates a rebuttable presumption of conformity with the corresponding Chapter V obligations, shifting the burden of proof significantly in enforcement proceedings.[68][9] Non-signatories may comply through "other means" under Article 56(4), but the EU AI Office's guidance is explicit that alternative-pathway compliance receives "closer scrutiny" than Code compliance.[68][41]

Signatories as a procurement filter. The current signatory list, published at code-of-practice.ai, includes Accexible, AI Studio Delta, Aleph Alpha, Almawave, Amazon, Anthropic, Black Forest Labs, Bria AI, Cohere, Domyn, Dweve, Fastweb, Google, IBM, Lawise, LINAGORA, Microsoft, Mistral AI, Open Hippo, OpenAI, Pleias, ServiceNow, and WRITER, plus xAI on the Safety and Security chapter only.[9] Notable absences (Meta) signal the GPAI providers that have declined; downstream procurement teams use the signatory list as a first-pass filter for upstream supplier selection.[9][68]

The 14-day downstream-information duty. Code signatories commit to providing technical documentation and related information to downstream providers within 14 days of request.[9][69][21] This is the contract clause that lets a downstream B2B AI builder demonstrate Article 16 compliance: by warranting that its upstream GPAI is a Code signatory, the downstream provider can show it has a documented information-flow channel matching the Article 53 transparency obligations.[9][69]

The contractual warranty. The aiactblog.nl September 2025 procurement guide provides the canonical warranty language: "Supplier warrants that the Model and associated documentation are in line with the EU AI Act, including obligations for general-purpose AI models as referred to in Article 53 and, where applicable, Article 55. If Supplier is not a signatory to the GPAI Code of Practice, it applies measures that provide equivalent safeguards as described in the Transparency, Copyright and Safety & Security chapters of that code."[40] Three ancillary representations: completion and delivery of the Model Documentation Form per the Transparency chapter; reference and version of the Public Summary of Training Content (mandatory under Article 53); and 72-hour serious-incident reporting to the AI Office and to the customer.[40][41][9]

Annex structure. The contract should ship the GPAI representations as a structured annex: Appendix A — Model Documentation Form (living document, updated each release); Appendix B — Public Summary of Training Content link and version, plus fallback information if publication is not yet available; Appendix C — Assurance and audit plan with a timeline towards harmonised standards once they publish under Article 40.[40][9]

Annual revision cycle. Article 56(5) requires the Code to be revised on an annual cycle, with the first revision due 2 August 2026 and signatories given 90 days to update their compliance implementations after each revision.[68] Procurement contracts should reference "the Code of Practice as in force from time to time" rather than a specific revision, and should include a covenant that the supplier will update its compliance implementation within the 90-day window.[68][9]

#Part VII: The Twelve-Clause MSA AI Addendum

What follows is a working twelve-clause structure for an AI Addendum that is incorporated by reference into the MSA, supplements the GDPR DPA, and carries the EU AI Act statutory hooks identified above.[14][31][4][46]

Clause 1 — Compliance status warranty. Supplier represents and warrants that, with respect to each AI System provided under this Agreement: (a) the Annex III high-risk classification, if applicable, has been accurately determined; (b) Supplier complies with the Chapter III obligations applicable to its provider role under Regulation (EU) 2024/1689; and (c) Supplier will notify Customer of any change to (a) or (b) within five Business Days of becoming aware of such change.[11][28][7]

Clause 2 — Data use and training restrictions. Supplier shall not use Customer Data — including prompts, inputs, fine-tuning data, outputs, usage patterns, or any data derived from Customer Data — to train, fine-tune, evaluate, or otherwise improve any AI model, foundation model, or service made available to any third party. Supplier represents that any third-party model providers it engages are contractually bound by an equivalent restriction. The "improve the Services" purpose described in any standard-form addendum or terms of service is expressly disclaimed and superseded by this clause.[4][59][31]

Clause 3 — Output ownership. All outputs generated by the AI System in response to Customer's inputs and using Customer Data are owned by Customer, including all copyright, trade secret, and other intellectual property rights without limitation or reservation. Supplier retains no rights to specific outputs. Supplier shall not grant any third party rights to outputs that conflict with Customer's ownership. The term "derivative work" as used in this Agreement does not include any work in which Customer's outputs, prompts, or system prompts are embedded.[46][31][4]

Clause 4 — Liability and indemnities. Supplier shall indemnify, defend, and hold harmless Customer from third-party claims arising from: (a) infringement of third-party intellectual property by training data or AI outputs; (b) bias or discrimination in AI outputs where Supplier failed to implement commercially reasonable safeguards; (c) data breaches caused by Supplier's failure to implement appropriate security controls; (d) regulatory fines arising from Supplier's non-compliance with the AI Act or applicable Member State implementing law. The IP indemnity in (a) is preserved where Customer's modifications to outputs are limited to ordinary editorial revisions consistent with the AI System's intended purpose.[4][31][32][63]

Clause 5 — Model change notification. Supplier shall notify Customer no less than 90 calendar days in advance of: (a) any change to the AI System that constitutes a "substantial modification" within the meaning of Article 3(23) of Regulation (EU) 2024/1689; (b) any model version change affecting Customer's intended purpose; or (c) any change in the underlying GPAI model identity. Notice shall include a description of the change, the expected impact on performance, and any required Customer-side action.[32][33][7]

Clause 6 — Log access for Article 26(6). Supplier shall retain, and provide Customer with access to, the logs of the AI System for the period required for Customer's compliance with Article 26(6) of Regulation (EU) 2024/1689 (minimum six months) or such longer period as Customer may reasonably require for regulatory, audit, or litigation purposes. Supplier shall make logs available to market surveillance authorities upon Customer's reasoned request.[28][34]
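The retention arithmetic behind Clause 6 can be sketched as a simple check. The snippet below is illustrative only and not part of the MCC-AI or any vendor API: the function names and the 183-day approximation of the Article 26(6) "six months" floor are assumptions for the sketch.

```python
from datetime import date, timedelta

# Hypothetical approximation of the Article 26(6) six-month minimum.
ARTICLE_26_6_MINIMUM_DAYS = 183

def retention_satisfies_article_26_6(retention_days: int,
                                     contractual_extension_days: int = 0) -> bool:
    """Check that a configured log-retention window meets the statutory floor
    plus any longer period the Customer has elected under Clause 6."""
    required = max(ARTICLE_26_6_MINIMUM_DAYS, contractual_extension_days)
    return retention_days >= required

def earliest_permitted_deletion(log_date: date, retention_days: int) -> date:
    """First date on which a log entry written on log_date may be deleted."""
    return log_date + timedelta(days=retention_days)
```

A contract operations team can run this kind of check against the supplier's stated retention configuration at onboarding and at each renewal, rather than discovering a shortfall during a market-surveillance request.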

Clause 7 — Substantial-modification boundary. Supplier expressly specifies, for purposes of Article 25(2) of Regulation (EU) 2024/1689, that the AI System is not to be changed into a high-risk AI system by Customer outside the parameters foreseen and documented in the Supplier's instructions for use. Customer shall not (a) put its name or trademark on the AI System; (b) make a substantial modification within the meaning of Article 3(23); or (c) modify the intended purpose of the AI System into a high-risk use case under Annex III, except in each case with Supplier's prior written consent and a written re-allocation of provider obligations under Article 25(1)(a) where applicable.[7][54][36]

Clause 8 — Audit rights. Customer and its authorised auditors shall have the right to audit Supplier's compliance with this Agreement, including without limitation Supplier's training-data practices, marking and labelling implementation, model documentation, and security controls, no more than once per calendar year (or more frequently following a security incident, regulatory enforcement action, or post-market monitoring finding). Supplier shall cooperate with any audit conducted by the market surveillance authority designated under Article 70 of Regulation (EU) 2024/1689.[11][17]

Clause 9 — FRIA cooperation. Supplier shall, upon Customer's reasonable request, provide Customer with information about the architecture, safety measures, data-handling practices, and model-provider dependencies of the AI System, to the extent such information is reasonably required for Customer's Fundamental Rights Impact Assessment under Article 27 of Regulation (EU) 2024/1689. This clause does not require Supplier to disclose trade secrets, security-sensitive information, or proprietary model weights, but Supplier shall provide alternative information sufficient to enable Customer's FRIA in good faith.[62][39][37]

Clause 10 — GPAI Code-signatory representation. Supplier represents and warrants that, with respect to any general-purpose AI model integrated into the AI System: (a) the GPAI provider is a signatory to the General-Purpose AI Code of Practice published by the AI Office on 10 July 2025, as revised; or (b) if not a signatory, the GPAI provider implements measures that provide equivalent safeguards as described in the Transparency, Copyright, and Safety & Security chapters of the Code, with documented compliance evidence available to Customer on request. Supplier shall ensure that downstream-provider information requested by Customer under Article 53 is provided within 14 calendar days.[40][9][69]

Clause 11 — Article 50 transparency cooperation. Supplier shall implement the multilayered marking required by the Code of Practice on Marking and Labelling of AI-Generated Content (or successor guidance), including digitally signed metadata (e.g., C2PA), imperceptible watermarking, and fingerprinting/logging fallback as applicable to the modality. Supplier shall provide Customer with a detection API or equivalent verification tool sufficient for Customer's Article 50(4) deployer-labelling obligations. Customer and its end-users are prohibited from altering or removing markings applied by the AI System.[42][43][44][45]
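Clause 11's multilayered-marking requirement can be modelled as a checklist predicate. This is an illustrative sketch under stated assumptions: the layer names and the shape of the supplier's marking report are invented for the example, and a real verification would go through the supplier's detection API or a C2PA validation tool rather than a local dictionary.

```python
# Layer names are hypothetical labels for the three marking layers the
# clause requires (signed metadata, watermark, fingerprint/logging fallback).
REQUIRED_LAYERS = {"signed_metadata", "watermark", "fingerprint_or_log"}

def marking_is_multilayered(marking_report: dict) -> bool:
    """True if the supplier reports every required marking layer as either
    implemented or inapplicable to the content modality."""
    return all(
        marking_report.get(layer) in ("implemented", "not_applicable")
        for layer in REQUIRED_LAYERS
    )
```

The design point the sketch captures: a layer may be inapplicable to a given modality (e.g., no imperceptible watermark for short text), but it must be expressly accounted for, not silently absent.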

Clause 12 — Termination and transition. On termination or expiry of this Agreement, Customer shall have the right to extract all Customer Data, fine-tuned model weights, training datasets, system prompts, and accumulated usage data in a portable, standard format within a transition period of 90–180 days at Customer's election, with deletion of remaining data certified in writing within 30 days of the end of the transition period. Supplier shall not anonymise, aggregate, or otherwise transform Customer Data in a manner that would defeat Customer's deletion or extraction rights under this clause.[46][48][60]
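The time-bound obligations scattered across Clauses 5, 10, and 12 lend themselves to a small deadline tracker. A minimal sketch, assuming a hypothetical obligations table: the day counts are the ones stated in the clauses above; the names and structure are illustrative, not drawn from the MCC-AI.

```python
from datetime import date, timedelta

# Day counts from Clauses 5, 10, and 12; keys are hypothetical labels.
OBLIGATION_WINDOWS_DAYS = {
    "model_change_notice": 90,   # Clause 5: advance notice before the change
    "gpai_downstream_info": 14,  # Clause 10: Article 53 information on request
    "deletion_certificate": 30,  # Clause 12: after the transition period ends
}

def deadline(trigger: date, obligation: str) -> date:
    """Date by which the obligation must be discharged (for the Clause 5
    notice, the earliest date the change may take effect)."""
    return trigger + timedelta(days=OBLIGATION_WINDOWS_DAYS[obligation])

def transition_end(termination: date, elected_days: int) -> date:
    """End of the Clause 12 transition period; Customer elects 90-180 days."""
    if not 90 <= elected_days <= 180:
        raise ValueError("Clause 12 allows an election of 90-180 days")
    return termination + timedelta(days=elected_days)
```

Wiring these dates into a contract-management calendar at signature is cheaper than reconstructing them from the addendum text when a model change or termination actually lands.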

#Quotable Findings — Vendor Contract Clause Library

  1. Per the EU Public Buyers Community announcement (5 March 2025),[10] the updated Model Contractual Clauses for the Procurement of AI ship in three pieces — a full version for high-risk AI, a light version for non-high-risk AI, and a Commentary — and were translated into all 24 EU languages by 16 June 2025.[13]
  2. Per the MCC-AI Commentary (February 2025 dynamic working document, author Jeroen Naves with reviewers Anita Poort and Ivo Locatelli),[11] the MCC-AI imposes an obligation on the supplier to cooperate in explaining, at an individual level, how the AI system reached an outcome — a duty that goes beyond Chapter III of the AI Act and operationalises the Article 86 right of explanation contractually.
  3. Per Article 25 of Regulation (EU) 2024/1689,[54] a deployer or third party becomes the provider of a high-risk AI system if it (a) puts its name or trademark on the system, (b) makes a substantial modification within the meaning of Article 3(23), or (c) modifies the intended purpose of an AI system to make it high-risk.
  4. Per VendorBenchmark's February 2026 analysis of 50+ enterprise AI contracts,[63] only 31% include meaningful price-escalation caps at renewal, and 44% of companies discovered training-data clauses they did not negotiate; SLA uptime guarantees average 99.5% standard versus 99.9%+ negotiated.
  5. Per LawSnap's MSA AI Provisions Practitioner Guide,[4] the modification-exclusion kill switch in vendor IP indemnification — voiding indemnification when output is "modified" — means the work an enterprise actually publishes is rarely covered by any indemnification at all, because any human edit before publication counts as modification.
  6. Per the EU AI Office Code of Practice on Marking and Labelling of AI-Generated Content (second draft, 5 March 2026),[43][65] the multilayered marking requirement combines digitally signed metadata (C2PA), imperceptible watermarking, and fingerprinting or logging fallback, with rules taking effect on 2 August 2026.
  7. Per the General-Purpose AI Code of Practice (final, 10 July 2025),[9][67] Code signatories commit to providing technical documentation and related information to downstream providers within 14 calendar days of request, operationalising the Article 53 transparency duty contractually.
  8. Per Article 56(2) of Regulation (EU) 2024/1689 as analysed by AIActStack and Quantamix Solutions,[68] signing the GPAI Code of Practice creates a rebuttable presumption of conformity with the corresponding Chapter V obligations — shifting the burden of proof significantly in enforcement proceedings; non-signatories must comply through "other means" under Article 56(4) with closer regulatory scrutiny.
  9. Per Article 99 of Regulation (EU) 2024/1689,[65] non-compliance with the Article 50 marking and labelling obligations is subject to administrative fines of up to €15 million or 3% of total worldwide annual turnover, whichever is higher — placing the contractual marking warranty at the centre of vendor-passback indemnification design.

#Glossary

MCC-AI: the EU Public Buyers Community Model Contractual Clauses for the Procurement of AI, published in updated form on 5 March 2025 in High-Risk and Light versions with a Commentary.[11][10]

Article 25 re-qualification: the EU AI Act mechanism under which a distributor, importer, deployer, or third party becomes the provider of a high-risk AI system through rebranding, substantial modification, or purpose change — inheriting Article 16 provider obligations.[54][7]

Substantial modification (Article 3(23)): a change to an AI system after its placing on the market or putting into service that is not foreseen by the provider and either affects compliance with applicable requirements or modifies the intended purpose for which the system was assessed.[54][36]

GPAI Code of Practice: the voluntary AI Office instrument published 10 July 2025 by which providers of general-purpose AI models demonstrate compliance with Articles 53 and 55 — comprising Transparency, Copyright, and Safety & Security chapters.[9][67]

C2PA (Coalition for Content Provenance and Authenticity): the open-standard cryptographically signed manifest standard referenced by the AI Office Code of Practice on marking and labelling, used as the metadata layer of multilayered AI-content marking.[43][65]

MCC-AI Light asymmetry: the design choice to retain most high-risk supplier obligations in the Light version even though those obligations are not legally required for non-high-risk systems — improving trustworthiness of the procured system at the cost of supplier negotiability.[16][17]

14-day downstream-information clock: the GPAI Code of Practice commitment by signatories to provide technical documentation and information to downstream providers within 14 calendar days of request.[9][21]

Modification-exclusion kill switch: the standard vendor IP indemnification carve-out that voids coverage when AI output is modified — typically invoked when a customer makes any human edit before publication.[4]

#References


  1. Regulation (EU) 2024/1689 — Article 25 canonical text (artificialintelligenceact.com mirror). https://artificialintelligenceact.com/article-25-responsibilities-along-the-ai-value-chain/

  2. aiactblog.nl, "What is FRIA?" (2026-02-28) — Article 27 / 2 August 2026 effective-date anchor. https://www.aiactblog.nl/en/posts/fria-complete-guide-article-27-ai-act

  3. National Law Review, "Negotiating AI Agreements with Vendors." https://natlawreview.com/article/negotiating-ai-agreements-vendors

  4. LawSnap, "MSA AI Provisions — Practitioner Guide." https://lawsnap.com/contracts/msa-ai/

  5. Pertama Partners, "Key AI Contract Clauses: What to Negotiate" (Hauge, 2025-11-14). https://www.pertamapartners.com/insights/key-ai-contract-clauses-negotiate-avoid

  6. Secure Privacy, "GDPR Vendor Compliance & Article 28 DPA Guide" (2026-03-09). https://support.secureprivacy.ai/articles/how-your-dpo-manages-third-party-vendor-compliance/

  7. Stephenson Harwood, "The roles of the provider and deployer in AI systems and models" (2024-09-12). https://stephensonharwood.com/news/the-roles-of-the-provider-and-deployer-in-ai-systems-and-models

  8. Lexology / Mishcon de Reya, "Are you a 'Provider' or 'Deployer' of an AI System under the EU AI Act?" — Robinson, Collett (2024-06-11). https://www.lexology.com/library/detail.aspx?g=d25d7cea-5678-433f-a239-a1883499f4fa

  9. AI Office, "The General-Purpose AI Code of Practice." https://digital-strategy.ec.europa.eu/en/policies/gpai-code-practice

  10. EU Public Buyers Community, "Updated EU AI model contractual clauses now available" (2025-03-05). https://public-buyers-community.ec.europa.eu/communities/procurement-ai/news/updated-eu-ai-model-contractual-clauses-now-available

  11. EU Public Buyers Community, "Commentary — Model Contractual Clauses for the public procurement of AI (MCC-AI)" — Jeroen Naves, reviewers Anita Poort, Ivo Locatelli (Feb 2025). https://public-buyers-community.ec.europa.eu/system/files/2025-05/Commentary.pdf

  12. EU Public Buyers Community, "Updated EU AI model contractual clauses" resource page (2025-03-05). https://public-buyers-community.ec.europa.eu/communities/procurement-ai/resources/updated-eu-ai-model-contractual-clauses

  13. EU Public Buyers Community, "Translations of EU AI model contractual clauses now available in 24 languages" (2025-06-16). https://public-buyers-community.ec.europa.eu/communities/procurement-ai/news/translations-eu-ai-model-contractual-clauses-now-available-24

  14. Trowers & Hamlins, "The EU AI model contractual clauses: a comprehensive overview for UK legal practitioners" (March 2025). https://www.trowers.com/insights/2025/march/the-eu-ai-model-contractual-clauses-a-comprehensive-overview-for-uk-legal-practitioners

  15. Burges Salmon, "Update: Model clauses in EU public procurement of AI" — Tom Whittaker (2025-03-19). https://burges-salmon.com/articles/102k540/update-model-clauses-in-eu-public-procurement-of-ai

  16. Slaughter and May, "Commission updates Model Contractual Clauses for AI procurement" — Anton Nilsson. https://thelens.slaughterandmay.com/post/102kbhf/commission-updates-model-contractual-clauses-for-ai-procurement

  17. Inside Privacy / Covington, "EU's Community of Practice Publishes Updated AI Model Contractual Clauses" — Cooper, Gray, Choi (2025-04-07). https://www.insideprivacy.com/artificial-intelligence/eus-community-of-practice-publishes-updated-ai-model-contractual-clauses/

  18. Snellman Digital Compliance Tracker, "EU Updates Model Clauses on AI Procurement to Align with the AI Act" — Mindus Amini (2025-03-11). https://digitalcompliance.snellman.com/eu-updates-model-clauses-on-ai-procurement-to-align-with-the-ai-act/

  19. ICTLC Italy, "AI Act - Artificial Intelligence Provider: When You Are, and When You Become One" (2025-10-28). https://www.ictlc.com/ai-act-artificial-intelligence-provider/?lang=en

  20. Mishcon de Reya, "The General-Purpose Artificial Intelligence Code of Practice" (2025-07-22). https://www.mishcon.com/news/the-general-purpose-artificial-intelligence-code-of-practice-what-does-it-mean-for-me

  21. Taylor Wessing, "The final GPAI Code of Practice" (2025-07-29). https://www.taylorwessing.com/fr/insights-and-events/insights/2025/07/update-ai-act

  22. Legalithm, "Article 25 EU AI Act: Re-qualification as provider." https://www.legalithm.com/en/ai-act-guide/article-25

  23. OpenAI, "Data processing addendum - Feb 2024." https://openai.com/policies/feb-2024-data-processing-addendum/

  24. Anthropic, "Data Processing Addendum." https://www.anthropic.com/legal/data-processing-addendum

  25. Mistral AI, "Data Processing Addendum." https://legal.mistral.ai/terms/data-processing-addendum

  26. Andrew S. Bosin LLC, "Key Considerations for Evaluating AI Vendor Contracts (2026 Guide)" (2026-02-05). https://www.njbusiness-attorney.com/key-considerations-for-evaluating-ai-vendor-contracts-complete-2026-legal-checklist/

  27. Mondaq / Gouchev, "10 Critical Clauses For AI Vendor Contracts" (2025-11-28). https://www.mondaq.com/unitedstates/new-technology/1710750/10-critical-clauses-for-ai-vendor-contracts

  28. AIActStack, "Article 27 EU AI Act: FRIA Explained" (2026-04-20) — Article 26 + 27 cross-reference. https://aiactstack.com/article/art-27

  29. Atonement Licensing, "AI Procurement Guide 2026" (2026-03-26). https://atonementlicensing.com/blog/ai-procurement-guide/

  30. Atonement Licensing, "Six non-negotiable clauses." https://atonementlicensing.com/blog/ai-procurement-guide/

  31. Pertama Partners — Output ownership clause language. https://www.pertamapartners.com/insights/key-ai-contract-clauses-negotiate-avoid

  32. Redress Compliance, "AI Procurement Checklist: 20 Questions" — Filipsson (2025-07-15). https://redresscompliance.com/ai-procurement-checklist-20-questions-before-signing.html

  33. VendorBenchmark, "Enterprise AI Contract Terms" — Filipsson (2026-02-19). http://www.vendorbenchmark.com/blog/enterprise-ai-contract-terms-benchmark.html

  34. aiactblog.nl, "FRIA for municipalities: public sector guide" (2026-04-08) — Article 26(6) logging cross-reference. https://www.aiactblog.nl/en/posts/fria-public-sector-municipalities-eu-ai-act

  35. ovidiusuciu.com, "EU AI Act Article 25: AI Value Chain Responsibilities Explained" (2026-03-25). https://ovidiusuciu.com/eu-ai-act/eu-ai-act-article-25-ai-value-chain-responsibilities/

  36. NicFab, "AI Agents: When a Deployer Becomes a Provider" (2026-04-13). https://www.nicfab.eu/en/posts/provider-ai-agents/

  37. artificialintelligenceact.eu, "Article 27: Fundamental Rights Impact Assessment for High-Risk AI Systems." https://artificialintelligenceact.eu/article/27

  38. euaiactchecklist.com, "FRIA Template — Article 27 (2026)." https://euaiactchecklist.com/eu-ai-act-fria-template.html

  39. Molin AI, "Data Processing Addendum (DPA) — AI-Related Assistance clause." https://docs.molin.ai/legal/data-processing-addendum

  40. aiactblog.nl, "GPAI procurement: model contract clauses 2025" (2025-09-04). https://www.aiactblog.nl/en/posts/procurement-eu-ai-act-gpai-code-contracts

  41. AI Office, "Guidelines for providers of general-purpose AI models." https://digital-strategy.ec.europa.eu/en/policies/guidelines-gpai-providers

  42. AI Office, "Code of Practice on marking and labelling of AI-generated content." https://digital-strategy.ec.europa.eu/en/policies/code-practice-ai-generated-content

  43. AI Office, "Commission publishes second draft of Code of Practice on Marking and Labelling of AI-generated content." https://link.europa.eu/QW4wNh

  44. Cooley LLP / JDSupra, "EU AI Act: First Draft Code of Practice on Transparency and Watermarking Released." https://www.jdsupra.com/legalnews/eu-ai-act-first-draft-code-of-practice-5164956/

  45. notraced (Shephard), "AI-generated content labeling: what Article 50 requires and how to implement it" (2026-04-01). https://notraced.com/articles/ai-generated-content-labeling

  46. Andrew S. Bosin LLC — termination + IP clauses. https://www.njbusiness-attorney.com/key-considerations-for-evaluating-ai-vendor-contracts-complete-2026-legal-checklist/

  47. AIActStack, "GPAI Code of Practice Guide" (2026-04-20). https://aiactstack.com/guide/gpai-code-of-practice

  48. Atonement Licensing — exit & portability rights. https://atonementlicensing.com/blog/ai-procurement-guide/

  49. Tzafon, "Data Processing Addendum" (effective 2026-01-01). https://tzafon.ai/legal/data-processing-addendum

  50. BEAMON, "Data Processing Addendum (Germany)" (2026-03-16). https://beamon.ai/legal-terms/data-processing-addendum-germany/

  51. Digital Policy Alert, "EU Model Contractual Clauses to support high-risk AI Procurement" (2025-03-05). https://digitalpolicyalert.org/change/13605-eu-model-contractual-clauses-to-support-artificial-intelligence-procurement

  52. EU AI Act Service Desk — Single Information Platform reference. https://ai-act-service-desk.ec.europa.eu/en

  53. Burges Salmon — five clause categories. https://burges-salmon.com/articles/102k540/update-model-clauses-in-eu-public-procurement-of-ai

  54. euai.app, "Article 25: Responsibilities along the AI value chain" — Paul McCormack. https://euai.app/article/25

  55. Secure Privacy — Article 28(3) ten-clause minimum. https://support.secureprivacy.ai/articles/how-your-dpo-manages-third-party-vendor-compliance/

  56. OpenAI, "Supplier Data Processing Addendum." https://openai.com/policies/supplier-dpa

  57. Mistral AI DPA — controller-for-training counter-pattern. https://legal.mistral.ai/terms/data-processing-addendum

  58. Pertama Partners — opt-out window practical analysis. https://www.pertamapartners.com/insights/key-ai-contract-clauses-negotiate-avoid

  59. Elnora AI, "Data Processing Agreement — No Training on Customer Data." https://elnora.ai/dpa

  60. BBos, "Data Processing Addendum — BBos's Own Processing." https://bbos.ai/dpa

  61. gdprregister.eu, "EU AI Act Deployer vs Provider: Compliance Guide" (2026-04-15). https://www.gdprregister.eu/articles/eu-ai-act-deployer-vs-provider/

  62. Molin AI — AI-related-assistance clause. https://docs.molin.ai/legal/data-processing-addendum

  63. VendorBenchmark — vendor-tier benchmark. http://www.vendorbenchmark.com/blog/enterprise-ai-contract-terms-benchmark.html

  64. Legalithm, "AI Act Transparency: Article 50 and Deepfake Rules" (2026-02-17). https://www.legalithm.com/en/blog/ai-act-transparency-obligations-article-50-deepfake-labeling

  65. ForensicMark, "EU AI Act Article 50: Watermarking Compliance Guide (2026)" (2026-03-15). https://forensicmark.com/blog/eu-ai-act-watermarking-compliance/

  66. Cooley LLP — Code of Practice analysis PDF. https://www.cooley.com/api/downloadpdf?contextItemId=%7B2D9F3273-1560-49F0-BCFA-40162BD462E3%7D

  67. code-of-practice.ai — Final GPAI Code of Practice text. http://code-of-practice.ai/

  68. Quantamix Solutions, "GPAI Code of Practice: EU AI Act 2025" — Harish Kumar. https://quantamixsolutions.com/insights/gpai-code-of-practice-eu-2025/

  69. AI Office, "Signing the General-Purpose AI code of practice" FAQ. https://digital-strategy.ec.europa.eu/en/faqs/signing-general-purpose-ai-code-practice
