AI copyright litigation is no longer a theoretical risk that Australian boards can defer to legal counsel and revisit in two years. Getty Images, the New York Times, Universal Music Group and a growing list of rights holders have filed suits against AI developers for training on copyrighted material without licence. Several of those cases are progressing through US federal courts with meaningful discovery underway. Australian enterprises asking whether deploying generative AI exposes the organisation to liability are asking the right question. The answer depends heavily on which product is in use and what protections the vendor has contractually committed to.
Microsoft's position in this landscape is materially different from most AI vendors, and the difference is worth understanding before a board approves a rollout or a CISO signs a risk register line.
What the live litigation actually covers
The cases in motion as of May 2026 centre on two distinct theories of liability. The first is training liability, the claim that AI developers ingested copyrighted material to train their models without authorisation. The second is output liability, the claim that AI systems generate outputs that reproduce, closely paraphrase or substantially substitute for protected works.
Training liability sits primarily with the AI developer, not the enterprise deploying the tool. An Australian organisation that licenses Microsoft 365 Copilot did not train the underlying GPT models and does not carry that exposure as a direct legal actor. Output liability is different. If an employee uses an AI tool to generate content that reproduces protected material, and that content is then published or distributed, the organisation's exposure depends on whether the AI vendor has indemnified enterprise customers against those claims. That distinction matters enormously in practice.
Microsoft's Copilot Copyright Commitment
Microsoft introduced the Copilot Copyright Commitment in September 2023 and has expanded it through subsequent commercial terms. The commitment covers commercial customers using Microsoft 365 Copilot and other covered Microsoft AI products. The substance is direct: if a customer is sued by a third party alleging copyright infringement in content generated by a covered Microsoft AI tool, and the customer was using the product as designed and within the applicable terms, Microsoft will defend the customer and pay any adverse judgement or settlement.
This is a named contractual indemnity, not a general assurance or a marketing statement. It is enforceable under the Microsoft commercial agreement that governs the organisation's Microsoft 365 relationship. Australian enterprises on an Enterprise Agreement, Microsoft Customer Agreement or Cloud Solution Provider arrangement have this protection available through the existing commercial terms, subject to the standard exclusions.
What the commitment covers and what it does not
Reading the actual scope is important. The Copilot Copyright Commitment covers outputs from Copilot features in Microsoft 365 applications: Copilot in Word, Excel, Outlook, Teams, PowerPoint and OneNote, and the M365 Copilot Chat experience. It also covers Microsoft Bing generative search for commercial customers. It does not automatically extend to all custom agents built in Copilot Studio, content retrieved through custom connectors, or outputs from models deployed directly through Azure OpenAI Service without the Copilot commercial product layer.
- Covered: content generated through Microsoft 365 Copilot in-app experiences (Word, Outlook, Teams, Excel, PowerPoint, OneNote, Copilot Chat) when used as designed.
- Covered: Microsoft Bing commercial generative features for commercial customers.
- Requires review: custom Copilot Studio agents that pull from sources outside the Microsoft Graph; the indemnity scope narrows where the agent is ingesting external third-party content at runtime.
- Not automatically covered: Azure OpenAI Service API calls made directly by the organisation's developers without the Copilot product layer. This is direct API usage and the liability position is different.
- Not covered: customer-generated content that was itself infringing before it was fed into Copilot; the indemnity assumes the inputs the customer provides are content the customer has the right to use.
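The coverage boundaries above can be expressed as a simple triage helper for mapping an organisation's AI usage surfaces before deployment. This is an illustrative sketch only: the surface names and categories are paraphrased from the list above, and the real scope is defined by the Microsoft commercial agreement, not by this mapping.

```python
# Illustrative triage of AI usage surfaces against the Copilot Copyright
# Commitment scope, paraphrasing the coverage list above. The keys and
# labels are assumptions for illustration; this is not a legal reading
# of the Microsoft commercial terms.

COVERAGE = {
    # Word, Excel, Outlook, Teams, PowerPoint, OneNote, Copilot Chat
    "m365_copilot_in_app": "covered",
    "bing_commercial_generative": "covered",
    # Agents pulling content from outside the Microsoft Graph at runtime
    "copilot_studio_external_sources": "requires_review",
    # Direct developer API calls without the Copilot product layer
    "azure_openai_direct_api": "not_automatically_covered",
    # Inputs the customer had no right to use in the first place
    "infringing_customer_inputs": "not_covered",
}

def triage(surface: str) -> str:
    """Return the indemnity posture for a usage surface.

    Anything not explicitly mapped defaults to manual review rather
    than an assumption of coverage.
    """
    return COVERAGE.get(surface, "requires_review")

if __name__ == "__main__":
    for surface in ("m365_copilot_in_app",
                    "azure_openai_direct_api",
                    "internal_rag_prototype"):
        print(f"{surface} -> {triage(surface)}")
```

The defensive default matters: an unmapped usage pattern should surface for review, not silently inherit the covered posture.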
What Australian enterprises are actually carrying
For an Australian enterprise running Microsoft 365 Copilot within normal commercial terms, the copyright liability exposure the board should be concerned about is narrow. The training liability sits with Microsoft, covered by Microsoft's own legal defence. The output liability for in-scope Copilot usage is indemnified under the Copilot Copyright Commitment. The residual exposure is the gap between the Commitment's coverage scope and the organisation's actual usage pattern.
Where Frontrow sees risk materialise in Australian tenants is not in the published Copilot products but in adjacent usage patterns that fall outside the Commitment's scope. Developers calling Azure OpenAI APIs directly to build internal tools. Staff copying Copilot outputs into downstream products, presentations or publications without understanding whether the content was retrieved from a grounded internal source or generated from the model's training weights. Custom Copilot Studio agents that ingest third-party news feeds, contract databases or legal research services at query time. Each of these patterns creates a usage posture that requires its own risk assessment separate from the Microsoft 365 Copilot indemnification.
The governance questions boards should be asking
- Has the organisation documented which AI products and product surfaces are covered by the Copilot Copyright Commitment in the current Microsoft commercial agreement? The commitment is not automatic: it requires the product to be a covered product and the usage to be within the terms.
- Does the AI acceptable-use policy distinguish between Copilot in-app usage (covered) and direct Azure OpenAI API usage by developers (not automatically covered)?
- Are custom Copilot Studio agents that ingest external third-party content at runtime reviewed against the indemnity scope before deployment?
- Is there a documented process for employees to understand that AI-generated content used in external publications should be reviewed before release, regardless of the indemnification posture?
- Has the organisation's cyber and professional indemnity insurance broker been briefed on the AI usage pattern to confirm coverage alignment? The Copilot Copyright Commitment is complementary to, not a substitute for, the organisation's own insurance position.
The vendor comparison question
Not every AI vendor in the market has published an equivalent named commercial indemnity for copyright liability. Some have published public statements. Some have pointed to general IP indemnification clauses in commercial terms that were written before generative AI existed. Australian enterprises evaluating AI procurement in 2026 should request explicit written confirmation of the vendor's copyright indemnification scope as part of the commercial review, not as an afterthought. For Microsoft 365 Copilot, the published Copilot Copyright Commitment is the starting point of that conversation. For other tools under evaluation, the question needs to be asked and answered in writing before signing.
"The Copilot Copyright Commitment is a real contractual indemnity that materially changes the enterprise risk position. The gaps in its coverage are predictable and manageable if the usage pattern is mapped before deployment, not after."
Try it
Assess AI readiness before deployment
The AI readiness check covers governance, identity, data classification and usage policy, the four areas where gaps most commonly put the copyright indemnification at risk.
Score each dimension from 1 to 5
How ready is your organisation for AI — really?
Five dimensions. Pick the statement closest to the truth for your business today. No wrong answers.
- Data readiness: is your data in a shape AI can actually reason over?
- Governance & security: identity, permissions, DLP, audit — the safety rails for AI.
- Workflow integration: where will AI actually get used in the business?
- Adoption capability: will your team actually use it when it arrives?
- Capacity to invest: can you actually fund and run an AI program right now?