I get this question two or three times a week from Australian IT leaders who've been asked it by their audit committee: we're aligning to the ACSC Essential Eight — does that mean Copilot is safe to deploy? The short, honest answer is that it's partly covered. The Essential Eight was written before Microsoft 365 Copilot existed, and there are specific controls it simply doesn't touch.
Plenty of vendors will tell you "Essential Eight ML2 = Copilot-ready." That's wrong, and in some cases actively dangerous. The Essential Eight handles a specific slice of the problem — prevention of commodity compromise. Copilot introduces a different risk class: information surfacing across data your users were already over-permissioned on. The strategies overlap. They don't fully coincide.
What the Essential Eight does cover, in a Copilot context
Of the eight ACSC strategies, five are directly relevant to a Copilot deployment. Getting these right is necessary, and for most of them Copilot inherits the benefit automatically if you're already at ML2.
- Multi-factor authentication (Strategy 7): Copilot sign-in uses the same Entra ID identity as everything else in the tenant. MFA on all sign-ins, phishing-resistant on privileged roles, is table stakes before Copilot goes live.
- Restrict administrative privileges (Strategy 5): A compromised Global Admin can turn off any Copilot-adjacent control instantly. PIM, separated admin accounts, no web/email on privileged identities — all directly reduce Copilot blast radius.
- Patch applications + patch operating systems (Strategies 2 and 6): Copilot clients ride on Office, Edge, Teams and Windows. An unpatched endpoint undermines the security stack that's meant to observe Copilot activity.
- Regular backups (Strategy 8): Copilot can, through agents or integrations, modify files, delete content and overwrite data. A tested restore regime is the difference between a bad day and a bad year.
Application control (Strategy 1), Office macro configuration (Strategy 3) and user application hardening (Strategy 4) are relevant but less directly. They reduce the likelihood of an endpoint compromise that could then be used to exfiltrate Copilot-surfaced data, but they don't touch Copilot itself.
What the Essential Eight does not cover, and Copilot needs
This is the part that surprises people. Six control domains sit entirely outside the Essential Eight, and every one of them is load-bearing for a safe Copilot rollout. If you're relying on ML2 alone, these are the gaps.
1. Over-sharing and tenant-wide permissions hygiene
Copilot surfaces whatever your users can already find. If a SharePoint site is shared with 'anyone with the link' from a 2022 project, Copilot can reason over that site's content and summarise it in response to a natural-language query. The Essential Eight says nothing about SharePoint permission models, Teams guest access, or the long tail of historical shares you haven't cleaned up. I have never walked into a tenant Copilot rollout audit and found the oversharing story in good shape. Never once.
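The triage itself is mechanical once you have a sharing report exported: filter for the link scopes Copilot can reach the widest audience through. A minimal sketch, with the CSV columns (SiteUrl, LinkScope, LastActivity) as illustrative assumptions rather than the schema of any specific Microsoft report:

```python
import csv
import io

# Hypothetical sharing-links export, modelled as CSV text. Column names are
# assumptions for illustration; check them against your actual report.
export = """SiteUrl,LinkScope,LastActivity
https://contoso.example/sites/Payroll,Anyone,2022-08-01
https://contoso.example/sites/Marketing,Organization,2024-04-10
https://contoso.example/sites/Archive2022,Anyone,2022-11-30
"""

def risky_sites(csv_text: str) -> list[str]:
    """Sites carrying 'Anyone' links: the long tail Copilot can reason over."""
    rows = csv.DictReader(io.StringIO(csv_text))
    return sorted({row["SiteUrl"] for row in rows if row["LinkScope"] == "Anyone"})

for site in risky_sites(export):
    print(site)
```

The point is not the ten lines of Python; it is that the remediation queue is derivable from data you already hold, so "we haven't got to oversharing yet" is a prioritisation decision, not a tooling gap.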
2. Sensitivity labels and data classification
Copilot respects Microsoft Purview sensitivity labels. Labels carry permission policies and encryption with them, and Copilot honours both. But the Essential Eight has no requirement for sensitivity labels, no required taxonomy, no auto-labelling posture. If you haven't deployed Purview labels, Copilot treats all content as equally reasonable to surface — which is the wrong default for payroll spreadsheets.
3. Data Loss Prevention on Copilot-relevant surfaces
Copilot can be prompted to generate content that includes sensitive information, paste it into a Teams message, or email it out. Purview DLP scoped to Copilot interactions is a specific configuration — it's not a default, and it isn't an Essential Eight strategy. Australian organisations handling TFNs, bank account numbers, Medicare details and driver's licence data need DLP rules that catch both the input (prompts containing regulated data) and the output (responses that surface it).
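TFNs are a good example of why detection rules need more than a nine-digit regex: the ATO publishes a weighted checksum for tax file numbers (multiply the nine digits by the weights 1, 4, 3, 7, 5, 8, 6, 9, 10 and require the sum to be divisible by 11), and validating it sharply reduces false positives in any custom rule you write. A sketch of that check:

```python
import re

# Published ATO weighting for the nine-digit tax file number checksum.
TFN_WEIGHTS = [1, 4, 3, 7, 5, 8, 6, 9, 10]

def is_valid_tfn(candidate: str) -> bool:
    """True if candidate is nine digits passing the TFN weighted checksum."""
    digits = re.sub(r"[ -]", "", candidate)  # strip common separators
    if not re.fullmatch(r"\d{9}", digits):
        return False
    total = sum(int(d) * w for d, w in zip(digits, TFN_WEIGHTS))
    return total % 11 == 0

# 123456782 is a commonly cited checksum-valid example value, not a real TFN.
print(is_valid_tfn("123 456 782"))  # True
print(is_valid_tfn("123456789"))   # False
```

Purview's built-in Australian sensitive info types are the place to start; a checksum like this is what you'd lean on when tuning a custom type for your own data patterns.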
4. Agent governance and Copilot Studio controls
Copilot Studio lets business units build agents — small, purposeful Copilots that ground on specific data sources and can take actions. Who can publish agents, which data sources they can ground on, and how agent outputs are audited are all tenant-configurable in Copilot Studio and in the broader Agent Governance controls announced over the past year. None of this is in the Essential Eight. Australian regulators have not yet written a framework specifically for agent governance. You are on your own here, and it matters more every month.
5. Prompt and response auditability
Microsoft logs Copilot interactions in the unified audit log, but only if you've enabled it and you're retaining the right records. For a regulated organisation, you need a specific Copilot prompt/response audit configuration, a retention policy that survives eDiscovery, and a workflow for reviewing flagged events. The Essential Eight has nothing on audit of application usage at this granularity.
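Once the audit log is enabled, the review workflow reduces to filtering exported records for the Copilot operation type. A minimal sketch, with the records as inline dicts standing in for the JSON in a real export's AuditData column; the operation name ("CopilotInteraction") and field names should be verified against your own tenant's export before you build SIEM queries on them:

```python
# Hypothetical exported unified-audit-log records. Field names and the
# "CopilotInteraction" operation string are assumptions to verify against
# a real export from your tenant.
records = [
    {"CreationTime": "2024-05-01T03:12:00", "Operation": "CopilotInteraction",
     "UserId": "alice@contoso.example", "Workload": "Copilot"},
    {"CreationTime": "2024-05-01T03:15:00", "Operation": "FileAccessed",
     "UserId": "bob@contoso.example", "Workload": "SharePoint"},
]

def copilot_events(records: list[dict]) -> list[dict]:
    """Keep only Copilot interaction records, oldest first."""
    hits = [r for r in records if r.get("Operation") == "CopilotInteraction"]
    return sorted(hits, key=lambda r: r["CreationTime"])

for event in copilot_events(records):
    print(event["CreationTime"], event["UserId"])
```

If a filter this simple returns nothing in your tenant, that is itself the finding: either the audit configuration is off or the retention has already rolled the records away.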
6. Responsible AI and content safety controls
Copilot has native content filtering, hallucination guards and safety classifiers. These are on by default but configurable. Whether you've aligned them to your risk appetite, whether you've tested the filters against your actual data patterns, and whether you have a process for incident review when a filter misses — none of this lives in the Essential Eight. It lives in the emerging AI Safety Standard and your own governance.
What a Copilot-safe control set actually looks like
For an Australian mid-market business at Essential Eight ML2 that's about to deploy Copilot, here's the practical delta you need on top. Six additional domains, each with a specific Microsoft capability. None of them is optional.
- 1. Oversharing audit before go-live: Microsoft Defender for Cloud Apps tenant-wide oversharing report + SharePoint Advanced Management oversharing inspector. Remediate the worst 20 percent of sites before a single Copilot seat is activated.
- 2. Sensitivity label taxonomy and rollout: Four-label Purview taxonomy with default 'Internal' and auto-labelling on TFN/PII/financial sensitive info types. Pilot group first, 30-day simulation mode, then tenant-wide.
- 3. Copilot-scoped DLP: Purview DLP rules with Copilot as an in-scope location. Block sharing of 'Confidential' or higher through Copilot-generated content. Log overrides with justification.
- 4. Agent governance baseline: Copilot Studio set to 'admin-controlled' publishing. Allowed data sources explicitly listed. Agent creation logged. Quarterly review.
- 5. Prompt/response audit retention: Unified audit log retention extended to one year or more (Audit Standard retains 180 days; E5's Audit Premium defaults to one year; most regulated orgs need more). Specific log queries in your SIEM for Copilot events.
- 6. Responsible AI posture: Content filter configuration documented. Filter misses captured as incidents with a review workflow. Human-in-the-loop check for high-risk agent actions.
The audit committee answer
If you're asked at the next risk and audit committee whether Essential Eight compliance is sufficient for Copilot, the honest answer is: Essential Eight is necessary and not sufficient. The Essential Eight protects the delivery surface Copilot sits on. It does not protect the data-retrieval, agent-action or prompt-audit surfaces that Copilot introduces.
The operational answer: hit ML2, then deploy the six controls above, then turn on Copilot. In that order. Not a Copilot pilot launched at ML1 with a plan to get to ML2 later. That inversion is where most of the Copilot incidents I've seen in Australian organisations originated.
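The ordering argument can be stated as a go-live gate: Copilot enablement is a function of maturity level and the six controls, not a standalone decision. A sketch, with control names as shorthand for this article's list rather than product settings:

```python
# The six Copilot-specific control domains from this article, as shorthand
# identifiers; these are not product setting names.
REQUIRED_CONTROLS = [
    "oversharing_audit", "sensitivity_labels", "copilot_dlp",
    "agent_governance", "audit_retention", "responsible_ai",
]

def copilot_go_live(maturity_level: int, controls_done: set[str]) -> bool:
    """True only when ML2 is reached AND every extra control is in place."""
    missing = [c for c in REQUIRED_CONTROLS if c not in controls_done]
    return maturity_level >= 2 and not missing

print(copilot_go_live(1, set(REQUIRED_CONTROLS)))  # ML1 pilot: blocked
print(copilot_go_live(2, set(REQUIRED_CONTROLS)))  # correct order: go
```

Writing the gate down, even this informally, gives the audit committee a single artefact to test your rollout sequence against.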
Try it
Baseline the Essential Eight part first
Run the Essential Eight readiness tool to know exactly where you are before you layer Copilot-specific controls on top. The tool maps your current posture to Microsoft 365 tooling that closes each gap.
- 01 Application control: only approved applications can execute on workstations and servers.
- 02 Patch applications: internet-facing apps, browsers, Office and PDF readers patched promptly.
- 03 Microsoft Office macros: macros disabled unless from trusted locations and signed by a trusted publisher.
- 04 User application hardening: web browsers and productivity apps hardened against the most common attacks.
- 05 Restrict administrative privileges: admin accounts limited, separated and reviewed; the crown jewels of the tenant.
- 06 Patch operating systems: operating system patches applied on a schedule that matches the risk.
- 07 Multi-factor authentication: MFA everywhere that matters (privileged accounts, remote access, important data).
- 08 Regular backups: backups of important data, configuration and software, with restores you have actually tested.
Why this matters beyond Copilot
The broader lesson sits at the frontier of Australian cyber governance. Compliance frameworks codify known risks, and they update slower than the technology they govern. The Essential Eight was revised in 2023. The first production Copilot licences shipped in 2023. The strategies have not yet been rewritten to reflect agentic AI. Until they are — and the ACSC does not telegraph when — Australian organisations carry the policy gap themselves.
The right posture is to treat Essential Eight as a floor, not a ceiling, and maintain a short written addendum for AI-specific controls that your risk committee reviews quarterly. The addendum isn't complicated. It's just not something most organisations currently have.
If you'd like our take on your specific Copilot rollout sequence — what to turn on first, what to defer, and where the audit committee questions will land — book a cyber review via the contact page. We'll walk your tenant with you and put the specific delta in writing.