Frontrow Technology
Guide

Cyber Security · AI Governance

ISO 42001, Voluntary AI Safety Standard and Essential Eight — what Australian organisations actually need

Three frameworks Australian boards are being asked about: the international AI management standard ISO 42001, the Department of Industry's Voluntary AI Safety Standard, and the ASD's Essential Eight. Frontrow's plain-English guide to what each one is, where they overlap, and which one to start with.

Sam Williams · 25 April 2026 · 8 min read

Three framework names are appearing in Australian board papers more often this year than last — ISO 42001, the Voluntary AI Safety Standard, and the Essential Eight. They sit on different shelves of the same library. The Essential Eight is the cyber baseline. The Voluntary AI Safety Standard is the AI governance baseline. ISO 42001 is the international AI management system standard. The three are complementary, not competing, and the question Frontrow keeps getting from non-executive directors and risk committees is which one to start with for an Australian mid-market organisation in 2026.

Here is the working answer Frontrow takes into board meetings, with the reasoning. The short version is that the order is Essential Eight first, then the Voluntary AI Safety Standard, then ISO 42001 if certification matters for a procurement, regulator or customer reason. The long version is below.

What each framework actually is

The Essential Eight is the Australian Signals Directorate's prioritised set of eight cyber mitigation strategies — application control, patching applications, configuring Microsoft Office macro settings, user application hardening, restricting administrative privileges, patching operating systems, multi-factor authentication, and regular, tested backups. It is scored against four maturity levels (ML0 through ML3). ML2 is the practical baseline for most Australian industries in 2026, and ML3 is the trajectory for critical infrastructure, finance and defence. It is concrete, technical and well documented through the Australian Cyber Security Centre.
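The scoring mechanics matter more than they look: under the common reading of ASD's assessment model, the weakest strategy caps the overall maturity level, so one lagging control holds the whole tenant down. The sketch below illustrates that composition; the strategy keys and scoring function are illustrative, not ASD tooling.

```python
# Illustrative sketch: overall Essential Eight maturity is capped by
# the weakest of the eight strategies (ML0 through ML3).
STRATEGIES = [
    "application_control", "patch_applications", "office_macros",
    "user_app_hardening", "restrict_admin_privileges",
    "patch_operating_systems", "multi_factor_auth", "regular_backups",
]

def overall_maturity(scores: dict[str, int]) -> int:
    """Return the tenant-wide maturity level: the minimum per-strategy score."""
    missing = [s for s in STRATEGIES if s not in scores]
    if missing:
        raise ValueError(f"unscored strategies: {missing}")
    return min(scores[s] for s in STRATEGIES)

# One lagging strategy holds an otherwise-ML2 tenant at ML1.
scores = dict.fromkeys(STRATEGIES, 2)
scores["application_control"] = 1
print(overall_maturity(scores))  # 1
```

The practical consequence for the 90-day plan is that effort should go to the lowest-scoring strategy first, not to pushing an already-strong one to ML3.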

The Voluntary AI Safety Standard was published by the Department of Industry, Science and Resources in August 2024. It defines ten guardrails that apply to organisations across the AI supply chain — accountability and governance, risk management, data governance, testing and monitoring, human oversight, end-user transparency, contestability, supply-chain transparency, record keeping, and stakeholder engagement. It is voluntary today and has been signalled as the design template for any future mandatory regime in Australia. It explicitly references the Australian Privacy Principles, the Notifiable Data Breaches scheme and the Essential Eight as part of the surrounding regulatory landscape.

ISO/IEC 42001:2023 is the international standard that defines a management system for artificial intelligence. It is structured the same way as ISO 27001 for information security or ISO 9001 for quality — a Plan, Do, Check, Act loop with documented controls, internal audit, and external certification. Where the Voluntary AI Safety Standard tells an organisation what good AI governance looks like, ISO 42001 provides the management-system scaffolding to operate it on a continuous basis with the option of independent certification.

Where the three overlap

The overlap is real and worth seeing clearly. The Voluntary AI Safety Standard cites ISO 42001 as one of its two main international alignment points (the other is the United States NIST AI Risk Management Framework). It also cites the Essential Eight as the cyber baseline an organisation should already be running before it operates AI at any scale. Inside the ten guardrails, Guardrail 3 (data governance and AI system protection) lands directly on top of cyber controls that the Essential Eight already covers — patching, MFA, privilege restriction, and the rest. An organisation at Essential Eight ML2 has done a meaningful chunk of the cyber work the Voluntary AI Safety Standard asks for, before it has read a word of the standard itself.
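The Guardrail 3 overlap can be made concrete as a control mapping. The mapping below is a hypothetical illustration, not an official crosswalk published by either framework: it tags which Essential Eight controls double as Guardrail 3 evidence, so an ML2 tenant can see what it can reuse before writing anything new.

```python
# Hypothetical mapping (illustrative only, not an official crosswalk):
# Essential Eight controls that also evidence Guardrail 3 of the
# Voluntary AI Safety Standard (data governance and AI system protection).
GUARDRAIL_3_TO_E8 = {
    "patch_applications": "patched apps around the AI estate",
    "patch_operating_systems": "patched hosts running AI workloads",
    "multi_factor_auth": "access control to AI systems and data",
    "restrict_admin_privileges": "privileged access to models and data",
    "regular_backups": "recoverability of data and configuration",
}

def reusable_evidence(e8_controls_in_place: set[str]) -> list[str]:
    """Essential Eight controls already running that can be cited as
    Guardrail 3 evidence, without new work."""
    return sorted(c for c in GUARDRAIL_3_TO_E8 if c in e8_controls_in_place)

print(reusable_evidence({"multi_factor_auth", "patch_applications",
                         "application_control"}))
# ['multi_factor_auth', 'patch_applications']
```

The same filtering idea scales to a real register: each cyber control carries a list of the guardrails it evidences, and the AI governance gap assessment starts from what is left over.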

The non-overlapping pieces are where each framework adds something the others do not. The Essential Eight adds nothing about model evaluation, contestability, transparency or stakeholder engagement, because cyber baselines are not the right shelf for those concepts. The Voluntary AI Safety Standard adds nothing about cyber controls below the data-governance level, because it assumes those are running. ISO 42001 adds the management system loop that turns either of the other two from a one-off project into a documented, audited, continuously improved capability.

Where each one is the right starting point

Start with Essential Eight if the cyber baseline is anywhere below ML2. The Voluntary AI Safety Standard explicitly asks an organisation to be running cyber controls that map to the Essential Eight, and starting AI governance work on top of an unhardened tenant is a known-bad pattern. Frontrow's earlier note on the 90-day plan to ML2 is the cheapest, fastest way to clear this gate.

Start with the Voluntary AI Safety Standard if the cyber baseline is at ML2 or stronger and the organisation has live AI use of any consequence — Microsoft 365 Copilot deployed to staff, agents in Copilot Studio or Foundry, third-party AI tools in regulated workflows. The ten guardrails are the right governance layer for an organisation operating AI in 2026, and they read well at board level. The standard itself is voluntary today, but the way it is being written into procurement and assurance conversations means the organisations that adopt it now are the ones that will not have to scramble in the next 12 to 18 months.

Start with ISO 42001 only if external certification is needed for a specific procurement, regulator or customer requirement, and the underlying controls (cyber baseline, AI governance) are already in place. ISO 42001 is the management system on top of the substance. Going for certification before the substance is there usually leads to a paper-only management system that the next surveillance audit unwinds. The international standard is the right destination for organisations that already do most of what it requires.

The pragmatic 2026 order

  1. Land Essential Eight at Maturity Level 2 inside 90 days. Most Microsoft 365 tenants can do this with the licensing they already have. This is the cyber baseline the AI work then sits on top of.
  2. Adopt the Voluntary AI Safety Standard as the AI governance frame within the next 12 months. Run a gap assessment against the ten guardrails, prioritise the gaps, and stand up the missing pieces (AI policy, model register, human oversight design, transparency notices to staff and customers).
  3. Decide on ISO 42001 certification only if there is a named external driver — a customer is asking for it, a regulator is signalling it, a procurement is gating on it. If the answer is no, the management-system discipline of ISO 42001 is still worth borrowing from, without going for certification.
  4. Run the three as one program, not three. The artefacts overlap. The same risk register feeds the Essential Eight, the Voluntary AI Safety Standard and ISO 42001. The same audit log evidences all three. The same governance committee owns all three. Splitting them across three workstreams duplicates effort and confuses the board.
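The "one program, not three" point can be sketched as a single register whose entries are tagged with the frameworks they evidence, so each framework's report is a filtered view rather than a second document. The fields and framework tags below are illustrative assumptions, not a prescribed schema from any of the three frameworks.

```python
# Sketch of a single control register serving all three frameworks.
# Tags: "E8" (Essential Eight), "VAISS" (Voluntary AI Safety Standard),
# "ISO42001" (ISO/IEC 42001). Fields are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class RegisterEntry:
    control: str
    owner: str
    frameworks: set[str] = field(default_factory=set)

REGISTER = [
    RegisterEntry("MFA on all privileged accounts", "IT Ops",
                  {"E8", "VAISS", "ISO42001"}),
    RegisterEntry("Model register with human-oversight owner", "AI Lead",
                  {"VAISS", "ISO42001"}),
    RegisterEntry("Tested restores of important data", "IT Ops",
                  {"E8", "VAISS"}),
]

def view(framework: str) -> list[str]:
    """Per-framework report: a filtered view of the one register."""
    return [e.control for e in REGISTER if framework in e.frameworks]

print(view("E8"))  # the cyber-baseline slice of the same register
```

One entry, three audiences: the MFA control above satisfies an Essential Eight assessor, a Guardrail 3 gap assessment and an ISO 42001 auditor from the same row, which is the duplication the fourth step is designed to avoid.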
"The boards that get this right run one program, not three. Cyber, AI governance and management-system discipline are different languages for the same risk surface, and the artefacts that satisfy one usually satisfy the other two with light editing."
Sam Williams · Investor & Executive Consultant

Try it

Score the cyber baseline before stacking AI governance on top

Twelve questions, an ML1, ML2 and ML3 score, and the prioritised gap list. The starting point Frontrow asks every Australian board to land before the AI governance conversation begins.


Where are you on the Essential Eight — honestly?

Eight strategies. Four levels each. Pick the statement closest to your reality today. We'll map it to the Microsoft 365 tooling that closes the gap.


  1. Application control: only approved applications can execute on workstations and servers.
  2. Patch applications: internet-facing apps, browsers, Office, PDF readers patched promptly.
  3. Microsoft Office macros: macros disabled unless from trusted locations and signed by a trusted publisher.
  4. User application hardening: web browsers and productivity apps hardened against the most common attacks.
  5. Restrict administrative privileges: admin accounts limited, separated and reviewed — the crown jewels of the tenant.
  6. Patch operating systems: operating system patches applied on a schedule that matches the risk.
  7. Multi-factor authentication: MFA everywhere that matters — privileged accounts, remote access, important data.
  8. Regular backups: backups of important data, configuration and software — and restores you have actually tested.

Frontrow advises Australian boards on the order of operations across cyber, AI governance and management-system discipline. The conversation is usually 30 minutes and resolves into a defensible 12-month program plan. Phone 1300 012 466 or book a chat through the contact page.

Want us to run this with your team?

30 minutes. No deck. We'll walk through your tenant, your priorities, and the next sensible move.