Intelligence-Native Hospital (Australia)

Capability Maturity Ladder + Regulatory/Governance Model

This document covers two parts: a capability maturity ladder (Manual → Assisted → Autonomous) and an Australian regulatory and governance model (TGA, NSQHS, Privacy, clinical safety).


1) Capability maturity ladder (Manual → Assisted → Autonomous)

This ladder is intended to be applied both across the hospital as a whole and per capability (triage, imaging, surgery, pharmacy, discharge, robotics, etc.).

Principle: autonomy increases only when governance + evidence quality increases.

Level  Description
0      Manual / Legacy
1      Digitised
2      Assisted (Non-clinical automation)
3      Assisted Clinical Decision Support (CDS)
4      Semi-autonomous execution (bounded autonomy)
5      High autonomy clinical ops (“autonomous hospital in normal mode”)
6      Network-autonomous healthcare (system-of-systems)
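The ladder above can be captured as an ordered data structure so that other tooling (scoring, gating) can compare levels; a minimal sketch — the enum member names are illustrative, not part of the model:

```python
from enum import IntEnum

class AutonomyLevel(IntEnum):
    """Capability maturity ladder (Levels 0-6) from the table above."""
    MANUAL = 0              # Manual / Legacy
    DIGITISED = 1           # Digitised
    ASSISTED_OPS = 2        # Assisted (non-clinical automation)
    ASSISTED_CDS = 3        # Assisted Clinical Decision Support (CDS)
    SEMI_AUTONOMOUS = 4     # Semi-autonomous execution (bounded autonomy)
    HIGH_AUTONOMY = 5       # High autonomy clinical ops
    NETWORK_AUTONOMOUS = 6  # Network-autonomous healthcare

# IntEnum makes the ordering meaningful: higher value = more autonomy.
assert AutonomyLevel.ASSISTED_CDS < AutonomyLevel.SEMI_AUTONOMOUS
```

Using an ordered enum (rather than free-text labels) lets governance code enforce "one level at a time" promotion rules directly.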

Level 0 — Manual / Legacy

Description

Typical traits

Safety

Outputs

Level 1 — Digitised

Description

Capabilities

AI

Level 2 — Assisted (Non-clinical automation)

Description

Capabilities

Governance

Expected KPI improvement

Level 3 — Assisted Clinical Decision Support (CDS)

Description

Capabilities

Controls

Regulatory note: many CDS systems become Software as a Medical Device (SaMD) if they influence diagnosis or treatment.

Level 4 — Semi-autonomous execution (bounded autonomy)

Description

Capabilities

Required

Level 5 — High autonomy clinical ops (“autonomous hospital in normal mode”)

Description

Capabilities

Safety bar

Level 6 — Network-autonomous healthcare (system-of-systems)

Description

Capabilities

Sovereignty requirement

Score each hospital capability across:

Realistic state: a hospital can run multiple autonomy levels at once
(e.g., logistics at Level 5, triage at Level 3, surgery at Level 4).
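The per-capability view, combined with the principle that autonomy increases only when governance and evidence quality increase, can be sketched as a small registry with a promotion guard. The governance-score scale and class names below are illustrative assumptions, not a prescribed implementation:

```python
from dataclasses import dataclass

@dataclass
class Capability:
    name: str
    level: int              # current autonomy level (0-6)
    governance_score: int   # governance/evidence maturity, 0-6 (assumed scale)

class CapabilityRegistry:
    """Tracks each hospital capability at its own autonomy level."""
    def __init__(self):
        self._caps = {}

    def register(self, cap: Capability) -> None:
        self._caps[cap.name] = cap

    def promote(self, name: str) -> bool:
        """Raise autonomy by one level only if governance evidence already
        supports the target level (autonomy follows governance, never leads)."""
        cap = self._caps[name]
        target = cap.level + 1
        if target > 6 or cap.governance_score < target:
            return False
        cap.level = target
        return True

reg = CapabilityRegistry()
reg.register(Capability("logistics", level=5, governance_score=5))
reg.register(Capability("triage", level=3, governance_score=3))
reg.register(Capability("surgery", level=4, governance_score=5))

assert not reg.promote("triage")   # governance evidence not yet at Level 4
assert reg.promote("surgery")      # evidence already supports Level 5
```

The point of the guard is structural: a capability's autonomy level is an output of its governance maturity, never an independent dial.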


2) Australian regulatory + governance model (TGA, NSQHS, Privacy, clinical safety)

This section defines an operating model for an AI/robotics-native hospital, compliant with the obligations set out in 2.1 below.

2.1 Regulatory perimeter (what you must comply with)

A) NSQHS Standards (Australian Commission on Safety and Quality in Health Care)

NSQHS is the hospital’s quality & safety operating baseline.

Most critical anchor:

AI/robotics must be governed within NSQHS clinical governance, not treated as a separate digital project.

B) TGA — Software as a Medical Device (SaMD), including AI

If software influences diagnosis or treatment, it may be regulated as a medical device.

Key implications:

C) Privacy Act 1988 + Australian Privacy Principles (APPs)

Under the Privacy Act, health information is classified as sensitive information and attracts stricter handling obligations under the APPs.

Key themes:

D) National clinical governance for digital health

Use Australia’s digital health clinical governance frameworks as the bridge between:

E) Emerging AI governance guidance (Australia)

Adopt national clinical AI usage guidance as internal operational policy (training + acceptable use + safety expectations).


2.2 Governance operating model (how the hospital runs safely)

The “Three-Layer Governance Stack”

Three intersecting lines of governance are required:

1) Clinical Governance (NSQHS-driven)

Owned by

Responsible for

2) Medical Device / SaMD Governance (TGA-driven)

Owned by

Responsible for

3) Information Governance (Privacy + Security)

Owned by

Responsible for

2.3 Committees required (operationally credible setup)

1) Clinical AI Safety Committee

2) Model Registry & Change Control Board

No model goes live without:
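The change-control gate can be made explicit as a checklist evaluated before any go-live; the artefact names below are illustrative assumptions, not the board's actual list:

```python
# Hypothetical pre-deployment gate: a model release is blocked unless every
# required governance artefact is present. Artefact names are assumptions.
REQUIRED_ARTEFACTS = {
    "registry_entry",        # model recorded in the model registry
    "clinical_safety_case",  # signed-off safety case (see 2.4)
    "monitoring_plan",       # continuous monitoring configured (see 2.5)
    "rollback_plan",         # a tested way to revert to the prior version
}

def release_allowed(artefacts: set) -> bool:
    """True only when no required artefact is missing."""
    return REQUIRED_ARTEFACTS <= artefacts

assert not release_allowed({"registry_entry", "monitoring_plan"})
assert release_allowed(REQUIRED_ARTEFACTS | {"extra_documentation"})
```

Encoding the gate as data (a set of named artefacts) keeps the checklist auditable and lets the board extend it without changing deployment code.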

3) Robotics Safety Committee

Robotics operate in the physical-harm domain and therefore require:

2.4 Clinical Safety Case template (mandatory for AI/autonomy)

Every AI capability above Level 2 should have a safety case:

2.5 Data + model controls (privacy + safety combined)

Minimum viable controls

Continuous monitoring requirements
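One common way to operationalise continuous monitoring is a rolling comparison of a model's live performance against its validated baseline, alerting when the gap exceeds a tolerance. The baseline, tolerance, and window values below are illustrative assumptions:

```python
from collections import deque
from statistics import mean

class PerformanceMonitor:
    """Rolling-window monitor: flags when a model's live metric falls
    more than `tolerance` below its validated baseline."""
    def __init__(self, baseline: float, tolerance: float = 0.05, window: int = 100):
        self.baseline = baseline
        self.tolerance = tolerance
        self.scores = deque(maxlen=window)  # most recent observations only

    def record(self, score: float) -> bool:
        """Record one observation; return True if an alert should fire."""
        self.scores.append(score)
        return mean(self.scores) < self.baseline - self.tolerance

mon = PerformanceMonitor(baseline=0.90, tolerance=0.05, window=3)
assert not mon.record(0.91)
assert not mon.record(0.88)
assert mon.record(0.70)   # rolling mean 0.83 is below the 0.85 threshold
```

In practice the alert would route to the Clinical AI Safety Committee rather than simply returning a boolean, but the degradation-against-baseline pattern is the core of it.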

2.6 Practical TGA mapping: what becomes SaMD in your hospital

Usually SaMD

Usually not SaMD (but still governed)

Even non-SaMD automation requires governance, because administrative errors can escalate into clinical harm.
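The screening rule from B) above — software that influences diagnosis or treatment may be a medical device — can be encoded as a first-pass triage aid. This is a sketch of an internal screening helper, not a substitute for formal TGA classification:

```python
def needs_samd_assessment(influences_diagnosis: bool,
                          influences_treatment: bool) -> bool:
    """First-pass screen: route software that influences diagnosis or
    treatment to formal TGA SaMD assessment. It *may* be a medical device;
    the final classification rests with regulatory review."""
    return influences_diagnosis or influences_treatment

# Illustrative examples (assumed): staff rostering vs. a risk-prediction CDS tool.
assert not needs_samd_assessment(False, False)   # rostering: still governed, not SaMD
assert needs_samd_assessment(True, False)        # CDS tool: send to SaMD assessment
```

The helper deliberately errs toward assessment: a false positive costs a review, while a false negative is a compliance gap.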

2.7 Compliance statement (what you should be able to assert)

An intelligence-native hospital should be able to state: