17.17 The author's perspective: AI in clinical medicine

The author has spent fifteen years at the intersection of clinical practice and digital health research. Six conclusions emerge from that experience.

First, augmentation is the right frame. The clinician–AI partnership, in which the AI handles narrow tasks (image triage, documentation, alert generation, retrieval) and the clinician handles judgement, communication and final responsibility, is the framework that has produced sustained value. Framing AI as a replacement for clinicians has not, to the author's knowledge, produced a single successful deployment despite a decade of attempts. The reason is partly technical (the tail-risk failure modes of AI systems are unacceptable in safety-critical settings) and partly social (medicine is a profession in which trust, relationship and accountability are central, and these do not transfer to algorithms).

Second, the evidence standard for clinical AI must be the standard for any clinical intervention. A new drug requires randomised evidence, a clear safety profile and post-market surveillance. A new surgical technique requires comparable evidence. There is no defensible argument that an AI-based diagnostic, triage or treatment tool should clear a lower bar. The fact that AI is software and is cheap to deploy is irrelevant; what matters is the consequence to patients, and the consequence to patients is the same whether the cause is a drug, a device or a model. Regulators have largely understood this: the FDA's Software as a Medical Device (SaMD) framework, the EU MDR and the MHRA's Software and AI as a Medical Device programme apply broadly the same standards as for traditional devices.

Third, distribution shift is the rule. A model trained at one site will not work the same way at another. The evidence base on this is now overwhelming. Practical implication: every clinical AI deployment must include a local validation step before going live, and a monitoring plan after.
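The local validation step described above can be sketched as a simple pre-go-live gate. This is an illustrative sketch only: the function name, the choice of sensitivity and specificity as gating metrics, and the threshold values are the author's assumptions for the example, not requirements drawn from any regulatory framework.

```python
# Minimal sketch of a local-validation gate: run the model's predictions
# against locally collected gold-standard labels and refuse go-live if
# pre-specified performance thresholds are not met. Thresholds here are
# hypothetical placeholders.

def validate_locally(y_true, y_pred, min_sensitivity=0.90, min_specificity=0.80):
    """y_true, y_pred: lists of 0/1 labels. Returns (passed, metrics)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    sensitivity = tp / (tp + fn) if (tp + fn) else 0.0
    specificity = tn / (tn + fp) if (tn + fp) else 0.0
    passed = sensitivity >= min_sensitivity and specificity >= min_specificity
    return passed, {"sensitivity": sensitivity, "specificity": specificity}

# Example: a model that misses 2 of 10 local positive cases fails a
# 90%-sensitivity gate even though its specificity is acceptable.
y_true = [1] * 10 + [0] * 10
y_pred = [1] * 8 + [0] * 2 + [0] * 9 + [1]
ok, metrics = validate_locally(y_true, y_pred)
print(ok, metrics)
```

The same function, run on a schedule against recent labelled cases, doubles as the post-deployment monitoring check: a previously passing model that starts failing the gate is a signal of drift at the local site.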

Fourth, equity must be designed in, not bolted on. Bias in clinical AI is a clinical safety issue. A model that performs better on some patient subpopulations than others is, in clinical terms, an unsafe device for the underserved subpopulations. Pre-deployment performance evaluation by demographic subgroup, post-deployment monitoring and disparity-aware retraining are not optional features; they are part of clinical safety.
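Pre-deployment evaluation by demographic subgroup can be sketched in a few lines. This is an illustrative sketch under stated assumptions: the group labels, the use of sensitivity as the equity metric, and the disparity tolerance are all hypothetical choices for the example.

```python
# Sketch of subgroup performance evaluation: compute sensitivity per
# demographic group and measure the gap between the best- and worst-served
# groups. Group names and tolerances are hypothetical.

def subgroup_sensitivity(records):
    """records: list of (group, y_true, y_pred) tuples with 0/1 labels.
    Returns {group: sensitivity}, or None for groups with no positive cases."""
    tallies = {}
    for group, t, p in records:
        tally = tallies.setdefault(group, [0, 0])  # [true positives, positives]
        if t == 1:
            tally[1] += 1
            if p == 1:
                tally[0] += 1
    return {g: (tp / pos if pos else None) for g, (tp, pos) in tallies.items()}

def max_disparity(per_group):
    """Largest gap in sensitivity between any two groups."""
    vals = [v for v in per_group.values() if v is not None]
    return max(vals) - min(vals) if vals else 0.0

# Example: 90% sensitivity in one group, 70% in another -- a 20-point gap
# that a pre-specified disparity tolerance should catch before deployment.
records = (
    [("group_a", 1, 1)] * 9 + [("group_a", 1, 0)]
    + [("group_b", 1, 1)] * 7 + [("group_b", 1, 0)] * 3
)
per_group = subgroup_sensitivity(records)
print(per_group, max_disparity(per_group))
```

The design choice worth noting is that the disparity threshold, like the validation thresholds, must be pre-specified: deciding after the fact what gap is acceptable defeats the purpose of the audit.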

Fifth, the best deployments are administrative. Ambient documentation, scheduling optimisation, clinical letter drafting, billing-code suggestion, prior-authorisation automation, summary generation for ward handovers: these unsexy uses are where the largest measurable benefits to clinicians and patients have come from. They reduce burnout, give clinicians time back, and rarely fail in ways that hurt patients. They are the low-hanging fruit of clinical AI and we should pick them aggressively.

Sixth, the largest opportunities are in low-resource settings. Pacific Island clinics with intermittent specialist access, sub-Saharan African hospitals with one radiologist for a million people, refugee and humanitarian settings: in places where the alternative to AI assistance is no specialist input at all, the calculus changes. The author's own work on offline AI-assisted clinics for Pacific Island populations operates in this space. The frame is not "AI replaces the specialist" but "AI extends specialist input where no specialist is available", and in that frame, the argument for ambitious deployment is very strong, provided the safety guarantees and the local validation are in place.

The next decade will determine whether AI in medicine becomes a genuine net benefit to patient care or a costly distraction. The path to the first outcome runs through evidence-based deployment, careful workflow integration, rigorous bias auditing, and humility about the failure modes of these systems. The path to the second runs through hype, premature deployment, and regulatory capture by under-evidenced products. Clinicians have a professional obligation to push the field down the first path.

Contact: Chris Paton