The Capability Gap
Why AI Adoption Isn’t the Same as AI Readiness in Education
By now, most tertiary institutions have moved past the question of whether AI is “allowed”.
Tools are in place. Pilots are running. Policies have been drafted and revised. In many cases, significant investment has already been made. And yet, for many leaders and educators, something still feels unsettled.
Confidence is uneven. Decisions feel fragile. Conversations keep looping back on themselves. Despite activity and spend, the sense of coherence people hoped for hasn’t quite arrived.
This is not a failure of intent, effort, or professionalism. It’s the result of a simple mismatch: AI adoption has moved faster than human capability.
Capability, in this context, is not about familiarity with tools. It’s about professional literacy in a changed environment — shared judgment, confidence in use, clarity about boundaries, and an agreed sense of what “good practice” looks like now.
When those elements lag behind deployment, instability is not a surprise. It’s the expected outcome.
We’re seeing the same pattern well beyond education. Across sectors, organisations are investing heavily in AI on the promise of productivity, efficiency, and leverage. Yet the results are often modest or inconsistent. Leaders are not rejecting the technology — they’re puzzled by the gap between expectation and outcome.
This isn’t because the tools don’t work.
It’s because tools alone don’t create capability.
When new systems arrive faster than practices can adapt, people default to workarounds, uncertainty, or over-correction. In education, that tension shows up in familiar ways: over-reliance on policy, repeated revisions of rules, and a quiet sense that detection tools aren’t delivering the stability they were meant to provide.
Policy struggles in these moments not because it’s poorly written, but because policy cannot substitute for shared professional judgment. Rules are most effective when they reflect practice that is already understood and trusted. When capability is uneven, policy ends up chasing behaviour rather than guiding it.
The same is true of detection tools. They are understandable responses to uncertainty, but they address symptoms rather than causes. The deeper risk facing education is not misuse of AI. It’s the gradual erosion of professional confidence and authority — educators unsure what to endorse, leaders uncertain what to stand behind, and institutions oscillating between permissive and punitive stances.
That erosion is far more damaging than any single instance of misuse, because it weakens the conditions under which good judgment can be exercised at all.
This is where the conversation needs to shift.
Capability is not a “nice to have” layered on top of adoption. It is infrastructure. It develops slowly, accumulates through practice, and stabilises systems over time. It includes literacy, yes — but also confidence, shared norms, and the ability to explain why certain uses are appropriate while others are not.
When capability is treated as infrastructure, governance becomes possible in a meaningful way. Not governance as control, but governance as cultivated authority — where decisions are made deliberately, standards are defended with confidence, and educators feel supported rather than scrutinised.
The good news is that this gap is neither mysterious nor permanent. It is a normal phase in any major shift, especially one that touches professional identity and judgment so directly. But it does require a change in focus.
We are past the reaction phase.
The work now is capability.
And from capability, governance can finally take shape.