Where AI supports academic practice—and where judgement still holds
There is a particular pressure that comes with scale.
In smaller teaching contexts, minor inefficiencies are absorbed.
An unclear instruction leads to a question.
A delayed reply is manageable.
An uneven explanation can be corrected.
At scale, those same frictions repeat.
And when they repeat, they stop being minor.
They become structural.
At scale, friction is rarely random. It is usually a matter of design.
In fully online environments, much of teaching happens through text.
Instructions carry more weight.
Explanations need to travel further.
Feedback is often the primary point of contact.
That creates a different kind of workload.
Not necessarily more complex—but more exposed.
This is where AI begins to appear—not as a solution to teaching, but as a response to repetition.
There are parts of academic work that repeat frequently, and still require care.
Explaining the same concept in different ways.
Drafting feedback that is clear, constructive, and appropriately toned.
Reviewing instructions to anticipate where learners might become confused.
These are not trivial tasks.
They are also not entirely new.
What changes is the volume.
One way to understand AI’s role is as a first pass.
It can generate options.
Offer alternative explanations.
Structure a response.
But it does not decide what matters.
That distinction holds.
Because the value in teaching is not in producing text.
It is in judging what is appropriate.
What fits the course.
What aligns with expectations.
What supports the learner in front of you.
In that sense, AI does not replace expertise.
It creates more surface area for it.
The educator is no longer starting from a blank page.
But the responsibility for what is kept, changed, or discarded remains.
There is also a quieter shift underneath this.
Repeated learner questions are rarely just a workload issue.
They are often a signal.
Something in the design is unclear.
An assumption has been left unstated.
A concept has not travelled as intended.
At scale, those signals become easier to see—if they are noticed.
AI can assist in making those patterns visible.
Not by interpreting them, but by surfacing them.
That still requires judgement.
The useful boundary, for now, is not between using AI and not using it.
It is between what can be supported and what should remain firmly human.
Drafting is support.
Pattern spotting is support.
Final academic judgement is not.
The risk is not that AI will take over teaching.
It is that it will be used without reflection.
That it will scale responses without scaling judgement.
Used carefully, it does something more modest.
It helps reduce friction.
And in doing so, it creates a little more space for the parts of teaching that do not repeat.

Kia ora! Hey, I'd love to know what you think.