SMART Tool is available for a ‘familiarisation’ play — what do you think of it? Test it out.
by ELV

The Testing Paradox
“Don’t upload samples” vs “Answer everything online”
For years, teachers were given clear guidance about assessment data.
Keep it private.
Shred what you no longer need.
Protect children’s responses with care and integrity.
Most of us did exactly that — not because we were told to, but because it felt right.
Even when I first trained with AI tools like ChatGPT, the very first advice I was given by Dr Craig Hansen was simple: turn your privacy settings on. Don’t feed the machine. Be cautious. Especially when children’s work is involved — particularly things like e-asTTle rubrics or writing samples. Because once data leaves your control, you don’t really know where it goes.
And the guidance was always consistent:
Don’t upload children’s writing samples.
Not to platforms.
Not to third-party tools.
Not to anywhere you can’t fully control.
Why?
Because writing is personal.
Because it reveals how children think.
Because it can carry identity, culture, trauma, and voice.
Because once it’s uploaded, you don’t truly get it back.
That warning made sense. Most of us accepted it without question.
Which brings me to the paradox I can’t stop thinking about:
Why was uploading children’s writing once considered too risky — but asking them to answer everything online is now considered normal?
On the surface, SMART feels harmless. Even familiar.
A bit like a pub quiz.
Click, answer, move on.
No essays. No uploads. Nothing obviously personal.
But what’s changed isn’t just the test.
It’s the technology behind it.
And that shift deserves far more scrutiny than a quiet “familiarisation play”.
What counts as data has changed
Older tools like PATs and e-asTTle mostly collected outcomes.
Right or wrong.
A score.
A band.
A stanine.
As teachers, we interpreted that data with context.
We knew why a child rushed.
Why they froze.
Why they guessed.
Why they underperformed on that particular day.
We knew if a parent had recently been excluded from their life.
If they were exhausted from not having a permanent home.
If the child in the other class had thumped them before school.
If they hadn’t eaten.
And because we knew our children, we also knew when to set the data aside and say, “This doesn’t tell the full story.”
Those tools gave us information, not judgement.
They supported professional interpretation.
They never pretended to know our learners better than we did.
SMART operates in a different world.
Now, the system doesn’t just see the answer — it sees the process.
Even in a simple multiple-choice question, the platform can record:
- how long a student hesitated
- whether they changed their answer
- which questions they skipped or flagged
- how often they used the calculator
- how they responded under time pressure
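To make that concrete, here is a minimal sketch of what one recorded interaction could look like. Every field name and structure below is my own assumption for illustration; SMART's actual schema is not public, and this is not a description of it.

```python
# A hypothetical per-question telemetry record. Every field name here is an
# assumption made for illustration; none of this is taken from SMART itself.
from dataclasses import dataclass

@dataclass
class QuestionEvent:
    question_id: str
    time_to_first_click_s: float  # how long the student hesitated
    answer_changes: int           # how often they changed their answer
    skipped: bool                 # question passed over
    flagged: bool                 # question marked for review
    calculator_opens: int         # how often they used the calculator
    seconds_remaining: float      # time pressure at the moment of submission

# A single multiple-choice click can generate all of this:
event = QuestionEvent(
    question_id="q17",
    time_to_first_click_s=11.4,
    answer_changes=2,
    skipped=False,
    flagged=True,
    calculator_opens=3,
    seconds_remaining=42.0,
)
```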
You don’t see this data as a teacher.
But it exists.
This is where AI changes everything
AI no longer needs children’s writing to understand their thinking.
It can infer patterns from behaviour.
Across thousands of students answering the same questions, AI can begin to model:
- confidence and uncertainty
- persistence and fatigue
- risk-taking versus caution
- speed versus accuracy
- compliance versus exploration
This is no longer just assessment.
It’s behavioural patterning.
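As a rough sketch of how that patterning might work (an assumption about technique on my part, not a description of SMART's internals), even a few simple aggregates over the hypothetical events above begin to look like a behavioural profile:

```python
# Continues the hypothetical QuestionEvent sketch above. The signals and the
# interpretations in the comments are illustrative assumptions, nothing more.
from statistics import mean

def behaviour_profile(events: list[QuestionEvent]) -> dict[str, float]:
    """Derive crude behavioural signals from raw interaction events."""
    return {
        # long hesitation plus frequent answer changes reads as uncertainty
        "mean_hesitation_s": mean(e.time_to_first_click_s for e in events),
        "answer_change_rate": mean(e.answer_changes for e in events),
        # rising skips across a session reads as fatigue or disengagement
        "skip_rate": mean(1.0 if e.skipped else 0.0 for e in events),
        # answering quickly with time to spare reads as risk-taking over caution
        "mean_time_left_s": mean(e.seconds_remaining for e in events),
    }
```

Not a single written sentence is needed for any of it.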
Before AI, we were careful with writing samples because they showed too much about a child. Now, systems can infer just as much — sometimes more — without a single sentence being written.
That’s the shift most teachers haven’t been told about.
And it raises a deeply uncomfortable question:
If writing was once considered too personal to upload, why is cognitive behaviour now considered safe to extract?
“De-identified” doesn’t mean powerless
We’re often reassured that this data is de-identified, aggregated, used for improvement, and applied at scale. And technically, that may be true. But de-identified doesn’t mean neutral. Patterns don’t need names to be valuable, and they don’t need faces to be predictive. Once behaviour is captured and compared across thousands of learners, it begins to speak for itself — not as individual stories, but as trends that can be analysed, ranked, and acted upon.
When enough of this data is gathered, it can be used to design so-called “adaptive” learning pathways, to decide which interventions are triggered and which are not, and to compare schools, regions, and cohorts against one another. It can be used to justify new tools and future contracts, to train proprietary AI systems, and to shape policy decisions made far from the classroom. In these moments, the data no longer serves teaching; it begins to serve the system.
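To see why names are beside the point, here is an equally hypothetical sketch of acting on de-identified profiles at scale. The thresholds, IDs, and logic are invented; the point is only that an opaque identifier and a pattern are enough to trigger a decision:

```python
# Illustration only: invented thresholds applied to de-identified profiles.
def triage(profiles: dict[str, dict[str, float]]) -> list[str]:
    """Select which anonymous profiles trigger an 'intervention' pathway.

    No name is needed: an opaque ID and a behavioural pattern are enough
    to rank learners and act on them.
    """
    return sorted(
        student_id
        for student_id, p in profiles.items()
        if p["skip_rate"] > 0.3 and p["mean_hesitation_s"] > 10.0
    )

cohort = {
    "anon-0412": {"skip_rate": 0.35, "mean_hesitation_s": 12.1},
    "anon-0413": {"skip_rate": 0.05, "mean_hesitation_s": 4.2},
}
print(triage(cohort))  # ['anon-0412'] is flagged without any name attached
```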
This is where power shifts.
Teachers assess children to support learning.
Systems analyse children to optimise systems.
Those are not the same goal.
This is where the conversation becomes uniquely Aotearoa.
Te Mātaiaho was never intended to be an overlay or an aesthetic — it was designed to re-centre learning around relationships, context, and purpose. Any system that claims alignment must be examined through that lens, not simply through what is easiest to measure.
It’s also worth pausing on how this tool presents itself.
Cultural themes, familiar imagery, and local references can feel reassuring — a signal that this belongs here. But we need to be honest about the difference between cultural presence and cultural authority.
When a platform is largely designed offshore — in Australia — and then localised visually or contextually, we should ask whether that localisation is shaping the system, or simply softening its reception.
Representation matters.
But it is not the same as sovereignty.
This isn’t about being anti-technology. It’s about understanding that assessment is no longer just assessment, that data is no longer just scores, and that platforms are no longer neutral.
SMART is a system built primarily on outputs, on what can be measured, compared, and scaled, and I struggle to see where, or why, the original intent of Te Mātaiaho genuinely fits within that process. With or without tokenistic word problems featuring children who look like they might be from Aotearoa, this remains a fully westernised philosophy and pedagogy: one society's decision about what progress looks like and feels like.
There is no clear evidence that a Te Ao Māori understanding of progress, time, or lived reality is shaping how learning is understood over time, despite the inclusion of culturally familiar contexts such as kapa haka. When culture appears only in the surface features of a question, rather than in how progress is defined, interpreted, and valued, it risks becoming reassurance rather than representation.
When systems begin to know more about children’s learning behaviours than teachers do, power shifts — quietly, efficiently, and permanently. A “familiarisation play” isn’t just about getting comfortable with a tool; it is about normalising a new relationship between children, data, and systems, and once that relationship is established, it is very hard to unwind.
Teachers have always handled assessment data with care because we hold children with care, and as AI becomes embedded in education, that ethic matters more than ever.
So perhaps the question isn’t whether SMART works, but what it is really collecting, who learns from that data, who controls how it is used, and who gets to say no. Because once systems learn from children, they don’t forget — and we owe it to our students, and to ourselves, to ask the questions now, before the answers are decided for us.
Editor’s Note: A question for Boards of Trustees
This piece is not intended as a directive, nor as an argument against technology or assessment. It is offered as a starting point for governance-level discussion.
As Boards of Trustees carry legal and ethical responsibility for student wellbeing, data protection, and alignment with Te Tiriti o Waitangi, the introduction of system-level assessment tools such as SMART raises important questions that sit beyond classroom practice.
Boards may wish to consider:
- how student data is being collected, interpreted, stored, and reused over time
- what visibility schools and teachers have into the full range of data generated by these platforms
- how consent is understood and communicated to whānau
- whether the underlying philosophy of assessment aligns with the intent of Te Mātaiaho and local curriculum design
This is not about resisting change. It is about exercising stewardship.
Assessment tools shape not only what is measured, but what is valued.
As AI-enabled systems become embedded in education, the decisions made now will have long-term implications for children, schools, and communities. Boards are uniquely positioned to ask the questions that ensure these tools serve learners — not just systems.