There’s a conversation about Artificial Intelligence happening in school administration offices and teacher social media circles. It’s full of words like “disruption,” “guardrails,” and “the future of work.” Then there’s the conversation happening in high school students’ group chats. It’s about how to get the history essay done by 11 PM.
The two conversations have almost nothing to do with each other.
I recently had a chance to talk with William Liang, a high school student from San Jose, California, and frankly, he offered one of the clearest views I’ve heard yet on what’s actually happening on the ground. This isn’t just any student. William is a seriously impressive high school journalist, with published work in places like the San Francisco Chronicle and The San Diego Union-Tribune.
He’s living and breathing this stuff every day, and his message is simple: our school system is playing a game of checkers while its students are mastering 3D chess. The way we teach and test kids is fundamentally broken in the age of AI, and our attempts to “catch” them are missing the point entirely.
It’s All About the Incentives
Here’s the first truth bomb William dropped, and it reframes the entire issue. We need to accept that for a huge number of students, an assignment isn’t a journey of intellectual discovery.
“For most students, an assignment is not interpreted as a cognitive development tool, but as a logistical hurdle,” he told me. Think about that. It’s a hurdle to be cleared as efficiently as possible. “Right now,” he said, “that mechanism is generative AI.”
This isn’t really about kids being lazy or immoral. It’s about them being smart players in a game we designed. For decades, the system has screamed one thing above all else: grades matter more than understanding. When the goal is the A grade, and a tool exists that gets you there in a fraction of the time, why wouldn’t you use it?
As William put it, “If there’s an easy shortcut, why wouldn’t we take it?” He sees it as a predictable outcome. When you have a high-pressure, competitive game where a growing number of players can cheat with a huge upside and a tiny risk, everyone else feels forced to cheat just to keep up.
The “Plagiarism Police”
So, what about the teachers? The plagiarism checkers? The honor codes?
According to William, it’s mostly security theater. The whole enforcement system is, in his words, “incoherent.”
He explained that “students are ‘warned’ all the time but rarely penalized because the enforcement apparatus is incoherent. Detection tools operate on heuristics, which include vocabulary uniformity, sentence structure, and semantic burstiness; however, students generally learn quickly how to avoid triggering them. Teachers are busy. They rarely follow up unless something seems egregiously wrong, and even then, they have little evidentiary protocol. And when they do think they’ve ‘caught’ someone, they’re often wrong.”
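To make “semantic burstiness” concrete, here’s a toy sketch of the kind of surface signal he’s describing: the variation in sentence length, which tends to be flatter in machine-generated prose. To be clear, this is my illustration, not code from any real detector; the function name is made up, and real tools blend many signals like this one.

```python
import re
import statistics

def burstiness_score(text: str) -> float:
    """Toy heuristic (illustrative only): standard deviation of
    sentence lengths, in words.

    Human prose tends to mix short and long sentences (high variation);
    model-generated prose is often more uniform (low variation).
    Real detectors combine many such signals and, as William notes,
    are easy to evade and often wrong.
    """
    # Naive sentence split on ., !, or ? followed by whitespace.
    sentences = [s for s in re.split(r"(?<=[.!?])\s+", text.strip()) if s]
    lengths = [len(s.split()) for s in sentences]
    if len(lengths) < 2:
        return 0.0
    return statistics.stdev(lengths)

essay = (
    "The war began quietly. Then everything changed at once, and the old "
    "alliances that had held Europe together for decades collapsed within "
    "a single summer. Nobody expected it."
)
print(f"burstiness: {burstiness_score(essay):.2f}")  # higher = more variation
```

The fragility is the point: a student who deliberately mixes short and long sentences, or simply asks the model to, sails right past a signal like this. That’s the cat-and-mouse game William is describing.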
The anecdotes he shared are both hilarious and horrifying. “A guy I know who used AI to write an essay literally had the words ‘as an AI language model myself,’ and he kept it in and didn’t get caught for it,” William recounted. Think about that. The AI confessed to writing the essay, in the essay itself, and no one noticed. “Meanwhile,” he continued, “another person got flagged on an essay they spent a week writing and had to show the version history to prove they wrote it.”
This is where things got really interesting. He argued that we’re all using the wrong words. The line between “using a tool” and “cheating” isn’t about academic integrity anymore. In the real world, it’s about one thing: Can you get caught?
“The designation of ‘cheating’ doesn’t rest on the method but on the detectability,” he argued. Because detection is basically a coin flip, the official labels of “legitimate” and “illegitimate” use just fall apart.
So, What’s the Fix?
If the old system is broken, the only move left is to change the game board itself. William’s solution isn’t more complicated software or another all-school assembly on academic honesty.
It’s one simple, radical rule.
I asked him what single policy he would mandate in high schools. His answer was:
“Teachers should not be allowed to assign take-home work that ChatGPT can do. Period!”
Read that again. He’s not saying “no more homework.” He’s saying that any essay, problem set, or report with a predictable structure that’s done without supervision is now an invalid test of a student’s knowledge. It only tests their ability to write a good prompt.
The real work of thinking, analyzing, and creating has to be brought back into the classroom, where it can be seen. How do you assess real understanding? The old ways, it turns out, still work beautifully. “Drafting essays and solving math problems,” he said. You just have to watch them do it. Think in-class essays, oral presentations, and group projects where the process is as important as the product.
He’s an AI Optimist
But here’s the thing that makes William’s perspective so powerful. He’s not an AI doomer. In fact, he’s incredibly optimistic about the technology. He just thinks we’re focusing on the bad use cases for it.
“There is no inherent tension between embracing AI and preserving critical thinking or creativity, unless schools force one,” he insisted. The problem isn’t the tool; it’s the task.
He asked me to flip the question. “Imagine students had daily access to the greatest minds in science, literature, and art?” he posed. “Students working closely with advanced AI will be like directly apprenticing with Ernest Hemingway, Isaac Newton, or Leonardo da Vinci. Why would we deny students this opportunity?”
Now that’s a vision. Imagine your kid getting feedback on their short story from a bot that thinks like Hemingway—a bot that could say, “Great start, but a master of prose would cut these three adverbs and find a stronger verb here.” Imagine an AI tutor that can generate a thousand different math problems tailored to exactly where your child is struggling, offering hints 24/7.
That’s the right use of the tool. AI shouldn’t be the thing that completes the assignment for the student. It should be the thing that helps the student complete the assignment better. He gave a perfect example: grade students on a conversation they have with an AI chatbot about a complex topic. The AI is part of the learning, but the student is still doing all the critical thinking.
The takeaway from William is a wake-up call for anyone in a leadership position: a parent, an educator, or a business leader. It’s time to get honest. Stop asking “how do we catch them?” and start asking “what should we be asking of them in the first place?”
The students are already living in the future. As William put it, “The biggest misconception surrounding AI adoption is that adults don’t realize students are light-years ahead of them. I use ChatGPT more than Instagram, which is astonishing.”
It’s time for the rest of us to catch up.