It started as a simple exchange on a popular communication app used nationally at many elementary schools. A colleague shared that her son’s teacher had reached out about a small behavior issue and asked how they handled it at home. The colleague and the teacher messaged back and forth, and the exchange reflected the kind of thoughtful, productive partnership every parent hopes to have with their child’s teacher. At the end of the conversation, the teacher wrote, “Thank you for working with me to improve their behavior.”
This struck our colleague as odd. Was it a typo? A mistake? Her son’s pronouns are he/him. Then she realized: Could the teacher… no… have sent an AI-written message? After a rich exchange about her child’s behavior? Really?!
Curious, we searched and discovered that many such apps now offer AI-generated response suggestions for teachers. Suddenly, that small pronoun choice seemed less like a personal courtesy and more like a programmed safeguard. As parents, our kid is an ‘n of 1’ in their class. As educators ourselves, we understand that, to the teacher, each kid is one of twenty (or more), and that efficiency tools can make communication manageable. But where, we wondered, is the ethical line between helpful automation and authentic human connection in parent–teacher communication? And how should teachers be trained to use AI responsibly?
Who We Are
As educators, scholars, and parents – Ben focused on higher education and Chelsey on early childhood – we find ourselves in conversations about these questions all the time. Everyone from our colleges and departments to other parents on the playground wants to engage in conversation about what is going on with AI. We often occupy dual positions in these exchanges. On one hand, we approach the conversation as parents, deeply invested in our children’s well-being and education, who trust that communication with educators reflects genuine care and understanding. On the other, we view interactions such as the one described above through the lens of our professional expertise, aware of the growing presence of AI tools in educational settings and the ethical complexities they introduce.
Recently, we have been closely considering this question: What can/should higher education be doing in the age of AI? We can’t (or shouldn’t try to) answer the question in its totality, but one conversation we’re increasingly chiming in on concerns how to develop the next generation of professionals – especially, for Chelsey, classroom-ready teachers. Her work, and that of others, aims to help preservice and practicing educators critically and creatively integrate AI tools into their literacy instruction. Through her research, professional development, and curriculum design, she examines how AI can both support and complicate authentic teaching, asking how educators can maintain ethical awareness and human connection while embracing innovation.
Research and Professional Insights: Why College?
Recent research highlights the growing need to prepare early childhood educators to use artificial intelligence responsibly. A 2025 scoping review on AI in early childhood education identifies both the opportunities and the risks of AI integration, including issues of data privacy, algorithmic bias, and teacher preparedness. Similarly, an article from Zero to Three emphasizes how professional development can help early educators move beyond simply using AI tools toward critically reflecting on when and why to use them. For higher education programs that prepare future teachers, this research collectively suggests the importance of embedding AI literacy and ethics into coursework so that educators maintain a focus on human relationships, professional judgment, and child-centered learning as they encounter new technologies.
In fact, higher education – especially the undergraduate and graduate degree programs responsible for developing the next generation of teachers and leaders – is fast becoming an epicenter for developing professional understanding of the best use (and avoidance of misuse) of AI technologies. With Forbes recently reporting an AI usage rate among university students hovering at a whopping 90%, the question is rapidly shifting away from whether future teachers are going to use AI to how they are being guided toward its use with human-centered ethics in mind. While schools of education are interested in taking action, one concern is actually that faculty are moving too slowly to meet the demands of the current moment. As this conversation progresses, it will become essential to integrate broader insights about AI (e.g., Mollick’s work on co-intelligence) with practices specific to teacher development (e.g., communicating with parents).
So, how can these insights be put into practice immediately? We offer a few practical considerations.
Practical Takeaways
1. Center relationships. Ultimately, the rise of AI in education challenges us to reaffirm what has always mattered most: human connection. Teacher education must emphasize that relationships always come first because authentic trust and understanding cannot be automated. Always be the human on the other side of the AI, recognizing that these tools make mistakes, lack context, and can unintentionally distort tone or meaning. New teachers should be taught to use AI tools thoughtfully, not as a replacement for their professional judgment or care. As our vignette demonstrates, this absolutely includes relationships with parents. Remember that, to a parent, their child is an n of 1.
2. Collaboration is key. Teacher educators must acculturate students to collaborate early and often with colleagues about what works (and what doesn’t) in their particular classroom or content area. AI applications in math problem generation, for instance, differ from those that support reading feedback or family communication. This could take the form of internal (e.g., within the school) and external (e.g., professional community) knowledge sharing, as well as being open with colleagues about possible challenges with AI. One thing we’re seeing throughout the AI landscape, no matter the area, is that transparency is essential – and this emphasis on transparency in use and collaboration must start well before teachers have their own classroom.
3. Encourage education faculty. What would it look like to ensure that every faculty member responsible for developing future classroom educators was equipped with the knowledge to ethically use and discuss AI, including generative approaches and more bot-based applications? This is probably more moonshot than practical reality, but these are the types of questions that leadership – both within and beyond colleges of education – must be addressing.
4. Use, but don’t overuse. We know – it’s a slippery slope from helpful efficiency to automation that loses the human touch. But for AI use to be meaningful and intentional, it must also be responsible in ways that promote transparency and student learning while demonstrating the high level of professionalism that parents and school districts deserve. What this means for education professors is modeling thoughtful, transparent AI use in their own teaching by showing future educators what it looks like to innovate without diminishing the human judgment, relational work, and pedagogical expertise that remain at the heart of the profession.
Conclusion
The goal isn’t to eliminate AI. The toothpaste, as they say, is out of the tube. Instead, teachers and those who strive to ensure their high-quality development must center the deeply human relationships that are core to learning. We suggest that higher education – for over 100 years, from normal colleges to today, the primary location for teaching teachers – is where these conversations must start and be sustained. If universities take the lead in setting clear expectations, demonstrating intentional practice, and elevating the human expertise that technology can never replace, they can shape how an entire profession navigates this moment.
