AI chatbot boosters have made some large claims about the potential effect of AI in education in recent months, in particular that:
AI chatbots will give everyone access to a world-class personal tutor. (e.g. Sal Khan, Marc Andreessen.)
AI chatbots will result in less need for human teachers, leading to lower teacher hiring. (e.g. Luis von Ahn.)
AI chatbots will result in happier human teachers, leading to lower teacher attrition.
My perspective on all of the above has been “no, I don't think so,” and history is on my side here. With the exception of huge exogenous events like a global pandemic or a recession, the safest bet in education, for better or worse, has always been the status quo.
In this post, I’m going to pre-register some claims that are a bit more falsifiable than “no, I don’t think so.” I will also offer their “pre-mortem”: an analysis of why they might be incorrect and why many of you may get to dunk on me in five years.
Bet #1
In the next five years, we won’t see a single study in a peer-reviewed journal replicating with AI chatbot tutors the two-sigma difference Bloom found with human tutors.
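(A rough gloss on the benchmark, for readers who don’t know Bloom: “two sigma” refers to an effect size of about two standard deviations, d = (mean of tutored students − mean of conventionally taught students) / standard deviation ≈ 2, which in Bloom’s 1984 studies meant the average tutored student outperformed roughly 98% of the comparison class.)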
These chatbots are very exciting, very fast, and often very useful, but they will not perform at the level of high-dosage (usually defined as three times per week) human tutors for several reasons.
First, these chatbots are wrong at rates most people would find unacceptable for human tutors. Some people try to convert this lack of reliability and validity into a meta-cognitive exercise for students—correct the tutor!—but that exercise is only productive or interesting to students who don’t need a tutor. Students who need a tutor need the tutor to be right.
But let’s imagine the chatbots quit hallucinating and become as accurate as human tutors. Even then, you’d have to get kids to interact with these AI tutors, and researchers have found it very hard to get students to interact with human tutors, who are, for now, much more accurate and, well, much more human.
My premise here is that anything close to a two-sigma improvement in learning requires a tutor that is consistent and relational. Someone who knows to follow up on your last exam. Someone who will encourage, push, and pull you beyond what you think are your limits. People are not anywhere close to forming those kinds of relationships with AI chatbots.
2023 Jun 15. A reader has invited me to amend this bet, which I explain in the postscript below.
Bet #2
In the next five years, we won’t see a national survey where K-12 students have access to both classroom teachers and chatbot tutors over a year and a plurality of them prefer the chatbots to classroom teachers.
This is an easy bet to parlay with the last one. I find it plausible that students would prefer a chatbot in situations where their class is staffed by substitute teachers or no teacher at all. But in situations where students have access to a classroom teacher and a chatbot, you won’t find even a plurality of them claiming to prefer the chatbot to the human. Students will more often choose human relationship in learning1.
Bet #3
In the next five years, we won’t see changes to teacher attrition or hiring in any of the top ten largest districts that we can attribute to AI chatbots.
This one will be tough to falsify, unfortunately. Patterns of teacher hiring and attrition change all the time, often in response to macroeconomic conditions. (e.g. Weak labor markets generally result in more applicants to teaching jobs.)
It won’t surprise me to see a few schools here or there—charter or non-union most likely—increase class sizes and offload the marginal work to AI chatbots, as Rocketship tried with computer labs a decade ago. These large class sizes won’t catch on—particularly in large school systems—because students and parents generally hate them, and hiring will continue apace.
Some commenters have suggested that AI chatbots will help us retain teachers by functioning as teaching assistants. But I don’t think we will see that effect, mainly because AI chatbots are irrelevant to the largest factors contributing to job satisfaction among teachers: resource availability, professional respect, disciplinary climate, self-efficacy, teacher collaboration, and professional development opportunities (see pp. 81-84).
Pre-mortem
Whether or not my predictions are correct mostly depends on the premise that students won’t form the kind of parasocial relationships with AI chatbots that they form with even below-average teachers. If they do form those relationships, it’ll mean the chatbots have become indistinguishable from humans and have also been allowed to represent themselves as human. (That’s because the knowledge that you’re interacting with a virtual avatar negatively affects your learning.)
If we live in a future where AI agents are indistinguishable from humans, or where humans decide they don’t care about the differences, I might have taken the wrong side of these bets. But none of you will remember to dunk on me because you will be too busy either hiding underground from flesh-eating AI nano-particles or living a life of unlimited AI-subsidized leisure. Either way, in that future, my predictions will not be top of mind.
The most optimistic outcome I can conjure for AI in education is that it will manifest as a series of quality-of-life improvements for teachers and students, similar to the AI grammar checker that is right now ensuring I maintain subject-verb agreement, but not much more than that. I like that grammar checker, but it has not substantially changed how I write, and likewise AI will not substantially change how kids learn new ideas.
I’m a lot older now, maybe a little wiser, and much less annoyed than I used to be by the people who make maximalist claims about technology in education. I realize now how hard they have to work to loosen the purse strings of philanthropic funders and institutional investors. The fact that their previous predictions about technology in education have never once shaken the hand of reality—whether predictions about radio, TV, VHS, or the internet itself—will not change their high likelihood of meeting their funding goals or their low likelihood of making anything of relevance to the work of helping kids learn new ideas.
You could throw trillions of dollars at the project of convincing kids and teachers that they will learn better without each other, that they might even prefer it, and they will tell you en masse, “No, I don’t think so.”
So if you are, right now, helping kids learn through social relationship, I encourage you to stay your course because it is the only course that matters. If, on the other hand, you believe you can help kids learn at scale without social relationship, I encourage you to get in touch with me in the comments because I have several bets in mind and you are my kind of bettor.
2023 Jun 15. Reader Mike G. notes how hard it’s been to replicate Bloom’s two-sigma difference even with human tutors. I note in response that AI chatbot boosters frequently hold themselves up against Bloom’s benchmark, which makes me feel like the terms of the wager are fair. Nevertheless, I like Mike’s counterproposal:
In the next five years, we won’t see a peer-reviewed RCT in which students receive the same dosage of either a chatbot tutor or a consistent human tutor and the chatbot tutor group significantly outperforms the human tutor group on academic achievement.
Previously on Dan Writes About AI in Education
Three Tests AI Needs to Pass Before It Can Start to Transform Teaching
AI Chatbots Will Help Students Learn Nothing Faster Than Ever
Other Relevant Writing
Chatbots (mostly) won't change education. Michael Pershan, a teacher.
AI Can Be Helpful to Teachers but, Despite What Sal Khan Says, It Will Not Be “The Biggest Positive Transformation That Education Has Ever Seen.” Larry Ferlazzo, also a teacher.
Student Perceptions of AI-Generated Avatars in Teaching Business Ethics: We Might not be Impressed. Postdigital Science and Education.
Two studies point to the power of teacher-student relationships to boost learning. Jill Barshay, who has become essential reading in recent months.
Upcoming Presentations
I hope you can come hang out with me at the summer learning institute in Clark County, NV, next Tuesday 6/20 or at CAMT the following week.
I’m aware of this study, which is interesting but not really relevant here as it asked the humans to compete on terrain favorable to chatbots—text-based communication over a short timeline.
Try a geometry problem with AI. Either it uses LaTeX notation that doesn’t make sense without a LaTeX interpreter (which isn’t included by default), or it gets the problem wrong. As someone who teaches high school students remotely, I find that most chat applications aren’t set up properly for Maths yet, so the notation we type doesn’t match what we handwrite. How we express Maths problems is still a larger issue with technology.
In teaching, relationships matter; a community of learners matters; talking through what’s been learned matters; equal access matters. I can imagine cases where AI could "improve" getting a "quick" answer. But for the hard, fun, and exciting work of actual teaching and learning, I am not predicting a sea change in those critical areas with AI.