Five Differences Between Human and AI Tutors
Chatbot tutors have not arrived, and we should wonder if they ever will.
Here is Google CEO Sundar Pichai describing his hopes for AI chatbots in education last year:
I think over time we can give every child in the world and every person in the world — regardless of where they are and where they come from — access to the most powerful A.I. tutor.
Pichai is not alone. Many other people have expressed hopes that chatbots will one day function as world-class tutors available at any time for anybody.
As I listen to these hopes, I have often wondered, have these people ever once been a tutor or been tutored? I have recent experience here and find the difference between human tutors and AI chatbots rather stark.
Over the last several months, I have been tutoring a fifth-grader, a family member, in mathematics remotely over Google Meet.
But a few months ago I also hired a tutor for myself. My secondary goal with that tutor was to learn more about web technologies like JavaScript, React, and Express. My primary goal, however, was to experience firsthand the habits of effective tutors for the sake of this newsletter.
At the same time, I have tried to engage ChatGPT 3.5 and other chatbots as tutors, both for fifth-grade math and for web development.
Here are several of the differences I have noticed between human tutors and AI chatbot tutors.
Human tutors seek context.
Questions you’ll hear human tutors ask:
What have you tried so far?
Can I see your work?
What’s on your paper?
How far can you go here?
Where did you get stuck?
How did your teacher explain it in class?
What have you been up to since our last session?
Questions you’ll see AI chatbot tutors ask:
Do you know the first step here?
Human tutors understand that learners are not starting from nothing or nowhere, that the context of their current homework assignment or their last test or their recollections from class are all useful assets in a tutoring session. Human tutors understand that a student who is struggling may not need to start at the first step. One glance at their paper might reveal it is the second-to-last step where they’re struggling.
Human tutors use multimedia.
At one point, I started to sense that the child I’m tutoring was too deep in the operational weeds of fractions and had lost sight of what a fraction even is. So I drew some pies. I drew a number line. I drew some area diagrams when we multiplied fractions. I’m tutoring via Google Meet, but I have a document camera plugged in so we can sketch together. He holds his own handwritten work up to his camera so I can get more context on his work.
With my own tutor, I share my screen so we can pair program. She’ll describe something general and watch me try to fill in the specifics. Occasionally, we’ll switch roles. I’ll watch her code and comment.
Effective human tutors coordinate many different forms of media to support student learning. Chatbots use text.
Human tutors create relationships.
I have enjoyed chatting with Jayne Illovsky, a tutor local to the San Francisco Bay Area who specializes in students who feel alienated from math class and school. Here is how she described her process to me:
I take significant time at the beginning of a session to develop relationships with each of my students and tailor the approach to make the learning process fun, exciting, and something they want to look forward to.
I don’t know that my own tutor specializes in social-emotional learning but even still she draws on our history to encourage and challenge me. Certainly, the same is true with the child I’m tutoring. I have known him for ten years and it isn’t hard to see when he is overwhelmed or when he is excited about having cleared an intellectual hurdle, and for me to respond accordingly.
Human tutors are pushy.
Chatbot boosters often praise AI tutors for their patience. This is an attempt to reframe one of a chatbot’s greatest liabilities as an asset. Rather than waiting for the child to pull the tutor into their learning, the skilled tutor should push into the child’s learning.
For example, I decide that we’re done talking about the video games we like and recommend we get to work now.
When he says he had a substitute teacher that day and didn’t get any homework, I make up a few problems.
When I notice he’s taking a long time to solve a particular problem, I decide whether or not to intervene. If he’s on a productive path and working slowly, I’ll wait. If he’s moving along a counterproductive path, I’ll intervene earlier.
Chatbots, meanwhile, have no such capacity. They wait patiently for the student to take their turn in the conversation, whether or not that patience is what the student really needs for their learning.
Human tutors know their limits.
When people note that these chatbots still struggle quite a bit with math, their developers and promoters will often reply that human tutors make mistakes as well.
This has been true in my own tutoring. My own tutor doesn’t have fresh knowledge of every corner of JavaScript I’m trying to explore. The difference is my tutor knows where her knowledge is fresh and where it is stale and conditions her advice accordingly.
“Don’t 100% quote me on that. I’d have to look back in the documentation for SQLite.”
“I actually don’t know why we can’t use JSON.parse() instead of stringify() here.”
Chatbots, meanwhile, present themselves with the same confidence whether they’re right or wrong.
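(For web-curious readers, the answer my tutor would have looked up is a plain JavaScript fact: JSON.stringify serializes a value into a JSON string, while JSON.parse turns a JSON string back into a value, so the two are not interchangeable. A minimal sketch, with illustrative variable names:)

```javascript
// JSON.stringify turns a value into a JSON string;
// JSON.parse does the reverse, turning a JSON string back into a value.
const settings = { theme: "dark", fontSize: 14 };

// Serialize before storing or sending over the wire:
const serialized = JSON.stringify(settings);
console.log(serialized); // '{"theme":"dark","fontSize":14}'

// Deserialize after reading it back:
const restored = JSON.parse(serialized);
console.log(restored.fontSize); // 14

// Passing an object (rather than a string) to JSON.parse coerces it
// to "[object Object]" first, which is not valid JSON and throws:
try {
  JSON.parse(settings);
} catch (err) {
  console.log(err instanceof SyntaxError); // true
}
```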
The Discourse
People right now, people who I believe should know better, are reading this post and imagining that, well, chatbots can ingest all of your previous conversations and that’s kind of like a relationship, plus Gemini 1.5 allows one million tokens now which is more than enough to ingest a student’s textbook, notes, etc, and multimodal models are on the come-up which should let chatbots produce diagrams and graphs and all the other media that math students need, and if we stuff all of those entrails into a grocery bag and scribble on a smile, it starts to look a bit like a tutor, yes?
This discourse is unserious.
This hope for chatbots takes a serious challenge—meeting the vast and varied needs of students—and trivializes it.
The positive framing for this article is that I have just described a product roadmap for AI chatbot tutors, one that they are moving along ceaselessly with every new language model release.
The negative framing is that we are asking a tool that is quite neat to do something that is far beyond its capabilities.
We survey teachers relentlessly. What do you need from us to do this job well? We survey students relentlessly. What do you want from us for your education? Their answers are consistent! Their needs are significant! But we lack the collective will to meet their needs for reasons that are beyond the scope of this newsletter but which are a stain on every one of us.
Help is coming, we tell students and teachers. We’re sending chatbots.
Comments
It's certainly a mistake to classify AI tutors as equivalent to human tutors. It's self-evident (or should be) that the key element of a person-to-person tutoring experience is the relationship between the tutor and the student -- which is why not a single tutor I know is concerned about losing their job to an AI chatbot, nor should they be.
But I don't fully agree with your post. For one thing, the trope that the fancy new technology inhibits critical child-focused learning practices could easily be applied to video tutoring. You're tutoring a 5th grader over video chat, and you're showing this student how to convert mixed numbers into improper fractions. Couldn't you very easily make a case that it would be much better to tutor this kid in person? That holding up a piece of loose leaf to a camera is not the same as having your tutor watch over your shoulder as you work step-by-step? That getting full human context and building a relationship is harder over Zoom? That your student could have a phone with them, could get distracted, could work in a reclined position on their couch instead of in an active learning position at their desk? But OF COURSE we've come to accept that there are many benefits to video tutoring: the ability to easily embed media, quick dissemination of materials, and -- here's the big one -- flexibility to meet kids all over the world.
Now: can an AI tutor build a relationship? No. (Not all human tutors do either.) But can they be pushy? Can they ask for context? Can they use multimedia? Depends on the tutoring program you're using -- much like it depends on the human tutor you're working with. FWIW, I know a shocking number of virtual tutors who don't even own an Apple Pencil, or understand Desmos, or who constantly multitask while working. Will a human tutor be available to you at 11pm after school + soccer game + finishing homework + procrastinating because you're 15 years old? Probably not. And -- of course -- will a human tutor work with you for free? Or $20 a month? Do you even want a human tutor who is charging $20 a month?
Excellent human tutoring is irreplaceable. It's also expensive, hard to scale, and hard to come by. AI tutoring is a fundamentally different beast, and one that is in its infancy. The real question should not be, can AI tutoring replace human tutoring? Rather, can AI tutoring help kids? Even right now, with GPT-4 less than a year old, the answer to that question is yes. Skepticism is vital, and each platform should be considered on an individual basis. But I would not compare infant-stage AI tutoring to the Platonic ideal of human tutoring.
I am currently tutoring Alg 2. One thing that I do that AI can’t do is to know when to stop a student who is doing work algorithmically and move them to something else. Then, when they have been distracted enough, I return them to what they were doing originally. My theory is that you have to forget something and then revisit it before you really get it into long-term memory. This seems to work.