I was invited to teach a class at Stanford University recently, one led by AI researcher Dora Demszky and attended by a lot of computer science students interested in using AI to do good in education. I tried not to be too much of a downer here, even as I pointed out the dampened teacher uptake of AI and the reality that lots of edtech companies, in spite of their renown, are not helping 95% of students.
During Q&A, a student asked me about the key choices that define a company’s direction in edtech. Three came immediately to mind but I mentioned this one first: “You either bet on teachers or you bet on software, and if you bet on software, you eventually learn you have to build a teacher.”
Exhibit A: Khanmigo Updates
From its earliest designs, Khan Academy made a big bet on software-as-a-teacher and a very small bet on teachers-as-a-teacher. I’m speaking descriptively here. With Khan Academy, the teacher’s main work is to get their students logged into Khan Academy. That’s the bulk of it.
Khan Academy is not an unusual edtech company in this regard. But something interesting happens whenever these companies try to broaden their appeal beyond opt-in students, beyond nerds looking to get nerdier on their own time, to students who can't opt out, students who are required to go to school and learn stuff they don't necessarily want to learn: the companies realize that they have to build a teacher.
Here is what I mean. Previously, the icon for Khan Academy’s AI helper Khanmigo sat in the lower-right corner of every practice screen waiting patiently to be activated by kids. Now, something that kids do in Khan Academy will occasionally trigger Khanmigo to ask more directly, “Need help?”
When does Khanmigo decide to intervene like this? Under what conditions? What does it all mean? These questions have transfixed me recently.
Let me tell you what I think is going on behind the scenes at Khan Academy: kids aren't activating Khanmigo. My guess is that Khanmigo gets called up off the bench in fewer than 10% of student sessions.
This isn’t because Khanmigo is a bad AI tutor. (Though sometimes it’s too helpful and other times it isn’t helpful enough.) But Khanmigo asks kids to do something most of them do not want to do—read academic writing about math. What kids would rather do in Khan Academy (and software like it) is toss answer after answer into the automatic feedback slot machine to see if they get their computer confetti.
Generally, schools deploy teachers to encourage students to do the things they don’t want to do. But Khan Academy has imagined a very limited role for teachers. So with this Khanmigo update, they have tried to build a human teacher, or at least build human teacher-like features, out of software.
This is hell.
I sincerely hope Khan Academy is happy in this work because, to me, it sounds like hell. Incredibly difficult. Very low odds of success. A one-way trip into the Uncanny Valley.
Human teachers intervene into student thinking all the time, triggered into action 100 different ways by 100 different students. A student is working faster than expected. Another student is working slower. Another student isn’t working at all. A different student has an error, but stands at the precipice of a revelation. Another has every correct answer and would benefit from a question to deepen their thinking. Two students have different answers but are thinking similarly enough that they should talk to each other. Enough students have the same question to necessitate a whole-class response. Like Khanmigo, a teacher will sometimes say “Need help?” but far more often they’ll say, “Let me help,” and start helping.
Khanmigo has exactly one intervention here—asking “Need help?”—and it intervenes under exactly one condition, which I think I have figured out after trying to trigger it about a dozen times. Khanmigo asks, “Need help?” after:
You enter an answer into an input.
You click out of the input.
One second elapses.
It doesn’t matter if your answer is right or wrong. It doesn’t matter if you have just clicked “Check” on your work. Khanmigo will always ask if you need help one second after you click out of the input. This leads to interesting situations where Khanmigo asks if you need help while simultaneously telling you “Good work!”
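Based on that dozen-trial experiment, the trigger seems to reduce to a single boolean condition. Here is a sketch of what I infer the logic to be; every name here is my own invention, not Khan Academy's actual code:

```python
# A guess at Khanmigo's "Need help?" trigger, reconstructed from
# black-box testing. All names are hypothetical.

DELAY_SECONDS = 1.0  # observed: the prompt appears one second after you click out


def should_offer_help(answer_entered: bool,
                      input_blurred: bool,
                      seconds_since_blur: float,
                      answer_correct: bool) -> bool:
    """Return True if the 'Need help?' prompt should appear.

    Note that answer_correct is ignored entirely -- the prompt fires
    whether the answer is right or wrong, which is how a student can be
    asked 'Need help?' and told 'Good work!' at the same time.
    """
    _ = answer_correct  # deliberately unused
    return (answer_entered
            and input_blurred
            and seconds_since_blur >= DELAY_SECONDS)
```

If this inference is right, the entire intervention policy fits in one conditional, with no input at all from the correctness of the answer, let alone from the richer signals a teacher reads.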
I am not here to critique this particular trigger or intervention, but I will note for emphasis: Khanmigo has one where human teachers have 100. This is what happens when companies realize, perhaps belatedly, that they need human teachers and try to build one out of software. They get 1% of a teacher.
I know a way out of hell.
Maybe it’s possible. Maybe you can program a computer to intervene with the sensitivity and skill of something like the median teacher. To me, it sounds like hell, working to create something that is a smidge more teacher-like when real teachers are standing right there.
The way out of this hell is to bet on teachers instead, asking yourself, “What do teachers need that technology uniquely provides?” Asking yourself, “How do I use technology to bring out the best in these humans?”
With the Desmos Activity Builder, we made several specific bets on teachers and technology.
Teachers are uniquely effective at launching activities, at developing “collective effervescence” among students, setting the stage, and getting everyone locked in. So we bet on technology that would help teachers pause and pace their lessons, that would help them get students on and off computers quickly.
Teachers are uniquely effective at monitoring student thinking, at seeing students operating below their best and drawing their best out of them, at finding the right in the wrong, at having conversations that transform the ways kids think about math, about themselves, and about each other. So we bet on technology that would let teachers see student thinking in something close to its raw form—like “violently scorch the pizzas” above—rather than (for example) class-wide aggregations of multiple choice responses.
Teachers are uniquely effective at responding to student thinking, at knowing when and how to intervene, so we bet on technology that would let them send written comments and snapshot student work for use in whole-class discussion.
How Each Bet Pays Out
If you bet on software, the good news is that novice teachers will find your software difficult to screw up. The bad news is that expert teachers won’t get much more out of your software than novices. And year after year, you will learn new lessons about the human dimensions of teaching and struggle to turn those lessons into software.
If you bet on teachers-as-a-teacher, your work is different. You will likely see greater variance in outcomes. Veteran teachers will likely see much better results than novices, but some of the novices might not see better results than if they had used software-as-a-teacher.
If you bet on teachers, you are betting on humans, including their assets and liabilities. If you send your work through humans as an amplifier, some will amplify it in uninspiring ways. If you choose this path, your work is to develop teachers as much as their work is to develop students.
I understand why edtech founders so frequently choose software over humans. Software compiles. Humans don’t. When software defies your expectations, you debug and re-compile. When humans defy your expectations, you have to talk to them and talk to them and talk to them some more. You have to imagine yourself happy here.
This is one of the most consequential questions you will answer if you decide to build software for learning. It defines your ceiling. How much do you think teachers matter?
“I tried not to be too much of a downer here” - please do be a downer when dealing with well-meaning edtech peeps; as we (but not they) know, edtech will not solve all.
“lots of edtech companies, in spite of their renown, are not helping 95% of students.” - and they never have.
“You either bet on teachers or you bet on software” - years ago, when edtech companies introduced what they called integrated learning systems, many marketed them as “teacher proof.” Enough said.
Bet on the teacher is a great way to put it.
On a similar note, across all disciplines where everyone is trying to inject AI, when will the consequences of outsourcing the easy or laborious thinking affect the ability to do advanced thinking? Level 3-4 autonomy is dangerous in transportation because it will inevitably lull the operator into distraction yet requires their full attention in the moments of most peril - what will it be for learning?