Generative AI Asks Teachers to Eat Scraps From the Table
Here is how to turn them into a meal.
New technologies are rarely built for teachers or by teachers. The internet was built by the defense industry. Smartphones were built for the consumer market. Social media was built for the ad market. Gmail was built for ads, as well, and eventually corporate sales.
These technologies are useful for teachers and learners by accident, not by design. Teachers encounter these technologies as scraps underneath the table of commerce and must decide—through ingenuity, through experimentation, and above all else, through strong theoretical ideas about technology and learning—whether and how they can form a nutritious meal. Obviously the same is now true of generative AI.
If you want to do anything useful with generative AI in learning, you need to recruit two guides where most people in this space seem to only have one. You need a theory of technology, answering the questions, “What makes this technology powerful? Why and how do people use it?” And you need a theory of learning, answering the questions, “How do people learn? What do students want and need?”
Generally, when people say, “Generative AI technology will transform learning,” they can tell you quite crisply how the technology works but will mumble if you ask them how learning works.
You need a clear answer to each of those questions.
If you are a technologist and do not have answers to those questions, you are trying to navigate and colonize a world that you do not understand.
If you are a teacher and do not have answers to those questions, you will allow technologists to convince you that table scraps are a meal.
My theory of learning in one (1) paragraph.
My theory of learning is that students come to class with ideas about every subject. Students need opportunities to express those ideas, to feel affirmed in the value of their ideas, and then to receive resources that help them develop their ideas. That’s the whole game.
What is the theory of learning here?
Let’s look at three applications of generative AI to learning. Can you tell me what they think learning is? It isn’t easy.
For one example, at last weekend’s AI x Education conference, Kristin DiCerbo, Khan Academy’s Chief Learning Officer, said the following about the tendency of Khanmigo (and generative AI more generally) to just make stuff up:
We saw a classroom in some of our beta testing where it was almost a game for students to try to catch it being wrong. And saying like, “Hey, I caught it being wrong. I got a wrong answer!” And it was kind of fun. So that is an interesting approach to generating that mindset that this isn't something you just rely on to give you a hundred percent the right answer.
I can believe that these AI hallucinations will improve. I can even believe they'll disappear entirely. I can certainly believe that we are at the early stages of learning about generative AI in education and should not expect perfection. But I cannot believe that these hallucinations are a benefit to novices in any way. A novice who needs Khanmigo by definition does not have access to the knowledge that an expert would use to critique Khanmigo.
This is a theory of technology, not a theory of learning. It starts by assuming “generative AI is the answer” and then works backwards to the question, “What if it were good for learners to be lied to occasionally by their teacher?” Help me understand the theory of learning here.
I feel similarly about the idea that students would benefit from having a conversation with George Washington about the Revolutionary War or Pythagoras about the Pythagorean Theorem. Obviously, this sort of thing is technologically innovative—a fantastic demo—and probably not harmful to students.
But the theory of learning here isn’t obvious to me. It is implied that students will learn ideas better when they learn the ideas from some of their earliest originators. But this ignores all the ways people are unreliable and self-serving narrators of their own histories. It ignores the many, many people who know lots about their discipline but can’t teach it in a way that is effective or interesting.
It also ignores roughly thirty years of education research findings that teachers have specialized mathematical knowledge in addition to the usual kind. For example, the common ways students come to understand the Pythagorean Theorem. Common wrong answers to questions about the Pythagorean Theorem. How the Pythagorean Theorem ties into later ideas and ties up earlier ideas. Help me understand the theory of learning here.
Here is another example where a concrete theory of learning, or perhaps a different theory of learning, would have been helpful. In the image below, a technologist has helped his kids program their “virtual twins” using generative AI.
The kids loaded up some of their interests and attributes and voice data into the model which now functions as their tutor. The daughter’s twin has her sarcastic sense of humor. The son’s twin has his interest in military history. It is a genuinely impressive feat of technology.
But if your theory of learning, like mine, includes the idea that we need to develop a kid’s existing ideas, not just affirm them, you start to see problems in the exchange above.
The daughter’s virtual twin has echoed back to her, sarcastically as programmed, some very pessimistic ideas about the value of math, no doubt drawn probabilistically from all the text on the internet that includes many pessimistic ideas about the value of math.
Here, the daughter might benefit from hearing from someone less sarcastic! Someone less prone to the internet’s pessimistic groupthink about math. Someone who could affirm the challenges the daughter has faced but who might then represent an alternate perspective on math. Someone who has the curriculum, pedagogies, and beliefs to change, rather than strengthen, her preconceptions about math. Someone less like her, in other words.
The need to experience people less like us is a huge reason why we do school. We bring different kids and grownups together in schools intentionally—not by accident, convenience, or financial necessity. When different people share contrasting views of a thing, it often helps everyone understand that thing better than any one of them could on their own.
It is never too late.
If you are a technologist or a teacher, it is never too late to pick up a compass, a theory of learning, as you navigate the terrain of edtech. You can start by finishing one of these sentences:
I think learning is …
I think learners need …
I promise I will not critique your answer. I will be thrilled to find a concrete theory of learning anywhere in these frothy conversations about generative AI.
If you are a technologist and do not have a concrete theory of learning, you are navigating the world of edtech without a compass and blaming the people you meet there for not appreciating the tools you brought with you.
If you are a teacher and do not have a concrete theory of learning, you will succumb too easily to a marketing campaign that is without precedent in my lifetime, a campaign designed to convince you that generative AI’s transformation of learning is inevitable, designed to convince you that these scraps falling from the table of commerce are, in fact, a multi-course meal. Your theory of learning will tell you whether or not they’re right.
My theory of learning tells me:
Don’t eat the scraps. Demand a meal.
“How Schools Are Coaching — or Coaxing — Teachers to Use ChatGPT” by Olina Banerji in EdSurge. This is such a fantastic piece of reporting. It opens with a couple of surveys indicating that teachers are fairly ambivalent about the value of generative AI to their classrooms. Then Banerji interviews several district and regional tech directors about the (IMO very different) reactions to that ambivalence.
I thought Chris Dede’s plenary lecture at last week’s AI x Education conference was extremely sharp. A good measure of optimism alongside a lot of pragmatism from someone who has lived through basically all of the previous AI hype cycles. Some potent metaphors. Highly recommended.
“The Creative Ways Teachers Are Using ChatGPT in the Classroom.” I never get tired of reading ethnographies of teachers learning and adapting new technologies.
A trio of researchers, all people with strong theories of technology and teaching, write about their use of AI in teacher professional learning.
I chatted with Marc Lesser on his No Such Thing podcast about AI and math education.
I’m aware that I’m rapidly typecasting myself as some kind of zealot against generative AI, but I feel like much more of a zealot in favor of good teaching, good learning, good math, and good technology, some of which overlaps with generative AI and some of which doesn’t. Maybe it’s helpful for me to attach my priors here, none of which have changed much in the last six months.
Generative AI will not transform education in the next five years. AI won’t change any of the ways students learn—including how we organize and resource them, how we staff their schools, how their achievement sorts into different distributions—in any appreciable way over the next five years. If pressed I’d guess much longer.
Generative AI will likely result in some quality-of-life improvements for students and, especially, for teachers, though I don’t believe those improvements will be so large that they’ll change the minds of teachers who were planning to leave teaching, to pick one benchmark.