# Chatbots Have a Math Problem and a People Problem

### A conversation with Sal Khan and a bunch of education foundation reps.

Last week, I was invited to chat behind closed doors with a working group on generative AI consisting of a bunch of representatives from education foundations. This group has money to pass around and is wondering, "Should we pass this money around to generative AI projects? How much and which kinds?"

I was invited to offer my perspective on classroom applications of generative AI. My perspective, as you may know, is more pessimistic than the median education technologist's and certainly more pessimistic than the other guest this group brought in for the conversation: Sal Khan. Whereas I predict some modest quality-of-life improvements for teachers and students over the next five years, Khan has claimed that AI tools like his Khanmigo represent "the biggest positive transformation that education has ever seen."

I gave some remarks to introduce my position. Khan did the same, and then there was some moderated Q&A. I thought I'd share my prepared remarks with you all.

My opinion on generative AI in education is unusual in the discourse. In the middle of two factions, one saying this will disrupt education negatively and another saying this will transform education positively, I am here to offer a different perspective about generative AI, about chatbots in particular, a *brave* perspective which is that:

**It's neat.**

Depending on where you fall in the discourse, you might hear some subtext underneath "it's neat." I want to tell you that you are not wrong. There is subtext and I want to bring it up to the text.

To the pessimists, when I say, "It's neat," what I mean is, **"It will be fine."** ELA teachers will undergo a bunch of soul-searching. They'll have to re-think assessment and assignments, for example, but math did this with graphing calculators and Wolfram Alpha and Photomath. You will find your footing again.

And to the optimists, the operators, funders, prognosticators, and newly minted genAI experts and consultants who sent the hype cycle skyrocketing in Q1 and Q2 this year, when I say, "It's neat," what I am saying is **"It's just neat."**

What I am saying is **"I don't think you understand the assignment."** Or at least we understand the assignment very differently.

### The Assignment

The assignment is to take knowledge that the brightest adults in our world developed over thousands of years, and help *kids* learn it in *twelve*, all while they're beset by acne and hormones and body odor and social media, all within a social and economic system that provides fewer and fewer resources for working class kids, that provides instead higher and higher rates of child poverty, child homelessness, child malnutrition, incidences of personal and systemic racism, the incarceration of their parents, all while the answer to the question "what will happen if I work hard in school?" seems more likely to be "lifelong student debt and permanent entry-level wages" rather than "financial prosperity."

That is the assignment, and you bring to me … a chatbot. A CHATBOT??

Look, it's neat … it's neat like a kid showing up to a raging house fire in a firefighter's suit telling the crew "I got this!"

Like neat, I love your hustle, but I gotta … we gotta … look: **we have a lot of work to do here, okay?**

Chatbots struggle enormously with that work. They struggle in two particular ways.

**Chatbots have a math problem and a people problem.**

### Chatbots have a math problem.

Chatbots have a math problem, which is that they only speak **a particular dialect of mathematics**, one that students who are learning math do not easily speak.

When you try to help students learn something new, that knowledge is not formed or expressed particularly well. Kids communicate in gestures, sketches, scribbles, murmurs, ums, and ahs. They overgeneralize earlier ideas, taking them well past their "use by" date. Their train of thought derails and then gets back on track all in the same sentence.

To communicate with chatbots, students have to shape all of that thinking into a single expression of written or verbal text. They have to package it up and ship it to the chatbot. The chatbot then does something with it that I would describe as truly unprecedented and very neat and ships it back to the students who must then map the formal contents of that package back onto the informal dialect they speak.

That translation process represents a cognitive tax, one which students pay coming and going, one which they pay alone.

There are other ways that teachers, students, and devices can interact, ways that accommodate and even invite the informal mathematical dialect.

For example, in the activity Function Carnival from Amplify's core math curriculum, we start by asking students to watch an animation, to say something they noticed, to draw the path of the cannon person in the air with their finger, to sketch it on a graph.

Noticings, gestures, sketches. We invite, accept, and develop all of them. Students need to be understood in their mathematical dialect and chatbots do not understand it.

### Chatbots have a people problem.

Chatbots have a people problem which is that they have a problem with people.

Not chatbots themselves, which are relentlessly peppy and upbeat, and I love that about them. Rather, the *chatbot theory of learning*, which is just an upgraded version of the *personalization theory of learning*, has a problem with people. That problem is that it has no idea what to do with the people in the classroom.

Personalized learning says that, in a classroom with even a few students, there is far too much variability between learners for the teacher to manage, much less use. Their learning paths are so different as to be unrecognizable one to another. Personalized learning declares that these students are *liabilities* to each other, rather than *assets*. Therefore we must segregate the students from each other and provide them with a personalized set of resources, often delivered through a laptop and headphones.

This theory of learning has not worked relative to other models of schooling because, in this model, the teacher functions as second-tier customer support, essentially handling the student questions that the software could not, and teachers are good for much more than this.

This theory of learning has not worked, additionally, because students do not buy it. Students do not believe that their classmates are liabilities. Much more often students *like* their classmates and do not feel limited by them and would rather learn *with* them if the option were available.

And students *especially* do not believe that they themselves are liabilities to their classmates. No one wants to believe that about themselves.

Skilled teachers know how to work with this learner variability, creating moments of learning that are greater than the sum of what individual learners know.

For example, in the previous lesson, a teacher might identify two students with interesting graphs of the cannon person's height off the ground, tell the class to stop what they're doing and check the graphs out, to name what's different and the same about them, to name what's good about both, to make a third graph that includes the best features of each.

I invite you to wonder what students have learned about math and about themselves as learners in that kind of moment.

When I say the chatbot theory of learning has not worked, I mean the Gates Foundation funded a national study by RAND that found no significant results for the personalized learning cohort in reading and significant, if pretty modest, results in mathematics (p. 34). All while classroom discussion has an effect size, via Hattie, many times that of personalized learning.

All while RAND found the result of telling students that they and their classmates are liabilities to each other, rather than assets, was a *decrease* in feelings of belonging and community in the personalized learning condition (p. 24).

This is the legacy chatbots are poised to inherit.

Look: math students are in school to figure out math and figure out themselves at the same time, all against a social and economic backdrop that has rarely been more oppressive to kids. If you think the answer here is "chatbots," then I'm telling you we do not understand the question in the same way.

Chatbots are neat, but in K-12 math education, they are not much more than that.

### Odds & Ends

We just kicked off the new season of my **Math Teacher Lounge** podcast with **Bethany Lockhart Johnson**. In this season, we're tackling fluency: what it is, what it isn't, and how to develop it. In the first episode with **Jason Zimba**, I confess to my own conflicted feelings about fluency and my hope that I'll resolve that conflict during the season.

Another podcast I'm enjoying: Teach Me EdTech from **Jess Stokes**. The most recent episode discusses "Burnout-Proofing EdTech Products for Teachers," in which edtech operators kind of admit that "even while we're promising good stuff, we're asking teachers for time and energy."

**OpenAI** released multi-modal functions for ChatGPT. You can now talk to it and upload images. When asked recently what it would take for chatbots to become of actual transformative use in classrooms, I named ambient, multi-modal sensing as one of a bunch of prerequisites.

**Ben Werdmuller** compares generative AI to web3, which experienced a massive hype cycle like genAI and is now settling into its plateau of productivity. I find this pretty compelling.

There is a science of reading. Where is the science of math? **Michael Pershan** describes the differences between reading and math and why they likely necessitate *sciences* of math. No one writes in this genre like Pershan.

I learned a lot from **Sequoia Capital's** article on Generative AI's Act Two, which admits what few actors will: that "wow, we kinda let the discourse get away from ourselves back there." They name *user retention* as a challenge going forward. Lots of people created lots of accounts on these generative AI platforms, but people don't use them as often as they use incumbent apps.

**Education Week** asked teachers to tell them "What are some examples of how technology is used poorly to teach math?" They offered 25 quotes, which are by no means comprehensive or statistically representative, but I find myself parched for student and teacher perspectives on math edtech right now.