36 Comments

Back in 2013, Kate Nonesuch wrote an article about the place of patience and kindness in teaching. I thought you'd like it: https://katenonesuch.com/2013/05/08/neither-kind-nor-patient/.

It also made me wonder: why do the ChatGPT-4o demos I've seen all use a female voice? In the video you linked with the linear equation, I cringed every time the presenters interrupted the AI. Logically, I know that they are just moving the program along (and GPT can be long-winded!), but it pains me to think that we might be training people to interrupt women even more than they already do.

Thanks for this comment, Rachel. The gendered dynamics of virtual assistants seem well understood by the field and mostly ignored here.

"calculation is to mathematics what chopping onions is to hosting a dinner party. It is important, but it is not central."

Ahh, I have been trying to get this idea across to my 6th graders who believe that all they need is a calculator. I think this metaphor might actually get into some of their heads.

All three points are spot on for AI tutors. However, I think the third is where ed tech in general, including math-oriented app makers, needs to focus. Under perfect conditions, some apps (though not many) can improve academic outcomes. When used in the wild, though, they don't show the same kinds of results. Ed tech needs to be designed for the real world and for the different students who are in our classrooms.

Sure, the self-motivated kid who finishes all their other work early may use the math app available to them, or may turn to their AI tutor when they feel stuck. But what about the rest of the kids? And how should teachers best fit these apps, including AI tutors, into their workflow and lessons, given their strengths and limitations, so that all the kids benefit?

More research and work needs to be done from the perspective of actual classrooms. If this happens, I expect many existing apps will be shown to be not impactful, feasible, or appropriate to the context. Maybe we can develop something better? AI tutors may fall into this category: they may be shown to work across different kinds of math and thinking, or they may work only in specific use cases that teachers will need to understand in order to use them effectively.

Yeah, this is dead on. Laurence Holt illustrated it as well in his recent "The Five Percent Problem" piece. https://www.educationnext.org/5-percent-problem-online-mathematics-programs-may-benefit-most-kids-who-need-it-least/

Just read the Holt article. No disrespect is intended for the researchers when I say that everything gleaned from those studies could have been predicted by any teacher with a few years' experience.

How about number sense? I haven't read about OpenAI engaging students in developing number sense -- fundamental to math.

I would predict we have a while to go before any digital technology is better at developing number sense than Cuisenaire Rods and similar, hands-on tools. The rods will probably always be cheaper, too.

Hi Dan, I am a new subscriber and loving your thinking on maths teaching. As the parent of an 8yo interested in maths, I am looking at ways to make sure that his interest survives the inconsistencies across grades. This is definitely giving me food for thought.

And as someone who does not like onions, love your analogy 😀

Love this part, “In addition to calculating with math, students need to argue, estimate, sketch, notice, wonder, construct, speculate, describe, evaluate, play, and so on."

If you'll forgive the cross-promotional comment, I did a breakdown of the Khans' video and, well, even on its own terms I'm not sure it's neat. https://buildcognitiveresonance.substack.com/p/pedagogy-of-the-omnimodal

May 15 (edited)

First of all, how believable is this video even? This is a best-case scenario, and probably scripted or at least tested. How many "video demonstrations" like this have turned out to be wildly optimistic or straight-up lies? The gaming world is rife with this.

Interesting perspective on why patience may not be good in a tutor. But I'd prefer "slightly impatient." It allows for a level of confidence and comfort when asking questions.

Exactly!

I see all this BS about how teachers are going to be replaced and how this will be the perfect teacher for every student.

I don't even think it was a good tutor at that moment:

https://open.substack.com/pub/buildcognitiveresonance/p/pedagogy-of-the-omnimodal?utm_source=share&utm_medium=android&r=2qo3xm

It makes a bunch of rookie tutor mistakes: interrupting the student, calling sides by the wrong name, interjecting randomly.

"...But I suspect it is equally possible that GPT-4o has been trained on the corpus of small ideas and step-by-step guides that pollute the mathematical internet." Notice they didn't have any word problems.

I would like to start uploading some student work on word problems—incorrect work—into GPT-4o and asking it to identify the area of greatest strength and to ask a question of the greatest use to the student. Like SteveB, I don't doubt it'll parse the word problem fine and that it could guide the student step by step from scratch. But that is, to be frank, really lousy tutoring. Effective tutors figure out what the student has done well and start from that point.

I'd like to have a way to share examples of discerning the kind of question of best use to a student, such as asking an easier version of the same question. In general, I want to show people what *good* tutoring is, since there are a ton of tutors who could do better than "walk 'em through slowly," and some of 'em even recognize that.

My guess is it wouldn't struggle, because most of the word problems you find in textbooks fall into a few well-defined categories ("OK, this one's a mixture problem, that one's a proportions problem...") And textbook word problems tend to have all the information you need and none of the information you don't. Often the "correct" approach is "circle all the numbers you see and find the right formula to plug them into." Something a chatbot should excel at.
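That "circle all the numbers and plug them into a formula" strategy is mechanical enough to sketch in a few lines of Python. This is a toy of my own, not how GPT-4o actually works, and it assumes a fixed proportion-problem template:

```python
import re

# Toy sketch (my own, NOT how GPT-4o works): for a templated textbook
# proportion problem, "circle all the numbers" and plug them into the
# one formula the template calls for.

def solve_proportion(problem_text):
    """Assumes a '3 pencils cost 0.75; what do 8 cost?' style template:
    the three circled numbers fill a/b = x/c, so x = b * c / a."""
    # "Circle" every number in the problem, in reading order.
    a, b, c = [float(n) for n in re.findall(r"\d+(?:\.\d+)?", problem_text)]
    return b * c / a

print(solve_proportion("If 3 pencils cost 0.75 dollars, how much do 8 pencils cost?"))
# 2.0
```

It "works" only because the template guarantees exactly the needed numbers in a predictable order, which is the commenter's point: a chatbot trained on a million such problems should excel at them.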

May 15 (edited)

You have different texts than we do, then :P I also would NOT count on them having set it up to do even the more predictable ones, or to deal with things like adding water and the student having to know that's 0% concentration. ... or, frankly, to be able to tell a mixture from a proportion problem. Yes, they *could* be that well designed but those folks haven't been designing things with pedagogy in mind EVER.

I'm glad your textbooks are more creative than the ones I've seen! I don't know anything about AI but my understanding is that these things aren't really "designed", you expose the bot to a million textbook math problems and correct solutions to them and it bases its response on that. Nobody's hand-coding "this is how you solve a mixture problem."

Assuming you're right (and I think it's a good assumption), then the variance in the language of word problems is going to confound the AI, but it won't admit that.

I wonder how math teachers who are interested in leaning into this technology might teach their students HOW to use AI to support their learning. I initially saw calculators as a crutch for students (and they can be), but over time I developed a better understanding of the role calculators can play and how they can create compelling learning experiences. They don't solve the learning problem, but they can be a powerful tool in the hands of an educator. Will AI be the same?

Interesting comment. I'm old enough to remember that debate about calculators in the classroom. Now, discussions with my teacher colleagues about AI are almost entirely about how students use them to cheat and how to prevent that. I'd like to move on to some other conversation.

Does this article have a follow-up, now that the tech is available?

I am less concerned about AI replacing teachers (not happening anytime soon IMO) as I am about AI changing what it's important to teach. I think we've already seen a loss of interest by our students -- understandably unmotivated to learn the math underneath problems that PhotoMath can do in a second. If the newest ChatGPT can do pretty much any school math word problem, our kids will rightfully be asking why they have to learn to do them without it. It seems to me the answer, as with the introduction of calculators decades ago, is to ask better questions, change what we teach, and reflectively *incorporate* this new tool where it is warranted. Students who know how to use it effectively will be at a huge advantage over kids who don't, in the very near future.

On a much-less-sophisticated level, I was talking to a colleague about Knewton, which he was using, and the frustrations it was creating for his students. Knewton features "adaptive learning," which means the program will send you back to more basic questions if you get the questions in the current problem set wrong. And that makes sense, right? But his experience was that students see themselves taking a step backward, and can even get trapped in a loop of "developmental hell" that they fear they will never escape. Then they get discouraged and stop doing any work at all, because we're human and not robots.

So is it better to have all the students do the same fixed set of problems? One thing I know, being human myself, is that I really like it when I can see a light at the end of the tunnel, when I'm given a task that is clearly finite in length: "Do these 10 questions and you're done!" Personally, I find that much more motivating than "We may never know when you're done, do another 30 problems and we'll see."
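The demote-on-a-miss loop described above is easy to sketch. This is a hypothetical rule of my own invention, not Knewton's actual algorithm, but it shows how a naive version traps an inconsistent student:

```python
# Hypothetical demote-on-a-miss rule (NOT Knewton's actual algorithm):
# any miss drops you a level and resets your progress toward promotion.

def next_level(level, correct, promote_streak_needed=3, streak=0):
    """Return (new_level, new_streak) under a naive adaptive rule."""
    if correct:
        streak += 1
        if streak >= promote_streak_needed:
            return level + 1, 0          # promoted after a streak of successes
        return level, streak
    return max(0, level - 1), 0          # any miss: demoted, streak reset

def simulate(answers, start_level=2):
    """Run a sequence of right/wrong answers through the rule."""
    level, streak = start_level, 0
    history = [level]
    for correct in answers:
        level, streak = next_level(level, correct, streak=streak)
        history.append(level)
    return history

# A student who alternates right and wrong never builds the streak needed
# to advance, and drifts down to level 0 -- the "developmental hell" loop.
print(simulate([True, False] * 6))
# [2, 2, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0]
```

Under this rule, a 50%-accurate student monotonically sinks even though they are getting half the questions right, which matches the discouragement the colleague observed.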

and *all this software* that I've ever seen doesn't really adapt. It just kicks you back to do the same thing, almost always in the same way. Students will find the oddest ways to just get the right answer, ways that often don't hold up at the "next level," and boom, they're kicked back.

I *adapt.* Maybe you need some cognitive frontloading. Take that question that shows 3/4 and 1/9 marked in a circle, with the rest of the circle shaded, where you're supposed to add those 2 fractions and subtract from 1... welp, I'm going to kick back and show you one fraction and build the idea that if 1/8 is not shaded, 7/8 is. THEN we'll figure out the rest...

This *could* be put into a program.
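For instance, here is a minimal sketch of that kick-back-and-scaffold move. The function names and the back-off rule are my own, not from any shipping product:

```python
from fractions import Fraction

# Minimal sketch of the adaptation described above (names and rules are
# my own, not from any existing product): on a miss, back off from the
# multi-fraction question to a single-fraction version that builds the
# prerequisite idea first.

def unshaded(shaded_parts):
    """The target skill: 1 minus the sum of the given fractions."""
    return 1 - sum(shaded_parts, Fraction(0))

def next_task(shaded_parts, student_was_correct):
    """Same problem on success; a simpler prerequisite version on a miss."""
    if student_was_correct:
        return ("advance", shaded_parts)
    # Cognitive frontloading: drop to ONE fraction so the student first
    # builds "1/8 shaded means 7/8 unshaded," then rebuild from there.
    return ("scaffold", [shaded_parts[0]])

# Original task: 3/4 and 1/9 marked; the student misses it.
task = [Fraction(3, 4), Fraction(1, 9)]
print(unshaded(task))                              # 5/36
print(next_task(task, student_was_correct=False))  # ('scaffold', [Fraction(3, 4)])
```

The point is that the adaptation changes the *task*, not just the difficulty level, which is what the software the commenter has seen fails to do.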

It is also very telling of how much learning is actually taking place when a student continually responds to your questions by first saying, "Well, you told me that..."!
