July is falling apart on me! My goal for July was to copy and paste some of my very old (and still very interesting!) writing into this newsletter to allow me to transfer newsletter writing energy into manuscript writing energy instead. But education is too interesting right now. Generative AI, especially, is too interesting. Writing is how I think and there is a lot to think about. This goal is falling apart.
Compromising with myself, I’ll still share an old post here, one that describes a great deal of how I think about math education, but I’ll also share some brief bulleted remarks on my recent reading, particularly about generative AI in education. Keep scrolling.
Introducing the Throwback Post
“Why do students find some experiences in math class enjoyable and not others?” is the question that powered my first ten years working in math education. I wanted some kind of Grand Unified Theory of Student Engagement.
I found early on that students liked experiences that involved the “real world”—the world outside of the classroom. My engagement with “real world math,” especially using digital technologies, got me an audience with a certain crowd of math educators.
But the theory that “students are engaged in math if and only if the math is real world” fails. It fails to account for real world math problems that students dislike. And it fails to account for pure math problems that students enjoy.
The post “Real Work v. Real World” added, literally, a new dimension to my work: a consideration not just of the world of a math problem but of the work students do inside that world. That new dimension would not endear me to strict proponents of “real world math” (a term I can no longer use without scare quotes), but I have always enjoyed our intellectual entanglements, particularly conversations with Citizen Math (née Mathalicious) and a talk I gave that prominent mathematical modelers would call “completely wrong” and “dangerous.” Let’s get into it.
The Throwback Post
“Make the problem about mobile phones. Kids love mobile phones.”
I’ve heard dozens of variations on that recommendation in my task design workshops. I heard it at Twitter Math Camp this summer. That statement measures tasks along one axis only: the realness of the world of the problem.
But teachers report time and again that these tasks don’t measurably move the needle on student engagement in challenging mathematics. They’re real world, so students are disarmed of their usual question, “When will I ever use this?” But the questions are still boring.
That’s because there is a second axis we focus on less. That axis looks at work. It looks at what students do.
That work can also be real or fake. Fake work is narrowly focused on precise, abstract, formal calculation. [The exact nature of fake work would require a lot of refinement. -DM] It’s necessary, but it interests students less. It interests the world less, too. Real work — interesting work, the sort of work students might like to do later in life — involves problem formulation and question development.
We overrate student interest in doing fake work in the real world. We underrate student interest in doing real work in the fake world. There is so much gold in that top-left quadrant. There is much less gold than we think in the bottom-right.
What Else? (Non-AI Division)
Bethany Lockhart Johnson and I just wrapped up the most recent season of our podcast Math Teacher Lounge. We investigated the origins of math anxiety and some of its solutions. If you’re new, I highly recommend our final episode, which functions as a clip show and a recap of our favorite takeaways. I’d be especially interested in your take on our final conversation, one where we wonder about the limits of a teacher’s power to resolve math anxiety.
What Else? (AI Division)
“Mobile and desktop traffic to ChatGPT’s website worldwide fell 9.7 percent in June from the previous month,” reports The Washington Post. This is consistent with my hypothesis that generative AI is less useful than many have claimed (at least in K-12 education) and consistent with the opposite hypothesis that generative AI is extremely useful to P-16 education and the decline in usage corresponds with school holidays. It’ll be interesting to see what happens in August.
Innovating Pedagogy 2023 is a document produced by The Open University (one of the most venerable names in online education) and The University of Cape Town. “Pedagogies using AI tools” is the first section and it describes some of the usual hypotheses people have about how generative AI might be used in the classroom (“personal tutor”) and also some novel ones (“collaboration coach”). At one point the guide encourages students to ask for an explanation and then modify the prompt with lines like “Assume I know nothing about this topic.” This helpfully illustrates one difference between an AI chatbot tutor and a human tutor. You can’t ask the AI chatbot tutor, “Tell it to me again like you know me, like you know what we learned together last week, like you know anything about my intellectual or social context.” The chatbot doesn’t have that context so we can only ask it to assume we know nothing, as if that were ever true about anything, as if “nowhere” were a viable starting place for any conversation.
Freddie deBoer has a fantastic piece thinking about the psychology of AI boosters, their millenarian tendency that makes it easier for them to argue “this will change everything” or even “this will destroy everything” than “this will change some things a little.” It includes this paragraph on education technology that’s worth reproducing in its entirety: “For a quarter century we’ve been promised a technological solution to our educational problems, and for a quarter century ed tech has failed. But these failures never lead to the obvious conclusion; the boundaries are always pushed forward, the insistence that the technology just needs to be further developed. What these claims fail to understand is that the problems in human-computer interactions in education lie with the humans, not the computers, and thus can never be solved on the technology side. Indeed, this misunderstanding is the central folly of our age, the failure to understand that on the other side of every screen is a human being capable of frailty and folly a computer could never understand.”
I've thought about this for my own work and formed a definition: modeling doesn't have to connect math to the physical world, but it does have to connect math to a WORLD. I like the old term "microworld" for what I mean here. "Cinematic universe" or "franchise" are also decent metaphors.
To be useful for teaching, these modeling worlds need contemporary, active, lively fan communities that students can quickly join. Can we build our own little worlds for one math circle or classroom at a time? Maybe, but that's much harder.
In your 2019 panel video, the collection of human creations about mathematical sequences is a big world with a rich history, many living fans, a thriving wiki, lots of publications, and so on. "How did Han Solo make the Kessel run in 12 parsecs?" is a modeling question, even if "Star Wars" is not a real world in the same way that our Sun is real. Likewise, we can model in the world of sequences, IF we know that human-made world well enough.
Not everything we learn about is a world in that sense. By my definition, not all learning is modeling. The video's example with the sequences is modeling, because for that particular audience, sequences are a big and rich world. Maybe even a universe!
The real world vs. fake world commentary got me thinking about how this relates to the post from Sara Vanderwerf yesterday regarding the definition of math... still pondering...