Math EdTech Roundup • April 2024
My five-ish favorite articles from the last thirty days, plus commentary.
¶ First, some housekeeping. Here are a few places where you will find me running my mouth this next month:
April 15. San Diego, CA. 11:30 AM Pacific. The Difference Between Great Teaching & Great AI. The team at ASU+GSV invited me to play the heel at their upcoming “AI Revolution” conference and I have decided not to waste the opportunity. Find me at the main stage.
April 18. Online. 4:00 PM Pacific. How to Invite Students Into More Effective Math Learning. California Math Council Virtual Learning Series. We’ll look at some lessons and video clips and discuss the reasons why certain approaches to teaching draw students into learning more than others.
May 7. Naperville, IL. Creating Mathworlds That Students Love to Visit and Hate to Leave. I have been pouring tons of time, energy, and words into a book and I’m excited to come up for air and share some of its ideas in workshop form with a bunch of interesting people.
¶ I found Ezra Klein’s interview with Ethan Mollick really thoughtful. This comment from Klein speaks to some of my hesitation about using generative AI at any point in my writing process. (Here is a link into that moment in the podcast.)
But almost always when I am stuck, the problem is I don’t know what I need to say. Oftentimes, I have structured the chapter wrong. Oftentimes, I’ve simply not done enough work. And one of the difficulties for me about using A.I. is that A.I. never gives me the answer, which is often the true answer — this whole chapter is wrong. It is poorly structured. You have to delete it and start over. It’s not feeling right to you because it is not right.
Generative AI seems eager to help writers do the wrong thing in the right way. Mollick, for his part, doesn’t have a response here.
¶ Education Week has administered a new survey to teachers about generative AI, which I hope will mean a steady stream of articles about the state of play over the next few months. The headline here says “Teachers Desperately Need AI Training. How Many Are Getting It?” Authors don’t generally choose their headlines, but it’s worth pointing out that the article has data for the headline’s second sentence, not its first. Education Week reports that 71% of teachers have not received any PD about artificial intelligence in the classroom. We do not know how many of those teachers feel desperate, or even a little sad, about that fact.
¶ Fast Company profiled Khan Academy and Khanmigo. Several items of interest from behind the paywall.
Fast Company reports that Khan Academy anticipates paid use of Khanmigo by “anywhere from 500,000 to one million students and teachers by the fall.” That’s 50% lower than the projections the Wall Street Journal reported a month ago—“a million or two million.” I am, of course, curious why Khan Academy has downsized its projections.
“… part of Khan Academy’s pitch to school districts—which typically pay $35 per student per year for the software—is that teachers can monitor student interactions with the AI to spot attempts at cheating or other inappropriate behavior.” If I were advising Khan Academy on user growth, I would note that what they’re describing here as a feature will be felt by teachers as homework. Khan Academy is saying, “We can help you solve a problem that we have introduced.” This is a burden, not a gift.
“After a pilot program last spring with 90 teachers and 800 students, about 70% of teachers reported the tool was helpful as an assistant, while 77% of students rated it a four out of five or higher.” I’m not picking on Khanmigo specifically here (when I am, I promise you will know) but rather chatbots generally. I have helped build edtech products for teachers. I have piloted them and surveyed teachers using items similar to the Khan Academy team’s here. If I had seen a result like “about 70% think this is helpful,” you would have seen me sprinting back to the whiteboard. I would have called up two dozen of the other 30% at minimum. This is evidence IMO that chatbots lack product-market fit, to say nothing of a killer app, in K-12 education.
Khan Academy is still trying to figure out what to do with hallucinations. They’re obviously too far downstream from ChatGPT to fix them. But how do you even talk about them? In previous interviews, Khan Academy has described hallucinations as a benefit to learners, suggesting that students might appreciate having a teacher who will confidently lie to them every now and again as a treat. Chief Learning Officer Kristen Di Cerbo described hallucinations as “fun” and “a game.” Now Khan says, “The moment that the AI feels like it’s not giving you good feedback, if it feels like it’s getting confusing, talk to someone else, talk to your teacher, talk to a parent about what’s going on.” Honestly, I can more easily believe we’ll solve the issue of hallucinations entirely than I can believe they’ll ever be anything but a liability for learners.
¶ MIT released some guidance on generative AI in education that I think is worth your time. It is a cheap shot to say that most guidance on generative AI in education feels like it was written by generative AI, with several artless bullet points underneath the same four category headers as everyone else. But the co-authors of the MIT guidance have collectively studied the adoption and lack of adoption of education technology for decades and offer useful context, solid recommendations, and good writing. The description of generative AI as an “arrival technology” is extremely helpful, for example, as is this quote from Justin Reich directed to school leaders:
Most of the products marketed at you are going to not be useful. Anybody who tells you they have a groundbreaking tool… I don’t think they’re groundbreaking. Everybody says they’re changing the game. The game doesn’t change.
¶ Mike Petrilli recommends an idea that we have already tried:
It’s now technically and financially feasible to put cameras and microphones in classrooms nationwide to collect detailed information about teaching and learning. Breakthroughs in artificial intelligence will soon allow us to analyze such data to gain insights about curriculum implementation, effective instructional strategies, grouping practices, student discipline, and much else.
We already tried over-instrumenting and over-analyzing K-12 learning data with school models like AltSchool and platforms like Knewton, both of which can be considered failures by their own standards. I despair for edtech if the people who are paid to pay attention to this stuff, like Petrilli, can’t recall even recent history.
¶ Veterans of previous math wars urge math educators not to engage in a new one.
¶ ChatGPT amnesty.
Send in or comment with an article about math, education, or technology that you read and liked in the last 30 days.
“The moment that the AI feels like it’s not giving you good feedback, if it feels like it’s getting confusing, talk to someone else, talk to your teacher, talk to a parent about what’s going on.”
Er… how will a student figure out whether the AI is confused or confidently hallucinating?
What's the point of Khanmigo, or any other chatbot that aims to help students learn maths anyway? What is the argument here?
Is it that teachers cannot pay enough attention or provide enough support, guidance, and feedback, and hence we need such a bot to help?
Why don't we simply take a closer look at the issue here instead?
- Why can't a teacher do their job properly?
- Well, the class sizes are too big to provide a more personalised approach, let alone differentiated instruction.
- Why don't we downsize the classes then?
- Seriously?! There is already a huge teacher shortage, and you are suggesting we create more classes?!
- Why is there a teacher shortage?
- Well, young graduates don’t consider teaching a well-paying, respectable job, even though some of them really want to teach. Veteran teachers are quitting for the same reasons, plus an overwhelming workload.
- This is the issue. Why don't we start from here?
On another note, why is Khanmigo paid anyway?!
Not to add to your reading list, Dan, but here's a really well-reported story on recent failures to turn chatbots into tutors: https://www.the74million.org/article/a-cautionary-ai-tale-why-ibms-dazzling-watson-supercomputer-made-a-lousy-tutor/
Hope our paths cross in San Diego!