11 Comments

Two teachers made math social for me: Dennis Lee and Ron Abramson. Twenty years before Adding It Up, Lee already had the strands of strategic thinking and procedural fluency woven into his week. Abramson created class working groups where we cheered each other on and taught each other to solve rigorous problems. Since then, I have been a proponent of the social aspect that is critical to learning math. When you have those discoveries in math, you need people to celebrate them with. You need peers and facilitators to challenge whether it really works and to ask why.

As my career has taken me to cities and regions where shortages of math teachers dominate hiring, I waver at times. When a school can't find a teacher, is turning to a digital option more likely to succeed than the long-term aim of fixing the supply and demand of math teachers? What is going to help the hundreds of students in your community today? Because if the fix takes a year or two or more, what will happen to those kids?

I love the quote, "It is people who help us do the difficult things we need to do."

Thanks for letting us share in your introspection here, Usha. The early learning technologies I built were premised on engagement with a QUITE skilled teacher. Now I build technologies that adapt to a broader range of teacher skill, but STILL presume a teacher of some kind. The question, "What would you build if there weren't a teacher in the room at all?" is really tough for me to hold in my head for any length of time. It isn't a bad question, just not one I can easily answer, or imagine myself enjoying answering.

One of the basic problems with a lot of apps and other software in the platform age is that they're not actually designed for efficacy at accomplishing a clearly stated purpose. They're designed as proof of life by people who are trying to hold onto their jobs within vertical design and management hierarchies. Interfaces get changed not because a design team has concluded, after a lot of study of use, that users will accomplish their goals better with a different interface, but to prove that the company still owns the app, that the design team is still needed, and that the user has to learn to submit. It's a performance. Nobody at Microsoft cared whether Clippy was helping anybody write a letter. They cared because they were trying to perform for their bosses that they were still "innovating" and perform for their clients that they'd still need to buy a new version of Word in a year or two.

I think that's what drives the change in Khanmigo that you observed. You critiqued it, so they're showing that they will change the app to innovate and improve. Not that they will make it work better at its stated function, because I suspect they know just as well as you do that it can't, that learning is necessarily and intrinsically social, not solipsistic. Even children who seem to be "self-learning" via intensive solitary reading, coding, or disassembly and reassembly of home technologies don't know whether they've learned until they put it to the test with other people, and they aren't trying to learn in the first place anyway--the motivations are different. A little bot can't trick a kid into self-learning, no matter when it pops up on the screen or what it says, because self-learning isn't a trick in the first place. But Khan Academy is selling a claim, and they're stuck--they have to pretend that some adjustment to the app can counter a criticism that is lodged at something more fundamental. So move the bot, change what it says, change the timing, and just keep doing that. You don't need proof that this is more effective, because you never had proof in the first place that the technology does what you claim it does or will do.

The news out of California that the CSU system has signed a deal for ChatGPT Edu reflects a similar performative impulse. This time it's bureaucrats feeling pressure to do something big on the AI front instead of trying to figure out what these tools are actually good for.

CSU bureaucrats could have waited to see whether anything useful sticks to the wall from all the AI spaghetti being flung at colleges and universities by edtech startups and big tech. Or, for a fraction of what they are paying OpenAI, they could have spun up some CSU-based teams to develop open-source LLM tools for interested faculty to experiment with in their labs and classrooms.

Instead, they chose to make a "bold move in the AI space" and announce that they have "become the first AI-powered university system in the United States—and a global leader in AI and education."

Many EdTech startups seem to believe that AI is the ultimate fountain of knowledge, while teachers are now merely tech support for this “infinitely patient, infinitely compassionate, infinitely knowledgeable” tutor. But this vision misunderstands both teaching and, more importantly, learning.

The underlying assumption seems to be that teachers function like machines—“programming” students by breaking down problems into the smallest possible steps, delivering explicit instruction, and making students repeat tasks until they pass. If that’s all teaching required, then sure—why wouldn’t AI do it more efficiently?

But real learning doesn’t work that way. The best classrooms aren’t built on algorithms; they’re built on relationships. At XtraMath, we believe AI should serve people, not replace them. Our goal is to amplify human intelligence, helping educators focus on their most important job: a personal connection with their students. EdTech should empower teachers, not sideline them. That’s why we aren’t interested in building yet another “stack of ChatGPT prompts in a trench coat.”

Another insightful note! Regarding your point about Unbound Academy, I've been following their progress out of curiosity. Recently I commented on one of their social media posts, and a couple of parents who've enrolled their kids there replied or reached out to me privately. It sounds like they do indeed have teachers supervising the 2-hour AI-led learning sessions--6 guides (teachers) for ~25 students. I also previously asked them how their learning model compares to more intensive, human-led learning models, but I didn't receive a reply. I'm still intrigued by where exactly AI can assist in a novel way in education, beyond what we already know.

The radio and the telephone

and the personalized AI learning tools that we know

May just be passing fancies

and in time may go

Two things: this is my new favorite quote: “trying to approximate a human teacher with several LLM prompts stacked on top of each other in a trenchcoat.”

And … Oakland rejected you for tutoring? That’s sorta like the local go-kart track turning down Dale Earnhardt Jr. as a coach. But it is what it is, I guess!


Hmmm. It really seems like the "Brilliant" folks are doing neat things (starting with the idea that 'text explanations don't work'). I have longed for interactive math activities that would show you what your answer looks like, in its rightness and wrongness, with, say, the distributive property... It might be interesting to explore the "neat" stuff too...
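
For what it's worth, here is a rough sketch of the kind of check I mean for the distributive property. This is my own illustration, not Brilliant's (or anyone's) actual implementation: instead of matching the student's answer as a string, evaluate their expansion of a(b + c) against the original expression at a few sample values and show where the two agree or disagree.

```typescript
// Sketch only: compare a student's expansion of a(b + c) against the original
// expression at sample values, so a wrong answer "looks wrong" at concrete numbers.

type Expr = (a: number, b: number, c: number) => number;

const original: Expr = (a, b, c) => a * (b + c);

// Hypothetical student answer with a common error: a*b + c instead of a*b + a*c.
const studentAnswer: Expr = (a, b, c) => a * b + c;

function compareAtSamples(correct: Expr, attempt: Expr): void {
  const samples: [number, number, number][] = [
    [2, 3, 4],
    [5, 1, 7],
    [-3, 2, 6],
  ];
  for (const [a, b, c] of samples) {
    const want = correct(a, b, c);
    const got = attempt(a, b, c);
    const verdict = want === got ? "matches" : "differs";
    console.log(`a=${a}, b=${b}, c=${c}: a(b+c) = ${want}, your answer = ${got} (${verdict})`);
  }
}

compareAtSamples(original, studentAnswer);
```

Run with the error above, the mismatch shows up as different numbers at every sample, which feels more instructive than a red X.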

A lot of the pre-written Desmos activities - and the Desmos activities I write myself - try to do this. "What's the equation of this line?" The student enters an equation, and Desmos shows them the line matching it. Does it match the line given? That's for them to decide.
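
The same interaction can be sketched with Desmos's public embeddable graphing calculator API (https://www.desmos.com/api). To be clear, the element id, target line, colors, and student input below are made up for illustration; the actual activities are built in Activity Builder rather than directly against this API.

```typescript
// Minimal sketch: draw a given target line, then draw whatever equation the
// student typed, and let the student judge whether the two lines coincide.

declare const Desmos: any; // provided by the Desmos API <script> tag

const elt = document.getElementById("calculator") as HTMLElement;
const calculator = Desmos.GraphingCalculator(elt, { expressions: false });

// The given line the question asks about, in one color...
calculator.setExpression({ id: "target", latex: "y = 2x - 1", color: "#888888" });

// ...and the student's equation, in another.
function showStudentEquation(latexFromStudent: string): void {
  calculator.setExpression({ id: "student", latex: latexFromStudent, color: "#c74440" });
}

showStudentEquation("y = 2x + 1"); // e.g. a near miss with the wrong intercept
```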

So you are no longer Dan Perplexity Meyer?
