It's February, And Now You Understand Why Individualized Learning Hasn't Worked
Cognitively, kids prefer burgers to gyms.
Everyone starts the new year with their best foot forward and on their best behavior. At first, gym attendance is up and fast food consumption is down. But by the end of the first week of February, the location service company Foursquare noticed that interest in both gyms and burgers regresses to the mean.
Over the last several decades, many commentators have written about the “promise” of individualized learning—the promise of putting students on laptops where digital assessments will ensure they will receive a learning resource that is appropriate for their development. Yet some of the most popular individualized learning software has failed to demonstrate significant results with the majority of students in their efficacy studies. A nationwide study of 62 schools engaged in this kind of individualization showed a paltry effect size in math, insignificant results in literacy, and diminished feelings of social connection among students.
The promise of individualization receives new energy with every new technology—from radios to movies to the internet and of course generative AI in the present. To most commentators, it all seems so exciting and inevitable. Their destination is always just around the next bend in the road.
But if any of those commentators have ever canceled their gym membership or abandoned their fitness plans, they should know better: it is people who help us do the difficult things we need to do.
It is a small fraction of adults who can take an individualized exercise plan and stick with it in the absence of a personal trainer supporting them, in the absence of a workout partner encouraging them to get out of bed and go for a run. It is a similarly small fraction of kids in K-12 who can self-start, self-manage, and self-learn in the way these commentators imagine. The other, much larger fraction of kids needs more people embedded more tightly into their learning process, not fewer.
PS. Khanmigo Updates After My Last Newsletter
Last week, I critiqued the way Khan Academy’s chatbot Khanmigo would intervene in a student’s learning process. Within 24 hours, Khan Academy changed Khanmigo’s intervention. Previously, Khanmigo would intervene after:
The student entered an answer.
The student clicked out of the answer input.
The student waited for one second.
Now, Khanmigo intervenes after:
The page has been loaded for about two seconds.
That’s it. Maybe you got up to sharpen your pencil. Maybe you’re still reading the problem. Maybe you’re on the cusp of a revelation. It doesn’t matter. Khanmigo doesn’t know. Khanmigo can’t know. Khanmigo will jam its cute green hat into your learning process two seconds after the page loads whether you needed that intervention or not.
My point isn’t so much that this is the wrong behavior for Khanmigo as that it cannot be right. Khan Academy is stuck in hell, trying to approximate a human teacher with several LLM prompts stacked on top of each other in a trenchcoat. After I pointed out that this is nothing like a human teacher, they rearranged the stack of LLM prompts, and it is still nothing like a human teacher.
Maybe one day technology will be able to sense as much about a student as a halfway-skilled human teacher. Maybe one day students will be as interested in disclosing their thinking to artificial intelligence as they are to even the most curmudgeonly teacher. But today is not that day and Khanmigo is not that technology.
The relevant question for many of you: do you really want to spend your working life trying to teach a sack of GPUs to cosplay as a human teacher? Couldn’t be me. Sounds like hell. If you ever catch a spark for working with the humans themselves, pop on by.
PPS. Khan Academy also fixed a UI bug I pointed out where Khanmigo would (1) offer help, (2) say “Good work!” and (3) encourage the next question simultaneously. A former Khan Academy employee suggested I bill them for all of this free product consultation and I said, no no no thank you but no. I do all of this for the love of the game.
Upcoming Webinar
On February 18, I’ll be giving an EdWeb presentation: Everything That Can Go Right When Students Get It Wrong. If you missed this at NCTM, pop on by.
Featured Comments
There are two fundamentals in my experience. As Dan puts it, (1) you either bet on teachers or you bet on software, and to use his phrasing, I'd add (2) you either bet that learning is more effective with others or is more effective alone. Neither choice is completely binary, but choices are made. Every educational technology makes each choice.
Because the chatbot relies on the student to do all the work of exposing what they know, don't know, and aren't sure about. Students don't enjoy that.
Odds & Ends
¶ The team at Brilliant describes how they combine human creativity and generative AI to create their learning games. I am not made of stone, folks. I think this is pretty neat.
¶ Larry Cuban, a school teacher, superintendent, researcher, and all-around dude who has seen too much to trifle with your edtech triumphalism, has posted some useful remarks about this newest education technology. Key quote:
Promoters of AI have attended public and private schools for nearly two decades and sat at desks a few feet away from their teachers. Such familiarity encouraged AI advocates to think that they knew thoroughly what teaching was like and how it was done. That familiarity trapped promoters of AI into misunderstanding the sheer complexity of teaching, especially the cordial relationships that teachers must build with their students.
¶ Not long ago, I wrote an exposé of Unbound Academy and their Two-Hour Learning model which at that time had just been approved to run a virtual charter in Arizona and was under review in Pennsylvania. It is no longer under review. Pennsylvania denied their application. The denial letter is absolutely scathing.
While a single deficiency would be grounds for denial, the Department has identified deficiencies in all five of the required criteria.
You simply do not want to read that in your application review, folks.
¶ Tons of tutoring news. FEV Tutor, a big virtual tutoring company, shut down abruptly. (This was the same company that piloted Rose Wang’s Tutor Copilot last year.) Informed commentators suggest this is more a story of financial mismanagement than of underlying product quality.
¶ Done Right, Virtual Tutoring Nearly Rivals In-Person. The headline here is rosier than the article, which reports learning growth in one study that was “... about two points on NWEA reading assessments” with “no difference in impacts for English language learners or those with special needs.” Also don’t miss just how much on-the-ground human support was deployed to achieve even those limited virtual results.
¶ My home district here in Oakland, CA, recruited local parents to serve as tutors in schools. (I applied and made it to the in-person interview round, but they were looking for full-time tutors.) They recently published school-level results and an analysis of implementation, which show a lot of variation between schools.