10 Comments
Feb 28 · Liked by Dan Meyer

Dan, I loved how you asked ChatGPT to "find the kernel of truth" in the student's answer and build on that! I agree, it is what many/most/all human teachers do, and it helps build understanding from what the student already knows.

What you've described with Khanmigo doesn't recognize what the student has done well, so the student doesn't feel seen. The message is "all of this is wrong, we need to start at the first step," instead of noticing what is right and supporting the student's learning from there.


Khanmigo is basically saying, "It's ok if the wire-mesh monkey mother is not comforting, it's still able to feed the little baby monkey". The problem is that we've run this experiment and we know the outcome. The baby monkey lives but that's about it.

Feb 28 · Liked by Dan Meyer

What Dan did here with Khanmigo could have been done by anyone. Like, for example, a reporter writing a story about the new whiz-bang AI tutors. A reporter at MotorTrend doesn't just transcribe statements from the car-company spokesperson, they actually get in the damn car and drive it around and tell you what they think of it. It's a shame that a product our children may end up spending hundreds of hours with can't get the same serious evaluation that Consumer Reports would give a toaster.

Feb 28 · Liked by Dan Meyer

Long ago I was the design chief at a new online learning high school. Our plan was to find the best teachers in So Cal -- legendary chem teacher, legendary calculus teacher, etc. -- and turn their polished, classroom-tested methods into visual learning paths and living textbooks. I was a fairly successful science and then math teacher myself before I moved into software and curriculum design, and I knew that it takes three years to hit your stride as a classroom teacher, five years before you can say you know your trade. But every time a great teacher retires, all that hard-won instructional knowledge is lost.

We got off to a great start but the dot bomb put an end to our company almost overnight, and in its place... Sal Khan! Drawing rabbits with different colored ears and teaching (lecturing on) every subject under the sun, from scratch, out of his own limited experience, with zero feedback from the students. Two very different visions of what kids deserve.

Don't get me wrong. Khan Academy slowly, painfully evolved into a useful resource (for B+ and A students who want a second take on the material), but it is limited by its original sin: contempt for classroom experience; and it pushes out better alternatives. I'm glad you're taking them on, Dan. In fact I thought -- and still do, after reading the recent "journalism" from The Atlantic ("Why Non-Profit Education Fails") -- that Amplify qualifies as a company with a stage-three infection, in Parkinson's taxonomy of company incompetence and jealousy, and thus is unsalvageable. But maybe it's only in stage two, and in that case, as Parkinson himself said, the cure for such a company "must come from the outside." With your presence at Amplify I'll wait and see!

Feb 28 · edited Feb 28

Another great essay, Dan. A couple of quick thoughts:

1. When you provided a more thorough prompt for ChatGPT, it's worth calling out that it's you, Dan, the human math teacher, doing most (all?) of the important cognitive work. You're relying on your experience and knowledge to make better use of the tool. My tiny quibble: ChatGPT didn't really "know" you did something of value; rather, you, Dan, had prompted it in such a way that the text it generated in response to your input was better aligned with what a good teacher would do. There's no understanding happening on the other end.

2. I'm a little surprised that "hallucinations" account for only 5% of your tepid feelings. The term is a misnomer: LLMs do not hallucinate, they make predictions about what text to produce as output in response to the text they receive as input. This works pretty well for natural-language interaction, since most of the time our conversations do not have some definitive right or wrong answer. But when LLMs grapple with a math problem that does have a single solution, they aren't "computing" the answer; they are instead predicting what "math text" is responsive to the input they've received. There are workarounds for this: ChatGPT-4 will now write some Python code when it recognizes it's been given a calculation to solve. This is kinda cool and impressive, but it also underscores the real limitations LLMs have.
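The tool-use workaround described above can be sketched roughly like this (a hypothetical, simplified illustration; real systems such as ChatGPT's code execution are far more elaborate): instead of "predicting" the answer as text, the model hands off an arithmetic expression to the host, which actually computes it.

```python
import ast
import operator

# Map AST operator nodes to real arithmetic functions.
OPS = {
    ast.Add: operator.add,
    ast.Sub: operator.sub,
    ast.Mult: operator.mul,
    ast.Div: operator.truediv,
    ast.Pow: operator.pow,
}

def evaluate(expr: str) -> float:
    """Compute an arithmetic expression deterministically,
    rather than predicting the most plausible 'answer text'."""
    def walk(node):
        if isinstance(node, ast.Expression):
            return walk(node.body)
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in OPS:
            return OPS[type(node.op)](walk(node.left), walk(node.right))
        if isinstance(node, ast.UnaryOp) and isinstance(node.op, ast.USub):
            return -walk(node.operand)
        raise ValueError("unsupported expression")
    return walk(ast.parse(expr, mode="eval"))

print(evaluate("3 * (17 + 4) - 2**5"))  # → 31, computed, not predicted
```

The point of the sketch: the answer comes out right every time because it is calculated, which is exactly what next-token prediction alone does not guarantee.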


This is a perfect example of how using AI to automate instead of augment can go wrong. It also continues to pull on a thread Dan has been tugging at for a while, one that tries to get at the many parts of education that may not be visible or easily quantifiable, and are therefore challenging to encode in a machine. It's worth asking how much of the process of teaching and learning can be automated without destroying its value.

Dan, I'm curious to learn more about the tool you're developing. It sounds like your approach is likely to be a fruitful one.

I wonder as well if there might be some general lessons that could be shared about best practices for designing AI tools for education that operate in the mode of augmentation instead of automation.


Looking forward to the Symposium!
