Okay Here Is an Interesting AI Math Edtech Product
Trying to prove, even if only to myself, that I can still find beauty in edtech.
I don’t think it’s all bad, okay. Yes, I think the AI edtech landscape is dominated right now by companies and people who misunderstand the needs of teachers and students, who misunderstand how people learn, who are much more interested in a particular tool than they are in the challenge of public education.
However, I do occasionally run across companies making different, interesting choices. I’ll try to share them with you more often. I don’t have any financial interest here. My interest is in helping my edtech colleagues learn to play more than two notes with classroom AI (student chatbots and teacher copilots) and, frankly, I’d like to play more notes more often here than my usual minor chord.
Snorkl
Jeff Plourd and Jon Laven co-founded Snorkl. Plourd was an engineer at Classkick, a piece of edtech I remember liking ten years ago. Laven has substantial math teaching experience. Both got their start with Teach for America.
You can see Classkick’s DNA in the demo video above, particularly with students working on math problems using freehand sketch.
Pros
Snorkl permits interesting mathematics.
The vast majority of math edtech does not permit interesting mathematics. It forces students to express their ideas through inputs that are either low-fidelity (like multiple choice or numeric response) or higher-fidelity but hostile to important mathematical representations. (Try expressing a graph or an equation in a chatbot, for example.) Asking students to do math in the majority of math edtech is like asking every musician in the San Francisco Philharmonic to play a kazoo. No one—not the performers or the audience—will experience much of interest there.
So it’s nice that Snorkl lets teachers upload tasks and ask students to sketch and talk about their thinking. Teachers will upload menial tasks, of course, but the ceiling for interesting mathematics is higher with sketch and voice than with typing and chatting.
Snorkl gives asset-oriented feedback.
The quality of AI feedback will vary with the amount of information students give it. This is the first-mile delivery problem. Lots of students type “idk” into a chatbot interface because “idk” is easier to type than their actual thoughts, and the quality of their feedback suffers as a result.
Snorkl, meanwhile, has sketch data, transcribed voice data, and timestamp data, which gives their AI model a lot to respond to, and they can pinpoint their feedback to different timestamps in the video. This is all interesting to me.
Questions
Do kids want to read the AI feedback?
If someone comes to your desk to talk with you about math, they’re generally going to do at least two very important things. They’re going to talk with you about your work and they’re going to gesture at the parts of your work they’re talking about. They’re going to do one hundred other things, also, many of which are non-verbal, relational, invisible to the eye, and impossible to replicate with AI in 2024. But those two things will generally happen.
Snorkl, and generative AI generally, does only one of those things. It writes to you about your work. But it doesn’t gesture at it, which makes the feedback feel literally disembodied compared to common forms of tutor feedback. If I had access to usage data at Snorkl, I’d be very curious if students click into its AI feedback moments or just move on to the next question.
Do teachers experience AI summaries as help or homework?
After your students have worked on the same problem in different, interesting ways, Snorkl prepares a digest of “insights” for you.
I’m seeing more and more of these displays, especially in AI tools, and my hypothesis is that teachers do not receive them with gratitude. These displays give information but leave action to the educator’s imagination. They imply that the educator should do … something … perhaps with individual students … but that’s incredibly difficult, particularly when the same student appears in multiple categories, which may make the teacher feel overwhelmed. It’s possible that Snorkl needs a stronger perspective on the one thing teachers should do with this information.
Anyway, I spend a lot of time haranguing the edtech industry so it’s nice to see a product about which I can say, sincerely, “well this is interesting.”
Odds & Ends
¶ I am getting mixed messages about AI in education. On the one hand, I see Education Week report tepid adoption of AI in K-12 education. On the other hand, I see startups like MagicSchool claiming that they are “loved by over 4 million educators and their students.” Loved! How can we reconcile this difference?
When you click on the Wall of Love link, you see a bunch of social media posts, each one quite enthusiastic about MagicSchool.
Because I am fundamentally unable to help myself, I scraped all the tweets, used Twitter’s API to grab each person’s Twitter bio, and coded the ~300 people there for their role in education. 👇
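(The post doesn’t say how that coding was done, so here is one hypothetical sketch of the classification step: a simple keyword match over bio strings. The role categories, keywords, and sample bios are all my invention for illustration; the actual scraping and Twitter API calls are omitted.)

```python
from collections import Counter

# Hypothetical keyword lists for coding a Twitter bio into a role.
# Checked in order, so a "coordinator" bio codes as consultant even
# if it also mentions teaching.
ROLE_KEYWORDS = {
    "consultant": ["consultant", "coach", "coordinator",
                   "digital learning", "instructional technology"],
    "teacher": ["teacher", "educator"],
    "administrator": ["principal", "superintendent", "director"],
}

def code_role(bio: str) -> str:
    """Return the first role category whose keywords appear in the bio."""
    lowered = bio.lower()
    for role, keywords in ROLE_KEYWORDS.items():
        if any(kw in lowered for kw in keywords):
            return role
    return "other"

# Invented example bios standing in for the ~300 scraped ones.
sample_bios = [
    "Instructional Technology Coordinator | EdTech enthusiast",
    "Proud Pre-K3 teacher",
    "Dual credit history teacher",
    "Independent edtech consultant & speaker",
]

counts = Counter(code_role(b) for b in sample_bios)
print(counts)
```

In practice a pass like this would want human review of the ambiguous bios, since many district “coaches” describe themselves in teacher-adjacent language.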
There are nearly 3x as many consultants as teachers. You’re 3x as likely to see someone with a bio of “Instructional Technology Coordinator” or “Coordinator of Digital Learning” as “Proud Pre-K3 teacher” or “Dual credit history teacher.” I think this is interesting.
At some point, we're going to have to sit backwards on a chair and have a serious chat with district tech leads about how their incentives do and don't align with the needs of teachers and students, how they are frequently rewarded for inflating expectations about new technologies.
2024 Dec 7. Various people here and on LinkedIn have said I am conflating “Consultant” and “Coach.” The important thing here is that neither one shares the incentives of a teacher. What I wrote on LinkedIn:
I get that "consultant" is kind of a slur in some parts and there ARE material differences between independent consultants and the consultants employed by a district (née "coaches"). But independent and district consultants share an incentive that administrators, teachers, and students do not—to make tech look good and useful. If tech does not look good and useful, the teachers keep on teaching, the administrators keep on administrating, the students keep on learning, but the tech consultants no longer have that job. That misalignment of incentives does a lot to explain why there are many more teachers than consultants in the US but much more consultant enthusiasm for AI than teacher enthusiasm for AI on MagicSchool's wall.
I’m a fan of minor chords personally. I just listened to a podcast this morning that made me think of you! It made a connection between AI and “A Wrinkle in Time”: https://gretchenrubin.com/podcast/a-little-happier-what-do-humans-possess-that-artificial-intelligence-doesnt-possess/
Hmmm, I might be a dyed-in-the-wool contrarian, but I would like this Snorkl a lot better if it allowed me to respond directly to its feedback and see the reasoning behind the score it gives. I did the pb&j sample paragraph (intentionally leaving off any intro or conclusion so I could see what the feedback would look like) and I got 2/4. The feedback I got was:
"Great job listing the main ingredients needed and explaining how to spread both the peanut butter and jelly evenly! You were very clear about putting the slices together at the end."

"I noticed you mentioned wiping the knife on the bread. Can you think of a cleaner way to clean the knife between spreading peanut butter and jelly?"

"I love how you added your personal preference about cutting the sandwich diagonally and acknowledged that others might do it differently! This shows great thinking beyond the basic steps."
There's no rubric or other rationale for the score, at least not that I can find easily. So, I got half off a writing assignment because I described a somewhat unsanitary sandwich-making practice? Sorry, but you've lost me.