An asset-based approach to teaching asks, “What ideas do these students have?” rather than “What ideas do these students lack?” The point of this particular newsletter isn’t to make a long case for the benefits of an asset orientation over what is often called a deficit orientation. I’ll mention two benefits in brief, however:
Relational. Students like and learn more from people who like and learn from them. Most people like to be seen for their assets, what they do well, rather than for what they don’t yet do well.
Cognitive. Here is psychologist David Ausubel with what has to be the most enduring finding from cognitive psychology: “The most important single factor influencing learning is what a learner already knows.” If you don’t seek to understand what a learner already knows—their assets—you are wasting one of the most valuable resources you have as a teacher.
If you’d like a longer case, I’d refer you to a talk from math education researcher Ilana Horn and, more recently, to a fantastic paper authored by a group of researchers with an interest in education and artificial intelligence. It’s called Toward Asset-based Instruction and Assessment in Artificial Intelligence in Education.
They perform three tasks in this paper:
Review the research around asset-based instruction.
Critique the dominant model of computer-based instruction as deficit-based.
Outline possible futures for AI in support of asset-oriented instruction.
Can AI support asset-based instruction?
I thought the researchers were thorough and convincing on the first two tasks, but their efforts to suggest “Maybe AI can do this asset-oriented instruction thing though?” mostly convinced me that it won’t happen anytime soon. Two reasons.
First, their outline of possible futures mostly considered metacognitive learning like “conscientiousness and persistence and creativity” rather than content learning like fractions, geometry, and algebra.
Metacognition is an important site for asset orientation, but it is much less important than cognition itself, if only because the balance of instructional time tilts much more heavily towards the content of the discipline than towards a student’s metacognition about that content. Some might lament that fact, but it is an empirical matter: the problem teachers need to solve, the job they need to do, and the way they spend their time all concern cognition more than metacognition.
How would AI use a student’s understanding of fairness as an asset in learning about fractions? How would it use their ability to notice and name shapes as a resource in geometry? How would it take their sensory knowledge of patterns and use it as a resource in algebra?
I think if ideas or hypotheses existed in any kind of abundance here, the researchers would have suggested them.
Second, I know what asset-oriented instruction looks like and I have a hard time imagining the role of AI here. This could be a failure of my imagination, of course, but would you just watch this? Just watch this three-minute clip of Katrine Bryan teaching algebra. If you work in math edtech, it will be the most impactful three minutes you spend today. Before you keep reading this newsletter, check your understanding:
How are you seeing an asset orientation show up here? How might artificial intelligence assist the teacher or student?
The student doesn’t know how to solve this problem from our curriculum.
Dominant models of education technology (and indeed dominant models of teaching) would emphasize the student’s deficits here.
You don’t know how to do this so we’ll show you a worked example and then give you a new problem.
You don’t know how to do this so we’ll take you backwards in a learning progression until you can successfully perform a previous skill and then we’ll work our way back.
What Katrine Bryan does instead is say, “I can help you understand the assets you already have for this problem.” Bryan knows that the foundation for every abstract skill is a concrete skill, that before kids can name a cat, they can point at the cat. Before kids can compare two piles quantitatively, they can compare them intuitively. And, here, before kids can describe a pattern using algebra, they can describe it in words.
So she asks Alena to describe the pattern in words. “Okay if you were to describe to me how to sketch this, what would you tell me to do? […] Like you’re talking to me and you have to make me duplicate this and you win a million dollars, what would you tell me to draw?”
She pushes Alena for more precision in her description. “So I could just draw any 16 boxes I want somewhere?”
She asks Alena to describe what the 100th figure might look like, helping Alena mathematize her resources a little more.
She asks students to sketch the pattern. “If you don’t see it, try a sketch.”
In every case, she is helping the students understand that even if they don’t yet know this math, they know about this math. She is securing resources for her instruction.
A major task for math teachers, and one that Katrine Bryan accomplishes, is to help students understand that math isn’t some alternate world, alien from the one they know. Rather it is a layer on top of the world they know, one which adds value to that world, one which connects with the assets they already possess and multiplies their value.
You have to ask yourself, after every question, how did Bryan know to ask that? What was she noticing? What information was she taking in from Alena? How did she know she should ask Alena to describe the pattern in words? How did she so quickly come to understand Alena’s assets? Can a computer become sensitive in the same way?
I won’t delude myself into thinking that teachers like Katrine Bryan are common. But neither will I consider it any kind of triumph to offer students a small fraction of her value while charging them a large cognitive tax, asking them to take time and energy to package up their assets in a form a computer might recognize, all while losing the relational and interpersonal benefits of asset-based instruction entirely.
Asking a computer to inventory a student’s assets seems to me like asking an ear “is it sunny out?” or asking an eye “is that piano out of tune?” These instruments are really good for what they’re really good for and it is no insult to say they are not good for everything, that they are not good for this.
Odds & Ends
I think this analysis from Jeni Daley over on one of my LinkedIn posts is basically dead on: “Believing chatbots solve the feedback loop issue is naive - ‘Great job!’ from GPT3.5 brings a lot less value than the teacher who’s helped you through a hard year - and even a fine tuned model that incorporates the teacher’s voice/style/flair and gives elaborate essay feedback immediately just feels…weird — teachers have zero desire to reduce their clout with kids who already think they’re cringey. I think AI is going to change the space, but maybe more on the administration side of things (communication, discipline, documentation, planning, etc.) - and almost every admin I know is less tech savvy than the teachers, so we might wanna extend the timeline past 2024. 😬”
More 2024 predictions.
Philip Vahey thinks 2024 will result in much more disillusionment with generative AI, and that 2025 is the year education technologists will loosen their grip on personalized learning models and asset-oriented AI models will flourish. Verdict - I want to live in that world, but I don’t think so. The popularity of personalized learning emerges from ideas about learning, teaching, and schooling that are cultural, political, and economic, and that run really, really deep into the ground.
The Edtech Insiders think “AI will tip into extremely positive or extremely negative light.” Verdict - After painting themselves a bullseye that large, I still think they’re going to miss it. Generative AI is currently seeing middling usage from K-12 educators, ruling out the extremely positive outcome. But its worst case is buoyed significantly by an investor class that really needs generative AI to be a hit. History is also a guide here: personalized learning is an idea that has hobbled along in various forms for over 60 years and it will continue to hobble along in its generative AI form through 2024 and beyond.
Talent Lab Solutions has a helpful analysis of the diminishing returns of the MOOC space. (“Massive open online courses,” if you were asleep for a few years there.) See especially the section titled “Low Floor, High Ceiling.” With MOOCs, you have a personalized learning model optimized for computer delivery, eventually commoditized by a glut of companies offering the same undifferentiated, low-value service. Interesting history!
EdSurge. “A Technologist Spent Years Building an AI Chatbot Tutor. He Decided It Can’t Be Done.” Some of you saw this headline and thought, “Oh no Dan is gonna be insufferable this week” and you are 100% correct. I will try to restrain myself, though, and just offer a 👍 to a couple of quotes:
“‘We’ll have flying cars before we will have AI tutors,’ he says. ‘It is a deeply human process that AI is hopelessly incapable of meeting in a meaningful way. It’s like being a therapist or like being a nurse.’”
“What are new ways that generative AI tools can be used in education, if tutoring ends up not being the right fit? To Nitta, the stronger role is to serve as an assistant to experts rather than a replacement for an expert tutor. In other words, instead of replacing, say, a therapist, he imagines that chatbots can help a human therapist summarize and organize notes from a session with a patient.”
Phil Daro is a keen observer of teacher practice and has a wonderful series of short reflections on his recent trip visiting classrooms in Japan.
Love reading your threads on AI and math education. Not sure what kind of feedback or visibility you are getting from what you send out, but I always find your ideas insightful and on point. Just wanted to give you that feedback for your great thinking and work here. Thank you.
This was a really interesting perspective. And for me personally, the timing was amazing, as I have been listening to a podcast this past week on the teaching of reading comprehension, and utilizing an approach which emphasizes the importance of an asset-based platform, as opposed to a skill-set approach (which is the norm in America), though they used a different term for it.
Yeah, the more I learn about AI, the more I think we need humans teaching.