The Biggest Surprise from the Latest Survey of AI & Teachers
And a bunch of findings that will not surprise regular readers.
RAND recently released the results of a survey of teachers and their use of artificial intelligence. These surveys are invaluable for anyone trying to discern the ways teachers are actually using generative AI from the ways tech and business leaders insist teachers are using generative AI.
Some of RAND’s findings will not surprise regular readers here.
Usage of generative AI continues to underperform expectations.
Ask any five edtech conference attendees what percentage of teachers regularly use AI in their teaching and their average prediction will definitely exceed RAND’s finding of 8%.
Even the design of this survey item says, “We are trying to find a heartbeat.” When asked to describe their use of AI, RAND gives teachers a series of tight and incomplete options like:
I regularly use AI tools and products in my work as a teacher and actively seek out new AI tools and products to use myself. (8%)
I use AI tools and products in my work as a teacher that are provided or recommended to me by others, but do not actively seek out AI tools and products to use myself. (11%)
Etc.
What if you’re a teacher who actively seeks out new products to use but doesn’t regularly use them? Or one who regularly uses tools that are provided by others?
The kind of item we need here is, roughly but simply, “How often do you use generative AI tools in your teaching practice?” Give teachers options for daily, weekly, monthly, yearly, and never.
Eventually, the majority of teachers will have used these tools at least once. When? Tough to say. According to RAND, 63% of teachers have never used these tools in their job and only 36% plan to use them more next year. But the question we should ask at some point is, “How regular is that usage? How important are these tools to a teacher compared to a digital projector, a laptop cart, etc.?”
A novel tool for text generation proves most useful for text generation, especially in disciplines that require lots of text generation.
The top five reported uses of generative AI, according to RAND, are all text generation or in that neighborhood. For example:
Generating quizzes and assessments
Generating lesson plans
Adjusting content
Etc.
And usage of generative AI was highest among ELA / social studies teachers. (27% of those teachers reported regular usage compared to 19% of STEM teachers.)
One way to unconfuse yourself about usage patterns of generative AI in schools is to note that a novel tool for text generation has proven most useful for text generation and is most used in disciplines that require lots of text generation. It seems less useful for purposes beyond text generation and least used in disciplines that require less text generation.
The situation is perhaps that simple.
If the reality of generative AI usage in schools has underperformed your expectations for generative AI, I’m willing to bet you have overestimated how much of the work of schools is about text generation.
This is fine.
It’s fine if generative AI is just a very useful tool for a very circumscribed use case. It’s fine if it is not the god tool, the tool that technologists have promised us since the advent of television and the film reel, the one tool that will personalize learning precisely and ameliorate every form of social difference, all at scale and with massive returns to shareholders. I mean, it’s not fine if you’re an edtech founder or a VC and that was your thesis here. That’ll be tough. But if you’re only interested in finding tools that schools find useful and figuring out when and how they’re most useful, this is all totally fine.
Teachers are reluctant to use AI for fairly diverse reasons.
The largest barriers to generative AI usage reported by non-users included:
Concerns regarding the role of AI in society as a whole (44% of teachers included it among their top three barriers).
Concerns about data privacy (36%).
Concerns that increased use of AI will diminish student-teacher relationships (30%).
Lack of professional development for using AI tools and products (30%).
I’m most surprised by that top concern. Certainly, it’s a challenge that no single edtech company can solve. Perhaps time is the only solution, time to find out if the sky falls and if AI turns us all into paperclips or if life continues mostly apace. The other concerns are within the locus of control of edtech products and shouldn’t be dismissed.
Odds & Ends
¶ Laurence Holt has written up a couple of bangers this last month. First, The 5 Percent Problem, which notes the tendency of edtech companies to report efficacy only for students who meet a minimum, arbitrary threshold for usage, a threshold which for some companies has excluded >90% of all students. I am not here to throw stones too far or too hard, but if >90% of your students fail to meet your standards for usage, your product isn’t ready for an efficacy study. Holt’s other piece considers the sophistication of tutoring and, though he doesn’t mention generative AI specifically, you can feel its presence on the periphery. That piece contains a description of the arrival of a new insight that’s so poetic and apt it made me a little mad.
¶ Along those lines, my kids’ school district here in Oakland, CA, is deploying chatbots to tutor struggling math stud—what’s that?—I’m sorry I’m being told they’re actually recruiting parents and caregivers with roots in the community to tutor students in math, following up on a successful similar effort in ELA.
¶ Here is a useful little idea for using AI to generate images for narrative writing. We need more useful little ideas for using generative AI in classrooms IMO and fewer useless big ideas.
¶ Here are several items that each wonder how far up the gradient of expertise generative AI can take a learner.
What Do We Gain and Lose When Students Use AI to Write? Tony Wan is back in EdSurge. Wan frequently invokes the idea that “writing is thinking”: if you are struggling with a draft, it is often because you are struggling with thinking, and a tool for writing may not serve you well in that moment.
Research indicating that code quality has declined in the era of generative AI programming copilots. The measures they used to define code quality are really interesting.
Also related: Ezra Klein’s recent podcast about taste, revision, and editing.
One plausible possible future for generative AI in writing instruction is that it flattens the difficulty curve from novice to intermediate writing but then prevents those intermediate writers from ever achieving anything we’d call expertise.
¶ It is still interesting watching people try to find a place in our world for the confident lies we call “hallucinations.” Mustafa Suleyman, CEO of Microsoft AI, says, this is just creativity and this is good, actually. Kristen DiCerbo, the Chief Learning Officer of Khan Academy, has previously argued that the hallucinations represent “fun” and a “game” for students. In a panel at ASU+GSV last month, she now argues that the hallucinations aren’t that bad because they haven’t stopped teachers from using Khanmigo entirely, which seems like a low bar to me.
¶ If I taught high school stats and were interested at all in data science, I would apply for this fellowship with the CourseKata team. Ji Son and Jim Stigler infuse the rigorous, quantitative world of introductory statistics with verve and joy and modern technology. Applications are due June 2.
¶ I’m on a panel tomorrow for Grantmakers in Education discussing classroom technology: The Future of Math: Innovations in Classroom Technology. The panel is absolutely stacked. Tons of heat.