It is conventional wisdom that teachers need to know what they’re teaching in order to teach it well. But the relationship between teacher knowledge of math (measured by post-secondary coursework, for example) and student achievement gains is surprisingly weak.
Maybe other kinds of teacher knowledge are useful for kids, wondered Shulman, Grossman, and a bunch of others. Maybe there are specialized forms of math knowledge.
Ball and Hill studied what they called mathematical knowledge for teaching, using items like this one, which someone who has only learned how to order fractions would not answer as easily as someone who has learned how to teach ordering fractions.
The relationship between this kind of knowledge and student achievement was significant, though the effect size was not particularly large.
I am obsessed lately with a different kind of teacher knowledge and think and dream of not much else besides helping teachers develop it at digital scale. It’s the knowledge of student misconceptions.[1]
Check this study out. The researchers gave students and their teachers the same multiple choice assessment of science knowledge. On items where a single wrong answer was popular with students, the researchers asked the teachers, “Which is the most common wrong answer?” This wound up being a hard question for teachers to answer!
Teacher SMK (subject matter knowledge) performance on the pretest was strong, with 84.5% correct on non-misconception items and 82.5% on misconception items. Hence, on average, teachers missed only about 3 out of 20 items. Teachers’ KOSM, the ability to identify the most common wrong answer on misconception items, was weak: an average score of 42.7%, or only about 5 of the 12 items with strong misconceptions.
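A quick sanity check on those figures, assuming the item counts the quote implies (20 items total, 12 of them with strong misconceptions, 8 without):

```python
# Sanity check of the reported averages. Item counts are assumed from the quote:
# 20 items total, 12 with strong misconceptions, 8 without.
non_misconception_items = 8
misconception_items = 12

smk_correct = 0.845 * non_misconception_items + 0.825 * misconception_items
print(round(smk_correct, 1))                  # ~16.7 correct, i.e. roughly 3 of 20 missed
print(round(0.427 * misconception_items, 1))  # ~5.1, i.e. roughly 5 of 12 misconceptions identified
```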
Look at that thick bottom-right quadrant, where teachers have high subject matter knowledge and low knowledge of student misconceptions.
Big deal? Big deal.
The most interesting results are seen for high nonscience students on misconception items: Students of teachers who had only SMK on an item had gains that were not significantly different from those of students who had teachers without SMK. Only when teachers had both SMK and KOSM were student gains significantly larger.
My theory of change a/k/a how I am investing myself professionally has been:
Create a curriculum that invites student thinking.
Create tools that give teachers visibility into that thinking.
Most math programs are a flop in both categories: they give students too little to think about (a/k/a they’re boring), they’re too restrictive of variability in student thinking, or they fail to give teachers visibility into that variability.
I like how our team at Amplify has been thinking about both of those questions (curriculum and teacher platform — check) and I am now wondering:
How do we help teachers develop this important teaching knowledge of student wrong answers—including knowing the common wrong answers, believing them to be a valuable resource for learning, and knowing how to use that resource?
Extra credit: Do this in a context of high teacher variability where out-of-class teacher professional learning time has been sharply curtailed by the limited availability of substitute teachers.
Anyway - this has been a “what I’m thinking about” type update and also an invitation to collaboratively brainstorm. I know (because of “big data”) that there is a wrong answer that’s common on this question from our curriculum.
How do we prepare teachers to anticipate that answer, that common (if wrong) way of thinking, and do something productive with it?
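To make that “big data” claim a little more concrete, here is a minimal sketch of how a platform might surface the popular wrong answer on an item. The data shape and names are hypothetical, not a description of Amplify’s actual system: tally the submitted answers, drop the correct one, and see whether a single distractor dominates.

```python
from collections import Counter

def most_common_wrong_answer(responses, correct_answer):
    """Return the most popular wrong answer and its share of all responses.

    `responses` is a flat list of the answers students submitted (hypothetical
    shape); `correct_answer` is the keyed answer for the item.
    """
    wrong = [r for r in responses if r != correct_answer]
    if not wrong:
        return None, 0.0
    answer, count = Counter(wrong).most_common(1)[0]
    return answer, count / len(responses)

# Made-up example: "3/8" is the keyed answer, "1/4" is the popular distractor.
responses = ["3/8", "1/4", "1/4", "3/8", "1/4", "5/8", "1/4", "3/8"]
print(most_common_wrong_answer(responses, "3/8"))  # ('1/4', 0.5)
```

If one wrong answer shows up at a rate like that, it is probably not random noise; it is a way of thinking worth anticipating in the teacher-facing materials.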
Odds & Ends
Peps Mccrea has a nice post on the efficacy of whole-class questioning and how teachers might miss out on its benefits. I also like the format of the post a lot. Quick, pithy, high-impact. Not my posting style, but I admire it.
Outschool is using generative AI to help teachers write progress reports for parents. Maybe a relief for teachers? Definitely saves time to have it embedded in the platform versus hosted by a third party. Probably unrelated but still: Gannett to pause AI experiment after botched high school sports articles.
Great article from Anne Kim in the Washington Monthly on the promise behind certificate programs (as an alternative to a four-year degree) as a path to a high-paying job and whether or not that promise is met. She earned a certificate from Google by way of Coursera and then tried to take it onto the job market.
My instinct about generative AI in education is the opposite of this Wired headline, “Teachers Are Going All In on Generative AI,” so I read it with a lot of interest. The article quotes heads of schools, startup founders, several researchers, several venture capitalists, a union head, and … exactly one teacher? I’m not trying to “gotcha” this stuff, but I like to believe I stay fairly plugged into teacher communities and I am just not hearing educators “going all in” on generative AI. See also: a Walton Foundation survey finding 10% of teachers using generative AI almost every day (!) way back in March 2023.
I loved this article—devoured it from first word to last—where Daphne Goldstein, an eighth grader (a really real student) compares Khan Academy Classic to Khanmigo a/k/a Khan Academy + Generative AI. Do not miss.
Paper is an online tutoring company that has raised hundreds of millions of dollars in venture funding and seems to be going through a slow-motion collapse, including laying off 20% of the company, losing big contracts due to lack of student usage, and angering its tutoring staff badly enough that they’re over at Reddit talking about unionizing. I’m trying to figure out what all of this means, if anything, for the viability of tutoring chatbots. If human-powered chat tutoring is this unpopular (at least via Paper), are generative AI-powered chatbots positioned better or worse here?
[1] A term I do not much appreciate but will use in this post out of deference to the study authors.
I don’t know if other educators have seen a segment called “My Favorite No.” It deals with students’ wrong answers and addresses their misconceptions. In the segment, the teacher chooses her favorite student misconception and projects it onto a screen via document camera. She then shares with the students why she selected that response as her “favorite no.” Students are invited to comment on why the answer is wrong and to volunteer ideas about why a student might have made that error. This blog entry reminded me of that segment. “My Favorite No” is a great way to build knowledge of student misconceptions and create classroom discussion around them.
I loved Daphne's observation: "So obviously you learn more from the teacher, but occasionally the teacher can ramble on."