Highlights from Stanford's AI+Education Summit
Several good quotes. An interesting new study. A debate that was one, maybe two chili peppers spicy.
I attended the AI+Education Summit at Stanford last week, the fourth year for the event and the first year for me. Organizer Isabelle Hau invited researchers, philanthropists, and a large contingent of teachers and students, all of them participating in panels throughout the day. That mix—heavier on practitioners than edtech professionals—gave me lots to think about on my drive home. Here are several of my takeaways.
¶ The party is sobering up. The triumphalism of 2023 is out. The edtech rapture is no longer just one more model release away. Instead, from the first slide of the Summit above, panelists frequently argued that any learning gains from AI will be contingent on local implementation and just as likely to result in learning losses, such as those in the second column of the slide.
¶ Stanford’s Guilherme Lichand presented one of those learning losses with his team’s paper, “GenAI Can Harm Learning Despite Guardrails: Evidence from Middle-School Creativity.” His study replicated previous findings that kids do better on certain tasks with AI assistance in the near term—creative tasks, in his case—and worse later when the tool is taken away. “Already pretty bad news,” Lichand said. But when he gave the students a transfer task, the students who had AI and then had it taken away saw negative transfer. “Four-fold,” said Lichand. What’s happening here? Lichand:
It’s not just persistence. It’s a little bit about how you don’t have as much fun doing it, but most importantly, you start thinking that AI is more creative than you. And the negative effects are concentrated on those kids who really think that AI became more creative than them.
A paper I’ll be interested in reading. Notably, the study used a custom AI model with guardrails to prevent the LLM from solving the tasks for students, the same kind of “tutor mode” we’ve seen from Google, Anthropic, OpenAI, Khan Academy, and others.
¶ Teacher Michael Taubman had the line that brought down the house.
In the last year or so, it’s really started to feel like we have 45 minutes together and the together part is what’s really mattering now. We can have screens involved. We can use AI. We should sometimes. But that is a human space. The classroom is taking on an almost sacred dimension for me now. It’s people gathering together to be young and human together, and grow up together, and learn to argue in a very complicated country together, and I think that is increasingly a space that education should be exploring in addition to pedagogy and content.
¶ Venture capitalist Miriam Rivera urged us to consider the nexus of technology and eugenics that originated in Silicon Valley:
I have a lot of optimism and a lot of fear of where AI can take us as a society. Silicon Valley has had a long history of really anti-social kinds of movements including in the earliest days of the semi-conductor, a real belief that there are just different classes of humans and some of them are better than others. I can see that happening with some of the technology champions in AI.
Rivera kept bringing it, asking the crowd to consider whether or not they understand the world they are trying to change:
But my sense is there is such a bifurcation in our country about how people know each other. I used to say that church was the most segregated hour in America. I just think that we’ve just gotten more hours segregated in America. And that people often are only interacting with people in their same class, race, level of education. Sometimes I’ve had a party one time, and I thought, my God, everybody here has a master’s degree at least. That’s just not the real world.
And I am fortunate in that because of my life history, that’s not the only world that I inhabit. But I think for many of us and our students here, that is the world that they primarily inhabit, and they have very little exposure to the real world and to the real needs of a lot of Americans, the majority of whom are in financial situations that don’t allow them to have a $400 emergency, like their car breaks down. That can really push them over the edge.
Related: Michael Taubman’s comments above!
¶ Former Stanford President John Hennessy closed the day with a debate between various education and technology luminaries. His opening question was a good one:
How many people remember the MOOC revolution that was going to completely change K-12 education? Why is this time really different? What fundamentally about the technology could be transformative?
This was an important question, especially given that many of the same people at the same university on the same stage had championed the MOOC movement ten years earlier. Answers from the panelists:
Stanford professor Susanna Loeb:
I think the ability to generate is one thing. We didn’t have that before.
Rebecca Winthrop, author of The Disengaged Teen:
Schools did not invite this technology into their classroom like MOOCs. It showed up.
Neerav Kingsland, Strategic Initiatives at Anthropic:
This might be the most powerful technology humanity has ever created and so we should at least have some assumption and curiosity that that would have a big impact on education—both the opportunities and risks.
Shantanu Sinha, Google for Education, former COO of Khan Academy:
I’d actually disagree with the premise of the question that education technology hasn’t had a transformative impact over the last 10 years.
Sinha related an anecdote about a girl from Afghanistan who was able to further her schooling thanks to the availability of MOOC-style videos, which is an inspiring story, of course, but quite a different definition of “transformation” than “there will be only 10 universities in the world” or “a free, world‑class education for anyone, anywhere” or Hennessy’s own prediction (unmentioned by anyone) that “there is a tsunami coming” for higher education.
After Sinha described the creation of LearnLM at Google, a version of their Gemini LLM that won’t give students the answer even if asked, Rebecca Winthrop said, “What kid is gonna pick the learn one and not the give-me-the-answer one?”
Susanna Loeb responded to all this chatbot chatter by saying:
I do think we have to overcome the idea that education is just like feeding information at the right level to students. Because that is just one important part of what we do, but not the main thing.
Later, Kingsland gave a charge to edtech professionals:
The technology is, I think, about there, but we don’t yet have the product right. And so what would be amazing, I think, and transformative from AI is, if in a couple of years we had a AI tutor that worked with most kids most of the time, most subjects, that we had it well-researched, and that it didn’t degrade on mental health or disempowerment or all these issues we’ve talked on.
Look—this is more or less how the same crowd talked about MOOCs ten years ago. Copy and paste. And AI tutors will fall short of the same bar for the same reason MOOCs did: it’s humans who help humans do hard things. Ever thus. And so many of these technologies—by accident or design—fit a bell jar around the student. They put the kid into an airtight container with the technology inside and every other human outside. That’s all you need to know about their odds of success.
It’ll be another set of panelists in another ten years scratching their heads over the failure of chatbot tutors to transform K-12 education, each panelist now promising the audience that AR / VR / wearables / neural implants / et cetera will be different this time. It simply will.

Thank you! I keep coming back to the MOOC story. I just can't figure out why people refuse to see how similar this is and learn the lessons. It was only 10 years ago; we were all here to witness the rise and fall.
Your piece correctly notices the shift from hype to implementation. The most important insight is that AI’s effect depends less on model capability and more on how students relate to it. If students begin to see the machine as the thinker, effort drops and learning decays. The emphasis on classrooms as human spaces also matters. Education is formation, not information delivery.
Where it may be wrong is its confidence that human instruction and AI are competing models. The real change is not replacement but cognitive outsourcing. Students already rely on calculators, search, and now language generation. The question is not whether AI tutors “transform school,” but whether they quietly redefine what it means to understand something at all.