I wonder how much of the time "saved" by AI comes out of the work the teachers would ordinarily do outside of school, in which case it doesn't save six weeks of school at all-- it just reduces the length of the teacher work day so that it's closer to what the contract says it's supposed to be.
I mostly use AI to do the non-teaching data stuff my admin loves. I didn't spend 20 years working with kids to create color-coded spreadsheets of multiple choice tests. I feed the testing algorithm results straight into the spreadsheet algorithms then hand the output to my "content coach" without having to spend much time on it. This does not save me any time because I have real teacher things to do.
My actual teacher brain can then spend more time with the lab reports written by actual student brains. I get a better picture of their understanding from their work than from the test. And (be prepared, this might shock you) the students who put the time into completing projects also tend to do better on the multiple choice thingys.
Eventually it will come to the point where AI takes the tests, some other AI analyzes the scores, and yet more AI makes the color-coded reports. And then education reform will have achieved its destiny: robots talking to each other with no human input at all.
I don't believe it either, largely for the same reasons you articulate.
But also because there isn't any evidence of the repurposing of the time saved. That is the yawning void in the discourse of AI boosters: exactly what are they imagining that the saved time will be redirected towards that is a more valuable activity for trained professionals? The reason they don't want to concretize it is that even if AI automates certain tasks effectively (a big if), AI boosters don't actually know enough about the workflows or work processes of any existing profession to envision what professionals would rather be doing, nor do they understand what the obstacles to doing that work actually are.
Let's take primary-care doctors. Suppose that generative AI really does make it easier for them to record notes on visits and integrate treatment and prescription into the visit narrative. What has record-keeping forced out of their professional workflow that they should be doing more of? Spending more time with patients, getting a fuller, holistic sense of patient care, etc., all of which we absolutely know lead to better health outcomes. But is record-keeping actually what is preventing that from happening? No. What's preventing it is that the medical profession has been violently annexed by the insurance industry, by corporatized management, and in some cases by private equity. What will they do with the time that AI frees, if it does? Increase every primary-care doctor's patient load, with no net increase in time spent with patients.
The signs that AI might actually be freeing teacher time will be in larger class sizes, more demands for more forms of data creation, etc., not in higher-quality engagement with existing students or in improved forms of student assessment and better student outcomes. The problem is that we might get larger class sizes and higher administrative workloads even if AI is NOT being adopted or is not working.
Featuring this comment next week. This piece especially:
> exactly what are they imagining that the saved time will be redirected towards that is a more valuable activity for trained professionals?
A huge tell here is when they imagine that work saved on non-interpersonal tasks can be redirected towards interpersonal work. No, friend. Work saved grading papers at home after work cannot be repurposed for more attention to students. The students aren't there in your home after work.
It is a persistent void across the entirety of AI enthusiasm/AI boosterism.
For me, there’s just a lot of work that I get done with AI that simply wouldn’t have gotten done without it. The spiral review warmups I have my Calculus students complete each week would either not exist, or they’d do a much worse job of systematically covering topics from throughout the year. They also wouldn’t be as effectively catered to the specific academic level of my students, and I’d have a harder time designing them to be data-informed.
I agree that if I were asked how much time I save doing this, I’d only be making a rough estimate, because I don’t know how long making this type of material would take without AI. But indicating that it was a big chunk of time would be quite true, and I think most teachers using AI regularly feel the same.
Do you think my students would be better served if I scoured the internet or spent a lot of time designing specific problems for them to review? Because my deep intuition at this point is that it’s a much more natural and effective workflow to start from learning objectives and use AI as a backwards-design companion from those goals. Current models are really impressive for tasks like these!
I like this; maybe I should be doing it myself. I just want to note a common tendency I see in teachers: give us a tool that makes us more efficient at generating content, and we will generate more content. We don't do the same job in less time; we do more work in the same time.
I would guess my wife (middle school) saves 6 hours a week: she does not do any work on the weekend now, and she used to work several hours both days. She still does the grading manually, but the prep is much faster. Not with any fancy tools, though; just basic prompts. And don't underestimate the hours it can save as a study aid when people have to complete added required training. Younger teachers can save a ton of time.
Every day I create an assignment called “27.1 A2 HW Due 3-3” and then manually enter the date of the assignment, the class it’s for, and the type of assignment. I curse “AI” as a concept every time I do it.
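To show how small a lift this actually is, here's a rough sketch (purely hypothetical, and assuming my titles always follow that "unit, class code, type, Due month-day" pattern) of the parsing any of these tools could do for me:

```python
import re
from datetime import date

# Purely hypothetical sketch: assumes assignment titles always look like
# "27.1 A2 HW Due 3-3" (unit, class code, type, "Due", month-day).
TITLE_PATTERN = re.compile(
    r"^(?P<unit>[\d.]+)\s+(?P<class_code>\S+)\s+(?P<kind>\S+)\s+Due\s+(?P<month>\d{1,2})-(?P<day>\d{1,2})$"
)

def parse_assignment_title(title: str, year: int) -> dict:
    """Pull the class, assignment type, and due date out of a title string."""
    match = TITLE_PATTERN.match(title)
    if match is None:
        raise ValueError(f"Title doesn't match the expected pattern: {title!r}")
    return {
        "unit": match["unit"],
        "class": match["class_code"],
        "type": match["kind"],
        "due": date(year, int(match["month"]), int(match["day"])),
    }

print(parse_assignment_title("27.1 A2 HW Due 3-3", year=2026))
# -> {'unit': '27.1', 'class': 'A2', 'type': 'HW', 'due': datetime.date(2026, 3, 3)}
```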
Every time I create a link in our LMS, I have to click a little box that says "Open in new window" because the LMS can't notice I've chosen this option the last ten thousand times and just make it the default. But where's the fun (or the money) in training an AI to do small, useful things like that?
Thank you, Dan, it means a lot. I’m an assiduous reader of yours, and I think your voice is incredibly important for people to “keep it real,” as you put it. Even when I disagree with your conclusions, your attention to the small print has been a big inspiration for my own thinking and writing. Fun fact: I created the stick figure in my first illustration thinking of you! Curious people can see the uncensored version in my post (link below).
To be fair and honest, the 5.9 hours of weekly saved time surprised me as well when I found it. In fact, I kept revising my prediction upward as I was accumulating data, and my prediction went up significantly by the time I submitted the essay. The fine print of the survey tells us that the percentage of teachers who “routinely” delegate tasks to AI—meaning almost daily—is still in the low single digits at this point. So I agree with you that the sample is too narrow to be conclusive, and I also agree that self-reported results correlate imperfectly with actual measures. However, as with most other products, it is the perceived value that will drive adoption. This kind of survey result will likely act as a self-fulfilling prophecy, driving more and more people to experiment. I’d be very curious to see a survey of heavy users to see how many, if any, return to less usage once they adopt it and as it gets more and more integrated into the tools all knowledge workers use constantly (email/browser) and its long-term memory improves. AI labs could easily show us the typical variation of token consumption per account through time.
In the end, the main driver of my prediction was not just the self-reporting of time savings, but the OECD’s data on the main barriers to adoption mentioned by teachers. I actually made a mistake in my essay there, but I set the record straight in my post.
As for the “personalized learning” question, the intrinsic motivation of students to interact with a chatbot, and the fact that the teaching profession won’t be that impacted, in the short term at least: I agree with your current assessment. But I think you’re underestimating what the scaling of energy supply and token production, which agents are already making economically inevitable, will ultimately allow for AI tutors. As of today, and no matter how impressive the new “Mega-token” context windows sound, the bandwidth of AI tutors is laughable compared to the data stream effortlessly processed by human tutors whose brains are running on 20W. However, things can go surprisingly fast on a logarithmic scale; you know that very well.
Competition prediction post: [https://wesstrabelsi.substack.com/p/i-won-what-ai-will-and-wont-do-to]
The Personalization paradox: [https://wesstrabelsi.substack.com/p/the-personalization-paradox]
Energy and AI post: my long-term prediction for AI in education [https://wesstrabelsi.substack.com/p/my-long-term-prediction-for-ai-in]
> I’d be very curious to see a survey of heavy users to see how many, if any, return to less usage once they adopt it
Education Week surveyed this category, in fact! "I used to use them but stopped." About the same level as "a lot" of usage.
https://www.edweek.org/technology/chatgpt-for-teachers-a-boon-a-bust-or-just-meh/2025/11
I agree that AI usage will only increase from here, though I think "transformation" is still only a distant possibility.
> However, things can go surprisingly fast on a logarithmic scale; you know that very well.
Certainly. I just don't know if "context window length" is the limiting factor on an AI tutor's ability to get a kid to give a shit.
I was very careful in writing “a survey of heavy users to see how many, if any, return to less usage”—heavy being the keyword. Heavy users know what the tool can and cannot do. The EdWeek survey doesn’t actually track this specific churn; it likely captures casual users who bounced off the initial friction.
Dylan Kane, for instance, whom we both follow and who is quoted in the article, stated he gave up after failing to generate functional PDFs. This reminds me of your 2024 speech in San Diego, when you noted that chatbots were essentially giving you "homework" by suggesting you go find a video or make a worksheet. I totally agreed with you then—the tools weren't capable of that. But that survey was done in June/July 2025, just before the release of Nano Banana, the first tool to make usable, text-integrated image and document creation a reality.
Nano Banana 2 is now remarkably capable at formatting, and if you haven’t tried the latest Gemini-integrated Google Sheets for formatting, it’s freaking magic! Kane was just too early to the party.
I call this group "Disgruntled AI Tourists." They logged in a few times, asked for something the model wasn't yet built to do (they wanted an agent, but got a chatbot), and declared it overhyped. Kane’s lament—that he needs a tool to "generate a handout using a specific format"—is exactly what companies like Renaissance are working on right now. When confronted with these shortcomings, some people write about why the tech is useless; others go to curriculum companies and ask, “Why don’t you do it this way?” I think you’re the perfect person to spearhead that second category.
As for the ability for an AI tutor to get a kid to give a shit, I also agree we're very far from it. But in all fairness, how many humans can turn around a kid who "doesn't give a shit"? That's really hard to do. Who do you think will be more suited to do that? An umpteenth human stranger getting in front of that kid, or a [insert preferred celebrity] no latency avatar that remembers everything about the kid including the upcoming tournament and the asshole teacher who tanked them last time? Compare what's comparable. Human is far better, but for many kids there is no choice, I'm afraid it's going to be AI or nothing. Right now it's nothing.
"as it gets more and more integrated into the tools all knowledge workers use constantly (email/browser)"
I teach my online classes on Zoom, and every time I open a Zoom window it offers the "AI assistant" (click the "don't show this again" box and you'll get exactly the same offer the next time, but I suppose memory and intelligence aren't the same thing.)
Anyway, what would this AI assistant do for me? It could produce a nice transcript of the 15 minutes I'm talking before I send them into their breakout rooms, but the notes are already available to the students; would reading a transcript of me reading the notes add any value?
My concern here is that AI is extremely good at generating masses of content (like transcripts/summaries of stuff), but who's got time to read it? My students don't have time to do the work I assign already; where do they find the time to take in all this AI-generated "help"?
“If saving time were our only prerogative, we could save even more time by simply not assigning essays at all”—> THIS.
I would like to hear comments about technology in the classroom (edtech), Dr. Jared Horvath's research on it, and how that research is affecting edtech globally.
In addition to the problems with sample size and self-reported data that you rightly point out, the way that the findings are presented seems subtly misleading. Positive findings headlined in bold! (Caveats in the fine print.) Is this analysis or advertising?