When I first read this, my gut response was, "No, it can't be. It's a chat program that's been trained on available data." I decided to toy around with your prompt and asked a whole bunch of follow-up questions to refine the results. It was then I understood why: because I know my field and my audience (middle school students), I knew the right questions to ask. This suggests that to properly use this tool, one needs specific knowledge and experience.
The generative nature of GPT is beneficial, but only if the user has the knowledge to separate the good from the bad.
So, despite the lack of a human touch, it's not quite ready to outshine traditional writing created by someone with a strong grasp of pedagogy and content knowledge.
Oh and btw, test this one out to see if you think it’s really adopting your voice ;)
[Assume the role of a mathematician and a seasoned math educator at the K-12 level. Adopt the voice of Dan Meyer in math education to answer this question:
“How can algebraic notation help us understand how different quantities are changing in a scenario? Use an example from solid geometry.”]
P.S. Use GPT4.0 instead of 3.5!
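For what it's worth, here is one way a human (rather than the model) might start answering that prompt's question, using the sphere from solid geometry. The notation makes the covariation visible at a glance, with no calculus required:

```latex
V(r) = \tfrac{4}{3}\pi r^3
\quad\Rightarrow\quad
V(2r) = \tfrac{4}{3}\pi (2r)^3 = 8 \cdot \tfrac{4}{3}\pi r^3 = 8\,V(r)

A(r) = 4\pi r^2
\quad\Rightarrow\quad
A(2r) = 4\pi (2r)^2 = 4\,A(r)
```

Doubling the radius quadruples the surface area but octuples the volume, and the exponents in the formulas tell you that before you compute a single number. That is the kind of "how are these quantities changing together?" reasoning the prompt is fishing for.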
It's true that generative textbooks are worse than traditional textbooks along traditional metrics, in the same way that most of YouTube is worse than traditional TV in production values and editorial oversight, since budgets are far lower and production skills usually weaker. But YouTube has the unique advantage of allowing individual voices to shine, and that is enough to let it overtake traditional media. I suspect, like you say, that "generative textbook" is a "horseless carriage" linguistic misfire based on history, just as Google Search is a fundamentally different beast from Yahoo! indexing. The interesting question to me is whether people will put in the active effort to use tools like ChatGPT to sharpen their learning. I love the idea of active learners participating in their learning, but I think it will take more than the existence of ChatGPT to make this happen.
An instructive analogy: "Nupedia is best known today as the predecessor of Wikipedia. Nupedia had a seven-step approval process to control content of articles before being posted, rather than live wiki-based updating. Nupedia was designed by a committee of experts who predefined the rules. It had only 21 articles in its first year, compared to Wikipedia having 200 articles in the first month, and 18,000 in the first year." AI only seems promising with the humans in the loop. A LOT of humans, largely engaging with one another with some help from their bot friends. Can AI software expand access to math-making - as in, mathematical conversations, question-posing, and modeling, by and for the people? Before the Wikipedia project turned sour, it used to be an incredible platform to build something together with other people, with the help of the wiki software and protocols. In other words, Wikipedia built new communities. That engagement in shared community practices is the innovation - if any.
https://writings.stephenwolfram.com/2023/03/chatgpt-gets-its-wolfram-superpowers/ It is getting better. I need to update my experiments. I do think David is onto something about how to use LLMs to teach students to ask better questions about the problem and, as you used to have as your tagline… be less helpful.
I think in many respects this is asking the wrong question. I want to work from the starting point of "what is the best way to present X concept?" or "what is the most accessible way for my students to learn X?" Removing the textbook as the default frees me up to work out when I want to use AI, when I want to use a traditional text, and when I want to use other resources.
The truth of the matter is that, for a long time, textbook content has not been the best way to present mathematical ideas to my students - but it has been the simplest way to give them opportunities to practice using those ideas and skills through questions. Because an AI model can adapt on the fly (or will be able to) to their misconceptions, I foresee that AI could well fill that role. But if I want AI models to fit into my pre-existing textbook framework, that will be much tougher.