You made a good reads list on Daily Kos today for this one! 🙂 https://www.dailykos.com/stories/2023/10/1/2196651/-Sunday-Good-Reads-for-October-1st-2023
I'm not sure how on-point this comment is here, but I had an experience with ChatGPT this weekend that I must share.
I was putting together a test for my 6th graders and wanted some examples of rectangular prisms that had the same volume but different surface areas, and then ones with the same surface area but different volumes. I thought perhaps ChatGPT, which is bookmarked in my browser and never used, would be of great help.
So I put in my request for three such prisms (same SA, different V), and ChatGPT came back with three prisms in which the SA was numerically EQUAL to the V. Not what I wanted, but amusing. I rephrased the question. I got different prisms, but the same thing: SA = V. I tried rephrasing it several times and filled out the feedback, all to no avail.
Aha! I realized I was still running ChatGPT v3. Didn't version 4 come out six months ago? (This is how often I use it.) So I got into version 4; nothing changed. I asked not for three examples, but just two. Then I realized (embarrassingly late) that all I needed to do was ask for three prisms with the same SA, say nothing about the volume, and I'd likely get what I wanted. So I did that, and when I looked at the result, what popped out to me was that one of the SA measures was impossibly small given the dimensions it provided. For the first time, I ran the dimensions by hand, and OMG, I kid you not, ChatGPT was miscalculating total surface area.
So I taught it the formula. In subsequent attempts, it always showed me the formula as it provided the answers, but it STILL got the wrong answer. No matter what I did, it kept getting it wrong, and it wasn't even the kind of mistake where I could say, "Oh, I see what you did there."
I wasted well over an hour, gave up, and came up with a couple of examples of my own in about five minutes. 😑
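For anyone who wants to sanity-check this kind of thing by hand, the formulas in play are SA = 2(lw + lh + wh) and V = lwh. Below is a rough Python sketch, not anything ChatGPT produced, that brute-forces integer-sided prisms sharing a surface area but having different volumes; the 1-20 range on the side lengths is just an arbitrary cutoff I picked.

```python
from itertools import combinations_with_replacement
from collections import defaultdict

def surface_area(l, w, h):
    # Total surface area of an l x w x h rectangular prism
    return 2 * (l * w + l * h + w * h)

def volume(l, w, h):
    return l * w * h

# Group every integer-sided prism (sides 1..20) by its surface area.
prisms_by_sa = defaultdict(list)
for dims in combinations_with_replacement(range(1, 21), 3):
    prisms_by_sa[surface_area(*dims)].append(dims)

# Print the smallest surface area shared by prisms with at least
# three distinct volumes, i.e. same SA, different V.
for sa in sorted(prisms_by_sa):
    by_volume = {}  # one example prism per distinct volume
    for dims in prisms_by_sa[sa]:
        by_volume.setdefault(volume(*dims), dims)
    if len(by_volume) >= 3:
        for v, dims in sorted(by_volume.items())[:3]:
            print(f"SA = {sa}, dimensions = {dims}, V = {v}")
        break
```

The first hit is SA = 46, from 1×1×11, 1×2×7, and 1×3×5, with volumes 11, 14, and 15. Grouping by volume instead and looking for distinct surface areas handles the other half of the test question.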
ChatGPT reportedly struggles pretty hard with math. I have seen a bunch of explanations. In my writing, I try to imagine, "Okay, let's assume this is solved one day." I'm trying to future-proof my hypotheses, basically, but yeah, the present is kind of grim for people trying to learn math with chatbots.
"I'm trying to future proof my hypotheses"
That makes sense. With your keen interest in the topic, I had been presuming that ChatGPT was actually knocking loudly on the door. Glad to see we still have a few years/months/weeks of breathing space before parents and school boards start demanding its incorporation.
I applaud your description of the assignment.
Thanks, Dan. I am so glad you were there. We need teachers to answer these techies.
Thanks, Dan, for a very interesting discussion. Are chatbots just another shiny object?
Great work as always, Dan. You're a must-read. Meanwhile, what did Sal have to say?
Thanks, Sam. Unfortunately, I don't have permission to republish anyone else's remarks.
1) Re: new technology being neat, I've always loved Punya Mishra's "crayons are the future" reference. https://punyamishra.com/wp-content/uploads/2012/10/Mishra-crayons-techtrends1.pdf
2) Couldn't chatbots be built to enhance social learning?
3) What did Khan say? Closed door and all, but can you point to anything that he or someone else has said? What about the Q&A? How did that go? Sounds like a fascinating experience. Thanks for sharing.
Holy smokes, what a read! To start, I had never thought of equating AI and chatbots to graphing calculators, Wolfram Alpha, or other math tools, but I guess that really is the best way to summarize it. It's a new tool teachers have to teach with; they can't give the "You won't always have a calculator in your pocket" answer forever. As for chatbots as people, I hadn't even thought about the capabilities of chatbots, or lack thereof, compared to teachers. And to be honest, it's a relief to hear this. So many times I go online and see "AI is going to replace teachers" or "Look at this thing AI taught me that my teachers never taught me," posts that show a complete lack of understanding of what teachers actually do. Overall, a very eye-opening read. I enjoyed it immensely.
This is terrific, Dan. Thank you for continuing to center the importance of human relationships in education (and in life). If you're feeling bold, perhaps you can share your sense of how your message was received in comparison to Sal's?
Is it possible that the improvement in math grades with PL is due to the fact that many generalist teachers are not math education specialists? Apologies, I'm not familiar with American schooling; here in NZ we don't get math specialists until Year 9 (age 13 approx), if you're lucky.