Education and the Robot: How fast can we get students to the wrong finish line?
After a fall season of learning about A.I. (the robot) in education, there is still a lot that I don’t know, but there are a few things that I’m becoming more and more sure about:
We are running very quickly to the wrong finish line. When it comes to A.I. in education, lots of people are spending a lot of time and energy trying to do one of two things:
- Develop the best A.I.-powered tutoring system. While this might seem like a logical path to follow, we have to remember that the purpose of school is not to get better at school. Schools don’t exist to develop better students. So yes, Khanmigo may very well help you do your homework and study for your tests, but that shouldn’t be our primary focus. Teaching to the test is only useful if we have designed a really good test, and the tests that the robot might help you do well on probably aren’t the greatest tests.
- Relieve educators of their administrative burdens. This is a more worthwhile goal, but I hope it does not simply translate into an increase in expectations around educator capacity. For example, if the robot can write I.E.P.s, it would be a mistake to immediately assume that teachers could then have more students with I.E.P.s on their caseloads. Yes, there could be some increase in caseloads, but we all need to think carefully before we go about replacing one thing with another. The robot needs to be used to make working in education more sustainable, not just to increase productivity. It is also worth asking whether many of the administrative things teachers are being asked to do right now could simply be eliminated.
Again, both of these objectives seem like worthwhile and obvious goals to work toward, but they are dominating the conversation when we really need to be talking about something else. We already know that the robot is really good at A.P. tests, but that fact does not seem to have inspired sustained conversations in schools about the value of preparing students to do well on A.P. tests. Instead, we’re hearing a lot about how we can help students do better on A.P. tests with the robot’s help. We are not sufficiently grappling with the reality that the robot may have made all of the traditional aims of education obsolete. This is a much more difficult conversation to have, and potentially not as lucrative, but we have to address it with the same energy and vigor that we bring to using the robot to help students do their homework.
As it is right now, we are moving too slowly. We are starting to have conversations about this within our organization and with others, but we still feel that we are not moving fast enough. It is very clear that OpenAI and others are not going to slow down so that we can all pivot to a new way of thinking about our work, let alone wait for senior English teachers across the nation to think about their students’ needs and then rewrite their units so that students have confidence in the importance of what they are learning given the advances in A.I. I’m not sure how to move with appropriate urgency on this topic, but our students do notice.
Students have always wondered, “Why do I need to learn this?” Sometimes it’s a very real question, and sometimes it’s a procrastination strategy, but now the question carries a bit more anxiety on the student’s part. It’s no longer just wondering why they would ever have to understand symbolism in Lord of the Flies when they want to be a physical therapist; now it is more about wondering what they can possibly do out in the real world that the robot won’t be able to do. It’s a different question, and it brings different emotions to the forefront.
And don’t even get me started on what the robot might be doing to us and our creative drive. These are all really big topics that we need to talk about every time we talk about A.I. and education. If we don’t, we are going to get to the wrong place much more quickly than we might think.