Is A.I. being developed too quickly?
As the hustle and bustle of high school life continues with sport practices, pep rallies, band concerts, and more, there’s an underlying presence that’s becoming increasingly prevalent – the integration of artificial intelligence. From fine-tuning sports team combinations to generating music during performances, A.I. is slowly but surely becoming a part of human interactions. As students navigate through the challenges of growing up, they are also faced with decisions about how A.I. will shape their future.
As humans continue to develop A.I., it is becoming harder and harder to detect. For instance, the opening paragraph you just read was written by an A.I. But how is A.I. affecting schools and students’ futures?
“I think A.I. is like any piece of technology, it’s got its perks, but it’s also got pitfalls. I think it really comes down to users,” stated English teacher Natasha Green.
While Green doesn’t have much experience with A.I. in the classroom, she has read a lot of articles related to its use.
“I read an article not too long ago about how one man wanted to test [A.I.’s] ability to DM [Dungeon Master] a D&D game,” said Green. “The gentleman had put it in the situation, with some of the basic things that [he and his children] were looking for as far as stats for their characters…and then he had it run a game for him…and he was just astounded by the way it pulled the story together.”
Green also read another article, in which the writer was not happy with how the A.I. had run the game.
“The other guy…was not impressed and felt like it was very clunky, that it took away the creativity and the freedom of the table,” continued Green. “For my purposes, I think it’s neat, especially if you’re just one person and you want to play D&D…but I think it would take away from the experience of the roundtable and creativity, and spur-of-the-moment decisions.”
A.I. will likely keep improving at tasks like DMing as people continue to use and develop it. It can also simply serve as a tool to cut down on busy work.
The New York Times writer German Lopez explains, “Consider an A.I. that can write well. At first, the quality might fall short of writing you can do yourself. Still…it could give you time that you could use to sharpen the draft, focus on research or complete a different task.”
Lopez compares A.I. to a phone camera. While phone cameras aren’t the best on the market, they take good-enough photos and, most importantly, are super convenient, so people tend to prefer them to stand-alone cameras for daily use.
“That sort of usefulness is a much lower bar for A.I. to meet than creating the kind of all-knowing, all-doing A.I. depicted in science fiction,” said Lopez.
A.I. is quickly getting better at the jobs it’s assigned. Many A.I. systems use a technique called machine learning to teach themselves how to complete tasks: the program pulls data from past attempts, or from other sources, to learn how to do a job. For example, if its task was to drive a car around a corner in a video game, it might attempt the turn hundreds of times, adjusting its approach after each try. The A.I. would use data from past turns to improve the next one, and once it completed the turn successfully, it would keep that version of its programming and discard the rest.
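To make that trial-and-error idea concrete, here is a minimal Python sketch of the same loop. It is not code from any real self-driving or game-playing system; the “ideal angle” and the scoring function are invented purely for illustration. The program guesses a steering value, keeps whichever guess scores best, and discards the rest.

```python
import random

# Toy sketch of the trial-and-error idea described above (not any real A.I. system):
# the "driver" guesses a steering angle, keeps whatever guess scores best in a
# pretend corner simulation, and repeats.

IDEAL_ANGLE = 32.0  # hypothetical steering angle that makes the corner


def attempt_turn(angle):
    """Score an attempt: higher is better, perfect at the ideal angle."""
    return -abs(angle - IDEAL_ANGLE)


best_angle = random.uniform(0, 90)   # first random guess
best_score = attempt_turn(best_angle)

for attempt in range(500):           # hundreds of practice turns
    candidate = best_angle + random.uniform(-5, 5)  # small tweak to the best try so far
    score = attempt_turn(candidate)
    if score > best_score:           # keep only the versions that improve
        best_angle, best_score = candidate, score

print(f"Learned steering angle after 500 attempts: {best_angle:.1f} degrees")
```

Real machine learning systems are vastly more complicated, but the basic pattern is the same: try, measure, keep what works.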
“It can figure out a lot of issues and work around a lot of problems much quicker than the vast majority of humans,” said Green.
That means that we have to be cautious when developing A.I. and make sure we are using it responsibly.
“Does it become the nightmare that’s in H.G. Wells’ ‘The Time Machine’ where we become so reliant on technology that we are lazy, shiftless, useless bums that end up becoming food for other creatures?” asks Green. “I mean, that’s just an extreme example of science fiction, but I think that’s where it’s at.”
That does seem to be where we are. While A.I. has plenty of positive uses, there is also a lot of evidence that we are misusing it in schools and in daily life.
According to Forbes reporter Emma Whitford, “In January, the New York City education department, which oversees the nation’s largest school district with more than 1 million students, blocked the use of ChatGPT by both students and teachers, citing concerns about safety, accuracy and negative impacts to student learning.” Many other school districts, including Rolla Public Schools, have blocked or banned at least some A.I.s, while other districts embrace them.
“In a February survey of 1,000 kindergarten through 12th grade teachers nationwide, 51% said they had used ChatGPT, with 40% reporting they used it weekly and 10% using it daily,” according to Whitford.
In general, people seem pretty split about the effectiveness of A.I. in the classroom. Green believes it really is up to us to make it into something useful.
“I have used it to help grade some of my essays for English,” said sophomore Joseph Huhn.
Huhn believes that the A.I. was very helpful in correcting spelling and grammar mistakes for him before he actually submitted his work. He also believes that A.I. will drastically change the types of jobs people have.
“The social sciences will explode,” said Huhn, “…because if you’re getting replaced and you don’t have anything to do, a lot of people are going to turn to a therapist.”
Senior Hosea Clayton agrees with Huhn.
“From personal experience, I am happier whenever I’m working. Some people are like that…They need something to do, they need a purpose,” said Clayton.
That’s not to say that jobs will be nonexistent. Still, an A.I. left unchecked could spell disaster.
“If an A.I. is told to produce paper clips, and that’s all that it’s supposed to do, it will not stop until every single molecule in the known universe is turned into a paper clip, or something that creates paper clips… Just because an A.I. that is left unchecked can destroy everything, including itself,” said Huhn.
Another problem is A.I.’s ability to lie and spread misinformation. Since A.I. usually learns from humans, it can be taught to repeat lies and half-truths. This creates a problem, since as a society we often trust technology blindly.
“We’ve got to just learn to be responsible citizens,” said Green. “We have to educate ourselves and use our technology responsibly, or else we end up facing the dire consequences of the abuses of that.”