The use of artificial intelligence software like Chat GPT in the classroom is controversial. Some teachers are embracing the technology while others are shunning it.
But how does this new type of AI software work? Arizona State University Computer Science Professor Subbarao Kambhampati explained that there is a difference between discriminative AI and what’s called generative AI.
“One type of software is essentially trying to give a label to something it is presented," said Kambhampati. "So for example, if you give it a dog picture, it will say ‘dog’, if you give it a cat picture, it says ‘cat.’ … These are called, basically, discriminative classification systems … They have been around for a while. And in fact, from 2013 until about two years back, they’ve become very powerful. That’s why you have your cellphones, you can just take a picture of a word and it will tell you what the word is, and if you take a picture of a plant, a leaf, it will tell you what plant it is.”
He said the main difference lies in the software’s ability not just to categorize or identify, but to actually create.
"While those systems could tell you, when given a cat, whether it is a cat or not … they did not know how to generate these things," Kambhampati said. "They don’t necessarily know how to … generate the cat picture, they didn’t know how to generate words. … So, most of us know how to say ‘this is a good song,’ ‘this is a bad song.’ But very few of us know how to sing them."
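Kambhampati's distinction can be sketched in code: a discriminative model maps an input to a label, while a generative model has to produce a new sample of its own. Below is a minimal toy sketch under stated assumptions — the three-number "images," the nearest-mean classifier, and the perturbation-based sampler are illustrative stand-ins, not real vision models.

```python
import random

# Toy "images": short lists of pixel intensities.
# (Illustrative stand-ins, not real cat/dog data.)
LABELED_DATA = {
    "cat": [[0.9, 0.8, 0.7], [0.8, 0.9, 0.6]],
    "dog": [[0.1, 0.2, 0.3], [0.2, 0.1, 0.4]],
}

def discriminative(image):
    """Assign a label to an input: pick the class whose stored
    examples are closest on average (a nearest-mean classifier)."""
    def mean_dist(examples):
        return sum(
            sum((a - b) ** 2 for a, b in zip(image, ex))
            for ex in examples
        ) / len(examples)
    return min(LABELED_DATA, key=lambda lbl: mean_dist(LABELED_DATA[lbl]))

def generative(label):
    """Create a *new* sample for a class by perturbing a stored
    example: the model must know what the class looks like,
    not merely recognize it."""
    base = random.choice(LABELED_DATA[label])
    return [min(1.0, max(0.0, p + random.uniform(-0.05, 0.05))) for p in base]

print(discriminative([0.85, 0.85, 0.65]))  # labels the input: "cat"
print(generative("dog"))                   # emits a new dog-like sample
```

The classifier only ever answers "which label?"; the generator has to commit to concrete pixel values, which is the harder problem Kambhampati describes.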
And it’s this ability to create that leaves some teachers concerned and others optimistic.
Rose Martinez, the Sierra Vista Unified School District’s technology coordinator and district librarian, said teachers in Sierra Vista are split on using AI, a divide that surfaced when the topic was discussed during a leadership meeting.
"When the question came, ‘how comfortable do you feel about AI?’ the room was pretty evenly split between people who were like ‘you know, I’ve seen I, Robot, not really sure about this,’ and ‘I’m in it, let’s go, let’s figure out how we can use this and use it effectively and safely,’" said Martinez.
Martinez has since formed an AI committee to discuss how AI can be used safely in the classroom, especially across different grade levels, and to draft policies for its use.
“Because it’s going to look differently at the kindergarten level than it is at the middle and high school,” said Martinez.
With students using generative AI software like Chat GPT to help them complete school assignments, some teachers are notably concerned. At the moment, the choice of whether to ban or allow the use of AI largely rests with the individual teacher.
Alex O’Meara, who teaches English, journalism and creative writing at Cochise College, said that since Chat GPT launched in November of last year, he has not allowed his students to use AI to complete their assignments.
"Chat GPT and AI take the place of actual writing," said O’Meara. "And writing in an academic setting is foundational to … helping students learn how to think and express those thoughts clearly. Knowing how to write clearly is an incredibly valuable skill to graduate with or to learn in a class. And then, if you use artificial intelligence to write, you don’t develop that skill. If you use AI, you leave without an education and all you get is a grade, and the grade is worthless.”
He said that since students have gained more access to AI software like Chat GPT, he has seen some student plagiarism in his classes, starting last November.
“AI uses other previously printed material to create content," said O’Meara. "So, the books I’ve written and had published, they’re using that. So, one: they’re not paying me for the copyright. Two: it’s plagiarism ‘cause they’re using previously used words that other people have created for students to create their own work. That is the definition of plagiarism. It’s not direct, it’s indirect. But still nonetheless is from sources other than what the student is thinking. And as an instructor — and I’ll be honest with you — I want students to tell me what they’re thinking in writing … If that process is not engaged, and it breaks down even at the start of students not thinking but telling me what they want to hear … the entire educational process collapses.”
This opinion isn’t an uncommon one. Kristin Juarez, a psychology instructor at Cochise College and head of the college’s social and behavioral sciences department, said that most teachers across different specialties have told her they do not want students using AI on their assignments.
“The general consensus from the people I’ve talked to, from English to math to anthropology to psychology to biology, is the vast majority of them would prefer to avoid it altogether," Juarez said. "And they are writing really strict policies about AI use in their classrooms to try to prevent students from using it. I kinda look at [it] as, that’s like asking them not to use a calculator for the rest of their lives.”
Juarez said she’s taken a different approach, embracing the technology in her classroom.
“Those who are absolutely opposed to it say it’s completely taking away the critical thinking aspect," said Juarez. "I actually believe that there’s a way to incorporate it and to challenge the critical thinking because it’s not thinking. And we need to get them to understand that it is not thinking and to identify where it’s missed some of those critical thinking aspects and the gaps. And how they have to be able to make that leap to the application of it in their world and in their lives.”
Juarez is now requiring her students to cite Chat GPT and fact-check the content the software generates.
"I have brought it into my classroom and allowed them to use it as long as — and I’ve had stipulations on this — one: they have to cite it," said Juarez. "I’m also requiring that they vet it, and that’s the hard part for — I think — a lot of them because they’re like ‘wait, I have to do what?’ And I want them to go double-check that it is right, because it lies. It hallucinates, it makes sources up. I mean, it wholly fabricates sources altogether — which is absolutely mind-boggling to me. And, I want them to see that.”
At the heart of the debate over the use of AI are questions about accountability. When AZPM spoke in September with Stephen Wu, a shareholder at the Silicon Valley Law Group in San Jose, he said that at the time there was no overarching AI legislation at the state or federal level.
“There is no general artificial intelligence law at the federal or state level," said Wu. "We don’t have that yet. It’s too new. The Europeans, the European Union is working on something called an Artificial Intelligence Act, which attempts to do that. But we don’t have something equivalent.”
Since AZPM spoke with Wu, President Joe Biden issued an executive order relating to AI safety and security. While executive orders have the force of law, they can be rescinded by President Biden or any other president at any time.
Wu said some states have amended their privacy laws to include AI.
"One of the things in the state level that is worth noting is that there are laws that are general privacy laws … that talk about automated decision-making," Wu said. "And while automated decision making could encompass a variety of technologies, it obviously encompasses Artificial Intelligence. So we have the California Consumer Privacy Act — as modified by the California Privacy Rights Act — a provision that says there’s going to be regulations about automated decision-making, and we expect those regulations later … We have other state laws — privacy laws — besides California ones that might apply and govern automated decision-making.”
Wu said that the states that have laws about automated decision-making are Virginia, Colorado and Connecticut. He added that there are some AI-specific laws at the state level.
“The Illinois Artificial Intelligence Video Act, which says that there are certain privacy rights in connection with the use of AI systems to look at an interview of someone who’s a potential job candidate for a position where the AI digests that video and provides insights to that candidate," Wu explained. "And you have to provide notice of that, and the person who was the job candidate could say ‘please delete that.’ So, there are various rights associated with that.”
He said there is also a New York City Council ordinance that seeks to protect against AI bias in job candidate selection, and New York Governor Kathy Hochul signed a law that bans the distribution of AI-made pornographic images without the subject’s consent.
Wu said congressional action on regulating AI is not likely to arrive anytime soon.
"I think we’re having difficulty at the congressional level — in the House and Senate — of passing legislation that is very difficult and complex," said Wu. "It may be that we will have to wait until the next Congress before we can see some significant movement on something like that.”
While there are concerns about AI use, Kambhampati said that generative AI is good at creating works that mimic those published by humans, but one of its weaknesses is that it struggles to generate factually accurate content.
"Generative AI is extremely bad at factual knowledge," said Kambhampati. "So, Google, basically if you ask it something, it will show the site in which the answer is … Whereas Chat GPT essentially is completing — so if there is a question saying ‘I have these symptoms, do I have COVID?’ What would be the most likely next word, next word, next word, next word after that? That’s all it is doing, and it's not checking whether or not it is actually factual."
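Kambhampati's "next word, next word" description can be sketched as a toy bigram model: it only tracks which word most often follows the current one in its training text, and nothing in it checks whether the completion is true. This is a deliberately tiny illustration — the miniature corpus is an assumption for the example, and real models use far richer statistics over vastly more text.

```python
from collections import Counter, defaultdict

# Tiny illustrative corpus. (Real models train on enormous text collections.)
corpus = "i have these symptoms do i have covid do i have a cold".split()

# Count which word follows which: a bigram table.
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def complete(prompt, n_words=4):
    """Repeatedly append the most likely next word. Nothing here
    verifies facts; it is purely 'what word tends to come next.'"""
    words = prompt.split()
    for _ in range(n_words):
        options = following.get(words[-1])
        if not options:  # no known follower: stop generating
            break
        words.append(options.most_common(1)[0][0])
    return " ".join(words)

print(complete("i have"))  # continues with whatever followed "have" most often
```

The point of the sketch is the absence: there is no step anywhere that asks "is this answer correct?", only "is this word likely?"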
But does generative AI like Chat GPT have a place in the classroom moving forward? The answer is mixed.
O’Meara said he doesn’t think it has a place in the classroom moving forward.
“It’s sort of like plagiarism-lite, or something," O'Meara said. "You’re substituting the pure, unadulterated thinking of an individual with machine-thinking based on a collective."
SVUSD Assistant Superintendent of Curriculum and Instruction Terri Romo said the next generation needs to know how to use AI as a tool, just like any other technology.
“They already are using it, they already know about it, they’re playing with it, they’re fascinated by it as well," said Romo. "So, just like with any new tool, we have to go through the process and learn how to utilize it for good.”
Much like the internet at its onset, generative AI isn't going away. So how to regulate its use becomes the central issue going forward.