AU plunges into age of artificial intelligence

Jun 19, 2023

ASHLAND — Peter Slade knew in December 2022 that generative artificial intelligence would impact his classroom.

Generative AI refers to programs that can produce written, visual and audio responses to prompts. Many programs do this, including Midjourney and WellSaid Labs. Perhaps the best-known generative AI program is ChatGPT, the chatbot launched by OpenAI in November 2022. It had 1.6 billion website visitors in June, according to Exploding Topics.

Slade, an 18-year faculty member in Ashland University’s religion department, began testing AI detection software as soon as he became aware of generative AI’s capabilities.

When he returned to teach during the spring semester, he was already running into students using AI on their assignments.

“These were early adopters, so they weren’t very sophisticated,” Slade said. “The writing was very high quality, grammatically speaking, but at the same time had a sort of vagueness to it that wasn’t really connecting perfectly with the prompt.”

One student, he said, claimed they didn’t use AI to write a paper. But Slade compared the paper to the student’s earlier writing samples and ran it through AI detection software. Both indicated the student may have used AI to complete the assignment, Slade said.

“It’s cheating, by our current definition, because you’re not able to reproduce the full process that you had to come up with it,” Slade said. “On the other hand, if in fact you did write the paper and it was just crummy, and they’re your ideas, and you put it into AI and say, ‘Make it sound smarter,’ it will do that, and it is your idea and it’s not cheating.

“So, there’s a whole area that has to do with academic integrity.”

Slade thinks students’ use of AI to complete assignments is going to increase in the coming months and years.

“Artificial intelligence and machine learning is not new,” said Shawn Orr, the dean of eAshland.

In her position, Orr teaches courses at Ashland University and directs the Center for Innovation and Teaching Excellence (CITE). The center focuses on faculty development, and it’s on the front lines of helping Ashland University respond to the advent of generative AI tools like ChatGPT.

While machine learning and AI aren’t new phenomena, generative AI is, Orr said.

These new technologies are forcing students, professors and universities to grapple with what schoolwork, research and more may look like in the future.

Orr said universities have generally taken one of three approaches to generative AI: avoiding it, banning it or embracing it. According to Orr, Ashland University is moving toward the latter.

“As these tools are produced, they’re going to change what we do every day in our job,” Orr said. “It’s important for us in higher ed to embrace this, to be aware of it and then to appropriately prepare our students.”

Enter Katy Major, an AU instructional designer and adjunct English professor. She’s an Ashland University alum who’s been teaching there since 2017.

A large part of Major’s job is developing online classes and faculty training. She also teaches courses in the English department.

Major’s interest in generative AI began in November 2022. She attended events and did independent research on the topic. She was motivated by her belief that creating writing and art is important to humanity, and her fear that AI could change that.

“Such a big part of the fabric of our society is creating art, discussing art, appreciating art,” Major said.

During the spring semester, Major offered some events for faculty to talk about generative AI. Major said those events didn’t get much of a response. She figured most people weren’t that interested.

Then, this summer, during an annual faculty learning community, Major led a session on generative AI that tackled four topics.

AI in education became a faculty focus at that learning community, Major said.

“Faculty really wanted to set up students for success with generative AI,” Major said.

She said ChatGPT and text-based tools like it have been the main focus of faculty discussions.

Faculty raised concerns at the summer learning community. The biggest ones focused on academic integrity and the ethics of AI use, according to Major.

“Students are using it, but there’s really no guidelines on how to properly use it,” Major said.

So Major and Maura Grady, the head of AU’s English composition program, wrote an addition to the academic integrity policy that addresses AI. That addition, penned in early spring, concerns the submission of AI-generated content as one’s original work.

The faculty senate plans to discuss and vote on that change in September, said Gregory McBrayer, the body’s president.

McBrayer and Major said more training on AI will be available for faculty members during the fall semester. CITE will also offer a second learning community like the one Major led over the summer. Major said there’s been lots of interest in that fall learning community.

Each year, AU also holds a “faculty college,” where faculty learn new teaching methods and the like upon returning to campus. McBrayer, Orr and Major said faculty college will host presentations regarding AI use in the classroom, too.

But even with the preparations for professors to use AI, McBrayer said his colleagues’ views on it fall across the spectrum. Some have been quick to adopt it, while others remain wary.

It’s up to individual faculty members and departments whether or not to use it in their classrooms.

McBrayer, a political science professor, has played around with AI. But he doesn’t foresee a situation where he’s likely to use it in his classes.

While McBrayer has a natural hesitation about using generative AI in his classroom, he’s optimistic about its impact on the study of liberal arts.

“It could be very dangerous, but it could also be very good,” McBrayer said. “It could streamline some of the administrative tasks folks have to do and make things easier so we can focus on the things that are really important, like reading great works of literature together or doing an experiment or working through a math problem.

“It could compel students to become much more demanding in what they want: an excellent education.”

Though McBrayer doesn’t currently plan to use it in his classes, some of his colleagues will.

Grady, the director of AU’s English composition program, showed her students how generative AI worked last spring. She gave ChatGPT an essay prompt and had students read through its answer.

“We examined it together and they were able to identify why it didn’t give a very good answer,” Grady said. “Partly it was because of the type of assignment that it was that required a lot of close readings and citations of the sources that students were using to integrate into their arguments.

“That’s not something that this AI software does very well yet.”

Major, the instructional designer, said she’s teaching a Composition 101 course this fall. She wants her students to complete writing assignments about AI’s application in their fields. She also plans to have them use AI to determine its benefits and drawbacks, and she’s debating whether to make AI a “peer” in classroom discussions.

That idea, she said, is one some professors at other institutions are using already. Essentially, students would discuss a question and then pose it to an AI chatbot, which would answer it, too. The goal is for AI to further the discussion.

But Major stated in a July 26 email that none of the faculty members she worked with in the summer learning community plan to use generative AI in their classrooms.

“It’s hard to encourage,” Major said, acknowledging that the technology can be unpredictable.

She has concerns about AI even though she will be using it. Mainly, she’s worried about AI’s tendency to produce biased outputs.

“It’d really be a shame if, while trying to expand their horizons in college, [students] were receiving biased or slanted information,” Major said.

That’s a possibility with generative AI, said Selvanayaki Shanmugam, a computer science professor who’s been at AU for three years. Until it’s 100% accurate, Shanmugam said she won’t be adding generative AI to her syllabus.

Still, as a computer science professor, she wants to expose her students to the new technology. She’ll be making sure students know what AI is.

Shanmugam also said she’s been researching AI use for the visually disabled.

From what she’s seen so far, Shanmugam said, there’s a difference in ethics between generative AI and the type she’s researching.

In developing AI use for visually disabled people, researchers have guardrails, according to Shanmugam. Those include prioritizing customer privacy and focusing on the consumer and their interactions with the application.

There’s a clear justification for that technology’s use. But with generative AI, Shanmugam said, student use would need to be justified.

“It has its own benefits and then its own pitfalls, and we need to be cautious about using it for good,” Shanmugam said.

Slade, the religion professor, has concerns about AI as well.

He thinks AI has strong potential to change the job market. It may even make some upper-level, white-collar jobs obsolete, Slade said. If there’s no longer demand for those jobs, then Slade worries there won’t be demand for the education that goes along with them.

He thinks the academic integrity policy only addresses part of the puzzle with AI.

“It’s not addressing how we completely reshape higher ed to address this new situation,” Slade said. “College education is about not only introducing you to great ideas … but also to be able to express them.

“It’s no good just having great ideas. You have to be able to organize them and give a speech, or write a paper or present them in some medium.”

In the religion department this fall, “students will have to learn to write in cursive,” Slade said.

The department, according to him, will opt for in-person exams in the fall. Slade said that choice comes partly as a result of potential academic integrity issues with AI use.

Still, Slade said he thinks colleges have an ethical responsibility to be aware of the world their students are entering. With AI, that world is changing.

“AI is a game-changer,” Slade said. “It’s a massive disruption to our education system and I think people are only just waking up to that.”

Orr, the dean of eAshland, agreed with Slade about AI’s potential to change higher education.

New technologies like the typewriter and the internet have dramatically changed higher education in the past, Orr said. She argued those new technologies made education more accessible and equitable.

In her view, students should know and understand how to use this type of new technology. But that doesn’t mean human connection becomes any less important.

“This is transformational,” Orr said. “This is going to transform degrees, it’s going to transform people, it’s going to transform education, but what it can’t ever transform is the human connection you get from your teachers.”

Mariah Thomas is Ashland Source’s Report for America corps member. She covers education and workforce development, among other things, for Ashland Source. Thomas comes to Ashland Source from Montana.
