Last week, I spoke on a panel at the School of Humanities, Arts and Social Sciences (HASS) conference Thoughts on Bots to discuss how AI will impact higher education. I showed faculty how I, as a student, use ChatGPT as a tool to enhance my learning. Many expressed concerns about students using it to avoid thinking for themselves and to fake engagement. I saw this firsthand in an online summer course that required weekly discussion posts: it became very clear that one person's contributions were always formulaic, restating earlier points with odd synonyms. Philip Sutherland, a fellow panelist and Visual Arts student, attributed this behavior to a misalignment between course goals and student goals: the professor wants the student to learn something, while the student just wants to pass the class.
Artificial intelligence is, currently, no replacement for human intelligence, though in specific circumstances it does a good job of mimicking it. That is not its most useful application; we have human minds to do human thinking, and computational intelligence to aid us. Over the summer, I created a Python program to solve an inverse finite element problem. I had never coded in Python before, but ChatGPT helped me translate what I knew about the problem, and how I would approach it in C++, into a program that ultimately won the undergraduate research award. It's a method of triangulation: I know what the final result should look like and where the gaps in my knowledge are, and ChatGPT fills them in with its statistically most likely response. If I give it feedback that its first try didn't work, it keeps iterating until we reach a suitable solution.
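For readers wondering what an inverse finite element problem even looks like, here is a minimal sketch. It is not my actual project; it is a hypothetical stand-in that recovers a bar's stiffness from simulated displacement measurements, and every name and number in it is illustrative.

```python
# Hypothetical inverse FEM sketch: recover the stiffness E of a clamped
# 1D bar from "measured" nodal displacements. Illustrative only; this is
# not the actual research code described above.
import numpy as np
from scipy.optimize import minimize_scalar

N_ELEM = 10     # number of 1D linear elements
LENGTH = 1.0    # bar length (m)
AREA = 1e-4     # cross-sectional area (m^2)
LOAD = 1.0e3    # axial load at the free end (N)

def solve_forward(E):
    """Forward FEM: nodal displacements of a clamped bar with stiffness E."""
    n = N_ELEM                      # free DOFs (the clamped node is removed)
    k = E * AREA * N_ELEM / LENGTH  # element stiffness E*A/h
    # Assembled tridiagonal stiffness matrix for the free nodes.
    K = k * (2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1))
    K[-1, -1] = k                   # the end node touches only one element
    f = np.zeros(n)
    f[-1] = LOAD                    # point load at the free end
    return np.linalg.solve(K, f)

# Synthetic "measurements": simulate a bar of known stiffness, add noise.
E_TRUE = 70e9                       # roughly aluminum (Pa)
rng = np.random.default_rng(0)
u_obs = solve_forward(E_TRUE) + rng.normal(0.0, 1e-7, N_ELEM)

# Inverse problem: find the stiffness whose predicted displacements best
# match the measurements, in a least-squares sense.
result = minimize_scalar(
    lambda E: np.sum((solve_forward(E) - u_obs) ** 2),
    bounds=(1e9, 500e9), method="bounded",
)
print(f"true E = {E_TRUE:.3e} Pa, recovered E = {result.x:.3e} Pa")
```

The shape of it is the triangulation I described: a forward model predicts what a guess would look like, and an optimizer adjusts the guess until prediction and observation agree.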
ChatGPT helps with my creative work, too. I input my finished Stute article and ask it to tell me which points are worded confusingly or need more expansion. I don't take all of its suggestions, but it helps me read my work differently, especially once I've grown too used to my own wording. The most difficult part of the writing process for me is the conclusion, so I hit my word count and ask ChatGPT for a conclusion paragraph. It summarizes very well, and though it can't mimic my writing voice, it helps me identify the key points so that I can rephrase them. It's much easier for me to read something and think of how I would improve it than to come up with it from scratch. AI has the potential to revolutionize personal tutoring for students, but its commercial applications stand on shaky ground.
There are a variety of ethical issues that need to be addressed concerning AI and large language models. President Biden recently issued an executive order to address the major concerns about AI, including protecting consumers' privacy, preventing discrimination, and requiring companies to share their safety test results with the government. The order does not lay out plans for achieving those goals, as that would overreach the executive branch's authority; instead, it calls on a deeply divided Congress to take action. Americans are already concerned about fake or extremely biased news, and AI-generated content can make it even harder to distinguish credible sources. On a local scale, if AI is used to write sections of a report, is it ethical for the work to be attributed to you, or should the AI be credited as its own contributor? If it generates false content, it could shift liability away from the individual, but then who can be held accountable? Tech companies were eager to volunteer their technology for independent safety testing, which might let them deflect responsibility: if they pass, they can claim to bear none. It's dangerous, uncharted territory; how do you test a system for potential hazards when we are only just discovering its capabilities?