The Innovation University: Powered by Technology. At Stevens, technology is always at the forefront of everything we do. Engineering students take coding classes; computer science students spend years learning the inner workings of computers; business students learn how to use technology to predict stock market fluctuations; and even those in the humanities school use technology to enhance their art or sharpen their analysis of social structures. But as technology advances, and as its use at Stevens grows, do more problems arise than are solved? Over the past month, I attended two campus events that examined perhaps the fastest-advancing technology with the greatest influence and implications: artificial intelligence (AI).
The first event, aptly named “Thoughts on Bots” and hosted by the School of Humanities, Arts, and Social Sciences (HASS), invited professors from around the region to discuss generative AI and pedagogy. On October 26 and 27, in the TechFlex of the UCC, panelists discussed different aspects of AI and their impact on writing pedagogy, with guests traveling to Stevens from across New Jersey and from as far away as the University of Toronto. Presentations included research on AI and emotionality, the impact of AI on libraries, uses of chatbots like ChatGPT in university writing centers, and a panel of students, including myself, discussing the impact of AI on different aspects of student life, with a focus on student life here at Stevens.
Throughout the conference, opinions varied widely. Yet two points stood out as unanimously agreed upon by attendees, and, to me, as the most important: AI is here to stay, with no way around that; and society is not yet prepared to cope with a technology advancing so rapidly.
That AI is a permanent addition to our technological society is a done deal; much of the world now uses or runs on AI, so ending its use, or even limiting its development, is impossible. Society’s preparation for AI, however, leaves more room for debate. At the “Thoughts on Bots” conference, some argued AI could be used as an idea generator, while others disagreed; some argued that AI could be cited as an author, while others insisted AI lacks the consciousness to consent to authorship. Many pedagogy experts, armed with decades of experience and thousands of data points, argued for mitigating AI’s use until we have a better idea of how to employ it ethically, legally, and appropriately.
While mitigation may be a potential solution to the many issues and questions AI’s existence presents, Stevens’s nature poses a threat to this resolution: it is an institution “powered by technology,” filled with those who strive to make it “the innovation university.” Within weeks of ChatGPT erupting at the end of 2022, swathes of Stevens students were using the powerhouse large language model (LLM) to produce code for their programming courses or prompting it to write a thematic analysis of Plato’s everlasting allegory of the cave. Whether it be the Stevens Honor Board working to address the use of LLMs in students’ work or the development of a campus-wide policy on the issue, there is little hope of stopping Stevens’s thirst for innovation.
While Stevens grapples with the implications of AI in the classroom, a much larger debate is happening worldwide: what is the legal precedent when it comes to AI?
The Stevens Law Society took up this question on November 9 with a special guest lecture from Dr. David Opterbeck of Seton Hall University Law School’s Gibbons Institute of Law, Science & Technology. Dr. Opterbeck addressed the legal challenges AI presents: copying versus redistributing, the legality of internet scraping, and who owns the information those scrapes produce. (It was a great talk; thanks very much to the Stevens Law Society for hosting!)
Dr. Opterbeck’s talk intersected with the “Thoughts on Bots” conference in several ways: the citation of works, the difference between using an LLM as a tool versus treating it as an entity, and whether LLMs like ChatGPT can be considered “conscious” enough to be a consenting author or creator under the legal guidelines of copyright, trademark, and other intellectual property protections. Yet again, the conclusion remained the same: AI has changed, is changing, and will change the world; it is just a matter of when to accept it, or how much effort you will put into fighting the inevitable.
A solution to the question of AI will only present itself over time, whether years or even a decade. It will take rounds and rounds of trial and error, court cases, the emergence of long-term trends, or a consensus of the masses. I admit to using LLMs: GrammarlyGO as a mechanics and grammar checker, ChatGPT to bounce ideas off of, and even DALL-E to find unique and cool new ways to organize my desk. Not only do I admit to using them, but I am proud of it, both as a Stevens student who loves to innovate and as a citizen of the world who does not want to be left behind as technology advances. Putting aside my law studies, my pedagogy experience, and simply watching AI take over the academic world through my civil engineering and music courses, I appreciate AI as the next great leap in technology that will change the world. Whether it be the internet, personal devices, electricity, or even the discovery of fire, technological advances always prove to be an exciting aspect of society: the social spark for the fire that powers humanity. One could almost say that AI represents an ideology of being inspired by humanity, and powered by technology.