
Let’s talk about tech regulation: The ethics of artificial intelligence

I was first introduced to artificial intelligence (AI) through movies and media. Films like Star Wars made me appreciate the convenience of AI; watching C-3PO and R2-D2 made me wish for a world in which a sentient robot (or rather, a protocol droid) tended to my every need. Malignant AI, as seen in 2001: A Space Odyssey and The Terminator, was alarming but failed to sour my view of future AI entirely. Media portrayals showed me that I was more enticed by the convenience of AI than deterred by its potential for evil. 

Years later, as I've begun to study computer science and have considered the ethical implications of technological innovation more deeply, I've had a change of heart. While I am beyond impressed with the rate at which AI has progressed, I believe it is important to first investigate the dangers that AI can pose to society. 

Unintentional (or intentional) biases

The relationship between programmers and intelligent systems mirrors that of a parent and child. AIs are often given goals to achieve, but not the solutions for reaching them; instead, they receive guidance and data from their programmers. The hope is that one day, the intelligent system will become autonomous and perform without its programmer's guidance. At least two problems can arise here: (1) programmers can (unintentionally or intentionally) transfer their biases, and (2) AI can unintentionally develop biases of its own. 

One way in which AI can discriminate against groups of people is by receiving biased data from the programmer. For example, a programmer may supply a hiring system with a dataset reflecting past discrimination against female applicants who were deemed likely to become pregnant. Trained on that data, the system may prevent eligible applicants from being hired. 

On the other hand, it is possible for AI to develop biases from the data it receives. Take the example of tech giant Amazon, which used AI and machine learning to filter through resumes. Over time, the company noticed that its algorithm began to favor men for highly technical roles, because the training set consisted largely of resumes from male applicants. Amazon has since changed its recruiting methods, but that does not mean the problem has gone away. 
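To make the mechanism concrete, here is a minimal, purely illustrative sketch of how a naive resume-scoring model can absorb bias from skewed historical decisions. This is not Amazon's actual system; the dataset, tokens, and function names are all invented for illustration:

```python
from collections import Counter
import math

# Toy "historical" dataset: (resume tokens, was the applicant hired?).
# The past decisions are skewed: resumes containing "womens" (e.g. from
# "women's chess club") were rarely hired.
history = [
    (["python", "backend", "chess"], True),
    (["java", "systems", "chess"], True),
    (["python", "womens", "chess"], False),
    (["java", "womens", "robotics"], False),
    (["python", "robotics", "systems"], True),
    (["java", "womens", "backend"], False),
]

def train_word_weights(data, smoothing=1.0):
    """Naive Bayes-style log-odds per word, learned from past decisions.
    Positive weight: the word is associated with being hired."""
    hired, rejected = Counter(), Counter()
    for tokens, label in data:
        (hired if label else rejected).update(tokens)
    vocab = set(hired) | set(rejected)
    n_h = sum(hired.values()) + smoothing * len(vocab)
    n_r = sum(rejected.values()) + smoothing * len(vocab)
    return {
        w: math.log((hired[w] + smoothing) / n_h)
           - math.log((rejected[w] + smoothing) / n_r)
        for w in vocab
    }

weights = train_word_weights(history)
# The model ends up penalizing the token "womens" -- not because it
# measures ability, but because the historical decisions it learned
# from were biased against it.
```

The model never sees an applicant's gender directly; it learns a proxy for it from the data, which is exactly how this kind of bias slips through unnoticed.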

Rapid decision-making

AI also raises other ethical questions about decision-making. A prime example is self-driving cars, which use machine learning and other branches of AI to make split-second decisions. 

One ethical dilemma that self-driving cars face is a version of the trolley problem. A common iteration goes as follows: a trolley is speeding down tracks that lead to a fork. You can send the trolley down track one, where it will run over one person, or down track two, where it will run over three people. In the self-driving scenario, the AI is the decision-maker, and the figures on the tracks can represent inanimate objects, pedestrians, animals, or even the passengers in the car. Whose life should the self-driving car value most when an accident is unavoidable? This dilemma raises a larger question about the decision-making processes of AI. 
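To see why this is a programming problem and not just a philosophy puzzle, consider a deliberately simplified sketch (not any real vehicle's logic) of a car resolving the fork by minimizing a hand-assigned "cost." The numbers below are arbitrary assumptions, and that is precisely the ethical issue: someone has to write them down.

```python
# Illustrative only: each obstacle type gets a hard-coded cost.
# Whatever values a programmer assigns here ARE the car's ethics.
COST = {"pedestrian": 100, "animal": 10, "object": 1}

def choose_track(tracks):
    """tracks: a list of tracks, each a list of obstacles on it.
    Returns the index of the track with the lowest total cost."""
    totals = [sum(COST[item] for item in track) for track in tracks]
    return totals.index(min(totals))

# One pedestrian on track 0, three on track 1: the car picks track 0.
print(choose_track([["pedestrian"], ["pedestrian"] * 3]))  # -> 0
```

Every design choice here is contestable: should the passengers' costs differ from pedestrians'? Should three animals outweigh one person? The code forces answers that philosophers have debated for decades.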

Conclusions

These problems barely scratch the surface of the dangers of AI. There's also the possibility that AI could redefine our conception of what it means to be human. The debate over whether computers can be considered minds is ongoing; with superintelligent AI, machine processes could become similar to, or even indistinguishable from, human minds. But I digress. 

While these effects are beyond scary, I'm glad to see lawmakers paying more attention to AI regulation; in New Jersey alone, seven bills have been introduced in the state Legislature aiming to study the impacts of AI, prohibit discrimination in automated systems, and modernize state technology to use AI and related services. While these regulations are a good start, technology has been developing faster than legislators can keep up with it. Furthermore, the generational gap between representatives and those who develop and use intelligent technology is large. 

We need people in Congress and other regulatory agencies who have industry and professional experience with AI to combat these issues. Intelligent system development will not sit idly by while Congress plays catch-up. 

Technically Speaking is an Opinion culture column used to discuss topics relating to technology, such as pop culture, trends, social media, or other relevant subject matter. 
