On Wednesday, Oct. 4 at 4 p.m., Dr. Oren Etzioni, Chief Executive Officer of the Allen Institute for Artificial Intelligence, gave a lecture titled “Is Artificial Intelligence Good or Evil?” as part of The President’s Distinguished Lecture Series. He is a professor at the University of Washington; he received his Ph.D. from Carnegie Mellon University in 1991 and his B.A. from Harvard in 1986, where he was the first student ever to concentrate in Computer Science. His lecture focused on what A.I. really is, as opposed to the common perceptions perpetuated by science fiction, and on possible ways to address the ethical issues raised by artificial intelligence development.
Dr. Etzioni emphasized that intelligence and autonomy are two very different things, a distinction that is especially important for applications of A.I. in weapons. He described A.I. weapons that can fly halfway across the world and kill someone as “the stuff of nightmares,” but pointed out that “the nightmare has to do with the autonomy. It’s very scary to have a weapon that can make a life or death decision without a human in the loop. Intelligence is not the problem – it’s autonomy. Intelligence in weapons can actually prevent mistakes like we’ve had and do have when civilians have been killed.” What we must avoid, he stressed, is not intelligent weapons but autonomous ones.
Although many people are frightened by the idea of autonomous A.I., most artificial intelligence is not autonomous, only intelligent. Dr. Etzioni said that his seven-year-old is more autonomous than any A.I., and that “to understand what is harmful and what is not really requires common sense, and remarkably, that is one of the hardest things for us to give to the machine. There really are no machines today with even a modicum of common sense.”
Dr. Etzioni argued that trying to slow down A.I. development in the U.S. would only allow other countries, such as China, to overtake us; instead, we should try to guide the direction of this rapidly developing field. He suggested regulating not the field itself but its applications, namely cars, toys, and robots. Self-driving cars need regulation because of the risk to human life, and unregulated A.I. toys could threaten privacy – Dr. Etzioni gave the example of the information a child might confide to an A.I. Barbie.
He presented three rules that he believes should govern A.I., concerning legal responsibility, full disclosure, and privacy. If someone’s self-driving intelligent car crashes into someone else’s car, the owner still bears responsibility: just as “My dog ate my homework” is not a valid excuse, neither is “My A.I. did it.” Dr. Etzioni was adamant that an A.I. should disclose that it is not human, so that people online are not fooled into believing that an A.I. is a real person. Privacy is a huge issue with A.I. because it can gather large amounts of data that must be used responsibly; for instance, when someone no longer wants to see a Google ad, one of the options for telling Google why is “ad knew too much.” Dr. Etzioni also explained that a major issue in A.I. development is that a system can easily pick up on and amplify human bias in its training data, because it compresses that data into generalizations that guide its future decisions.
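That amplification effect is easy to reproduce. The sketch below is a hypothetical illustration, not an example from the lecture: the data is invented, and the choice of a scikit-learn logistic regression is an assumption made for brevity. It trains a simple classifier on data where two groups differ by 30 percentage points in their outcomes, then shows the model widening that gap into an absolute rule.

    from sklearn.linear_model import LogisticRegression

    # Hypothetical training data: one binary feature (group membership)
    # and a binary outcome. Group A sees a positive outcome 70% of the
    # time, group B 40% of the time -- a real but not absolute gap.
    X = [[0]] * 100 + [[1]] * 100      # 0 = group A, 1 = group B
    y = [1] * 70 + [0] * 30 + [1] * 40 + [0] * 60

    model = LogisticRegression().fit(X, y)

    # Compressed into a decision rule, the skew becomes absolute: the
    # model predicts a positive outcome for every member of group A
    # and for no member of group B.
    print(model.predict([[0], [1]]))   # -> [1 0]

A statistical tendency in the data has been compressed into a categorical judgment, which is precisely the failure mode Dr. Etzioni described.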
Dr. Etzioni’s answer to the question “Is Artificial Intelligence Good or Evil?” was simple: A.I. is neither good nor evil. It’s a tool, and the choice is ours.