Press "Enter" to skip to content

How would an artificial intelligence report on an artificial-intelligence conference?

I wish I had an artificial-intelligence assistant (call it MY AI) that knows how I think and write, so it could do my job when my mojo is low.

For example, I recently spent two days at NYU listening to philosophers, scientists and engineers jaw about “Ethics of Artificial Intelligence.” How can we ensure that driverless cars, drones and other smart technologies (such as algorithms that decide whether a human gets parole or a loan) are used ethically? Also, what happens if machines get really smart? Can we design them to be nice to us? Do we have to be nice to them?

Speakers responded to these questions in a welter of ways, as did members of the audience. How should I write it up? Too many choices! The biggest choice is whether to take the conference seriously or as light entertainment.

MY AI could prioritize quotes according to each speaker’s Google ranking, the density of buzzwords, or both. It could flag the comments that aroused the strongest audience response, as measured by posture (upright versus slumped), post-talk questions and laughter.
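If MY AI existed, its quote-ranking might look something like this minimal Python sketch. To be clear, the weights, field names and sample data are all invented for illustration; nothing here describes a real system:

```python
# Hypothetical sketch: rank conference quotes by speaker prominence
# and audience response. All fields and weights are made up.

quotes = [
    {"speaker": "A", "google_rank": 0.9, "buzzwords": 3, "laughs": 1, "questions": 4},
    {"speaker": "B", "google_rank": 0.4, "buzzwords": 7, "laughs": 5, "questions": 1},
]

def newsworthiness(q):
    # Weight speaker prominence, buzzword density and audience response.
    return 2.0 * q["google_rank"] + 0.5 * q["buzzwords"] + q["laughs"] + q["questions"]

for q in sorted(quotes, key=newsworthiness, reverse=True):
    print(q["speaker"], round(newsworthiness(q), 2))
```

Sorting by a score like that would hand me my lede before I finished my coffee.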

MY AI could produce the equivalent of a gag reel. For example, while Eliezer Yudkowsky was emphasizing how hard it might be to turn off a superintelligent machine, a message from the NYU wireless network kept blocking his slides. “I don’t know how to turn off the Internet!” Yudkowsky wailed. Ha ha.

Sex sells, so MY AI might focus on Kate Devlin’s talk about sex robots, which are a real thing now, not just sci-fi. If sex robots become so smart that they are likely to be sentient, will we have to grant them rights? Violence makes good click-bait, too. So MY AI might highlight the talk by Peter Asaro on autonomous weapons, a.k.a. “killer robots.”

MY AI, channeling me, might complain that this sci-fi, killer-robot stuff distracts us from a far more pressing issue. Since 9/11 the U.S. and its allies have killed thousands of civilians, including children, with drones, rockets, bombs and bullets. What about the ethics of that?

Knowing how compulsively skeptical I am, MY AI might focus on comments about how dumb AI programs are, in spite of all the hype. Yann LeCun pointed out that even much-touted deep-learning programs cannot really learn on their own. AI programs still require feedback from humans to know if they’re right or wrong.
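LeCun’s point is, at bottom, about supervised learning: the program improves only when a human-supplied label tells it whether its guess was right. Here is a toy sketch in Python; the data, learning rate and update rule are made up for illustration, not LeCun’s code:

```python
# Toy supervised learner: one weight, nudged by human-labeled examples.
# Remove the label y and there is no error signal, hence nothing to learn from.

def train_step(w, x, y, lr=0.1):
    prediction = w * x
    error = y - prediction        # feedback comes from the human label y
    return w + lr * error * x     # no label, no update

data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # (input, human-supplied label)
w = 0.0
for _ in range(50):
    for x, y in data:
        w = train_step(w, x, y)

print(round(w, 2))  # approaches 2.0, but only because humans provided y
```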

Gary Marcus said that in spite of huge advances in hardware and object recognition, computers lack the common sense of a typical kid. They also lack abstract knowledge of the kind required to make ethical judgments. Any human teenager can tell who the good and bad guys are in a film, but a computer can’t.

MY AI might paraphrase Thomas Nagel’s warning that thousands of years of philosophical inquiry into ethics have produced profound disagreement. So automating ethics might prove elusive.

Daniel Kahneman pointed out that we don’t have any idea how matter generates consciousness. We know we’re conscious, but our confidence that other things are conscious wanes as they become less like us. We are left only with our intuitions. Therefore, MY AI might add, all the talk about how we should minimize the suffering of sex robots and other intelligent machines could be moot.

Nick Bostrom dwelled on possible misalignments between our human goals and the goals of superintelligent, sentient machines. These concerns are so hypothetical, MY AI might argue, that they’re silly. We should be more worried about the misalignment of goals between ordinary citizens and the powerful corporations and government agencies developing AI right now.

But who would want to read that? MY AI, if it’s smart, will go with laugh lines and sexbots.


John Horgan directs the Center for Science Writings, which is part of the College of Arts & Letters. This column is adapted from one originally published on his ScientificAmerican.com blog, “Cross-check.”
