Recent developments in artificial intelligence (AI) and computer vision have made it possible to create ‘fake people.’ These people are either generated entirely from scratch by an AI or produced by digitally altering footage of actors who resemble the person being imitated.
A movement called RepresentUS recently tried to publish deepfake advertisements on several news outlets, including CNN and Fox News. The advertisements depicted Russian President Vladimir Putin and North Korean leader Kim Jong-un, each stating that Americans need to act in order to protect their democracy. The networks declined to run the advertisements, pulling them shortly before they were scheduled to air.
While the campaign was intended for a good cause, the possibility of this technology being used for nefarious purposes remains. At the end of the advertisements, the leaders stated that the video wasn’t real, but someone seeking to tarnish another person’s reputation would have no reason to include such a disclaimer. Without it, it becomes difficult to identify which videos are real and which are not.
Artificially generated influencers are also gaining popularity on social media platforms such as Instagram. The AI-generated influencer ‘Rozy,’ developed by a company to represent various brands, is one example. Unlike a real person, an AI-generated influencer has no chance of being involved in a scandal that would damage a company’s reputation, and it never ages, meaning it could represent a company for decades to come. These Instagram accounts are less like people and more like characters such as Mickey Mouse; the only difference is the level of detail and photorealism they possess.
All of this points towards the need for transparency from users of this kind of artificial intelligence. Digital influencers usually indicate that they are artificially generated in their bio or elsewhere, but as with deepfakes, there are no laws or regulations governing their use. We are still in the early stages of this technology, but we are quickly approaching the point where it will become widely accessible and easy to use.
As AI generation of this content improves, techniques for detecting the work of an AI are also being developed. However, it is unclear whether these ‘detective’ AIs will be able to keep pace with the generators they are meant to catch.
Further legislation will be needed in order to regulate these technologies. In the same way that a carton of orange juice has nutritional information on the back, perhaps Instagram accounts will have a required ‘artificially generated’ tag. The societal repercussions of this kind of technology are unclear, but there’s no doubt that it will have a huge impact on our lives.
For now, there’s not much we can do to control the path that artificial intelligence development will take. Instead, we should focus on identifying sources of misinformation and preventing their spread. Resources such as Vote Smart and FactCheck.org are great ways to stay informed and to confirm (or debunk) political claims. Beyond this, we can lobby our politicians and push for more regulation and research in the hopes of a brighter and more transparent future.
Senioritis is an Opinion column written by one or two Stevens student(s) in their last year of study to discuss life experiences during their final year at Stevens, and other related subject matter.