The threat of Artificial Intelligence (AI) used to be nothing more than a science fiction doomsday scenario. Today, an AI threat is a very real possibility, one that could prove more disastrous than nuclear weapons or a third world war.
Over the years, AI has been advancing at an alarming rate. In fact, it is precisely AI's exponential rate of improvement that makes it so dangerous. Should we decide to follow through with making AI sentient and giving it free will, it could, as many science fiction narratives have suggested, see humans as a problem and decide to do something about it. Will AI become so deeply connected to our infrastructure that it could one day take independent steps to eradicate or enslave humanity?
A simple solution to prevent this doomsday scenario would seem to be programming AI differently and restricting it to stop it from, say, developing free will. However, this raises certain ethical dilemmas. If we are to create artificial life, is it ethical to keep it from reaching its full potential? To restrict it from thinking freely? Moreover, does the risk that comes with developing AI outweigh the reward? And should we sacrifice the vast potential of AI for the sake of public safety? This is one of the bigger problems with AI: so many issues and questions arise before it has even been made a reality.
Pioneers of artificial intelligence such as Elon Musk have made it clear that they are worried about AI and believe it needs to be regulated. Musk says: "I am really quite close, I am very close, to the cutting edge in AI and it scares the hell out of me." Microsoft's founder Bill Gates has also expressed his thoughts on AI: "The place that I think this is most concerning is in weapon systems." But Gates also recognizes the vast positive potential of AI: "When I see it applied to something that without AI, it is just too complex, we never would have seen how that system works, that I feel like, 'Wow, that is a very good thing.'"
Indeed, the aforementioned rate of exponential improvement could benefit humanity drastically. With its ability to solve complex problems, AI could offer solutions to long-distance space travel, global warming, and energy shortages, along with other challenges we face, such as the cost and quality of healthcare. Unfortunately, this is also what makes AI dangerous: its ability to solve the most complex problems with ease. We will need to figure out how to control it; otherwise, it could decide to "solve" us.
Although AI does pose a very real and legitimate threat to human safety, it is also something that could benefit us enormously and help solve the toughest issues we face as a species. In this respect, AI is like global warming: it does not seem like an issue until it is too late. If we cannot find a way to control it, soon enough we will be the ones being controlled, at the mercy of artificial intelligence.