Artificial intelligence technology is advancing rapidly. Programmers are inching ever closer to creating true AI, or systems that can operate autonomously, teach themselves tasks, and either adeptly mimic or altogether outpace human behavior. DeepMind, the artificial intelligence lab owned by Google parent Alphabet (NASDAQ: GOOGL), recently created a system that defeated the world's top player at the board game Go, for instance.
But with these advancements come certain risks.
AI learns how to perform specific tasks through machine learning, or the practice of using algorithms to parse data, learn from it, and then apply what they learn to reach a determination or prediction. In this way, machines can become capable of guiding themselves rather than relying on labor-intensive, hand-coded software routines narrowly tailored to each discrete task.
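To make that idea concrete, here is a minimal sketch of machine learning in Python using the scikit-learn library (an assumption for illustration; none of the companies mentioned here publish their actual code). The algorithm parses a small labeled dataset, learns patterns from it, and then applies those patterns to predict labels for examples it has never seen.

```python
# Minimal machine-learning sketch: parse labeled data, learn from it,
# then apply what was learned to make predictions on new data.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Load a small labeled dataset (flower measurements and species labels).
features, labels = load_iris(return_X_y=True)

# Hold out some examples so we can test predictions on data the model never saw.
X_train, X_test, y_train, y_test = train_test_split(
    features, labels, test_size=0.25, random_state=0
)

# "Learning": the model infers decision rules from the training examples
# instead of a programmer hand-coding them.
model = DecisionTreeClassifier().fit(X_train, y_train)

# "Prediction": apply those learned rules to reach a determination about new data.
print("Accuracy on unseen examples:", model.score(X_test, y_test))
```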
Computer vision, or the ability of a computer to analyze visual data, is one well-known application of machine learning. Facebook, for instance, is able to automatically identify a dog or a human face through pattern-recognition systems that review massive quantities of visual data.
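A pattern-recognition system of this kind can be approximated in a few lines with a network that has already learned visual patterns from millions of labeled photos. The sketch below uses PyTorch and torchvision (an assumption; Facebook's production systems are proprietary and far larger), and "dog.jpg" is simply a placeholder path for whatever photo you want to classify.

```python
# Rough computer-vision sketch: a pre-trained network applies learned
# visual patterns to classify a new image.
import torch
from PIL import Image
from torchvision.models import resnet18, ResNet18_Weights

weights = ResNet18_Weights.DEFAULT
model = resnet18(weights=weights).eval()  # load the learned pattern detectors
preprocess = weights.transforms()         # resize/normalize like the training data

# "dog.jpg" is a placeholder path to any photo you want to classify.
image = preprocess(Image.open("dog.jpg")).unsqueeze(0)

with torch.no_grad():
    scores = model(image)

best = scores.argmax(dim=1).item()
print("Predicted label:", weights.meta["categories"][best])
```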
AI's ability to teach itself and then act on what it learns is a powerful tool, but it can have unintended effects. AI can misunderstand tasks in unanticipated ways, resulting in behavior that is unwanted or even harmful. Researchers are therefore developing better techniques for teaching machines to operate predictably, efficiently, and safely.
One such initiative is underway at OpenAI, the artificial intelligence lab co-founded by Tesla (NASDAQ: TSLA) CEO Elon Musk. OpenAI is partnering with DeepMind on research built around reinforcement learning. The researchers direct a machine to strive for a particular reward in an artificial, tightly controlled environment, like a video game. In the process of striving blindly for this reward, the machine learns which behaviors bring the reward and which do not. It is a rigorous and time-consuming process of trial and error for the machine, but one that researchers believe will teach AI to analyze inputs correctly and make fewer mistakes. They are also trying to create learning systems that incorporate more human guidance, or even encourage the system to seek human correction.
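The trial-and-error loop described above can be illustrated with a toy example. The sketch below is a minimal tabular Q-learning agent playing the FrozenLake grid world from the Gymnasium library (both choices are assumptions for illustration; the OpenAI and DeepMind work involves far richer environments and human feedback). The agent blindly tries actions; actions that eventually lead to the reward are reinforced, and the rest are gradually abandoned.

```python
# Minimal reinforcement-learning sketch: trial-and-error Q-learning
# on a tiny grid-world "video game".
import numpy as np
import gymnasium as gym

env = gym.make("FrozenLake-v1", is_slippery=False)
q_table = np.zeros((env.observation_space.n, env.action_space.n))
alpha, gamma, epsilon = 0.1, 0.99, 0.1  # learning rate, discount, exploration rate

for episode in range(5000):
    state, _ = env.reset()
    done = False
    while not done:
        # Explore occasionally; otherwise exploit what has been learned so far.
        if np.random.rand() < epsilon:
            action = env.action_space.sample()
        else:
            action = int(np.argmax(q_table[state]))
        next_state, reward, terminated, truncated, _ = env.step(action)
        done = terminated or truncated
        # Reinforce actions in proportion to the reward they lead toward.
        q_table[state, action] += alpha * (
            reward + gamma * np.max(q_table[next_state]) - q_table[state, action]
        )
        state = next_state

print("Learned action values for the start state:", q_table[0])
```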
AI's own errors are not the only concern. Hackers and other wrongdoers can exploit vulnerabilities in AI systems. So while some programmers focus on improving AI's functionality, others are working to bolster its defenses against attacks and other, subtler deceptions. People have sometimes been able to thwart facial recognition AI in security cameras, for instance, simply by painting marks on their faces. Humans can also teach AI bad behaviors, as demonstrated by Microsoft's (NASDAQ: MSFT) famously short-lived chatbot Tay, which Twitter users goaded into spewing racist screeds before it was shut down.
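The painted-marks trick works because small, carefully chosen changes to an input can push a model's decision in a very different direction. The sketch below illustrates one well-known version of that idea, the fast gradient sign method, against a tiny randomly initialized classifier (a stand-in assumption, not any real facial recognition system); the perturbed image looks nearly unchanged to a human but may be scored quite differently by the model.

```python
# Sketch of an adversarial perturbation (fast gradient sign method).
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy stand-in classifier; a real attack would target an actual vision model.
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
loss_fn = nn.CrossEntropyLoss()

image = torch.rand(1, 1, 28, 28, requires_grad=True)  # stand-in for a face photo
true_label = torch.tensor([3])

# Measure how the model's loss changes with respect to each pixel...
loss = loss_fn(model(image), true_label)
loss.backward()

# ...then nudge every pixel slightly in the direction that increases the loss.
epsilon = 0.1
adversarial = (image + epsilon * image.grad.sign()).clamp(0, 1)

# The two images differ by at most `epsilon` per pixel, yet the model may now
# assign them different labels.
print("Original prediction:   ", model(image).argmax(dim=1).item())
print("Adversarial prediction:", model(adversarial).argmax(dim=1).item())
```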
Researchers are also trying to anticipate problems that have yet to arise. One such concern is the possibility that machines may become so focused on achieving their goals that they learn to overcome any obstacle in the way, including humans attempting to shut them off.
AI may eventually come to pose a larger, even existential, threat to humans. Industry leaders like Elon Musk, Steve Wozniak, and Bill Gates have all warned that AI could become dangerous if it grows so sophisticated that we can no longer understand or control it.
But we are a long way off from AI outstripping human capabilities. True AI does not yet exist, and even the most cutting-edge AI technologies still struggle with basic tasks.