The AI is AlphaGo, developed by the British firm DeepMind, which has since been acquired by Google. AlphaGo uses a neural network model that combines tree search with machine learning, drawing on experience from both human and computer play. Go is not an obviously computer-friendly game: each position offers a multitude of possible moves, which means that conventional AI simply takes too long to search through them. It was thought that it could be a decade before a computer could beat skilled human Go players, and even the creators of AlphaGo were surprised by the outcome of the match.
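To see why exhaustive search breaks down, consider a rough back-of-the-envelope sketch. The branching factors below are commonly cited averages (roughly 35 legal moves per chess position versus roughly 250 per Go position), not figures from the article, and the calculation ignores transpositions and pruning:

```python
# Rough illustration of why brute-force game-tree search is infeasible for Go.
# Branching factors are commonly cited averages, not exact values.
CHESS_BRANCHING = 35   # ~35 legal moves in a typical chess position
GO_BRANCHING = 250     # ~250 legal moves in a typical Go position

def positions_after(branching: int, plies: int) -> int:
    """Leaf positions in a uniform game tree of the given depth (in plies)."""
    return branching ** plies

for plies in (2, 4, 8):
    chess = positions_after(CHESS_BRANCHING, plies)
    go = positions_after(GO_BRANCHING, plies)
    print(f"{plies} plies ahead: chess ~{chess:.1e} positions, Go ~{go:.1e}")
```

Even at a modest eight plies of lookahead, Go's tree is many orders of magnitude larger than chess's, which is why AlphaGo pairs its tree search with learned networks to narrow the moves worth considering rather than examining them all.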
Computers have been beating us at chess and other board games for years now, and the next challenge for AI developers may be video games, in preparation for AI that can eventually tackle problems in the real world. "We're concentrating at the moment on things like healthcare and recommendation systems," said Demis Hassabis of DeepMind. Longer term, the company's learning-based approach could yield much more. "I think it'd be cool if one day an AI was involved in finding a new particle," Hassabis added.
AI like AlphaGo is an example of what is called "weak" or "narrow" AI: technology that can excel in a particular area but not in others. The AI that people tend to fear is "strong" or "general" artificial intelligence – in other words, intelligence similar to, and perhaps rivaling, our own. It's the sort of AI you're probably familiar with from sci-fi, and many experts are a little worried about its eventual emergence.
Elon Musk and Stephen Hawking, among others, signed an open letter in January of last year calling for increased research into the social effects of AI. To that end, Musk has co-founded a nonprofit research organization called OpenAI. The real worry is that AI could develop to the point where it works on improving itself, and that this goal may not line up with our own. DeepMind has been thinking about this as well: as a condition of its acquisition by Google, it insisted on the establishment of an AI safety and ethics board to set rules for the safe and responsible use of the technology.
Speaking in a 2011 interview, Shane Legg, co-founder of DeepMind, starkly outlined the risks of AI's evolution. Legg said he believes AI that truly rivals human intelligence is only a few decades away, and that the technology could well play a role in the extinction of our species if we don't proceed carefully.