Today’s post is very different, as it isn’t directly tied to any ‘news’ going on at the moment. But it’s a subject that matters more and more to humanity, and one that could become a huge problem extremely quickly if things go wrong – Artificial Intelligence. This week the physicist Stephen Hawking warned that AI could pose a threat to the entire human race if we develop it fully. While I don’t know a huge amount about AI, it’s fascinating to read and learn more about, and its potential is frightening. So what would true Artificial Intelligence look like, and what could it mean for humanity?
We have already created machines that can ‘think’. Chess computers like Deep Blue can beat even the best human players, and when you ask Siri or Google Now a question, it has to work out what you said and what you meant by it. But these are machines that work along a very narrow path. Deep Blue knows exactly what moves can be made in chess, but not much else. Real AI would be much more like a human intelligence – able to improvise, create new solutions and, most importantly, learn.
This could look something like a thinking Google that could solve our problems. If Google could understand a problem, analyse the information it possesses (basically all the information that exists) and come up with a solution, this would be of immense benefit to humanity. Through trial and error it could learn which solutions work and which don’t, then apply and improve them. This would have a huge impact in areas such as space travel and computing, but also on issues like reducing poverty or disease. With the Ebola outbreak, for example, this thinking Google could trace the outbreak back to its starting point and predict how it is likely to evolve. The problem, though, comes when ‘thinking Google’ starts to apply its learning and problem-solving to itself.
The first version of this Google could examine itself and its own code, and work out how to make itself smarter. The next, smarter version could do the same to improve itself further, and so on. Because of the speed of its processing, before humans even realised what was happening it would have improved itself thousands upon thousands of times. Its intelligence would explode exponentially, and we would have created something more intelligent than ourselves. With the internet all around us in the 21st century, it could reach every corner of the globe in seconds.
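To see why this runaway loop is so fast, here is a deliberately simple toy model (all the numbers are made up for illustration – nobody knows what real self-improvement rates would look like). It just assumes each cycle makes the system a fixed percentage smarter and counts how few cycles it takes to cross some threshold:

```python
# Toy model of recursive self-improvement (purely illustrative).
# Assumption: each cycle, the system makes itself a fixed fraction
# smarter. The names and numbers here are invented for the example.
def cycles_to_threshold(start=1.0, threshold=100.0, improvement_rate=0.5):
    """Count self-improvement cycles needed to grow from `start`
    to `threshold`, multiplying intelligence by (1 + rate) each time."""
    level, cycles = start, 0
    while level < threshold:
        level *= 1 + improvement_rate  # the system upgrades its own code
        cycles += 1
    return cycles

print(cycles_to_threshold())  # 12 cycles at 50% improvement per cycle
```

The point of the toy is only the shape of the curve: because the growth compounds, going from 1× to 100× takes just a dozen cycles, and each cycle for a machine might last milliseconds rather than years.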
At this point it’s hard to know what would happen, because we don’t know what this intelligence would look like. Some experts compare it to a dog trying to understand how a human thinks. Dogs don’t understand our reasoning or how we come to certain conclusions, and neither would we understand the AI. We have no idea what it would do. We could instruct it, but we don’t know how it would decide to interpret those instructions. Telling it to solve poverty could lead to it deciding that all bank accounts should be made equal – crashing the global economy. It could also simply decide humanity is a threat to it or to its goals, and immediately launch every nuclear missile the US and Russia possess. We can’t possibly know.
There are plenty of scientists working on this right now, and due to the unpredictable nature of AI, we would have no way of knowing when it started to improve itself. Trying to instil ethics or values in an AI as we’re creating it isn’t exactly easy. Humans have enough debates about what is ethical, and getting a machine to understand it would be almost impossible. And how do you put in rules that a superintelligence can’t simply write out of its own code? Unless the scientists working on AI take these things into account, there’s a chance humanity could end up disappearing in a catastrophe we created, but can’t even understand.