Fair warning: this may be one of those articles that talks about a "possible future" in a grandiloquent tone. In my defense, I'll risk saying that we're in the middle of a new technological and scientific leap. Stick to the facts; the rest is debatable.
I'd like to start this story back in 1997, when chess grandmaster and former world champion Garry Kasparov lost a six-game match (3½ to 2½) against Deep Blue, a supercomputer developed by IBM specifically to play chess. Deep Blue's method for beating Kasparov was brute force: it exploited the computer's ability to analyze 200 million positions per second (plus a bit of manual help that some people still argue about). On Kasparov's side: about 5 per second. Yes, the computer won, but it's evident that something about human thought and learning completely surpasses its computational equivalent: Kasparov doesn't waste time analyzing absurd moves; he understands the game in a way a computer could not.
To balance this little inefficiency of needing to analyze 200 million moves versus 5 to win, a branch of machine learning called deep learning has been developing over the past few years. It is based on a kind of intelligence that emulates learning by shaping successive layers of neural networks. The idea is to expose the computer, and its neural network, to a learning situation (say, playing chess), and the network remodels itself as it acquires "new experiences" (quotation marks to stay politically correct, and partially because I'm very uncomfortable writing that a program acquires experience without quotes; syntax error). Evidence shows that the algorithm's performance improves as it learns. One deep learning program (preloaded with the rules of chess and some training games) was left playing against itself and, in 72 hours, reached International Master level. Level: Master. International. In. 72. Hours. Imagine giving it a yo-yo. Schoolin'.
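To make "the network remodels itself as it learns" concrete, here is a deliberately tiny sketch: a single artificial neuron learning the logical AND function by nudging its weights after every mistake. Every number here (the learning rate, the epoch count) is invented for illustration; real deep learning stacks many layers of these units, but the learn-from-error loop is the same idea.

```python
# A single neuron "acquiring experience": after each wrong answer,
# its weights shift slightly toward fewer mistakes (perceptron rule).
# All values here are illustrative, not from any real system.

def step(x):
    return 1 if x > 0 else 0

samples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]  # AND truth table
w = [0.0, 0.0]
bias = 0.0
lr = 0.1  # learning rate

for epoch in range(20):
    for (x1, x2), target in samples:
        out = step(w[0] * x1 + w[1] * x2 + bias)
        err = target - out
        # The "remodeling": adjust weights in proportion to the error.
        w[0] += lr * err * x1
        w[1] += lr * err * x2
        bias += lr * err

predictions = [step(w[0] * a + w[1] * b + bias) for (a, b), _ in samples]
print(predictions)  # [0, 0, 0, 1] once it has learned AND
```

Notice that nobody told the neuron what AND means; it was only shown examples and corrected, which is exactly the "exposed to a learning situation" idea above.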
One kind of learning algorithm, which does not require preloaded data to learn, is called unsupervised learning. Its counterpart, called (hold it! don't spoil the surprise!) supervised learning, requires initial training data to work. Later, as it plays, the machine learns; that is, it remodels its neural networks in its own favor. There is a third kind, reinforcement learning, which works on the notion of a reward. You bribe the computer into working better.
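The "bribe" can be sketched in a few lines. Below, a toy agent chooses among three moves and keeps a running value estimate for each, nudged toward whatever reward it receives; over time it settles on the best-paying move. The payouts, learning rate, and exploration rate are all invented for this sketch (a real reinforcement learner deals with noisy rewards and long action sequences).

```python
import random

# Toy reward-driven learning: the agent never sees the payout table,
# only the rewards it collects, yet its estimates converge on it.
payout = {"a": 0.2, "b": 0.5, "c": 0.8}  # hidden from the agent
estimate = {m: 0.0 for m in payout}      # the agent's learned values
alpha = 0.1                              # learning rate

random.seed(0)
for _ in range(2000):
    # Mostly exploit the best-looking move, sometimes explore at random.
    if random.random() < 0.1:
        move = random.choice(list(estimate))
    else:
        move = max(estimate, key=estimate.get)
    reward = payout[move]  # deterministic payout keeps the demo simple
    # Nudge the estimate toward the reward just received.
    estimate[move] += alpha * (reward - estimate[move])

best = max(estimate, key=estimate.get)
print(best)  # the agent settles on "c", the best-paying move
```

The occasional random move matters: without exploration, the agent could lock onto the first move that ever paid and never discover a better one.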
Here's where your jaw may drop.
With similar algorithms, Google's AI is capable of recognizing patterns in images: it can recognize a dog much like we would. Preload a bunch of dog pictures, search "dog", and it will show you every dog it can find, even if the filename has nothing to do with dogs. (It may also fail, as it infamously did when it labeled photos of a Black couple as "gorillas".)
But wait, this is nothing. To understand the next step, we first need to grasp just how hard the feat I'm about to describe really is; I need to back it up! Let's analyze a situation that is deeply counterintuitive for us humans: exponential growth. Take a second and answer this question to yourself: how high do you think a common sheet of paper would reach after being folded in half 50 times?
Exponential growth appears whenever you multiply something by itself a certain number of times. For example, 2 multiplied by itself 10 times is 1024; that's 2^10 (two, times two, times two... ten times... some may laugh at this explanation, but I know people who don't know it!). The thing about exponential growth is that the speed at which the numbers climb is... crazy. If you grab a 0.1 mm thick sheet of paper (a common one) and fold it in half, the thickness is 0.2 mm; do it again, 0.4 mm, and so on. Do it 10 times, and the math is 0.1 mm multiplied by 2 ten times: 0.1 × 2^10, about 10 cm. Do it 50 times, and the result is about 75% of the distance from the Earth to the Sun. Your intuition may say "that's impossible", but do the math and you'll see: 0.1 mm multiplied by 2^50 gives roughly 112.6 million km. Crazier still: fold it 33 more times, 83 in total, and you cover the full length of our galaxy (100,000 light-years); fold it 20 more times, 103 in total, and you exceed the diameter of the observable universe (about 93 billion light-years).
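Don't take my word for it; the folding numbers are two lines of arithmetic. A quick sanity check (the reference distances are rounded, back-of-the-envelope values):

```python
# Thickness of a 0.1 mm sheet after n folds: it doubles every fold.
THICKNESS_M = 0.1e-3       # 0.1 mm in meters
LIGHT_YEAR_M = 9.461e15    # meters per light-year
SUN_DISTANCE_M = 149.6e9   # Earth-Sun distance, ~1 AU

def folded_thickness_m(folds: int) -> float:
    """0.1 mm * 2^folds, in meters."""
    return THICKNESS_M * 2 ** folds

print(folded_thickness_m(10) * 100)              # ~10 cm
print(folded_thickness_m(50) / 1e9)              # ~112.6 (millions of km)
print(folded_thickness_m(50) / SUN_DISTANCE_M)   # ~0.75 of the way to the Sun
print(folded_thickness_m(83) / LIGHT_YEAR_M)     # ~100,000 light-years
print(folded_thickness_m(103) / LIGHT_YEAR_M)    # > 93 billion light-years
```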
Now that we know exponential numbers climb faster than the new hire who laughs at all of the boss's jokes, let's estimate how hard it would be for a computer to store all possible chess games. Say that, on average, a chess player has 30 possible moves per turn, and an average game lasts 40 moves per player. That gives 30^80 possible games (30 possible moves, 40 × 2 = 80 moves per game), or roughly 10^118 (a 1 followed by 118 zeros). To bring this down to mortal terms, a good estimate of the number of particles in the universe is 10^81. So the phrase "there are more possible chess games than atoms in the universe" is true; many more, 10^37 times more. If I wanted to store each possible chess game in an atom, I would need 10^37 universes. There is no possible way to store that amount, never mind the time needed to check which move is better than another, something more than useful when facing Kasparov.
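The chess estimate above checks out with exact integer arithmetic (Python's integers have no size limit, so we can compute 30^80 literally and just read off the exponents):

```python
from math import log10

# Back-of-the-envelope chess count from the text: ~30 legal moves per
# turn, 40 moves per player = 80 half-moves per game.
chess_games = 30 ** 80          # every sequence of 80 choices among 30
atoms_in_universe = 10 ** 81    # rough particle-count estimate

print(int(log10(chess_games)))                       # 118
print(int(log10(chess_games // atoms_in_universe)))  # 37 universes' worth
```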
In March 2016, a Google AI programmed with deep learning won 4 out of 5 games of a millennia-old board game called Go against Lee Sedol, one of the strongest players in the world. Simple in rules (there are almost none) yet very complex to play well: the usual board is a 19-by-19 grid (361 intersections), one player versus another, white stones and black stones. The game is pretty much: place them wherever you like. The objective: capture the opponent's stones, or surround and control empty territory. It is a very intuitive game, and many of the strategies are subjective; there are no particular "recipes" as chess has (control of the center, mobility...), nor the standard openings and endgames that chess does have. The game has been refining itself for over 2,500 years. The possibilities in this game are infinite (well, finite, but unreachable).
So: 361 intersections, a player can place a stone on any of them, and an average game lasts around 250 moves. The approximate number of possible Go games is then 220^250, which is about 10^585. (220 and not 361, because as the game progresses there are fewer free spots to place stones; do the math or take my word for it; and note that the order of moves matters.) If we again wanted to store each possible game in an atom, we would need around 10^504 universes. If this number feels overwhelming, welcome to the club: nobody knows what it is to have 10^504 of anything.
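The same exercise works for Go, and the exact arithmetic agrees with the estimate (220^250 is far too large for ordinary floating point, but Python's big integers and `math.log10` handle it fine):

```python
from math import log10

# Go estimate from the text: ~220 available spots on an average turn,
# ~250 moves per game. We only care about the exponents.
go_games = 220 ** 250
atoms_in_universe = 10 ** 81

print(int(log10(go_games)))                       # 585
print(int(log10(go_games // atoms_in_universe)))  # 504 universes' worth
```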
AlphaGo (the machine that won at this game) runs on an algorithm that is not specialized; it can learn anything (it is multi-purpose). It uses two kinds of neural networks: policy networks (which heuristically analyze candidate moves and discard the useless ones, saving resources) and value networks (which evaluate how good a given board position is). AlphaGo trained for nearly a year playing against itself.
We should start assuming that machines will soon learn human skills: identifying images, sounds, handwritten text. Even harder things, like playing Go better than any human ever has.
We only need a large company to start building them, and we will stop saying "if" and start saying "when".
Today, computers can efficiently process more possibilities than there are atoms in many universes. Who are we to stop them?