Artificial intelligence is learning to decipher damaged ancient Greek inscriptions. The AI seems to be better than humans at filling in missing words, but may be most useful as a collaborative tool, where researchers use it to narrow down the options.

There are thousands of ancient inscriptions we already know about, with dozens more discovered every year. Unfortunately, many have been eroded or damaged over the centuries, leaving segments of text missing. Working out what fills the gaps is a difficult task that involves studying the rest of the inscription and other, similar texts. Yannis Assael at DeepMind and his colleagues trained a neural network, a type of AI algorithm, to guess missing words or characters in Greek inscriptions on surfaces including stone, ceramic and metal, dating from between 1500 and 2600 years ago.

The AI, called Pythia, learned to recognise patterns in 35,000 relics containing more than 3 million words. The patterns it picks up on include the contexts in which different words appear, the grammar, and the shape and layout of the inscriptions. Given an inscription with missing text, Pythia provides 20 different suggestions that could plug the gap, the idea being that an expert can then select the best one using their own judgement and subject knowledge. “It’s all about how we can help the experts,” says Assael.
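The division of labour described here, a model proposing a shortlist of fills and a human choosing among them, can be illustrated with a small sketch. The snippet below is a hypothetical, self-contained stand-in: it scores candidate restorations with a simple character-trigram language model and keeps the k best via beam search, whereas Pythia itself uses a far more powerful sequence-to-sequence neural network trained on the full corpus of inscriptions.

```python
import math
from collections import defaultdict

# Hypothetical miniature stand-in for Pythia's suggestion step: score
# candidate restorations of a gap with a character-level language model
# and return the k best fills for an expert to review. The trigram
# model and toy corpus exist only to keep the sketch runnable.

def train_trigrams(corpus):
    counts = defaultdict(lambda: defaultdict(int))
    for text in corpus:
        padded = "  " + text  # two-space pad gives every char a context
        for i in range(2, len(padded)):
            counts[padded[i - 2:i]][padded[i]] += 1
    return counts

def log_prob(counts, context, ch, vocab):
    ctx = counts[context]
    # Add-one smoothing so unseen characters still get a finite score.
    return math.log((ctx[ch] + 1) / (sum(ctx.values()) + len(vocab)))

def suggest_fills(counts, prefix, gap_len, vocab, k=20):
    beams = [("", 0.0)]  # (candidate fill, cumulative log-probability)
    for _ in range(gap_len):
        scored = [
            (fill + ch, score + log_prob(counts, (prefix + fill)[-2:], ch, vocab))
            for fill, score in beams
            for ch in vocab
        ]
        beams = sorted(scored, key=lambda b: -b[1])[:k]  # keep the k best
    return beams

corpus = ["the people of the city", "the council of the people"]
counts = train_trigrams(corpus)
# Restore a 3-character gap in "the peo???" and print ranked guesses.
for fill, score in suggest_fills(counts, "the peo", 3, "celopt ", k=5):
    print(repr(fill), round(score, 2))
```

On this toy corpus the top-ranked fill is “ple”, reconstructing “people”; the point, as in Pythia, is that the tool returns a ranked list rather than a single answer, leaving the final judgement to the human expert.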

Grzegorz “MaNa” Komincz, one of the world’s best players of the video game StarCraft II, was at the height of a successful esports career when artificial intelligence company DeepMind invited him to face its latest AI, a StarCraft II-playing bot called AlphaStar, on 19 December 2018.

Komincz was expected to be a tough opponent. He wasn’t. After being thrashed 5-0, he struck a humbler tone. “I wasn’t expecting the AI to be that good,” he said. “I felt like I was learning something.”

It was just the latest in a series of unexpected victories for machines, stretching back to chess champion Garry Kasparov’s 1997 defeat by IBM’s Deep Blue. In 2017, another of DeepMind’s AIs, AlphaGo Master, beat the world number one Go player a decade before most researchers had predicted it would be possible. The company’s AIs then mastered chess and StarCraft, a game played with dozens of different unit types at hundreds of actions a minute. But this isn’t just a case of humans being humbled by superhuman AI. The real story is that each win gives us a glimpse of how AIs will make us superhuman too. That’s because thinking is set to become a double act: working together, humans and AIs will bounce ideas back and forth, each guiding the other to better solutions than either could reach alone.

DeepMind gave the technique, deep reinforcement learning, its name in 2013, in an exciting paper that showed how a single neural network system could be trained to play different Atari games, such as Breakout and Space Invaders, as well as or better than humans. The paper was an engineering tour de force, and presumably a key catalyst in DeepMind’s January 2014 sale to Google. Further advances in the technique have fueled DeepMind’s impressive victories in Go and the computer game StarCraft.
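At the core of that technique is a simple update rule: an agent tries actions, observes rewards, and nudges its value estimates toward the reward plus the discounted value of the best next action. The sketch below shows that update on a hypothetical five-state corridor using a lookup table; the Atari work’s contribution was to replace the table with a deep convolutional network reading raw screen pixels, which is what makes the reinforcement learning “deep”.

```python
import random

# Tabular Q-learning on a hypothetical 5-state corridor: the agent
# starts at the left end and is rewarded only for reaching the right
# end. DQN, the Atari system, swaps this table for a neural network.

N_STATES, ACTIONS = 5, (0, 1)          # action 0 = left, 1 = right
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2  # learning rate, discount, exploration

def step(state, action):
    nxt = max(0, min(N_STATES - 1, state + (1 if action else -1)))
    return nxt, (1.0 if nxt == N_STATES - 1 else 0.0)

Q = [[0.0, 0.0] for _ in range(N_STATES)]
for _ in range(500):
    s = 0
    while s != N_STATES - 1:
        # Epsilon-greedy: mostly exploit the best-known action, sometimes explore.
        a = random.choice(ACTIONS) if random.random() < EPSILON else max(ACTIONS, key=lambda a: Q[s][a])
        s2, r = step(s, a)
        # Bellman update: move Q(s, a) toward reward + discounted best future value.
        Q[s][a] += ALPHA * (r + GAMMA * max(Q[s2]) - Q[s][a])
        s = s2

print([[round(q, 2) for q in row] for row in Q])  # right-moving values dominate
```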

The trouble is, the technique is very specific to narrow circumstances. In playing Breakout, for example, tiny changes, like moving the paddle up a few pixels, can cause dramatic drops in performance. DeepMind’s StarCraft results were similarly limited: better than human when playing on a single map with a single “race” of character, but poorer on different maps and with different races. To switch races, you need to retrain the system from scratch.

Deep reinforcement learning also requires a huge amount of data, e.g., millions of self-played games of Go. That’s far more than a human needs to become world class at Go, and such data is often difficult or expensive to gather. That brings a requirement for Google-scale compute resources, which means that, for many real-world problems, the computer time alone would be too costly for most users to consider. By one estimate, the training time for AlphaGo cost $35 million; the same estimate likened the energy used to that consumed by 12,760 human brains running continuously for three days without sleep.
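That brain comparison is easy to sanity-check. Assuming the common textbook figure of roughly 20 watts per human brain (the cited estimate does not state its inputs), the back-of-envelope arithmetic below puts the implied training energy in the tens of megawatt-hours:

```python
# Back-of-envelope check of the brain-energy comparison. The 20 W
# per-brain figure is an assumption (a common textbook estimate);
# the estimate cited in the text does not state its inputs.
BRAIN_WATTS = 20
brains, hours = 12_760, 3 * 24
energy_kwh = BRAIN_WATTS * brains * hours / 1000
print(f"~{energy_kwh:,.0f} kWh")  # roughly 18,000 kWh, i.e. ~18 MWh
```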

But that’s just economics. The real issue, as Ernest Davis and I argue in our forthcoming book Rebooting AI, is trust. For now, deep reinforcement learning can only be trusted in environments that are well controlled, with few surprises; that works fine for Go—neither the board nor the rules have changed in 2,000 years—but you wouldn’t want to rely on it in many real-world situations.

Nobody should count DeepMind out, even if its current strategy turns out to be less fertile than many have hoped. Deep reinforcement learning may not be the royal road to artificial general intelligence, but DeepMind itself is a formidable operation, tightly run and well funded, with hundreds of PhDs. The publicity generated by its successes in Go, Atari, and StarCraft attracts ever more talent. If the winds in AI shift, DeepMind may be well placed to tack in a different direction. It’s not obvious that anyone can match it.

Meanwhile, in the larger context of Alphabet, $500 million a year isn’t a huge bet. Alphabet has (wisely) placed other bets on AI, such as Google Brain, which is itself growing quickly. Alphabet might rebalance its AI portfolio in various ways, but for a company with $100 billion a year in revenue that depends on AI for everything from search to advertising recommendations, it’s not crazy to make several significant investments.

Since machines can collect, track and analyze so much about you, it’s very possible for that information to be used against you. It’s not hard to imagine an insurance company telling you you’re not insurable based on the number of times you were caught on camera talking on your phone, or an employer withholding a job offer based on your “social credit score.”

Any powerful technology can be misused. Today, artificial intelligence is used for many good causes, including helping us make better medical diagnoses, finding new ways to cure cancer and making our cars safer. Unfortunately, as AI capabilities expand, we will also see the technology used for dangerous or malicious purposes. Because AI is advancing so rapidly, it is vital that we start debating how to steer its development in a positive direction while minimizing its destructive potential.
