Blog on Economics, Artificial Intelligence, and the Future of Work

What’s next for AI?

A question that I’m consistently asked is ‘what can we expect next in AI?’ My responses are generally filled with disclaimers and equivocation. And for good reason. Predicting the future is fraught with uncertainties, particularly in a field developing as quickly as AI. It’s difficult to foresee which AI technologies will bring about the next wave of breakthrough applications. Anyone who is certain is certain to be wrong.

The rise of Deep Learning (DL) offers an important lesson. DL is now foundational to many prominent AI use cases, such as image recognition in autonomous vehicles and language translation in Google Translate. But the extent of DL’s capabilities and applications was not obvious until recently. Although DL and neural networks have historical roots reaching back to the 1950s, DL received little research attention until around 2010. It was the confluence of access to masses of data, greater computational power, and advances in algorithmic techniques that rocketed DL to the forefront of AI research and development. Few inside the industry predicted its meteoric rise. This reflects the difficulties of predicting the future of AI.

Despite the challenges of looking into the proverbial crystal ball, this article will attempt to shed some light on the future directions of AI, based on AI techniques that are showing encouraging signs today.

AI that learns goals and human preferences

A goal of AI research is to teach machines to perform the same tasks as humans, but better. A problem with this endeavour is the difficulty of specifying a task’s ‘goals’. Tasks are often subjective and require judgement. For example, reasoning about other people’s values in personal interactions is something humans do routinely well. Machines, however, struggle because values and objectives can be subjective and hard to pin down, as psychologists will attest.

To overcome these challenges, researchers have flipped the problem by programming ML systems to ‘learn’ the objectives of a task. This is called Inverse Reinforcement Learning (IRL). It differs from traditional Reinforcement Learning (RL – described in this previous post) because the ‘reward function’ is not specified, but is instead ‘learned’.

Consider the example of autonomous driving. Using traditional RL would require creating a reward function that accounts for an exhaustive list of desired driver behaviours and their relative importance. This is practically infeasible, if not impossible. In contrast, an IRL agent could be given a history of driving behaviour and would attempt to find a reward function that explains it. Assuming the demonstrator acts optimally, picking the best available action to maximise its rewards, the IRL agent estimates a reward function that reflects the objectives it has observed. As the agent observes more data points across many scenarios, its estimate of the reward function (that is, of the desired driver behaviours) should improve.
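To make this more concrete, below is a minimal, illustrative sketch of one simple family of IRL methods (feature matching) on a toy five-state ‘corridor’ world, written in Python with NumPy. The environment, the one-hot state features, the learning rate, and the update rule are all assumptions chosen for brevity; a real driving system would involve a vastly richer state space and a more sophisticated algorithm.

```python
import numpy as np

# Toy 'corridor' world: 5 states in a row, and the observed driver (the 'expert')
# always moves right towards state 4. All names and numbers are illustrative.
N_STATES, N_ACTIONS, HORIZON, GAMMA = 5, 2, 5, 0.9

def step(state, action):
    """Deterministic dynamics: action 0 moves left, action 1 moves right."""
    return max(state - 1, 0) if action == 0 else min(state + 1, N_STATES - 1)

def greedy_policy(reward):
    """Value iteration under the current reward estimate, then act greedily."""
    V = np.zeros(N_STATES)
    for _ in range(100):
        V = np.array([reward[s] + GAMMA * max(V[step(s, a)] for a in range(N_ACTIONS))
                      for s in range(N_STATES)])
    return [int(np.argmax([V[step(s, a)] for a in range(N_ACTIONS)]))
            for s in range(N_STATES)]

def feature_counts(policy, start=0):
    """How often each state is visited when the policy is rolled out from 'start'."""
    counts, s = np.zeros(N_STATES), start
    for _ in range(HORIZON):
        counts[s] += 1
        s = step(s, policy[s])
    return counts

# Feature expectations of the demonstrated ('expert') behaviour: always move right.
expert_counts = feature_counts(policy=[1] * N_STATES)

# IRL loop: nudge the reward weights until behaviour that is optimal under the
# estimated reward matches the demonstrated behaviour.
w = np.zeros(N_STATES)
for _ in range(20):
    w += 0.1 * (expert_counts - feature_counts(greedy_policy(w)))

print("estimated reward per state:", np.round(w, 2))  # highest at the goal state
```

The loop alternates between solving the task under the current reward estimate and adjusting the weights, which is the essence of ‘learning’ the objective rather than specifying it.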

IRL offers a framework that could expand the scope of tasks that can be performed by AI into areas where the objectives are subjective, difficult to define, and require judgement.

General purpose algorithms

Generalised intelligence is arguably the ‘holy grail’ of AI research. Developing AI systems that perform well across domains has the potential to produce some of the most significant technologies ever created by humankind. While this ‘north star’ remains elusive, there have been recent developments that suggest progress is being made.

In 2017, Google DeepMind published its latest developments of AlphaGo, a computer program designed to play the ancient Chinese game of Go at superhuman levels [1]. Learning from first principles with no human training data, AlphaGo Zero started from random play and trained itself to become the world’s most advanced player. In late 2018, Google DeepMind extended the AlphaGo Zero algorithm so that it could be applied across a broader class of games. The same algorithm and network architecture were applied to the games of chess, shogi, and Go, and this general purpose RL algorithm was able to ‘learn’ each game from first principles and achieve superhuman performance in all three [2].
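To give a flavour of ‘learning from self-play with no human data’, here is a toy Python sketch: a tabular value learner that teaches itself noughts and crosses purely by playing against itself. It uses none of DeepMind’s machinery (no neural networks, no Monte Carlo tree search); the game representation, exploration rate, and update rule are illustrative assumptions only.

```python
import random
from collections import defaultdict

EMPTY = 0
LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8), (0, 3, 6),
         (1, 4, 7), (2, 5, 8), (0, 4, 8), (2, 4, 6)]

def legal_moves(board):
    return [i for i, v in enumerate(board) if v == EMPTY]

def winner(board):
    for a, b, c in LINES:
        if board[a] != EMPTY and board[a] == board[b] == board[c]:
            return board[a]   # +1 or -1
    return 0

def self_play_train(episodes=20000, alpha=0.1, epsilon=0.1):
    """Learn a value table from self-play alone: no human game records."""
    V = defaultdict(float)    # value of a position from player +1's perspective
    for _ in range(episodes):
        board, player, history = [EMPTY] * 9, 1, []
        while True:
            moves = legal_moves(board)
            if random.random() < epsilon:
                move = random.choice(moves)          # occasional exploration
            else:
                def value_after(m):
                    nxt = board[:]
                    nxt[m] = player
                    return V[tuple(nxt)]
                # Player +1 seeks high values, player -1 seeks low values.
                move = max(moves, key=value_after) if player == 1 else min(moves, key=value_after)
            board[move] = player
            history.append(tuple(board))
            w = winner(board)
            if w != 0 or not legal_moves(board):
                result = float(w)                    # +1, -1, or 0 for a draw
                for state in history:                # back up the final outcome
                    V[state] += alpha * (result - V[state])
                break
            player = -player
    return V

values = self_play_train()
print("positions evaluated through self-play:", len(values))
```

Starting from random play, the program gradually builds its own evaluation of positions, which is the same basic idea, scaled down enormously, as learning a game from first principles.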

Game playing may seem trivial, but building general purpose solutions to controlled yet complex problems is a notable step towards generalised intelligence. Research efforts towards this goal remain strong, so one can expect continued progress with general purpose algorithms.

AI that needs less data

Data is often referred to as the lifeblood of AI. More data is generally considered to yield better performance than less data. The problem is that access to big datasets can be difficult. So an organisation’s capacity to develop advanced AI systems can be limited by the amount and quality of the data it can access.

‘Transfer Learning’ helps to overcome some of these data access issues by taking ‘knowledge’ gained on one problem and applying it to a different but related problem. For example, a ‘pre-trained’ model for recognising cars can be ‘retrained’ with a much smaller dataset to recognise trucks, with comparable performance. Rather than ‘learning’ from scratch, the AI system ‘transfers’ some of its ‘knowledge’ to a new, slightly different domain. Not only does this reduce data requirements, it also shortens development time.
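As an illustration of the pattern, the sketch below uses PyTorch and torchvision (assumed dependencies; torchvision 0.13 or newer for the weights argument) to load a network pre-trained on ImageNet, freeze its feature extractor, and retrain only a new final layer for a two-class task. The dummy batch stands in for a real, smaller dataset.

```python
import torch
import torch.nn as nn
from torchvision import models

# Load a network whose weights were pre-trained on ImageNet.
model = models.resnet18(weights="IMAGENET1K_V1")

# Freeze the pre-trained feature extractor so its 'knowledge' is retained.
for param in model.parameters():
    param.requires_grad = False

# Replace only the final classification layer for the new, related task
# (here, an assumed two-class problem such as cars vs. trucks).
model.fc = nn.Linear(model.fc.in_features, 2)

# Only the new layer is trained, typically on far less data than the original.
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

# One illustrative training step on a dummy batch of 224x224 RGB images.
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, 2, (8,))
optimizer.zero_grad()
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
```

Because the bulk of the network is reused, both the data requirements and the training time are a fraction of what training from scratch would demand.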

Advances in Transfer Learning will not only accelerate AI development but will also potentially help state-of-the-art AI systems to become less reliant on data. This could help lower the barriers to entry for AI development.

AI that fools humans

AI already has the capability to create photorealistic images and videos that are indistinguishable from the real thing to the human eye. This is being achieved with an unsupervised learning technique called Generative Adversarial Networks (GANs). Introduced by Ian Goodfellow in 2014, GANs use a system of two neural networks that compete against each other to produce an output. One network, the ‘Generator’, is tasked with generating data designed to ‘trick’ the other network, the ‘Discriminator’, into accepting it as real. For example, GANs have been tasked with creating computer-generated images that are indistinguishable from human-generated images. The Generator network (think, ‘Artist’) creates an image and tries to trick the Discriminator network (think, ‘Art Critic’) into identifying it as real. This works around the lack of any solid measure of success or universal metric in art: rather than scoring images against a fixed yardstick, GANs pit the two networks against each other and let the contest provide the training signal.
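The sketch below is a deliberately tiny GAN in PyTorch (an assumed dependency). To keep it self-contained, the Generator learns to mimic samples from a one-dimensional Gaussian rather than images, but the adversarial loop is the same in principle: the critic learns to tell real from fake, while the artist learns to fool the critic.

```python
import torch
import torch.nn as nn

# Generator ('Artist'): turns random noise into candidate samples.
G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
# Discriminator ('Art Critic'): outputs the probability a sample is real.
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(2000):
    real = torch.randn(64, 1) * 1.5 + 4.0   # 'real' data: a Gaussian around 4
    fake = G(torch.randn(64, 8))            # the Generator's forgeries

    # Train the critic: label real samples 1 and forgeries 0.
    d_loss = bce(D(real), torch.ones(64, 1)) + bce(D(fake.detach()), torch.zeros(64, 1))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Train the artist: try to make the critic label forgeries as real.
    g_loss = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()

print("mean of generated samples:", G(torch.randn(1000, 8)).mean().item())  # ~4
```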

There are positive applications for GANs, such as cybercrime detection. But there is also a host of malicious applications. For example, ‘Deep Fakes’ are computer-generated photos and videos that appear convincingly real and can be used to deceive. For good and for ill, applications of GANs are expected to increase.

Natural Language Understanding is approaching a watershed moment

Natural Language Understanding could be on the cusp of a boom similar to that of Computer Vision five years ago, following the critical mass of image data made available through ImageNet. Detailed datasets and technical breakthroughs in natural language understanding are accelerating. Researchers are developing techniques for training general purpose language representation models on masses of unannotated text from the web. These ‘pretrained models’ can then be applied to tasks like ‘question answering’ and ‘sentiment analysis’ (that is, assessing the sentiment of written text). Performance already exceeds human benchmarks on key measures and is extending to ever more language tasks. These tasks include not only processing and ‘understanding’ language, but also generating it, for applications like writing news articles and holding natural conversations with voice-activated assistants. The most notable recent developments have come from Google AI Research and OpenAI.
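To give a sense of how accessible these pretrained models have become, the snippet below uses the Hugging Face transformers library (an assumed dependency, not one named above) to download a pre-trained sentiment classifier and apply it to two sentences.

```python
from transformers import pipeline

# Downloads a general purpose language model already fine-tuned for
# sentiment analysis (requires an internet connection on first run).
classifier = pipeline("sentiment-analysis")

print(classifier("The delivery was late and the packaging was damaged."))
print(classifier("Brilliant service, I would happily order again."))
# Each call returns a label ('POSITIVE' or 'NEGATIVE') with a confidence score.
```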

Simulations will become more important

The development of AI systems requires periods of ‘training’ and ‘testing’. However, for use cases in the physical world where safety is paramount, training and testing can be difficult. This has led AI researchers to simulation modelling.

Simulating the physical environment enables AI systems to perform actions, learn from experience, and safely fail, all within the confines of a computer simulation. It also allows many more experiments to be run, accelerating the training process. For example, rather than training and testing an autonomous vehicle on physical roads, the AI system guiding the vehicle can be trained millions of times on simulated vehicle routes, exposing it to many different traffic scenarios. Assuming the simulation is representative of its operating environment, the autonomous vehicle will have been exposed to, and ‘learned’ from, millions of scenarios even before the ‘rubber hits the road’.
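As a minimal illustration, the sketch below runs episodes in a simulated control task using the Gymnasium toolkit (an assumed dependency; a driving simulator would be far richer). The agent here simply acts at random; the point is that every ‘crash’ happens in software, and the episode count can be scaled up to the millions.

```python
import gymnasium as gym

# A simple simulated control task stands in for a full driving simulator.
env = gym.make("CartPole-v1")

for episode in range(1000):                  # scale this up to millions of runs
    observation, _ = env.reset()
    done = False
    while not done:
        action = env.action_space.sample()   # replace with a learning agent
        observation, reward, terminated, truncated, _ = env.step(action)
        done = terminated or truncated       # failures are safe: they are simulated
env.close()
```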

As AI continues to expand its use cases into areas where safety is critical, it is highly likely that the quality and importance of simulations will increase.

  1. Silver, David, Julian Schrittwieser, Karen Simonyan, Ioannis Antonoglou, Aja Huang, Arthur Guez, Thomas Hubert, et al. 2017. “Mastering the Game of Go without Human Knowledge.” Nature 550 (7676): 354–59.
  2. Silver, David, Thomas Hubert, Julian Schrittwieser, Ioannis Antonoglou, Matthew Lai, Arthur Guez, Marc Lanctot, et al. 2018. “A General Reinforcement Learning Algorithm That Masters Chess, Shogi, and Go through Self-Play.” Science 362 (6419): 1140–44.