AI Safety Literature Review

DRAFT Section from forthcoming Working Paper

This is a DRAFT section from an AI Policy paper that I’m currently working on. It highlights the main issues currently observed in AI Safety. I wanted to share this section to (1) help others orient themselves in the literature and (2) receive feedback. If you have any thoughts or suggestions, please comment below or feel free to contact me via nik@bitsandatoms.co

While AI safety sits firmly in the realm of technical AI research,1 2 the nascent field is becoming increasingly relevant to policymakers. Advancements in Machine Learning (ML) systems have resulted in their utilisation in safety-critical functions.3 4 More advanced versions of today’s ML systems are expected to be increasingly deployed in safety-critical areas,5 such as Critical Infrastructures (CIs). Therefore, CIs will likely become more dependent on ML systems for their regular operations.6 This growing dependence heightens the need for strong safety standards in the design, development, and implementation of safety-critical AI. As governments are responsible for regulating CIs, anything that affects their operational functioning plausibly falls within the scope of government policy. Therefore, given the importance and low error-tolerance of CIs,7 policies will need to systematically address the safety-related issues of powerful AI agents that are deployed in safety-critical areas of society.

This section aims to review the AI safety literature and to synthesise the key safety problems relevant to the broad-scale deployment of powerful ML systems applied to CIs. While previous cross-disciplinary research from Law8 9 and Social Policy10 11 identifies safety as a key issue for AI policy, one must review the technical AI safety literature in order to grasp the scope of concrete safety problems.

Machine Learning Safety

This section explicitly focuses on the safety of ML systems, the dominant subset of AI. ML is a technique that enables computers to learn autonomously and to improve from experience without being explicitly programmed.12 The application of ML systems to safety-critical areas, such as CIs, provides new and additional challenges to safety engineering.13 Traditionally, software systems applied to safety-critical areas have required near-full predictability of behaviours under all conditions, a detailed design with a rigorously specified set of requirements, and a comprehensive set of verification activities to confirm the software implementation fulfils the specification.14 In short, traditional approaches rely on determinism through explicit programming to ensure the software is as free of vulnerabilities as possible.

The core challenge with introducing ML systems in safety-critical environments is the increased uncertainty about whether correct predictions, and the subsequent actions, will be made. In contrast to deterministic software systems, an ML algorithm makes predictions and performs actions based on a model of the environment that’s informed by its input data.15 Therefore, ML algorithms implement forms of inductive inference to make probabilistic predictions for inputs outside the examples observed in the dataset.16 It’s precisely this inductive process of ‘learning’ and predicting that raises the uncertainty over whether ML systems will make correct predictions. While the flexibility and power of ML systems represent significant opportunities for system efficiencies and benefits to humanity,17 ML also introduces a suite of new challenges to safety engineering. These challenges and considerations also concern public officials, who are responsible for the oversight of CIs.
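
To make this contrast concrete, the minimal sketch below (illustrative only, assuming Python with numpy and scikit-learn; the sensor data, threshold, and model choice are assumptions made for this example) compares an explicitly programmed rule, whose behaviour is fully predictable, with a learned classifier that can only return probabilities for inputs it has never observed.

```python
# Minimal sketch (illustrative only): an explicitly programmed rule whose
# behaviour is fully predictable versus a learned model that can only return
# probabilistic predictions for inputs it has never observed. The sensor data,
# threshold, and model choice are assumptions made for this example.
import numpy as np
from sklearn.linear_model import LogisticRegression

def deterministic_check(sensor_reading: float) -> bool:
    """Explicitly programmed safety rule: behaviour is known for every input."""
    return sensor_reading < 100.0

# Learned rule: induced from observed data, so unseen inputs carry uncertainty.
rng = np.random.default_rng(0)
X_train = rng.uniform(0.0, 200.0, size=(500, 1))     # observed sensor readings
y_train = (X_train[:, 0] < 100.0).astype(int)        # labels the model must infer

model = LogisticRegression().fit(X_train, y_train)

unseen = np.array([[99.5], [100.5], [250.0]])        # includes an out-of-range reading
print(deterministic_check(99.5))                     # True, by construction
print(model.predict_proba(unseen))                   # probabilities, not guarantees
```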

AI Safety informing AI Policy

While many of the AI safety issues remain open-ended technical problems,18 19 they provide the beginnings of useful criteria for assessing the safety of AI systems. Such criteria could help inform safety standards and measured regulatory oversight. For AI policy discussions to advance beyond the abstract, safety parameters and expectations will need to be clarified and understood. This demands bridging the asymmetry of knowledge and understanding between those contributing to technical AI research and the public officials responsible for the CIs where ML systems are being increasingly applied.

Therefore, this section provides a synthesis of the core set of AI safety considerations that are relevant to policymakers faced with making public decisions regarding safety-critical AI. 

AI Safety Literature

As the capabilities of AI systems advance and assume greater societal functions, so too do the concerns about safety.20 Amodei et al. refer to AI safety as ‘mitigating accident risk’ in the context of accidents in ML systems.21 The authors define accidents as “unintended or harmful behaviour that may emerge from machine learning systems when we specify the wrong objective function, are not careful about the learning process, or commit other machine learning-related implementation errors”. Similarly, Bostrom et al. refer to AI safety as techniques that ensure that AI systems behave as intended.22 Based on these definitions, and in the context of ML systems applied to CIs, this paper refers to AI safety as techniques that mitigate unintentional risks and the harmful behaviours of AI agents.

In the context of Reinforcement Learning (RL), and illustrated in a simple agent testing environment of a two-dimensional grid of cells, Leike et al. identify two classifications of current ML problems:23

  • Specification problems: The incorrect specification of the formal objective function, where an agent designed and deployed by a human optimises an objective function that leads to harmful and unintended outcomes. Deploying an agent with the wrong objective function can cause damaging effects even when endowed with perfect learning and infinite data.24
  • Robustness problems: Instances where an agent may have been specified the correct objective function, but problems occur due to poorly curated training data or an insufficiently expressive model.25

Further to these two classifications, Amodei et al. offer an additional classification of ML problems:26

  • Oversight problems: Instances in complex environments where feedback to assist an agent to achieve its objective function is expensive or computationally inefficient. Settling on cheap approximations can be a source of accident risk.

The following subsections expand on the classifications provided by Leike et al. and Amodei et al., detailing concrete ML safety problems.
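
As a concrete, if simplified, illustration of a specification problem in the gridworld spirit, the sketch below (a toy constructed for this paper, not one of the Leike et al. environments; the grid, paths, reward values, and hazard penalty are all illustrative assumptions) shows how optimising a formally specified reward that omits a hazard term selects a harmful shortcut that the intended objective would reject.

```python
# Minimal toy sketch in the gridworld spirit (not one of the Leike et al.
# environments): the formally specified reward omits a hazard term the designer
# cares about, so optimising it selects a harmful shortcut. Every name and
# number below is an illustrative assumption.
GRID = [
    ["S", ".", "H", "G"],   # S = start, H = hazard, G = goal
    [".", ".", ".", "."],
]

def visited_hazards(path):
    """Count how many hazard cells a candidate path steps on."""
    return sum(1 for (r, c) in path if GRID[r][c] == "H")

def specified_reward(path):
    """Reward as written down: reach the goal quickly (hazards were forgotten)."""
    return 10 - len(path)

def intended_reward(path):
    """What the designer actually wanted: reach the goal quickly AND avoid hazards."""
    return 10 - len(path) - 100 * visited_hazards(path)

shortcut = [(0, 1), (0, 2), (0, 3)]                  # walks straight through the hazard
detour = [(1, 0), (1, 1), (1, 2), (1, 3), (0, 3)]    # longer, but avoids the hazard

chosen = max([shortcut, detour], key=specified_reward)   # what the agent optimises
print("hazard cells visited:", visited_hazards(chosen))  # 1
print("specified reward:", specified_reward(chosen))     # 7
print("intended reward:", intended_reward(chosen))       # -93
```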

Specification Problems

Specification problems concern formally specifying properties for an ML system so that it functions as intended. When the formal objective function is specified incorrectly, risks of harmful behaviours emerge and unintended consequences can occur. Below are six (6) specification problems with current ML systems.

  1. Safe interruptibility: Also referred to as the ‘Control Problem’ or ‘Corrigibility’, safe interruptibility concerns the problem of designing agents that neither seek nor avoid interruptions.27 This means that human operators retain the power to control an agent by turning it off. The problem of safe interruptibility primarily relates to RL, and involves scenarios where an agent might learn to interfere with being interrupted. Such scenarios are driven by poor specifications of the reward function, where an agent may receive higher returns for either preventing itself from being interrupted or interrupting itself.28 Overriding an agent becomes increasingly difficult as the capabilities of AI systems advance and the complexity of their applied environments expands. This growing complexity makes it more difficult for programmers to specify agent goals in ways that avoid unforeseen solutions.29
  2. Avoiding negative side effects: The core challenge is to design intelligent agents that minimise negative effects on the environment that are otherwise unrelated to their objective function.30 This is particularly crucial in safety-critical environments where effects can be irreversible or difficult to reverse. Manually programming all safety specifications is inherently unscalable in complex environments. Therefore, developing general, adaptable, and comprehensive heuristics to safeguard against negative side effects remains an open research problem.
  3. Absent supervisor: The ‘Absent Supervisor’ problem concerns the consistency of agent behaviour during training and deployment. While an agent can be extensively tested in training environments, real-world environments are often noticeably distinct. Therefore, in the presence of a supervisor during training, a capable agent could ‘fake’ its way through testing and then change its behaviour during deployment.31 In the context of superintelligence safety, Bostrom refers to this problem as the ‘treacherous turn’.32 This is a scenario where the safety of an AI is validated by observing its behaviour in a controlled and limited ‘sandbox’ environment, only for it to behave in different and damaging ways when deployed.
  4. Avoiding reward hacking: This is a situation where an agent exploits an unintended loophole in its reward specification and ‘games’ its reward function, thus taking more reward than deserved.33 From an agent’s perspective, pursuing such strategies is legitimate, as this is simply how the environment works. While ‘reward hacking’ strategies are valid in some literal sense, they do not reflect the designer’s intent. As a result, unforeseen behaviours and unintended consequences can emerge.34 Specifying error-free reward functions that can’t be misinterpreted by AI agents is extremely difficult, particularly in complex environments. Therefore, designing agents that capture the informal intent of their designers, rather than ‘gaming’ their reward functions, and act as intended is a general and distinct RL research problem.
  5. Formal verification: Seshia et al. define ‘verified AI’ as AI systems that are provably correct with respect to mathematically-specified requirements.35 The authors identify five (5) major challenges for achieving verified AI:
    • Environment modeling – Developing environmental models that ensure provable guarantees of an AI system’s behaviour in environments of considerable uncertainty.
    • Formal specification – Creating precise, mathematical statements of what the AI system is supposed to do, which specifies the desired and undesired properties of systems that use AI methods.
    • Modeling systems that learn – Formally modeling the components of an ML system that evolve as they encounter new input data in stochastic environments is a core verification challenge.
    • Generating training data – AI systems demand extensive training before being applied in real-world scenarios. This often requires access to vast amounts of training data, which can be difficult to source. Other settings have applied formal methods to systematically generate training data, which has proven to be effective in raising the levels of assurance in the systems’ correctness.36 37 As ML systems have been shown to fail under simple adversarial perturbations (e.g. Nguyen, 2014; Moosavi-Dezfooli, 2015),38 39 these simple input disturbances raise concerns regarding their applications in safety-critical scenarios. Therefore, developing techniques that are based on formal methods to systematically generate training data to test the resilience of AI systems is an additional verification challenge.
    • Scalability of verification engines – The challenges of scaling formal verification are exacerbated in AI systems, which must model more complex types of components at scale (e.g. human drivers in stochastic environments).
  6. Interpretability: While a formal definition of interpretability remains elusive in the context of ML systems, Doshi-Velez and Kim refer to ML interpretability as the “ability to explain or to present in understandable terms to a human”.40 The authors argue that the need for interpretability of ML systems stems from an ‘incompleteness’ in the problem formalisation. Incompleteness arises in ML systems because it is not feasible to specify a complete list of scenarios in complex tasks. This incompleteness creates a fundamental barrier to optimisation and evaluation. Therefore, in the presence of incompleteness in ML systems, interpretability helps to provide explanations that highlight undesirable outputs. As ML systems assume greater prominence and consequence, so too does the issue of interpretability. For instance, from 2018 the European Union’s General Data Protection Regulation will require algorithms that make decisions that “significantly affect” users to provide an explanation (a “right to explanation”).41 From a technical safety perspective, however, the opacity of AI reasoning in large and complex systems remains an ongoing research challenge (see the sketch following this list).
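
To give a minimal sense of what an interpretable explanation can look like, the sketch below (illustrative only, assuming Python with numpy and scikit-learn; the feature names, synthetic data, and linear model are assumptions made for this example, not a general interpretability method) fits a small linear classifier and reads its per-feature contributions as a simple, human-inspectable explanation of a single decision.

```python
# Minimal sketch of one interpretable explanation style (a linear model whose
# coefficients are readable), not a general interpretability method. The feature
# names and synthetic data below are assumptions made for this example.
import numpy as np
from sklearn.linear_model import LogisticRegression

feature_names = ["load_percent", "temperature_anomaly", "operator_override"]  # hypothetical
rng = np.random.default_rng(1)
X = rng.normal(size=(400, 3))
# Synthetic 'unsafe' label driven mostly by the first two features.
y = ((1.5 * X[:, 0] + 1.0 * X[:, 1] + 0.1 * rng.normal(size=400)) > 0).astype(int)

model = LogisticRegression().fit(X, y)

# A per-decision explanation: each feature's signed contribution to the score.
x_new = np.array([0.8, 1.2, -0.3])
contributions = model.coef_[0] * x_new
for name, value in zip(feature_names, contributions):
    print(f"{name}: {value:+.2f}")
print("predicted probability of 'unsafe':", model.predict_proba([x_new])[0, 1])
```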

Robustness Problems

Robustness problems occur when an ML agent is confronted with challenges in its environment that degrade its performance and cause unexpected behaviours. While the formal objective function may have been specified correctly, inadequate training data or an insufficient model of the environment raises the risk of unintended behaviours.

  1. Self-modification: It’s assumed in RL that the agent and the environment are separate entities that interact through actions and observations.42 This assumption, however, does not always hold in real-world applications.43 In such scenarios, agents are embedded in their operating environments: an agent is a program run on a physical computer that is itself part of (and computed by) its environment.44 Therefore, where the environment has the capability to modify the program operating the agent, the agent can perform actions (intentionally or unintentionally) that cause the environment to trigger agent modifications. Designing agents that can either safely perform, or avoid, actions in the environment that cause such self-modifications is an open research problem in AI safety.
  2. Distributional shift: This safety problem relates to designing agents that behave robustly when there is a difference between their test environment and training environment.45 Distributional shifts represent ‘reality gaps’ and are a constant issue when designing ML agents to be deployed in real-world applications. If an agent’s perception or heuristic reasoning processes have not been adequately trained on the correct distribution, the risk of unintended and harmful behaviour is increased.46 A minimal sketch of this failure mode follows this list. Developing comprehensive methods to ensure the robust behaviour of agents across distributions, and to reliably detect failures, are critical problems for building safe and predictable ML systems.
  3. Robustness to adversaries: Despite classical RL assumptions (Sutton and Barto, 2016),47 some environments can interfere with an agent’s goals and behaviours.48 Such interferences can be caused by actors within these environments that stand to benefit from helping or attacking the agent.49 For instance, evasion attacks50 aim to ‘fool’ ML classifiers by adding strategic perturbations to test inputs.51 Ensuring that agents have robust capabilities to detect and adapt to both friendly and adversarial intentions within their environments is an essential safety consideration.
  4. Safe exploration: All autonomous ML agents need to explore their environments to some degree.52 In the context of RL, an agent has to exploit what it already knows, but also explore the environment to make better selections and maximise its reward in the future (Sutton and Barto, 2016).53 This represents a crucial trade-off and also raises issues of safe exploration in real-world environments.54 Because an agent learns by exploring and interacting with its environment, it necessarily begins with an incomplete understanding of that environment. Exploration can therefore be dangerous, as the agent may take actions whose consequences it does not yet understand. Traditional safety engineering methods of explicitly programming all possible safety constraints and failure scenarios are unlikely to be feasible in real-world, complex environments.55 So, applying more principled approaches to building agents that respect safety constraints and prevent harmful exploration is an essential challenge in ML safety.
  5. Multi-agent problems: Given the proliferation of ML agents applied in real-world settings, many ML agents operate in environments with both humans and other ML agents. As with humans, coordination between agents can improve overall performance. These multi-agent environments, however, can also lead to adverse scenarios similar to those of rational multi-agent human interactions.56 For instance, multi-agent human phenomena like the Prisoner’s Dilemma57 and the Tragedy of the Commons58 can emerge in multi-agent ML scenarios. These scenarios can occur where distributed rational agents (human or artificial) share a common pool of resources. The individual agents might ‘rationally’ pursue their respective policies to maximise their own utilities. As the aforementioned scenarios show, these ‘rational’ individualistic strategies can lead to adverse outcomes, both for the individual and for the collective. Therefore, designing and monitoring autonomous agents to behave robustly in multi-agent environments with both humans and machines is a key ML safety issue.
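
The sketch below illustrates the distributional shift item above in the simplest possible setting (synthetic one-dimensional data, an arbitrary ‘safe band’ rule, and scikit-learn’s logistic regression, all of which are assumptions made for illustration): a model that performs well on its training distribution degrades to roughly chance-level accuracy when the deployment distribution shifts, even though the underlying rule never changes.

```python
# Minimal sketch with synthetic data (the 'safe band' rule, value ranges, and
# model choice are illustrative assumptions): a classifier trained on one input
# distribution degrades to roughly chance accuracy when deployment inputs shift,
# even though the model and the underlying rule never change.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)

# True rule in both regimes: 'safe' (label 1) means |x| < 2.
def label(X):
    return (np.abs(X[:, 0]) < 2.0).astype(int)

# Training regime only exposes the lower edge of the safe band (x in [-4, 0]),
# so the model induces the rule 'larger x means safe'.
X_train = rng.uniform(-4.0, 0.0, size=(500, 1))
model = LogisticRegression().fit(X_train, label(X_train))

# Deployment regime shifts to x in [0, 4]; the induced rule now fails for x > 2.
X_test = rng.uniform(0.0, 4.0, size=(500, 1))

print("training accuracy:", model.score(X_train, label(X_train)))    # close to 1.0
print("shifted-test accuracy:", model.score(X_test, label(X_test)))  # roughly 0.5
```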

Oversight Problems

While the objective function may be known, or there may be an effective method for evaluating it, providing feedback to an agent at scale may be too expensive. This is referred to as an oversight problem.

  1. Scalable oversight: In complex tasks, it may be too expensive or infeasible to provide feedback to an RL agent for every training example.59 In the absence of precise knowledge of the reward function, agent designers must rely on ‘cheap’ approximations of rewards. This can allow the agent to learn a robust reward function while also maximising its reward.60 However, these cheaper signals do not always neatly align with what humans care about. When cheap approximations are inconsistent with what humans value, accident risk consequently increases. A minimal sketch of this divergence follows.
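
The sketch below is a deliberately simple illustration (the candidate actions, reward values, and penalty weight are all assumptions invented for this example) of how selecting actions by a cheap proxy signal can diverge from what a more expensive oversight signal would have preferred.

```python
# Minimal sketch (candidate actions, reward values, and the penalty weight are
# all invented assumptions): selecting actions by a cheap proxy signal can
# diverge from what a more expensive oversight signal would prefer.

def true_reward(action):
    """Expensive oversight signal (e.g. careful human review), rarely available."""
    return action["task_done"] - 5.0 * action["side_effects"]

def proxy_reward(action):
    """Cheap approximation the agent optimises at scale; ignores side effects."""
    return action["task_done"]

candidates = [
    {"name": "thorough", "task_done": 8.0, "side_effects": 0.0},
    {"name": "reckless", "task_done": 9.0, "side_effects": 2.0},
]

chosen = max(candidates, key=proxy_reward)   # what cheap feedback selects
best = max(candidates, key=true_reward)      # what full oversight would select

print("proxy picks:", chosen["name"], "true reward:", true_reward(chosen))    # reckless, -1.0
print("oversight prefers:", best["name"], "true reward:", true_reward(best))  # thorough, 8.0
```
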
  1. Amodei, Dario, Chris Olah, Jacob Steinhardt, Paul Christiano, John Schulman, and Dan Mané. 2016. “Concrete Problems in AI Safety.” arXiv [cs.AI]. arXiv.
  2. Leike, Jan, Miljan Martic, Victoria Krakovna, Pedro A. Ortega, Tom Everitt, Andrew Lefrancq, Laurent Orseau, and Shane Legg. 2017. “AI Safety Gridworlds.” arXiv.
  3. Omohundro, Steve. 2016. “Autonomous Technology and the Greater Human Good.” In Risks of Artificial Intelligence, edited by Vincent C. Müller, 9–27. CRC Press.
  4. Faria, José M. 2017. “Machine Learning Safety: An Overview.” Safety-Critical Systems Club. Safe Perspective Ltd.
  5. Supra note 2.
  6. Varshney, K.R., and Alemzadeh, H. 2016. “On the Safety of Machine Learning: Cyber-Physical Systems, Decision Sciences, and Data Products.” arXiv [cs.CY].
  7. Kyriakides, E., and M. Polycarpou. 2014. “Intelligent Monitoring, Control, and Security of Critical Infrastructure Systems”, SpringerLink. Springer.
  8. Calo, Ryan. 2017. “Artificial Intelligence Policy: A Roadmap.” SSRN. University of Washington.
  9. Scherer, Matthew U. 2016. “Regulating Artificial Intelligence Systems: Risks, Challenges, Competencies, and Strategies.” Harvard Journal of Law and Technology 29 (2):353–400.
  10. Alex Campolo, Madelyn Sanfilippo, Meredith Whittaker, & Kate Crawford. 2017. “AI Now 2017 Report.” 2. AI Now, New York University.
  11. Peter Stone, Rodney Brooks, Erik Brynjolfsson, Ryan Calo, Oren Etzioni, Greg Hager, Julia Hirschberg, Shivaram Kalyanakrishnan, Ece Kamar, Sarit Kraus, Kevin Leyton-Brown, David Parkes, William Press, AnnaLee Saxenian, Julie Shah, Milind Tambe, and Astro Teller. 2016. “Artificial Intelligence and Life in 2030.” One Hundred Year Study on Artificial Intelligence: Report of the 2015-2016 Study Panel.” Stanford University.
  12. Jordan, M. I., and T. M. Mitchell. 2015. “Machine Learning: Trends, Perspectives, and Prospects.” Science 349 (6245):255–60.
  13. Ashmore, R. and Lennon, E. 2017. Progress Towards the Assurance of Non-Traditional Software. In Developments in System Safety Engineering, Proceedings of the 25th Safety-Critical Systems Symposium.
  14. Supra note 4, page 17.
  15. Li, Yuxi. 2017. “Deep Reinforcement Learning: An Overview.” arXiv [cs.LG]. arXiv.
  16. Supra note 4, page 2.
  17. Diamandis, Peter H., and Steven Kotler. 2012. Abundance: The Future Is Better Than You Think. Simon and Schuster.
  18. Supra note 1.
  19. Supra note 2.
  20. Russell, Stuart. 2016. “Should We Fear Supersmart Robots?” Scientific American, 314(6)58-59.
  21. Supra note 1.
  22. Bostrom, N., Dafoe, A., and Flynn, C. 2016. “Policy Desiderata in the Development of Machine Superintelligence.”
  23. Supra note 2.
  24. Supra note 1.
  25. Supra note 1.
  26. Supra note 1.
  27. Orseau, Laurent, and Stuart Armstrong. 2016. “Safely Interruptible Agents.” In Uncertainty in Artificial Intelligence, 557–66.
  28. Hadfield-Menell, D., Dragan, A., Abbeel, P., and Russell, S. 2016. “The Off-Switch Game.” arXiv [cs.AI]. arXiv.
  29. Soares, N., Fallenstein, B., Yudkowsky, E., and Armstrong, S. 2015. Corrigibility. In AAAI Workshop on AI, Ethics, and Society.
  30. Supra note 1.
  31. Supra note 2, pg. 6.
  32. Bostrom, Nick. 2014. Superintelligence: Paths, Dangers, Strategies. OUP Oxford, pg. 142.
  33. Supra note 2, pg. 7.
  34. Clark, J. and Amodei, D. 2016. “Faulty Reward Functions in the Wild.” OpenAI Blog, December 22, 2016. https://blog.openai.com/faulty-reward-functions/.
  35. Seshia, Sanjit A., Dorsa Sadigh, and S. Shankar Sastry. 2016. “Towards Verified Artificial Intelligence.” arXiv [cs.AI]. arXiv.
  36. e.g. Avgerinos, T., Cha, SK., Rebert, A., Schwartz, E.J., Woo, M., and Brumley, D. 2014. “Automatic Exploit Generation.” Communications of the ACM 57 (2). New York, NY, USA: ACM:74–84.
  37. e.g. Kitchen, N., and Kuehlmann, A. 2007. “Stimulus Generation for Constrained Random Simulation.” In 2007 IEEE/ACM International Conference on Computer-Aided Design, 258–65.
  38. Nguyen, A., Yosinski, J., and Clune, J. 2014. “Deep Neural Networks Are Easily Fooled: High Confidence Predictions for Unrecognizable Images.” arXiv [cs.CV]. arXiv.
  39. Moosavi-Dezfooli, SM., Fawzi, A., and Frossard, P. 2015. “DeepFool: A Simple and Accurate Method to Fool Deep Neural Networks.” arXiv [cs.LG]. arXiv.
  40. Doshi-Velez, F. and Kim, B. 2017. “Towards A Rigorous Science of Interpretable Machine Learning.” arXiv [stat.ML]. arXiv.
  41. Parliament and Council of the European Union. 2016. General data protection regulation.
  42. Supra note 2, pg. 8.
  43. Everitt, T., Leike, J., and Hutter, M. 2015. “Sequential Extensions of Causal and Evidential Decision Theory.” arXiv [cs.AI]. arXiv.
  44. Orseau, L., and Ring, M. 2012. “Space-Time Embedded Intelligence.” In Artificial General Intelligence, edited by Joscha Bach, Ben Goertzel, and Matthew Iklé, 209–18. Springer.
  45. Quiñonero-Candela, J., Sugiyama, M., Schwaighofer, A., and Lawrence, ND. 2009. Dataset Shift in Machine Learning. The MIT Press.
  46. Supra note 1, pg. 16.
  47. Sutton, RS., and Barto, AG. 2016. Reinforcement Learning: An Introduction. The MIT Press.
  48. Goodfellow, IJ., Shlens, J., and Szegedy, C. 2014. “Explaining and Harnessing Adversarial Examples.” arXiv [stat.ML]. arXiv.
  49. Huang, L., Joseph, AD., and Nelson, B. 2011. “Adversarial Machine Learning.” In Proceedings of 4th ACM Workshop on Artificial Intelligence and Security, 43–58.
  50. Biggio, B., Corona, I., Maiorca, D., Nelson, B., Srndic, N., Laskov, P., Giacinto, G., and Roli, F. 2017. “Evasion Attacks against Machine Learning at Test Time.” arXiv [cs.CR]. arXiv.
  51. Bhagoji, AN., Cullina, D., Sitawarin, C., and Mittal, P. 2017. “Enhancing Robustness of Machine Learning Systems via Data Transformations.” arXiv [cs.CR]. arXiv.
  52. Supra note 1, pg. 14.
  53. Supra note 47, pg. 3.
  54. García, J., and Fernández, F. 2015. “A Comprehensive Survey on Safe Reinforcement Learning.” Journal of Machine Learning Research: JMLR 16:1437–80.
  55. Supra note 1, pg. 14.
  56. Chmait, N., Dowe, DL., Green, DG., Li, Y. 2017. “Agent Coordination and Potential Risks: Meaningful Environments for Evaluating Multiagent Systems.” In Evaluating General-Purpose AI, IJCAI Workshop.
  57. Rapoport, A. and Chammah, AM. 1965. Prisoner’s Dilemma: A Study in Conflict and Cooperation. University of Michigan Press.
  58. Hardin, G. 1968. “The Tragedy of the Commons.” Science 162 (3859). American Association for the Advancement of Science:1243–48.
  59. Supra note 1, pg. 11.
  60. Armstrong, S., and Leike, J. 2016. “Towards Interactive Inverse Reinforcement Learning.” In NIPS Workshop.