Blog on Economics, Artificial Intelligence, and the Future of Work

AI Governance – 11 Key Issues

TL;DR

  • As Artificial Intelligence (AI) is broadly applied to social and economic domains, measured oversight becomes increasingly important.
  • But determining appropriate regulatory frameworks for AI is complex. Eleven key issues include:
  1. Defining AI
  2. Articulating ethical standards and social norms
  3. Accountability when AI causes harm
  4. Appropriate degree of oversight
  5. Measurement & evaluation of impact
  6. The control problem
  7. Openness
  8. Privacy & security
  9. Projections
  10. Assessing institutional competence
  11. The political problem

  • In the absence of robust policies, Matt Scherer has proposed a voluntary AI certification system. Certified AI programs would be granted limited liability, creating an incentive for developers to meet safety standards. The certification standards would be established and monitored by an independent government agency.

AI Governance

Knowingly or unknowingly, Artificial Intelligence systems are intersecting with more parts of our lives, and not just in areas of trivial importance. AI systems are being applied to essential areas of society, from analysing Electronic Health Records to improve diagnosis rates, to balancing power supply across energy grids. AI can help us achieve more and raise standards of living.

So, as AI systems are deployed at scale within fundamental societal structures, measured oversight at scale becomes necessary.

Such public-interest roles are typically assumed by arms of national government. But AI public policy has been met with near radio silence across the world. As a result, the development and application of AI continue in a policy vacuum.

Issues with Governing AI

The unique challenges and complexities of AI do not fit neatly into existing governance frameworks. Safety standards are fluid. Accountability is opaque. And policy-makers lack expertise. Investment in developing AI has exceeded investment in making AI safe by an order of magnitude1. This is fuelling immense growth in AI applications with almost no regulatory oversight. The surprising thing is that many of the most prominent tech leaders, such as Elon Musk2 and Bill Gates3, think that a degree of regulatory oversight is important.

Meanwhile, policy-makers sit idle. AI is viewed as a black box. Most are unclear about what AI actually is, let alone how to institute appropriate governance.

So, what are the main issues with governing AI?

As with all major public policy areas, the issues extend beyond hard technical problems. There are conceptual issues as well as practical ones.

Conceptual Policy Issues

  • Defining AI – The problems with defining AI for regulatory purposes centre on the conceptual ambiguities of ‘intelligence’. Definitions of intelligence vary widely. Intellectual characteristics like ‘consciousness’ and ‘the ability to learn’ are at best nebulous. So, arriving at an agreed definition of Artificial Intelligence is difficult. The subjective nature of AI terminology means it becomes a moving target for policy-makers. Definitions of AI range from ‘the ability to act humanly’4, to ‘performing intellectual tasks’5, to the modern definition of ‘acting rationally to achieve goals’6. However, even a ‘goal-oriented’ approach doesn’t provide clarity for a regulatory definition.
  • Ethical Standards & Social Norms – For autonomous systems to operate effectively in society, they need to do so ethically and in alignment with social norms. But what is good behaviour? What is just? Any attempt to develop AI governance structures will inevitably confront such philosophical questions.
  • Accountability – Assigning liability when autonomous systems cause harm is a difficult conceptual and practical challenge. This will be particularly important in social and economic domains. For instance, to what degree can a physician rely on intelligent diagnosis systems without increasing exposure to malpractice claims in the case of a system error? Precedent in case law is sparse. And the applications of AI systems are rapidly expanding in the absence of ex-ante accountability frameworks.
  • The Degree of Oversight – The extent of regulation is always a delicate balance. Ultimately, an ideal AI governance structure would help maximise the opportunities for positive outcomes while minimising the negative risks. The advantage is that AI development is still in its infancy. However, a failure to institute appropriate oversight could yield unfavourable outcomes. If regulations go too far, innovation could be inhibited and societal benefits lost. If regulations don’t go far enough, negative outcomes at scale could result and knee-jerk policy reactions ensue. We’ve seen this before in other sectors, such as bioengineering and biomedicine. For instance, the harm caused by Thalidomide led to tighter FDA regulations on drug classification in the US.7

Practical Policy Issues

  • Measurement & Evaluation – While technical progress is being made in the emerging field of AI Safety, we currently lack agreed-upon methods to assess the social and economic impacts of AI systems. Robust M&E methods are important as they support investigative, regulatory, and enforcement functions. They help set benchmarks, so we can know whether AI applications are producing positive outcomes.
  • The Control Problem – The risk associated with controlling autonomous systems is a core problem across all segments of AI. In the case of autonomous Machine Learning systems, there is a risk that as they continue to learn and adapt, the potential for human control is inhibited. Once control is lost, it may be difficult to regain. From a policy perspective, there are obvious public risks. So, if the potential for such scenarios is in any way more than theoretical, then the assurance of human control and public alignment will be necessary.
  • Openness – Determining the desirability of openness in AI research & development is a key issue for policy-makers (including openness about source code, science, data, safety techniques, capabilities, and goals). Types and degrees of openness result in complex societal tradeoffs, particularly in the long term. While higher levels of openness will likely accelerate AI development, they may also exacerbate a racing dynamic: a situation in which competitors race to develop the first Artificial General Intelligence. Such a dynamic may lead competitors to adopt inadequate safety measures in order to accelerate progress.8 This scenario increases public exposure to systemic risks. It’s important to note that technology and policy decisions are never deterministic. We can’t know for certain that any of these scenarios will come to pass. It’s plausible, however, that the lever of openness will have significant second, third, and fourth-order effects. Therefore, it’s an important policy consideration.
  • Privacy & Security – Data is the lifeblood of AI systems. Maintaining standards that uphold privacy and ensure the security of the data accessed by AI systems is a key technical and policy challenge. People should have the right to access, manage, and control the data they generate, given AI systems’ power to analyse and utilise that data. Moreover, it’s imperative that personal data is securely stored and not accessed unscrupulously or used without express consent.
  • Projections – The decision-making processes of AI systems are fundamentally different from those of humans. That’s why AI systems can generate solutions that humans never considered.9 This ability to create value through unexpected solutions is a fundamental point of attraction towards AI systems. It’s also a risk. Accurately projecting adverse effects from AI systems is difficult, precisely because outcomes can be unexpected. As AI increasingly enters social and economic domains, policy-makers will seek reassurance from projections as part of due diligence. But clear projection methods don’t yet exist.
  • Assessing Institutional Competence – Even if it were decided that regulatory oversight should be instituted for broad-scope AI, governance structures still need to be determined. There are notable issues at hand: legislators lack expertise; courts can’t act quickly enough on a case-by-case basis to establish precedent; and international institutions can be perceived as toothless tigers. While challenging, history offers lessons in effective governance structures for overseeing powerful technologies. The Treaty on the Nonproliferation of Nuclear Weapons offers relevant insight. Although this remains an underserved research area, Matt Scherer has proposed a useful regulatory framework for AI, which is summarised below.
  • The Political Problem – The current and potential powers of AI are not deterministic. They depend on how AI is applied, and those decisions are currently made by humans. As with any source of power, there’s potential for good and for subversion. The political challenge is to achieve a situation in which the individuals or institutions empowered by AI use it in ways that promote the common good. At a time when nationalism is on the rise,10 international cooperation is becoming increasingly difficult. Political cooperation, however, is necessary for the safe broad-scale deployment of AI, which transcends national borders.

These issues, taken together, highlight the complexities of establishing appropriate AI policies. National governments are still in the early days of their thinking. Last year, the US government held a series of public workshops with industry and research leaders, resulting in a summary report presented to the White House.11 Similarly, the UK House of Commons commissioned an inquiry into the opportunities and implications of Robotics and Artificial Intelligence.12 While the intent is positive, policy positions are still abstract. This demonstrates how elementary our understanding remains of how broad-scale AI might impact society, let alone the potential roles of public policy.

A Proposed AI Regulatory Framework

In the absence of robust policies, Matt Scherer, an attorney and legal scholar from the US, has presented a useful proposal to regulate AI systems.13 The centrepiece of this tort-based framework involves an AI certification process. Certification would require designers, manufacturers, and sellers of AI systems to fulfil safety and legal standards. These standards would be developed and monitored by an independent AI Agency that’s appropriately staffed by AI specialists.

Scherer proposes that rather than creating an AI Agency with ‘FDA-like powers’ to ban products, AI programs that are successfully certified could be granted limited liability. This means that plaintiffs would have to establish actual negligence in the design, manufacturing, or operation of an AI system to succeed in a tort claim. Uncertified AI programs would still be available for commercial sale but would be subject to strict joint and several liability. Successful plaintiffs would, therefore, be permitted to ‘recover the full amount of their damages from any entity in the chain of development, distribution, sale, or operation of the uncertified AI’.14

Another advantage of Scherer’s proposal is that it leverages the institutional strengths of legislatures, agencies, and courts. In summary, this structure would allocate roles in the following ways:

  • Legislature – This system would utilise the democratic mandate of the Legislature to determine the goals and purposes that guide AI governance. It would also use the powers of the Legislature to enact legislation (Scherer refers to this as the ‘Artificial Intelligence Development Act’) that would create an independent agency for oversight.
  • Independent Agency – As legislators lack the specialist knowledge required, they would delegate the central task of assessing the safety of AI systems to an independent agency of AI specialists. Independence is key, as it will help insulate the Agency from the jockeying of electoral politics. An independent agency also has the flexibility to act preemptively. This flexibility and responsiveness are particularly important as AI development continues at breakneck speed.
  • Courts – The courts would be utilised for their strengths in adjudicating cases and allocating responsibility. This would require the courts to apply the rules governing negligence claims, differentiating between certified AI with limited liability and uncertified AI with strict liability. A core role of the courts will be allocating responsibility to the parties that caused harm through the AI program.

This proposed structure isn’t a panacea for the issues listed above. It does, however, provide a flexible regulatory framework for oversight without draconian regulations. By leveraging the tort system, the proposed structure would provide strong incentives for AI developers to incorporate safety features and internalise the associated costs. It would also disincentivise distributors from selling uncertified AI programs that haven’t met public safety standards.

Regardless of whether Scherer’s proposal is considered appropriate, governments will need to develop policy positions for broad-scope AI. This will take careful planning and consideration. It will also require a sense of urgency. Ultimately, the future depends on what we do in the present.

  1. Farquhar, Seb (2017) “Changes in Funding in the AI Safety Field.”
  2. Wagner, Kurt (2017) “Elon Musk just told a group of America’s governors that we need to regulate AI before it’s too late,” Recode (July 15, 2017).
  3. For example, see Mack, Eric (2015) “Bill Gates Says You Should Worry About Artificial Intelligence,” Forbes (Jan. 28, 2015).
  4. Turing, A. M. (1950) “Computing Machinery and Intelligence,” Mind 59: 433–460.
  5. See Pandolfini, Bruce (1997) Kasparov and Deep Blue: The Historic Chess Match Between Man and Machine, pp. 7–8.
  6. ‘Intelligent Machines’ or ‘Artificial Intelligence’ refers to non-organic autonomous entities that are able to sense and act upon an environment to achieve specific goals. Intelligent agents may also learn or use knowledge to achieve these goals, which are governed by algorithms made by people. Russell, Stuart J. and Norvig, Peter (2003) Artificial Intelligence: A Modern Approach (2nd ed.), Upper Saddle River, New Jersey: Prentice Hall, ISBN 0-13-790395-2, chpt. 2. See also Omohundro, Stephen M. (2008) “The Basic AI Drives,” in Artificial General Intelligence 2008, 483 (defining AI as a system that “has goals which it tries to accomplish by acting in the world”).
  7. Bren, L. (2001) “Frances Oldham Kelsey: FDA Medical Reviewer Leaves Her Mark on History,” FDA Consumer, U.S. Food and Drug Administration.
  8. Bostrom, Nick (2017) “Strategic Implications of Openness in AI Development,” Global Policy 8 (2): 135–48.
  9. For example, see Metz, Cade (2016) “In Two Moves, AlphaGo and Lee Sedol Redefined the Future,” Wired (March 16, 2016).
  10. Onder, Harun (2016) “The Age Factor and Rising Nationalism,” Brookings Institution.
  11. US National Science and Technology Council, Committee on Technology (2016) “Preparing for the Future of Artificial Intelligence,” Executive Office of the President.
  12. UK House of Commons (2016) “Robotics and Artificial Intelligence.”
  13. Scherer, Matthew U. (2016) “Regulating Artificial Intelligence Systems: Risks, Challenges, Competencies, and Strategies,” Harvard Journal of Law & Technology 29 (2): 354–400.
  14. Ibid., p. 395.