The Regulation of Artificial Intelligence — A Case Study of the Partnership on AI

I wrote this dissertation in the summer of 2018, as part of my MSc in International Public Policy at UCL. One of my goals during the MSc was to learn the methods and frameworks in the literature on international public policy in order to apply them to the challenges and opportunities of technological development.
I recognise that the Partnership on AI is not an explicit attempt at self-regulation by the AI industry, but given the prominence of its members and the relative dearth of alternative cooperative bodies (and the importance of getting safe AI development right!), I hope this research can be food for further thought. I've also shared my thinking on why the international governance of AI development is challenging in this Medium post.
Since last summer, the question of how the development of technology — and AI in particular — should be governed has remained at the forefront of public debate. My views on this question have also developed as I've joined the UK's civil service in its Digital and Tech Policy directorate in DCMS. Nevertheless, I still think this research could provide an interesting read!
For the format's sake, I've left out the tables and appendices that were originally part of the dissertation.
Table of Contents
Abstract
1. Introduction
1.1 AI and Regulation
1.2 AI and Self-Regulation
2. Literature Review
2.1 The Regulation of Innovation
2.2 Unique Challenges of AI Regulation
2.3 AI Competition
2.4 AI Securitization
2.5 National and International AI Commissions
3. A Self-Regulatory Approach
3.1 Characteristics of Effective Self-Regulatory Systems
4. Methodology
5. Analysis of the Partnership on AI
5.1 Principles and Objectives
5.2 Intrinsic Motivation
5.3 Independence and Authority
5.4 Internal and External Transparency
5.5 Credible Enforcement Mechanisms
5.6 Tenet Compliance
6. Evaluation
6.1 Implications
6.2 Validity
7. Conclusion
8. Bibliography
Abstract
This dissertation applies theory from the literature on self-regulation to the context of the AI industry in order to understand how the development of AI is being governed, and how society can benefit from AI development while minimizing its risk to public welfare.
In the absence of suitable traditional regulatory measures, an effective self-regulatory system has the potential to address the unique regulatory challenges of AI, and minimize its risk to public welfare. This dissertation assesses whether one of the most prominent cooperative bodies in the AI industry, the Partnership on AI (PAI), has the characteristics of an effective self-regulatory system.
A review of the literature on regulation and AI demonstrates that specific technical aspects of AI exacerbate the regulatory issues known as the Pacing Problem and the Uncertainty Paradox. It also shows that corporations and states face a competitive dynamic in AI development, wherein they may engage in a regulatory race to the bottom in the pursuit of fast AI development, instead of safe AI development. A self-regulatory system can effectively address these challenges, as experts in the industry can keep regulation connected to innovation, while collective action by the industry can change the incentives in the pursuit of fast AI development.
To evaluate the PAI, five characteristics of effective self-regulation are identified based on examples in the literature on self-regulation concerning the American chemical and nuclear industries. These criteria are applied to the PAI through an in-depth case study, using a content analysis methodology.
This paper concludes that the PAI does not have the characteristics of an effective self-regulatory system, and is unable to resolve the prisoner’s dilemma dynamic in AI development. As the most prominent example of cooperation in the AI industry, this does not bode well for the minimization of public risk in the development of AI, suggesting that a different regulatory response is needed.
1. Introduction
It is a common trope within the study of public policy to complain that innovators are always a step ahead of policymakers. The public also believes that the law lags behind technological innovation (Calo 2017), and that regulators allow regulatory gaps to persist for fear of impeding economic growth and innovation (Wadhwa 2014). In one of his first public remarks as the new Chairman of Alphabet, Google’s parent company, renowned computer scientist John Hennessy echoed the same sentiment, stating that “the technology industry moves too quickly for the government to effectively regulate it” (Williams 2018). Instead, many pioneers in the Artificial Intelligence (AI) industry argue that they are in the best position to develop the standards and rules that will guide continued innovation, while minimising public risk.
The tide of public opinion seems to be turning against the companies at the forefront of AI development, as a number of incidents have shed light on their “perceived unethical or anti-competitive business practices” (Economist 2018). While the industry complains about an “AI misinformation epidemic” that paints a dystopic vision of the consequences of AI development (Schwartz 2018), recent surveys show that the public consider “AI an equal or bigger threat to mankind than nuclear weapons” (Nesta 2018).
The long-term consequences of AI development deserve consideration, but the current AI Hype obscures the reality that this technology is already a significant source of risk to public welfare (Schwartz 2018). The purpose of this dissertation is to investigate whether the self-governance favoured by the AI industry effectively balances the minimization of public risk with the potential for continued innovation. To do so, this dissertation presents a case study of the Partnership on AI, the most prominent and representative body for cooperation in the AI industry. In doing so, it addresses the following question: does the Partnership on AI have the characteristics of an effective self-regulatory system?
1.1 AI and Regulation
Regulation is commonly defined as “the sustained and focused attempt to alter the behaviour of others according to standards or goals, with the intention of producing a broadly identified outcome” (Brownsword and Somsen 2009, 3; Bennett Moses 2013). For a regulatory regime to be effective, it must have a clear definition of what it regulates. Unfortunately, there are many different definitions of AI amongst experts in the field (Scherer 2016). The non-technical definition that is prevalent in the literature on AI policy is that an AI is any digital tool or system that is capable of performing tasks that, if performed by a human, would be said to require intelligence (Brundage et al. 2018, 9; Scherer 2016, 262). An important implication of this definition is that AI is a general-purpose technology — the combination of intelligence with computing properties has the potential to improve productivity across all industries by accelerating innovation (Brundage and Bryson 2017).
The renewed prominence of AI is fuelled by the development of a variety of machine learning (ML) techniques. These techniques are used to create digital systems that can “improve their performance on a given task over time through experience” (Brundage et al. 2018, 9). A combination of cheaper and improved computer processing power, access to large and organised training datasets, and algorithmic innovation — all fuelled by private sector investment — has allowed machine learning academics and professionals to make significant innovations in a variety of domains commonly considered to be important elements of AI (Calo 2017; Brundage and Bryson 2017).
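As a concrete illustration of this definition of machine learning, the short Python sketch below shows a system whose performance on a task improves with "experience" in the form of additional training examples. It is purely for illustration: the dataset, model and sample sizes are arbitrary choices rather than anything drawn from the literature reviewed here.

```python
# Illustrative only: a model's accuracy on unseen data tends to improve
# as it is given more "experience" (training examples).
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

for n in (50, 200, 800):  # progressively more experience
    model = LogisticRegression(max_iter=1000)
    model.fit(X_train[:n], y_train[:n])
    print(f"{n} training examples -> test accuracy {model.score(X_test, y_test):.2f}")
```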
The risk to public welfare emanates from two distinct forms of AI. The AI literature identifies current ML techniques as “narrow AI”, consisting of highly specialized statistical models that have been extensively trained to match or exceed human level performance at a specific task, in a specifically defined environment (Campolo et al. 2017). The development and application of narrow AI is associated with significant risks involving personal privacy, bias, inequality and rapid automation (Brundage et al. 2018).
However, the media-fuelled fears and expectations of AI generally concern Artificial General Intelligence (AGI). AGI refers to a system that equals or exceeds human level performance at any task across multiple domains, independent of its training environment. While there is currently no clear development trajectory toward AGI, a survey of AI experts gives a 10% chance of such AI being developed by 2024, and a 50% chance of it being developed by 2050 (Grace et al. 2017, 2; Bostrom, Dafoe, and Flynn 2017, 12). AGI presents public welfare risks on a different order of magnitude, including geopolitical security concerns, labour market dislocations and extreme economic inequality. Some researchers even identify an existential risk to the survival of our species, if we fail to control or align an AGI with our values (Bostrom, Dafoe, and Flynn 2017).
1.2 AI and Self-Regulation
This dissertation presents the argument that the development and application of AI presents unique regulatory challenges, and that the prospect of AGI creates a competitive dynamic that prioritizes the fast development of AI over the safe development of AI. This dynamic affects the corporations developing AI technology, and the countries tasked with regulating them. Traditional regulatory solutions seem ill-suited to the task of minimizing public risk while sustaining innovation.
Self-regulation is an alternative form of governance where an industry designs and enforces new rules and standards for themselves, often in “areas where government rules are lacking” (Haufler 2001, 8). I posit that a self-regulatory system has the potential to be an effective and expedient solution to the unique regulatory challenges of AI, and could enable the AI industry to overcome the competitive dynamic that incentivises AI development speed over safety.
The Partnership on AI is the industry's most prominent and representative body for cooperation on AI safety. To investigate whether it has the characteristics of an effective self-regulatory system, the dissertation has the following structure. It presents a review of the literature on regulation and AI policy to identify the unique regulatory challenges of AI. It identifies the characteristics of an effective self-regulatory system by evaluating relevant examples from the literature on self-regulation. It provides a methodological description of the case study and content analysis that will be used in the investigation of the Partnership on AI. Finally, the dissertation evaluates the findings of the case study, assesses its validity, presents a conclusion and suggests areas for further research.
2. Literature Review
This literature review develops the argument that a self-regulatory system has the potential to be an effective solution to the challenges of AI regulation, and draws on examples from the literature on self-regulation to identify the characteristics of an effective self-regulatory system. A review of the literature on regulation and AI policy demonstrates that neither body of work offers a viable and expedient solution to the challenges of AI regulation.
The regulation of innovation literature identifies clear challenges in the regulation of emerging technologies, but asserts that “there is nothing about technology… that presents unique regulatory problems” (Bennett Moses 2013, 13; Butenko and Larouche 2015). However, the budding literature on AI policy presents a compelling case that the development and application of AI does present unique regulatory challenges.
In turn, the AI policy literature asserts that the biggest challenge in regulating AI is a critical lack of expertise amongst regulators (Scherer 2016), and suggests solutions in the form of national and international regulatory organisations (Brundage and Bryson 2017). This dissertation argues that this solution is unlikely to work in the short term because of the securitization of AI by national governments.
2.1 The Regulation of Innovation
The literature on the regulation of emerging technologies is primarily concerned with the minimization of “public risk” (Blind 2012; Butenko and Larouche 2015). Public risk describes threats to human health or safety that are “centrally produced, broadly distributed, and largely outside the individual risk bearer’s direct understanding and control” (Huber 1985, 85:227). Scholars in this field seek to understand what kind of regulatory responses are optimal for balancing public risk minimization and sustained innovation (Fenwick, Kaal, and Vermeulen 2016).
The first challenge of regulating emerging technologies is called the Pacing Problem (Butenko and Larouche 2015; Fenwick, Kaal, and Vermeulen 2016; Kaal 2016; Brownsword and Somsen 2009). The Pacing Problem, also known as the problem of regulatory connection, occurs when innovations develop faster than regulations (Kaal 2016). Authors in this field assert that many regulatory systems are “mired in stagnation, ossification and bureaucratic inertia” (Marchant 2011, 199), while technological innovation is accelerating (Kaal 2016). The main issue regulators face is a lack of expertise in their relevant technological field, complicating their ability to manage the relationship between existing regulations and a new technology. This is especially difficult in the case of a general-purpose technology like AI, which presents a “multitude of different and often hard-to-quantify risk and benefit scenarios” (Marchant 2011, 199).
The second challenge of regulating emerging technologies is called the Uncertainty Paradox, also known as the Collingridge Dilemma (Butenko and Larouche 2015; Collingridge 1980). Collingridge asserted that regulators faced twin hurdles related to uncertainty (Bennett Moses 2013). When an innovation is introduced, immediate regulation risks being stifling or counter-productive, since little is known about its potential impact on society (Bennett Moses 2013; Khatchadourian 2015). However, if the regulator waits to reduce uncertainty about the impact of a new technology, it will probably be more difficult to effectively regulate it, since the mature technology has become entrenched in society (Bennett Moses 2013; Butenko and Larouche 2015).
The literature on regulation asserts that regulators who lack the time and expertise to solve the Pacing Problem and overcome the Uncertainty Paradox can pursue two methods to minimize public risk: the precautionary principle, and principles-based regulation.
The precautionary principle is a method that advises a ban on any innovation which can cause significant and irreversible harm, even if the risk of that harm occurring is small, until the innovation is proven safe (Butenko and Larouche 2015; Marchant 2011; Doteveryone 2018). However, the precautionary principle seems unsuitable for the AI industry, considering the extent to which narrow AI has already integrated itself within the global economy.
Principles-based regulation (PBR) is an alternative to traditional ‘command and control’ rules-based regulation (Kaal 2016). It advocates broad guiding principles instead of precise rules in pursuit of desired regulatory outcomes, creating a more resilient regulatory system which stays connected to technological innovation (Marchant 2011; Kaal 2016). PBR aims to mediate the effects of the Pacing Problem and the Uncertainty Paradox by creating a regulatory model based on trust, responsibility and good faith between the regulator and regulated (Black 2008).
However, this dependence on trust earns PBR the criticism of being a “regulatory Utopia”, in which “conflicts of interest and contests of power are glossed over” (Black 2008, 432). As discussed in the introduction, the contemporary AI industry is not marked by high levels of trust between innovators and the public. In this environment, the literature asserts that PBR would increase regulatory uncertainty while allowing for sub-optimal regulatory compliance (Carter and Marchant 2011; Kaal 2016).
2.2 Unique Challenges of AI Regulation
In his 2016 submission to the Harvard Journal of Law and Technology, Matthew Scherer contributed to the literature on AI regulation by explaining how the characteristics of AI created ex-ante and ex-post regulatory challenges that exacerbate the Pacing Problem and Uncertainty Paradox (Scherer 2016).
Ex-post regulation refers to regulation that would be enacted after an AI system causes harm. This regulatory approach is problematic because of the AI characteristic of autonomy, which creates issues of foreseeability, causation and control (Scherer 2016). A significant part of the promise of AI is that it will be able to act autonomously to achieve its goals. Therefore, it may be impossible to foresee how an AI will act to achieve its goals, which creates a legal challenge for establishing causation and legal liability (Scherer 2016). An ex-post approach, or regulating by learning from mistakes, is poorly suited to minimizing public risk, particularly when those risks are as significant as those posed by AI development (Scherer 2016; Bostrom, Dafoe, and Flynn 2017).
However, regulation of AI before it is developed, known as ex-ante regulation, is also challenging, because AI development “may be discreet, diffuse, discrete and opaque” (Scherer 2016, 369). Discreetness refers to the difficulty of identifying where AI development is occurring because it “can be conducted with little visible infrastructure” (Scherer 2016, 369). Diffuseness refers to the fact that AI development is often contributed to by teams in dispersed geographic locations (Scherer 2016, 369). Discreteness means that different components of AI development are frequently designed without conscious coordination between developers (Scherer 2016, 368). Opaqueness means that any outside observer, like a regulator, may not be able to reverse-engineer an AI system to understand the public risks it poses (Scherer 2016, 369).
These unique technical challenges suggest that regulators cannot rely on traditional regulatory approaches to effectively minimize the public risk from AI development. However, a self-regulatory approach has the potential to be a viable solution to these challenges. Researchers in the industry have the expertise to effectively address the Pacing Problem and Uncertainty Paradox, and can develop relevant safety standards to anticipate ex-ante and ex-post technical challenges.
2.3 AI Competition
The debate on how to regulate AI does not occur in academic isolation. It is complicated by the increasingly competitive development of AI by both corporations and states. This creates a prisoner’s dilemma dynamic in which the stable, sub-optimal Nash equilibrium favours development speed over development safety, causing a potential regulatory “race to the bottom” (Bostrom, Dafoe, and Flynn 2017, 6).
The development and application of AI has been driven by the private sector, where it sits at the heart of the corporate strategies of some of the world’s largest corporations. Within the last two years, national governments have also started to recognize that AI is of vital importance to their economic development and national security (Cath et al. 2016). Corporations and governments realize that it will be easier, cheaper and faster to build the first Artificial General Intelligence (AGI) than it is to build the first safe AGI, and as a result, corporations and governments are incentivised to pursue the quickest development trajectory (Armstrong, Bostrom, and Shulman 2013; Calo 2017; Khatchadourian 2015). This dynamic can be understood with the concept of a prisoner’s dilemma. The competitive pressures create a Nash equilibrium, a stable sub-optimal outcome with regard to public risk mediation, wherein all agents pursue fast AI development (Armstrong, Bostrom, and Shulman 2013).
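To make this dynamic concrete, the sketch below encodes the speed-versus-safety choice as a toy two-player game. The payoff numbers are purely hypothetical and are not drawn from the cited literature; the code simply checks which strategy pairs are mutual best responses, illustrating why both players settle on "fast" even though mutual "safe" development would leave both better off.

```python
# Hypothetical payoffs for two AI developers choosing between "safe" and "fast"
# development: (row_choice, col_choice) -> (row_payoff, col_payoff).
PAYOFFS = {
    ("safe", "safe"): (3, 3),  # both bear safety costs, both benefit
    ("safe", "fast"): (0, 4),  # the cautious developer is outpaced
    ("fast", "safe"): (4, 0),
    ("fast", "fast"): (1, 1),  # race to the bottom
}
OPTIONS = ("safe", "fast")

def is_nash(row_choice, col_choice):
    # A Nash equilibrium: neither player can do better by changing only their own choice.
    row_ok = all(PAYOFFS[(row_choice, col_choice)][0] >= PAYOFFS[(alt, col_choice)][0]
                 for alt in OPTIONS)
    col_ok = all(PAYOFFS[(row_choice, col_choice)][1] >= PAYOFFS[(row_choice, alt)][1]
                 for alt in OPTIONS)
    return row_ok and col_ok

equilibria = [(r, c) for r in OPTIONS for c in OPTIONS if is_nash(r, c)]
print(equilibria)  # [('fast', 'fast')] -- stable but collectively sub-optimal
```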
Traditional solutions to the prisoner’s dilemma in the private sector include government intervention and regulation, which change incentive structures so that corporations change their behaviour. A recent example of intervention is the European Commission’s record €4.3 billion fine for Google’s anti-competitive actions on its search platform (Economist 2018). Besides fines and public investigations, the EU has also set an example with its new General Data Protection Regulation (GDPR). While not directly aimed at the development and application of AI, it has significant consequences for the private sector’s ability to gather and use data from European citizens in their AI systems (Birnbaum, Romm, and Timberg 2018).
2.4 AI Securitization
Traditional measures like these are insufficient to resolve the prisoner’s dilemma in AI development, because national governments are subjected to the same competitive dynamic as corporations in the AI industry. Instead of competing on financial returns, governments are securitizing AI, conceiving of it as a central element in their national security and geopolitical competition strategies (Brundage et al. 2018; Cath et al. 2016). Securitization describes a strategic move by governments to cast issues “outside or beyond normal politics” in pursuit of national interests (Mcdonald 2008, 569), increasing “the risk that countries may put aside safety concerns” to attract and support the AI industry (Allen et al. 2018, 10). The securitization of AI has accelerated significantly in the last two years, as 27 countries have published various forms of national AI strategies (Dutton 2018). Governments expect AI to generate “significant changes in the balance of power, international competition and international conflict” (G. C. Allen et al. 2018, 2).
Each national strategy recognizes the need to attract, develop and retain AI talent (Dutton 2018). The key players in AI are private sector corporations, and if governments want to effectively harness AI for their national security, they need to attract private sector innovation (G. C. Allen et al. 2018). This recognition is reflected in both the UK’s Artificial Intelligence Sector Deal and the European Commission’s review on AI strategy, which state that if they fail to actively develop and attract the AI industry, they risk being relegated to the role of a consumer of other countries’ AI solutions (HM Government 2018; EU Commission 2018).
In this competitive context, the relative strictness of a regulatory regime plays an important role in whether or not a country can attract or retain the AI industry (Erdélyi and Goldsmith 2018). The multinational character of corporations in the AI industry, in combination with the diffuse and discrete technical nature of AI development, makes it relatively easy for corporations to move AI development work to countries with more conducive regulatory regimes (Scherer 2016). Concerns are already being raised that the EU’s GDPR “is putting its manufacturers and software designers at a significant disadvantage to the rest of the world” (Allen and West 2018), while a recent report on China’s new AI strategy notes that the government does not want to make the rules too strict for corporations in a way that would inhibit AI development (Ding 2018).
2.5 National and International AI Commissions
Thus, governments also face a prisoner’s dilemma, in which a sub-optimal Nash equilibrium incentivises the competitive development of national AI industries. This casts doubt on the ability of governments to effectively regulate AI. Nevertheless, the AI policy literature and many of the national strategies propose two methods to ensure the safe and responsible development of AI.
The first method is the creation of national commissions or agencies that develop technical standards and ethical guidelines for the industry, such as the UK’s newly announced Centre for Data Ethics and Innovation (HM Government 2018), and the European AI Alliance (EU Commission 2018). However, these agencies face the aforementioned Pacing Problem and Uncertainty Paradox, and would only be effective if “government, industry and the research sector will rely on (their) advice” (Cath et al. 2016, 7). They also do not change the incentives for the competitive development of national AI industries.
The second method proposes a new international organisation to coordinate and disseminate safety and ethics standards and regulations (Brownsword 2017; Brundage and Bryson 2017; Erdélyi and Goldsmith 2018). Such an organisation would be expected to draw on international interdisciplinary expertise to “create a framework for the regulation of AI technologies and inform the development of AI policies around the world” (Erdélyi and Goldsmith 2018, 1). The AI Global Governance Commission is an early attempt at such a solution (Loj 2018). Launched in April 2018, it is being developed by a London based think tank, and is currently pursuing international partnerships (Loj 2018). While assessing the efficacy of an international regulatory approach to AI is outside the scope of this investigation, the contemporary difficulties in global coordination do not inspire confidence that it will be an expedient solution to the pressing challenges of AI regulation (Danaher 2015).
3. A Self-Regulatory Approach
The review of the literature on regulation and AI policy demonstrates that the development of AI is a unique regulatory challenge which is difficult to solve with traditional regulatory methods. The technical features of AI development and application exacerbate the Pacing Problem and the Uncertainty Paradox, while AI development occurs within a competitive dynamic between corporations and states wherein a “regulatory race to the bottom” is the most likely outcome (Armstrong, Bostrom, and Shulman 2013).
A self-regulatory system has the potential to be the most viable solution for an effective and expedient response to the challenges of AI regulation. Experts in the industry are well suited to address the Pacing Problem and Uncertainty Paradox by keeping safety rules closely tied to innovation. Furthermore, a self-regulatory system has the potential to solve the prisoner’s dilemma between development speed versus safety. A well-designed system for self-regulation can foster collective action and change the incentives for individual corporations away from the choice to solely pursue development speed by increasing the certainty that their competitors are also prioritizing safety over speed.
3.1 Characteristics of Effective Self-Regulatory Systems
I have identified five characteristics of effective self-regulatory systems using two significant examples from the literature on self-regulation. These are the Responsible Care (RC) program, which was developed by the American Chemical Council after the Bhopal disaster (A. A. King and Lenox 2000; Barnett and King 2008; Sinclair 1997; Lenox 2006), and the Institute of Nuclear Power Operations (INPO) created by the American nuclear energy industry in the wake of the Three Mile Island accident (Cohent and Sundararajantt 2017; Gunningham and Rees 1997; A. A. King and Lenox 2000).
These examples are relevant proxies for comparison with the AI industry. Both the RC and INPO are recognized as being successful in part because their industries contain only a small number of major players, which facilitates collective action (Gunningham and Rees 1997). Both industries are a source of significant public risk, which fosters the perception among industry leaders that “the future prosperity and perhaps even the survival of the industry is dependent on self-control” (Gunningham and Rees 1997, 391). The AI industry also contains a small number of major players, whose development and application of AI are a significant source of public risk. The industry also recognises that it faces potential costs from reactionary government regulation and a loss of social legitimacy if it fails to mediate the risks associated with AI.
The first characteristic of an effective self-regulatory system is the formulation of a clear set of principles that align with the objectives of the industry. This “industry-wide normative framework” consists of the codes of conduct or principles that will define industry practices and the overall objective of self- regulation (Gunningham and Rees 1997). In particular, to assuage the concern that these principles merely codify existing practices or represent the lowest common denominator of safe AI development and application practices (Rappert 2011), cooperation with academic and third-party experts can ensure that their principles are aligned with both realistic and meaningful self-regulatory objectives (Campolo et al. 2017).
The second characteristic of an effective self-regulatory system is that the individual researchers and developers in the self-regulating industry have intrinsic motivation to pursue collective-action solutions. In a 2016 paper addressing the development of beneficial AI, Seth Baum distinguishes between the role of extrinsic and intrinsic factors in motivating a culture of safety in AI research and application (Baum 2016). The success of extrinsic measures like regulation depends heavily “on how AI developers react to the measures”, while intrinsic motivation is a better predictor of a sustained commitment to shared objectives (Baum 2016; Campolo et al. 2017).
The third characteristic of an effective self-regulatory system is that it needs an independent and authoritative governing body. In both the chemical industry and the nuclear energy industry, the self-regulatory system is coordinated by an independent governing body. While the RC and the INPO receive funding from their members, they integrate third party expertise from academics and NGOs. This improves their access to expertise, and conveys their commitment to transparency and inclusion, which increases the authority of the governing bodies (Cohent and Sundararajantt 2017; A. A. King and Lenox 2000). In the case of the INPO, it reported on deficiencies at a Philadelphia nuclear plant in the 1980s, which resulted in legal changes and the dismissal of top executives at the plant (Cohent and Sundararajantt 2017, 126). This established the INPO’s social legitimacy and independent reputation, which has proven invaluable in their continued ability to self-regulate the American nuclear industry (Cohent and Sundararajantt 2017).
The fourth characteristic of an effective self-regulatory system is internal and external transparency. The self-regulatory system needs to have the means to provide external transparency by collecting and disseminating information on the alignment of its members with their shared principles (Gunningham and Rees 1997). Moreover, to gain and retain social legitimacy, the self-regulatory system needs to be transparent about its own practices to the wider public, sharing information on the processes of principle-creation, information-gathering and methods of decision making (Cohent and Sundararajantt 2017). A high degree of transparency is seen as one of the major factors behind the success of the INPO (Gunningham and Rees 1997).
The fifth characteristic of an effective self-regulatory system is the presence of credible enforcement mechanisms. An empirical investigation of Responsible Care has shown that many members only joined the self-regulatory system for symbolic reasons, “free-riding” on the efforts of other members while doing little to comply with common standards (A. A. King and Lenox 2000; Lenox 2006). Credible enforcement mechanisms effectively shape the incentives for corporations to engage in collective action. In their absence, a self-regulatory system will be less likely to overcome the prisoner’s dilemma dynamic between AI development speed and development safety (Cohent and Sundararajantt 2017; Lenox 2006; Sinclair 1997).
However, compliance and enforcement in a self-regulatory context need to take into account concerns about anticompetitive behaviour. Compliance with a self-regulatory system’s standards needs to be feasible and affordable for its members, otherwise it could serve as a mechanism to exclude or target competitors and new entrants. Moreover, existing anti-trust regulation means corporations participating in a self-regulatory system often don’t have the option to coerce others through fines or sanctions (Barnett and King 2008; Lenox 2006). As a result, an effective self-regulatory system can use alternative “soft” enforcement methods, also known as costs of non-compliance (Cohent and Sundararajantt 2017). These can include reputational naming-and-shaming, or the threat of disintegration of the self-regulatory system (Barnett and King 2008).
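Extending the hypothetical payoff sketch from Section 2.3, the code below shows how a credible cost of non-compliance, for example reputational naming-and-shaming, can shift the equilibrium. The penalty values are again purely illustrative: the point is only that a sufficiently large and credible cost makes mutual safety the stable outcome.

```python
# Hypothetical base payoffs, as in the earlier sketch.
BASE = {
    ("safe", "safe"): (3, 3),
    ("safe", "fast"): (0, 4),
    ("fast", "safe"): (4, 0),
    ("fast", "fast"): (1, 1),
}
OPTIONS = ("safe", "fast")

def with_penalty(penalty):
    # Subtract a cost of non-compliance from any player choosing "fast".
    return {(r, c): (pr - penalty * (r == "fast"), pc - penalty * (c == "fast"))
            for (r, c), (pr, pc) in BASE.items()}

def nash_equilibria(payoffs):
    def is_nash(r, c):
        row_ok = all(payoffs[(r, c)][0] >= payoffs[(alt, c)][0] for alt in OPTIONS)
        col_ok = all(payoffs[(r, c)][1] >= payoffs[(r, alt)][1] for alt in OPTIONS)
        return row_ok and col_ok
    return [(r, c) for r in OPTIONS for c in OPTIONS if is_nash(r, c)]

print(nash_equilibria(with_penalty(0)))  # [('fast', 'fast')] -- no enforcement
print(nash_equilibria(with_penalty(2)))  # [('safe', 'safe')] -- credible cost of non-compliance
```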
4. Methodology
This dissertation uses the method of an idiographic case study in combination with content analysis to develop a nuanced, empirically rich and context-specific account of the creation and operation of the Partnership on AI (PAI), in order to assess whether it has the characteristics of an effective self-regulatory system.
The case study is idiographic because the investigation will only yield insight into this specific case (Bengtsson 2016; Gerring 2006). As John Gerring describes in his article examining the methodology of idiographic cases, which he calls single-outcome cases, these are “studies that investigate a bounded unit in an attempt to elucidate a single outcome” (Gerring 2006, 710). It is important to note that this is not a nomothetic case study, which seeks to generalise from a particular case to a broader population (Bengtsson 2016). Instead, I seek to add to the literature on AI policy and safety by engaging in an exploratory investigation of the PAI, and by applying self-regulation theory to the “new context” of the AI industry (Halperin and Heath 2017, 215).
The case of the Partnership on AI was chosen because it is the AI industry’s most prominent and representative attempt at creating shared standards and governing models for the development of safe AI. The AI industry is relatively new, and does not yet have a formalized system for self-regulation. As a result, unlike the cases of the American nuclear and chemical industries, there is little empirical data relating to non-compliance and industry performance with which to conduct relevant quantitative statistical analyses, and no competing corporate-led self-regulatory systems against which to conduct a comparative analysis. Nevertheless, as conveyed in the introduction and literature review, the question of whether the most prominent attempt at coordination on AI safety in the AI industry is effective is an important one, and “is worth studying even if very little information is available” (G. King, Keohane, and Verba 1994, 6). Therefore, I believe the choice of a case study to determine whether the Partnership on AI has the characteristics of an effective self-regulatory system is a contribution to the AI policy and safety literature, and an appropriate method for answering the research question.
This investigation draws on a variety of written sources concerning the creation and operation of the Partnership on AI. In order to evaluate these sources and establish an accurate and holistic understanding of the PAI, I have used the method of content analysis. This qualitative research method facilitates the objective and systematic analysis of written data, allowing me to make replicable and valid inferences about specific phenomena (Bengtsson 2016). I use a combination of latent and manifest content analysis in my case study (Bengtsson 2016), developing a latent thematic understanding of the sources while using manifest segments of text to support my description and argumentation.
The stages of content analysis I have used are the decontextualization, recontextualization, categorisation and compilation of the written sources. These stages have enabled me to implement a deductive coding scheme to collect and organise data relating to the five characteristics of effective self-regulatory systems in the context of the PAI. I developed this coding scheme during my initial engagement with the self-regulation literature, and updated it as I engaged with the written sources relating to the PAI. I have built up my analysis from meaning units of at most five sentences of written data, to content areas, categories and themes that correspond with the five characteristics of effective self-regulatory systems.
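As an illustration of the deductive coding step described above, the Python sketch below splits a written source into meaning units of up to five sentences and assigns each unit to categories corresponding to the five characteristics of effective self-regulation. The keyword codebook and the sample text are hypothetical stand-ins, not my actual coding scheme or data.

```python
import re

# Hypothetical deductive codebook: category -> indicative keywords.
CODEBOOK = {
    "principles_and_objectives": ["tenet", "principle", "objective"],
    "intrinsic_motivation": ["values", "culture of safety", "motivation"],
    "independence_and_authority": ["independent", "board", "governance"],
    "transparency": ["transparent", "publish", "minutes"],
    "enforcement": ["enforce", "sanction", "compliance"],
}

def meaning_units(text, max_sentences=5):
    # Split into sentences, then group into meaning units of at most five sentences.
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    return [" ".join(sentences[i:i + max_sentences])
            for i in range(0, len(sentences), max_sentences)]

def code_unit(unit):
    # Assign every category whose keywords appear in the meaning unit.
    lowered = unit.lower()
    return [category for category, keywords in CODEBOOK.items()
            if any(keyword in lowered for keyword in keywords)]

sample = ("The Partnership will not seek to enforce its tenets. "
          "Members endeavour to uphold shared principles and publish research openly.")
for unit in meaning_units(sample):
    print(code_unit(unit), "->", unit)
```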
Throughout this analysis, I have attempted to maintain a neutral perspective in the process of coding the data, and used a variety of primary and secondary sources to triangulate the insights from the data and increase its reliability (Halperin and Heath 2017). Nevertheless, this type of qualitative research requires the recognition of my positionality and subjectivity as a researcher, as other researchers may choose to develop different codes in response to the data, or apply similar codes to different pieces of data. This influenced my choice to approach this case study in an idiographic style, as this methodology depends on my interpretation of the data. Nevertheless, the systematic use of content analysis in combination with triangulation through different data sources provides a relatively objective foundation for the analysis of the facts relating to the PAI. The limitations of this approach are discussed in my evaluation.
My content analysis is based on written sources relating to the creation and operation of the Partnership on AI from September 2016 until August 2018, in order to cover the full range of its existence while allowing sufficient time to complete the research and conduct a thorough analysis. The data has been gathered using a combination of prior knowledge of the literature, the Partnership on AI website, Google Search, UCL Explore and Google Scholar. The case study and content analysis are based on data from the following sources:
- Website content and press releases from the PAI.
- PAI press releases on their AI strategy.
- Interviews and personal statements relating to AI by member executives.
- Evidence submitted by the PAI or its members during public inquiries on AI.
- Media, NGO and academic publications concerning the PAI.
A good answer to the research question will determine whether the Partnership on AI has the characteristics of an effective self-regulatory system. In order to determine this, I will use content analysis of the written sources to evaluate whether they provide sufficient evidence to establish that a characteristic can be considered either present or not present in the structure and operation of the PAI.
5. Analysis of the Partnership on AI
On the 28th of September 2016, the Partnership on AI for the Benefit of People and Society was announced by representatives from Amazon, DeepMind, Facebook, Google, IBM and Microsoft. These corporations are among the world’s largest, and each is a leader in the AI industry (Horvitz and Suleyman 2016). While considered an effort at self-regulation by some in academia and the media (Cath et al. 2016), the founding members avoided the word self-regulation in their announcements. Instead, they expressed that the PAI was established “to study and formulate best practices on AI technologies, to advance the public’s understanding of AI, and to serve as an open platform for discussion and engagement about AI and its influences on people and society” (Partnership on AI 2018d). Regardless of the specific language, this case study seeks to identify whether the PAI has the characteristics of an effective self-regulatory system. It is the most prominent global organisation dedicated to the responsible development and application of AI, and was founded by the leading companies in the AI industry (Hummel 2017). As such, it is valuable to investigate whether it constitutes a viable solution to the challenges of AI development regulation.
Since its inception, the goal of the PAI has been to establish a “one-of-a-kind community of diverse, global voices” that can balance industry interests with the broader goals of responsible and inclusive AI development (Partnership on AI 2018d). In January 2017, the PAI announced that Apple, whose omission as the largest American corporation in the technology industry had been noted by the media (Burgess 2017), had joined the organisation as a founding member (Hern 2016). Also included in the January 2017 member expansion were three leading non-profit AI research and development organisations, the Association for the Advancement of Artificial Intelligence, the American Civil Liberties Union (ACLU), and OpenAI, who each contributed members to the board of directors. Subsequent membership expansions occurred in May 2017, October 2017 and August 2018, resulting in a partnership with 71 members from the corporate, not-for-profit and academic sectors.
5.1 Principles and Objectives
The characteristic of an effective self-regulatory system that the PAI most clearly has is a detailed and shared set of principles and objectives for the safe development and application of AI. From its inception, the founding members have worked together with the increasing number of non-profit members to define clear goals, thematic principles, and tenets for the safe and responsible development of AI (Burgess 2017). Specifically, its goals are to develop and share best practices, advance public understanding, provide an open and inclusive platform for discussion and engagement, and identify and foster aspirational efforts in AI for socially beneficial purposes (Partnership on AI 2018d). The PAI strives to achieve these goals by working on specific thematic pillars, which were developed during its first general meeting in October 2017 (Hummel 2017). The PAI has created working groups for the first three thematic pillars, “AI, Labor and the Economy”, “Safety-Critical AI”, and “Fair, Transparent and Accountable AI” (Partnership on AI 2018a, 2018c, 2018b). Each group is co-chaired by two representatives from its corporate and non-corporate members, and aims to involve many relevant participants from within the membership. These working groups are directly engaged with many of the most pressing issues that have been identified as main sources of public risk in the AI policy literature, and demonstrate the potential efficacy of an industry-led approach to address the Pacing Problem.
On its website, the PAI claims that its members “believe in and endeavour to uphold” eight tenets. This formulation of collective principles is an essential step in the creation of an effective self-regulatory system. Its normative effect can be seen in the subsequent adoption of similar AI principles by its members, such as Microsoft in 2017, and Google in 2018 (Microsoft 2017; Pichai 2018). However, these tenets, like the separate AI principles adopted by its members, are criticised for being vague and open to interpretation (Newcomer 2018). Another criticism concerns the monitoring of compliance with these principles (Newcomer 2018). Broad as they may be, the tenets create benchmarks for the analysis of the PAI as a viable solution to the challenges of AI regulation.
5.2 Intrinsic Motivation
The second characteristic of an effective self-regulatory system concerns the intrinsic motivation of those who created the system. This characteristic distinguishes between a reactionary, extrinsically motivated foundation of the PAI, and a system that was created because of the intrinsic motivations of the founding members. The analysis shows that the foundation of the PAI was rooted in both the intrinsic motivation of the founding members, especially the individuals that took on leadership roles within the PAI, and in the extrinsic context of public opinion and the risk of public regulation. However, in the two years since its founding, it has become clear that this intrinsic motivation has not always been shared by the wider AI development and application teams operating within the founding members.
In analysing the content of the press releases, personal statements and media reactions to the founding of the PAI, a recurring theme is the focus on shared values regarding AI development (Hern 2016; Mannes 2016; Horvitz and Suleyman 2016; Statt 2016). Terah Lyons, the PAI’s first executive director, specifies that the PAI “was established by the engineering leads of the world’s six largest tech companies… (who) have been involved from the outset in making sure that we hold core principles and values at the center” (Hummel 2017). Moreover, there appears to be a genuine recognition by the industry that ensuring safety in the development of AI “supercedes competitive concerns” (Gershgorn 2016). This creates the impression that there was a high degree of intrinsic motivation among the founding members and their individual researchers to pursue collective action solutions in the safe development and application of AI.
Nevertheless, further analysis of the sources demonstrates that this intrinsic motivation is also related to the extrinsic context within which the PAI was founded. The announcements by the PAI and its founding members repeatedly focus on the need to better inform public opinion and public policymaking with regard to AI, giving the impression that the PAI was founded in part to pre-empt reactionary regulation resulting from public concern about AI (Leonhard 2016; Knight 2016; Mannes 2016). Mustafa Suleyman, co-founder of DeepMind and initial co-chair of the PAI, recognized this in a call with reporters before the launch, stating that the positive impact of AI depends on “the level of public engagement, transparency, and ethical discussions that takes place around it” (Gershgorn 2016).
In combination with the observed regard for values and principles in AI development, this extrinsic context does not seem to diminish the potential for the PAI to be an effective self-regulatory system for the AI industry. However, since its founding there have been several incidents where founding members have not upheld the PAI tenets, which calls into question the extent of intrinsic motivation within those organisations. In turn, these incidents, discussed in more detail below, relate to the analysis of all the other characteristics of an effective regulatory system in the case of the PAI.
5.3 Independence and Authority
The third characteristic of an effective self-regulatory system is that it has an independent and authoritative governing body. While the PAI was founded by the six leading companies in the AI industry, it has since grown to include 65 more organisations, more than half of which “comprise civil society organisations, advocacy organisations, academic research laboratories and institutions” (Lyons 2018). Since its founding, the PAI has made an explicit commitment to “share leadership with independent third-parties”, and have an “equal representation of corporate and non-corporate members on the board” (Partnership on AI 2018d). In January 2017, it lived up to this commitment when it announced that six independent board members would join six board members drawn from the founding members (Partnership on AI 2017).
This commitment to diversity has been coded under the category of social legitimacy. The PAI continuously makes explicit its belief that in order to be independent and authoritative, it needs to allow “technologists and activists to take part on an equal footing” (Suleyman 2018), and include “varying perspectives in a structure that ensures balanced governance by diverse stakeholders” (Lyons 2018). The leadership and the board recognize that the value of the PAI lies in its ability to “remain an independent organisation, (which is) not a fig leaf for the companies who established this organisation” (Hummel 2017). This independence is critical to establish trust with the public and government, which in turn is “critical to the adoption of AI and (the) realisation of its benefits” (Lyons 2018).
Nevertheless, the analysis of the PAI identifies three issues that could compromise its independence and authority. The first is related to its ability to act with authority in pursuit of its objectives. While its current actions and communiques seem to reflect consensus amongst its diverse membership, the broad range of different organisations makes it difficult to maintain a common “direction of travel” (Temperton 2017). The RC and INPO owed a part of their success to the fact that a small number of major companies can coordinate their behaviour effectively (Cohent and Sundararajantt 2017). The PAI could demonstrate its independence if it takes a position that does not reflect “a perfect consensus perspective of (its) constituent members” (Hummel 2017).
Furthermore, its members are not representative of the entire AI industry. While the PAI espouses a membership that includes organisations from around the world (Lyons 2018), it currently has no for-profit or non-profit Chinese members, other than a think-tank and academic institution in Hong Kong (Partnership on AI 2018d). This omission affects its ability to act with authority in the global AI industry.
Finally, on the PAI website and in early announcements, its funding is described as consisting of “charitable contributions in the form of membership dues paid by its for-profit Partners and contributions and grants from non-profit organizations and foundations” (Partnership on AI 2018d). However, in a 2017 interview given after the first general meeting of all PAI members, executive director Terah Lyons states that the PAI is “collectively financed by every for-profit partner in our organization” (Hummel 2017), making no mention of non-profit grants or contributions. This jeopardizes the independence of the PAI; any conflict between its course of action and that of its paying members could affect its funding.
5.4 Internal and External Transparency
The fourth characteristic of an effective self-regulatory system is that it has both internal and external transparency. The analysis of the PAI identified a clear commitment to providing transparency in its internal work. Transparency, openness and accountability are frequently referred to as essential for both the safe development of AI, and the successful operation of the PAI (Horvitz and Suleyman 2016; Lyons 2018; Partnership on AI 2018d). To achieve this internal transparency, the PAI has expressed its intent to publish its research with an open license, and, crucially, publish minutes of its meetings (Burgess 2017; Mannes 2016). However, this intent is difficult to evaluate, in part because the PAI has “raised eyebrows over a perceived lack of activity” (Temperton 2017). The thematic pillar working groups have only recently been announced, accompanied by high-level PDFs outlining their research interests. There appears to be no other published research, and significantly, no published minutes of its meetings.
This lack of transparency with regard to its internal work is compounded by the deliberate choice not to foster external transparency with regard to its tenets. Beyond the statement that its members will “endeavour to uphold the tenets” (Partnership on AI 2018f), the PAI foregoes a potentially effective self-regulatory role by leaving the question of compliance entirely up to its members.
5.5 Credible Enforcement Mechanisms
Just as the PAI has no structure for monitoring whether its members uphold its tenets in their development and application of AI, it has chosen not to develop credible enforcement mechanisms. This creates significant difficulty in its ability to foster collective action in the face of the prisoner’s dilemma between AI development speed and safety. In the co-chairs’ original announcement and subsequent press announcements, Eric Horvitz and Mustafa Suleyman specified that “there is no explicit attempt at the notion of self-regulation” (Burgess 2017), and that the Partnership on AI “will not seek to enforce its guidelines” (Knight 2016).
The recurring issue of missing enforcement mechanisms was coded with the category of competition. Since many of the for-profit members of the PAI are in “constant competition with each other to develop the best products and services powered by machine intelligence” (Mannes 2016), they lack the ability to create financial sanctions for each other because of anti-trust legislation (Campolo et al. 2017). Moreover, they are likely to avoid creating enforcement mechanisms that hinder their development and application of AI if there is a chance that they are later invalidated for being anticompetitive and limiting innovation opportunities for smaller entities inside the AI industry (Datta 2018). This could explain why the PAI chose to avoid association with self-regulation at its conception.
Nevertheless, even if they are not explicitly designed to be enforced, the dissemination of the PAI’s tenets, thematic pillars, and future research constitute potential soft compliance mechanisms. The PAI aims to promote change by setting the right example, which it hopes the industry will follow (Mannes 2016; Statt 2016). Executive director Terah Lyons expressed this sentiment in an interview after the first general member meeting in October 2017, stating that she hopes “that because those (governance) models will be a result of collective input driven by the companies themselves, they will take seriously the tenets that they decide to ascribe to themselves” (Hummel 2017).
5.6 Tenet Compliance
An interesting case of whether this soft compliance mechanism works in the case of the PAI can be assessed by investigating its tenets. The formulation of some of the tenets “seem to veer into self-regulation” (Statt 2016), particularly sub-clauses on the sixth tenet, which describe specific ways in which members “will work to maximize the benefits and address the potential challenges of AI technologies” (Partnership on AI 2018f). These include the protection of privacy and the security of individuals, as well as “opposing (the) development and use of AI technologies that would violate international conventions or human rights” (Partnership on AI 2018f).
Unfortunately, it’s clear that in the two years since the PAI was founded, several of its founding members have been in conflict with its tenets.
An internal leak in March 2018 revealed that Google was working with the Pentagon to “provide cutting-edge artificial intelligence technology for drone warfare” (Fang 2018), prompting protests from its employees and the wider epistemic community of AI researchers, many of whom have signed a pledge against the development of lethal autonomous weapons, which are likely to be created in the form of drones (Future of Life Institute 2018).
A July 2018 investigation published by the ACLU (a PAI member) revealed that a visual recognition programme sold by Amazon falsely matched 28 members of the US congress with mugshots in its database (Snow 2018). Moreover, the false matches were “disproportionally of people of colour”, demonstrating the biases of a system that is in use by law enforcement departments across the US (Snow 2018).
Finally, in July 2018, internal IBM documents demonstrated that its Watson Health service often gave erroneous cancer treatment recommendations (Ross and Swetlitz 2018). While these recommendations were based on test data and not applied to real patients, it demonstrates the risks to personal security that can arise from AI techniques.
In each of these cases, there has been no response from the PAI. The PAI has no structural means to prevent its members from engaging in AI development that conflicts with its tenets. However, it could exercise soft compliance mechanisms, in the form of condemnation or even the threat of disintegration, when its members do not uphold its tenets. Its success as an organisation is predicated upon the involvement of a diverse range of organisations from the non-profit sector, many of which are actively campaigning against some of the decisions and projects that the aforementioned founding members have been involved in. Their membership of the PAI gives it significant social legitimacy, ostensible independence and credible authority within the industry. Executive director Lyons recognizes the hesitancy that many non-profit members expressed upon joining, and thereby legitimizing, the founding members in their Partnership. She posits that “if they feel the need to, then they will abandon us” (Hummel 2017).
The examples of Google, Amazon and IBM demonstrate that some of the founding members were at times in conflict with the PAI tenets, yet this drew no condemnation from the PAI and led to no withdrawals from the Partnership. This indicates that the PAI lacks both hard and soft enforcement mechanisms, and therefore cannot effectively steer firms away from prioritizing development speed over development safety.
6. Evaluation
The Partnership on AI’s executive director recognizes that the organisation is “uniquely positioned to be highly impactful in developing and disseminating governance models for its members”, and that its “members better understand the technology that they’re working on than the government” (Hummel 2017). This is an expression of the belief that regulators lag behind innovators, and that the industry is best suited to govern itself. I undertook this dissertation to assess whether the PAI is indeed a good candidate for creating new governance models for the AI industry. The findings of this case study indicate that the PAI does not have the characteristics of an effective self-regulatory system.
The case study demonstrates that at its founding the PAI displayed a significant degree of intrinsic motivation, that it was nominally committed to internal transparency, that it developed a clear set of goals and principles, and that it created a structure for a diverse and authoritative governing body. This combination of expertise, commitment and structure could enable it to effectively anticipate and mitigate the unique regulatory challenges of AI. By combining diverse groups of AI researchers and professionals in its working groups, it can mitigate the Pacing Problem by keeping regulatory solutions closely tied to innovations, while avoiding the Uncertainty Paradox, as these groups can set standards for innovations before they become widely adopted. Adherence to its tenets could help anticipate ex-ante issues in the development of AI, and establish clear frameworks for liability in the case of ex-post harm done by AI systems. Furthermore, its commitment to membership diversity and to equal leadership between the for-profit and non-profit sectors grants it the social legitimacy to engage in the process of self-regulation.
However, the PAI does not have a monitoring structure to establish external transparency, and does not check whether its members are developing and applying AI in compliance with its tenets. It also lacks both hard and soft enforcement mechanisms, and so cannot compel its members to comply with its tenets in pursuit of their shared objectives. Lacking these characteristics, the PAI is unlikely to be able to improve the sub-optimal Nash equilibrium of AI development: it does not have the structural capacity to change the incentives in the prisoner’s dilemma between AI speed and AI safety. As such, its members can free-ride, treating membership as symbolic while continuing to pursue fast AI development, with the result that the public risk from AI is not minimized.
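To make this collective-action problem concrete, the stylized payoff matrix below illustrates the dilemma described above. The numerical payoffs are hypothetical and serve only to show the structure of the incentives; they are not derived from the case study.

```latex
% Stylized payoff matrix for the AI development dilemma (payoffs are hypothetical).
% Each cell gives (payoff to Firm A, payoff to Firm B); higher is better.
\[
\begin{array}{c|cc}
                          & \text{B: develop safely} & \text{B: develop fast} \\ \hline
\text{A: develop safely}  & (3,\,3)                  & (1,\,4)                \\
\text{A: develop fast}    & (4,\,1)                  & (2,\,2)
\end{array}
\]
% Developing fast strictly dominates for both firms, so the unique Nash
% equilibrium is (fast, fast), even though mutual safe development would leave
% both firms, and the public, better off: a textbook prisoner's dilemma.
```

An enforcement mechanism matters precisely because it can change these payoffs, for instance by making defection from the tenets costly enough that safe development becomes the preferred strategy for every member.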
6.1 Implications
In light of the chosen method of analysis, I recognize that the ability of this dissertation to generalise from the specific case of the PAI to the AI industry as a whole is limited. There are other nascent attempts at coordination and cooperation within the industry, such as the Asilomar AI Principles for the development and application of AI, initiated by the Future of Life Institute (2017), and the Global Initiative on Ethics of Autonomous and Intelligent Systems, created by the Institute of Electrical and Electronics Engineers (IEEE 2018). Nevertheless, I have chosen to develop an in-depth analysis of the most prominent attempt at standard setting, governance-model creation and public engagement in the AI industry, founded by the leaders of that industry.
The PAI does not have the characteristics it needs to resolve the prisoner’s dilemma dynamic that prioritizes AI development speed over AI safety, and this limits its ability to effectively minimize the public risk of AI development. Some corporations in the AI industry recognize this failure of collective action, and are starting to make tentative calls for government regulation of specific AI applications, such as facial recognition software (Smith 2018). However, in the absence of a solution to the prisoner’s dilemma in AI development at the state level, it is unlikely that traditional regulatory methods will be able to minimize the public risk from AI development either. A different, and unreliable, route is ethical intervention on behalf of AI safety by researchers and employees within the leading AI companies, as happened in the case of Google’s work with the Pentagon (Fang 2018). On the whole, this means that the industry continues to develop AI without an effective solution to its regulatory challenges, increasing the risk to public welfare.
6.2 Validity
The validity of this dissertation can be evaluated using the criteria of internal and external validity, as well as replicability. These criteria matter because they seek to ascertain whether the findings are a truthful reflection of the case under consideration, and whether the same results would be obtained if the investigation were repeated (Bengtsson 2016).
The internal validity of the case study is improved by the maintenance of methodological rigour in the analysis of the written sources relating to the PAI. The combination of the four steps of content analysis with a large variety of written sources seeks to identify relatively objective outcomes and characteristics of the PAI. Similarly, through an extensive engagement with the self-regulation literature I have identified commonly recognized characteristics of effective self-regulatory systems. While it is possible that a characteristic, or variable, omitted from the analysis would make the PAI a viable solution to the challenges of AI regulation, the comprehensive analysis of an idiographic case study reduces this likelihood.
This choice of method reduces the inferential power and external validity of the case. Because I have sought to understand the particular characteristics of the PAI, I face a high degree of uncertainty in making claims about other regulatory systems in the AI industry. This limitation is compounded by the risk of selection bias: I have chosen the most prominent example of coordination in the AI industry, but a different case may have led to a different conclusion about the AI industry’s ability to overcome the prisoner’s dilemma in AI development. Nevertheless, this particular investigation of the PAI is of value to the literature on AI safety and policy by itself, irrespective of its ability to support claims about the state of self-regulation in the AI industry in general.
Finally, the replicability of my investigation is enabled by the clear description of my methodology and by my record of sources in the bibliography. I believe that the case study approach, in combination with content analysis, can reveal verifiable and objective facts about the characteristics of the case at hand. However, I recognize that a common methodological critique of content analysis concerns researcher subjectivity: it is a less formalised and less researcher-independent method than many alternatives. Since this dissertation consists of a single case study, the validity and reliability of my argument could be improved by assessing this case with different methods and in different contexts in future research.
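As an illustration of how the categorisation step of a content analysis could, in principle, be made more formalised and researcher-independent, the sketch below codes a source text against indicator terms for the five characteristics used in this dissertation. The keyword lists and the sample text are hypothetical assumptions for the sake of the example; they are not the coding frame actually applied in this study.

```python
# Illustrative sketch: a keyword-based coding pass for the categorisation step of
# a content analysis. The indicator terms and the sample text are hypothetical,
# not the coding frame used in this dissertation.

import re
from collections import Counter

# Five characteristics of effective self-regulation, each with example indicator terms.
CODING_FRAME = {
    "principles_and_objectives": ["tenet", "principle", "objective", "mission"],
    "intrinsic_motivation": ["commitment", "responsibility", "voluntary"],
    "transparency": ["publish", "report", "disclose", "open"],
    "independence_and_authority": ["board", "independent", "non-profit"],
    "enforcement": ["sanction", "expel", "monitor", "comply"],
}


def code_document(text: str) -> Counter:
    """Tally how often each characteristic's indicator terms appear in one source."""
    tokens = re.findall(r"[a-z\-]+", text.lower())
    counts = Counter()
    for characteristic, keywords in CODING_FRAME.items():
        counts[characteristic] = sum(tokens.count(keyword) for keyword in keywords)
    return counts


if __name__ == "__main__":
    # A hypothetical excerpt standing in for one written source about the PAI.
    sample = ("The partners commit to publish research and report on progress, "
              "but the charter names no sanction for members that fail to comply.")
    print(code_document(sample))
```

A fully automated coding pass of this kind would not replace interpretive judgement, but making the coding rules explicit in this way is one route to addressing the subjectivity critique noted above.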
7. Conclusion
The development of AI is set to transform society. As a general-purpose technology, its continued development and application will accelerate innovation across all dimensions of human endeavor. These changes can both provide tremendous improvements to public welfare and create unprecedented public risk. To minimize that risk, it is important to understand how the development and application of AI is currently being governed.
In this dissertation, I have investigated whether the Partnership on AI has the characteristics of an effective self-regulatory system. I recognize that the Partnership on AI was not designed to be a self-regulatory system. Nevertheless, through my review of the literature on regulation and AI policy, I have developed the argument that a self-regulatory approach may be the most effective and expedient solution to the challenges of AI regulation. Assessing the AI industry’s most prominent attempt at cooperation using a framework based on characteristics of effective self-regulatory systems provides an interesting perspective on how AI is currently being governed.
I’ve drawn on insights from the literature on regulation and AI policy to identify that the technical nature of AI exacerbates the traditional issues in the regulation of innovation known as the Pacing Problem and the Uncertainty Paradox. Furthermore, I’ve demonstrated that the competitive dynamic of AI development leads to a prisoner’s dilemma for both companies and states, wherein both are incentivized to prioritize the fast development of AI instead of the safe development of AI.
The combination of these two aspects of AI regulation makes effective self-regulation a suitable solution, as experts in the industry can keep regulation connected to innovation, while industry collective action can change the incentives in the prisoner’s dilemma of AI development. Subsequently, I used the examples of self-regulation in the American chemical and nuclear industries to identify five characteristics of effective self-regulatory systems, and conducted a case study of the Partnership on AI to determine whether it has the potential to be an effective solution to the challenges of AI regulation.
The Partnership on AI does not have all the characteristics of an effective self-regulatory system. In particular, it is structurally unable to resolve the prisoner’s dilemma of AI development. Since the PAI is the most prominent example of coordination and cooperation by the companies leading the AI industry, this finding does not bode well for the minimization of public risk in the development of AI, and it suggests that a different regulatory response is needed.
My findings call for further research on the topic of regulation and AI development. In particular, it would be valuable to investigate other instances of self-regulation or self-governance in the AI industry, to identify the features of an effective international regulatory response, and to conduct further analyses of the Partnership on AI as its first three working groups complete and disseminate their work.
8. Bibliography
Allen, G. C., Horowitz, M. C., Kania, E. B., and Scharre, P. 2018. Strategic Competition in an Era of Artificial Intelligence. Center for a New American Security.
Allen, J. and West, D. 2018. How Artificial Intelligence Is Transforming the World. Brookings.
Armstrong, S., Bostrom, N. and Shulman, C. 2013. Racing to the Precipice: A Model of Artificial Intelligence Development. Future of Humanity Institute.
Barnett, M. L., and King, A. A. 2008. Good Fences Make Good Neighbors: A Longitudinal Analysis of an Industry Self-Regulatory Institution. Academy of Management Journal. 51:5, pp 1150–1170.
Baum, S. D. 2016. On the Promotion of Safe and Socially Beneficial Artificial Intelligence. Global Catastrophic Risk Institute.
Bengtsson, M. 2016. How to Plan and Perform a Qualitative Study Using Content Analysis. NursingPlus Open. 2, pp 8–14.
Bennett Moses, L. 2013. How to Think about Law, Regulation and Technology: Problems with “Technology” as a Regulatory Target. Law, Innovation and Technology. 5:1, pp 1–20.
Birnbaum, M., Romm, T. and Timberg, C. 2018. Europe, Not the U.S., Is Now the Most Powerful Regulator of Silicon Valley. The Washington Post. Available at: https://www.washingtonpost.com/amphtml/business/technology/europe-not-the-us-is-now-the-most-powerful-regulator-of-silicon-valley/2018/05/25/f7dfb600–604f-11e8–8c93-8cf33c21da8d_story.html? (accessed 05–07–18).
Black, J. 2008. Forms and Paradoxes of Principles-Based Regulation. Capital Markets Law Journal. 3:4, pp 425–57.
Blind, K. 2012. The Impact of Regulation on Innovation. Nesta Working Paper. 12:2.
Bostrom, N., Dafoe, A. and Flynn, C. 2017. Policy Desiderata in the Development of Superintelligent AI. Future of Humanity Institute.
Brownsword, R. 2017. From Erewhon to AlphaGo: For the Sake of Human Dignity, Should We Destroy the Machines? Law, Innovation and Technology. 9:1, pp 117–53.
Brownsword, R. and Somsen, H. 2009. Innovation and Technology: Before We Fast Forward — A Forum for Debate. Law, Innovation and Technology. 1:1, pp 1–73.
Brundage, M., Avin, S., Clark, J., Toner, H., Eckersley, P., Garfinkel, B., Allan Dafoe, et al. 2018. The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation. Future of Humanity Institute, University of Oxford, Centre for the Study of Existential Risk, University of Cambridge, Center for a New American Security, Electronic Frontier Foundation, OpenAI.
Brundage, M, and Bryson, J. 2017. Smart Policies for Artificial Intelligence. Available at: https://arxiv.org/pdf/1608.08196.pdf (accessed 05–07–18).
Burgess, M. 2017. Apple Joins Facebook, Google, Microsoft, IBM and Amazon in OpenAI. Wired. Available at: https://www.wired.co.uk/article/ai-partnership-facebook-google-deepmind (accessed 10–07–18)
Butenko, A, and Larouche, P. 2015. Regulation for Innovativeness or Regulation of Innovation?. Tilburg Law School.
Calo, R. 2017. Artificial Intelligence Policy: A Primer and Roadmap.Available at: https://ssrn.com/abstract=3015350 (accessed 05–07–18).
Campolo, A., Sanfilippo, M., Whitakker, M. and Crawford, K. 2017. AI Now 2017 Report. AI Now.
Carter, R.B. and Marchant, G. E. 2011. Principles-Based Regulation and Emerging Technology. In The Growing Gap Between Emerging Technologies and Legal-Ethical Oversight, edited by G.E. Marchant et al. 1st ed., pp 157–66. Springer.
Cath, C., Wachter, S., Mittelstadt, B., Taddeo, M., and Floridi, L. 2016. Artificial Intelligence and the “Good Society”: The US, EU, and UK Approach. Oxford Internet Institute, Alan Turing Institute.
Cohen, M., and Sundararajan, A. 2017. Self-Regulation and Innovation in the Peer-to-Peer Sharing Economy. University of Chicago Law Review Online. 82:1, pp 116–33.
Collingridge, D. 1980. The Social Control of Technology. London: Frances Pinter Ltd.
Danaher, J. 2015. Is Effective Regulation of AI Possible? Eight Potential Regulatory Problems. Philosophical Disquisitions. Available at: http://philosophicaldisquisitions.blogspot.com/2015/07/is-effective-regulation-of-ai-possible.html (accessed 05–07–18).
Datta, B. 2017. Can Government Keep Up with Artificial Intelligence?. PBS. 2018. Available at: http://www.pbs.org/wgbh/nova/next/tech/ai-government-policy/ (accessed 05–07–18).
Ding, J. 2018. Deciphering China’s AI Dream. Future of Humanity Institute.
Doteveryone. 2018. Lords Communication Committee. The Internet: to regulate or not to regulate? Doteveryone — written evidence (IRN0028).
Dutton, T. 2018. An Overview of National AI Strategies. Politics + AI. Available at: https://medium.com/politics-ai/an-overview-of-national-ai-strategies-2a70ec6edfd (accessed 25–07–18).
Economist. 2018. Google Is Fined €4.3bn in the Biggest-Ever Antitrust Penalty. The Economist. Available at: https://www.economist.com/business/2018/07/21/google-is-fined-eu4.3bn-in-the-biggest-ever-antitrust-penalty (accessed 21–07–18).
Erdélyi, O. J. and Goldsmith, J. 2018. Regulating Artificial Intelligence: Proposal for a Global Solution. AIES Conference.
EU Commission. 2018. Communication from the Commission to the European Parliament, the European Council, the Council, the European Economic and Social Committee and the Committee of the Regions. EU Commission.
Fang, L. 2018. Leaked Emails Show Google Expected Lucrative Military Drone AI Work to Grow Exponentially. The Intercept.
Fenwick, M., Kaal, W. A. and Vermeulen, E. 2016. Regulation Tomorrow: What Happens When Technology Is Faster than the Law?. American University Business Law Review. 6:3.
Future of Life Institute. 2018. Lethal Autonomous Weapons Pledge. Future of Life Institute. Available at: https://futureoflife.org/lethal-autonomous-weapons-pledge/ (accessed 05–07–18).
Gerring, J. 2006. Single-Outcome Studies: A Methodological Primer. International Sociology. 21:5, pp 707–34.
Gershgorn, D. 2016. Facebook, Google, Amazon, Microsoft, and IBM Created a Partnership to Make AI Seem Less Terrifying. Quartz.
Grace, K., Salvatier, J., Dafoe, A., Zhang, B. and Evans, O. 2017. When Will AI Exceed Human Performance? Evidence from AI Experts. Future of Humanity Institute.
Gunningham, N, and Rees, J. 1997. Industry Self-Regulation: An Institutional Perspective. Law & Policy. 19:4, pp 363–414.
Halperin, S, and Heath, O. 2017. Political Research: Methods and Practical Skills. 2nd ed. Oxford: Oxford University Press.
Haufler, V. 2001. A Public Role for the Private Sector: Industry Self-Regulation in a Global Economy. Carnegie Endowment for International Peace. Washington, D.C.
Hern, A. 2016. “Partnership on AI” Formed by Google, Facebook, Amazon, IBM and Microsoft. The Guardian.
HM Government. 2018. Industrial Strategy Artificial Intelligence Sector Deal. BEIS, DCMS.
Horvitz, E, and Suleyman, M. 2016. Introduction from the Founding Co-Chairs. Partnership on AI. 2016. Available at: https://www.partnershiponai.org/introduction-from-the-founding-co-chairs/ (accessed 15–07–18).
Huber, P. 1985. Safety and the Second Best: The Hazards of Public Risk Management in the Courts. Columbia Law Review. 85:2, pp 227–337.
Hummel, P. 2017. “Building Consensus Will Be Challenging” — Interview with Terah Lyons, Executive Director of the Partnership on AI. Medium. 2017. Available at: https://medium.com/@hummel_37837/building-consensus-will-be-challenging-97f82ddd538b (accessed 10–07–18).
IEEE. 2018. The IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems. IEEE Standards Association. 2018. Available at: https://standards.ieee.org/develop/indconn/ec/autonomous_systems.html (accessed 05–07–18).
Future of Life Institute. 2017. Asilomar AI Principles. Future of Life Institute. Available at: https://futureoflife.org/ai-principles/ (accessed 26–07–18).
Kaal, W. A. 2016. Dynamic Regulation for Innovation. In Perspectives in Law, Business & Innovation, edited by Mark Fenwick, Wulf A. Kaal, Toshiyuki Kono, and Erik P.M. Vermeulen. New York: Springer.
Khatchadourian, R. 2015. The Doomsday Invention. The New Yorker. Available at: https://www.newyorker.com/magazine/2015/11/23/doomsday-invention-artificial-intelligence-nick-bostrom (accessed 17–07–18).
King, A. A., and Lenox, M. J. 2000. Industry Self-Regulation without Sanctions: The Chemical Industry’s Responsible Care. The Academy of Management Journal. 43:4, pp 698–716.
King, G., Keohane, R. O. and Verba, S. 1994. Designing Social Inquiry: Scientific Inference in Qualitative Research. Princeton: Princeton University Press.
Knight, W. 2016. Tech Titans Join Forces to Stop AI from Behaving Badly. Technology Review. Available at: https://www.technologyreview.com/s/602483/tech-titans-join-forces-to-stop-ai-from-behaving-badly/ (accessed 02–07–18).
Lenox, M. J. 2006. The Role of Private Decentralized Institutions in Sustaining Industry Self-Regulation. Organization Science. 17:6, pp 677–690.
Leonhard, G. 2016. Partnership on AI: An Open Letter and the Rise of the Robots. Wired. Available at: https://www.wired.co.uk/article/open-letter-to-ai-partnership (accessed 02–07–18)
Loj, G. 2018. Launch of Call for Partnerships for AI Global Governance Commission. AI Global Governance Commission. Available at: https://www.aiglobalgovernance.org/2018/04/26/hello-world/ (accessed 15–07–18).
Lyons, T. 2018. Written Testimony of Terah Lyons — Hearing on Game Changers: Artificial Intelligence Part III-AI and Public Policy. House of Representatives Oversight & Government Reform Committee Subcommittee on Information Technology.
Mannes, J. 2016. Facebook, Amazon, Google, IBM and Microsoft Come Together to Create the Partnership on AI. Tech Crunch. Available at: https://techcrunch.com/2016/09/28/facebook-amazon-google-ibm-and-microsoft-come-together-to-create-historic-partnership-on-ai/ (accessed 05–07–18).
Marchant, G. 2011. Addressing the Pacing Problem. In The Growing Gap Between Emerging Technologies and Legal-Ethical Oversight, edited by G.E. Marchant et al., 1st ed. Springer.
Mcdonald, M. 2008. Securitization and the Construction of Security. European Journal of International Relations. 14:4, pp 563–87.
Microsoft. 2017. Our Approach to AI. Available at: https://www.microsoft.com/en-us/ai/our-approach-to-ai (accessed 05–07–18).
Nesta. 2018. AI for Empowerment. Nesta. Available at: https://www.nesta.org.uk/news/ai-empowerment-nesta-calls-change-direction-ai-survey-shows-public-disquiet/ (accessed 10–07–18).
Newcomer, E. 2018. What Google’s AI Principles Left Out. Bloomberg. Available at: https://www.bloomberg.com/news/articles/2018-06-08/what-google-s-ai-principles-left-out (accessed 05–07–18).
Partnership on AI. 2017. Partnership on AI Board Update. Partnership on AI. Available at: https://www.partnershiponai.org/partnership-on-ai-update/ (accessed 02–07–18).
— — — . 2018a. ‘AI, Labor, and the Economy: Charter’.
— — — . 2018b. ‘Fair, Transparent, and Accountable AI: Charter’.
— — — . 2018c. ‘Safety-Critical AI: Charter’.
— — — . 2018d. ‘The Partnership on AI — About Us’. Available at: https://www.partnershiponai.org/about/#our-work (accessed 02–07–18).
— — — . 2018e. ‘The Partnership on AI — Meet the Partners’. Available at: https://www.partnershiponai.org/partners/ (accessed 02–07–18).
— — — . 2018f. ‘The Partnership on AI — Tenets’. Partnership on AI. Available at: https://www.partnershiponai.org/tenets/ (accessed 02–07–18).
Pichai, S. 2018. AI at Google: Our Principles. Blog.Google. Available at: https://www.blog.google/technology/ai/ai-principles/ (accessed 06–07–18).
Rappert, B. 2011. Pacing Science and Technology with Codes of Conduct: Rethinking What Works. In The Growing Gap Between Emerging Technologies and Legal-Ethical Oversight, edited by G.E. Marchant et al. 1st ed. Springer.
Ross, C., and Swetlitz, I. 2018. IBM’s Watson Recommended “Unsafe and Incorrect” Cancer Treatments. STAT. Available at: https://www.statnews.com/2018/07/25/ibm-watson-recommended-unsafe-incorrect-treatments/ (accessed 27–07–18).
Scherer, M. U. 2016. Regulating Artificial Intelligence Systems: Risks, Challenges, Competencies and Strategies. Harvard Journal of Law & Technology 29:2, pp 353–400.
Schwartz, O. 2018. “The Discourse Is Unhinged”: How the Media Gets AI Alarmingly Wrong. The Guardian. Available at: https://www.theguardian.com/technology/2018/jul/25/ai-artificial-intelligence-social-media-bots-wrong (accessed 27–07–18).
Sinclair, D. 1997. Self-Regulation versus Command and Control? Beyond False Dichotomies. Law and Policy. 19:4, pp 529–559.
Smith, B. 2018. Facial Recognition Technology: The Need for Public Regulation and Corporate Responsibility. Microsoft. Available at: https://blogs.microsoft.com/on-the-issues/2018/07/13/facial-recognition-technology-the-need-for-public-regulation-and-corporate-responsibility/ (accessed 22–07–18).
Snow, J. 2018. Amazon’s Face Recognition Falsely Matched 28 Members of Congress With Mugshots. ACLU. Available at: https://www.aclu.org/blog/privacy-technology/surveillance-technologies/amazons-face-recognition-falsely-matched-28 (accessed 27–07–18).
Statt, N. 2016. Facebook, Google, and Microsoft Team up to Pacify Fears about AI. The Verge. Available at: https://www.theverge.com/2016/9/28/13094668/facebook-google-microsoft-partnership-on-ai-artificial-intelligence-benefits (accessed 08–07–18).
Suleyman, M. 2018. DeepMind’s Mustafa Suleyman: In 2018, AI Will Gain a Moral Compass. Wired. Available at: https://www.wired.co.uk/article/mustafa-suleyman-deepmind-ai-morals-ethics? (accessed 01–07–18)
Temperton, J. 2017. DeepMind’s New AI Ethics Unit Is the Company’s next Big Move. Wired. Available at: https://www.wired.co.uk/article/deepmind-ethics-and-society-artificial-intelligence (accessed 01–07–18).
Wadhwa, V. 2014. Laws and Ethics Can’t Keep Pace with Technology. MIT Technology Review. Available at: https://www.technologyreview.com/s/526401/laws-and-ethics-cant-keep-pace-with-technology/ (accessed 01–07–18).
Williams, R. 2018. Alphabet Chairman: Tech Firms Must Self-Regulate, as Government Moves Too Slowly. INews. Available at: https://inews.co.uk/news/technology/alphabet-google-chairman-tech-firms-self-regulate-government-too-slowly/ (accessed 18–06–18).












