Regulation of artificial intelligence

The regulation of artificial intelligence is the development of public sector policies and laws for promoting and regulating artificial intelligence (AI);[1] it is therefore related to the broader regulation of algorithms. The regulatory and policy landscape for AI is an emerging issue in jurisdictions globally, including in the European Union.[2] Regulation is considered necessary both to encourage AI and to manage associated risks, but challenging to achieve.[3][4] Regulation of AI through mechanisms such as review boards can also be seen as a social means to approach the AI control problem.[5]

Background

In 2017 Elon Musk called for regulation of AI development.[6] According to NPR, the Tesla CEO was "clearly not thrilled" to be advocating for government scrutiny that could impact his own industry, but believed the risks of going completely without oversight are too high: "Normally the way regulations are set up is when a bunch of bad things happen, there's a public outcry, and after many years a regulatory agency is set up to regulate that industry. It takes forever. That, in the past, has been bad but not something which represented a fundamental risk to the existence of civilization."[6]

In response, some politicians expressed skepticism about the wisdom of regulating a technology that is still in development.[7] Responding both to Musk and to February 2017 proposals by European Union lawmakers to regulate AI and robotics, Intel CEO Brian Krzanich argued that AI is in its infancy and that it is too early to regulate the technology.[8] Instead of trying to regulate the technology itself, some scholars suggest developing common norms, including requirements for the testing and transparency of algorithms, possibly in combination with some form of warranty.[9]

Nature and scope of regulation

Public policy considerations generally focus on the technical and economic implications and on trustworthy and human-centered AI systems.[10] AI law and regulations can be divided into three main topics, namely governance of autonomous intelligence systems, responsibility and accountability for the systems, and privacy and safety issues.[11] The development of public sector strategies for management and regulation of AI has been increasingly deemed necessary at the local, national,[12] and international levels[13] and in a variety of fields, from public service management[14] and accountability[15] to law enforcement,[13] the financial sector,[12] robotics,[16] the military[17] and national security,[18] and international law.[19][20]

Global regulation

The development of a global governance board to regulate AI development was suggested at least as early as 2017.[21] In December 2018, Canada and France announced plans for a G7-backed International Panel on Artificial Intelligence, modeled on the Intergovernmental Panel on Climate Change, to study the global effects of AI on people and economies and to steer AI development.[22] In 2019 the Panel was renamed the Global Partnership on AI, but it has yet to be endorsed by the United States.[23][24] The OECD Recommendations on AI[25] were adopted in May 2019, and the G20 AI Principles in June 2019.[24][26][27] In September 2019 the World Economic Forum issued ten 'AI Government Procurement Guidelines'.[28] In February 2020, the European Union published its draft strategy paper for promoting and regulating AI.[13] At the United Nations, several entities have begun to promote and discuss aspects of AI regulation and policy, including the UNICRI Centre for AI and Robotics.[18]

Regional and national regulation

Timeline of strategies, action plans and policy papers defining national, regional and international approaches to AI

The regulatory and policy landscape for AI is an emerging issue in jurisdictions globally, including in the European Union.[2] Since early 2016, many national, regional and international authorities have begun adopting strategies, action plans and policy papers on AI.[29][1] These documents cover a wide range of topics such as regulation and governance, as well as industrial strategy, research, talent and infrastructure.[10][30]

Regulation of AI in the European Union

The European Union (EU) is guided by a European Strategy on Artificial Intelligence,[31] supported by a High-Level Expert Group on Artificial Intelligence.[32] In April 2019, the European Commission published its Ethics Guidelines for Trustworthy Artificial Intelligence (AI),[33] following this with its Policy and investment recommendations for trustworthy Artificial Intelligence in June 2019.[34]

On February 2, 2020, the European Commission published its White Paper on Artificial Intelligence - A European approach to excellence and trust.[35] The White Paper consists of two main building blocks, an 'ecosystem of excellence' and an 'ecosystem of trust'. The latter outlines the EU's approach to a regulatory framework for AI. In its proposed approach, the Commission differentiates between 'high-risk' and 'non-high-risk' AI applications; only the former would fall within the scope of a future EU regulatory framework. Whether an application is high-risk could in principle be determined by two cumulative criteria, concerning critical sectors and critical use. The following key requirements are envisaged for high-risk AI applications: requirements for training data; data and record-keeping; informational duties; requirements for robustness and accuracy; human oversight; and specific requirements for particular AI applications, such as those used for purposes of remote biometric identification. AI applications that do not qualify as 'high-risk' could be governed by a voluntary labeling scheme. As regards compliance and enforcement, the Commission considers prior conformity assessments, which could include 'procedures for testing, inspection or certification' and/or 'checks of the algorithms and of the data sets used in the development phase'. A European governance structure on AI, in the form of a framework for cooperation of national competent authorities, could facilitate the implementation of the regulatory framework.[35]
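The White Paper's two cumulative criteria can be sketched as a simple decision rule. The following is an illustrative sketch only, not the Commission's formal test: the sector list and the use-level risk flag are hypothetical simplifications, and the White Paper also contemplates exceptions (such as remote biometric identification) that this sketch omits.

```python
# Illustrative sketch of the White Paper's two cumulative criteria for
# classifying an AI application as 'high-risk'. The sector list below is
# a hypothetical simplification, not the Commission's actual enumeration.
CRITICAL_SECTORS = {"healthcare", "transport", "energy"}

def is_high_risk(sector: str, use_poses_significant_risk: bool) -> bool:
    """Both criteria are cumulative: the application must be deployed in a
    critical sector AND used in a manner posing significant risk."""
    return sector in CRITICAL_SECTORS and use_poses_significant_risk

# A risky use in a critical sector is high-risk; the same tool used in a
# low-risk manner, or in a non-critical sector, falls outside the framework.
print(is_high_risk("healthcare", True))   # True
print(is_high_risk("healthcare", False))  # False
print(is_high_risk("retail", True))       # False
```

Because the criteria are cumulative, an application escapes the proposed framework if either criterion fails, which is why the White Paper pairs the rule with a voluntary labeling scheme for everything outside it.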

Regulation of AI in the United Kingdom

In the UK public sector, guidance has been provided by the Department for Digital, Culture, Media and Sport, on data ethics[36] and the Alan Turing Institute, on responsible design and implementation of AI systems.[37] In terms of cyber security, the National Cyber Security Centre has issued guidance on ‘Intelligent Security Tools’.[38][18]

Regulation of AI in the United States

In the United States, on January 7, 2019, following an Executive Order on 'Maintaining American Leadership in Artificial Intelligence', the White House's Office of Science and Technology Policy released a draft Guidance for Regulation of Artificial Intelligence Applications, which includes ten principles for United States agencies when deciding whether and how to regulate AI.[39][40] In response, the National Institute of Standards and Technology has released a position paper,[41] the National Security Commission on Artificial Intelligence has published an interim report,[42] and the Defense Innovation Board has issued recommendations on the ethical use of AI.[43] The National Security Commission on Artificial Intelligence also provides steering on regulating security-related AI.[44] The Artificial Intelligence Initiative Act (S.1558) is a proposed bill that would establish a federal initiative designed to accelerate research and development on AI for, inter alia, the economic and national security of the United States.[45][46]

Regulation of fully autonomous weapons

Legal questions related to lethal autonomous weapons systems (LAWS), in particular compliance with the laws of armed conflict, have been under discussion at the United Nations since 2013, within the context of the Convention on Certain Conventional Weapons.[47] Notably, informal meetings of experts took place in 2014, 2015 and 2016, and a Group of Governmental Experts (GGE) was appointed to further deliberate on the issue in 2016. In 2018, the GGE affirmed a set of guiding principles on LAWS.[48]

In 2016, China published a position paper questioning the adequacy of existing international law to address the eventuality of fully autonomous weapons, becoming the first permanent member of the U.N. Security Council to broach the issue,[19] and leading to proposals for global regulation.[49] The possibility of a moratorium or preemptive ban on the development and use of LAWS has also been raised on several occasions by other national delegations to the Convention on Certain Conventional Weapons, and is strongly advocated for by the Campaign to Stop Killer Robots, a coalition of non-governmental organizations.[50]

As a response to the AI control problem

Regulation of AI can be seen as a positive social means to manage the AI control problem, i.e., the need to ensure long-term beneficial AI, with other social responses such as doing nothing or banning being seen as impractical, and approaches such as enhancing human capabilities through transhumanist means such as brain-computer interfaces being seen as potentially complementary.[51][52] Regulation of research into artificial general intelligence (AGI) focuses on the role of review boards, from university or corporation to international levels, and on encouraging research into safe AI,[52] together with the possibility of differential intellectual progress (prioritizing risk-reducing strategies over risk-taking strategies in AI development) or conducting international mass surveillance to perform AGI arms control.[51] For instance, the 'AGI Nanny' is a proposed strategy, potentially under the control of humanity, for preventing the creation of a dangerous superintelligence as well as addressing other major threats to human well-being, such as subversion of the global financial system, until a superintelligence can be safely created. It entails the creation of a smarter-than-human, but not superintelligent, artificial general intelligence system connected to a large surveillance network, with the goal of monitoring humanity and protecting it from danger.[51] Regulation of conscious, ethically aware AGIs focuses on integrating them with existing human society and can be divided into considerations of their legal standing and of their moral rights.[51]

References

  1. Berryhill, Jamie; Heang, Kévin Kok; Clogher, Rob; McBride, Keegan (2019). Hello, World: Artificial Intelligence and its Use in the Public Sector (PDF). Paris: OECD Observatory of Public Sector Innovation.
  2. Law Library of Congress (U.S.). Global Legal Research Directorate, issuing body. Regulation of artificial intelligence in selected jurisdictions. LCCN 2019668143. OCLC 1110727808.
  3. Wirtz, Bernd W.; Weyerer, Jan C.; Geyer, Carolin (2018-07-24). "Artificial Intelligence and the Public Sector—Applications and Challenges". International Journal of Public Administration. 42 (7): 596–615. doi:10.1080/01900692.2018.1498103. ISSN 0190-0692.
  4. Buiten, Miriam C (2019). "Towards Intelligent Regulation of Artificial Intelligence". European Journal of Risk Regulation. 10 (1): 41–59. doi:10.1017/err.2019.8. ISSN 1867-299X.
  5. Sotala, Kaj; Yampolskiy, Roman V (2014-12-19). "Responses to catastrophic AGI risk: a survey". Physica Scripta. 90 (1): 018001. doi:10.1088/0031-8949/90/1/018001. ISSN 0031-8949.
  6. "Elon Musk Warns Governors: Artificial Intelligence Poses 'Existential Risk'". NPR.org. Retrieved 27 November 2017.
  7. Gibbs, Samuel (17 July 2017). "Elon Musk: regulate AI to combat 'existential threat' before it's too late". The Guardian. Retrieved 27 November 2017.
  8. Kharpal, Arjun (7 November 2017). "A.I. is in its 'infancy' and it's too early to regulate it, Intel CEO Brian Krzanich says". CNBC. Retrieved 27 November 2017.
  9. Kaplan, Andreas; Haenlein, Michael (2019). "Siri, Siri, in my hand: Who's the fairest in the land? On the interpretations, illustrations, and implications of artificial intelligence". Business Horizons. 62: 15–25. doi:10.1016/j.bushor.2018.08.004.
  10. Artificial intelligence in society. Paris: Organisation for Economic Co-operation and Development. ISBN 978-92-64-54519-9. OCLC 1105926611.
  11. Wirtz, Bernd W.; Weyerer, Jan C.; Geyer, Carolin (2018-07-24). "Artificial Intelligence and the Public Sector—Applications and Challenges". International Journal of Public Administration. 42 (7): 596–615. doi:10.1080/01900692.2018.1498103. ISSN 0190-0692.
  12. Bredt, Stephan (2019-10-04). "Artificial Intelligence (AI) in the Financial Sector—Potential and Public Strategies". Frontiers in Artificial Intelligence. 2. doi:10.3389/frai.2019.00016. ISSN 2624-8212.
  13. White Paper: On Artificial Intelligence – A European approach to excellence and trust (PDF). Brussels: European Commission. 2020. p. 1.
  14. Wirtz, Bernd W.; Müller, Wilhelm M. (2018-12-03). "An integrated artificial intelligence framework for public management". Public Management Review. 21 (7): 1076–1100. doi:10.1080/14719037.2018.1549268. ISSN 1471-9037.
  15. Reisman, Dillon; Schultz, Jason; Crawford, Kate; Whittaker, Meredith (2018). Algorithmic impact assessments: A practical framework for public agency accountability (PDF). New York: AI Now Institute.
  16. Iphofen, Ron; Kritikos, Mihalis (2019-01-03). "Regulating artificial intelligence and robotics: ethics by design in a digital society". Contemporary Social Science: 1–15. doi:10.1080/21582041.2018.1563803. ISSN 2158-2041.
  17. AI principles: Recommendations on the ethical use of artificial intelligence by the Department of Defense (PDF). Washington, DC: United States Defense Innovation Board. 2019. OCLC 1126650738.
  18. Babuta, Alexander; Oswald, Marion; Janjeva, Ardi (2020). Artificial Intelligence and UK National Security: Policy Considerations (PDF). London: Royal United Services Institute.
  19. "Robots with Guns: The Rise of Autonomous Weapons Systems". Snopes.com. 21 April 2017. Retrieved 24 December 2017.
  20. Bento, Lucas (2017). "No Mere Deodands: Human Responsibilities in the Use of Violent Intelligent Systems Under Public International Law". Harvard Scholarship Depository. Retrieved 2019-09-14.
  21. Boyd, Matthew; Wilson, Nick (2017-11-01). "Rapid developments in Artificial Intelligence: how might the New Zealand government respond?". Policy Quarterly. 13 (4). doi:10.26686/pq.v13i4.4619. ISSN 2324-1101.
  22. Innovation, Science and Economic Development Canada (2019-05-16). "Declaration of the International Panel on Artificial Intelligence". gcnws. Retrieved 2020-03-29.
  23. "The world has a plan to rein in AI—but the US doesn't like it". Wired. 2020-01-08. Retrieved 2020-03-29.
  24. "AI Regulation: Has the Time Arrived?". InformationWeek. Retrieved 2020-03-29.
  25. "OECD Principles on Artificial Intelligence - Organisation for Economic Co-operation and Development". www.oecd.org. Retrieved 2020-03-29.
  26. G20 Ministerial Statement on Trade and Digital Economy (PDF). Tsukuba City, Japan: G20. 2019.
  27. "International AI ethics panel must be independent". Nature. 572 (7770): 415. 2019-08-21. Bibcode:2019Natur.572R.415.. doi:10.1038/d41586-019-02491-x. PMID 31435065.
  28. Guidelines for AI Procurement (PDF). Cologny/Geneva: World Economic Forum. 2019.
  29. "OECD Observatory of Public Sector Innovation - Ai Strategies and Public Sector Components". Retrieved 2020-05-04.
  30. Campbell, Thomas A. (2019). Artificial Intelligence: An Overview of State Initiatives (PDF). Evergreen, CO: FutureGrasp, LLC.
  31. Anonymous (2018-04-25). "Communication Artificial Intelligence for Europe". Shaping Europe’s digital future - European Commission. Retrieved 2020-05-05.
  32. "High-Level Expert Group on Artificial Intelligence" (2018-06-14). Shaping Europe's digital future - European Commission. Retrieved 2020-05-05.
  33. Weiser, Stephanie (2019-04-03). "Building trust in human-centric AI". FUTURIUM - European Commission. Retrieved 2020-05-05.
  34. Anonymous (2019-06-26). "Policy and investment recommendations for trustworthy Artificial Intelligence". Shaping Europe’s digital future - European Commission. Retrieved 2020-05-05.
  35. European Commission. White paper on artificial intelligence : a European approach to excellence and trust. OCLC 1141850140.
  36. Data Ethics Framework (PDF). London: Department for Digital, Culture, Media and Sport. 2018.
  37. Leslie, David (2019-06-11). "Understanding artificial intelligence ethics and safety: A guide for the responsible design and implementation of AI systems in the public sector". doi:10.5281/zenodo.3240529.
  38. "Intelligent security tools". www.ncsc.gov.uk. Retrieved 2020-04-28.
  39. "AI Update: White House Issues 10 Principles for Artificial Intelligence Regulation". Inside Tech Media. 2020-01-14. Retrieved 2020-03-25.
  40. Memorandum for the Heads of Executive Departments and Agencies (PDF). Washington, D.C.: White House Office of Science and Technology Policy. 2020.
  41. U.S. Leadership in AI: A Plan for Federal Engagement in Developing Technical Standards and Related Tools (PDF). National Institute of Science and Technology. 2019.
  42. NSCAI Interim Report for Congress. The National Security Commission on Artificial Intelligence. 2019.
  43. AI Principles: Recommendations on the Ethical Use of Artificial Intelligence by the Department of Defense (PDF). Washington, DC: Defense Innovation Board. 2020.
  44. Stefanik, Elise M. (2018-05-22). "H.R.5356 – 115th Congress (2017–2018): National Security Commission Artificial Intelligence Act of 2018". www.congress.gov. Retrieved 2020-03-13.
  45. Heinrich, Martin (2019-05-21). "Text - S.1558 - 116th Congress (2019-2020): Artificial Intelligence Initiative Act". www.congress.gov. Retrieved 2020-03-29.
  46. Scherer, Matthew U. (2015). "Regulating Artificial Intelligence Systems: Risks, Challenges, Competencies, and Strategies". SSRN Working Paper Series. doi:10.2139/ssrn.2609777. ISSN 1556-5068.
  47. "Background on Lethal Autonomous Weapons Systems in the CCW". United Nations Geneva.
  48. "Guiding Principles affirmed by the Group of Governmental Experts on Emerging Technologies in the Area of Lethal Autonomous Weapons System" (PDF). United Nations Geneva.
  49. Baum, Seth (2018-09-30). "Countering Superintelligence Misinformation". Information. 9 (10): 244. doi:10.3390/info9100244. ISSN 2078-2489.
  50. "Country Views on Killer Robots" (PDF). The Campaign to Stop Killer Robots.
  51. Sotala, Kaj; Yampolskiy, Roman V (2014-12-19). "Responses to catastrophic AGI risk: a survey". Physica Scripta. 90 (1): 018001. doi:10.1088/0031-8949/90/1/018001. ISSN 0031-8949.
  52. Barrett, Anthony M.; Baum, Seth D. (2016-05-23). "A model of pathways to artificial superintelligence catastrophe for risk and decision analysis". Journal of Experimental & Theoretical Artificial Intelligence. 29 (2): 397–414. arXiv:1607.07730. doi:10.1080/0952813x.2016.1186228. ISSN 0952-813X.
This article is issued from Wikipedia. The text is licensed under Creative Commons - Attribution - Sharealike. Additional terms may apply for the media files.