Regulation of artificial intelligence

The regulation of artificial intelligence is the development of public sector policies and laws for promoting and regulating artificial intelligence (AI); it is therefore related to the broader regulation of algorithms. The regulatory and policy landscape for AI is an emerging issue in jurisdictions globally, including in the European Union. Regulation is considered necessary both to encourage AI and to manage associated risks. Regulation of AI through mechanisms such as review boards can also be seen as a social means to approach the AI control problem.

Perspectives

The regulation of artificial intelligence is the development of public sector policies and laws for promoting and regulating AI.[1] Regulation is considered necessary both to encourage AI and to manage associated risks.[2][3] Public administration and policy considerations generally focus on the technical and economic implications and on trustworthy and human-centered AI systems,[4] although regulation of artificial superintelligences is also considered. AI law and regulation can be divided into three main topics: governance of autonomous intelligence systems, responsibility and accountability for the systems, and privacy and safety issues.[2] A public administration approach sees a relationship between AI law and regulation, the ethics of AI, and 'AI society', defined as workforce substitution and transformation, social acceptance and trust in AI, and the transformation of human-machine interaction.[5] The development of public sector strategies for the management and regulation of AI is deemed necessary at the local, national,[6] and international levels,[7] and in a variety of fields, from public service management[8] and accountability[9] to law enforcement,[7][10] the financial sector,[6] robotics,[11][12] autonomous vehicles,[11] the military[13] and national security,[14] and international law.[15][16]

In 2017 Elon Musk called for regulation of AI development.[17] According to NPR, the Tesla CEO was "clearly not thrilled" to be advocating for government scrutiny that could impact his own industry, but believed the risks of going entirely without oversight were too high: "Normally the way regulations are set up is when a bunch of bad things happen, there's a public outcry, and after many years a regulatory agency is set up to regulate that industry. It takes forever. That, in the past, has been bad but not something which represented a fundamental risk to the existence of civilization."[17] In response, some politicians expressed skepticism about the wisdom of regulating a technology that is still in development.[18] Responding both to Musk and to February 2017 proposals by European Union lawmakers to regulate AI and robotics, Intel CEO Brian Krzanich argued that AI is in its infancy and that it is too early to regulate the technology.[19] Instead of trying to regulate the technology itself, some scholars suggest instead developing common norms, including requirements for the testing and transparency of algorithms, possibly combined with some form of warranty.[20]

As a response to the AI control problem

Regulation of AI can be seen as a positive social means to manage the AI control problem, i.e., the need to ensure long-term beneficial AI. Other social responses, such as doing nothing or banning AI, are seen as impractical, while approaches such as enhancing human capabilities through transhumanist techniques like brain-computer interfaces are seen as potentially complementary.[21][22] Regulation of research into artificial general intelligence (AGI) focuses on the role of review boards, from the university or corporate level to the international level, and on encouraging research into safe AI,[22] together with the possibility of differential intellectual progress (prioritizing risk-reducing strategies over risk-taking strategies in AI development) or conducting international mass surveillance to perform AGI arms control.[21] For instance, the 'AGI Nanny' is a proposed strategy, potentially under the control of humanity, for preventing the creation of a dangerous superintelligence as well as addressing other major threats to human well-being, such as subversion of the global financial system, until a superintelligence can be safely created. It entails the creation of a smarter-than-human, but not superintelligent, artificial general intelligence system connected to a large surveillance network, with the goal of monitoring humanity and protecting it from danger.[21] Regulation of conscious, ethically aware AGIs focuses on integrating them with existing human society and can be divided into considerations of their legal standing and of their moral rights.[21] Regulation of AI has also been seen as restrictive, with a risk of preventing the development of AGI.[11]

Global guidance

The development of a global governance board to regulate AI development was suggested at least as early as 2017.[23] In December 2018, Canada and France announced plans for a G7-backed International Panel on Artificial Intelligence, modeled on the Intergovernmental Panel on Climate Change, to study the global effects of AI on people and economies and to steer AI development.[24] In 2019 the Panel was renamed the Global Partnership on AI, but it has yet to be endorsed by the United States.[25][26]

The OECD Recommendations on AI[27] were adopted in May 2019, and the G20 AI Principles in June 2019.[26][28][29] In September 2019 the World Economic Forum issued ten 'AI Government Procurement Guidelines'.[30] In February 2020, the European Union published its draft strategy paper for promoting and regulating AI.[7] At the United Nations, several entities have begun to promote and discuss aspects of AI regulation and policy, including the UNICRI Centre for AI and Robotics.[14]

Regional and national regulation

Timeline of strategies, action plans and policy papers defining national, regional and international approaches to AI[31]

The regulatory and policy landscape for AI is an emerging issue in jurisdictions globally, including in the European Union.[32] Since early 2016, many national, regional and international authorities have begun adopting strategies, action plans and policy papers on AI.[33][34] These documents cover a wide range of topics, such as regulation and governance, as well as industrial strategy, research, talent and infrastructure.[4][35]

China

The regulation of AI in China is mainly governed by the State Council's "Next Generation Artificial Intelligence Development Plan" of July 8, 2017 (State Council Document No. 35), in which the Central Committee of the Communist Party of China and the State Council of the People's Republic of China urged the governing bodies of China to promote the development of AI. Regulation of the ethical and legal issues surrounding AI is nascent, but existing policy ensures state control over Chinese companies and over valuable data, including requirements that data on Chinese users be stored within the country and that the People's Republic of China's national standards for AI, including those on big data, cloud computing, and industrial software, be used.

European Union

The European Union (EU) is guided by a European Strategy on Artificial Intelligence,[36] supported by a High-Level Expert Group on Artificial Intelligence.[37] In April 2019, the European Commission published its Ethics Guidelines for Trustworthy AI,[38] following this in June 2019 with its Policy and Investment Recommendations for Trustworthy Artificial Intelligence.[39]

On February 19, 2020, the European Commission published its White Paper on Artificial Intelligence - A European Approach to Excellence and Trust.[40] The White Paper consists of two main building blocks, an 'ecosystem of excellence' and an 'ecosystem of trust'. The latter outlines the EU's approach to a regulatory framework for AI. In its proposed approach, the Commission differentiates between 'high-risk' and 'non-high-risk' AI applications; only the former should fall within the scope of a future EU regulatory framework. Whether an application is high-risk could in principle be determined by two cumulative criteria, concerning critical sectors and critical use. The following key requirements are considered for high-risk AI applications: requirements for training data; data and record-keeping; informational duties; requirements for robustness and accuracy; human oversight; and specific requirements for particular AI applications, such as those used for remote biometric identification. AI applications that do not qualify as 'high-risk' could be governed by a voluntary labelling scheme. As regards compliance and enforcement, the Commission considers prior conformity assessments, which could include 'procedures for testing, inspection or certification' and/or 'checks of the algorithms and of the data sets used in the development phase'. A European governance structure on AI, in the form of a framework for cooperation of national competent authorities, could facilitate the implementation of the regulatory framework.[40]
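Because the two criteria are cumulative, the White Paper's classification amounts to a simple conjunctive decision rule, which can be sketched as follows. This is an illustrative sketch only: the function name, the sector list, and the boolean `critical_use` flag are assumptions made for exposition, not part of the Commission's proposal, which leaves both criteria to be defined in detail in future legislation.

```python
# Illustrative sketch of the White Paper's proposed risk classification.
# Both criteria must hold (they are cumulative) for an application to
# count as 'high-risk'. The sector names echo examples mentioned in the
# White Paper (healthcare, transport, energy), but the set is hypothetical.
CRITICAL_SECTORS = {"healthcare", "transport", "energy"}

def is_high_risk(sector: str, critical_use: bool) -> bool:
    """Return True only if the AI application is deployed in a critical
    sector AND its concrete use is itself considered critical."""
    return sector in CRITICAL_SECTORS and critical_use
```

Under such a rule, an application in a critical sector whose use is not itself critical (for instance, an appointment-scheduling tool in a hospital) would fall outside the mandatory framework and could instead join the voluntary labelling scheme.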

United Kingdom

The UK supported the application and development of AI in business via the Digital Economy Strategy 2015-2018, introduced at the beginning of 2015 by Innovate UK as part of the UK Digital Strategy. In the public sector, guidance has been provided by the Department for Digital, Culture, Media and Sport, on data ethics[41] and the Alan Turing Institute, on responsible design and implementation of AI systems.[42] In terms of cyber security, the National Cyber Security Centre has issued guidance on ‘Intelligent Security Tools’.[14][43]

United States

Discussions on the regulation of AI in the United States have included topics such as the timeliness of regulating AI; the nature of the federal regulatory framework to govern and promote AI, including which agency should lead, that agency's regulatory and governing powers, and how to update regulations in the face of rapidly changing technology; and the roles of state governments and courts.[44]

As early as 2016, the Obama administration had begun to focus on the risks of and regulations for artificial intelligence. In a report titled Preparing for the Future of Artificial Intelligence, the National Science and Technology Council set a precedent allowing researchers to continue developing new AI technologies with few restrictions. The report states that "the approach to regulation of AI-enabled products to protect public safety should be informed by assessment of the aspects of risk....".[45] These risks would be the principal reason to create any form of regulation, given that existing regulation would not apply to AI technology.

The first main report was the National Artificial Intelligence Research and Development Strategic Plan. On August 13, 2018, Section 1051 of the Fiscal Year 2019 John S. McCain National Defense Authorization Act (P.L. 115-232) established the National Security Commission on Artificial Intelligence "to consider the methods and means necessary to advance the development of artificial intelligence, machine learning, and associated technologies to comprehensively address the national security and defense needs of the United States."[46] The Commission also provides steering on the regulation of security-related AI.[47] The Artificial Intelligence Initiative Act (S.1558) is a proposed bill that would establish a federal initiative designed to accelerate research and development on AI for, inter alia, the economic and national security of the United States.[48][49]

Following Executive Order 13859 on Maintaining American Leadership in Artificial Intelligence, signed in February 2019, the National Institute of Standards and Technology released a position paper on AI standards,[51] the National Security Commission on Artificial Intelligence published an interim report,[52] and the Defense Innovation Board issued recommendations on the ethical use of AI.[53] On January 7, 2020, the White House's Office of Science and Technology Policy released a draft Guidance for Regulation of Artificial Intelligence Applications, which includes ten principles for United States agencies when deciding whether and how to regulate AI,[50] and the administration subsequently called for public comments on the draft.[54]

Regulation of fully autonomous weapons

Legal questions related to lethal autonomous weapons systems (LAWS), in particular compliance with the laws of armed conflict, have been under discussion at the United Nations since 2013, within the context of the Convention on Certain Conventional Weapons.[55] Notably, informal meetings of experts took place in 2014, 2015 and 2016, and a Group of Governmental Experts (GGE) was appointed to further deliberate on the issue in 2016. A set of guiding principles on LAWS affirmed by the GGE was adopted in 2018.[56]

In 2016, China published a position paper questioning the adequacy of existing international law to address the eventuality of fully autonomous weapons, becoming the first permanent member of the UN Security Council to broach the issue[15] and prompting proposals for global regulation.[57] The possibility of a moratorium on, or preemptive ban of, the development and use of LAWS has also been raised on several occasions by other national delegations to the Convention on Certain Conventional Weapons, and is strongly advocated by the Campaign to Stop Killer Robots, a coalition of non-governmental organizations.[58]

See also

References

  1. Barfield, Woodrow; Pagallo, Ugo, eds. (2018). Research Handbook on the Law of Artificial Intelligence. Cheltenham, UK. ISBN 978-1-78643-904-8. OCLC 1039480085.
  2. Wirtz, Bernd W.; Weyerer, Jan C.; Geyer, Carolin (2018-07-24). "Artificial Intelligence and the Public Sector—Applications and Challenges". International Journal of Public Administration. 42 (7): 596–615. doi:10.1080/01900692.2018.1498103. ISSN 0190-0692. S2CID 158829602.
  3. Buiten, Miriam C (2019). "Towards Intelligent Regulation of Artificial Intelligence". European Journal of Risk Regulation. 10 (1): 41–59. doi:10.1017/err.2019.8. ISSN 1867-299X.
  4. Artificial Intelligence in Society. Paris: Organisation for Economic Co-operation and Development. 11 June 2019. ISBN 978-92-64-54519-9. OCLC 1105926611.
  5. Wirtz, Bernd W.; Weyerer, Jan C.; Sturm, Benjamin J. (2020-04-15). "The Dark Sides of Artificial Intelligence: An Integrated AI Governance Framework for Public Administration". International Journal of Public Administration. 43 (9): 818–829. doi:10.1080/01900692.2020.1749851. ISSN 0190-0692. S2CID 218807452.
  6. Bredt, Stephan (2019-10-04). "Artificial Intelligence (AI) in the Financial Sector—Potential and Public Strategies". Frontiers in Artificial Intelligence. 2. doi:10.3389/frai.2019.00016. ISSN 2624-8212.
  7. White Paper: On Artificial Intelligence – A European approach to excellence and trust (PDF). Brussels: European Commission. 2020. p. 1.
  8. Wirtz, Bernd W.; Müller, Wilhelm M. (2018-12-03). "An integrated artificial intelligence framework for public management". Public Management Review. 21 (7): 1076–1100. doi:10.1080/14719037.2018.1549268. ISSN 1471-9037. S2CID 158267709.
  9. Reisman, Dillon; Schultz, Jason; Crawford, Kate; Whittaker, Meredith (2018). Algorithmic impact assessments: A practical framework for public agency accountability (PDF). New York: AI Now Institute.
  10. "Towards Responsible Artificial Intelligence Innovation" (PDF). UNICRI. 2020. Retrieved 2020-08-09.
  11. Gurkaynak, Gonenc; Yilmaz, Ilay; Haksever, Gunes (2016). "Stifling artificial intelligence: Human perils". Computer Law & Security Review. 32 (5): 749–758. doi:10.1016/j.clsr.2016.05.003. ISSN 0267-3649.
  12. Iphofen, Ron; Kritikos, Mihalis (2019-01-03). "Regulating artificial intelligence and robotics: ethics by design in a digital society". Contemporary Social Science: 1–15. doi:10.1080/21582041.2018.1563803. ISSN 2158-2041.
  13. AI principles: Recommendations on the ethical use of artificial intelligence by the Department of Defense (PDF). Washington, DC: United States Defense Innovation Board. 2019. OCLC 1126650738.
  14. Babuta, Alexander; Oswald, Marion; Janjeva, Ardi (2020). Artificial Intelligence and UK National Security: Policy Considerations (PDF). London: Royal United Services Institute.
  15. "Robots with Guns: The Rise of Autonomous Weapons Systems". Snopes.com. 21 April 2017. Retrieved 24 December 2017.
  16. Bento, Lucas (2017). "No Mere Deodands: Human Responsibilities in the Use of Violent Intelligent Systems Under Public International Law". Harvard Scholarship Depository. Retrieved 2019-09-14.
  17. "Elon Musk Warns Governors: Artificial Intelligence Poses 'Existential Risk'". NPR.org. Retrieved 27 November 2017.
  18. Gibbs, Samuel (17 July 2017). "Elon Musk: regulate AI to combat 'existential threat' before it's too late". The Guardian. Retrieved 27 November 2017.
  19. Kharpal, Arjun (7 November 2017). "A.I. is in its 'infancy' and it's too early to regulate it, Intel CEO Brian Krzanich says". CNBC. Retrieved 27 November 2017.
  20. Kaplan, Andreas; Haenlein, Michael (2019). "Siri, Siri, in my hand: Who's the fairest in the land? On the interpretations, illustrations, and implications of artificial intelligence". Business Horizons. 62: 15–25. doi:10.1016/j.bushor.2018.08.004.
  21. Sotala, Kaj; Yampolskiy, Roman V (2014-12-19). "Responses to catastrophic AGI risk: a survey". Physica Scripta. 90 (1): 018001. doi:10.1088/0031-8949/90/1/018001. ISSN 0031-8949.
  22. Barrett, Anthony M.; Baum, Seth D. (2016-05-23). "A model of pathways to artificial superintelligence catastrophe for risk and decision analysis". Journal of Experimental & Theoretical Artificial Intelligence. 29 (2): 397–414. arXiv:1607.07730. doi:10.1080/0952813x.2016.1186228. ISSN 0952-813X. S2CID 928824.
  23. Boyd, Matthew; Wilson, Nick (2017-11-01). "Rapid developments in Artificial Intelligence: how might the New Zealand government respond?". Policy Quarterly. 13 (4). doi:10.26686/pq.v13i4.4619. ISSN 2324-1101.
  24. Innovation, Science and Economic Development Canada (2019-05-16). "Declaration of the International Panel on Artificial Intelligence". gcnws. Retrieved 2020-03-29.
  25. "The world has a plan to rein in AI—but the US doesn't like it". Wired. 2020-01-08. Retrieved 2020-03-29.
  26. "AI Regulation: Has the Time Arrived?". InformationWeek. Retrieved 2020-03-29.
  27. "OECD Principles on Artificial Intelligence - Organisation for Economic Co-operation and Development". www.oecd.org. Retrieved 2020-03-29.
  28. G20 Ministerial Statement on Trade and Digital Economy (PDF). Tsukuba City, Japan: G20. 2019.
  29. "International AI ethics panel must be independent". Nature. 572 (7770): 415. 2019-08-21. Bibcode:2019Natur.572R.415.. doi:10.1038/d41586-019-02491-x. PMID 31435065.
  30. Guidelines for AI Procurement (PDF). Cologny/Geneva: World Economic Forum. 2019.
  31. "UNICRI :: United Nations Interregional Crime and Justice Research Institute". www.unicri.it. Retrieved 2020-08-08.
  32. Law Library of Congress (U.S.). Global Legal Research Directorate, issuing body. Regulation of artificial intelligence in selected jurisdictions. LCCN 2019668143. OCLC 1110727808.
  33. "OECD Observatory of Public Sector Innovation - Ai Strategies and Public Sector Components". Retrieved 2020-05-04.
  34. Berryhill, Jamie; Heang, Kévin Kok; Clogher, Rob; McBride, Keegan (2019). Hello, World: Artificial Intelligence and its Use in the Public Sector (PDF). Paris: OECD Observatory of Public Sector Innovation.
  35. Campbell, Thomas A. (2019). Artificial Intelligence: An Overview of State Initiatives (PDF). Evergreen, CO: FutureGrasp, LLC.
  36. "Communication Artificial Intelligence for Europe". Shaping Europe's digital future - European Commission. 2018-04-25. Retrieved 2020-05-05.
  37. "High-Level Expert Group on Artificial Intelligence". Shaping Europe's digital future - European Commission. 2018-06-14. Retrieved 2020-05-05.
  38. Weiser, Stephanie (2019-04-03). "Building trust in human-centric AI". FUTURIUM - European Commission. Retrieved 2020-05-05.
  39. "Policy and investment recommendations for trustworthy Artificial Intelligence". Shaping Europe's digital future - European Commission. 2019-06-26. Retrieved 2020-05-05.
  40. European Commission. (2020). White paper on artificial intelligence : a European approach to excellence and trust. OCLC 1141850140.
  41. Data Ethics Framework (PDF). London: Department for Digital, Culture, Media and Sport. 2018.
  42. Leslie, David (2019-06-11). "Understanding artificial intelligence ethics and safety: A guide for the responsible design and implementation of AI systems in the public sector". doi:10.5281/zenodo.3240529. S2CID 189762499.
  43. "Intelligent security tools". www.ncsc.gov.uk. Retrieved 2020-04-28.
  44. Weaver, John Frank (2018-12-28). "Regulation of artificial intelligence in the United States". Research Handbook on the Law of Artificial Intelligence: 155–212. doi:10.4337/9781786439055.00018. ISBN 9781786439055.
  45. National Science and Technology Council Committee on Technology (October 2016). "Preparing for the Future of Artificial Intelligence". White House.
  46. "About". National Security Commission on Artificial Intelligence. Retrieved 2020-06-29.
  47. Stefanik, Elise M. (2018-05-22). "H.R.5356 – 115th Congress (2017–2018): National Security Commission Artificial Intelligence Act of 2018". www.congress.gov. Retrieved 2020-03-13.
  48. Heinrich, Martin (2019-05-21). "Text - S.1558 - 116th Congress (2019-2020): Artificial Intelligence Initiative Act". www.congress.gov. Retrieved 2020-03-29.
  49. Scherer, Matthew U. (2015). "Regulating Artificial Intelligence Systems: Risks, Challenges, Competencies, and Strategies". SSRN Working Paper Series. doi:10.2139/ssrn.2609777. ISSN 1556-5068.
  50. "AI Update: White House Issues 10 Principles for Artificial Intelligence Regulation". Inside Tech Media. 2020-01-14. Retrieved 2020-03-25.
  51. U.S. Leadership in AI: A Plan for Federal Engagement in Developing Technical Standards and Related Tools (PDF). National Institute of Science and Technology. 2019.
  52. NSCAI Interim Report for Congress. The National Security Commission on Artificial Intelligence. 2019.
  53. AI Principles: Recommendations on the Ethical Use of Artificial Intelligence by the Department of Defense (PDF). Washington, DC: Defense Innovation Board. 2020.
  54. "Request for Comments on a Draft Memorandum to the Heads of Executive Departments and Agencies, "Guidance for Regulation of Artificial Intelligence Applications"". Federal Register. 2020-01-13. Retrieved 2020-11-28.
  55. "Background on Lethal Autonomous Weapons Systems in the CCW". United Nations Geneva.
  56. "Guiding Principles affirmed by the Group of Governmental Experts on Emerging Technologies in the Area of Lethal Autonomous Weapons System" (PDF). United Nations Geneva.
  57. Baum, Seth (2018-09-30). "Countering Superintelligence Misinformation". Information. 9 (10): 244. doi:10.3390/info9100244. ISSN 2078-2489.
  58. "Country Views on Killer Robots" (PDF). The Campaign to Stop Killer Robots.
This article is issued from Wikipedia. The text is licensed under Creative Commons - Attribution - Sharealike. Additional terms may apply for the media files.