These days Facebook, Google, IBM, and Microsoft invest billions in artificial intelligence research. This investment rests on the widely held premise that virtually any task performed by human beings can be automated. Indeed, Artificial Intelligence has lately been associated above all with robotics research. Robot technology is believed to be capable of effectively replacing humans in a range of work and everyday activities: one example is self-driving vehicles, which can lower accident rates compared with human drivers (“AI And Robots Threaten To Unleash Mass Unemployment, Scientists Warn” par. 1). The term “robot” was initially popularized by the famous Czech writer Karel Capek (1890-1938), whose fictional robots in the play R.U.R. are created by a mad scientist as slaves with the aim “to free the humans from the drudgery and elevate them to the higher spheres of learning” (“Introduction to Artificial Intelligence: Opposing Viewpoints” par. 2). When things go wrong, the robots stage an uprising. Although not all fictional robots have been a threat to humanity, the many possible uses of robots (ranging from domestic to military) as well as the assumption that robots may one day act as moral agents have raised ethical concerns about robotics research and commerce.
Two major questions in the current debate on the ethics of artificial intelligence have been the ethics of developer conduct and the ethics of machine conduct (since current ethics research treats robots as moral agents in their own right) (Bostrom & Yudkowsky 18). These are the core ethical challenges for companies that work with artificial intelligence: to develop safe robots and to develop algorithms for these robots that will output superethical behavior.

Ethical Challenge One: Developing Safe Robots
The first ethical challenge for businesses that develop Artificial Intelligence agents is to build AIs that will act safely in the many domains in which they operate. Bill Hibbard from the University of Wisconsin-Madison warns against possible unintended as well as intended consequences of creating super-intelligent machines (for example, social chaos). In his view, the fact that robots will have such a great impact on humanity places humans in a position where they must consider how to prevent the threat AIs pose to them through the application of ethics, as well as how to counter the threat of AI “enabling a small group of humans to take dictatorial control over the rest of humanity” (Hibbard par. 4).

Hibbard argues that the first principle of creating safe AIs is transparency. According to Hibbard, transparency is a necessary tool in AI research and commerce because it prevents the corruption that arises when leaders of large corporations serve their own interests. Transparency makes use of public opinion and law to prevent the development and commercial production of AIs that would act against the interests of humanity. Another principle is involving as many experts as possible in looking for flaws in AI design, with the aim of ensuring that AI machines remain friendly towards humanity; Hibbard, in particular, asserts that “an open source design will enable a large community to verify that the design conforms to the conditions defined in the (mathematical) proof” (Hibbard par. 9). Further, business owners and AI developers will need to engage actively in the political debate on AI. This is necessary for initiating changes in both public opinion and legislation that recognize the responsibility of AI designers and managers for the welfare of humanity. Next, the scientific and business community should outcompete malicious actors in their efforts to build harmful robots or other AI agents or to use AI to extend their own power. Finally, the personal experiences of users and developers should be studied when developing new AI agents (Hibbard par. 17).

Moving from Hibbard’s prescriptive or normative ethics to the applied ethics of AI development, one should focus on the efforts of the Open AI Movement, “a non-profit organization to develop and advance Artificial Intelligence (AI) technologies, and share these in the greater good” (Sikka par. 1). The Open AI Movement works to promote ways for human-level AI to benefit society and to prevent it from harming society. Another important goal of the organization is to develop ethical guidelines for AI research and commerce. As Den Howlett explains, the primary task of the Open AI Movement is to promote digital ethics and to ensure that business digitization, which is taking place at a dynamic rate, incorporates these digital ethics guidelines in the use of AI (Howlett par. 1, 17-18).

Ethical Challenge Two: Development of Algorithms to Output Superethical Behavior
The issue of promoting digital ethics raised by Hibbard and Howlett is closely related to another ethical challenge facing major companies specializing in AI research, development, and commerce. In particular, Bostrom & Yudkowsky argue that because the AI agents developed in the future will be super-intelligent, they pose significant challenges to modern developers and philosophers. They explain, “Superintelligence is one of several ‘existential risks’ as defined by Bostrom (2002): a risk ‘where an adverse outcome would either annihilate Earth‐originating intelligent life or permanently and drastically curtail its potential’” (Bostrom & Yudkowsky). In order to meet the challenge of superintelligent robots and direct their activity toward fully desirable outcomes, humanity needs to develop algorithms that incorporate the principles of machine ethics and ensure that AI agents’ ethical conduct is superior to that of humans (Bostrom & Yudkowsky).

Conclusion
Two major challenges facing modern companies working in AI are the need to produce AI agents that will be safe for humanity and the need to develop algorithms that will output superethical conduct in AIs. To meet these challenges, philosophers, scientists, and business owners should unite their efforts to ensure that the future will be bright for human beings.

References
  • “AI And Robots Threaten To Unleash Mass Unemployment, Scientists Warn.” YaleGlobal Online. 16 Feb. 2016. Opposing Viewpoints in Context. Web. 19 May 2016.
  • Bostrom, Nick, and Eliezer Yudkowsky. “The Ethics of Artificial Intelligence.” Cambridge Handbook of Artificial Intelligence. Eds. William Ramsey and Keith Frankish. Cambridge University Press, forthcoming 2011. Web. 19 March 2016.
  • Hibbard, Bill. “Open Source AI.” Proceedings of the First Conference on Artificial General Intelligence. Eds. Pei Wang, Ben Goertzel, and Stan Franklin. 2008. Web. 19 May 2016.
  • Howlett, Den. “Digital Ethics, a High Priority for 2016 as AI Creeps into Our Lives.” Diginomica. 21 Dec. 2015. Web. 19 May 2016.
  • “Introduction to Artificial Intelligence: Opposing Viewpoints.” Artificial Intelligence. Ed. Noah Berlatsky. Detroit: Greenhaven Press, 2011. Opposing Viewpoints. Opposing Viewpoints in Context. Web. 19 May 2016.
  • Sikka, Vishal. “Open AI: AI for All.” Musings on Constants or Other Invariants. 13 Dec. 2015. Web. 19 May 2016.