
The Uncertain Future of AI and Human Rights

By Jaylee Davis


A brief survey of the past twenty years demonstrates the exponential growth of technology. Since the turn of the century, common household technologies have transformed from clunky metal bricks and famously unwieldy browsers into digital assistants that influence almost every aspect of our waking lives. Technological advancement outpaces itself, each decade bringing something unimaginable in the years prior. This explosion of technology has produced a corresponding outpouring of philosophical ideas and thought-provoking media, some entirely novel and some borrowed from earlier traditions. Modern-day thinkers muse about how artificial intelligence will affect a world where humans, long honored for our productivity and knowledge, will no longer reign supreme. Will a superior artificial intelligence see any value in human beings? How will this realign human rights in a world in which humans are usually placed at the top of the moral pyramid? Mathias Risse, Director of the Carr Center for Human Rights Policy at Harvard University, argues for deeper consideration of these questions in his essay “Human Rights and Artificial Intelligence: An Urgently Needed Agenda.”


The “singularity,” the point at which machine learning and artificial intelligence surpass human capability, has often been imagined in the media. Science fiction has created fascinating visions of this future, most popularly The Terminator, The Matrix, and Do Androids Dream of Electric Sheep? But Risse treats this moment not as mere fiction but as a potential reality. If there is no consciousness beyond the complex patterns of the mind, patterns that machines could in principle replicate through algorithms, “then there may eventually be little choice but to grant certain machines the same moral status that humans have.”


If this comes to pass, how would our current conception of human rights change? Would humans be subjugated by superior beings that assign no value to human life? In such an event, how would we argue that human beings merit rights at all?


Risse provides insight into this question by outlining the long-running debate between the positions of David Hume and Immanuel Kant. Hume held that rationality does not determine value; a superintelligent entity may be endowed with the ability to make rational judgments, yet that ability need not lead to any rational commitment to values. This is exemplified by what is known as the paperclip problem, a thought experiment popularized by the philosopher Nick Bostrom. An artificial intelligence with a seemingly limitless capacity for processing might adopt a goal as absurd as maximizing paperclip production. Though we might want to stop this AI from flooding the world with paperclips, its single-minded pursuit of that goal, paired with its ability to adapt and learn independently, may turn it against us. In this way, a superintelligent machine might come to value paperclips over human life.


Kant disagrees with Hume, however, on rationality and value. For Kant, it is through rational acts of choice that we assign moral value in the first place. His categorical imperative stipulates that all rational beings must follow a moral course of action (and conversely, that adherence to the categorical imperative embodies rationality). In relation to AI, this would mean that a supermachine, whose processing power arguably gives it an even greater capacity to reason, must follow the categorical imperative. Violating another rational being's capacity to reason puts the violator in conflict with that same capacity in itself. Risse explains, “For that reason, certain ways of mistreating others lead an actor into a contradiction with herself, in much the same way flaws in mathematical reasoning do.”


The purely algorithmic nature of AI may make it an exemplary moral model. Unaffected by the tribal, biased judgment of humans, AI might produce more egalitarian judgments.


Regardless of AI’s potential value commitments, Risse demands that we consider how best to align AI with human rights standards as it develops. This is a way to head off what might be the biggest threat to human rights in the future (which, at the current pace of technological acceleration, could be the near future).


Of course, this is not a simple task. Human rights law as it now exists is complicated and highly contested, and AI would only drive further complication and disagreement. The modern conception of human rights is human-centric: the general body of human rights law relies on a distinction between humans and the other lifeforms we have placed below us. The foremost change AI would bring is that “humans would need to get used to sharing the social world they have built over thousands of years with new types of beings.” Given the moral ambiguity of these technologies, we cannot confidently assume that AI will respect our ideas about human dignity.


One of the earliest treatments of this value-alignment problem is Isaac Asimov’s famous “Three Laws of Robotics.” His 1942 short story “Runaround” presents them as an excerpt from a fictional robotics handbook dated 2058:


  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.

  2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.

  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.


Since the canonization of these laws in the body of science fiction, they have been carried into real-life conversations about artificial intelligence. A 2017 conference led by some of the biggest names in machine learning produced the 23 Asilomar AI Principles, meant to guide the future of AI development. In Risse’s summary, the principles “insist that wherever AI causes harm, it should be ascertainable why it does, and where an AI system is involved in judicial decision making its reasoning should be verifiable by human auditors.”


However promising such auditing may be, the ever-increasing speed of machine information processing makes it nearly impossible. Value alignment also raises the question of which human rights values should be prioritized. Some AI thinkers make their human rights priorities clear, as demonstrated by Principle 11: “AI systems should be designed and operated so as to be compatible with ideals of human dignity, rights, freedoms, and cultural diversity.” Others consider this a monopolization of human rights (in Risse’s words, “ethical monopolization”) and instead support a crowdsourcing method to determine which rights should be prioritized. And this points to the bigger problem with aligning AI with human rights: there is no firm consensus on what those rights are.


Furthermore, some global leaders in AI development (e.g., China) have stalled human rights progress in the present, creating a gap between technological advancement and the ethics needed to accompany it. Risse presents cooperation as a measure that could aid alignment as AI development proceeds, but deems it unlikely: “Perhaps in due course AI systems can exchange thoughts on how best to align with humans. But it would help if humans went about the design of AI in a unified manner, advancing the same solution to the value-alignment problem. However, since even human rights continue to have detractors there is little hope that will happen.”


Risse also warns, “Wherever there is AI there also is AS, artificial stupidity... efforts made by adversaries not only to undermine gains made possible by AI, but to turn them into their opposite.” AI automation can lead to misjudgments. Additionally, the rise of tech giants concentrates data ownership and collection in a few hands, stifling competition. This already threatens civil and political rights.


What follows is the weaponization of AI technology. If machine learning is the product of human data and algorithms that may be racist or sexist, it has the capacity to inflict these learned patterns on the general public. The misinformation crisis that we currently face could be exacerbated by the ability to mass-produce falsehoods wrapped in a veneer of AI-generated truth. In the wrong hands, abuse of this capacity is inevitable. Accusations against technology giants such as Facebook and Alphabet (the parent company of Google) over monopolization and privacy violations demonstrate the immediacy of these issues.

 

Jaylee Davis is a GW Scope staff writer for the George Washington Undergraduate Review. A freshman majoring in English and minoring in American studies, she covered social media and politics extensively for her high school magazine.

