
Alex Stearns

Peters


12/3/2017

English 121

 

The term “robotics” was coined by the legendary science fiction writer Isaac Asimov in his 1941 short story “Liar!”. He was one of the first to see the vast potential of up-and-coming technologies that had yet to attract public approval or interest in his time. Since then, robotics has been on a startling upward trajectory that has placed it at the forefront of cutting-edge technology. While robotics has brought many benefits to modern-day humanity, it is also the subject of endless heated debate. Humanity is on the verge of a robot revolution, and while many see it as a gateway to progress not seen since the Renaissance, it could just as easily result in the end of humanity. With the ever-present threat of accidentally creating humanity’s unfeeling successors, it is only natural to question how much, if at all, we should allow ourselves to become reliant on our technologies.

“As machines get smarter and smarter, it becomes more important that their goals, what they are trying to achieve with their decisions, are closely aligned with human values,” said Stuart Russell, a professor of computer science at UC Berkeley and co-author of a standard university textbook on artificial intelligence. Russell is a strong believer that the survival of humanity may well depend on instilling morals in our AIs, and that doing so could be the first step to ensuring a peaceful and safe relationship between people and robots, especially in simpler settings. “A domestic robot, for example, will have to know that you value your cat,” he says, “and that the cat is not something that can be put in the oven for dinner just because the fridge is empty.” This raises the obvious question: how on Earth do we convince these potentially godlike beings to conform to a system of values that benefits us?

While experts from several fields around the world attempt to work through the ever-growing list of problems involved in creating more obedient robots, others caution that such work could be a double-edged sword. While it may lead to machines that are safer and ultimately better, it may also introduce an avalanche of problems regarding the rights of the intelligences we have created.

The notion that human-robot relations might prove tricky is far from a new one. In his 1942 short story “Runaround,” later collected in I, Robot, Asimov introduced his Three Laws of Robotics, designed as a basic set of laws that all robots must follow to ensure the safety of humans: 1) a robot may not harm a human being; 2) a robot must obey orders given to it unless doing so conflicts with the first law; and 3) a robot must protect its own existence unless doing so conflicts with either of the first two laws. Asimov’s robots adhere strictly to the laws and yet, limited by their rigid robot brains, become trapped in unresolvable moral dilemmas. In one story, a robot lies to a woman, telling her that a certain man loves her when he does not, because the robot interprets hurting her feelings with the truth as a violation of the first law. To avoid breaking her heart, the robot breaks her trust, traumatizing her and ultimately violating the first law anyway. The conundrum drives the robot insane. Although fictional, Asimov’s laws have remained a central entry point for serious discussions about the nature of morality in robots, and they serve as a reminder that even clear, well-defined rules may fail when interpreted by individual robots on a case-by-case basis.
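
To make the hierarchy concrete, the short Python sketch below (my own illustration, not drawn from Asimov or from any researcher quoted in this essay) treats the Three Laws as a strict ranking over a robot’s candidate actions; the violates_* functions are hypothetical stand-ins for the robot’s own judgment about consequences. It also shows roughly how the lying robot’s dilemma arises: once every available action registers as First Law harm, no acceptable action remains.

```python
# Rank candidate actions by which laws they would violate, with the First Law
# outweighing the Second and the Second outweighing the Third.

def rank_by_three_laws(action, violates_first, violates_second, violates_third):
    """Lower tuples are better; a violation of a higher law always dominates."""
    return (violates_first(action), violates_second(action), violates_third(action))

def choose_action(candidates, violates_first, violates_second, violates_third):
    """Pick the candidate that best respects the hierarchy, or None when every
    option would break the First Law and no acceptable move remains."""
    ranked = sorted(candidates,
                    key=lambda a: rank_by_three_laws(a, violates_first,
                                                     violates_second, violates_third))
    best = ranked[0]
    return None if violates_first(best) else best

# Roughly the lying robot's bind: it reads emotional pain as First Law harm,
# so telling the truth and telling a comforting lie both violate the law.
options = ["tell the painful truth", "tell a comforting lie"]
print(choose_action(options,
                    violates_first=lambda a: True,    # every option hurts someone
                    violates_second=lambda a: False,
                    violates_third=lambda a: False))  # prints None
```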

Accelerating advances in AI technology have recently spurred increased interest in the question of how newly intelligent robots might navigate our world. With a future of highly intelligent AI seemingly close at hand, robot morality has emerged as a growing field of discussion, attracting scholars from ethics, philosophy, human rights, law, psychology, and theology. There has also been considerable public concern, as many noteworthy minds in the scientific and robotics communities have cautioned that the rise of the machines could well mean the end of the world.

Public concern has centered on “the singularity,” the theoretical moment when machine intelligence surpasses our own. Such machines could defy human control, the argument goes, and, lacking morality, could use their superior intellects to extinguish humanity. If that moment comes, robots with human-level intelligence will need human-level morality as a check against bad behavior.

However, as Russell’s example of the cat-cooking domestic robot illustrates, machines would not necessarily need to be brilliant to cause trouble. In the near term we are likely to interact with somewhat simpler machines, and those too, argues Colin Allen, will benefit from moral sensitivity. Allen is a professor of cognitive science and of history and philosophy of science at Indiana University Bloomington. “The immediate issue,” he says, “is not perfectly replicating human morality, but rather making machines that are more sensitive to ethically important aspects of what they’re doing.”

And it’s not merely a matter of
limiting bad robot behavior. Ethical sensitivity, Allen says, could make robots
better, more effective tools. For example, imagine we programmed an automated
car to never break the speed limit. “That might seem like a good idea,” he
says, “until you’re in the back seat bleeding to death. You might be shouting,
‘Bloody well break the speed limit!’ but the car responds, ‘Sorry, I can’t do
that.’ We might want the car to break the rules if something worse will happen
if it doesn’t. We want machines to be more flexible.”
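
Allen’s speed-limit example can be captured in a few lines of code. The sketch below uses invented harm numbers and a hypothetical should_exceed_speed_limit helper, not anything from a real driving system, to show the kind of flexibility he describes: the car breaks the hard rule only when obeying it is expected to cause greater harm.

```python
# Weigh the expected harm of keeping the rule against the harm of breaking it,
# rather than treating "never exceed the speed limit" as an absolute constraint.

def should_exceed_speed_limit(harm_if_obeyed: float, harm_if_broken: float) -> bool:
    """Break the rule only when obeying it is expected to cause greater harm."""
    return harm_if_obeyed > harm_if_broken

# Routine commute: a ticket and extra crash risk outweigh a few minutes saved.
print(should_exceed_speed_limit(harm_if_obeyed=0.1, harm_if_broken=1.0))   # False

# A passenger bleeding to death in the back seat: delay is now the greater danger.
print(should_exceed_speed_limit(harm_if_obeyed=50.0, harm_if_broken=1.0))  # True
```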

As machines get smarter and more
autonomous, Allen and Russell agree that they will require increasingly
sophisticated moral capabilities. The ultimate goal, Russell says, is to
develop robots “that extend our will and our capability to realize whatever it
is we dream.” But before machines can support the realization of our dreams,
they must be able to understand our values, or at least act in accordance
with them.

Which brings us to the first colossal hurdle: there is no agreed-upon universal set of human morals.
Morality is culturally specific, continually evolving, and eternally debated.
If robots are to live by an ethical code, where will it come from? What will it
consist of? Who decides? Leaving those mind-bending questions for philosophers
and ethicists, roboticists must wrangle with an exceedingly complex challenge
of their own: How to put human morals into the mind of a machine.

There are a few ways to tackle the problem, says Allen, co-author of the book Moral Machines: Teaching Robots Right From Wrong. The most direct method is to program explicit rules for behavior into the robot’s software, the top-down approach. The rules could be concrete, such as the Ten Commandments or Asimov’s Three Laws of Robotics, or they could be more theoretical, like Kant’s categorical imperative or utilitarian ethics. What is important is that the machine is given hard-coded guidelines upon which to base its decision-making, as in the sketch below.
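
The following is a minimal sketch of what such hard-coded guidelines might look like in software; the Action class, its fields, and the example outcomes are invented for illustration and are not taken from Allen’s book. It contrasts a concrete rule list with a more theoretical utilitarian principle, and shows that two top-down guidelines can disagree about the very same choice.

```python
from dataclasses import dataclass, field

@dataclass
class Action:
    name: str
    wellbeing_change: float          # predicted net effect on the people involved
    outcomes: set = field(default_factory=set)

# Concrete, hard-coded rules: outcomes the robot may never bring about.
FORBIDDEN_OUTCOMES = {"deceive a person", "injure a person"}

def allowed_by_rules(action: Action) -> bool:
    """Concrete top-down check: reject any action producing a forbidden outcome."""
    return not (action.outcomes & FORBIDDEN_OUTCOMES)

def utilitarian_choice(actions):
    """Theoretical top-down check: pick the action with the best predicted outcome."""
    return max(actions, key=lambda a: a.wellbeing_change)

options = [
    Action("report the bad news honestly", wellbeing_change=-1.0),
    Action("tell a comforting lie", wellbeing_change=2.0,
           outcomes={"deceive a person"}),
]
print([a.name for a in options if allowed_by_rules(a)])  # the rule list permits only honesty
print(utilitarian_choice(options).name)                  # the utilitarian rule prefers the lie
```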
