Picture a future in which
every car on the road is self-driving, and car accidents practically never
occur, in which medical operations are performed flawlessly by machines, and
in which no one has to risk their life in war because battles are fought by
machines. Now imagine a future in which you no longer have a job, in which
biased and unethical autonomous weapons wage war, in which Artificial
Intelligence replaces human decision-making and reasoning, and in which the
human race itself might be eradicated. These are the possibilities and risks of
Artificial Intelligence, and they will impact society in a monumental way.
Artificial Intelligence, which is
commonly referred to as AI, can be described broadly in three different ways:
Artificial Narrow Intelligence (ANI), in which a machine is programmed to
complete a specific task with expertise; Artificial General Intelligence (AGI),
in which a machine’s cognitive capabilities equal those of a human; and
Artificial Super Intelligence (ASI), in which a machine’s cognitive
capabilities surpass those of humans (Brown 1). Currently, AI is still in
development: it has only reached the level of ANI, and has not yet achieved AGI
or matched the cognitive capabilities of humans. However, the question that
remains, and that pervades the development and evolution of all kinds of
Artificial Intelligence, is whether it poses a threat to humanity or whether it
will allow for the further development of the human race.

            Many well-known members of the scientific and
technological communities believe that AI does in fact pose a threat to life as
we know it. Speaking in an interview with the BBC, Professor Stephen Hawking,
the world-renowned cosmologist and theoretical physicist, stated, “I think the
development of full artificial intelligence could spell the end of the human
race. Once humans fully develop artificial intelligence, it will take off on
its own and re-design itself at an ever-increasing rate. Humans, who are
limited by slow, biological evolution, couldn’t compete and would be
superseded.” (Cellan-Jones 1) Hawking describes a machine that can redesign
itself over and over, each version improving on the last at a compounding rate,
until the machine and its code approach a near-perfect, fluid level.
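
To make the “ever-increasing rate” concrete, here is a toy calculation (with
arbitrary, assumed improvement rates, not figures from Hawking) contrasting
compounding machine self-improvement with slow biological change:

```python
# A toy sketch of compounding self-improvement (all rates are assumptions).
machine, human = 1.0, 1.0
for generation in range(1, 11):
    machine *= 1.5   # each redesign improves on the last by an assumed 50%
    human += 0.01    # biological change barely moves on this timescale
    print(f"generation {generation:2d}: machine {machine:6.2f}  human {human:4.2f}")
# After ten redesign cycles the machine is roughly 58x its starting capability,
# while the human baseline has grown about 10%.
```

Furthermore, in an op-ed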
piece Hawking co-wrote for the Independent in 2014, he stated, “Success in
creating AI would be the biggest event in human history. Unfortunately, it
might also be the last, unless we learn how to avoid the risks. In the near
term, world militaries are considering autonomous-weapon systems that can
choose and eliminate targets.” (Hawking, Russell 1) Imagine a world where wars
are fought with autonomous machines and weapons. It may sound beneficial,
because fewer lives would be risked in warfare. However, first-world countries,
which are at the forefront of AI development, would gain an even larger
advantage over third-world countries, allowing them to establish dominance even
more firmly than they do presently. An example of
this in the present day is the robot Atlas, created by Boston Dynamics. In the
videos “What’s New, Atlas?” and “Atlas the Humanoid Robot in Action,” the robot
demonstrates abilities that some humans lack, such as performing backflips and
jumping to and from increasing heights. The robot also appears to demonstrate
learning capabilities when it is pushed over and when objects are knocked out
of its grasp. Imagine if advanced
artificial intelligence and autonomous weapons were paired with this robot’s current
capabilities. Armies and weapons could be created that significantly outperform
those composed of humans. This could allow countries to vie for world power by
using these machines to seize control, and the resulting conflict might even
ignite World War III. Professor Hawking, a recipient of numerous awards for his
contributions to science, speaks with authority on these matters, and he raises
important questions that need answering, such as “How will humanity avoid these
risks?” and “Is it even possible to avoid these risks when machines are
autonomous?” These questions must be considered and addressed before any
further development of cognitive Artificial Intelligence and Superintelligence.

In
addition to Hawking, Elon Musk is another strong proponent of the view that
Artificial Intelligence threatens human existence. During the 2017 National
Governors Association meeting, Musk stated that Artificial Intelligence poses
an “existential threat” to human civilization. Musk, the CEO and CTO of SpaceX,
CEO and product architect of Tesla Motors, and the co-chairman of OpenAI, is
deeply versed in technology, science, and artificial
intelligence. He continued by saying, “robots will do everything better than
us…AI is a fundamental existential risk for human civilization, and I don’t
think people fully appreciate that. AI is the scariest problem.” (Domonoske
1) Musk has maintained this position for years, even helping to found OpenAI,
a non-profit AI research company dedicated to researching and paving the road
toward safe artificial intelligence. (OpenAI.org) Musk believes that certain regulations
need to be put into place before developing AI further, despite the fact that
he believes regulations to be “irksome” and “not fun”. (Domonoske 1) Further
information needs to be derived from experiments and tests with artificial
intelligence. Since no one is sure of the full capability of these machines,
limits should be put into place and guidelines should be set for those who
create these machines. Many countries have already accepted that Artificial
Intelligence may pose a threat, and the United Nations has even created a new
department to monitor and study the development of AI.
(Cluskey 1)

Bill Gates, the co-founder of Microsoft, also shares
the beliefs of Musk and Hawking. In an ‘Ask Me Anything’ post on Reddit, Gates
was asked how much of an existential threat Artificial Intelligence poses
to humankind. His response was as follows: “I
am in the camp that is concerned about super intelligence…First the machines
will do a lot of jobs for us and not be super intelligent. That should be
positive if we manage it well. A few decades after that though the intelligence
is strong enough to be a concern. I agree with Elon Musk and some others on
this and don’t understand why some people are not concerned.” (Holley 1)
The displacement of jobs is another threat that Artificial Intelligence poses
to society. Once Artificial Intelligence reaches a higher level of cognitive
capability, it will be able to perform most jobs that humans perform today at a
much higher success rate, and at lower cost. That will leave many people
without work. A counterargument to this risk is that the development and
installation of Artificial Intelligence will create more jobs, but will that be
enough to absorb the millions who will be displaced?

However, there are also others who
disagree with these notions. Mark Zuckerberg, the creator of Facebook, has
developed his own personal AI assistant within his home. Zuckerberg
was questioned during a Facebook Live broadcast regarding statements Elon Musk
had made about Artificial Intelligence (as mentioned earlier in this paper) and
whether AI posed a threat, to which Zuckerberg responded, “Whenever
I hear people saying AI is going to hurt people in the future I think: ‘Yeah
technology can generally always be used for good and bad and you need to be
careful about how you build it and you need to be careful about what you build
and how it’s going to be used.’ But people who are arguing for slowing down the
process of building AI, I just find that really questionable. I have a hard
time wrapping my head around that. If you’re arguing against AI then you’re
arguing against safer cars that aren’t going to have accidents and you’re
arguing against being able to better diagnose people when they’re sick.”
(Shead 1)

 

            Zuckerberg
provides a strong counterargument to Musk, Gates, and Hawking. The present and
possible future uses of Artificial Intelligence are not all bad. For example,
DeepMind, an Artificial Intelligence company owned by Google, is collaborating
with the UK National Health Service, using artificially intelligent image
recognition software to diagnose cancer and eye diseases from patient scans,
among other medical projects. Another example is the use of machine learning to
pick up on signs of heart disease and Alzheimer’s before they become an
apparent problem. (Nogrady 1) AI is also integrated into parts of everyday
life. Voice assistants such as Siri, Cortana, and the Google Assistant are all
examples of AI; they learn from your past searches and how you use your phone
to offer you new options, along with predictive text. However, these
forms of Artificial Intelligence are at their most basic, and are quite
primitive compared to the possibilities of future Artificial Intelligence. AI
in the future will pose many more risks and possible threats, as it will be
more advanced and able to complete many more tasks far better than humans ever
could.
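
To give a sense of how primitive this kind of pattern-learning can be, here is
a minimal sketch (using a made-up toy history, not any real assistant’s
implementation) of the counting logic behind a simple predictive-text
suggestion:

```python
# A minimal predictive-text sketch: count which word follows which in the
# user's past phrases, then suggest the most frequent follower (toy data).
from collections import Counter, defaultdict

history = "set an alarm set a timer set an alarm for work".split()

following = defaultdict(Counter)
for prev, nxt in zip(history, history[1:]):
    following[prev][nxt] += 1

def suggest(word):
    """Return the word this user most often typed after `word`, if any."""
    counts = following.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(suggest("set"))  # -> 'an', since "set an" appeared twice and "set a" once
```

Real assistants use far more sophisticated models, but the principle of
learning from a user’s past behavior is the same.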

            One of the
scariest things about Artificial Intelligence is that it has very low
interpretability, meaning that it is difficult to follow the path a machine
took to reach its decision. This could cause many problems, such as bias within
machines. This bias could be derived from a machine’s programming, and it has
the potential to cause harm if the machine has been programmed with malicious
intent. Furthermore, these biases may not appear as an explicit line of code
within the programming, but rather as subtle patterns embedded in small, short,
and possibly hidden interactions among the thousands of processes running
through the machine. (Harvard Business Review) In the future, this could allow
autonomous weapons, or even peacekeeping drones and robots, to be programmed
with hidden bias. This would create an ethical dilemma, especially if machines
were ever to become sentient and begin to outperform humans in every cognitive
task. These artificially intelligent machines could then possibly seize control
of society and eventually replace the human race.
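
As a concrete illustration, consider this minimal sketch (the data, features,
and scenario are hypothetical, not drawn from the cited sources): a model
trained on historically biased records absorbs the bias even though no line of
its code states it.

```python
# Hidden bias sketch: no rule below says "penalize the proxy group", yet the
# trained model learns exactly that from skewed historical data (all synthetic).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000
qualification = rng.normal(0, 1, n)      # a genuinely relevant feature
proxy = rng.integers(0, 2, n)            # an irrelevant group attribute

# Historical approvals were biased against the proxy==1 group,
# independent of qualification.
approved = (qualification - 1.5 * proxy + rng.normal(0, 0.5, n) > -0.75).astype(int)

X = np.column_stack([qualification, proxy])
model = LogisticRegression().fit(X, approved)

# The learned weights reveal the absorbed bias: a large negative weight on the
# proxy attribute, produced by the training data rather than explicit code.
print("weight on qualification:", round(model.coef_[0][0], 2))
print("weight on proxy group:  ", round(model.coef_[0][1], 2))
```

Because the bias lives in learned weights rather than readable instructions,
auditing the code alone would never surface it.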

            Another risk posed by the evolution of artificial intelligence is
that AI, and the neural network systems within it, operates on statistical
truth rather than ‘real truth’ (such as scientific law). (Harvard Business
Review 1) The statistical truths an Artificial Intelligence draws on come from
the past experiences and scenarios the program has been through, from which it
adapts and draws conclusions. This could cause a multitude of issues,
especially in life-or-death situations and when handling hazardous materials.
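
A minimal sketch (with invented numbers, standing in for the Harvard Business
Review’s general point) can show the gap between a learned statistical pattern
and the underlying physical reality:

```python
# Statistical truth vs. real truth (toy example): a model fit only to data from
# a safe operating region confidently extrapolates a pattern that reality breaks.
import numpy as np

def true_response(x):
    """The 'real truth': the system saturates at 5.0 (e.g. a physical limit)."""
    return np.minimum(x, 5.0)

# The program's entire "experience" comes from the region x in [0, 4]...
x_train = np.linspace(0, 4, 50)
y_train = true_response(x_train)

# ...so its statistical model (a straight line) matches that experience exactly.
slope, intercept = np.polyfit(x_train, y_train, 1)

# Outside its experience, the statistical truth and the real truth diverge.
x_new = 8.0
print("model prediction:", slope * x_new + intercept)   # ~8.0
print("actual behavior: ", true_response(x_new))        # 5.0
```

In a hazardous setting, acting on the extrapolated 8.0 instead of the real 5.0
is exactly the kind of failure described above.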

            A final risk of implementing artificial intelligence in society is
that it is difficult for these machines, both currently and presumably in the
future, to correct mistakes. The underlying programs that led to a machine’s
solution can be unimaginably complex, and the solution the machine arrives at
may be far from optimal if the conditions under which the system was programmed
and originally learned from change. (Harvard Business Review 1) Like humans,
these machines will make mistakes and learn from them. However, this poses an
issue because an Artificial Intelligence would continue to attempt scenario
after scenario without considering the dangers at hand. Again, this would be a
problem in life-or-death scenarios or when handling hazardous materials.
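
The following minimal sketch (an entirely hypothetical dosing scenario with
made-up numbers) shows why: nothing in a trained model re-checks the
assumptions it learned under, so when conditions change it simply keeps
producing the old answer.

```python
# A trained model silently repeating its mistake after conditions change
# (hypothetical scenario and numbers).
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)

# Training conditions: ambient temperature around 20 C, where the required
# setting falls 2 units per degree.
temp_train = rng.normal(20, 2, (200, 1))
setting_train = 100 - 2 * temp_train[:, 0] + rng.normal(0, 1, 200)
model = LinearRegression().fit(temp_train, setting_train)

# Deployment conditions shift to 35 C, where (suppose) the true required
# setting plateaus at 45 -- a regime the model never saw.
prediction = model.predict(np.array([[35.0]]))[0]
print("model's setting:", round(prediction, 1))   # ~30, extrapolating the old line
print("actually needed:", 45)
# The model raises no warning, and it will repeat this error on every request
# until humans notice, retrain it, or constrain its operating range.
```

The danger is not that the machine errs once, but that it has no built-in way
to recognize that the world it learned from has changed.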

In
conclusion, there is still too much uncertainty surrounding the discussion of
artificial intelligence and the possible threat it poses. However, it seems
clear that Artificial Superintelligence is a challenge for which humanity is
not ready now, and one it will not be equipped to handle for a long time. When
asking whether Artificial Intelligence poses a threat, more questions than
answers arise. Yet despite the uncertainty that surrounds this topic, one thing
is clear: the fact that Artificial Intelligence does not pose a threat to
humanity currently does not mean that it will not pose one in the future. If
the right regulations and precautions are not put into place, it could mean the
end of existence as humankind knows it.