Response to Athens Dialogues:
Science and Ethics
Science, ethics, and the future of human
intelligence
Introduction
The Science and Ethics session
to which this paper contributes is introduced in
the Athens Dialogues official statement thus:
The enormous achievements of Science and
Technology provide both huge promises but also
highly dangerous threats. Ethical questions have
moved from the realm of philosophy to the
practicalities of medical treatments. [...] The
emergence of new technologies [...] could have an
unprecedented effect on the brains of future
generations.
The official statement quoted above makes three
points.
Firstly, it highlights the importance of the
(positive or negative) impact the advancements of
science and technology might have on our lives;
and urges us to think about it.
Secondly, it draws our attention to a
particular field of research in which such impact
needs to be carefully investigated: the human
brain – how it works and how it can be best
preserved or even enhanced by means of the latest
scientific discoveries.
Thirdly, it suggests that questions about what is (morally) good or bad (for example, in relation to the scientific manipulation of the human brain) are no longer within the remit of philosophy; they are now within the remit of science (in this case, medicine). This point is very broad, and there is a variety of ways of understanding it.
On one interpretation, it might suggest that whatever facilitates scientific advancements is good: for example, if a certain operation on an individual’s brain will damage the individual but benefit the human species (e.g. by providing a better understanding of how the brain works), then it is a good thing to do. But while medicine can explain how value enters interactions between living creatures, how would it explain that we value abstractions, for instance that we value human autonomy or human dignity? Accounting for values such as autonomy and dignity is within the remit of philosophy, and philosophy can provide reasons and arguments in their support.
In this very brief contribution to the Athens
Dialogues, I will argue that science and
technology constantly raise new intellectual
challenges for us; and that we need philosophy to
address them. My argument rests on a thought
experiment regarding the enhancement of human
intelligence through modern technology. I will
describe the experiment; then give reasons why this thought experiment presents a coherent possibility that cannot be ruled out on in-principle grounds; and finally, I will raise some of the challenges this experiment presents to us. I will conclude by very briefly pointing out why these challenges are beyond the remit of science, and require philosophical investigation.
The thought experiment I will present will also
show my reasons for disagreeing with Professor T. P. Tassios on the issue of the relation between
science and ethics. In his paper “Moral issues and
Technology. Possible lessons from Ancient Greece”
(version submitted to the Athens Dialogues but
unpublished) Professor Tassios writes that
Technology does not ‘create’ new moral
problems; it merely accentuates (sometimes
disproportionately) existing moral
issues.
The thought experiment I present below shows that technology does create new moral problems, which we have not encountered before.
A new challenge from science and
technology
My argument that science and technology raise new ethical issues that philosophy is called upon to address focuses on the following question:
What if machines become more intelligent than humans?
The hypothesis was first explored by I. J. Good, in 1965, in his Speculations Concerning the First Ultraintelligent Machine. Good writes:
Let an ultra-intelligent machine be
defined as a machine that can far surpass all the
intellectual activities of any man however clever. Since the design of machines is one of these
intellectual activities, an ultra-intelligent
machine could design even better machines; there
would then unquestionably be an “intelligence
explosion”, and the intelligence of man would be
left far behind. Thus the first ultra-intelligent
machine is the last invention that man need ever
make.
The key idea is that if humans are ever able to design a machine that is more intelligent than humans, call it M1, this machine will be better than humans at designing machines. It follows that M1 will be capable of designing a machine more intelligent than the most intelligent machine that a human can ever design. In particular, M1 will be capable of designing a machine more intelligent than itself, call it M2. By similar reasoning, M2 will also be capable of designing a machine more intelligent than itself, call it M3. But M3 will also be capable of designing a machine more intelligent than itself, call it M4... and so on and so forth. Thus, if every machine in turn does what it is capable of, we should expect a sequence of ever more intelligent machines – and ultimately an intelligence explosion. This intelligence explosion is known in the philosophical literature as the “Singularity problem”. Philosophers have discussed it in depth, and found no incoherence in the argument from which the Singularity problem follows.
[1]
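The recursion at the heart of Good’s argument can be made explicit in schematic form. The following is a minimal sketch only: the intelligence measure I and the premise labels are mine, introduced purely for illustration, and are not part of Good’s or Chalmers’ own formulations.

    \text{(Seed)}\qquad I(M_1) > I(\text{human})
    \text{(Design)}\qquad \text{for every } n,\; M_n \text{ can design a machine } M_{n+1} \text{ with } I(M_{n+1}) > I(M_n)
    \text{(Conclusion)}\qquad I(M_1) < I(M_2) < I(M_3) < \cdots

The schema says nothing about how quickly the sequence grows or whether it has an upper bound; it only makes visible the inductive step on which the argument relies.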
Questions raised by the possibility of an
intelligence explosion
The possibility that there
will be an intelligence explosion is a coherent
thought; and the fact that its realization is for now practically impossible does not rule out that it might become possible at some point in the history
of humanity. We are all well aware that scientific
and technological advances that were considered
unthinkable just a few decades or even years ago
are now part of our everyday life.
If so, the Singularity problem briefly sketched
in the section above deserves very careful
consideration. The possibility of an intelligence explosion raises important new practical and philosophical concerns. For, if it happened, it would have enormous potential benefits, by facilitating now unthinkable scientific advances in every direction, and hence helping with all sorts of difficulties the human race faces at present. Superintelligence, if achieved, would help secure health, food, energy etc. for the whole of humanity. On the other hand, an intelligence explosion would also carry very serious potential dangers; for, the very same scientific advances it would facilitate could lead to the destruction of the human race and of the planet.
The Singularity problem raises questions about
the nature of intelligence and about the mental
capacities of artificial machines. In addition it
requires us to think afresh about values and
morality; and about consciousness and personal
identity. Some of the questions it raises are the
following:
- Are there good reasons to believe that
there will be an intelligence explosion?
- If it is possible that there will be an
intelligence explosion, how can we maximize the
chances of a good outcome? (and what counts as a
good outcome?)
- What will it be like to be a human being in a
world where an intelligence explosion has
happened?
- Can human beings survive, and possibly even benefit from, an intelligence explosion; and if so, in what way?
For reasons of brevity, I will expand only on
the last question. Suppose for the sake of
argument that human beings can survive an
intelligence explosion by having their brains migrate to a superintelligent machine, i.e. by having their mental functions uploaded onto the machine.
Uploading can take place in many different ways. Which is the best one? (And, first of all, by what criteria do we rank ways of uploading?)
Suppose, once again for the sake of argument,
that a possible, and perhaps the best way of
uploading onto a superintelligent machine is by
gradual nanotransfer. The details of this process
don’t matter in this context, but have been
discussed in the philosophical literature on the
Singularity problem. A brief sketch of how
nanotransfer might happen follows. Imagine
nanotechnology devices inserted into the human
brain and attached to a neuron each. On this
scenario, each device learns to simulate the
behaviour of the associated neuron, and also
learns about its connectivity. Once it has learned to simulate the neuron’s behaviour well enough, the device takes the place of the original neuron, off-loading the relevant processing to a computer. At this point the device moves on to other neurons
and repeats the same procedure, until eventually
every neuron has been replaced by an emulation,
and all processing has been off-loaded to a
computer.
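To make the structure of this procedure vivid, here is a deliberately toy sketch in Python. It is purely illustrative: the classes Neuron and NanoDevice and the function gradual_nanotransfer are hypothetical names of my own, and nothing in the code corresponds to real neurotechnology; it only mirrors the replace-one-neuron-at-a-time logic just described.

    # Toy model of the gradual nanotransfer procedure described above.
    # All names are hypothetical; the code only mirrors the logic of
    # replacing neurons one at a time with emulations.

    from dataclasses import dataclass


    @dataclass
    class Neuron:
        """A biological neuron, reduced to the behaviour the argument needs."""
        neuron_id: int

        def respond(self, stimulus: float) -> float:
            # Stand-in for the neuron's actual biological input/output behaviour.
            return stimulus


    @dataclass
    class NanoDevice:
        """A device attached to one neuron: it learns to reproduce that
        neuron's behaviour, then takes its place and off-loads the
        processing to an external computer."""
        target: Neuron
        learned: bool = False

        def observe(self, trials: int = 1_000) -> None:
            # Watch the neuron respond to inputs until the simulation is
            # judged faithful enough (trivially so in this toy model).
            self.learned = trials > 0

        def respond(self, stimulus: float) -> float:
            assert self.learned, "device must learn the neuron before replacing it"
            # In the thought experiment this computation now runs on an
            # external computer rather than in biological tissue.
            return self.target.respond(stimulus)


    def gradual_nanotransfer(brain: list[Neuron]) -> list[NanoDevice]:
        """Replace every neuron in turn with an emulation; at no point
        is the whole brain swapped at once."""
        emulated = []
        for neuron in brain:
            device = NanoDevice(target=neuron)
            device.observe()           # learn the neuron's behaviour
            emulated.append(device)    # the device takes the neuron's place
        return emulated                # all processing is now off-loaded

Even in this toy form, the sketch makes the philosophical point concrete: at every intermediate stage the system is a mixture of original neurons and emulations, which is exactly what gives force to the questions about consciousness and personal identity raised below.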
So, suppose that I can upload my brain this
way, by gradual nanotransfer, onto a computer. Will the resulting superintelligent machine with
my mental activities running on it be “me”? This
question can be explored in many directions:
- What happens to consciousness during the
uploading process?
- Under what circumstances does a person
persist over time?
- Suppose that after uploading, my cognitive
systems are enhanced to the point that they use a
wholly different cognitive structure. Would I
survive this process, and how?
- If uploading eventually becomes possible and is a good outcome for the human race, how can we, in the present, ensure that it happens in the future?
These are questions that go beyond what medicine, and science in general, can investigate, but that philosophy can help with.
Footnotes
[1] An excellent study of the Singularity problem, on which this essay is based, is David Chalmers’ “The Singularity: A Philosophical Analysis”, forthcoming in a special issue of the Journal of Consciousness Studies entirely devoted to this theme.