[Printed in "Reality Module No.23" as "Freeform Futurology (6)", August 2001]
FREEFORM FUTUROLOGY (6)
(A casual series of articles exploring various aspects
of our evolving society)
Artificial Minds? (AI Revisited)
In 1996, world chess champion Garry Kasparov accepted the challenge of a computer, IBM's Deep Blue chess-playing program. Kasparov was shaken to the core. With 32 microprocessors, Deep Blue could analyze 200 million positions per second.
"I could feel - I could smell - a new kind of intelligence across the table," Kasparov admitted. "I got my first glimpse of artificial intelligence ... when in the first game of my match with Deep Blue, the computer nudged a pawn forward to a square where it could easily be captured." It dawned on Kasparov that for the first time he was facing a machine that could see ahead in novel ways. "I was stunned by this pawn sacrifice," he said.
In the first match, although Deep Blue took
the first game in the series, eventually Kasparov
found its Achilles' heel and trounced the
computer, 4 to 2 ... Kasparov found the weak
spot of the computer: chess-playing machines
pursue a set strategy. If you force the computer
to deviate from that strategy, it becomes helpless,
flailing like an overturned turtle on its shell. "If it
can't find a way to win material, attack the king
or fulfil one of its other programmed priorities,
the computer drifts planlessly and gets into
trouble," Kasparov said. "So although I think I do
see some signs of intelligence, it's a weird kind,
an inefficient, inflexible kind...."
[Kaku, Michio. Visions: How Science
Will Revolutionize the 21st Century and Beyond.
Oxford University Press, 1998. pp.60-61.
ISBN 0 19 850086 6]
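Kasparov's "programmed priorities" - win material, attack the king - amount to a fixed evaluation function driving a deep but mechanical search. As a rough illustration only (this is not Deep Blue's actual algorithm; the piece values, position encoding and toy move generator are my own assumptions), a fixed-priority search might be sketched like this:

```python
# Toy illustration of fixed-priority game search - not Deep Blue's real code.
# The evaluation only counts material, so a position where no material can be
# won looks no better than any other: the "drifting planlessly" Kasparov
# describes.

PIECE_VALUES = {"P": 1, "N": 3, "B": 3, "R": 5, "Q": 9}   # assumed toy values

def evaluate(position):
    """Score a position purely by material balance (white minus black)."""
    score = 0
    for piece in position:                    # position: e.g. ["Q", "p", "r"]
        value = PIECE_VALUES.get(piece.upper(), 0)
        score += value if piece.isupper() else -value
    return score

def search(position, depth, moves, apply_move, maximising=True):
    """Exhaustive fixed-depth minimax over whatever `moves` generates."""
    if depth == 0 or not moves(position):
        return evaluate(position), None
    best_score = float("-inf") if maximising else float("inf")
    best_move = None
    for move in moves(position):
        score, _ = search(apply_move(position, move), depth - 1,
                          moves, apply_move, not maximising)
        if (maximising and score > best_score) or (not maximising and score < best_score):
            best_score, best_move = score, move
    return best_score, best_move

# Toy usage: white's "moves" are simply captures of black (lowercase) pieces.
def captures(position):
    return [p for p in position if p.islower()]

def apply_capture(position, piece):
    remaining = list(position)
    remaining.remove(piece)
    return remaining

print(search(["Q", "p", "r", "n"], 1, captures, apply_capture))   # -> (5, 'r')
```

The machine relentlessly maximises its fixed priorities; it has no notion of a position being interesting for any other reason.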
I last touched on Artificial Intelligence in 1998 in
my essay "Prophets of the Silicon God." [RM2]
(This essay will also refer to elements of my "Is
There Meaning In Dreams?" series of essays [RMs
9, 10 & 11].)
I have found Kaku's book to be very informative
on the subject of Artificial Intelligence.
Firstly there are two approaches to AI - the
'bottom-up' school and the 'top-down' school.
Both are relevant.
The bottom-up school works with neural nets and robots. The robots learn by trial-and-error how to move and how to interact with objects in the real world - much the same way that babies learn how to judge distances with their eyes & to clasp objects - and later how to balance and how to walk.
We end up with robotic insects crawling across
the floor and avoiding objects - or maybe finding
their way through mazes.
(A far cry from the time-lapse films of the 1970s
of early robots moving a short distance, and then
spending hours calculating their position before
taking another step.)
The robots have become proficient at moving
about, grabbing things, and generally interacting
with the objects of the world.
The bottom-up robots are self-trained; they learn, as we do, how to work in the world. (They do the things - like moving around - that we do unconsciously, without "thinking.")
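To make "learning by trial and error" concrete, here is a minimal sketch of reward-driven learning in an invented one-dimensional world - an assumption of mine for illustration, not a description of any real robot or neural net:

```python
import random

# Toy "robot" learning by trial and error which way to move in a tiny
# one-dimensional corridor (positions 0..4): bumping the wall at 0 is
# punished, reaching position 4 is rewarded.

ACTIONS = ["left", "right"]
values = {(pos, a): 0.0 for pos in range(5) for a in ACTIONS}  # learned action values

def step(pos, action):
    new_pos = pos - 1 if action == "left" else pos + 1
    if new_pos < 0:
        return 0, -1.0            # bumped the wall
    if new_pos >= 4:
        return 4, +1.0            # reached the goal end
    return new_pos, 0.0

for episode in range(500):
    pos = random.randint(0, 3)
    for _ in range(10):
        # Mostly exploit what has been learned so far, sometimes explore.
        if random.random() < 0.1:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: values[(pos, a)])
        new_pos, reward = step(pos, action)
        # Nudge the stored value toward reward plus discounted future value.
        future = max(values[(new_pos, a)] for a in ACTIONS)
        values[(pos, action)] += 0.1 * (reward + 0.9 * future - values[(pos, action)])
        pos = new_pos
        if reward != 0.0:
            break

print({p: max(ACTIONS, key=lambda a: values[(p, a)]) for p in range(4)})
# After enough trial and error the learned choice at every position is "right".
```

Nothing here is told the rule "move right"; the rule emerges from punished and rewarded attempts, which is the essence of the bottom-up approach.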
The top-down school of AI is more traditional - it
is rule-based programming (heuristics) and
attempts to model reasoning. It is based on very
complex decision-trees and is, in effect, an
extension of the concept of the Expert System.1
1 Expert Systems can use a sort of question-and-answer system, where you provide the answers - to do something like diagnosing blood diseases.
Expert Systems work very well within their narrow
specialities - but give them a problem which lies
outside their area of expertise and they flounder,
or give nonsense answers.2
2 Like the Medical Expert System which diagnosed a rusty car as having measles.
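A narrow, top-down system of this kind can be pictured as a handful of if-then rules driven by yes/no answers. The symptoms, rules and conclusions below are invented purely for illustration - a real diagnostic system would have thousands of far more carefully engineered rules:

```python
# A deliberately tiny question-and-answer "expert system" in the narrow,
# top-down style described above.  Rule contents are made up for the example.

RULES = [
    ({"fever", "rash"}, "possible measles"),
    ({"fever", "fatigue", "pale skin"}, "possible anaemia"),
]

def diagnose(answers):
    """`answers` maps each yes/no question to True/False."""
    observed = {symptom for symptom, present in answers.items() if present}
    for conditions, conclusion in RULES:
        if conditions <= observed:            # every condition satisfied
            return conclusion
    return "no rule matched - outside this system's narrow speciality"

# The rusty car of footnote 2: its "symptoms" lie outside the rule base, so a
# well-behaved system should admit defeat rather than invent a disease.
print(diagnose({"fever": True, "rash": True}))             # possible measles
print(diagnose({"rust": True, "engine noise": True}))      # no rule matched
```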
An AI must be a generalised Expert System - and
so researchers attempt to map out rules to cover
all areas of human experience. (From the
elementary - "If you are holding an object and let
go, it will fall to the ground" - to the complex "If
you are at a birthday party and you give the
birthday person a gift which is identical to a gift
which someone else has already given to them,
you will have to take your gift back and exchange
it for something else. This rule does not apply if
your gift is money.")
We end up with hundreds of millions of lines of
code - but still the AI makes elementary mistakes.
(There are always rules so basic we never realised
they had to be included.)
In short, the top-down school is an attempt to model human reasoning - the so-called "higher" brain functions.
The failure so far to produce a generalised AI is
not a result of heuristics being a 'bad idea' - it is
because developing a generalised AI is such a
highly complex task.
What we have with these two schools of AI research is the modelling of two areas of human intelligence - our subconscious interacting-with-the-world and our higher-level reasoning ability.
We don't teach children thousands and thousands
of heuristics - we teach them basic rules, and they
reason out more complex rules from simpler ones
or learn through experimental trial-and-error. (A
top-down machine needs to be able to learn. The
machines are capable of combining existing rules
to produce new rules and conclusions from these
rules, but this may not be quite enough.)
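The "combining existing rules to produce new rules and conclusions" is essentially what a forward-chaining inference engine does. A minimal sketch, with facts and rules invented for the example:

```python
# Minimal forward chaining: keep applying if-then rules to the known facts
# until nothing new can be derived.

facts = {"it is raining", "I am outside"}

rules = [
    ({"it is raining", "I am outside"}, "I am getting wet"),
    ({"I am getting wet"}, "I should find shelter"),
    ({"I should find shelter", "a shop is nearby"}, "go into the shop"),
]

changed = True
while changed:
    changed = False
    for conditions, conclusion in rules:
        if conditions <= facts and conclusion not in facts:
            facts.add(conclusion)
            changed = True

print(facts)   # two new conclusions derived; the third rule waits for more data
```

The machine derives conclusions it was never explicitly given - but only within the space its rules already span, which is why this alone "may not be quite enough."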
In my original "Meaning in Dreams" essay I wrote about how each of us has a 'working model of the world' in our heads, and how we use this model in understanding the world and in making predictions about things such as how other people will behave.
Which school of AI is this 'working model' more akin to? Both - like the bottom-up school it is a neural net of associations which is learnt & modified, but like the top-down school it is based on rules and inferences.
When the top-down and bottom-up schools meet, we will be providing an Artificial Intelligence with a working model of the world which can be interacted with and from which inferences can be made.
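One speculative way to picture that meeting of the schools in code: association strengths learned from experience (bottom-up) supply the beliefs, and a small rule base (top-down) reasons over them. Everything below is my own toy assumption, not a description of any real system:

```python
# Speculative toy "working model": learned associations feed a rule-based
# reasoner.  Association pairs, rules and thresholds are invented examples.

associations = {("sky", "dark clouds"): 0.0}     # learned strength of a link

def observe(pair, co_occurred, rate=0.2):
    """Bottom-up: strengthen or weaken an association from experience."""
    target = 1.0 if co_occurred else 0.0
    associations[pair] += rate * (target - associations[pair])

rules = [
    ({"dark clouds"}, "rain is likely"),
    ({"rain is likely"}, "take an umbrella"),
]

def infer(observed):
    """Top-down: chain rules over whatever the associations make us believe."""
    beliefs = set(observed)
    if "sky" in beliefs and associations[("sky", "dark clouds")] > 0.5:
        beliefs.add("dark clouds")
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= beliefs and conclusion not in beliefs:
                beliefs.add(conclusion)
                changed = True
    return beliefs

for _ in range(10):                  # repeated experience strengthens the link
    observe(("sky", "dark clouds"), co_occurred=True)

print(infer({"sky"}))                # now includes "take an umbrella"
```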
[Ultimately the machine will have to be self-taught and self-teaching. What is the smallest set of heuristics needed to enable a machine to bootstrap itself into 'automated reasoning'?]
A 'generalised reasoning machine' would be a
marvellous thing, but would it really be accurate
to call it an Artificial Mind?
No! There is more to the human mind than the
'working model of the world.' Indeed that is a
component that is generated automatically while
we sleep. A mind has more than an
understanding of how to move about and a vast
collection of heuristics.
The missing component is alluded to in the poetic
piece "Is There Meaning in Dreams - An
I describe how the vast bulk of the mind is "mechanical, programmed." (It is the nature of
the brain to automate whatever it can - walking,
behaviour patterns, conditioned responses.) This
is the bit that follows rules - like stepping through
a program. This is a 'generalised reasoning
machine,' though it is one capable of some reprogramming on the fly.
In this vast engine of the mind thought trickles
down well-worn paths.
But then I mention something I call 'the magic ball' which can move about in this space - outside the well-worn paths.
"A small portion of the mind (consciousness) can
break its programming - can travel & invent at
[Reality Module No.10. p.3]
An AI can follow blindly down the steps of its program, like a mechanical thing - which is what it is.
Machines run programs because they are told to by us, the operators. We could tell a computer, for example, to explore a mathematical space - but we would still have to tell it to do this. Computers have no initiative. They do only what we tell them to do. This is because they are not alive - they are tools which we switch on and off.
The current research in AI will give us (maybe in
20 years) a generalised reasoning machine -
which will be extremely useful for solving a vast
array of problems.
[We could solve some of the problems relating to
the limitations of a generalised reasoning machine
by having the machine request more data or
telling us that "this does not compute." The real
problem is when the machine (and the operator)
don't realise that there is a knowledge gap. Sure, this problem exists with people too (we can be unaware of our ignorance), but it is more serious with computers because we tend to trust the results of mechanical calculation too much.]
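In code, "this does not compute" amounts to returning "unknown" - and asking for more data - instead of guessing when a required fact is missing. A hedged sketch, with an invented three-valued fact store:

```python
# Sketch of a reasoner that distinguishes "no" from "don't know".  The fact
# store and the example rule are assumptions made up for illustration.

facts = {"has fever": True, "has rash": None}     # None means "not yet known"

def conclude(required, conclusion):
    values = [facts.get(f) for f in required]     # missing facts also come back None
    if any(v is None for v in values):
        missing = [f for f, v in zip(required, values) if v is None]
        return "does not compute - need more data on: " + ", ".join(missing)
    if all(values):
        return conclusion
    return "rule does not apply"

print(conclude(["has fever", "has rash"], "possible measles"))
# -> asks for more data rather than confidently mis-diagnosing the patient
```

The harder problem remains the one noted above: neither this sketch nor its operator can flag a gap that the fact store does not even know it has.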
But the research will not give us an intelligent machine. That would require a machine with volition - a desire to do its own thing - in essence a conscious curiosity about the data it is working with.
The nature of consciousness remains elusive - and although I am convinced that reasoning (even very high level reasoning) can be automated, I cannot yet see how consciousness can be given to a machine.3
3 We could take a conscious entity like a human brain and interface it to a machine.
A machine without consciousness can become
artificially intelligent - but a machine would need
something akin to consciousness before it could
be labelled an artificial mind.
In conclusion - we can automate reasoning, but there isn't a mechanical analogue for consciousness.
- In This Series -
(1) What Can & Cannot Be Done - The Limits of
Futurology (April 2000)
(2) The $20 Computer (April 2000)
(3) Smashing Windows(TM) - The Ascent of Non-linear Thinking
(4) Nu Plastic Yu! (February 2001)
(5) Nu Plastic Yu Tu! (April 2001)
(6) Artificial Minds? (AI Revisited) (August 2001)
(7) Video-On-Demand (June 2002)
(8) Changes (June 2002)
(9) The Implications of Immortality (June 2002)
(10) Cheating in Education (April 2003)
Copyright © 2001 by Michael F. Green.
All rights reserved.