(October 2002) Feedback on 'Prophets of the Silicon God'

ERIC LINDSAY writes (February 1998): I think Frank Tipler is both overly optimistic and somewhat confused about what makes a personality. Uploading as a concept is fairly ancient in science fiction, although the name and the technologies change. I'm sure Bill Wright will point out that E E Smith had his hero Richard Seaton move the personality of his enemy Marc DuQuesne into an immaterial, immortal energy form, assisted by the mechanical brain of his 1000-mile-diameter spaceship, in The Skylark of Valeron. Doc Smith died long before Tron appeared. However, I think Jones may be the earliest, with his 1930s hero Professor Jamieson frozen in space, then revived and his brain placed in a mechanical body. Again, I'm sure Bill can give you chapter and verse. You would probably also enjoy Rudy Rucker's recent works, which have brain uploads and AI.

If you make a perfect copy of a working brain in a mechanical medium, then it will no longer be perfect once it starts working. The silicon personality lacks a body and the body's reactions, so the personality will rapidly diverge from the original. Furthermore, being mechanical, it can be reproduced exactly, to create multiple personalities. If these stay exactly in sync, we have shown human personality to be entirely mechanical; and if they don't stay in sync, then they can't all be identical to the original. Incidentally, in van Vogt's Null-A novels, the hero's personality is shifted from his dead body to an identical clone, one of a series of cloned bodies that happen to be lying around. Fun novel, not much sense, but lots of action.

I reply (April 1998): I haven't read any books by Rudy Rucker, but I'll check and see if the MSFC library has anything. I've been thinking even more about uploading since December, and the more I think about it the more absurd it seems. It's like this: the trouble with computers is that they're so cleverly designed that people can sometimes forget that they are machinery. If people said they were going to upload the "essence" of their arm into a robotic arm you'd think they were nuts, but if they talk about uploading their mind into a machine millions of times more complex than a robotic arm - are they suddenly talking sense? I think not! [[Robots can copy human movements - that's how some of them are trained to spray-paint cars, for example. Likewise they can copy "mind behaviour" - but that's all it is - a robotic simulation.]]

CHERYL MORGAN writes (February 1998): Fascinating article; here's hoping for more of the same. Your points are well taken, but I have a couple of reservations. First, we now know from Catastrophe Theory that systems can exhibit radically different behaviour following a small change in input parameters. If consciousness is a result of physical properties, it is not that unreasonable to expect it to arise spontaneously when the thinking apparatus reaches a certain level of complexity. Second, if, as you maintain, consciousness is different in kind to Turing machines, what sort of thing is it? Is it physical? If so, how is it different from brain structure? If not, in what medium does it exist? Isn't the existence of souls more improbable than the spontaneous development of consciousness?

Hello. What do you mean by "radically different behaviour"? If you are referring to the 'butterfly effect' the argument collapses:
Weather patterns will alter with a butterfly's wing-flaps, but they will still be recognisable weather patterns. You won't have a situation where a wave of a paper fan causes airborne mountains with chlorine atmospheres to spontaneously pop into existence. Consciousness is as alien as that to what has gone before. If you are referring to genuinely unexpected outcomes, please give me some examples. [A small numerical sketch of the butterfly-effect point appears at the end of this reply.]

"If...consciousness is different in kind to Turing machines, what sort of thing is it? Is it physical? If so, how is it different from brain structure? If not, in what medium does it exist?"

If consciousness is a physical thing, it will relate to the electrical activity in the brain. This is different from brain structure in the same way as the flow of electrons through a CPU differs from the physical structure of the CPU. If the mind isn't physical, then, as you've implied, it would have to be "spiritual" - whatever that may mean.

"Isn't the existence of souls more improbable than the spontaneous development of consciousness?"

Actually, no. But I'll have to answer this in two parts:

1. It is a matter of belief. If you are a scientific rationalist and believe that physical reality is all there is, then the mind has to fit into the brain - it has nowhere else to go! Since there are, as yet, no plausible physical agencies to explain the gradual evolution of consciousness, the scientific rationalist has to believe that it somehow arose spontaneously, and hope that the mechanisms by which this occurred are gradually revealed by future scientific discoveries.

2. Enchanted Diary. V.13, pp. 3458-3460. October 17th, 1991. "It began on Monday October 14th with:

"A dream/vision - I am not sure which.

"I was in a bed in a long room in an old house. (I am not sure whether it is this actual room or a variation in a dream.)

"I felt myself drift out of my body, just a few feet. I panicked and rushed back close to make absolutely sure that I was still breathing....

"When I had established a steady rhythm, I let go. I instantly found myself ever so far away - surrounded by stars. I looked casually behind me for the proverbial silver cord, but couldn't see one. I came back to the room and my body, reflecting that if I ever wanted to see anything interesting I'd have to learn how to travel slowly.

"It is possible that this is a dream influenced by 'Flatliners', but I don't think so. This is because the 'drifting' had the sensation of painful inevitability, like when I dislocated my kneecap. The 'drifting' was not painful, but it felt strange - like I'd irretrievably done something. (I imagine a butterfly breaking out of its chrysalis and stretching its wings for the first time feels like this. And when it flies?)

"I don't want it to happen again. It feels strange and is, I think, too close to death. But if it is real astral projection, it will happen again! I would have broken a 'restraining mould.' Next time it will happen easier.

"How do I feel? Scared, mostly, but I long to fly...."

So, Cheryl, that's my record of the deeply unsettling experience which has made me think that souls exist - and that the mind may be a separate thing to the brain (but closely interacting most of the time). I've spoken to people who have described their out-of-the-body experiences, but it is still a matter of belief unless (and even if) you've had one yourself. Are they true experiences, or hallucinations caused by flooding the brain with natural opiates? You are free to believe whatever you want to.
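[A minimal numerical sketch of the butterfly-effect point above. It is purely illustrative: the 'logistic map' used here is a standard textbook example of chaos, in Python, not anything taken from the essay or the letters. Two almost identical starting values soon disagree completely, yet every value stays between 0 and 1 - the 'weather' changes, but it is still recognisably weather.]

# Sensitive dependence on initial conditions, illustrated with the
# textbook logistic map x -> 4x(1 - x). (Hypothetical illustration only.)

def logistic_step(x):
    """One step of the chaotic logistic map with growth parameter 4."""
    return 4.0 * x * (1.0 - x)

def trajectory(x0, steps):
    """Iterate the map from x0, recording every value."""
    values = [x0]
    for _ in range(steps):
        values.append(logistic_step(values[-1]))
    return values

a = trajectory(0.400000, 40)   # 'no butterfly'
b = trajectory(0.400001, 40)   # one 'wing-flap' of difference
for n in (0, 10, 20, 30, 40):
    print("step %2d: %.6f vs %.6f" % (n, a[n], b[n]))
# The two runs soon disagree completely, yet both remain ordinary,
# bounded values of the same kind.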
JOHN NEWMAN writes (February 1998): Yeah, VR is virtual reality, and uploading is concrete unreality! [...] With regard to uploading, it seems there is one question you should ask yourself at the outset. Do you believe that a person's mind is entirely a phenomenon of the physical world, a result of chemistry, physics and biology, or do you believe there is another, let's call it 'spiritual', component? If you take the latter position there can obviously be no 'upload' to any mere physical medium, end of story. If you believe the former (as I do) you are saying that the mind, the personality and all its aspects are the result of a certain arrangement of atoms. In that case it would be precious to suggest that this arrangement of atoms can never be duplicated. At some time and place it will only be engineering. There is no 'God in the Machine'.

With respect to intelligence you need to be careful, I think, not to speak of it as an absolute. "Intelligence" is just a way we humans describe how we do some of the things we do. It is clear that some of us use intelligence to do some things that other folk do another way (perhaps with a better memory, or through specialisation). To talk of 'Pseudo intelligence' seems to assume we know and can always recognise the "real thing", but if decades of Artificial Intelligence research has shown us anything, it is that we do not know what intelligence is in any prescriptive sense. We only know intelligence empirically. We "know it when we see it", but surprise, surprise, we don't all agree! When you disparage 'simulated intelligence' I am reminded of a joke I used to tell about how I've been pretending to be intelligent for years! It's fooled some people, even me on occasions, but... For my money the vote would go:
I reply (April 1998): "To talk of 'Pseudo intelligence' seems to assume we know and can always recognise the 'real thing', but if decades of Artificial Intelligence research has shown us anything it is that we do not know what intelligence is in any prescriptive sense. We only know intelligence empirically."

Agreed. But the problem doesn't concern what intelligence is - it is recognising what intelligence is not! Something can be intelligently designed without being intelligent itself. Computer software is a perfect example: if your snazzy new word processor automatically capitalises the first letter after a full stop, it isn't because the program is intelligent (AI or otherwise) or understands sentences; it's because the programmer understands capital letters and the ends of sentences and wrote a simple software instruction to deal with them. [A toy sketch of such a rule appears at the end of this reply.] The difference with AI is that AI software is even more intelligently designed, and therefore easier to confuse with real intelligence.

(Programs which rewrite their own code - which do exist - are a more extreme case. But often the changes they make are randomly generated ("genetic algorithms") and their success is scored by a human being - and nothing genuinely original is incorporated; i.e. a music composition program won't ask for graphic design tools! The program is making allowable changes in its own tiny conceptual space. Human beings, on the other hand, can remodel and enlarge our conceptual spaces. This is how original and truly revolutionary ideas come into being.)

A high-level machine intelligence might have access to a much greater store of raw data than I have, and it will have been programmed to seek out correlations - but only I or another human can come up with a new way of sifting or combining data which the original programmers would have overlooked.

If the mind is truly just an arrangement of atoms in the brain, as you suggest, then yes, uploading is certainly possible. But, as in "Permutation City", it will be a copy of me. Transferring my mind into a machine environment is a different matter. If my brain is being systematically scanned, destroyed and reproduced in cyberspace, is there a point where my conscious awareness crosses the "brain-machine" bridge, or is it the slow death of self?
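[The toy sketch promised above - a hypothetical illustration in Python, not any real word processor's code. The rule embodies the programmer's understanding of where sentences end; the program that runs it understands nothing.]

# A toy 'capitalise after a full stop' rule: intelligently designed,
# but not intelligent. (Hypothetical illustration; names are made up.)

def auto_capitalise(text):
    """Capitalise the first letter of the text and of each new sentence."""
    result = []
    start_of_sentence = True              # the very first letter counts too
    for ch in text:
        if start_of_sentence and ch.isalpha():
            result.append(ch.upper())
            start_of_sentence = False
        else:
            result.append(ch)
            if ch in ".!?":               # a sentence has just ended
                start_of_sentence = True
    return "".join(result)

print(auto_capitalise("the rule is blind. it follows instructions. nothing more."))
# -> The rule is blind. It follows instructions. Nothing more.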
ROGER SIMS writes (February 1998): An idle thought: I have in the past had conversations with two computer experts. One was not a science fiction reader, and was sure that some day a computer would be built that would be able to think for itself. The other had been a member of the University of Chicago SF club, and knew with such incredible certainty that he would not even talk about the possibility that it might never be built.

I reply (April 1998): Hello! Yes, people's attitudes to AI are interesting. Damien Broderick raises the possibility of super-intelligent AIs coming into being next century, and, if we are lucky, "keeping us as pets." Personally I find this insulting - you can feed a computer bank with all the data in the world (theoretically), but what then? Just because the machine's got all the data doesn't mean it will have knowledge (which involves understanding linked data) or wisdom (which involves, among other things, applying judgement to linked knowledge). AI can only simulate these things - and not very well. You can think of a computer as a bit like an automated filing cabinet. Human beings decide what goes inside, where it goes, what happens to it, and what comes out.

We allow computers limited decision-making powers now, but only under our careful watching eye. Computers are tools - like shovels are tools. AI is a way of automating our decision-making tools and nothing more. If someone tells you 'the computer made a mistake', ask them 'what idiot typed the wrong data into the computer, and how are you going to make it up to me?' We push the shovel - the shovel doesn't push us. We (not the machines) are the decision-makers on planet Earth!

BILL WRIGHT writes (February 1998): Your essay 'Prophets of the Silicon God' is not the first exploration of AI - artificial intelligence - in the pages of ANZAPA, but I found it to be a good read. Only giants like Bangsund and Gillespie had heretofore perfected the Art of the Essay in Apazines. Now you have joined their ranks.

The Frank Tipler quote at the beginning expresses an idea that was pioneered in the classic short story 'Izzard and the Membrane' by Walter Miller Jr in the 1950s. A quote from that story is the command "Duplicate my Self Awareness Transor". It is part of fandom's small compendium of sacred literature. 'Izzard and the Membrane', and not the movie Tron (1982), appears to have been the first attempt to deal with the concept of 'uploading'. You might not yet have found the rich vein of ancient lore on the subject in other SF works which are long out of print. Lord of Light, by the late, great Roger Zelazny, achieves the extraordinary feat of insinuating 'uploading' into the warp and woof of the action without actually describing, let alone explaining, it. The novel also pioneered in literature the cinematographic technique of the flashback. It is an extraordinary blend of Buddhism and the Hindu Pantheon.

Your essay performs the useful function of giving me the tools to discuss [AI] in everyday conversation. This is not to say that I will attribute my remarks to you, but I will be one jump ahead of others in having the concepts in mind and will come across as incredibly wise without actually being so, of course.
Thanks for the concept of Uploading, and for the distinction between Virtual Reality (VR), which is outside reality, and Artificial Intelligence [AI], which is really real. You have also coined the phrase Augmented Wetware to describe neural implants such as the bionic ear. Your demonstration to the effect that Neural Networks and Expert Systems by their nature lack consciousness and, consequently, can never decide what they want to do was lucid and convincing. Simulation techniques can never become the real thing (consciousness and all) even if [it] appears to be indistinguishable from the real thing. The copy will always require stimulation from outside for any action to be performed. You then go on to give the thumbs down to any prospect of AI coming spontaneously into existence with the ever-increasing complexity of computer networks and neural interfaces. 'A right scholarly and informative piece of work' - Second Stage Lensmen by E E Smith Ph.D.

Your essay gets close to the criteria by which we may identify [AI] if it arises, but it doesn't spell them out. I will perform that office for you. First, the AI will be Aware. I will never be able to instruct the Web to "duplicate my self awareness transor" in such a way as to continue to be the original consciousness translated into cyberspace. Such an intelligence, be it created or evolved, will have its own independent existence quite apart from the existential Me.

Uploading is impossible. It is one of those obstinate speculations, like Ron Hubbard's Thetans or the medieval Soul, which are designed to separate people from reality. Therefore, as you rightly point out, it is dangerous in the extreme to take it seriously. Most of us are not strong enough to gaze unblinkingly at 'the clear, cold lines of eternity', as Roger Zelazny puts it in Prince of Chaos (1991). But we can do without navigators who steer us away from contemplation of what we really are. Thank you for the insight.
I reply (April 1998): I thank you for your encouraging comments. I also value the criteria you give for identifying an AI if one should arise. (I haven't come across such a thing before. It is, philosophically, very useful.)

MARC ORTLIEB writes (April 1998): Your comments about neural modifications lead to my favourite complaint about robots in science fiction. What sort of half-arsed designer would construct robots without radio communication? Why does R2D2 need a radio link to communicate with Luke when he should have broadband radio built in? Why isn't there room in a Dalek shell to have instantaneous communications? Okay, it'd mean that your average writer would have to abandon several convenient plot devices, but what the hell.....

I couldn't finish Permutation City but, having thoroughly enjoyed DIASPORA, I must give it a second try. I don't see any problem with viruses. I don't see them as a precursor to life, more as opportunistic Johnnie-come-latelies. A section of DNA capable of conning another cell into doing its dirty work for it is an inevitable result of the bacterial ability to keep DNA as plasmids, and given that eukaryotic cells seem to be descendants of colonial bacteria, it seems logical that viruses should be able to colonise them too. (I wonder whether there is fossil evidence for viruses in Archaean bacteria.)

BRUCE GILLESPIE writes (June 1998): Greg Egan seems to write a lot about uploading. Perhaps he anticipated the idea of your novel. Perhaps it's best not to read him, since he's rapidly running through all the good ideas in science and epistemology. All I can do is agree with your fine essay about the difference between intelligence and pseudo-intelligence. It's possible that we haven't begun to grasp the nature of the brain, despite extensive brain-mapping, because we have to find out how the whole brain fires, giving rise to 'consciousness'. Consciousness doesn't seem possible, let alone knowable and reproducible. I trust that you have sent a copy of your essay to Damien Broderick. [Address supplied.]

ALAN STEWART writes (June 1998): Interesting comments on "uploading". I agree it all sounds very nice, but I can't see it happening too soon. It would be like a program scanning stored images of an actor and selecting one image as a response. Nothing new, not life as we know it. The hardware/wetware modifications are more possible, and likely to be tried way before legislation covers such things. Just look at how drugs and cloning were done in the past. Just what can be interfaced/improved is yet to be seen.

RUSSELL BLACKFORD gave a lecture to the Melbourne Science Fiction Club on March 20th about uploading and artificial intelligence. I sent him a copy of "Reality Module No. 1", which contained the essay "Prophets of the Silicon God." He sent me a letter a few months later.

27th July 1998. Dear Michael, I apologise for taking so long to write back to you. I enjoyed the fanzine that you sent me. However, whatever the arguments for or against uploading (and I hope it was clear that I'm a bit sceptical, too), I'm not convinced by your approach. The trouble is that there's not much of a philosophical argument once you take out Searle's Chinese Room thought experiment. However, this has been roundly criticised by other philosophers, such as Daniel Dennett and David Chalmers. To take it to the extreme, how can all those bio-electrical things going on in the brain add up to consciousness, either?! Yet, it seems they do, unless you adopt a supernaturalist account of the "soul".
To me, this would be desperation. I agree with you, though, that the practical problems of uploading seem almost insurmountable. Even if we could scan a brain in sufficient detail and replicate its operations digitally (say with some kind of novel hardware; forget about software modelling for the sake of argument), we run into problems of identity. Is that upload really me? In what circumstances is it, or isn't it? You may have read my story "Lucent Carbon" by now. I try there to deal with these issues, but in a questioning way, rather than offering answers. Meanwhile, my talk to the MSFC has since been published in the June issue of Quadrant.... If the club is interested, I think I have a spare copy. Best wishes,

[Added March 2002]. I have not seen the issue of "Quadrant" but have found an anthology with the story "Lucent Carbon."

DAVID CHARLES CUMMER writes (August 1998): *Re: My Reply to Cheryl Morgan. Wow! The dream/vision you describe is incredible. When I first read this it reminded me of something similar I'd experienced, but now I can't remember what it was. Sorry.

I reply (October 1998): Re: Possible Astral Projection. I have never had another experience like it. It shook me up a great deal and gave birth to two recurrent nightmares.

One was a terrible fear of dying. Since my body and "soul" had come apart so easily, I was terrified that this would happen accidentally in my sleep - and I'd die in my bed.

Two was a fear of drifting very, very far away and getting "lost in space", never able to know exactly where I was or which way was home. (I've had the fantasy most SF fans have had of owning my own backyard-built starship, and taking it for a test-flight. You'd have to be very careful. A few thousand light-years in a random direction and you'd be surrounded by alien constellations.)

These fears kept my body and "soul" securely attached.

ROGER SIMS writes (October 1998): Ryct Artificial Intelligence: The father of Sarah Zettel is a skiffy fan and a former member of the University of Chicago fan club. He is also a former computer programmer for Ford Motor Company in Dearborn, Michigan. He does not believe that computers will ever exhibit intelligence.

BRUCE GILLESPIE writes (October 1998): *Partly Re: My Reply to Cheryl Morgan. I'm a sucker for people writing about their dreams. The one you describe is obviously one of those Big Ones that you remember for the rest of your life. I've just edited for Oxford University Press a rather obtusely convoluted book about the nature of consciousness, The Crucible of Consciousness by Zoltan Torey. It should be out soon. After disentangling some of the tortuous prose, I found a convincing argument that consciousness is not the sort of thing that could be programmed into Artificial Intelligence machines, or if it were, it could only happen if the AI computer were indeed conscious. It's a bit hard to work out what Torey means by 'consciousness', since he keeps defining it in terms that themselves have prolix explanations. What I think he is saying is that consciousness is a continuous feedback loop between, on the one hand, our sense of past, present and future (which we apprehend as 'the present') and, on the other hand, our sense of 'out there' and 'in here.' We have consciousness to the extent that we can separate [ourselves] from incoming information and derive conclusions from this information.
Torey's awkward way of expressing himself makes it impossible for me to summarise his thoughts more succinctly than that; but after struggling with the book for a few weeks I became convinced that he has made an honest effort to pin down what the mind actually is, as opposed to the mechanical elements that might or might not be used in its construction.

TERRY MORRIS writes (December 1998): As for AIs taking over the world, we'll just have to pull their plugs out.

DAMIEN BRODERICK says (January 2000): Casual conversation after Damien's 'Change in the Third Millennium' talk, about whether machines can become genuinely intelligent. (I am sceptical.) Damien says: "Let the experiment be tried." I tend to agree.

Copyright © 2002 by Michael F. Green and others. All rights reserved. Last Updated: 11 September 2004