I've been fascinated by this question ever since I saw Robby-the-Robot in Forbidden Planet and Tobor the Great as a kid. Re Mark Vendelbosch's Post #0 question, the tricky part would be what criteria to go by in determining that the computer was actually what we loosely call 'self-aware,' PLUS what we really mean by 'capable of reason,' with stress on the meaning of the term 'capable.' On the latter, presumably one means more than merely 'AUTOMATICALLY logical,' no? Fallibility in the use of logic may be a sine qua non, though not a sufficient one.

Re computers or robots: if there's one thing that's bothered me about movies like Star Wars (which I otherwise love!), it's the ambiguity Lucas allowed (as Asimov practically fostered in his own robot stories) regarding any distinction between robots-as-machines and robots-as-'persons.' The 'machines' clearly wouldn't be worth risking one's life for on the basis of their worthwhileness as valuing sentients, whereas the presumed robot-'persons' clearly might be. How to tell the difference is the big question. What criteria are rationally appropriate for meeting the two criteria of 'self-aware' and 'capable of reason'? (Strictly speaking, this question applies just as much to ETs, but I'll let that go here.) I have no answer to this. I don't see enough solid knowledge presently existing to answer it, even in cognitive science or, as some in AI like to dwell on, 'networks.' Physical reductionists and philosophical determinists would disagree with me there, but they'd still have to deal with (and, so far as I've seen, haven't) distinguishing a sophisticated Eliza program (akin to HAL's appearance) that's not worth risking one's life for from a metallic 'friend' (akin to C3PO's appearance).
I will add one thing: because of my readings of Jean Piaget and his longitudinal studies on the empirical-behavioural development of a child's conceptual mind, I do not see an immobile computer with no capability of physical-environment praxis as ever being more than a super-sophisticated HAL that, like the old Eliza program, 'mimics' a human communication style, possibly with a separate agenda-program. As far as I can see, it could never develop into a bona fide 'sentient' that has any environmentally-oriented 'values.' (Maybe conceptual 'values' need to be a third criterion?) Because of this, I see the likes of Colossus: The Forbin Project or Proteus in Demon Seed as impossible per se. I see the relevance of Mark's question as applicable ONLY to robots with separate, self-contained 'programs.'

Further: in either case, such a thing is really no longer properly called a 'computer' or 'robot.' 'Cyborg' won't do, since that's a combination of man and machine (basically in the brain). Interestingly, there doesn't really seem to be a proper term to distinguish this idea of a sentient 'machine.' Lastly, since we're not talking about ETs but human-built 'advanced' automatons, another criterion may be required: how to tell the difference between an 'actual' X and a 'mimicked' X. If this doesn't imply the necessity of incontrovertibly answering the philosophical conundrums of identity re 'original,' 'duplicate,' 'copy,' 'actual,' and 'mimic' before deciding whether it's a toaster or a person, I don't know what would.
MTFBWY J-D
P.S.: Only after successfully dealing with my above criteria-questions would it be useful to debate metallic 'rights.'
(Edited by John Dailey on 8/13, 5:58pm)