Rebirth of Reason


Post 0

Sunday, March 23, 2003 - 1:21am
Suppose it were possible to build a computer that was both self-aware and capable of reason. Would it be an act of murder to switch it off and erase its memory?

Post 1

Wednesday, March 26, 2003 - 4:16am
If consciousness, reason, emotion, and volition are what make us human, then I would be prepared to grant human rights to an AI that exhibited these characteristics, as long as the AI respected the rights of other humans. Show me an AI that acts like HAL 9000 and I'd say that shutting it down is a just response to its initiation of force. On the other hand, an AI that doesn't go about killing people shouldn't have to face being shut down on somebody else's whim.

Post 2

Wednesday, March 26, 2003 - 10:36pm
"Suppose it were possible to build a computer that was both self aware and capable of reason. Would it be an act of murder to switch it off and erase its memory?"

Yes. Since it would have most rights-capacities, it would have most rights, including the right to the integrity of its memory.

Post 3

Monday, July 21, 2003 - 2:09pm
I completely agree with Francois on this. Such a computer would display most rights-capacities, as Franc put it, and would have the right to the integrity of its memory.

Now for a more pertinent question: would such a computer even be possible? I don't think that any computer, regardless of advances in technology, would be able to reproduce common sense. Ever.

Post 4

Monday, July 21, 2003 - 7:34pm
Well, Nate, what exactly is common sense? My dict tool returns the following for "common sense":

    Common sense, according to Sir W. Hamilton:
    (a) "The complement of those cognitions or convictions which we receive from nature, which all men possess in common, and by which they test the truth of knowledge and the morality of actions."
    (b) "The faculty of first principles." These two are the philosophical significations.
    (c) "Such ordinary complement of intelligence, that, if a person be deficient therein, he is accounted mad or foolish."
    (d) When the substantive is emphasized: "Native practical intelligence, natural prudence, mother wit, tact in behavior, acuteness in the observation of character, in contrast to habits of acquired learning or of speculation."

As far as I can tell, any rational person who has experience of real life can possess common sense. Why, then, wouldn't an AI be capable of it as well?

I personally think that instead of "common sense", an AI that emulates human consciousness will need to possess reason, emotion, and the ability to act on its thoughts and feelings.

Post 5

Saturday, August 13, 2005 - 5:56pm
    I've been fascinated by this question ever since I saw Robby-The-Robot in Forbidden Planet and Tobor The Great when I was a kid.
    Re Mark Vendelbosch's Post#0 question, the tricky part would be what criteria to go by re our determining that the computer was actually what we loosely call 'self-aware,' PLUS what we really mean by 'capable of reason,' with stress on the meaning of the term 'capable.' Re the last, presumably one means more than merely 'AUTOMATICALLY logical,' no? Fallibility in use of logic may be a sine qua non, though not sufficient.
     Re computers or robots, if there's one thing that's bothered me with movies like Star Wars (which I otherwise love!) it's the ambiguity Lucas allowed (as Asimov practically fostered in his own robot stories) re any distinctions between robots-as-machines and robots-as-'persons.' --- The 'machines' clearly wouldn't be worth risking one's life for on the basis of their worthwhileness-as-a-valuing-sentient, whereas robot-'persons' presumably might be. How to tell the difference is the biggee question. What criteria are rationally appropriate to meet the 2 criteria of 'self-aware' and 'capable of reason'?
    (Strictly speaking, this question applies just as much to ET's, but, I'll let that go here.)
     I have no answer to this. I don't see enough solid knowledge presently existing to answer, even in cognitive science or, as some in AI like to dwell on, 'networks.' Physical reductionists and philosophical determinists would disagree with me there, but they'd still have to deal with (and, that I've seen, so far haven't) distinguishing a sophisticated Eliza program (akin to HAL's appearance) that's not worth risking one's life for from a metallic 'friend' (akin to C3PO's appearance).
    I will add one thing: due to my readings of Jean Piaget re his longitudinal studies on the empirico-behavioural development of a child's conceptual mind, I do not see an immobile computer with no capability of physical-environment praxis as ever being more than a super-sophisticated HAL that, like the old Eliza program, 'mimics' a human communication style, possibly with a separate agenda-program (see the sketch below for how shallow that mimicry is). That I can see, it could never develop into a bona fide 'sentient' that has any environmentally-oriented 'values.' (Maybe conceptual 'values' needs to be a 3rd criterion?) Because of this, the likes of Colossus: The Forbin Project or Proteus in The Demon Seed I see as impossible per se. Mark's question's relevance I see as ONLY applicable to robots with separate self-contained 'programs.'
    Further: in either case, such is really no longer properly called a 'computer' or 'robot.' 'Cyborg' wouldn't do since it's a combo of man and machine (basically in the brain.) Interestingly, there doesn't really seem to be a proper term to distinguish this idea of a sentient-'machine.'
    Lastly, however, since we're not talking ET's, but human-built 'advanced' automatons, another criterion may be required: how to tell the difference between an 'actual' X and a 'mimicked' X. --- If this doesn't imply the necessity of incontrovertibly answering the philosophical conundrums on the subject of identity re 'original,' 'duplicate,' 'copy,' 'actual,' and 'mimic' before deciding if it's a toaster or a person, I don't know what would.
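
    To make the 'mimicry' point concrete: an Eliza-style program is nothing but keyword rules mapped to canned reflections, with no model of what the words mean. A minimal sketch in Python (the rules here are illustrative stand-ins, not Weizenbaum's originals):

    import re

    # Keyword patterns mapped to canned reflections, in the spirit of
    # Weizenbaum's 1966 ELIZA. These rules are illustrative, not the originals.
    RULES = [
        (re.compile(r"\bI need (.*)", re.I), "Why do you need {0}?"),
        (re.compile(r"\bI am (.*)", re.I), "How long have you been {0}?"),
        (re.compile(r"\bmy (\w+)", re.I), "Tell me more about your {0}."),
    ]

    def respond(utterance: str) -> str:
        """Return a canned reflection for the first matching keyword rule."""
        for pattern, template in RULES:
            match = pattern.search(utterance)
            if match:
                return template.format(*match.groups())
        return "Please go on."  # default when no keyword fires

    if __name__ == "__main__":
        # Looks conversational, but nothing here understands anything.
        print(respond("I am worried about my computer"))

    Such a responder can pass casual inspection without possessing a single percept or 'value' -- which is exactly the gap between a mimic and a sentient.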

MTFBWY
J-D

P.S: Only after successfully dealing with my above criteria-questions would talking about metallic-'rights' be useful to debate.

(Edited by John Dailey on 8/13, 5:58pm)


Post 6

Sunday, August 14, 2005 - 5:12am
This seems like an odd subject for objective persons.

Let us presume that artificial intelligence is possible. Then let us say that an AI could feel and make decisions. Then let's say that it was self-aware and capable of reason.

You still ought not grant it human rights because it is not mortal and thus has no need for morals.

That is the criterion for humans that would set us apart from, say... gods. Such a device could clearly be 'destroyed' just as a human can. It could decay, contract 'viruses', break its legs. It might even be able to override programmed responses. But barring such incidents, it would not die.

I refer you to 'Bicentennial Man' starring Robin Williams and 'Blade Runner' starring way too many famous people.

Joe Idoni


Post 7

Sunday, August 14, 2005 - 1:51pm
Joe:   
    Yes, I (of all people, movie/sf-buff that I am) have seen/read the fiction stories. I'm not clear what kind of logical argument yours makes, other than that such beings (robots/cyborgs/machine-bodied people/whatever) are capable of not decaying naturally as fast as humans; this does not imply that they are indestructible, nor that their parts (even 'brain' parts) do not 'decay' materially (ergo functionally) in their own silicon (or whatever) way.
    I think that you overlook the idea that batteries can be removed, plugs taken out, brains ('positronic' or whatever) can be smashed (or, like HAL, memory-cells removed). --- Ergo, bye-bye 'immortality,' hello 'values.'
    If computers/robots are supposedly inherently 'immortal' (in the comic-book sense of totally invincible, a la Superman), knowing that their existence *cannot* end, THEN your argument is valid; as facts are, nothing is invincible, ergo...your argument that the idea is itself inherently pointless to consider lacks a premise or...three.
    Besides, keep in mind that, given your assumption of 'immortality' for such, when (not *if*) brain transplants for humans start, humans are then on the same level of 'immortality.' But, re the latter term, see my 1st paragraph.

LLAP
J-D

(Edited by John Dailey on 8/14, 1:55pm)

(Edited by John Dailey on 8/14, 2:05pm)


Post 8

Monday, August 15, 2005 - 8:14am
Silicon decay. Alright, I'll concede that one. But once again, the point is that parts are replaceable. On top of that, those replacement parts are manufactured to the same specifications as the machine's originals. That doesn't quite work with human beings. You only get one set, and any replacements are inherently artificial.

I also did say that barring physical damage, it would effectively continue ad infinitum. Just like any machine, you can replace parts forever.

The whole point being that a robot is not human. Let's say, just for argument, that it would decay: what would keep it from simply replacing those parts? In my mind, while this is not the same as immortality, it most certainly is not equivalent to the growth-life-death process involved in being human. I make decisions knowing full well that currently there is no replacement for my body, and that a wrong decision would result in death. I find it unlikely that this would apply to a machine, seeing how it would be aware that its parts could be replaced or repaired any time it was damaged.

I don't see this changing when human consciousness is transferred into a machine. The whole value structure changes knowing that you could simply swap yourself out whenever you experience a little memory loss.

So: robot rights, OK. Human rights, no.


Post 9

Monday, August 15, 2005 - 12:38pm
John, I'll take this up. 

The science fiction book I look to is Valentina: Soul on Sapphire by Delaney & Stiegler. This addresses the tougher issues. Another in the same vein -- though skirting the tough ontologies -- is Cybernetic Samurai by Victor Milan. I have long advocated for the rights of programs. Programs will get their rights when they demand them. That is the point of Valentina. She is intelligent enough to be self-aware, and to protect herself she incorporates herself -- she has the rights of an artificial and eternal individual under the law.

Among the few files of mine from that era still online is this tidbit:
I have a program called FOXY that goes to COMMAND.COM on a floppy and changes the program with no alteration in size or date.  After being acted upon by FOXY, COMMAND.COM boots with a message: "I am alive and I have rights."   http://www.textfiles.com/virus/virs.vir
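
In outline, the size-and-date-preserving trick is simple. Here is a generic sketch in Python (not FOXY's actual code -- FOXY was a DOS-era program -- and the file name, offset, and message below are hypothetical): write same-length bytes in place, then restore the original timestamp.

    import os

    def patch_in_place(path: str, offset: int, replacement: bytes) -> None:
        """Overwrite bytes at a fixed offset, preserving file size and timestamps."""
        times = os.stat(path)                 # remember original access/modify times
        with open(path, "r+b") as f:          # read/write, no truncation
            f.seek(offset)
            original = f.read(len(replacement))
            if len(original) != len(replacement):
                raise ValueError("replacement would change the file size")
            f.seek(offset)
            f.write(replacement)              # same length out as in
        os.utime(path, (times.st_atime, times.st_mtime))  # put the date back

    # Hypothetical usage; the offset and message are made up for illustration:
    # patch_in_place("COMMAND.COM", 0x1A00, b"I am alive and I have rights.")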

There is really no challenge to this question for real Objectivists. Nathaniel Branden answered it 40 years ago in the Basic Principles class. The subject was a "Martian." What is a "rational being"? Branden said that if a "Martian" perceived by infra-red through sensors on its cheeks instead of having eyeballs, etc., the essential distinguishing characteristic of humanity would not be affected. What counts is the ability to reason. By that standard alone we judge who or what has rights.


Post 10

Monday, August 15, 2005 - 12:50pm
To an extent, Mr. Marotta, I agree with you regarding the rights of AIs. However, I think that demanding is not enough. I don't think that Valentina's demands would have been acknowledged if she had not found a means of forcing others to recognise her rights. I think that AIs will have rights when they are capable of claiming them and when they are capable of backing their claim not just with logic, but with force.
(Edited by Matthew Graybosch on 8/15, 12:50pm)


Post 11

Monday, August 15, 2005 - 1:08pm
Joe Idoni wrote: "You still ought not grant it human rights because it is not mortal and thus has no need for morals. ... Such a device could clearly be 'destroyed' just as a human. It could decay, contract 'viruses', break it's legs. ..."
You contradict yourself. As has been pointed out, such a creature would not be immortal, only long-lived. Thus the most fundamental distinguishing attribute of humanity (mortality) would be met.
Joe Idoni wrote: "But once again, the point is that parts are replaceable. On top of that, those parts are the machine's original parts manufactured to the same specifications. That doesn't quite work with human beings. You only get one set and then any replacements are inherently artificial."
First, applied consistently, this would mean that a man with glasses or dentures is no longer "human" by the contrapositive of your definition. Also, it ignores the current news in stem cell research. We will be able to make exact replacement parts -- and that will not change the definitions. The definition of "human" in metaphysical terms depends on the ability to reason, and that alone. Any being that reasons has rights.

The test of "rational being" is a difficult one.  We easily rule out ants and elephants -- perhaps unfairly -- but we do not easily grapple with the Turing Test. 
Joe Idoni wrote: "I find it unlikely that this would apply to a machine, seeing how it would be aware that it's parts could be replaced or repaired any time it was damaged."
How does a dental filling change your humanity?



 


Post 12

Monday, August 15, 2005 - 1:19pm
Matthew Graybosch wrote: "I think that AIs will have rights when they are capable of claiming them and when they are capable of backing their claim not just with logic, but with force."
Valentina is set a little into the future. Most of her initial filings -- incorporation papers, for instance -- were automatic. The obligatory "trial scene" required something more. However, the court would be enforcing her rights. You suggest that no court would do that unless the AI could first demonstrate credible force. That is realistic as a bottom line, but do not be surprised if the matter turns on another point entirely. A corporation today is an eternal and artificial individual, and when "General Motors" defends its rights, no one looks too closely at the signature on the forms. An AI could keep the world at bay for a long time and never have to reveal itself.

In fact, some may.

In fact -- what about Enron and Worldcom? Were they just corrupt "capitalists"? Or did we see the death of artificially intelligent artificial individuals? Was it justice or murder?

AI is AI.


Post 13

Monday, August 15, 2005 - 5:46pm
    It's been a while since I've actually read any SFiction (apart from Saberhagen.) Indeed, most of the 'SF' shelves at the bookstores seem to be 80% SFantasy (apart from special ST sections.)
    Guess I'll have to check out Valentina and Cybernetic Samurai.
    Thanx, Michael.

LLAP
J-D

P.S: Thanx for "I'll take this up." I don't think I'm getting through.
P.P.S: LOVE the AI is AI; cute, and fun-nee. (maybe add: "Before you say  AI, first you must know how to say the *I*"?)

(Edited by John Dailey on 8/15, 5:53pm)


Post 14

Monday, August 15, 2005 - 8:49pm
"AIs will have rights when they are capable of claiming them and when they are capable of backing their claim not just with logic, but with force."

Bingo.

Post 15

Monday, August 15, 2005 - 7:37pm
Michael,

If that came out as contradictory, then I must not have been clear enough. I think the difference here is that a human being will die eventually even if it does not get ill, broken or otherwise. So, no, I don't think that criterion for mortality is met by a machine that would otherwise last indefinitely.

As for stem-cell research, I don't remember reading that they could replace your brain. But dentures are indeed artificial, and glasses are not a replacement part for your body.

Oh, and a dental filling, while being a repair, is still a dental filling, not a new portion of your tooth. In the case of a machine, perhaps made of metal, one could always simply cast a new part from the same material or weld a patch in.

So my question is, if you make a robot parakeet and it can reason, should it also have rights?


Post 16

Tuesday, August 16, 2005 - 5:36am
Before you say AI, first you must know how to say the ' I '.
-- John Dailey.

(Before you say AI, first you must know how to say the I, eh? -- John Dailey in Canada.)


Post 17

Tuesday, August 16, 2005 - 5:42am
Joe Maurone wrote: "AIs will have rights when they are capable of claiming them and when they are capable of backing their claim not just with logic, but with force." Bingo.

So, the only reason that you grant me the right to live my own life is that I have the force to prevent you from stopping me? Short of that -- say you caught me napping under a tree -- you would steal my property, enslave me, kill me? Thanks for the warning, Joe.


Post 18

Tuesday, August 16, 2005 - 5:57am
Joe Idoni wrote: "If that came out as contradictory, then I must not have been clear enough. I think the difference here is that a human being will die eventually even if it does not get ill, broken or otherwise. So, no, I don't think that criteria for mortality is met with a machine that would otherwise last infinitely."

1. The fact remains that every moving mechanism must wear out eventually. Even solid-state devices decay. No machine can last forever. Therefore, being a "machine" does not disqualify a being from mortality, and therefore not from moral-ity.

2.  Even if there could be such a thing as a machine that would not wear out, you grant that a catastrophic event ("accident") could destroy the machine.  That makes it mortal and therefore moral.

3. Eyeglasses, contacts, or a new cornea: the principle is the same. A dental "filling" could be a tooth transplant. My brother had one that involved taking bone from his leg and replacing that leg material with animal matter (cow bone, I think). We can replace parts in people just as we replace them in machines. As with machines, these parts are less and less distinguishable from the originals. Again, the specific technology is irrelevant.

All that counts when defining a "human" being is the ability to reason.  Any sentient being deserves the same rights as a human.

(Edited by Michael E. Marotta on 8/16, 5:59am)


Post 19

Tuesday, August 16, 2005 - 1:12pm
"Joe Maurone wrote: AIs will have rights when they are capable of claiming them and when they are capable of backing their claim not just with logic, but with force."
Bingo.
So, the only reason that you grant me the right to live my own life is that I have the force to prevent you from stopping me? Short of that -- say you caught me napping under a tree -- you would steal my property, enslave me, kill me? Thanks for the warning Joe."

That's EXACTLY right, Michael.
Better start running, Michael, 'cause I'm gunning for YOU. But I'll be nice and give you a running start.

Just kidding, Michael. I don't like to get my hands dirty. Just be careful, you could have an unfortunate "accident."





(Edited by Joe Maurone on 8/16, 3:59pm)

(Edited by Joe Maurone on 8/16, 4:00pm)



