Rebirth of Reason


Sanction: 5, No Sanction: 0
Post 20

Wednesday, December 19, 2007 - 8:14am
John,

Maybe the discussion should turn to the choice about why (or how) a particular 'value-judgement' was employed, rather than (as is often usual) an existing conflicting one?   :)

     After all, all value-judgements have alternative competing ones, though only one gets...somehow...used.

Exactly. So when Bill says:

You don't "make up" your motivations; your motivations are the result of your value judgments.
It pays to remember the source of our value judgments: (focused-on) premises.

Focused-on premises can be critically scrutinized, casually or begrudgingly accepted, or even rejected -- or anything in between. Held premises also compete, and a given resolution of that competition may be more or less rational -- depending on the will of the deliberating agent. Evasion (the "blank-out") is also a choice which can be willed. In fact, premise-juggling is an active process, always engaged in during an individual's free (read: open- and active-minded) deliberation of something.


Bill,

You often (at least operationally) ignore my posts on this subject. In addition to a potential answer to this post, I invite you to consider answering my posts 7 and 9.

Thank you if you do,

Ed

p.s. And if you prefer not to answer this post or my posts 7 and 9 -- then I would at least hope that you answer Joe's post 16, which brings up very similar points, especially to this post.

(Edited by Ed Thompson on 12/19, 8:28am)


Sanction: 5, No Sanction: 0
Post 21

Wednesday, December 19, 2007 - 8:56am
Bill,
I feel compelled to ask this of you : )
Suppose I write a computer program called "Chooser", which consists of an algorithm that takes a large set of input data and, after much (deterministic, not random) analysis, spits out either A or B.  I then load this program on my stand-alone desktop computer.
After collecting the relevant data, I enter it and allow the program to run with no interference of any kind.  Eventually the computer says: "B".  According to your conception of free will, did the computer "freely choose" B?  If not, do you consider the difference between the computer and a human agent to be qualitative or quantitative?
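For concreteness, the "Chooser" hypothetical might be sketched like this (the scoring rule is invented purely for illustration; the point is only that the procedure is fully deterministic -- the same inputs always yield the same output, with no randomness anywhere):

```python
# A minimal sketch of the hypothetical "Chooser" program.
# The weighted-sum rule is made up for illustration; any deterministic
# analysis of the input data would serve the thought experiment equally well.

def chooser(data):
    """Deterministically map a list of numbers to 'A' or 'B'."""
    score = sum(x * (i + 1) for i, x in enumerate(data))  # position-weighted sum
    return "A" if score % 2 == 0 else "B"

# Running it twice on the same data necessarily gives the same answer.
print(chooser([3, 1, 4, 1, 5]))
print(chooser([3, 1, 4, 1, 5]))
```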
Thanks,
Glenn


Post 22

Wednesday, December 19, 2007 - 10:30am
Side point: this argument that determinism is self-refuting, be it sound or not, isn't specifically Objectivist; Epicurus, Kant and Blanshard all used it earlier.  This came up recently here on RoR.

Your point that reductivism and materialism work by turning the issue into a tautology is a valuable one.  Philippa Foot, a philosopher more Objectivists should get to know, said something similar about non-naturalist ethics.  The people who claim that value utterances aren't statements of fact and can't be derived from facts similarly define "natural" or "factual" to give them just the conclusion they want.


Post 23

Wednesday, December 19, 2007 - 2:03pm
As far as the nature of the human mind is concerned, in many instances one could choose differently from how one did, all things remaining equal.  That's because the choice, at bottom, would be a kind of "on/off" type. We could cause our minds to be on rather than off. We would be the ones who determine this, nothing else. Which is not to say there would be no conditions placing constraints on what we could choose--no fantasies in the offing, for instance. But say John insulted Harry because he failed to consider the implications of what he went on to say; he might not have failed to consider this, and then the insults wouldn't have occurred. The action of considering things is normally up to John (and all of us).

Post 24

Wednesday, December 19, 2007 - 7:27pm
This appears in fact to be another variation on the "floating consciousness" noncept.  If there were no outside influences on our conscious choices, what would that mean?  Nothing, I contend, other than that one was somehow conscious of nothing.  Consciousness is a specific process conducted as part of the ongoing life processes of an entity - a specific entity.

Of course there are inputs to consciousness that influence its actions.  To then claim that this makes true free will or volition impossible is simply a mental sleight of hand, in which the real item - consciousness as a process of identification, verification and evaluation ("What is it? How do I know? So what?") existing within a specific entity as part of its life processes - has simply been replaced by "X", which somehow makes choices connected with neither inputs nor consequences.

"X" is undefined, a gift from God, Who is also undefined, but necessary for X to exist, as no such entity can possibly exist - as Brady shows in his alleged proof of the existence of God - in the physical, causational universe.  So, if it isn't a process of awareness - identification, verification, evaluation - tied to the physical needs of a real entity, then what the heck is it, how do we know this, and who gives a damn about X?

Life, by definition, is a process of self-sustained action - "action" being directed movement, "directed" implying a goal, which is the sustenance of that very process.  Consciousness is a part of that process which enables an infinite degree of refinement of that direction.  As such, human consciousness - which can be shown to involve trillions of internal entities, such as neurons and their functional subcomponents, and many more integrated subprocesses, all synchronized and coordinated to produce a single line of action and a single entity, "I" - is certainly as qualified as any other known entity in the physical universe to be seen as a causal agent.

To give one final example, consider the latest robot car race, in which the majority of vehicles, performing without a driver, were able to complete a long and arduous course.  Now these entities were far from being conscious in any human or living sense, yet if one were to hit you, you would assign it a causal role.  And if it missed you as you deliberately stood in the middle of the road on a frat initiation dare, then you would also assign the cause to the vehicle's excellent sensors and fine processors and programming.

Sure, the programming was done ultimately by humans in the past, but at that moment it was the vehicle that was the causal agent.  And, however we got here to this specific moment as a species or as individuals, we are certainly as much a causal agent as any current robot car.  It is only when one reduces "consciousness" to the undefined "X" that cannot exist in this universe at all without the help of another undefined "Y" (God) that such issues appear to make sense.


Post 25

Wednesday, December 19, 2007 - 8:17pm
Er... Does that put you on the 'pro' side of the question or the 'con'?

Post 26

Wednesday, December 19, 2007 - 11:19pm
Joe wrote,
Bill, you say necessity and compulsion are different. Compulsion overrides someone's will. Are you simply saying that a person doesn't have a will, and so it can't be overridden?
No, that's not what I'm saying. Why would I now say something like that, when I've been saying all along that people choose their actions based on their value judgments? Why would I now say that they DON'T choose their actions -- that they DON'T have a will? A person is "compelled" to do something only when he couldn't have chosen the alternative even if he had wanted to. I am "compelled" to give the government my money, because I couldn't have kept it even if I had wanted to. If I vote for Bidinotto, because I've decided that he is the best candidate, that doesn't mean that my decision "compels" me to vote for him, the reason being that I could have voted for the other candidate, if I had decided that he was the better of the two. If the government forces me to vote for Bidinotto, then my vote is compelled, because I couldn't have voted for the other guy, even if I had preferred him to Bidinotto. Note that if I say that my choice to vote for Bidinotto is necessitated by my value judgments, I'm NOT saying that I couldn't have voted for the other candidate even if I had preferred him to Bidinotto. Do you see the difference? Necessity in this context does not mean the same thing as compulsion.
Now one problem with responding line by line is that you missed the point of an entire paragraph because you broke it up and answered outside the context. You present an example of a person who has absolutely convinced himself that there is only one reasonable choice, feels it strongly, and has no other possible motivation to choose anything else. I pointed out that the example is worthless and proves nothing.
What it proves is that he can still be said to make a "choice" even though it was clearly necessitated by his value judgments.
We're wasting time here. You brought up this example of the stars aligning because you admitted that you can't define a person's values until after they chose, and were looking for an example that is so overwhelming[ly] in favor of a particular alternative that it would define their values accurately.
What example of "the stars aligning"? I didn't use that metaphor, so I'm not sure what you're referring to. In any case, a person's choices are a reflection of his values. As Rand puts it, "A value is that which one acts to gain and/or keep." Or as Nathaniel Branden says, "A value is an object of an action."
I reject this method because it avoids defining "value" by having every conceivable motivation for acting be aimed at a single alternative. Instead of more rigorously defining your term, you've created the kind of situation where your term doesn't need a definition.
Why do you say that?
And where value determinism should be able to make predictions, the only cases you can hope to safely predict is one [sic] where the person has every motivation to act one way, and no motivation to act another, including acting on whim or just to prove he has free will.
Joe, you brought up those examples; I didn't. And this is not an issue of prediction. I'm not saying that because a person's value judgments determine his choices, one can always predict what action he will choose.
And no, I'm not taking back my position that if your actions are necessitated by your values, there is no choice. Notice I say your action is necessitated, not your choice. There is no choice in determinism. It's a stolen concept. There are alternatives that are physically possible, but the action is an automatic response to X. In your case, X is your values, whatever those are.
So I didn't "choose" to vote for Bidinotto. Is that what you're saying? I didn't "choose" the answer on the test that I knew to be correct, because I had no reason to choose the alternatives. Is that your position?

I wrote, "Besides, if there is a motivation to pick something else, it must be one that favors the other alternative -- otherwise it's not a motivation to pick something else, because a motivation is necessarily preferential."
I think you can be motivated to do more than one thing, and choose which you will act on. Are you suggesting that in your value determinism scheme, there are no mixed motivations? That when the calculation of which you prefer completes, you are 100% emotionally wedded to that action and feel no urge to do the other? I'm thinking about a hard decision. Which car should I buy? A or B. Both are nice. I choose A, but I clearly feel a desire to buy B instead. I am motivated to buy B. But I will myself to go with A instead.
Okay. Let's be clear on what the alternatives are. You are motivated to buy Car A instead of not buying a car at all, AND you are motivated to buy Car B instead of not buying a car at all. That much is true. The question is: Are you motivated to buy Car A instead of Car B? If you are, then you will choose Car A. But you cannot be motivated to buy Car A instead of Car B and SIMULTANEOUSLY motivated to buy Car B instead of Car A. By the same token, you cannot choose either car over the other without PREFERRING one to the other. You cannot prefer Car A, but buy Car B instead, or prefer Car B, but buy Car A instead. Whichever car you like the most is the one you will buy (ceteris paribus).

I wrote, "What you're calling 'acting' [versus 'reacting'] is an arbitrary choice that is not made for the sake of any perceived end or goal."
This is entirely wrong. Again, you posit that the alternatives are determinism, which is an automatic and passive response to stimuli...
I wouldn't call the response "automatic" or "passive," which suggests a response that is unintentional and purposeless.
...and non-identity or indeterminism, where your "choices" are not based on your will but are seeming[ly] random. As long as you insist on this false alternative, of course determinism will seem like the only possible alternative. But the alternative is not between being a slave to your values and not having any at all.
To say that a value is an object of an action -- something one acts to gain or keep -- is not to say that the actor is a slave to it. Slavery, by definition, negates the slave's values -- negates his choice.
I don't understand how you view your own choices. If I'm trying to figure out whether to post or go to bed, it's an active process. First, I'm aware that there is a choice to make, and what the alternatives are. I put some focus into each choice to understand what the possible trade-offs are.
Exactly! And after evaluating the trade-offs, you choose the action that you judge to be the most valuable.
I also am aware that I have certain emotional or physical needs. My emotions tell me that I want to make you understand. My body is telling me that I'm sleepy. I find it emotionally frustrating that you are so convinced of your position when it contradicts experience and is so poorly defined. As I consider the alternative, my mind starts to wander. I am sleepy after all. I focus more sharply. My moral values tell me that I'd appreciate posting more if we didn't have this big disagreement. I value clarity. I value making progress. I value getting to the heart of the issue. There are countless reasons for and against. I disvalue wasting my time, as I feel this topic usually is. I disvalue arguing about something I don't think I'll change my mind about. I disvalue the effort it takes to make myself as clear as possible on ideas that I consider simple. Short term, the costs are high. Long term, the benefits vary. If I convince you, we can have more intelligent discussions, and others may also learn and contribute. But that seems unlikely. Another possibility is that by formulating a response, I'll better identify the ideas, learn to communicate them, and benefit in the long term. Again, this whole thing requires focus. It requires looking at all of these competing alternatives. There are no magical values that I've decided on in the past that make my decision for me.
Of course not, and that's not what I'm saying. Your values in this case involve trying to figure out what further choices are worth making, but that very process is something you value and is necessitated by that value.
The process is long and focused. I run the calculations. I look deep at the alternatives. I make a choice, knowing that the other alternative has benefits as well and I'll need to continue to evaluate as I go.

This is what I mean by an active process. As opposed to "here are my alternatives, and my values tell me I must do X".

And this is the important part. This active process, this awareness that I focus on the various alternatives, and form the will to act, is the critical issue. It means that I could have chosen either thing.
Yes, "could have" chosen either thing, IF you had considered it worth choosing.
That this experience of making a choice is not an illusion. That there is no magic "value" that necessitated it, and that suggesting so is not simply a gross simplification, but actually removes from the equation this active effort of focusing awareness.
But the effort is itself something you value and consider worth making.
All determinism does this. It makes the "choices" simply automatic responses to some hidden values, or secret understanding, or outside influences, or emotional reactions, or whatever.
Joe, if you do something you like, because you like to do it, is that an automatic response? No, it isn't. An automatic response is a purposeless action, like the patella reflex, not an action that one takes because one values its object.
If I choose poorly, I can be blamed for not focusing enough.
Yes, you can, because to blame someone for a bad action simply means that he chose the action with full knowledge of what he was doing. The fact that he valued the action and didn't think it was bad does not absolve him from blame.
It's not something I can blame on outside influences, or inside influences. It can be understood that I may be even strongly influenced one way or another, but that the choice is still ultimately a product of my own focused awareness. My own reasoning mind. I don't remove myself from the decision making. I'm at the center of it. The buck stops here.
Of course, you can't remove yourself from the decision making, but your decision is itself based on your antecedent values. You make a decision for the sake of some end or goal -- something that you want to achieve.
Determinism removes this active process from the equation. It makes the "choices" into straightforward mathematical products of input stimuli. The stimulus forces the choice. It necessitates it. The active process that I claim is the real decision making is removed. No longer is your mind the source of the actions. And so moral responsibility is taken with it. You can't be blamed. Your values can be blamed, but you're not even responsible for those. They arose from previous values, which arose from....who knows. But choice is an illusion.
Not true. Every choice reflects a value judgment. Suppose I believe that abortion is murder and execute an abortionist, because I think he deserves to die for killing an innocent human life. The fact that I saw no reason to spare the doctor's life does not absolve me of my crime. I am still blameworthy, and should be punished in order to deter such crimes in the future.
And I mean that. Where we experience this process of choosing, requiring effort and focus, and ultimately deciding for whatever reasons (including whims or defiance), none of that would be real in the determinism framework. We may think we are focusing on the problem, but it is in fact these magical values (or other external stimuli) that are compelling our brain to function in a way that we merely experience as choosing. This long process I described earlier would be simply a rationalization. It would be an illusion created by our brains to make us feel like we're in control, while in fact the values collide or mix without our participation, making the choice for us.
I don't know where you get this stuff -- "magical values compelling our brain to function." If I decide after investigating two candidates that A is better than B, no "magical value" has compelled my brain to choose A over B. I choose A over B because, after evaluating the two candidates, I consider A a better choice than B. Having reached that conclusion, I cannot then choose B over A. If I could, then THAT would be a "magical value."

- Bill

Post 27

Wednesday, December 19, 2007 - 11:46pm
Glenn, You wrote,
I feel compelled to ask this of you : )
Ah, but you see, you weren't "compelled" to ask me; you were simply "determined" to ask me! ;-)
Suppose I write a computer program called "Chooser", which consists of an algorithm that takes a large set of input data and, after much (deterministic, not random) analysis, spits out either A or B. I then load this program on my stand alone desktop computer.
After collecting the relevant data, I enter it and allow the program to run with no interference of any kind. Eventually the computer says: "B". According to your conception of free will, did the computer "freely choose" B?
No. The computer didn't "choose" B. A "choice" is an action taken for the sake of a perceived end, goal or value. The computer is neither conscious, goal-directed nor value-oriented. Unlike a human being's, a computer's action is passive, mechanistic and automatic. The biggest mistake people make in this area is to equate teleological determinism with mechanistic determinism. To say that a human being's action is necessitated by his value judgments is to say that he is determined not by efficient causation but by final causation.
If not, do you consider the difference between the computer and a human agent to be qualitative or quantitative?
Definitely qualitative!

- Bill

Sanction: 5, No Sanction: 0
Post 28

Thursday, December 20, 2007 - 12:05am
Bill, the more you state your position, the more you sound like you're defending free will and just misusing the term "determinism".  You claim to be against every element of determinism.  And yet earlier you claimed that a person's values "necessitate" an action.  I don't believe those positions are compatible.  The values didn't necessitate anything.  Your own process of focused awareness, using your values and understanding and emotions and whatever else, made the decision.  I don't know what you hope to gain by reinterpreting determinism to be the same as free will.

A few nits to pick:

1.)  You were trying to prove that values necessitate actions.  I questioned your use of the term value.  "That which you act to gain or keep" is an after-the-fact description of your choice; it doesn't make any sense to say that that value "determined" your choice.  It was the effect.  How can it also be the cause?

2.)  You also used a different version of "value" when you talked about the Bidinotto example.  You talked about moral values.  Those, presumably, are abstract values that guide your decision making process.  Those would be better for arguing that values necessitate choices since they are there before the choice is made, but I pointed out that your actual choice could be based on emotions, whims, etc.  So this also fails.

3.)  That leaves you arguing that values necessitate actions or choices (although I claim a necessitated choice is not a choice), without any definition of what these mysterious "values" are.  And before you dissect this line by line, notice I've already rejected your definition "A value is that which one acts to gain and/or keep."  See 1. above.

4.)  Back in post 6, I pointed out that if these "values" which you can't define really necessitated the action, then we should expect that we can predict outcomes based on these.  You offered, in post 7, that "In some cases, there isn't, because it's not always possible to know someone's real values. Even the actor himself may not always know them, until he or she makes a choice. But there are many circumstances in which such knowledge does exist and in which the action is clearly predictable."  And you gave Bidinotto for President as an example.  Instead of showing a case where the values are well defined and the outcome is necessitated, you gave a case where every possible motivation aims at the same alternative, and no other motivations are possible.  This doesn't clarify in any way what you mean by values.  It only appears to do the trick because you ruled out any possible motivation for doing anything else, including emotions, whims, etc.  I don't know how often I have to repeat this.

5.)  Conclusion so far is that your value determinism is based on a floating abstraction called "value", which you periodically try to define as the product of the decision making, instead of the criteria of the decision making.

6.)  Back to the cars.  I can be motivated to buy the SUV over the Sports car because it's cheaper and roomier.  I can be motivated to buy the Sports car over the SUV because it's faster and the chicks dig it.  Motivation is not preference.  Preference is after the evaluation and decision making.  Motivation comes before it.  And the motivations can conflict.

7.)  I'm not saying you didn't choose Bidinotto for president.  I, as an advocate of free will, say that's exactly what happened.  I'm saying that if your action was "necessitated", then it wasn't a choice.  Choice is not compatible with determinism, except as a stolen concept or broken metaphor.


Sanction: 10, No Sanction: 0
Post 29

Thursday, December 20, 2007 - 8:39am
Bill Writes:

> Necessity in this context does not mean the same thing as compulsion.


I'm not going to wade into this discussion other than to ask this question of Bill one final time. I will not comment further on Bill's response.

Bill, do you believe that, in principle (even though in practice we might be unable to actually track the factors), all the actions of men that have occurred throughout history up to this time were determined, in the sense that they were inevitable as a result of prior conditions of the unwinding mechanics of the physical universe? In other words, you say that our actions are necessarily determined by our values. Are our values necessarily determined by antecedent events that ultimately lie outside of our consciousness? If possible, a simple yes or no answer would be appreciated.

Regards,
--
Jeff


Post 30

Thursday, December 20, 2007 - 9:18am
1.)  You were trying to prove that values necessitate actions.  I questioned your use of the term value.  "That which you act to gain or keep" is an after-the-fact description of your choice; it doesn't make any sense to say that that value "determined" your choice.  It was the effect.  How can it also be the cause?
Good retort, Joe. A related (if stale and worn-out) question in ethics is: Is something good BECAUSE you acted to gain or keep it (subjective theory of values), or are certain things good for us objectively (objective theory of values)?

2.)  You also used a different version of "value" when you talked about the Bidinotto example.  You talked about moral values.  Those, presumably, are abstract values that guide your decision making process.  Those would be better for arguing that values necessitate choices since they are there before the choice is made, but I pointed out that your actual choice could be based on emotions, whims, etc.  So this also fails.
Another doozie, Joe. Via use of my perceptual power of memory, I am able to remember several instances of acting against my abstract moral values. Bill's hypothesis fails to adequately differentiate and integrate this phenomenon.

You offered ... that "In some cases, there isn't, because it's not always possible to know someone's real values. Even the actor himself may not always know them, until he or she makes a choice. ..."
This goes back to the "good BECAUSE I acted to gain/keep it (subjective theory); good objectively (objective theory)"-alternative. Bill's "value-determinism" includes un-identifiably-determining factors [!] -- which forces him to retreat to the potentially-meaningless tautological position illustrated by the following statement:

"In whatever context, whatever you did, you felt like doing." [!]

;-)

Ed


Post 31

Thursday, December 20, 2007 - 10:45am
Bill wrote:
The biggest mistake people make in this area is to equate teleological determinism with mechanistic determinism. To say that a human being's action is necessitated by his value judgments is to say that he is determined not by efficient causation but by final causation.
I think this is the key to the misunderstanding and disagreement. Without going back to review all that Bill has written on the issue, I think this may be the first time that he has made this statement.

Does teleological determinism make any sense? Can it be said that final causation determines the outcome in the same way that efficient (mechanical) causation does?

My first take is that final causation may be necessary for the act of choice but it is not sufficient, while efficient causation is sufficient by itself for any action. The latter is what is usually understood by the term "determinism".

Post 32

Thursday, December 20, 2007 - 10:58am
Bill said:
The computer is neither conscious, goal-directed nor value-oriented. Unlike a human being's, a computer's action is passive, mechanistic and automatic.
Bill: I think you underestimate my algorithm!  But, let's move on.  A dog is conscious and goal-directed, so consider the following hypothetical (it's hypothetical because I don't have a dog : ).

Suppose I train my dog by using easily distinguishable sounds.  I first train him to respond to one sound by picking up his leash and meeting me by the door.  Another sound, quite different from the first, is used to alert him to the fact that there is food in his bowl.  Now, suppose I make both sounds simultaneously.  The dog runs to the door with his leash in his mouth.  Did the dog "freely choose" to go for a walk?  If not, do you consider the difference between the dog and a human agent to be qualitative or quantitative?
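The dog hypothetical can be put in the same deterministic terms as the earlier "Chooser" program: a fixed, trained stimulus-response table, with a fixed tie-breaking rule when stimuli conflict. The priority ordering below is stipulated purely for illustration; the point is that nothing in the model resembles deliberation:

```python
# A toy model of the trained dog: a fixed stimulus-response table plus a
# fixed priority rule for simultaneous stimuli. The ordering is stipulated
# for illustration only.

RESPONSES = {
    "leash_sound": "run to door with leash",
    "food_sound": "go to bowl",
}
PRIORITY = ["leash_sound", "food_sound"]  # walk trumps food, by stipulation

def dog_reacts(stimuli):
    """Return the trained response to whichever present stimulus ranks highest."""
    for s in PRIORITY:
        if s in stimuli:
            return RESPONSES[s]
    return "no response"

# Both sounds at once: the priority rule, not a "choice", picks the door.
print(dog_reacts({"leash_sound", "food_sound"}))
```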
Thanks,
Glenn



Post 33

Thursday, December 20, 2007 - 11:59am
Bill D. wrote:
The biggest mistake people make in this area is to equate teleological determinism with mechanistic determinism. To say that a human being's action is necessitated by his value judgments is to say that he is determined not by efficient causation but by final causation.

Rick P. replied:
I think this is the key to the misunderstanding and disagreement. Without going back to review all that Bill has written on the issue, I think this may be the first time that he has made this statement.
Bill has advocated teleology, or final causation, multiple times on this forum, going back at least two years. Here are a couple of examples:
http://rebirthofreason.com/Forum/GeneralForum/0833_11.shtml#223
http://rebirthofreason.com/Forum/Dissent/0046_2.shtml#56
I noted it and called him a "closet volitionist" here:
http://rebirthofreason.com/Forum/ArticleDiscussions/1874_5.shtml#101

Rick P. wrote:
Does teleological determinism make any sense? Can it be said that final causation determines the outcome in the same way that efficient (mechanical) causation does?
My first take is that final causation may be necessary for the act of choice but it is not sufficient, while efficient causation is sufficient by itself for any action. The latter is what is usually understood by the term "determinism".

I agree. Choices are influenced, but not necessitated, by antecedents.

Edit: Also, said antecedents may include one's prior choices. These are particularly relevant to Bill D.'s oft-used question of which of two political candidates one would vote for.


(Edited by Merlin Jetton on 12/21, 7:22am)


Post 34

Thursday, December 20, 2007 - 8:37pm
Rick,

My first take is that final causation may be necessary for the act of choice but it is not sufficient ...
Final, or intentional, causation is necessary for the act of choice, since choosing is a form of conscious intention. To intend to choose A over B even seems (at least to me) to be sufficient, though I may be proved wrong on that.

;-)


Glenn,

A dog is conscious and goal-directed ...
But wouldn't you agree that a dog's goals aren't chosen in the human sense of the term (i.e., by the dog ruminating over the hierarchical value of the available alternatives)?

And how about you, Bill?


Ed

(Edited by Ed Thompson on 12/20, 8:39pm)


Post 35

Thursday, December 20, 2007 - 10:55pm
I wrote, "The computer is neither conscious, goal-directed nor value-oriented. Unlike a human being's, a computer's action is passive, mechanistic and automatic." Glenn replied,
Bill: I think you underestimate my algorithm!
Really! Then you've got a pretty sophisticated computer, one that's actually alive. The next thing you'll tell me is that it's capable of reproducing and is looking for a mate, which gives new meaning to the term "computer dating."
But, let's move on. A dog is conscious and goal-directed, so consider the following hypothetical (it's hypothetical because I don't have a dog : ).

Suppose I train my dog by using easily distinguishable sounds. I first train him to respond to one sound by picking up his leash and meeting me by the door. Another sound, quite different from the first, is used to alert him to the fact that there is food in his bowl. Now, suppose I make both sounds simultaneously. The dog runs to the door with his leash in his mouth. Did the dog "freely choose" to go for a walk?
I'm not sure I understand your example or what it's supposed to prove. In any case, I wouldn't use the term "choice" to describe the actions of a non-rational animal. Choice, as I understand it, is a conceptually based action.
If not, do you consider the difference between the dog and a human agent to be qualitative or quantitative?
Well, a dog isn't a rational animal, so it's qualitative.

- Bill


Post 36

Friday, December 21, 2007 - 12:25am
Joe wrote,
1.) You were trying to prove that values necessitate actions. I questioned your use of the term value. The "that which you act to gain or keep" is an after the fact description of your choice, it doesn't make any sense to say that that value "determined" your choice. It was the effect. How can it also be the cause?
What do you mean, "It was the effect"? That which you act to gain or keep is what you are SEEKING; it is the OBJECT of your action, and the object of your action is its final CAUSE!
2.) You also used a different version of "value" when you talked about the Bidinotto example. You talked about moral values. Those, presumably, are abstract values that guide your decision making process. Those would be better for arguing that values necessitate choices since they are there before the choice is made, but I pointed out that your actual choice could be based on emotions, whims, etc. So this also fails.
Not true. The object of your action is that for the sake of which you're making the choice; it is what you're seeking to gain or keep by choosing the action.
3.) That leaves you arguing that values necessitate actions or choices (although I claim a necessitated choice is not a choice), without any definition of what these mysterious "values" are.
I've given you a definition of "value." What more do you want?
And before you dissect this line by line, notice I've already rejected your definition "A value is that which one acts to gain and/or keep." See 1. above.
Then you've rejected Rand's definition as well and done so unsuccessfully, because you've failed to grasp its meaning. See my rejoinder to 1. above!
4.) Back in post 6, I pointed out that if these "values" which you can't define really necessitated the action, then we should expect that we can predict outcomes based on these. You offered, in post 7, that "In some cases, there isn't, because it's not always possible to know someone's real values. Even the actor himself may not always know them, until he or she makes a choice. But there are many circumstances in which such knowledge does exist and in which the action is clearly predictable." And you gave Bidinotto for President as an example. Instead of showing a case where the values are well defined and the outcome is necessitated, you gave a case where every possible motivation aims at the same alternative, and no other motivations are possible. This doesn't clarify in any way what you mean by values. It only appears to do the trick because you ruled out any possible motivation for doing anything else, including emotions, whims, etc. I don't know how often I have to repeat this.
I have not ruled out emotions or whims as reasons for actions. Let's be clear on what the voter example was designed to illustrate. It was designed only to show a case in which there is no dispute that the action was both a choice AND a necessary consequence of the person's value judgment. It was not designed to show that all choices are clear-cut decisions involving no uncertainties or conflicting emotions.
5.) Conclusion so far is that your value determinism is based on a floating abstraction called "value", which you periodically try to define as the product of the decision making, instead of the criteria of the decision making.
The product of a decision can be a value and the criteria for a decision can be a value. The product of a decision can be a value, insofar as you arrive at a value -- at something you consider worth gaining or keeping -- as a result of a decision. And the criteria for a decision can be a value, insofar as you base your decision on what you already consider worth gaining or keeping.
6.) Back to the cars. I can be motivated to buy the SUV over the Sports car because it's cheaper and roomier. I can be motivated to buy the Sports car over the SUV because it's faster and the chicks dig it. Motivation is not preference. Preference is after the evaluation and decision making. Motivation comes before it. And the motivations can conflict.
What you're really saying is that you can value certain features of the SUV over those of the Sports car (viz., price and roominess), and other features of the Sports car over those of the SUV (viz., speed and attractiveness). There is no conflict there. But you cannot be motivated to BUY the Sports car over the SUV and simultaneously motivated to buy the SUV over the Sports car. In other words, you cannot value the Sports car as a total package over the SUV, and simultaneously value the SUV as a total package over the Sports car. To be sure, you can be conflicted over which one you value more, which I think is what you're getting at. In other words, you may have difficulty deciding which one is the more valuable. But once you decide, you will be motivated to buy that one and not motivated to buy the one you consider less valuable.
7.) I'm not saying you didn't choose Bidinotto for president. I, as an advocate of free will, say that's exactly what happened. I'm saying if your action was "necessitated", then it wasn't a choice. Choice is not compatible with determinism, except as a stolen concept or broken metaphor.
Well, that's your premise, and you're sticking to it! :-) What I was trying to do is get you to see that your premise is mistaken. I don't think anyone would deny that when you cast your ballot for Bidinotto, you're "choosing" him for president, EVEN THOUGH it is quite clear that, given your political values, you couldn't have chosen otherwise.

- Bill



Post 37

Friday, December 21, 2007 - 9:19am
Jeff asked,
Bill, do you believe that, in principle (even though in practice we might be unable to actually track the factors), all the actions of men that have occurred throughout history up to this time were determined, in the sense that they were inevitable as a result of prior conditions of the unwinding mechanics of the physical universe? In other words, you say that our actions are necessarily determined by our values. Are our values necessarily determined by antecedent events that ultimately lie outside of our consciousness? If possible, a simple yes or no answer would be appreciated.
Jeff, a simple yes or no answer is not only possible; it's absolutely necessary, inevitable, inexorable, ineluctable and inescapable. :)

Ladies and Gentlemen, the answer is . . . (drum roll!) . . . YES!

- Bill



Post 38

Friday, December 21, 2007 - 9:45am
Bill,

To be sure, you can be conflicted over which one you value more, which I think is what you're getting at. In other words, you may have difficulty deciding which one is the more valuable. But once you decide, you will be motivated to buy that one and not motivated to buy the one you consider less valuable.
Or, in plainer terms: in whatever context, whatever you did, you felt like doing.

;-)

And in this case, your feelings weren't straightforward and clear-cut -- so you openly and actively deliberated on the available alternatives (and maybe even actively imagined those other alternatives not immediately self-evident -- such as flip-flopping and deciding not to buy a damn car, for example).

In other words, you were free to choose the scope and the extent of your open and active deliberation on the matter -- which, operationally, is just what Free Will is all about (a "personally-chosen focus").

;-)

Ed


Post 39

Friday, December 21, 2007 - 9:50am
Ed, in Post 30, you wrote,
I am able to remember several instances of acting against my abstract moral values. Bill's hypothesis fails to adequately differentiate and integrate this phenomenon.
Ah, so a value is NOT "that which one acts to gain or keep," because, according to you, one can act to gain or keep something that is not a value. So why did you act to gain or keep this non-value? For the sake of what end or goal? In other words, what were you seeking to gain or keep by doing so?

Ed, I think what was going on there is that, under those circumstances, you wished that you had valued your "abstract values," but in fact, you didn't value them. They were values in name only, like the man who claims to value his marriage, but cheats on his wife. Is he acting against his abstract values? No, because he doesn't hold his marriage as an abstract value. If he did, he wouldn't jeopardize it by cheating on his wife.

- Bill

