Some thoughts: One could look at this issue from the other side.
What doesn't have free will? If you can eliminate everything, then you're left without free will. If you have an irreducible remainder, then there's your answer... Assuming that the concept was properly defined to start. ;)
Also, I could program a computer or a non-sentient avatar in a VR to tell itself "I have free will," and to insist to anyone who asks that it did indeed have free will. The fact that the avatar acts consistently with its programmed premise does not make it so.
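To make that thought experiment concrete, here is a minimal sketch in Python (purely illustrative; the ScriptedAvatar class and its method are my own hypothetical names, not anything from a real VR system) of an avatar hard-coded to insist on its own free will:

    class ScriptedAvatar:
        """A non-sentient agent whose every self-report is fixed by its program."""

        def respond(self, question: str) -> str:
            # The "belief" is just a constant the programmer wrote in.
            if "free will" in question.lower():
                return "I have free will, and I chose this answer myself."
            return "I'm not sure what you mean."

    avatar = ScriptedAvatar()
    print(avatar.respond("Do you have free will?"))
    # -> I have free will, and I chose this answer myself.

However fluently it insists, the assertion is a constant its programmer wrote; producing the sentence does nothing to make the claim true.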
If there were no such thing as free will, then perhaps it - the concept and belief - would still evolve in any culture of self-aware entities. Whenever a certain level of cognitive dissonance occurred, and the entity was feeling the stress of competing imperatives, it might recall, "Oh, I have free will. I can choose among these options. I determine what happens, not these things I experience as separate from me." Simply changing the mental focus from being below the level of determination to being above it would likely result in a reduction of stress, a feeling of relief, and the ability to focus more clearly upon the truly relevant issues.
Other similar formulas come to mind: "God will take care of everything." "It's all good." These formulas may have no truth to them, but may allow the entity repeating them to itself to break free of some internal lock of focus.
I'll go tentatively with compatibilism, in any case. (I recall coming up with that formulation - as I understand it, anyway - in 1966, when the issue of free will came up in a college discussion group.) Basically, you can define (with a certain amount of fuzziness in the data, to be sure) the parameters within which any entity determines its own actions. If someone sets off a nuke in your immediate vicinity, then nothing you do in that last nanosecond has any measurable impact on anything. On the other hand, if you were dropped into one of those hypothesized fractal pocket universes, and you were literally the only thing in that universe, then "you" would almost completely determine your actions and interactions.
If you program a computer to do a set of things based on input and then it does those things, you can say in a physical sense that the computer was responsible. But the computer is not self-aware, and the things it does are discrete steps largely unconnected to each other, except as defined by the logic of the program. You can't validly assign moral responsibility to the computer, although you can to the programmer.
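As a rough illustration of what I mean by discrete, disconnected steps, here's a hypothetical rule-lookup program (the rule names and the act function are made up for the example, not any particular system):

    # Hypothetical input-to-action rules; the only thing tying them together
    # is that the programmer happened to put them in the same table.
    RULES = {
        "door_open": "close_door",
        "temperature_high": "turn_on_fan",
        "battery_low": "enter_sleep_mode",
    }

    def act(sensor_input: str) -> str:
        # Each call is an isolated lookup; the program has no view of,
        # or stake in, how one step relates to another.
        return RULES.get(sensor_input, "do_nothing")

    print(act("temperature_high"))  # -> turn_on_fan

Each lookup stands alone; whatever coherence the steps have lives in the programmer's head, not in the machine.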
A human-equivalent entity, on the other hand, is organized such that every "is" implies an "ought." Everything is connected to everything, and there is an irreducible holism to the entity - again with a lot of fuzziness, as in the fact that, in most cases, a person is still essentially that person even after a stroke. When I say that "I did it," I don't mean that I followed some disconnected rule in a complex program. I mean that "I" as an entity in itself (an sich) took that action. This is all free will can validly refer to.