I mostly sit on the sidelines in these freewill and choice discussions, because
I don’t know what anybody else wants from the terms and the language that uses
them. I wonder whether the people using them know what they want from them, or
if they would regard that as a meaningful thing to ask of someone. So I hear
evidence of this or that degree of regularity or context-sensitivity, and have
no idea what problem they are supposed to be addressing.
But it leaves me wondering what questions I could ask that would give me some
traction.
First, how much of this freewill and choice language is really anchored, in its
main features, in the nature of human experience, and how much is the exercise of
a culturally inherited and partly arbitrary speech convention? I mean this in
the sense one might ask of language: there is some evidence that aspects of
language use are inherently human and regular, as we see from invented
isolated-group languages (deaf kids etc.) or from pidgin-creole transitions.
But any particular language is of course conventional in much of its structure,
and most kids raised without a language will not, by themselves, create a
substitute in time.
I can imagine there are aspects of both for the freewill and choice thing.
But what would be the “native and unambiguous” aspect? I feel like to say
anything, I first have to think of the activities going on in the mind like a
hive of bees, partly autonomous but partly coordinated. Other always-working
activities serve to maintain coordination where it is needed. For
example, if my elbow moves in one direction, then the laws of physics say it
will not have moved in some different direction. So while there may be many
chaotic inputs to elbow movement, at some point they need to resolve into the
single action that actually occurs.
So if, David Chalmers-like, I am to claim that there is an experiential aspect
called “feeling that I choose”, what the hell should that refer to? I could
guess that, in cases where there is some significant spread in the contributors
to an action, and the resolving activities have to do some work against some
tension to resolve to some definite action, the self-modeling and
self-reporting aspects of awareness might present those as a report of a
tension, to which evolved language will then attach a tag “choosing” so that we
can refer to it in ourselves and in each other. It seems like I remember
reading, years ago when this was in a spate of articles, about the anterior
cingulate gyrus and the amygdala as brain regions that go active when some kind
of conflict or heterogeneous input needs to be resolved somehow.
Thus: to the extent that Cheyne-Stokes breathing doesn’t even need the brain,
it is pretty hierarchical, and I don’t expect that it will ever report itself
to me as offering freedom to “choose” something. Presumably it won’t report
itself at all. But other things look more like Seeley’s process of a hive of
bees choosing where to migrate. Lots of signals come in from significantly
independent origins, and there can be a long period of balance among several
possible outcomes before something tips, and the hive goes somewhere. That
tension will feel to me like some symbol needs to be attached to the tipping,
and there is an opportunity for some “choice” or “will” term to get attached to
those kinds of feelings.
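That tipping dynamic can be caricatured in a few lines — a toy sketch only, with the function name, quorum value, and nudge sizes all invented for illustration, not drawn from Seeley's data: independent scout reports accumulate support for competing sites, there can be a long period of balance, and the hive goes to whichever site first crosses a quorum.

```python
import random

def hive_decision(n_sites=3, quorum=25.0, seed=1):
    """Toy caricature of quorum sensing: sites accumulate noisy support
    from independent scout reports until one crosses the quorum and the
    hive "tips". All parameters are illustrative, not empirical."""
    rng = random.Random(seed)
    support = [0.0] * n_sites
    steps = 0
    while max(support) < quorum:
        # each scout's dance adds a small, independent nudge to one site
        site = rng.randrange(n_sites)
        support[site] += rng.uniform(0.0, 1.0)
        steps += 1
    winner = max(range(n_sites), key=lambda i: support[i])
    return winner, steps, support

winner, steps, support = hive_decision()
print(f"site {winner} tipped after {steps} scout reports: {support}")
```

The point of the caricature is only that no single input "decides": the outcome emerges from which stream of small, partly independent contributions happens to cross the threshold first.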
But then what is all this “choice” and “free will” language doing, beyond just
allowing us to label feelings? Where does it come from? What role is it
playing?
I can imagine that it is something like a socially constructed prosthetic
system. There is lots of stuff that happens in mind-activity, which minds
don’t carry out regularly (or maybe at all) in isolation, but which they can
scaffold their way through by evolving external prosthetic systems. So, game
boards for playing combinatorial games. Counting and rhythm for dealing with
enumeration, time, and other stuff. Many structures of language for organizing
thought patterns and images. We wouldn’t say that the minds aren’t “thinking”
or “solving” whatever the problem is because they employ a prosthetic system in
doing so. We can instead say that, like neoteny and like lots of other things
that are extreme in human minds, they have taken on capacity for a lot of
complexity by offloading the completion of many parallelizable tasks to
constructed niches, which the minds as a community then generate, maintain, and
evolve.
Likewise, one can imagine that these abstractions of “will” and “choice” get
used by some of the resolving-activities, to direct attention or imagination
(among the many places it could be directed at any moment) to images of others,
social sanction, imaginations of fear of blame, guilt, reprisal, or whatever,
and then one navigates through the language-mediated rules of that game, to
results that feed back as part of the resolution-activity to send the hive to
one place or another. This would be consistent with thinking that a lot of
freewill and choice language takes its most concrete form in legal and punitive
institutions. There it is not only the hive of actions in individual minds
that make some joint move; it is all of those in a population of people with
yet further constructed niches (rules, roles, authorizations of force, etc.)
that act collectively to serve as inputs to the coordinating activities at the
“times of choice”.
I don’t suppose I could weave a philosophical system out of such vague
imagery, or even make it into anything psychological. But at least it gives me
some metaphors to attach the terms to that don’t seem completely unanchored (to
me, by my admittedly arbitrary tastes).
Eric
On Jan 26, 2024, at 11:06 AM, Marcus Daniels <mar...@snoutfarm.com> wrote:
LLMs are causal models. Science is about building causal models. It is
bizarre to me that there are scientists that carve out a special case for their
own mind. Even people like Scott Aaronson talk this way. As far as I can
tell, it is just vanity.
From: Friam <friam-boun...@redfish.com> On Behalf Of Steve Smith
Sent: Friday, January 26, 2024 7:38 AM
To: friam@redfish.com
Subject: Re: [FRIAM] Breaking Bad and Free Will
Does ChatGPT have choices?
I "can't help myself", so here goes:
I've been reading Sapolsky's "Behave" which paves the runway (or exit ramp) for his
recent "Determined". His deep background in neuroendocrinology leads to some very
compelling arguments which pretty much degenerate to:
"do you believe in causality? If so, then where do you suppose the spirit or soul
intervenes to break the chain of dominoes that has been set up by everything that you
are and everything that has happened to you up to that instant?"
He does a fancy little mocking dance, three-card-monte style, of homunculi he
contrives for the purpose of debunking any fanciful regression/recursion escape
plans you might have in that direction.
To hear him tell it, we (as are all AIs/LLMs/etc.) are essentially giant pachinko
machines.
His arguments on the topic seem unassailable in spite of my own deep and abiding sense of
"choice" at many levels. It also doesn't help my cognitive dissonance that he
speaks entirely colloquially, using many words we all associate with choice... he speaks
_as if_ he makes choices, and others do as well, even if he seems to avoid the specific
word "choice".
I like the conceit of: "the universe is deterministic but not prestateable"
I guess this is why they call it "the HARD problem of consciousness"?
For those of you who read this far, it would seem you "couldn't help yourself" or as my mother used to quip
"you must not have had anything better to do", and for those who have not, the same goes for hitting
<next> or <delete> (or having set up a spam-filter at an earlier time to avoid repeating the
"decision" personally)...
PS re: Breaking Bad
I've only dropped a few Pachinko balls in my life, but I couldn't help agonizing over the trajectory of each one,
feeling as if at every bounce they were at risk of "breaking bad" (or "good")... since many here
are at least part-time simulants (as Glen I believe refers to himself), even the most aggressive attempts at
introducing "random" (noise, annealing, etc.) either degenerate to "pseudo-random" or engage with a
physical system (e.g. sample a pixel-value from a webcam trained on a lava lamp) which of course is deterministic if
arbitrarily complex.
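That degeneration to "pseudo-random" can be seen directly: a seeded PRNG replays exactly the same stream every time. (Python's `random` module is used here just as a convenient illustration; the function name is invented.)

```python
import random

def draws(seed, n=5):
    """A seeded PRNG is fully deterministic: same seed, same stream."""
    rng = random.Random(seed)
    return [rng.random() for _ in range(n)]

# the "random" sequence is exactly reproducible from the seed alone
assert draws(42) == draws(42)
# a different seed gives a different, but equally determined, stream
assert draws(42) != draws(43)
```

Swapping in a webcam pixel as the seed moves the determinism out of the program and into the physics, but, as above, doesn't remove it.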
Anyone interested in feeding the FRIAM Archive to a custom LLM? :-)
Prompt: Interpret Marcus's use of the gdb analogy in this thread:
GPT:
The analogy used by Marcus in the context of the TV show "Breaking Bad" and the concept of free
will involves a comparison with using the "step" command in GDB (GNU Debugger). GDB is a tool used
by programmers to debug code. The "step" command in GDB allows the user to execute program code one
line at a time, observing the behavior and state of the program at each step.
In "Breaking Bad," the main character, Walter White, is drawn into a life of crime.
The analogy suggests that watching Walter White's descent into this life is like using the
"step" command in GDB. Each step represents a decision or action taken by Walter, leading
him deeper into the criminal world. Just as a programmer observes the progression of a program one
step at a time in GDB, the viewer observes Walter's journey into criminality in a step-by-step
manner.
The implication here is that Walter's choices, while appearing to be
individual conscious decisions, are part of an inevitable progression, much
like the sequential execution of code in a debugger. This raises questions
about free will and determinism, mirroring the discussion in the FRIAM group
about whether free will is an illusion, especially in the context of strong
compulsions or character traits that might limit one's range of choices.
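For anyone who hasn't driven it, the session the analogy leans on looks roughly like this (a sketch: `prog.c` and the transcript are hypothetical, though `break`, `run`, and `step` are real GDB commands):

```shell
# Compile with debug symbols, then step through one determined line at a time.
# (Hypothetical program; shown as a session transcript, not a script.)
#   gcc -g prog.c -o prog
#   gdb ./prog
#   (gdb) break main      # stop at the start of main
#   (gdb) run             # launch the program under the debugger
#   (gdb) step            # execute exactly one source line, entering calls
#   (gdb) step            # ...and the next, "as sure as water runs downhill"
```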
____________________________________________
CEO Founder, Simtable.com
stephen.gue...@simtable.com
Harvard Visualization Research and Teaching Lab
stephengue...@fas.harvard.edu
mobile: (505)577-5828
On Thu, Jan 25, 2024, 4:20 PM Frank Wimberly <wimber...@gmail.com> wrote:
GNU Debugger?
---
Frank C. Wimberly
140 Calle Ojo Feliz,
Santa Fe, NM 87505
505 670-9918
Santa Fe, NM
On Thu, Jan 25, 2024, 4:11 PM Marcus Daniels <mar...@snoutfarm.com> wrote:
Walter White was drawn into that life as sure as water runs
downhill. Watching him go deeper and deeper was like repeating “step” in GDB.
From: Friam <friam-boun...@redfish.com> On Behalf Of Jochen Fromm
Sent: Thursday, January 25, 2024 1:34 PM
To: The Friday Morning Applied Complexity Coffee Group <friam@redfish.com>
Subject: [FRIAM] Breaking Bad and Free Will
Since one of you mentioned today that free will is a recurring
FRIAM topic I was thinking about it and found this SCIAM article that argues
that human beings do have the power to make conscious choices.
https://www.scientificamerican.com/article/free-will-is-only-an-illusion-if-you-are-too/
On the other hand the drug addicts in Breaking Bad and elsewhere
clearly demonstrate that drug addiction inevitably takes away our free will and
replaces it with a strong compulsion to get and consume the drug, no matter how
devastating the consequences are.
Robert McKee says in his book "Story: Substance, Structure, Style and
the Principles of Screenwriting" that true character is revealed in the choices a
human being makes under pressure. Could you say that a strong character or personality
reduces our free will too, because it restricts our choices and decisions?
-J.