Re: The Brain Speaks (to Bruno)

2011-07-26 Thread Bruno Marchal


On 26 Jul 2011, at 14:58, ronaldheld wrote:


http://arxiv.org/ftp/arxiv/papers/1107/1107.4028.pdf
Sorry about my title choice. Any comments?
Ronald


The author might well be 100% correct, but this is still computationalism. And  
I agree with his criticism of the computation *metaphor(s)*, which often  
tries to impose a choice of *one* universal machine on the brain,  
when comp implies that this is already false for physics.


So the paper might look as if it is against comp, but it works within the  
comp theory (in the weak sense I propose) throughout. His  
criticism of AI is very similar to the one I share with  
Colin (despite the fact that Colin *pretends* to be against comp, which he is not).


There is a big difference between saying that there is a level at which I am  
digitally encodable, and saying things like "the brain works like this  
or that machine". We know that if comp is true we cannot know the  
level. Werner might be wrong in thinking that he is the one who has found the  
right comp level.


Now, as for the way to tackle the functioning of the brain, I am agnostic  
about Werner's proposal. It is still comp, and he might be making the same  
implementation error as those he criticizes. He criticizes  
representationalism, but he is not convincing (and comp does not need  
it, though psychology itself makes it plausible). Here he is on the fringe  
of person elimination, and for no reason that I can see to be relevant to his  
model.


In that paper he avoids talking about consciousness, and in his other  
paper he makes the brain-mind identity error (in the comp context).  
The brain does not speak for itself: only a person can do that, and a  
person is not a brain; a person owns a brain.


His proposal *might* be correct, and even interesting from an  
engineering point of view, but what it describes is a computing  
machine, and if he is correct, physics is a branch of theology. He  
might implicitly be a physicalist, though, and from *that* perspective he  
does eliminate consciousness and the person.


To sum up, don't confuse:
- the belief that there exists a level n such that I am a machine at that  
level (comp)
- there exists a level n such that I believe that I am a machine at that  
level (a comp metaphor)
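
In logical notation (a rough paraphrase, writing $B$ for belief and $M(n)$ for "I am a machine at substitution level $n$"), the difference is one of quantifier scope:

$$\text{comp:}\quad B\,[\exists n\, M(n)] \qquad\qquad \text{comp metaphor:}\quad \exists n\, B\,[M(n)]$$

In comp the level is believed to exist, but it cannot be known; the metaphor singles out some particular level.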


Bruno


http://iridia.ulb.ac.be/~marchal/






Re: The Brain Speaks (to Bruno)

2011-07-26 Thread Craig Weinberg
On Jul 26, 8:58 am, ronaldheld <ronaldh...@gmail.com> wrote:
 http://arxiv.org/ftp/arxiv/papers/1107/1107.4028.pdf
 Sorry about my title choice. Any comments?
                              Ronald

I like this a lot. Some highlights:

[…Turing machine computation (like any other form of programmable
computation) transforms abstract objects by syntactic rules. Both,
rules and semantics, must be supplied by the user. Hence, this form of
computation is not part of a natural (i.e. user-observer independent)
Ontology.]

This is what I've been trying to get across with the computer monitor
metaphors. The user needs a monitor to make any sense out of what's
going on in the computer, but the brain has no monitor; it is its own
user, and therefore fundamentally, ontologically different from
something that merely processes data generically.
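
As a toy sketch of that point (everything here is invented for
illustration), a rule table can rewrite symbols with no notion of what
they mean; calling the result "incrementing a number" is the user's
semantics, not the machine's:

# A minimal rule-following machine: it rewrites symbols by a syntactic
# table and has no notion of what the symbols "mean".
def run(tape, rules, state="S", head=0, steps=100):
    tape = list(tape)
    for _ in range(steps):
        if state == "HALT":
            break
        if head >= len(tape):
            tape.append("_")
        state, write, move = rules[(state, tape[head])]
        tape[head] = write
        head += move
    return "".join(tape)

# Syntactic rules: (state, symbol) -> (next state, symbol to write, move).
rules = {
    ("S", "1"): ("S", "1", +1),    # scan right across the 1s
    ("S", "_"): ("HALT", "1", 0),  # write one more 1, then stop
}

print(run("111_", rules))  # -> "1111"; reading that as "3 + 1 = 4" is the
                           # observer's interpretation, not the machine's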

[…As a paradigmatic situation, consider the usual experimental
condition for studying “coding”: typically, one applies stimuli whose
metric one chooses as plausible, and then evaluates the neural (or
behavioral) responses they evoked by a metric one chooses, such that
one obtains a statistically valid (or otherwise to the experimenter
meaningful) stimulus-response relationship. One then claims that the
metric of the neural response ‘encodes’ the stimulus, being tempted to
conclude that the organism is ‘processing information’ in the MTC
paradigm. But recall that this paradigm deals with selective
information; that is: the receiver needs to have available a known
ensemble of stimuli from which to select the message. Moreover, this
procedure does not permit one to know whether any of the metrics
applied is of intrinsic significance to the organism under study: the
observer made the choices on pragmatic grounds (Werner, 1988). In
reality, once triggered by an external input, all that is accessible
to the nervous system are the states of activity of its own neurons;
hence it must be viewed as self-referring system.]

Yes. Neural codes require an experiential alphabet which would have to
be, yet cannot be, neurological. Paradox? Not if you see neurology and
experience as opposite ends of the same phenomenological continuum
rather than as physical-mathematical phenomenon and interpretive
epiphenomenon.
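
A toy version of the experimental setup the passage describes (the
stimuli, the response model, and both metrics are all invented here,
purely for illustration):

import numpy as np

rng = np.random.default_rng(0)
stimuli = rng.uniform(0, 1, 200)                    # experimenter-chosen stimulus metric
responses = 5.0 * stimuli + rng.normal(0, 1, 200)   # toy "neural" responses

# Experimenter-chosen response metric: a linear fit and a correlation.
slope, intercept = np.polyfit(stimuli, responses, 1)
r = np.corrcoef(stimuli, responses)[0, 1]
print(f"rate = {slope:.2f}*stim + {intercept:.2f}, r = {r:.2f}")

# The strong correlation licenses the claim that the response "encodes"
# the stimulus -- but both metrics were the observer's choices, so nothing
# here shows they have intrinsic significance for the organism.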

[…MTC {Mathematical Theory of Communication} and its generalizations
to Information Theory and ‘information processing’ are predicated on
the assumption of normalcy of data distributions, and ergodicity of
the data generating process. But the abundant evidence for fractality
and self-similarity at all levels of neural organization violates this
assumption]

I'm not entirely sure that fractality and self-similarity violate the
spirit of information processing, even if they might violate the
letter of MTC, but I do think that the self-similarity itself may be a
symptom of an exhaustive boundary for information/computation. It is
the observer bumping up against the limit of 3p observation such that
the act is reflected back upon itself, leaving 1p phenomena blissfully
private, proprietary, and relatively inscrutable from the outside.
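
A minimal sketch of the ergodicity point (toy processes, not neural
data): an information-rate estimate from a single recording is only
representative if time averages converge to ensemble averages.

import numpy as np

rng = np.random.default_rng(1)

# Ergodic case: i.i.d. noise; one long time average approximates the
# ensemble mean of 0.
print("iid time average:", rng.normal(0, 1, 10000).mean())

# Non-ergodic case: each realization carries a random fixed offset, so a
# time average over any one realization does not converge to the
# ensemble mean.
for c in rng.normal(0, 1, 5):
    x = c + rng.normal(0, 1, 10000)
    print("realization time average:", x.mean())  # hovers near c, not 0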

[…the colloquial ‘Information’ of prevailing linguistic use
undoubtedly fostering its ready acceptance. This entailed forgetting
that Information (technically speaking) and Computation (of the Turing
type) are observer constructs.]

I never get tired of making this point or seeing someone else make it.
Information doesn't exist. Like other feelings, motives, senses,
thinking, and computing, it INsists. Literally. Neurons EXist. What
neurons experience, individually and collectively, does not. We do not
exist; we are that which insists through a brain, body, and lifetime.

[…”In the language of representations, the olfactory bulb extracts
features, encodes information, recognizes stimulus patterns, …These
are seductive phrases, but in animal physiology they are empty
rhetoric.”]

[…Here, then is the drastic difference to the cybernetic Information
Metaphor: neural systems do not process information; rather, being
perturbed by external events impinging on them, neural systems
rearrange themselves by discontinuous phase transitions to new ontic
states, formed by self-organization according to their internal
dynamics. Internal to the system, these new ontic states are the ‘raw
material’ for the ongoing system dynamics, by feedback from, or
forward projection to other levels of organization.]
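
As a toy illustration of "perturbation followed by relaxation to a new
state under internal dynamics" (a standard Hopfield-style attractor
sketch; my example, not Werner's model):

import numpy as np

rng = np.random.default_rng(2)
pattern = np.where(rng.normal(size=32) >= 0, 1.0, -1.0)  # a stored +/-1 state
W = np.outer(pattern, pattern)                           # Hebbian weights
np.fill_diagonal(W, 0)                                   # no self-connections

state = pattern.copy()
state[:10] *= -1          # external perturbation of 10 of the 32 units

for _ in range(20):       # relaxation under the network's own dynamics
    state = np.sign(W @ state)
    state[state == 0] = 1

print("settled back to the stored state:", np.array_equal(state, pattern))

Nothing "decodes" the perturbation; the system just falls back into a
stable state of its own dynamics.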

Here's where I diverge a bit from where he's going. He makes a good
case for non-comp but, as Bruno points out, goes on to suggest a
different flavor of comp as the next step. He's expressing hope for
next-gen comp in the form of Fractional Calculus and Complex System
Dynamics. In my view, this is an interesting route to understanding
aspects of how 1p experiences are shadowed in 3p, but ultimately it is
still dissecting shadows. To get at the 1p, I think that you need to
go into radical simplicity - elementary assumptions underlying
mathematics - cycles, sequence, symmetric