On Jul 26, 8:58 am, ronaldheld wrote:
> http://arxiv.org/ftp/arxiv/papers/1107/1107.4028.pdf
> Sorry about my title choice. Any comments?
> Ronald
I like this a lot. Some Highlights:
[…Turing machine computation (like any other form of programmable
computation) transforms abstract objects by syntactic rules. Both,
rules and semantics, must be supplied by the user. Hence, this form of
computation is not part of a natural (i.e. user-observer independent)
Ontology.]
This is what I've been trying to get across with the computer monitor
metaphors. The user needs a monitor to make any sense of what's going
on in the computer, but the brain has no monitor; it is its own user,
and therefore fundamentally, ontologically different from something
that merely processes data generically.
[…As a paradigmatic situation, consider the usual experimental
condition for studying “coding”: typically, one applies stimuli whose
metric one chooses as plausible, and then evaluates the neural (or
behavioral) responses they evoked by a metric one chooses, such that
one obtains a statistically valid (or otherwise to the experimenter
meaningful) stimulus-response relationship. One then claims that the
metric of the neural response ‘encodes’ the stimulus, being tempted to
conclude that the organism is ‘processing information’ in the MTC
paradigm. But recall that this paradigm deals with selective
information; that is: the receiver needs to have available a known
ensemble of stimuli from which to select the message. Moreover, this
procedure does not permit one to know whether any of the metrics
applied is of intrinsic significance to the organism under study: the
observer made the choices on pragmatic grounds (Werner, 1988). In
reality, once triggered by an external input, all that is accessible
to the nervous system are the states of activity of its own neurons;
hence it must be viewed as self-referring system.]
Yes. Neural codes require an experiential alphabet, one which must
also be, but cannot be, neurological. Paradox? Not if you see
neurology and experience as opposite ends of the same phenomenological
continuum rather than as a physical-mathematical phenomenon and its
interpretive epiphenomenon.
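As a toy illustration of the point the paper makes about observer-chosen metrics (this sketch is mine, not the paper's; the response values, the two binning schemes, and the function names are all invented for the example): the same raw neural "responses" yield different amounts of Shannon-style selective information depending purely on how the experimenter chooses to partition them.

```python
import math
from collections import Counter

def entropy_bits(counts):
    """Shannon entropy (in bits) of an empirical distribution given bin counts."""
    total = sum(counts)
    return -sum((c / total) * math.log2(c / total) for c in counts if c > 0)

# Hypothetical raw responses (e.g., firing rates); the numbers are made up.
responses = [0.1, 0.4, 0.45, 0.9, 0.95, 1.0, 1.1, 1.6]

# Metric A: the observer bins coarsely (below vs. at-or-above 1.0).
coarse = Counter(r >= 1.0 for r in responses)

# Metric B: the same observer could instead bin into widths of 0.5.
fine = Counter(int(r / 0.5) for r in responses)

h_coarse = entropy_bits(coarse.values())
h_fine = entropy_bits(fine.values())
# h_fine > h_coarse: the "information" in the response is not an intrinsic
# property of the neurons; it depends on the partition the observer picked.
```

Nothing in the data itself tells you which binning is "the" code — exactly the pragmatic, observer-relative choice Werner (1988) is cited for.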
[…MTC {Mathematical Theory of Communication} and its generalizations
to Information Theory and ‘information processing’ are predicated on
the assumption of normalcy of data distributions, and ergodicity of
the data generating process. But the abundant evidence for fractality
and self-similarity at all levels of neural organization violates this
assumption]
Not entirely sure that fractality and self-similarity violate the
spirit of information processing, even if they might violate the
letter of MTC, but I do think that the self-similarity itself may be a
symptom of an exhaustive boundary for information/computation. It is
the observer bumping up against the limit of 3p observation such that
the act is reflected back upon itself, leaving 1p phenomena blissfully
private, proprietary, and relatively inscrutable from the outside.
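To make the quoted claim about ergodicity concrete (again my sketch, not the paper's; the random walk stands in for a self-similar neural signal, and all names here are invented): for i.i.d. data, time averages over blocks converge as classical information-theoretic assumptions expect, but for an exactly self-similar process like a random walk, block averages never settle down.

```python
import random
import statistics

random.seed(0)  # deterministic toy example

def block_mean_spread(series, n_blocks=20):
    """Std. dev. of block means; shrinks like 1/sqrt(block size) for i.i.d. data."""
    size = len(series) // n_blocks
    means = [statistics.mean(series[i * size:(i + 1) * size])
             for i in range(n_blocks)]
    return statistics.stdev(means)

N = 20000
iid = [random.gauss(0, 1) for _ in range(N)]

# A random walk is self-similar (rescaling time by c rescales values by
# sqrt(c)); its level series is nonstationary, so time averages do not
# converge the way ergodic, normal-distribution assumptions require.
walk, s = [], 0.0
for x in iid:
    s += x
    walk.append(s)

spread_iid = block_mean_spread(iid)
spread_walk = block_mean_spread(walk)
# spread_walk is orders of magnitude larger than spread_iid.
```

Whether this breaks only "the letter of MTC," as I'd argue, or the whole information-processing picture, as the author seems to argue, is the point in dispute.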
[…the colloquial ‘Information’ of prevailing linguistic use
undoubtedly fostering its ready acceptance. This entailed forgetting
that Information (technically speaking) and Computation (of the Turing
type) are observer constructs.]
I never get tired of making this point or seeing someone else make it.
Information doesn't exist. Like other feelings, motives, senses,
thinking, and computing, it INsists. Literally. Neurons EXist. What
neurons experience, individually and collectively, does not. We do not
exist; we are that which insists through a brain, body, and lifetime.
[…”In the language of representations, the olfactory bulb extracts
features, encodes information, recognizes stimulus patterns, …These
are seductive phrases, but in animal physiology they are empty
rhetoric.”]
[…Here, then is the drastic difference to the cybernetic Information
Metaphor: neural systems do not process information; rather, being
perturbed by external events impinging on them, neural systems
rearrange themselves by discontinuous phase transitions to new ontic
states, formed by self-organization according to their internal
dynamics. Internal to the system, these new ontic states are the ‘raw
material’ for the ongoing system dynamics, by feedback from, or
forward projection to other levels of organization.]
Here's where I diverge a bit from where he's going. He makes a good
case for non-comp but, as Bruno points out, goes on to suggest a
different flavor of comp as the next step. He's expressing hope for
next-gen comp in the form of Fractional Calculus and Complex System
Dynamics. In my view, this is an interesting route to understanding
aspects of how 1p experiences are shadowed in 3p, but ultimately it is
still dissecting shadows. To get at the 1p, I think that you need to
go into radical simplicity - elementary assumptions underlying
mathematics - cycles, sequence, symmetric juxtaposition, etc - and go
beyond.