>How can an abstraction be felt?

This is a very deep question. To me, anyway.

I have followed this thread with scattered attention while doing other
things, and I am greatly interested in the relative roles of the self, the
observer and causality. My interest is in nailing down the human mind's
position in this regard. I have come a long way in this (I think!).

What I wanted to run by you folks for comment is the notion of an
'instance' of type 'mind' as simultaneously being a member of class
'observer' and of class 'observed'. In particular I am concerned with our
role, as Homo sapiens sapiens, as observer and as observed. It is part of my
dealing with self-referentiality - the notion of self and the human mind - if
you can just bear with me a second.

Throw an N-dimensional net around any portion of any process in any
universe. Call it a computational entity 'X'. Entity X (described by an
as-yet-undefined observer) will have within it a model of its surroundings
commensurate with (a) the sophistication of the 'computational abstraction
capacity' within it and (b) the sophistication of the entity's connectivity
with the rest of the universe(s).
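
If it helps, here is a toy sketch of what I mean (Python, purely
illustrative - the class and attribute names are just my labels for the two
factors above, and the way they combine is arbitrary), showing an instance
that is simultaneously observer and observed:

```python
# Toy sketch only - not a design for anything. An 'entity' is whatever
# falls inside the net; its model of its surroundings is bounded by
# (a) its computational abstraction capacity and (b) its connectivity
# (sensory feeds) to the rest of the universe(s).

class Entity:
    """An instance is a member of class 'observer' AND class 'observed'."""

    def __init__(self, name, abstraction_capacity, connectivity):
        self.name = name
        self.abstraction_capacity = abstraction_capacity  # factor (a), 0..1
        self.connectivity = connectivity                  # factor (b), 0..1
        self.models = {}  # third-person models of self and of other entities

    def observe(self, other):
        # Model fidelity is limited by BOTH factors; taking the minimum is
        # an arbitrary toy choice, not a claim about how it really combines.
        fidelity = min(self.abstraction_capacity, self.connectivity)
        self.models[other.name] = fidelity
        return fidelity

human = Entity("X1: one human", abstraction_capacity=0.9, connectivity=0.9)
greenland = Entity("X2: population of Greenland", 0.2, 0.1)

human.observe(greenland)   # relatively good model
greenland.observe(human)   # dim, low-fidelity model
human.observe(human)       # self-observation: the same machinery turned inward
```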

If you consider the well-worn 'what it is like' question as applied to the
netted entity, you begin to approach the answer to the "How can an
abstraction be felt?" question. There is only one unassailable fact
(opinions please) about this: the only way the 'what it is like to be
entity X' question can be answered is to 'be' entity X, i.e. the
computational process itself is the description of 'what it is like'. The
third-person observer simply cannot 'be' entity X and hence is forever
excluded from the 'what it is like' descriptive milieu.

This concept can be applied to entity X1 = 1 x Homo sapiens, or X2 = the
complete Homo sapiens population of Greenland. The defining thing is the
boundary formed by the net. X2, considered as a single computational entity,
will have a dim notion of self and poor sensory feeds compared to X1.

Q1: Can X2, as observer, describe X1 to another instance of type X2?  A: Not very well.
Q2: Can X1, as observer, describe X2 to another instance of type X1?  A: Much better.

Why? Because the computational model within X1 for describing things that
are NOT X1 is far superior to that of X2, which is a highly watered-down
computational critter with poor sensory feeds.

Is this way of thinking useful? IMO: yes, when you turn it around and apply
it to the observer.

It tells you that when you throw a 'net' around a chunk of universe, if you
capture optimally, you will capture the complete computational
universe-modelling contents and all sensory feeds. As such the entity can
then become an observer in the universe commensurate with the sophistication
of its modelling, i.e. the causality predictions produced by the models
within the net, given all history and current sensory inputs. Put observer
and observed together (as two 'instances' of entities of a class) and you
have each modelling the other and themselves.
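
Again as nothing more than a toy sketch (names mine): two instances of one
class, each holding a third-person model of itself and of the other - which
is all a third-person observer ever gets, as distinct from 'being' the
entity:

```python
# Toy only: observer and observed as two 'instances' of the same class,
# each modelling the other and themselves.

class NettedEntity:
    def __init__(self, name):
        self.name = name
        self.models = {}  # crude stand-in for internal models

    def model(self, target):
        # 'Modelling' here is recording a third-person description;
        # it is NOT 'being' the target - that stays first-person only.
        self.models[target.name] = "third-person model of " + target.name

a = NettedEntity("observer")
b = NettedEntity("observed")

for x, y in ((a, b), (b, a)):
    x.model(x)  # each models itself ...
    x.model(y)  # ... and the other
```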

Getting back to the original question, "How can an abstraction be felt?":
now 'be' the abstraction. Now apply it to a rock, insect, mouse, dog,
dolphin, human. Consider throwing a net around a galaxy or an entire
universe. Examples:

Consider X1 = a whole universe.
Does it have any sensory feeds or anything 'outside' to model? Nope.
Being 'a universe' is like being nothing.

Consider X1 = an atom of silicon. Does it have any sensory feeds or anything
'outside' to model? Maybe a tiny bit, e.g. some sense of temperature, and no
model of self. It has a lot to model but very little to model with. 'Being'
an atom is very like being nothing, but not quite.
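
To put crude numbers on that graded scale (the figures and the combining
rule are arbitrary toys of mine; the only point is the ordering):

```python
# Toy illustration of the graded scale: with no sensory feeds and nothing
# 'outside' to model, there is (almost) nothing it is like to be the entity.
# The rule below is arbitrary, chosen only to reproduce the ordering.
examples = {
    #                  feeds  model of self  anything 'outside' to model?
    "whole universe": (0.0,   0.0,           False),
    "silicon atom":   (0.01,  0.0,           True),
    "one human":      (1.0,   1.0,           True),
}

for name, (feeds, self_model, has_outside) in examples.items():
    likeness = feeds * (self_model + (1.0 if has_outside else 0.0))
    print(name, "->", likeness)
# whole universe -> 0.0 ; silicon atom -> 0.01 ; one human -> 2.0
```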

The tricky thing that has distracted us all is that we tend to think of the
'feeling' aspect as 'happening to us'. The reason for this is that we have a
very sophisticated model of self. It is NOT happening to us. We ARE it. Our
skull contains brain matter. The brain matter models internal and external
processes, and it is so sophisticated we feel like we are outside ourselves.
We deal with the rest of the universe at the boundary of our bodies (or
temporary extensions we connect to it). The collective effect of a
concentration of modelling capacity and sensory nexus makes us feel like our
cranium is where it's all happening.

You can do various thought experiments on this.
E.g. Blow your brain up to the size of a Mack truck (keeping everything else
scaled appropriately). What happens to the model of self?
E.g. Distribute your brain around your whole body instead of just the
cranium. Now what is it like?
 :-) It's helpful knowing that to talk to a human you look at the head, but
what happens when there is no 'focus' of attention at the physical scales
you are forced to inhabit?

In this way I have been able to formulate the uppermost level of my model
for a potential AI.

I have never had anyone to talk this through with. There appears to be
no-one else on the planet but you guys who get it. I look forward to seeing
how you 'feel' about the descriptive capacity of this way of looking at
things. I'm also interested in how others' ways of describing things match
up with this. Baars and Chalmers and Dennett and Sloman slide tantalisingly
by and miss it. There seems to be an industry built on being boggled by the
issue, with things like qualia endlessly debated. Why is it that a group of
people focussed on QM/MWI has hit on this issue? Or is it me simply not
being widely read enough?

Your thoughts?

Cheers,

Colin Hales

