Re: computer pain

2006-12-17 Thread James N Rose



Brent Meeker wrote:

> > That notion may fit comfortably with your presumptive
> > ideas about 'memory' -- computer stored, special-neuron
> > stored, and similar.  But the universe IS ITSELF 'memory
> > storage' from the start.  Operational rules of performance
> > -- the laws of nature, so to speak -- are 'memory', and
> > inform EVERY organization of action-appropriateness.  It's
> > 'memory' of the longest-term kind, actually.
> 
> Assuming there is no inherent randomness.  Even in MWI of QM, you can't 
> recover
> the past from the present, except in a coarse-grained approximation.  So you 
> may
> say the universe has 'memory' and even 'is conscious' but that's stretching 
> the
> common meaning of the words beyond recognition.


Your comments are exactly on target, Brent, because I am
doing exactly that - stretching the 'common meanings' of
this and several other concepts.  Not to take them beyond
'recognition', but definitely beyond the conventional.
Because the conventional notions are narrow and too
presumptive (by familiarity, and unexplored assumptions).

When you mentioned MWI and 'randomness', my first thought
was of Andrei Linde and alternative universes defined not
as variables of -this universe- but as alternatives of
the universal constants - where different values of force
and field strength and assignment vary.  E.g., where the
fine structure constant is not (approx) 1/137, but maybe
1/143.9.

Even those 'possible universes' would be functionally-pinned
to certain values and not others.  Where again - the value-states
would be the 'primal memory' of -that- specific universe.

Actions that happen 'because of' or 'conditional on' some
temporally "previous" information (aka 'instructions', guide
data, etc.) are what you seem to associate with 'memory-driven'
behaviors.  We have a current notion of 'stable, recallable
information', and by experience that kind of information seems
to be N (data) that is 'stored and later accessible' when/as
needed.  So with that kind of -general notion- I began to think
about 'general conditions' and where functional instructive
information -could- be held for -any system's- general draw.

Memory as resource.   Well, it just seems to make sense to
first look at relationships that fill that definition, not
the mechanisms we are aware of that match the relationship.

"Memory" of 'how' to perform.  Irrespective and indifferent to
'mechanisms'.  Whether it provides 'choice' or not "to" perform.

The universal constants and similar invariant relations-of-systems
must be includable as a form-of-"memory".

Which in the long run is much, much better for our intuitive
notions of existence, life and being.  Life, for example, may
be a special existential state, but it's a lot more spectrally
consistent to think of sentience as being wholly present, first in
precursive primary forms and then in refined, improved-capacity
forms later on, than to say that this massively important
quality flat-out has no familially related qualia, and then
suddenly does.

We already see other aspects that show the spectral notion
to exist.  For example, there are power laws and wave-function
-laws- that are seen both in the QM realm and in macro/complex
systems - like animal speciation/distribution studies; relation
for relation.  Even though the 'qualia' that display the rules,
laws and relations are quite different, there is more going on
than scaling differences to account for those mappings.

The example I use often is the QM-"defined" electron 'shells'
of atoms.  The interesting thing about them - especially the
valence group - is that electrons of the appropriate energies
can fill or vacate those QM regions.  No big thing, you might
remark.  Very calculable.  Great for building molecules we
might want; understanding physics events and chemistry.

Right.  But there's more.

A qualia currently not defined.

And it goes like this:  certain respiring animals
use materially identifiable organs called 'lungs',
to capture gas and release gas.  They take certain
energy/configuration molecules and exude other 
energy/configuration molecules. Plants are complementary
and do the opposite exchange.

But both kinds of organisms 'breathe' in some fashion,
through some 'physical' organs or organelles.

So let's look at the valence shells of electrons.  There
is no cognition or 'intentional' or survival-related
necessity to hold/release electrons.  An atom or molecule
won't 'cease to exist' if the activity of filling/emptying
valence states of an electron doesn't take place (as part
of some larger/extended metabolic sequence of electron
transferring) - but - a living organism -would- die if its
atoms didn't functionally 'breathe'.

The valence shells of atoms are their de facto 'functional'
"lungs", even if they don't exist in/as a 'physical'
manifestation of -material- operant organelle.

I am NOT imposing a biology model on to physics.
I AM citing that primitive precursive BEHAVIOR
CAPACITIES exist from th

RE: computer pain

2006-12-17 Thread Colin Geoffrey Hales

>
> Colin,
>
> You have described a way in which our perception may be more than can
> be explained by the sense data. However, how does this explain the
> response
> to novelty? I can come up with a plan or theory to deal with a novel
> situation
> if it is simply described to me. I don't have to actually perceive
> anything. Writers,
> philosophers, mathematicians can all be creative without perceiving
> anything.
>
> Stathis Papaioannou
>

Imaginative processes also use phenomenal consciousness. To have it
described to you, you had to use phenomenal consciousness. Once you dispose
of PC you are model-bound in all ways. You have to have a model to
generate the novelty! PC pervades the whole process at all levels. Look
what happens to Marvin. Even if he had someone tell him there was an
outside world he'd never know what the data was telling him.

Gotta go.

cheers

colin






Re: computer pain

2006-12-17 Thread 1Z


Colin Geoffrey Hales wrote:

> What I expect to happen is that the field configuration I find emerging in
> the guts of the chips will be different, depending on the object, even
> though the sensory measurement is identical. The different field
> configurations will correspond to the different objects. That is what
> subjective experience will look like from the "outside".
>
> The chip's 'solution' to the charge configuration will take up a
> configuration based on the non-locality...hence the scientists will report
> different objects, even when their sensory measurement is identical, and
> it is the only apparent access they have to the object (to us).

And is it going to be the right solution? Because
then your chips are going to do better than humans.
Humans presented with ambiguous sensory data make wrong
perceptual judgements, or random ones (Necker cube).





RE: computer pain

2006-12-17 Thread Stathis Papaioannou


Colin,

You have described a way in which our perception may be more than can 
be explained by the sense data. However, how does this explain the response 
to novelty? I can come up with a plan or theory to deal with a novel situation 
if it is simply described to me. I don't have to actually perceive anything.
Writers, philosophers, mathematicians can all be creative without perceiving
anything.

Stathis Papaioannou


> Date: Mon, 18 Dec 2006 10:54:05 +1100
> From: [EMAIL PROTECTED]
> Subject: RE: computer pain
> To: everything-list@googlegroups.com
> 
> 
> Stathis said:
> 
> > If you present an object with "identical sensory measurements" but get
> different results in the chip, then that means what you took as "sensory
> measurements" was incomplete. For example, blind people might be able to
> sense the presence of someone who silently walks into the room due to
> their body heat, or the breeze created by their breathing, or perhaps
> even
> > some proximity sensor that we have not as yet discovered.
> >
> > But even supposing that perception involves some non-local
> > interaction (which would of course be an amazing finding
> > on its own, regardless of the
> > implications for consciousness), much interesting scientific
> > work has nothing to do with the scientist's direct
> > connection with his object of study. A
> > scientist can read about empirical data collected
> > by someone on the other side of the world and come
> > up with a theory to explain it; for all he knows, the data
> > is completely fabricated, but this makes no difference
> > to the cognitive processes
> > which result in the theory.
> >
> > Stathis Papaioannou
> 
> RE: "Incomplete" sensing
> 
> Sorry, Stathis, but no amount of sensory feeds would ever make it
> 'complete'. The sensory data is a fundamentally ambiguous statistic of its
> original source. That argument won't do it. The question is: what physical
> processes cause the brain's field structure to settle on a particular
> solution. That constraint is NOT in the sensory data.
> 
> Yes, it will be an amazing result to everyone but me. I find it
> amazing that everyone thinks it could be anything else, or that somehow the
> incomplete laws derived using appearances can explain the appearance
> generation system. It's like saying the correlated contents of the image
> in a mirror somehow fathom the reflective surface of the mirror that
> generated the appearances.
> 
> RE: Science
> I know accurate science requires certain behavioural normatives. Effective
> science has skill sets, individual characteristics of the temperament and
> genetic propensities of individual scientists. I know it has a social
> aspect. All this is true but irrelevant.
> 
> From one of the metascience gurus:
> 
> "Science is not done by logically omniscient lone knowers but by
> biological systems with certain kinds of capacities and limitations. At
> the most fine grained level, scientific change involves modifications of
> the cognitive states of limited biological systems".
> Philip Kitcher, 1993
> "The advancement of science : science without legend, objectivity without
> illusions"
> 
> It's going to be fun watching the macro-scale electric field change in
> response to different objects when the sensory measurement is demonstrably
> the same. The only reason we can't do it in brain material is we can't get
> at it without buggering it up with probes and other junk related to the
> measurement. Our imaging techniques measure the wrong things.
> 
> It'll light up a light when the subjective experience changes. We can wire
> it up like that. That will be a spooky day. I have to leave now. Merry
> XMAS and 2007 all you everything folk...
> 
> cheers
> 
> colin
> 
> 
> 
> > 






Re: computer pain

2006-12-17 Thread Brent Meeker

James N Rose wrote:
> 
> 
> Brent Meeker wrote:
>  
>> If consciousness is the creation of an inner narrative
>> to be stored in long-term memory then there are levels
>> of consciousness.  The amoeba forms no memories and so
>> is not conscious at all. A dog forms memories and even
>> has some understanding of symbols (gestures, words) and
>> so is conscious.  In between there are various degrees
>> of consciousness corresponding to different complexity
>> and scope of learning.
> 
> That notion may fit comfortably with your presumptive
> ideas about 'memory' -- computer stored, special-neuron
> stored, and similar.  But the universe IS ITSELF 'memory
> storage' from the start.  Operational rules of performance
> -- the laws of nature, so to speak -- are 'memory', and 
> inform EVERY organization of action-appropriateness.  It's
> 'memory' of the longest-term kind, actually.

Assuming there is no inherent randomness.  Even in MWI of QM, you can't recover 
the past from the present, except in a coarse-grained approximation.  So you 
may say the universe has 'memory' and even 'is conscious' but that's stretching 
the common meaning of the words beyond recognition.

> 
> Amoebic behavior embodies more than stimulus-response
> actions - consistent with organismic plan 'must eat';
> but less than your criterial state of sentient awareness
>  - consistent with 'plan dynamics/behaviors'.
> 
> The rut that science is in, is presumption that 'our sentience
> is 'the only' sentience form' and is the gold standard for 
> any/all aware-behavior activity.
> 
> Sentience better fits a model of spectrum and degrees; rather
> than not-extant / suddenly-extant.

I thought I said that.

Brent Meeker




RE: computer pain

2006-12-17 Thread Colin Geoffrey Hales

Stathis said:

> If you present an object with "identical sensory measurements" but get
different results in the chip, then that means what you took as "sensory
measurements" was incomplete. For example, blind people might be able to
sense the presence of someone who silently walks into the room due to
their body heat, or the breeze created by their breathing, or perhaps
even
> some proximity sensor that we have not as yet discovered.
>
> But even supposing that perception involves some non-local
> interaction (which would of course be an amazing finding
> on its own, regardless of the
> implications for consciousness), much interesting scientific
> work has nothing to do with the scientist's direct
> connection with his object of study. A
> scientist can read about empirical data collected
> by someone on the other side of the world and come
> up with a theory to explain it; for all he knows, the data
> is completely fabricated, but this makes no difference
> to the cognitive processes
> which result in the theory.
>
> Stathis Papaioannou

RE: "Incomplete" sensing

Sorry, Stathis, but no amount of sensory feeds would ever make it
'complete'. The sensory data is a fundamentally ambiguous statistic of its
original source. That argument won't do it. The question is: what physical
processes cause the brain's field structure to settle on a particular
solution. That constraint is NOT in the sensory data.

Yes, it will be an amazing result to everyone but me. I find it
amazing that everyone thinks it could be anything else, or that somehow the
incomplete laws derived using appearances can explain the appearance
generation system. It's like saying the correlated contents of the image
in a mirror somehow fathom the reflective surface of the mirror that
generated the appearances.

RE: Science
I know accurate science requires certain behavioural normatives. Effective
science has skill sets, individual characteristics of the temperament and
genetic propensities of individual scientists. I know it has a social
aspect. All this is true but irrelevant.

>From one of the metascience gurus:

"Science is not done by logically omniscient lone knowers but by
biological systems with certain kinds of capacities and limitations. At
the most fine grained level, scientific change involves modifications of
the cognitive states of limited biological systems".
Philip Kitcher, 1993
"The advancement of science : science without legend, objectivity without
illusions"

It's going to be fun watching the macro-scale electric field change in
response to different objects when the sensory measurement is demonstrably
the same. The only reason we can't do it in brain material is we can't get
at it without buggering it up with probes and other junk related to the
measurement. Our imaging techniques measure the wrong things.

It'll light up a light when the subjective experience changes. We can wire
it up like that. That will be a spooky day. I have to leave now. Merry
XMAS and 2007 all you everything folk...

cheers

colin






RE: computer pain

2006-12-17 Thread Stathis Papaioannou


Colin Hales writes:

> 
> Stathis said
> <>
> > and Colin has said that he does not believe that philosophical zombies
> can exist.
> > Hence, he has to show not only that the computer model will lack the 1st
> person
> > experience, but also lack the 3rd person observable behaviour of the
> real thing;
> > and the latter can only be the case if there is some aspect of brain
> physics which
> > does not comply with any possible mathematical model.
> >
> > Stathis Papaioannou
> 
> I just thought of a better way of explaining 'deviation'.
> 
> Maxwell's equations are not 'unique' in the sense that there are an
> infinite number of different charge configurations that will produce the
> same field configurations around some surface. This is a very old
> result...was it Poisson who said it? Can't remember.
> 
> Anyway I will be presenting different objects to my 'chip scientists',
> but I will be presenting them in such a way as the sensory measurement is
> literally identical.
> 
> What I expect to happen is that the field configuration I find emerging in
> the guts of the chips will be different, depending on the object, even
> though the sensory measurement is identical. The different field
> configurations will correspond to the different objects. That is what
> subjective experience will look like from the "outside".
> 
> The chip's 'solution' to the charge configuration will take up a
> configuration based on the non-locality...hence the scientists will report
> different objects, even when their sensory measurement is identical, and
> it is the only apparent access they have to the object (to us).
> 
> I think that's more like what you are after... there's no "failure to
> obey" maxwell's equations, but their predictions as to charge
> configuration is not a unique solution. The trick is to realise that the
> sensory measurement has to be there in order that _any_ solution be
> found, not a _particular_ solution.
> 
> pretty simple really. does that make more sense?

If you present an object with "identical sensory measurements" but get 
different results in the chip, then that means what you took as "sensory 
measurements" was incomplete. For example, blind people might be able 
to sense the presence of someone who silently walks into the room due to 
their body heat, or the breeze created by their breathing, or perhaps even 
some proximity sensor that we have not as yet discovered. 

But even supposing that perception involves some non-local interaction 
(which would of course be an amazing finding on its own, regardless of the 
implications for consciousness), much interesting scientific work has nothing 
to do with the scientist's direct connection with his object of study. A scientist
can read about empirical data collected by someone on the other side of the 
world and come up with a theory to explain it; for all he knows, the data is 
completely fabricated, but this makes no difference to the cognitive processes 
which result in the theory.

Stathis Papaioannou




RE: computer pain

2006-12-17 Thread Stathis Papaioannou


Colin,

I think there is a logical contradiction here. You say that the physical models
do, in fact, explain the 3rd person observable behaviour of a physical system.
A brain is a physical system with 3rd person observable behaviour. Therefore,
the models *must* predict *all* of the third person observable behaviour of a
brain. When a person is handed a complex problem to solve, scratches his head
and chews his pencil, then writes down his proposed solution to the problem,
then that is definitely 3rd person observable behaviour, and it is definitely
due to the motion of matter which perfectly follows physical laws. So you can
in theory build a model which will predict what the person is going to write
down, or at least the sort of thing a real person might write down, given that
classical chaos may make it impossible to predict what a particular person
will do on a particular day.

Now, you would no doubt say that the model will not experience the qualia.
That's OK, but then the model will effectively be a zombie that can behave
just like a person yet lack phenomenal consciousness, which you don't believe
is possible. The only way you can retain this belief consistently is if the
model would *not* be able to predict the 3rd person behaviour of a real
person. And the only way that is possible is if there is some aspect of the
physics in the brain which is inherently unpredictable.

Stathis Papaioannou





> Date: Mon, 18 Dec 2006 07:42:38 +1100
> From: [EMAIL PROTECTED]
> Subject: RE: computer pain
> To: everything-list@googlegroups.com
> 
> 
> Stathis said
> >
> > I'll let Colin answer, but it seems to me he must say that some aspect of
> > brain
> > physics deviates from what the equations tell us (and deviates in an
> > unpredictable
> > way, otherwise it would just mean that different equations are required)
> > to be
> > consistent. If not, then it should be possible to model the behaviour of a
> > brain:
> > predict what the brain is going to do in a particular situation, including
> > novel situations
> > such as those involving scientific research. Now, it is possible that the
> > model will
> > reproduce the behaviour but not the qualia, because the actual brain
> > material is
> > required for that, but that would mean that the model will be a
> > philosophical zombie,
> > and Colin has said that he does not believe that philosophical zombies can
> > exist.
> > Hence, he has to show not only that the computer model will lack the 1st
> > person
> > experience, but also lack the 3rd person observable behaviour of the real
> > thing;
> > and the latter can only be the case if there is some aspect of brain
> > physics which
> > does not comply with any possible mathematical model.
> >
> > Stathis Papaioannou
> 
> Exactly right...except for the bit where you talk about 'deviation from
> the model'. I expect the EM model to be perfectly right - indeed it MUST be
> right or I can't do the experiment, because the modelling I do will help me
> design the chips...it must be right or they won't work. It's just that the
> models don't deliver all the result - you have to "BE the chips" to get
> the whole picture.
> 
> What is missing from the model, seamlessly and irrevocably and
> intrinsically... is that it says nothing about the first person
> perspective. You cannot model the first person perspective by definition,
> because every first person perspective is different! The 'fact' of the
> existence of the first person is the invariant, however.
> 
> So... All the models are quite right and accurate, but are inherently
> third person descriptions of 'the stuff', not 'the stuff'. When you be
> 'the stuff' under the right circumstances there's more to the description.
> And EVERYTHING gets to 'be'... i.e., is forced, implicitly, to uniquely be
> somewhere in the universe and inherits all the properties of that act,
> NONE of which is delivered by empirical laws, which are constructed under
> conditions designed specifically to throw out that
> perspective...and...what's worse...it does it by verifying the laws using
> the FIRST PERSON...to do all scientific measurements...not only that, if
> you don't do it with the first person (measurement/experimental
> observation grounded in the first person of the scientist) you get told
> you are not doing science!
> 
> How screwed up is that!
> 
> My planned experiment makes chips, and on those chips will probably be 4
> intrinsically intermixed 'scientists', all of whom can share each other's
> "scientific evidence" = first person experiences...whilst they do 'dumb
> science' like test a hypothesis H1 = "is the thing there?". By fiddling
> about with the configuration of the scientists you can create
> circumstances where the only way they can agree/disagree is because of the
> first person perspective...and the whole thing will obey Maxwell's
> equations perfectly well from the outside. Indeed the 'p

RE: computer pain

2006-12-17 Thread Stathis Papaioannou


Colin,

If there is nothing wrong with the equations, it is always possible to predict
the behaviour of any piece of matter, right? And living matter is still
matter, which obeys all of the physical laws all of the time, right? It
appeared from your previous posts that you would disagree with this and
predict that living matter would sometimes do surprising, unpredictable
things. In that case, your theory is logically consistent, but you have to
find evidence for it, and it would be easier to tease out the essential
unpredictable physical elements and test them in a physics lab.

Stathis 



> Date: Mon, 18 Dec 2006 07:17:10 +1100
> From: [EMAIL PROTECTED]
> Subject: RE: computer pain
> To: everything-list@googlegroups.com
> 
> 
> >
> > I'm not sure of the details of your experiments, but wouldn't the most
> > direct way to prove what you are saying be to isolate just
> > that physical process
> > which cannot be modelled? For example, if it is EM fields, set up an
> > appropriately
> > brain-like configuration of EM fields, introduce some environmental input,
> > then
> > show that the response of the fields deviates from what Maxwell's
> > equations
> > would predict.
> >
> > Stathis Papaioannou
> 
> I don't expect any deviation from Maxwell's equations. There's nothing
> wrong with them. It's just that they are merely a very good representation
> of a surface behaviour of the perceived universe in a particular context.
> Just like QM. But it's only the surface. The universe is not made of EM or
> QM or atoms or space. All these things are appearances, and what they
> are all actually made of is what delivers those appearances.
> 
> It's a pretty simple idea and it's been around for 300 years (and it's not
> a substance dualism!). The paper I'm writing at the moment (nearly
> finished) is about how this cultural delusion that the universe is made of
> our models pervades the low-level physical science. It's quite stark...the
> application of situated cognition to knowledge is quite pervasive. You can
> take a vertical slice all the way through the entire epistemological tree
> from social sciences down through psychology..cognitive
> science..ecology..ethology..anthropology ||
> neuroscience...chemistry..physics. The || is the sudden break where
> situated cognition matters and where physics, in particular cosmology is
> almost pathologically intent on the surgical excision of the scientist
> from the universe. Situated cognition applied to metascience at the level
> of physics is simply absent.
> 
> You can see it in the desperate drive to make sense of QM maths, as if the
> universe is made of it...that the only way that any sense can be made of
> it is to write complex stories about infinite numbers of universes, all of
> which are somehow explanatory of the weirdness of the maths, rather than
> deal with what the universe is actually made of...when right in front of
> all of them is the perfect way out...start talking about what universes
> must be made of in order that they can complement scientists that have
> perception...to realise that the maths of empirical laws is just a model
> of the stuff, not the stuff.
> 
> Cosmologists are the key. They have some sort of mass fantasy going about
> the mathematics they use. Totally unfounded assumptions pervade their
> craft - far worse than any assumption that the universe is not made of
> idealised maths...the thing that gets erroneously labeled 'metaphysics'
> and eschewed.
> 
> I have done a cartoon representation of a cosmologist made of stuff in a
> universe of stuff staring at the cosmos wondering where all the stuff is,
> when the fact of being able to stare _at all_ is telling him about the deep
> nature of the cosmos. Poor little deluded cosmologist.
> 
> There's nothing wrong with Maxwell's equations. In fact there's nothing
> wrong with any empirical laws. The problem is us...
> 
> cheers,
> 
> colin
> 
> 
> 
> > 





Re: computer pain

2006-12-17 Thread James N Rose



Brent Meeker wrote:
 
> If consciousness is the creation of an inner narrative
> to be stored in long-term memory then there are levels
> of consciousness.  The amoeba forms no memories and so
> is not conscious at all. A dog forms memories and even
> has some understanding of symbols (gestures, words) and
> so is conscious.  In between there are various degrees
> of consciousness corresponding to different complexity
> and scope of learning.

That notion may fit comfortably with your presumptive
ideas about 'memory' -- computer stored, special-neuron
stored, and similar.  But the universe IS ITSELF 'memory
storage' from the start.  Operational rules of performance
-- the laws of nature, so to speak -- are 'memory', and 
inform EVERY organization of action-appropriateness.  It's
'memory' of the longest-term kind, actually.

Amoebic behavior embodies more than stimulus-response
actions - consistent with organismic plan 'must eat';
but less than your criterial state of sentient awareness
 - consistent with 'plan dynamics/behaviors'.

The rut that science is in, is presumption that 'our sentience
is 'the only' sentience form' and is the gold standard for 
any/all aware-behavior activity.

Sentience better fits a model of spectrum and degrees; rather
than not-extant / suddenly-extant.

Correct analoging is more challenging with the former, which is
why no AI aficionados want to give up the Cartesian Split way
of thinking and dealing with things - trying to make square 'wheels'
roll in the long run.

Jamie 





RE: computer pain

2006-12-17 Thread Colin Geoffrey Hales

Stathis said
<>
> and Colin has said that he does not believe that philosophical zombies
can exist.
> Hence, he has to show not only that the computer model will lack the 1st
person
> experience, but also lack the 3rd person observable behaviour of the
real thing;
> and the latter can only be the case if there is some aspect of brain
physics which
> does not comply with any possible mathematical model.
>
> Stathis Papaioannou

I just thought of a better way of explaining 'deviation'.

Maxwell's equations are not 'unique' in the sense that there are an
infinite number of different charge configurations that will produce the
same field configurations around some surface. This is a very old
result...was it Poisson who said it? Can't remember.
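
A minimal numerical sketch of that non-uniqueness (an illustration only,
assuming ordinary electrostatics; the function names and values are invented
for the example and are not part of the chip design): a point charge Q at the
origin and the same total charge Q spread uniformly over a spherical shell of
radius R are two different charge configurations, yet outside the shell they
produce the same potential, so a measurement confined to the exterior cannot
tell them apart.

#include <math.h>
#include <stdio.h>

#define COULOMB_K 8.9875517923e9       /* Coulomb constant, N m^2 / C^2 */
#define PI        3.14159265358979323846

/* Potential at distance r from a point charge q at the origin. */
static double point_potential(double q, double r)
{
    return COULOMB_K * q / r;
}

/* Potential at distance r (> R) from charge q spread uniformly over a
 * sphere of radius R, computed by summing over thin rings of the sphere
 * rather than by quoting the shell theorem. */
static double shell_potential(double q, double R, double r)
{
    const int N = 2000;                /* angular resolution of the sum */
    double sum = 0.0;
    for (int i = 0; i < N; ++i) {
        double theta = PI * (i + 0.5) / N;            /* ring's polar angle */
        double dq = 0.5 * q * sin(theta) * (PI / N);  /* charge on that ring */
        double d  = sqrt(r*r + R*R - 2.0*r*R*cos(theta)); /* ring-to-point distance */
        sum += COULOMB_K * dq / d;
    }
    return sum;
}

int main(void)
{
    double Q = 1e-9, R = 0.1;          /* 1 nC on a 10 cm sphere */
    for (double r = 0.2; r <= 1.01; r += 0.2)
        printf("r = %.1f m   point: %.6f V   shell: %.6f V\n",
               r, point_potential(Q, r), shell_potential(Q, R, r));
    return 0;
}

The two columns agree to the accuracy of the summation even though the
charge arrangements differ - which is the sense in which an exterior field
measurement underdetermines its source.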

Anyway I will be presenting different objects to my 'chip scientists',
but I will be presenting them in such a way as the sensory measurement is
literally identical.

What I expect to happen is that the field configuration I find emerging in
the guts of the chips will be different, depending on the object, even
though the sensory measurement is identical. The different field
configurations will correspond to the different objects. That is what
subjective experience will look like from the "outside".

The chip's 'solution' to the charge configuration will take up a
configuration based on the non-locality...hence the scientists will report
different objects, even when their sensory measurement is identical, and
it is the only apparent access they have to the object (to us).

I think that's more like what you are after... there's no "failure to
obey" maxwell's equations, but their predictions as to charge
configuration is not a unique solution. The trick is to realise that the
sensory measurement has to be there in order that _any_ solution be
found, not a _particular_ solution.

pretty simple really. does that make more sense?

cheers

colin






RE: computer pain

2006-12-17 Thread Colin Geoffrey Hales

Stathis said
>
> I'll let Colin answer, but it seems to me he must say that some aspect of
> brain
> physics deviates from what the equations tell us (and deviates in an
> unpredictable
> way, otherwise it would just mean that different equations are required)
> to be
> consistent. If not, then it should be possible to model the behaviour of a
> brain:
> predict what the brain is going to do in a particular situation, including
> novel situations
> such as those involving scientific research. Now, it is possible that the
> model will
> reproduce the behaviour but not the qualia, because the actual brain
> material is
> required for that, but that would mean that the model will be a
> philosophical zombie,
> and Colin has said that he does not believe that philosophical zombies can
> exist.
> Hence, he has to show not only that the computer model will lack the 1st
> person
> experience, but also lack the 3rd person observable behaviour of the real
> thing;
> and the latter can only be the case if there is some aspect of brain
> physics which
> does not comply with any possible mathematical model.
>
> Stathis Papaioannou

Exactly right...except for the bit where you talk about 'deviation from
the model'. I expect the EM model to be perfectly right - indeed it MUST be
right or I can't do the experiment, because the modelling I do will help me
design the chips...it must be right or they won't work. It's just that the
models don't deliver all the result - you have to "BE the chips" to get
the whole picture.

What is missing from the model, seamlessly and irrevocably and
intrinsically... is that it says nothing about the first person
perspective. You cannot model the first person perspective by definition,
because every first person perspective is different! The 'fact' of the
existence of the first person is the invariant, however.

So... All the models are quite right and accurate, but are inherently
third person descriptions of 'the stuff', not 'the stuff'. When you be
'the stuff' under the right circumstances there's more to the description.
And EVERYTHING gets to 'be'... i.e., is forced, implicitly, to uniquely be
somewhere in the universe and inherits all the properties of that act,
NONE of which is delivered by empirical laws, which are constructed under
conditions designed specifically to throw out that
perspective...and...what's worse...it does it by verifying the laws using
the FIRST PERSON...to do all scientific measurements...not only that, if
you don't do it with the first person (measurement/experimental
observation grounded in the first person of the scientist) you get told
you are not doing science!

How screwed up is that!

My planned experiment makes chips, and on those chips will probably be 4
intrinsically intermixed 'scientists', all of whom can share each other's
"scientific evidence" = first person experiences...whilst they do 'dumb
science' like test a hypothesis H1 = "is the thing there?". By fiddling
about with the configuration of the scientists you can create
circumstances where the only way they can agree/disagree is because of the
first person perspective...and the whole thing will obey Maxwell's
equations perfectly well from the outside. Indeed the 'probes' I will
embed will measure field effects in-situ that are supposed to do what
Maxwell's equations say.

cheers,

colin hales








RE: computer pain

2006-12-17 Thread Colin Geoffrey Hales

>
> I'm not sure of the details of your experiments, but wouldn't the most
> direct way to prove what you are saying be to isolate just
> that physical process
> which cannot be modelled? For example, if it is EM fields, set up an
> appropriately
> brain-like configuration of EM fields, introduce some environmental input,
> then
> show that the response of the fields deviates from what Maxwell's
> equations
> would predict.
>
> Stathis Papaioannou

I don't expect any deviation from Maxwell's equations. There's nothing
wrong with them. It's just that they are merely a very good representation
of a surface behaviour of the perceived universe in a particular context.
Just like QM. But it's only the surface. The universe is not made of EM or
QM or atoms or space. All these things are appearances, and what they
are all actually made of is what delivers those appearances.

It's a pretty simple idea and it's been around for 300 years (and it's not
a substance dualism!). The paper I'm writing at the moment (nearly
finished) is about how this cultural delusion that the universe is made of
our models pervades the low-level physical science. It's quite stark...the
application of situated cognition to knowledge is quite pervasive. You can
take a vertical slice all the way through the entire epistemological tree
from social sciences down through psychology..cognitive
science..ecology..ethology..anthropology ||
neuroscience...chemistry..physics. The || is the sudden break where
situated cognition matters and where physics, in particular cosmology is
almost pathologically intent on the surgical excision of the scientist
from the universe. Situated cognition applied to metascience at the level
of physics is simply absent.

You can see it in the desperate drive to make sense of QM maths, as if the
universe is made of it...that the only way that any sense can be made of
it is to write complex stories about infinite numbers of universes, all of
which are somehow explanatory of the weirdness of the maths, rather than
deal with what the universe is actually made of...when right in front of
all of them is the perfect way out...start talking about what universes
must be made of in order that they can complement scientists that have
perception...to realise that the maths of empirical laws is just a model
of the stuff, not the stuff.

Cosmologists are the key. They have some sort of mass fantasy going about
the mathematics they use. Totally unfounded assumptions pervade their
craft - far worse than any assumption that the universe is not made of
idealised maths...the thing that gets erroneously labeled 'metaphysics'
and eschewed.

I have done a cartoon representation of a cosmologist made of stuff in a
universe of stuff staring at the cosmos wondering where all the stuff is,
when the fact of being able to stare _at all_ is telling him about the deep
nature of the cosmos. Poor little deluded cosmologist.

There's nothing wrong with Maxwell's equations. In fact there's nothing
wrong with any empirical laws. The problem is us...

cheers,

colin






Re: computer pain

2006-12-17 Thread Brent Meeker

James N Rose wrote:
> Just to throw a point of perspective into this
> conversation about mimicking qualia.
> 
> I posed a thematic question in my 1992 opus
> "Understanding the Integral Universe".
> 
>  "What of a single celled animus like an amoeba or paramecium?
>  Does it 'feel' itself?  Does it sense the subtle variations
>  in its shape as it bumps around in its liquid world?  Does it
>  somehow note changes in water pressure around it?  Is it
>  always "hungry"?  What drives a single celled creature to eat?
>  What "need", if any is fulfilled?  Is it due to an internal
>  pressure gradient in its chemical metabolism? Is there a
>  resilience to its boundary that not only determines its
>  particular shape, whether amoebic or firm, but that variations
>  in that boundary re-distribute pressures through its form to
>  create a range of responsive actions? And, because it is
>  coherent for that life form, is "this" primal consciousness?
>  How far down into the structure of existence can we reasonably
>  extrapolate this? An atom's electron cloud responds and interacts
>  with its level of environment, but is this consciousness? We
>  cannot personify, and therefore mystify, all kinetic functions
>  as different degrees of consciousness; at least not at this point.
>  Neither, can we specify with any certainty a level where
>  consciousness suddenly appears, where there was none before."
>  "UIU"(c)ROSE 1992 ; 02)Intro section.

If consciousness is the creation of an inner narrative to be stored in 
long-term memory then there are levels of consciousness.  The amoeba forms no 
memories and so is not conscious at all. A dog forms memories and even has some 
understanding of symbols (gestures, words) and so is conscious.  In between 
there are various degrees of consciousness corresponding to different 
complexity and scope of learning.

> 
> 
> 
> 
> "Pain" is a net-collective qualia, an 'other-tier' cybernetic 
> emerged phenomenon.  But it is -not unrelated- to phenomena
> like basic EM field changes and 'system's experiences' in those
> precursive tiers.
> 
> Also, "pain" (an aspect of -consciousness-), has to be understood
> in regard to the panorama of 'kinds-of-sentience' that any given 
> system/organism has, embodies, utilizes or enacts.  
> 
> In other words, it would be wrong to dismiss the presence of
> 'pain' in autonomic nervous systems, simply because the
> cognitive nervous system is 'unaware' of the signals or
> the distress situation generating them.

This seems to depend on whether you define pain to be the conscious experience
of pain, or you allow that the bodily reaction is evidence of pain in some more
general sense.  I think Stathis posed the question in terms of conscious
experience.  There's really no doubt that one can create an artificial system
that reacts to distress, as in my example of a modern aircraft.

Brent Meeker




Re: computer pain

2006-12-17 Thread James N Rose

Just to throw a point of perspective into this
conversation about mimicking qualia.

I posed a thematic question in my 1992 opus
"Understanding the Integral Universe".

 "What of a single celled animus like an amoeba or paramecium?
 Does it 'feel' itself?  Does it sense the subtle variations
 in its shape as it bumps around in its liquid world?  Does it
 somehow note changes in water pressure around it?  Is it
 always "hungry"?  What drives a single celled creature to eat?
 What "need", if any is fulfilled?  Is it due to an internal
 pressure gradient in its chemical metabolism? Is there a
 resilience to its boundary that not only determines its
 particular shape, whether amoebic or firm, but that variations
 in that boundary re-distribute pressures through its form to
 create a range of responsive actions? And, because it is
 coherent for that life form, is "this" primal consciousness?
 How far down into the structure of existence can we reasonably
 extrapolate this? An atom's electron cloud responds and interacts
 with its level of environment, but is this consciousness? We
 cannot personify, and therefore mystify, all kinetic functions
 as different degrees of consciousness; at least not at this point.
 Neither, can we specify with any certainty a level where
 consciousness suddenly appears, where there was none before."
 "UIU"(c)ROSE 1992 ; 02)Intro section.




"Pain" is a net-collective qualia, an 'other-tier' cybernetic 
emerged phenomenon.  But it is -not unrelated- to phenomena
like basic EM field changes and 'system's experiences' in those
precursive tiers.

Also, "pain" (an aspect of -consciousness-), has to be understood
in regard to the panorama of 'kinds-of-sentience' that any given 
system/organism has, embodies, utilizes or enacts.  

In other words, it would be wrong to dismiss the presence of
'pain' in autonomic nervous systems, simply because the
cognitive nervous system is 'unaware' of the signals or
the distress situation generating them.

If one wants to 'define' pain sentience as a closed marker,
and build contrived systems that match the defined conditions
and criteria, that is one thing - and acceptable for what it
is.  But if the 'pain' is a coordination of generalized
engagements and reactions, then a different set of 
design standards needs to be considered/met.

Vis a vis  -this- reasoning:

 



Jamie Rose
Ceptual Institute
cognating on a sunday morning
2006/12/17





Re: computer pain

2006-12-17 Thread 1Z


Colin Geoffrey Hales wrote:
> Stathis wrote:
> I can understand that, for example, a computer simulation of a storm is
> not a storm, because only a storm is a storm and will get you wet. But
> perhaps counterintuitively, a model of a brain can be closer to the real
> thing than a model of a storm. We don't normally see inside a person's
> head, we just observe his behaviour. There could be anything in there - a
> brain, a computer, the Wizard of Oz - and as long as it pulled the
> person's strings so that he behaved like any other person, up to and
> including doing scientific research, we would never know the difference.
>
> Now, we know that living brains can pull the strings to produce normal
> human behaviour (and consciousness in the process, but let's look at the
> external behaviour for now). We also know that brains follow the laws of
> physics: chemistry, Maxwell's equations, and so on. Maybe we don't
> *understand* electrical fields in the sense that it may feel like
> something to be an electrical field, or in some other as yet unspecified
> sense, but we understand them well enough to predict their physical effect
> on matter. Hence, although it would be an enormous task to gather the
> relevant information and crunch the numbers in real time, it should be
> possible to predict the electrical impulses that come out of the skull to
> travel down the spinal cord and cranial nerves and ultimately pull the
> strings that make a person behave like a person. If we can do that, it
> should be possible to place the machinery which does the predicting inside
> the skull interfaced with the periphery so as to take the brain's place,
> and no-one would know the difference because it would behave just like the
> original.
>
> At which step above have I made a mistake?
>
> Stathis Papaioannou
>
> ---
> I'd say it's here...
>
> "and no-one would know the difference because it would behave just like
> the original"
>
> But for a subtle reason.
>
> The artefact has to be able to cope with exquisite novelty like we do.
> Models cannot do this because as a designer you have been forced to define
> a model that constrains all possible novelty to be that which fits your
> model for _learning_.

If the model has been reverse-engineered from how
the nervous system works (i.e., transparent box, not black box), it will
have the learning abilities of the NS -- even if we don't know what they
are.

> Therein lies the fundamental flaw. Yes... at a given
> level of knowledge you can define how to learn new things within the
> knowledge framework. But when it comes to something exquisitely novel, all
> that will happen is that it'll be interpreted into the parameters of how
> you told it to learn things... this will impact in a way the artefact
> cannot handle. It will behave differently and probably poorly.
>
> It's the zombie thing all over again.
>
> It's not _knowledge_ that matters. it's _learning_ new knowledge. That's
> what functionalism fails to handle. Being grounded in a phenomenal
> representation of the world outside is the only way to handle arbitrary
> levels of novelty.

That remains to be seen.

>  No phenomenal representation? = You are "model-bound"
> and grounded, in effect, in the phenomenal representation of your
> model-builders, who are forced to predefine all novelty handling in an "I
> don't know that" functional module. Something you cannot do without
> knowing everything a priori! If you already know that, you are god, so why
> are you bothering?

So long as you can peek into a system, you can functionally duplicate it
without knowing how it behaves under all circumstances. I can rewrite
the C code

#include <math.h>   /* sin(), cos(), pow() */

double f(double x, double y)
{
    /* presumably pow() was intended here; C's exp() takes a single argument */
    return 4.2 + sin(x) - pow(cos(y), 9.7);
}

in Pascal, although I couldn't tell you offhand
what the output is for x=0.77, y=0.33.


> Say you bring an artefact X into existence. X may behave exactly like a
> human Y in all the problem domains you used to define your model. Then you
> expose both to novelty nobody has seen, including you, and that is
> where the two will differ. The human Y will do better every time. You
> can't program qualia. You have to have them and you can't do without them
> in a 'general intelligence' context.
>
> Here I am on a sat morning...proving I have no life, yet again! :-)
> 
> Colin Hales





Re: computer pain

2006-12-17 Thread Mark Peaty

Well this is fascinating! I tend to think that Brent's 'simplistic' 
approach of setting up oscillating EM fields of specific frequencies at 
specific locations is more likely to be good evidence of EM involvement 
in qualia, because the victim, I mean experimental subject, can relate 
what is happening. Do it to enough naive subjects and, if their accounts 
of the changes wrought in their experience agree with your predictions, 
you will have provisional verification. Just make sure you have a 
falsifiable prediction first.

On the other hand Colin's project seems out of reach to me. This is 
probably because I don't really understand it. I do not, for example, 
understand how Colin seems to think that we can dispense with the 
concept of representation. I am however very sceptical of all 'quantum' 
mechanical/entanglement theories of consciousness. As far as I can see 
humans are 'classical' in nature, built out of fundamental particles 
like everything else in the universe of course, but we can live and move 
and have our being BECAUSE each one of us, and the major parts which 
compose us, are all big enough to endure over and above the quantum 
uncertainty. So we don't 'flit in and out of existence' like some people 
say. We wake up, go to sleep, doze off at the wrong time, forget what we 
are doing, live through social/cultural descriptions of the world, dream 
and aspire, and sometimes experience amazing insights which can turn our 
lives around. We survive and endure by doing mostly the tried and true 
things we have learned so well that they are deeply ingrained habits. 
Most of what we do, perceive, and think is so stolidly habitual and 
'built-in' that we are almost completely unaware of it; it is fixtures 
and fittings of the mind if you like. It all works for us, and the whole 
social and cultural milieu of economic and personal transactions, 
accounting, appointments, whatever, can happen so successfully BECAUSE 
so much of what we are and do is solidly habitual and predictable. In my 
simplistic view, consciousness is the registration of discrepancy 
between what the brain has predicted and what actually happened. 
Everything else, the bulk of what constitutes the mind in effect, is the 
ceaseless evoking, selecting, ignoring or suppressing, storing, 
amalgamating or splitting of the dynamic logical structures which 
represent our world, and without which we are just lumps of meat. These 
dynamic logical structures actually EXIST during their evocation. [And 
this is why there is 'something it is like to be ...']
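
A toy sketch of that 'registration of discrepancy' idea (purely illustrative:
the moving-average predictor, the tolerance and all the numbers are invented
for the example, not a claim about neural mechanism): a running predictor
guesses the next input, well-predicted input passes silently, and only a
mismatch above a tolerance gets registered.

#include <math.h>
#include <stdio.h>

/* Toy 'registration of discrepancy': an exponential-moving-average
 * predictor guesses the next sample; only surprises beyond a tolerance
 * are registered.  Habitual (well-predicted) input is ignored. */
int main(void)
{
    double input[] = { 1.0, 1.0, 1.1, 0.9, 1.0, 5.0, 1.0, 1.1, 1.0 };
    int n = sizeof input / sizeof input[0];
    double predicted = input[0];      /* initial expectation */
    double alpha = 0.3;               /* learning rate of the predictor */
    double tolerance = 0.5;           /* how big a surprise gets noticed */

    for (int i = 1; i < n; ++i) {
        double discrepancy = fabs(input[i] - predicted);
        if (discrepancy > tolerance)
            printf("step %d: registered surprise %.2f (expected %.2f, got %.2f)\n",
                   i, discrepancy, predicted, input[i]);
        /* update the expectation, habituating to the new input */
        predicted += alpha * (input[i] - predicted);
    }
    return 0;
}

Only the out-of-pattern sample is reported; everything else is absorbed by the
predictor, which is the flavour of 'habitual and predictable' described above.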

This may seem like a very boring view of things but I think now there is 
an amazing amount of explanation already available concerning human 
experience. I am not saying there is nothing new to discover, far from 
it, just that the continuous denial that most of the pieces of the 
puzzle are already exposed and arranged in the right order is not helpful.

What ought to be clear to everybody is that our awareness of being here, 
of being anything in fact, entails a continuous process of 
self-referencing. It entails a continuous process of locating self in 
one's world. This self-referencing is always inherently partial and 
incomplete, but unless this incompleteness itself is explicitly 
represented, we are not aware of it. We are only ever aware of 
relationships explicitly represented, and being explicitly represented 
entails inclusion of representation of at least some aspects of how 
whatever it is, is, was, will be, or might become, causally connected to 
oneself. When we perceive or imagine things, it is always from a view 
point, listening point, or at a point of contact. The 'location' of 
something or someone is an intrinsic part of its or their identity, and 
the key element of location as such is in relation to oneself or in 
relation to someone who we ourselves identify with; they are extensions 
of ourselves.

I'll leave that there for the moment. I just want to add that I believe 
Colin Hales is right in focussing on the ability of humans to do 
science. I look at that more from the point of view that being able to 
do science, and being able to perceive and understand entropy - even if 
it is only grasping where crumbs and fluff balls come from - are what 
allow us to know that we are NOT in some kind of computer-generated 
matrix. We live in a real, open universe that exists independently of 
each of us but yet is incomplete without us.
 
Regards
Mark Peaty  CDES
[EMAIL PROTECTED]
http://www.arach.net.au/~mpeaty/
 


Re: computer pain

2006-12-17 Thread 1Z


Brent Meeker wrote:
> Stathis Papaioannou wrote:
> >
> > Colin Hales writes:
> >
> >>> I understand your conclusion, that a model of a brain
> >>> won't be able to handle novelty like a real brain,
> >>> but I am trying to understand the nuts and
> >>> bolts of how the model is going to fail. For
> >>> example, you can say that perpetual motion
> >>> machines are impossible because they disobey
> >>> the first or second law of thermodynamics,
> >>> but you can also look at a particular design of such a
> >>> machine and point out where the moving parts are going
> >>> to slow down due to friction.
> >>>
> >>> So, you have the brain and the model of the brain,
> >>> and you present them both with the same novel situation,
> >>> say an auditory stimulus. They both process the
> >>> stimulus and produce a response in the form of efferent
> >>> impulses which move the vocal cords and produce speech;
> >>> but the brain says something clever while  the computer
> >>> declares that it is lost for words. The obvious explanation
> >>> is that the computer model is not good enough, and maybe
> >>> a better model would  perform better, but I think you would
> >>> say that *no* model, no matter how good, could match the brain.
> >>>
> >>> Now, we agree that the brain contains matter which
> >>> follows the laws of physics.
> >>> Before the novel stimulus is applied the brain
> >>> is in configuration x. The stimulus essentially adds
> >>> energy to the brain in a very specific way, and as a
> >>> result of this the brain undergoes a very complex sequence
> >>> of physical changes, ending up in
> >>> configuration y, in the process outputting energy
> >>> in a very specific way which causes the vocal cords to move.
> >>> The important point is, in the transformations
> >>> x->y the various parts of the brain are just working
> >>> like parts of an elaborate Rube Goldberg mechanism.
> >>> There can be no surprises, because that would be
> >>> magic: two positively charged entities suddenly
> >>> start attracting each other, or
> >>> the hammer hits the pendulum and no momentum
> >>> is transferred. If there is magic -
> >>> actually worse than that, unpredictable magic -
> >>> then it won't be possible to model
> >>> the brain or the Rube Goldberg machine. But, barring magic,
> >>> it should be possible to predict the physical state
> >>> transitions x->y and hence you will know
> >>> what the motor output to the vocal cords will be and
> >>> what the vocal response to the
> >>> novel  stimulus will be.
> >>>
> >>> Classical chaos and quantum uncertainty may make it
> >>> difficult or impossible to
> >>> predict what a particular brain will do on a
> >>> particular day, but they should not be  a theoretical
> >>> impediment to modelling a generic brain which behaves in an
> >>> acceptably brain-like manner. Only unpredictable magical
> >>> effects would prevent that.
> >>>
> >>> Stathis Papaioannou
> >> I get where you're coming from. The problem is, what I am going to say
> >> will, in your eyes, put the reason into the class of 'magic'. I am quite
> >> used to it, and don't find it magical at all
> >>
> >> The problem is that the distal objects that are the subject about which
> >> the brain is informing itself, are literally, physically involved in the
> >> process. You can't model them, because you don't know what they are. All
> >> you have is sensory measurements and they are local and
> >> ambiguous... that's why you are doing the 'qualia dance' with EM fields -
> >> to 'cohere' with the external world. This non-locality is the same
> >> non-locality observed in QM and makes gravity 'action at a distance'
> >> possible. I've been thinking about this for so long I actually have
> >> the reverse problem now...I find 'locality' really weird! I find 'extent'
> >> really hard to fathom. The non-locality is also predicted as the solution
> >> to the 'unity' issue.
> >>
> >> The empirical testing to verify this non-locality is the real target of my
> >> eventual experimentation. My model and the real chips will behave
> >> differently, it is predicted, because of the involvement of the 'external
> >> world' that is not available to the model.
> >>
> >> I hope to be able to 'switch off' the qualia whilst holding everything else
> >> the same. The effects on subsequent learning will be indicative of the
> >> involvement of the qualia in learning. What the external world 'looks
> >> like' in the brain is 'virtual circuits' - average EM channels (regions of
> >> low potential that are like a temporary 'wire') down which chemistry can
> >> flow to alter synaptic weights and rearrange channel positions/rafting in
> >> the membrane and so on.
> >>
> >> So I guess my proclamations about models are all contingent on my own
> >> view of things...and I could be wrong. Only time will tell. I have good
> >> physical grounds to doubt that modelling can work and I have a way of
> >> testing it. So at least it can be resolved some day.

Re: computer pain

2006-12-17 Thread 1Z


Colin Geoffrey Hales wrote:
> >
> > I understand your conclusion, that a model of a brain
> > won't be able to handle novelty like a real brain,
> > but I am trying to understand the nuts and
> > bolts of how the model is going to fail. For
> > example, you can say that perpetual motion
> > machines are impossible because they disobey
> > the first or second law of thermodynamics,
> > but you can also look at a particular design of such a
> > machine and point out where the moving parts are going
> > to slow down due to friction.
> >
> > So, you have the brain and the model of the brain,
> > and you present them both with the same novel situation,
> > say an auditory stimulus. They both process the
> > stimulus and produce a response in the form of efferent
> > impulses which move the vocal cords and produce speech;
> > but the brain says something clever while  the computer
> > declares that it is lost for words. The obvious explanation
> > is that the computer model is not good enough, and maybe
> > a better model would  perform better, but I think you would
> > say that *no* model, no matter how good, could match the brain.
> >
> > Now, we agree that the brain contains matter which
> > follows the laws of physics.
> > Before the novel stimulus is applied the brain
> > is in configuration x. The stimulus essentially adds
> > energy to the brain in a very specific way, and as a
> > result of this the brain undergoes a very complex sequence
> > of physical changes, ending up in
> > configuration y, in the process outputting energy
> > in a very specific way which causes the vocal cords to move.
> > The important point is, in the transformations
> > x->y the various parts of the brain are just working
> > like parts of an elaborate Rube Goldberg mechanism.
> > There can be no surprises, because that would be
> > magic: two positively charged entities suddenly
> > start attracting each other, or
> > the hammer hits the pendulum and no momentum
> > is transferred. If there is magic -
> > actually worse than that, unpredictable magic -
> > then it won't be possible to model
> > the brain or the Rube Goldberg machine. But, barring magic,
> > it should be possible to predict the physical state
> > transitions x->y and hence you will know
> > what the motor output to the vocal cords will be and
> > what the vocal response to the
> > novel  stimulus will be.
> >
> > Classical chaos and quantum uncertainty may make it
> > difficult or impossible to
> > predict what a particular brain will do on a
> > particular day, but they should not be  a theoretical
> > impediment to modelling a generic brain which behaves in an
> > acceptably brain-like manner. Only unpredictable magical
> > effects would prevent that.
> >
> > Stathis Papaioannou
>
> I get where you're coming from. The problem is, what I am going to say
> will, in your eyes, put the reason into the class of 'magic'. I am quite
> used to it, and don't find it magical at all
>
> The problem is that the distal objects that are the subject about which
> the brain is informing itself, are literally, physically involved in the
> process.

That is true. It is not clear why it should be a problem.

>  You can't model them, because you don't know what they are.

Why not? if the brain is succeeding in informing
itself about them, then it does know what they
are. What else does "informing" mean? (And
remember that one can supplement perceptual
information with information from instruments,etc).

>  All
> you have is sensory measurements and they are local and
> ambiguous...


They are not hopelessly local, because different sensory
feeds can be and are combined, and they are not seriously
ambiguous, because, although illusions and ambiguities
can occur, the sensory system usually succeeds in making a "best
guess".

> ...that's why you are doing the 'qualia dance' with EM fields -
> to 'cohere' with the external world. This non-locality is the same
> non-locality observed in QM and makes gravity 'action at a distance'
> possible.

That is wildly speculative.

>  I've been thinking about this for so long I actually have
> the reverse problem now...I find 'locality' really weird! I find 'extent'
> really hard to fathom. The non-locality is also predicted as the solution
> to the 'unity' issue.


> The empirical testing to verify this non-locality is the real target of my
> eventual experimentation. My model and the real chips will behave
> differently, it is predicted, because of the involvement of the 'external
> world' that is not available to the model.
>
> I hope to be able to 'switch off' the qualia whilst holding everything else
> the same. The effects on subsequent learning will be indicative of the
> involvement of the qualia in learning. What the external world 'looks
> like' in the brain is 'virtual circuits' - average EM channels (regions of
> low potential that are like a temporary 'wire') down which chemistry can
> flow to alter synaptic weights and rearrange channel positions/rafting in
> the membrane and so on.
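
For concreteness, the 'switch off the qualia and watch the learning' proposal 
cashes out as a comparison of learning curves under the two conditions. The 
Python sketch below is purely hypothetical - the numbers are invented and a 
real experiment would need proper controls and statistics - but it shows what 
a positive result would have to look like.

def mean(xs):
    return sum(xs) / len(xs)

# Trials-to-criterion on a series of novel tasks (smaller = faster learning).
qualia_on  = [14, 12, 11, 10, 9]     # hypothetical chip with the EM 'channel' intact
qualia_off = [15, 15, 14, 15, 14]    # same hardware, channel suppressed

difference = mean(qualia_off) - mean(qualia_on)
print(f"mean extra trials needed with qualia 'off': {difference:.1f}")

The claim only has teeth if learning degrades when, and only when, the channel 
is suppressed, with everything else held the same.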

RE: computer pain

2006-12-17 Thread Stathis Papaioannou


Brent Meeker writes:

[Colin]
> >> So I guess my proclamations about models are all contingent on my own
> >> view of things...and I could be wrong. Only time will tell. I have good
> >> physical grounds to doubt that modelling can work and I have a way of
> >> testing it. So at least it can be resolved some day.
> >
[Stathis] 
> > I'm not sure of the details of your experiments, but wouldn't the most
> > direct way to prove what you are saying be to isolate just that physical
> > process which cannot be modelled? For example, if it is EM fields, set up
> > an appropriately brain-like configuration of EM fields, introduce some
> > environmental input, then show that the response of the fields deviates
> > from what Maxwell's equations would predict.
>
> I don't think Colin is claiming the fields deviate from Maxwell's equations - 
> he says they are good descriptions, they just miss the qualia.
> 
> Seems to me it would be a lot simpler to set up some EM fields of various 
> spatial and frequency variation and see if they change your qualia.
> 
> Brent Meeker

I'll let Colin answer, but it seems to me that, to be consistent, he must say 
that some aspect of brain physics deviates from what the equations tell us (and 
deviates in an unpredictable way, otherwise it would just mean that different 
equations are required). If not, then it should be possible to model the 
behaviour of a brain: predict what the brain is going to do in a particular 
situation, including novel situations such as those involving scientific 
research. Now, it is possible that the model will reproduce the behaviour but 
not the qualia, because the actual brain material is required for that, but 
that would mean that the model would be a philosophical zombie, and Colin has 
said that he does not believe that philosophical zombies can exist. Hence, he 
has to show not only that the computer model will lack the 1st person 
experience, but also that it will lack the 3rd person observable behaviour of 
the real thing; and the latter can only be the case if there is some aspect of 
brain physics which does not comply with any possible mathematical model. 
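
The bare logic of that argument can be put in toy form: if the brain's physics 
really is captured by some transition rule, then a model sharing that rule must 
give the same response to the same novel stimulus; only physics outside any 
such rule could make the two diverge. The Python sketch below is not a brain 
model - the transition rule is an arbitrary invented map - it only stands in 
for configuration x going to configuration y.

def step(config, stimulus):
    """One physical update: returns the new configuration and an emitted response."""
    new_config = tuple((c * 31 + stimulus) % 97 for c in config)
    response = sum(new_config) % 10          # stands in for the vocal output
    return new_config, response

brain = model = (3, 7, 11)                   # same initial configuration x
novel_stimulus = 42                          # never seen before by either

brain, brain_says = step(brain, novel_stimulus)
model, model_says = step(model, novel_stimulus)

assert brain_says == model_says              # no divergence without extra physics
print("identical responses to the novel stimulus:", brain_says)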

Stathis Papaioannou