Re: Losing Control

2013-04-03 Thread Stathis Papaioannou
On Wed, Apr 3, 2013 at 9:54 AM, John Mikes jami...@gmail.com wrote:

 Dear Stathis,
 your lengthy reply to Craig is a bit longer than I can manage to reply in
 all facets so here is a condensed opinion:


Yes, these posts are probably getting a bit too long.


 Your position about the 'material' world (atoms, etc.) seems a bit
 mechanistic: like us, the (call it:) inanimates are also different no
 matter how identical we think they are in those lines we observe by our
 instruments and reductionist means.
 You ask about Na-ions: well, even atoms/ions are different to a wider
 scrutiny than enclosed in our physical sciences. Just think about the
 fission sequence - unpredictable WHICH one will undergo it next. It may be
 differential within the atomic nucleus, may be in the circumstances and
 their so far not established impact on the individual atoms (ions?) leading
 to the next one. We know only a portion of the totality and just think that
 everything has been covered.
 I am not representing Craig, I make remarks upon your ideas of everything
 being predictably identical to its similars.


As Brent pointed out, there is no way to differentiate between atoms of the
same kind to tell which one, for example, will decay. But even if we could,
it is a fact that the atoms in a person can come from anywhere and the
person is still the same; whereas changing the configuration of the
existing atoms in a person can cause drastic changes in the person. This is
obvious with no more than casual observation.


 The (so far) known facts are neither: not 'known' and not 'facts'.
 Characteristics are restricted to yesterday's inventory and many potentials
 are not even dreamed of.
 We can manipulate a lot of circumstances, but be ready for others that may
 show up tomorrow - beyond our control.


There are, of course, undiscovered scientific facts. If scientists did not
believe that they would give up science. But Craig is not saying that there
are processes inside cells that are controlled by as yet undiscovered
physical effects. What he is saying is that if I decide to move my arm the
arm will move not due to the well-studied sequence of neurological events,
but spontaneously, due to my will. He cites as evidence for this the fact
that on an fMRI, parts of the brain light up spontaneously when the subject
thinks about something.


 I agree with Craig (in his response to this same long post):

 ...Nothing is absolutely identical to anything else. Nothing is even
   identical to itself from moment to moment. Identical is a local
 approximation contingent upon the comprehensiveness of sense capacities. If
 your senses aren't very discerning, then lots of things seem identical

 I would add: no TWO events have identical circumstances to face,
 even if you do not detect individual differences in the observed data of
 participating entities, the influencing circumstances are different from
 instance to instance and call for changes in processes. Bio, or not.

 This is one little corner of how agnosticism frees up my mind (beware: not
 freezes!!).


No two things are identical, but they can be close enough to identical for
a particular purpose. If a part in your car breaks you do not junk the
whole car on the grounds that you will not be able to obtain an *identical*
part. Rather, you obtain a part that is close enough - within engineering
tolerance.


-- 
Stathis Papaioannou

-- 
You received this message because you are subscribed to the Google Groups 
Everything List group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to everything-list+unsubscr...@googlegroups.com.
To post to this group, send email to everything-list@googlegroups.com.
Visit this group at http://groups.google.com/group/everything-list?hl=en.
For more options, visit https://groups.google.com/groups/opt_out.




Re: Losing Control

2013-04-03 Thread Craig Weinberg


On Tuesday, April 2, 2013 10:59:35 PM UTC-4, Brent wrote:

  On 4/2/2013 6:44 PM, Craig Weinberg wrote:
  


 On Tuesday, April 2, 2013 8:07:48 PM UTC-4, Brent wrote: 

  On 4/2/2013 3:54 PM, John Mikes wrote:
  
 Dear Stathis, 
 your lengthy reply to Craig is a bit longer than I can manage to reply in 
 all facets so here is a condensed opinion:

  Your position about the 'material' world (atoms, etc.) seems a bit 
 mechanistic: like us, the (call it:) inanimates are also different no 
 matter how identical we think they are in those lines we observe by our 
 instruments and reductionist means. 
 You ask about Na-ions: well, even atoms/ions are different to a wider 
 scrutiny than enclosed in our physical sciences. Just  think about the 
 fission-sequence - unpredictable WHICH one will undergo it next. It may be 
 differential within the atomic nucleus, may be in the circumstances and 
 their so far not established impact on the individual atoms (ions?) leading 
 to a next one. 
  

 That would imply a hidden variable in the atom which determined when it 
 decayed.  Local hidden variables have been ruled out by numerous 
 experiments.  Non-local hidden variables (as in Bohm's quantum mechanics) 
 are not ruled out in non-relativistic experiments but it doesn't appear 
 possible to extend them to quantum field theory in which the number of 
 particles is not conserved.

  We know only a portion of the totality and just think that everything 
 has been covered. 
 I am not representing Craig, I make remarks upon your ideas of everything 
 being predictably identical to its similars. 

  The (so far) known facts are neither: not 'known' and not 'facts'. 
 Characteristics are restricted to yesterday's inventory and many potentials 
 are not even dreamed of. 
 We can manipulate a lot of circumstances, but be ready for others that 
 may show up tomorrow - beyond our control.

  I agree with Craig (in his response to this same long post):

  ...Nothing is absolutely identical to anything else. Nothing is even   
 identical to itself from moment to moment. Identical is a local 
 approximation contingent upon the comprehensiveness of sense capacities. If 
 your senses aren't very discerning, then lots of things seem identical
  

 The Schrodinger equation only works if the interchange of two bosons 
 makes no difference - so it is implicit in the success of quantum mechanics 
 that they are identical. 


 Does being interchangeable necessarily mean identical? 


 It does if the number of states that count toward the entropy doesn't 
 increase when you consider interchanges.  Cars obey Maxwell-Boltzmann 
 statistics, elementary particles don't.
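The state-counting difference is easy to verify for the smallest case - two particles and two single-particle states. A minimal illustrative sketch:

```python
from itertools import product

# Two particles, two single-particle states (0 and 1).
# Maxwell-Boltzmann: particles are distinguishable, so (a=0, b=1)
# and (a=1, b=0) count as different microstates.
mb = list(product([0, 1], repeat=2))     # 4 microstates

# Bose-Einstein: identical bosons, so only the occupation numbers
# matter: {both in 0}, {one in each}, {both in 1}.
be = {tuple(sorted(s)) for s in mb}      # 3 microstates

# Fermi-Dirac: identical fermions, and no two may occupy one state.
fd = {s for s in be if s[0] != s[1]}     # 1 microstate

print(len(mb), len(be), len(fd))         # 4 3 1
```

Distinguishable counting would predict different statistics than the Bose-Einstein and Fermi-Dirac distributions actually observed, which is the experimental sense in which the particles are "identical".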


If two things have exactly the same properties, then they are interchangeable 
in the sense of using either for ballast in a ship, but it doesn't make the things 
interchangeable in every way that can be measured, it doesn't make them 
interchangeable in every way that is imaginable, and it certainly does not 
make them identical. Just because microcosmic observations are precisely 
consistent does not mean that all phenomena can be explained in those 
terms. Identical is a myth. There is no identical. A does not = A. The A 
that follows the = can be distinguished from the previous A, both in the 
order in which they were typed and in their relation to the rest of the 
text. The assumption that A = A is an important idea for logic, but it does 
not follow that the cosmos is made of phenomena which follow that narrow 
expectation.
 


  If I am driving in traffic, my car could be exchanged with any other on 
 the road and be observed to behave in the same way, yet my experience is 
 that the car which I am driving is very different from every other car in 
 the universe. If we close our eyes to the reality of subjectivity, then we 
 can't be very surprised when we fail to see how reality could be subjective.

   Similarly the solution changes sign if fermions are interchanged and 
 that requires that the two fermions be identical.  Otherwise bosons 
 wouldn't obey Bose-Einstein statistics and fermions wouldn't obey 
 Fermi-Dirac statistics, they would both obey Maxwell-Boltzmann statistics - 
 but experiment shows they don't.

   
  I would add: no TWO events have identical circumstances to face, 
 even if you do not detect individual differences in the observed data of 
 participating entities, the influencing circumstances are different from 
 instance to instance and call for changes in processes. Bio, or not. 
  

 But that becomes an all-purpose excuse for anything-goes.  No 
 generalization is possible, no pattern can be extrapolated.


 Not true. Any generalization is permitted as long as it is recognized as 
 such and not mistaken for a literal and exhaustive description of nature. 


 You mean any generalization at all?  Or any generalization that passes all 
 empirical tests?


Any generalization that makes enough sense to be useful or appreciated. 
Something can be true 

Re: Losing Control

2013-04-03 Thread Craig Weinberg


On Wednesday, April 3, 2013 3:04:50 AM UTC-4, stathisp wrote:


 On Wed, Apr 3, 2013 at 9:54 AM, John Mikes jam...@gmail.com wrote:

 Dear Stathis,
 your lengthy reply to Craig is a bit longer than I can manage to reply in 
 all facets so here is a condensed opinion:


 Yes, these posts are probably getting a bit too long.
  

 Your position about the 'material' world (atoms, etc.) seems a bit 
 mechanistic: like us, the (call it:) inanimates are also different no 
 matter how identical we think they are in those lines we observe by our 
 instruments and reductionist means. 
 You ask about Na-ions: well, even atoms/ions are different to a wider 
 scrutiny than enclosed in our physical sciences. Just  think about the 
 fission-sequence - unpredictable WHICH one will undergo it next. It may be 
 differential within the atomic nucleus, may be in the circumstances and 
 their so far not established impact on the individual atoms (ions?) leading 
 to a next one. We know only a portion of the totality and just think that 
 everything has been covered. 
 I am not representing Craig, I make remarks upon your ideas of everything 
 being predictably identical to its similars. 


 As Brent pointed out, there is no way to differentiate between atoms of 
 the same kind to tell which one, for example, will decay. But even if we 
 could, it is a fact that the atoms in a person can come from anywhere and 
 the person is still the same; whereas changing the configuration of the 
 existing atoms in a person can cause drastic changes in the person. This is 
 obvious with no more than casual observation.


You aren't an atom, so you have no idea if it 'knows where it's been'. They 
certainly seem to know a lot about where they are when they are bunched up 
all together. You know where you've been, though, and where you've been has 
a profound influence on who you are, so that is a property of some part of 
the universe. Which part is that, do you think?
 

  

 The (so far) known facts are neither: not 'known' and not 'facts'. 
 Characteristics are restricted to yesterday's inventory and many potentials 
 are not even dreamed of. 
 We can manipulate a lot of circumstances, but be ready for others that 
 may show up tomorrow - beyond our control.


 There are, of course, undiscovered scientific facts. If scientists did not 
 believe that they would give up science. But Craig is not saying that there 
 are processes inside cells that are controlled by as yet undiscovered 
 physical effects. What he is saying is that if I decide to move my arm the 
 arm will move not due to the well-studied sequence of neurological events, 
 but spontaneously, due to my will.

  
UGH. No. I say that if I move my arm, the arm will move because I AM 
whatever sequence of events on whatever level - molecular, biochemical, 
physiological, whether well-studied or not. You may not be able to 
understand that what I intend is not to squeeze myself into biology, or to 
magically replace biology, but to present that the entirety of the physics 
of my body intersects with the entirety of the physics of my experience. 
The two aesthetics - public bodies in space and private experiences through 
time - are an involuted (Ouroboran, umbilical) Monism. If you 
don't understand what that means then you are arguing with a straw man.
 

 He cites as evidence for this the fact that on an fMRI, parts of the brain 
 light up spontaneously when the subject thinks about something.


That and also the fact that when I move my fingers to type, they move and 
letters are typed.
 

  

 I agree with Craig (in his response to this same long post):

 ...Nothing is absolutely identical to anything else. Nothing is even 
   identical to itself from moment to moment. Identical is a local 
 approximation contingent upon the comprehensiveness of sense capacities. If 
 your senses aren't very discerning, then lots of things seem identical

 I would add: no TWO events have identical circumstances to face, 
 even if you do no detect inividual differences in the observed data of 
 participating entities, the influencing circumstances are different from 
 instance to instance and call for changes in processes. Bio, or not. 

 This is one little corner how agnosticism frees up my mind (beware: not 
 freezes!!).


 No two things are identical, but they can be close enough to identical for 
 a particular purpose.


Exactly! That's my point. Since consciousness can have no particular 
purpose, however, it is that which lends all purposes and cannot be 
simulated.
 

 If a part in your car breaks you do not junk the whole car on the grounds 
 that you will not be able to obtain an *identical* part. Rather, you obtain 
 a part that is close enough - within engineering tolerance.


Right, but that analogy fails when you consider replacing yourself with 
someone who is just like you, but you won't be alive anymore. If the part 
is life itself, identity, awareness, 

Astigmatism Example

2013-04-03 Thread Craig Weinberg
If any of you have a moderate astigmatism, you may have observed this - if 
not, you'll have to take my word for it.

If I close my weak eye*, I find that after a few seconds, the image from 
the strong eye, even though it is closed, tries to creep into my visual 
field. It is not difficult at this point to 'look through' the eye that is 
closed (seeing phosphenes or just darkness). Reversing the test, with my 
weak eye closed, there is no creeping effect and it is not really possible 
for me to look through the eye that is closed.

In a universe of functionalism or comp, I would expect that this would 
never happen, as my brain should always prioritize the information made 
available by any eye that is open over that of an eye which is closed. The 
fact that closing the weak eye instead does not produce the creeping image 
effect demonstrates that there is no functional purpose which could be 
served by favoring the strong eye when it is the one which is closed.

In some people astigmatism progresses until they develop a wandering eye. 
The physicalist can claim victory over the functionalist here in that the 
atrophy of nerve connections to the weak eye and the relative hypertrophy 
of the nerve connections to the strong eye clearly dominate the functional 
considerations of the visual mechanism. The creeping image effect also is 
not immediate, so that it is not the case that the hardware is incapable of 
maintaining clear vision through the weak eye, it is obviously the inertia 
of purely physical-perceptual processes which is dragging the function down.

Between the physical and the perceptual, which one is driving? It would 
seem that physics would win here, because the creeping image is not the 
more aesthetically rich image - however, this is not a case where the 
aesthetics are determined only from the top down. Remember that both eyes 
are exposed to the same light. The retinas receive the same total number of 
photons. The strong eye develops more robust connections to it not because 
it has more light, but because the shape of the eye is such that the cells 
(sub-personal agents) of the retina are able to make more sense out of the 
better focused light. 

There are not more signals being generated, but clearer signals which carry 
farther up the ladder from sub-personal optical detection to personal 
visual sensation. The nerve growth follows the coherence of visual 
consciousness, not just a photological nutrient supply. The eye becomes 
stronger because the brain population is prioritizing higher sensitivity, 
not because neurons are being pushed around by blind ionic concentration 
gradients. That sensory priority is the cause of the neurological 
investment in that eye's sensitivity, so that it is perceptual inertia 
which drives the creeping image effect, not just biological morphology. 

*which is my left eye. Curious if any of you left brainy types have an 
astigmatism in the right eye. 





Re: Astigmatism Example

2013-04-03 Thread Richard Ruquist
I am a leftist astigmatic.

But you raise an interesting point that I believe supports a mind/brain
duality.
You wrote: In a universe of functionalism or comp, I would expect that this
would never happen, as my brain should always prioritize the information made
available by any eye that is open over that of an eye which is closed. I
agree.

However, in a mind/brain dualism, the mind may be due to comp and the brain
due to evolution of physical biological organisms, influenced by the mind
comp but not controlled by the mind comp. (However, below the substitution
level the universal mind comp controls all particle interactions and such a
duality does not exist.) So in a mind/brain duality, the prioritization you
mention cannot exist if it has not physically evolved.

In my model, all physical particles and energy are created by comp in the
big bang and are conserved thereafter, subject to the laws and constants of
nature that also come from comp. Consciousness is a property of the
universal mind and also manifests in biological organisms as a mind
consciousness when the complexity of the organism exceeds the 10^120 bit comp
power limit derived from the Bekenstein bound of the universe.
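For scale, the ~10^120-bit figure can be sanity-checked against a rough holographic estimate, S = A / (4 l_p^2) over a sphere of roughly the Hubble radius. The sketch below uses approximate constants and should only be read as an order-of-magnitude check; it lands within a couple of orders of the quoted limit:

```python
import math

# Approximate physical constants (SI units).
c    = 3.0e8      # speed of light, m/s
G    = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2
hbar = 1.055e-34  # reduced Planck constant, J s
H0   = 2.2e-18    # Hubble constant, 1/s (~68 km/s/Mpc)

l_p2 = hbar * G / c**3        # Planck length squared, ~2.6e-70 m^2
R    = c / H0                 # Hubble radius, ~1.4e26 m
A    = 4 * math.pi * R**2     # horizon area, m^2
bits = A / (4 * l_p2) / math.log(2)  # holographic bound in bits

print(f"{bits:.1e}")  # on the order of 1e122
```

The exact exponent depends on which radius and constants one plugs in, which is presumably why figures from 10^120 to 10^123 appear in the literature.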
Richard








Re: Astigmatism Example

2013-04-03 Thread Craig Weinberg


On Wednesday, April 3, 2013 3:10:29 PM UTC-4, yanniru wrote:

 I am a leftist astigmatic.

 But you raise an interesting point that I believe supports a mind/brain 
 duality.
 In a universe of functionalism or comp, I would expect that this would 
 never happen, as my brain should always prioritize the information made 
 available by any eye that is open over that of an eye which is closed. I 
 agree.
  
 However, in a mind/brain dualism, the mind may be due to comp and the 
 brain  due to evolution of physical biological organisms, influenced by the 
 mind comp but not controlled by the mind comp. (However, below the 
 substitution level the universal mind comp controls all particle 
 interactions and such a duality does not exist.) So in a mind/brain 
 duality, the prioritization you mention cannot exist if it has not 
 physically evolved.

 In my model, all physical particles and energy are created by comp in the 
 big bang and are conserved thereafter, subject to the laws and constants of 
 nature that also come from comp. Consciousness is a property of the 
 universal mind and also manifests in biological organisms as a mind 
 consciousness when the complexity of the organism exceeds the 10^120 bit comp 
 power limit derived from the Bekenstein bound of the universe.


My view is similar to what you describe, as far as mind-brain dualism 
prescribing a different evolution of the agendas of mind and the 
consequences of brain conditions. I think that in a complex organism there 
is feedback on multiple levels - the mind and brain influence each other 
constantly, and, in my view, are as the head and tail of the Ouroboros 
serpent - opposite ends of the same unbroken continuum. 

The problem that I have with what you propose, as I understand it is 
twofold:

The presentation problem. If the universal mind is comp, why does the 
universe have any aesthetic content at all? Why does comp create formal 
localizations as a physical phenomenon when it could use the digital 
localizations that it already consists of? 

The de-presentation problem. What would be the point of physical particles 
and energy being created by comp if there could be nothing able to detect 
them until some organism exceeds the 10^120 bit comp power limit? You are 
looking at a universe which is almost completely undetectable except for in 
the processing of a few organisms scattered on planets after billions of 
years of silent darkness.

If you run it the other way, with the Universal Mind as the Universal 
Experience instead, then complexity becomes a symptom of elaborated 
qualities of that experience rather than a cause of experience itself 
appearing into an unconscious world of matter. Our own quality of 
consciousness is not just a mind full of practical or logical thoughts, but 
also of feelings, images, intuitions, visions, etc. Our world has never 
been unconscious or conscious like us, but is rather filled with every sort 
of in-between semi-conscious, from primate to mammal, reptile, etc. The 
transition to inorganic matter is both smooth and sudden, as phenomena like 
viruses and crystals bridge the gap but also on another level, leave no 
obvious link.

From the Universal Experience, comp is derived as a second order strategy 
to manage the interaction between sub-experiences, and that interaction is 
what we perceive as physics. This way, representation arises naturally 
through any multiplicity of presentations, and both the presentation 
problem and de-presentation problems are resolved. Comp exists to serve 
sensory presence, since sensory presence cannot plausibly serve comp in any 
way. The universe is never silent and unconscious, but is always an 
experience defined by whatever participants are available, regardless of 
the complexity. The Universal Experience, I suggest, has the property of 
conserving appearances of separateness between different kinds of 
sub-experiences, and this accounts for the mistaken impression that 
non-human experiences are objectively and absolutely unconscious - they are 
'as if unconscious' relative to our local realism, but that is necessary to 
insulate our experience from an implosion of significance.

Thanks,
Craig


Re: Astigmatism Example

2013-04-03 Thread Richard Ruquist
My google account is forcing me to reply here rather than interspersed,
which is very inconvenient. But I will try.

1. As far as I know the universal mind is not aesthetic.
2. Not sure what your 2nd question means.
3. The universe has existed for 13.82 billion years with little or no
consciousness to detect it unless you consider a universal consciousness. I
do not see how that is a criticism. Seems to be a fact of nature.
4. I cannot run the other way with my model. That's your model.


On Wed, Apr 3, 2013 at 5:01 PM, Craig Weinberg whatsons...@gmail.comwrote:



 On Wednesday, April 3, 2013 3:10:29 PM UTC-4, yanniru wrote:

 I am a leftist astigmatic.

 But you raise an interesting point that I believe supports a mind/brain
 duality.
 In a universe of functionalism or comp, I would expect that this would
 never happen, as my brain should always prioritize the information made
 available by any eye that is open over that of an eye which is closed. I
 agree.

 However, in a mind/brain dualism, the mind may be due to comp and the
 brain  due to evolution of physical biological organisms, influenced by the
 mind comp but not controlled by the mind comp. (However, below the
 substitution level the universal mind comp controls all particle
 interactions and such a duality does not exist.) So in a mind/brain
 duality, the prioritization you mention cannot exist if it has not
 physically evolved.

 In my model, all physical particles and energy are created by comp in the
 big bang and are conserved thereafter, subject to the laws and constants of
 nature that also come from comp. Consciousness is a property of the
 universal mind and also manifests in biological organisms as a mind
 consciousness when the complexity of the organism exceeds the 10^120 bit comp
 power limit derived from the Bekenstein bound of the universe.


 My view is similar to what you describe as far as mind-brain dualism
 proscribing a different evolution of the agendas of mind and the
 consequences of brain conditions. I think that in a complex organism there
 is feedback on multiple levels - the mind and brain influence each other
 constantly, and, in my view, are as the head and tail of the Ouroboros
 serpent - opposite ends of the same unbroken continuum.

 The problem that I have with what you propose, as I understand it is
 twofold:

 The presentation problem. If the universal mind is comp, why does the
 universe have any aesthetic content at all? Why does comp create formal
 localizations as a physical phenomenon when it could use the digital
 localizations that it already consists of.

 The de-presentation problem. What would be the point of physical particles
 and energy being created by comp if there could be nothing able to detect
 them until some organism exceeds the 10^120 bit comp power limit? You are
 looking at a universe which is almost completely undetectable except for in
 the processing of a few organisms scattered on planets after billions of
 years of silent darkness.

 If you run it the other way, with the Universal Mind as the Universal
 Experience instead, then complexity becomes a symptom of elaborated
 qualities of that experience rather than a cause of experience itself
 appearing into an unconscious world of matter. Our own quality of
 consciousness is not just a mind full of practical or logical thoughts, but
 also of feelings, images, intuitions, visions, etc. Our world has never
 been unconscious or conscious like us, but is rather filled with every sort
 of in-between semi-conscious, from primate to mammal, reptile, etc.. The
 transition to inorganic matter is both smooth and sudden, as phenomena like
 viruses and crystals bridge the gap but also on another level, leave no
 obvious link.

 From the Universal Experience, comp is derived as a second order strategy
 to manage the interaction between sub-experiences, and that interaction is
 what we perceive as physics. This way, representation arises naturally
 through any multiplicity of presentations, and both the presentation
 problem and de-presentation problems are resolved. Comp exists to serve
 sensory presence, since sensory presence cannot plausibly serve comp in any
 way. The universe is never silent and unconscious, but is always an
 experience defined by whatever participants are available, regardless of
 the complexity. The Universal Experience, I suggest, has the property of
 conserving appearances of separateness between different kinds of
 sub-experiences, and this accounts for the mistaken impression that
 non-human experiences are objectively and absolutely unconscious - they are
 'as if unconscious' relative to our local realism, but that is necessary to
 insulate our experience from an implosion of significance.

 Thanks,
 Craig

 Richard



 On Wed, Apr 3, 2013 at 2:42 PM, Craig Weinberg whats...@gmail.com wrote:

 If any of you have a moderate astigmatism, you may have observed this -
 if not, you'll have to take my word for it.

 If I close my weak 

Re: Any human who has played a bit of Arimaa can beat a computer hands down.

2013-04-03 Thread Jason Resch
On Sat, Mar 30, 2013 at 7:58 AM, Telmo Menezes te...@telmomenezes.com wrote:




 On Thu, Mar 28, 2013 at 1:23 PM, Craig Weinberg whatsons...@gmail.com wrote:



 Then shouldn't a powerful computer be able to quickly deduce the winning
 Arimaa mappings?


 You're making the same mistake as John Clark, confusing the physical
 computer with the algorithm. Powerful computers don't help us if we don't
 have the right algorithm. The central mystery of AI, in my opinion, is why
 on earth haven't we found a general learning algorithm yet. Either it's too
 complex for our monkey brains, or you're right that computation is not the
 whole story. I believe in the former, but I'm not sure, of course.
 Notice that I'm talking about generic intelligence, not consciousness,
 which I strongly believe to be two distinct phenomena.



Another point toward Telmo's suspicion that learning is complex:

If learning and thinking intelligently at a human level were
computationally easy, biology wouldn't have evolved to use trillions of
synapses.  The brain is very expensive metabolically, using 20-25% of the
body's total energy budget of roughly 100 Watts.  If so many neurons were not needed
to do what we do, natural selection would have selected those humans with
fewer neurons and reduced food requirements.
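
The metabolic figures above can be sanity-checked with a few lines of arithmetic. The specific numbers below are commonly cited rough estimates that I am assuming for illustration; they are not taken from the thread:

```python
# Back-of-envelope check on the metabolic-cost argument.
# All figures are rough, assumed estimates.
BODY_POWER_W = 100.0      # resting human metabolic rate, ~100 W
BRAIN_FRACTION = 0.22     # brain's share of resting energy use, ~20-25%
SYNAPSES = 1.5e14         # on the order of 10^14 ("trillions of") synapses

brain_power_w = BODY_POWER_W * BRAIN_FRACTION
watts_per_synapse = brain_power_w / SYNAPSES

print(f"brain power: ~{brain_power_w:.0f} W")      # ~22 W
print(f"per synapse: ~{watts_per_synapse:.1e} W")  # ~1.5e-13 W
```

Even at ~20 W the brain is far more expensive per unit mass than most other tissue, which is the selection-pressure point being made here.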

Jason

-- 
You received this message because you are subscribed to the Google Groups 
Everything List group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to everything-list+unsubscr...@googlegroups.com.
To post to this group, send email to everything-list@googlegroups.com.
Visit this group at http://groups.google.com/group/everything-list?hl=en.
For more options, visit https://groups.google.com/groups/opt_out.




Re: Astigmatism Example

2013-04-03 Thread Jesse Mazer
On Wed, Apr 3, 2013 at 2:42 PM, Craig Weinberg whatsons...@gmail.com wrote:


 In a universe of functionalism or comp, I would expect that this would
 never happen, as my brain should always prioritize the information made
 available by any eye that is open over that of an eye which is closed.


I don't think the "function" in "functionalism" is supposed to refer to
utility or purpose. Functionalism, as I understand it, just refers to the
idea that if you replaced each part of the brain with a functionally
identical part, meaning that its input/output relationship is the same as
the original part, then this will result in no change in conscious
experience, regardless of the material details of how the part produces
this input/output relation (a miniature version of the Chinese room
thought experiment could work, for example). It's also self-evident that
there should be no behavioral change, *if* we assume the reductionist idea
that the large-scale behavior of any physical system is determined by the
rules governing the behavior and interactions of each of its component
parts (you would probably dispute this, but the point is just that this
seems to be one of the assumptions of 'functionalism', and of course almost
all modern scientific theories of systems composed of multiple parts work
with this assumption).

For example, if you have a tumor which is altering your consciousness and
disrupting some other abilities like speech, that is obviously not serving
any useful function, but functionalism wouldn't claim it should, it would
just say that if you replaced the tumor with an artificial device that
affected the surrounding neurons in exactly the same way, the affected
patient wouldn't notice any subjective difference (likewise with more
useful parts of the brain, of course).

There may of course be different meanings that philosophers have assigned
to the term functionalism, but I think this is one, and I'm pretty sure
it's part of what COMP is taken to mean on this list.

Jesse





Re: Astigmatism Example

2013-04-03 Thread Craig Weinberg


On Wednesday, April 3, 2013 5:30:44 PM UTC-4, yanniru wrote:

 My google account is forcing me to reply here rather than interspersed, 
 which is very inconvenient. But I will try.

 1. As far as I know the universal mind is not aesthetic


Exactly, which is why it can't be responsible for any aesthetic agenda, and 
as far as I can tell, consciousness is a purely aesthetic agenda. No mind 
(or logic, or set of computations) can be responsible for consciousness.
 

 2. Not sure what your 2nd question means
 3. The universe has existed for 13.82 billion years with little or no consciousness 
 to detect it unless you consider a universal consciousness.


Little to no consciousness is what I am saying is a bad assumption. Any 
given non-human experience may have little or no consciousness which we 
relate to as human beings, but just as comp (especially Bruno's 
implementation of comp) points to a vast infinity of unfamiliar and 
invisible perfections, my expectation is that the universe without human 
beings is still overflowing with experience. This is a different kind of 
panexperientialism, not one which says that a planet is a living being, but 
that what we see as a planet is a contrived representation of a vast set of 
experiences on a completely different scale than humans can directly 
interact with. Just as a human brain reveals no clue as to the particular 
feelings and memories of the person who is associated with it, all 
experiences associated with Earth are represented by the Earth itself. My 
panexperientialism is about all phenomena which appear to us as public 
bodies being tokens of the underlying reality, which is not matter, not 
computation, but an eternity of interwoven experiences and meta-experiences.
 

 I do not see how that is a criticism. Seems to be a fact of nature.


Seems is the key word. Of course nature seems to contain a universe of 
unconscious matter to us, because that perceptual relativity is what allows 
us to develop our own rich perceptual inertial frame (niche or umwelt). 
Just as the mites that live in our eyelids have no possible sense of the 
actions which exist on our level, we have no opportunity to view the 
universe from a non-human vantage point - where millions of years pass in 
seconds and solar systems bounce off of each other like spinning tops.
 

 4.I cannot run the other way with my model. That's your model


The truth of nature belongs to everyone, not just me. All that it takes for 
you to be able to run the model my way is some curiosity, bravery, and 
humility.

Craig
 



 On Wed, Apr 3, 2013 at 5:01 PM, Craig Weinberg whats...@gmail.com wrote:



 On Wednesday, April 3, 2013 3:10:29 PM UTC-4, yanniru wrote:

 I am a leftist astigmatic.

 But you raise an interesting point that I believe supports a mind/brain 
 duality.
 In a universe of functionalism or comp, I would expect that this would 
 never happen, as my brain should always prioritize the information made 
 available by any eye that is open over that of an eye which is closed. I 
 agree.
  
 However, in a mind/brain dualism, the mind may be due to comp and the 
 brain  due to evolution of physical biological organisms, influenced by the 
 mind comp but not controlled by the mind comp. (However, below the 
 substitution level the universal mind comp controls all particle 
 interactions and such a duality does not exist.) So in a mind/brain 
 duality, the prioritization you mention cannot exist if it has not 
 physically evolved.

 In my model, all physical particles and energy are created by comp in 
 the big bang and are conserved thereafter, subject to the laws and 
 constants of nature that also come from comp. Consciousness is a property 
 of the universal mind and also manifests in biological organisms as a mind 
 consciousness when the complexity of the organism exceeds the 10^120 bit 
 comp 
 power limit derived from the Bekenstein bound of the universe.
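
 For what it's worth, the order of magnitude of that limit is checkable: the Bekenstein bound S <= 2*pi*k*R*E/(hbar*c), applied to the observable universe, lands within a few orders of magnitude of 10^120 bits. The inputs below are rough textbook values I am assuming for illustration, not Richard's own derivation:

```python
import math

# Rough Bekenstein-bound estimate for the observable universe
# (illustrative assumed inputs, not the derivation used in the thread).
HBAR = 1.054571817e-34   # reduced Planck constant, J*s
C = 2.998e8              # speed of light, m/s
R = 4.4e26               # radius of observable universe, m (~46.5 Gly)
M = 1.5e53               # mass-energy of observable universe, kg (order of magnitude)

E = M * C**2                                            # total energy, J
bits = 2 * math.pi * R * E / (HBAR * C * math.log(2))   # S / (k ln 2)

print(f"Bekenstein bound: ~10^{math.log10(bits):.0f} bits")  # ~10^123
```

 The result is sensitive to the assumed radius and mass, so anything in the 10^120-10^124 range is consistent with this kind of estimate.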


 My view is similar to what you describe as far as mind-brain dualism 
 prescribing a different evolution of the agendas of mind and the 
 consequences of brain conditions. I think that in a complex organism there 
 is feedback on multiple levels - the mind and brain influence each other 
 constantly, and, in my view, are as the head and tail of the Ouroboros 
 serpent - opposite ends of the same unbroken continuum. 

 The problem that I have with what you propose, as I understand it, is 
 twofold:

 The presentation problem. If the universal mind is comp, why does the 
 universe have any aesthetic content at all? Why does comp create formal 
 localizations as a physical phenomenon when it could use the digital 
 localizations that it already consists of? 

 The de-presentation problem. What would be the point of physical 
 particles and energy being created by comp if there could be nothing able 
 to detect them until some organism exceeds the 10^120 bit comp power limit? 
 You are looking at a universe which is almost 

Re: Any human who has played a bit of Arimaa can beat a computer hands down.

2013-04-03 Thread meekerdb

On 4/3/2013 2:44 PM, Jason Resch wrote:
You're making the same mistake as John Clark, confusing the physical computer with the 
algorithm. Powerful computers don't help us if we don't have the right algorithm. The 
central mystery of AI, in my opinion, is why on earth haven't we found a general 
learning algorithm yet. Either it's too complex for our monkey brains, or you're right 
that computation is not the whole story. I believe in the former, but I'm not sure, 
of course. Notice that I'm talking about generic intelligence, not consciousness, which 
I strongly believe to be two distinct phenomena.


Then do you think there could be philosophical zombies?  How would you operationally test 
a robot to see whether it was (a) intelligent (b) conscious?


Brent





Re: Astigmatism Example

2013-04-03 Thread Craig Weinberg


On Wednesday, April 3, 2013 5:53:40 PM UTC-4, jessem wrote:



 On Wed, Apr 3, 2013 at 2:42 PM, Craig Weinberg whats...@gmail.com wrote:


 In a universe of functionalism or comp, I would expect that this would 
 never happen, as my brain should always prioritize the information made 
 available by any eye that is open over that of an eye which is closed.


 I don't think the function in functionalism is supposed to refer to 
 utility or purpose. Functionalism as I understand it just refers to the 
 idea that if you replaced each part of the brain with a functionally 
 identical part, meaning that its input/output relationship is the same as 
 the original part, then this will result in no change in conscious 
 experience, regardless of the material details of how the part produces 
 this input/output relation (a miniature version of the Chinese room 
 thought experiment could work, for example). 


Right, but in the nervous system, the input/output relationship is the 
same as utility or purpose. Think of it this way. If I make a cymatic 
pattern in some sand spread out on top of a drum head by vibrating it with 
a certain frequency of sound, then functionalism says that whatever I do to 
make that pattern must equal a sound. We know that isn't true though. I 
could make that cymatic pattern simply by making a mold of it and filling 
that mold with sand. I could stamp out necklaces with miniature versions of 
that pattern in bronze. I could design a device which records the motion of 
the sand as the pattern forms optically and then reproduces the same motion 
and the same pattern in some other medium, like a TV screen. All of these 
methods reproduce the input/output relationship which creates the 
pattern, yet none of them involve carrying over the sound which I initially 
used to make the pattern.

It's a little different because we can change our conscious experience by 
changing the pattern of our brain activity, and that activity can be 
changed in the same way by different means, so that functionalist 
assumptions can be used legitimately to understand brain physiology - but - 
that does not mean that the functionalist assumptions automatically tell 
the whole story. If they did, then we would not need subjective reports to 
correlate with brain activity, we would be able to simply detect subjective 
qualities as functions, which of course we cannot do in any way. Just as 
there is more than one way to make a pattern in sand, there is more than 
one expression of any given experience. On one level it is hundreds of 
billions of molecules reconfiguring each other, and on another it is a single 
experience which contains within it a billion times that number of 
experiences on different levels.

It's also self-evident that there should be no behavioral change, *if* we 
 assume the reductionist idea that the large-scale behavior of any physical 
 system is determined by the rules governing the behavior and interactions 
 of each of its component parts (you would probably dispute this, but the 
 point is just that this seems to be one of the assumptions of 
 'functionalism', and of course almost all modern scientific theories of 
 systems composed of multiple parts work with this assumption).


Look at how freeway traffic works. We can statistically analyze the 
positions and actions of the cars and with a few simple rules, predict a 
model of general traffic flow. Such a model is very effective for 
predicting and controlling traffic, but it does not have access to the 
meaning of the traffic - which is in fact the narrative agendas of each 
individual driver trying to leave one location and get to another. That is 
the reason the traffic exists; because drivers are using vehicles to 
realize their motives. We could model traffic instead as a torrent of 
automotive particles, which attract drivers inside of them automatically 
through a wave-like field which happens to be synchronized with rush hour 
and lunch hour, and our model would not be incorrect in its predictions, 
but of course, it would lead us to a completely false conclusion about the 
nature of cars.
 


 For example, if you have a tumor which is altering your consciousness and 
 disrupting some other abilities like speech, that is obviously not serving 
 any useful function, but functionalism wouldn't claim it should, it would 
 just say that if you replaced the tumor with an artificial device that 
 affected the surrounding neurons in exactly the same way, the affected 
 patient wouldn't notice any subjective difference (likewise with more 
 useful parts of the brain, of course).

 There may of course be different meanings that philosophers have assigned 
 to the term functionalism, but I think this is one, and I'm pretty sure 
 it's part of what COMP is taken to mean on this list.


Point taken. I was referring more to the 'ontological implications of 
functionalism' rather than functionalism itself. It's important to 

Re: Any human who has played a bit of Arimaa can beat a computer hands down.

2013-04-03 Thread Craig Weinberg


On Wednesday, April 3, 2013 5:44:24 PM UTC-4, Jason wrote:




 On Sat, Mar 30, 2013 at 7:58 AM, Telmo Menezes te...@telmomenezes.com wrote:




 On Thu, Mar 28, 2013 at 1:23 PM, Craig Weinberg whats...@gmail.com wrote:



 Then shouldn't a powerful computer be able to quickly deduce the winning 
 Arimaa mappings?


 You're making the same mistake as John Clark, confusing the physical 
 computer with the algorithm. Powerful computers don't help us if we don't 
 have the right algorithm. The central mystery of AI, in my opinion, is why 
 on earth haven't we found a general learning algorithm yet. Either it's too 
 complex for our monkey brains, or you're right that computation is not the 
 whole story. I believe in the former, but I'm not sure, of course. 
 Notice that I'm talking about generic intelligence, not consciousness, 
 which I strongly believe to be two distinct phenomena.
   


 Another point toward Telmo's suspicion that learning is complex:

 If learning and thinking intelligently at a human level were 
 computationally easy, biology wouldn't have evolved to use trillions of 
 synapses.  The brain is very expensive metabolically (using 20 - 25% of the 
 total body's energy, about 100 Watts).  If so many neurons were not needed 
 to do what we do, natural selection would have selected those humans with 
 fewer neurons and reduced food requirements.


There's no question that human intelligence reflects an improved survival 
through learning, and that that is what makes the physiological investment 
pay off. What I question is why that improvement would entail awareness. 
There are a lot of neurons in our gut as well, and assimilation of 
nutrients is undoubtedly complex and important to survival, yet we are not 
compelled to insist that there must be some conscious experience to manage 
that intelligence. Learning is complex, but awareness itself is simple.

Craig
 


 Jason






Re: Any human who has played a bit of Arimaa can beat a computer hands down.

2013-04-03 Thread Jason Resch
Brent,

Your mail client is malfunctioning again; you are quoting something Telmo
wrote as coming from me.

My opinion on the matter of philosophical zombies is that they are
logically inconsistent.

Jason


On Wed, Apr 3, 2013 at 5:44 PM, meekerdb meeke...@verizon.net wrote:

 On 4/3/2013 2:44 PM, Jason Resch wrote:

 You're making the same mistake as John Clark, confusing the physical
 computer with the algorithm. Powerful computers don't help us if we don't
 have the right algorithm. The central mystery of AI, in my opinion, is why
 on earth haven't we found a general learning algorithm yet. Either it's too
 complex for our monkey brains, or you're right that computation is not the
 whole story. I believe in the former, but I'm not sure, of course.
 Notice that I'm talking about generic intelligence, not consciousness,
 which I strongly believe to be two distinct phenomena.


 Then do you think there could be philosophical zombies?  How would you
 operationally test a robot to see whether it was (a) intelligent (b)
 conscious?

 Brent










Re: Any human who has played a bit of Arimaa can beat a computer hands down.

2013-04-03 Thread Jason Resch
On Wed, Apr 3, 2013 at 6:04 PM, Craig Weinberg whatsons...@gmail.com wrote:



 On Wednesday, April 3, 2013 5:44:24 PM UTC-4, Jason wrote:




 On Sat, Mar 30, 2013 at 7:58 AM, Telmo Menezes te...@telmomenezes.com wrote:




 On Thu, Mar 28, 2013 at 1:23 PM, Craig Weinberg whats...@gmail.com wrote:



 Then shouldn't a powerful computer be able to quickly deduce the
 winning Arimaa mappings?


 You're making the same mistake as John Clark, confusing the physical
 computer with the algorithm. Powerful computers don't help us if we don't
 have the right algorithm. The central mystery of AI, in my opinion, is why
 on earth haven't we found a general learning algorithm yet. Either it's too
 complex for our monkey brains, or you're right that computation is not the
 whole story. I believe in the former, but I'm not sure, of course.
 Notice that I'm talking about generic intelligence, not consciousness,
 which I strongly believe to be two distinct phenomena.



 Another point toward Telmo's suspicion that learning is complex:

 If learning and thinking intelligently at a human level were
 computationally easy, biology wouldn't have evolved to use trillions of
 synapses.  The brain is very expensive metabolically (using 20 - 25% of the
 total body's energy, about 100 Watts).  If so many neurons were not needed
 to do what we do, natural selection would have selected those humans with
 fewer neurons and reduced food requirements.


 There's no question that human intelligence reflects an improved survival
 through learning, and that that is what makes the physiological investment
 pay off.


Right, so my point is that we should not expect things like human
intelligence or human learning to be trivial or easy to get in robots, when
the human brain is the most complex thing we know, and can perform more
computations than even the largest supercomputers of today.



 What I question is why that improvement would entail awareness.


A human has to be aware to do the things it does, because zombies are not
possible.  Your examples of blind sight are not a disproof of the
separability of function and awareness, only examples of broken links in
communication (quite similar to split brain patients).


 There are a lot of neurons in our gut as well, and assimilation of
 nutrients is undoubtedly complex and important to survival, yet we are not
 compelled to insist that there must be some conscious experience to manage
 that intelligence. Learning is complex, but awareness itself is simple.


I think the nerves in the gut can manifest as awareness, such as cravings
for certain foods when the body realizes it is deficient in some particular
nutrient.  After all, what is the point of all those nerves if they have no
impact on behavior?

Jason





Re: Any human who has played a bit of Arimaa can beat a computer hands down.

2013-04-03 Thread meekerdb

Hmm. You're right, I was intending to ask Telmo that question.

Brent

On 4/3/2013 5:52 PM, Jason Resch wrote:

Brent,

Your mail client is malfunctioning again; you are quoting something Telmo wrote as 
coming from me.


My opinion on the matter of philosophical zombies is that they are logically 
inconsistent.

Jason


On Wed, Apr 3, 2013 at 5:44 PM, meekerdb meeke...@verizon.net 
mailto:meeke...@verizon.net wrote:


On 4/3/2013 2:44 PM, Jason Resch wrote:

You're making the same mistake as John Clark, confusing the physical 
computer
with the algorithm. Powerful computers don't help us if we don't have 
the right
algorithm. The central mystery of AI, in my opinion, is why on earth 
haven't we
found a general learning algorithm yet. Either it's too complex for our 
monkey
brains, or you're right that computation is not the whole story. I 
believe in
the former, but I'm not sure, of course. Notice that I'm talking 
about
generic intelligence, not consciousness, which I strongly believe to be 
two
distinct phenomena.


Then do you think there could be philosophical zombies?  How would you 
operationally
test a robot to see whether it was (a) intelligent (b) conscious?

Brent












Re: Any human who has played a bit of Arimaa can beat a computer hands down.

2013-04-03 Thread Craig Weinberg


On Wednesday, April 3, 2013 8:58:37 PM UTC-4, Jason wrote:




 On Wed, Apr 3, 2013 at 6:04 PM, Craig Weinberg whats...@gmail.com wrote:



 On Wednesday, April 3, 2013 5:44:24 PM UTC-4, Jason wrote:




 On Sat, Mar 30, 2013 at 7:58 AM, Telmo Menezes te...@telmomenezes.com wrote:




 On Thu, Mar 28, 2013 at 1:23 PM, Craig Weinberg whats...@gmail.com wrote:



 Then shouldn't a powerful computer be able to quickly deduce the 
 winning Arimaa mappings?


 You're making the same mistake as John Clark, confusing the physical 
 computer with the algorithm. Powerful computers don't help us if we don't 
 have the right algorithm. The central mystery of AI, in my opinion, is why 
 on earth haven't we found a general learning algorithm yet. Either it's 
 too 
 complex for our monkey brains, or you're right that computation is not the 
 whole story. I believe in the former, but I'm not sure, of course. 
 Notice that I'm talking about generic intelligence, not consciousness, 
 which I strongly believe to be two distinct phenomena.
   


 Another point toward Telmo's suspicion that learning is complex:

 If learning and thinking intelligently at a human level were 
 computationally easy, biology wouldn't have evolved to use trillions of 
 synapses.  The brain is very expensive metabolically (using 20 - 25% of the 
 total body's energy, about 100 Watts).  If so many neurons were not needed 
 to do what we do, natural selection would have selected those humans with 
 fewer neurons and reduced food requirements.


 There's no question that human intelligence reflects an improved survival 
 through learning, and that that is what makes the physiological investment 
 pay off.


 Right, so my point is that we should not expect things like human 
 intelligence or human learning to be trivial or easy to get in robots, when 
 the human brain is the most complex thing we know, and can perform more 
 computations than even the largest super computers of today.


Absolutely, but neither should we expect that complexity alone can make an 
assembly of inorganic parts into a subjective experience which compares to 
that of an animal. 


  

 What I question is why that improvement would entail awareness.


 A human has to be aware to do the things it does, because zombies are not 
 possible.


That's begging the question. Anything that is not exactly what we might 
assume it is would be a 'zombie' to some extent. A human does not have to 
be aware to do the things that it does, which is proved by blindsight, 
sleepwalking, brainwashing, etc. A human may, in reality, have to be aware 
to perform all of the functions that we do, but if comp were true, that 
would not be the case.
 

   Your examples of blindsight are not a disproof of the separability of 
 function and awareness,


I understand why you think that, but ultimately it is proof of exactly that.
 

 only examples of broken links in communication (quite similar to split 
 brain patients).


A broken link in communication which prevents you from being aware of the 
experience which is informing you is the same thing as function being 
separate from awareness. The end result is that it is not necessary to 
experience any conscious qualia to receive optical information. There is no 
difference functionally between a broken link in communication and 
separability of function and awareness. The awareness is broken in the 
dead link, but the function is retained, thus they are in fact separate.

 

 There are a lot of neurons in our gut as well, and assimilation of 
 nutrients is undoubtedly complex and important to survival, yet we are not 
 compelled to insist that there must be some conscious experience to manage 
 that intelligence. Learning is complex, but awareness itself is simple.


 I think the nerves in the gut can manifest as awareness, such as cravings 
 for certain foods when the body realizes it is deficient in some particular 
 nutrient.  After all, what is the point of all those nerves if they have no 
 impact on behavior?


Oh I agree, because my view is panexperiential. The gut doesn't have the 
kind of awareness that a human being has as a whole, because the other 
organs of the body are not as significant as the brain is to the organism. 
If we are going by the comp assumption though, then there is an implication 
that nothing has any awareness unless it is running a very sophisticated 
program.


Craig
 


 Jason
  
  
  







Re: Losing Control

2013-04-03 Thread meekerdb

On 4/3/2013 7:33 PM, Stathis Papaioannou wrote:
Not only is the function of the artificial peptides the same, the patient also feels the 
same. Wouldn't you expect them to feel a bit different?


How do you know?  Maybe they became zombies.

Brent





Re: Any human who has played a bit of Arimaa can beat a computer hands down.

2013-04-03 Thread Jason Resch
On Wed, Apr 3, 2013 at 9:54 PM, Craig Weinberg whatsons...@gmail.com wrote:



 On Wednesday, April 3, 2013 8:58:37 PM UTC-4, Jason wrote:




 On Wed, Apr 3, 2013 at 6:04 PM, Craig Weinberg whats...@gmail.com wrote:



 On Wednesday, April 3, 2013 5:44:24 PM UTC-4, Jason wrote:




 On Sat, Mar 30, 2013 at 7:58 AM, Telmo Menezes 
 te...@telmomenezes.com wrote:




 On Thu, Mar 28, 2013 at 1:23 PM, Craig Weinberg whats...@gmail.com wrote:



 Then shouldn't a powerful computer be able to quickly deduce the
 winning Arimaa mappings?


 You're making the same mistake as John Clark, confusing the physical
 computer with the algorithm. Powerful computers don't help us if we don't
 have the right algorithm. The central mystery of AI, in my opinion, is why
 on earth we haven't found a general learning algorithm yet. Either it's too
 complex for our monkey brains, or you're right that computation is not the
 whole story. I believe the former, but I'm not sure, of course.
 Notice that I'm talking about generic intelligence, not consciousness,
 which I strongly believe to be two distinct phenomena.



 Another point toward Telmo's suspicion that learning is complex:

 If learning and thinking intelligently at a human level were
 computationally easy, biology wouldn't have evolved to use trillions of
 synapses.  The brain is metabolically expensive, consuming 20-25% of the
 body's total energy budget of roughly 100 watts.  If so many neurons were not needed
 to do what we do, natural selection would have selected those humans with
 fewer neurons and reduced food requirements.
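A quick back-of-envelope check of those figures, assuming (round numbers, not measurements) a resting human metabolic budget of about 2,000 kcal per day:

```python
# Sanity check of the brain-energy figures quoted above, under the stated
# assumptions: a resting body burns ~2000 kcal/day and the brain takes 20-25%.

KCAL_TO_JOULES = 4184          # 1 kilocalorie = 4184 joules
SECONDS_PER_DAY = 24 * 60 * 60

def watts_from_kcal_per_day(kcal: float) -> float:
    """Convert a daily energy budget in kcal to average power in watts."""
    return kcal * KCAL_TO_JOULES / SECONDS_PER_DAY

body_watts = watts_from_kcal_per_day(2000)            # roughly 97 W for the whole body
brain_watts = (0.20 * body_watts, 0.25 * body_watts)  # roughly 19-24 W for the brain
```

So "about 100 watts" is the whole body's output, of which the brain's 20-25% share is on the order of 20-25 watts.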


 There's no question that human intelligence reflects an improved
 survival through learning, and that that is what makes the physiological
 investment pay off.


 Right, so my point is that we should not expect things like human
 intelligence or human learning to be trivial or easy to get in robots, when
 the human brain is the most complex thing we know, and can perform more
 computations than even the largest supercomputers of today.


 Absolutely, but neither should we expect that complexity alone


I don't think anyone has argued that complexity alone is sufficient.



 can make an assembly of inorganic parts into a subjective experience which
 compares to that of an animal.


Both are made of the same four fundamental forces interacting with each
other, why should the number of protons in the nucleus of some atoms in
those organic molecules make any difference to the subject?  What led you
to choose the chemical elements as the origin of sense and feeling, as
opposed to higher level structures (neurology, circuits, etc.) or lower
level structures (quarks, gluons, electrons)?







 What I question is why that improvement would entail awareness.


 A human has to be aware to do the things it does, because zombies are not
 possible.


 That's begging the question.


Not quite, I provided an argument for my reasoning.  What is your
objection, that zombies are possible, or that zombies are not possible but
that doesn't mean something that in all ways appears conscious must be
conscious?



 Anything that is not exactly what we might assume it is would be a
 'zombie' to some extent. A human does not have to be aware to do the things
 that it does, which is proved by blindsight, sleepwalking, brainwashing,
 etc. A human may, in reality, have to be aware to perform all of the
 functions that we do, but if comp were true, that would not be the case.


   Your examples of blindsight are not a disproof of the separability of
 function and awareness,


 I understand why you think that, but ultimately it is proof of exactly
 that.


 only examples of broken links in communication (quite similar to split
 brain patients).


 A broken link in communication which prevents you from being aware of the
 experience which is informing you is the same thing as function being
 separate from awareness. The end result is that it is not necessary to
 experience any conscious qualia to receive optical information. There is no
 difference functionally between a broken link in communication and
 separability of function and awareness. The awareness is broken in the
 dead link, but the function is retained, thus they are in fact separate.


So you take the split brain patient's word for it that he didn't see the
word PAN flashed on the screen?
http://www.youtube.com/watch?v=ZMLzP1VCANot=1m50s

Perhaps his left hemisphere didn't see it, but his right hemisphere
certainly did, as his right hemisphere is able to draw a picture of that
pan (something in his brain saw it).

I can't experience life through your eyes right now because our brains are
disconnected. Should you take my word for it that you must not be
experiencing anything, because the "I" in Jason's skull doesn't experience
any visual stimulus from Craig's eyes?






 There are a lot of neurons in our gut as well, and assimilation of
 nutrients is undoubtedly complex and important to survival, yet 

Re: Losing Control

2013-04-03 Thread Stathis Papaioannou
On Thu, Apr 4, 2013 at 3:32 AM, Craig Weinberg whatsons...@gmail.com wrote:

There are, of course, undiscovered scientific facts. If scientists did not
 believe that they would give up science. But Craig is not saying that there
 are processes inside cells that are controlled by as yet undiscovered
 physical effects. What he is saying is that if I decide to move my arm the
 arm will move not due to the well-studied sequence of neurological events,
 but spontaneously, due to my will.


 UGH. No. I say that if I move my arm, the arm will move because I AM
 whatever sequence of events on whatever level - molecular, biochemical,
 physiological, whether well-studied or not. You may not be able to
 understand that what I intend is not to squeeze myself into biology, or to
 magically replace biology, but to present that the entirety of the physics
 of my body intersects with the entirety of the physics of my experience.
 The two aesthetics - public bodies in space and private experiences through
 time, are an involuted (Ouroboran, umbilical) Monism. If you
 don't understand what that means then you are arguing with a straw man.


If you ARE the sequence of neurological events and the neurological events
follow deterministic or probabilistic rules then you will also follow
deterministic or probabilistic rules. However, you don't believe that this
is the case. So sometimes there must be neurological events which are
"spontaneous" by your definition, i.e. outside the normal causal chain.
Absent this, you return to the default scientific position.


-- 
Stathis Papaioannou
