Re: Reconciling Random Neuron Firings and Fading Qualia

2015-05-19 Thread Stathis Papaioannou
On 19 May 2015 at 15:42, 'Chris de Morsella' via Everything List
everything-list@googlegroups.com wrote:


 -Original Message-
 From: everything-list@googlegroups.com 
 [mailto:everything-list@googlegroups.com] On Behalf Of Stathis Papaioannou
 Sent: Monday, May 18, 2015 10:07 PM
 To: everything-list@googlegroups.com
 Subject: Re: Reconciling Random Neuron Firings and Fading Qualia

 On 19 May 2015 at 14:45, Jason Resch jasonre...@gmail.com wrote:


 On Mon, May 18, 2015 at 9:21 PM, Stathis Papaioannou
 stath...@gmail.com
 wrote:

 On 19 May 2015 at 11:02, Jason Resch jasonre...@gmail.com wrote:

  I think you're not taking into account the level of the functional
  substitution. Of course functionally equivalent silicon and
  functionally equivalent neurons can (under functionalism) both
  instantiate the same consciousness. But a calculator computing 2+3
  cannot substitute for a human brain computing 2+3 and produce the
  same consciousness.

 In a gradual replacement the substitution must obviously be at a
 level sufficient to maintain the function of the whole brain.
 Sticking a calculator in it won't work.

  Do you think a Blockhead that was functionally equivalent to you
  (it could fool all your friends and family in a Turing test
 scenario into thinking it was in fact you) would be conscious in the
  same way as you?

 Not necessarily, just as an actor may not be conscious in the same
 way as me. But I suspect the Blockhead would be conscious; the
 intuition that a lookup table can't be conscious is like the
 intuition that an electric circuit can't be conscious.


 I don't see an equivalence between those intuitions. A lookup table
 has a bounded and very low degree of computational complexity: all
 queries are answered in constant time.

 While the table itself may have an arbitrarily high information
 content, what in the software of the lookup table program is there to
 appreciate/understand/know that information?
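Jason's point about constant-time answers can be made concrete. Below is a minimal sketch of a Blockhead-style responder, assuming a hypothetical pre-filled table keyed on the whole conversation history; every name and table entry here is illustrative, not anything from the thread:

```python
# A minimal "Blockhead" sketch: every reply is a constant-time lookup,
# keyed on the entire conversation history so far. The table contents
# are hypothetical stand-ins for an (astronomically large) real table.

TABLE = {
    (): "Hello.",
    ("Hello.", "How are you?"): "Fine, thanks.",
    ("Hello.", "How are you?", "Fine, thanks.", "What is 2+3?"): "5.",
}

def blockhead_reply(history):
    """Answer any query in O(1) expected time; nothing happens here
    beyond hashing the history and fetching a canned string."""
    return TABLE.get(tuple(history), "I don't understand.")

print(blockhead_reply(["Hello.", "How are you?"]))  # Fine, thanks.
```

The point of the sketch is that all apparent "understanding" lives in the table's contents, not in the trivial retrieval program, which is exactly the intuition being debated.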

 Understanding emerges from the fact that the lookup table is immensely large. 
 It could be wrong, but I don't think it is obviously less plausible than 
 understanding emerging from a Turing machine made of tin cans.

 Yes... but the table's immense size is a measure of its capacity to 
 store information. A complex real-time system also requires the ability to 
 handle an immense scale of throughput; for example, our sensorial streams. 
 Without a capacity for scaling up in the dimension of handling streams of 
 information, all one can ever end up with is a mass storage system 
 (bottlenecked by its limited capacity for handling throughput).

 We could never experience the exquisitely rendered reality we all perceive -- 
 from the vantage point of our privileged inside-looking-out point of view -- 
 were it not for the incredible, massively parallel ability of our 
 advanced (for this planet) brains to process reality *as it happens*! Our 
 brains are not only big in their ability to store information; they are 
 incredibly powerful parallel processors that chew through massive real-time 
 streams, performing all manner of pattern detection and matching, memory recall 
 operations, decisional executive processing, and memory update and commit 
 operations. For every thought we are consciously aware of, a vast 
 parallelized, self-error-correcting, distributed, quorum-based 
 decisional neural network has been in operation.
 The ability to handle massive throughput matters!

The Blockhead could pass the Turing test but it would not be
equivalent to a human. The point I was making is that it should not be
dismissed as obviously non-conscious.


-- 
Stathis Papaioannou

-- 
You received this message because you are subscribed to the Google Groups 
Everything List group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to everything-list+unsubscr...@googlegroups.com.
To post to this group, send email to everything-list@googlegroups.com.
Visit this group at http://groups.google.com/group/everything-list.
For more options, visit https://groups.google.com/d/optout.


Re: Physicists Are Philosophers, Too

2015-05-19 Thread Bruno Marchal


On 18 May 2015, at 18:15, Samiya Illias wrote:




On 18-May-2015, at 7:24 am, Bruno Marchal marc...@ulb.ac.be wrote:



On 13 May 2015, at 08:23, Samiya Illias wrote:




On 12-May-2015, at 9:39 pm, LizR lizj...@gmail.com wrote:

On 13 May 2015 at 14:29, Samiya Illias samiyaill...@gmail.com  
wrote:
1) The Quran reminds us that humans have been made in charge of  
Earth and hence are responsible for the welfare of the Earth and  
all in it
2) The Quran also tells us that we will be held accountable for  
all that we've been gifted with, hence the more worldly riches or  
power one has, the greater the responsibility and the greater the  
accountability
So yes, it speaks of all of us and says that every action,  
intention, everything is being recorded and will be replayed and  
the criminals will not be able to say anything, rather their  
bodies will bear witness against themselves. Humans will be  
recompensed in full in complete justice, and nobody will be  
wronged in the least.


It's a nice fantasy, at least. As opposed to the (apparent)  
reality that rich people can screw everyone else, each other, and  
the planet, and still make out like bandits.


That is why I suppose facts about creation have been mentioned  
across the Quran so that those who doubt its authenticity can  
study and assess for themselves whether this message is from the  
One who created, knows and is in perfect control of everything to  
the minutest detail, and is therefore able to carry out His Will  
and keep His Promise, or if this is just a fantasy.



The Greeks, and maybe not so much the Indians, got the same idea/ 
fantasy. The idea that God is Good.


With comp, good is a protagorean virtue: it obeys []x -> ~x

But what is good?

Can a God be good and accept that one of its creatures kills  
another of its creatures ... in its Name?


Can a teacher be good who sets an exam for his students, allows the  
students to refer to the textbook, and patiently allows the students  
time to attempt the exam?
Can a movie maker be good who assigns roles to the characters,  
gives them the script, and then lets them enact their roles?
Can an employer be good who sets mock assignments (aptitude tests)  
for his prospective employees, then gives them the liberty to  
execute them, and only later appraises them for their actions?
The students', the actors' and the potential employees' performance  
in their exam, movie or mock assignment determines their future  
potential and possible career.
If we consider this life as the only life and death as a finality,  
then of course the perspective is different. But if we realise that  
it's just the end of the trial, then the perspective changes  
completely.




With comp you already sin once you feel superior, or inferior, with  
respect to Her/It/He.




It's not about feeling superior or inferior, it's about realising  
that we are the 'creation' and God is the 'Creator'. It's about  
being Just and Justice cannot be a sin.


Do you take for granted that Homo sapiens is the favorite  
creature of God?


Never claimed to be the favourite. Perhaps among the favoured  
creation, but certainly not the only one.




Does Arithmetical truth love the Löbian numbers? Truth is  
certainly an attractor for the Löbian numbers.


??



God is good,


I agree! Thank you :)

but with comp both of them are undefinable, and you might sin by  
deciding what is good and bad by using its Name publicly.


Everyone knows the difference between good and bad


I think it's called Conscience


(it is like the difference between eating a fruit and being burned),


Like Garden of Eden and Fire of Hell?

but above that you have to open your mind on how others talk about  
God, and concentrate on what is common, and discard the difference.


Reality is beyond the fairy tales; indeed it is even beyond the  
correct humans' and machines' theories.


Yes, we have been given very little knowledge, most of it is unknown.



With comp it is almost obvious that any finite text on God can only  
be a lie. God is beyond texts.


Maybe comp is 'almost obviously' wrong? The Creator who created us  
to the  exact cellular and atomic detail, can the same Creator not  
also provide a User Manual? After all, the scriptures are not 'about  
God', they are about humans!




Texts can help, but texts can delude, also, especially if you  
attach yourself to literal interpretations.


God, who created in us the ability to speak and express ourselves in  
word and in writing, can express and communicate better than we can.  
There is no need to philosophise the scriptures. Literal readings  
are our best chance to attempt to understand the message of the  
scriptures.




In the Middle East, people have discussed the Plato/Aristotle view  
longer than in the Occident, and the influence of Plato, and thus of  
that open-mindedness toward both reason and mysticism, is quite  
palpable. But the Jews, with Maimonides, and most 

Re: My comments on The Movie Graph Argument Revisited by Russell Standish

2015-05-19 Thread Bruno Marchal


On 18 May 2015, at 18:46, John Clark wrote:


On Mon, May 18, 2015  Bruno Marchal marc...@ulb.ac.be wrote:

 Well then let's make this simple, just use your patented way to  
make calculations without using matter or energy or any of the laws  
of physics and tell me what the factors of 3*2^916773 +1 and 19249 ×  
2^13018586 + 1 are.


 Opportunist fallacy.

Fallacy my ass! Science demands evidence, if somebody claims they  
can cure cancer the claim is not enough, even a detailed description  
of how they intend to cure cancer is not enough, they must actually  
cure cancer. And so I don't want to hear any more about how you can  
make a calculation without using matter or energy or any of the laws  
of physics, I want you to actually do it. Just do it and you've won  
the argument.


Straw man fallacy. Nobody can do a physical computation outside of a  
physical reality.
The question is not about doing a computation, but about the existence  
of computation in the block-mind offered by the (sigma_1) arithmetical  
reality, which provably emulates all computations, but obviously not in  
a physical, or locally reproducible, way. Indeed the physical will  
emerge from those computations, already there in the block-mind or  
block-computer-science reality.







 if you agree that 2+2=4, and if you use the standard  
definition, then you can prove that a tiny part of the standard  
model of arithmetic runs all computations.


 The word 'run' involves changes in physical quantities like  
position and time. And what sort of thing are you running these  
calculations on?


 No: 'run' is defined mathematically, without any reference to  
physics.


Yes, so I guess you retract your previous comment and now realize  
that you can't run all computations or run any computation at all  
without making use of the physical.



 ?

!



 The set of all true statements is contained within the set of  
all statements, the trick is to separate the true from the false.


 We cannot separate them mechanically, but we can separate them  
mathematically,


 Wow, that is wonderful news! Since you know how to separate truth  
from falsehood mathematically, you know if the Goldbach conjecture is in  
the set of all true statements or in the set of all false  
statements, and thus you have won the argument. Ah, but by the way,  
which is it?


 To separate mathematically does not mean to separate effectively.

'Effectively' means 'in such a manner as to achieve a desired result',


With CT, it means computably.


so if you desire to separate all true statements from all false  
statements and can do it, but not do it effectively, then you can do  
it but you cannot do it. Do you think maybe, just maybe, there might be  
something a bit wrong with that?


Then you defend intuitionism, and we are out of computationalism. I  
have explained that there is no possible effective way to separate the  
code of total and strictly partial programs, but to have Church's thesis  
we need an enumeration of all (strictly or not) partial computable  
functions, which will mix the total and strictly partial functions in  
a non-computable, non-effective way. Actually, I gave you other  
arguments, but you have never answered them, so I am not sure all this  
is not, like in step 3, pure rhetorical hand-waving.
In this case, you abandon the excluded middle principle, which is in  
comp, by definition, as you need it to have the classical Church  
thesis. Most theorems in theoretical computer science are not  
constructive, as in the usual math, and in computer science many of  
them are provably necessarily non-constructive (unlike the usual math,  
where we don't know, in most cases).


Bruno





  John K Clark





http://iridia.ulb.ac.be/~marchal/





Re: Reconciling Random Neuron Firings and Fading Qualia

2015-05-19 Thread Bruno Marchal


On 18 May 2015, at 22:50, Stathis Papaioannou wrote:




On Tuesday, May 19, 2015, meekerdb meeke...@verizon.net wrote:
On 5/18/2015 10:22 AM, Bruno Marchal wrote:


What about Mister D? During his conference his brain completely  
melted down, disappeared, say, in this thought experience, but a  
lucky cosmic ray, by pure chance, will activate the exact motor  
neurons, so that he will pursue his conference as if nothing  
happened. Once, an auditor interrupted him and asked a question,  
but Mister D was so lucky that at that moment the lucky ray sent,  
by pure chance, the right activation of the motor nerves. Note  
that Mister D has no inputs, no cortex, no limbic system, no  
cerebral stem, as the lucky rays activate only the motor nerves.


Is he a zombie? In this case the empty brain is as good as a  
normal brain. For the right 3p behavior, you need just the right  
impulses at the muscles.


What if the brain does not melt down, but gets dissociated and  
manages a dream, unrelated to the conference? Who would be Mister D  
from his (?) perspective?


It strikes me that these arguments based on extreme improbabilities  
are worthless. What difference would it make whether Mister D is  
conscious or not in a hypothetical that is so improbable as to  
never happen in the history of the universe? We could accept either  
answer. It's like reasoning, 'If pigs could fly, then...'


There is a difference in kind, not degree, between the impossible  
and the highly improbable. In ordinary life we can take them as  
equivalent, and do so multiple times a day without thinking about  
it, but not in philosophical discussions such as these.


Agreed. It is the difference between reasoning in a theory, and  
deriving things from counterfactuals, which is very hard. In 'if pigs  
could fly ...', you need to revise the concept of pig to give sense to  
the premise, and here, that would ask for many arbitrary decisions.  
But in the thought experiment that we discuss, we need only to take  
the usual hypothesis (comp) seriously enough. There are no  
counterfactuals at this level, even if the discussion is about their  
role at the object level.


Bruno





--
Stathis Papaioannou



http://iridia.ulb.ac.be/~marchal/





Re: Reconciling Random Neuron Firings and Fading Qualia

2015-05-19 Thread Bruno Marchal


On 18 May 2015, at 22:47, Stathis Papaioannou wrote:




On Tuesday, May 19, 2015, Bruno Marchal marc...@ulb.ac.be wrote:

On 18 May 2015, at 17:05, Stathis Papaioannou wrote:




On Tuesday, May 19, 2015, Bruno Marchal marc...@ulb.ac.be wrote:

On 16 May 2015, at 07:10, Stathis Papaioannou wrote:






On 13 May 2015, at 11:59 am, Jason Resch jasonre...@gmail.com  
wrote:


Chalmers's fading qualia argument shows that if replacing a  
biological neuron with a functionally equivalent silicon neuron  
changed conscious perception, then it would lead to an absurdity,  
either:
1. qualia fade/change as silicon neurons gradually replace the  
biological ones, leading to a case where the qualia are  
completely out of touch with the functional state of the brain.

or
2. the replacement eventually leads to a sudden and complete loss  
of all qualia, but this suggests a single neuron, or even a few  
molecules of that neuron, when substituted, somehow completely  
determines the presence of qualia


His argument is convincing, but what happens when we replace  
neurons not with functionally identical ones, but with neurons  
that fire according to an RNG? In all but one case, the random  
firings of the neurons will result in completely different  
behaviors, but what about that one (immensely rare) case where the  
random neuron firings (by chance) equal the firing patterns of  
the substituted neurons?


In this case, behavior as observed from the outside is identical.  
Brain patterns and activity are similar, but according to  
computationalism the consciousness is different, or perhaps a  
zombie (if all neurons are replaced with randomly firing neurons).  
Presume that the activity of neurons in the visual cortex is  
required for visual qualia, and that all neurons in the visual  
cortex are replaced with randomly firing neurons, which by chance  
mimic the behavior of neurons when viewing an apple.


Is this not an example of fading qualia, or qualia desynchronized  
from the brain state? Would this person feel that they are blind,  
or lack visual qualia, all the while not being able to express  
their deficiency? I used to think, when Searle argued this exact  
same thing would occur when substituting functionally identical  
biological neurons with artificial neurons, that it was completely  
ridiculous, for there would be no room in the functionally  
equivalent brain to support thoughts such as 'help! I can't see,  
I am blind!', for the information content in the brain is  
identical when the neurons are functionally identical.


But then how does this reconcile with fading qualia as the result  
of substituting randomly firing neurons? The computations are not  
the same, so presumably the consciousness is not the same. But  
also, the information content does not support knowing/believing/ 
expressing/thinking something is wrong. If anything, the  
information content of this random brain is much less, but it  
seems the result is something where the qualia are out of sync  
with the global state of the brain. Can anyone shed some clarity  
on what they think happens, and how to explain it in the rare  
case of luckily working randomly firing neurons, when only a  
partial substitution of the neurons in a brain is performed?


So Jason, are you still convinced that the random neurons would  
not be conscious? If you are, you are putting the cart before the  
horse. The fading qualia argument makes the case that any process  
preserving function also preserves consciousness. Any process;  
that computations are one such process is fortuitous.


But the random neurons do not preserve function, nor does the  
movie. OK?


I don't see why you're so sure about this. Function is preserved  
while the randomness corresponds to normal activity, then it all  
falls apart. If by some miracle it continued then the random brain  
is as good as a normal brain, and I'd say yes to the doctor  
offering me such a brain. If you don't think that counts as  
computation, OK - but it would still be conscious.


What about Mister D? During his conference his brain completely  
melted down, disappeared, say, in this thought experience, but a  
lucky cosmic ray, by pure chance, will activate the exact motor  
neurons, so that he will pursue his conference as if nothing  
happened. Once, an auditor interrupted him and asked a question, but  
Mister D was so lucky that at that moment the lucky ray sent, by  
pure chance, the right activation of the motor nerves. Note that  
Mister D has no inputs, no cortex, no limbic system, no cerebral  
stem, as the lucky rays activate only the motor nerves.


Is he a zombie? In this case the empty brain is as good as a normal  
brain. For the right 3p behavior, you need just the right impulses at  
the muscles.


He's not a zombie, because if only part of his brain melted he would  
be a partial zombie.


Then all dreams supervene on the empty brain. But then 'supervene'  
seems 

Subsidies for fossil fuel - $5.3 trillion/year

2015-05-19 Thread LizR
http://www.salon.com/2015/05/18/big_oils_astronomical_hand_out_fossil_fuels_receive_5_3_trillion_in_global_subsidies_each_year/

Of course some of these are hidden costs like cleaning up after them, but
even so the G-20 nations give them an estimated $88 billion / year.

Gravy train ahoy!



Re: My comments on The Movie Graph Argument Revisited by Russell Standish

2015-05-19 Thread LizR
On 20 May 2015 at 04:41, John Clark johnkcl...@gmail.com wrote:

 On Tue, May 19, 2015 Bruno Marchal marc...@ulb.ac.be wrote:

  Well then let's make this simple, just use your patented way to
 make calculations without using matter or energy or any of the laws of
 physics and tell me what the factors of 3*2^916773 +1 and 19249 ×
 2^13018586 + 1 are.


 Opportunist fallacy.

Fallacy my ass! Science demands evidence, if somebody claims they
 can cure cancer the claim is not enough, even a detailed description of how
 they intend to cure cancer is not enough, they must actually cure cancer.
 And so I don't want to hear any more about how you can make a calculation
 without using matter or energy or any of the laws of physics, I want you to
 actually do it. Just do it and you've won the argument.

   Straw man fallacy.


 I thought it was the 'opportunist fallacy'. Let's get our fallacies
 straight around here.


I wouldn't be too proud of being called out on multiple fallacies. Bruno's
just pointing out that you aren't addressing the issue - which is true, as
far as I can see. To paraphrase someone, all you need to do is to
successfully address the issue, and you've won the argument.



 Nobody can do a physical computation out of a physical reality.


 Nobody has ever been able to perform a computation of ANY sort without
 matter that obeys the laws of physics and nobody has even come close,
 nobody has ever come within a billion light years of being able to do it.

 Yes, that is definitely a straw man. It's like saying there can't be laws
of physics because no one can make them operate without using them, or
maybe 'just shut up and calculate'.



Re: Reconciling Random Neuron Firings and Fading Qualia

2015-05-19 Thread LizR
I am a strange loop.

Sorry, that should read 'fruit'.

(I will leave it as an exercise for the reader which word is best replaced
with 'fruit'.)



Re: Physicists Are Philosophers, Too

2015-05-19 Thread spudboy100 via Everything List
Actually the name of the tune was 'The Future', not everybody knows. Except me, 
apparently.

Sent from AOL Mobile Mail


-Original Message-
From: LizR lizj...@gmail.com
To: everything-list everything-list@googlegroups.com
Sent: Tue, May 19, 2015 07:18 PM
Subject: Re: Physicists Are Philosophers, Too



Yeah, he's good.

I guess everybody knows that...



Re: Reconciling Random Neuron Firings and Fading Qualia

2015-05-19 Thread Terren Suydam
On Tue, May 19, 2015 at 5:09 PM, meekerdb meeke...@verizon.net wrote:

  On 5/19/2015 11:47 AM, Terren Suydam wrote:

 While I applaud IIT because it seems to be the first theory of
 consciousness that takes information architecture seriously (and thus
 situating theoretical considerations in a holistic rather than reductionist
 context) and to make predictions based on that, I agree with Aaronson's
 criticisms of it - namely, that IIT predicts that certain classes of
 computational systems that we intuitively would fail to see as conscious
 get measures of consciousness potentially higher than for human brains.

  One key feature of consciousness as we know it is *ongoing subjective
 experience*. So a question I keep coming back to in my own thinking is:
 what kind of information architecture lends itself to a flow of data, such
 that if we assume that consciousness is how data feels as it's processed,
 we might imagine it could correspond to ongoing subjective experience? It
 seems to me that such an architecture would have, at a bare minimum, its
 current state recursively fed back into itself to be processed in the next
 iteration. This happens in a trivial way in any processor chip (or lookup-
 table AI, for that matter). As such, there may be a very trivial sort of
 consciousness associated with a processor or lookup table, but this does
 not get us anywhere near understanding the richness of human consciousness.
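The "current state fed back into itself" loop described above can be sketched in a few lines; the numeric update rule here is an arbitrary stand-in chosen for illustration, not a model of any real architecture:

```python
# Sketch of a recursive (feedback) architecture: each step processes
# the fresh input TOGETHER with the previous state, so the system's
# own output re-enters it as input on the next iteration.

def step(state, sensory_input):
    # Next state blends the fed-back prior state with the new input.
    # The 0.7/0.3 weights are arbitrary illustration values.
    return 0.7 * state + 0.3 * sensory_input

state = 0.0
for x in [1.0, 0.5, -0.2]:   # a toy three-sample "sensory stream"
    state = step(state, x)   # recursion: state loops back into step()

print(round(state, 4))  # 0.192
```

A purely feed-forward system would map each input to an output independently; the single `state` variable threading through the loop is the minimal version of the ongoing, history-carrying process being discussed.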


 I think you need to consider what would be the benefit of this recursion.
 How could it be naturally selected? Jeff Hawkins' idea is that the brain
 continually tries to anticipate, at the perceptual level and even in lower
 layers of the cerebral cortex. Then signals that don't match the
 prediction get broadcast more widely at the next higher level, where they
 may have been anticipated by other neurons. At the highest level (he says
 there are six in the cortex, as I recall) signals spread to language and
 visual modules and one becomes aware of them, or they 'spring to mind'.
 This would have the advantage of directing computational resources to that
 which is novel, while leaving familiar things to learned responses. To
 this I would add that the novel/conscious experience is given some value,
 e.g. emotional weight, which makes it more or less strongly remembered. And
 of course it isn't remembered like a recording; it's synopsized in terms of
 its connection to other remembered events. This memory is needed for
 learning from experience.

 Brent


The biggest benefit I can think of right now would be that the chaotic
dynamics involved with a recursive architecture would allow for a much more
dynamic and volatile range of possible behaviors, and less predictability -
certainly a trait that would get selected for among prey. A feed-forward
architecture OTOH is much more linear and would be easy prey by comparison.
A nervous system kept on the edge of chaos (as in chaos theory) would
settle into attractors of behavior, and these can change on a dime. I have
often thought of Jung's archetypes as exactly that - strange attractors
that arise in given contexts.
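The edge-of-chaos behavior invoked above is conventionally illustrated with the logistic map; this sketch (parameter values chosen only for illustration) contrasts a periodic regime, where an attractor absorbs a small perturbation, with a chaotic regime, where the same perturbation is amplified into unpredictability:

```python
# The logistic map x -> r*x*(1-x): for r = 3.2 trajectories settle
# onto a periodic attractor; for r = 3.9 they are chaotic, so nearby
# starting points diverge -- behavior becomes effectively unpredictable.

def max_separation(r, x0, y0, n=50):
    """Largest gap reached between two trajectories started at x0, y0."""
    x, y, worst = x0, y0, 0.0
    for _ in range(n):
        x = r * x * (1.0 - x)
        y = r * y * (1.0 - y)
        worst = max(worst, abs(x - y))
    return worst

# Same tiny initial difference (0.001) in both regimes:
print(max_separation(3.2, 0.200, 0.201))  # stays small: the attractor absorbs it
print(max_separation(3.9, 0.200, 0.201))  # grows to order 1: chaos amplifies it
```

The second regime is the "can change on a dime" property: two nearly identical nervous-system states can lead to very different behaviors, which is what makes the organism hard for a predator to predict.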

Hawkins' ideas are great and not mutually exclusive with a feedback model
like the above. I would say that what you described of his model can
explain some aspects of attention but would fail to explain ongoing
subjective experience.

Terren



Re: Physicists Are Philosophers, Too

2015-05-19 Thread LizR
I was making a teensy little Leonard Cohen joke. Having to explain jokes
kills them but just so you know...(what everybody knows...)

http://en.wikipedia.org/wiki/Everybody_Knows_(Leonard_Cohen_song)



Re: Physicists Are Philosophers, Too

2015-05-19 Thread LizR
Yeah, he's good.

I guess everybody knows that...



Re: Reconciling Random Neuron Firings and Fading Qualia

2015-05-19 Thread LizR
It's sometimes important for animals to be unpredictable in their
interactions with their own species, especially social animals like humans.
(Although even flies have unpredictable behaviour to avoid predators,
especially ones carrying rolled up magazines.)



Re: Reconciling Random Neuron Firings and Fading Qualia

2015-05-19 Thread Jason Resch
On Tue, May 19, 2015 at 12:54 AM, Stathis Papaioannou stath...@gmail.com
wrote:

 On 19 May 2015 at 11:05, Jason Resch jasonre...@gmail.com wrote:
 
 
  On Mon, May 18, 2015 at 10:05 AM, Stathis Papaioannou 
 stath...@gmail.com
  wrote:
 
 
 
  On Tuesday, May 19, 2015, Bruno Marchal marc...@ulb.ac.be wrote:
 
 
  On 16 May 2015, at 07:10, Stathis Papaioannou wrote:
 
 
 
 
 
  On 13 May 2015, at 11:59 am, Jason Resch jasonre...@gmail.com wrote:
 
  Chalmers' fading qualia argument shows that if replacing a biological
  neuron with a functionally equivalent silicon neuron changed conscious
  perception, then it would lead to an absurdity, either:
  1. qualia fade/change as silicon neurons gradually replace the
 biological
  ones, leading to a case where the qualia are completely out of
 touch
  with the functional state of the brain.
  or
  2. the replacement eventually leads to a sudden and complete loss of
 all
  qualia, but this suggests a single neuron, or even a few molecules of
 that
  neuron, when substituted, somehow completely determine the presence of
  qualia.
 
  His argument is convincing, but what happens when we replace neurons
 not
  with functionally identical ones, but with neurons that fire according
 to a
  RNG. In all but 1 case, the random firings of the neurons will result
 in
  completely different behaviors, but what about that 1 (immensely rare)
 case
  where the random neuron firings (by chance) equal the firing patterns
 of the
  substituted neurons.
 
  In this case, behavior as observed from the outside is identical. Brain
  patterns and activity are similar, but according to computationalism
 the
  consciousness is different, or perhaps a zombie (if all neurons are
 replaced
  with random firing neurons). Presume that the activity of neurons in
 the
  visual cortex is required for visual qualia, and that all neurons in
 the
  visual cortex are replaced with random firing neurons, which by chance,
  mimic the behavior of neurons when viewing an apple.
 
  Is this not an example of fading qualia, or qualia desynchronized from
  the brain state? Would this person feel that they are blind, or lack
 visual
  qualia, all the while not being able to express their deficiency? I
 used to
  think when Searle argued this exact same thing would occur when
 substituted
  functionally identical biological neurons with artificial neurons that
 it
  was completely ridiculous, for there would be no room in the
 functionally
  equivalent brain to support thoughts such as help! I can't see, I am
  blind! for the information content in the brain is identical when the
  neurons are functionally identical.
 
  But then how does this reconcile with fading qualia as the result of
  substituting randomly firing neurons? The computations are not the
 same, so
  presumably the consciousness is not the same. But also, the information
  content does not support knowing/believing/expressing/thinking
 something is
  wrong. If anything, the information content of this random brain is
 much
  less, but it seems the result is something where the qualia are out of
 sync
  with the global state of the brain. Can anyone else here shed some
 clarity
  on what they think happens, and how to explain it in the rare case of
  luckily working randomly firing neurons, when only partial
 substitutions of
  the neurons in a brain are performed?
 
 
  So Jason, are you still convinced that the random neurons would not be
  conscious? If you are, you are putting the cart before the horse. The
 fading
  qualia argument makes the case that any process preserving function
 also
  preserves consciousness. Any process; that computations are one such
 process
  is fortuitous.
 
 
  But the random neurons do not preserve function, nor does the movie.
  OK?
 
 
  I don't see why you're so sure about this. Function is preserved while
 the
  randomness corresponds to normal activity, then it all falls apart. If
 by
  some miracle it continued then the random brain is as good as a normal
  brain, and I'd say yes to the doctor offering me such a brain. If you
  don't think that counts as computation, OK - but it would still be
  conscious.
 
 
 
  My third-person function would indeed be preserved by such a Miracle
  Brain, but I would strongly doubt it would preserve my first-person.
 Why do
  you think that the random firing neurons preserve consciousness? Do you
  think they would still preserve consciousness if they became physically
  separated from each other yet maintained the same firing patterns?

 I think the random neurons would preserve consciousness because
 otherwise you could make a partial zombie, as you pointed out. There
 is nothing incoherent about randomly firing neurons sustaining
 consciousness, but there is about partial zombies. If the neurons
 became physically separated but maintained the same firing pattern,
 including motor neurons, then yes, that would also preserve
 consciousness.



I see. Yes the 

Re: Reconciling Random Neuron Firings and Fading Qualia

2015-05-19 Thread Jason Resch
On Mon, May 18, 2015 at 11:54 PM, meekerdb meeke...@verizon.net wrote:

  On 5/18/2015 9:45 PM, Jason Resch wrote:

 Not necessarily, just as an actor may not be conscious in the same way
 as me. But I suspect the Blockhead would be conscious; the intuition
 that a lookup table can't be conscious is like the intuition that an
 electric circuit can't be conscious.


  I don't see an equivalency between those intuitions. A lookup table has
 a bounded and very low degree of computational complexity: all answers to
 all queries are answered in constant time.

  While the table itself may have an arbitrarily high information content,
 what in the software of the lookup table program is there to
 appreciate/understand/know that information?


 What is there is there in a neural network?


A computational state containing significant information content.
Integrated Information Theory makes some strides in explaining this, I think:

http://en.wikipedia.org/wiki/Integrated_information_theory
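For what it's worth, here is a deliberately crude sketch (mine, and far simpler than Tononi's actual phi): one intuition behind integration is that the whole system's state carries information beyond what its parts carry separately, e.g. the mutual information between two halves of the system.

```python
# Deliberately crude sketch, not Tononi's phi: treat "integration" as the
# information the whole system's state carries beyond its two halves taken
# separately, i.e. the mutual information between the halves.

from collections import Counter
from math import log2

def entropy(samples):
    n = len(samples)
    return -sum(c / n * log2(c / n) for c in Counter(samples).values())

def integration(states):
    """states: list of (left, right) joint samples of a two-part system."""
    left = [l for l, _ in states]
    right = [r for _, r in states]
    # H(L) + H(R) - H(L,R) is zero when the halves are independent
    return entropy(left) + entropy(right) - entropy(states)

independent = [(0, 0), (0, 1), (1, 0), (1, 1)]  # halves share no information
coupled = [(0, 0), (1, 1), (0, 0), (1, 1)]      # halves mirror each other

print(integration(independent))  # ~0: the parts carry everything
print(integration(coupled))      # positive: the whole exceeds its parts
```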

Jason



Re: Reconciling Random Neuron Firings and Fading Qualia

2015-05-19 Thread Jason Resch
On Tue, May 19, 2015 at 12:06 AM, Stathis Papaioannou stath...@gmail.com
wrote:

 On 19 May 2015 at 14:45, Jason Resch jasonre...@gmail.com wrote:
 
 
  On Mon, May 18, 2015 at 9:21 PM, Stathis Papaioannou stath...@gmail.com
 
  wrote:
 
  On 19 May 2015 at 11:02, Jason Resch jasonre...@gmail.com wrote:
 
   I think you're not taking into account the level of the functional
   substitution. Of course functionally equivalent silicon and
 functionally
   equivalent neurons can (under functionalism) both instantiate the same
   consciousness. But a calculator computing 2+3 cannot substitute for a
   human
   brain computing 2+3 and produce the same consciousness.
 
  In a gradual replacement the substitution must obviously be at a level
  sufficient to maintain the function of the whole brain. Sticking a
  calculator in it won't work.
 
   Do you think a Blockhead that was functionally equivalent to you (it
   could
   fool all your friends and family in a Turing test scenario into
 thinking
   it
   was in fact you) would be conscious in the same way as you?
 
  Not necessarily, just as an actor may not be conscious in the same way
  as me. But I suspect the Blockhead would be conscious; the intuition
  that a lookup table can't be conscious is like the intuition that an
  electric circuit can't be conscious.
 
 
  I don't see an equivalency between those intuitions. A lookup table has a
  bounded and very low degree of computational complexity: all answers to
 all
  queries are answered in constant time.
 
  While the table itself may have an arbitrarily high information content,
  what in the software of the lookup table program is there to
  appreciate/understand/know that information?

 Understanding emerges from the fact that the lookup table is immensely
 large. It could be wrong, but I don't think it is obviously less
 plausible than understanding emerging from a Turing machine made of
 tin cans.



The lookup table is intelligent or at least offers the appearance of
intelligence, but it takes the maximum possible advantage of the space-time
trade off: http://en.wikipedia.org/wiki/Space–time_tradeoff

The tin-can Turing machine is unbounded in its potential computational
complexity; there's no reason to be a bio- or silico-chauvinist against it.
However, by definition, a lookup table has near-zero computational
complexity and no retained state. Does an ant trained to perform the lookup
table's operation become more aware when placed in a vast library than when
placed on a small bookshelf, to perform the identical function?
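To make the trade-off concrete, here's a minimal sketch (my example, using Collatz step counts as an arbitrary stand-in function): the same input-output behavior realized either by computing, with many intermediate states, or by a one-step lookup over a precomputed domain.

```python
# Sketch of the space-time trade-off: the same input-output function realized
# either by computing (many steps, tiny state) or by a precomputed lookup
# table (constant time, large state). Collatz step counts are an arbitrary
# stand-in function chosen for illustration.

def collatz_steps_compute(n):
    """Count steps to reach 1 by actually running the computation."""
    steps = 0
    while n != 1:
        n = 3 * n + 1 if n % 2 else n // 2
        steps += 1
    return steps

# Spend space once, up front; afterwards every query is a single lookup with
# no intermediate computational states.
TABLE = {n: collatz_steps_compute(n) for n in range(1, 10_000)}

def collatz_steps_lookup(n):
    return TABLE[n]

assert collatz_steps_lookup(27) == collatz_steps_compute(27)
```

Externally the two functions are indistinguishable on the covered domain; the question in this thread is whether anything hangs on the difference in their internal process.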

Jason



Re: My comments on The Movie Graph Argument Revisited by Russell Standish

2015-05-19 Thread John Clark
On Tue, May 19, 2015 Bruno Marchal marc...@ulb.ac.be wrote:

 Well then let's make this simple, just use your patented way to
 make calculations without using matter or energy or any of the laws of
 physics and tell me what the factors of 3*2^916773 +1 and 19249 ×
 2^13018586 + 1 are.


 Opportunist fallacy.

Fallacy my ass! Science demands evidence, if somebody claims they can
 cure cancer the claim is not enough, even a detailed description of how
 they intend to cure cancer is not enough, they must actually cure cancer.
 And so I don't want to hear any more about how you can make a calculation
 without using matter or energy or any of the laws of physics, I want you to
 actually do it. Just do it and you've won the argument.

   Straw man fallacy.



I thought it was the opportunist fallacy. Let's get our fallacies
straight around here.


 Nobody can do a physical computation out of a physical reality.


Nobody has ever been able to perform a computation of ANY sort without
matter that obeys the laws of physics and nobody has even come close,
nobody has ever come within a billion light years of being able to do it.


  The question is not about doing a computation, but about the existence
  of computation in the block-mind offered by the (sigma_1) arithmetical
 reality, which provably emulates all computation,


It can't emulate a damn thing unless the block-mind offered by the
(sigma_1) exists and if it does then produce it and have it calculate 1+1.
Do that and you will have won the argument.


  but obviously not in a physical, or locally reproducible way.


Or to say the same thing with different words, not in a way that
corresponds with reality, or to use yet different words, not in a way that
isn't Bullshit and a complete waste of time.


  Indeed the physical will emerge from those computations, already there
 in the block-mind or block-computer science reality.


Then do so! Starting from pure mathematics tell us why it would be a
logical absurdity for the proton to be anything other than 1836 times as
massive as the electron and for the neutron to be about 1839 times as
the electron. Explain what's so special about those two numbers, do that
and you'll have won the argument and as I've said I will personally pay for
your first class airline ticket to Stockholm for the ceremonies.

 Effectively means in such a manner as to achieve a desired result,


  With CT, it means computably.


It means a mechanical method, and nobody has ever made one single
calculation using a non-effective method; in fact the ONLY thing anybody
has ever produced with a non-effective method is randomness.  That is the
sum total of non-effective method's accomplishments to date, the only thing
we know for sure it can do.


  you abandon the excluded middle principle, which is in comp,


I don't abandon the excluded middle and I don't care if it's in comp or
not because comp bores me. And don't tell me it's just short for
computationalism because I know what computationalism is and whatever the
hell comp is it's not that.

  John K Clark



Re: Reconciling Random Neuron Firings and Fading Qualia

2015-05-19 Thread Terren Suydam
While I applaud IIT because it seems to be the first theory of
consciousness that takes information architecture seriously (thus situating
theoretical considerations in a holistic rather than reductionist context)
and makes predictions on that basis, I agree with Aaronson's
criticisms of it - namely, that IIT predicts that certain classes of
computational systems that we would intuitively not regard as conscious
get measures of consciousness potentially higher than those of human brains.

One key feature of consciousness as we know it is *ongoing subjective
experience**. *So a question I keep coming back to in my own thinking is,
what kind of information architecture lends itself to a flow of data, such
that if we assume that consciousness is how data feels as it's processed,
we might imagine it could correspond to ongoing subjective experience?  It
seems to me that such an architecture would have, at a bare minimum, its
current state recursively fed back into itself to be processed in the next
iteration. This happens in a trivial way in any processor chip (or lookup
table AI for that matter). As such, there may be a very trivial sort of
consciousness associated with a processor or lookup table, but this does
not get us anywhere near understanding the richness of human consciousness.

An architecture that supports that richness - the subjective experience,
IOW, of an embodied sensing agent - would involve that recursion but at a
holistic level. The entire system, potentially, including the system's
informational representations of sensory data (whatever form that took)
would be involved in that feedback loop. So the phi of IIT has a role here,
as the processor/lookup table architecture has a low phi.

What is missing from phi is a measure of recursion - how the modules of a
system feed back in such a way as to create a systemic, recursive processing
loop. My hunch is that this would address Aaronson's objections, as brains
would score high on this measure but the systems that Aaronson complains
about, such as systems that do nothing but apply a low-density
parity-check code or other simple transformations of their input data,
would score low due to lack of recursion.
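A minimal sketch of the architectural contrast drawn above (assumptions and toy dynamics mine): a feed-forward pass depends only on the current stimulus, while a recurrent system feeds its whole state back into the next iteration, so identical stimuli meet a different internal context each time.

```python
# Toy contrast, assumptions mine: a feed-forward pass maps stimulus to output
# with no memory, while a recurrent system feeds its own state back into the
# next iteration, so the same stimulus meets a different internal context
# each time.

def feed_forward(stimulus, layers):
    x = stimulus
    for f in layers:
        x = f(x)  # depends only on the current stimulus
    return x

def recurrent(stimuli, step, state=0.0):
    history = []
    for s in stimuli:
        state = step(s, state)  # fresh input mixed with the fed-back state
        history.append(state)
    return history

layers = [lambda x: 2 * x, lambda x: x + 1]
step = lambda s, state: 0.5 * state + s  # leaky integrator as the feedback loop

print(feed_forward(3, layers))     # identical stimulus -> identical output
print(recurrent([1, 1, 1], step))  # identical stimuli -> evolving responses
```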

Terren

On Tue, May 19, 2015 at 12:23 PM, meekerdb meeke...@verizon.net wrote:

  On 5/19/2015 6:47 AM, Jason Resch wrote:



 On Mon, May 18, 2015 at 11:54 PM, meekerdb meeke...@verizon.net wrote:

  On 5/18/2015 9:45 PM, Jason Resch wrote:

 Not necessarily, just as an actor may not be conscious in the same way
 as me. But I suspect the Blockhead would be conscious; the intuition
 that a lookup table can't be conscious is like the intuition that an
 electric circuit can't be conscious.


  I don't see an equivalency between those intuitions. A lookup table has
 a bounded and very low degree of computational complexity: all answers to
 all queries are answered in constant time.

  While the table itself may have an arbitrarily high information
 content, what in the software of the lookup table program is there to
 appreciate/understand/know that information?


 What is there is there in a neural network?


  A computational state containing significant information content.


 A lookup table has significant information content.

   Integrated Information Theory makes some strides in explaining this, I
  think:

  http://en.wikipedia.org/wiki/Integrated_information_theory


 http://www.scottaaronson.com/blog/?p=1799

 Brent





Re: Physicists Are Philosophers, Too

2015-05-19 Thread spudboy100 via Everything List

God wants communists preaching communism whilst glomming cash hypocritically
God wants spitting at the Christian God whilst kissing up to the Islamist one 
The great silence of the neo-sovets!
God wants to write The Turning Away, in angst over the win of the horrible 
Margaret Thatcher
Waaahh!
God wants his son to be a rioter and hurl down bricks onto police, like a good 
soviet does in a capitalist hell.
It's 1926 again - zombie zombie zombie-time for a national strike, my Red 
brothers!' What! a Red choking on a Cranberry? 
Zombie Zombie Zombie-get the Jews-they oppose our Islamist chums! We Reds can 
march in jackboots too! So stylish!
God wants you to  Run Rabbit Run, back to your champagne communism  limousine


Better by far than my crap about Waters' narcissism is the Canadian Leonard
Cohen's:


Give me back my broken night
My mirrored room, my secret life
It's lonely here,
There's no one left to torture
Give me absolute control
Over every living soul
And lie beside me, baby,
That's an order!

Give me crack and anal sex
Take the only tree that's left
And stuff it up the hole
In your culture
Give me back the berlin wall
Give me stalin and st paul
I've seen the future, brother
It is murder.

Things are going to slide, slide in all directions
Won't be nothing
Nothing you can measure anymore
The blizzard, the blizzard of the world
Has crossed the threshold
And it has overturned
The order of the soul
When they said repent repent
I wonder what they meant
When they said repent repent
I wonder what they meant
When they said repent repent
I wonder what they meant

You don't know me from the wind
You never will, you never did
I'm the little jew
Who wrote the bible
I've seen the nations rise and fall
I've heard their stories, heard them all
But love's the only engine of survival
Your servant here, he has been told
To say it clear, to say it cold:
It's over, it ain't going
Any further
And now the wheels of heaven stop
You feel the devil's riding crop
Get ready for the future:
It is murder.

Things are going to slide 

There'll be the breaking of the ancient
Western code
Your private life will suddenly explode
There'll be phantoms
There'll be fires on the road
And the white man dancing
You'll see a woman
Hanging upside down
Her features covered by her fallen gown
And all the lousy little poets
Coming round
Tryin' to sound like charlie manson
And the white man dancin'

Give me back the berlin wall
Give me stalin and st paul
Give me christ
Or give me hiroshima
Destroy another fetus now
We don't like children anyhow
I've seen the future, baby
It is murder.

Things are going to slide 

When they said repent repent





-Original Message-
From: LizR lizj...@gmail.com
To: everything-list everything-list@googlegroups.com
Sent: Mon, May 18, 2015 8:01 pm
Subject: Re: Physicists Are Philosophers, Too


 
  
I feel a strange desire to quote Roger Waters.

What God wants God gets God help us all
What God wants God gets (repeated)   
The kid in the corner looked at the priest   
And fingered his pale blue Japanese guitar   
The priest said:   
God wants goodness   
God wants light   
God wants mayhem   
God wants a clean fight   
What God wants God gets   
Don't look so surprised   
It's only dogma   
The alien prophet cried   
The beetle and the springbok   
Took the Bible from its hook   
The monkey in the corner   
Wrote the lesson in his book   
What God wants God gets God help us all   
God wants peace   
God wants war   
God wants famine   
God wants chain stores   
What God wants God gets   
God wants sedition   
God wants sex   
God wants freedom   
God wants semtex   
What God wants God gets   
Don't look so surprised
I'm only joking   
The alien comic lied   
The jackass and hyena   
Took the feather from its hook   
The monkey in the corner   
Wrote the joke down in his book   
What God wants God gets   
God wants borders   
God wants crack   
God wants rainfall   
God wants wetbacks   
What God wants God gets   
God wants voodoo   
God wants shrines   
God wants law   
God wants organised crime   
God wants crusade   
God wants jihad
God wants good
God wants bad
What God wants God Gets

Re: Reconciling Random Neuron Firings and Fading Qualia

2015-05-19 Thread meekerdb

On 5/19/2015 11:47 AM, Terren Suydam wrote:
While I applaud IIT because it seems to be the first theory of consciousness that takes 
information architecture seriously (and thus situating theoretical considerations in a 
holistic rather than reductionist context) and to make predictions based on that, I 
agree with Aaronson's criticisms of it - namely, that IIT predicts that certain classes 
of computational systems that we intuitively would fail to see as conscious get measures 
of consciousness potentially higher than for human brains.


One key feature of consciousness as we know it is /ongoing subjective experience//. /So 
a question I keep coming back to in my own thinking is, what kind of information 
architecture lends itself to a flow of data, such that if we assume that consciousness 
is how data feels as it's processed, we might imagine it could correspond to ongoing 
subjective experience?  It seems to me that such an architecture would have, at a bare 
minimum, its current state recursively fed back into itself to be processed in the next 
iteration. This happens in a trivial way in any processor chip (or lookup table AI for 
that matter). As such, there may be a very trivial sort of consciousness associated with 
a processor or lookup table, but this does not get us anywhere near understanding the 
richness of human consciousness.


I think you need to consider what would be the benefit of this recursion.  How could it be 
naturally selected?  Jeff Hawkins' idea is that the brain continually tries to anticipate, 
at the perceptual level and even in lower layers of the cerebral cortex.  Then signals 
that don't match the prediction get broadcast more widely at the next higher level where 
they may have been anticipated by other neurons.  At the highest level (he says there are 
six in the cortex as I recall) signals spread to language and visual modules and one 
becomes aware of them or they spring to mind. This would have the advantage of 
directing computational resources to that which is novel, while leaving familiar things to 
learned responses.  To this I would add that the novel/conscious experience is given some 
value, e.g. emotional weight, which makes it more or less strongly remembered. And of 
course it isn't remembered like a recording; it's synopsized in terms of its connection to 
other remembered events. This memory is needed for learning from experience.
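A toy rendering of the hierarchy described above (my simplification, not Hawkins' actual model): each level predicts its input and absorbs signals that match; only the mismatches are broadcast upward, so familiar input is handled low in the hierarchy while novel input climbs toward the top.

```python
# Toy rendering of the predictive hierarchy as summarized above (my
# simplification, not Hawkins' actual model): each level predicts its input
# and absorbs matches; mismatches are broadcast to the next level up.

def run_hierarchy(signal, predictions, tolerance=0.1):
    """predictions[i] is what level i anticipates; return the level reached."""
    for level, expected in enumerate(predictions):
        if abs(signal - expected) <= tolerance:
            return level  # matched the prediction: absorbed here
        # surprising residual: broadcast to the next higher level
    return len(predictions)  # reached the top: novel, it "springs to mind"

predictions = [0.5, 0.7, 0.9]  # what levels 0..2 each anticipate

print(run_hierarchy(0.52, predictions))  # familiar: absorbed at the bottom
print(run_hierarchy(0.33, predictions))  # novel: climbs past every level
```

This captures the resource-allocation point: only the unpredicted residue consumes attention at the higher, "conscious" levels.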


Brent



An architecture that supports that richness - the subjective experience, IOW, of an 
embodied sensing agent - would involve that recursion but at a holistic level. The 
entire system, potentially, including the system's informational representations of 
sensory data (whatever form that took) would be involved in that feedback loop. So the 
phi of IIT has a role here, as the processor/lookup table architecture has a low phi.


What is missing from phi is a measure of recursion - how the modules of a system 
feedback in such a way as to create a systemic, recursive processing loop. My hunch is 
that this would address Aaronson's objections, as brains would score high on this 
measure but the systems that Aaronson complains about, such as systems that do nothing 
but apply a low-density parity-check code, or other simple transformations of their 
input data would score low due to lack of recursion.


Terren

On Tue, May 19, 2015 at 12:23 PM, meekerdb meeke...@verizon.net 
mailto:meeke...@verizon.net wrote:


On 5/19/2015 6:47 AM, Jason Resch wrote:



On Mon, May 18, 2015 at 11:54 PM, meekerdb meeke...@verizon.net
mailto:meeke...@verizon.net wrote:

On 5/18/2015 9:45 PM, Jason Resch wrote:


Not necessarily, just as an actor may not be conscious in the same 
way
as me. But I suspect the Blockhead would be conscious; the intuition
that a lookup table can't be conscious is like the intuition that an
electric circuit can't be conscious.


I don't see an equivalency between those intuitions. A lookup table has 
a
bounded and very low degree of computational complexity: all answers to 
all
queries are answered in constant time.

While the table itself may have an arbitrarily high information 
content, what
in the software of the lookup table program is there to
appreciate/understand/know that information?


What is there is there in a neural network?


A computational state containing significant information content.


A lookup table has significant information content.


Integrated Information Theory makes some strides in explaining this, I think:

http://en.wikipedia.org/wiki/Integrated_information_theory


http://www.scottaaronson.com/blog/?p=1799

Brent
Re: Reconciling Random Neuron Firings and Fading Qualia

2015-05-19 Thread meekerdb

On 5/19/2015 6:47 AM, Jason Resch wrote:



On Mon, May 18, 2015 at 11:54 PM, meekerdb meeke...@verizon.net 
mailto:meeke...@verizon.net wrote:


On 5/18/2015 9:45 PM, Jason Resch wrote:


Not necessarily, just as an actor may not be conscious in the same way
as me. But I suspect the Blockhead would be conscious; the intuition
that a lookup table can't be conscious is like the intuition that an
electric circuit can't be conscious.


I don't see an equivalency between those intuitions. A lookup table has a 
bounded
and very low degree of computational complexity: all answers to all queries 
are
answered in constant time.

While the table itself may have an arbitrarily high information content, 
what in
the software of the lookup table program is there to 
appreciate/understand/know
that information?


What is there is there in a neural network?


A computational state containing significant information content.


A lookup table has significant information content.


Integrated Information Theory makes some strides in explaining this, I think:

http://en.wikipedia.org/wiki/Integrated_information_theory


http://www.scottaaronson.com/blog/?p=1799

Brent
