Re: Any human who has played a bit of Arimaa can beat a computer hands down.

2013-03-28 Thread Telmo Menezes
On Wed, Mar 27, 2013 at 6:29 PM, Craig Weinberg whatsons...@gmail.com wrote:



 On Wednesday, March 27, 2013 1:03:27 PM UTC-4, telmo_menezes wrote:

 Hi Craig,


 On Wed, Mar 27, 2013 at 4:03 PM, Craig Weinberg whats...@gmail.com wrote:

 From the Quora: http://www.quora.com/Board-Games/What-are-some-fun-games-to-play-on-an-8x8-Checkerboard-besides-chess-checkers

 This is interesting because I think it shows the weakness of the
 one-dimensional view of intelligence as computation. Whether a program can
 be designed to win or not is beside the point,


 That's not really fair, is it?


 Why not?


How else can I counter your argument against intelligence as computation if
I am not allowed to use computation? My example would not prove that it's
what the brain does, but it would prove that it can be. You are arguing
that it cannot be.






 as it is the difference between this game and chess which hints at the
 differences between bottom-up mechanism and top-down intentionality


 I see what you're saying but I disagree. It just highlights the weak
 points of tree-search approaches like min-max. What I gather from what
 happens when one plays Arimaa (or Go): due to combinatorial explosion,
 players (even humans) play quite far away from the perfect game(s). The way
 we deal with combinatorial explosion is by mapping the game into something
 more abstract.


 How do you know that any such mapping is going on? It seems like begging
 the question.


I don't know. I have a strong intuition in its favor for a few reasons,
scientific and otherwise. The non-scientific one is introspection. I try to
observe my own thought process and I think I use such mappings. The
scientific reason is that this type of approach has been
used successfully to tackle AI problems that could not be solved with
classical search algorithms.
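A rough sketch of what such a mapping could look like in code (purely illustrative; the feature names, weights, and position encoding below are invented for this sketch, not taken from any actual Arimaa program): instead of searching deeper, the player summarises a concrete position by a few abstract features and judges it directly.

```python
# Hypothetical sketch of "mapping the game into something more abstract":
# a position is reduced to a handful of features, then scored linearly.
# Everything here (features, weights, the position encoding) is invented
# for illustration only.

def features(position):
    """Map a concrete position to a small abstract description."""
    return {
        "material":  position["my_pieces"] - position["their_pieces"],
        "mobility":  position["my_moves"] - position["their_moves"],
        "trap_risk": -position["pieces_near_traps"],
    }

WEIGHTS = {"material": 1.0, "mobility": 0.1, "trap_risk": 0.5}

def evaluate(position):
    """One number summarising the abstract view of the position."""
    return sum(WEIGHTS[name] * value
               for name, value in features(position).items())

# A made-up position: slightly ahead in material and mobility,
# with two pieces exposed near traps.
pos = {"my_pieces": 14, "their_pieces": 12, "my_moves": 40,
       "their_moves": 35, "pieces_near_traps": 2}
print(evaluate(pos))
```

The point of the sketch is only that such an evaluation is cheap: no tree is expanded at all, which is what makes the mapping a way around combinatorial explosion rather than a victim of it.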


 Put another way, if there were top-down non-computational effort going
 into the game play, why would it look any different than what we see?


 Our brain seems to be quite good at generating such mappings. We do it
 with chess too, I'm sure. Notice that, when two humans play Arimaa, both
 can count on each other's inabilities to play close to the perfect game. As
 with games with incomplete information, like Poker, part of it is modelling
 the opponent. Perhaps not surprisingly, artificial neural networks are
 quite good at producing useful mappings of this sort, and at predicting
 behaviours with incomplete information. Great progress has been achieved
 lately with deep learning. All this fits bottom-up mechanism and
 intelligence as computation. It doesn't prove anything because I can't
 attach the code for an excellent Arimaa player but, on the other hand, if I
 did I'm sure you'd come up with something else. :)


 Except that playing Arimaa is not particularly taxing on the human player.
 There is no suggestion of any complex algorithms and mappings; rather, it
 seems to me, there is simplicity.


The mappings don't have to be complex at all (in terms of leading to heavy
computations). That's precisely their point.


 The human finds no fundamental difference in difficulty between
 Arimaa and Chess, yet there is a clear difference for the computer.


Yes, the classical chess algorithms are clearly not how we do it. I agree
with you there.


 Again, if this does not indicate a flaw in the model of intelligence as
 purely an assembly of logical parts, what actually would? In what way is
 the Strong AI position falsifiable?


I agree; I don't think it's falsifiable, and thus not a scientific
hypothesis in the Popperian sense. I see it more as an ambitious goal that
nobody even knows is achievable. You might be right, even if we manage
to create an AI that is indistinguishable from human intelligence. I prefer
to believe in Strong AI because I'm interested in its consequences and in
the intellectual challenge of achieving it. That's all, to be honest.

On the other hand, your hypothesis is also not falsifiable.





 A lot of progress has been made in Poker, both in mapping the game to
 something more abstract and modelling opponents:
 http://poker.cs.ualberta.ca/

 Cheers,
 Telmo.

 PS: The expression "brute force" annoys me a bit. It implies that
 traditional chess algorithms blindly search the entire space. That's just
 not true: they do clever tree-pruning and use heuristics. Still, they are
 indeed defeated by combinatorial explosion.
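To make that PS concrete, here is a toy comparison (a hypothetical uniform game tree with random leaf scores, not any real chess or Arimaa engine): alpha-beta pruning returns exactly the minimax value while visiting far fewer nodes, yet the node count still grows exponentially with the branching factor, which is the combinatorial explosion in question.

```python
# Toy illustration: minimax vs alpha-beta on a hypothetical uniform game
# tree (random leaf scores). Not code from any real chess program.
import random

def make_tree(depth, branching, rng):
    """Uniform game tree: internal nodes are lists, leaves are scores."""
    if depth == 0:
        return rng.random()
    return [make_tree(depth - 1, branching, rng) for _ in range(branching)]

def minimax(node, maximize, counter):
    """Plain minimax; visits every node in the tree."""
    counter[0] += 1
    if not isinstance(node, list):
        return node
    children = [minimax(c, not maximize, counter) for c in node]
    return max(children) if maximize else min(children)

def alphabeta(node, alpha, beta, maximize, counter):
    """Minimax with alpha-beta cutoffs: same value, fewer nodes visited."""
    counter[0] += 1
    if not isinstance(node, list):
        return node
    if maximize:
        v = float("-inf")
        for c in node:
            v = max(v, alphabeta(c, alpha, beta, False, counter))
            alpha = max(alpha, v)
            if alpha >= beta:   # cutoff: opponent will never allow this line
                break
        return v
    v = float("inf")
    for c in node:
        v = min(v, alphabeta(c, alpha, beta, True, counter))
        beta = min(beta, v)
        if beta <= alpha:
            break
    return v

tree = make_tree(6, 5, random.Random(1))   # depth 6, five moves per turn
n_mm, n_ab = [0], [0]
v1 = minimax(tree, True, n_mm)
v2 = alphabeta(tree, float("-inf"), float("inf"), True, n_ab)
print(v1 == v2, n_mm[0], n_ab[0])   # identical value, far fewer nodes
```

With a branching factor of five, pruning already helps a lot; replace five with Arimaa's tens of thousands of moves per turn and even a well-pruned tree is out of reach, which is exactly the point of the PS.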


 It was a generalization, but I understood what they meant. The important
 thing is that the approach of computation is fundamentally passive and
 eliminative. Games which do not hinge on human intolerance for tedious
 recursive processes are going to be easier for computers because machines
 have no capacity for intolerance. The more tedious the better. Games which
 de-emphasize this as a criterion for success are less 

Re: Any human who has played a bit of Arimaa can beat a computer hands down.

2013-03-28 Thread Craig Weinberg


On Thursday, March 28, 2013 5:52:04 AM UTC-4, telmo_menezes wrote:




 On Wed, Mar 27, 2013 at 6:29 PM, Craig Weinberg whats...@gmail.com wrote:



 On Wednesday, March 27, 2013 1:03:27 PM UTC-4, telmo_menezes wrote:

 Hi Craig,


 On Wed, Mar 27, 2013 at 4:03 PM, Craig Weinberg whats...@gmail.com wrote:

 From the Quora: http://www.quora.com/Board-Games/What-are-some-fun-games-to-play-on-an-8x8-Checkerboard-besides-chess-checkers

 This is interesting because I think it shows the weakness of the 
 one-dimensional view of intelligence as computation. Whether a program can 
 be designed to win or not is beside the point,


 That's not really fair, is it?


 Why not?


 How else can I counter your argument against intelligence as computation 
 if I am not allowed to use computation? My example would not prove that 
 it's what the brain does, but it would prove that it can be. You are 
 arguing that it cannot be.


I'm arguing that a screw is not the same thing as a nail because when you 
hammer a screw it doesn't go in as easily as a nail and when you use a 
screwdriver on a nail it doesn't go in at all. Sometimes the hammer is a 
better tool and sometimes the driver is. As humans, we have a great hammer 
and a decent screwdriver. A computer can't hammer anything, but it has a 
power screwdriver with a potentially infinite set of tips.
 

  

  

  

 as it is the difference between this game and chess which hints at the 
 differences between bottom-up mechanism and top-down intentionality


 I see what you're saying but I disagree. It just highlights the weak 
 points of tree-search approaches like min-max. What I gather from what 
 happens when one plays Arimaa (or Go): due to combinatorial explosion, 
 players (even human) play quite far away from the perfect game(s). The way 
 we deal with combinatorial explosion is by mapping the game into something 
 more abstract. 


 How do you know that any such mapping is going on? It seems like begging 
 the question.


 I don't know. I have a strong intuition in its favor for a few reasons, 
 scientific and otherwise.


Have you tried thinking about it another way? Where does 'mapping' come 
from? Can you begin mapping without already having a map?
 

 The non-scientific one is introspection. I try to observe my own thought 
 process and I think I use such mappings. 


Maybe you do. Maybe a lot of people do. I don't think that I do though. I 
think that a game can be played directly without abstracting it into 
another game.

The scientific reason is that this type of approach has been 
 used successfully to tackle AI problems that could not be solved with 
 classical search algorithms.


I don't doubt that this game is likely to be solved eventually, maybe even 
soon, but the fact remains that it exposes some fundamentally different 
aesthetics between computation and intelligence. This is impressive to me 
because any game is already hugely biased in favor of computation. A game 
is ideal for reduction to a set of logical rules; its turn play is already 
a recursive enumeration. A game is already a computer program. Even so, we 
can see that it is possible to use a game to bypass computational values of 
generic, unconscious repetition, and hint at something completely 
different and opposite.
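The claim that "a game is already a computer program" can be made literal with a toy example (simple Nim is used here purely for brevity; it is not a game from the thread). The rules are a function, and turn play is a recursive enumeration over them:

```python
# Toy illustration: a game reduced to its logical rules is directly
# executable. Simple Nim: remove 1-3 stones per turn; whoever takes the
# last stone wins. Turn play *is* the recursion.

def moves(stones):
    """The rules: legal amounts of stones to remove."""
    return [n for n in (1, 2, 3) if n <= stones]

def first_player_wins(stones):
    """Recursive enumeration of turn play: you win if some move leaves
    the opponent in a losing position."""
    return any(not first_player_wins(stones - n) for n in moves(stones))

# Winning starting sizes up to 12: every count not divisible by 4.
print([s for s in range(1, 13) if first_player_wins(s)])
```

Nim is small enough for the enumeration to finish; the thread's point is that for Arimaa the same recursion is well defined but computationally hopeless.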
 

  

 Put another way, if there were top-down non-computational effort going 
 into the game play, why would it look any different than what we see?
  

 Our brain seems to be quite good at generating such mappings. We do it 
 with chess too, I'm sure. Notice that, when two humans play Arimaa, both 
 can count on each other's inabilities to play close to the perfect game. As 
 with games with incomplete information, like Poker, part of it is modelling 
 the opponent. Perhaps not surprisingly, artificial neural networks are 
 quite good at producing useful mappings of this sort, and on predicting 
 behaviours with incomplete information. Great progress has been achieved 
 lately with deep learning. All this fits bottom-up mechanism and 
 intelligence as computation. It doesn't prove anything because I can't 
 attach the code for an excellent Arimaa player but, on the other hand, if I 
 did I'm sure you'd come up with something else. :)


 Except that playing Arimaa is not particularly taxing on the human 
 player. There is no suggestion of any complex algorithms and mappings, 
 rather it seems to me, there is simplicity. 


 The mappings don't have to be complex at all (in terms of leading to heavy 
 computations). That's precisely their point.


Then shouldn't a powerful computer be able to quickly deduce the winning 
Arimaa mappings?
 

  

 The human finds no fundamental difference in difficulty between 
 Arimaa and Chess, yet there is a clear difference for the computer. 


 Yes, the classical chess algorithms are 

Re: Any human who has played a bit of Arimaa can beat a computer hands down.

2013-03-28 Thread Craig Weinberg


On Wednesday, March 27, 2013 9:32:46 PM UTC-4, stathisp wrote:



 On Thu, Mar 28, 2013 at 2:03 AM, Craig Weinberg whats...@gmail.com wrote:

 From the Quora 
 http://www.quora.com/Board-Games/What-are-some-fun-games-to-play-on-an-8x8-Checkerboard-besides-chess-checkers

 This is interesting because I think it shows the weakness of the 
 one-dimensional view of intelligence as computation. Whether a program can 
 be designed to win or not is beside the point, as it is the difference 
 between this game and chess which hints at the differences between 
 bottom-up mechanism and top-down intentionality.

 In Arimaa, the rules invite personal preference as a spontaneous 
 initiative from the start - thus it does not make the reductionist 
 assumption of intelligence as a statistical extraction or 'best choice'. 
 Game play here begins intuitively and strategy is more proprietary-private 
 than generic-public. In addition, the interaction of the pieces and 
 inclusion of the four trap squares suggests a game geography which is 
 rooted more in space-time sensibilities than in pure arithmetic like chess. 
 I'm not sure which aspects are the most relevant in the difference between 
 how a computer performs, but it seems likely to me that the difference is 
 specifically *not* related to computing power. To wit:

 There are tens of thousands of possibilities in each turn in Arimaa. 
 The 'brute force approach' to programming Arimaa fails miserably. Any human 
 who has played a bit of Arimaa can beat a computer hands down.
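The scale behind "tens of thousands of possibilities in each turn" can be checked with back-of-the-envelope arithmetic. The branching figures below are rough, commonly cited estimates (roughly 35 legal moves in a typical chess position versus on the order of 17,000 legal four-step turns in Arimaa), not numbers from the thread:

```python
# Rough, commonly cited branching-factor estimates (not exact figures):
CHESS_BRANCHING = 35        # legal moves in a typical chess position
ARIMAA_BRANCHING = 17_000   # legal four-step turns in a typical Arimaa position

def tree_nodes(branching, depth):
    """Positions a full-width search must examine to look `depth` turns ahead."""
    return sum(branching ** d for d in range(depth + 1))

for depth in (2, 4):
    chess = tree_nodes(CHESS_BRANCHING, depth)
    arimaa = tree_nodes(ARIMAA_BRANCHING, depth)
    print(f"depth {depth}: chess {chess:.2e} vs arimaa {arimaa:.2e} "
          f"({arimaa / chess:.0e}x larger)")
```

Even four turns of lookahead in Arimaa dwarfs the equivalent chess tree by many orders of magnitude, which is why the failure is plausibly one of search scale rather than of computing power per se.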

 This to me suggests that Arimaa does a good job of sniffing out the 
 general area where top-down consciousness differs fundamentally from bottom 
 up simulated intelligence.


 If this game shows where top-down consciousness differs fundamentally 
 from bottom up simulated intelligence would you accept a computer beating 
 a human at Arimaa as evidence that computers had the top-down 
 consciousness? 


No, that's why I wrote "Whether a program can be designed to win or not is 
beside the point." You may be able to build a screwdriver that is big 
enough to use as a hammer in some situations, but that doesn't mean that it 
is an actual claw hammer.

Would you accept an AI matching a human in any task whatsoever as evidence 
 of the computer having consciousness? If not, why bother pointing out 
 computers' failings if you believe they are a priori incapable of 
 consciousness or even intelligence?


I point out the computers' failings to help discern the difference between 
consciousness and simulated intelligence. I'm interested in that because I 
have a hypothesis about what awareness actually is, and that hypothesis 
indicates that awareness cannot necessarily be assembled from the outside. 
I think computers are great, I use them all day every day by choice and by 
profession, but that doesn't make them the same thing as a person, or a 
proto-person. Not only are they not that, they are, in my hypothesis, the 
precise opposite of that. Machines are impersonal. Trying to build a person 
from impersonal parts is like trying to find some combination of walking 
north and south which will eventually take you east.

Thanks,
Craig
 



 -- 
 Stathis Papaioannou 

-- 
You received this message because you are subscribed to the Google Groups 
Everything List group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to everything-list+unsubscr...@googlegroups.com.
To post to this group, send email to everything-list@googlegroups.com.
Visit this group at http://groups.google.com/group/everything-list?hl=en.
For more options, visit https://groups.google.com/groups/opt_out.




Re: Losing Control

2013-03-28 Thread Stathis Papaioannou
On Wed, Mar 27, 2013 at 5:51 AM, Craig Weinberg whatsons...@gmail.com wrote:

 If the right atoms are placed in the right configuration then life or
 consciousness occurs.


 You don't know that, you just assume it. It's like saying that if the right
 cars are placed in the right configuration around the right buildings, then
 New York City occurs. It may not work that way at all. You act as if we have
 made living organisms from scratch already.


 Your theory does not really add anything: what would it look like if the
 atoms or the configuration or the universe lack the essential ingredient you
 claim but had every other physical property unchanged?


 If by essential ingredient you mean the capacity for awareness, then atoms
 could not 'look like' anything at all. They couldn't seem like anything,
 they couldn't be defined by any property that can be observed in any way.
 Unless I'm missing something...what do you think that things look like if
 nothing can possibly ever see them?

I find it difficult to understand how you could be thinking about
these things. If I put atoms in the configuration of a duck but, as you
claim, I don't get a duck, I must have missed something out. For if I
didn't miss anything out it would be a duck, right? So
perhaps the atoms in the duck I made lack the capacity of awareness.
How is that possible, given that all atoms ultimately came from the
same source? What if I made my duck from atoms sourced from steak,
which I know had the capacity for life at one point? How could I tell
the difference between life-affirming atoms and other atoms? Why is
there no difference in activity between natural and synthetic peptides
such as insulin when used medically if the synthetic one lacks
something?

You make detailed pronouncements about "sense" and "intention" but
you fail to propose obvious experimental tests for your ideas. A
scientist tries to test his hypothesis by thinking of ways to falsify
it.

 The person would function identically by any test but you would claim he
 is not only not conscious but also not living? How would you decide this and
 how do you know that the people around you haven't been replaced with these
 unfortunate creatures?


 Lets say you get a call from a lawyer that your rich uncle has died and
 there is a video will. When you go to the reading of the will, there is a TV
 monitor and your relatives and behind you is a large mirror. As you watch
 the video of your uncle reading his will, you begin to wonder if he is
 actually still alive and watching everyone from behind the mirror. Maybe he
 looks at you and says your name, and looks at others as he reads off their
 ten names.

 He could have had a computer generate different video variations of where he
 looks which are played according to how the attorney fills out a seating
 chart on the computer. He could still be dead.

 Either way though, it doesn't matter whether you think he is dead or not.
 The reality is that he actually is dead or alive and it makes no difference
 whether his video convinces you one way or another.

Why do you keep raising this example of videos? You interact with the
image in the video, for example by asking it to raise its hand in the
air.

 The whole zombie argument is bogus because you don't know what our actual
 sensitivities tell us subconsciously about other creatures. We can be
 fooled, but that doesn't mean that our only way of feeling that we are in
 the presence of another animal-level consciousness is by some kind of
 logical testing process. You underestimate consciousness. Logic is a much
 weaker epistemology than aesthetics, feeling, and intuition - even though
 all three of them can be misleading.

Can you tell for sure if someone other than yourself is a zombie? It
seems you do believe zombies are possible, since you think that
passing the Turing test is no guarantee that the entity is conscious.
A zombie is an entity that passes the Turing test but is not
conscious. So I ask you again, how can you be sure that people other
than you are not zombies?

 so the atoms in this artificial cell, being the same type in the same
 configuration, would follow the same laws of physics and behave in a similar
 manner.


 It probably will just be a dead cell.

Which means you must have left something out in making the cell the
way you did, which brings to mind a whole lot of experimental tests to
verify this.

 Unless there is some essential non-physical ingredient which is missing
 how could it be otherwise?


 It's not a non-physical ingredient, it is experience through time.
 Experience is physical.

Which brings to mind a whole lot of experimental tests to verify this.

 Rome itself would not have played the same role if a dust mote had got
 into Julius Caesar's eye, so obviously a copy of Rome would not play out the
 same role. You can't hold the copy to higher standards than the original.


 Then if a nanoscopic dust mote gets into your artificial cell then it 

Re: 'Brain Waves' Challenge Area-Specific View of Brain Activity

2013-03-28 Thread Bruno Marchal


On 26 Mar 2013, at 18:19, meekerdb wrote:


On 3/26/2013 4:21 AM, Bruno Marchal wrote:
I can explain why if a machine can have experience and enough  
reflexivity, then the machine can already understand that she  
cannot justify rationally the presence of its experience. No  
machine, nor us, can ever see how that could be true. It *is* in  
the range of the non communicable.


If some aliens decide that we are not conscious, we will not find  
any test to prove them wrong.


And if we decide the Mars Rover is conscious, can any test prove us  
wrong?


Yes. But it is longer to explain than for comp. Strong AI is refutable  
in a weaker sense than comp. The refutations here are indirect and  
based on the acceptance of the classical theory of knowledge, that is  
S4 (not necessarily Theaetetus).




Or if Craig decides an atom is conscious, can any test prove him  
wrong?


A person can be conscious. What would it mean that an atom is  
conscious? What is an atom?





Which I think is John Clark's point: Consciousness is easy.   
Intelligence is hard.



Consciousness might be easier than intelligence, and certainly easier  
than matter. Consciousness is easy with UDA, when you get the difference  
between G and G*, and between Bp, Bp & p, Bp & Dt, etc. (AUDA).


Matter is more difficult. Today we have only the propositional  
observable.


Intelligence, in my opinion, is rather easy too. It is a question of  
abstract thermodynamics; intelligence is when you get enough heat  
while young, something like that. It is close to courage, and it is  
what makes competence possible.


Competence is the most difficult, as competences are distributed on a  
transfinite lattice of incomparable degrees. Some can require necessarily  
long work, and can have negative feedback on intelligence.


Bruno







Brent





http://iridia.ulb.ac.be/~marchal/







Re: 'Brain Waves' Challenge Area-Specific View of Brain Activity

2013-03-28 Thread Bruno Marchal


On 26 Mar 2013, at 18:33, meekerdb wrote:


On 3/26/2013 7:13 AM, Bruno Marchal wrote:
It is a bit what happens, please study the theory. Qualia are  
useful to accelerate information processing, and the integration of  
that processing in a person. And they are unavoidable for machines  
in rich and statistically stable universal relations with each  
others.


Can you describe exactly how they are unavoidable?


You need the theory, but in a nutshell, they are unavoidable because  
they are truths that the machine will discover when looking inward. They  
correspond to true facts, which are not sigma_1, but which nonetheless  
concern the machine (like having a local model, or being in some  
situation, etc.).








Specifically I wonder what constraints this puts on them.


They obey the two modal logic systems S4Grz1 and X1* minus X1, and  
their higher-order extensions.





Looked at from the aspect of engineering intelligence I would assume  
it would depend on sensor capabilities, i.e. that machines would  
primarily communicate about what they can both see.  But that  
doesn't account for humans who communicate a lot about what they feel.


Qualia do not need sensors conceptually, with comp, but in practice  
sensors are the simplest way to get qualia in accordance with the local  
universal neighbors. The theories mentioned above can explain well why  
qualia are non-communicable (in the sense of not being rationally  
justifiable by the machine), but also why we can still communicate about  
them, and project them on other machines or entities.


Bruno




http://iridia.ulb.ac.be/~marchal/







Re: 'Brain Waves' Challenge Area-Specific View of Brain Activity

2013-03-28 Thread Richard Ruquist
On Thu, Mar 28, 2013 at 10:52 AM, Bruno Marchal marc...@ulb.ac.be wrote:

 On 26 Mar 2013, at 18:19, meekerdb wrote:

 On 3/26/2013 4:21 AM, Bruno Marchal wrote:

 I can explain why if a machine can have experience and enough reflexivity,
 then the machine can already understand that she cannot justify rationally
 the presence of its experience. No machine, nor us, can ever see how that
 could be true. It *is* in the range of the non communicable.

 If some aliens decide that we are not conscious, we will not find any test
 to prove them wrong.


 And if we decide the Mars Rover is conscious, can any test prove us wrong?


 Yes. But it is longer to explain than for comp. Strong AI is refutable in a
 weaker sense than comp. The refutations here are indirect and based on the
 acceptance of the classical theory of knowledge, that is S4 (not necessarily
 Theaetetus).



 Or if Craig decides an atom is conscious, can any test prove him wrong?


 A person can be conscious. What would it mean that an atom is conscious?
 What is an atom?



Davies suggests that the threshold for consciousness based on the
Lloyd limit is the complexity of the human cell.



 Which I think is John Clark's point: Consciousness is easy.  Intelligence is
 hard.



 Consciousness might be more easy than intelligence, and certainly than
 matter. Consciousness is easy with UDA,  when you get the difference between
 both G and G*, and between Bp, Bp & p, Bp & Dt, etc. (AUDA).

 Matter is more difficult. Today we have only the propositional observable.

 Intelligence, in my opinion is rather easy too. It is a question of
 abstract thermodynamic, intelligence is when you get enough heat while
 young, something like that. It is close to courage, and it is what make
 competence possible.

 Competence is the most difficult, as they are distributed on transfinite
 lattice of incomparable degrees. Some can ask for necessary long work, and
 can have negative feedback on intelligence.

 Bruno






 Brent





 http://iridia.ulb.ac.be/~marchal/










Re: 'Brain Waves' Challenge Area-Specific View of Brain Activity

2013-03-28 Thread Bruno Marchal


On 28 Mar 2013, at 16:08, Richard Ruquist wrote:

On Thu, Mar 28, 2013 at 10:52 AM, Bruno Marchal marc...@ulb.ac.be  
wrote:


On 26 Mar 2013, at 18:19, meekerdb wrote:

On 3/26/2013 4:21 AM, Bruno Marchal wrote:

I can explain why if a machine can have experience and enough  
reflexivity,
then the machine can already understand that she cannot justify  
rationally
the presence of its experience. No machine, nor us, can ever see  
how that

could be true. It *is* in the range of the non communicable.

If some aliens decide that we are not conscious, we will not find  
any test

to prove them wrong.


And if we decide the Mars Rover is conscious, can any test prove us  
wrong?



Yes. But it is longer to explain than for comp. Strong AI is  
refutable in a
weaker sense than comp. The refutation here are indirect and based  
on the
acceptance of the classical tgeory of knowledge, that is S4 (not  
necessarily

Theaetetus).



Or if Craig decides an atom is conscious, can any test prove him  
wrong?



A person can be conscious. What would it mean that an atom is  
conscious?

What is an atom?




Davies suggests that the threshold for consciousness based on the
Lloyd limit is the complexity of the human cell.


In which physics? If he assumes comp, he must derive that physics  
first, to get valid consequences.

BTW I don't see the use of comp in your paper.

Now, I can accept that human cells have already some consciousness.  
Even bacteria. I dunno but I am open to the idea. Bacteria have  
already full Turing universality, and exploit it in complex genetic  
regulation control.


Comp is open to a strict Moore's law: the number of angels (or bit  
processes) that you can put on the head of a needle might be  
unbounded. Like Feynman said, there is plenty of room at the bottom.  
But we might have insuperable read and write problems. There might be  
computers into which we can upload our minds, but never come back.


Bruno










Which I think is John Clark's point: Consciousness is easy.   
Intelligence is

hard.



Consciousness might be more easy than intelligence, and certainly  
than
matter. Consciousness is easy with UDA,  when you get the  
difference between

both G and G*, and between Bp, Bp & p, Bp & Dt, etc. (AUDA).

Matter is more difficult. Today we have only the propositional  
observable.


Intelligence, in my opinion is rather easy too. It is a question of
abstract thermodynamic, intelligence is when you get enough heat  
while
young, something like that. It is close to courage, and it is what  
make

competence possible.

Competence is the most difficult, as they are distributed on  
transfinite
lattice of incomparable degrees. Some can ask for necessary long  
work, and

can have negative feedback on intelligence.

Bruno






Brent





http://iridia.ulb.ac.be/~marchal/











http://iridia.ulb.ac.be/~marchal/







Re: 'Brain Waves' Challenge Area-Specific View of Brain Activity

2013-03-28 Thread meekerdb

On 3/28/2013 7:52 AM, Bruno Marchal wrote:
Intelligence, in my opinion, is rather easy too. It is a question of abstract
thermodynamics; intelligence is when you get enough heat while young, something like
that. It is close to courage, and it is what makes competence possible.


??



Competence is the most difficult, as competences are distributed on a transfinite
lattice of incomparable degrees. Some can require necessarily long work, and can have
negative feedback on intelligence.


That sounds like a quibble.  Intelligence is usually just thought of as the ability to 
learn competence over a very general domain.


Brent





Re: 'Brain Waves' Challenge Area-Specific View of Brain Activity

2013-03-28 Thread Richard Ruquist
On Thu, Mar 28, 2013 at 1:37 PM, Bruno Marchal marc...@ulb.ac.be wrote:

 On 28 Mar 2013, at 16:08, Richard Ruquist wrote:

 On Thu, Mar 28, 2013 at 10:52 AM, Bruno Marchal marc...@ulb.ac.be wrote:


 On 26 Mar 2013, at 18:19, meekerdb wrote:

 On 3/26/2013 4:21 AM, Bruno Marchal wrote:

 I can explain why, if a machine can have experience and enough
 reflexivity, then the machine can already understand that she cannot
 justify rationally the presence of its experience. No machine, nor we,
 can ever see how that could be true. It *is* in the range of the
 non-communicable.

 If some aliens decide that we are not conscious, we will not find any
 test to prove them wrong.


 And if we decide the Mars Rover is conscious, can any test prove us wrong?


 Yes. But it is longer to explain than for comp. Strong AI is refutable
 in a weaker sense than comp. The refutations here are indirect and based
 on the acceptance of the classical theory of knowledge, that is S4 (not
 necessarily Theaetetus).



 Or if Craig decides an atom is conscious, can any test prove him wrong?


 A person can be conscious. What would it mean that an atom is conscious?
 What is an atom?



 Davies suggests that the threshold for consciousness based on the
 Lloyd limit is the complexity of the human cell.


 In which physics?

Holographic (Bekenstein bound) physics of 10^120 bits (the Lloyd limit)
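[As an order-of-magnitude aside: the figure above can be sanity-checked with a back-of-the-envelope Bekenstein-Hawking estimate for a Hubble-radius horizon. This is only a sketch with assumed round values for the Hubble constant and the Planck length; Lloyd's own derivation differs in detail, so landing within a couple of orders of magnitude of 10^120 is the most one should expect.]

```python
import math

# Back-of-the-envelope holographic bound for the observable universe.
# Assumed round values (not from the paper under discussion):
c = 2.998e8           # speed of light, m/s
H0 = 2.2e-18          # Hubble constant, 1/s (~68 km/s/Mpc)
l_planck = 1.616e-35  # Planck length, m

R = c / H0                     # Hubble radius, m
area = 4 * math.pi * R ** 2    # horizon area, m^2

# Bekenstein-Hawking entropy in bits: A / (4 * l_p^2 * ln 2)
bits = area / (4 * l_planck ** 2 * math.log(2))

print(f"~10^{int(math.log10(bits))} bits")  # prints: ~10^122 bits
```

This crude horizon estimate gives ~10^122 bits, the same ballpark as the 10^120 Lloyd limit quoted above.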

If he assumes comp, he must derive that physics first, to get valid consequences.

Davies does not assume comp. I thought I did in my paper.

 BTW I don't see the use of comp in your paper.

I certainly discuss physics derived from comp in my paper
(http://vixra.org/abs/1303.0194), while leaving out all the math
details, i.e. CY manifolds → math → mind/physics → matter.

Could you expand, when you have time, on how I do not use comp?
What I do is to place resource limits on comp
(10^120 bits for the universe and perhaps 10^1000 for the Metaverse).
Is that perhaps what you refer to?

Or is it the conjecture that CY manifolds are the comp machine,
one for the universe and another for the metaverse?
Thanks for reading the paper.
Richard


 Now, I can accept that human cells have already some consciousness. Even
 bacteria. I dunno but I am open to the idea. Bacteria have already full
 Turing universality, and exploit it in complex genetic regulation control.

 Comp is open with a strict Moore law: the number of angels (or bits of
 processing) that you can put on the tip of a needle might be unbounded. Like
 Feynman said, there is room at the bottom. But we might have insuperable
 read and write problems. There might be computers in which we can upload our
 minds, but never come back.

 Bruno








 Which I think is John Clark's point: Consciousness is easy. Intelligence is hard.



 Consciousness might be easier than intelligence, and certainly than
 matter. Consciousness is easy with UDA, when you get the difference between
 both G and G*, and between Bp, Bp & p, Bp & Dt, etc. (AUDA).
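[For readers unfamiliar with the notation: B is the provability box of the Gödel-Löb logic G, and G* is its "truth" extension, which adds the reflection schema Bp → p but is closed only under modus ponens. The variants come from strengthening B; a sketch of the standard definitions, with the labels following Marchal's usage and Dp abbreviating ¬B¬p:]

\begin{align*}
\text{K:}\quad   & B(p \to q) \to (Bp \to Bq) \\
\text{L\"ob:}\quad & B(Bp \to p) \to Bp \\[4pt]
\text{provable (communicable):}\quad & Bp \\
\text{knowable (Theaetetus):}\quad   & Bp \land p \quad \text{(yields an S4-type logic, in fact S4Grz)} \\
\text{observable:}\quad & Bp \land Dt, \quad \text{where } Dt \equiv \neg B\bot
\end{align*}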

 Matter is more difficult. Today we have only the propositional
 observable.

 Intelligence, in my opinion, is rather easy too. It is a question of
 abstract thermodynamics; intelligence is when you get enough heat while
 young, something like that. It is close to courage, and it is what makes
 competence possible.

 Competence is the most difficult, as competences are distributed on a
 transfinite lattice of incomparable degrees. Some can require necessarily
 long work, and can have negative feedback on intelligence.

 Bruno






 Brent


















Re: Free-Will discussion

2013-03-28 Thread Craig Weinberg


On Thursday, March 28, 2013 5:26:23 PM UTC-4, JohnM wrote:

 Stathis wrote:
 *I also have a very simple and straightforward idea of free will: I
 exercise my free will when I make a choice without being coerced*
 And how do you know that you are *not* coerced? Your mind works on both
 conscious and (sub-? un-? beyond-?) conscious arguments that 'influence'
 (nicer than 'coerce') your decision process. Then again, you may decide to
 'will' against your best (or not-so-best?) interest, for some reason. You
 may even misunderstand circumstances and use them wrongly.
 All such (and another 1000) may influence (coerce??) your free decision.
 Continuing your sentence:


It's true that there are influences outside of our personal range of 
consciousness which contribute to our personal intentions, but who is to 
say that these sub-conscious or super-conscious influences are not also 
*ourselves*? As our personal awareness blurs out into countless 
semi-conscious interactions where we increasingly blend into the public 
infinity and private eternity, who is to say that our personal will doesn't 
influence those outlying resources as much as they influence us?

 * ...I never said that the laws of physics deny the possibility of free 
 will,
 but free will is impossible if you define it in such a way as to be
 incompatible with the laws of physics or even with logic.*
 The laws of physics are our deduction from the so-far-observed incomplete
 circumstances - they don't allow or deny - maybe they explain, at the level
 of their compatibility. The impossibility of free will is not a no-no,
 unless it has been proven to be an existing(?) FACT (which I do not
 believe in).


Right on. I would further suggest however that free will doesn't exist in 
public physics, it insists through private physics.

Craig
 


 Logic is the ultimate human pretension, especially if not said 'what kind 
 of'. 

 John M






Re: My estimation of Daniel Dennett continues to improve

2013-03-28 Thread John Clark
On Tue, Mar 26, 2013 at 1:55 PM, meekerdb meeke...@verizon.net wrote:

 I exercise my free will when I make a choice without being coerced.


  If you alter your path to avoid walking face first into a brick wall,
 has the wall coerced you to do so, or more precisely, have the photons that
 entered your eye indicating the presence of the wall caused you to do so?
 If you wish to jump over a mountain, has gravity coerced you to stay where
 you are?

   No, I think coercion is influence by another's will


So if somebody else prevents me from doing what I want then I lack free
will, but if anything else prevents me I still have it; thus we are
entirely dependent not on ourselves but on other people for free will to be
meaningful, and on a desert island a man with free will would act and feel
exactly like a man without free will.

  John K Clark
