Hi Bruno et al.,

Once again we have come to grief on the old conflation.

(A) You speak of a universe _AS_ computation (described _as if_ it runs on 
some abstract mega-Turing machine).

(B) I speak of computation _OF_ laws of nature, by a computer made of 
natural material, where the laws of nature are those describing how the 
world appears to an observer within it.

Descriptions of (A) are not the same as (B). Only if you conflate (A) 
and (B) can you be confused about this. Until you see the difference, 
you will continue to find my position difficult. My proof relates to 
the real world of computing, (B). Your position, (A), can be 100% right, 
very interesting, and 100% irrelevant to the task at hand. Whatever 
difficulties you and others have with this, they can be sorted out by 
understanding the difference between (A) and (B). Laws of nature in (A) 
are laws of structure. Laws in (B) are laws of appearances (to an 
observer), like F = ma.

This issue is EMPIRICALLY PROVEN in domain (B). The argument is OVER.

You can't have it both ways. 

Either 
(1) (B)-style computing of the laws of appearances of cognition is 
LITERALLY cognition... in which case (B)-style computing of the laws of 
appearances of combustion must also be LITERALLY combustion.

OR

(2) (B)-style computing of the laws of appearances of ANYTHING is NOT 
LITERALLY that thing, ANYWHERE, and NEVER WAS.


This is because computing combustion doesn't produce flames. I could 
encode a representation in flames. SO WHAT! It's the same bunch of atoms 
(from the periodic table of elements) dancing about. They don't know what 
representing is going on! What magic changes things merely because 
representing happens?

At the same time, I would also say that the kind of computing referred 
to by (A) _IS_ flame. But that's not a model of flame; it's the flame. 
You can 'act as if' the flame is running some kind of non-existent 
computer, but that does NOT make it (B).

Expectation (1) is the universal position of all AGI workers. That 
presupposition is FALSE. When neuroscience finds this out (I have a 
paper in already), the entire AGI community is going to be told they are 
not investing in AGI. They are only doing complex AI with predictable 
limits.

Real AGI will be done by replicating the physics of cognition. I give 
it a year or so.

Colin

