Re: [foar] Re: The real reasons we don’t have AGI yet

2012-10-09 Thread meekerdb

On 10/8/2012 3:49 PM, Stephen P. King wrote:

Hi Russell,

Question: Why has little if any thought been given in AGI to self-modeling and some 
capacity to track the model of self under the evolutionary transformations? 


It's probably because AIs have not needed to operate in environments where they need a 
self-model.  They are not members of a social community.  Some simpler systems, like Mars 
Rovers, have limited self-models (where am I, what's my battery charge,...) that they need 
to perform their functions, but they don't have general intelligence (yet).
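
A minimal sketch of the kind of limited self-model described above; the fields, 
numbers and threshold are invented for illustration, not taken from any actual 
rover software:

# A rover-style self-model: only the state the machine needs for its job,
# with no social or reflective component. All values are made up.
from dataclasses import dataclass

@dataclass
class RoverSelfModel:
    x: float            # estimated position (arbitrary units)
    y: float
    battery_pct: float  # remaining charge, 0-100

    def can_drive(self, distance: float, cost_per_unit: float = 0.5) -> bool:
        # Crude check: enough charge to cover the distance and keep a reserve?
        return self.battery_pct - distance * cost_per_unit > 10.0

rover = RoverSelfModel(x=12.0, y=-3.5, battery_pct=42.0)
print(rover.can_drive(70.0))   # False: the trip would cut into the reserve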


Brent




Re: The real reasons we don’t have AGI yet

2012-10-09 Thread Evgenii Rudnyi

On 08.10.2012 20:45 Alberto G. Corona said the following:

Deutsch is right about the need to advance in Popperian
epistemology, which ultimately is evolutionary epistemology.


You may want to read Three Worlds by Karl Popper. Then you will see where 
Popperian epistemology can evolve.


“To sum up, we arrive at the following picture of the universe.  There 
is the physical universe, world 1, with its most important sub-universe, 
that of the living organisms.  World 2, the world of conscious 
experience, emerges as an evolutionary product from the world of 
organisms.  World 3, the world of the products of the human mind, 
emerges as an evolutionary product from world 2.”


“The feedback effect between world 3 and world 2 is of particular 
importance. Our minds are the creators of world 3; but world 3 in its 
turn not only informs our minds, but largely creates them. The very idea 
of a self depends on world 3 theories, especially upon a theory of time 
which underlies the identity of the self, the self of yesterday, of 
today, and of tomorrow. The learning of a language, which is a world 3 
object, is itself partly a creative act and partly a feedback effect; 
and the full consciousness of self is anchored in our human language.”


Evgenii
--

http://blog.rudnyi.ru/2012/06/three-worlds.html




Re: The real reasons we don’t have AGI yet

2012-10-09 Thread Alberto G. Corona
2012/10/9 Evgenii Rudnyi use...@rudnyi.ru:
 On 08.10.2012 20:45 Alberto G. Corona said the following:

 Deutsch is right about the need to advance in Popperian
 epistemology, which ultimately is evolutionary epistemology.


 You may want to read Three Worlds by Karl Popper. Then you will see where
 Popperian epistemology can evolve.

 “To sum up, we arrive at the following picture of the universe.  There is
 the physical universe, world 1, with its most important sub-universe, that
 of the living organisms.  World 2, the world of conscious experience,
 emerges as an evolutionary product from the world of organisms.  World 3,
 the world of the products of the human mind, emerges as an evolutionary
 product from world 2.”

...and the perception of world 1 is not an objective image of physical
reality, but a result of particular adaptive needs, interests and
purposes. This perception of world 1 is not part of world 1, but an
evolutionary product that is part of world 2. This is very important.

That means that any perception has a purpose from the beginning, and
forming an objective image of physical reality neither is nor can be a
valid purpose, because it is ultimately purposeless and, additionally,
there are infinitely many objective versions of it (do we want to
perceive radiation? neutrinos? atoms? only macroscopic things?).

Therefore general intelligence, which is part of world 3, has to work
with the impulses, perceptions and purposes evolved in world 2.
Therefore we humans are limited by that (as is any artificial case, as I
will show). But this does not mean that the human mind has to take this
limit as an absolute limit on its reasoning. It can ask itself about the
nature of these limitations and reach different reactions to them: one
answer is to adopt a particular belief that matches these limitations,
negating them (nihilism); or to try to transcend them (gnosticism: my
limitations are false impositions and I have to search for my true
self); or to try to know them (realism).

That is the difference between a tool, like an ordinary Mars rover
that sends data to the earth, and a person or, for that matter, an AGI
device sent to Mars with the memories of life on earth erased and with
an impulse to inform the earth by radio waves: while the ordinary rover
will be incapable of improving its own program in a qualitative way, the
man or the AGI will first think about how to improve his task (because
it is a pleasure for him and his main purpose). To do so he will
have to ask himself about the motives of the people that sent him. To
do so he will ask himself about the nature of his work, to try to know
the intentions of the senders. So he will study the nature of the
physical medium to better know the purposes of the senders (while
actively working on the task, because it is a pleasure for him).
He will also have impulses for self-preservation, and
curiosity in order to improve self-preservation. To predict the future
he will go up the chain of causation; he will enter into
philosophical questions at some level and adopt a certain worldview
among the three mentioned above, beyond the strict limitations
of his task. Otherwise he would not be an AGI or human.

General intelligence, by definition, cannot be limited if there is
enough time and resources. So the true test of AGI would be a
philosophical questioning about existence, purpose, perception. That
includes moral questions that can be asked thanks to the freedom of
alternatives between the different purposes that the AGI has: for
example, whether the rover would weight self-preservation more or less
heavily against task realization ("Do I go to this dangerous crater
with its interesting ice-looking rocks, or do I spend the time pointing
my panels at the sun?").
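
A minimal sketch of that weighting; the scoring scheme and all numbers are
assumptions made up for illustration:

# Illustrative only: one way an agent might weigh task value against
# self-preservation risk, as in the "dangerous crater vs. solar panels" example.
def choose_action(options, preservation_weight=1.5):
    # Pick the option with the best value-minus-weighted-risk score.
    def score(opt):
        return opt["scientific_value"] - preservation_weight * opt["risk"]
    return max(options, key=score)

options = [
    {"name": "explore crater",  "scientific_value": 9.0, "risk": 7.0},
    {"name": "recharge panels", "scientific_value": 2.0, "risk": 0.5},
]
print(choose_action(options)["name"])                           # recharge panels
print(choose_action(options, preservation_weight=0.2)["name"])  # explore crater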

Note that a set of responses to the questions:
1. What is the ultimate meaning of your activity? -It's my pleasure
to search for interesting things and to send the information to the earth.
2. What is interesting for you? -Interesting is what I find
interesting by my own program, which I neither can nor want to know.
3. Don't you realize that if you adopt this attitude then you cannot
improve your task that way? -Don't waste my time. Bye.

would reveal a worldview (the first) that is a hint of general
intelligence, despite the fact that he apparently refuses to answer
philosophical questions.


 “The feedback effect between world 3 and world 2 is of particular
 importance. Our minds are the creators of world 3; but world 3 in its turn
 not only informs our minds, but largely creates them. The very idea of a
 self depends on world 3 theories, especially upon a theory of time which
 underlies the identity of the self, the self of yesterday, of today, and of
 tomorrow. The learning of a language, which is a world 3 object, is itself
 partly a creative act and partly a feedback effect; and the full
 consciousness of self is anchored in our human language.”

 Evgenii
 --

 

Re: Re: Re: Can computers be conscious ? Re: Zombieopolis Thought Experiment

2012-10-09 Thread Roger Clough
Hi Richard,  

My point was that monads do not add any faculty to an object
that it does not already have. Monadization doesn't actually do anything;
it allows what is possible in principle, such as mutual action between the mind
and body, to actually happen.

But if a computer can think, as you seem to believe, it can think without
considering it as a monad.

Roger Clough, rclo...@verizon.net 
10/9/2012  
Forever is a long time, especially near the end. -Woody Allen 


- Receiving the following content -  
From: Roger Clough  
Receiver: everything-list  
Time: 2012-10-08, 09:25:50 
Subject: Re: Re: Can computers be conscious ? Re: Zombieopolis Thought Experiment 


Hi Richard Ruquist  

I may have given that impression, sorry, but  
a monad can only make what's inside do what it can do. 

Human and animal monads can both feel, so they can be conscious. 
But a rock is at best unconscious as it cannot feel or think. 

There's no way to tell what faculties a computer has. 

Roger Clough, rclo...@verizon.net  
10/8/2012  
Forever is a long time, especially near the end. -Woody Allen  


- Receiving the following content -  
From: Richard Ruquist  
Receiver: everything-list  
Time: 2012-10-07, 11:06:17  
Subject: Re: Can computers be conscious ? Re: Zombieopolis Thought Experiment  


Roger,  

If human consciousness comes from attached monads, as I think you have claimed, 
 
then why could not these monads attach to sufficiently complex computers  
as well?  
Richard  

On Sun, Oct 7, 2012 at 8:17 AM, Roger Clough wrote:  
 Hi John Clark  
  
 Unless computers can deal with inextended objects such as  
 mind and experience, they cannot be conscious.  
  
 Consciousness is direct experience, computers can only deal in descriptions 
 of experience.  
  
 Everything that a computer does is, to my knowledge, at least  
 in principle publicly available, since it uses publicly available symbols or 
 code.  
  
 Consciousness is direct experience, which cannot be put down in code  
 any more than life can be put down in code. It is personal and not publicly 
 available.  
  
 Roger Clough, rclo...@verizon.net  
 10/7/2012  
 Forever is a long time, especially near the end. -Woody Allen  
  
  
 - Receiving the following content -  
 From: John Clark  
 Receiver: everything-list  
 Time: 2012-10-06, 13:56:30  
 Subject: Re: Zombieopolis Thought Experiment  
  
  
 On Fri, Oct 5, 2012 at 6:29 PM, Craig Weinberg wrote:  
  
  
  
 "I'm openly saying that a high school kid can make a robot that behaves 
 sensibly with just a few transistors." 
  
  
 Only because he lives in a universe in which the possibility of teleology is 
 fully supported from the start.  
  
  
 We know with absolute certainty that the laws of physics in this universe 
 allow for the creation of consciousness, we may not know how they do it but 
 we know for a fact that it can be done. So how on Earth does that indicate 
 that a conscious computer is not possible? Because it doesn't fart?  
  
 you have erroneously assumed that intelligence is possible without sense 
 experience.  
  
 No, I am assuming the exact OPPOSITE! In fact I'm not even assuming, I know 
 for a fact that intelligent behavior WITHOUT consciousness confers an 
 Evolutionary advantage, and I know for a fact that intelligent behavior WITH 
 consciousness confers no additional Evolutionary advantage (and if you 
 disagree with that point then you must believe that the Turing Test works for 
 consciousness too and not just intelligence). And in spite of all this I know 
 for a fact that Evolution DID produce consciousness at least once, therefore 
 the only conclusion is that consciousness is a byproduct of intelligence.  
  
  
  
 Adenine and Thymine don't have purpose in seeking to bind with each other?  
  
  
 I don't even know what a question like that means, whose purpose do you 
 expect Adenine and Thymine to serve?  
  
  
  
 How do you know?  
  
  
 I know because I have intelligence and Adenine and Thymine do not know 
 because they have none, they only have cause and effect.  
  
  
  
 How is it different from our purpose in staying in close proximity to places 
 to eat and sleep?  
  
  
 And to think that some people berated me for anthropomorphizing future 
 supercomputers and here you are anthropomorphizing simple chemicals.  
  
  
  
 Why is everything aware, why isn't everything not aware?  
  
  
 Because then we wouldn't be aware of having this conversation.  
  
  
 And we are aware of having this conversation because everything is aware, 
 except of course for computers.  
  
 Robots are something?  
  
 No, they aren't something.  
  
 That is just a little too silly to argue.  
  
  
 Everything is awareness  
  
 Are you certain? I thought everything is klogknee, or maybe it's everything is 
 42.  
  
  
  
 evolution requires that something be alive to begin with.  
  
 Evolution requires something that 

The fundamental problem

2012-10-09 Thread Roger Clough
Hi Alberto G. Corona  

IMHO the bottom line revolves around the problem of solipsism,
which is that we cannot prove that other people or objects have minds,
we can only say at most that they appear to have minds.


Roger Clough, rclo...@verizon.net 
10/9/2012  
Forever is a long time, especially near the end. -Woody Allen 


- Receiving the following content -  
From: Alberto G. Corona  
Receiver: everything-list  
Time: 2012-10-08, 14:45:00 
Subject: Re: The real reasons we don't have AGI yet 


Deutsch is right about the need to advance in Popperian epistemology,
which ultimately is evolutionary epistemology: how evolution makes a
portion of matter ascertain what is true, in virtue of what, and for
what purpose. The idea of intelligence needs a knowledge of what is
true but also a motive for acting and therefore for using this
intelligence. If there is no purpose there is no acting; if no acting,
no selection of intelligent behaviours; if no evolution, no intelligence.
Not only is intelligence made for acting according to arbitrary
purposes: it has evolved from the selection of resulting behaviours for
precise purposes.

An ordinary purpose is not separable from other purposes that are
coordinated for a particular superior purpose, but the chain of
reasoning and acting means that a designed intelligent robot also needs
an ultimate purpose. Otherwise it would be a sequencer and achiever of
disconnected goals at a certain level where the goals would never have
coordination, that is, it would not be intelligent.

This is somewhat different from humans, because many of our goals are
hardcoded and not accessible to introspection, although we can use
evolutionary reasoning to obtain falsifiable hypotheses about
apparently irrational behaviour, like love, anger, aesthetics, pleasure
and so on. However, men are from time to time asking themselves about
the deep meaning of what they do, especially when a whole chain of goals
has failed and they are in a bottleneck, because this is the right
thing to do for intelligent beings. A truly intelligent being therefore
has existential, moral and belief problems. If an artificial
intelligent being has these problems, the designer has solved the
problem of AGI at the deepest level.

An AGI design has no such core engine of impulses and perceptions
that drive, in the first place, intelligence to action: curiosity,
fame and respect, power, social navigation instincts. It has to start
from scratch. Concerning perceptions, a man has hardwired
perceptions that create meaning: there is brain circuitry at
various levels that makes him feel that the person in front of him is
another person. But really it is his evolved circuitry that creates the
impression that that is a person and that this is true, instead of a
bunch of moving atoms. Popperian evolutionary epistemology builds from
this. All of this links computer science with philosophy at the deepest
level.

Another comment concerning design: evolutionary designs are
different from rational designs. The modularity in rational design
arises from the fact that reason cannot reason with many variables at
the same time. Reason uses divide and conquer. Object-oriented design,
modular architecture and so on are a consequence of that limitation.
These designs are understandable by other humans, but they are not the
most efficient. In contrast, modularity in evolution is functional.
That means that if a brain structure is near another in the brain,
forming a greater structure, it is for reasons of efficiency, not for
reasons of modularity. The interfaces between modules are not
discrete, but pervasive. This essentially makes reverse engineering
of the brain impossible.
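
As a small contrast to the evolutionary case, a sketch of rational-design
modularity with a discrete interface; the classes and numbers are generic and
invented for illustration:

# Rational-design modularity: each module exposes a small, discrete interface,
# so a reader can understand one piece at a time. Evolved "modules", on the
# account above, have no such clean seams.
class BatteryMonitor:
    def __init__(self, capacity_wh):
        self.capacity_wh = capacity_wh
        self.level_wh = capacity_wh

    def drain(self, wh):
        self.level_wh = max(0.0, self.level_wh - wh)

    def fraction_remaining(self):          # the whole public interface
        return self.level_wh / self.capacity_wh

class Planner:
    # Depends on BatteryMonitor only through fraction_remaining().
    def __init__(self, battery):
        self.battery = battery

    def next_action(self):
        return "recharge" if self.battery.fraction_remaining() < 0.2 else "work"

battery = BatteryMonitor(capacity_wh=100.0)
battery.drain(85.0)
print(Planner(battery).next_action())   # recharge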






2012/10/8 John Clark  
 
 How David Deutsch can watch a computer beat the 2 best human Jeopardy! 
 players on planet Earth and then say that AI has made "no progress whatever 
 during the entire six decades of its existence" is a complete mystery to me. 
 
 John K Clark 
 
 




-- 
Alberto. 



I believe that comp's requirement is one of as if rather than is

2012-10-09 Thread Roger Clough
Hi Alberto G. Corona  and Bruno,

Perhaps I can express the problem of solipsism as this.
To have a mind means that one can experience.
Experiences are subjective and thus cannot be actually shared,
the best one can do is share a description of the experience.
If one cannot actually share another's experience, 
one cannot know if they actually had an experience--
that is, that they actually have a mind.

Comp seems to avoid this insurmountable problem 
by avoiding the issue of whether the computer
actually had an experience, only that it appeared
to have an experience.  So comp's requirement  
is "as if" rather than "is".


Roger Clough, rclo...@verizon.net 
10/9/2012  
Forever is a long time, especially near the end. -Woody Allen 


- Receiving the following content -  
From: Alberto G. Corona  
Receiver: everything-list  
Time: 2012-10-08, 15:12:22 
Subject: Re: What Kant did: Consciousness is a top-down structuring of bottom-up 
sensory info 


Bruno: 

It could be that the indeterminacy in the I means that everything else 
is not a machine but, supposedly, a hallucination. 
But this hallucination has a well-defined set of mathematical 
properties that are communicable to other hallucinated spectators. 
This means that something is keeping the picture coherent. If that 
something is not computation or computations, what is the nature of 
this well-behaved hallucination according to your point of view? 


2012/10/7 Bruno Marchal : 
 
 On 07 Oct 2012, at 15:11, Alberto G. Corona wrote: 
 
 
 
 2012/10/7 Bruno Marchal  
 
 
 On 07 Oct 2012, at 12:32, Alberto G. Corona wrote: 
 
 Hi Roger: 
 
 ... and cognitive science , which study the hardware and evolutionary 
 psychology (that study the software or mind) assert that this is true. 
 
 
 Partially true, as both the mainstream cognitive science and psychology 
 still does not address the mind-body issue, even less the comp particular 
 mind-body issue. In fact they use comp + weak materialism, which can be 
 shown contradictory(*). 
 
 
 
 
 The Kant idea that even space and time are creations of the mind is 
 crucial for the understanding and to compatibilize the world of perceptions 
 and phenomena with the timeless, reversible, mathematical nature of the 
 laws of physics that by the way, according with M Theory, have also 
 dualities between the entire universe and the interior of a brane on the 
 planck scale (we can not know if we live in such a small brane). 
 
 
 OK. No doubt that Kant was going in the right (with respect to comp at 
 least) direction. But Kant, for me, is just doing 1/100 of what the 
 neoplatonists already did. 
 
 
 
 I don't assume either whether this mathematical nature is or is not the ultimate 
 nature of reality 
 
 
 Any Turing universal part of it is enough for the ontology, in the comp 
 frame. For the epistemology, no mathematical theories can ever be enough. 
 Arithmetic viewed from inside is bigger than what *any* theory can describe 
 completely. This makes comp preventing any text to capture the essence of 
 what being conscious can mean, be it a bible, string theory, or Peano 
 Arithmetic. In a sense such theories are like new person, and it put only 
 more mess in Platonia. 
 
 
 
 
 Probably the mind (or more specifically each instantiation of the mind 
 along the line of life in space-time) make use a sort of duality in 
 category theory between topological spaces and algebraic structures (as 
 Stephen told me and he can explain you) . 
 
 
 Many dualities exist, but as I have try to explain to Stephen, mind and 
 matter are not symmetrical things if we assume comp. The picture is more 
 that matter is an iceberg tip of reality. 
 
 Even if matter is the tip of the iceberg, does the rest of it matter? 
 
 
 Without the rest (water), there would be no iceberg and no tip! 
 
 
 
 what can we know about this submerged computational nature? 
 
 
 In science we never know. But we can bet on comp, and then, we can know 
 relatively to that bet-theory. So with comp we know that the rest is the 
 external and internal math structures in arithmetic. 
 
 
 
 which phenomena produce the submerged part of this iceberg in the one that 
 we perceive?. 
 
 
 Arithmetic gives the submerged part. The UD complete execution gives it too. 
 The emerged part is given by the first person indeterminacy. 
 
 
 
 
 The multiverse hypothesis proposes a collection of infinite icebergs, but this is 
 a way to avoid God and to continue with the speculative business. What does the 
 computational nature of reality try to explain or to avoid? Maybe you 
 answered these questions a number of times (even to me, and I did not 
 realize it) 
 
 
 Careful. Comp makes the observable reality of physics, and the non 
 observable reality of the mind, NON computational. Indeed it needs a God 
 (arithmetical truth). It explains also why God is NOT arithmetical truth as 
 we usually defined it (it is only an approximation). 
 
 
 
 
 By the way, Bruno, you try 

Experiences are not provable because they are private, personal.

2012-10-09 Thread Roger Clough
Hi Bruno Marchal and Stathis, 

1. Only entities in spacetime physically exist, and thus can be measured and 
proven.

2. Experiences exist only in the mind, not in spacetime, because 
they are not extended in nature. They are subjective. Beyond spacetime.
Superphysical. Unprovable to others, but certain to oneself.

3. Numbers, being other than oneself, might possibly have experiences, 
but they cannot share them with us. They can only share descriptions
of experiences, not the experiences themselves (or even if those
experiences exist).


Roger rclo...@verizon.net 

10/9/2012  
Forever is a long time, especially near the end. -Woody Allen 


- Receiving the following content -  
From: Bruno Marchal  
Receiver: everything-list  
Time: 2012-10-08, 10:19:35 
Subject: Re: Zombieopolis Thought Experiment 


Hi Roger, 

On 08 Oct 2012, at 16:14, Roger Clough wrote: 

 Hi Stathis Papaioannou 
 
 I would put it that mind is superphysical. Beyond spacetime. 
 Supernatural as a word carries too much baggage. 

With comp, the natural numbers are supernatural enough. 

Bruno 



 
 Roger Clough, rclo...@verizon.net 
 10/8/2012 
 Forever is a long time, especially near the end. -Woody Allen 
 
 
 - Receiving the following content - 
 From: Stathis Papaioannou 
 Receiver: everything-list@googlegroups.com 
 Time: 2012-10-08, 03:14:29 
 Subject: Re: Zombieopolis Thought Experiment 
 
 
 On 08/10/2012, at 3:07 AM, Craig Weinberg wrote: 
 
 Absolutely not. We know no such thing. Quite the opposite, we know  
 with relative certainty that what we understand of physics provides  
 no possibility of anything other than more physics. There is no  
 hint of any kind that these laws should lead to any such thing as  
 an 'experience' or awareness of any kind. You beg the question 100%  
 and are 100% incapable of seeing that you are doing it. 
 
 Well, if it's not the laws of physics then it's something  
 supernatural, isn't it? 
 
 
 -- Stathis Papaioannou  
 
 

http://iridia.ulb.ac.be/~marchal/ 







Re: Re: Zombieopolis Thought Experiment

2012-10-09 Thread Roger Clough
Hi Bruno Marchal 


Roger Clough, rclo...@verizon.net
10/9/2012 
Forever is a long time, especially near the end. -Woody Allen


- Receiving the following content - 
From: Bruno Marchal 
Receiver: everything-list 
Time: 2012-10-08, 10:19:35
Subject: Re: Zombieopolis Thought Experiment


Hi Roger,

On 08 Oct 2012, at 16:14, Roger Clough wrote:

 Hi Stathis Papaioannou

 I would put it that mind is superphysical. Beyond spacetime.
 Supernatural as a word carries too much baggage.

With comp, the natural numbers are supernatural enough.

Bruno




 Roger Clough, rclo...@verizon.net
 10/8/2012
 Forever is a long time, especially near the end. -Woody Allen


 - Receiving the following content -
 From: Stathis Papaioannou
 Receiver: everything-list@googlegroups.com
 Time: 2012-10-08, 03:14:29
 Subject: Re: Zombieopolis Thought Experiment


 On 08/10/2012, at 3:07 AM, Craig Weinberg wrote:

 Absolutely not. We know no such thing. Quite the opposite, we know 
 with relative certainty that what we understand of physics provides 
 no possibility of anything other than more physics. There is no 
 hint of any kind that these laws should lead to any such thing as 
 an 'experience' or awareness of any kind. You beg the question 100% 
 and are 100% incapable of seeing that you are doing it.

 Well, if it's not the laws of physics then it's something 
 supernatural, isn't it?


 -- Stathis Papaioannou 



http://iridia.ulb.ac.be/~marchal/







Re: Conjoined Twins

2012-10-09 Thread Roger Clough
Hi Craig Weinberg 


Roger Clough, rclo...@verizon.net
10/9/2012 
Forever is a long time, especially near the end. -Woody Allen


- Receiving the following content - 
From: Craig Weinberg 
Receiver: everything-list 
Time: 2012-10-08, 12:02:27
Subject: Conjoined Twins


Have a look at the first few minutes of this show with conjoined twins Abby and 
Brittany:

http://tlc.howstuffworks.com/tv/abby-and-brittany/videos/big-moves.htm

You can see that although they do not share the same brain they clearly share 
aspects of the same mind. They often speak in unison but they can disagree with 
each other. This can be interpreted to mean that they are similar machines and 
therefore are able to generate the same functions simultaneously, but then how 
can they voluntarily disagree? To me, this shows how fundamentally different 
subjectivity and will is from computation, information, or even physics. Even 
though I think subjectivity is physical, it's because physics is subjective, 
and the way that happens is via intention through time, rather than extension 
across space. The words they say are not being transmitted from inside one 
skull to another, even though Brittany seems to be echoing Abby in the sense 
that she is in a more subservient role in expressing what they are saying, the 
echo is not meaningfully delayed - she is not listening to Abby's words with 
her ears and then imitating her, she is feeling the meaning of what is being 
said at nearly the same time.

I think that Bruno would say that this illustrates the nonlocality of 
arithmetic as each person is a universal machine who is processing similar data 
with similar mechanisms, but I see real-time Quorum Mechanics. They are 
speaking more or less 'in concert'. Were they machines, I would expect that 
they could get out of synch. One could just start repeating the other five 
seconds later, or they could lapse into an infinite regress of echoing. Surely 
the circuitry of such a rare instrument would not and could not evolve rock 
solid error corrective anticipation for this.





Re: Conjoined Twins

2012-10-09 Thread Roger Clough
Hi Craig Weinberg 

The subjective aspect (Firstness), some of which apparently each twin has, is 
not shareable; only descriptions of it (Thirdness) are shareable. 

What is shareable is Thirdness. What cannot be shared is Firstness.
Thirdness is the description of Firstness.

Roger Clough, rclo...@verizon.net 
10/9/2012  
Forever is a long time, especially near the end. -Woody Allen 


- Receiving the following content -  
From: Craig Weinberg  
Receiver: everything-list  
Time: 2012-10-08, 12:02:27 
Subject: Conjoined Twins 


Have a look at the first few minutes of this show with conjoined twins Abby and 
Brittany: 

http://tlc.howstuffworks.com/tv/abby-and-brittany/videos/big-moves.htm 

You can see that although they do not share the same brain they clearly share 
aspects of the same mind. They often speak in unison but they can disagree with 
each other. This can be interpreted to mean that they are similar machines and 
therefore are able to generate the same functions simultaneously, but then how 
can they voluntarily disagree? To me, this shows how fundamentally different 
subjectivity and will is from computation, information, or even physics. Even 
though I think subjectivity is physical, it's because physics is subjective, 
and the way that happens is via intention through time, rather than extension 
across space. The words they say are not being transmitted from inside one 
skull to another, even though Brittany seems to be echoing Abby in the sense 
that she is in a more subservient role in expressing what they are saying, the 
echo is not meaningfully delayed - she is not listening to Abby's words with 
her ears and then imitating her, she is feeling the meaning of what is being 
said at nearly the same time. 

I think that Bruno would say that this illustrates the nonlocality of 
arithmetic as each person is a universal machine who is processing similar data 
with similar mechanisms, but I see real-time Quorum Mechanics. They are 
speaking more or less 'in concert'. Were they machines, I would expect that 
they could get out of synch. One could just start repeating the other five 
seconds later, or they could lapse into an infinite regress of echoing. Surely 
the circuitry of such a rare instrument would not and could not evolve rock 
solid error corrective anticipation for this. 





Re: Re: Zombieopolis Thought Experiment

2012-10-09 Thread Roger Clough
Hi Craig Weinberg  

Consciousness had to arise before language.
Apes are conscious.


Roger Clough, rclo...@verizon.net 
10/9/2012  
Forever is a long time, especially near the end. -Woody Allen 


- Receiving the following content -  
From: Craig Weinberg  
Receiver: everything-list  
Time: 2012-10-08, 14:23:18 
Subject: Re: Zombieopolis Thought Experiment 




On Monday, October 8, 2012 1:35:31 PM UTC-4, Brent wrote: 
On 10/8/2012 8:42 AM, John Clark wrote:  
2) Intelligent behavior is NOT associated with subjective experience, in which 
case there is no reason for Evolution to produce consciousness and I have no 
explanation for why I am here, and I have reason to believe that I am the only 
conscious being in the universe. 

There's a third possibility: Intelligent behavior is sometimes associated with 
subjective experience and sometimes not.  Evolution may have produced 
consciousness as a spandrel, an accident of the particular developmental path 
that evolution happened upon.  Or it may be that consciousness is necessarily 
associated with only certain kinds of intelligent behavior, e.g. those related 
to language. 


You are almost right but have it upside down. When someone gets knocked 
unconscious, can they continue to behave intelligently? Can a baby wake up from 
a nap and become conscious before they learn language?  

What would lead us to presume that consciousness itself could supervene on 
intelligence except if we were holding on to a functionalist metaphysics? 

Clearly human intelligence in each individual supervenes on their consciousness 
and clearly supercomputers can't feel any pain or show any signs of fatigue 
that would suggest a state of physical awareness despite their appearances of 
'intelligence'. 

If you flip it over though, you are right. Everything is conscious to some 
extent, but not everything is intelligent in a cognitive sense. The assumption 
of strong AI is that we can take the low hanging fruit of primitive 
consciousness and attach it to the tree tops of anthropological quality 
intelligence and it will grow a new tree into outer space. 

Craig 




Brent 





Re: Re: Zombieopolis Thought Experiment

2012-10-09 Thread Roger Clough
Hi Craig Weinberg 


Roger Clough, rclo...@verizon.net
10/9/2012 
Forever is a long time, especially near the end. -Woody Allen


- Receiving the following content - 
From: Craig Weinberg 
Receiver: everything-list 
Time: 2012-10-08, 14:25:15
Subject: Re: Zombieopolis Thought Experiment




On Monday, October 8, 2012 2:19:56 PM UTC-4, Brent wrote:
On 10/8/2012 10:24 AM, Craig Weinberg wrote: 
So the more stimulation you get through your senses of the outside environment 
the less conscious you become. Huh?


Stimulation that you get through your senses of the outside environment does 
not control you.

How could you possibly know that, considering that John has accumulated many 
years of stimulation?


Just look at the Conjoined Twins video I posted. Those two people are 
genetically identical, occupy the same body, experience stimulation that is 
very similar, yet they *routinely* disagree.

Craig



Brent





Re: Re: Zombieopolis Thought Experiment

2012-10-09 Thread Roger Clough
Hi Craig Weinberg  

They can only disagree about experiences that are spoken.




Re: Re: Zombieopolis Thought Experiment

2012-10-09 Thread Roger Clough
Hi Craig Weinberg  

There are three things:

1) The experiences of each twin, which may be the same or differ (we'll never 
know).

2) What they describe or interpret of their experiences in words.

3) They may have the same experience but describe it differently.


Roger Clough, rclo...@verizon.net 
10/9/2012  
Forever is a long time, especially near the end. -Woody Allen 


- Receiving the following content -  
From: Craig Weinberg  
Receiver: everything-list  
Time: 2012-10-08, 16:25:20 
Subject: Re: Zombieopolis Thought Experiment 




On Monday, October 8, 2012 3:38:42 PM UTC-4, Brent wrote: 
On 10/8/2012 11:25 AM, Craig Weinberg wrote:  


On Monday, October 8, 2012 2:19:56 PM UTC-4, Brent wrote:  
On 10/8/2012 10:24 AM, Craig Weinberg wrote:  
So the more stimulation you get through your senses of the outside environment 
the less conscious you become. Huh? 


Stimulation that you get through your senses of the outside environment does 
not control you. 

How could you possibly know that, considering that John has accumulated many 
years of stimulation? 


Just look at the Conjoined Twins video I posted. Those two people are 
genetically identical, occupy the same body, experience stimulation that is 
very similar, yet they *routinely* disagree. 


Similar isn't the same. 


But the behavior varies in similarity while their stimulation does not. Clearly 
they are each controlling their own behavior separately, even though their 
stimulation from the outside world does not vary separately. If 
the internal conditions were sufficient to allow their control strategies to 
diverge, then they should not re-synchronize again and again constantly. Each 
difference should build on the others, like two slightly different fractal 
kernels: they wouldn't weave in and out of perfect synch all the time, they would 
follow completely anomalous paths. The fractals might look like they are 
exploring different patterns (if even that), but it seems like they would not 
keep going back to isomorphic patterns at the same time. 

Craig 

Brent 





Consciousness is a faraway land

2012-10-09 Thread Roger Clough
Hi John Clark  

Pascal said that the heart knows things of which the mind knows not.

Consciousness is a faraway land of which we know nothing,
except that we can experience things. We can describe our
experiences but cannot share them directly or prove them.

The same is true of religion. The experience of religion is called
faith, what we can share is called beliefs. So we can truly  know
things about religion that we can only partly explain or prove.



Roger Clough, rclo...@verizon.net 
10/9/2012  
Forever is a long time, especially near the end. -Woody Allen 


- Receiving the following content -  
From: John Clark  
Receiver: everything-list  
Time: 2012-10-08, 11:42:01 
Subject: Re: Zombieopolis Thought Experiment 


On Fri, Oct 5, 2012, Craig Weinberg wrote: 


 We know with absolute certainty that the laws of physics in this universe 
 allow for the creation of consciousness, we may not know how they do it but 
 we know for a fact that it can be done.  


 Absolutely not. We know no such thing.  


We do unless we abandon reason and pretend that the non answers that religion 
provides actually explain something, or that your Fart Philosophy explains 
something when it says that consciousness exists because consciousness exists.  



 Computers which have been programmed thus far don't have conscious 
 experiences. Would you agree that is a fact? 


No, I most certainly do NOT agree that it is a fact that computers are not 
conscious, nor is it a fact that Craig Weinberg has conscious experiences; it 
is only a fact that sometimes both behave intelligently.  


 I understand that the capacity to have a conscious experience is inversely 
proportionate to the capacity for that experience to be controlled from the 
outside.  


So the more stimulation you get through your senses of the outside environment 
the less conscious you become. Huh? 
 I know for a fact that intelligent behavior WITHOUT consciousness confers an 
 Evolutionary advantage


 Which fact is that?  


That intelligent behavior WITHOUT consciousness confers an Evolutionary 
advantage. Having difficulty with your reading comprehension?  


 Which intelligent behavior do you know that you can be certain exists without 
 any subjective experience associated with it? 


I am aware of no such behavior. The only intelligent behavior I know with 
certainty that is always associated with subjective experience is my own. But I 
know with certainty there are 2 possibilities: 

1) Intelligent behavior is always associated with subjective experience, if so 
then if a computer beats you at any intellectual pursuit then it has a 
subjective experience, assuming of course that you yourself are intelligent. 
And I'll let you pick the particular intellectual pursuit for the contest.  

2) Intelligent behavior is NOT associated with subjective experience, in which 
case there is no reason for Evolution to produce consciousness and I have no 
explanation for why I am here, and I have reason to believe that I am the only 
conscious being in the universe. 



 I know for a fact that intelligent behavior WITH consciousness confers no 
 additional Evolutionary advantage (and if you disagree with that point then 
 you must believe that the Turing Test works for consciousness too and not 
 just intelligence).  


 Yet you think that consciousness must have evolved. 

Yes.  



 No contradiction there? 


No contradiction there if consciousness is a byproduct of intelligence, a 
massive contradiction if it is not; so massive that human beings could not be 
conscious, and yet I am, and perhaps you are too. 



 You think that every behavior in biology exists purely because of evolution 


Yes. 


 except consciousness, which you have no explanation for  


My explanation is that intelligence produces consciousness, I don't know 
exactly how but if Evolution is true then there is a proof that it does.  
 I know for a fact that Evolution DID produce consciousness at least once, 
 therefore the only conclusion is that consciousness is a byproduct of 
 intelligence. 



 A byproduct that does what??? 


A byproduct that produces consciousness. Having difficulty with your reading 
comprehension?  


  whose purpose do you expect Adenine and Thymine to serve? 


 The purpose of their attraction to each other. 


That's nice, but I repeat, whose purpose do you expect Adenine and Thymine to 
serve? 

 Where do you think your intelligence to know this comes from? Surely it is 
 the result in large part of Adenine and Thymine's contribution to the 
 intelligence of DNA. 


If everything (except for some reason computers!) is intelligent, if even 
simple molecules are intelligent then the word has no meaning and is equivalent 
to nothing is intelligent or everything is klogknee or nothing is klogknee.  



 Robots are something?  

 No, they aren't something.  

 That is just a little too silly to argue.  



 You think that a picture of a pipe 

Re: AGI

2012-10-09 Thread Roger Clough
Hi John Mikes  

Intelligence is the ability to make decisions without outside help.

Roger Clough, rclo...@verizon.net 
10/9/2012  
Forever is a long time, especially near the end. -Woody Allen 


- Receiving the following content -  
From: John Mikes  
Receiver: yanniru  
Time: 2012-10-08, 16:07:09 
Subject: AGI 


Dear Richard, I think the lengthy text is Ben's article in response to D. 
Deutsch.  
Sometimes I was erring in the belief that it is YOUR text, but no. Thanks for 
copying. 
It is too long and too little organized for me to keep up with ramifications 
prima vista.  
What I extracted from it are some remarks I will try to communicate to Ben (a 
longtime e-mail friend) as well.  
I have my (agnostically derived) version of intelligence: the capability of 
reading 'inter' lines (words/meanings); apart from such human distinction, to 
realize the 'essence' of relations beyond vocabulary, or 'physical science' 
definitions. Such content is not provided in our practical computing machines 
(although Bruno trans-leaps such barriers with his (Löb's) universal machine 
unidentified). Whatever our (physical) machines can do is within the physical 
limits of information - the content of the actual MODEL of the world we live 
with by yesterday's knowledge; no advanced technology can transcend such 
limitations: there is no input to do so. This may be the limit for AI, and AGI 
as well. Better manipulation etc. does not go BEYOND. 
Human mind-capabilities, however (at least in my 'agnostic' worldview), are 
under (unspecified) influences from the infinite complexity BEYOND our 
MODEL, without our knowledge and specification's power. Accordingly we MAY get 
input from more than the factual content of the MODEL. On such (unspecified) 
influences may be based our creativity (anticipation of Robert Rosen?), which 
cannot be duplicated by the cutest algorithms in the best computing machines. 
Our 'factual' knowables in the MODEL are adjusted to our mind's capability - not 
so the input from the unknowable 'infinite complexity's' relations. 
Intelligence would go beyond our quotidian limitations, not feasible for 
machines that work within such borders. 

I may dig out relevant information from Ben's text in subsequent readings, 
provided that I get back to it. 

Thanks again, it was a very interesting scroll-down. 
John Mikes 




Re: [foar] Re: The real reasons we don’t have AGI yet

2012-10-09 Thread Stephen P. King

On 10/9/2012 2:16 AM, meekerdb wrote:

On 10/8/2012 3:49 PM, Stephen P. King wrote:

Hi Russell,

Question: Why has little if any thought been given in AGI to 
self-modeling and some capacity to track the model of self under the 
evolutionary transformations? 


It's probably because AIs have not needed to operate in environments 
where they need a self-model.  They are not members of a social 
community.  Some simpler systems, like Mars Rovers, have limited 
self-models (where am I, what's my battery charge,...) that they need 
to perform their functions, but they don't have general intelligence 
(yet).


Brent
--


Could the efficiency of the computation be subject to modeling? My 
thinking is that if an AI could rewire itself for some task to more 
efficiently solve that task...
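
A crude sketch of one way "modeling the efficiency of the computation" could
look: the agent times two invented implementations of the same toy task and
rebinds itself to the faster one. Purely illustrative, not a claim about how an
AGI would self-modify:

import time

def sum_with_loop(n):
    total = 0
    for i in range(n):
        total += i
    return total

def sum_with_formula(n):
    return n * (n - 1) // 2

class SelfTuningAgent:
    def __init__(self, candidates):
        self.candidates = candidates
        self.task = candidates[0]          # start with an arbitrary choice

    def profile_and_rewire(self, n=200_000):
        # Measure each candidate on a sample input and keep the fastest.
        timings = {}
        for fn in self.candidates:
            start = time.perf_counter()
            fn(n)
            timings[fn] = time.perf_counter() - start
        self.task = min(timings, key=timings.get)

agent = SelfTuningAgent([sum_with_loop, sum_with_formula])
agent.profile_and_rewire()
print(agent.task.__name__)   # almost certainly 'sum_with_formula'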


--
Onward!

Stephen




Re: Zombieopolis Thought Experiment

2012-10-09 Thread Stathis Papaioannou
On Tue, Oct 9, 2012 at 10:28 AM, Craig Weinberg whatsons...@gmail.com wrote:

 There is no assumption that our knowledge of physics is complete; in
 fact if there were that assumption there would be no point in being a
 physicist, would there? As a matter of fact I believe that the basic
 physics of the brain has been understood for a long time and I
 challenge you to point out one thing that has been discovered in
 neuroscience which would surprise a chemist from the middle of last
 century.


 What you are saying is 'nobody thinks physics is complete', followed by
 'everybody knows that the physics of the brain has been complete for a long
 time'.

No, I said there is no *assumption* that our knowledge of physics is
complete. Obviously, it isn't complete, since physics is still an
active field. However, as a matter of scientific fact, there is no
evidence that anything more exotic than organic chemistry is going on
in the brain. There are many, many discoveries in neuroscience every
year but none of them could be described as new physics. New physics
in the brain would, for example, be a discovery that dark matter is
instrumental in nerve conduction. But so far, it's just organic
chemistry.

 This not only supports my point, but it brings up the more important point -
 the blindness of robustly left-hemisphere thinkers to identify their own
 capacity for denial. For me it's like a split brained experiment. I say 'the
 problem is that people think physics is complete' and you say 'no they
 don't. You can't show me any signs that physics of the brain isn't
 complete.' Total disconnect. You'll keep denying it too. Not your fault
 either, apparently, that's just the way a lot of intelligent people are
 wired. I have no idea if it's possible for people to consciously overcome
 that tendency...it would be like glimpsing yourself in the mirror before
 your image actually turned around.

Neuroscience has been a very large, well-funded field for many
decades. Can you point to any experiments at least hinting at new
physics?

 But that is not relevant to this discussion. The question is
 whether the physics of the brain, known or unknown, is computable. If
 it is,


 If the physics of the brain is incomplete, then how could we say whether it
 is computable or not? To me, the color red is physical, so that any
 computation of the brain has to arrive at a computational result that is
 [the experience of seeing red]. I don't think that is remotely possible.

As I have said many, many times in these discussions, I am happy to
assume for the sake of argument that consciousness is *not*
computable. The question being asked is whether the physical movement
of the parts of the brain are computable. That means given a complete
description of initial conditions, an adequate model and sufficient
computing power, is it possible to predict the physical movement of
the parts of the brain? This does *not* mean it is a practical
likelihood, only a theoretical possibility. Indeed, an actual Turing
machine, used in the formal definition of computable, is *not*
physically possible. OK?

If consciousness is not computable, and consciousness affects the
physical movement of the parts of the brain, then the physical
movement of the parts of the brain is not computable. OK? To put this
another way, we would observe neurons (and I guess other cells too, if
consciousness is all pervasive) doing stuff *contrary to the known
laws of physics*. For if neurons only did stuff consistent with what
we know about organic chemistry, and organic chemistry is assumed to
be computable, then the physical movement of the parts of the brain
would be computable too. OK?
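
(To make the notion of "computable physical movement" concrete: here is a minimal
sketch in Python, assuming only a toy neuron model with made-up parameters, not a
claim about real neurons. "Computable" just means that, given the initial state,
the rules and the input, the trajectory can be reproduced exactly by finite
computation.)

    # Toy, deterministic leaky integrate-and-fire neuron.  The model and all
    # parameter values are illustrative only.
    def simulate(v0, current, steps, dt=0.1, tau=10.0, v_thresh=1.0, v_reset=0.0):
        v, trace = v0, []
        for _ in range(steps):
            v += dt * (-v / tau + current)   # Euler step of dv/dt = -v/tau + I
            if v >= v_thresh:                # spike, then reset
                v = v_reset
            trace.append(v)
        return trace

    run1 = simulate(v0=0.0, current=0.2, steps=1000)
    run2 = simulate(v0=0.0, current=0.2, steps=1000)
    assert run1 == run2   # same initial conditions + same rules -> same prediction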

 then in theory a computer could be just as intelligent as a
 human. If it isn't, then a computer would always have some deficit
 compared to a human. Maybe it would never be able to play the violin,
 cut your hair or write a book as well as a human.


 The deficiency is that it couldn't feel. It could impersonate a violin
 player, but it would lack character and passion, gravitas, presence. Just
 like whirling CGI graphics of pseudo-metallic transparent reflecty crap.
 It's empty and weightless. Can't you tell? Can't you see that? Again, I
 should not expect everyone to be able to see that. I guess I can only
 understand that I see that and know that you can see a lot of things that I
 can't as well. In your mind there is no reason that we can't eat broken
 glass for breakfast if we install synthetic stomach lining that doesn't know
 the difference between food and glass. Nothing I can say will give you pause
 or question your reasoning, because indeed, the reasoning is internally
 consistent.

Again, I have assumed for the sake of argument that a computer cannot
feel. But if the movement of the parts of the brain is computable, the
computer will be able to behave just like a person, in every respect.
After prolonged interaction with it an observer would guess that it
had feelings even though, by the assumption, it did not.

Re: I believe that comp's requirement is one of as if rather than is

2012-10-09 Thread Alberto G. Corona
That is true. To presuppose an experience of self in others is a leap
of faith based on similarity. It is duck philosophy: what seems a
duck must be a duck. Even Hume had to limit his destructive
philosophy to avoid self-destructiveness, because there are core
beliefs that we don't doubt, or cannot seriously doubt, because we
cannot accept that they are just beliefs without acting
self-destructively. That is, in the first place, the reason why these beliefs
exist: they must have been selected and hardcoded by evolution. That
must be the ultimate meaning of truth in evolutionary epistemology.

In the same way, a self-conscious robot must have beliefs about itself
and others. He believes that he is conscious. He cannot conceive
otherwise. And his sensations must accord with this belief.
His belief cannot be a boolean switch in a program. He must answer
sincerely to questions about existence, perception and so on.

But still, after this reasoning, I doubt that the self-conscious
philosopher robot has the kind of thing, call it a soul, that I have.

2012/10/9 Roger Clough rclo...@verizon.net:
 Hi Alberto G. Corona  and Bruno,

 Perhaps I can express the problem of solipsism as this.
 To have a mind means that one can experience.
 Experiences are subjective and thus cannot be actually shared,
 the best one can do is share a description of the experience.
 If one cannot actually share another's experience,
 one cannot know if they actually had an experience--
 that is, that they actually have a mind.

 Comp seems to avoid this insurmountable problem
 by avoiding the issue of whether the computer
 actually had an experience, only that it appeared
 to have an experience.  So comp's requirement
 is as if rather than is.


 Roger Clough, rclo...@verizon.net
 10/9/2012
 Forever is a long time, especially near the end. -Woody Allen


 - Receiving the following content -
 From: Alberto G. Corona
 Receiver: everything-list
 Time: 2012-10-08, 15:12:22
 Subject: Re: What Kant did: Consciousness is a top-down structuring 
 ofbottom-up sensory info


 Bruno:

 It could be that the indeterminacy in the I means that everything else
 is not a machine but, supposedly, a hallucination.
 But this hallucination has a well-defined set of mathematical
 properties that are communicable to other hallucinated spectators.
 This means that something is keeping the picture coherent. If that
 something is not a computation or computations, what is the nature of
 this well-behaved hallucination according to your point of view?


 2012/10/7 Bruno Marchal :

 On 07 Oct 2012, at 15:11, Alberto G. Corona wrote:



 2012/10/7 Bruno Marchal


 On 07 Oct 2012, at 12:32, Alberto G. Corona wrote:

 Hi Roger:

 ... and cognitive science , which study the hardware and evolutionary
 psychology (that study the software or mind) assert that this is true.


 Partially true, as both mainstream cognitive science and psychology
 still do not address the mind-body issue, even less the particular comp
 mind-body issue. In fact they use comp + weak materialism, which can be
 shown to be contradictory(*).




 Kant's idea that even space and time are creations of the mind is
 crucial for understanding, and for reconciling, the world of perceptions
 and phenomena with the timeless, reversible, mathematical nature of the
 laws of physics, which, by the way, according to M-theory, also have
 dualities between the entire universe and the interior of a brane at the
 Planck scale (we cannot know whether we live in such a small brane).


 OK. No doubt that Kant was going in the right (with respect to comp at
 least) direction. But Kant, for me, is just doing 1/100 of what the
 neoplatonists already did.



 I don't assume either way whether this mathematical nature is or is not the
 ultimate nature of reality.


 Any Turing universal part of it is enough for the ontology, in the comp
 frame. For the epistemology, no mathematical theories can ever be enough.
 Arithmetic viewed from inside is bigger than what *any* theory can describe
 completely. This makes comp preventing any text to capture the essence of
 what being conscious can mean, be it a bible, string theory, or Peano
 Arithmetic. In a sense such theories are like new person, and it put only
 more mess in Platonia.




 Probably the mind (or, more specifically, each instantiation of the mind
 along the line of life in space-time) makes use of a sort of duality in
 category theory between topological spaces and algebraic structures (as
 Stephen told me, and he can explain it to you).


 Many dualities exist, but as I have tried to explain to Stephen, mind and
 matter are not symmetrical things if we assume comp. The picture is more
 that matter is the tip of an iceberg of reality.

 Even if matter is the tip of the iceberg, does the rest of it matter?


 Without the rest (water), there would be no iceberg and no tip!



 Can we know about this submerged computational nature?


 In science we never know. But we can 

Can AGI be proven ?

2012-10-09 Thread Roger Clough
Hi Stephen P. King  

I suppose AGI would be the Holy Grail of artificial intelligence,
but I fear that only the computer can know that it has 
actually achieved it, for intelligence is subjective.
Not that computers can't in principle be subjective, 
but that subjectivity (Firstness) can never be made public,
only descriptions of it (Thirdness) can be made public.

Firstness is the raw experience: the object or event as privately experienced.
Unprovable. Since AGI is Firstness, it is not provable.

Thirdness is a description of that experience: the public expression of that
private experience.
Provable, yes or no.


Roger Clough, rclo...@verizon.net 
10/9/2012  
Forever is a long time, especially near the end. -Woody Allen 


- Receiving the following content -  
From: Stephen P. King  
Receiver: everything-list  
Time: 2012-10-08, 14:22:20 
Subject: Re: The real reasons we don't have AGI yet 


On 10/8/2012 1:13 PM, Richard Ruquist wrote: 

excerpt from 

The real reasons we don't have AGI yet 
A response to David Deutsch's recent article on AGI 
October 8, 2012 by Ben Goertzel 

So in this view, the main missing ingredient in AGI so far is 
“cognitive synergy”: the fitting-together of different intelligent 
components into an appropriate cognitive architecture, in such a way 
that the components richly and dynamically support and assist each 
other, interrelating very closely in a similar manner to the 
components of the brain or body and thus giving rise to appropriate 
emergent structures and dynamics. 

The reason this sort of intimate integration has not yet been explored 
much is that it's difficult on multiple levels, requiring the design 
of an architecture and its component algorithms with a view toward the 
structures and dynamics that will arise in the system once it is 
coupled with an appropriate environment. Typically, the AI algorithms 
and structures corresponding to different cognitive functions have 
been developed based on divergent theoretical principles, by disparate 
communities of researchers, and have been tuned for effective 
performance on different tasks in different environments. 

Making such diverse components work together in a truly synergetic and 
cooperative way is a tall order, yet my own suspicion is that this -- 
rather than some particular algorithm, structure or architectural 
principle -- is the “secret sauce” needed to create human-level AGI 
based on technologies available today. 

Achieving this sort of cognitive-synergetic integration of AGI 
components is the focus of the OpenCog AGI project that I co-founded 
several years ago. We're a long way from human adult-level AGI yet, 
but we have a detailed design and codebase and roadmap for getting 
there. Wish us luck! 
Hi Richard, 

My suspicion is that what is needed here, if we can put on our programmer 
hats, is the programmer's version of a BEC, a Bose-Einstein Condensate, where 
every part is an integrated reflection of the whole. My own idea is that some 
form of algebraic and/or topological closure is required to achieve this, as 
inspired by the Brouwer fixed-point theorem. 
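
(As a rough illustration of what the Brouwer fixed-point theorem guarantees,
here is a minimal numerical sketch, in one dimension only and with an arbitrary
example map; it is not Stephen's construction. Any continuous f from [0,1] into
itself must have a point with f(x) = x, and in 1-D such a point can be located
by bisecting g(x) = f(x) - x.)

    # Toy sketch: locate a fixed point of a continuous map f: [0,1] -> [0,1]
    # by bisection on g(x) = f(x) - x, which satisfies g(0) >= 0 and g(1) <= 0.
    import math

    def fixed_point(f, lo=0.0, hi=1.0, tol=1e-12):
        g = lambda x: f(x) - x
        for _ in range(200):
            mid = (lo + hi) / 2.0
            if abs(g(mid)) < tol:
                return mid
            if g(lo) * g(mid) <= 0:
                hi = mid
            else:
                lo = mid
        return (lo + hi) / 2.0

    f = lambda x: math.cos(x)       # one continuous map of [0,1] into itself
    x = fixed_point(f)
    print(x, f(x))                  # x is (approximately) its own image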


--  
Onward! 

Stephen

-- 
You received this message because you are subscribed to the Google Groups 
Everything List group.
To post to this group, send email to everything-list@googlegroups.com.
To unsubscribe from this group, send email to 
everything-list+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/everything-list?hl=en.



Only you can know if you actually have intelligence

2012-10-09 Thread Roger Clough
Hi meekerdb  

Only you can know if you actually have intelligence,
although you can appear to have intelligence (as if).
You can be tested for it.

Thus comp is not different from us, or at least it has the same 
limitations.


Roger Clough, rclo...@verizon.net 
10/9/2012  
Forever is a long time, especially near the end. -Woody Allen 


- Receiving the following content -  
From: meekerdb  
Receiver: everything-list  
Time: 2012-10-08, 13:35:28 
Subject: Re: Zombieopolis Thought Experiment 


On 10/8/2012 8:42 AM, John Clark wrote:  
2) Intelligent behavior is NOT associated with subjective experience, in which 
case there is no reason for Evolution to produce consciousness and I have no 
explanation for why I am here, and I have reason to believe that I am the only 
conscious being in the universe. 

There's a third possibility: Intelligent behavior is sometimes associated with 
subjective experience and sometimes not.  Evolution may have produced 
consciousness as a spandrel, an accident of the particular developmental path 
that evolution happened upon.  Or it may be that consciousness is necessarily 
associated with only certain kinds of intelligent behavior, e.g. those related 
to language. 

Brent

-- 
You received this message because you are subscribed to the Google Groups 
Everything List group.
To post to this group, send email to everything-list@googlegroups.com.
To unsubscribe from this group, send email to 
everything-list+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/everything-list?hl=en.



A clearer definition of the self

2012-10-09 Thread Roger Clough
Hi meekerdb  

The empiricists such as Hume and Locke maintained that
all that we know has first arrived through our senses.
I agree. The stuff of knowledge comes from below.

But Kant showed that this is not enough,
for our minds have to make mental sense of this data.
Consciousness or intelligence is thus structured for this purpose.
Our minds tell us what in particular we do know.

Such logical structures as intelligence uses cannot arrive through our senses,
but must come platonically from above, so that the mental sense
is finally arrived at using the platonic forms and
structures from above.

In the meeting place of the above and the below 
there exists, analogous to Maxwell's Demon, the active agent 
called the self. The self makes sense of sensory data and
stores that in memory. We have learned something.


Roger Clough, rclo...@verizon.net 
10/9/2012  
Forever is a long time, especially near the end. -Woody Allen 


- Receiving the following content -  
From: meekerdb  
Receiver: everything-list  
Time: 2012-10-08, 14:19:52 
Subject: Re: Zombieopolis Thought Experiment 


On 10/8/2012 10:24 AM, Craig Weinberg wrote:  
So the more stimulation you get through your senses of the outside environment 
the less conscious you become. Huh? 


Stimulation that you get through your senses of the outside environment does 
not control you. 

How could you possibly know that, considering that John has accumulated many 
years of stimulation? 

Brent

-- 
You received this message because you are subscribed to the Google Groups 
Everything List group.
To post to this group, send email to everything-list@googlegroups.com.
To unsubscribe from this group, send email to 
everything-list+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/everything-list?hl=en.



On Beauty

2012-10-09 Thread Roger Clough
Hi Platonist Guitar Cowboy  

The definition of beauty that I like is that
beauty is unity in diversity.


Roger Clough, rclo...@verizon.net 
10/9/2012  
Forever is a long time, especially near the end. -Woody Allen 


- Receiving the following content -  
From: Platonist Guitar Cowboy  
Receiver: everything-list  
Time: 2012-10-08, 11:58:53 
Subject: Re: On Zuckerman's paper 


Hi Stephen, Bruno, and Jason, 

Do I understand correctly that comp requires a relative measure on the set of 
all partial computable functions, and that for Stephen both abstractions, such 
as numbers and their truths, and physical worlds must emerge together from a 
primitive ground which is neutral in that it has no innate properties at all 
other than necessary possibility -- it merely exists? 

If so, naively I ask then: Why is beauty, in the imho non-chimeric sense posed 
by Plotinus in Ennead I.6 On Beauty, not a candidate for approximating that 
set, or for describing that which has no innate properties? 

Here is the translation by Stephen MacKenna: 

http://eawc.evansville.edu/anthology/beauty.htm 

Because what drew me to Zuckerman was just a chance find on YouTube... and 
seeing infinite descending chains, decorations, self-reference etc. all tied 
together in a set theory context, I didn't think "Wow, that's true" but simply 
"hmm, that's nice, maybe they'll elaborate a more precise frame." I know 
people want to keep art and science separate. But I am agnostic on this, as 
composing and playing music just bled into engineering and mathematical 
problems and solutions, as well as programming and the computer on their own. I 
apologize in advance if this is off-topic, as I find the discussion here 
fascinating and hate interrupting it. 

Mark 

--  
You received this message because you are subscribed to the Google Groups 
Everything List group. 
To post to this group, send email to everything-list@googlegroups.com. 
To unsubscribe from this group, send email to 
everything-list+unsubscr...@googlegroups.com. 
For more options, visit this group at 
http://groups.google.com/group/everything-list?hl=en.

-- 
You received this message because you are subscribed to the Google Groups 
Everything List group.
To post to this group, send email to everything-list@googlegroups.com.
To unsubscribe from this group, send email to 
everything-list+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/everything-list?hl=en.



Nature's firewall

2012-10-09 Thread Roger Clough
Hi Stathis,

The separation you missed is that mind and consciousness 
are subjective entities (not shareable), while computations 
are objective (shareable). 

Nature put a firewall between these so we don't get them confused.


Roger Clough, rclo...@verizon.net 
10/9/2012  
Forever is a long time, especially near the end. -Woody Allen 


- Receiving the following content -  
From: Roger Clough  
Receiver: everything-list  
Time: 2012-10-08, 09:19:40 
Subject: Re: Re: On complexity and bottom-up theories and calculations 


Hi Stathis Papaioannou  

Computation can give you letters on a page.  
Are they conscious?  

There's no way that I can think of, however, to prove or 
disprove that objects are conscious or not, only that 
they may simulate consciousness.  


Roger Clough, rclo...@verizon.net  
10/8/2012  
Forever is a long time, especially near the end. -Woody Allen  


- Receiving the following content -  
From: Stathis Papaioannou  
Receiver: everything-list  
Time: 2012-10-07, 10:45:10  
Subject: Re: On complexity and bottom-up theories and calculations  


On Mon, Oct 8, 2012 at 1:23 AM, Bruno Marchal wrote:  

 One theory is that existence of platonic entities such as numbers is  
 not ontologically distinct from actual existence. In that case, all  
 possible universes necessarily exist, and the one that has the laws of  
 physics allowing observers is the one the observers observe.  
  
  
 That is Tegmark's error. It cannot work. First, it is obvious that numbers  
 have a distinct existence from, say, this table or that chair, and secondly,  
 once you accept comp, whatever meaning you give to the existence of numbers,  
 as long as you agree that 2+2=4 is independent of you, the global  
 indeterminacy on arithmetic, or on the UD, has to be taken into account, and  
 physics has to be explained in terms of *all* computations. That is what  
 Tegmark and Schmidhuber have missed, and what I explained when  
 entering this mailing list.  

 Even in the case where one (little) program, like the Wheeler-DeWitt equation  
 for example, would be correct, so that indeed there would be only one  
 computation allowing consciousness, such a fact has to be justified in terms  
 of the measure taken on *all* computations. I thought you did grasp this  
 some time ago. Step 8 is not really needed here.  

Computation necessarily exists, computation is enough to generate  
consciousness and physics, therefore no need for a separate physical  
reality. Can you explain the subtlety I've missed?  
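
(For readers unfamiliar with the UD mentioned above: it dovetails on the
executions of all programs of a universal machine. As a minimal sketch of what
"dovetailing" means -- a toy only, with Python generators standing in for
programs -- one can interleave arbitrarily many, possibly non-terminating
computations so that every step of every one is eventually reached.)

    # Toy dovetailer: round k admits a new 'program' and advances every
    # admitted program by one more step; no single infinite run starves the rest.
    from itertools import count, islice

    def program(i):
        # Stand-in for program number i: an endless computation emitting values.
        for step in count():
            yield (i, step)

    def dovetail():
        running = []
        for k in count():
            running.append(program(k))   # admit program k
            for p in running:
                yield next(p)            # one more step of each admitted program

    # First steps: (0,0), (0,1), (1,0), (0,2), (1,1), (2,0), ...
    print(list(islice(dovetail(), 10)))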


--  
Stathis Papaioannou  

--  
You received this message because you are subscribed to the Google Groups 
Everything List group.  
To post to this group, send email to everything-list@googlegroups.com.  
To unsubscribe from this group, send email to 
everything-list+unsubscr...@googlegroups.com.  
For more options, visit this group at 
http://groups.google.com/group/everything-list?hl=en. 

--  
You received this message because you are subscribed to the Google Groups 
Everything List group. 
To post to this group, send email to everything-list@googlegroups.com. 
To unsubscribe from this group, send email to 
everything-list+unsubscr...@googlegroups.com. 
For more options, visit this group at 
http://groups.google.com/group/everything-list?hl=en.

-- 
You received this message because you are subscribed to the Google Groups 
Everything List group.
To post to this group, send email to everything-list@googlegroups.com.
To unsubscribe from this group, send email to 
everything-list+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/everything-list?hl=en.



Has man created an AGI ? Maybe ?

2012-10-09 Thread Roger Clough
Hi meekerdb  

We don't know, nor can we ever know for certain, that
man has created an AGI, because actual intelligence is 
subjective, so only the AGI itself can know if it is truly
intelligent. The best we can do is test it to see
if it acts as if it has intelligence.



Roger Clough, rclo...@verizon.net 
10/9/2012  
Forever is a long time, especially near the end. -Woody Allen 


- Receiving the following content -  
From: meekerdb  
Receiver: everything-list  
Time: 2012-10-08, 17:18:59 
Subject: Re: Zombieopolis Thought Experiment 


On 10/8/2012 2:10 PM, Craig Weinberg wrote:  


On Monday, October 8, 2012 4:57:08 PM UTC-4, Brent wrote:  
On 10/8/2012 1:25 PM, Craig Weinberg wrote:  


On Monday, October 8, 2012 3:38:42 PM UTC-4, Brent wrote:  
On 10/8/2012 11:25 AM, Craig Weinberg wrote:  


On Monday, October 8, 2012 2:19:56 PM UTC-4, Brent wrote:  
On 10/8/2012 10:24 AM, Craig Weinberg wrote:  
So the more stimulation you get through your senses of the outside environment 
the less conscious you become. Huh? 


Stimulation that you get through your senses of the outside environment does 
not control you. 

How could you possibly know that, considering that John has accumulated many 
years of stimulation? 


Just look at the Conjoined Twins video I posted. Those two people are 
genetically identical, occupy the same body, experience stimulation that is 
very similar, yet they *routinely* disagree. 


Similar isn't the same. 


But the behavior varies in similarity while their stimulation does not.  

Sure it does.  They are not in exactly the same place.  

That's true but irrelevant. If they move to the left two feet so that Brittany 
is in Abby's position, Brittany doesn't become Abby.  

Because they're not in the same place in SPACETIME. 


We are talking about two people in the same body who act the same sometimes and 
completely different other times. This is not the result in air pressure 
differences in the room or the angle of incidence on their retina.  


How do you know that?  There are differences and differences can be amplified.  
Even K_40 decays in their brain could trigger different thoughts. 




Haven't you heard of chaotic dynamics?  Even perfectly identical systems can 
diverge in behavior due to infinitesimal differences in stimulation. 
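
(Brent's point can be made concrete with a standard toy example -- the logistic
map, with nothing brain-specific implied: two copies of the same deterministic
rule, started with initial conditions differing by one part in a trillion, track
each other for a while and then diverge completely.)

    # Toy illustration of sensitive dependence: the logistic map x -> 4x(1-x).
    def logistic(x, steps):
        for _ in range(steps):
            x = 4.0 * x * (1.0 - x)
        return x

    a, b = 0.3, 0.3 + 1e-12            # 'identical' systems, tiny difference
    for n in (10, 30, 50):
        print(n, abs(logistic(a, n) - logistic(b, n)))
    # After roughly 50 iterations the two trajectories differ by order 1:
    # identical rules, infinitesimally different input, very different behavior.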


Sure, but do they then converge again and again? 
  



Clearly they are each controlling their own behavior separately, even though 
their stimulation from the outside world does not vary separately.  

But you don't know that.  You are just looking at the current stimulation.  Yet 
their behavior, even their internal structure, has been molded by different 
stimulations since they were embryos. 


I agree, they are different. How do they know how to speak in unison sometimes 
and argue with each other at other times? 


The brain is modular. 

Brent

-- 
You received this message because you are subscribed to the Google Groups 
Everything List group.
To post to this group, send email to everything-list@googlegroups.com.
To unsubscribe from this group, send email to 
everything-list+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/everything-list?hl=en.



more firewalls

2012-10-09 Thread Roger Clough
Hi Richard Ruquist  

Nature has put a firewall between subjective entities such as monads
and objective entities such as BECs or the manifolds.
When I said "attached" I should have said "associated with".
There are no physical connections, only logical ones.

Roger Clough, rclo...@verizon.net 
10/9/2012  
Forever is a long time, especially near the end. -Woody Allen 


- Receiving the following content -  
From: Richard Ruquist  
Receiver: everything-list  
Time: 2012-10-08, 12:35:34 
Subject: Re: Re: Can computers be conscious ? Re: Zombieopolis 
ThoughtExperiment 


Roger, 
Monads are everywhere, inside computers 
as well as humans, rocks and free space. 
Whatever allows monads to connect to physical objects 
may be operative for inanimates as well as animates. 

So the first step is to identify the connecting mechanism. 

For physical consciousness I conjecture the connection 
is based on BECs (Bose-Einstein Condensates) 
in the monadic mind entangled with BECs in the brain. 

It has been demonstrated experimentally 
that BECs of disparate substances can still be entangled. 
So once a computer is designed with BECs as in the human brain 
then it may be capable of consciousness. 
Richard 


On Mon, Oct 8, 2012 at 9:25 AM, Roger Clough  wrote: 
 Hi Richard Ruquist 
 
 I may have given that impression, sorry, but 
 a monad can only make what's inside do what it can do. 
 
 Human and animal monads can both feel, so they can be conscious. 
 But a rock is at best unconscious as it cannot feel or think. 
 
 There's no way to tell what faculties a computer has. 
 
 Roger Clough, rclo...@verizon.net 
 10/8/2012 
 Forever is a long time, especially near the end. -Woody Allen 
 
 
 - Receiving the following content - 
 From: Richard Ruquist 
 Receiver: everything-list 
 Time: 2012-10-07, 11:06:17 
 Subject: Re: Can computers be conscious ? Re: Zombieopolis Thought Experiment 
 
 
 Roger, 
 
 If human consciousness comes from attached monads, as I think you have 
 claimed, 
 then why could not these monads attach to sufficiently complex computers 
 as well. 
 Richard 
 
 On Sun, Oct 7, 2012 at 8:17 AM, Roger Clough wrote: 
 Hi John Clark 
 
 Unless computers can deal with inextended objects such as 
 mind and experience, they cannot be conscious. 
 
 Consciousness is direct experience, computers can only deal in descriptions 
 of experience. 
 
 Everything that a computer does is, to my knowledge, at least 
 in principle publicly available, since it uses publicly available symbols or 
 code. 
 
 Consciousness is direct experience, which cannot be put down in code 
 any more than life can be put down in code. It is personal and not publicly 
 available. 
 
 Roger Clough, rclo...@verizon.net 
 10/7/2012 
 Forever is a long time, especially near the end. -Woody Allen 
 
 
 - Receiving the following content - 
 From: John Clark 
 Receiver: everything-list 
 Time: 2012-10-06, 13:56:30 
 Subject: Re: Zombieopolis Thought Experiment 
 
 
 On Fri, Oct 5, 2012 at 6:29 PM, Craig Weinberg wrote: 
 
 
 
 “I'm openly saying that a high school kid can make a robot that behaves 
 sensibly with just a few transistors.” 
 
 
 Only because he lives in a universe in which the possibility of teleology 
 is fully supported from the start. 
 
 
 We know with absolute certainty that the laws of physics in this universe 
 allow for the creation of consciousness, we may not know how they do it but 
 we know for a fact that it can be done. So how on Earth does that indicate 
 that a conscious computer is not possible? Because it doesn't fart? 
 
 ? 
 you have erroneously assumed that intelligence is possible without sense 
 experience. 
 
 No, I am assuming the exact OPPOSITE! In fact I'm not even assuming, I know 
 for a fact that intelligent behavior WITHOUT consciousness confers an 
 Evolutionary advantage, and I know for a fact that intelligent behavior WITH 
 consciousness confers no additional Evolutionary advantage (and if you 
 disagree with that point then you must believe that the Turing Test works 
 for consciousness too and not just intelligence). And in spite of all this I 
 know for a fact that Evolution DID produce consciousness at least once, 
 therefore the only conclusion is that consciousness is a byproduct of 
 intelligence. 
 
 
 
 Adenine and Thymine don't have purpose in seeking to bind with each other? 
 
 
 I don't even know what a question like that means, whose purpose do you 
 expect Adenine and Thymine to serve? 
 
 
 
 How do you know? 
 
 
 I know because I have intelligence and Adenine and Thymine do not know 
 because they have none, they only have cause and effect. 
 
 
 
 How is it different from our purpose in staying in close proximity to 
 places to eat and sleep? 
 
 
 And to think that some people berated me for anthropomorphizing future 
 supercomputers and here you are... anthropomorphizing simple chemicals. 
 
 
 
 Why is everything aware, why isn't everything not aware? 
 
 
 

Mysterious Algorithm Was 4% of Trading Activity Last Week

2012-10-09 Thread Craig Weinberg
Shades of things to come. What happens when we plug the economy of the 
entire world into mindless machines programmed to go to war against numbers?

*Mysterious Algorithm Was 4% of Trading Activity Last Week*

http://www.cnbc.com/id/49333454

A single mysterious computer program that placed orders — and then 
subsequently canceled them — made up 4 percent of all quote traffic in the 
U.S. stock market last week, according to the top tracker of high-frequency 
trading activity (http://www.cnbc.com/id/15837548/?cid=187385High_Frequency_Trading). 
The motive of the algorithm is still unclear. 

The program placed orders in 25-millisecond bursts involving about 500 
stocks, according to Nanex, a market data firm. The algorithm never 
executed a single trade, and it abruptly ended at about 10:30 a.m. Friday. 

“Just goes to show you how just one person can have such an outsized impact 
on the market,” said Eric Hunsader (http://www.cnbc.com/id/49216434/), 
head of Nanex (http://nanex.net/) and the No. 1 detector of trading 
anomalies watching Wall Street today. “Exchanges are just not monitoring 
it.” 

Hunsader’s sonar picked up that this was a single high-frequency trader 
after seeing the program’s pattern (200 fake quotes, then 400, then 1,000) 
repeated over and over. Also, it was being routed from the same place, the 
Nasdaq (http://data.cnbc.com/quotes/COMP). 
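
(As a rough sense of how a tracker might flag such a signature -- this is only a
hypothetical sketch over an invented event format, not Nanex's actual method --
one can bucket quote messages into 25-millisecond windows per symbol and mark
windows whose bursts consist entirely of quotes that get cancelled, with no
executions.)

    # Hypothetical sketch: flag quote-stuffing-like bursts in a quote stream.
    # The (timestamp_ms, symbol, action) layout is invented for illustration.
    from collections import defaultdict

    WINDOW_MS = 25     # burst window reported in the article
    MIN_BURST = 200    # smallest repeated burst size reported (200, 400, 1,000)

    def find_stuffing(events):
        # events: iterable of (timestamp_ms, symbol, action),
        # with action in {'quote', 'cancel', 'trade'}
        buckets = defaultdict(lambda: {'quote': 0, 'cancel': 0, 'trade': 0})
        for ts, sym, action in events:
            buckets[(sym, ts // WINDOW_MS)][action] += 1
        return [key for key, c in buckets.items()
                if c['quote'] >= MIN_BURST
                and c['cancel'] >= c['quote']
                and c['trade'] == 0]

    # Tiny synthetic example: 200 quotes and 200 cancels for XYZ in one window.
    demo = [(3, 'XYZ', 'quote')] * 200 + [(20, 'XYZ', 'cancel')] * 200
    print(find_stuffing(demo))   # -> [('XYZ', 0)]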


“My guess is that the algo was testing the market, as high-frequency 
frequently does,” says Jon Najarian, co-founder of TradeMonster.com. “As 
soon as they add bandwidth, the HFT crowd sees how quickly they can top out 
to create latency.” 

(Read More: Unclear What Caused Kraft Spike: Nanex Founder -- 
http://www.cnbc.com/id/49216434/)

Translation: the ultimate goal of many of these programs is to gum up the 
system so it slows down the quote feed to others and allows the computer 
traders (with their co-located servers at the exchanges) to gain a 
money-making arbitrage opportunity. 


The scariest part of this single program was that its millions of quotes 
accounted for 10 percent of the bandwidth that is allowed for trading on 
any given day, according to Nanex. (The size of the bandwidth pipe is 
determined by a group made up of the exchanges called the Consolidated 
Quote System.) 

(Read More: Cuban, Cooperman: Curb High-Frequency Trading -- 
http://www.cnbc.com/id/49216430/)

“This is pretty out there to see this affect this many stocks at the same 
time,” said Hunsader. High frequency traders are doing anything to “tip the 
odds in their favor.” 

A Senate panel at the end of September sought answers on high-frequency 
trading as investigators look into the best way to stop wealth-destroying 
events such as the Knight Capital computer glitch in August and the market 
“Flash Crash” two years ago. 

(Read More: Ex-Insider Calls High-Frequency Trading ‘Cheating’ -- 
http://www.cnbc.com/id/49103708/?Ex_Insider_Calls_High_Frequency_Trading_Cheating)

Regulators are trying to see how they can rein in the practice, which 
accounts for 70 percent of trading each day, without slowing down progress 
and profits for Wall Street and the U.S. exchanges. 

RELATED LINKS

   - Cuban, Cooperman: Curb High-Frequency Trading - http://www.cnbc.com/id/49216430
   - Do Markets Need a 'Kill Switch'? - http://www.cnbc.com/id/49245253
   - Ex-Insider: HFT Is 'Cheating' - http://www.cnbc.com/id/49103708
   - Will Glitches Get Worse? - http://www.cnbc.com/id/48464725

“I feel a tax on order stuffing is what the markets need at this point,” 
said David Greenberg of Greenberg Capital. “This will cut down on the 
number of erroneous bids and offers placed into the market at any given 
time and should help stabilize the trading environment.” 

Hunsader warned that regulators better do something fast, speculating that 
this single program could have led to something very bad if big news broke 
or a sell-off occurred and one entity was hogging this much of the system 

-- 
You received this message because you are subscribed to the Google Groups 
Everything List group.
To view this discussion on the web visit 
https://groups.google.com/d/msg/everything-list/-/aVa0FXmOt-8J.
To post to this group, send email to everything-list@googlegroups.com.
To unsubscribe from this group, send email to 
everything-list+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/everything-list?hl=en.



Re: Re: Zombieopolis Thought Experiment

2012-10-09 Thread Craig Weinberg


On Tuesday, October 9, 2012 6:38:24 AM UTC-4, rclough wrote:

 Hi Craig Weinberg   

 They can only disagree about experiences that are spoken. 


You mean they can only verbally disagree. It is pretty clear that they can 
disagree about their taste in things without having spoken about them. 

Craig

-- 
You received this message because you are subscribed to the Google Groups 
Everything List group.
To view this discussion on the web visit 
https://groups.google.com/d/msg/everything-list/-/ex4ZOvVGYCAJ.
To post to this group, send email to everything-list@googlegroups.com.
To unsubscribe from this group, send email to 
everything-list+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/everything-list?hl=en.



Re: more firewalls

2012-10-09 Thread Richard Ruquist
Hi Roger,
What makes you think that what you claim is true?
Richard

On Tue, Oct 9, 2012 at 8:19 AM, Roger Clough rclo...@verizon.net wrote:
 Hi Richard Ruquist

 Nature has put a firewall between subjective entities such as monads
 and objective entities such as BECs or the manifolds.
 When I said "attached" I should have said "associated with".
 There are no physical connections, only logical ones.

 Roger Clough, rclo...@verizon.net
 10/9/2012
 Forever is a long time, especially near the end. -Woody Allen


 - Receiving the following content -
 From: Richard Ruquist
 Receiver: everything-list
 Time: 2012-10-08, 12:35:34
 Subject: Re: Re: Can computers be conscious ? Re: Zombieopolis 
 ThoughtExperiment


 Roger,
 Monads are everywhere, inside computers
 as well as humans, rocks and free space.
 Whatever allows monads to connect to physical objects
 may be operative for inanimates as well as animates.

 So the first step is to identify the connecting mechanism.

 For physical consciousness I conjecture the connection
 is based on BECs (Bose-Einstein Condensates)
 in the monadic mind entangled with BECs in the brain.

 It has been demonstrated experimentally
 that BECs of disparate substances can still be entangled.
 So once a computer is designed with BECs as in the human brain
 then it may be capable of consciousness.
 Richard


 On Mon, Oct 8, 2012 at 9:25 AM, Roger Clough  wrote:
 Hi Richard Ruquist

 I may have given that impression, sorry, but
 a monad can only make what's inside do what it can do.

 Human and animal monads can both feel, so they can be conscious.
 But a rock is at best unconscious as it cannot feel or think.

 There's no way to tell what faculties a computer has.

 Roger Clough, rclo...@verizon.net
 10/8/2012
 Forever is a long time, especially near the end. -Woody Allen


 - Receiving the following content -
 From: Richard Ruquist
 Receiver: everything-list
 Time: 2012-10-07, 11:06:17
 Subject: Re: Can computers be conscious ? Re: Zombieopolis Thought Experiment


 Roger,

 If human consciousness comes from attached monads, as I think you have 
 claimed,
 then why could not these monads attach to sufficiently complex computers
 as well.
 Richard

 On Sun, Oct 7, 2012 at 8:17 AM, Roger Clough wrote:
 Hi John Clark

 Unless computers can deal with inextended objects such as
 mind and experience, they cannot be conscious.

 Consciousness is direct experience, computers can only deal in descriptions 
 of experience.

 Everything that a computer does is, to my knowledge, at least
 in principle publicly available, since it uses publicly available symbols 
 or code.

 Consciousness is direct experience, which cannot be put down in code
 any more than life can be put down in code. It is personal and not publicly 
 available.

 Roger Clough, rclo...@verizon.net
 10/7/2012
 Forever is a long time, especially near the end. -Woody Allen


 - Receiving the following content -
 From: John Clark
 Receiver: everything-list
 Time: 2012-10-06, 13:56:30
 Subject: Re: Zombieopolis Thought Experiment


 On Fri, Oct 5, 2012 at 6:29 PM, Craig Weinberg wrote:



 “I'm openly saying that a high school kid can make a robot that behaves 
 sensibly with just a few transistors.”


 Only because he lives in a universe in which the possibility of teleology 
 is fully supported from the start.


 We know with absolute certainty that the laws of physics in this universe 
 allow for the creation of consciousness, we may not know how they do it but 
 we know for a fact that it can be done. So how on Earth does that indicate 
 that a conscious computer is not possible? Because it doesn't fart?

 ?
 you have erroneously assumed that intelligence is possible without sense 
 experience.

 No, I am assuming the exact OPPOSITE! In fact I'm not even assuming, I know 
 for a fact that intelligent behavior WITHOUT consciousness confers an 
 Evolutionary advantage, and I know for a fact that intelligent behavior 
 WITH consciousness confers no additional Evolutionary advantage (and if you 
 disagree with that point then you must believe that the Turing Test works 
 for consciousness too and not just intelligence). And in spite of all this 
 I know for a fact that Evolution DID produce consciousness at least once, 
 therefore the only conclusion is that consciousness is a byproduct of 
 intelligence.



 Adenine and Thymine don't have purpose in seeking to bind with each other?


 I don't even know what a question like that means, whose purpose do you 
 expect Adenine and Thymine to serve?



 How do you know?


 I know because I have intelligence and Adenine and Thymine do not know 
 because they have none, they only have cause and effect.



 How is it different from our purpose in staying in close proximity to 
 places to eat and sleep?


 And to think that some people berated me for anthropomorphizing future 
 supercomputers and here you are... anthropomorphizing simple chemicals.



 Why is everything aware, why isn't everything not aware? 

Re: Only you can know if you actually have intelligence

2012-10-09 Thread Stathis Papaioannou
On Tue, Oct 9, 2012 at 10:33 PM, Roger Clough rclo...@verizon.net wrote:
 Hi meekerdb

 Only you can know if you actually have intelligence,
 although you can appear to have intelligence (as if).
 You can be tested for it.

 Thus comp is not different from us, or at least it has the same
 limitations.

I think you mean consciousness rather than intelligence.


-- 
Stathis Papaioannou

-- 
You received this message because you are subscribed to the Google Groups 
Everything List group.
To post to this group, send email to everything-list@googlegroups.com.
To unsubscribe from this group, send email to 
everything-list+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/everything-list?hl=en.



Popper's faulty epistemology

2012-10-09 Thread Roger Clough
Hi Evgenii Rudnyi  

Popper's three worlds are related to, but are not exactly, Peirce's
three categories:

World 1 is the objective world, which I would have to call Category 0.

World 2 is what Popper calls subjective reality, or what Peirce called Firstness.

World 3 is Popper's objective knowledge, which is Peirce's Thirdness.

Popper may have included world 2 in what Peirce called Secondness, but 
it's not clear. Secondness is his missing step: it's the one in which
your mind makes sense of your subjective perception. My own
understanding is that your mind compares what you see with what you 
already know and either identifies it as such or modifies it into
another, newly invented or associated description. If you
see two apples, it then calls the image two apples.

Thirdness is what you then call it and can express to others.


Roger Clough, rclo...@verizon.net 
10/9/2012  
Forever is a long time, especially near the end. -Woody Allen 


- Receiving the following content -  
From: Evgenii Rudnyi  
Receiver: everything-list  
Time: 2012-10-09, 03:09:48 
Subject: Re: The real reasons we don't have AGI yet 


On 08.10.2012 20:45 Alberto G. Corona said the following: 
 Deutsch is right about the need to advance in Popperian 
 epistemology, which ultimately is evolutionary epistemology. 

You may want to read Three Worlds by Karl Popper. Then you see where to  
Popperian epistemology can evolve. 

“To sum up, we arrive at the following picture of the universe. There  
is the physical universe, world 1, with its most important sub-universe,  
that of the living organisms. World 2, the world of conscious  
experience, emerges as an evolutionary product from the world of  
organisms. World 3, the world of the products of the human mind,  
emerges as an evolutionary product from world 2.” 

“The feedback effect between world 3 and world 2 is of particular  
importance. Our minds are the creators of world 3; but world 3 in its  
turn not only informs our minds, but largely creates them. The very idea  
of a self depends on world 3 theories, especially upon a theory of time  
which underlies the identity of the self, the self of yesterday, of  
today, and of tomorrow. The learning of a language, which is a world 3  
object, is itself partly a creative act and partly a feedback effect;  
and the full consciousness of self is anchored in our human language.” 

Evgenii 
--  

http://blog.rudnyi.ru/2012/06/three-worlds.html 

--  
You received this message because you are subscribed to the Google Groups 
Everything List group. 
To post to this group, send email to everything-list@googlegroups.com. 
To unsubscribe from this group, send email to 
everything-list+unsubscr...@googlegroups.com. 
For more options, visit this group at 
http://groups.google.com/group/everything-list?hl=en.

-- 
You received this message because you are subscribed to the Google Groups 
Everything List group.
To post to this group, send email to everything-list@googlegroups.com.
To unsubscribe from this group, send email to 
everything-list+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/everything-list?hl=en.



Re: Nature's firewall

2012-10-09 Thread Stathis Papaioannou
On Tue, Oct 9, 2012 at 11:07 PM, Roger Clough rclo...@verizon.net wrote:
 Hi Stathis,

 The separation you missed is that mind and consciousness
 are subjective entities(not shareable), while computations
 are objective (shareable).

But we don't know the subjective qualities of a given computation, do we?

-- 
Stathis Papaioannou

-- 
You received this message because you are subscribed to the Google Groups 
Everything List group.
To post to this group, send email to everything-list@googlegroups.com.
To unsubscribe from this group, send email to 
everything-list+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/everything-list?hl=en.



the self as active agent

2012-10-09 Thread Roger Clough
Hi meekerdb  

IMHO the self is an active agent, something like Maxwell's Demon,
that can intelligently sort raw experiences into meaningful bins, such
as Kant's categories, thus giving them some meaning.


Roger Clough, rclo...@verizon.net 
10/9/2012  
Forever is a long time, especially near the end. -Woody Allen 


- Receiving the following content -  
From: meekerdb  
Receiver: everything-list  
Time: 2012-10-09, 02:16:22 
Subject: Re: [foar] Re: The real reasons we don't have AGI yet 


On 10/8/2012 3:49 PM, Stephen P. King wrote:  
Hi Russell,  

Question: Why has little if any thought been given in AGI to self-modeling 
and some capacity to track the model of self under the evolutionary 
transformations?  

It's probably because AI's have not needed to operate in environments where 
they need a self-model.  They are not members of a social community.  
Some simpler systems, like Mars Rovers, have limited self-models (where am I, 
what's my battery charge,...) that they need to perform their functions, 
but they don't have general intelligence (yet). 

Brent

-- 
You received this message because you are subscribed to the Google Groups 
Everything List group.
To post to this group, send email to everything-list@googlegroups.com.
To unsubscribe from this group, send email to 
everything-list+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/everything-list?hl=en.



Re: Conjoined Twins

2012-10-09 Thread Craig Weinberg


On Tuesday, October 9, 2012 6:32:19 AM UTC-4, rclough wrote:

  Hi Craig Weinberg 
  
 The subjective aspect (Firstness), some of which apparently each twin has, 
 is 
 not shareable, only descriptions of it (Thirdness) are shareable. 


Maybe not in these twins, but in these other, brain conjoined twins, 
Firstness IS SHARED.

http://www.youtube.com/watch?v=YWDsXa5nNbI  (start at 5:50  if you want to 
skip the human interest stuff)

Proof.

Craig

 
  What is shareable is Thirdness. What cannot be shared is Firstness.
 Thirdness is the description of Firstness. 

 Roger Clough, rcl...@verizon.net 
 10/9/2012  
 Forever is a long time, especially near the end. -Woody Allen 


 - Receiving the following content -  
 From: Craig Weinberg  
 Receiver: everything-list  
 Time: 2012-10-08, 12:02:27 
 Subject: Conjoined Twins 


 Have a look at the first few minutes of this show with conjoined twins Abby 
 and Brittany: 


 http://tlc.howstuffworks.com/tv/abby-and-brittany/videos/big-moves.htm 

 You can see that although they do not share the same brain they clearly share 
 aspects of the same mind. They often speak in unison but they can disagree 
 with each other. This can be interpreted to mean that they are similar 
 machines and therefore are able to generate the same functions 
 simultaneously, but then how can they voluntarily disagree? To me, this shows 
 how fundamentally different subjectivity and will is from computation, 
 information, or even physics. Even though I think subjectivity is physical, 
 it's because physics is subjective, and the way that happens is via intention 
 through time, rather than extension across space. The words they say are not 
 being transmitted from inside one skull to another, even though Brittany 
 seems to be echoing Abby in the sense that she is in a more subservient role 
 in expressing what they are saying, the echo is not meaningfully delayed - 
 she is not listening to Abby's words with her ears and then imitating her, 
 she is feeling the meaning of what is being said at nearly the same time. 


 I think that Bruno would say that this illustrates the nonlocality of 
 arithmetic as each person is a universal machine who is processing similar 
 data with similar mechanisms, but I see real-time Quorum Mechanics. They are 
 speaking more or less 'in concert'. Were they machines, I would expect that 
 they could get out of synch. One could just start repeating the other five 
 seconds later, or they could lapse into an infinite regress of echoing. 
 Surely the circuitry of such a rare instrument would not and could not evolve 
 rock solid error corrective anticipation for this. 


 --  
 You received this message because you are subscribed to the Google Groups 
 Everything List group. 

 To view this discussion on the web visit 
 https://groups.google.com/d/msg/everything-list/-/TGERtHlMkLIJ. 
  To post to this group, send email to 
  everyth...@googlegroups.com. 

 To unsubscribe from this group, send email to 
  everything-list+unsubscr...@googlegroups.com. 
 For more options, visit this group at 
 http://groups.google.com/group/everything-list?hl=en.


-- 
You received this message because you are subscribed to the Google Groups 
Everything List group.
To view this discussion on the web visit 
https://groups.google.com/d/msg/everything-list/-/ssA5349ARf8J.
To post to this group, send email to everything-list@googlegroups.com.
To unsubscribe from this group, send email to 
everything-list+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/everything-list?hl=en.



Re: Conjoined Twins

2012-10-09 Thread Bruno Marchal


On 08 Oct 2012, at 18:02, Craig Weinberg wrote:

Have a look at the first few minutes of this show with conjoined  
twins Abby and Brittany:


http://tlc.howstuffworks.com/tv/abby-and-brittany/videos/big-moves.htm

You can see that although they do not share the same brain they  
clearly share aspects of the same mind. They often speak in unison  
but they can disagree with each other. This can be interpreted to  
mean that they are similar machines and therefore are able to  
generate the same functions simultaneously, but then how can they  
voluntarily disagree? To me, this shows how fundamentally different  
subjectivity and will is from computation, information, or even  
physics. Even though I think subjectivity is physical, it's because  
physics is subjective, and the way that happens is via intention  
through time, rather than extension across space. The words they say  
are not being transmitted from inside one skull to another, even  
though Brittany seems to be echoing Abby in the sense that she is in  
a more subservient role in expressing what they are saying, the echo  
is not meaningfully delayed - she is not listening to Abby's words  
with her ears and then imitating her, she is feeling the meaning of  
what is being said at nearly the same time.


I think that Bruno would say that this illustrates the nonlocality  
of arithmetic as each person is a universal machine who is  
processing similar data with similar mechanisms,


For non-locality, you need "same" instead of "similar".

Of course you can say that the if-then-else subroutine, defined in  
some functional way, acts non-locally in many different brains and  
computers, but that does not give rise to the non-locality we can  
observe from being in the UD, or in the Everett universal wave.





but I see real-time Quorum Mechanics. They are speaking more or less  
'in concert'. Were they machines, I would expect that they could get  
out of synch. One could just start repeating the other five seconds  
later, or they could lapse into an infinite regress of echoing.  
Surely the circuitry of such a rare instrument would not and could  
not evolve rock solid error corrective anticipation for this.



I think Brittany and Abby are two single individual persons.

Bruno


http://iridia.ulb.ac.be/~marchal/






Re: Zombieopolis Thought Experiment

2012-10-09 Thread Bruno Marchal


On 08 Oct 2012, at 19:35, meekerdb wrote:


On 10/8/2012 8:42 AM, John Clark wrote:


2) Intelligent behavior is NOT associated with subjective  
experience, in which case there is no reason for Evolution to  
produce consciousness and I have no explanation for why I am here,  
and I have reason to believe that I am the only conscious being in  
the universe.


There's a third possibility: Intelligent behavior is sometimes  
associated with subjective experience and sometimes not.  Evolution  
may have produced consciousness as a spandrel, an accident of the  
particular developmental path that evolution happened upon.  Or it  
may be that consciousness is necessarily associated with only  
certain kinds of intelligent behavior, e.g. those related to language.


Consciousness is when you bet on your consistency, or on a reality, to  
help yourself.


Consciousness precedes language, but follows perception and sensation.  
It unifies the interpretation of the senses, making the illusion  
possible.


Bruno



http://iridia.ulb.ac.be/~marchal/






Re: On Beauty

2012-10-09 Thread Platonist Guitar Cowboy
On Tue, Oct 9, 2012 at 2:03 PM, Roger Clough rclo...@verizon.net wrote:

 Hi Platonist Guitar Cowboy

 The definition of beauty that I like is that
 beauty is unity in diversity.



Hi Roger,

As I mentioned, I think it's very hard, perhaps impossible, to tie down like
that, even though I think I can grasp what you mean. For instance,
concerning the definition you mentioned: is that diversity harmoniously
completing itself, starkly contrasting itself, even in conflict with itself
to appear unified on some other level? Picking up the last: you can have a
narrative pitting protagonists against each other say in a film with heavy
conflict. And their conflict produces a more convincing unified whole that
is beautiful. Then take the wholeness of humans or machines on this planet
and look at the conflict of war.

Setting aside, for now, that people die physically in wars and not in fiction
(there are many stuntmen that have died...) and pretending all were fiction
to exercise more aesthetic, instead of moral, judgement: in both cases you
have diversity as conflict and a wholeness (protagonists/whole film against
vivid description of humanity in war). Still, it's really difficult to
answer whether one is more beautiful than the other in some absolute sense,
or to pin down properties or hierarchies that would make this so. But show
a person both films you've made, and they will prefer one over the other.
In other words, we know it when we meet it, or we see it in past or future
through introspection. So employing fuzzy metaphors instead of defining it:
it is a wild animal hard to catch, but universally present and always
easily accessible.

m



 Roger Clough, rclo...@verizon.net
 10/9/2012
 Forever is a long time, especially near the end. -Woody Allen


 - Receiving the following content -
 From: Platonist Guitar Cowboy
 Receiver: everything-list
 Time: 2012-10-08, 11:58:53
 Subject: Re: On Zuckerman's paper


 Hi Stephen, Bruno, and Jason,

 Do I understand correctly that comp requires a relative measure on the set
 of all partial computable functions, and that for Stephen "Both abstractions,
 such as numbers and their truths, and physical worlds must emerge together
 from a primitive ground which is neutral in that it has no innate
 properties at all other than necessary possibility. It merely exists"?

 If so, naively I ask then: Why is beauty, in the imho non-chimeric sense
 posed by Plotinus in Ennead I.6 On Beauty, not a candidate for
 approximating that set, or for describing that which has no innate
 properties?

 Here is the translation by Stephen MacKenna:

 http://eawc.evansville.edu/anthology/beauty.htm

 Because, what drew me to Zuckerman was just a chance find on YouTube...
 and seeing infinite descending chains, decorations, self-reference etc.
 all tied together in a set theory context, I didn't think "Wow, that's
 true" but simply "hmm, that's nice, maybe they'll elaborate a more precise
 frame." I know, people want to keep art and science separate. But I am
 agnostic on this as composing and playing music just bled into engineering
 and mathematical problems and solutions, as well as programming and the
 computer on their own. I apologize in advance, if this is off-topic as I
 find the discussion here fascinating and hate interrupting it.

 Mark







Re: The real reasons we don’t have AGI yet

2012-10-09 Thread Bruno Marchal


On 08 Oct 2012, at 20:50, Craig Weinberg wrote:


Deutsch is right.


Deutsch is not completely wrong, just unaware of the progress in  
theoretical computer science, which explains why some paths are  
necessarily long, and can help to avoid the confusion between  
consciousness, intelligence, competence, imagination, creativity.


I have already explained why. Since recently I think that all UMs are  
already conscious, including the computer you are looking at right  
now. But that consciousness is still disconnected, or only trivially  
connected, to you and the environment.


I have always thought that PA, ZF, and all Löbian machines are as conscious  
as you and me, but still not connected, except on mathematical notions.  
I have explained and justified that proving in a formal theory, or in a  
way that we think we could formalize if we had the time, is a way to  
actually talk to such a machine, and the 8 hypostases are what any such  
little machine can already tell us. They are a sort of reincarnation of  
Plotinus, to put it that way.


It is easy to confuse them with zombies, as the actual dialog has to be  
made by hand, with perspiration. But such machines are already as  
conscious as all Löbian entities, from the octopus to us.


Consciousness and intelligence are both not definable, and have  
complex positive and negative feedback on competence.


General intelligence of machine needs *us* opening our mind.

The singularity is in the past. Now we can only make UMs as deluded as  
us, for better or for worse. They already have a well-defined self, a  
representation of the self, some connection with truth (where the  
experience will come from), but here the organic entities have billions  
of years of advantage. But they evolve also, and far quicker than the  
organic.


No progress in AI? I see explosive progress. Especially in the 1930s  
for the main thing: the discovery of the Universal Machine (UM).





Searle is right.


Searle's argument is invalid. Old discussion.
He confuses levels of description. There is nothing to add to  
Hofstadter and Dennett's critique of the argument in The Mind's I.


It is the same error as confusing proving A and emulating a machine  
proving A.
ZF can prove the consistency of PA, and PA cannot. But PA can prove  
that ZF can prove the consistency of PA. The first proof provides an  
emulation of the second, but PA and ZF keep their distinct identities  
in that process.
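
A compact restatement of those three provability facts, in standard notation
(the symbols and the Sigma_1 remark are my gloss; Bruno states them only in words):

  \mathrm{ZF} \vdash \mathrm{Con}(\mathrm{PA})        % ZF proves PA consistent
  \mathrm{PA} \nvdash \mathrm{Con}(\mathrm{PA})       % Gödel's second incompleteness theorem
  \mathrm{PA} \vdash \mathrm{Prov}_{\mathrm{ZF}}(\ulcorner\mathrm{Con}(\mathrm{PA})\urcorner)
                                                      % the existence of ZF's proof is a true
                                                      % Sigma_1 fact, so PA can verify it

The third line is the "emulation": PA never proves Con(PA) itself, it only proves
that ZF does.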


Bruno



Genuine AGI can only come when thoughts are driven by feeling and  
will rather than programmatic logic. It's a fundamental  
misunderstanding to assume that feeling can be generated by  
equipment which is incapable of caring about itself. Without  
personal investment, there is no drive to develop right hemisphere  
awareness - to look around for enemies and friends, to be vigilant.  
These kinds of capacities cannot be burned into ROM, they have to be  
discovered through unscripted participation. They have to be able to  
lie and have a reason to do so.


I'm not sure about Deutsch's purported Popper fetish, but if that's  
true, I can see why that would be the case. My hunch is that  
although Ben Goertzel is being fair to Deutsch, he may be distorting  
Deutsch's position somewhat, in that I question whether he is really  
suggesting that we invest in developing philosophy instead of  
technology. Maybe he is, but it seems like an exaggeration. It seems  
to me that Deutsch is advocating the very reasonable position that  
we evaluate our progress with AGI before doubling down on the same  
strategy for the next 60 years. Nobody wants to cut off AGI funding  
- certainly not me, I just think that the approach has become  
unscientific and sentimental like alchemists with their dream of  
turning lead into gold. Start playing with biology and maybe you'll  
have something. It will be a little messier though, since with  
biology and unlike with silicon computers, when you start getting  
close to something with human like intelligence, people tend to  
object when you leave twitching half-persons moaning around the  
laboratory. You will know you have real AGI because there will be a  
lot of people screaming.


Craig




http://iridia.ulb.ac.be/~marchal/




Re: What Kant did: Consciousness is a top-down structuring of bottom-up sensory info

2012-10-09 Thread Bruno Marchal


On 08 Oct 2012, at 21:12, Alberto G. Corona wrote:


Bruno:

It could be that the indeterminacy in the I means that everything else
is not a machine, but supposedly, an hallucination.


If reified as real, which the machine is obliged to do.



But this hallucination has a well defined set of mathematical
properties that are communicable to other  hallucinated expectators.


The hallucination has communicable components, and in the case of  
physics, sharable ones. Like a Second Life video game, except that its  
roots are more solid, as they rely on arithmetic directly. (No bugs at  
that level.)





This means that something is keeping the picture coherent.


Yes. The laws of arithmetic (or the laws of your favorite UM: it does  
not matter which one, as that would change only superficially the  
ontology, but not what is really real, which is epistemological).  
Physics is first-person plural epistemology with comp, as I argue at  
least.





If that
something is not computation or  computations, what is the nature of
this well behaving hallucination according with your point of view?


Computation = Sigma_1 truth.
But from inside, the machines are confronted with all Sigma_i truths, with  
oracles, and even beyond. The nature of the well-behaved  
hallucination is truth, the whole arithmetical truth (and beyond, as  
we can't know that comp is true, and that plays some role).
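
For readers without the arithmetical hierarchy in mind, the identification used
here is the textbook one (my gloss, not part of the post):

  Sigma_1 sentence:  \exists x\, \varphi(x), with \varphi a bounded (\Delta_0) formula.
  Example:           "machine e halts on input n"  \equiv  \exists t\, T(e, n, t)
                     (Kleene's T predicate; t codes a terminating computation).

So true statements about computations are exactly the Sigma_1 truths, which is
the sense of "Computation = Sigma_1 truth" above.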


Bruno






2012/10/7 Bruno Marchal marc...@ulb.ac.be:


On 07 Oct 2012, at 15:11, Alberto G. Corona wrote:



2012/10/7 Bruno Marchal marc...@ulb.ac.be



On 07 Oct 2012, at 12:32, Alberto G. Corona wrote:

Hi Roger:

... and cognitive science , which study the hardware and  
evolutionary
psychology (that study the software or mind) assert that this is  
true.



Partially true, as both the mainstream cognitive science and  
psychology
still does not address the mind-body issue, even less the comp  
particular
mind-body issue. In fact they use comp + weak materialism, which  
can be

shown contradictory(*).




The Kant idea that even space and time are creations of the mind is
crucial for the understanding and to compatibilize the world of  
perceptions
and phenomena with the timeless, reversible,  mathematical  nature  
of  the

laws of physics that by the way, according with M Theory, have also
dualities between the entire universe and the interior of a brane  
on the

planck scale (we can not know if we live in such a small brane).


OK. No doubt that Kant was going in the right (with respect to  
comp at

least) direction. But Kant, for me, is just doing 1/100 of what the
neoplatonists already did.



I don´t assume either if  this mathematical nature is or not the  
ultimate

nature or reality


Any Turing universal part of it is enough for the ontology, in the  
comp
frame. For the epistemology, no mathematical theories can ever be  
enough.
Arithmetic viewed from inside is bigger than what *any* theory can  
describe
completely. This makes comp preventing any text to capture the  
essence of
what being conscious can mean, be it a bible, string theory, or  
Peano
Arithmetic. In a sense such theories are like new person, and it  
put only

more mess in Platonia.




Probably the mind (or more specifically each instantiation of the  
mind

along the line of life in space-time) make  use a sort of duality in
category theory between topological spaces and algebraic  
structures (as

Stephen told me and he can explain you) .


Many dualities exist, but as I have try to explain to Stephen,  
mind and
matter are not symmetrical things if we assume comp. The picture  
is more

that matter is an iceberg tip of reality.

Even if matter is the tip of the iceberg, does the rest of it  
matter?



Without the rest (water), there would be no iceberg and no tip!



Can we know about it, this submerged computational nature?


In science we never know. But we can bet on comp, and then, we can  
know
relatively to that bet-theory. So with comp we know that the rest  
is the

external and internal math structures in arithmetic.



which phenomena produce the submerged part of this iceberg in the  
one that

we perceive?.


Arithmetic gives the submerged part. The UD complete execution  
gives it too.

The emerged part is given by the first person indeterminacy.




Multiverse hypothesis propose a collection of infinite icebergs,  
but this is
a way to avoid God and to continue with the speculative business.  
What the
computational nature of reality tries to explain or to avoid? . May  
be you

answered this questions a number of times, ( even to me and I did not
realize it)


Careful. Comp makes the observable reality of physics, and the non
observable reality of the mind, NON computational. Indeed it needs  
a God
(arithmetical truth). It explains also why God is NOT arithmetical  
truth as

we usually defined it (it is only an approximation).




By the way, Bruno, you try to demolish physicalism from below by  
proposing a

computational 

Re: AGI

2012-10-09 Thread Bruno Marchal


On 08 Oct 2012, at 22:07, John Mikes wrote:

Dear Richard, I think the lengthy text is Ben's article in  
response to D. Deutsch.
Sometimes I was erring in the belief that it is YOUR text, but no.  
Thanks for copying.
It is too long and too little organized for me to keep up with  
ramifications prima vista.
What I extracted from it are some remarks I will try to communicate  
to Ben (a longtime e-mail friend) as well.


I have my (agnostically derived) version of intelligence: the  
capability of reading 'inter'
lines (words/meanings). Apart from such human distinction: to  
realize the 'essence' of relations beyond vocabulary, or 'physical  
science' definitions.
Such content is not provided in our practical computing machines  
(although Bruno trans-leaps such barriers with his (Löb's) universal  
machine unidentified).



Unidentified? I give a lot of examples: PA, ZF, John Mikes, me,  
and the octopus.


In some sense they pass the mirror test well enough. That's enough for  
me to consider them, well, not just conscious, but as conscious as me,  
and you.
The differences are only in domain competence, and intelligence (in  
which case it might be that the octopus is more intelligent than us, as  
we are blinded by our competences).


It is possible that when competence grows intelligence decreases, but I  
am not sure.


Bruno


Whatever our (physical) machines can do is within the physical  
limits of information - the content of the actual MODEL of the  
world we live with by yesterday's knowledge, no advanced technology  
can transcend such limitations: there is no input to do so. This may  
be the limits for AI, and AGI as well. Better manipulation etc. do  
not go BEYOND.


Human mind-capabilities, however, (at least in my 'agnostic'  
worldview) are under the influences (unspecified) from the infinite  
complexity BEYOND our MODEL, without our knowledge and  
specification's power. Accordingly we MAY get input from more than  
the factual content of the MODEL. On such (unspecified) influences  
may be our creativity based (anticipation of Robert Rosen?) what  
cannot be duplicated by cutest algorithms in the best computing  
machines.
Our 'factual' knowable in the MODEL are adjusted to our mind's  
capability - not so even the input from the unknowable 'infinite  
complexity's' relations.


Intelligence would go beyond our quotidian limitations, not feasible  
for machines that work within such borders.


I may dig out relevant information from Ben's text in subsequent  
readings, provided that I get to it back.



Thanks again, it was a very interesting scroll-down

John Mikes



http://iridia.ulb.ac.be/~marchal/






Re: Universe on a Chip

2012-10-09 Thread Bruno Marchal


On 08 Oct 2012, at 22:38, Craig Weinberg wrote:



If the universe were a simulation, would the constant speed of  
light correspond to the clock speed driving the simulation? In other  
words, the “CPU speed?”


As we are “inside” the simulation, all attempts to measure the speed  
of the simulation appear as a constant value.


Light “executes” (what we call “movement”) at one instruction per  
cycle.


Any device we built to attempt to measure the speed of light is also  
inside the simulation, so even though the “outside” CPU clock could  
be changing speed, we will always see it as the same constant value.


A “cycle” is how long it takes all the information in the universe  
to update itself relative to each other. That is all the speed of  
light really is. The speed of information updating in the universe…  
(more here http://www.quora.com/Physics/If-the-universe-were-a-simulation-would-the-constant-speed-of-light-correspond-to-the-clock-speed-driving-the-simulation-In-other-words-the-CPU-speed?)
I can make the leap from CPU clock frequency to the speed of light  
in a vacuum if I view light as an experienced event or energy state  
which occurs local to matter rather than literally traveling through  
space. With this view, the correlation between distance and latency  
is an organizational one, governing sequence and priority of  
processing rather than the presumed literal existence of racing  
light bodies (photons).


This would be consistent with your model of Matrix-universe on a  
meta-universal CPU in that light speed is simply the frequency at  
which the computer processes raw bits. The change of light speed  
when propagating through matter or gravitational fields etc wouldn’t  
be especially consistent with this model…why would the ghost of a  
supernova slow down the cosmic computer in one area of memory, etc?


The model that I have been developing suggests however that the CPU  
model would not lead to realism or significance though, and could  
only generate unconscious data manipulations. In order to have  
symbol grounding in genuine awareness, I think that instead of a CPU  
cranking away rendering the entire cosmos over and over as a bulwark  
against nothingness, I think that the cosmos must be rooted in  
stasis. Silence. Solitude. This is not nothingness however, it is  
everythingness. A universal inertial frame which loses nothing but  
rather continuously expands within itself by taking no action at all.


The universe doesn’t need to be racing to mechanically redraw the  
cosmos over and over because what it has drawn already has no place  
to disappear to. It can only seem to disappear through…

…
…
…
latency.

The universe as we know it then arises out of nested latencies. A  
meta-diffraction of symmetrically juxtaposed latency-generating  
methodologies. Size, scale, distance, mass, and density on the  
public side, richness, depth, significance, and complexity on the  
private side. Through these complications, the cosmic CPU is cast as  
a theoretical shadow, when the deeper reality is that rather than  
zillions of cycles per second, the real mainframe is the slowest  
possible computer. It can never complete even one cycle. How can it,  
when it has all of these subroutines that need to complete their  
cycles first?



?

If the universe is a simulation (which it can't be, by comp, but let us  
say), then if the computer clock is changed, the internal creatures  
will not see any difference. Indeed it is a way to understand that  
such a time does not need to be actualized. Like in COMP and GR.
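
A toy illustration of that invariance (a hypothetical sketch of mine, not anything
from Bruno or from the quoted Quora answer): the simulated observer can only count
internal ticks, so its measured "speed of light" comes out the same whatever
host-side delay, i.e. whatever external clock speed, we impose between update steps.

import time

def run_simulation(steps, host_delay_s):
    """Advance a trivial simulated world; the observer can only count internal ticks."""
    internal_ticks = 0       # all the simulated observer can measure
    light_position = 0       # "light" advances one cell per internal tick
    for _ in range(steps):
        time.sleep(host_delay_s)  # the host CPU's pace, invisible from inside
        light_position += 1       # one instruction per cycle, as the quote puts it
        internal_ticks += 1
    return light_position / internal_ticks   # measured speed: cells per internal tick

# Same internal measurement whether the host runs fast or slow.
print(run_simulation(100, host_delay_s=0.0))    # -> 1.0
print(run_simulation(100, host_delay_s=0.001))  # -> 1.0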


Bruno

http://iridia.ulb.ac.be/~marchal/






Re: Zombieopolis Thought Experiment

2012-10-09 Thread John Clark
On Mon, Oct 8, 2012  Craig Weinberg whatsons...@gmail.com wrote:

 Ok, which computers do you think have conscious experiences? Windows
 laptops? Deep Blue? Cable TV boxes?


How the hell should I know if computers have conscious experiences? How the
hell should I know if people have conscious experiences? All I know for
certain is that some things external to me display intelligent behavior and
some things do not, from that point on everything is conjecture and theory;
I happen to think that "intelligence is associated with consciousness" is a
pretty good theory but I admit it's only a theory and if your theory is
that you're the only conscious being in the universe I can't prove you
wrong.

 Is it a fact that you have conscious experiences?


Yes, however I have no proof of that, or at least none I can share with
anyone else, so I would understand if you don't believe me; however to
believe me but not to believe a computer that made a similar claim just
because you don't care for the elements out of which it is made would be
rank bigotry.

 Stimulation that you get thorough your senses of the outside environment
 does not control you.


The difference between influence and control is just one of degree not of
kind. Usually lots of things cause us to do what we do, if all of them came
from the outside then it's control; if only some of the causes were external
and some were internal, such as memory, then it's influence.

 intelligent behavior WITHOUT consciousness confers a Evolutionary
 advantage. Having difficulty with your reading comprehension?


  but what example or law are you basing this on? Who says this is a fact
 other than you?


It almost seems that you're trying to say that intelligent behavior gives
an organism no advantage over an organism that is stupid, but nobody is that
stupid; so what are you saying?

 Who claims to know that intelligence without consciousness exists?


I give up, who claims to know that intelligence without consciousness
exists?

 The only intelligent behavior I know with certainty that is always
 associated with subjective experience is my own. But I know with certainty
 there are 2 possibilities:
 1) Intelligent behavior is always associated with subjective experience,
 if so then if a computer beats you at any intellectual pursuit then it has
 a subjective experience, assuming of course that you yourself are
 intelligent. And I'll let you pick the particular intellectual pursuit for
 the contest.
 2) Intelligent behavior is NOT associated with subjective experience, in
 which case there is no reason for Evolution to produce consciousness and I
 have no explanation for why I am here, and I have reason to believe that I
 am the only conscious being in the universe.


  I choose 3) The existence of intelligent behavior is contingent upon
 recognition and interpretation by a conscious agent.


That's EXACTLY the same as #1, you're saying that intelligent behavior
without consciousness is impossible, I can't prove it but I suspect you're
probably right. And if we are right then a computer beating you at a
intellectual task is evidence that it is conscious, assuming only that you
yourself are intelligent and conscious.

 Behavior can be misinterpreted by a conscious agent as having a higher
 than actual quality of subjectivity when it doesn't


But that's what I'm asking, what behavior gave you the clue that it would
be a misinterpretation to attribute consciousness to something?

This started with your question "Which intelligent behavior do you know
that you can be certain exists without any subjective experience associated
with it?" I said there was no behavior to enable us to determine what is
conscious and what is not, all you're basically saying is that conscious
intellectual behavior is intellectual behavior in which consciousness is
involved; and I already knew that, and it is not helpful in figuring out
what is conscious and what is not.

 No being that we know of has become conscious by means of intelligence
 alone.


Other than ourselves we know with certainty of no other being that is
conscious PERIOD. All we can do is observe intelligent behavior and make
guesses from there.

 Every conscious being develops sensorimotor and emotional awareness
 before any cognitive intelligence arises.


How they hell do you know?

 Babies cry before they talk.


Yes, without a doubt babies exhibit crying behavior before talking
behavior, their brains need further development and they need to gain more
knowledge before they can advance from one sort of behavior to another; and
that is perfectly consistent with my belief that emotion is easy but
intelligence is hard.

 You think that every behavior in biology exists purely because of
 evolution


Every biological structure exists purely because of Evolution, however one
of those physical structures, the brain, allows for a far far richer range
of behavior than Evolution can provide, behaviors contingent on
astronomically complex 

Re: Zombieopolis Thought Experiment

2012-10-09 Thread Craig Weinberg


On Tuesday, October 9, 2012 10:17:41 AM UTC-4, Bruno Marchal wrote:



 Consciousness is when you bet in your consistency, or in a reality, to 
 help yourself. 

 Consciousness precedes language, but follows perception and sensation. 


Nice. It can be tricky because perception and sensation can both be seen as 
kinds of awareness and some people use the term consciousness as a synonym 
for awareness. This is not entirely incorrect. It's like saying that cash 
and credit cards are both kinds of money and that economics is a synonym for 
money. It can be if you want it to be, it's just a word that we define by 
consensus usage, but if we want to get precise, then I try to have a vague 
taxonomy of sensation < perception < feeling < awareness < consciousness so 
that consciousness is an awareness of awareness. The continuum is 
logarithmic, but not discretely so, because the nature of subjectivity 
is not discrete but runs the full spectrum from discrete to nebulous.
 

 It unifies the interpretation of the senses, making the illusion possible.


Here it is, Bruno. This is where I can see you saying exactly what I used 
to believe was true, but now I understand it 180 degrees away from the 
*whole* truth.

All that you have to do is drop the assumption that each sense is a 
separate discrete process built up from nothing and see it as a sieve, 
filtering out or receiving particular ranges of non-illusory experience. 
The filtered sensations do not need to be conditioned mechanically; 
they aren't objects which need to be assembled. The 
unification of the senses is like the nuclear force - unity is the a priori 
default, it is only the processes of the brain which modulate the 
obstruction of that unity. Sanity does not need to be propped up and 
scripted like a program, it is a familiar attractor (as opposed to strange 
attractor) of any given inertial frame.

The only illusion we have is when our non-illusory capacity to tell the 
difference between conflicting inertial frames of perception, cognition, 
sensation, etc recovers that difference and identifies with one sense frame 
over another, because of a perception of greater sense or significance. 
It's not subject to emulation. It actually has to make more sense to the 
person. The content doesn't matter. You can have a dream that makes no 
cognitive sense at all but without your waking life to compare it to, you 
have no problem accepting that there is a donkey driving you to work. 
Realism is not emergent functionally or assembled digitally from the bottom 
up, it is recovered apocatastatically from the top down.

Craig


 Bruno



 http://iridia.ulb.ac.be/~marchal/








Re: The real reasons we don’t have AGI yet

2012-10-09 Thread Bruno Marchal


On 08 Oct 2012, at 23:39, Russell Standish wrote:


On Mon, Oct 08, 2012 at 01:13:35PM -0400, Richard Ruquist wrote:

The real reasons we don’t have AGI yet
A response to David Deutsch’s recent article on AGI
October 8, 2012 by Ben Goertzel




Thanks for posting this, Richard. I was thinking of writing my own
detailed response to David Deutsch's op ed, but Ben Goertzel has done
such a good job, I now don't have to!

My response, similar to Ben's is that David does not convincingly
explain why Popperian epistemology is the secret sauce. In fact, it
is not even at all obvious how to practically apply Popperian
epistemology to the task at hand. Until some more detailed practical
proposal is put forward, the best I can say is, meh, I'll believe it
when it happens.


Strictly speaking, John Case has refuted Popperian epistemology(*), in  
the sense that he showed that some non-Popperian machines can recognize  
larger classes, and more classes, of phenomena than Popperian machines.  
Believing in some non-refutable theories can give an advantage with  
respect to some classes of phenomena.







The problem that exercises me (when I get a chance to exercise it) is
that of creativity. David Deutsch correctly identifies that this is  
one of

the main impediments to AGI. Yet biological evolution is a creative
process, one for which epistemology apparently has no role at all.


Not sure it is more creative than the UMs, the UD, the Mandelbrot set,  
or arithmetic.






Continuous, open-ended creativity in evolution is considered the main
problem in Artificial Life (and perhaps other fields). Solving it may
be the work of a single moment of inspiration (I wish), but more
likely it will involve incremental advances in topics such as
information, complexity, emergence and other such partly philosophical
topics before we even understand what it means for something to be
open-ended creative.


I agree. That's probably why people take time to understand that UMs  
and arithmetic are already creative.




Popperian epistemology, to the extent it has a
role, will come much further down the track.


Yes. With its good uses, and its misuses. Popper just made precise what  
science is, except for its criteria of interesting and good theories. In  
fact Popper's theory was a really interesting theory, in the sense of  
Popper, as it was refutable. But then people should not be so much  
astonished that it has been refuted (of course in a theoretical  
context(*)). I can accept that Popper's analysis has a wide spectrum  
where it works well, but in the foundations, it cannot be used as a dogma.


Bruno

(*) CASE J. & NGO-MANGUELLE S., 1979, Refinements of inductive inference by  
Popperian machines. Tech. Rep., Dept. of Computer Science, State Univ. of  
New York, Buffalo.



http://iridia.ulb.ac.be/~marchal/






Re: [foar] Re: The real reasons we don’t have AGI yet

2012-10-09 Thread Bruno Marchal


On 09 Oct 2012, at 08:16, meekerdb wrote:


On 10/8/2012 3:49 PM, Stephen P. King wrote:


Hi Russell,

Question: Why has little if any thought been given in AGI to  
self-modeling and some capacity to track the model of self under  
the evolutionary transformations?


It's probably because AI's have not needed to operate in  
environments where they need a self-model.  They are not members of  
a social community.  Some simpler systems, like Mars Rovers, have  
limited self-models (where am I, what's my battery charge,...) that  
they need to perform their functions, but they don't have general  
intelligence (yet).


Unlike PA and ZF and Löbian entities, which already have the maximal  
possible notion of self (both in the 3p and 1p sense).


But PA and ZF have no amount at all of reasonable local incarnation  
(reasonable with respect to doing things on Earth, or on Mars).
The Mars Rovers are far beyond PA and ZF in that matter, I mean in being  
connected to some real mundane life.
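
As an aside, the kind of "limited self-model (where am I, what's my battery
charge)" mentioned above is tiny in code terms; a hypothetical sketch (class
name, fields and thresholds are mine, not taken from any actual rover software):

from dataclasses import dataclass

@dataclass
class RoverSelfModel:
    """The few facts a rover tracks about itself: pose and power."""
    x_m: float = 0.0            # estimated position, metres
    y_m: float = 0.0
    heading_deg: float = 0.0
    battery_pct: float = 100.0

    def can_attempt(self, drive_cost_pct: float) -> bool:
        # refuse a traverse that would eat into a 20% safety reserve
        return self.battery_pct - drive_cost_pct >= 20.0

    def update_after_drive(self, dx_m: float, dy_m: float, cost_pct: float) -> None:
        self.x_m += dx_m
        self.y_m += dy_m
        self.battery_pct -= cost_pct

# The planner consults the self-model before committing to an action.
rover = RoverSelfModel()
if rover.can_attempt(drive_cost_pct=15.0):
    rover.update_after_drive(dx_m=3.0, dy_m=0.5, cost_pct=15.0)
print(rover)

Such a structure answers "where am I, what's my battery charge", but, as noted,
it is nothing like the 3p/1p notion of self available to a Löbian machine.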


Bruno





Brent



http://iridia.ulb.ac.be/~marchal/






Re: The real reasons we don’t have AGI yet

2012-10-09 Thread Alberto G. Corona
I think that natural selection is tautological (what has fitness is selected;
fitness is what is selected), but at the same time it is not empty, and it is
scientific because it can be falsified. At the same time, if it is agreed that
it is the direct mechanism that designs minds, then this is the perfect
condition for a foundation of epistemology, and an absolute meaning of truth.

2012/10/9 Bruno Marchal marc...@ulb.ac.be:

 On 08 Oct 2012, at 23:39, Russell Standish wrote:

 On Mon, Oct 08, 2012 at 01:13:35PM -0400, Richard Ruquist wrote:

 The real reasons we don’t have AGI yet
 A response to David Deutsch’s recent article on AGI
 October 8, 2012 by Ben Goertzel



 Thanks for posting this, Richard. I was thinking of writing my own
 detailed response to David Deutsch's op ed, but Ben Goertzel has done
 such a good job, I now don't have to!

 My response, similar to Ben's is that David does not convincingly
 explain why Popperian epistemology is the secret sauce. In fact, it
 is not even at all obvious how to practically apply Popperian
 epistemology to the task at hand. Until some more detailed practical
 proposal is put forward, the best I can say is, meh, I'll believe it
 when it happens.


 Strictly speaking, John Case has refuted Popperian epistemology(*), in the
 sense that he showed that some Non Popperian machine can recognize larger
 classes and more classes of phenomena than Popperian machine. Believing in
 some non refutable theories can give an advantage with respect of some
 classes of phenomena.






 The problem that exercises me (when I get a chance to exercise it) is
 that of creativity. David Deutsch correctly identifies that this is one of
 the main impediments to AGI. Yet biological evolution is a creative
 process, one for which epistemology apparently has no role at all.


 Not sure it is more creative than the UMs, the UD, the Mandelbrot set, or
 arithmetic.





 Continuous, open-ended creativity in evolution is considered the main
 problem in Artificial Life (and perhaps other fields). Solving it may
 be the work of a single moment of inspiration (I wish), but more
 likely it will involve incremental advances in topics such as
 information, complexity, emergence and other such partly philosophical
 topics before we even understand what it means for something to be
 open-ended creative.


 I agree. That's probably why people take time to understand that UMs and
 arithmetic are already creative.



 Popperian epistemology, to the extent it has a
 role, will come much further down the track.


 Yes. With is good uses, and its misuses. Popper just made precise what
 science is, except for its criteria of interesting and good theory. In fact
 Popper theory was a real interesting theory, in the sense of Popper, as it
 was refutable. But then people should not be so much astonished that it has
 been refuted (of course in a theoretical context(*)). I can accept that
 Popper analysis has a wide spectrum where it works well, but in the
 foundations, it cannot be used a dogma.

 Bruno

 (*) CASE J.  NGO-MANGUELLE S., 1979, Refinements of inductive inference by
 Popperian
 machines. Tech. Rep., Dept. of Computer Science, State Univ. of New-York,
 Buffalo.


 http://iridia.ulb.ac.be/~marchal/








-- 
Alberto.




Re: I believe that comp's requirement is one of as if rather than is

2012-10-09 Thread Bruno Marchal


On 09 Oct 2012, at 11:50, Roger Clough wrote:


Hi Alberto G. Corona  and Bruno,

Perhaps I can express the problem of solipsism as this.
To have a mind means that one can experience.


Hmm... Not really, with my terminology. A mind is not enough for an  
experience. You need a soul. It is a fixed point in a transformation  
of the mind to itself. I can conceive mind without soul. But OK. It is  
a detail perhaps here.






Experiences are subjective and thus cannot be actually shared,
the best one can do is share a description of the experience.


Not really. You can share the pleasure you have about a movie, by  
describing the movie and your feeling.


But, if you know your partner very well, you can share the experience  
of the movie, partially, by going together to the movie projection.

Sharing does not necessitate communication.






If one cannot actually share another's experience,
one cannot know if they actually had an experience--
that is, that they actually have a mind.


Indeed. But even in dreams we have instinctive empathy, and good  
reason (even if *sometimes* false) to bet that other people have experiences.






Comp seems to avoid this insurmountable problem
by avoiding the issue of whether the computer
actually had an experience, only that it appeared
to have an experience.  So comp's requirement
is as if rather than is.



Not at all. This is BEH-MEC (behavioral mechanism). Already STRONG-AI  
(weaker than comp) makes precise that it postulates that machines can  
be conscious, even independently of behavior. Then COMP is stronger  
than STRONG AI, as it postulates that YOU are a machine, and that your  
experience (which is of course assumed to exist for the rest to make  
sense) is invariant for some digital transformation.



Please try not to deform the hypothesis. Comp is a postulate in a  
theory of consciousness, experience, subjective life, etc. It is an  
axiom, or a hypothesis, or a question (quasi-synonyms for my purpose)  
of cognitive science.


We have COMP ===> STRONG AI ===> BEH-MEC,

But none of those arrows can be reversed, logically.

Bruno







Roger Clough, rclo...@verizon.net
10/9/2012
Forever is a long time, especially near the end. -Woody Allen


- Receiving the following content -
From: Alberto G. Corona
Receiver: everything-list
Time: 2012-10-08, 15:12:22
Subject: Re: What Kant did: Consciousness is a top-down structuring  
ofbottom-up sensory info



Bruno:

It could be that the indeterminacy in the I means that everything else
is not a machine, but supposedly, an hallucination.
But this hallucination has a well defined set of mathematical
properties that are communicable to other hallucinated expectators.
This means that something is keeping the picture coherent. If that
something is not computation or computations, what is the nature of
this well behaving hallucination according with your point of view?


2012/10/7 Bruno Marchal :


On 07 Oct 2012, at 15:11, Alberto G. Corona wrote:



2012/10/7 Bruno Marchal



On 07 Oct 2012, at 12:32, Alberto G. Corona wrote:

Hi Roger:

... and cognitive science , which study the hardware and  
evolutionary
psychology (that study the software or mind) assert that this is  
true.



Partially true, as both the mainstream cognitive science and  
psychology
still does not address the mind-body issue, even less the comp  
particular
mind-body issue. In fact they use comp + weak materialism, which  
can be

shown contradictory(*).




The Kant idea that even space and time are creations of the mind is
crucial for the understanding and to compatibilize the world of  
perceptions
and phenomena with the timeless, reversible, mathematical nature  
of the

laws of physics that by the way, according with M Theory, have also
dualities between the entire universe and the interior of a brane  
on the

planck scale (we can not know if we live in such a small brane).


OK. No doubt that Kant was going in the right (with respect to  
comp at

least) direction. But Kant, for me, is just doing 1/100 of what the
neoplatonists already did.



I don't assume either if this mathematical nature is or not the  
ultimate

nature or reality


Any Turing universal part of it is enough for the ontology, in the  
comp
frame. For the epistemology, no mathematical theories can ever be  
enough.
Arithmetic viewed from inside is bigger than what *any* theory can  
describe
completely. This makes comp preventing any text to capture the  
essence of
what being conscious can mean, be it a bible, string theory, or  
Peano
Arithmetic. In a sense such theories are like new person, and it  
put only

more mess in Platonia.




Probably the mind (or more specifically each instantiation of the  
mind

along the line of life in space-time) make use a sort of duality in
category theory between topological spaces and algebraic  
structures (as

Stephen told me and he can explain you) .


Many dualities exist, but as I have try to explain to Stephen, 

Re: [foar] Re: The real reasons we don’t have AGI yet

2012-10-09 Thread meekerdb

On 10/9/2012 4:22 AM, Stephen P. King wrote:

On 10/9/2012 2:16 AM, meekerdb wrote:

On 10/8/2012 3:49 PM, Stephen P. King wrote:

Hi Russell,

Question: Why has little if any thought been given in AGI to self-modeling and 
some capacity to track the model of self under the evolutionary transformations? 


It's probably because AI's have not needed to operate in environments where they need a 
self-model.  They are not members of a social community.  Some simpler systems, like 
Mars Rovers, have limited self-models (where am I, what's my battery charge,...) that 
they need to perform their functions, but they don't have general intelligence (yet).


Brent
--


Could the efficiency of the computation be subject to modeling? My thinking is that 
if an AI could rewire itself for some task to more efficiently solve that task...


I don't see why not.  A genetic-algorithm might be a subprogram that seeks an efficient 
code for some function within some larger program.  Of course it would need some 
definition or measure of what counts as 'efficient'.
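
For concreteness, a minimal sketch of the kind of inner search loop Brent is
gesturing at (my own toy example; here "efficient" is just "fewest operations
that still reproduce the target behaviour", standing in for whatever measure a
real system would use):

import random

TARGET = lambda x: 2 * x + 1          # behaviour the evolved code must reproduce
TESTS = list(range(10))
OPS = ['add1', 'double']              # tiny instruction set

def run(program, x):
    for op in program:
        x = x + 1 if op == 'add1' else 2 * x
    return x

def fitness(program):
    # correctness dominates; among correct programs, shorter (more efficient) is fitter
    errors = sum(run(program, t) != TARGET(t) for t in TESTS)
    return -(errors * 100 + len(program))

def mutate(program):
    p, roll = list(program), random.random()
    if roll < 0.4 and p:
        p.pop(random.randrange(len(p)))                               # delete an op
    elif roll < 0.8:
        p.insert(random.randrange(len(p) + 1), random.choice(OPS))    # insert an op
    elif p:
        p[random.randrange(len(p))] = random.choice(OPS)              # replace an op
    return p

def evolve(generations=200, pop_size=50):
    population = [[random.choice(OPS) for _ in range(6)] for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        survivors = population[:pop_size // 2]
        population = survivors + [mutate(random.choice(survivors)) for _ in survivors]
    return max(population, key=fitness)

print(evolve())   # typically ['double', 'add1'], i.e. 2*x + 1 in two operations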


Brent




Re: Experiences are not provable because they are private, personal.

2012-10-09 Thread Bruno Marchal


On 09 Oct 2012, at 12:21, Roger Clough wrote:


Hi Bruno Marchal and Stathis,

1. Only entities in spacetime physically exist, and thus can be  
measured and proven.


Too vague for me. I am OK, and I am not OK, for different  
reasonable interpretations of what you say here.
I doubt something in physics can be proved in physics, about reality.  
You need a theology or a metaphysics, which needs axioms too.





2. Experiences exist only in the mind, not in spacetime, because
   they are not extended in nature. They are subjective. Beyond  
spacetime.

   Superphysical. Unproveable to others, but certain to oneself.


OK.





3. Numbers, being other than oneself, might possibly have experiences,
   but they cannot share them with us.


Numbers cannot have experiences, per se. They just get involved in  
complex arithmetical relations which support a person's life. Only the  
person is conscious, but the person supervenes on a quite complex  
cloud of numbers and oracles, in highly complex relationships.





They can only share descriptions
   of experiences, not the experiences themselves (or even if those
   experiences exist).


Yes. Like us. And PA and ZF already stay mute when asked about the  
experience they can have. Their G* (guardian angel, as I called it a  
long time ago) already explains why they stay mute, as they know that  
communicating that they have experience will make people believe that  
they have been programmed to say that. They already try to avoid being  
treated as zombies, apparently.



Bruno






Roger rclo...@verizon.net

10/9/2012
Forever is a long time, especially near the end. -Woody Allen


- Receiving the following content -
From: Bruno Marchal
Receiver: everything-list
Time: 2012-10-08, 10:19:35
Subject: Re: Zombieopolis Thought Experiment


Hi Roger,

On 08 Oct 2012, at 16:14, Roger Clough wrote:


Hi Stathis Papaioannou

I would put it that mind is superphysical. Beyond spacetime.
Supernatural as a word carries too much baggage.


With comp, the natural numbers are supernatural enough.

Bruno





Roger Clough, rclo...@verizon.net
10/8/2012
Forever is a long time, especially near the end. -Woody Allen


- Receiving the following content -
From: Stathis Papaioannou
Receiver: everything-list@googlegroups.com
Time: 2012-10-08, 03:14:29
Subject: Re: Zombieopolis Thought Experiment


On 08/10/2012, at 3:07 AM, Craig Weinberg wrote:


Absolutely not. We know no such thing. Quite the opposite, we know
with relative certainty that what we understand of physics provides
no possibility of anything other than more physics. There is no
hint of any kind that these laws should lead to any such thing as
an 'experience' or awareness of any kind. You beg the question 100%
and are 100% incapable of seeing that you are doing it.


Well, if it's not the laws of physics then it's something
supernatural, isn't it?


-- Stathis Papaioannou  




http://iridia.ulb.ac.be/~marchal/







http://iridia.ulb.ac.be/~marchal/






Re: Zombieopolis Thought Experiment

2012-10-09 Thread Bruno Marchal


On 09 Oct 2012, at 12:34, Roger Clough wrote:


Hi Craig Weinberg

Consciousness had to arise before language.
Apes are conscious.


OK.

I think that all animals and plants are conscious, although not on a  
really common scale with most animals.


I think all animals above the octopus, including some spiders, are  
self-conscious.


Today.

Bruno







Roger Clough, rclo...@verizon.net
10/9/2012
Forever is a long time, especially near the end. -Woody Allen


- Receiving the following content -
From: Craig Weinberg
Receiver: everything-list
Time: 2012-10-08, 14:23:18
Subject: Re: Zombieopolis Thought Experiment




On Monday, October 8, 2012 1:35:31 PM UTC-4, Brent wrote:
On 10/8/2012 8:42 AM, John Clark wrote:
2) Intelligent behavior is NOT associated with subjective  
experience, in which case there is no reason for Evolution to  
produce consciousness and I have no explanation for why I am here,  
and I have reason to believe that I am the only conscious being in  
the universe.


There's a third possibility: Intelligent behavior is sometimes  
associated with subjective experience and sometimes not.  Evolution  
may have produced consciousness as a spandrel, an accident of the  
particular developmental path that evolution happened upon.  Or it  
may be that consciousness is necessarily associated with only  
certain kinds of intelligent behavior, e.g. those related to language.



You are almost right but have it upside down. When someone gets  
knocked unconscious, can they continue to behave intelligently? Can  
a baby wake up from a nap and become conscious before they learn  
language?


What would lead us to presume that consciousness itself could  
supervene on intelligence except if we were holding on to a  
functionalist metaphysics?


Clearly human intelligence in each individual supervenes on their  
consciousness and clearly supercomputers can't feel any pain or show  
any signs of fatigue that would suggest a state of physical  
awareness despite their appearances of 'intelligence'.


If you flip it over though, you are right. Everything is conscious  
to some extent, but not everything is intelligent in a cognitive  
sense. The assumption of strong AI is that we can take the low  
hanging fruit of primitive consciousness and attach it to the tree  
tops of anthropological quality intelligence and it will grow a new  
tree into outer space.


Craig




Brent





http://iridia.ulb.ac.be/~marchal/






Re: [foar] Re: The real reasons we don’t have AGI yet

2012-10-09 Thread Bruno Marchal


On 09 Oct 2012, at 13:22, Stephen P. King wrote:


On 10/9/2012 2:16 AM, meekerdb wrote:

On 10/8/2012 3:49 PM, Stephen P. King wrote:


Hi Russell,

Question: Why has little if any thought been given in AGI to  
self-modeling and some capacity to track the model of self under  
the evolutionary transformations?


It's probably because AI's have not needed to operate in  
environments where they need a self-model.  They are not members of  
a social community.  Some simpler systems, like Mars Rovers, have  
limited self-models (where am I, what's my battery charge,...) that  
they need to perform their functions, but they don't have general  
intelligence (yet).


Brent
--


Could the efficiency of the computation be subject to modeling?  
My thinking is that if an AI could rewire itself for some task to  
more efficiently solve that task...


Betting on self-consistency, and variants of that idea, shortens the  
proofs and speeds up the computations, sometimes in the wrong direction.


On almost all inputs, universal machines (creative sets, by Myhill's  
theorem, and in a sense of Post) have the alluring property of being  
arbitrarily speedable.


Of course the trick is in "on almost all inputs", which means all,  
except a finite number of exceptions, and this concerns more evolution  
than reason.
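
For reference, "arbitrarily speedable" is usually made precise by Blum's speed-up theorem, stated here only as a rough gloss on the above, not as a quotation: for every total computable function $r(x,y)$ there is a total computable function $f$ such that for every program $i$ computing $f$ there is another program $j$ computing $f$ with

$$ r\bigl(x,\ \Phi_j(x)\bigr) \;\le\; \Phi_i(x) \qquad \text{for all but finitely many } x, $$

where $\Phi_i$ is any Blum complexity measure (running time, for instance). The "all but finitely many $x$" is exactly the finite number of exceptions mentioned above.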


Evolution is basically computation + the halting oracle, implemented  
with physical time (which is itself based on computation + self- 
reference + arithmetical truth).


Bruno




--
Onward!

Stephen



http://iridia.ulb.ac.be/~marchal/






Re: I believe that comp's requirement is one of as if rather than is

2012-10-09 Thread Bruno Marchal


On 09 Oct 2012, at 13:29, Alberto G. Corona wrote:



But still after this reasoning,  I doubt that the self conscious
philosopher robot have the kind of thing, call it a soul, that I have.


?

You mean it is a zombie?

I can't conceive consciousness without a soul. Even if only the  
universal one.

So I am not sure what you mean by soul.

Bruno


http://iridia.ulb.ac.be/~marchal/






Re: Zombieopolis Thought Experiment

2012-10-09 Thread Craig Weinberg


On Tuesday, October 9, 2012 11:21:59 AM UTC-4, John Clark wrote:

 On Mon, Oct 8, 2012  Craig Weinberg whats...@gmail.com wrote:

  Ok, which computers do you think have conscious experiences? Windows 
 laptops? Deep Blue? Cable TV boxes?


 How the hell should I know if computers have conscious experiences? How 
 the hell should I know if people have conscious experiences? 


I  didn't ask which ones you know are conscious, I asked which ones you 
think are conscious. I have no trouble at all saying that zero computers 
are conscious and that all living people have had conscious experiences.
 

 All I know for certain is that some things external to me display 
 intelligent behavior and some things do not, 


Why do you think that you know that? What makes a behavior intelligent? 
Over how long a time period are we talking about? Is a species as a whole 
intelligent? Are ecosystems intelligent? Caves full of growing crystals?
 

 from that point on everything is conjecture and theory; I happen to think 
 that intelligence is associated with consciousness is a pretty good theory 
 but I admit it's only a theory and if your theory is that you're the only 
 conscious being in the universe I can't prove you wrong. 


When did I ever say that I am the only conscious being in the universe?
 


  Is it a fact that you have conscious experiences?


 Yes, however I have no proof of that, or at least none I can share with 
 anyone else, so I would understand if you don't believe me; however to 
 believe me but not to believe a computer that made a similar claim just 
 because you don't care for the elements out of which it is made would be 
 rank bigotry.


It's not bigotry, it's an observation that computers don't do anything 
remotely implying consciousness of any kind. They are literally automatons. 
Their inorganic composition only gives us a way of understanding why it is 
the case that assembling something which can be publicly controlled is 
mutually exclusive from growing something which can be privately 
experienced.
 


 Stimulation that you get through your senses of the outside environment 
 does not control you.


 The difference between influence and control is just one of degree not of 
 kind. 


That's your assumption. I am sympathetic to it to the extent that all 
differences between degrees and kinds are also differences of degree, 
not of kind. I am influenced by traffic lights, but I still have to have 
control of the car I am driving. Control is a continuum. If you say that 
control is subsumed completely by influence, then you are denying that 
there is anything which can be discerned by degree. Control does not 
automatically arise from a lack of influence. A rock will not sing 
showtunes if given a chance.
 

 Usually lots of things cause us to do what we do,


If they caused everything to happen without us, then there would be no us. 
Why would there be?
 

 if all of them came from the outside then it's control; if only some of the 
 causes were external and some were internal, such as memory, then it's 
 influence.


We are still the ones who evaluate the influences and contribute directly 
to our actions.
 


   intelligent behavior WITHOUT consciousness confers an Evolutionary 
  advantage. Having difficulty with your reading comprehension?


  but what example or law are you basing this on? Who says this is a fact 
 other than you?


 It almost seems that you're trying to say that intelligent behavior gives 
 an organism no advantage over an organism that is stupid, but nobody is that 
 stupid; so what are you saying?


What does that have to do with this idea of yours that intelligence can 
exist without consciousness? You are trying to dodge the question.
 


  Who claims to know that intelligence without consciousness exists?


 I give up, who claims to know that intelligence without consciousness 
 exists?


You. Very insistently: intelligent behavior WITHOUT consciousness confers 
an Evolutionary advantage. Having difficulty with your reading 
comprehension?
 
Having difficulty remembering your edicts?


  The only intelligent behavior I know with certainty that is always 
 associated with subjective experience is my own. But I know with certainty 
 there are 2 possibilities:
 1) Intelligent behavior is always associated with subjective experience, 
 if so then if a computer beats you at any intellectual pursuit then it has 
 a subjective experience, assuming of course that you yourself are 
 intelligent. And I'll let you pick the particular intellectual pursuit for 
 the contest.
 2) Intelligent behavior is NOT associated with subjective experience, in 
 which case there is no reason for Evolution to produce consciousness and I 
 have no explanation for why I am here, and I have reason to believe that I 
 am the only conscious being in the universe.


  I choose 3) The existence of intelligent behavior is contingent upon 
 recognition and interpretation by a conscious agent.


 

Re: Only you can know if you actually have intelligence

2012-10-09 Thread Bruno Marchal


Hi  Roger Clough,


Hi meekerdb

Only you can know if you actually have intelligence,
although you can appear to have intelligence (as if).
You can be tested for it.

Thus comp is not different from us, or at least it has the same
limitations.


Exactly. And above the level of Löbianity, or with the induction  
axioms, I mean at a rather precise technical level, the machine is or  
can be aware of such limitations.


Bruno






Roger Clough, rclo...@verizon.net
10/9/2012
Forever is a long time, especially near the end. -Woody Allen


- Receiving the following content -
From: meekerdb
Receiver: everything-list
Time: 2012-10-08, 13:35:28
Subject: Re: Zombieopolis Thought Experiment


On 10/8/2012 8:42 AM, John Clark wrote:
2) Intelligent behavior is NOT associated with subjective  
experience, in which case there is no reason for Evolution to  
produce consciousness and I have no explanation for why I am here,  
and I have reason to believe that I am the only conscious being in  
the universe.


There's a third possibility: Intelligent behavior is sometimes  
associated with subjective experience and sometimes not.  Evolution  
may have produced consciousness as a spandrel, an accident of the  
particular developmental path that evolution happened upon.  Or it  
may be that consciousness is necessarily associated with only  
certain kinds of intelligent behavior, e.g. those related to language.


Brent





http://iridia.ulb.ac.be/~marchal/






Re: A clearer definition of the self

2012-10-09 Thread Bruno Marchal


On 09 Oct 2012, at 13:49, Roger Clough wrote:


Hi meekerdb

The empiricists such as Hume and Locke maintained that
all that we know has first arrived through our senses.
I agree. The stuff of knowledge comes from below.


Perhaps, but we don't know that.
Cf: the dream argument.

This is already a theory, presuming things like senses, and things  
perturbing the senses.






But Kant showed that this is not enough,
for our minds have to make mental sense of this data.


Good point.




Consciousness or intelligence is thus structured for this purpose.


OK.



Our minds tell us what in particular we do know.

Such logical structures intelligence uses cannot arrive through our  
senses,

but must come platonically from above, so that the mental sense
is finally arrived at using the platonic forms and
structures from above.


OK.





In the meeting place of the above and the below
there exists, analogous to Maxwell's Demon, the active agent
called the self. The self makes sense of sensual data and
stores that in memory. We have learned something.


OK.

Bruno






Roger Clough, rclo...@verizon.net
10/9/2012
Forever is a long time, especially near the end. -Woody Allen


- Receiving the following content -
From: meekerdb
Receiver: everything-list
Time: 2012-10-08, 14:19:52
Subject: Re: Zombieopolis Thought Experiment


On 10/8/2012 10:24 AM, Craig Weinberg wrote:
So the more stimulation you get through your senses of the outside  
environment the less conscious you become. Huh?



Stimulation that you get through your senses of the outside  
environment does not control you.


How could you possibly know that, considering that John has  
accumulated many years of stimulation?


Brent





http://iridia.ulb.ac.be/~marchal/






Re: Mysterious Algorithm Was 4% of Trading Activity Last Week

2012-10-09 Thread Bruno Marchal

Thanks Craig. Interesting.

Bruno




On 09 Oct 2012, at 14:21, Craig Weinberg wrote:

Shades of things to come. What happens when we plug the economy of  
the entire world into mindless machines programmed to go to war  
against numbers?


Mysterious Algorithm Was 4% of Trading Activity Last Week

http://www.cnbc.com/id/49333454

A single mysterious computer program that placed orders — and then  
subsequently canceled them — made up 4 percent of all quote traffic  
in the U.S. stock market last week, according to the top tracker of  
high-frequency trading activity. The motive of the algorithm is  
still unclear.
The program placed orders in 25-millisecond bursts involving about  
500 stocks, according to Nanex, a market data firm. The algorithm  
never executed a single trade, and it abruptly ended at about 10:30  
a.m. Friday.


“Just goes to show you how just one person can have such an outsized  
impact on the market,” said Eric Hunsader, head of Nanex and the No.  
1 detector of trading anomalies watching Wall Street today.  
“Exchanges are just not monitoring it.”


Hunsader’s sonar picked up that this was a single high-frequency  
trader after seeing the program’s pattern (200 fake quotes, then  
400, then 1,000) repeated over and over. Also, it was being routed  
from the same place, the Nasdaq.



“My guess is that the algo was testing the market, as high-frequency  
frequently does,” says Jon Najarian, co-founder of TradeMonster.com.  
“As soon as they add bandwidth, the HFT crowd sees how quickly they  
can top out to create latency.”


(Read More: Unclear What Caused Kraft Spike: Nanex Founder)

Translation: the ultimate goal of many of these programs is to gum  
up the system so it slows down the quote feed to others and allows  
the computer traders (with their co-located servers at the  
exchanges) to gain a money-making arbitrage opportunity.




The scariest part of this single program was that its millions of  
quotes accounted for 10 percent of the bandwidth that is allowed for  
trading on any given day, according to Nanex. (The size of the  
bandwidth pipe is determined by a group made up of the exchanges  
called the Consolidated Quote System.)


(Read More: Cuban, Cooperman: Curb High-Frequency Trading)

“This is pretty out there to see this effect this many stocks at the  
same time,” said Hunsader. High frequency traders are doing anything  
to “tip the odds in their favor.”


A Senate panel at the end of September sought answers on high- 
frequency trading as investigators look into the best way to stop  
wealth-destroying events such as the Knight Capital computer glitch  
in August and the market “Flash Crash” two years ago.


(Read More: Ex-Insider Calls High-Frequency Trading ‘Cheating’)

Regulators are trying to see how they can rein in the practice,  
which accounts for 70 percent of trading each day, without slowing  
down progress and profits for Wall Street and the U.S. exchanges.



“I feel a tax on order stuffing is what the markets need at this  
point,” said David Greenberg of Greenberg Capital. “This will cut  
down on the number of erroneous bids and offers placed into the  
market at any given time and should help stabilize the trading  
environment.”


Hunsader warned that regulators better do something fast,  
speculating that this single program could have led to something  
very bad if big news broke or a sell-off occurred and one entity was  
hogging this much of the system.
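
Read as an algorithm, the behaviour described above (quotes arriving in 25-millisecond bursts whose sizes cycle through values like 200, 400 and 1,000, all from one route, with no resulting trades) is easy to caricature. A minimal sketch, assuming quote events arrive as (timestamp_ms, symbol, route, executed) tuples; the field names, cycle length and thresholds are illustrative assumptions, not Nanex's actual method:

# Minimal sketch (not the Nanex methodology): flag a symbol/route pair whose
# quote traffic arrives in fixed 25 ms bursts that repeat a size cycle such
# as 200, 400, 1000 quotes, with no executions at all.

from collections import defaultdict

BURST_MS = 25          # burst window mentioned in the article
MIN_CYCLE_REPEATS = 3  # how many times the size cycle must repeat (assumption)

def burst_sizes(timestamps_ms):
    """Group a sorted list of quote timestamps into 25 ms bursts and return burst sizes."""
    sizes, start, count = [], None, 0
    for t in timestamps_ms:
        if start is None or t - start > BURST_MS:
            if count:
                sizes.append(count)
            start, count = t, 0
        count += 1
    if count:
        sizes.append(count)
    return sizes

def has_repeating_cycle(sizes, cycle_len=3, repeats=MIN_CYCLE_REPEATS):
    """True if the burst-size sequence repeats the same cycle (e.g. 200, 400, 1000)."""
    if len(sizes) < cycle_len * repeats:
        return False
    cycle = sizes[:cycle_len]
    return all(sizes[i] == cycle[i % cycle_len]
               for i in range(cycle_len * repeats))

def suspicious_routes(quotes):
    """quotes: iterable of (timestamp_ms, symbol, route, executed) tuples."""
    by_route = defaultdict(list)
    executed = defaultdict(bool)
    for t, sym, route, was_executed in quotes:
        by_route[(sym, route)].append(t)
        executed[(sym, route)] |= was_executed
    # A route that emits a repeating burst-size cycle and never executes gets flagged.
    return [key for key, ts in by_route.items()
            if not executed[key] and has_repeating_cycle(burst_sizes(sorted(ts)))]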





http://iridia.ulb.ac.be/~marchal/






Re: I believe that comp's requirement is one of as if rather than is

2012-10-09 Thread Alberto G. Corona
It may be a zombie or not. I can't know.

The same applies to other persons. It may be that the world is made of
zombie-actors that try to cheat me, but I have a hardcoded belief in
the conventional thing. Maybe that is so because otherwise I would act
in strange and self-destructive ways. I would act as a paranoiac, and after
that, as a psychopath (since they are not humans). That would not be
good for my success in society. Then I doubt that I would have any
surviving descendants who would develop a zombie-solipsist
epistemology.

However there are people that believe these strange things. Some
autists do not recognize humans as beings like them. Some psychopaths
too, in a different way. There is no autistic or psychopathic
epistemology, because they are not functional enough to make societies
with universities and philosophers. That is the whole point of
evolutionary epistemology.



2012/10/9 Bruno Marchal marc...@ulb.ac.be:

 On 09 Oct 2012, at 13:29, Alberto G. Corona wrote:


 But still after this reasoning,  I doubt that the self conscious
 philosopher robot have the kind of thing, call it a soul, that I have.


 ?

 You mean it is a zombie?

 I can't conceive consciousness without a soul. Even if only the universal
 one.
 So I am not sure what you mean by soul.

 Bruno


 http://iridia.ulb.ac.be/~marchal/






-- 
Alberto.




Re: Universe on a Chip

2012-10-09 Thread Craig Weinberg


On Tuesday, October 9, 2012 11:04:51 AM UTC-4, Bruno Marchal wrote:


 On 08 Oct 2012, at 22:38, Craig Weinberg wrote:


   If the universe were a simulation, would the constant speed of light 
 correspond to the clock speed driving the simulation? In other words, the 
 “CPU speed?” 

 As we are “inside” the simulation, all attempts to measure the speed of 
 the simulation appear as a constant value.

 Light “executes” (what we call “movement”) at one instruction per cycle. 

 Any device we built to attempt to measure the speed of light is also 
 inside the simulation, so even though the “outside” CPU clock could be 
 changing speed, we will always see it as the same constant value.

 A “cycle” is how long it takes all the information in the universe to 
 update itself relative to each other. That is all the speed of light really 
 is. The speed of information updating in the universe… (more here: http://www.quora.com/Physics/If-the-universe-were-a-simulation-would-the-constant-speed-of-light-correspond-to-the-clock-speed-driving-the-simulation-In-other-words-the-CPU-speed)

   I can make the leap from CPU clock frequency to the speed of light in a 
 vacuum if I view light as an experienced event or energy state which occurs 
 local to matter rather than literally traveling through space. With this 
 view, the correlation between distance and latency is an organizational 
 one, governing sequence and priority of processing rather than the presumed 
 literal existence of racing light bodies (photons). 

 This would be consistent with your model of Matrix-universe on a 
 meta-universal CPU in that light speed is simply the frequency at which the 
 computer processes raw bits. The change of light speed when propagating 
 through matter or gravitational fields etc wouldn’t be especially 
 consistent with this model…why would the ghost of a supernova slow down the 
 cosmic computer in one area of memory, etc?

 The model that I have been developing suggests however that the CPU model 
 would not lead to realism or significance though, and could only generate 
 unconscious data manipulations. In order to have symbol grounding in 
 genuine awareness, I think that instead of a CPU cranking away rendering 
 the entire cosmos over and over as a bulwark against nothingness, I think 
 that the cosmos must be rooted in stasis. Silence. Solitude. This is not 
 nothingness however, it is everythingness. A universal inertial frame which 
 loses nothing but rather continuously expands within itself by taking no 
 action at all. 

 The universe doesn’t need to be racing to mechanically redraw the cosmos 
 over and over because what it has drawn already has no place to disappear 
 to. It can only seem to disappear through…
 …
 …
 …
 latency.

 The universe as we know it then arises out of nested latencies. A 
 meta-diffraction of symmetrically juxtaposed latency-generating 
 methodologies. Size, scale, distance, mass, and density on the public side, 
 richness, depth, significance, and complexity on the private side. Through 
 these complications, the cosmic CPU is cast as a theoretical shadow, when 
 the deeper reality is that rather than zillions of cycles per second, the 
 real mainframe is the slowest possible computer. It can never complete even 
 one cycle. How can it, when it has all of these subroutines that need to 
 complete their cycles first?

 ?

 If the universe is a simulation (which it can't be, by comp, but let us say), 
 then if the computer clock is changed, the internal creatures will not see 
 any difference. Indeed it is a way to understand that such a time does 
 not need to be actualized. Like in COMP and GR.


I'm not sure how that relates to what I was saying about the universe 
arising before even the first tick of the clock is finished, but we can 
talk about this instead if you like.

What you are saying, like what my friend up there was saying about the CPU 
clock being invisible to the Sims, I have no problem with. That's why I was 
saying it's like a computer game. You can stop the game, debug the program, 
start it back up where you left off, and if there was a Sim person actually 
experiencing that, they would not experience any interruption. Fine.
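
A toy sketch of that point (the world model and step rule below are invented for illustration, not a claim about physics): an internal record can only contain quantities computed from the simulation's own state, so pausing the host or slowing its clock changes nothing the insider could ever measure.

# Toy illustration, under the stated assumptions: an "internal observer" can only
# record quantities derived from the simulation state, so host pauses or a slower
# host clock leave every internal measurement, including the tick count, unchanged.

import time

def step(state):
    """One update cycle of a trivial world: everything advances one internal tick."""
    return {"tick": state["tick"] + 1, "x": (state["x"] + 3) % 101}

def internal_history(pause_seconds, n_steps=50):
    """Run the same world; the host may sleep between cycles (slow CPU, debugger pause...)."""
    state = {"tick": 0, "x": 0}
    history = []
    for _ in range(n_steps):
        state = step(state)
        history.append((state["tick"], state["x"]))   # all an insider can ever record
        if pause_seconds:
            time.sleep(pause_seconds)                  # invisible from the inside
    return history

if __name__ == "__main__":
    fast = internal_history(pause_seconds=0.0)
    slow = internal_history(pause_seconds=0.01)
    print("internal histories identical:", fast == slow)   # prints True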

The problem is that meanwhile you have this meta-universe which is doing the 
computing, yes? What does it run on? If it doesn't need to run on anything, 
then why not just have that be the universe in the first place?

Craig


 Bruno

 http://iridia.ulb.ac.be/~marchal/






Re: AGI

2012-10-09 Thread meekerdb

On 10/9/2012 8:01 AM, Bruno Marchal wrote:
In some sense they pass the mirror test well enough. That's enough for me to consider 
them, well, not just conscious, but as conscious as me, and you.
The differences are only in domain competence and intelligence (in which case it might 
be that octopuses are more intelligent than us, as we are blinded by our competences). 


At least they would be smart enough to have adopted base 8 numbering.

Brent




Re: Conjoined Twins

2012-10-09 Thread Craig Weinberg


On Tuesday, October 9, 2012 10:09:57 AM UTC-4, Bruno Marchal wrote:





 I think Brittany and Abby are two single individual persons. 


I do too, but we can see that there is much more behavioral synchronization 
than we would expect from two single individual persons.

Then there are the brain conjoined twins: 
http://www.youtube.com/watch?v=YWDsXa5nNbI  (skip to 5:23 if you want)

That situation is a bit different, but notice the similarities between kids 
who literally share some part of their brain and ones who share nothing 
from the neck up.

Craig



 Bruno 


 http://iridia.ulb.ac.be/~marchal/ 








Re: [foar] Re: The real reasons we don’t have AGI yet

2012-10-09 Thread Stephen P. King

On 10/9/2012 12:01 PM, meekerdb wrote:

On 10/9/2012 4:22 AM, Stephen P. King wrote:

On 10/9/2012 2:16 AM, meekerdb wrote:

On 10/8/2012 3:49 PM, Stephen P. King wrote:

Hi Russell,

Question: Why has little if any thought been given in AGI to 
self-modeling and some capacity to track the model of self under 
the evolutionary transformations? 


It's probably because AI's have not needed to operate in 
environments where they need a self-model.  They are not members of 
a social community.  Some simpler systems, like Mars Rovers, have 
limited self-models (where am I, what's my battery charge,...) that 
they need to perform their functions, but they don't have general 
intelligence (yet).


Brent
--


Could the efficiency of the computation be subject to modeling? 
My thinking is that if an AI could rewire itself for some task to 
more efficiently solve that task...


I don't see why not.  A genetic-algorithm might be a subprogram that 
seeks an efficient code for some function within some larger program.  
Of course it would need some definition or measure of what counts as 
'efficient'.


Brent
--


    How about 'capable of finding the required solution given a finite 
quantity of resources'?
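
A minimal sketch of that idea (purely illustrative; the target function, the size penalty and the evaluation budget below are invented assumptions, not anything proposed in the thread): a toy genetic algorithm that searches for a small coefficient vector and treats 'efficient' as finding a good-enough solution within a fixed budget of fitness evaluations.

# Toy genetic algorithm: the "program" is just a coefficient vector for x**2 + x,
# fitness charges for accuracy plus size, and the search must succeed within a
# finite budget of evaluations. All constants here are illustrative assumptions.

import random

TARGET = lambda x: x * x + x
POINTS = range(-5, 6)
BUDGET = 5000                      # finite number of fitness evaluations allowed

def error(coeffs):
    """Sum of squared errors of the polynomial encoded by coeffs against the target."""
    def poly(x):
        return sum(c * x ** i for i, c in enumerate(coeffs))
    return sum((poly(x) - TARGET(x)) ** 2 for x in POINTS)

def fitness(coeffs):
    return error(coeffs) + 0.1 * len(coeffs)      # accuracy plus a cost for 'size'

def mutate(coeffs):
    c = list(coeffs)
    if random.random() < 0.3 and len(c) > 1:
        c.pop()                                   # try a smaller program
    elif random.random() < 0.3:
        c.append(random.randint(-3, 3))           # or a larger one
    else:
        i = random.randrange(len(c))
        c[i] += random.choice((-1, 1))            # or tweak one coefficient
    return c

def evolve(pop_size=20):
    pop = [[random.randint(-3, 3) for _ in range(4)] for _ in range(pop_size)]
    evaluations = 0
    while evaluations < BUDGET:
        scored = sorted(pop, key=fitness)
        evaluations += len(pop)                   # coarse count of fitness calls
        if fitness(scored[0]) < 1.0:              # found a solution within budget
            return scored[0], evaluations
        parents = scored[: pop_size // 2]
        pop = parents + [mutate(random.choice(parents)) for _ in range(pop_size - len(parents))]
    return None, evaluations                      # budget exhausted: not 'efficient' enough

if __name__ == "__main__":
    best, used = evolve()
    print("best coefficient vector:", best, "evaluations used:", used)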


--
Onward!

Stephen




Re: [foar] Re: The real reasons we don’t have AGI yet

2012-10-09 Thread Stephen P. King

On 10/9/2012 12:28 PM, Bruno Marchal wrote:


On 09 Oct 2012, at 13:22, Stephen P. King wrote:


On 10/9/2012 2:16 AM, meekerdb wrote:

On 10/8/2012 3:49 PM, Stephen P. King wrote:

Hi Russell,

Question: Why has little if any thought been given in AGI to 
self-modeling and some capacity to track the model of self under 
the evolutionary transformations? 


It's probably because AI's have not needed to operate in 
environments where they need a self-model.  They are not members of 
a social community.  Some simpler systems, like Mars Rovers, have 
limited self-models (where am I, what's my battery charge,...) that 
they need to perform their functions, but they don't have general 
intelligence (yet).


Brent
--


Could the efficiency of the computation be subject to modeling? 
My thinking is that if an AI could rewire itself for some task to 
more efficiently solve that task...


Betting on self-consistency, and variants of that idea, shortens the 
proofs and speeds up the computations, sometimes in the wrong direction.


Hi Bruno,

Could you elaborate a bit on the betting mechanism so that it is 
more clear how the shortening of proofs and speed-up of computations obtains?





On almost all inputs, universal machines (creative sets, by Myhill's 
theorem, and in a sense of Post) have the alluring property of being 
arbitrarily speedable.


This is a measure issue, no?



Of course the trick is in "on almost all inputs", which means all, 
except a finite number of exceptions, and this concerns more evolution 
than reason.


OK.



Evolution is basically computation + the halting oracle, implemented 
with physical time (which is itself based on computation + 
self-reference + arithmetical truth).


Bruno



So you are equating selection by fitness in a local environment 
with a halting oracle?


--
Onward!

Stephen




Re: AGI

2012-10-09 Thread John Mikes
Bruno,
examples are not identification. I was referring to (your?) lack of a detailed
description of what the universal machine consists of and how it functions
(maybe: beyond what we know - ha ha). A comprehensive ID. Your lot of
examples rather denies that you have one. And:
if it is enough FOR YOU to consider them, it may not be enough for me. I
don't really know HOW conscious I am.

I like your  counter-point in competence and intelligence.
I identified the wisdom (maybe it should read: the intelligence) of the
oldies as not 'disturbed' by too many factual(?) known circumstances -
maybe it is competence.
To include our inventory accumulated over the millennia as impediment
('blinded by').

John M

On Tue, Oct 9, 2012 at 11:01 AM, Bruno Marchal marc...@ulb.ac.be wrote:


 On 08 Oct 2012, at 22:07, John Mikes wrote:

 Dear Richard, I think the lengthy text is Ben's article in response to
 D. Deutsch.
 Sometimes I was erring in the belief that it is YOUR text, but no. Thanks
 for copying.
 It is too long and too little organized for me to keep up with
 ramifications prima vista.
 What I extracted from it are some remarks I will try to communicate to
 Ben (a longtime e-mail friend) as well.

 I have my (agnostically derived) version of intelligence: the capability
 of reading 'inter'
 lines (words/meanings). Apart from such human distinction: to realize the
 'essence' of relations beyond vocabulary, or 'physical science' definitions.
 Such content is not provided in our practical computing machines
 (although Bruno trans-leaps such barriers with his (Löb's) universal
 machine unidentified).



 Unidentified? I give a lot of examples: PA, ZF, John Mikes, me, and
 the octopus.

 In some sense they pass the mirror test well enough. That's enough for me to
 consider them, well, not just conscious, but as conscious as me, and you.
 The differences are only in domain competence and intelligence (in which
 case it might be that octopuses are more intelligent than us, as we are
 blinded by our competences).

 It is possible that when competence grows, intelligence decreases, but I am
 not sure.

 Bruno


  Whatever our (physical) machines can do is within the physical limits of
 information - the content of the actual MODEL of the world we live with
 by yesterday's knowledge, no advanced technology can transcend such
 limitations: there is no input to do so. This may be the limits for AI, and
 AGI as well. Better manipulation etc. do not go BEYOND.

 Human mind-capabilities, however, (at least in my 'agnostic' worldview)
 are under the influences (unspecified) from the infinite complexity BEYOND
 our MODEL, without our knowledge and specification's power. Accordingly we
 MAY get input from more than the factual content of the MODEL. On such
 (unspecified) influences may be our creativity based (anticipation of
 Robert Rosen?) what cannot be duplicated by cutest algorithms in the best
 computing machines.
 Our 'factual' knowable in the MODEL are adjusted to our mind's capability
 - not so even the input from the unknowable 'infinite complexity's'
 relations.

 Intelligence would go beyond our quotidian limitations, not feasible for
 machines that work within such borders.

 I may dig out relevant information from Ben's text in subsequent
 readings, provided that I get to it back.


 Thanks again, it was a very interesting scroll-down

 John Mikes



 http://iridia.ulb.ac.be/~marchal/










Creativity

2012-10-09 Thread John Mikes
On 09/10/2012, at 8:39 AM, Russell Standish wrote:


The problem that exercises me (when I get a chance to exercise it) is
that of creativity. David Deutsch correctly identifies that this is one of
the main impediments to AGI. Yet biological evolution is a creative
process, one for which epistemology apparently has no role at all.

Continuous, open-ended creativity in evolution is considered the main
problem in Artificial Life (and perhaps other fields). Solving it may
be the work of a single moment of inspiration (I wish), but more
likely it will involve incremental advances in topics such as
information, complexity, emergence and other such partly philosophical
topics before we even understand what it means for something to be
open-ended creative. Popperian epistemology, to the extent it has a
role, will come much further down the track.
Cheers...

JM: Not that I want to produce such 'single moment of inspiration':
I gave some thought to the concept of creativity over the past 20 years.
At this moment I stand (and my stance is likely to undergo further changes)
with including Robert Rosen's anticipation concept as applied to my own
world-view (belief!) of *agnosticism*: there is an infinite complexity we
cannot know, not even approach and from it we get info-morsels from time to
time into OUR world. We are not up to consider those 'morsels' by their
real and full nature, only adjusted to our mental capabilities and the so
far circumscribed 'world' we live in(?).
This constitutes our 'image' of our world - indeed the model of it we can
muster in our actual mental inventory (including the application of
conventional sciences.).

Our curiosity in topics MAY (or may not?) trigger topical info and it is up
to us whether we do, or don't pay attention and - maybe - consider them as
worthwhile pursuing - which is the way I figure *anticipation*.
If we relate to such anticipation with positive feedback, we may fail or
succeed, the latter callable the 'creative approach'.
It goes beyond our 'model', beyond what we could feed into our computers,
beyond the inventory (status quo ante?) of what we already knew (I say:
yesterday).
No consequences drawn.
John M




Re: The real reasons we don’t have AGI yet

2012-10-09 Thread Russell Standish
Maybe I will take you up on this - I think my uni library card expired
years ago, and it's a PITA to renew.

However, since one doesn't need a mind to be creative (and my interest
is actually in mindless creative processes), I'm not sure exactly how
relevant something titled The Mechanism of Mind will be.

BTW - very close to sending you a finished draft of Amoeba's Secret. I
just have to check the translations I wasn't sure of now that I have
access to a dictionary/Google translate, and also redo the citations
in a more regular manner.

Cheers

On Tue, Oct 09, 2012 at 02:52:29PM +1100, Kim Jones wrote:
 Please, please read Edward de Bono's book The Mechanism of Mind for some 
 genuine insights into creativity and how this comes about in mind. Russell if 
 you can't track down a copy I'll lend you mine but it's a treasured object, 
 not least because of the fact that the author autographed it!
 
 
 
 
 On 09/10/2012, at 8:39 AM, Russell Standish wrote:
 
  The problem that exercises me (when I get a chance to exercise it) is
  that of creativity. David Deutsch correctly identifies that this is one of
  the main impediments to AGI. Yet biological evolution is a creative
  process, one for which epistemology apparently has no role at all.
  
  Continuous, open-ended creativity in evolution is considered the main
  problem in Artificial Life (and perhaps other fields). Solving it may
  be the work of a single moment of inspiration (I wish), but more
  likely it will involve incremental advances in topics such as
  information, complexity, emergence and other such partly philosophical
  topics before we even understand what it means for something to be
  open-ended creative. Popperian epistemology, to the extent it has a
  role, will come much further down the track. 
  
  Cheers
 
 

-- 


Prof Russell Standish  Phone 0425 253119 (mobile)
Principal, High Performance Coders
Visiting Professor of Mathematics  hpco...@hpcoders.com.au
University of New South Wales  http://www.hpcoders.com.au





Re: Creativity

2012-10-09 Thread Richard Ruquist
John,

Your model may explain why some drugs improve creativity.
Richard

On Tue, Oct 9, 2012 at 4:52 PM, John Mikes jami...@gmail.com wrote:
 On 09/10/2012, at 8:39 AM, Russell Standish wrote:


 The problem that exercises me (when I get a chance to exercise it) is
 that of creativity. David Deutsch correctly identifies that this is one of
 the main impediments to AGI. Yet biological evolution is a creative
 process, one for which epistemology apparently has no role at all.

 Continuous, open-ended creativity in evolution is considered the main
 problem in Artificial Life (and perhaps other fields). Solving it may
 be the work of a single moment of inspiration (I wish), but more
 likely it will involve incremental advances in topics such as
 information, complexity, emergence and other such partly philosophical
 topics before we even understand what it means for something to be
 open-ended creative. Popperian epistemology, to the extent it has a
 role, will come much further down the track.

 Cheers...
 
 JM: Not that I want to produce such 'single moment of inspiration':
 I gave some thought to the concept of creativity over the past 20 years.
 At this moment I stand (and my stance is likely to undergo further changes)
 with including Robert Rosen's anticipation concept as applied to my own
 world-view (belief!) of agnosticism: there is an infinite complexity we
 cannot know, not even approach and from it we get info-morsels from time to
 time into OUR world. We are not up to consider those 'morsels' by their real
 and full nature, only adjusted to our mental capabilities and the so far
 circumscribed 'world' we live in(?).
 This constitutes our 'image' of our world - indeed the model of it we can
 muster in our actual mental inventory (including the application of
 conventional sciences.).

 Our curiosity in topics MAY (or may not?) trigger topical info and it is up
 to us whether we do, or don't pay attention and - maybe - consider them as
 worthwhile pursuing - which is the way I figure anticipation.
 If we relate to such anticipation with positive feedback, we may fail or
 succeed, the latter callable the 'creative approach'.
 It goes beyond our 'model', beyond what we could feed into our computers,
 beyond the inventory (status quo ante?) of what we already knew (I say:
 yesterday).
 No consequences drawn.
 John M





Re: The real reasons we don’t have AGI yet

2012-10-09 Thread Kim Jones
It just may provide you that flash of insight you hanker for; that's my grand 
hope, anyway.

here's a snippet:

There may be no reason to say something until after it has been said. Once it 
has been said a context develops to support it, and yet it would never have 
been produced by a context. It may not be possible to plan a new style in art, 
but once it comes about, it creates its own validity. It is usual to proceed 
forward step by step until one has got somewhere. But - it is also possible to 
get there first by any means and then look back and find the best route. A 
problem may be worked forward from the beginning but it may also be worked 
backward from the end.

Instead of proceeding steadily along a pathway, one jumps to a different 
point, or several different points in turn, and then waits for them to link 
together to form a coherent pattern. It is in the nature of the self-maximising 
system of the memory-surface that is mind to create a coherent pattern out of 
such separate points. If the pattern is effective then it cannot possibly 
matter whether it came about in a sequential fashion or not. A frame of 
reference is a context provided by the current arrangement of information. It 
is the direction of development implied by this arrangement. One cannot break 
out of this frame of reference by working from within it. It may be necessary to 
jump out, and if the jump is successful then the frame of reference is itself 
altered. (p. 240 - description of the process known as Lateral Thinking.)

Give me a bell in about a week and we will jump in somewhere for a beer and I 
will pass you this volume (if still interested after reading the above) - I 
will have a little less Uni work to do for a short while; I may be able to get 
down to a bit of finessing of our translation of Bruno's Amoebas.

Kim Jones



On 10/10/2012, at 8:16 AM, Russell Standish wrote:

 Maybe I will take you up on this - I think my uni library card expired
 years ago, and it's a PITA to renew.
 
 However, since one doesn't need a mind to be creative (and my interest
 is actually in mindless creative processes), I'm not sure exactly how
 relevant something titled The Mechanism of Mind will be.
 
 BTW - very close to sending you a finished draft of Amoeba's Secret. I
 just have to check the translations I wasn't sure of now that I have
 access to a dictionary/Google translate, and also redo the citations
 in a more regular manner.
 
 Cheers
 
 On Tue, Oct 09, 2012 at 02:52:29PM +1100, Kim Jones wrote:
 Please, please read Edward de Bono's book The Mechanism of Mind for some 
 genuine insights into creativity and how this comes about in mind. Russell 
 if you can't track down a copy I'll lend you mine but it's a treasured 
 object, not least because of the fact that the author autographed it!
 
 
 
 
 On 09/10/2012, at 8:39 AM, Russell Standish wrote:
 
 The problem that exercises me (when I get a chance to exercise it) is
 that of creativity. David Deutsch correctly identifies that this is one of
 the main impediments to AGI. Yet biological evolution is a creative
 process, one for which epistemology apparently has no role at all.
 
 Continuous, open-ended creativity in evolution is considered the main
 problem in Artificial Life (and perhaps other fields). Solving it may
 be the work of a single moment of inspiration (I wish), but more
 likely it will involve incremental advances in topics such as
 information, complexity, emergence and other such partly philosophical
 topics before we even understand what it means for something to be
 open-ended creative. Popperian epistemology, to the extent it has a
 role, will come much further down the track. 
 
 Cheers
 
 
 
 -- 
 
 
 Prof Russell Standish  Phone 0425 253119 (mobile)
 Principal, High Performance Coders
 Visiting Professor of Mathematics  hpco...@hpcoders.com.au
 University of New South Wales  http://www.hpcoders.com.au
 
 
 
