Re: [agi] I Can't Be In Two Places At Once.

2008-10-05 Thread Ben Goertzel

 3. I think it is extremely important that we give an AGI none of the bias
 about space and time that we seem to have. Our intuitive understanding of
 space and time is useful for our life on Earth, but it is completely wrong,
 as we know from the theory of relativity and quantum physics.

 -Matthias Heger



Well, for the purpose of creating the first human-level AGI, it seems
important **to** wire in humanlike bias about space and time ... this will
greatly ease the task of teaching the system to use our language and
communicate with us effectively...

But I agree that not **all** AGIs should have this inbuilt biasing ... for
instance an AGI hooked directly to quantum microworld sensors could become a
kind of quantum mind with a totally different intuition for the physical
world than we have...

ben g



---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=114414975-3c8e69
Powered by Listbox: http://www.listbox.com


Re: [agi] I Can't Be In Two Places At Once.

2008-10-05 Thread Eric Burton
Well, for the purpose of creating the first human-level AGI, it seems
important **to** wire in humanlike bias about space and time ... this will
greatly ease the task of teaching the system to use our language and
communicate with us effectively...

The same thing occurred to me while browsing this thread. We really
want an AGI that can function with human-like ease in a simulated
Earth-like environment for reasons of training and communication. But
by no means should this be the limit of its abilities! You'd like an
intelligence that functions with native casualness while embodied in
VR but can make smooth transitions into entirely different modalities
without the time it spends there biasing its behavior in other ones.

I mean, you want an AGI to be a master of all modes and to be able to
switch between them seamlessly, neither of which is a particularly
lauded property of human intelligence. It kind of goes to reinforce
the notion that strong AI will likely be strongly non-human.

On 10/5/08, Ben Goertzel [EMAIL PROTECTED] wrote:

 3. I think it is extremely important that we give an AGI none of the bias
 about space and time that we seem to have. Our intuitive understanding of
 space and time is useful for our life on Earth, but it is completely wrong,
 as we know from the theory of relativity and quantum physics.

 -Matthias Heger



 Well, for the purpose of creating the first human-level AGI, it seems
 important **to** wire in humanlike bias about space and time ... this will
 greatly ease the task of teaching the system to use our language and
 communicate with us effectively...

 But I agree that not **all** AGIs should have this inbuilt biasing ... for
 instance an AGI hooked directly to quantum microworld sensors could become a
 kind of quantum mind with a totally different intuition for the physical
 world than we have...

 ben g








Re: [agi] I Can't Be In Two Places At Once.

2008-10-04 Thread Mike Tintner
Matthias: I think it is extremely important that we give an AGI no bias
about space and time as we seem to have.

Well, I (and possibly Ben) have been talking about an entity that is in many 
places at once - not in NO place. I have no idea how you would swing that - 
other than what we already have: machines that are information-processors 
with no sense of identity at all. Do you? 







Re: [agi] I Can't Be In Two Places At Once.

2008-10-04 Thread Stan Nilsen

Mike Tintner wrote:
Matthias: I think it is extremely important that we give an AGI no bias
about space and time as we seem to have.

Well, I (and possibly Ben) have been talking about an entity that is in 
many places at once - not in NO place. I have no idea how you would 
swing that - other than what we already have: machines that are 
information-processors with no sense of identity at all. Do you?






It seems hard to imagine information processing without identity. 
Intelligence is about invoking methods. Methods are created because 
they are expected to produce a result. The result is the value - the 
value that allows them to be selected from many possible choices.


Identity involves placing one's powers into a situation that is unique 
according to place and time. If it's Matt's global brain, then it will 
be critical for agents to grasp the value factors - which come from the 
time and place one inhabits.


Is it the time and space bias that is the issue? If so, what is the 
bias that humans have which machines shouldn't have?


just quick reactive thoughts...
Stan




Re: [agi] I Can't Be In Two Places At Once.

2008-10-04 Thread Mike Tintner

Matthias,

First, I see both a human body-brain and a distributed entity, such as a 
computer network, as *physically integrated* units, with a sense of their 
physical integrity. The fascinating thought (perhaps unrealistic) for me 
was of being able to physically look at a scene or scenes from different 
POVs more or less simultaneously - a thought worth exploring.


Second, your idea, AFAICT, of an unbiased-as-to-time-and-space 
intelligence, while v. vague, is also worth exploring. I suspect the 
all-important fallacy here is that of pure objectivity - the idea that an 
object or scene or world can be depicted WITHOUT any location or reference 
or comparison. When we talk of time and space, which are fictions that have 
no concrete existence, we are really talking (no?) of frameworks we use to 
locate and refer other things to. Clocks. 3/4-dimensional grids... All 
things have to be referred and compared to other things in order to be 
understood, which is an inevitably biased process. So is there any such 
thing as your non-bias? Just my first stumbling thoughts.





Matthias:


From my points 1 and 2 it should be clear that I was not talking about a

distributed AGI which is in NO place. The AGI you mean consists of several
parts which are in different places. But this is already the case with the
human body. The only difference is that the parts of the distributed AGI
can be placed several kilometers from each other. But this is only a
quantitative and not a qualitative point.

Now to my statement about a useful representation of space and time for AGI.
We know that our intuitive understanding of space and time works very well
in our life. But the ultimate goal of AGI is that it can solve problems
which are very difficult for us. If we give an AGI the bias of a model of
space and time which is not the state of the art of our knowledge from
physics, then we give the AGI a certain limitation which we ourselves suffer
from and which is not necessary for an AGI.
This point has nothing to do with the question of whether the AGI is
distributed or not.
I mentioned this point because your question relates to the more
fundamental question of whether, and which, bias we should give an AGI for
the representation of space and time.


-----Original Message-----
From: Mike Tintner [mailto:[EMAIL PROTECTED]]
Sent: Saturday, 4 October 2008 14:13
To: agi@v2.listbox.com
Subject: Re: [agi] I Can't Be In Two Places At Once.

Matthias: I think it is extremely important that we give an AGI no bias
about space and time as we seem to have.

Well, I (and possibly Ben) have been talking about an entity that is in many
places at once - not in NO place. I have no idea how you would swing that -
other than what we already have: machines that are information-processors
with no sense of identity at all. Do you?













Re: [agi] I Can't Be In Two Places At Once.

2008-10-03 Thread Ben Goertzel
yah, I discuss this in chapter 2 of The Hidden Pattern ;-) ...

the short of it is: the self-model of such a mind will be radically
different from that of a current human, because we create our self-models
largely by analogy to our physical organisms ...

intelligences w/o fixed physical embodiment will still have self-models, but
they will be less grounded in body metaphors ... hence radically different
...

we can explore this difference analytically, but it's hard for us to grok
empathically...

a hint of this is seen in the statement my son Zeb (who plays too many
videogames) made: "I don't like the real world as much as videogames because
in the real world I always have first-person view and can never switch to
third person"

one would suspect that minds w/o fixed embodiment would have more explicitly
contextualized inference, rather than so often positioning all their
inferences/ideas within one default context ... for starters...
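That last point can be made concrete with a toy sketch (my own illustration, not anything from this thread or from OpenCog/Novamente): a knowledge store in which every assertion carries an explicit context tag, so no single "here and now" frame is implicitly privileged. The class and context names are invented for the example.

```python
from collections import defaultdict

class ContextualStore:
    """Toy knowledge store: every proposition is asserted relative to
    an explicit, named context, never to an implicit default frame."""

    def __init__(self):
        # context name -> set of propositions held true in that context
        self._facts = defaultdict(set)

    def assert_in(self, context, proposition):
        self._facts[context].add(proposition)

    def holds(self, proposition, context):
        # Truth is only evaluated relative to a named context;
        # there is no global "here and now" against which to check.
        return proposition in self._facts[context]

store = ContextualStore()
store.assert_in("first_person_view", "the door is to my left")
store.assert_in("third_person_view", "the avatar faces the door")

print(store.holds("the door is to my left", "first_person_view"))  # True
print(store.holds("the door is to my left", "third_person_view"))  # False
```

A mind with one fixed body can get away with collapsing most of its inferences into a single default context; a distributed or re-embodiable mind, on this view, would need the explicit context tag everywhere.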

ben

On Fri, Oct 3, 2008 at 8:43 PM, Mike Tintner [EMAIL PROTECTED]wrote:

 The foundation of the human mind and system is that we can only be in one
 place at once, and can only be directly, fully conscious of that place. Our
 world picture,  which we and, I think, AI/AGI tend to take for granted, is
 an extraordinary triumph over that limitation   - our ability to conceive of
 the earth and universe around us, and of societies around us, projecting
 ourselves outward in space, and forward and backward in time. All animals
 are similarly based in the here and now.

 But, if only in principle, networked computers [or robots] offer the
 possibility for a conscious entity to be distributed and in several places
 at once, seeing and interacting with the world simultaneously from many
 POVs.

 Has anyone thought about how this would change the nature of identity and
 intelligence?







-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

Nothing will ever be attempted if all possible objections must be first
overcome   - Dr Samuel Johnson





Re: [agi] I Can't Be In Two Places At Once.

2008-10-03 Thread Mike Tintner
I think either way - computers or robots - a distributed entity has to be 
looking at the world from different POVs more or less simultaneously, even if 
rapidly switching. My immediate intuitive response is that that would make the 
entity much less selfish - much more open to merging or uniting with others.

The idea of a distributed entity may well have the power to change our ideas 
about God / the divine force/principle. I suspect our ideas are directly or 
indirectly v. located. Even if we, say, think of God or the force as being 
everywhere, it's hard not to think of that as the same force spread out.

But the idea of a distributed entity IMO opens up the possibility of an entity 
with a highly multiple personality - and perhaps also might make it possible 
to see all humans, say, and/or animals as one - an idea which has always given 
me, personally, a headache.


Ben:yah, I discuss this in chapter 2 of The Hidden Pattern ;-) ...

the short of it is: the self-model of such a mind will be radically different 
from that of a current human, because we create our self-models largely by 
analogy to our physical organisms ...

intelligences w/o fixed physical embodiment will still have self-models, but 
they will be less grounded in body metaphors ... hence radically different 

we can explore this difference analytically, but it's hard for us to grok 
empathically...

a hint of this is seen in the statement my son Zeb (who plays too many 
videogames) made: "I don't like the real world as much as videogames because in 
the real world I always have first-person view and can never switch to third 
person"

one would suspect that minds w/o fixed embodiment would have more explicitly 
contextualized inference, rather than so often positioning all their 
inferences/ideas within one default context ... for starters...

ben


  On Fri, Oct 3, 2008 at 8:43 PM, Mike Tintner [EMAIL PROTECTED] wrote:

The foundation of the human mind and system is that we can only be in one 
place at once, and can only be directly, fully conscious of that place. Our 
world picture,  which we and, I think, AI/AGI tend to take for granted, is an 
extraordinary triumph over that limitation   - our ability to conceive of the 
earth and universe around us, and of societies around us, projecting ourselves 
outward in space, and forward and backward in time. All animals are similarly 
based in the here and now.

But, if only in principle, networked computers [or robots] offer the 
possibility for a conscious entity to be distributed and in several places at 
once, seeing and interacting with the world simultaneously from many POVs.

Has anyone thought about how this would change the nature of identity and 
intelligence? 













