Agreed, internal simulation is necessary, and then real world testing (& 
correction/adjustment) must follow. But at what level of granularity must the 
simulation proceed? Why must it be a physical/visual simulation, rather than 
(or in conjunction with) a symbolic one?

I don't care for strict Boolean logic any more than you do. I think it's 
an oversimplification. Things aren't just true or false. They can be 
senseless/meaningless, or true half the time, or true unless you get 
surprised by something, etc. But there are ways to represent that in 
semantic networks.

Given a non-Boolean representation of truth, it becomes possible to weigh 
conflicting information and alternative explanations or expectations. And 
putting these into a semantic network that is capable of representing a story 
or chain of causally related events (or even a branching tree if you're 
thinking about alternative possibilities) allows an event-based symbolic 
simulation. Furthermore, in cases where physical movement in a 3D space is 
required, a physical model can be consulted for finer-grained modeling when an 
action's consequences aren't clearly discernible from the semantic information.
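
To make this concrete, here is a minimal sketch in Python. Everything in it 
is invented for illustration (the Event class, the rain/sprinkler example, 
and the particular numbers are not from any existing system); it just shows 
fuzzy truth values propagated along causal links so that competing 
explanations can be weighed:

```python
# Toy sketch: fuzzy truth values propagated along a causal chain.
# All names and numbers are invented for illustration.

class Event:
    def __init__(self, name, truth):
        self.name = name          # label for the event
        self.truth = truth        # degree of belief in [0, 1]
        self.causes = []          # list of (Event, strength) links

    def link(self, effect, strength):
        """Assert that this event causes `effect` with the given strength."""
        effect.causes.append((self, strength))

def evaluate(event):
    """Belief in an event: its own prior, boosted by its strongest cause."""
    support = [evaluate(c) * s for c, s in event.causes]
    return max([event.truth] + support)

rain = Event("it rained", 0.7)
sprinkler = Event("sprinkler ran", 0.2)
wet_grass = Event("grass is wet", 0.0)   # no direct observation
rain.link(wet_grass, 0.9)                # rain strongly implies wet grass
sprinkler.link(wet_grass, 0.8)

print(evaluate(wet_grass))   # ~0.63: rain is the better explanation
```

Extending the chain (or branching it into alternatives) is just a matter of 
adding more links, which is the point: the "simulation" is a walk over the 
network, not a physics run.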

People often visualize things unrealistically. I'm reading Steven Pinker's How 
the Mind Works, and he mentions an experiment where people predicted that a 
ball shot out of a corkscrew tube would continue on a corkscrew path. Of 
course, when shown a video of it actually happening that way, the same people 
laughed at the absurdity of the image. If you've already thought about the 
scenario, however, and you're asked the same question again, you don't need to 
visualize it all over again to know it's BS. You just pop out with, "Of course 
not!" and laugh, because you've stored the information away semantically for 
rapid contextually-based retrieval. This semantic information is accumulated 
about all kinds of things, all the time.
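
That "simulate once, retrieve semantically thereafter" shortcut is easy to 
sketch; here is a toy illustration in Python (the simulation stub, the 
question string, and its one-word answer are invented stand-ins):

```python
# Toy sketch of "simulate once, retrieve semantically thereafter".
# The simulation stub and fact names are invented for illustration.

semantic_memory = {}          # question -> cached answer
sim_runs = 0                  # counts how often the slow path is taken

def visual_simulation(question):
    """Stand-in for an expensive physical/visual simulation."""
    global sim_runs
    sim_runs += 1
    return "no"               # a ball leaving a corkscrew tube flies straight

def answer(question):
    if question not in semantic_memory:          # slow path: simulate once
        semantic_memory[question] = visual_simulation(question)
    return semantic_memory[question]             # fast path thereafter

q = "does a ball keep corkscrewing after leaving the tube?"
answer(q)          # first ask: runs the simulation
answer(q)          # second ask: instant semantic retrieval
print(sim_runs)    # 1
```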

I think the key difference between people and animals is that we've evolved 
means for manipulating semantic information directly, taking a shortcut past 
all the physical modeling, and, more importantly, *communicating* that semantic 
information directly via speech, taking a shortcut past all the physical 
experience. How much computation does it take to run a 3D simulation, vs. 
bumping a few fuzzy truth values up against each other to get a rough estimate? 
Why wouldn't evolution engineer that computational efficiency into us?



-- Sent from my Palm Pre
On Oct 20, 2012 2:16 PM, Mike Tintner <[email protected]> wrote: 





Aaron: The human brain is defined by this "box", the drive to identify and 
collect useful new concepts... These are hardcoded into the human condition 
by our genes. They are algorithms. Why can't we copy them, or improve on 
them, in software?
 
Why can’t science work without 
experiments or investigation? Why can’t we go back to pre-science, ivory tower, 
in-a-box/book, natural philosophy?
 
I have outlined at length before why 
concepts are fluid schemas/outlines - and not just outlines on a mental canvas 
but “outlines of [embodied] action”. 
 
DRAW ME A CHAIR
HAND ME THAT BOX
 
are not and never could be hardcoded or 
algorithmic or disembodied – they are there like all concepts to direct/embrace 
an infinite diversity of possible actions in an infinite diversity of possible 
real world situations, incl. an infinite diversity of *new* actions and 
situations. And a real world AGI – you – must not just check, like logic, 
whether those ideas are logically *true* but, like science, first simulate 
them in an embodied way internally in order to start to *realise* them and 
see whether they are *realistic* when compared with the relevant *evidence* 
you have about such actions, and then, like science, must put them into 
real world action and test whether they *really* work.
 
But no time now... I mainly wanted to 
say thanks for your personal response – sure I bash too much.





 

From: [email protected] 
Sent: Saturday, October 20, 2012 7:33 PM
To: AGI 

Subject: Re: [agi] ONE EXAMPLE
 
This is by far the best post I've seen come out of you, Mike. You started 
talking about what you think instead of just bashing what other people 
think. If you continue this way, it might be possible to come to the point 
of mutual understanding, if not agreement, instead of just frustrated 
gnashing of teeth on both sides. Having said that, I have a couple of 
points I disagree with you on.


1) Being on a desk, in a pocket, or roving around in a robot body is not 
what makes the difference between the chef/program that remixes the same 
tired old ingredients/concepts to synthesize new recipes/behaviors/thoughts 
and the chef/program that goes looking for new ingredients/concepts to make 
something fundamentally new. What makes the difference is *interaction* 
with the real world.

A robot that's blind and deaf isn't going to be able to incorporate new 
concepts any better than a PC on a desk that's equally blind and deaf. It's 
the ability to sense the world that's key to developing new concepts.

Where you'll probably disagree with me, however, is that I 
think natural language in the form of text might be sufficient sensory input to 
develop a limited range of new concepts. This is not to say that true AGI could 
come out of a text-based interface, but that we could potentially get something 
of the same flavor but watered down.

2) You consider algorithms to be pre-planned courses of action, as opposed 
to improvised, ad hoc courses of action. This is true for your typical 
stereotype of an algorithm, e.g. using an algorithm specifically designed 
so the robot can make coffee or something. However, AGI is defined by the 
attempt to build something that comes up with these things on its own, not 
one that just follows a pre-planned routine or picks from among a set of 
pre-planned routines.

The human brain's 
learning mechanisms implement algorithms for recognizing new concepts and 
incorporating them, and the human brain's planning mechanisms implement 
algorithms that improvise new courses of action on the fly, often preempting 
those already being acted out.

To change these mechanisms, you would have to change the person's DNA or 
rewire their brain. This ensures that we don't go haywire, which would be 
contrary to evolutionary fitness. These make the "box" that defines what 
we're capable of thinking of, including what kinds of new concepts or 
courses of action we're capable of coming up with. We are all limited in 
our level of insight. No one can think of everything, but for any one thing 
taken by itself, there's a good chance a person can think of it given the 
time.

What AGIers want to build is this "box", not the contents. We're 
dissatisfied with standard AI because it focuses on the contents. This seems to 
be your complaint about AGI, as well. But I think really this is a failure of 
recognition, not of existence.

We want to build programs that automatically recognize and incorporate new 
concepts, much as your chef with his new ingredients. But the chef himself 
is following a standard algorithm: collect and incorporate new ingredients. 
This is the "box" that grabs new contents instead of fuddling through the 
same ones over and over.

We want to build programs that automatically generate new courses of 
action on the fly using these concepts they have accumulated through 
experience, much as your chef creates new recipes by recombining old and 
new ingredients. The chef follows a standard algorithm here, too: pick some 
ingredients, and pick some steps by which to combine and prepare them. This 
is the "box" that shakes up the contents it has already accumulated to 
invent a new combination thereof, which fits the needs of the moment.

The human brain is defined by this "box", the drive 
to identify and collect useful new concepts, and the drive to utilize the 
concepts already collected towards meeting the person's needs. These are 
hardcoded into the human condition by our genes. They are algorithms. Why can't 
we copy them, or improve on them, in software? Yes, the algorithms are static, 
unchanging, but what they *do* -- identifying & collecting new concepts and 
using them in improvised, appropriate behavior -- is dynamic and 
fluid.






On Oct 20, 2012 7:24 AM, Mike Tintner <[email protected]> wrote: 





Jeez Ben, that is the longest, most convoluted post ....
 
You really are not registering (forget about agreeing with) the other side
at all – and thinking in strictly retro, out-of-date terms.
 
1) Vis-a-vis your ideas – the accusation is simple: you do not have an idea 
for take-off – an idea that will explain how a machine will go on from one 
diverse task to another. Not even an attempt at an idea. Your magic sauce is 
eternally in the post. 
 
Such an idea would explain, say, how your robot would pick up not just one 
form of object, but another and another – and ultimately any object within 
the capacity of its effector – **all without being reprogrammed.** Or it 
would explain how a robot would go from one terrain to another to another – 
from rocky to sandy to beach – **all without being reprogrammed.**
 
Or how an AGI would go from understanding a story about dogs, to a story 
about lions, giraffes, snakes etc – all without being reprogrammed.
 
Nor BTW does the entire field have even an *attempt* at a take-off 
idea.
 
 
2) “AGI gadfly”. I am not. I am an opponent of what you and most here 
represent, which is **standalone computational AGI** (yes, I know you are 
making gestures towards robots, but they are not fundamental/integral).

 
I OTOH am a proponent of what will be the next and first generation of 
**real AGI** – ROBOTICS AGI.   
 
3) Creativity is what AGI is about – I agree with Deutsch/ he agrees with 
me. It is what will distinguish real AGI from narrow AI. 
 
Creativity does indeed involve a) the incorporation of, and b) adaptation 
to, **new elements**. And it is in no way a mechanical problem for a robot 
– it is merely impossible for a standalone computer.
 
[The identification of *new elements* is key to defining creativity – 
precisely because any computer program can be called trivially creative. As 
I think you said, a chess program is trivially creative because it plays 
new games it has never played before.
 
But does it introduce *new elements*? Does it introduce *new moves* or *new 
pieces*? No it doesn’t and can’t. So now we can define why it isn’t truly 
creative.]
 
By extension, most AGI-ers, incl. you AFAICT, seem deeply confused about 
whether GAs are creative or not. They are not – precisely because all they 
do is re-mix an existing set of old elements.
 
Put this simply – let’s say you are a creative chef who wants to create a 
new dish.
 
If you follow the GA approach, you will take an existing set of dishes 
with a limited set of ingredients, and you will endlessly remix them. You 
remix the same old elements. So if you want a new ice cream, you will take 
existing flavours and endlessly remix those. Lemon and vanilla and caramel 
etc. And what you will get will be new, but definitely still 
vanilla-y/caramel-y/lemon-y etc – still v. much of the same family.
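
The GA picture described here can be sketched in a few lines of Python (the 
ingredient names and recipe scheme are invented for illustration): 
recombination over a fixed set can never yield an element outside that set.

```python
# Toy GA-style "remix" over a fixed ingredient set, illustrating that
# crossover never introduces an element outside the original set.
# All names are invented for illustration.
import random

INGREDIENTS = ["lemon", "vanilla", "caramel", "chocolate"]

def random_recipe():
    """A recipe is just two distinct ingredients from the fixed set."""
    return random.sample(INGREDIENTS, 2)

def crossover(a, b):
    """New recipe from two parents: one ingredient taken from each."""
    return [random.choice(a), random.choice(b)]

random.seed(0)
population = [random_recipe() for _ in range(10)]
for _ in range(100):                      # generations of endless remixing
    a, b = random.sample(population, 2)
    population.append(crossover(a, b))

# Every recipe ever produced is drawn from the original ingredient set:
# "snail" or "bacon" can never appear, no matter how many generations run.
assert all(i in INGREDIENTS for r in population for i in r)
```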
 
That is not true creativity, or the way real-world creativity, or real 
AGI, does or can proceed.
 
The creative chef looks for NEW INGREDIENTS – new elements. The creative 
chef will be able to come up with **snail** ice cream or **eggs and bacon** 
ice cream or – what the heck – leather ice cream. How will he do this? No 
mechanical mystery – to put it v. simply: he will look around the world for 
new ingredients, reach out and pick them up and add them to his ice-cream 
mix/stew and mix them in – and see if they work. (This by the way is what 
improvisation is – nothing to do with those musical progs., which do not 
improvise at all – merely remix existing musical elements.)
 

The chef uses his robotic/embodied capacity to find new elements in the 
world – “objets trouvés” – both physical and mental elements. Standalone 
computer programs cannot do this.

 

4) The REAL BATTLE – so what is going on here is not a battle between you 
– guardian of the sacred flame – and some irritant gadfly – but a clash 
between two fundamentally opposed visions of how AGI should proceed:

 

a) STANDALONE, VIRTUAL WORLD AGI 

 

vs

 

b) ROBOTIC, REAL WORLD AGI

 

(and this can be framed also as a clash between standalone desktop 
computers on the one hand, and real-world-embodied-and-embedded tablets and 
mobile phones on the other).

 

And secondly, it is a clash between

 

a) PRE-PLANNED COURSES OF ACTION (of wh. algorithms are one 
form)

 

and

 

b) IMPROVISED, AD HOC COURSES OF ACTION.

 

Once you move out of your computational virtual world (as present robotics 
AGI-ers are doing), you will realise the central challenge of navigating 
the real, unstructured world – how on earth can you plan for a real world 
that is continually throwing new things at you (new elements) – where you 
can never know what is around the next corner, or foresee every pitfall and 
stumbling block?

 

And secondly – though this is something you have not even begun to think 
about – why on earth would you *want* to plan for the real world – when 
there are so many new and better things to do – so many new and better ways 
to navigate?

 

5) CONCLUSION – So what is going on here is not a clash between heroic 
AGI-ers and some mad troll whose only apparent purpose in life is to cause 
them trouble, but between two grand overarching visions of what AGI is –

 

between a VIRTUAL, PREPLANNED, “DETACHED”, AGI and a ROBOTIC, 
IMPROVISED, “EMBODIED-AND-EMBEDDED” AGI.

 

And just as Deutsch has just more or less echoed a whole set 
of points that I have been making for years, and squishy robots came along to 
support my fluid schemas, so others will come to support the still-new points 
that I am making.

 

Although my vision has not been set out systematically here, 
it is indeed very extensive and very coherent – and has very practical 
consequences. A great deal of thought has gone into it.

 
 

 

 

 

 

From: 
Ben Goertzel 


Sent: Saturday, October 20, 2012 4:45 AM
To: AGI 

Subject: Re: [agi] ONE EXAMPLE
 
***
As 
for Ben, he  has never faced the problem of AGI in his life. Ask him what 
ideas he has for AGI take-off/creativity 
***


Mike, as you know I wrote a book on the nature of creativity in 1997 (From 
Complexity to Creativity)... You unfortunately lack the scientific 
background to read it carefully.... And my views on AGI have been written 
down extensively, albeit not in sufficiently simplified language for you to 
understand.... (I'm working to remedy that, with a popular-audience book on 
AGI in the works...)


The kind of argument you are trying to make was made far more ably by 
George Kampis in his mid-1990s book "Self-Modifying Systems...", and I 
counter-argued his points extensively in my own book.


Far from new and radical, the view you're presenting is very familiar to 
me (and everyone else in the AGI field); we just don't agree with it.

 
Still, though you are (among other things) an under-educated, obnoxious 
mailing-list troll, the points you raise have been raised often enough by 
others with more incisive minds and better educations than you, that I feel 
moved to write a somewhat thorough response...
 
The question of creativity and "radical novelty" is an interesting one that
I've often discussed with others F2F...
 
I just wrote a blog post
 
http://multiverseaccordingtoben.blogspot.com/2012/10/can-computers-be-creative.html
 
that addresses these issues.  I preferred to write a blog post than a
long email, as emails have more of a feeling of vanishing into the ether, 
whereas blog posts feel more persistent..
 
ben
 
 


  
  






-------------------------------------------
AGI
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/21088071-c97d2393
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=21088071&id_secret=21088071-2484a968
Powered by Listbox: http://www.listbox.com
