Re: [agi] Re: Merging - or: Multiplicity

2008-05-29 Thread Steve Richfield
Mike,

On 5/28/08, Mike Tintner [EMAIL PROTECTED] wrote:

  Steve: I have been advocating fixing the brain shorts that lead to
 problems, rather than jerking the entire world around to make brain shorted
 people happy.

 Which brain shorts? IMO the brain's capacity for shorts in one situation
 is almost always a capacity for short-cuts in another - and dangerous to
 tamper with.


It appears that the principle of reverse reductio ad absurdum is SO
non-obvious that it has escaped human notice for about a million years. In
its absence, we have countless needless problems, and have evolved into a
species that would rather fight than solve problems with advanced reasoning
methods (which we haven't had). Yes, I AM including AGIers in this list, and
excepting only those with a working understanding of reverse reductio ad
absurdum. By my count, that means that all but maybe a few dozen people on
the face of the earth are SERIOUSLY brain shorted, as will be any AGIs that
they construct.

This discussion reminds me of the floating-point format that IBM adopted on their
mainframe computers, complete with normalization that shifted 4 bits at a
time. As one CS person noted, it made roundoff errors faster than any other
computer on the face of the earth. Hence, let's first get things working
right, and then let's work on the shortcuts.
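A minimal Python sketch of why that format rounded badly; it is illustrative only, not IBM's actual hardware logic. System/360 hex float normalized its 24-bit fraction in 4-bit steps (one hex digit at a time), so up to three leading zero bits could go unused, costing precision relative to a binary format of the same width:

import math

def quantize(x, base, frac_bits):
    # Quantize |x| to a frac_bits-bit fraction normalized in digits of
    # the given base: base=16 models System/360 hex float (4-bit shifts),
    # base=2 models an ordinary binary float of the same fraction width.
    if x == 0.0:
        return 0.0
    # Choose exponent e so that base**(e-1) <= |x| < base**e.
    e = math.floor(math.log(abs(x), base)) + 1
    scale = 2.0 ** frac_bits / base ** e
    return math.copysign(round(abs(x) * scale) / scale, x)

x = 0.1
print("hex float rel. error:    %.2e" % (abs(quantize(x, 16, 24) - x) / x))
print("binary float rel. error: %.2e" % (abs(quantize(x, 2, 24) - x) / x))

For x = 0.1 the hex quantizer's relative error comes out roughly 16 times the binary one, which is the price of those wasted leading zero bits.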


  Steve: Let's instead 1.  make something USEFUL, like knowledge management
 programs that do things that people (and future AGIs) are fundamentally poor
 at doing

 Well, in principle, a general expert system that can be a problem-solving
 aid in many domains would be a fine thing. But - if you'll forgive the
 ignorance of this question - my impression was that expert systems were a
 big fad that largely failed??? If you have a link to some survey here,
 I'd appreciate it.


My own Dr. Eliza incorporates the missing pieces whose absence doomed prior
efforts. Not the least of these is coding regular expressions to match what
people say when they have a particular problem but are ignorant of its
workings. Surprisingly, this is not nearly as difficult as it sounds.
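To make that concrete, here is a toy sketch of the matching idea; the patterns and diagnoses below are invented for illustration and are not Dr. Eliza's actual knowledge base:

import re

# Each regex targets how a LAYMAN describes a problem; someone who
# understood the mechanism would phrase it differently, and that naive
# phrasing is exactly what the patterns key on. Entries are hypothetical.
PATTERNS = [
    (re.compile(r"\b(tired|exhausted|no energy)\b.*\b(wake|morning)\b", re.I),
     "possible sleep disorder"),
    (re.compile(r"\bheadaches?\b.*\b(screen|computer|monitor)\b", re.I),
     "possible eye strain"),
]

def match_complaints(utterance):
    # Return the knowledge-base topics whose patterns fire.
    return [topic for rx, topic in PATTERNS if rx.search(utterance)]

print(match_complaints("I have no energy when I wake up in the morning"))
# -> ['possible sleep disorder']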


 Steve, the capacity for general thinking/intelligence HAS to be - and is
 being - explored. William may be right that all the main AGI-ers are like
 him avoiding the challenge of general problem-solving, and hoping that the
 answer will emerge later on in the development of their systems. But
 roboticists are setting themselves general problems now - in the shape, if
 nothing else, of the ICRA challenge, as I've pointed out before.


This has been an ongoing effort for the last ~40 years, so while we all
remain hopeful, I am not expecting anything spectacular anytime soon.

Do you have some reason to expect a breakthrough?

Steve Richfield





Re: [agi] Re: Merging - or: Multiplicity

2008-05-28 Thread William Pearson
2008/5/27 Mike Tintner [EMAIL PROTECTED]:
 Will:And you are part of the problem insisting that an AGI should be tested
 by its ability to learn on its own and not get instruction/help from
 other agents be they human or other artificial intelligences.

 I insist[ed] that an AGI should be tested on its ability to solve some
 *problems* on its own - cross-domain problems - just as we do. Of course, it
 should learn from others, and get help on other problems, as we do too.

But you don't test for that, and as the Loebner Prize shows, you only
tend to get what you test for.

 But
 if it can't solve many general problems on its own - which seemed OK by you
 (after setting up your initially appealing submersible problem - solutio
 interrupta!) - then it's only a narrow AI.

I am happy for the baby machine (which is what we will be dealing with
to start with) not to be able to solve general problems on its own.
Later on, I would be disappointed if it still couldn't.

  Will Pearson




Re: [agi] Re: Merging - or: Multiplicity

2008-05-28 Thread Mike Tintner
Steve: I have been advocating fixing the brain shorts that lead to problems, 
rather than jerking the entire world around to make brain shorted people happy.

Which brain shorts? IMO the brain's capacity for shorts in one situation is 
almost always a capacity for short-cuts in another - and dangerous to tamper 
with. 

Steve: Let's instead 
1.  make something USEFUL, like knowledge management programs that do things 
that people (and future AGIs) are fundamentally poor at doing

Well, in principle, a general expert system that can be a problem-solving aid 
in many domains would be a fine thing. But - if you'll forgive the ignorance of 
this question - my impression was that expert systems were a big fad that 
largely failed??? If you have a link to some survey here, I'd appreciate it.

Steve, the capacity for general thinking/intelligence HAS to be - and is being 
- explored. William may be right that all the main AGI-ers are like him 
avoiding the challenge of general problem-solving, and hoping that the answer 
will emerge later on in the development of their systems. But roboticists are 
setting themselves general problems now - in the shape, if nothing else, of the 
ICRA challenge, as I've pointed out before.





[agi] Re: Merging - or: Multiplicity

2008-05-27 Thread Mike Tintner
Steve: Presuming that you do NOT want to store all of history and repeatedly 
analyze all of it as your future AGI operates, you must accept MULTIPLE 
potentially-useful paradigms, adding new ones and trashing old ones as more 
information comes in. Our own very personal ideas of learning and thinking do 
NOT typically allow for the maintenance of multiple simultaneous paradigms, 
cross-paradigm translation, etc.
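A minimal sketch of what that might look like in code - my reading of the idea rather than Steve's actual design; the scoring and pruning rules here are invented:

import random

class Paradigm:
    # Invented illustration: a "paradigm" is reduced to a fixed predictor.
    def __init__(self, guess):
        self.guess = guess   # this paradigm's prediction about the world
        self.score = 0.0     # recency-weighted fit; no history is stored

    def update(self, observation, decay=0.9):
        # Exponential decay: old data fades away instead of being re-analyzed.
        self.score = decay * self.score - (1 - decay) * abs(observation - self.guess)

random.seed(1)
pool = [Paradigm(random.gauss(0, 5)) for _ in range(8)]
for t in range(500):
    obs = 3.0 + random.gauss(0, 0.5)   # the process being modeled
    for p in pool:
        p.update(obs)
    if t % 50 == 0 and t > 0:
        # Trash the worst paradigm, admit a fresh candidate near the best.
        # (Fresh candidates start at score 0 and must then earn their keep.)
        pool.sort(key=lambda p: p.score, reverse=True)
        pool[-1] = Paradigm(random.gauss(pool[0].guess, 1.0))

best = max(pool, key=lambda p: p.score)
print("surviving paradigm predicts %.2f (truth is 3.0)" % best.guess)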

Steve,

Some odd thoughts in response to an odd but interesting post :). 

1) A true AGI - including every living creature - has to be a SELF-EDUCATOR, 
someone who doesn't just learn, but learns how to learn, and that means 

2) A true AGI also has to be a CONSUMER in every sphere of its activities - 
choosing from multiple available paradigms; and to be a mix of different 
paradigms in each area - which is healthy and inevitable.

3) Every true intelligence is, and can only be, one individual in a SOCIETY OF 
INTELLIGENCES - a consumer in an extensive MARKET of multiple ideas and 
paradigms (wouldn't anything less be un-American?)

Correct me, but all the ideas of AGIs that I've seen are about INDIVIDUAL 
isolated minds - single superpowerful computers taking over the world, as per 
the sci-fi movies that seem to have shaped everyone's thinking.

Actually, that's an absurdity. The whole story of evolution tells us that the 
problems of living in this world for any species of creature/intelligence at 
any level can only be solved by a SOCIETY of individuals. This whole dimension 
seems to be entirely missing from AGI.






Re: [agi] Re: Merging - or: Multiplicity

2008-05-27 Thread William Pearson
2008/5/27 Mike Tintner [EMAIL PROTECTED]:

 Actually, that's an absurdity. The whole story of evolution tells us that
 the problems of living in this world for any species of
 creature/intelligence at any level can only be solved by a SOCIETY of
 individuals. This whole dimension seems to be entirely missing from AGI.


And you are part of the problem insisting that an AGI should be tested
by its ability to learn on its own and not get instruction/help from
other agents be they human or other artificial intelligences.

The social aspect of mimicry has been picked up by Ben Goertzel, at least
in the initial stages of development of his AGI; he may think it will
evolve beyond that eventually.

I don't think it will, as every mind is capable of getting stuck in a
rut (ruts are attractor states), and getting out of a rut is easier
with other intelligences to show the way out (themselves stuck in
different ruts). Societies can get stuck in ruts of their own, but they
generally have bigger spaces to explore, so they may find their way out,
given enough time.
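A toy illustration of the rut argument - my framing, not Will's code: greedy hill-climbers stall in local optima, and letting climbers restart from the best point anyone in the group has found tends to beat climbing alone:

import random
from math import sin, cos

def f(x):
    # Invented rugged landscape: many local maxima ("ruts").
    return sin(5 * x) + 0.5 * cos(13 * x) - 0.05 * x * x

def climb(x, steps=300, eps=0.02):
    # Greedy hill-climb: accepts only improvements, so it stalls in a rut.
    for _ in range(steps):
        cand = x + random.uniform(-eps, eps)
        if f(cand) > f(x):
            x = cand
    return x

random.seed(0)
# Lone minds: each keeps whatever rut it first stalls in.
lone = [climb(random.uniform(-3, 3)) for _ in range(10)]
# A society: everyone re-climbs from near the best point found so far.
best = max(lone, key=f)
social = [climb(best + random.uniform(-0.5, 0.5)) for _ in range(10)]

print("average lone result:   %.3f" % (sum(map(f, lone)) / len(lone)))
print("average social result: %.3f" % (sum(map(f, social)) / len(social)))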

  Will




Re: [agi] Re: Merging - or: Multiplicity

2008-05-27 Thread Mike Tintner

Will:And you are part of the problem insisting that an AGI should be tested
by its ability to learn on its own and not get instruction/help from
other agents be they human or other artificial intelligences.

I insist[ed] that an AGI should be tested on its ability to solve some 
*problems* on its own - cross-domain problems - just as we do. Of course, it 
should learn from others, and get help on other problems, as we do too. But 
if it can't solve many general problems on its own - which seemed OK by you 
(after setting up your initially appealing submersible problem - solutio 
interrupta!) - then it's only a narrow AI. 







Re: [agi] Re: Merging - or: Multiplicity

2008-05-27 Thread Steve Richfield
Mike,

On 5/27/08, Mike Tintner [EMAIL PROTECTED] wrote:

  Steve: Presuming that you do NOT want to store all of history and
 repeatedly analyze all of it as your future AGI operates, you must accept
 MULTIPLE potentially-useful paradigms, adding new ones and trashing old ones
 as more information comes in. Our own very personal ideas of learning and
 thinking do NOT typically allow for the maintenance of multiple simultaneous
 paradigms, cross-paradigm translation, etc.

 Steve,

 Some odd thoughts in response to an odd but interesting post :).

 1) A true AGI - including every living creature - has to be a SELF-EDUCATOR,
 someone who doesn't just learn, but learns how to learn, and that means

 2) A true AGI also has to be a CONSUMER in every sphere of its activities
 - choosing from multiple available paradigms; and to be a mix of different
 paradigms in each area - which is healthy and inevitable.

 3) Every true intelligence is, and can only be, one individual in a SOCIETY
 OF INTELLIGENCES - a consumer in an extensive MARKET of multiple ideas
 and paradigms (wouldn't anything less be un-American?)

 Correct me, but all the ideas of AGIs that I've seen are about INDIVIDUAL
 isolated minds - single superpowerful computers taking over the world, as
 per the sci-fi movies that seem to have shaped everyone's thinking.


As the one lone holdout here, I have been advocating fixing the brain shorts
that lead to problems, rather than jerking the entire world around to make
brain shorted people happy. This is what professional negotiators call a
win-win solution. Every professional negotiator KNOWS that there is ALWAYS
a win-win solution. All reverse reductio ad absurdum does is provide the
PROOF of this, along with some guidance toward finding the solution, so there
is no longer any excuse for failing to come up with a win-win solution.


 Actually, that's an absurdity. The whole story of evolution tells us that
 the problems of living in this world for any species of
 creature/intelligence at any level can only be solved by a SOCIETY of
 individuals.


This sure hasn't worked for the last million years or so.

 This whole dimension seems to be entirely missing from AGI.


That sure isn't the only thing that is missing from AGI.

We already have BILLIONS of human-scale AGIs running around and are turning
out more at the rate of one per second. Why waste an hour trying to make
still more when we have quite enough - unless, of course, you are in the
company of a beautiful young woman? Let's instead either
1.  make something USEFUL, like knowledge management programs that do things
that people (and future AGIs) are fundamentally poor at doing, and/or
2.  make something VALUABLE, like life-forever-machines, which may not be
very useful, but at least might be valuable to rich people who want to
live forever.

It is sure nice that this is a VIRTUAL forum, for if we were all in one room
together, my posting above would probably get me thrashed by the *true* AGI
believers here.

Does anyone here want to throw a virtual stone?

Steve Richfield





RE: [agi] Re: Merging - or: Multiplicity

2008-05-27 Thread Derek Zahn
Steve Richfield:
 It is sure nice that this is a VIRTUAL forum, for if we were all 
 in one room together, my posting above would probably get 
 me thrashed by the true AGI believers here.
 
 Does anyone here want to throw a virtual stone?
 
Sure.
 
*plonk*
 

