Re: [agi] The problem with AGI per Sloman

2010-06-28 Thread Ian Parker
The space navigation system is a case in point. I listened to a talk by
Peter Norvig in which he talked about looking at the motion of the heavens in
an empirical way. Let's imagine angels and crystal spheres and see where we
get. He rather decried Newton and the apple. In fact the apple story isn't
true. Newton investigated gravity after discussion with Halley of comet
fame. Newton was not generous enough to admit this and came up with the *apple*
story. In fact there is a 7th parameter - the planet's mass. This is needed
when you think about the effect that a planet has on the others. Adams and
Le Verrier predicted Neptune from irregularities in the orbit of Uranus, and
it was found in 1846.

Copernicus came up with the Heliocentric theory because Ptolemy's epicycles
were so cumbersome. In fact a planet's orbit can be characterised by six
parameters: its semi-major axis, its eccentricity, the plane of its orbit
(two angles), the angle of the major axis within that plane, and its position
in orbit at a given time. These parameters are constant unless a third body
is present.
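
To make the six parameters concrete, here is a rough sketch in Python (the
names are my own and the Mars values are approximate, from memory - purely
illustrative):

from dataclasses import dataclass

@dataclass
class KeplerianOrbit:
    """The six classical orbital elements described above."""
    semi_major_axis_au: float    # size of the ellipse
    eccentricity: float          # shape of the ellipse
    inclination_deg: float       # plane of the orbit (first of two angles)
    ascending_node_deg: float    # plane of the orbit (second of two angles)
    arg_periapsis_deg: float     # angle of the major axis within that plane
    anomaly_at_epoch_deg: float  # position in orbit at a reference time

# Approximate values for Mars:
mars = KeplerianOrbit(1.524, 0.093, 1.85, 49.6, 286.5, 19.4)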

How do we get to Saturn? Cassini had a series of encounters. We view each
encounter by calculating first a solar orbit and then a
Venusian/Terrestrial/Jovian orbit. Deep in an encounter a planet is the main
influence. Three-body problems are tricky, so we do a numerical computation
with time steps. We have to work out first of all the series of encounters
and then the effects of inaccuracies. We need to apply corrections, for
example *before* we encounter a planet.
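
As a rough illustration of the kind of time-stepped computation I mean - not
any mission's actual code - here is a minimal Python sketch of leapfrog
integration of point masses under the inverse square law, in units of AU,
years and solar masses (so G is 4*pi^2; all values are illustrative):

import math

G = 4 * math.pi ** 2  # gravitational constant in AU^3 / (yr^2 * solar mass)

def accelerations(pos, masses):
    """Inverse-square-law acceleration on each body from all the others."""
    acc = [[0.0, 0.0] for _ in pos]
    for i in range(len(pos)):
        for j in range(len(pos)):
            if i == j:
                continue
            dx, dy = pos[j][0] - pos[i][0], pos[j][1] - pos[i][1]
            r = math.hypot(dx, dy)
            a = G * masses[j] / r ** 2
            acc[i][0] += a * dx / r
            acc[i][1] += a * dy / r
    return acc

def leapfrog_step(pos, vel, masses, dt):
    """One kick-drift-kick step; symplectic, so orbits stay stable."""
    acc = accelerations(pos, masses)
    for i in range(len(pos)):
        vel[i][0] += 0.5 * dt * acc[i][0]
        vel[i][1] += 0.5 * dt * acc[i][1]
        pos[i][0] += dt * vel[i][0]
        pos[i][1] += dt * vel[i][1]
    acc = accelerations(pos, masses)
    for i in range(len(pos)):
        vel[i][0] += 0.5 * dt * acc[i][0]
        vel[i][1] += 0.5 * dt * acc[i][1]

# Sun plus a planet on a circular 1 AU orbit (orbital speed 2*pi AU/yr):
masses = [1.0, 3e-6]
pos = [[0.0, 0.0], [1.0, 0.0]]
vel = [[0.0, 0.0], [0.0, 2 * math.pi]]
for _ in range(1000):
    leapfrog_step(pos, vel, masses, dt=0.001)

A real trajectory code would use a higher-order integrator and shrink the
step size deep in an encounter, but the structure is the same.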

How does this fit in with AGI? Well, Peter, I don't think your empirical
approach will get you to Saturn. There is the need for theoretical
knowledge. How is this knowledge represented in AGI? It is represented in an
abstract mathematical form, a form which describes a general planet and
incorporates the inverse square law.

What in fact we need to know about space navigation is whether our system
incorporates these abstract definitions. I would view AGI as something which
will understand an abstract definition. This is something a little
metamathematical. It understands ellipses and planets in general and wants to
know how this knowledge is incorporated in our system. In fact I would view
AGI as something checking on the integrity of other programs.

*Abstract Language*
By this I mean an abstract grammatical description of a language, its gender
system and morphology. This is vital. I don't think our Peter has ever learnt
even Spanish, at least not properly. We can find out what morphology a word
has once we have a few examples iff (if and only if) we have a morphological
description built in.
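
A toy sketch of what I mean, in Python - the paradigms are simplified Spanish
present-tense endings, purely illustrative. With the morphological
description built in, two observed forms are enough to classify a new verb:

PARADIGMS = {
    "-ar": {"1sg": "o", "2sg": "as", "3sg": "a", "1pl": "amos"},
    "-er": {"1sg": "o", "2sg": "es", "3sg": "e", "1pl": "emos"},
}

def infer_paradigm(stem, observed):
    """Return the built-in paradigms consistent with the observed forms."""
    return [name for name, endings in PARADIGMS.items()
            if all(observed[slot] == stem + endings[slot] for slot in observed)]

# Two forms of 'comer' (stem 'com-') already rule out the -ar class:
print(infer_paradigm("com", {"2sg": "comes", "1pl": "comemos"}))  # ['-er']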


  - Ian Parker


 A narrow embedded system, like say a DMV computer network, is not an AGI.
 But that doesn't mean an AGI could not perform that function. In fact, AGI
 might arise out of these systems needing to become more intelligent. And an
 AGI system - that same AGI software - may be used for a DMV, a space
 navigation system, IRS, NASDAQ, etc.; it could adapt... efficiently. There
 are some systems that tout multi-use now but these are basically very narrow
 AI. AGI will be able to apply its intelligence across domains and should be
 able to put its feelers into all the particular subsystems. Although I
 foresee some types of standard interfaces perhaps into these narrow AI
 computer networks; some sort of intelligence standards maybe, or the AGI
 just hooks into the human interfaces...

 An AGI could become a God, but it could also do some useful stuff like run
 everyday information systems, just as people with brains have to perform
 menial labor.

 John











RE: [agi] The problem with AGI per Sloman

2010-06-27 Thread John G. Rose
It's just that something like world hunger is so complex that AGI would have to
master simpler problems first. Also, there are many people and institutions that
have solutions to world hunger already and they get ignored. So an AGI would
have to get established over a period of time for anyone to really care what
it has to say about these types of issues. It could simulate things and come
up with solutions but they would not get implemented unless it had power to
influence. So in addition AGI would need to know how to make people
listen... and maybe obey.

 

IMO, AGI will take the embedded route - like other types of computer
systems - IRS, weather, military, Google, etc. - and we become dependent
intergenerationally, so that it is impossible to survive without it. At that
point AGIs will have power to influence.

 

John

 

From: Ian Parker [mailto:ianpark...@gmail.com] 
Sent: Saturday, June 26, 2010 2:19 PM
To: agi
Subject: Re: [agi] The problem with AGI per Sloman

 

Actually if you are serious about solving a political or social question
then what you really need is CRESS (http://cress.soc.surrey.ac.uk/web/home).
The solution of World Hunger is BTW a political question, not a technical
one. Hunger is largely due to bad governance in the Third World. How do you
get good governance? One way to look at the problem is via CRESS and run
simulations in Second Life.

 

One thing which has in fact struck me in my linguistic researches is this.
Google Translate is based on having Gigabytes of bilingual text. The fact
that GT is so bad at technical Arabic indicates the absence of such
bilingual text. Indeed Israel publishes more papers than the whole of the
Islamic world. This is of profound importance for understanding the Middle
East. I am sure CRESS would confirm this.

 

AGI would without a doubt approach political questions by examining all the
data about the various countries before making a conclusion. AGI would
probably be what you would consult for long term solutions. It might not be
so good at dealing with something (say) like the Gaza flotilla. In coming to
this conclusion I have the University of Surrey and CRESS in mind.

 

 

  - Ian Parker

On 26 June 2010 14:36, John G. Rose johnr...@polyplexic.com wrote:

 -Original Message-
 From: Ian Parker [mailto:ianpark...@gmail.com]


 How do you solve World Hunger? Does AGI have to? I think if it is truly G
 it has to. One way would be to find out what other people had written on the
 subject and analyse the feasibility of their solutions.



Yes, that would show the generality of their AGI theory. Maybe a particular
AGI might be able to work with some problems but plateau out on its
intelligence for whatever reason and not be able to work on more
sophisticated issues. An AGI could be hardcoded perhaps and not improve
much, whereas another AGI might improve to where it could tackle vast
unknowns at increasing efficiency. There are common components in tackling
unknowns, complexity classes for example, but some AGI systems may operate
significantly more efficiently and improve. Human brains at some point may
plateau without further augmentation though I'm not sure we have come close
to what the brain is capable of.

John




 



 






Re: [agi] The problem with AGI per Sloman

2010-06-27 Thread Ian Parker
On 27 June 2010 21:25, John G. Rose johnr...@polyplexic.com wrote:

 It's just that something like world hunger is so complex that AGI would have
 to master simpler problems first.


I am not sure that that follows necessarily. Computing is full of situations
where a seemingly simple problem is not solved and a more complex one is. I
remember posting some time ago on Cassini.

 Also, there are many people and institutions that have solutions to world
 hunger already and they get ignored.

Indeed. AGI in the shape of a search engine would find these solutions.
World Hunger might well be soluble *simply because so much work has already
been done.* AGI might well start off as search and develop into feasibility
and solutions.

 So an AGI would have to get established over a period of time for anyone to
 really care what it has to say about these types of issues. It could
 simulate things and come up with solutions but they would not get
 implemented unless it had power to influence. So in addition AGI would need
 to know how to make people listen... and maybe obey.


This is CRESS. CRESS would be an accessible option.



 IMO, AGI will take the embedded route - like other types of computer
 systems - IRS, weather, military, Google, etc. - and we become dependent
 intergenerationally, so that it is impossible to survive without it. At that
 point AGIs will have power to influence.



Look! The point is this:

1) An embedded system is AI not AGI.

2) AGI will arise simply because all embedded systems are themselves
searchable.


  - Ian Parker



 *From:* Ian Parker [mailto:ianpark...@gmail.com]
 *Sent:* Saturday, June 26, 2010 2:19 PM
 *To:* agi
 *Subject:* Re: [agi] The problem with AGI per Sloman



 Actually if you are serious about solving a political or social question
 then what you really need is CRESS (http://cress.soc.surrey.ac.uk/web/home).
 The solution of World Hunger is BTW a political question, not a technical
 one. Hunger is largely due to bad governance in the Third World. How do you
 get good governance? One way to look at the problem is via CRESS and
 run simulations in Second Life.



 One thing which has in fact struck me in my linguistic researches is this.
 Google Translate is based on having Gigabytes of bilingual text. The fact
 that GT is so bad at technical Arabic indicates the absence of such
 bilingual text. Indeed Israel publishes more papers than the whole of the
 Islamic world. This is of profound importance for understanding the Middle
 East. I am sure CRESS would confirm this.



 AGI would without a doubt approach political questions by examining all the
 data about the various countries before making a conclusion. AGI would
 probably be what you would consult for long term solutions. It might not be
 so good at dealing with something (say) like the Gaza flotilla. In coming to
 this conclusion I have the University of Surrey and CRESS in mind.





   - Ian Parker

 On 26 June 2010 14:36, John G. Rose johnr...@polyplexic.com wrote:

  -Original Message-
  From: Ian Parker [mailto:ianpark...@gmail.com]
 
 
  How do you solve World Hunger? Does AGI have to? I think if it is truly
  G it has to. One way would be to find out what other people had written on the
  subject and analyse the feasibility of their solutions.
 
 

 Yes, that would show the generality of their AGI theory. Maybe a particular
 AGI might be able to work with some problems but plateau out on its
 intelligence for whatever reason and not be able to work on more
 sophisticated issues. An AGI could be hardcoded perhaps and not improve
 much, whereas another AGI might improve to where it could tackle vast
 unknowns at increasing efficiency. There are common components in tackling
 unknowns, complexity classes for example, but some AGI systems may operate
 significantly more efficiently and improve. Human brains at some point may
 plateau without further augmentation though I'm not sure we have come close
 to what the brain is capable of.

 John











RE: [agi] The problem with AGI per Sloman

2010-06-27 Thread John G. Rose
 -Original Message-
 From: Ian Parker [mailto:ianpark...@gmail.com]
 
 So an AGI would have to get established over a period of time for anyone
 to really care what it has to say about these types of issues. It could
 simulate things and come up with solutions but they would not get implemented
 unless it had power to influence. So in addition AGI would need to know
 how to make people listen... and maybe obey.
 
 This is CRESS. CRESS would be an accessible option.
 

Yes, I agree, it looks like that.

 IMO, AGI will take the embedded route - like other types of computer
 systems - IRS, weather, military, Google, etc. - and we become dependent
 intergenerationally, so that it is impossible to survive without it. At that
 point AGIs will have power to influence.
 
 Look! The point is this:
 
 1) An embedded system is AI not AGI.
 
 2) AGI will arise simply because all embedded systems are themselves
 searchable.
 

A narrow embedded system, like say a DMV computer network, is not an AGI.
But that doesn't mean an AGI could not perform that function. In fact, AGI
might arise out of these systems needing to become more intelligent. And an
AGI system - that same AGI software - may be used for a DMV, a space navigation
system, IRS, NASDAQ, etc.; it could adapt... efficiently. There are some
systems that tout multi-use now but these are basically very narrow AI. AGI
will be able to apply its intelligence across domains and should be able to
put its feelers into all the particular subsystems. Although I foresee some
types of standard interfaces perhaps into these narrow AI computer networks;
some sort of intelligence standards maybe, or the AGI just hooks into the
human interfaces...

An AGI could become a God, but it could also do some useful stuff like run
everyday information systems, just as people with brains have to perform
menial labor.

John







RE: [agi] The problem with AGI per Sloman

2010-06-26 Thread John G. Rose
 -Original Message-
 From: Ian Parker [mailto:ianpark...@gmail.com]
 
 
 How do you solve World Hunger? Does AGI have to? I think if it is truly
 G it has to. One way would be to find out what other people had written on the
 subject and analyse the feasibility of their solutions.
 
 

Yes, that would show the generality of their AGI theory. Maybe a particular
AGI might be able to work with some problems but plateau out on its
intelligence for whatever reason and not be able to work on more
sophisticated issues. An AGI could be hardcoded perhaps and not improve
much, whereas another AGI might improve to where it could tackle vast
unknowns at increasing efficiency. There are common components in tackling
unknowns, complexity classes for example, but some AGI systems may operate
significantly more efficiently and improve. Human brains at some point may
plateau without further augmentation though I'm not sure we have come close
to what the brain is capable of. 

John





Re: [agi] The problem with AGI per Sloman

2010-06-26 Thread Ian Parker
Actually if you are serious about solving a political or social question
then what you really need is CRESS (http://cress.soc.surrey.ac.uk/web/home).
The solution of World Hunger is BTW a political question, not a technical
one. Hunger is largely due to bad governance in the Third World. How do you
get good governance? One way to look at the problem is via CRESS and
run simulations in Second Life.

One thing which has in fact struck me in my linguistic researches is this.
Google Translate is based on having Gigabytes of bilingual text. The fact
that GT is so bad at technical Arabic indicates the absence of such
bilingual text. Indeed Israel publishes more papers than the whole of the
Islamic world. This is of profound importance for understanding the Middle
East. I am sure CRESS would confirm this.

AGI would without a doubt approach political questions by examining all the
data about the various countries before making a conclusion. AGI would
probably be what you would consult for long term solutions. It might not be
so good at dealing with something (say) like the Gaza flotilla. In coming to
this conclusion I have the University of Surrey and CRESS in mind.


  - Ian Parker

On 26 June 2010 14:36, John G. Rose johnr...@polyplexic.com wrote:

  -Original Message-
  From: Ian Parker [mailto:ianpark...@gmail.com]
 
 
  How do you solve World Hunger? Does AGI have to? I think if it is truly
  G it has to. One way would be to find out what other people had written on the
  subject and analyse the feasibility of their solutions.
 
 

 Yes, that would show the generality of their AGI theory. Maybe a particular
 AGI might be able to work with some problems but plateau out on its
 intelligence for whatever reason and not be able to work on more
 sophisticated issues. An AGI could be hardcoded perhaps and not improve
 much, whereas another AGI might improve to where it could tackle vast
 unknowns at increasing efficiency. There are common components in tackling
 unknowns, complexity classes for example, but some AGI systems may operate
 significantly more efficiently and improve. Human brains at some point may
 plateau without further augmentation though I'm not sure we have come close
 to what the brain is capable of.

 John









Re: [agi] The problem with AGI per Sloman

2010-06-25 Thread Mike Tintner
Colin,

Thanks. Do you have access to any of the full articles? I can't make very 
informed comments about the quality of the work of all the guys writing for this 
journal, but they're certainly raising v. important questions - and this 
journal appears to have been unjustly ignored by this group.

Sloman, for example, seems to be exploring again the idea of a metaprogram (or 
I'd say, general program vs specialist program), wh. is the core of AGI, as 
Ben appears to be only v. recently starting to acknowledge:

A methodology for making progress is summarised and a novel requirement 
proposed for a theory of how human minds work: the theory should support a 
single generic design for a learning, developing system


From: Colin Hales 
Sent: Friday, June 25, 2010 4:30 AM
To: agi 
Subject: Re: [agi] The problem with AGI per Sloman


Not sure if this might be fodder for the discussion. The International Journal 
of Machine Consciousness (IJMC) has just issued Vol 2 #1 here: 
http://www.worldscinet.com/ijmc/02/0201/S17938430100201.html

It has a Sloman article and invited commentary on it.

cheers
colin hales









Re: [agi] The problem with AGI per Sloman

2010-06-25 Thread rob levy
 But there is some other kind of problem.  We should have figured it out by
 now.  I believe that there must be some fundamental computational problem
 that is standing as the major obstacle to contemporary AGI.  Without solving
 that problem we are going to have to wade through years of incremental
 advances.  I believe that the most likely basis of the problem is efficient
 logical satisfiability.  It makes the most sense given the nature of the
 computer and the nature of the best theories of mind.



I think there must be a computational or physical/computational problem we
have yet to clearly identify that goes along with an objection certain
philosophers like Chalmers have made about neural correlates, roughly: why
should one level of analysis or type of structure (e.g. neurons, brain
regions, dynamically synchronized ensembles of neurons, or even the
organism-environment system) have this magic property of consciousness?

Since to me at least it seems obvious that the ecological level is the
relevant level of analysis at which to find the meaning relevant to
biological organisms, my sense is that we can reduce the above problem to a
question about meaning/significance, that is: what is it about a system that
makes it unified/integrated such that its relationship to other things
constitutes a landscape of relevant meaning to the system as a whole.

I think that an explanation of meaning-to-a-system is either the
same as an explanation of first-hand subjectivity, or is closely tied to it,
though if subjectivity turns out to be part of a physical problem and not a
purely computational one, then we probably won't solve the above-posed
problem without such a physical explanation being clarified (not necessarily
explained though, just as we don't know what electricity really is for
example).

All computer software and situated robots that have ever been made are
composed of actions or expressions that are meaningful to people, but
software or robots have never been created that can refer to their own
actions in a way that demonstrates skillful knowledge indicating that they
are organized in a truly semantic way, as opposed to a merely programmatic
way.





Re: [agi] The problem with AGI per Sloman

2010-06-25 Thread Jim Bromer
On Fri, Jun 25, 2010 at 7:35 PM, rob levy r.p.l...@gmail.com wrote:
I think there must be a computational or physical/computational problem we
have yet to clearly identify that goes along with an objection certain
philosophers like Chalmers have made about neural correlates, roughly: why
should one level of analysis or type of structure (e.g. neurons, brain
regions, dynamically synchronized ensembles of neurons, or even the
organism-environment system) have this magic property of consciousness?

I don't think that it will be understood fully during our lifetimes.  And I
don't think that the unknown aspects of this are relevant to computer
programming.  However, the question of subjective meaning is very relevant.

rob levy r.p.l...@gmail.com wrote:
what is it about a system that makes it unified/integrated such that its
relationship to other things constitutes a landscape of relevant meaning to
the system as a whole.
I think that an explanation of meaning-to-a-system is either the
same as an explanation of first-hand subjectivity, or is closely tied to it,
though if subjectivity turns out to be part of a physical problem and not a
purely computational one, then we probably won't solve the above-posed
problem without such a physical explanation being clarified (not necessarily
explained though, just as we don't know what electricity really is for
example).

That is interesting.  I wonder if there is a way to make that sense of
subjectivity and subjective meaning a basic quality of a simple AGI program,
and if it could be a valuable elemental method of analyzing the IO data
environment.  I think objectives are an important method of testing ideas
(and idea-like impressions and reactions).  And this combination of setting
objectives to test ideas and further develop new ideas does seem to lend
itself to developing a sense of subjective experience in relation to the
'objects' of the IO data environment.





[agi] The problem with AGI per Sloman

2010-06-24 Thread Mike Tintner
One of the problems of AI researchers is that too often they start off with 
an inadequate understanding of the problems and believe that solutions are 
only a few years away. We need an educational system that not only teaches 
techniques and solutions, but also an understanding of problems and their 
difficulty - which can come from a broader multi-disciplinary education. That 
could speed up progress.
A. Sloman

( who else keeps saying that?)




Re: [agi] The problem with AGI per Sloman

2010-06-24 Thread Jim Bromer
Both of you are wrong.  (Where did that quote come from, by the way?  What
year did he write or say that?)

An inadequate understanding of the problems is exactly what has to
be expected by researchers (both professional and amateurs) when they are
facing a completely novel pursuit.  That is why we have endless discussions
like these.  What happened over and over again in AI research is that the
amazing advances in computer technology always seemed to suggest that
similar advances in AI must be just off the horizon.  And the reality is
that there have been major advances in AI.  In the 1970's a critic stated
that he wouldn't believe that AI was possible until a computer was able to
beat him in chess.  Well, guess what happened and guess what conclusion he
did not derive from the experience.  One of the problems with critics is
that they can be as far off as those whose optimism is absurdly unwarranted.

If a broader multi-disciplinary effort was the obstacle to creating AGI, we
would have AGI by now.  It should be clear to anyone who examines the
history of AI or the present day reach of computer programming that a
multi-discipline effort is not the key to creating effective AGI.  Computers
have become pervasive in modern day life, and if it was just a matter of
getting people with different kinds of interests involved, it would have
been done by now.  It is a little like saying that the key to safe deep sea
drilling is to rely on the expertise of companies that make billions and
billions of dollars and which stand to lose billions by mistakes.  While
that should make sense, if you look a little more closely, you can see that
it doesn't quite work out that way in the real world.

Jim Bromer

On Thu, Jun 24, 2010 at 7:33 AM, Mike Tintner tint...@blueyonder.co.ukwrote:

  One of the problems of AI researchers is that too often they start off
 with an inadequate
 understanding of the *problems* and believe that solutions are only a few
 years away. We need an educational system that not only teaches techniques
 and solutions, but also an understanding of problems and their difficulty —
 which can come from a broader multi-disciplinary education. That could speed
 up progress.
 A. Sloman

 ( who else keeps saying that?)






Re: [agi] The problem with AGI per Sloman

2010-06-24 Thread Ben Goertzel
Yes... the idea underlying Sloman's quote is why the interdisciplinary field
of cognitive science was invented a few decades ago...

ben g

On Thu, Jun 24, 2010 at 12:05 PM, Jim Bromer jimbro...@gmail.com wrote:

 Both of you are wrong.  (Where did that quote come from, by the way?  What
 year did he write or say that?)

 An inadequate understanding of the problems is exactly what has to
 be expected by researchers (both professional and amateurs) when they are
 facing a completely novel pursuit.  That is why we have endless discussions
 like these.  What happened over and over again in AI research is that the
 amazing advances in computer technology always seemed to suggest that
 similar advances in AI must be just off the horizon.  And the reality is
 that there have been major advances in AI.  In the 1970's a critic stated
 that he wouldn't believe that AI was possible until a computer was able to
 beat him in chess.  Well, guess what happened and guess what conclusion he
 did not derive from the experience.  One of the problems with critics is
 that they can be as far off as those whose optimism is absurdly unwarranted.

 If a broader multi-disciplinary effort was the obstacle to creating AGI, we
 would have AGI by now.  It should be clear to anyone who examines the
 history of AI or the present day reach of computer programming that a
 multi-discipline effort is not the key to creating effective AGI.  Computers
 have become pervasive in modern day life, and if it was just a matter of
 getting people with different kinds of interests involved, it would have
 been done by now.  It is a little like saying that the key to safe deep sea
 drilling is to rely on the expertise of companies that make billions and
 billions of dollars and which stand to lose billions by mistakes.  While
 that should make sense, if you look a little more closely, you can see that
 it doesn't quite work out that way in the real world.

 Jim Bromer

 On Thu, Jun 24, 2010 at 7:33 AM, Mike Tintner tint...@blueyonder.co.ukwrote:

  One of the problems of AI researchers is that too often they start off
 with an inadequate
 understanding of the *problems* and believe that solutions are only a few
 years away. We need an educational system that not only teaches techniques
 and solutions, but also an understanding of problems and their difficulty —
 which can come from a broader multi-disciplinary education. That could speed
 up progress.
 A. Sloman

 ( who else keeps saying that?)




-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
CTO, Genescient Corp
Vice Chairman, Humanity+
Advisor, Singularity University and Singularity Institute
External Research Professor, Xiamen University, China
b...@goertzel.org


“When nothing seems to help, I go look at a stonecutter hammering away at
his rock, perhaps a hundred times without as much as a crack showing in it.
Yet at the hundred and first blow it will split in two, and I know it was
not that blow that did it, but all that had gone before.”





Re: [agi] The problem with AGI per Sloman

2010-06-24 Thread David Jones
I have to agree that a big problem with the field is a lack of understanding
of the problems and how they should be solved. I see too many people
pursuing solutions to poorly defined problems and without defining why the
solution solves the problem. I even see people pursuing solutions to the
wrong problems altogether. I also believe that a strong knowledge of
existing methods to solve problems is a hindrance in this research. It makes
people not want to start from scratch. They just use a method that works to
some degree, but is wrong at its foundation.

Lately I've begun to consider these problems more carefully and directly.
What I've found interesting is that even at an extremely simplified level,
the solution is not immediately clear. In fact, there are many solutions.

So, given so many solutions to even simplified problems, which one is the
right one? This is the reason that AGI is so F-ing hard. The reason is that
the right solution is not clear from the simplified problem or complete
problem. The number of possible solutions increases in a sort of exponential
manner as you add new constraints and complexity. What I've decided lately
is to analyze several of these possible solutions in a sort of tree and
pursue several for simplified versions. I hope to find a pattern and figure
out which path to pursue more than others. I think many should be pursued
though. Maybe this can be somewhat automated some day.

Each solution has pros and cons. There aren't even a limited number of
solutions. With creativity, you can probably find an infinite number of
solutions to the same problem.

This explains why the brain was able to accidentally come upon A solution.
It works. Not necessarily the best, but it works quite well after being
tested and refined over billions of years of evolution.

Dave

On Thu, Jun 24, 2010 at 7:33 AM, Mike Tintner tint...@blueyonder.co.ukwrote:

  One of the problems of AI researchers is that too often they start off
 with an inadequate
 understanding of the *problems* and believe that solutions are only a few
 years away. We need an educational system that not only teaches techniques
 and solutions, but also an understanding of problems and their difficulty —
 which can come from a broader multi-disciplinary education. That could speed
 up progress.
 A. Sloman

 ( who else keeps saying that?)






Re: [agi] The problem with AGI per Sloman

2010-06-24 Thread Jim Bromer
Let me be very clear about this.  Of course a multi-disciplinary approach is
helpful!  And when AGI becomes a reality, that will be even more obvious.  I
am only able to follow what I am able to follow thanks to the
contemporary philosophers who note it and contribute to it.  All that I am
saying is that this is not the central problem that still needs to be
solved.  More and more people with different interests are using computers,
and their use goes well beyond that of an electronic filing cabinet.

And both pessimists and optimists will have aided in the study.  The same
thing goes for the discretionists and the weightednists.  The rationalists
and the intuitionists.  The mystics and the supra-materialists.  The hackers
and the planners.  The neural biologists and the ideationists.  The dreamers
and the pragmatists.

But there is some other kind of problem.  We should have figured it out by
now.  I believe that there must be some fundamental computational problem
that is standing as the major obstacle to contemporary AGI.  Without solving
that problem we are going to have to wade through years of incremental
advances.  I believe that the most likely basis of the problem is efficient
logical satisfiability.  It makes the most sense given the nature of the
computer and the nature of the best theories of mind.
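
For concreteness, satisfiability can be stated in a few lines of Python; this
brute-force sketch (illustrative only; literals are DIMACS-style signed
integers) also shows where the inefficiency lives - the candidate assignments
double with every variable:

from itertools import product

def satisfiable(clauses, n_vars):
    """Clauses are lists of ints: positive for a variable, negative for its
    negation. Trying all 2**n_vars assignments is exactly the exponential
    cost that an efficient satisfiability method would have to avoid."""
    for bits in product([False, True], repeat=n_vars):
        if all(any(bits[abs(lit) - 1] == (lit > 0) for lit in clause)
               for clause in clauses):
            return True
    return False

# (x1 or not x2) and (x2 or x3) and (not x1 or not x3)
print(satisfiable([[1, -2], [2, 3], [-1, -3]], 3))  # True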

Jim Bromer

On Thu, Jun 24, 2010 at 12:05 PM, Jim Bromer jimbro...@gmail.com wrote:

 Both of you are wrong.  (Where did that quote come from, by the way?  What
 year did he write or say that?)

 An inadequate understanding of the problems is exactly what has to
 be expected by researchers (both professional and amateurs) when they are
 facing a completely novel pursuit.  That is why we have endless discussions
 like these.  What happened over and over again in AI research is that the
 amazing advances in computer technology always seemed to suggest that
 similar advances in AI must be just off the horizon.  And the reality is
 that there have been major advances in AI.  In the 1970's a critic stated
 that he wouldn't believe that AI was possible until a computer was able to
 beat him in chess.  Well, guess what happened and guess what conclusion he
 did not derive from the experience.  One of the problems with critics is
 that they can be as far off as those whose optimism is absurdly unwarranted.

 If a broader multi-disciplinary effort was the obstacle to creating AGI, we
 would have AGI by now.  It should be clear to anyone who examines the
 history of AI or the present day reach of computer programming that a
 multi-discipline effort is not the key to creating effective AGI.  Computers
 have become pervasive in modern day life, and if it was just a matter of
 getting people with different kinds of interests involved, it would have
 been done by now.  It is a little like saying that the key to safe deep sea
 drilling is to rely on the expertise of companies that make billions and
 billions of dollars and which stand to lose billions by mistakes.  While
 that should make sense, if you look a little more closely, you can see that
 it doesn't quite work out that way in the real world.

 Jim Bromer

 On Thu, Jun 24, 2010 at 7:33 AM, Mike Tintner tint...@blueyonder.co.ukwrote:

  One of the problems of AI researchers is that too often they start off
 with an inadequate
 understanding of the *problems* and believe that solutions are only a few
 years away. We need an educational system that not only teaches techniques
 and solutions, but also an understanding of problems and their difficulty —
 which can come from a broader multi-disciplinary education. That could speed
 up progress.
 A. Sloman

 ( who else keeps saying that?)








Re: [agi] The problem with AGI per Sloman

2010-06-24 Thread Mike Tintner
[BTW Sloman's quote is a month old]

I think he means what I do - the end-problems that an AGI must face. Please 
name me one true AGI end-problem being dealt with by any AGI-er - apart from 
the toybox problem. 

As I've repeatedly said- AGI-ers simply don't address or discuss AGI 
end-problems.  And they do indeed start with solutions - just as you are 
doing - re the TSP problem and the problem of combinatorial complexity, both of 
wh. have in fact nothing to do with AGI, and for neither of wh. can you 
provide a single example of a relevant AGI problem.

One could not make up this total avoidance of the creative problem.

And AGI-ers are not just shockingly but obscenely narrow in their 
disciplinarity/ the range of their problem interests - maths, logic, standard 
narrow AI computational problems,  NLP, a little robotics and that's about it - 
with by my rough estimate some 90% of human and animal real world 
problemsolving of no interest to them. That esp. includes their chosen key 
fields of language, conversation and vision - all of wh. are much more the 
province of the *arts* than the sciences, when it comes to AGI

The fact that creative, artistic problemsolving presents a totally different 
paradigm to that of programmed, preplanned problemsolving, is of no interest to 
them - because they lack what educationalists would call any kind of 
metacognitive ( interdisciplinary) scaffolding to deal with it.

It doesn't matter that programming itself, and developing new formulae and 
theorems (all the forms IOW of creative maths, logic, programming, science 
and technology) - the very problemsolving upon wh. they absolutely depend - 
also come under artistic problemsolving.

So there is a major need for broadening AI & AGI education both in terms of 
culturally creative problemsolving and true culture-wide multidisciplinarity.





From: Jim Bromer 
Sent: Thursday, June 24, 2010 5:05 PM
To: agi 
Subject: Re: [agi] The problem with AGI per Sloman


Both of you are wrong.  (Where did that quote come from, by the way?  What year 
did he write or say that?)

An inadequate understanding of the problems is exactly what has to be expected 
by researchers (both professional and amateurs) when they are facing a 
completely novel pursuit.  That is why we have endless discussions like these.  
What happened over and over again in AI research is that the amazing advances 
in computer technology always seemed to suggest that similar advances in AI 
must be just off the horizon.  And the reality is that there have been major 
advances in AI.  In the 1970's a critic stated that he wouldn't believe that AI 
was possible until a computer was able to beat him in chess.  Well, guess what 
happened and guess what conclusion he did not derive from the experience.  One 
of the problems with critics is that they can be as far off as those whose 
optimism is absurdly unwarranted.

If a broader multi-disciplinary effort was the obstacle to creating AGI, we 
would have AGI by now.  It should be clear to anyone who examines the history 
of AI or the present day reach of computer programming that a multi-discipline 
effort is not the key to creating effective AGI.  Computers have become 
pervasive in modern day life, and if it was just a matter of getting people 
with different kinds of interests involved, it would have been done by now.  It 
is a little like saying that the key to safe deep sea drilling is to rely on 
the expertise of companies that make billions and billions of dollars and which 
stand to lose billions by mistakes.  While that should make sense, if you look 
a little more closely, you can see that it doesn't quite work out that way in 
the real world. 

Jim Bromer


On Thu, Jun 24, 2010 at 7:33 AM, Mike Tintner tint...@blueyonder.co.uk wrote:

  One of the problems of AI researchers is that too often they start off with 
an inadequate understanding of the problems and believe that solutions are only 
a few years away. We need an educational system that not only teaches techniques 
and solutions, but also an understanding of problems and their difficulty — which 
can come from a broader multi-disciplinary education. That could speed up 
progress.
  A. Sloman

  ( who else keeps saying that?)








RE: [agi] The problem with AGI per Sloman

2010-06-24 Thread John G. Rose
I think some confusion occurs where AGI researchers want to build an
artificial person versus artificial general intelligence. An AGI might be
just a computational model running in software that can solve problems
across domains.  An artificial person would be much else in addition to AGI.

 

With intelligence engineering and other engineering that artificial person
could be built, or some interface where it appears to be a person. And a
huge benefit is in having artificial people to do things that real people
do. But pursuing AGI need not be the pursuit of building artificial
people.

 

Also, an AGI need not be able to solve ALL problems initially.
Coming out and asking why some AGI theory wouldn't be able to figure out how
to solve some problem like say, world hunger, I mean WTF is that?

 

John

 

From: Mike Tintner [mailto:tint...@blueyonder.co.uk] 
Sent: Thursday, June 24, 2010 5:33 AM
To: agi
Subject: [agi] The problem with AGI per Sloman

 

One of the problems of AI researchers is that too often they start off with
an inadequate understanding of the problems and believe that solutions are
only a few years away. We need an educational system that not only teaches
techniques and solutions, but also an understanding of problems and their
difficulty - which can come from a broader multi-disciplinary education. That
could speed up progress.

A. Sloman

 

( who else keeps saying that?)



 






Re: [agi] The problem with AGI per Sloman

2010-06-24 Thread David Jones
Mike, I think your idealistic view of how AGI should be pursued does not
work in reality. What is your approach that fits all your criteria? I'm sure
that any such approach would be severely flawed as well.

Dave

On Thu, Jun 24, 2010 at 12:52 PM, Mike Tintner tint...@blueyonder.co.ukwrote:

  [BTW Sloman's quote is a month old]

 I think he means what I do - the end-problems that an AGI must face. Please
 name me one true AGI end-problem being dealt with by any AGI-er - apart from
 the toybox problem.

 As I've repeatedly said- AGI-ers simply don't address or discuss AGI
 end-problems.  And they do indeed start with solutions - just as you are
 doing - re the TSP problem and the problem of combinatorial complexity, both
 of wh. have in fact nothing to do with AGI, and for neither of wh. can you
 provide a single example of a relevant AGI problem.

 One could not make up this total avoidance of the creative problem.

 And AGI-ers are not just shockingly but obscenely narrow in their
 disciplinarity/ the range of their problem interests - maths, logic,
 standard narrow AI computational problems,  NLP, a little robotics and
 that's about it - with by my rough estimate some 90% of human and
 animal real world problemsolving of no interest to them. That esp. includes
 their chosen key fields of language, conversation and vision - all of wh.
 are much more the province of the *arts* than the sciences, when it comes to
 AGI

 The fact that creative, artistic problemsolving presents a totally
 different paradigm to that of programmed, preplanned problemsolving, is of
 no interest to them - because they lack what educationalists would call any
 kind of metacognitive ( interdisciplinary) scaffolding to deal with it.

 It doesn't matter that programming itself, and developing new formulae and
 theorems (all the forms IOW of creative maths, logic, programming, science
 and technology) - the very problemsolving upon wh. they absolutely depend -
 also come under artistic problemsolving.

 So there is a major need for broadening AI & AGI education both in terms of
 culturally creative problemsolving and true culture-wide
 multidisciplinarity.





  *From:* Jim Bromer jimbro...@gmail.com
 *Sent:* Thursday, June 24, 2010 5:05 PM
 *To:* agi agi@v2.listbox.com
 *Subject:* Re: [agi] The problem with AGI per Sloman

 Both of you are wrong.  (Where did that quote come from, by the way?  What
 year did he write or say that?)

 An inadequate understanding of the problems is exactly what has to
 be expected by researchers (both professional and amateurs) when they are
 facing a completely novel pursuit.  That is why we have endless discussions
 like these.  What happened over and over again in AI research is that the
 amazing advances in computer technology always seemed to suggest that
 similar advances in AI must be just off the horizon.  And the reality is
 that there have been major advances in AI.  In the 1970's a critic stated
 that he wouldn't believe that AI was possible until a computer was able to
 beat him in chess.  Well, guess what happened and guess what conclusion he
 did not derive from the experience.  One of the problems with critics is
 that they can be as far off as those whose optimism is absurdly unwarranted.

 If a broader multi-disciplinary effort was the obstacle to creating AGI, we
 would have AGI by now.  It should be clear to anyone who examines the
 history of AI or the present day reach of computer programming that a
 multi-discipline effort is not the key to creating effective AGI.  Computers
 have become pervasive in modern day life, and if it was just a matter of
 getting people with different kinds of interests involved, it would have
 been done by now.  It is a little like saying that the key to safe deep sea
 drilling is to rely on the expertise of companies that make billions and
 billions of dollars and which stand to lose billions by mistakes.  While
 that should make sense, if you look a little more closely, you can see that
 it doesn't quite work out that way in the real world.

 Jim Bromer

 On Thu, Jun 24, 2010 at 7:33 AM, Mike Tintner tint...@blueyonder.co.ukwrote:

  One of the problems of AI researchers is that too often they start off
 with an inadequate
 understanding of the *problems* and believe that solutions are only a few
 years away. We need an educational system that not only teaches techniques
 and solutions, but also an understanding of problems and their difficulty —
 which can come from a broader multi-disciplinary education. That could speed
 up progress.
 A. Sloman

 ( who else keeps saying that?)

Re: [agi] The problem with AGI per Sloman

2010-06-24 Thread Mike Tintner
John,

You're making a massively important point, wh. I have been thinking about 
recently.

I think it's more useful to say that AGI-ers are thinking in terms of building 
a *complete AGI system* (rather than person) wh. could range from a simple 
animal robot to fantasies of an all intelligent brain-in-a-box.

No AGI-er has (and no team of supercreative AGI-ers could have) even a remotely 
realistic understanding of how massively complex a feat this would be.

I've changed recently to thinking that realistic AGI in the near future will 
have to concentrate instead (or certainly have one major focus) on what might 
be called local AGI as opposed to global AGI - getting a robot able to do 
just *one* or two things in a truly general way - with a very well-defined goal 
- rather than a true all-round AGI robot system. (more of this another time).

Look at Venter - he is not trying to build a complete artificial cell in one - 
that would be insane, and yet not a tiny fraction of the insanity of present 
AGI systembuilders' goals. He is taking it one narrow step at a time - one 
relatively narrow part at a time. That is a law of both natural and machine 
evolution to wh. I don't think there are any exceptions - from simple to 
complex in gradual, progressive stages.




From: John G. Rose 
Sent: Thursday, June 24, 2010 6:20 PM
To: agi 
Subject: RE: [agi] The problem with AGI per Sloman


I think some confusion occurs where AGI researchers want to build an artificial 
person versus artificial general intelligence. An AGI might be just a 
computational model running in software that can solve problems across domains. 
 An artificial person would be much else in addition to AGI.

 

With intelligence engineering and other engineering that artificial person 
could be built, or some interface where it appears to be a person. And a huge 
benefit is in having artificial people to do things that real people do. But 
pursuing AGI need not be the pursuit of building artificial people.

 

Also, an AGI need not be able to solve ALL problems initially. Coming 
out and asking why some AGI theory wouldn't be able to figure out how to solve 
some problem like say, world hunger, I mean WTF is that?

 

John

 

From: Mike Tintner [mailto:tint...@blueyonder.co.uk] 
Sent: Thursday, June 24, 2010 5:33 AM
To: agi
Subject: [agi] The problem with AGI per Sloman

 

One of the problems of AI researchers is that too often they start off with an 
inadequate understanding of the problems and believe that solutions are only a 
few years away. We need an educational system that not only teaches techniques 
and solutions, but also an understanding of problems and their difficulty - which 
can come from a broader multi-disciplinary education. That could speed up 
progress.

A. Sloman

 

( who else keeps saying that?)






Re: [agi] The problem with AGI per Sloman

2010-06-24 Thread Jim Bromer
On Thu, Jun 24, 2010 at 12:52 PM, Mike Tintner tint...@blueyonder.co.ukwrote:

  [BTW Sloman's quote is a month old]


Are you sure it was A. Sloman who wrote or said that?  From where I'm
sitting it looks like it was Margaret Boden who wrote it.  But then again, I
am one of those people who sometimes make mistakes.
Jim Bromer


On Thu, Jun 24, 2010 at 12:52 PM, Mike Tintner tint...@blueyonder.co.ukwrote:

  [BTW Sloman's quote is a month old]

 I think he means what I do - the end-problems that an AGI must face. Please
 name me one true AGI end-problem being dealt with by any AGI-er - apart from
 the toybox problem.

 As I've repeatedly said- AGI-ers simply don't address or discuss AGI
 end-problems.  And they do indeed start with solutions - just as you are
 doing - re the TSP problem and the problem of combinatorial complexity, both
 of wh. have in fact nothing to do with AGI, and for neither of wh. can you
 provide a single example of a relevant AGI problem.

 One could not make up this total avoidance of the creative problem.

 And AGI-ers are not just shockingly but obscenely narrow in their
 disciplinarity - the range of their problem interests: maths, logic,
 standard narrow AI computational problems, NLP, a little robotics, and
 that's about it - with, by my rough estimate, some 90% of human and
 animal real-world problemsolving of no interest to them. That especially
 includes their chosen key fields of language, conversation and vision - all
 of which are much more the province of the *arts* than the sciences, when
 it comes to AGI.

 The fact that creative, artistic problemsolving presents a totally
 different paradigm to that of programmed, preplanned problemsolving is of
 no interest to them - because they lack what educationalists would call any
 kind of metacognitive (interdisciplinary) scaffolding to deal with it.

 It doesn't matter that programming itself, and developing new formulae and
 theorems - all the forms, in other words, of creative maths, logic,
 programming, science and technology, the very problemsolving upon which
 they absolutely depend - also come under artistic problemsolving.

 So there is a major need for broadening AI and AGI education, both in
 terms of culturally creative problemsolving and true culture-wide
 multidisciplinarity.





  *From:* Jim Bromer jimbro...@gmail.com
 *Sent:* Thursday, June 24, 2010 5:05 PM
 *To:* agi agi@v2.listbox.com
 *Subject:* Re: [agi] The problem with AGI per Sloman

 Both of you are wrong. (Where did that quote come from, by the way? What
 year did he write or say that?)

 An inadequate understanding of the problems is exactly what has to
 be expected of researchers (both professionals and amateurs) when they are
 facing a completely novel pursuit. That is why we have endless discussions
 like these. What happened over and over again in AI research is that the
 amazing advances in computer technology always seemed to suggest that
 similar advances in AI must be just over the horizon. And the reality is
 that there have been major advances in AI. In the 1970s a critic stated
 that he wouldn't believe that AI was possible until a computer was able to
 beat him in chess. Well, guess what happened - and guess what conclusion he
 did not derive from the experience. One of the problems with critics is
 that they can be as far off as those whose optimism is absurdly unwarranted.

 If a broader multi-disciplinary effort were all that stood between us and
 AGI, we would have AGI by now. It should be clear to anyone who examines the
 history of AI or the present-day reach of computer programming that a
 multi-disciplinary effort is not the key to creating effective AGI. Computers
 have become pervasive in modern life, and if it were just a matter of
 getting people with different kinds of interests involved, it would have
 been done by now. It is a little like saying that the key to safe deep-sea
 drilling is to rely on the expertise of companies that make billions and
 billions of dollars and which stand to lose billions by mistakes. While
 that should make sense, if you look a little more closely, you can see that
 it doesn't quite work out that way in the real world.

 Jim Bromer


Re: [agi] The problem with AGI per Sloman

2010-06-24 Thread Mike Tintner
Dave, re my first point there is no choice whatsoever - you (any serious 
creative) *have* to start by addressing the creative problem - in this case 
true AGI end-problems. You have to start, e.g., by addressing the problem part 
of your would-be plane, the part that's going to give you take-off, because that 
then affects all the other parts of the plane/machine. Start anywhere else and 
the odds of irrelevancy would be, IMO, around 100%. It is truly staggering that 
this primary law of serious creativity is being flouted - with inevitably 
zero results.

Re the idea of a truly broad education, sure, that's very idealistic. But AGI 
education should be at the very least open-minded - willing to consider any 
form of problemsolving. People should be willing to think, for example, about 
an alternative form of machine to the TM (Turing machine), because again there 
is no choice - the TM does *not* incorporate or address the creative 
problemsolving of the programmer upon which it depends. But there is no 
willingness to think outside the box/frame here.

Re flawed goals - yes, I think you and I might agree that almost any goals you 
set for your AGI will be overambitious. However, if they are insanely 
overambitious, as in trying to build an entire AGI system, then you can't 
really learn much from your mistakes. If they are basic, local AGI goals, as 
I mentioned to John (and although I disagree with your particular goals, your 
overall philosophy seems to be broadly consistent with this idea), then you 
can learn from your mistakes, and make your targets more realistic still.




From: David Jones 
Sent: Thursday, June 24, 2010 6:22 PM
To: agi 
Subject: Re: [agi] The problem with AGI per Sloman


Mike, I think your idealistic view of how AGI should be pursued does not work 
in reality. What is your approach that fits all your criteria? I'm sure that 
any such approach would be severely flawed as well.

Dave



Re: [agi] The problem with AGI per Sloman

2010-06-24 Thread Mike Tintner
Are you sure it was A. Sloman who wrote or said that?  From where I'm sitting 
it looks like it was Margaret Boden who wrote it.  But then again, I am one of 
those people who sometimes make mistakes.
Jim Bromer

And this is indeed another of your mistakes:

http://onthehuman.org/2010/05/can-computer-models-help-us-to-understand-human-creativity/

see [ironically] his:

CREATIVITY AS A RESPONSE TO COMBINATORICS

and nb. his previous paras [which I hadn't fully noted] and which also agree 
with me:

All this is a brief introduction to the study of the many ways in which 
biological evolution was under pressure to provide humans and other animals 
with information-processing mechanisms that are capable of acquiring many 
different kinds of information and then developing novel ways of using that 
information to solve any of millions of different problems without having to 
learn solutions by trial and error, without having to be taught, and without 
having to imitate behaviour of others. I.e. they are P-creative solutions.
I conjecture that these highly practical forms of creativity, which are 
obviously important in crafts, engineering, science, and many everyday 
activities at home or at work, are closely related to the mechanisms that also 
produce artistic forms of creativity.



Re: [agi] The problem with AGI per Sloman

2010-06-24 Thread David Jones
Mike,

"Start by addressing the creative problem" - this phrase doesn't mean
anything to me. You haven't properly defined what you mean by "creative".
What do you think the true AGI end-problems are? Try not to use the word
"creative" so much. There are possible algorithms that produce high-level
creative behavior but which are not any more creative themselves than many
existing algorithms. I think your overemphasis on creativity is unfounded.

Dave


Re: [agi] The problem with AGI per Sloman

2010-06-24 Thread Ian Parker
I think there is a great deal of confusion between these two objectives.
When I wrote about having a car accident due to a fault in AI/AGI, and Matt
wrote back talking about downloads, this was a case in point. I was assuming
that you had a system which was intelligent but was *not* a download in any
shape or form.

Watson 
(http://learning.blogs.nytimes.com/2010/06/23/waxing-philosophical-on-watson-and-artificial-intelligence/) 
is intelligent. I would be interested to know other people's answers to the
5 questions.

1) Turing test - Quite possibly with modifications. Watson needs to be
turned into a chatterbox. This can be done fairly trivially by allowing
Watson to store conversation in his database.
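
To make that concrete, here is a minimal sketch (in Python) of the kind of
wrapper meant here. Everything in it is a hypothetical stand-in - the
qa_engine object and its answer(question, context) method are invented for
illustration, not Watson's actual interface:

import sqlite3

class ConversationalWrapper:
    # Turns a one-shot question-answering engine into a chatterbox by
    # storing every exchange and replaying it as context on the next turn.
    def __init__(self, qa_engine, db_path="conversation.db"):
        self.qa = qa_engine  # hypothetical engine: answer(question, context)
        self.db = sqlite3.connect(db_path)
        self.db.execute(
            "CREATE TABLE IF NOT EXISTS turns (speaker TEXT, utterance TEXT)")

    def say(self, utterance):
        # Prior turns let the engine resolve references such as "he"
        # or "that one" against the earlier conversation.
        history = self.db.execute(
            "SELECT speaker, utterance FROM turns").fetchall()
        reply = self.qa.answer(utterance, context=history)
        for speaker, text in (("user", utterance), ("system", reply)):
            self.db.execute("INSERT INTO turns VALUES (?, ?)", (speaker, text))
        self.db.commit()
        return reply

The point is only that the memory is the easy part; the engine itself still
has to make use of the stored context.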

2) Meaningless question. Watson could produce results of thought and feed
these back in. Watson could design a program by referencing other programs
and their comment data. Similarly for engineering.

3,4,5 Absolutely not.

How do you solve World Hunger? Does AGI have to? I think if it is truly
general it has to. One way would be to find out what other people had written
on the subject and analyse the feasibility of their solutions.
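
Even that modest strategy can be sketched mechanically. The fragment below
(Python again) is only an illustration under invented assumptions - the
proposal markers and the recurrence score are crude stand-ins for genuine
feasibility analysis, and the document collection is assumed to come from
elsewhere:

import re
from collections import Counter

PROPOSAL_MARKERS = ("should", "could", "propose", "recommend", "solution")

def extract_proposals(documents):
    # Pull out the sentences that look like proposed solutions.
    proposals = []
    for doc in documents:
        for sentence in re.split(r"(?<=[.!?])\s+", doc):
            if any(m in sentence.lower() for m in PROPOSAL_MARKERS):
                proposals.append(sentence.strip())
    return proposals

def rank_proposals(proposals):
    # Crude analysis step: proposals whose content words recur across
    # many sources are at least widely entertained. A real system would
    # substitute actual feasibility analysis here.
    counts = Counter(w for p in proposals
                     for w in re.findall(r"[a-z]{4,}", p.lower()))
    def score(p):
        return sum(counts[w] for w in re.findall(r"[a-z]{4,}", p.lower()))
    return sorted(proposals, key=score, reverse=True)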


  - Ian Parker



Re: [agi] The problem with AGI per Sloman

2010-06-24 Thread Jim Bromer
On Thu, Jun 24, 2010 at 3:21 PM, Mike Tintner tint...@blueyonder.co.uk wrote:

  I conjecture that these highly practical forms of creativity, which are
 obviously important in crafts, engineering, science, and many everyday
 activities at home or at work, are closely related to the mechanisms that
 also produce *artistic* forms of creativity.

But he goes on to say, "Of course, none of this will impress people who
don't WANT to believe that machines can be creative. They just need to learn
to think more creatively" - and that is another one of your mistakes.





Re: [agi] The problem with AGI per Sloman

2010-06-24 Thread Fatmah
I suggest we form a team for this purpose ... and I am willing to join.



