Re: Re[6]: [agi] Funding AGI research

2007-11-25 Thread Benjamin Goertzel
Linas:
> I find it telling that no one is saying "I've got the code, I just need to
> scale it up 1000-fold to make it impressive" ...

Yes, that's an accurate comment.  Novamente will hopefully reach that
point in a few years.

For now, we will need (and use) a lotta machines for commercial product
deployment purposes.

But for R&D purposes, it's all about solving a large number of moderate-sized
computer science and AI research problems that are connected together via
the overall NM AGI design.  Once these problems are all worked through and
we have a completed Novamente codebase, then we will be far better able
to evaluate what our hardware requirements actually are.  I am pretty sure
they will be large.  But right now, having masses of hardware wouldn't
accelerate our progress all that much.  What is useful to us is money to pay
the right brains to solve the long list of apparently-not-that-huge technical
problems between here and a completed Novamente system.  And of course there
is always a nonzero risk that one of these apparently-not-that-huge technical
problems will turn out to be huge; but a lot of thinking has gone on over a
number of years in a serious attempt to avoid this...

-- Ben



RE: Re[6]: [agi] Funding AGI research

2007-11-25 Thread Edward Porter

A few days ago there was some discussion on this list about the potential 
usefulness of narrow AI to AGI.  
 
Nick Cassimatis, who is speaking at AGI 2008, has something he calls Polyscheme, 
which is partially described at the following AGIRI link: 
http://www.agiri.org/workshop/Cassimatis.ppt 
 
It appears to use what are arguably narrow AI modules in a coordinated manner 
to achieve AGI.  
 
Is this a correct interpretation?  Does it work?  And, if so, how?
 
I can imagine how multiple narrow AIs could be used to create a more general 
AGI if there were some AGI glue to represent and learn the relationships 
between the different narrow AI modalities.  Cassimatis mentions tying these 
different modalities together using relations involving “times, space, events, 
identity, causality and belief.”  (But I don’t remember much description of how 
it does this.)
 
Arguably these are enough dimensions to create generalized representations, 
provided there is some generalized means for representing (a) all the important 
states and representations in each of the narrow AI modalities, (b) the 
relationships between them along each of these dimensions, and (c) the 
compositions and generalizations formed from such relationships.  
 
Is that what Cassimatis is talking about?
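
For concreteness, the kind of glue I am imagining would be something like the 
following toy C++ sketch.  This is entirely hypothetical on my part, not taken 
from the Polyscheme slides: each narrow AI module posts assertions about shared 
symbols into a common store, tagged with the dimension (time, space, event, 
identity, causality, belief) the relation lives in.

// Entirely hypothetical sketch of cross-modal "glue": each narrow AI module
// posts assertions about shared symbols, tagged with one of the dimensions
// Cassimatis lists (time, space, events, identity, causality, belief).
#include <iostream>
#include <string>
#include <vector>

enum class Dimension { Time, Space, Event, Identity, Causality, Belief };

struct Assertion {
    std::string sourceModule;  // e.g. "vision", "parser", "planner"
    Dimension   dimension;     // which glue dimension the relation lives in
    std::string subject;       // symbol shared across modalities
    std::string relation;      // e.g. "before", "inside", "causes"
    std::string object;
    double      confidence;    // modules will rarely agree perfectly
};

// A trivial shared store; a real system would index by symbol and dimension
// so one module can query what another module believes about an entity.
class GlueStore {
public:
    void post(const Assertion& a) { assertions_.push_back(a); }

    std::vector<Assertion> about(const std::string& symbol) const {
        std::vector<Assertion> out;
        for (const auto& a : assertions_)
            if (a.subject == symbol || a.object == symbol)
                out.push_back(a);
        return out;
    }

private:
    std::vector<Assertion> assertions_;
};

int main() {
    GlueStore store;
    // Two different narrow AI modules describing the same entity, "cup-3".
    store.post({"vision",  Dimension::Space,     "cup-3", "inside", "cupboard-1", 0.9});
    store.post({"planner", Dimension::Causality, "open(cupboard-1)", "enables-grasping", "cup-3", 0.8});

    for (const auto& a : store.about("cup-3"))
        std::cout << a.sourceModule << ": " << a.subject << " "
                  << a.relation << " " << a.object << "\n";
}

The composition and generalization part would then operate over these uniform 
assertions rather than over each module's internal representations -- which is 
exactly the part I do not know how Polyscheme handles.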
 
Ed Porter 


Re: Re[6]: [agi] Funding AGI research

2007-11-25 Thread Benjamin Goertzel
Cassimatis's system is an interesting research system ... it doesn't yet have
lotsa demonstrated practical functionality, if that's what you mean by
"work"...

He wants to take a bunch of disparately-functioning agents, and hook them
together into a common framework using a common logical interlingua.

I think this approach is unlikely to lead to the various agents involved
quelling, rather than exacerbating, each other's intrinsic combinatorial
explosions...

I think it is unlikely to lead to sufficiently coherent system-wide emergent
dynamics to give rise to an effective phenomenal self...

But given the primitive state of AGI theory at the moment, I can't *prove*
that these complaints are correct, of course...

-- Ben G

On Nov 25, 2007 7:22 PM, Edward Porter [EMAIL PROTECTED] wrote:
 [quoted message snipped -- see Ed Porter's post above]


RE: Re[4]: [agi] Funding AGI research

2007-11-25 Thread John G. Rose
> Yeah - because weak AI is so simple.  Why not just make some
> run-of-the-mill narrow AI with a single goal of "Build AGI"?  You can
> just relax while it does all the work.

I kind of like the idea of building software that then builds AGI. But you
could say that that software is part of the AGI itself, especially in an
"intelligence emergence generator" type of AGI. It depends on what code
language the AGI is running in. C++ could host another language that is
hosting the emerged AGI. The original C++ code could be written by some other
C++ application instead of by human coders.
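
As a toy sketch of that last idea (purely illustrative -- the file name and
the compiler invocation are just what I'd guess at, not a real design), one
C++ program can emit source for another translation unit and hand it to the
system compiler:

// Toy sketch: one C++ program writes another C++ program to disk and invokes
// the host compiler on it.  The file name and the g++ command line are
// illustrative assumptions only.
#include <cstdlib>
#include <fstream>
#include <iostream>

int main() {
    std::ofstream out("generated_agent.cpp");
    out << "#include <iostream>\n"
           "int main() {\n"
           "    std::cout << \"written and built by another C++ program\\n\";\n"
           "}\n";
    out.close();

    // A "continuous compilation" setup would do this in a loop as new code
    // is generated, rather than once.
    int rc = std::system("g++ -o generated_agent generated_agent.cpp");
    std::cout << (rc == 0 ? "compiled ok\n" : "compile failed\n");
}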

Another way is to read data from the real world and have that generate code.
Stuff comes in and is turned into C++ code based on its properties,
behaviors, etc., and that stuff is then integrated into the main engine. The
main engine is just a very abstract system that aggregates, systematizes and
housekeeps the real world's C++ systems. So they come in as OOP mini
applications, and these are analyzed and categorized at the C++ syntactic
level and then assimilated/integrated into the system's categorizational
internal framework/network. It's basically a form of continuous-compilation
AGI: you'd have processes and servers just compiling continuously while the
thing is ripping apart, modifying, generating and compiling code.
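
Something like the following is roughly what I mean by that loop -- again just
a hypothetical sketch with made-up paths and a made-up record format; a real
version would keep reading forever and dynamically load what it builds:

// Hypothetical sketch of the "continuous compilation" loop: incoming
// observations are turned into generated C++ classes, appended to a module,
// and the module is recompiled each time.  Paths, the record format, and the
// compiler command are assumptions for illustration only.
#include <cstdlib>
#include <fstream>
#include <sstream>
#include <string>
#include <vector>

struct Observation {
    std::string name;      // entity seen in the incoming data stream
    std::string behavior;  // observed property/behavior
};

// Turn one observation into source for a small generated class.
std::string emitClass(const Observation& obs, int id) {
    std::ostringstream src;
    src << "struct Generated_" << id << " {\n"
        << "    const char* name = \"" << obs.name << "\";\n"
        << "    const char* behavior = \"" << obs.behavior << "\";\n"
        << "};\n";
    return src.str();
}

int main() {
    // Stand-in for a live feed; a real system would keep reading indefinitely.
    std::vector<Observation> feed = {
        {"door", "opens and closes"},
        {"ball", "bounces"},
    };

    int id = 0;
    for (const auto& obs : feed) {
        std::ofstream out("generated_module.cpp",
                          id == 0 ? std::ios::trunc : std::ios::app);
        out << emitClass(obs, id++);
        out.close();
        // Recompile the growing module each time new code is assimilated.
        std::system("g++ -c generated_module.cpp -o generated_module.o");
    }
}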

But yeah, the trick is how well it modifies itself, the idea being that it
starts off simple, thus maximizing the developer's relaxation time :) It can
be evolutionary, or it could follow some other intelligent-automata-type
pattern of generative source-code expression and interaction.

John
