On 10/20/07, Edward W. Porter <[EMAIL PROTECTED]> wrote: You mean I wasted my
time and money buying and reading the Novamente article in "Artificial
General Intelligence" when I could have bought the new and improved
"Advances in Artificial General Intelligence"?  What a rip-off!

Ed


Bummer, eh?  ;-)

Seriously though: The articles on NM in the newer AGI edited volume don't
review the overall NM architecture and design as thoroughly as the article
on NM in the older AGI edited volume.  We tried not to be redundant in
writing the NM articles for the new volume.  However, the articles in the
new volume do go into more detail on various specific aspects of the NM
system.

One problem with the original ("older") Artificial General Intelligence book
is that the articles in it were actually written in 2002, but the book did
not appear until 2006!  This was because of various delays associated with
the publishing process, which fortunately were not repeated with the newer
volume...

The good news is, the articles on NM in the newer AGI edited volume are
available online at the AGIRI.org website, on the page devoted to the 2006
AGIRI workshop...

http://www.agiri.org/forum/index.php?act=ST&f=21&t=23

-- Ben


 -----Original Message-----
> *From:* Benjamin Goertzel [mailto:[EMAIL PROTECTED]
> *Sent:* Saturday, October 20, 2007 5:24 PM
> *To:* agi@v2.listbox.com
> *Subject:* Re: [agi] An AGI Test/Prize
>
>
> Ah, gotcha...
>
> The recent book "Advances in Artificial General Intelligence" gives a
> bunch more detail than those, actually (though not as much of the conceptual
> motivations as The Hidden Pattern) ... but not nearly as much as the
> not-yet-released stuff...
>
> -- Ben
>
> On 10/20/07, Edward W. Porter <[EMAIL PROTECTED]> wrote:
> >
> >  Ben,
> >
> > The books I was referring to were "The Hidden Pattern" and "Artificial
> > General Intelligence", both of which I purchased from Amazon.  I know you
> > have a better description, but what is in these two books is quite helpful.
> >
> > Ed Porter
> >
> >  -----Original Message-----
> > *From:* Benjamin Goertzel [mailto:[EMAIL PROTECTED]
> > *Sent:* Saturday, October 20, 2007 4:01 PM
> > *To:* agi@v2.listbox.com
> > *Subject:* Re: [agi] An AGI Test/Prize
> >
> >
> > On 10/20/07, Edward W. Porter <[EMAIL PROTECTED]> wrote:
> >
> > John,
> >
> >
> >
> > So rather than a definition of intelligence you want a recipe for how to
> > make one?
> >
> >
> >
> > Goertzel's descriptions of Novamente in his two recent books are the
> > closest publicly available approximation to such a recipe that I currently
> > know of.
> >
> >
> > Actually my books on how to build an AGI are not publicly available at
> > this point ... but I'm strongly leaning toward making them so ... it's
> > mostly just a matter of finding time to proofread them, remove obsolete
> > ideas, etc. and generally turn them from draft manuscripts into finalized
> > manuscripts.  I have already let a bunch of people read the drafts...
> >
> > Of course, a problem with putting material like this in dead-tree form
> > is that the ideas are evolving.  We learn new stuff as we proceed through
> > implementing the stuff in the books....  But the basic framework (knowledge
> > rep, algorithms, cognitive architecture, teaching methodology) has not
> > changed as we've proceeded through the work so far, just some of the "details"
> > (wherein the devil famously lies ;-)
> >
> > -- Ben
> >
> >
> >   Edward W. Porter
> > > Porter & Associates
> > > 24 String Bridge S12
> > > Exeter, NH 03833
> > > (617) 494-1722
> > > Fax (617) 494-1822
> > > [EMAIL PROTECTED]
> > >
> > >  -----Original Message-----
> > > *From:* John G. Rose [mailto:[EMAIL PROTECTED]
> > > *Sent:* Saturday, October 20, 2007 3:16 PM
> > > *To:* agi@v2.listbox.com
> > > *Subject:* RE: [agi] An AGI Test/Prize
> > >
> > >  No, you are not mundane. All these things on the list (or most) are
> > > very much to be expected from a generally intelligent system or its
> > > derivatives. But I have this urge, being a software developer, to smash
> > > all these things up into their constituent components, partition
> > > commonalities, eliminate dupes, and perhaps further smash them up into an
> > > atomic representation of intelligence as little intelligent engines that
> > > can be combined in various ways to build higher-level functions. Kind of
> > > like a cellular automata approach, and perhaps CA structures can be used.
> > > I really don't want to waste 10 years developing a giant piece of bloated
> > > code that never fully works. Better to exhaust all possibilities in the
> > > mind and on paper as much as possible, since software dev can be a giant
> > > PIA mess if not thought out beforehand. Yes, you can only go so far before
> > > doing prototyping and testing, but certain prototypes can take many months
> > > to build.
> > >
> > >
> > >
> > > Several on this email list have already gotten to this point, and it
> > > may be more productive digesting their systems instead of reinventing…
> > > Even so, that leaves many questions open about testing. Someone can claim
> > > they have AGI, but how do you really know? It could be just a highly
> > > sophisticated chatterbot.
> > >
> > >
> > >
> > >  John
> > >
> > >
> > >
> > >
> > >
> > > *From:* Edward W. Porter [mailto:[EMAIL PROTECTED]
> > >
> > >  I guess I am mundane.  I don't spend a lot of time thinking about a
> > > "definition of intelligence."  Goertzel's is good enough for me.
> > >
> > >
> > >
> > > Instead I think in terms of what I want these machines to do -- which
> > > includes human-level:
> > >
> > >
> > >
> > > -NL understanding and generation (including discourse level)
> > >
> > > -Speech recognition and generation (including appropriate pitch and
> > > volume modulation)
> > >
> > > -Non-speech auditory recognition and generation
> > >
> > > -Visual recognition and real time video generation
> > >
> > > -World-knowledge representation, understanding and reasoning
> > >
> > > -Computer program understanding and generation
> > >
> > > -Common sense reasoning
> > >
> > > -Cognition
> > >
> > > -Context sensitivity
> > >
> > > -Automatic learning
> > >
> > > -Intuition
> > >
> > > -Creativity
> > >
> > > -Inventiveness
> > >
> > > -Understanding human nature and human desires and goals (not expecting
> > > full human-level here)
> > >
> > > -Ability to scan and store vast amounts of knowledge and, over time,
> > > convert and incorporate it into learned deep structure, ultimately
> > > including all available recorded knowledge
> > >
> > >
> > >
> > > To do such thinking I have come up with a fairly uniform approach to
> > > all these tasks, so I guess you could call that approach something
> > > approaching "a theory of intelligence".  But I mainly think of it as a
> > > theory of how to get certain really cool things done.
> > >
> > >
> > >
> > > I don't expect to get what is listed all at once, but, barring some
> > > major setback, this will probably all happen (with perhaps a partial
> > > exception for the last item) within twenty years, and with the right people
> > > getting big money, substantially all of it could happen in ten.
> > >
> > >
> > >
> > > In addition, as we get closer to the threshold I think "intelligence"
> > > (at least from our perspective) should include:
> > >
> > >
> > >
> > > -helping make individual people, human organizations, and human
> > > governments more intelligent, happy, cooperative, and peaceful
> > >
> > > -helping create a transition into the future that is satisfying
> > > for most humans
> > >
> > >
> > >
> > > Edward W. Porter
> > > Porter & Associates
> > > 24 String Bridge S12
> > > Exeter, NH 03833
> > > (617) 494-1722
> > > Fax (617) 494-1822
> > > [EMAIL PROTECTED]
> > >
> > >
> >
> >
>
>

-----
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244&id_secret=55817525-eecab7
