I'd be very interested in hearing the intuition-level theories that
grow around AGI efforts. There are very few stable theories that
actually aspire to achieve AGI. For the last several months I've been
thinking over possible knowledge representations and learning
dynamics, generalizing and refining various approaches, and the puzzle
grows and grows, transforming into new shapes, even though the
complexity of the picture stays limited. I keep encountering things
similar to ones I saw here and there, which I didn't view as in the
least related to AGI (and they were not described in a way that would
show them this way). There is knowledge out there that isn't
expressed in relevant terms to be of any help, and that only mocks in
hindsight. It would be enlightening to have it presented in an
accessible form.

On 11/8/07, Matt Mahoney <[EMAIL PROTECTED]> wrote:
>
> --- Linas Vepstas <[EMAIL PROTECTED]> wrote:
>
> > On Wed, Nov 07, 2007 at 08:38:40AM -0700, Derek Zahn wrote:
> > > A large number of individuals on this list are "architecting" an AGI
> > > solution (or part of one) in their spare time.  I think that most of
> > > those efforts do not have meaningful answers to many of the questions,
> > > but rather intend to address AGI questions from a particular perspective.
> >
> > [...]
> > >
> > > Probably most people like that are not "serious contenders" in the sense
> > > of having a complete detailed plan for achieving a full AGI.
> >
> > And the "serious contenders" are a handful of small companies that
> > seem unlikely to fill out a self-assessment status report card
> > revealing their weaknesses and strengths to the competition.
> >
> > Tell me again why *anyone* would want to fill this out?
> > If I had some neat whiz-bang thing, I know enough marketing
> > to know that I should emphasize what it's great at, rather
> > than placing large blaring red X's on the 19 check-boxes
> > that it sucks at.
> >
> > I thought the point was to promote collaboration, but I don't
> > see how.  Do you really think you'll convince Cyc corp to
> > use SUMO's upper ontology, or v.v.? Do you think that anyone
> > working on a theorem prover will abandon it, to go work on
> > NARS, or v.v?
> >
> > Most of the major projects already have articles on Wikipedia;
> > I don't see much addition here except cruft.  Maybe I missed
> > the point; excuse me if I sound negative.
>
> Maybe listing all the projects that have NOT achieved AGI might give us some
> insight.
>
> - Early attempts at AI like GPS [1] and the 1959 Russian-English translation
> project seriously underestimated the difficulty of the problem.
> - Later attempts like SHRDLU and Cyc seriously underestimated the difficulty
> of the problem.
> - Current AGI projects like Novamente are forging ahead, even though we STILL
> do not know how much training data and computing power we need.
> - Big companies like Google and IBM (Blue Brain) with massive data sets and
> computing power are still doing basic research.
> - Really smart people like Minsky, Kurzweil, and Yudkowsky are not trying to
> actually build AGI.
>
> 1. A. Newell, H. A. Simon, "GPS: A Program that Simulates Human Thought",
> Lernende Automaten, Munich: R. Oldenbourg KG, 1961.
>
>
>
> -- Matt Mahoney, [EMAIL PROTECTED]
>
> -----
> This list is sponsored by AGIRI: http://www.agiri.org/email
> To unsubscribe or change your options, please go to:
> http://v2.listbox.com/member/?&;
>


-- 
Vladimir Nesov                            mailto:[EMAIL PROTECTED]
