>
> On Dec 2, 2019 at 2:50 AM, <Matt Mahoney (mailto:[email protected])>
> wrote:
>
>
>
> I don't believe anyone on this list is working on SOAR. It is an old system
> (1983). Back then the only thing you could do in AI with available computing
> power was structured knowledge representation, rule-based language models,
> and expert systems. Any work done with autonomous agents was only in
> simulation. There was no vision, hearing, speech, or robotics. Language
> understanding was brittle, with little resemblance to the way humans process
> language. We learn semantics before grammar, the opposite of machines, which
> is why they are so bad with ambiguity.
>
>
> OpenCog by Ben Goertzel, who posts here occasionally, dates to 1995 if you
> include its precursors Webmind and Novamente. It has many similar
> limitations due to hardware. The atomspace architecture is supposed to
> support structured knowledge, probabilistic reasoning, induction, and
> learning, but there is nevertheless no knowledge base or useful applications.
> The evolutionary learner MOSES and neural vision system DeSTIN only work on
> toy problems and were never integrated with atomspace as it was designed to
> be.
>
>
>
> MOSES is being integrated with AtomSpace, see the “as-moses” repo on GitHub.
> DeSTIN has been dropped in favor of modern Artificial Neural Network models,
> which makes sense given their limited manpower.
>
>
>
> The last public demo was in 2009 of a puppy in a virtual world. Since then
> there really hasn't been any basic research.
>
>
>
> I don't mean to be critical but AGI is a really hard problem which no
> individual on this list has the resources to solve. Google, Amazon, Apple,
> Facebook, and Microsoft have made some progress, but these are companies with
> trillion dollar market caps.
>
>
>
> You don’t magically make progress in research by throwing money at it - it
> won’t make the people working on it smarter. It does, however, allow you to
> hire lots of developers to make high-quality software for you, and to use
> lots of data and compute to train statistical models.
>
>
>
> Current AI techniques still result in models that are often brittle and show
> signs of not being congruent with the human mode of learning - so much for
> that basic research done by those trillion-cap companies, I guess...
>
>
>
> A human brain sized neural network needs 10 to 20 petaflops and a petabyte of
> RAM. Our software, encoded in DNA, is equivalent to 300 million lines, or
> $30 billion. And then you have to train it on an exabyte of video.
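The arithmetic behind these figures can be checked with a quick back-of-envelope script. This is only a sketch: the synapse count, firing rate, bytes-per-synapse, and cost-per-line values are my assumptions, chosen to show how one might arrive at numbers of the magnitude quoted above, not figures stated in the message itself.

```python
# Back-of-envelope check of the figures quoted above.
# Assumed parameters (not from the message): ~1e15 synapses firing at ~10 Hz,
# 1 byte per synaptic weight, and ~$100 per line of debugged code.
SYNAPSES = 1e15
FIRING_RATE_HZ = 10
LINES_OF_CODE = 300e6
COST_PER_LINE_USD = 100

flops = SYNAPSES * FIRING_RATE_HZ       # one multiply-add per synapse event
ram_bytes = SYNAPSES * 1                # one byte per synaptic weight
cost_usd = LINES_OF_CODE * COST_PER_LINE_USD

print(f"{flops / 1e15:.0f} petaflops")  # -> 10 petaflops
print(f"{ram_bytes / 1e15:.0f} PB RAM") # -> 1 PB RAM
print(f"${cost_usd / 1e9:.0f} billion") # -> $30 billion
```

Doubling the assumed firing rate to 20 Hz gives the upper end of the 10-20 petaflop range.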
>
>
>
> But this approach doesn't even make sense. Our whole economy is based on job
> specialization. It is far more efficient to organize machines like we
> organize people, each doing a specific task. Everyone making progress in AI
> is doing narrow AI, and really this is the only practical approach. Instead
> of trying to automate a million different jobs all at once, you'll have more
> success automating one job. That's going to be hard enough, given that all
> the low hanging fruit has been picked.
>
>
>
>
> On Sat, Nov 30, 2019, 11:28 AM digikar via AGI <[email protected]
> (mailto:[email protected])> wrote:
>
> >
> >
> > I am a third-year student pursuing a Bachelors in Computer Science and
> > Engineering, and have been wanting to get into AGI for maybe two years
> > now. I discovered OpenCog and felt it to be too daunting - I think I'll
> > require another year or two of study to make good sense of it.
> >
> >
> >
> > I studied some first language acquisition last summer (along with Andrew
> > Ng's basic ML course, Stanford's CS 224n on NLP with deep learning, and a
> > more rigorous and exhaustive Foundations of ML course at my own
> > university). Reading about first language acquisition led me to believe
> > that a primary problem is being able to represent the world (in as much
> > detail as possible, since dealing with block worlds is easy). So, this is
> > the primary issue.
> >
> >
> >
> > Recently, I spent some time with Natural Semantic Metalanguage, and its
> > criticisms. The concept is definitely ambitious; however, it doesn't seem
> > to be the way our thoughts work. For instance, see the explication for
> > "left"
> > (https://linguistics.stackexchange.com/questions/29586/nsm-explication-for-left);
> > for me, 'left' just evokes a direction rather than all the other things.
> > It might still be useful for actually representing the world in a
> > computer, but an explicit simulation (which I haven't worked with yet)
> > seems more tractable.
> >
> >
> >
> > I found SOAR ambitious and more "established" - however, their forums
> > (https://soar.eecs.umich.edu/forum) seemed dead. So I just wanted to know
> > if anyone is working on it.
> >
> >
> >
> > Other than that, does there exist some system for representing the world?
> > Gaming systems come to mind, but is there some established standard?
> >
> >
> >
> > Thanks!
> >
> >
>
>
>
------------------------------------------
Artificial General Intelligence List: AGI
Permalink:
https://agi.topicbox.com/groups/agi/T7116d9f5c1f0551d-M52b924fb1d530ec990de84e4
Delivery options: https://agi.topicbox.com/groups/agi/subscription