The obvious application of AGI is automating the $80 trillion per year we
pay people for work that machines aren't smart enough to do. That
means solving hard problems in language, vision, robotics, art, and
modeling human behavior. I listed the requirements in more detail in my
paper.
Machines today want only what we make them want; only we truly want things.
So if you want AGI, you need to create something with a true purpose, not an
artificial one.
I think anything more than narrow AI is blowing it out of proportion,
and we need something more important to do.
Thank you. I really enjoy and appreciate your comments.
There is no universal problem solver. So for the purpose of building a real
AGI, how many problems should our model be able to solve? How big is our
problem space?
On Thu, Aug 1, 2019, 8:22 AM Matt Mahoney wrote:
> The human brain cannot
Thank you for your email. You know, it is not about time management and
whether it is worth the time. I am here to learn, and I appreciate your
comments and criticism. What approach would you suggest that will never lead
to undefined states?
On Thu, Aug 1, 2019, 3:25 PM Jim Bromer wrote:
> Mohammadreza said, "I think
and importantly, as Ben predicts, 6) the ability of a narrow AGI to
utilize multiple sub-AGIs seamlessly within a functional area group
increases
On 8/1/19, Mike Archbold wrote:
> I like this editorial but I'm not sure "Narrow AGI" is the best label.
> At the moment I don't have a better name
I like this editorial but I'm not sure "Narrow AGI" is the best label.
At the moment I don't have a better name for it though. I mean, I
agree in principle but it's like somebody saying "X is a liberal
conservative." X might really be so, but it might be that... oh hell,
why don't we just call it
Sorry. I didn't know
On Thu, Aug 1, 2019 at 8:25 PM Matt Mahoney wrote:
> You let it out of the box?!?!? WE'RE DOOMED!!!
>
> On Thu, Aug 1, 2019, 7:10 AM Danko Nikolic
> wrote:
>
>> Hi everyone,
>>
>> I just tried the new agi library for Python. This is so exciting! But it
>> does not
So Mars gets conquered by AI robots. What makes Tensor Flaw so intelligent
about surgery or proving math theorems?
Bias?
On 01.08.2019 13:16, Ben Goertzel wrote:
https://blog.singularitynet.io/from-narrow-ai-to-agi-via-narrow-agi-9618e6ccf2ce
--
The cortical algorithm is interesting. Cortical columns are pretty sexy
too because they're an obvious target for finding a high level algorithm
that does the same or better.
But regardless of how you implement cortex, the next thing you MUST do
to achieve a full functioning mind is to
Narrow elements within AGI development can serve as scaffolding, but unless
the project philosophy is inherently General, it's likely to fall into the
Narrow AI trap.
https://towardsdatascience.com/no-you-cant-get-from-narrow-ai-to-agi-eedc70e36e50
from External to Internal Intelligence:
That was a good article - I generally agree with it, but am a little
skeptical about different industries sharing knowledge openly enough for it
to be completely effective.
It will most likely turn out to be a lot of paywalls / walled gardens.
On Thu, Aug 1, 2019 at 7:47 PM Ben Goertzel wrote:
Hi everyone,
I just tried the new agi library for Python. This is so exciting! But it
does not work really well for me. It is not responding any more. Where am I
making the mistake? Please see below a screenshot of my code.
Thanks for any help.
Danko
[image: capture.PNG]
Mohammadreza said, "I think "intelligence" means optimization. So, if it is
true, how can we tell an AGI agent to act optimally? e.g. with IF-THEN
rules? definitely Not! These rules may lead to unforeseen states."
If-then rules are not the only possible application of discrete reasoning.
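To make that concrete (this sketch is mine, not from the thread): a forward-chaining rule engine is discrete reasoning that is not a fixed if-then branch. New facts are derived from rules until a fixed point, so the agent's conclusions in states the programmer never enumerated still follow from the rule set rather than from hard-coded branches. The rule names below are made up for illustration.

```python
def forward_chain(facts, rules):
    """Derive new facts by applying rules until no rule fires.

    facts: iterable of atoms (strings)
    rules: list of (premises, conclusion) pairs; a rule fires when
           all its premises are already in the fact set.
    """
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            # Fire the rule only if all premises hold and it adds something new
            if set(premises) <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

# Hypothetical robot rules, purely illustrative
rules = [
    (("battery_low",), "seek_charger"),
    (("seek_charger", "charger_visible"), "navigate_to_charger"),
]
print(sorted(forward_chain({"battery_low", "charger_visible"}, rules)))
```

The second rule fires only because the first one derived "seek_charger" at run time - a chain of inference, not a pre-written branch for that exact state.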
https://blog.singularitynet.io/from-narrow-ai-to-agi-via-narrow-agi-9618e6ccf2ce
--
Ben Goertzel, PhD
http://goertzel.org
“The only people for me are the mad ones, the ones who are mad to
live, mad to talk, mad to be saved, desirous of everything at the same
time, the ones who never yawn or say