I wonder if there could be an open source project where people or teams
might be able to try their own ideas without being persuaded to pursue
someone else's overarching theory?
Jim Bromer


On Sun, Feb 24, 2019 at 12:09 PM Mike Archbold <jazzbo...@gmail.com> wrote:

> On 2/24/19, Matt Mahoney <mattmahone...@gmail.com> wrote:
> > Colin, I think the source of our disagreement is that we have very
> > different ideas about what we mean by AGI. To you, AGI is an autonomous
> > agent that learns on its own by doing science (experiments). It is
> > completely general and can work in any field like a real human. To me,
> > AGI means automating anything we might otherwise have to pay a human
> > to do.
>
> It seems like you are both right: AGI should be an autonomous agent
> that learns on its own by doing science (experiments) AND AGI should
> be automating anything we might otherwise have to pay a human to do.
>
> (Now all that remains is the trivial task of development .... :)
>
> >
> > You believe (I think) that AGI is not even possible on conventional
> > computers. I tend to agree. First, transistors use too much power:
> > about a megawatt for a human-brain-sized neural network. We might
> > achieve tens of kilowatts using neuromorphic computing at the
> > physical limits of miniaturization.
> >
> > Second, the brain is optimized for reproductive fitness, not universal
> > learning, something Legg proved is not even mathematically possible.
> > Instead we are born with 10^9 bits of knowledge encoded in our DNA.
> > That is half of what we know as adults, and that knowledge took 3
> > billion years to program at the rate of one bit per generation. For
> > example, you cannot learn to memorize a 20-digit permutation on a
> > screen and immediately recall it, no matter how much you practice,
> > which is something a gorilla can do because its DNA is different.
> >
> > Third, we do not even want autonomy. Then we would have to deal with
> > human limitations like emotions and the need to sleep, take vacations,
> > and get paid. We work around these limitations and train humans to
> > specialize in a million different fields because that is how an
> > organization gets work done.
> >
> > I don't expect AGI to look anything like a human. We already automate
> > 99% of work using specialized machines that are vastly better at their
> > jobs than humans could ever be. We don't want autonomy. We want to be
> > in control. We want to asymptotically approach 100% automation as the
> > cost of the remaining human portion rises at 3-4% per year, as it has
> > for centuries. AGI is not a robot revolution. AGI is more productivity
> > with less effort using machines that can see and understand language,
> > sense but not feel, know what we want without wanting, and recognize
> > and predict human emotions without having any.
> >
> > On Sun, Feb 24, 2019, 1:56 AM Colin Hales <col.ha...@gmail.com> wrote:
> >
> >> Matt:
> >>
> >> "When you put millions of these specialists together you have AGI."
> >>
> >> No you don't!!!! Says who? (1) Where's the proof? (2) Where's the
> >> principle that suggests it? You have neither of these things. Even if
> >> you had both, you'd still have to build AGI and test it, assuming
> >> they are false, in order to do the science properly.
> >>
> >> And you do not get to 'define it' this way.
> >>
> >> This is SCIENCE. You have an artificial version of a natural general
> >> intelligence when you've built one and it (yes, the AGI itself, not
> >> your or anyone else's blessing) AUTONOMOUSLY proves it. Like fire,
> >> flight, and a million other things.
> >>
> >> I can think of 1000 things that such a specialist-narrow-AI collection
> >> doesn't cover (like everything that science does not know, but could
> >> find out), and that a natural general intelligence can autonomously
> >> learn (find out), but that a 'collection of specialist narrow AI'
> >> lacks .... along with an ability to _autonomously_ learn it, which is
> >> also something natural general intelligence (yes, us) can do. And even
> >> worse: such specialist collections have ZERO intelligence, not because
> >> they don't know something, but because they lack the _autonomous_ bit.
> >> A real AGI can know absolutely NOTHING and yet have non-zero
> >> intelligence because it includes a means to autonomously find out.
> >> Like us.
> >>
> >> So that collection cannot be an "artificial version of a natural
> >> general intelligence".
> >>
> >> End of story.
> >>
> >> Real AGI is defined by how it autonomously handles what it doesn't
> >> know (its ignorance!), not by what we bestow on it. If we bestow a
> >> means for learning X, that too does not make it AGI. A real but
> >> artificial version of a natural general intelligence has to
> >> _autonomously_ learn how to learn something it does not know, like
> >> us. To prove it, you first prove it does NOT know it. Then, later,
> >> after learning, when it proves autonomously that it does, it gets a
> >> stab at the AGI gold medal.
> >>
> >> Meanwhile? Nothing wrong with what's being done. Powerful. Useful.
> >> Impressive. All good. Rah rah rah, carry on. BUT: It's AUTOMATION
> >> based on natural general intelligence, not AGI, and it has ZERO
> >> intellect.
> >>
> >> [image: AGI.JPG]
> >>
> >> The entire AGI project founders on this basic fact of the science.
> >>
> >> And all we ever get here is the endless echo chamber of "if only we
> >> can program enough computers" (= AUTOMATION) and AGI will magically
> >> appear. Rubbish. It was rubbish 65 years ago, and we've done nothing
> >> but prove more completely that it is still rubbish.
> >>
> >> Enough.
> >>
> >> I know you'll never face it. Forget it.
> >>
> >>
> >>
> >> On Sun., 24 Feb. 2019, 11:08 am Matt Mahoney, <mattmahone...@gmail.com>
> >> wrote:
> >>
> >>> OpenCog is one open source effort. But real progress in AI like
> >>> Google, Siri, Alexa etc. is not just software. It's hundreds of
> >>> petabytes of data from the 4 billion people on the internet and
> >>> the millions of CPUs needed to process it. It's not just something
> >>> you could download and run.
> >>>
> >>> I realize it's not AGI yet. We are still spending USD $83 trillion
> >>> per year for work that machines can't do yet. There are still
> >>> incompletely solved problems in vision, language, robotics, art,
> >>> and modelling human behavior. That's going to take lots more data
> >>> and computing power. The theoretical work is mostly done, although
> >>> we still lack good models of humor and music and much of our own
> >>> DNA.
> >>>
> >>> If you want to make progress, choose a narrow AI problem. When you
> >>> put millions of these specialists together, you have AGI. Don't try
> >>> to do it all yourself. You can't.
> >>>
> >>> On Sat, Feb 23, 2019, 12:33 PM Ed Pell <edp...@optonline.net> wrote:
> >>>
> >>>> All, why is there no open source AGI effort?
> >>>>
> >>>> Ed Pell
> >>>>

------------------------------------------
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T69c9e23ba6b51be9-Mcaba863ff522f7804cc1f3f7
Delivery options: https://agi.topicbox.com/groups/agi/subscription
