I have nothing but kudos for anyone who works in the AGI field, no matter what they're working on. Considering that in ~70 years of work no one has yet demonstrated anything resembling a strong AI, I think it's premature to consider any approach superior to any other. We clearly need as many people as possible working on as many different approaches as possible - overlapping or not - to cover this massive research space. What I can't accept quietly is 1) those who make unsubstantiated claims about their own work, and 2) those who level undue criticism at the work of others. So far this morning I've seen both.

On 6/30/2015 11:06 AM, Julian Samaroo wrote:
Dillon,

I have in fact watched those videos many times over, and while they are nice to look at, they say absolutely nothing about how Numenta plans to implement behavior. I could, right now, code a simple TD-learning system (which is well described in the literature) to perform actions, and then slap on a convolutional neural network so that it behaves in response to certain images of people, places, objects, or really whatever I'd like. Why is Numenta so slow to add something so simple? These things have been done time and time again - as has HTM itself. Look up Recurrent Sparse Autoencoders, and you'll find they perform the exact same operations and have been known about for decades. I'm sorry, but every time I hear Jeff speak about something, it seems like he's embellishing some empty concept in his head, without a single way to implement it on paper or in code. And when he does give some inkling of how to implement it, it's already been done by a previous researcher.
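
To be concrete about how simple this is: below is a minimal sketch of such a TD-learning system - tabular Q-learning with an epsilon-greedy policy. The one-dimensional toy environment and reward are made up for illustration; in the setup described above, a convolutional network would supply the state from images.

    import random
    from collections import defaultdict

    ALPHA, GAMMA, EPSILON = 0.1, 0.95, 0.1    # learning rate, discount, exploration
    ACTIONS = [-1, +1]                        # move left / right on a 1-D track
    GOAL = 5
    q = defaultdict(float)                    # (state, action) -> estimated return

    def choose_action(state):
        if random.random() < EPSILON:                       # explore occasionally
            return random.choice(ACTIONS)
        return max(ACTIONS, key=lambda a: q[(state, a)])    # otherwise act greedily

    def step(state, action):
        """Toy environment: reward 1 for reaching GOAL, 0 otherwise."""
        next_state = max(0, min(9, state + action))
        return next_state, (1.0 if next_state == GOAL else 0.0)

    for episode in range(500):
        state = 0
        while state != GOAL:
            action = choose_action(state)
            next_state, reward = step(state, action)
            # TD(0) update: move q toward reward + discounted best next value
            best_next = max(q[(next_state, a)] for a in ACTIONS)
            q[(state, action)] += ALPHA * (reward + GAMMA * best_next - q[(state, action)])
            state = next_state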

Julian

Julian Samaroo
Manager of Information Technology
BluePrint Pathways, LLC
(516) 993-1150

On Tue, Jun 30, 2015 at 10:01 AM, Dillon Bender <[email protected]> wrote:

    Lol, okay...

    Have you watched this? https://www.youtube.com/watch?v=ZFazR5yqesk

    Though, technically, they aren’t developing any structure per se,
    just the algorithms. This talk is obviously about including the
    algorithms behind sub-cortical structures. It’s the main focus
    right now. I’m not pretending anything.

    Jeff also explains his goals for integrating sensorimotor feedback
    in this Q&A session: https://www.youtube.com/watch?v=HRwf2uQTiWU

    - Dillon

    From: nupic [mailto:[email protected]] On Behalf Of Matthew Lohbihler
    Sent: Tuesday, June 30, 2015 9:37 AM
    To: Dillon Bender
    Subject: Re: Response to Jeff Hawkins interview.

    Actually, he doesn't. Jeff talks about cortex all the time. I have
    never seen any talk of, research into, or plans to develop any
    other structure. Don't get me wrong: cortex is a key thing. But
    let's not pretend that, publicly anyway, anything else matters
    much to Numenta at the moment.

    On 6/30/2015 10:20 AM, Dillon Bender wrote:

        Right, Jeff talks about this all the time. An isolated cortex
        knows virtually nothing and can cause nothing. It requires the
        sub-cortical structures like the basal ganglia for learning
        sensorimotor perception and control. That aspect will no doubt
        need to be included in HTM in some form. But as he also says
        all the time, there's no reason it has to resemble natural,
        humanoid functions. All the cortical principles will be
        applied generally to any sensory domain, limited only by our
        imagination. No circumvention of the biological algorithm is
        planned.

        - Dillon

        From: nupic [mailto:[email protected]] On Behalf Of Matthew Lohbihler
        Sent: Tuesday, June 30, 2015 9:03 AM
        To: Dillon Bender
        Subject: Re: Response to Jeff Hawkins interview.

        I tend to agree with John. I suspect that intelligence
        developed upon a neurological substrate without which the
        cortex can't function completely. Maybe, just maybe, MI can
        still be developed by circumventing the substrate, but we'll
        learn so much more by developing the substrate too.


        On 6/30/2015 9:49 AM, Dillon Bender wrote:

            <John> "And I think we'll have to work our way through the whole animal 
kingdom to get a humanoid robot working."

            If what you mean is that researchers should start with building simple organisms and then bolt on the more recently evolved systems, then I think this is false. The human brain contains the entirety of non-mammal to mammal evolution, so there is no reason to model non-mammals.

            I think you have missed out on Numenta's current research goal of working sensorimotor integration into CLA theory, because they realized before you that intelligence "needs to be embodied with a sensory-motor loop at the core of its functionality." They have stated many times that the previous version of the theory modeled L2/3 of the cortex, and that now adding L4 (and soon L5) will help close the sensorimotor loop.
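
            To make the sensorimotor loop concrete, here is a minimal, hypothetical sketch of a sense-predict-act cycle. None of these names are Numenta's API; the learner is a trivial transition-counting model used only to show the shape of the loop - acting changes what is sensed next, so sensation and behavior are learned together.

                import random
                from collections import defaultdict

                class ToyWorld:
                    """A 1-D world: the agent's position is what it senses."""
                    def __init__(self):
                        self.pos = 0
                    def sense(self):
                        return self.pos
                    def act(self, move):
                        self.pos = max(0, min(9, self.pos + move))

                class SequenceLearner:
                    """Counts (state, action) -> next-state transitions; predicts the mode."""
                    def __init__(self):
                        self.counts = defaultdict(lambda: defaultdict(int))
                    def learn(self, state, action, next_state):
                        self.counts[(state, action)][next_state] += 1
                    def predict(self, state, action):
                        seen = self.counts[(state, action)]
                        return max(seen, key=seen.get) if seen else None

                world, learner = ToyWorld(), SequenceLearner()
                state = world.sense()
                for _ in range(1000):
                    action = random.choice([-1, 1])   # motor command ("motor babbling")
                    world.act(action)                 # behavior changes the next sensation
                    next_state = world.sense()
                    learner.learn(state, action, next_state)
                    state = next_state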

            - Dillon

            -----Original Message-----
            From: nupic [mailto:[email protected]] On Behalf Of John Blackburn
            Sent: Tuesday, June 30, 2015 4:55 AM
            To: Dillon Bender
            Subject: Re: Response to Jeff Hawkins interview.

            Sorry to reopen this thread, I missed it! David, I wanted to comment on what you said on Facebook:

            2.) For the first time in human history, we have an algorithm which models activity in the neocortex and performs with true intelligence exactly *how* the brain does it (it's the HOW that is truly important here). ...and by the way, this was also contributed by Jeff Hawkins and Numenta.

            "performs with true intelligence" is a pretty bold claim. If this 
is the case, how come there are no very convincing examples of HTM working with human 
like intelligence? The Hotgym example is nice but it is really no better than what could 
be achieved with many existing neural networks. Echo state networks have been around for 
years and can make temporal predictions quite well. I recently presented some time 
sequence data relating to a bridge to this forum but HTM did not succeed in modelling 
this (ESNs worked much better). So outside of Hotgym, what really compelling demos do you 
have? I've been away for a while so maybe I missed something...
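
            For anyone unfamiliar with the comparison being made: an echo state network is just a fixed random recurrent reservoir plus a trained linear readout. A minimal sketch in numpy follows - the reservoir size, spectral-radius rescaling, and toy sine-wave task are illustrative choices, not a claim about the bridge data above.

                import numpy as np

                rng = np.random.default_rng(0)
                n_res = 200                                   # reservoir size (illustrative)

                W_in = rng.uniform(-0.5, 0.5, (n_res, 1))     # fixed random input weights
                W = rng.uniform(-0.5, 0.5, (n_res, n_res))    # fixed random recurrent weights
                W *= 0.9 / max(abs(np.linalg.eigvals(W)))     # rescale for the echo state property

                def run_reservoir(u_seq):
                    """Drive the reservoir with a scalar input sequence; collect states."""
                    x = np.zeros(n_res)
                    states = []
                    for u in u_seq:
                        x = np.tanh(W_in[:, 0] * u + W @ x)
                        states.append(x.copy())
                    return np.array(states)

                u = np.sin(np.arange(1000) * 0.1)             # toy temporal signal
                X, y = run_reservoir(u[:-1]), u[1:]           # task: predict the next value

                # Only the linear readout is trained, here by ridge regression.
                W_out = np.linalg.solve(X.T @ X + 1e-6 * np.eye(n_res), X.T @ y)
                pred = X @ W_out                              # one-step-ahead predictions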

            I am also rather concerned that HTM needs swarming before it can model anything. Isn't that "cheating" in a way? It seems the HTM is rather fragile and needs a lot of help. The human brain does not have this luxury; it just has to cope with whatever data it gets.
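
            For context, swarming is automated search over a model's parameters. A minimal sketch of the idea - this is plain random search over a made-up parameter space, not Numenta's particle-swarm implementation - shows what kind of "help" is being objected to:

                import random

                def evaluate(params):
                    """Stand-in for training a model and returning its prediction error."""
                    return (params["alpha"] - 0.3) ** 2 + (params["n_cols"] - 2048) ** 2 / 1e7

                best, best_err = None, float("inf")
                for _ in range(200):
                    candidate = {
                        "alpha": random.uniform(0.0, 1.0),              # hypothetical parameter
                        "n_cols": random.choice([512, 1024, 2048, 4096]),
                    }
                    err = evaluate(candidate)
                    if err < best_err:
                        best, best_err = candidate, err
                # 'best' now holds the parameters the model would actually be run with.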

            I'm also not convinced the neocortex is everything, as Jeff Hawkins thinks. I seriously doubt the bulk of the brain is just scaffolding.

            I've been told birds have no neocortex but are capable of very intelligent behaviour, including constructing tools. Meanwhile I don't see any AI robot capable of even ant-like intelligence. (Ants are amazing!) Has anyone even constructed a robot based on HTM?

            Personally I don't think a disembodied computer can ever be intelligent (not even ant-like intelligence). IMO a robot (and it must BE a robot) needs to be embodied with a sensory-motor loop at the core of its functionality to start behaving like an animal. (Animals are the only things we know that show intelligence: clouds don't, volcanoes don't, computers don't.) And I think we'll have to work our way through the whole animal kingdom to get a humanoid robot working.

            John.

            On Mon, May 25, 2015 at 10:17 PM, cogmission (David Ray) <[email protected]> wrote:

                You're probably right :-)

                On Mon, May 25, 2015 at 4:16 PM, Matthew Lohbihler <[email protected]> wrote:

                    Yes, I agree. Except for the part about checking up on us. As I mentioned before, indifference to us seems to me to be more the default than caring about us.

                    On 5/25/2015 5:03 PM, cogmission (David Ray) wrote:

                    Let me try and think this through. Only in the context of scarcity does the question of AGI *or* us come about. Where there is no scarcity, I think an AGI will just go about its business - peeking in from time to time to make sure we're doing ok. Why, in a universe where it can go anywhere it wants, produce infinite energy, and not be bound by our planet, would a super-super intelligent being even be obsessed over us, when it could merely go someplace else? I honestly think that is the way it will be. (And maybe it already is!)

                    On Mon, May 25, 2015 at 3:56 PM, Matthew Lohbihler <[email protected]> wrote:

                        Forgive me David, but these are very loose definitions, and I've lost track of how they relate back to what an AGI will think about humanity. But to use your terms - hopefully accurately - what if the AGI satisfies its sentient need for "others" by creating other AGIs, ones that it can love and appreciate? I doubt humans would ever be up to such a task, unless 1) as pets, or 2) with cybernetic improvements.

                        On 5/25/2015 4:37 PM, David Ray wrote:

                        Observation is the phenomenon of distinction, in the domain of language.

                        The universe consists of two things, content and context. Content depends on its boundaries in order to exist. It depends on what it is not for its being. Context is the space for things to be, though it is not quite space, because space is yet another thing. It has no boundaries, and it cannot be arrived at by assembling all of its content.

                        Ideas - love, hate, our sense of who we are, our histories, what we know to be true - all of those are content. Context is what allows for that stuff to be. And all of it lives in language, without which there would be nothing.

                        There may well be a "drift", but we wouldn't know about it and we wouldn't be able to observe it.

                        Sent from my iPhone

                        On May 25, 2015, at 3:26 PM, Matthew Lohbihler <[email protected]> wrote:

                        You lost me. You seem to be working with definitions of "observation" and "space for thinking" that I'm unaware of.

                        On 5/25/2015 4:14 PM, David Ray wrote:

                        Matthew L.,

                        It isn't a thought. It is there before observation or thoughts or thinking. It actually is the space for thinking to occur - it is the context that allows for thought. We bring it to the table - it is there before we are (ontologically speaking). (It being this sense of integrity/wholeness.)

                        Sent from my iPhone

                        On May 25, 2015, at 2:59 PM, Matthew Lohbihler <[email protected]> wrote:

                        Goodness. I thought we agreed that an AGI would not think like humans. And besides, "love" doesn't feel like something I want to depend on as obvious in a machine.

                        On 5/25/2015 3:50 PM, David Ray wrote:

                        If I can take this conversation in yet a different direction.

                        I think we've all been dancing around the question of what underlies the generation of morality, or how will an AI derive its sense of ethics? Of course initially there will be those parameters that are programmed in - but eventually those will be gotten around.

                        There has actually been a lot of research into this - though it's not common knowledge, it is knowledge developed over the observation of millions of people.

                        The universe and all beings along the gradient of sentience observe (albeit perhaps unconsciously) a sense of what I will call integrity or "wholeness". We'd like to think that mankind steered itself through the ages toward notions of gentility and societal sophistication; but it didn't really. The idea that a group or different groups devised a grand plan to have it turn out this way is totally preposterous.

                        What is more likely is that there is a natural order to things, and that is motion toward what works for the whole. I can't prove any of this, but internally we all know when it's missing or when we are not in alignment with it. This ineffable sense is what love is - it's concern for the whole.

                        So I say that any truly intelligent being, by virtue of existing in a substrate of integrity, will have this built in, and a superintelligent being will understand that ultimately the best chance for any single instance to survive is for the whole to survive.

                        Yes, I know people will immediately want to cite all the aberrations, and of course yes, there are aberrations, just as there are mutations; but those aberrations are reactions to how a person is shown love during their development.

                        Like I said, I can't prove any of this, but eventually it will bear itself out and we will find it to be so in the future.

                        You can be skeptical if you want to, but ask yourself some questions. Why is it that we all know when it's missing (fairness/justice/integrity)? Why is it that we develop open source software and free software? Why is it that, despite our greed and insecurity, society moves toward freedom and equality for everyone?

                        One more question. Why is it that the most advanced philosophical beliefs cite that where we are located as a phenomenological event is not in separate bodies?

                        I know this kind of talk doesn't go over well in this crowd of concrete thinkers, but I know that there is some science somewhere that backs this up.

                        Sent from my iPhone

                        On May 25, 2015, at 2:12 PM, vlab <[email protected]> wrote:

                        Small point: even if they did decide that our diverse intelligence is worth keeping around (having not already mapped it into silicon), why would they need all of us? Surely 10% of the population would give them enough 'sample size' to get their diversity ration; heck, maybe 1/10 of 1% would be enough. They may find that we are wasting away the planet (oh, not maybe, we are) and that the planet would be more efficient and they could have more energy without most of us. (Unless we become 'copper tops' as in the Matrix movie.)

                        On 5/25/2015 2:40 PM, Fergal Byrne wrote:

                        Matthew,

                        You touch upon the right point. Intelligence which can self-improve could only come about by having an appreciation for intelligence, so it's not going to be interested in destroying diverse sources of intelligence. We represent a crap kind of intelligence to such an AI in a certain sense, but one which it itself would rather communicate with than condemn its offspring to have to live like. If these things appear (which looks inevitable) and then they kill us, many of them will look back at us as a kind of "lost civilisation" which they'll struggle to reconstruct.

                        The nice thing is that they'll always be able to rebuild us from the human genome. It's just a file of numbers after all.

                        So, we have these huge threats to humanity. The AGI future is the only reversible one.

                        Regards

                        Fergal Byrne

                        --
                        Fergal Byrne, Brenter IT
                        Author, Real Machine Intelligence with Clortex and NuPIC
                        https://leanpub.com/realsmartmachines
                        Speaking on Clortex and HTM/CLA at euroClojure Krakow, June 2014: http://euroclojure.com/2014/
                        and at LambdaJam Chicago, July 2014: http://www.lambdajam.com
                        http://inbits.com - Better Living through Thoughtful Technology
                        http://ie.linkedin.com/in/fergbyrne/ - https://github.com/fergalbyrne
                        e: [email protected]  t: +353 83 4214179
                        Join the quest for Machine Intelligence at http://numenta.org
                        Formerly of Adnet: [email protected]  http://www.adnet.ie

                        On Mon, May 25, 2015 at 7:27 PM, Matthew Lohbihler <[email protected]> wrote:

                            I think Jeff underplays a couple of points, the main one being the speed at which an AGI can learn. Yes, there is a natural limit to how much experimentation in the real world can be done in a given amount of time. But we humans are already going beyond this with, for example, protein folding simulations, which speed up the discovery of new drugs and such by many orders of magnitude. Any sufficiently detailed simulation could massively narrow down the amount of real-world verification necessary, such that new discoveries happen more and more quickly - possibly, at some point, faster than we know the AGI is making them. An intelligence explosion is not a remote possibility. The major risk here is what Eliezer Yudkowsky pointed out: not that the AGI is evil or something, but that it is indifferent to humanity.

                            No one yet goes out of their way to make any form of AI care about us (because we don't yet know how). What if an AI created self-replicating nanobots just to prove a hypothesis?

                            I think Nick Bostrom's book is what got Stephen, Elon, and Bill all upset. I have to say it starts out merely interesting, but gets to a dark place pretty quickly. But he goes too far in the other direction: he readily accepts that superintelligences have all manner of cognitive skill, yet at the same time can't fathom how humans might not like the idea of having our brains' pleasure centers constantly poked, turning us all into smiling idiots (as I mentioned here: http://blog.serotoninsoftware.com/so-smart-its-stupid).

                            On 5/25/2015 2:01 PM, Fergal Byrne wrote:

                            Just one last idea on this. One thing that crops up every now and again in the Culture novels is the response of the Culture to Swarms, which are self-replicating viral machines or organisms. Once these things start consuming everything else, the AIs (mainly Ships and Hubs) respond by treating the swarms as a threat to the diversity of their Culture. They first try to negotiate, then they'll eradicate. If they can contain them, they'll do that.

                            They do this even though they can themselves withdraw from real spacetime. They don't have to worry about their own survival. They do this simply because life is more interesting when it includes all the rest of us.

                            Regards

                            Fergal Byrne


                            On Mon, May 25, 2015 at 5:04 PM, cogmission (David Ray) <[email protected]> wrote:

                                This was someone's response to Jeff's interview (see here: https://www.facebook.com/fareedzakaria/posts/10152703985901330)

                                Please read and comment if you feel the need...

                                Cheers,

                                David

                                --
                                With kind regards,
                                David Ray
                                Java Solutions Architect
                                Cortical.io
                                Sponsor of: HTM.java
                                [email protected]
                                http://cortical.io




