I apologize for being so hostile. As David and Matthew said, yes, I do
get rather defensive when others attack the hard work that has been done
in the Machine Learning field over the years, as I personally look up to
those researchers. I should also say that HTM is what first got me
interested in ML, so I have no hatred towards it. While I don't like many
of the biases exhibited here and in the public Gitter chat room, I
realize that I am part of the problem. Moving forward, then, I think it
would help for all of us to shed these biases and approach the pursuit of
AI from a more laid-back perspective, considering all approaches equally.

Therefore, as Matthew suggested, I think it would be in all of our best
interests if we each attempt to add pieces to the puzzle, so to speak. I
clearly have more experience with ML techniques distinct from HTM and
those residing in NuPIC, and I am currently putting together a project to
showcase a combination of these algorithms, which is the approach I
believe most likely to produce the AGI that we seek. Numenta and friends,
of course, have more experience with HTM and the neocortical aspects of
cognition, and are better placed to press forward and develop THE
cortical algorithm on which our cortices operate. It might be good, then,
for each of us to work on our respective pieces more or less separately,
and divide up the work that will go into creating AI.

Let me then lay out the work that I have cut out for myself:

   - Reinforcement learning, specifically using a single
   reward/punishment signal to produce internal and external actions (a
   toy sketch follows this list)
   - Error-driven learning (such as found in the cerebellum), to allow an
   AI to model its environment in a way that merges sensory and motor
   systems on a fast timescale
   - Episodic-like memory formation, such as found in the hippocampus, for
   the storage and retrieval of "memories" of past or current events
   - Working memory, as found in the PFC, for temporary storage and
   retrieval of relevant information required at some later point (useful for
   matching tasks)
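
To make the first item concrete, here is a rough Python sketch of what I
mean by learning from a single scalar reward signal: tabular Q-learning on
a made-up 5-state chain. The environment, sizes, and constants are all
illustrative assumptions on my part, not anything taken from NuPIC or any
existing system:

import random

N_STATES, N_ACTIONS = 5, 2          # actions: 0 = move left, 1 = move right
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1

Q = [[0.0] * N_ACTIONS for _ in range(N_STATES)]

def step(state, action):
    """Hypothetical dynamics: +1 reward only for reaching the last state."""
    nxt = max(0, min(N_STATES - 1, state + (1 if action else -1)))
    return nxt, (1.0 if nxt == N_STATES - 1 else 0.0)

state = 0
for _ in range(5000):
    # Epsilon-greedy action selection from the current value estimates.
    if random.random() < EPSILON:
        action = random.randrange(N_ACTIONS)
    else:
        action = max(range(N_ACTIONS), key=lambda a: Q[state][a])
    nxt, reward = step(state, action)
    # The core idea: a single scalar reward drives every value update.
    target = reward if nxt == N_STATES - 1 else reward + GAMMA * max(Q[nxt])
    Q[state][action] += ALPHA * (target - Q[state][action])
    state = 0 if nxt == N_STATES - 1 else nxt   # restart each "episode"

print([round(max(row), 2) for row in Q])   # values rise toward the reward

The fancier variants I actually have in mind keep that same loop; only the
representation of the value function changes.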

Following from that, it seems that Numenta is already in its preferred
spot:

   - Feature learning and encoding of diverse stimuli
   - Pooling throughout a hierarchy, spatially and temporally
   - Anomaly detection and prediction of future events or external states
   (see the sketch after this list)
   - Sensorimotor prediction, utilizing motor feedback signals for tracking
   the state of the AI
   - Invariance and generalization to similar stimuli, while retaining the
   ability to differentiate distinct stimuli
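
As a concrete illustration of the anomaly-detection item: my understanding
is that an HTM-style anomaly score boils down to comparing the columns
that actually became active against those the temporal memory predicted on
the previous step. Here is a set-based toy version; the column counts are
invented for the example, and real NuPIC computes this internally rather
than through a standalone function like this:

def anomaly_score(active_columns, predicted_columns):
    """Fraction of active columns that were not predicted:
    1.0 = fully surprising input, 0.0 = fully predicted."""
    if not active_columns:
        return 0.0
    unexpected = active_columns - predicted_columns
    return len(unexpected) / len(active_columns)

# Example: 40 active columns, 30 of which were predicted -> score of 0.25.
print(anomaly_score(set(range(40)), set(range(30))))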

I'm sure I missed a few things on Numenta's side, but to me this seems like
a good division of labor, especially given that my work builds heavily on
previous (and current) ML research, and should therefore move along a bit
more quickly. Hopefully within the next few years both platforms will be
developed enough that they can be easily combined and still function
effectively. However, given that HTM has had some issues with certain
cortex-based problems, such as visual recognition, I'd like to suggest some
well-designed algorithms to look at for ideas for future additions or
modifications. Many of them are also modeled on the cortex, so the
connections to HTM should be easy to see:

   - Convolutional Neural Networks, for rapidly learning generic "filters"
   (such as in the visual and auditory cortices)
   - Echo-State Networks, also proposed as models of auditory areas for
   the storage and recall of short or long sequences of auditory
   representations (a small sketch follows this list)
   - Recurrent Sparse Autoencoders, for reproducing the input provided and
   extracting higher-level, more abstract features (can also be made
   temporally-aware quite easily)
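
Since ESNs also came up earlier in this thread, here is a bare-bones
sketch of one in Python with numpy: a fixed random reservoir plus a linear
readout trained by ridge regression. The reservoir size, spectral radius,
washout length, and sine-wave task are all arbitrary choices for
illustration:

import numpy as np

rng = np.random.default_rng(0)
n_res = 200                                 # reservoir size (arbitrary)

# Fixed random input and recurrent weights; only the readout is trained.
W_in = rng.uniform(-0.5, 0.5, n_res)
W = rng.uniform(-0.5, 0.5, (n_res, n_res))
W *= 0.9 / np.abs(np.linalg.eigvals(W)).max()   # spectral radius < 1

def run_reservoir(inputs):
    """Drive the reservoir with a 1-D signal and collect its states."""
    x = np.zeros(n_res)
    states = []
    for u in inputs:
        x = np.tanh(W_in * u + W @ x)
        states.append(x.copy())
    return np.array(states)

# Toy task: predict a sine wave one step ahead.
signal = np.sin(np.arange(3000) * 0.05)
states = run_reservoir(signal[:-1])
X, Y = states[100:], signal[1:][100:]       # drop the initial transient

# Ridge-regression readout; this is the only learning that happens.
W_out = np.linalg.solve(X.T @ X + 1e-6 * np.eye(n_res), X.T @ Y)
print("train MSE:", np.mean((X @ W_out - Y) ** 2))

Only W_out is learned, which is why ESNs train so quickly; the reservoir
itself stays random and fixed.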

These are just a few examples, and I'm happy to give more as needed. I hope
that one day our algorithms can be combined to make something that we can
possibly call "Human", and thus enter the era of Artificial Intelligence.

Julian Samaroo
Manager of Information Technology
BluePrint Pathways, LLC
(516) 993-1150

On Tue, Jun 30, 2015 at 11:32 AM, Matthew Lohbihler <
[email protected]> wrote:

>  A fair summary. Thanks Matt.
>
>
> On 6/30/2015 12:06 PM, Matthew Taylor wrote:
>
> Encoders matter to Numenta, and those are extra-cortical structures.
> And you can't do sensorimotor work without extra-cortical structures
> either, so I would not say that they don't matter to us.
>
> I would say that we do not care so much about creating biologically
> accurate versions of extra-cortical structures.
> ---------
> Matt Taylor
> OS Community Flag-Bearer
> Numenta
>
>
> On Tue, Jun 30, 2015 at 7:37 AM, Matthew Lohbihler
> <[email protected]> wrote:
>
>  Actually, he doesn't. Jeff talks about cortex all the time. I have never
> seen any talk of, research into, or plans to develop any other structure.
> Don't get me wrong: cortex is a key thing. But let's not pretend that,
> publicly anyway, anything else matters much to Numenta at the moment.
>
>
> On 6/30/2015 10:20 AM, Dillon Bender wrote:
>
> Right, Jeff talks about this all the time. An isolated cortex knows
> virtually nothing and can cause nothing. It requires the sub-cortical
> structures like the basal ganglia for learning sensorimotor perception and
> control. That aspect will no doubt need to be included in HTM in some form.
> But like he also says all the time, there’s no reason it has to resemble
> natural, humanoid functions. All the cortical principles will be applied
> generally to any sensory domain, limited by our imagination. No
> circumvention of the biological algorithm is planned.
>
>
>
> - Dillon
>
>
>
> From: nupic [mailto:[email protected]] On Behalf Of Matthew
> Lohbihler
> Sent: Tuesday, June 30, 2015 9:03 AM
> To: Dillon Bender
> Subject: Re: Response to Jeff Hawkins interview.
>
>
>
> I tend to agree with John. I suspect that intelligence developed upon a
> neurological substrate without which the cortex can't function completely.
> Maybe, just maybe, MI can still be developed by circumventing the substrate,
> but we'll learn so much more by developing that substrate too.
>
> On 6/30/2015 9:49 AM, Dillon Bender wrote:
>
> <John> "And I think we'll have to work our way through the whole animal
> kingdom to get a humanoid robot working."
>
>
>
> If what you mean is that researchers should start with building simple
> organisms and then bolt on the more recently evolved systems, then I think
> this is false. The human brain contains the entirety of non-mammal to mammal
> evolution, so there is no reason to model non-mammals.
>
>
>
> I think you have missed out on Numenta's current research goals to work
> sensorimotor integration into CLA theory, because they realized before you that
> intelligence "needs to be embodied with sensory-motor loop at the core of
> its functionality." They have stated many times that the previous version of
> the theory modeled L2/3 of the cortex, and now adding L4 (and soon L5) will
> help close the sensorimotor loop.
>
>
>
> - Dillon
>
>
>
>
>
> -----Original Message-----
>
> From: nupic [mailto:[email protected]] On Behalf Of John
> Blackburn
>
> Sent: Tuesday, June 30, 2015 4:55 AM
>
> To: Dillon Bender
>
> Subject: Re: Response to Jeff Hawkins interview.
>
>
>
> Sorry to reopen this thread, I missed it! David, I wanted to comment on what
> you said on Facebook:
>
>
>
> 2.) For the first time in human history, we have an algorithm which models
> activity in the neocortex and performs with true intelligence exactly
> **how** the brain does it (it's the HOW that is truly important here). ...and
> by the way, this was also contributed by Jeff Hawkins and Numenta.
>
>
>
> "performs with true intelligence" is a pretty bold claim. If this is the
> case, how come there are no very convincing examples of HTM working with
> human-like intelligence? The Hotgym example is nice but it is really no
> better than what could be achieved with many existing neural networks. Echo
> state networks have been around for years and can make temporal predictions
> quite well. I recently presented to this forum some time-series data
> relating to a bridge, but HTM did not succeed in modelling it (ESNs worked
> much better). So outside of Hotgym, what really compelling demos do you
> have? I've been away for a while so maybe I missed something...
>
>
>
> I am also rather concerned that HTM needs swarming before it can model
> anything. Isn't that "cheating" in a way? It seems the HTM is rather fragile
> and needs a lot of help. The human brain does not have this luxury; it just
> has to cope with whatever data it gets.
>
>
>
> I'm also not convinced the neocortex is everything, as Jeff Hawkins seems to
> think. I seriously doubt the bulk of the brain is just scaffolding.
>
> I've been told birds have no neocortex but are capable of very intelligent
> behaviour, including constructing tools. Meanwhile I don't see any AI robot
> capable of even ant-like intelligence. (Ants are amazing!) Has anyone even
> constructed a robot based on HTM?
>
>
>
> Personally I don't think a disembodied computer can ever be intelligent
> (not even ant-like intelligence). IMO a robot (and it must BE a robot) needs
> to be embodied, with a sensory-motor loop at the core of its functionality,
> to start behaving like an animal. (Animals are the only things we know that
> show intelligence: clouds don't, volcanoes don't, computers don't.) And I
> think we'll have to work our way through the whole animal kingdom to get a
> humanoid robot working.
>
>
>
> John.
>
>
>
> On Mon, May 25, 2015 at 10:17 PM, cogmission (David Ray)
> <[email protected]> wrote:
>
> You're probably right :-)
>
>
>
> On Mon, May 25, 2015 at 4:16 PM, Matthew Lohbihler
> <[email protected]> wrote:
>
>
>
> Yes, I agree. Except for the part about checking up on us. As I
> mentioned before, indifference to us seems to me to be more the
> default than caring about us.
>
>
>
>
>
> On 5/25/2015 5:03 PM, cogmission (David Ray) wrote:
>
>
>
> Let me try and think this through. Only in the context of scarcity
> does the question of AGI **or** us come about. Where there is no
> scarcity, I think an AGI will just go about its business, peeking in
> from time to time to make sure we're doing ok. Why, in a universe
> where it can go anywhere it wants, produce infinite energy, and not
> be bound by our planet, would a super-super intelligent being even
> obsess over us, when it could merely go someplace else? I honestly
> think that is the way it will be. (and maybe is already!)
>
>
>
> On Mon, May 25, 2015 at 3:56 PM, Matthew Lohbihler
> <[email protected]> wrote:
>
>
>
> Forgive me David, but these are very loose definitions, and I've
> lost track of how they relate back to what an AGI will think about
> humanity. But to use your terms, hopefully accurately: what if the
> AGI satisfies its sentient need for "others" by creating other AGIs,
> ones that it can love and appreciate? I doubt humans would ever be
> up to such a task, unless 1) as pets, or 2) with cybernetic
> improvements.
>
>
>
> On 5/25/2015 4:37 PM, David Ray wrote:
>
>
>
> Observation is the phenomenon of distinction, in the domain of language.
> The universe consists of two things, content and context. Content
> depends on its boundaries in order to exist. It depends on what it
> is not for its being. Context is the space for things to be, though
> it is not quite space, because space is yet another thing. It has no
> boundaries and it cannot be arrived at by assembling all of its content.
>
> Ideas: love, hate, our sense of who we are, our histories, what we
> know to be true; all of those are content. Context is what allows
> that stuff to be. And all of it lives in language, without which
> there would be nothing. Maybe there would be a "drift", but we
> wouldn't know about it and we wouldn't be able to observe it.
>
>
>
> Sent from my iPhone
>
>
>
> On May 25, 2015, at 3:26 PM, Matthew Lohbihler <[email protected]> wrote:
>
>
>
> You lost me. You seem to be working with definitions of
> "observation" and "space for thinking" that I'm unaware of.
>
>
>
> On 5/25/2015 4:14 PM, David Ray wrote:
>
>
>
> Matthew L.,
>
>
>
> It isn't a thought. It is there before observation or thoughts or
> thinking. It actually is the space for thinking to occur: it is the
> context that allows for thought. We bring it to the table; it is
> there before we are, ontologically speaking. (It being this sense of
> integrity/wholeness.)
>
>
>
> Sent from my iPhone
>
>
>
> On May 25, 2015, at 2:59 PM, Matthew Lohbihler <[email protected]> wrote:
>
>
>
> Goodness. I thought we agreed that an AGI would not think like humans.
> And besides, "love" doesn't feel like something I want to depend on
> as obvious in a machine.
>
>
>
>
>
> On 5/25/2015 3:50 PM, David Ray wrote:
>
>
>
> If I may take this conversation in yet another direction.
>
>
>
> I think we've all been dancing around the question of what underlies
> the generation of morality, or how an AI will derive its sense of
> ethics. Of course, initially there will be those parameters that are
> programmed in, but eventually those will be gotten around.
>
>
>
> There has actually been a lot of research into this. Though it's not
> common knowledge, it is knowledge developed through the observation
> of millions of people.
>
>
>
> The universe and all beings along the gradient of sentience observe
> (albeit perhaps unconsciously) a sense of what I will call integrity
> or "wholeness". We'd like to think that mankind steered itself
> through the ages toward notions of gentility and societal
> sophistication, but it didn't really. The idea that a group or
> different groups devised a grand plan to have it turn out this way
> is totally preposterous.
>
>
>
> What is more likely is that there is a natural order to things, and
> that is motion toward what works for the whole. I can't prove any of
> this, but internally we all know when it's missing or when we are
> not in alignment with it. This ineffable sense is what love is: it's
> concern for the whole.
>
>
>
> So I say that any truly intelligent being, by virtue of existing in
> a substrate of integrity, will have this built in, and a
> superintelligent being will understand that ultimately the best
> chance for any single instance to survive is for the whole to
> survive.
>
>
>
> Yes, I know people will immediately want to cite all the
> aberrations, and of course there are aberrations, just as there are
> mutations, but those aberrations are reactions to how a person is
> shown love during their development.
>
>
>
> Like I said, I can't prove any of this, but eventually it will bear
> itself out and we will find it to be so.
>
>
>
> You can be skeptical if you want to, but ask yourself some
> questions. Why is it that we all know when it's missing
> (fairness/justice/integrity)? Why is it that we develop open-source
> software and free software? Why is it that, despite our greed and
> insecurity, society moves toward freedom and equality for everyone?
>
>
>
> One more question: why is it that the most advanced philosophical
> beliefs hold that where we are located, as a phenomenological event,
> is not in separate bodies?
>
>
>
> I know this kind of talk doesn't go over well in this crowd of
> concrete thinkers, but I know that there is some science somewhere
> that backs this up.
>
>
>
> Sent from my iPhone
>
>
>
> On May 25, 2015, at 2:12 PM, vlab <[email protected]> wrote:
>
>
>
> Small point: even if they did decide that our diverse intelligence
> is worth keeping around (having not already mapped it into silicon),
> why would they need all of us? Surely 10% of the population would
> give them enough "sample size" to get their diversity ration; heck,
> maybe 1/10 of 1% would be enough. They may find that we are wasting
> away the planet (oh, not maybe, we are), and the planet would be
> more efficient and they could have more energy without most of us.
> (Unless we become "copper tops" as in the Matrix movie.)
>
>
>
> On 5/25/2015 2:40 PM, Fergal Byrne wrote:
>
>
>
> Matthew,
>
>
>
> You touch upon the right point. Intelligence which can self-improve
>
> could only come about by having an appreciation for intelligence, so
>
> it's not going to be interested in destroying diverse sources of
>
> intelligence. We represent a crap kind of intelligence to such an AI
>
> in a certain sense, but one which it itself would rather communicate
>
> with than condemn its offspring to have to live like. If these
>
> things appear (which looks inevitable) and then they kill us, many
>
> of them will look back at us as a kind of "lost civilisation" which they'll
> struggle to reconstruct.
>
>
>
> The nice thing is that they'll always be able to rebuild us from the
>
> human genome. It's just a file of numbers after all.
>
>
>
> So, we have these huge threats to humanity. The AGI future is the
>
> only reversible one.
>
>
>
> Regards
>
> Fergal Byrne
>
>
>
> --
>
>
>
> Fergal Byrne, Brenter IT
>
>
>
> Author, Real Machine Intelligence with Clortex and NuPIC
> https://leanpub.com/realsmartmachines
>
>
>
> Speaking on Clortex and HTM/CLA at euroClojure Krakow, June 2014:
> http://euroclojure.com/2014/
>
> and at LambdaJam Chicago, July 2014: http://www.lambdajam.com
>
>
> http://inbits.com - Better Living through Thoughtful Technology
> http://ie.linkedin.com/in/fergbyrne/ -
> https://github.com/fergalbyrne
>
>
> e:[email protected] t:+353 83 4214179 Join the quest for
>
> Machine Intelligence at http://numenta.org Formerly of Adnet
> [email protected] http://www.adnet.ie
>
>
>
>
>
> On Mon, May 25, 2015 at 7:27 PM, Matthew Lohbihler
> <[email protected]> wrote:
>
>
>
> I think Jeff underplays a couple of points, the main one being the
> speed at which an AGI can learn. Yes, there is a natural limit to
> how much experimentation in the real world can be done in a given
> amount of time. But we humans are already going beyond this with,
> for example, protein-folding simulations, which speed up the
> discovery of new drugs and such by many orders of magnitude. Any
> sufficiently detailed simulation could massively narrow down the
> amount of real-world verification necessary, such that new
> discoveries happen more and more quickly, possibly at some point
> faster than we can even tell the AGI is making them. An intelligence
> explosion is not a remote possibility. The major risk here is what
> Eliezer Yudkowsky pointed out: not that the AGI is evil or
> something, but that it is indifferent to humanity. No one yet goes
> out of their way to make any form of AI care about us (because we
> don't yet know how). What if an AI created self-replicating nanobots
> just to prove a hypothesis?
>
>
>
> I think Nick Bostrom's book is what got Stephen, Elon, and Bill all
> upset. I have to say it starts out merely interesting, but gets to a
> dark place pretty quickly. But he goes too far in the other
> direction, easily accepting that superintelligences have all manner
> of cognitive skill, yet at the same time unable to fathom how humans
> might not like the idea of having our brains' pleasure centers
> constantly poked, turning us all into smiling idiots (as I mentioned
> here: http://blog.serotoninsoftware.com/so-smart-its-stupid).
>
>
>
>
>
>
>
> On 5/25/2015 2:01 PM, Fergal Byrne wrote:
>
>
>
> Just one last idea in this. One thing that crops up every now and
>
> again in the Culture novels is the response of the Culture to
>
> Swarms, which are self-replicating viral machines or organisms.
>
> Once these things start consuming everything else, the AIs (mainly
>
> Ships and Hubs) respond by treating the swarms as a threat to the
>
> diversity of their Culture. They first try to negotiate, then
>
> they'll eradicate. If they can contain them, they'll do that.
>
>
>
> They do this even though they can themselves withdraw from real
>
> spacetime. They don't have to worry about their own survival. They
>
> do this simply because life is more interesting when it includes all the
> rest of us.
>
>
>
> Regards
>
>
>
> Fergal Byrne
>
>
>
> --
>
>
>
> Fergal Byrne, Brenter IT
>
>
>
> Author, Real Machine Intelligence with Clortex and NuPIC
> https://leanpub.com/realsmartmachines
>
>
>
> Speaking on Clortex and HTM/CLA at euroClojure Krakow, June 2014:
> http://euroclojure.com/2014/
>
> and at LambdaJam Chicago, July 2014: http://www.lambdajam.com
>
>
> http://inbits.com - Better Living through Thoughtful Technology
> http://ie.linkedin.com/in/fergbyrne/ -
> https://github.com/fergalbyrne
>
>
> e:[email protected] t:+353 83 4214179 Join the quest for
>
> Machine Intelligence at http://numenta.org Formerly of Adnet
> [email protected] http://www.adnet.ie
>
>
>
>
>
> On Mon, May 25, 2015 at 5:04 PM, cogmission (David Ray)
> <[email protected]> wrote:
>
>
>
> This was someone's response to Jeff's interview (see here:
> https://www.facebook.com/fareedzakaria/posts/10152703985901330)
>
>
>
> Please read and comment if you feel the need...
>
>
>
> Cheers,
>
> David
>
>
>
> --
>
> With kind regards,
>
>
>
> David Ray
>
> Java Solutions Architect
>
>
>
> Cortical.io
>
> Sponsor of:  HTM.java
>
>
> [email protected]
> http://cortical.io