Really interesting discussion, thanks for kicking it off, Matt. The blog post is attacking a straw man. Finding the best theory of the neocortex will *clearly* bring us significantly closer to intelligent machines, because the neocortex is a major component of all intelligence. Jeff is not trying to replicate human intelligence; he's attempting to understand and emulate one vital part of it.
Jeff seems to have developed a "set" for his talks after so many iterations, where he includes a selection from his "full talk" based on the audience in front of him. His talk on Google TechTalks, for example, dealt extensively (at the end, I think) with this question: http://www.youtube.com/watch?v=4y43qwS8fl4 He states explicitly that his goal is absolutely not to build a human intelligence, or to pass the Turing Test, or any of those things. These things might or might not be technically possible. To do that you would need to create a machine, give it all the emotional machinery and human experience, and let it grow up as a real "being". By definition it would then be a person, whose material embodiment just happens to be artificial. In answer to the idea of "uploading your brain" to a computer and waking it up, he ends by saying "just have kids!"

One of the reasons we're all involved with NuPIC is because this is an achievable project, with limited goals. We're still exploring what you can achieve with 1mm squared of one layer of one region of neocortex. At some stage in the near future, we'll find out what we can do when we add hierarchy, multi-layer regions, feedback, motor function, and attention: the other principles of Jeff's theory. This is going to be just as hard as, or harder than, the work done to date, but we'll figure it out and see what we get. I'm very confident we'll get something recognisably "intelligent."

Human intelligence is just "intelligence" in the context of being human. Our mental life is, of course, suffused with our human experiences and emotions, but these are things with which our intelligence interacts. An intelligent entity (natural or artificial) needs some kind of system which directs it, makes decisions, compares "values", formulates "intentions" and so on. This system interacts with the neocortex or HTM hierarchy, and may be considered to be the "emotional centre" of the entity.
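To make that idea concrete, here is a toy sketch of a "directing" system wrapped around a learning model. To be clear, this is not NuPIC code: the `ToyModel` and `Driver` classes and everything about them are hypothetical, just an illustration of how a simple non-cortical loop could drive a model to learn and decide when a goal has been met.

```python
class ToyModel:
    """Stand-in for an HTM-like learner: tracks a running average.
    (Purely illustrative -- not the real NuPIC model API.)"""
    def __init__(self):
        self.estimate = 0.0
        self.count = 0

    def step(self, value, learn=True):
        """Return a prediction for `value`, optionally learning from it."""
        prediction = self.estimate
        if learn:
            self.count += 1
            self.estimate += (value - self.estimate) / self.count
        return prediction


class Driver:
    """The 'non-cortical' part: runs the model, tracks its error, and
    switches learning off once the goal (low error) is reached."""
    def __init__(self, model, tolerance=0.5):
        self.model = model
        self.tolerance = tolerance
        self.learning = True

    def run(self, stream):
        errors = []
        for value in stream:
            prediction = self.model.step(value, learn=self.learning)
            error = abs(value - prediction)
            errors.append(error)
            # The "emotional" decision: stop learning once performance
            # is good enough for the current goal.
            if self.learning and error < self.tolerance:
                self.learning = False
        return errors


driver = Driver(ToyModel())
errors = driver.run([10.0, 10.0, 10.0, 10.0])
```

The point of the sketch is only that the directing system is a different kind of component from the model itself, and can be much simpler.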
It'll have to be a different kind of system, but it might be quite simple. NuPIC's current "non-cortical" components, the bits that run the model and control its evolution, could be considered its "emotional centre" by this definition: driving the model to learn, extracting value, attending to goals, etc.

Regards,

Fergal Byrne

On Fri, Oct 4, 2013 at 3:02 AM, David Ragazzi <[email protected]> wrote:

> Hi all,
>
> Different ideas, different opinions, but the same objectives... All of us
> want to understand the brain, almost all of us want to build intelligent
> machines, and almost all of us have our own vision of how it works. So why
> don't we unify these ideas around a convergent point? Let me explain.
>
> I even had (and still have) my own theory about how the brain works before
> I knew of Jeff's theory (a friend pointed me to it when I mentioned my
> ideas). Some of the ideas proposed by Jeff I already had in mind (but many
> others I did not), and there are ideas that I believe HTM should address
> (one of these is deductive reasoning, not only probabilistic, as evidenced
> in recent neuroscience research). I believe the same thing happened with
> many of you.
>
> The question is: many of us have different insights, but for the most part
> we reach a common consensus. What happens is that many of the excellent
> ideas I read on this discussion list, advocated by smart minds here, end
> up being forgotten or obscured as new messages come in.
>
> What I mean is: how about if, beyond having open-source code, we also had
> an open "mind map" project where we could discuss and register these
> insights and how they could improve the theory proposed by Hawkins?
>
> From what I see, many of the approaches related to AI are more interested
> in getting hands-on than in first creating a plausible and well-thought-out
> architecture.
> For example, before Hawkins got hands-on, he preferred to release his book
> in order to get the attention of a wide audience, because he knows this
> task is not a "one-man task" (at least I believe that was his intention).
> He even said some things could be wrong. BUT some people (including the
> blogger mentioned) should bear in mind that "it is an initial framework".
>
> So coming back to my suggestion: if we already have an initial framework,
> the HTM theory, what we could have now is a common and editable "mind map"
> where the best ideas proposed by members here are put together and where
> we realize how one idea could be related to another, i.e. what the common
> point is.
>
> Well, it's just my opinion.
>
> Best, David
>
> On 3 October 2013 22:07, Garikoitz Lerma-Usabiaga <[email protected]> wrote:
>
>> Hi!
>> I agree with what is being said in the forum. I've been involved in some
>> philosophy of mind as well, and it seems to me that AI is going to help
>> neuroscience/philosophy of mind as much as the other way around.
>> We don't know what human intelligence, or consciousness, is, or how it
>> works. We know that we are good at recognizing faces, or at detecting the
>> physical properties of a falling object in milliseconds. We will have
>> machines that will do it better than us, but solving specific problems
>> with machines means nothing when you try to understand consciousness, for
>> example.
>>
>> I agree with Ian about the salience concept. If we could discover the
>> main adaptation mechanism of the brain and implement it in a machine,
>> give it the same perceptions and the same survival problems, and allow
>> enough time, wouldn't it develop similar mechanisms? I won't say that it
>> would feel an emotion, but couldn't it behave as if it had emotions? Or
>> take language, for example. A single human wouldn't be able, or have the
>> need, to create a language; you only need that to communicate with
>> others.
If you
>> have several machines interacting together, will they create a new
>> communication system?
>>
>> And lastly, I think the comments in this Nature special on the Turing
>> centenary go in the same direction. We are stuck. We don't know. So we
>> should keep trying :)
>>
>> http://www.idt.mdh.se/~gdc/work/TURING-SEMINAR/TURING-NATURE/Brain-Computer.pdf
>>
>> thanks!
>> Gari
>>
>> On Fri, Oct 4, 2013 at 12:44 AM, Astier Frank <[email protected]> wrote:
>>
>>> There was an interesting discussion in a philosophy class on Coursera
>>> around that topic recently.
>>>
>>> One of the issues is that, in the end, it's not so clear what
>>> computers/software "don't get", or why they shouldn't eventually "get
>>> it" (however precisely we try to define that). There is a position
>>> that, after all, human "intelligence" may not be much "more" than what
>>> we could/will eventually code into a computer. There are echoes of the
>>> Chinese Room, and of Turing's test, in this conversation. But it is not
>>> obvious what the precise thing is that humans have and that we couldn't
>>> eventually put in a computer. It is not clear that human brains are not
>>> entirely "mechanical", or could not be completely emulated in a
>>> computer.
>>>
>>> As the discussion on the Coursera forum progressed, several people
>>> argued for a "mind" or "soul" that could never be put in a computer,
>>> but they always failed to clearly define what that was in physical
>>> terms. They usually asserted the existence of "mind" or "soul" as
>>> something outside physics. On the other hand, several people argued for
>>> the complete physicality of the human brain, that physicality boding
>>> well for the eventual emulation of a complete brain/intelligence by a
>>> computer.
>>>
>>> Frank
>>>
>>> On Oct 4, 2013, at 12:29 AM, Hannu Kettinen <[email protected]> wrote:
>>>
>>> Well, this quote from the linked blog post says it all: "...*but it
>>> will not change the fundamental fact that computers just don't get
>>> it.*" Computers don't "get" or do much at all; it is all software and
>>> algorithms, and software and algorithms do what we instruct them to do.
>>>
>>> Anyhow, when I read these "computer AGIs need emotions to be able to
>>> function" arguments, I always refer to Data, the emotionless robotron
>>> from Star Trek.
>>>
>>> -H
>>>
>>> On 03 Oct 2013, at 23:59, Matthew Taylor <[email protected]> wrote:
>>>
>>> http://opaqueparcels.com/2013/09/30/the-brain-as-a-model-for-computers-why-jeff-hawkins-wont-lead-us-significantly-closer-intelligent-machines/
>>>
>>> Any comments? ;-)
>>>
>>> ---------
>>> Matt Taylor
>>> OS Community Flag-Bearer
>>> Numenta
>>>
>>> _______________________________________________
>>> nupic mailing list
>>> [email protected]
>>> http://lists.numenta.org/mailman/listinfo/nupic_lists.numenta.org

--
Fergal Byrne
http://www.examsupport.ie
Brenter IT
[email protected]
+353 83 4214179
Formerly of Adnet
[email protected]
http://www.adnet.ie
