Isn't the CLA the current implementation of HTM? I'm still too much of a noob to ask the right questions, so I'll keep trying. I need guidance on what to read, what to check, and more.
I've seen the talk and I think I more or less get the hierarchical idea. If I got it correctly (please correct me where I'm wrong), it should theoretically be possible to build a hierarchical model with words at the first level, sentences at the second level, paragraphs at the third level, and so on with more complex concepts. Or maybe the system would build its own interpretation of higher-level concepts at the second and third levels. This model could learn from sequences of inputs (words) encoded as SDRs, and each level would predict a next SDR containing different concepts: words at the first level, abstract concepts at the higher levels. The output could be taken from the prediction of the lowest level (as it predicts the next word as a function of the input plus the output of the higher levels).

What am I missing here? Is this somehow correct? If so, where should I look to implement a basic example of this model?

I also gather that layers 5 and 6 are not yet well understood, and that layer 4 (which would allow motor behaviour) might be coming to a NuPIC implementation soon (as Jeff believes he understands enough to attempt an implementation). Something I didn't clearly get from the talk is how this sensorimotor idea will actually interact with the outside (non-internal-HTM-representation) world.

Considering this HTM, and an SDR similar to the one from cortical.io (or using their API): what would I need to do to train this "chatbot"? (If it's even possible yet.)

Best,

On Fri, Nov 28, 2014 at 7:45 AM, Dennis Sedov <[email protected]> wrote:
> I think you have some misconceptions about the CLA. It's a sequence
> learning algorithm. HTM, on the other hand, is a broader concept. What
> you're looking for is sensorimotor behavior. I suggest you watch Jeff's
> talk on that subject. It's on YouTube; there is a link in the CLA/HTM
> Theory wiki.
>
> On Nov 27, 2014, at 10:13 PM, Leonardo M.
Rocha <[email protected]> > wrote: > > > Hi, > I'll try changing the question: > > Can I train the CLA to maintain an intelligent conversation? > Defining intelligence as being able to maintain the context and semantic > in the dialogue. > > I'm mainly interested in CLA, I will use it anyway for other toy projects. > > > >> The problems depend entirely on how you define "intelligent". >> >> If "intelligent" means a machine that can ask "When was the war of 1812?" >> and then answer "no, please try again" until the student answers "1812" >> then you should be able to build such a machine that works as you expect >> about 90% of the time. >> >> > That is why I named AIML, those kind of rule based answer (if-else > basically) are not only not intelligent, but a big burden to create and > maintain. > > > But if you want a tutor who asks "what was the relationship between the >> various Native American indian tribes during war?" and then if you get >> something wrong the machine will figure out what you likely don't know and >> tell you some things that will clear up misunderstandings. Then we might >> be 50 years away from that. If you want the machine to use words it knows >> you know and to make analogies that you can actually understand because it >> understands your life experiences (because it knows the student is >> Chinese.) then we might be 100 years away. >> >> Today we have machines that can access huge databases but true >> intelligent teaching requires the machine to contain a good, accurate model >> of the student's mind. This part, understanding the student is far past >> what anyone can do. >> >> But if you will settle for "flash cards in natural spoken language" then >> you can build it with current open source technology. >> >> What you should to as a next step is write down about a few dozen >> interactions. Scripts of what you would like to have between the student >> and tutor. Next rank those scripts based on the level of intelligence >> required. 
>> required.
>
> The intelligence required is much more than a rule-based program; that
> is why the idea of the CLA being able to learn and relate different
> concepts carrying semantic meaning is interesting.
>
> I need to be able to feed books or chat logs to the tutor, and the
> tutor should be able to answer the questions asked, even if those
> questions are not explicitly covered in the training set.
>
>> As with all tutors, you evaluate their performance by looking at
>> changes in student performance. You ask "did the student actually
>> learn?"
>
> Actually, that is the question to evaluate: how does one allow the CLA
> to automagically evaluate this? That is why I asked how a CLA can be
> trained with positive or negative feedback.
>
>> The tutor would really be a planning machine. It first has to figure
>> out where the student is, then look at where we want the student to
>> be, and then find a route from here to there that moves in
>> "right-sized" steps. Then the machine executes the plan while it
>> continuously evaluates progress and re-plans as required.
>>
>> The problem is going to be that, to do this, the tutor needs an
>> internal model of the student, and that is a hard problem.
>
> OK, so if we try to simplify this with the idea of a "chatbot that acts
> intelligent enough", where "intelligent" means something that is not
> if-else based (or similar) and can learn from interactions and books:
> can the CLA handle it?
>
> Best
>
> --
> Ing. Leonardo Manuel Rocha
> www.annotatit.com
> www.musicpaste.com

--
Ing. Leonardo Manuel Rocha
www.annotatit.com
www.musicpaste.com
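For what it's worth, the word-level prediction idea from the first message can be sketched as a toy in plain Python. This is purely illustrative and is not the CLA or the NuPIC API: SDRs are modeled as sets of active bit indices, similarity is bit overlap (the usual SDR comparison, as with cortical.io representations), and the "sequence memory" here is a made-up first-order transition table, not the real high-order Temporal Memory. All class names and the example bit patterns are invented for the sketch.

```python
from collections import defaultdict


def overlap(a, b):
    """Number of active bits two SDRs share (the standard SDR similarity)."""
    return len(set(a) & set(b))


class ToySequenceMemory:
    """Deliberately tiny, first-order sequence learner over SDRs.

    Hypothetical illustration only: it just remembers, for each input
    SDR, which SDR tended to follow it, and predicts by weighting those
    transitions by overlap with the current input.
    """

    def __init__(self):
        # frozenset(previous SDR) -> {frozenset(next SDR): count}
        self.transitions = defaultdict(lambda: defaultdict(int))
        self.prev = None

    def reset(self):
        """Forget temporal context between sequence presentations."""
        self.prev = None

    def learn(self, sdr):
        sdr = frozenset(sdr)
        if self.prev is not None:
            self.transitions[self.prev][sdr] += 1
        self.prev = sdr

    def predict(self, sdr):
        """Return the SDR most often seen after inputs overlapping `sdr`."""
        sdr = frozenset(sdr)
        best, best_score = None, 0
        for seen, followers in self.transitions.items():
            for nxt, count in followers.items():
                score = overlap(seen, sdr) * count
                if score > best_score:
                    best, best_score = nxt, score
        return best


# Fake word SDRs (in practice these would come from an encoder such as
# cortical.io's API; these bit indices are made up and unrealistically dense).
words = {
    "the": {1, 2, 3},
    "cat": {4, 5, 6},
    "sat": {7, 8, 9},
}

tm = ToySequenceMemory()
for _ in range(3):  # replay the training sequence a few times
    for w in ["the", "cat", "sat"]:
        tm.learn(words[w])
    tm.reset()

# After training, feeding "the" predicts the SDR for "cat".
prediction = tm.predict(words["the"])
```

A hierarchy in the sense discussed above would stack several of these, with each level receiving a pooled version of the level below and feeding its prediction back down; none of that machinery is shown here.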
