Hi,
I'll try changing the question:

Can I train the CLA to maintain an intelligent conversation?
Here I define intelligence as the ability to maintain the context and
semantics of the dialogue.

I'm mainly interested in the CLA; I will use it anyway for other toy projects.



> The problems depend entirely on how you define "intelligent".
>
> If "intelligent" means a machine that can ask "When was the war of 1812?"
> and then answer "no, please try again" until the student answers "1812"
> then you should be able to build such a machine that works as you expect
> about 90% of the time.
>
>
That is why I mentioned AIML: those kinds of rule-based answers (if-else,
basically) are not only unintelligent, but a big burden to create and
maintain.
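To make the burden concrete, here is a minimal sketch of an AIML-style rule-based responder (the rules and the fallback message are hypothetical, and this is plain Python, not real AIML): every pattern has to be authored by hand, and anything unanticipated falls straight through.

```python
# Toy AIML-style responder: a hand-written pattern -> answer table.
# Every new question the tutor should handle means another hand-written rule.
RULES = {
    "when was the war of 1812": "1812",
    "hello": "Hi there!",
}

def respond(utterance: str) -> str:
    # Exact-match lookup after crude normalization; no generalization at all.
    key = utterance.lower().strip("?!. ")
    return RULES.get(key, "Sorry, I don't understand.")

print(respond("When was the War of 1812?"))  # matches a rule -> "1812"
print(respond("Tell me about 1812"))         # no rule -> fallback
```

Even trivial rephrasings miss the table, which is exactly why maintaining such rule sets does not scale.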


> But if you want a tutor who asks "what was the relationship between the
> various Native American indian tribes during war?" and then if you get
> something wrong the machine will figure out what you likely don't know and
> tell you some things that will clear up misunderstandings.   Then we might
> be 50 years away from that.  If you want the machine to use words it knows
> you know and to make analogies that you can actually understand because it
> understands your life experiences  (because it knows the student is
> Chinese.) then we might be 100 years away.
>
> Today we have machines that can access huge databases but true intelligent
> teaching requires the machine to contain a good, accurate model of the
> student's mind.  This part, understanding the student, is far beyond what
> anyone can do.
>
> But if you will settle for "flash cards in natural spoken language" then
> you can build it with current open source technology.
>
> What you should do as a next step is write down a few dozen
> interactions.  Scripts of what you would like to have between the student
> and tutor.   Next rank those scripts based on the level of intelligence
> required.
>
>

The intelligence required is much more than a rule-based program can
provide; that is why the idea of the CLA being able to learn and relate
different concepts carrying semantic meaning is interesting.

I need to be able to feed books or chat logs to the tutor, and have the
tutor answer questions even if those questions do not appear explicitly in
the training set.
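The CLA itself learns sequences of sparse distributed representations, which is very different from anything a few lines can show. Still, as a crude stand-in to make the goal concrete, here is a word-level bigram model (my own toy, not the CLA or any NuPIC API) that is trained on raw text and can continue a prompt that never appeared verbatim in the training data:

```python
from collections import defaultdict
import random

class BigramModel:
    """Crude word-level sequence learner. A toy stand-in, NOT the CLA."""

    def __init__(self):
        # Maps each word to the list of words observed to follow it.
        self.next_words = defaultdict(list)

    def train(self, text: str) -> None:
        # Learn word-to-word transitions from raw text (a book, a chat log).
        words = text.lower().split()
        for a, b in zip(words, words[1:]):
            self.next_words[a].append(b)

    def continue_from(self, word: str, length: int = 5) -> list:
        # Generate a continuation one transition at a time.
        out = [word.lower()]
        for _ in range(length):
            options = self.next_words.get(out[-1])
            if not options:
                break
            out.append(random.choice(options))
        return out

model = BigramModel()
model.train("the war of 1812 began in 1812 and the war ended in 1815")
print(model.continue_from("war"))
```

A real answer to "even if the question is not in the training set" needs far more than adjacent-word statistics, of course; the point is only to illustrate the feed-text-then-query workflow being asked about.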



> As with all tutors, you evaluate their performance by looking at changes
> in student performance.   You ask "did the student actually learn?"
>
>
Actually, that is the question to evaluate: how does one allow the CLA to
evaluate this automagically?
That is why I asked how a CLA can be trained with positive or negative
feedback.
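By "positive or negative feedback" I mean something like the following sketch, where a scalar reward shapes which answer gets chosen next time (this is a generic reinforcement illustration of my own, not the CLA's actual learning rule):

```python
class FeedbackResponder:
    """Toy responder whose answer preferences are shaped by +/- feedback."""

    def __init__(self, candidates):
        # Start with equal preference for every candidate answer.
        self.scores = {c: 0.0 for c in candidates}

    def respond(self) -> str:
        # Pick the currently highest-scored answer (ties broken arbitrarily).
        return max(self.scores, key=self.scores.get)

    def feedback(self, answer: str, reward: float) -> None:
        # Positive reward reinforces an answer; negative reward suppresses it.
        self.scores[answer] += reward

bot = FeedbackResponder(["1812", "1815", "I don't know"])
bot.feedback("1815", -1.0)   # marked wrong
bot.feedback("1812", +1.0)   # marked right
print(bot.respond())         # now prefers "1812"
```

The open question is whether the CLA can close this loop itself, i.e. derive the reward signal from the student's performance rather than from an explicit grader.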



> The tutor would really be a planning machine.  It first has to figure out
> where the student is.  Then look where we want the student to be and then
> find a route from here to there that moves in "right sized" steps.  Then
> the machine executes the plan while it continuously evaluates progress and
> re-plans as required.
>
> The problem is going to be that to do this the tutor needs an internal
> model of the student, that is a hard problem
>
>
OK, so let's simplify this to the idea of a "chatbot that acts intelligently
enough", where "intelligent" means something that is not if-else based (or
similar) and can learn from interactions and books. Can the CLA handle it?

Best

-- 
Ing. Leonardo Manuel Rocha
www.annotatit.com
www.musicpaste.com
