Hello again wonderful people of opencog!

You are the friendliest community toward outsiders in the AI world. That 
is amazing. 

So I was watching a YouTube video in which a DARPA expert talked about 
two waves of AI technology. As I understood it, he said the first wave 
tried to use logic: encoding rules about the real world and letting the 
computer crunch the axioms. The second wave is, of course, machine 
learning, but it is incapable of extracting abstract rules from the 
models it learns. That is, while it can tell you that a picture shows a 
cat, it cannot tell you why it is a cat. He then said that in the future 
we will have systems that can do both, i.e. learn from data and abstract 
knowledge from the models they learn. 

And I started wondering whether this is not very similar to the 
AtomSpace. You represent concepts as graphs and use those graphs to 
perform logic. What I am not sure about is whether you build these 
graphs through learning algorithms. So that is my question: is this what 
you are trying to do? Are you trying to populate the AtomSpace through 
learning algorithms and then inform those algorithms from the contents 
of the AtomSpace?
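To make concrete what I mean by "concepts as graphs used to perform logic", here is a toy sketch in Python. This is only my own illustration of the idea, not the real AtomSpace API: concepts are nodes, inheritance relations are typed edges, and a trivial deduction walks the transitive closure of those edges.

```python
# Toy illustration of concepts-as-a-graph with simple deduction.
# NOT the actual AtomSpace API -- just the general idea.

# Store inheritance relations as (child, parent) name pairs.
links = set()

def inheritance(child, parent):
    """Assert that `child` is a kind of `parent`."""
    links.add((child, parent))

def inherits(child, parent):
    """True if `child` reaches `parent` via the transitive closure of links."""
    if (child, parent) in links:
        return True
    return any(inherits(mid, parent)
               for (c, mid) in links if c == child)

inheritance("cat", "mammal")
inheritance("mammal", "animal")

print(inherits("cat", "animal"))  # deduced transitively -> True
```

The point of the sketch: the answer "cat is an animal" was never stated directly; it falls out of chaining the stored links, which is (as I understand it) the kind of symbolic reasoning the first-wave systems did, and which the graph representation makes easy.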

Yours sincerely
Gaurav Gautam
