Hello everyone,

I am defining my PhD subject, which will revolve around neuro-symbolism. The 
goal would be to build an agent that learns as much knowledge as it can from 
its environment, and I found OpenCog and the AtomSpace hypergraph very 
interesting for the implementation.

My questions might be too vague, but maybe somebody can provide an answer 
or at least some guiding ideas.

How can we use the OpenCog system to store increasingly complex concepts of 
the environment an agent is exploring? Can the concepts be cast as Atoms? 
How would the relations between Atoms be constructed? Is there already a 
solution in the OpenCog ecosystem for storing concepts in a hierarchical or 
composed manner? (I am thinking, for instance, of the Options framework from 
reinforcement learning, where long sequences of actions are stored and 
guided by a meta-policy; could this framework be "easily" implemented with 
OpenCog?)
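To make the "concepts as atoms with hierarchical relations" part of the question concrete, here is a toy, self-contained Python sketch. It is NOT the real AtomSpace API (OpenCog expresses this with ConceptNode and InheritanceLink in Atomese); every class and method name below is hypothetical, purely to illustrate the kind of structure being asked about.

```python
# Toy sketch (hypothetical API, not OpenCog) of storing concepts as typed
# atoms connected by inheritance links, so a hierarchy can be queried.

class Atom:
    """A named, typed node in a toy knowledge graph."""
    def __init__(self, atom_type, name):
        self.atom_type = atom_type
        self.name = name

class ToyAtomSpace:
    """Stores unique atoms and directed inheritance relations between them."""
    def __init__(self):
        self.atoms = {}          # name -> Atom (concepts are deduplicated)
        self.inheritance = {}    # child name -> set of parent names

    def concept(self, name):
        # Reuse an existing atom so each concept is unique, as in a graph store.
        return self.atoms.setdefault(name, Atom("Concept", name))

    def inherit(self, child, parent):
        # Record that `child` is a specialisation of `parent`.
        self.inheritance.setdefault(child.name, set()).add(parent.name)

    def ancestors(self, name):
        # Walk inheritance links upward to recover the concept hierarchy.
        seen, stack = set(), [name]
        while stack:
            for parent in self.inheritance.get(stack.pop(), ()):
                if parent not in seen:
                    seen.add(parent)
                    stack.append(parent)
        return seen

space = ToyAtomSpace()
space.inherit(space.concept("cat"), space.concept("mammal"))
space.inherit(space.concept("mammal"), space.concept("animal"))
print(space.ancestors("cat"))  # -> {'mammal', 'animal'}
```

A meta-policy over stored action sequences (the Options idea) could in principle be layered on the same structure, with an option being a composed atom that links to its constituent action atoms, but that mapping is exactly what the question above is asking about.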

I hope my questions are reasonably clear. I am reading a lot of papers at 
the moment, and it is sometimes difficult to grasp all the concepts and be 
articulate when talking about them.

Thanks!

Aymeric

-- 
You received this message because you are subscribed to the Google Groups 
"opencog" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to [email protected].
To view this discussion on the web visit 
https://groups.google.com/d/msgid/opencog/f6de7a4a-d91c-4acb-9cf3-41dbdf27d6f2n%40googlegroups.com.