Hi Luke,

On 10/6/20 11:10 AM, Luke Peterson wrote:
    Sorry if that criticism is a bit wooly, but I feel like some “big
    picture” explanation is missing, so I don’t have a framework to hang
    all the details on.

The big picture, IMHO, is that OpenCog can learn how to allocate resources, either by creating Hebbian links through mining (in a general sense) a record of its own behavior, or by creating inference control rules through mining a record of its own reasoning. It should eventually acquire the ability to rewrite its own code, starting with Atomese and then the layers underneath, but we're still far from that. However, we do have proto-experiments in creating Hebbian links and control rules.
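For concreteness, here is a hypothetical sketch of what such a mined association might look like in Atomese; the concept names and truth-value numbers are made up for illustration, not taken from any actual experiment:

```scheme
; Hypothetical example: if mining the behavior record reveals that
; attending to "grab-cup" tends to be followed by attending to "drink",
; the system could record that association as a Hebbian link, with a
; simple truth value (strength, confidence) reflecting how often and
; how reliably the pattern was observed.
(HebbianLink (stv 0.8 0.2)
   (ConceptNode "grab-cup")
   (ConceptNode "drink"))
```

Attention allocation can then use such links to spread importance between the atoms involved.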

Whether this can work ultimately depends on the environment. If it's too chaotic, it won't work; if it's only partially chaotic, with attractors that OpenCog can figure out, then it likely will. Biology has been able to learn to somewhat out-perform physics and chemistry, and humanity has learned to somewhat outperform biology (we're kinda at the top of the food chain these days), so clearly we don't live in a completely chaotic environment (even though it does seem to get more chaotic by the day).
    Here is my guide, although you and anyone else on the mailing list
    will likely find it tedious as it’s pretty elementary stuff explained
    in far too many words.

    https://luketpeterson.github.io/atomspace-bootstrap-guide/

It's really cool you wrote one. I haven't had time to read it yet; hopefully I will in the not-too-distant future.

    Where would you recommend I look next, to get a handle on the
    inferencing systems of OpenCog?

I would say have a look at the URE and PLN examples:

https://github.com/opencog/ure/tree/master/examples/ure
https://github.com/opencog/pln/tree/master/examples/pln

If you want to know more about the proto-experiments on learning inference control rules, you may read:

https://blog.singularitynet.io/introspective-reasoning-within-the-opencog-framework-1bc7e182827
https://blog.opencog.org/2017/10/14/inference-meta-learning-part-i/

Nil


    Anyway, thanks a lot, and thanks for your work on OpenCog to date.

    -Luke




--
Patrick: Are they laughing at us?
Sponge Bob: No, Patrick, they are laughing next to us.



--
You received this message because you are subscribed to the Google Groups "opencog" group. To unsubscribe from this group and stop receiving emails from it, send an email to [email protected]. To view this discussion on the web visit https://groups.google.com/d/msgid/opencog/707E9A4E-1C3E-47B2-BBC6-C7EB5D7FF9D5%40gmail.com.
