Hi,

On Sun, Jan 30, 2022 at 8:29 PM Reach Me <[email protected]> wrote:
> I figured I would start simple to see if I could get a simple QA text
> chatbot session going so I could get a hands-on feel for it and any
> drawbacks.
>
> I decided to go with docker for a cleaner management setup.

You should coordinate with Mark Wigzell, who is cleaning up the docker container infrastructure. He's been focusing on the Eva animation containers. This will give you a pretty face to talk to!

> I was able to get the opencog-dev, relex and postgres containers going.

Where does it say that you should use postgres? There are many files and web pages that might say this; these need to be changed to say RocksDB instead. The RocksDB backend is simpler to set up, and several times faster.

> In trying this:
> https://github.com/opencog/opencog/tree/master/opencog/nlp/chatbot

This code is probably bit-rotted; no one has booted it for 5 years or so. Getting anything reasonable to work will require deep dives and heavy lifting.

> In trying to troubleshoot, I can't seem to find this file mentioned in
> oc.scm:
> (load "oc/nlp_oc_types.scm")

This file is autogenerated from the code here:
https://github.com/opencog/opencog/tree/master/opencog/nlp/oc-types
and should be present in your build directory. After installing, it should be installed into the appropriate location (for me, it's /usr/local/share/guile/site/3.0/opencog/nlp/oc/nlp_oc_types.scm).

> but I wasn't able to find the original filename either:
> (load "nlp/types/nlp_types.scm")

You need to install https://github.com/opencog/lg-atomese
If there are instructions that fail to mention this, they should be updated. This is an example of what I mean by bit-rot: instructions need to be updated (e.g. postgres -> rocksdb, install the prerequisites), and there's surely more. Not too many, I hope.

More challenging, for question answering, will be building up a dataset of "knowledge", and plumbing the R2L architecture to figure out how to do this.
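For what it's worth, switching to the RocksDB backend is a few lines of guile. A minimal sketch, assuming the StorageNode API from the atomspace-rocks package (the directory path is arbitrary; adjust to taste):

```scheme
(use-modules (opencog) (opencog persist) (opencog persist-rocks))

; Open a RocksDB-backed store; the directory is created if missing.
(define sto (RocksStorageNode "rocks:///tmp/demo-atomspace"))
(cog-open sto)

; Store an atom to disk, then close the connection.
(store-atom (Concept "example"))
(cog-close sto)
```

No database server to configure, no tables to create; that's the whole setup.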
R2L (relex2logic) is built on a rather old and conventional idea: that logical expressions can be extracted from English-language sentences. This includes the extraction of clear-cut questions from text, and the assumption that clear-cut answers can be found in other text. If you believe in these kinds of ideas, then, yes, R2L does a fairly decent job of extracting knowledge from reasonably well-structured English sentences. It's a large collection of hand-written rules that deals fairly well with many common situations.

I think it can also extract questions, but may have limited coverage for that; I don't know. I don't recall whether anyone wired up the part that uses these abstract question representations to find answers in the previously-input text. I suspect that this last bit of hookup is shaky or broken. It will require some heavy lifting to fix any bit-rot in the R2L infrastructure, and also to make sure everything is wired up correctly, again. There may be missing pieces; I don't know. Other people created this system; I'm familiar with its general outlines, but I never saw a working demo of the code, so I don't know what it can do. (It has been demoed in public; just not to me.)

I slid in the comment "if you believe" above because of a few fundamental issues with this kind of architecture. First, any collection of hand-written rules relating to natural language is necessarily incomplete and buggy; to do better, you need a way of learning rules automatically. The other issue is that humans are not logical, nor is natural language.
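To give a flavor of the idea (a hypothetical, much-simplified illustration; the actual R2L rule output is more elaborate than this), a sentence like "John eats apples" gets reduced to a logic-style atom structure along these lines:

```scheme
; Illustrative sketch only -- not actual R2L output.
; "John eats apples" as a predicate-argument structure:
(EvaluationLink
    (PredicateNode "eats")
    (ListLink
        (ConceptNode "John")
        (ConceptNode "apples")))
```

A question like "What does John eat?" would then become a similar structure with a variable in the object position, and answering it reduces to a pattern-match against previously-stored knowledge.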
See, for example, "pragmatics": https://en.wikipedia.org/wiki/Pragmatics
or, if you like youtube:
https://www.youtube.com/watch?v=9I8mQpFAJ50&list=PLMcpZ5z_qC8Jn6_TWiyHDNQDoBA8vZHgv&index=3
"Untranslatability and Japanese Pragmatics" - Xidnaf

--linas

On Thu, Jan 13, 2022 at 10:41 PM Linas Vepstas <[email protected]> wrote:

>> Hi,
>>
>> Let me hop right to the question below:
>>
>> On Thu, Jan 13, 2022 at 9:01 PM Reach Me <[email protected]> wrote:
>>
>>> Hi Opencog community!
>>>
>>> So I recently discovered an interest in AGI and have started to research various things. In my wanderings I came across Opencog, and it's amazing. The conceptual idea of atomspace is very powerful in its fundamental and flexible nature.
>>>
>>> While I'm interested in AGI/ML, I have more of a systems background with some programming experience. My current understanding of ML is about on the level of "I know flour and water go into the process of making bread, and I'm sure baking is part of the process, but I have no idea of all the ingredients and processes involved". I do understand the difference between symbolic AI and neural nets. I've been reading the wiki and starting to soak up all the concepts I can. While the wiki is great, there are some topics that seem like they may no longer apply.
>>>
>>> I usually like to expand into new areas by picking some pet project and working on it in milestones to grow my understanding. I thought a good project might be a question-and-answer application on a knowledgebase. In trying to consider how it might be done in atomspace, I came across Lojban topics and then later the paper about symbolic natural language grammar induction (https://arxiv.org/abs/2005.12533). I didn't find more information in the wiki, and I wondered if there had been further developments in the area that might be applicable.
>>> My pet project would be like:
>>> 1) feed in a text corpus
>>> 2) some process to parse text into atomspace
>>> 3) ability to query atomspace in natural language
>>> 4) receive a natural-language text answer
>>>
>>> Is this a doable task with opencog in its current state, or herculean for a newcomer?

>> It's doable. In fact, it's been done at least four times, with opencog, in the last 10-15 years. Each distinct effort was a success .. or failure, depending on how you define success and failure. I certainly got a lot out of it.
>>
>> Whether it's herculean or not depends on your abilities. It's not hard to whip up something minimal in not too much time, say in a month. What you'll have at the end of that is a toy, a curiosity, and the unanswered question of "what does it take to do better than this?"

>>> What are some milestones and topics/resources I may need to investigate to work further towards this pet project, if it's doable?

>> For language, you would need to understand link-grammar. You'd need to read the original papers on it, as a start. Code that dumps link-grammar output into the atomspace is here: https://github.com/opencog/lg-atomese
>> It works, it's maintained, and I think it's "bug-free", but if not, I'll fix the bugs. What you do with things after that .. well, you're on your own.
>>
>> One of the more recent attempts to build a talking robot is the R2L system (relex2logic), but I don't really recommend it. You can study it to get a glimmer of how it worked, and learn from that, but I would discourage further development on it. If you are into robots, then some early versions of the Hanson Robotics robot (named "Eva") can be found in assorted git repos, but it would take a lot of git archeology to recreate a working version. But this could be fun. (The robot itself is a Blender model. It can "see" via your webcam, and actually track you!)
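[Editorial note: dumping a link-grammar parse into the atomspace with lg-atomese looks roughly like the following, a minimal sketch assuming the LgParseLink API described in the lg-atomese repository; "en" names the English dictionary, and the NumberNode is the number of linkages requested.]

```scheme
(use-modules (opencog) (opencog exec)
             (opencog nlp) (opencog nlp lg-parse))

; Parse one sentence with the English dictionary, requesting a single
; linkage; the resulting parse atoms land in the AtomSpace.
(cog-execute!
    (LgParseLink
        (PhraseNode "this is a test")
        (LgDictNode "en")
        (NumberNode 1)))
```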
>> There's assorted proofs-of-concept of things working at higher levels of abstraction, but converting those into something more than a proof-of-concept is .. well, the question would be, why? There are a lot of things one could do; there are a lot of things that have been tried. It's like climbing a mountain in the mists: a lot of possibilities, and a lot of confusion.

>>> I'm currently working through the opencog hands-on sections to get a feel for things.

>> Opencog is a bag of parts. Some parts work well. Some are obsolete. Some code is good, some code is bad. Some ideas are good, some ideas didn't work very well. At any rate, it's not one coherent whole. Caveat emptor.
>>
>> --linas
>>
>> --
>> Patrick: Are they laughing at us?
>> Sponge Bob: No, Patrick, they are laughing next to us.

--
Patrick: Are they laughing at us?
Sponge Bob: No, Patrick, they are laughing next to us.
--
You received this message because you are subscribed to the Google Groups "opencog" group.
To unsubscribe from this group and stop receiving emails from it, send an email to [email protected].
To view this discussion on the web visit https://groups.google.com/d/msgid/opencog/CAHrUA36wSQHyvz6GMhPT6bH81V6T7ZcHuek9B%3Dr6DNtFMXSLVw%40mail.gmail.com.
