Thanks for the greeting, Linas!

Looking around at the NLP projects, a lot of them seem to lead back to you.
It seems you've tried a lot of things; I was even reading your developer
diary for the learn repo. It sounds very interesting!

I figured I would start small and see if I could get a simple QA text
chatbot session going, so I could get a hands-on feel for it and any
drawbacks.

I decided to go with Docker for a cleaner management setup. I was able to
get the opencog-dev, relex, and postgres containers going.
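
In case it helps anyone reproduce this, my setup is roughly the following
docker-compose sketch. The image names, tags, and the postgres password are
placeholders based on my understanding of the published opencog images, not
verified values:

```yaml
# docker-compose.yml sketch; image names and tags are assumptions.
version: "3"
services:
  opencog-dev:
    image: opencog/opencog-dev    # assumed image name
    tty: true                     # keep an interactive shell alive
    depends_on:
      - relex
      - postgres
  relex:
    image: opencog/relex          # assumed image name
  postgres:
    image: postgres:12
    environment:
      POSTGRES_PASSWORD: example  # placeholder credential
```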

In trying this:
https://github.com/opencog/opencog/tree/master/opencog/nlp/chatbot

I get this:

opencog@6ea4288ab4bf:/opencog/opencog$ guile -l ../opencog/nlp/chatbot/run-chatbot.scm
;;; note: auto-compilation is enabled, set GUILE_AUTO_COMPILE=0
;;;       or pass the --no-auto-compile argument to disable.
;;; compiling /opencog/opencog/../opencog/nlp/chatbot/run-chatbot.scm
;;; WARNING: compilation of /opencog/opencog/../opencog/nlp/chatbot/run-chatbot.scm failed:
;;; no code for module (opencog nlp oc)
Backtrace:
           9 (primitive-load "/opencog/opencog/../opencog/nlp/chatbo…")
In ice-9/eval.scm:
   721:20  8 (primitive-eval (use-modules (opencog nlp) (opencog …) …))
In ice-9/psyntax.scm:
  1230:36  7 (expand-top-sequence ((use-modules (opencog nlp) (…) …)) …)
  1222:19  6 (parse _ (("placeholder" placeholder)) ((top) #(# # …)) …)
   259:10  5 (parse _ (("placeholder" placeholder)) (()) _ c&e (eval) …)
In ice-9/boot-9.scm:
  3927:20  4 (process-use-modules _)
   222:29  3 (map1 (((opencog nlp)) ((opencog nlp oc)) ((opencog …))))
   222:17  2 (map1 (((opencog nlp oc)) ((opencog nlp chatbot))))
  3928:31  1 (_ ((opencog nlp oc)))
   3329:6  0 (resolve-interface (opencog nlp oc) #:select _ #:hide _ …)
ice-9/boot-9.scm:3329:6: In procedure resolve-interface:
no code for module (opencog nlp oc)
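
As far as I understand it, the "no code for module" error means Guile
couldn't locate the module file anywhere on its load path: a module name
like (opencog nlp oc) is mapped to the relative path opencog/nlp/oc.scm and
each directory in %load-path is searched for it. This little sketch mimics
that lookup so I can check which directory (if any) should be providing the
module; the candidate directories are guesses for a stock container
install, not verified:

```shell
# Mimic Guile's module-name -> file-path mapping, e.g.
# (opencog nlp oc) -> opencog/nlp/oc.scm
mod_to_path() {
    echo "$1" | tr ' ' '/' | sed 's|$|.scm|'
}

target=$(mod_to_path "opencog nlp oc")
echo "looking for: $target"

# Candidate load-path directories (assumptions, not verified):
for dir in /usr/local/share/opencog/scm /usr/local/share/guile/site; do
    [ -f "$dir/$target" ] && echo "found: $dir/$target"
done
```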


In trying to troubleshoot, I can't seem to find the file mentioned in
oc.scm:
(load "oc/nlp_oc_types.scm")

I found this git commit:
https://github.com/opencog/opencog/commit/7ca14fc6f0492a7404dd185a22e20d150563eae8#diff-e5884379d467ed82fad49bf1779cf2f5f25d53fb19a947b3fb7ad24dffe1f459

but I wasn't able to find the file under its original name either:
 (load "nlp/types/nlp_types.scm")

Any suggestions?

On Thu, Jan 13, 2022 at 10:41 PM Linas Vepstas <[email protected]>
wrote:

> Hi,
>
> Let me hop right to the question below:
>
> On Thu, Jan 13, 2022 at 9:01 PM Reach Me <[email protected]> wrote:
>
>> Hi Opencog community!
>>
>> So I recently discovered an interest in AGI and have started to research
>> various things. In my wanderings I came across Opencog and it's amazing.
>> The conceptual idea of atomspace is very powerful in its fundamental and
>> flexible nature.
>>
>> While I'm interested in AGI/ML, I have more of a systems background with
>> some programming experience. My current understanding of ML is about on
>> the level of "I know flour and water go into the process of making bread,
>> I'm sure baking is part of the process, but I have no idea of all the
>> ingredients and processes involved". I do understand the difference between
>> Symbolic and Neural Nets. I've been reading the wiki and starting to soak
>> up all the concepts I can. While the wiki is great there are some topics
>> that seem like they may no longer apply.
>>
>> I usually like to expand into new areas by picking some pet project and
>> working on it in milestones to grow my understanding. I thought a good
>> project might be a question-and-answer application on a knowledge base. In
>> trying to consider how it might be done in atomspace, I came across Lojban
>> topics and then later the paper about symbolic natural language grammar
>> induction (https://arxiv.org/abs/2005.12533). I didn't find more
>> information in the wiki and I wondered if there had been further
>> developments in the area that might be applicable.
>>
>> My pet project would be like:
>> 1) feed in text corpus
>> 2) some process to parse text into atomspace
>> 3) ability to query atomspace in natural language.
>> 4) receive a natural language text answer.
>>
>> Is this a doable task with opencog in its current state, or herculean for
>> a newcomer?
>>
>
> It's doable. In fact, it's been done at least four times, with opencog, in
> the last 10-15 years. Each distinct effort was a success .. or failure,
> depending on how you define success and failure.  I certainly got a lot out
> of it.
>
> Whether it's herculean or not depends on your abilities. It's not hard to
> whip up something minimal in not too much time, say in a month. What you'll
> have at the end of that is a toy, a curiosity, and the unanswered question
> of "what does it take to do better than this?"
>
>
>> What are some milestones and topics/resources I may need to investigate to
>> work further towards this pet project, if it's doable?
>>
>
> For language, you would need to understand link-grammar. You'd need to
> read the original papers on it, as a start.  Code that dumps link-grammar
> output into the atomspace is here: https://github.com/opencog/lg-atomese
> It works, it's maintained, and I think it's "bug-free", but if not I'll fix
> the bugs. What you do with things after that .. well, you're on your own.
>
> One of the more recent attempts to build a talking robot is the R2L
> system (relex2logic) but I don't really recommend it. You can study it to
> get a glimmer of how it worked, and learn from that, but I would discourage
> further development on it.  If you are into robots, then some early
> versions of the Hanson Robotics robot (named "Eva") can be found in
> assorted git repos, but it would take a lot of git archeology to recreate a
> working version.  But this could be fun. (The robot itself is a Blender
> model. It can "see" via your webcam, and actually track you!)
>
> There's assorted proofs-of-concept of things working at higher levels of
> abstraction, but converting those into something more than a
> proof-of-concept is .. well, the question would be, why?  There's a lot of
> things one could do; there's a lot of things that have been tried.  It's
> like climbing a mountain in the mists: a lot of possibilities, and a lot of
> confusion.
>
>
>> I'm currently working through the opencog hands on sections to get a feel
>> for things.
>>
>
> Opencog is a bag of parts. Some parts work well. Some are obsolete. Some
> code is good, some code is bad. Some ideas are good, some ideas didn't work
> very well. At any rate, it's not one coherent whole. Caveat Emptor.
>
> --linas
>
> --
> Patrick: Are they laughing at us?
> Sponge Bob: No, Patrick, they are laughing next to us.
>
>
> --
> You received this message because you are subscribed to the Google Groups
> "opencog" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to [email protected].
> To view this discussion on the web visit
> https://groups.google.com/d/msgid/opencog/CAHrUA34nwchJ39ijCstEJU%2BCp3b6jwO%3DFJ3Q5hZ8%3DAvr70D4CQ%40mail.gmail.com
> <https://groups.google.com/d/msgid/opencog/CAHrUA34nwchJ39ijCstEJU%2BCp3b6jwO%3DFJ3Q5hZ8%3DAvr70D4CQ%40mail.gmail.com?utm_medium=email&utm_source=footer>
> .
>
