Hi Joel,

Good to hear from you! Yes, please please please do this!

On Tue, May 14, 2019 at 9:29 PM Joel Pitt <[email protected]> wrote:

> Hi all,
>
> I may be jumping into some OpenCog development, and I'm keen to know the
> current status of SpaceServer. I know the code in the opencog repo is
> deprecated,
>

As far as I know, it is NOT deprecated; on the contrary, it's fully
functional, complete, finished, done, works-as-designed, etc. (It is also
possible/likely that it can be improved, enhanced, optimized, etc., but
that doesn't seem to be needed right now.) Unit tests, however, are sorely missing.


> and there is a plan written by Linas from Oct 2018 here:
> https://wiki.opencog.org/w/SpaceServer
>

I think that this wiki page is still partly/mostly(?) correct, but I would
need to re-read and re-review it.

>
> I also see there is a semi-recent branch from Misgana here:
> https://github.com/misgeatgit/opencog/tree/time-ocmap
>

Misgana implemented the core functions ... twice. Once, and that was merged,
and then a second time, because he forgot that he'd done it the first time :-/

Here's what is missing, and it is kind of a "separate project": a "common
sense" API that can provide yes/no/maybe-fuzzy answers to questions like
"next to", "above", "below", "in front of".

Right now, the spacetime server stores 3D/4D locations, and that "works
perfectly"(TM) as far as I know, so that code is "done"(TM). What it
doesn't do is answer questions like "is A in front of B?"  Hacking
this up isn't really hard: figure out where you are, where A is, where B
is, do some 3D math, get an answer. Done.

There are two "tricky" bits to this.  The first is common sense: "in front
of" only makes sense if A and B are about the same size, and differ in
distance by inches or feet, not nanometers (unless you are talking about
nanometer-sized things).  For now, just ignore/hack around this with
common-sense programming hacks. We can deal with the metaphysics later on.
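To make the above concrete, here's a minimal Python sketch of the 3D math
plus the common-sense guard. The geometric criterion (A counts as "in front
of" B if it is closer to the viewer along roughly the same sight line) and
both thresholds are my own placeholder assumptions, not anything the
spacetime server defines:

```python
import math

def is_in_front_of(viewer, a, b, max_scale_ratio=10.0):
    """Rough 'is A in front of B, as seen from viewer?' test.

    Positions are (x, y, z) tuples.  Returns True, False, or None
    ("don't know").  The criterion here is a placeholder: A is in
    front of B if it is closer, and both lie along roughly the same
    line of sight.
    """
    def sub(p, q): return tuple(pi - qi for pi, qi in zip(p, q))
    def norm(v): return math.sqrt(sum(c * c for c in v))
    def dot(u, v): return sum(ui * vi for ui, vi in zip(u, v))

    va, vb = sub(a, viewer), sub(b, viewer)
    da, db = norm(va), norm(vb)
    if da == 0 or db == 0:
        return None  # degenerate: something sits at the viewer's position

    # Common-sense guard: if the distances differ by orders of magnitude,
    # the question doesn't really make sense -- answer "don't know".
    if max(da, db) / min(da, db) > max_scale_ratio:
        return None

    # Require the two sight lines to be within ~20 degrees of each other;
    # otherwise A and B aren't "lined up" at all.
    cos_angle = dot(va, vb) / (da * db)
    if cos_angle < math.cos(math.radians(20)):
        return None
    return da < db
```

The thresholds (10x scale ratio, 20 degrees) are exactly the kind of
common-sense hack mentioned above -- tune them later, or learn them.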

The second "tricky bit" is to implement the API correctly -- specifically,
as PredicateNodes. This is not hard either -- it is relatively
straightforward code.  The goal of this API is to allow ghost to evaluate
the following

   EvaluationLink
          PredicateNode "is-in-front-of ?"
          ListLink
                 ConceptNode "big red box"
                 HumanBodyNode "visible person A who is probably Joel"

and get back a true/false/don't-know answer. This should be easy, because
you can find the 3D locations of both from a "well known" Value attached
to those atoms.  Bingo, you're done. Misgana already wrote the code (it
is checked in, and it "works"(TM)) that fetches those Values from the
spacetime server.
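Here's a rough Python sketch of that evaluation step. Everything in it is
hypothetical scaffolding: the `POSITIONS` dict and `get_position()` stand
in for fetching the "well known" Value off an atom (i.e. the code Misgana
already wrote), and in the real system the function would be exposed as a
grounded predicate so the EvaluationLink above can call it:

```python
# Stand-in for "fetch the well-known location Value attached to an atom".
# In the real system this would call into the spacetime server.
POSITIONS = {}  # atom name -> (x, y, z), populated by the perception feed

def get_position(atom_name):
    return POSITIONS.get(atom_name)  # None == "never seen it"

def eval_is_in_front_of(atom_a, atom_b, viewer=(0.0, 0.0, 0.0)):
    """Return 'true', 'false', or 'dont-know', in the ghost style."""
    a, b = get_position(atom_a), get_position(atom_b)
    if a is None or b is None:
        return "dont-know"   # no location Value on one of the atoms
    # Crude criterion: A is in front of B if it is closer to the viewer.
    # (Squared distances are enough for a comparison.)
    da = sum((p - v) ** 2 for p, v in zip(a, viewer))
    db = sum((p - v) ** 2 for p, v in zip(b, viewer))
    return "true" if da < db else "false"
```

The important design point is the three-valued answer: "dont-know" when a
location Value is simply missing, rather than a bogus false.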

What is missing is a ROS component that grabs 3D positions from the Hanson
Robotics 3D-object-tracking stack and jams them into the space server. If
you know python and ROS, and can get the Hanson Robotics people to tell you
which ROS node/ROS pipeline they're using for vision processing, this is
easy. ROS is great for modular, pipeline processing like this.  (But you
don't have to use ROS. As long as you feed the spacetime server with 3D
info from something, somewhere, it will "just work"(TM).)
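The glue logic is tiny either way. Below is a sketch in plain Python; the
topic name, message fields, and `SpaceServerStub` API are all placeholders
of mine -- the real topic would come from the Hanson Robotics vision stack,
and the real insert call from the spacetime-server bindings:

```python
class SpaceServerStub:
    """Stand-in for the real spacetime server: records timestamped 3D points."""
    def __init__(self):
        self.records = []
    def add_point(self, atom_name, xyz, timestamp):
        self.records.append((atom_name, xyz, timestamp))

def on_tracked_object(space_server, msg):
    """Callback for one tracked-object message.

    In a real ROS node this would be registered with something like
        rospy.Subscriber("/hypothetical/tracked_objects", ..., callback)
    where the topic name is whatever the Hanson Robotics pipeline
    actually publishes.  Here msg is just a dict with the fields one
    would expect such a message to carry.
    """
    space_server.add_point(
        atom_name=msg["label"],              # e.g. "big red box"
        xyz=(msg["x"], msg["y"], msg["z"]),  # camera-frame position
        timestamp=msg["stamp"],              # seconds since epoch
    )
```

That's really all the ROS component has to do: subscribe, reshape, insert.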

I think that doing all this is really fairly easy and straightforward, and
does not require any whizzy or difficult theory, mad programming skillz,
or any kind of woo-woo. At least, not at the basic level of wiring it up
and being able to use ghost to talk about what the robot camera sees.  Of
course, later on, one could get fancy, but for now, we just need the basic
core infrastructure coded up.  Once that's done, the language-to-perception
link will "just work"(TM).  Done this way, I think it's very modular: after
the base is done, people can then go off and do whizzy neural-net-ish vision
processing, and ghost will provide the natural-language
question-answering "for free", with no extra work.

>
> Any other clues on people/links I should contact or be aware of?
>

Not as far as I know. I'm willing to walk you through the details. It's
really not hard, and shouldn't take all that long. It's even a good warmup
for getting back into opencog.

Well, talk to Ben, I guess, and well, maybe Vitaly Bogdanov & team, who
invented something completely different.  There is a back-story you need to
know.  Vitaly, please correct me where I'm wrong or misunderstand.

So the core idea of having common-sense PredicateNodes for the
prepositions (in-front/behind/above/bigger/smaller/etc.) evolved last
spring.  The goal of using PredicateNodes was to make them fit in with
EvaluationLinks, etc., fit in with OpenPsi, fit in with ghost, fit in
with the pattern matcher, the pattern miner, etc. -- fit in with everything
in the current opencog architecture.  Due to general mis-communication,
Vitaly et al. never actually heard about this design.  They implemented
something completely different and totally incompatible. Actually, I'm not
even sure any more about what it is they built.  It seems like a tragic
mistake, because the net result is a system that is incompatible with ...
everything else.  I really really want to get back to the core idea of
using PredicateNodes for everything, and hiding all the neural-net magic
under the PredicateNodes, and not somewhere else (certainly not in
python/C++/scheme code APIs).  How to rescue that effort, and get it to
work with ghost -- well, that is a different conversation. If we could just
have the basic PredicateNode API working, it would be future-proof and
extendable, and I think it's just not that hard to do.  So yes, please
please do it!

I wrote too much. Sorry.

-- Linas

-- 
cassette tapes - analog TV - film cameras - you

To view this discussion on the web visit 
https://groups.google.com/d/msgid/opencog/CAHrUA37xhFHqcJ2s1paGF3Vib5bfiObNs8wjhVOWhTFKC1m8ug%40mail.gmail.com.