> If... well I can imagine that there are scenarios where having a DAS is 
> useful .. but we don't have working code for any of those. The problem of 
> building a technology like DAS for some non-existent users who might show up 
> in the future... well, you might discover that the DAS is built wrong. It 
> might have the wrong performance profile. It might be too clunky. It might 
> have lots of features you don't need, and be missing features you do.   
> Designing and building things that don't have a current use is a very risky 
> business.


The bio-Atomspace we are experimenting with now contains only a small
percentage of the biomedical knowledge we would like it to, because of
RAM and processing-speed limitations in current OpenCog.

Recent optimizations help, but don't come close to solving the problem.

The neural-symbolic grammar learning that Andres Suarez and I
prototyped last spring also couldn't viably be done using OpenCog, for
similar reasons (RAM and processing-speed limitations).  If we could
complete that work, it would be very useful in our current humanoid
robotics work w/ OpenCog (Awakening.Health).

The experimentation on pattern mining from inference histories for
automated inference control, which Nil was doing a year ago, was also
incredibly slow due to Atomspace limitations.  (Exploring this sort of
method is part of our motivation for the Minecraft experimentation.)

Ditto Shujing's work, eons ago, on pattern mining of agent-behavior
data from a game world.

It is possibly true that for each such case, one could design a
specialized architecture to support just that case, working around the
need for a general-purpose DAS in that particular case....

Shujing did in fact build her own specialized quasi-DAS just for her
sort of pattern-mining applications, though it's now deprecated...

I understand we could proceed by writing
fully-working-except-for-scalability-issues code for all the above
applications I've alluded to (and more), and then analyzing all this
code and its specific scalability issues, and using this analysis to
drive design of an improved system...

Instead we are indeed aiming to proceed in a faster but, in some
senses, riskier way: by creating a design that appears to us capable of
scalably carrying out applications such as the above (and such as the
specific use-cases in the DAS document referenced...).


> Without spending a lot of time and energy examining that document, I can't 
> say. That figure seems overly complexticated, to me.  I know that the name 
> "Elon Musk" is very polarizing, but I do like one of his quotes:  "The best 
> part is no part. ... Undesigning is the best thing. Delete it." He's 
> absolutely right. He's not even the first one to say that. Einstein beat him 
> by a century: "Make it as simple as possible, but no simpler".   My knee-jerk 
> reaction is that Figure 6 is violating these maxims.  Without spending a lot 
> of time and effort to comprehend the intent of that diagram, I can't really 
> tell.  (People have remarked that Musk's latest rocket engines are the most 
> complex engines ever built.)


With all due respect (and I really do respect your technical intuition
greatly even though I don't always agree w/ it), I think Alexey and
Vitaly and Cassio and I are already well aware of Occam's Razor in its
various forms....  As you point out the practical application of this
sort of maxim is a somewhat subtler matter than the generalized
articulation ;p

>> -- a piece of the distributed persistent Atomspace, say in RocksDB
>>
>> Then of course the AI process on machine X can query the local
>> Atomspace specifically as it wishes.   If it wants to query the
>> persistent backing store, it can query RocksDB and it will get faster
>> response if the answer happens to be stored in the fragment of RocksDB
>> that is living on the same machine as it.   There will be some bias
>> toward the portion of the distributed RocksDB on machine X having
>> Atoms that relate to the Atoms in the local Atomspace on machine X ...
>> but this depends on the inner workings of RocksDB.   Or at least
>> that's how I'd think it would work, this is speculative...
>
>
> This is not quite how it works, but, roughly speaking, I've got prototypes 
> that do this today.

What are the main differences between what I described above and what
your prototypes do?
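For concreteness, the locality scheme I was speculating about above could be
sketched roughly as follows. This is a purely illustrative toy, not the real
DAS or RocksDB API -- all class and method names here are hypothetical, and
plain dicts stand in for the per-machine persistent shards:

```python
# Toy sketch of the locality idea: an AI process on machine X queries its
# in-RAM Atomspace first, then the shard of the persistent store living on
# the same machine, and only then the shards on other machines.
# Hypothetical names throughout; dicts stand in for RocksDB instances.

class ShardedAtomStore:
    """Persistent backing store split into per-machine shards."""

    def __init__(self, local_shard, remote_shards):
        self.local_shard = local_shard      # fast: same machine as the caller
        self.remote_shards = remote_shards  # slower: requires a network hop

    def get(self, atom_key):
        # Faster response if the answer happens to be in the local shard.
        if atom_key in self.local_shard:
            return self.local_shard[atom_key], "local"
        # Otherwise fall back to the shards on other machines.
        for shard in self.remote_shards:
            if atom_key in shard:
                return shard[atom_key], "remote"
        return None, "miss"


class AIProcess:
    """AI process on machine X: an in-RAM Atomspace plus a backing store."""

    def __init__(self, atomspace, store):
        self.atomspace = atomspace  # local in-RAM Atomspace (a dict here)
        self.store = store

    def query(self, atom_key):
        if atom_key in self.atomspace:
            return self.atomspace[atom_key], "ram"
        return self.store.get(atom_key)
```

The bias I mentioned -- the local shard tending to hold Atoms related to the
local in-RAM Atomspace -- would show up here as a high fraction of `"local"`
rather than `"remote"` results, which is exactly the kind of thing one would
want to measure in a real prototype.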

> Please be careful with benchmarking. The Christmas-before-last, Mike Duncan 
> came to me and asked that I look into making AGI-Bio run faster. There was 
> kind of this sense that the atomspace was too slow, and needed tuning. I 
> decided to give him this Christmas present ... By restructuring the agi-bio 
> code, I got it to run 20x faster. That's correct! Twenty times faster!  This 
> did not need any changes to the atomspace!  Thus, blaming the atomspace for 
> bad performance was inappropriate. Don't fall into the same trap with "agents 
> playing Minecraft"!  Careful making assumptions.

Your help on the bio-AI project is much appreciated!  However, I would
note that Mike Duncan doesn't have the facility w/ tuning and fixing
Atomspace/OpenCog code that Vitaly and some of his St. Petersburg
colleagues do....  I would venture that the apparent brick walls
Vitaly et al. run up against are less likely to have relatively quick
fixes.  But I'd be happy to be refuted by reality on this -- we will
be using Original OpenCog in Awakening.Health for quite some time, so
having it work better and better is definitely valuable to us...

> Another thing that careful analysis showed was that the agi-bio code was 
> performing certain types of pattern matcher queries that could benefit from 
> caching. That caching mechanism *was* added to the atomspace. So this 
> provides a hands-on, real-world example of where having an actual, running, 
> testable, probeable, measurable block of software allows performance tuning 
> and design changes.  Without it, no one would ever have guessed that this 
> particular optimization was possible or interesting or useful. Having 
> something measurable is key.  (The caching bumped performance by another 
> 30%.)

Hmm, at a high level we did guess a pattern cache was going to be
useful -- and Senna implemented one some time ago.   However, I have
not compared his implementation/design/concept with yours, and it
would not shock me if your instantiation of the broad concept was more
effective, given your deep familiarity w/ all the code and systems
involved ;)
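The broad concept at issue -- caching pattern-query results and invalidating
the cache when the Atomspace changes -- can be sketched generically like
this. Again a hypothetical illustration, not Senna's implementation nor the
actual Atomspace cache (which hooks into the pattern matcher internals):

```python
# Generic sketch of pattern-query result caching, of the broad sort
# discussed above. Hypothetical code; a set of strings stands in for the
# Atomspace and a hashable key stands in for the query pattern.

class CachedQueryEngine:
    """Memoizes query results; invalidates the cache on any write."""

    def __init__(self, atoms):
        self.atoms = set(atoms)
        self._cache = {}
        self.raw_queries = 0  # counts cache misses, for measurement

    def query(self, pattern_key, pred):
        # pattern_key is a hashable stand-in for the query pattern;
        # pred is the actual (expensive, in real life) matching function.
        if pattern_key not in self._cache:
            self.raw_queries += 1
            self._cache[pattern_key] = sorted(a for a in self.atoms if pred(a))
        return self._cache[pattern_key]

    def add_atom(self, atom):
        # Any write may change query answers, so drop the whole cache.
        # (A real implementation would invalidate far more selectively.)
        self.atoms.add(atom)
        self._cache.clear()
```

The interesting design questions -- which this toy glosses over -- are how
selectively to invalidate on writes, and which query shapes recur often
enough to be worth caching; that's where measurement on real workloads like
the agi-bio code earns its keep.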

ben
