On a different but indirectly related note, this caching system for
graph queries looks interesting:

https://openproceedings.org/2017/conf/edbt/paper-119.pdf

On Thu, Jul 23, 2020 at 8:15 PM Ben Goertzel <[email protected]> wrote:
>
> Matt,
>
> So regarding these requirements,
>
> > 1. Some cluster node will "own" each atom by assignment via some simple 
> > division of the hash address space.
> > 2. Each cluster node will also contain replicas of many other atoms, not 
> > only for disaster recovery purposes, but also because mind agents on that 
> > node will need in local memory many atoms "owned" by other nodes. Once 
> > we've obtained them from their owners, we might as well keep them 
> > around until we need to recover memory space for other, more urgently 
> > needed "borrowed" atoms.
> > 3. A mind agent on a given node wants to be able to update atom properties 
> > (truth value, etc) locally, without having to talk to the "owner" node 
> > directly.
> > 4. Perfect consistency of atom state between different nodes is not a 
> > strict requirement, but it is desirable for a node to be able to identify 
> > the 'authoritative' source for a given atom, and that source should reflect 
> > a reasonably recent state of the atom as updated by any replica node.
> > 5. Relatively poor storage efficiency is acceptable. I.e., a single node 
> > may only be able to dedicate a relatively small portion of its memory to 
> > storing the atoms it owns; a majority of its space may go to replicated 
> > atoms. Nodes are cheap; we'll just buy more. :-)
> >
> > Given those design goals, I think we're looking at a publish-subscribe 
> > model for replicating updates to atoms.
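
[For concreteness, requirement 1's "simple division of the hash address
space" could be sketched roughly as below. This is purely my own
illustration — the function name, key format, and the choice of SHA-256
are assumptions, not anything from the Atomspace codebase:]

```python
# Hypothetical sketch: assign each atom an "owner" node by mapping its
# key into a hash address space and dividing that space among nodes.
import hashlib

def owner_node(atom_key: str, num_nodes: int) -> int:
    """Map an atom's key into the hash address space and pick an owner."""
    digest = hashlib.sha256(atom_key.encode()).digest()
    address = int.from_bytes(digest[:8], "big")  # 64-bit hash address
    return address % num_nodes  # simple division of the address space

# Every node computes the same owner for the same atom independently,
# so ownership lookups need no coordination traffic.
assert owner_node("ConceptNode:cat", 8) == owner_node("ConceptNode:cat", 8)
```

[A real deployment would presumably want consistent hashing rather than
plain modulo, so that adding a node doesn't reshuffle most ownership
assignments — but the division-of-address-space idea is the same.]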
>
>
> -- What Linas, Cassio, and Senna have all posited is that it may be
> more sensible to replace "Atom" with "Chunk" (i.e., sub-metagraph) in
> the above requirements.
>
> What the references I sent in my just-prior email suggest is that, for
> the sorts of graphs that tend to arise in real life, defining
> Chunks in a fairly simple heuristic way (i.e., each chunk is just a
> bunch of tightly-ish connected nodes and links), rather than by
> running an expensive partitioning algorithm, will generally be
> adequate.
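
[To illustrate what such a "simple heuristic" might look like, here is
a rough sketch that grows each chunk greedily from a seed node by
pulling in neighbors up to a size cap. The function, the size cap, and
the toy graph are all my own assumptions for illustration — not a
proposal from this thread:]

```python
# Hedged sketch: grow each Chunk breadth-first from a seed node,
# absorbing neighbors until a size cap, as a cheap alternative to a
# full graph-partitioning algorithm.
from collections import deque

def greedy_chunks(adjacency: dict, max_size: int = 4) -> list:
    """Partition a graph (node -> set of neighbors) into chunks of
    tightly-ish connected nodes via greedy breadth-first growth."""
    unassigned = set(adjacency)
    chunks = []
    while unassigned:
        seed = next(iter(unassigned))
        chunk, frontier = set(), deque([seed])
        while frontier and len(chunk) < max_size:
            node = frontier.popleft()
            if node not in unassigned:
                continue  # already placed in some chunk
            chunk.add(node)
            unassigned.discard(node)
            # only grow toward nodes not yet claimed by another chunk
            frontier.extend(n for n in adjacency[node] if n in unassigned)
        chunks.append(chunk)
    return chunks

# Tiny example graph: two loosely joined clusters.
graph = {"a": {"b", "c"}, "b": {"a", "c"}, "c": {"a", "b", "d"},
         "d": {"c", "e"}, "e": {"d"}}
print(greedy_chunks(graph, max_size=3))
```

[This is linear-time in edges, which is the point: it trades partition
quality for speed, which seems acceptable if chunks only need to be
"tightly-ish" connected.]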
>
> The requirements you state are in my view correct as regards Atoms.
> However, the perspective being put forth is that handling these
> requirements explicitly at the level of Atoms rather than Chunks will
> become computationally intractable, given the number of Atoms involved
> and the dynamic nature of the Atomspace.
>
> -- Ben



-- 
Ben Goertzel, PhD
http://goertzel.org

“The only people for me are the mad ones, the ones who are mad to
live, mad to talk, mad to be saved, desirous of everything at the same
time, the ones who never yawn or say a commonplace thing, but burn,
burn, burn like fabulous yellow roman candles exploding like spiders
across the stars.” -- Jack Kerouac

-- 
You received this message because you are subscribed to the Google Groups 
"opencog" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to [email protected].
To view this discussion on the web visit 
https://groups.google.com/d/msgid/opencog/CACYTDBcKdcwq%3DBpZ9dS3p4B9-stHF3BOAj5LqngcsLL1%3DQVmMg%40mail.gmail.com.

Reply via email to