Hey Michael, I think I have entered a discussion that I am under-qualified for, but I'll keep running! I'll skip the top section regarding the under-the-hood stuff, as I do not know enough about how Neo maintains its indexes, how the query/relationship/object caches work, or whether there is any performance impact of creating twice as many relationships (I expect that if I do not get smarter with my data I would reach the maximum capacity of Neo in 3 years; if I doubled the relationships I suppose I would halve that, but that is very naive!).
Running with a team of developers, I would not expect all or even many of them to be operating on raw queries; and, much as we don't produce inverse methods all over the place (although you could cite the Commons StringUtils isEmpty/isNotEmpty style of design), I would expect people to be comfortable with negation or with applying more complex logical analysis. That is what I meant about exposing the raw query interface to the wider world, at which point I would expect less comfort with that sort of thing! I do think it is a very interesting idea though, and expressing the semantics (either through or above Core Neo) without persisting them, so as to present a richer interface to all and sundry, is a worthwhile discussion. If you could define the inferred relationships such that the planner could use them ("he said :SIBLING but he means :SISTER or :BROTHER"), or, as you touched on below, define the rules and then explicitly run a MapReduce-type function to expose the data, that sounds very interesting. Now I'll comment inline...

On Wednesday, 3 September 2014 21:52:57 UTC+1, Michael Fair wrote:

> On Wednesday, September 3, 2014 3:29:34 AM UTC-7, Mark Findlater wrote:
>
>> Hey Mike, I think that this is interesting and it reminds me of using the OWL *inverseOf* property relationship and other features of the inferencing engine (overview <http://www.w3.org/TR/2004/REC-owl-features-20040210/>, spec <http://www.w3.org/TR/owl-ref/#inverseOf-def>). Or, stepping further back (in my learning), defining rules in Prolog.
>
> This is definitely about making the schema/engine smarter about "semantics".
>
>> I guess that my question as a developer is why I would want the inverse relationship label defined?
>
> I can see where a single app written by a single developer or two, who are all the combined schema master, data integrity maintainer, and code author, likely wouldn't need "help" to translate into their schema to query it; they wrote it, and likewise if they want to extend the app, they can just write more code to extend the app; if they need a new relationship, they just invent it. I'd argue that even at that small scale, while it's not "needed", it's still helpful.
>
>> With the triple stores I have used there is usually an inference engine which then exposes to queries the union of both the asserted triples (the data that you explicitly added) and the inferred triples (the ones that rules were used to create). The important thing about the inferred triples is that they are ephemeral, meaning that they do not impact the underlying datastore (although you could choose to materialize them), which stops unnecessary bloat (at the cost of processing).
>
> Well, storage-wise, the db is storing a numeric relationship type identifier, and then separately storing the relationship type label information.
>
> So storage-wise, adding multiple relationship type entries for the same type id isn't a problem.
> The on-disk structure is totally capable of handling this; it's just changing the type of block pointed to in the RelationshipTypeStore from String to String[].
> It's the code that loads this from the disk that would need to be updated to understand it's getting a String[] back and not just a String.
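(Interjecting here to check my understanding of what the extra labels would buy at query time. The first two patterns below are ordinary Cypher that the planner already treats identically; the third is purely hypothetical syntax for the inverse-label idea, nothing Neo supports today; and the last is what I would have to write today for my :SIBLING example.)

// Ordinary Cypher today: the same pattern written in either direction,
// but I have to remember which label was picked at design time.
MATCH (child)-[:PARENT]->(parent) RETURN child, parent
MATCH (parent)<-[:PARENT]-(child) RETURN child, parent

// Hypothetical: if CHILD were registered as an inverse label for the same
// numeric type id, the parser could rewrite this into the pattern above
// without storing any extra relationships.
MATCH (parent)-[:CHILD]->(child) RETURN child, parent

// And for "he said :SIBLING but he means :SISTER or :BROTHER", today I
// would have to spell out the alternation myself:
MATCH (a)-[:SISTER|:BROTHER]->(b) RETURN a, b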
> I'm now, however, convinced that updating the code to detect a String versus a String[] in the RelationshipTypeStore is going to be way easier than all the splitting and parsing gymnastics I was thinking of earlier! :)
> Though I still might encode the "direction" as part of the string.
>
>> As these new relationships are sugar, which is ultimately bloat, would you ever want to persist them? I also suspect that the directional trick you suggest could have serious implications for the performance of the query engine (but that's just a hunch).
>
> We can query (a)<-[:REL]-(b) for the same performance as (a)-[:REL]->(b), so the only trick is getting the planner to know what's being expressed, and that's what I think the parser's job is.
>
> All relationships in neo4j have a from/to direction by design. The label for that direction is rather arbitrarily created at design time. Being able to define the inverse labels for a relationship type eliminates that required arbitrary design-time selection, and I can't see how it has any performance impact. The planner still knows it's looking for a numeric relationship id (e.g. 62), and what it needs to know about from and to; I suspect most of the heavy lifting on that was created in the parser. From what I've gathered, a "directionless" relationship match actually becomes two matches under the hood, one in each direction.
> This would just make a similar kind of translation.
>
>> The Sibling/Spouse example has limited mileage for me without some clever chaining (e.g. how would I represent cousins? Harden SIBLING and PARENT/SPOUSE relationships and then chain SIBLING->PARENT->SIBLING->CHILD nodes, hardening along the way?) and again I think I would see more value in adding this a level above the underlying store.
>
> You jumped the gun on me! :)
> I'm actually working on a "relationships as sub-queries" plan, and [:COUSIN], [:NIECE], [:NEPHEW] are perfect examples of that.
> I guess I just ought to have simply proposed these named sub-queries as part of the initial discussion. :)
>
> It's cool that you used the PARENT/CHILD pairing in the example (n)-[:PARENT]->(p)-[:SIBLING]->(s)-[:CHILD]->(c). :)
> I see using the PARENT/CHILD pair as way more straightforward to read/follow than (n)-[:PARENT]->(p)-[:SIBLING]->(s)<-[:PARENT]-(c). :)

It is easier on the eye to read a query that draws you from left to right (in your second example above it does appear as if (s) should be the subject of our attention), but we are discussing orders of beauty on a data interface that is already extremely intuitive (not that I am advocating procrastination).

> So let's talk about how this can be used to create the (n)-[:COUSIN]->(c) query;
> This would be a new kind of relationship type.
> If the Cousin type captured a MATCH query like the one above, then it would be a kindred spirit to a SQL VIEW or FUNCTION as part of the DB schema.
> Again, it's primarily just creating a syntactic shortcut for a larger query; but it's one that makes the database more comprehensible to those interfacing with it.

I've banged on about not worrying about comprehensibility, because I think that we would be masking some of the super powers of graphs by concerning ourselves with directional semantics at the query interface level.
However, distilling larger traversals into a single pseudo-relationship *could* make representing complex queries more manageable (whilst potentially masking some things you might have wanted, e.g. a cousin on which side?).

> This new relationship type could even be used to create a really powerful result-set reduction/replacement function.
> Imagine taking a graph result from the prior query and running the [:COUSIN] relationship replacement function on it.
>
> So if we take a row from the results of the above query, e.g.
> (r1) = "ME", [:PARENT], "MYDAD", [:SIBLING], "DAD'S BROTHER", [:CHILD], "DAD'S BROTHER'S DAUGHTER"
>
> and run GraphMapReduce(r1, [:COUSIN]), then anywhere the intermediate nodes match the query provided by [:COUSIN] they get replaced with the relationship [:COUSIN].
> Giving us:
> (r2) = "ME"-[:COUSIN]->"DAD'S BROTHER'S DAUGHTER"
>
> If the [:COUSIN] type also provided some kind of MAP/REDUCE type functions as part of its description/definition, then it could even build its own properties dynamically from the underlying properties on the intervening nodes.
>
> This would be especially useful if we were to create a "correct" implementation of Cousin, which really means everyone you are directly related to through marriage or parentage. Your third cousin twice removed, for example, is still technically a "Cousin"; the map/reduce function on "Cousin" could take the intervening Path P, count the number of Parent hops involved to get the cousin's "order", then count the balance of the parent links up/down to get the cousin's "removal", and make both of those properties part of the "COUSIN" relationship connecting us.
>
> So the idea is that you pass a sub-graph or a result set to a named relationship type (which is actually a query).
> It operates on each row in the result set; when the row matches the query, the links between the matching endpoint nodes are replaced by the relationship, and the results of the relationship query become properties on the relationship.
>
> So the final result of this would be that we started with this:
> (r1) = "ME", [:PARENT], "MYDAD", [:SIBLING], "DAD'S BROTHER", [:CHILD], "DAD'S BROTHER'S DAUGHTER"
>
> and reduced it to this:
> (r3) = "ME"-[:COUSIN {order: 1, removed: 0}]->"DAD'S BROTHER'S DAUGHTER"
>
> This, at least in my mind's eye, would involve adding a new field to the relationship type to persist across reboots (though technically it is still just a "String").

This is the interesting bit for me. Something that you already implied, by calling your function GraphMapReduce above, is standardising how analytic or inferencing models can be applied to your graph, and then using that standardised model to build relationships with Hadoop or Spark or your new Apache-incubating distributed processing for graph stores project. Having a local MapReduce function is nice, but it's essentially a fancy name for a CREATE query; MapReduce implies being chunked and distributed in nature. If a job can be described (be it in Cypher, Gremlin, XML, JSON, JavaScript, R, whatever), thrown at a cluster, chunked up and processed, with the results either returned, stored back into the same graph or stored into a new graph, then that's powerful. Sure, you'd use it locally first!
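To illustrate what I mean by "a fancy name for a CREATE query": the local, single-machine version of your first-cousin reduction could be written today as something like the following (only a sketch, and only the fixed first-cousin case from your (r3); the general order/removed calculation over arbitrary :PARENT chains is exactly where your map/reduce-over-the-path idea would earn its keep):

// Materialise the inferred relationship with a plain Cypher statement,
// using the same PARENT/SIBLING/CHILD pattern from your example.
MATCH (n)-[:PARENT]->()-[:SIBLING]->()-[:CHILD]->(c)
// MERGE rather than CREATE keeps it idempotent if the job is re-run.
MERGE (n)-[:COUSIN {order: 1, removed: 0}]->(c)

The interesting leap is from that single statement to a job description that a cluster can chunk up and process, which is where the Spark/Hadoop angle comes in.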
What I think would be extra smart about breaking this into a new project is the evolution of an ecosystem where not everything is bundled into the core project (taking it beyond what I see as its job), but where interested parties start building functionality around it that leverages its power. I've not used it, but doesn't TinkerPop have some sort of MapReduce-type functionality?

M
