I'd welcome any insights and heuristics people might have regarding the following:
I'm looking to use Neo4j for domain modeling. Some of the information I want to capture is clearly graph-centric and will be used for specifying traversals, etc.; I'll obviously use core Neo4j functionality for that, and there's no confusion or ambiguity there.

But there will also be large chunks of data that, from a graph perspective, are "just payload": all I'd be doing from Neo4j's perspective is finding the data, reading it into the app, and making decisions based on the contents. The data would be messy and would differ in content and/or format across many different nodes, but we're probably talking dozens to one hundred key-value pairs per node, with the values being ints or sentence-long strings. We're probably not talking more than 10 kilobytes of info per chunk of payload, so this isn't a matter of sheer volume, just messiness and lack of structure.

I was wondering which of the following options smells best to the experts:

1. Create a bunch of individual properties on each node as needed, and store it all in Neo4j.
2. Create a JSON-or-similar blob in a single catch-all Neo4j property on each node, store all this stuff in that one blob as needed, read it out, and parse it in-app.
3. Store just a UUID or key in Neo4j, and use that key to store the messy data as a blob in some other NoSQL store. When I need the payload, I'd use Neo4j to get the key, grab the payload from the other store, and parse it in-app.

Any suggestions on which of the above is smart/fast/clean/scalable, etc.? Am I missing a smarter alternative? And where there isn't enough info above to decide, what other criteria would help me make an informed decision?

Thanks,
Tim

--
You received this message because you are subscribed to the Google Groups "Neo4j" group.
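For concreteness, option 2 boils down to the round-trip sketched below (a minimal Python sketch; the payload keys and the idea of a single `payload` string property are illustrative assumptions, not Tim's actual schema — Neo4j properties can hold strings, so a serialized JSON blob fits in one of them):

```python
import json

# Hypothetical messy per-node payload: dozens of key-value pairs,
# values are ints or sentence-long strings, shape varies node to node.
payload = {
    "batch_id": 42,
    "note": "hand-entered description; format varies by source",
    "source": "legacy-import",
}

# Option 2: serialize the whole payload into one catch-all string property.
# With a Neo4j driver you would then store it with Cypher along the lines of:
#   SET n.payload = $blob
blob = json.dumps(payload)

# Reading it back: fetch n.payload from the node, then parse in-app.
restored = json.loads(blob)
assert restored == payload
```

The trade-off this makes visible: the blob survives the round trip intact, but Neo4j cannot index or filter on anything inside it, so every decision based on the contents requires pulling the blob into the app first.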
