This data is generated by a variety of shell scripts that do discovery - 
potentially dozens of them - and each produces different output. Some of the 
most critical data is decomposed into attributes - but most of it is not.

-- 
  Alan Robertson
  al...@unix.sh

On Fri, Feb 9, 2018, at 2:58 AM, Michael Hunger wrote:
> I think this is OK.
> I wish we already had full document support.
> 
> I know that pg has really good jsonb support, so go for it. 
> 
> Did you ever try to destructure the data into properties? I'm not sure 
> how deeply nested it is. And you could leave off everything that is just 
> a default.
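> 
> Something along these lines is what I mean - a rough, untested Python
> sketch (the key names and default values are made up):
> 
>   def flatten(doc, prefix="", defaults=None):
>       """Flatten nested JSON into dotted property names, dropping defaults."""
>       defaults = defaults or {}
>       props = {}
>       for key, value in doc.items():
>           name = prefix + "." + key if prefix else key
>           if isinstance(value, dict):
>               props.update(flatten(value, name, defaults))
>           elif defaults.get(name) != value:  # leave off plain defaults
>               props[name] = value
>       return props
> 
>   # flatten({"nic": {"eth0": {"mtu": 1500, "duplex": "full"}}},
>   #         defaults={"nic.eth0.duplex": "full"})
>   # -> {"nic.eth0.mtu": 1500}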
> 
> Sent from my iPhone
> 
> > On 09.02.2018 at 04:10, Alan Robertson <al...@unix.sh> wrote:
> > 
> > Hi,
> > 
> > There is one set of data that is really, really slow when I insert it 
> > into Neo4j. It's discovery data - which is JSON - and sometimes very 
> > large - a few megabytes. Many items are smallish, but items of a few 
> > kilobytes are common, dozens of kilobytes are also common, and a few 
> > things are in the megabyte+ range. [Because of compression, I can send 
> > up to 3 megabytes of this JSON over UDP.]
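> > 
> > (For a sense of scale, that works out to something like this simplified 
> > Python sketch - assuming zlib and a single datagram; the real wire 
> > protocol differs:)
> > 
> >   import json, zlib
> > 
> >   MAX_UDP_PAYLOAD = 65507   # largest possible single UDP datagram
> > 
> >   def pack_discovery(doc):
> >       """Return compressed JSON if it fits in one datagram, else None."""
> >       blob = zlib.compress(json.dumps(doc).encode("utf-8"))
> >       return blob if len(blob) <= MAX_UDP_PAYLOAD else None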
> > 
> > There are a few things I can do with Neo4j to make inserting it faster, 
> > but not by much -- and even then, the data is very hard to query 
> > against (queries involve regexes against unindexed data, which is a 
> > performance nightmare).
> > 
> > Postgres has JSON support, and it has real transactions and a reputation 
> > for being very solid. I did some benchmarking, and it is a couple of 
> > orders of magnitude faster than Neo4j with both of them untuned. In 
> > addition, Postgres JSON (jsonb) can have indexes over the JSON data 
> > itself - greatly improving the query capabilities over what Neo4j can do 
> > for this same data.
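> > 
> > Concretely, that's the sort of thing below - a minimal sketch with a 
> > made-up table name and DSN, using psycopg2. The GIN index makes 
> > containment queries (@>) indexable instead of regex scans:
> > 
> >   import psycopg2
> >   from psycopg2.extras import Json
> > 
> >   conn = psycopg2.connect("dbname=assimilation")   # made-up DSN
> >   cur = conn.cursor()
> > 
> >   # jsonb column plus a GIN index over the whole document
> >   cur.execute("CREATE TABLE IF NOT EXISTS discovery "
> >               "(id bigserial PRIMARY KEY, data jsonb NOT NULL)")
> >   cur.execute("CREATE INDEX IF NOT EXISTS discovery_data_gin "
> >               "ON discovery USING GIN (data)")
> > 
> >   doc = {"ipaddr": "10.10.10.5", "nics": {"eth0": {"mtu": 1500}}}
> >   cur.execute("INSERT INTO discovery (data) VALUES (%s)", [Json(doc)])
> > 
> >   # containment query - this can use the GIN index
> >   cur.execute("SELECT data FROM discovery WHERE data @> %s::jsonb",
> >               [Json({"ipaddr": "10.10.10.5"})])
> >   conn.commit()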
> > 
> > I'm not thinking about doing anything except moving this one class of data 
> > to Postgres. This particular class of data is also idempotent, which has 
> > advantages when you have multiple databases involved...
> > 
> > Since this particular type of data is already its own object in the 
> > Python code, storing it in Postgres likely wouldn't be horrible to 
> > implement.
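> > 
> > Roughly what I have in mind for the write path - an idempotent upsert 
> > (untested sketch; table and column names are made up, and it assumes a 
> > unique constraint on (drone, disctype)):
> > 
> >   from psycopg2.extras import Json
> > 
> >   UPSERT = """
> >       INSERT INTO discovery (drone, disctype, data)
> >       VALUES (%s, %s, %s)
> >       ON CONFLICT (drone, disctype) DO UPDATE SET data = EXCLUDED.data
> >   """
> > 
> >   def save_discovery(cur, drone, disctype, doc):
> >       """Re-sending the same result just overwrites the previous row,
> >       so duplicate or repeated deliveries are harmless."""
> >       cur.execute(UPSERT, (drone, disctype, Json(doc)))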
> > 
> > If I'm going to do this in the next year or two, it makes sense to couple 
> > it with the rest of the backwards-incompatible changes I'm already putting 
> > into release 2.
> > 
> > Does anyone think that using two databases is a show-stopper?
> > 
> > -- 
> >  Alan Robertson
> >  al...@unix.sh
_______________________________________________
Assimilation mailing list - Discovery-Driven Monitoring
Assimilation@lists.community.tummy.com
http://lists.community.tummy.com/cgi-bin/mailman/listinfo/assimilation
http://assimmon.org/
