We are using HTTP to write inserts to the /write endpoint, and after a very 
short time influx is killed by the Linux OOM killer. We've increased the 
memory fourfold from the initial value, so influx now has 20 GB, but it is 
still crashing fairly quickly. We are performing only about 230,000 inserts: 
1,000 points into each of 230 measurements. We are batching the writes and 
flushing every 100 milliseconds. 
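For reference, here is a minimal sketch of the write pattern described above. The line-protocol format and the /write endpoint are standard InfluxDB 1.x; the measurement names, field names, and batch size here are illustrative assumptions, not our exact client code.

```python
import itertools

def to_line(measurement, fields, ts_ns):
    # Build one InfluxDB line-protocol line, e.g. "cpu value=1.5 1234"
    # (no tags, integer/float fields only, nanosecond timestamp).
    field_str = ",".join(f"{k}={v}" for k, v in fields.items())
    return f"{measurement} {field_str} {ts_ns}"

def batched(lines, size):
    # Yield lists of at most `size` lines; each list becomes the
    # newline-joined body of one POST to /write.
    it = iter(lines)
    while True:
        chunk = list(itertools.islice(it, size))
        if not chunk:
            return
        yield chunk

# Each batch would then be flushed roughly every 100 ms with something like:
#   requests.post("http://localhost:8086/write?db=mydb",
#                 data="\n".join(batch).encode())
# (host, port, and database name are placeholders)
```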

We don't believe the data is more than 1 GB, and yet 20 GB of memory isn't 
enough to write it?!

This was done with version 1.1.0. 
