I'd be very surprised if there's any significant code from 2014 in today's codebase. Successfully running code from two years ago isn't a useful indicator of whether we can run on 512MB of RAM today.
That said, the first thing I'd try is narrowing down the cause of the OOM. Was it the kernel's OOM killer, or did the Go runtime fail an allocation? Hopefully we don't need to dig into the GODEBUG environment variables to track it down.

Next, try running influxd with the GOGC environment variable set to something lower than the default of 100. The runtime package documentation [1] covers GOGC in detail. I have no idea what value would be appropriate; try 10, and if that's noticeably slow, try 25.

As for low-hanging fruit in the config, you could raise the monitor store-interval to 30s or 1m. It's probably best not to disable it entirely. You should be fine disabling the continuous-query and subscriber services, although they probably won't have much impact on memory.

Let us know how it goes after all that.

[1] https://golang.org/pkg/runtime/

On Tuesday, January 10, 2017 at 9:56:53 PM UTC-8, Heath Raftery wrote:
>
> Bump. A reasonable query I think, because running with 512MB opens up a
> whole different universe of hosts for users with simple requirements.
>
> My experience is with a default install of influxdb 1.1.1, grafana 4.0.2
> and kapacitor 1.1.1. Points arrive via HTTP at about 2500/day. There's two
> databases with series cardinality of 21 and 1 (the latter is for grafana
> annotations), plus the _internal database with 188. Other than manual
> viewing of data with grafana, the only other non-default queries are made
> by a kapacitor script that processes the stream from one measurement and
> writes to another.
>
> I'm running with 512MB, and every week or so Influx quietly dies, with an
> OOM message left in syslog. Runs perfectly fine on restart, with about
> 150MB free (as reported by free -m).
>
> Given how outrageously far I am from sweating influxdb's capabilities (the
> hardware sizing guidelines start at series cardinality of 100,000), it
> seems there ought to be a configuration that will happily run on 512MB.
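For reference, a sketch of the config changes suggested above, assuming the section and key names in the stock 1.1.x influxdb.conf (check your shipped config file for the exact layout):

```toml
[monitor]
  # How often internal stats are written to the _internal database.
  # The default is "10s"; raising it reduces background write load.
  store-interval = "1m"

[continuous_queries]
  # The continuous query service.
  enabled = false

[subscriber]
  # The subscriber service.
  enabled = false
```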
>
> Rolf, your results are impressive. Have you noticed much change over time?
>
> Heath
>
> On Tuesday, October 21, 2014 at 6:18:49 AM UTC+11, Tom Maiaroto wrote:
>>
>> I'm running InfluxDB on a very small server. I have a few other processes
>> running, but nothing crazy.
>> What would the best settings be for small servers?
>>
>> I noticed sometimes InfluxDB is not running and I can't find any
>> reasoning in the logs. So I figured it was running out of memory.
>> I haven't profiled, but will set something up.
>>
>> In general though, are there any good approximations I can use in the
>> config? ie. multiply this number of open files by available memory and
>> that will give you the max open files... etc.
>>
>> Aside from max-open-files... are there any other settings to tweak?
>>
>> What about LRU cache size?
>>
>> Any tips would be great, thanks!
>> Also, is there a minimum amount of RAM suggested for running InfluxDB?
>>

--
Remember to include the version number!

---
You received this message because you are subscribed to the Google Groups "InfluxData" group.
To unsubscribe from this group and stop receiving emails from it, send an email to [email protected].
To post to this group, send email to [email protected].
Visit this group at https://groups.google.com/group/influxdb.
To view this discussion on the web visit https://groups.google.com/d/msgid/influxdb/2b0d38ad-2365-4b6e-a200-7e24501408d6%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.
