Hi Sean, nice guess, we have 91 786 506 series :) To understand this a bit 
better: does the high memory consumption come from InfluxDB loading the 
index into memory for faster writes and queries?

I will dive into the individual measurements to see exactly where we have 
such a large tag cardinality, so that we can reduce the number of series.
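To make the tag-cardinality hunt concrete, here is a rough sketch of the math (the measurement names and counts below are invented examples, not our actual schema): the series cardinality of a measurement is bounded by the product of the distinct values of each tag key, so a single high-cardinality tag can dominate the total.

```python
# Illustration only: estimating worst-case series cardinality per
# measurement as the product of distinct values per tag key.
# All measurement names and counts are made up for the example.
from math import prod

# Distinct values observed for each tag key, per measurement.
tag_values = {
    "cpu":  {"host": 400, "region": 5, "core": 32},
    "http": {"host": 400, "endpoint": 1200, "status": 8},
}

def worst_case_series(tags):
    """Upper bound: assume every tag-value combination actually occurs."""
    return prod(tags.values())

for name, tags in tag_values.items():
    print(name, worst_case_series(tags))
# cpu:  400 * 5 * 32     = 64,000
# http: 400 * 1200 * 8   = 3,840,000  <- the "endpoint" tag dominates

total = sum(worst_case_series(t) for t in tag_values.values())
print("total", total)  # 3,904,000
```

In practice the real count is lower (not every combination occurs), but this is the bound that matters for the in-memory index, and it shows which tag keys to attack first.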

Thank you 

On Monday, July 11, 2016 at 6:51:52 PM UTC+2, Sean Beckett wrote:
>
> High RAM usage usually correlates with high series cardinality 
> <https://docs.influxdata.com/influxdb/v0.13/concepts/glossary/#series-cardinality>.
>
> You can run the following to determine your series cardinality, assuming 
> you haven't altered the default sample rate for the _internal database:
>
>   SELECT sum(numSeries) AS "total_series" FROM "_internal".."database" WHERE time > now() - 10s
>
> If you have, change the WHERE time clause to grab only one sample, or run 
> the following and sum the results:
>
>   SELECT last(numSeries) FROM "_internal".."database" GROUP BY "database"
>
> With 100GB of RAM in use, I'm going to guess you have 5+ million series.
>
> On Mon, Jul 11, 2016 at 10:21 AM, Jan Kis <[email protected]> wrote:
>
>> Hi, 
>>
>> we are using influxdb 0.13 on Fedora 23. We see influx consuming more 
>> than 100GB of ram. At some point it eventually runs out of memory and dies. 
>> There are no errors in the logs. Our configuration is below. 
>>
>> Is there a way to control how much memory influx is consuming?
>> What can we do to figure out why is influx consuming so much memory?
>>
>> Thank you
>>
>> reporting-disabled = false
>> bind-address = ":8088"
>> hostname = ""
>> join = ""
>>
>> [meta]
>>   dir = "/data/influxdb/meta"
>>   retention-autocreate = true
>>   logging-enabled = true
>>   pprof-enabled = false
>>   lease-duration = "1m0s"
>>
>> [data]
>>   dir = "/data/influxdb/data"
>>   engine = "tsm1"
>>   wal-dir = "/data/influxdb/wal"
>>   wal-logging-enabled = true
>>   query-log-enabled = true
>>   cache-max-memory-size = 524288000
>>   cache-snapshot-memory-size = 26214400
>>   cache-snapshot-write-cold-duration = "1h0m0s"
>>   compact-full-write-cold-duration = "24h0m0s"
>>   max-points-per-block = 0
>>   data-logging-enabled = true
>>
>> [cluster]
>>   force-remote-mapping = false
>>   write-timeout = "10s"
>>   shard-writer-timeout = "5s"
>>   max-remote-write-connections = 3
>>   shard-mapper-timeout = "5s"
>>   max-concurrent-queries = 0
>>   query-timeout = "0"
>>   log-queries-after = "0"
>>   max-select-point = 0
>>   max-select-series = 0
>>   max-select-buckets = 0
>>
>> [retention]
>>   enabled = true
>>   check-interval = "30m0s"
>>
>> [shard-precreation]
>>   enabled = true
>>   check-interval = "10m0s"
>>   advance-period = "30m0s"
>>
>> [admin]
>>   enabled = true
>>   bind-address = ":8083"
>>   https-enabled = false
>>   https-certificate = "/etc/ssl/influxdb.pem"
>>   Version = ""
>>
>> [monitor]
>>   store-enabled = true
>>   store-database = "_internal"
>>   store-interval = "10s"
>>
>> [subscriber]
>>   enabled = true
>>
>> [http]
>>   enabled = true
>>   bind-address = ":8086"
>>   auth-enabled = false
>>   log-enabled = true
>>   write-tracing = false
>>   pprof-enabled = false
>>   https-enabled = false
>>   https-certificate = "/etc/ssl/influxdb.pem"
>>   max-row-limit = 10000
>>
>> [[graphite]]
>>   enabled = true
>>   bind-address = ":2003"
>>   database = "graphite"
>>   protocol = "udp"
>>   batch-size = 5000
>>   batch-pending = 10
>>   batch-timeout = "1s"
>>   consistency-level = "one"
>>   separator = "."
>>   udp-read-buffer = 0
>>
>> [[collectd]]
>>   enabled = false
>>   bind-address = ":25826"
>>   database = "collectd"
>>   retention-policy = ""
>>   batch-size = 5000
>>   batch-pending = 10
>>   batch-timeout = "10s"
>>   read-buffer = 0
>>   typesdb = "/usr/share/collectd/types.db"
>>
>> [[opentsdb]]
>>   enabled = false
>>   bind-address = ":4242"
>>   database = "opentsdb"
>>   retention-policy = ""
>>   consistency-level = "one"
>>   tls-enabled = false
>>   certificate = "/etc/ssl/influxdb.pem"
>>   batch-size = 1000
>>   batch-pending = 5
>>   batch-timeout = "1s"
>>   log-point-errors = true
>>
>> [[udp]]
>>   enabled = false
>>   bind-address = ":8089"
>>   database = "udp"
>>   retention-policy = ""
>>   batch-size = 5000
>>   batch-pending = 10
>>   read-buffer = 0
>>   batch-timeout = "1s"
>>   precision = ""
>>
>> [continuous_queries]
>>   log-enabled = true
>>   enabled = true
>>   run-interval = "1s"
>>
>> -- 
>> Remember to include the InfluxDB version number with all issue reports
>> --- 
>> You received this message because you are subscribed to the Google Groups 
>> "InfluxDB" group.
>> To unsubscribe from this group and stop receiving emails from it, send an 
>> email to [email protected].
>> To post to this group, send email to [email protected].
>> Visit this group at https://groups.google.com/group/influxdb.
>> To view this discussion on the web visit 
>> https://groups.google.com/d/msgid/influxdb/770d4dc6-8a9b-449e-ad43-fa558e53a16d%40googlegroups.com
>>  
>> <https://groups.google.com/d/msgid/influxdb/770d4dc6-8a9b-449e-ad43-fa558e53a16d%40googlegroups.com?utm_medium=email&utm_source=footer>
>> .
>> For more options, visit https://groups.google.com/d/optout.
>>
>
>
>
> -- 
> Sean Beckett
> Director of Support and Professional Services
> InfluxDB
>
