In the meantime I also suspect the NetFlow plugin, but I do not have the 
time to investigate further. For now I am fine with the automatic cron 
job that restarts the Graylog server once a day.
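For anyone wanting to reproduce this workaround, a minimal sketch of such a daily restart job might look like the following. The schedule, log path, and the use of graylog-ctl (the appliance tooling mentioned later in this thread) are assumptions; package-based installs would use something like "systemctl restart graylog-server" instead.

```shell
# Hypothetical crontab entry (root crontab): restart Graylog every day
# at 04:00, before the heap approaches its configured maximum.
# Assumes the graylog-ctl appliance tooling; adjust the command and
# paths for your own installation.
0 4 * * * /usr/bin/graylog-ctl restart >> /var/log/graylog-restart.log 2>&1
```

Note that a scheduled restart only papers over the leak; capturing a heap dump on failure (for example by adding -XX:+HeapDumpOnOutOfMemoryError to JAVA_OPTS) would help confirm whether the growth really comes from the NetFlow plugin.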


On Sunday, November 6, 2016 at 1:34:16 PM UTC+2, [email protected] wrote:
>
> I have the same question.
> I have two servers collecting syslog.
> Server A was for F5 ASM security logs and had no plugins installed.
> Server B was for Cisco NetFlow logs and has the NetFlow plugin installed.
>
> Server B always runs into out-of-memory issues.
> Server A has been running for about 14 days and is still fine.
> I guess the NetFlow plugin is causing it. 
>
> Server A and B configuration:
> 8 CPU cores
> 16 GB RAM
> 100 GB disk
>
> DEFAULT_JAVA_OPTS="-Djava.library.path=${GRAYLOGCTL_DIR}/../lib/sigar 
> -Xms1g -Xmx2g -XX:NewRatio=1 -XX:PermSize=128m -XX:MaxPermSize=256m -server 
> -XX:+ResizeTLAB -XX:+UseConcMarkSweepGC -XX:+CMSConcurrentMTEnabled 
> -XX:+CMSClassUnloadingEnabled -XX:+UseParNewGC 
> -XX:-OmitStackTraceInFastThrow"
>
> JAVA_OPTS="${JAVA_OPTS:="$DEFAULT_JAVA_OPTS"}"
>
> On Friday, September 2, 2016 at 8:12:11 PM UTC+8, Enrico wrote:
>>
>>
>>
>> On Friday, July 8, 2016 at 12:45:01 PM UTC+2, Rumen Tashev wrote:
>>>
>>> I have a similar problem with my Graylog2 setup. I have a cluster 
>>> with two nodes. The problem is with my slave node, where we capture NetFlow 
>>> data from our routers. Incoming messages number about 30-50 per second. 
>>> I have allowed up to 4 GB of heap memory for graylog-server. After a fresh 
>>> start, the node uses up to 972.8 MB, and this grows over time. It 
>>> takes approximately 24 hours until the node reaches the full 4 GB (shown as 
>>> 3.8 GB), and then it constantly stops and restarts. A restart of the node 
>>> (graylog-ctl stop && shutdown -r now) rectifies the problem, but only 
>>> temporarily. The Graylog slave node is configured as "backend".
>>>
>>> We have exactly the same configuration on the master node, where this 
>>> problem does not appear. The master node has been running for weeks now, 
>>> processing about 10-30 messages per second and using 1.1 GB of heap space. 
>>> It never gets anywhere close to 3.8 GB, the configured maximum. The 
>>> only difference is that it does not accept any NetFlow messages.
>>>
>>> Previously we had the NetFlow messages going to the master node, and the 
>>> exact same behaviour appeared there as well: the node gradually 
>>> consumed more and more memory until it reached a state where it constantly 
>>> crashed and restarted. Moving the NetFlow messages to the slave seems to 
>>> have rectified the problem on the master. Both nodes run the latest version 
>>> of Graylog2, 2.0.3.
>>>
>>> Do you also run NetFlow inputs on your node? Any help is greatly 
>>> appreciated!
>>>
>>
>>
>> Has nobody replied?
>> Thanks,
>> Enrico
>>  
>>
>

-- 
You received this message because you are subscribed to the Google Groups 
"Graylog Users" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to [email protected].
To view this discussion on the web visit 
https://groups.google.com/d/msgid/graylog2/a4b4785b-a9a9-4333-a6c2-ed43f1841d51%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.
