Hi Shivakumar,

the very simple (and probably not very satisfying) answer is that Drools is 
too slow to cope with your message throughput. Either simplify the Drools 
rules significantly (or remove them entirely, e.g. by moving the processing 
to the clients), or add more hardware (i.e. more processor cores) to the 
machine hosting Graylog.
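
For anyone landing on this thread later: in Graylog 1.x, the Drools filter evaluates every incoming message against the full rule base, so reducing the number and complexity of conditions matters a great deal at high throughput. A minimal DRL of the kind discussed below might look like this (a sketch only; the "severity" field name and its value are assumptions, not taken from Shivakumar's attached file):

```drl
import org.graylog2.plugin.Message

rule "Drop debug messages"
    when
        // Match messages whose (assumed) "severity" field equals "debug"
        m : Message( getField("severity") == "debug" )
    then
        // Exclude the message from further processing and indexing
        m.setFilterOut(true);
end
```

The cheaper the condition in the `when` clause, the less CPU each message costs; complex string matching or regexes in the condition are usually what makes the process buffers back up.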

Cheers,
Jochen

On Thursday, 12 May 2016 13:01:09 UTC+2, Shivakumar Khuba wrote:
>
> We have installed the whole Graylog setup in production.
>
>
> Graylog server 1.3.0, 12 GB heap
> Graylog UI 1.3.3, 3 GB heap
> Elasticsearch 1.7.2, 5 GB heap (not indexing the data)
> MongoDB 3.0.4, 5 GB heap
>
>
> Messages are arriving at 1,000/sec, and the average message size is 
> approximately 500 KB.
>
>
> Each message is processed through a DRL (Drools rule) file, in which we 
> extract fields from it.
>
>
> We are not indexing in Elasticsearch; we call "m.setFilterOut(true)" on 
> each message.
>
>
> We have increased all the processor counts and ring buffer sizes, but the 
> process buffers are still filling up.
>
>
> Once the process buffer is full, Graylog stops processing messages and 
> the Kafka journal becomes saturated.
>
>
> Please find attached DRL file.
>
>
> Kindly get back to us with a solution as soon as possible.
>
> Thanks and Regards,
> Shivakumar Khuba
>

-- 
You received this message because you are subscribed to the Google Groups 
"Graylog Users" group.