Hi,

What kind of inputs are you running, and are you using any extractors or the
message processing pipeline?

A full process buffer usually hints at slow or complex message filtering or 
extractors.
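To make that concrete: one common culprit is a regex extractor that backtracks heavily on lines that almost (but don't) match. A minimal Python sketch of the effect — the patterns and input here are made up for illustration, not taken from this thread:

```python
import re
import time

# Illustrative only: a nested quantifier like (a+)+ backtracks
# exponentially when the input nearly matches but ultimately fails.
# A regex extractor behaving like this keeps the process buffer pinned.
slow = re.compile(r"^(a+)+$")   # catastrophic backtracking on failure
safe = re.compile(r"^a+$")      # accepts the same strings, linear time

line = "a" * 20 + "!"           # near-miss input that forces backtracking

t0 = time.perf_counter()
assert slow.match(line) is None
slow_secs = time.perf_counter() - t0

t0 = time.perf_counter()
assert safe.match(line) is None
safe_secs = time.perf_counter() - t0

print(f"backtracking pattern: {slow_secs:.3f}s, rewritten pattern: {safe_secs:.6f}s")
```

If extractors are in play, the per-extractor timing metrics Graylog shows on the input's extractor page are the quickest way to spot an offender like this.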

Cheers,
Jochen

On Friday, 22 April 2016 16:29:41 UTC+2, [email protected] wrote:
>
> Hi,
> I have a problem with my output rate, which is very low. My input rate is 
> not very high (200-400 msg/sec), but the output gets stuck at 100 msg/sec 
> max. As you can see in the following screenshot, the process and output 
> buffers are at 100% and the journal is pretty full.
>
> [image: graylog] 
> <https://cloud.githubusercontent.com/assets/12542108/14742907/dee229e0-089e-11e6-9902-00040ff1b7d3.jpg>
>
> Here is my hardware setup :
>
> - 2 Graylog servers, each with 2 vCPUs and 4 GB of RAM. I've tried 
> increasing them to 4 vCPUs and 8 GB of RAM but ended up with the same problem.
>
> - 3 Elasticsearch nodes, each with 1 vCPU and 2 GB of RAM.
>
> I have never experienced performance issues with Graylog 1.x at the same 
> input rate.
>
>
> In the logs I can see errors like the following:
> ERROR [KafkaJournal] Read offset 91635221 before start of log at 91753414, 
> starting to read from the beginning of the journal.
>
>
> Thanks !
>
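As a side note on the KafkaJournal error quoted above: it typically means the on-disk journal expired (deleted) old segments before the reader caught up, which fits the picture of an output stage that can't keep pace — the journal fills, retention kicks in, and unread messages are dropped. The real fix is restoring output throughput (e.g. more Elasticsearch capacity), but as a stopgap the retention limits live in Graylog's server.conf. The values below are the stock defaults, shown for orientation rather than as a recommendation:

```
# Journal retention settings in Graylog's server.conf (defaults shown).
# Segments past either limit are deleted even if not yet read, which
# produces the "Read offset ... before start of log" error.
message_journal_enabled = true
message_journal_max_age = 12h
message_journal_max_size = 5gb
```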

-- 
You received this message because you are subscribed to the Google Groups 
"Graylog Users" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to [email protected].
To view this discussion on the web visit 
https://groups.google.com/d/msgid/graylog2/c132434d-932c-44c7-8a3e-df05e2ac2c5e%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.
