Hi Mike!

The answer depends on several parameters.
First, the size of an individual message matters. Are we talking 1 KB
or 100+ KB per message?
The latter is honestly a challenge and requires much larger heap sizes.

The key to processing this kind of message load is a decently sized
Elasticsearch cluster, usually 4+ nodes, and plenty of cores on the Graylog
nodes; I would estimate 16 or 24 cores each, across two nodes.

Memory is less of an issue on the Graylog nodes, unless you have large messages.
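For the large-message case, the biggest lever is usually the Elasticsearch heap. A minimal sketch, assuming the `ES_HEAP_SIZE` environment variable that Elasticsearch 1.x reads (the exact mechanism differs between versions and packages, and the value here is illustrative):

```shell
# /etc/default/elasticsearch (Debian/Ubuntu packaging) or your init environment.
# Rule of thumb: give ES about half the machine's RAM, but stay below ~31 GB
# so the JVM keeps using compressed object pointers.
ES_HEAP_SIZE=16g
```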

We have seen installations doing 40k messages per second on two nodes, with
message sizes between 100 KB and 1 MB, but in that case you are torturing
Elasticsearch a lot and it's tricky to tune. Most logging use cases have
message sizes far below that, though, and consequently push more messages
per second.

Beyond this writeup, I will shamelessly point to our commercial offering :)

That being said, the default config can handle roughly 15k messages per
second per node on current hardware, given a reasonable ES cluster.
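If you need to push past that, the Graylog-side knobs to revisit live in server.conf. The values below are illustrative only, assuming a many-core box; defaults and effects can differ between Graylog versions, so benchmark before and after changing them:

```
# server.conf: spread processing work across more of the available cores.
processbuffer_processors = 10   # threads filling the process buffer; scale with core count
outputbuffer_processors = 5     # threads writing batches out to Elasticsearch
output_batch_size = 500         # messages per bulk request to ES
```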

Best,
Kay
 On Apr 14, 2015 11:26 PM, "Mike Daoust" <[email protected]> wrote:

> Hey folks
>
> I have a new project that I'm looking for some insight on.
> We are testing out logging some high-volume data that is between 65 and
> 100k per second.
>
> What would you all think would be an optimal config?  With higher loads do
> you find that having everything separate vs full stack offers better
> performance?
>
>
> Thanks
>
> Mike
>
>
>
>
>
>
>  --
> You received this message because you are subscribed to the Google Groups
> "graylog2" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to [email protected].
> For more options, visit https://groups.google.com/d/optout.
>
