Hello,

May I bug all of you to share some details on your actual graylog-server 
configuration? We are planning on deploying (to start) a 3-node physical 
server Elasticsearch cluster, and I am thinking of beginning with either 2 
or 3 graylog nodes, virtualized, using CentOS 6.7. We don't have a good 
idea of what our logging volume will be to start, but we plan on opening 
the tap slowly. Something I seem to be coming up short on is documentation 
on the actual configuration of the graylog-server and graylog-web services. 
The 1.3 documents provide a recommended HA setup diagram, but either I'm 
really overlooking something or the actual config details are missing.

So the first graylog-server is master, with ES unicast discovery entries 
pointing at my 3 ES nodes on port 9300. MongoDB connection will be 
localhost 27017. Additional graylog-server nodes are not master and have an 
identical ES configuration. Do you configure them to point to MongoDB on the 
master graylog-server node, or do you configure a replica set? If you use 
replicas, is there a clustered "listener" of sorts which can always ensure 
connectivity to the current primary? How do the graylog-server nodes 
maintain/establish connectivity to a new primary?
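For context, here is the MongoDB fragment of graylog-server.conf I was 
planning to try, identical on every node, assuming a 3-member replica set 
(mongo1-3 are placeholder hostnames). My understanding is that the MongoDB 
driver itself tracks the replica set topology and fails over to the new 
primary, so no external listener should be required, but please correct me if 
that's wrong:

```
# graylog-server.conf, identical on every graylog-server node
# (mongo1/mongo2/mongo3 are placeholder names for the replica set members)
mongodb_useauth = false
mongodb_replica_set = mongo1:27017,mongo2:27017,mongo3:27017
mongodb_database = graylog2
```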

For graylog-web, should I be running my REST API on different ports, such 
that graylog2-server.uris="node1:12900,node2:12901,node3:12902" and so on? Or 
might I leave them all at 12900? I had this up and running in a lab 
environment and experienced occasional loss of connectivity to the 
graylog-server nodes from graylog-web (running on node1).
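In case it helps, this is roughly what I had in the lab, keeping 12900 
everywhere since each graylog-server runs on its own VM (node1-3 are 
placeholder hostnames):

```
# graylog-server.conf on each node -- same REST port everywhere:
rest_listen_uri = http://0.0.0.0:12900/
rest_transport_uri = http://node1:12900/   # this node's own address

# graylog-web-interface.conf:
graylog2-server.uris="http://node1:12900/,http://node2:12900/,http://node3:12900/"
```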

Also, is there a need to run more than one graylog-web node? May I run 
several for redundancy and put them behind a load balancer?
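What I had in mind was something like the following (an untested sketch: 
nginx in front of two graylog-web instances on their default port 9000; 
web1/web2 are placeholder names):

```
upstream graylog_web {
    server web1:9000;
    server web2:9000;
}
server {
    listen 80;
    location / {
        proxy_pass http://graylog_web;
        proxy_set_header Host $host;
    }
}
```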

Anything else I'm missing?

Thanks much.

John
On Sunday, June 8, 2014 at 7:37:52 PM UTC-5, Asad Mehmood wrote:
>
> Good day!
>
> Recently I started implementing a log monitoring and analysis system using 
> graylog2; we will have around 12,000 messages/second. Though in staging we 
> are not even near that number, the cluster is not stable.
>
> Sometimes ES discovery fails because either the machine is in I/O wait or 
> there are too many processes on each core. 
> However I tune the settings, one way or another the cluster finds a way to 
> fail. My setup will be limited to slow I/O for a while, so I need to either 
> stick with slow disks or divide the setup so that recent logs stay on 
> high-speed disks and older ones are moved to a low-performance cluster. I 
> was hoping someone could help me formulate a way to calculate how many 
> nodes I need for the ES cluster, graylog2-server, radio, and Kafka.
>
> There is another problem with the Kafka input: if I shut down Kafka, 
> ZooKeeper, or radio, the messages stop coming and I need to terminate the 
> Kafka input and launch a new one.
> Also, the message throughput when using Kafka and radio is far lower than 
> when using direct inputs with the graylog2-benchmark tool.
>
> Current setup:
> 2x Log Collector + Radio (8 GB, 2-core Xeon)
> 1x Graylog2-server + graylog2-web (16 GB, 4-core Xeon)
> 1x Graylog2-server + Elasticsearch (16 GB, 4-core Xeon)
> 3x Elasticsearch + Kafka (16 GB, 4-core Xeon)
>
> The message throughput at peak hours will be 12,000/second, and to go to 
> production the system needs to withstand a stress test of 20,000 
> messages/second. 
>
> I would appreciate it if anyone here could help me quantify the 
> performance requirements.
>
>
> regards,
>
> Asad

-- 
You received this message because you are subscribed to the Google Groups 
"Graylog Users" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to [email protected].
To view this discussion on the web visit 
https://groups.google.com/d/msgid/graylog2/321efa2c-ea0e-4ce5-b1e0-3ff23c00b15b%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.
