Hi All,

My task is to build a centralized log analysis setup that can accommodate 500 
GB of log files and return search results within seconds. As a starting point, 
I have a basic setup of Graylog with Elasticsearch and Logstash. 
To start, I read one log file with Logstash, indexed it into Elasticsearch, 
and I am able to visualize the messages in the Graylog web interface.
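
For reference, the single-file test config is roughly the following (the path, 
host and index name are placeholders rather than my exact values):

input {
  file {
    path => "/var/log/sample/app.log"    # the one test log file
    start_position => "beginning"        # read the file from the start
  }
}

output {
  elasticsearch {
    hosts => ["localhost:9200"]          # single local ES node for now
    index => "logs-%{+YYYY.MM.dd}"       # one index per day
  }
}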

Now I need to scale up the setup so that I can index 500 GB of data in 
Elasticsearch. 
Is it possible to read that many files through Logstash and index them into 
Elasticsearch? (For example, would a wildcard file input along the lines of 
the sketch below be the right direction?) 
Would MongoDB be helpful in this scenario? 
How should I scale my architecture in terms of Elasticsearch nodes, RAM, CPU 
power, ES heap size and so on, so that I can meet the requirements of the 
task?
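
For the multi-file question, something along these lines is what I had in 
mind; it is only a rough sketch, and the paths, hostnames and index name are 
placeholders:

input {
  file {
    path => ["/data/logs/app1/*.log", "/data/logs/app2/*.log"]   # wildcards over many files
    start_position => "beginning"
    sincedb_path => "/var/lib/logstash/sincedb"   # remember read positions across restarts
  }
}

output {
  elasticsearch {
    hosts => ["es-node1:9200", "es-node2:9200"]   # several ES nodes once the cluster grows
    index => "logs-%{+YYYY.MM.dd}"                # daily indices keep individual shards manageable
  }
}

Would that approach hold up at this volume, or is a different ingestion path 
recommended?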

My current setup:
VM with 4 GB RAM
CentOS 7
Elasticsearch: 1.7.5
Logstash: 2.2.2
Graylog: 1.3

Kindly help me with my questions. I am very new to this environment.
Thanks.
