Correct! But the other two could take over this role if the master goes 
down, right? So my setup is fine. Or do I misunderstand something?
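
For what it's worth, here is the quorum arithmetic as I understand it, with 
three master-eligible nodes:

    quorum = (master_eligible_nodes / 2) + 1 = (3 / 2) + 1 = 2

So with discovery.zen.minimum_master_nodes: 2, any two surviving nodes can 
still elect a master, while a single isolated node cannot promote itself. 
To check which node currently holds the master role (assuming ES 1.x on the 
default port):

    curl 'http://localhost:9200/_cat/master?v'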

On Tuesday, February 10, 2015 at 2:51:20 PM UTC+1, Arie wrote:
>
> At any given time, only one server can be master. This server regulates 
> all the logic of your ES cluster, and it is the one that graylog is 
> talking to.
>
>
>
> On Tuesday, February 10, 2015 at 2:04:52 PM UTC+1, Christoph Fürstaller 
> wrote:
>>
>> Hi,
>>
>> Thanks for the configuration docs.
>>
>> Can I really run into split brain?
>> I have 3 nodes, and they are all equal: each of them can be a master and 
>> will store data. With discovery.zen.minimum_master_nodes: 2 I can't get a 
>> split brain. Or am I wrong?
>> Or is this setup not ideal?
>>
>> Chris...
>>
>> On Tuesday, February 10, 2015 at 1:38:06 PM UTC+1, Arie wrote:
>>>
>>> You could bump into a split-brain situation running all ES nodes as 
>>> masters.
>>>
>>> Check this out to configure your cluster:
>>>
>>>
>>> http://www.elasticsearch.org/guide/en/elasticsearch/guide/current/_important_configuration_changes.html#_minimum_master_nodes
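>>>
>>> A minimal elasticsearch.yml sketch of what that page recommends, assuming 
>>> all three nodes stay master-eligible data nodes:
>>>
>>> # identical on all three nodes
>>> node.master: true
>>> node.data: true
>>> # quorum for 3 master-eligible nodes: (3 / 2) + 1 = 2
>>> discovery.zen.minimum_master_nodes: 2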
>>>
>>>
>>>
>>> On Tuesday, February 10, 2015 at 12:09:33 AM UTC+1, Christoph Fürstaller 
>>> wrote:
>>>>
>>>> Thanks for your answer!
>>>>
>>>> About the master/data nodes: what happens when the master goes down? 
>>>> Will one of the 'slaves' become the master? I configured all 3 as 
>>>> masters for redundancy, so the cluster still survives if only one node 
>>>> is present. Is this assumption wrong?
>>>>
>>>> I've increased the ES_HEAP_SIZE to 6G before, with the same results. 
>>>>
>>>> Chris...
>>>>
>>>> On Monday, February 9, 2015 at 8:30:28 PM UTC+1, Arie wrote:
>>>>>
>>>>> Hi,
>>>>>
>>>>> Looking at your config in elasticsearch.yml, the following comes to 
>>>>> mind.
>>>>>
>>>>> One node should be:
>>>>> node.master: true
>>>>> node.data: true
>>>>>  
>>>>> and for the other two nodes:
>>>>> node.master: false
>>>>> node.data: true
>>>>>
>>>>> elasticsearch.conf:
>>>>> ES_HEAP_SIZE
>>>>>
>>>>> You can easily take this up to 8G (50% of your memory) and check that 
>>>>> it is really running with that heap. In my case, on CentOS 6, I put 
>>>>> this in /etc/conf.d/elasticsearch.
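>>>>>
>>>>> A minimal sketch of the setting itself, assuming the stock ES 1.x 
>>>>> packaging (on RHEL the file is usually /etc/sysconfig/elasticsearch):
>>>>>
>>>>> # heap for the elasticsearch service, ~50% of the machine's RAM
>>>>> ES_HEAP_SIZE=8g
>>>>>
>>>>> You can verify what the running JVM actually got with:
>>>>>
>>>>> curl 'http://localhost:9200/_nodes/stats/jvm?pretty'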
>>>>>
>>>>> Good luck.
>>>>>
>>>>> On Friday, February 6, 2015 at 12:58:27 PM UTC+1, Christoph Fürstaller 
>>>>> wrote:
>>>>>>
>>>>>> Hi,
>>>>>> Yesterday we updated our Graylog2/Elasticsearch cluster. The 
>>>>>> Elasticsearch cluster consists of 3 physical machines (DL380 G7, 
>>>>>> E5620, 16GB RAM) on RHEL 6.6. Each ES node gets 4GB RAM. On one host 
>>>>>> the graylog2 server/interface is installed. Until yesterday we used 
>>>>>> Elasticsearch 0.90.10-1 and graylog2-0.20.3. Yesterday we updated 
>>>>>> graylog2 to 0.90.0, started everything, and everything was running 
>>>>>> fine. Then we stopped graylog2 and the Elasticsearch cluster, and 
>>>>>> upgraded ES to 1.3.4 and graylog to 0.92.4. The ES upgrade was 
>>>>>> successful; after that we started graylog2, which connected to the 
>>>>>> cluster and showed everything.
>>>>>>
>>>>>> In the ES cluster there are 7 indices with about 20 million messages 
>>>>>> each. The last 3 indices are open, the others are closed. Graylog2 
>>>>>> sees approx. 50 million messages. New messages arrive at approx. 
>>>>>> 5 msg/sec.
>>>>>>
>>>>>> In the graylog2-server logs there are messages like this every couple 
>>>>>> of minutes:
>>>>>> org.graylog2.periodical.GarbageCollectionWarningThread - Last GC run 
>>>>>> with PS Scavenge took longer than 1 second
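>>>>>>
>>>>>> If it helps, we could enable GC logging to see what those runs are 
>>>>>> actually doing. A minimal sketch, assuming our init script passes 
>>>>>> JAVA_OPTS to the graylog2-server JVM (the variable name and the log 
>>>>>> path are assumptions about our setup):
>>>>>>
>>>>>> # extra HotSpot flags for the graylog2-server process
>>>>>> JAVA_OPTS="$JAVA_OPTS -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps -Xloggc:/var/log/graylog2-gc.log"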
>>>>>>
>>>>>> It seems graylog is running fine, a bit slow on searches, but fine.
>>>>>>
>>>>>> Attached are the config files for graylog2 and elasticsearch.
>>>>>>
>>>>>> Can someone give us a hint where these warnings come from, and what 
>>>>>> we can tweak? That would be very helpful!
>>>>>>
>>>>>> Thanks!
>>>>>> Chris...
>>>>>>
>>>>>
