The Elasticsearch cluster is still red. Here is the current log after a reboot:
[2016-02-10 10:42:38,305][INFO ][cluster.service          ] [Wicked] added {[graylog2-server][unOjK3nMR0mVfRdAtGn6gw][graylog][inet[/myipaddress:9350]]{client=true, data=false, master=false},}, reason: zen-disco-receive(join from node[$
[2016-02-10 10:50:32,595][INFO ][cluster.service          ] [Wicked] removed {[graylog2-server][unOjK3nMR0mVfRdAtGn6gw][graylog][inet[/myipaddress:9350]]{client=true, data=false, master=false},}, reason: zen-disco-node_left([graylog2-s$
[2016-02-10 10:50:51,036][INFO ][cluster.service          ] [Wicked] added {[graylog2-server][cf06ZtJ6SlK_PJzJeNlH2Q][graylog][inet[/myipaddress:9350]]{client=true, data=false, master=false},}, reason: zen-disco-receive(join from node[$
[2016-02-10 10:53:06,206][INFO ][node                     ] [Wicked] stopping ...
[2016-02-10 10:53:07,173][INFO ][node                     ] [Wicked] stopped
[2016-02-10 10:53:07,175][INFO ][node                     ] [Wicked] closing ...
[2016-02-10 10:53:07,197][INFO ][node                     ] [Wicked] closed
[2016-02-10 10:53:46,674][INFO ][node                     ] [Phyla-Vell] version[1.7.4], pid[917], build[0d3159b/2015-12-15T11:25:18Z]
[2016-02-10 10:53:46,689][INFO ][node                     ] [Phyla-Vell] initializing ...
[2016-02-10 10:53:47,164][INFO ][plugins                  ] [Phyla-Vell] loaded [], sites [kopf]
[2016-02-10 10:53:47,252][INFO ][env                      ] [Phyla-Vell] using [1] data paths, mounts [[/var/opt/graylog/data (/dev/mapper/graylog--vg-data_LV)]], net usable_space [65gb], net total_space [294.1gb], types [ext4]
[2016-02-10 10:53:54,630][INFO ][node                     ] [Phyla-Vell] initialized
[2016-02-10 10:53:54,630][INFO ][node                     ] [Phyla-Vell] starting ...
[2016-02-10 10:53:54,710][INFO ][transport                ] [Phyla-Vell] bound_address {inet[/myipaddress:9300]}, publish_address {inet[/myipaddress:9300]}
[2016-02-10 10:53:54,728][INFO ][discovery                ] [Phyla-Vell] graylog2/Th0Nu_4iTtSjVFcccOKOLg
[2016-02-10 10:54:04,754][INFO ][cluster.service          ] [Phyla-Vell] new_master [Phyla-Vell][Th0Nu_4iTtSjVFcccOKOLg][graylog][inet[/myipaddress:9300]], reason: zen-disco-join (elected_as_master)
[2016-02-10 10:54:04,901][INFO ][http                     ] [Phyla-Vell] bound_address {inet[/myipaddress:9200]}, publish_address {inet[/myipaddress:9200]}
[2016-02-10 10:54:04,901][INFO ][node                     ] [Phyla-Vell] started
[2016-02-10 10:54:05,095][INFO ][gateway                  ] [Phyla-Vell] recovered [91] indices into cluster_state
[2016-02-10 10:54:06,598][INFO ][cluster.service          ] [Phyla-Vell] added {[graylog2-server][kHyQVl-7SAylptfIbEgT5g][graylog][inet[/myipaddress:9350]]{client=true, data=false, master=false},}, reason: zen-disco-receive(join from n$
[2016-02-10 11:22:52,285][INFO ][node                     ] [Phyla-Vell] stopping ...
[2016-02-10 11:22:53,477][INFO ][node                     ] [Phyla-Vell] stopped
[2016-02-10 11:22:53,477][INFO ][node                     ] [Phyla-Vell] closing ...
[2016-02-10 11:22:53,503][INFO ][node                     ] [Phyla-Vell] closed
[2016-02-10 11:23:31,080][INFO ][node                     ] [Griffin] version[1.7.4], pid[930], build[0d3159b/2015-12-15T11:25:18Z]
[2016-02-10 11:23:31,088][INFO ][node                     ] [Griffin] initializing ...
[2016-02-10 11:23:31,353][INFO ][plugins                  ] [Griffin] loaded [], sites [kopf]
[2016-02-10 11:23:31,418][INFO ][env                      ] [Griffin] using [1] data paths, mounts [[/var/opt/graylog/data (/dev/mapper/graylog--vg-data_LV)]], net usable_space [64.9gb], net total_space [294.1gb], types [ext4]
[2016-02-10 11:23:39,772][INFO ][node                     ] [Griffin] initialized
[2016-02-10 11:23:39,773][INFO ][node                     ] [Griffin] starting ...
[2016-02-10 11:23:40,049][INFO ][transport                ] [Griffin] bound_address {inet[/myipaddress:9300]}, publish_address {inet[/myipaddress:9300]}
[2016-02-10 11:23:40,062][INFO ][discovery                ] [Griffin] graylog2/Ef-tsRRsQ2SsakMz88VJCg
[2016-02-10 11:23:50,166][INFO ][cluster.service          ] [Griffin] new_master [Griffin][Ef-tsRRsQ2SsakMz88VJCg][graylog][inet[/myipaddress:9300]], reason: zen-disco-join (elected_as_master)
[2016-02-10 11:23:50,378][INFO ][http                     ] [Griffin] bound_address {inet[/myipaddress:9200]}, publish_address {inet[/myipaddress:9200]}
[2016-02-10 11:23:50,380][INFO ][node                     ] [Griffin] started
[2016-02-10 11:23:50,718][INFO ][gateway                  ] [Griffin] recovered [91] indices into cluster_state
[2016-02-10 11:23:53,532][INFO ][cluster.service          ] [Griffin] added {[graylog2-server][aYG8InOnRtSYR4Ri6wXsKQ][graylog][inet[/myipaddress:9350]]{client=true, data=false, master=false},}, reason: zen-disco-receive(join from node$
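
For what it's worth, the cluster state can also be checked directly against the
Elasticsearch HTTP API, independent of Graylog. A minimal Python sketch (the
host/port are placeholders for the Elasticsearch node):

import json
import urllib.request

# Placeholder host/port: point this at the Elasticsearch HTTP API of the node.
with urllib.request.urlopen("http://localhost:9200/_cluster/health") as resp:
    health = json.loads(resp.read().decode("utf-8"))

# "status" is green/yellow/red; unassigned primary shards are what make it red.
print(health["status"], health["unassigned_shards"])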



On Wednesday, February 10, 2016 at 10:27:50 AM UTC-6, jeremys wrote:
>
> Correct, retention_strategy is already set to delete. I will look at the API 
> you suggested.
>
> # Decide what happens with the oldest indices when the maximum number of indices is reached.
> # The following strategies are available:
> #   - delete # Deletes the index completely (Default)
> #   - close # Closes the index and hides it from the system. Can be re-opened later.
> retention_strategy = delete
>
>
> On Wednesday, February 10, 2016 at 10:21:46 AM UTC-6, Jochen Schalanda 
> wrote:
>>
>> Hi,
>>
>> Graylog's index retention will only delete indices if retention_strategy 
>> is set to delete (see 
>> https://github.com/Graylog2/graylog2-server/blob/1.3.3/misc/graylog2.conf#L129-L133). 
>> The index retention job runs every 5 minutes and removes (or closes) old 
>> indices.
>>
>> You can also safely delete indices in Elasticsearch using the Delete Index API 
>> (https://www.elastic.co/guide/en/elasticsearch/reference/1.7/indices-delete-index.html) 
>> to free more disk space.
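>>
>> For illustration, deleting one index boils down to a single HTTP DELETE 
>> request. A minimal Python sketch (the host/port and the index name 
>> graylog2_0 are just placeholders; substitute whichever old index you want 
>> to drop):
>>
>> import urllib.request
>>
>> # Placeholders: point this at your Elasticsearch node and the index to delete.
>> url = "http://localhost:9200/graylog2_0"
>> req = urllib.request.Request(url, method="DELETE")
>> with urllib.request.urlopen(req) as resp:
>>     # Elasticsearch answers with {"acknowledged":true} on success.
>>     print(resp.status, resp.read().decode("utf-8"))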
>>
>>
>> Cheers,
>> Jochen
>>
>> On Wednesday, 10 February 2016 17:11:25 UTC+1, jeremys wrote:
>>>
>>> Thanks.
>>>
>>> I set my maximum index count from 150 back to 50, but it is not deleting 
>>> the old indices. Currently, I have 128 indices. Am I right in assuming 
>>> that it is not a good idea to delete those manually?
>>>
>>> Here is the output:
>>>
>>> [2016-02-09 00:00:36,926][WARN ][cluster.routing.allocation.decider] 
>>> [Siryn] high disk watermark [90%] exceeded on 
>>> [9K50-EAOTHiZYQN5ziTtlg][Siryn] free: 26.4gb[8.9%], shards will be 
>>> relocated away from this node
>>> [2016-02-09 00:01:06,959][WARN ][cluster.routing.allocation.decider] 
>>> [Siryn] high disk watermark [90%] exceeded on 
>>> [9K50-EAOTHiZYQN5ziTtlg][Siryn] free: 26.4gb[8.9%], shards will be 
>>> relocated away from this node
>>> [2016-02-09 00:01:36,924][WARN ][cluster.routing.allocation.decider] 
>>> [Siryn] high disk watermark [90%] exceeded on 
>>> [9K50-EAOTHiZYQN5ziTtlg][Siryn] free: 26.4gb[8.9%], shards will be 
>>> relocated away from this node
>>> [2016-02-09 00:01:36,926][INFO ][cluster.routing.allocation.decider] 
>>> [Siryn] high disk watermark exceeded on one or more nodes, rerouting shards
>>> [2016-02-09 00:02:06,937][WARN ][cluster.routing.allocation.decider] 
>>> [Siryn] high disk watermark [90%] exceeded on 
>>> [9K50-EAOTHiZYQN5ziTtlg][Siryn] free: 26.4gb[8.9%], shards will be 
>>> relocated away from this node
>>> [2016-02-09 00:02:36,924][WARN ][cluster.routing.allocation.decider] 
>>> [Siryn] high disk watermark [90%] exceeded on 
>>> [9K50-EAOTHiZYQN5ziTtlg][Siryn] free: 26.4gb[8.9%], shards will be 
>>> relocated away from this node
>>> [2016-02-09 00:03:06,927][WARN ][cluster.routing.allocation.decider] 
>>> [Siryn] high disk watermark [90%] exceeded on 
>>> [9K50-EAOTHiZYQN5ziTtlg][Siryn] free: 26.4gb[8.9%], shards will be 
>>> relocated away from this node
>>> [2016-02-09 00:03:06,928][INFO ][cluster.routing.allocation.decider] 
>>> [Siryn] high disk watermark exceeded on one or more nodes, rerouting shards
>>> [2016-02-09 00:03:36,936][WARN ][cluster.routing.allocation.decider] 
>>> [Siryn] high disk watermark [90%] exceeded on 
>>> [9K50-EAOTHiZYQN5ziTtlg][Siryn] free: 26.4gb[8.9%], shards will be 
>>> relocated away from this node
>>> [2016-02-09 00:04:06,953][WARN ][cluster.routing.allocation.decider] 
>>> [Siryn] high disk watermark [90%] exceeded on 
>>> [9K50-EAOTHiZYQN5ziTtlg][Siryn] free: 26.4gb[8.9%], shards will be 
>>> relocated away from this node
>>> [2016-02-09 00:04:06,954][INFO ][cluster.routing.allocation.decider] 
>>> [Siryn] high disk watermark exceeded on one or more nodes, rerouting shards
>>> [2016-02-09 00:04:36,933][WARN ][cluster.routing.allocation.decider] 
>>> [Siryn] high disk watermark [90%] exceeded on 
>>> [9K50-EAOTHiZYQN5ziTtlg][Siryn] free: 26.4gb[8.9%], shards will be 
>>> relocated away from this node
>>> [2016-02-09 00:05:06,953][WARN ][cluster.routing.allocation.decider] 
>>> [Siryn] high disk watermark [90%] exceeded on 
>>> [9K50-EAOTHiZYQN5ziTtlg][Siryn] free: 26.4gb[8.9%], shards will be 
>>> relocated away from this node
>>> [2016-02-09 00:05:06,954][INFO ][cluster.routing.allocation.decider] 
>>> [Siryn] high disk watermark exceeded on one or more nodes, rerouting shards
>>> [2016-02-09 00:05:36,959][WARN ][cluster.routing.allocation.decider] 
>>> [Siryn] high disk watermark [90%] exceeded on 
>>> [9K50-EAOTHiZYQN5ziTtlg][Siryn] free: 26.4gb[8.9%], shards will be 
>>> relocated away from this node
>>> [2016-02-09 00:06:06,927][WARN ][cluster.routing.allocation.decider] 
>>> [Siryn] high disk watermark [90%] exceeded on 
>>> [9K50-EAOTHiZYQN5ziTtlg][Siryn] free: 26.4gb[8.9%], shards will be 
>>> relocated away from this node
>>> [2016-02-09 00:06:36,926][WARN ][cluster.routing.allocation.decider] 
>>> [Siryn] high disk watermark [90%] exceeded on 
>>> [9K50-EAOTHiZYQN5ziTtlg][Siryn] free: 26.4gb[8.9%], shards will be 
>>> relocated away from this node
>>> [2016-02-09 00:06:36,928][INFO ][cluster.routing.allocation.decider] 
>>> [Siryn] high disk watermark exceeded on one or more nodes, rerouting shards
>>> [2016-02-09 00:07:06,932][WARN ][cluster.routing.allocation.decider] 
>>> [Siryn] high disk watermark [90%] exceeded on 
>>> [9K50-EAOTHiZYQN5ziTtlg][Siryn] free: 26.4gb[8.9%], shards will be 
>>> relocated away from this node
>>> [2016-02-09 00:07:36,926][WARN ][cluster.routing.allocation.decider] 
>>> [Siryn] high disk watermark [90%] exceeded on 
>>> [9K50-EAOTHiZYQN5ziTtlg][Siryn] free: 26.4gb[8.9%], shards will be 
>>> relocated away from this node
>>> [2016-02-09 00:08:06,925][WARN ][cluster.routing.allocation.decider] 
>>> [Siryn] high disk watermark [90%] exceeded on 
>>> [9K50-EAOTHiZYQN5ziTtlg][Siryn] free: 26.4gb[8.9%], shards will be 
>>> relocated away from this node
>>> [2016-02-09 00:08:06,927][INFO ][cluster.routing.allocation.decider] 
>>> [Siryn] high disk watermark exceeded on one or more nodes, rerouting shards
>>> [2016-02-09 00:08:36,963][WARN ][cluster.routing.allocation.decider] 
>>> [Siryn] high disk watermark [90%] exceeded on 
>>> [9K50-EAOTHiZYQN5ziTtlg][Siryn] free: 26.4gb[8.9%], shards will be 
>>> relocated away from this node
>>> [2016-02-09 00:09:06,925][WARN ][cluster.routing.allocation.decider] 
>>> [Siryn] high disk watermark [90%] exceeded on 
>>> [9K50-EAOTHiZYQN5ziTtlg][Siryn] free: 26.4gb[8.9%], shards will be 
>>> relocated away from this node
>>> [2016-02-09 00:09:36,939][WARN ][cluster.routing.allocation.decider] 
>>> [Siryn] high disk watermark [90%] exceeded on 
>>> [9K50-EAOTHiZYQN5ziTtlg][Siryn] free: 26.4gb[8.9%], shards will be 
>>> relocated away from this node
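>>>
>>> For reference, a quick way to list all indices and their sizes (to see 
>>> which ones are the oldest and largest) is the _cat/indices endpoint; a 
>>> minimal Python sketch, with the host/port being placeholders:
>>>
>>> import urllib.request
>>>
>>> # Placeholder host/port: _cat/indices returns a plain-text table with one
>>> # row per index (health, index name, shard counts, doc count, store size).
>>> with urllib.request.urlopen("http://localhost:9200/_cat/indices?v") as resp:
>>>     print(resp.read().decode("utf-8"))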
>>>
>>>
>>> On Wednesday, February 10, 2016 at 9:59:42 AM UTC-6, Jochen Schalanda 
>>> wrote:
>>>>
>>>> Hi,
>>>>
>>>> If you're using one of the official virtual appliances, you can use 
>>>> sudo to run commands as the super user (root), or simply run sudo -i to 
>>>> open a shell with root privileges.
>>>>
>>>>
>>>> Cheers,
>>>> Jochen
>>>>
>>>> On Wednesday, 10 February 2016 16:30:53 UTC+1, jeremys wrote:
>>>>>
>>>>> If I am looking in the correct place, I am getting "permission denied" 
>>>>> when trying to view the elasticsearch folder under /var/log/graylog.
>>>>>
>>>>> On Wednesday, February 10, 2016 at 9:25:46 AM UTC-6, Jochen Schalanda 
>>>>> wrote:
>>>>>>
>>>>>> Hi,
>>>>>>
>>>>>> are there any error messages in the logs of your Elasticsearch nodes?
>>>>>>
>>>>>>
>>>>>> Cheers,
>>>>>> Jochen
>>>>>>
>>>>>> On Wednesday, 10 February 2016 15:54:34 UTC+1, jeremys wrote:
>>>>>>>
>>>>>>> Two days ago, I noticed that my Elasticsearch cluster was 
>>>>>>> unavailable.  I've followed the suggestions provided in the setup 
>>>>>>> documentation but I still cannot get the cluster to turn green. 
>>>>>>>
>>>>>>> Graylog could not successfully connect to the Elasticsearch cluster. 
>>>>>>> If you're using multicast, check that it is working in your network and 
>>>>>>> that Elasticsearch is accessible. Also check that the cluster name 
>>>>>>> setting 
>>>>>>> is correct. Read how to fix this in the Elasticsearch setup 
>>>>>>> documentation. 
>>>>>>> <http://docs.graylog.org/en/1.3/pages/configuring_es.html#configuration>
>>>>>>>
>>>>>>> No changes were made between the time it was working and when it 
>>>>>>> stopped working. Any help would be greatly appreciated. Thank you!
>>>>>>>
>>>>>>
