Hi Steve,
Have you also restarted the ES service?
Re-run  curl -XGET http://172.20.39.61:9200/_cluster/health?pretty to check 
the status again.
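
After the restart, a quick way to confirm the replicas are gone is to pull 
just the status and unassigned_shards fields out of the health JSON. A 
minimal sketch, run here against a saved copy of the output you posted so it 
works offline (for a live check, pipe the curl output in instead):

```shell
# Saved copy of the cluster-health JSON from earlier in this thread;
# for a live check, replace the heredoc with:
#   curl -s http://172.20.39.61:9200/_cluster/health?pretty > /tmp/health.json
cat > /tmp/health.json <<'EOF'
{
  "cluster_name" : "graylog2",
  "status" : "yellow",
  "unassigned_shards" : 132
}
EOF
# Show only the fields that matter for the yellow/green question:
grep -E '"(status|unassigned_shards)"' /tmp/health.json
```

If the replica change took effect, status should read green and 
unassigned_shards should be 0.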

Alberto

On Thursday, November 12, 2015 at 1:22:10 AM UTC+1, Steve Kirkpatrick wrote:
>
> Hi Alberto,
>
> Tried setting "index.number_of_replicas: 0" in elasticsearch.yml as 
> suggested.  Then did a "sudo graylog-ctl restart".
> Upon logging into the Web UI, I see the Elasticsearch cluster is still 
> yellow, and neither of the missing indices is in the list.
>
> I wonder what keeps them from being added to the list of indices.  Are 
> there any other troubleshooting steps I'm overlooking?
>
> Thanks for the help.
>
> Steve.
>
> On Wednesday, November 11, 2015 at 12:47:44 AM UTC-8, Alberto Frosi wrote:
>>
>> Hi Steve,
>> I suggest checking the indexed data with:
>>
>> curl -XGET 'http://127.0.0.1:9200/graylog_59/_search?pretty=1'
>>
>> Your ES status is yellow because you have unassigned replicas; it should 
>> be green.
>> Edit the config file elasticsearch.yml and disable replication:
>> index.number_of_replicas: 0
>>
>> or configure it properly, depending on your needs.
>>
>> Restart the ES service.
>>
>> The first command checks whether your indexed data is present in that 
>> index; if it is, as I suspect, you can restart Graylog and check again.
>> By the way, is the status in your Graylog web UI now yellow or green?
>> Early versions of Graylog sometimes had similar problems with the refresh 
>> between ES and the web UI.
>> HTH 
>> Alberto
>>
>>
>>
>> On Tuesday, November 10, 2015 at 7:07:36 PM UTC+1, Steve Kirkpatrick 
>> wrote:
>>>
>>> Thanks for the reply Alberto.
>>>
>>> Here's the first part of the output on the first command:
>>> Note: I used the IP of the graylog-server because localhost gave me 
>>> "Connection refused".
>>>
>>> root@graylog-server:/var/log/graylog/web# curl -XGET 
>>> 172.20.39.61:9200/graylog_59/_stats?pretty
>>> {
>>>   "_shards" : {
>>>     "total" : 8,
>>>     "successful" : 4,
>>>     "failed" : 0
>>>   },
>>>   "_all" : {
>>>     "primaries" : {
>>>       "docs" : {
>>>         "count" : 1318398,
>>>         "deleted" : 0
>>>       },
>>>       "store" : {
>>>         "size_in_bytes" : 685371590,
>>>         "throttle_time_in_millis" : 0
>>>       },
>>>
>>> That looks promising.  Not sure if the rest of the output would be 
>>> helpful.
>>>
>>> The second command:
>>>
>>> root@graylog-server:/var/log/graylog/web#  curl -XGET 
>>> http://172.20.39.61:9200/_cat/shards
>>> ...
>>> graylog_59 0 p STARTED    326183 161.9mb 127.0.1.1 X-Cutioner 
>>> graylog_59 0 r UNASSIGNED                                     
>>> graylog_59 3 p STARTED    329200 163.4mb 127.0.1.1 X-Cutioner 
>>> graylog_59 3 r UNASSIGNED                                     
>>> graylog_59 1 p STARTED    330482 163.4mb 127.0.1.1 X-Cutioner 
>>> graylog_59 1 r UNASSIGNED                                     
>>> graylog_59 2 p STARTED    332533 164.6mb 127.0.1.1 X-Cutioner 
>>> graylog_59 2 r UNASSIGNED
>>> ...
>>> graylog_67 0 p STARTED    295826 145.3mb 127.0.1.1 X-Cutioner 
>>> graylog_67 0 r UNASSIGNED                                     
>>> graylog_67 3 p STARTED    299222 146.8mb 127.0.1.1 X-Cutioner 
>>> graylog_67 3 r UNASSIGNED                                     
>>> graylog_67 1 p STARTED    298980 146.4mb 127.0.1.1 X-Cutioner 
>>> graylog_67 1 r UNASSIGNED                                     
>>> graylog_67 2 p STARTED    304105 148.4mb 127.0.1.1 X-Cutioner 
>>> graylog_67 2 r UNASSIGNED  
>>>
>>> Those messages seem to match the ones from the "working" indices.
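>>>
>>> A quick way to count the unassigned replicas is to filter the 
>>> _cat/shards listing (a sketch run against a saved sample of the output 
>>> above; pipe the live curl output in instead):

```shell
# Sample of the _cat/shards output shown above, saved to a file;
# for a live check: curl -s http://172.20.39.61:9200/_cat/shards > /tmp/shards.txt
cat > /tmp/shards.txt <<'EOF'
graylog_59 0 p STARTED    326183 161.9mb 127.0.1.1 X-Cutioner
graylog_59 0 r UNASSIGNED
graylog_59 3 p STARTED    329200 163.4mb 127.0.1.1 X-Cutioner
graylog_59 3 r UNASSIGNED
EOF
# Count replica shards (column 3 == "r") that are UNASSIGNED (column 4);
# prints 2 for this sample.
awk '$3 == "r" && $4 == "UNASSIGNED"' /tmp/shards.txt | wc -l
```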
>>>
>>> Third command:
>>>
>>> root@graylog-server:/var/log/graylog/web# curl -XGET 
>>> http://172.20.39.61:9200/_cluster/health?pretty
>>> {
>>>   "cluster_name" : "graylog2",
>>>   "status" : "yellow",
>>>   "timed_out" : false,
>>>   "number_of_nodes" : 2,
>>>   "number_of_data_nodes" : 1,
>>>   "active_primary_shards" : 132,
>>>   "active_shards" : 132,
>>>   "relocating_shards" : 0,
>>>   "initializing_shards" : 0,
>>>   "unassigned_shards" : 132,
>>>   "delayed_unassigned_shards" : 0,
>>>   "number_of_pending_tasks" : 0,
>>>   "number_of_in_flight_fetch" : 0
>>> }
>>>
>>> Seems OK.
>>>
>>> Any other commands I could try or logs I should look at to determine why 
>>> those two indices are not available within the Graylog Web UI?
>>>
>>>
>>> Appreciate the help.
>>>
>>> Steve.
>>>
>>> On Tuesday, November 10, 2015 at 8:19:57 AM UTC-8, Alberto Frosi wrote:
>>>>
>>>> Hi Steve,
>>>> I suggest checking whether these indices still exist by querying ES directly:
>>>>
>>>> curl -XGET localhost:9200/graylog_59/_stats?pretty
>>>>
>>>> curl -XGET http://localhost:9200/_cat/shards
>>>>
>>>> curl -XGET http://localhost:9200/_cluster/health?pretty
>>>>
>>>> HTH
>>>> Ciao
>>>> Alberto
>>>>
>>>> On Tuesday, November 10, 2015 at 1:16:52 AM UTC+1, Steve Kirkpatrick 
>>>> wrote:
>>>>>
>>>>> Hello,
>>>>>
>>>>> Running Graylog V1.2.2 using the VM appliance from graylog.org.
>>>>>
>>>>> I've been having performance issues.  When I first start Graylog, 
>>>>> everything is snappy.  By the next day, things have become sluggish. 
>>>>>  Sometimes it takes 5-10 attempts to log in to the web interface.  
>>>>>
>>>>> One problem I have is that two of the indices have dropped off the 
>>>>> list on the Systems->Indices page.
>>>>> After some googling, I decided to try Maintenance->Recalculate index 
>>>>> ranges.
>>>>> The job completes but neither of the two indices reappear in the list.
>>>>>
>>>>> I found these errors in /var/log/graylog/server/current:
>>>>>
>>>>> 2015-11-09_23:28:53.06554 INFO  [RebuildIndexRangesJob] Re-calculating 
>>>>> index ranges.
>>>>> 2015-11-09_23:28:53.06590 INFO  [SystemJobManager] Submitted SystemJob 
>>>>> <a3802c80-8739-11e5-8dd3-005056b859d5> 
>>>>> [org.graylog2.indexer.ranges.RebuildIndexRangesJob]
>>>>> 2015-11-09_23:28:53.12839 INFO  [MongoIndexRangeService] Calculated 
>>>>> range of [graylog_47] in [56ms].
>>>>> ...
>>>>> 2015-11-09_23:28:54.49844 INFO  [MongoIndexRangeService] Calculated 
>>>>> range of [graylog_55] in [101ms].
>>>>> 2015-11-09_23:28:54.81895 INFO  [MongoIndexRangeService] Calculated 
>>>>> range of [graylog_58] in [211ms].
>>>>> 2015-11-09_23:28:54.94361 INFO  [MongoIndexRangeService] Calculated 
>>>>> range of [graylog_57] in [123ms].
>>>>> 2015-11-09_23:28:55.04214 ERROR [Indices] Error while calculating 
>>>>> timestamp stats in index <graylog_59>
>>>>> 2015-11-09_23:28:55.04216 
>>>>> org.elasticsearch.action.search.SearchPhaseExecutionException: Failed to 
>>>>> execute phase [query], all shards failed; shardFailures 
>>>>> {[XnEo6hwLTeyUZ4EluxaIEw][graylog_59][0]: 
>>>>> RemoteTransportException[[X-Cutioner][inet
>>>>> [/172.20.39.61:9300]][indices:data/read/search[phase/query]]]; nested: 
>>>>> ClassCastException; }{[XnEo6hwLTeyUZ4EluxaIEw][graylog_59][1]: 
>>>>> RemoteTransportException[[X-Cutioner][inet[/172.20.39.61:9300]][indices:data/read/search[phase/query]]];
>>>>>  
>>>>> nested: ClassCastException; }{[XnEo6hwLTeyUZ4EluxaIEw][graylog_59][2]: 
>>>>> RemoteTransportException[[X-Cutioner][inet[/172.20.39.61:9300]][indices:data/read/search[phase/query]]];
>>>>>  
>>>>> nested: ClassCastException; }{[XnEo6hwLTeyUZ4EluxaIEw][graylog_
>>>>> 59][3]: 
>>>>> RemoteTransportException[[X-Cutioner][inet[/172.20.39.61:9300]][indices:data/read/search[phase/query]]];
>>>>>  
>>>>> nested: ClassCastException; }
>>>>> 2015-11-09_23:28:55.04217       at 
>>>>> org.elasticsearch.action.search.type.TransportSearchTypeAction$BaseAsyncAction.onFirstPhaseResult(TransportSearchTypeAction.java:237)
>>>>> 2015-11-09_23:28:55.04218       at 
>>>>> org.elasticsearch.action.search.type.TransportSearchTypeAction$BaseAsyncAction$1.onFailure(TransportSearchTypeAction.java:183)
>>>>> 2015-11-09_23:28:55.04218       at 
>>>>> org.elasticsearch.search.action.SearchServiceTransportAction$6.handleException(SearchServiceTransportAction.java:249)
>>>>> 2015-11-09_23:28:55.04219       at 
>>>>> org.elasticsearch.transport.netty.MessageChannelHandler.handleException(MessageChannelHandler.java:190)
>>>>> 2015-11-09_23:28:55.04219       at 
>>>>> org.elasticsearch.transport.netty.MessageChannelHandler.handlerResponseError(MessageChannelHandler.java:180)
>>>>> 2015-11-09_23:28:55.04220       at 
>>>>> org.elasticsearch.transport.netty.MessageChannelHandler.messageReceived(MessageChannelHandler.java:130)
>>>>> 2015-11-09_23:28:55.04220       at 
>>>>> org.elasticsearch.common.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
>>>>> 2015-11-09_23:28:55.04220       at 
>>>>> org.elasticsearch.common.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
>>>>> 2015-11-09_23:28:55.04221       at 
>>>>> org.elasticsearch.common.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
>>>>> 2015-11-09_23:28:55.04221       at 
>>>>> org.elasticsearch.common.netty.channel.Channels.fireMessageReceived(Channels.java:296)
>>>>> 2015-11-09_23:28:55.04222       at 
>>>>> org.elasticsearch.common.netty.handler.codec.frame.FrameDecoder.unfoldAndFireMessageReceived(FrameDecoder.java:462)
>>>>> 2015-11-09_23:28:55.04222       at 
>>>>> org.elasticsearch.common.netty.handler.codec.frame.FrameDecoder.callDecode(FrameDecoder.java:443)
>>>>> 2015-11-09_23:28:55.04223       at 
>>>>> org.elasticsearch.common.netty.handler.codec.frame.FrameDecoder.messageReceived(FrameDecoder.java:303)
>>>>> 2015-11-09_23:28:55.04223       at 
>>>>> org.elasticsearch.common.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
>>>>> 2015-11-09_23:28:55.04224       at 
>>>>> org.elasticsearch.common.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
>>>>> 2015-11-09_23:28:55.04225       at 
>>>>> org.elasticsearch.common.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:559)
>>>>> 2015-11-09_23:28:55.04225       at 
>>>>> org.elasticsearch.common.netty.channel.Channels.fireMessageReceived(Channels.java:268)
>>>>> 2015-11-09_23:28:55.04226       at 
>>>>> org.elasticsearch.common.netty.channel.Channels.fireMessageReceived(Channels.java:255)
>>>>> 2015-11-09_23:28:55.04226       at 
>>>>> org.elasticsearch.common.netty.channel.socket.nio.NioWorker.read(NioWorker.java:88)
>>>>> 2015-11-09_23:28:55.04226       at 
>>>>> org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.process(AbstractNioWorker.java:108)
>>>>> 2015-11-09_23:28:55.04227       at 
>>>>> org.elasticsearch.common.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:337)
>>>>> 2015-11-09_23:28:55.04228       at 
>>>>> org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89)
>>>>> 2015-11-09_23:28:55.04228       at 
>>>>> org.elasticsearch.common.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178)
>>>>> 2015-11-09_23:28:55.04228       at 
>>>>> org.elasticsearch.common.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
>>>>> 2015-11-09_23:28:55.04229       at 
>>>>> org.elasticsearch.common.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
>>>>> 2015-11-09_23:28:55.04229       at 
>>>>> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>>>>> 2015-11-09_23:28:55.04230       at 
>>>>> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>>>>> 2015-11-09_23:28:55.04230       at 
>>>>> java.lang.Thread.run(Thread.java:745)
>>>>> 2015-11-09_23:28:55.04250 INFO  [RebuildIndexRangesJob] Could not 
>>>>> calculate range of index [graylog_59]. Skipping.
>>>>> 2015-11-09_23:28:55.04252 
>>>>> org.elasticsearch.indices.IndexMissingException: [graylog_59] missing
>>>>> 2015-11-09_23:28:55.04252       at 
>>>>> org.graylog2.indexer.indices.Indices.timestampStatsOfIndex(Indices.java:482)
>>>>> 2015-11-09_23:28:55.04253       at 
>>>>> org.graylog2.indexer.ranges.MongoIndexRangeService.calculateRange(MongoIndexRangeService.java:118)
>>>>> 2015-11-09_23:28:55.04253       at 
>>>>> org.graylog2.indexer.ranges.RebuildIndexRangesJob.execute(RebuildIndexRangesJob.java:96)
>>>>> 2015-11-09_23:28:55.04253       at 
>>>>> org.graylog2.system.jobs.SystemJobManager$1.run(SystemJobManager.java:88)
>>>>> 2015-11-09_23:28:55.04254       at 
>>>>> com.codahale.metrics.InstrumentedScheduledExecutorService$InstrumentedRunnable.run(InstrumentedScheduledExecutorService.java:235)
>>>>> 2015-11-09_23:28:55.04254       at 
>>>>> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>>>>> 2015-11-09_23:28:55.04254       at 
>>>>> java.util.concurrent.FutureTask.run(FutureTask.java:266)
>>>>> 2015-11-09_23:28:55.04255       at 
>>>>> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
>>>>> 2015-11-09_23:28:55.04255       at 
>>>>> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
>>>>> 2015-11-09_23:28:55.04256       at 
>>>>> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>>>>> 2015-11-09_23:28:55.04256       at 
>>>>> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>>>>> 2015-11-09_23:28:55.04257       at 
>>>>> java.lang.Thread.run(Thread.java:745)
>>>>> 2015-11-09_23:28:55.20408 INFO  [MongoIndexRangeService] Calculated 
>>>>> range of [graylog_61] in [161ms].
>>>>> 2015-11-09_23:28:55.38073 INFO  [MongoIndexRangeService] Calculated 
>>>>> range of [graylog_60] in [175ms].
>>>>>
>>>>> graylog_59 is one of the two missing indices.
>>>>>
>>>>> Is it possible to "fix" these indices and gain access to the data 
>>>>> contained within them?
>>>>> I originally configured the system to keep 30 indices, each with 24 
>>>>> hours of data.
>>>>> Today I reconfigured that to 60 indices at 12 hours each.  Not sure if 
>>>>> that will help with the performance issues.
>>>>> Is there a rule-of-thumb for index sizing?
>>>>>
>>>>> Anything else I should be looking at to figure out the performance 
>>>>> issues?
>>>>> The performance graphs for the VM look OK in vSphere; no resources 
>>>>> appear to be overwhelmed.
>>>>>
>>>>> Thanks for any guidance.
>>>>>
>>>>> Steve.
>>>>>
>>>>>

-- 
You received this message because you are subscribed to the Google Groups 
"Graylog Users" group.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/graylog2/3d421103-03c4-467f-8d67-135f0a4b0124%40googlegroups.com.
