Thanks, Kay! I did upgrade from an older version of Graylog and 
Elasticsearch once v0.20.0 came out.

I ran that job from the web interface, and it looks like it cycled through 
all my indices with messages like:

2014-05-28 14:34:30,460 INFO : org.graylog2.system.jobs.SystemJobManager - 
Submitted SystemJob <da2267c5-e6af-11e3-a691-005056b274fe> 
[org.graylog2.indexer.ranges.RebuildIndexRangesJob]
2014-05-28 14:34:30,460 INFO : 
org.graylog2.indexer.ranges.RebuildIndexRangesJob - Re-calculating index 
ranges.
2014-05-28 14:34:30,729 INFO : 
org.graylog2.indexer.ranges.RebuildIndexRangesJob - Calculated range of 
[graylog2_56] in [239ms].
2014-05-28 14:34:30,959 INFO : 
org.graylog2.indexer.ranges.RebuildIndexRangesJob - Calculated range of 
[graylog2_55] in [229ms].
2014-05-28 14:34:31,158 INFO : 
org.graylog2.indexer.ranges.RebuildIndexRangesJob - Calculated range of 
[graylog2_54] in [199ms].
2014-05-28 14:34:31,345 INFO : 
org.graylog2.indexer.ranges.RebuildIndexRangesJob - Calculated range of 
[graylog2_53] in [187ms].

...and so on through the rest, until it hits graylog2_recent:

2014-05-28 14:34:51,242 INFO : 
org.graylog2.indexer.ranges.RebuildIndexRangesJob - Could not calculate 
range of index [graylog2_recent]. Skipping.
org.elasticsearch.action.search.SearchPhaseExecutionException: Failed to 
execute phase [query], all shards failed; shardFailures 
{[cjdL5h8tSo6VK3zE-Gpvxw][graylog2_recent][0]: 
RemoteTransportException[[GLNode1][inet[/:9300]][search/phase/query]]; 
nested: SearchParseException[[graylog2_recent][0]: 
query[ConstantScore(*:*)],from[-1],size[1]: Parse Failure [Failed to parse 
source 
[{"size":1,"query":{"match_all":{}},"sort":[{"timestamp":{"order":"desc"}}]}]]];
nested: SearchParseException[[graylog2_recent][0]: 
query[ConstantScore(*:*)],from[-1],size[1]: Parse Failure [No mapping found 
for [timestamp] in order to sort on]]; 
}{[XB6yIWzpRHOa8wOJtuKe1g][graylog2_recent][3]: 
RemoteTransportException[[GLNode2][inet[/:9300]][search/phase/query]]; 
nested: SearchParseException[[graylog2_recent][3]: 
query[ConstantScore(*:*)],from[-1],size[1]: Parse Failure [Failed to parse 
source 
[{"size":1,"query":{"match_all":{}},"sort":[{"timestamp":{"order":"desc"}}]}]]];
nested: SearchParseException[[graylog2_recent][3]: 
query[ConstantScore(*:*)],from[-1],size[1]: Parse Failure [No mapping found 
for [timestamp] in order to sort on]]; 
}{[XB6yIWzpRHOa8wOJtuKe1g][graylog2_recent][2]: 
RemoteTransportException[[GLNode2][inet[/:9300]][search/phase/query]]; 
nested: SearchParseException[[graylog2_recent][2]: 
query[ConstantScore(*:*)],from[-1],size[1]: Parse Failure [Failed to parse 
source 
[{"size":1,"query":{"match_all":{}},"sort":[{"timestamp":{"order":"desc"}}]}]]];
nested: SearchParseException[[graylog2_recent][2]: 
query[ConstantScore(*:*)],from[-1],size[1]: Parse Failure [No mapping found 
for [timestamp] in order to sort on]]; 
}{[cjdL5h8tSo6VK3zE-Gpvxw][graylog2_recent][1]: 
RemoteTransportException[[GLNode1][inet[/:9300]][search/phase/query]]; 
nested: SearchParseException[[graylog2_recent][1]: 
query[ConstantScore(*:*)],from[-1],size[1]: Parse Failure [Failed to parse 
source 
[{"size":1,"query":{"match_all":{}},"sort":[{"timestamp":{"order":"desc"}}]}]]];
nested: SearchParseException[[graylog2_recent][1]: 
query[ConstantScore(*:*)],from[-1],size[1]: Parse Failure [No mapping found 
for [timestamp] in order to sort on]]; }
        at 
org.elasticsearch.action.search.type.TransportSearchTypeAction$BaseAsyncAction.onFirstPhaseResult(TransportSearchTypeAction.java:272)
        at 
org.elasticsearch.action.search.type.TransportSearchTypeAction$BaseAsyncAction$3.onFailure(TransportSearchTypeAction.java:224)
        at 
org.elasticsearch.search.action.SearchServiceTransportAction$4.handleException(SearchServiceTransportAction.java:222)
        at 
org.elasticsearch.transport.netty.MessageChannelHandler.handleException(MessageChannelHandler.java:181)
        at 
org.elasticsearch.transport.netty.MessageChannelHandler.handlerResponseError(MessageChannelHandler.java:171)
        at 
org.elasticsearch.transport.netty.MessageChannelHandler.messageReceived(MessageChannelHandler.java:123)
        at 
org.elasticsearch.common.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
        at 
org.elasticsearch.common.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
        at 
org.elasticsearch.common.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
        at 
org.elasticsearch.common.netty.channel.Channels.fireMessageReceived(Channels.java:296)
        at 
org.elasticsearch.common.netty.handler.codec.frame.FrameDecoder.unfoldAndFireMessageReceived(FrameDecoder.java:462)
        at 
org.elasticsearch.common.netty.handler.codec.frame.FrameDecoder.callDecode(FrameDecoder.java:443)
        at 
org.elasticsearch.common.netty.handler.codec.frame.FrameDecoder.messageReceived(FrameDecoder.java:303)
        at 
org.elasticsearch.common.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
        at 
org.elasticsearch.common.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
        at 
org.elasticsearch.common.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:559)
        at 
org.elasticsearch.common.netty.channel.Channels.fireMessageReceived(Channels.java:268)
        at 
org.elasticsearch.common.netty.channel.Channels.fireMessageReceived(Channels.java:255)
        at 
org.elasticsearch.common.netty.channel.socket.nio.NioWorker.read(NioWorker.java:88)
        at 
org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.process(AbstractNioWorker.java:108)
        at 
org.elasticsearch.common.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:318)
        at 
org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89)
        at 
org.elasticsearch.common.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178)
        at 
org.elasticsearch.common.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
        at 
org.elasticsearch.common.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
        at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:744)

Then at the end:

2014-05-28 14:35:05,052 INFO : 
org.graylog2.indexer.ranges.RebuildIndexRangesJob - Done calculating index 
ranges for 172 indices. Took 34562ms.
2014-05-28 14:35:05,052 INFO : org.graylog2.system.jobs.SystemJobManager - 
SystemJob <da2267c5-e6af-11e3-a691-005056b274fe> 
[org.graylog2.indexer.ranges.RebuildIndexRangesJob] finished in 34591ms.

What does that job actually do? Should those older indices be marked for 
deletion now?
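In case it helps anyone else hitting this: the "No mapping found for [timestamp]" errors above suggest the graylog2_recent index simply has no timestamp field mapped, so the range job cannot sort it. Here is a rough sketch of how that could be confirmed and the stale index removed via the ES REST API. The host/port (localhost:9200) is an assumption for your cluster, so the script only prints the curl commands rather than running them:

```shell
# Sketch only: prints the curl commands instead of executing them,
# since the ES endpoint below is an assumption -- adjust for your nodes.
ES="http://localhost:9200"

# The range job sorts each index on "timestamp" to find its newest message;
# the exceptions above show graylog2_recent has no "timestamp" mapping.
# Inspect the mapping to confirm:
echo "curl -s $ES/graylog2_recent/_mapping?pretty"

# If the index is a leftover from the pre-0.20 setup and no longer needed,
# it can be deleted outright, which frees its disk space immediately:
echo "curl -s -XDELETE $ES/graylog2_recent"
```

Deleting an index this way bypasses Graylog's retention bookkeeping, so it is worth re-running the "recalculate index ranges" job afterwards.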


On Wednesday, May 28, 2014 3:13:33 PM UTC-6, Kay Röpke wrote:
>
> Hi!
>
> graylog2_recent sounds like you upgraded from an older version. Those 
> indices might not get collected correctly.
>
> Please try the "recalculate index ranges" option in the action menu on 
> the system/indices page of the web interface. After that, the server should 
> perform retention properly. If not, you can still manually clear out 
> unwanted indices, but be sure to perform that maintenance task from that 
> page.
>
> Best,
> Kay
> On May 28, 2014 11:04 PM, "Tyler Bell" <[email protected]> wrote:
>
>> Hi All - I'm using Graylog v0.20.1 and ES 0.90.10, and my disk space is 
>> maxing out because indices are not rotating out. My elasticsearch data 
>> directory shows indices graylog2_recent, graylog2_0, ..., graylog2_173.
>>
>> Anyone have experience with this? I'm going to use the ES API to clear 
>> out some older indices and get my setup working again, but I need to 
>> figure out the rotation issue for a long-term resolution.
>>
>> I'm using default config settings:
>>
>> # Embedded elasticsearch configuration file
>> # pay attention to the working directory of the server, maybe use an 
>> absolute path here
>> elasticsearch_config_file = /etc/graylog2-elasticsearch.yml
>>
>> elasticsearch_max_docs_per_index = 20000000
>>
>> # How many indices do you want to keep?
>> # elasticsearch_max_number_of_indices * elasticsearch_max_docs_per_index
>> # = total number of messages in your setup
>> elasticsearch_max_number_of_indices = 20
>>
>> # Decide what happens with the oldest indices when the maximum number of 
>> indices is reached.
>> # The following strategies are available:
>> #   - delete # Deletes the index completely (Default)
>> #   - close # Closes the index and hides it from the system. Can be 
>> re-opened later.
>> retention_strategy = delete
>>
>>  -- 
>> You received this message because you are subscribed to the Google Groups 
>> "graylog2" group.
>> To unsubscribe from this group and stop receiving emails from it, send an 
>> email to [email protected].
>> For more options, visit https://groups.google.com/d/optout.
>>
>
