I ended up deleting some of the oldest indices I had (graylog2_1, 
graylog2_2, etc) to free up some space. I noticed that these indices were 
bigger than the others. Later on, I restarted the Graylog server for 
another issue I was correcting, and I saw Graylog marking all but the 
newest 20 indices for deletion. Everything is rotating fine now.

I'm not sure what was in those older indices, but I do know they existed 
before I upgraded to Graylog v0.20.1 and ES 0.90.10. 

On Wednesday, May 28, 2014 6:26:58 PM UTC-6, Kay Röpke wrote:
>
> I don't think that you can run it manually.
> You can increase the log level of 
> org.graylog2.periodical.IndexRetentionThread to debug to get more 
> information.
> Either use the logging page in the system section, or use a custom 
> log4j.xml and restart the server. It's described at 
> http://support.torch.sh/help/kb/graylog2-server/starting-and-stopping-the-server-cli-parameters#supplying-external-logging-configuration
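
For the custom-log4j.xml route Kay mentions, a fragment like the following would raise only that thread's verbosity. This is a hedged sketch: the logger name is taken from Kay's message, and the element names follow the stock log4j 1.x XML format that Graylog 0.20 shipped with; the rest of your log4j.xml stays as it was.

```xml
<!-- Assumed log4j 1.x XML config fragment: debug-level logging for the
     index retention thread only, leaving the root logger untouched. -->
<logger name="org.graylog2.periodical.IndexRetentionThread">
    <level value="debug"/>
</logger>
```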
>
> Best,
> Kay
> On May 29, 2014 1:17 AM, "Tyler Bell" <[email protected]> wrote:
>
>> I cleared out graylog2_recent. It only had old data, and I no longer get 
>> that error when running the job. However, I'm not seeing old indices clear 
>> out. I'll let it go overnight and see what happens. 
>>
>> Do you know of a way to run the job that cleans up indices via the 
>> Graylog API? I can try to run it manually and see if there are errors. 
>>
>> On Wednesday, May 28, 2014 3:59:43 PM UTC-6, Kay Röpke wrote:
>>>
>>> The job takes note of the earliest and latest timestamp present in an 
>>> index.
>>> We are using that metadata to select the indices that need to be 
>>> included when searching to avoid I/O on indices that cannot contain any 
>>> message from the range you are interested in.
>>> It also usually shows when structural problems occur, and should also 
>>> trigger the retention mechanism.
>>> In your case, the deprecated graylog2_recent index screws it up.
>>> Please delete that index using elasticsearch's API, it contains no 
>>> useful information for the newer graylog2 versions.
>>> After that's done the retention should kick in on the next index 
>>> retention cleanup cycle (which runs every 5 minutes).
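
For the deletion step Kay describes, a minimal sketch using Elasticsearch's REST API. The host and port are assumptions (a node listening on localhost:9200); point it at one of your cluster nodes instead.

```shell
# Hedged sketch: drop the deprecated index via the ES delete-index API.
# Deletion is irreversible, so double-check the index name first.
curl -XDELETE 'http://localhost:9200/graylog2_recent'
```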
>>>
>>> Best,
>>> Kay
>>> On May 28, 2014 11:41 PM, "Tyler Bell" <[email protected]> wrote:
>>>
>>>> Thanks Kay! I did upgrade from an older version of Graylog and 
>>>> ElasticSearch once v0.20.0 came out.
>>>>
>>>> I ran that job from the web interface, and it looks like it cycled 
>>>> through all my indexes with messages like:
>>>>
>>>> 2014-05-28 14:34:30,460 INFO : org.graylog2.system.jobs.SystemJobManager 
>>>> - Submitted SystemJob <da2267c5-e6af-11e3-a691-005056b274fe> 
>>>> [org.graylog2.indexer.ranges.RebuildIndexRangesJob]
>>>> 2014-05-28 14:34:30,460 INFO : 
>>>> org.graylog2.indexer.ranges.RebuildIndexRangesJob 
>>>> - Re-calculating index ranges.
>>>> 2014-05-28 14:34:30,729 INFO : 
>>>> org.graylog2.indexer.ranges.RebuildIndexRangesJob 
>>>> - Calculated range of [graylog2_56] in [239ms].
>>>> 2014-05-28 14:34:30,959 INFO : 
>>>> org.graylog2.indexer.ranges.RebuildIndexRangesJob 
>>>> - Calculated range of [graylog2_55] in [229ms].
>>>> 2014-05-28 14:34:31,158 INFO : 
>>>> org.graylog2.indexer.ranges.RebuildIndexRangesJob 
>>>> - Calculated range of [graylog2_54] in [199ms].
>>>> 2014-05-28 14:34:31,345 INFO : 
>>>> org.graylog2.indexer.ranges.RebuildIndexRangesJob 
>>>> - Calculated range of [graylog2_53] in [187ms].
>>>>
>>>> .....etc, then when it hits graylog2_recent
>>>>
>>>> 2014-05-28 14:34:51,242 INFO : 
>>>> org.graylog2.indexer.ranges.RebuildIndexRangesJob 
>>>> - Could not calculate range of index [graylog2_recent]. Skipping.
>>>> org.elasticsearch.action.search.SearchPhaseExecutionException: Failed 
>>>> to execute phase [query], all shards failed; shardFailures 
>>>> {[cjdL5h8tSo6VK3zE-Gpvxw][graylog2_recent][0]: 
>>>> RemoteTransportException[[GLNode1][inet[/:9300]][search/phase/query]]; 
>>>> nested: SearchParseException[[graylog2_recent][0]: 
>>>> query[ConstantScore(*:*)],from[-1],size[1]: Parse Failure [Failed to 
>>>> parse source 
>>>> [{"size":1,"query":{"match_all":{}},"sort":[{"timestamp":{"order":"desc"}}]}]]];
>>>>  
>>>> nested: SearchParseException[[graylog2_recent][0]: 
>>>> query[ConstantScore(*:*)],from[-1],size[1]: Parse Failure [No mapping 
>>>> found for [timestamp] in order to sort on]]; 
>>>> }{[XB6yIWzpRHOa8wOJtuKe1g][graylog2_recent][3]: 
>>>> RemoteTransportException[[GLNode2][inet[/:9300]][search/phase/query]]; 
>>>> nested: SearchParseException[[graylog2_recent][3]: 
>>>> query[ConstantScore(*:*)],from[-1],size[1]: Parse Failure [Failed to 
>>>> parse source 
>>>> [{"size":1,"query":{"match_all":{}},"sort":[{"timestamp":{"order":"desc"}}]}]]];
>>>>  
>>>> nested: SearchParseException[[graylog2_recent][3]: 
>>>> query[ConstantScore(*:*)],from[-1],size[1]: Parse Failure [No mapping 
>>>> found for [timestamp] in order to sort on]]; 
>>>> }{[XB6yIWzpRHOa8wOJtuKe1g][graylog2_recent][2]: 
>>>> RemoteTransportException[[GLNode2][inet[/:9300]][search/phase/query]]; 
>>>> nested: SearchParseException[[graylog2_recent][2]: 
>>>> query[ConstantScore(*:*)],from[-1],size[1]: Parse Failure [Failed to 
>>>> parse source 
>>>> [{"size":1,"query":{"match_all":{}},"sort":[{"timestamp":{"order":"desc"}}]}]]];
>>>>  
>>>> nested: SearchParseException[[graylog2_recent][2]: 
>>>> query[ConstantScore(*:*)],from[-1],size[1]: Parse Failure [No mapping 
>>>> found for [timestamp] in order to sort on]]; 
>>>> }{[cjdL5h8tSo6VK3zE-Gpvxw][graylog2_recent][1]: 
>>>> RemoteTransportException[[GLNode1][inet[/:9300]][search/phase/query]]; 
>>>> nested: SearchParseException[[graylog2_recent][1]: 
>>>> query[ConstantScore(*:*)],from[-1],size[1]: Parse Failure [Failed to 
>>>> parse source 
>>>> [{"size":1,"query":{"match_all":{}},"sort":[{"timestamp":{"order":"desc"}}]}]]];
>>>>  
>>>> nested: SearchParseException[[graylog2_recent][1]: 
>>>> query[ConstantScore(*:*)],from[-1],size[1]: Parse Failure [No mapping 
>>>> found for [timestamp] in order to sort on]]; }
>>>>         at org.elasticsearch.action.search.type.TransportSearchTypeAction$BaseAsyncAction.onFirstPhaseResult(TransportSearchTypeAction.java:272)
>>>>         at org.elasticsearch.action.search.type.TransportSearchTypeAction$BaseAsyncAction$3.onFailure(TransportSearchTypeAction.java:224)
>>>>         at org.elasticsearch.search.action.SearchServiceTransportAction$4.handleException(SearchServiceTransportAction.java:222)
>>>>         at org.elasticsearch.transport.netty.MessageChannelHandler.handleException(MessageChannelHandler.java:181)
>>>>         at org.elasticsearch.transport.netty.MessageChannelHandler.handlerResponseError(MessageChannelHandler.java:171)
>>>>         at org.elasticsearch.transport.netty.MessageChannelHandler.messageReceived(MessageChannelHandler.java:123)
>>>>         at org.elasticsearch.common.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
>>>>         at org.elasticsearch.common.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
>>>>         at org.elasticsearch.common.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
>>>>         at org.elasticsearch.common.netty.channel.Channels.fireMessageReceived(Channels.java:296)
>>>>         at org.elasticsearch.common.netty.handler.codec.frame.FrameDecoder.unfoldAndFireMessageReceived(FrameDecoder.java:462)
>>>>         at org.elasticsearch.common.netty.handler.codec.frame.FrameDecoder.callDecode(FrameDecoder.java:443)
>>>>         at org.elasticsearch.common.netty.handler.codec.frame.FrameDecoder.messageReceived(FrameDecoder.java:303)
>>>>         at org.elasticsearch.common.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
>>>>         at org.elasticsearch.common.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
>>>>         at org.elasticsearch.common.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:559)
>>>>         at org.elasticsearch.common.netty.channel.Channels.fireMessageReceived(Channels.java:268)
>>>>         at org.elasticsearch.common.netty.channel.Channels.fireMessageReceived(Channels.java:255)
>>>>         at org.elasticsearch.common.netty.channel.socket.nio.NioWorker.read(NioWorker.java:88)
>>>>         at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.process(AbstractNioWorker.java:108)
>>>>         at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:318)
>>>>         at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89)
>>>>         at org.elasticsearch.common.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178)
>>>>         at org.elasticsearch.common.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
>>>>         at org.elasticsearch.common.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
>>>>         at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>>>>         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>>>>         at java.lang.Thread.run(Thread.java:744)
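
The root cause in the trace above is "No mapping found for [timestamp]" on graylog2_recent. That can be confirmed directly against Elasticsearch by dumping the index's mappings and checking whether a timestamp field exists at all. The host and port are assumptions; adjust for your cluster.

```shell
# Hedged sketch: inspect the mappings of the deprecated index to verify
# the missing "timestamp" field the SearchParseException complains about.
curl -XGET 'http://localhost:9200/graylog2_recent/_mapping'
```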
>>>>
>>>> Then at the end:
>>>>
>>>> 2014-05-28 14:35:05,052 INFO : 
>>>> org.graylog2.indexer.ranges.RebuildIndexRangesJob 
>>>> - Done calculating index ranges for 172 indices. Took 34562ms.
>>>> 2014-05-28 14:35:05,052 INFO : org.graylog2.system.jobs.SystemJobManager 
>>>> - SystemJob <da2267c5-e6af-11e3-a691-005056b274fe> 
>>>> [org.graylog2.indexer.ranges.RebuildIndexRangesJob] finished in 
>>>> 34591ms.
>>>>
>>>> What does that job actually do? Should those older indices be marked 
>>>> for deletion now?
>>>>
>>>>
>>>> On Wednesday, May 28, 2014 3:13:33 PM UTC-6, Kay Röpke wrote:
>>>>>
>>>>> Hi!
>>>>>
>>>>> graylog2_recent sounds like you upgraded from an older version. Those 
>>>>> indices might not get collected correctly.
>>>>>
>>>>> Please try to use the "recalculate index ranges" from the action menu 
>>>>> on the system/indices page in the web interface. After that the server 
>>>>> should perform retention properly. If not, you can still manually clear 
>>>>> out 
>>>>> unwanted indices, but be sure to perform that maintenance task from the 
>>>>> page.
>>>>>
>>>>> Best,
>>>>> Kay
>>>>> On May 28, 2014 11:04 PM, "Tyler Bell" <[email protected]> wrote:
>>>>>
>>>>>> Hi All - Using Graylog v0.20.1 and ES 0.90.10. My disk space is maxed 
>>>>>> out because indices are not rotating out. My elasticsearch data 
>>>>>> directory shows indices graylog2_recent, graylog2_0, ...., graylog2_173.
>>>>>>
>>>>>> Anyone have experience with this? I'm going to use the ES API to 
>>>>>> clear out some older indices and get my setup working again, but I need 
>>>>>> to figure out the rotation issue for a long-term resolution.
>>>>>>
>>>>>> I'm using default config settings:
>>>>>>
>>>>>> # Embedded elasticsearch configuration file
>>>>>> # pay attention to the working directory of the server, maybe use an 
>>>>>> absolute path here
>>>>>> elasticsearch_config_file = /etc/graylog2-elasticsearch.yml
>>>>>>
>>>>>> elasticsearch_max_docs_per_index = 20000000
>>>>>>
>>>>>> # How many indices do you want to keep?
>>>>>> # elasticsearch_max_number_of_indices * elasticsearch_max_docs_per_index
>>>>>> # = total number of messages in your setup
>>>>>> elasticsearch_max_number_of_indices = 20
>>>>>>
>>>>>> # Decide what happens with the oldest indices when the maximum number 
>>>>>> of indices is reached.
>>>>>> # The following strategies are available:
>>>>>> #   - delete # Deletes the index completely (Default)
>>>>>> #   - close # Closes the index and hides it from the system. Can be 
>>>>>> re-opened later.
>>>>>> retention_strategy = delete
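
The two settings in the quoted config multiply out to the retention ceiling the comment describes. As a quick sanity check with the defaults shown:

```shell
# Retention ceiling implied by the quoted defaults:
# 20 indices x 20,000,000 docs per index.
echo $((20000000 * 20))
```

So with retention_strategy = delete, at most roughly the newest 400 million messages are kept once rotation works, and older indices are dropped.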
>>>>>>
>>>>>>  -- 
>>>>>> You received this message because you are subscribed to the Google 
>>>>>> Groups "graylog2" group.
>>>>>> To unsubscribe from this group and stop receiving emails from it, 
>>>>>> send an email to [email protected].
>>>>>> For more options, visit https://groups.google.com/d/optout.
>>>>>>
>>>>
>>
>
