Hi,

I am a little late, but maybe it brings some closure... I believe you
ran into this: https://github.com/elasticsearch/elasticsearch/pull/5623
The symptoms of this bug are exactly what you describe.

Britta

On Mon, Mar 17, 2014 at 10:07 PM, Mac Jouz <[email protected]> wrote:
>
> Finally I fixed the broken index dynamically, but taking your answer into
> account I'm going to add mapping files to avoid future problems.
>
> Thanks Karol
>
> Regards
>
> José
>
> On Monday, March 17, 2014 at 19:25:31 UTC+1, bizzorama wrote:
>>
>> Hi, we tried both ways, but:
>> The first worked, but only as a temporary quick fix for the index (after a
>> power-down it was lost again). Of course we used the REST interfaces to fix
>> mappings that were already broken (we could not pump all the data in again,
>> so we had to fix it somehow); see the sketch of such a call below.
>>
>> We applied the mapping file as the default (for all indices) to avoid the
>> problem in the future, since we knew that all indices could be started with
>> the same mapping.
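>>
>> For reference, a rough sketch of that kind of put-mapping call on ES 0.90
>> (the index name, type name and fields here are only placeholders, not our
>> real mapping):
>>
>>   curl -XPUT 'http://es_address:9200/logstash-2014.02.05/logs/_mapping' -d '{
>>     "logs": {
>>       "properties": {
>>         "@timestamp": { "type": "date" },
>>         "Name":       { "type": "string" }
>>       }
>>     }
>>   }'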
>>
>> On 17-03-2014 at 17:56, "Mac Jouz" <[email protected]> wrote:
>>>
>>> Hi,
>>>
>>> Thanks Karol; indeed, changing the ES version does not change the problem.
>>>
>>> Two complementary questions, if I may:
>>> - You wrote that you copied the mapping file to the ES location; did you try
>>> doing it dynamically with a REST call?
>>> - Otherwise, did you apply the modification only for the specific "corrupted"
>>> index, or did you copy the mapping file into the default ES config location
>>> (that is to say, valid for all indices)?
>>>
>>> Regards
>>>
>>> José
>>>
>>>
>>>
>>> On Sunday, March 16, 2014 at 16:37:19 UTC+1, bizzorama wrote:
>>>>
>>>> Hi,
>>>>
>>>> it turned out that it was not a problem with the ES version (we tested both
>>>> 0.90.10 and 0.90.9) but just an ES bug ...
>>>> after restarting the PC, or even just the service, indices got broken ... we
>>>> found out that this was a case of missing mappings.
>>>> We observed that broken indices had their mappings corrupted (only some
>>>> default fields were left).
>>>> You can check this by calling: http://es_address:9200/indexName/_mapping
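>>>>
>>>> For example (the index name here is just a placeholder, and ?pretty only
>>>> formats the output):
>>>>
>>>>   curl -XGET 'http://es_address:9200/indexName/_mapping?pretty'
>>>>
>>>> On a broken index this returned only a handful of default fields instead of
>>>> the full mapping.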
>>>>
>>>> Our mappings were dynamic (not set manually - just worked out by ES as the
>>>> records came in).
>>>>
>>>> The solution was to add a static mapping file like the one described
>>>> here:
>>>>
>>>> http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/mapping-conf-mappings.html
>>>> (we added the default one).
>>>>
>>>> I just copied the mappings from a healthy index, made some changes, turned
>>>> them into a mapping file, and copied it to the ES server.
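>>>>
>>>> As an illustration (paths are relative to the ES home directory; the type
>>>> name "logs" and the fields are placeholders, not our exact mapping), the
>>>> default mapping file ends up somewhere like this:
>>>>
>>>>   config/mappings/_default/logs.json
>>>>
>>>>   {
>>>>     "logs": {
>>>>       "properties": {
>>>>         "@timestamp": { "type": "date" },
>>>>         "message":    { "type": "string" }
>>>>       }
>>>>     }
>>>>   }
>>>>
>>>> With that in place, every newly created index starts from this mapping
>>>> instead of relying purely on dynamic mapping.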
>>>>
>>>> Now everything works just fine.
>>>>
>>>> Regards,
>>>> Karol
>>>>
>>>>
>>>> On Sunday, March 16, 2014 at 14:54:00 UTC+1, user Mac Jouz wrote:
>>>>>
>>>>>
>>>>> Hi Bizzorama,
>>>>>
>>>>> I had a similar problem with the same configuration as the one you described.
>>>>> ES had been running since the 11th of February and was fed every day at
>>>>> 6:00 AM by 2 LS instances.
>>>>> Everything worked well (Kibana reports were correct, no data loss) until
>>>>> I restarted ES yesterday :-(
>>>>> Out of 30 indices (1 per day), 4 were unusable and data within the Kibana
>>>>> report for the related period was unavailable (same
>>>>> org.elasticsearch.search.facet.FacetPhaseExecutionException: Facet[0]: (key)
>>>>> field [@timestamp] not found)
>>>>>
>>>>> Can you confirm that when you downgraded ES to 0.90.9 you retrieved your
>>>>> data (i.e. you were able to show it in Kibana reports)?
>>>>>
>>>>> I will try to downgrade the ES version as you suggested and will let you
>>>>> know more.
>>>>>
>>>>> Thanks for your answer
>>>>>
>>>>>
>>>>>
>>>>> Sorry for the delay.
>>>>>
>>>>> Looks like you were right; after downgrading ES to 0.90.9 I couldn't
>>>>> reproduce the issue in that manner.
>>>>>
>>>>> Unfortunately, I found some other problems, and one looks like a
>>>>> blocker ....
>>>>>
>>>>> After a whole ES cluster power-down, ES just started replying 'no mapping
>>>>> for ... <name of field>' to each request.
>>>>>
>>>>> On Thursday, February 20, 2014 at 16:42:20 UTC+1, user Binh Ly wrote:
>>>>>>
>>>>>> Your error logs seem to indicate some kind of version mismatch. Is it
>>>>>> possible for you to test LS 1.3.2 against ES 0.90.9, take a sample of raw
>>>>>> logs from those 3 days, and run them through to see if those 3 days work
>>>>>> in Kibana? The reason I ask is that LS 1.3.2 (specifically the
>>>>>> elasticsearch output) was built using the binaries from ES 0.90.9.
>>>>>>
>>>>>> Thanks.
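>>>>>>
>>>>>> For what it's worth, a minimal LS 1.3.2 elasticsearch output pointing at
>>>>>> that node would look roughly like this (the host and cluster name are
>>>>>> placeholders):
>>>>>>
>>>>>>   output {
>>>>>>     elasticsearch {
>>>>>>       host    => "es_address"
>>>>>>       cluster => "elasticsearch"
>>>>>>     }
>>>>>>   }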
>>>>>
>>>>>
>>>>> On Tuesday, February 11, 2014 at 13:18:01 UTC+1, bizzorama wrote:
>>>>>>
>>>>>> Hi,
>>>>>>
>>>>>> I've noticed a very disturbing ElasticSearch behaviour ...
>>>>>> my environment is:
>>>>>>
>>>>>> 1 logstash (1.3.2) (+ redis to store some data) + 1 elasticsearch
>>>>>> (0.90.10) + kibana
>>>>>>
>>>>>> which processes about 7 000 000 records per day.
>>>>>> Everything worked fine in our test environment until we ran some
>>>>>> tests for a longer period (about 15 days).
>>>>>>
>>>>>> After that time, Kibana was unable to show any data.
>>>>>> I did some investigation and it looks like some of the indices (for 3
>>>>>> days, to be exact) are corrupted.
>>>>>> Now every query from Kibana that uses those corrupted indices fails.
>>>>>>
>>>>>> Errors read from elasticsearch logs:
>>>>>>
>>>>>> - org.elasticsearch.search.facet.FacetPhaseExecutionException:
>>>>>> Facet[terms]: failed to find mapping for Name ... a couple of other columns
>>>>>> - org.elasticsearch.search.facet.FacetPhaseExecutionException: Facet[0]:
>>>>>> (key) field [@timestamp] not found
>>>>>>
>>>>>> ... generally, all queries end with those errors.
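>>>>>>
>>>>>> For example, a query like the one Kibana's histogram panel sends (the
>>>>>> index name below is a placeholder) is enough to trigger the second error
>>>>>> on a broken index:
>>>>>>
>>>>>>   curl -XPOST 'http://es_address:9200/logstash-2014.02.05/_search' -d '{
>>>>>>     "size": 0,
>>>>>>     "facets": {
>>>>>>       "0": {
>>>>>>         "date_histogram": { "field": "@timestamp", "interval": "1h" }
>>>>>>       }
>>>>>>     }
>>>>>>   }'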
>>>>>>
>>>>>> When elasticsearch is started we find something like this:
>>>>>>
>>>>>> [2014-02-07 15:02:08,147][WARN ][transport.netty          ] [Name]
>>>>>> Message not fully read (request) for [243445] and action
>>>>>> [cluster/nodeIndexCreated], resetting
>>>>>> [2014-02-07 15:02:08,147][WARN ][transport.netty          ] [Name]
>>>>>> Message not fully read (request) for [249943] and action
>>>>>> [cluster/nodeIndexCreated], resetting
>>>>>> [2014-02-07 15:02:08,147][WARN ][transport.netty          ] [Name]
>>>>>> Message not fully read (request) for [246740] and action
>>>>>> [cluster/nodeIndexCreated], resetting
>>>>>>
>>>>>> And a few observations:
>>>>>>
>>>>>> 1. When using the elasticsearch-head plugin to query records
>>>>>> 'manually', I can see only the Elasticsearch columns (_index, _type, _id,
>>>>>> _score).
>>>>>>     But when I 'randomly' select some and look over their raw JSON,
>>>>>> they look ok.
>>>>>>
>>>>>> 2. When I tried to process the same data again, everything was ok.
>>>>>>
>>>>>> Is it possible that some corrupted data found its way into Elasticsearch
>>>>>> and now the whole index is broken?
>>>>>> Can this be fixed? Reindexed or something?
>>>>>> This data is very important and can't be lost ...
>>>>>>
>>>>>> Best Regards,
>>>>>> Karol
>>>>>>
>>>>>