Hi
Yes, that's a bummer, unfortunately. Anyway, I find it a bit odd that the 
`message` field takes up so much space that it trips the breaker. It's not 
even in the query, so why does it get loaded? When I was monitoring my 
fielddata and switching to doc_values in my setup, other, smaller fields 
were the problem. Maybe trimming `message` with Drools rules, or optimising 
queries to avoid referencing the `message` field, is worth considering.
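For what it's worth, you can see which fields are actually eating fielddata memory with the cat API, and enable doc_values via a mapping template as Daniel suggested below. A rough sketch, assuming ES 1.x/2.x on localhost:9200; the template name, index pattern, and `source` field are placeholders for illustration, not taken from Jason's setup:

```shell
# Show per-field fielddata usage on each node, to confirm
# that "message" (and not some other field) is the culprit
curl -s 'localhost:9200/_cat/fielddata?v&fields=*'

# Sketch of a mapping template enabling doc_values on a
# not_analyzed field (doc_values do NOT work on analyzed
# string fields like "message", per Jochen's point below)
curl -s -XPUT 'localhost:9200/_template/custom-doc-values' -d '{
  "template": "graylog_*",
  "mappings": {
    "message": {
      "properties": {
        "source": {
          "type": "string",
          "index": "not_analyzed",
          "doc_values": true
        }
      }
    }
  }
}'
```

Templates only apply to newly created indices, so existing Graylog indices keep their old mapping until they rotate.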

On Thursday, 21 April 2016 at 15:00:06 UTC+2, Jochen Schalanda wrote:
>
> Hi Daniel,
>
> doc values don't work for analyzed string fields like "message": 
>
> Doc values are supported on almost all field types, with the notable 
>> exception of analyzed string fields.
>>
>
> Unfortunately that's exactly the field which trips the field data cache 
> circuit breaker in Jason's case.
>
>
> Cheers,
> Jochen 
>
> On Thursday, 21 April 2016 13:14:46 UTC+2, Daniel Kamiński wrote:
>>
>> you can change the 'message' mapping template in ES via its REST API and 
>> add `"doc_values": true` to some less-needed fields; more info on doc 
>> values here: 
>> https://www.elastic.co/guide/en/elasticsearch/reference/current/doc-values.html
>>
>> On Thursday, 21 April 2016 at 00:48:57 UTC+2, Jason Haar wrote:
>>>
>>> Hi there
>>>
>>> I tried to do what I thought was a simple search across a week's worth 
>>> of data on a single-box graylog server (i.e. it also has ES and mongodb on it)
>>>
>>> Basically I did a search for "fieldname:value1 OR fieldname:value2 OR 
>>> fieldname:value3" over 7 days, and graylog just sat there spinning its 
>>> wheels (beforehand I was happily doing searches that weren't causing any 
>>> grief at all)
>>>
>>> The CPU on the graylog server went through the roof; the graylog error 
>>> log showed no problem, but the ES logs showed a bunch of these:
>>>
>>> indices.breaker.fielddata] [fielddata] New used memory 11155918063 
>>> [10.3gb] for data of [message] would be larger than configured breaker: 
>>> 10857952051 [10.1gb], breaking
>>>
>>> After five minutes of graylog just sitting there, I restarted ES, but 
>>> graylog was now borked. The input channels were still receiving data, but 
>>> nothing was flowing out. So I restarted graylog and all was good again.
>>>
>>> Is this expected behaviour, and if so, what is needed to stop it? I've 
>>> seen other non-graylog-related postings on the ES list about this happening 
>>> with large clusters, so it seems to be an error case for ES, but I'm more 
>>> concerned about how graylog reacted: i.e. why didn't it give up and give me 
>>> an error page, for starters? It looks to me like graylog didn't expect that 
>>> ES search to error out and that caused it to block? (I'm assuming ES 
>>> generated an error - the logs show that WARN - I dunno what happens next)
>>>
>>>
>>> -- 
>>> Cheers
>>>
>>> Jason Haar
>>> Information Security Manager, Trimble Navigation Ltd.
>>> Phone: +1 408 481 8171
>>> PGP Fingerprint: 7A2E 0407 C9A6 CAF6 2B9F 8422 C063 5EBB FE1D 66D1
>>>
>>

-- 
You received this message because you are subscribed to the Google Groups 
"Graylog Users" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to [email protected].
To view this discussion on the web visit 
https://groups.google.com/d/msgid/graylog2/af9ffe2a-2337-4072-8f93-e18b8ba7e8e9%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.