I'm a bit confused by the GitHub page about which versions have the bug and which do not. It has labels of:
1.4.3, 1.5.0, 2.0.0

My hunch is that those versions do *not* have the bug, although it sure seems more logical to tag the versions that *do* have it, so I wanted to confirm :) Thanks!
Chris

On Wed, Jan 14, 2015 at 11:07 AM, Chris Neal <[email protected]> wrote:

> Hi Masaru,
>
> Beautiful! That's exactly it. Thank you very much. :)
>
> Chris
>
> On Wed, Jan 14, 2015 at 10:42 AM, Masaru Hasegawa <[email protected]> wrote:
>
>> Hi Chris,
>>
>> I think you hit this issue:
>> https://github.com/elasticsearch/elasticsearch/issues/8890
>> A workaround would be to use an index template (as described in the issue; see the sketch at the end of this thread) or to update the settings with the indices update-settings API.
>>
>> Masaru
>>
>> On Wed, Jan 14, 2015 at 9:52 AM, Chris Neal <[email protected]> wrote:
>>
>>> Hi all.
>>>
>>> I'm reposting an earlier thread of mine under a more appropriate subject, in hopes that someone might have an idea on this one. :)
>>>
>>> Each node in my cluster has its configuration set via elasticsearch.yml only. I do not apply any index-level settings, yet the nodes in the cluster are overwriting my config settings with the defaults. I have been unable to figure out why this is happening, and was hoping someone else might.
>>>
>>> My elasticsearch.yml file defines these settings:
>>>
>>> index:
>>>   codec:
>>>     bloom:
>>>       load: false
>>>   merge:
>>>     policy:
>>>       max_merge_at_once: 4
>>>       max_merge_at_once_explicit: 4
>>>       max_merged_segment: 1gb
>>>       segments_per_tier: 4
>>>       type: tiered
>>>     scheduler:
>>>       max_thread_count: 1
>>>       type: concurrent
>>>   number_of_replicas: 0
>>>   number_of_shards: 1
>>>   refresh_interval: 5s
>>>
>>> From the head plugin, I can see these settings in effect:
>>>
>>> settings: {
>>>   index: {
>>>     codec: {
>>>       bloom: {
>>>         load: false
>>>       }
>>>     },
>>>     number_of_replicas: 1,
>>>     number_of_shards: 6,
>>>     translog: {
>>>       flush_threshold_size: 1GB
>>>     },
>>>     search: {
>>>       slowlog: {
>>>         threshold: {
>>>           fetch: {
>>>             warn: 2s,
>>>             info: 1s
>>>           },
>>>           index: {
>>>             warn: 10s,
>>>             info: 5s
>>>           },
>>>           query: {
>>>             warn: 10s,
>>>             info: 5s
>>>           }
>>>         }
>>>       }
>>>     },
>>>     refresh_interval: 60s,
>>>     merge: {
>>>       scheduler: {
>>>         type: concurrent,
>>>         max_thread_count: 1
>>>       },
>>>       policy: {
>>>         type: tiered,
>>>         max_merged_segment: 1gb,
>>>         max_merge_at_once_explicit: 4,
>>>         max_merge_at_once: 4,
>>>         segments_per_tier: 4
>>>       }
>>>     }
>>>   },
>>>   bootstrap: {
>>>     mlockall: true
>>>   }
>>> }
>>>
>>> But each node outputs this on new index creation:
>>>
>>> [2015-01-13 02:12:52,062][INFO ][index.merge.policy] [elasticsearch-test] [test-20150113][1] updating [segments_per_tier] from [4.0] to [10.0]
>>> [2015-01-13 02:12:52,062][INFO ][index.merge.policy] [elasticsearch-test] [test-20150113][1] updating [max_merge_at_once] from [4] to [10]
>>> [2015-01-13 02:12:52,062][INFO ][index.merge.policy] [elasticsearch-test] [test-20150113][1] updating [max_merge_at_once_explicit] from [4] to [30]
>>> [2015-01-13 02:12:52,062][INFO ][index.merge.policy] [elasticsearch-test] [test-20150113][1] updating [max_merged_segment] from [1024.0mb] to [5gb]
>>>
>>> This is happening on both of my clusters: my "regular" ES cluster of 3 nodes, and my dedicated Marvel cluster of 1 node. So strange.
>>>
>>> [2015-01-06 04:04:53,320][INFO ][cluster.metadata] [elasticsearch-ip-10-0-0-42] [.marvel-2015.01.06] update_mapping [cluster_state] (dynamic)
>>> [2015-01-06 04:04:56,704][INFO ][index.merge.policy] [elasticsearch-ip-10-0-0-42] [.marvel-2015.01.06][0] updating [segments_per_tier] from [4.0] to [10.0]
>>> [2015-01-06 04:04:56,704][INFO ][index.merge.policy] [elasticsearch-ip-10-0-0-42] [.marvel-2015.01.06][0] updating [max_merge_at_once] from [4] to [10]
>>> [2015-01-06 04:04:56,704][INFO ][index.merge.policy] [elasticsearch-ip-10-0-0-42] [.marvel-2015.01.06][0] updating [max_merge_at_once_explicit] from [4] to [30]
>>> [2015-01-06 04:04:56,704][INFO ][index.merge.policy] [elasticsearch-ip-10-0-0-42] [.marvel-2015.01.06][0] updating [max_merged_segment] from [1024.0mb] to [5gb]
>>>
>>> I am really stumped on why this is happening!
>>> Thanks so much for your time.
>>> Chris
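For anyone landing here from the archives: a rough sketch of the index-template workaround Masaru mentions, against the 1.x REST API. The template name (merge_settings) and the index pattern (test-*) are placeholders made up to match the index names in the logs above; adjust them to your naming scheme:

  # Store merge-policy settings in an index template so they are applied
  # whenever a matching index is created (name and pattern are placeholders):
  curl -XPUT 'http://localhost:9200/_template/merge_settings' -d '{
    "template": "test-*",
    "settings": {
      "index.merge.policy.type": "tiered",
      "index.merge.policy.max_merge_at_once": 4,
      "index.merge.policy.max_merge_at_once_explicit": 4,
      "index.merge.policy.max_merged_segment": "1gb",
      "index.merge.policy.segments_per_tier": 4,
      "index.merge.scheduler.max_thread_count": 1
    }
  }'

Since the template lives in the cluster state rather than in each node's elasticsearch.yml, it should be applied at index-creation time, which sidesteps the bug described in the issue above.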
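For indices that already exist, the merge-policy settings are dynamic in 1.x, so they should be fixable in place via the update-settings API (the index name here is taken from the log lines above):

  # Update merge-policy settings on a live index; these particular settings
  # are dynamically updatable, so no close/reopen should be needed:
  curl -XPUT 'http://localhost:9200/test-20150113/_settings' -d '{
    "index.merge.policy.max_merge_at_once": 4,
    "index.merge.policy.max_merge_at_once_explicit": 4,
    "index.merge.policy.max_merged_segment": "1gb",
    "index.merge.policy.segments_per_tier": 4
  }'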
