Totally. For example, analyzers named default_index and default_search in the 
index's analysis settings become the defaults for indexing and searching:

            "analyzer": {
                "default_index": {
                    "tokenizer": "standard",
                    "filter": ["standard", "lowercase"]
                },
                "default_search": {
                    "tokenizer": "standard",
                    "filter": ["standard", "lowercase", "stop"]
                },
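
For context, that block lives under the index's "analysis" settings. A rough 
sketch of a full request (the index name and the curl form are just 
placeholders; on 1.x you can also apply this to an existing index, but you 
have to close it first):

    curl -XPUT 'localhost:9200/my-index' -d '{
        "settings": {
            "analysis": {
                "analyzer": {
                    "default_index": {
                        "tokenizer": "standard",
                        "filter": ["standard", "lowercase"]
                    },
                    "default_search": {
                        "tokenizer": "standard",
                        "filter": ["standard", "lowercase", "stop"]
                    }
                }
            }
        }
    }'

Any field that doesn't name its own analyzer (including _all) then picks up 
default_index at index time and default_search at search time.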


On Monday, June 30, 2014 12:19:55 PM UTC-4, mooky wrote:
>
> Excellent. Thanks for the info.
>
> Is it possible to set my custom analyser as the default analyser for an 
> index (i.e. instead of the standard analyzer)?
>
> -N
>
> On Monday, 30 June 2014 14:41:10 UTC+1, Glen Smith wrote:
>>
>> You can set up an analyser for your index...
>>
>> ...
>>     "my-index": {
>>         "analysis": {
>>             "analyzer": {
>>                 "default_index": {
>>                     "tokenizer": "standard",
>>                     "filter": ["standard", "icu_folding", "stop"]
>>                 },
>>                 "default_search": {
>>                     "tokenizer": "standard",
>>                     "filter": ["standard", "icu_folding", "stop"]
>>                 },
>>                 "custom_index": {
>>                     "tokenizer": "whitespace",
>>                     "filter": ["lowercase"]
>>                 },
>>                 "custom_search": {
>>                     "tokenizer": "whitespace",
>>                     "filter": ["lowercase"]
>>                 }
>>             }
>>         }
>>     }
>> ...
>>
>> and then map your relevant field accordingly:
>>
>> {
>>     "_timestamp": {
>>         "enabled": "true",
>>         "store": "yes"
>>     },
>>     "properties": {
>>         "my_field": {
>>             "type": "string",
>>             "index_analyzer": "custom_index",
>>             "search_analyzer": "custom_search"
>>         }
>>     }
>> }
>>
>>
>> Note that you can (and often should) set up index analysis and search 
>> analysis differently (e.g. if you use synonyms, only expand search terms).
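>>
>> As a rough sketch of that synonym case (the filter and analyzer names here 
>> are made up purely for illustration):
>>
>>     "filter": {
>>         "my_synonyms": {
>>             "type": "synonym",
>>             "synonyms": ["laptop, notebook"]
>>         }
>>     },
>>     "analyzer": {
>>         "default_index": {
>>             "tokenizer": "standard",
>>             "filter": ["lowercase"]
>>         },
>>         "default_search": {
>>             "tokenizer": "standard",
>>             "filter": ["lowercase", "my_synonyms"]
>>         }
>>     }
>>
>> The synonym filter runs only at search time, so a query for either term 
>> matches documents containing the other, without expanding every document 
>> in the index.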
>>
>> Hope I haven't missed the point...
>>
>> On Monday, June 30, 2014 8:47:36 AM UTC-4, mooky wrote:
>>>
>>> Hi all,
>>>
>>> I have a Google-style search capability in my app that uses the _all 
>>> field with the default (standard) analyzer (I don't configure anything, so 
>>> it's Elastic's default).
>>>
>>> There are a few cases where we don't quite get the behaviour we want, 
>>> and I am trying to work out how I tweak the analyzer configuration.
>>>
>>> 1) If the user searches using 99.97, they get the results they expect, 
>>> but if they search using 99.97%, they get nothing. They should get the 
>>> results that match "99.97%". The default analyzer config loses the %, I 
>>> guess.
>>>
>>> 2) I have no idea what the text means ( : ) ), but the user wants to 
>>> search using 托克金通贸易, which is in the data, and currently we get zero 
>>> results. It looks like the standard analyzer/tokenizer breaks on each 
>>> character.
>>>
>>> I *_think_* I just want a whitespace analyzer with lower-casing.
>>> However, 
>>> a) I am not exactly sure how to configure that, and 
>>> b) I am not 100% sure what I am losing/gaining vs the standard analyzer 
>>> (don't need stop-words - in any case the default config for the standard 
>>> analyser doesn't have any, IIRC).
>>>
>>> (FWIW, on all our other text fields, we tend to use no analyzer)
>>>
>>> (Elastic 1.1.1 and 1.2 ...)
>>>
>>> Cheers.
>>> -M
>>>
>>
