On 10/23/2019 9:41 AM, servus01 wrote:
Hey,
thank you for helping me:
Thanks in advance for any help, really appreciate it.
<https://lucene.472066.n3.nabble.com/file/t494058/screenshot.jpg>
<https://lucene.472066.n3.nabble.com/file/t494058/screenshot3.jpg>
It is not the WordDelimiter filter that is affecting your punctuation.
It is the StandardTokenizer, which is the first analysis component that
runs. You can see this in the first screenshot, where that tokenizer
outputs the terms "CCF", "HD", and "2nd".
That filter can affect punctuation, depending on its
settings, but in this case no punctuation is left by the time the
analysis chain reaches it.
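
If it helps to see this outside the Solr analysis screen, here is a
minimal sketch that runs Lucene's StandardTokenizer on its own (the
sample text and class name are mine, just for illustration):

import org.apache.lucene.analysis.standard.StandardTokenizer;
import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;
import java.io.StringReader;

public class StandardTokenizerDemo {
    public static void main(String[] args) throws Exception {
        // StandardTokenizer splits on Unicode word boundaries and
        // drops punctuation such as commas and periods.
        StandardTokenizer tokenizer = new StandardTokenizer();
        tokenizer.setReader(new StringReader("CCF HD, 2nd half."));
        CharTermAttribute term = tokenizer.addAttribute(CharTermAttribute.class);
        tokenizer.reset();
        while (tokenizer.incrementToken()) {
            // Prints: CCF, HD, 2nd, half -- no punctuation remains,
            // so a later WordDelimiter filter never sees any.
            System.out.println(term.toString());
        }
        tokenizer.end();
        tokenizer.close();
    }
}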
Thanks,
Shawn