[
https://issues.apache.org/jira/browse/SOLR-9185?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15353920#comment-15353920
]
Steve Rowe commented on SOLR-9185:
----------------------------------
This parser's comment support clashes with the approach I took to whitespace
handling (tokenizing it rather than ignoring it): when a run of whitespace is
interrupted by a comment, the lexer emits multiple WHITESPACE_SEQ tokens, but
the grammar rules expect each whitespace run to be collapsed into a single
WHITESPACE_SEQ token. I'm thinking about a way to address this.
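A minimal, self-contained sketch of one possible fix, assuming a post-lexing
pass over the token list: comment tokens are discarded and the WHITESPACE_SEQ
tokens they leave adjacent are glued back together, so the grammar rules again
see one WHITESPACE_SEQ per whitespace run. The Kind/Token types,
mergeWhitespace(), and the /* ... */ comment form are illustrative names only,
not the actual SOLR-9185 grammar.

{code:java}
import java.util.ArrayList;
import java.util.List;

public class WhitespaceMergeSketch {

    enum Kind { TERM, WHITESPACE_SEQ, COMMENT }

    // minimal token model; the real grammar's token class differs
    record Token(Kind kind, String image) { }

    /** Drop COMMENT tokens, then collapse adjacent WHITESPACE_SEQ tokens into one. */
    static List<Token> mergeWhitespace(List<Token> in) {
        List<Token> out = new ArrayList<>();
        for (Token t : in) {
            if (t.kind() == Kind.COMMENT) {
                continue; // comments are ignored entirely
            }
            int last = out.size() - 1;
            if (t.kind() == Kind.WHITESPACE_SEQ
                    && last >= 0 && out.get(last).kind() == Kind.WHITESPACE_SEQ) {
                // the comment split one whitespace run in two: glue it back together
                out.set(last, new Token(Kind.WHITESPACE_SEQ,
                        out.get(last).image() + t.image()));
            } else {
                out.add(t);
            }
        }
        return out;
    }

    public static void main(String[] args) {
        // lexer output for: foo  /* note */  bar
        List<Token> lexed = List.of(
                new Token(Kind.TERM, "foo"),
                new Token(Kind.WHITESPACE_SEQ, "  "),
                new Token(Kind.COMMENT, "/* note */"),
                new Token(Kind.WHITESPACE_SEQ, "  "),
                new Token(Kind.TERM, "bar"));
        // after merging, exactly one WHITESPACE_SEQ remains between foo and bar
        System.out.println(mergeWhitespace(lexed));
    }
}
{code}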
> Solr's "Lucene"/standard query parser should not split on whitespace before
> sending terms to analysis
> -----------------------------------------------------------------------------------------------------
>
> Key: SOLR-9185
> URL: https://issues.apache.org/jira/browse/SOLR-9185
> Project: Solr
> Issue Type: Bug
> Reporter: Steve Rowe
> Assignee: Steve Rowe
> Attachments: SOLR-9185.patch, SOLR-9185.patch
>
>
> Copied from LUCENE-2605:
> The query parser splits input on whitespace and sends each whitespace-separated
> term to its own independent token stream.
> This breaks the following at query time, because they can't see across
> whitespace boundaries:
> - n-gram analysis
> - shingles
> - synonyms (especially multi-word synonyms in whitespace-separated languages)
> - languages where a 'word' can contain whitespace (e.g. Vietnamese)
> It's also rather unexpected: users assume their
> char filters/tokenizers/token filters will do the same thing at index and
> query time, but in many cases they can't. Instead, preferably the query parser
> would only split around real 'operators'.