[
https://issues.apache.org/jira/browse/LUCENE-1606?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12781223#action_12781223
]
Robert Muir commented on LUCENE-1606:
-------------------------------------
bq. I thought you were about to open one!
I opened one for Automaton specifically; should I change it to cover all MTQs?
bq. Actually... wouldn't we need to convert to int[] (for Unicode 4), not
char[], to be most convenient for "higher up" APIs like automaton? If we did
char[] you'd still have to handle surrogate processing (and then it's not unlike
doing byte[]).
Nope: Unicode and Java are optimized for UTF-16, not UTF-32, so we should use
char[], but with the codePoint APIs, which are designed so that you can process
text in UTF-16 (char[]) efficiently while still handling the rare case of
supplementary characters.
char[] is correct; it's just that we have to be careful to use the right APIs
for processing it.
With String, many of the APIs, such as String.toLowerCase, do this
automatically for you, so most applications have no issues.
> Automaton Query/Filter (scalable regex)
> ---------------------------------------
>
> Key: LUCENE-1606
> URL: https://issues.apache.org/jira/browse/LUCENE-1606
> Project: Lucene - Java
> Issue Type: New Feature
> Components: Search
> Reporter: Robert Muir
> Assignee: Robert Muir
> Priority: Minor
> Fix For: 3.1
>
> Attachments: automaton.patch, automatonMultiQuery.patch,
> automatonmultiqueryfuzzy.patch, automatonMultiQuerySmart.patch,
> automatonWithWildCard.patch, automatonWithWildCard2.patch,
> BenchWildcard.java, LUCENE-1606-flex.patch, LUCENE-1606.patch,
> LUCENE-1606.patch, LUCENE-1606.patch, LUCENE-1606.patch, LUCENE-1606.patch,
> LUCENE-1606.patch, LUCENE-1606.patch, LUCENE-1606.patch,
> LUCENE-1606_nodep.patch
>
>
> Attached is a patch for an AutomatonQuery/Filter (name can change if it's not
> suitable).
> While the out-of-box contrib RegexQuery is nice, I have some very large
> indexes (100M+ unique tokens) where queries are quite slow (2 minutes, etc.).
> Additionally, all of the existing RegexQuery implementations in Lucene are
> really slow if there is no constant prefix. This implementation does not
> depend on a constant prefix and runs the same query in 640 ms.
> Some use cases I envision:
> 1. lexicography/etc on large text corpora
> 2. looking for things such as urls where the prefix is not constant (http://
> or ftp://)
> The Filter uses the BRICS package (http://www.brics.dk/automaton/) to convert
> regular expressions into a DFA. Then, the filter "enumerates" terms in a
> special way, by using the underlying state machine. Here is my short
> description from the comments:
> The algorithm here is pretty basic. Enumerate terms, but instead of a
> binary accept/reject, do:
>
> 1. Look at the portion that is OK (did not enter a reject state in the
> DFA)
> 2. Generate the next possible String and seek to that.
> The Query simply wraps the Filter with ConstantScoreQuery.
> I did not include the automaton.jar inside the patch, but it can be downloaded
> from http://www.brics.dk/automaton/ and is BSD-licensed.
--
This message is automatically generated by JIRA.