[
https://issues.apache.org/jira/browse/LUCENE-3842?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13465767#comment-13465767
]
Robert Muir commented on LUCENE-3842:
-------------------------------------
in TStoA:
{code}
if (pos == -1 && posInc == 0) {
// TODO: hmm are TS's still allowed to do this...?
posInc = 1;
}
{code}
NO they are not! :)
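The invariant behind that TODO (a sketch in plain Java, no Lucene dependency; class and method names here are illustrative, not Lucene API): the first token in a stream starts from position -1, so it must advance the position with posInc >= 1, and no token may have a negative increment.

```java
import java.util.List;

public class PosIncCheck {
    /**
     * Returns true if a sequence of position increments is legal:
     * no increment may be negative, and the first token must advance
     * the position (posInc >= 1) rather than stack on position -1 --
     * the case the quoted TStoA workaround silently "fixes".
     */
    static boolean isLegal(List<Integer> posIncs) {
        for (int i = 0; i < posIncs.size(); i++) {
            int inc = posIncs.get(i);
            if (inc < 0) {
                return false; // never allowed
            }
            if (i == 0 && inc == 0) {
                return false; // illegal: first token with posInc 0
            }
        }
        return true;
    }

    public static void main(String[] args) {
        System.out.println(isLegal(List.of(1, 1, 0))); // true: trailing synonym is fine
        System.out.println(isLegal(List.of(0, 1)));    // false: first token has posInc 0
    }
}
```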
As far as the limitations go, I feel like restricting ourselves to the case where
the last token's endOffset == length of the input might be pretty safe in general
(e.g. StandardTokenizer) because of how Unicode works... I have to think about it.
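The limitation above could be detected with a check like this (illustrative plain Java, hypothetical names; the real code would read the values from Lucene's OffsetAttribute): if the last token's end offset falls short of the input length, the analyzer discarded trailing text, so treating the input as a complete prefix is unsafe.

```java
public class TruncationCheck {
    /**
     * Heuristic: if the last token ends before the end of the input,
     * the tokenizer dropped trailing characters (e.g. punctuation or
     * a stopword), and the analyzed form no longer covers the whole input.
     */
    static boolean looksTruncated(int lastTokenEndOffset, int inputLength) {
        return lastTokenEndOffset != inputLength;
    }

    public static void main(String[] args) {
        String input = "ghost of chr";
        // Suppose the tokenizer emitted "chr" as the last token, ending at offset 12:
        System.out.println(looksTruncated(12, input.length())); // false: token reaches end of input
        // If a dropped trailing token left the end of the input uncovered:
        System.out.println(looksTruncated(9, input.length()));  // true
    }
}
```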
Strange that the FST size increased so much? When I run the benchmark:
{noformat}
[junit4:junit4] 2> -- RAM consumption
[junit4:junit4] 2> JaspellLookup size[B]: 9,815,152
[junit4:junit4] 2> TSTLookup size[B]: 9,858,792
[junit4:junit4] 2> FSTCompletionLookup size[B]: 466,520
[junit4:junit4] 2> WFSTCompletionLookup size[B]: 507,640
[junit4:junit4] 2> AnalyzingCompletionLookup size[B]: 1,832,952
{noformat}
I don't know if we should worry about that, but it seems somewhat large
for just using KeywordTokenizer.
{code}
* <b>NOTE</b>: Although the {@link TermFreqIterator} API specifies
* floating point weights
{code}
That's obsolete. See WFSTSuggester in trunk, where I fixed this already.
> Analyzing Suggester
> -------------------
>
> Key: LUCENE-3842
> URL: https://issues.apache.org/jira/browse/LUCENE-3842
> Project: Lucene - Core
> Issue Type: New Feature
> Components: modules/spellchecker
> Affects Versions: 3.6, 4.0-ALPHA
> Reporter: Robert Muir
> Attachments: LUCENE-3842.patch, LUCENE-3842.patch, LUCENE-3842.patch,
> LUCENE-3842.patch, LUCENE-3842.patch, LUCENE-3842.patch, LUCENE-3842.patch,
> LUCENE-3842.patch, LUCENE-3842.patch, LUCENE-3842.patch, LUCENE-3842.patch,
> LUCENE-3842.patch, LUCENE-3842.patch, LUCENE-3842.patch, LUCENE-3842.patch,
> LUCENE-3842.patch, LUCENE-3842-TokenStream_to_Automaton.patch
>
>
> Since we added shortest-path wFSA search in LUCENE-3714, and generified the
> comparator in LUCENE-3801,
> I think we should look at implementing suggesters that have more capabilities
> than just basic prefix matching.
> In particular I think the most flexible approach is to integrate with
> Analyzer at both build and query time,
> such that we build a wFST with:
> input: analyzed text such as ghost0christmas0past <-- byte 0 here is an
> optional token separator
> output: surface form such as "the ghost of christmas past"
> weight: the weight of the suggestion
> we make an FST with PairOutputs<weight,output>, but only do the shortest path
> operation on the weight side (like
> the test in LUCENE-3801), at the same time accumulating the output (surface
> form), which will be the actual suggestion.
> This allows a lot of flexibility:
> * Using even StandardAnalyzer means you can offer suggestions that ignore
> stopwords, e.g. if you type in "ghost of chr...",
> it will suggest "the ghost of christmas past"
> * we can add support for synonyms/wdf/etc at both index and query time (there
> are tradeoffs here, and this is not implemented!)
> * this is a basis for more complicated suggesters such as Japanese
> suggesters, where the analyzed form is in fact the reading,
> so we would add a TokenFilter that copies ReadingAttribute into term text
> to support that...
> * other general things like offering suggestions that are more "fuzzy" like
> using a plural stemmer or ignoring accents or whatever.
> According to my benchmarks, suggestions are still very fast with the
> prototype (e.g. ~100,000 QPS), and the FST size does not
> explode (it's short of twice that of a regular wFST, but this is still far
> smaller than TST or JaSpell, etc).
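The analyzed-key/surface-form pairing described in the issue text could be sketched like this (plain Java stand-in: a sorted map plays the role of the wFST with PairOutputs, and weights are omitted; the byte-0 separator and the example strings come from the description above):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.TreeMap;

public class AnalyzedSuggestSketch {
    static final char SEP = '\0'; // optional token separator between analyzed tokens

    // Analyzed form -> surface form. The real suggester stores this in a wFST
    // with PairOutputs<weight, output>; a sorted map only illustrates the keys.
    static final TreeMap<String, String> DICT = new TreeMap<>();
    static {
        DICT.put("ghost" + SEP + "christmas" + SEP + "past",
                 "the ghost of christmas past");
    }

    /** Prefix lookup over the analyzed keys, returning surface forms. */
    static List<String> suggest(String analyzedPrefix) {
        List<String> out = new ArrayList<>();
        for (Map.Entry<String, String> e : DICT.tailMap(analyzedPrefix).entrySet()) {
            if (!e.getKey().startsWith(analyzedPrefix)) {
                break; // past the range of keys sharing the prefix
            }
            out.add(e.getValue());
        }
        return out;
    }

    public static void main(String[] args) {
        // "ghost of chr" analyzes (stopword "of" removed) to "ghost\0chr":
        System.out.println(suggest("ghost" + SEP + "chr"));
    }
}
```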