[ https://issues.apache.org/jira/browse/LUCENE-1306?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12604756#action_12604756 ]

Karl Wettin commented on LUCENE-1306:
-------------------------------------

The current NGram analysis in trunk is split in two: one filter for edge grams and one 
for inner grams. 

This patch combines them both in a single filter that emits ^prefix and suffix$ 
tokens for grams that touch an edge of the token, or both markers around the 
complete token if n is large enough. There is also a method you can override if 
you want to add a payload (e.g. extra boost for edge grams) or otherwise modify 
the gram tokens depending on which part of the original token they contain.
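To illustrate the idea (this is a hypothetical sketch, not the patch's actual implementation): the combined gram stream is equivalent to taking plain n-grams over the token wrapped in '^' and '$' anchor characters, so edge grams carry their marker and, when n is large enough, both anchors surround the whole token.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch only: CombinedNGramSketch and combinedNGrams() are illustrative
// names, not part of the attached patch.
public class CombinedNGramSketch {

    // Produce all grams of length n over the anchored form "^token$".
    static List<String> combinedNGrams(String token, int n) {
        String anchored = "^" + token + "$";
        List<String> grams = new ArrayList<>();
        for (int i = 0; i + n <= anchored.length(); i++) {
            grams.add(anchored.substring(i, i + n));
        }
        return grams;
    }

    public static void main(String[] args) {
        // Matches the n=2 example from the issue description below.
        System.out.println(combinedNGrams("hello", 2));
    }
}
```

For "hello" with n = 2 this yields ^h, he, el, ll, lo, o$, matching the test case in the issue description.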

> CombinedNGramTokenFilter
> ------------------------
>
>                 Key: LUCENE-1306
>                 URL: https://issues.apache.org/jira/browse/LUCENE-1306
>             Project: Lucene - Java
>          Issue Type: New Feature
>          Components: contrib/analyzers
>            Reporter: Karl Wettin
>            Assignee: Karl Wettin
>            Priority: Trivial
>         Attachments: LUCENE-1306.txt
>
>
> Alternative NGram filter that produces tokens with composite prefix and suffix 
> markers.
> {code:java}
> ts = new WhitespaceTokenizer(new StringReader("hello"));
> ts = new CombinedNGramTokenFilter(ts, 2, 2);
> assertNext(ts, "^h");
> assertNext(ts, "he");
> assertNext(ts, "el");
> assertNext(ts, "ll");
> assertNext(ts, "lo");
> assertNext(ts, "o$");
> assertNull(ts.next());
> {code}

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.

