[ https://issues.apache.org/jira/browse/SOLR-1706?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12797829#action_12797829 ]
Robert Muir commented on SOLR-1706:
-----------------------------------

It's not just the concatenation, but also the subword generation. In the case below, "Autocoder" should not be emitted, as only numeric subword generation is turned on.

{code}
public void test128() throws Exception {
  assertWdf("word 1234 Super-Duper-XL500-42-Autocoder x'sbd123 a4b3c-", 0,1,0,0,0,0,0,0,0, null,
      new String[] { "word", "1234", "42", "Autocoder", "a4b3c" },
      new int[] { 0, 5, 28, 31, 50 },
      new int[] { 4, 9, 30, 40, 55 },
      new int[] { 1, 1, 1, 1, 2 });
}
{code}

> wrong tokens output from WordDelimiterFilter when english possessives are in the text
> --------------------------------------------------------------------------------------
>
>                 Key: SOLR-1706
>                 URL: https://issues.apache.org/jira/browse/SOLR-1706
>             Project: Solr
>          Issue Type: Bug
>          Components: Schema and Analysis
>    Affects Versions: 1.4
>            Reporter: Robert Muir
>
> The WordDelimiterFilter English possessive stemming ("'s" removal, on by default) unfortunately causes strange behavior:
> Below you can see that when I have requested to only output numeric concatenations (not words), these English possessive stems are still sometimes output, ignoring the options I have provided, and even then in a very inconsistent way.
> {code}
> assertWdf("Super-Duper-XL500-42-AutoCoder's", 0,0,0,1,0,0,0,0,1, null,
>     new String[] { "42", "AutoCoder" },
>     new int[] { 18, 21 },
>     new int[] { 20, 30 },
>     new int[] { 1, 1 });
> assertWdf("Super-Duper-XL500-42-AutoCoder's-56", 0,0,0,1,0,0,0,0,1, null,
>     new String[] { "42", "AutoCoder", "56" },
>     new int[] { 18, 21, 33 },
>     new int[] { 20, 30, 35 },
>     new int[] { 1, 1, 1 });
> assertWdf("Super-Duper-XL500-AB-AutoCoder's", 0,0,0,1,0,0,0,0,1, null,
>     new String[] { },
>     new int[] { },
>     new int[] { },
>     new int[] { });
> assertWdf("Super-Duper-XL500-42-AutoCoder's-BC", 0,0,0,1,0,0,0,0,1, null,
>     new String[] { "42" },
>     new int[] { 18 },
>     new int[] { 20 },
>     new int[] { 1 });
> {code}
> where assertWdf is
> {code}
> void assertWdf(String text, int generateWordParts, int generateNumberParts,
>     int catenateWords, int catenateNumbers, int catenateAll,
>     int splitOnCaseChange, int preserveOriginal, int splitOnNumerics,
>     int stemEnglishPossessive, CharArraySet protWords, String expected[],
>     int startOffsets[], int endOffsets[], String types[], int posIncs[])
>     throws IOException {
>   TokenStream ts = new WhitespaceTokenizer(new StringReader(text));
>   WordDelimiterFilter wdf = new WordDelimiterFilter(ts, generateWordParts,
>       generateNumberParts, catenateWords, catenateNumbers, catenateAll,
>       splitOnCaseChange, preserveOriginal, splitOnNumerics,
>       stemEnglishPossessive, protWords);
>   assertTokenStreamContents(wdf, expected, startOffsets, endOffsets, types,
>       posIncs);
> }
> {code}
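For anyone who wants to poke at this outside the test harness, here is a minimal standalone sketch (not part of any patch) wiring up the same flag combination as test128 above: only generateNumberParts is on, everything else, including stemEnglishPossessive, is 0. The constructor call mirrors the assertWdf helper quoted above; the class name WdfNumericOnlyDemo, the main() wrapper, and the WordDelimiterFilter package/import are illustrative assumptions and may need adjusting to your checkout.

{code}
import java.io.StringReader;

import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.WhitespaceTokenizer;
import org.apache.lucene.analysis.tokenattributes.TermAttribute;
import org.apache.solr.analysis.WordDelimiterFilter; // assumed package; adjust to your checkout

public class WdfNumericOnlyDemo {
  public static void main(String[] args) throws Exception {
    TokenStream ts = new WhitespaceTokenizer(
        new StringReader("word 1234 Super-Duper-XL500-42-Autocoder x'sbd123 a4b3c-"));

    // Same flag order as the assertWdf helper quoted above:
    // generateWordParts=0, generateNumberParts=1, catenateWords=0, catenateNumbers=0,
    // catenateAll=0, splitOnCaseChange=0, preserveOriginal=0, splitOnNumerics=0,
    // stemEnglishPossessive=0, no protected words.
    TokenStream wdf = new WordDelimiterFilter(ts, 0, 1, 0, 0, 0, 0, 0, 0, 0, null);

    TermAttribute term = wdf.addAttribute(TermAttribute.class);
    while (wdf.incrementToken()) {
      // With only numeric subword generation enabled, a purely alphabetic
      // subword like "Autocoder" should not show up here.
      System.out.println(term.term());
    }
  }
}
{code}

Printing the terms makes it easy to see whether "Autocoder" still leaks through even though word part generation is disabled.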