Jack Krupansky-2 wrote:
> Typically the white space tokenizer is the best choice when the word
> delimiter filter will be used.
>
> -- Jack Krupansky
If we wanted to keep the StandardTokenizer (because we make use of the token types) but also wanted to use the WordDelimiterFilterFactory to generate combinations of words that are split by certain characters (mainly - and /, but possibly others as well), what is the suggested way of accomplishing this? Would we have to extend the JFlex grammar for the tokenizer and re-compile it?
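For context, a typical analyzer chain pairing a tokenizer with WordDelimiterFilterFactory in schema.xml looks something like the sketch below (the field-type name text_wdf and the particular parameter values are illustrative, not a recommendation). The difficulty the question raises is that StandardTokenizer, unlike WhitespaceTokenizer, generally splits on characters such as - and / itself, so by the time the filter runs those delimiters are already gone and catenation options like catenateWords have nothing to rejoin.

```xml
<!-- Illustrative field type: StandardTokenizer followed by WordDelimiterFilter.
     With WhitespaceTokenizerFactory in place of StandardTokenizerFactory,
     tokens like "wi-fi" would reach the filter intact and catenateWords="1"
     could emit "wifi" alongside "wi" and "fi". -->
<fieldType name="text_wdf" class="solr.TextField" positionIncrementGap="100">
  <analyzer>
    <tokenizer class="solr.StandardTokenizerFactory"/>
    <filter class="solr.WordDelimiterFilterFactory"
            generateWordParts="1"
            generateNumberParts="1"
            catenateWords="1"
            splitOnCaseChange="1"
            preserveOriginal="1"/>
    <filter class="solr.LowerCaseFilterFactory"/>
  </analyzer>
</fieldType>
```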