[
https://issues.apache.org/jira/browse/LUCENE-3731?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13199778#comment-13199778
]
Robert Muir commented on LUCENE-3731:
-------------------------------------
Thanks for starting this, Tommaso:
I was unable to apply the patch (were there some svn-copies?),
but in general I suggest using
BaseTokenStreamTestCase.assertTokenStreamContents/assertAnalyzesTo:
e.g. instead of:
{code}
// check that 'the big brown fox jumped on the wood' tokens have the expected PoS types
String[] expectedPos = new String[]{"at", "jj", "jj", "nn", "vbd", "in", "at", "nn"};
int i = 0;
while (ts.incrementToken()) {
  assertNotNull(offsetAtt);
  assertNotNull(termAtt);
  assertNotNull(typeAttr);
  assertEquals(typeAttr.type(), expectedPos[i]);
  i++;
}
{code}
you could use:
{code}
assertTokenStreamContents(ts,
  new String[] { "the", "big", "brown", ... }, /* expected terms */
  new String[] { "at", "jj", "jj", ... });     /* expected types */
{code}
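assertAnalyzesTo works the same way but drives the whole Analyzer on an input string; a minimal sketch, assuming the (terms, types) overload and that 'analyzer' is the UIMA-based analyzer under test:
{code}
// sketch only: 'analyzer' is assumed to be the UIMA-based analyzer under test
assertAnalyzesTo(analyzer, "the big brown fox jumped on the wood",
  new String[] { "the", "big", "brown", "fox", "jumped", "on", "the", "wood" }, /* expected terms */
  new String[] { "at", "jj", "jj", "nn", "vbd", "in", "at", "nn" });            /* expected types */
{code}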
There are also variants that let you supply expected start/end offsets; I think
that would be good.
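For example, a sketch of the offsets variant (the offset values below are only illustrative, not taken from the patch):
{code}
assertTokenStreamContents(ts,
  new String[] { "the", "big", "brown" }, /* expected terms */
  new int[]    { 0, 4, 8 },               /* expected start offsets (illustrative) */
  new int[]    { 3, 7, 13 });             /* expected end offsets (illustrative) */
{code}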
Finally, to check for lots of other bugs (including thread-safety,
compatibility with charfilters, etc.),
I would recommend:
{code}
/** blast some random strings through the analyzer */
public void testRandomStrings() throws Exception {
  Analyzer a = new Analyzer() {
    @Override
    protected TokenStreamComponents createComponents(String fieldName, Reader reader) {
      Tokenizer tokenizer = new MyTokenizer(reader);
      return new TokenStreamComponents(tokenizer, tokenizer);
    }
  };
  checkRandomData(random, a, 10000*RANDOM_MULTIPLIER);
}
{code}
If you look at BaseTokenStreamTestCase you will see all of these methods are
insanely nitpicky and find all kinds of bugs in analysis components, so I think
it will really help test coverage.
> Create a analysis/uima module for UIMA based tokenizers/analyzers
> -----------------------------------------------------------------
>
> Key: LUCENE-3731
> URL: https://issues.apache.org/jira/browse/LUCENE-3731
> Project: Lucene - Java
> Issue Type: Improvement
> Components: modules/analysis
> Reporter: Tommaso Teofili
> Assignee: Tommaso Teofili
> Fix For: 3.6, 4.0
>
> Attachments: LUCENE-3731.patch
>
>
> As discussed in SOLR-3013, the UIMA Tokenizers/Analyzers should be refactored
> out into a separate module (modules/analysis/uima) as they can be used in plain
> Lucene. Then solr/contrib/uima will contain only the related factories.