Just updated to /trunk and am now seeing this exception:

Caused by: org.apache.solr.client.solrj.SolrServerException:
java.lang.ClassCastException:
xxx.solr.analysis.JSONKeyValueTokenizerFactory$1 cannot be cast to
org.apache.lucene.analysis.Tokenizer
        at org.apache.solr.client.solrj.embedded.EmbeddedSolrServer.request(EmbeddedSolrServer.java:141)
        ... 15 more
Caused by: java.lang.ClassCastException:
xxx.solr.analysis.JSONKeyValueTokenizerFactory$1 cannot be cast to
org.apache.lucene.analysis.Tokenizer
        at org.apache.solr.analysis.TokenizerChain.getStream(TokenizerChain.java:69)
        at org.apache.solr.analysis.SolrAnalyzer.reusableTokenStream(SolrAnalyzer.java:74)
        at org.apache.solr.schema.IndexSchema$SolrIndexAnalyzer.reusableTokenStream(IndexSchema.java:364)
        at org.apache.lucene.index.DocInverterPerField.processFields(DocInverterPerField.java:124)
        at org.apache.lucene.index.DocFieldProcessorPerThread.processDocument(DocFieldProcessorPerThread.java:244)
        at org.apache.lucene.index.DocumentsWriter.updateDocument(DocumentsWriter.java:772)


Looks like SolrIndexAnalyzer now assumes everything uses the new
TokenStream API...

I'm fine upgrading, but it seems we should make the 'back compatibility'
notice more explicit.


FYI, this is what the TokenizerFactory looks like:

public class JSONKeyValueTokenizerFactory extends BaseTokenizerFactory
{
  ...

  public TokenStream create(Reader input) {
    final JSONParser js = new JSONParser( input );
    final Stack<String> keystack = new Stack<String>();

    return new TokenStream()
    {
      ...
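
The cast fails because TokenizerChain.getStream() casts the factory's
return value to Tokenizer, and an anonymous TokenStream subclass is not a
Tokenizer. Here is a minimal, self-contained sketch of the problem and the
apparent fix (subclass Tokenizer instead of TokenStream in the factory) --
note the TokenStream/Tokenizer classes below are simplified stand-ins, not
the real Lucene classes:

```java
import java.io.Reader;
import java.io.StringReader;

public class CastDemo {
    // Stand-in for org.apache.lucene.analysis.TokenStream.
    static class TokenStream {}

    // Stand-in for org.apache.lucene.analysis.Tokenizer, which extends
    // TokenStream and wraps a Reader.
    static class Tokenizer extends TokenStream {
        protected final Reader input;
        Tokenizer(Reader input) { this.input = input; }
    }

    // Mirrors the factory above: returns an anonymous TokenStream subclass,
    // which TokenizerChain cannot cast to Tokenizer.
    static TokenStream createAsTokenStream(Reader input) {
        return new TokenStream() { /* ... */ };
    }

    // The apparent fix: subclass Tokenizer directly, so the cast succeeds.
    static TokenStream createAsTokenizer(Reader input) {
        return new Tokenizer(input) { /* ... */ };
    }

    public static void main(String[] args) {
        Reader r = new StringReader("{\"key\":\"value\"}");
        // false: the anonymous TokenStream is not a Tokenizer,
        // which is exactly what triggers the ClassCastException.
        System.out.println(createAsTokenStream(r) instanceof Tokenizer);
        // true: subclassing Tokenizer satisfies TokenizerChain's cast.
        System.out.println(createAsTokenizer(r) instanceof Tokenizer);
    }
}
```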
