[ https://issues.apache.org/jira/browse/NUTCH-1571?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

yuanyun.cn updated NUTCH-1571:
------------------------------

    Attachment: 1571-5.15.patch
    
> SolrInputSplit doesn't implement Writable and crawl script doesn't pass 
> crawlId to generate and updatedb tasks
> --------------------------------------------------------------------------------------------------------------
>
>                 Key: NUTCH-1571
>                 URL: https://issues.apache.org/jira/browse/NUTCH-1571
>             Project: Nutch
>          Issue Type: Bug
>          Components: indexer
>    Affects Versions: 2.1
>            Reporter: yuanyun.cn
>              Labels: crawler
>             Fix For: 2.2
>
>         Attachments: 1571-5.15.patch
>
>   Original Estimate: 2h
>  Remaining Estimate: 2h
>
> I met two issues when running the crawl script from the 2.x branch.
> 1. It throws an exception when running the solrdedup task:
> Exception in thread "main" java.lang.NullPointerException
>         at org.apache.hadoop.io.serializer.SerializationFactory.getSerializer(SerializationFactory.java:73)
>         at org.apache.hadoop.mapreduce.split.JobSplitWriter.writeNewSplits(JobSplitWriter.java:123)
>         at org.apache.hadoop.mapreduce.split.JobSplitWriter.createSplitFiles(JobSplitWriter.java:74)
>         at org.apache.hadoop.mapred.JobClient.writeNewSplits(JobClient.java:968)
>         at org.apache.hadoop.mapred.JobClient.writeSplits(JobClient.java:979)
>         at org.apache.hadoop.mapred.JobClient.access$600(JobClient.java:174)
>         at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:897)
>         at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:850)
>         at java.security.AccessController.doPrivileged(Native Method)
>         at javax.security.auth.Subject.doAs(Subject.java:415)
>         at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1121)
>         at org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:850)
>         at org.apache.hadoop.mapreduce.Job.submit(Job.java:500)
>         at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:530)
>         at org.apache.nutch.indexer.solr.SolrDeleteDuplicates.dedup(SolrDeleteDuplicates.java:371)
>         at org.apache.nutch.indexer.solr.SolrDeleteDuplicates.run(SolrDeleteDuplicates.java:381)
>         at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
>         at org.apache.nutch.indexer.solr.SolrDeleteDuplicates.main(SolrDeleteDuplicates.java:391)
> I debugged the code and found this is because the SolrInputSplit class 
> doesn't implement the Writable interface, so I changed it to implement the 
> interface and its two methods, readFields and write.
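>
> For reference, a minimal sketch of such a fix might look like the class 
> below, written here as a standalone class for readability (in 
> SolrDeleteDuplicates it is a nested class). The docBegin/numDocs fields and 
> the method bodies are illustrative assumptions, not copied from the 
> attached patch:
>
>     import java.io.DataInput;
>     import java.io.DataOutput;
>     import java.io.IOException;
>
>     import org.apache.hadoop.io.Writable;
>     import org.apache.hadoop.mapreduce.InputSplit;
>
>     // Without Writable, Hadoop's SerializationFactory finds no serializer
>     // for the split class, which surfaces as the NullPointerException
>     // above. Implementing Writable lets WritableSerialization handle it.
>     public class SolrInputSplit extends InputSplit implements Writable {
>
>       private int docBegin;  // first document in this split (assumed field)
>       private int numDocs;   // documents in this split (assumed field)
>
>       // Hadoop instantiates the split reflectively and then calls
>       // readFields(), so a no-argument constructor is required.
>       public SolrInputSplit() {
>       }
>
>       public SolrInputSplit(int docBegin, int numDocs) {
>         this.docBegin = docBegin;
>         this.numDocs = numDocs;
>       }
>
>       @Override
>       public long getLength() throws IOException, InterruptedException {
>         return numDocs;
>       }
>
>       @Override
>       public String[] getLocations() throws IOException, InterruptedException {
>         return new String[0];  // no data locality for a remote Solr index
>       }
>
>       @Override
>       public void readFields(DataInput in) throws IOException {
>         docBegin = in.readInt();
>         numDocs = in.readInt();
>       }
>
>       @Override
>       public void write(DataOutput out) throws IOException {
>         out.writeInt(docBegin);
>         out.writeInt(numDocs);
>       }
>     }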
> 2. Nothing is actually pushed to the remote Solr server. Looking at the 
> code, I found this is because the generate and updatedb tasks don't pass 
> the crawlId parameter: I added "-crawlId $CRAWL_ID" to both, and the crawl 
> script now works well (see the sketch below). Also, the generate task 
> doesn't seem to use the parameters $CRAWL_ID/crawldb and $CRAWL_ID/segments.
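>
> Roughly, the corresponding lines of the crawl script would change along 
> these lines (the variable names and the options other than -crawlId are 
> illustrative, not copied from the script; see the attached patch for the 
> exact change):
>
>     # before: no -crawlId, plus the unused 1.x-style path arguments
>     $bin/nutch generate -topN $SIZE_FETCHLIST $CRAWL_ID/crawldb $CRAWL_ID/segments
>     $bin/nutch updatedb
>
>     # after: generate and updatedb operate on the same crawl dataset as
>     # the other steps, so the indexing job finds the fetched documents
>     $bin/nutch generate -topN $SIZE_FETCHLIST -crawlId $CRAWL_ID
>     $bin/nutch updatedb -crawlId $CRAWL_ID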

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators.
For more information on JIRA, see: http://www.atlassian.com/software/jira
