Just a wild guess: since there are two Mapper classes in Hadoop 0.20, each in a different package (the old org.apache.hadoop.mapred one and the new org.apache.hadoop.mapreduce one), did you import the right one when extending CrawlerMapper?
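
In case it helps, here is a minimal sketch of the new-API wiring, with everything taken from org.apache.hadoop.mapreduce. I have used Text in place of your URLInfo class, since that class isn't shown, so take this as an illustration of the intended usage rather than something I have compiled against 0.20.0:

    import java.io.IOException;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.Mapper;   // new API -- NOT org.apache.hadoop.mapred.Mapper
    import org.apache.hadoop.mapreduce.lib.map.MultithreadedMapper;

    public class CrawlerJob {

        // The mapper must extend the new-API Mapper; pulling in the old
        // org.apache.hadoop.mapred.Mapper instead is one way to end up
        // with a "cannot be applied" error like the one quoted below.
        public static class CrawlerMapper
                extends Mapper<IntWritable, Text, Text, Text> {
            @Override
            protected void map(IntWritable key, Text value, Context context)
                    throws IOException, InterruptedException {
                // ... fetch the URL, then context.write(...) the results ...
            }
        }

        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            Job job = new Job(conf, "foo");
            job.setJarByClass(CrawlerJob.class);

            // MultithreadedMapper is the mapper the framework actually runs;
            // the real work is delegated to CrawlerMapper on several threads.
            job.setMapperClass(MultithreadedMapper.class);
            MultithreadedMapper.setMapperClass(job, CrawlerMapper.class);
            MultithreadedMapper.setNumberOfThreads(job, 10);

            // ... set input/output formats and paths, then
            // System.exit(job.waitForCompletion(true) ? 0 : 1);
        }
    }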
-Marshall Schor

Mathias De Maré wrote:
> Hi,
>
> I was wondering if MultithreadedMapper is broken in 0.20.0, or perhaps I am
> using it incorrectly. The following code:
>
>     Configuration conf = new Configuration();
>     String[] otherArgs = new GenericOptionsParser(conf, args).getRemainingArgs();
>     Job job = new Job(conf, "foo");
>     job.setMapperClass(MultithreadedMapper.class);
>     MultithreadedMapper.setMapperClass(job, CrawlerMapper.class);
>
> The last line always fails with the following error:
>
>     [javac] Compiling 14 source files to /home/mathias/hadoop_testing/crawler/svn/Crawler/build/classes
>     [javac] /home/mathias/hadoop_testing/crawler/svn/Crawler/src/mycrawler/CrawlerJob.java:37:
>             <K1,V1,K2,V2>setMapperClass(org.apache.hadoop.mapreduce.Job,java.lang.Class<org.apache.hadoop.mapreduce.Mapper<K1,V1,K2,V2>>)
>             in org.apache.hadoop.mapreduce.lib.map.MultithreadedMapper cannot be applied to
>             (com.acquia.web.crawler.CrawlerJob,java.lang.Class<mycrawler.CrawlerMapper>)
>     [javac]         MultithreadedMapper.setMapperClass(this, CrawlerMapper.class);
>     [javac]                             ^
>     [javac] 1 error
>
> However, CrawlerMapper does extend Mapper! More precisely:
>
>     public class CrawlerMapper extends Mapper<IntWritable, URLInfo, Text, URLInfo>
>
> Is there perhaps something I'm missing?
