The "run on hadoop" feature of that Eclipse plugin does not work well, so I do not recommend using it. Instead, use Maven to package Mahout:

mvn -DskipTests=true clean package

and then run the job jar on Hadoop:

hadoop jar mahout-*-job.jar xxx.xxx.xx.xx

The command line is more convenient.
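For the KMeans example, a concrete invocation might look roughly like the sketch below. Only the driver class name, org.apache.mahout.clustering.kmeans.KMeansDriver, comes from Mahout itself; the job-jar path, the HDFS paths, and the option values are placeholders I am assuming, so check the exact option names against the driver's --help output for your Mahout version.

# build the job jar, which bundles mahout-math and the other
# dependencies so the task nodes can load them
mvn -DskipTests=true clean package

# run the k-means driver from the job jar on the cluster;
# jar path, HDFS paths and option values are examples only
hadoop jar examples/target/mahout-examples-0.5-job.jar \
    org.apache.mahout.clustering.kmeans.KMeansDriver \
    -i /user/hadoop/kmeans/vectors \
    -c /user/hadoop/kmeans/initial-clusters \
    -o /user/hadoop/kmeans/output \
    -k 10 -x 20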
On 19 August 2011 at 16:03, 张玉东 <[email protected]> wrote:

> It is OK to run Mahout from the command line. I do not know whether
> Mahout supports the "run on hadoop" manner in Eclipse. Apparently, some
> basic classes are not shipped to the datanodes.
>
> -----Original Message-----
> From: 戴清灏 [mailto:[email protected]]
> Sent: 19 August 2011 15:58
> To: [email protected]
> Subject: Re: Mahout project running in eclipse
>
> Try to run mahout-*-job.jar, not any other jar.
> Is your Mahout version 0.5?
>
> On 19 August 2011 at 15:44, 张玉东 <[email protected]> wrote:
>
> > Dear Mahouters,
> > I am new to Mahout. I am trying to set up Mahout in Eclipse on Windows
> > and execute it on a remote Linux-based Hadoop cluster. However, when I
> > test the KMeans example, it offers two options: sequential and MR. The
> > former runs correctly, but when it is expected to run on the cluster
> > via MapReduce, the following error appears. Has anyone met a similar
> > problem? Or can it not be run in this manner? Thanks.
> >
> > Error: java.lang.ClassNotFoundException: org.apache.mahout.math.Vector
> >         at java.net.URLClassLoader$1.run(URLClassLoader.java:202)
> >         at java.security.AccessController.doPrivileged(Native Method)
> >         at java.net.URLClassLoader.findClass(URLClassLoader.java:190)
> >         at java.lang.ClassLoader.loadClass(ClassLoader.java:306)
> >         at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:301)
> >         at java.lang.ClassLoader.loadClass(ClassLoader.java:247)
> >         at java.lang.Class.forName0(Native Method)
> >         at java.lang.Class.forName(Class.java:247)
> >         at org.apache.hadoop.conf.Configuration.getClassByName(Configuration.java:762)
> >         at org.apache.hadoop.io.WritableName.getClass(WritableName.java:71)
> >         at org.apache.hadoop.io.SequenceFile$Reader.getValueClass(SequenceFile.java:1613)
> >         at org.apache.hadoop.io.SequenceFile$Reader.init(SequenceFile.java:1555)
> >         at org.apache.hadoop.io.SequenceFile$Reader.<init>(SequenceFile.java:1428)
> >         at org.apache.hadoop.io.SequenceFile$Reader.<init>(SequenceFile.java:1417)
> >         at org.apache.hadoop.io.SequenceFile$Reader.<init>(SequenceFile.java:1412)
> >         at org.apache.hadoop.mapreduce.lib.input.SequenceFileRecordReader.initialize(SequenceFileRecordReader.java:50)
> >         at org.apache.hadoop.mapred.MapTask$NewTrackingRecordReader.initialize(MapTask.java:418)
> >         at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:620)
> >         at org.apache.hadoop.mapred.MapTask.run(MapTask.java:305)
> >         at org.apache.hadoop.mapred.Child.main(Child.java:170)
> >
> > Yudong
> >
