No, I do not recommend building with Eclipse (unless you use m2eclipse <http://m2eclipse.sonatype.org/>). I mean you can simply compile and package your source code from the command line. There does not seem to be any brief guide on this; I would like to write my experience up on my blog. Do you have any IM? I can help you step by step. My QQ: 175162478 :-)
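For reference, the command-line workflow suggested in the quoted thread below can be sketched roughly as follows (a minimal sketch, assuming a Mahout 0.5 source checkout and a configured Hadoop client; the jar path, driver class, and arguments are placeholders, not the exact values from the thread):

```shell
# Build Mahout from the source tree, skipping the test suite.
# Note the property is case-sensitive: -DskipTests, not -Dskiptests.
mvn -DskipTests=true clean package

# Run against the cluster using the "-job" jar, which bundles Mahout's
# dependencies so that task JVMs on the datanodes can resolve classes
# such as org.apache.mahout.math.Vector.
# <main-class> and [args...] are placeholders for your driver and options.
hadoop jar core/target/mahout-*-job.jar <main-class> [args...]
```

Running any other jar (e.g. a plain `mahout-core-*.jar` without bundled dependencies) is what typically leads to the ClassNotFoundException discussed later in this thread.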
On 19 August 2011 at 16:33, 张玉东 <[email protected]> wrote:

> You mean that I can create a project in Eclipse, then build it with Maven?
> Do you have any guidelines or websites on this issue that I can refer to?
>
> -----Original Message-----
> From: 戴清灏 [mailto:[email protected]]
> Sent: 19 August 2011 16:28
> To: [email protected]
> Subject: Re: Mahout project running in eclipse
>
> Yes, of course. It's a packaging tool.
>
> On 19 August 2011 at 16:26, 张玉东 <[email protected]> wrote:
>
> > Can Maven build my own projects developed on top of Mahout?
> >
> > -----Original Message-----
> > From: 戴清灏 [mailto:[email protected]]
> > Sent: 19 August 2011 16:16
> > To: [email protected]
> > Subject: Re: Mahout project running in eclipse
> >
> > That plugin ("Run on Hadoop") is not that good, so I do not recommend
> > using it.
> > You can use Maven to package Mahout: mvn -DskipTests=true clean package
> > And run it on Hadoop: hadoop jar mahout-*-job.jar xxx.xxx.xx.xx
> > The command line would be more convenient.
> >
> > On 19 August 2011 at 16:03, 张玉东 <[email protected]> wrote:
> >
> > > It is OK to run Mahout from the command line. I do not know whether
> > > Mahout supports the "run on hadoop" manner in Eclipse. Apparently,
> > > some basic classes are not shipped to the datanodes.
> > >
> > > -----Original Message-----
> > > From: 戴清灏 [mailto:[email protected]]
> > > Sent: 19 August 2011 15:58
> > > To: [email protected]
> > > Subject: Re: Mahout project running in eclipse
> > >
> > > Try to run mahout-*-job.jar, not any other jar.
> > > Is your Mahout version 0.5?
> > >
> > > On 19 August 2011 at 15:44, 张玉东 <[email protected]> wrote:
> > >
> > > > Dear Mahouters,
> > > > I am new to Mahout. I am trying to set up Mahout in Eclipse on
> > > > Windows and execute it on a remote Linux-based Hadoop cluster.
> > > > However, when I test the KMeans example, it offers two options:
> > > > sequential and MR. The former runs correctly, but when it is run
> > > > on the cluster via MapReduce, the following error appears. Has
> > > > anyone met a similar problem? Or can it not be run in this manner?
> > > > Thanks.
> > > >
> > > > Error: java.lang.ClassNotFoundException: org.apache.mahout.math.Vector
> > > > at java.net.URLClassLoader$1.run(URLClassLoader.java:202)
> > > > at java.security.AccessController.doPrivileged(Native Method)
> > > > at java.net.URLClassLoader.findClass(URLClassLoader.java:190)
> > > > at java.lang.ClassLoader.loadClass(ClassLoader.java:306)
> > > > at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:301)
> > > > at java.lang.ClassLoader.loadClass(ClassLoader.java:247)
> > > > at java.lang.Class.forName0(Native Method)
> > > > at java.lang.Class.forName(Class.java:247)
> > > > at org.apache.hadoop.conf.Configuration.getClassByName(Configuration.java:762)
> > > > at org.apache.hadoop.io.WritableName.getClass(WritableName.java:71)
> > > > at org.apache.hadoop.io.SequenceFile$Reader.getValueClass(SequenceFile.java:1613)
> > > > at org.apache.hadoop.io.SequenceFile$Reader.init(SequenceFile.java:1555)
> > > > at org.apache.hadoop.io.SequenceFile$Reader.<init>(SequenceFile.java:1428)
> > > > at org.apache.hadoop.io.SequenceFile$Reader.<init>(SequenceFile.java:1417)
> > > > at org.apache.hadoop.io.SequenceFile$Reader.<init>(SequenceFile.java:1412)
> > > > at org.apache.hadoop.mapreduce.lib.input.SequenceFileRecordReader.initialize(SequenceFileRecordReader.java:50)
> > > > at org.apache.hadoop.mapred.MapTask$NewTrackingRecordReader.initialize(MapTask.java:418)
> > > > at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:620)
> > > > at org.apache.hadoop.mapred.MapTask.run(MapTask.java:305)
> > > > at org.apache.hadoop.mapred.Child.main(Child.java:170)
> > > >
> > > > Yudong
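On the question in the thread about building your own project on top of Mahout with Maven: a minimal sketch of the relevant pom.xml fragment, assuming Mahout 0.5 published to Maven Central under the standard `org.apache.mahout` coordinates (verify the artifact IDs against the version you actually use):

```xml
<dependencies>
  <!-- Core Mahout APIs; pulls in mahout-math transitively, which is
       where org.apache.mahout.math.Vector lives. -->
  <dependency>
    <groupId>org.apache.mahout</groupId>
    <artifactId>mahout-core</artifactId>
    <version>0.5</version>
  </dependency>
</dependencies>
```

To run your own driver on the cluster, you would then need to bundle these dependencies into a single jar (for example with the maven-assembly or maven-shade plugin), mirroring what the `mahout-*-job.jar` does; otherwise the task JVMs on the datanodes hit the same ClassNotFoundException shown above.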
