Cool, that solved the problem. I hadn't printed out the tutorial, and hadn't even scrolled down to see the "if it's not working" section.
Thanks.

On Wed, May 13, 2009 at 9:06 PM, Frank McCown <fmcc...@harding.edu> wrote:
> Look under the section entitled "Java Heap Size problem" in the 1.0
> tutorial. That could be your problem.
>
> Frank
>
> On Wed, May 13, 2009 at 3:12 AM, jackyu <jackyu...@gmail.com> wrote:
> >
> > Hi,
> > I followed the tutorial for running Nutch 1.0 in Eclipse under Ubuntu.
> > Building is OK, but I can't see the new build output dir.
> > When I run Crawl.java in Eclipse, the console shows:
> >
> > crawl started in: crawl
> > rootUrlDir = urls
> > threads = 10
> > depth = 3
> > topN = 50
> > Injector: starting
> > Injector: crawlDb: crawl/crawldb
> > Injector: urlDir: urls
> > Injector: Converting injected urls to crawl db entries.
> > Exception in thread "main" java.io.IOException: Job failed!
> >         at org.apache.hadoop.mapred.JobClient.runJob(JobClient.java:1232)
> >         at org.apache.nutch.crawl.Injector.inject(Injector.java:160)
> >         at org.apache.nutch.crawl.Crawl.main(Crawl.java:113)
> >
> > If I run the crawl from the command line, it shows:
> >
> > org.apache.nutch.plugin.PluginRuntimeException:
> > java.lang.ClassNotFoundException:
> > org.apache.nutch.net.urlnormalizer.basic.BasicURLNormalizer
> >
> > Thanks
> >
> > --
> > View this message in context:
> > http://www.nabble.com/can%27t-run-in-eclipse-tp23517344p23517344.html
> > Sent from the Nutch - User mailing list archive at Nabble.com.
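
(Note for anyone finding this thread later: the "Java Heap Size problem" fix in the tutorial boils down to giving the JVM more memory when launching Crawl from Eclipse. A minimal sketch, assuming an Eclipse run configuration for Crawl and an illustrative 512 MB value rather than whatever the tutorial actually recommends:

    Run > Run Configurations... > [your Crawl launch] > Arguments > VM arguments:
        -Xmx512m

The separate ClassNotFoundException for BasicURLNormalizer on the command line usually means the plugins can't be found on the classpath; checking that the plugin.folders property in conf/nutch-default.xml, or an override in nutch-site.xml, points at the built plugins directory is worth a try. That last part is an assumption about this particular setup, not something confirmed in the thread.)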