Hello, I originally attempted to create a Java class that was meant to be executed from the command line. So I created a `main` function, and from Eclipse I was able to just /right-click/ and use the `run as java application` menu. Nutch was working fine, crawling the URLs as it was meant to.
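Roughly, the wrapper class looked like this (the seed directory and crawl options below are just placeholders, not my real values):

import org.apache.nutch.crawl.Crawl;

// Simple command-line wrapper around Nutch's Crawl entry point.
public class CrawlRunner {

    public static void main(String[] args) throws Exception {
        // Placeholder arguments: seed URL dir, output dir, depth, topN.
        String[] crawlArgs = {
            "urls",
            "-dir", "crawl",
            "-depth", "3",
            "-topN", "50"
        };
        // Delegate to Nutch's own entry point.
        Crawl.main(crawlArgs);
    }
}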
Later I ran into some issues while trying to generate my executable /JAR/, so I decided to just change my class into an HttpServlet instead and trigger the crawl via an HTTP request. Unfortunately, it dies with the famous `Job failed!` message. I stepped through the motions and noticed that when I execute the whole thing from Eclipse, my *Crawl.main(args)* call properly enters the *Crawl.main()* function, but when I call the same function via HTTP, it ends up jumping to *Crawl.run()* - not sure whether that is normal or not. You can see the whole console log here: http://pastebin.com/raw.php?i=Z7NHYzUU
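For reference, the servlet is essentially the same call, just wrapped in doGet() (again simplified, with placeholder arguments):

import java.io.IOException;

import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

import org.apache.nutch.crawl.Crawl;

// Triggers the same crawl as the command-line wrapper, but from an HTTP request.
public class CrawlServlet extends HttpServlet {

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws ServletException, IOException {
        String[] crawlArgs = {
            "urls",
            "-dir", "crawl",
            "-depth", "3",
            "-topN", "50"
        };
        try {
            // This is the call that ends with "Job failed!" in the log.
            Crawl.main(crawlArgs);
            resp.getWriter().println("Crawl finished");
        } catch (Exception e) {
            throw new ServletException("Crawl failed", e);
        }
    }
}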
thanks,
--imre
