Has anybody written a servlet that invokes a Nutch crawl? If not, why
not? Are there serious hurdles to doing this? One issue I see is the
possibility of two users trying to run a crawl simultaneously. The
default Nutch installation has a single nutch-site.xml, urls directory,
and crawl-urlfilter.txt, each of which is generally modified before
invoking Nutch.
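
For the simultaneous-crawl problem, the simplest thing I can think of is
to serialize crawls behind a single permit inside the webapp. Rough
sketch below; CrawlGate and the Runnable handoff are just my own
placeholders, not anything that ships with Nutch:

import java.util.concurrent.Semaphore;

// One permit: a second request is turned away while a crawl is running.
public class CrawlGate {
    private static final Semaphore ONE_CRAWL = new Semaphore(1);

    public static boolean tryStartCrawl(final Runnable crawlJob) {
        if (!ONE_CRAWL.tryAcquire()) {
            return false; // a crawl is already in progress
        }
        new Thread(new Runnable() {
            public void run() {
                try {
                    crawlJob.run();
                } finally {
                    ONE_CRAWL.release();
                }
            }
        }).start();
        return true;
    }
}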

I've been invoking the crawl command with the following shell script:
#!/bin/bash
# crawl.sh <depth> <url directory> <db directory> <log file>
# $1 = depth, $2 = url directory, $3 = db directory, $4 = log file
nohup bin/nutch crawl "$2" -threads 10 -dir "$3" -depth "$1" >& "$4" &

I have been given the task of developing a web app that allows a user to
configure and then invoke a customized crawl. These crawls will probably
be mostly intranet crawls, but I can't assume that they will be limited
to our websites.

I could write a Perl CGI script that would process form input, modify
the config files, and then do system('nohup bin/nutch crawl <snip>');
but I'd prefer a pure Java solution to this problem.
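
Roughly what I have in mind, sketched as a servlet that shells out to
bin/nutch with ProcessBuilder instead of going through Perl. NUTCH_HOME,
the form parameter names, and the per-crawl directory naming are just
placeholders I made up for illustration:

import java.io.File;
import java.io.IOException;
import java.io.PrintWriter;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class CrawlServlet extends HttpServlet {
    private static final String NUTCH_HOME = "/opt/nutch"; // placeholder path

    protected void doPost(HttpServletRequest req, HttpServletResponse resp)
            throws ServletException, IOException {
        String depth = req.getParameter("depth");
        String urlDir = req.getParameter("urlDir");

        // Give each crawl its own output directory so two crawls
        // don't fight over the same db.
        File crawlDir = new File(NUTCH_HOME, "crawl-" + System.currentTimeMillis());

        ProcessBuilder pb = new ProcessBuilder(
                NUTCH_HOME + "/bin/nutch", "crawl", urlDir,
                "-dir", crawlDir.getPath(),
                "-depth", depth,
                "-threads", "10");
        pb.directory(new File(NUTCH_HOME));
        pb.redirectErrorStream(true);
        Process proc = pb.start();
        // NOTE: a real app would drain proc's output into a log file and
        // watch for completion in a background thread rather than here.

        PrintWriter out = resp.getWriter();
        out.println("Crawl started in " + crawlDir.getPath());
    }
}

I'd still want to pour the process output into a log file and guard the
launch with something like the CrawlGate above, but that's the general
shape I'm picturing.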

What do you think?

- Paul M Lieberman
American Psychological Association


