Thanks. From what I've read, it seems there is a way to inject new URLs into
the existing webdb, then fetch those newly added URLs, index them, and merge
with the existing version? Is this the same as what you said: "generate
segments and do it"?


On 3/19/06, Raghavendra Prabhu <[EMAIL PROTECTED]> wrote:
>
> You cannot run the crawl step once more
>
> You have to generate segments and do it
>
> The normal crawl cannot be used
>
>
>
>
> On 3/19/06, Hong Li <[EMAIL PROTECTED]> wrote:
> >
> > Greeting,
> >
> > Can anyone tell me how to regularly fetch a given website to grab its
> > newly added content? I am using nutch crawl to get the first complete
> > copy of our website's content, to replace a MySQL-based search, but I
> > can't figure out how to run nutch a second time, since it always
> > complains that the crawl directory already exists.
> >
> > TIA,
> >
> >
>
>
