Does anyone know of a nicer way of doing this?
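For context, here is roughly what our script does today: build the merged index in a fresh directory next to the live one, repoint an "index" symlink once the merge has finished, and only then touch web.xml so the webapp reloads. The directory layout, the symlink, and the Tomcat paths below are assumptions about a typical setup, so treat this as a sketch rather than a drop-in script:

```shell
#!/bin/sh
# Sketch only: build the new index beside the live one and swap a symlink,
# so searches keep hitting the old index while the merge is running.
# ASSUMPTIONS: the webapp reads the index through $CRAWL_DIR/index, and
# $TOMCAT_DIR points at the deployed Nutch webapp. Adjust for your install.
set -e

CRAWL_DIR=${CRAWL_DIR:-./crawl}
TOMCAT_DIR=${TOMCAT_DIR:-/usr/local/tomcat/webapps/ROOT}
NEW_INDEX="$CRAWL_DIR/index-$(date +%Y%m%d%H%M%S)"

mkdir -p "$NEW_INDEX"

# 1) Recrawl and merge into $NEW_INDEX instead of the live index
#    (Nutch commands elided here; the live index is never touched).

# 2) Repoint the "index" symlink at the finished index. rm+ln leaves a
#    tiny window; on GNU systems, mv -T on a temporary symlink is atomic.
rm -f "$CRAWL_DIR/index"
ln -s "$(basename "$NEW_INDEX")" "$CRAWL_DIR/index"

# 3) Touch web.xml so Tomcat reloads just this webapp (no full restart).
if [ -f "$TOMCAT_DIR/WEB-INF/web.xml" ]; then
    touch "$TOMCAT_DIR/WEB-INF/web.xml"
fi
```

The point of the symlink is that searches keep hitting the old index for the whole duration of the merge, which avoids the empty-results window described below.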


On 8/13/07, Renaud Richardet <[EMAIL PROTECTED]> wrote:
> not sure, but I think it's just to flush the cached index...
>
>
> Brian Demers wrote:
> > Why does the web app need to be restarted? Are the index files on the
> > classpath or something? It seems like this is a hack.
> >
> >
> > On 8/13/07, srampl <[EMAIL PROTECTED]> wrote:
> >
> >> Hi,
> >>
> >> Thanks for this valuable information,
> >>
> >> I need continuous, up-to-date results in Nutch. I have old crawl data
> >> "crawlA" and new crawl data "crawlB". You suggested running "touch
> >> $tomcat_dir/WEB-INF/web.xml" after the merge, and that is fine, but while
> >> crawlA and crawlB are being merged we cannot serve any results: the
> >> search page comes back empty. That is the problem I am asking how to
> >> solve.
> >>
> >> Thanks
> >>
> >>
> >>
> >>
> >> Tomislav Poljak wrote:
> >>
> >>> Hi,
> >>> if it helps:
> >>>
> >>> you don't need to restart Tomcat to pick up index changes; it is enough
> >>> to reload the individual web application (without restarting the Tomcat
> >>> service) by touching the application's web.xml file. This is faster than
> >>> restarting Tomcat. Add:
> >>>
> >>> touch $tomcat_dir/WEB-INF/web.xml
> >>>
> >>> to the end of your script, and this will tell Tomcat to reload the index.
> >>>
> >>> Tomislav
> >>>
> >>>
> >>>
> >>> On Fri, 2007-08-10 at 02:50 -0700, srampl wrote:
> >>>
> >>>> Hi,
> >>>>
> >>>> Thanks for your reply,
> >>>>
> >>>> I did follow that step, but while the old index (the one Tomcat is
> >>>> currently serving) is being merged with the new index, and after the
> >>>> merge, no search results are returned until Tomcat is restarted. So we
> >>>> cannot serve continuous search results during or after the merge until
> >>>> Tomcat is restarted.
> >>>>
> >>>> Please suggest how to handle this.
> >>>>
> >>>> Thanks in advance,
> >>>>
> >>>>
> >>>>
> >>>> Harmesh, V2solutions wrote:
> >>>>
> >>>>> Hi,
> >>>>> The crawl can be updated by performing the generate, fetch and update
> >>>>> cycle again, step by step. Generate will create a new segment, and
> >>>>> after the documents are fetched, the update step will merge it with
> >>>>> the older crawl.
> >>>>>
> >>>>>
> >>>>> Ratnesh,V2Solutions India wrote:
> >>>>>
> >>>>>> Hi Ricardo,
> >>>>>> Greetings of the day.
> >>>>>> We are using Nutch and our corporate application is ready, but since
> >>>>>> our client wants fresh crawl data, we are planning to update our
> >>>>>> crawldb instead of re-crawling from scratch.
> >>>>>>
> >>>>>> Do you have any suggestion on how to update a crawldb that has
> >>>>>> already been crawled and is storing useful information?
> >>>>>>
> >>>>>> It would be great to hear of any solutions from you or your colleagues.
> >>>>>>
> >>>>>> With Thanks & Regards,
> >>>>>>
> >>>>>> Ratnesh,V2Solutions India
> >>>>>>
> >>>>>>
> >>>>>>
> >>>>>
>
>
