Hi,

I'm trying to provide search functionality for our website using Apache
Solr. We have an in-house developed crawler that already provides some
of the functionality we need.

My question is this: the current crawler program saves all of its data into
a database. Is it a good approach to store all crawler output in a database,
or would it be better to leave it as flat files (XML/HTML)? We expect our
data to grow rapidly. Assume that my next step is to import all the data
from the database into a Solr index.
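Whichever storage you choose, the import step usually comes down to turning each crawled record into Solr's XML update format and POSTing it to the update handler. Here is a minimal sketch of building that XML; the field names (id, url, content) are hypothetical and must match whatever your schema.xml actually defines:

```python
# Sketch: building a Solr <add> update message from crawled records.
# Field names (id, url, content) are assumptions -- adjust them to
# match the fields declared in your Solr schema.xml.
import xml.etree.ElementTree as ET

def to_solr_add_xml(records):
    """Convert a list of dicts into Solr's <add><doc> update format."""
    add = ET.Element("add")
    for rec in records:
        doc = ET.SubElement(add, "doc")
        for name, value in rec.items():
            field = ET.SubElement(doc, "field", name=name)
            field.text = str(value)
    return ET.tostring(add, encoding="unicode")

# Example: two crawled pages pulled from the database (or a flat file).
records = [
    {"id": "1", "url": "http://example.com/a", "content": "page A text"},
    {"id": "2", "url": "http://example.com/b", "content": "page B text"},
]
print(to_solr_add_xml(records))
# The resulting string would then be POSTed to Solr's update handler,
# e.g. http://localhost:8983/solr/update (host/path depend on your setup),
# followed by a <commit/> message.
```

The same conversion works whether the records come out of a database query or a flat-file parse, so the storage decision doesn't lock you out of either import path.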

Any suggestions would be helpful and appreciated.

Thanks
Ram
-- 
View this message in context: 
http://www.nabble.com/Crawler-Output-Flat-file-or-Database--tp22774610p22774610.html
Sent from the Nutch - User mailing list archive at Nabble.com.