Maybe someone can explain to me how this works.

First, my setup.

I create a fetchlist each night with FreeFetchlistTool and fetch those 
pages.  The fetchlist often contains URLs that are already in the database, 
but this tool fetches the newest copies of those URLs.
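
For reference, the generate-and-fetch half of the nightly cycle looks 
roughly like this.  It is only a sketch, not my exact script: I am showing 
the stock bin/nutch generate (FetchListTool) in place of FreeFetchlistTool, 
and $db_dir, $segments_dir and the -topN value are placeholders.

# build tonight's fetchlist (stock equivalent of what FreeFetchlistTool does for me)
bin/nutch generate $db_dir $segments_dir -topN 10000
# pick up the segment that was just created
segment=`ls -d $segments_dir/* | tail -1`
# fetch the pages on that fetchlist
bin/nutch fetch $segment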

I also run nutch dedup after everything is fetched, indexed, etc.  I then 
merge the per-segment indexes into the master index using the following 
command:

ls -d $segments_dir/* | xargs bin/nutch merge $index_dir
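
The post-fetch half looks roughly like this (again a sketch; $segment is 
the segment fetched that night, and the dedup arguments follow the 
0.7-style tutorial, so they may differ on other versions):

# index the newly fetched segment
bin/nutch index $segment
# mark duplicate documents as deleted in the per-segment indexes
bin/nutch dedup $segments_dir dedup.tmp
# merge the per-segment indexes into the master index
ls -d $segments_dir/* | xargs bin/nutch merge $index_dir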

Every night the number of "duplicates" increases.  This is, I assume, 
because the duplicates from the day before are not actually deleted.

Is dedup removing them from some sort of master index while the segments 
retain their original information?

If so, is there a way to merge the segments into one (or whatever) so that 
duplicate URLs do not exist?  Would mergesegs do this?
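
For concreteness, I am imagining something along these lines.  This is 
purely a guess on my part; I have not checked the real mergesegs 
(SegmentMergeTool) arguments, and merged_segments is just a placeholder 
output directory:

# merge all existing segments into one, hopefully keeping only the
# latest copy of each URL -- is this what mergesegs actually does?
bin/nutch mergesegs -dir $segments_dir -o merged_segments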

Thanks for any help, and I hope my question is clear.

Matt

