Maybe someone can explain to me how this works.
First, my setup.
I create a fetchlist each night with FreeFetchlistTool and fetch those
pages. The fetchlist often contains URLs that are already in the database,
but this tool fetches the newest copies of those pages.
I also run nutch dedup after everything is fetched, indexed, etc. I then
merge the segments using the following command:
ls -d $segments_dir/* | xargs bin/nutch merge $index_dir
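To make the order of operations concrete, the nightly run looks roughly
like this (directory names are placeholders and the fetchlist/indexing
steps are abbreviated):

  # 1. FreeFetchlistTool builds the fetchlist (often re-listing known URLs)
  # 2. bin/nutch fetch on the new segment
  # 3. index the new segment
  bin/nutch dedup $indexes_dir                                # 4. dedup
  ls -d $segments_dir/* | xargs bin/nutch merge $index_dir    # 5. merge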
Every night the number of "duplicates" increases. This seems to be because
the duplicates from the previous day are never actually deleted (I assume).
Is dedup removing them from some sort of master index while the segments
retain their original information?
If so, is there a way to merge the segments into one (or whatever) so that
duplicate URLs do not exist? Would mergesegs do this?
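If mergesegs is the right tool here, I imagine the invocation would be
something along these lines (the output directory name is just an example,
and I may well have the flags wrong):

  bin/nutch mergesegs merged_segments -dir $segments_dir
  # ...and then index/merge from merged_segments instead of the old segments?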
Thanks for any help, and I hope my question is clear.
Matt