Stefan's tutorial doesn't list these as steps; I hope to add them before the end of the year.
If you want to continually fetch more and more levels from your
crawldb and keep your index up to date, what is the correct way
of doing so?
Currently I am doing this:
generate
fetch
invertlinks
index
Looks like you forgot to update the crawldb (updatedb) after fetching,
but in general that is the way to go.
You can run this cycle 100000 times or more :). I suggest using a big
enough segment size and later merging some of the indexes together.
Just play around and try it out.
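To make the cycle concrete, here is a rough sketch of one iteration as a shell function, with the missing updatedb step included. The command names come from bin/nutch; the crawl/ directory layout (crawl/crawldb, crawl/segments, crawl/linkdb, crawl/indexes) is just an assumed example, so adjust paths to your own setup.

```shell
#!/bin/sh
# Sketch of one whole-web crawl cycle. Paths under crawl/ are
# assumptions; adapt them to your installation.
NUTCH=${NUTCH:-bin/nutch}

crawl_cycle() {
  # 1. Select the next batch of URLs from the crawldb into a new segment.
  $NUTCH generate crawl/crawldb crawl/segments
  # The newest directory under crawl/segments is the segment just generated.
  segment=$(ls -d crawl/segments/* | tail -1)
  # 2. Fetch the pages in that segment.
  $NUTCH fetch "$segment"
  # 3. The step missing above: fold the fetch results (new links,
  #    page statuses) back into the crawldb so the next generate
  #    can pick up the newly discovered URLs.
  $NUTCH updatedb crawl/crawldb "$segment"
  # 4. Invert links and index the segment.
  $NUTCH invertlinks crawl/linkdb "$segment"
  $NUTCH index crawl/indexes crawl/crawldb crawl/linkdb "$segment"
}

# Repeat the cycle as often as you like, e.g.:
# for i in 1 2 3; do crawl_cycle; done
```

Each pass through the loop deepens the crawl by one level, because updatedb is what feeds the links found during fetching back into the crawldb.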
The segment size and how many segment indexes you should merge
depend very much on your hardware.
Also note that searching an index stored on ndfs is slow, but
there should be a solution for that within the next few weeks or so.
HTH
Stefan