I guess I really don't understand this stuff.  When I scoop the
Slashdot site, for instance, I say

sitescooper.pl -dump -mhtml -refresh -site site_samples/linux/slashdot.site -filename Slashdot

and it obligingly goes out and pulls over 18 pages from Slashdot.
However, when I then look at the generated Slashdot.html top page, it
has no links in it at all!  Why bother pulling over the other pages and
stashing them in that directory if the contents page isn't going to
reference them?
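For what it's worth, this is roughly how I checked that the top page has no links: grep the generated file for anchor tags.  (The file contents below are just a stand-in for illustration; the real Slashdot.html is whatever sitescooper wrote on my run.)

```shell
# Stand-in for the generated top page (hypothetical content, for illustration)
printf '<html><body>No links here</body></html>\n' > /tmp/Slashdot.html

# Count anchor tags; a count of 0 means the contents page links to nothing
grep -c '<a href' /tmp/Slashdot.html || true
```

On my actual Slashdot.html the count comes back zero, which is the problem.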

Bill
_______________________________________________
Sitescooper-talk mailing list
[EMAIL PROTECTED]
http://lists.sourceforge.net/mailman/listinfo/sitescooper-talk