Hello, I've been having some problems with plucker reading the DB file produced when the spider program reads home.html. To cut down on noise, I simplified this file to:
<HTML>
<HEAD> <TITLE>Plucker Home Page</TITLE> </HEAD>
<BODY>
<H1>Plucker Home</H1>
<A HREF="http://www.theregister.co.uk" MAXDEPTH=3>The Register</A><P>
</BODY>
</HTML>

When I open the DB created by the spider program, the only link shown is "The Register". I click on it and the first page of the Register's site comes up just fine. So far so good, but when I then follow any link from that page, every one of them returns "Sorry, the link you selected was not downloaded by Plucker."

I've tried adjusting the MAXDEPTH to various N >= 2, and I've tried with and without STAYONHOST. These settings affect the DB size in the expected ways, but the end result is the same.

I'm sure the problem is me, but I can't see what I'm doing wrong. I'd really like to use Plucker, but this problem is driving me batty (or battier, if you ask my wife ;). Does anyone have any suggestions, or pointers to the relevant part of TFM?

I'm using Plucker 1.1.14 on a RH Linux box.

Thanks,
pete

