>> There are some pages which don't get processed properly by the Python parser.
>
> Hi Stephen,
>
> What pages don't get parsed properly?

It is www.animexx.de/news.

> If you are sure the HTML is valid to the W3C spec, you may wish to fill out a
> bug report at http://bugs.plkr.org so that it can be fixed.

No, it's not a bug in Plucker Desktop or the parser; it's because those guys can't keep their fingers off "illegal" HTML code. (Yes, I already checked it at www.w3c.org.) (They also have a webspider-blocking mechanism which stops you from retrieving more than one page with JPluck, so I don't know whether Plucker Desktop would handle this problem either.)

Thanks for your interest in this case, but as long as there is a possibility to "JPluck" the necessary sites, I will use the Java program. No problem whatsoever.

Thanks,
Stephen D. Leedle
AKA Verlorene Seele

> Best wishes,
> Robert

_______________________________________________
plucker-list mailing list
[EMAIL PROTECTED]
http://lists.rubberchicken.org/mailman/listinfo/plucker-list
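Incidentally, Python's standard-library html.parser module is event-based and quite tolerant of malformed markup, so a custom extractor can often cope with pages that would fail W3C validation. A minimal sketch of that idea (the broken HTML sample below is made up for illustration, not taken from www.animexx.de):

```python
from html.parser import HTMLParser

class LinkCollector(HTMLParser):
    """Collects href attributes, tolerating unclosed and invalid tags."""

    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        # Fires for every start tag the parser can recognize,
        # even when surrounding tags are never closed.
        if tag == "a":
            for name, value in attrs:
                if name == "href":
                    self.links.append(value)

# Deliberately invalid HTML: unclosed <p> and <a> elements.
broken = '<p><a href="/news">News<p><a href="/about">About'

parser = LinkCollector()
parser.feed(broken)
print(parser.links)  # -> ['/news', '/about']
```

Whether that is enough depends on how badly a given page is broken, of course, but an event-based parser degrades much more gracefully than a strict tree-building one.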

