"bruce" <[EMAIL PROTECTED]> wrote:
hi...

we're looking at creating a project/app to extract information from
university websites. we know we can write a separate individual perl
app/script for each school which would crawl/parse/extract the information we
need. however, we'd rather not write a unique perl script for each school if
there is a better/more efficient way.

anybody have any good suggestions, preferably with code samples!!

thanks for any help/assistance/pointers/etc...

If you unleash a spider, don't forget to build in support for avoiding pages when requested to do so. Visit: http://www.robotstxt.org/wc/exclusion.html

Also, a good place to start would be The Web Robots FAQ
at http://www.robotstxt.org/wc/faq.html
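Before your crawler fetches anything from a site, it should read that site's /robots.txt and skip disallowed paths. In Perl, WWW::RobotRules (part of libwww-perl) handles this properly; below is a minimal hand-rolled sketch of the same idea, assuming you only honor the `User-agent: *` record and simple prefix matching (the module handles named agents and more corner cases):

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Parse a robots.txt body and return the Disallow prefixes that apply
# to us. Simplifying assumption: we only obey the 'User-agent: *'
# record -- WWW::RobotRules is the real answer for production use.
sub disallowed_prefixes {
    my ($robots_txt) = @_;
    my @prefixes;
    my $applies = 0;
    for my $line (split /\n/, $robots_txt) {
        $line =~ s/#.*//;            # strip comments
        $line =~ s/^\s+|\s+$//g;     # trim whitespace
        next unless length $line;
        if ($line =~ /^User-agent:\s*(.+)$/i) {
            $applies = ($1 eq '*');
        }
        elsif ($applies && $line =~ /^Disallow:\s*(\S*)/i) {
            push @prefixes, $1 if length $1;  # empty Disallow = no restriction
        }
    }
    return @prefixes;
}

# A path is fetchable unless it starts with a disallowed prefix.
sub allowed {
    my ($path, @prefixes) = @_;
    for my $p (@prefixes) {
        return 0 if index($path, $p) == 0;
    }
    return 1;
}

my $robots = <<'EOT';
# example robots.txt
User-agent: *
Disallow: /cgi-bin/
Disallow: /private/
EOT

my @deny = disallowed_prefixes($robots);
print allowed('/admissions/index.html', @deny) ? "fetch\n" : "skip\n";
print allowed('/cgi-bin/search',        @deny) ? "fetch\n" : "skip\n";
```

If you use LWP for fetching anyway, LWP::RobotUA is a drop-in replacement for LWP::UserAgent that does this checking (plus polite request delays) for you.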

The FAQ also suggests a book or three on spiders.
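On the original question: rather than writing a unique script per school, one common approach is a single generic crawler driven by a per-school configuration of URLs and extraction patterns, so only the config changes from school to school. The school name, URL, and regexes below are made-up placeholders -- a sketch of the shape of the thing, not working patterns for any real site:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# One generic extractor, many per-school configs. Each config supplies
# a start URL and a regex per field. (Hypothetical school and patterns;
# for anything non-trivial an HTML parser such as HTML::TreeBuilder
# beats regexes.)
my %schools = (
    'example-u' => {
        url    => 'http://www.example-u.edu/admissions.html',
        fields => {
            tuition  => qr/Tuition:\s*\$([\d,]+)/,
            deadline => qr/Application deadline:\s*([\w\s,]+\d{4})/,
        },
    },
);

# Apply every field pattern in a config to a page body.
sub extract {
    my ($html, $fields) = @_;
    my %out;
    while (my ($name, $re) = each %$fields) {
        ($out{$name}) = $html =~ $re;
    }
    return %out;
}

# In the real app you'd fetch $schools{$_}{url} with LWP::UserAgent;
# here a canned page stands in for the download.
my $page = 'Tuition: $12,345 ... Application deadline: March 1, 2003';
my %data = extract($page, $schools{'example-u'}{fields});
print "$_ => $data{$_}\n" for sort keys %data;
```

Adding a school then means adding one hash entry, not one script.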

Regards,
Martin


_______________________________________________
Perl-Win32-Users mailing list
[EMAIL PROTECTED]
To unsubscribe: http://listserv.ActiveState.com/mailman/mysubs
