Follow-up Comment #4, bug #51029 (project wget):

Hi again,

Our system hit another website with the same behavior. It's the same call
as in the original post, but with https://www.sparkasse.at as the target.
After about 19 MiB (uncompressed WARC size), wget tries to download
robots.txt and crashes.
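
For reference, a sketch of what such a call might look like; the exact
flags are in the original post and are not quoted in this comment, so
everything besides the target URL below is an assumption:

    # Assumed shape of the call from the original post; only the target
    # URL is stated in this comment, the flags are a guess.
    wget --recursive --warc-file=sparkasse.at https://www.sparkasse.at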

(Note: as this is a bank website, repeated crawling may trigger throttling
or blocking.)

Christof Horschitz



    _______________________________________________________

Reply to this item at:

  <http://savannah.gnu.org/bugs/?51029>
