
At 12:18 PM 10/2/2001 -0700, "Seth Cohn" <[EMAIL PROTECTED]> wrote:
>from man wget:
>
>           In one case you'll need to add a couple more options.
>           If document is a "<FRAMESET>" page, the "one more hop"
>           that -p gives you won't be enough---you'll get the
>           "<FRAME>" pages that are referenced, but you won't get
>           their requisites.  Therefore, in this case you'll need
>           to add -r -l1 to the commandline.  The -r -l1 will
>           recurse from the "<FRAMESET>" page to the "<FRAME>"
>           pages, and the -p will get their requisites.  If
>           you're already using a recursion level of 1 or more,
>           you'll need to up it by one.  In the future, -p may be
>           made smarter so that it'll do "two more hops" in the
>           case of a "<FRAMESET>" page.
>
>wget handles frames just fine.
>
>Seth
>
>
>----- Original Message ----- 
>From: Ralph Zeller <[EMAIL PROTECTED]>
>To: <[EMAIL PROTECTED]>
>Sent: Tuesday, October 02, 2001 11:37 AM
>Subject: [EUG-LUG:3047] Recursive web-sucking
>
>
>> I sometimes like to suck down an entire web-site using wget.  However,
>> wget doesn't follow links in frames.  Is there another tool that can
>> recursively web-suck including following links in frames?  Thanks.
>> 
>> 
>

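For reference, the combined invocation the man page excerpt describes looks something like the line below. The URL is just a placeholder, and -k (--convert-links) is an optional extra so the downloaded copy is browsable locally:

           wget -p -r -l1 -k http://example.com/

If you were already recursing into the site with -r and a deeper -l, bump the level by one as the man page notes, since the frameset costs an extra hop.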