Hi Yiwei,
Thanks for reporting the issue.
This problem has been fixed in the current alpha build of Wget and will
be available when the next version of Wget is released.
On Mon, Jun 24, 2013 at 11:53 PM, Yiwei Yang wrote:
> Hi,
>I want to save everything I downloaded from wget. Howeve
Hi,
I want to save everything I download with wget. However, when I use
wget -p -H, some of the links can be extremely long, like
http://www.baidu.com/baidu.php?url=06jseau7X1x3eY6DC9bMYbAZCSUYxKoGEAnm9bNQ-3C74TFYrjjVVWsntVGEBBWZvekliovDdNxOis9AkTcakEduxtazv2paoo2eTfutzgOK0-y3T8FCc4Oo9m76K7
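Until a release with the fix is available, one possible workaround (assuming wget 1.14 or later, where --reject-regex exists) is to skip URLs whose query strings are long enough to produce file names that exceed the filesystem limit. The 200-character threshold below is an illustrative guess, not a value from wget itself:

```shell
# Fetch the page and its requisites (-p), spanning hosts (-H),
# but reject any URL with a query string of 200+ characters so
# the derived file name stays within typical filesystem limits.
wget -p -H \
     --reject-regex='\?.{200,}' \
     http://www.baidu.com/
```

The same pattern can be checked locally with grep -E before trusting it in a long crawl.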
Hi there,
I am not good at reading wget's source code; however, as you mentioned, it
has some 'hidden functions' that involve regular expressions.
Does that mean you can use wget to fetch the page and, at the same time,
also use a regular expression to parse it, all within wget?
If so, can you give me an e
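If the 'hidden functions' in question are the regex-based URL filters added in wget 1.14 (--accept-regex, --reject-regex, --regex-type), they select which links wget follows during a recursive download rather than parsing page content. A minimal sketch, with an illustrative URL and pattern:

```shell
# Recursively follow links one level deep, but download only
# URLs matching the POSIX extended regex (the default
# --regex-type; 'pcre' requires wget built with libpcre).
wget -r --level=1 \
     --accept-regex='\.pdf$' \
     http://example.com/docs/
```

wget does not expose the matched URLs for further processing; to extract data from the page content itself you would still pipe the downloaded files through tools like grep or sed.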
I want to get the size of some files. Some people recommend wget --spider.
However, when I run it on some links, like http://autos.cn.yahoo.com/,
http://www.cbsa.gc.ca/menu-eng.html, http://oprofile.sourceforge.net/mail/
and so on, it says "Length: unspecified [text/html]". Is there a way to
solve this or
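"Length: unspecified" means the server did not send a Content-Length header (common for dynamically generated pages and chunked transfer encoding), so the size cannot be known without downloading the body. A sketch of checking the headers directly, using one of the URLs above:

```shell
# --spider requests only headers; --server-response prints them
# (indented, on stderr). If the server sends Content-Length,
# this extracts the size in bytes; otherwise it prints nothing.
wget --spider --server-response http://oprofile.sourceforge.net/ 2>&1 |
    awk 'tolower($1) == "content-length:" { print $2 }'
```

When nothing is printed, the only reliable fallback is to download the body (e.g. with -O /dev/null) and read the byte count wget reports afterwards.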
- Original Message -
> Tomas Hozza writes:
>
> > Hello.
> >
> > We at Red Hat did an inspection of the wget-1.14 source, man page and
> > wget usage (--help) and discovered the following errors:
> >
> > Missing in the man page but present in usage:
> > --accept-regex
> > --preserve-permissions
> > --regex-typ