It seems that you're behind a firewall and need to use passive ftp.
Try the `--passive-ftp' flag, or specify `passive_ftp = on' in
~/.wgetrc.
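For example (ftp.example.com is just a placeholder host):
wget --passive-ftp ftp://ftp.example.com/pub/file.tar.gz
or put `passive_ftp = on' in ~/.wgetrc once and forget about it.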
It's a bug. -O currently doesn't work everywhere it should. If you
just want to change the directory where Wget operates, the workaround
is to use `-P'. E.g.:
wget -N ftp://ftp.pld-linux.org/dists/ac/PLD/athlon/PLD/RPMS/packages.dir.mdd -P
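(The directory argument was cut off above; the general shape, with a
placeholder directory, would be:
wget -N ftp://ftp.pld-linux.org/dists/ac/PLD/athlon/PLD/RPMS/packages.dir.mdd -P /some/dir
which saves the file under /some/dir instead of the current directory.)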
Mauro Tortonesi [EMAIL PROTECTED] writes:
so, i am asking you: what do you think of these changes?
Overall they look very good! Judging from the patch, a large part of
the work seems to be in an unexpected place: the FTP code.
Here are some remarks I have from looking at the patch.
It
On Wed, 8 Oct 2003, Hrvoje Niksic wrote:
Mauro Tortonesi [EMAIL PROTECTED] writes:
so, i am asking you: what do you think of these changes?
Overall they look very good! Judging from the patch, a large part of
the work seems to be in an unexpected place: the FTP code.
yes, i have
Mauro Tortonesi [EMAIL PROTECTED] writes:
I still don't understand the choice to use sockaddr and
sockaddr_storage in application code.
They result in needless casts and (to me) incomprehensible code.
well, using sockaddr_storage is the right way (TM) to write IPv6-enabled
code ;-)
Not
Mauro Tortonesi wrote:
are there __REALLY__ systems which do not support inet_aton? their ISVs
should be ashamed of themselves...
Solaris, for example. IIRC inet_aton isn't in any document which claims
to be a standard.
however, yours seemed to me an ugly hack, so i have temporarily removed
Quoting Hrvoje Niksic [EMAIL PROTECTED]:
Jochen Roderburg [EMAIL PROTECTED] writes:
Quoting Hrvoje Niksic [EMAIL PROTECTED]:
It's a feature. `-A zip' means `-A zip', not `-A zip,html'. Wget
downloads the HTML files only because it absolutely has to, in order
to recurse through
It's a feature. `-A zip' means `-A zip', not `-A zip,html'. Wget
downloads the HTML files only because it absolutely has to, in order
to recurse through them. After it finds the links in them, it deletes
them.
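For example (placeholder URL):
wget -r -A zip http://www.example.com/downloads/
leaves only the .zip files on disk; the HTML pages are fetched just for
link extraction and then deleted. Use -A zip,html if you want them kept.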
Quoting Hrvoje Niksic [EMAIL PROTECTED]:
It's a feature. `-A zip' means `-A zip', not `-A zip,html'. Wget
downloads the HTML files only because it absolutely has to, in order
to recurse through them. After it finds the links in them, it deletes
them.
Hmm, so it has really been an
At 12:05 PM 10/3/2003, Hrvoje Niksic wrote:
It's a feature. `-A zip' means `-A zip', not `-A zip,html'. Wget
downloads the HTML files only because it absolutely has to, in order
to recurse through them. After it finds the links in them, it deletes
them.
How about a switch to keep the .html
Jochen Roderburg [EMAIL PROTECTED] writes:
Quoting Hrvoje Niksic [EMAIL PROTECTED]:
It's a feature. `-A zip' means `-A zip', not `-A zip,html'. Wget
downloads the HTML files only because it absolutely has to, in order
to recurse through them. After it finds the links in them, it deletes
Hi Hrvoje :)
* Hrvoje Niksic [EMAIL PROTECTED] dixit:
This beta includes several important bug fixes since 1.9-beta1, most
notably the fix for correct file name quoting with recursive FTP
downloads.
That works, at least for me. I've tested with the ftp repository
that previously
Hi Jack :)
* Jack Pavlovsky [EMAIL PROTECTED] dixit:
It's probably a bug:
when downloading
wget -mirror ftp://somehost.org/somepath/3acv14~anivcd.mpg,
wget saves it as-is, but when downloading
wget ftp://somehost.org/somepath/3*, wget saves the files as
3acv14%7Eanivcd.mpg
Jack Pavlovsky [EMAIL PROTECTED] writes:
It's probably a bug: when downloading wget -mirror
ftp://somehost.org/somepath/3acv14~anivcd.mpg, wget saves it as-is,
but when downloading wget ftp://somehost.org/somepath/3*, wget saves
the files as 3acv14%7Eanivcd.mpg
Thanks for the report.
Hrvoje,
please add this patch:
--- wget-1.9-beta1\windows\Makefile.src Sat May 18 02:16:36 2002
+++ wget-1.9-beta1.wip\windows\Makefile.src Thu Sep 25 08:09:26 2003
@@ -63,15 +63,17 @@
RM = del
-SRC = cmpt.c safe-ctype.c connect.c host.c http.c netrc.c ftp-basic.c ftp.c \
-
Subject: RE: Wget 1.9-beta1 is available for testing
Hrvoje,
please add this patch:
--- wget-1.9-beta1\windows\Makefile.src Sat May 18 02:16:36 2002
+++ wget-1.9-beta1.wip\windows\Makefile.src Thu Sep 25 08:09:26 2003
@@ -63,15 +63,17 @@
RM = del
-SRC
Could the person who sent me the patch for Windows compiler support
please resend it? Amidst all the viruses, I accidentally deleted the
message before I had a chance to apply it. Sorry about the
mistake.
Hi Hrvoje :)
* Hrvoje Niksic [EMAIL PROTECTED] dixit:
http://fly.srk.fer.hr/~hniksic/wget/wget-1.9-beta1.tar.gz
I've got and tested it, and with NO wgetrc (it happens the same
with my own wgetrc, but I tested clean just in case), the problem
with the quoting still exists:
DervishD [EMAIL PROTECTED] writes:
I've got and tested it, and with NO wgetrc (it happens the same
with my own wgetrc, but I tested clean just in case), the problem
with the quoting still exists:
$ wget -r -c -nH ftp://user:[EMAIL PROTECTED]/Music/Joe Hisaishi
[...]
--15:22:55--
Hi Hrvoje :)
* Hrvoje Niksic [EMAIL PROTECTED] dixit:
Thanks for the detailed bug report. Although it doesn't look that
way, this problem is nothing but a simple oversight.
OK, patch applied and working! Now it downloads all the files
correctly, quotes them appropriately, and everything works smoothly
Lucuk, Pete [EMAIL PROTECTED] writes:
as we can see above, wget has raznoe.shtml.html as the main file;
this is *not* what I want, I *always* want the main file to be named
index.html.
Wget doesn't really have the concept of a main file. As a
workaround, you could simply `ln -s
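(The command was cut off above; presumably the suggestion was along the
lines of:
ln -s raznoe.shtml.html index.html
run in the download directory once the retrieval finishes.)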
, September 18, 2003 12:37 PM
To: Lucuk, Pete
Cc: '[EMAIL PROTECTED]'
Subject: Re: wget renaming URL/file downloaded, how to???
Lucuk, Pete [EMAIL PROTECTED] writes:
as we can see above, wget has raznoe.shtml.html as the main file,
this is *not* what I want, I *always* want the main file
Nicolas, thanks for the patch; I'm about to apply it to Wget CVS.
On Wed, 10 Sep 2003, Andreas Belitz wrote:
Hi,
i have found a problem regarding wget --spider.
It works great for any files over http or ftp, but as soon as one of
these two conditions occurs, wget starts downloading the file:
1. linked files (i'm not 100% sure about this)
2.
Hi Aaron S. Hawley,
On Wed, 10. September 2003 you wrote:
ASH actually, what you call download scripts are actually HTTP redirects, and
ASH in this case the redirect is to an FTP server and if you double-check i
ASH think you'll find Wget does not know how to spider in ftp. end
ASH
On Wed, 10 Sep 2003, Andreas Belitz wrote:
Hi Aaron S. Hawley,
On Wed, 10. September 2003 you wrote:
ASH actually, what you call download scripts are actually HTTP redirects, and
ASH in this case the redirect is to an FTP server and if you double-check i
ASH think you'll find Wget does
Wget doesn't interpret Javascript, only regular HTML (AFAIK). Wget won't be
able to follow any links that are only set up through a Javascript function like
MM_openBrWindow().
Adam
--
Adam Stein @ Xerox Corporation Email: [EMAIL PROTECTED]
No, it won't. The javascript stuff makes sure of that.
Mark Post
-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]
Sent: Tuesday, September 09, 2003 4:32 PM
To: [EMAIL PROTECTED]
Subject: wget -r -p -k -l 5 www.protcast.com doesnt pull some images
though they are
I, on the other hand, am actually not sure why you're not able to
have Wget find the marked up (not javascript) image.
Cause it worked for me.
% ls -l www.protcast.com/Grafx/menu-contact_\(off\).jpg
-rw--- 1 ashawley usr 2377 Jan 10 2003
I believe this is on the TODO list, if I understood the problem correctly.
I'm starting to look at the code and time permitting I'll try and address
this one too.
Cheers,
Jeremy
On Sat, 30 Aug 2003, Jason Mancini wrote:
I'm trying to retrieve a list of files from an ftp server:
wget
Hello,
On Wed, Aug 27, 2003 at 06:19:17AM -0400, joe j wrote:
Hello,
I am trying to download files with wget from download.com. I am using a
windows system. For some reason wget doesn't know how to deal with
download.com links. For example, the Kazaa link
If this is a non-transparent proxy, you do indeed need to use the proxy
parameters:
--proxy-user=user
--proxy-passwd=password
As well as set the proxy server environment variables
ftp_proxy=http://proxy.server.name[:port] - Note the http:// value. That
is correct.
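Put together, with a placeholder proxy and credentials:
export ftp_proxy=http://proxy.example.com:8080/
wget --proxy-user=user --proxy-passwd=password ftp://ftp.example.com/pub/file.tar.gz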
man wget shows:
-D domain-list
--domains=domain-list
Set domains to be followed. domain-list is a comma-separated
list of domains.
Note that it does not turn on -H.
Mark Post
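For example, to span hosts but stay inside two placeholder domains:
wget -r -H -D example.com,example.org http://www.example.com/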
-Original Message-
From: Andrzej Kasperowicz [mailto:[EMAIL PROTECTED]
On 18 Aug 2003 at 13:49, Post, Mark K wrote:
man wget shows:
-D domain-list
--domains=domain-list
Set domains to be followed. domain-list is a comma-separated
list of domains.
Note that it does not turn on -H.
Right, but by default wget should not
Subject: RE: wget is mirroring whole internet instead of just my web
page!
On 18 Aug 2003 at 13:49, Post, Mark K wrote:
man wget shows:
-D domain-list
--domains=domain-list
Set domains to be followed. domain-list is a comma-separated
list of domains
Angelo,
It works for me:
# wget -N http://www.nic.it/index.html
--13:04:39-- http://www.nic.it/index.html
=> `index.html'
Resolving www.nic.it... done.
Connecting to www.nic.it[193.205.245.10]:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 2,474 [text/html]
Does the PATH of procmail contain the directory where wget lives?
Mark Post
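A quick way to rule PATH out is to call wget by an absolute path in the
recipe, e.g. (a sketch only; the path, condition, and URL are placeholders):
:0
* ^Subject:.*fetch
| /usr/bin/wget -q -P $HOME/downloads http://www.example.com/file.zip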
-Original Message-
From: Michel Lombart [mailto:[EMAIL PROTECTED]
Sent: Tuesday, July 29, 2003 6:51 PM
To: [EMAIL PROTECTED]
Subject: wget and procmail
Hello,
I've an issue with wget and procmail.
I install
* Thomas Schweikle [EMAIL PROTECTED] [030718 12:27]:
in the manual page to wget you mention:
--proxy=on/off
Turn proxy support on or off. The proxy is on by default if the
appropriate environmental variable is defined.
Could you please state additionally what ... appropriate
Hi again.
Some people have reported experiencing the same problem, but nobody from the
development team has commented on it. Can anybody tell us if this is a bug
or some config issue?
Regards,
Hans Deragon
-Original Message-
From: Hans Deragon (LMC)
Sent: Wednesday,
Wget maintainer:
http://www.geocrawler.com/archives/3/409/2003/3/0/10399285/
--
The geocrawler archives for Wget are alive again!
On Mon, 14 Jul 2003, Hans Deragon (QA/LMC) wrote:
Hi again.
Some people have reported experiencing the same problem, but nobody
from the development team has
, 2003 10:21 AM
To: '[EMAIL PROTECTED]'
Subject: RE: wget: ftp through http proxy not working with 1.8.2. It does
work with 1.5.3
Hi again.
Some people have reported experiencing the same problem, but nobody from
the development team has commented on it. Can anybody tell us
still
interested in working on wget).
Mark
-Original Message-
From: Hans Deragon (QA/LMC) [mailto:[EMAIL PROTECTED]
Sent: Monday, July 14, 2003 1:05 PM
To: 'Post, Mark K'
Subject: RE: wget: ftp through http proxy not working with 1.8.2. It does
work with 1.5.3
wget --debug -m ftp
On Tue, Jun 24, 2003 at 02:41:50PM -0400, Jim Ennis wrote:
Hello,
I am trying to compile wget-1.8.2 on Solaris 9 with openssl-0.9.7b. The
Don't. Wget is seriously broken with the SSL extensions; see my messages a
month or two ago. (Not that anyone replied :P)
Check out curl perhaps?
Rajesh wrote:
Wget is not mirroring the web site properly. For example, it is not copying
symbolic links from the main web server. The target directories do exist on
the mirror server.
wget can only mirror what can be seen from the web. Symbolic links will be
treated as hard references (assuming
Rajesh wrote:
Thanks for your reply. I have tried using the command
wget --user-agent="Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1)",
but it didn't work.
Adding the user agent helps some people -- I think most often with web
servers from the evil empire.
I have one more question. In
rrgh.
I see that you have an option '--spider' to just test the connection, however, it
doesn't seem to work unless you place the option *before* the url, i.e.:
wget --spider 'ftp://stuff'
works
whereas
wget --spider 'ftp://stuff2'
doesn't.
no - scratch that - it *doesn't* seem to work if
Just upgraded to 1.8.2 and ok, I think I see the problem here...
--spider only works with html files.. right? If so, why?
Ed
i submitted a patch in february.
http://www.mail-archive.com/wget%40sunsite.dk/msg04645.html
http://www.geocrawler.com/archives/3/409/2003/2/100/10313375/
On Tue, 17 Jun 2003, Peschko, Edward wrote:
Just upgraded to 1.8.2 and ok, I think I see the problem here...
--spider only works with
there doesn't seem to be anything wrong with the page.
are you having trouble with recursive wgets with other (all) pages or just
this one.
/a
Aaron S. Hawley wrote:
there doesn't seem to be anything wrong with the page.
are you having trouble with recursive wgets with other
(all) pages or just this one.
Aaron, good thinking -- I tried different urls and then different
versions. It's a bug with the OS/2 wget 1.8.2 which I have.
http://www.gnu.org/manual/wget/
On Wed, 11 Jun 2003, Support, DemoG wrote:
hello,
I need help on this subject:
Please tell me what the command line is if i want to get all the files and
subdirectories, with everything they contain, from an ftp site like ftp.mine.com
also i have the user and pass, and i will
another proprietary protocol brought to you by the folks in redmond,
washington.
http://sdp.ppona.com/
http://geocities.com/majormms/
On Sun, 1 Jun 2003, Andrzej Kasperowicz wrote:
How could I download using wget that:
mms://mms.itvp.pl/bush_archiwum/bush.wmv
If wget cannot manage it then
Hi George,
maybe try Wget version 1.8.2 if you can get hold of it.
Using that version Wget seems to behave as expected (see below).
Cheers,
cerbo
wget -d 'http://search.yahoo.com/search?p=d%26d+exhaust'
DEBUG output created by Wget 1.8.2 on irix6.5.
--12:37:14--
my guess is that this probably isn't in the manual.
% wget --version
GNU Wget 1.9-beta
Copyright (C) 1995, 1996, 1997, 1998, 2000, 2001 Free Software Foundation,
Inc.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
a patch was submitted:
http://www.mail-archive.com/wget%40sunsite.dk/msg04645.html
On Thu, 6 Mar 2003, Keith Thompson wrote:
When invoked with an ftp:// URL, wget's --spider option is
silently ignored and the file is downloaded. This applies to wget
version 1.8.2.
To demonstrate:
this bug is confirmed in CVS, it looks like there's been a lot of changes
to html-url.c
/a
On Thu, 20 Feb 2003, Jamie Zawinski wrote:
Try this, watch it lose:
wget --debug -e robots=off -nv -m -nH -np \
http://www.dnalounge.com/flyers/
http://www.dnalounge.com/flyers/ does a
Dieter Kuntz wrote:
I want to get the status page of my router. The login of the router
consists only of a password, no username. That is the problem: I cannot
log in with wget. Can somebody help me?
What protocol are you using? HTTP, FTP, Telnet? How do you usually do
that, not using wget?
Dieter Kuntz wrote:
hello,
thanks for your answer.
I am using Win98.
When I go to my status-page of my smc-router 7004abr,
I use a browser and write http://192.168.2.1:88.
You see, I use http.
Then I get the startpage with a field, where I can
fill in the password and enter.
then I get the
Kalin KOZHUHAROV wrote:
Dieter Kuntz wrote:
i will test with --http-user..
OK, I think you will not make it this way.
What we are talking about here is form submission, not just a password.
Your password happens to be part of a form.
So first look at the html source of the page where
Sorry for the length of this post. Some of you offered some helpful
things for me to try. First, here's my ~/.wgetrc file:
$ more ~/.wgetrc
continue = on
dirstruct = off
dot_style = mega
glob = on
mirror = on
passive_ftp = on
progress = bar
tries = 3
verbose = on
use_proxy = on
ftp_proxy =
You need -N (upper case). The switch is case sensitive.
Glad to see that someone else is an f-prot user.
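For the record, the call would look like (placeholder URL):
wget -N ftp://ftp.example.com/pub/fp-def.zip
-N turns on timestamping, so the file is only fetched when the remote copy
is newer than the local one.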
At 12:23 AM 1/22/2003, Steve Bratsberg wrote:
I am using Wget to update the dat file for f-prot disks that boot from Dos.
I have a copy of the zipped dat file on the hard drive and I have
Fred Holmes wrote:
Many thanks for the help. I'm using Windows 32 binary 1.8.2 on WIN2K.
The syntax you have given me gets the first page OK, apparently by using
the -O switch to specify a simple filename as the output file, rather
than the automatically generated output filename(s). I.e., I
Hi Karl!
From my POV, the current set-up is the best solution.
Of course, I am also no developer, but an avid user.
Sometimes you just don't know the structure of the website
in advance, so using -m as a trouble-free no-brainer
will get you the complete site neatly done with timestamps.
BTW,
At 11:24 PM 12/2/2002 +0200, Taavi Meos wrote:
Hi!
This bug is not serious, everything works but may confuse a user.
OS: RedHat Linux 8.0
GNU Wget 1.8.2
used filename: proftpd-1.2.6-1.i386.rpm
When downloading a RedHat RPM package wget shows its type as:
[audio/x-pn-realaudio-plugin]
using
Isn't nytimes subscription-only? You need to set the cookie etc., I think
JG
Jennifer Freeman wrote:
Hiya,
when using wget with -k and -O together, wget cannot change the links in
the document.
wget -k -O nytimes.html
http://www.nytimes.com/ref/membercenter/help/privacy.html
produces the following
I've been watching this list for a couple of days, and I think it's just
people asking questions (me included) with no one around to answer them.
On 16 Oct 2002, hao chen wrote:
Hi,
Is there a way to convert relative links to non-relative links with
wget? I searched through the manual and
is nearly empty (it seems)
nearly all the linux distros I have tried use wget for all updating tasks
or similar...
quite strange really.
rgds
Frank
-Original Message-
From: Daniel Webb [mailto:[EMAIL PROTECTED]]
Sent: Thursday, 17 October 2002 4:11 AM
To: [EMAIL PROTECTED]
Subject: Re
-Original Message-
From: Franki [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, October 16, 2002 4:22 PM
To: [EMAIL PROTECTED]
Subject: RE: wget: relative link to non-relative
yeah, I have been lurking here for over a week, was going to ask a question,
but didn't see any answers, so I wasn't gonna
On 2002-09-20 08:15 +0200, Dominic Chambers wrote:
I am using wget 1.8.2 on Win2K SP2, and wget froze on the fifth
downloaded file 'prn.html' using the command line:
wget -r -l0 -A htm,html,png,gif,jpg,jpeg --no-parent
http://java.sun.com/products/jlf/at/book
About twenty
I just started using wget today, and I am very impressed with it.
Apart from the first job I tried, everything has worked perfectly for
a number of jobs, even changing to relative URLs, which I thought was
very impressive. Thanks for the good work.
I am using wget 1.82 on Win2K SP2,
Thomas Lussnig wrote:
this is a Windows-specific problem. Normally prn.html should be a valid
filename.
And as you can check, long filenames can contain :.
No they can't. And, on NTFS including a : in a filename causes the data to be
written into an invisible named stream.
But there is an
[EMAIL PROTECTED] wrote:
We're using the wget app to run our scheduled tasks. Each time it runs, a
copy of the file is created with a number added to the end of it. Is there
a way to turn this off? We tried adding --quiet to the bat file but it
still wrote the file.
-nc or -N depending on
Dale Therio wrote:
Hmm... the 'facts' as you stated in your post to this list are that
the site owner does not want people to suck the content of his/her
site using a download manager. And to prove this, they are trying to
detect such activity and are preventing it. Those are the facts
It would seem that this is not your site then and so if the owner of the
site doesn't want you to download his/her entire site why should you?
Maybe they have banners or something that pays for their hosting costs
and they feel preventing tools like wget from sucking their site is a
way to
Try the source I sent you.
Dominique wrote:
thank you Max,
np
Is it different from the one I CVS-ed yesterday? I mean, does it have
changes in creating filenames? Please note that I finally compiled it
and could run it.
No changes... I did run autoconf, so you could go straight to
No changes... I did run autoconf, so you could go straight to configure (as you
have too new an autoconf version).
it compiled just fine now
Something has just occurred to me: by default, wget restricts recursion
to 5 levels. Perhaps that is the problem?
If so, an -l0 will fix it.
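That is, something like (placeholder URL):
wget -r -l0 http://www.example.com/
where -l0 lifts the default five-level depth limit.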
The problem is that with a ?x=y, where y contains slashes, wget passes them
unchanged to the OS, causing directories to be created, but fails to adjust
relative links to account for the fact that the page is in a deeper directory
than it should be. The solution is to map / to _ or something.
Max Bowsher wrote:
The problem is that with a ?x=y, where y contains slashes, wget passes them
unchanged to the OS, causing directories to be created, but fails to adjust
relative links to account for the fact that the page is in a deeper directory
than it should be. The solution is to map /
To invoke html examples they use calls like (just the first example):
http://www.w3schools.com/html/tryit.asp?filename=tryhtml_basic
What filename did you expect for this ?
- tryit.asp
- tryit.asp?filename=tryhtml_basic
- tryhtml_basic
Wget saves a file and a directory with this very
What filename did you expect for this ?
- tryit.asp
- tryit.asp?filename=tryhtml_basic
- tryhtml_basic
Once again: the location is:
http://www.w3schools.com/html/tryit.asp?filename=tryhtml_basic
It is a frame set which requires frames. One of them is a problem,
because it has special
On Wed, 11 Sep 2002, Dominique wrote:
Hello,
wget ftp://ftp.reed.edu/pub/src/html-helper-mode.tar.gz
==> PORT ... done.  ==> RETR html-helper-mode.tar.gz ...
maybe you have to use --passive-ftp?
--
Noèl Köthe
Dominique wrote:
tryit_edit.asp?filename=tryhtml_basic&referer=http://www.w3schools.com/html/html_examples.asp
and just this one is truncated. I think some regexp or pattern or
explicit list of where_not_to_break_a_string characters would solve
the problem. Or maybe it is already possible,
Is it something I can do myself or the code has to be changed?
Domi
I think that some URL encoding has not happened somewhere. Whether wget or the
web server is at fault, I don't know, but the solution would be to URL encode
the slashes.
Max.
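As a rough sketch of that encoding step (placeholder URL; only the query
part is touched):
url='http://www.example.com/tryit.asp?filename=a/b/c'
echo "$url" | awk -F'?' '{ gsub("/", "%2F", $2); print $1 "?" $2 }'
which prints http://www.example.com/tryit.asp?filename=a%2Fb%2Fc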
tryit_edit.asp?filename=tryhtml_basic&referer=http://www.w3schools.com/html/html_examples.asp
and just this one is truncated. I think some regexp or pattern or
explicit list of where_not_to_break_a_string characters would solve
the problem. Or maybe it is already possible, but I don't know
-e robots=off
Jon W. Backstrom wrote:
Dear Gnu Developers,
We just ran into a situation where we had to spider a site of our
own on an outsourced service because the company was going out of
business. Because wget respects the robots.txt file, however, we
could not get an archive made
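For example (placeholder URL):
wget -r -e robots=off http://www.example.com/
-e executes a .wgetrc-style command from the command line, here disabling
the robots.txt check.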
Thomas Lussnig wrote:
Why should there be any URL encoding?
/ is a legal character in a url and in the GET string.
It's used, for example, for path-to-query translation.
The main problem is that wget needs to translate a URL into a
filesystem name.
Yes, you are right, I wasn't thinking clearly.
You don't give a whole lot of information. It's kind of impossible to help when
you don't know what the problem is.
Posting the URL of the problem site would be a good idea.
Max.
Dominique wrote:
Is it possible at all?
dominique
Dominique wrote:
Hi,
I have a problem trying to wget a
Posting the URL of the problem site would be a good idea.
well, I have quite a few. let's start with this:
http://www.w3schools.com/html/default.asp
or just anything from such a page. I hacked around for a while with
no apparent success.
thanks
dominique
Max.
Dominique wrote:
Dominique wrote:
Posting the URL of the problem site would be a good idea.
well, I have quite a few. let's start with this:
http://www.w3schools.com/html/default.asp
or just anything from such a page. I hacked around for a while
with no apparent success.
Try this and it
Yes! It works!! I just missed the -U option
thanks a lot!
dominique
Thomas Lussnig wrote:
Try this and it works !!!
wget -U 'Mozilla/5.0 (compatible; MSIE 6.0; Windows NT 5.1)'
http://www.w3schools.com/html/default.asp
Problem is that these sites block wget
Cu Thomas Lunig
I just
tested this on the UNIX release wget-1.8.2.tar.gz and still came across the same
problem. WGET creates files such as "subdirectory\filename.html" in the
main directory. It then tries to get the images from filename.html, but
cannot, since it has not traveled down into the subdirectory.
Hi,
On Sun, Aug 11, 2002 at 05:26:44PM +0200, Kai Anding wrote:
Hello there,
I am trying to use wget with Suse 7.3 and i cannot get it to run correctly.
An example would be
wget ftp://ftp.suse.com/pub/suse/i386/8.0/SuSEgo.ico
which produces the output
.
and then the program
This was already fixed in wget-1.8.2
Regards
Erlend Aasland
On Thu, Aug 01, 2002 at 01:20:43PM +1000, [EMAIL PROTECTED] wrote:
There is a problem with wget recursively retrieving pages over https. With
debug on you get the following.
Not following non-HTTP schemes.
A quick
Hi and thank you for your prompt reply.
However, my problem is that I do
http://www.helios.de/cgi-bin/nph-trace.cgi?128.131.44.10 and what it
does instead is
http://www.helios.de:80/cgi-bin/nph-trace.cgi?128.131.44.10.
On Mon, 29 Jul 2002, Erlend Aasland wrote:
Try wget
Cédric Rosa wrote:
Hello,
Is it normal that wget saves web pages which contain meta name=robots
content=noindex?
Or does wget consider that it is not a search engine and respect only
the follow/nofollow rules?
Or is it a bug? :)
I don't think wget supports meta name=robots tags.
http://www.robotstxt.org/wc/norobots-rfc.txt.
[...]
Bye,
Cedric.
- Original Message -
From: Hack Kampbjørn [EMAIL PROTECTED]
To: Cédric Rosa [EMAIL PROTECTED]
Cc: [EMAIL PROTECTED]
Sent: Saturday, July 06, 2002 8:21 PM
Subject: Re: wget and meta name=robots content=noindex,nofollow
Cédric Rosa wrote
IIRC if no cookie is set no file is created.
Try with wget -d, check the debug output.
Heiko
--
-- PREVINET S.p.A.            [EMAIL PROTECTED]
-- Via Ferretto, 1            ph  +39-041-5907073
-- I-31021 Mogliano V.to (TV) fax +39-041-5907472
-- ITALY
-Original Message-
From: Holger
Use 1.8.2 or 1.9-beta.
-- PREVINET S.p.A.            [EMAIL PROTECTED]
-- Via Ferretto, 1            ph  +39-041-5907073
-- I-31021 Mogliano V.to (TV) fax +39-041-5907472
-- ITALY
-Original Message-
From: Carl S. in 't Veld [mailto:[EMAIL PROTECTED]]
Sent: Sunday, June 30, 2002 1:01
Dear wget list,
I'm really amazed by the -k option in wget; however, it doesn't appear
to work with the -O - option or even the -O file option! It would be
really great if I could print the converted URLs to stdout. Does
anyone have a workaround? Or maybe another piece of
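One possible workaround, sketched with a placeholder URL: drop -O, let -k
convert the links in the normally named output file, then print it yourself:
wget -k http://www.example.com/page.html
cat page.html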
On Tue, 28 May 2002, Hrvoje Niksic wrote:
Doug Kaufman [EMAIL PROTECTED] writes:
This doesn't work out of the box for DJGPP or for Cygwin. Appended
is a patch to fix most of the problems.
Thanks for the comments and the patch. The patch will most likely not
make it into the 1.8.2