Seth Shafer [EMAIL PROTECTED] writes:
I'm trying to retrieve content from 50 or so pages, merge it into one file,
and convert all of the links in that one file.
Unfortunately, that won't work. The -O option is not compatible with
--convert-links, or even with `-r'.
M P [EMAIL PROTECTED] writes:
I'm also trying to automatically login to
https://online.wellsfargo.com/cgi-bin/signon.cgi using
wget but with no luck so far.
Any ideas to get this working are greatly appreciated.
I'm finding it hard to try this out, but I *think* that a combination
of
me stesso [EMAIL PROTECTED] writes:
I am a complete beginner in both Debian and wget. I
tried to install it under my woody, but when I try to
install it (the first step in the installation instructions)
I get this message:
debian:~/Desktop/wget-1.9# ./configure --prefix=$HOME
configuring for
Yup; 1.9.1 cannot download large files. I hope to fix this by the
next release.
[EMAIL PROTECTED] writes:
On platform HP-UX 11.00 PA-RISC
wget version 1.9.1
The -T parameter does not work in a certain condition.
On a dead HTTP server (one that accepts the connection but never sends data),
wget waits indefinitely.
Could you post a debug log or (better) trace output?
Dan Jacobson [EMAIL PROTECTED] writes:
Wishlist: provide a way to save the types of cookies that the documentation says won't be saved in:
`--save-cookies FILE'
Save cookies to FILE at the end of session. Cookies whose expiry
time is not specified, or those that have already expired, are not
saved.
Manuel [EMAIL PROTECTED] writes:
HTTP/1.1
why not?
Because implementing it seemed like additional work for little or no
gain. This is changing as more server software assumes HTTP/1.1 and
bugs out on 1.0 clients.
Dan Jacobson [EMAIL PROTECTED] writes:
H> Do you really need an option to also save expired cookies?
You should allow the user power over all aspects...
Maybe we should rename `--keep-session-cookies' to
`--keep-cookies=session', and also allow values such as `expired',
`all', and `default'.
[EMAIL PROTECTED] writes:
Here is the result of the `wget -t 1 -T 10 -d
http://www.admin.edt.fr/supervision/test.jsp' command on HP-UX with versions
1.8.1 and 1.9.1.
With version 1.9.1, the timeout never occurs.
I'm not aware of significant differences between the two versions in
that department.
[EMAIL PROTECTED] writes:
Here is a trace of system calls during execution of 'wget -t 1 -T 10
http://www.admin.edt.fr/supervision/test.jsp'
(See attached file: wget_trace.txt)
Thanks. Apparently Wget isn't even calling select! I see two
possibilities:
1. Something in your configuration
[EMAIL PROTECTED] writes:
For the second point: I didn't compile wget myself, I use the version published
on the HP-UX Software Porting Center (http://hpux.connect.org.uk/).
So I got the official sources and the HP-UX Software Porting Center sources and
compiled both versions.
The results are the same. No
Nicolás Conde [EMAIL PROTECTED] writes:
Hello list.
I'm new to wget, but so far I've found it very useful.
Right now I'm having some trouble, though. I've tried
wget -m ftp://usr:passwd@ftp.server
and
wget --follow-ftp -r -nH ftp://usr:passwd@ftp.server/pub/...
Libo Yu [EMAIL PROTECTED] writes:
I need to submit a file to a web page and then download its output. The
ENCTYPE of the form is multipart/form-data. It seems wget
does not support multipart data. Is that right? Thanks.
That's right. The support might not be too hard to add, though.
Daniel Stenberg [EMAIL PROTECTED] writes:
On Thu, 3 Jun 2004, Hrvoje Niksic wrote:
It seems configure's way of checking for select simply fails on HPUX.
:-(
The default configure test that checks for the presence of a
function is certainly not optimal for all platforms and
environments
Karr, David [EMAIL PROTECTED] writes:
When testing posting to web services, if the service returns a
SOAP fault, it will set the response code to 500. However, the
information in the SOAP fault is still useful. When wget gets a 500
response code, it doesn't try to output the error stream
Daniel Stenberg [EMAIL PROTECTED] writes:
Would it make sense to simply always use this check?
In my mind, this is what the function-exists test should do, but I
thought that I'd let the existing test do what it thinks is right
first, since there might be cases and systems around that work
You can post your questions to [EMAIL PROTECTED]
Tony Lewis [EMAIL PROTECTED] writes:
Phil Endecott wrote:
Tony> The stuff between the quotes following HREF is not HTML; it
Tony> is a URL. Hence, it must follow URL rules, not HTML rules.
No, it's both a URL and HTML. It must follow both rules.
Please see the page that I cited in my
John Clarke [EMAIL PROTECTED] writes:
The manual says it's possible, but I can't log on to a secure site with
post-data.
I've been trying things like:
wget -post-data=username=foo&password=bar https://mysecuresite.net/authenticate.php
It should be --post-data, with two dashes. And you
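For example, something along these lines might work (the form field
names and the exact URL are guesses based on your command):

wget --post-data='username=foo&password=bar' https://mysecuresite.net/authenticate.php

If the site sets a session cookie on login, you will probably also need
--save-cookies there, and --load-cookies on the subsequent requests.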
Tristan Miller [EMAIL PROTECTED] writes:
There appears to be a bug in the documentation (man page, etc.) for
wget 1.9.1.
I think this is a bug in the man page generation process.
is the version information on the program:
AUTHOR
Originally written by Hrvoje Niksic [EMAIL PROTECTED]
digita.com.
COPYRIGHT
Copyright (c) 1996, 1997, 1998, 2000, 2001 Free Software
Foundation, Inc.
There is no version information in this output. You need to send
Victor Nazarov [EMAIL PROTECTED] writes:
I've been using wget-1.9.1 and noticed that I'm unable to download
some files from one server. I've tried lots of workarounds, modifying the
HTTP request and using the -d option. And finally I've found the
reason for the failure. Wget compresses multiple slashes
Dan Jacobson [EMAIL PROTECTED] writes:
Maybe add an option so e.g.,
$ wget --parallel URI1 URI2 ...
would get them at the same time instead of in turn.
You can always invoke Wget in parallel by using something like `wget
URI1 & wget URI2 &'. How would a `--parallel' option be different
from
Dan Jacobson [EMAIL PROTECTED] writes:
Phil> How about
Phil> $ wget URI1 & wget URI2 &
Mmm, OK, but unwieldy if many. I guess I'm thinking about e.g.,
$ wget --max-parallel-fetches=11 -i url-list
(hmm, with default=1 meaning not parallel, but sequential.)
I suppose forking would not be too
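In the meantime, an xargs that supports -P (e.g. GNU xargs) can already
give parallel fetches from a URL list without any change to Wget:

xargs -n 1 -P 11 wget < url-list

Each wget instance gets one URL, with at most 11 running at a time.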
Josy P. Pullockara [EMAIL PROTECTED] writes:
I use GNU Wget 1.8.2 for downloading and Galeon 1.3.8 for web
browsing on Mandrake 9.2.
We have a 10 Mbps line via an HTTP proxy and I used to use wget for robust
and fast downloading of huge files from ftp sites at almost 150 K/s.
Now I find wget
That's a bug in all released versions of Wget, sorry. In the next
release downloading files larger than 2G might become possible.
Rüdiger Cordes [EMAIL PROTECTED] writes:
there is no description of how to turn on cookie storing, nor of how to use
the command line to tell wget to use two cookies.
Do you have access to the info manual? It does describe the options
`--load-cookies' and `--save-cookies', which are relevant for
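For example (the URLs are placeholders; the first command stores the
cookies the server sets, the second sends them back):

wget --save-cookies=cookies.txt http://www.example.com/login.html
wget --load-cookies=cookies.txt http://www.example.com/private/page.html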
Eric Domenjoud [EMAIL PROTECTED] writes:
Under Mandrake linux 9.2, the command
wget -r -k http://www.cplusplus.com/ref/iostream
terminates with
wget: retr.c:263: calc_rate: Assertion `msecs >= 0' failed.
Aborted
This problem has been fixed in 1.9.1.
For the last several months I've been completely absent from Wget
development, and from the net in general. Here is why, and the story
is not for the faint of heart.
Near the end of July I took a two-week vacation. On August 2nd I
found it took an effort to stand up from a sitting position.
[EMAIL PROTECTED] (Steven M. Schweda) writes:
The other function arguments control various formatting options. (Where
can't GCC printf() using %ll?)
For the record, GCC doesn't printf() anything, printf is defined in
the standard library. If the operating system's printf() doesn't
[EMAIL PROTECTED] (Steven M. Schweda) writes:
From: Hrvoje Niksic [EMAIL PROTECTED]
but perhaps a better question would have been, 'Where can't a GCC
user do a printf() using %ll?'.
On any system that predates `long long'. For example, SunOS 4.1.x,
Ultrix, etc.
I thought we were
Leonid [EMAIL PROTECTED] writes:
Steven and Hrvoje,
wget-1.9.1 has a function number_to_string which is in fact a
home-made equivalent to printf () %ld.
Yes, but that function is merely an optimization used to avoid
frequent calls to sprintf(buf, "%ld", num). Wget does in fact in many
places
I noticed the addition of a `string_t.c' file that uses wchar_t and
wide string literals without any protection. This will break
compilation of Wget under older compilers.
If the new policy is to discontinue support for old systems and I
missed the announcement, I apologize. In that case it
Here is my attempt at adding large file support to Wget in a manner I
think is fairly portable. I'd be interested to know if this breaks
compilation for anyone.
The patch is purely experimental; I haven't even written the ChangeLog
entries yet. After applying it, don't forget to run autoheaders
Donny Viszneki [EMAIL PROTECTED] writes:
But as I noted, the source code does NOT appear to escape the tilde
under any conditions.
Take a look at urlchr_table in url.c. ~ is marked as an unsafe
character, which is why it gets escaped.
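In rough outline the mechanism is a per-character flag table: every byte
has a set of flags, and anything flagged unsafe gets percent-encoded. A
simplified sketch (this is not the actual table or flag set from url.c):

#include <stdio.h>

#define U 1   /* "unsafe" flag: the character must be escaped */

/* One flag byte per character; the real urlchr_table has more flag
   bits and a different character set. */
static unsigned char flags[256];

static void
init_flags (void)
{
  const char *unsafe = " <>\"{}|\\^~[]`%";
  int i;
  for (i = 0; i < 32; i++)
    flags[i] = U;               /* control characters */
  flags[127] = U;               /* DEL */
  for (; *unsafe; unsafe++)
    flags[(unsigned char) *unsafe] = U;
}

/* Caller must provide an output buffer up to 3x the input length + 1. */
static void
escape_url (const char *in, char *out)
{
  for (; *in; in++)
    if (flags[(unsigned char) *in])
      out += sprintf (out, "%%%02X", (unsigned char) *in);
    else
      *out++ = *in;
  *out = '\0';
}

int
main (void)
{
  char out[256];
  init_flags ();
  escape_url ("/~user/my page.html", out);
  printf ("%s\n", out);   /* prints /%7Euser/my%20page.html */
  return 0;
}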
[EMAIL PROTECTED] (Steven M. Schweda) writes:
1. I'd say that code like if ( sizeof(number) == 8 ) should have
been a compile-time #ifdef rather than a run-time decision.
Where do you see such code? grep 'if.*sizeof' *.c doesn't seem to
show such examples.
2. Multiple functions like
I propose to make this list available via gmane, www.gmane.com. It
buys us good archiving, as well as NNTP access. Is there anyone who
would object to that?
Roman Bednarek [EMAIL PROTECTED] writes:
The Info-ZIP code uses one function with a ring of string buffers to
ease the load on the programmer.
That makes sense. I assume the print function also receives an
integer argument specifying the ring position?
The function can have a circular
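Something like the following illustrates the ring idea (this is not the
Info-ZIP code, just a sketch): each call returns a pointer into a static
pool of buffers, so several results can be live in one printf call
without the caller managing storage:

#include <stdio.h>

#define RING_SIZE 8        /* how many results can be "live" at once */
#define BUF_LEN   24       /* enough for a 64-bit value in decimal   */

/* Convert a number to a string using a ring of static buffers. */
static const char *
num_str (long long n)
{
  static char ring[RING_SIZE][BUF_LEN];
  static int pos = 0;
  char *buf = ring[pos];
  pos = (pos + 1) % RING_SIZE;
  sprintf (buf, "%lld", n);   /* or a hand-rolled conversion on old libcs */
  return buf;
}

int
main (void)
{
  /* Two live results in one call; with a single static buffer the
     second conversion would clobber the first. */
  printf ("%s of %s bytes\n", num_str (1048576LL), num_str (2147483648LL));
  return 0;
}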
Erik Ohrnberger [EMAIL PROTECTED] writes:
I was trying to download SuSE version 9.2 from the local mirror site,
thinking that I could get the entire package as a single DVD image
(> 2 GB). So I did the wget command with the appropriate FTP
arguments, and ran it in the background.
Hrvoje Niksic [EMAIL PROTECTED]
* configure.in: Check for LFS. Determine SIZEOF_OFF_T.
src/ChangeLog:
2005-02-20 Hrvoje Niksic [EMAIL PROTECTED]
* wget.h: Define a `wgint' type, normally aliased to (a possibly
64-bit) off_t.
* all: Use `wgint' instead of `long
Does MSVC support long long? If not, how does one...
* print __int64 values? I assume printf("%lld", ...) doesn't work?
* retrieve __int64 values from strings? I assume there is no
strtoll?
I'm asking because I noticed that my LFS patch kind of depends on long
long on machines with LFS.
Gisle Vanem [EMAIL PROTECTED] writes:
Hrvoje Niksic wrote:
Does MSVC support long long? If not, how does one...
No, it has a '__int64' built-in.
* print __int64 values? I assume printf("%lld", ...) doesn't work?
Correct, use %I64d for signed 64-bit and %I64u for unsigned.
* retrieve
Gisle Vanem [EMAIL PROTECTED] writes:
Hrvoje Niksic wrote:
Thanks for the info. Is it OK to just require MSVC7, or should we
write a compatibility function for earlier versions?
Write a compatible function IMHO. A lot of users (including me) still
use MSVC6.
OK. I don't think we can
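A minimal sketch of such a compatibility function (hypothetical name,
no overflow or error handling):

#include <ctype.h>

#ifdef _MSC_VER
typedef __int64 wgint;
#else
typedef long long wgint;
#endif

/* Hypothetical fallback for systems without strtoll(): parse an
   optionally signed decimal number into a 64-bit type. */
static wgint
str_to_wgint (const char *s, char **endptr)
{
  wgint result = 0;
  int negative = 0;

  while (isspace ((unsigned char) *s))
    s++;
  if (*s == '-')
    negative = 1, s++;
  else if (*s == '+')
    s++;

  while (isdigit ((unsigned char) *s))
    result = result * 10 + (*s++ - '0');

  if (endptr)
    *endptr = (char *) s;
  return negative ? -result : result;
}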
[EMAIL PROTECTED] (Steven M. Schweda) writes:
1. I'd say that code like if ( sizeof(number) == 8 ) should have
been a compile-time #ifdef rather than a run-time decision.
Where do you see such code? grep 'if.*sizeof' *.c doesn't seem to
show such examples.
As I recall, it was in
Greg Hurrell [EMAIL PROTECTED] writes:
No. Must be some other weirdness on my system. [...]
Maybe your mail client mangled the long lines in the patch? Try to
download the patch from here and see if it works then:
http://fly.srk.fer.hr/~hniksic/lfs-patch
string_t.c uses the function iswblank, which doesn't seem to exist on
Solaris 8 I tried to compile it on. (Compilation is likely broken on
other non-Linux platforms as well for the same reason.) Since nothing
seems to be using the routines from string_t, I solved the problem by
removing
Dave Yeo [EMAIL PROTECTED] writes:
ps anyone getting a bunch of what look like viruses on the
wget-patches list?
I just noticed them on gmane. I've now asked the SunSITE.dk staff to
deploy the kind of virus/spam protection currently used by this list
(confirmation required for non-subscribers
Mauro Tortonesi [EMAIL PROTECTED] writes:
On Sunday 20 February 2005 06:31 pm, Hrvoje Niksic wrote:
string_t.c uses the function iswblank, which doesn't seem to exist
on Solaris 8 I tried to compile it on. (Compilation is likely
broken on other non-Linux platforms as well for the same reason
Maciej W. Rozycki [EMAIL PROTECTED] writes:
Actually if (sizeof(number) == 8) is much more readable than any
preprocessor clutter and yields exactly the same.
Agreed, in some cases. In others it leads to pretty annoying
compiler warnings.
Simone Piunno [EMAIL PROTECTED] writes:
I think by pushing this line of reasoning to the extreme you
shouldn't have added i18n through gettext, should you?
You are right, and I was indeed leery of adding support for gettext
until I was convinced that it would work well both on systems without
Maciej W. Rozycki [EMAIL PROTECTED] writes:
On Mon, 21 Feb 2005, Hrvoje Niksic wrote:
Actually if (sizeof(number) == 8) is much more readable than any
preprocessor clutter and yields exactly the same.
Agreed, in some cases. In others it leads to pretty annoying
compiler warnings
Maciej W. Rozycki [EMAIL PROTECTED] writes:
Besides, despite sizeof(foo) being a constant, you can't move a
comparison against it to cpp.
You can, Autoconf allows you to check for size of foo, which gives
you a SIZEOF_FOO preprocessor constant. Then you can write things
like:
#if SIZEOF_FOO
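A sketch of the complete pattern, assuming configure.in calls
AC_CHECK_SIZEOF(off_t) so that config.h defines SIZEOF_OFF_T:

#include "config.h"   /* defines SIZEOF_OFF_T via AC_CHECK_SIZEOF(off_t) */
#include <stdio.h>
#include <sys/types.h>

#if SIZEOF_OFF_T >= 8
# define OFF_T_FMT "%lld"        /* assumes the libc supports %lld */
  typedef long long print_off_t;
#else
# define OFF_T_FMT "%ld"
  typedef long print_off_t;
#endif

static void
print_size (off_t size)
{
  /* The branch is resolved by the preprocessor, so there is no
     "condition is constant" warning, unlike a run-time
     if (sizeof (off_t) == 8) test. */
  printf ("file size: " OFF_T_FMT "\n", (print_off_t) size);
}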
Mauro Tortonesi [EMAIL PROTECTED] writes:
It is rather common that either the charset at the remote host or
the charset at the local host are set incorrectly.
this is not a problem. actually (apart from the case of a document
returned as an HTTP response) we cannot be sure that the charset
Mauro Tortonesi [EMAIL PROTECTED] writes:
the problem is not with HTTP response messages, but with HTTP
resources (which can be for example binary data or multibyte char
text - in this case you really want to escape unprintable data while
printing all the valid multibyte chars you can using
Mauro Tortonesi [EMAIL PROTECTED] writes:
but i suspect we will probably have to add foreign charset support
to wget one of these days. for example, suppose we are doing a
recursive HTTP retrieval and the HTML pages we retrieve are not
encoded in ASCII but in UTF16 (an encoding in which is
Mauro Tortonesi [EMAIL PROTECTED] writes:
The only reason why that bug occurred was the broken hotfix that
escaped *all* non-ASCII content printed by Wget, instead of only that
actually read from the network. We don't need iconv to fix that, we
need correct quoting.
yes, you may be right.
Mauro Tortonesi [EMAIL PROTECTED] writes:
If that weren't safe, Wget would (along with many other programs) have
been broken a long time ago. In fact, if that were the case, I would
never have even accepted adding support for gettext in the first
place.
well, theoretically it could happen.
Mauro Tortonesi [EMAIL PROTECTED] writes:
i don't know what's the correct procedure to add a new translation
to a GNU project (hrvoje, do you have any ideas?),
I used to add translations for Croatian, both for Wget and for other
programs, so I should know, but I must admit that the details now
Simone Piunno [EMAIL PROTECTED] writes:
On Monday 21 February 2005 16:18, Hrvoje Niksic wrote:
Also, gettext doesn't change behavior of low-level routines in a
fundamental way -- it's just a way of getting different strings.
On the other hand, wide chars do introduce pretty invasive changes
When opening files, Wget takes care (by default) to not overwrite an
existing file, and to tell the user where the file is to be saved.
However, the defense against overwriting may fail because Wget
determines the file name before attempting the download, but only
opens the file when the data
[EMAIL PROTECTED] (Steven M. Schweda) writes:
SunOS 5.9 /usr/include/fcntl.h:
[...]
/* large file compilation environment setup */
#if !defined(_LP64) && _FILE_OFFSET_BITS == 64
#ifdef __PRAGMA_REDEFINE_EXTNAME
#pragma redefine_extname open open64
Simone Piunno [EMAIL PROTECTED] writes:
On Tuesday 22 February 2005 00:10, Hrvoje Niksic wrote:
If wide chars were in that message, you could no longer print it with
printf, which means that a majority of gettext-using programs would be
utterly broken, Wget included. I imagine I would have
Maciej W. Rozycki [EMAIL PROTECTED] writes:
Is it possible to portably use open() and retain large file support?
Try the AC_SYS_LARGEFILE autoconf macro.
That's what I thought I was using. I was just afraid that open()
wasn't correctly encompassed by the large file APIs, a fear that
proved
Maciej W. Rozycki [EMAIL PROTECTED] writes:
I wonder what the difference is between AC_FUNC_FSEEKO and
AC_CHECK_FUNCS(fseeko). The manual doesn't seem to explain.
Well, that's what I have on my local system:
- Macro: AC_FUNC_FSEEKO
If the `fseeko' function is available, define
Herold Heiko [EMAIL PROTECTED] writes:
I tried a test compile just now, with Visual C++ 6 I get different
errors:
Thanks for checking it.
string_t.[ch] - iswblank doesn't seem to be available,
For now, just remove string_t from the Makefile. It's not used
anywhere yet.
Also, the large
Herold Heiko [EMAIL PROTECTED] writes:
http.c(503) : warning C4090: 'function' : different 'const'
qualifiers
[...]
I don't quite understand these warnings. Did they occur before?
Definitely; I tried with a rev from March 2004, same warnings.
Then we can ignore them for now.
I
Maciej W. Rozycki [EMAIL PROTECTED] writes:
Doesn't GCC work for this target?
It does, in the form of Cygwin and MinGW. But Heiko was using MS
VC before, and we have catered to broken compilers before, so it
doesn't hurt to try.
Herold Heiko [EMAIL PROTECTED] writes:
That said, in retr.c simplifying the int rdsize line did not solve it, but I
tried the following; we have:
#ifndef MIN
# define MIN(i, j) ((i) <= (j) ? (i) : (j))
#endif
int rdsize = exact ? MIN (toread - sum_read, dlbufsize) : dlbufsize;
Herold Heiko [EMAIL PROTECTED] writes:
That does solve it; in fact I found some MS articles suggesting the same thing.
The attached patch works around the problem by disabling optimization
selectively.
I was able to retrieve a 2.5GB file with ftp.
In other words, large files now work on Windows? I
Is there a way to get the functionality of open(..., O_CREAT|O_EXCL)
under Windows? For those who don't know, O_EXCL opens the file
exclusively, guaranteeing that the file we're opening will not be
overwritten. (Note that it's not enough to check that the file
doesn't exist before opening it; it
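For what it's worth, here is an untested sketch of one possible
approach; the Win32 path relies on CREATE_NEW failing when the file
already exists, and WINDOWS is assumed to be the define used by the
Windows build:

/* Sketch only: open a file for writing, failing if it already exists. */
#ifdef WINDOWS
# include <windows.h>
# include <io.h>
# include <fcntl.h>

static int
open_excl (const char *name)
{
  HANDLE h = CreateFile (name, GENERIC_WRITE, 0, NULL,
                         CREATE_NEW,   /* fails if the file already exists */
                         FILE_ATTRIBUTE_NORMAL, NULL);
  if (h == INVALID_HANDLE_VALUE)
    return -1;
  /* Convert the handle to a CRT descriptor so the caller can keep
     using write()/close().  (Older CRTs declare the first parameter
     as long rather than intptr_t.) */
  return _open_osfhandle ((intptr_t) h, _O_WRONLY);
}
#else
# include <fcntl.h>

static int
open_excl (const char *name)
{
  return open (name, O_WRONLY | O_CREAT | O_EXCL, 0666);
}
#endif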
Please note that Wget 1.9.x doesn't support downloading of 2G+ files.
To download large files, get the CVS version of Wget (see
http://wget.sunsite.dk for instructions.)
Gisle Vanem [EMAIL PROTECTED] writes:
Hrvoje Niksic wrote:
In other words, large files now work on Windows? I must admit, that
was almost too easy. :-)
Don't open the champagne bottle just yet :)
Too late, the bottle is already empty. :-)
Now could someone try this with Borland
Gisle Vanem [EMAIL PROTECTED] writes:
Another option was to simply set the (system) errno after the Winsock
operations, and have our own strerror that recognizes them. (That
assumes that Winsock errno values don't conflict with the system ones,
which I believe is the case.)
That assumption
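For reference, the idea sketched out (function names are hypothetical;
this assumes the WSA* codes, which start at 10000, don't collide with
the CRT errno values):

#ifdef WINDOWS
# include <winsock2.h>
# include <errno.h>
# include <string.h>

/* After a failed Winsock call, stash the Winsock error into errno... */
static void
set_winsock_errno (void)
{
  errno = WSAGetLastError ();
}

/* ...and use our own strerror wrapper that knows the WSA* codes. */
static const char *
wget_strerror (int err)
{
  switch (err)
    {
    case WSAECONNREFUSED: return "Connection refused";
    case WSAETIMEDOUT:    return "Connection timed out";
    case WSAEHOSTUNREACH: return "No route to host";
    default:              return strerror (err);
    }
}
#endif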
Steve Thompson [EMAIL PROTECTED] writes:
I have found in another context that the Windows C run-time library
can't handle files larger than 2GB in any context, when using fopen,
etc. The size of off_t is 4 bytes on IA32.
I know that, but stdio is not necessarily tied to off_t anyway --
except
Gisle Vanem [EMAIL PROTECTED] writes:
There is strace for Win-NT too. But I dare not install it to find
out.
Hmm, OK.
PS. it is quite annoying to get 2 copies of every message.
I'll try to remember to edit the headers to leave your private address
out.
Also, there should be a Reply-to:
Gisle Vanem [EMAIL PROTECTED] writes:
It doesn't seem the patches to support 2GB files work on
Windows. Wget hangs indefinitely at the end of transfer. E.g.
[...]
I seem to be unable to repeat this.
Does this happen only with large files, or with all files on the
large-file-enabled version
Noèl Köthe [EMAIL PROTECTED] writes:
Am Mittwoch, den 23.02.2005, 23:13 +0100 schrieb Hrvoje Niksic:
The most requested feature of the last several years finally arrives
-- large file support. With this patch Wget should be able to
download files larger than 2GB on systems that support them
Belov, Charles [EMAIL PROTECTED] writes:
I would like to use wget 1.9.1 instead of the wget 1.8.x which is
installed on our server. I downloaded 1.9.1 from the Gnu ftp site,
and issued the command:
make -f Makefile.in wget191
You're not supposed to use Makefile.in directly. Run
Brad Andersen [EMAIL PROTECTED] writes:
This option appears to be missing from wget --help, however,
it is in the documentation. It is not working in 1.9 or
1.9.1.
That option will first appear in Wget 1.10 and is currently available
in CVS. Where did you find
With today's prevalence of NAT, I believe that passive FTP should be
made default.
On systems without NAT, both types should work, and on systems
that use NAT only passive FTP will work. This makes it the obvious
choice to be the default. I believe web browsers have been doing the
same for
Thanks for the pointer. Note that a `--active-ftp' is not necessary
in the CVS version because every --option has the equivalent
--no-option. This means that people who don't want passive FTP can
specify `--no-passive-ftp', or `--passive-ftp=no'.
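For example, either of the following disables passive FTP in the CVS
version (the URL is a placeholder):

wget --no-passive-ftp ftp://ftp.example.com/pub/big.iso
wget --passive-ftp=no ftp://ftp.example.com/pub/big.iso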
Gabor Istvan [EMAIL PROTECTED] writes:
I would like to know if it is possible to mirror or recursively download
web sites that have links like ' .php?dir=./ ' within. If so, what are the
options to apply?
I don't see why that wouldn't work. Something like `wget -r URL'
should apply.
Martin Trautmann [EMAIL PROTECTED] writes:
I'm afraid that reading the URLs from an input file can't be passed
through a -D filter? What's a reasonable behavior of combining -i
and -D?
-D filters the URLs encountered with -r. Specifying an input file is
the same as specifying those URLs on
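For example, with a list file (domain names and the file name are
placeholders):

wget -r -D example.com,example.org -i url-list

The URLs in url-list are fetched unconditionally; -D only restricts the
links followed from them during the recursive retrieval.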
Roman Shiryaev [EMAIL PROTECTED] writes:
I usually download files using wget from one of my ISP's file servers via
8 Mbps ADSL under Linux. And wget always shows me that the download speed
is no more than ~830 kbytes/sec. Now I guess that this is the transfer
speed of really useful data only (i.e. wget
[EMAIL PROTECTED] writes:
I found a problem downloading a page from an https://... URL when I
have a connection only through a proxy. For non-HTTP protocols I use the
CONNECT method, but wget seems not to use it and accesses
https URLs directly. For http:// URLs wget downloads fine.
Can you tell me, is
John Andrea [EMAIL PROTECTED] writes:
I'm trying to connect to a virtual host even though the DNS does not
point to that host. I believe this should work if I specify the ip
address of the host and then use the Host: header within the
request. A test with telnet tells me that this works.
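Presumably the wget equivalent would be something along the lines of
(the address and host name are made up):

wget --header='Host: www.example.com' http://192.0.2.1/index.html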
Dan Jacobson [EMAIL PROTECTED] writes:
Is it still useful to mail to [EMAIL PROTECTED]? I don't think
anybody's home. Shall the address be closed?
If you're referring to Mauro being busy, I don't see it as a reason to
close the bug reporting address.
Tony Lewis [EMAIL PROTECTED] writes:
Hrvoje Niksic wrote:
This patch imposes IMHO reasonable, yet safe, limits for reading server
responses into memory.
Your choice of default limits looks reasonable to me, but shouldn't
wget provide the user a way to override these limits
Tony Lewis [EMAIL PROTECTED] writes:
Hrvoje Niksic wrote:
I don't see how and why a web site would generate headers (not
bodies, to be sure) larger than 64k.
To be honest, I'm less concerned about the 64K header limit than I
am about limiting a header line to 4096 bytes.
The 4k limit does
Martin Trautmann [EMAIL PROTECTED] writes:
is there a fix when file names are too long?
I'm afraid not. The question here would be, how should Wget know the
maximum size of file name the file system supports? I don't think
there's a portable way to determine that.
Maybe there should be a way
Martin Trautmann [EMAIL PROTECTED] writes:
On 2005-03-21 17:13, Hrvoje Niksic wrote:
Martin Trautmann [EMAIL PROTECTED] writes:
is there a fix when file names are too long?
I'm afraid not. The question here would be, how should Wget know the
maximum size of file name the file system
As currently written, Wget really prefers to determine the file name
based on the URL, before the download starts (redirections are sort of
an exception here). It would be easy to add an option to change
index.html to index.xml or whatever you desire, but it would be
much harder to determine the
Stephen Leaf [EMAIL PROTECTED] writes:
parameter option --stdout
this option would print the file being downloaded directly to stdout, which
would also mean that _only_ the file's content is printed: no errors, no
verbosity.
usefulness?
wget --stdout http://server.com/file.bz2 | bzcat > file
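(For what it's worth, something close to this is already possible with
existing options, e.g.:

wget -q -O - http://server.com/file.bz2 | bzcat > file

since -O - writes the document to stdout and -q suppresses all other
output.)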
Jens Rösner [EMAIL PROTECTED] writes:
C:\wget>wget --proxy=on -x -r -l 2 -k -x -limit-rate=50k --tries=45
--directory-prefix=AsptDD
As Jens said, Wget 1.5.3 did not yet support bandwidth throttling.
Also please note that the option is named --limit-rate, not
-limit-rate.
JASON JESSO [EMAIL PROTECTED] writes:
I renamed GETALL to GETALLJJ so as to avoid the
conflict. Now I get linker errors:
gcc -O2 -Wall -Wno-implicit -o wget cmpt.o connect.o
convert.o cookies.o ftp.o ftp-basic.o ftp-ls.o
ftp-opie.o hash.o headers.o host.o html-parse.o
html-url.o http.o
JASON JESSO [EMAIL PROTECTED] writes:
[...]
I found that GETALL conflicts with other headers on
the system.
We can easily rename GETALL to GLOB_GETALL or something like that and
will do so for the next release. Thanks for the report.
I now see the cause of the linking problem: Apache's fnmatch.h is
shadowing the system one. Either remove Apache's bogus fnmatch.h or
remove the code that defines SYSTEM_FNMATCH in sysdep.h.
I wonder if Apache does the fnmatch clobbering by default, or if the
system integrators botch things up.
Behdad Esfahbod [EMAIL PROTECTED] writes:
Well, sorry if it's all nonsense now: Last year I sent the
following mail, and got a reply confirming this bug and that it
may be changed to use the pid instead of a serial number in the log filename.
Recently I was doing a project and had the same problem, I found
Wget shouldn't alter the page contents, except for converted links.
Is the funny character in places which Wget should know about
(e.g. URLs in links) or in the page text? Could you paste a minimal
excerpt from the page, before and after garbling done by Wget?
Alternately, could you post a URL
I'm not sure what causes this problem, but I suspect it does not come
from Wget doing something wrong. That Notepad opens the file
correctly is indicative enough.
Maybe those browsers don't understand UTF-8 (or other) encoding of
Unicode when the file is opened on-disk?