Gisle Vanem [EMAIL PROTECTED] writes:
Why the need for asprintf() in url.c:903? This function is missing
on DOS/Win32 and nowhere to be found in ./lib.
Wget is supposed to use aprintf, which is defined in utils.c, and is
not specific to Unix.
It's preferable to use an asprintf-like function
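A minimal sketch of such a helper, assuming C99 vsnprintf semantics (wget's actual aprintf in utils.c differs in detail):
#include <stdarg.h>
#include <stdio.h>
#include <stdlib.h>

/* Sketch of an asprintf-like helper: format into a freshly
   malloc'd buffer and return it; the caller frees.  */
static char *
aprintf_sketch (const char *fmt, ...)
{
  va_list args;
  int len;
  char *buf;

  /* First pass: measure (C99 semantics of vsnprintf).  */
  va_start (args, fmt);
  len = vsnprintf (NULL, 0, fmt, args);
  va_end (args);
  if (len < 0)
    return NULL;

  buf = malloc ((size_t) len + 1);
  if (!buf)
    return NULL;

  /* Second pass: format for real.  */
  va_start (args, fmt);
  vsnprintf (buf, (size_t) len + 1, fmt, args);
  va_end (args);
  return buf;
}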
Juon, Stefan [EMAIL PROTECTED] writes:
I just noticed these debug messages:
DEBUG output created by Wget 1.10.2 on cygwin.
You are of course aware that this is not the latest Wget (1.11.4)?
As mentioned before, recursive download
Micah Cowan [EMAIL PROTECTED] writes:
I don't see what you see wrt making the code harder to follow and reason
about (true abstraction rarely does, AFAICT,
I was referring to the fact that adding an abstraction layer requires
learning about the abstraction layer, both its concepts and its
Micah Cowan [EMAIL PROTECTED] writes:
Or did you mean to write a wget version of the socket interface? I.e.,
to write our version of socket, connect, write, read, close, bind,
listen, accept, ...? Sorry, I'm confused.
Yes! That's what I meant. (Except, we don't need listen, accept; and
we only need bind
Alain Guibert [EMAIL PROTECTED] writes:
On Wednesday, April 2, 2008 at 23:09:52 +0200, Hrvoje Niksic wrote:
Micah Cowan [EMAIL PROTECTED] writes:
It's hard for me to imagine an fnmatch that ignores FNM_PATHNAME
The libc 5.4.33 fnmatch() supports FNM_PATHNAME, and there is code
apparently
Alain Guibert [EMAIL PROTECTED] writes:
Maybe you could put a breakpoint in fnmatch and see what goes wrong?
The for loop intended to eat several characters from the string also
advances the pattern pointer. This one reaches the end of the pattern,
and points to a NUL. It is not a '*'
Micah Cowan [EMAIL PROTECTED] writes:
My Name? wrote:
Hello,
I was wondering if there was a way to prevent the title changing...
wget is currently nested in another script, and would probably confuse
the user as to why the title says "wget file location". Is it possible
to retain its former
Alain Guibert [EMAIL PROTECTED] writes:
This old system does HAVE_WORKING_FNMATCH_H (and thus
SYSTEM_FNMATCH). When #undefining SYSTEM_FNMATCH, the test still
fails at the very same line. And then it also fails on modern
systems. I guess this points at the embedded src/cmpt.c:fnmatch()
Alain Guibert [EMAIL PROTECTED] writes:
Hello Micah,
On Monday, March 31, 2008 at 11:39:43 -0700, Micah Cowan wrote:
could you try to isolate which part of test_dir_matches_p is failing?
The only failing src/utils.c test_array[] line is:
| { { "*COMPLETE", NULL, NULL }, "foo/!COMPLETE",
Micah Cowan [EMAIL PROTECTED] writes:
It sounds like a libc problem rather than a gcc problem. Try
#undefing SYSTEM_FNMATCH in sysdep.h and see if it works then.
It's hard for me to imagine an fnmatch that ignores FNM_PATHNAME: I
mean, don't most shells rely on this to handle file globbing
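A quick check for a suspect libc: with FNM_PATHNAME, a '*' must not match a '/', so a conforming fnmatch rejects the first call below and accepts the second (a standalone test, not wget code):
#include <fnmatch.h>
#include <stdio.h>

int
main (void)
{
  /* With FNM_PATHNAME, '*' must not match a '/'.  */
  int broken = fnmatch ("*", "foo/bar", FNM_PATHNAME) == 0;
  /* An explicit '/' in the pattern still matches one.  */
  int sane = fnmatch ("foo/*", "foo/bar", FNM_PATHNAME) == 0;
  printf ("'*' vs foo/bar: %s\n", broken ? "BROKEN (matched)" : "ok");
  printf ("foo/* vs foo/bar: %s\n", sane ? "ok" : "BROKEN (no match)");
  return 0;
}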
Micah Cowan [EMAIL PROTECTED] writes:
I'm wondering whether it might make sense to go back to completely
ignoring the system-provided fnmatch?
One argument against that approach is that it increases code size on
systems that do correctly implement fnmatch, i.e. on most modern
Unixes that we
mm w [EMAIL PROTECTED] writes:
#if SIZEOF_VOID_P > 4
key += (key << 44);
key ^= (key >> 54);
key += (key << 36);
key ^= (key >> 41);
key += (key << 42);
key ^= (key >> 34);
key += (key << 39);
key ^= (key >> 44);
#endif
this one is minor: the shift count is greater than or equal to
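For context, the quoted lines come from a shift-and-xor pointer hash of roughly this shape; the sketch below is illustrative rather than wget's exact code, with the wide shifts guarded so they only compile where pointers are 64-bit:
#include <stdint.h>

/* Sketch of a shift-and-xor pointer hash of the kind quoted
   above; the wide rounds are compiled only where uintptr_t is
   wider than 32 bits, which is what the SIZEOF_VOID_P guard
   accomplishes at configure time.  */
static unsigned long
hash_pointer_sketch (const void *ptr)
{
  uintptr_t key = (uintptr_t) ptr;
  key += (key << 12);
  key ^= (key >> 22);
  key += (key << 4);
  key ^= (key >> 9);
#if UINTPTR_MAX > 0xffffffffu
  key += (key << 44);
  key ^= (key >> 54);
  key += (key << 36);
  key ^= (key >> 41);
#endif
  return (unsigned long) key;
}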
Charles [EMAIL PROTECTED] writes:
On Thu, Mar 13, 2008 at 1:17 AM, Hrvoje Niksic [EMAIL PROTECTED] wrote:
It assumes, though, that the preexisting index.html corresponds to
the one that you were trying to download; it's unclear to me how
wise that is.
That's what -nc does
Micah Cowan [EMAIL PROTECTED] writes:
When I tried this in my wget, I got different behavior with wget 1.11
alpha and wget 1.10.2
D:\>wget --proxy=off -r -l 1 -nc -np http://localhost/test/
File `localhost/test/index.html' already there; not retrieving.
D:\>wget110 --proxy=off -r -l 1
Micah Cowan [EMAIL PROTECTED] writes:
The prerelease still has a potential for crashes: in the Czech locales
it will tend to crash if the download is large (or slow) enough to push
minutes into the three-digit zone (that is, if it would take at least
1 hour and 40 minutes).
How can minutes get in
Martin Paul [EMAIL PROTECTED] writes:
Micah Cowan wrote:
Then, how was --http-user, --http-passwd working in the past? Those only
work with the underlying HTTP authentication protocol (the browser's
unattractive popup dialog), which AFAIK can't be affected by CGI forms
or JavaScript, etc.
I
Micah Cowan [EMAIL PROTECTED] writes:
Also: the fix to the locale/progress-bar issues resulted in the
added use of a couple wide-character/multibyte-related functions,
mbtowc and wcwidth.
So far Wget has avoided explicit use of wc/mb functions on the account
of portability. Fortunately in
Diego 'Flameeyes' Pettenò [EMAIL PROTECTED] writes:
It is a micro-optimisation, I admit that, but it's not just the
indirection the problem.
Pointers, and structures containing pointers, need to be
runtime-relocated for shared libraries and PIC code (let's assume
that shared libraries are
Diego 'Flameeyes' Pettenò [EMAIL PROTECTED] writes:
On 01/feb/08, at 09:12, Hrvoje Niksic wrote:
Even ignoring the fact that Wget is not a shared library, there are
ways to solve this problem other than turning all char *foo[] into
char foo[][MAXSIZE], which is, sorry, just lame and wasteful
Micah Cowan [EMAIL PROTECTED] writes:
Note that you could also do all the pointer maths up-front, leaving
existing usage code the same, with something like:
static const char foo_data[] = "one\0two\0three";
static const char *const foo[] = {foo_data + 0, foo_data + 4,
foo_data + 8};
I
Micah Cowan [EMAIL PROTECTED] writes:
Right. What I was meaning to prevent, though, is the need to do:
foo_data + foo_idx[i]
and instead do:
foo[i]
That is why my example had a foo function, which turns foo[i] to
foo(i), but otherwise works the same. Using just foo[i] is
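Putting the pieces of this subthread together, a hedged sketch of the scheme under discussion (the names foo_data/foo_idx/foo are illustrative): one contiguous string table, integer offsets instead of pointers, and an accessor to hide the pointer math:
/* One contiguous, relocation-free string table...  */
static const char foo_data[] = "one\0two\0three";
/* ...plus integer offsets instead of pointers.  */
static const unsigned short foo_idx[] = { 0, 4, 8 };

/* Accessor hiding the pointer math, so call sites write
   foo (i) rather than foo_data + foo_idx[i].  */
static inline const char *
foo (int i)
{
  return foo_data + foo_idx[i];
}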
Christopher G. Lewis [EMAIL PROTECTED] writes:
On Vista, you probably have to run in an administrative command
prompt.
You mean that you need to be the administrator to run Wget? If so,
why? Surely other programs managed to access the network without
administrator privileges.
Hopkins, Scott [EMAIL PROTECTED] writes:
Worked perfect. Thanks for the help.
Actually, I find it surprising that AIX's strdup would have such a
bug, and that it would go undetected. It is possible that the problem
lies elsewhere and that the change is just masking the real bug.
strdup
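If the system strdup really were at fault, the usual workaround is a trivial replacement of this shape (a sketch; wget's cmpt.c carries similar fallbacks for missing functions):
#include <stdlib.h>
#include <string.h>

/* Minimal strdup replacement for systems whose own version is
   missing or suspect.  */
char *
strdup_sketch (const char *s)
{
  size_t len = strlen (s) + 1;
  char *copy = malloc (len);
  if (copy)
    memcpy (copy, s, len);
  return copy;
}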
Hopkins, Scott [EMAIL PROTECTED] writes:
Interesting. Compiled that code and I get the following when running
the resulting binary.
/var/opt/prj/wget$ strdup_test
20001448
As I suspected. Such an obvious strdup bug would likely have been
detected sooner.
I appear to have a
Marcus [EMAIL PROTECTED] writes:
Is there some way I can get WGET to work with a percentage sign in the password?
I.e. WGET ftp://login:[EMAIL PROTECTED]/file.txt
Yes, escape the percentage as %25:
wget ftp://login:[EMAIL PROTECTED]/file.txt
(This is not specific to Wget; '%' is the hex escape
Micah Cowan [EMAIL PROTECTED] writes:
What's up with the -Y option?
IIRC it used to be the option to turn on the use of proxies. I
retained it for compatibility because many people were using `-Y on'
in their scripts. It might be the time to retire that option and only
leave the --no-proxy
Micah Cowan [EMAIL PROTECTED] writes:
Actually, the reason it is not enabled by default is that (1) it is
broken in some respects that need addressing, and (2) as it is currently
implemented, it involves a significant amount of extra traffic,
regardless of whether the remote end actually ends
I've noticed that the NEWS file now includes contents that would
previously not have been included. NEWS was conceived as a resource
for end users, not for developers or distribution maintainers. (Other
GNU software seems to follow a similar policy.) I tried hard to keep
it readable by only
If GnuTLS support will not be ready for the 1.11 release, may I
suggest that we not advertise it in NEWS? After all, it's badly
broken in that it doesn't support certificate validation, which is one
of the most important features of an SSL client. It also doesn't
support many of our SSL
Micah Cowan [EMAIL PROTECTED] writes:
I thought the code was refactored to determine the file name after
the headers arrive. It certainly looks that way by the output it
prints:
{mulj}[~]$ wget www.cnn.com
[...]
HTTP request sent, awaiting response... 200 OK
Length: unspecified
Gerard [EMAIL PROTECTED] writes:
In particular, if Wget chooses not to download a file because the
local timestamp is still current, or because its size corresponds
to that of the remote file, these should result in an exit status
of zero.
I disagree. If wget has not downloaded a file,
R Kimber [EMAIL PROTECTED] writes:
I agree that Wget should allow the caller to find out what
happened, but I don't think exit codes can be of much use there.
For one, they don't allow distinction between different
successful conditions, which is a problem in many cases.
I'm not sure I
Micah Cowan [EMAIL PROTECTED] writes:
Hrvoje Niksic wrote:
A Wget user showed me an example of Wget misbehaving.
Hrvoje, do you know if this is a regression over 1.10.2?
I don't think so, but it's probably a regression over 1.9.x. In 1.10
Wget started to set up the locale by calling
Mauro Tortonesi [EMAIL PROTECTED] writes:
I vote we stick with C. Java is slower and more prone to environmental
problems.
Not really: because of its JIT compiler, Java is often as fast as
C/C++, and sometimes even significantly faster.
Not if you count startup time, which is crucial for a
Micah Cowan [EMAIL PROTECTED] writes:
The new Wget flags empty Set-Cookie as a syntax error (but only
displays it in -d mode; possibly a bug).
I'm not clear on exactly what's possibly a bug: do you mean the fact
that Wget only calls attention to it in -d mode?
That's what I meant.
I
Micah Cowan [EMAIL PROTECTED] writes:
I was able to reproduce the problem above in the release version of
Wget; however, it appears to be working fine in the current
development version of Wget, which is expected to release soon as
version 1.11.*
I think the old Wget crashed on empty
Tony Lewis [EMAIL PROTECTED] writes:
Hrvoje Niksic wrote:
And how is .tar.gz renamed? .tar-1.gz?
Ouch.
OK. I'm responding to the chain and not Hrvoje's expression of pain. :-)
What if we changed the semantics of --no-clobber so the user could specify
the behavior? I'm thinking it could
Andreas Pettersson [EMAIL PROTECTED] writes:
And how is .tar.gz renamed? .tar-1.gz?
Ouch.
Micah Cowan [EMAIL PROTECTED] writes:
It just occurred to me that this change breaks backward compatibility.
It will break scripts that try to clean up after Wget or that in any
way depend on the current naming scheme.
It may. I am not going to commit to never ever changing the current
Micah Cowan [EMAIL PROTECTED] writes:
Christian Roche has submitted a revised version of a patch to modify
the unique-name-finding algorithm to generate names in the pattern
foo-n.html rather than foo.html.n. The patch looks good, and
will likely go in very soon.
foo.html.n has the advantage
Hrvoje Niksic [EMAIL PROTECTED] writes:
Micah Cowan [EMAIL PROTECTED] writes:
Christian Roche has submitted a revised version of a patch to modify
the unique-name-finding algorithm to generate names in the pattern
foo-n.html rather than foo.html.n. The patch looks good, and
will likely go
Micah Cowan [EMAIL PROTECTED] writes:
Or getting the definition requires defining a magic preprocessor
symbol such as _XOPEN_SOURCE. The man page I found claims that the
function is defined by XPG4 and links to standards(5), which
explicitly documents _XOPEN_SOURCE.
Right. But we set that
Micah Cowan [EMAIL PROTECTED] writes:
I can't even begin to fathom why some system would fail to compile
in such an event: _XOPEN_SOURCE is a feature request, not a
guarantee that you'll get some level of POSIX.
Yes, but sometimes the system headers are buggy. Or sometimes they
work just
Micah Cowan [EMAIL PROTECTED] writes:
Okay... but I don't see the logic of:
1. If the system has POSIX's sigsetjmp, use that.
2. Otherwise, just assume it has the completely unportable, and not
even BSDish, siggetmask.
Are you sure siggetmask isn't BSD-ish? When I tested that code on
Micah Cowan [EMAIL PROTECTED] writes:
I know nothing of VMS. If it's sufficiently different from Unix that
it has wildly different alarm/signal facilities, or no alarm/signal at
all (as is the case with Windows), then it certainly makes sense for
Wget to provide a VMS-specific
Daniel Stenberg [EMAIL PROTECTED] writes:
It is quite possible that the Autoconf test for sigsetjmp yields a
false negative.
I very much doubt it does, since we check for it in the curl
configure script,
Note that I didn't mean in general. Such bugs can sometimes show in
one program or
Micah Cowan [EMAIL PROTECTED] writes:
Note that curl provides the additional check for a macro version in
the configure script, rather than in the source; we should probably
do it that way as well. I'm not sure how that helps for this,
though: if the above test is failing, then either it's a
Micah Cowan [EMAIL PROTECTED] writes:
Steven Schweda has started some testing on Tru64, and uncovered some
interesting quirks; some of them look like flaws I've introduced,
and others are bugginess in the Tru64 environment itself. It's
proving very helpful. :)
Is the exchange off-list or on
Micah Cowan [EMAIL PROTECTED] writes:
Is there any reason we can't move the contents of config-post.h into
sysdep.h, and have the .c files #include wget.h at the top, before any
system headers?
wget.h *needs* stuff from the system headers, such as various system
types. If you take into
Micah Cowan [EMAIL PROTECTED] writes:
Yes, that appears to work quite well, as long as we seed it right;
starting with a consistent X₀ would be just as bad as trying them
sequentially, and choosing something that does not change several times
a second (such as time()) still makes it likely
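A sketch of the kind of seeding being discussed: mix several values that differ between runs and between processes, rather than a constant X₀ or a bare time() (the exact mix here is illustrative, not wget's):
#include <stdlib.h>
#include <sys/time.h>
#include <unistd.h>

/* Illustrative seeding: combine wall-clock seconds, sub-second
   time, and the PID so two processes started within the same
   second still diverge.  */
static void
seed_random_sketch (void)
{
  struct timeval tv;
  gettimeofday (&tv, NULL);
  srand ((unsigned) (tv.tv_sec ^ tv.tv_usec ^ getpid ()));
}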
Micah Cowan [EMAIL PROTECTED] writes:
Could you be more specific? AFAICT, wget.h #includes the system headers
it needs. Considering the config-post.h stuff went at the top of the
sysdep.h, sysdep.h is already at the top of wget.h,
OK, it should work then. The reasoning behind my worrying is
Micah Cowan [EMAIL PROTECTED] writes:
version.c: $(wget_SOURCES) $(LDADD)
	printf '%s' 'const char *version_string = "@VERSION@' > $@
	-hg log -r tip --template=' ({node|short})' >> $@
	printf '%s\n' '";' >> $@
printf is not portable to older systems, but that may not be a
Micah Cowan [EMAIL PROTECTED] writes:
I may take liberties with the Make environment, and assume the
presence of a GNU toolset, though I'll try to avoid that where it's
possible.
Requiring the GNU toolset puts a large burden on the users of non-GNU
systems (both free and non-free ones).
Micah Cowan [EMAIL PROTECTED] writes:
Alright; I'll make an extra effort to avoid non-portable Make
assumptions then. It's just... portable Make _sucks_ (not that
non-portable Make doesn't).
It might be fine to require GNU make if there is a good reason for it
-- many projects do. But
Micah Cowan [EMAIL PROTECTED] writes:
Note that, technically, those are not leaks in real need of
plugging because they get called only once, i.e. they do not
accumulate (leak) unused memory. Of course, it's still a good
idea to remove them, if nothing else, then to remove false
positives
Micah Cowan [EMAIL PROTECTED] writes:
Make my src changes, create a changeset... And then I'm lost...
Alright, so you can make your changes, and issue an hg diff, and
you've basically got what you used to do with svn.
That is not quite true, because with svn you could also do svn
commit to
Tony Godshall [EMAIL PROTECTED] writes:
OK, so let's go back to basics for a moment.
wget's default behavior is to use all available bandwidth.
And so is the default behavior of curl, Firefox, Opera, and so on.
The expected behavior of a program that receives data over a TCP
stream is to
Micah Cowan [EMAIL PROTECTED] writes:
FYI, I've removed the PATCHES file. Not because I don't think it's
useful, but because the information needed updating (now that we're
using Mercurial rather than Subversion), I expect it to be updated
again from time to time, and the Wgiki seems to be
Micah Cowan [EMAIL PROTECTED] writes:
Among other things, version.c is now generated (rather than
parsed) every time make all is run, which also means that make
all will always relink the wget binary, even if there haven't been
any changes.
I personally find that quite annoying. :-( I hope
Tony Godshall [EMAIL PROTECTED] writes:
available bandwidth and adjusts to that. The usefulness is in
trying to be unobtrusive to other users.
The problem is that Wget simply doesn't have enough information to be
unobtrusive. Currently available bandwidth can and does change as new
Tony Godshall [EMAIL PROTECTED] writes:
My point remains that the maximum initial rate (however you define
initial in a protocol as unreliable as TCP/IP) can and will be
wrong in a large number of cases, especially on shared connections.
Again, would an algorithm where the rate is
Jim Wright [EMAIL PROTECTED] writes:
--limit-rate will find your version handy, but I want to hear from
them. :)
I would appreciate and have use for such an option. We often access
instruments in remote locations (think a tiny island in the Aleutians)
where we share bandwidth with other
Jim Wright [EMAIL PROTECTED] writes:
I think there is still a case for attempting percent limiting. I
agree with your point that we can not discover the full bandwidth of
the link and adjust to that. The approach discovers the current
available bandwidth and adjusts to that. The
Micah Cowan [EMAIL PROTECTED] writes:
It is actually illegal to specify byte values outside the range of
ASCII characters in a URL, but it has long been historical practice
to do so anyway. In most cases, the intended meaning was one of the
latin character sets (usually latin1), so Wget was
Hi, I am using wget 1.10.2 on Windows 2003, and have the same problem as Cantara.
The file system is NTFS.
Well, I find my problem is, I wrote the command in Scheduled Tasks like this:
wget -N -i D:\virus.update\scripts\kavurl.txt -r -nH -P
d:\virus.update\kaspersky
well, after
control H [EMAIL PROTECTED] writes:
After a few hours of headache I found out my --post-data option
didn't work as I expected because the data I send has to be
URL-escaped. This is not mentioned in either the manpage or the inline
help. A remark would be helpful.
Note that, in general, it
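To illustrate what URL-escaped means here: every byte outside the unreserved set becomes % followed by two hex digits, along the lines of this sketch (illustrative, not wget's url.c):
#include <ctype.h>
#include <stdlib.h>
#include <string.h>

/* Sketch of form-data percent-escaping: unreserved bytes pass
   through, everything else becomes %XX, so e.g. "50%" turns
   into "50%25".  The caller frees the result.  */
char *
url_escape_sketch (const char *s)
{
  static const char hex[] = "0123456789ABCDEF";
  char *out = malloc (3 * strlen (s) + 1);
  char *p = out;
  if (!out)
    return NULL;
  for (; *s; s++)
    {
      unsigned char c = (unsigned char) *s;
      if (isalnum (c) || c == '-' || c == '.' || c == '_' || c == '~')
        *p++ = (char) c;
      else
        {
          *p++ = '%';
          *p++ = hex[c >> 4];
          *p++ = hex[c & 0x0f];
        }
    }
  *p = '\0';
  return out;
}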
Esin Andrey [EMAIL PROTECTED] writes:
Hi!
I have downloaded the wget-1.10.2 sources and tried to compile them.
I have some warnings:
init.c: In function ‘cmd_spec_prefer_family’:
init.c:1193: warning: dereferencing type-punned pointer will break
strict-aliasing rules
I have
Micah Cowan [EMAIL PROTECTED] writes:
I have a question: why do we attempt to generate absolute paths and
such and CWD to those, instead of just doing the portable
string-of-CWDs to get where we need to be?
I think the original reason was that absolute paths allow crossing
from any directory
Micah Cowan [EMAIL PROTECTED] writes:
I agree that it's probably a good idea to move HTML parsing to a model
that doesn't require slurping everything into memory;
Note that Wget mmaps the file whenever possible, so it's not actually
allocated on the heap (slurped). You need some memory to
Micah Cowan [EMAIL PROTECTED] writes:
Yes, but when mmap()ping with MAP_PRIVATE, once you actually start
_using_ the mapped space, is there much of a difference?
As long as you don't write to the mapped region, there should be no
difference between shared and private mapped space -- that's
Micah Cowan [EMAIL PROTECTED] writes:
Actually, I was wrong though: sometimes mmap() _is_ failing for me
(did just now), which of course means that everything is in resident
memory.
I don't understand why mmapping a regular file would fail on Linux. What
error code are you getting?
(Wget tries
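The usual shape of the strategy discussed in this subthread, sketched (function and parameter names are illustrative, not wget's): try a read-only private mapping first, and fall back to heap allocation when mmap() refuses:
#include <fcntl.h>
#include <stdlib.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

/* Sketch: map a file read-only if possible, else slurp it into
   heap memory; *mapped tells the caller which release path
   (munmap vs. free) applies.  */
static char *
load_file_sketch (const char *name, size_t *size, int *mapped)
{
  struct stat st;
  char *buf = NULL;
  int fd = open (name, O_RDONLY);
  if (fd < 0 || fstat (fd, &st) < 0)
    goto out;
  *size = (size_t) st.st_size;
  /* A private read-only mapping is demand-paged and, as long as
     it stays unwritten, costs no more than a shared one.  */
  buf = mmap (NULL, *size, PROT_READ, MAP_PRIVATE, fd, 0);
  if (buf != MAP_FAILED)
    {
      *mapped = 1;
      goto out;
    }
  /* mmap refused (unusual filesystem, etc.): fall back.  */
  *mapped = 0;
  buf = malloc (*size);
  if (buf && read (fd, buf, *size) != (ssize_t) *size)
    {
      free (buf);
      buf = NULL;
    }
out:
  if (fd >= 0)
    close (fd);
  return buf;
}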
Micah Cowan [EMAIL PROTECTED] writes:
I don't know. The reason directories are matched separately from
files is because files often *don't* match the pattern you've chosen
for directories. For example, -X/etc should exclude anything under
/etc, such as /etc/passwd, but also
Micah Cowan [EMAIL PROTECTED] writes:
I think we should either be a stub, or a fairly complete manual
(and agree that the latter seems preferable); nothing half-way
between: what we have now is a fairly incomplete manual.
Converting from Info to man is harder than it may seem. The script
Micah Cowan [EMAIL PROTECTED] writes:
Converting from Info to man is harder than it may seem. The script
that does it now is basically a hack that doesn't really work well
even for the small part of the manual that it tries to cover.
I'd noticed. :)
I haven't looked at the script that
Micah Cowan [EMAIL PROTECTED] writes:
I would like for devs to be able to avoid the hassle of posting
non-trivial changes they make to the wget-patches list. To my mind,
there are two ways of accomplishing this:
1. Make wget-patches a list _only_ for submitting patches for
consideration by
Micah Cowan [EMAIL PROTECTED] writes:
Someone just asked on the #wget IRC channel if there was a way to
exclude files with certain names, and I recommended -X, without
realizing that that option excludes directories, not files.
My question is: why do we allow users to exclude directories,
Micah Cowan [EMAIL PROTECTED] writes:
Yes, but -R has a lesser degree of control over the sorts of
pathnames that it can constrain: for instance, if one uses
-Rmyprefix*, it will match files myprefix-foo.html and
myprefix-bar.mp3; but it will also match notmyprefix.js, which is
probably not
Micah Cowan [EMAIL PROTECTED] writes:
What is the status of the wget-patches list: is it being actively
used/monitored? Does it still serve its original purpose?
Mauro and I are subscribed to it. The list served its purpose while
Wget was actively maintained. It's up to you whether to
Micah Cowan [EMAIL PROTECTED] writes:
Mauro and I are subscribed to it. The list served its purpose while
Wget was actively maintained. It's up to you whether to preserve it
or replace it with a bug tracker patch submission process.
Given the low incidence of patch submission, is there any
Tony Lewis [EMAIL PROTECTED] writes:
There is a buffer overflow in the following line of the proposed code:
sprintf(filecopy, "\"%.2047s\"", file);
Wget has an `aprintf' utility function that allocates the result on
the heap. Avoids both buffer overruns and arbitrary limits on file
name
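In other words, instead of the fixed buffer and the truncating %.2047s, a call site could read (hypothetical fragment; aprintf and xfree are the wget utility functions named above):
/* Hypothetical replacement call site: let aprintf size the
   quoted copy exactly, then release it with wget's xfree.  */
char *filecopy = aprintf ("\"%s\"", file);
/* ... use filecopy ... */
xfree (filecopy);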
Rich Cook [EMAIL PROTECTED] writes:
Trouble is, it's undocumented as to how to free the resulting
string. Do I call free on it?
Yes. "Freshly allocated with malloc" in the function documentation
was supposed to indicate how to free the string.
Virden, Larry W. [EMAIL PROTECTED] writes:
Tony Lewis [EMAIL PROTECTED] writes:
Wget has an `aprintf' utility function that allocates the result on
the heap. Avoids both buffer overruns and
arbitrary limits on file name length.
If it uses the heap, then doesn't that open a hole where a
Rich Cook [EMAIL PROTECTED] writes:
On Jul 5, 2007, at 11:08 AM, Hrvoje Niksic wrote:
Rich Cook [EMAIL PROTECTED] writes:
Trouble is, it's undocumented as to how to free the resulting
string. Do I call free on it?
Yes. "Freshly allocated with malloc" in the function documentation
Micah Cowan [EMAIL PROTECTED] writes:
The GNU Project has appointed me as the new maintainer for wget,
Welcome!
If you need assistance regarding the workings of the internals or
design decisions, please let me know and I'll gladly help. I haven't
had much time to participate lately, but
Adrian Sandor [EMAIL PROTECTED] writes:
Thanks a lot Steven,
Apparently there's more than a little code in src/cookies.c which is
not ready for NULL values in the attr and value members of the
cookie structure.
Does that mean wget is buggy or does brinkster break the cookie
specification?
George Pavlov [EMAIL PROTECTED] writes:
Permanent cookies are supposed to be present in cookies.txt, and
Wget will use them. Session cookies will be missing (regardless
of how they were set) from the file and therefore will not be
picked up by Wget.
This is not entirely true. You can
Poppa Pump [EMAIL PROTECTED] writes:
Now I also need to load 2 more cookie values, but these are set
using Javascript. Does anyone know how to set those cookies. I can't
seem to find any info on this. Thanks for your help.
Wget doesn't really distinguish the cookies set by Javascript from
Greg Lindahl [EMAIL PROTECTED] writes:
"Host: kpic1" is an HTTP/1.1 feature. So this is nonsensical.
The `Host' header was widely used with HTTP/1.0, which is how it
entered the HTTP/1.1 spec.
For other reasons, Wget should really upgrade to using HTTP/1.1.
Robert Millan [EMAIL PROTECTED] writes:
-AC_CHECK_FUNCS(strtoll usleep ftello sigblock sigsetjmp memrchr)
+AC_CHECK_FUNCS(strtoll usleep ftello sigblock sigsetjmp memrchr strcasecmp strncasecmp strdup isatty symlink)
-dnl We expect to have these functions on Unix-like systems configure
Eugene Homyakov [EMAIL PROTECTED] writes:
Could you please make -i option accept URL? This is useful when
downloading m3u's
Note that you can easily chain Wget invocations, e.g.
wget -qO- URL | wget -i-
Hrvoje Niksic [EMAIL PROTECTED] writes:
[EMAIL PROTECTED] (Steven M. Schweda) writes:
It's starting to look like a consensus. A Google search for:
wget DONE_CWD
finds:
http://www.mail-archive.com/wget@sunsite.dk/msg08741.html
That bug is fixed in subversion, revision 2194.
I
[EMAIL PROTECTED] (Steven M. Schweda) writes:
It's starting to look like a consensus. A Google search for:
wget DONE_CWD
finds:
http://www.mail-archive.com/wget@sunsite.dk/msg08741.html
That bug is fixed in subversion, revision 2194.
Applied, thanks. Sorry about the delay.
Thanks for the report. Please note that your patch sets the thousands
separator to "C", which is probably not what you had in mind. I'm
about to apply a slightly different patch to deal with the problem you
describe:
2007-02-11 Hrvoje Niksic [EMAIL PROTECTED]
* utils.c
Vladimir Volovich [EMAIL PROTECTED] writes:
when using the -S option, wget dies apparently because the server
returns 8-bit characters in the WWW-Authenticate header:
[...]
Thank for the report and the test case. This patch fixes the problem:
2007-02-02 Hrvoje Niksic [EMAIL PROTECTED
Nejc Škoberne [EMAIL PROTECTED] writes:
[EMAIL PROTECTED]:~# wget -O /dev/null http://10.0.0.2/testsmall.dat 2>&1 |
grep saved
10:38:13 (86,22 MB/s) - `/dev/null' saved [21954560/21954560]
[EMAIL PROTECTED]:~# wget -O /dev/null ftp://testuser:[EMAIL PROTECTED]/testsmall.dat 2>&1 | grep saved
Ted Mielczarek [EMAIL PROTECTED] writes:
Is there any interest in this?
Sorry for answering this late. I, for one, find it very interesting.
Fetching CSS would be a very welcome feature.
Thanks for the report and the (correct) analysis. This patch fixes
the problem in the trunk.
2007-01-23 Hrvoje Niksic [EMAIL PROTECTED]
* cookies.c (parse_set_cookie): Would erroneously discard cookies
with unparsable expiry time.
Index: src/cookies.c
Thanks for the report. This patch fixes the problem:
2007-01-23 Hrvoje Niksic [EMAIL PROTECTED]
* progress.c (create_image): Check for ETA overflow.
(print_row_stats): Ditto.
Index: src/progress.c
===
--- src
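The diff above is truncated; an ETA overflow guard of the general shape below (a sketch with an assumed 99:59:59 cap, not the actual patch) is what keeps three-digit minutes from overrunning a fixed-width buffer:
/* Sketch of an ETA overflow guard: clamp before splitting into
   fields, so a huge ETA can never overrun the fixed-width slot
   in the progress bar buffer (99:59:59 cap assumed here).  */
static void
eta_fields_sketch (long eta, int *hrs, int *min, int *sec)
{
  const long cap = 99L * 3600 + 59 * 60 + 59;
  if (eta < 0)
    eta = 0;
  if (eta > cap)
    eta = cap;
  *hrs = (int) (eta / 3600);
  *min = (int) (eta % 3600 / 60);
  *sec = (int) (eta % 60);
}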
Lars Hamren [EMAIL PROTECTED] writes:
Download speeds are reported as K/s, where, I assume, K is short
for kilobytes.
It's meant to stand for what is now known as kibibyte.
The correct SI prefix for thousand is k, not K:
The prefix doesn't refer to thousands.