On Mon, 8 Sep 2008, Donald Allen wrote:
The page I get is what would be obtained if an un-logged-in user went to the
specified url. Opening that same url in Firefox *does* correctly indicate
that it is logged in as me and reflects my customizations.
First, LiveHTTPHeaders is the Firefox
On Thu, 7 Aug 2008, Micah Cowan wrote:
niwt (which I like best so far: Nifty Integrated Web Tools).
But the grand question is: how would that be pronounced? Like newt? :-)
--
/ daniel.haxx.se
On Sat, 5 Apr 2008, Micah Cowan wrote:
Or did you mean to write a wget version of the socket interface? I.e. to write
our own versions of socket, connect, write, read, close, bind, listen, accept, ...?
Sorry, I'm confused.
Yes! That's what I meant. (Except, we don't need listen, accept; and we only
need bind
On Sat, 5 Apr 2008, Hrvoje Niksic wrote:
This would mean we'd need to separate uses of read() and write() on normal
files (which should continue to use the real calls, until we replace them
with the file I/O abstractions), from uses of read(), write(), etc on
sockets, which would be using our
On Thu, 29 Nov 2007, Alan Thomas wrote:
Sorry for the misunderstanding. Honestly, Java would be a great language
for what wget does.
Perhaps, but not for where wget is used: on numerous platforms as a
stand-alone downloadable tool, including on embedded and small-CPU devices.
Environments
On Thu, 29 Nov 2007, Josh Williams wrote:
I really like the name `fetch` because it does what it says it does. It's
more UNIX-like than the other names :-)
While I agree that a unix-like name is preferable, I just want to point out
that 'fetch' is already used by an http/ftp transfer tool
On Fri, 26 Oct 2007, Micah Cowan wrote:
I very much doubt it does, since we check for it in the curl configure
script, and I can see the output from it running on Tru64 clearly state:
checking for sigsetjmp... yes
Note that curl provides the additional check for a macro version in the
On Fri, 26 Oct 2007, Micah Cowan wrote:
The obvious solution to that is to use c-ares, which does exactly that:
handle DNS queries asynchronously. Actually, I didn't know this until just
now, but c-ares was split off from ares to meet the needs of the curl
developers. :)
We needed an asynch
On Sat, 27 Oct 2007, Hrvoje Niksic wrote:
Do you say that Tru64 lacks both sigsetjmp and siggetmask? Are you
sure about that?
That is the only system we are currently talking about.
I find it hard to believe that Tru64 lacks both of those functions;
for example, see
On Wed, 10 Oct 2007, Micah Cowan wrote:
It appears from your description that Wget's check in http-ntlm.c:
#if OPENSSL_VERSION_NUMBER < 0x00907001L
is wrong. Your copy of openssl seems to be issuing a number lower than
that, and yet has the newer, capitalized names.
I don't think that check
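As a side note, the version number in that check can be decoded by hand. This is a sketch of OpenSSL's pre-3.0 0xMNNFFPPS layout (major nibble, minor byte, fix byte, patch byte, status nibble); the helper name is made up:

```python
def decode_openssl_version(n):
    """Decode a pre-3.0 OPENSSL_VERSION_NUMBER (0xMNNFFPPS layout)."""
    major = (n >> 28) & 0xF
    minor = (n >> 20) & 0xFF
    fix = (n >> 12) & 0xFF
    patch = (n >> 4) & 0xFF
    status = n & 0xF  # 0 = dev, 1..14 = beta, 15 = release
    return (major, minor, fix, patch, status)

# The threshold in the wget check corresponds to 0.9.7 beta 1:
print(decode_openssl_version(0x00907001))  # (0, 9, 7, 0, 1)
```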
Hey
Just to let you know: it seems that coverity.com scans/scanned wget as part of
their scan project, and I believe a wget person could get a sign-in and get to
see the details from that:
http://scan.coverity.com/rung0.html
We got curl added as well and coverity did find a range of
Hi guys,
ohloh.net keeps track of FLOSS authors and projects and do some interesting
stats and numbers. Wget is listed too:
http://www.ohloh.net/projects/7947?p=Wget
(No I'm not involved with the site in any way but as a happy visitor and
registered user.)
On Fri, 3 Aug 2007, Micah Cowan wrote:
I have a question: why do we attempt to generate absolute paths and such and
CWD to those, instead of just doing the portable string-of-CWDs to get where
we need to be?
Just a word of caution here: while RFC1738 tells this is the way to do it,
there
On Sat, 4 Aug 2007, Micah Cowan wrote:
Just a word of caution here: while RFC1738 tells this is the way to do it,
there are servers and times where this approach doesn't work. (lib)curl has
an option to specify the CWD style (multiple cwd, single cwd or no cwd) due
to this...
Could you be
On Wed, 18 Jul 2007, Micah Cowan wrote:
The manpage doesn't need to give as detailed explanations as the info manual
(though, as it's auto-generated from the info manual, this could be hard to
avoid); but it should fully describe essential features.
I know GNU projects for some reason go
On Fri, 29 Jun 2007, Micah Cowan wrote:
For submitting actual files as form content, multipart/form-data is a much
more natural mechanism.
[...]
Obviously, while this is something wget does not currently do, it is
something wget ought to do. I'll look into how we might implement this in a
On Thu, 28 Jun 2007, Hrvoje Niksic wrote:
It's easy to bring back the code itself, but it's not easy to integrate it
with how Wget communicates with proxies, at least not without reworking a
large chunk of HTTP code. That is why I started with support for simple
client NTLM and postponed
On Wed, 27 Jun 2007, Barnett, Rodney wrote:
I agree. I discovered this when trying to use wget with an HTTP
proxy that uses NTLM. (Is that on the list somewhere?)
I'm pretty sure the original NTLM code I contributed to wget _had_ the ability
to deal with proxies (as I wrote the support for
On Tue, 26 Jun 2007, Micah Cowan wrote:
The GNU Project has appointed me as the new maintainer for wget
Welcome!
Speaking of licensing changes, I don't see a specific exemption clause
for linking wget with OpenSSL
See the end of the README.
On Fri, 8 Jun 2007, Kelly Jones wrote:
I want to use Nagios to monitor a site (running on Windows/IIS) that
uses NTLM for authentication. Is there a plugin/script/library/etc
that can help?
Reason I'm cc'ing the lynx/wget/curl/links lists: if
lynx/wget/curl/links can do NTLM, I can easily
On Tue, 20 Feb 2007, Barnett, Rodney wrote:
Is anyone working on NTLM proxy authentication for wget? If not, are there
any major obstacles?
NTLM support has been in wget for several years by now.
On Tue, 20 Feb 2007, Barnett, Rodney wrote:
I'm referring to *proxy authentication*. I'm no expert on wget, but...
According to the documentation: For proxy authorization only the Basic
authentication scheme is currently implemented.
Oh, sorry my bad. I submitted the NTLM code and it was
On Sat, 30 Sep 2006, Anthony L. Bryan wrote:
Multithreaded downloads can increase speed quite a bit.
I don't think anyone disputes that downloading from several servers at
once will be faster on many occasions (like when each server gives you less
bandwidth for the transfer than
:
http://curl.haxx.se/mail/lib-2005-11/0008.html
--
-=- Daniel Stenberg -=- http://daniel.haxx.se -=-
ech`echo xiun|tr nu oc|sed 'sx\([sx]\)\([xoi]\)xo un\2\1 is xg'`ol
On Fri, 25 Nov 2005, Steven M. Schweda wrote:
Or, better yet, _DO_ forget to prepend the trouble-causing $CWD to those
paths.
I agree. What good would prepending do? It will most definitely add problems
such as those Steven describes.
.)
.
There is already plenty of info in the headers of each single mail to allow
them to get filtered accurately without this being needed.
For example: X-Mailing-List: wget@sunsite.dk
On Tue, 23 Aug 2005, Hrvoje Niksic wrote:
Would someone be willing to host an issue tracker for Wget?
Doesn't http://savannah.gnu.org/ or similar provide such services that are
sufficiently good?
it?
be encouraged to make wget use the
already installed cacert file.
this.
to the protocol
strictly of course.
I have a pending patch that adds it, but I haven't yet decided if it is worth
adding or not...
(as an option instead of OpenSSL) I think can fill in some info:
GnuTLS is alive and it is working. It is even documented somewhat better than
OpenSSL (which isn't saying a lot, I know).
Converting an OpenSSL-using program into using GnuTLS instead isn't very hard.
or bad
record mac
Maybe this should be reported to the OpenSSL maintainers?
If you force it to do SSLv2 it works fine. At times, old and bad SSL
implementations make it hard for OpenSSL to autodetect version.
, since openssl ciphers only had SSLv3.
Let me repeat myself:
If you force it to do SSLv2 it works fine.
wget --sslprotocol=1
multiple times.
And probably a little more that I've forgotten to mention now. ;-)
that. I will fix it ASAP (Jan 12 2005)
-O foo.html --post-file data.txt --post-data varname=worksfine
http://localhost/test.php
Is the PHP script possibly assuming that the post data is sent using formpost
multipart encoding? Can you show us what the HTML form tag looks like that
you use for a browser to do this?
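To illustrate the difference being asked about: a sketch, in Python, of a plain urlencoded POST body (what --post-data produces) versus a multipart/form-data body. The boundary string, field names and file name here are made up for illustration:

```python
from urllib.parse import urlencode

# A urlencoded body, as sent with Content-Type: application/x-www-form-urlencoded.
body = urlencode({"varname": "worksfine"})
print(body)  # varname=worksfine

# What a <form enctype="multipart/form-data"> submission looks like instead.
# PHP only populates $_FILES for this second shape.
boundary = "----illustrative-boundary"
multipart = (
    f"--{boundary}\r\n"
    'Content-Disposition: form-data; name="upload"; filename="data.txt"\r\n'
    "Content-Type: text/plain\r\n"
    "\r\n"
    "file contents here\r\n"
    f"--{boundary}--\r\n"
)
```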
header that
specifies the boundary separator string).
Another option is to use a tool that already has support for what you're
asking.
on the platform) is suitable, to
convert a string to wget_off_t.
Now, this is only a suggestion meant to kick-start some discussions and
possibly implementations.
. I guess that is stupid too.
Perhaps I'll learn all this when I grow older.
This is my last mail here on this topic.
, it is only likely to happen if you have shaky network
setup.
is better than Y, just different.
And I have contributed to this project several times and I might very well
continue to do so. I am not just the author of another tool.
. It supports large files on all platforms that do.
Having done lots of the adjustments in the curl code, I have to admit that the
work (the transition to portable large file support) wasn't _that_ hard once
we actually started working on it.
to do it.
The old version is not available anymore so posting the old URL is not gonna
help anyone.
If you want to get a grasp of what the code looks like in its original shape,
check the lib/http_ntlm.[ch] files in curl's source repository.
assignment stuff before 2003 ended).
I'll try to get time off to fix a new version of the files in the beginning of
next year.
... :-)
care about it.
wget should use CONNECT when doing HTTPS over a proxy and it does GET when
doing HTTP.
IIRC, this problem is fixed in the CVS version.
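The CONNECT-versus-GET split described above can be sketched as follows (a hypothetical helper; the HTTP/1.0 request lines and example host are assumptions for illustration):

```python
def proxy_request(scheme, host, port):
    """Sketch: the opening request a client sends toward an HTTP proxy."""
    if scheme == "https":
        # Ask the proxy to open a raw tunnel; TLS then runs through it.
        return f"CONNECT {host}:{port} HTTP/1.0\r\n\r\n"
    # Plain HTTP is relayed by the proxy itself, using a full URL.
    return f"GET http://{host}:{port}/ HTTP/1.0\r\n\r\n"

print(proxy_request("https", "example.com", 443).splitlines()[0])
```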
On Wed, 1 Dec 2004 [EMAIL PROTECTED] wrote:
Is there a way to use SSL authentication with ftp in wget?
AFAIK, wget doesn't support it.
But curl does: curl.haxx.se
that 'off_t' is 32 bit on Windows systems and thus this does not
enable large file support for those.
: http://curl.haxx.se/libcurl/competitors.html
.
that curl groks large files too.
.
for wput, so I just established one
at http://groups.yahoo.com/group/wput/, if anyone is interested.
wput is not wget so I agree discussing it on a different list is a good idea.
is right first since there
might be cases and systems around that work in ways I haven't considered. For
example, this extra test might fail if the function name is defined as a
macro.
( [],
  [ $func ();],
  AC_MSG_RESULT(yes!)
  eval ac_cv_func_$func=yes
  def=`echo HAVE_$func | tr 'a-z' 'A-Z'`
  AC_DEFINE_UNQUOTED($def, 1, [If you have $func]),
  AC_MSG_RESULT(but still no)
)
myself.
I found this on the bugtraq mailing list and since I haven't seen it discussed
here, I thought it could be informative.
-- Forwarded message --
Date
depending on platform. I can't see many benefits in using 64bit
variables on systems that don't deal with 64bit filesizes.
On Mon, 10 May 2004, Dražen Kačar wrote:
* Change most (all?) occurrences of `long' in the code to `off_t'. Or
should we go the next logical step and just use uintmax_t right
away?
Just use off_t.
... but Windows has no off_t... ;-)
in the GET request, only in the CONNECT request.
to direct it elsewhere. You
can use curl -v -i [URL] to get to see the full request curl sends and all
the headers it receives back. Then those could be compared to what wget
sends/gets.
In my eyes, this looks like the correct output from curl. Wasn't it?
I'd share a clue I've learned:
off_t is not a good type to use for this (large sizes) on Windows, since it is
32 bits even though Windows _does_ support large files.
On Thu, 11 Dec 2003, Hrvoje Niksic wrote:
IIRC passive FTP is not documented by RFC 959
It was.
couldn't really tell from this patch, but make sure that you don't
accidentally pass on the proxy authentication in the following request to the
actual remote server as well.
method string, you need to make sure that
'CONNECT' is the method used when you use Digest for this case.
But as you said, Digest is rarely used for proxy authentication.
a https URL?
Nope. curl only speaks non-SSL HTTP with the proxy. (To be precise, it ignores
the protocol part of the given proxy and connects to it non-SSL.)
this peeking? I mean, what's the
gain?
for thought here.
of the
server side.
://davenport.sourceforge.net/ntlm.html
prepare a C file and header and post them in a separate mail. They will
need a little attention, but not much. Mainly to setup pointers to user name,
password, etc.
these things.
`Connection: keep-alive' response header.
HTTP 1.1 servers don't (normally) use Connection: keep-alive. Since 1.1
assumes persistent connections by default, they only send Connection: close
if they aren't.
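That persistence rule can be sketched as a small predicate (hypothetical helper; assumes header names are already lowercased):

```python
def connection_persists(http_version, headers):
    """Sketch of the HTTP/1.x connection-persistence rule."""
    token = headers.get("connection", "").lower()
    if http_version == "HTTP/1.1":
        return token != "close"      # 1.1: persistent unless told otherwise
    return token == "keep-alive"     # 1.0: persistent only if explicitly asked
```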
a request from a
single soul to add it to curl. I'd say it is an indication that such an option
would not be widely used. (Given that curl has been used for scripting POSTs,
logins and cookie stuff for years already.)
session's business.
-1.5.tar.gz
in the cookie file (I believe
I read that it doesn't atm).
with this automatically, that I can think of, is to use an
Expect: 100-continue request header, and based on the 100 response you can
decide if the server is 1.1 or not.
Other than that, I think a command line option is the only choice.
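A sketch of that probe, with made-up host and path; only the Expect header and the interim 100 status line matter here:

```python
# A request carrying the probe header (host, path and length are made up):
probe = (
    "PUT /upload HTTP/1.1\r\n"
    "Host: example.com\r\n"
    "Expect: 100-continue\r\n"
    "Content-Length: 4\r\n"
    "\r\n"
)

def server_spoke_100(status_line):
    # Only a 1.1-aware server answers the probe with an interim 100 response;
    # a 1.0 server ignores the Expect header entirely.
    return status_line.startswith("HTTP/1.1 100")
```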
support yet anyway, but that's another story.
For example, we have to face the problems with exposing an API using such a
variable type...)
This is indeed NTLM being required.
Does anyone have any ideas on what's wrong and how to fix it (if possible)?
The problem is that wget doesn't support NTLM. The fix is to implement it.
A work-around would be to get a recent curl release, as it supports NTLM.
On Tue, 9 Sep 2003, Hrvoje Niksic wrote:
Thanks to Daniel Stenberg who has either been reading my mind or has had the
exact same needs, here is a patch that brings configure (auto-)detection for
IPv6.
Of course I read your mind, what else could it be? :-P
I'm glad it helped
if you want to.
2. Care to elaborate on why you introduced automake in wget? I have a feeling
this is not what earlier wget hackers would've wanted.
hard to do without automake...
is this enough? ;-)
You don't have to convince *me* since I'm just one of the interested guys in
the audience, asking one of the players on stage! ;-)
I'll go back to lurk mode again. Thanks for your lengthy reply anyway!
threatening as
the Wget maintainer application looks, we could currently most benefit from
a trusted soul.
Indeed. Or make that trusted souls, as I believe it would be better to have
more than one.
].
That said, I personally have nothing to do with the GNU project or with wget,
I'm just your average Joe hanging out here with the rest.
--
Daniel Stenberg - http://daniel.haxx.se - +46-705-44 31 77
ech`echo xiun|tr nu oc|sed 'sx\([sx]\)\([xoi]\)xo un\2\1 is xg'`ol
that. Be it people in the FSF or elsewhere.
unfortunately very likely break on other platforms as
well.
AFAIK.
was only mentioning the
need for a wget team bigger than one person.
And I've done this before. Multiple times.
periods when he can't
donate as much of his time as wget needs.
Just my own opinions of course.
I find it mildly annoying that I have not seen this discussed or even
mentioned in here.
Or am I just ignorant?
-- Forwarded message --
Date
similar to this, but re-uses the same connection as long as possible.
curl is however not a wget clone, so there will be features only wget can do,
and vice versa.
name or password contain @, then
replace it with %40 in the URL.
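A sketch of that escaping using Python's urllib.parse.quote (the user name and host are made-up examples):

```python
from urllib.parse import quote

user = "daniel@haxx.se"          # made-up user name containing @
escaped = quote(user, safe="")   # percent-encode reserved characters: @ -> %40
print(escaped)                   # daniel%40haxx.se

# Only the separating @ before the host remains in the final URL:
url = f"ftp://{escaped}:secret@ftp.example.com/file.txt"
```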
just suspect that, it knows. Wget issues an HTTP/1.0 request, and
then the server can't reply with a chunked response. Since it doesn't
know the size beforehand, it can only send the response as Connection:
close.
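For illustration, a minimal decoder (a sketch, not a full parser; it ignores trailers) for the chunked format that an HTTP/1.0 client cannot be assumed to understand, which is why the server falls back to Connection: close:

```python
def decode_chunked(raw: bytes) -> bytes:
    """Decode an HTTP chunked body: hex size line, CRLF, data, CRLF, ... 0."""
    out, rest = b"", raw
    while True:
        size_line, rest = rest.split(b"\r\n", 1)
        size = int(size_line.split(b";")[0], 16)  # drop any chunk extensions
        if size == 0:
            return out
        out += rest[:size]
        rest = rest[size + 2:]  # skip chunk data plus its trailing CRLF

print(decode_chunked(b"5\r\nhello\r\n6\r\n world\r\n0\r\n\r\n"))
```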
in my own backyard again. ;-)
viruses
that have been sent here.
Refusing mails with "ScanMail Message" in the subject would've stopped all
the warning mails that accompany 98% of all those virus mails.
whatever things you
want.
IANAL.
sent to the remote server; what you read was sent from the
remote server.