As I was writing the manual for `--post', I decided that I wasn't
happy with this part:

    Please be aware that Wget needs to know the size of the POST data
    in advance.  Therefore the argument to @code{--post-file} must be
    a regular file; specifying a FIFO or something like
    @file{/dev/stdin} won't work.
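To illustrate why the size must be known up front: without chunked encoding, an HTTP request carries its body length in a `Content-Length' header that is sent before any of the body. A minimal sketch (hypothetical URL and form data, not Wget's actual code):

```python
# The Content-Length header precedes the body, so the full body size
# must be known before the first byte of the request is written --
# hence the requirement for a regular (stat-able) file.
body = b"user=foo&pass=bar"
request = (
    b"POST /submit HTTP/1.0\r\n"
    b"Host: example.com\r\n"
    b"Content-Type: application/x-www-form-urlencoded\r\n"
    b"Content-Length: %d\r\n"
    b"\r\n" % len(body)
) + body
```

With a FIFO or `/dev/stdin', the length simply isn't available at the time the header has to go out.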

My first impulse was to bemoan Wget's antiquated HTTP code, which
doesn't understand "chunked" transfer.  But, come to think of it,
even if Wget used HTTP/1.1, I don't see how a client can send chunked
requests and still interoperate with HTTP/1.0 servers.

The thing is, to be certain that you can use chunked transfer, you
have to know you're dealing with an HTTP/1.1 server.  But you can't
know that until you receive a response.  And you don't get a response
until you've finished sending the request.  A chicken-and-egg problem!

Of course, once a response is received, we could remember that we're
dealing with an HTTP/1.1 server, but that information is all but
useless, since Wget's `--post' is typically used to POST information
to one URL and exit.

Is there a sane way to stream data to HTTP/1.0 servers that expect
POST?
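The only workaround I can think of is to spool the stream to a temporary file first, learn its size, and then send it with an ordinary `Content-Length'.  That loses true streaming, of course, but it would at least let a FIFO work.  A minimal sketch:

```python
import shutil
import tempfile

def spool_to_tempfile(stream):
    # Copy a non-seekable stream (pipe, FIFO, /dev/stdin) into a
    # temporary file so its total size can be determined before the
    # request headers are sent.
    tmp = tempfile.TemporaryFile()
    shutil.copyfileobj(stream, tmp)
    size = tmp.tell()
    tmp.seek(0)
    return tmp, size
```

The cost is a full extra copy of the data on disk, which may be unacceptable for large uploads.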
