Re: Using chunked transfer for HTTP requests?
On Tuesday, 07.10.03, at 17:02 (Europe/Berlin), Hrvoje Niksic wrote:

>> That's probably true. But have you tried sending without
>> Content-Length and Connection: close, and closing the output side
>> of the socket before starting to read the reply from the server?
>
> That might work, but it sounds too dangerous to do by default, and
> too obscure to devote a command-line option to. Besides, HTTP/1.1
> *requires* requests with a request-body to provide Content-Length:
>
>     For compatibility with HTTP/1.0 applications, HTTP/1.1 requests
>     containing a message-body MUST include a valid Content-Length
>     header field unless the server is known to be HTTP/1.1
>     compliant.

I just checked with RFC 1945, and it explicitly says that POSTs must
carry a valid Content-Length header. That leaves the option of first
sending an OPTIONS request to the server (either the URL or *) to
check the HTTP version.

//Stefan
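[The OPTIONS probe suggested above could be sketched like this — an illustration in Python, not wget code; the helper names are made up for the example. The idea is to send `OPTIONS *` once, cache the server's HTTP version per host, and only then decide whether a chunked request is safe:]

```python
def build_options_request(host, target="*"):
    # Build the probe request; target="*" asks about the server as a
    # whole rather than about a specific resource.
    return ("OPTIONS %s HTTP/1.1\r\n"
            "Host: %s\r\n"
            "Connection: close\r\n\r\n" % (target, host)).encode("ascii")

def http_version(status_line):
    # Parse the version out of a status line such as "HTTP/1.1 200 OK".
    # A client would cache this per host before sending chunked bodies.
    proto = status_line.split(" ", 1)[0]          # e.g. "HTTP/1.1"
    major, minor = proto[len("HTTP/"):].split(".")
    return int(major), int(minor)
```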
Re: Using chunked transfer for HTTP requests?
On Tuesday, 07.10.03, at 16:36 (Europe/Berlin), Hrvoje Niksic wrote:

> What the current code does is: determine the file size, send
> Content-Length, read the file in chunks (up to the promised size),
> and send those chunks to the server. But that works only with
> regular files. It would be really nice to be able to say something
> like:
>
>     mkisofs blabla | wget http://burner/localburn.cgi --post-file /dev/stdin

That would indeed be nice. Since I'm coming from the WebDAV side of
life: does wget allow the use of PUT?

>>> My first impulse was to bemoan Wget's antiquated HTTP code which
>>> doesn't understand "chunked" transfer. But, coming to think of it,
>>> even if Wget used HTTP/1.1, I don't see how a client can send
>>> chunked requests and interoperate with HTTP/1.0 servers.
>>
>> How do browsers figure out whether they can do a chunked transfer
>> or not?
>
> I haven't checked, but I'm 99% convinced that browsers simply don't
> give a shit about non-regular files.

That's probably true. But have you tried sending without
Content-Length and Connection: close, and closing the output side of
the socket before starting to read the reply from the server?

//Stefan
Re: Using chunked transfer for HTTP requests?
Theoretically, an HTTP/1.0 server should accept an unknown
content-length if the connection is closed after the request.
Unfortunately, the response "411 Length Required" is only defined in
HTTP/1.1.

//Stefan

On Tuesday, 07.10.03, at 01:12 (Europe/Berlin), Hrvoje Niksic wrote:

> As I was writing the manual for `--post', I decided that I wasn't
> happy with this part:
>
>     Please be aware that Wget needs to know the size of the POST
>     data in advance. Therefore the argument to @code{--post-file}
>     must be a regular file; specifying a FIFO or something like
>     @file{/dev/stdin} won't work.
>
> My first impulse was to bemoan Wget's antiquated HTTP code which
> doesn't understand "chunked" transfer. But, coming to think of it,
> even if Wget used HTTP/1.1, I don't see how a client can send
> chunked requests and interoperate with HTTP/1.0 servers.
>
> The thing is, to be certain that you can use chunked transfer, you
> have to know you're dealing with an HTTP/1.1 server. But you can't
> know that until you receive a response. And you don't get a response
> until you've finished sending the request. A chicken-and-egg
> problem!
>
> Of course, once a response is received, we could remember that we're
> dealing with an HTTP/1.1 server, but that information is all but
> useless, since Wget's `--post' is typically used to POST information
> to one URL and exit.
>
> Is there a sane way to stream data to HTTP/1.0 servers that expect
> POST?
Handling of Content-Length 0
Please excuse me if this bug has already been reported: in wget 1.8.1
(OS X) and 1.8.2 (Cygwin), the handling of resources with
Content-Length 0 is wrong. wget tries to read the empty content and
hangs until the socket read timeout fires. (I set the timeout to
different values, and it exactly matches the termination of the GET.)

Of course, this is only noticeable with HTTP/1.1 servers which leave
the connection open and do not apply Transfer-Encoding: chunked for
empty response bodies.

I imagine this should be quite easy to fix...

Best Regards,
Stefan