Re: Using chunked transfer for HTTP requests?

2003-10-07 Thread Tony Lewis
Hrvoje Niksic wrote:

> That would work for short streaming, but would be pretty bad in the
> mkisofs example.  One would expect Wget to be able to stream the data
> to the server, and that's just not possible if the size needs to be
> known in advance, which HTTP/1.0 requires.

One might expect it, but if it's not possible using the HTTP protocol, what
can you do? :-)



Re: Using chunked transfer for HTTP requests?

2003-10-07 Thread Hrvoje Niksic
"Tony Lewis" <[EMAIL PROTECTED]> writes:

> Hrvoje Niksic wrote:
>
>> I don't understand what you're proposing.  Reading the whole file in
>> memory is too memory-intensive for large files (one could presumably
>> POST really huge files, CD images or whatever).
>
> I was proposing that you read the file to determine the length, but
> that was on the assumption that you could read the input twice,
> which won't work with the example you proposed.

In fact, it won't work with anything except regular files and links to
them.

> Can you determine if --post-file is a regular file?

Yes.

> If so, I still think you should just read (or otherwise examine) the
> file to determine the length.

That's how --post-file works now.  The problem is that it doesn't work
for non-regular files.  My first message explains it, or at least
tries to.

> For other types of input, perhaps you want to write the input to a
> temporary file.

That would work for short streaming, but would be pretty bad in the
mkisofs example.  One would expect Wget to be able to stream the data
to the server, and that's just not possible if the size needs to be
known in advance, which HTTP/1.0 requires.


Re: Using chunked transfer for HTTP requests?

2003-10-07 Thread Tony Lewis
Hrvoje Niksic wrote:

> I don't understand what you're proposing.  Reading the whole file in
> memory is too memory-intensive for large files (one could presumably
> POST really huge files, CD images or whatever).

I was proposing that you read the file to determine the length, but that was
on the assumption that you could read the input twice, which won't work with
the example you proposed.

> It would be really nice to be able to say something like:
>
> mkisofs blabla | wget http://burner/localburn.cgi --post-file
> /dev/stdin

Stefan Eissing wrote:

> I just checked with RFC 1945 and it explicitly says that POSTs must
> carry a valid Content-Length header.

In that case, Hrvoje will need to get creative. :-)

Can you determine if --post-file is a regular file? If so, I still think you
should just read (or otherwise examine) the file to determine the length.

For other types of input, perhaps you want to write the input to a temporary
file.

Tony
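
For illustration, a rough C sketch of that temporary-file approach (not
Wget code; the 8 KB buffer and the error handling are simplified): spool
the non-seekable input into a tmpfile(), count the bytes, and the total
is the Content-Length to announce before replaying the spool file.

#include <stdio.h>

/* Spool a non-seekable input (e.g. a pipe) into an anonymous temporary
   file so its total size is known before the POST is started. */
static FILE *
spool_input (FILE *in, long *size)
{
  FILE *tmp = tmpfile ();       /* removed automatically when closed */
  char buf[8192];
  size_t n;
  long total = 0;

  if (!tmp)
    return NULL;
  while ((n = fread (buf, 1, sizeof buf, in)) > 0)
    {
      if (fwrite (buf, 1, n, tmp) != n)
        {
          fclose (tmp);
          return NULL;
        }
      total += n;
    }
  rewind (tmp);                 /* ready to be read again and sent */
  *size = total;
  return tmp;
}

The obvious cost is that the entire stream hits the disk before the
first byte goes to the server, which is what makes this unattractive
for the mkisofs case quoted above.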



Re: Using chunked transfer for HTTP requests?

2003-10-07 Thread Stefan Eissing
On Tuesday, 07.10.03, at 17:02 (Europe/Berlin), Hrvoje Niksic wrote:

>> That's probably true. But have you tried sending without
>> Content-Length and Connection: close and closing the output side of
>> the socket before starting to read the reply from the server?
>
> That might work, but it sounds too dangerous to do by default, and too
> obscure to devote a command-line option to.  Besides, HTTP/1.1
> *requires* requests with a request-body to provide Content-Length:
>
>    For compatibility with HTTP/1.0 applications, HTTP/1.1 requests
>    containing a message-body MUST include a valid Content-Length
>    header field unless the server is known to be HTTP/1.1 compliant.

I just checked with RFC 1945 and it explicitly says that POSTs must
carry a valid Content-Length header.

That leaves the option of first sending an OPTIONS request to the
server (either the URL or *) to check the HTTP version.
//Stefan
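
A sketch of the OPTIONS probe Stefan mentions, in C against a socket
descriptor that is assumed to be already connected (the "burner" host is
borrowed from the example elsewhere in the thread; this is not Wget
code): send OPTIONS and look at the HTTP version in the status line.

#include <string.h>
#include <unistd.h>

/* Probe the server's HTTP version with "OPTIONS *".  `fd' is assumed to
   be a connected socket.  Returns 1 if the status line claims HTTP/1.1. */
static int
server_is_http11 (int fd)
{
  static const char req[] =
    "OPTIONS * HTTP/1.1\r\n"
    "Host: burner\r\n"
    "Connection: close\r\n"
    "\r\n";
  char line[256];
  ssize_t n;

  if (write (fd, req, sizeof req - 1) != (ssize_t) (sizeof req - 1))
    return 0;
  n = read (fd, line, sizeof line - 1);  /* usually enough for the status line */
  if (n <= 0)
    return 0;
  line[n] = '\0';
  return strncmp (line, "HTTP/1.1", 8) == 0;
}

The price is an extra round trip (and, with Connection: close, an extra
connection) before the POST itself can be sent.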




Re: Using chunked transfer for HTTP requests?

2003-10-07 Thread Hrvoje Niksic
Stefan Eissing <[EMAIL PROTECTED]> writes:

> On Tuesday, 07.10.03, at 16:36 (Europe/Berlin), Hrvoje Niksic wrote:
>> What the current code does is: determine the file size, send
>> Content-Length, read the file in chunks (up to the promised size) and
>> send those chunks to the server.  But that works only with regular
>> files.  It would be really nice to be able to say something like:
>>
>> mkisofs blabla | wget http://burner/localburn.cgi --post-file
>> /dev/stdin
>
> That would indeed be nice. Since I'm coming from the WebDAV side
> of life: does wget allow the use of PUT?

No.

>> I haven't checked, but I'm 99% convinced that browsers simply don't
>> give a shit about non-regular files.
>
> That's probably true. But have you tried sending without
> Content-Length and Connection: close and closing the output side of
> the socket before starting to read the reply from the server?

That might work, but it sounds too dangerous to do by default, and too
obscure to devote a command-line option to.  Besides, HTTP/1.1
*requires* requests with a request-body to provide Content-Length:

   For compatibility with HTTP/1.0 applications, HTTP/1.1 requests
   containing a message-body MUST include a valid Content-Length
   header field unless the server is known to be HTTP/1.1 compliant.


Re: Using chunked transfer for HTTP requests?

2003-10-07 Thread Stefan Eissing
On Tuesday, 07.10.03, at 16:36 (Europe/Berlin), Hrvoje Niksic wrote:

> What the current code does is: determine the file size, send
> Content-Length, read the file in chunks (up to the promised size) and
> send those chunks to the server.  But that works only with regular
> files.  It would be really nice to be able to say something like:
>
> mkisofs blabla | wget http://burner/localburn.cgi --post-file /dev/stdin

That would indeed be nice. Since I'm coming from the WebDAV side
of life: does wget allow the use of PUT?

>>> My first impulse was to bemoan Wget's antiquated HTTP code which
>>> doesn't understand "chunked" transfer.  But, coming to think of it,
>>> even if Wget used HTTP/1.1, I don't see how a client can send
>>> chunked requests and interoperate with HTTP/1.0 servers.
>>
>> How do browsers figure out whether they can do a chunked transfer or
>> not?
>
> I haven't checked, but I'm 99% convinced that browsers simply don't
> give a shit about non-regular files.

That's probably true. But have you tried sending without Content-Length
and Connection: close and closing the output side of the socket before
starting to read the reply from the server?
//Stefan
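
For reference, Stefan's half-close idea sketched in C (not Wget code;
the socket is assumed to be already connected, and the URL and host come
from the mkisofs example above): send the header without Content-Length,
stream the body, shut down the write side so the server sees end-of-body,
then read the reply.  Whether an HTTP/1.0 server actually accepts this is
exactly the open question.

#include <stdio.h>
#include <sys/socket.h>
#include <unistd.h>

/* Stream `body_fd' (e.g. a pipe) as a POST body with no Content-Length,
   signalling end-of-body by half-closing the socket. */
static int
post_stream_halfclose (int sock, int body_fd)
{
  static const char hdr[] =
    "POST /localburn.cgi HTTP/1.0\r\n"
    "Host: burner\r\n"
    "Content-Type: application/octet-stream\r\n"
    "Connection: close\r\n"
    "\r\n";
  char buf[8192];
  ssize_t n;

  if (write (sock, hdr, sizeof hdr - 1) < 0)
    return -1;
  while ((n = read (body_fd, buf, sizeof buf)) > 0)
    if (write (sock, buf, n) != n)
      return -1;
  shutdown (sock, SHUT_WR);          /* server sees EOF on the request body */
  while ((n = read (sock, buf, sizeof buf)) > 0)
    fwrite (buf, 1, n, stdout);      /* dump the server's reply */
  return 0;
}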




Re: Using chunked transfer for HTTP requests?

2003-10-07 Thread Hrvoje Niksic
"Tony Lewis" <[EMAIL PROTECTED]> writes:

> Hrvoje Niksic wrote:
>
>> Please be aware that Wget needs to know the size of the POST
>> data in advance.  Therefore the argument to @code{--post-file}
>> must be a regular file; specifying a FIFO or something like
>> @file{/dev/stdin} won't work.
>
> There's nothing that says you have to read the data after you've
> started sending the POST. Why not just read the --post-file before
> constructing the request so that you know how big it is?

I don't understand what you're proposing.  Reading the whole file in
memory is too memory-intensive for large files (one could presumably
POST really huge files, CD images or whatever).

What the current code does is: determine the file size, send
Content-Length, read the file in chunks (up to the promised size) and
send those chunks to the server.  But that works only with regular
files.  It would be really nice to be able to say something like:

mkisofs blabla | wget http://burner/localburn.cgi --post-file /dev/stdin
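
For context, the size check that restricts --post-file to regular files
boils down to something like the following hypothetical helper (not
Wget's actual code): stat the argument, insist on S_ISREG, and use
st_size as the Content-Length.  A FIFO such as /dev/stdin fails the test.

#include <sys/stat.h>

/* Return the size to announce in Content-Length, or -1 if `path' is not
   a regular file (FIFO, device, ...), in which case the size cannot be
   known without consuming the stream. */
static long
post_file_size (const char *path)
{
  struct stat st;

  if (stat (path, &st) != 0 || !S_ISREG (st.st_mode))
    return -1;
  return (long) st.st_size;
}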

>> My first impulse was to bemoan Wget's antiquated HTTP code which
>> doesn't understand "chunked" transfer.  But, coming to think of it,
>> even if Wget used HTTP/1.1, I don't see how a client can send
>> chunked requests and interoperate with HTTP/1.0 servers.
>
> How do browsers figure out whether they can do a chunked transfer or
> not?

I haven't checked, but I'm 99% convinced that browsers simply don't
give a shit about non-regular files.


Re: Using chunked transfer for HTTP requests?

2003-10-07 Thread Tony Lewis
Hrvoje Niksic wrote:

> Please be aware that Wget needs to know the size of the POST data
> in advance.  Therefore the argument to @code{--post-file} must be
> a regular file; specifying a FIFO or something like
> @file{/dev/stdin} won't work.

There's nothing that says you have to read the data after you've started
sending the POST. Why not just read the --post-file before constructing the
request so that you know how big it is?

> My first impulse was to bemoan Wget's antiquated HTTP code which
> doesn't understand "chunked" transfer.  But, coming to think of it,
> even if Wget used HTTP/1.1, I don't see how a client can send chunked
> requests and interoperate with HTTP/1.0 servers.

How do browsers figure out whether they can do a chunked transfer or not?

Tony



Re: Using chunked transfer for HTTP requests?

2003-10-07 Thread Daniel Stenberg
On Tue, 7 Oct 2003, Hrvoje Niksic wrote:

> My first impulse was to bemoan Wget's antiquated HTTP code which doesn't
> understand "chunked" transfer.  But, coming to think of it, even if Wget
> used HTTP/1.1, I don't see how a client can send chunked requests and
> interoperate with HTTP/1.0 servers.
>
> The thing is, to be certain that you can use chunked transfer, you
> have to know you're dealing with an HTTP/1.1 server.  But you can't
> know that until you receive a response.  And you don't get a response
> until you've finished sending the request.  A chicken-and-egg problem!

The only way I can think of to deal with this automatically is to use an
"Expect: 100-continue" request header; based on whether a 100 response
comes back, you can decide if the server is 1.1 or not.

Other than that, I think a command line option is the only choice.

-- 
 -=- Daniel Stenberg -=- http://daniel.haxx.se -=-
  ech`echo xiun|tr nu oc|sed 'sx\([sx]\)\([xoi]\)xo un\2\1 is xg'`ol
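
A sketch of the handshake Daniel describes, in C against a hypothetically
already-connected socket (not Wget code; the URL and host come from the
example elsewhere in the thread): the headers advertise
Expect: 100-continue, and the body is only sent if an interim
"100 Continue" arrives within a short grace period; an HTTP/1.0 server
will never send one.

#include <string.h>
#include <sys/select.h>
#include <sys/time.h>
#include <unistd.h>

/* Send headers with "Expect: 100-continue" and wait briefly for the
   server's interim response before committing to a chunked body.
   Returns 1 if the server said 100 Continue, 0 if nothing arrived
   (assume HTTP/1.0 and fall back), -1 on error. */
static int
expect_100_continue (int sock)
{
  static const char hdr[] =
    "POST /localburn.cgi HTTP/1.1\r\n"
    "Host: burner\r\n"
    "Transfer-Encoding: chunked\r\n"
    "Expect: 100-continue\r\n"
    "\r\n";
  char line[256];
  fd_set rfds;
  struct timeval tv = { 3, 0 };   /* arbitrary 3-second grace period */
  ssize_t n;

  if (write (sock, hdr, sizeof hdr - 1) < 0)
    return -1;
  FD_ZERO (&rfds);
  FD_SET (sock, &rfds);
  if (select (sock + 1, &rfds, NULL, NULL, &tv) <= 0)
    return 0;
  n = read (sock, line, sizeof line - 1);
  if (n <= 0)
    return -1;
  line[n] = '\0';
  return strncmp (line, "HTTP/1.1 100", 12) == 0;
}

If nothing comes back within the grace period, the client still has to
fall back to a request an HTTP/1.0 server understands.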


Re: Using chunked transfer for HTTP requests?

2003-10-07 Thread Stefan Eissing
Theoretically, an HTTP/1.0 server should accept a request with an unknown
content-length if the connection is closed after the request.
Unfortunately, the 411 (Length Required) response is only defined in
HTTP/1.1.

//Stefan

On Tuesday, 07.10.03, at 01:12 (Europe/Berlin), Hrvoje Niksic wrote:

> As I was writing the manual for `--post', I decided that I wasn't
> happy with this part:
>
> Please be aware that Wget needs to know the size of the POST data
> in advance.  Therefore the argument to @code{--post-file} must be
> a regular file; specifying a FIFO or something like
> @file{/dev/stdin} won't work.
>
> My first impulse was to bemoan Wget's antiquated HTTP code which
> doesn't understand "chunked" transfer.  But, coming to think of it,
> even if Wget used HTTP/1.1, I don't see how a client can send chunked
> requests and interoperate with HTTP/1.0 servers.
>
> The thing is, to be certain that you can use chunked transfer, you
> have to know you're dealing with an HTTP/1.1 server.  But you can't
> know that until you receive a response.  And you don't get a response
> until you've finished sending the request.  A chicken-and-egg problem!
>
> Of course, once a response is received, we could remember that we're
> dealing with an HTTP/1.1 server, but that information is all but
> useless, since Wget's `--post' is typically used to POST information
> to one URL and exit.
>
> Is there a sane way to stream data to HTTP/1.0 servers that expect
> POST?




Using chunked transfer for HTTP requests?

2003-10-06 Thread Hrvoje Niksic
As I was writing the manual for `--post', I decided that I wasn't
happy with this part:

Please be aware that Wget needs to know the size of the POST data
in advance.  Therefore the argument to @code{--post-file} must be
a regular file; specifying a FIFO or something like
@file{/dev/stdin} won't work.

My first impulse was to bemoan Wget's antiquated HTTP code which
doesn't understand "chunked" transfer.  But, coming to think of it,
even if Wget used HTTP/1.1, I don't see how a client can send chunked
requests and interoperate with HTTP/1.0 servers.

The thing is, to be certain that you can use chunked transfer, you
have to know you're dealing with an HTTP/1.1 server.  But you can't
know that until you receive a response.  And you don't get a response
until you've finished sending the request.  A chicken-and-egg problem!

Of course, once a response is received, we could remember that we're
dealing with an HTTP/1.1 server, but that information is all but
useless, since Wget's `--post' is typically used to POST information
to one URL and exit.

Is there a sane way to stream data to HTTP/1.0 servers that expect
POST?
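
For readers unfamiliar with the "chunked" coding referred to above: each
chunk is the chunk size in hex, CRLF, the data, CRLF, and a zero-length
chunk terminates the body, which is what would let a client stream a pipe
without knowing the total size in advance.  A minimal sketch (not Wget
code), writing to a socket that is assumed to be already connected:

#include <stdio.h>
#include <unistd.h>

/* Write one chunk of an HTTP/1.1 chunked request body:
   <size in hex> CRLF <data> CRLF.  Calling it with len == 0 writes the
   terminating "0\r\n\r\n" (assuming no trailers). */
static int
write_chunk (int sock, const char *data, size_t len)
{
  char head[32];
  int hlen = snprintf (head, sizeof head, "%zx\r\n", len);

  if (write (sock, head, hlen) != hlen)
    return -1;
  if (len > 0 && write (sock, data, len) != (ssize_t) len)
    return -1;
  if (write (sock, "\r\n", 2) != 2)
    return -1;
  return 0;
}

The catch, as the message above says, is that only an HTTP/1.1 server is
required to understand this; an HTTP/1.0 server would treat the hex sizes
as part of the body.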