On Oct 15, 2008, at 16:29, Patrick Antivackis wrote:
Hello,
Before I file a JIRA bug and propose a patch to fix it, I would like to
validate this issue with you.
Issue: Cannot upload a file larger than 64 MB using the PUT method.
How to reproduce:
Create a database named test.
Create a 64 MB file:
dd count=131072 if=/dev/zero of=dummy64M
Upload this file using curl:
curl -H "Content-Type: application/octet-stream" -T dummy64M localhost:5984/test/dummy64M/ -v
It works:
[EMAIL PROTECTED] ~]$ curl -H "Content-Type: application/octet-stream" -T dummy64M localhost:5984/try/dummy64M/ -v
* About to connect() to localhost port 5984 (#0)
* Trying 127.0.0.1... connected
* Connected to localhost (127.0.0.1) port 5984 (#0)
PUT /try/dummy64M/dummy64M HTTP/1.1
User-Agent: curl/7.16.4 (i686-pc-linux-gnu) libcurl/7.16.4 OpenSSL/0.9.8e zlib/1.2.3 libidn/1.0
Host: localhost:5984
Accept: */*
Content-Type: application/octet-stream
Content-Length: 67108864
Expect: 100-continue
< HTTP/1.1 100 Continue
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0 64.0M    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
} [data not shown]
100 64.0M    0     0  100 64.0M      0  16.3M  0:00:03  0:00:03 --:--:-- 16.3M
< HTTP/1.1 201 Created
< Server: CouchDB/0.9.0a-incubating (Erlang OTP/R12B)
< Date: Wed, 15 Oct 2008 14:12:00 GMT
< Content-Type: text/plain;charset=utf-8
< Content-Length: 46
< Cache-Control: must-revalidate
<
{ [data not shown]
100 64.0M    0    46  100 64.0M      9  13.4M  0:00:04  0:00:04 --:--:-- 13.0M
* Connection #0 to host localhost left intact
* Closing connection #0
{"ok":true,"id":"dummy64M","rev":"1158604681"}
Now create a 128 MB file:
dd count=262144 if=/dev/zero of=dummy128M
Upload this file using curl:
curl -H "Content-Type: application/octet-stream" -T dummy128M localhost:5984/test/dummy128M/ -v
It breaks:
[EMAIL PROTECTED] ~]$ curl -H "Content-Type: application/application/octet-stream" -T dummy128M localhost:5984/try/dummy128M/ -v
* About to connect() to localhost port 5984 (#0)
* Trying 127.0.0.1... connected
* Connected to localhost (127.0.0.1) port 5984 (#0)
PUT /try/dummy128M/dummy128M HTTP/1.1
User-Agent: curl/7.16.4 (i686-pc-linux-gnu) libcurl/7.16.4 OpenSSL/0.9.8e zlib/1.2.3 libidn/1.0
Host: localhost:5984
Accept: */*
Content-Type: application/application/octet-stream
Content-Length: 134217728
Expect: 100-continue
< HTTP/1.1 100 Continue
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0  128M    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
} [data not shown]
* select/poll returned error
  0  128M    0     0    0 16384      0  2177k  0:01:00 --:--:--  0:01:00 5333k
* Closing connection #0
curl: (55) select/poll returned error
There is no error in the CouchDB logs.
After debugging the code, I found that the problem occurs in mochiweb_request, line 138, in the gen_tcp:recv call:
recv(Length, Timeout) ->
    case gen_tcp:recv(Socket, Length, Timeout) of
        {ok, Data} ->
            put(?SAVE_RECV, true),
            Data;
        %% any error, including {error, enomem}, ends up in this clause
        _ ->
            exit(normal)
    end.
The answer caught by the catch-all clause (_) is in fact {error, enomem}.
There is no error reported by CouchDB because the exit reason is "normal"!
From what I read (see the answers from
http://www.google.fr/search?q=16M+gen_tcp%3Arecv), if Length is too big
(some say 16 MB; for me it is 64 MB), gen_tcp:recv returns an enomem error.
I made a patch for this that calls gen_tcp:recv multiple times with at most a
16 MB Length and then concatenates the binary answers (much like the
read_chunk function in mochiweb_multipart); a rough sketch of the idea follows.
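The core of the idea looks roughly like this (a sketch only, not the actual patch; the recv_chunked name, the ?MAX_RECV macro, and the error handling are illustrative, and the socket is passed in explicitly here):

%% Sketch only: read Length bytes in slices of at most 16 MB so that
%% gen_tcp:recv is never asked for more memory than it can allocate at once.
-define(MAX_RECV, 16777216).   % 16 MB per gen_tcp:recv call

recv_chunked(Socket, Length, Timeout) ->
    recv_chunked(Socket, Length, Timeout, []).

recv_chunked(_Socket, 0, _Timeout, Acc) ->
    %% everything read; stitch the pieces back together
    iolist_to_binary(lists:reverse(Acc));
recv_chunked(Socket, Length, Timeout, Acc) ->
    ChunkSize = if Length > ?MAX_RECV -> ?MAX_RECV; true -> Length end,
    case gen_tcp:recv(Socket, ChunkSize, Timeout) of
        {ok, Data} ->
            recv_chunked(Socket, Length - byte_size(Data), Timeout, [Data | Acc]);
        {error, Reason} ->
            %% unlike the original clause, keep the reason so it can be logged
            exit({recv_error, Reason})
    end.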
Anyway, my questions are:
Is it really a bug, or is there a way to force curl to use chunked transfer
with a PUT method (I have not found any, but...)?
Set the Transfer-Encoding header to "chunked" to force curl to chunk the request.
I successfully uploaded several gigabytes of data using this.
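For example, something along these lines should make curl send the body chunked (same placeholder file and database names as in the commands above):

curl -H "Content-Type: application/octet-stream" -H "Transfer-Encoding: chunked" -T dummy128M localhost:5984/test/dummy128M/ -v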
Not sure about the fix.
Cheers
Jan
--
Should the correction go here, or should it be submitted to mochiweb, even
though they seem to use PUT only for small file uploads?
Thank you for your answers.