[ 
https://issues.apache.org/jira/browse/KNOX-2139?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17044588#comment-17044588
 ] 

Kevin Risden commented on KNOX-2139:
------------------------------------

So taking a quick look through the code, contentLength comes straight from the 
HttpServletRequest.

{code:java}
int contentLength = request.getContentLength();
{code}

So it is being set to 0 somewhere higher up the stack. I looked through the 
code for other implementations of "getContentLength" as well as 
"setContentLength", and I don't see offhand where Knox sets the content length 
explicitly to 0. I'll have to try to reproduce this and hook up a 
debugger to see what is clobbering the contentLength.
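One possible explanation worth checking (an assumption on my part, not yet verified against the Knox code): the reported failing size, 8589934592 bytes, is exactly 2^33, and {{ServletRequest.getContentLength()}} returns an {{int}}. A narrowing conversion from a {{long}} of 2^33 to a 32-bit {{int}} drops the high bits and yields exactly 0, which would match the symptom of only this one size failing. A minimal sketch of the truncation:

{code:java}
// Sketch only: demonstrates Java's narrowing conversion, not actual Knox code.
public class ContentLengthTruncation {
    public static void main(String[] args) {
        long eightGiB = 8589934592L;    // 2^33 bytes, the size reported in this issue
        int truncated = (int) eightGiB; // narrowing drops the high 32 bits
        System.out.println(truncated);  // prints 0
    }
}
{code}

If that is what is happening, the Servlet 3.1 API's {{getContentLengthLong()}} would be the overflow-safe alternative to look at.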

> Can not handle 8GB file when using webhdfs
> ------------------------------------------
>
>                 Key: KNOX-2139
>                 URL: https://issues.apache.org/jira/browse/KNOX-2139
>             Project: Apache Knox
>          Issue Type: Bug
>          Components: Server
>    Affects Versions: 1.1.0, 1.2.0
>            Reporter: Sean Chow
>            Priority: Critical
>
> I have used Knox with webhdfs for a long time, and I upgraded my Knox from 0.8 
> to 1.2 in recent days. It's really strange that Knox can't handle a file of size 
> *8589934592 bytes* when I upload my split file to HDFS.
> It's easy to reproduce, and both Knox 1.1 and 1.2 have this issue. But it works 
> fine in Knox 0.8.
> Any error log found in gateway.log? No, all logs are clean. From the client 
> side (curl), I saw the URL redirected correctly, and the upload failed with 
> {{curl: (55) Send failure: Connection reset by peer}} or {{curl: (55) Send 
> failure: Broken pipe}}.
> I'm sure my network is OK. Files of any other size (smaller or larger) can 
> be uploaded successfully.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
