[
https://issues.apache.org/jira/browse/KNOX-2139?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17044609#comment-17044609
]
Kevin Risden commented on KNOX-2139:
------------------------------------
Without a local reproduction yet, my guess is that this involves Transfer-Encoding:
chunked, which sets Content-Length: 0 on purpose. Not sure what triggers
this, but either way we need to reproduce it and figure out exactly what is
happening.
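One arithmetic observation that may be relevant (an assumption on my part, not a confirmed cause): the failing size, 8589934592 bytes, is exactly 2^33, so any code path that narrows the length to a 32-bit integer would report 0 — which would look just like a Content-Length: 0. A minimal sketch, not Knox code:

```java
// Illustration only (not Knox source): narrowing 2^33 to a 32-bit int.
// 8589934592 = 2^33, and 2^33 mod 2^32 == 0, so an (int) cast yields 0 --
// consistent with a Content-Length: 0 symptom, though unverified here.
public class LengthTruncation {
    public static void main(String[] args) {
        long fileSize = 8589934592L;      // 8 GiB, the exact failing size
        int truncated = (int) fileSize;   // keeps only the low 32 bits
        System.out.println(truncated);    // prints 0
    }
}
```

If something like this is in play, files slightly smaller or larger than 2^32-multiples would work, matching the reporter's observation that only this exact size fails.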
> Can not handle 8GB file when using webhdfs
> ------------------------------------------
>
> Key: KNOX-2139
> URL: https://issues.apache.org/jira/browse/KNOX-2139
> Project: Apache Knox
> Issue Type: Bug
> Components: Server
> Affects Versions: 1.1.0, 1.2.0
> Reporter: Sean Chow
> Priority: Critical
>
> I have used Knox with webhdfs for a long time, and I upgraded my Knox version
> from 0.8 to 1.2 recently. It's really strange that Knox can't handle a file of
> exactly *8589934592 bytes* when I upload my split files to HDFS.
> It's easy to reproduce, and both Knox 1.1 and 1.2 have this issue, but it
> works fine in Knox 0.8.
> Any error log found in gateway.log? No, all logs are clean. From the client
> side (curl), I saw that the URL is redirected correctly and the upload failed
> with {{curl: (55) Send failure: Connection reset by peer}} or {{curl: (55)
> Send failure: Broken pipe}}.
> I'm sure my network is OK. Files of any other size (smaller or larger) can
> be uploaded successfully.
--
This message was sent by Atlassian Jira
(v8.3.4#803005)