[
https://issues.apache.org/jira/browse/HADOOP-13498?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15424134#comment-15424134
]
Genmao Yu commented on HADOOP-13498:
------------------------------------
As noted in my last comment, 1 GB is still too large for a practical test, so I
chose to test only the logic that calculates the multipart piece size, i.e. to
skip testing against the Aliyun OSS service.
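For reference, here is a minimal sketch of the kind of piece-size calculation
such a test can cover; the MultipartSizeCalculator name, the MAX_PARTS constant,
and the minimum-part-size handling are illustrative assumptions, not the code in
the attached patches:
{code:java}
/**
 * Sketch only: picks a part size that keeps the number of multipart
 * pieces at or below the service limit. All names are hypothetical.
 */
public final class MultipartSizeCalculator {

  /** Aliyun OSS caps a multipart upload at 10000 parts. */
  public static final int MAX_PARTS = 10000;

  private MultipartSizeCalculator() {
  }

  /**
   * Returns a part size large enough that uploading contentLength bytes
   * needs at most MAX_PARTS parts, while never dropping below the
   * configured minimum part size.
   */
  public static long calculatePartSize(long contentLength, long minPartSize) {
    // Round up so that partSize * MAX_PARTS >= contentLength.
    long partSize = (contentLength + MAX_PARTS - 1) / MAX_PARTS;
    return Math.max(partSize, minPartSize);
  }
}
{code}
With this shape, the part count stays bounded for any object size, so no upload
ever has to be rejected just for being large.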
> The number of multi-part upload parts should not be bigger than 10000
> ----------------------------------------------------------------------
>
> Key: HADOOP-13498
> URL: https://issues.apache.org/jira/browse/HADOOP-13498
> Project: Hadoop Common
> Issue Type: Sub-task
> Components: fs
> Affects Versions: HADOOP-12756
> Reporter: Genmao Yu
> Assignee: Genmao Yu
> Fix For: HADOOP-12756
>
> Attachments: HADOOP-13498-HADOOP-12756.001.patch,
> HADOOP-13498-HADOOP-12756.002.patch
>
>
> We should not only throw an exception when the 10000-part limit of a
> multi-part upload is exceeded, but should also guarantee that an object of
> any size can be uploaded.
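A unit test over that calculation can then run entirely offline, matching the
approach described in the comment above. A sketch, assuming JUnit 4 and the
hypothetical helper from the earlier snippet:
{code:java}
import org.junit.Test;
import static org.junit.Assert.assertTrue;

/** Sketch only: verifies the part-size logic without touching OSS. */
public class TestMultipartSizeCalculator {

  @Test
  public void testPartCountNeverExceedsLimit() {
    long minPartSize = 5L * 1024 * 1024; // assume a 5 MB minimum part size
    long[] lengths = {
        1L,                               // tiny object
        minPartSize,                      // exactly one minimum-size part
        100L * 1024 * 1024 * 1024,        // 100 GB
        5L * 1024 * 1024 * 1024 * 1024    // 5 TB
    };
    for (long len : lengths) {
      long partSize =
          MultipartSizeCalculator.calculatePartSize(len, minPartSize);
      long parts = (len + partSize - 1) / partSize;
      assertTrue("part count exceeds limit for length " + len,
          parts <= MultipartSizeCalculator.MAX_PARTS);
      assertTrue("part size below minimum for length " + len,
          partSize >= minPartSize);
    }
  }
}
{code}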