[
https://issues.apache.org/jira/browse/HADOOP-12269?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14649784#comment-14649784
]
Aaron Fabbri commented on HADOOP-12269:
---------------------------------------
[~Thomas Demoor] [~steve_l] I applied Patch 2 to trunk and did some testing..
Looks good on my end.
I did the following via s3a:// URIs against my Amazon S3 bucket:
- hadoop fs -put and -get of a file, comparing MD5 checksums to verify integrity (first sketch below)
- Tested multipart upload on and off, e.g.
  hadoop fs -Dfs.s3a.multipart.threshold=5242880 -Dfs.s3a.multipart.size=5242880 \
    -put hadoop-client2.tgz s3a://$ACCESS:$SECRET@fabbri-dev/test-folder
  then `lsof -i 4tcp | grep java` to confirm multiple (or, with multipart off, a
  single) TCP connection to S3 storage (second sketch below)
- Downloaded the multipart-uploaded tar file (hadoop fs -get) and checked its MD5
  digest against the original local file to verify round-trip integrity
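Roughly what I ran for the first bullet, with the bucket and file names from the
commands above (use md5 instead of md5sum on OS X; roundtrip.tgz is just an
arbitrary local name for the downloaded copy):

    # Round-trip integrity check: upload, download, compare digests.
    md5sum hadoop-client2.tgz
    hadoop fs -put hadoop-client2.tgz s3a://$ACCESS:$SECRET@fabbri-dev/test-folder
    hadoop fs -get s3a://$ACCESS:$SECRET@fabbri-dev/test-folder/hadoop-client2.tgz ./roundtrip.tgz
    md5sum ./roundtrip.tgz    # should match the first digest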
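And for the multipart "off" case, something like the following: raise the
threshold above the file size so the upload should go as a single PUT (the
2147483647 here is just any value safely larger than the tarball):

    # Multipart off: threshold larger than the file, so a single PUT is used.
    hadoop fs -Dfs.s3a.multipart.threshold=2147483647 \
        -put hadoop-client2.tgz s3a://$ACCESS:$SECRET@fabbri-dev/test-folder
    # lsof should now show one TCP connection to S3 rather than several:
    lsof -i 4tcp | grep java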
Hope that helps. I'm eager to get this upstream because of the critical overflow
bug I described in HADOOP-12267.
> Update aws-sdk dependency to 1.10.6
> -----------------------------------
>
> Key: HADOOP-12269
> URL: https://issues.apache.org/jira/browse/HADOOP-12269
> Project: Hadoop Common
> Issue Type: Sub-task
> Components: fs/s3
> Reporter: Thomas Demoor
> Assignee: Thomas Demoor
> Attachments: HADOOP-12269-001.patch, HADOOP-12269-002.patch
>
>
> This was originally part of HADOOP-11684; pulled out into this separate
> subtask as requested by [[email protected]]