[jira] [Work logged] (HDDS-1942) Support copy during S3 multipart upload part creation

2019-08-28 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1942?focusedWorklogId=303199&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-303199
 ]

ASF GitHub Bot logged work on HDDS-1942:


Author: ASF GitHub Bot
Created on: 28/Aug/19 19:53
Start Date: 28/Aug/19 19:53
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1279: HDDS-1942. 
Support copy during S3 multipart upload part creation
URL: https://github.com/apache/hadoop/pull/1279#issuecomment-525897125
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|--------:|:--------|
   | 0 | reexec | 56 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 3 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 23 | Maven dependency ordering for branch |
   | +1 | mvninstall | 720 | trunk passed |
   | +1 | compile | 440 | trunk passed |
   | +1 | checkstyle | 84 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 1048 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 170 | trunk passed |
   | 0 | spotbugs | 515 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 748 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 24 | Maven dependency ordering for patch |
   | +1 | mvninstall | 638 | the patch passed |
   | +1 | compile | 466 | the patch passed |
   | +1 | javac | 466 | the patch passed |
   | +1 | checkstyle | 87 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 825 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 189 | the patch passed |
   | +1 | findbugs | 670 | the patch passed |
   ||| _ Other Tests _ |
   | -1 | unit | 347 | hadoop-hdds in the patch failed. |
   | -1 | unit | 365 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 110 | The patch does not generate ASF License warnings. |
   | | | 7205 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdds.scm.container.TestReplicationManager |
   |   | hadoop.ozone.security.TestOzoneDelegationTokenSecretManager |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1279/8/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1279 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux fbdf9d4346c0 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 48cb583 |
   | Default Java | 1.8.0_222 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1279/8/artifact/out/patch-unit-hadoop-hdds.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1279/8/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1279/8/testReport/ |
   | Max. process+thread count | 1178 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/s3gateway hadoop-ozone/dist U: hadoop-ozone |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1279/8/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 303199)
Time Spent: 4h 20m  (was: 4h 10m)

> Support copy during S3 multipart upload part creation
> -
>
> Key: HDDS-1942
> URL: https://issues.apache.org/jira/browse/HDDS-1942
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: S3
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 4h 20m
>  

[jira] [Work logged] (HDDS-1942) Support copy during S3 multipart upload part creation

2019-08-28 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1942?focusedWorklogId=303102&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-303102
 ]

ASF GitHub Bot logged work on HDDS-1942:


Author: ASF GitHub Bot
Created on: 28/Aug/19 19:03
Start Date: 28/Aug/19 19:03
Worklog Time Spent: 10m 
  Work Description: anuengineer commented on pull request #1279: HDDS-1942. 
Support copy during S3 multipart upload part creation
URL: https://github.com/apache/hadoop/pull/1279
 
 
   
 



Issue Time Tracking
---

Worklog Id: (was: 303102)
Time Spent: 4h 10m  (was: 4h)

> Support copy during S3 multipart upload part creation
> -
>
> Key: HDDS-1942
> URL: https://issues.apache.org/jira/browse/HDDS-1942
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: S3
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Blocker
>  Labels: pull-request-available
>  Time Spent: 4h 10m
>  Remaining Estimate: 0h
>
> Uploads a part by copying data from an existing object as data source
> Documented here:
> https://docs.aws.amazon.com/AmazonS3/latest/API/mpUploadUploadPartCopy.html
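
The quoted description refers to the S3 UploadPartCopy operation, which uploads a multipart-upload part by copying from an existing object. As an illustrative sketch only (not code from this patch), the request shape that operation uses, per the linked AWS documentation, can be modeled like this; all bucket, key, and upload-id values below are made up:

```python
# Sketch of the S3 UploadPartCopy request shape described in the AWS docs.
# Bucket/key/upload-id values are hypothetical; this is not the HDDS-1942 code.

def upload_part_copy_request(bucket, key, upload_id, part_number,
                             source_bucket, source_key, byte_range=None):
    """Return the HTTP pieces of an UploadPartCopy call as a plain dict."""
    headers = {
        # The part data comes from an existing object instead of a request body.
        "x-amz-copy-source": f"/{source_bucket}/{source_key}",
    }
    if byte_range is not None:
        first, last = byte_range
        # Optionally copy only a byte range of the source object.
        headers["x-amz-copy-source-range"] = f"bytes={first}-{last}"
    return {
        "method": "PUT",
        "path": f"/{bucket}/{key}",
        "query": {"partNumber": str(part_number), "uploadId": upload_id},
        "headers": headers,
    }

req = upload_part_copy_request(
    bucket="dest-bucket", key="big-object", upload_id="example-upload-id",
    part_number=1, source_bucket="src-bucket", source_key="existing-object",
    byte_range=(0, 5 * 1024 * 1024 - 1))
```

The key point for the S3 gateway is that, unlike a plain UploadPart, no body is sent: the source object and optional byte range are carried entirely in the two `x-amz-copy-source*` headers.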



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1942) Support copy during S3 multipart upload part creation

2019-08-28 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1942?focusedWorklogId=303101&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-303101
 ]

ASF GitHub Bot logged work on HDDS-1942:


Author: ASF GitHub Bot
Created on: 28/Aug/19 19:03
Start Date: 28/Aug/19 19:03
Worklog Time Spent: 10m 
  Work Description: anuengineer commented on issue #1279: HDDS-1942. 
Support copy during S3 multipart upload part creation
URL: https://github.com/apache/hadoop/pull/1279#issuecomment-525879820
 
 
   Thank you for the reviews. I have committed this to the trunk. @elek  Thanks 
for the contribution.
 



Issue Time Tracking
---

Worklog Id: (was: 303101)
Time Spent: 4h  (was: 3h 50m)

> Support copy during S3 multipart upload part creation
> -
>
> Key: HDDS-1942
> URL: https://issues.apache.org/jira/browse/HDDS-1942
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: S3
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Blocker
>  Labels: pull-request-available
>  Time Spent: 4h
>  Remaining Estimate: 0h
>
> Uploads a part by copying data from an existing object as data source
> Documented here:
> https://docs.aws.amazon.com/AmazonS3/latest/API/mpUploadUploadPartCopy.html






[jira] [Work logged] (HDDS-1942) Support copy during S3 multipart upload part creation

2019-08-28 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1942?focusedWorklogId=303080&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-303080
 ]

ASF GitHub Bot logged work on HDDS-1942:


Author: ASF GitHub Bot
Created on: 28/Aug/19 18:23
Start Date: 28/Aug/19 18:23
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on issue #1279: HDDS-1942. 
Support copy during S3 multipart upload part creation
URL: https://github.com/apache/hadoop/pull/1279#issuecomment-525864916
 
 
   @elek There is an acceptance test failure in the multipart upload suite. Not sure 
if it is related to this change, as I see that test passing in the CI run.
 



Issue Time Tracking
---

Worklog Id: (was: 303080)
Time Spent: 3h 50m  (was: 3h 40m)

> Support copy during S3 multipart upload part creation
> -
>
> Key: HDDS-1942
> URL: https://issues.apache.org/jira/browse/HDDS-1942
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: S3
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Blocker
>  Labels: pull-request-available
>  Time Spent: 3h 50m
>  Remaining Estimate: 0h
>
> Uploads a part by copying data from an existing object as data source
> Documented here:
> https://docs.aws.amazon.com/AmazonS3/latest/API/mpUploadUploadPartCopy.html






[jira] [Work logged] (HDDS-1942) Support copy during S3 multipart upload part creation

2019-08-28 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1942?focusedWorklogId=302972&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-302972
 ]

ASF GitHub Bot logged work on HDDS-1942:


Author: ASF GitHub Bot
Created on: 28/Aug/19 15:44
Start Date: 28/Aug/19 15:44
Worklog Time Spent: 10m 
  Work Description: elek commented on issue #1279: HDDS-1942. Support copy 
during S3 multipart upload part creation
URL: https://github.com/apache/hadoop/pull/1279#issuecomment-525803909
 
 
   @lokeshj1703 Do you have any more concerns? If not, can you please change 
the status from "requested change"?
 



Issue Time Tracking
---

Worklog Id: (was: 302972)
Time Spent: 3h 40m  (was: 3.5h)

> Support copy during S3 multipart upload part creation
> -
>
> Key: HDDS-1942
> URL: https://issues.apache.org/jira/browse/HDDS-1942
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: S3
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Blocker
>  Labels: pull-request-available
>  Time Spent: 3h 40m
>  Remaining Estimate: 0h
>
> Uploads a part by copying data from an existing object as data source
> Documented here:
> https://docs.aws.amazon.com/AmazonS3/latest/API/mpUploadUploadPartCopy.html






[jira] [Work logged] (HDDS-1942) Support copy during S3 multipart upload part creation

2019-08-28 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1942?focusedWorklogId=302938&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-302938
 ]

ASF GitHub Bot logged work on HDDS-1942:


Author: ASF GitHub Bot
Created on: 28/Aug/19 15:06
Start Date: 28/Aug/19 15:06
Worklog Time Spent: 10m 
  Work Description: elek commented on issue #1279: HDDS-1942. Support copy 
during S3 multipart upload part creation
URL: https://github.com/apache/hadoop/pull/1279#issuecomment-525787933
 
 
   Thanks for the review, @bharatviswa504.
   
   @lokeshj1703 Do you have any more concerns? Your review is still in the 
"requested changes" state, but I think I have addressed the comments. 
 



Issue Time Tracking
---

Worklog Id: (was: 302938)
Time Spent: 3.5h  (was: 3h 20m)

> Support copy during S3 multipart upload part creation
> -
>
> Key: HDDS-1942
> URL: https://issues.apache.org/jira/browse/HDDS-1942
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: S3
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Blocker
>  Labels: pull-request-available
>  Time Spent: 3.5h
>  Remaining Estimate: 0h
>
> Uploads a part by copying data from an existing object as data source
> Documented here:
> https://docs.aws.amazon.com/AmazonS3/latest/API/mpUploadUploadPartCopy.html






[jira] [Work logged] (HDDS-1942) Support copy during S3 multipart upload part creation

2019-08-28 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1942?focusedWorklogId=302935&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-302935
 ]

ASF GitHub Bot logged work on HDDS-1942:


Author: ASF GitHub Bot
Created on: 28/Aug/19 14:51
Start Date: 28/Aug/19 14:51
Worklog Time Spent: 10m 
  Work Description: elek commented on issue #1279: HDDS-1942. Support copy 
during S3 multipart upload part creation
URL: https://github.com/apache/hadoop/pull/1279#issuecomment-525781224
 
 
   /retest
 



Issue Time Tracking
---

Worklog Id: (was: 302935)
Time Spent: 3h 20m  (was: 3h 10m)

> Support copy during S3 multipart upload part creation
> -
>
> Key: HDDS-1942
> URL: https://issues.apache.org/jira/browse/HDDS-1942
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: S3
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Blocker
>  Labels: pull-request-available
>  Time Spent: 3h 20m
>  Remaining Estimate: 0h
>
> Uploads a part by copying data from an existing object as data source
> Documented here:
> https://docs.aws.amazon.com/AmazonS3/latest/API/mpUploadUploadPartCopy.html






[jira] [Work logged] (HDDS-1942) Support copy during S3 multipart upload part creation

2019-08-28 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1942?focusedWorklogId=302609&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-302609
 ]

ASF GitHub Bot logged work on HDDS-1942:


Author: ASF GitHub Bot
Created on: 28/Aug/19 07:23
Start Date: 28/Aug/19 07:23
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1279: HDDS-1942. 
Support copy during S3 multipart upload part creation
URL: https://github.com/apache/hadoop/pull/1279#issuecomment-525618093
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|--------:|:--------|
   | 0 | reexec | 193 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 3 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 23 | Maven dependency ordering for branch |
   | +1 | mvninstall | 839 | trunk passed |
   | +1 | compile | 467 | trunk passed |
   | +1 | checkstyle | 112 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 1192 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 233 | trunk passed |
   | 0 | spotbugs | 590 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 875 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 26 | Maven dependency ordering for patch |
   | +1 | mvninstall | 731 | the patch passed |
   | +1 | compile | 471 | the patch passed |
   | +1 | javac | 471 | the patch passed |
   | +1 | checkstyle | 91 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 874 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 216 | the patch passed |
   | +1 | findbugs | 919 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 445 | hadoop-hdds in the patch passed. |
   | -1 | unit | 2752 | hadoop-ozone in the patch failed. |
   | 0 | asflicense | 53 | ASF License check generated no output? |
   | | | 10710 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.om.TestContainerReportWithKeys |
   |   | hadoop.ozone.client.rpc.TestOzoneAtRestEncryption |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClientForAclAuditLog |
   |   | hadoop.ozone.scm.TestSCMNodeManagerMXBean |
   |   | hadoop.hdds.scm.pipeline.TestSCMPipelineManager |
   |   | hadoop.ozone.client.rpc.TestKeyInputStream |
   |   | hadoop.ozone.client.rpc.TestHybridPipelineOnDatanode |
   |   | hadoop.ozone.client.rpc.Test2WayCommitInRatis |
   |   | hadoop.ozone.client.rpc.TestCommitWatcher |
   |   | hadoop.ozone.scm.pipeline.TestPipelineManagerMXBean |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1279/7/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1279 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux c26573fd2250 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / b1eee8b |
   | Default Java | 1.8.0_222 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1279/7/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1279/7/testReport/ |
   | Max. process+thread count | 3973 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/s3gateway hadoop-ozone/dist U: hadoop-ozone |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1279/7/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 



Issue Time Tracking
---

Worklog Id: (was: 302609)
Time Spent: 3h 10m  (was: 3h)

> Support copy during S3 multipart upload part creation
> -
>
> Key: HDDS-1942
> URL: https://issues.apache.org/jira/browse/HDDS-1942

[jira] [Work logged] (HDDS-1942) Support copy during S3 multipart upload part creation

2019-08-27 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1942?focusedWorklogId=301832&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-301832
 ]

ASF GitHub Bot logged work on HDDS-1942:


Author: ASF GitHub Bot
Created on: 27/Aug/19 09:42
Start Date: 27/Aug/19 09:42
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1279: HDDS-1942. 
Support copy during S3 multipart upload part creation
URL: https://github.com/apache/hadoop/pull/1279#issuecomment-525225559
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|--------:|:--------|
   | 0 | reexec | 44 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 3 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 17 | Maven dependency ordering for branch |
   | +1 | mvninstall | 622 | trunk passed |
   | +1 | compile | 382 | trunk passed |
   | +1 | checkstyle | 83 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 848 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 179 | trunk passed |
   | 0 | spotbugs | 454 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 660 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 21 | Maven dependency ordering for patch |
   | -1 | mvninstall | 288 | hadoop-ozone in the patch failed. |
   | -1 | compile | 80 | hadoop-ozone in the patch failed. |
   | -1 | javac | 80 | hadoop-ozone in the patch failed. |
   | -0 | checkstyle | 42 | hadoop-ozone: The patch generated 1 new + 0 
unchanged - 0 fixed = 1 total (was 0) |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 659 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 169 | the patch passed |
   | -1 | findbugs | 164 | hadoop-ozone in the patch failed. |
   ||| _ Other Tests _ |
   | +1 | unit | 332 | hadoop-hdds in the patch passed. |
   | -1 | unit | 349 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 40 | The patch does not generate ASF License warnings. |
   | | | 5629 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1279/6/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1279 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux b468e8002e70 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 3329257 |
   | Default Java | 1.8.0_222 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1279/6/artifact/out/patch-mvninstall-hadoop-ozone.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1279/6/artifact/out/patch-compile-hadoop-ozone.txt
 |
   | javac | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1279/6/artifact/out/patch-compile-hadoop-ozone.txt
 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1279/6/artifact/out/diff-checkstyle-hadoop-ozone.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1279/6/artifact/out/patch-findbugs-hadoop-ozone.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1279/6/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1279/6/testReport/ |
   | Max. process+thread count | 1242 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/s3gateway hadoop-ozone/dist U: hadoop-ozone |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1279/6/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 



Issue Time Tracking
---

Worklog Id: (was: 301832)
Time Spent: 3h  (was: 2h 50m)

> Support copy during S3 multipart upload part creation
> 

[jira] [Work logged] (HDDS-1942) Support copy during S3 multipart upload part creation

2019-08-23 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1942?focusedWorklogId=300077&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-300077
 ]

ASF GitHub Bot logged work on HDDS-1942:


Author: ASF GitHub Bot
Created on: 23/Aug/19 06:27
Start Date: 23/Aug/19 06:27
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on issue #1279: HDDS-1942. 
Support copy during S3 multipart upload part creation
URL: https://github.com/apache/hadoop/pull/1279#issuecomment-524189652
 
 
   +1, LGTM.
   The test failures still need to be verified; I was not able to check the 
acceptance test results, as GitHub renders the report as raw HTML.
 



Issue Time Tracking
---

Worklog Id: (was: 300077)
Time Spent: 2h 50m  (was: 2h 40m)

> Support copy during S3 multipart upload part creation
> -
>
> Key: HDDS-1942
> URL: https://issues.apache.org/jira/browse/HDDS-1942
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: S3
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Blocker
>  Labels: pull-request-available
>  Time Spent: 2h 50m
>  Remaining Estimate: 0h
>
> Uploads a part by copying data from an existing object as data source
> Documented here:
> https://docs.aws.amazon.com/AmazonS3/latest/API/mpUploadUploadPartCopy.html






[jira] [Work logged] (HDDS-1942) Support copy during S3 multipart upload part creation

2019-08-22 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1942?focusedWorklogId=299425&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-299425
 ]

ASF GitHub Bot logged work on HDDS-1942:


Author: ASF GitHub Bot
Created on: 22/Aug/19 13:16
Start Date: 22/Aug/19 13:16
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1279: HDDS-1942. 
Support copy during S3 multipart upload part creation
URL: https://github.com/apache/hadoop/pull/1279#issuecomment-523901698
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|--------:|:--------|
   | 0 | reexec | 44 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 3 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 26 | Maven dependency ordering for branch |
   | +1 | mvninstall | 598 | trunk passed |
   | +1 | compile | 340 | trunk passed |
   | +1 | checkstyle | 62 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 820 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 163 | trunk passed |
   | 0 | spotbugs | 432 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 630 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 18 | Maven dependency ordering for patch |
   | +1 | mvninstall | 574 | the patch passed |
   | +1 | compile | 385 | the patch passed |
   | +1 | javac | 385 | the patch passed |
   | -0 | checkstyle | 36 | hadoop-ozone: The patch generated 1 new + 0 
unchanged - 0 fixed = 1 total (was 0) |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 649 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 154 | the patch passed |
   | +1 | findbugs | 674 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 293 | hadoop-hdds in the patch passed. |
   | -1 | unit | 2091 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 39 | The patch does not generate ASF License warnings. |
   | | | 7768 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.ozone.container.common.statemachine.commandhandler.TestBlockDeletion |
   |   | hadoop.ozone.client.rpc.TestBlockOutputStreamWithFailures |
   |   | hadoop.ozone.container.server.TestSecureContainerServer |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1279/5/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1279 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 6745475e8bcc 4.4.0-139-generic #165-Ubuntu SMP Wed Oct 24 
10:58:50 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / ee7c261 |
   | Default Java | 1.8.0_222 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1279/5/artifact/out/diff-checkstyle-hadoop-ozone.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1279/5/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1279/5/testReport/ |
   | Max. process+thread count | 5301 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/s3gateway hadoop-ozone/dist U: hadoop-ozone |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1279/5/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 



Issue Time Tracking
---

Worklog Id: (was: 299425)
Time Spent: 2h 40m  (was: 2.5h)

> Support copy during S3 multipart upload part creation
> -
>
> Key: HDDS-1942
> URL: https://issues.apache.org/jira/browse/HDDS-1942
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: S3
>Reporter: Elek, Marton
>  

[jira] [Work logged] (HDDS-1942) Support copy during S3 multipart upload part creation

2019-08-21 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1942?focusedWorklogId=298668&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-298668
 ]

ASF GitHub Bot logged work on HDDS-1942:


Author: ASF GitHub Bot
Created on: 21/Aug/19 13:13
Start Date: 21/Aug/19 13:13
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1279: HDDS-1942. 
Support copy during S3 multipart upload part creation
URL: https://github.com/apache/hadoop/pull/1279#issuecomment-523450772
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 84 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 3 new or modified test files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 11 | Maven dependency ordering for branch |
   | +1 | mvninstall | 631 | trunk passed |
   | +1 | compile | 370 | trunk passed |
   | +1 | checkstyle | 61 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 826 | branch has no errors when building and testing our client artifacts. |
   | +1 | javadoc | 155 | trunk passed |
   | 0 | spotbugs | 424 | Used deprecated FindBugs config; considering switching to SpotBugs. |
   | +1 | findbugs | 612 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 19 | Maven dependency ordering for patch |
   | +1 | mvninstall | 554 | the patch passed |
   | +1 | compile | 361 | the patch passed |
   | +1 | javac | 361 | the patch passed |
   | -0 | checkstyle | 34 | hadoop-ozone: The patch generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 677 | patch has no errors when building and testing our client artifacts. |
   | +1 | javadoc | 150 | the patch passed |
   | +1 | findbugs | 631 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 352 | hadoop-hdds in the patch passed. |
   | -1 | unit | 3260 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 51 | The patch does not generate ASF License warnings. |
   | | | 9038 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.container.server.TestSecureContainerServer |
   |   | hadoop.ozone.scm.TestXceiverClientManager |
   |   | hadoop.hdds.scm.pipeline.TestRatisPipelineCreateAndDestory |
   |   | hadoop.hdds.scm.pipeline.TestRatisPipelineProvider |
   |   | hadoop.ozone.TestMiniOzoneCluster |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClientForAclAuditLog |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClientWithRatis |
   |   | hadoop.ozone.client.rpc.TestReadRetries |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-1279/4/artifact/out/Dockerfile |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1279 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 31f3792bf7c2 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 10ec31d |
   | Default Java | 1.8.0_222 |
   | checkstyle | https://builds.apache.org/job/hadoop-multibranch/job/PR-1279/4/artifact/out/diff-checkstyle-hadoop-ozone.txt |
   | unit | https://builds.apache.org/job/hadoop-multibranch/job/PR-1279/4/artifact/out/patch-unit-hadoop-ozone.txt |
   | Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-1279/4/testReport/ |
   | Max. process+thread count | 4199 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/s3gateway hadoop-ozone/dist U: hadoop-ozone |
   | Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-1279/4/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 



Issue Time Tracking
---

Worklog Id: (was: 298668)
Time Spent: 2.5h  (was: 2h 20m)

> Support copy during S3 multipart upload part creation
> -----------------------------------------------------
>
>  

[jira] [Work logged] (HDDS-1942) Support copy during S3 multipart upload part creation

2019-08-21 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1942?focusedWorklogId=298564&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-298564
 ]

ASF GitHub Bot logged work on HDDS-1942:


Author: ASF GitHub Bot
Created on: 21/Aug/19 09:59
Start Date: 21/Aug/19 09:59
Worklog Time Spent: 10m 
  Work Description: elek commented on pull request #1279: HDDS-1942. 
Support copy during S3 multipart upload part creation
URL: https://github.com/apache/hadoop/pull/1279#discussion_r316101386
 
 

 ##
 File path: hadoop-ozone/dist/src/main/smoketest/s3/MultipartUpload.robot
 ##
 @@ -200,3 +200,26 @@ Test Multipart Upload with the simplified aws s3 cp API
     Execute AWSS3Cli    cp s3://${BUCKET}/mpyawscli /tmp/part1.result
     Execute AWSS3Cli    rm s3://${BUCKET}/mpyawscli
     Compare files       /tmp/part1    /tmp/part1.result
+
+Test Multipart Upload Put With Copy
+    Run Keyword    Create Random file    5
+    ${result} =    Execute AWSS3APICli    put-object --bucket ${BUCKET} --key copytest/source --body /tmp/part1
+
+
+    ${result} =    Execute AWSS3APICli    create-multipart-upload --bucket ${BUCKET} --key copytest/destination
+
+    ${uploadID} =    Execute and checkrc    echo '${result}' | jq -r '.UploadId'    0
+    Should contain    ${result}    ${BUCKET}
+    Should contain    ${result}    UploadId
+
+    ${result} =    Execute AWSS3APICli    upload-part-copy --bucket ${BUCKET} --key copytest/destination --upload-id ${uploadID} --part-number 1 --copy-source ${BUCKET}/copytest/source
+    Should contain    ${result}    ${BUCKET}
 
 Review comment:
   Good idea. I am pushing the new version with the additional smoketest...
 



Issue Time Tracking
---

Worklog Id: (was: 298564)
Time Spent: 2h 20m  (was: 2h 10m)

> Support copy during S3 multipart upload part creation
> -----------------------------------------------------
>
> Key: HDDS-1942
> URL: https://issues.apache.org/jira/browse/HDDS-1942
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: S3
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Blocker
>  Labels: pull-request-available
>  Time Spent: 2h 20m
>  Remaining Estimate: 0h
>
> Uploads a part by copying data from an existing object as data source
> Documented here:
> https://docs.aws.amazon.com/AmazonS3/latest/API/mpUploadUploadPartCopy.html



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1942) Support copy during S3 multipart upload part creation

2019-08-21 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1942?focusedWorklogId=298563&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-298563
 ]

ASF GitHub Bot logged work on HDDS-1942:


Author: ASF GitHub Bot
Created on: 21/Aug/19 09:58
Start Date: 21/Aug/19 09:58
Worklog Time Spent: 10m 
  Work Description: elek commented on issue #1279: HDDS-1942. Support copy 
during S3 multipart upload part creation
URL: https://github.com/apache/hadoop/pull/1279#issuecomment-523388046
 
 
   Thanks for the review @bharatviswa504 
   
   > Related to eTag I think we don't have backend implemented but for 
x-amz-copy-source-if-unmodified-since we can use lastModified from KeyInfo and 
do this right?
   
   Yes, it's true. Let's do it in a separate jira (opened HDDS-1997). It may 
be better to do it in a separate phase, as it's not a blocking feature for 
using Ozone as a docker registry backend, and it also requires more code 
(especially more testing code), so it will be easier to review as a separate 
step.
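The deferred `x-amz-copy-source-if-unmodified-since` check (HDDS-1997) boils down to a timestamp comparison against the key's lastModified. A minimal sketch, assuming standard S3 precondition semantics; `CopyPrecondition` and `unmodifiedSince` are illustrative names, not code from the Ozone patch:

```java
import java.time.Instant;

// Illustrative helper for the if-unmodified-since precondition deferred to
// HDDS-1997: the copy may proceed only if the source key was not modified
// after the timestamp supplied in the header.
public class CopyPrecondition {

  /** Returns true when the copy is allowed under if-unmodified-since. */
  public static boolean unmodifiedSince(Instant lastModified,
      Instant ifUnmodifiedSince) {
    // S3 fails such a request with 412 Precondition Failed when the object
    // has been modified after the supplied timestamp.
    return !lastModified.isAfter(ifUnmodifiedSince);
  }

  public static void main(String[] args) {
    Instant modified = Instant.parse("2019-08-21T09:00:00Z");
    System.out.println(
        unmodifiedSince(modified, Instant.parse("2019-08-21T10:00:00Z")));
    // true: not modified after the header timestamp, so the copy proceeds
  }
}
```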
   
 



Issue Time Tracking
---

Worklog Id: (was: 298563)
Time Spent: 2h 10m  (was: 2h)

> Support copy during S3 multipart upload part creation
> -----------------------------------------------------
>
> Key: HDDS-1942
> URL: https://issues.apache.org/jira/browse/HDDS-1942
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: S3
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Blocker
>  Labels: pull-request-available
>  Time Spent: 2h 10m
>  Remaining Estimate: 0h
>
> Uploads a part by copying data from an existing object as data source
> Documented here:
> https://docs.aws.amazon.com/AmazonS3/latest/API/mpUploadUploadPartCopy.html



--



[jira] [Work logged] (HDDS-1942) Support copy during S3 multipart upload part creation

2019-08-21 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1942?focusedWorklogId=298543&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-298543
 ]

ASF GitHub Bot logged work on HDDS-1942:


Author: ASF GitHub Bot
Created on: 21/Aug/19 09:28
Start Date: 21/Aug/19 09:28
Worklog Time Spent: 10m 
  Work Description: elek commented on issue #1279: HDDS-1942. Support copy 
during S3 multipart upload part creation
URL: https://github.com/apache/hadoop/pull/1279#issuecomment-523376938
 
 
   Thanks for the review @bharatviswa504 
   
   > Related to eTag I think we don't have backend implemented but for 
x-amz-copy-source-if-unmodified-since we can use lastModified from KeyInfo and 
do this right?
   
   Yes, it's true. Let's do it in a separate jira (opened HDDS-1997). It may 
be better to do it in a separate phase, as it's not a blocking feature for 
using Ozone as a docker registry backend, and it also requires more code 
(especially more testing code), so it will be easier to review as a separate 
step.
   
 



Issue Time Tracking
---

Worklog Id: (was: 298543)
Time Spent: 2h  (was: 1h 50m)

> Support copy during S3 multipart upload part creation
> -----------------------------------------------------
>
> Key: HDDS-1942
> URL: https://issues.apache.org/jira/browse/HDDS-1942
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: S3
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Blocker
>  Labels: pull-request-available
>  Time Spent: 2h
>  Remaining Estimate: 0h
>
> Uploads a part by copying data from an existing object as data source
> Documented here:
> https://docs.aws.amazon.com/AmazonS3/latest/API/mpUploadUploadPartCopy.html



--



[jira] [Work logged] (HDDS-1942) Support copy during S3 multipart upload part creation

2019-08-21 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1942?focusedWorklogId=298541&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-298541
 ]

ASF GitHub Bot logged work on HDDS-1942:


Author: ASF GitHub Bot
Created on: 21/Aug/19 09:21
Start Date: 21/Aug/19 09:21
Worklog Time Spent: 10m 
  Work Description: elek commented on pull request #1279: HDDS-1942. 
Support copy during S3 multipart upload part creation
URL: https://github.com/apache/hadoop/pull/1279#discussion_r316082174
 
 

 ##
 File path: 
hadoop-ozone/s3gateway/src/main/java/org/apache/hadoop/ozone/s3/endpoint/ObjectEndpoint.java
 ##
 @@ -736,4 +761,25 @@ private CopyObjectResponse copyObject(String copyHeader,
       }
     }
   }
+
+  /**
+   * Parse the key and bucket name from copy header.
+   */
+  private Pair<String, String> parseSourceHeader(String copyHeader)
+      throws OS3Exception {
+    String header = copyHeader;
+    if (header.startsWith("/")) {
+      header = copyHeader.substring(1);
+    }
+    int pos = header.indexOf("/");
+    if (pos == -1) {
+      OS3Exception ex = S3ErrorTable.newError(S3ErrorTable
+          .INVALID_ARGUMENT, header);
+      ex.setErrorMessage("Copy Source must mention the source bucket and " +
+          "key: sourcebucket/sourcekey");
+      throw ex;
+    }
+
+    return Pair.of(copyHeader.substring(0, pos), copyHeader.substring(pos + 1));
 
 Review comment:
   Thanks, you are right. I fixed it and created a unit test for this utility 
method.
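The corrected parsing can be sketched standalone as follows. This is an illustrative version, not the actual patch (class and method names are placeholders): the point of the fix is that after stripping a leading '/', both bucket and key must be cut from the normalized `header`, not from the raw `copyHeader`.

```java
// Standalone sketch of the copy-source header parsing after the fix
// discussed above. Hypothetical class/method names, for illustration only.
public class CopySourceParser {

  /** Splits "bucket/key" (optionally "/bucket/key") into {bucket, key}. */
  public static String[] parseSourceHeader(String copyHeader) {
    String header = copyHeader;
    if (header.startsWith("/")) {
      header = header.substring(1);
    }
    int pos = header.indexOf('/');
    if (pos == -1) {
      throw new IllegalArgumentException(
          "Copy Source must mention the source bucket and key: "
              + "sourcebucket/sourcekey");
    }
    // Substrings are taken from the normalized header, so a leading '/'
    // no longer shifts the split point by one character.
    return new String[] {header.substring(0, pos), header.substring(pos + 1)};
  }

  public static void main(String[] args) {
    String[] parts = parseSourceHeader("/sourcebucket/dir/sourcekey");
    System.out.println(parts[0] + " | " + parts[1]);
    // prints "sourcebucket | dir/sourcekey"
  }
}
```

Keys may themselves contain '/', so only the first separator delimits the bucket; everything after it belongs to the key.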
 



Issue Time Tracking
---

Worklog Id: (was: 298541)
Time Spent: 1h 50m  (was: 1h 40m)

> Support copy during S3 multipart upload part creation
> -----------------------------------------------------
>
> Key: HDDS-1942
> URL: https://issues.apache.org/jira/browse/HDDS-1942
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: S3
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Blocker
>  Labels: pull-request-available
>  Time Spent: 1h 50m
>  Remaining Estimate: 0h
>
> Uploads a part by copying data from an existing object as data source
> Documented here:
> https://docs.aws.amazon.com/AmazonS3/latest/API/mpUploadUploadPartCopy.html



--



[jira] [Work logged] (HDDS-1942) Support copy during S3 multipart upload part creation

2019-08-21 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1942?focusedWorklogId=298534&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-298534
 ]

ASF GitHub Bot logged work on HDDS-1942:


Author: ASF GitHub Bot
Created on: 21/Aug/19 09:15
Start Date: 21/Aug/19 09:15
Worklog Time Spent: 10m 
  Work Description: elek commented on issue #1279: HDDS-1942. Support copy 
during S3 multipart upload part creation
URL: https://github.com/apache/hadoop/pull/1279#issuecomment-523371978
 
 
   Thanks for the review @lokeshj1703.
   
   > Can we also add a unit test related to same?
   
   Sure, I created a real unit test. Will push it soon...
 



Issue Time Tracking
---

Worklog Id: (was: 298534)
Time Spent: 1h 40m  (was: 1.5h)

> Support copy during S3 multipart upload part creation
> -----------------------------------------------------
>
> Key: HDDS-1942
> URL: https://issues.apache.org/jira/browse/HDDS-1942
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: S3
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Blocker
>  Labels: pull-request-available
>  Time Spent: 1h 40m
>  Remaining Estimate: 0h
>
> Uploads a part by copying data from an existing object as data source
> Documented here:
> https://docs.aws.amazon.com/AmazonS3/latest/API/mpUploadUploadPartCopy.html



--



[jira] [Work logged] (HDDS-1942) Support copy during S3 multipart upload part creation

2019-08-21 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1942?focusedWorklogId=298529&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-298529
 ]

ASF GitHub Bot logged work on HDDS-1942:


Author: ASF GitHub Bot
Created on: 21/Aug/19 09:09
Start Date: 21/Aug/19 09:09
Worklog Time Spent: 10m 
  Work Description: elek commented on pull request #1279: HDDS-1942. 
Support copy during S3 multipart upload part creation
URL: https://github.com/apache/hadoop/pull/1279#discussion_r316076135
 
 

 ##
 File path: 
hadoop-ozone/s3gateway/src/main/java/org/apache/hadoop/ozone/s3/endpoint/ObjectEndpoint.java
 ##
 @@ -553,12 +555,45 @@ private Response createMultipartKey(String bucket, String key, long length,
       OzoneBucket ozoneBucket = getBucket(bucket);
       OzoneOutputStream ozoneOutputStream = ozoneBucket.createMultipartKey(
           key, length, partNumber, uploadID);
-      IOUtils.copy(body, ozoneOutputStream);
+
+      String copyHeader = headers.getHeaderString(COPY_SOURCE_HEADER);
+      if (copyHeader != null) {
+        Pair<String, String> result = parseSourceHeader(copyHeader);
+
+        String sourceBucket = result.getLeft();
+        String sourceKey = result.getRight();
+
+        try (OzoneInputStream sourceObject =
+            getBucket(sourceBucket).readKey(sourceKey)) {
+
+          String range =
+              headers.getHeaderString(COPY_SOURCE_HEADER_RANGE);
+          if (range != null) {
+            RangeHeader rangeHeader =
+                RangeHeaderParserUtil.parseRangeHeader(range, 0);
+            IOUtils.copyLarge(sourceObject, ozoneOutputStream,
+                rangeHeader.getStartOffset(),
+                rangeHeader.getEndOffset() - rangeHeader.getStartOffset());
+
+          } else {
+            IOUtils.copy(sourceObject, ozoneOutputStream);
+          }
+        }
+
+      } else {
+        IOUtils.copy(body, ozoneOutputStream);
+      }
       ozoneOutputStream.close();
       OmMultipartCommitUploadPartInfo omMultipartCommitUploadPartInfo =
           ozoneOutputStream.getCommitUploadPartInfo();
-      return Response.status(Status.OK).header("ETag",
-          omMultipartCommitUploadPartInfo.getPartName()).build();
+      String eTag = omMultipartCommitUploadPartInfo.getPartName();
+
+      if (copyHeader != null) {
+        return Response.ok(new CopyPartResult(eTag)).build();
 
 Review comment:
   It is set in the constructor:
   
   ```java
 public CopyPartResult(String eTag) {
   this.eTag = eTag;
   this.lastModified = Instant.now();
 }
   ```
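The range-bounded copy in the diff above can be illustrated with a self-contained sketch using only `java.io`. Here `parseRange` stands in for `RangeHeaderParserUtil` and is an assumption, as is the inclusive end offset: HTTP/S3 `Range` offsets are inclusive, so the copy length in this sketch is `end - start + 1` (whether the production `getEndOffset()` is inclusive or exclusive is not shown in the excerpt).

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

// Minimal sketch of a range-bounded part copy. RangeCopy and parseRange are
// illustrative stand-ins, not the Ozone classes.
public class RangeCopy {

  /** Parses "bytes=start-end" into {start, end} (both inclusive). */
  public static long[] parseRange(String rangeHeader) {
    String spec = rangeHeader.substring("bytes=".length());
    int dash = spec.indexOf('-');
    return new long[] {
        Long.parseLong(spec.substring(0, dash)),
        Long.parseLong(spec.substring(dash + 1))};
  }

  /** Copies the inclusive byte range [start, end] from src to dst. */
  public static void copyRange(InputStream src, OutputStream dst,
      long start, long end) throws IOException {
    long skipped = src.skip(start);
    if (skipped != start) {
      throw new IOException("Could not seek to range start " + start);
    }
    long remaining = end - start + 1;   // inclusive range length
    byte[] buf = new byte[8192];
    while (remaining > 0) {
      int n = src.read(buf, 0, (int) Math.min(buf.length, remaining));
      if (n == -1) {
        break;                          // source shorter than requested range
      }
      dst.write(buf, 0, n);
      remaining -= n;
    }
  }

  public static void main(String[] args) throws IOException {
    long[] r = parseRange("bytes=2-5");
    ByteArrayOutputStream out = new ByteArrayOutputStream();
    copyRange(new ByteArrayInputStream("hello world".getBytes()), out,
        r[0], r[1]);
    System.out.println(out.toString());
    // prints "llo " (bytes 2..5 inclusive)
  }
}
```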
 



Issue Time Tracking
---

Worklog Id: (was: 298529)
Time Spent: 1.5h  (was: 1h 20m)

> Support copy during S3 multipart upload part creation
> -----------------------------------------------------
>
> Key: HDDS-1942
> URL: https://issues.apache.org/jira/browse/HDDS-1942
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: S3
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Blocker
>  Labels: pull-request-available
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> Uploads a part by copying data from an existing object as data source
> Documented here:
> https://docs.aws.amazon.com/AmazonS3/latest/API/mpUploadUploadPartCopy.html



--



[jira] [Work logged] (HDDS-1942) Support copy during S3 multipart upload part creation

2019-08-20 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1942?focusedWorklogId=297899&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-297899
 ]

ASF GitHub Bot logged work on HDDS-1942:


Author: ASF GitHub Bot
Created on: 20/Aug/19 14:03
Start Date: 20/Aug/19 14:03
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1279: HDDS-1942. 
Support copy during S3 multipart upload part creation
URL: https://github.com/apache/hadoop/pull/1279#issuecomment-523029698
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 1147 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 48 | Maven dependency ordering for branch |
   | +1 | mvninstall | 698 | trunk passed |
   | +1 | compile | 419 | trunk passed |
   | +1 | checkstyle | 76 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 884 | branch has no errors when building and testing our client artifacts. |
   | +1 | javadoc | 162 | trunk passed |
   | 0 | spotbugs | 481 | Used deprecated FindBugs config; considering switching to SpotBugs. |
   | +1 | findbugs | 705 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 19 | Maven dependency ordering for patch |
   | +1 | mvninstall | 586 | the patch passed |
   | +1 | compile | 407 | the patch passed |
   | +1 | javac | 407 | the patch passed |
   | +1 | checkstyle | 87 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 706 | patch has no errors when building and testing our client artifacts. |
   | +1 | javadoc | 164 | the patch passed |
   | +1 | findbugs | 751 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 365 | hadoop-hdds in the patch passed. |
   | -1 | unit | 507 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 38 | The patch does not generate ASF License warnings. |
   | | | 7905 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-1279/3/artifact/out/Dockerfile |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1279 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 330fe7eaf930 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 6244502 |
   | Default Java | 1.8.0_222 |
   | unit | https://builds.apache.org/job/hadoop-multibranch/job/PR-1279/3/artifact/out/patch-unit-hadoop-ozone.txt |
   | Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-1279/3/testReport/ |
   | Max. process+thread count | 1382 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/s3gateway hadoop-ozone/dist U: hadoop-ozone |
   | Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-1279/3/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 



Issue Time Tracking
---

Worklog Id: (was: 297899)
Time Spent: 1h 20m  (was: 1h 10m)

> Support copy during S3 multipart upload part creation
> -----------------------------------------------------
>
> Key: HDDS-1942
> URL: https://issues.apache.org/jira/browse/HDDS-1942
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: S3
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Blocker
>  Labels: pull-request-available
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> Uploads a part by copying data from an existing object as data source
> Documented here:
> https://docs.aws.amazon.com/AmazonS3/latest/API/mpUploadUploadPartCopy.html



--

[jira] [Work logged] (HDDS-1942) Support copy during S3 multipart upload part creation

2019-08-16 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1942?focusedWorklogId=296352&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-296352
 ]

ASF GitHub Bot logged work on HDDS-1942:


Author: ASF GitHub Bot
Created on: 16/Aug/19 15:44
Start Date: 16/Aug/19 15:44
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1279: HDDS-1942. 
Support copy during S3 multipart upload part creation
URL: https://github.com/apache/hadoop/pull/1279#issuecomment-522055534
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 42 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 26 | Maven dependency ordering for branch |
   | +1 | mvninstall | 576 | trunk passed |
   | +1 | compile | 348 | trunk passed |
   | +1 | checkstyle | 76 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 840 | branch has no errors when building and testing our client artifacts. |
   | +1 | javadoc | 154 | trunk passed |
   | 0 | spotbugs | 452 | Used deprecated FindBugs config; considering switching to SpotBugs. |
   | +1 | findbugs | 664 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 18 | Maven dependency ordering for patch |
   | +1 | mvninstall | 583 | the patch passed |
   | +1 | compile | 393 | the patch passed |
   | +1 | javac | 393 | the patch passed |
   | +1 | checkstyle | 71 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 670 | patch has no errors when building and testing our client artifacts. |
   | +1 | javadoc | 165 | the patch passed |
   | +1 | findbugs | 685 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 287 | hadoop-hdds in the patch passed. |
   | -1 | unit | 1613 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 42 | The patch does not generate ASF License warnings. |
   | | | 7379 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.TestMiniChaosOzoneCluster |
   |   | hadoop.ozone.client.rpc.Test2WayCommitInRatis |
   |   | hadoop.ozone.client.rpc.TestFailureHandlingByClient |
   |   | hadoop.hdds.scm.pipeline.TestNode2PipelineMap |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-1279/2/artifact/out/Dockerfile |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1279 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 82a979cf92cb 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 9b8359b |
   | Default Java | 1.8.0_222 |
   | unit | https://builds.apache.org/job/hadoop-multibranch/job/PR-1279/2/artifact/out/patch-unit-hadoop-ozone.txt |
   | Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-1279/2/testReport/ |
   | Max. process+thread count | 3935 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/s3gateway hadoop-ozone/dist U: hadoop-ozone |
   | Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-1279/2/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 



Issue Time Tracking
---

Worklog Id: (was: 296352)
Time Spent: 1h 10m  (was: 1h)

> Support copy during S3 multipart upload part creation
> -----------------------------------------------------
>
> Key: HDDS-1942
> URL: https://issues.apache.org/jira/browse/HDDS-1942
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: S3
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Blocker

[jira] [Work logged] (HDDS-1942) Support copy during S3 multipart upload part creation

2019-08-15 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1942?focusedWorklogId=295867&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-295867
 ]

ASF GitHub Bot logged work on HDDS-1942:


Author: ASF GitHub Bot
Created on: 15/Aug/19 23:17
Start Date: 15/Aug/19 23:17
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #1279: 
HDDS-1942. Support copy during S3 multipart upload part creation
URL: https://github.com/apache/hadoop/pull/1279#discussion_r314534545
 
 

 ##
 File path: hadoop-ozone/dist/src/main/smoketest/s3/MultipartUpload.robot
 ##
 @@ -200,3 +200,26 @@ Test Multipart Upload with the simplified aws s3 cp API
     Execute AWSS3Cli    cp s3://${BUCKET}/mpyawscli /tmp/part1.result
     Execute AWSS3Cli    rm s3://${BUCKET}/mpyawscli
     Compare files       /tmp/part1    /tmp/part1.result
+
+Test Multipart Upload Put With Copy
+    Run Keyword    Create Random file    5
+    ${result} =    Execute AWSS3APICli    put-object --bucket ${BUCKET} --key copytest/source --body /tmp/part1
+
+
+    ${result} =    Execute AWSS3APICli    create-multipart-upload --bucket ${BUCKET} --key copytest/destination
+
+    ${uploadID} =    Execute and checkrc    echo '${result}' | jq -r '.UploadId'    0
+    Should contain    ${result}    ${BUCKET}
+    Should contain    ${result}    UploadId
+
+    ${result} =    Execute AWSS3APICli    upload-part-copy --bucket ${BUCKET} --key copytest/destination --upload-id ${uploadID} --part-number 1 --copy-source ${BUCKET}/copytest/source
+    Should contain    ${result}    ${BUCKET}
 
 Review comment:
   Can we add a test for part copy with a byte range as well?
 



Issue Time Tracking
---

Worklog Id: (was: 295867)
Time Spent: 1h  (was: 50m)

> Support copy during S3 multipart upload part creation
> -----------------------------------------------------
>
> Key: HDDS-1942
> URL: https://issues.apache.org/jira/browse/HDDS-1942
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: S3
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Blocker
>  Labels: pull-request-available
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> Uploads a part by copying data from an existing object as data source
> Documented here:
> https://docs.aws.amazon.com/AmazonS3/latest/API/mpUploadUploadPartCopy.html



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1942) Support copy during S3 multipart upload part creation

2019-08-15 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1942?focusedWorklogId=295866&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-295866
 ]

ASF GitHub Bot logged work on HDDS-1942:


Author: ASF GitHub Bot
Created on: 15/Aug/19 23:17
Start Date: 15/Aug/19 23:17
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #1279: 
HDDS-1942. Support copy during S3 multipart upload part creation
URL: https://github.com/apache/hadoop/pull/1279#discussion_r314534545
 
 

 ##
 File path: hadoop-ozone/dist/src/main/smoketest/s3/MultipartUpload.robot
 ##
 @@ -200,3 +200,26 @@ Test Multipart Upload with the simplified aws s3 cp API
     Execute AWSS3Cli        cp s3://${BUCKET}/mpyawscli /tmp/part1.result
     Execute AWSS3Cli        rm s3://${BUCKET}/mpyawscli
     Compare files           /tmp/part1    /tmp/part1.result
+
+Test Multipart Upload Put With Copy
+    Run Keyword    Create Random file    5
+    ${result} =    Execute AWSS3APICli    put-object --bucket ${BUCKET} --key copytest/source --body /tmp/part1
+
+
+    ${result} =    Execute AWSS3APICli    create-multipart-upload --bucket ${BUCKET} --key copytest/destination
+
+    ${uploadID} =    Execute and checkrc    echo '${result}' | jq -r '.UploadId'    0
+    Should contain    ${result}    ${BUCKET}
+    Should contain    ${result}    UploadId
+
+    ${result} =    Execute AWSS3APICli    upload-part-copy --bucket ${BUCKET} --key copytest/destination --upload-id ${uploadID} --part-number 1 --copy-source ${BUCKET}/copytest/source
+    Should contain    ${result}    ${BUCKET}
 
 Review comment:
   Can we add an example of part copy with a byte-range also?
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 295866)
Time Spent: 50m  (was: 40m)

> Support copy during S3 multipart upload part creation
> -
>
> Key: HDDS-1942
> URL: https://issues.apache.org/jira/browse/HDDS-1942
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: S3
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Blocker
>  Labels: pull-request-available
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> Uploads a part by copying data from an existing object as data source
> Documented here:
> https://docs.aws.amazon.com/AmazonS3/latest/API/mpUploadUploadPartCopy.html



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1942) Support copy during S3 multipart upload part creation

2019-08-15 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1942?focusedWorklogId=295865&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-295865
 ]

ASF GitHub Bot logged work on HDDS-1942:


Author: ASF GitHub Bot
Created on: 15/Aug/19 23:16
Start Date: 15/Aug/19 23:16
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #1279: 
HDDS-1942. Support copy during S3 multipart upload part creation
URL: https://github.com/apache/hadoop/pull/1279#discussion_r314534295
 
 

 ##
 File path: hadoop-ozone/s3gateway/src/main/java/org/apache/hadoop/ozone/s3/endpoint/ObjectEndpoint.java
 ##
 @@ -553,12 +555,45 @@ private Response createMultipartKey(String bucket, String key, long length,
   OzoneBucket ozoneBucket = getBucket(bucket);
   OzoneOutputStream ozoneOutputStream = ozoneBucket.createMultipartKey(
   key, length, partNumber, uploadID);
-  IOUtils.copy(body, ozoneOutputStream);
+
+  String copyHeader = headers.getHeaderString(COPY_SOURCE_HEADER);
+  if (copyHeader != null) {
+Pair<String, String> result = parseSourceHeader(copyHeader);
+
+String sourceBucket = result.getLeft();
+String sourceKey = result.getRight();
+
+try (OzoneInputStream sourceObject =
+getBucket(sourceBucket).readKey(sourceKey)) {
+
+  String range =
+  headers.getHeaderString(COPY_SOURCE_HEADER_RANGE);
+  if (range != null) {
+RangeHeader rangeHeader =
+RangeHeaderParserUtil.parseRangeHeader(range, 0);
+IOUtils.copyLarge(sourceObject, ozoneOutputStream,
+rangeHeader.getStartOffset(),
+rangeHeader.getEndOffset() - rangeHeader.getStartOffset());
+
+  } else {
+IOUtils.copy(sourceObject, ozoneOutputStream);
+  }
+}
+
+  } else {
+IOUtils.copy(body, ozoneOutputStream);
+  }
   ozoneOutputStream.close();
   OmMultipartCommitUploadPartInfo omMultipartCommitUploadPartInfo =
   ozoneOutputStream.getCommitUploadPartInfo();
-  return Response.status(Status.OK).header("ETag",
-  omMultipartCommitUploadPartInfo.getPartName()).build();
+  String eTag = omMultipartCommitUploadPartInfo.getPartName();
+
+  if (copyHeader != null) {
+return Response.ok(new CopyPartResult(eTag)).build();
 
 Review comment:
   CopyPartResult from the documentation has 2 fields: ETag and LastModified. 
Here we are not setting LastModified.
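To make the reviewer's point concrete: the AWS UploadPartCopy API documents a `CopyPartResult` response body with two elements, `ETag` and `LastModified`. The class below is a minimal sketch invented here (it is not Ozone's actual `CopyPartResult`, which uses JAXB annotations for XML serialization); it only illustrates a response carrying both fields, with the XML rendered by hand.

```java
import java.time.Instant;

// Hypothetical sketch of a UploadPartCopy response body that carries both
// fields AWS documents: ETag and LastModified. Not Ozone's real class.
public class CopyPartResultSketch {
    private final String eTag;
    private final Instant lastModified;

    public CopyPartResultSketch(String eTag, Instant lastModified) {
        this.eTag = eTag;
        this.lastModified = lastModified;
    }

    // Render the XML by hand for illustration; Instant.toString() yields
    // an ISO-8601 UTC timestamp such as 2019-08-15T23:16:00Z.
    public String toXml() {
        return "<CopyPartResult>"
            + "<LastModified>" + lastModified + "</LastModified>"
            + "<ETag>" + eTag + "</ETag>"
            + "</CopyPartResult>";
    }

    public static void main(String[] args) {
        CopyPartResultSketch r = new CopyPartResultSketch(
            "\"part-1-etag\"", Instant.parse("2019-08-15T23:16:00Z"));
        System.out.println(r.toXml());
    }
}
```

With such a class, the endpoint could populate `LastModified` from the committed part's modification time instead of omitting it.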
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 295865)
Time Spent: 40m  (was: 0.5h)

> Support copy during S3 multipart upload part creation
> -
>
> Key: HDDS-1942
> URL: https://issues.apache.org/jira/browse/HDDS-1942
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: S3
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Blocker
>  Labels: pull-request-available
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> Uploads a part by copying data from an existing object as data source
> Documented here:
> https://docs.aws.amazon.com/AmazonS3/latest/API/mpUploadUploadPartCopy.html



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1942) Support copy during S3 multipart upload part creation

2019-08-13 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1942?focusedWorklogId=293873&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-293873
 ]

ASF GitHub Bot logged work on HDDS-1942:


Author: ASF GitHub Bot
Created on: 13/Aug/19 12:40
Start Date: 13/Aug/19 12:40
Worklog Time Spent: 10m 
  Work Description: lokeshj1703 commented on pull request #1279: HDDS-1942. 
Support copy during S3 multipart upload part creation
URL: https://github.com/apache/hadoop/pull/1279#discussion_r313367705
 
 

 ##
 File path: hadoop-ozone/s3gateway/src/main/java/org/apache/hadoop/ozone/s3/endpoint/ObjectEndpoint.java
 ##
 @@ -736,4 +761,25 @@ private CopyObjectResponse copyObject(String copyHeader,
   }
 }
   }
+
+  /**
+   * Parse the key and bucket name from copy header.
+   */
+  private Pair<String, String> parseSourceHeader(String copyHeader)
+  throws OS3Exception {
+String header = copyHeader;
+if (header.startsWith("/")) {
+  header = copyHeader.substring(1);
+}
+int pos = header.indexOf("/");
+if (pos == -1) {
+  OS3Exception ex = S3ErrorTable.newError(S3ErrorTable
+  .INVALID_ARGUMENT, header);
+  ex.setErrorMessage("Copy Source must mention the source bucket and " +
+  "key: sourcebucket/sourcekey");
+  throw ex;
+}
+
+return Pair.of(copyHeader.substring(0, pos), copyHeader.substring(pos + 1));
 
 Review comment:
   I think this should use `header` instead of `copyHeader`: 
`Pair.of(header.substring(0, pos), header.substring(pos + 1))`. And perhaps we 
can just use a single variable here for clarity?
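The reviewer's point: after the optional leading "/" is stripped into `header`, the offsets of `header` and `copyHeader` differ by one, so the `substring` calls must use `header`. The sketch below applies that suggestion; it is simplified for illustration (`IllegalArgumentException` and `String[]` stand in for the real `OS3Exception` and commons-lang3 `Pair`, and the class name is invented here).

```java
// Simplified sketch of the suggested fix: strip the optional leading "/"
// once, then use the single `header` variable consistently for both the
// validity check and the substring calls.
public class SourceHeaderSketch {
    /** Parse "bucket/key" (or "/bucket/key") into {bucket, key}. */
    static String[] parseSourceHeader(String copyHeader) {
        String header = copyHeader.startsWith("/")
            ? copyHeader.substring(1) : copyHeader;
        int pos = header.indexOf('/');
        if (pos == -1) {
            throw new IllegalArgumentException(
                "Copy Source must mention the source bucket and key: "
                + "sourcebucket/sourcekey");
        }
        // Substring from `header`, not `copyHeader`: when the header had a
        // leading "/", the two strings are offset by one character.
        return new String[] {header.substring(0, pos), header.substring(pos + 1)};
    }

    public static void main(String[] args) {
        String[] r = parseSourceHeader("/mybucket/dir/key.txt");
        System.out.println(r[0] + " | " + r[1]); // prints "mybucket | dir/key.txt"
    }
}
```

With the original code, `/mybucket/dir/key.txt` would yield bucket `/mybuck` and a shifted key, which is exactly the off-by-one the review flags.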
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 293873)
Time Spent: 0.5h  (was: 20m)

> Support copy during S3 multipart upload part creation
> -
>
> Key: HDDS-1942
> URL: https://issues.apache.org/jira/browse/HDDS-1942
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: S3
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Blocker
>  Labels: pull-request-available
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> Uploads a part by copying data from an existing object as data source
> Documented here:
> https://docs.aws.amazon.com/AmazonS3/latest/API/mpUploadUploadPartCopy.html



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1942) Support copy during S3 multipart upload part creation

2019-08-11 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1942?focusedWorklogId=292715&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-292715
 ]

ASF GitHub Bot logged work on HDDS-1942:


Author: ASF GitHub Bot
Created on: 11/Aug/19 15:19
Start Date: 11/Aug/19 15:19
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1279: HDDS-1942. 
Support copy during S3 multipart upload part creation
URL: https://github.com/apache/hadoop/pull/1279#issuecomment-520236387
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 65 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 31 | Maven dependency ordering for branch |
   | +1 | mvninstall | 666 | trunk passed |
   | +1 | compile | 412 | trunk passed |
   | +1 | checkstyle | 73 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 970 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 172 | trunk passed |
   | 0 | spotbugs | 477 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 698 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 19 | Maven dependency ordering for patch |
   | +1 | mvninstall | 584 | the patch passed |
   | +1 | compile | 381 | the patch passed |
   | +1 | javac | 381 | the patch passed |
   | +1 | checkstyle | 71 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 671 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 156 | the patch passed |
   | +1 | findbugs | 704 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 334 | hadoop-hdds in the patch passed. |
   | -1 | unit | 2991 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 42 | The patch does not generate ASF License warnings. |
   | | | 9182 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.client.rpc.TestOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestSecureOzoneRpcClient |
   |   | hadoop.hdds.scm.pipeline.TestRatisPipelineProvider |
   |   | hadoop.hdds.scm.pipeline.TestRatisPipelineCreateAndDestory |
   |   | hadoop.ozone.client.rpc.TestMultiBlockWritesWithDnFailures |
   |   | hadoop.ozone.client.rpc.Test2WayCommitInRatis |
   |   | hadoop.ozone.client.rpc.TestOzoneAtRestEncryption |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1279/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1279 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux a8dd26e8e4a8 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / cf5d895 |
   | Default Java | 1.8.0_212 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1279/1/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1279/1/testReport/ |
   | Max. process+thread count | 5065 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/s3gateway hadoop-ozone/dist U: hadoop-ozone |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1279/1/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 292715)
Time Spent: 20m  (was: 10m)

> Support copy during S3 multipart upload part creation
> -
>
> Key: HDDS-1942
> URL: https://issues.apache.org/jira/browse/HDDS-1942
>

[jira] [Work logged] (HDDS-1942) Support copy during S3 multipart upload part creation

2019-08-11 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1942?focusedWorklogId=292697&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-292697
 ]

ASF GitHub Bot logged work on HDDS-1942:


Author: ASF GitHub Bot
Created on: 11/Aug/19 12:45
Start Date: 11/Aug/19 12:45
Worklog Time Spent: 10m 
  Work Description: elek commented on pull request #1279: HDDS-1942. 
Support copy during S3 multipart upload part creation
URL: https://github.com/apache/hadoop/pull/1279
 
 
   Uploads a part by copying data from an existing object as data source
   
   Documented here:
   
   https://docs.aws.amazon.com/AmazonS3/latest/API/mpUploadUploadPartCopy.html
   
   See: https://issues.apache.org/jira/browse/HDDS-1942
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 292697)
Time Spent: 10m
Remaining Estimate: 0h

> Support copy during S3 multipart upload part creation
> -
>
> Key: HDDS-1942
> URL: https://issues.apache.org/jira/browse/HDDS-1942
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: S3
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Blocker
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Uploads a part by copying data from an existing object as data source
> Documented here:
> https://docs.aws.amazon.com/AmazonS3/latest/API/mpUploadUploadPartCopy.html



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org