[jira] [Work logged] (HDDS-1950) S3 MPU part-list call fails if there are no parts

2019-08-28 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1950?focusedWorklogId=303107&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-303107
 ]

ASF GitHub Bot logged work on HDDS-1950:


Author: ASF GitHub Bot
Created on: 28/Aug/19 19:14
Start Date: 28/Aug/19 19:14
Worklog Time Spent: 10m 
  Work Description: anuengineer commented on pull request #1278: HDDS-1950. 
S3 MPU part-list call fails if there are no parts
URL: https://github.com/apache/hadoop/pull/1278
 
 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 303107)
Time Spent: 4h  (was: 3h 50m)

> S3 MPU part-list call fails if there are no parts
> -
>
> Key: HDDS-1950
> URL: https://issues.apache.org/jira/browse/HDDS-1950
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: S3
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 4h
>  Remaining Estimate: 0h
>
> If an S3 multipart upload is created but no part has been uploaded yet, the
> part-list call fails with HTTP 500:
> Create an MPU:
> {code}
> aws s3api --endpoint http://localhost: create-multipart-upload 
> --bucket=docker --key=testkeu 
> {
> "Bucket": "docker",
> "Key": "testkeu",
> "UploadId": "85343e71-4c16-4a75-bb55-01f56a9339b2-102592678478217234"
> }
> {code}
> List the parts:
> {code}
> aws s3api --endpoint http://localhost: list-parts  --bucket=docker 
> --key=testkeu 
> --upload-id=85343e71-4c16-4a75-bb55-01f56a9339b2-102592678478217234
> {code}
> An exception is thrown on the server side because KeyManagerImpl.listParts
> retrieves the ReplicationType from the first part:
> {code}
> HddsProtos.ReplicationType replicationType =
> partKeyInfoMap.firstEntry().getValue().getPartKeyInfo().getType();
> {code}
> That first part does not exist yet in this use case.
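
For illustration, a minimal self-contained Java sketch of the failure mode (plain java.util.TreeMap and String stand in for the Ozone part map and part info; the class name and fallback value are placeholders, not the actual OM code): TreeMap.firstEntry() returns null on an empty map, so the chained call fails with a NullPointerException, which the S3 gateway reports as HTTP 500. An emptiness guard with a fallback taken from the upload's own metadata avoids it.

{code}
import java.util.TreeMap;

public class ListPartsSketch {
  public static void main(String[] args) {
    // A fresh multipart upload with no uploaded parts: the part map is empty.
    TreeMap<Integer, String> partKeyInfoMap = new TreeMap<>();

    // The problematic pattern: firstEntry() is null for an empty map, so the
    // chained dereference throws NullPointerException (reported as HTTP 500).
    try {
      String replicationType = partKeyInfoMap.firstEntry().getValue();
      System.out.println("replication type: " + replicationType);
    } catch (NullPointerException e) {
      System.out.println("empty part map -> NullPointerException -> HTTP 500");
    }

    // Guarded variant: fall back to metadata that exists even with zero parts.
    String replicationType = partKeyInfoMap.isEmpty()
        ? "RATIS" // placeholder fallback; the real value comes from the upload metadata
        : partKeyInfoMap.firstEntry().getValue();
    System.out.println("replication type: " + replicationType);
  }
}
{code}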



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1950) S3 MPU part-list call fails if there are no parts

2019-08-28 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1950?focusedWorklogId=303106&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-303106
 ]

ASF GitHub Bot logged work on HDDS-1950:


Author: ASF GitHub Bot
Created on: 28/Aug/19 19:14
Start Date: 28/Aug/19 19:14
Worklog Time Spent: 10m 
  Work Description: anuengineer commented on issue #1278: HDDS-1950. S3 MPU 
part-list call fails if there are no parts
URL: https://github.com/apache/hadoop/pull/1278#issuecomment-525883845
 
 
   Committed to the trunk. Thanks for the contribution @elek 
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 303106)
Time Spent: 3h 50m  (was: 3h 40m)

> S3 MPU part-list call fails if there are no parts
> -
>
> Key: HDDS-1950
> URL: https://issues.apache.org/jira/browse/HDDS-1950
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: S3
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 3h 50m
>  Remaining Estimate: 0h
>
> If an S3 multipart upload is created but no part has been uploaded yet, the
> part-list call fails with HTTP 500:
> Create an MPU:
> {code}
> aws s3api --endpoint http://localhost: create-multipart-upload 
> --bucket=docker --key=testkeu 
> {
> "Bucket": "docker",
> "Key": "testkeu",
> "UploadId": "85343e71-4c16-4a75-bb55-01f56a9339b2-102592678478217234"
> }
> {code}
> List the parts:
> {code}
> aws s3api --endpoint http://localhost: list-parts  --bucket=docker 
> --key=testkeu 
> --upload-id=85343e71-4c16-4a75-bb55-01f56a9339b2-102592678478217234
> {code}
> An exception is thrown on the server side because KeyManagerImpl.listParts
> retrieves the ReplicationType from the first part:
> {code}
> HddsProtos.ReplicationType replicationType =
> partKeyInfoMap.firstEntry().getValue().getPartKeyInfo().getType();
> {code}
> That first part does not exist yet in this use case.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1950) S3 MPU part-list call fails if there are no parts

2019-08-28 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1950?focusedWorklogId=303104&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-303104
 ]

ASF GitHub Bot logged work on HDDS-1950:


Author: ASF GitHub Bot
Created on: 28/Aug/19 19:07
Start Date: 28/Aug/19 19:07
Worklog Time Spent: 10m 
  Work Description: anuengineer commented on issue #1278: HDDS-1950. S3 MPU 
part-list call fails if there are no parts
URL: https://github.com/apache/hadoop/pull/1278#issuecomment-525881077
 
 
   Test failures do not look related to this patch. I am going to commit this 
patch. @lokeshj1703 and @bharatviswa504, thanks for the reviews.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 303104)
Time Spent: 3h 40m  (was: 3.5h)

> S3 MPU part-list call fails if there are no parts
> -
>
> Key: HDDS-1950
> URL: https://issues.apache.org/jira/browse/HDDS-1950
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: S3
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 3h 40m
>  Remaining Estimate: 0h
>
> If an S3 multipart upload is created but no part has been uploaded yet, the
> part-list call fails with HTTP 500:
> Create an MPU:
> {code}
> aws s3api --endpoint http://localhost: create-multipart-upload 
> --bucket=docker --key=testkeu 
> {
> "Bucket": "docker",
> "Key": "testkeu",
> "UploadId": "85343e71-4c16-4a75-bb55-01f56a9339b2-102592678478217234"
> }
> {code}
> List the parts:
> {code}
> aws s3api --endpoint http://localhost: list-parts  --bucket=docker 
> --key=testkeu 
> --upload-id=85343e71-4c16-4a75-bb55-01f56a9339b2-102592678478217234
> {code}
> An exception is thrown on the server side because KeyManagerImpl.listParts
> retrieves the ReplicationType from the first part:
> {code}
> HddsProtos.ReplicationType replicationType =
> partKeyInfoMap.firstEntry().getValue().getPartKeyInfo().getType();
> {code}
> That first part does not exist yet in this use case.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1950) S3 MPU part-list call fails if there are no parts

2019-08-28 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1950?focusedWorklogId=303092&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-303092
 ]

ASF GitHub Bot logged work on HDDS-1950:


Author: ASF GitHub Bot
Created on: 28/Aug/19 18:42
Start Date: 28/Aug/19 18:42
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1278: HDDS-1950. S3 
MPU part-list call fails if there are no parts
URL: https://github.com/apache/hadoop/pull/1278#issuecomment-525872187
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|:--------|:--------|
   | 0 | reexec | 131 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 1 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 867 | trunk passed |
   | +1 | compile | 486 | trunk passed |
   | +1 | checkstyle | 107 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | -1 | shadedclient | 1275 | branch has errors when building and testing our 
client artifacts. |
   | -1 | javadoc | 43 | hadoop-hdds in trunk failed. |
   | -1 | javadoc | 46 | hadoop-ozone in trunk failed. |
   | 0 | spotbugs | 1472 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | -1 | findbugs | 51 | hadoop-hdds in trunk failed. |
   | -1 | findbugs | 46 | hadoop-ozone in trunk failed. |
   ||| _ Patch Compile Tests _ |
   | -1 | mvninstall | 50 | hadoop-hdds in the patch failed. |
   | -1 | mvninstall | 46 | hadoop-ozone in the patch failed. |
   | -1 | compile | 42 | hadoop-hdds in the patch failed. |
   | -1 | compile | 46 | hadoop-ozone in the patch failed. |
   | -1 | javac | 42 | hadoop-hdds in the patch failed. |
   | -1 | javac | 46 | hadoop-ozone in the patch failed. |
   | -0 | checkstyle | 41 | The patch fails to run checkstyle in hadoop-ozone |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | -1 | shadedclient | 70 | patch has errors when building and testing our 
client artifacts. |
   | -1 | javadoc | 39 | hadoop-hdds in the patch failed. |
   | -1 | javadoc | 43 | hadoop-ozone in the patch failed. |
   | -1 | findbugs | 43 | hadoop-hdds in the patch failed. |
   | -1 | findbugs | 41 | hadoop-ozone in the patch failed. |
   ||| _ Other Tests _ |
   | -1 | unit | 40 | hadoop-hdds in the patch failed. |
   | -1 | unit | 100 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 74 | The patch does not generate ASF License warnings. |
   | | | 4024 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.client.TestHddsClientUtils |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.0 Server=19.03.0 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1278/10/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1278 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux df40dc7dffba 4.15.0-52-generic #56-Ubuntu SMP Tue Jun 4 
22:49:08 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / c7d426d |
   | Default Java | 1.8.0_212 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1278/10/artifact/out/branch-javadoc-hadoop-hdds.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1278/10/artifact/out/branch-javadoc-hadoop-ozone.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1278/10/artifact/out/branch-findbugs-hadoop-hdds.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1278/10/artifact/out/branch-findbugs-hadoop-ozone.txt
 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1278/10/artifact/out/patch-mvninstall-hadoop-hdds.txt
 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1278/10/artifact/out/patch-mvninstall-hadoop-ozone.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1278/10/artifact/out/patch-compile-hadoop-hdds.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1278/10/artifact/out/patch-compile-hadoop-ozone.txt
 |
   | javac | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1278/10/artifact/out/patch-compile-hadoop-hdds.txt
 |
   | javac | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1278/10/artifact/out/patch-compile-hadoop-ozone.txt
 |
   | checkstyle | 

[jira] [Work logged] (HDDS-1950) S3 MPU part-list call fails if there are no parts

2019-08-28 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1950?focusedWorklogId=302934&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-302934
 ]

ASF GitHub Bot logged work on HDDS-1950:


Author: ASF GitHub Bot
Created on: 28/Aug/19 14:49
Start Date: 28/Aug/19 14:49
Worklog Time Spent: 10m 
  Work Description: elek commented on issue #1278: HDDS-1950. S3 MPU 
part-list call fails if there are no parts
URL: https://github.com/apache/hadoop/pull/1278#issuecomment-525780440
 
 
   /retest
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 302934)
Time Spent: 3h 20m  (was: 3h 10m)

> S3 MPU part-list call fails if there are no parts
> -
>
> Key: HDDS-1950
> URL: https://issues.apache.org/jira/browse/HDDS-1950
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: S3
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 3h 20m
>  Remaining Estimate: 0h
>
> If an S3 multipart upload is created but no part has been uploaded yet, the
> part-list call fails with HTTP 500:
> Create an MPU:
> {code}
> aws s3api --endpoint http://localhost: create-multipart-upload 
> --bucket=docker --key=testkeu 
> {
> "Bucket": "docker",
> "Key": "testkeu",
> "UploadId": "85343e71-4c16-4a75-bb55-01f56a9339b2-102592678478217234"
> }
> {code}
> List the parts:
> {code}
> aws s3api --endpoint http://localhost: list-parts  --bucket=docker 
> --key=testkeu 
> --upload-id=85343e71-4c16-4a75-bb55-01f56a9339b2-102592678478217234
> {code}
> An exception is thrown on the server side because KeyManagerImpl.listParts
> retrieves the ReplicationType from the first part:
> {code}
> HddsProtos.ReplicationType replicationType =
> partKeyInfoMap.firstEntry().getValue().getPartKeyInfo().getType();
> {code}
> That first part does not exist yet in this use case.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1950) S3 MPU part-list call fails if there are no parts

2019-08-27 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1950?focusedWorklogId=301855&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-301855
 ]

ASF GitHub Bot logged work on HDDS-1950:


Author: ASF GitHub Bot
Created on: 27/Aug/19 10:16
Start Date: 27/Aug/19 10:16
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1278: HDDS-1950. S3 
MPU part-list call fails if there are no parts
URL: https://github.com/apache/hadoop/pull/1278#issuecomment-525237729
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|:--------|:--------|
   | 0 | reexec | 42 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 1 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 600 | trunk passed |
   | +1 | compile | 385 | trunk passed |
   | +1 | checkstyle | 80 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 848 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 176 | trunk passed |
   | 0 | spotbugs | 449 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 661 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 564 | the patch passed |
   | +1 | compile | 400 | the patch passed |
   | +1 | javac | 400 | the patch passed |
   | +1 | checkstyle | 81 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 693 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 172 | the patch passed |
   | +1 | findbugs | 682 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 336 | hadoop-hdds in the patch passed. |
   | -1 | unit | 1773 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 51 | The patch does not generate ASF License warnings. |
   | | | 7693 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.client.rpc.TestSecureOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClientWithRatis |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClient |
   |   | 
hadoop.ozone.container.common.statemachine.commandhandler.TestBlockDeletion |
   |   | hadoop.ozone.client.rpc.TestOzoneAtRestEncryption |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClientForAclAuditLog |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1278/8/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1278 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 6b3926c09351 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 3329257 |
   | Default Java | 1.8.0_222 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1278/8/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1278/8/testReport/ |
   | Max. process+thread count | 5164 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/ozone-manager U: hadoop-ozone/ozone-manager |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1278/8/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 301855)
Time Spent: 3h  (was: 2h 50m)

> S3 MPU part-list call fails if there are no parts
> -
>
> Key: HDDS-1950
> URL: https://issues.apache.org/jira/browse/HDDS-1950
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: S3
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 3h
>  Remaining Estimate: 0h
>
> If an S3 

[jira] [Work logged] (HDDS-1950) S3 MPU part-list call fails if there are no parts

2019-08-23 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1950?focusedWorklogId=300064&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-300064
 ]

ASF GitHub Bot logged work on HDDS-1950:


Author: ASF GitHub Bot
Created on: 23/Aug/19 06:07
Start Date: 23/Aug/19 06:07
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on issue #1278: HDDS-1950. S3 
MPU part-list call fails if there are no parts
URL: https://github.com/apache/hadoop/pull/1278#issuecomment-524185207
 
 
   +1 LGTM.  (Need to verify test results)
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 300064)
Time Spent: 2h 40m  (was: 2.5h)

> S3 MPU part-list call fails if there are no parts
> -
>
> Key: HDDS-1950
> URL: https://issues.apache.org/jira/browse/HDDS-1950
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: S3
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 2h 40m
>  Remaining Estimate: 0h
>
> If an S3 multipart upload is created but no part has been uploaded yet, the
> part-list call fails with HTTP 500:
> Create an MPU:
> {code}
> aws s3api --endpoint http://localhost: create-multipart-upload 
> --bucket=docker --key=testkeu 
> {
> "Bucket": "docker",
> "Key": "testkeu",
> "UploadId": "85343e71-4c16-4a75-bb55-01f56a9339b2-102592678478217234"
> }
> {code}
> List the parts:
> {code}
> aws s3api --endpoint http://localhost: list-parts  --bucket=docker 
> --key=testkeu 
> --upload-id=85343e71-4c16-4a75-bb55-01f56a9339b2-102592678478217234
> {code}
> An exception is thrown on the server side because KeyManagerImpl.listParts
> retrieves the ReplicationType from the first part:
> {code}
> HddsProtos.ReplicationType replicationType =
> partKeyInfoMap.firstEntry().getValue().getPartKeyInfo().getType();
> {code}
> That first part does not exist yet in this use case.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1950) S3 MPU part-list call fails if there are no parts

2019-08-23 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1950?focusedWorklogId=300066&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-300066
 ]

ASF GitHub Bot logged work on HDDS-1950:


Author: ASF GitHub Bot
Created on: 23/Aug/19 06:07
Start Date: 23/Aug/19 06:07
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on issue #1278: HDDS-1950. S3 
MPU part-list call fails if there are no parts
URL: https://github.com/apache/hadoop/pull/1278#issuecomment-524185207
 
 
   +1 LGTM.  (Need to verify test failures)
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 300066)
Time Spent: 2h 50m  (was: 2h 40m)

> S3 MPU part-list call fails if there are no parts
> -
>
> Key: HDDS-1950
> URL: https://issues.apache.org/jira/browse/HDDS-1950
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: S3
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 2h 50m
>  Remaining Estimate: 0h
>
> If an S3 multipart upload is created but no part has been uploaded yet, the
> part-list call fails with HTTP 500:
> Create an MPU:
> {code}
> aws s3api --endpoint http://localhost: create-multipart-upload 
> --bucket=docker --key=testkeu 
> {
> "Bucket": "docker",
> "Key": "testkeu",
> "UploadId": "85343e71-4c16-4a75-bb55-01f56a9339b2-102592678478217234"
> }
> {code}
> List the parts:
> {code}
> aws s3api --endpoint http://localhost: list-parts  --bucket=docker 
> --key=testkeu 
> --upload-id=85343e71-4c16-4a75-bb55-01f56a9339b2-102592678478217234
> {code}
> An exception is thrown on the server side because KeyManagerImpl.listParts
> retrieves the ReplicationType from the first part:
> {code}
> HddsProtos.ReplicationType replicationType =
> partKeyInfoMap.firstEntry().getValue().getPartKeyInfo().getType();
> {code}
> That first part does not exist yet in this use case.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1950) S3 MPU part-list call fails if there are no parts

2019-08-23 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1950?focusedWorklogId=300063&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-300063
 ]

ASF GitHub Bot logged work on HDDS-1950:


Author: ASF GitHub Bot
Created on: 23/Aug/19 06:06
Start Date: 23/Aug/19 06:06
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on issue #1278: HDDS-1950. S3 
MPU part-list call fails if there are no parts
URL: https://github.com/apache/hadoop/pull/1278#issuecomment-524185207
 
 
   +1 LGTM. 
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 300063)
Time Spent: 2.5h  (was: 2h 20m)

> S3 MPU part-list call fails if there are no parts
> -
>
> Key: HDDS-1950
> URL: https://issues.apache.org/jira/browse/HDDS-1950
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: S3
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 2.5h
>  Remaining Estimate: 0h
>
> If an S3 multipart upload is created but no part has been uploaded yet, the
> part-list call fails with HTTP 500:
> Create an MPU:
> {code}
> aws s3api --endpoint http://localhost: create-multipart-upload 
> --bucket=docker --key=testkeu 
> {
> "Bucket": "docker",
> "Key": "testkeu",
> "UploadId": "85343e71-4c16-4a75-bb55-01f56a9339b2-102592678478217234"
> }
> {code}
> List the parts:
> {code}
> aws s3api --endpoint http://localhost: list-parts  --bucket=docker 
> --key=testkeu 
> --upload-id=85343e71-4c16-4a75-bb55-01f56a9339b2-102592678478217234
> {code}
> An exception is thrown on the server side because KeyManagerImpl.listParts
> retrieves the ReplicationType from the first part:
> {code}
> HddsProtos.ReplicationType replicationType =
> partKeyInfoMap.firstEntry().getValue().getPartKeyInfo().getType();
> {code}
> That first part does not exist yet in this use case.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1950) S3 MPU part-list call fails if there are no parts

2019-08-22 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1950?focusedWorklogId=299441&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-299441
 ]

ASF GitHub Bot logged work on HDDS-1950:


Author: ASF GitHub Bot
Created on: 22/Aug/19 13:39
Start Date: 22/Aug/19 13:39
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1278: HDDS-1950. S3 
MPU part-list call fails if there are no parts
URL: https://github.com/apache/hadoop/pull/1278#issuecomment-523910786
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|:--------|:--------|
   | 0 | reexec | 69 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 1 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 636 | trunk passed |
   | +1 | compile | 386 | trunk passed |
   | +1 | checkstyle | 69 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 961 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 176 | trunk passed |
   | 0 | spotbugs | 507 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 750 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 639 | the patch passed |
   | +1 | compile | 435 | the patch passed |
   | +1 | javac | 435 | the patch passed |
   | +1 | checkstyle | 96 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 795 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 170 | the patch passed |
   | +1 | findbugs | 696 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 339 | hadoop-hdds in the patch passed. |
   | -1 | unit | 2335 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 46 | The patch does not generate ASF License warnings. |
   | | | 8758 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.scm.node.TestQueryNode |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClientWithRatis |
   |   | hadoop.ozone.client.rpc.Test2WayCommitInRatis |
   |   | hadoop.ozone.client.rpc.TestOzoneAtRestEncryption |
   |   | hadoop.ozone.client.rpc.TestSecureOzoneRpcClient |
   |   | hadoop.ozone.container.server.TestSecureContainerServer |
   |   | hadoop.hdds.scm.pipeline.TestRatisPipelineCreateAndDestory |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1278/7/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1278 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux dcfa853f84ea 4.15.0-54-generic #58-Ubuntu SMP Mon Jun 24 
10:55:24 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / ee7c261 |
   | Default Java | 1.8.0_222 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1278/7/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1278/7/testReport/ |
   | Max. process+thread count | 5345 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/ozone-manager U: hadoop-ozone/ozone-manager |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1278/7/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 299441)
Time Spent: 2h 20m  (was: 2h 10m)

> S3 MPU part-list call fails if there are no parts
> -
>
> Key: HDDS-1950
> URL: https://issues.apache.org/jira/browse/HDDS-1950
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: S3
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
>  Labels: 

[jira] [Work logged] (HDDS-1950) S3 MPU part-list call fails if there are no parts

2019-08-21 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1950?focusedWorklogId=299182&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-299182
 ]

ASF GitHub Bot logged work on HDDS-1950:


Author: ASF GitHub Bot
Created on: 22/Aug/19 05:53
Start Date: 22/Aug/19 05:53
Worklog Time Spent: 10m 
  Work Description: lokeshj1703 commented on issue #1278: HDDS-1950. S3 MPU 
part-list call fails if there are no parts
URL: https://github.com/apache/hadoop/pull/1278#issuecomment-523759252
 
 
   @elek Thanks for updating the PR! The changes look good to me. There is an 
acceptance test failure which I am not able to check.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 299182)
Time Spent: 2h 10m  (was: 2h)

> S3 MPU part-list call fails if there are no parts
> -
>
> Key: HDDS-1950
> URL: https://issues.apache.org/jira/browse/HDDS-1950
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: S3
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 2h 10m
>  Remaining Estimate: 0h
>
> If an S3 multipart upload is created but no part has been uploaded yet, the
> part-list call fails with HTTP 500:
> Create an MPU:
> {code}
> aws s3api --endpoint http://localhost: create-multipart-upload 
> --bucket=docker --key=testkeu 
> {
> "Bucket": "docker",
> "Key": "testkeu",
> "UploadId": "85343e71-4c16-4a75-bb55-01f56a9339b2-102592678478217234"
> }
> {code}
> List the parts:
> {code}
> aws s3api --endpoint http://localhost: list-parts  --bucket=docker 
> --key=testkeu 
> --upload-id=85343e71-4c16-4a75-bb55-01f56a9339b2-102592678478217234
> {code}
> An exception is thrown on the server side because KeyManagerImpl.listParts
> retrieves the ReplicationType from the first part:
> {code}
> HddsProtos.ReplicationType replicationType =
> partKeyInfoMap.firstEntry().getValue().getPartKeyInfo().getType();
> {code}
> That first part does not exist yet in this use case.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1950) S3 MPU part-list call fails if there are no parts

2019-08-21 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1950?focusedWorklogId=299179&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-299179
 ]

ASF GitHub Bot logged work on HDDS-1950:


Author: ASF GitHub Bot
Created on: 22/Aug/19 05:39
Start Date: 22/Aug/19 05:39
Worklog Time Spent: 10m 
  Work Description: lokeshj1703 commented on pull request #1278: HDDS-1950. 
S3 MPU part-list call fails if there are no parts
URL: https://github.com/apache/hadoop/pull/1278#discussion_r316503317
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/test/java/org/apache/hadoop/ozone/om/TestKeyManagerUnit.java
 ##
 @@ -0,0 +1,93 @@
+package org.apache.hadoop.ozone.om;
+
+import java.io.IOException;
+import java.util.ArrayList;
+
+import org.apache.hadoop.hdds.HddsConfigKeys;
+import org.apache.hadoop.hdds.conf.OzoneConfiguration;
+import org.apache.hadoop.hdds.protocol.StorageType;
+import org.apache.hadoop.hdds.protocol.proto.HddsProtos.ReplicationFactor;
+import org.apache.hadoop.hdds.protocol.proto.HddsProtos.ReplicationType;
+import org.apache.hadoop.hdds.scm.protocol.ScmBlockLocationProtocol;
+import org.apache.hadoop.ozone.om.helpers.OmBucketInfo;
+import org.apache.hadoop.ozone.om.helpers.OmKeyArgs;
+import org.apache.hadoop.ozone.om.helpers.OmKeyArgs.Builder;
+import org.apache.hadoop.ozone.om.helpers.OmMultipartInfo;
+import org.apache.hadoop.ozone.om.helpers.OmMultipartUploadListParts;
+import org.apache.hadoop.ozone.security.OzoneBlockTokenSecretManager;
+import org.apache.hadoop.test.GenericTestUtils;
+
+import org.junit.Assert;
+import org.junit.Before;
+import org.junit.Test;
+import org.mockito.Mockito;
+
+/**
+ * Unit test key manager.
+ */
+public class TestKeyManagerUnit {
 
 Review comment:
   Yes, that makes sense. We can move some unit tests which do not require scm 
and other components involvement later into this test.
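
As a rough self-contained illustration of that unit-test style (JUnit 4 plus Mockito; PartStore and replicationTypeOf are hypothetical stand-ins, not the real Ozone interfaces), the zero-part case can be exercised with a mocked collaborator and no SCM or mini cluster:

{code}
import java.util.Collections;
import java.util.List;

import org.junit.Assert;
import org.junit.Test;
import org.mockito.Mockito;

public class ListPartsWithoutPartsSketchTest {

  /** Hypothetical collaborator that would normally read persisted state. */
  public interface PartStore {
    List<String> getParts(String uploadId);
  }

  /** Minimal logic under test: never dereference the first part blindly. */
  static String replicationTypeOf(List<String> parts, String fallback) {
    return parts.isEmpty() ? fallback : parts.get(0);
  }

  @Test
  public void fallsBackWhenNoPartHasBeenUploaded() {
    // Mock the collaborator so the test needs no SCM or mini cluster.
    PartStore store = Mockito.mock(PartStore.class);
    Mockito.when(store.getParts("upload-1"))
        .thenReturn(Collections.emptyList());

    String type = replicationTypeOf(store.getParts("upload-1"), "RATIS");

    // Expectation: a usable fallback value, not an exception.
    Assert.assertEquals("RATIS", type);
  }
}
{code}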
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 299179)
Time Spent: 2h  (was: 1h 50m)

> S3 MPU part-list call fails if there are no parts
> -
>
> Key: HDDS-1950
> URL: https://issues.apache.org/jira/browse/HDDS-1950
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: S3
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 2h
>  Remaining Estimate: 0h
>
> If an S3 multipart upload is created but no part has been uploaded yet, the
> part-list call fails with HTTP 500:
> Create an MPU:
> {code}
> aws s3api --endpoint http://localhost: create-multipart-upload 
> --bucket=docker --key=testkeu 
> {
> "Bucket": "docker",
> "Key": "testkeu",
> "UploadId": "85343e71-4c16-4a75-bb55-01f56a9339b2-102592678478217234"
> }
> {code}
> List the parts:
> {code}
> aws s3api --endpoint http://localhost: list-parts  --bucket=docker 
> --key=testkeu 
> --upload-id=85343e71-4c16-4a75-bb55-01f56a9339b2-102592678478217234
> {code}
> An exception is thrown on the server side because KeyManagerImpl.listParts
> retrieves the ReplicationType from the first part:
> {code}
> HddsProtos.ReplicationType replicationType =
> partKeyInfoMap.firstEntry().getValue().getPartKeyInfo().getType();
> {code}
> That first part does not exist yet in this use case.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1950) S3 MPU part-list call fails if there are no parts

2019-08-21 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1950?focusedWorklogId=298528&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-298528
 ]

ASF GitHub Bot logged work on HDDS-1950:


Author: ASF GitHub Bot
Created on: 21/Aug/19 09:08
Start Date: 21/Aug/19 09:08
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1278: HDDS-1950. S3 
MPU part-list call fails if there are no parts
URL: https://github.com/apache/hadoop/pull/1278#issuecomment-523369506
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|:--------|:--------|
   | 0 | reexec | 76 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 1 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 601 | trunk passed |
   | +1 | compile | 362 | trunk passed |
   | +1 | checkstyle | 72 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 922 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 187 | trunk passed |
   | 0 | spotbugs | 429 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 636 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 557 | the patch passed |
   | +1 | compile | 394 | the patch passed |
   | +1 | javac | 394 | the patch passed |
   | +1 | checkstyle | 75 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 715 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 186 | the patch passed |
   | +1 | findbugs | 722 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 351 | hadoop-hdds in the patch passed. |
   | -1 | unit | 2904 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 53 | The patch does not generate ASF License warnings. |
   | | | 8992 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.client.rpc.TestOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClientWithRatis |
   |   | hadoop.ozone.client.rpc.TestSecureOzoneRpcClient |
   |   | hadoop.ozone.TestStorageContainerManager |
   |   | hadoop.hdds.scm.pipeline.TestRatisPipelineCreateAndDestory |
   |   | hadoop.ozone.container.server.TestSecureContainerServer |
   |   | hadoop.ozone.web.TestOzoneWebAccess |
   |   | hadoop.ozone.web.TestOzoneVolumes |
   |   | hadoop.ozone.container.ozoneimpl.TestOzoneContainer |
   |   | hadoop.ozone.web.TestOzoneRestWithMiniCluster |
   |   | hadoop.ozone.client.rpc.TestContainerStateMachineFailures |
   |   | hadoop.ozone.web.client.TestKeysRatis |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1278/6/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1278 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 96d0e29a6abe 4.15.0-48-generic #51-Ubuntu SMP Wed Apr 3 
08:28:49 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 8aaf5e1 |
   | Default Java | 1.8.0_212 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1278/6/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1278/6/testReport/ |
   | Max. process+thread count | 5314 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/ozone-manager U: hadoop-ozone/ozone-manager |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1278/6/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 298528)
Time Spent: 1h 50m  (was: 1h 40m)

> S3 MPU part-list call fails if there are no parts
> -
>
> Key: HDDS-1950
> URL: https://issues.apache.org/jira/browse/HDDS-1950
> 

[jira] [Work logged] (HDDS-1950) S3 MPU part-list call fails if there are no parts

2019-08-21 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1950?focusedWorklogId=298519&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-298519
 ]

ASF GitHub Bot logged work on HDDS-1950:


Author: ASF GitHub Bot
Created on: 21/Aug/19 08:38
Start Date: 21/Aug/19 08:38
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1278: HDDS-1950. S3 
MPU part-list call fails if there are no parts
URL: https://github.com/apache/hadoop/pull/1278#issuecomment-523357940
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|:--------|:--------|
   | 0 | reexec | 35 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 1 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 1 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 625 | trunk passed |
   | +1 | compile | 384 | trunk passed |
   | +1 | checkstyle | 75 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 950 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 165 | trunk passed |
   | 0 | spotbugs | 468 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 671 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 604 | the patch passed |
   | +1 | compile | 405 | the patch passed |
   | +1 | javac | 405 | the patch passed |
   | +1 | checkstyle | 87 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 763 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 173 | the patch passed |
   | +1 | findbugs | 707 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 340 | hadoop-hdds in the patch passed. |
   | -1 | unit | 2432 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 57 | The patch does not generate ASF License warnings. |
   | | | 8644 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.client.rpc.TestFailureHandlingByClient |
   |   | hadoop.ozone.client.rpc.TestOzoneClientRetriesOnException |
   |   | hadoop.ozone.container.server.TestSecureContainerServer |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1278/5/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1278 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 5455e21c792f 4.15.0-54-generic #58-Ubuntu SMP Mon Jun 24 
10:55:24 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 8aaf5e1 |
   | Default Java | 1.8.0_222 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1278/5/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1278/5/testReport/ |
   | Max. process+thread count | 4917 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/ozone-manager U: hadoop-ozone/ozone-manager |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1278/5/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 298519)
Time Spent: 1h 40m  (was: 1.5h)

> S3 MPU part-list call fails if there are no parts
> -
>
> Key: HDDS-1950
> URL: https://issues.apache.org/jira/browse/HDDS-1950
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: S3
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 40m
>  Remaining Estimate: 0h
>
> If an S3 multipart upload is created but no part has been uploaded yet, the
> part-list call fails with HTTP 500:
> Create an MPU:
> {code}
> aws s3api --endpoint http://localhost: 

[jira] [Work logged] (HDDS-1950) S3 MPU part-list call fails if there are no parts

2019-08-21 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1950?focusedWorklogId=298463&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-298463
 ]

ASF GitHub Bot logged work on HDDS-1950:


Author: ASF GitHub Bot
Created on: 21/Aug/19 06:38
Start Date: 21/Aug/19 06:38
Worklog Time Spent: 10m 
  Work Description: elek commented on pull request #1278: HDDS-1950. S3 MPU 
part-list call fails if there are no parts
URL: https://github.com/apache/hadoop/pull/1278#discussion_r316018169
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/KeyManagerImpl.java
 ##
 @@ -1329,8 +1329,16 @@ public OmMultipartUploadListParts listParts(String volumeName,
         multipartKeyInfo.getPartKeyInfoMap();
     Iterator<Map.Entry<Integer, PartKeyInfo>> partKeyInfoMapIterator =
         partKeyInfoMap.entrySet().iterator();
-    HddsProtos.ReplicationType replicationType =
-        partKeyInfoMap.firstEntry().getValue().getPartKeyInfo().getType();
+
+    OmKeyInfo omKeyInfo =
+        metadataManager.getOpenKeyTable().get(multipartKey);
+
+    if (omKeyInfo == null) {
+      throw new IllegalStateException(
+          "Open key is missing for multipart upload " + multipartKey);
+    }
+
+    HddsProtos.ReplicationType replicationType = omKeyInfo.getType();
 
 Review comment:
   Thanks for the idea, @lokeshj1703 and @bharatviswa504.
   
   Updated the patch according to this suggestion.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 298463)
Time Spent: 1.5h  (was: 1h 20m)

> S3 MPU part-list call fails if there are no parts
> -
>
> Key: HDDS-1950
> URL: https://issues.apache.org/jira/browse/HDDS-1950
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: S3
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> If an S3 multipart upload is created but no part has been uploaded yet, the
> part-list call fails with HTTP 500:
> Create an MPU:
> {code}
> aws s3api --endpoint http://localhost: create-multipart-upload 
> --bucket=docker --key=testkeu 
> {
> "Bucket": "docker",
> "Key": "testkeu",
> "UploadId": "85343e71-4c16-4a75-bb55-01f56a9339b2-102592678478217234"
> }
> {code}
> List the parts:
> {code}
> aws s3api --endpoint http://localhost: list-parts  --bucket=docker 
> --key=testkeu 
> --upload-id=85343e71-4c16-4a75-bb55-01f56a9339b2-102592678478217234
> {code}
> An exception is thrown on the server side because KeyManagerImpl.listParts
> retrieves the ReplicationType from the first part:
> {code}
> HddsProtos.ReplicationType replicationType =
> partKeyInfoMap.firstEntry().getValue().getPartKeyInfo().getType();
> {code}
> That first part does not exist yet in this use case.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1950) S3 MPU part-list call fails if there are no parts

2019-08-20 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1950?focusedWorklogId=298208&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-298208
 ]

ASF GitHub Bot logged work on HDDS-1950:


Author: ASF GitHub Bot
Created on: 20/Aug/19 21:30
Start Date: 20/Aug/19 21:30
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1278: HDDS-1950. S3 
MPU part-list call fails if there are no parts
URL: https://github.com/apache/hadoop/pull/1278#issuecomment-523202633
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|:--------|:--------|
   | 0 | reexec | 96 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 1 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 1 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | -1 | mvninstall | 181 | hadoop-ozone in trunk failed. |
   | -1 | compile | 76 | hadoop-ozone in trunk failed. |
   | +1 | checkstyle | 126 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 1687 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 458 | trunk passed |
   | 0 | spotbugs | 568 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | -1 | findbugs | 306 | hadoop-ozone in trunk failed. |
   ||| _ Patch Compile Tests _ |
   | -1 | mvninstall | 264 | hadoop-ozone in the patch failed. |
   | -1 | compile | 185 | hadoop-ozone in the patch failed. |
   | -1 | javac | 185 | hadoop-ozone in the patch failed. |
   | +1 | checkstyle | 106 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 761 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 158 | the patch passed |
   | -1 | findbugs | 104 | hadoop-ozone in the patch failed. |
   ||| _ Other Tests _ |
   | -1 | unit | 343 | hadoop-hdds in the patch failed. |
   | -1 | unit | 110 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 30 | The patch does not generate ASF License warnings. |
   | | | 6625 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdds.scm.block.TestBlockManager |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1278/4/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1278 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux f8aafe161c1e 4.15.0-48-generic #51-Ubuntu SMP Wed Apr 3 
08:28:49 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 4cb22cd |
   | Default Java | 1.8.0_212 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1278/4/artifact/out/branch-mvninstall-hadoop-ozone.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1278/4/artifact/out/branch-compile-hadoop-ozone.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1278/4/artifact/out/branch-findbugs-hadoop-ozone.txt
 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1278/4/artifact/out/patch-mvninstall-hadoop-ozone.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1278/4/artifact/out/patch-compile-hadoop-ozone.txt
 |
   | javac | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1278/4/artifact/out/patch-compile-hadoop-ozone.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1278/4/artifact/out/patch-findbugs-hadoop-ozone.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1278/4/artifact/out/patch-unit-hadoop-hdds.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1278/4/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1278/4/testReport/ |
   | Max. process+thread count | 428 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/ozone-manager U: hadoop-ozone/ozone-manager |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1278/4/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to 

[jira] [Work logged] (HDDS-1950) S3 MPU part-list call fails if there are no parts

2019-08-20 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1950?focusedWorklogId=297929&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-297929
 ]

ASF GitHub Bot logged work on HDDS-1950:


Author: ASF GitHub Bot
Created on: 20/Aug/19 14:43
Start Date: 20/Aug/19 14:43
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1278: HDDS-1950. S3 
MPU part-list call fails if there are no parts
URL: https://github.com/apache/hadoop/pull/1278#issuecomment-523047428
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|:--------|:--------|
   | 0 | reexec | 153 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 1 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 813 | trunk passed |
   | +1 | compile | 465 | trunk passed |
   | +1 | checkstyle | 103 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 1151 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 251 | trunk passed |
   | 0 | spotbugs | 575 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 829 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 752 | the patch passed |
   | +1 | compile | 454 | the patch passed |
   | +1 | javac | 454 | the patch passed |
   | +1 | checkstyle | 101 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 891 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 228 | the patch passed |
   | +1 | findbugs | 784 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 426 | hadoop-hdds in the patch passed. |
   | -1 | unit | 2581 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 45 | The patch does not generate ASF License warnings. |
   | | | 10213 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.ozone.client.rpc.TestBlockOutputStreamWithFailures |
   |   | hadoop.ozone.TestSecureOzoneCluster |
   |   | hadoop.ozone.container.server.TestSecureContainerServer |
   |   | hadoop.hdds.scm.pipeline.TestRatisPipelineCreateAndDestory |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.0 Server=19.03.0 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1278/3/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1278 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux a77d0ed31e2f 4.15.0-52-generic #56-Ubuntu SMP Tue Jun 4 
22:49:08 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 6244502 |
   | Default Java | 1.8.0_212 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1278/3/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1278/3/testReport/ |
   | Max. process+thread count | 4735 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/ozone-manager U: hadoop-ozone/ozone-manager |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1278/3/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 297929)
Time Spent: 1h 10m  (was: 1h)

> S3 MPU part-list call fails if there are no parts
> -
>
> Key: HDDS-1950
> URL: https://issues.apache.org/jira/browse/HDDS-1950
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: S3
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> If an S3 multipart upload is created but no part is uploaded, the part list 
> can't be retrieved because the call fails with HTTP 500:
> Create an MPU:
> 

[jira] [Work logged] (HDDS-1950) S3 MPU part-list call fails if there are no parts

2019-08-16 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1950?focusedWorklogId=296365=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-296365
 ]

ASF GitHub Bot logged work on HDDS-1950:


Author: ASF GitHub Bot
Created on: 16/Aug/19 15:53
Start Date: 16/Aug/19 15:53
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1278: HDDS-1950. S3 
MPU part-list call fails if there are no parts
URL: https://github.com/apache/hadoop/pull/1278#issuecomment-522058529
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 39 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 1 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 609 | trunk passed |
   | +1 | compile | 358 | trunk passed |
   | +1 | checkstyle | 62 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 793 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 142 | trunk passed |
   | 0 | spotbugs | 437 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 625 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 545 | the patch passed |
   | +1 | compile | 352 | the patch passed |
   | +1 | javac | 352 | the patch passed |
   | +1 | checkstyle | 68 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 616 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 140 | the patch passed |
   | +1 | findbugs | 636 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 303 | hadoop-hdds in the patch passed. |
   | -1 | unit | 2359 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 40 | The patch does not generate ASF License warnings. |
   | | | 7827 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.ozShell.TestOzoneShell |
   |   | hadoop.hdds.scm.pipeline.TestSCMPipelineManager |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1278/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1278 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 7f58789619a2 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 9b8359b |
   | Default Java | 1.8.0_222 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1278/2/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1278/2/testReport/ |
   | Max. process+thread count | 4733 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/ozone-manager U: hadoop-ozone/ozone-manager |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1278/2/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 296365)
Time Spent: 1h  (was: 50m)

> S3 MPU part-list call fails if there are no parts
> -
>
> Key: HDDS-1950
> URL: https://issues.apache.org/jira/browse/HDDS-1950
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: S3
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> If an S3 multipart upload is created but no part is uploaded, the part list 
> can't be retrieved because the call fails with HTTP 500:
> Create an MPU:
> {code}
> aws s3api --endpoint http://localhost: create-multipart-upload 
> --bucket=docker --key=testkeu 
> {
> 

[jira] [Work logged] (HDDS-1950) S3 MPU part-list call fails if there are no parts

2019-08-15 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1950?focusedWorklogId=295906=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-295906
 ]

ASF GitHub Bot logged work on HDDS-1950:


Author: ASF GitHub Bot
Created on: 16/Aug/19 00:41
Start Date: 16/Aug/19 00:41
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #1278: 
HDDS-1950. S3 MPU part-list call fails if there are no parts
URL: https://github.com/apache/hadoop/pull/1278#discussion_r314549567
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/KeyManagerImpl.java
 ##
 @@ -1329,8 +1329,16 @@ public OmMultipartUploadListParts listParts(String volumeName,
     TreeMap<Integer, PartKeyInfo> partKeyInfoMap =
         multipartKeyInfo.getPartKeyInfoMap();
     Iterator<Map.Entry<Integer, PartKeyInfo>> partKeyInfoMapIterator =
         partKeyInfoMap.entrySet().iterator();
-    HddsProtos.ReplicationType replicationType =
-        partKeyInfoMap.firstEntry().getValue().getPartKeyInfo().getType();
+
+    OmKeyInfo omKeyInfo =
+        metadataManager.getOpenKeyTable().get(multipartKey);
+
+    if (omKeyInfo == null) {
+      throw new IllegalStateException(
+          "Open key is missing for multipart upload " + multipartKey);
+    }
+
+    HddsProtos.ReplicationType replicationType = omKeyInfo.getType();
 
 Review comment:
   Yes, agreed. If we do it this way, we can save one DB read.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 295906)
Time Spent: 50m  (was: 40m)

> S3 MPU part-list call fails if there are no parts
> -
>
> Key: HDDS-1950
> URL: https://issues.apache.org/jira/browse/HDDS-1950
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: S3
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> If an S3 multipart upload is created but no part is uploaded, the part list 
> can't be retrieved because the call fails with HTTP 500:
> Create an MPU:
> {code}
> aws s3api --endpoint http://localhost: create-multipart-upload 
> --bucket=docker --key=testkeu 
> {
> "Bucket": "docker",
> "Key": "testkeu",
> "UploadId": "85343e71-4c16-4a75-bb55-01f56a9339b2-102592678478217234"
> }
> {code}
> List the parts:
> {code}
> aws s3api --endpoint http://localhost: list-parts  --bucket=docker 
> --key=testkeu 
> --upload-id=85343e71-4c16-4a75-bb55-01f56a9339b2-102592678478217234
> {code}
> It throws an exception on the server side, because in 
> KeyManagerImpl.listParts the ReplicationType is retrieved from the first 
> part:
> {code}
> HddsProtos.ReplicationType replicationType =
> partKeyInfoMap.firstEntry().getValue().getPartKeyInfo().getType();
> {code}
> The first part entry is not yet available in this use case.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1950) S3 MPU part-list call fails if there are no parts

2019-08-13 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1950?focusedWorklogId=293885=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-293885
 ]

ASF GitHub Bot logged work on HDDS-1950:


Author: ASF GitHub Bot
Created on: 13/Aug/19 13:15
Start Date: 13/Aug/19 13:15
Worklog Time Spent: 10m 
  Work Description: lokeshj1703 commented on pull request #1278: HDDS-1950. 
S3 MPU part-list call fails if there are no parts
URL: https://github.com/apache/hadoop/pull/1278#discussion_r313386459
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/KeyManagerImpl.java
 ##
 @@ -1329,8 +1329,16 @@ public OmMultipartUploadListParts listParts(String volumeName,
     TreeMap<Integer, PartKeyInfo> partKeyInfoMap =
         multipartKeyInfo.getPartKeyInfoMap();
     Iterator<Map.Entry<Integer, PartKeyInfo>> partKeyInfoMapIterator =
         partKeyInfoMap.entrySet().iterator();
-    HddsProtos.ReplicationType replicationType =
-        partKeyInfoMap.firstEntry().getValue().getPartKeyInfo().getType();
+
+    OmKeyInfo omKeyInfo =
+        metadataManager.getOpenKeyTable().get(multipartKey);
+
+    if (omKeyInfo == null) {
+      throw new IllegalStateException(
+          "Open key is missing for multipart upload " + multipartKey);
+    }
+
+    HddsProtos.ReplicationType replicationType = omKeyInfo.getType();
 
 Review comment:
    We are setting replicationType twice (lines 1358 and 1341). Can we add this 
openKeyTable get op inside an if condition, so that it is executed only if no 
parts are present in the key?
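    A rough sketch of the suggested shape (the names partKeyInfoMap, 
metadataManager and multipartKey are taken from the diff above; this is only an 
illustration, not the exact patch): the open key table is read only when the 
upload has no parts yet, so the common path costs no extra DB read.

    {code}
    // Sketch only: surrounding method omitted, names follow the diff above.
    HddsProtos.ReplicationType replicationType;
    if (partKeyInfoMap.isEmpty()) {
      // No part has been committed yet: fall back to the open key entry.
      OmKeyInfo omKeyInfo = metadataManager.getOpenKeyTable().get(multipartKey);
      if (omKeyInfo == null) {
        throw new IllegalStateException(
            "Open key is missing for multipart upload " + multipartKey);
      }
      replicationType = omKeyInfo.getType();
    } else {
      // Parts exist: take the type from the first part, as before.
      replicationType =
          partKeyInfoMap.firstEntry().getValue().getPartKeyInfo().getType();
    }
    {code}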
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 293885)
Time Spent: 40m  (was: 0.5h)

> S3 MPU part-list call fails if there are no parts
> -
>
> Key: HDDS-1950
> URL: https://issues.apache.org/jira/browse/HDDS-1950
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: S3
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> If an S3 multipart upload is created but no part is uploaded, the part list 
> can't be retrieved because the call fails with HTTP 500:
> Create an MPU:
> {code}
> aws s3api --endpoint http://localhost: create-multipart-upload 
> --bucket=docker --key=testkeu 
> {
> "Bucket": "docker",
> "Key": "testkeu",
> "UploadId": "85343e71-4c16-4a75-bb55-01f56a9339b2-102592678478217234"
> }
> {code}
> List the parts:
> {code}
> aws s3api --endpoint http://localhost: list-parts  --bucket=docker 
> --key=testkeu 
> --upload-id=85343e71-4c16-4a75-bb55-01f56a9339b2-102592678478217234
> {code}
> It throws an exception on the server side, because in 
> KeyManagerImpl.listParts the ReplicationType is retrieved from the first 
> part:
> {code}
> HddsProtos.ReplicationType replicationType =
> partKeyInfoMap.firstEntry().getValue().getPartKeyInfo().getType();
> {code}
> The first part entry is not yet available in this use case.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1950) S3 MPU part-list call fails if there are no parts

2019-08-13 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1950?focusedWorklogId=293884=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-293884
 ]

ASF GitHub Bot logged work on HDDS-1950:


Author: ASF GitHub Bot
Created on: 13/Aug/19 13:15
Start Date: 13/Aug/19 13:15
Worklog Time Spent: 10m 
  Work Description: lokeshj1703 commented on pull request #1278: HDDS-1950. 
S3 MPU part-list call fails if there are no parts
URL: https://github.com/apache/hadoop/pull/1278#discussion_r313388730
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/test/java/org/apache/hadoop/ozone/om/TestKeyManagerUnit.java
 ##
 @@ -0,0 +1,93 @@
+package org.apache.hadoop.ozone.om;
+
+import java.io.IOException;
+import java.util.ArrayList;
+
+import org.apache.hadoop.hdds.HddsConfigKeys;
+import org.apache.hadoop.hdds.conf.OzoneConfiguration;
+import org.apache.hadoop.hdds.protocol.StorageType;
+import org.apache.hadoop.hdds.protocol.proto.HddsProtos.ReplicationFactor;
+import org.apache.hadoop.hdds.protocol.proto.HddsProtos.ReplicationType;
+import org.apache.hadoop.hdds.scm.protocol.ScmBlockLocationProtocol;
+import org.apache.hadoop.ozone.om.helpers.OmBucketInfo;
+import org.apache.hadoop.ozone.om.helpers.OmKeyArgs;
+import org.apache.hadoop.ozone.om.helpers.OmKeyArgs.Builder;
+import org.apache.hadoop.ozone.om.helpers.OmMultipartInfo;
+import org.apache.hadoop.ozone.om.helpers.OmMultipartUploadListParts;
+import org.apache.hadoop.ozone.security.OzoneBlockTokenSecretManager;
+import org.apache.hadoop.test.GenericTestUtils;
+
+import org.junit.Assert;
+import org.junit.Before;
+import org.junit.Test;
+import org.mockito.Mockito;
+
+/**
+ * Unit test key manager.
+ */
+public class TestKeyManagerUnit {
 
 Review comment:
   We already have a class for KeyManagerImpl unit tests "TestKeyManagerImpl". 
I think we can use that, or make this one specific to the S3 APIs?
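    For reference, a rough outline of what such a test could look like, 
wherever it ends up living (the createBucket/initMultipartUpload helpers and 
the keyManager field are placeholders here, not the actual code from the 
patch):

    {code}
    @Test
    public void listPartsWithZeroParts() throws IOException {
      // GIVEN: a bucket and a freshly initiated multipart upload with no parts
      createBucket("vol1", "bucket1");                    // hypothetical helper
      OmMultipartInfo upload =
          initMultipartUpload("vol1", "bucket1", "key1"); // hypothetical helper

      // WHEN: parts are listed before any part has been committed
      OmMultipartUploadListParts parts = keyManager.listParts(
          "vol1", "bucket1", "key1", upload.getUploadID(), 0, 10);

      // THEN: the call succeeds with an empty part list instead of an HTTP 500
      Assert.assertEquals(0, parts.getPartInfoList().size());
    }
    {code}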
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 293884)
Time Spent: 0.5h  (was: 20m)

> S3 MPU part-list call fails if there are no parts
> -
>
> Key: HDDS-1950
> URL: https://issues.apache.org/jira/browse/HDDS-1950
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: S3
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> If an S3 multipart upload is created but no part is uploaded, the part list 
> can't be retrieved because the call fails with HTTP 500:
> Create an MPU:
> {code}
> aws s3api --endpoint http://localhost: create-multipart-upload 
> --bucket=docker --key=testkeu 
> {
> "Bucket": "docker",
> "Key": "testkeu",
> "UploadId": "85343e71-4c16-4a75-bb55-01f56a9339b2-102592678478217234"
> }
> {code}
> List the parts:
> {code}
> aws s3api --endpoint http://localhost: list-parts  --bucket=docker 
> --key=testkeu 
> --upload-id=85343e71-4c16-4a75-bb55-01f56a9339b2-102592678478217234
> {code}
> It throws an exception on the server side, because in 
> KeyManagerImpl.listParts the ReplicationType is retrieved from the first 
> part:
> {code}
> HddsProtos.ReplicationType replicationType =
> partKeyInfoMap.firstEntry().getValue().getPartKeyInfo().getType();
> {code}
> The first part entry is not yet available in this use case.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1950) S3 MPU part-list call fails if there are no parts

2019-08-11 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1950?focusedWorklogId=292714=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-292714
 ]

ASF GitHub Bot logged work on HDDS-1950:


Author: ASF GitHub Bot
Created on: 11/Aug/19 14:52
Start Date: 11/Aug/19 14:52
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1278: HDDS-1950. S3 
MPU part-list call fails if there are no parts
URL: https://github.com/apache/hadoop/pull/1278#issuecomment-520234393
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 71 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 1 | The patch appears to include 1 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 621 | trunk passed |
   | +1 | compile | 369 | trunk passed |
   | +1 | checkstyle | 76 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 946 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 172 | trunk passed |
   | 0 | spotbugs | 434 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 630 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 552 | the patch passed |
   | +1 | compile | 378 | the patch passed |
   | +1 | javac | 378 | the patch passed |
   | +1 | checkstyle | 76 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 721 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 165 | the patch passed |
   | +1 | findbugs | 648 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 329 | hadoop-hdds in the patch passed. |
   | -1 | unit | 2376 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 45 | The patch does not generate ASF License warnings. |
   | | | 8336 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.hdds.scm.pipeline.TestRatisPipelineCreateAndDestory |
   |   | hadoop.ozone.client.rpc.TestSecureOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestOzoneClientRetriesOnException |
   |   | hadoop.ozone.client.rpc.TestMultiBlockWritesWithDnFailures |
   |   | hadoop.ozone.TestMiniOzoneCluster |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClient |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=18.09.7 Server=18.09.7 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1278/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1278 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 57d845d3eded 4.15.0-52-generic #56-Ubuntu SMP Tue Jun 4 
22:49:08 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / cf5d895 |
   | Default Java | 1.8.0_212 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1278/1/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1278/1/testReport/ |
   | Max. process+thread count | 5403 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/ozone-manager U: hadoop-ozone/ozone-manager |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1278/1/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 292714)
Time Spent: 20m  (was: 10m)

> S3 MPU part-list call fails if there are no parts
> -
>
> Key: HDDS-1950
> URL: https://issues.apache.org/jira/browse/HDDS-1950
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: S3
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> If an S3 multipart upload is created 

[jira] [Work logged] (HDDS-1950) S3 MPU part-list call fails if there are no parts

2019-08-11 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1950?focusedWorklogId=292695=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-292695
 ]

ASF GitHub Bot logged work on HDDS-1950:


Author: ASF GitHub Bot
Created on: 11/Aug/19 12:32
Start Date: 11/Aug/19 12:32
Worklog Time Spent: 10m 
  Work Description: elek commented on pull request #1278: HDDS-1950. S3 MPU 
part-list call fails if there are no parts
URL: https://github.com/apache/hadoop/pull/1278
 
 
   If an S3 multipart upload is created but no part is uploaded, the part list 
can't be retrieved because the call fails with HTTP 500:
   
   Create an MPU:
   
   {code}
   aws s3api --endpoint http://localhost: create-multipart-upload 
--bucket=docker --key=testkeu 
   {
   "Bucket": "docker",
   "Key": "testkeu",
   "UploadId": "85343e71-4c16-4a75-bb55-01f56a9339b2-102592678478217234"
   }
   {code}
   
   List the parts:
   
   {code}
   aws s3api --endpoint http://localhost: list-parts  --bucket=docker 
--key=testkeu 
--upload-id=85343e71-4c16-4a75-bb55-01f56a9339b2-102592678478217234
   {code}
   
   It throws an exception on the server side, because in 
KeyManagerImpl.listParts the ReplicationType is retrieved from the first part:
   
   {code}
   HddsProtos.ReplicationType replicationType =
       partKeyInfoMap.firstEntry().getValue().getPartKeyInfo().getType();
   {code}
   
   The first part entry is not yet available in this use case.
   
   See: https://issues.apache.org/jira/browse/HDDS-1950
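   The failure itself is the usual empty-map corner case: TreeMap.firstEntry() 
returns null when the map has no entries, so the chained getValue() call above 
dereferences null and the resulting exception is reported to the client as 
HTTP 500. A minimal, self-contained illustration with plain JDK types (not the 
OM code):

   {code}
   import java.util.Map;
   import java.util.TreeMap;

   public class EmptyFirstEntryDemo {
     public static void main(String[] args) {
       TreeMap<Integer, String> partKeyInfoMap = new TreeMap<>();
       // firstEntry() on an empty TreeMap returns null ...
       Map.Entry<Integer, String> first = partKeyInfoMap.firstEntry();
       // ... so the next call throws NullPointerException, which the S3
       // gateway surfaces as an HTTP 500 response.
       System.out.println(first.getValue());
     }
   }
   {code}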
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 292695)
Time Spent: 10m
Remaining Estimate: 0h

> S3 MPU part-list call fails if there are no parts
> -
>
> Key: HDDS-1950
> URL: https://issues.apache.org/jira/browse/HDDS-1950
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: S3
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> If an S3 multipart upload is created but no part is uploaded, the part list 
> can't be retrieved because the call fails with HTTP 500:
> Create an MPU:
> {code}
> aws s3api --endpoint http://localhost: create-multipart-upload 
> --bucket=docker --key=testkeu 
> {
> "Bucket": "docker",
> "Key": "testkeu",
> "UploadId": "85343e71-4c16-4a75-bb55-01f56a9339b2-102592678478217234"
> }
> {code}
> List the parts:
> {code}
> aws s3api --endpoint http://localhost: list-parts  --bucket=docker 
> --key=testkeu 
> --upload-id=85343e71-4c16-4a75-bb55-01f56a9339b2-102592678478217234
> {code}
> It throws an exception on the server side, because in 
> KeyManagerImpl.listParts the ReplicationType is retrieved from the first 
> part:
> {code}
> HddsProtos.ReplicationType replicationType =
> partKeyInfoMap.firstEntry().getValue().getPartKeyInfo().getType();
> {code}
> The first part entry is not yet available in this use case.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org