[jira] [Work logged] (HDDS-1260) Create Recon Server lifecycle integration with Ozone.

2019-03-25 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1260?focusedWorklogId=218468&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-218468
 ]

ASF GitHub Bot logged work on HDDS-1260:


Author: ASF GitHub Bot
Created on: 26/Mar/19 05:55
Start Date: 26/Mar/19 05:55
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #643: HDDS-1260. Create 
Recon Server lifecycle integration with Ozone.
URL: https://github.com/apache/hadoop/pull/643#issuecomment-476486033
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|:--------|:-------|
   | 0 | reexec | 511 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 56 | Maven dependency ordering for branch |
   | +1 | mvninstall | 1106 | trunk passed |
   | +1 | compile | 111 | trunk passed |
   | +1 | checkstyle | 29 | trunk passed |
   | +1 | mvnsite | 136 | trunk passed |
   | +1 | shadedclient | 623 | branch has no errors when building and testing 
our client artifacts. |
   | 0 | findbugs | 0 | Skipped patched modules with no Java source: 
hadoop-ozone/dist |
   | +1 | findbugs | 151 | trunk passed |
   | +1 | javadoc | 103 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 13 | Maven dependency ordering for patch |
   | -1 | mvninstall | 21 | dist in the patch failed. |
   | +1 | compile | 101 | the patch passed |
   | +1 | javac | 101 | the patch passed |
   | +1 | checkstyle | 26 | the patch passed |
   | +1 | mvnsite | 106 | the patch passed |
   | +1 | shellcheck | 27 | There were no new shellcheck issues. |
   | +1 | shelldocs | 12 | There were no new shelldocs issues. |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | xml | 4 | The patch has no ill-formed XML file. |
   | +1 | shadedclient | 742 | patch has no errors when building and testing 
our client artifacts. |
   | 0 | findbugs | 0 | Skipped patched modules with no Java source: 
hadoop-ozone/dist |
   | +1 | findbugs | 162 | the patch passed |
   | +1 | javadoc | 86 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 36 | common in the patch passed. |
   | +1 | unit | 44 | ozone-manager in the patch passed. |
   | +1 | unit | 34 | ozone-recon in the patch passed. |
   | +1 | unit | 21 | dist in the patch passed. |
   | +1 | asflicense | 31 | The patch does not generate ASF License warnings. |
   | | | 4539 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-643/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/643 |
   | Optional Tests |  dupname  asflicense  mvnsite  unit  shellcheck  
shelldocs  compile  javac  javadoc  mvninstall  shadedclient  xml  findbugs  
checkstyle  |
   | uname | Linux 00d8f7473f37 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / c99b107 |
   | maven | version: Apache Maven 3.3.9 |
   | Default Java | 1.8.0_191 |
   | shellcheck | v0.4.6 |
   | findbugs | v3.1.0-RC1 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-643/1/artifact/out/patch-mvninstall-hadoop-ozone_dist.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-643/1/testReport/ |
   | Max. process+thread count | 447 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/common hadoop-ozone/ozone-manager 
hadoop-ozone/ozone-recon hadoop-ozone/dist U: hadoop-ozone |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-643/1/console |
   | Powered by | Apache Yetus 0.9.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 218468)
Time Spent: 1h 10m  (was: 1h)

> Create Recon Server lifecycle integration with Ozone.
> 
>
> Key: HDDS-1260
> URL: https://issues.apache.org/jira/browse/HDDS-1260
> Project: Hadoop Distributed Data Store

[jira] [Work logged] (HDDS-1260) Create Recon Server lifecycle integration with Ozone.

2019-03-25 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1260?focusedWorklogId=218467&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-218467
 ]

ASF GitHub Bot logged work on HDDS-1260:


Author: ASF GitHub Bot
Created on: 26/Mar/19 05:53
Start Date: 26/Mar/19 05:53
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #643: HDDS-1260. Create 
Recon Server lifecycle integration with Ozone.
URL: https://github.com/apache/hadoop/pull/643#issuecomment-476485653
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|:--------|:-------|
   | 0 | reexec | 28 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 63 | Maven dependency ordering for branch |
   | +1 | mvninstall | 1031 | trunk passed |
   | +1 | compile | 121 | trunk passed |
   | +1 | checkstyle | 27 | trunk passed |
   | +1 | mvnsite | 132 | trunk passed |
   | +1 | shadedclient | 639 | branch has no errors when building and testing 
our client artifacts. |
   | 0 | findbugs | 0 | Skipped patched modules with no Java source: 
hadoop-ozone/dist |
   | +1 | findbugs | 134 | trunk passed |
   | +1 | javadoc | 95 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 13 | Maven dependency ordering for patch |
   | -1 | mvninstall | 18 | dist in the patch failed. |
   | +1 | compile | 92 | the patch passed |
   | +1 | javac | 92 | the patch passed |
   | +1 | checkstyle | 23 | the patch passed |
   | +1 | mvnsite | 98 | the patch passed |
   | +1 | shellcheck | 24 | There were no new shellcheck issues. |
   | +1 | shelldocs | 15 | There were no new shelldocs issues. |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | xml | 4 | The patch has no ill-formed XML file. |
   | +1 | shadedclient | 719 | patch has no errors when building and testing 
our client artifacts. |
   | 0 | findbugs | 0 | Skipped patched modules with no Java source: 
hadoop-ozone/dist |
   | +1 | findbugs | 155 | the patch passed |
   | +1 | javadoc | 91 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 36 | common in the patch passed. |
   | -1 | unit | 43 | ozone-manager in the patch failed. |
   | +1 | unit | 34 | ozone-recon in the patch passed. |
   | +1 | unit | 21 | dist in the patch passed. |
   | +1 | asflicense | 28 | The patch does not generate ASF License warnings. |
   | | | 3921 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.ozone.security.TestOzoneDelegationTokenSecretManager |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-643/3/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/643 |
   | Optional Tests |  dupname  asflicense  mvnsite  unit  shellcheck  
shelldocs  compile  javac  javadoc  mvninstall  shadedclient  xml  findbugs  
checkstyle  |
   | uname | Linux 65b968f0f31e 4.4.0-139-generic #165-Ubuntu SMP Wed Oct 24 
10:58:50 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / c99b107 |
   | maven | version: Apache Maven 3.3.9 |
   | Default Java | 1.8.0_191 |
   | shellcheck | v0.4.6 |
   | findbugs | v3.1.0-RC1 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-643/3/artifact/out/patch-mvninstall-hadoop-ozone_dist.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-643/3/artifact/out/patch-unit-hadoop-ozone_ozone-manager.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-643/3/testReport/ |
   | Max. process+thread count | 411 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/common hadoop-ozone/ozone-manager 
hadoop-ozone/ozone-recon hadoop-ozone/dist U: hadoop-ozone |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-643/3/console |
   | Powered by | Apache Yetus 0.9.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 218467)
Time Spent: 1h 

[jira] [Work logged] (HDDS-1260) Create Recon Server lifecycle integration with Ozone.

2019-03-25 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1260?focusedWorklogId=218466&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-218466
 ]

ASF GitHub Bot logged work on HDDS-1260:


Author: ASF GitHub Bot
Created on: 26/Mar/19 05:50
Start Date: 26/Mar/19 05:50
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #643: HDDS-1260. Create 
Recon Server lifecycle integration with Ozone.
URL: https://github.com/apache/hadoop/pull/643#issuecomment-476485072
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|:--------|:-------|
   | 0 | reexec | 27 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 59 | Maven dependency ordering for branch |
   | +1 | mvninstall | 996 | trunk passed |
   | +1 | compile | 107 | trunk passed |
   | +1 | checkstyle | 35 | trunk passed |
   | +1 | mvnsite | 150 | trunk passed |
   | +1 | shadedclient | 667 | branch has no errors when building and testing 
our client artifacts. |
   | 0 | findbugs | 0 | Skipped patched modules with no Java source: 
hadoop-ozone/dist |
   | +1 | findbugs | 141 | trunk passed |
   | +1 | javadoc | 119 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 16 | Maven dependency ordering for patch |
   | -1 | mvninstall | 21 | dist in the patch failed. |
   | +1 | compile | 96 | the patch passed |
   | +1 | javac | 96 | the patch passed |
   | +1 | checkstyle | 25 | the patch passed |
   | +1 | mvnsite | 112 | the patch passed |
   | +1 | shellcheck | 25 | There were no new shellcheck issues. |
   | +1 | shelldocs | 18 | There were no new shelldocs issues. |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | xml | 4 | The patch has no ill-formed XML file. |
   | +1 | shadedclient | 762 | patch has no errors when building and testing 
our client artifacts. |
   | 0 | findbugs | 0 | Skipped patched modules with no Java source: 
hadoop-ozone/dist |
   | +1 | findbugs | 160 | the patch passed |
   | +1 | javadoc | 104 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 36 | common in the patch passed. |
   | +1 | unit | 45 | ozone-manager in the patch passed. |
   | +1 | unit | 36 | ozone-recon in the patch passed. |
   | +1 | unit | 25 | dist in the patch passed. |
   | +1 | asflicense | 33 | The patch does not generate ASF License warnings. |
   | | | 4087 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-643/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/643 |
   | Optional Tests |  dupname  asflicense  mvnsite  unit  shellcheck  
shelldocs  compile  javac  javadoc  mvninstall  shadedclient  xml  findbugs  
checkstyle  |
   | uname | Linux 703a0a4ae7c5 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / c99b107 |
   | maven | version: Apache Maven 3.3.9 |
   | Default Java | 1.8.0_191 |
   | shellcheck | v0.4.6 |
   | findbugs | v3.1.0-RC1 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-643/2/artifact/out/patch-mvninstall-hadoop-ozone_dist.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-643/2/testReport/ |
   | Max. process+thread count | 441 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/common hadoop-ozone/ozone-manager 
hadoop-ozone/ozone-recon hadoop-ozone/dist U: hadoop-ozone |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-643/2/console |
   | Powered by | Apache Yetus 0.9.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 218466)
Time Spent: 50m  (was: 40m)

> Create Recon Server lifecycle integration with Ozone.
> 
>
> Key: HDDS-1260
> URL: https://issues.apache.org/jira/browse/HDDS-1260
> Project: Hadoop Distributed Data Store

[jira] [Work logged] (HDDS-1255) Refactor ozone acceptance test to allow run in secure mode

2019-03-25 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1255?focusedWorklogId=218465&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-218465
 ]

ASF GitHub Bot logged work on HDDS-1255:


Author: ASF GitHub Bot
Created on: 26/Mar/19 05:50
Start Date: 26/Mar/19 05:50
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #632: HDDS-1255. 
Refactor ozone acceptance test to allow run in secure mode. Contributed by Ajay 
Kumar.
URL: https://github.com/apache/hadoop/pull/632#issuecomment-476485022
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|:--------|:-------|
   | 0 | reexec | 0 | Docker mode activated. |
   | -1 | patch | 7 | https://github.com/apache/hadoop/pull/632 does not apply 
to trunk. Rebase required? Wrong Branch? See 
https://wiki.apache.org/hadoop/HowToContribute for help. |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | GITHUB PR | https://github.com/apache/hadoop/pull/632 |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-632/2/console |
   | Powered by | Apache Yetus 0.9.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 218465)
Time Spent: 1.5h  (was: 1h 20m)

> Refactor ozone acceptance test to allow run in secure mode
> --
>
> Key: HDDS-1255
> URL: https://issues.apache.org/jira/browse/HDDS-1255
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> Refactor ozone acceptance test to allow run in secure mode.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDDS-1102) Confusing error log when datanode tries to connect to a destroyed pipeline

2019-03-25 Thread Jitendra Nath Pandey (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1102?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jitendra Nath Pandey reassigned HDDS-1102:
--

Assignee: Shashikant Banerjee

> Confusing error log when datanode tries to connect to a destroyed pipeline
> --
>
> Key: HDDS-1102
> URL: https://issues.apache.org/jira/browse/HDDS-1102
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Reporter: Nilotpal Nandi
>Assignee: Shashikant Banerjee
>Priority: Critical
>  Labels: pushed-to-craterlake, test-badlands
> Attachments: allnode.log, datanode.log
>
>
> Steps taken:
> 
>  # Created a 5-datanode cluster.
>  # Shut down 2 datanodes.
>  # Started the datanodes again.
> One of the datanodes was shut down.
> Exception seen:
>  
> {noformat}
> 2019-02-14 07:37:26 INFO LeaderElection:230 - 
> 6a0522ba-019e-4b77-ac1f-a9322cd525b8 got exception when requesting votes: {}
> java.util.concurrent.ExecutionException: 
> org.apache.ratis.thirdparty.io.grpc.StatusRuntimeException: INTERNAL: 
> a3d1dd2d-554e-4e87-a2cf-076a229af352: group-FD6FA533F1FB not found.
>  at java.util.concurrent.FutureTask.report(FutureTask.java:122)
>  at java.util.concurrent.FutureTask.get(FutureTask.java:192)
>  at 
> org.apache.ratis.server.impl.LeaderElection.waitForResults(LeaderElection.java:214)
>  at 
> org.apache.ratis.server.impl.LeaderElection.askForVotes(LeaderElection.java:146)
>  at org.apache.ratis.server.impl.LeaderElection.run(LeaderElection.java:102)
> Caused by: org.apache.ratis.thirdparty.io.grpc.StatusRuntimeException: 
> INTERNAL: a3d1dd2d-554e-4e87-a2cf-076a229af352: group-FD6FA533F1FB not found.
>  at 
> org.apache.ratis.thirdparty.io.grpc.stub.ClientCalls.toStatusRuntimeException(ClientCalls.java:233)
>  at 
> org.apache.ratis.thirdparty.io.grpc.stub.ClientCalls.getUnchecked(ClientCalls.java:214)
>  at 
> org.apache.ratis.thirdparty.io.grpc.stub.ClientCalls.blockingUnaryCall(ClientCalls.java:139)
>  at 
> org.apache.ratis.proto.grpc.RaftServerProtocolServiceGrpc$RaftServerProtocolServiceBlockingStub.requestVote(RaftServerProtocolServiceGrpc.java:265)
>  at 
> org.apache.ratis.grpc.server.GrpcServerProtocolClient.requestVote(GrpcServerProtocolClient.java:83)
>  at org.apache.ratis.grpc.server.GrpcService.requestVote(GrpcService.java:187)
>  at 
> org.apache.ratis.server.impl.LeaderElection.lambda$submitRequests$0(LeaderElection.java:188)
>  at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>  at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>  at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>  at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>  at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>  at java.lang.Thread.run(Thread.java:748)
> 2019-02-14 07:37:26 INFO LeaderElection:46 - 
> 6a0522ba-019e-4b77-ac1f-a9322cd525b8: Election PASSED; received 1 response(s) 
> [6a0522ba-019e-4b77-ac1f-a9322cd525b8<-61ad3bf3-e9b1-48e5-90e3-3b78c8b5bba5#0:OK-t7]
>  and 1 exception(s); 6a0522ba-019e-4b77-ac1f-a9322cd525b8:t7, leader=null, 
> voted=6a0522ba-019e-4b77-ac1f-a9322cd525b8, 
> raftlog=6a0522ba-019e-4b77-ac1f-a9322cd525b8-SegmentedRaftLog:OPENED, conf=3: 
> [61ad3bf3-e9b1-48e5-90e3-3b78c8b5bba5:172.20.0.8:9858, 
> 6a0522ba-019e-4b77-ac1f-a9322cd525b8:172.20.0.6:9858, 
> 0f377918-aafa-4d8a-972a-6ead54048fba:172.20.0.3:9858], old=null
> 2019-02-14 07:37:26 INFO LeaderElection:52 - 0: 
> java.util.concurrent.ExecutionException: 
> org.apache.ratis.thirdparty.io.grpc.StatusRuntimeException: INTERNAL: 
> a3d1dd2d-554e-4e87-a2cf-076a229af352: group-FD6FA533F1FB not found.
> 2019-02-14 07:37:26 INFO RoleInfo:130 - 6a0522ba-019e-4b77-ac1f-a9322cd525b8: 
> shutdown LeaderElection
> 2019-02-14 07:37:26 INFO RaftServerImpl:161 - 
> 6a0522ba-019e-4b77-ac1f-a9322cd525b8 changes role from CANDIDATE to LEADER at 
> term 7 for changeToLeader
> 2019-02-14 07:37:26 INFO RaftServerImpl:258 - 
> 6a0522ba-019e-4b77-ac1f-a9322cd525b8: change Leader from null to 
> 6a0522ba-019e-4b77-ac1f-a9322cd525b8 at term 7 for becomeLeader, leader 
> elected after 1066ms
> 2019-02-14 07:37:26 INFO RaftServerConfigKeys:43 - 
> raft.server.staging.catchup.gap = 1000 (default)
> 2019-02-14 07:37:26 INFO RaftServerConfigKeys:43 - raft.server.rpc.sleep.time 
> = 25ms (default)
> 2019-02-14 07:37:26 INFO RaftServerConfigKeys:43 - raft.server.watch.timeout 
> = 10s (default)
> 2019-02-14 07:37:26 INFO RaftServerConfigKeys:43 - 
> raft.server.watch.timeout.denomination = 1s (default)
> 2019-02-14 07:37:26 INFO RaftServerConfigKeys:43 - 
> raft.server.log.appender.snapshot.chunk.size.max = 16MB (=16777216) (default)
> 2019-02-14 07:37:26 INFO 

[jira] [Updated] (HDDS-1102) Confusing error log when datanode tries to connect to a destroyed pipeline

2019-03-25 Thread Jitendra Nath Pandey (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1102?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jitendra Nath Pandey updated HDDS-1102:
---
Priority: Critical  (was: Major)

> Confusing error log when datanode tries to connect to a destroyed pipeline
> --
>
> Key: HDDS-1102
> URL: https://issues.apache.org/jira/browse/HDDS-1102
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Reporter: Nilotpal Nandi
>Priority: Critical
>  Labels: pushed-to-craterlake, test-badlands
> Attachments: allnode.log, datanode.log
>
>
> Steps taken:
> 
>  # Created a 5-datanode cluster.
>  # Shut down 2 datanodes.
>  # Started the datanodes again.
> One of the datanodes was shut down.
> Exception seen:
>  
> {noformat}
> 2019-02-14 07:37:26 INFO LeaderElection:230 - 
> 6a0522ba-019e-4b77-ac1f-a9322cd525b8 got exception when requesting votes: {}
> java.util.concurrent.ExecutionException: 
> org.apache.ratis.thirdparty.io.grpc.StatusRuntimeException: INTERNAL: 
> a3d1dd2d-554e-4e87-a2cf-076a229af352: group-FD6FA533F1FB not found.
>  at java.util.concurrent.FutureTask.report(FutureTask.java:122)
>  at java.util.concurrent.FutureTask.get(FutureTask.java:192)
>  at 
> org.apache.ratis.server.impl.LeaderElection.waitForResults(LeaderElection.java:214)
>  at 
> org.apache.ratis.server.impl.LeaderElection.askForVotes(LeaderElection.java:146)
>  at org.apache.ratis.server.impl.LeaderElection.run(LeaderElection.java:102)
> Caused by: org.apache.ratis.thirdparty.io.grpc.StatusRuntimeException: 
> INTERNAL: a3d1dd2d-554e-4e87-a2cf-076a229af352: group-FD6FA533F1FB not found.
>  at 
> org.apache.ratis.thirdparty.io.grpc.stub.ClientCalls.toStatusRuntimeException(ClientCalls.java:233)
>  at 
> org.apache.ratis.thirdparty.io.grpc.stub.ClientCalls.getUnchecked(ClientCalls.java:214)
>  at 
> org.apache.ratis.thirdparty.io.grpc.stub.ClientCalls.blockingUnaryCall(ClientCalls.java:139)
>  at 
> org.apache.ratis.proto.grpc.RaftServerProtocolServiceGrpc$RaftServerProtocolServiceBlockingStub.requestVote(RaftServerProtocolServiceGrpc.java:265)
>  at 
> org.apache.ratis.grpc.server.GrpcServerProtocolClient.requestVote(GrpcServerProtocolClient.java:83)
>  at org.apache.ratis.grpc.server.GrpcService.requestVote(GrpcService.java:187)
>  at 
> org.apache.ratis.server.impl.LeaderElection.lambda$submitRequests$0(LeaderElection.java:188)
>  at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>  at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>  at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>  at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>  at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>  at java.lang.Thread.run(Thread.java:748)
> 2019-02-14 07:37:26 INFO LeaderElection:46 - 
> 6a0522ba-019e-4b77-ac1f-a9322cd525b8: Election PASSED; received 1 response(s) 
> [6a0522ba-019e-4b77-ac1f-a9322cd525b8<-61ad3bf3-e9b1-48e5-90e3-3b78c8b5bba5#0:OK-t7]
>  and 1 exception(s); 6a0522ba-019e-4b77-ac1f-a9322cd525b8:t7, leader=null, 
> voted=6a0522ba-019e-4b77-ac1f-a9322cd525b8, 
> raftlog=6a0522ba-019e-4b77-ac1f-a9322cd525b8-SegmentedRaftLog:OPENED, conf=3: 
> [61ad3bf3-e9b1-48e5-90e3-3b78c8b5bba5:172.20.0.8:9858, 
> 6a0522ba-019e-4b77-ac1f-a9322cd525b8:172.20.0.6:9858, 
> 0f377918-aafa-4d8a-972a-6ead54048fba:172.20.0.3:9858], old=null
> 2019-02-14 07:37:26 INFO LeaderElection:52 - 0: 
> java.util.concurrent.ExecutionException: 
> org.apache.ratis.thirdparty.io.grpc.StatusRuntimeException: INTERNAL: 
> a3d1dd2d-554e-4e87-a2cf-076a229af352: group-FD6FA533F1FB not found.
> 2019-02-14 07:37:26 INFO RoleInfo:130 - 6a0522ba-019e-4b77-ac1f-a9322cd525b8: 
> shutdown LeaderElection
> 2019-02-14 07:37:26 INFO RaftServerImpl:161 - 
> 6a0522ba-019e-4b77-ac1f-a9322cd525b8 changes role from CANDIDATE to LEADER at 
> term 7 for changeToLeader
> 2019-02-14 07:37:26 INFO RaftServerImpl:258 - 
> 6a0522ba-019e-4b77-ac1f-a9322cd525b8: change Leader from null to 
> 6a0522ba-019e-4b77-ac1f-a9322cd525b8 at term 7 for becomeLeader, leader 
> elected after 1066ms
> 2019-02-14 07:37:26 INFO RaftServerConfigKeys:43 - 
> raft.server.staging.catchup.gap = 1000 (default)
> 2019-02-14 07:37:26 INFO RaftServerConfigKeys:43 - raft.server.rpc.sleep.time 
> = 25ms (default)
> 2019-02-14 07:37:26 INFO RaftServerConfigKeys:43 - raft.server.watch.timeout 
> = 10s (default)
> 2019-02-14 07:37:26 INFO RaftServerConfigKeys:43 - 
> raft.server.watch.timeout.denomination = 1s (default)
> 2019-02-14 07:37:26 INFO RaftServerConfigKeys:43 - 
> raft.server.log.appender.snapshot.chunk.size.max = 16MB (=16777216) (default)
> 2019-02-14 07:37:26 INFO RaftServerConfigKeys:43 - 
> 

[jira] [Updated] (HDDS-1304) Ozone ha breaks service discovery

2019-03-25 Thread Jitendra Nath Pandey (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1304?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jitendra Nath Pandey updated HDDS-1304:
---
Target Version/s: 0.4.0

> Ozone ha breaks service discovery
> -
>
> Key: HDDS-1304
> URL: https://issues.apache.org/jira/browse/HDDS-1304
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Affects Versions: 0.4.0
>Reporter: Ajay Kumar
>Assignee: Nanda kumar
>Priority: Blocker
>
> Ozone ha breaks service discovery



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1334) Fix asf license errors in newly added files by HDDS-1234

2019-03-25 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1334?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16801375#comment-16801375
 ] 

Hudson commented on HDDS-1334:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #16282 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/16282/])
HDDS-1334. Fix asf license errors in newly added files by HDDS-1234. (yqlin: 
rev c99b107772f4a52832bafd3a4c23fdef8015fdea)
* (edit) 
hadoop-ozone/ozone-recon/src/test/java/org/apache/hadoop/ozone/recon/AbstractOMMetadataManagerTest.java


> Fix asf license errors in newly added files by HDDS-1234
> 
>
> Key: HDDS-1334
> URL: https://issues.apache.org/jira/browse/HDDS-1334
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Aravindan Vijayan
>Priority: Major
> Fix For: 0.5.0
>
> Attachments: HDDS-1334-000.patch
>
>
> [https://ci.anzix.net/job/ozone/11961//artifact/build/rat.out/*view*/]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1300) Optimize non-recursive ozone filesystem apis

2019-03-25 Thread Mukul Kumar Singh (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1300?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16801373#comment-16801373
 ] 

Mukul Kumar Singh commented on HDDS-1300:
-

Thanks for working on this [~ljain]. The patch generally looks good to me. Some 
comments below.

1) There are findbugs & unit test failures related to the patch.
2) For createDirectory, should we pass the replication type and factor, so 
that these values can be populated correctly for getFileStatus?
3) KeyManagerImpl:1366, OzoneFileSystem should append a "/" to the path and 
send this to OM. Is this needed for other cases?
4) KeyManagerImpl:1395, let's rename this to create/createFile to follow the 
same naming convention as OzoneFileSystem. The same for lookupFile on line 1443 
as well.
5) KeyManagerImpl:535, as this is only used to create a directory key, should 
this be renamed to createDirectoryKeyInfo? Also, is setting OmKeyLocations 
required?
6) OzoneClientAdapter:49, this can be removed as well.
7) OzoneFileSystem.java:240-248, should we push the exception handling inside 
OzoneClientAdapterImpl?
8) OzoneFileSystem:765, there is an extra printStackTrace.
9) OzoneManager.java:2641, GET_FILE_STATUS -> CREATE_DIRECTORY.
10) OzoneManagerProtocol.proto, let's add a CreateDirectoryResponse, even though 
we keep it empty, so that there is a 1-1 mapping between request and response protos.
11) RpcClient.java:931, 938, the indentation of the @Override annotation is off.
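
For context on comments 3 and 5 above: directory keys are distinguished from file keys by a trailing "/", which is why OzoneFileSystem would append one before sending the name to OM. The snippet below is only a rough sketch of that normalization; the class and method names are invented for illustration and are not the actual OzoneFileSystem or KeyManagerImpl code.

{code:java}
// Illustrative sketch only: normalize a path within a bucket into the key
// name used for a directory lookup by ensuring it ends with a "/".
public final class DirectoryKeyUtil {

  private DirectoryKeyUtil() {
  }

  /** Converts "dir1/dir2" (or "/dir1/dir2") into the directory key "dir1/dir2/". */
  public static String toDirectoryKey(String path) {
    String key = path.startsWith("/") ? path.substring(1) : path;
    return key.endsWith("/") ? key : key + "/";
  }

  public static void main(String[] args) {
    System.out.println(toDirectoryKey("/dir1/dir2")); // dir1/dir2/
    System.out.println(toDirectoryKey("dir1/dir2/")); // dir1/dir2/
  }
}
{code}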



> Optimize non-recursive ozone filesystem apis
> 
>
> Key: HDDS-1300
> URL: https://issues.apache.org/jira/browse/HDDS-1300
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Filesystem, Ozone Manager
>Reporter: Lokesh Jain
>Assignee: Lokesh Jain
>Priority: Major
> Attachments: HDDS-1300.001.patch, HDDS-1300.002.patch, 
> HDDS-1300.003.patch
>
>
> This Jira aims to optimise non recursive apis in ozone file system. The Jira 
> would add support for such apis in Ozone manager in order to reduce the 
> number of rpc calls to Ozone Manager.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1234) Iterate the OM DB snapshot and populate the recon container DB.

2019-03-25 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1234?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16801376#comment-16801376
 ] 

Hudson commented on HDDS-1234:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #16282 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/16282/])
HDDS-1334. Fix asf license errors in newly added files by HDDS-1234. (yqlin: 
rev c99b107772f4a52832bafd3a4c23fdef8015fdea)
* (edit) 
hadoop-ozone/ozone-recon/src/test/java/org/apache/hadoop/ozone/recon/AbstractOMMetadataManagerTest.java


> Iterate the OM DB snapshot and populate the recon container DB. 
> 
>
> Key: HDDS-1234
> URL: https://issues.apache.org/jira/browse/HDDS-1234
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Recon
>Reporter: Aravindan Vijayan
>Assignee: Aravindan Vijayan
>Priority: Major
> Fix For: 0.5.0
>
> Attachments: HDDS-1234-000.patch, HDDS-1234-001.patch, 
> HDDS-1234-002.patch, HDDS-1234-003.patch, HDDS-1234-004.patch, 
> HDDS-1234-005.patch
>
>
> * OM DB snapshot contains the Key->ContainerId + BlockId information. 
> * Iterate the OM snapshot DB and create the reverse map of (ContainerId, Key 
> prefix) -> Key count to be stored in the Recon container DB.
> * Use a codec to store data into Recon container DB.
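
For readers skimming the digest, the second bullet above amounts to inverting the OM key table into per-(container, key-prefix) key counts. The sketch below illustrates that inversion only; the types and field names are made up for the example and do not match the real OM or Recon classes, and the codec/RocksDB persistence step is omitted.

{code:java}
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Rough sketch of the reverse-map aggregation; all names are illustrative.
public class ContainerKeyAggregator {

  /** One OM key together with the IDs of the containers holding its blocks. */
  public static class OmKey {
    final String keyName;
    final List<Long> containerIds;

    public OmKey(String keyName, List<Long> containerIds) {
      this.keyName = keyName;
      this.containerIds = containerIds;
    }
  }

  /**
   * Builds the reverse map: for every (containerId, key prefix) pair, count
   * how many keys reference that container, as described in the bullet above.
   */
  public static Map<String, Long> buildReverseMap(Iterable<OmKey> omKeys) {
    Map<String, Long> counts = new HashMap<>();
    for (OmKey key : omKeys) {
      for (long containerId : key.containerIds) {
        String containerKeyPrefix = containerId + "/" + key.keyName;
        counts.merge(containerKeyPrefix, 1L, Long::sum);
      }
    }
    return counts;
  }
}
{code}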



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1260) Create Recon Server lifecycle integration with Ozone.

2019-03-25 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1260?focusedWorklogId=218452&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-218452
 ]

ASF GitHub Bot logged work on HDDS-1260:


Author: ASF GitHub Bot
Created on: 26/Mar/19 04:09
Start Date: 26/Mar/19 04:09
Worklog Time Spent: 10m 
  Work Description: vivekratnavel commented on pull request #643: 
HDDS-1260. Create Recon Server lifecycle integration with Ozone.
URL: https://github.com/apache/hadoop/pull/643#discussion_r268937473
 
 

 ##
 File path: hadoop-ozone/ozone-recon/pom.xml
 ##
 @@ -84,4 +84,33 @@
   
 
   
+  <build>
+    <plugins>
+      <plugin>
+        <groupId>org.apache.maven.plugins</groupId>
+        <artifactId>maven-dependency-plugin</artifactId>
+        <executions>
+          <execution>
+            <id>copy-common-html</id>
+            <phase>prepare-package</phase>
+            <goals>
+              <goal>unpack</goal>
+            </goals>
+            <configuration>
+              <artifactItems>
+                <artifactItem>
+                  <groupId>org.apache.hadoop</groupId>
+                  <artifactId>hadoop-hdds-server-framework</artifactId>
 
 Review comment:
   Yes. The hadoop-hdds-server-framework has commonly used static js and css 
files.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 218452)
Time Spent: 40m  (was: 0.5h)

> Create Recon Server lifecycle integration with Ozone.
> 
>
> Key: HDDS-1260
> URL: https://issues.apache.org/jira/browse/HDDS-1260
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Recon
>Reporter: Aravindan Vijayan
>Assignee: Vivek Ratnavel Subramanian
>Priority: Critical
>  Labels: pull-request-available
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> * Create the lifecycle scripts (start/stop) for Recon Server along with Shell 
> interface like the other components.
>  * Verify configurations are being picked up by Recon Server on startup.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1260) Create Recon Server lifecycle integration with Ozone.

2019-03-25 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1260?focusedWorklogId=218449&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-218449
 ]

ASF GitHub Bot logged work on HDDS-1260:


Author: ASF GitHub Bot
Created on: 26/Mar/19 04:00
Start Date: 26/Mar/19 04:00
Worklog Time Spent: 10m 
  Work Description: avijayanhwx commented on pull request #643: HDDS-1260. 
Create Recon Server lifecycle integration with Ozone.
URL: https://github.com/apache/hadoop/pull/643#discussion_r268936201
 
 

 ##
 File path: hadoop-ozone/ozone-recon/pom.xml
 ##
 @@ -84,4 +84,33 @@
   
 
   
+  <build>
+    <plugins>
+      <plugin>
+        <groupId>org.apache.maven.plugins</groupId>
+        <artifactId>maven-dependency-plugin</artifactId>
+        <executions>
+          <execution>
+            <id>copy-common-html</id>
+            <phase>prepare-package</phase>
+            <goals>
+              <goal>unpack</goal>
+            </goals>
+            <configuration>
+              <artifactItems>
+                <artifactItem>
+                  <groupId>org.apache.hadoop</groupId>
+                  <artifactId>hadoop-hdds-server-framework</artifactId>
 
 Review comment:
   Is this the correct artifactId?
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 218449)
Time Spent: 0.5h  (was: 20m)

> Create Recon Server lifecycle integration with Ozone.
> 
>
> Key: HDDS-1260
> URL: https://issues.apache.org/jira/browse/HDDS-1260
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Recon
>Reporter: Aravindan Vijayan
>Assignee: Vivek Ratnavel Subramanian
>Priority: Critical
>  Labels: pull-request-available
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> * Create the lifecycle scripts (start/stop) for Recon Server along with Shell 
> interface like the other components.
>  * Verify configurations are being picked up by Recon Server on startup.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1260) Create Recon Server lifecycle integration with Ozone.

2019-03-25 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1260?focusedWorklogId=218450&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-218450
 ]

ASF GitHub Bot logged work on HDDS-1260:


Author: ASF GitHub Bot
Created on: 26/Mar/19 04:00
Start Date: 26/Mar/19 04:00
Worklog Time Spent: 10m 
  Work Description: avijayanhwx commented on pull request #643: HDDS-1260. 
Create Recon Server lifecycle integration with Ozone.
URL: https://github.com/apache/hadoop/pull/643#discussion_r268936233
 
 

 ##
 File path: 
hadoop-ozone/ozone-recon/src/main/resources/webapps.recon.WEB-INF/web.xml
 ##
 @@ -1,33 +0,0 @@
-
> Key: HDDS-1260
> URL: https://issues.apache.org/jira/browse/HDDS-1260
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Recon
>Reporter: Aravindan Vijayan
>Assignee: Vivek Ratnavel Subramanian
>Priority: Critical
>  Labels: pull-request-available
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> * Create the lifecycle scripts (start/stop) for Recon Server along with Shell 
> interface like the other components.
>  * Verify configurations are being picked up by Recon Server on startup.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDDS-1334) Fix asf license errors in newly added files by HDDS-1234

2019-03-25 Thread Yiqun Lin (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1334?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16801355#comment-16801355
 ] 

Yiqun Lin edited comment on HDDS-1334 at 3/26/19 3:54 AM:
--

Committed this to trunk.
Thanks [~avijayan] for the quick fix and [~bharatviswa] for catching this.
BTW, it looks like the ASF check for the recon module hasn't been triggered.


was (Author: linyiqun):
Committed this to trunk.
Thanks [~avijayan] for the quick fix and [~bharatviswa] for catching this.

> Fix asf license errors in newly added files by HDDS-1234
> 
>
> Key: HDDS-1334
> URL: https://issues.apache.org/jira/browse/HDDS-1334
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Aravindan Vijayan
>Priority: Major
> Fix For: 0.5.0
>
> Attachments: HDDS-1334-000.patch
>
>
> [https://ci.anzix.net/job/ozone/11961//artifact/build/rat.out/*view*/]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-939) Add S3 access check to Ozone manager

2019-03-25 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-939?focusedWorklogId=218447&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-218447
 ]

ASF GitHub Bot logged work on HDDS-939:
---

Author: ASF GitHub Bot
Created on: 26/Mar/19 03:52
Start Date: 26/Mar/19 03:52
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #634: HDDS-939. Add S3 
access check to Ozone manager. Contributed by Ajay Kumar.
URL: https://github.com/apache/hadoop/pull/634#issuecomment-476465220
 
 
   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|:--------|:-------|
   | 0 | reexec | 49 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 2 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 13 | Maven dependency ordering for branch |
   | +1 | mvninstall | 1157 | trunk passed |
   | +1 | compile | 102 | trunk passed |
   | +1 | checkstyle | 28 | trunk passed |
   | +1 | mvnsite | 99 | trunk passed |
   | +1 | shadedclient | 869 | branch has no errors when building and testing 
our client artifacts. |
   | 0 | findbugs | 0 | Skipped patched modules with no Java source: 
hadoop-ozone/integration-test |
   | +1 | findbugs | 97 | trunk passed |
   | +1 | javadoc | 71 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 12 | Maven dependency ordering for patch |
   | +1 | mvninstall | 103 | the patch passed |
   | +1 | compile | 96 | the patch passed |
   | +1 | javac | 96 | the patch passed |
   | -0 | checkstyle | 26 | hadoop-ozone: The patch generated 1 new + 0 
unchanged - 0 fixed = 1 total (was 0) |
   | +1 | mvnsite | 88 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 801 | patch has no errors when building and testing 
our client artifacts. |
   | 0 | findbugs | 0 | Skipped patched modules with no Java source: 
hadoop-ozone/integration-test |
   | +1 | findbugs | 112 | the patch passed |
   | +1 | javadoc | 67 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 38 | common in the patch passed. |
   | +1 | unit | 35 | s3gateway in the patch passed. |
   | +1 | unit | 837 | integration-test in the patch passed. |
   | +1 | asflicense | 29 | The patch does not generate ASF License warnings. |
   | | | 4762 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-634/8/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/634 |
   | Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall 
 mvnsite  unit  shadedclient  findbugs  checkstyle  |
   | uname | Linux 6d57c3b5a01c 4.4.0-138-generic #164~14.04.1-Ubuntu SMP Fri 
Oct 5 08:56:16 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 3f6d6d2 |
   | maven | version: Apache Maven 3.3.9 |
   | Default Java | 1.8.0_191 |
   | findbugs | v3.1.0-RC1 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-634/8/artifact/out/diff-checkstyle-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-634/8/testReport/ |
   | Max. process+thread count | 4238 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/common hadoop-ozone/s3gateway 
hadoop-ozone/integration-test U: hadoop-ozone |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-634/8/console |
   | Powered by | Apache Yetus 0.9.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 218447)
Time Spent: 4h  (was: 3h 50m)

> Add S3 access check to Ozone manager
> 
>
> Key: HDDS-939
> URL: https://issues.apache.org/jira/browse/HDDS-939
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Manager, S3
>Reporter: Anu Engineer
>Assignee: Ajay Kumar
>Priority: Blocker
>  Labels: pull-request-available
>  Time Spent: 4h
>  Remaining Estimate: 0h
>
> Add the mapping from S3 User Identity to UGI inside Ozone Manager. Also add 
> the access check permission, that is, a call into checkAccess, which will be 
> intercepted by Ranger or the Ozone access check.

[jira] [Updated] (HDDS-1334) Fix asf license errors in newly added files by HDDS-1234

2019-03-25 Thread Yiqun Lin (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1334?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yiqun Lin updated HDDS-1334:

  Resolution: Fixed
Target Version/s: 0.5.0
  Status: Resolved  (was: Patch Available)

Committed this to trunk.
Thanks [~avijayan] for the quick fix and [~bharatviswa] for catching this.

> Fix asf license errors in newly added files by HDDS-1234
> 
>
> Key: HDDS-1334
> URL: https://issues.apache.org/jira/browse/HDDS-1334
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Aravindan Vijayan
>Priority: Major
> Fix For: 0.5.0
>
> Attachments: HDDS-1334-000.patch
>
>
> [https://ci.anzix.net/job/ozone/11961//artifact/build/rat.out/*view*/]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1260) Create Recon Server lifecycle integration with Ozone.

2019-03-25 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1260?focusedWorklogId=218446&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-218446
 ]

ASF GitHub Bot logged work on HDDS-1260:


Author: ASF GitHub Bot
Created on: 26/Mar/19 03:50
Start Date: 26/Mar/19 03:50
Worklog Time Spent: 10m 
  Work Description: vivekratnavel commented on issue #643: HDDS-1260. 
Create Recon Server lifecycle integration with Ozone.
URL: https://github.com/apache/hadoop/pull/643#issuecomment-476464870
 
 
   @avijayanhwx @bharatviswa504 @swagle Please review when you find time
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 218446)
Time Spent: 20m  (was: 10m)

> Create Recon Server lifecycle integration with Ozone.
> 
>
> Key: HDDS-1260
> URL: https://issues.apache.org/jira/browse/HDDS-1260
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Recon
>Reporter: Aravindan Vijayan
>Assignee: Vivek Ratnavel Subramanian
>Priority: Critical
>  Labels: pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> * Create the lifecycle scripts (start/stop) for Recon Server along with Shell 
> interface like the other components.
>  * Verify configurations are being picked up by Recon Server on startup.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1260) Create Recon Server lifecycle integration with Ozone.

2019-03-25 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1260?focusedWorklogId=218445&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-218445
 ]

ASF GitHub Bot logged work on HDDS-1260:


Author: ASF GitHub Bot
Created on: 26/Mar/19 03:49
Start Date: 26/Mar/19 03:49
Worklog Time Spent: 10m 
  Work Description: vivekratnavel commented on pull request #643: 
HDDS-1260. Create Recon Server lifecycle integration with Ozone.
URL: https://github.com/apache/hadoop/pull/643
 
 
   - Create the lifecycle scripts (start/stop) for Recon Server along with 
Shell interface like the other components.
   - Verify configurations are being picked up by Recon Server on startup.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 218445)
Time Spent: 10m
Remaining Estimate: 0h

> Create Recon Server lifecycle integration with Ozone.
> 
>
> Key: HDDS-1260
> URL: https://issues.apache.org/jira/browse/HDDS-1260
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Recon
>Reporter: Aravindan Vijayan
>Assignee: Vivek Ratnavel Subramanian
>Priority: Critical
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> * Create the lifecycle scripts (start/stop) for Recon Server along with Shell 
> interface like the other components.
>  * Verify configurations are being picked up by Recon Server on startup.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1260) Create Recon Server lifecycle integration with Ozone.

2019-03-25 Thread Vivek Ratnavel Subramanian (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1260?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vivek Ratnavel Subramanian updated HDDS-1260:
-
Description: 
* Create the lifecycle scripts (start/stop) for Recon Server along with Shell 
interface like the other components.
 * Verify configurations are being picked up by Recon Server on startup.

  was:
* Create the lifeycle scripts (start/stop) for Recon Server along with Shell 
interface like the other components. 
* Verify configurations are being picked up by Recon Server on startup.



> Create Recon Server lifecycle integration with Ozone.
> 
>
> Key: HDDS-1260
> URL: https://issues.apache.org/jira/browse/HDDS-1260
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Recon
>Reporter: Aravindan Vijayan
>Assignee: Vivek Ratnavel Subramanian
>Priority: Critical
>
> * Create the lifecycle scripts (start/stop) for Recon Server along with Shell 
> interface like the other components.
>  * Verify configurations are being picked up by Recon Server on startup.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1260) Create Recon Server lifecycle integration with Ozone.

2019-03-25 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1260?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-1260:
-
Labels: pull-request-available  (was: )

> Create Recon Server lifecycle integration with Ozone.
> 
>
> Key: HDDS-1260
> URL: https://issues.apache.org/jira/browse/HDDS-1260
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Recon
>Reporter: Aravindan Vijayan
>Assignee: Vivek Ratnavel Subramanian
>Priority: Critical
>  Labels: pull-request-available
>
> * Create the lifecycle scripts (start/stop) for Recon Server along with Shell 
> interface like the other components.
>  * Verify configurations are being picked up by Recon Server on startup.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1334) fix asf license errors in newly added files by HDDS-1234

2019-03-25 Thread Yiqun Lin (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1334?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16801350#comment-16801350
 ] 

Yiqun Lin commented on HDDS-1334:
-

+1. I missed this when reviewing HDDS-1234 :P. Committing this.

> fix asf license errors in newly added files by HDDS-1234
> 
>
> Key: HDDS-1334
> URL: https://issues.apache.org/jira/browse/HDDS-1334
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Aravindan Vijayan
>Priority: Major
> Fix For: 0.5.0
>
> Attachments: HDDS-1334-000.patch
>
>
> [https://ci.anzix.net/job/ozone/11961//artifact/build/rat.out/*view*/]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1334) Fix asf license errors in newly added files by HDDS-1234

2019-03-25 Thread Yiqun Lin (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1334?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yiqun Lin updated HDDS-1334:

Summary: Fix asf license errors in newly added files by HDDS-1234  (was: 
fix asf license errors in newly added files by HDDS-1234)

> Fix asf license errors in newly added files by HDDS-1234
> 
>
> Key: HDDS-1334
> URL: https://issues.apache.org/jira/browse/HDDS-1334
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Aravindan Vijayan
>Priority: Major
> Fix For: 0.5.0
>
> Attachments: HDDS-1334-000.patch
>
>
> [https://ci.anzix.net/job/ozone/11961//artifact/build/rat.out/*view*/]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-939) Add S3 access check to Ozone manager

2019-03-25 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-939?focusedWorklogId=218444&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-218444
 ]

ASF GitHub Bot logged work on HDDS-939:
---

Author: ASF GitHub Bot
Created on: 26/Mar/19 03:44
Start Date: 26/Mar/19 03:44
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on issue #634: HDDS-939. Add 
S3 access check to Ozone manager. Contributed by Ajay Kumar.
URL: https://github.com/apache/hadoop/pull/634#issuecomment-476463735
 
 
   @ajayydv 
   Can you fix the Jenkins-reported issue and one minor nit in the test?
   If these are fixed, this is ready to go.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 218444)
Time Spent: 3h 50m  (was: 3h 40m)

> Add S3 access check to Ozone manager
> 
>
> Key: HDDS-939
> URL: https://issues.apache.org/jira/browse/HDDS-939
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Manager, S3
>Reporter: Anu Engineer
>Assignee: Ajay Kumar
>Priority: Blocker
>  Labels: pull-request-available
>  Time Spent: 3h 50m
>  Remaining Estimate: 0h
>
> Add the mapping from S3 User Identity to UGI inside Ozone Manager. Also add 
> the access check permission, that is, a call into checkAccess, which will be 
> intercepted by Ranger or the Ozone access check.
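
As a rough illustration of the first half of this description, the sketch below maps an S3 access key ID to a Hadoop UGI that an access-check hook could then evaluate. The registry and method names are invented for the example; only UserGroupInformation.createRemoteUser is an existing Hadoop API, and the actual Ozone Manager wiring is not shown.

{code:java}
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

import org.apache.hadoop.security.UserGroupInformation;

// Sketch only: a toy registry from S3 access key IDs to user names, resolved
// into UGIs for the access check. Not the actual Ozone Manager code.
public class S3IdentityMapper {

  /** accessKeyId -> short user name / Kerberos principal. */
  private final Map<String, String> accessKeyToUser = new ConcurrentHashMap<>();

  public void register(String accessKeyId, String userName) {
    accessKeyToUser.put(accessKeyId, userName);
  }

  /** Resolves an S3 access key ID to the UGI used for the checkAccess call. */
  public UserGroupInformation toUgi(String accessKeyId) {
    String user = accessKeyToUser.get(accessKeyId);
    if (user == null) {
      throw new IllegalArgumentException("Unknown S3 access key: " + accessKeyId);
    }
    return UserGroupInformation.createRemoteUser(user);
  }
}
{code}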



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-939) Add S3 access check to Ozone manager

2019-03-25 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-939?focusedWorklogId=218443&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-218443
 ]

ASF GitHub Bot logged work on HDDS-939:
---

Author: ASF GitHub Bot
Created on: 26/Mar/19 03:43
Start Date: 26/Mar/19 03:43
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on issue #634: HDDS-939. Add 
S3 access check to Ozone manager. Contributed by Ajay Kumar.
URL: https://github.com/apache/hadoop/pull/634#issuecomment-476463735
 
 
   @ajayydv 
   Can you fix the Jenkins-reported issue and one minor nit in the test?
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 218443)
Time Spent: 3h 40m  (was: 3.5h)

> Add S3 access check to Ozone manager
> 
>
> Key: HDDS-939
> URL: https://issues.apache.org/jira/browse/HDDS-939
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Manager, S3
>Reporter: Anu Engineer
>Assignee: Ajay Kumar
>Priority: Blocker
>  Labels: pull-request-available
>  Time Spent: 3h 40m
>  Remaining Estimate: 0h
>
> Add the mapping from the S3 user identity to a UGI inside Ozone Manager. Also 
> add the access permission check, that is, a call into checkAccess, which will 
> be intercepted by Ranger or the Ozone access check.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1318) Fix MalformedTracerStateStringException on DN logs

2019-03-25 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1318?focusedWorklogId=218428=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-218428
 ]

ASF GitHub Bot logged work on HDDS-1318:


Author: ASF GitHub Bot
Created on: 26/Mar/19 02:38
Start Date: 26/Mar/19 02:38
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #641: HDDS-1318. Fix 
MalformedTracerStateStringException on DN logs. Contributed by Xiaoyu Yao.
URL: https://github.com/apache/hadoop/pull/641#issuecomment-476451629
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 39 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 1 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 70 | Maven dependency ordering for branch |
   | +1 | mvninstall | 1250 | trunk passed |
   | +1 | compile | 1277 | trunk passed |
   | +1 | checkstyle | 210 | trunk passed |
   | +1 | mvnsite | 174 | trunk passed |
   | +1 | shadedclient | 1178 | branch has no errors when building and testing 
our client artifacts. |
   | 0 | findbugs | 0 | Skipped patched modules with no Java source: 
hadoop-ozone/integration-test |
   | +1 | findbugs | 122 | trunk passed |
   | +1 | javadoc | 103 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 20 | Maven dependency ordering for patch |
   | +1 | mvninstall | 94 | the patch passed |
   | +1 | compile | 1018 | the patch passed |
   | +1 | javac | 1018 | the patch passed |
   | +1 | checkstyle | 200 | the patch passed |
   | +1 | mvnsite | 119 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 702 | patch has no errors when building and testing 
our client artifacts. |
   | 0 | findbugs | 0 | Skipped patched modules with no Java source: 
hadoop-ozone/integration-test |
   | +1 | findbugs | 130 | the patch passed |
   | +1 | javadoc | 102 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 30 | client in the patch passed. |
   | +1 | unit | 95 | common in the patch passed. |
   | -1 | unit | 859 | integration-test in the patch failed. |
   | +1 | asflicense | 45 | The patch does not generate ASF License warnings. |
   | | | 7730 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.ozone.container.ozoneimpl.TestSecureOzoneContainer |
   |   | hadoop.ozone.container.ozoneimpl.TestOzoneContainer |
   |   | hadoop.ozone.container.TestContainerReplication |
   |   | hadoop.ozone.container.metrics.TestContainerMetrics |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-641/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/641 |
   | Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall 
 mvnsite  unit  shadedclient  findbugs  checkstyle  |
   | uname | Linux a8ba333b4d6a 3.13.0-153-generic #203-Ubuntu SMP Thu Jun 14 
08:52:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 3f6d6d2 |
   | maven | version: Apache Maven 3.3.9 |
   | Default Java | 1.8.0_191 |
   | findbugs | v3.1.0-RC1 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-641/1/artifact/out/patch-unit-hadoop-ozone_integration-test.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-641/1/testReport/ |
   | Max. process+thread count | 4096 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/client hadoop-hdds/common 
hadoop-ozone/integration-test U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-641/1/console |
   | Powered by | Apache Yetus 0.9.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 218428)
Time Spent: 20m  (was: 10m)

> Fix MalformedTracerStateStringException on DN logs
> --
>
> Key: HDDS-1318
> URL: https://issues.apache.org/jira/browse/HDDS-1318
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Xiaoyu Yao
>

[jira] [Comment Edited] (HDFS-14355) Implement HDFS cache on SCM by using pure java mapped byte buffer

2019-03-25 Thread Feilong He (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14355?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16800592#comment-16800592
 ] 

Feilong He edited comment on HDFS-14355 at 3/26/19 1:57 AM:


Thanks [~Sammi] for your valuable comments.
{quote}Users would like to know the relationship between 
dfs.datanode.cache.pmem.capacity and dfs.datanode.max.locked.memory by reading 
the descriptions in hdfs-default.xml
{quote}
I will proofread the descriptions to make them clear to users.
{quote}PmemUsedBytesCount: is there any foreseeable issue with reusing 
UsedBytesCount instead? Also, byte round-up is not addressed in 
PmemUsedBytesCount.
{quote}
As you know, UsedBytesCount counts the DRAM bytes. It ensures that after 
reserving bytes for the DRAM cache, the used bytes will not exceed maxBytes 
(dfs.datanode.max.locked.memory). We found that besides the HDFS DRAM cache, 
Lazy Persist Writes also uses UsedBytesCount to reserve/release bytes. Since 
supporting Lazy Persist Writes on pmem is not the target of this jira, we 
introduced PmemUsedBytesCount to separate pmem's cache byte management from 
DRAM's. Thus Lazy Persist Writes is not affected, and users can still enable 
it by configuring dfs.datanode.max.locked.memory. Pmem may not have a 
page-size mechanism like DRAM (we will confirm this), so we did not round the 
reserved bytes up to a page-size-aligned value. Because of this difference, 
the DRAM cache and the pmem cache have different reserve/release methods, 
which also makes adding PmemUsedBytesCount necessary.
{quote}FsDatasetCache is not a good place to put memory-loader-specific 
implementation functions like reservePmem and releasePmem. FsDatasetCache 
should stay generic.
{quote}
Good suggestion; we are aware of this issue. In the new patch, we will move 
PmemUsedBytesCount, reservePmem, and releasePmem to a new class, 
PmemVolumeManager, to keep FsDatasetCache generic.
{quote}As [~daryn] suggested, more elegant error handling.
{quote}
We are reviewing our code to make sure exceptions are handled elegantly.

 

Thanks again for your effort in reviewing this patch. We will consider your 
suggestions seriously.
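
As a rough illustration of the reserve/release accounting described above (a 
counter bounded by a configured pmem capacity, kept separate from the DRAM 
counter), here is a minimal sketch. The class and field names are assumptions 
for illustration; this is not the PmemVolumeManager code from the patch.

{code}
import java.util.concurrent.atomic.AtomicLong;

// Minimal sketch of capacity-bounded reserve/release accounting for the pmem
// cache, analogous to what PmemUsedBytesCount is described to do above.
final class PmemUsedBytes {
  private final long maxBytes;                       // e.g. the configured pmem capacity
  private final AtomicLong used = new AtomicLong();

  PmemUsedBytes(long maxBytes) {
    this.maxBytes = maxBytes;
  }

  /** Try to reserve count bytes; returns the new total, or -1 if capacity would be exceeded. */
  long reserve(long count) {
    while (true) {
      long cur = used.get();
      long next = cur + count;
      if (next > maxBytes) {
        return -1;                                   // no page-size round-up, per the comment above
      }
      if (used.compareAndSet(cur, next)) {
        return next;
      }
    }
  }

  /** Release previously reserved bytes and return the new total. */
  long release(long count) {
    return used.addAndGet(-count);
  }
}
{code}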



> Implement HDFS cache on SCM by using pure java mapped byte buffer
> -
>
> Key: HDFS-14355
> URL: https://issues.apache.org/jira/browse/HDFS-14355
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: caching, datanode
>Reporter: Feilong He
>Assignee: Feilong He
>Priority: Major
> Attachments: HDFS-14355.000.patch, HDFS-14355.001.patch, 
> HDFS-14355.002.patch, HDFS-14355.003.patch
>
>
> This task is to implement the caching to persistent memory using pure 
> {{java.nio.MappedByteBuffer}}, which could be useful in case native support 
> isn't available or 
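
For readers unfamiliar with the "pure java mapped byte buffer" approach named 
in the issue title, here is a minimal, hypothetical sketch of mapping a cache 
file on a DAX-mounted pmem path with java.nio; the mount point, file name, and 
length are assumptions for illustration.

{code}
import java.io.IOException;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.StandardOpenOption;

// Illustrative only: maps a file that lives on a DAX-mounted pmem device.
public final class PmemMapSketch {
  static MappedByteBuffer mapCacheFile(Path file, long length) throws IOException {
    try (FileChannel ch = FileChannel.open(file,
        StandardOpenOption.CREATE, StandardOpenOption.READ, StandardOpenOption.WRITE)) {
      // A READ_WRITE mapping; the file grows to the requested length if needed.
      return ch.map(FileChannel.MapMode.READ_WRITE, 0, length);
    }
  }

  public static void main(String[] args) throws IOException {
    MappedByteBuffer buf = mapCacheFile(Paths.get("/mnt/pmem0/dn-cache/blk_1001"), 4096);
    buf.put(0, (byte) 1);  // write into the mapped region
    buf.force();           // flush the mapped region to the backing file
  }
}
{code}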

[jira] [Work logged] (HDDS-939) Add S3 access check to Ozone manager

2019-03-25 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-939?focusedWorklogId=218396=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-218396
 ]

ASF GitHub Bot logged work on HDDS-939:
---

Author: ASF GitHub Bot
Created on: 26/Mar/19 01:05
Start Date: 26/Mar/19 01:05
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #634: HDDS-939. Add S3 
access check to Ozone manager. Contributed by Ajay Kumar.
URL: https://github.com/apache/hadoop/pull/634#issuecomment-476434316
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 520 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 1 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 52 | Maven dependency ordering for branch |
   | +1 | mvninstall | 982 | trunk passed |
   | +1 | compile | 107 | trunk passed |
   | +1 | checkstyle | 37 | trunk passed |
   | +1 | mvnsite | 95 | trunk passed |
   | +1 | shadedclient | 773 | branch has no errors when building and testing 
our client artifacts. |
   | 0 | findbugs | 0 | Skipped patched modules with no Java source: 
hadoop-ozone/integration-test |
   | +1 | findbugs | 109 | trunk passed |
   | +1 | javadoc | 66 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 11 | Maven dependency ordering for patch |
   | +1 | mvninstall | 91 | the patch passed |
   | +1 | compile | 90 | the patch passed |
   | +1 | javac | 90 | the patch passed |
   | -0 | checkstyle | 23 | hadoop-ozone: The patch generated 1 new + 0 
unchanged - 0 fixed = 1 total (was 0) |
   | +1 | mvnsite | 81 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 683 | patch has no errors when building and testing 
our client artifacts. |
   | 0 | findbugs | 0 | Skipped patched modules with no Java source: 
hadoop-ozone/integration-test |
   | +1 | findbugs | 101 | the patch passed |
   | +1 | javadoc | 62 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 30 | common in the patch passed. |
   | -1 | unit | 32 | s3gateway in the patch failed. |
   | +1 | unit | 590 | integration-test in the patch passed. |
   | +1 | asflicense | 24 | The patch does not generate ASF License warnings. |
   | | | 4579 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.s3.endpoint.TestRootList |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-634/7/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/634 |
   | Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall 
 mvnsite  unit  shadedclient  findbugs  checkstyle  |
   | uname | Linux e2bc1d35cbd6 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 3f6d6d2 |
   | maven | version: Apache Maven 3.3.9 |
   | Default Java | 1.8.0_191 |
   | findbugs | v3.1.0-RC1 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-634/7/artifact/out/diff-checkstyle-hadoop-ozone.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-634/7/artifact/out/patch-unit-hadoop-ozone_s3gateway.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-634/7/testReport/ |
   | Max. process+thread count | 4975 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/common hadoop-ozone/s3gateway 
hadoop-ozone/integration-test U: hadoop-ozone |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-634/7/console |
   | Powered by | Apache Yetus 0.9.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 218396)
Time Spent: 3.5h  (was: 3h 20m)

> Add S3 access check to Ozone manager
> 
>
> Key: HDDS-939
> URL: https://issues.apache.org/jira/browse/HDDS-939
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Manager, S3
>Reporter: Anu Engineer
>Assignee: Ajay Kumar
>

[jira] [Work logged] (HDDS-939) Add S3 access check to Ozone manager

2019-03-25 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-939?focusedWorklogId=218394=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-218394
 ]

ASF GitHub Bot logged work on HDDS-939:
---

Author: ASF GitHub Bot
Created on: 26/Mar/19 01:02
Start Date: 26/Mar/19 01:02
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #634: HDDS-939. Add S3 
access check to Ozone manager. Contributed by Ajay Kumar.
URL: https://github.com/apache/hadoop/pull/634#issuecomment-476433712
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 461 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 1 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 58 | Maven dependency ordering for branch |
   | +1 | mvninstall | 1140 | trunk passed |
   | +1 | compile | 110 | trunk passed |
   | +1 | checkstyle | 35 | trunk passed |
   | +1 | mvnsite | 101 | trunk passed |
   | +1 | shadedclient | 847 | branch has no errors when building and testing 
our client artifacts. |
   | 0 | findbugs | 0 | Skipped patched modules with no Java source: 
hadoop-ozone/integration-test |
   | +1 | findbugs | 106 | trunk passed |
   | +1 | javadoc | 74 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 12 | Maven dependency ordering for patch |
   | +1 | mvninstall | 111 | the patch passed |
   | +1 | compile | 99 | the patch passed |
   | +1 | javac | 99 | the patch passed |
   | -0 | checkstyle | 23 | hadoop-ozone: The patch generated 1 new + 0 
unchanged - 0 fixed = 1 total (was 0) |
   | +1 | mvnsite | 82 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 774 | patch has no errors when building and testing 
our client artifacts. |
   | 0 | findbugs | 0 | Skipped patched modules with no Java source: 
hadoop-ozone/integration-test |
   | +1 | findbugs | 110 | the patch passed |
   | +1 | javadoc | 66 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 34 | common in the patch passed. |
   | -1 | unit | 35 | s3gateway in the patch failed. |
   | -1 | unit | 635 | integration-test in the patch failed. |
   | +1 | asflicense | 27 | The patch does not generate ASF License warnings. |
   | | | 4957 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.s3.endpoint.TestRootList |
   |   | hadoop.hdds.scm.pipeline.TestPipelineClose |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-634/6/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/634 |
   | Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall 
 mvnsite  unit  shadedclient  findbugs  checkstyle  |
   | uname | Linux 15b8cc2a2f4f 4.4.0-139-generic #165~14.04.1-Ubuntu SMP Wed 
Oct 31 10:55:11 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 3f6d6d2 |
   | maven | version: Apache Maven 3.3.9 |
   | Default Java | 1.8.0_191 |
   | findbugs | v3.1.0-RC1 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-634/6/artifact/out/diff-checkstyle-hadoop-ozone.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-634/6/artifact/out/patch-unit-hadoop-ozone_s3gateway.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-634/6/artifact/out/patch-unit-hadoop-ozone_integration-test.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-634/6/testReport/ |
   | Max. process+thread count | 3810 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/common hadoop-ozone/s3gateway 
hadoop-ozone/integration-test U: hadoop-ozone |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-634/6/console |
   | Powered by | Apache Yetus 0.9.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 218394)
Time Spent: 3h 20m  (was: 3h 10m)

> Add S3 access check to Ozone manager
> 
>
> Key: HDDS-939
> URL: 

[jira] [Work logged] (HDDS-939) Add S3 access check to Ozone manager

2019-03-25 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-939?focusedWorklogId=218385=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-218385
 ]

ASF GitHub Bot logged work on HDDS-939:
---

Author: ASF GitHub Bot
Created on: 26/Mar/19 00:50
Start Date: 26/Mar/19 00:50
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #634: HDDS-939. Add S3 
access check to Ozone manager. Contributed by Ajay Kumar.
URL: https://github.com/apache/hadoop/pull/634#issuecomment-476431613
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 41 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 1 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 57 | Maven dependency ordering for branch |
   | +1 | mvninstall | 1098 | trunk passed |
   | +1 | compile | 97 | trunk passed |
   | +1 | checkstyle | 28 | trunk passed |
   | +1 | mvnsite | 99 | trunk passed |
   | +1 | shadedclient | 849 | branch has no errors when building and testing 
our client artifacts. |
   | 0 | findbugs | 0 | Skipped patched modules with no Java source: 
hadoop-ozone/integration-test |
   | +1 | findbugs | 105 | trunk passed |
   | +1 | javadoc | 75 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 16 | Maven dependency ordering for patch |
   | +1 | mvninstall | 99 | the patch passed |
   | +1 | compile | 92 | the patch passed |
   | +1 | javac | 92 | the patch passed |
   | -0 | checkstyle | 23 | hadoop-ozone: The patch generated 1 new + 0 
unchanged - 0 fixed = 1 total (was 0) |
   | +1 | mvnsite | 80 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 801 | patch has no errors when building and testing 
our client artifacts. |
   | 0 | findbugs | 0 | Skipped patched modules with no Java source: 
hadoop-ozone/integration-test |
   | +1 | findbugs | 105 | the patch passed |
   | +1 | javadoc | 66 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 33 | common in the patch passed. |
   | -1 | unit | 34 | s3gateway in the patch failed. |
   | -1 | unit | 907 | integration-test in the patch failed. |
   | +1 | asflicense | 30 | The patch does not generate ASF License warnings. |
   | | | 4765 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.s3.endpoint.TestRootList |
   |   | hadoop.ozone.container.TestContainerReplication |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-634/5/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/634 |
   | Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall 
 mvnsite  unit  shadedclient  findbugs  checkstyle  |
   | uname | Linux 9619cde4e8bf 4.4.0-138-generic #164~14.04.1-Ubuntu SMP Fri 
Oct 5 08:56:16 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 3f6d6d2 |
   | maven | version: Apache Maven 3.3.9 |
   | Default Java | 1.8.0_191 |
   | findbugs | v3.1.0-RC1 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-634/5/artifact/out/diff-checkstyle-hadoop-ozone.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-634/5/artifact/out/patch-unit-hadoop-ozone_s3gateway.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-634/5/artifact/out/patch-unit-hadoop-ozone_integration-test.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-634/5/testReport/ |
   | Max. process+thread count | 4135 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/common hadoop-ozone/s3gateway 
hadoop-ozone/integration-test U: hadoop-ozone |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-634/5/console |
   | Powered by | Apache Yetus 0.9.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 218385)
Time Spent: 3h 10m  (was: 3h)

> Add S3 access check to Ozone manager
> 
>
> Key: HDDS-939
> URL: 

[jira] [Updated] (HDDS-1318) Fix MalformedTracerStateStringException on DN logs

2019-03-25 Thread Xiaoyu Yao (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1318?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDDS-1318:
-
Status: Patch Available  (was: Open)

> Fix MalformedTracerStateStringException on DN logs
> --
>
> Key: HDDS-1318
> URL: https://issues.apache.org/jira/browse/HDDS-1318
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> We have seen many of these warnings in the DN logs. This ticket tracks the 
> investigation and the fix.
> {code}
> 2019-03-20 19:01:33 WARN 
> PropagationRegistry$ExceptionCatchingExtractorDecorator:60 - Error when 
> extracting SpanContext from carrier. Handling gracefully.
> io.jaegertracing.internal.exceptions.MalformedTracerStateStringException: 
> String does not match tracer state format: 
> 2c919331-9a51-4bc4-acee-df57a8dcecf0
>  at org.apache.hadoop.hdds.tracing.StringCodec.extract(StringCodec.java:42)
>  at org.apache.hadoop.hdds.tracing.StringCodec.extract(StringCodec.java:32)
>  at 
> io.jaegertracing.internal.PropagationRegistry$ExceptionCatchingExtractorDecorator.extract(PropagationRegistry.java:57)
>  at io.jaegertracing.internal.JaegerTracer.extract(JaegerTracer.java:208)
>  at io.jaegertracing.internal.JaegerTracer.extract(JaegerTracer.java:61)
>  at io.opentracing.util.GlobalTracer.extract(GlobalTracer.java:143)
>  at 
> org.apache.hadoop.hdds.tracing.TracingUtil.importAndCreateScope(TracingUtil.java:96)
>  at 
> org.apache.hadoop.ozone.container.common.impl.HddsDispatcher.dispatch(HddsDispatcher.java:148)
>  at 
> org.apache.hadoop.ozone.container.common.transport.server.GrpcXceiverService$1.onNext(GrpcXceiverService.java:73)
>  at 
> org.apache.hadoop.ozone.container.common.transport.server.GrpcXceiverService$1.onNext(GrpcXceiverService.java:61)
>  at 
> org.apache.ratis.thirdparty.io.grpc.stub.ServerCalls$StreamingServerCallHandler$StreamingServerCallListener.onMessage(ServerCalls.java:248)
>  at 
> org.apache.ratis.thirdparty.io.grpc.ForwardingServerCallListener.onMessage(ForwardingServerCallListener.java:33)
>  at 
> org.apache.ratis.thirdparty.io.grpc.Contexts$ContextualizedServerCallListener.onMessage(Contexts.java:76)
>  at 
> org.apache.ratis.thirdparty.io.grpc.ForwardingServerCallListener.onMessage(ForwardingServerCallListener.java:33)
>  at 
> org.apache.hadoop.hdds.tracing.GrpcServerInterceptor$1.onMessage(GrpcServerInterceptor.java:46)
>  at 
> org.apache.ratis.thirdparty.io.grpc.internal.ServerCallImpl$ServerStreamListenerImpl.messagesAvailable(ServerCallImpl.java:263)
>  at 
> org.apache.ratis.thirdparty.io.grpc.internal.ServerImpl$JumpToApplicationThreadServerStreamListener$1MessagesAvailable.runInContext(ServerImpl.java:686)
>  at 
> org.apache.ratis.thirdparty.io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
>  at 
> org.apache.ratis.thirdparty.io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:123)
>  at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>  at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>  at java.lang.Thread.run(Thread.java:748)
> {code}
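
The warning occurs because the carried string (a bare UUID here) does not 
match Jaeger's text format, traceId:spanId:parentSpanId:flags. Below is a 
hedged sketch of the kind of tolerant handling a codec can apply, treating 
anything not in that format as "no trace context" instead of warning on every 
request; it is illustrative and not the actual StringCodec patch.

{code}
// Illustrative only; not the actual StringCodec fix.
public final class LenientTraceState {

  /** Returns the four fields of a well-formed tracer state string, or null. */
  public static String[] tryParse(String value) {
    if (value == null || value.isEmpty()) {
      return null;                 // nothing carried, nothing to extract
    }
    String[] parts = value.split(":");
    if (parts.length != 4) {
      return null;                 // e.g. a bare UUID such as the one in the log above
    }
    for (String part : parts) {
      if (part.isEmpty()) {
        return null;               // malformed field
      }
    }
    return parts;                  // traceId, spanId, parentSpanId, flags
  }
}
{code}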



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1318) Fix MalformedTracerStateStringException on DN logs

2019-03-25 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1318?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-1318:
-
Labels: pull-request-available  (was: )

> Fix MalformedTracerStateStringException on DN logs
> --
>
> Key: HDDS-1318
> URL: https://issues.apache.org/jira/browse/HDDS-1318
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Major
>  Labels: pull-request-available
>
> We have seen many of these warnings in the DN logs. This ticket tracks the 
> investigation and the fix.
> {code}
> 2019-03-20 19:01:33 WARN 
> PropagationRegistry$ExceptionCatchingExtractorDecorator:60 - Error when 
> extracting SpanContext from carrier. Handling gracefully.
> io.jaegertracing.internal.exceptions.MalformedTracerStateStringException: 
> String does not match tracer state format: 
> 2c919331-9a51-4bc4-acee-df57a8dcecf0
>  at org.apache.hadoop.hdds.tracing.StringCodec.extract(StringCodec.java:42)
>  at org.apache.hadoop.hdds.tracing.StringCodec.extract(StringCodec.java:32)
>  at 
> io.jaegertracing.internal.PropagationRegistry$ExceptionCatchingExtractorDecorator.extract(PropagationRegistry.java:57)
>  at io.jaegertracing.internal.JaegerTracer.extract(JaegerTracer.java:208)
>  at io.jaegertracing.internal.JaegerTracer.extract(JaegerTracer.java:61)
>  at io.opentracing.util.GlobalTracer.extract(GlobalTracer.java:143)
>  at 
> org.apache.hadoop.hdds.tracing.TracingUtil.importAndCreateScope(TracingUtil.java:96)
>  at 
> org.apache.hadoop.ozone.container.common.impl.HddsDispatcher.dispatch(HddsDispatcher.java:148)
>  at 
> org.apache.hadoop.ozone.container.common.transport.server.GrpcXceiverService$1.onNext(GrpcXceiverService.java:73)
>  at 
> org.apache.hadoop.ozone.container.common.transport.server.GrpcXceiverService$1.onNext(GrpcXceiverService.java:61)
>  at 
> org.apache.ratis.thirdparty.io.grpc.stub.ServerCalls$StreamingServerCallHandler$StreamingServerCallListener.onMessage(ServerCalls.java:248)
>  at 
> org.apache.ratis.thirdparty.io.grpc.ForwardingServerCallListener.onMessage(ForwardingServerCallListener.java:33)
>  at 
> org.apache.ratis.thirdparty.io.grpc.Contexts$ContextualizedServerCallListener.onMessage(Contexts.java:76)
>  at 
> org.apache.ratis.thirdparty.io.grpc.ForwardingServerCallListener.onMessage(ForwardingServerCallListener.java:33)
>  at 
> org.apache.hadoop.hdds.tracing.GrpcServerInterceptor$1.onMessage(GrpcServerInterceptor.java:46)
>  at 
> org.apache.ratis.thirdparty.io.grpc.internal.ServerCallImpl$ServerStreamListenerImpl.messagesAvailable(ServerCallImpl.java:263)
>  at 
> org.apache.ratis.thirdparty.io.grpc.internal.ServerImpl$JumpToApplicationThreadServerStreamListener$1MessagesAvailable.runInContext(ServerImpl.java:686)
>  at 
> org.apache.ratis.thirdparty.io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
>  at 
> org.apache.ratis.thirdparty.io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:123)
>  at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>  at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>  at java.lang.Thread.run(Thread.java:748)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1318) Fix MalformedTracerStateStringException on DN logs

2019-03-25 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1318?focusedWorklogId=218380=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-218380
 ]

ASF GitHub Bot logged work on HDDS-1318:


Author: ASF GitHub Bot
Created on: 26/Mar/19 00:27
Start Date: 26/Mar/19 00:27
Worklog Time Spent: 10m 
  Work Description: xiaoyuyao commented on pull request #641: HDDS-1318. 
Fix MalformedTracerStateStringException on DN logs. Contributed by Xiaoyu Yao.
URL: https://github.com/apache/hadoop/pull/641
 
 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 218380)
Time Spent: 10m
Remaining Estimate: 0h

> Fix MalformedTracerStateStringException on DN logs
> --
>
> Key: HDDS-1318
> URL: https://issues.apache.org/jira/browse/HDDS-1318
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> We have seen many of these warnings in the DN logs. This ticket tracks the 
> investigation and the fix.
> {code}
> 2019-03-20 19:01:33 WARN 
> PropagationRegistry$ExceptionCatchingExtractorDecorator:60 - Error when 
> extracting SpanContext from carrier. Handling gracefully.
> io.jaegertracing.internal.exceptions.MalformedTracerStateStringException: 
> String does not match tracer state format: 
> 2c919331-9a51-4bc4-acee-df57a8dcecf0
>  at org.apache.hadoop.hdds.tracing.StringCodec.extract(StringCodec.java:42)
>  at org.apache.hadoop.hdds.tracing.StringCodec.extract(StringCodec.java:32)
>  at 
> io.jaegertracing.internal.PropagationRegistry$ExceptionCatchingExtractorDecorator.extract(PropagationRegistry.java:57)
>  at io.jaegertracing.internal.JaegerTracer.extract(JaegerTracer.java:208)
>  at io.jaegertracing.internal.JaegerTracer.extract(JaegerTracer.java:61)
>  at io.opentracing.util.GlobalTracer.extract(GlobalTracer.java:143)
>  at 
> org.apache.hadoop.hdds.tracing.TracingUtil.importAndCreateScope(TracingUtil.java:96)
>  at 
> org.apache.hadoop.ozone.container.common.impl.HddsDispatcher.dispatch(HddsDispatcher.java:148)
>  at 
> org.apache.hadoop.ozone.container.common.transport.server.GrpcXceiverService$1.onNext(GrpcXceiverService.java:73)
>  at 
> org.apache.hadoop.ozone.container.common.transport.server.GrpcXceiverService$1.onNext(GrpcXceiverService.java:61)
>  at 
> org.apache.ratis.thirdparty.io.grpc.stub.ServerCalls$StreamingServerCallHandler$StreamingServerCallListener.onMessage(ServerCalls.java:248)
>  at 
> org.apache.ratis.thirdparty.io.grpc.ForwardingServerCallListener.onMessage(ForwardingServerCallListener.java:33)
>  at 
> org.apache.ratis.thirdparty.io.grpc.Contexts$ContextualizedServerCallListener.onMessage(Contexts.java:76)
>  at 
> org.apache.ratis.thirdparty.io.grpc.ForwardingServerCallListener.onMessage(ForwardingServerCallListener.java:33)
>  at 
> org.apache.hadoop.hdds.tracing.GrpcServerInterceptor$1.onMessage(GrpcServerInterceptor.java:46)
>  at 
> org.apache.ratis.thirdparty.io.grpc.internal.ServerCallImpl$ServerStreamListenerImpl.messagesAvailable(ServerCallImpl.java:263)
>  at 
> org.apache.ratis.thirdparty.io.grpc.internal.ServerImpl$JumpToApplicationThreadServerStreamListener$1MessagesAvailable.runInContext(ServerImpl.java:686)
>  at 
> org.apache.ratis.thirdparty.io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
>  at 
> org.apache.ratis.thirdparty.io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:123)
>  at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>  at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>  at java.lang.Thread.run(Thread.java:748)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14359) Inherited ACL permissions masked when parent directory does not exist (mkdir -p)

2019-03-25 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14359?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16801244#comment-16801244
 ] 

Hudson commented on HDFS-14359:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #16281 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/16281/])
HDFS-14359. Inherited ACL permissions masked when parent directory does 
(templedf: rev 3f6d6d28119049b003cb81735ce675e52d0d2104)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/FSAclBaseTest.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirMkdirOp.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/resources/testAclCLIWithPosixAclInheritance.xml


> Inherited ACL permissions masked when parent directory does not exist (mkdir 
> -p)
> 
>
> Key: HDFS-14359
> URL: https://issues.apache.org/jira/browse/HDFS-14359
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.3.0
>Reporter: Stephen O'Donnell
>Assignee: Stephen O'Donnell
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HDFS-14359.001.patch, HDFS-14359.002.patch, 
> HDFS-14359.003.patch
>
>
> There appears to be an issue with ACL inheritance if you 'mkdir' a directory 
> such that the parent directories need to be created (i.e., mkdir -p).
> If you have a folder /tmp2/testacls as:
> {code}
> hadoop fs -mkdir /tmp2
> hadoop fs -mkdir /tmp2/testacls
> hadoop fs -setfacl -m default:user:hive:rwx /tmp2/testacls
> hadoop fs -setfacl -m default:user:flume:rwx /tmp2/testacls
> hadoop fs -setfacl -m user:hive:rwx /tmp2/testacls
> hadoop fs -setfacl -m user:flume:rwx /tmp2/testacls
> hadoop fs -getfacl -R /tmp2/testacls
> # file: /tmp2/testacls
> # owner: kafka
> # group: supergroup
> user::rwx
> user:flume:rwx
> user:hive:rwx
> group::r-x
> mask::rwx
> other::r-x
> default:user::rwx
> default:user:flume:rwx
> default:user:hive:rwx
> default:group::r-x
> default:mask::rwx
> default:other::r-x
> {code}
> If you then create a sub-directory in it, the ACLs are as expected:
> {code}
> hadoop fs -mkdir /tmp2/testacls/dir_from_mkdir
> # file: /tmp2/testacls/dir_from_mkdir
> # owner: sodonnell
> # group: supergroup
> user::rwx
> user:flume:rwx
> user:hive:rwx
> group::r-x
> mask::rwx
> other::r-x
> default:user::rwx
> default:user:flume:rwx
> default:user:hive:rwx
> default:group::r-x
> default:mask::rwx
> default:other::r-x
> {code}
> However if you mkdir -p a directory, the situation is not the same:
> {code}
> hadoop fs -mkdir -p /tmp2/testacls/dir_with_subdirs/sub1/sub2
> # file: /tmp2/testacls/dir_with_subdirs
> # owner: sodonnell
> # group: supergroup
> user::rwx
> user:flume:rwx#effective:r-x
> user:hive:rwx #effective:r-x
> group::r-x
> mask::r-x
> other::r-x
> default:user::rwx
> default:user:flume:rwx
> default:user:hive:rwx
> default:group::r-x
> default:mask::rwx
> default:other::r-x
> # file: /tmp2/testacls/dir_with_subdirs/sub1
> # owner: sodonnell
> # group: supergroup
> user::rwx
> user:flume:rwx#effective:r-x
> user:hive:rwx #effective:r-x
> group::r-x
> mask::r-x
> other::r-x
> default:user::rwx
> default:user:flume:rwx
> default:user:hive:rwx
> default:group::r-x
> default:mask::rwx
> default:other::r-x
> # file: /tmp2/testacls/dir_with_subdirs/sub1/sub2
> # owner: sodonnell
> # group: supergroup
> user::rwx
> user:flume:rwx
> user:hive:rwx
> group::r-x
> mask::rwx
> other::r-x
> default:user::rwx
> default:user:flume:rwx
> default:user:hive:rwx
> default:group::r-x
> default:mask::rwx
> default:other::r-x
> {code}
> Notice that the leaf folder "sub2" is correct, but the two ancestor folders 
> have their permissions masked. I believe this is a regression from the fix 
> for HDFS-6962 with dfs.namenode.posix.acl.inheritance.enabled set to true, as 
> the code has changed significantly from the earlier 2.6 / 2.8 branch.
> I will submit a patch for this.
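
As a companion to the shell reproduction above, here is a hypothetical sketch 
of the same check through the Java FileSystem API. The paths are reused from 
the report, and the Configuration is assumed to point at an HDFS cluster with 
dfs.namenode.posix.acl.inheritance.enabled set to true.

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.permission.AclStatus;

// Illustrative reproduction: create nested directories in one call and inspect
// the ACLs of an intermediate directory versus the leaf.
public final class MkdirsAclRepro {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    try (FileSystem fs = FileSystem.get(conf)) {
      Path leaf = new Path("/tmp2/testacls/dir_with_subdirs/sub1/sub2");
      fs.mkdirs(leaf);  // equivalent of 'hadoop fs -mkdir -p'
      // Per the report, the intermediate directories come back with mask::r-x
      // while the leaf keeps mask::rwx.
      AclStatus intermediate = fs.getAclStatus(new Path("/tmp2/testacls/dir_with_subdirs"));
      AclStatus leafAcls = fs.getAclStatus(leaf);
      System.out.println("intermediate: " + intermediate);
      System.out.println("leaf: " + leafAcls);
    }
  }
}
{code}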



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-939) Add S3 access check to Ozone manager

2019-03-25 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-939?focusedWorklogId=218351=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-218351
 ]

ASF GitHub Bot logged work on HDDS-939:
---

Author: ASF GitHub Bot
Created on: 25/Mar/19 23:39
Start Date: 25/Mar/19 23:39
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #634: 
HDDS-939. Add S3 access check to Ozone manager. Contributed by Ajay Kumar.
URL: https://github.com/apache/hadoop/pull/634#discussion_r268895982
 
 

 ##
 File path: 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/client/rpc/TestOzoneRpcClientAbstract.java
 ##
 @@ -288,6 +292,23 @@ public void testCreateS3Bucket()
 Assert.assertTrue(volume.getCreationTime() >= currentTime);
   }
 
+  @Test
+  public void testCreateSecureS3Bucket() throws IOException {
+long currentTime = Time.now();
+String userName = "ozone/localh...@example.com";
+String bucketName = UUID.randomUUID().toString();
+String s3VolumeName = OzoneS3Util.getVolumeName(userName);
+store.createS3Bucket(s3VolumeName, bucketName);
+String volumeName = store.getOzoneVolumeName(bucketName);
+assertEquals(volumeName,"s3" + s3VolumeName);
 
 Review comment:
   Minor NIT: space after ,
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 218351)
Time Spent: 3h  (was: 2h 50m)

> Add S3 access check to Ozone manager
> 
>
> Key: HDDS-939
> URL: https://issues.apache.org/jira/browse/HDDS-939
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Manager, S3
>Reporter: Anu Engineer
>Assignee: Ajay Kumar
>Priority: Blocker
>  Labels: pull-request-available
>  Time Spent: 3h
>  Remaining Estimate: 0h
>
> Add the mapping from the S3 user identity to a UGI inside Ozone Manager. Also 
> add the access permission check, that is, a call into checkAccess, which will 
> be intercepted by Ranger or the Ozone access check.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-939) Add S3 access check to Ozone manager

2019-03-25 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-939?focusedWorklogId=218348=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-218348
 ]

ASF GitHub Bot logged work on HDDS-939:
---

Author: ASF GitHub Bot
Created on: 25/Mar/19 23:32
Start Date: 25/Mar/19 23:32
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #634: 
HDDS-939. Add S3 access check to Ozone manager. Contributed by Ajay Kumar.
URL: https://github.com/apache/hadoop/pull/634#discussion_r268894566
 
 

 ##
 File path: 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/client/rpc/TestOzoneRpcClientAbstract.java
 ##
 @@ -288,6 +291,23 @@ public void testCreateS3Bucket()
 Assert.assertTrue(volume.getCreationTime() >= currentTime);
   }
 
+  @Test
+  public void testCreateSecureS3Bucket() throws IOException {
+long currentTime = Time.now();
+String userName = "ozone/localh...@example.com";
+String bucketName = UUID.randomUUID().toString();
+String md5Hash = DigestUtils.md5Hex(userName);
 
 Review comment:
   Here we should call OzoneS3Util.getVolumeName()
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 218348)
Time Spent: 2h 50m  (was: 2h 40m)

> Add S3 access check to Ozone manager
> 
>
> Key: HDDS-939
> URL: https://issues.apache.org/jira/browse/HDDS-939
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Manager, S3
>Reporter: Anu Engineer
>Assignee: Ajay Kumar
>Priority: Blocker
>  Labels: pull-request-available
>  Time Spent: 2h 50m
>  Remaining Estimate: 0h
>
> Add the mapping from the S3 user identity to a UGI inside Ozone Manager. Also 
> add the access permission check, that is, a call into checkAccess, which will 
> be intercepted by Ranger or the Ozone access check.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14359) Inherited ACL permissions masked when parent directory does not exist (mkdir -p)

2019-03-25 Thread Daniel Templeton (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14359?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daniel Templeton updated HDFS-14359:

   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 3.3.0
   Status: Resolved  (was: Patch Available)

Thanks for the patch, [~sodonnell], and for the review, [~jojochuang].  
Committed to trunk.

> Inherited ACL permissions masked when parent directory does not exist (mkdir 
> -p)
> 
>
> Key: HDFS-14359
> URL: https://issues.apache.org/jira/browse/HDFS-14359
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.3.0
>Reporter: Stephen O'Donnell
>Assignee: Stephen O'Donnell
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HDFS-14359.001.patch, HDFS-14359.002.patch, 
> HDFS-14359.003.patch
>
>
> There appears to be an issue with ACL inheritance if you 'mkdir' a directory 
> such that the parent directories need to be created (i.e., mkdir -p).
> If you have a folder /tmp2/testacls as:
> {code}
> hadoop fs -mkdir /tmp2
> hadoop fs -mkdir /tmp2/testacls
> hadoop fs -setfacl -m default:user:hive:rwx /tmp2/testacls
> hadoop fs -setfacl -m default:user:flume:rwx /tmp2/testacls
> hadoop fs -setfacl -m user:hive:rwx /tmp2/testacls
> hadoop fs -setfacl -m user:flume:rwx /tmp2/testacls
> hadoop fs -getfacl -R /tmp2/testacls
> # file: /tmp2/testacls
> # owner: kafka
> # group: supergroup
> user::rwx
> user:flume:rwx
> user:hive:rwx
> group::r-x
> mask::rwx
> other::r-x
> default:user::rwx
> default:user:flume:rwx
> default:user:hive:rwx
> default:group::r-x
> default:mask::rwx
> default:other::r-x
> {code}
> If you then create a sub-directory in it, the ACLs are as expected:
> {code}
> hadoop fs -mkdir /tmp2/testacls/dir_from_mkdir
> # file: /tmp2/testacls/dir_from_mkdir
> # owner: sodonnell
> # group: supergroup
> user::rwx
> user:flume:rwx
> user:hive:rwx
> group::r-x
> mask::rwx
> other::r-x
> default:user::rwx
> default:user:flume:rwx
> default:user:hive:rwx
> default:group::r-x
> default:mask::rwx
> default:other::r-x
> {code}
> However if you mkdir -p a directory, the situation is not the same:
> {code}
> hadoop fs -mkdir -p /tmp2/testacls/dir_with_subdirs/sub1/sub2
> # file: /tmp2/testacls/dir_with_subdirs
> # owner: sodonnell
> # group: supergroup
> user::rwx
> user:flume:rwx#effective:r-x
> user:hive:rwx #effective:r-x
> group::r-x
> mask::r-x
> other::r-x
> default:user::rwx
> default:user:flume:rwx
> default:user:hive:rwx
> default:group::r-x
> default:mask::rwx
> default:other::r-x
> # file: /tmp2/testacls/dir_with_subdirs/sub1
> # owner: sodonnell
> # group: supergroup
> user::rwx
> user:flume:rwx#effective:r-x
> user:hive:rwx #effective:r-x
> group::r-x
> mask::r-x
> other::r-x
> default:user::rwx
> default:user:flume:rwx
> default:user:hive:rwx
> default:group::r-x
> default:mask::rwx
> default:other::r-x
> # file: /tmp2/testacls/dir_with_subdirs/sub1/sub2
> # owner: sodonnell
> # group: supergroup
> user::rwx
> user:flume:rwx
> user:hive:rwx
> group::r-x
> mask::rwx
> other::r-x
> default:user::rwx
> default:user:flume:rwx
> default:user:hive:rwx
> default:group::r-x
> default:mask::rwx
> default:other::r-x
> {code}
> Notice that the leaf folder "sub2" is correct, but the two ancestor folders 
> have their permissions masked. I believe this is a regression from the fix 
> for HDFS-6962 with dfs.namenode.posix.acl.inheritance.enabled set to true, as 
> the code has changed significantly from the earlier 2.6 / 2.8 branch.
> I will submit a patch for this.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14359) Inherited ACL permissions masked when parent directory does not exist (mkdir -p)

2019-03-25 Thread Daniel Templeton (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14359?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16801220#comment-16801220
 ] 

Daniel Templeton commented on HDFS-14359:
-

Alrighty.  I'll get this committed.  Thanks, [~jojochuang]!

> Inherited ACL permissions masked when parent directory does not exist (mkdir 
> -p)
> 
>
> Key: HDFS-14359
> URL: https://issues.apache.org/jira/browse/HDFS-14359
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.3.0
>Reporter: Stephen O'Donnell
>Assignee: Stephen O'Donnell
>Priority: Major
> Attachments: HDFS-14359.001.patch, HDFS-14359.002.patch, 
> HDFS-14359.003.patch
>
>
> There appears to be an issue with ACL inheritance if you 'mkdir' a directory 
> such that the parent directories need to be created (i.e., mkdir -p).
> If you have a folder /tmp2/testacls as:
> {code}
> hadoop fs -mkdir /tmp2
> hadoop fs -mkdir /tmp2/testacls
> hadoop fs -setfacl -m default:user:hive:rwx /tmp2/testacls
> hadoop fs -setfacl -m default:user:flume:rwx /tmp2/testacls
> hadoop fs -setfacl -m user:hive:rwx /tmp2/testacls
> hadoop fs -setfacl -m user:flume:rwx /tmp2/testacls
> hadoop fs -getfacl -R /tmp2/testacls
> # file: /tmp2/testacls
> # owner: kafka
> # group: supergroup
> user::rwx
> user:flume:rwx
> user:hive:rwx
> group::r-x
> mask::rwx
> other::r-x
> default:user::rwx
> default:user:flume:rwx
> default:user:hive:rwx
> default:group::r-x
> default:mask::rwx
> default:other::r-x
> {code}
> If you then create a sub-directory in it, the ACLs are as expected:
> {code}
> hadoop fs -mkdir /tmp2/testacls/dir_from_mkdir
> # file: /tmp2/testacls/dir_from_mkdir
> # owner: sodonnell
> # group: supergroup
> user::rwx
> user:flume:rwx
> user:hive:rwx
> group::r-x
> mask::rwx
> other::r-x
> default:user::rwx
> default:user:flume:rwx
> default:user:hive:rwx
> default:group::r-x
> default:mask::rwx
> default:other::r-x
> {code}
> However if you mkdir -p a directory, the situation is not the same:
> {code}
> hadoop fs -mkdir -p /tmp2/testacls/dir_with_subdirs/sub1/sub2
> # file: /tmp2/testacls/dir_with_subdirs
> # owner: sodonnell
> # group: supergroup
> user::rwx
> user:flume:rwx#effective:r-x
> user:hive:rwx #effective:r-x
> group::r-x
> mask::r-x
> other::r-x
> default:user::rwx
> default:user:flume:rwx
> default:user:hive:rwx
> default:group::r-x
> default:mask::rwx
> default:other::r-x
> # file: /tmp2/testacls/dir_with_subdirs/sub1
> # owner: sodonnell
> # group: supergroup
> user::rwx
> user:flume:rwx#effective:r-x
> user:hive:rwx #effective:r-x
> group::r-x
> mask::r-x
> other::r-x
> default:user::rwx
> default:user:flume:rwx
> default:user:hive:rwx
> default:group::r-x
> default:mask::rwx
> default:other::r-x
> # file: /tmp2/testacls/dir_with_subdirs/sub1/sub2
> # owner: sodonnell
> # group: supergroup
> user::rwx
> user:flume:rwx
> user:hive:rwx
> group::r-x
> mask::rwx
> other::r-x
> default:user::rwx
> default:user:flume:rwx
> default:user:hive:rwx
> default:group::r-x
> default:mask::rwx
> default:other::r-x
> {code}
> Notice that the leaf folder "sub2" is correct, but the two ancestor folders 
> have their permissions masked. I believe this is a regression from the fix 
> for HDFS-6962 with dfs.namenode.posix.acl.inheritance.enabled set to true, as 
> the code has changed significantly from the earlier 2.6 / 2.8 branch.
> I will submit a patch for this.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-939) Add S3 access check to Ozone manager

2019-03-25 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-939?focusedWorklogId=218332=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-218332
 ]

ASF GitHub Bot logged work on HDDS-939:
---

Author: ASF GitHub Bot
Created on: 25/Mar/19 22:56
Start Date: 25/Mar/19 22:56
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #634: 
HDDS-939. Add S3 access check to Ozone manager. Contributed by Ajay Kumar.
URL: https://github.com/apache/hadoop/pull/634#discussion_r268886290
 
 

 ##
 File path: 
hadoop-ozone/s3gateway/src/main/java/org/apache/hadoop/ozone/s3/util/OzoneS3Util.java
 ##
 @@ -0,0 +1,36 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ *  with the License.  You may obtain a copy of the License at
+ *
+ *  http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ *
+ */
+package org.apache.hadoop.ozone.s3.util;
+
+import org.apache.commons.codec.digest.DigestUtils;
+import java.util.Objects;
+
+/**
+ * Ozone util for S3 related operations.
+ */
+public final class OzoneS3Util {
+
+  private OzoneS3Util() {
+  }
+
+  public static String getVolumeName(String userName) {
+Objects.nonNull(userName);
+return DigestUtils.md5Hex(userName);
 
 Review comment:
   I am fine with it; let's keep the code as it is. If we later find that the 
computation is taking time, we can change it.
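
For readers following the review, here is a hypothetical usage sketch of the 
helper under discussion. One observation outside the review thread: 
Objects.nonNull only returns a boolean, so a guard that actually rejects null 
would use Objects.requireNonNull. The principal string in main() is made up 
for illustration.

{code}
import java.util.Objects;
import org.apache.commons.codec.digest.DigestUtils;

// Illustrative only: mirrors the reviewed helper, with a null check that
// actually enforces the contract (Objects.nonNull merely returns a boolean).
public final class OzoneS3UtilSketch {
  private OzoneS3UtilSketch() {
  }

  public static String getVolumeName(String userName) {
    Objects.requireNonNull(userName, "userName must not be null");
    return DigestUtils.md5Hex(userName);   // hex MD5 of the user principal
  }

  public static void main(String[] args) {
    // Hypothetical Kerberos-style principal, not the one from the test.
    System.out.println(getVolumeName("testuser/host@EXAMPLE.COM"));
  }
}
{code}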
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 218332)
Time Spent: 2h 40m  (was: 2.5h)

> Add S3 access check to Ozone manager
> 
>
> Key: HDDS-939
> URL: https://issues.apache.org/jira/browse/HDDS-939
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Manager, S3
>Reporter: Anu Engineer
>Assignee: Ajay Kumar
>Priority: Blocker
>  Labels: pull-request-available
>  Time Spent: 2h 40m
>  Remaining Estimate: 0h
>
> Add the mapping from the S3 user identity to a UGI inside Ozone Manager. Also 
> add the access permission check, that is, a call into checkAccess, which will 
> be intercepted by Ranger or the Ozone access check.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-939) Add S3 access check to Ozone manager

2019-03-25 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-939?focusedWorklogId=218328=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-218328
 ]

ASF GitHub Bot logged work on HDDS-939:
---

Author: ASF GitHub Bot
Created on: 25/Mar/19 22:44
Start Date: 25/Mar/19 22:44
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #634: HDDS-939. Add S3 
access check to Ozone manager. Contributed by Ajay Kumar.
URL: https://github.com/apache/hadoop/pull/634#issuecomment-476406163
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 23 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 61 | Maven dependency ordering for branch |
   | +1 | mvninstall | 1080 | trunk passed |
   | +1 | compile | 98 | trunk passed |
   | +1 | checkstyle | 27 | trunk passed |
   | +1 | mvnsite | 66 | trunk passed |
   | +1 | shadedclient | 794 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | findbugs | 109 | trunk passed |
   | +1 | javadoc | 57 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 13 | Maven dependency ordering for patch |
   | +1 | mvninstall | 70 | the patch passed |
   | +1 | compile | 94 | the patch passed |
   | +1 | javac | 94 | the patch passed |
   | +1 | checkstyle | 23 | the patch passed |
   | +1 | mvnsite | 58 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 794 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | findbugs | 106 | the patch passed |
   | +1 | javadoc | 51 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 33 | common in the patch passed. |
   | -1 | unit | 35 | s3gateway in the patch failed. |
   | +1 | asflicense | 25 | The patch does not generate ASF License warnings. |
   | | | 3677 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.s3.endpoint.TestRootList |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-634/4/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/634 |
   | Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall 
 mvnsite  unit  shadedclient  findbugs  checkstyle  |
   | uname | Linux 10d6a37b75c6 4.4.0-138-generic #164~14.04.1-Ubuntu SMP Fri 
Oct 5 08:56:16 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 710cbc9 |
   | maven | version: Apache Maven 3.3.9 |
   | Default Java | 1.8.0_191 |
   | findbugs | v3.1.0-RC1 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-634/4/artifact/out/patch-unit-hadoop-ozone_s3gateway.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-634/4/testReport/ |
   | Max. process+thread count | 306 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/common hadoop-ozone/s3gateway U: hadoop-ozone |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-634/4/console |
   | Powered by | Apache Yetus 0.9.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 218328)
Time Spent: 2.5h  (was: 2h 20m)

> Add S3 access check to Ozone manager
> 
>
> Key: HDDS-939
> URL: https://issues.apache.org/jira/browse/HDDS-939
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Manager, S3
>Reporter: Anu Engineer
>Assignee: Ajay Kumar
>Priority: Blocker
>  Labels: pull-request-available
>  Time Spent: 2.5h
>  Remaining Estimate: 0h
>
> Add the mapping from S3 User Identity to UGI inside Ozone Manager. Also add 
> the access permission check, that is, a call into checkAccess, which will be 
> intercepted by Ranger or the Ozone access check.



--
This 

[jira] [Commented] (HDFS-14295) Add Threadpool for DataTransfers

2019-03-25 Thread David Mollitor (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14295?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16801164#comment-16801164
 ] 

David Mollitor commented on HDFS-14295:
---

[~elgoiri] OK.  Log message updated.

 

What is the path I must take to get this patch committed to the project?

> Add Threadpool for DataTransfers
> 
>
> Key: HDFS-14295
> URL: https://issues.apache.org/jira/browse/HDFS-14295
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 3.2.0
>Reporter: David Mollitor
>Assignee: David Mollitor
>Priority: Major
> Attachments: HDFS-14295.1.patch, HDFS-14295.10.patch, 
> HDFS-14295.2.patch, HDFS-14295.3.patch, HDFS-14295.4.patch, 
> HDFS-14295.5.patch, HDFS-14295.6.patch, HDFS-14295.7.patch, 
> HDFS-14295.8.patch, HDFS-14295.9.patch
>
>
> When a DataNode transfers a block, it spins up a new thread for each 
> transfer.  
> [Here|https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java#L2339]
>  and 
> [Here|https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java#L3019-L3022].
>    Instead, submit the transfers to a {{CachedThreadPool}} so that when the 
> threads complete a transfer, they can be re-used for another transfer. This 
> should save resources spent on creating and spinning up transfer threads.
> One thing I'll point out that's a bit off, which I address in this patch, ...
> There are two places in the code where a {{DataTransfer}} thread is started. 
> In [one 
> place|https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java#L2339-L2341],
>  it's started in a default thread group. In [another 
> place|https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java#L3019-L3022],
>  it's started in the 
> [dataXceiverServer|https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java#L1164]
>  thread group.
> I do not think it's correct to include any of these threads in the 
> {{dataXceiverServer}} thread group. Anything submitted to the 
> {{dataXceiverServer}} should probably be tied to the 
> {{dfs.datanode.max.transfer.threads}} configuration, and neither of these 
> methods is. Instead, they should be submitted into the same thread pool with 
> its own thread group (probably the default thread group, unless someone 
> suggests otherwise), which is what I have included in this patch.
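
A minimal sketch of the pattern proposed above, assuming a generic {{Runnable}} transfer task rather than the actual DataNode {{DataTransfer}} class:

{code}
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.ThreadFactory;
import java.util.concurrent.atomic.AtomicInteger;

/** Illustrative only: reuse threads for block transfers instead of one new Thread per transfer. */
public class DataTransferPoolSketch {

  // Cached pool: idle threads are reused for the next transfer and reaped after 60s of idleness.
  private final ExecutorService transferPool = Executors.newCachedThreadPool(new ThreadFactory() {
    private final AtomicInteger count = new AtomicInteger();
    @Override
    public Thread newThread(Runnable r) {
      Thread t = new Thread(r, "DataTransfer-" + count.incrementAndGet());
      t.setDaemon(true);
      return t;
    }
  });

  /** Hypothetical stand-in for the DataNode's DataTransfer runnable. */
  public void transferBlock(Runnable dataTransferTask) {
    // Previously the pattern was: new Thread(dataTransferTask).start();
    transferPool.submit(dataTransferTask);
  }

  public void shutdown() {
    transferPool.shutdown();
  }
}
{code}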



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-939) Add S3 access check to Ozone manager

2019-03-25 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-939?focusedWorklogId=218291=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-218291
 ]

ASF GitHub Bot logged work on HDDS-939:
---

Author: ASF GitHub Bot
Created on: 25/Mar/19 21:42
Start Date: 25/Mar/19 21:42
Worklog Time Spent: 10m 
  Work Description: ajayydv commented on pull request #634: HDDS-939. Add 
S3 access check to Ozone manager. Contributed by Ajay Kumar.
URL: https://github.com/apache/hadoop/pull/634#discussion_r268864313
 
 

 ##
 File path: 
hadoop-ozone/s3gateway/src/main/java/org/apache/hadoop/ozone/s3/util/OzoneS3Util.java
 ##
 @@ -0,0 +1,36 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ *  with the License.  You may obtain a copy of the License at
+ *
+ *  http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ *
+ */
+package org.apache.hadoop.ozone.s3.util;
+
+import org.apache.commons.codec.digest.DigestUtils;
+import java.util.Objects;
+
+/**
+ * Ozone util for S3 related operations.
+ */
+public final class OzoneS3Util {
+
+  private OzoneS3Util() {
+  }
+
+  public static String getVolumeName(String userName) {
+    Objects.nonNull(userName);
 
 Review comment:
   thanks for catching this. fixed in new commit.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 218291)
Time Spent: 2h 20m  (was: 2h 10m)

> Add S3 access check to Ozone manager
> 
>
> Key: HDDS-939
> URL: https://issues.apache.org/jira/browse/HDDS-939
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Manager, S3
>Reporter: Anu Engineer
>Assignee: Ajay Kumar
>Priority: Blocker
>  Labels: pull-request-available
>  Time Spent: 2h 20m
>  Remaining Estimate: 0h
>
> Add the mapping from S3 User Identity to UGI inside Ozone Manager. Also add 
> the access permission check, that is, a call into checkAccess, which will be 
> intercepted by Ranger or the Ozone access check.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14389) getAclStatus returns incorrect permissions and owner when an iNodeAttributeProvider is configured

2019-03-25 Thread Stephen O'Donnell (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14389?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16801150#comment-16801150
 ] 

Stephen O'Donnell commented on HDFS-14389:
--

I didn't consider distcp, but there could be other areas impacted by this. 

With the way the AttributeProvider code has been refactored, I also wonder if 
there could be more instances of this sort of problem, but they could be hard 
to find. The new implementation means you need to know that the owner, group 
and permissions must be read from an InodeAttributes object rather than from 
an inode object (which also has these fields), so it's an easy mistake to make.
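
To make the pitfall concrete, a generic sketch (all names here are hypothetical, not the HDFS classes): reading the fields from the raw inode silently ignores whatever the attribute provider would have returned.

{code}
/** Illustrative only; not the HDFS API. */
class AclStatusPitfallSketch {

  interface Inode {                 // raw namespace object, holds the stored owner/group/perms
    String owner();
    String group();
    short permission();
  }

  interface AttributeView {         // what an attribute provider (e.g. Sentry/Ranger) returns
    String owner();
    String group();
    short permission();
  }

  interface AttributeProvider {
    AttributeView attributesFor(Inode inode);
  }

  // Buggy pattern: reads the permission straight from the inode.
  static short permissionWrong(Inode inode) {
    return inode.permission();
  }

  // Correct pattern: go through the provider, which may override the stored values.
  static short permissionRight(Inode inode, AttributeProvider provider) {
    return provider.attributesFor(inode).permission();
  }
}
{code}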

> getAclStatus returns incorrect permissions and owner when an 
> iNodeAttributeProvider is configured
> -
>
> Key: HDFS-14389
> URL: https://issues.apache.org/jira/browse/HDFS-14389
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 3.2.0
>Reporter: Stephen O'Donnell
>Assignee: Stephen O'Donnell
>Priority: Major
> Fix For: 3.2.0
>
> Attachments: HDFS-14389.001.patch
>
>
> With an inodeAttributeProvider configured in the namenode (eg Sentry), the 
> permissions returned by a `hadoop fs -getfacl` command have an effective 
> comment after them, even if the group permission bits are rwx, eg:
> {code}
> hadoop fs -ls /user/hive/warehouse/sample_08
> Found 1 items
> -rwxrwx--x+  3 hive hive  46069 2019-03-22 00:19 
> /user/hive/warehouse/sample_08/sample_08
> NOTE THE GROUP PERMISSIONS - rwx - No ACLs should get masked.
> hadoop fs -getfacl /user/hive/warehouse/sample_08/sample_08
> # file: /user/hive/warehouse/sample_08/sample_08
> # owner: hive
> # group: hive
> user::rwx
> group::---
> user:hive:rwx   #effective:r--
> group:sentryDefaultAdmin:rwx#effective:r--
> user:admin:rwx  #effective:r--
> group:systest:rwx   #effective:r--
> group:hive:rwx  #effective:r--
> mask::rwx
> other::--x
> {code}
> Note the effective comment, indicating group permissions of r-- which the ls 
> output does not show.
> Usually this effective comment would downgrade the effective permissions, and 
> a user with the group systest would not be able to write to the file / 
> folder, but in this case that does not happen - this appears to be a display 
> issue in the client.
> After some debugging, the problem is due to getAclStatus returning the 
> permissions, owner and group of the underlying file in HDFS and not those 
> from the inodeAttributeProvider - ie this call does not correctly use the 
> attribute provider. Comparing the output with getFileStatus:
> {code}
> Permission from FileStatus: rwxrwx--x  # Correct, the provider says the 
> permissions are 771
> Permission from AclStatus: rw-r--r--  # Incorrect, these are the permissions 
> from HDFS if the provider is disabled
> {code}
> Note that in this example, the underlying file permissions have group r--, 
> and that is what is influencing the ACL output, making them effective r--.
> Within the namenode, the permissions are enforced correctly. The reason this 
> is a CLI display issue is that AclCommand.java makes a call to getAclStatus, 
> and from it, it gets the ACL list and the group permissions. Then it 'masks' 
> the ACLs it displays using the returned group permission within the client. 
> This is only for display purposes. FSPermissionChecker inside the Namenode is 
> not impacted by this and does the correct thing.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-939) Add S3 access check to Ozone manager

2019-03-25 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-939?focusedWorklogId=218275=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-218275
 ]

ASF GitHub Bot logged work on HDDS-939:
---

Author: ASF GitHub Bot
Created on: 25/Mar/19 21:27
Start Date: 25/Mar/19 21:27
Worklog Time Spent: 10m 
  Work Description: ajayydv commented on issue #634: HDDS-939. Add S3 
access check to Ozone manager. Contributed by Ajay Kumar.
URL: https://github.com/apache/hadoop/pull/634#issuecomment-476384461
 
 
   > We could also add a few tests to verify that the volumeName generated by the
   > newly added function passes HddsUtils.verifyResourceName(). That way, if someone
   > changes this logic in the future, any issue it causes can be caught immediately.

   I am open to this, but since the volume name is not exposed to S3 clients I 
think it has less value. We test the bucket name instead, which is already validated 
in the secure robot test case. Do you still want to verify the volume name?
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 218275)
Time Spent: 2h 10m  (was: 2h)

> Add S3 access check to Ozone manager
> 
>
> Key: HDDS-939
> URL: https://issues.apache.org/jira/browse/HDDS-939
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Manager, S3
>Reporter: Anu Engineer
>Assignee: Ajay Kumar
>Priority: Blocker
>  Labels: pull-request-available
>  Time Spent: 2h 10m
>  Remaining Estimate: 0h
>
> Add the mapping from S3 User Identity to UGI inside Ozone Manager. Also add 
> the access permission check, that is, a call into checkAccess, which will be 
> intercepted by Ranger or the Ozone access check.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-939) Add S3 access check to Ozone manager

2019-03-25 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-939?focusedWorklogId=218266=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-218266
 ]

ASF GitHub Bot logged work on HDDS-939:
---

Author: ASF GitHub Bot
Created on: 25/Mar/19 21:12
Start Date: 25/Mar/19 21:12
Worklog Time Spent: 10m 
  Work Description: ajayydv commented on pull request #634: HDDS-939. Add 
S3 access check to Ozone manager. Contributed by Ajay Kumar.
URL: https://github.com/apache/hadoop/pull/634#discussion_r268853326
 
 

 ##
 File path: 
hadoop-ozone/s3gateway/src/main/java/org/apache/hadoop/ozone/s3/util/OzoneS3Util.java
 ##
 @@ -0,0 +1,36 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ *  with the License.  You may obtain a copy of the License at
+ *
+ *  http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ *
+ */
+package org.apache.hadoop.ozone.s3.util;
+
+import org.apache.commons.codec.digest.DigestUtils;
+import java.util.Objects;
+
+/**
+ * Ozone util for S3 related operations.
+ */
+public final class OzoneS3Util {
+
+  private OzoneS3Util() {
+  }
+
+  public static String getVolumeName(String userName) {
+    Objects.nonNull(userName);
+    return DigestUtils.md5Hex(userName);
 
 Review comment:
   The idea is to keep it consistent with what we do right now for volume names. 
With hex encoding I think the volume name will be a little bigger and hence take 
more space in the embedded metadata store.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 218266)
Time Spent: 2h  (was: 1h 50m)

> Add S3 access check to Ozone manager
> 
>
> Key: HDDS-939
> URL: https://issues.apache.org/jira/browse/HDDS-939
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Manager, S3
>Reporter: Anu Engineer
>Assignee: Ajay Kumar
>Priority: Blocker
>  Labels: pull-request-available
>  Time Spent: 2h
>  Remaining Estimate: 0h
>
> Add the mapping from S3 User Identity to UGI inside Ozone Manager. Also add 
> the access permission check, that is, a call into checkAccess, which will be 
> intercepted by Ranger or the Ozone access check.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDDS-294) Destroy ratis pipeline on datanode on pipeline close event.

2019-03-25 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-294?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham reassigned HDDS-294:
---

Assignee: Bharat Viswanadham

> Destroy ratis pipeline on datanode on pipeline close event.
> ---
>
> Key: HDDS-294
> URL: https://issues.apache.org/jira/browse/HDDS-294
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: SCM
>Reporter: Mukul Kumar Singh
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: alpha2, newbie
>
> Once a ratis pipeline is closed, the corresponding metadata on the datanode 
> should be destroyed as well. This jira proposes to remove the ratis metadata 
> and destroy the ratis ring on datanode.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDDS-294) Destroy ratis pipeline on datanode on pipeline close event.

2019-03-25 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-294?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham reassigned HDDS-294:
---

Assignee: (was: Bharat Viswanadham)

> Destroy ratis pipeline on datanode on pipeline close event.
> ---
>
> Key: HDDS-294
> URL: https://issues.apache.org/jira/browse/HDDS-294
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: SCM
>Reporter: Mukul Kumar Singh
>Priority: Major
>  Labels: alpha2, newbie
>
> Once a ratis pipeline is closed, the corresponding metadata on the datanode 
> should be destroyed as well. This jira proposes to remove the ratis metadata 
> and destroy the ratis ring on datanode.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1217) Refactor ChillMode rules and chillmode manager

2019-03-25 Thread Nanda kumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1217?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nanda kumar updated HDDS-1217:
--
   Resolution: Fixed
Fix Version/s: 0.5.0
   Status: Resolved  (was: Patch Available)

Thanks [~bharatviswa] for the contribution.

> Refactor ChillMode rules and chillmode manager
> --
>
> Key: HDDS-1217
> URL: https://issues.apache.org/jira/browse/HDDS-1217
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 4.5h
>  Remaining Estimate: 0h
>
> # Make ChillModeExitRule an abstract class and move the common logic for all 
> rules into it.
>  # Update the tests for chill mode accordingly.
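
A generic sketch of the refactoring pattern described (hypothetical names, not the actual SCM classes): the abstract base holds the shared bookkeeping and each concrete rule only supplies its own validation check.

{code}
/** Illustrative only; not the SCM chill mode code. */
abstract class ExitRuleSketch<T> {

  private final String ruleName;
  private boolean validated = false;

  protected ExitRuleSketch(String ruleName) {
    this.ruleName = ruleName;
  }

  // Shared logic lives here once instead of being copied into every rule.
  final void process(T report) {
    if (validated) {
      return;
    }
    if (validate(report)) {
      validated = true;
      System.out.println("Chill mode exit rule satisfied: " + ruleName);
    }
  }

  /** Each concrete rule implements only its own check. */
  protected abstract boolean validate(T report);
}

class DataNodeCountRuleSketch extends ExitRuleSketch<Integer> {
  private final int required;

  DataNodeCountRuleSketch(int required) {
    super("datanode-count");
    this.required = required;
  }

  @Override
  protected boolean validate(Integer reportedDataNodes) {
    return reportedDataNodes >= required;
  }
}
{code}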



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1217) Refactor ChillMode rules and chillmode manager

2019-03-25 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1217?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16801030#comment-16801030
 ] 

Hudson commented on HDDS-1217:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #16278 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/16278/])
HDDS-1217. Refactor ChillMode rules and chillmode manager. (#558) 
(nandakumar131: rev 8739693514ac92c33b38e472c37b7dcf4febe73f)
* (edit) 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/chillmode/HealthyPipelineChillModeRule.java
* (edit) 
hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/chillmode/TestSCMChillModeManager.java
* (edit) 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/chillmode/DataNodeChillModeRule.java
* (edit) 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/chillmode/OneReplicaPipelineChillModeRule.java
* (edit) 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/chillmode/ChillModeExitRule.java
* (edit) 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/chillmode/SCMChillModeManager.java
* (edit) 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/chillmode/ContainerChillModeRule.java


> Refactor ChillMode rules and chillmode manager
> --
>
> Key: HDDS-1217
> URL: https://issues.apache.org/jira/browse/HDDS-1217
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 4.5h
>  Remaining Estimate: 0h
>
> # Make ChillModeExitRule an abstract class and move the common logic for all 
> rules into it.
>  # Update the tests for chill mode accordingly.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1217) Refactor ChillMode rules and chillmode manager

2019-03-25 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1217?focusedWorklogId=218237=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-218237
 ]

ASF GitHub Bot logged work on HDDS-1217:


Author: ASF GitHub Bot
Created on: 25/Mar/19 20:11
Start Date: 25/Mar/19 20:11
Worklog Time Spent: 10m 
  Work Description: nandakumar131 commented on pull request #558: 
HDDS-1217. Refactor ChillMode rules and chillmode manager.
URL: https://github.com/apache/hadoop/pull/558
 
 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 218237)
Time Spent: 4.5h  (was: 4h 20m)

> Refactor ChillMode rules and chillmode manager
> --
>
> Key: HDDS-1217
> URL: https://issues.apache.org/jira/browse/HDDS-1217
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 4.5h
>  Remaining Estimate: 0h
>
> # Make ChillModeExitRule an abstract class and move the common logic for all 
> rules into it.
>  # Update the tests for chill mode accordingly.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-1335) Basic Recon UI for serving up container key mapping.

2019-03-25 Thread Aravindan Vijayan (JIRA)
Aravindan Vijayan created HDDS-1335:
---

 Summary: Basic Recon UI for serving up container key mapping.
 Key: HDDS-1335
 URL: https://issues.apache.org/jira/browse/HDDS-1335
 Project: Hadoop Distributed Data Store
  Issue Type: Sub-task
  Components: Ozone Recon
Reporter: Aravindan Vijayan
Assignee: Vivek Ratnavel Subramanian
 Fix For: 0.5.0






--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1330) Test out Recon Container service endpoint.

2019-03-25 Thread Aravindan Vijayan (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1330?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aravindan Vijayan updated HDDS-1330:

Description: 
* Add a docker compose for Ozone deployment with Recon.
* Test out Recon container key service. 

  was:The end goal is using docker-compose to deploy a cluster with all 
components of Ozone + Recon.


> Test out Recon Container service endpoint.
> --
>
> Key: HDDS-1330
> URL: https://issues.apache.org/jira/browse/HDDS-1330
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Recon
>Reporter: Aravindan Vijayan
>Priority: Major
> Fix For: 0.5.0
>
>
> * Add a docker compose for Ozone deployment with Recon.
> * Test out Recon container key service. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDDS-1330) Test out Recon Container service endpoint.

2019-03-25 Thread Aravindan Vijayan (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1330?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aravindan Vijayan reassigned HDDS-1330:
---

Assignee: Aravindan Vijayan

> Test out Recon Container service endpoint.
> --
>
> Key: HDDS-1330
> URL: https://issues.apache.org/jira/browse/HDDS-1330
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Recon
>Reporter: Aravindan Vijayan
>Assignee: Aravindan Vijayan
>Priority: Major
> Fix For: 0.5.0
>
>
> * Add a docker compose for Ozone deployment with Recon.
> * Test out Recon container key service. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1330) Test out Recon Container service endpoint.

2019-03-25 Thread Aravindan Vijayan (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1330?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aravindan Vijayan updated HDDS-1330:

Summary: Test out Recon Container service endpoint.  (was: Add a docker 
compose for Ozone deployment with Recon.)

> Test out Recon Container service endpoint.
> --
>
> Key: HDDS-1330
> URL: https://issues.apache.org/jira/browse/HDDS-1330
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Recon
>Reporter: Aravindan Vijayan
>Priority: Major
> Fix For: 0.5.0
>
>
> The end goal is using docker-compose to deploy a cluster with all components 
> of Ozone + Recon.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1084) Ozone Recon Service

2019-03-25 Thread Siddharth Wagle (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1084?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siddharth Wagle updated HDDS-1084:
--
Component/s: (was: fsck)
 Ozone Recon

> Ozone Recon Service
> ---
>
> Key: HDDS-1084
> URL: https://issues.apache.org/jira/browse/HDDS-1084
> Project: Hadoop Distributed Data Store
>  Issue Type: New Feature
>  Components: Ozone Recon
>Affects Versions: 0.4.0
>Reporter: Siddharth Wagle
>Assignee: Siddharth Wagle
>Priority: Major
> Attachments: Ozone_Recon_Design_V1_Draft.pdf
>
>
> Recon Server at a high level will maintain a global view of Ozone that is not 
> available from SCM or OM. It will answer questions such as: how many volumes 
> exist; how many buckets exist per volume; which volume has the most buckets; 
> which buckets have not been accessed for a year; which blocks are corrupt; 
> which blocks on datanodes are unused; and similar queries.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-281) Need container size distribution metric in OzoneManager UI

2019-03-25 Thread Siddharth Wagle (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-281?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siddharth Wagle updated HDDS-281:
-
Component/s: (was: Ozone Manager)
 Ozone Recon

> Need container size distribution metric in OzoneManager UI
> --
>
> Key: HDDS-281
> URL: https://issues.apache.org/jira/browse/HDDS-281
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone Recon
>Reporter: Nilotpal Nandi
>Assignee: Vivek Ratnavel Subramanian
>Priority: Minor
>
> It would be good to have a metric/histogram in the OzoneManager UI 
> indicating the different container size ranges and the corresponding 
> percentage of containers created in the cluster for each range.
> For example:
> 0-2 GB           10%
> 2-4 GB           20%
> 4-5 GB           70%
> 5+ GB            0%
>  
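
A small sketch of how such a distribution could be computed (the bucket boundaries are taken from the example above; the class and method names are illustrative):

{code}
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class ContainerSizeHistogramSketch {

  /** Returns the percentage of containers falling into each size bucket. */
  public static Map<String, Double> distribution(List<Long> containerSizesInGb) {
    String[] buckets = {"0-2 GB", "2-4 GB", "4-5 GB", "5+ GB"};
    long[] counts = new long[buckets.length];
    for (long sizeGb : containerSizesInGb) {
      if (sizeGb < 2) {
        counts[0]++;
      } else if (sizeGb < 4) {
        counts[1]++;
      } else if (sizeGb < 5) {
        counts[2]++;
      } else {
        counts[3]++;
      }
    }
    Map<String, Double> result = new LinkedHashMap<>();
    for (int i = 0; i < buckets.length; i++) {
      result.put(buckets[i], containerSizesInGb.isEmpty()
          ? 0.0 : 100.0 * counts[i] / containerSizesInGb.size());
    }
    return result;
  }
}
{code}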



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1146) Adding container related metrics in SCM

2019-03-25 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1146?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16801004#comment-16801004
 ] 

Hadoop QA commented on HDDS-1146:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
47s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
0s{color} | {color:green} No case conflicting files found. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
59s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  3m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m 26s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  6m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m  
3s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
26s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  3m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  3m 
24s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 28s{color} | {color:orange} hadoop-hdds: The patch generated 8 new + 0 
unchanged - 0 fixed = 8 total (was 0) {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 28s{color} | {color:orange} hadoop-ozone: The patch generated 1 new + 0 
unchanged - 0 fixed = 1 total (was 0) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m  
0s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 1 line(s) that end in whitespace. Use git 
apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 36s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  3m  
7s{color} | {color:red} hadoop-hdds generated 2 new + 0 unchanged - 0 fixed = 2 
total (was 0) {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  3m 
32s{color} | {color:red} hadoop-ozone generated 2 new + 0 unchanged - 0 fixed = 
2 total (was 0) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m  
7s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  4m 31s{color} 
| {color:red} hadoop-hdds in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 17m 56s{color} 
| {color:red} hadoop-ozone in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
30s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 90m 29s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-hdds |
|  |  
org.apache.hadoop.hdds.scm.container.SCMContainerManager.getHddsBytesUsed() 
invokes inefficient new Long(long) constructor; 

[jira] [Commented] (HDDS-1334) fix asf license errors in newly added files by HDDS-1234

2019-03-25 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1334?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16801001#comment-16801001
 ] 

Hadoop QA commented on HDDS-1334:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
47s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
0s{color} | {color:green} No case conflicting files found. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  3m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m 30s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  6m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m  
8s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  3m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  3m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 59s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  6m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m  
7s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  2m 43s{color} 
| {color:red} hadoop-hdds in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 17m 46s{color} 
| {color:red} hadoop-ozone in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
40s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 88m 12s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.ozone.container.TestContainerReplication |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/PreCommit-HDDS-Build/2579/artifact/out/Dockerfile 
|
| JIRA Issue | HDDS-1334 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12963646/HDDS-1334-000.patch |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite 
unit shadedclient findbugs checkstyle |
| uname | Linux a8ba41a71988 4.4.0-138-generic #164~14.04.1-Ubuntu SMP Fri Oct 
5 08:56:16 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HDDS-Build@2/ozone.sh |
| git revision | trunk / e5d72f5 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_191 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HDDS-Build/2579/artifact/out/patch-unit-hadoop-hdds.txt
 |
| unit | 

[jira] [Commented] (HDDS-1334) fix asf license errors in newly added files by HDDS-1234

2019-03-25 Thread Bharat Viswanadham (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1334?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16800989#comment-16800989
 ] 

Bharat Viswanadham commented on HDDS-1334:
--

+1 LGTM.

> fix asf license errors in newly added files by HDDS-1234
> 
>
> Key: HDDS-1334
> URL: https://issues.apache.org/jira/browse/HDDS-1334
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Aravindan Vijayan
>Priority: Major
> Fix For: 0.5.0
>
> Attachments: HDDS-1334-000.patch
>
>
> [https://ci.anzix.net/job/ozone/11961//artifact/build/rat.out/*view*/]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1264) Remove Parametrized in TestOzoneShell

2019-03-25 Thread Vivek Ratnavel Subramanian (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1264?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vivek Ratnavel Subramanian updated HDDS-1264:
-
Status: Patch Available  (was: Open)

> Remove Parametrized in TestOzoneShell
> -
>
> Key: HDDS-1264
> URL: https://issues.apache.org/jira/browse/HDDS-1264
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Vivek Ratnavel Subramanian
>Priority: Minor
>  Labels: newbie, pull-request-available
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> HDDS-1068 removed RestClient from TestOzoneShell.java.
> So the test no longer needs to be parameterized; we can test directly 
> with RpcClient.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1310) In datanode once a container becomes unhealthy, datanode restart fails.

2019-03-25 Thread Sandeep Nemuri (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1310?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16800977#comment-16800977
 ] 

Sandeep Nemuri commented on HDDS-1310:
--

[~ajayydv], only {{TestFailureHandlingByClient}} is failing when tested locally 
and even that failure is not related to this patch.

 

> In datanode once a container becomes unhealthy, datanode restart fails.
> ---
>
> Key: HDDS-1310
> URL: https://issues.apache.org/jira/browse/HDDS-1310
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Affects Versions: 0.3.0
>Reporter: Sandeep Nemuri
>Assignee: Sandeep Nemuri
>Priority: Blocker
> Attachments: HDDS-1310.001.patch, HDDS-1310.002.patch
>
>
> When a container is marked as {{UNHEALTHY}} in a datanode, a subsequent restart 
> of that datanode fails because it can no longer generate ContainerReports: the 
> unhealthy state of a container is not handled in ContainerReport generation 
> inside the datanode.
> We get the exception below when a datanode tries to generate a 
> ContainerReport that contains unhealthy container(s):
> {noformat}
> 2019-03-19 13:51:13,646 [Datanode State Machine Thread - 0] ERROR  - 
> Unable to communicate to SCM server at x.x.xxx:9861 for past 3300 
> seconds.
> org.apache.hadoop.hdds.scm.container.common.helpers.StorageContainerException:
>  Invalid Container state found: 86
> at 
> org.apache.hadoop.ozone.container.keyvalue.KeyValueContainer.getHddsState(KeyValueContainer.java:623)
> at 
> org.apache.hadoop.ozone.container.keyvalue.KeyValueContainer.getContainerReport(KeyValueContainer.java:593)
> at 
> org.apache.hadoop.ozone.container.common.impl.ContainerSet.getContainerReport(ContainerSet.java:204)
> at 
> org.apache.hadoop.ozone.container.ozoneimpl.ContainerController.getContainerReport(ContainerController.java:82)
> at 
> org.apache.hadoop.ozone.container.common.states.endpoint.RegisterEndpointTask.call(RegisterEndpointTask.java:114)
> at 
> org.apache.hadoop.ozone.container.common.states.endpoint.RegisterEndpointTask.call(RegisterEndpointTask.java:47)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
> {noformat}
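
A generic sketch of the kind of fix this implies (hypothetical enums, not the actual KeyValueContainer code): the state mapping used for container reports has to cover {{UNHEALTHY}} explicitly instead of falling through to the "Invalid Container state" error.

{code}
/** Illustrative only; not the Ozone datanode code. */
class ContainerReportStateSketch {

  enum LocalState { OPEN, CLOSING, CLOSED, QUASI_CLOSED, UNHEALTHY }

  enum ReportState { OPEN, CLOSING, CLOSED, QUASI_CLOSED, UNHEALTHY }

  static ReportState toReportState(LocalState state) {
    switch (state) {
      case OPEN:         return ReportState.OPEN;
      case CLOSING:      return ReportState.CLOSING;
      case CLOSED:       return ReportState.CLOSED;
      case QUASI_CLOSED: return ReportState.QUASI_CLOSED;
      case UNHEALTHY:    return ReportState.UNHEALTHY;  // the case that was missing
      default:
        throw new IllegalStateException("Invalid container state found: " + state);
    }
  }
}
{code}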



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-939) Add S3 access check to Ozone manager

2019-03-25 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-939?focusedWorklogId=218211=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-218211
 ]

ASF GitHub Bot logged work on HDDS-939:
---

Author: ASF GitHub Bot
Created on: 25/Mar/19 18:43
Start Date: 25/Mar/19 18:43
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on issue #634: HDDS-939. Add 
S3 access check to Ozone manager. Contributed by Ajay Kumar.
URL: https://github.com/apache/hadoop/pull/634#issuecomment-476328617
 
 
   We could also add a few tests to verify that the volumeName generated by the 
newly added function passes HddsUtils.verifyResourceName(). That way, if someone 
changes this logic in the future, any issue it causes can be caught immediately.
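
   A sketch of such a test, assuming the {{HddsUtils.verifyResourceName()}} helper referenced above throws {{IllegalArgumentException}} on an invalid name (not an actual test from the patch):

{code}
import org.apache.hadoop.hdds.HddsUtils;
import org.apache.hadoop.ozone.s3.util.OzoneS3Util;
import org.junit.Test;

public class TestOzoneS3UtilVolumeName {

  @Test
  public void generatedVolumeNameIsValidResourceName() {
    // Assumption: verifyResourceName throws IllegalArgumentException when the
    // generated name violates the resource naming rules, failing the test.
    String volumeName = OzoneS3Util.getVolumeName("testuser@EXAMPLE.COM");
    HddsUtils.verifyResourceName(volumeName);
  }
}
{code}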
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 218211)
Time Spent: 1h 50m  (was: 1h 40m)

> Add S3 access check to Ozone manager
> 
>
> Key: HDDS-939
> URL: https://issues.apache.org/jira/browse/HDDS-939
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Manager, S3
>Reporter: Anu Engineer
>Assignee: Ajay Kumar
>Priority: Blocker
>  Labels: pull-request-available
>  Time Spent: 1h 50m
>  Remaining Estimate: 0h
>
> Add the mapping from S3 User Identity to UGI inside Ozone Manager. Also add 
> the access permission check, that is, a call into checkAccess, which will be 
> intercepted by Ranger or the Ozone access check.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-939) Add S3 access check to Ozone manager

2019-03-25 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-939?focusedWorklogId=218209=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-218209
 ]

ASF GitHub Bot logged work on HDDS-939:
---

Author: ASF GitHub Bot
Created on: 25/Mar/19 18:41
Start Date: 25/Mar/19 18:41
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #634: 
HDDS-939. Add S3 access check to Ozone manager. Contributed by Ajay Kumar.
URL: https://github.com/apache/hadoop/pull/634#discussion_r268795366
 
 

 ##
 File path: 
hadoop-ozone/s3gateway/src/main/java/org/apache/hadoop/ozone/s3/util/OzoneS3Util.java
 ##
 @@ -0,0 +1,36 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ *  with the License.  You may obtain a copy of the License at
+ *
+ *  http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ *
+ */
+package org.apache.hadoop.ozone.s3.util;
+
+import org.apache.commons.codec.digest.DigestUtils;
+import java.util.Objects;
+
+/**
+ * Ozone util for S3 related operations.
+ */
+public final class OzoneS3Util {
+
+  private OzoneS3Util() {
+  }
+
+  public static String getVolumeName(String userName) {
+    Objects.nonNull(userName);
+    return DigestUtils.md5Hex(userName);
 
 Review comment:
   Here instead of md5Hex, we can directly use
   Hex.encodeHexString(StringUtils.getBytesUtf8(userName)). In this way, we can 
avoid computing md5 for the awsAccessKeyID.
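
   For illustration only, the two variants under discussion would look roughly like this (both calls are from Apache commons-codec; the class itself is a sketch, not the patch):

{code}
import org.apache.commons.codec.binary.Hex;
import org.apache.commons.codec.binary.StringUtils;
import org.apache.commons.codec.digest.DigestUtils;

public final class VolumeNameVariantsSketch {

  private VolumeNameVariantsSketch() {
  }

  // Variant in the patch: a fixed-length (32 character) md5 digest of the user name.
  public static String md5VolumeName(String userName) {
    return DigestUtils.md5Hex(userName);
  }

  // Suggested alternative: plain hex encoding, no digest computation, but the
  // result grows to twice the UTF-8 byte length of the user name.
  public static String hexVolumeName(String userName) {
    return Hex.encodeHexString(StringUtils.getBytesUtf8(userName));
  }
}
{code}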
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 218209)
Time Spent: 1h 40m  (was: 1.5h)

> Add S3 access check to Ozone manager
> 
>
> Key: HDDS-939
> URL: https://issues.apache.org/jira/browse/HDDS-939
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Manager, S3
>Reporter: Anu Engineer
>Assignee: Ajay Kumar
>Priority: Blocker
>  Labels: pull-request-available
>  Time Spent: 1h 40m
>  Remaining Estimate: 0h
>
> Add the mapping from S3 User Identity to UGI inside Ozone Manager. Also add 
> the access permission check, that is, a call into checkAccess, which will be 
> intercepted by Ranger or the Ozone access check.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1255) Refactor ozone acceptance test to allow run in secure mode

2019-03-25 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1255?focusedWorklogId=218189=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-218189
 ]

ASF GitHub Bot logged work on HDDS-1255:


Author: ASF GitHub Bot
Created on: 25/Mar/19 18:06
Start Date: 25/Mar/19 18:06
Worklog Time Spent: 10m 
  Work Description: ajayydv commented on pull request #632: HDDS-1255. 
Refactor ozone acceptance test to allow run in secure mode. Contributed by Ajay 
Kumar.
URL: https://github.com/apache/hadoop/pull/632#discussion_r268781516
 
 

 ##
 File path: hadoop-ozone/dist/src/main/smoketest/s3/commonawslib.robot
 ##
 @@ -21,32 +21,8 @@ ${OZONE_S3_HEADER_VERSION} v4
 ${OZONE_S3_SET_CREDENTIALS}    true
 
 *** Keywords ***
-Execute AWSS3APICli
 
 Review comment:
   This can be used inside security robot tests as well.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 218189)
Time Spent: 1h 10m  (was: 1h)

> Refactor ozone acceptance test to allow run in secure mode
> --
>
> Key: HDDS-1255
> URL: https://issues.apache.org/jira/browse/HDDS-1255
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> Refactor ozone acceptance test to allow run in secure mode.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1255) Refactor ozone acceptance test to allow run in secure mode

2019-03-25 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1255?focusedWorklogId=218190=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-218190
 ]

ASF GitHub Bot logged work on HDDS-1255:


Author: ASF GitHub Bot
Created on: 25/Mar/19 18:07
Start Date: 25/Mar/19 18:07
Worklog Time Spent: 10m 
  Work Description: ajayydv commented on pull request #632: HDDS-1255. 
Refactor ozone acceptance test to allow run in secure mode. Contributed by Ajay 
Kumar.
URL: https://github.com/apache/hadoop/pull/632#discussion_r268781767
 
 

 ##
 File path: hadoop-ozone/dist/src/main/smoketest/commonlib.robot
 ##
 @@ -35,3 +41,51 @@ Compare files
     ${checksumbefore} =    Execute             md5sum ${file1} | awk '{print $1}'
     ${checksumafter} =     Execute             md5sum ${file2} | awk '{print $1}'
     Should Be Equal        ${checksumbefore}   ${checksumafter}
+Execute AWSS3APICli
+    [Arguments]       ${command}
+    ${output} =       Execute             aws s3api --endpoint-url ${ENDPOINT_URL} ${command}
+    [return]          ${output}
+
+Execute AWSS3APICli and checkrc
+    [Arguments]       ${command}    ${expected_error_code}
+    ${output} =       Execute and checkrc    aws s3api --endpoint-url ${ENDPOINT_URL} ${command}    ${expected_error_code}
+    [return]          ${output}
+
+Execute AWSS3Cli
+    [Arguments]       ${command}
+    ${output} =       Execute             aws s3 --endpoint-url ${ENDPOINT_URL} ${command}
+    [return]          ${output}
+
+Install aws cli s3 centos
+    Execute           sudo yum install -y awscli
+
+Install aws cli s3 debian
+    Execute           sudo apt-get install -y awscli
+
+Install aws cli
+    ${rc}    ${output} =    Run And Return Rc And Output    which apt-get
+    Run Keyword if    '${rc}' == '0'    Install aws cli s3 debian
+    ${rc}    ${output} =    Run And Return Rc And Output    yum --help
+    Run Keyword if    '${rc}' == '0'    Install aws cli s3 centos
+
+Kinit test user
+    ${hostname} =       Execute           hostname
+    Set Suite Variable  ${TEST_USER}      testuser/${hostname}@EXAMPLE.COM
+    Execute             kinit -k ${TEST_USER} -t /etc/security/keytabs/testuser.keytab
+
+Setup secure credentials
+    Run Keyword         Install aws cli
 
 Review comment:
   Currently it is used to set up AWS secure credentials.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 218190)
Time Spent: 1h 20m  (was: 1h 10m)

> Refactor ozone acceptance test to allow run in secure mode
> --
>
> Key: HDDS-1255
> URL: https://issues.apache.org/jira/browse/HDDS-1255
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> Refactor ozone acceptance test to allow run in secure mode.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1293) ExcludeList#getProtoBuf throws ArrayIndexOutOfBoundsException

2019-03-25 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1293?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16800944#comment-16800944
 ] 

Hadoop QA commented on HDDS-1293:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  8m 
36s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
0s{color} | {color:green} No case conflicting files found. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  3m 
16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 15s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m  
0s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  5m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  2m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 51s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
47s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  2m 25s{color} 
| {color:red} hadoop-hdds in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 11m 41s{color} 
| {color:red} hadoop-ozone in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
26s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 81m 48s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.ozone.container.common.TestDatanodeStateMachine |
|   | hadoop.ozone.client.rpc.TestBlockOutputStreamWithFailures |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/PreCommit-HDDS-Build/2577/artifact/out/Dockerfile 
|
| JIRA Issue | HDDS-1293 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12963639/HDDS-1293.001.patch |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite 
unit shadedclient findbugs checkstyle |
| uname | Linux 8b6d5ff701d1 4.4.0-139-generic #165-Ubuntu SMP Wed Oct 24 
10:58:50 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HDDS-Build/ozone.sh |
| git revision | trunk / e5d72f5 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_191 |
| 

[jira] [Updated] (HDDS-1334) fix asf license errors in newly added files by HDDS-1234

2019-03-25 Thread Aravindan Vijayan (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1334?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aravindan Vijayan updated HDDS-1334:

Attachment: HDDS-1334-000.patch
Status: Patch Available  (was: Open)

> fix asf license errors in newly added files by HDDS-1234
> 
>
> Key: HDDS-1334
> URL: https://issues.apache.org/jira/browse/HDDS-1334
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Aravindan Vijayan
>Priority: Major
> Fix For: 0.5.0
>
> Attachments: HDDS-1334-000.patch
>
>
> [https://ci.anzix.net/job/ozone/11961//artifact/build/rat.out/*view*/]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1146) Adding container related metrics in SCM

2019-03-25 Thread Supratim Deka (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1146?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Supratim Deka updated HDDS-1146:

Attachment: HDDS-1146.000.patch
Status: Patch Available  (was: Open)

All SCM container metrics as per the document are implemented in the patch.
I have added a unit test for the Hadoop Metrics2 counters, but not yet for the
MXBean interfaces.
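
For reviewers who have not worked with Hadoop Metrics2 before, such a counter source is typically a small annotated class registered with the default metrics system. The sketch below is illustrative only; the class name, context, and metric names are assumptions, not the contents of the attached patch:

{code}
import org.apache.hadoop.metrics2.MetricsSystem;
import org.apache.hadoop.metrics2.annotation.Metric;
import org.apache.hadoop.metrics2.annotation.Metrics;
import org.apache.hadoop.metrics2.lib.DefaultMetricsSystem;
import org.apache.hadoop.metrics2.lib.MutableCounterLong;

// Hypothetical SCM container metrics source, registered with the default
// metrics system so the counters show up through JMX and metrics sinks.
@Metrics(about = "SCM container metrics", context = "SCM")
public final class SCMContainerMetricsSketch {

  @Metric private MutableCounterLong numSuccessfulCreateContainers;
  @Metric private MutableCounterLong numFailedCreateContainers;

  public static SCMContainerMetricsSketch register() {
    MetricsSystem ms = DefaultMetricsSystem.instance();
    return ms.register("SCMContainerMetricsSketch",
        "Container related metrics in SCM", new SCMContainerMetricsSketch());
  }

  public void incSuccessfulCreateContainers() {
    numSuccessfulCreateContainers.incr();
  }

  public void incFailedCreateContainers() {
    numFailedCreateContainers.incr();
  }
}
{code}

The MXBean side would presumably expose the same values through a separate interface, which is what the remaining unit test would cover.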

> Adding container related metrics in SCM
> ---
>
> Key: HDDS-1146
> URL: https://issues.apache.org/jira/browse/HDDS-1146
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: SCM
>Reporter: Nanda kumar
>Assignee: Supratim Deka
>Priority: Major
> Attachments: HDDS-1146.000.patch
>
>
> This jira aims to add more container related metrics to SCM.
> Following metrics will be added as part of this jira:
> * Number of containers
> * Number of open containers
> * Number of closed containers
> * Number of quasi closed containers
> * Number of closing containers
> * Number of successful create container calls
> * Number of failed create container calls
> * Number of successful delete container calls
> * Number of failed delete container calls
> * Number of successful container report processing
> * Number of failed container report processing
> * Number of successful incremental container report processing
> * Number of failed incremental container report processing



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1334) fix asf license errors in newly added files by HDDS-1234

2019-03-25 Thread Aravindan Vijayan (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1334?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aravindan Vijayan updated HDDS-1334:

Fix Version/s: 0.5.0

> fix asf license errors in newly added files by HDDS-1234
> 
>
> Key: HDDS-1334
> URL: https://issues.apache.org/jira/browse/HDDS-1334
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Aravindan Vijayan
>Priority: Major
> Fix For: 0.5.0
>
>
> [https://ci.anzix.net/job/ozone/11961//artifact/build/rat.out/*view*/]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-1334) fix asf license errors in newly added files by HDDS-1234

2019-03-25 Thread Bharat Viswanadham (JIRA)
Bharat Viswanadham created HDDS-1334:


 Summary: fix asf license errors in newly added files by HDDS-1234
 Key: HDDS-1334
 URL: https://issues.apache.org/jira/browse/HDDS-1334
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Bharat Viswanadham
Assignee: Aravindan Vijayan


[https://ci.anzix.net/job/ozone/11961//artifact/build/rat.out/*view*/]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1262) In OM HA OpenKey call Should happen only leader OM

2019-03-25 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1262?focusedWorklogId=218177&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-218177
 ]

ASF GitHub Bot logged work on HDDS-1262:


Author: ASF GitHub Bot
Created on: 25/Mar/19 17:46
Start Date: 25/Mar/19 17:46
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #626: HDDS-1262. In OM 
HA OpenKey and initiateMultipartUpload call Should happen only leader OM.
URL: https://github.com/apache/hadoop/pull/626#issuecomment-476307122
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 23 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 2 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 62 | Maven dependency ordering for branch |
   | +1 | mvninstall | 1054 | trunk passed |
   | +1 | compile | 126 | trunk passed |
   | +1 | checkstyle | 31 | trunk passed |
   | +1 | mvnsite | 103 | trunk passed |
   | +1 | shadedclient | 750 | branch has no errors when building and testing 
our client artifacts. |
   | 0 | findbugs | 0 | Skipped patched modules with no Java source: 
hadoop-ozone/integration-test |
   | +1 | findbugs | 110 | trunk passed |
   | +1 | javadoc | 71 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 10 | Maven dependency ordering for patch |
   | +1 | mvninstall | 118 | the patch passed |
   | +1 | compile | 108 | the patch passed |
   | +1 | cc | 108 | the patch passed |
   | +1 | javac | 108 | the patch passed |
   | -0 | checkstyle | 27 | hadoop-ozone: The patch generated 1 new + 0 
unchanged - 0 fixed = 1 total (was 0) |
   | +1 | mvnsite | 91 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 720 | patch has no errors when building and testing 
our client artifacts. |
   | 0 | findbugs | 0 | Skipped patched modules with no Java source: 
hadoop-ozone/integration-test |
   | +1 | findbugs | 123 | the patch passed |
   | -1 | javadoc | 35 | hadoop-ozone_common generated 1 new + 1 unchanged - 0 
fixed = 2 total (was 1) |
   ||| _ Other Tests _ |
   | +1 | unit | 41 | common in the patch passed. |
   | +1 | unit | 55 | ozone-manager in the patch passed. |
   | -1 | unit | 818 | integration-test in the patch failed. |
   | +1 | asflicense | 27 | The patch does not generate ASF License warnings. |
   | | | 4546 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.ozone.client.rpc.TestBlockOutputStreamWithFailures |
   |   | hadoop.ozone.client.rpc.TestCloseContainerHandlingByClient |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClient |
   |   | hadoop.ozone.om.TestScmChillMode |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-626/8/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/626 |
   | Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall 
 mvnsite  unit  shadedclient  findbugs  checkstyle  cc  |
   | uname | Linux 76eef4e09972 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / e5d72f5 |
   | maven | version: Apache Maven 3.3.9 |
   | Default Java | 1.8.0_191 |
   | findbugs | v3.1.0-RC1 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-626/8/artifact/out/diff-checkstyle-hadoop-ozone.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-626/8/artifact/out/diff-javadoc-javadoc-hadoop-ozone_common.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-626/8/artifact/out/patch-unit-hadoop-ozone_integration-test.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-626/8/testReport/ |
   | Max. process+thread count | 4887 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/common hadoop-ozone/ozone-manager 
hadoop-ozone/integration-test U: hadoop-ozone |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-626/8/console |
   | Powered by | Apache Yetus 0.9.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time 

[jira] [Work logged] (HDDS-1262) In OM HA OpenKey call Should happen only leader OM

2019-03-25 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1262?focusedWorklogId=218176&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-218176
 ]

ASF GitHub Bot logged work on HDDS-1262:


Author: ASF GitHub Bot
Created on: 25/Mar/19 17:42
Start Date: 25/Mar/19 17:42
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #626: HDDS-1262. In OM 
HA OpenKey and initiateMultipartUpload call Should happen only leader OM.
URL: https://github.com/apache/hadoop/pull/626#issuecomment-476305892
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 26 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 2 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 52 | Maven dependency ordering for branch |
   | +1 | mvninstall | 1082 | trunk passed |
   | +1 | compile | 94 | trunk passed |
   | +1 | checkstyle | 23 | trunk passed |
   | +1 | mvnsite | 95 | trunk passed |
   | +1 | shadedclient | 727 | branch has no errors when building and testing 
our client artifacts. |
   | 0 | findbugs | 0 | Skipped patched modules with no Java source: 
hadoop-ozone/integration-test |
   | +1 | findbugs | 98 | trunk passed |
   | +1 | javadoc | 71 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 11 | Maven dependency ordering for patch |
   | +1 | mvninstall | 95 | the patch passed |
   | +1 | compile | 91 | the patch passed |
   | +1 | cc | 91 | the patch passed |
   | +1 | javac | 91 | the patch passed |
   | +1 | checkstyle | 20 | the patch passed |
   | +1 | mvnsite | 80 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 709 | patch has no errors when building and testing 
our client artifacts. |
   | 0 | findbugs | 0 | Skipped patched modules with no Java source: 
hadoop-ozone/integration-test |
   | +1 | findbugs | 108 | the patch passed |
   | -1 | javadoc | 35 | hadoop-ozone_common generated 1 new + 1 unchanged - 0 
fixed = 2 total (was 1) |
   ||| _ Other Tests _ |
   | +1 | unit | 34 | common in the patch passed. |
   | +1 | unit | 40 | ozone-manager in the patch passed. |
   | +1 | unit | 630 | integration-test in the patch passed. |
   | +1 | asflicense | 34 | The patch does not generate ASF License warnings. |
   | | | 4218 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-626/9/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/626 |
   | Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall 
 mvnsite  unit  shadedclient  findbugs  checkstyle  cc  |
   | uname | Linux 415a821960c6 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / e5d72f5 |
   | maven | version: Apache Maven 3.3.9 |
   | Default Java | 1.8.0_191 |
   | findbugs | v3.1.0-RC1 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-626/9/artifact/out/diff-javadoc-javadoc-hadoop-ozone_common.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-626/9/testReport/ |
   | Max. process+thread count | 3812 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/common hadoop-ozone/ozone-manager 
hadoop-ozone/integration-test U: hadoop-ozone |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-626/9/console |
   | Powered by | Apache Yetus 0.9.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 218176)
Time Spent: 2h  (was: 1h 50m)

> In OM HA OpenKey call Should happen only leader OM
> --
>
> Key: HDDS-1262
> URL: https://issues.apache.org/jira/browse/HDDS-1262
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 2h
>  Remaining Estimate: 0h
>
> In OM HA, currently, when openKey is 

[jira] [Assigned] (HDDS-1333) OzoneFileSystem can't work with spark/hadoop2.7 because incompatible security classes

2019-03-25 Thread Xiaoyu Yao (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1333?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao reassigned HDDS-1333:


Assignee: Elek, Marton  (was: Xiaoyu Yao)

> OzoneFileSystem can't work with spark/hadoop2.7 because incompatible security 
> classes
> -
>
> Key: HDDS-1333
> URL: https://issues.apache.org/jira/browse/HDDS-1333
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
>
> The current ozonefs compatibility layer is broken by: HDDS-1299.
> The spark jobs (including hadoop 2.7) can't be executed any more:
> {code}
> 2019-03-25 09:50:08 INFO  StateStoreCoordinatorRef:54 - Registered 
> StateStoreCoordinator endpoint
> Exception in thread "main" java.lang.NoClassDefFoundError: 
> org/apache/hadoop/crypto/key/KeyProviderTokenIssuer
> at java.lang.ClassLoader.defineClass1(Native Method)
> at java.lang.ClassLoader.defineClass(ClassLoader.java:763)
> at 
> java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)
> at java.net.URLClassLoader.defineClass(URLClassLoader.java:468)
> at java.net.URLClassLoader.access$100(URLClassLoader.java:74)
> at java.net.URLClassLoader$1.run(URLClassLoader.java:369)
> at java.net.URLClassLoader$1.run(URLClassLoader.java:363)
> at java.security.AccessController.doPrivileged(Native Method)
> at java.net.URLClassLoader.findClass(URLClassLoader.java:362)
> at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
> at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
> at java.lang.Class.forName0(Native Method)
> at java.lang.Class.forName(Class.java:348)
> at 
> org.apache.hadoop.conf.Configuration.getClassByNameOrNull(Configuration.java:2134)
> at 
> org.apache.hadoop.conf.Configuration.getClassByName(Configuration.java:2099)
> at 
> org.apache.hadoop.conf.Configuration.getClass(Configuration.java:2193)
> at 
> org.apache.hadoop.fs.FileSystem.getFileSystemClass(FileSystem.java:2654)
> at 
> org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2667)
> at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:94)
> at 
> org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2703)
> at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2685)
> at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:373)
> at org.apache.hadoop.fs.Path.getFileSystem(Path.java:295)
> at 
> org.apache.spark.sql.execution.streaming.FileStreamSink$.hasMetadata(FileStreamSink.scala:45)
> at 
> org.apache.spark.sql.execution.datasources.DataSource.resolveRelation(DataSource.scala:332)
> at 
> org.apache.spark.sql.DataFrameReader.loadV1Source(DataFrameReader.scala:223)
> at 
> org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:211)
> at 
> org.apache.spark.sql.DataFrameReader.text(DataFrameReader.scala:715)
> at 
> org.apache.spark.sql.DataFrameReader.textFile(DataFrameReader.scala:757)
> at 
> org.apache.spark.sql.DataFrameReader.textFile(DataFrameReader.scala:724)
> at org.apache.spark.examples.JavaWordCount.main(JavaWordCount.java:45)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at 
> org.apache.spark.deploy.JavaMainApplication.start(SparkApplication.scala:52)
> at 
> org.apache.spark.deploy.SparkSubmit.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:849)
> at 
> org.apache.spark.deploy.SparkSubmit.doRunMain$1(SparkSubmit.scala:167)
> at org.apache.spark.deploy.SparkSubmit.submit(SparkSubmit.scala:195)
> at org.apache.spark.deploy.SparkSubmit.doSubmit(SparkSubmit.scala:86)
> at 
> org.apache.spark.deploy.SparkSubmit$$anon$2.doSubmit(SparkSubmit.scala:924)
> at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:933)
> at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
> Caused by: java.lang.ClassNotFoundException: 
> org.apache.hadoop.crypto.key.KeyProviderTokenIssuer
> at java.net.URLClassLoader.findClass(URLClassLoader.java:382)
> at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
> at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
> ... 43 more
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (HDDS-1324) TestOzoneManagerHA seems to be flaky

2019-03-25 Thread Hanisha Koneru (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1324?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hanisha Koneru reassigned HDDS-1324:


Assignee: Hanisha Koneru

> TestOzoneManagerHA seems to be flaky
> 
>
> Key: HDDS-1324
> URL: https://issues.apache.org/jira/browse/HDDS-1324
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: test
>Affects Versions: 0.5.0
>Reporter: Arpit Agarwal
>Assignee: Hanisha Koneru
>Priority: Major
>
> TestOzoneManagerHA failed once with the following error:
> {code}
> [ERROR] Tests run: 8, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 
> 105.931 s <<< FAILURE! - in org.apache.hadoop.ozone.om.TestOzoneManagerHA
> [ERROR] testOMRetryProxy(org.apache.hadoop.ozone.om.TestOzoneManagerHA)  Time 
> elapsed: 21.781 s  <<< FAILURE!
> java.lang.AssertionError: expected:<30> but was:<10>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:743)
>   at org.junit.Assert.assertEquals(Assert.java:118)
>   at org.junit.Assert.assertEquals(Assert.java:555)
>   at org.junit.Assert.assertEquals(Assert.java:542)
>   at 
> org.apache.hadoop.ozone.om.TestOzoneManagerHA.testOMRetryProxy(TestOzoneManagerHA.java:305)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1333) OzoneFileSystem can't work with spark/hadoop2.7 because incompatible security classes

2019-03-25 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1333?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16800869#comment-16800869
 ] 

Steve Loughran commented on HDDS-1333:
--

bq. `java.lang.NoClassDefFoundError: org/apache/hadoop/fs/StorageStatistics` 

That's the standard thing people complain about on Stack Overflow when they drop a 
later version of hadoop-aws into a Spark installation, for which the answer is 
"that's just the first thing you'll find missing..."

> OzoneFileSystem can't work with spark/hadoop2.7 because incompatible security 
> classes
> -
>
> Key: HDDS-1333
> URL: https://issues.apache.org/jira/browse/HDDS-1333
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Elek, Marton
>Assignee: Xiaoyu Yao
>Priority: Major
>
> The current ozonefs compatibility layer is broken by: HDDS-1299.
> The spark jobs (including hadoop 2.7) can't be executed any more:
> {code}
> 2019-03-25 09:50:08 INFO  StateStoreCoordinatorRef:54 - Registered 
> StateStoreCoordinator endpoint
> Exception in thread "main" java.lang.NoClassDefFoundError: 
> org/apache/hadoop/crypto/key/KeyProviderTokenIssuer
> at java.lang.ClassLoader.defineClass1(Native Method)
> at java.lang.ClassLoader.defineClass(ClassLoader.java:763)
> at 
> java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)
> at java.net.URLClassLoader.defineClass(URLClassLoader.java:468)
> at java.net.URLClassLoader.access$100(URLClassLoader.java:74)
> at java.net.URLClassLoader$1.run(URLClassLoader.java:369)
> at java.net.URLClassLoader$1.run(URLClassLoader.java:363)
> at java.security.AccessController.doPrivileged(Native Method)
> at java.net.URLClassLoader.findClass(URLClassLoader.java:362)
> at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
> at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
> at java.lang.Class.forName0(Native Method)
> at java.lang.Class.forName(Class.java:348)
> at 
> org.apache.hadoop.conf.Configuration.getClassByNameOrNull(Configuration.java:2134)
> at 
> org.apache.hadoop.conf.Configuration.getClassByName(Configuration.java:2099)
> at 
> org.apache.hadoop.conf.Configuration.getClass(Configuration.java:2193)
> at 
> org.apache.hadoop.fs.FileSystem.getFileSystemClass(FileSystem.java:2654)
> at 
> org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2667)
> at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:94)
> at 
> org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2703)
> at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2685)
> at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:373)
> at org.apache.hadoop.fs.Path.getFileSystem(Path.java:295)
> at 
> org.apache.spark.sql.execution.streaming.FileStreamSink$.hasMetadata(FileStreamSink.scala:45)
> at 
> org.apache.spark.sql.execution.datasources.DataSource.resolveRelation(DataSource.scala:332)
> at 
> org.apache.spark.sql.DataFrameReader.loadV1Source(DataFrameReader.scala:223)
> at 
> org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:211)
> at 
> org.apache.spark.sql.DataFrameReader.text(DataFrameReader.scala:715)
> at 
> org.apache.spark.sql.DataFrameReader.textFile(DataFrameReader.scala:757)
> at 
> org.apache.spark.sql.DataFrameReader.textFile(DataFrameReader.scala:724)
> at org.apache.spark.examples.JavaWordCount.main(JavaWordCount.java:45)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at 
> org.apache.spark.deploy.JavaMainApplication.start(SparkApplication.scala:52)
> at 
> org.apache.spark.deploy.SparkSubmit.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:849)
> at 
> org.apache.spark.deploy.SparkSubmit.doRunMain$1(SparkSubmit.scala:167)
> at org.apache.spark.deploy.SparkSubmit.submit(SparkSubmit.scala:195)
> at org.apache.spark.deploy.SparkSubmit.doSubmit(SparkSubmit.scala:86)
> at 
> org.apache.spark.deploy.SparkSubmit$$anon$2.doSubmit(SparkSubmit.scala:924)
> at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:933)
> at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
> Caused by: java.lang.ClassNotFoundException: 
> org.apache.hadoop.crypto.key.KeyProviderTokenIssuer
> at 

[jira] [Updated] (HDDS-1293) ExcludeList#getProtoBuf throws ArrayIndexOutOfBoundsException

2019-03-25 Thread Shashikant Banerjee (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1293?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shashikant Banerjee updated HDDS-1293:
--
Attachment: HDDS-1293.001.patch

> ExcludeList#getProtoBuf throws ArrayIndexOutOfBoundsException
> -
>
> Key: HDDS-1293
> URL: https://issues.apache.org/jira/browse/HDDS-1293
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: SCM
>Affects Versions: 0.4.0
>Reporter: Mukul Kumar Singh
>Assignee: Shashikant Banerjee
>Priority: Major
> Attachments: HDDS-1293.000.patch, HDDS-1293.001.patch
>
>
> ExcludeList#getProtoBuf throws ArrayIndexOutOfBoundsException because 
> getProtoBuf uses parallelStreams
> {code}
> 2019-03-17 16:24:35,774 INFO  retry.RetryInvocationHandler 
> (RetryInvocationHandler.java:log(411)) - 
> com.google.protobuf.ServiceException: 
> org.apache.hadoop.ipc.RemoteException(java.lang.ArrayIndexOutOfBoundsException):
>  3
>   at java.util.ArrayList.add(ArrayList.java:463)
>   at 
> org.apache.hadoop.hdds.protocol.proto.HddsProtos$ExcludeListProto$Builder.addContainerIds(HddsProtos.java:12904)
>   at 
> org.apache.hadoop.hdds.scm.container.common.helpers.ExcludeList.lambda$getProtoBuf$3(ExcludeList.java:89)
>   at 
> java.util.stream.ForEachOps$ForEachOp$OfRef.accept(ForEachOps.java:184)
>   at 
> java.util.ArrayList$ArrayListSpliterator.forEachRemaining(ArrayList.java:1382)
>   at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:481)
>   at java.util.stream.ForEachOps$ForEachTask.compute(ForEachOps.java:291)
>   at java.util.concurrent.CountedCompleter.exec(CountedCompleter.java:731)
>   at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:289)
>   at 
> java.util.concurrent.ForkJoinPool.helpComplete(ForkJoinPool.java:1870)
>   at 
> java.util.concurrent.ForkJoinPool.externalHelpComplete(ForkJoinPool.java:2467)
>   at 
> java.util.concurrent.ForkJoinTask.externalAwaitDone(ForkJoinTask.java:324)
>   at java.util.concurrent.ForkJoinTask.doInvoke(ForkJoinTask.java:405)
>   at java.util.concurrent.ForkJoinTask.invoke(ForkJoinTask.java:734)
>   at 
> java.util.stream.ForEachOps$ForEachOp.evaluateParallel(ForEachOps.java:160)
>   at 
> java.util.stream.ForEachOps$ForEachOp$OfRef.evaluateParallel(ForEachOps.java:174)
>   at java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:233)
>   at 
> java.util.stream.ReferencePipeline.forEach(ReferencePipeline.java:418)
>   at 
> java.util.stream.ReferencePipeline$Head.forEach(ReferencePipeline.java:583)
>   at 
> org.apache.hadoop.hdds.scm.container.common.helpers.ExcludeList.getProtoBuf(ExcludeList.java:89)
>   at 
> org.apache.hadoop.hdds.scm.protocolPB.ScmBlockLocationProtocolClientSideTranslatorPB.allocateBlock(ScmBlockLocationProtocolClientSideTranslatorPB.java:100)
>   at sun.reflect.GeneratedMethodAccessor107.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.apache.hadoop.hdds.tracing.TraceAllMethod.invoke(TraceAllMethod.java:66)
>   at com.sun.proxy.$Proxy22.allocateBlock(Unknown Source)
>   at 
> org.apache.hadoop.ozone.om.KeyManagerImpl.allocateBlock(KeyManagerImpl.java:275)
>   at 
> org.apache.hadoop.ozone.om.KeyManagerImpl.allocateBlock(KeyManagerImpl.java:246)
>   at 
> org.apache.hadoop.ozone.om.OzoneManager.allocateBlock(OzoneManager.java:2023)
>   at 
> org.apache.hadoop.ozone.protocolPB.OzoneManagerRequestHandler.allocateBlock(OzoneManagerRequestHandler.java:631)
>   at 
> org.apache.hadoop.ozone.protocolPB.OzoneManagerRequestHandler.handle(OzoneManagerRequestHandler.java:231)
>   at 
> org.apache.hadoop.ozone.protocolPB.OzoneManagerProtocolServerSideTranslatorPB.submitRequestDirectlyToOM(OzoneManagerProtocolServerSideTranslatorPB.java:131)
>   at 
> org.apache.hadoop.ozone.protocolPB.OzoneManagerProtocolServerSideTranslatorPB.submitRequest(OzoneManagerProtocolServerSideTranslatorPB.java:86)
>   at 
> org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos$OzoneManagerService$2.callBlockingMethod(OzoneManagerProtocolProtos.java)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:524)
>   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1025)
>   at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:876)
>   at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:822)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> 

[jira] [Assigned] (HDDS-1333) OzoneFileSystem can't work with spark/hadoop2.7 because incompatible security classes

2019-03-25 Thread Xiaoyu Yao (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1333?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao reassigned HDDS-1333:


Assignee: Xiaoyu Yao

> OzoneFileSystem can't work with spark/hadoop2.7 because incompatible security 
> classes
> -
>
> Key: HDDS-1333
> URL: https://issues.apache.org/jira/browse/HDDS-1333
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Elek, Marton
>Assignee: Xiaoyu Yao
>Priority: Major
>
> The current ozonefs compatibility layer is broken by: HDDS-1299.
> The spark jobs (including hadoop 2.7) can't be executed any more:
> {code}
> 2019-03-25 09:50:08 INFO  StateStoreCoordinatorRef:54 - Registered 
> StateStoreCoordinator endpoint
> Exception in thread "main" java.lang.NoClassDefFoundError: 
> org/apache/hadoop/crypto/key/KeyProviderTokenIssuer
> at java.lang.ClassLoader.defineClass1(Native Method)
> at java.lang.ClassLoader.defineClass(ClassLoader.java:763)
> at 
> java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)
> at java.net.URLClassLoader.defineClass(URLClassLoader.java:468)
> at java.net.URLClassLoader.access$100(URLClassLoader.java:74)
> at java.net.URLClassLoader$1.run(URLClassLoader.java:369)
> at java.net.URLClassLoader$1.run(URLClassLoader.java:363)
> at java.security.AccessController.doPrivileged(Native Method)
> at java.net.URLClassLoader.findClass(URLClassLoader.java:362)
> at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
> at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
> at java.lang.Class.forName0(Native Method)
> at java.lang.Class.forName(Class.java:348)
> at 
> org.apache.hadoop.conf.Configuration.getClassByNameOrNull(Configuration.java:2134)
> at 
> org.apache.hadoop.conf.Configuration.getClassByName(Configuration.java:2099)
> at 
> org.apache.hadoop.conf.Configuration.getClass(Configuration.java:2193)
> at 
> org.apache.hadoop.fs.FileSystem.getFileSystemClass(FileSystem.java:2654)
> at 
> org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2667)
> at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:94)
> at 
> org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2703)
> at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2685)
> at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:373)
> at org.apache.hadoop.fs.Path.getFileSystem(Path.java:295)
> at 
> org.apache.spark.sql.execution.streaming.FileStreamSink$.hasMetadata(FileStreamSink.scala:45)
> at 
> org.apache.spark.sql.execution.datasources.DataSource.resolveRelation(DataSource.scala:332)
> at 
> org.apache.spark.sql.DataFrameReader.loadV1Source(DataFrameReader.scala:223)
> at 
> org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:211)
> at 
> org.apache.spark.sql.DataFrameReader.text(DataFrameReader.scala:715)
> at 
> org.apache.spark.sql.DataFrameReader.textFile(DataFrameReader.scala:757)
> at 
> org.apache.spark.sql.DataFrameReader.textFile(DataFrameReader.scala:724)
> at org.apache.spark.examples.JavaWordCount.main(JavaWordCount.java:45)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at 
> org.apache.spark.deploy.JavaMainApplication.start(SparkApplication.scala:52)
> at 
> org.apache.spark.deploy.SparkSubmit.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:849)
> at 
> org.apache.spark.deploy.SparkSubmit.doRunMain$1(SparkSubmit.scala:167)
> at org.apache.spark.deploy.SparkSubmit.submit(SparkSubmit.scala:195)
> at org.apache.spark.deploy.SparkSubmit.doSubmit(SparkSubmit.scala:86)
> at 
> org.apache.spark.deploy.SparkSubmit$$anon$2.doSubmit(SparkSubmit.scala:924)
> at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:933)
> at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
> Caused by: java.lang.ClassNotFoundException: 
> org.apache.hadoop.crypto.key.KeyProviderTokenIssuer
> at java.net.URLClassLoader.findClass(URLClassLoader.java:382)
> at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
> at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
> ... 43 more
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Work logged] (HDDS-1299) Support TokenIssuer interface for running jobs with OzoneFileSystem

2019-03-25 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1299?focusedWorklogId=218121&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-218121
 ]

ASF GitHub Bot logged work on HDDS-1299:


Author: ASF GitHub Bot
Created on: 25/Mar/19 16:17
Start Date: 25/Mar/19 16:17
Worklog Time Spent: 10m 
  Work Description: xiaoyuyao commented on issue #627: HDDS-1299. Support 
TokenIssuer interface for running jobs with OzoneFileSystem.
URL: https://github.com/apache/hadoop/pull/627#issuecomment-476270295
 
 
   Thanks Steve for the heads up. Ozone DT via HCFS requires a Kerberos setup for 
various components to test end-to-end. There is no easy way to test this inside a 
unit test, as OM/SCM/DN all log in with different principals.
   We have a follow-up JIRA to add system tests.
   
   Before committing the changes, I spent a few days documenting the manual steps 
and the testing that was performed here:
   %your_hadoop_rep%/hadoop-ozone/dist/src/main/compose/ozonesecure-mr/README.md
   
   This will be the basis of the system tests being added, where we may use 
different Hadoop versions such as 3.2 and 2.7 to test compatibility.
   The test will be very similar to the one below, just with a slightly different 
security setup.
   
%your_hadoop_rep%/hadoop/hadoop-ozone/dist/src/main/smoketest/security/ozone-secure.robot
   
   
   Bq. Without tests, when people like myself make an incompatible change which 
breaks Ozone's DT support, we're just going to shrug and say "well, you should 
have written the tests"
   Agreed, tests will be added. I have a WIP patch for DT system tests, but there 
is a refactor of the Ozone smoke tests in 
HDDS-1255. I will post my tests 
after that.
   
   HDDS-1299, based on HADOOP-14445, is a Hadoop-3-only change; it brings 
Ozone DT support. Before that, Ozone did not support DT. There is a pending 
HADOOP-14445 patch for Hadoop-2.
   We could build support for Hadoop-2 after that lands, in a separate ozone branch.
   
   Thanks,
   Xiaoyu
   
   From: Steve Loughran 
   Reply-To: apache/hadoop 
   Date: Monday, March 25, 2019 at 8:58 AM
   To: apache/hadoop 
   Cc: Xiaoyu Yao , State change 

   Subject: Re: [apache/hadoop] HDDS-1299. Support TokenIssuer interface for 
running jobs with OzoneFileSystem. (#627)
   
   
   Given this has broken things, can I ask a process question here, 
specifically regarding Yetus's v-1
   
   The patch doesn't appear to include any new or modified tests. Please 
justify why no new tests are needed for this patch. Also please list what 
manual steps were performed to verify this patch.
   
   There are lots of ways to test for DTs being issued: the hdfs fs command, 
the hadoop dtutil command, the actual MR job
   
 *   Marshalling/unmarshalling of tokens: 
org.apache.hadoop.fs.s3a.auth.delegation.TestS3ADelegationTokenSupport
 *   org.apache.hadoop.fs.s3a.auth.delegation.ITestRoleDelegationTokens
 *   org.apache.hadoop.fs.s3a.auth.delegation.ITestRoleDelegationInFileystem
 *   org.apache.hadoop.fs.azurebfs.extensions.ITestAbfsDelegationTokens
   
   You can even verify that MR cluster launch collects the tokens: 
org.apache.hadoop.fs.s3a.auth.delegation.ITestDelegatedMRJob
   
   Please write tests. Apart from the MR job one, it's straightforward, and I've 
just given you all the classes to start with.
   
   Without tests, when people like myself make an incompatible change which 
breaks Ozone's DT support, we're just going to shrug and say "well, you should 
have written the tests"
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 218121)
Time Spent: 6.5h  (was: 6h 20m)

> Support TokenIssuer interface for running jobs with OzoneFileSystem
> ---
>
> Key: HDDS-1299
> URL: https://issues.apache.org/jira/browse/HDDS-1299
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: 0.4.0
>
>  Time Spent: 6.5h
>  Remaining Estimate: 0h
>
> This ticket is opened to add TokenIssuer interface support to 

[jira] [Commented] (HDDS-1333) OzoneFileSystem can't work with spark/hadoop2.7 because incompatible security classes

2019-03-25 Thread Elek, Marton (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1333?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16800842#comment-16800842
 ] 

Elek, Marton commented on HDDS-1333:


Yes, we should discuss again the effort to support older hadoop2 branches.

I have an idea to create a workaround: I created a simplified version of the 
OzoneFileSystem:

{code}
public class OzoneFileSystemSimplified extends FileSystem {
   //all the existing implementation except the file token magic
}
{code}

And the real file system is just an extension:

{code}
@InterfaceAudience.Private
@InterfaceStability.Evolving
public class OzoneFileSystem extends OzoneFileSystemSimplified
implements KeyProviderTokenIssuer {

  @Override
  public KeyProvider getKeyProvider() throws IOException {
return getAdapter().getKeyProvider();
  }

  @Override
  public URI getKeyProviderUri() throws IOException {
return getAdapter().getKeyProviderUri();
  }

  @Override
  public DelegationTokenIssuer[] getAdditionalTokenIssuers()
  throws IOException {
KeyProvider keyProvider = getKeyProvider();
if (keyProvider instanceof DelegationTokenIssuer) {
  return new DelegationTokenIssuer[]{(DelegationTokenIssuer)keyProvider};
}
return null;
  }

}
{code}

As of now I am testing this change, but I have another problem 
(`java.lang.NoClassDefFoundError: org/apache/hadoop/fs/StorageStatistics`) 
which needs to be solved before everything can be tested together. 

Still working on this.
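
If the split works out, a hadoop-2.7 client could presumably be pointed at the simplified class explicitly through the usual fs.<scheme>.impl mechanism. A rough sketch, assuming the package of the current OzoneFileSystem and the class name from the snippet above:

{code}
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class OzoneFsOnHadoop2Sketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // Select the implementation that does not reference the
    // token-issuer interfaces missing from hadoop-common 2.7.
    conf.set("fs.o3fs.impl",
        "org.apache.hadoop.fs.ozone.OzoneFileSystemSimplified");

    FileSystem fs = FileSystem.get(URI.create("o3fs://bucket.volume/"), conf);
    System.out.println(fs.exists(new Path("/")));
  }
}
{code}

Whether the simplified class should be the default for the o3fs scheme on older Hadoop versions is exactly the packaging question raised above.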



> OzoneFileSystem can't work with spark/hadoop2.7 because incompatible security 
> classes
> -
>
> Key: HDDS-1333
> URL: https://issues.apache.org/jira/browse/HDDS-1333
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Elek, Marton
>Priority: Major
>
> The current ozonefs compatibility layer is broken by: HDDS-1299.
> The spark jobs (including hadoop 2.7) can't be executed any more:
> {code}
> 2019-03-25 09:50:08 INFO  StateStoreCoordinatorRef:54 - Registered 
> StateStoreCoordinator endpoint
> Exception in thread "main" java.lang.NoClassDefFoundError: 
> org/apache/hadoop/crypto/key/KeyProviderTokenIssuer
> at java.lang.ClassLoader.defineClass1(Native Method)
> at java.lang.ClassLoader.defineClass(ClassLoader.java:763)
> at 
> java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)
> at java.net.URLClassLoader.defineClass(URLClassLoader.java:468)
> at java.net.URLClassLoader.access$100(URLClassLoader.java:74)
> at java.net.URLClassLoader$1.run(URLClassLoader.java:369)
> at java.net.URLClassLoader$1.run(URLClassLoader.java:363)
> at java.security.AccessController.doPrivileged(Native Method)
> at java.net.URLClassLoader.findClass(URLClassLoader.java:362)
> at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
> at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
> at java.lang.Class.forName0(Native Method)
> at java.lang.Class.forName(Class.java:348)
> at 
> org.apache.hadoop.conf.Configuration.getClassByNameOrNull(Configuration.java:2134)
> at 
> org.apache.hadoop.conf.Configuration.getClassByName(Configuration.java:2099)
> at 
> org.apache.hadoop.conf.Configuration.getClass(Configuration.java:2193)
> at 
> org.apache.hadoop.fs.FileSystem.getFileSystemClass(FileSystem.java:2654)
> at 
> org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2667)
> at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:94)
> at 
> org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2703)
> at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2685)
> at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:373)
> at org.apache.hadoop.fs.Path.getFileSystem(Path.java:295)
> at 
> org.apache.spark.sql.execution.streaming.FileStreamSink$.hasMetadata(FileStreamSink.scala:45)
> at 
> org.apache.spark.sql.execution.datasources.DataSource.resolveRelation(DataSource.scala:332)
> at 
> org.apache.spark.sql.DataFrameReader.loadV1Source(DataFrameReader.scala:223)
> at 
> org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:211)
> at 
> org.apache.spark.sql.DataFrameReader.text(DataFrameReader.scala:715)
> at 
> org.apache.spark.sql.DataFrameReader.textFile(DataFrameReader.scala:757)
> at 
> org.apache.spark.sql.DataFrameReader.textFile(DataFrameReader.scala:724)
> at org.apache.spark.examples.JavaWordCount.main(JavaWordCount.java:45)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> 

[jira] [Commented] (HDDS-1333) OzoneFileSystem can't work with spark/hadoop2.7 because incompatible security classes

2019-03-25 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1333?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16800834#comment-16800834
 ] 

Steve Loughran commented on HDDS-1333:
--

Looks like there's a class in trunk, KeyProviderTokenIssuer, which isn't in 
branch-2.7. But for this error to surface, the classpath has to be a mix 
of the trunk ozone JAR and a hadoop-common 2.7 JAR. That's inevitably going to be 
broken in so many ways, isn't it?

> OzoneFileSystem can't work with spark/hadoop2.7 because incompatible security 
> classes
> -
>
> Key: HDDS-1333
> URL: https://issues.apache.org/jira/browse/HDDS-1333
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Elek, Marton
>Priority: Major
>
> The current ozonefs compatibility layer is broken by: HDDS-1299.
> The spark jobs (including hadoop 2.7) can't be executed any more:
> {code}
> 2019-03-25 09:50:08 INFO  StateStoreCoordinatorRef:54 - Registered 
> StateStoreCoordinator endpoint
> Exception in thread "main" java.lang.NoClassDefFoundError: 
> org/apache/hadoop/crypto/key/KeyProviderTokenIssuer
> at java.lang.ClassLoader.defineClass1(Native Method)
> at java.lang.ClassLoader.defineClass(ClassLoader.java:763)
> at 
> java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)
> at java.net.URLClassLoader.defineClass(URLClassLoader.java:468)
> at java.net.URLClassLoader.access$100(URLClassLoader.java:74)
> at java.net.URLClassLoader$1.run(URLClassLoader.java:369)
> at java.net.URLClassLoader$1.run(URLClassLoader.java:363)
> at java.security.AccessController.doPrivileged(Native Method)
> at java.net.URLClassLoader.findClass(URLClassLoader.java:362)
> at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
> at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
> at java.lang.Class.forName0(Native Method)
> at java.lang.Class.forName(Class.java:348)
> at 
> org.apache.hadoop.conf.Configuration.getClassByNameOrNull(Configuration.java:2134)
> at 
> org.apache.hadoop.conf.Configuration.getClassByName(Configuration.java:2099)
> at 
> org.apache.hadoop.conf.Configuration.getClass(Configuration.java:2193)
> at 
> org.apache.hadoop.fs.FileSystem.getFileSystemClass(FileSystem.java:2654)
> at 
> org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2667)
> at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:94)
> at 
> org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2703)
> at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2685)
> at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:373)
> at org.apache.hadoop.fs.Path.getFileSystem(Path.java:295)
> at 
> org.apache.spark.sql.execution.streaming.FileStreamSink$.hasMetadata(FileStreamSink.scala:45)
> at 
> org.apache.spark.sql.execution.datasources.DataSource.resolveRelation(DataSource.scala:332)
> at 
> org.apache.spark.sql.DataFrameReader.loadV1Source(DataFrameReader.scala:223)
> at 
> org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:211)
> at 
> org.apache.spark.sql.DataFrameReader.text(DataFrameReader.scala:715)
> at 
> org.apache.spark.sql.DataFrameReader.textFile(DataFrameReader.scala:757)
> at 
> org.apache.spark.sql.DataFrameReader.textFile(DataFrameReader.scala:724)
> at org.apache.spark.examples.JavaWordCount.main(JavaWordCount.java:45)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at 
> org.apache.spark.deploy.JavaMainApplication.start(SparkApplication.scala:52)
> at 
> org.apache.spark.deploy.SparkSubmit.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:849)
> at 
> org.apache.spark.deploy.SparkSubmit.doRunMain$1(SparkSubmit.scala:167)
> at org.apache.spark.deploy.SparkSubmit.submit(SparkSubmit.scala:195)
> at org.apache.spark.deploy.SparkSubmit.doSubmit(SparkSubmit.scala:86)
> at 
> org.apache.spark.deploy.SparkSubmit$$anon$2.doSubmit(SparkSubmit.scala:924)
> at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:933)
> at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
> Caused by: java.lang.ClassNotFoundException: 
> org.apache.hadoop.crypto.key.KeyProviderTokenIssuer
> at java.net.URLClassLoader.findClass(URLClassLoader.java:382)
> at 

[jira] [Commented] (HDDS-1333) OzoneFileSystem can't work with spark/hadoop2.7 because incompatible security classes

2019-03-25 Thread Xiaoyu Yao (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1333?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16800827#comment-16800827
 ] 

Xiaoyu Yao commented on HDDS-1333:
--

Thanks [~elek] for reporting this. KeyProviderTokenIssuer and 
DelegationTokenIssuer are new interfaces separated from the Hadoop Compatible 
FileSystem interface by HADOOP-14445, which is an incompatible change on trunk. 
There are pending patches to support this on branch-2, but they are not committed yet.

We added token support for o3fs in HDDS-1299 for MR/Spark based on the latest 
trunk (Hadoop 3.2+). If we really need to support hadoop-2 for cases like this, we 
should have a separate ozone branch that builds against hadoop-2. 
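
To make the failure mode concrete: OzoneFileSystem now declares that it implements KeyProviderTokenIssuer, so the JVM has to resolve that interface at the moment the class is defined, before any Ozone code runs; against hadoop-common 2.7 that resolution is what produces the NoClassDefFoundError in the stack trace above. A minimal sketch of how a client could detect this up front (illustrative only; it assumes the o3fs implementation lives at org.apache.hadoop.fs.ozone.OzoneFileSystem):

{code}
import org.apache.hadoop.fs.FileSystem;

// Illustrative guard: loading OzoneFileSystem forces its superinterfaces
// (including KeyProviderTokenIssuer) to be resolved, so on a Hadoop 2.7
// classpath this fails with NoClassDefFoundError rather than
// ClassNotFoundException.
public final class OzoneFsClasspathCheck {
  public static void main(String[] args) {
    try {
      Class.forName("org.apache.hadoop.fs.ozone.OzoneFileSystem")
          .asSubclass(FileSystem.class);
      System.out.println("OzoneFileSystem is usable on this classpath");
    } catch (ClassNotFoundException | NoClassDefFoundError e) {
      System.err.println("OzoneFileSystem cannot be linked here: " + e);
    }
  }
}
{code}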

 

> OzoneFileSystem can't work with spark/hadoop2.7 because incompatible security 
> classes
> -
>
> Key: HDDS-1333
> URL: https://issues.apache.org/jira/browse/HDDS-1333
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Elek, Marton
>Priority: Major
>
> The current ozonefs compatibility layer is broken by: HDDS-1299.
> The spark jobs (including hadoop 2.7) can't be executed any more:
> {code}
> 2019-03-25 09:50:08 INFO  StateStoreCoordinatorRef:54 - Registered 
> StateStoreCoordinator endpoint
> Exception in thread "main" java.lang.NoClassDefFoundError: 
> org/apache/hadoop/crypto/key/KeyProviderTokenIssuer
> at java.lang.ClassLoader.defineClass1(Native Method)
> at java.lang.ClassLoader.defineClass(ClassLoader.java:763)
> at 
> java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)
> at java.net.URLClassLoader.defineClass(URLClassLoader.java:468)
> at java.net.URLClassLoader.access$100(URLClassLoader.java:74)
> at java.net.URLClassLoader$1.run(URLClassLoader.java:369)
> at java.net.URLClassLoader$1.run(URLClassLoader.java:363)
> at java.security.AccessController.doPrivileged(Native Method)
> at java.net.URLClassLoader.findClass(URLClassLoader.java:362)
> at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
> at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
> at java.lang.Class.forName0(Native Method)
> at java.lang.Class.forName(Class.java:348)
> at 
> org.apache.hadoop.conf.Configuration.getClassByNameOrNull(Configuration.java:2134)
> at 
> org.apache.hadoop.conf.Configuration.getClassByName(Configuration.java:2099)
> at 
> org.apache.hadoop.conf.Configuration.getClass(Configuration.java:2193)
> at 
> org.apache.hadoop.fs.FileSystem.getFileSystemClass(FileSystem.java:2654)
> at 
> org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2667)
> at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:94)
> at 
> org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2703)
> at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2685)
> at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:373)
> at org.apache.hadoop.fs.Path.getFileSystem(Path.java:295)
> at 
> org.apache.spark.sql.execution.streaming.FileStreamSink$.hasMetadata(FileStreamSink.scala:45)
> at 
> org.apache.spark.sql.execution.datasources.DataSource.resolveRelation(DataSource.scala:332)
> at 
> org.apache.spark.sql.DataFrameReader.loadV1Source(DataFrameReader.scala:223)
> at 
> org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:211)
> at 
> org.apache.spark.sql.DataFrameReader.text(DataFrameReader.scala:715)
> at 
> org.apache.spark.sql.DataFrameReader.textFile(DataFrameReader.scala:757)
> at 
> org.apache.spark.sql.DataFrameReader.textFile(DataFrameReader.scala:724)
> at org.apache.spark.examples.JavaWordCount.main(JavaWordCount.java:45)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at 
> org.apache.spark.deploy.JavaMainApplication.start(SparkApplication.scala:52)
> at 
> org.apache.spark.deploy.SparkSubmit.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:849)
> at 
> org.apache.spark.deploy.SparkSubmit.doRunMain$1(SparkSubmit.scala:167)
> at org.apache.spark.deploy.SparkSubmit.submit(SparkSubmit.scala:195)
> at org.apache.spark.deploy.SparkSubmit.doSubmit(SparkSubmit.scala:86)
> at 
> org.apache.spark.deploy.SparkSubmit$$anon$2.doSubmit(SparkSubmit.scala:924)
> at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:933)
> at 

[jira] [Work logged] (HDDS-1299) Support TokenIssuer interface for running jobs with OzoneFileSystem

2019-03-25 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1299?focusedWorklogId=218098=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-218098
 ]

ASF GitHub Bot logged work on HDDS-1299:


Author: ASF GitHub Bot
Created on: 25/Mar/19 15:47
Start Date: 25/Mar/19 15:47
Worklog Time Spent: 10m 
  Work Description: steveloughran commented on issue #627: HDDS-1299. 
Support TokenIssuer interface for running jobs with OzoneFileSystem.
URL: https://github.com/apache/hadoop/pull/627#issuecomment-476256841
 
 
   Given this has broken things, can I ask a process question here, 
specifically regarding Yetus's v-1
   
   > The patch doesn't appear to include any new or modified tests. Please 
justify why no new tests are needed for this patch. Also please list what 
manual steps were performed to verify this patch.
   
   There are lots of ways to test for DTs being issued: the `hdfs fs` command, 
the `hadoop dtutil` tool, the actual M
   
   * Marshalling/unmarshalling of tokens: 
org.apache.hadoop.fs.s3a.auth.delegation.TestS3ADelegationTokenSupport
   * org.apache.hadoop.fs.s3a.auth.delegation.ITestRoleDelegationTokens
   * org.apache.hadoop.fs.s3a.auth.delegation.ITestRoleDelegationInFileystem
   * org.apache.hadoop.fs.azurebfs.extensions.ITestAbfsDelegationTokens
   
   You can even verify that MR cluster launch collects the tokens: 
org.apache.hadoop.fs.s3a.auth.delegation.ITestDelegatedMRJob
   
   Please write tests. Apart from the MR job one, it's straightforward, and I've 
just given you all the classes to start with.
   
   Without tests, when people like myself make an incompatible change which 
breaks Ozone's DT support, we're just going to shrug and say "well, you should 
have written the tests".
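   
   For example, a minimal smoke check that a filesystem hands out tokens at all 
could look like the sketch below; the `o3fs://bucket.volume/` URI is only an 
assumption, the rest uses the stable `FileSystem.addDelegationTokens` API:
   
   ```java
   // Hedged sketch: ask the filesystem for delegation tokens and fail if none are issued.
   import org.apache.hadoop.conf.Configuration;
   import org.apache.hadoop.fs.FileSystem;
   import org.apache.hadoop.fs.Path;
   import org.apache.hadoop.security.Credentials;
   import org.apache.hadoop.security.token.Token;
   
   public class DelegationTokenSmokeCheck {
     public static void main(String[] args) throws Exception {
       Configuration conf = new Configuration();
       FileSystem fs = new Path("o3fs://bucket.volume/").getFileSystem(conf); // assumed URI
       Token<?>[] tokens = fs.addDelegationTokens("yarn", new Credentials());
       if (tokens == null || tokens.length == 0) {
         throw new AssertionError("no delegation token issued by " + fs.getUri());
       }
       System.out.println("issued: " + tokens[0].getKind());
     }
   }
   ```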
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 218098)
Time Spent: 6h 20m  (was: 6h 10m)

> Support TokenIssuer interface for running jobs with OzoneFileSystem
> ---
>
> Key: HDDS-1299
> URL: https://issues.apache.org/jira/browse/HDDS-1299
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: 0.4.0
>
>  Time Spent: 6h 20m
>  Remaining Estimate: 0h
>
> This ticket is opened to add TokenIssuer interface support to OzoneFileSystem 
> so that MR and Spark jobs can run with OzoneFileSystem in secure mode. 
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1234) Iterate the OM DB snapshot and populate the recon container DB.

2019-03-25 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1234?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16800787#comment-16800787
 ] 

Hudson commented on HDDS-1234:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #16276 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/16276/])
HDDS-1234. Iterate the OM DB snapshot and populate the recon container (yqlin: 
rev e5d72f504e2cf932657f96797623f3a5bbd71f4b)
* (edit) 
hadoop-hdds/common/src/main/java/org/apache/hadoop/utils/LevelDBStoreIterator.java
* (add) 
hadoop-ozone/ozone-recon/src/test/java/org/apache/hadoop/ozone/recon/tasks/package-info.java
* (add) 
hadoop-hdds/common/src/main/java/org/apache/hadoop/utils/db/IntegerCodec.java
* (edit) 
hadoop-ozone/ozone-recon/src/main/java/org/apache/hadoop/ozone/recon/spi/OzoneManagerServiceProvider.java
* (edit) 
hadoop-hdds/common/src/main/java/org/apache/hadoop/utils/RocksDBStoreIterator.java
* (add) 
hadoop-ozone/ozone-recon/src/main/java/org/apache/hadoop/ozone/recon/tasks/package-info.java
* (add) 
hadoop-ozone/ozone-recon/src/test/java/org/apache/hadoop/ozone/recon/AbstractOMMetadataManagerTest.java
* (edit) 
hadoop-hdds/common/src/main/java/org/apache/hadoop/utils/MetaStoreIterator.java
* (add) 
hadoop-ozone/ozone-recon/src/test/java/org/apache/hadoop/ozone/recon/TestReconCodecs.java
* (edit) 
hadoop-ozone/ozone-recon/src/main/java/org/apache/hadoop/ozone/recon/recovery/ReconOmMetadataManagerImpl.java
* (edit) 
hadoop-hdds/common/src/test/java/org/apache/hadoop/utils/TestMetadataStore.java
* (edit) 
hadoop-ozone/ozone-recon/src/main/java/org/apache/hadoop/ozone/recon/spi/impl/OzoneManagerServiceProviderImpl.java
* (delete) 
hadoop-ozone/ozone-recon/src/main/java/org/apache/hadoop/ozone/recon/spi/ReconContainerDBProvider.java
* (edit) 
hadoop-ozone/ozone-recon/src/main/java/org/apache/hadoop/ozone/recon/api/types/KeyMetadata.java
* (edit) 
hadoop-ozone/ozone-recon/src/main/java/org/apache/hadoop/ozone/recon/spi/ContainerDBServiceProvider.java
* (edit) 
hadoop-hdds/common/src/main/java/org/apache/hadoop/utils/db/RDBStore.java
* (edit) 
hadoop-ozone/ozone-recon/src/test/java/org/apache/hadoop/ozone/recon/spi/impl/TestOzoneManagerServiceProviderImpl.java
* (add) 
hadoop-ozone/ozone-recon/src/test/java/org/apache/hadoop/ozone/recon/spi/impl/TestReconContainerDBProvider.java
* (edit) 
hadoop-ozone/ozone-recon/src/main/java/org/apache/hadoop/ozone/recon/spi/impl/ContainerDBServiceProviderImpl.java
* (edit) 
hadoop-ozone/ozone-recon/src/main/java/org/apache/hadoop/ozone/recon/ReconConstants.java
* (add) 
hadoop-ozone/ozone-recon/src/main/java/org/apache/hadoop/ozone/recon/spi/impl/ReconContainerDBProvider.java
* (add) 
hadoop-ozone/ozone-recon/src/test/java/org/apache/hadoop/ozone/recon/api/TestContainerKeyService.java
* (edit) 
hadoop-ozone/ozone-recon/src/main/java/org/apache/hadoop/ozone/recon/api/types/ContainerKeyPrefix.java
* (edit) 
hadoop-ozone/ozone-recon/src/main/java/org/apache/hadoop/ozone/recon/api/ContainerKeyService.java
* (edit) 
hadoop-ozone/ozone-recon/src/test/java/org/apache/hadoop/ozone/recon/TestReconUtils.java
* (add) 
hadoop-ozone/ozone-recon/src/test/java/org/apache/hadoop/ozone/recon/tasks/TestContainerKeyMapperTask.java
* (edit) 
hadoop-hdds/common/src/main/java/org/apache/hadoop/utils/db/DBStore.java
* (add) 
hadoop-ozone/ozone-recon/src/main/java/org/apache/hadoop/ozone/recon/spi/impl/ContainerKeyPrefixCodec.java
* (add) 
hadoop-ozone/ozone-recon/src/test/java/org/apache/hadoop/ozone/recon/api/package-info.java
* (edit) 
hadoop-ozone/ozone-recon/src/main/java/org/apache/hadoop/ozone/recon/ReconServer.java
* (add) 
hadoop-ozone/ozone-recon/src/main/java/org/apache/hadoop/ozone/recon/tasks/ContainerKeyMapperTask.java
* (edit) 
hadoop-ozone/ozone-recon/src/main/java/org/apache/hadoop/ozone/recon/ReconControllerModule.java
* (edit) 
hadoop-ozone/ozone-recon/src/test/java/org/apache/hadoop/ozone/recon/spi/impl/TestContainerDBServiceProviderImpl.java


> Iterate the OM DB snapshot and populate the recon container DB. 
> 
>
> Key: HDDS-1234
> URL: https://issues.apache.org/jira/browse/HDDS-1234
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Recon
>Reporter: Aravindan Vijayan
>Assignee: Aravindan Vijayan
>Priority: Major
> Fix For: 0.5.0
>
> Attachments: HDDS-1234-000.patch, HDDS-1234-001.patch, 
> HDDS-1234-002.patch, HDDS-1234-003.patch, HDDS-1234-004.patch, 
> HDDS-1234-005.patch
>
>
> * OM DB snapshot contains the Key->ContainerId + BlockId information. 
> * Iterate the OM snapshot DB and create the reverse map of (ContainerId, Key 
> prefix) -> Key count to be stored in the Recon container DB.
> * Use a codec to store data into Recon container DB.
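
A hedged, self-contained sketch of the reverse-map construction described in 
the bullets above (toy input only; the committed code path is 
ContainerKeyMapperTask in the file list earlier in this message):

{code}
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class ContainerKeyReverseMapSketch {
  public static void main(String[] args) {
    // Toy stand-in for the OM snapshot: key -> container IDs holding its blocks.
    Map<String, List<Long>> omKeyToContainers = new HashMap<>();
    omKeyToContainers.put("/vol/bucket/key1", Arrays.asList(1L, 2L));
    omKeyToContainers.put("/vol/bucket/key2", Arrays.asList(2L));

    // Reverse map: containerId -> (key prefix -> key count), i.e. the
    // (ContainerId, Key prefix) -> count data stored in the Recon container DB.
    Map<Long, Map<String, Long>> containerKeyCounts = new HashMap<>();
    for (Map.Entry<String, List<Long>> e : omKeyToContainers.entrySet()) {
      for (Long containerId : e.getValue()) {
        containerKeyCounts
            .computeIfAbsent(containerId, id -> new HashMap<>())
            .merge(e.getKey(), 1L, Long::sum);
      }
    }
    System.out.println(containerKeyCounts); // prints the container -> key-count map
  }
}
{code}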



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

[jira] [Commented] (HDFS-14316) RBF: Support unavailable subclusters for mount points with multiple destinations

2019-03-25 Thread He Xiaoqiao (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14316?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16800723#comment-16800723
 ] 

He Xiaoqiao commented on HDFS-14316:


{quote}In HDFS-14316-HDFS-13891.010.patch I added a separate unit test 
(TestRouterFaultTolerant) which uses the MockNamenode (He Xiaoqiao you may want 
to take a look).
{quote}
Sorry for the late response. I will make time in the coming days to work on 
updating MiniRouterDFSCluster with reference to MockNamenode. Thanks [~elgoiri] 
for mentioning me here.

> RBF: Support unavailable subclusters for mount points with multiple 
> destinations
> 
>
> Key: HDFS-14316
> URL: https://issues.apache.org/jira/browse/HDFS-14316
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
>Priority: Major
> Attachments: HDFS-14316-HDFS-13891.000.patch, 
> HDFS-14316-HDFS-13891.001.patch, HDFS-14316-HDFS-13891.002.patch, 
> HDFS-14316-HDFS-13891.003.patch, HDFS-14316-HDFS-13891.004.patch, 
> HDFS-14316-HDFS-13891.005.patch, HDFS-14316-HDFS-13891.006.patch, 
> HDFS-14316-HDFS-13891.007.patch, HDFS-14316-HDFS-13891.008.patch, 
> HDFS-14316-HDFS-13891.009.patch, HDFS-14316-HDFS-13891.010.patch, 
> HDFS-14316-HDFS-13891.011.patch, HDFS-14316-HDFS-13891.012.patch, 
> HDFS-14316-HDFS-13891.013.patch
>
>
> Currently mount points with multiple destinations (e.g., HASH_ALL) fail 
> writes when the destination subcluster is down. We need an option to allow 
> writing in other subclusters when one is down.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1300) Optimize non-recursive ozone filesystem apis

2019-03-25 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1300?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16800719#comment-16800719
 ] 

Hadoop QA commented on HDDS-1300:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
26s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
1s{color} | {color:green} No case conflicting files found. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
51s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  5m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m  1s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m  
2s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m  
1s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
16s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  5m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  3m  
8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  3m  
8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  3m  
8s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 27s{color} | {color:orange} hadoop-ozone: The patch generated 3 new + 0 
unchanged - 0 fixed = 3 total (was 0) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 49s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  2m 
34s{color} | {color:red} hadoop-hdds generated 2 new + 0 unchanged - 0 fixed = 
2 total (was 0) {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  2m 
58s{color} | {color:red} hadoop-ozone generated 2 new + 0 unchanged - 0 fixed = 
2 total (was 0) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
57s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  2m 21s{color} 
| {color:red} hadoop-hdds in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 11m 21s{color} 
| {color:red} hadoop-ozone in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
34s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 75m  5s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-hdds |
|  |  org.apache.hadoop.ozone.om.helpers.OzoneFileStatus doesn't override 
org.apache.hadoop.fs.FileStatus.equals(Object)  At OzoneFileStatus.java:At 
OzoneFileStatus.java:[line 1] |
|  |  Class org.apache.hadoop.ozone.om.helpers.OzoneFileStatus defines 
non-transient non-serializable 

[jira] [Updated] (HDDS-1234) Iterate the OM DB snapshot and populate the recon container DB.

2019-03-25 Thread Yiqun Lin (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1234?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yiqun Lin updated HDDS-1234:

  Resolution: Fixed
Target Version/s: 0.5.0
  Status: Resolved  (was: Patch Available)

Committed this to trunk.
Thanks [~avijayan] for the contribution and [~swagle] for the review. Further 
comments can be addressed in follow-up JIRAs.

> Iterate the OM DB snapshot and populate the recon container DB. 
> 
>
> Key: HDDS-1234
> URL: https://issues.apache.org/jira/browse/HDDS-1234
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Recon
>Reporter: Aravindan Vijayan
>Assignee: Aravindan Vijayan
>Priority: Major
> Fix For: 0.5.0
>
> Attachments: HDDS-1234-000.patch, HDDS-1234-001.patch, 
> HDDS-1234-002.patch, HDDS-1234-003.patch, HDDS-1234-004.patch, 
> HDDS-1234-005.patch
>
>
> * OM DB snapshot contains the Key->ContainerId + BlockId information. 
> * Iterate the OM snapshot DB and create the reverse map of (ContainerId, Key 
> prefix) -> Key count to be stored in the Recon container DB.
> * Use a codec to store data into Recon container DB.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1234) Iterate the OM DB snapshot and populate the recon container DB.

2019-03-25 Thread Yiqun Lin (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1234?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16800769#comment-16800769
 ] 

Yiqun Lin commented on HDDS-1234:
-

LGTM, +1.

> Iterate the OM DB snapshot and populate the recon container DB. 
> 
>
> Key: HDDS-1234
> URL: https://issues.apache.org/jira/browse/HDDS-1234
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Recon
>Reporter: Aravindan Vijayan
>Assignee: Aravindan Vijayan
>Priority: Major
> Fix For: 0.5.0
>
> Attachments: HDDS-1234-000.patch, HDDS-1234-001.patch, 
> HDDS-1234-002.patch, HDDS-1234-003.patch, HDDS-1234-004.patch, 
> HDDS-1234-005.patch
>
>
> * OM DB snapshot contains the Key->ContainerId + BlockId information. 
> * Iterate the OM snapshot DB and create the reverse map of (ContainerId, Key 
> prefix) -> Key count to be stored in the Recon container DB.
> * Use a codec to store data into Recon container DB.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14385) RBF: Optimize MiniRouterDFSCluster with optional light weight MiniDFSCluster

2019-03-25 Thread He Xiaoqiao (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14385?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16800718#comment-16800718
 ] 

He Xiaoqiao commented on HDFS-14385:


Thanks [~elgoiri] for the further comments. I will work on this jira in the next few days.

> RBF: Optimize MiniRouterDFSCluster with optional light weight MiniDFSCluster
> 
>
> Key: HDFS-14385
> URL: https://issues.apache.org/jira/browse/HDFS-14385
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: rbf
>Reporter: He Xiaoqiao
>Assignee: He Xiaoqiao
>Priority: Major
>
> MiniRouterDFSCluster mimics a federated HDFS cluster with routers to support 
> RBF tests. It starts a MiniDFSCluster with the complete set of HDFS roles, 
> which has a significant time cost. As discussed in HDFS-14351, it would be 
> better to provide a mock MiniDFSCluster/Namenodes as an option to support 
> some test cases and reduce that cost.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14389) getAclStatus returns incorrect permissions and owner when an iNodeAttributeProvider is configured

2019-03-25 Thread Wei-Chiu Chuang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14389?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16800682#comment-16800682
 ] 

Wei-Chiu Chuang commented on HDFS-14389:


Makes sense to me.
bq. This is only for display purposes.
I'm not sure that this is true. This method is used by 
{{NameNodeRpcServer#getAclStatus}}, which is called by 
{{DistributedFileSystem#getAclStatus}}, which is in turn used by distcp. Distcp 
may therefore update ACLs incorrectly.

> getAclStatus returns incorrect permissions and owner when an 
> iNodeAttributeProvider is configured
> -
>
> Key: HDFS-14389
> URL: https://issues.apache.org/jira/browse/HDFS-14389
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 3.2.0
>Reporter: Stephen O'Donnell
>Assignee: Stephen O'Donnell
>Priority: Major
> Fix For: 3.2.0
>
> Attachments: HDFS-14389.001.patch
>
>
> With an inodeAttributeProvider configured in the namenode (e.g. Sentry), the 
> permissions returned by a `hadoop fs -getfacl` command have an effective 
> comment after them, even if the group permission bits are rwx, e.g.:
> {code}
> hadoop fs -ls /user/hive/warehouse/sample_08
> Found 1 items
> -rwxrwx--x+  3 hive hive  46069 2019-03-22 00:19 
> /user/hive/warehouse/sample_08/sample_08
> NOTE THE GROUP PERMISSIONS - rwx - No ACLs should get masked.
> hadoop fs -getfacl /user/hive/warehouse/sample_08/sample_08
> # file: /user/hive/warehouse/sample_08/sample_08
> # owner: hive
> # group: hive
> user::rwx
> group::---
> user:hive:rwx   #effective:r--
> group:sentryDefaultAdmin:rwx#effective:r--
> user:admin:rwx  #effective:r--
> group:systest:rwx   #effective:r--
> group:hive:rwx  #effective:r--
> mask::rwx
> other::--x
> {code}
> Note the effective comment, indicating group permissions of r-- which the ls 
> output does not show.
> Usually this effective comment would downgrade the effective permissions, and 
> a user with the group systest would not be able to write to the file / 
> folder, but in this case that does not happen - this appears to be a display 
> issue in the client.
> After some debugging, the problem is due to getAclStatus returning the 
> permissions, owner and group of the underlying file in HDFS and not those 
> from the inodeAttributeProvider - ie this call does not correctly use the 
> attribute provider. Comparing the output with getFileStatus:
> {code}
> Permission from FileStatus: rwxrwx--x  # Correct, the provider says the 
> permissions are 771
> Permission from AclStatus: rw-r--r--  # Incorrect, these are the permissions 
> from HDFS if the provided is disabled
> {code}
> Note that in this example, the underlying file permissions have group r--, 
> and that is what is influencing the ACL output, making them effective r--.
> Within the namenode, the permissions are enforced correctly. The reason this 
> is a CLI display issue is that AclCommand.java makes a call to getAclStatus, 
> and from it, it gets the ACL list and the group permissions. Then it 'masks' 
> the ACLs it displays using the returned group permission within the client. 
> This is only for display purposes. FSPermissionChecker inside the Namenode is 
> not impacted by this and does the correct thing.
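
A hedged illustration (not the actual AclCommand code) of the client-side 
masking described above: the displayed effective action is the ACL entry's 
action AND-ed with a mask taken from the group bits that getAclStatus returns, 
so wrong group bits produce a wrong #effective comment:

{code}
import org.apache.hadoop.fs.permission.FsAction;

public class EffectiveAclDisplaySketch {
  public static void main(String[] args) {
    FsAction entry = FsAction.ALL;              // e.g. user:hive:rwx
    FsAction maskFromGroupBits = FsAction.READ; // group bits wrongly reported as r--
    FsAction effective = entry.and(maskFromGroupBits);
    System.out.println("user:hive:" + entry.SYMBOL + "\t#effective:" + effective.SYMBOL);
    // With correct group bits (rwx) the mask would be FsAction.ALL and no
    // #effective downgrade would be displayed.
  }
}
{code}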



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-14355) Implement HDFS cache on SCM by using pure java mapped byte buffer

2019-03-25 Thread Feilong He (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14355?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16800592#comment-16800592
 ] 

Feilong He edited comment on HDFS-14355 at 3/25/19 12:51 PM:
-

Thanks [~Sammi] for your valuable comments.
{quote}User would like to know the relationship between 
dfs.datanode.cache.pmem.capacity and dfs.datanode.max.locked.memory by reading 
the descriptions in hdfs-default.xml
{quote}
I will proofread the description to make them clear to user.
{quote}PmemUsedBytesCount, is there any forsee issue to reuse UsedBytesCount 
instead?  Also byte roundup is not addressed in PmemUsedBytesCount
{quote}
As you know, UsedBytesCount is used to count the DRAM bytes. It can ensure that 
after reserving bytes for DRAM cache the used bytes will not exceed maxBytes 
(dfs.datanode.max.locked.memory). We found that besides HDFS DRAM cache, Lazy 
Persist Writes also uses this UsedBytesCount to reserve/release bytes. Since 
supporting Lazy Persist Writes on pmem is not the target of this jira, we 
introduced PmemUsedBytesCount for pmem to separate pmem's cache bytes 
management with DRAM's. Thus Lazy Persist Writes will not be affected. User can 
still enable Lazy Persist Writes by configuring dfs.datanode.max.locked.memory. 
Pmem may not have page size like mechanism as DRAM (we will confirm it). So we 
didn't round up the bytes to a page size like value. Because of this 
difference, DRAM cache & pmem cache have different reserve/release methods, 
which also makes adding PmemUsedBytesCount necessary.

 
{quote}FsDatasetCached is not a good place to put specific memory loader 
implemetation functions like reservePmem, releasePmem. FsDatasetCached should 
be generic.
{quote}
Good suggestion. We are aware of this issue as you pointed out. In the new 
path, we will move PmemUsedBytesCount, reservePmem, releasePmem to a new class 
PmemVolumeManager to keep FsDatasetCache generic.
{quote}As [~daryn] suggested, more elegant error handling.
{quote}
We are checking our code to make exceptions be handled elegantly.

 

Thanks again for your huge efforts on reviewing this patch. Your suggestions 
will be seriously considered by us.


was (Author: philohe):
Thanks [~Sammi] for your valuable comments.
{quote}User would like to know the relationship between 
dfs.datanode.cache.pmem.capacity and dfs.datanode.max.locked.memory by reading 
the descriptions in hdfs-default.xml
{quote}
I will proofread the description to make them clear to user.
{quote}PmemUsedBytesCount, is there any forsee issue to reuse UsedBytesCount 
instead?  Also byte roundup is not addressed in PmemUsedBytesCount
{quote}
As you know, UsedBytesCount is used to count the DRAM bytes.

It can ensure that after reserving bytes for DRAM cache the used bytes will not 
exceed maxBytes ({{dfs.datanode.max.locked.memory}}). We found that besides 
HDFS DRAM cache, Lazy Persist Writes also uses this {{UsedBytesCount }}to 
reserve/release bytes. Since supporting Lazy Persist Writes on pmem is not the 
target of this jira, we introduce PmemUsedBytesCount for pmem to separate 
pmem's cache management with DRAM's. Thus Lazy Persist Writes will not be 
affected. User can still enable Lazy Persist Writes by configuring 
dfs.datanode.max.locked.memory. Pmem may not have page size like mechanism as 
DRAM (we will confirm it). So we didn't round up the bytes to a page size like 
value. Because of this difference, {{UsedBytesCount}} and PmemUsedBytesCount 
have different reserve/release method, which also makes adding 
PmemUsedBytesCount necessary.
{quote}FsDatasetCached is not a good place to put specific memory loader 
implemetation functions like reservePmem, releasePmem. FsDatasetCached should 
be generic.
{quote}
Good suggestion. We are aware of this issue as you pointed out. In the new 
path, we will move PmemUsedBytesCount, reservePmem, releasePmem to a new class 
PmemVolumeManager to keep FsDatasetCache generic.
{quote}As [~daryn] suggested, more elegant error handling.
{quote}
We are checking our code to make exceptions be handled elegantly.

 

Thanks again for your huge efforts on reviewing this patch. Your suggestions 
will be seriously considered by us.

> Implement HDFS cache on SCM by using pure java mapped byte buffer
> -
>
> Key: HDFS-14355
> URL: https://issues.apache.org/jira/browse/HDFS-14355
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: caching, datanode
>Reporter: Feilong He
>Assignee: Feilong He
>Priority: Major
> Attachments: HDFS-14355.000.patch, HDFS-14355.001.patch, 
> HDFS-14355.002.patch, HDFS-14355.003.patch
>
>
> This task is to implement the caching to persistent memory using pure 
> {{java.nio.MappedByteBuffer}}, which could be useful in case native support 

[jira] [Comment Edited] (HDFS-14355) Implement HDFS cache on SCM by using pure java mapped byte buffer

2019-03-25 Thread Feilong He (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14355?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16800592#comment-16800592
 ] 

Feilong He edited comment on HDFS-14355 at 3/25/19 12:51 PM:
-

Thanks [~Sammi] for your valuable comments.
{quote}User would like to know the relationship between 
dfs.datanode.cache.pmem.capacity and dfs.datanode.max.locked.memory by reading 
the descriptions in hdfs-default.xml
{quote}
I will proofread the description to make them clear to user.
{quote}PmemUsedBytesCount, is there any forsee issue to reuse UsedBytesCount 
instead?  Also byte roundup is not addressed in PmemUsedBytesCount
{quote}
As you know, UsedBytesCount is used to count the DRAM bytes. It can ensure that 
after reserving bytes for DRAM cache the used bytes will not exceed maxBytes 
(dfs.datanode.max.locked.memory). We found that besides HDFS DRAM cache, Lazy 
Persist Writes also uses this UsedBytesCount to reserve/release bytes. Since 
supporting Lazy Persist Writes on pmem is not the target of this jira, we 
introduced PmemUsedBytesCount for pmem to separate pmem's cache bytes 
management with DRAM's. Thus Lazy Persist Writes will not be affected. User can 
still enable Lazy Persist Writes by configuring dfs.datanode.max.locked.memory. 
Pmem may not have page size like mechanism as DRAM (we will confirm it). So we 
didn't round up the bytes to a page size like value. Because of this 
difference, DRAM cache & pmem cache have different reserve/release methods, 
which also makes adding PmemUsedBytesCount necessary.
{quote}FsDatasetCached is not a good place to put specific memory loader 
implemetation functions like reservePmem, releasePmem. FsDatasetCached should 
be generic.
{quote}
Good suggestion. We are aware of this issue as you pointed out. In the new 
path, we will move PmemUsedBytesCount, reservePmem, releasePmem to a new class 
PmemVolumeManager to keep FsDatasetCache generic.
{quote}As [~daryn] suggested, more elegant error handling.
{quote}
We are checking our code to make exceptions be handled elegantly.

 

Thanks again for your huge efforts on reviewing this patch. Your suggestions 
will be seriously considered by us.


was (Author: philohe):
Thanks [~Sammi] for your valuable comments.
{quote}User would like to know the relationship between 
dfs.datanode.cache.pmem.capacity and dfs.datanode.max.locked.memory by reading 
the descriptions in hdfs-default.xml
{quote}
I will proofread the description to make them clear to user.
{quote}PmemUsedBytesCount, is there any forsee issue to reuse UsedBytesCount 
instead?  Also byte roundup is not addressed in PmemUsedBytesCount
{quote}
As you know, UsedBytesCount is used to count the DRAM bytes. It can ensure that 
after reserving bytes for DRAM cache the used bytes will not exceed maxBytes 
(dfs.datanode.max.locked.memory). We found that besides HDFS DRAM cache, Lazy 
Persist Writes also uses this UsedBytesCount to reserve/release bytes. Since 
supporting Lazy Persist Writes on pmem is not the target of this jira, we 
introduced PmemUsedBytesCount for pmem to separate pmem's cache bytes 
management with DRAM's. Thus Lazy Persist Writes will not be affected. User can 
still enable Lazy Persist Writes by configuring dfs.datanode.max.locked.memory. 
Pmem may not have page size like mechanism as DRAM (we will confirm it). So we 
didn't round up the bytes to a page size like value. Because of this 
difference, DRAM cache & pmem cache have different reserve/release methods, 
which also makes adding PmemUsedBytesCount necessary.

 
{quote}FsDatasetCached is not a good place to put specific memory loader 
implemetation functions like reservePmem, releasePmem. FsDatasetCached should 
be generic.
{quote}
Good suggestion. We are aware of this issue as you pointed out. In the new 
path, we will move PmemUsedBytesCount, reservePmem, releasePmem to a new class 
PmemVolumeManager to keep FsDatasetCache generic.
{quote}As [~daryn] suggested, more elegant error handling.
{quote}
We are checking our code to make exceptions be handled elegantly.

 

Thanks again for your huge efforts on reviewing this patch. Your suggestions 
will be seriously considered by us.

> Implement HDFS cache on SCM by using pure java mapped byte buffer
> -
>
> Key: HDFS-14355
> URL: https://issues.apache.org/jira/browse/HDFS-14355
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: caching, datanode
>Reporter: Feilong He
>Assignee: Feilong He
>Priority: Major
> Attachments: HDFS-14355.000.patch, HDFS-14355.001.patch, 
> HDFS-14355.002.patch, HDFS-14355.003.patch
>
>
> This task is to implement the caching to persistent memory using pure 
> {{java.nio.MappedByteBuffer}}, which could be useful in case native support 
> isn't available 

[jira] [Comment Edited] (HDFS-14355) Implement HDFS cache on SCM by using pure java mapped byte buffer

2019-03-25 Thread Feilong He (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14355?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16800592#comment-16800592
 ] 

Feilong He edited comment on HDFS-14355 at 3/25/19 12:48 PM:
-

Thanks [~Sammi] for your valuable comments.
{quote}User would like to know the relationship between 
dfs.datanode.cache.pmem.capacity and dfs.datanode.max.locked.memory by reading 
the descriptions in hdfs-default.xml
{quote}
I will proofread the description to make them clear to user.
{quote}PmemUsedBytesCount, is there any forsee issue to reuse UsedBytesCount 
instead?  Also byte roundup is not addressed in PmemUsedBytesCount
{quote}
As you know, UsedBytesCount is used to count the DRAM bytes.

It can ensure that after reserving bytes for DRAM cache the used bytes will not 
exceed maxBytes ({{dfs.datanode.max.locked.memory}}). We found that besides 
HDFS DRAM cache, Lazy Persist Writes also uses this {{UsedBytesCount }}to 
reserve/release bytes. Since supporting Lazy Persist Writes on pmem is not the 
target of this jira, we introduce PmemUsedBytesCount for pmem to separate 
pmem's cache management with DRAM's. Thus Lazy Persist Writes will not be 
affected. User can still enable Lazy Persist Writes by configuring 
dfs.datanode.max.locked.memory. Pmem may not have page size like mechanism as 
DRAM (we will confirm it). So we didn't round up the bytes to a page size like 
value. Because of this difference, {{UsedBytesCount}} and PmemUsedBytesCount 
have different reserve/release method, which also makes adding 
PmemUsedBytesCount necessary.
{quote}FsDatasetCached is not a good place to put specific memory loader 
implemetation functions like reservePmem, releasePmem. FsDatasetCached should 
be generic.
{quote}
Good suggestion. We are aware of this issue as you pointed out. In the new 
path, we will move PmemUsedBytesCount, reservePmem, releasePmem to a new class 
PmemVolumeManager to keep FsDatasetCache generic.
{quote}As [~daryn] suggested, more elegant error handling.
{quote}
We are checking our code to make exceptions be handled elegantly.

 

Thanks again for your huge efforts on reviewing this patch. Your suggestions 
will be seriously considered by us.


was (Author: philohe):
Thanks [~Sammi] for your valuable comments.
{quote}User would like to know the relationship between 
dfs.datanode.cache.pmem.capacity and dfs.datanode.max.locked.memory by reading 
the descriptions in hdfs-default.xml
{quote}
I will proofread the description to make them clear to user.
{quote}PmemUsedBytesCount, is there any forsee issue to reuse UsedBytesCount 
instead?  Also byte roundup is not addressed in PmemUsedBytesCount
{quote}
As you know, {{UsedBytesCount is used to count the DRAM bytes. }}

It can ensure that after reserving bytes for DRAM cache the used bytes will not 
exceed maxBytes ({{dfs.datanode.max.locked.memory}}). We found that besides 
HDFS DRAM cache, Lazy Persist Writes also uses this {{UsedBytesCount}}to 
reserve/release bytes. Since supporting Lazy Persist Writes on pmem is not the 
target of this jira, we introduce PmemUsedBytesCount for pmem to separate 
pmem's cache management with DRAM's. Thus Lazy Persist Writes will not be 
affected. User can still enable Lazy Persist Writes by configuring 
dfs.datanode.max.locked.memory. Pmem may not have page size like mechanism as 
DRAM (we will confirm it). So we didn't round up the bytes to a page size like 
value. Because of this difference, {{UsedBytesCount}} and PmemUsedBytesCount 
have different reserve/release method which also makes adding 
PmemUsedBytesCount necessary.
{quote}FsDatasetCached is not a good place to put specific memory loader 
implemetation functions like reservePmem, releasePmem. FsDatasetCached should 
be generic.
{quote}
Good suggestion. We are aware of this issue as you pointed out. In the new 
path, we will move PmemUsedBytesCount, reservePmem, releasePmem to a new class 
PmemVolumeManager to keep FsDatasetCache generic.
{quote}As [~daryn] suggested, more elegant error handling.
{quote}
We are checking our code to make exceptions be handled elegantly.

 

Thanks again for your huge efforts on reviewing this patch. Your suggestions 
will be seriously considered by us.

> Implement HDFS cache on SCM by using pure java mapped byte buffer
> -
>
> Key: HDFS-14355
> URL: https://issues.apache.org/jira/browse/HDFS-14355
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: caching, datanode
>Reporter: Feilong He
>Assignee: Feilong He
>Priority: Major
> Attachments: HDFS-14355.000.patch, HDFS-14355.001.patch, 
> HDFS-14355.002.patch, HDFS-14355.003.patch
>
>
> This task is to implement the caching to persistent memory using pure 
> {{java.nio.MappedByteBuffer}}, which could be useful in 

[jira] [Comment Edited] (HDFS-14355) Implement HDFS cache on SCM by using pure java mapped byte buffer

2019-03-25 Thread Feilong He (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14355?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16800592#comment-16800592
 ] 

Feilong He edited comment on HDFS-14355 at 3/25/19 12:45 PM:
-

Thanks [~Sammi] for your valuable comments.
{quote}User would like to know the relationship between 
dfs.datanode.cache.pmem.capacity and dfs.datanode.max.locked.memory by reading 
the descriptions in hdfs-default.xml
{quote}
I will proofread the description to make them clear to user.
{quote}PmemUsedBytesCount, is there any forsee issue to reuse UsedBytesCount 
instead?  Also byte roundup is not addressed in PmemUsedBytesCount
{quote}
As you know, {{UsedBytesCount is used to count the DRAM bytes. }}

It can ensure that after reserving bytes for DRAM cache the used bytes will not 
exceed maxBytes ({{dfs.datanode.max.locked.memory}}). We found that besides 
HDFS DRAM cache, Lazy Persist Writes also uses this {{UsedBytesCount}}to 
reserve/release bytes. Since supporting Lazy Persist Writes on pmem is not the 
target of this jira, we introduce PmemUsedBytesCount for pmem to separate 
pmem's cache management with DRAM's. Thus Lazy Persist Writes will not be 
affected. User can still enable Lazy Persist Writes by configuring 
dfs.datanode.max.locked.memory. Pmem may not have page size like mechanism as 
DRAM (we will confirm it). So we didn't round up the bytes to a page size like 
value. Because of this difference, {{UsedBytesCount}} and PmemUsedBytesCount 
have different reserve/release method which also makes adding 
PmemUsedBytesCount necessary.
{quote}FsDatasetCached is not a good place to put specific memory loader 
implemetation functions like reservePmem, releasePmem. FsDatasetCached should 
be generic.
{quote}
Good suggestion. We are aware of this issue as you pointed out. In the new 
path, we will move PmemUsedBytesCount, reservePmem, releasePmem to a new class 
PmemVolumeManager to keep FsDatasetCache generic.
{quote}As [~daryn] suggested, more elegant error handling.
{quote}
We are checking our code to make exceptions be handled elegantly.

 

Thanks again for your huge efforts on reviewing this patch. Your suggestions 
will be seriously considered by us.


was (Author: philohe):
Thanks [~Sammi] for your valuable comments.
{quote}User would like to know the relationship between 
dfs.datanode.cache.pmem.capacity and dfs.datanode.max.locked.memory by reading 
the descriptions in hdfs-default.xml
{quote}
I will proofread the description to make them clear to user.
{quote}PmemUsedBytesCount, is there any forsee issue to reuse UsedBytesCount 
instead?  Also byte roundup is not addressed in PmemUsedBytesCount
{quote}
As you know, {{UsedBytesCount is used to count the DRAM bytes. It can ensure 
that after reserving bytes for DRAM cache the used bytes will not exceed 
maxBytes (dfs.datanode.max.locked.memory) . We found that besides DRAM cache, 
Lazy Persist Writes also uses this UsedBytesCount}} to reserve/release bytes. 
Since supporting Lazy Persist Writes on pmem is not the target of this jira, we 
introduce PmemUsedBytesCount for pmem. Thus Lazy Persist Writes will not be 
affected. User can still enable Lazy Persist Writes by configuring 
dfs.datanode.max.locked.memory. Pmem may not have page size like mechanism as 
DRAM (we will confirm it). So we didn't round up the bytes to a page size like 
value. Because of this difference, {{UsedBytesCount}} and PmemUsedBytesCount 
has different reserve/release method. So we just introduced PmemUsedBytesCount.
{quote}FsDatasetCached is not a good place to put specific memory loader 
implemetation functions like reservePmem, releasePmem. FsDatasetCached should 
be generic.
{quote}
Good suggestion. We are aware of this issue as you pointed out. In the new 
path, we will move PmemUsedBytesCount, reservePmem, releasePmem to a new class 
PmemVolumeManager to keep FsDatasetCache generic.
{quote}As [~daryn] suggested, more elegant error handling.
{quote}
We are checking our code to make exceptions be handled elegantly.

 

Thanks again for your huge efforts on reviewing this patch. Your suggestions 
will be seriously considered by us.

> Implement HDFS cache on SCM by using pure java mapped byte buffer
> -
>
> Key: HDFS-14355
> URL: https://issues.apache.org/jira/browse/HDFS-14355
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: caching, datanode
>Reporter: Feilong He
>Assignee: Feilong He
>Priority: Major
> Attachments: HDFS-14355.000.patch, HDFS-14355.001.patch, 
> HDFS-14355.002.patch, HDFS-14355.003.patch
>
>
> This task is to implement the caching to persistent memory using pure 
> {{java.nio.MappedByteBuffer}}, which could be useful in case native support 
> isn't available or convenient in some 

[jira] [Updated] (HDDS-1300) Optimize non-recursive ozone filesystem apis

2019-03-25 Thread Lokesh Jain (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1300?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lokesh Jain updated HDDS-1300:
--
Status: Patch Available  (was: Open)

> Optimize non-recursive ozone filesystem apis
> 
>
> Key: HDDS-1300
> URL: https://issues.apache.org/jira/browse/HDDS-1300
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Filesystem, Ozone Manager
>Reporter: Lokesh Jain
>Assignee: Lokesh Jain
>Priority: Major
> Attachments: HDDS-1300.001.patch, HDDS-1300.002.patch, 
> HDDS-1300.003.patch
>
>
> This Jira aims to optimize non-recursive APIs in the Ozone file system. It 
> would add support for such APIs in Ozone Manager in order to reduce the 
> number of RPC calls to Ozone Manager.
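
A purely illustrative sketch of the RPC-count difference this Jira targets; the 
OmCalls interface and its method names below are hypothetical, not the actual 
Ozone Manager API:

{code}
import java.util.ArrayList;
import java.util.List;

public final class RpcCountIllustration {

  /** Hypothetical RPC surface, only for counting calls. */
  interface OmCalls {
    boolean exists(String path);
    void createFile(String path);
    void createFileCheckingParent(String path); // parent checks done server-side
  }

  /** Client-side parent validation: one RPC per ancestor plus the create. */
  static void createNonRecursiveClientSide(OmCalls om, String path) {
    for (String parent : parentsOf(path)) {
      if (!om.exists(parent)) {
        throw new IllegalStateException("missing parent: " + parent);
      }
    }
    om.createFile(path);
  }

  /** Server-side validation: a single RPC. */
  static void createNonRecursiveServerSide(OmCalls om, String path) {
    om.createFileCheckingParent(path);
  }

  private static List<String> parentsOf(String path) {
    List<String> parents = new ArrayList<>();
    int idx = path.indexOf('/', 1);
    while (idx > 0) {
      parents.add(path.substring(0, idx));
      idx = path.indexOf('/', idx + 1);
    }
    return parents;
  }
}
{code}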



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1185) Optimize GetFileStatus in OzoneFileSystem by reducing the number of rpc call to OM.

2019-03-25 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1185?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16800602#comment-16800602
 ] 

Hudson commented on HDDS-1185:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #16275 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/16275/])
HDDS-1185. Optimize GetFileStatus in OzoneFileSystem by reducing the (msingh: 
rev 67dd45fc25c1efd53e7e9010f251bdf60a332a06)
* (edit) 
hadoop-ozone/ozonefs/src/main/java/org/apache/hadoop/fs/ozone/OzoneClientAdapter.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OMMetrics.java
* (edit) 
hadoop-ozone/ozonefs/src/main/java/org/apache/hadoop/fs/ozone/OzoneFileSystem.java
* (edit) hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/OmUtils.java
* (edit) 
hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/client/rest/RestClient.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OzoneManager.java
* (edit) 
hadoop-ozone/ozonefs/src/test/java/org/apache/hadoop/fs/ozone/TestOzoneFileInterfaces.java
* (edit) 
hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/client/rpc/RpcClient.java
* (add) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/fs/package-info.java
* (edit) 
hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/client/protocol/ClientProtocol.java
* (edit) hadoop-ozone/common/src/main/proto/OzoneManagerProtocol.proto
* (edit) 
hadoop-ozone/ozonefs/src/main/java/org/apache/hadoop/fs/ozone/OzoneClientAdapterImpl.java
* (edit) 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/protocol/OzoneManagerProtocol.java
* (add) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/fs/OzoneManagerFS.java
* (edit) 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/audit/OMAction.java
* (edit) 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/protocolPB/OzoneManagerProtocolClientSideTranslatorPB.java
* (add) 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/helpers/OzoneFileStatus.java
* (edit) 
hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/client/OzoneBucket.java
* (edit) 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/exceptions/OMException.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/KeyManager.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/protocolPB/OzoneManagerRequestHandler.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/KeyManagerImpl.java


> Optimize GetFileStatus in OzoneFileSystem by reducing the number of rpc call 
> to OM.
> ---
>
> Key: HDDS-1185
> URL: https://issues.apache.org/jira/browse/HDDS-1185
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Filesystem
>Affects Versions: 0.4.0
>Reporter: Mukul Kumar Singh
>Assignee: Mukul Kumar Singh
>Priority: Critical
> Fix For: 0.5.0
>
> Attachments: HDDS-1185.001.patch, HDDS-1185.002.patch, 
> HDDS-1185.003.patch, HDDS-1185.004.patch, HDDS-1185.005.patch, 
> HDDS-1185.006.patch, HDDS-1185.007.patch, HDDS-1185.008.patch
>
>
> GetFileStatus sends multiple rpc calls to Ozone Manager to fetch the file 
> status for a given file. This can be optimized by performing all the 
> processing on the OzoneManager for this.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14355) Implement HDFS cache on SCM by using pure java mapped byte buffer

2019-03-25 Thread Feilong He (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14355?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16800592#comment-16800592
 ] 

Feilong He commented on HDFS-14355:
---

Thanks [~Sammi] for your valuable comments.
{quote}User would like to know the relationship between 
dfs.datanode.cache.pmem.capacity and dfs.datanode.max.locked.memory by reading 
the descriptions in hdfs-default.xml
{quote}
I will proofread the descriptions to make them clear to users.
{quote}PmemUsedBytesCount, is there any forsee issue to reuse UsedBytesCount 
instead?  Also byte roundup is not addressed in PmemUsedBytesCount
{quote}
As you know, {{UsedBytesCount}} is used to count the DRAM bytes. It ensures 
that after reserving bytes for the DRAM cache, the used bytes will not exceed 
maxBytes (dfs.datanode.max.locked.memory). We found that besides the DRAM 
cache, Lazy Persist Writes also uses this {{UsedBytesCount}} to reserve/release 
bytes. Since supporting Lazy Persist Writes on pmem is not the target of this 
jira, we introduce {{PmemUsedBytesCount}} for pmem, so Lazy Persist Writes will 
not be affected. Users can still enable Lazy Persist Writes by configuring 
dfs.datanode.max.locked.memory. Pmem may not have a page-size-like mechanism as 
DRAM does (we will confirm this), so we didn't round the bytes up to a 
page-size-aligned value. Because of this difference, {{UsedBytesCount}} and 
{{PmemUsedBytesCount}} have different reserve/release methods, which is why we 
introduced {{PmemUsedBytesCount}}.
{quote}FsDatasetCached is not a good place to put specific memory loader 
implemetation functions like reservePmem, releasePmem. FsDatasetCached should 
be generic.
{quote}
Good suggestion. We are aware of this issue as you pointed out. In the new 
patch, we will move {{PmemUsedBytesCount}}, reservePmem, and releasePmem to a 
new class PmemVolumeManager to keep FsDatasetCache generic.
{quote}As [~daryn] suggested, more elegant error handling.
{quote}
We are checking our code to make exceptions be handled elegantly.

 

Thanks again for your huge efforts on reviewing this patch. Your suggestions 
will be seriously considered by us.
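
A hedged sketch (illustrative names, not the patch's API) of the 
capacity-capped reserve/release counter being discussed, where a reservation 
only succeeds while the configured capacity is respected:

{code}
import java.util.concurrent.atomic.AtomicLong;

class CacheBytesCounterSketch {
  private final long maxBytes;                  // e.g. a configured pmem capacity
  private final AtomicLong used = new AtomicLong();

  CacheBytesCounterSketch(long maxBytes) {
    this.maxBytes = maxBytes;
  }

  /** Returns the new used total, or -1 if the reservation would exceed the cap. */
  long reserve(long bytes) {
    while (true) {
      long current = used.get();
      long next = current + bytes;
      if (next > maxBytes) {
        return -1;
      }
      if (used.compareAndSet(current, next)) {
        return next;
      }
    }
  }

  /** Releases previously reserved bytes and returns the new used total. */
  long release(long bytes) {
    return used.addAndGet(-bytes);
  }
}
{code}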

> Implement HDFS cache on SCM by using pure java mapped byte buffer
> -
>
> Key: HDFS-14355
> URL: https://issues.apache.org/jira/browse/HDFS-14355
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: caching, datanode
>Reporter: Feilong He
>Assignee: Feilong He
>Priority: Major
> Attachments: HDFS-14355.000.patch, HDFS-14355.001.patch, 
> HDFS-14355.002.patch, HDFS-14355.003.patch
>
>
> This task is to implement the caching to persistent memory using pure 
> {{java.nio.MappedByteBuffer}}, which could be useful in case native support 
> isn't available or convenient in some environments or platforms.
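
A hedged sketch of the pure-Java mapping approach described above; the pmem 
mount path and block size below are assumptions:

{code}
import java.io.RandomAccessFile;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;

public class PmemMappedBufferSketch {
  public static void main(String[] args) throws Exception {
    // Assumed path on a DAX/pmem-backed filesystem; 128 MB stands in for one cached replica.
    try (RandomAccessFile raf = new RandomAccessFile("/mnt/pmem0/block_1001.cache", "rw");
         FileChannel channel = raf.getChannel()) {
      long length = 128L * 1024 * 1024;
      MappedByteBuffer buffer = channel.map(FileChannel.MapMode.READ_WRITE, 0, length);
      buffer.put(0, (byte) 1); // write through the mapping
      buffer.force();          // flush the mapped region to the backing store
    }
  }
}
{code}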



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1185) Optimize GetFileStatus in OzoneFileSystem by reducing the number of rpc call to OM.

2019-03-25 Thread Mukul Kumar Singh (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1185?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mukul Kumar Singh updated HDDS-1185:

Resolution: Fixed
Status: Resolved  (was: Patch Available)

Thanks for the review [~ljain] and [~jnp]. I have committed this to trunk

> Optimize GetFileStatus in OzoneFileSystem by reducing the number of rpc call 
> to OM.
> ---
>
> Key: HDDS-1185
> URL: https://issues.apache.org/jira/browse/HDDS-1185
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Filesystem
>Affects Versions: 0.4.0
>Reporter: Mukul Kumar Singh
>Assignee: Mukul Kumar Singh
>Priority: Critical
> Fix For: 0.5.0
>
> Attachments: HDDS-1185.001.patch, HDDS-1185.002.patch, 
> HDDS-1185.003.patch, HDDS-1185.004.patch, HDDS-1185.005.patch, 
> HDDS-1185.006.patch, HDDS-1185.007.patch, HDDS-1185.008.patch
>
>
> GetFileStatus sends multiple rpc calls to Ozone Manager to fetch the file 
> status for a given file. This can be optimized by performing all the 
> processing on the OzoneManager for this.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-14355) Implement HDFS cache on SCM by using pure java mapped byte buffer

2019-03-25 Thread Feilong He (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14355?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16800592#comment-16800592
 ] 

Feilong He edited comment on HDFS-14355 at 3/25/19 11:37 AM:
-

Thanks [~Sammi] for your valuable comments.
{quote}User would like to know the relationship between 
dfs.datanode.cache.pmem.capacity and dfs.datanode.max.locked.memory by reading 
the descriptions in hdfs-default.xml
{quote}
I will proofread the description to make them clear to user.
{quote}PmemUsedBytesCount, is there any forsee issue to reuse UsedBytesCount 
instead?  Also byte roundup is not addressed in PmemUsedBytesCount
{quote}
As you know, {{UsedBytesCount is used to count the DRAM bytes. It can ensure 
that after reserving bytes for DRAM cache the used bytes will not exceed 
maxBytes (dfs.datanode.max.locked.memory) . We found that besides DRAM cache, 
Lazy Persist Writes also uses this UsedBytesCount}} to reserve/release bytes. 
Since supporting Lazy Persist Writes on pmem is not the target of this jira, we 
introduce PmemUsedBytesCount for pmem. Thus Lazy Persist Writes will not be 
affected. User can still enable Lazy Persist Writes by configuring 
dfs.datanode.max.locked.memory. Pmem may not have page size like mechanism as 
DRAM (we will confirm it). So we didn't round up the bytes to a page size like 
value. Because of this difference, {{UsedBytesCount}} and PmemUsedBytesCount 
has different reserve/release method. So we just introduced PmemUsedBytesCount.
{quote}FsDatasetCached is not a good place to put specific memory loader 
implemetation functions like reservePmem, releasePmem. FsDatasetCached should 
be generic.
{quote}
Good suggestion. We are aware of this issue as you pointed out. In the new 
path, we will move PmemUsedBytesCount, reservePmem, releasePmem to a new class 
PmemVolumeManager to keep FsDatasetCache generic.
{quote}As [~daryn] suggested, more elegant error handling.
{quote}
We are checking our code to make exceptions be handled elegantly.

 

Thanks again for your huge efforts on reviewing this patch. Your suggestions 
will be seriously considered by us.


was (Author: philohe):
Thanks [~Sammi] for your valuable comments.
{quote}User would like to know the relationship between 
dfs.datanode.cache.pmem.capacity and dfs.datanode.max.locked.memory by reading 
the descriptions in hdfs-default.xml
{quote}
I will proofread the description to make them clear to user.
{quote}PmemUsedBytesCount, is there any forsee issue to reuse UsedBytesCount 
instead?  Also byte roundup is not addressed in PmemUsedBytesCount
{quote}
As you know, {{UsedBytesCount }}is used to count the DRAM bytes. It can ensure 
that after reserving bytes for DRAM cache the used bytes will not exceed 
maxBytes (dfs.datanode.max.locked.memory) . We found that besides DRAM cache, 
Lazy Persist Writes also uses this {{UsedBytesCount}} to reserve/release bytes. 
Since supporting Lazy Persist Writes on pmem is not the target of this jira, we 
introduce Pmem{{UsedBytesCount}} for pmem. Thus Lazy Persist Writes will not be 
affected. User can still enable Lazy Persist Writes by configuring 
dfs.datanode.max.locked.memory. Pmem may not have page size like mechanism as 
DRAM (we will confirm it). So we didn't round up the bytes to a page size like 
value. Because of this difference, {{UsedBytesCount}} and 
Pmem{{UsedBytesCount}} has different reserve/release method. So we just 
introduced Pmem{{UsedBytesCount}}.
{quote}FsDatasetCached is not a good place to put specific memory loader 
implemetation functions like reservePmem, releasePmem. FsDatasetCached should 
be generic.
{quote}
Good suggestion. We are aware of this issue as you pointed out. In the new 
path, we will move Pmem{{UsedBytesCount}}, reservePmem, releasePmem to a new 
class PmemVolumeManager to keep FsDatasetCache generic.
{quote}As [~daryn] suggested, more elegant error handling.
{quote}
We are checking our code to make exceptions be handled elegantly.

 

Thanks again for your huge efforts on reviewing this patch. Your suggestions 
will be seriously considered by us.

> Implement HDFS cache on SCM by using pure java mapped byte buffer
> -
>
> Key: HDFS-14355
> URL: https://issues.apache.org/jira/browse/HDFS-14355
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: caching, datanode
>Reporter: Feilong He
>Assignee: Feilong He
>Priority: Major
> Attachments: HDFS-14355.000.patch, HDFS-14355.001.patch, 
> HDFS-14355.002.patch, HDFS-14355.003.patch
>
>
> This task is to implement the caching to persistent memory using pure 
> {{java.nio.MappedByteBuffer}}, which could be useful in case native support 
> isn't available or convenient in some environments or platforms.



--
This message was sent by 

[jira] [Comment Edited] (HDFS-14355) Implement HDFS cache on SCM by using pure java mapped byte buffer

2019-03-25 Thread Feilong He (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14355?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16800514#comment-16800514
 ] 

Feilong He edited comment on HDFS-14355 at 3/25/19 11:25 AM:
-

[~daryn], thanks so much for your very valuable comments.
{quote}I quickly skimmed the patch. At a high level, a plugin design should not 
leak details. This patch breaks the abstraction and litters the code with 
references to "pmem" and explicit conditionals. The details of pmem should be 
pushed down and hidden in the pmem specific classes.
{quote}
We tried to separate our pmem cache implementation from the current HDFS code. 
Some classes were introduced for pmem, such as PmemMappableBlockLoader, 
PmemMappedBlock and PmemVolumeManager. In FsDatasetCache, we did keep some 
references to pmem, for example pmemUsedBytesCount, which may be one of the 
issues you implicitly pointed out. In our new patch, pmemUsedBytesCount and the 
pmem reserve/release methods will be moved from FsDatasetCache into a new 
class, PmemVolumeManager. We are trying to hide such unnecessarily exposed pmem 
details as you suggested; a rough sketch of the intended abstraction follows.
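For illustration, the direction we have in mind looks roughly like this; the 
interface and method signatures below are placeholders for the sketch, not the 
final patch:
{code:java}
import java.io.IOException;
import java.nio.MappedByteBuffer;

// Sketch only: FsDatasetCache would program against a generic loader
// abstraction, so all pmem details stay inside the pmem implementation.
interface MappableBlockLoader {
  /** Map a block file of the given length into the cache medium. */
  MappedByteBuffer load(String blockFilePath, long length) throws IOException;

  /** Reserve space before caching; returns the new used-byte count, or -1. */
  long reserve(long bytes);

  /** Release space after uncaching; returns the new used-byte count. */
  long release(long bytes);
}

// class MemoryMappableBlockLoader implements MappableBlockLoader { ... DRAM cache ... }
// class PmemMappableBlockLoader  implements MappableBlockLoader { ... pmem cache ... }
{code}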
{quote}Adding another reference (cacheFilePath) to {{ReplicaInfo}} is less than 
desirable from a memory perspective. For those not using the feature, it's not 
nearly as bad as adding one to inodes but may be an issue for very dense DNs. 
More importantly, those that DO use the feature will incur a substantial memory 
hit to store the paths. Why does a SCM block need to track a completely 
different path? Isn't the SCM just another storage dir?
{quote}
The cacheFilePath in ReplicaInfo is only used for the pmem cache. Since the 
user can configure multiple pmem volumes, some such info must be kept per 
cached block in order to locate its cache file on pmem. We also seriously 
considered your reasonable concern that adding cacheFilePath there may cause 
issues for very dense DNs; thanks for pointing that out. We will optimize this 
part in the new patch: cacheFilePath will be removed from ReplicaInfo, and a 
PmemVolumeManager will keep that information instead (precisely, the pmem 
volume index, which is enough to infer the cache file path in our new impl). A 
TB-sized HDFS pmem cache should hold around 10k blocks at most, and in our 
evaluation, tracking which pmem volume each of roughly 10k blocks is cached to 
does not consume substantial DRAM. On the other hand, enabling the pmem cache 
can relieve the pressure on the competing DRAM resource; see the sketch below.
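To illustrate the intended memory footprint (class and method names here are 
placeholders, not the actual patch), the manager records a one-byte volume 
index per cached block and rebuilds the full path on demand:
{code:java}
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Sketch only: the per-block state is a single Byte (the pmem volume index),
// not the full cache file path, so DRAM overhead stays small even for ~10k blocks.
class PmemVolumeIndexTracker {
  private final List<String> pmemVolumes = new ArrayList<>();  // configured mount dirs
  private final Map<Long, Byte> blockIdToVolumeIndex = new ConcurrentHashMap<>();

  void addVolume(String volumeDir) {
    pmemVolumes.add(volumeDir);
  }

  void recordCachedBlock(long blockId, byte volumeIndex) {
    blockIdToVolumeIndex.put(blockId, volumeIndex);
  }

  /** Rebuild the cache file path from the volume index instead of storing it. */
  String getCacheFilePath(long blockId) {
    Byte index = blockIdToVolumeIndex.get(blockId);
    return index == null ? null : pmemVolumes.get(index) + "/" + blockId;
  }
}
{code}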
{quote}{{getCacheLoaderClaZZ}} should be {{getCacheLoaderClaSS}} and the 
returned {{Class}} must be instantiated. It's wrong to compare the class simple 
name against {{MemoryMappableBlockLoader}}. Consider if a user configures with 
{{my.custom.MemoryMappableBlockLoader}} and it instantiates 
{{org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.MemoryMappableBlockLoader}}
 anyway because of the matching simple name? Catching 
{{ReflectiveOperationException}} and just logging it masks a serious 
misconfiguration or error condition.
{quote}
{{getCacheLoaderClaZZ}} will be renamed to {{getCacheLoaderClass}} as you 
suggested. Your comment made us aware that comparing the class simple name is 
unreliable; we will fix this in our new patch. Since the cache loader's 
constructor requires an {{FsDatasetCache}} instance as its parameter, we still 
instantiate it there as you noted: {{getCacheLoaderClass}} comes from DNConf, 
which should not depend on {{FsDatasetCache}} to return an instantiated cache 
loader. As you pointed out, it is not reasonable to catch 
ReflectiveOperationException and merely log it. We will throw a 
RuntimeException inside the catch block to abort DN startup; a rough sketch 
follows.
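Roughly, the instantiation path could look like the following. The config key 
and constructor shape are assumptions for illustration only, and 
MappableBlockLoader, MemoryMappableBlockLoader and FsDatasetCache stand for the 
classes discussed in this thread; the real patch may wire this through DNConf 
differently:
{code:java}
import org.apache.hadoop.conf.Configuration;

// Sketch only: resolve the configured loader class by type (not by simple name)
// and fail DataNode startup loudly if it cannot be instantiated.
class CacheLoaderFactory {
  // Hypothetical config key used only for this sketch.
  static final String CACHE_LOADER_CLASS_KEY = "dfs.datanode.cache.loader.class";

  static MappableBlockLoader create(Configuration conf, FsDatasetCache owner) {
    Class<? extends MappableBlockLoader> clazz = conf.getClass(
        CACHE_LOADER_CLASS_KEY,
        MemoryMappableBlockLoader.class,   // default when nothing is configured
        MappableBlockLoader.class);        // required supertype, no simple-name match
    try {
      // The loader's constructor takes the owning FsDatasetCache instance.
      return clazz.getConstructor(FsDatasetCache.class).newInstance(owner);
    } catch (ReflectiveOperationException e) {
      // Do not just log: surface the misconfiguration and abort DN startup.
      throw new RuntimeException("Failed to instantiate cache loader " + clazz, e);
    }
  }
}
{code}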
{quote}There's quite a few other exceptions being swallowed or just logged when 
dubious "bad things" happen. Or discarding of exceptions and rethrowing generic 
"something went wrong" exceptions w/o the original exception as a cause. That 
complicates debugging.
{quote}
We are checking the code carefully and doing our best to fix the issues you 
mentioned so that debugging is not complicated. For pmem volume loading, we 
only log an error when a single volume is unusable, since one bad pmem volume 
in the user's configuration seems tolerable; however, an exception will be 
thrown to terminate DN startup if all configured pmem volumes are invalid (see 
the sketch below).
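For illustration, that volume-loading policy could look roughly like this; the 
class and method names are placeholders, and the real check would verify the 
volume by creating and mapping a small test file:
{code:java}
import java.io.File;
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

// Sketch only: log and skip individual bad pmem volumes, but refuse to start
// the DataNode if none of the configured volumes is usable.
class PmemVolumeLoader {
  private static final Logger LOG = LoggerFactory.getLogger(PmemVolumeLoader.class);

  static List<String> loadVolumes(String[] configuredVolumes) throws IOException {
    List<String> usable = new ArrayList<>();
    for (String dir : configuredVolumes) {
      try {
        verifyVolume(dir);
        usable.add(dir);
      } catch (IOException e) {
        LOG.error("Failed to load pmem volume " + dir + ", skipping it", e);
      }
    }
    if (usable.isEmpty()) {
      throw new IOException("None of the configured pmem volumes is usable");
    }
    return usable;
  }

  private static void verifyVolume(String dir) throws IOException {
    // Placeholder for the real check (e.g. create and map a small test file).
    if (!new File(dir).isDirectory()) {
      throw new IOException("Not a usable pmem volume directory: " + dir);
    }
  }
}
{code}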
{quote}New methods in {{FsDatasetUtil}} are not ok. Invalid arguments are not 
"ignorable". That's how bugs creep in which are insanely hard to debug. Don't 
check if offset is valid, just seek, let it throw if invalid. Don't ignore null 
filenames, just let it throw. Catching {{Throwable}} is a bad practice; let 
alone catching, discarding, and throwing a bland exception.
{quote}
Good suggestion! We noted the issues in {{FsDatasetUtil}}. As you pointed out, 
the invalid offset and null filenames 


[jira] [Updated] (HDDS-1317) KeyOutputStream#write throws ArrayIndexOutOfBoundsException when running RandomWrite MR examples

2019-03-25 Thread Shashikant Banerjee (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1317?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shashikant Banerjee updated HDDS-1317:
--
   Resolution: Fixed
Fix Version/s: 0.5.0
   0.4.0
   Status: Resolved  (was: Patch Available)

Thanks [~xyao] for reporting and [~msingh] for the review. I have committed 
this patch to trunk and ozone-0.4 branch.

> KeyOutputStream#write throws ArrayIndexOutOfBoundsException when running 
> RandomWrite MR examples
> 
>
> Key: HDDS-1317
> URL: https://issues.apache.org/jira/browse/HDDS-1317
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Affects Versions: 0.4.0
>Reporter: Xiaoyu Yao
>Assignee: Shashikant Banerjee
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.4.0, 0.5.0
>
> Attachments: HDDS-1317.000.patch, HDDS-1317.001.patch, 
> HDDS-1317.002.patch, HDDS-1317.003.patch, HDDS-1317.004.patch
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> Repro steps:
> {code} 
> hadoop jar $HADOOP_MAPRED_HOME/hadoop-mapreduce-examples-*.jar randomwriter 
> -Dtest.randomwrite.total_bytes=1000  o3fs://bucket1.vol1/randomwrite.out
> {code}
>  
> Error Stack:
> {code}
> 2019-03-20 19:02:37 INFO Job:1686 - Task Id : 
> attempt_1553108378906_0002_m_00_0, Status : FAILED
> Error: java.lang.ArrayIndexOutOfBoundsException: -5
>  at java.util.ArrayList.elementData(ArrayList.java:422)
>  at java.util.ArrayList.get(ArrayList.java:435)
>  at 
> org.apache.hadoop.hdds.scm.storage.BufferPool.getBuffer(BufferPool.java:45)
>  at 
> org.apache.hadoop.hdds.scm.storage.BufferPool.allocateBufferIfNeeded(BufferPool.java:59)
>  at 
> org.apache.hadoop.hdds.scm.storage.BlockOutputStream.write(BlockOutputStream.java:215)
>  at 
> org.apache.hadoop.ozone.client.io.BlockOutputStreamEntry.write(BlockOutputStreamEntry.java:130)
>  at 
> org.apache.hadoop.ozone.client.io.KeyOutputStream.handleWrite(KeyOutputStream.java:311)
>  at 
> org.apache.hadoop.ozone.client.io.KeyOutputStream.write(KeyOutputStream.java:273)
>  at 
> org.apache.hadoop.fs.ozone.OzoneFSOutputStream.write(OzoneFSOutputStream.java:46)
>  at 
> org.apache.hadoop.fs.FSDataOutputStream$PositionCache.write(FSDataOutputStream.java:57)
>  at java.io.DataOutputStream.write(DataOutputStream.java:107)
>  at org.apache.hadoop.io.SequenceFile$Writer.append(SequenceFile.java:1444)
>  at 
> org.apache.hadoop.mapreduce.lib.output.SequenceFileOutputFormat$1.write(SequenceFileOutputFormat.java:83)
>  at 
> org.apache.hadoop.mapred.MapTask$NewDirectOutputCollector.write(MapTask.java:670)
>  at 
> org.apache.hadoop.mapreduce.task.TaskInputOutputContextImpl.write(TaskInputOutputContextImpl.java:89)
>  at 
> org.apache.hadoop.mapreduce.lib.map.WrappedMapper$Context.write(WrappedMapper.java:112)
>  at 
> org.apache.hadoop.examples.RandomWriter$RandomMapper.map(RandomWriter.java:199)
>  at 
> org.apache.hadoop.examples.RandomWriter$RandomMapper.map(RandomWriter.java:165)
>  at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:146)
>  at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:799)
>  at org.apache.hadoop.mapred.MapTask.run(MapTask.java:347)
>  at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:174)
>  at java.security.AccessController.doPrivileged(Native Method)
>  at javax.security.auth.Subject.doAs(Subject.java:422)
>  at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730)
>  at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:168)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org


