[jira] [Commented] (HDFS-14547) DirectoryWithQuotaFeature.quota costs additional memory even the storage type quota is not set.

2019-07-11 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14547?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16883571#comment-16883571
 ] 

Hadoop QA commented on HDFS-14547:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  9m 
18s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} branch-2.9 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
48s{color} | {color:green} branch-2.9 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
52s{color} | {color:green} branch-2.9 passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
45s{color} | {color:green} branch-2.9 passed with JDK v1.8.0_212 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
30s{color} | {color:green} branch-2.9 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
56s{color} | {color:green} branch-2.9 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
48s{color} | {color:green} branch-2.9 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
9s{color} | {color:green} branch-2.9 passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
45s{color} | {color:green} branch-2.9 passed with JDK v1.8.0_212 {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  0m 
27s{color} | {color:red} hadoop-hdfs in the patch failed with JDK v1.7.0_95. 
{color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  0m 27s{color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.7.0_95. {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
39s{color} | {color:green} the patch passed with JDK v1.8.0_212 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
39s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 25s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 1 new + 4 unchanged - 0 fixed = 5 total (was 4) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
3s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  1m  
2s{color} | {color:red} hadoop-hdfs-project_hadoop-hdfs-jdk1.7.0_95 with JDK 
v1.7.0_95 generated 1 new + 9 unchanged - 0 fixed = 10 total (was 9) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed with JDK v1.8.0_212 {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 55m  2s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
29s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 89m 31s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.qjournal.server.TestJournalNodeRespectsBindHostKeys |
|   | hadoop.hdfs.web.TestWebHdfsTimeouts |
|   | hadoop.hdfs.server.datanode.TestDirectoryScanner |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=18.09.7 Server=18.09.7 Image:yetus/hadoop:c3439ff |
| JIRA Issue | HDFS-14547 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12974465/HDFS-14547-branch-2.9.001.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 68dbc638fbfc 4.4.0-139-generic #165-Ubunt

[jira] [Assigned] (HDDS-1787) NPE thrown while trying to find DN closest to client

2019-07-11 Thread Xiaoyu Yao (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1787?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao reassigned HDDS-1787:


Assignee: Sammi Chen  (was: Hrishikesh Gadre)

> NPE thrown while trying to find DN closest to client
> 
>
> Key: HDDS-1787
> URL: https://issues.apache.org/jira/browse/HDDS-1787
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Affects Versions: 0.5.0
>Reporter: Siddharth Wagle
>Assignee: Sammi Chen
>Priority: Major
>
> cc: [~xyao] This seems related to the client side topology changes, not sure 
> if some other Jira is already addressing this.
> {code}
> 2019-07-10 16:45:53,176 WARN  ipc.Server (Server.java:logException(2724)) - 
> IPC Server handler 14 on 35066, call Call#127037 Retry#0 
> org.apache.hadoop.hdds.scm.protocol.ScmBlockLocationProtocol.send from 17
> 2.31.116.73:52540
> java.lang.NullPointerException
> at 
> org.apache.hadoop.ozone.protocolPB.ScmBlockLocationProtocolServerSideTranslatorPB.lambda$sortDatanodes$0(ScmBlockLocationProtocolServerSideTranslatorPB.java:215)
> at 
> java.util.ArrayList$ArrayListSpliterator.forEachRemaining(ArrayList.java:1382)
> at 
> java.util.stream.ReferencePipeline$Head.forEach(ReferencePipeline.java:580)
> at 
> org.apache.hadoop.ozone.protocolPB.ScmBlockLocationProtocolServerSideTranslatorPB.sortDatanodes(ScmBlockLocationProtocolServerSideTranslatorPB.java:215)
> at 
> org.apache.hadoop.ozone.protocolPB.ScmBlockLocationProtocolServerSideTranslatorPB.send(ScmBlockLocationProtocolServerSideTranslatorPB.java:124)
> at 
> org.apache.hadoop.hdds.protocol.proto.ScmBlockLocationProtocolProtos$ScmBlockLocationProtocolService$2.callBlockingMethod(ScmBlockLocationProtocolProtos.java:13157)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:524)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1025)
> at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:876)
> at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:822)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2682)
> 2019-07-10 16:45:53,176 WARN  om.KeyManagerImpl 
> (KeyManagerImpl.java:lambda$sortDatanodeInPipeline$7(2129)) - Unable to sort 
> datanodes based on distance to client, volume=xqoyzocpse, bucket=vxwajaczqh, 
> key=pool-444-thread-7-201077822, client=127.0.0.1, 
> datanodes=[10f15723-45d7-4a0c-8f01-8b101744a110{ip: 172.31.116.73, host: 
> sid-minichaos.gce.cloudera.com, networkLocation: /default-rack, certSerialId: 
> null}, 7ac2777f-0a5c-4414-9e7f-bfbc47d696ea{ip: 172.31.116.73, host: 
> sid-minichaos.gce.cloudera.com, networkLocation: /default-rack, certSerialId: 
> null}], exception=java.lang.NullPointerException
> at 
> org.apache.hadoop.ozone.protocolPB.ScmBlockLocationProtocolServerSideTranslatorPB.lambda$sortDatanodes$0(ScmBlockLocationProtocolServerSideTranslatorPB.java:215)
> at 
> java.util.ArrayList$ArrayListSpliterator.forEachRemaining(ArrayList.java:1382)
> at 
> java.util.stream.ReferencePipeline$Head.forEach(ReferencePipeline.java:580)
> at 
> org.apache.hadoop.ozone.protocolPB.ScmBlockLocationProtocolServerSideTranslatorPB.sortDatanodes(ScmBlockLocationProtocolServerSideTranslatorPB.java:215)
> at 
> org.apache.hadoop.ozone.protocolPB.ScmBlockLocationProtocolServerSideTranslatorPB.send(ScmBlockLocationProtocolServerSideTranslatorPB.java:124)
> at 
> org.apache.hadoop.hdds.protocol.proto.ScmBlockLocationProtocolProtos$ScmBlockLocationProtocolService$2.callBlockingMethod(ScmBlockLocationProtocolProtos.java:13157)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:524)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1025)
> at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:876)
> at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:822)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2682)
> {code}
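Not the actual fix, but a small self-contained illustration of the failure mode: the NPE comes from dereferencing a datanode that cannot be resolved in the topology map inside the sorting lambda, so resolving and filtering before the sort sidesteps it. All names below are hypothetical stand-ins, not the real SCM API:

{code:java}
import java.util.ArrayList;
import java.util.Comparator;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class NullSafeSortSketch {
  public static void main(String[] args) {
    // Hypothetical stand-in for the SCM topology lookup (ip -> network location).
    Map<String, String> locationByIp = new HashMap<>();
    locationByIp.put("172.31.116.73", "/default-rack");

    List<String> datanodeIps = new ArrayList<>();
    datanodeIps.add("172.31.116.73");
    datanodeIps.add("10.0.0.9");            // no topology entry -> lookup returns null

    // Resolve and filter before sorting, instead of dereferencing a possible
    // null inside the sorting lambda (the source of the NPE in the trace above).
    List<String> sortable = new ArrayList<>();
    for (String ip : datanodeIps) {
      if (locationByIp.get(ip) == null) {
        System.err.println("No topology entry for " + ip + ", excluded from sort");
        continue;
      }
      sortable.add(ip);
    }
    sortable.sort(Comparator.comparing(locationByIp::get));
    System.out.println(sortable);
  }
}
{code}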



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail:

[jira] [Commented] (HDFS-14458) Report pmem stats to namenode

2019-07-11 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14458?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16883546#comment-16883546
 ] 

Hadoop QA commented on HDFS-14458:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
37s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
5s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 45s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
53s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 12s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}112m 16s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
38s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}173m  0s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.server.blockmanagement.TestReconstructStripedBlocksWithRackAwareness
 |
|   | hadoop.hdfs.server.datanode.TestDirectoryScanner |
|   | hadoop.hdfs.server.balancer.TestBalancer |
|   | hadoop.hdfs.TestReconstructStripedFile |
|   | hadoop.hdfs.server.namenode.ha.TestDFSUpgradeWithHA |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=18.09.7 Server=18.09.7 Image:yetus/hadoop:bdbca0e53b4 |
| JIRA Issue | HDFS-14458 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12974464/HDFS-14458.004.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 299d022f095d 4.15.0-52-generic #56-Ubuntu SMP Tue Jun 4 
22:49:08 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 00dd843 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_212 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/27210/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/27210/testReport/ |
| Max. process+thread count | 2482 (vs.

[jira] [Commented] (HDFS-11246) FSNameSystem#logAuditEvent should be called outside the read or write locks

2019-07-11 Thread Daryn Sharp (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-11246?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16883544#comment-16883544
 ] 

Daryn Sharp commented on HDFS-11246:


Yes, yes, yes, I would love this to go in.  Users are increasingly spamming 
write ops like setPermission/setOwner in their jobs that generate ACEs and 
waste cycles on audit logging inside the lock.  It's a big patch, so I won't have 
cycles to look at it until next week due to prepping for the kms/rpc deploy.

> FSNameSystem#logAuditEvent should be called outside the read or write locks
> ---
>
> Key: HDFS-11246
> URL: https://issues.apache.org/jira/browse/HDFS-11246
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.7.3
>Reporter: Kuhu Shukla
>Assignee: Kuhu Shukla
>Priority: Major
> Attachments: HDFS-11246.001.patch, HDFS-11246.002.patch, 
> HDFS-11246.003.patch, HDFS-11246.004.patch, HDFS-11246.005.patch, 
> HDFS-11246.006.patch, HDFS-11246.007.patch, HDFS-11246.008.patch
>
>
> {code}
> readLock();
> boolean success = true;
> ContentSummary cs;
> try {
>   checkOperation(OperationCategory.READ);
>   cs = FSDirStatAndListingOp.getContentSummary(dir, src);
> } catch (AccessControlException ace) {
>   success = false;
>   logAuditEvent(success, operationName, src);
>   throw ace;
> } finally {
>   readUnlock(operationName);
> }
> {code}
> It would be nice to have audit logging outside the lock esp. in scenarios 
> where applications hammer a given operation several times. 
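For reference, a minimal sketch of what moving the failure-case audit call outside the lock could look like for the snippet above. This is illustrative only and mirrors the quoted fragment; the attached patches define the actual change:

{code:java}
readLock();
boolean success = true;
ContentSummary cs;
try {
  checkOperation(OperationCategory.READ);
  cs = FSDirStatAndListingOp.getContentSummary(dir, src);
} catch (AccessControlException ace) {
  success = false;
  throw ace;
} finally {
  readUnlock(operationName);
  // The audit event for the failed (ACE) case is now emitted after the lock
  // has been released, so audit-log latency no longer extends lock hold time.
  if (!success) {
    logAuditEvent(success, operationName, src);
  }
}
{code}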






[jira] [Assigned] (HDDS-1713) ReplicationManager fail to find proper node topology based on Datanode details from heartbeat

2019-07-11 Thread Xiaoyu Yao (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1713?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao reassigned HDDS-1713:


Assignee: Sammi Chen  (was: Xiaoyu Yao)

> ReplicationManager fail to find proper node topology based on Datanode 
> details from heartbeat
> -
>
> Key: HDDS-1713
> URL: https://issues.apache.org/jira/browse/HDDS-1713
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Xiaoyu Yao
>Assignee: Sammi Chen
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 40m
>  Remaining Estimate: 0h
>
> DN does not have the topology info included in its heartbeat message for the 
> container report/pipeline report.
> SCM is where the topology information is available. During the processing of a 
> heartbeat, we should not rely on the DatanodeDetails from the report to choose 
> datanodes for closing containers. Otherwise, all the datanode locations of 
> existing container replicas will fall back to /default-rack.
>  
> The fix is to retrieve the corresponding datanode locations from the SCM 
> NodeManager, which has authoritative network topology information. 






[jira] [Assigned] (HDDS-1787) NPE thrown while trying to find DN closest to client

2019-07-11 Thread Siddharth Wagle (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1787?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siddharth Wagle reassigned HDDS-1787:
-

Assignee: Hrishikesh Gadre

> NPE thrown while trying to find DN closest to client
> 
>
> Key: HDDS-1787
> URL: https://issues.apache.org/jira/browse/HDDS-1787
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Affects Versions: 0.5.0
>Reporter: Siddharth Wagle
>Assignee: Hrishikesh Gadre
>Priority: Major
>
> cc: [~xyao] This seems related to the client side topology changes, not sure 
> if some other Jira is already addressing this.
> {code}
> 2019-07-10 16:45:53,176 WARN  ipc.Server (Server.java:logException(2724)) - 
> IPC Server handler 14 on 35066, call Call#127037 Retry#0 
> org.apache.hadoop.hdds.scm.protocol.ScmBlockLocationProtocol.send from 17
> 2.31.116.73:52540
> java.lang.NullPointerException
> at 
> org.apache.hadoop.ozone.protocolPB.ScmBlockLocationProtocolServerSideTranslatorPB.lambda$sortDatanodes$0(ScmBlockLocationProtocolServerSideTranslatorPB.java:215)
> at 
> java.util.ArrayList$ArrayListSpliterator.forEachRemaining(ArrayList.java:1382)
> at 
> java.util.stream.ReferencePipeline$Head.forEach(ReferencePipeline.java:580)
> at 
> org.apache.hadoop.ozone.protocolPB.ScmBlockLocationProtocolServerSideTranslatorPB.sortDatanodes(ScmBlockLocationProtocolServerSideTranslatorPB.java:215)
> at 
> org.apache.hadoop.ozone.protocolPB.ScmBlockLocationProtocolServerSideTranslatorPB.send(ScmBlockLocationProtocolServerSideTranslatorPB.java:124)
> at 
> org.apache.hadoop.hdds.protocol.proto.ScmBlockLocationProtocolProtos$ScmBlockLocationProtocolService$2.callBlockingMethod(ScmBlockLocationProtocolProtos.java:13157)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:524)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1025)
> at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:876)
> at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:822)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2682)
> 2019-07-10 16:45:53,176 WARN  om.KeyManagerImpl 
> (KeyManagerImpl.java:lambda$sortDatanodeInPipeline$7(2129)) - Unable to sort 
> datanodes based on distance to client, volume=xqoyzocpse, bucket=vxwajaczqh, 
> key=pool-444-thread-7-201077822, client=127.0.0.1, 
> datanodes=[10f15723-45d7-4a0c-8f01-8b101744a110{ip: 172.31.116.73, host: 
> sid-minichaos.gce.cloudera.com, networkLocation: /default-rack, certSerialId: 
> null}, 7ac2777f-0a5c-4414-9e7f-bfbc47d696ea{ip: 172.31.116.73, host: 
> sid-minichaos.gce.cloudera.com, networkLocation: /default-rack, certSerialId: 
> null}], exception=java.lang.NullPointerException
> at 
> org.apache.hadoop.ozone.protocolPB.ScmBlockLocationProtocolServerSideTranslatorPB.lambda$sortDatanodes$0(ScmBlockLocationProtocolServerSideTranslatorPB.java:215)
> at 
> java.util.ArrayList$ArrayListSpliterator.forEachRemaining(ArrayList.java:1382)
> at 
> java.util.stream.ReferencePipeline$Head.forEach(ReferencePipeline.java:580)
> at 
> org.apache.hadoop.ozone.protocolPB.ScmBlockLocationProtocolServerSideTranslatorPB.sortDatanodes(ScmBlockLocationProtocolServerSideTranslatorPB.java:215)
> at 
> org.apache.hadoop.ozone.protocolPB.ScmBlockLocationProtocolServerSideTranslatorPB.send(ScmBlockLocationProtocolServerSideTranslatorPB.java:124)
> at 
> org.apache.hadoop.hdds.protocol.proto.ScmBlockLocationProtocolProtos$ScmBlockLocationProtocolService$2.callBlockingMethod(ScmBlockLocationProtocolProtos.java:13157)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:524)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1025)
> at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:876)
> at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:822)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2682)
> {code}




[jira] [Work logged] (HDDS-1713) ReplicationManager fail to find proper node topology based on Datanode details from heartbeat

2019-07-11 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1713?focusedWorklogId=275646&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-275646
 ]

ASF GitHub Bot logged work on HDDS-1713:


Author: ASF GitHub Bot
Created on: 12/Jul/19 05:18
Start Date: 12/Jul/19 05:18
Worklog Time Spent: 10m 
  Work Description: ChenSammi commented on pull request #1008: HDDS-1713. 
ReplicationManager fail to find proper node topology based…
URL: https://github.com/apache/hadoop/pull/1008#discussion_r302808526
 
 

 ##
 File path: 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/server/SCMDatanodeHeartbeatDispatcher.java
 ##
 @@ -99,6 +99,9 @@ public SCMDatanodeHeartbeatDispatcher(NodeManager 
nodeManager,
   commands = nodeManager.getCommandQueue(dnID);
 
 } else {
+  // Get the datanode details again from node manager with the topology 
info
+  // for registered datanodes.
+  datanodeDetails = nodeManager.getNode(datanodeDetails.getIpAddress());
 
 Review comment:
   Property "dfs.datanode.use.datanode.hostname" is used to control whether use 
IP address or hostname.  Use Ip address or hostname, current exiting 
hadoop/hdfs/yarn topology tools/customer mgt scripts can be reused. It would be 
easy for user to adopt Ozone.  @xiaoyuyao, I can take over this if you are 
fully occupied. 
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 275646)
Time Spent: 1.5h  (was: 1h 20m)

> ReplicationManager fail to find proper node topology based on Datanode 
> details from heartbeat
> -
>
> Key: HDDS-1713
> URL: https://issues.apache.org/jira/browse/HDDS-1713
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> DN does not have the topology info included in its heartbeat message for the 
> container report/pipeline report.
> SCM is where the topology information is available. During the processing of a 
> heartbeat, we should not rely on the DatanodeDetails from the report to choose 
> datanodes for closing containers. Otherwise, all the datanode locations of 
> existing container replicas will fall back to /default-rack.
>  
> The fix is to retrieve the corresponding datanode locations from the SCM 
> NodeManager, which has authoritative network topology information. 






[jira] [Work logged] (HDDS-1713) ReplicationManager fail to find proper node topology based on Datanode details from heartbeat

2019-07-11 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1713?focusedWorklogId=275647&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-275647
 ]

ASF GitHub Bot logged work on HDDS-1713:


Author: ASF GitHub Bot
Created on: 12/Jul/19 05:18
Start Date: 12/Jul/19 05:18
Worklog Time Spent: 10m 
  Work Description: ChenSammi commented on pull request #1008: HDDS-1713. 
ReplicationManager fail to find proper node topology based…
URL: https://github.com/apache/hadoop/pull/1008#discussion_r302410636
 
 

 ##
 File path: 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/server/SCMDatanodeHeartbeatDispatcher.java
 ##
 @@ -99,6 +99,9 @@ public SCMDatanodeHeartbeatDispatcher(NodeManager 
nodeManager,
   commands = nodeManager.getCommandQueue(dnID);
 
 } else {
+  // Get the datanode details again from node manager with the topology 
info
+  // for registered datanodes.
+  datanodeDetails = nodeManager.getNode(datanodeDetails.getIpAddress());
 
 Review comment:
   Xiaoyu, a node can use either its IP address or its hostname as the network 
topology name. 
   Maybe we should refactor the nodeManager.getNode function to take 
datanodeDetails as a parameter, and make the choice between IP address and 
hostname as the topology name internal logic of the getNode function. 
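A rough, self-contained sketch of the refactor being suggested; the class and field names are hypothetical and the real NodeManager API may differ:

{code:java}
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch: getNode() takes the heartbeat-reported details and hides
// the "IP address vs. hostname as topology name" choice inside the node manager.
class NodeManagerSketch {
  static class DatanodeDetails {
    final String ip, host, location;
    DatanodeDetails(String ip, String host, String location) {
      this.ip = ip; this.host = host; this.location = location;
    }
  }

  private final Map<String, DatanodeDetails> registered = new HashMap<>();
  private final boolean useHostname;   // e.g. driven by dfs.datanode.use.datanode.hostname

  NodeManagerSketch(boolean useHostname) { this.useHostname = useHostname; }

  void register(DatanodeDetails dn) {
    registered.put(useHostname ? dn.host : dn.ip, dn);
  }

  // Callers pass the reported details; the lookup-key choice is internal logic.
  DatanodeDetails getNode(DatanodeDetails reported) {
    return registered.get(useHostname ? reported.host : reported.ip);
  }
}
{code}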
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 275647)
Time Spent: 1h 40m  (was: 1.5h)

> ReplicationManager fail to find proper node topology based on Datanode 
> details from heartbeat
> -
>
> Key: HDDS-1713
> URL: https://issues.apache.org/jira/browse/HDDS-1713
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 40m
>  Remaining Estimate: 0h
>
> DN does not have the topology info included in its heartbeat message for 
> container report/pipeline report.
> SCM is where the topology information is available. During the processing of 
> heartbeat, we should not rely on the datanodedetails from report to choose 
> datanodes for close container. Otherwise, all the datanode locations of 
> existing container replicas will fallback to /default-rack.
>  
> The fix is to retrieve the corresponding datanode locations from scm 
> nodemanager, which has authoritative network topology information. 






[jira] [Commented] (HDFS-14617) Improve fsimage load time by writing sub-sections to the fsimage index

2019-07-11 Thread Daryn Sharp (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14617?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16883532#comment-16883532
 ] 

Daryn Sharp commented on HDFS-14617:


I'm definitely not opposed to startup gains because 1-2h startup is rough.    
I've been vocal about PB performance issues but I'm skeptical of the following:
{quote}The time taken to read and parse the protobuf messages seems to dominate 
the runtime
{quote}
Have you profiled to determine if PB parsing is the dominator?  Perhaps it is the 
CPU-cycle dominator, but I'd expect I/O latency to be the wall-clock dominator.  
But, performance problems are rarely intuitive.
{quote}but we will need to somehow read and decode the protobuf in parallel to 
get significant speedup
{quote}
The biggest problem is usually the GC overhead from all the objects vomited by 
PB marshaling.  Parallelism is likely to exacerbate the GC overhead, but you 
appear to be seeing healthy performance gains.  It's a question/experiment of 
whether we can do better?

Perhaps the performance gain is simply due to the pipelining that decouples the 
I/O latency from the computation to update the fsdir related structures.   I'd 
be very interested in the performance relative to a single thread allowed to 
read as fast as the os/disk/page cache allows (while vomiting voluminous 
PB-related objects) while another thread updates the fsdir unimpeded by 
synchronization.  It might be similar or higher.
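A minimal, self-contained sketch of the pipelining experiment described above: one thread reads/parses records as fast as the OS and page cache allow while another applies them, handing off through a bounded queue. The record type and counts are stand-ins, not the actual fsimage loader code:

{code:java}
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Sketch only: decouple I/O + parsing from fsdir updates via a producer/consumer
// pipeline, so the updater thread is never blocked on disk latency.
class PipelinedLoadSketch {
  static final Object POISON = new Object();

  public static void main(String[] args) throws InterruptedException {
    BlockingQueue<Object> queue = new ArrayBlockingQueue<>(64 * 1024);

    Thread reader = new Thread(() -> {
      try {
        for (int i = 0; i < 1_000_000; i++) {   // stand-in for parsing PB records
          queue.put(new long[]{i});
        }
        queue.put(POISON);
      } catch (InterruptedException e) {
        Thread.currentThread().interrupt();
      }
    }, "image-reader");

    Thread updater = new Thread(() -> {
      try {
        long applied = 0;
        for (Object rec = queue.take(); rec != POISON; rec = queue.take()) {
          applied++;                            // stand-in for updating the fsdir
        }
        System.out.println("applied " + applied + " records");
      } catch (InterruptedException e) {
        Thread.currentThread().interrupt();
      }
    }, "fsdir-updater");

    reader.start();
    updater.start();
    reader.join();
    updater.join();
  }
}
{code}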

> Improve fsimage load time by writing sub-sections to the fsimage index
> --
>
> Key: HDFS-14617
> URL: https://issues.apache.org/jira/browse/HDFS-14617
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Reporter: Stephen O'Donnell
>Assignee: Stephen O'Donnell
>Priority: Major
> Attachments: HDFS-14617.001.patch
>
>
> Loading an fsimage is basically a single threaded process. The current 
> fsimage is written out in sections, eg iNode, iNode_Directory, Snapshots, 
> Snapshot_Diff etc. Then at the end of the file, an index is written that 
> contains the offset and length of each section. The image loader code uses 
> this index to initialize an input stream to read and process each section. It 
> is important that one section is fully loaded before another is started, as 
> the next section depends on the results of the previous one.
> What I would like to propose is the following:
> 1. When writing the image, we can optionally output sub_sections to the 
> index. That way, a given section would effectively be split into several 
> sections, eg:
> {code:java}
>inode_section offset 10 length 1000
>  inode_sub_section offset 10 length 500
>  inode_sub_section offset 510 length 500
>  
>inode_dir_section offset 1010 length 1000
>  inode_dir_sub_section offset 1010 length 500
>  inode_dir_sub_section offset 1010 length 500
> {code}
> Here you can see we still have the original section index, but then we also 
> have sub-section entries that cover the entire section. Then a processor can 
> either read the full section in serial, or read each sub-section in parallel.
> 2. In the Image Writer code, we should set a target number of sub-sections, 
> and then based on the total inodes in memory, it will create that many 
> sub-sections per major image section. I think the only sections worth doing 
> this for are inode, inode_reference, inode_dir and snapshot_diff. All others 
> tend to be fairly small in practice.
> 3. If there are under some threshold of inodes (eg 10M) then don't bother 
> with the sub-sections as a serial load only takes a few seconds at that scale.
> 4. The image loading code can then have a switch to enable 'parallel loading' 
> and a 'number of threads' where it uses the sub-sections, or if not enabled 
> falls back to the existing logic to read the entire section in serial.
> Working with a large image of 316M inodes and 35GB on disk, I have a proof of 
> concept of this change working, allowing just inode and inode_dir to be 
> loaded in parallel, but I believe inode_reference and snapshot_diff can be 
> made parallel with the same technique.
> Some benchmarks I have are as follows:
> {code:java}
> Threads   1 2 3 4 
> 
> inodes448   290   226   189 
> inode_dir 326   211   170   161 
> Total 927   651   535   488 (MD5 calculation about 100 seconds)
> {code}
> The above table shows the time in seconds to load the inode section and the 
> inode_directory section, and then the total load time of the image.
> With 4 threads using the above technique, we are able to more than halve the 
> load time of the two sections. With the patch in HDFS-13694 it would take a 
> further 100 seconds off the run time, going from 927 seconds to 388, whi

[jira] [Created] (HDDS-1789) BlockOutputStream#watchForCommit fails with UnsupportedOperationException

2019-07-11 Thread Shashikant Banerjee (JIRA)
Shashikant Banerjee created HDDS-1789:
-

 Summary: BlockOutputStream#watchForCommit fails with 
UnsupportedOperationException 
 Key: HDDS-1789
 URL: https://issues.apache.org/jira/browse/HDDS-1789
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
  Components: Ozone Client
Affects Versions: 0.5.0
Reporter: Shashikant Banerjee
Assignee: Shashikant Banerjee
 Fix For: 0.5.0


{code:java}
2019-07-12 08:45:17,981 ERROR ozone.MiniOzoneLoadGenerator 
(MiniOzoneLoadGenerator.java:load(105)) - LOADGEN: Create 
key:pool-444-thread-5-1328179725 failed with exception, skipping
java.lang.UnsupportedOperationException
at java.util.AbstractList.add(AbstractList.java:148)
at java.util.AbstractList.add(AbstractList.java:108)
at java.util.AbstractCollection.addAll(AbstractCollection.java:344)
at 
org.apache.hadoop.hdds.scm.storage.BlockOutputStream.watchForCommit(BlockOutputStream.java:363)
at 
org.apache.hadoop.hdds.scm.storage.BlockOutputStream.handleFullBuffer(BlockOutputStream.java:332)
at 
org.apache.hadoop.hdds.scm.storage.BlockOutputStream.write(BlockOutputStream.java:259)
at 
org.apache.hadoop.ozone.client.io.BlockOutputStreamEntry.write(BlockOutputStreamEntry.java:129)
at 
org.apache.hadoop.ozone.client.io.KeyOutputStream.handleWrite(KeyOutputStream.java:211)
at 
org.apache.hadoop.ozone.client.io.KeyOutputStream.write(KeyOutputStream.java:193)
at 
org.apache.hadoop.ozone.client.io.OzoneOutputStream.write(OzoneOutputStream.java:49)
at java.io.OutputStream.write(OutputStream.java:75)
at 
org.apache.hadoop.ozone.MiniOzoneLoadGenerator.load(MiniOzoneLoadGenerator.java:103)
at 
org.apache.hadoop.ozone.MiniOzoneLoadGenerator.lambda$startIO$0(MiniOzoneLoadGenerator.java:152)
at 
java.util.concurrent.CompletableFuture$AsyncRun.run(CompletableFuture.java:1626)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
this is one more issue from Chaos
please raise a bug

Shashikant Banerjee [9:56 AM]
okk
actually jstacks are taken at 15 min interval
i am yet to find any common hanging thread among all the 3 jstacks

Mukul Kumar Singh [10:00 AM]
In the 2nd file:
java.lang.Thread.State: WAITING (parking)
at sun.misc.Unsafe.park(Native Method)
- parking to wait for <0x7fb5b29ed228> (a 
java.util.concurrent.locks.ReentrantReadWriteLock$NonfairSync)
at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireShared(AbstractQueuedSynchronizer.java:967)
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireShared(AbstractQueuedSynchronizer.java:1283)
at 
java.util.concurrent.locks.ReentrantReadWriteLock$ReadLock.lock(ReentrantReadWriteLock.java:727)
at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.readLock(FSNamesystem.java:1595)
at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.handleHeartbeat(FSNamesystem.java:4894)
at 
org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.sendHeartbeat(NameNodeRpcServer.java:1438)
at 
org.apache.hadoop.hdfs.protocolPB.DatanodeProtocolServerSideTranslatorPB.sendHeartbeat(DatanodeProtocolServerSideTranslatorPB.java:118)
at 
org.apache.hadoop.hdfs.protocol.proto.DatanodeProtocolProtos$DatanodeProtocolService$2.callBlockingMethod(DatanodeProtocolProtos.java:31228)
at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:640)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2351)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2347)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1866)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2347)
{code}
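The stack trace above shows AbstractCollection.addAll failing inside AbstractList.add with UnsupportedOperationException, which is the classic symptom of calling addAll on a fixed-size or unmodifiable list (for example one backed by Arrays.asList or Collections.unmodifiableList). A small self-contained illustration of that failure mode, not the actual BlockOutputStream code:

{code:java}
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class AddAllSketch {
  public static void main(String[] args) {
    List<Long> commitIndexes = Arrays.asList(1L, 2L, 3L);   // fixed-size list view

    try {
      commitIndexes.addAll(Arrays.asList(4L, 5L));          // same failure as in the trace
    } catch (UnsupportedOperationException e) {
      System.out.println("addAll on a fixed-size list: " + e);
    }

    // Copying into a mutable list first avoids the exception.
    List<Long> mutable = new ArrayList<>(commitIndexes);
    mutable.addAll(Arrays.asList(4L, 5L));
    System.out.println(mutable);
  }
}
{code}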






[jira] [Commented] (HDDS-1785) OOM error in Freon due to the concurrency handling

2019-07-11 Thread Xiaoyu Yao (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1785?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16883530#comment-16883530
 ] 

Xiaoyu Yao commented on HDDS-1785:
--

cc: [~Sammi] and [~xudongcao]

> OOM error in Freon due to the concurrency handling
> --
>
> Key: HDDS-1785
> URL: https://issues.apache.org/jira/browse/HDDS-1785
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Elek, Marton
>Assignee: Doroszlai, Attila
>Priority: Blocker
>
> HDDS-1532 modified the concurrent framework usage of Freon 
> (RandomKeyGenerator).
> The new approach uses separate tasks (Runnable) to create the 
> volumes/buckets/keys.
> Unfortunately it doesn't work very well in some cases.
>  # When Freon starts, it creates an executor with a fixed number of threads (10)
>  # The first loop submits numOfVolumes (10) VolumeProcessor tasks to the 
> executor
>  # The 10 threads start to execute the 10 VolumeProcessor tasks
>  # Each VolumeProcessor task creates numOfBuckets (1000) BucketProcessor 
> tasks. All together 10 000 tasks are submitted to the executor.
>  # The 10 threads start to execute the first 10 BucketProcessor tasks, which 
> start to create the KeyProcessor tasks: 500 000 * 10 tasks are submitted.
>  # At this point in time no keys have been generated yet, but the next 10 
> BucketProcessor tasks start to execute.
>  # Before the first key creation can execute, all the BucketProcessor tasks 
> must be processed, which means that all the key creation tasks (10 * 1000 
> * 500 000) are created and added to the executor
>  # This requires a huge amount of time and memory
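A scaled-down, self-contained sketch of the task explosion described above: with nested submission to a fixed-size thread pool, every key task is materialised in the executor queue long before most of them run. The numbers and the empty key task are stand-ins for the real Freon workload:

{code:java}
import java.util.concurrent.Executors;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class NestedSubmitSketch {
  public static void main(String[] args) throws InterruptedException {
    int volumes = 10, bucketsPerVolume = 100, keysPerBucket = 1_000;   // scaled down
    ThreadPoolExecutor executor =
        (ThreadPoolExecutor) Executors.newFixedThreadPool(10);

    for (int v = 0; v < volumes; v++) {
      executor.submit(() -> {
        for (int b = 0; b < bucketsPerVolume; b++) {
          executor.submit(() -> {
            for (int k = 0; k < keysPerBucket; k++) {
              // Stand-in for one key creation; a tiny sleep makes the backlog visible.
              executor.submit(() -> {
                try {
                  Thread.sleep(1);
                } catch (InterruptedException ignored) {
                  Thread.currentThread().interrupt();
                }
              });
            }
          });
        }
      });
    }

    Thread.sleep(2_000);
    // With the real Freon numbers (10 * 1000 * 500 000) this queue is what
    // exhausts time and memory before the first keys are written.
    System.out.println("queued tasks: " + executor.getQueue().size());
    executor.shutdownNow();
    executor.awaitTermination(10, TimeUnit.SECONDS);
  }
}
{code}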






[jira] [Commented] (HDDS-1754) getContainerWithPipeline fails with PipelineNotFoundException

2019-07-11 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1754?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16883528#comment-16883528
 ] 

Hudson commented on HDDS-1754:
--

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #16898 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/16898/])
HDDS-1754. getContainerWithPipeline fails with (nanda: rev 
738fab3bff04ab0128146b401b4978d3d60ec97f)
* (edit) 
hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/container/ContainerInfo.java
* (edit) 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/server/SCMClientProtocolServer.java


> getContainerWithPipeline fails with PipelineNotFoundException
> -
>
> Key: HDDS-1754
> URL: https://issues.apache.org/jira/browse/HDDS-1754
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: SCM
>Affects Versions: 0.4.0
>Reporter: Mukul Kumar Singh
>Assignee: Supratim Deka
>Priority: Major
>  Labels: MiniOzoneChaosCluster, pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> When a pipeline is closed or finalized before it was able to close all the 
> containers inside it, getContainerWithPipeline will still try to fetch the 
> pipeline state from pipelineManager after the pipeline has been closed.
> {code}
> 2019-07-02 20:48:20,370 INFO  ipc.Server (Server.java:logException(2726)) - 
> IPC Server handler 13 on 50130, call Call#17339 Retry#0 
> org.apache.hadoop.hdds.scm.protocol.StorageContainerLocationProtocol.getContainerWithPipeline
>  from 192.168.0.2:51452
> org.apache.hadoop.hdds.scm.pipeline.PipelineNotFoundException: 
> PipelineID=e1a7b16a-48d9-4194-9774-ad49ec9ad78b not found
> at 
> org.apache.hadoop.hdds.scm.pipeline.PipelineStateMap.getPipeline(PipelineStateMap.java:132)
> at 
> org.apache.hadoop.hdds.scm.pipeline.PipelineStateManager.getPipeline(PipelineStateManager.java:66)
> at 
> org.apache.hadoop.hdds.scm.pipeline.SCMPipelineManager.getPipeline(SCMPipelineManager.java:184)
> at 
> org.apache.hadoop.hdds.scm.server.SCMClientProtocolServer.getContainerWithPipeline(SCMClientProtocolServer.java:244)
> at 
> org.apache.hadoop.ozone.protocolPB.StorageContainerLocationProtocolServerSideTranslatorPB.getContainerWithPipeline(StorageContainerLocationProtocolServerSideTranslatorPB.java:144)
> at 
> org.apache.hadoop.hdds.protocol.proto.StorageContainerLocationProtocolProtos$StorageContainerLocationProtocolService$2.callBlockingMethod(StorageContainerLocationProtocolProtos.java:16390)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:524)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1025)
> at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:876)
> at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:822)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2682)
> {code}






[jira] [Resolved] (HDDS-1754) getContainerWithPipeline fails with PipelineNotFoundException

2019-07-11 Thread Nanda kumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1754?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nanda kumar resolved HDDS-1754.
---
   Resolution: Fixed
Fix Version/s: 0.5.0

> getContainerWithPipeline fails with PipelineNotFoundException
> -
>
> Key: HDDS-1754
> URL: https://issues.apache.org/jira/browse/HDDS-1754
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: SCM
>Affects Versions: 0.4.0
>Reporter: Mukul Kumar Singh
>Assignee: Supratim Deka
>Priority: Major
>  Labels: MiniOzoneChaosCluster, pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> When a pipeline is closed or finalized before it was able to close all the 
> containers inside it, getContainerWithPipeline will still try to fetch the 
> pipeline state from pipelineManager after the pipeline has been closed.
> {code}
> 2019-07-02 20:48:20,370 INFO  ipc.Server (Server.java:logException(2726)) - 
> IPC Server handler 13 on 50130, call Call#17339 Retry#0 
> org.apache.hadoop.hdds.scm.protocol.StorageContainerLocationProtocol.getContainerWithPipeline
>  from 192.168.0.2:51452
> org.apache.hadoop.hdds.scm.pipeline.PipelineNotFoundException: 
> PipelineID=e1a7b16a-48d9-4194-9774-ad49ec9ad78b not found
> at 
> org.apache.hadoop.hdds.scm.pipeline.PipelineStateMap.getPipeline(PipelineStateMap.java:132)
> at 
> org.apache.hadoop.hdds.scm.pipeline.PipelineStateManager.getPipeline(PipelineStateManager.java:66)
> at 
> org.apache.hadoop.hdds.scm.pipeline.SCMPipelineManager.getPipeline(SCMPipelineManager.java:184)
> at 
> org.apache.hadoop.hdds.scm.server.SCMClientProtocolServer.getContainerWithPipeline(SCMClientProtocolServer.java:244)
> at 
> org.apache.hadoop.ozone.protocolPB.StorageContainerLocationProtocolServerSideTranslatorPB.getContainerWithPipeline(StorageContainerLocationProtocolServerSideTranslatorPB.java:144)
> at 
> org.apache.hadoop.hdds.protocol.proto.StorageContainerLocationProtocolProtos$StorageContainerLocationProtocolService$2.callBlockingMethod(StorageContainerLocationProtocolProtos.java:16390)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:524)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1025)
> at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:876)
> at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:822)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2682)
> {code}






[jira] [Commented] (HDDS-1754) getContainerWithPipeline fails with PipelineNotFoundException

2019-07-11 Thread Nanda kumar (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1754?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16883527#comment-16883527
 ] 

Nanda kumar commented on HDDS-1754:
---

Thanks for the contribution [~sdeka]. Thanks [~msingh] for reporting it. 
Committed it to trunk.

> getContainerWithPipeline fails with PipelineNotFoundException
> -
>
> Key: HDDS-1754
> URL: https://issues.apache.org/jira/browse/HDDS-1754
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: SCM
>Affects Versions: 0.4.0
>Reporter: Mukul Kumar Singh
>Assignee: Supratim Deka
>Priority: Major
>  Labels: MiniOzoneChaosCluster, pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> When a pipeline is closed or finalized before it was able to close all the 
> containers inside it, getContainerWithPipeline will still try to fetch the 
> pipeline state from pipelineManager after the pipeline has been closed.
> {code}
> 2019-07-02 20:48:20,370 INFO  ipc.Server (Server.java:logException(2726)) - 
> IPC Server handler 13 on 50130, call Call#17339 Retry#0 
> org.apache.hadoop.hdds.scm.protocol.StorageContainerLocationProtocol.getContainerWithPipeline
>  from 192.168.0.2:51452
> org.apache.hadoop.hdds.scm.pipeline.PipelineNotFoundException: 
> PipelineID=e1a7b16a-48d9-4194-9774-ad49ec9ad78b not found
> at 
> org.apache.hadoop.hdds.scm.pipeline.PipelineStateMap.getPipeline(PipelineStateMap.java:132)
> at 
> org.apache.hadoop.hdds.scm.pipeline.PipelineStateManager.getPipeline(PipelineStateManager.java:66)
> at 
> org.apache.hadoop.hdds.scm.pipeline.SCMPipelineManager.getPipeline(SCMPipelineManager.java:184)
> at 
> org.apache.hadoop.hdds.scm.server.SCMClientProtocolServer.getContainerWithPipeline(SCMClientProtocolServer.java:244)
> at 
> org.apache.hadoop.ozone.protocolPB.StorageContainerLocationProtocolServerSideTranslatorPB.getContainerWithPipeline(StorageContainerLocationProtocolServerSideTranslatorPB.java:144)
> at 
> org.apache.hadoop.hdds.protocol.proto.StorageContainerLocationProtocolProtos$StorageContainerLocationProtocolService$2.callBlockingMethod(StorageContainerLocationProtocolProtos.java:16390)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:524)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1025)
> at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:876)
> at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:822)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2682)
> {code}






[jira] [Work logged] (HDDS-1754) getContainerWithPipeline fails with PipelineNotFoundException

2019-07-11 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1754?focusedWorklogId=275644&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-275644
 ]

ASF GitHub Bot logged work on HDDS-1754:


Author: ASF GitHub Bot
Created on: 12/Jul/19 05:01
Start Date: 12/Jul/19 05:01
Worklog Time Spent: 10m 
  Work Description: nandakumar131 commented on pull request #1081: 
HDDS-1754. getContainerWithPipeline fails with PipelineNotFoundException. 
Contributed by Supratim Deka
URL: https://github.com/apache/hadoop/pull/1081
 
 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 275644)
Time Spent: 20m  (was: 10m)

> getContainerWithPipeline fails with PipelineNotFoundException
> -
>
> Key: HDDS-1754
> URL: https://issues.apache.org/jira/browse/HDDS-1754
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: SCM
>Affects Versions: 0.4.0
>Reporter: Mukul Kumar Singh
>Assignee: Supratim Deka
>Priority: Major
>  Labels: MiniOzoneChaosCluster, pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> When a pipeline is closed or finalized before it was able to close all the 
> containers inside it, getContainerWithPipeline will still try to fetch the 
> pipeline state from pipelineManager after the pipeline has been closed.
> {code}
> 2019-07-02 20:48:20,370 INFO  ipc.Server (Server.java:logException(2726)) - 
> IPC Server handler 13 on 50130, call Call#17339 Retry#0 
> org.apache.hadoop.hdds.scm.protocol.StorageContainerLocationProtocol.getContainerWithPipeline
>  from 192.168.0.2:51452
> org.apache.hadoop.hdds.scm.pipeline.PipelineNotFoundException: 
> PipelineID=e1a7b16a-48d9-4194-9774-ad49ec9ad78b not found
> at 
> org.apache.hadoop.hdds.scm.pipeline.PipelineStateMap.getPipeline(PipelineStateMap.java:132)
> at 
> org.apache.hadoop.hdds.scm.pipeline.PipelineStateManager.getPipeline(PipelineStateManager.java:66)
> at 
> org.apache.hadoop.hdds.scm.pipeline.SCMPipelineManager.getPipeline(SCMPipelineManager.java:184)
> at 
> org.apache.hadoop.hdds.scm.server.SCMClientProtocolServer.getContainerWithPipeline(SCMClientProtocolServer.java:244)
> at 
> org.apache.hadoop.ozone.protocolPB.StorageContainerLocationProtocolServerSideTranslatorPB.getContainerWithPipeline(StorageContainerLocationProtocolServerSideTranslatorPB.java:144)
> at 
> org.apache.hadoop.hdds.protocol.proto.StorageContainerLocationProtocolProtos$StorageContainerLocationProtocolService$2.callBlockingMethod(StorageContainerLocationProtocolProtos.java:16390)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:524)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1025)
> at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:876)
> at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:822)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2682)
> {code}






[jira] [Work logged] (HDDS-1779) TestWatchForCommit tests are flaky

2019-07-11 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1779?focusedWorklogId=275643&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-275643
 ]

ASF GitHub Bot logged work on HDDS-1779:


Author: ASF GitHub Bot
Created on: 12/Jul/19 04:59
Start Date: 12/Jul/19 04:59
Worklog Time Spent: 10m 
  Work Description: supratimdeka commented on pull request #1071: 
HDDS-1779. TestWatchForCommit tests are flaky.
URL: https://github.com/apache/hadoop/pull/1071#discussion_r302827060
 
 

 ##
 File path: 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/client/rpc/TestWatchForCommit.java
 ##
 @@ -343,61 +349,24 @@ public void testWatchForCommitForRetryfailure() throws 
Exception {
 cluster.shutdownHddsDatanode(pipeline.getNodes().get(1));
 // again write data with more than max buffer limit. This wi
 try {
-  // just watch for a lo index which in not updated in the commitInfo Map
-  xceiverClient.watchForCommit(index + 1, 2);
+  // just watch for a log index which in not updated in the commitInfo Map
+  // as well as there is no logIndex generate in Ratis.
+  // The basic idea here is just to test if its throws an exception.
+  xceiverClient
+  .watchForCommit(index + new Random().nextInt(100) + 10, 2);
 
 Review comment:
   instead of a Random increment, why not increment by a fixed number every time 
- say 100 or 110? This applies to all the other modified test cases as well.
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 275643)
Time Spent: 40m  (was: 0.5h)

> TestWatchForCommit tests are flaky
> --
>
> Key: HDDS-1779
> URL: https://issues.apache.org/jira/browse/HDDS-1779
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> The tests have become flaky because, once nodes are shut down in the Ratis 
> pipeline, a watch request can either be received at the server and fail with 
> NotReplicatedException, or it can fail with a StatusRuntimeException from 
> grpc; both need to be accounted for in the tests. Other than that, 
> HDDS-1384 also causes a bind exception to be thrown intermittently, which in 
> turn shuts down the MiniOzoneCluster. To overcome this, the test class has 
> been refactored as well.
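A sketch of one way a test can accept either failure mode without pinning a single exception type; the helper name is illustrative and it assumes JUnit 4 and grpc-java on the test classpath, which is not necessarily how the patch itself handles it:

{code:java}
import io.grpc.StatusRuntimeException;
import org.junit.Assert;

// Illustrative assertion helper: the watch call may surface either a Ratis
// NotReplicatedException or a gRPC StatusRuntimeException, possibly wrapped,
// so walk the cause chain instead of asserting one exact exception class.
final class WatchFailureAssert {
  static void assertExpectedWatchFailure(Throwable thrown) {
    for (Throwable t = thrown; t != null; t = t.getCause()) {
      if (t instanceof StatusRuntimeException
          || "NotReplicatedException".equals(t.getClass().getSimpleName())) {
        return;
      }
    }
    Assert.fail("Unexpected failure: " + thrown);
  }
}
{code}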






[jira] [Work logged] (HDDS-1779) TestWatchForCommit tests are flaky

2019-07-11 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1779?focusedWorklogId=275642&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-275642
 ]

ASF GitHub Bot logged work on HDDS-1779:


Author: ASF GitHub Bot
Created on: 12/Jul/19 04:59
Start Date: 12/Jul/19 04:59
Worklog Time Spent: 10m 
  Work Description: supratimdeka commented on pull request #1071: 
HDDS-1779. TestWatchForCommit tests are flaky.
URL: https://github.com/apache/hadoop/pull/1071#discussion_r302824792
 
 

 ##
 File path: 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/client/rpc/TestWatchForCommit.java
 ##
 @@ -303,10 +305,14 @@ public void testWatchForCommitWithSmallerTimeoutValue() 
throws Exception {
 cluster.shutdownHddsDatanode(pipeline.getNodes().get(0));
 cluster.shutdownHddsDatanode(pipeline.getNodes().get(1));
 try {
-  // just watch for a lo index which in not updated in the commitInfo Map
-  xceiverClient.watchForCommit(index + 1, 3000);
+  // just watch for a log index which is not updated in the commitInfo Map
+  // as well as there is no logIndex generated in Ratis.
+  // The basic idea here is just to test if it throws an exception.
+  xceiverClient
+  .watchForCommit(index + new Random().nextInt(100) + 10, 3000);
   Assert.fail("expected exception not thrown");
 } catch (Exception e) {
+  System.out.println("exception " + e);
 
 Review comment:
   as you've already noticed, this needs to go.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 275642)
Time Spent: 40m  (was: 0.5h)

> TestWatchForCommit tests are flaky
> --
>
> Key: HDDS-1779
> URL: https://issues.apache.org/jira/browse/HDDS-1779
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> The tests have become flaky because once nodes are shut down in the Ratis 
> pipeline, a watch request can either be received at the server and fail with 
> NotReplicatedException, or it can fail with a StatusRuntimeException from 
> grpc; both cases need to be accounted for in the tests. Other than that, 
> HDDS-1384 also causes a bind exception to be thrown intermittently, which in 
> turn shuts down the miniOzoneCluster. To overcome this, the test class has 
> been refactored as well.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1713) ReplicationManager fail to find proper node topology based on Datanode details from heartbeat

2019-07-11 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1713?focusedWorklogId=275641&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-275641
 ]

ASF GitHub Bot logged work on HDDS-1713:


Author: ASF GitHub Bot
Created on: 12/Jul/19 04:52
Start Date: 12/Jul/19 04:52
Worklog Time Spent: 10m 
  Work Description: xiaoyuyao commented on pull request #1008: HDDS-1713. 
ReplicationManager fail to find proper node topology based…
URL: https://github.com/apache/hadoop/pull/1008#discussion_r302826566
 
 

 ##
 File path: 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/server/SCMDatanodeHeartbeatDispatcher.java
 ##
 @@ -99,6 +99,9 @@ public SCMDatanodeHeartbeatDispatcher(NodeManager 
nodeManager,
   commands = nodeManager.getCommandQueue(dnID);
 
 } else {
+  // Get the datanode details again from node manager with the topology 
info
+  // for registered datanodes.
+  datanodeDetails = nodeManager.getNode(datanodeDetails.getIpAddress());
 
 Review comment:
   @nandakumar131, yes. We will need to handle this case for the 
minicluster-based tests.
   The current topology awareness is based on a map of ip/dns->location; I 
think changing it to uuid->location should work, as long as we maintain a 
mapping from uuid->ip/dns.
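
   To make the proposal concrete, here is a minimal sketch of the 
uuid->location mapping being discussed; the class and method names are 
illustrative assumptions, not the actual SCM/NodeManager API.

{code:java}
// Hypothetical sketch (not the real SCM/NodeManager code): key topology by
// datanode UUID rather than IP, so several datanode processes on one machine
// (e.g. MiniOzoneCluster) keep distinct locations.
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public final class UuidTopologyMap {
  // uuid -> network location, populated when a datanode registers.
  private final Map<String, String> uuidToLocation = new ConcurrentHashMap<>();

  public void register(String uuid, String networkLocation) {
    uuidToLocation.put(uuid, networkLocation);
  }

  // Returns the registered location, falling back to the default rack.
  public String resolve(String uuid) {
    return uuidToLocation.getOrDefault(uuid, "/default-rack");
  }
}
{code}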
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 275641)
Time Spent: 1h 20m  (was: 1h 10m)

> ReplicationManager fail to find proper node topology based on Datanode 
> details from heartbeat
> -
>
> Key: HDDS-1713
> URL: https://issues.apache.org/jira/browse/HDDS-1713
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> DN does not include the topology info in its heartbeat message for 
> container reports/pipeline reports.
> SCM is where the topology information is available. During heartbeat 
> processing, we should not rely on the DatanodeDetails from the report to 
> choose datanodes for closing containers. Otherwise, the datanode locations of 
> all existing container replicas will fall back to /default-rack.
>  
> The fix is to retrieve the corresponding datanode locations from the SCM 
> nodemanager, which has authoritative network topology information. 



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11687) Add new public encryption APIs required by Hive

2019-07-11 Thread Daryn Sharp (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-11687?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16883523#comment-16883523
 ] 

Daryn Sharp commented on HDFS-11687:


Admittedly very late to comment, but this is a bad patch. Not for the intent 
but for the unintended consequences. The intent was to expose whether the NN is 
using a key provider, but it also changed getTrashRoot to catch/log errors. 
Here's the problem: before trashing paths one must determine if the path is in 
an EZ. If yes, the path cannot be renamed to the user's trash and must 
(enforced by the NN) be renamed to an EZ-local trash dir. This patch 
catches/logs/ignores the attempt to determine if the path is in an EZ, so as a 
consequence the client will attempt to rename an EZ path to the user's trash 
and obviously fail.

We need to avoid the catch/log anti-pattern and haphazardly changing unrelated 
code to ignore exceptions. It's perhaps a one-in-a-million chance to fail, and 
will in all likelihood succeed if retried, but it's bothersome when ramping up 
EZs at scale...
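
For context, a minimal client-side sketch of the trash-root decision described 
above, assuming generic FileSystem/Trash usage rather than the exact code 
touched by the patch:

{code:java}
// Minimal sketch (assumption: generic FileSystem/Trash usage, not the code
// changed by HDFS-11687). The trash root must be resolved per path: for a path
// inside an encryption zone it is the EZ-local /<zone>/.Trash/<user>, otherwise
// the user's home trash. If the lookup's exception is swallowed, the client
// falls back to the home trash and the NameNode rejects the cross-EZ rename.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.Trash;

public class TrashRootExample {
  public static void moveToTrash(Configuration conf, Path path) throws Exception {
    FileSystem fs = path.getFileSystem(conf);
    // getTrashRoot() asks the filesystem; for EZ paths it returns the
    // EZ-local trash directory instead of /user/<name>/.Trash.
    Path trashRoot = fs.getTrashRoot(path);
    System.out.println("Trash root for " + path + " is " + trashRoot);
    // moveToAppropriateTrash performs the rename into that trash root.
    Trash.moveToAppropriateTrash(fs, path, conf);
  }
}
{code}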

> Add new public encryption APIs required by Hive
> ---
>
> Key: HDFS-11687
> URL: https://issues.apache.org/jira/browse/HDFS-11687
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: encryption
>Affects Versions: 2.6.5
>Reporter: Andrew Wang
>Assignee: Lei (Eddy) Xu
>Priority: Major
> Fix For: 2.9.0, 3.0.0-alpha4
>
> Attachments: HDFS-11687.00.patch, HDFS-11687.01.patch, 
> HDFS-11687.02.patch, HDFS-11687.03.patch
>
>
> As discovered on HADOOP-14333, Hive is using reflection to get a DFSClient 
> for its encryption shim. We should provide proper public APIs for getting 
> this information.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1754) getContainerWithPipeline fails with PipelineNotFoundException

2019-07-11 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1754?focusedWorklogId=275636&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-275636
 ]

ASF GitHub Bot logged work on HDDS-1754:


Author: ASF GitHub Bot
Created on: 12/Jul/19 04:13
Start Date: 12/Jul/19 04:13
Worklog Time Spent: 10m 
  Work Description: supratimdeka commented on pull request #1081: 
HDDS-1754. getContainerWithPipeline fails with PipelineNotFoundException. 
Contributed by Supratim Deka
URL: https://github.com/apache/hadoop/pull/1081
 
 
   https://issues.apache.org/jira/browse/HDDS-1754
   
   DeadNodeHandler can clean up the pipeline while containers are still in 
CLOSING state.
   Modified getContainerWithPipeline() to refer to the pipeline only if the 
container is in OPEN state.
   In CLOSING state, the read pipeline will be constructed from the replicas 
known to SCM - this is the already existing behavior for CLOSED state.
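
   As a toy illustration of the decision described above (hypothetical names, 
not the real SCM classes): use the live pipeline only for OPEN containers and 
fall back to a replica-derived node list otherwise.

{code:java}
// Toy sketch of the described behavior; State and the node lists stand in for
// the actual ContainerInfo/Pipeline types in SCM.
import java.util.List;

public class ReadPipelineExample {
  enum State { OPEN, CLOSING, CLOSED }

  static List<String> resolveReadNodes(State state, List<String> pipelineNodes,
      List<String> replicaNodes) {
    if (state == State.OPEN) {
      return pipelineNodes;  // only an OPEN container still has a live pipeline
    }
    return replicaNodes;     // CLOSING/CLOSED: pipeline may already be removed
  }
}
{code}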
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 275636)
Time Spent: 10m
Remaining Estimate: 0h

> getContainerWithPipeline fails with PipelineNotFoundException
> -
>
> Key: HDDS-1754
> URL: https://issues.apache.org/jira/browse/HDDS-1754
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: SCM
>Affects Versions: 0.4.0
>Reporter: Mukul Kumar Singh
>Assignee: Supratim Deka
>Priority: Major
>  Labels: MiniOzoneChaosCluster, pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Once a pipeline is closed or finalized, it may not have been able to close 
> all the containers inside the pipeline. 
> getContainerWithPipeline will then try to fetch the pipeline state from the 
> pipelineManager after the pipeline has been closed.
> {code}
> 2019-07-02 20:48:20,370 INFO  ipc.Server (Server.java:logException(2726)) - 
> IPC Server handler 13 on 50130, call Call#17339 Retry#0 
> org.apache.hadoop.hdds.scm.protocol.StorageContainerLocationProtocol.getContainerWithPipeline
>  from 192.168.0.2:51452
> org.apache.hadoop.hdds.scm.pipeline.PipelineNotFoundException: 
> PipelineID=e1a7b16a-48d9-4194-9774-ad49ec9ad78b not found
> at 
> org.apache.hadoop.hdds.scm.pipeline.PipelineStateMap.getPipeline(PipelineStateMap.java:132)
> at 
> org.apache.hadoop.hdds.scm.pipeline.PipelineStateManager.getPipeline(PipelineStateManager.java:66)
> at 
> org.apache.hadoop.hdds.scm.pipeline.SCMPipelineManager.getPipeline(SCMPipelineManager.java:184)
> at 
> org.apache.hadoop.hdds.scm.server.SCMClientProtocolServer.getContainerWithPipeline(SCMClientProtocolServer.java:244)
> at 
> org.apache.hadoop.ozone.protocolPB.StorageContainerLocationProtocolServerSideTranslatorPB.getContainerWithPipeline(StorageContainerLocationProtocolServerSideTranslatorPB.java:144)
> at 
> org.apache.hadoop.hdds.protocol.proto.StorageContainerLocationProtocolProtos$StorageContainerLocationProtocolService$2.callBlockingMethod(StorageContainerLocationProtocolProtos.java:16390)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:524)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1025)
> at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:876)
> at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:822)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2682)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1713) ReplicationManager fail to find proper node topology based on Datanode details from heartbeat

2019-07-11 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1713?focusedWorklogId=275637&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-275637
 ]

ASF GitHub Bot logged work on HDDS-1713:


Author: ASF GitHub Bot
Created on: 12/Jul/19 04:13
Start Date: 12/Jul/19 04:13
Worklog Time Spent: 10m 
  Work Description: nandakumar131 commented on pull request #1008: 
HDDS-1713. ReplicationManager fail to find proper node topology based…
URL: https://github.com/apache/hadoop/pull/1008#discussion_r302821281
 
 

 ##
 File path: 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/server/SCMDatanodeHeartbeatDispatcher.java
 ##
 @@ -99,6 +99,9 @@ public SCMDatanodeHeartbeatDispatcher(NodeManager 
nodeManager,
   commands = nodeManager.getCommandQueue(dnID);
 
 } else {
+  // Get the datanode details again from node manager with the topology 
info
+  // for registered datanodes.
+  datanodeDetails = nodeManager.getNode(datanodeDetails.getIpAddress());
 
 Review comment:
   > More than one DN instances on the same machine are most likely from 
test/dev environment such as MiniOzoneCluster. In production, even containers 
in K8S has dedicate IPs.
   
   I agree, but the problem here is that after this change a test/dev 
environment where more than one datanode process runs on the same machine will 
not even work properly. Heartbeats from different datanode processes (running 
on the same machine) will be mapped to a single process, and all the other 
datanode processes will be marked as dead even though they are heartbeating.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 275637)
Time Spent: 1h 10m  (was: 1h)

> ReplicationManager fail to find proper node topology based on Datanode 
> details from heartbeat
> -
>
> Key: HDDS-1713
> URL: https://issues.apache.org/jira/browse/HDDS-1713
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> DN does not include the topology info in its heartbeat message for 
> container reports/pipeline reports.
> SCM is where the topology information is available. During heartbeat 
> processing, we should not rely on the DatanodeDetails from the report to 
> choose datanodes for closing containers. Otherwise, the datanode locations of 
> all existing container replicas will fall back to /default-rack.
>  
> The fix is to retrieve the corresponding datanode locations from the SCM 
> nodemanager, which has authoritative network topology information. 



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1754) getContainerWithPipeline fails with PipelineNotFoundException

2019-07-11 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1754?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-1754:
-
Labels: MiniOzoneChaosCluster pull-request-available  (was: 
MiniOzoneChaosCluster)

> getContainerWithPipeline fails with PipelineNotFoundException
> -
>
> Key: HDDS-1754
> URL: https://issues.apache.org/jira/browse/HDDS-1754
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: SCM
>Affects Versions: 0.4.0
>Reporter: Mukul Kumar Singh
>Assignee: Supratim Deka
>Priority: Major
>  Labels: MiniOzoneChaosCluster, pull-request-available
>
> Once a pipeline is closed or finalized, it may not have been able to close 
> all the containers inside the pipeline. 
> getContainerWithPipeline will then try to fetch the pipeline state from the 
> pipelineManager after the pipeline has been closed.
> {code}
> 2019-07-02 20:48:20,370 INFO  ipc.Server (Server.java:logException(2726)) - 
> IPC Server handler 13 on 50130, call Call#17339 Retry#0 
> org.apache.hadoop.hdds.scm.protocol.StorageContainerLocationProtocol.getContainerWithPipeline
>  from 192.168.0.2:51452
> org.apache.hadoop.hdds.scm.pipeline.PipelineNotFoundException: 
> PipelineID=e1a7b16a-48d9-4194-9774-ad49ec9ad78b not found
> at 
> org.apache.hadoop.hdds.scm.pipeline.PipelineStateMap.getPipeline(PipelineStateMap.java:132)
> at 
> org.apache.hadoop.hdds.scm.pipeline.PipelineStateManager.getPipeline(PipelineStateManager.java:66)
> at 
> org.apache.hadoop.hdds.scm.pipeline.SCMPipelineManager.getPipeline(SCMPipelineManager.java:184)
> at 
> org.apache.hadoop.hdds.scm.server.SCMClientProtocolServer.getContainerWithPipeline(SCMClientProtocolServer.java:244)
> at 
> org.apache.hadoop.ozone.protocolPB.StorageContainerLocationProtocolServerSideTranslatorPB.getContainerWithPipeline(StorageContainerLocationProtocolServerSideTranslatorPB.java:144)
> at 
> org.apache.hadoop.hdds.protocol.proto.StorageContainerLocationProtocolProtos$StorageContainerLocationProtocolService$2.callBlockingMethod(StorageContainerLocationProtocolProtos.java:16390)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:524)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1025)
> at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:876)
> at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:822)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2682)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14563) Enhance interface about recommissioning/decommissioning

2019-07-11 Thread He Xiaoqiao (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14563?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16883508#comment-16883508
 ] 

He Xiaoqiao commented on HDFS-14563:


Thanks a lot [~kihwal] for your information and the helpful example patch. I 
will update the docs this week and then invite folks to review again.

> Enhance interface about recommissioning/decommissioning
> ---
>
> Key: HDFS-14563
> URL: https://issues.apache.org/jira/browse/HDFS-14563
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs-client, namenode
>Reporter: He Xiaoqiao
>Assignee: He Xiaoqiao
>Priority: Major
> Attachments: HDFS-14563.001.patch, HDFS-14563.002.patch, mt_mode-2.txt
>
>
> In the current implementation, if we need to decommission or recommission 
> one datanode, the only way is to add the datanode to the include or exclude 
> file under the namenode configuration path, then execute the command 
> `bin/hadoop dfsadmin -refreshNodes`, which triggers the namenode to reload 
> include/exclude and start recommissioning or decommissioning the datanode.
> The shortcomings of this approach are:
> a. The namenode reloads the include/exclude configuration files from disk; if 
> I/O load is high, the handler may be blocked.
> b. The namenode has to process every datanode in the include and exclude 
> configurations; if there are many datanodes pending processing (very common 
> for a large cluster), the namenode may hang for hundreds of seconds in the 
> worst case, waiting for recommission/decommission to finish while holding the 
> write lock.
> I think we should expose a lightweight interface to recommission or 
> decommission a single datanode, so we can operate on datanodes with dfsadmin 
> more smoothly.
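
Purely as an illustration of the shape such a lightweight interface might take 
(an assumption for discussion, not an existing or agreed-upon API), a 
single-datanode admin call could look like:

{code:java}
// Hypothetical sketch only; no such protocol exists in DFSAdmin today.
// The idea is to act on one datanode without a full include/exclude reload.
import java.io.IOException;

public interface SingleNodeAdminProtocol {
  /** Start decommissioning exactly one datanode, e.g. "dn1.example.com:9866". */
  void decommission(String datanodeHostPort) throws IOException;

  /** Return a previously decommissioning/decommissioned datanode to service. */
  void recommission(String datanodeHostPort) throws IOException;
}
{code}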



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14547) DirectoryWithQuotaFeature.quota costs additional memory even the storage type quota is not set.

2019-07-11 Thread Jinglun (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14547?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16883504#comment-16883504
 ] 

Jinglun commented on HDFS-14547:


Hi [~xkrogen], I upload a new patch for branch-2.9, with all lambdas changed to 
anonymous inner class. Could you have a review of it please. Thanks.

> DirectoryWithQuotaFeature.quota costs additional memory even the storage type 
> quota is not set.
> ---
>
> Key: HDFS-14547
> URL: https://issues.apache.org/jira/browse/HDFS-14547
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.1.0
>Reporter: Jinglun
>Assignee: Jinglun
>Priority: Major
> Fix For: 3.0.4, 3.3.0, 3.2.1, 3.1.3
>
> Attachments: HDFS-14547-branch-2.9.001.patch, HDFS-14547-design, 
> HDFS-14547-patch003-Test Report.pdf, HDFS-14547.001.patch, 
> HDFS-14547.002.patch, HDFS-14547.003.patch, HDFS-14547.004.patch, 
> HDFS-14547.005.patch, HDFS-14547.006.patch, HDFS-14547.007.patch
>
>
> Our XiaoMi HDFS is considering upgrading from 2.6 to 3.1. We notice the 
> storage type quota 'tsCounts' is instantiated as 
> EnumCounters(StorageType.class), so it costs a long[5] even 
> if we don't have any storage type quota on this inode (only a space quota or 
> name quota).
> In our cluster we have many dirs with quota and the NameNode's memory is 
> under pressure, so the additional cost will be a problem.
>  See DirectoryWithQuotaFeature.Builder().
>  
> {code:java}
> class DirectoryWithQuotaFeature$Builder {
>   public Builder() {
>this.quota = new QuotaCounts.Builder().nameSpace(DEFAULT_NAMESPACE_QUOTA).
>storageSpace(DEFAULT_STORAGE_SPACE_QUOTA).
>typeSpaces(DEFAULT_STORAGE_SPACE_QUOTA).build();// set default value -1.
>this.usage = new QuotaCounts.Builder().nameSpace(1).build();
>   }
>   public Builder typeSpaces(long val) {// set default value.
>this.tsCounts.reset(val);
>return this;
>   }
> }
> class QuotaCounts$Builder {
>   public Builder() {
> this.nsSsCounts = new EnumCounters(Quota.class);
> this.tsCounts = new EnumCounters(StorageType.class);
>   }
> }
> class EnumCounters {
>   public EnumCounters(final Class enumClass) {
> final E[] enumConstants = enumClass.getEnumConstants();
> Preconditions.checkNotNull(enumConstants);
> this.enumClass = enumClass;
> this.counters = new long[enumConstants.length];// new a long array here.
>   }
> }
> {code}
> Related to HDFS-14542.
>  
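
A quick self-contained illustration of the memory cost the description points 
out; the toy enum below mirrors the five storage types and is only for 
demonstration, not the actual Hadoop classes.

{code:java}
// Each EnumCounters over the storage-type enum allocates one long per enum
// constant, whether or not any storage type quota is actually set.
public class QuotaMemoryExample {
  enum StorageType { RAM_DISK, SSD, DISK, ARCHIVE, PROVIDED }

  public static void main(String[] args) {
    long[] counters = new long[StorageType.values().length]; // long[5]
    System.out.println("longs allocated per quota'd directory: " + counters.length);
  }
}
{code}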



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



Namenode crashes in 2.7.2 and couldn't be started

2019-07-11 Thread kumar r
 Hi,

In Hadoop-2.7.2, I am getting the same error reported here:
https://issues.apache.org/jira/browse/HDFS-12985

Is there a patch available for the hadoop-2.7.2 version? How can I restart the
namenode without hitting the NPE?

Is there any way to bring the namenode back up without modifying the source?

Thanks,
Kumar


[jira] [Updated] (HDFS-14547) DirectoryWithQuotaFeature.quota costs additional memory even the storage type quota is not set.

2019-07-11 Thread Jinglun (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14547?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jinglun updated HDFS-14547:
---
Attachment: HDFS-14547-branch-2.9.001.patch

> DirectoryWithQuotaFeature.quota costs additional memory even the storage type 
> quota is not set.
> ---
>
> Key: HDFS-14547
> URL: https://issues.apache.org/jira/browse/HDFS-14547
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.1.0
>Reporter: Jinglun
>Assignee: Jinglun
>Priority: Major
> Fix For: 3.0.4, 3.3.0, 3.2.1, 3.1.3
>
> Attachments: HDFS-14547-branch-2.9.001.patch, HDFS-14547-design, 
> HDFS-14547-patch003-Test Report.pdf, HDFS-14547.001.patch, 
> HDFS-14547.002.patch, HDFS-14547.003.patch, HDFS-14547.004.patch, 
> HDFS-14547.005.patch, HDFS-14547.006.patch, HDFS-14547.007.patch
>
>
> Our XiaoMi HDFS is considering upgrading from 2.6 to 3.1. We notice the 
> storage type quota 'tsCounts' is instantiated as 
> EnumCounters(StorageType.class), so it costs a long[5] even 
> if we don't have any storage type quota on this inode (only a space quota or 
> name quota).
> In our cluster we have many dirs with quota and the NameNode's memory is 
> under pressure, so the additional cost will be a problem.
>  See DirectoryWithQuotaFeature.Builder().
>  
> {code:java}
> class DirectoryWithQuotaFeature$Builder {
>   public Builder() {
>this.quota = new QuotaCounts.Builder().nameSpace(DEFAULT_NAMESPACE_QUOTA).
>storageSpace(DEFAULT_STORAGE_SPACE_QUOTA).
>typeSpaces(DEFAULT_STORAGE_SPACE_QUOTA).build();// set default value -1.
>this.usage = new QuotaCounts.Builder().nameSpace(1).build();
>   }
>   public Builder typeSpaces(long val) {// set default value.
>this.tsCounts.reset(val);
>return this;
>   }
> }
> class QuotaCounts$Builder {
>   public Builder() {
> this.nsSsCounts = new EnumCounters(Quota.class);
> this.tsCounts = new EnumCounters(StorageType.class);
>   }
> }
> class EnumCounters {
>   public EnumCounters(final Class enumClass) {
> final E[] enumConstants = enumClass.getEnumConstants();
> Preconditions.checkNotNull(enumConstants);
> this.enumClass = enumClass;
> this.counters = new long[enumConstants.length];// new a long array here.
>   }
> }
> {code}
> Related to HDFS-14542.
>  



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14579) In refreshNodes, avoid performing a DNS lookup while holding the write lock

2019-07-11 Thread He Xiaoqiao (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14579?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16883501#comment-16883501
 ] 

He Xiaoqiao commented on HDFS-14579:


Thanks [~sodonnell],
{quote}I would suggest we resolve this Jira now as 'not an issue' and then if 
we can gather some jstacks in the future that proves it is a problem sometimes, 
we can reopen{quote}
Please go ahead. I will provide more information, including jstacks, when we 
reproduce this online next time. Thanks again.

> In refreshNodes, avoid performing a DNS lookup while holding the write lock
> ---
>
> Key: HDFS-14579
> URL: https://issues.apache.org/jira/browse/HDFS-14579
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.3.0
>Reporter: Stephen O'Donnell
>Assignee: Stephen O'Donnell
>Priority: Major
> Attachments: HDFS-14579.001.patch
>
>
> When refreshNodes is called on a large cluster, or a cluster where DNS is not 
> performing well, it can cause the namenode to hang for a long time. This is 
> because the refreshNodes operation holds the global write lock while it is 
> running. Most of the refreshNodes code is simple and hence fast, but 
> unfortunately it performs a DNS lookup for each host in the cluster while the 
> lock is held. 
> Right now, it calls:
> {code}
>   public void refreshNodes(final Configuration conf) throws IOException {
> refreshHostsReader(conf);
> namesystem.writeLock();
> try {
>   refreshDatanodes();
>   countSoftwareVersions();
> } finally {
>   namesystem.writeUnlock();
> }
>   }
> {code}
> The line refreshHostsReader(conf); reads the new config file and does a DNS 
> lookup on each entry - the write lock is not held here. Then the main work is 
> done here:
> {code}
>   private void refreshDatanodes() {
> final Map<String, DatanodeDescriptor> copy;
> synchronized (this) {
>   copy = new HashMap<>(datanodeMap);
> }
> for (DatanodeDescriptor node : copy.values()) {
>   // Check if not include.
>   if (!hostConfigManager.isIncluded(node)) {
> node.setDisallowed(true);
>   } else {
> long maintenanceExpireTimeInMS =
> hostConfigManager.getMaintenanceExpirationTimeInMS(node);
> if (node.maintenanceNotExpired(maintenanceExpireTimeInMS)) {
>   datanodeAdminManager.startMaintenance(
>   node, maintenanceExpireTimeInMS);
> } else if (hostConfigManager.isExcluded(node)) {
>   datanodeAdminManager.startDecommission(node);
> } else {
>   datanodeAdminManager.stopMaintenance(node);
>   datanodeAdminManager.stopDecommission(node);
> }
>   }
>   node.setUpgradeDomain(hostConfigManager.getUpgradeDomain(node));
> }
>   }
> {code}
> All the isIncluded(), isExcluded() methods call node.getResolvedAddress() 
> which does the DNS lookup. We could probably change things to perform all the 
> DNS lookups outside of the write lock, and then take the lock and process the 
> nodes. Also change or overload isIncluded() etc to take the inetAddress 
> rather than the datanode descriptor.
> It would not shorten the time the operation takes to run overall, but it 
> would move the long duration out of the write lock and avoid blocking the 
> namenode for the entire time.
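
As a rough sketch of the restructuring suggested above (a hypothetical helper, 
not the actual DatanodeManager code): resolve every host's address before 
taking the write lock, then use the pre-resolved addresses while the lock is 
held.

{code:java}
// Hedged sketch only: pre-resolve DNS outside the namesystem write lock.
import java.net.InetAddress;
import java.net.UnknownHostException;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class RefreshNodesSketch {
  static Map<String, InetAddress> preResolve(List<String> hosts) {
    Map<String, InetAddress> resolved = new HashMap<>();
    for (String host : hosts) {
      try {
        resolved.put(host, InetAddress.getByName(host)); // DNS lookup, no lock held
      } catch (UnknownHostException e) {
        // Unresolvable host: record nothing; treat it as neither included
        // nor excluded when the lock is later taken.
      }
    }
    return resolved;
  }
}
{code}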



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1554) Create disk tests for fault injection test

2019-07-11 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1554?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16883499#comment-16883499
 ] 

Hadoop QA commented on HDDS-1554:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
40s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
1s{color} | {color:green} No case conflicting files found. {color} |
| {color:blue}0{color} | {color:blue} yamllint {color} | {color:blue}  0m  
0s{color} | {color:blue} yamllint was not available. {color} |
| {color:blue}0{color} | {color:blue} shelldocs {color} | {color:blue}  0m  
0s{color} | {color:blue} Shelldocs was not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 23 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  3m 
38s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 10m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  4m 
37s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m 59s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  3m  
1s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue}  5m 
22s{color} | {color:blue} Used deprecated FindBugs config; considering 
switching to SpotBugs. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  8m 
50s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
28s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  4m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  4m 
53s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 48s{color} | {color:orange} hadoop-ozone: The patch generated 2 new + 0 
unchanged - 0 fixed = 2 total (was 0) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} shellcheck {color} | {color:green}  0m 
 0s{color} | {color:green} There were no new shellcheck issues. {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 7 line(s) that end in whitespace. Use git 
apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m 
17s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m  8s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  8m 
50s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  5m 
17s{color} | {color:green} hadoop-hdds in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 33m 41s{color} 
| {color:red} hadoop-ozone in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
51s{color} | {color:green} 

[jira] [Commented] (HDFS-14313) Get hdfs used space from FsDatasetImpl#volumeMap#ReplicaInfo in memory instead of df/du

2019-07-11 Thread Lisheng Sun (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14313?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16883490#comment-16883490
 ] 

Lisheng Sun commented on HDFS-14313:


Ping [~jojochuang]:) Could you continue to help review it? Thank you.

> Get hdfs used space from FsDatasetImpl#volumeMap#ReplicaInfo in memory  
> instead of df/du
> 
>
> Key: HDFS-14313
> URL: https://issues.apache.org/jira/browse/HDFS-14313
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode, performance
>Affects Versions: 2.6.0, 2.7.0, 2.8.0, 2.9.0, 3.0.0, 3.1.0
>Reporter: Lisheng Sun
>Assignee: Lisheng Sun
>Priority: Major
> Attachments: HDFS-14313.000.patch, HDFS-14313.001.patch, 
> HDFS-14313.002.patch, HDFS-14313.003.patch, HDFS-14313.004.patch, 
> HDFS-14313.005.patch, HDFS-14313.006.patch
>
>
> The two existing ways of getting used space, DU and DF, are insufficient.
>  #  Running DU across lots of disks is very expensive, and running all of the 
> processes at the same time creates a noticeable IO spike.
>  #  Running DF is inaccurate when the disk is shared by multiple datanodes or 
> other servers.
>  Getting the hdfs used space from FsDatasetImpl#volumeMap#ReplicaInfos in 
> memory is very cheap and accurate. 
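
To make the idea concrete, a hedged illustration with hypothetical types (not 
the actual FsDatasetImpl internals): sum the on-disk length of every replica 
the datanode already tracks in memory instead of shelling out to du/df.

{code:java}
// Hypothetical sketch only; ReplicaInfo and volumeMap stand in for the real
// datanode structures. The sum is O(number of replicas), all in memory.
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class InMemoryUsedSpaceExample {
  static final class ReplicaInfo {
    final long bytesOnDisk;
    ReplicaInfo(long bytesOnDisk) { this.bytesOnDisk = bytesOnDisk; }
  }

  private final Map<Long, ReplicaInfo> volumeMap = new ConcurrentHashMap<>();

  long getUsedSpace() {
    // No du fork, no df inaccuracy: just add up what this datanode stores.
    return volumeMap.values().stream().mapToLong(r -> r.bytesOnDisk).sum();
  }
}
{code}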



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-14458) Report pmem stats to namenode

2019-07-11 Thread Feilong He (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14458?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16883480#comment-16883480
 ] 

Feilong He edited comment on HDFS-14458 at 7/12/19 2:42 AM:


[^HDFS-14458.004.patch] has been uploaded to fix code style issue.


was (Author: philohe):
[^HDFS-14458.004.patch] has been uploaded to fix checkstyle issue.

> Report pmem stats to namenode
> -
>
> Key: HDFS-14458
> URL: https://issues.apache.org/jira/browse/HDFS-14458
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Feilong He
>Assignee: Feilong He
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HDFS-14458.000.patch, HDFS-14458.001.patch, 
> HDFS-14458.002.patch, HDFS-14458.003.patch, HDFS-14458.004.patch
>
>
> Currently, two important stats should be reported to NameNode: cache used and 
> cache capacity. 



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14458) Report pmem stats to namenode

2019-07-11 Thread Feilong He (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14458?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Feilong He updated HDFS-14458:
--
Attachment: HDFS-14458.004.patch

> Report pmem stats to namenode
> -
>
> Key: HDFS-14458
> URL: https://issues.apache.org/jira/browse/HDFS-14458
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Feilong He
>Assignee: Feilong He
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HDFS-14458.000.patch, HDFS-14458.001.patch, 
> HDFS-14458.002.patch, HDFS-14458.003.patch, HDFS-14458.004.patch
>
>
> Currently, two important stats should be reported to NameNode: cache used and 
> cache capacity. 



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14458) Report pmem stats to namenode

2019-07-11 Thread Feilong He (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14458?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16883480#comment-16883480
 ] 

Feilong He commented on HDFS-14458:
---

[^HDFS-14458.004.patch] has been uploaded to fix checkstyle issue.

> Report pmem stats to namenode
> -
>
> Key: HDFS-14458
> URL: https://issues.apache.org/jira/browse/HDFS-14458
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Feilong He
>Assignee: Feilong He
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HDFS-14458.000.patch, HDFS-14458.001.patch, 
> HDFS-14458.002.patch, HDFS-14458.003.patch, HDFS-14458.004.patch
>
>
> Currently, two important stats should be reported to NameNode: cache used and 
> cache capacity. 



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12733) Option to disable to namenode local edits

2019-07-11 Thread Ayush Saxena (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-12733?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16883479#comment-16883479
 ] 

Ayush Saxena commented on HDFS-12733:
-

Thanx Konstantin for the feedback!!!
bq.  let's think of using existing parameters for disabling local writes in HA. 
For example, in HA mode we can treat dfs.namenode.edits.dir = null as not 
having local directories for writing.

I guess then we can do something like this. Just wanted to make sure we don't 
change the existing behavior; not having the local edits should be an explicit 
call.

Most importantly we should clearly document this behavior too.

> Option to disable to namenode local edits
> -
>
> Key: HDFS-12733
> URL: https://issues.apache.org/jira/browse/HDFS-12733
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode, performance
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
>Priority: Major
> Attachments: HDFS-12733-001.patch, HDFS-12733-002.patch, 
> HDFS-12733-003.patch, HDFS-12733.004.patch, HDFS-12733.005.patch, 
> HDFS-12733.006.patch
>
>
> As of now, edits are written to both local and shared locations, which is 
> redundant since local edits are never used in an HA setup.
> Disabling local edits gives a small performance improvement.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14644) That replication of block failed leads to decommission is blocked when the number of replicas of block is greater than the number of datanode

2019-07-11 Thread Wei-Chiu Chuang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14644?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16883466#comment-16883466
 ] 

Wei-Chiu Chuang commented on HDFS-14644:


[~sodonnell] FYI.

> That replication of block failed leads to decommission is blocked when the 
> number of replicas of block is greater than the number of datanode
> -
>
> Key: HDFS-14644
> URL: https://issues.apache.org/jira/browse/HDFS-14644
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Affects Versions: 3.1.1, 2.9.2, 3.0.3, 2.8.5, 2.7.7
>Reporter: Lisheng Sun
>Priority: Major
>
> 2019-07-10,15:37:18,028 WARN 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to 
> place enough replicas, still in need of 5 to reach 10 
> (unavailableStorages=[DISK, ARCHIVE], 
> storagePolicy=BlockStoragePolicy\{HOT:7, storageTypes=[DISK], 
> creationFallbacks=[], replicationFallbacks=[ARCHIVE]}, newBlock=false) All 
> required storage types are unavailable: unavailableStorages=[DISK, ARCHIVE], 
> storagePolicy=BlockStoragePolicy\{HOT:7, storageTypes=[DISK], 
> creationFallbacks=[], replicationFallbacks=[ARCHIVE]}
> 2019-07-10,15:37:18,028 WARN 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to 
> place enough replicas, still in need of 5 to reach 10 
> (unavailableStorages=[], storagePolicy=BlockStoragePolicy\{HOT:7, 
> storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}, 
> newBlock=false) For more information, please enable DEBUG log level on 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14644) That replication of block failed leads to decommission is blocked when the number of replicas of block is greater than the number of datanode

2019-07-11 Thread Wei-Chiu Chuang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14644?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16883464#comment-16883464
 ] 

Wei-Chiu Chuang commented on HDFS-14644:


[~leosun08] I think this is expected behavior. I am thinking we should surface 
this issue to the users and make it easier to understand. What do you think?

> That replication of block failed leads to decommission is blocked when the 
> number of replicas of block is greater than the number of datanode
> -
>
> Key: HDFS-14644
> URL: https://issues.apache.org/jira/browse/HDFS-14644
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Affects Versions: 3.1.1, 2.9.2, 3.0.3, 2.8.5, 2.7.7
>Reporter: Lisheng Sun
>Priority: Major
>
> 2019-07-10,15:37:18,028 WARN 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to 
> place enough replicas, still in need of 5 to reach 10 
> (unavailableStorages=[DISK, ARCHIVE], 
> storagePolicy=BlockStoragePolicy\{HOT:7, storageTypes=[DISK], 
> creationFallbacks=[], replicationFallbacks=[ARCHIVE]}, newBlock=false) All 
> required storage types are unavailable: unavailableStorages=[DISK, ARCHIVE], 
> storagePolicy=BlockStoragePolicy\{HOT:7, storageTypes=[DISK], 
> creationFallbacks=[], replicationFallbacks=[ARCHIVE]}
> 2019-07-10,15:37:18,028 WARN 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to 
> place enough replicas, still in need of 5 to reach 10 
> (unavailableStorages=[], storagePolicy=BlockStoragePolicy\{HOT:7, 
> storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}, 
> newBlock=false) For more information, please enable DEBUG log level on 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14595) HDFS-11848 breaks API compatibility

2019-07-11 Thread Wei-Chiu Chuang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14595?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16883462#comment-16883462
 ] 

Wei-Chiu Chuang commented on HDFS-14595:


Thanks. Let's update the test to use the new API.

> HDFS-11848 breaks API compatibility
> ---
>
> Key: HDFS-14595
> URL: https://issues.apache.org/jira/browse/HDFS-14595
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.2.0, 3.1.2
>Reporter: Wei-Chiu Chuang
>Assignee: Siyao Meng
>Priority: Blocker
> Attachments: HDFS-14595.001.patch, hadoop_ 
> 36e1870eab904d5a6f12ecfb1fdb52ca08d95ac5 to 
> b241194d56f97ee372cbec7062bcf155bc3df662 compatibility report.htm
>
>
> Our internal tool caught an API compatibility issue with HDFS-11848.
> HDFS-11848 adds an additional parameter to 
> DistributedFileSystem.listOpenFiles(), but it doesn't keep the existing API.
> This can cause issue when upgrading from Hadoop 2.9.0/2.8.3/3.0.0 to 
> 3.0.1/3.1.0 and above.
> Suggest:
> (1) Add back the old API (which was added in HDFS-10480), and mark it 
> deprecated.
> (2) Update release doc to enforce running API compatibility check for each 
> releases.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14466) Add a regression test for HDFS-14323

2019-07-11 Thread Akira Ajisaka (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14466?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HDFS-14466:
-
   Resolution: Fixed
Fix Version/s: 3.1.3
   3.2.1
   3.3.0
   Status: Resolved  (was: Patch Available)

Committed this to trunk, branch-3.2, and branch-3.1. Thanks [~ebyhr] for the 
report and thanks [~iwasakims] for the fix.

> Add a regression test for HDFS-14323
> 
>
> Key: HDFS-14466
> URL: https://issues.apache.org/jira/browse/HDFS-14466
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: fs, webhdfs
>Affects Versions: 3.2.0
>Reporter: Yuya Ebihara
>Assignee: Masatake Iwasaki
>Priority: Minor
>  Labels: webhdfs
> Fix For: 3.3.0, 3.2.1, 3.1.3
>
> Attachments: HADOOP-16258.001.patch, HDFS-14466.002.patch
>
>
> Recently, we upgraded hadoop library from 2.7.7 to 3.2.0. This issue happens 
> after the update. When we call FileSystem.listLocatedStatus with location 
> 'webhdfs://hadoop-master:50070/user/hive/warehouse/test_part/dt=1', the 
> internal calls are
>  * 2.7.7 
> [http://hadoop-master:50070/webhdfs/v1/user/hive/warehouse/test_part/dt=1?op=LISTSTATUS&user.name=xxx|http://hadoop-master:50070/webhdfs/v1/user/hive/warehouse/test_part/dt=1?op=LISTSTATUS&user.name=xxx%27,]
>  * 3.2.0 
> [http://hadoop-master:50070/webhdfs/v1/user/hive/warehouse/test_part/dt%253D1?op=LISTSTATUS&user.name=xxx]'
> As a result, it returns RemoteException with FileNotFoundException.
> {code:java}
> {"RemoteException":{"exception":"FileNotFoundException","javaClassName":"java.io.FileNotFoundException","message":"File
>  /user/hive/warehouse/test_part/dt%3D1 does not exist."}}
> {code}
> Could you please tell me whether it's a bug and the way to avoid it?



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14466) Add a regression test for HDFS-14323

2019-07-11 Thread Akira Ajisaka (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14466?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HDFS-14466:
-
Component/s: test

> Add a regression test for HDFS-14323
> 
>
> Key: HDFS-14466
> URL: https://issues.apache.org/jira/browse/HDFS-14466
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: fs, test, webhdfs
>Affects Versions: 3.2.0
>Reporter: Yuya Ebihara
>Assignee: Masatake Iwasaki
>Priority: Minor
>  Labels: webhdfs
> Fix For: 3.3.0, 3.2.1, 3.1.3
>
> Attachments: HADOOP-16258.001.patch, HDFS-14466.002.patch
>
>
> Recently, we upgraded hadoop library from 2.7.7 to 3.2.0. This issue happens 
> after the update. When we call FileSystem.listLocatedStatus with location 
> 'webhdfs://hadoop-master:50070/user/hive/warehouse/test_part/dt=1', the 
> internal calls are
>  * 2.7.7 
> [http://hadoop-master:50070/webhdfs/v1/user/hive/warehouse/test_part/dt=1?op=LISTSTATUS&user.name=xxx|http://hadoop-master:50070/webhdfs/v1/user/hive/warehouse/test_part/dt=1?op=LISTSTATUS&user.name=xxx%27,]
>  * 3.2.0 
> [http://hadoop-master:50070/webhdfs/v1/user/hive/warehouse/test_part/dt%253D1?op=LISTSTATUS&user.name=xxx]'
> As a result, it returns RemoteException with FileNotFoundException.
> {code:java}
> {"RemoteException":{"exception":"FileNotFoundException","javaClassName":"java.io.FileNotFoundException","message":"File
>  /user/hive/warehouse/test_part/dt%3D1 does not exist."}}
> {code}
> Could you please tell me whether it's a bug and the way to avoid it?



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14466) Add a regression test for HDFS-14323

2019-07-11 Thread Akira Ajisaka (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14466?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HDFS-14466:
-
Labels:   (was: webhdfs)

> Add a regression test for HDFS-14323
> 
>
> Key: HDFS-14466
> URL: https://issues.apache.org/jira/browse/HDFS-14466
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: fs, test, webhdfs
>Affects Versions: 3.2.0
>Reporter: Yuya Ebihara
>Assignee: Masatake Iwasaki
>Priority: Minor
> Fix For: 3.3.0, 3.2.1, 3.1.3
>
> Attachments: HADOOP-16258.001.patch, HDFS-14466.002.patch
>
>
> Recently, we upgraded hadoop library from 2.7.7 to 3.2.0. This issue happens 
> after the update. When we call FileSystem.listLocatedStatus with location 
> 'webhdfs://hadoop-master:50070/user/hive/warehouse/test_part/dt=1', the 
> internal calls are
>  * 2.7.7 
> [http://hadoop-master:50070/webhdfs/v1/user/hive/warehouse/test_part/dt=1?op=LISTSTATUS&user.name=xxx|http://hadoop-master:50070/webhdfs/v1/user/hive/warehouse/test_part/dt=1?op=LISTSTATUS&user.name=xxx%27,]
>  * 3.2.0 
> [http://hadoop-master:50070/webhdfs/v1/user/hive/warehouse/test_part/dt%253D1?op=LISTSTATUS&user.name=xxx]'
> As a result, it returns RemoteException with FileNotFoundException.
> {code:java}
> {"RemoteException":{"exception":"FileNotFoundException","javaClassName":"java.io.FileNotFoundException","message":"File
>  /user/hive/warehouse/test_part/dt%3D1 does not exist."}}
> {code}
> Could you please tell me whether it's a bug and the way to avoid it?



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1544) Support default Acls for volume, bucket, keys and prefix

2019-07-11 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1544?focusedWorklogId=275578&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-275578
 ]

ASF GitHub Bot logged work on HDDS-1544:


Author: ASF GitHub Bot
Created on: 12/Jul/19 01:11
Start Date: 12/Jul/19 01:11
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #1074: 
HDDS-1544. Support default Acls for volume, bucket, keys and prefix. 
Contributed by Ajay Kumar.
URL: https://github.com/apache/hadoop/pull/1074#discussion_r302796158
 
 

 ##
 File path: 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/helpers/OmOzoneAclMap.java
 ##
 @@ -49,53 +52,71 @@
 @SuppressWarnings("ProtocolBufferOrdinal")
 public class OmOzoneAclMap {
   // per Acl Type user:rights map
-  private ArrayList> aclMaps;
+  private ArrayList> accessAclMap;
+  private List defaultAclList;
 
   OmOzoneAclMap() {
-aclMaps = new ArrayList<>();
+accessAclMap = new ArrayList<>();
+defaultAclList = new ArrayList<>();
 for (OzoneAclType aclType : OzoneAclType.values()) {
-  aclMaps.add(aclType.ordinal(), new HashMap<>());
+  accessAclMap.add(aclType.ordinal(), new HashMap<>());
 }
   }
 
-  private Map getMap(OzoneAclType type) {
-return aclMaps.get(type.ordinal());
+  private Map getAccessAclMap(OzoneAclType type) {
+return accessAclMap.get(type.ordinal());
   }
 
   // For a given acl type and user, get the stored acl
   private BitSet getAcl(OzoneAclType type, String user) {
-return getMap(type).get(user);
+return getAccessAclMap(type).get(user);
   }
 
   public List getAcl() {
 List acls = new ArrayList<>();
 
+acls.addAll(getAccessAcls());
+acls.addAll(defaultAclList.stream().map(a -> 
 
 Review comment:
   whitespace:end of line
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 275578)
Time Spent: 3h 20m  (was: 3h 10m)

> Support default Acls for volume, bucket, keys and prefix
> 
>
> Key: HDDS-1544
> URL: https://issues.apache.org/jira/browse/HDDS-1544
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDDS-1544.00.patch
>
>  Time Spent: 3h 20m
>  Remaining Estimate: 0h
>
> Add dAcls for volume, bucket, keys and prefix



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1544) Support default Acls for volume, bucket, keys and prefix

2019-07-11 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1544?focusedWorklogId=275577&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-275577
 ]

ASF GitHub Bot logged work on HDDS-1544:


Author: ASF GitHub Bot
Created on: 12/Jul/19 01:11
Start Date: 12/Jul/19 01:11
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #1074: 
HDDS-1544. Support default Acls for volume, bucket, keys and prefix. 
Contributed by Ajay Kumar.
URL: https://github.com/apache/hadoop/pull/1074#discussion_r302796153
 
 

 ##
 File path: 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/helpers/OmOzoneAclMap.java
 ##
 @@ -145,9 +170,14 @@ public void removeAcl(OzoneAcl acl) throws OMException {
   // Add a new acl to the map
   public void addAcl(OzoneAclInfo acl) throws OMException {
 Objects.requireNonNull(acl, "Acl should not be null.");
-if (!getMap(acl.getType()).containsKey(acl.getName())) {
+if (acl.getAclScope().equals(OzoneAclInfo.OzoneAclScope.DEFAULT)) {
+  defaultAclList.add(acl);
+  return;
+}
+
 
 Review comment:
   whitespace:end of line
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 275577)
Time Spent: 3h 10m  (was: 3h)

> Support default Acls for volume, bucket, keys and prefix
> 
>
> Key: HDDS-1544
> URL: https://issues.apache.org/jira/browse/HDDS-1544
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDDS-1544.00.patch
>
>  Time Spent: 3h 10m
>  Remaining Estimate: 0h
>
> Add dAcls for volume, bucket, keys and prefix



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1544) Support default Acls for volume, bucket, keys and prefix

2019-07-11 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1544?focusedWorklogId=275599&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-275599
 ]

ASF GitHub Bot logged work on HDDS-1544:


Author: ASF GitHub Bot
Created on: 12/Jul/19 01:11
Start Date: 12/Jul/19 01:11
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #1074: 
HDDS-1544. Support default Acls for volume, bucket, keys and prefix. 
Contributed by Ajay Kumar.
URL: https://github.com/apache/hadoop/pull/1074#discussion_r302796226
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/KeyManagerImpl.java
 ##
 @@ -983,6 +1019,48 @@ public OmMultipartInfo 
applyInitiateMultipartUpload(OmKeyArgs keyArgs,
 }
   }
 
+  private List getAclsForKey(OmKeyArgs keyArgs, 
+  OmVolumeArgs volArgs, OmBucketInfo bucketInfo) {
+List acls = new ArrayList<>(keyArgs.getAcls().size());
+
+keyArgs.getAcls().stream().map(OzoneAcl::toProtobuf).
+collect(Collectors.toList());
+
+//if(volArgs != null) {
+//  acls.addAll(volArgs.getAclMap().getDefaultAclList());
+//}
 
 Review comment:
   whitespace:end of line
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 275599)
Time Spent: 6h 50m  (was: 6h 40m)

> Support default Acls for volume, bucket, keys and prefix
> 
>
> Key: HDDS-1544
> URL: https://issues.apache.org/jira/browse/HDDS-1544
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDDS-1544.00.patch
>
>  Time Spent: 6h 50m
>  Remaining Estimate: 0h
>
> Add dAcls for volume, bucket, keys and prefix



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1713) ReplicationManager fail to find proper node topology based on Datanode details from heartbeat

2019-07-11 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1713?focusedWorklogId=275605&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-275605
 ]

ASF GitHub Bot logged work on HDDS-1713:


Author: ASF GitHub Bot
Created on: 12/Jul/19 01:20
Start Date: 12/Jul/19 01:20
Worklog Time Spent: 10m 
  Work Description: swagle commented on pull request #1008: HDDS-1713. 
ReplicationManager fail to find proper node topology based…
URL: https://github.com/apache/hadoop/pull/1008#discussion_r302797377
 
 

 ##
 File path: 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/server/SCMDatanodeHeartbeatDispatcher.java
 ##
 @@ -99,6 +99,9 @@ public SCMDatanodeHeartbeatDispatcher(NodeManager 
nodeManager,
   commands = nodeManager.getCommandQueue(dnID);
 
 } else {
+  // Get the datanode details again from node manager with the topology 
info
+  // for registered datanodes.
+  datanodeDetails = nodeManager.getNode(datanodeDetails.getIpAddress());
 
 Review comment:
   But the IP address can change for the same datanode. In fact, we have a Jira to
remove it from the yaml file in the future: HDDS-1480
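
A rough, self-contained sketch of the lookup pattern under discussion; the real
NodeManager/DatanodeDetails API is not used, and every class and field name
below is a stand-in for illustration only. The point is that the registry is
keyed by the datanode UUID, so the authoritative, topology-aware copy is found
even if the reported IP address has changed.

import java.util.HashMap;
import java.util.Map;
import java.util.UUID;

public class NodeLookupSketch {

  // Stand-in for DatanodeDetails: the UUID is stable, while the IP address and
  // network location may differ between the heartbeat copy and the registered copy.
  static final class DnDetails {
    final UUID uuid;
    final String ipAddress;
    final String networkLocation;
    DnDetails(UUID uuid, String ipAddress, String networkLocation) {
      this.uuid = uuid;
      this.ipAddress = ipAddress;
      this.networkLocation = networkLocation;
    }
  }

  // Stand-in for the node manager's registry of registered datanodes.
  private final Map<UUID, DnDetails> registered = new HashMap<>();

  void register(DnDetails dn) {
    registered.put(dn.uuid, dn);
  }

  // Resolve the authoritative details (with topology) for a heartbeat copy.
  DnDetails resolve(DnDetails fromHeartbeat) {
    DnDetails known = registered.get(fromHeartbeat.uuid);
    return known != null ? known : fromHeartbeat;
  }

  public static void main(String[] args) {
    NodeLookupSketch registry = new NodeLookupSketch();
    UUID id = UUID.randomUUID();
    registry.register(new DnDetails(id, "10.0.0.5", "/rack1"));

    // The heartbeat copy has no topology and, possibly, a new IP address.
    DnDetails reported = new DnDetails(id, "10.0.0.9", "/default-rack");
    System.out.println(registry.resolve(reported).networkLocation); // /rack1
  }
}

Keying on the UUID, rather than the IP address used in the hunk above, is one
way to address the concern raised in this comment.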
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 275605)
Time Spent: 1h  (was: 50m)

> ReplicationManager fail to find proper node topology based on Datanode 
> details from heartbeat
> -
>
> Key: HDDS-1713
> URL: https://issues.apache.org/jira/browse/HDDS-1713
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> The DN does not include the topology info in its heartbeat message for the
> container report/pipeline report.
> SCM is where the topology information is available. While processing a
> heartbeat, we should not rely on the DatanodeDetails from the report to choose
> datanodes for closed containers. Otherwise, the datanode locations of all
> existing container replicas will fall back to /default-rack.
>  
> The fix is to retrieve the corresponding datanode locations from the SCM
> NodeManager, which has the authoritative network topology information.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1544) Support default Acls for volume, bucket, keys and prefix

2019-07-11 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1544?focusedWorklogId=275597&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-275597
 ]

ASF GitHub Bot logged work on HDDS-1544:


Author: ASF GitHub Bot
Created on: 12/Jul/19 01:11
Start Date: 12/Jul/19 01:11
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #1074: 
HDDS-1544. Support default Acls for volume, bucket, keys and prefix. 
Contributed by Ajay Kumar.
URL: https://github.com/apache/hadoop/pull/1074#discussion_r302796219
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/KeyManagerImpl.java
 ##
 @@ -957,8 +994,7 @@ public OmMultipartInfo 
applyInitiateMultipartUpload(OmKeyArgs keyArgs,
   .setReplicationFactor(keyArgs.getFactor())
   .setOmKeyLocationInfos(Collections.singletonList(
   new OmKeyLocationInfoGroup(0, locations)))
-  .setAcls(keyArgs.getAcls().stream().map(a ->
-  OzoneAcl.toProtobuf(a)).collect(Collectors.toList()))
+  .setAcls(getAclsForKey(keyArgs, null, bucketInfo))
 
 Review comment:
   whitespace:end of line
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 275597)
Time Spent: 6.5h  (was: 6h 20m)

> Support default Acls for volume, bucket, keys and prefix
> 
>
> Key: HDDS-1544
> URL: https://issues.apache.org/jira/browse/HDDS-1544
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDDS-1544.00.patch
>
>  Time Spent: 6.5h
>  Remaining Estimate: 0h
>
> Add dAcls for volume, bucket, keys and prefix



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1544) Support default Acls for volume, bucket, keys and prefix

2019-07-11 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1544?focusedWorklogId=275581&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-275581
 ]

ASF GitHub Bot logged work on HDDS-1544:


Author: ASF GitHub Bot
Created on: 12/Jul/19 01:11
Start Date: 12/Jul/19 01:11
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #1074: 
HDDS-1544. Support default Acls for volume, bucket, keys and prefix. 
Contributed by Ajay Kumar.
URL: https://github.com/apache/hadoop/pull/1074#discussion_r302796170
 
 

 ##
 File path: 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/web/utils/OzoneUtils.java
 ##
 @@ -365,4 +369,29 @@ public static boolean checkIfAclBitIsSet(ACLType acl, 
BitSet bitset) {
 || bitset.get(ALL.ordinal()))
 && !bitset.get(NONE.ordinal()));
   }
+
+  /**
+   * Helper function to find and return all DEFAULT acls in input list with
+   * scope changed to ACCESS.
+   * @param acls
+   * 
+   * @return list of default Acls.
+   * */
+  public static Collection getDefaultAclsProto(List 
acls) {
+return acls.stream().filter(a -> a.getAclScope() == DEFAULT)
+.map(OzoneAcl::toProtobufWithAccessType).collect(Collectors.toList());
+  }
+
+  /**
+   * Helper function to find and return all DEFAULT acls in input list with
+   * scope changed to ACCESS.
+   * @param acls
+   *
+   * @return list of default Acls.
+   * */
+  public static Collection getDefaultAcls(List acls) {
+return acls.stream().filter(a -> a.getAclScope() == DEFAULT)
+.collect(Collectors.toList());
+  }
+  
 
 Review comment:
   whitespace:end of line
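
A minimal sketch of what these two helpers do, using a simplified Acl stand-in
rather than the real OzoneAcl/OzoneAclInfo types: pick the DEFAULT-scoped
entries out of a mixed list, and optionally rewrite their scope to ACCESS,
which is the form a child object stores them in.

import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

public class DefaultAclFilterSketch {

  enum Scope { ACCESS, DEFAULT }

  // Simplified stand-in for an acl entry.
  static final class Acl {
    final String name;
    final String rights;
    final Scope scope;
    Acl(String name, String rights, Scope scope) {
      this.name = name;
      this.rights = rights;
      this.scope = scope;
    }
    @Override
    public String toString() {
      return name + ":" + rights + "[" + scope + "]";
    }
  }

  // DEFAULT-scoped acls, unchanged (compare getDefaultAcls).
  static List<Acl> defaultAcls(List<Acl> acls) {
    return acls.stream()
        .filter(a -> a.scope == Scope.DEFAULT)
        .collect(Collectors.toList());
  }

  // DEFAULT-scoped acls rewritten to ACCESS scope (compare getDefaultAclsProto,
  // which additionally converts to the protobuf form).
  static List<Acl> defaultAclsAsAccess(List<Acl> acls) {
    return acls.stream()
        .filter(a -> a.scope == Scope.DEFAULT)
        .map(a -> new Acl(a.name, a.rights, Scope.ACCESS))
        .collect(Collectors.toList());
  }

  public static void main(String[] args) {
    List<Acl> acls = Arrays.asList(
        new Acl("bilbo", "rw", Scope.ACCESS),
        new Acl("frodo", "rw", Scope.DEFAULT));
    System.out.println(defaultAcls(acls));         // [frodo:rw[DEFAULT]]
    System.out.println(defaultAclsAsAccess(acls)); // [frodo:rw[ACCESS]]
  }
}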
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 275581)
Time Spent: 3h 50m  (was: 3h 40m)

> Support default Acls for volume, bucket, keys and prefix
> 
>
> Key: HDDS-1544
> URL: https://issues.apache.org/jira/browse/HDDS-1544
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDDS-1544.00.patch
>
>  Time Spent: 3h 50m
>  Remaining Estimate: 0h
>
> Add dAcls for volume, bucket, keys and prefix



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1544) Support default Acls for volume, bucket, keys and prefix

2019-07-11 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1544?focusedWorklogId=275594&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-275594
 ]

ASF GitHub Bot logged work on HDDS-1544:


Author: ASF GitHub Bot
Created on: 12/Jul/19 01:11
Start Date: 12/Jul/19 01:11
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #1074: 
HDDS-1544. Support default Acls for volume, bucket, keys and prefix. 
Contributed by Ajay Kumar.
URL: https://github.com/apache/hadoop/pull/1074#discussion_r302796212
 
 

 ##
 File path: 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/security/acl/TestOzoneNativeAuthorizer.java
 ##
 @@ -256,9 +258,10 @@ public void testCheckAccessForBucket() throws Exception {
 
   @Test
   public void testCheckAccessForKey() throws Exception {
-OzoneAcl userAcl = new OzoneAcl(USER, ugi.getUserName(), parentDirUserAcl);
+OzoneAcl userAcl = new OzoneAcl(USER, ugi.getUserName(), parentDirUserAcl, 
+ACCESS);
 OzoneAcl groupAcl = new OzoneAcl(GROUP, ugi.getGroups().size() > 0 ?
-ugi.getGroups().get(0) : "", parentDirGroupAcl);
+ugi.getGroups().get(0) : "", parentDirGroupAcl, ACCESS);
 
 Review comment:
   whitespace:end of line
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 275594)
Time Spent: 6h  (was: 5h 50m)

> Support default Acls for volume, bucket, keys and prefix
> 
>
> Key: HDDS-1544
> URL: https://issues.apache.org/jira/browse/HDDS-1544
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDDS-1544.00.patch
>
>  Time Spent: 6h
>  Remaining Estimate: 0h
>
> Add dAcls for volume, bucket, keys and prefix



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1544) Support default Acls for volume, bucket, keys and prefix

2019-07-11 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1544?focusedWorklogId=275576&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-275576
 ]

ASF GitHub Bot logged work on HDDS-1544:


Author: ASF GitHub Bot
Created on: 12/Jul/19 01:11
Start Date: 12/Jul/19 01:11
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #1074: 
HDDS-1544. Support default Acls for volume, bucket, keys and prefix. 
Contributed by Ajay Kumar.
URL: https://github.com/apache/hadoop/pull/1074#discussion_r302796151
 
 

 ##
 File path: 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/helpers/OmOzoneAclMap.java
 ##
 @@ -116,9 +136,14 @@ public void setAcls(List acls) throws 
OMException {
   // Add a new acl to the map
   public void removeAcl(OzoneAcl acl) throws OMException {
 Objects.requireNonNull(acl, "Acl should not be null.");
+if (acl.getAclScope().equals(OzoneAcl.AclScope.DEFAULT)) {
+  defaultAclList.remove(OzoneAcl.toProtobuf(acl));
+  return;
+}
+
 
 Review comment:
   whitespace:end of line
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 275576)
Time Spent: 3h  (was: 2h 50m)

> Support default Acls for volume, bucket, keys and prefix
> 
>
> Key: HDDS-1544
> URL: https://issues.apache.org/jira/browse/HDDS-1544
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDDS-1544.00.patch
>
>  Time Spent: 3h
>  Remaining Estimate: 0h
>
> Add dAcls for volume, bucket, keys and prefix



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1544) Support default Acls for volume, bucket, keys and prefix

2019-07-11 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1544?focusedWorklogId=275579&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-275579
 ]

ASF GitHub Bot logged work on HDDS-1544:


Author: ASF GitHub Bot
Created on: 12/Jul/19 01:11
Start Date: 12/Jul/19 01:11
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #1074: 
HDDS-1544. Support default Acls for volume, bucket, keys and prefix. 
Contributed by Ajay Kumar.
URL: https://github.com/apache/hadoop/pull/1074#discussion_r302796163
 
 

 ##
 File path: 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/helpers/OmOzoneAclMap.java
 ##
 @@ -49,53 +52,71 @@
 @SuppressWarnings("ProtocolBufferOrdinal")
 public class OmOzoneAclMap {
   // per Acl Type user:rights map
-  private ArrayList> aclMaps;
+  private ArrayList> accessAclMap;
+  private List defaultAclList;
 
   OmOzoneAclMap() {
-aclMaps = new ArrayList<>();
+accessAclMap = new ArrayList<>();
+defaultAclList = new ArrayList<>();
 for (OzoneAclType aclType : OzoneAclType.values()) {
-  aclMaps.add(aclType.ordinal(), new HashMap<>());
+  accessAclMap.add(aclType.ordinal(), new HashMap<>());
 }
   }
 
-  private Map getMap(OzoneAclType type) {
-return aclMaps.get(type.ordinal());
+  private Map getAccessAclMap(OzoneAclType type) {
+return accessAclMap.get(type.ordinal());
   }
 
   // For a given acl type and user, get the stored acl
   private BitSet getAcl(OzoneAclType type, String user) {
-return getMap(type).get(user);
+return getAccessAclMap(type).get(user);
   }
 
   public List getAcl() {
 List acls = new ArrayList<>();
 
+acls.addAll(getAccessAcls());
+acls.addAll(defaultAclList.stream().map(a -> 
+OzoneAcl.fromProtobuf(a)).collect(Collectors.toList()));
+return acls;
+  }
+
+  private Collection getAccessAcls() {
+List acls = new ArrayList<>();
 for (OzoneAclType type : OzoneAclType.values()) {
-  aclMaps.get(type.ordinal()).entrySet().stream().
+  accessAclMap.get(type.ordinal()).entrySet().stream().
   forEach(entry -> acls.add(new OzoneAcl(ACLIdentityType.
-  valueOf(type.name()), entry.getKey(), entry.getValue())));
+  valueOf(type.name()), entry.getKey(), entry.getValue(),
+  OzoneAcl.AclScope.ACCESS)));
 }
+
 
 Review comment:
   whitespace:end of line
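
Reading the full hunk, the reworked class keeps two stores: a per-identity-type
name-to-rights map for ACCESS acls and a separate list for DEFAULT acls (the
real code holds protobuf OzoneAclInfo objects there). A compact sketch of that
shape with simplified stand-in types, not the real Ozone classes; the real
addAcl also rejects duplicates, which this sketch does not model.

import java.util.ArrayList;
import java.util.BitSet;
import java.util.EnumMap;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class AclMapSketch {

  enum IdentityType { USER, GROUP, WORLD }
  enum Scope { ACCESS, DEFAULT }

  // Simplified stand-in for an acl entry; rights are a bit set, as in the hunk.
  static final class Acl {
    final IdentityType type;
    final String name;
    final BitSet rights;
    final Scope scope;
    Acl(IdentityType type, String name, BitSet rights, Scope scope) {
      this.type = type;
      this.name = name;
      this.rights = rights;
      this.scope = scope;
    }
  }

  // ACCESS acls: per identity type, name -> rights (an EnumMap stands in for
  // the ordinal-indexed ArrayList used in the real class).
  private final Map<IdentityType, Map<String, BitSet>> accessAclMap =
      new EnumMap<>(IdentityType.class);
  // DEFAULT acls live in their own list and never enter the access map.
  private final List<Acl> defaultAclList = new ArrayList<>();

  AclMapSketch() {
    for (IdentityType t : IdentityType.values()) {
      accessAclMap.put(t, new HashMap<>());
    }
  }

  void addAcl(Acl acl) {
    if (acl.scope == Scope.DEFAULT) {
      defaultAclList.add(acl);
      return;
    }
    accessAclMap.get(acl.type)
        .merge(acl.name, acl.rights, (a, b) -> { a.or(b); return a; });
  }

  // Merged view: access acls reconstructed from the map, then the defaults.
  List<Acl> getAcl() {
    List<Acl> all = new ArrayList<>();
    accessAclMap.forEach((type, byName) -> byName.forEach(
        (name, bits) -> all.add(new Acl(type, name, bits, Scope.ACCESS))));
    all.addAll(defaultAclList);
    return all;
  }

  public static void main(String[] args) {
    AclMapSketch map = new AclMapSketch();
    BitSet rw = new BitSet();
    rw.set(0);
    rw.set(1);
    map.addAcl(new Acl(IdentityType.USER, "bilbo", rw, Scope.ACCESS));
    map.addAcl(new Acl(IdentityType.USER, "frodo", rw, Scope.DEFAULT));
    System.out.println(map.getAcl().size()); // 2
  }
}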
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 275579)
Time Spent: 3.5h  (was: 3h 20m)

> Support default Acls for volume, bucket, keys and prefix
> 
>
> Key: HDDS-1544
> URL: https://issues.apache.org/jira/browse/HDDS-1544
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDDS-1544.00.patch
>
>  Time Spent: 3.5h
>  Remaining Estimate: 0h
>
> Add dAcls for volume, bucket, keys and prefix



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1544) Support default Acls for volume, bucket, keys and prefix

2019-07-11 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1544?focusedWorklogId=275580&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-275580
 ]

ASF GitHub Bot logged work on HDDS-1544:


Author: ASF GitHub Bot
Created on: 12/Jul/19 01:11
Start Date: 12/Jul/19 01:11
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #1074: 
HDDS-1544. Support default Acls for volume, bucket, keys and prefix. 
Contributed by Ajay Kumar.
URL: https://github.com/apache/hadoop/pull/1074#discussion_r302796168
 
 

 ##
 File path: 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/web/utils/OzoneUtils.java
 ##
 @@ -365,4 +369,29 @@ public static boolean checkIfAclBitIsSet(ACLType acl, 
BitSet bitset) {
 || bitset.get(ALL.ordinal()))
 && !bitset.get(NONE.ordinal()));
   }
+
+  /**
+   * Helper function to find and return all DEFAULT acls in input list with
+   * scope changed to ACCESS.
+   * @param acls
+   * 
 
 Review comment:
   whitespace:end of line
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 275580)
Time Spent: 3h 40m  (was: 3.5h)

> Support default Acls for volume, bucket, keys and prefix
> 
>
> Key: HDDS-1544
> URL: https://issues.apache.org/jira/browse/HDDS-1544
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDDS-1544.00.patch
>
>  Time Spent: 3h 40m
>  Remaining Estimate: 0h
>
> Add dAcls for volume, bucket, keys and prefix



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1544) Support default Acls for volume, bucket, keys and prefix

2019-07-11 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1544?focusedWorklogId=275571&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-275571
 ]

ASF GitHub Bot logged work on HDDS-1544:


Author: ASF GitHub Bot
Created on: 12/Jul/19 01:11
Start Date: 12/Jul/19 01:11
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #1074: 
HDDS-1544. Support default Acls for volume, bucket, keys and prefix. 
Contributed by Ajay Kumar.
URL: https://github.com/apache/hadoop/pull/1074#discussion_r302796123
 
 

 ##
 File path: 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/OzoneAcl.java
 ##
 @@ -141,13 +153,27 @@ public static OzoneAcl parseAcl(String acl) throws 
IllegalArgumentException {
 ACLIdentityType aclType = ACLIdentityType.valueOf(parts[0].toUpperCase());
 BitSet acls = new BitSet(ACLType.getNoOfAcls());
 
-for (char ch : parts[2].toCharArray()) {
+String bits = parts[2];
+
+// Default acl scope is ACCESS.
+AclScope aclScope = AclScope.ACCESS;
+
+// Check if acl string contains scope info.
+if(parts[2].matches(ACL_SCOPE_REGEX)) {
+  int indexOfOpenBracket = parts[2].indexOf("[");
+  bits = parts[2].substring(0, indexOfOpenBracket);
+  aclScope = AclScope.valueOf(parts[2].substring(indexOfOpenBracket + 1,
+  parts[2].indexOf("]")));
+}
+
 
 Review comment:
   whitespace:end of line
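
The hunk adds an optional scope suffix to the acl string format: "user:bilbo:rw"
defaults to ACCESS, while "user:bilbo:rw[DEFAULT]" carries an explicit scope. A
small standalone sketch of the suffix handling; the regex below is an
assumption, since the actual ACL_SCOPE_REGEX constant is defined elsewhere in
OzoneAcl and is not visible in this hunk.

import java.util.Locale;

public class AclScopeParseSketch {

  enum AclScope { ACCESS, DEFAULT }

  static AclScope parseScope(String aclString) {
    // The rights part is the last colon-separated field, optionally followed
    // by a bracketed scope.
    String rights = aclString.substring(aclString.lastIndexOf(':') + 1);
    if (rights.matches(".*\\[(ACCESS|DEFAULT)\\]$")) {
      int open = rights.indexOf('[');
      return AclScope.valueOf(
          rights.substring(open + 1, rights.indexOf(']')).toUpperCase(Locale.ROOT));
    }
    return AclScope.ACCESS; // default scope when none is given
  }

  public static void main(String[] args) {
    System.out.println(parseScope("user:bilbo:rw"));          // ACCESS
    System.out.println(parseScope("user:bilbo:rw[DEFAULT]")); // DEFAULT
  }
}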
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 275571)
Time Spent: 2h 10m  (was: 2h)

> Support default Acls for volume, bucket, keys and prefix
> 
>
> Key: HDDS-1544
> URL: https://issues.apache.org/jira/browse/HDDS-1544
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDDS-1544.00.patch
>
>  Time Spent: 2h 10m
>  Remaining Estimate: 0h
>
> Add dAcls for volume, bucket, keys and prefix



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1544) Support default Acls for volume, bucket, keys and prefix

2019-07-11 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1544?focusedWorklogId=275596&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-275596
 ]

ASF GitHub Bot logged work on HDDS-1544:


Author: ASF GitHub Bot
Created on: 12/Jul/19 01:11
Start Date: 12/Jul/19 01:11
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #1074: 
HDDS-1544. Support default Acls for volume, bucket, keys and prefix. 
Contributed by Ajay Kumar.
URL: https://github.com/apache/hadoop/pull/1074#discussion_r302796217
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/BucketManagerImpl.java
 ##
 @@ -165,10 +169,15 @@ public void createBucket(OmBucketInfo bucketInfo) throws 
IOException {
 .setVersion(CryptoProtocolVersion.ENCRYPTION_ZONES)
 .setSuite(CipherSuite.convert(metadata.getCipher()));
   }
+  List acls = new ArrayList<>();
+  acls.addAll(bucketInfo.getAcls());
+  volumeArgs.getAclMap().getDefaultAclList().forEach(
+  a -> acls.add(OzoneAcl.fromProtobufWithAccessType(a)));
 
 Review comment:
   whitespace:end of line
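
In plain terms, a bucket being created keeps its own acls and additionally
receives the volume's DEFAULT acls with the scope rewritten to ACCESS
(fromProtobufWithAccessType performs that conversion in the hunk). A tiny
sketch with plain strings standing in for acl objects:

import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class BucketAclInheritSketch {

  // The new bucket keeps its own acls and also receives the volume's DEFAULT
  // acls, rewritten to ACCESS scope.
  static List<String> aclsForNewBucket(List<String> bucketAcls,
      List<String> volumeDefaultAcls) {
    List<String> acls = new ArrayList<>(bucketAcls);
    for (String def : volumeDefaultAcls) {
      acls.add(def.replace("[DEFAULT]", "[ACCESS]"));
    }
    return acls;
  }

  public static void main(String[] args) {
    System.out.println(aclsForNewBucket(
        Arrays.asList("user:bilbo:rw[ACCESS]"),
        Arrays.asList("user:frodo:rw[DEFAULT]")));
    // [user:bilbo:rw[ACCESS], user:frodo:rw[ACCESS]]
  }
}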
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 275596)
Time Spent: 6h 20m  (was: 6h 10m)

> Support default Acls for volume, bucket, keys and prefix
> 
>
> Key: HDDS-1544
> URL: https://issues.apache.org/jira/browse/HDDS-1544
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDDS-1544.00.patch
>
>  Time Spent: 6h 20m
>  Remaining Estimate: 0h
>
> Add dAcls for volume, bucket, keys and prefix



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1544) Support default Acls for volume, bucket, keys and prefix

2019-07-11 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1544?focusedWorklogId=275575&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-275575
 ]

ASF GitHub Bot logged work on HDDS-1544:


Author: ASF GitHub Bot
Created on: 12/Jul/19 01:11
Start Date: 12/Jul/19 01:11
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #1074: 
HDDS-1544. Support default Acls for volume, bucket, keys and prefix. 
Contributed by Ajay Kumar.
URL: https://github.com/apache/hadoop/pull/1074#discussion_r302796146
 
 

 ##
 File path: 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/helpers/OmOzoneAclMap.java
 ##
 @@ -49,53 +52,71 @@
 @SuppressWarnings("ProtocolBufferOrdinal")
 public class OmOzoneAclMap {
   // per Acl Type user:rights map
-  private ArrayList> aclMaps;
+  private ArrayList> accessAclMap;
+  private List defaultAclList;
 
   OmOzoneAclMap() {
-aclMaps = new ArrayList<>();
+accessAclMap = new ArrayList<>();
+defaultAclList = new ArrayList<>();
 for (OzoneAclType aclType : OzoneAclType.values()) {
-  aclMaps.add(aclType.ordinal(), new HashMap<>());
+  accessAclMap.add(aclType.ordinal(), new HashMap<>());
 }
   }
 
-  private Map getMap(OzoneAclType type) {
-return aclMaps.get(type.ordinal());
+  private Map getAccessAclMap(OzoneAclType type) {
+return accessAclMap.get(type.ordinal());
   }
 
   // For a given acl type and user, get the stored acl
   private BitSet getAcl(OzoneAclType type, String user) {
-return getMap(type).get(user);
+return getAccessAclMap(type).get(user);
   }
 
   public List getAcl() {
 List acls = new ArrayList<>();
 
+acls.addAll(getAccessAcls());
+acls.addAll(defaultAclList.stream().map(a -> 
+OzoneAcl.fromProtobuf(a)).collect(Collectors.toList()));
+return acls;
+  }
+
+  private Collection getAccessAcls() {
+List acls = new ArrayList<>();
 for (OzoneAclType type : OzoneAclType.values()) {
-  aclMaps.get(type.ordinal()).entrySet().stream().
+  accessAclMap.get(type.ordinal()).entrySet().stream().
   forEach(entry -> acls.add(new OzoneAcl(ACLIdentityType.
-  valueOf(type.name()), entry.getKey(), entry.getValue())));
+  valueOf(type.name()), entry.getKey(), entry.getValue(),
+  OzoneAcl.AclScope.ACCESS)));
 }
+
 return acls;
   }
 
   // Add a new acl to the map
   public void addAcl(OzoneAcl acl) throws OMException {
 Objects.requireNonNull(acl, "Acl should not be null.");
+if (acl.getAclScope().equals(OzoneAcl.AclScope.DEFAULT)) {
+  defaultAclList.add(OzoneAcl.toProtobuf(acl));
+  return;
+}
+
 
 Review comment:
   whitespace:end of line
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 275575)
Time Spent: 2h 50m  (was: 2h 40m)

> Support default Acls for volume, bucket, keys and prefix
> 
>
> Key: HDDS-1544
> URL: https://issues.apache.org/jira/browse/HDDS-1544
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDDS-1544.00.patch
>
>  Time Spent: 2h 50m
>  Remaining Estimate: 0h
>
> Add dAcls for volume, bucket, keys and prefix



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1544) Support default Acls for volume, bucket, keys and prefix

2019-07-11 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1544?focusedWorklogId=275603&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-275603
 ]

ASF GitHub Bot logged work on HDDS-1544:


Author: ASF GitHub Bot
Created on: 12/Jul/19 01:11
Start Date: 12/Jul/19 01:11
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #1074: 
HDDS-1544. Support default Acls for volume, bucket, keys and prefix. 
Contributed by Ajay Kumar.
URL: https://github.com/apache/hadoop/pull/1074#discussion_r302796250
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OzoneManager.java
 ##
 @@ -429,18 +430,15 @@ private OzoneManager(OzoneConfiguration conf) throws 
IOException,
 } else {
   accessAuthorizer = null;
 }
-ozAdmins = conf.getTrimmedStringCollection(OzoneConfigKeys
-.OZONE_ADMINISTRATORS);
+ozAdmins = conf.getTrimmedStringCollection(OZONE_ADMINISTRATORS);
 
 Review comment:
   whitespace:end of line
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 275603)
Time Spent: 7.5h  (was: 7h 20m)

> Support default Acls for volume, bucket, keys and prefix
> 
>
> Key: HDDS-1544
> URL: https://issues.apache.org/jira/browse/HDDS-1544
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDDS-1544.00.patch
>
>  Time Spent: 7.5h
>  Remaining Estimate: 0h
>
> Add dAcls for volume, bucket, keys and prefix



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1544) Support default Acls for volume, bucket, keys and prefix

2019-07-11 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1544?focusedWorklogId=275590&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-275590
 ]

ASF GitHub Bot logged work on HDDS-1544:


Author: ASF GitHub Bot
Created on: 12/Jul/19 01:11
Start Date: 12/Jul/19 01:11
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #1074: 
HDDS-1544. Support default Acls for volume, bucket, keys and prefix. 
Contributed by Ajay Kumar.
URL: https://github.com/apache/hadoop/pull/1074#discussion_r302796196
 
 

 ##
 File path: 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/client/rpc/TestOzoneRpcClientAbstract.java
 ##
 @@ -2357,28 +2456,28 @@ public void testNativeAclsForPrefix() throws Exception 
{
 ACLType userRights = aclConfig.getUserDefaultRights();
 ACLType groupRights = aclConfig.getGroupDefaultRights();
 
-listOfAcls.add(new OzoneAcl(ACLIdentityType.USER,
-ugi.getUserName(), userRights));
+listOfAcls.add(new OzoneAcl(USER,
+ugi.getUserName(), userRights, ACCESS));
 //Group ACLs of the User
 List userGroups = Arrays.asList(ugi.getGroupNames());
 userGroups.stream().forEach((group) -> listOfAcls.add(
-new OzoneAcl(ACLIdentityType.GROUP, group, groupRights)));
+new OzoneAcl(GROUP, group, groupRights, ACCESS)));
 return listOfAcls;
   }
 
   /**
* Helper function to validate ozone Acl for given object.
* @param ozObj
* */
-  private void validateOzoneAcl(OzoneObj ozObj) throws IOException {
+  private void validateOzoneAccessAcl(OzoneObj ozObj) throws IOException {
 // Get acls for volume.
 List expectedAcls = getAclList(new OzoneConfiguration());
 
 // Case:1 Add new acl permission to existing acl.
 if(expectedAcls.size()>0) {
   OzoneAcl oldAcl = expectedAcls.get(0);
   OzoneAcl newAcl = new OzoneAcl(oldAcl.getType(), oldAcl.getName(),
-  ACLType.READ_ACL);
+  ACLType.READ_ACL, ACCESS);
 
 Review comment:
   whitespace:end of line
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 275590)
Time Spent: 5h 20m  (was: 5h 10m)

> Support default Acls for volume, bucket, keys and prefix
> 
>
> Key: HDDS-1544
> URL: https://issues.apache.org/jira/browse/HDDS-1544
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDDS-1544.00.patch
>
>  Time Spent: 5h 20m
>  Remaining Estimate: 0h
>
> Add dAcls for volume, bucket, keys and prefix



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1544) Support default Acls for volume, bucket, keys and prefix

2019-07-11 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1544?focusedWorklogId=275598&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-275598
 ]

ASF GitHub Bot logged work on HDDS-1544:


Author: ASF GitHub Bot
Created on: 12/Jul/19 01:11
Start Date: 12/Jul/19 01:11
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #1074: 
HDDS-1544. Support default Acls for volume, bucket, keys and prefix. 
Contributed by Ajay Kumar.
URL: https://github.com/apache/hadoop/pull/1074#discussion_r302796223
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/KeyManagerImpl.java
 ##
 @@ -983,6 +1019,48 @@ public OmMultipartInfo 
applyInitiateMultipartUpload(OmKeyArgs keyArgs,
 }
   }
 
+  private List getAclsForKey(OmKeyArgs keyArgs, 
+  OmVolumeArgs volArgs, OmBucketInfo bucketInfo) {
+List acls = new ArrayList<>(keyArgs.getAcls().size());
+
+keyArgs.getAcls().stream().map(OzoneAcl::toProtobuf).
+collect(Collectors.toList());
 
 Review comment:
   whitespace:end of line
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 275598)
Time Spent: 6h 40m  (was: 6.5h)

> Support default Acls for volume, bucket, keys and prefix
> 
>
> Key: HDDS-1544
> URL: https://issues.apache.org/jira/browse/HDDS-1544
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDDS-1544.00.patch
>
>  Time Spent: 6h 40m
>  Remaining Estimate: 0h
>
> Add dAcls for volume, bucket, keys and prefix



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1544) Support default Acls for volume, bucket, keys and prefix

2019-07-11 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1544?focusedWorklogId=275573&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-275573
 ]

ASF GitHub Bot logged work on HDDS-1544:


Author: ASF GitHub Bot
Created on: 12/Jul/19 01:11
Start Date: 12/Jul/19 01:11
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #1074: 
HDDS-1544. Support default Acls for volume, bucket, keys and prefix. 
Contributed by Ajay Kumar.
URL: https://github.com/apache/hadoop/pull/1074#discussion_r302796106
 
 

 ##
 File path: 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/OzoneAcl.java
 ##
 @@ -120,16 +129,19 @@ public OzoneAcl(ACLIdentityType type, String name, 
BitSet acls) {
 && (name.length() == 0)) {
   throw new IllegalArgumentException("User or group name is required");
 }
+aclScope = scope;
   }
 
   /**
-   * Parses an ACL string and returns the ACL object.
+   * Parses an ACL string and returns the ACL object. If acl scope is not 
 
 Review comment:
   whitespace:end of line
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 275573)
Time Spent: 2.5h  (was: 2h 20m)

> Support default Acls for volume, bucket, keys and prefix
> 
>
> Key: HDDS-1544
> URL: https://issues.apache.org/jira/browse/HDDS-1544
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDDS-1544.00.patch
>
>  Time Spent: 2.5h
>  Remaining Estimate: 0h
>
> Add dAcls for volume, bucket, keys and prefix



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1544) Support default Acls for volume, bucket, keys and prefix

2019-07-11 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1544?focusedWorklogId=275592&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-275592
 ]

ASF GitHub Bot logged work on HDDS-1544:


Author: ASF GitHub Bot
Created on: 12/Jul/19 01:11
Start Date: 12/Jul/19 01:11
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #1074: 
HDDS-1544. Support default Acls for volume, bucket, keys and prefix. 
Contributed by Ajay Kumar.
URL: https://github.com/apache/hadoop/pull/1074#discussion_r302796204
 
 

 ##
 File path: 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/security/acl/TestOzoneNativeAuthorizer.java
 ##
 @@ -57,6 +57,7 @@
 import java.util.stream.Collectors;
 
 import static org.apache.hadoop.hdds.HddsConfigKeys.OZONE_METADATA_DIRS;
+import static org.apache.hadoop.ozone.OzoneAcl.AclScope.ACCESS;
 
 Review comment:
   whitespace:end of line
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 275592)
Time Spent: 5h 40m  (was: 5.5h)

> Support default Acls for volume, bucket, keys and prefix
> 
>
> Key: HDDS-1544
> URL: https://issues.apache.org/jira/browse/HDDS-1544
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDDS-1544.00.patch
>
>  Time Spent: 5h 40m
>  Remaining Estimate: 0h
>
> Add dAcls for volume, bucket, keys and prefix



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1544) Support default Acls for volume, bucket, keys and prefix

2019-07-11 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1544?focusedWorklogId=275589&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-275589
 ]

ASF GitHub Bot logged work on HDDS-1544:


Author: ASF GitHub Bot
Created on: 12/Jul/19 01:11
Start Date: 12/Jul/19 01:11
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #1074: 
HDDS-1544. Support default Acls for volume, bucket, keys and prefix. 
Contributed by Ajay Kumar.
URL: https://github.com/apache/hadoop/pull/1074#discussion_r302796193
 
 

 ##
 File path: 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/client/rpc/TestOzoneRpcClientAbstract.java
 ##
 @@ -2279,7 +2332,42 @@ public void testNativeAclsForKey() throws Exception {
 .setStoreType(OzoneObj.StoreType.OZONE)
 .build();
 
-validateOzoneAcl(ozObj);
+// Validates access acls.
+validateOzoneAccessAcl(ozObj);
+
+// Check default acls inherited from bucket.
+OzoneObj buckObj = new OzoneObjInfo.Builder()
+.setVolumeName(volumeName)
+.setBucketName(bucketName)
+.setKeyName(key1)
+.setResType(OzoneObj.ResourceType.BUCKET)
+.setStoreType(OzoneObj.StoreType.OZONE)
+.build();
+
+validateDefaultAcls(buckObj, ozObj, null, bucket);
+
+// Check default acls inherited from prefix.
+OzoneObj prefixObj = new OzoneObjInfo.Builder()
+.setVolumeName(volumeName)
+.setBucketName(bucketName)
+.setKeyName(key1)
+.setPrefixName("dir1/")
+.setResType(OzoneObj.ResourceType.PREFIX)
+.setStoreType(OzoneObj.StoreType.OZONE)
+.build();
+store.setAcl(prefixObj, getAclList(new OzoneConfiguration()));
+// Prefix should inherit DEFAULT acl from bucket.
+
+List acls = store.getAcl(prefixObj);
+assertTrue("Current acls:" + StringUtils.join(",", acls),
+acls.contains(inheritedUserAcl));
+assertTrue("Current acls:" + StringUtils.join(",", acls),
+acls.contains(inheritedGroupAcl));
+// Remove inherited acls from prefix.
+assertTrue(store.removeAcl(prefixObj, inheritedUserAcl));
+assertTrue(store.removeAcl(prefixObj, inheritedGroupAcl));
 
 Review comment:
   whitespace:end of line
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 275589)
Time Spent: 5h 10m  (was: 5h)

> Support default Acls for volume, bucket, keys and prefix
> 
>
> Key: HDDS-1544
> URL: https://issues.apache.org/jira/browse/HDDS-1544
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDDS-1544.00.patch
>
>  Time Spent: 5h 10m
>  Remaining Estimate: 0h
>
> Add dAcls for volume, bucket, keys and prefix



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1544) Support default Acls for volume, bucket, keys and prefix

2019-07-11 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1544?focusedWorklogId=275593&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-275593
 ]

ASF GitHub Bot logged work on HDDS-1544:


Author: ASF GitHub Bot
Created on: 12/Jul/19 01:11
Start Date: 12/Jul/19 01:11
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #1074: 
HDDS-1544. Support default Acls for volume, bucket, keys and prefix. 
Contributed by Ajay Kumar.
URL: https://github.com/apache/hadoop/pull/1074#discussion_r302796209
 
 

 ##
 File path: 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/security/acl/TestOzoneNativeAuthorizer.java
 ##
 @@ -242,9 +243,10 @@ public void testCheckAccessForVolume() throws Exception {
   @Test
   public void testCheckAccessForBucket() throws Exception {
 
-OzoneAcl userAcl = new OzoneAcl(USER, ugi.getUserName(), parentDirUserAcl);
+OzoneAcl userAcl = new OzoneAcl(USER, ugi.getUserName(), parentDirUserAcl, 
+ACCESS);
 OzoneAcl groupAcl = new OzoneAcl(GROUP, ugi.getGroups().size() > 0 ?
-ugi.getGroups().get(0) : "", parentDirGroupAcl);
+ugi.getGroups().get(0) : "", parentDirGroupAcl, ACCESS);
 
 Review comment:
   whitespace:end of line
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 275593)
Time Spent: 5h 50m  (was: 5h 40m)

> Support default Acls for volume, bucket, keys and prefix
> 
>
> Key: HDDS-1544
> URL: https://issues.apache.org/jira/browse/HDDS-1544
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDDS-1544.00.patch
>
>  Time Spent: 5h 50m
>  Remaining Estimate: 0h
>
> Add dAcls for volume, bucket, keys and prefix



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1544) Support default Acls for volume, bucket, keys and prefix

2019-07-11 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1544?focusedWorklogId=275604&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-275604
 ]

ASF GitHub Bot logged work on HDDS-1544:


Author: ASF GitHub Bot
Created on: 12/Jul/19 01:11
Start Date: 12/Jul/19 01:11
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1074: HDDS-1544. 
Support default Acls for volume, bucket, keys and prefix. Contributed by Ajay 
Kumar.
URL: https://github.com/apache/hadoop/pull/1074#issuecomment-510706921
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 99 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 1 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 8 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 14 | Maven dependency ordering for branch |
   | +1 | mvninstall | 561 | trunk passed |
   | +1 | compile | 269 | trunk passed |
   | +1 | checkstyle | 66 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 893 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 157 | trunk passed |
   | 0 | spotbugs | 344 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 548 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 22 | Maven dependency ordering for patch |
   | +1 | mvninstall | 509 | the patch passed |
   | +1 | compile | 309 | the patch passed |
   | +1 | cc | 309 | the patch passed |
   | +1 | javac | 309 | the patch passed |
   | -0 | checkstyle | 45 | hadoop-ozone: The patch generated 1 new + 0 
unchanged - 0 fixed = 1 total (was 0) |
   | +1 | mvnsite | 0 | the patch passed |
   | -1 | whitespace | 0 | The patch has 41 line(s) that end in whitespace. Use 
git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply |
   | +1 | shadedclient | 741 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 164 | the patch passed |
   | +1 | findbugs | 551 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 427 | hadoop-hdds in the patch passed. |
   | -1 | unit | 1524 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 48 | The patch does not generate ASF License warnings. |
   | | | 7172 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.security.acl.TestOzoneNativeAuthorizer |
   |   | hadoop.ozone.web.client.TestKeysRatis |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=18.09.7 Server=18.09.7 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1074/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1074 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle cc |
   | uname | Linux 965b6d02d6c9 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 9b54dd7 |
   | Default Java | 1.8.0_212 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1074/1/artifact/out/diff-checkstyle-hadoop-ozone.txt
 |
   | whitespace | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1074/1/artifact/out/whitespace-eol.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1074/1/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1074/1/testReport/ |
   | Max. process+thread count | 2861 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/common hadoop-ozone/client 
hadoop-ozone/ozone-manager hadoop-ozone/objectstore-service hadoop-ozone/dist 
hadoop-ozone/integration-test U: hadoop-ozone |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1074/1/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 275604)
Time Spent: 7h 40m  (was: 7.5h)

> Support default Acls for volume, bucket, keys and prefix
> ---

[jira] [Work logged] (HDDS-1544) Support default Acls for volume, bucket, keys and prefix

2019-07-11 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1544?focusedWorklogId=275601&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-275601
 ]

ASF GitHub Bot logged work on HDDS-1544:


Author: ASF GitHub Bot
Created on: 12/Jul/19 01:11
Start Date: 12/Jul/19 01:11
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #1074: 
HDDS-1544. Support default Acls for volume, bucket, keys and prefix. 
Contributed by Ajay Kumar.
URL: https://github.com/apache/hadoop/pull/1074#discussion_r302796234
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/KeyManagerImpl.java
 ##
 @@ -983,6 +1019,48 @@ public OmMultipartInfo 
applyInitiateMultipartUpload(OmKeyArgs keyArgs,
 }
   }
 
+  private List getAclsForKey(OmKeyArgs keyArgs, 
+  OmVolumeArgs volArgs, OmBucketInfo bucketInfo) {
+List acls = new ArrayList<>(keyArgs.getAcls().size());
+
+keyArgs.getAcls().stream().map(OzoneAcl::toProtobuf).
+collect(Collectors.toList());
+
+//if(volArgs != null) {
+//  acls.addAll(volArgs.getAclMap().getDefaultAclList());
+//}
+
+// Inherit DEFAULT acls from prefix.
+boolean prefixParentFound = false;
+if(prefixManager != null) {
+  List prefixList = prefixManager.getLongestPrefixPath(
+  OZONE_URI_DELIMITER +
+  keyArgs.getVolumeName() + OZONE_URI_DELIMITER +
+  keyArgs.getBucketName() + OZONE_URI_DELIMITER +
+  keyArgs.getKeyName());
+
+  if(prefixList.size() > 0) {
+// Add all acls from direct parent to key.
+OmPrefixInfo prefixInfo = prefixList.get(prefixList.size() - 1);
+if(prefixInfo  != null) {
+  acls.addAll(OzoneUtils.getDefaultAclsProto(prefixInfo.getAcls()));
+  prefixParentFound = true;
+}
+  }
+}
+
+// Inherit DEFAULT acls from bucket only if DEFAULT acls for 
+// prefix are not set.
+if (!prefixParentFound && bucketInfo != null) {
+  acls.addAll(bucketInfo.getAcls().stream().filter(a -> a.getAclScope()
+  .equals(OzoneAcl.AclScope.DEFAULT))
+  .map(OzoneAcl::toProtobufWithAccessType)
+  .collect(Collectors.toList()));
+}
 
 Review comment:
   whitespace:end of line
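
The two comments in this hunk describe the precedence: a new key inherits
DEFAULT acls (rewritten to ACCESS) from the longest matching prefix when that
prefix carries defaults, and falls back to the bucket's DEFAULT acls only when
no prefix parent was found. A compact sketch of that decision, again with plain
strings standing in for acl objects rather than the real Ozone types:

import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.List;

public class KeyAclInheritSketch {

  // Prefix defaults take precedence; bucket defaults are used only when no
  // prefix parent carried any. Inherited entries are rewritten to ACCESS scope.
  static List<String> aclsForNewKey(List<String> keyAcls,
      List<String> prefixDefaults, List<String> bucketDefaults) {
    List<String> acls = new ArrayList<>(keyAcls);
    List<String> inherited =
        !prefixDefaults.isEmpty() ? prefixDefaults : bucketDefaults;
    for (String def : inherited) {
      acls.add(def.replace("[DEFAULT]", "[ACCESS]"));
    }
    return acls;
  }

  public static void main(String[] args) {
    System.out.println(aclsForNewKey(
        Collections.singletonList("user:bilbo:rw[ACCESS]"),
        Arrays.asList("group:hobbits:r[DEFAULT]"),
        Arrays.asList("user:gandalf:rw[DEFAULT]")));
    // [user:bilbo:rw[ACCESS], group:hobbits:r[ACCESS]]
  }
}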
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 275601)
Time Spent: 7h 10m  (was: 7h)

> Support default Acls for volume, bucket, keys and prefix
> 
>
> Key: HDDS-1544
> URL: https://issues.apache.org/jira/browse/HDDS-1544
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDDS-1544.00.patch
>
>  Time Spent: 7h 10m
>  Remaining Estimate: 0h
>
> Add dAcls for volume, bucket, keys and prefix



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1544) Support default Acls for volume, bucket, keys and prefix

2019-07-11 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1544?focusedWorklogId=275587&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-275587
 ]

ASF GitHub Bot logged work on HDDS-1544:


Author: ASF GitHub Bot
Created on: 12/Jul/19 01:11
Start Date: 12/Jul/19 01:11
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #1074: 
HDDS-1544. Support default Acls for volume, bucket, keys and prefix. 
Contributed by Ajay Kumar.
URL: https://github.com/apache/hadoop/pull/1074#discussion_r302796186
 
 

 ##
 File path: 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/client/rpc/TestOzoneRpcClientAbstract.java
 ##
 @@ -2252,15 +2266,54 @@ public void testNativeAclsForBucket() throws Exception 
{
 .setStoreType(OzoneObj.StoreType.OZONE)
 .build();
 
-validateOzoneAcl(ozObj);
+validateOzoneAccessAcl(ozObj);
+
+OzoneObj volObj = new OzoneObjInfo.Builder()
+.setVolumeName(volumeName)
+.setResType(OzoneObj.ResourceType.VOLUME)
+.setStoreType(OzoneObj.StoreType.OZONE)
+.build();
+validateDefaultAcls(volObj, ozObj, volume, null);
+  }
+
+  private void validateDefaultAcls(OzoneObj parentObj, OzoneObj childObj, 
+  OzoneVolume volume,  OzoneBucket bucket) throws Exception {
+assertTrue(store.addAcl(parentObj, defaultUserAcl));
+assertTrue(store.addAcl(parentObj, defaultGroupAcl));
+if (volume != null) {
+  volume.deleteBucket(childObj.getBucketName());
+  volume.createBucket(childObj.getBucketName());
+} else {
+  if (childObj.getResourceType().equals(OzoneObj.ResourceType.KEY)) {
+bucket.deleteKey(childObj.getKeyName());
+writeKey(childObj.getKeyName(), bucket);
+  } else {
+store.setAcl(childObj, getAclList(new OzoneConfiguration()));
+  }
+}
+List acls = store.getAcl(parentObj);
+assertTrue("Current acls:" + StringUtils.join(",", acls) +
+" inheritedUserAcl:" + inheritedUserAcl,
+acls.contains(defaultUserAcl));
+assertTrue("Current acls:" + StringUtils.join(",", acls) +
+" inheritedUserAcl:" + inheritedUserAcl,
+acls.contains(defaultGroupAcl));
+
+acls = store.getAcl(childObj);
+assertTrue("Current acls:" + StringUtils.join(",", acls) + 
 
 Review comment:
   whitespace:end of line
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 275587)
Time Spent: 4h 50m  (was: 4h 40m)

> Support default Acls for volume, bucket, keys and prefix
> 
>
> Key: HDDS-1544
> URL: https://issues.apache.org/jira/browse/HDDS-1544
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDDS-1544.00.patch
>
>  Time Spent: 4h 50m
>  Remaining Estimate: 0h
>
> Add dAcls for volume, bucket, keys and prefix



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1544) Support default Acls for volume, bucket, keys and prefix

2019-07-11 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1544?focusedWorklogId=275583&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-275583
 ]

ASF GitHub Bot logged work on HDDS-1544:


Author: ASF GitHub Bot
Created on: 12/Jul/19 01:11
Start Date: 12/Jul/19 01:11
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #1074: 
HDDS-1544. Support default Acls for volume, bucket, keys and prefix. 
Contributed by Ajay Kumar.
URL: https://github.com/apache/hadoop/pull/1074#discussion_r302796141
 
 

 ##
 File path: 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/OzoneAcl.java
 ##
 @@ -59,11 +63,13 @@ public OzoneAcl() {
   /**
* Constructor for OzoneAcl.
*
-   * @param type - Type
-   * @param name - Name of user
-   * @param acl - Rights
+   * @param type   - Type
+   * @param name   - Name of user
+   * @param acl- Rights
+   * @param scope  - AclScope
*/
-  public OzoneAcl(ACLIdentityType type, String name, ACLType acl) {
+  public OzoneAcl(ACLIdentityType type, String name, ACLType acl, 
 
 Review comment:
   whitespace:end of line
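
   (For context, a minimal usage sketch of the scope-aware constructor shown in this hunk. It assumes the OzoneAcl(ACLIdentityType, String, ACLType, AclScope) signature from the patch and the usual IAccessAuthorizer.ACLType/ACLIdentityType import locations; it is an illustration, not part of the change.)

{code:java}
import org.apache.hadoop.ozone.OzoneAcl;
import org.apache.hadoop.ozone.OzoneAcl.AclScope;
import org.apache.hadoop.ozone.security.acl.IAccessAuthorizer.ACLIdentityType;
import org.apache.hadoop.ozone.security.acl.IAccessAuthorizer.ACLType;

public class OzoneAclScopeExample {
  public static void main(String[] args) {
    // ACCESS acl: evaluated when the object itself is accessed.
    OzoneAcl accessAcl = new OzoneAcl(
        ACLIdentityType.USER, "remoteUser", ACLType.READ, AclScope.ACCESS);

    // DEFAULT acl: meant to be inherited by children created under the object
    // (e.g. buckets under a volume, keys under a bucket).
    OzoneAcl defaultAcl = new OzoneAcl(
        ACLIdentityType.GROUP, "remoteUser", ACLType.READ, AclScope.DEFAULT);

    System.out.println(accessAcl);
    System.out.println(defaultAcl);
  }
}
{code}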
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 275583)
Time Spent: 4h 10m  (was: 4h)

> Support default Acls for volume, bucket, keys and prefix
> 
>
> Key: HDDS-1544
> URL: https://issues.apache.org/jira/browse/HDDS-1544
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDDS-1544.00.patch
>
>  Time Spent: 4h 10m
>  Remaining Estimate: 0h
>
> Add default Acls for volume, bucket, keys and prefix



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1544) Support default Acls for volume, bucket, keys and prefix

2019-07-11 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1544?focusedWorklogId=275586&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-275586
 ]

ASF GitHub Bot logged work on HDDS-1544:


Author: ASF GitHub Bot
Created on: 12/Jul/19 01:11
Start Date: 12/Jul/19 01:11
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #1074: 
HDDS-1544. Support default Acls for volume, bucket, keys and prefix. 
Contributed by Ajay Kumar.
URL: https://github.com/apache/hadoop/pull/1074#discussion_r302796183
 
 

 ##
 File path: 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/client/rpc/TestOzoneRpcClientAbstract.java
 ##
 @@ -2252,15 +2266,54 @@ public void testNativeAclsForBucket() throws Exception 
{
 .setStoreType(OzoneObj.StoreType.OZONE)
 .build();
 
-validateOzoneAcl(ozObj);
+validateOzoneAccessAcl(ozObj);
+
+OzoneObj volObj = new OzoneObjInfo.Builder()
+.setVolumeName(volumeName)
+.setResType(OzoneObj.ResourceType.VOLUME)
+.setStoreType(OzoneObj.StoreType.OZONE)
+.build();
+validateDefaultAcls(volObj, ozObj, volume, null);
+  }
+
+  private void validateDefaultAcls(OzoneObj parentObj, OzoneObj childObj, 
+  OzoneVolume volume,  OzoneBucket bucket) throws Exception {
+assertTrue(store.addAcl(parentObj, defaultUserAcl));
+assertTrue(store.addAcl(parentObj, defaultGroupAcl));
+if (volume != null) {
+  volume.deleteBucket(childObj.getBucketName());
+  volume.createBucket(childObj.getBucketName());
+} else {
+  if (childObj.getResourceType().equals(OzoneObj.ResourceType.KEY)) {
+bucket.deleteKey(childObj.getKeyName());
+writeKey(childObj.getKeyName(), bucket);
+  } else {
+store.setAcl(childObj, getAclList(new OzoneConfiguration()));
+  }
+}
+List<OzoneAcl> acls = store.getAcl(parentObj);
+assertTrue("Current acls:" + StringUtils.join(",", acls) +
+" inheritedUserAcl:" + inheritedUserAcl,
+acls.contains(defaultUserAcl));
+assertTrue("Current acls:" + StringUtils.join(",", acls) +
+" inheritedUserAcl:" + inheritedUserAcl,
+acls.contains(defaultGroupAcl));
+
+acls = store.getAcl(childObj);
 
 Review comment:
   whitespace:end of line
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 275586)
Time Spent: 4h 40m  (was: 4.5h)

> Support default Acls for volume, bucket, keys and prefix
> 
>
> Key: HDDS-1544
> URL: https://issues.apache.org/jira/browse/HDDS-1544
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDDS-1544.00.patch
>
>  Time Spent: 4h 40m
>  Remaining Estimate: 0h
>
> Add default Acls for volume, bucket, keys and prefix



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1544) Support default Acls for volume, bucket, keys and prefix

2019-07-11 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1544?focusedWorklogId=275602&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-275602
 ]

ASF GitHub Bot logged work on HDDS-1544:


Author: ASF GitHub Bot
Created on: 12/Jul/19 01:11
Start Date: 12/Jul/19 01:11
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #1074: 
HDDS-1544. Support default Acls for volume, bucket, keys and prefix. 
Contributed by Ajay Kumar.
URL: https://github.com/apache/hadoop/pull/1074#discussion_r302796242
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/KeyManagerImpl.java
 ##
 @@ -472,7 +481,8 @@ public OpenKeySession openKey(OmKeyArgs args) throws 
IOException {
 if (keyInfo == null) {
   // the key does not exist, create a new object, the new blocks are the
   // version 0
-  keyInfo = createKeyInfo(args, locations, factor, type, size, encInfo);
+  keyInfo = createKeyInfo(args, locations, factor, type, size, 
+  encInfo, bucketInfo);
 
 Review comment:
   whitespace:end of line
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 275602)
Time Spent: 7h 20m  (was: 7h 10m)

> Support default Acls for volume, bucket, keys and prefix
> 
>
> Key: HDDS-1544
> URL: https://issues.apache.org/jira/browse/HDDS-1544
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDDS-1544.00.patch
>
>  Time Spent: 7h 20m
>  Remaining Estimate: 0h
>
> Add default Acls for volume, bucket, keys and prefix



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1544) Support default Acls for volume, bucket, keys and prefix

2019-07-11 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1544?focusedWorklogId=275600&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-275600
 ]

ASF GitHub Bot logged work on HDDS-1544:


Author: ASF GitHub Bot
Created on: 12/Jul/19 01:11
Start Date: 12/Jul/19 01:11
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #1074: 
HDDS-1544. Support default Acls for volume, bucket, keys and prefix. 
Contributed by Ajay Kumar.
URL: https://github.com/apache/hadoop/pull/1074#discussion_r302796231
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/KeyManagerImpl.java
 ##
 @@ -983,6 +1019,48 @@ public OmMultipartInfo 
applyInitiateMultipartUpload(OmKeyArgs keyArgs,
 }
   }
 
+  private List<OzoneAclInfo> getAclsForKey(OmKeyArgs keyArgs, 
+  OmVolumeArgs volArgs, OmBucketInfo bucketInfo) {
+List<OzoneAclInfo> acls = new ArrayList<>(keyArgs.getAcls().size());
+
+keyArgs.getAcls().stream().map(OzoneAcl::toProtobuf).
+collect(Collectors.toList());
+
+//if(volArgs != null) {
+//  acls.addAll(volArgs.getAclMap().getDefaultAclList());
+//}
+
+// Inherit DEFAULT acls from prefix.
+boolean prefixParentFound = false;
+if(prefixManager != null) {
+  List<OmPrefixInfo> prefixList = prefixManager.getLongestPrefixPath(
+  OZONE_URI_DELIMITER +
+  keyArgs.getVolumeName() + OZONE_URI_DELIMITER +
+  keyArgs.getBucketName() + OZONE_URI_DELIMITER +
+  keyArgs.getKeyName());
+
+  if(prefixList.size() > 0) {
+// Add all acls from direct parent to key.
+OmPrefixInfo prefixInfo = prefixList.get(prefixList.size() - 1);
+if(prefixInfo  != null) {
+  acls.addAll(OzoneUtils.getDefaultAclsProto(prefixInfo.getAcls()));
+  prefixParentFound = true;
+}
+  }
+}
+
 
 Review comment:
   whitespace:end of line
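
   (Side note: the inheritance rule in this hunk is "take the DEFAULT acls of the longest prefix that covers the new key". A hedged, self-contained sketch of that idea follows; PrefixEntry and the string acls are simplified placeholders, not the real OmPrefixInfo/OzoneAclInfo classes.)

{code:java}
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class PrefixAclInheritanceSketch {

  /** Simplified stand-in for OmPrefixInfo: a prefix path plus its DEFAULT acls. */
  static class PrefixEntry {
    final String path;
    final List<String> defaultAcls;
    PrefixEntry(String path, List<String> defaultAcls) {
      this.path = path;
      this.defaultAcls = defaultAcls;
    }
  }

  /** Returns the DEFAULT acls of the longest prefix that covers the key path. */
  static List<String> inheritDefaultAcls(String keyPath, List<PrefixEntry> prefixes) {
    PrefixEntry longest = null;
    for (PrefixEntry p : prefixes) {
      if (keyPath.startsWith(p.path)
          && (longest == null || p.path.length() > longest.path.length())) {
        longest = p;
      }
    }
    return longest == null
        ? new ArrayList<String>()
        : new ArrayList<String>(longest.defaultAcls);
  }

  public static void main(String[] args) {
    List<PrefixEntry> prefixes = Arrays.asList(
        new PrefixEntry("/vol1/bucket1/", Arrays.asList("user:anu:rw")),
        new PrefixEntry("/vol1/bucket1/dir1/", Arrays.asList("group:dev:r")));
    // The key under dir1 inherits dir1's DEFAULT acls (the longest matching prefix).
    System.out.println(inheritDefaultAcls("/vol1/bucket1/dir1/key1", prefixes));
  }
}
{code}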
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 275600)
Time Spent: 7h  (was: 6h 50m)

> Support default Acls for volume, bucket, keys and prefix
> 
>
> Key: HDDS-1544
> URL: https://issues.apache.org/jira/browse/HDDS-1544
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDDS-1544.00.patch
>
>  Time Spent: 7h
>  Remaining Estimate: 0h
>
> Add default Acls for volume, bucket, keys and prefix



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1544) Support default Acls for volume, bucket, keys and prefix

2019-07-11 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1544?focusedWorklogId=275569&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-275569
 ]

ASF GitHub Bot logged work on HDDS-1544:


Author: ASF GitHub Bot
Created on: 12/Jul/19 01:11
Start Date: 12/Jul/19 01:11
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #1074: 
HDDS-1544. Support default Acls for volume, bucket, keys and prefix. 
Contributed by Ajay Kumar.
URL: https://github.com/apache/hadoop/pull/1074#discussion_r302796103
 
 

 ##
 File path: 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/OzoneAcl.java
 ##
 @@ -83,16 +89,19 @@ public OzoneAcl(ACLIdentityType type, String name, ACLType 
acl) {
 && (name.length() == 0)) {
   throw new IllegalArgumentException("User or group name is required");
 }
+aclScope = scope;
   }
 
   /**
* Constructor for OzoneAcl.
*
-   * @param type - Type
-   * @param name - Name of user
-   * @param acls - Rights
+   * @param type   - Type
+   * @param name   - Name of user
+   * @param acls   - Rights
+   * @param scope  - AclScope
*/
-  public OzoneAcl(ACLIdentityType type, String name, BitSet acls) {
+  public OzoneAcl(ACLIdentityType type, String name, BitSet acls, 
 
 Review comment:
   whitespace:end of line
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 275569)
Time Spent: 1h 50m  (was: 1h 40m)

> Support default Acls for volume, bucket, keys and prefix
> 
>
> Key: HDDS-1544
> URL: https://issues.apache.org/jira/browse/HDDS-1544
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDDS-1544.00.patch
>
>  Time Spent: 1h 50m
>  Remaining Estimate: 0h
>
> Add default Acls for volume, bucket, keys and prefix



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1544) Support default Acls for volume, bucket, keys and prefix

2019-07-11 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1544?focusedWorklogId=275588&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-275588
 ]

ASF GitHub Bot logged work on HDDS-1544:


Author: ASF GitHub Bot
Created on: 12/Jul/19 01:11
Start Date: 12/Jul/19 01:11
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #1074: 
HDDS-1544. Support default Acls for volume, bucket, keys and prefix. 
Contributed by Ajay Kumar.
URL: https://github.com/apache/hadoop/pull/1074#discussion_r302796189
 
 

 ##
 File path: 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/client/rpc/TestOzoneRpcClientAbstract.java
 ##
 @@ -2279,7 +2332,42 @@ public void testNativeAclsForKey() throws Exception {
 .setStoreType(OzoneObj.StoreType.OZONE)
 .build();
 
-validateOzoneAcl(ozObj);
+// Validates access acls.
+validateOzoneAccessAcl(ozObj);
 
 Review comment:
   whitespace:end of line
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 275588)
Time Spent: 5h  (was: 4h 50m)

> Support default Acls for volume, bucket, keys and prefix
> 
>
> Key: HDDS-1544
> URL: https://issues.apache.org/jira/browse/HDDS-1544
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDDS-1544.00.patch
>
>  Time Spent: 5h
>  Remaining Estimate: 0h
>
> Add default Acls for volume, bucket, keys and prefix



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1544) Support default Acls for volume, bucket, keys and prefix

2019-07-11 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1544?focusedWorklogId=275574&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-275574
 ]

ASF GitHub Bot logged work on HDDS-1544:


Author: ASF GitHub Bot
Created on: 12/Jul/19 01:11
Start Date: 12/Jul/19 01:11
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #1074: 
HDDS-1544. Support default Acls for volume, bucket, keys and prefix. 
Contributed by Ajay Kumar.
URL: https://github.com/apache/hadoop/pull/1074#discussion_r302796131
 
 

 ##
 File path: 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/OzoneAcl.java
 ##
 @@ -178,19 +204,55 @@ public static OzoneAclInfo toProtobuf(OzoneAcl acl) {
 OzoneAclInfo.Builder builder = OzoneAclInfo.newBuilder()
 .setName(acl.getName())
 .setType(OzoneAclType.valueOf(acl.getType().name()))
+.setAclScope(OzoneAclScope.valueOf(acl.getAclScope().name()))
 .setRights(ByteString.copyFrom(acl.getAclBitSet().toByteArray()));
 return builder.build();
   }
 
   public static OzoneAcl fromProtobuf(OzoneAclInfo protoAcl) {
 BitSet aclRights = BitSet.valueOf(protoAcl.getRights().toByteArray());
 return new OzoneAcl(ACLIdentityType.valueOf(protoAcl.getType().name()),
-protoAcl.getName(), aclRights);
+protoAcl.getName(), aclRights, 
+AclScope.valueOf(protoAcl.getAclScope().name()));
+  }
+
+  /**
+   * Helper function to convert a proto message of type {@link OzoneAclInfo}
+   * to {@link OzoneAcl} with acl scope of type ACCESS.
+   * 
+   * @param protoAcl
+   * @return OzoneAcl
+   * */
+  public static OzoneAcl fromProtobufWithAccessType(OzoneAclInfo protoAcl) {
+BitSet aclRights = BitSet.valueOf(protoAcl.getRights().toByteArray());
+return new OzoneAcl(ACLIdentityType.valueOf(protoAcl.getType().name()),
+protoAcl.getName(), aclRights, AclScope.ACCESS);
+  }
+
+  /**
+   * Helper function to convert an {@link OzoneAcl} to proto message of type
+   * {@link OzoneAclInfo} with acl scope of type ACCESS.
+   *
+   * @param acl
+   * @return OzoneAclInfo
+   * */
+  public static OzoneAclInfo toProtobufWithAccessType(OzoneAcl acl) {
+OzoneAclInfo.Builder builder = OzoneAclInfo.newBuilder()
+.setName(acl.getName())
+.setType(OzoneAclType.valueOf(acl.getType().name()))
+.setAclScope(OzoneAclScope.ACCESS)
+.setRights(ByteString.copyFrom(acl.getAclBitSet().toByteArray()));
+return builder.build();
   }
 
+  public AclScope getAclScope() {
+return aclScope;
+  }
+  
 
 Review comment:
   whitespace:end of line
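
   (A quick round-trip sketch of the conversion shown above: with the patched toProtobuf/fromProtobuf, the acl scope should survive serialization. The OzoneManagerProtocolProtos import path used here is my assumption, not taken from the patch.)

{code:java}
import org.apache.hadoop.ozone.OzoneAcl;
import org.apache.hadoop.ozone.OzoneAcl.AclScope;
import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos.OzoneAclInfo;
import org.apache.hadoop.ozone.security.acl.IAccessAuthorizer.ACLIdentityType;
import org.apache.hadoop.ozone.security.acl.IAccessAuthorizer.ACLType;

public class OzoneAclProtoRoundTrip {
  public static void main(String[] args) {
    OzoneAcl original = new OzoneAcl(
        ACLIdentityType.USER, "anu", ACLType.READ, AclScope.DEFAULT);

    // Serialize to the protobuf message; with this patch the scope is included.
    OzoneAclInfo proto = OzoneAcl.toProtobuf(original);

    // Deserialize and check that the DEFAULT scope was preserved.
    OzoneAcl restored = OzoneAcl.fromProtobuf(proto);
    System.out.println(restored.getAclScope()); // expected: DEFAULT
  }
}
{code}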
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 275574)
Time Spent: 2h 40m  (was: 2.5h)

> Support default Acls for volume, bucket, keys and prefix
> 
>
> Key: HDDS-1544
> URL: https://issues.apache.org/jira/browse/HDDS-1544
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDDS-1544.00.patch
>
>  Time Spent: 2h 40m
>  Remaining Estimate: 0h
>
> Add default Acls for volume, bucket, keys and prefix



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1544) Support default Acls for volume, bucket, keys and prefix

2019-07-11 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1544?focusedWorklogId=275585&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-275585
 ]

ASF GitHub Bot logged work on HDDS-1544:


Author: ASF GitHub Bot
Created on: 12/Jul/19 01:11
Start Date: 12/Jul/19 01:11
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #1074: 
HDDS-1544. Support default Acls for volume, bucket, keys and prefix. 
Contributed by Ajay Kumar.
URL: https://github.com/apache/hadoop/pull/1074#discussion_r302796180
 
 

 ##
 File path: 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/client/rpc/TestOzoneRpcClientAbstract.java
 ##
 @@ -2252,15 +2266,54 @@ public void testNativeAclsForBucket() throws Exception 
{
 .setStoreType(OzoneObj.StoreType.OZONE)
 .build();
 
-validateOzoneAcl(ozObj);
+validateOzoneAccessAcl(ozObj);
+
+OzoneObj volObj = new OzoneObjInfo.Builder()
+.setVolumeName(volumeName)
+.setResType(OzoneObj.ResourceType.VOLUME)
+.setStoreType(OzoneObj.StoreType.OZONE)
+.build();
+validateDefaultAcls(volObj, ozObj, volume, null);
+  }
+
 
 Review comment:
   whitespace:end of line
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 275585)
Time Spent: 4.5h  (was: 4h 20m)

> Support default Acls for volume, bucket, keys and prefix
> 
>
> Key: HDDS-1544
> URL: https://issues.apache.org/jira/browse/HDDS-1544
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDDS-1544.00.patch
>
>  Time Spent: 4.5h
>  Remaining Estimate: 0h
>
> Add default Acls for volume, bucket, keys and prefix



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1544) Support default Acls for volume, bucket, keys and prefix

2019-07-11 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1544?focusedWorklogId=275567&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-275567
 ]

ASF GitHub Bot logged work on HDDS-1544:


Author: ASF GitHub Bot
Created on: 12/Jul/19 01:11
Start Date: 12/Jul/19 01:11
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #1074: 
HDDS-1544. Support default Acls for volume, bucket, keys and prefix. 
Contributed by Ajay Kumar.
URL: https://github.com/apache/hadoop/pull/1074#discussion_r302796114
 
 

 ##
 File path: 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/OzoneAcl.java
 ##
 @@ -141,13 +153,27 @@ public static OzoneAcl parseAcl(String acl) throws 
IllegalArgumentException {
 ACLIdentityType aclType = ACLIdentityType.valueOf(parts[0].toUpperCase());
 BitSet acls = new BitSet(ACLType.getNoOfAcls());
 
-for (char ch : parts[2].toCharArray()) {
+String bits = parts[2];
+
 
 Review comment:
   whitespace:end of line
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 275567)
Time Spent: 1.5h  (was: 1h 20m)

> Support default Acls for volume, bucket, keys and prefix
> 
>
> Key: HDDS-1544
> URL: https://issues.apache.org/jira/browse/HDDS-1544
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDDS-1544.00.patch
>
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> Add default Acls for volume, bucket, keys and prefix



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1544) Support default Acls for volume, bucket, keys and prefix

2019-07-11 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1544?focusedWorklogId=275572&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-275572
 ]

ASF GitHub Bot logged work on HDDS-1544:


Author: ASF GitHub Bot
Created on: 12/Jul/19 01:11
Start Date: 12/Jul/19 01:11
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #1074: 
HDDS-1544. Support default Acls for volume, bucket, keys and prefix. 
Contributed by Ajay Kumar.
URL: https://github.com/apache/hadoop/pull/1074#discussion_r302796126
 
 

 ##
 File path: 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/OzoneAcl.java
 ##
 @@ -178,19 +204,55 @@ public static OzoneAclInfo toProtobuf(OzoneAcl acl) {
 OzoneAclInfo.Builder builder = OzoneAclInfo.newBuilder()
 .setName(acl.getName())
 .setType(OzoneAclType.valueOf(acl.getType().name()))
+.setAclScope(OzoneAclScope.valueOf(acl.getAclScope().name()))
 .setRights(ByteString.copyFrom(acl.getAclBitSet().toByteArray()));
 return builder.build();
   }
 
   public static OzoneAcl fromProtobuf(OzoneAclInfo protoAcl) {
 BitSet aclRights = BitSet.valueOf(protoAcl.getRights().toByteArray());
 return new OzoneAcl(ACLIdentityType.valueOf(protoAcl.getType().name()),
-protoAcl.getName(), aclRights);
+protoAcl.getName(), aclRights, 
 
 Review comment:
   whitespace:end of line
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 275572)
Time Spent: 2h 20m  (was: 2h 10m)

> Support default Acls for volume, bucket, keys and prefix
> 
>
> Key: HDDS-1544
> URL: https://issues.apache.org/jira/browse/HDDS-1544
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDDS-1544.00.patch
>
>  Time Spent: 2h 20m
>  Remaining Estimate: 0h
>
> Add default Acls for volume, bucket, keys and prefix



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1544) Support default Acls for volume, bucket, keys and prefix

2019-07-11 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1544?focusedWorklogId=275570&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-275570
 ]

ASF GitHub Bot logged work on HDDS-1544:


Author: ASF GitHub Bot
Created on: 12/Jul/19 01:11
Start Date: 12/Jul/19 01:11
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #1074: 
HDDS-1544. Support default Acls for volume, bucket, keys and prefix. 
Contributed by Ajay Kumar.
URL: https://github.com/apache/hadoop/pull/1074#discussion_r302796120
 
 

 ##
 File path: 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/OzoneAcl.java
 ##
 @@ -141,13 +153,27 @@ public static OzoneAcl parseAcl(String acl) throws 
IllegalArgumentException {
 ACLIdentityType aclType = ACLIdentityType.valueOf(parts[0].toUpperCase());
 BitSet acls = new BitSet(ACLType.getNoOfAcls());
 
-for (char ch : parts[2].toCharArray()) {
+String bits = parts[2];
+
+// Default acl scope is ACCESS.
+AclScope aclScope = AclScope.ACCESS;
+
 
 Review comment:
   whitespace:end of line
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 275570)
Time Spent: 2h  (was: 1h 50m)

> Support default Acls for volume, bucket, keys and prefix
> 
>
> Key: HDDS-1544
> URL: https://issues.apache.org/jira/browse/HDDS-1544
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDDS-1544.00.patch
>
>  Time Spent: 2h
>  Remaining Estimate: 0h
>
> Add default Acls for volume, bucket, keys and prefix



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1544) Support default Acls for volume, bucket, keys and prefix

2019-07-11 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1544?focusedWorklogId=275591&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-275591
 ]

ASF GitHub Bot logged work on HDDS-1544:


Author: ASF GitHub Bot
Created on: 12/Jul/19 01:11
Start Date: 12/Jul/19 01:11
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #1074: 
HDDS-1544. Support default Acls for volume, bucket, keys and prefix. 
Contributed by Ajay Kumar.
URL: https://github.com/apache/hadoop/pull/1074#discussion_r302796199
 
 

 ##
 File path: 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/client/rpc/TestOzoneRpcClientAbstract.java
 ##
 @@ -2433,8 +2532,10 @@ private void validateOzoneAcl(OzoneObj ozObj) throws 
IOException {
 expectedAcls.forEach(a -> assertTrue(finalNewAcls.contains(a)));
 
 // Reset acl's.
-OzoneAcl ua = new OzoneAcl(ACLIdentityType.USER, "userx", 
ACLType.READ_ACL);
-OzoneAcl ug = new OzoneAcl(ACLIdentityType.GROUP, "userx", ACLType.ALL);
+OzoneAcl ua = new OzoneAcl(USER, "userx", 
+ACLType.READ_ACL, ACCESS);
 
 Review comment:
   whitespace:end of line
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 275591)
Time Spent: 5.5h  (was: 5h 20m)

> Support default Acls for volume, bucket, keys and prefix
> 
>
> Key: HDDS-1544
> URL: https://issues.apache.org/jira/browse/HDDS-1544
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDDS-1544.00.patch
>
>  Time Spent: 5.5h
>  Remaining Estimate: 0h
>
> Add default Acls for volume, bucket, keys and prefix



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1544) Support default Acls for volume, bucket, keys and prefix

2019-07-11 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1544?focusedWorklogId=275568&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-275568
 ]

ASF GitHub Bot logged work on HDDS-1544:


Author: ASF GitHub Bot
Created on: 12/Jul/19 01:11
Start Date: 12/Jul/19 01:11
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #1074: 
HDDS-1544. Support default Acls for volume, bucket, keys and prefix. 
Contributed by Ajay Kumar.
URL: https://github.com/apache/hadoop/pull/1074#discussion_r302796110
 
 

 ##
 File path: 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/OzoneAcl.java
 ##
 @@ -120,16 +129,19 @@ public OzoneAcl(ACLIdentityType type, String name, 
BitSet acls) {
 && (name.length() == 0)) {
   throw new IllegalArgumentException("User or group name is required");
 }
+aclScope = scope;
   }
 
   /**
-   * Parses an ACL string and returns the ACL object.
+   * Parses an ACL string and returns the ACL object. If acl scope is not 
+   * passed in input string then scope is set to ACCESS.
*
* @param acl - Acl String , Ex. user:anu:rw
*
* @return - Ozone ACLs
*/
-  public static OzoneAcl parseAcl(String acl) throws IllegalArgumentException {
+  public static OzoneAcl parseAcl(String acl) 
 
 Review comment:
   whitespace:end of line
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 275568)
Time Spent: 1h 40m  (was: 1.5h)

> Support default Acls for volume, bucket, keys and prefix
> 
>
> Key: HDDS-1544
> URL: https://issues.apache.org/jira/browse/HDDS-1544
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDDS-1544.00.patch
>
>  Time Spent: 1h 40m
>  Remaining Estimate: 0h
>
> Add default Acls for volume, bucket, keys and prefix



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1544) Support default Acls for volume, bucket, keys and prefix

2019-07-11 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1544?focusedWorklogId=275582&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-275582
 ]

ASF GitHub Bot logged work on HDDS-1544:


Author: ASF GitHub Bot
Created on: 12/Jul/19 01:11
Start Date: 12/Jul/19 01:11
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #1074: 
HDDS-1544. Support default Acls for volume, bucket, keys and prefix. 
Contributed by Ajay Kumar.
URL: https://github.com/apache/hadoop/pull/1074#discussion_r302796173
 
 

 ##
 File path: hadoop-ozone/common/src/main/proto/OzoneManagerProtocol.proto
 ##
 @@ -507,9 +507,15 @@ message OzoneAclInfo {
 CLIENT_IP = 5;
 }
 
+enum OzoneAclScope {
+  ACCESS = 0;
+  DEFAULT = 1;
+}
+
 
 Review comment:
   whitespace:end of line
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 275582)
Time Spent: 4h  (was: 3h 50m)

> Support default Acls for volume, bucket, keys and prefix
> 
>
> Key: HDDS-1544
> URL: https://issues.apache.org/jira/browse/HDDS-1544
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDDS-1544.00.patch
>
>  Time Spent: 4h
>  Remaining Estimate: 0h
>
> Add default Acls for volume, bucket, keys and prefix



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1544) Support default Acls for volume, bucket, keys and prefix

2019-07-11 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1544?focusedWorklogId=275595&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-275595
 ]

ASF GitHub Bot logged work on HDDS-1544:


Author: ASF GitHub Bot
Created on: 12/Jul/19 01:11
Start Date: 12/Jul/19 01:11
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #1074: 
HDDS-1544. Support default Acls for volume, bucket, keys and prefix. 
Contributed by Ajay Kumar.
URL: https://github.com/apache/hadoop/pull/1074#discussion_r302796214
 
 

 ##
 File path: 
hadoop-ozone/objectstore-service/src/main/java/org/apache/hadoop/ozone/web/storage/DistributedStorageHandler.java
 ##
 @@ -71,6 +70,8 @@
 import java.util.Objects;
 import java.util.concurrent.TimeUnit;
 
+import static org.apache.hadoop.ozone.OzoneAcl.AclScope.ACCESS;
+
 
 Review comment:
   whitespace:end of line
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 275595)
Time Spent: 6h 10m  (was: 6h)

> Support default Acls for volume, bucket, keys and prefix
> 
>
> Key: HDDS-1544
> URL: https://issues.apache.org/jira/browse/HDDS-1544
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDDS-1544.00.patch
>
>  Time Spent: 6h 10m
>  Remaining Estimate: 0h
>
> Add default Acls for volume, bucket, keys and prefix



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1544) Support default Acls for volume, bucket, keys and prefix

2019-07-11 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1544?focusedWorklogId=275584&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-275584
 ]

ASF GitHub Bot logged work on HDDS-1544:


Author: ASF GitHub Bot
Created on: 12/Jul/19 01:11
Start Date: 12/Jul/19 01:11
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #1074: 
HDDS-1544. Support default Acls for volume, bucket, keys and prefix. 
Contributed by Ajay Kumar.
URL: https://github.com/apache/hadoop/pull/1074#discussion_r302796176
 
 

 ##
 File path: 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/client/rpc/TestOzoneRpcClientAbstract.java
 ##
 @@ -137,6 +142,15 @@
   private static OzoneManager ozoneManager;
   private static StorageContainerLocationProtocolClientSideTranslatorPB
   storageContainerLocationClient;
+  private static String remoteUserName = "remoteUser";
+  private static OzoneAcl defaultUserAcl = new OzoneAcl(USER, remoteUserName,
+  READ, DEFAULT);
+  private static OzoneAcl defaultGroupAcl = new OzoneAcl(GROUP, remoteUserName,
+  READ, DEFAULT);
+  private static OzoneAcl inheritedUserAcl = new OzoneAcl(USER, remoteUserName,
+  READ, ACCESS);
 
 Review comment:
   whitespace:end of line
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 275584)
Time Spent: 4h 20m  (was: 4h 10m)

> Support default Acls for volume, bucket, keys and prefix
> 
>
> Key: HDDS-1544
> URL: https://issues.apache.org/jira/browse/HDDS-1544
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDDS-1544.00.patch
>
>  Time Spent: 4h 20m
>  Remaining Estimate: 0h
>
> Add default Acls for volume, bucket, keys and prefix



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14323) Distcp fails in Hadoop 3.x when 2.x source webhdfs url has special characters in hdfs file path

2019-07-11 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14323?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16883449#comment-16883449
 ] 

Hudson commented on HDFS-14323:
---

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #16897 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/16897/])
HDFS-14466. Add a regression test for HDFS-14323. (aajisaka: rev 
00dd843a1a6c9d8b616631cdcf24c00e82498dab)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/web/TestWebHdfsUrl.java


> Distcp fails in Hadoop 3.x when 2.x source webhdfs url has special characters 
> in hdfs file path
> ---
>
> Key: HDFS-14323
> URL: https://issues.apache.org/jira/browse/HDFS-14323
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: webhdfs
>Affects Versions: 3.2.0
>Reporter: Srinivasu Majeti
>Assignee: Srinivasu Majeti
>Priority: Major
> Fix For: 3.3.0, 3.2.1, 3.1.3
>
> Attachments: HDFS-14323v0.patch
>
>
> There was an enhancement to allow semicolons in source/target URLs for the distcp 
> use case as part of HDFS-13176, and a backward-compatibility fix as part of 
> HDFS-13582. Still, there seems to be an issue when triggering distcp from a 3.x 
> cluster to pull webhdfs data from a 2.x Hadoop cluster. We may need to adjust the 
> existing fix as described below by checking whether the URL is already encoded; 
> that fixes it. 
> diff --git 
> a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/WebHdfsFileSystem.java
>  
> b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/WebHdfsFileSystem.java
> index 5936603c34a..dc790286aff 100644
> --- 
> a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/WebHdfsFileSystem.java
> +++ 
> b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/WebHdfsFileSystem.java
> @@ -609,7 +609,10 @@ URL toUrl(final HttpOpParam.Op op, final Path fspath,
>  boolean pathAlreadyEncoded = false;
>  try {
>  fspathUriDecoded = URLDecoder.decode(fspathUri.getPath(), "UTF-8");
> - pathAlreadyEncoded = true;
> + if(!fspathUri.getPath().equals(fspathUriDecoded))
> + {
> + pathAlreadyEncoded = true;
> + }
>  } catch (IllegalArgumentException ex) {
>  LOG.trace("Cannot decode URL encoded file", ex);
>  }
>  
>  
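
(The gist of the diff above: only treat the path as already encoded if URL-decoding actually changes it. A hedged, standalone illustration of that check, not the actual WebHdfsFileSystem code:)

{code:java}
import java.io.UnsupportedEncodingException;
import java.net.URLDecoder;

public class AlreadyEncodedCheck {

  /** True only if URL-decoding changes the path, i.e. it really contained escapes. */
  static boolean isAlreadyEncoded(String path) {
    try {
      String decoded = URLDecoder.decode(path, "UTF-8");
      return !path.equals(decoded);
    } catch (IllegalArgumentException | UnsupportedEncodingException e) {
      // Cannot decode the path; treat it as not encoded, mirroring the
      // trace-and-continue behaviour in the snippet above.
      return false;
    }
  }

  public static void main(String[] args) {
    // Plain path with '=' decodes to itself, so it is NOT flagged as encoded.
    System.out.println(isAlreadyEncoded("/user/hive/warehouse/test_part/dt=1"));
    // Percent-encoded path changes when decoded, so it IS flagged as encoded.
    System.out.println(isAlreadyEncoded("/user/hive/warehouse/test_part/dt%3D1"));
  }
}
{code}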



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14466) Add a regression test for HDFS-14323

2019-07-11 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14466?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16883448#comment-16883448
 ] 

Hudson commented on HDFS-14466:
---

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #16897 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/16897/])
HDFS-14466. Add a regression test for HDFS-14323. (aajisaka: rev 
00dd843a1a6c9d8b616631cdcf24c00e82498dab)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/web/TestWebHdfsUrl.java


> Add a regression test for HDFS-14323
> 
>
> Key: HDFS-14466
> URL: https://issues.apache.org/jira/browse/HDFS-14466
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: fs, webhdfs
>Affects Versions: 3.2.0
>Reporter: Yuya Ebihara
>Assignee: Masatake Iwasaki
>Priority: Minor
>  Labels: webhdfs
> Attachments: HADOOP-16258.001.patch, HDFS-14466.002.patch
>
>
> Recently, we upgraded hadoop library from 2.7.7 to 3.2.0. This issue happens 
> after the update. When we call FileSystem.listLocatedStatus with location 
> 'webhdfs://hadoop-master:50070/user/hive/warehouse/test_part/dt=1', the 
> internal calls are
>  * 2.7.7 
> [http://hadoop-master:50070/webhdfs/v1/user/hive/warehouse/test_part/dt=1?op=LISTSTATUS&user.name=xxx|http://hadoop-master:50070/webhdfs/v1/user/hive/warehouse/test_part/dt=1?op=LISTSTATUS&user.name=xxx%27,]
>  * 3.2.0 
> [http://hadoop-master:50070/webhdfs/v1/user/hive/warehouse/test_part/dt%253D1?op=LISTSTATUS&user.name=xxx]'
> As a result, it returns RemoteException with FileNotFoundException.
> {code:java}
> {"RemoteException":{"exception":"FileNotFoundException","javaClassName":"java.io.FileNotFoundException","message":"File
>  /user/hive/warehouse/test_part/dt%3D1 does not exist."}}
> {code}
> Could you please tell me whether it's a bug and the way to avoid it?



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14466) Add a regression test for HDFS-14323

2019-07-11 Thread Akira Ajisaka (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14466?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HDFS-14466:
-
Hadoop Flags: Reviewed
 Summary: Add a regression test for HDFS-14323  (was: 
FileSystem.listLocatedStatus for path including '=' encodes it and returns 
FileNotFoundException)

+1, committing this.

> Add a regression test for HDFS-14323
> 
>
> Key: HDFS-14466
> URL: https://issues.apache.org/jira/browse/HDFS-14466
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: fs, webhdfs
>Affects Versions: 3.2.0
>Reporter: Yuya Ebihara
>Assignee: Masatake Iwasaki
>Priority: Minor
>  Labels: webhdfs
> Attachments: HADOOP-16258.001.patch, HDFS-14466.002.patch
>
>
> Recently, we upgraded hadoop library from 2.7.7 to 3.2.0. This issue happens 
> after the update. When we call FileSystem.listLocatedStatus with location 
> 'webhdfs://hadoop-master:50070/user/hive/warehouse/test_part/dt=1', the 
> internal calls are
>  * 2.7.7 
> [http://hadoop-master:50070/webhdfs/v1/user/hive/warehouse/test_part/dt=1?op=LISTSTATUS&user.name=xxx|http://hadoop-master:50070/webhdfs/v1/user/hive/warehouse/test_part/dt=1?op=LISTSTATUS&user.name=xxx%27,]
>  * 3.2.0 
> [http://hadoop-master:50070/webhdfs/v1/user/hive/warehouse/test_part/dt%253D1?op=LISTSTATUS&user.name=xxx]'
> As a result, it returns RemoteException with FileNotFoundException.
> {code:java}
> {"RemoteException":{"exception":"FileNotFoundException","javaClassName":"java.io.FileNotFoundException","message":"File
>  /user/hive/warehouse/test_part/dt%3D1 does not exist."}}
> {code}
> Could you please tell me whether it's a bug and the way to avoid it?



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Resolved] (HDDS-545) NullPointerException error thrown while trying to close container

2019-07-11 Thread Anu Engineer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-545?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer resolved HDDS-545.
---
Resolution: Cannot Reproduce

Haven't seen this issue for a while. Resolving for now; please reopen if needed.

> NullPointerException error thrown while trying to close container
> -
>
> Key: HDDS-545
> URL: https://issues.apache.org/jira/browse/HDDS-545
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Nilotpal Nandi
>Assignee: Nanda kumar
>Priority: Critical
> Attachments: all-node-ozone-logs-1537875436.tar.gz
>
>
> Seen the following NullPointerException in ozone.log while trying to close the 
> container on receiving an SCM container close request.
>  
> ozone version:
> --
>  
> {noformat}
> Source code repository g...@github.com:apache/hadoop.git -r 
> 968082ffa5d9e50ed8538f653c375edd1b8feea5
> Compiled by elek on 2018-09-19T20:57Z
> Compiled with protoc 2.5.0
> From source with checksum efbdeabb5670d69d9efde85846e4ee98
> Using HDDS 0.2.1-alpha
> Source code repository g...@github.com:apache/hadoop.git -r 
> 968082ffa5d9e50ed8538f653c375edd1b8feea5
> Compiled by elek on 2018-09-19T20:56Z
> Compiled with protoc 2.5.0
> From source with checksum 8bf78cff4b73c95d486da5b21053ef
> {noformat}
>  
> ozone.log
> {noformat}
> 2018-09-24 11:32:55,910 [Thread-2921] DEBUG (XceiverServerRatis.java:401) - 
> pipeline Action CLOSE on pipeline 
> pipelineId=eabdcbe2-da3b-41be-a281-f0ea8d4120f7.Reason : 
> 7d1c7be2-7882-4446-be61-be868d2e188a is in candidate state for 1074164ms
> 2018-09-24 11:32:56,343 [Datanode State Machine Thread - 1] DEBUG 
> (HeartbeatEndpointTask.java:255) - Received SCM container close request for 
> container 54
> 2018-09-24 11:32:56,347 [Datanode State Machine Thread - 1] DEBUG 
> (HeartbeatEndpointTask.java:255) - Received SCM container close request for 
> container 42
> 2018-09-24 11:32:56,347 [Datanode State Machine Thread - 1] DEBUG 
> (HeartbeatEndpointTask.java:255) - Received SCM container close request for 
> container 44
> 2018-09-24 11:32:56,354 [Datanode State Machine Thread - 1] DEBUG 
> (HeartbeatEndpointTask.java:255) - Received SCM container close request for 
> container 46
> 2018-09-24 11:32:56,355 [Datanode State Machine Thread - 1] DEBUG 
> (HeartbeatEndpointTask.java:255) - Received SCM container close request for 
> container 48
> 2018-09-24 11:32:56,357 [Datanode State Machine Thread - 1] DEBUG 
> (HeartbeatEndpointTask.java:255) - Received SCM container close request for 
> container 50
> 2018-09-24 11:32:56,357 [Datanode State Machine Thread - 1] DEBUG 
> (HeartbeatEndpointTask.java:255) - Received SCM container close request for 
> container 52
> 2018-09-24 11:32:56,548 [Command processor thread] DEBUG 
> (CloseContainerCommandHandler.java:64) - Processing Close Container command.
> 2018-09-24 11:32:56,636 [Command processor thread] ERROR 
> (CloseContainerCommandHandler.java:105) - Can't close container 54
> java.lang.NullPointerException
>  at 
> org.apache.hadoop.ozone.container.ozoneimpl.OzoneContainer.submitContainerRequest(OzoneContainer.java:192)
>  at 
> org.apache.hadoop.ozone.container.common.statemachine.commandhandler.CloseContainerCommandHandler.handle(CloseContainerCommandHandler.java:91)
>  at 
> org.apache.hadoop.ozone.container.common.statemachine.commandhandler.CommandDispatcher.handle(CommandDispatcher.java:93)
>  at 
> org.apache.hadoop.ozone.container.common.statemachine.DatanodeStateMachine.lambda$initCommandHandlerThread$1(DatanodeStateMachine.java:382)
>  at java.lang.Thread.run(Thread.java:745)
> 2018-09-24 11:32:56,726 [Command processor thread] DEBUG 
> (CloseContainerCommandHandler.java:64) - Processing Close Container command.
> 2018-09-24 11:32:56,728 [Command processor thread] ERROR 
> (CloseContainerCommandHandler.java:105) - Can't close container 42
> java.lang.NullPointerException
>  at 
> org.apache.hadoop.ozone.container.ozoneimpl.OzoneContainer.submitContainerRequest(OzoneContainer.java:192)
>  at 
> org.apache.hadoop.ozone.container.common.statemachine.commandhandler.CloseContainerCommandHandler.handle(CloseContainerCommandHandler.java:91)
>  at 
> org.apache.hadoop.ozone.container.common.statemachine.commandhandler.CommandDispatcher.handle(CommandDispatcher.java:93)
>  at 
> org.apache.hadoop.ozone.container.common.statemachine.DatanodeStateMachine.lambda$initCommandHandlerThread$1(DatanodeStateMachine.java:382)
>  at java.lang.Thread.run(Thread.java:745)
> 2018-09-24 11:32:56,787 [Command processor thread] DEBUG 
> (CloseContainerCommandHandler.java:64) - Processing Close Container command.
> 2018-09-24 11:32:56,814 [Command processor thread] ERROR 
> (CloseContainerCommandHandler.java:105) - Can't close container 44
> java.lang.NullPointerException
>  a

[jira] [Resolved] (HDDS-622) Datanode shuts down with RocksDBStore java.lang.NoSuchMethodError

2019-07-11 Thread Anu Engineer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-622?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer resolved HDDS-622.
---
Resolution: Not A Problem

I don't think this is a problem anymore, since we are only going to support later 
versions of Hadoop. Closing for now.

> Datanode shuts down with RocksDBStore java.lang.NoSuchMethodError
> -
>
> Key: HDDS-622
> URL: https://issues.apache.org/jira/browse/HDDS-622
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Namit Maheshwari
>Priority: Critical
>  Labels: beta1
>
> Datanodes are registered fine on a Hadoop + Ozone cluster.
> While running jobs against Ozone, Datanode shuts down as below:
> {code:java}
> 2018-10-10 21:50:42,708 INFO storage.RaftLogWorker 
> (RaftLogWorker.java:rollLogSegment(263)) - Rolling 
> segment:7c1a32b5-34ed-4a2a-aa07-ac75d25858b6-RaftLogWorker index to:2
> 2018-10-10 21:50:42,714 INFO impl.RaftServerImpl 
> (ServerState.java:setRaftConf(319)) - 7c1a32b5-34ed-4a2a-aa07-ac75d25858b6: 
> set configuration 2: [7c1a32b5-34ed-4a2a-aa07-ac75d25858b6:172.27.56.9:9858, 
> e20b6291-d898-46de-8cb2-861523aed1a3:172.27.87.64:9858, 
> b7fbd501-27ae-4304-8c42-a612915094c6:172.27.17.133:9858], old=null at 2
> 2018-10-10 21:50:42,729 WARN impl.LogAppender (LogUtils.java:warn(135)) - 
> 7c1a32b5-34ed-4a2a-aa07-ac75d25858b6: Failed appendEntries to 
> e20b6291-d898-46de-8cb2-861523aed1a3:172.27.87.64:9858: 
> org.apache.ratis.shaded.io.grpc.StatusRuntimeException: UNAVAILABLE: io exception
> 2018-10-10 21:50:43,245 WARN impl.LogAppender (LogUtils.java:warn(135)) - 
> 7c1a32b5-34ed-4a2a-aa07-ac75d25858b6: Failed appendEntries to 
> e20b6291-d898-46de-8cb2-861523aed1a3:172.27.87.64:9858: 
> org.apache.ratis.shaded.io.grpc.StatusRuntimeException: UNAVAILABLE: io exception
> 2018-10-10 21:50:43,310 ERROR impl.RaftServerImpl 
> (RaftServerImpl.java:applyLogToStateMachine(1153)) - 
> 7c1a32b5-34ed-4a2a-aa07-ac75d25858b6: applyTransaction failed for index:1 
> proto:(t:2, i:1)SMLOGENTRY,,
> client-894EC0846FDF, cid=0
> 2018-10-10 21:50:43,313 ERROR impl.StateMachineUpdater 
> (ExitUtils.java:terminate(86)) - Terminating with exit status 2: 
> StateMachineUpdater-7c1a32b5-34ed-4a2a-aa07-ac75d25858b6: the 
> StateMachineUpdater hits Throwable
> java.lang.NoSuchMethodError: 
> org.apache.hadoop.metrics2.util.MBeans.register(Ljava/lang/String;Ljava/lang/String;Ljava/util/Map;Ljava/lang/Object;)Ljavax/management/ObjectName;
> at org.apache.hadoop.utils.RocksDBStore.<init>(RocksDBStore.java:74)
> at 
> org.apache.hadoop.utils.MetadataStoreBuilder.build(MetadataStoreBuilder.java:142)
> at 
> org.apache.hadoop.ozone.container.keyvalue.helpers.KeyValueContainerUtil.createContainerMetaData(KeyValueContainerUtil.java:78)
> at 
> org.apache.hadoop.ozone.container.keyvalue.KeyValueContainer.create(KeyValueContainer.java:133)
> at 
> org.apache.hadoop.ozone.container.keyvalue.KeyValueHandler.handleCreateContainer(KeyValueHandler.java:256)
> at 
> org.apache.hadoop.ozone.container.keyvalue.KeyValueHandler.handle(KeyValueHandler.java:179)
> at 
> org.apache.hadoop.ozone.container.common.impl.HddsDispatcher.dispatch(HddsDispatcher.java:142)
> at 
> org.apache.hadoop.ozone.container.common.transport.server.ratis.ContainerStateMachine.dispatchCommand(ContainerStateMachine.java:223)
> at 
> org.apache.hadoop.ozone.container.common.transport.server.ratis.ContainerStateMachine.runCommand(ContainerStateMachine.java:229)
> at 
> org.apache.hadoop.ozone.container.common.transport.server.ratis.ContainerStateMachine.access$300(ContainerStateMachine.java:115)
> at 
> org.apache.hadoop.ozone.container.common.transport.server.ratis.ContainerStateMachine$StateMachineHelper.handleCreateContainer(ContainerStateMachine.java:618)
> at 
> org.apache.hadoop.ozone.container.common.transport.server.ratis.ContainerStateMachine$StateMachineHelper.executeContainerCommand(ContainerStateMachine.java:642)
> at 
> org.apache.hadoop.ozone.container.common.transport.server.ratis.ContainerStateMachine.applyTransaction(ContainerStateMachine.java:396)
> at 
> org.apache.ratis.server.impl.RaftServerImpl.applyLogToStateMachine(RaftServerImpl.java:1150)
> at 
> org.apache.ratis.server.impl.StateMachineUpdater.run(StateMachineUpdater.java:148)
> at java.lang.Thread.run(Thread.java:748)
> 2018-10-10 21:50:43,320 INFO datanode.DataNode (LogAdapter.java:info(51)) - 
> SHUTDOWN_MSG:
> /
> SHUTDOWN_MSG: Shutting down DataNode at 
> ctr-e138-1518143905142-510793-01-02.hwx.site/172.27.56.9
> /
> {code}
>  



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe

[jira] [Work logged] (HDDS-1735) Create separated unit and integration test executor dev-support scripts

2019-07-11 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1735?focusedWorklogId=275564&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-275564
 ]

ASF GitHub Bot logged work on HDDS-1735:


Author: ASF GitHub Bot
Created on: 12/Jul/19 00:40
Start Date: 12/Jul/19 00:40
Worklog Time Spent: 10m 
  Work Description: anuengineer commented on issue #1035: HDDS-1735. Create 
separate unit and integration test executor dev-support script
URL: https://github.com/apache/hadoop/pull/1035#issuecomment-510701982
 
 
   @elek  I was trying to merge, but it seems like we have some conflicts, perhaps 
because I merged a patch from nanda; there is also an author check warning.
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 275564)
Time Spent: 1h 50m  (was: 1h 40m)

> Create separated unit and integration test executor dev-support scripts
> ---
>
> Key: HDDS-1735
> URL: https://issues.apache.org/jira/browse/HDDS-1735
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
>  Labels: pull-request-available
> Attachments: Screen Shot 2019-07-02 at 3.25.33 PM.png
>
>  Time Spent: 1h 50m
>  Remaining Estimate: 0h
>
> The hadoop-ozone/dev-support/checks directory contains multiple helper scripts to 
> execute different types of testing (findbugs, rat, unit, build).
> They easily define how tests should be executed, with the following contract:
>  * The problems should be printed out to the console
>  * In case of test failure, a non-zero exit code should be used
>  
> The tests are working well (in fact I have some experiments with executing 
> these scripts on k8s and argo where all the shell scripts are executed in 
> parallel), but we need some updates:
>  1. Most important: the unit tests and integration tests can be separated. 
> Integration tests are more flaky and it's better to have a way to run only 
> the normal unit tests
>  2. As HDDS-1115 introduced a pom.ozone.xml, it's better to use it instead 
> of the magical "am pl hadoop-ozone-dist" trick.
>  3. To make it possible to run blockade tests in containers, we should use the -T 
> flag with docker-compose
>  4. checkstyle violations are printed out to the console



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-1788) Add kerberos support to Ozone Recon

2019-07-11 Thread Vivek Ratnavel Subramanian (JIRA)
Vivek Ratnavel Subramanian created HDDS-1788:


 Summary: Add kerberos support to Ozone Recon
 Key: HDDS-1788
 URL: https://issues.apache.org/jira/browse/HDDS-1788
 Project: Hadoop Distributed Data Store
  Issue Type: Sub-task
  Components: Ozone Recon
Affects Versions: 0.4.0
Reporter: Vivek Ratnavel Subramanian
Assignee: Vivek Ratnavel Subramanian


Recon fails to start up in a kerberized cluster with the following error:


{code:java}
Failed startup of context 
o.e.j.w.WebAppContext@2009f9b0{/,file:///tmp/jetty-0.0.0.0-9888-recon-_-any-2565178148822292652.dir/webapp/,UNAVAILABLE}{/recon}
 javax.servlet.ServletException: javax.servlet.ServletException: Principal not 
defined in configuration at 
org.apache.hadoop.security.authentication.server.KerberosAuthenticationHandler.init(KerberosAuthenticationHandler.java:188)
 at 
org.apache.hadoop.security.authentication.server.AuthenticationFilter.initializeAuthHandler(AuthenticationFilter.java:194)
 at 
org.apache.hadoop.security.authentication.server.AuthenticationFilter.init(AuthenticationFilter.java:180)
 at org.eclipse.jetty.servlet.FilterHolder.initialize(FilterHolder.java:139) at 
org.eclipse.jetty.servlet.ServletHandler.initialize(ServletHandler.java:873) at 
org.eclipse.jetty.servlet.ServletContextHandler.startContext(ServletContextHandler.java:349)
 at org.eclipse.jetty.webapp.WebAppContext.startWebapp(WebAppContext.java:1406) 
at org.eclipse.jetty.webapp.WebAppContext.startContext(WebAppContext.java:1368) 
at 
org.eclipse.jetty.server.handler.ContextHandler.doStart(ContextHandler.java:778)
 at 
org.eclipse.jetty.servlet.ServletContextHandler.doStart(ServletContextHandler.java:262)
 at org.eclipse.jetty.webapp.WebAppContext.doStart(WebAppContext.java:522) at 
org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68)
 at 
org.eclipse.jetty.util.component.ContainerLifeCycle.start(ContainerLifeCycle.java:131)
 at 
org.eclipse.jetty.util.component.ContainerLifeCycle.doStart(ContainerLifeCycle.java:113)
 at 
org.eclipse.jetty.server.handler.AbstractHandler.doStart(AbstractHandler.java:61)
 at 
org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68)
 at 
org.eclipse.jetty.util.component.ContainerLifeCycle.start(ContainerLifeCycle.java:131)
 at org.eclipse.jetty.server.Server.start(Server.java:427) at 
org.eclipse.jetty.util.component.ContainerLifeCycle.doStart(ContainerLifeCycle.java:105)
 at 
org.eclipse.jetty.server.handler.AbstractHandler.doStart(AbstractHandler.java:61)
 at org.eclipse.jetty.server.Server.doStart(Server.java:394) at 
org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68)
 at org.apache.hadoop.http.HttpServer2.start(HttpServer2.java:1140) at 
org.apache.hadoop.hdds.server.BaseHttpServer.start(BaseHttpServer.java:175) at 
org.apache.hadoop.ozone.recon.ReconServer.call(ReconServer.java:102) at 
org.apache.hadoop.ozone.recon.ReconServer.call(ReconServer.java:50) at 
picocli.CommandLine.execute(CommandLine.java:1173) at 
picocli.CommandLine.access$800(CommandLine.java:141) at 
picocli.CommandLine$RunLast.handle(CommandLine.java:1367) at 
picocli.CommandLine$RunLast.handle(CommandLine.java:1335) at 
picocli.CommandLine$AbstractParseResultHandler.handleParseResult(CommandLine.java:1243)
 at picocli.CommandLine.parseWithHandlers(CommandLine.java:1526) at 
picocli.CommandLine.parseWithHandler(CommandLine.java:1465) at 
org.apache.hadoop.hdds.cli.GenericCli.execute(GenericCli.java:65) at 
org.apache.hadoop.hdds.cli.GenericCli.run(GenericCli.java:56) at 
org.apache.hadoop.ozone.recon.ReconServer.main(ReconServer.java:61)
{code}
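
The stack trace shows the AuthenticationFilter failing because no Kerberos principal is defined for Recon's HTTP endpoint. For illustration only, a minimal sketch of the kind of configuration the fix would have to supply; the property names below are assumptions, not the actual Recon configuration keys:
{code:java}
// Hedged sketch: the exact Recon configuration keys are assumptions here.
OzoneConfiguration conf = new OzoneConfiguration();
conf.set("ozone.recon.http.auth.type", "kerberos");                             // assumed key
conf.set("ozone.recon.http.auth.kerberos.principal", "HTTP/_HOST@EXAMPLE.COM"); // assumed key
conf.set("ozone.recon.http.auth.kerberos.keytab",
    "/etc/security/keytabs/recon.service.keytab");                              // assumed key
{code}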



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work started] (HDDS-1788) Add kerberos support to Ozone Recon

2019-07-11 Thread Vivek Ratnavel Subramanian (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1788?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HDDS-1788 started by Vivek Ratnavel Subramanian.

> Add kerberos support to Ozone Recon
> ---
>
> Key: HDDS-1788
> URL: https://issues.apache.org/jira/browse/HDDS-1788
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Recon
>Affects Versions: 0.4.0
>Reporter: Vivek Ratnavel Subramanian
>Assignee: Vivek Ratnavel Subramanian
>Priority: Major
>
> Recon fails to start up in a kerberized cluster with the following error:
> {code:java}
> Failed startup of context 
> o.e.j.w.WebAppContext@2009f9b0{/,file:///tmp/jetty-0.0.0.0-9888-recon-_-any-2565178148822292652.dir/webapp/,UNAVAILABLE}{/recon}
>  javax.servlet.ServletException: javax.servlet.ServletException: Principal 
> not defined in configuration at 
> org.apache.hadoop.security.authentication.server.KerberosAuthenticationHandler.init(KerberosAuthenticationHandler.java:188)
>  at 
> org.apache.hadoop.security.authentication.server.AuthenticationFilter.initializeAuthHandler(AuthenticationFilter.java:194)
>  at 
> org.apache.hadoop.security.authentication.server.AuthenticationFilter.init(AuthenticationFilter.java:180)
>  at org.eclipse.jetty.servlet.FilterHolder.initialize(FilterHolder.java:139) 
> at 
> org.eclipse.jetty.servlet.ServletHandler.initialize(ServletHandler.java:873) 
> at 
> org.eclipse.jetty.servlet.ServletContextHandler.startContext(ServletContextHandler.java:349)
>  at 
> org.eclipse.jetty.webapp.WebAppContext.startWebapp(WebAppContext.java:1406) 
> at 
> org.eclipse.jetty.webapp.WebAppContext.startContext(WebAppContext.java:1368) 
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doStart(ContextHandler.java:778)
>  at 
> org.eclipse.jetty.servlet.ServletContextHandler.doStart(ServletContextHandler.java:262)
>  at org.eclipse.jetty.webapp.WebAppContext.doStart(WebAppContext.java:522) at 
> org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68)
>  at 
> org.eclipse.jetty.util.component.ContainerLifeCycle.start(ContainerLifeCycle.java:131)
>  at 
> org.eclipse.jetty.util.component.ContainerLifeCycle.doStart(ContainerLifeCycle.java:113)
>  at 
> org.eclipse.jetty.server.handler.AbstractHandler.doStart(AbstractHandler.java:61)
>  at 
> org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68)
>  at 
> org.eclipse.jetty.util.component.ContainerLifeCycle.start(ContainerLifeCycle.java:131)
>  at org.eclipse.jetty.server.Server.start(Server.java:427) at 
> org.eclipse.jetty.util.component.ContainerLifeCycle.doStart(ContainerLifeCycle.java:105)
>  at 
> org.eclipse.jetty.server.handler.AbstractHandler.doStart(AbstractHandler.java:61)
>  at org.eclipse.jetty.server.Server.doStart(Server.java:394) at 
> org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68)
>  at org.apache.hadoop.http.HttpServer2.start(HttpServer2.java:1140) at 
> org.apache.hadoop.hdds.server.BaseHttpServer.start(BaseHttpServer.java:175) 
> at org.apache.hadoop.ozone.recon.ReconServer.call(ReconServer.java:102) at 
> org.apache.hadoop.ozone.recon.ReconServer.call(ReconServer.java:50) at 
> picocli.CommandLine.execute(CommandLine.java:1173) at 
> picocli.CommandLine.access$800(CommandLine.java:141) at 
> picocli.CommandLine$RunLast.handle(CommandLine.java:1367) at 
> picocli.CommandLine$RunLast.handle(CommandLine.java:1335) at 
> picocli.CommandLine$AbstractParseResultHandler.handleParseResult(CommandLine.java:1243)
>  at picocli.CommandLine.parseWithHandlers(CommandLine.java:1526) at 
> picocli.CommandLine.parseWithHandler(CommandLine.java:1465) at 
> org.apache.hadoop.hdds.cli.GenericCli.execute(GenericCli.java:65) at 
> org.apache.hadoop.hdds.cli.GenericCli.run(GenericCli.java:56) at 
> org.apache.hadoop.ozone.recon.ReconServer.main(ReconServer.java:61)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1784) Missing HostName and IpAddress in the response of register command

2019-07-11 Thread Anu Engineer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1784?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDDS-1784:
---
   Resolution: Fixed
Fix Version/s: 0.5.0
   Status: Resolved  (was: Patch Available)

Thanks for the contribution; I have committed this to the trunk branch. Please 
feel free to cherry-pick if you would like this to be part of 0.4.1.

> Missing HostName and IpAddress in the response of register command
> --
>
> Key: HDDS-1784
> URL: https://issues.apache.org/jira/browse/HDDS-1784
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: SCM
>Reporter: Nanda kumar
>Assignee: Nanda kumar
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> {{SCMNodeManager}} sets the HostName and IpAddress in the response to the 
> register command, but they are ignored in {{SCMDatanodeProtocolServer}} 
> while sending the response back to the datanode.
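
For illustration, a hedged sketch of the kind of change described above; the builder and field names are assumptions, not the committed patch:
{code:java}
// Hedged sketch: propagate the hostname/IP that SCMNodeManager placed on the
// registered command into the protobuf response instead of dropping them.
SCMRegisteredResponseProto.Builder builder = SCMRegisteredResponseProto.newBuilder()
    .setClusterID(registeredCommand.getClusterID())
    .setDatanodeUUID(registeredCommand.getDatanodeUUID());
if (registeredCommand.getHostName() != null) {
  builder.setHostname(registeredCommand.getHostName());   // assumed proto field
  builder.setIpAddress(registeredCommand.getIpAddress()); // assumed proto field
}
return builder.build();
{code}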



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1784) Missing HostName and IpAddress in the response of register command

2019-07-11 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1784?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16883426#comment-16883426
 ] 

Hudson commented on HDDS-1784:
--

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #16896 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/16896/])
HDDS-1784. Missing HostName and IpAddress in the response of register (nanda: 
rev 0f399b0d57875c64f49df3942743111905fd2198)
* (edit) 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/server/SCMDatanodeProtocolServer.java


> Missing HostName and IpAddress in the response of register command
> --
>
> Key: HDDS-1784
> URL: https://issues.apache.org/jira/browse/HDDS-1784
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: SCM
>Reporter: Nanda kumar
>Assignee: Nanda kumar
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> {{SCMNodeManager}} sets the HostName and IpAddress in the response to the 
> register command, but they are ignored in {{SCMDatanodeProtocolServer}} 
> while sending the response back to the datanode.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1784) Missing HostName and IpAddress in the response of register command

2019-07-11 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1784?focusedWorklogId=275562&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-275562
 ]

ASF GitHub Bot logged work on HDDS-1784:


Author: ASF GitHub Bot
Created on: 12/Jul/19 00:17
Start Date: 12/Jul/19 00:17
Worklog Time Spent: 10m 
  Work Description: anuengineer commented on pull request #1078: HDDS-1784. 
Missing HostName and IpAddress in the response of register command.
URL: https://github.com/apache/hadoop/pull/1078
 
 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 275562)
Time Spent: 40m  (was: 0.5h)

> Missing HostName and IpAddress in the response of register command
> --
>
> Key: HDDS-1784
> URL: https://issues.apache.org/jira/browse/HDDS-1784
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: SCM
>Reporter: Nanda kumar
>Assignee: Nanda kumar
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> {{SCMNodeManager}} sets the HostName and IpAddress in the response to the 
> register command, but they are ignored in {{SCMDatanodeProtocolServer}} 
> while sending the response back to the datanode.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1554) Create disk tests for fault injection test

2019-07-11 Thread Eric Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1554?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16883425#comment-16883425
 ] 

Eric Yang commented on HDDS-1554:
-

Patch 12 addresses all comments to date.  Most of the reusable code has been 
moved to the ClusterTester class as a set of primitive methods for reuse. 
 [~arp] All yaml files have been cleaned up to use inheritance.

[~elek] For H) there is no fix because there is a 30 second wait between 
cluster restarts; I think this is sufficient to initialize the scm metadata, 
hence no extra handling has been added.

I think I addressed all previous concerns; let me know if I missed anything.  
Thanks

> Create disk tests for fault injection test
> --
>
> Key: HDDS-1554
> URL: https://issues.apache.org/jira/browse/HDDS-1554
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: build
>Reporter: Eric Yang
>Assignee: Eric Yang
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDDS-1554.001.patch, HDDS-1554.002.patch, 
> HDDS-1554.003.patch, HDDS-1554.004.patch, HDDS-1554.005.patch, 
> HDDS-1554.006.patch, HDDS-1554.007.patch, HDDS-1554.008.patch, 
> HDDS-1554.009.patch, HDDS-1554.010.patch, HDDS-1554.011.patch, 
> HDDS-1554.012.patch
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> The current plan for the fault injection disk tests is:
>  # Scenario 1 - Read/Write test
>  ## Run docker-compose to bring up a cluster
>  ## Initialize scm and om
>  ## Upload data to Ozone cluster
>  ## Verify data is correct
>  ## Shutdown cluster
>  # Scenario 2 - Read/Only test
>  ## Repeat Scenario 1
>  ## Mount data disk as read only
>  ## Try to write data to Ozone cluster
>  ## Validate error message is correct
>  ## Shutdown cluster
>  # Scenario 3 - Corruption test
>  ## Repeat Scenario 2
>  ## Shutdown cluster
>  ## Modify data disk data
>  ## Restart cluster
>  ## Validate error message for read from corrupted data
>  ## Validate error message for write to corrupted volume



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDFS-14616) Add the warn log when the volume available space isn't enough

2019-07-11 Thread Wei-Chiu Chuang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14616?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang reassigned HDFS-14616:
--

Assignee: liying

> Add the warn log when the volume available space isn't enough
> -
>
> Key: HDFS-14616
> URL: https://issues.apache.org/jira/browse/HDFS-14616
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs
>Affects Versions: 2.7.2
>Reporter: liying
>Assignee: liying
>Priority: Minor
> Fix For: 2.7.3
>
> Attachments: HDFS-14616-v1.patch, HDFS-14616-v2.patch
>
>
> In the hadoop2 version, there is no warning log when the disk is not 
> available while the disk is being used. Therefore, the datanode log cannot be 
> used to check whether the disk was unavailable at a certain time or had other problems.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1554) Create disk tests for fault injection test

2019-07-11 Thread Eric Yang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1554?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Yang updated HDDS-1554:

Attachment: HDDS-1554.012.patch

> Create disk tests for fault injection test
> --
>
> Key: HDDS-1554
> URL: https://issues.apache.org/jira/browse/HDDS-1554
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: build
>Reporter: Eric Yang
>Assignee: Eric Yang
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDDS-1554.001.patch, HDDS-1554.002.patch, 
> HDDS-1554.003.patch, HDDS-1554.004.patch, HDDS-1554.005.patch, 
> HDDS-1554.006.patch, HDDS-1554.007.patch, HDDS-1554.008.patch, 
> HDDS-1554.009.patch, HDDS-1554.010.patch, HDDS-1554.011.patch, 
> HDDS-1554.012.patch
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> The current plan for the fault injection disk tests is:
>  # Scenario 1 - Read/Write test
>  ## Run docker-compose to bring up a cluster
>  ## Initialize scm and om
>  ## Upload data to Ozone cluster
>  ## Verify data is correct
>  ## Shutdown cluster
>  # Scenario 2 - Read/Only test
>  ## Repeat Scenario 1
>  ## Mount data disk as read only
>  ## Try to write data to Ozone cluster
>  ## Validate error message is correct
>  ## Shutdown cluster
>  # Scenario 3 - Corruption test
>  ## Repeat Scenario 2
>  ## Shutdown cluster
>  ## Modify data disk data
>  ## Restart cluster
>  ## Validate error message for read from corrupted data
>  ## Validate error message for write to corrupted volume



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14616) Add the warn log when the volume available space isn't enough

2019-07-11 Thread Wei-Chiu Chuang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14616?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16883408#comment-16883408
 ] 

Wei-Chiu Chuang commented on HDFS-14616:


If it just adds a log message, you don't need a test case.
Also note that RoundRobinVolumeChoosingPolicy uses an slf4j logger, so you can 
consider using parameterized logging as well.
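
For illustration, a minimal sketch of the kind of parameterized slf4j warning suggested above; the method and variable names are illustrative, not the actual HDFS-14616 patch:
{code:java}
// Hedged sketch: warn when the chosen volume has less space than the requested block size.
private static final Logger LOG =
    LoggerFactory.getLogger(RoundRobinVolumeChoosingPolicy.class);

private void warnIfLowSpace(FsVolumeSpi volume, long blockSize) throws IOException {
  if (volume.getAvailable() < blockSize) {
    LOG.warn("Volume {} has only {} bytes available, less than the requested "
        + "block size {}.", volume, volume.getAvailable(), blockSize);
  }
}
{code}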

> Add the warn log when the volume available space isn't enough
> -
>
> Key: HDFS-14616
> URL: https://issues.apache.org/jira/browse/HDFS-14616
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs
>Affects Versions: 2.7.2
>Reporter: liying
>Priority: Minor
> Fix For: 2.7.3
>
> Attachments: HDFS-14616-v1.patch, HDFS-14616-v2.patch
>
>
> In the hadoop2 version, there is no warning log when the disk is not 
> available while the disk is being used. Therefore, the datanode log cannot be 
> used to check whether the disk was unavailable at a certain time or had other problems.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1787) NPE thrown while trying to find DN closest to client

2019-07-11 Thread Siddharth Wagle (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1787?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16883400#comment-16883400
 ] 

Siddharth Wagle commented on HDDS-1787:
---

cc: [~msingh]

> NPE thrown while trying to find DN closest to client
> 
>
> Key: HDDS-1787
> URL: https://issues.apache.org/jira/browse/HDDS-1787
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Affects Versions: 0.5.0
>Reporter: Siddharth Wagle
>Priority: Major
>
> cc: [~xyao] This seems related to the client side topology changes, not sure 
> if some other Jira is already addressing this.
> {code}
> 2019-07-10 16:45:53,176 WARN  ipc.Server (Server.java:logException(2724)) - 
> IPC Server handler 14 on 35066, call Call#127037 Retry#0 
> org.apache.hadoop.hdds.scm.protocol.ScmBlockLocationProtocol.send from 17
> 2.31.116.73:52540
> java.lang.NullPointerException
> at 
> org.apache.hadoop.ozone.protocolPB.ScmBlockLocationProtocolServerSideTranslatorPB.lambda$sortDatanodes$0(ScmBlockLocationProtocolServerSideTranslatorPB.java:215)
> at 
> java.util.ArrayList$ArrayListSpliterator.forEachRemaining(ArrayList.java:1382)
> at 
> java.util.stream.ReferencePipeline$Head.forEach(ReferencePipeline.java:580)
> at 
> org.apache.hadoop.ozone.protocolPB.ScmBlockLocationProtocolServerSideTranslatorPB.sortDatanodes(ScmBlockLocationProtocolServerSideTranslatorPB.java:215)
> at 
> org.apache.hadoop.ozone.protocolPB.ScmBlockLocationProtocolServerSideTranslatorPB.send(ScmBlockLocationProtocolServerSideTranslatorPB.java:124)
> at 
> org.apache.hadoop.hdds.protocol.proto.ScmBlockLocationProtocolProtos$ScmBlockLocationProtocolService$2.callBlockingMethod(ScmBlockLocationProtocolProtos.java:13157)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:524)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1025)
> at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:876)
> at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:822)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2682)
> 2019-07-10 16:45:53,176 WARN  om.KeyManagerImpl 
> (KeyManagerImpl.java:lambda$sortDatanodeInPipeline$7(2129)) - Unable to sort 
> datanodes based on distance to client, volume=xqoyzocpse, bucket=vxwajaczqh, 
> key=pool-444-thread-7-201077822, client=127.0.0.1, 
> datanodes=[10f15723-45d7-4a0c-8f01-8b101744a110{ip: 172.31.116.73, host: 
> sid-minichaos.gce.cloudera.com, networkLocation: /default-rack, certSerialId: 
> null}, 7ac2777f-0a5c-4414-9e7f-bfbc47d696ea{ip: 172.31.116.73, host: 
> sid-minichaos.gce.cloudera.com, networkLocation: /default-rack, certSerialId: 
> null}], exception=java.lang.NullPointerException
> at 
> org.apache.hadoop.ozone.protocolPB.ScmBlockLocationProtocolServerSideTranslatorPB.lambda$sortDatanodes$0(ScmBlockLocationProtocolServerSideTranslatorPB.java:215)
> at 
> java.util.ArrayList$ArrayListSpliterator.forEachRemaining(ArrayList.java:1382)
> at 
> java.util.stream.ReferencePipeline$Head.forEach(ReferencePipeline.java:580)
> at 
> org.apache.hadoop.ozone.protocolPB.ScmBlockLocationProtocolServerSideTranslatorPB.sortDatanodes(ScmBlockLocationProtocolServerSideTranslatorPB.java:215)
> at 
> org.apache.hadoop.ozone.protocolPB.ScmBlockLocationProtocolServerSideTranslatorPB.send(ScmBlockLocationProtocolServerSideTranslatorPB.java:124)
> at 
> org.apache.hadoop.hdds.protocol.proto.ScmBlockLocationProtocolProtos$ScmBlockLocationProtocolService$2.callBlockingMethod(ScmBlockLocationProtocolProtos.java:13157)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:524)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1025)
> at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:876)
> at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:822)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2682)
> {code}
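
For illustration, a hedged sketch of the kind of defensive handling that could avoid the NPE while sorting; the lookup helper and variable names are assumptions, and the actual fix may look different:
{code:java}
// Hedged sketch: skip datanodes that cannot be resolved instead of
// dereferencing a null node inside the sort lambda.
List<DatanodeDetails> resolvedNodes = new ArrayList<>();
for (String uuid : requestedNodeUuids) {                   // illustrative input
  DatanodeDetails node = nodeManager.getNodeByUuid(uuid);  // assumed lookup helper
  if (node == null) {
    LOG.warn("Datanode {} is not known to SCM; skipping it while sorting.", uuid);
    continue;
  }
  resolvedNodes.add(node);
}
{code}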



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org

[jira] [Work logged] (HDDS-1544) Support default Acls for volume, bucket, keys and prefix

2019-07-11 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1544?focusedWorklogId=275544&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-275544
 ]

ASF GitHub Bot logged work on HDDS-1544:


Author: ASF GitHub Bot
Created on: 11/Jul/19 23:11
Start Date: 11/Jul/19 23:11
Worklog Time Spent: 10m 
  Work Description: ajayydv commented on pull request #1074: HDDS-1544. 
Support default Acls for volume, bucket, keys and prefix. Contributed by Ajay 
Kumar.
URL: https://github.com/apache/hadoop/pull/1074#discussion_r302776110
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OzoneManager.java
 ##
 @@ -1689,7 +1688,8 @@ public void createVolume(OmVolumeArgs args) throws 
IOException {
 !ozAdmins.contains(ProtobufRpcEngine.Server.getRemoteUser()
 .getUserName())) {
   LOG.error("Only admin users are authorized to create " +
-  "Ozone volumes.");
+  "Ozone volumes. User :{} is not an.", 
 
 Review comment:
   resolved in new commit.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 275544)
Time Spent: 1h  (was: 50m)

> Support default Acls for volume, bucket, keys and prefix
> 
>
> Key: HDDS-1544
> URL: https://issues.apache.org/jira/browse/HDDS-1544
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDDS-1544.00.patch
>
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> Add dAcls for volume, bucket, keys and prefix



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1544) Support default Acls for volume, bucket, keys and prefix

2019-07-11 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1544?focusedWorklogId=275546&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-275546
 ]

ASF GitHub Bot logged work on HDDS-1544:


Author: ASF GitHub Bot
Created on: 11/Jul/19 23:11
Start Date: 11/Jul/19 23:11
Worklog Time Spent: 10m 
  Work Description: ajayydv commented on pull request #1074: HDDS-1544. 
Support default Acls for volume, bucket, keys and prefix. Contributed by Ajay 
Kumar.
URL: https://github.com/apache/hadoop/pull/1074#discussion_r302776156
 
 

 ##
 File path: 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/OzoneAcl.java
 ##
 @@ -92,7 +98,8 @@ public OzoneAcl(ACLIdentityType type, String name, ACLType 
acl) {
* @param name - Name of user
* @param acls - Rights
*/
-  public OzoneAcl(ACLIdentityType type, String name, BitSet acls) {
+  public OzoneAcl(ACLIdentityType type, String name, BitSet acls, 
+  AclScope scope) {
 
 Review comment:
   done
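
For context, a hedged usage sketch of the new constructor shown in the diff above; the enum constants are assumptions based on the issue title, not confirmed here:
{code:java}
// Hedged sketch: constructing an ACL with an explicit scope.
BitSet rights = new BitSet();
rights.set(IAccessAuthorizer.ACLType.READ.ordinal());
OzoneAcl defaultRead = new OzoneAcl(
    IAccessAuthorizer.ACLIdentityType.USER, "hive", rights,
    OzoneAcl.AclScope.DEFAULT);                            // assumed enum constant
{code}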
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 275546)
Time Spent: 1h 20m  (was: 1h 10m)

> Support default Acls for volume, bucket, keys and prefix
> 
>
> Key: HDDS-1544
> URL: https://issues.apache.org/jira/browse/HDDS-1544
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDDS-1544.00.patch
>
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> Add dAcls for volume, bucket, keys and prefix



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1544) Support default Acls for volume, bucket, keys and prefix

2019-07-11 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1544?focusedWorklogId=275545&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-275545
 ]

ASF GitHub Bot logged work on HDDS-1544:


Author: ASF GitHub Bot
Created on: 11/Jul/19 23:11
Start Date: 11/Jul/19 23:11
Worklog Time Spent: 10m 
  Work Description: ajayydv commented on pull request #1074: HDDS-1544. 
Support default Acls for volume, bucket, keys and prefix. Contributed by Ajay 
Kumar.
URL: https://github.com/apache/hadoop/pull/1074#discussion_r302776130
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OzoneManager.java
 ##
 @@ -105,6 +105,7 @@
 import org.apache.hadoop.ozone.audit.Auditor;
 import org.apache.hadoop.ozone.audit.OMAction;
 import org.apache.hadoop.ozone.common.Storage.StorageState;
+import org.apache.hadoop.ozone.OzoneConfigKeys;
 
 Review comment:
   removed.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 275545)
Time Spent: 1h 10m  (was: 1h)

> Support default Acls for volume, bucket, keys and prefix
> 
>
> Key: HDDS-1544
> URL: https://issues.apache.org/jira/browse/HDDS-1544
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDDS-1544.00.patch
>
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> Add dAcls for volume, bucket, keys and prefix



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDDS-1773) Add intermittent IO disk test to fault injection test

2019-07-11 Thread Eric Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1773?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16883392#comment-16883392
 ] 

Eric Yang edited comment on HDDS-1773 at 7/11/19 11:08 PM:
---

[~elek] Disk hang is still a good test case to verify the datanode health logic.  
How about we move the patch 001 logic to HDDS-1774?

I have expressed concern about Byteman touching ASF-licensed code in the JVM.  
There is no clear answer on whether this is allowed.  Besides, it is not clear to me 
whether injecting faults in the JVM can yield the same result as actual disk errors.

The current approach of mounting a disk volume is a more favorable way to 
simulate disk errors.  It can be combined with the device mapper to create a faulty 
virtual device that simulates real disk errors.  For example, we can create a 
virtual block device with:

{code}dd if=/dev/zero of=/var/lib/virtualblock.img bs=512 count=1048576
losetup /dev/loop0 /var/lib/virtualblock.img{code}

This creates a 512M file; we then format the loopback device and punch some 
'holes' in the block device:

{code}dmsetup create errdev0
0 261144 linear /dev/loop0 0
261144 5 error
261149 787427 linear /dev/loop0 261139{code}

This will create a device called 'errdev0' (typically in /dev/mapper). When you 
type dmsetup create errdev0 it will wait for stdin and finish when ^D is entered.

In the example above, we've made a 5-sector hole (2.5kb) at sector 261144 of 
the loop device. We then continue through the loop device as normal.

We can mount the errdev0 device like a normal block device into the docker 
container.  When Ozone writes data to the errdev0 device, the program will come 
across IO problems when it hits sectors that are really IO holes in the 
virtual device.  This is a more realistic simulation imho.

We can include a readme file with instructions for setting up the faulty virtual 
device so users can repeat the tests.  Thoughts on this approach?



was (Author: eyang):
[~elek] Disk hang is still a good test case to verify the datanode health logic.  
How about we move the patch 001 logic to HDDS-1774?

I have expressed concern about Byteman touching ASF-licensed code in the JVM.  
There is no clear answer on whether this is allowed.  Besides, it is not clear to me 
whether injecting faults in the JVM can yield the same result as actual disk errors.

The current approach of mounting a disk volume is a more favorable way to 
simulate disk errors.  It can be combined with the device mapper to create a faulty 
virtual device that simulates real disk errors.  For example, we can create a 
virtual block device with:

{code}dd if=/dev/zero of=/var/lib/virtualblock.img bs=512 count=1048576
losetup /dev/loop0 /var/lib/virtualblock.img{code}

This creates a 512M file; we then format the loopback device and punch some 
'holes' in the block device:

{code}dmsetup create errdev0
0 261144 linear /dev/loop0 0
261144 5 error
261149 787427 linear /dev/loop0 261139{code}

This will create a device called 'errdev0' (typically in /dev/mapper). When you 
type dmsetup create errdev0 it will wait for stdin and finish when ^D is entered.

In the example above, we've made a 5-sector hole (2.5kb) at sector 261144 of 
the loop device. We then continue through the loop device as normal.

Then we mount the errdev0 device like a normal block device into the docker 
container.  When Ozone writes data to the errdev0 device, the program will come 
across IO problems when it hits sectors that are really IO holes in the 
virtual device.  This is a more realistic simulation imho.

We can include a readme file with instructions for setting up the faulty virtual 
device so users can repeat the tests.  Thoughts on this approach?


> Add intermittent IO disk test to fault injection test
> -
>
> Key: HDDS-1773
> URL: https://issues.apache.org/jira/browse/HDDS-1773
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Eric Yang
>Priority: Major
> Attachments: HDDS-1773.001.patch
>
>
> Disk errors can also be simulated by setting the cgroup blkio rate to 0 while 
> the Ozone cluster is running.
> This test will be added to the corruption test project and will only be 
> performed if there is write access to the host cgroup to control the throttling 
> of disk IO.
> Expected result:
> When a datanode becomes unresponsive due to slow IO, scm must flag the node as 
> unhealthy.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1773) Add intermittent IO disk test to fault injection test

2019-07-11 Thread Eric Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1773?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16883392#comment-16883392
 ] 

Eric Yang commented on HDDS-1773:
-

[~elek] Disk hang is still a good test case to verify the datanode health logic.  
How about we move the patch 001 logic to HDDS-1774?

I have expressed concern about Byteman touching ASF-licensed code in the JVM.  
There is no clear answer on whether this is allowed.  Besides, it is not clear to me 
whether injecting faults in the JVM can yield the same result as actual disk errors.

The current approach of mounting a disk volume is a more favorable way to 
simulate disk errors.  It can be combined with the device mapper to create a faulty 
virtual device that simulates real disk errors.  For example, we can create a 
virtual block device with:

{code}dd if=/dev/zero of=/var/lib/virtualblock.img bs=512 count=1048576
losetup /dev/loop0 /var/lib/virtualblock.img{code}

This creates a 512M file; we then format the loopback device and punch some 
'holes' in the block device:

{code}dmsetup create errdev0
0 261144 linear /dev/loop0 0
261144 5 error
261149 787427 linear /dev/loop0 261139{code}

This will create a device called 'errdev0' (typically in /dev/mapper). When you 
type dmsetup create errdev0 it will wait for stdin and finish when ^D is entered.

In the example above, we've made a 5-sector hole (2.5kb) at sector 261144 of 
the loop device. We then continue through the loop device as normal.

Then we mount the errdev0 device like a normal block device into the docker 
container.  When Ozone writes data to the errdev0 device, the program will come 
across IO problems when it hits sectors that are really IO holes in the 
virtual device.  This is a more realistic simulation imho.

We can include a readme file with instructions for setting up the faulty virtual 
device so users can repeat the tests.  Thoughts on this approach?


> Add intermittent IO disk test to fault injection test
> -
>
> Key: HDDS-1773
> URL: https://issues.apache.org/jira/browse/HDDS-1773
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Eric Yang
>Priority: Major
> Attachments: HDDS-1773.001.patch
>
>
> Disk errors can also be simulated by setting the cgroup blkio rate to 0 while 
> the Ozone cluster is running.
> This test will be added to the corruption test project and will only be 
> performed if there is write access to the host cgroup to control the throttling 
> of disk IO.
> Expected result:
> When a datanode becomes unresponsive due to slow IO, scm must flag the node as 
> unhealthy.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1752) ConcurrentModificationException while handling DeadNodeHandler event

2019-07-11 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1752?focusedWorklogId=275538&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-275538
 ]

ASF GitHub Bot logged work on HDDS-1752:


Author: ASF GitHub Bot
Created on: 11/Jul/19 22:36
Start Date: 11/Jul/19 22:36
Worklog Time Spent: 10m 
  Work Description: hgadre commented on pull request #1080:  HDDS-1752 Use 
concurrent set implementation for node to pipelines ma…
URL: https://github.com/apache/hadoop/pull/1080
 
 
   …pping
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 275538)
Time Spent: 10m
Remaining Estimate: 0h

> ConcurrentModificationException while handling DeadNodeHandler event
> 
>
> Key: HDDS-1752
> URL: https://issues.apache.org/jira/browse/HDDS-1752
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: SCM
>Affects Versions: 0.4.0
>Reporter: Mukul Kumar Singh
>Assignee: Hrishikesh Gadre
>Priority: Major
>  Labels: MiniOzoneChaosCluster, pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> ConcurrentModificationException while handling DeadNodeHandler event
> {code}
> 2019-07-02 19:29:25,190 ERROR events.SingleThreadExecutor 
> (SingleThreadExecutor.java:lambda$onMessage$1(88)) - Error on execution 
> message 56591ec5-c9e4-416c-9a36-db0507739fe5{ip: 192.168.0.2, host: 192.16
> 8.0.2, networkLocation: /default-rack, certSerialId: null}
> java.util.ConcurrentModificationException
> at java.util.HashMap$HashIterator.nextNode(HashMap.java:1442)
> at java.util.HashMap$KeyIterator.next(HashMap.java:1466)
> at java.lang.Iterable.forEach(Iterable.java:74)
> at 
> java.util.Collections$UnmodifiableCollection.forEach(Collections.java:1080)
> at 
> org.apache.hadoop.hdds.scm.node.DeadNodeHandler.lambda$destroyPipelines$1(DeadNodeHandler.java:99)
> at java.util.Optional.ifPresent(Optional.java:159)
> at 
> org.apache.hadoop.hdds.scm.node.DeadNodeHandler.destroyPipelines(DeadNodeHandler.java:98)
> at 
> org.apache.hadoop.hdds.scm.node.DeadNodeHandler.onMessage(DeadNodeHandler.java:78)
> at 
> org.apache.hadoop.hdds.scm.node.DeadNodeHandler.onMessage(DeadNodeHandler.java:44)
> at 
> org.apache.hadoop.hdds.server.events.SingleThreadExecutor.lambda$onMessage$1(SingleThreadExecutor.java:85)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> at java.lang.Thread.run(Thread.java:748)
> {code}
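
For illustration, a hedged sketch of the concurrent-set approach named in the pull request title; the class and field names are illustrative, not the actual patch:
{code:java}
// Hedged sketch: back the node-to-pipelines mapping with a concurrent map and
// concurrent sets so that DeadNodeHandler can iterate safely while other
// threads add or remove pipelines.
private final Map<UUID, Set<PipelineID>> node2Pipelines = new ConcurrentHashMap<>();

void addPipeline(UUID datanode, PipelineID pipeline) {
  node2Pipelines
      .computeIfAbsent(datanode, k -> ConcurrentHashMap.newKeySet())
      .add(pipeline);
}

Set<PipelineID> getPipelines(UUID datanode) {
  return node2Pipelines.getOrDefault(datanode, Collections.emptySet());
}
{code}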



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org


