[jira] [Created] (HDDS-1808) TestRatisPipelineCreateAndDestory#testPipelineCreationOnNodeRestart times out

2019-07-15 Thread Shashikant Banerjee (JIRA)
Shashikant Banerjee created HDDS-1808:
-

 Summary: 
TestRatisPipelineCreateAndDestory#testPipelineCreationOnNodeRestart times out
 Key: HDDS-1808
 URL: https://issues.apache.org/jira/browse/HDDS-1808
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
  Components: SCM
Affects Versions: 0.5.0
Reporter: Shashikant Banerjee
Assignee: Shashikant Banerjee
 Fix For: 0.5.0


{code:java}
Error Message
test timed out after 3 milliseconds
Stacktrace
java.lang.Exception: test timed out after 3 milliseconds
at java.lang.Thread.sleep(Native Method)
at 
org.apache.hadoop.test.GenericTestUtils.waitFor(GenericTestUtils.java:382)
at 
org.apache.hadoop.hdds.scm.pipeline.TestRatisPipelineCreateAndDestory.waitForPipelines(TestRatisPipelineCreateAndDestory.java:126)
at 
org.apache.hadoop.hdds.scm.pipeline.TestRatisPipelineCreateAndDestory.testPipelineCreationOnNodeRestart(TestRatisPipelineCreateAndDestory.java:121)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at 
org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
{code}






[jira] [Created] (HDDS-1807) TestWatchForCommit#testWatchForCommitForRetryfailure fails as a result of no leader election for extended period of time

2019-07-15 Thread Shashikant Banerjee (JIRA)
Shashikant Banerjee created HDDS-1807:
-

 Summary: TestWatchForCommit#testWatchForCommitForRetryfailure 
fails as a result of no leader election for extended period of time 
 Key: HDDS-1807
 URL: https://issues.apache.org/jira/browse/HDDS-1807
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
  Components: Ozone Client
Reporter: Shashikant Banerjee
Assignee: Shashikant Banerjee
 Fix For: 0.5.0


{code:java}
org.apache.ratis.protocol.RaftRetryFailureException: Failed 
RaftClientRequest:client-6C83DC527A4C->73bdd98d-b003-44ff-a45b-bd12dfd50509@group-75C642DF7AE9,
 cid=55, seq=1*, RW, 
org.apache.hadoop.hdds.scm.XceiverClientRatis$$Lambda$407/213850519@1a8843a2 
for 10 attempts with RetryLimited(maxAttempts=10, sleepTime=1000ms)
Stacktrace
java.util.concurrent.ExecutionException: 
org.apache.ratis.protocol.RaftRetryFailureException: Failed 
RaftClientRequest:client-6C83DC527A4C->73bdd98d-b003-44ff-a45b-bd12dfd50509@group-75C642DF7AE9,
 cid=55, seq=1*, RW, 
org.apache.hadoop.hdds.scm.XceiverClientRatis$$Lambda$407/213850519@1a8843a2 
for 10 attempts with RetryLimited(maxAttempts=10, sleepTime=1000ms)
at 
java.util.concurrent.CompletableFuture.reportGet(CompletableFuture.java:357)
at 
java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1895)
at 
org.apache.hadoop.ozone.client.rpc.TestWatchForCommit.testWatchForCommitForRetryfailure(TestWatchForCommit.java:345)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
at 
org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:384)
at 
org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:345)
at 
org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:126)
at 
org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:418)
{code}
The client here retries 10 times with a delay of 1 second between each retry, 
but leader election could not complete within that window.
{code:java}
2019-07-12 19:30:46,451 INFO  client.GrpcClientProtocolClient 
(GrpcClientProtocolClient.java:onNext(255)) - 
client-6C83DC527A4C->5931fd83-b899-480e-b15a-ecb8e7f7dd46: receive 
RaftClientReply:client-6C83DC527A4C->5931fd83-b899-480e-b15a-ecb8e7f7dd46@group-75C642DF7AE9,
 cid=55, FAILED org.apache.ratis.protocol.NotLeaderException: Server 
5931fd83-b899-480e-b15a-ecb8e7f7dd46 is not the leader (null). Request must be 
sent to leader., logIndex=0, commits[5931fd83-b899-480e-b15a-ecb8e7f7dd46:c-1]
2019-07-12 19:30:47,469 INFO  client.GrpcClientProtocolClient 
(GrpcClientProtocolClient.java:onNext(255)) - 
client-6C83DC527A4C->d83929f1-c4db-499d-b67f-ad7f10dd7dde: receive 
RaftClientReply:client-6C83DC527A4C->d83929f1-c4db-499d-b67f-ad7f10dd7dde@group-75C642DF7AE9,
 cid=55, FAILED org.apache.ratis.protocol.NotLeaderException: Server 
d83929f1-c4db-499d-b67f-ad7f10dd7dde is not the leader (null). Request must be 
sent to leader., logIndex=0, commits[d83929f1-c4db-499d-b67f-ad7f10dd7dde:c-1]
2019-07-12 19:30:48,504 INFO  client.GrpcClientProtocolClient 
(GrpcClientProtocolClient.java:onNext(255)) - 
{code}
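
For reference, a minimal sketch of the RetryLimited behavior driving the log 
above (illustrative helper only, not the actual Ratis client code): a bounded 
number of attempts with a fixed sleep, surfacing the last failure once the 
attempts are exhausted.

{code:java}
import java.util.concurrent.Callable;

// Bounded retry with fixed sleep, mirroring RetryLimited(maxAttempts=10,
// sleepTime=1000ms): if every attempt fails (e.g. each reply is a
// NotLeaderException because election never completes), the last
// exception is rethrown to the caller.
static <T> T retryWithFixedSleep(Callable<T> op, int maxAttempts,
    long sleepMillis) throws Exception {
  Exception last = null;
  for (int attempt = 1; attempt <= maxAttempts; attempt++) {
    try {
      return op.call();
    } catch (Exception e) {
      last = e;
      if (attempt < maxAttempts) {
        Thread.sleep(sleepMillis);  // wait out the leader election
      }
    }
  }
  throw last;  // surfaces like the RaftRetryFailureException above
}
{code}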

[jira] [Created] (HDDS-1806) TestDataValidateWithSafeByteOperations tests are failing

2019-07-15 Thread Shashikant Banerjee (JIRA)
Shashikant Banerjee created HDDS-1806:
-

 Summary: TestDataValidateWithSafeByteOperations tests are failing
 Key: HDDS-1806
 URL: https://issues.apache.org/jira/browse/HDDS-1806
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
  Components: Ozone Client
Affects Versions: 0.5.0
Reporter: Shashikant Banerjee
Assignee: Shashikant Banerjee
 Fix For: 0.5.0


 
{code:java}
Unexpected Storage Container Exception: 
org.apache.hadoop.hdds.scm.container.common.helpers.StorageContainerException: 
ContainerID 3 does not exist

Stacktrace
java.io.IOException: Unexpected Storage Container Exception: 
org.apache.hadoop.hdds.scm.container.common.helpers.StorageContainerException: 
ContainerID 3 does not exist at 
org.apache.hadoop.hdds.scm.storage.BlockOutputStream.setIoException(BlockOutputStream.java:549)
 at 
org.apache.hadoop.hdds.scm.storage.BlockOutputStream.validateResponse(BlockOutputStream.java:540)
 at 
org.apache.hadoop.hdds.scm.storage.BlockOutputStream.lambda$writeChunkToContainer$2(BlockOutputStream.java:615)
 at java.util.concurrent.CompletableFuture.uniApply(CompletableFuture.java:602) 
at 
java.util.concurrent.CompletableFuture$UniApply.tryFire(CompletableFuture.java:577)
 at 
java.util.concurrent.CompletableFuture$Completion.run(CompletableFuture.java:442)
 at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) 
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) 
at java.lang.Thread.run(Thread.java:748) Caused by: 
org.apache.hadoop.hdds.scm.container.common.helpers.StorageContainerException: 
ContainerID 3 does not exist at 
org.apache.hadoop.hdds.scm.storage.ContainerProtocolCalls.validateContainerResponse(ContainerProtocolCalls.java:536)
 at 
org.apache.hadoop.hdds.scm.storage.BlockOutputStream.validateResponse(BlockOutputStream.java:537)
 ... 7 more
{code}
The error propagated to the client is misleading. The container creation 
failed as a result of a disk-full condition, but that root cause was never 
propagated to the client.
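
As a hedged sketch of the fix direction (hypothetical helper, not the 
BlockOutputStream code): when translating a datanode-side failure for the 
client, attach the original exception as the cause so that a disk-full error 
is not masked by the generic container lookup failure.

{code:java}
import java.io.IOException;

// Wrap a container write failure for the client while preserving the
// root cause, so e.g. the disk-full condition stays visible.
static IOException clientFacing(long containerId, Throwable rootCause) {
  return new IOException(
      "Write to container " + containerId + " failed", rootCause);
}
{code}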

 






[jira] [Assigned] (HDDS-1493) Download and Import Container replicator fails.

2019-07-15 Thread Siddharth Wagle (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1493?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siddharth Wagle reassigned HDDS-1493:
-

Assignee: Hrishikesh Gadre  (was: Nanda kumar)

[~hgadre] Would you be able to take a look at this?

> Download and Import Container replicator fails.
> ---
>
> Key: HDDS-1493
> URL: https://issues.apache.org/jira/browse/HDDS-1493
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Affects Versions: 0.5.0
>Reporter: Aravindan Vijayan
>Assignee: Hrishikesh Gadre
>Priority: Blocker
> Attachments: ozone.log
>
>
> While running batch jobs (16 threads writing a lot of 10MB+ files), the 
> following error is seen in the SCM logs.
> {code}
> ERROR  - Can't import the downloaded container data id=317
> {code}
> It is unclear from the logs why this happens. Needs more investigation to 
> find the root cause.






[jira] [Assigned] (HDDS-1615) ManagedChannel references are being leaked in ReplicationSupervisor.java

2019-07-15 Thread Siddharth Wagle (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1615?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siddharth Wagle reassigned HDDS-1615:
-

Assignee: Hrishikesh Gadre

> ManagedChannel references are being leaked in ReplicationSupervisor.java
> 
>
> Key: HDDS-1615
> URL: https://issues.apache.org/jira/browse/HDDS-1615
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Affects Versions: 0.4.0
>Reporter: Mukul Kumar Singh
>Assignee: Hrishikesh Gadre
>Priority: Major
>  Labels: MiniOzoneChaosCluster
>
> ManagedChannel references are being leaked in ReplicationSupervisor.java
> {code}
> May 30, 2019 8:10:56 AM 
> org.apache.ratis.thirdparty.io.grpc.internal.ManagedChannelOrphanWrapper$ManagedChannelReference
>  cleanQueue
> SEVERE: *~*~*~ Channel ManagedChannelImpl{logId=1495, 
> target=192.168.0.3:49868} was not shutdown properly!!! ~*~*~*
> Make sure to call shutdown()/shutdownNow() and wait until 
> awaitTermination() returns true.
> java.lang.RuntimeException: ManagedChannel allocation site
> at 
> org.apache.ratis.thirdparty.io.grpc.internal.ManagedChannelOrphanWrapper$ManagedChannelReference.(ManagedChannelOrphanWrapper.java:103)
> at 
> org.apache.ratis.thirdparty.io.grpc.internal.ManagedChannelOrphanWrapper.(ManagedChannelOrphanWrapper.java:53)
> at 
> org.apache.ratis.thirdparty.io.grpc.internal.ManagedChannelOrphanWrapper.(ManagedChannelOrphanWrapper.java:44)
> at 
> org.apache.ratis.thirdparty.io.grpc.internal.AbstractManagedChannelImplBuilder.build(AbstractManagedChannelImplBuilder.java:411)
> at 
> org.apache.hadoop.ozone.container.replication.GrpcReplicationClient.(GrpcReplicationClient.java:65)
> at 
> org.apache.hadoop.ozone.container.replication.SimpleContainerDownloader.getContainerDataFromReplicas(SimpleContainerDownloader.java:87)
> at 
> org.apache.hadoop.ozone.container.replication.DownloadAndImportReplicator.replicate(DownloadAndImportReplicator.java:118)
> at 
> org.apache.hadoop.ozone.container.replication.ReplicationSupervisor$TaskRunner.run(ReplicationSupervisor.java:115)
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> at java.lang.Thread.run(Thread.java:748)
> {code}
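
As the grpc warning itself suggests, the fix direction is to shut each channel 
down deterministically. A minimal sketch of that cleanup, assuming a grpc-java 
ManagedChannel held by the replication client (hypothetical helper, not the 
committed fix):

{code:java}
import java.util.concurrent.TimeUnit;
import org.apache.ratis.thirdparty.io.grpc.ManagedChannel;

// Close the channel before dropping the reference, per the warning above.
void closeChannel(ManagedChannel channel) throws InterruptedException {
  channel.shutdown();                            // begin orderly shutdown
  if (!channel.awaitTermination(5, TimeUnit.SECONDS)) {
    channel.shutdownNow();                       // force-close if it lingers
  }
}
{code}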






[jira] [Updated] (HDFS-14655) SBN : Namenode crashes if one of The JN is down

2019-07-15 Thread Ayush Saxena (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14655?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ayush Saxena updated HDFS-14655:

Summary: SBN : Namenode crashes if one of The JN is down  (was: SBN : 
Namenode crashes if one of The jN is down)

> SBN : Namenode crashes if one of The JN is down
> ---
>
> Key: HDFS-14655
> URL: https://issues.apache.org/jira/browse/HDFS-14655
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Harshakiran Reddy
>Priority: Major
>
> {noformat}
> 2019-07-04 17:35:54,064 | INFO  | Logger channel (from parallel executor) to 
> XXX/XXX | Retrying connect to server: XXX/XXX. Already tried 
> 9 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, 
> sleepTime=1000 MILLISECONDS) | Client.java:975
> 2019-07-04 17:35:54,087 | FATAL | Edit log tailer | Unknown error encountered 
> while tailing edits. Shutting down standby NN. | EditLogTailer.java:474
> java.lang.OutOfMemoryError: unable to create new native thread
>   at java.lang.Thread.start0(Native Method)
>   at java.lang.Thread.start(Thread.java:717)
>   at 
> java.util.concurrent.ThreadPoolExecutor.addWorker(ThreadPoolExecutor.java:957)
>   at 
> java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1378)
>   at 
> com.google.common.util.concurrent.MoreExecutors$ListeningDecorator.execute(MoreExecutors.java:440)
>   at 
> com.google.common.util.concurrent.AbstractListeningExecutorService.submit(AbstractListeningExecutorService.java:56)
>   at 
> org.apache.hadoop.hdfs.qjournal.client.IPCLoggerChannel.getJournaledEdits(IPCLoggerChannel.java:565)
>   at 
> org.apache.hadoop.hdfs.qjournal.client.AsyncLoggerSet.getJournaledEdits(AsyncLoggerSet.java:272)
>   at 
> org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager.selectRpcInputStreams(QuorumJournalManager.java:533)
>   at 
> org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager.selectInputStreams(QuorumJournalManager.java:508)
>   at 
> org.apache.hadoop.hdfs.server.namenode.JournalSet.selectInputStreams(JournalSet.java:275)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLog.selectInputStreams(FSEditLog.java:1681)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLog.selectInputStreams(FSEditLog.java:1714)
>   at 
> org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer.doTailEdits(EditLogTailer.java:307)
>   at 
> org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.doWork(EditLogTailer.java:460)
>   at 
> org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.access$300(EditLogTailer.java:410)
>   at 
> org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread$1.run(EditLogTailer.java:427)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:360)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1709)
>   at 
> org.apache.hadoop.security.SecurityUtil.doAsLoginUserOrFatal(SecurityUtil.java:483)
>   at 
> org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.run(EditLogTailer.java:423)
> 2019-07-04 17:35:54,112 | INFO  | Edit log tailer | Exiting with status 1: 
> java.lang.OutOfMemoryError: unable to create new native thread | 
> ExitUtil.java:210
> {noformat}






[jira] [Assigned] (HDDS-1749) Ozone Client should randomize the list of nodes in pipeline for reads

2019-07-15 Thread Siddharth Wagle (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1749?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siddharth Wagle reassigned HDDS-1749:
-

Assignee: Aravindan Vijayan

> Ozone Client should randomize the list of nodes in pipeline for reads
> -
>
> Key: HDDS-1749
> URL: https://issues.apache.org/jira/browse/HDDS-1749
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Client
>Affects Versions: 0.4.0
>Reporter: Mukul Kumar Singh
>Assignee: Aravindan Vijayan
>Priority: Major
>
> Currently the list of nodes returned by SCM is static and is returned in the 
> same order to all clients. Ideally these should be sorted by network topology 
> and then returned to the client.
> However, even when network topology is not available, the SCM/client should 
> randomly sort the nodes before choosing which replicas to connect to.
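
A minimal sketch of the randomization (generic element type; in Ozone the 
elements would be DatanodeDetails; hypothetical helper, not the final patch):

{code:java}
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

// Shuffle a copy of the pipeline's node list so each client reads from a
// random replica when no topology-based ordering is available.
static <T> List<T> randomizedNodes(List<T> pipelineNodes) {
  List<T> copy = new ArrayList<>(pipelineNodes);
  Collections.shuffle(copy);
  return copy;
}
{code}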






[jira] [Commented] (HDDS-1736) Cleanup 2phase old HA code for Key requests.

2019-07-15 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1736?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16885851#comment-16885851
 ] 

Hudson commented on HDDS-1736:
--

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #16923 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/16923/])
HDDS-1736. Cleanup 2phase old HA code for Key requests. (#1038) (github: rev 
395cb3cfd703320c96855325dadb37a19fbcfc54)
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/protocolPB/OzoneManagerRequestHandler.java
* (edit) 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/audit/OMAction.java
* (edit) 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/protocol/OzoneManagerHAProtocol.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/KeyManager.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OzoneManager.java
* (edit) hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/OmUtils.java
* (edit) hadoop-ozone/common/src/main/proto/OzoneManagerProtocol.proto
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/ratis/OzoneManagerStateMachine.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/key/OMAllocateBlockRequest.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OMMetrics.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/KeyManagerImpl.java


> Cleanup 2phase old HA code for Key requests.
> 
>
> Key: HDDS-1736
> URL: https://issues.apache.org/jira/browse/HDDS-1736
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> HDDS-1638 brought in HA code for Key operations like allocateBlock, createKey 
> etc. This Jira cleans up the old code changes which were added as part of 
> HDDS-1250 and HDDS-1262 for allocateBlock and openKey.






[jira] [Commented] (HDFS-14655) SBN : Namenode crashes if one of The jN is down

2019-07-15 Thread Ayush Saxena (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14655?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16885850#comment-16885850
 ] 

Ayush Saxena commented on HDFS-14655:
-

For Observer reads, the edit log tail period (dfs.ha.tail-edits.period) is set 
to 0. There are three JournalNodes, and a tailing round succeeds and returns as 
soon as it gets responses from a majority of the JNs, i.e. from 2 of them. The 
thread for the third (down) JN keeps retrying up to 10 times on 
ConnectException. Since the tailing period is so low, new rounds start before a 
stuck thread completes, so similar retrying stuck threads pile up, eventually 
leading to the OOM.
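
A toy reproduction of that pile-up (assumed names, not the EditLogTailer code): 
each tailing round submits one task per JN to an executor that grows its thread 
count without bound; tasks for the dead JN block for the full retry window, and 
with a near-zero tail period rounds are submitted far faster than stuck tasks 
finish.

{code:java}
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class StuckThreadPileUp {
  public static void main(String[] args) throws InterruptedException {
    ExecutorService pool = Executors.newCachedThreadPool(); // unbounded threads
    while (true) {
      pool.submit(() -> {
        try {
          Thread.sleep(10_000); // stand-in for ~10 connect retries to a dead JN
        } catch (InterruptedException ignored) {
        }
      });
      Thread.sleep(1);          // near-zero tail period: threads outlive rounds,
    }                           // so the count grows until the JVM cannot
  }                             // create new native threads (the OOM above)
}
{code}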

> SBN : Namenode crashes if one of The jN is down
> ---
>
> Key: HDFS-14655
> URL: https://issues.apache.org/jira/browse/HDFS-14655
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Harshakiran Reddy
>Priority: Major
>
> {noformat}
> 2019-07-04 17:35:54,064 | INFO  | Logger channel (from parallel executor) to 
> XXX/XXX | Retrying connect to server: XXX/XXX. Already tried 
> 9 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, 
> sleepTime=1000 MILLISECONDS) | Client.java:975
> 2019-07-04 17:35:54,087 | FATAL | Edit log tailer | Unknown error encountered 
> while tailing edits. Shutting down standby NN. | EditLogTailer.java:474
> java.lang.OutOfMemoryError: unable to create new native thread
>   at java.lang.Thread.start0(Native Method)
>   at java.lang.Thread.start(Thread.java:717)
>   at 
> java.util.concurrent.ThreadPoolExecutor.addWorker(ThreadPoolExecutor.java:957)
>   at 
> java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1378)
>   at 
> com.google.common.util.concurrent.MoreExecutors$ListeningDecorator.execute(MoreExecutors.java:440)
>   at 
> com.google.common.util.concurrent.AbstractListeningExecutorService.submit(AbstractListeningExecutorService.java:56)
>   at 
> org.apache.hadoop.hdfs.qjournal.client.IPCLoggerChannel.getJournaledEdits(IPCLoggerChannel.java:565)
>   at 
> org.apache.hadoop.hdfs.qjournal.client.AsyncLoggerSet.getJournaledEdits(AsyncLoggerSet.java:272)
>   at 
> org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager.selectRpcInputStreams(QuorumJournalManager.java:533)
>   at 
> org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager.selectInputStreams(QuorumJournalManager.java:508)
>   at 
> org.apache.hadoop.hdfs.server.namenode.JournalSet.selectInputStreams(JournalSet.java:275)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLog.selectInputStreams(FSEditLog.java:1681)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLog.selectInputStreams(FSEditLog.java:1714)
>   at 
> org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer.doTailEdits(EditLogTailer.java:307)
>   at 
> org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.doWork(EditLogTailer.java:460)
>   at 
> org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.access$300(EditLogTailer.java:410)
>   at 
> org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread$1.run(EditLogTailer.java:427)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:360)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1709)
>   at 
> org.apache.hadoop.security.SecurityUtil.doAsLoginUserOrFatal(SecurityUtil.java:483)
>   at 
> org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.run(EditLogTailer.java:423)
> 2019-07-04 17:35:54,112 | INFO  | Edit log tailer | Exiting with status 1: 
> java.lang.OutOfMemoryError: unable to create new native thread | 
> ExitUtil.java:210
> {noformat}






[jira] [Created] (HDFS-14655) SBN : Namenode crashes if one of The jN is down

2019-07-15 Thread Harshakiran Reddy (JIRA)
Harshakiran Reddy created HDFS-14655:


 Summary: SBN : Namenode crashes if one of The jN is down
 Key: HDFS-14655
 URL: https://issues.apache.org/jira/browse/HDFS-14655
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Harshakiran Reddy



{noformat}
2019-07-04 17:35:54,064 | INFO  | Logger channel (from parallel executor) to 
XXX/XXX | Retrying connect to server: XXX/XXX. Already tried 9 
time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, 
sleepTime=1000 MILLISECONDS) | Client.java:975
2019-07-04 17:35:54,087 | FATAL | Edit log tailer | Unknown error encountered 
while tailing edits. Shutting down standby NN. | EditLogTailer.java:474
java.lang.OutOfMemoryError: unable to create new native thread
at java.lang.Thread.start0(Native Method)
at java.lang.Thread.start(Thread.java:717)
at 
java.util.concurrent.ThreadPoolExecutor.addWorker(ThreadPoolExecutor.java:957)
at 
java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1378)
at 
com.google.common.util.concurrent.MoreExecutors$ListeningDecorator.execute(MoreExecutors.java:440)
at 
com.google.common.util.concurrent.AbstractListeningExecutorService.submit(AbstractListeningExecutorService.java:56)
at 
org.apache.hadoop.hdfs.qjournal.client.IPCLoggerChannel.getJournaledEdits(IPCLoggerChannel.java:565)
at 
org.apache.hadoop.hdfs.qjournal.client.AsyncLoggerSet.getJournaledEdits(AsyncLoggerSet.java:272)
at 
org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager.selectRpcInputStreams(QuorumJournalManager.java:533)
at 
org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager.selectInputStreams(QuorumJournalManager.java:508)
at 
org.apache.hadoop.hdfs.server.namenode.JournalSet.selectInputStreams(JournalSet.java:275)
at 
org.apache.hadoop.hdfs.server.namenode.FSEditLog.selectInputStreams(FSEditLog.java:1681)
at 
org.apache.hadoop.hdfs.server.namenode.FSEditLog.selectInputStreams(FSEditLog.java:1714)
at 
org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer.doTailEdits(EditLogTailer.java:307)
at 
org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.doWork(EditLogTailer.java:460)
at 
org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.access$300(EditLogTailer.java:410)
at 
org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread$1.run(EditLogTailer.java:427)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:360)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1709)
at 
org.apache.hadoop.security.SecurityUtil.doAsLoginUserOrFatal(SecurityUtil.java:483)
at 
org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.run(EditLogTailer.java:423)
2019-07-04 17:35:54,112 | INFO  | Edit log tailer | Exiting with status 1: 
java.lang.OutOfMemoryError: unable to create new native thread | 
ExitUtil.java:210
{noformat}







[jira] [Updated] (HDDS-1805) Implement S3 Initiate MPU request to use Cache and DoubleBuffer

2019-07-15 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1805?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-1805:
-
Description: 
Implement S3 Initiate MPU request to use OM Cache, double buffer.

 

In this Jira we will add the changes to implement S3 bucket operations. HA and 
non-HA will have different code paths for now, but once all requests are 
implemented there will be a single code path.

  was:
Implement S3 Bucket write requests to use OM Cache, double buffer.

 

In this Jira we will add the changes to implement S3 bucket operations. HA and 
non-HA will have different code paths for now, but once all requests are 
implemented there will be a single code path.


> Implement S3 Initiate MPU request to use Cache and DoubleBuffer
> ---
>
> Key: HDDS-1805
> URL: https://issues.apache.org/jira/browse/HDDS-1805
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Manager
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>
> Implement S3 Initiate MPU request to use OM Cache, double buffer.
>  
> In this Jira we will add the changes to implement S3 bucket operations. HA and 
> non-HA will have different code paths for now, but once all requests are 
> implemented there will be a single code path.
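
For context, a minimal sketch of the double-buffer pattern named in the title, 
under stated assumptions (generic code, not the OM implementation): requests 
append to the current buffer while the previous batch flushes, and the two 
buffers swap under a lock.

{code:java}
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

class DoubleBuffer<T> {
  private List<T> current = new ArrayList<>();
  private List<T> flushing = new ArrayList<>();

  synchronized void add(T entry) {    // request path: cheap in-memory append
    current.add(entry);
  }

  void flushOnce(Consumer<List<T>> sink) {  // called by a background thread
    List<T> batch;
    synchronized (this) {             // swap buffers atomically
      batch = current;
      current = flushing;
      flushing = batch;
    }
    if (!batch.isEmpty()) {
      sink.accept(batch);             // e.g. one batched DB commit
      batch.clear();
    }
  }
}
{code}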






[jira] [Created] (HDDS-1805) Implement S3 Initiate MPU request to use Cache and DoubleBuffer

2019-07-15 Thread Bharat Viswanadham (JIRA)
Bharat Viswanadham created HDDS-1805:


 Summary: Implement S3 Initiate MPU request to use Cache and 
DoubleBuffer
 Key: HDDS-1805
 URL: https://issues.apache.org/jira/browse/HDDS-1805
 Project: Hadoop Distributed Data Store
  Issue Type: Sub-task
  Components: Ozone Manager
Reporter: Bharat Viswanadham
Assignee: Bharat Viswanadham


Implement S3 Bucket write requests to use OM Cache, double buffer.

 

In this Jira we will add the changes to implement S3 bucket operations. HA and 
non-HA will have different code paths for now, but once all requests are 
implemented there will be a single code path.






[jira] [Work logged] (HDDS-1736) Cleanup 2phase old HA code for Key requests.

2019-07-15 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1736?focusedWorklogId=277209&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-277209
 ]

ASF GitHub Bot logged work on HDDS-1736:


Author: ASF GitHub Bot
Created on: 16/Jul/19 04:54
Start Date: 16/Jul/19 04:54
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on issue #1038: HDDS-1736. 
Cleanup 2phase old HA code for Key requests.
URL: https://github.com/apache/hadoop/pull/1038#issuecomment-511663998
 
 
   Thank You @arp7 for the review.
   I will commit this to the trunk. Ran S3 secure acceptance test suite 
locally, tests are passing. Test failures are not related to this patch.
 



Issue Time Tracking
---

Worklog Id: (was: 277209)
Time Spent: 1.5h  (was: 1h 20m)

> Cleanup 2phase old HA code for Key requests.
> 
>
> Key: HDDS-1736
> URL: https://issues.apache.org/jira/browse/HDDS-1736
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> HDDS-1638 brought in HA code for Key operations like allocateBlock, createKey 
> etc. This Jira cleans up the old code changes which were added as part of 
> HDDS-1250 and HDDS-1262 for allocateBlock and openKey.






[jira] [Updated] (HDDS-1736) Cleanup 2phase old HA code for Key requests.

2019-07-15 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1736?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-1736:
-
   Resolution: Fixed
Fix Version/s: 0.5.0
   Status: Resolved  (was: Patch Available)

> Cleanup 2phase old HA code for Key requests.
> 
>
> Key: HDDS-1736
> URL: https://issues.apache.org/jira/browse/HDDS-1736
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> HDDS-1638 brought in HA code for Key operations like allocateBlock, createKey 
> etc. This Jira cleans up the old code changes which were added as part of 
> HDDS-1250 and HDDS-1262 for allocateBlock and openKey.






[jira] [Work logged] (HDDS-1736) Cleanup 2phase old HA code for Key requests.

2019-07-15 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1736?focusedWorklogId=277208&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-277208
 ]

ASF GitHub Bot logged work on HDDS-1736:


Author: ASF GitHub Bot
Created on: 16/Jul/19 04:52
Start Date: 16/Jul/19 04:52
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #1038: 
HDDS-1736. Cleanup 2phase old HA code for Key requests.
URL: https://github.com/apache/hadoop/pull/1038
 
 
   
 



Issue Time Tracking
---

Worklog Id: (was: 277208)
Time Spent: 1h 20m  (was: 1h 10m)

> Cleanup 2phase old HA code for Key requests.
> 
>
> Key: HDDS-1736
> URL: https://issues.apache.org/jira/browse/HDDS-1736
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> HDDS-1638 brought in HA code for Key operations like allocateBlock, createKey 
> etc. This Jira cleans up the old code changes which were added as part of 
> HDDS-1250 and HDDS-1262 for allocateBlock and openKey.






[jira] [Work logged] (HDDS-1492) Generated chunk size name too long.

2019-07-15 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1492?focusedWorklogId=277206&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-277206
 ]

ASF GitHub Bot logged work on HDDS-1492:


Author: ASF GitHub Bot
Created on: 16/Jul/19 04:51
Start Date: 16/Jul/19 04:51
Worklog Time Spent: 10m 
  Work Description: bshashikant commented on issue #1084: HDDS-1492. 
Generated chunk size name too long.
URL: https://github.com/apache/hadoop/pull/1084#issuecomment-511663920
 
 
   The updated patch addresses checkstyle issues. I have also verified the name 
of chunk file in dn.
 



Issue Time Tracking
---

Worklog Id: (was: 277206)
Time Spent: 40m  (was: 0.5h)

> Generated chunk size name too long.
> ---
>
> Key: HDDS-1492
> URL: https://issues.apache.org/jira/browse/HDDS-1492
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Affects Versions: 0.5.0
>Reporter: Aravindan Vijayan
>Assignee: Shashikant Banerjee
>Priority: Critical
>  Labels: pull-request-available
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> Following exception is seen in SCM logs intermittently. 
> {code}
> java.lang.RuntimeException: file name 
> 'chunks/2a54b2a153f4a9c5da5f44e2c6f97c60_stream_9c6ac565-e2d4-469c-bd5c-47922a35e798_chunk_10.tmp.2.23115'
>  is too long ( > 100 bytes)
> {code}
> We may have to limit the name of the chunk to 100 bytes.
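
A hedged sketch of the kind of bound the last line suggests (hypothetical 
helper, not the committed patch): keep the generated name when it fits, 
otherwise substitute a fixed-length digest of it so the name stays short but 
still unique per input.

{code:java}
import java.nio.charset.StandardCharsets;
import java.util.UUID;

// Return the chunk file name unchanged if it fits, else a 36-character
// name-based UUID derived from it (well under the 100-byte limit).
static String boundedChunkName(String name, int maxBytes) {
  if (name.getBytes(StandardCharsets.UTF_8).length <= maxBytes) {
    return name;
  }
  return UUID.nameUUIDFromBytes(
      name.getBytes(StandardCharsets.UTF_8)).toString();
}
{code}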






[jira] [Work logged] (HDDS-1736) Cleanup 2phase old HA code for Key requests.

2019-07-15 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1736?focusedWorklogId=277207&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-277207
 ]

ASF GitHub Bot logged work on HDDS-1736:


Author: ASF GitHub Bot
Created on: 16/Jul/19 04:51
Start Date: 16/Jul/19 04:51
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on issue #1038: HDDS-1736. 
Cleanup 2phase old HA code for Key requests.
URL: https://github.com/apache/hadoop/pull/1038#issuecomment-511663998
 
 
   Thank You @arp7 for the review.
   I will commit this to the trunk.
 



Issue Time Tracking
---

Worklog Id: (was: 277207)
Time Spent: 1h 10m  (was: 1h)

> Cleanup 2phase old HA code for Key requests.
> 
>
> Key: HDDS-1736
> URL: https://issues.apache.org/jira/browse/HDDS-1736
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> HDDS-1638 brought in HA code for Key operations like allocateBlock, createKey 
> etc. This Jira cleans up the old code changes which were added as part of 
> HDDS-1250 and HDDS-1262 for allocateBlock and openKey.






[jira] [Work logged] (HDDS-1492) Generated chunk size name too long.

2019-07-15 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1492?focusedWorklogId=277204&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-277204
 ]

ASF GitHub Bot logged work on HDDS-1492:


Author: ASF GitHub Bot
Created on: 16/Jul/19 04:49
Start Date: 16/Jul/19 04:49
Worklog Time Spent: 10m 
  Work Description: bshashikant commented on issue #1084: HDDS-1492. 
Generated chunk size name too long.
URL: https://github.com/apache/hadoop/pull/1084#issuecomment-511663557
 
 
   The updated patch addresses checkstyle issues. I have also verified the name 
of chunk file in dn.
 



Issue Time Tracking
---

Worklog Id: (was: 277204)
Time Spent: 0.5h  (was: 20m)

> Generated chunk size name too long.
> ---
>
> Key: HDDS-1492
> URL: https://issues.apache.org/jira/browse/HDDS-1492
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Affects Versions: 0.5.0
>Reporter: Aravindan Vijayan
>Assignee: Shashikant Banerjee
>Priority: Critical
>  Labels: pull-request-available
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> Following exception is seen in SCM logs intermittently. 
> {code}
> java.lang.RuntimeException: file name 
> 'chunks/2a54b2a153f4a9c5da5f44e2c6f97c60_stream_9c6ac565-e2d4-469c-bd5c-47922a35e798_chunk_10.tmp.2.23115'
>  is too long ( > 100 bytes)
> {code}
> We may have to limit the name of the chunk to 100 bytes.






[jira] [Commented] (HDFS-14652) HealthMonitor connection retry times should be configurable

2019-07-15 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14652?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16885841#comment-16885841
 ] 

Hadoop QA commented on HDFS-14652:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
27s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
 3s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 12s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
59s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m  
4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 16m  
4s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m  2s{color} | {color:orange} hadoop-common-project/hadoop-common: The patch 
generated 3 new + 107 unchanged - 1 fixed = 110 total (was 108) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 16s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
7s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  7m 58s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
37s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 94m  0s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=18.09.7 Server=18.09.7 Image:yetus/hadoop:bdbca0e |
| JIRA Issue | HDFS-14652 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12974790/HDFS-14652-002.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux e265cdf169da 4.4.0-139-generic #165-Ubuntu SMP Wed Oct 24 
10:58:50 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / f77d54c |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_212 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/27233/artifact/out/diff-checkstyle-hadoop-common-project_hadoop-common.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/27233/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/27233/testReport/ |
| Max. process+thread count | 1464 (vs. ulimit of 1) |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| 

[jira] [Created] (HDDS-1804) TestCloseContainerHandlingByClient#testBlockWrites fails intermittently

2019-07-15 Thread Shashikant Banerjee (JIRA)
Shashikant Banerjee created HDDS-1804:
-

 Summary: TestCloseContainerHandlingByClient#testBlockWrites fails 
intermittently
 Key: HDDS-1804
 URL: https://issues.apache.org/jira/browse/HDDS-1804
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
  Components: Ozone Client
Affects Versions: 0.5.0
Reporter: Shashikant Banerjee
Assignee: Shashikant Banerjee
 Fix For: 0.5.0


The test fails intermittently as reported here:

[https://builds.apache.org/job/hadoop-multibranch/job/PR-1082/1/testReport/org.apache.hadoop.ozone.client.rpc/TestCloseContainerHandlingByClient/testBlockWrites/]
{code:java}
java.lang.IllegalArgumentException
at 
com.google.common.base.Preconditions.checkArgument(Preconditions.java:72)
at 
org.apache.hadoop.hdds.scm.XceiverClientManager.acquireClient(XceiverClientManager.java:150)
at 
org.apache.hadoop.hdds.scm.XceiverClientManager.acquireClientForReadData(XceiverClientManager.java:143)
at 
org.apache.hadoop.hdds.scm.storage.BlockInputStream.getChunkInfos(BlockInputStream.java:154)
at 
org.apache.hadoop.hdds.scm.storage.BlockInputStream.initialize(BlockInputStream.java:118)
at 
org.apache.hadoop.hdds.scm.storage.BlockInputStream.read(BlockInputStream.java:222)
at 
org.apache.hadoop.ozone.client.io.KeyInputStream.read(KeyInputStream.java:171)
at 
org.apache.hadoop.ozone.client.io.OzoneInputStream.read(OzoneInputStream.java:47)
at java.io.InputStream.read(InputStream.java:101)
at 
org.apache.hadoop.ozone.container.ContainerTestHelper.validateData(ContainerTestHelper.java:709)
at 
org.apache.hadoop.ozone.client.rpc.TestCloseContainerHandlingByClient.validateData(TestCloseContainerHandlingByClient.java:401)
at 
org.apache.hadoop.ozone.client.rpc.TestCloseContainerHandlingByClient.testBlockWrites(TestCloseContainerHandlingByClient.java:471)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
at 
org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:384)
at 
org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:345)
at 
org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:126)
at 
org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:418)
{code}






[jira] [Work logged] (HDDS-1803) shellcheck.sh does not work on Mac

2019-07-15 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1803?focusedWorklogId=277186&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-277186
 ]

ASF GitHub Bot logged work on HDDS-1803:


Author: ASF GitHub Bot
Created on: 16/Jul/19 04:12
Start Date: 16/Jul/19 04:12
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1102: HDDS-1803. 
shellcheck.sh does not work on Mac
URL: https://github.com/apache/hadoop/pull/1102#issuecomment-511657290
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 71 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | 0 | shelldocs | 0 | Shelldocs was not available. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 492 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 843 | branch has no errors when building and testing 
our client artifacts. |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 444 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | shellcheck | 1 | There were no new shellcheck issues. |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 739 | patch has no errors when building and testing 
our client artifacts. |
   ||| _ Other Tests _ |
   | +1 | unit | 115 | hadoop-hdds in the patch passed. |
   | +1 | unit | 193 | hadoop-ozone in the patch passed. |
   | +1 | asflicense | 44 | The patch does not generate ASF License warnings. |
   | | | 3141 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=18.09.7 Server=18.09.7 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1102/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1102 |
   | Optional Tests | dupname asflicense mvnsite unit shellcheck shelldocs |
   | uname | Linux 773ae44f6140 4.15.0-48-generic #51-Ubuntu SMP Wed Apr 3 
08:28:49 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / f77d54c |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1102/1/testReport/ |
   | Max. process+thread count | 307 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone U: hadoop-ozone |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1102/1/console |
   | versions | git=2.7.4 maven=3.3.9 shellcheck=0.4.6 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 



Issue Time Tracking
---

Worklog Id: (was: 277186)
Time Spent: 0.5h  (was: 20m)

> shellcheck.sh does not work on Mac
> --
>
> Key: HDDS-1803
> URL: https://issues.apache.org/jira/browse/HDDS-1803
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Affects Versions: 0.4.1
>Reporter: Doroszlai, Attila
>Assignee: Doroszlai, Attila
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> # {{shellcheck.sh}} does not work on Mac
> {code}
> find: -executable: unknown primary or operator
> {code}
> # {{$OUTPUT_FILE}} only contains problems from {{hadoop-ozone}}, not from 
> {{hadoop-hdds}}






[jira] [Updated] (HDDS-1803) shellcheck.sh does not work on Mac

2019-07-15 Thread Doroszlai, Attila (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1803?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Doroszlai, Attila updated HDDS-1803:

Affects Version/s: 0.4.1

> shellcheck.sh does not work on Mac
> --
>
> Key: HDDS-1803
> URL: https://issues.apache.org/jira/browse/HDDS-1803
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Affects Versions: 0.4.1
>Reporter: Doroszlai, Attila
>Assignee: Doroszlai, Attila
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> # {{shellcheck.sh}} does not work on Mac
> {code}
> find: -executable: unknown primary or operator
> {code}
> # {{$OUTPUT_FILE}} only contains problems from {{hadoop-ozone}}, not from 
> {{hadoop-hdds}}






[jira] [Updated] (HDDS-1803) shellcheck.sh does not work on Mac

2019-07-15 Thread Doroszlai, Attila (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1803?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Doroszlai, Attila updated HDDS-1803:

Status: Patch Available  (was: In Progress)

> shellcheck.sh does not work on Mac
> --
>
> Key: HDDS-1803
> URL: https://issues.apache.org/jira/browse/HDDS-1803
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Doroszlai, Attila
>Assignee: Doroszlai, Attila
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> # {{shellcheck.sh}} does not work on Mac
> {code}
> find: -executable: unknown primary or operator
> {code}
> # {{$OUTPUT_FILE}} only contains problems from {{hadoop-ozone}}, not from 
> {{hadoop-hdds}}






[jira] [Commented] (HDFS-14547) DirectoryWithQuotaFeature.quota costs additional memory even the storage type quota is not set.

2019-07-15 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14547?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16885822#comment-16885822
 ] 

Hadoop QA commented on HDFS-14547:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
42s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} branch-2.9 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
19s{color} | {color:green} branch-2.9 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
51s{color} | {color:green} branch-2.9 passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
49s{color} | {color:green} branch-2.9 passed with JDK v1.8.0_212 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
30s{color} | {color:green} branch-2.9 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
58s{color} | {color:green} branch-2.9 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
0s{color} | {color:green} branch-2.9 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
14s{color} | {color:green} branch-2.9 passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
50s{color} | {color:green} branch-2.9 passed with JDK v1.8.0_212 {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed with JDK v1.8.0_212 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
9s{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed with JDK v1.8.0_212 {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 75m 23s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
29s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}107m 20s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.qjournal.server.TestJournalNodeRespectsBindHostKeys |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=18.09.7 Server=18.09.7 Image:yetus/hadoop:c3439fff6be |
| JIRA Issue | HDFS-14547 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12974786/HDFS-14547-branch-2.9.003.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux bfa19505701b 4.15.0-52-generic #56-Ubuntu SMP Tue Jun 4 
22:49:08 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | branch-2.9 / 330e5c0 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_212 |
| Multi-JDK versions |  

[jira] [Work logged] (HDDS-1803) shellcheck.sh does not work on Mac

2019-07-15 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1803?focusedWorklogId=277170&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-277170
 ]

ASF GitHub Bot logged work on HDDS-1803:


Author: ASF GitHub Bot
Created on: 16/Jul/19 03:20
Start Date: 16/Jul/19 03:20
Worklog Time Spent: 10m 
  Work Description: adoroszlai commented on issue #1102: HDDS-1803. 
shellcheck.sh does not work on Mac
URL: https://github.com/apache/hadoop/pull/1102#issuecomment-511648883
 
 
   /label ozone
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 277170)
Time Spent: 20m  (was: 10m)

> shellcheck.sh does not work on Mac
> --
>
> Key: HDDS-1803
> URL: https://issues.apache.org/jira/browse/HDDS-1803
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Doroszlai, Attila
>Assignee: Doroszlai, Attila
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> # {{shellcheck.sh}} does not work on Mac
> {code}
> find: -executable: unknown primary or operator
> {code}
> # {{$OUTPUT_FILE}} only contains problems from {{hadoop-ozone}}, not from 
> {{hadoop-hdds}}



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1803) shellcheck.sh does not work on Mac

2019-07-15 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1803?focusedWorklogId=277169&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-277169
 ]

ASF GitHub Bot logged work on HDDS-1803:


Author: ASF GitHub Bot
Created on: 16/Jul/19 03:18
Start Date: 16/Jul/19 03:18
Worklog Time Spent: 10m 
  Work Description: adoroszlai commented on pull request #1102: HDDS-1803. 
shellcheck.sh does not work on Mac
URL: https://github.com/apache/hadoop/pull/1102
 
 
   ## What changes were proposed in this pull request?
   
* Filter for file permission on Mac.
* Merge two separate `find` calls to avoid overwriting output (and 
eliminate code duplication).
   
   https://issues.apache.org/jira/browse/HDDS-1803
   
   ## How was this patch tested?
   
   ```
   $ hadoop-ozone/dev-support/checks/shellcheck.sh | wc
133     600    6065
   
   $ wc target/shell-problems.txt
133     600    6065 target/shell-problems.txt
   ```
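   
   For reference, a minimal sketch of a portable variant (an illustration only,
   not the exact patch): BSD `find` on macOS has no `-executable` primary, but
   the symbolic `-perm -u+x` form is accepted by both GNU and BSD `find`, and a
   single `find` over both source trees keeps a second invocation from
   overwriting `$OUTPUT_FILE`:
   
   ```
   # Sketch, assuming the script collects executable *.sh files and runs
   # shellcheck on them; the paths and variable name follow the PR description.
   find hadoop-hdds hadoop-ozone -type f -name '*.sh' -perm -u+x \
     -exec shellcheck {} + > "$OUTPUT_FILE"
   ```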
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 277169)
Time Spent: 10m
Remaining Estimate: 0h

> shellcheck.sh does not work on Mac
> --
>
> Key: HDDS-1803
> URL: https://issues.apache.org/jira/browse/HDDS-1803
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Doroszlai, Attila
>Assignee: Doroszlai, Attila
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> # {{shellcheck.sh}} does not work on Mac
> {code}
> find: -executable: unknown primary or operator
> {code}
> # {{$OUTPUT_FILE}} only contains problems from {{hadoop-ozone}}, not from 
> {{hadoop-hdds}}



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1803) shellcheck.sh does not work on Mac

2019-07-15 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1803?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-1803:
-
Labels: pull-request-available  (was: )

> shellcheck.sh does not work on Mac
> --
>
> Key: HDDS-1803
> URL: https://issues.apache.org/jira/browse/HDDS-1803
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Doroszlai, Attila
>Assignee: Doroszlai, Attila
>Priority: Minor
>  Labels: pull-request-available
>
> # {{shellcheck.sh}} does not work on Mac
> {code}
> find: -executable: unknown primary or operator
> {code}
> # {{$OUTPUT_FILE}} only contains problems from {{hadoop-ozone}}, not from 
> {{hadoop-hdds}}



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work started] (HDDS-1803) shellcheck.sh does not work on Mac

2019-07-15 Thread Doroszlai, Attila (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1803?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HDDS-1803 started by Doroszlai, Attila.
---
> shellcheck.sh does not work on Mac
> --
>
> Key: HDDS-1803
> URL: https://issues.apache.org/jira/browse/HDDS-1803
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Doroszlai, Attila
>Assignee: Doroszlai, Attila
>Priority: Minor
>
> # {{shellcheck.sh}} does not work on Mac
> {code}
> find: -executable: unknown primary or operator
> {code}
> # {{$OUTPUT_FILE}} only contains problems from {{hadoop-ozone}}, not from 
> {{hadoop-hdds}}



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1736) Cleanup 2phase old HA code for Key requests.

2019-07-15 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1736?focusedWorklogId=277168&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-277168
 ]

ASF GitHub Bot logged work on HDDS-1736:


Author: ASF GitHub Bot
Created on: 16/Jul/19 03:11
Start Date: 16/Jul/19 03:11
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1038: HDDS-1736. 
Cleanup 2phase old HA code for Key requests.
URL: https://github.com/apache/hadoop/pull/1038#issuecomment-511647235
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 40 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 1 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 49 | Maven dependency ordering for branch |
   | +1 | mvninstall | 499 | trunk passed |
   | +1 | compile | 267 | trunk passed |
   | +1 | checkstyle | 79 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 883 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 167 | trunk passed |
   | 0 | spotbugs | 312 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 507 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 26 | Maven dependency ordering for patch |
   | +1 | mvninstall | 444 | the patch passed |
   | +1 | compile | 273 | the patch passed |
   | +1 | cc | 273 | the patch passed |
   | +1 | javac | 273 | the patch passed |
   | +1 | checkstyle | 85 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 677 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 166 | the patch passed |
   | +1 | findbugs | 531 | the patch passed |
   ||| _ Other Tests _ |
   | -1 | unit | 290 | hadoop-hdds in the patch failed. |
   | -1 | unit | 1641 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 55 | The patch does not generate ASF License warnings. |
   | | | 6859 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdds.scm.container.TestReplicationManager |
   |   | hadoop.ozone.client.rpc.TestFailureHandlingByClient |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestSecureOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestCloseContainerHandlingByClient |
   |   | hadoop.ozone.client.rpc.TestCommitWatcher |
   |   | hadoop.ozone.client.rpc.TestOzoneAtRestEncryption |
   |   | hadoop.ozone.client.rpc.TestBlockOutputStreamWithFailures |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClientWithRatis |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=18.09.7 Server=18.09.7 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1038/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1038 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle cc |
   | uname | Linux 9c00d285b1db 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / ef66e49 |
   | Default Java | 1.8.0_212 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1038/2/artifact/out/patch-unit-hadoop-hdds.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1038/2/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1038/2/testReport/ |
   | Max. process+thread count | 5341 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/common hadoop-ozone/ozone-manager U: 
hadoop-ozone |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1038/2/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog 

[jira] [Updated] (HDFS-14652) HealthMonitor connection retry times should be configurable

2019-07-15 Thread Chen Zhang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14652?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen Zhang updated HDFS-14652:
--
Attachment: HDFS-14652-002.patch

> HealthMonitor connection retry times should be configurable
> ---
>
> Key: HDFS-14652
> URL: https://issues.apache.org/jira/browse/HDFS-14652
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Chen Zhang
>Assignee: Chen Zhang
>Priority: Major
> Attachments: HDFS-14652-001.patch, HDFS-14652-002.patch
>
>
> On our production HDFS cluster, a burst of client requests filled the 
> kernel's TCP connection queue on the NameNode's host. Since 
> "net.ipv4.tcp_syn_retries" is set to 1 in our environment, after 3 seconds 
> the ZooKeeper HealthMonitor got a connection error like this:
> {code:java}
> WARN org.apache.hadoop.ha.HealthMonitor: Transport-level exception trying to 
> monitor health of NameNode at nn_host_name/ip_address:port: Call From 
> zkfc_host_name/ip to nn_host_name:port failed on connection exception: 
> java.net.ConnectException: Connection timed out; For more details see: 
> http://wiki.apache.org/hadoop/ConnectionRefused
> {code}
> This error caused a failover and affected the availability of that cluster; 
> we fixed it by raising the kernel parameter net.ipv4.tcp_syn_retries to 6.
> While working on this issue, we also found that the connection retry count 
> (ipc.client.connect.max.retries) of the health monitor is hard-coded to 1. 
> It should be configurable: if we don't want the health monitor to be this 
> sensitive, we could then adjust its behavior through configuration.
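
A minimal sketch of what a configurable retry count could look like (the
configuration key name below is hypothetical, not necessarily the one used by
the attached patches):
{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.CommonConfigurationKeysPublic;

public class HealthMonitorConfSketch {
  static Configuration monitorConf(Configuration conf) {
    Configuration copy = new Configuration(conf);
    // Previously hard-coded to 1; read it from the configuration instead,
    // keeping 1 as the default so existing deployments behave the same.
    int retries = conf.getInt("ha.health-monitor.connect.max.retries", 1);
    copy.setInt(
        CommonConfigurationKeysPublic.IPC_CLIENT_CONNECT_MAX_RETRIES_KEY,
        retries);
    return copy;
  }
}
{code}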



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-1803) shellcheck.sh does not work on Mac

2019-07-15 Thread Doroszlai, Attila (JIRA)
Doroszlai, Attila created HDDS-1803:
---

 Summary: shellcheck.sh does not work on Mac
 Key: HDDS-1803
 URL: https://issues.apache.org/jira/browse/HDDS-1803
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Doroszlai, Attila
Assignee: Doroszlai, Attila


# {{shellcheck.sh}} does not work on Mac
{code}
find: -executable: unknown primary or operator
{code}
# {{$OUTPUT_FILE}} only contains problems from {{hadoop-ozone}}, not from 
{{hadoop-hdds}}



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14642) processMisReplicatedBlocks does not return correct processed count

2019-07-15 Thread Ayush Saxena (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14642?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16885798#comment-16885798
 ] 

Ayush Saxena commented on HDFS-14642:
-

Committed to trunk.
Thanks Everyone!!!

> processMisReplicatedBlocks does not return correct processed count
> --
>
> Key: HDFS-14642
> URL: https://issues.apache.org/jira/browse/HDFS-14642
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 3.2.0
>Reporter: Stephen O'Donnell
>Assignee: Stephen O'Donnell
>Priority: Major
> Attachments: HDFS-14642.001.patch
>
>
> HDFS-14053 introduced a method "processMisReplicatedBlocks" to the 
> blockManager, and it is used by fsck to schedule mis-replicated blocks for 
> replication.
> The method should return the number of blocks it processed, but it always 
> returns zero as "processed" is never incremented in the method.
> It should also drop and re-take the write lock every "numBlocksPerIteration" 
> but as processed is never incremented, it will never drop and re-take the 
> write lock, giving potential for holding the write lock for a long time.
> {code:java}
> public int processMisReplicatedBlocks(List<BlockInfo> blocks) {
>   int processed = 0;
>   Iterator<BlockInfo> iter = blocks.iterator();
>   try {
> while (isPopulatingReplQueues() && namesystem.isRunning()
> && !Thread.currentThread().isInterrupted()
> && iter.hasNext()) {
>   int limit = processed + numBlocksPerIteration;
>   namesystem.writeLockInterruptibly();
>   try {
> while (iter.hasNext() && processed < limit) {
>   BlockInfo blk = iter.next();
>   MisReplicationResult r = processMisReplicatedBlock(blk);
>   LOG.debug("BLOCK* processMisReplicatedBlocks: " +
>   "Re-scanned block {}, result is {}", blk, r);
> }
>   } finally {
> namesystem.writeUnlock();
>   }
> }
>   } catch (InterruptedException ex) {
> LOG.info("Caught InterruptedException while scheduling replication work" +
> " for mis-replicated blocks");
> Thread.currentThread().interrupt();
>   }
>   return processed;
> }{code}
> Due to this, fsck causes a warning to be logged in the NN for every 
> mis-replicated file it schedules replication for, as it checks the processed 
> count:
> {code:java}
> 2019-07-10 15:46:14,790 WARN namenode.NameNode: Fsck: Block manager is able 
> to process only 0 mis-replicated blocks (Total count : 1 ) for path /...{code}
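
The likely shape of the fix, as a sketch against the inner loop quoted above
(the committed patch may differ in detail), is simply to count each block as
it is processed:
{code:java}
while (iter.hasNext() && processed < limit) {
  BlockInfo blk = iter.next();
  MisReplicationResult r = processMisReplicatedBlock(blk);
  processed++;  // count the block so the outer loop can drop and re-take
                // the write lock every numBlocksPerIteration
  LOG.debug("BLOCK* processMisReplicatedBlocks: " +
      "Re-scanned block {}, result is {}", blk, r);
}
{code}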



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14642) processMisReplicatedBlocks does not return correct processed count

2019-07-15 Thread Ayush Saxena (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14642?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ayush Saxena updated HDFS-14642:

   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 3.3.0
   Status: Resolved  (was: Patch Available)

> processMisReplicatedBlocks does not return correct processed count
> --
>
> Key: HDFS-14642
> URL: https://issues.apache.org/jira/browse/HDFS-14642
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 3.2.0
>Reporter: Stephen O'Donnell
>Assignee: Stephen O'Donnell
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HDFS-14642.001.patch
>
>
> HDFS-14053 introduced a method "processMisReplicatedBlocks" to the 
> blockManager, and it is used by fsck to schedule mis-replicated blocks for 
> replication.
> The method should return the number of blocks it processed, but it always 
> returns zero as "processed" is never incremented in the method.
> It should also drop and re-take the write lock every "numBlocksPerIteration" 
> but as processed is never incremented, it will never drop and re-take the 
> write lock, giving potential for holding the write lock for a long time.
> {code:java}
> public int processMisReplicatedBlocks(List<BlockInfo> blocks) {
>   int processed = 0;
>   Iterator<BlockInfo> iter = blocks.iterator();
>   try {
> while (isPopulatingReplQueues() && namesystem.isRunning()
> && !Thread.currentThread().isInterrupted()
> && iter.hasNext()) {
>   int limit = processed + numBlocksPerIteration;
>   namesystem.writeLockInterruptibly();
>   try {
> while (iter.hasNext() && processed < limit) {
>   BlockInfo blk = iter.next();
>   MisReplicationResult r = processMisReplicatedBlock(blk);
>   LOG.debug("BLOCK* processMisReplicatedBlocks: " +
>   "Re-scanned block {}, result is {}", blk, r);
> }
>   } finally {
> namesystem.writeUnlock();
>   }
> }
>   } catch (InterruptedException ex) {
> LOG.info("Caught InterruptedException while scheduling replication work" +
> " for mis-replicated blocks");
> Thread.currentThread().interrupt();
>   }
>   return processed;
> }{code}
> Due to this, fsck causes a warning to be logged in the NN for every 
> mis-replicated file it schedules replication for, as it checks the processed 
> count:
> {code:java}
> 2019-07-10 15:46:14,790 WARN namenode.NameNode: Fsck: Block manager is able 
> to process only 0 mis-replicated blocks (Total count : 1 ) for path /...{code}



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14642) processMisReplicatedBlocks does not return correct processed count

2019-07-15 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14642?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16885793#comment-16885793
 ] 

Hudson commented on HDFS-14642:
---

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #16922 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/16922/])
HDFS-14642. processMisReplicatedBlocks does not return correct processed 
(ayushsaxena: rev f77d54c24343e6ca7c438d9db431cef14c3ae77b)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestBlocksWithNotEnoughRacks.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java


> processMisReplicatedBlocks does not return correct processed count
> --
>
> Key: HDFS-14642
> URL: https://issues.apache.org/jira/browse/HDFS-14642
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 3.2.0
>Reporter: Stephen O'Donnell
>Assignee: Stephen O'Donnell
>Priority: Major
> Attachments: HDFS-14642.001.patch
>
>
> HDFS-14053 introduced a method "processMisReplicatedBlocks" to the 
> blockManager, and it is used by fsck to schedule mis-replicated blocks for 
> replication.
> The method should return the number of blocks it processed, but it always 
> returns zero as "processed" is never incremented in the method.
> It should also drop and re-take the write lock every "numBlocksPerIteration" 
> but as processed is never incremented, it will never drop and re-take the 
> write lock, giving potential for holding the write lock for a long time.
> {code:java}
> public int processMisReplicatedBlocks(List<BlockInfo> blocks) {
>   int processed = 0;
>   Iterator<BlockInfo> iter = blocks.iterator();
>   try {
> while (isPopulatingReplQueues() && namesystem.isRunning()
> && !Thread.currentThread().isInterrupted()
> && iter.hasNext()) {
>   int limit = processed + numBlocksPerIteration;
>   namesystem.writeLockInterruptibly();
>   try {
> while (iter.hasNext() && processed < limit) {
>   BlockInfo blk = iter.next();
>   MisReplicationResult r = processMisReplicatedBlock(blk);
>   LOG.debug("BLOCK* processMisReplicatedBlocks: " +
>   "Re-scanned block {}, result is {}", blk, r);
> }
>   } finally {
> namesystem.writeUnlock();
>   }
> }
>   } catch (InterruptedException ex) {
> LOG.info("Caught InterruptedException while scheduling replication work" +
> " for mis-replicated blocks");
> Thread.currentThread().interrupt();
>   }
>   return processed;
> }{code}
> Due to this, fsck causes a warning to be logged in the NN for every 
> mis-replicated file it schedules replication for, as it checks the processed 
> count:
> {code:java}
> 2019-07-10 15:46:14,790 WARN namenode.NameNode: Fsck: Block manager is able 
> to process only 0 mis-replicated blocks (Total count : 1 ) for path /...{code}



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1544) Support default Acls for volume, bucket, keys and prefix

2019-07-15 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1544?focusedWorklogId=277156&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-277156
 ]

ASF GitHub Bot logged work on HDDS-1544:


Author: ASF GitHub Bot
Created on: 16/Jul/19 02:49
Start Date: 16/Jul/19 02:49
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #1101: 
HDDS-1544. Support default Acls for volume, bucket, keys and prefix. …
URL: https://github.com/apache/hadoop/pull/1101#discussion_r303709384
 
 

 ##
 File path: 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/client/rpc/TestOzoneRpcClientAbstract.java
 ##
 @@ -2279,7 +2332,42 @@ public void testNativeAclsForKey() throws Exception {
 .setStoreType(OzoneObj.StoreType.OZONE)
 .build();
 
-validateOzoneAcl(ozObj);
+// Validates access acls.
+validateOzoneAccessAcl(ozObj);
+
+// Check default acls inherited from bucket.
+OzoneObj buckObj = new OzoneObjInfo.Builder()
+.setVolumeName(volumeName)
+.setBucketName(bucketName)
+.setKeyName(key1)
+.setResType(OzoneObj.ResourceType.BUCKET)
+.setStoreType(OzoneObj.StoreType.OZONE)
+.build();
+
+validateDefaultAcls(buckObj, ozObj, null, bucket);
+
+// Check default acls inherited from prefix.
+OzoneObj prefixObj = new OzoneObjInfo.Builder()
+.setVolumeName(volumeName)
+.setBucketName(bucketName)
+.setKeyName(key1)
+.setPrefixName("dir1/")
+.setResType(OzoneObj.ResourceType.PREFIX)
+.setStoreType(OzoneObj.StoreType.OZONE)
+.build();
+store.setAcl(prefixObj, getAclList(new OzoneConfiguration()));
+// Prefix should inherit DEFAULT acl from bucket.
+
+List<OzoneAcl> acls = store.getAcl(prefixObj);
+assertTrue("Current acls:" + StringUtils.join(",", acls),
+acls.contains(inheritedUserAcl));
+assertTrue("Current acls:" + StringUtils.join(",", acls),
+acls.contains(inheritedGroupAcl));
+// Remove inherited acls from prefix.
+assertTrue(store.removeAcl(prefixObj, inheritedUserAcl));
+assertTrue(store.removeAcl(prefixObj, inheritedGroupAcl));
 
 Review comment:
   whitespace:end of line
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 277156)
Time Spent: 14h  (was: 13h 50m)

> Support default Acls for volume, bucket, keys and prefix
> 
>
> Key: HDDS-1544
> URL: https://issues.apache.org/jira/browse/HDDS-1544
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Xiaoyu Yao
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDDS-1544.00.patch
>
>  Time Spent: 14h
>  Remaining Estimate: 0h
>
> Add default ACLs for volume, bucket, keys and prefix
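
As a hypothetical illustration of the two ACL scopes this change introduces
(the constructor signature matches the review diffs elsewhere in this thread;
the user name and rights are made up for the example):
{code:java}
import org.apache.hadoop.ozone.OzoneAcl;
import org.apache.hadoop.ozone.security.acl.IAccessAuthorizer.ACLIdentityType;
import org.apache.hadoop.ozone.security.acl.IAccessAuthorizer.ACLType;

public class DefaultAclSketch {
  public static void main(String[] args) {
    // ACCESS acl: applies to the object it is set on.
    OzoneAcl accessAcl = new OzoneAcl(ACLIdentityType.USER, "testuser",
        ACLType.READ, OzoneAcl.AclScope.ACCESS);
    // DEFAULT acl: inherited by keys created under the bucket or prefix.
    OzoneAcl defaultAcl = new OzoneAcl(ACLIdentityType.USER, "testuser",
        ACLType.ALL, OzoneAcl.AclScope.DEFAULT);
    System.out.println(accessAcl + " / " + defaultAcl);
  }
}
{code}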



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1544) Support default Acls for volume, bucket, keys and prefix

2019-07-15 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1544?focusedWorklogId=277146&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-277146
 ]

ASF GitHub Bot logged work on HDDS-1544:


Author: ASF GitHub Bot
Created on: 16/Jul/19 02:47
Start Date: 16/Jul/19 02:47
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #1101: 
HDDS-1544. Support default Acls for volume, bucket, keys and prefix. …
URL: https://github.com/apache/hadoop/pull/1101#discussion_r303709405
 
 

 ##
 File path: 
hadoop-ozone/objectstore-service/src/main/java/org/apache/hadoop/ozone/web/storage/DistributedStorageHandler.java
 ##
 @@ -71,6 +70,8 @@
 import java.util.Objects;
 import java.util.concurrent.TimeUnit;
 
+import static org.apache.hadoop.ozone.OzoneAcl.AclScope.ACCESS;
+
 
 Review comment:
   whitespace:end of line
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 277146)
Time Spent: 12h 20m  (was: 12h 10m)

> Support default Acls for volume, bucket, keys and prefix
> 
>
> Key: HDDS-1544
> URL: https://issues.apache.org/jira/browse/HDDS-1544
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Xiaoyu Yao
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDDS-1544.00.patch
>
>  Time Spent: 12h 20m
>  Remaining Estimate: 0h
>
> Add default ACLs for volume, bucket, keys and prefix



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1544) Support default Acls for volume, bucket, keys and prefix

2019-07-15 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1544?focusedWorklogId=277150&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-277150
 ]

ASF GitHub Bot logged work on HDDS-1544:


Author: ASF GitHub Bot
Created on: 16/Jul/19 02:47
Start Date: 16/Jul/19 02:47
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #1101: 
HDDS-1544. Support default Acls for volume, bucket, keys and prefix. …
URL: https://github.com/apache/hadoop/pull/1101#discussion_r303709426
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/KeyManagerImpl.java
 ##
 @@ -983,6 +1019,44 @@ public OmMultipartInfo 
applyInitiateMultipartUpload(OmKeyArgs keyArgs,
 }
   }
 
+  private List<OzoneAclInfo> getAclsForKey(OmKeyArgs keyArgs, 
+  OmVolumeArgs volArgs, OmBucketInfo bucketInfo) {
+List<OzoneAclInfo> acls = new ArrayList<>(keyArgs.getAcls().size());
+
+keyArgs.getAcls().stream().map(OzoneAcl::toProtobuf).
+collect(Collectors.toList());
+
+// Inherit DEFAULT acls from prefix.
+boolean prefixParentFound = false;
+if(prefixManager != null) {
+  List<OmPrefixInfo> prefixList = prefixManager.getLongestPrefixPath(
+  OZONE_URI_DELIMITER +
+  keyArgs.getVolumeName() + OZONE_URI_DELIMITER +
+  keyArgs.getBucketName() + OZONE_URI_DELIMITER +
+  keyArgs.getKeyName());
+
+  if(prefixList.size() > 0) {
+// Add all acls from direct parent to key.
+OmPrefixInfo prefixInfo = prefixList.get(prefixList.size() - 1);
+if(prefixInfo  != null) {
+  acls.addAll(OzoneUtils.getDefaultAclsProto(prefixInfo.getAcls()));
+  prefixParentFound = true;
+}
+  }
+}
+
+// Inherit DEFAULT acls from bucket only if DEFAULT acls for 
+// prefix are not set.
+if (!prefixParentFound && bucketInfo != null) {
+  acls.addAll(bucketInfo.getAcls().stream().filter(a -> a.getAclScope()
+  .equals(OzoneAcl.AclScope.DEFAULT))
+  .map(OzoneAcl::toProtobufWithAccessType)
+  .collect(Collectors.toList()));
+}
 
 Review comment:
   whitespace:end of line
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 277150)
Time Spent: 13h  (was: 12h 50m)

> Support default Acls for volume, bucket, keys and prefix
> 
>
> Key: HDDS-1544
> URL: https://issues.apache.org/jira/browse/HDDS-1544
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Xiaoyu Yao
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDDS-1544.00.patch
>
>  Time Spent: 13h
>  Remaining Estimate: 0h
>
> Add default ACLs for volume, bucket, keys and prefix



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1544) Support default Acls for volume, bucket, keys and prefix

2019-07-15 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1544?focusedWorklogId=277152&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-277152
 ]

ASF GitHub Bot logged work on HDDS-1544:


Author: ASF GitHub Bot
Created on: 16/Jul/19 02:47
Start Date: 16/Jul/19 02:47
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #1101: 
HDDS-1544. Support default Acls for volume, bucket, keys and prefix. …
URL: https://github.com/apache/hadoop/pull/1101#discussion_r303709437
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/KeyManagerImpl.java
 ##
 @@ -472,7 +481,8 @@ public OpenKeySession openKey(OmKeyArgs args) throws 
IOException {
 if (keyInfo == null) {
   // the key does not exist, create a new object, the new blocks are the
   // version 0
-  keyInfo = createKeyInfo(args, locations, factor, type, size, encInfo);
+  keyInfo = createKeyInfo(args, locations, factor, type, size, 
+  encInfo, bucketInfo);
 
 Review comment:
   whitespace:end of line
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 277152)
Time Spent: 13h 20m  (was: 13h 10m)

> Support default Acls for volume, bucket, keys and prefix
> 
>
> Key: HDDS-1544
> URL: https://issues.apache.org/jira/browse/HDDS-1544
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Xiaoyu Yao
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDDS-1544.00.patch
>
>  Time Spent: 13h 20m
>  Remaining Estimate: 0h
>
> Add default ACLs for volume, bucket, keys and prefix



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1544) Support default Acls for volume, bucket, keys and prefix

2019-07-15 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1544?focusedWorklogId=277149&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-277149
 ]

ASF GitHub Bot logged work on HDDS-1544:


Author: ASF GitHub Bot
Created on: 16/Jul/19 02:47
Start Date: 16/Jul/19 02:47
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #1101: 
HDDS-1544. Support default Acls for volume, bucket, keys and prefix. …
URL: https://github.com/apache/hadoop/pull/1101#discussion_r303709421
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/KeyManagerImpl.java
 ##
 @@ -983,6 +1019,44 @@ public OmMultipartInfo 
applyInitiateMultipartUpload(OmKeyArgs keyArgs,
 }
   }
 
+  private List<OzoneAclInfo> getAclsForKey(OmKeyArgs keyArgs, 
+  OmVolumeArgs volArgs, OmBucketInfo bucketInfo) {
+List<OzoneAclInfo> acls = new ArrayList<>(keyArgs.getAcls().size());
+
+keyArgs.getAcls().stream().map(OzoneAcl::toProtobuf).
+collect(Collectors.toList());
+
+// Inherit DEFAULT acls from prefix.
+boolean prefixParentFound = false;
+if(prefixManager != null) {
+  List<OmPrefixInfo> prefixList = prefixManager.getLongestPrefixPath(
+  OZONE_URI_DELIMITER +
+  keyArgs.getVolumeName() + OZONE_URI_DELIMITER +
+  keyArgs.getBucketName() + OZONE_URI_DELIMITER +
+  keyArgs.getKeyName());
+
+  if(prefixList.size() > 0) {
+// Add all acls from direct parent to key.
+OmPrefixInfo prefixInfo = prefixList.get(prefixList.size() - 1);
+if(prefixInfo  != null) {
+  acls.addAll(OzoneUtils.getDefaultAclsProto(prefixInfo.getAcls()));
+  prefixParentFound = true;
+}
+  }
+}
+
 
 Review comment:
   whitespace:end of line
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 277149)
Time Spent: 12h 50m  (was: 12h 40m)

> Support default Acls for volume, bucket, keys and prefix
> 
>
> Key: HDDS-1544
> URL: https://issues.apache.org/jira/browse/HDDS-1544
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Xiaoyu Yao
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDDS-1544.00.patch
>
>  Time Spent: 12h 50m
>  Remaining Estimate: 0h
>
> Add default ACLs for volume, bucket, keys and prefix



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1544) Support default Acls for volume, bucket, keys and prefix

2019-07-15 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1544?focusedWorklogId=277151&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-277151
 ]

ASF GitHub Bot logged work on HDDS-1544:


Author: ASF GitHub Bot
Created on: 16/Jul/19 02:47
Start Date: 16/Jul/19 02:47
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #1101: 
HDDS-1544. Support default Acls for volume, bucket, keys and prefix. …
URL: https://github.com/apache/hadoop/pull/1101#discussion_r303709432
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/KeyManagerImpl.java
 ##
 @@ -455,8 +463,9 @@ public OpenKeySession openKey(OmKeyArgs args) throws 
IOException {
 
 FileEncryptionInfo encInfo;
 metadataManager.getLock().acquireLock(BUCKET_LOCK, volumeName, bucketName);
+OmBucketInfo bucketInfo;
 try {
-  OmBucketInfo bucketInfo = getBucketInfo(volumeName, bucketName);
+  bucketInfo = getBucketInfo(volumeName, bucketName);
 
 Review comment:
   whitespace:end of line
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 277151)
Time Spent: 13h 10m  (was: 13h)

> Support default Acls for volume, bucket, keys and prefix
> 
>
> Key: HDDS-1544
> URL: https://issues.apache.org/jira/browse/HDDS-1544
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Xiaoyu Yao
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDDS-1544.00.patch
>
>  Time Spent: 13h 10m
>  Remaining Estimate: 0h
>
> Add default ACLs for volume, bucket, keys and prefix



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1544) Support default Acls for volume, bucket, keys and prefix

2019-07-15 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1544?focusedWorklogId=277145&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-277145
 ]

ASF GitHub Bot logged work on HDDS-1544:


Author: ASF GitHub Bot
Created on: 16/Jul/19 02:47
Start Date: 16/Jul/19 02:47
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #1101: 
HDDS-1544. Support default Acls for volume, bucket, keys and prefix. …
URL: https://github.com/apache/hadoop/pull/1101#discussion_r303709403
 
 

 ##
 File path: 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/security/acl/TestOzoneNativeAuthorizer.java
 ##
 @@ -256,9 +258,10 @@ public void testCheckAccessForBucket() throws Exception {
 
   @Test
   public void testCheckAccessForKey() throws Exception {
-OzoneAcl userAcl = new OzoneAcl(USER, ugi.getUserName(), parentDirUserAcl);
+OzoneAcl userAcl = new OzoneAcl(USER, ugi.getUserName(), parentDirUserAcl, 
+ACCESS);
 OzoneAcl groupAcl = new OzoneAcl(GROUP, ugi.getGroups().size() > 0 ?
-ugi.getGroups().get(0) : "", parentDirGroupAcl);
+ugi.getGroups().get(0) : "", parentDirGroupAcl, ACCESS);
 
 Review comment:
   whitespace:end of line
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 277145)
Time Spent: 12h 10m  (was: 12h)

> Support default Acls for volume, bucket, keys and prefix
> 
>
> Key: HDDS-1544
> URL: https://issues.apache.org/jira/browse/HDDS-1544
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Xiaoyu Yao
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDDS-1544.00.patch
>
>  Time Spent: 12h 10m
>  Remaining Estimate: 0h
>
> Add default ACLs for volume, bucket, keys and prefix



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1544) Support default Acls for volume, bucket, keys and prefix

2019-07-15 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1544?focusedWorklogId=277153&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-277153
 ]

ASF GitHub Bot logged work on HDDS-1544:


Author: ASF GitHub Bot
Created on: 16/Jul/19 02:47
Start Date: 16/Jul/19 02:47
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #1101: 
HDDS-1544. Support default Acls for volume, bucket, keys and prefix. …
URL: https://github.com/apache/hadoop/pull/1101#discussion_r303709445
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/KeyManagerImpl.java
 ##
 @@ -617,10 +629,35 @@ private OmKeyInfo createKeyInfo(OmKeyArgs keyArgs,
 .setReplicationType(type)
 .setReplicationFactor(factor)
 .setFileEncryptionInfo(encInfo);
+List<OzoneAclInfo> acls = new ArrayList<>();
 if(keyArgs.getAcls() != null) {
-  builder.setAcls(keyArgs.getAcls().stream().map(a ->
+  acls.addAll(keyArgs.getAcls().stream().map(a ->
   OzoneAcl.toProtobuf(a)).collect(Collectors.toList()));
 }
+
+// Inherit DEFAULT acls from prefix.
+boolean prefixParentFound = false;
+if(prefixManager != null) {
+  List<OmPrefixInfo> prefixList = prefixManager.getLongestPrefixPath(
+  OZONE_URI_DELIMITER +
+  keyArgs.getVolumeName() + OZONE_URI_DELIMITER +
+  keyArgs.getBucketName() + OZONE_URI_DELIMITER +
+  keyArgs.getKeyName());
+
+  if(prefixList.size() > 0) {
+// Add all acls from direct parent to key.
+OmPrefixInfo prefixInfo = prefixList.get(prefixList.size() - 1);
+if(prefixInfo  != null) {
+  acls.addAll(OzoneUtils.getDefaultAclsProto(prefixInfo.getAcls()));
+  prefixParentFound = true;
+}
+  }
+}
+if(!prefixParentFound && omBucketInfo != null) {
+  acls.addAll(OzoneUtils.getDefaultAclsProto(omBucketInfo.getAcls()));
+}
+builder.setAcls(acls);
 
 Review comment:
   whitespace:end of line
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 277153)
Time Spent: 13.5h  (was: 13h 20m)

> Support default Acls for volume, bucket, keys and prefix
> 
>
> Key: HDDS-1544
> URL: https://issues.apache.org/jira/browse/HDDS-1544
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Xiaoyu Yao
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDDS-1544.00.patch
>
>  Time Spent: 13.5h
>  Remaining Estimate: 0h
>
> Add default ACLs for volume, bucket, keys and prefix



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1544) Support default Acls for volume, bucket, keys and prefix

2019-07-15 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1544?focusedWorklogId=277155&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-277155
 ]

ASF GitHub Bot logged work on HDDS-1544:


Author: ASF GitHub Bot
Created on: 16/Jul/19 02:47
Start Date: 16/Jul/19 02:47
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1101: HDDS-1544. 
Support default Acls for volume, bucket, keys and prefix. …
URL: https://github.com/apache/hadoop/pull/1101#issuecomment-511642985
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 39 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 1 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 8 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 41 | Maven dependency ordering for branch |
   | +1 | mvninstall | 492 | trunk passed |
   | +1 | compile | 263 | trunk passed |
   | +1 | checkstyle | 76 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 868 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 143 | trunk passed |
   | 0 | spotbugs | 314 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 504 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 25 | Maven dependency ordering for patch |
   | +1 | mvninstall | 456 | the patch passed |
   | +1 | compile | 367 | the patch passed |
   | +1 | cc | 367 | the patch passed |
   | +1 | javac | 367 | the patch passed |
   | -0 | checkstyle | 37 | hadoop-ozone: The patch generated 1 new + 0 
unchanged - 0 fixed = 1 total (was 0) |
   | +1 | mvnsite | 0 | the patch passed |
   | -1 | whitespace | 0 | The patch has 39 line(s) that end in whitespace. Use 
git apply --whitespace=fix <<patch_file>>. Refer 
https://git-scm.com/docs/git-apply |
   | +1 | shadedclient | 639 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 163 | the patch passed |
   | +1 | findbugs | 526 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 308 | hadoop-hdds in the patch passed. |
   | -1 | unit | 2043 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 40 | The patch does not generate ASF License warnings. |
   | | | 7238 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.TestStorageContainerManager |
   |   | hadoop.ozone.client.rpc.TestBCSID |
   |   | hadoop.ozone.client.rpc.TestOzoneAtRestEncryption |
   |   | hadoop.ozone.client.rpc.TestFailureHandlingByClient |
   |   | hadoop.ozone.client.rpc.TestBlockOutputStream |
   |   | hadoop.ozone.client.rpc.TestBlockOutputStreamWithFailures |
   |   | hadoop.ozone.web.client.TestKeys |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClientWithRatis |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestOzoneClientRetriesOnException |
   |   | hadoop.ozone.client.rpc.TestCloseContainerHandlingByClient |
   |   | hadoop.ozone.client.rpc.TestSecureOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestWatchForCommit |
   |   | hadoop.ozone.TestContainerOperations |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=18.09.7 Server=18.09.7 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1101/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1101 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle cc |
   | uname | Linux f5e6b6b990c9 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 1411513 |
   | Default Java | 1.8.0_212 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1101/1/artifact/out/diff-checkstyle-hadoop-ozone.txt
 |
   | whitespace | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1101/1/artifact/out/whitespace-eol.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1101/1/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1101/1/testReport/ |
   | Max. process+thread count | 4826 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/common hadoop-ozone/client 
hadoop-ozone/ozone-manager hadoop-ozone/objectstore-service hadoop-ozone/dist 
hadoop-ozone/integration-test U: hadoop-ozone |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1101/1/console |
   | versions | git=2.7.4 maven=3.3.9 

[jira] [Work logged] (HDDS-1544) Support default Acls for volume, bucket, keys and prefix

2019-07-15 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1544?focusedWorklogId=277142&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-277142
 ]

ASF GitHub Bot logged work on HDDS-1544:


Author: ASF GitHub Bot
Created on: 16/Jul/19 02:47
Start Date: 16/Jul/19 02:47
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #1101: 
HDDS-1544. Support default Acls for volume, bucket, keys and prefix. …
URL: https://github.com/apache/hadoop/pull/1101#discussion_r303709388
 
 

 ##
 File path: 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/client/rpc/TestOzoneRpcClientAbstract.java
 ##
 @@ -2357,28 +2467,28 @@ public void testNativeAclsForPrefix() throws Exception 
{
 ACLType userRights = aclConfig.getUserDefaultRights();
 ACLType groupRights = aclConfig.getGroupDefaultRights();
 
-listOfAcls.add(new OzoneAcl(ACLIdentityType.USER,
-ugi.getUserName(), userRights));
+listOfAcls.add(new OzoneAcl(USER,
+ugi.getUserName(), userRights, ACCESS));
 //Group ACLs of the User
 List userGroups = Arrays.asList(ugi.getGroupNames());
 userGroups.stream().forEach((group) -> listOfAcls.add(
-new OzoneAcl(ACLIdentityType.GROUP, group, groupRights)));
+new OzoneAcl(GROUP, group, groupRights, ACCESS)));
 return listOfAcls;
   }
 
   /**
* Helper function to validate ozone Acl for given object.
* @param ozObj
* */
-  private void validateOzoneAcl(OzoneObj ozObj) throws IOException {
+  private void validateOzoneAccessAcl(OzoneObj ozObj) throws IOException {
 // Get acls for volume.
 List expectedAcls = getAclList(new OzoneConfiguration());
 
 // Case:1 Add new acl permission to existing acl.
 if(expectedAcls.size()>0) {
   OzoneAcl oldAcl = expectedAcls.get(0);
   OzoneAcl newAcl = new OzoneAcl(oldAcl.getType(), oldAcl.getName(),
-  ACLType.READ_ACL);
+  ACLType.READ_ACL, ACCESS);
 
 Review comment:
   whitespace:end of line
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 277142)
Time Spent: 11h 40m  (was: 11.5h)

> Support default Acls for volume, bucket, keys and prefix
> 
>
> Key: HDDS-1544
> URL: https://issues.apache.org/jira/browse/HDDS-1544
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Xiaoyu Yao
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDDS-1544.00.patch
>
>  Time Spent: 11h 40m
>  Remaining Estimate: 0h
>
> Add default ACLs for volume, bucket, keys and prefix



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1544) Support default Acls for volume, bucket, keys and prefix

2019-07-15 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1544?focusedWorklogId=277144&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-277144
 ]

ASF GitHub Bot logged work on HDDS-1544:


Author: ASF GitHub Bot
Created on: 16/Jul/19 02:47
Start Date: 16/Jul/19 02:47
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #1101: 
HDDS-1544. Support default Acls for volume, bucket, keys and prefix. …
URL: https://github.com/apache/hadoop/pull/1101#discussion_r303709396
 
 

 ##
 File path: 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/security/acl/TestOzoneNativeAuthorizer.java
 ##
 @@ -57,6 +57,7 @@
 import java.util.stream.Collectors;
 
 import static org.apache.hadoop.hdds.HddsConfigKeys.OZONE_METADATA_DIRS;
+import static org.apache.hadoop.ozone.OzoneAcl.AclScope.ACCESS;
 
 Review comment:
   whitespace:end of line
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 277144)
Time Spent: 12h  (was: 11h 50m)

> Support default Acls for volume, bucket, keys and prefix
> 
>
> Key: HDDS-1544
> URL: https://issues.apache.org/jira/browse/HDDS-1544
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Xiaoyu Yao
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDDS-1544.00.patch
>
>  Time Spent: 12h
>  Remaining Estimate: 0h
>
> Add default ACLs for volume, bucket, keys and prefix



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1544) Support default Acls for volume, bucket, keys and prefix

2019-07-15 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1544?focusedWorklogId=277148&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-277148
 ]

ASF GitHub Bot logged work on HDDS-1544:


Author: ASF GitHub Bot
Created on: 16/Jul/19 02:47
Start Date: 16/Jul/19 02:47
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #1101: 
HDDS-1544. Support default Acls for volume, bucket, keys and prefix. …
URL: https://github.com/apache/hadoop/pull/1101#discussion_r303709417
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/KeyManagerImpl.java
 ##
 @@ -957,8 +994,7 @@ public OmMultipartInfo 
applyInitiateMultipartUpload(OmKeyArgs keyArgs,
   .setReplicationFactor(keyArgs.getFactor())
   .setOmKeyLocationInfos(Collections.singletonList(
   new OmKeyLocationInfoGroup(0, locations)))
-  .setAcls(keyArgs.getAcls().stream().map(a ->
-  OzoneAcl.toProtobuf(a)).collect(Collectors.toList()))
+  .setAcls(getAclsForKey(keyArgs, null, bucketInfo))
 
 Review comment:
   whitespace:end of line
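
  The replacement helper is not shown in this hunk; a hypothetical sketch of the contract its call site implies (name and signature inferred from getAclsForKey(keyArgs, null, bucketInfo); the real KeyManagerImpl code may differ):

{code:java}
// Hypothetical: the key keeps the client-supplied ACLs and additionally
// inherits the bucket's DEFAULT ACLs, re-scoped to ACCESS.
private List<OzoneAclInfo> getAclsForKey(OmKeyArgs keyArgs,
    OmVolumeArgs volArgs, OmBucketInfo bucketInfo) {
  List<OzoneAclInfo> acls = new ArrayList<>();
  keyArgs.getAcls().forEach(a -> acls.add(OzoneAcl.toProtobuf(a)));
  if (bucketInfo != null) {
    bucketInfo.getAcls().stream()
        .filter(a -> a.getAclScope() == OzoneAcl.AclScope.DEFAULT)
        .forEach(a -> acls.add(OzoneAcl.toProtobufWithAccessType(a)));
  }
  return acls;
}
{code}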
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 277148)
Time Spent: 12h 40m  (was: 12.5h)

> Support default Acls for volume, bucket, keys and prefix
> 
>
> Key: HDDS-1544
> URL: https://issues.apache.org/jira/browse/HDDS-1544
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Xiaoyu Yao
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDDS-1544.00.patch
>
>  Time Spent: 12h 40m
>  Remaining Estimate: 0h
>
> Add dAcls for volume, bucket, keys and prefix



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1544) Support default Acls for volume, bucket, keys and prefix

2019-07-15 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1544?focusedWorklogId=277154&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-277154
 ]

ASF GitHub Bot logged work on HDDS-1544:


Author: ASF GitHub Bot
Created on: 16/Jul/19 02:47
Start Date: 16/Jul/19 02:47
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #1101: 
HDDS-1544. Support default Acls for volume, bucket, keys and prefix. …
URL: https://github.com/apache/hadoop/pull/1101#discussion_r303709450
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OzoneManager.java
 ##
 @@ -429,18 +430,15 @@ private OzoneManager(OzoneConfiguration conf) throws 
IOException,
 } else {
   accessAuthorizer = null;
 }
-ozAdmins = conf.getTrimmedStringCollection(OzoneConfigKeys
-.OZONE_ADMINISTRATORS);
+ozAdmins = conf.getTrimmedStringCollection(OZONE_ADMINISTRATORS);
 
 Review comment:
   whitespace:end of line
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 277154)
Time Spent: 13h 40m  (was: 13.5h)

> Support default Acls for volume, bucket, keys and prefix
> 
>
> Key: HDDS-1544
> URL: https://issues.apache.org/jira/browse/HDDS-1544
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Xiaoyu Yao
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDDS-1544.00.patch
>
>  Time Spent: 13h 40m
>  Remaining Estimate: 0h
>
> Add dAcls for volume, bucket, keys and prefix



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1544) Support default Acls for volume, bucket, keys and prefix

2019-07-15 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1544?focusedWorklogId=277143&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-277143
 ]

ASF GitHub Bot logged work on HDDS-1544:


Author: ASF GitHub Bot
Created on: 16/Jul/19 02:47
Start Date: 16/Jul/19 02:47
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #1101: 
HDDS-1544. Support default Acls for volume, bucket, keys and prefix. …
URL: https://github.com/apache/hadoop/pull/1101#discussion_r303709392
 
 

 ##
 File path: 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/client/rpc/TestOzoneRpcClientAbstract.java
 ##
 @@ -2433,8 +2543,10 @@ private void validateOzoneAcl(OzoneObj ozObj) throws 
IOException {
 expectedAcls.forEach(a -> assertTrue(finalNewAcls.contains(a)));
 
 // Reset acl's.
-OzoneAcl ua = new OzoneAcl(ACLIdentityType.USER, "userx", 
ACLType.READ_ACL);
-OzoneAcl ug = new OzoneAcl(ACLIdentityType.GROUP, "userx", ACLType.ALL);
+OzoneAcl ua = new OzoneAcl(USER, "userx", 
+ACLType.READ_ACL, ACCESS);
 
 Review comment:
   whitespace:end of line
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 277143)
Time Spent: 11h 50m  (was: 11h 40m)

> Support default Acls for volume, bucket, keys and prefix
> 
>
> Key: HDDS-1544
> URL: https://issues.apache.org/jira/browse/HDDS-1544
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Xiaoyu Yao
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDDS-1544.00.patch
>
>  Time Spent: 11h 50m
>  Remaining Estimate: 0h
>
> Add dAcls for volume, bucket, keys and prefix



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1544) Support default Acls for volume, bucket, keys and prefix

2019-07-15 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1544?focusedWorklogId=277147&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-277147
 ]

ASF GitHub Bot logged work on HDDS-1544:


Author: ASF GitHub Bot
Created on: 16/Jul/19 02:47
Start Date: 16/Jul/19 02:47
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #1101: 
HDDS-1544. Support default Acls for volume, bucket, keys and prefix. …
URL: https://github.com/apache/hadoop/pull/1101#discussion_r303709413
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/BucketManagerImpl.java
 ##
 @@ -165,10 +169,15 @@ public void createBucket(OmBucketInfo bucketInfo) throws 
IOException {
 .setVersion(CryptoProtocolVersion.ENCRYPTION_ZONES)
 .setSuite(CipherSuite.convert(metadata.getCipher()));
   }
+  List<OzoneAcl> acls = new ArrayList<>();
+  acls.addAll(bucketInfo.getAcls());
+  volumeArgs.getAclMap().getDefaultAclList().forEach(
+  a -> acls.add(OzoneAcl.fromProtobufWithAccessType(a)));
 
 Review comment:
   whitespace:end of line
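
  The conversion used here is what makes inheritance work: a DEFAULT acl stored on the volume comes back as an ACCESS acl on the new bucket. In isolation (constructor and helpers as shown in the OzoneAcl hunks elsewhere in this thread; imports elided):

{code:java}
// DEFAULT acl stored on the parent volume...
OzoneAcl volumeDefault = new OzoneAcl(ACLIdentityType.USER, "remoteUser",
    ACLType.READ, OzoneAcl.AclScope.DEFAULT);
OzoneAclInfo proto = OzoneAcl.toProtobuf(volumeDefault);

// ...materializes on the child bucket with ACCESS scope.
OzoneAcl inherited = OzoneAcl.fromProtobufWithAccessType(proto);
assert inherited.getAclScope() == OzoneAcl.AclScope.ACCESS;
{code}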
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 277147)
Time Spent: 12.5h  (was: 12h 20m)

> Support default Acls for volume, bucket, keys and prefix
> 
>
> Key: HDDS-1544
> URL: https://issues.apache.org/jira/browse/HDDS-1544
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Xiaoyu Yao
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDDS-1544.00.patch
>
>  Time Spent: 12.5h
>  Remaining Estimate: 0h
>
> Add dAcls for volume, bucket, keys and prefix



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1544) Support default Acls for volume, bucket, keys and prefix

2019-07-15 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1544?focusedWorklogId=277130&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-277130
 ]

ASF GitHub Bot logged work on HDDS-1544:


Author: ASF GitHub Bot
Created on: 16/Jul/19 02:46
Start Date: 16/Jul/19 02:46
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #1101: 
HDDS-1544. Support default Acls for volume, bucket, keys and prefix. …
URL: https://github.com/apache/hadoop/pull/1101#discussion_r303709318
 
 

 ##
 File path: 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/OzoneAcl.java
 ##
 @@ -178,19 +204,55 @@ public static OzoneAclInfo toProtobuf(OzoneAcl acl) {
 OzoneAclInfo.Builder builder = OzoneAclInfo.newBuilder()
 .setName(acl.getName())
 .setType(OzoneAclType.valueOf(acl.getType().name()))
+.setAclScope(OzoneAclScope.valueOf(acl.getAclScope().name()))
 .setRights(ByteString.copyFrom(acl.getAclBitSet().toByteArray()));
 return builder.build();
   }
 
   public static OzoneAcl fromProtobuf(OzoneAclInfo protoAcl) {
 BitSet aclRights = BitSet.valueOf(protoAcl.getRights().toByteArray());
 return new OzoneAcl(ACLIdentityType.valueOf(protoAcl.getType().name()),
-protoAcl.getName(), aclRights);
+protoAcl.getName(), aclRights, 
+AclScope.valueOf(protoAcl.getAclScope().name()));
+  }
+
+  /**
+   * Helper function to convert a proto message of type {@link OzoneAclInfo}
+   * to {@link OzoneAcl} with acl scope of type ACCESS.
+   * 
+   * @param protoAcl
+   * @return OzoneAcl
+   * */
+  public static OzoneAcl fromProtobufWithAccessType(OzoneAclInfo protoAcl) {
+BitSet aclRights = BitSet.valueOf(protoAcl.getRights().toByteArray());
+return new OzoneAcl(ACLIdentityType.valueOf(protoAcl.getType().name()),
+protoAcl.getName(), aclRights, AclScope.ACCESS);
+  }
+
+  /**
+   * Helper function to convert an {@link OzoneAcl} to proto message of type
+   * {@link OzoneAclInfo} with acl scope of type ACCESS.
+   *
+   * @param acl
+   * @return OzoneAclInfo
+   * */
+  public static OzoneAclInfo toProtobufWithAccessType(OzoneAcl acl) {
+OzoneAclInfo.Builder builder = OzoneAclInfo.newBuilder()
+.setName(acl.getName())
+.setType(OzoneAclType.valueOf(acl.getType().name()))
+.setAclScope(OzoneAclScope.ACCESS)
+.setRights(ByteString.copyFrom(acl.getAclBitSet().toByteArray()));
+return builder.build();
   }
 
+  public AclScope getAclScope() {
+return aclScope;
+  }
+  
   @Override
   public String toString() {
-return type + ":" + name + ":" + ACLType.getACLString(aclBitSet);
+return type + ":" + name + ":" + ACLType.getACLString(aclBitSet) 
 
 Review comment:
   whitespace:end of line
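
  Summarizing the conversion paths in this hunk (OzoneAclInfo is the generated protobuf message from OzoneManagerProtocol.proto): the plain pair round-trips the scope, while the *WithAccessType variants pin it to ACCESS for inheritance call sites. A minimal sketch:

{code:java}
OzoneAcl original = new OzoneAcl(ACLIdentityType.GROUP, "dev",
    ACLType.READ, OzoneAcl.AclScope.DEFAULT);

// toProtobuf/fromProtobuf now carry the scope, so DEFAULT survives.
OzoneAclInfo proto = OzoneAcl.toProtobuf(original);
assert OzoneAcl.fromProtobuf(proto).getAclScope()
    == OzoneAcl.AclScope.DEFAULT;

// The WithAccessType variant deliberately forces ACCESS.
assert OzoneAcl.fromProtobufWithAccessType(proto).getAclScope()
    == OzoneAcl.AclScope.ACCESS;
{code}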
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 277130)
Time Spent: 9h 40m  (was: 9.5h)

> Support default Acls for volume, bucket, keys and prefix
> 
>
> Key: HDDS-1544
> URL: https://issues.apache.org/jira/browse/HDDS-1544
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Xiaoyu Yao
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDDS-1544.00.patch
>
>  Time Spent: 9h 40m
>  Remaining Estimate: 0h
>
> Add dAcls for volume, bucket, keys and prefix



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1544) Support default Acls for volume, bucket, keys and prefix

2019-07-15 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1544?focusedWorklogId=277124&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-277124
 ]

ASF GitHub Bot logged work on HDDS-1544:


Author: ASF GitHub Bot
Created on: 16/Jul/19 02:46
Start Date: 16/Jul/19 02:46
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #1101: 
HDDS-1544. Support default Acls for volume, bucket, keys and prefix. …
URL: https://github.com/apache/hadoop/pull/1101#discussion_r303709283
 
 

 ##
 File path: 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/OzoneAcl.java
 ##
 @@ -120,16 +129,19 @@ public OzoneAcl(ACLIdentityType type, String name, 
BitSet acls) {
 && (name.length() == 0)) {
   throw new IllegalArgumentException("User or group name is required");
 }
+aclScope = scope;
   }
 
   /**
-   * Parses an ACL string and returns the ACL object.
+   * Parses an ACL string and returns the ACL object. If acl scope is not 
+   * passed in input string then scope is set to ACCESS.
*
* @param acl - Acl String , Ex. user:anu:rw
*
* @return - Ozone ACLs
*/
-  public static OzoneAcl parseAcl(String acl) throws IllegalArgumentException {
+  public static OzoneAcl parseAcl(String acl) 
 
 Review comment:
   whitespace:end of line
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 277124)
Time Spent: 8h 40m  (was: 8.5h)

> Support default Acls for volume, bucket, keys and prefix
> 
>
> Key: HDDS-1544
> URL: https://issues.apache.org/jira/browse/HDDS-1544
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Xiaoyu Yao
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDDS-1544.00.patch
>
>  Time Spent: 8h 40m
>  Remaining Estimate: 0h
>
> Add dAcls for volume, bucket, keys and prefix



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1544) Support default Acls for volume, bucket, keys and prefix

2019-07-15 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1544?focusedWorklogId=277126&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-277126
 ]

ASF GitHub Bot logged work on HDDS-1544:


Author: ASF GitHub Bot
Created on: 16/Jul/19 02:46
Start Date: 16/Jul/19 02:46
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #1101: 
HDDS-1544. Support default Acls for volume, bucket, keys and prefix. …
URL: https://github.com/apache/hadoop/pull/1101#discussion_r303709298
 
 

 ##
 File path: 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/OzoneAcl.java
 ##
 @@ -141,13 +153,27 @@ public static OzoneAcl parseAcl(String acl) throws 
IllegalArgumentException {
 ACLIdentityType aclType = ACLIdentityType.valueOf(parts[0].toUpperCase());
 BitSet acls = new BitSet(ACLType.getNoOfAcls());
 
-for (char ch : parts[2].toCharArray()) {
+String bits = parts[2];
+
+// Default acl scope is ACCESS.
+AclScope aclScope = AclScope.ACCESS;
+
+// Check if acl string contains scope info.
+if(parts[2].matches(ACL_SCOPE_REGEX)) {
+  int indexOfOpenBracket = parts[2].indexOf("[");
+  bits = parts[2].substring(0, indexOfOpenBracket);
+  aclScope = AclScope.valueOf(parts[2].substring(indexOfOpenBracket + 1,
+  parts[2].indexOf("]")));
+}
+
 
 Review comment:
   whitespace:end of line
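
  The bracketed suffix recognized here makes the ACL string self-describing; the exact syntax is inferred from the indexOf("[")/indexOf("]") calls above:

{code:java}
// No scope suffix: the parser falls back to ACCESS.
OzoneAcl a1 = OzoneAcl.parseAcl("user:anu:rw");
assert a1.getAclScope() == OzoneAcl.AclScope.ACCESS;

// Explicit scope suffix after the rights characters.
OzoneAcl a2 = OzoneAcl.parseAcl("user:anu:rw[DEFAULT]");
assert a2.getAclScope() == OzoneAcl.AclScope.DEFAULT;
{code}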
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 277126)
Time Spent: 9h  (was: 8h 50m)

> Support default Acls for volume, bucket, keys and prefix
> 
>
> Key: HDDS-1544
> URL: https://issues.apache.org/jira/browse/HDDS-1544
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Xiaoyu Yao
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDDS-1544.00.patch
>
>  Time Spent: 9h
>  Remaining Estimate: 0h
>
> Add dAcls for volume, bucket, keys and prefix



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1544) Support default Acls for volume, bucket, keys and prefix

2019-07-15 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1544?focusedWorklogId=277133&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-277133
 ]

ASF GitHub Bot logged work on HDDS-1544:


Author: ASF GitHub Bot
Created on: 16/Jul/19 02:46
Start Date: 16/Jul/19 02:46
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #1101: 
HDDS-1544. Support default Acls for volume, bucket, keys and prefix. …
URL: https://github.com/apache/hadoop/pull/1101#discussion_r303709339
 
 

 ##
 File path: 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/helpers/OmOzoneAclMap.java
 ##
 @@ -49,53 +52,71 @@
 @SuppressWarnings("ProtocolBufferOrdinal")
 public class OmOzoneAclMap {
   // per Acl Type user:rights map
-  private ArrayList<Map<String, BitSet>> aclMaps;
+  private ArrayList<Map<String, BitSet>> accessAclMap;
+  private List<OzoneAclInfo> defaultAclList;
 
   OmOzoneAclMap() {
-aclMaps = new ArrayList<>();
+accessAclMap = new ArrayList<>();
+defaultAclList = new ArrayList<>();
 for (OzoneAclType aclType : OzoneAclType.values()) {
-  aclMaps.add(aclType.ordinal(), new HashMap<>());
+  accessAclMap.add(aclType.ordinal(), new HashMap<>());
 }
   }
 
-  private Map<String, BitSet> getMap(OzoneAclType type) {
-return aclMaps.get(type.ordinal());
+  private Map<String, BitSet> getAccessAclMap(OzoneAclType type) {
+return accessAclMap.get(type.ordinal());
   }
 
   // For a given acl type and user, get the stored acl
   private BitSet getAcl(OzoneAclType type, String user) {
-return getMap(type).get(user);
+return getAccessAclMap(type).get(user);
   }
 
  public List<OzoneAcl> getAcl() {
List<OzoneAcl> acls = new ArrayList<>();
 
+acls.addAll(getAccessAcls());
+acls.addAll(defaultAclList.stream().map(a -> 
 
 Review comment:
   whitespace:end of line
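
  With the split storage, addAcl/getAcl route by scope: ACCESS entries keep the old per-identity-type map, DEFAULT entries land in a flat protobuf list. A usage sketch (constructor visibility and the OMException thrown by addAcl elided for brevity):

{code:java}
OmOzoneAclMap aclMap = new OmOzoneAclMap();

// Goes into accessAclMap (per-ACLIdentityType name -> BitSet map).
aclMap.addAcl(new OzoneAcl(ACLIdentityType.USER, "anu",
    ACLType.READ, OzoneAcl.AclScope.ACCESS));

// Goes into defaultAclList (stored as OzoneAclInfo protobufs).
aclMap.addAcl(new OzoneAcl(ACLIdentityType.USER, "anu",
    ACLType.READ, OzoneAcl.AclScope.DEFAULT));

// getAcl() merges both, preserving each entry's scope.
List<OzoneAcl> all = aclMap.getAcl();
assert all.size() == 2;
{code}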
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 277133)
Time Spent: 10h 10m  (was: 10h)

> Support default Acls for volume, bucket, keys and prefix
> 
>
> Key: HDDS-1544
> URL: https://issues.apache.org/jira/browse/HDDS-1544
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Xiaoyu Yao
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDDS-1544.00.patch
>
>  Time Spent: 10h 10m
>  Remaining Estimate: 0h
>
> Add dAcls for volume, bucket, keys and prefix



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1544) Support default Acls for volume, bucket, keys and prefix

2019-07-15 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1544?focusedWorklogId=277127&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-277127
 ]

ASF GitHub Bot logged work on HDDS-1544:


Author: ASF GitHub Bot
Created on: 16/Jul/19 02:46
Start Date: 16/Jul/19 02:46
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #1101: 
HDDS-1544. Support default Acls for volume, bucket, keys and prefix. …
URL: https://github.com/apache/hadoop/pull/1101#discussion_r303709302
 
 

 ##
 File path: 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/OzoneAcl.java
 ##
 @@ -178,19 +204,55 @@ public static OzoneAclInfo toProtobuf(OzoneAcl acl) {
 OzoneAclInfo.Builder builder = OzoneAclInfo.newBuilder()
 .setName(acl.getName())
 .setType(OzoneAclType.valueOf(acl.getType().name()))
+.setAclScope(OzoneAclScope.valueOf(acl.getAclScope().name()))
 .setRights(ByteString.copyFrom(acl.getAclBitSet().toByteArray()));
 return builder.build();
   }
 
   public static OzoneAcl fromProtobuf(OzoneAclInfo protoAcl) {
 BitSet aclRights = BitSet.valueOf(protoAcl.getRights().toByteArray());
 return new OzoneAcl(ACLIdentityType.valueOf(protoAcl.getType().name()),
-protoAcl.getName(), aclRights);
+protoAcl.getName(), aclRights, 
 
 Review comment:
   whitespace:end of line
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 277127)
Time Spent: 9h 10m  (was: 9h)

> Support default Acls for volume, bucket, keys and prefix
> 
>
> Key: HDDS-1544
> URL: https://issues.apache.org/jira/browse/HDDS-1544
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Xiaoyu Yao
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDDS-1544.00.patch
>
>  Time Spent: 9h 10m
>  Remaining Estimate: 0h
>
> Add dAcls for volume, bucket, keys and prefix



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1544) Support default Acls for volume, bucket, keys and prefix

2019-07-15 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1544?focusedWorklogId=277129&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-277129
 ]

ASF GitHub Bot logged work on HDDS-1544:


Author: ASF GitHub Bot
Created on: 16/Jul/19 02:46
Start Date: 16/Jul/19 02:46
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #1101: 
HDDS-1544. Support default Acls for volume, bucket, keys and prefix. …
URL: https://github.com/apache/hadoop/pull/1101#discussion_r303709315
 
 

 ##
 File path: 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/OzoneAcl.java
 ##
 @@ -178,19 +204,55 @@ public static OzoneAclInfo toProtobuf(OzoneAcl acl) {
 OzoneAclInfo.Builder builder = OzoneAclInfo.newBuilder()
 .setName(acl.getName())
 .setType(OzoneAclType.valueOf(acl.getType().name()))
+.setAclScope(OzoneAclScope.valueOf(acl.getAclScope().name()))
 .setRights(ByteString.copyFrom(acl.getAclBitSet().toByteArray()));
 return builder.build();
   }
 
   public static OzoneAcl fromProtobuf(OzoneAclInfo protoAcl) {
 BitSet aclRights = BitSet.valueOf(protoAcl.getRights().toByteArray());
 return new OzoneAcl(ACLIdentityType.valueOf(protoAcl.getType().name()),
-protoAcl.getName(), aclRights);
+protoAcl.getName(), aclRights, 
+AclScope.valueOf(protoAcl.getAclScope().name()));
+  }
+
+  /**
+   * Helper function to convert a proto message of type {@link OzoneAclInfo}
+   * to {@link OzoneAcl} with acl scope of type ACCESS.
+   * 
+   * @param protoAcl
+   * @return OzoneAcl
+   * */
+  public static OzoneAcl fromProtobufWithAccessType(OzoneAclInfo protoAcl) {
+BitSet aclRights = BitSet.valueOf(protoAcl.getRights().toByteArray());
+return new OzoneAcl(ACLIdentityType.valueOf(protoAcl.getType().name()),
+protoAcl.getName(), aclRights, AclScope.ACCESS);
+  }
+
+  /**
+   * Helper function to convert an {@link OzoneAcl} to proto message of type
+   * {@link OzoneAclInfo} with acl scope of type ACCESS.
+   *
+   * @param acl
+   * @return OzoneAclInfo
+   * */
+  public static OzoneAclInfo toProtobufWithAccessType(OzoneAcl acl) {
+OzoneAclInfo.Builder builder = OzoneAclInfo.newBuilder()
+.setName(acl.getName())
+.setType(OzoneAclType.valueOf(acl.getType().name()))
+.setAclScope(OzoneAclScope.ACCESS)
+.setRights(ByteString.copyFrom(acl.getAclBitSet().toByteArray()));
+return builder.build();
   }
 
+  public AclScope getAclScope() {
+return aclScope;
+  }
+  
 
 Review comment:
   whitespace:end of line
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 277129)
Time Spent: 9.5h  (was: 9h 20m)

> Support default Acls for volume, bucket, keys and prefix
> 
>
> Key: HDDS-1544
> URL: https://issues.apache.org/jira/browse/HDDS-1544
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Xiaoyu Yao
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDDS-1544.00.patch
>
>  Time Spent: 9.5h
>  Remaining Estimate: 0h
>
> Add dAcls for volume, bucket, keys and prefix



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1544) Support default Acls for volume, bucket, keys and prefix

2019-07-15 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1544?focusedWorklogId=277140&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-277140
 ]

ASF GitHub Bot logged work on HDDS-1544:


Author: ASF GitHub Bot
Created on: 16/Jul/19 02:46
Start Date: 16/Jul/19 02:46
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #1101: 
HDDS-1544. Support default Acls for volume, bucket, keys and prefix. …
URL: https://github.com/apache/hadoop/pull/1101#discussion_r303709373
 
 

 ##
 File path: 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/client/rpc/TestOzoneRpcClientAbstract.java
 ##
 @@ -2252,15 +2266,54 @@ public void testNativeAclsForBucket() throws Exception 
{
 .setStoreType(OzoneObj.StoreType.OZONE)
 .build();
 
-validateOzoneAcl(ozObj);
+validateOzoneAccessAcl(ozObj);
+
+OzoneObj volObj = new OzoneObjInfo.Builder()
+.setVolumeName(volumeName)
+.setResType(OzoneObj.ResourceType.VOLUME)
+.setStoreType(OzoneObj.StoreType.OZONE)
+.build();
+validateDefaultAcls(volObj, ozObj, volume, null);
+  }
+
+  private void validateDefaultAcls(OzoneObj parentObj, OzoneObj childObj, 
+  OzoneVolume volume,  OzoneBucket bucket) throws Exception {
+assertTrue(store.addAcl(parentObj, defaultUserAcl));
+assertTrue(store.addAcl(parentObj, defaultGroupAcl));
+if (volume != null) {
+  volume.deleteBucket(childObj.getBucketName());
+  volume.createBucket(childObj.getBucketName());
+} else {
+  if (childObj.getResourceType().equals(OzoneObj.ResourceType.KEY)) {
+bucket.deleteKey(childObj.getKeyName());
+writeKey(childObj.getKeyName(), bucket);
+  } else {
+store.setAcl(childObj, getAclList(new OzoneConfiguration()));
+  }
+}
+List<OzoneAcl> acls = store.getAcl(parentObj);
+assertTrue("Current acls:" + StringUtils.join(",", acls) +
+" inheritedUserAcl:" + inheritedUserAcl,
+acls.contains(defaultUserAcl));
+assertTrue("Current acls:" + StringUtils.join(",", acls) +
+" inheritedUserAcl:" + inheritedUserAcl,
+acls.contains(defaultGroupAcl));
+
+acls = store.getAcl(childObj);
 
 Review comment:
   whitespace:end of line
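
  Condensed, the property this helper asserts: a DEFAULT acl on the parent surfaces as an ACCESS acl on a child created afterwards. Object names below are illustrative; store, volume and the OzoneObj handles come from the test fixture:

{code:java}
// DEFAULT acl set on the volume...
store.addAcl(volObj, new OzoneAcl(ACLIdentityType.USER, "remoteUser",
    ACLType.READ, OzoneAcl.AclScope.DEFAULT));

// ...is inherited by a bucket created afterwards, re-scoped to ACCESS.
volume.createBucket(bucketName);
List<OzoneAcl> bucketAcls = store.getAcl(bucketObj);
assertTrue(bucketAcls.contains(new OzoneAcl(ACLIdentityType.USER,
    "remoteUser", ACLType.READ, OzoneAcl.AclScope.ACCESS)));
{code}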
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 277140)
Time Spent: 11h 20m  (was: 11h 10m)

> Support default Acls for volume, bucket, keys and prefix
> 
>
> Key: HDDS-1544
> URL: https://issues.apache.org/jira/browse/HDDS-1544
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Xiaoyu Yao
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDDS-1544.00.patch
>
>  Time Spent: 11h 20m
>  Remaining Estimate: 0h
>
> Add dAcls for volume, bucket, keys and prefix



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1544) Support default Acls for volume, bucket, keys and prefix

2019-07-15 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1544?focusedWorklogId=277123&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-277123
 ]

ASF GitHub Bot logged work on HDDS-1544:


Author: ASF GitHub Bot
Created on: 16/Jul/19 02:46
Start Date: 16/Jul/19 02:46
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #1101: 
HDDS-1544. Support default Acls for volume, bucket, keys and prefix. …
URL: https://github.com/apache/hadoop/pull/1101#discussion_r303709293
 
 

 ##
 File path: 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/OzoneAcl.java
 ##
 @@ -141,13 +153,27 @@ public static OzoneAcl parseAcl(String acl) throws 
IllegalArgumentException {
 ACLIdentityType aclType = ACLIdentityType.valueOf(parts[0].toUpperCase());
 BitSet acls = new BitSet(ACLType.getNoOfAcls());
 
-for (char ch : parts[2].toCharArray()) {
+String bits = parts[2];
+
+// Default acl scope is ACCESS.
+AclScope aclScope = AclScope.ACCESS;
+
 
 Review comment:
   whitespace:end of line
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 277123)
Time Spent: 8.5h  (was: 8h 20m)

> Support default Acls for volume, bucket, keys and prefix
> 
>
> Key: HDDS-1544
> URL: https://issues.apache.org/jira/browse/HDDS-1544
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Xiaoyu Yao
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDDS-1544.00.patch
>
>  Time Spent: 8.5h
>  Remaining Estimate: 0h
>
> Add dAcls for volume, bucket, keys and prefix



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1544) Support default Acls for volume, bucket, keys and prefix

2019-07-15 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1544?focusedWorklogId=277137&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-277137
 ]

ASF GitHub Bot logged work on HDDS-1544:


Author: ASF GitHub Bot
Created on: 16/Jul/19 02:46
Start Date: 16/Jul/19 02:46
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #1101: 
HDDS-1544. Support default Acls for volume, bucket, keys and prefix. …
URL: https://github.com/apache/hadoop/pull/1101#discussion_r303709360
 
 

 ##
 File path: hadoop-ozone/common/src/main/proto/OzoneManagerProtocol.proto
 ##
 @@ -507,9 +507,15 @@ message OzoneAclInfo {
 CLIENT_IP = 5;
 }
 
+enum OzoneAclScope {
+  ACCESS = 0;
+  DEFAULT = 1;
+}
+
 
 Review comment:
   whitespace:end of line
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 277137)
Time Spent: 10h 50m  (was: 10h 40m)

> Support default Acls for volume, bucket, keys and prefix
> 
>
> Key: HDDS-1544
> URL: https://issues.apache.org/jira/browse/HDDS-1544
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Xiaoyu Yao
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDDS-1544.00.patch
>
>  Time Spent: 10h 50m
>  Remaining Estimate: 0h
>
> Add dAcls for volume, bucket, keys and prefix



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1544) Support default Acls for volume, bucket, keys and prefix

2019-07-15 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1544?focusedWorklogId=277132&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-277132
 ]

ASF GitHub Bot logged work on HDDS-1544:


Author: ASF GitHub Bot
Created on: 16/Jul/19 02:46
Start Date: 16/Jul/19 02:46
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #1101: 
HDDS-1544. Support default Acls for volume, bucket, keys and prefix. …
URL: https://github.com/apache/hadoop/pull/1101#discussion_r303709335
 
 

 ##
 File path: 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/helpers/OmOzoneAclMap.java
 ##
 @@ -116,9 +136,14 @@ public void setAcls(List<OzoneAcl> acls) throws 
OMException {
   // Add a new acl to the map
   public void removeAcl(OzoneAcl acl) throws OMException {
 Objects.requireNonNull(acl, "Acl should not be null.");
+if (acl.getAclScope().equals(OzoneAcl.AclScope.DEFAULT)) {
+  defaultAclList.remove(OzoneAcl.toProtobuf(acl));
+  return;
+}
+
 
 Review comment:
   whitespace:end of line
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 277132)
Time Spent: 10h  (was: 9h 50m)

> Support default Acls for volume, bucket, keys and prefix
> 
>
> Key: HDDS-1544
> URL: https://issues.apache.org/jira/browse/HDDS-1544
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Xiaoyu Yao
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDDS-1544.00.patch
>
>  Time Spent: 10h
>  Remaining Estimate: 0h
>
> Add dAcls for volume, bucket, keys and prefix



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1544) Support default Acls for volume, bucket, keys and prefix

2019-07-15 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1544?focusedWorklogId=277134&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-277134
 ]

ASF GitHub Bot logged work on HDDS-1544:


Author: ASF GitHub Bot
Created on: 16/Jul/19 02:46
Start Date: 16/Jul/19 02:46
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #1101: 
HDDS-1544. Support default Acls for volume, bucket, keys and prefix. …
URL: https://github.com/apache/hadoop/pull/1101#discussion_r303709343
 
 

 ##
 File path: 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/helpers/OmOzoneAclMap.java
 ##
 @@ -49,53 +52,71 @@
 @SuppressWarnings("ProtocolBufferOrdinal")
 public class OmOzoneAclMap {
   // per Acl Type user:rights map
-  private ArrayList<Map<String, BitSet>> aclMaps;
+  private ArrayList<Map<String, BitSet>> accessAclMap;
+  private List<OzoneAclInfo> defaultAclList;
 
   OmOzoneAclMap() {
-aclMaps = new ArrayList<>();
+accessAclMap = new ArrayList<>();
+defaultAclList = new ArrayList<>();
 for (OzoneAclType aclType : OzoneAclType.values()) {
-  aclMaps.add(aclType.ordinal(), new HashMap<>());
+  accessAclMap.add(aclType.ordinal(), new HashMap<>());
 }
   }
 
-  private Map<String, BitSet> getMap(OzoneAclType type) {
-return aclMaps.get(type.ordinal());
+  private Map<String, BitSet> getAccessAclMap(OzoneAclType type) {
+return accessAclMap.get(type.ordinal());
   }
 
   // For a given acl type and user, get the stored acl
   private BitSet getAcl(OzoneAclType type, String user) {
-return getMap(type).get(user);
+return getAccessAclMap(type).get(user);
   }
 
  public List<OzoneAcl> getAcl() {
List<OzoneAcl> acls = new ArrayList<>();
 
+acls.addAll(getAccessAcls());
+acls.addAll(defaultAclList.stream().map(a -> 
+OzoneAcl.fromProtobuf(a)).collect(Collectors.toList()));
+return acls;
+  }
+
+  private Collection<OzoneAcl> getAccessAcls() {
+List<OzoneAcl> acls = new ArrayList<>();
 for (OzoneAclType type : OzoneAclType.values()) {
-  aclMaps.get(type.ordinal()).entrySet().stream().
+  accessAclMap.get(type.ordinal()).entrySet().stream().
   forEach(entry -> acls.add(new OzoneAcl(ACLIdentityType.
-  valueOf(type.name()), entry.getKey(), entry.getValue())));
+  valueOf(type.name()), entry.getKey(), entry.getValue(),
+  OzoneAcl.AclScope.ACCESS)));
 }
+
 
 Review comment:
   whitespace:end of line
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 277134)
Time Spent: 10h 20m  (was: 10h 10m)

> Support default Acls for volume, bucket, keys and prefix
> 
>
> Key: HDDS-1544
> URL: https://issues.apache.org/jira/browse/HDDS-1544
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Xiaoyu Yao
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDDS-1544.00.patch
>
>  Time Spent: 10h 20m
>  Remaining Estimate: 0h
>
> Add dAcls for volume, bucket, keys and prefix



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1544) Support default Acls for volume, bucket, keys and prefix

2019-07-15 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1544?focusedWorklogId=277131&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-277131
 ]

ASF GitHub Bot logged work on HDDS-1544:


Author: ASF GitHub Bot
Created on: 16/Jul/19 02:46
Start Date: 16/Jul/19 02:46
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #1101: 
HDDS-1544. Support default Acls for volume, bucket, keys and prefix. …
URL: https://github.com/apache/hadoop/pull/1101#discussion_r303709327
 
 

 ##
 File path: 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/helpers/OmOzoneAclMap.java
 ##
 @@ -49,53 +52,71 @@
 @SuppressWarnings("ProtocolBufferOrdinal")
 public class OmOzoneAclMap {
   // per Acl Type user:rights map
-  private ArrayList<Map<String, BitSet>> aclMaps;
+  private ArrayList<Map<String, BitSet>> accessAclMap;
+  private List<OzoneAclInfo> defaultAclList;
 
   OmOzoneAclMap() {
-aclMaps = new ArrayList<>();
+accessAclMap = new ArrayList<>();
+defaultAclList = new ArrayList<>();
 for (OzoneAclType aclType : OzoneAclType.values()) {
-  aclMaps.add(aclType.ordinal(), new HashMap<>());
+  accessAclMap.add(aclType.ordinal(), new HashMap<>());
 }
   }
 
-  private Map<String, BitSet> getMap(OzoneAclType type) {
-return aclMaps.get(type.ordinal());
+  private Map<String, BitSet> getAccessAclMap(OzoneAclType type) {
+return accessAclMap.get(type.ordinal());
   }
 
   // For a given acl type and user, get the stored acl
   private BitSet getAcl(OzoneAclType type, String user) {
-return getMap(type).get(user);
+return getAccessAclMap(type).get(user);
   }
 
  public List<OzoneAcl> getAcl() {
List<OzoneAcl> acls = new ArrayList<>();
 
+acls.addAll(getAccessAcls());
+acls.addAll(defaultAclList.stream().map(a -> 
+OzoneAcl.fromProtobuf(a)).collect(Collectors.toList()));
+return acls;
+  }
+
+  private Collection<OzoneAcl> getAccessAcls() {
+List<OzoneAcl> acls = new ArrayList<>();
 for (OzoneAclType type : OzoneAclType.values()) {
-  aclMaps.get(type.ordinal()).entrySet().stream().
+  accessAclMap.get(type.ordinal()).entrySet().stream().
   forEach(entry -> acls.add(new OzoneAcl(ACLIdentityType.
-  valueOf(type.name()), entry.getKey(), entry.getValue())));
+  valueOf(type.name()), entry.getKey(), entry.getValue(),
+  OzoneAcl.AclScope.ACCESS)));
 }
+
 return acls;
   }
 
   // Add a new acl to the map
   public void addAcl(OzoneAcl acl) throws OMException {
 Objects.requireNonNull(acl, "Acl should not be null.");
+if (acl.getAclScope().equals(OzoneAcl.AclScope.DEFAULT)) {
+  defaultAclList.add(OzoneAcl.toProtobuf(acl));
+  return;
+}
+
 
 Review comment:
   whitespace:end of line
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 277131)
Time Spent: 9h 50m  (was: 9h 40m)

> Support default Acls for volume, bucket, keys and prefix
> 
>
> Key: HDDS-1544
> URL: https://issues.apache.org/jira/browse/HDDS-1544
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Xiaoyu Yao
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDDS-1544.00.patch
>
>  Time Spent: 9h 50m
>  Remaining Estimate: 0h
>
> Add dAcls for volume, bucket, keys and prefix



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1544) Support default Acls for volume, bucket, keys and prefix

2019-07-15 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1544?focusedWorklogId=277121&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-277121
 ]

ASF GitHub Bot logged work on HDDS-1544:


Author: ASF GitHub Bot
Created on: 16/Jul/19 02:46
Start Date: 16/Jul/19 02:46
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #1101: 
HDDS-1544. Support default Acls for volume, bucket, keys and prefix. …
URL: https://github.com/apache/hadoop/pull/1101#discussion_r303709275
 
 

 ##
 File path: 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/OzoneAcl.java
 ##
 @@ -83,16 +89,19 @@ public OzoneAcl(ACLIdentityType type, String name, ACLType 
acl) {
 && (name.length() == 0)) {
   throw new IllegalArgumentException("User or group name is required");
 }
+aclScope = scope;
   }
 
   /**
* Constructor for OzoneAcl.
*
-   * @param type - Type
-   * @param name - Name of user
-   * @param acls - Rights
+   * @param type   - Type
+   * @param name   - Name of user
+   * @param acls   - Rights
+   * @param scope  - AclScope
*/
-  public OzoneAcl(ACLIdentityType type, String name, BitSet acls) {
+  public OzoneAcl(ACLIdentityType type, String name, BitSet acls, 
 
 Review comment:
   whitespace:end of line
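
  For the BitSet form of the constructor: each bit position is an ACLType ordinal (cf. the new BitSet(ACLType.getNoOfAcls()) call in parseAcl), so composed rights can be passed directly. A small sketch:

{code:java}
BitSet rights = new BitSet(ACLType.getNoOfAcls());
rights.set(ACLType.READ.ordinal());
rights.set(ACLType.WRITE.ordinal());

OzoneAcl acl = new OzoneAcl(ACLIdentityType.USER, "anu", rights,
    OzoneAcl.AclScope.ACCESS);
{code}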
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 277121)
Time Spent: 8h 10m  (was: 8h)

> Support default Acls for volume, bucket, keys and prefix
> 
>
> Key: HDDS-1544
> URL: https://issues.apache.org/jira/browse/HDDS-1544
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Xiaoyu Yao
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDDS-1544.00.patch
>
>  Time Spent: 8h 10m
>  Remaining Estimate: 0h
>
> Add dAcls for volume, bucket, keys and prefix



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1544) Support default Acls for volume, bucket, keys and prefix

2019-07-15 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1544?focusedWorklogId=277135&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-277135
 ]

ASF GitHub Bot logged work on HDDS-1544:


Author: ASF GitHub Bot
Created on: 16/Jul/19 02:46
Start Date: 16/Jul/19 02:46
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #1101: 
HDDS-1544. Support default Acls for volume, bucket, keys and prefix. …
URL: https://github.com/apache/hadoop/pull/1101#discussion_r303709348
 
 

 ##
 File path: 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/web/utils/OzoneUtils.java
 ##
 @@ -365,4 +369,30 @@ public static boolean checkIfAclBitIsSet(ACLType acl, 
BitSet bitset) {
 || bitset.get(ALL.ordinal()))
 && !bitset.get(NONE.ordinal()));
   }
+
+  /**
+   * Helper function to find and return all DEFAULT acls in input list with
+   * scope changed to ACCESS.
+   * @param acls
+   * 
 
 Review comment:
   whitespace:end of line
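
  The helper body is truncated in this hunk; a hypothetical sketch of what the javadoc describes (method name and exact signature are assumptions, not taken from the patch):

{code:java}
// Hypothetical: return the DEFAULT acls from the input, re-scoped to
// ACCESS; the input list itself is left untouched.
public static List<OzoneAcl> getDefaultAclsAsAccess(List<OzoneAcl> acls) {
  return acls.stream()
      .filter(a -> a.getAclScope() == OzoneAcl.AclScope.DEFAULT)
      .map(a -> new OzoneAcl(a.getType(), a.getName(),
          a.getAclBitSet(), OzoneAcl.AclScope.ACCESS))
      .collect(Collectors.toList());
}
{code}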
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 277135)
Time Spent: 10.5h  (was: 10h 20m)

> Support default Acls for volume, bucket, keys and prefix
> 
>
> Key: HDDS-1544
> URL: https://issues.apache.org/jira/browse/HDDS-1544
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Xiaoyu Yao
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDDS-1544.00.patch
>
>  Time Spent: 10.5h
>  Remaining Estimate: 0h
>
> Add dAcls for volume, bucket, keys and prefix



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1544) Support default Acls for volume, bucket, keys and prefix

2019-07-15 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1544?focusedWorklogId=277125&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-277125
 ]

ASF GitHub Bot logged work on HDDS-1544:


Author: ASF GitHub Bot
Created on: 16/Jul/19 02:46
Start Date: 16/Jul/19 02:46
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #1101: 
HDDS-1544. Support default Acls for volume, bucket, keys and prefix. …
URL: https://github.com/apache/hadoop/pull/1101#discussion_r303709280
 
 

 ##
 File path: 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/OzoneAcl.java
 ##
 @@ -120,16 +129,19 @@ public OzoneAcl(ACLIdentityType type, String name, 
BitSet acls) {
 && (name.length() == 0)) {
   throw new IllegalArgumentException("User or group name is required");
 }
+aclScope = scope;
   }
 
   /**
-   * Parses an ACL string and returns the ACL object.
+   * Parses an ACL string and returns the ACL object. If acl scope is not 
 
 Review comment:
   whitespace:end of line
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 277125)
Time Spent: 8h 50m  (was: 8h 40m)

> Support default Acls for volume, bucket, keys and prefix
> 
>
> Key: HDDS-1544
> URL: https://issues.apache.org/jira/browse/HDDS-1544
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Xiaoyu Yao
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDDS-1544.00.patch
>
>  Time Spent: 8h 50m
>  Remaining Estimate: 0h
>
> Add dAcls for volume, bucket, keys and prefix



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1544) Support default Acls for volume, bucket, keys and prefix

2019-07-15 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1544?focusedWorklogId=277141&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-277141
 ]

ASF GitHub Bot logged work on HDDS-1544:


Author: ASF GitHub Bot
Created on: 16/Jul/19 02:46
Start Date: 16/Jul/19 02:46
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #1101: 
HDDS-1544. Support default Acls for volume, bucket, keys and prefix. …
URL: https://github.com/apache/hadoop/pull/1101#discussion_r303709378
 
 

 ##
 File path: 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/client/rpc/TestOzoneRpcClientAbstract.java
 ##
 @@ -2279,7 +2332,42 @@ public void testNativeAclsForKey() throws Exception {
 .setStoreType(OzoneObj.StoreType.OZONE)
 .build();
 
-validateOzoneAcl(ozObj);
+// Validates access acls.
+validateOzoneAccessAcl(ozObj);
 
 Review comment:
   whitespace:end of line
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 277141)
Time Spent: 11.5h  (was: 11h 20m)

> Support default Acls for volume, bucket, keys and prefix
> 
>
> Key: HDDS-1544
> URL: https://issues.apache.org/jira/browse/HDDS-1544
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Xiaoyu Yao
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDDS-1544.00.patch
>
>  Time Spent: 11.5h
>  Remaining Estimate: 0h
>
> Add dAcls for volume, bucket, keys and prefix



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1544) Support default Acls for volume, bucket, keys and prefix

2019-07-15 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1544?focusedWorklogId=277139&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-277139
 ]

ASF GitHub Bot logged work on HDDS-1544:


Author: ASF GitHub Bot
Created on: 16/Jul/19 02:46
Start Date: 16/Jul/19 02:46
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #1101: 
HDDS-1544. Support default Acls for volume, bucket, keys and prefix. …
URL: https://github.com/apache/hadoop/pull/1101#discussion_r303709367
 
 

 ##
 File path: 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/client/rpc/TestOzoneRpcClientAbstract.java
 ##
 @@ -2252,15 +2266,54 @@ public void testNativeAclsForBucket() throws Exception 
{
 .setStoreType(OzoneObj.StoreType.OZONE)
 .build();
 
-validateOzoneAcl(ozObj);
+validateOzoneAccessAcl(ozObj);
+
+OzoneObj volObj = new OzoneObjInfo.Builder()
+.setVolumeName(volumeName)
+.setResType(OzoneObj.ResourceType.VOLUME)
+.setStoreType(OzoneObj.StoreType.OZONE)
+.build();
+validateDefaultAcls(volObj, ozObj, volume, null);
+  }
+
 
 Review comment:
   whitespace:end of line
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 277139)
Time Spent: 11h 10m  (was: 11h)

> Support default Acls for volume, bucket, keys and prefix
> 
>
> Key: HDDS-1544
> URL: https://issues.apache.org/jira/browse/HDDS-1544
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Xiaoyu Yao
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDDS-1544.00.patch
>
>  Time Spent: 11h 10m
>  Remaining Estimate: 0h
>
> Add dAcls for volume, bucket, keys and prefix



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1544) Support default Acls for volume, bucket, keys and prefix

2019-07-15 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1544?focusedWorklogId=277128&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-277128
 ]

ASF GitHub Bot logged work on HDDS-1544:


Author: ASF GitHub Bot
Created on: 16/Jul/19 02:46
Start Date: 16/Jul/19 02:46
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #1101: 
HDDS-1544. Support default Acls for volume, bucket, keys and prefix. …
URL: https://github.com/apache/hadoop/pull/1101#discussion_r303709308
 
 

 ##
 File path: 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/OzoneAcl.java
 ##
 @@ -178,19 +204,55 @@ public static OzoneAclInfo toProtobuf(OzoneAcl acl) {
 OzoneAclInfo.Builder builder = OzoneAclInfo.newBuilder()
 .setName(acl.getName())
 .setType(OzoneAclType.valueOf(acl.getType().name()))
+.setAclScope(OzoneAclScope.valueOf(acl.getAclScope().name()))
 .setRights(ByteString.copyFrom(acl.getAclBitSet().toByteArray()));
 return builder.build();
   }
 
   public static OzoneAcl fromProtobuf(OzoneAclInfo protoAcl) {
 BitSet aclRights = BitSet.valueOf(protoAcl.getRights().toByteArray());
 return new OzoneAcl(ACLIdentityType.valueOf(protoAcl.getType().name()),
-protoAcl.getName(), aclRights);
+protoAcl.getName(), aclRights, 
+AclScope.valueOf(protoAcl.getAclScope().name()));
+  }
+
+  /**
+   * Helper function to convert a proto message of type {@link OzoneAclInfo}
+   * to {@link OzoneAcl} with acl scope of type ACCESS.
+   * 
 
 Review comment:
   whitespace:end of line
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 277128)
Time Spent: 9h 20m  (was: 9h 10m)

> Support default Acls for volume, bucket, keys and prefix
> 
>
> Key: HDDS-1544
> URL: https://issues.apache.org/jira/browse/HDDS-1544
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Xiaoyu Yao
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDDS-1544.00.patch
>
>  Time Spent: 9h 20m
>  Remaining Estimate: 0h
>
> Add dAcls for volume, bucket, keys and prefix



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1544) Support default Acls for volume, bucket, keys and prefix

2019-07-15 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1544?focusedWorklogId=277122&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-277122
 ]

ASF GitHub Bot logged work on HDDS-1544:


Author: ASF GitHub Bot
Created on: 16/Jul/19 02:46
Start Date: 16/Jul/19 02:46
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #1101: 
HDDS-1544. Support default Acls for volume, bucket, keys and prefix. …
URL: https://github.com/apache/hadoop/pull/1101#discussion_r303709290
 
 

 ##
 File path: 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/OzoneAcl.java
 ##
 @@ -141,13 +153,27 @@ public static OzoneAcl parseAcl(String acl) throws 
IllegalArgumentException {
 ACLIdentityType aclType = ACLIdentityType.valueOf(parts[0].toUpperCase());
 BitSet acls = new BitSet(ACLType.getNoOfAcls());
 
-for (char ch : parts[2].toCharArray()) {
+String bits = parts[2];
+
 
 Review comment:
   whitespace:end of line
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 277122)
Time Spent: 8h 20m  (was: 8h 10m)

> Support default Acls for volume, bucket, keys and prefix
> 
>
> Key: HDDS-1544
> URL: https://issues.apache.org/jira/browse/HDDS-1544
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Xiaoyu Yao
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDDS-1544.00.patch
>
>  Time Spent: 8h 20m
>  Remaining Estimate: 0h
>
> Add dAcls for volume, bucket, keys and prefix



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1544) Support default Acls for volume, bucket, keys and prefix

2019-07-15 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1544?focusedWorklogId=277138&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-277138
 ]

ASF GitHub Bot logged work on HDDS-1544:


Author: ASF GitHub Bot
Created on: 16/Jul/19 02:46
Start Date: 16/Jul/19 02:46
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #1101: 
HDDS-1544. Support default Acls for volume, bucket, keys and prefix. …
URL: https://github.com/apache/hadoop/pull/1101#discussion_r303709361
 
 

 ##
 File path: 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/client/rpc/TestOzoneRpcClientAbstract.java
 ##
 @@ -137,6 +142,15 @@
   private static OzoneManager ozoneManager;
   private static StorageContainerLocationProtocolClientSideTranslatorPB
   storageContainerLocationClient;
+  private static String remoteUserName = "remoteUser";
+  private static OzoneAcl defaultUserAcl = new OzoneAcl(USER, remoteUserName,
+  READ, DEFAULT);
+  private static OzoneAcl defaultGroupAcl = new OzoneAcl(GROUP, remoteUserName,
+  READ, DEFAULT);
+  private static OzoneAcl inheritedUserAcl = new OzoneAcl(USER, remoteUserName,
+  READ, ACCESS);
 
 Review comment:
   whitespace:end of line
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 277138)
Time Spent: 11h  (was: 10h 50m)

> Support default Acls for volume, bucket, keys and prefix
> 
>
> Key: HDDS-1544
> URL: https://issues.apache.org/jira/browse/HDDS-1544
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Xiaoyu Yao
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDDS-1544.00.patch
>
>  Time Spent: 11h
>  Remaining Estimate: 0h
>
> Add dAcls for volume, bucket, keys and prefix



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1544) Support default Acls for volume, bucket, keys and prefix

2019-07-15 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1544?focusedWorklogId=277136&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-277136
 ]

ASF GitHub Bot logged work on HDDS-1544:


Author: ASF GitHub Bot
Created on: 16/Jul/19 02:46
Start Date: 16/Jul/19 02:46
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #1101: 
HDDS-1544. Support default Acls for volume, bucket, keys and prefix. …
URL: https://github.com/apache/hadoop/pull/1101#discussion_r303709353
 
 

 ##
 File path: 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/web/utils/OzoneUtils.java
 ##
 @@ -365,4 +369,30 @@ public static boolean checkIfAclBitIsSet(ACLType acl, 
BitSet bitset) {
 || bitset.get(ALL.ordinal()))
 && !bitset.get(NONE.ordinal()));
   }
+
+  /**
+   * Helper function to find and return all DEFAULT acls in input list with
+   * scope changed to ACCESS.
+   * @param acls
+   * 
+   * @return list of default Acls.
+   * */
+  public static Collection<OzoneAclInfo> getDefaultAclsProto(
+  List<OzoneAcl> acls) {
+return acls.stream().filter(a -> a.getAclScope() == DEFAULT)
+.map(OzoneAcl::toProtobufWithAccessType).collect(Collectors.toList());
+  }
+
+  /**
+   * Helper function to find and return all DEFAULT acls in input list with
+   * scope changed to ACCESS.
+   * @param acls
+   *
+   * @return list of default Acls.
+   * */
+  public static Collection<OzoneAcl> getDefaultAcls(List<OzoneAcl> acls) {
+return acls.stream().filter(a -> a.getAclScope() == DEFAULT)
+.collect(Collectors.toList());
+  }
+  
 
 Review comment:
   whitespace:end of line
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 277136)
Time Spent: 10h 40m  (was: 10.5h)

> Support default Acls for volume, bucket, keys and prefix
> 
>
> Key: HDDS-1544
> URL: https://issues.apache.org/jira/browse/HDDS-1544
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Xiaoyu Yao
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDDS-1544.00.patch
>
>  Time Spent: 10h 40m
>  Remaining Estimate: 0h
>
> Add dAcls for volume, bucket, keys and prefix



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1787) NPE thrown while trying to find DN closest to client

2019-07-15 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1787?focusedWorklogId=277119&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-277119
 ]

ASF GitHub Bot logged work on HDDS-1787:


Author: ASF GitHub Bot
Created on: 16/Jul/19 02:33
Start Date: 16/Jul/19 02:33
Worklog Time Spent: 10m 
  Work Description: ChenSammi commented on pull request #1094: HDDS-1787. 
NPE thrown while trying to find DN closest to client.
URL: https://github.com/apache/hadoop/pull/1094#discussion_r303705636
 
 

 ##
 File path: 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/server/SCMBlockProtocolServer.java
 ##
 @@ -290,7 +290,12 @@ public ScmInfo getScmInfo() throws IOException {
   NodeManager nodeManager = scm.getScmNodeManager();
   Node client = nodeManager.getNode(clientMachine);
  List<Node> nodeList = new ArrayList();
-  nodes.stream().forEach(path -> nodeList.add(nodeManager.getNode(path)));
+  nodes.stream().forEach(path -> {
+DatanodeDetails node = nodeManager.getNode(path);
+if (node != null) {
 
 Review comment:
   nodeManager.getNode will return null when it can't find the node in the 
network topology or when the node found is not a leaf node.  The first case is 
usually because the network topology is not well configured (such as using the 
hostname as the network name while getNode is queried with an IP address). The 
second case should not normally happen; if it does, it indicates a bug.  I 
created a unit test case which provides illegal inputs to reproduce this case.
   
   The WARN logs for all these cases are emitted in the nodeManager.getNode 
function.
   
   if (node != null) {
 if (node instanceof InnerNode) {
   LOG.warn("Get node for {} return {}, it's an inner node, " +
   "not a datanode", address, node.getNetworkFullPath());
 } else {
   LOG.debug("Get node for {} return {}", address,
   node.getNetworkFullPath());
   return (DatanodeDetails)node;
 }
   } else {
 LOG.warn("Cannot find node for {}", address);
   }
 return null;
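   
   For context, a sketch of the complete null-guarded loop the patch
   introduces (the diff above is truncated; the continuation is assumed from
   the snippet):
   
   nodes.stream().forEach(path -> {
     DatanodeDetails node = nodeManager.getNode(path);
     if (node != null) {
       // Skip paths that cannot be resolved to a datanode, instead of
       // adding null, which is what caused the NPE in sortDatanodes.
       nodeList.add(node);
     }
   });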
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 277119)
Time Spent: 1h  (was: 50m)

> NPE thrown while trying to find DN closest to client
> 
>
> Key: HDDS-1787
> URL: https://issues.apache.org/jira/browse/HDDS-1787
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Affects Versions: 0.5.0
>Reporter: Siddharth Wagle
>Assignee: Sammi Chen
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> cc: [~xyao] This seems related to the client side topology changes, not sure 
> if some other Jira is already addressing this.
> {code}
> 2019-07-10 16:45:53,176 WARN  ipc.Server (Server.java:logException(2724)) - 
> IPC Server handler 14 on 35066, call Call#127037 Retry#0 
> org.apache.hadoop.hdds.scm.protocol.ScmBlockLocationProtocol.send from 17
> 2.31.116.73:52540
> java.lang.NullPointerException
> at 
> org.apache.hadoop.ozone.protocolPB.ScmBlockLocationProtocolServerSideTranslatorPB.lambda$sortDatanodes$0(ScmBlockLocationProtocolServerSideTranslatorPB.java:215)
> at 
> java.util.ArrayList$ArrayListSpliterator.forEachRemaining(ArrayList.java:1382)
> at 
> java.util.stream.ReferencePipeline$Head.forEach(ReferencePipeline.java:580)
> at 
> org.apache.hadoop.ozone.protocolPB.ScmBlockLocationProtocolServerSideTranslatorPB.sortDatanodes(ScmBlockLocationProtocolServerSideTranslatorPB.java:215)
> at 
> org.apache.hadoop.ozone.protocolPB.ScmBlockLocationProtocolServerSideTranslatorPB.send(ScmBlockLocationProtocolServerSideTranslatorPB.java:124)
> at 
> org.apache.hadoop.hdds.protocol.proto.ScmBlockLocationProtocolProtos$ScmBlockLocationProtocolService$2.callBlockingMethod(ScmBlockLocationProtocolProtos.java:13157)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:524)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1025)
> at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:876)
> at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:822)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> 

[jira] [Work logged] (HDDS-1787) NPE thrown while trying to find DN closest to client

2019-07-15 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1787?focusedWorklogId=277115&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-277115
 ]

ASF GitHub Bot logged work on HDDS-1787:


Author: ASF GitHub Bot
Created on: 16/Jul/19 02:24
Start Date: 16/Jul/19 02:24
Worklog Time Spent: 10m 
  Work Description: ChenSammi commented on pull request #1094: HDDS-1787. 
NPE thrown while trying to find DN closest to client.
URL: https://github.com/apache/hadoop/pull/1094#discussion_r303705636
 
 

 ##
 File path: 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/server/SCMBlockProtocolServer.java
 ##
 @@ -290,7 +290,12 @@ public ScmInfo getScmInfo() throws IOException {
   NodeManager nodeManager = scm.getScmNodeManager();
   Node client = nodeManager.getNode(clientMachine);
  List<Node> nodeList = new ArrayList();
-  nodes.stream().forEach(path -> nodeList.add(nodeManager.getNode(path)));
+  nodes.stream().forEach(path -> {
+DatanodeDetails node = nodeManager.getNode(path);
+if (node != null) {
 
 Review comment:
   nodeManager.getNode will return null when it can't find the node in the 
network topology or when the node found is not a leaf node.  The WARN logs for 
all these cases are emitted in the nodeManager.getNode function.
   
   if (node != null) {
 if (node instanceof InnerNode) {
   LOG.warn("Get node for {} return {}, it's an inner node, " +
   "not a datanode", address, node.getNetworkFullPath());
 } else {
   LOG.debug("Get node for {} return {}", address,
   node.getNetworkFullPath());
   return (DatanodeDetails)node;
 }
   } else {
 LOG.warn("Cannot find node for {}", address);
   }
 return null;
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 277115)
Time Spent: 50m  (was: 40m)

> NPE thrown while trying to find DN closest to client
> 
>
> Key: HDDS-1787
> URL: https://issues.apache.org/jira/browse/HDDS-1787
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Affects Versions: 0.5.0
>Reporter: Siddharth Wagle
>Assignee: Sammi Chen
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> cc: [~xyao] This seems related to the client side topology changes, not sure 
> if some other Jira is already addressing this.
> {code}
> 2019-07-10 16:45:53,176 WARN  ipc.Server (Server.java:logException(2724)) - 
> IPC Server handler 14 on 35066, call Call#127037 Retry#0 
> org.apache.hadoop.hdds.scm.protocol.ScmBlockLocationProtocol.send from 17
> 2.31.116.73:52540
> java.lang.NullPointerException
> at 
> org.apache.hadoop.ozone.protocolPB.ScmBlockLocationProtocolServerSideTranslatorPB.lambda$sortDatanodes$0(ScmBlockLocationProtocolServerSideTranslatorPB.java:215)
> at 
> java.util.ArrayList$ArrayListSpliterator.forEachRemaining(ArrayList.java:1382)
> at 
> java.util.stream.ReferencePipeline$Head.forEach(ReferencePipeline.java:580)
> at 
> org.apache.hadoop.ozone.protocolPB.ScmBlockLocationProtocolServerSideTranslatorPB.sortDatanodes(ScmBlockLocationProtocolServerSideTranslatorPB.java:215)
> at 
> org.apache.hadoop.ozone.protocolPB.ScmBlockLocationProtocolServerSideTranslatorPB.send(ScmBlockLocationProtocolServerSideTranslatorPB.java:124)
> at 
> org.apache.hadoop.hdds.protocol.proto.ScmBlockLocationProtocolProtos$ScmBlockLocationProtocolService$2.callBlockingMethod(ScmBlockLocationProtocolProtos.java:13157)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:524)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1025)
> at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:876)
> at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:822)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2682)
> 2019-07-10 16:45:53,176 WARN  om.KeyManagerImpl 
> (KeyManagerImpl.java:lambda$sortDatanodeInPipeline$7(2129)) - Unable to sort 
> datanodes based on distance to client, volume=xqoyzocpse, 

[jira] [Work logged] (HDDS-1793) Acceptance test of ozone-topology cluster is failing

2019-07-15 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1793?focusedWorklogId=277112&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-277112
 ]

ASF GitHub Bot logged work on HDDS-1793:


Author: ASF GitHub Bot
Created on: 16/Jul/19 02:23
Start Date: 16/Jul/19 02:23
Worklog Time Spent: 10m 
  Work Description: adoroszlai commented on pull request #1096: HDDS-1793. 
Acceptance test of ozone-topology cluster is failing
URL: https://github.com/apache/hadoop/pull/1096#discussion_r303705545
 
 

 ##
 File path: hadoop-ozone/dist/src/main/compose/testlib.sh
 ##
 @@ -28,7 +28,7 @@ mkdir -p "$RESULT_DIR"
 #Should be writeable from the docker containers where user is different.
 chmod ogu+w "$RESULT_DIR"
 
-## @description wait until 3 datanodes are up (or 30 seconds)
+## @description wait until 3 or more datanodes are up (or 30 seconds)
 ## @param the docker-compose file
 wait_for_datanodes(){
 
 Review comment:
   Thanks for the suggestion.  I have replaced the global variable with a 
function parameter in a new commit.  This also gets rid of the shellcheck 
warning.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 277112)
Time Spent: 1h  (was: 50m)

> Acceptance test of ozone-topology cluster is failing
> 
>
> Key: HDDS-1793
> URL: https://issues.apache.org/jira/browse/HDDS-1793
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Elek, Marton
>Assignee: Doroszlai, Attila
>Priority: Blocker
>  Labels: pull-request-available
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> Since HDDS-1586 the smoketests of the ozone-topology compose file have been broken:
> {code:java}
> Output:  
> /tmp/smoketest/ozone-topology/result/robot-ozone-topology-ozone-topology-basic-scm.xml
> must specify at least one container source
> Stopping datanode_2 ... 
> Stopping datanode_3 ... 
> Stopping datanode_4 ... 
> Stopping scm... 
> Stopping om ... 
> Stopping datanode_1 ... 
> 
> Stopping datanode_2 ... done
> 
> Stopping datanode_4 ... done
> 
> Stopping datanode_1 ... done
> 
> Stopping datanode_3 ... done
> 
> Stopping scm... done
> 
> Stopping om ... done
> Removing datanode_2 ... 
> Removing datanode_3 ... 
> Removing datanode_4 ... 
> Removing scm... 
> Removing om ... 
> Removing datanode_1 ... 
> 
> Removing datanode_1 ... done
> 
> Removing om ... done
> 
> Removing datanode_3 ... done
> 
> Removing datanode_4 ... done
> 
> Removing datanode_2 ... done
> 
> Removing scm... done
> Removing network ozone-topology_net
> [ ERROR ] Reading XML source 
> '/var/jenkins_home/workspace/ozone/hadoop-ozone/dist/target/ozone-0.5.0-SNAPSHOT/compose/ozone-topology/result/robot-*.xml'
>  failed: No such file or directory
> Try --help for usage information.
> ERROR: Test execution of 
> /var/jenkins_home/workspace/ozone/hadoop-ozone/dist/target/ozone-0.5.0-SNAPSHOT/compose/ozone-topology
>  is FAILED{code}



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1787) NPE thrown while trying to find DN closest to client

2019-07-15 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1787?focusedWorklogId=277114&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-277114
 ]

ASF GitHub Bot logged work on HDDS-1787:


Author: ASF GitHub Bot
Created on: 16/Jul/19 02:23
Start Date: 16/Jul/19 02:23
Worklog Time Spent: 10m 
  Work Description: ChenSammi commented on pull request #1094: HDDS-1787. 
NPE thrown while trying to find DN closest to client.
URL: https://github.com/apache/hadoop/pull/1094#discussion_r303705636
 
 

 ##
 File path: 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/server/SCMBlockProtocolServer.java
 ##
 @@ -290,7 +290,12 @@ public ScmInfo getScmInfo() throws IOException {
   NodeManager nodeManager = scm.getScmNodeManager();
   Node client = nodeManager.getNode(clientMachine);
  List<Node> nodeList = new ArrayList();
-  nodes.stream().forEach(path -> nodeList.add(nodeManager.getNode(path)));
+  nodes.stream().forEach(path -> {
+DatanodeDetails node = nodeManager.getNode(path);
+if (node != null) {
 
 Review comment:
   nodeManager.getNode will return null when it can't find the node in the 
network topology or when the node found is not a leaf node.  The WARN logs for 
all these cases are emitted in the nodeManager.getNode function.
   
   if (node != null) {
 if (node instanceof InnerNode) {
   LOG.warn("Get node for {} return {}, it's an inner node, " +
   "not a datanode", address, node.getNetworkFullPath());
 } else {
   LOG.debug("Get node for {} return {}", address,
   node.getNetworkFullPath());
   return (DatanodeDetails)node;
 }
   } else {
 LOG.warn("Cannot find node for {}", address);
   }
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 277114)
Time Spent: 40m  (was: 0.5h)

> NPE thrown while trying to find DN closest to client
> 
>
> Key: HDDS-1787
> URL: https://issues.apache.org/jira/browse/HDDS-1787
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Affects Versions: 0.5.0
>Reporter: Siddharth Wagle
>Assignee: Sammi Chen
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> cc: [~xyao] This seems related to the client side topology changes, not sure 
> if some other Jira is already addressing this.
> {code}
> 2019-07-10 16:45:53,176 WARN  ipc.Server (Server.java:logException(2724)) - 
> IPC Server handler 14 on 35066, call Call#127037 Retry#0 
> org.apache.hadoop.hdds.scm.protocol.ScmBlockLocationProtocol.send from 17
> 2.31.116.73:52540
> java.lang.NullPointerException
> at 
> org.apache.hadoop.ozone.protocolPB.ScmBlockLocationProtocolServerSideTranslatorPB.lambda$sortDatanodes$0(ScmBlockLocationProtocolServerSideTranslatorPB.java:215)
> at 
> java.util.ArrayList$ArrayListSpliterator.forEachRemaining(ArrayList.java:1382)
> at 
> java.util.stream.ReferencePipeline$Head.forEach(ReferencePipeline.java:580)
> at 
> org.apache.hadoop.ozone.protocolPB.ScmBlockLocationProtocolServerSideTranslatorPB.sortDatanodes(ScmBlockLocationProtocolServerSideTranslatorPB.java:215)
> at 
> org.apache.hadoop.ozone.protocolPB.ScmBlockLocationProtocolServerSideTranslatorPB.send(ScmBlockLocationProtocolServerSideTranslatorPB.java:124)
> at 
> org.apache.hadoop.hdds.protocol.proto.ScmBlockLocationProtocolProtos$ScmBlockLocationProtocolService$2.callBlockingMethod(ScmBlockLocationProtocolProtos.java:13157)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:524)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1025)
> at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:876)
> at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:822)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2682)
> 2019-07-10 16:45:53,176 WARN  om.KeyManagerImpl 
> (KeyManagerImpl.java:lambda$sortDatanodeInPipeline$7(2129)) - Unable to sort 
> datanodes based on distance to client, volume=xqoyzocpse, bucket=vxwajaczqh, 
> 

[jira] [Commented] (HDDS-1787) NPE thrown while trying to find DN closest to client

2019-07-15 Thread Sammi Chen (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1787?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16885781#comment-16885781
 ] 

Sammi Chen commented on HDDS-1787:
--

Hi [~msingh], thanks for the instructions.  I will try it locally.  I also 
created a unit test which reproduced the issue. 

> NPE thrown while trying to find DN closest to client
> 
>
> Key: HDDS-1787
> URL: https://issues.apache.org/jira/browse/HDDS-1787
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Affects Versions: 0.5.0
>Reporter: Siddharth Wagle
>Assignee: Sammi Chen
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> cc: [~xyao] This seems related to the client side topology changes, not sure 
> if some other Jira is already addressing this.
> {code}
> 2019-07-10 16:45:53,176 WARN  ipc.Server (Server.java:logException(2724)) - 
> IPC Server handler 14 on 35066, call Call#127037 Retry#0 
> org.apache.hadoop.hdds.scm.protocol.ScmBlockLocationProtocol.send from 17
> 2.31.116.73:52540
> java.lang.NullPointerException
> at 
> org.apache.hadoop.ozone.protocolPB.ScmBlockLocationProtocolServerSideTranslatorPB.lambda$sortDatanodes$0(ScmBlockLocationProtocolServerSideTranslatorPB.java:215)
> at 
> java.util.ArrayList$ArrayListSpliterator.forEachRemaining(ArrayList.java:1382)
> at 
> java.util.stream.ReferencePipeline$Head.forEach(ReferencePipeline.java:580)
> at 
> org.apache.hadoop.ozone.protocolPB.ScmBlockLocationProtocolServerSideTranslatorPB.sortDatanodes(ScmBlockLocationProtocolServerSideTranslatorPB.java:215)
> at 
> org.apache.hadoop.ozone.protocolPB.ScmBlockLocationProtocolServerSideTranslatorPB.send(ScmBlockLocationProtocolServerSideTranslatorPB.java:124)
> at 
> org.apache.hadoop.hdds.protocol.proto.ScmBlockLocationProtocolProtos$ScmBlockLocationProtocolService$2.callBlockingMethod(ScmBlockLocationProtocolProtos.java:13157)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:524)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1025)
> at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:876)
> at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:822)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2682)
> 2019-07-10 16:45:53,176 WARN  om.KeyManagerImpl 
> (KeyManagerImpl.java:lambda$sortDatanodeInPipeline$7(2129)) - Unable to sort 
> datanodes based on distance to client, volume=xqoyzocpse, bucket=vxwajaczqh, 
> key=pool-444-thread-7-201077822, client=127.0.0.1, 
> datanodes=[10f15723-45d7-4a0c-8f01-8b101744a110{ip: 172.31.116.73, host: 
> sid-minichaos.gce.cloudera.com, networkLocation: /default-rack, certSerialId: 
> null}, 7ac2777f-0a5c-4414-9e7f-bfbc47d696ea{ip: 172.31.116.73, host: 
> sid-minichaos.gce.cloudera.com, networkLocation: /default-rack, certSerialId: 
> null}], exception=java.lang.NullPointerException
> at 
> org.apache.hadoop.ozone.protocolPB.ScmBlockLocationProtocolServerSideTranslatorPB.lambda$sortDatanodes$0(ScmBlockLocationProtocolServerSideTranslatorPB.java:215)
> at 
> java.util.ArrayList$ArrayListSpliterator.forEachRemaining(ArrayList.java:1382)
> at 
> java.util.stream.ReferencePipeline$Head.forEach(ReferencePipeline.java:580)
> at 
> org.apache.hadoop.ozone.protocolPB.ScmBlockLocationProtocolServerSideTranslatorPB.sortDatanodes(ScmBlockLocationProtocolServerSideTranslatorPB.java:215)
> at 
> org.apache.hadoop.ozone.protocolPB.ScmBlockLocationProtocolServerSideTranslatorPB.send(ScmBlockLocationProtocolServerSideTranslatorPB.java:124)
> at 
> org.apache.hadoop.hdds.protocol.proto.ScmBlockLocationProtocolProtos$ScmBlockLocationProtocolService$2.callBlockingMethod(ScmBlockLocationProtocolProtos.java:13157)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:524)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1025)
> at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:876)
> at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:822)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2682)
> {code}



--
This message 

[jira] [Work logged] (HDDS-1802) Add Eviction policy for table cache

2019-07-15 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1802?focusedWorklogId=277111&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-277111
 ]

ASF GitHub Bot logged work on HDDS-1802:


Author: ASF GitHub Bot
Created on: 16/Jul/19 02:03
Start Date: 16/Jul/19 02:03
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1100: HDDS-1802. Add 
Eviction policy for table cache.
URL: https://github.com/apache/hadoop/pull/1100#issuecomment-511634741
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 98 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 1 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 2 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 71 | Maven dependency ordering for branch |
   | +1 | mvninstall | 508 | trunk passed |
   | +1 | compile | 267 | trunk passed |
   | +1 | checkstyle | 74 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 974 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 170 | trunk passed |
   | 0 | spotbugs | 342 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 546 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 33 | Maven dependency ordering for patch |
   | +1 | mvninstall | 486 | the patch passed |
   | +1 | compile | 268 | the patch passed |
   | +1 | javac | 268 | the patch passed |
   | +1 | checkstyle | 75 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 732 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 180 | the patch passed |
   | +1 | findbugs | 607 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 352 | hadoop-hdds in the patch passed. |
   | -1 | unit | 232 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 37 | The patch does not generate ASF License warnings. |
   | | | 5893 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.ozone.om.response.volume.TestOMVolumeCreateResponse |
   |   | hadoop.ozone.om.request.bucket.TestOMBucketCreateRequest |
   |   | hadoop.ozone.om.response.bucket.TestOMBucketCreateResponse |
   |   | hadoop.ozone.om.TestKeyDeletingService |
   |   | hadoop.ozone.om.request.key.TestOMAllocateBlockRequest |
   |   | hadoop.ozone.om.TestBucketManagerImpl |
   |   | hadoop.ozone.om.response.volume.TestOMVolumeSetQuotaResponse |
   |   | hadoop.ozone.om.response.volume.TestOMVolumeSetOwnerResponse |
   |   | hadoop.ozone.om.request.key.TestOMKeyCreateRequest |
   |   | hadoop.ozone.om.response.bucket.TestOMBucketSetPropertyResponse |
   |   | hadoop.ozone.om.request.volume.TestOMVolumeSetOwnerRequest |
   |   | hadoop.ozone.om.request.file.TestOMFileCreateRequest |
   |   | hadoop.ozone.om.request.file.TestOMDirectoryCreateRequest |
   |   | hadoop.ozone.om.TestS3BucketManager |
   |   | hadoop.ozone.om.request.key.TestOMKeyCommitRequest |
   |   | hadoop.ozone.om.request.volume.TestOMVolumeDeleteRequest |
   |   | hadoop.ozone.om.request.volume.TestOMVolumeSetQuotaRequest |
   |   | hadoop.ozone.om.request.volume.TestOMVolumeCreateRequest |
   |   | hadoop.ozone.om.ratis.TestOzoneManagerDoubleBufferWithOMResponse |
   |   | hadoop.ozone.om.request.bucket.TestOMBucketSetPropertyRequest |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=18.09.7 Server=18.09.7 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1100/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1100 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux fbba45fe4640 4.15.0-52-generic #56-Ubuntu SMP Tue Jun 4 
22:49:08 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 1411513 |
   | Default Java | 1.8.0_212 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1100/1/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1100/1/testReport/ |
   | Max. process+thread count | 1096 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/common hadoop-ozone/ozone-manager U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1100/1/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 

[jira] [Commented] (HDFS-14547) DirectoryWithQuotaFeature.quota costs additional memory even the storage type quota is not set.

2019-07-15 Thread Jinglun (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14547?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16885770#comment-16885770
 ] 

Jinglun commented on HDFS-14547:


My bad :( I should have checked the patch more carefully; very sorry for that.

Uploaded patch branch-2.9.003 and removed all the ConstEnumException throws.

> DirectoryWithQuotaFeature.quota costs additional memory even the storage type 
> quota is not set.
> ---
>
> Key: HDFS-14547
> URL: https://issues.apache.org/jira/browse/HDFS-14547
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.1.0
>Reporter: Jinglun
>Assignee: Jinglun
>Priority: Major
> Fix For: 3.0.4, 3.3.0, 3.2.1, 3.1.3
>
> Attachments: HDFS-14547-branch-2.9.001.patch, 
> HDFS-14547-branch-2.9.002.patch, HDFS-14547-branch-2.9.003.patch, 
> HDFS-14547-design, HDFS-14547-patch003-Test Report.pdf, HDFS-14547.001.patch, 
> HDFS-14547.002.patch, HDFS-14547.003.patch, HDFS-14547.004.patch, 
> HDFS-14547.005.patch, HDFS-14547.006.patch, HDFS-14547.007.patch
>
>
> Our XiaoMi HDFS is considering upgrading from 2.6 to 3.1. We noticed that the 
> storage type quota 'tsCounts' is instantiated as 
> EnumCounters<StorageType>(StorageType.class), so it costs a long[5] even 
> if we don't have any storage type quota on this inode (only a space quota or 
> name quota).
> In our cluster we have many dirs with quota and the NameNode's memory is 
> under pressure, so the additional cost will be a problem.
>  See DirectoryWithQuotaFeature.Builder().
>  
> {code:java}
> class DirectoryWithQuotaFeature$Builder {
>   public Builder() {
>this.quota = new QuotaCounts.Builder().nameSpace(DEFAULT_NAMESPACE_QUOTA).
>storageSpace(DEFAULT_STORAGE_SPACE_QUOTA).
>typeSpaces(DEFAULT_STORAGE_SPACE_QUOTA).build();// set default value -1.
>this.usage = new QuotaCounts.Builder().nameSpace(1).build();
>   }
>   public Builder typeSpaces(long val) {// set default value.
>this.tsCounts.reset(val);
>return this;
>   }
> }
> class QuotaCounts$Builder {
>   public Builder() {
> this.nsSsCounts = new EnumCounters<Quota>(Quota.class);
> this.tsCounts = new EnumCounters<StorageType>(StorageType.class);
>   }
> }
> class EnumCounters<E extends Enum<E>> {
>   public EnumCounters(final Class<E> enumClass) {
> final E[] enumConstants = enumClass.getEnumConstants();
> Preconditions.checkNotNull(enumConstants);
> this.enumClass = enumClass;
> this.counters = new long[enumConstants.length];// new a long array here.
>   }
> }
> {code}
> Related to HDFS-14542.
>  
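
A rough, illustrative estimate of that overhead (assuming a 64-bit JVM): each
long[5] is 5 x 8 = 40 bytes of data plus roughly 16 bytes of array header,
about 56 bytes per directory with quota, so 10 million such directories waste
on the order of 500 MB of heap on counters that are never used.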



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1802) Add Eviction policy for table cache

2019-07-15 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1802?focusedWorklogId=277110&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-277110
 ]

ASF GitHub Bot logged work on HDDS-1802:


Author: ASF GitHub Bot
Created on: 16/Jul/19 01:53
Start Date: 16/Jul/19 01:53
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1099: HDDS-1802. Add 
Eviction policy for table cache.
URL: https://github.com/apache/hadoop/pull/1099#issuecomment-511633015
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 67 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 2 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 63 | Maven dependency ordering for branch |
   | +1 | mvninstall | 521 | trunk passed |
   | +1 | compile | 259 | trunk passed |
   | +1 | checkstyle | 63 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 847 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 177 | trunk passed |
   | 0 | spotbugs | 315 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 515 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 38 | Maven dependency ordering for patch |
   | +1 | mvninstall | 507 | the patch passed |
   | +1 | compile | 274 | the patch passed |
   | +1 | javac | 274 | the patch passed |
   | +1 | checkstyle | 80 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 671 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 154 | the patch passed |
   | +1 | findbugs | 523 | the patch passed |
   ||| _ Other Tests _ |
   | -1 | unit | 325 | hadoop-hdds in the patch failed. |
   | -1 | unit | 231 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 41 | The patch does not generate ASF License warnings. |
   | | | 5531 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.hdds.scm.container.placement.algorithms.TestSCMContainerPlacementRackAware
 |
   |   | hadoop.ozone.om.request.volume.TestOMVolumeCreateRequest |
   |   | hadoop.ozone.om.request.volume.TestOMVolumeDeleteRequest |
   |   | hadoop.ozone.om.response.bucket.TestOMBucketSetPropertyResponse |
   |   | hadoop.ozone.om.request.file.TestOMFileCreateRequest |
   |   | hadoop.ozone.om.response.volume.TestOMVolumeSetOwnerResponse |
   |   | hadoop.ozone.om.response.bucket.TestOMBucketCreateResponse |
   |   | hadoop.ozone.om.request.key.TestOMKeyCommitRequest |
   |   | hadoop.ozone.om.response.volume.TestOMVolumeCreateResponse |
   |   | hadoop.ozone.om.request.bucket.TestOMBucketCreateRequest |
   |   | hadoop.ozone.om.TestBucketManagerImpl |
   |   | hadoop.ozone.om.TestKeyDeletingService |
   |   | hadoop.ozone.om.request.key.TestOMAllocateBlockRequest |
   |   | hadoop.ozone.om.request.volume.TestOMVolumeSetQuotaRequest |
   |   | hadoop.ozone.om.request.key.TestOMKeyCreateRequest |
   |   | hadoop.ozone.om.request.volume.TestOMVolumeSetOwnerRequest |
   |   | hadoop.ozone.om.TestS3BucketManager |
   |   | hadoop.ozone.om.request.file.TestOMDirectoryCreateRequest |
   |   | hadoop.ozone.om.request.bucket.TestOMBucketSetPropertyRequest |
   |   | hadoop.ozone.om.ratis.TestOzoneManagerDoubleBufferWithOMResponse |
   |   | hadoop.ozone.om.response.volume.TestOMVolumeSetQuotaResponse |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=18.09.7 Server=18.09.7 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1099/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1099 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux dc94130e6b0d 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 1411513 |
   | Default Java | 1.8.0_212 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1099/1/artifact/out/patch-unit-hadoop-hdds.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1099/1/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1099/1/testReport/ |
   | Max. process+thread count | 1380 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/common hadoop-ozone/ozone-manager U: 

[jira] [Updated] (HDFS-14547) DirectoryWithQuotaFeature.quota costs additional memory even the storage type quota is not set.

2019-07-15 Thread Jinglun (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14547?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jinglun updated HDFS-14547:
---
Attachment: HDFS-14547-branch-2.9.003.patch

> DirectoryWithQuotaFeature.quota costs additional memory even the storage type 
> quota is not set.
> ---
>
> Key: HDFS-14547
> URL: https://issues.apache.org/jira/browse/HDFS-14547
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.1.0
>Reporter: Jinglun
>Assignee: Jinglun
>Priority: Major
> Fix For: 3.0.4, 3.3.0, 3.2.1, 3.1.3
>
> Attachments: HDFS-14547-branch-2.9.001.patch, 
> HDFS-14547-branch-2.9.002.patch, HDFS-14547-branch-2.9.003.patch, 
> HDFS-14547-design, HDFS-14547-patch003-Test Report.pdf, HDFS-14547.001.patch, 
> HDFS-14547.002.patch, HDFS-14547.003.patch, HDFS-14547.004.patch, 
> HDFS-14547.005.patch, HDFS-14547.006.patch, HDFS-14547.007.patch
>
>
> Our XiaoMi HDFS is considering upgrading from 2.6 to 3.1. We noticed that the 
> storage type quota 'tsCounts' is instantiated as 
> EnumCounters<StorageType>(StorageType.class), so it costs a long[5] even 
> if we don't have any storage type quota on this inode (only a space quota or 
> name quota).
> In our cluster we have many dirs with quota and the NameNode's memory is 
> under pressure, so the additional cost will be a problem.
>  See DirectoryWithQuotaFeature.Builder().
>  
> {code:java}
> class DirectoryWithQuotaFeature$Builder {
>   public Builder() {
>this.quota = new QuotaCounts.Builder().nameSpace(DEFAULT_NAMESPACE_QUOTA).
>storageSpace(DEFAULT_STORAGE_SPACE_QUOTA).
>typeSpaces(DEFAULT_STORAGE_SPACE_QUOTA).build();// set default value -1.
>this.usage = new QuotaCounts.Builder().nameSpace(1).build();
>   }
>   public Builder typeSpaces(long val) {// set default value.
>this.tsCounts.reset(val);
>return this;
>   }
> }
> class QuotaCounts$Builder {
>   public Builder() {
> this.nsSsCounts = new EnumCounters<Quota>(Quota.class);
> this.tsCounts = new EnumCounters<StorageType>(StorageType.class);
>   }
> }
> class EnumCounters<E extends Enum<E>> {
>   public EnumCounters(final Class<E> enumClass) {
> final E[] enumConstants = enumClass.getEnumConstants();
> Preconditions.checkNotNull(enumConstants);
> this.enumClass = enumClass;
> this.counters = new long[enumConstants.length];// new a long array here.
>   }
> }
> {code}
> Related to HDFS-14542.
>  



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14593) RBF: Implement deletion feature for expired records in State Store

2019-07-15 Thread Takanobu Asanuma (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14593?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16885752#comment-16885752
 ] 

Takanobu Asanuma commented on HDFS-14593:
-

Created HDFS-14654 for the flaky test.

> RBF: Implement deletion feature for expired records in State Store
> --
>
> Key: HDFS-14593
> URL: https://issues.apache.org/jira/browse/HDFS-14593
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: rbf
>Reporter: Takanobu Asanuma
>Assignee: Takanobu Asanuma
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HDFS-14593.001.patch, HDFS-14593.002.patch, 
> HDFS-14593.003.patch, HDFS-14593.004.patch, HDFS-14593.005.patch, 
> HDFS-14593.006.patch, HDFS-14593.007.patch, HDFS-14593.008.patch, 
> HDFS-14593.009.patch, HDFS-14593.010.patch, HDFS-14593.011.patch
>
>
> Currently, any router seems to remain in the Router Information forever.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14654) RBF: TestRouterRpc tests are flaky

2019-07-15 Thread Takanobu Asanuma (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14654?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Takanobu Asanuma updated HDFS-14654:

Attachment: error.log

> RBF: TestRouterRpc tests are flaky
> --
>
> Key: HDFS-14654
> URL: https://issues.apache.org/jira/browse/HDFS-14654
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Takanobu Asanuma
>Priority: Major
> Attachments: error.log
>
>
> They sometimes pass and sometimes fail.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14654) RBF: TestRouterRpc tests are flaky

2019-07-15 Thread Takanobu Asanuma (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14654?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16885751#comment-16885751
 ] 

Takanobu Asanuma commented on HDFS-14654:
-

Attached the error log.

> RBF: TestRouterRpc tests are flaky
> --
>
> Key: HDFS-14654
> URL: https://issues.apache.org/jira/browse/HDFS-14654
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Takanobu Asanuma
>Priority: Major
> Attachments: error.log
>
>
> They sometimes pass and sometimes fail.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-14654) RBF: TestRouterRpc tests are flaky

2019-07-15 Thread Takanobu Asanuma (JIRA)
Takanobu Asanuma created HDFS-14654:
---

 Summary: RBF: TestRouterRpc tests are flaky
 Key: HDFS-14654
 URL: https://issues.apache.org/jira/browse/HDFS-14654
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Takanobu Asanuma


They sometimes pass and sometimes fail.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1736) Cleanup 2phase old HA code for Key requests.

2019-07-15 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1736?focusedWorklogId=277104&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-277104
 ]

ASF GitHub Bot logged work on HDDS-1736:


Author: ASF GitHub Bot
Created on: 16/Jul/19 01:19
Start Date: 16/Jul/19 01:19
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on issue #1038: HDDS-1736. 
Cleanup 2phase old HA code for Key requests.
URL: https://github.com/apache/hadoop/pull/1038#issuecomment-511626065
 
 
   /retest
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 277104)
Time Spent: 50m  (was: 40m)

> Cleanup 2phase old HA code for Key requests.
> 
>
> Key: HDDS-1736
> URL: https://issues.apache.org/jira/browse/HDDS-1736
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> HDDS-1638 brought in HA code for key operations like allocateBlock, createKey, 
> etc.
> This cleans up the old code changes which were added as part of HDDS-1250 and 
> HDDS-1262 for allocateBlock and openKey.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1736) Cleanup 2phase old HA code for Key requests.

2019-07-15 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1736?focusedWorklogId=277103&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-277103
 ]

ASF GitHub Bot logged work on HDDS-1736:


Author: ASF GitHub Bot
Created on: 16/Jul/19 01:17
Start Date: 16/Jul/19 01:17
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on issue #1038: HDDS-1736. 
Cleanup 2phase old HA code for Key requests.
URL: https://github.com/apache/hadoop/pull/1038#issuecomment-511626101
 
 
   /retest
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 277103)
Time Spent: 40m  (was: 0.5h)

> Cleanup 2phase old HA code for Key requests.
> 
>
> Key: HDDS-1736
> URL: https://issues.apache.org/jira/browse/HDDS-1736
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> HDDS-1638 brought in HA code for key operations like allocateBlock, createKey, 
> etc.
> This cleans up the old code changes which were added as part of HDDS-1250 and 
> HDDS-1262 for allocateBlock and openKey.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1802) Add Eviction policy for table cache

2019-07-15 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1802?focusedWorklogId=277101&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-277101
 ]

ASF GitHub Bot logged work on HDDS-1802:


Author: ASF GitHub Bot
Created on: 16/Jul/19 01:11
Start Date: 16/Jul/19 01:11
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #1100: 
HDDS-1802. Add Eviction policy for table cache.
URL: https://github.com/apache/hadoop/pull/1100#discussion_r303694634
 
 

 ##
 File path: 
hadoop-hdds/common/src/main/java/org/apache/hadoop/utils/db/cache/TableCacheImpl.java
 ##
 @@ -70,7 +77,10 @@ public void put(CACHEKEY cacheKey, CACHEVALUE value) {
 
   @Override
   public void cleanup(long epoch) {
-executorService.submit(() -> evictCache(epoch));
+// If it is never do nothing.
 
 Review comment:
   It is the caller's responsibility to call this method after the flush. Right 
now this is called only after flushing to the DB from OzoneManagerDoubleBuffer.
   
   So, if the policy is NEVER, we will not do the cleanup.
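   
   A minimal sketch of the check being described (method and field names
   follow the PR discussion; the exact code is assumed):
   
   @Override
   public void cleanup(long epoch) {
     // NEVER: the cache keeps the full table in memory, nothing is evicted.
     if (cleanupPolicy == CacheCleanupPolicy.NEVER) {
       return;
     }
     // AFTER_FLUSH: evict entries up to this epoch, which have now been
     // flushed to the DB by OzoneManagerDoubleBuffer.
     executorService.submit(() -> evictCache(epoch));
   }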
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 277101)
Time Spent: 1h  (was: 50m)

> Add Eviction policy for table cache
> ---
>
> Key: HDDS-1802
> URL: https://issues.apache.org/jira/browse/HDDS-1802
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> In this Jira we will add 2 eviction policies for the table cache:
> NEVER, // Cache will not be cleaned up. This means the table maintains a full 
> cache.
> AFTERFLUSH // Cache will be cleaned up once after flushing to DB.
>  
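
A minimal sketch of the two policies as an enum (illustrative; the names
follow the description above and the AFTER_FLUSH spelling used in the PR):
{code:java}
public enum CacheCleanupPolicy {
  // Cache is never cleaned up: the table keeps its full contents in memory.
  NEVER,
  // Cache entries are cleaned up once they have been flushed to the DB.
  AFTER_FLUSH
}
{code}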



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14593) RBF: Implement deletion feature for expired records in State Store

2019-07-15 Thread Takanobu Asanuma (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14593?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16885742#comment-16885742
 ] 

Takanobu Asanuma commented on HDFS-14593:
-

Thanks for your reviews, [~elgoiri], and thanks for your review and your first 
commit to trunk, [~ayushtkn]!

> RBF: Implement deletion feature for expired records in State Store
> --
>
> Key: HDFS-14593
> URL: https://issues.apache.org/jira/browse/HDFS-14593
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: rbf
>Reporter: Takanobu Asanuma
>Assignee: Takanobu Asanuma
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HDFS-14593.001.patch, HDFS-14593.002.patch, 
> HDFS-14593.003.patch, HDFS-14593.004.patch, HDFS-14593.005.patch, 
> HDFS-14593.006.patch, HDFS-14593.007.patch, HDFS-14593.008.patch, 
> HDFS-14593.009.patch, HDFS-14593.010.patch, HDFS-14593.011.patch
>
>
> Currently, any router seems to remain in the Router Information forever.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1802) Add Eviction policy for table cache

2019-07-15 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1802?focusedWorklogId=277097&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-277097
 ]

ASF GitHub Bot logged work on HDDS-1802:


Author: ASF GitHub Bot
Created on: 16/Jul/19 01:00
Start Date: 16/Jul/19 01:00
Worklog Time Spent: 10m 
  Work Description: arp7 commented on pull request #1100: HDDS-1802. Add 
Eviction policy for table cache.
URL: https://github.com/apache/hadoop/pull/1100#discussion_r303692949
 
 

 ##
 File path: 
hadoop-hdds/common/src/main/java/org/apache/hadoop/utils/db/TypedTable.java
 ##
 @@ -104,14 +148,25 @@ public boolean isExist(KEY key) throws IOException {
   public VALUE get(KEY key) throws IOException {
 // Here the metadata lock will guarantee that cache is not updated for same
 // key during get key.
-CacheValue< VALUE > cacheValue = cache.get(new CacheKey<>(key));
-if (cacheValue == null) {
-  // If no cache for the table or if it does not exist in cache get from
-  // RocksDB table.
+
+// First get from cache. If it has return value.
+// If it does not have
+//  If cache cleanup policy is NEVER return null. Because cache here is
+//  full table data in-memory, so no need to get from underlying rocksdb
+//  table.
+//  If cache cleanup policy is AFTER_FLUSH return from underlying rocksdb
+//  table. As it might have been cleaned up from cache, might be there in
+//  DB.
+CacheValue<VALUE> cacheValue =
+Optional.fromNullable(cache.get(new CacheKey<>(key))).orNull();
+if (cacheValue != null) {
+  return cacheValue.getCacheValue();
+}
+
+if (cacheCleanupPolicy == TableCacheImpl.CacheCleanupPolicy.AFTER_FLUSH) {
 
 Review comment:
   Same here.. we should not have these checks here.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 277097)
Time Spent: 50m  (was: 40m)

> Add Eviction policy for table cache
> ---
>
> Key: HDDS-1802
> URL: https://issues.apache.org/jira/browse/HDDS-1802
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> In this Jira we will add 2 eviction policies for the table cache:
> NEVER, // Cache will not be cleaned up. This means the table maintains a full 
> cache.
> AFTERFLUSH // Cache will be cleaned up once after flushing to DB.
>  



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1802) Add Eviction policy for table cache

2019-07-15 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1802?focusedWorklogId=277095&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-277095
 ]

ASF GitHub Bot logged work on HDDS-1802:


Author: ASF GitHub Bot
Created on: 16/Jul/19 00:59
Start Date: 16/Jul/19 00:59
Worklog Time Spent: 10m 
  Work Description: arp7 commented on pull request #1100: HDDS-1802. Add 
Eviction policy for table cache.
URL: https://github.com/apache/hadoop/pull/1100#discussion_r303692835
 
 

 ##
 File path: 
hadoop-hdds/common/src/main/java/org/apache/hadoop/utils/db/TypedTable.java
 ##
 @@ -83,11 +116,22 @@ public boolean isEmpty() throws IOException {
 
   @Override
   public boolean isExist(KEY key) throws IOException {
-CacheValue<VALUE> cacheValue = cache.get(new CacheKey<>(key));
-return (cacheValue != null && cacheValue.getCacheValue() != null) ||
+
+if (cacheCleanupPolicy == TableCacheImpl.CacheCleanupPolicy.NEVER) {
 
 Review comment:
   I understand why we added this policy-specific check. However, it is 
probably misplaced here. Your original solution of having two cache types with 
`isExist` overloaded was probably better.
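
As a sketch of the alternative suggested here (two cache types, each encoding its own lookup semantics, so that callers such as {{isExist}} need no policy checks), something like the following could work. All names are hypothetical, not the actual HDDS-1802 code.

{code:java}
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Hypothetical sketch: each cache type encodes its own isExist semantics.
interface TableCacheSketch<K> {
  boolean containsKey(K key);
  boolean isExist(K key, Map<K, ?> db);
}

// Full-table cache (NEVER cleanup): the cache is authoritative, so a miss
// means the key does not exist anywhere.
class FullTableCache<K> implements TableCacheSketch<K> {
  private final Set<K> keys = new HashSet<>();
  public boolean containsKey(K key) { return keys.contains(key); }
  public boolean isExist(K key, Map<K, ?> db) { return containsKey(key); }
}

// Partial cache (AFTER_FLUSH cleanup): entries may have been evicted after
// a flush, so a cache miss falls back to the underlying store.
class PartialTableCache<K> implements TableCacheSketch<K> {
  private final Set<K> keys = new HashSet<>();
  public boolean containsKey(K key) { return keys.contains(key); }
  public boolean isExist(K key, Map<K, ?> db) {
    return containsKey(key) || db.containsKey(key);
  }
}
{code}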
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 277095)
Time Spent: 40m  (was: 0.5h)

> Add Eviction policy for table cache
> ---
>
> Key: HDDS-1802
> URL: https://issues.apache.org/jira/browse/HDDS-1802
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> In this Jira, we will add two eviction policies for the table cache:
> NEVER, // Cache will not be cleaned up. This means the table maintains a full 
> in-memory cache.
> AFTER_FLUSH // Cache will be cleaned up once entries have been flushed to the 
> DB.
>  



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1666) Improve logic in openKey when allocating block

2019-07-15 Thread Arpit Agarwal (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1666?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDDS-1666:

  Resolution: Fixed
   Fix Version/s: 0.5.0
Target Version/s:   (was: 0.4.1)
  Status: Resolved  (was: Patch Available)

+1.

I've committed this via GitHub. Thanks for the contribution [~bharatviswa]!

> Improve logic in openKey when allocating block
> --
>
> Key: HDDS-1666
> URL: https://issues.apache.org/jira/browse/HDDS-1666
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> We set the size as below:
> {code}
> final long size = args.getDataSize() >= 0 ?
>  args.getDataSize() : scmBlockSize;
> {code}
>  
> and create OmKeyInfo with this size set. But when allocating a block for 
> openKey, we call:
> allocateBlockInKey(keyInfo, args.getDataSize(), currentTime);
>  
> I feel we should use the size computed above, so that we allocate at 
> least one block when the openKey call happens.
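
For illustration, a self-contained sketch of the suggested change follows; names loosely follow the snippet above, but the class and helper methods are hypothetical stand-ins, not the actual KeyManagerImpl code.

{code:java}
// Hypothetical, simplified sketch of the suggested openKey fix.
class OpenKeySketch {

  static final long SCM_BLOCK_SIZE = 256L * 1024 * 1024; // assumed default

  static void openKey(long requestedDataSize, long currentTime) {
    // Fall back to one SCM block when the client did not specify a size.
    final long size =
        requestedDataSize >= 0 ? requestedDataSize : SCM_BLOCK_SIZE;
    // Pass the computed size, not the raw requested size, so that at least
    // one block is allocated when openKey is called without an explicit size.
    allocateBlockInKey(size, currentTime);
  }

  static void allocateBlockInKey(long size, long currentTime) {
    // Allocate ceil(size / SCM_BLOCK_SIZE) blocks (simplified stub).
    long blocks = Math.max(1, (size + SCM_BLOCK_SIZE - 1) / SCM_BLOCK_SIZE);
    System.out.println("pre-allocating " + blocks + " block(s)");
  }
}
{code}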



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1666) Improve logic in openKey when allocating block

2019-07-15 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1666?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16885737#comment-16885737
 ] 

Hudson commented on HDDS-1666:
--

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #16921 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/16921/])
HDDS-1666. Issue in openKey when allocating block. (#943) (arp7: rev 
ef66e4999f3cd5f0ea2fa018359facb776bf892f)
* (edit) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/om/TestKeyManagerImpl.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/KeyManagerImpl.java


> Improve logic in openKey when allocating block
> --
>
> Key: HDDS-1666
> URL: https://issues.apache.org/jira/browse/HDDS-1666
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> We set the size as below:
> {code}
> final long size = args.getDataSize() >= 0 ?
>  args.getDataSize() : scmBlockSize;
> {code}
>  
> and create OmKeyInfo with this size set. But when allocating a block for 
> openKey, we call:
> allocateBlockInKey(keyInfo, args.getDataSize(), currentTime);
>  
> I feel we should use the size computed above, so that we allocate at 
> least one block when the openKey call happens.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1666) Improve logic in openKey when allocating block

2019-07-15 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1666?focusedWorklogId=277092&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-277092
 ]

ASF GitHub Bot logged work on HDDS-1666:


Author: ASF GitHub Bot
Created on: 16/Jul/19 00:54
Start Date: 16/Jul/19 00:54
Worklog Time Spent: 10m 
  Work Description: arp7 commented on pull request #943: HDDS-1666. Issue 
in openKey when allocating block.
URL: https://github.com/apache/hadoop/pull/943
 
 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 277092)
Time Spent: 50m  (was: 40m)

> Improve logic in openKey when allocating block
> --
>
> Key: HDDS-1666
> URL: https://issues.apache.org/jira/browse/HDDS-1666
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> We set the size as below:
> {code}
> final long size = args.getDataSize() >= 0 ?
>  args.getDataSize() : scmBlockSize;
> {code}
>  
> and create OmKeyInfo with this size set. But when allocating a block for 
> openKey, we call:
> allocateBlockInKey(keyInfo, args.getDataSize(), currentTime);
>  
> I feel we should use the size computed above, so that we allocate at 
> least one block when the openKey call happens.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1761) Fix class hierarchy for KeyRequest and FileRequest classes.

2019-07-15 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1761?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16885735#comment-16885735
 ] 

Hudson commented on HDDS-1761:
--

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #16920 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/16920/])
HDDS-1761. Fix class hierarchy for KeyRequest and FileRequest classes. (github: 
rev 585f4d5c6405c8f85c6ef11a43a14302c261be11)
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/key/OMKeyCreateRequest.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/key/OMKeyDeleteRequest.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/key/OMKeyRenameRequest.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/file/OMDirectoryCreateRequest.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/file/OMFileCreateRequest.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/key/OMAllocateBlockRequest.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/key/OMKeyCommitRequest.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/key/OMKeyRequest.java


> Fix class hierarchy for KeyRequest and FileRequest classes.
> ---
>
> Key: HDDS-1761
> URL: https://issues.apache.org/jira/browse/HDDS-1761
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> The patch looks mostly fine to me. A few minor comments. -and one type error 
> that needs to be fixed.-
> I would like to see the class hierarchy refactored in a follow-up patch. 
> {{OMFileCreateRequest}} should not extend {{OMKeyCreateRequest}}. Instead they 
> should both extend an abstract class that encapsulates the common 
> functionality.
> Generally, deriving from _concrete_ classes is a bad idea.
>  
> This Jira was created based on [~arp]'s comment during the review of HDDS-1731.
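
For illustration, a minimal sketch of the hierarchy described above: both request classes extend a shared abstract base instead of one concrete class extending the other. The base class name is hypothetical.

{code:java}
// Hypothetical sketch of the suggested refactoring.
abstract class OMKeyCreateRequestBase {
  // Common validation and key-path handling shared by both request types.
  void preExecuteCommon() {
    // shared logic lives here, written once
  }
}

class OMKeyCreateRequest extends OMKeyCreateRequestBase {
  // key-specific handling only
}

class OMFileCreateRequest extends OMKeyCreateRequestBase {
  // file-specific handling only (e.g. creating missing parent directories)
}
{code}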



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1761) Fix class hierarchy for KeyRequest and FileRequest classes.

2019-07-15 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1761?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-1761:
-
Resolution: Fixed
Status: Resolved  (was: Patch Available)

> Fix class hierarchy for KeyRequest and FileRequest classes.
> ---
>
> Key: HDDS-1761
> URL: https://issues.apache.org/jira/browse/HDDS-1761
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> The patch looks mostly fine to me. A few minor comments. -and one type error 
> that needs to be fixed.-
> I would like to see the class hierarchy refactored in a follow-up patch. 
> {{OMFileCreateRequest}} should not extend {{OMKeyCreateRequest}}. Instead they 
> should both extend an abstract class that encapsulates the common 
> functionality.
> Generally, deriving from _concrete_ classes is a bad idea.
>  
> This Jira was created based on [~arp]'s comment during the review of HDDS-1731.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1761) Fix class hierarchy for KeyRequest and FileRequest classes.

2019-07-15 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1761?focusedWorklogId=277090&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-277090
 ]

ASF GitHub Bot logged work on HDDS-1761:


Author: ASF GitHub Bot
Created on: 16/Jul/19 00:53
Start Date: 16/Jul/19 00:53
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on issue #1052: HDDS-1761. Fix 
class hierarchy for KeyRequest and FileRequest classes.
URL: https://github.com/apache/hadoop/pull/1052#issuecomment-511621943
 
 
   Thank you @arp7 for the review.
   The test failures are not related to this patch. I will commit this to trunk.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 277090)
Time Spent: 0.5h  (was: 20m)

> Fix class hierarchy for KeyRequest and FileRequest classes.
> ---
>
> Key: HDDS-1761
> URL: https://issues.apache.org/jira/browse/HDDS-1761
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> The patch looks mostly fine to me. A few minor comments. -and one type error 
> that needs to be fixed.-
> I would like to see the class hierarchy refactored in a follow-up patch. 
> {{OMFileCreateRequest}} should not extend {{OMKeyCreateRequest}}. Instead they 
> should both extend an abstract class that encapsulates the common 
> functionality.
> Generally, deriving from _concrete_ classes is a bad idea.
>  
> This Jira was created based on [~arp]'s comment during the review of HDDS-1731.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1761) Fix class hierarchy for KeyRequest and FileRequest classes.

2019-07-15 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1761?focusedWorklogId=277091&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-277091
 ]

ASF GitHub Bot logged work on HDDS-1761:


Author: ASF GitHub Bot
Created on: 16/Jul/19 00:53
Start Date: 16/Jul/19 00:53
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #1052: 
HDDS-1761. Fix class hierarchy for KeyRequest and FileRequest classes.
URL: https://github.com/apache/hadoop/pull/1052
 
 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 277091)
Time Spent: 40m  (was: 0.5h)

> Fix class hierarchy for KeyRequest and FileRequest classes.
> ---
>
> Key: HDDS-1761
> URL: https://issues.apache.org/jira/browse/HDDS-1761
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> The patch looks mostly fine to me. A few minor comments. -and one type error 
> that needs to be fixed.-
> I would like to see the class hierarchy refactored in a follow-up patch. 
> {{OMFileCreateRequest}} should not extend {{OMKeyCreateRequest}}. Instead they 
> should both extend an abstract class that encapsulates the common 
> functionality.
> Generally, deriving from _concrete_ classes is a bad idea.
>  
> This Jira was created based on [~arp]'s comment during the review of HDDS-1731.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1736) Cleanup 2phase old HA code for Key requests.

2019-07-15 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1736?focusedWorklogId=277088&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-277088
 ]

ASF GitHub Bot logged work on HDDS-1736:


Author: ASF GitHub Bot
Created on: 16/Jul/19 00:49
Start Date: 16/Jul/19 00:49
Worklog Time Spent: 10m 
  Work Description: arp7 commented on issue #1038: HDDS-1736. Cleanup 
2phase old HA code for Key requests.
URL: https://github.com/apache/hadoop/pull/1038#issuecomment-511621402
 
 
   +1 LGTM. Feel free to commit with checkstyle issues fixed.
   
   I assume you verified the test failures are unrelated.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 277088)
Time Spent: 0.5h  (was: 20m)

> Cleanup 2phase old HA code for Key requests.
> 
>
> Key: HDDS-1736
> URL: https://issues.apache.org/jira/browse/HDDS-1736
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> HDDS-1638 brought in HA code for key operations like allocateBlock, createKey, 
> etc. This Jira cleans up the old code changes that were added as part of 
> HDDS-1250 and HDDS-1262 for allocateBlock and openKey.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1544) Support default Acls for volume, bucket, keys and prefix

2019-07-15 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1544?focusedWorklogId=277087&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-277087
 ]

ASF GitHub Bot logged work on HDDS-1544:


Author: ASF GitHub Bot
Created on: 16/Jul/19 00:44
Start Date: 16/Jul/19 00:44
Worklog Time Spent: 10m 
  Work Description: xiaoyuyao commented on pull request #1101: HDDS-1544. 
Support default Acls for volume, bucket, keys and prefix. …
URL: https://github.com/apache/hadoop/pull/1101
 
 
   …Contributed by Ajay Kumar.
   
   Some additional changes to fix the unit tests and a checkstyle issue. 
   
   TestOzoneRpcClientWithRatis still has 3 tests failing, as expected. The root 
cause is that the Ozone ACL work has not been fully integrated with the OM-HA 
work targeted for Ozone 0.5. OM-HA support is tracked separately by HDDS-1619. 
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 277087)
Time Spent: 8h  (was: 7h 50m)

> Support default Acls for volume, bucket, keys and prefix
> 
>
> Key: HDDS-1544
> URL: https://issues.apache.org/jira/browse/HDDS-1544
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Xiaoyu Yao
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDDS-1544.00.patch
>
>  Time Spent: 8h
>  Remaining Estimate: 0h
>
> Add default ACLs for volume, bucket, keys and prefix.
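
For illustration, a generic sketch of the default-ACL inheritance idea (POSIX-style): when a child object is created, the parent's DEFAULT-scoped ACLs become the child's ACCESS-scoped ACLs. This is a simplified stand-in, not the actual Ozone ACL implementation.

{code:java}
import java.util.ArrayList;
import java.util.List;

// Hypothetical, simplified sketch of default-ACL inheritance.
class DefaultAclSketch {

  enum Scope { ACCESS, DEFAULT }

  static class Acl {
    final String who;
    final String rights;
    final Scope scope;
    Acl(String who, String rights, Scope scope) {
      this.who = who;
      this.rights = rights;
      this.scope = scope;
    }
  }

  // Derive a new child object's ACCESS ACLs from the parent's DEFAULT ACLs.
  static List<Acl> inheritFromParent(List<Acl> parentAcls) {
    List<Acl> childAcls = new ArrayList<>();
    for (Acl acl : parentAcls) {
      if (acl.scope == Scope.DEFAULT) {
        childAcls.add(new Acl(acl.who, acl.rights, Scope.ACCESS));
      }
    }
    return childAcls;
  }
}
{code}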



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14573) Backport Standby Read to branch-3

2019-07-15 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14573?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16885731#comment-16885731
 ] 

Hadoop QA commented on HDFS-14573:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 10m 
20s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 27 new or modified test 
files. {color} |
|| || || || {color:brown} branch-3.0 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  2m 
16s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
12s{color} | {color:green} branch-3.0 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m 
32s{color} | {color:green} branch-3.0 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
 9s{color} | {color:green} branch-3.0 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  4m  
6s{color} | {color:green} branch-3.0 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
16m 51s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  3m  
4s{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 in branch-3.0 has 2 extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 11m  
3s{color} | {color:green} branch-3.0 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
35s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  9m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 17m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 17m 
30s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 17m 30s{color} 
| {color:red} root generated 1 new + 1253 unchanged - 1 fixed = 1254 total (was 
1254) {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
2m 16s{color} | {color:orange} root: The patch generated 27 new + 2554 
unchanged - 10 fixed = 2581 total (was 2564) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  4m  
5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 44s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  8m  
8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  3m 
32s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  8m 14s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
35s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 78m 22s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 14m 
20s{color} | {color:green} hadoop-hdfs-rbf in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 59m 40s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
33s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} 

[jira] [Updated] (HDDS-1617) Restructure the code layout for Ozone Manager

2019-07-15 Thread Arpit Agarwal (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1617?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDDS-1617:

Labels:   (was: pull-request-available)

> Restructure the code layout for Ozone Manager
> -
>
> Key: HDDS-1617
> URL: https://issues.apache.org/jira/browse/HDDS-1617
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone Manager
>Reporter: Anu Engineer
>Assignee: Anu Engineer
>Priority: Major
>  Time Spent: 2.5h
>  Remaining Estimate: 0h
>
> The Ozone Manager has a flat structure that deals with a lot of specific 
> functions. This Jira proposes to refactor the Ozone Manager code base and 
> move functions into function-specific packages.
>  



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org


