[jira] [Work logged] (HDDS-1802) Add Eviction policy for table cache

2019-07-16 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1802?focusedWorklogId=277973&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-277973
 ]

ASF GitHub Bot logged work on HDDS-1802:


Author: ASF GitHub Bot
Created on: 17/Jul/19 05:46
Start Date: 17/Jul/19 05:46
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on issue #1100: HDDS-1802. Add 
Eviction policy for table cache.
URL: https://github.com/apache/hadoop/pull/1100#issuecomment-512111723
 
 
   /retest
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 277973)
Time Spent: 2h 10m  (was: 2h)

> Add Eviction policy for table cache
> ---
>
> Key: HDDS-1802
> URL: https://issues.apache.org/jira/browse/HDDS-1802
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 2h 10m
>  Remaining Estimate: 0h
>
> In this Jira we will add two eviction policies for the table cache:
> NEVER, // Cache will not be cleaned up. This means the table maintains a 
> full cache.
> AFTERFLUSH // Cache will be cleaned up once entries are flushed to the DB.
>  
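
For illustration, a minimal sketch of how the two policies could be modeled 
(the enum names come from the description above; the cleanup hook around them 
is an assumption, not the actual TableCache API):

{code:java}
/** Hypothetical sketch of the two eviction policies described above. */
public enum CacheCleanupPolicy {
  /** Cache is never cleaned up; the table maintains a full cache. */
  NEVER,
  /** Cache entries are evicted once they have been flushed to the DB. */
  AFTERFLUSH
}

// Assumed usage inside a table cache's flush-completion hook:
// if (cleanupPolicy == CacheCleanupPolicy.AFTERFLUSH) {
//   evictEntriesUpTo(flushedEpoch);  // hypothetical helper
// }
{code}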



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDDS-1786) Datanodes takeSnapshot should delete previously created snapshots

2019-07-16 Thread Siddharth Wagle (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1786?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siddharth Wagle reassigned HDDS-1786:
-

Assignee: Aravindan Vijayan

> Datanodes takeSnapshot should delete previously created snapshots
> -
>
> Key: HDDS-1786
> URL: https://issues.apache.org/jira/browse/HDDS-1786
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Affects Versions: 0.4.0
>Reporter: Mukul Kumar Singh
>Assignee: Aravindan Vijayan
>Priority: Major
>
> Right now, after taking a new snapshot, the previous snapshot file is left in 
> the raft log directory. When a new snapshot is taken, the previous snapshots 
> should be deleted.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14461) RBF: Fix intermittently failing kerberos related unit test

2019-07-16 Thread He Xiaoqiao (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14461?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16886683#comment-16886683
 ] 

He Xiaoqiao commented on HDFS-14461:


This failed unit test could be related to the changes from HADOOP-16354 and 
HADOOP-16314.
Ping [~eyang], would you mind taking a look?

> RBF: Fix intermittently failing kerberos related unit test
> --
>
> Key: HDFS-14461
> URL: https://issues.apache.org/jira/browse/HDFS-14461
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: CR Hota
>Assignee: Fengnan Li
>Priority: Major
>
> TestRouterHttpDelegationToken#testGetDelegationToken fails intermittently. It 
> may be due to a race condition around the keytab that is created for 
> testing.
>  
> {code:java}
>  Failed
> org.apache.hadoop.hdfs.server.federation.security.TestRouterHttpDelegationToken.testGetDelegationToken
>  Failing for the past 1 build (Since 
> [!https://builds.apache.org/static/1e9ab9cc/images/16x16/red.png! 
> #26721|https://builds.apache.org/job/PreCommit-HDFS-Build/26721/] )
>  [Took 89 
> ms.|https://builds.apache.org/job/PreCommit-HDFS-Build/26721/testReport/org.apache.hadoop.hdfs.server.federation.security/TestRouterHttpDelegationToken/testGetDelegationToken/history]
>   
>  Error Message
> org.apache.hadoop.security.KerberosAuthException: failure to login: for 
> principal: router/localh...@example.com from keytab 
> /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs-rbf/target/test/data/SecurityConfUtil/test.keytab
>  javax.security.auth.login.LoginException: Integrity check on decrypted field 
> failed (31) - PREAUTH_FAILED
> h3. Stacktrace
> org.apache.hadoop.service.ServiceStateException: 
> org.apache.hadoop.security.KerberosAuthException: failure to login: for 
> principal: router/localh...@example.com from keytab 
> /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs-rbf/target/test/data/SecurityConfUtil/test.keytab
>  javax.security.auth.login.LoginException: Integrity check on decrypted field 
> failed (31) - PREAUTH_FAILED at 
> org.apache.hadoop.service.ServiceStateException.convert(ServiceStateException.java:105)
>  at org.apache.hadoop.service.AbstractService.init(AbstractService.java:173) 
> at 
> org.apache.hadoop.hdfs.server.federation.security.TestRouterHttpDelegationToken.setup(TestRouterHttpDelegationToken.java:99)
>  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  at java.lang.reflect.Method.invoke(Method.java:498) at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>  at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>  at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>  at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) 
> at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) 
> at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325) at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
>  at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
>  at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290) at 
> org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71) at 
> org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288) at 
> org.junit.runners.ParentRunner.access$000(ParentRunner.java:58) at 
> org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268) at 
> org.junit.runners.ParentRunner.run(ParentRunner.java:363) at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
>  at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
>  at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
>  at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
>  at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:384)
>  at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:345)
>  at 
> org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:126) 
> at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:418) 
> Caused by: org.apache.hadoop.security.KerberosAuthException: failure to 
> login: for principal: router/localh...@example.com from keytab 
> /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs-rbf/target/test/data/SecurityConfUtil/test.keytab
>  javax.security.auth.login.LoginException: Integrity check on decrypted 

[jira] [Commented] (HDFS-14576) Avoid block report retry and slow down namenode startup

2019-07-16 Thread He Xiaoqiao (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14576?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16886681#comment-16886681
 ] 

He Xiaoqiao commented on HDFS-14576:


Thanks [~zhangchen] for your detailed comment. As you said above, BlockReport 
Lease is a good solution, and it can reduce the load on the NameNode. I believe 
there are still some points to optimize in certain scenarios, especially the 
NameNode restart stage.

> Avoid block report retry and slow down namenode startup
> ---
>
> Key: HDFS-14576
> URL: https://issues.apache.org/jira/browse/HDFS-14576
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Reporter: He Xiaoqiao
>Assignee: He Xiaoqiao
>Priority: Major
>
> During namenode startup, the load will be very high since it has to process 
> every datanode's block report one by one. If there are hundreds of datanode 
> block reports pending processing, the issue becomes more serious, even though 
> #processFirstBlockReport is handled a lot more efficiently than ordinary 
> block reports. Some datanodes will then retry their block reports, which 
> lengthens restart times. I think we should filter out block report requests 
> (from datanode blockreport retries) that have already been processed and 
> return directly, thus shortening restart time. I want to state that this 
> proposal may be noticeable only for large clusters.
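
For illustration only, a minimal sketch of the filtering idea (the class and 
method names here are hypothetical, not actual NameNode code):

{code:java}
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

class BlockReportFilterSketch {
  // Datanodes whose first full block report was already processed
  // during this startup.
  private final Set<String> processedFirstReports =
      ConcurrentHashMap.newKeySet();

  /** Returns true if a retried report can be acknowledged directly. */
  boolean shouldSkip(String datanodeUuid) {
    // add() returns false when the datanode is already present, i.e. its
    // first block report was processed; a retry carries no new information
    // during startup, so answer it directly instead of queueing it again.
    return !processedFirstReports.add(datanodeUuid);
  }
}
{code}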



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1809) Ozone Read fails with StatusRunTimeExceptions after 2 datanode fail in Ratis pipeline

2019-07-16 Thread Mukul Kumar Singh (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1809?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16886668#comment-16886668
 ] 

Mukul Kumar Singh commented on HDDS-1809:
-

cc: [~xyao]

> Ozone Read fails with StatusRunTimeExceptions after 2 datanode fail in Ratis 
> pipeline
> -
>
> Key: HDDS-1809
> URL: https://issues.apache.org/jira/browse/HDDS-1809
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Client
>Affects Versions: 0.5.0
>Reporter: Shashikant Banerjee
>Assignee: Sammi Chen
>Priority: Major
> Fix For: 0.5.0
>
>
> {code:java}
> java.io.IOException: Unexpected OzoneException: java.io.IOException: 
> java.util.concurrent.ExecutionException: 
> org.apache.ratis.thirdparty.io.grpc.StatusRuntimeException: UNAVAILABLE: io 
> exception
> at 
> org.apache.hadoop.hdds.scm.storage.ChunkInputStream.readChunk(ChunkInputStream.java:342)
> at 
> org.apache.hadoop.hdds.scm.storage.ChunkInputStream.readChunkFromContainer(ChunkInputStream.java:307)
> at 
> org.apache.hadoop.hdds.scm.storage.ChunkInputStream.prepareRead(ChunkInputStream.java:259)
> at 
> org.apache.hadoop.hdds.scm.storage.ChunkInputStream.read(ChunkInputStream.java:144)
> at 
> org.apache.hadoop.hdds.scm.storage.BlockInputStream.read(BlockInputStream.java:239)
> at 
> org.apache.hadoop.ozone.client.io.KeyInputStream.read(KeyInputStream.java:171)
> at 
> org.apache.hadoop.ozone.client.io.OzoneInputStream.read(OzoneInputStream.java:47)
> at java.io.InputStream.read(InputStream.java:101)
> at 
> org.apache.hadoop.ozone.container.ContainerTestHelper.validateData(ContainerTestHelper.java:709)
> at 
> org.apache.hadoop.ozone.client.rpc.TestFailureHandlingByClient.validateData(TestFailureHandlingByClient.java:458)
> at 
> org.apache.hadoop.ozone.client.rpc.TestFailureHandlingByClient.testBlockWritesWithDnFailures(TestFailureHandlingByClient.java:158)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
> at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
> at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271)
> at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
> at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
> at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
> at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
> at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
> at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
> at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
> at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
> at org.junit.runner.JUnitCore.run(JUnitCore.java:160)
> at 
> com.intellij.junit4.JUnit4IdeaTestRunner.startRunnerWithArgs(JUnit4IdeaTestRunner.java:68)
> at 
> com.intellij.rt.execution.junit.IdeaTestRunner$Repeater.startRunnerWithArgs(IdeaTestRunner.java:47)
> at 
> com.intellij.rt.execution.junit.JUnitStarter.prepareStreamsAndStart(JUnitStarter.java:242)
> at com.intellij.rt.execution.junit.JUnitStarter.main(JUnitStarter.java:70)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1812) Du while calculating used disk space reports that chunk files are file not found

2019-07-16 Thread Mukul Kumar Singh (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1812?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16886667#comment-16886667
 ] 

Mukul Kumar Singh commented on HDDS-1812:
-

This is not really a disk corruption test; I feel this is a benign error. The 
reason is that we rename the chunk files once they are written on the DN. The 
DU function needs to be enhanced to handle this.
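
As a rough illustration of "enhance DU to handle this", the refresh could 
treat a file that vanishes mid-scan as benign (the class below is a sketch, 
not the actual org.apache.hadoop.fs.DU code):

{code:java}
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.stream.Stream;

class TolerantDiskUsageSketch {
  /** Sums file sizes, ignoring files that vanish (e.g. renamed .tmp chunks). */
  static long usedBytes(Path root) throws IOException {
    try (Stream<Path> files = Files.walk(root)) {
      return files.filter(Files::isRegularFile).mapToLong(p -> {
        try {
          return Files.size(p);
        } catch (IOException e) {
          // The chunk file was renamed or removed between listing and
          // stat; count it as zero instead of failing the whole refresh.
          return 0L;
        }
      }).sum();
    }
  }
}
{code}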

> Du while calculating used disk space reports that chunk files are file not 
> found
> 
>
> Key: HDDS-1812
> URL: https://issues.apache.org/jira/browse/HDDS-1812
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Affects Versions: 0.4.0
>Reporter: Mukul Kumar Singh
>Priority: Major
>
> {code}
> 2019-07-16 08:16:49,787 WARN org.apache.hadoop.fs.CachingGetSpaceUsed: Could 
> not get disk usage information for path /data/3/ozone-0715
> ExitCodeException exitCode=1: du: cannot access 
> '/data/3/ozone-0715/hdds/1b467d25-46cd-4de0-a4a1-e9405bde23ff/current/containerDir3/1724/chunks/b113dd390e68e914d3ff405f3deec564_stream_60448f
> 77-6349-48fa-ae86-b2d311730569_chunk_1.tmp.1.14118085': No such file or 
> directory
> du: cannot access 
> '/data/3/ozone-0715/hdds/1b467d25-46cd-4de0-a4a1-e9405bde23ff/current/containerDir3/1724/chunks/37993af2849bdd0320d0f9d4a6ef4b92_stream_1f68be9f-e083-45e5-84a9-08809bc392ed
> _chunk_1.tmp.1.14118091': No such file or directory
> du: cannot access 
> '/data/3/ozone-0715/hdds/1b467d25-46cd-4de0-a4a1-e9405bde23ff/current/containerDir3/1724/chunks/a38677def61389ec0be9105b1b4fddff_stream_9c3c3741-f710-4482-8423-7ac6695be96b
> _chunk_1.tmp.1.14118102': No such file or directory
> du: cannot access 
> '/data/3/ozone-0715/hdds/1b467d25-46cd-4de0-a4a1-e9405bde23ff/current/containerDir3/1724/chunks/a689c89f71a75547471baf6182f3be01_stream_baf0f21d-2fb0-4cd8-84b0-eff1723019a0
> _chunk_1.tmp.1.14118105': No such file or directory
> du: cannot access 
> '/data/3/ozone-0715/hdds/1b467d25-46cd-4de0-a4a1-e9405bde23ff/current/containerDir3/1724/chunks/f58cf0fa5cb9360058ae25e8bc983e84_stream_d8d5ea61-995f-4ff5-88fb-4a9e97932f00
> _chunk_1.tmp.1.14118109': No such file or directory
> du: cannot access 
> '/data/3/ozone-0715/hdds/1b467d25-46cd-4de0-a4a1-e9405bde23ff/current/containerDir3/1724/chunks/a1d13ee6bbefd1f8156b1bd8db0d1b67_stream_db214bdd-a0c0-4f4a-8bc7-a3817e047e45_chunk_1.tmp.1.14118115':
>  No such file or directory
> du: cannot access 
> '/data/3/ozone-0715/hdds/1b467d25-46cd-4de0-a4a1-e9405bde23ff/current/containerDir3/1724/chunks/8f8a4bd3f6c31161a70f82cb5ab8ee60_stream_d532d657-3d87-4332-baf8-effad9b3db23_chunk_1.tmp.1.14118127':
>  No such file or directory
> at org.apache.hadoop.util.Shell.runCommand(Shell.java:1008)
> at org.apache.hadoop.util.Shell.run(Shell.java:901)
> at org.apache.hadoop.fs.DU$DUShell.startRefresh(DU.java:62)
> at org.apache.hadoop.fs.DU.refresh(DU.java:53)
> at 
> org.apache.hadoop.fs.CachingGetSpaceUsed$RefreshThread.run(CachingGetSpaceUsed.java:181)
> at java.lang.Thread.run(Thread.java:748)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1798) Propagate failure in writeStateMachineData to Ratis

2019-07-16 Thread Arpit Agarwal (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1798?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDDS-1798:

Description: 
Currently, 

writeStateMachineData() returns a future to Ratis. This future does not track 
any errors or failures encountered as part of the operation - WriteChunk / 
handleWriteChunk(). The error is propagated back to the client in the form of 
an error code embedded inside writeChunkResponseProto. But the error goes 
undetected and unhandled in the Ratis server. The future handed back to Ratis 
is always completed with success.

The goal is to detect any errors in writeStateMachineData in Ratis and treat 
them as a failure of the Ratis log. Handling for this is already implemented 
in HDDS-1603. 

 

  was:
Currently, 

writeStateMachineData() returns a future to Ratis. This future does not track 
any errors or failures encountered as part of the operation - WriteChunk / 
handleWriteChunk(). The error is propagated back to the client in the form of 
an error code embedded inside writeChunkResponseProto. But the error goes 
undetected and unhandled in the Ratis server. The future handed back to Ratis 
is always completed with success.

The goal is to detect any errors in writeStateMachineData in Ratis and treat 
them as a failure of the Ratis log. Handling for this is already implemented 
in HDDS-1603. 

 


> Propagate failure in writeStateMachineData to Ratis
> ---
>
> Key: HDDS-1798
> URL: https://issues.apache.org/jira/browse/HDDS-1798
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Datanode
>Reporter: Supratim Deka
>Assignee: Supratim Deka
>Priority: Major
>
> Currently, 
> writeStateMachineData() returns a future to Ratis. This future does not track 
> any errors or failures encountered as part of the operation - WriteChunk / 
> handleWriteChunk(). The error is propagated back to the client in the form of 
> an error code embedded inside writeChunkResponseProto. But the error goes 
> undetected and unhandled in the Ratis server. The future handed back to Ratis 
> is always completed with success.
> The goal is to detect any errors in writeStateMachineData in Ratis and treat 
> them as a failure of the Ratis log. Handling for this is already implemented 
> in HDDS-1603. 
>  
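
For illustration, a minimal sketch of the intended behavior under stated 
assumptions (the response type and handler wiring below are hypothetical, not 
the actual ContainerStateMachine code): complete the future exceptionally when 
the response carries an error, so Ratis sees the failure.

{code:java}
import java.io.IOException;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.CompletionException;

class WriteStateMachineDataSketch {
  /** Hypothetical response carrying an embedded result code. */
  static class WriteChunkResponse {
    final boolean success;
    final String message;
    WriteChunkResponse(boolean success, String message) {
      this.success = success;
      this.message = message;
    }
  }

  CompletableFuture<WriteChunkResponse> writeStateMachineData(
      CompletableFuture<WriteChunkResponse> handlerFuture) {
    return handlerFuture.thenApply(response -> {
      if (!response.success) {
        // Surface the embedded error as a real failure so the Ratis server
        // treats it as a log failure (handled via HDDS-1603) instead of a
        // future that always completes with success.
        throw new CompletionException(new IOException(response.message));
      }
      return response;
    });
  }
}
{code}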



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1712) Remove sudo access from Ozone docker image

2019-07-16 Thread Eric Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1712?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16886653#comment-16886653
 ] 

Eric Yang commented on HDDS-1712:
-

[~elek] core-site.xml is required because fs.defaultFS needs to be specified. 
If there is no core-site.xml with the volume and bucket in the URL, then the 
test code does not test Ozone.
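
For context, a hypothetical snippet showing the kind of setting meant here 
(the o3fs volume/bucket values are placeholders):

{code:java}
import org.apache.hadoop.conf.Configuration;

public class OzoneTestConfSketch {
  public static Configuration ozoneDefaultFs() {
    Configuration conf = new Configuration();
    // Placeholder volume/bucket; a real test would use the cluster's values.
    conf.set("fs.defaultFS", "o3fs://bucket.volume/");
    return conf;
  }
}
{code}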

[~anu] Doesn't the Ozone quick start guide tell users to use docker-compose to 
start the cluster?  This puts the Docker image on the critical path for most 
users trying it out.  Why ask people to try it out with Docker if you have no 
intention of finishing what you started?

> Remove sudo access from Ozone docker image
> --
>
> Key: HDDS-1712
> URL: https://issues.apache.org/jira/browse/HDDS-1712
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Eric Yang
>Assignee: Eric Yang
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDDS-1712.001.patch
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> Ozone docker image is given unlimited sudo access to hadoop user.  This poses 
> a security risk where host level user uid 1000 can attach a debugger to the 
> container process to obtain root access.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1802) Add Eviction policy for table cache

2019-07-16 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1802?focusedWorklogId=277951&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-277951
 ]

ASF GitHub Bot logged work on HDDS-1802:


Author: ASF GitHub Bot
Created on: 17/Jul/19 03:10
Start Date: 17/Jul/19 03:10
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1100: HDDS-1802. Add 
Eviction policy for table cache.
URL: https://github.com/apache/hadoop/pull/1100#issuecomment-512083973
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|:--------|:--------|
   | 0 | reexec | 36 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 2 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 68 | Maven dependency ordering for branch |
   | +1 | mvninstall | 498 | trunk passed |
   | +1 | compile | 268 | trunk passed |
   | +1 | checkstyle | 65 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 799 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 150 | trunk passed |
   | 0 | spotbugs | 318 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 514 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 27 | Maven dependency ordering for patch |
   | +1 | mvninstall | 492 | the patch passed |
   | +1 | compile | 255 | the patch passed |
   | +1 | javac | 255 | the patch passed |
   | +1 | checkstyle | 67 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 692 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 150 | the patch passed |
   | +1 | findbugs | 504 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 284 | hadoop-hdds in the patch passed. |
   | -1 | unit | 137 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 34 | The patch does not generate ASF License warnings. |
   | | | 5187 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.ozone.om.request.volume.TestOMVolumeSetQuotaRequest |
   |   | hadoop.ozone.om.request.key.TestOMKeyRenameRequest |
   |   | hadoop.ozone.om.request.key.TestOMKeyDeleteRequest |
   |   | hadoop.ozone.om.request.file.TestOMDirectoryCreateRequest |
   |   | hadoop.ozone.om.ratis.TestOzoneManagerRatisServer |
   |   | hadoop.ozone.om.response.volume.TestOMVolumeSetQuotaResponse |
   |   | hadoop.ozone.om.ratis.TestOzoneManagerDoubleBufferWithDummyResponse |
   |   | hadoop.ozone.om.request.file.TestOMFileCreateRequest |
   |   | hadoop.ozone.om.request.volume.TestOMVolumeCreateRequest |
   |   | hadoop.ozone.om.response.key.TestOMKeyRenameResponse |
   |   | hadoop.ozone.om.TestKeyDeletingService |
   |   | hadoop.ozone.om.TestBucketManagerImpl |
   |   | hadoop.ozone.om.response.bucket.TestOMBucketCreateResponse |
   |   | hadoop.ozone.om.response.bucket.TestOMBucketDeleteResponse |
   |   | hadoop.ozone.om.response.key.TestOMKeyCommitResponse |
   |   | hadoop.ozone.om.response.key.TestOMKeyCreateResponse |
   |   | hadoop.ozone.om.request.key.TestOMKeyCreateRequest |
   |   | hadoop.ozone.om.request.volume.TestOMVolumeSetOwnerRequest |
   |   | hadoop.ozone.om.request.bucket.TestOMBucketSetPropertyRequest |
   |   | hadoop.ozone.om.response.bucket.TestOMBucketSetPropertyResponse |
   |   | hadoop.ozone.om.response.volume.TestOMVolumeDeleteResponse |
   |   | hadoop.ozone.om.request.volume.TestOMVolumeDeleteRequest |
   |   | hadoop.ozone.om.TestS3BucketManager |
   |   | hadoop.ozone.om.ratis.TestOzoneManagerDoubleBufferWithOMResponse |
   |   | hadoop.ozone.security.TestOzoneDelegationTokenSecretManager |
   |   | hadoop.ozone.om.request.bucket.TestOMBucketDeleteRequest |
   |   | hadoop.ozone.om.request.key.TestOMAllocateBlockRequest |
   |   | hadoop.ozone.om.request.key.TestOMKeyCommitRequest |
   |   | hadoop.ozone.om.request.bucket.TestOMBucketCreateRequest |
   |   | hadoop.ozone.om.response.volume.TestOMVolumeCreateResponse |
   |   | hadoop.ozone.om.response.key.TestOMAllocateBlockResponse |
   |   | hadoop.ozone.om.response.file.TestOMDirectoryCreateResponse |
   |   | hadoop.ozone.om.response.key.TestOMKeyDeleteResponse |
   |   | hadoop.ozone.om.request.TestOMClientRequestWithUserInfo |
   |   | hadoop.ozone.om.response.volume.TestOMVolumeSetOwnerResponse |
   |   | hadoop.ozone.om.request.key.TestOMKeyPurgeRequestAndResponse |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=18.09.7 Server=18.09.7 base: 

[jira] [Commented] (HDDS-1771) Add slow IO disk test to fault injection test

2019-07-16 Thread Eric Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1771?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16886645#comment-16886645
 ] 

Eric Yang commented on HDDS-1771:
-

{quote}But can you please define what the expected behavior is? It's not 
clear (to me) from the tests. I assume that a good test should have some kind 
of assertions. What is the assertion here?{quote}

The existing ITReadWrite tests are supposed to pass unless the user-defined 
rate is too slow for normal operations.  When this happens, there should be 
some error message in the logs or UI reporting unhealthy disks/nodes.

{quote}What is your expectation in case of a very slow hard disk? To drop 
client connections? (If I understood well, this is what you mentioned.) To 
throw an IOException?{quote}

An IOException may be thrown on a connection that is in flight.  If the 
connection has not been established, it may throw a connection refused or 
service unavailable exception.

HA logic and disk health detection logic haven't been implemented.  The tests 
can be added later; keep this JIRA as a tuning knob for testing slow disks to 
find out the minimum IO rate required for normal operation.

> Add slow IO disk test to fault injection test
> -
>
> Key: HDDS-1771
> URL: https://issues.apache.org/jira/browse/HDDS-1771
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Eric Yang
>Priority: Major
> Attachments: HDDS-1771.001.patch, HDDS-1771.002.patch, 
> HDDS-1771.003.patch
>
>
> In fault injection testing, one possible simulation is to create slow disk 
> IO.  This test can assist in developing a set of timing profiles that work 
> for an Ozone cluster.  When we write to a file, the data travels across a 
> number of buffers and caches before it is effectively written to the disk.  
> By controlling the cgroup blkio rate in the Linux kernel, we can simulate 
> slow disk reads and writes.  Docker provides the following parameters to 
> control cgroup:
> {code}
> --device-read-bps=""
> --device-write-bps=""
> --device-read-iops=""
> --device-write-iops=""
> {code}
> The test will be added to the read/write test, with the docker-compose file 
> as a parameter, to test the timing profiles.
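
For example, a hypothetical invocation throttling a single device (the device 
path, rates, and image name are placeholders):

{code}
docker run --device-read-bps=/dev/sda:1mb \
           --device-write-bps=/dev/sda:1mb \
           --device-read-iops=/dev/sda:100 \
           --device-write-iops=/dev/sda:100 \
           apache/ozone-runner
{code}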



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work started] (HDDS-1768) Audit permission failures from authorizer

2019-07-16 Thread Dinesh Chitlangia (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1768?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HDDS-1768 started by Dinesh Chitlangia.
---
> Audit permission failures from authorizer
> -
>
> Key: HDDS-1768
> URL: https://issues.apache.org/jira/browse/HDDS-1768
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Dinesh Chitlangia
>Priority: Major
>
> Audit permission failures from authorizer



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDFS-14350) dfs.datanode.ec.reconstruction.threads not take effect

2019-07-16 Thread hunshenshi (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14350?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

hunshenshi reassigned HDFS-14350:
-

Assignee: hunshenshi

> dfs.datanode.ec.reconstruction.threads not take effect
> --
>
> Key: HDFS-14350
> URL: https://issues.apache.org/jira/browse/HDFS-14350
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 3.2.0
>Reporter: hunshenshi
>Assignee: hunshenshi
>Priority: Major
> Fix For: 3.2.0
>
>
> In ErasureCodingWorker, stripedReconstructionPool is created by:
> {code:java}
> initializeStripedBlkReconstructionThreadPool(conf.getInt(
>     DFSConfigKeys.DFS_DN_EC_RECONSTRUCTION_THREADS_KEY,
>     DFSConfigKeys.DFS_DN_EC_RECONSTRUCTION_THREADS_DEFAULT));
>
> private void initializeStripedBlkReconstructionThreadPool(int numThreads) {
>   LOG.debug("Using striped block reconstruction; pool threads={}",
>       numThreads);
>   stripedReconstructionPool = DFSUtilClient.getThreadPoolExecutor(2,
>       numThreads, 60, new LinkedBlockingQueue<>(),
>       "StripedBlockReconstruction-", false);
>   stripedReconstructionPool.allowCoreThreadTimeOut(true);
> }
> {code}
> so stripedReconstructionPool is a ThreadPoolExecutor whose work queue is an 
> unbounded LinkedBlockingQueue. The number of active threads therefore never 
> exceeds the core pool size of 2, and dfs.datanode.ec.reconstruction.threads 
> does not take effect.
>  
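
A self-contained demo of the underlying JDK behavior (plain ThreadPoolExecutor 
semantics, not HDFS code): with an unbounded work queue, the pool never grows 
past its core size, so the configured maximum is irrelevant.

{code:java}
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class UnboundedQueueDemo {
  public static void main(String[] args) throws InterruptedException {
    // corePoolSize = 2, maximumPoolSize = 16, unbounded queue.
    ThreadPoolExecutor pool = new ThreadPoolExecutor(
        2, 16, 60, TimeUnit.SECONDS, new LinkedBlockingQueue<>());
    for (int i = 0; i < 100; i++) {
      pool.execute(() -> {
        try { Thread.sleep(1000); } catch (InterruptedException ignored) { }
      });
    }
    Thread.sleep(200);
    // Prints 2: the executor only grows past corePoolSize when the queue
    // rejects an offer, and an unbounded LinkedBlockingQueue never does.
    System.out.println("pool size = " + pool.getPoolSize());
    pool.shutdownNow();
  }
}
{code}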



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDFS-14361) SNN will always upload fsimage

2019-07-16 Thread hunshenshi (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14361?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

hunshenshi reassigned HDFS-14361:
-

Assignee: hunshenshi

> SNN will always upload fsimage
> --
>
> Key: HDFS-14361
> URL: https://issues.apache.org/jira/browse/HDFS-14361
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: ha, namenode
>Affects Versions: 3.2.0
>Reporter: hunshenshi
>Assignee: hunshenshi
>Priority: Major
> Fix For: 3.2.0
>
>
> Related to -HDFS-12248.-
> {code:java}
> boolean sendRequest = isPrimaryCheckPointer
> || secsSinceLastUpload >= checkpointConf.getQuietPeriod();
> doCheckpoint(sendRequest);
> {code}
> If sendRequest is true, SNN will upload the fsimage. But isPrimaryCheckPointer 
> is always true:
> {code:java}
> if (ie == null && ioe == null) {
>   //Update only when response from remote about success or
>   lastUploadTime = monotonicNow();
>   // we are primary if we successfully updated the ANN
>   this.isPrimaryCheckPointer = success;
> }
> {code}
> isPrimaryCheckPointer should be outside the if condition.
> If the ANN update was not successful, then isPrimaryCheckPointer should be 
> set to false.
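
A sketch of the suggested change (field names as in the snippet above; the 
surrounding StandbyCheckpointer logic is elided):

{code:java}
if (ie == null && ioe == null) {
  //Update only when response from remote about success or
  lastUploadTime = monotonicNow();
}
// Moved out of the if-block: a failed upload to the ANN now resets the
// flag to false instead of leaving the previous 'true' value in place.
this.isPrimaryCheckPointer = success;
{code}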



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14547) DirectoryWithQuotaFeature.quota costs additional memory even the storage type quota is not set.

2019-07-16 Thread Jinglun (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14547?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16886624#comment-16886624
 ] 

Jinglun commented on HDFS-14547:


Thanks [~xkrogen] for your nice review and comments. :)

> DirectoryWithQuotaFeature.quota costs additional memory even the storage type 
> quota is not set.
> ---
>
> Key: HDFS-14547
> URL: https://issues.apache.org/jira/browse/HDFS-14547
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.1.0
>Reporter: Jinglun
>Assignee: Jinglun
>Priority: Major
> Fix For: 2.10.0, 3.0.4, 3.3.0, 3.2.1, 2.9.3, 3.1.3
>
> Attachments: HDFS-14547-branch-2.9.001.patch, 
> HDFS-14547-branch-2.9.002.patch, HDFS-14547-branch-2.9.003.patch, 
> HDFS-14547-design, HDFS-14547-patch003-Test Report.pdf, HDFS-14547.001.patch, 
> HDFS-14547.002.patch, HDFS-14547.003.patch, HDFS-14547.004.patch, 
> HDFS-14547.005.patch, HDFS-14547.006.patch, HDFS-14547.007.patch
>
>
> Our XiaoMi HDFS is considering upgrading from 2.6 to 3.1. We noticed the 
> storage type quota 'tsCounts' is instantiated as 
> EnumCounters<StorageType>(StorageType.class), so it will cost a long[5] even 
> if we don't have any storage type quota on this inode (only a space quota or 
> name quota).
> In our cluster we have many dirs with quotas and the NameNode's memory is 
> under pressure, so the additional cost is a problem.
>  See DirectoryWithQuotaFeature.Builder().
>  
> {code:java}
> class DirectoryWithQuotaFeature$Builder {
>   public Builder() {
>     this.quota = new QuotaCounts.Builder().nameSpace(DEFAULT_NAMESPACE_QUOTA).
>         storageSpace(DEFAULT_STORAGE_SPACE_QUOTA).
>         typeSpaces(DEFAULT_STORAGE_SPACE_QUOTA).build(); // set default value -1.
>     this.usage = new QuotaCounts.Builder().nameSpace(1).build();
>   }
>   public Builder typeSpaces(long val) { // set default value.
>     this.tsCounts.reset(val);
>     return this;
>   }
> }
>
> class QuotaCounts$Builder {
>   public Builder() {
>     this.nsSsCounts = new EnumCounters<Quota>(Quota.class);
>     this.tsCounts = new EnumCounters<StorageType>(StorageType.class);
>   }
> }
>
> class EnumCounters<E extends Enum<E>> {
>   public EnumCounters(final Class<E> enumClass) {
>     final E[] enumConstants = enumClass.getEnumConstants();
>     Preconditions.checkNotNull(enumConstants);
>     this.enumClass = enumClass;
>     this.counters = new long[enumConstants.length]; // new a long array here.
>   }
> }
> {code}
> Related to HDFS-14542.
>  
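
One possible direction, sketched under assumptions (this is not the actual 
patch; unset-quota semantics such as the -1 default would still need care): 
allocate the long[] lazily, so inodes with only name or space quotas pay 
nothing for storage type counters.

{code:java}
/** Hypothetical lazy counters: allocate the long[] only on first write. */
class LazyEnumCounters<E extends Enum<E>> {
  private final Class<E> enumClass;
  private long[] counters; // stays null for inodes with no type quota

  LazyEnumCounters(Class<E> enumClass) {
    this.enumClass = enumClass;
  }

  void set(E key, long value) {
    if (counters == null) {
      counters = new long[enumClass.getEnumConstants().length];
    }
    counters[key.ordinal()] = value;
  }

  long get(E key) {
    // 0 stands in for "unset" here; the real quota code uses -1 defaults.
    return counters == null ? 0L : counters[key.ordinal()];
  }
}
{code}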



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1775) Make OM KeyDeletingService compatible with HA model

2019-07-16 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1775?focusedWorklogId=277927&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-277927
 ]

ASF GitHub Bot logged work on HDDS-1775:


Author: ASF GitHub Bot
Created on: 17/Jul/19 02:00
Start Date: 17/Jul/19 02:00
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1063: HDDS-1775. Make 
OM KeyDeletingService compatible with HA model
URL: https://github.com/apache/hadoop/pull/1063#issuecomment-512070364
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|:--------|:--------|
   | 0 | reexec | 87 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 1 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 4 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 20 | Maven dependency ordering for branch |
   | +1 | mvninstall | 576 | trunk passed |
   | +1 | compile | 270 | trunk passed |
   | +1 | checkstyle | 79 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 916 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 184 | trunk passed |
   | 0 | spotbugs | 340 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 602 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 22 | Maven dependency ordering for patch |
   | +1 | mvninstall | 449 | the patch passed |
   | +1 | compile | 265 | the patch passed |
   | +1 | cc | 265 | the patch passed |
   | +1 | javac | 265 | the patch passed |
   | +1 | checkstyle | 77 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 710 | patch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 106 | hadoop-ozone generated 1 new + 12 unchanged - 0 fixed 
= 13 total (was 12) |
   | +1 | findbugs | 584 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 384 | hadoop-hdds in the patch passed. |
   | -1 | unit | 2317 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 48 | The patch does not generate ASF License warnings. |
   | | | 7925 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.client.rpc.TestReadRetries |
   |   | hadoop.ozone.client.rpc.TestCloseContainerHandlingByClient |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestOzoneClientRetriesOnException |
   |   | hadoop.ozone.om.TestOzoneManager |
   |   | 
hadoop.ozone.container.common.statemachine.commandhandler.TestCloseContainerByPipeline
 |
   |   | hadoop.hdds.scm.pipeline.TestRatisPipelineCreateAndDestory |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=18.09.7 Server=18.09.7 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1063/6/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1063 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle cc |
   | uname | Linux 3fc217d56237 4.15.0-48-generic #51-Ubuntu SMP Wed Apr 3 
08:28:49 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 71c21f0 |
   | Default Java | 1.8.0_212 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1063/6/artifact/out/diff-javadoc-javadoc-hadoop-ozone.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1063/6/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1063/6/testReport/ |
   | Max. process+thread count | 4342 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/common hadoop-ozone/ozone-manager 
hadoop-ozone/integration-test U: hadoop-ozone |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1063/6/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 277927)
Time Spent: 3.5h  (was: 3h 20m)

> Make OM KeyDeletingService compatible with HA model
> 

[jira] [Commented] (HDFS-14655) SBN : Namenode crashes if one of The JN is down

2019-07-16 Thread Ayush Saxena (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14655?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16886615#comment-16886615
 ] 

Ayush Saxena commented on HDFS-14655:
-

Thanx [~vagarychen]

Yes, it affected all the observers.

On the standby NNs we didn't enable edit tailing, so the standby namenodes were 
not affected; they stayed intact. But when the ONN was restarted after the 
crash, it started in the Standby state, so it was crashing in that state too.

The problem is with tailing. It can happen on a standby too, if edit log 
tailing is enabled. We just had it enabled only on our Observer node, to 
prevent stale reads.

 

> SBN : Namenode crashes if one of The JN is down
> ---
>
> Key: HDFS-14655
> URL: https://issues.apache.org/jira/browse/HDFS-14655
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Harshakiran Reddy
>Priority: Major
>
> {noformat}
> 2019-07-04 17:35:54,064 | INFO  | Logger channel (from parallel executor) to 
> XXX/XXX | Retrying connect to server: XXX/XXX. Already tried 
> 9 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, 
> sleepTime=1000 MILLISECONDS) | Client.java:975
> 2019-07-04 17:35:54,087 | FATAL | Edit log tailer | Unknown error encountered 
> while tailing edits. Shutting down standby NN. | EditLogTailer.java:474
> java.lang.OutOfMemoryError: unable to create new native thread
>   at java.lang.Thread.start0(Native Method)
>   at java.lang.Thread.start(Thread.java:717)
>   at 
> java.util.concurrent.ThreadPoolExecutor.addWorker(ThreadPoolExecutor.java:957)
>   at 
> java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1378)
>   at 
> com.google.common.util.concurrent.MoreExecutors$ListeningDecorator.execute(MoreExecutors.java:440)
>   at 
> com.google.common.util.concurrent.AbstractListeningExecutorService.submit(AbstractListeningExecutorService.java:56)
>   at 
> org.apache.hadoop.hdfs.qjournal.client.IPCLoggerChannel.getJournaledEdits(IPCLoggerChannel.java:565)
>   at 
> org.apache.hadoop.hdfs.qjournal.client.AsyncLoggerSet.getJournaledEdits(AsyncLoggerSet.java:272)
>   at 
> org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager.selectRpcInputStreams(QuorumJournalManager.java:533)
>   at 
> org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager.selectInputStreams(QuorumJournalManager.java:508)
>   at 
> org.apache.hadoop.hdfs.server.namenode.JournalSet.selectInputStreams(JournalSet.java:275)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLog.selectInputStreams(FSEditLog.java:1681)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLog.selectInputStreams(FSEditLog.java:1714)
>   at 
> org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer.doTailEdits(EditLogTailer.java:307)
>   at 
> org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.doWork(EditLogTailer.java:460)
>   at 
> org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.access$300(EditLogTailer.java:410)
>   at 
> org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread$1.run(EditLogTailer.java:427)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:360)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1709)
>   at 
> org.apache.hadoop.security.SecurityUtil.doAsLoginUserOrFatal(SecurityUtil.java:483)
>   at 
> org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.run(EditLogTailer.java:423)
> 2019-07-04 17:35:54,112 | INFO  | Edit log tailer | Exiting with status 1: 
> java.lang.OutOfMemoryError: unable to create new native thread | 
> ExitUtil.java:210
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1775) Make OM KeyDeletingService compatible with HA model

2019-07-16 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1775?focusedWorklogId=277913&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-277913
 ]

ASF GitHub Bot logged work on HDDS-1775:


Author: ASF GitHub Bot
Created on: 17/Jul/19 01:14
Start Date: 17/Jul/19 01:14
Worklog Time Spent: 10m 
  Work Description: anuengineer commented on issue #1063: HDDS-1775. Make 
OM KeyDeletingService compatible with HA model
URL: https://github.com/apache/hadoop/pull/1063#issuecomment-512061464
 
 
   When you commit the next JIRA, you might want to take care of the 2 
CheckStyle and 3 JavaDoc issues. If you click on the links you will see the 
errors. Thank you for getting this done. Appreciate it.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 277913)
Time Spent: 3h 20m  (was: 3h 10m)

> Make OM KeyDeletingService compatible with HA model
> ---
>
> Key: HDDS-1775
> URL: https://issues.apache.org/jira/browse/HDDS-1775
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 3h 20m
>  Remaining Estimate: 0h
>
> Currently OM KeyDeletingService directly deletes all the keys in DeletedTable 
> after deleting the corresponding blocks through SCM. For HA compatibility, 
> the key purging should happen through the OM Ratis server. This Jira 
> introduces a PurgeKeys request in the OM protocol. This request will be 
> submitted to the OM's Ratis server after SCM deletes the blocks corresponding 
> to deleted keys.
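
A rough sketch of the flow the description implies (all interfaces below are 
hypothetical stand-ins, not the actual OM/SCM APIs):

{code:java}
import java.util.List;

/** Hypothetical stand-in for the SCM block deletion client. */
interface ScmBlockClient {
  boolean deleteBlocks(List<String> deletedKeyNames);
}

/** Hypothetical stand-in for the OM Ratis server. */
interface OmRatisServer {
  void submitRequest(Object request);
}

/** Hypothetical PurgeKeys request replicated through Ratis. */
class PurgeKeysRequest {
  final List<String> keyNames;
  PurgeKeysRequest(List<String> keyNames) {
    this.keyNames = keyNames;
  }
}

class KeyDeletingServiceSketch {
  private final ScmBlockClient scm;
  private final OmRatisServer ratis;

  KeyDeletingServiceSketch(ScmBlockClient scm, OmRatisServer ratis) {
    this.scm = scm;
    this.ratis = ratis;
  }

  void deleteKeys(List<String> deletedTableKeys) {
    // 1. Delete the underlying blocks through SCM, as before.
    if (scm.deleteBlocks(deletedTableKeys)) {
      // 2. Purge through Ratis so every OM replica removes the same
      //    DeletedTable entries, instead of deleting from the DB directly.
      ratis.submitRequest(new PurgeKeysRequest(deletedTableKeys));
    }
  }
}
{code}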



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1689) Implement S3 Create Bucket request to use Cache and DoubleBuffer

2019-07-16 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1689?focusedWorklogId=277894&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-277894
 ]

ASF GitHub Bot logged work on HDDS-1689:


Author: ASF GitHub Bot
Created on: 17/Jul/19 00:34
Start Date: 17/Jul/19 00:34
Worklog Time Spent: 10m 
  Work Description: arp7 commented on pull request #1088: HDDS-1689. 
Implement S3 Create Bucket request to use Cache and DoubleBuffer.
URL: https://github.com/apache/hadoop/pull/1088#discussion_r304179189
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/s3/bucket/S3BucketCreateRequest.java
 ##
 @@ -0,0 +1,350 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.ozone.om.request.s3.bucket;
+
+import com.google.common.annotations.VisibleForTesting;
+import com.google.common.base.Optional;
+import com.google.common.base.Preconditions;
+import org.apache.hadoop.hdds.protocol.StorageType;
+import org.apache.hadoop.ozone.OzoneConsts;
+import org.apache.hadoop.ozone.audit.OMAction;
+import org.apache.hadoop.ozone.om.OMMetadataManager;
+import org.apache.hadoop.ozone.om.OMMetrics;
+import org.apache.hadoop.ozone.om.OzoneManager;
+import org.apache.hadoop.ozone.om.exceptions.OMException;
+import org.apache.hadoop.ozone.om.helpers.OmBucketInfo;
+import org.apache.hadoop.ozone.om.helpers.OmVolumeArgs;
+import org.apache.hadoop.ozone.om.request.volume.OMVolumeRequest;
+import org.apache.hadoop.ozone.om.response.OMClientResponse;
+import org.apache.hadoop.ozone.om.response.bucket.OMBucketCreateResponse;
+import org.apache.hadoop.ozone.om.response.s3.bucket.S3BucketCreateResponse;
+import org.apache.hadoop.ozone.om.response.volume.OMVolumeCreateResponse;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
+.OMRequest;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
+.OMResponse;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
+.S3CreateBucketRequest;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
+.S3CreateBucketResponse;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
+.S3CreateVolumeInfo;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
+.VolumeList;
+import org.apache.hadoop.ozone.security.acl.IAccessAuthorizer;
+import org.apache.hadoop.ozone.security.acl.OzoneObj;
+import org.apache.hadoop.util.Time;
+import org.apache.hadoop.utils.db.cache.CacheKey;
+import org.apache.hadoop.utils.db.cache.CacheValue;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.IOException;
+import java.util.HashMap;
+import java.util.Map;
+
+import static org.apache.hadoop.ozone.OzoneConsts.OM_S3_VOLUME_PREFIX;
+import static 
org.apache.hadoop.ozone.om.lock.OzoneManagerLock.Resource.BUCKET_LOCK;
+import static 
org.apache.hadoop.ozone.om.lock.OzoneManagerLock.Resource.S3_BUCKET_LOCK;
+import static 
org.apache.hadoop.ozone.om.lock.OzoneManagerLock.Resource.USER_LOCK;
+import static 
org.apache.hadoop.ozone.om.lock.OzoneManagerLock.Resource.VOLUME_LOCK;
+
+/**
+ * Handles S3 Bucket create request.
+ */
+public class S3BucketCreateRequest extends OMVolumeRequest {
+
+  private static final String S3_ADMIN_NAME = "OzoneS3Manager";
+
+  private static final Logger LOG =
+  LoggerFactory.getLogger(S3CreateBucketRequest.class);
+
+  public S3BucketCreateRequest(OMRequest omRequest) {
+super(omRequest);
+  }
+
+  @Override
+  public OMRequest preExecute(OzoneManager ozoneManager) throws IOException {
+S3CreateBucketRequest s3CreateBucketRequest =
+getOmRequest().getCreateS3BucketRequest();
+Preconditions.checkNotNull(s3CreateBucketRequest);
+
+S3CreateBucketRequest.Builder newS3CreateBucketRequest =
+s3CreateBucketRequest.toBuilder().setS3CreateVolumeInfo(
+S3CreateVolumeInfo.newBuilder().setCreationTime(Time.now()));
+
+// TODO: Do we need to enforce the bucket rules in this 

[jira] [Work logged] (HDDS-1689) Implement S3 Create Bucket request to use Cache and DoubleBuffer

2019-07-16 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1689?focusedWorklogId=277885&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-277885
 ]

ASF GitHub Bot logged work on HDDS-1689:


Author: ASF GitHub Bot
Created on: 17/Jul/19 00:31
Start Date: 17/Jul/19 00:31
Worklog Time Spent: 10m 
  Work Description: arp7 commented on pull request #1088: HDDS-1689. 
Implement S3 Create Bucket request to use Cache and DoubleBuffer.
URL: https://github.com/apache/hadoop/pull/1088#discussion_r304178640
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/s3/bucket/S3BucketCreateRequest.java
 ##
 @@ -0,0 +1,350 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.ozone.om.request.s3.bucket;
+
+import com.google.common.annotations.VisibleForTesting;
+import com.google.common.base.Optional;
+import com.google.common.base.Preconditions;
+import org.apache.hadoop.hdds.protocol.StorageType;
+import org.apache.hadoop.ozone.OzoneConsts;
+import org.apache.hadoop.ozone.audit.OMAction;
+import org.apache.hadoop.ozone.om.OMMetadataManager;
+import org.apache.hadoop.ozone.om.OMMetrics;
+import org.apache.hadoop.ozone.om.OzoneManager;
+import org.apache.hadoop.ozone.om.exceptions.OMException;
+import org.apache.hadoop.ozone.om.helpers.OmBucketInfo;
+import org.apache.hadoop.ozone.om.helpers.OmVolumeArgs;
+import org.apache.hadoop.ozone.om.request.volume.OMVolumeRequest;
+import org.apache.hadoop.ozone.om.response.OMClientResponse;
+import org.apache.hadoop.ozone.om.response.bucket.OMBucketCreateResponse;
+import org.apache.hadoop.ozone.om.response.s3.bucket.S3BucketCreateResponse;
+import org.apache.hadoop.ozone.om.response.volume.OMVolumeCreateResponse;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
+.OMRequest;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
+.OMResponse;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
+.S3CreateBucketRequest;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
+.S3CreateBucketResponse;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
+.S3CreateVolumeInfo;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
+.VolumeList;
+import org.apache.hadoop.ozone.security.acl.IAccessAuthorizer;
+import org.apache.hadoop.ozone.security.acl.OzoneObj;
+import org.apache.hadoop.util.Time;
+import org.apache.hadoop.utils.db.cache.CacheKey;
+import org.apache.hadoop.utils.db.cache.CacheValue;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.IOException;
+import java.util.HashMap;
+import java.util.Map;
+
+import static org.apache.hadoop.ozone.OzoneConsts.OM_S3_VOLUME_PREFIX;
+import static 
org.apache.hadoop.ozone.om.lock.OzoneManagerLock.Resource.BUCKET_LOCK;
+import static 
org.apache.hadoop.ozone.om.lock.OzoneManagerLock.Resource.S3_BUCKET_LOCK;
+import static 
org.apache.hadoop.ozone.om.lock.OzoneManagerLock.Resource.USER_LOCK;
+import static 
org.apache.hadoop.ozone.om.lock.OzoneManagerLock.Resource.VOLUME_LOCK;
+
+/**
+ * Handles S3 Bucket create request.
+ */
+public class S3BucketCreateRequest extends OMVolumeRequest {
+
+  private static final String S3_ADMIN_NAME = "OzoneS3Manager";
+
+  private static final Logger LOG =
+  LoggerFactory.getLogger(S3CreateBucketRequest.class);
+
+  public S3BucketCreateRequest(OMRequest omRequest) {
+super(omRequest);
+  }
+
+  @Override
+  public OMRequest preExecute(OzoneManager ozoneManager) throws IOException {
+S3CreateBucketRequest s3CreateBucketRequest =
+getOmRequest().getCreateS3BucketRequest();
+Preconditions.checkNotNull(s3CreateBucketRequest);
+
+S3CreateBucketRequest.Builder newS3CreateBucketRequest =
+s3CreateBucketRequest.toBuilder().setS3CreateVolumeInfo(
+S3CreateVolumeInfo.newBuilder().setCreationTime(Time.now()));
+
+// TODO: Do we need to enforce the bucket rules in this 

[jira] [Work logged] (HDDS-1689) Implement S3 Create Bucket request to use Cache and DoubleBuffer

2019-07-16 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1689?focusedWorklogId=277886=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-277886
 ]

ASF GitHub Bot logged work on HDDS-1689:


Author: ASF GitHub Bot
Created on: 17/Jul/19 00:31
Start Date: 17/Jul/19 00:31
Worklog Time Spent: 10m 
  Work Description: arp7 commented on pull request #1088: HDDS-1689. 
Implement S3 Create Bucket request to use Cache and DoubleBuffer.
URL: https://github.com/apache/hadoop/pull/1088#discussion_r304178640
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/s3/bucket/S3BucketCreateRequest.java
 ##
 @@ -0,0 +1,350 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.ozone.om.request.s3.bucket;
+
+import com.google.common.annotations.VisibleForTesting;
+import com.google.common.base.Optional;
+import com.google.common.base.Preconditions;
+import org.apache.hadoop.hdds.protocol.StorageType;
+import org.apache.hadoop.ozone.OzoneConsts;
+import org.apache.hadoop.ozone.audit.OMAction;
+import org.apache.hadoop.ozone.om.OMMetadataManager;
+import org.apache.hadoop.ozone.om.OMMetrics;
+import org.apache.hadoop.ozone.om.OzoneManager;
+import org.apache.hadoop.ozone.om.exceptions.OMException;
+import org.apache.hadoop.ozone.om.helpers.OmBucketInfo;
+import org.apache.hadoop.ozone.om.helpers.OmVolumeArgs;
+import org.apache.hadoop.ozone.om.request.volume.OMVolumeRequest;
+import org.apache.hadoop.ozone.om.response.OMClientResponse;
+import org.apache.hadoop.ozone.om.response.bucket.OMBucketCreateResponse;
+import org.apache.hadoop.ozone.om.response.s3.bucket.S3BucketCreateResponse;
+import org.apache.hadoop.ozone.om.response.volume.OMVolumeCreateResponse;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos.OMRequest;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos.OMResponse;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos.S3CreateBucketRequest;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos.S3CreateBucketResponse;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos.S3CreateVolumeInfo;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos.VolumeList;
+import org.apache.hadoop.ozone.security.acl.IAccessAuthorizer;
+import org.apache.hadoop.ozone.security.acl.OzoneObj;
+import org.apache.hadoop.util.Time;
+import org.apache.hadoop.utils.db.cache.CacheKey;
+import org.apache.hadoop.utils.db.cache.CacheValue;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.IOException;
+import java.util.HashMap;
+import java.util.Map;
+
+import static org.apache.hadoop.ozone.OzoneConsts.OM_S3_VOLUME_PREFIX;
+import static org.apache.hadoop.ozone.om.lock.OzoneManagerLock.Resource.BUCKET_LOCK;
+import static org.apache.hadoop.ozone.om.lock.OzoneManagerLock.Resource.S3_BUCKET_LOCK;
+import static org.apache.hadoop.ozone.om.lock.OzoneManagerLock.Resource.USER_LOCK;
+import static org.apache.hadoop.ozone.om.lock.OzoneManagerLock.Resource.VOLUME_LOCK;
+
+/**
+ * Handles S3 Bucket create request.
+ */
+public class S3BucketCreateRequest extends OMVolumeRequest {
+
+  private static final String S3_ADMIN_NAME = "OzoneS3Manager";
+
+  private static final Logger LOG =
+      LoggerFactory.getLogger(S3BucketCreateRequest.class);
+
+  public S3BucketCreateRequest(OMRequest omRequest) {
+    super(omRequest);
+  }
+
+  @Override
+  public OMRequest preExecute(OzoneManager ozoneManager) throws IOException {
+    S3CreateBucketRequest s3CreateBucketRequest =
+        getOmRequest().getCreateS3BucketRequest();
+    Preconditions.checkNotNull(s3CreateBucketRequest);
+
+    S3CreateBucketRequest.Builder newS3CreateBucketRequest =
+        s3CreateBucketRequest.toBuilder().setS3CreateVolumeInfo(
+            S3CreateVolumeInfo.newBuilder().setCreationTime(Time.now()));
+
+    // TODO: Do we need to enforce the bucket rules in this
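A note on the hunk above: stamping the creation time inside preExecute, before the request is handed to Ratis, keeps the replicated transaction deterministic, because every OM replica applies the timestamp chosen once by the leader rather than calling Time.now() independently. A minimal hedged sketch of that pattern, using only the names quoted above, with illustrative comments added:

    @Override
    public OMRequest preExecute(OzoneManager ozoneManager) throws IOException {
      S3CreateBucketRequest s3CreateBucketRequest =
          getOmRequest().getCreateS3BucketRequest();
      Preconditions.checkNotNull(s3CreateBucketRequest);

      // Fix the creation time once, on the leader, so replaying the request
      // on follower OMs cannot produce divergent timestamps.
      S3CreateBucketRequest.Builder newS3CreateBucketRequest =
          s3CreateBucketRequest.toBuilder().setS3CreateVolumeInfo(
              S3CreateVolumeInfo.newBuilder().setCreationTime(Time.now()));
      // ... remainder of the method as in the (truncated) hunk above.
    }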

[jira] [Assigned] (HDDS-1619) Changes for OM HA.

2019-07-16 Thread Xiaoyu Yao (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1619?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao reassigned HDDS-1619:


Assignee: Xiaoyu Yao

> Changes for OM HA.
> --
>
> Key: HDDS-1619
> URL: https://issues.apache.org/jira/browse/HDDS-1619
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Xiaoyu Yao
>Priority: Major
>
> [HDDS-1539] adds 4 new api for Ozone rpc client. OM HA implementation needs 
> to handle them.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1686) Remove check to get from openKeyTable in acl implementation for Keys

2019-07-16 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1686?focusedWorklogId=277882&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-277882
 ]

ASF GitHub Bot logged work on HDDS-1686:


Author: ASF GitHub Bot
Created on: 17/Jul/19 00:28
Start Date: 17/Jul/19 00:28
Worklog Time Spent: 10m 
  Work Description: xiaoyuyao commented on pull request #966: HDDS-1686. 
Remove check to get from openKeyTable in acl implementatio…
URL: https://github.com/apache/hadoop/pull/966#discussion_r304178181
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/KeyManagerImpl.java
 ##
 @@ -1370,17 +1370,10 @@ public boolean addAcl(OzoneObj obj, OzoneAcl acl) 
throws IOException {
   validateBucket(volume, bucket);
   String objectKey = metadataManager.getOzoneKey(volume, bucket, keyName);
   OmKeyInfo keyInfo = metadataManager.getKeyTable().get(objectKey);
-  Table keyTable;
   if (keyInfo == null) {
 
 Review comment:
   I don't have a specific use case for setting an acl on an uncommitted key, as it 
is only visible to the client holding the lease. Based on that, I think it is OK to 
skip checking the openKeyTable for acl.  
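
For illustration only, a minimal sketch of the simplified lookup this comment argues for, with the openKeyTable fallback removed; getOzoneKey and getKeyTable mirror the quoted KeyManagerImpl hunk, while the addAcl mutator and the exact error handling are assumptions:

    // Hedged sketch, not the committed patch: look up the committed key only.
    String objectKey = metadataManager.getOzoneKey(volume, bucket, keyName);
    OmKeyInfo keyInfo = metadataManager.getKeyTable().get(objectKey);
    if (keyInfo == null) {
      // Open (uncommitted) keys are stored under clientID-suffixed names and
      // are visible only to the lease holder, so there is nothing to update.
      throw new OMException("Key not found: " + keyName,
          OMException.ResultCodes.KEY_NOT_FOUND);
    }
    keyInfo.addAcl(acl);  // hypothetical mutator, for brevity
    metadataManager.getKeyTable().put(objectKey, keyInfo);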
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 277882)
Time Spent: 1h  (was: 50m)

> Remove check to get from openKeyTable in acl implementation for Keys
> 
>
> Key: HDDS-1686
> URL: https://issues.apache.org/jira/browse/HDDS-1686
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> Remove the checks for openKeyTable in the acl implementation for keys.
> Since entries in the openKeyTable are suffixed with the clientID, a lookup by 
> plain key name will never find them there. We can save one DB read this way.
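
A hedged illustration of why the extra read never pays off (the separator, the sample clientID, and the getOpenKey helper shape are assumptions for this sketch):

    // Committed vs. open key names, roughly:
    //   keyTable:     /vol1/bucket1/key1
    //   openKeyTable: /vol1/bucket1/key1/104857601   <- clientID appended
    String committedKey = metadataManager.getOzoneKey(volume, bucket, keyName);
    String openKey = metadataManager.getOpenKey(volume, bucket, keyName, clientID);
    // A lookup in openKeyTable by committedKey can therefore never match,
    // which is why skipping it saves one DB read per acl operation.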



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1686) Remove check to get from openKeyTable in acl implementation for Keys

2019-07-16 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1686?focusedWorklogId=277883&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-277883
 ]

ASF GitHub Bot logged work on HDDS-1686:


Author: ASF GitHub Bot
Created on: 17/Jul/19 00:28
Start Date: 17/Jul/19 00:28
Worklog Time Spent: 10m 
  Work Description: xiaoyuyao commented on issue #966: HDDS-1686. Remove 
check to get from openKeyTable in acl implementatio…
URL: https://github.com/apache/hadoop/pull/966#issuecomment-512051705
 
 
   /retest
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 277883)
Time Spent: 1h 10m  (was: 1h)

> Remove check to get from openKeyTable in acl implementation for Keys
> 
>
> Key: HDDS-1686
> URL: https://issues.apache.org/jira/browse/HDDS-1686
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> Remove the checks for openKeyTable in the acl implementation for keys.
> Since entries in the openKeyTable are suffixed with the clientID, a lookup by 
> plain key name will never find them there. We can save one DB read this way.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1689) Implement S3 Create Bucket request to use Cache and DoubleBuffer

2019-07-16 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1689?focusedWorklogId=277881&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-277881
 ]

ASF GitHub Bot logged work on HDDS-1689:


Author: ASF GitHub Bot
Created on: 17/Jul/19 00:23
Start Date: 17/Jul/19 00:23
Worklog Time Spent: 10m 
  Work Description: arp7 commented on pull request #1088: HDDS-1689. 
Implement S3 Create Bucket request to use Cache and DoubleBuffer.
URL: https://github.com/apache/hadoop/pull/1088#discussion_r304177406
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/s3/bucket/S3BucketCreateRequest.java
 ##
 @@ -0,0 +1,350 @@

[jira] [Work logged] (HDDS-1689) Implement S3 Create Bucket request to use Cache and DoubleBuffer

2019-07-16 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1689?focusedWorklogId=277880&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-277880
 ]

ASF GitHub Bot logged work on HDDS-1689:


Author: ASF GitHub Bot
Created on: 17/Jul/19 00:22
Start Date: 17/Jul/19 00:22
Worklog Time Spent: 10m 
  Work Description: arp7 commented on pull request #1088: HDDS-1689. 
Implement S3 Create Bucket request to use Cache and DoubleBuffer.
URL: https://github.com/apache/hadoop/pull/1088#discussion_r304177261
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/s3/bucket/S3BucketCreateRequest.java
 ##
 @@ -0,0 +1,350 @@

[jira] [Work logged] (HDDS-1689) Implement S3 Create Bucket request to use Cache and DoubleBuffer

2019-07-16 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1689?focusedWorklogId=277877&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-277877
 ]

ASF GitHub Bot logged work on HDDS-1689:


Author: ASF GitHub Bot
Created on: 17/Jul/19 00:17
Start Date: 17/Jul/19 00:17
Worklog Time Spent: 10m 
  Work Description: arp7 commented on pull request #1088: HDDS-1689. 
Implement S3 Create Bucket request to use Cache and DoubleBuffer.
URL: https://github.com/apache/hadoop/pull/1088#discussion_r304176464
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/s3/bucket/S3BucketCreateRequest.java
 ##
 @@ -0,0 +1,350 @@

[jira] [Work logged] (HDDS-1689) Implement S3 Create Bucket request to use Cache and DoubleBuffer

2019-07-16 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1689?focusedWorklogId=277875&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-277875
 ]

ASF GitHub Bot logged work on HDDS-1689:


Author: ASF GitHub Bot
Created on: 17/Jul/19 00:14
Start Date: 17/Jul/19 00:14
Worklog Time Spent: 10m 
  Work Description: arp7 commented on pull request #1088: HDDS-1689. 
Implement S3 Create Bucket request to use Cache and DoubleBuffer.
URL: https://github.com/apache/hadoop/pull/1088#discussion_r304175893
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/response/s3/bucket/S3BucketCreateResponse.java
 ##
 @@ -0,0 +1,72 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with this
+ * work for additional information regarding copyright ownership.  The ASF
+ * licenses this file to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+ * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+ * License for the specific language governing permissions and limitations under
+ * the License.
+ */
+
+package org.apache.hadoop.ozone.om.response.s3.bucket;
+
+import org.apache.hadoop.ozone.om.OMMetadataManager;
+import org.apache.hadoop.ozone.om.response.OMClientResponse;
+import org.apache.hadoop.ozone.om.response.bucket.OMBucketCreateResponse;
+import org.apache.hadoop.ozone.om.response.volume.OMVolumeCreateResponse;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos.OMResponse;
+import org.apache.hadoop.utils.db.BatchOperation;
+
+import javax.annotation.Nonnull;
+import javax.annotation.Nullable;
+import java.io.IOException;
+
+/**
+ * Response for S3Bucket create request.
+ */
+public class S3BucketCreateResponse extends OMClientResponse {
+
+  private OMVolumeCreateResponse omVolumeCreateResponse;
+  private OMBucketCreateResponse omBucketCreateResponse;
+  private String s3Bucket;
+  private String s3Mapping;
+
+  public S3BucketCreateResponse(
+      @Nullable OMVolumeCreateResponse omVolumeCreateResponse,
+      @Nullable OMBucketCreateResponse omBucketCreateResponse,
+      @Nullable String s3BucketName,
+      @Nullable String s3Mapping, @Nonnull OMResponse omResponse) {
+    super(omResponse);
+    this.omVolumeCreateResponse = omVolumeCreateResponse;
+    this.omBucketCreateResponse = omBucketCreateResponse;
+    this.s3Bucket = s3BucketName;
+    this.s3Mapping = s3Mapping;
+  }
+
+  @Override
+  public void addToDBBatch(OMMetadataManager omMetadataManager,
+      BatchOperation batchOperation) throws IOException {
+
+    if (getOMResponse().getStatus() == OzoneManagerProtocolProtos.Status.OK) {
+      if (omVolumeCreateResponse != null) {
+        omVolumeCreateResponse.addToDBBatch(omMetadataManager, batchOperation);
+      }
+
+      if (omBucketCreateResponse != null) {
 
 Review comment:
   If status is OK then omBucketCreateResponse must be non-null right?
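
As a hedged illustration of the invariant raised here (an alternative sketch, not the committed code): if a bucket response must accompany an OK status, a precondition surfaces violations immediately instead of silently skipping the write:

    if (getOMResponse().getStatus() == OzoneManagerProtocolProtos.Status.OK) {
      // Fail fast if the invariant is broken; the message text is illustrative.
      Preconditions.checkNotNull(omBucketCreateResponse,
          "omBucketCreateResponse should not be null when status is OK");
      omBucketCreateResponse.addToDBBatch(omMetadataManager, batchOperation);
    }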
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 277875)
Time Spent: 1h 50m  (was: 1h 40m)

> Implement S3 Create Bucket request to use Cache and DoubleBuffer
> 
>
> Key: HDDS-1689
> URL: https://issues.apache.org/jira/browse/HDDS-1689
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Manager
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 50m
>  Remaining Estimate: 0h
>
> Implement S3 Bucket write requests to use OM Cache, double buffer.
>  
> In this Jira we will add the changes to implement the S3 bucket operations. 
> HA and non-HA will have different code paths for now, but once all requests 
> are implemented there will be a single code path.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For 

[jira] [Work logged] (HDDS-1689) Implement S3 Create Bucket request to use Cache and DoubleBuffer

2019-07-16 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1689?focusedWorklogId=277874&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-277874
 ]

ASF GitHub Bot logged work on HDDS-1689:


Author: ASF GitHub Bot
Created on: 17/Jul/19 00:14
Start Date: 17/Jul/19 00:14
Worklog Time Spent: 10m 
  Work Description: arp7 commented on pull request #1088: HDDS-1689. 
Implement S3 Create Bucket request to use Cache and DoubleBuffer.
URL: https://github.com/apache/hadoop/pull/1088#discussion_r304175893
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/response/s3/bucket/S3BucketCreateResponse.java
 ##
 @@ -0,0 +1,72 @@
+  @Override
+  public void addToDBBatch(OMMetadataManager omMetadataManager,
+      BatchOperation batchOperation) throws IOException {
+
+    if (getOMResponse().getStatus() == OzoneManagerProtocolProtos.Status.OK) {
+      if (omVolumeCreateResponse != null) {
+        omVolumeCreateResponse.addToDBBatch(omMetadataManager, batchOperation);
+      }
+
+      if (omBucketCreateResponse != null) {
 
 Review comment:
   If status is okay then omBucketCreateResponse must be non-null right?
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 277874)
Time Spent: 1h 40m  (was: 1.5h)

> Implement S3 Create Bucket request to use Cache and DoubleBuffer
> 
>
> Key: HDDS-1689
> URL: https://issues.apache.org/jira/browse/HDDS-1689
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Manager
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 40m
>  Remaining Estimate: 0h
>
> Implement S3 Bucket write requests to use OM Cache, double buffer.
>  
> In this Jira we will add the changes to implement the S3 bucket operations. 
> HA and non-HA will have different code paths for now, but once all requests 
> are implemented there will be a single code path.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For 

[jira] [Work logged] (HDDS-1689) Implement S3 Create Bucket request to use Cache and DoubleBuffer

2019-07-16 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1689?focusedWorklogId=277873&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-277873
 ]

ASF GitHub Bot logged work on HDDS-1689:


Author: ASF GitHub Bot
Created on: 17/Jul/19 00:11
Start Date: 17/Jul/19 00:11
Worklog Time Spent: 10m 
  Work Description: arp7 commented on pull request #1088: HDDS-1689. 
Implement S3 Create Bucket request to use Cache and DoubleBuffer.
URL: https://github.com/apache/hadoop/pull/1088#discussion_r304175519
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/response/s3/bucket/S3BucketCreateResponse.java
 ##
 @@ -0,0 +1,72 @@
+  @Override
+  public void addToDBBatch(OMMetadataManager omMetadataManager,
+      BatchOperation batchOperation) throws IOException {
+
+    if (getOMResponse().getStatus() == OzoneManagerProtocolProtos.Status.OK) {
+      if (omVolumeCreateResponse != null) {
+        omVolumeCreateResponse.addToDBBatch(omMetadataManager, batchOperation);
+      }
+
+      if (omBucketCreateResponse != null) {
+        omBucketCreateResponse.addToDBBatch(omMetadataManager, batchOperation);
+      }
+
+      omMetadataManager.getS3Table().putWithBatch(batchOperation, s3Bucket,
+          s3Mapping);
+    }
+  }
+}
 
 Review comment:
   Add a newline, please. I missed this in general; it's good to have files end in 
newlines.
   
   Perhaps not true these days, but historically some tools used to choke on 
text files that don't end in a newline.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 277873)
Time Spent: 1.5h  (was: 1h 20m)

> Implement S3 Create Bucket request to use Cache and DoubleBuffer
> 
>
> Key: HDDS-1689
> URL: https://issues.apache.org/jira/browse/HDDS-1689
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Manager
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> Implement S3 Bucket write requests to use OM Cache, double buffer.
>  
> In this Jira we will add the changes to implement S3 

[jira] [Work logged] (HDDS-1689) Implement S3 Create Bucket request to use Cache and DoubleBuffer

2019-07-16 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1689?focusedWorklogId=277869&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-277869
 ]

ASF GitHub Bot logged work on HDDS-1689:


Author: ASF GitHub Bot
Created on: 17/Jul/19 00:02
Start Date: 17/Jul/19 00:02
Worklog Time Spent: 10m 
  Work Description: arp7 commented on pull request #1088: HDDS-1689. 
Implement S3 Create Bucket request to use Cache and DoubleBuffer.
URL: https://github.com/apache/hadoop/pull/1088#discussion_r304173803
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/test/java/org/apache/hadoop/ozone/om/request/s3/bucket/package-info.java
 ##
 @@ -0,0 +1,23 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ *  with the License.  You may obtain a copy of the License at
+ *
+ *  http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ *
+ */
+
+/**
+ * Package contains test classes for s3 bucket requests.
+ */
+package org.apache.hadoop.ozone.om.request.s3.bucket;
 
 Review comment:
   Add newline at end of file.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 277869)
Time Spent: 1h 20m  (was: 1h 10m)

> Implement S3 Create Bucket request to use Cache and DoubleBuffer
> 
>
> Key: HDDS-1689
> URL: https://issues.apache.org/jira/browse/HDDS-1689
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Manager
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> Implement S3 Bucket write requests to use OM Cache, double buffer.
>  
> In this Jira we will add the changes to implement the S3 bucket operations. 
> HA and non-HA will have different code paths for now, but once all requests 
> are implemented there will be a single code path.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1775) Make OM KeyDeletingService compatible with HA model

2019-07-16 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1775?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16886562#comment-16886562
 ] 

Hudson commented on HDDS-1775:
--

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #16931 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/16931/])
HDDS-1775. Make OM KeyDeletingService compatible with HA model (#1063) (github: 
rev 7adc4a327ace7a435fe5b4fd92ab99018d0eb365)
* (add) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/om/TestKeyPurging.java
* (edit) 
hadoop-ozone/ozone-manager/src/test/java/org/apache/hadoop/ozone/om/request/TestOMRequestUtils.java
* (edit) 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/audit/OMAction.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/KeyDeletingService.java
* (add) 
hadoop-ozone/ozone-manager/src/test/java/org/apache/hadoop/ozone/om/request/key/TestOMKeyPurgeRequestAndResponse.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/ratis/utils/OzoneManagerRatisUtils.java
* (edit) 
hadoop-ozone/ozone-manager/src/test/java/org/apache/hadoop/ozone/om/response/key/TestOMKeyDeleteResponse.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OzoneManager.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/key/OMKeyDeleteRequest.java
* (edit) hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/OmUtils.java
* (edit) hadoop-ozone/common/src/main/proto/OzoneManagerProtocol.proto
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/protocolPB/OzoneManagerHARequestHandlerImpl.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/KeyManagerImpl.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/response/key/OMKeyDeleteResponse.java
* (add) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/key/OMKeyPurgeRequest.java
* (add) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/response/key/OMKeyPurgeResponse.java


> Make OM KeyDeletingService compatible with HA model
> ---
>
> Key: HDDS-1775
> URL: https://issues.apache.org/jira/browse/HDDS-1775
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 3h 10m
>  Remaining Estimate: 0h
>
> Currently, the OM KeyDeletingService directly deletes all the keys in the 
> DeletedTable after deleting the corresponding blocks through SCM. For HA 
> compatibility, the key purging should happen through the OM Ratis server. 
> This Jira introduces a PurgeKeys request in the OM protocol. This request 
> will be submitted to the OM's Ratis server after SCM deletes the blocks 
> corresponding to deleted keys.
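
A hedged sketch of the flow this description outlines; apart from the PurgeKeys command itself, the builder and submission method names are illustrative assumptions:

    // In the key-deleting background service, after SCM confirms the blocks
    // are deleted, submit a PurgeKeys request through the OM's Ratis server
    // so the DeletedTable cleanup is replicated to all OMs.
    OMRequest omRequest = OMRequest.newBuilder()
        .setCmdType(Type.PurgeKeys)
        .setPurgeKeysRequest(PurgeKeysRequest.newBuilder()
            .addAllKeys(purgedKeyNames))           // keys whose blocks are gone
        .build();
    ozoneManagerRatisServer.submitRequest(omRequest);  // replicated, not local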



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1775) Make OM KeyDeletingService compatible with HA model

2019-07-16 Thread Hanisha Koneru (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1775?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hanisha Koneru updated HDDS-1775:
-
Resolution: Fixed
Status: Resolved  (was: Patch Available)

> Make OM KeyDeletingService compatible with HA model
> ---
>
> Key: HDDS-1775
> URL: https://issues.apache.org/jira/browse/HDDS-1775
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 3h 10m
>  Remaining Estimate: 0h
>
> Currently, the OM KeyDeletingService directly deletes all the keys in the 
> DeletedTable after deleting the corresponding blocks through SCM. For HA 
> compatibility, the key purging should happen through the OM Ratis server. 
> This Jira introduces a PurgeKeys request in the OM protocol. This request 
> will be submitted to the OM's Ratis server after SCM deletes the blocks 
> corresponding to deleted keys.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1775) Make OM KeyDeletingService compatible with HA model

2019-07-16 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1775?focusedWorklogId=277868&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-277868
 ]

ASF GitHub Bot logged work on HDDS-1775:


Author: ASF GitHub Bot
Created on: 17/Jul/19 00:00
Start Date: 17/Jul/19 00:00
Worklog Time Spent: 10m 
  Work Description: hanishakoneru commented on pull request #1063: 
HDDS-1775. Make OM KeyDeletingService compatible with HA model
URL: https://github.com/apache/hadoop/pull/1063
 
 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 277868)
Time Spent: 3h 10m  (was: 3h)

> Make OM KeyDeletingService compatible with HA model
> ---
>
> Key: HDDS-1775
> URL: https://issues.apache.org/jira/browse/HDDS-1775
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 3h 10m
>  Remaining Estimate: 0h
>
> Currently, the OM KeyDeletingService directly deletes all the keys in the 
> DeletedTable after deleting the corresponding blocks through SCM. For HA 
> compatibility, the key purging should happen through the OM Ratis server. 
> This Jira introduces a PurgeKeys request in the OM protocol. This request 
> will be submitted to the OM's Ratis server after SCM deletes the blocks 
> corresponding to deleted keys.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1775) Make OM KeyDeletingService compatible with HA model

2019-07-16 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1775?focusedWorklogId=277867&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-277867
 ]

ASF GitHub Bot logged work on HDDS-1775:


Author: ASF GitHub Bot
Created on: 16/Jul/19 23:59
Start Date: 16/Jul/19 23:59
Worklog Time Spent: 10m 
  Work Description: hanishakoneru commented on issue #1063: HDDS-1775. Make 
OM KeyDeletingService compatible with HA model
URL: https://github.com/apache/hadoop/pull/1063#issuecomment-512045807
 
 
   Thank you all for the reviews.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 277867)
Time Spent: 3h  (was: 2h 50m)

> Make OM KeyDeletingService compatible with HA model
> ---
>
> Key: HDDS-1775
> URL: https://issues.apache.org/jira/browse/HDDS-1775
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 3h
>  Remaining Estimate: 0h
>
> Currently, the OM KeyDeletingService directly deletes all the keys in the 
> DeletedTable after deleting the corresponding blocks through SCM. For HA 
> compatibility, the key purging should happen through the OM Ratis server. 
> This Jira introduces a PurgeKeys request in the OM protocol. This request 
> will be submitted to the OM's Ratis server after SCM deletes the blocks 
> corresponding to deleted keys.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1689) Implement S3 Create Bucket request to use Cache and DoubleBuffer

2019-07-16 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1689?focusedWorklogId=277864&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-277864
 ]

ASF GitHub Bot logged work on HDDS-1689:


Author: ASF GitHub Bot
Created on: 16/Jul/19 23:55
Start Date: 16/Jul/19 23:55
Worklog Time Spent: 10m 
  Work Description: arp7 commented on pull request #1088: HDDS-1689. 
Implement S3 Create Bucket request to use Cache and DoubleBuffer.
URL: https://github.com/apache/hadoop/pull/1088#discussion_r304172499
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/volume/OMVolumeCreateRequest.java
 ##
 @@ -145,28 +136,24 @@ public OMClientResponse validateAndUpdateCache(OzoneManager ozoneManager,
     try {
       acquiredUserLock = omMetadataManager.getLock().acquireLock(USER_LOCK,
           owner);
+      String dbVolumeKey = omMetadataManager.getVolumeKey(volume);
+
       OmVolumeArgs dbVolumeArgs =
           omMetadataManager.getVolumeTable().get(dbVolumeKey);
 
-      // Validation: Check if volume already exists
-      if (dbVolumeArgs != null) {
+      if (dbVolumeArgs == null) {
+        String dbUserKey = omMetadataManager.getUserKey(owner);
+        volumeList = addVolumeToOwnerList(volumeList, volume, owner,
+            ozoneManager.getMaxUserVolumeCount());
+        createVolume(omMetadataManager, omVolumeArgs, volumeList, dbVolumeKey,
+            dbUserKey, transactionLogIndex);
+        LOG.debug("volume:{} successfully created", omVolumeArgs.getVolume());
+      } else {
         LOG.debug("volume:{} already exists", omVolumeArgs.getVolume());
         throw new OMException("Volume already exists",
             OMException.ResultCodes.VOLUME_ALREADY_EXISTS);
       }
 
-      volumeList = omMetadataManager.getUserTable().get(dbUserKey);
 
 Review comment:
   Was this line of code lost? I can't find this anywhere in the changes.
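
For illustration, the pre-change read being asked about would sit just before the append, along these lines (a hedged sketch of the old control flow, not a claim about the final patch; addVolumeToOwnerList may well perform this read internally now):

    String dbUserKey = omMetadataManager.getUserKey(owner);
    // Fetch the owner's existing volume list before appending, so the update
    // does not clobber volumes created earlier by the same owner.
    VolumeList volumeList = omMetadataManager.getUserTable().get(dbUserKey);
    volumeList = addVolumeToOwnerList(volumeList, volume, owner,
        ozoneManager.getMaxUserVolumeCount());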
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 277864)
Time Spent: 1h 10m  (was: 1h)

> Implement S3 Create Bucket request to use Cache and DoubleBuffer
> 
>
> Key: HDDS-1689
> URL: https://issues.apache.org/jira/browse/HDDS-1689
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Manager
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> Implement S3 Bucket write requests to use OM Cache, double buffer.
>  
> In this Jira we will add the changes to implement the S3 bucket operations. 
> HA and non-HA will have different code paths for now, but once all requests 
> are implemented there will be a single code path.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1689) Implement S3 Create Bucket request to use Cache and DoubleBuffer

2019-07-16 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1689?focusedWorklogId=277862&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-277862
 ]

ASF GitHub Bot logged work on HDDS-1689:


Author: ASF GitHub Bot
Created on: 16/Jul/19 23:55
Start Date: 16/Jul/19 23:55
Worklog Time Spent: 10m 
  Work Description: arp7 commented on pull request #1088: HDDS-1689. 
Implement S3 Create Bucket request to use Cache and DoubleBuffer.
URL: https://github.com/apache/hadoop/pull/1088#discussion_r304172499
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/volume/OMVolumeCreateRequest.java
 ##
 @@ -145,28 +136,24 @@ public OMClientResponse validateAndUpdateCache(OzoneManager ozoneManager,
 
 Review comment:
   Was this line of code lost?
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 277862)
Time Spent: 1h  (was: 50m)

> Implement S3 Create Bucket request to use Cache and DoubleBuffer
> 
>
> Key: HDDS-1689
> URL: https://issues.apache.org/jira/browse/HDDS-1689
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Manager
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> Implement S3 Bucket write requests to use OM Cache, double buffer.
>  
> In this Jira we will add the changes to implement the S3 bucket operations. 
> HA and non-HA will have different code paths for now, but once all requests 
> are implemented there will be a single code path.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1805) Implement S3 Initiate MPU request to use Cache and DoubleBuffer

2019-07-16 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1805?focusedWorklogId=277850&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-277850
 ]

ASF GitHub Bot logged work on HDDS-1805:


Author: ASF GitHub Bot
Created on: 16/Jul/19 23:45
Start Date: 16/Jul/19 23:45
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1108: HDDS-1805. 
Implement S3 Initiate MPU request to use Cache and DoubleBuffer.
URL: https://github.com/apache/hadoop/pull/1108#issuecomment-512043093
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 37 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 1 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 7 new or modified test files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 10 | Maven dependency ordering for branch |
   | +1 | mvninstall | 466 | trunk passed |
   | +1 | compile | 247 | trunk passed |
   | +1 | checkstyle | 59 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 784 | branch has no errors when building and testing our client artifacts. |
   | +1 | javadoc | 164 | trunk passed |
   | 0 | spotbugs | 314 | Used deprecated FindBugs config; considering switching to SpotBugs. |
   | +1 | findbugs | 508 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 28 | Maven dependency ordering for patch |
   | +1 | mvninstall | 450 | the patch passed |
   | +1 | compile | 278 | the patch passed |
   | +1 | cc | 278 | the patch passed |
   | +1 | javac | 278 | the patch passed |
   | +1 | checkstyle | 85 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 683 | patch has no errors when building and testing our client artifacts. |
   | +1 | javadoc | 164 | the patch passed |
   | +1 | findbugs | 525 | the patch passed |
   ||| _ Other Tests _ |
   | -1 | unit | 194 | hadoop-hdds in the patch failed. |
   | -1 | unit | 1777 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 53 | The patch does not generate ASF License warnings. |
   | | | 6713 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.container.ozoneimpl.TestOzoneContainer |
   |   | hadoop.ozone.client.rpc.TestBlockOutputStreamWithFailures |
   |   | hadoop.ozone.client.rpc.TestCommitWatcher |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClientWithRatis |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestOzoneClientRetriesOnException |
   |   | hadoop.ozone.client.rpc.TestCloseContainerHandlingByClient |
   |   | hadoop.ozone.client.rpc.TestSecureOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestWatchForCommit |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=18.09.7 Server=18.09.7 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-1108/2/artifact/out/Dockerfile |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1108 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle cc |
   | uname | Linux 20a26e73c615 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 71c21f0 |
   | Default Java | 1.8.0_212 |
   | unit | https://builds.apache.org/job/hadoop-multibranch/job/PR-1108/2/artifact/out/patch-unit-hadoop-hdds.txt |
   | unit | https://builds.apache.org/job/hadoop-multibranch/job/PR-1108/2/artifact/out/patch-unit-hadoop-ozone.txt |
   |  Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-1108/2/testReport/ |
   | Max. process+thread count | 5386 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/common hadoop-ozone/ozone-manager U: hadoop-ozone |
   | Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-1108/2/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 277850)
Time Spent: 0.5h  (was: 20m)

> Implement S3 Initiate MPU request to use Cache and 

[jira] [Work logged] (HDDS-1805) Implement S3 Initiate MPU request to use Cache and DoubleBuffer

2019-07-16 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1805?focusedWorklogId=277842&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-277842
 ]

ASF GitHub Bot logged work on HDDS-1805:


Author: ASF GitHub Bot
Created on: 16/Jul/19 23:31
Start Date: 16/Jul/19 23:31
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1108: HDDS-1805. 
Implement S3 Initiate MPU request to use Cache and DoubleBuffer.
URL: https://github.com/apache/hadoop/pull/1108#issuecomment-512040420
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 35 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 7 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 11 | Maven dependency ordering for branch |
   | +1 | mvninstall | 454 | trunk passed |
   | +1 | compile | 255 | trunk passed |
   | +1 | checkstyle | 65 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 797 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 151 | trunk passed |
   | 0 | spotbugs | 319 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 507 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 25 | Maven dependency ordering for patch |
   | -1 | mvninstall | 42 | hadoop-hdds in the patch failed. |
   | +1 | compile | 253 | the patch passed |
   | +1 | cc | 253 | the patch passed |
   | +1 | javac | 253 | the patch passed |
   | +1 | checkstyle | 73 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 675 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 162 | the patch passed |
   | +1 | findbugs | 523 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 284 | hadoop-hdds in the patch passed. |
   | -1 | unit | 1771 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 50 | The patch does not generate ASF License warnings. |
   | | | 6636 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.client.rpc.TestOzoneRpcClientWithRatis |
   |   | hadoop.ozone.client.rpc.TestOzoneAtRestEncryption |
   |   | hadoop.ozone.client.rpc.TestCloseContainerHandlingByClient |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestWatchForCommit |
   |   | hadoop.ozone.client.rpc.TestSecureOzoneRpcClient |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=18.09.7 Server=18.09.7 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1108/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1108 |
   | Optional Tests | dupname asflicense compile cc mvnsite javac unit javadoc 
mvninstall shadedclient findbugs checkstyle |
   | uname | Linux 7c942938bb42 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 71c21f0 |
   | Default Java | 1.8.0_212 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1108/1/artifact/out/patch-mvninstall-hadoop-hdds.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1108/1/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1108/1/testReport/ |
   | Max. process+thread count | 5386 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/common hadoop-ozone/ozone-manager U: 
hadoop-ozone |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1108/1/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 277842)
Time Spent: 20m  (was: 10m)

> Implement S3 Initiate MPU request to use Cache and DoubleBuffer
> ---
>
> Key: HDDS-1805
> URL: 

[jira] [Resolved] (HDDS-1712) Remove sudo access from Ozone docker image

2019-07-16 Thread Anu Engineer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1712?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer resolved HDDS-1712.

Resolution: Not A Problem

Docker images are examples: we have clearly documented that in the Ozone 0.4.1 
documentation. Therefore this discussion is pointless. I am resolving this 
JIRA.

 

> Remove sudo access from Ozone docker image
> --
>
> Key: HDDS-1712
> URL: https://issues.apache.org/jira/browse/HDDS-1712
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Eric Yang
>Assignee: Eric Yang
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDDS-1712.001.patch
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> The Ozone docker image gives unlimited sudo access to the hadoop user. This poses 
> a security risk: the host-level user with uid 1000 can attach a debugger to the 
> container process to obtain root access.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1712) Remove sudo access from Ozone docker image

2019-07-16 Thread Anu Engineer (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1712?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16886519#comment-16886519
 ] 

Anu Engineer commented on HDDS-1712:


This whole discussion is pointless. Here is why: the images are like 
documentation. They are examples. We don't release them to anyone. They are 
like binary artifacts. We cannot in good faith start supporting docker images 
as first-class objects of Ozone releases.

 

There are too many combinations:
 # OS - which flavor of Linux should I use?
 # JVM - we ship with JVM 11; I am not sure if Hadoop has even formally 
finished a run with JVM 11.
 # All the other applications like profilers, debuggers, etc.

Realistically, we should add a comment to the docker files saying this is an 
example, and get away from this completely pointless discussion.

 

Once again, Apache does not release binary artifacts, so spending too much time 
on this line of development is not productive; a few weeks later someone will 
have a different opinion, like why we are not using JVM 12. This is a 
never-ending JIRA, fertile ground for trolling.

 

We already have documentation that docker images are examples and we should 
just stick to that.

 

Thanks

Anu

 

> Remove sudo access from Ozone docker image
> --
>
> Key: HDDS-1712
> URL: https://issues.apache.org/jira/browse/HDDS-1712
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Eric Yang
>Assignee: Eric Yang
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDDS-1712.001.patch
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> The Ozone docker image gives unlimited sudo access to the hadoop user. This poses 
> a security risk: the host-level user with uid 1000 can attach a debugger to the 
> container process to obtain root access.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1775) Make OM KeyDeletingService compatible with HA model

2019-07-16 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1775?focusedWorklogId=277811&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-277811
 ]

ASF GitHub Bot logged work on HDDS-1775:


Author: ASF GitHub Bot
Created on: 16/Jul/19 22:31
Start Date: 16/Jul/19 22:31
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1063: HDDS-1775. Make 
OM KeyDeletingService compatible with HA model
URL: https://github.com/apache/hadoop/pull/1063#issuecomment-512026513
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 37 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 1 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 4 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 27 | Maven dependency ordering for branch |
   | +1 | mvninstall | 487 | trunk passed |
   | +1 | compile | 273 | trunk passed |
   | +1 | checkstyle | 70 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 830 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 157 | trunk passed |
   | 0 | spotbugs | 333 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 543 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 18 | Maven dependency ordering for patch |
   | +1 | mvninstall | 454 | the patch passed |
   | +1 | compile | 265 | the patch passed |
   | +1 | cc | 265 | the patch passed |
   | +1 | javac | 265 | the patch passed |
   | -0 | checkstyle | 37 | hadoop-ozone: The patch generated 2 new + 0 
unchanged - 0 fixed = 2 total (was 0) |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 650 | patch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 86 | hadoop-ozone generated 3 new + 12 unchanged - 0 fixed 
= 15 total (was 12) |
   | +1 | findbugs | 529 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 273 | hadoop-hdds in the patch passed. |
   | -1 | unit | 1486 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 37 | The patch does not generate ASF License warnings. |
   | | | 6497 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.client.rpc.TestOzoneRpcClientWithRatis |
   |   | hadoop.ozone.client.rpc.TestOzoneAtRestEncryption |
   |   | hadoop.ozone.TestStorageContainerManager |
   |   | hadoop.ozone.client.rpc.TestOzoneClientRetriesOnException |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestWatchForCommit |
   |   | hadoop.ozone.client.rpc.TestSecureOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestBlockOutputStreamWithFailures |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=18.09.7 Server=18.09.7 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1063/5/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1063 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle cc |
   | uname | Linux 27867ac4aa2e 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / d429d74 |
   | Default Java | 1.8.0_212 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1063/5/artifact/out/diff-checkstyle-hadoop-ozone.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1063/5/artifact/out/diff-javadoc-javadoc-hadoop-ozone.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1063/5/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1063/5/testReport/ |
   | Max. process+thread count | 5290 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/common hadoop-ozone/ozone-manager 
hadoop-ozone/integration-test U: hadoop-ozone |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1063/5/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact 

[jira] [Commented] (HDDS-1815) Add SVG Logo for Ozone

2019-07-16 Thread Anu Engineer (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1815?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16886511#comment-16886511
 ] 

Anu Engineer commented on HDDS-1815:


I have never worked with any graphical format. I just happen to know that .AI is 
a vector format and was hoping that it would be easy to convert to .svg. That 
is all; if it does not work, we will commit your logo. Thanks

> Add SVG Logo for Ozone
> --
>
> Key: HDDS-1815
> URL: https://issues.apache.org/jira/browse/HDDS-1815
> Project: Hadoop Distributed Data Store
>  Issue Type: New Feature
>Reporter: Salvatore LaMendola
>Priority: Trivial
> Attachments: ozone_logo.svg
>
>
> First time ever using Inkscape, so apologies in advance if I did it 
> completely wrong...
> Anyway, I created a single color SVG by tracing the bitmap from the PNG logo. 
> I'll attach the SVG to this issue. It's supposed to match the design of 
> Material UI icons, hence the single color. If there's a desire for the green 
> bolt color, let me know, and I'll try to also make a colorized version.
> Also, I'm no designer, but feedback is welcome :)



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1815) Add SVG Logo for Ozone

2019-07-16 Thread Salvatore LaMendola (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1815?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16886510#comment-16886510
 ] 

Salvatore LaMendola commented on HDDS-1815:
---

Oh cool, I didn't see that logo JIRA. Looking at the .ai files in that 
Logo_Final.zip, I see lots of PNG data in them, so are those actually vector 
images? I didn't try scaling them up, but I'd assume they'll start to look 
distorted if you do...

> Add SVG Logo for Ozone
> --
>
> Key: HDDS-1815
> URL: https://issues.apache.org/jira/browse/HDDS-1815
> Project: Hadoop Distributed Data Store
>  Issue Type: New Feature
>Reporter: Salvatore LaMendola
>Priority: Trivial
> Attachments: ozone_logo.svg
>
>
> First time ever using Inkscape, so apologies in advance if I did it 
> completely wrong...
> Anyway, I created a single color SVG by tracing the bitmap from the PNG logo. 
> I'll attach the SVG to this issue. It's supposed to match the design of 
> Material UI icons, hence the single color. If there's a desire for the green 
> bolt color, let me know, and I'll try to also make a colorized version.
> Also, I'm no designer, but feedback is welcome :)



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1815) Add SVG Logo for Ozone

2019-07-16 Thread Anu Engineer (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1815?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16886506#comment-16886506
 ] 

Anu Engineer commented on HDDS-1815:


It looks very good, but take a look at this Jira: 
https://issues.apache.org/jira/browse/HDDS-333

There are some files called .ai, and they are in monochrome too. Not sure if 
you can use them for your purpose; if not, I will commit the images you have 
attached.

 

Thanks

Anu

 

> Add SVG Logo for Ozone
> --
>
> Key: HDDS-1815
> URL: https://issues.apache.org/jira/browse/HDDS-1815
> Project: Hadoop Distributed Data Store
>  Issue Type: New Feature
>Reporter: Salvatore LaMendola
>Priority: Trivial
> Attachments: ozone_logo.svg
>
>
> First time ever using Inkscape, so apologies in advance if I did it 
> completely wrong...
> Anyway, I created a single color SVG by tracing the bitmap from the PNG logo. 
> I'll attach the SVG to this issue. It's supposed to match the design of 
> Material UI icons, hence the single color. If there's a desire for the green 
> bolt color, let me know, and I'll try to also make a colorized version.
> Also, I'm no designer, but feedback is welcome :)



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-1815) Add SVG Logo for Ozone

2019-07-16 Thread Salvatore LaMendola (JIRA)
Salvatore LaMendola created HDDS-1815:
-

 Summary: Add SVG Logo for Ozone
 Key: HDDS-1815
 URL: https://issues.apache.org/jira/browse/HDDS-1815
 Project: Hadoop Distributed Data Store
  Issue Type: New Feature
Reporter: Salvatore LaMendola
 Attachments: ozone_logo.svg

First time ever using Inkscape, so apologies in advance if I did it completely 
wrong...

Anyway, I created a single color SVG by tracing the bitmap from the PNG logo. 
I'll attach the SVG to this issue. It's supposed to match the design of 
Material UI icons, hence the single color. If there's a desire for the green 
bolt color, let me know, and I'll try to also make a colorized version.

Also, I'm no designer, but feedback is welcome :)



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1654) Ensure container state on datanode gets synced to disk whenever state change happens

2019-07-16 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1654?focusedWorklogId=277802&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-277802
 ]

ASF GitHub Bot logged work on HDDS-1654:


Author: ASF GitHub Bot
Created on: 16/Jul/19 21:57
Start Date: 16/Jul/19 21:57
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on issue #923: HDDS-1654. 
Ensure container state on datanode gets synced to disk whenever state change 
happens.
URL: https://github.com/apache/hadoop/pull/923#issuecomment-512003747
 
 
   I believe this patch needs to be rebased; there was previously a bug where 
the acceptance test run used to hang. To get a Jenkins run, this needs to be 
rebased.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 277802)
Time Spent: 1h 20m  (was: 1h 10m)

> Ensure container state on datanode gets synced to disk whenever state change 
> happens
> 
>
> Key: HDDS-1654
> URL: https://issues.apache.org/jira/browse/HDDS-1654
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Blocker
>  Labels: pull-request-available
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> Currently, whenever there is a container state change, it updates the 
> container but doesn't sync.
> The idea here is to force-sync the state to disk every time there is a 
> state change.
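As a rough illustration of what the force-sync amounts to (a sketch under
assumed names; this is not the Ozone datanode API):

    import java.io.File;
    import java.io.FileOutputStream;
    import java.io.IOException;

    // Sketch: rewrite the container state file, then fsync so the new state
    // survives a crash or power failure. 'containerFile' and 'state' are
    // illustrative placeholders.
    static void writeAndSync(File containerFile, byte[] state)
        throws IOException {
      try (FileOutputStream out = new FileOutputStream(containerFile)) {
        out.write(state);     // update the on-disk container state
        out.flush();
        out.getFD().sync();   // force the change to stable storage
      }
    }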



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1544) Support default Acls for volume, bucket, keys and prefix

2019-07-16 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1544?focusedWorklogId=277791&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-277791
 ]

ASF GitHub Bot logged work on HDDS-1544:


Author: ASF GitHub Bot
Created on: 16/Jul/19 21:38
Start Date: 16/Jul/19 21:38
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #1101: 
HDDS-1544. Support default Acls for volume, bucket, keys and prefix. …
URL: https://github.com/apache/hadoop/pull/1101#discussion_r304131563
 
 

 ##
 File path: 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/security/acl/TestOzoneNativeAuthorizer.java
 ##
 @@ -256,9 +258,10 @@ public void testCheckAccessForBucket() throws Exception {
 
   @Test
   public void testCheckAccessForKey() throws Exception {
-OzoneAcl userAcl = new OzoneAcl(USER, ugi.getUserName(), parentDirUserAcl);
+OzoneAcl userAcl = new OzoneAcl(USER, ugi.getUserName(), parentDirUserAcl, 
+ACCESS);
 
 Review comment:
   whitespace:end of line
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 277791)
Time Spent: 19.5h  (was: 19h 20m)

> Support default Acls for volume, bucket, keys and prefix
> 
>
> Key: HDDS-1544
> URL: https://issues.apache.org/jira/browse/HDDS-1544
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Xiaoyu Yao
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0, 0.4.1
>
> Attachments: HDDS-1544.00.patch
>
>  Time Spent: 19.5h
>  Remaining Estimate: 0h
>
> Add dAcls for volume, bucket, keys and prefix



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12979) StandbyNode should upload FsImage to ObserverNode after checkpointing.

2019-07-16 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-12979?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16886478#comment-16886478
 ] 

Hadoop QA commented on HDFS-12979:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
19s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
1s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
1s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
3s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m  9s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
10s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
49s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 38s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 1 new + 456 unchanged - 6 fixed = 457 total (was 462) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 1s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 29s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 83m 19s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
28s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}139m  0s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.datanode.TestDirectoryScanner |
|   | hadoop.hdfs.web.TestWebHdfsTimeouts |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=18.09.7 Server=18.09.7 Image:yetus/hadoop:bdbca0e |
| JIRA Issue | HDFS-12979 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12974861/HDFS-12979.015.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 0f2ade3bb493 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / c5e3ab5 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_212 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/27240/artifact/out/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/27240/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/27240/testReport/ |
| Max. process+thread 

[jira] [Work logged] (HDDS-1544) Support default Acls for volume, bucket, keys and prefix

2019-07-16 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1544?focusedWorklogId=277793&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-277793
 ]

ASF GitHub Bot logged work on HDDS-1544:


Author: ASF GitHub Bot
Created on: 16/Jul/19 21:38
Start Date: 16/Jul/19 21:38
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #1101: 
HDDS-1544. Support default Acls for volume, bucket, keys and prefix. …
URL: https://github.com/apache/hadoop/pull/1101#discussion_r304131570
 
 

 ##
 File path: 
hadoop-ozone/objectstore-service/src/main/java/org/apache/hadoop/ozone/web/storage/DistributedStorageHandler.java
 ##
 @@ -71,6 +70,8 @@
 import java.util.Objects;
 import java.util.concurrent.TimeUnit;
 
+import static org.apache.hadoop.ozone.OzoneAcl.AclScope.ACCESS;
 
 Review comment:
   whitespace:end of line
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 277793)
Time Spent: 19h 50m  (was: 19h 40m)

> Support default Acls for volume, bucket, keys and prefix
> 
>
> Key: HDDS-1544
> URL: https://issues.apache.org/jira/browse/HDDS-1544
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Xiaoyu Yao
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0, 0.4.1
>
> Attachments: HDDS-1544.00.patch
>
>  Time Spent: 19h 50m
>  Remaining Estimate: 0h
>
> Add dAcls for volume, bucket, keys and prefix



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1544) Support default Acls for volume, bucket, keys and prefix

2019-07-16 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1544?focusedWorklogId=277794&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-277794
 ]

ASF GitHub Bot logged work on HDDS-1544:


Author: ASF GitHub Bot
Created on: 16/Jul/19 21:38
Start Date: 16/Jul/19 21:38
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1101: HDDS-1544. 
Support default Acls for volume, bucket, keys and prefix. …
URL: https://github.com/apache/hadoop/pull/1101#issuecomment-511998035
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 49 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 1 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 8 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 14 | Maven dependency ordering for branch |
   | +1 | mvninstall | 474 | trunk passed |
   | +1 | compile | 253 | trunk passed |
   | +1 | checkstyle | 62 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 836 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 150 | trunk passed |
   | 0 | spotbugs | 313 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 503 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 26 | Maven dependency ordering for patch |
   | +1 | mvninstall | 439 | the patch passed |
   | +1 | compile | 268 | the patch passed |
   | +1 | cc | 268 | the patch passed |
   | +1 | javac | 268 | the patch passed |
   | +1 | checkstyle | 86 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | -1 | whitespace | 0 | The patch has 15 line(s) that end in whitespace. Use 
git apply --whitespace=fix <<patch_file>>. Refer 
https://git-scm.com/docs/git-apply |
   | +1 | shadedclient | 627 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 147 | the patch passed |
   | +1 | findbugs | 504 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 290 | hadoop-hdds in the patch passed. |
   | -1 | unit | 1911 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 41 | The patch does not generate ASF License warnings. |
   | | | 6861 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.ozone.client.rpc.TestCloseContainerHandlingByClient |
   |   | hadoop.ozone.client.rpc.TestCommitWatcher |
   |   | hadoop.ozone.client.rpc.TestSecureOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClientWithRatis |
   |   | hadoop.ozone.client.rpc.TestBlockOutputStreamWithFailures |
   |   | hadoop.ozone.client.rpc.TestOzoneClientRetriesOnException |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=18.09.7 Server=18.09.7 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1101/3/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1101 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle cc |
   | uname | Linux ac07267047c5 4.4.0-139-generic #165-Ubuntu SMP Wed Oct 24 
10:58:50 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / c5e3ab5 |
   | Default Java | 1.8.0_212 |
   | whitespace | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1101/3/artifact/out/whitespace-eol.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1101/3/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1101/3/testReport/ |
   | Max. process+thread count | 4582 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/common hadoop-ozone/client 
hadoop-ozone/ozone-manager hadoop-ozone/objectstore-service hadoop-ozone/dist 
hadoop-ozone/integration-test U: hadoop-ozone |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1101/3/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 277794)
Time Spent: 20h  (was: 19h 50m)

> Support default Acls for volume, 

[jira] [Work logged] (HDDS-1544) Support default Acls for volume, bucket, keys and prefix

2019-07-16 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1544?focusedWorklogId=277789&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-277789
 ]

ASF GitHub Bot logged work on HDDS-1544:


Author: ASF GitHub Bot
Created on: 16/Jul/19 21:38
Start Date: 16/Jul/19 21:38
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #1101: 
HDDS-1544. Support default Acls for volume, bucket, keys and prefix. …
URL: https://github.com/apache/hadoop/pull/1101#discussion_r304131579
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/KeyManagerImpl.java
 ##
 @@ -412,8 +420,9 @@ public OpenKeySession openKey(OmKeyArgs args) throws 
IOException {
 
 FileEncryptionInfo encInfo;
 metadataManager.getLock().acquireLock(BUCKET_LOCK, volumeName, bucketName);
+OmBucketInfo bucketInfo;
 
 Review comment:
   whitespace:end of line
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 277789)
Time Spent: 19h 10m  (was: 19h)

> Support default Acls for volume, bucket, keys and prefix
> 
>
> Key: HDDS-1544
> URL: https://issues.apache.org/jira/browse/HDDS-1544
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Xiaoyu Yao
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0, 0.4.1
>
> Attachments: HDDS-1544.00.patch
>
>  Time Spent: 19h 10m
>  Remaining Estimate: 0h
>
> Add dAcls for volume, bucket, keys and prefix



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1544) Support default Acls for volume, bucket, keys and prefix

2019-07-16 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1544?focusedWorklogId=277790&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-277790
 ]

ASF GitHub Bot logged work on HDDS-1544:


Author: ASF GitHub Bot
Created on: 16/Jul/19 21:38
Start Date: 16/Jul/19 21:38
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #1101: 
HDDS-1544. Support default Acls for volume, bucket, keys and prefix. …
URL: https://github.com/apache/hadoop/pull/1101#discussion_r304131588
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/KeyManagerImpl.java
 ##
 @@ -429,7 +438,8 @@ public OpenKeySession openKey(OmKeyArgs args) throws 
IOException {
 if (keyInfo == null) {
   // the key does not exist, create a new object, the new blocks are the
   // version 0
-  keyInfo = createKeyInfo(args, locations, factor, type, size, encInfo);
+  keyInfo = createKeyInfo(args, locations, factor, type, size, 
 
 Review comment:
   whitespace:end of line
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 277790)
Time Spent: 19h 20m  (was: 19h 10m)

> Support default Acls for volume, bucket, keys and prefix
> 
>
> Key: HDDS-1544
> URL: https://issues.apache.org/jira/browse/HDDS-1544
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Xiaoyu Yao
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0, 0.4.1
>
> Attachments: HDDS-1544.00.patch
>
>  Time Spent: 19h 20m
>  Remaining Estimate: 0h
>
> Add dAcls for volume, bucket, keys and prefix



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1544) Support default Acls for volume, bucket, keys and prefix

2019-07-16 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1544?focusedWorklogId=277792&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-277792
 ]

ASF GitHub Bot logged work on HDDS-1544:


Author: ASF GitHub Bot
Created on: 16/Jul/19 21:38
Start Date: 16/Jul/19 21:38
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #1101: 
HDDS-1544. Support default Acls for volume, bucket, keys and prefix. …
URL: https://github.com/apache/hadoop/pull/1101#discussion_r304131597
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/KeyManagerImpl.java
 ##
 @@ -839,7 +875,7 @@ public OmMultipartInfo 
applyInitiateMultipartUpload(OmKeyArgs keyArgs,
 String keyName = keyArgs.getKeyName();
 
 metadataManager.getLock().acquireLock(BUCKET_LOCK, volumeName, bucketName);
-validateS3Bucket(volumeName, bucketName);
+OmBucketInfo bucketInfo = validateS3Bucket(volumeName, bucketName);
 
 Review comment:
   whitespace:end of line
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 277792)
Time Spent: 19h 40m  (was: 19.5h)

> Support default Acls for volume, bucket, keys and prefix
> 
>
> Key: HDDS-1544
> URL: https://issues.apache.org/jira/browse/HDDS-1544
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Xiaoyu Yao
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0, 0.4.1
>
> Attachments: HDDS-1544.00.patch
>
>  Time Spent: 19h 40m
>  Remaining Estimate: 0h
>
> Add dAcls for volume, bucket, keys and prefix



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1544) Support default Acls for volume, bucket, keys and prefix

2019-07-16 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1544?focusedWorklogId=277785&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-277785
 ]

ASF GitHub Bot logged work on HDDS-1544:


Author: ASF GitHub Bot
Created on: 16/Jul/19 21:37
Start Date: 16/Jul/19 21:37
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #1101: 
HDDS-1544. Support default Acls for volume, bucket, keys and prefix. …
URL: https://github.com/apache/hadoop/pull/1101#discussion_r304131520
 
 

 ##
 File path: 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/web/utils/OzoneUtils.java
 ##
 @@ -365,4 +369,30 @@ public static boolean checkIfAclBitIsSet(ACLType acl, 
BitSet bitset) {
 || bitset.get(ALL.ordinal()))
 && !bitset.get(NONE.ordinal()));
   }
+
+  /**
+   * Helper function to find and return all DEFAULT acls in input list with
+   * scope changed to ACCESS.
+   * @param acls
 
 Review comment:
   whitespace:end of line
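   The javadoc in this hunk describes the inheritance helper. A minimal
   sketch of what it implies, built from accessors shown in the other hunks
   of this patch (getType, getName, getAclBitSet, getAclScope, and the
   four-argument OzoneAcl constructor); the method name and the
   java.util.stream.Collectors import are assumptions:

       public static List<OzoneAcl> defaultAclsAsAccess(List<OzoneAcl> acls) {
         // Keep only DEFAULT-scoped entries and rewrite their scope to
         // ACCESS -- this is how a parent's default acls become the access
         // acls of a newly created child object.
         return acls.stream()
             .filter(a -> a.getAclScope() == OzoneAcl.AclScope.DEFAULT)
             .map(a -> new OzoneAcl(a.getType(), a.getName(), a.getAclBitSet(),
                 OzoneAcl.AclScope.ACCESS))
             .collect(Collectors.toList());
       }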
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 277785)
Time Spent: 18.5h  (was: 18h 20m)

> Support default Acls for volume, bucket, keys and prefix
> 
>
> Key: HDDS-1544
> URL: https://issues.apache.org/jira/browse/HDDS-1544
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Xiaoyu Yao
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0, 0.4.1
>
> Attachments: HDDS-1544.00.patch
>
>  Time Spent: 18.5h
>  Remaining Estimate: 0h
>
> Add dAcls for volume, bucket, keys and prefix



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1544) Support default Acls for volume, bucket, keys and prefix

2019-07-16 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1544?focusedWorklogId=277788&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-277788
 ]

ASF GitHub Bot logged work on HDDS-1544:


Author: ASF GitHub Bot
Created on: 16/Jul/19 21:37
Start Date: 16/Jul/19 21:37
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #1101: 
HDDS-1544. Support default Acls for volume, bucket, keys and prefix. …
URL: https://github.com/apache/hadoop/pull/1101#discussion_r304131558
 
 

 ##
 File path: 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/security/acl/TestOzoneNativeAuthorizer.java
 ##
 @@ -242,9 +243,10 @@ public void testCheckAccessForVolume() throws Exception {
   @Test
   public void testCheckAccessForBucket() throws Exception {
 
-OzoneAcl userAcl = new OzoneAcl(USER, ugi.getUserName(), parentDirUserAcl);
+OzoneAcl userAcl = new OzoneAcl(USER, ugi.getUserName(), parentDirUserAcl, 
+ACCESS);
 
 Review comment:
   whitespace:end of line
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 277788)
Time Spent: 19h  (was: 18h 50m)

> Support default Acls for volume, bucket, keys and prefix
> 
>
> Key: HDDS-1544
> URL: https://issues.apache.org/jira/browse/HDDS-1544
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Xiaoyu Yao
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0, 0.4.1
>
> Attachments: HDDS-1544.00.patch
>
>  Time Spent: 19h
>  Remaining Estimate: 0h
>
> Add dAcls for volume, bucket, keys and prefix



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1544) Support default Acls for volume, bucket, keys and prefix

2019-07-16 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1544?focusedWorklogId=277787&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-277787
 ]

ASF GitHub Bot logged work on HDDS-1544:


Author: ASF GitHub Bot
Created on: 16/Jul/19 21:37
Start Date: 16/Jul/19 21:37
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #1101: 
HDDS-1544. Support default Acls for volume, bucket, keys and prefix. …
URL: https://github.com/apache/hadoop/pull/1101#discussion_r304131536
 
 

 ##
 File path: 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/security/acl/TestOzoneNativeAuthorizer.java
 ##
 @@ -280,9 +283,10 @@ public void testCheckAccessForPrefix() throws Exception {
 .setStoreType(OZONE)
 .build();
 
-OzoneAcl userAcl = new OzoneAcl(USER, ugi.getUserName(), parentDirUserAcl);
 
 Review comment:
   whitespace:end of line
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 277787)
Time Spent: 18h 50m  (was: 18h 40m)

> Support default Acls for volume, bucket, keys and prefix
> 
>
> Key: HDDS-1544
> URL: https://issues.apache.org/jira/browse/HDDS-1544
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Xiaoyu Yao
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0, 0.4.1
>
> Attachments: HDDS-1544.00.patch
>
>  Time Spent: 18h 50m
>  Remaining Estimate: 0h
>
> Add dAcls for volume, bucket, keys and prefix



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1544) Support default Acls for volume, bucket, keys and prefix

2019-07-16 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1544?focusedWorklogId=277783&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-277783
 ]

ASF GitHub Bot logged work on HDDS-1544:


Author: ASF GitHub Bot
Created on: 16/Jul/19 21:37
Start Date: 16/Jul/19 21:37
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #1101: 
HDDS-1544. Support default Acls for volume, bucket, keys and prefix. …
URL: https://github.com/apache/hadoop/pull/1101#discussion_r304131499
 
 

 ##
 File path: 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/helpers/OmOzoneAclMap.java
 ##
 @@ -49,53 +52,70 @@
 @SuppressWarnings("ProtocolBufferOrdinal")
 public class OmOzoneAclMap {
   // per Acl Type user:rights map
-  private ArrayList<Map<String, BitSet>> aclMaps;
+  private ArrayList<Map<String, BitSet>> accessAclMap;
+  private List<OzoneAclInfo> defaultAclList;
 
   OmOzoneAclMap() {
-aclMaps = new ArrayList<>();
+accessAclMap = new ArrayList<>();
+defaultAclList = new ArrayList<>();
 for (OzoneAclType aclType : OzoneAclType.values()) {
-  aclMaps.add(aclType.ordinal(), new HashMap<>());
+  accessAclMap.add(aclType.ordinal(), new HashMap<>());
 }
   }
 
-  private Map<String, BitSet> getMap(OzoneAclType type) {
-return aclMaps.get(type.ordinal());
+  private Map<String, BitSet> getAccessAclMap(OzoneAclType type) {
+return accessAclMap.get(type.ordinal());
   }
 
   // For a given acl type and user, get the stored acl
   private BitSet getAcl(OzoneAclType type, String user) {
-return getMap(type).get(user);
+return getAccessAclMap(type).get(user);
   }
 
  public List<OzoneAcl> getAcl() {
List<OzoneAcl> acls = new ArrayList<>();
 
+acls.addAll(getAccessAcls());
+acls.addAll(defaultAclList.stream().map(a ->
+OzoneAcl.fromProtobuf(a)).collect(Collectors.toList()));
+return acls;
+  }
+
+  private Collection<OzoneAcl> getAccessAcls() {
+List<OzoneAcl> acls = new ArrayList<>();
 for (OzoneAclType type : OzoneAclType.values()) {
-  aclMaps.get(type.ordinal()).entrySet().stream().
+  accessAclMap.get(type.ordinal()).entrySet().stream().
   forEach(entry -> acls.add(new OzoneAcl(ACLIdentityType.
-  valueOf(type.name()), entry.getKey(), entry.getValue(;
+  valueOf(type.name()), entry.getKey(), entry.getValue(),
+  OzoneAcl.AclScope.ACCESS)));
 }
 return acls;
   }
 
   // Add a new acl to the map
   public void addAcl(OzoneAcl acl) throws OMException {
 Objects.requireNonNull(acl, "Acl should not be null.");
+if (acl.getAclScope().equals(OzoneAcl.AclScope.DEFAULT)) {
+  defaultAclList.add(OzoneAcl.toProtobuf(acl));
+  return;
+}
+
 
 Review comment:
   whitespace:end of line
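   For orientation, how the refactor in this hunk splits storage by acl
   scope (a usage sketch; accessAcl and defaultAcl are hypothetical
   OzoneAcl instances, and OMException handling is elided):

       OmOzoneAclMap aclMap = new OmOzoneAclMap();
       aclMap.addAcl(accessAcl);   // ACCESS scope: per-type user -> rights map
       aclMap.addAcl(defaultAcl);  // DEFAULT scope: stored in defaultAclList
                                   // as its protobuf form
       List<OzoneAcl> all = aclMap.getAcl();  // merges ACCESS map + DEFAULT list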
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 277783)
Time Spent: 18h 10m  (was: 18h)

> Support default Acls for volume, bucket, keys and prefix
> 
>
> Key: HDDS-1544
> URL: https://issues.apache.org/jira/browse/HDDS-1544
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Xiaoyu Yao
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0, 0.4.1
>
> Attachments: HDDS-1544.00.patch
>
>  Time Spent: 18h 10m
>  Remaining Estimate: 0h
>
> Add dAcls for volume, bucket, keys and prefix



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1544) Support default Acls for volume, bucket, keys and prefix

2019-07-16 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1544?focusedWorklogId=277784&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-277784
 ]

ASF GitHub Bot logged work on HDDS-1544:


Author: ASF GitHub Bot
Created on: 16/Jul/19 21:37
Start Date: 16/Jul/19 21:37
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #1101: 
HDDS-1544. Support default Acls for volume, bucket, keys and prefix. …
URL: https://github.com/apache/hadoop/pull/1101#discussion_r304131509
 
 

 ##
 File path: 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/helpers/OmOzoneAclMap.java
 ##
 @@ -116,9 +135,14 @@ public void setAcls(List<OzoneAcl> acls) throws 
OMException {
   // Add a new acl to the map
   public void removeAcl(OzoneAcl acl) throws OMException {
 Objects.requireNonNull(acl, "Acl should not be null.");
+if (acl.getAclScope().equals(OzoneAcl.AclScope.DEFAULT)) {
+  defaultAclList.remove(OzoneAcl.toProtobuf(acl));
+  return;
+}
+
 
 Review comment:
   whitespace:end of line
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 277784)
Time Spent: 18h 20m  (was: 18h 10m)

> Support default Acls for volume, bucket, keys and prefix
> 
>
> Key: HDDS-1544
> URL: https://issues.apache.org/jira/browse/HDDS-1544
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Xiaoyu Yao
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0, 0.4.1
>
> Attachments: HDDS-1544.00.patch
>
>  Time Spent: 18h 20m
>  Remaining Estimate: 0h
>
> Add dAcls for volume, bucket, keys and prefix



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1544) Support default Acls for volume, bucket, keys and prefix

2019-07-16 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1544?focusedWorklogId=277782&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-277782
 ]

ASF GitHub Bot logged work on HDDS-1544:


Author: ASF GitHub Bot
Created on: 16/Jul/19 21:37
Start Date: 16/Jul/19 21:37
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #1101: 
HDDS-1544. Support default Acls for volume, bucket, keys and prefix. …
URL: https://github.com/apache/hadoop/pull/1101#discussion_r304131491
 
 

 ##
 File path: 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/OzoneAcl.java
 ##
 @@ -178,19 +204,55 @@ public static OzoneAclInfo toProtobuf(OzoneAcl acl) {
 OzoneAclInfo.Builder builder = OzoneAclInfo.newBuilder()
 .setName(acl.getName())
 .setType(OzoneAclType.valueOf(acl.getType().name()))
+.setAclScope(OzoneAclScope.valueOf(acl.getAclScope().name()))
 .setRights(ByteString.copyFrom(acl.getAclBitSet().toByteArray()));
 return builder.build();
   }
 
   public static OzoneAcl fromProtobuf(OzoneAclInfo protoAcl) {
 BitSet aclRights = BitSet.valueOf(protoAcl.getRights().toByteArray());
 return new OzoneAcl(ACLIdentityType.valueOf(protoAcl.getType().name()),
-protoAcl.getName(), aclRights);
+protoAcl.getName(), aclRights,
+AclScope.valueOf(protoAcl.getAclScope().name()));
+  }
+
+  /**
+   * Helper function to convert a proto message of type {@link OzoneAclInfo}
+   * to {@link OzoneAcl} with acl scope of type ACCESS.
+   * 
+   * @param protoAcl
+   * @return OzoneAcl
+   * */
+  public static OzoneAcl fromProtobufWithAccessType(OzoneAclInfo protoAcl) {
+BitSet aclRights = BitSet.valueOf(protoAcl.getRights().toByteArray());
+return new OzoneAcl(ACLIdentityType.valueOf(protoAcl.getType().name()),
+protoAcl.getName(), aclRights, AclScope.ACCESS);
   }
 
+  /**
+   * Helper function to convert an {@link OzoneAcl} to proto message of type
+   * {@link OzoneAclInfo} with acl scope of type ACCESS.
+   *
+   * @param acl
+   * @return OzoneAclInfo
+   * */
+  public static OzoneAclInfo toProtobufWithAccessType(OzoneAcl acl) {
+OzoneAclInfo.Builder builder = OzoneAclInfo.newBuilder()
+.setName(acl.getName())
+.setType(OzoneAclType.valueOf(acl.getType().name()))
+.setAclScope(OzoneAclScope.ACCESS)
+.setRights(ByteString.copyFrom(acl.getAclBitSet().toByteArray()));
+return builder.build();
+  }
+
+  public AclScope getAclScope() {
+return aclScope;
+  }
+  
 
 Review comment:
   whitespace:end of line
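
   To make the round-trip defined above concrete, a small hedged example (the
   acl literal and assertions are illustrative, not from the patch):
   
   {code}
   OzoneAcl acl = OzoneAcl.parseAcl("group:dev:r[DEFAULT]");
   OzoneAclInfo proto = OzoneAcl.toProtobuf(acl);
   // The plain round-trip preserves the scope...
   assert OzoneAcl.fromProtobuf(proto).getAclScope()
       == OzoneAcl.AclScope.DEFAULT;
   // ...while the WithAccessType variants deliberately rewrite it to ACCESS,
   // presumably so a child object can adopt a parent's DEFAULT acls as its own.
   assert OzoneAcl.fromProtobufWithAccessType(proto).getAclScope()
       == OzoneAcl.AclScope.ACCESS;
   {code}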
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 277782)
Time Spent: 18h  (was: 17h 50m)

> Support default Acls for volume, bucket, keys and prefix
> 
>
> Key: HDDS-1544
> URL: https://issues.apache.org/jira/browse/HDDS-1544
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Xiaoyu Yao
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0, 0.4.1
>
> Attachments: HDDS-1544.00.patch
>
>  Time Spent: 18h
>  Remaining Estimate: 0h
>
> Add default ACLs for volume, bucket, keys and prefix



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1544) Support default Acls for volume, bucket, keys and prefix

2019-07-16 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1544?focusedWorklogId=277786&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-277786
 ]

ASF GitHub Bot logged work on HDDS-1544:


Author: ASF GitHub Bot
Created on: 16/Jul/19 21:37
Start Date: 16/Jul/19 21:37
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #1101: 
HDDS-1544. Support default Acls for volume, bucket, keys and prefix. …
URL: https://github.com/apache/hadoop/pull/1101#discussion_r304131524
 
 

 ##
 File path: 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/web/utils/OzoneUtils.java
 ##
 @@ -365,4 +369,30 @@ public static boolean checkIfAclBitIsSet(ACLType acl, 
BitSet bitset) {
 || bitset.get(ALL.ordinal()))
 && !bitset.get(NONE.ordinal()));
   }
+
+  /**
+   * Helper function to find and return all DEFAULT acls in input list with
+   * scope changed to ACCESS.
+   * @param acls
+   * 
+   * @return list of default Acls.
+   * */
+  public static Collection<OzoneAclInfo> getDefaultAclsProto(
+  List<OzoneAcl> acls) {
+return acls.stream().filter(a -> a.getAclScope() == DEFAULT)
+.map(OzoneAcl::toProtobufWithAccessType).collect(Collectors.toList());
+  }
+
+  /**
+   * Helper function to find and return all DEFAULT acls in input list with
+   * scope changed to ACCESS.
+   * @param acls
+   *
+   * @return list of default Acls.
+   * */
+  public static Collection<OzoneAcl> getDefaultAcls(List<OzoneAcl> acls) {
+return acls.stream().filter(a -> a.getAclScope() == DEFAULT)
+.collect(Collectors.toList());
+  }
 
 Review comment:
   whitespace:end of line
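
   A hedged usage sketch for the two filters above (inputs invented; imports
   such as java.util.Arrays elided):
   
   {code}
   List<OzoneAcl> acls = Arrays.asList(
       OzoneAcl.parseAcl("user:bilbo:rw[DEFAULT]"),
       OzoneAcl.parseAcl("user:frodo:rw"));        // no suffix => ACCESS scope
   // Returns only the DEFAULT-scoped entry, unchanged:
   Collection<OzoneAcl> defaults = OzoneUtils.getDefaultAcls(acls);
   // Returns the same entry converted to protobuf with scope forced to ACCESS:
   Collection<OzoneAclInfo> protos = OzoneUtils.getDefaultAclsProto(acls);
   {code}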
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 277786)
Time Spent: 18h 40m  (was: 18.5h)

> Support default Acls for volume, bucket, keys and prefix
> 
>
> Key: HDDS-1544
> URL: https://issues.apache.org/jira/browse/HDDS-1544
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Xiaoyu Yao
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0, 0.4.1
>
> Attachments: HDDS-1544.00.patch
>
>  Time Spent: 18h 40m
>  Remaining Estimate: 0h
>
> Add default ACLs for volume, bucket, keys and prefix



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1544) Support default Acls for volume, bucket, keys and prefix

2019-07-16 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1544?focusedWorklogId=29&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-29
 ]

ASF GitHub Bot logged work on HDDS-1544:


Author: ASF GitHub Bot
Created on: 16/Jul/19 21:37
Start Date: 16/Jul/19 21:37
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #1101: 
HDDS-1544. Support default Acls for volume, bucket, keys and prefix. …
URL: https://github.com/apache/hadoop/pull/1101#discussion_r304131466
 
 

 ##
 File path: 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/OzoneAcl.java
 ##
 @@ -141,13 +153,27 @@ public static OzoneAcl parseAcl(String acl) throws 
IllegalArgumentException {
 ACLIdentityType aclType = ACLIdentityType.valueOf(parts[0].toUpperCase());
 BitSet acls = new BitSet(ACLType.getNoOfAcls());
 
-for (char ch : parts[2].toCharArray()) {
+String bits = parts[2];
+
+// Default acl scope is ACCESS.
+AclScope aclScope = AclScope.ACCESS;
+
 
 Review comment:
   whitespace:end of line
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 29)
Time Spent: 17.5h  (was: 17h 20m)

> Support default Acls for volume, bucket, keys and prefix
> 
>
> Key: HDDS-1544
> URL: https://issues.apache.org/jira/browse/HDDS-1544
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Xiaoyu Yao
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0, 0.4.1
>
> Attachments: HDDS-1544.00.patch
>
>  Time Spent: 17.5h
>  Remaining Estimate: 0h
>
> Add default ACLs for volume, bucket, keys and prefix



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1544) Support default Acls for volume, bucket, keys and prefix

2019-07-16 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1544?focusedWorklogId=277781&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-277781
 ]

ASF GitHub Bot logged work on HDDS-1544:


Author: ASF GitHub Bot
Created on: 16/Jul/19 21:37
Start Date: 16/Jul/19 21:37
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #1101: 
HDDS-1544. Support default Acls for volume, bucket, keys and prefix. …
URL: https://github.com/apache/hadoop/pull/1101#discussion_r304131482
 
 

 ##
 File path: 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/OzoneAcl.java
 ##
 @@ -178,19 +204,55 @@ public static OzoneAclInfo toProtobuf(OzoneAcl acl) {
 OzoneAclInfo.Builder builder = OzoneAclInfo.newBuilder()
 .setName(acl.getName())
 .setType(OzoneAclType.valueOf(acl.getType().name()))
+.setAclScope(OzoneAclScope.valueOf(acl.getAclScope().name()))
 .setRights(ByteString.copyFrom(acl.getAclBitSet().toByteArray()));
 return builder.build();
   }
 
   public static OzoneAcl fromProtobuf(OzoneAclInfo protoAcl) {
 BitSet aclRights = BitSet.valueOf(protoAcl.getRights().toByteArray());
 return new OzoneAcl(ACLIdentityType.valueOf(protoAcl.getType().name()),
-protoAcl.getName(), aclRights);
+protoAcl.getName(), aclRights,
+AclScope.valueOf(protoAcl.getAclScope().name()));
+  }
+
+  /**
+   * Helper function to convert a proto message of type {@link OzoneAclInfo}
+   * to {@link OzoneAcl} with acl scope of type ACCESS.
+   * 
 
 Review comment:
   whitespace:end of line
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 277781)
Time Spent: 17h 50m  (was: 17h 40m)

> Support default Acls for volume, bucket, keys and prefix
> 
>
> Key: HDDS-1544
> URL: https://issues.apache.org/jira/browse/HDDS-1544
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Xiaoyu Yao
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0, 0.4.1
>
> Attachments: HDDS-1544.00.patch
>
>  Time Spent: 17h 50m
>  Remaining Estimate: 0h
>
> Add default ACLs for volume, bucket, keys and prefix



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1544) Support default Acls for volume, bucket, keys and prefix

2019-07-16 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1544?focusedWorklogId=277780&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-277780
 ]

ASF GitHub Bot logged work on HDDS-1544:


Author: ASF GitHub Bot
Created on: 16/Jul/19 21:37
Start Date: 16/Jul/19 21:37
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #1101: 
HDDS-1544. Support default Acls for volume, bucket, keys and prefix. …
URL: https://github.com/apache/hadoop/pull/1101#discussion_r304131474
 
 

 ##
 File path: 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/OzoneAcl.java
 ##
 @@ -141,13 +153,27 @@ public static OzoneAcl parseAcl(String acl) throws 
IllegalArgumentException {
 ACLIdentityType aclType = ACLIdentityType.valueOf(parts[0].toUpperCase());
 BitSet acls = new BitSet(ACLType.getNoOfAcls());
 
-for (char ch : parts[2].toCharArray()) {
+String bits = parts[2];
+
+// Default acl scope is ACCESS.
+AclScope aclScope = AclScope.ACCESS;
+
+// Check if acl string contains scope info.
+if(parts[2].matches(ACL_SCOPE_REGEX)) {
+  int indexOfOpenBracket = parts[2].indexOf("[");
+  bits = parts[2].substring(0, indexOfOpenBracket);
+  aclScope = AclScope.valueOf(parts[2].substring(indexOfOpenBracket + 1,
+  parts[2].indexOf("]")));
+}
+
 
 Review comment:
   whitespace:end of line
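
   The branch above implies an acl-string grammar of type:name:rights with an
   optional [SCOPE] suffix; a hedged example of both forms (literals invented
   for illustration):
   
   {code}
   OzoneAcl a1 = OzoneAcl.parseAcl("user:bilbo:rw");           // scope = ACCESS
   OzoneAcl a2 = OzoneAcl.parseAcl("user:bilbo:rw[DEFAULT]");  // scope = DEFAULT
   {code}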
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 277780)
Time Spent: 17h 40m  (was: 17.5h)

> Support default Acls for volume, bucket, keys and prefix
> 
>
> Key: HDDS-1544
> URL: https://issues.apache.org/jira/browse/HDDS-1544
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Xiaoyu Yao
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0, 0.4.1
>
> Attachments: HDDS-1544.00.patch
>
>  Time Spent: 17h 40m
>  Remaining Estimate: 0h
>
> Add default ACLs for volume, bucket, keys and prefix



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1805) Implement S3 Initiate MPU request to use Cache and DoubleBuffer

2019-07-16 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1805?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-1805:
-
Labels: pull-request-available  (was: )

> Implement S3 Initiate MPU request to use Cache and DoubleBuffer
> ---
>
> Key: HDDS-1805
> URL: https://issues.apache.org/jira/browse/HDDS-1805
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Manager
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>
> Implement S3 Initiate MPU request to use OM Cache, double buffer.
>  
> In this Jira we will add the changes to implement S3 bucket operations, and 
> HA/Non-HA will have different code paths, but once all requests are 
> implemented there will be a single code path.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1805) Implement S3 Initiate MPU request to use Cache and DoubleBuffer

2019-07-16 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1805?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-1805:
-
Status: Patch Available  (was: Open)

> Implement S3 Initiate MPU request to use Cache and DoubleBuffer
> ---
>
> Key: HDDS-1805
> URL: https://issues.apache.org/jira/browse/HDDS-1805
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Manager
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Implement S3 Initiate MPU request to use OM Cache, double buffer.
>  
> In this Jira we will add the changes to implement S3 bucket operations, and 
> HA/Non-HA will have different code paths, but once all requests are 
> implemented there will be a single code path.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1805) Implement S3 Initiate MPU request to use Cache and DoubleBuffer

2019-07-16 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1805?focusedWorklogId=20&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-20
 ]

ASF GitHub Bot logged work on HDDS-1805:


Author: ASF GitHub Bot
Created on: 16/Jul/19 21:27
Start Date: 16/Jul/19 21:27
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #1108: 
HDDS-1805. Implement S3 Initiate MPU request to use Cache and DoubleBuffer.
URL: https://github.com/apache/hadoop/pull/1108
 
 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 20)
Time Spent: 10m
Remaining Estimate: 0h

> Implement S3 Initiate MPU request to use Cache and DoubleBuffer
> ---
>
> Key: HDDS-1805
> URL: https://issues.apache.org/jira/browse/HDDS-1805
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Manager
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Implement S3 Initiate MPU request to use OM Cache, double buffer.
>  
> In this Jira we will add the changes to implement S3 bucket operations, and 
> HA/Non-HA will have different code paths, but once all requests are 
> implemented there will be a single code path.
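
For orientation, a hedged sketch of the cache/double-buffer pattern this
request would follow; the table and helper names below are assumptions, not
necessarily the committed code:

{code}
// At ratis-apply time, validate and update the in-memory table cache:
String multipartKey = omMetadataManager.getMultipartKey(
    volumeName, bucketName, keyName, uploadID);
omMetadataManager.getMultipartInfoTable().addCacheEntry(
    new CacheKey<>(multipartKey),
    new CacheValue<>(Optional.of(multipartKeyInfo), transactionLogIndex));
// The OMClientResponse then hands the same entry to the double buffer,
// which batches the RocksDB write asynchronously.
{code}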



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12979) StandbyNode should upload FsImage to ObserverNode after checkpointing.

2019-07-16 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-12979?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16886464#comment-16886464
 ] 

Hadoop QA commented on HDFS-12979:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
20s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
 0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
2s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 45s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
6s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
48s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 43s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 1 new + 457 unchanged - 6 fixed = 458 total (was 463) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 27s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 88m 59s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
33s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}142m 29s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.datanode.TestDataNodeHotSwapVolumes |
|   | hadoop.hdfs.web.TestWebHdfsTimeouts |
|   | hadoop.hdfs.server.balancer.TestBalancerWithMultipleNameNodes |
|   | hadoop.hdfs.server.datanode.TestDirectoryScanner |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=18.09.7 Server=18.09.7 Image:yetus/hadoop:bdbca0e |
| JIRA Issue | HDFS-12979 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12974856/HDFS-12979.014.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 2b1b84a3b1ec 4.4.0-139-generic #165-Ubuntu SMP Wed Oct 24 
10:58:50 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / c5e3ab5 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_212 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/27239/artifact/out/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| unit | 

[jira] [Commented] (HDFS-12979) StandbyNode should upload FsImage to ObserverNode after checkpointing.

2019-07-16 Thread Erik Krogen (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-12979?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16886457#comment-16886457
 ] 

Erik Krogen commented on HDFS-12979:


Great, thanks [~vagarychen]. +1  pending Jenkins.

> StandbyNode should upload FsImage to ObserverNode after checkpointing.
> --
>
> Key: HDFS-12979
> URL: https://issues.apache.org/jira/browse/HDFS-12979
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs
>Reporter: Konstantin Shvachko
>Assignee: Chen Liang
>Priority: Major
> Attachments: HDFS-12979.001.patch, HDFS-12979.002.patch, 
> HDFS-12979.003.patch, HDFS-12979.004.patch, HDFS-12979.005.patch, 
> HDFS-12979.006.patch, HDFS-12979.007.patch, HDFS-12979.008.patch, 
> HDFS-12979.009.patch, HDFS-12979.010.patch, HDFS-12979.011.patch, 
> HDFS-12979.012.patch, HDFS-12979.013.patch, HDFS-12979.014.patch, 
> HDFS-12979.015.patch
>
>
> ObserverNode does not create checkpoints, so its fsimage file can get very 
> old, making bootstrap of an ObserverNode take too long. A StandbyNode should copy 
> the latest fsimage to ObserverNode(s) along with the ANN.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1813) Fix false warning from ozones3 acceptance test

2019-07-16 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1813?focusedWorklogId=277751&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-277751
 ]

ASF GitHub Bot logged work on HDDS-1813:


Author: ASF GitHub Bot
Created on: 16/Jul/19 20:51
Start Date: 16/Jul/19 20:51
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1107: HDDS-1813. Fix 
false warning from ozones3 acceptance test. Contribute…
URL: https://github.com/apache/hadoop/pull/1107#issuecomment-511981080
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 40 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 476 | trunk passed |
   | +1 | compile | 260 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 1517 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 164 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 441 | the patch passed |
   | +1 | compile | 270 | the patch passed |
   | +1 | javac | 270 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 696 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 160 | the patch passed |
   ||| _ Other Tests _ |
   | -1 | unit | 191 | hadoop-hdds in the patch failed. |
   | -1 | unit | 2005 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 58 | The patch does not generate ASF License warnings. |
   | | | 5749 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.container.ozoneimpl.TestOzoneContainer |
   |   | hadoop.ozone.client.rpc.TestOzoneAtRestEncryption |
   |   | hadoop.ozone.client.rpc.TestFailureHandlingByClient |
   |   | hadoop.ozone.client.rpc.TestBlockOutputStreamWithFailures |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClientWithRatis |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestOzoneClientRetriesOnException |
   |   | hadoop.ozone.client.rpc.TestCloseContainerHandlingByClient |
   |   | 
hadoop.ozone.container.common.statemachine.commandhandler.TestBlockDeletion |
   |   | hadoop.ozone.client.rpc.TestSecureOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestWatchForCommit |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=18.09.7 Server=18.09.7 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1107/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1107 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient |
   | uname | Linux 77c7c4e3eec7 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / c5e3ab5 |
   | Default Java | 1.8.0_212 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1107/1/artifact/out/patch-unit-hadoop-hdds.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1107/1/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1107/1/testReport/ |
   | Max. process+thread count | 5387 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/dist U: hadoop-ozone/dist |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1107/1/console |
   | versions | git=2.7.4 maven=3.3.9 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 277751)
Time Spent: 0.5h  (was: 20m)

> Fix false warning from ozones3 acceptance test
> --
>
> Key: HDDS-1813
> URL: https://issues.apache.org/jira/browse/HDDS-1813
> Project: Hadoop Distributed Data Store
>  Issue 

[jira] [Resolved] (HDDS-1770) SCM crashes when ReplicationManager is trying to re-replicate under replicated containers

2019-07-16 Thread Xiaoyu Yao (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1770?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao resolved HDDS-1770.
--
Resolution: Duplicate

> SCM crashes when ReplicationManager is trying to re-replicate under 
> replicated containers
> -
>
> Key: HDDS-1770
> URL: https://issues.apache.org/jira/browse/HDDS-1770
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: SCM
>Reporter: Nanda kumar
>Priority: Blocker
>
> SCM crashes with the following exception when ReplicationManager is trying to 
> re-replicate under-replicated containers
> {noformat}
> 2019-07-08 12:46:36 ERROR ReplicationManager:215 - Exception in Replication 
> Monitor Thread.
> java.lang.IllegalArgumentException: Affinity node /default-rack/aab15e2d07cc 
> is not a member of topology
> at 
> org.apache.hadoop.hdds.scm.net.NetworkTopologyImpl.checkAffinityNode(NetworkTopologyImpl.java:767)
> at 
> org.apache.hadoop.hdds.scm.net.NetworkTopologyImpl.chooseRandom(NetworkTopologyImpl.java:407)
> at 
> org.apache.hadoop.hdds.scm.container.placement.algorithms.SCMContainerPlacementRackAware.chooseNode(SCMContainerPlacementRackAware.java:242)
> at 
> org.apache.hadoop.hdds.scm.container.placement.algorithms.SCMContainerPlacementRackAware.chooseDatanodes(SCMContainerPlacementRackAware.java:168)
> at 
> org.apache.hadoop.hdds.scm.container.ReplicationManager.handleUnderReplicatedContainer(ReplicationManager.java:487)
> at 
> org.apache.hadoop.hdds.scm.container.ReplicationManager.processContainer(ReplicationManager.java:293)
> at 
> java.base/java.util.concurrent.ConcurrentHashMap$KeySetView.forEach(ConcurrentHashMap.java:4698)
> at 
> java.base/java.util.Collections$UnmodifiableCollection.forEach(Collections.java:1083)
> at 
> org.apache.hadoop.hdds.scm.container.ReplicationManager.run(ReplicationManager.java:205)
> at java.base/java.lang.Thread.run(Thread.java:834)
> 2019-07-08 12:46:36 INFO  ExitUtil:210 - Exiting with status 1: 
> java.lang.IllegalArgumentException: Affinity node /default-rack/aab15e2d07cc 
> is not a member of topology
> 2019-07-08 12:46:36 INFO  StorageContainerManagerStarter:51 - SHUTDOWN_MSG: 
> /
> SHUTDOWN_MSG: Shutting down StorageContainerManager at 
> 8c763563f672/192.168.112.2
> /
> {noformat}
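
The crash pattern is an uncaught exception escaping processContainer() and
killing the monitor thread. A hedged illustration of a defensive loop (not
the actual fix; this issue was resolved as a duplicate of another Jira):

{code}
for (ContainerInfo container : containers) {
  try {
    processContainer(container);
  } catch (Exception e) {
    // Isolate per-container failures so one bad placement lookup
    // cannot take down the whole ReplicationManager thread.
    LOG.error("Processing failed for container {}",
        container.containerID(), e);
  }
}
{code}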



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1813) Fix false warning from ozones3 acceptance test

2019-07-16 Thread Xiaoyu Yao (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1813?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDDS-1813:
-
Fix Version/s: 0.5.0
   Status: In Progress  (was: Patch Available)

Fix has been committed to trunk. 

> Fix false warning from ozones3 acceptance test
> --
>
> Key: HDDS-1813
> URL: https://issues.apache.org/jira/browse/HDDS-1813
> Project: Hadoop Distributed Data Store
>  Issue Type: Test
>Affects Versions: 0.4.1
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> All acceptance tests passed but the results are marked failed due to the following 
> warnings.
> [https://ci.anzix.net/job/ozone/17381/RobotTests/log.html]
> {code}
> [ WARN ] Collapsing consecutive whitespace during parsing is deprecated. Fix 
> ' # Bucket already is created in Test Setup.' in file 
> '/opt/hadoop/smoketest/s3/bucketcreate.robot' on line 31.
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Resolved] (HDDS-1813) Fix false warning from ozones3 acceptance test

2019-07-16 Thread Xiaoyu Yao (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1813?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao resolved HDDS-1813.
--
Resolution: Fixed

> Fix false warning from ozones3 acceptance test
> --
>
> Key: HDDS-1813
> URL: https://issues.apache.org/jira/browse/HDDS-1813
> Project: Hadoop Distributed Data Store
>  Issue Type: Test
>Affects Versions: 0.4.1
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> All acceptance tests passed but the results are marked failed due to the following 
> warnings.
> [https://ci.anzix.net/job/ozone/17381/RobotTests/log.html]
> {code}
> [ WARN ] Collapsing consecutive whitespace during parsing is deprecated. Fix 
> ' # Bucket already is created in Test Setup.' in file 
> '/opt/hadoop/smoketest/s3/bucketcreate.robot' on line 31.
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1813) Fix false warning from ozones3 acceptance test

2019-07-16 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1813?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16886451#comment-16886451
 ] 

Hudson commented on HDDS-1813:
--

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #16930 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/16930/])
HDDS-1813. Fix false warning from ozones3 acceptance test. Contributed (github: 
rev 71c21f0a9ba3c684f204198a8340749a1816458d)
* (edit) hadoop-ozone/dist/src/main/smoketest/s3/bucketcreate.robot


> Fix false warning from ozones3 acceptance test
> --
>
> Key: HDDS-1813
> URL: https://issues.apache.org/jira/browse/HDDS-1813
> Project: Hadoop Distributed Data Store
>  Issue Type: Test
>Affects Versions: 0.4.1
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> All acceptance tests passed but the results are marked failed due to the following 
> warnings.
> [https://ci.anzix.net/job/ozone/17381/RobotTests/log.html]
> {code}
> [ WARN ] Collapsing consecutive whitespace during parsing is deprecated. Fix 
> ' # Bucket already is created in Test Setup.' in file 
> '/opt/hadoop/smoketest/s3/bucketcreate.robot' on line 31.
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1813) Fix false warning from ozones3 acceptance test

2019-07-16 Thread Xiaoyu Yao (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1813?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDDS-1813:
-
Status: Patch Available  (was: Open)

> Fix false warning from ozones3 acceptance test
> --
>
> Key: HDDS-1813
> URL: https://issues.apache.org/jira/browse/HDDS-1813
> Project: Hadoop Distributed Data Store
>  Issue Type: Test
>Affects Versions: 0.4.1
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> All acceptance tests passed but the results are marked failed due to the following 
> warnings.
> [https://ci.anzix.net/job/ozone/17381/RobotTests/log.html]
> {code}
> [ WARN ] Collapsing consecutive whitespace during parsing is deprecated. Fix 
> ' # Bucket already is created in Test Setup.' in file 
> '/opt/hadoop/smoketest/s3/bucketcreate.robot' on line 31.
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1813) Fix false warning from ozones3 acceptance test

2019-07-16 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1813?focusedWorklogId=277748&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-277748
 ]

ASF GitHub Bot logged work on HDDS-1813:


Author: ASF GitHub Bot
Created on: 16/Jul/19 20:47
Start Date: 16/Jul/19 20:47
Worklog Time Spent: 10m 
  Work Description: xiaoyuyao commented on pull request #1107: HDDS-1813. 
Fix false warning from ozones3 acceptance test. Contribute…
URL: https://github.com/apache/hadoop/pull/1107
 
 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 277748)
Time Spent: 20m  (was: 10m)

> Fix false warning from ozones3 acceptance test
> --
>
> Key: HDDS-1813
> URL: https://issues.apache.org/jira/browse/HDDS-1813
> Project: Hadoop Distributed Data Store
>  Issue Type: Test
>Affects Versions: 0.4.1
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> All acceptance tests passed but the results are marked failed due to the following 
> warnings.
> [https://ci.anzix.net/job/ozone/17381/RobotTests/log.html]
> {code}
> [ WARN ] Collapsing consecutive whitespace during parsing is deprecated. Fix 
> ' # Bucket already is created in Test Setup.' in file 
> '/opt/hadoop/smoketest/s3/bucketcreate.robot' on line 31.
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1787) NPE thrown while trying to find DN closest to client

2019-07-16 Thread Xiaoyu Yao (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1787?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDDS-1787:
-
   Resolution: Fixed
Fix Version/s: 0.5.0
   Status: Resolved  (was: Patch Available)

Thanks [~Sammi] for the contribution and all for the reviews. I've committed the 
patch to trunk. 

> NPE thrown while trying to find DN closest to client
> 
>
> Key: HDDS-1787
> URL: https://issues.apache.org/jira/browse/HDDS-1787
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Datanode
>Affects Versions: 0.5.0
>Reporter: Siddharth Wagle
>Assignee: Sammi Chen
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> cc: [~xyao] This seems related to the client side topology changes, not sure 
> if some other Jira is already addressing this.
> {code}
> 2019-07-10 16:45:53,176 WARN  ipc.Server (Server.java:logException(2724)) - 
> IPC Server handler 14 on 35066, call Call#127037 Retry#0 
> org.apache.hadoop.hdds.scm.protocol.ScmBlockLocationProtocol.send from 17
> 2.31.116.73:52540
> java.lang.NullPointerException
> at 
> org.apache.hadoop.ozone.protocolPB.ScmBlockLocationProtocolServerSideTranslatorPB.lambda$sortDatanodes$0(ScmBlockLocationProtocolServerSideTranslatorPB.java:215)
> at 
> java.util.ArrayList$ArrayListSpliterator.forEachRemaining(ArrayList.java:1382)
> at 
> java.util.stream.ReferencePipeline$Head.forEach(ReferencePipeline.java:580)
> at 
> org.apache.hadoop.ozone.protocolPB.ScmBlockLocationProtocolServerSideTranslatorPB.sortDatanodes(ScmBlockLocationProtocolServerSideTranslatorPB.java:215)
> at 
> org.apache.hadoop.ozone.protocolPB.ScmBlockLocationProtocolServerSideTranslatorPB.send(ScmBlockLocationProtocolServerSideTranslatorPB.java:124)
> at 
> org.apache.hadoop.hdds.protocol.proto.ScmBlockLocationProtocolProtos$ScmBlockLocationProtocolService$2.callBlockingMethod(ScmBlockLocationProtocolProtos.java:13157)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:524)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1025)
> at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:876)
> at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:822)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2682)
> 2019-07-10 16:45:53,176 WARN  om.KeyManagerImpl 
> (KeyManagerImpl.java:lambda$sortDatanodeInPipeline$7(2129)) - Unable to sort 
> datanodes based on distance to client, volume=xqoyzocpse, bucket=vxwajaczqh, 
> key=pool-444-thread-7-201077822, client=127.0.0.1, 
> datanodes=[10f15723-45d7-4a0c-8f01-8b101744a110{ip: 172.31.116.73, host: 
> sid-minichaos.gce.cloudera.com, networkLocation: /default-rack, certSerialId: 
> null}, 7ac2777f-0a5c-4414-9e7f-bfbc47d696ea{ip: 172.31.116.73, host: 
> sid-minichaos.gce.cloudera.com, networkLocation: /default-rack, certSerialId: 
> null}], exception=java.lang.NullPointerException
> at 
> org.apache.hadoop.ozone.protocolPB.ScmBlockLocationProtocolServerSideTranslatorPB.lambda$sortDatanodes$0(ScmBlockLocationProtocolServerSideTranslatorPB.java:215)
> at 
> java.util.ArrayList$ArrayListSpliterator.forEachRemaining(ArrayList.java:1382)
> at 
> java.util.stream.ReferencePipeline$Head.forEach(ReferencePipeline.java:580)
> at 
> org.apache.hadoop.ozone.protocolPB.ScmBlockLocationProtocolServerSideTranslatorPB.sortDatanodes(ScmBlockLocationProtocolServerSideTranslatorPB.java:215)
> at 
> org.apache.hadoop.ozone.protocolPB.ScmBlockLocationProtocolServerSideTranslatorPB.send(ScmBlockLocationProtocolServerSideTranslatorPB.java:124)
> at 
> org.apache.hadoop.hdds.protocol.proto.ScmBlockLocationProtocolProtos$ScmBlockLocationProtocolService$2.callBlockingMethod(ScmBlockLocationProtocolProtos.java:13157)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:524)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1025)
> at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:876)
> at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:822)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730)
> at 

[jira] [Work logged] (HDDS-1775) Make OM KeyDeletingService compatible with HA model

2019-07-16 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1775?focusedWorklogId=277744&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-277744
 ]

ASF GitHub Bot logged work on HDDS-1775:


Author: ASF GitHub Bot
Created on: 16/Jul/19 20:42
Start Date: 16/Jul/19 20:42
Worklog Time Spent: 10m 
  Work Description: hanishakoneru commented on issue #1063: HDDS-1775. Make 
OM KeyDeletingService compatible with HA model
URL: https://github.com/apache/hadoop/pull/1063#issuecomment-511977786
 
 
   /retest
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 277744)
Time Spent: 2h 40m  (was: 2.5h)

> Make OM KeyDeletingService compatible with HA model
> ---
>
> Key: HDDS-1775
> URL: https://issues.apache.org/jira/browse/HDDS-1775
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 2h 40m
>  Remaining Estimate: 0h
>
> Currently OM KeyDeletingService directly deletes all the keys in DeletedTable 
> after deleting the corresponding blocks through SCM. For HA compatibility, 
> the key purging should happen through the OM Ratis server. This Jira 
> introduces a PurgeKeys request in the OM protocol. This request will be submitted 
> to the OM's Ratis server after SCM deletes the blocks corresponding to deleted keys.
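
A hedged sketch of that flow; the message and field names below are
assumptions about the protocol change, not the committed definition:

{code}
// In KeyDeletingService, after SCM confirms the block deletions:
OMRequest purgeRequest = OMRequest.newBuilder()
    .setCmdType(Type.PurgeKeys)
    .setPurgeKeysRequest(PurgeKeysRequest.newBuilder()
        .addAllDeletedKeys(purgedKeyNames))
    .build();
// Submitting through the OM Ratis server replicates the purge, so every OM
// deletes the same DeletedTable rows at the same log index.
omRatisServer.submitRequest(purgeRequest);
{code}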



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1802) Add Eviction policy for table cache

2019-07-16 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1802?focusedWorklogId=277727&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-277727
 ]

ASF GitHub Bot logged work on HDDS-1802:


Author: ASF GitHub Bot
Created on: 16/Jul/19 20:25
Start Date: 16/Jul/19 20:25
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on issue #1100: HDDS-1802. Add 
Eviction policy for table cache.
URL: https://github.com/apache/hadoop/pull/1100#issuecomment-511971954
 
 
   > Capturing our offline discussion. We discussed the following:
   > 
   > 1. Populate full cache on process startup (mentioned by Bharat).
   > 2. Move decision for full lookup into cache implementation. The result 
will be communicated to TypedTable.
   > 3. Rename AFTER_FLUSH to MANUAL.
   > 
   > Anything I missed?
   
   And one more comment from you: when a cache entry is deleted (like delete 
volume/bucket), we should remove the cache entry explicitly, since for a full 
cache, cleanup will not happen on its own.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 277727)
Time Spent: 1h 40m  (was: 1.5h)

> Add Eviction policy for table cache
> ---
>
> Key: HDDS-1802
> URL: https://issues.apache.org/jira/browse/HDDS-1802
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 40m
>  Remaining Estimate: 0h
>
> In this Jira we will add an eviction policy for the table cache.
> We will add 2 eviction policies for the cache:
> NEVER, // Cache will not be cleaned up. This means the table maintains a full 
> cache.
> AFTERFLUSH // Cache will be cleaned up once it has been flushed to the DB.
>  
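
A sketch of what the two policies in the description might look like, taking
the naming discussion above into account (illustrative only; the real enum
and names may differ):

{code}
public enum CacheCleanupPolicy {
  NEVER,       // full-table cache: entries are never evicted
  AFTER_FLUSH  // entries are evicted once their batch is flushed to the DB
               // (proposed rename in the discussion above: MANUAL)
}
{code}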



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1802) Add Eviction policy for table cache

2019-07-16 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1802?focusedWorklogId=277728&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-277728
 ]

ASF GitHub Bot logged work on HDDS-1802:


Author: ASF GitHub Bot
Created on: 16/Jul/19 20:25
Start Date: 16/Jul/19 20:25
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on issue #1100: HDDS-1802. Add 
Eviction policy for table cache.
URL: https://github.com/apache/hadoop/pull/1100#issuecomment-511971954
 
 
   > Capturing our offline discussion. We discussed the following:
   > 
   > 1. Populate full cache on process startup (mentioned by Bharat).
   > 2. Move decision for full lookup into cache implementation. The result 
will be communicated to TypedTable.
   > 3. Rename AFTER_FLUSH to MANUAL.
   > 
   > Anything I missed?
   
   Thank You @arp7 for the offline discussion.
   And one more comment from you: when a cache entry is deleted (like delete 
volume/bucket), we should remove the cache entry explicitly, since for a full 
cache, cleanup will not happen on its own.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 277728)
Time Spent: 1h 50m  (was: 1h 40m)

> Add Eviction policy for table cache
> ---
>
> Key: HDDS-1802
> URL: https://issues.apache.org/jira/browse/HDDS-1802
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 50m
>  Remaining Estimate: 0h
>
> In this Jira we will add an eviction policy for the table cache.
> We will add 2 eviction policies for the cache:
> NEVER, // Cache will not be cleaned up. This means the table maintains a full 
> cache.
> AFTERFLUSH // Cache will be cleaned up once it has been flushed to the DB.
>  



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13647) Fix the description of storageType option for space quota

2019-07-16 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13647?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16886431#comment-16886431
 ] 

Hadoop QA commented on HDFS-13647:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
39s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
8s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 46s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
11s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
54s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 11s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}117m 53s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
43s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}177m 48s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.server.datanode.TestDataNodeErasureCodingMetrics |
|   | hadoop.cli.TestHDFSCLI |
|   | hadoop.hdfs.server.datanode.TestDirectoryScanner |
|   | hadoop.hdfs.server.balancer.TestBalancer |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=18.09.7 Server=18.09.7 Image:yetus/hadoop:bdbca0e53b4 |
| JIRA Issue | HDFS-13647 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12926295/HDFS-13647.1.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 8efc5c3b96f1 4.15.0-52-generic #56-Ubuntu SMP Tue Jun 4 
22:49:08 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / c5e3ab5 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_212 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/27238/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/27238/testReport/ |
| Max. process+thread 

[jira] [Work logged] (HDDS-1802) Add Eviction policy for table cache

2019-07-16 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1802?focusedWorklogId=277724&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-277724
 ]

ASF GitHub Bot logged work on HDDS-1802:


Author: ASF GitHub Bot
Created on: 16/Jul/19 20:15
Start Date: 16/Jul/19 20:15
Worklog Time Spent: 10m 
  Work Description: arp7 commented on issue #1100: HDDS-1802. Add Eviction 
policy for table cache.
URL: https://github.com/apache/hadoop/pull/1100#issuecomment-511968524
 
 
   Capturing our offline discussion. We discussed the following:
   1. Populate full cache on process startup (mentioned by Bharat).
   1. Move decision for full lookup into cache implementation. The result will 
be communicated to TypedTable.
   1. Rename AFTER_FLUSH to MANUAL.
   
   Anything I missed?
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 277724)
Time Spent: 1.5h  (was: 1h 20m)

> Add Eviction policy for table cache
> ---
>
> Key: HDDS-1802
> URL: https://issues.apache.org/jira/browse/HDDS-1802
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> In this Jira we will add an eviction policy for the table cache.
> We will add 2 eviction policies for the cache:
> NEVER, // Cache will not be cleaned up. This means the table maintains a full 
> cache.
> AFTERFLUSH // Cache will be cleaned up once it has been flushed to the DB.
>  



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1544) Support default Acls for volume, bucket, keys and prefix

2019-07-16 Thread Anu Engineer (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1544?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16886430#comment-16886430
 ] 

Anu Engineer commented on HDDS-1544:


Cherry-picked to Ozone-0.4.1 branch.

> Support default Acls for volume, bucket, keys and prefix
> 
>
> Key: HDDS-1544
> URL: https://issues.apache.org/jira/browse/HDDS-1544
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Xiaoyu Yao
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0, 0.4.1
>
> Attachments: HDDS-1544.00.patch
>
>  Time Spent: 17h 20m
>  Remaining Estimate: 0h
>
> Add default Acls for volume, bucket, keys and prefix



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1802) Add Eviction policy for table cache

2019-07-16 Thread Arpit Agarwal (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1802?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDDS-1802:

Status: Patch Available  (was: Open)

> Add Eviction policy for table cache
> ---
>
> Key: HDDS-1802
> URL: https://issues.apache.org/jira/browse/HDDS-1802
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> In this Jira we will add an eviction policy for the table cache.
> We will add 2 eviction policies for the cache:
> NEVER // Cache will not be cleaned up. This means the table maintains a full 
> cache.
> AFTERFLUSH // Cache will be cleaned up once after flushing to DB.
>  



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1805) Implement S3 Initiate MPU request to use Cache and DoubleBuffer

2019-07-16 Thread Arpit Agarwal (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1805?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDDS-1805:

Labels:   (was: pull-request-available)

> Implement S3 Initiate MPU request to use Cache and DoubleBuffer
> ---
>
> Key: HDDS-1805
> URL: https://issues.apache.org/jira/browse/HDDS-1805
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Manager
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>
> Implement S3 Initiate MPU request to use the OM cache and double buffer.
>  
> In this Jira we will add the changes to implement the S3 bucket operations. 
> HA and non-HA will have different code paths, but once all requests are 
> implemented there will be a single code path.
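For context, a minimal sketch of the cache-plus-double-buffer pattern referred to here; DoubleBufferSketch is an illustrative name, not the actual OM class:

{code:java}
import java.util.ArrayDeque;
import java.util.Queue;

// Minimal double-buffer sketch: request handlers append responses to the
// current buffer under a short lock, while a flush thread periodically swaps
// the buffers and persists the returned batch to the DB in a single write.
final class DoubleBufferSketch<T> {
  private Queue<T> currentBuffer = new ArrayDeque<>();
  private Queue<T> readyBuffer = new ArrayDeque<>();

  synchronized void add(T response) {
    currentBuffer.add(response);
  }

  // Flush thread: swap under the lock, then write the batch outside it.
  synchronized Queue<T> swapAndGetBatch() {
    Queue<T> batch = currentBuffer;
    currentBuffer = readyBuffer;  // assumed drained by the previous flush
    readyBuffer = batch;
    return batch;
  }
}
{code}

The point of the swap is that request handlers only ever hold the lock for an append, while the expensive DB write happens outside it.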



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1544) Support default Acls for volume, bucket, keys and prefix

2019-07-16 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1544?focusedWorklogId=277721=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-277721
 ]

ASF GitHub Bot logged work on HDDS-1544:


Author: ASF GitHub Bot
Created on: 16/Jul/19 20:08
Start Date: 16/Jul/19 20:08
Worklog Time Spent: 10m 
  Work Description: anuengineer commented on issue #1101: HDDS-1544. 
Support default Acls for volume, bucket, keys and prefix. …
URL: https://github.com/apache/hadoop/pull/1101#issuecomment-511966295
 
 
   I have committed this to trunk and cleaned up all the white space issues. 
Thanks for the patch.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 277721)
Time Spent: 17h 20m  (was: 17h 10m)

> Support default Acls for volume, bucket, keys and prefix
> 
>
> Key: HDDS-1544
> URL: https://issues.apache.org/jira/browse/HDDS-1544
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Xiaoyu Yao
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0, 0.4.1
>
> Attachments: HDDS-1544.00.patch
>
>  Time Spent: 17h 20m
>  Remaining Estimate: 0h
>
> Add default Acls for volume, bucket, keys and prefix



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1544) Support default Acls for volume, bucket, keys and prefix

2019-07-16 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1544?focusedWorklogId=277720=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-277720
 ]

ASF GitHub Bot logged work on HDDS-1544:


Author: ASF GitHub Bot
Created on: 16/Jul/19 20:08
Start Date: 16/Jul/19 20:08
Worklog Time Spent: 10m 
  Work Description: anuengineer commented on pull request #1101: HDDS-1544. 
Support default Acls for volume, bucket, keys and prefix. …
URL: https://github.com/apache/hadoop/pull/1101
 
 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 277720)
Time Spent: 17h 10m  (was: 17h)

> Support default Acls for volume, bucket, keys and prefix
> 
>
> Key: HDDS-1544
> URL: https://issues.apache.org/jira/browse/HDDS-1544
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Xiaoyu Yao
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0, 0.4.1
>
> Attachments: HDDS-1544.00.patch
>
>  Time Spent: 17h 10m
>  Remaining Estimate: 0h
>
> Add default Acls for volume, bucket, keys and prefix



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1544) Support default Acls for volume, bucket, keys and prefix

2019-07-16 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1544?focusedWorklogId=277719=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-277719
 ]

ASF GitHub Bot logged work on HDDS-1544:


Author: ASF GitHub Bot
Created on: 16/Jul/19 20:08
Start Date: 16/Jul/19 20:08
Worklog Time Spent: 10m 
  Work Description: anuengineer commented on issue #1074: HDDS-1544. 
Support default Acls for volume, bucket, keys and prefix. Contributed by Ajay 
Kumar.
URL: https://github.com/apache/hadoop/pull/1074#issuecomment-511966060
 
 
   I have committed this to trunk. Thanks for your work @ajayydv 
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 277719)
Time Spent: 17h  (was: 16h 50m)

> Support default Acls for volume, bucket, keys and prefix
> 
>
> Key: HDDS-1544
> URL: https://issues.apache.org/jira/browse/HDDS-1544
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Xiaoyu Yao
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0, 0.4.1
>
> Attachments: HDDS-1544.00.patch
>
>  Time Spent: 17h
>  Remaining Estimate: 0h
>
> Add default Acls for volume, bucket, keys and prefix



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1544) Support default Acls for volume, bucket, keys and prefix

2019-07-16 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1544?focusedWorklogId=277718=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-277718
 ]

ASF GitHub Bot logged work on HDDS-1544:


Author: ASF GitHub Bot
Created on: 16/Jul/19 20:08
Start Date: 16/Jul/19 20:08
Worklog Time Spent: 10m 
  Work Description: anuengineer commented on pull request #1074: HDDS-1544. 
Support default Acls for volume, bucket, keys and prefix. Contributed by Ajay 
Kumar.
URL: https://github.com/apache/hadoop/pull/1074
 
 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 277718)
Time Spent: 16h 50m  (was: 16h 40m)

> Support default Acls for volume, bucket, keys and prefix
> 
>
> Key: HDDS-1544
> URL: https://issues.apache.org/jira/browse/HDDS-1544
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Xiaoyu Yao
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0, 0.4.1
>
> Attachments: HDDS-1544.00.patch
>
>  Time Spent: 16h 50m
>  Remaining Estimate: 0h
>
> Add default Acls for volume, bucket, keys and prefix



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1544) Support default Acls for volume, bucket, keys and prefix

2019-07-16 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1544?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16886424#comment-16886424
 ] 

Hudson commented on HDDS-1544:
--

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #16929 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/16929/])
HDDS-1544. Support default Acls for volume, bucket, keys and prefix. 
(aengineer: rev d429d742f08d71b5a55cd3b7e65d9aa91df1f0f7)
* (edit) 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/helpers/OmPrefixInfo.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/S3BucketManagerImpl.java
* (edit) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/web/client/TestBuckets.java
* (edit) hadoop-ozone/common/src/main/proto/OzoneManagerProtocol.proto
* (edit) 
hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/client/rpc/RpcClient.java
* (edit) 
hadoop-ozone/common/src/test/java/org/apache/hadoop/ozone/om/codec/TestOmPrefixInfoCodec.java
* (edit) 
hadoop-ozone/objectstore-service/src/main/java/org/apache/hadoop/ozone/web/storage/DistributedStorageHandler.java
* (edit) hadoop-ozone/dist/src/main/smoketest/security/ozone-secure-fs.robot
* (edit) 
hadoop-ozone/ozone-manager/src/test/java/org/apache/hadoop/ozone/om/TestBucketManagerImpl.java
* (edit) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/om/TestOzoneManager.java
* (edit) 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/helpers/OmOzoneAclMap.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/KeyManagerImpl.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/BucketManagerImpl.java
* (edit) hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/OzoneAcl.java
* (edit) 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/protocolPB/OMPBHelper.java
* (edit) 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/protocol/OzoneManagerProtocol.java
* (edit) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/om/TestKeyManagerImpl.java
* (edit) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/client/rpc/TestOzoneRpcClientAbstract.java
* (edit) 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/web/utils/OzoneUtils.java
* (edit) 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/helpers/OmBucketArgs.java
* (edit) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/security/acl/TestOzoneNativeAuthorizer.java
* (edit) 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/helpers/OmBucketInfo.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OzoneManager.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/PrefixManagerImpl.java
* (edit) 
hadoop-ozone/common/src/test/java/org/apache/hadoop/ozone/TestOzoneAcls.java
* (edit) hadoop-ozone/dist/src/main/smoketest/basic/ozone-shell.robot
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/web/ozShell/bucket/UpdateBucketHandler.java


> Support default Acls for volume, bucket, keys and prefix
> 
>
> Key: HDDS-1544
> URL: https://issues.apache.org/jira/browse/HDDS-1544
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Xiaoyu Yao
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDDS-1544.00.patch
>
>  Time Spent: 16h 40m
>  Remaining Estimate: 0h
>
> Add default Acls for volume, bucket, keys and prefix



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1544) Support default Acls for volume, bucket, keys and prefix

2019-07-16 Thread Anu Engineer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1544?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDDS-1544:
---
   Resolution: Fixed
Fix Version/s: 0.4.1
   0.5.0
   Status: Resolved  (was: Patch Available)

[~ajayydv], [~xyao] Thanks for the work on this patch. I have committed this to 
trunk and will cherry-pick into ozone-0.4.1.

> Support default Acls for volume, bucket, keys and prefix
> 
>
> Key: HDDS-1544
> URL: https://issues.apache.org/jira/browse/HDDS-1544
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Xiaoyu Yao
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0, 0.4.1
>
> Attachments: HDDS-1544.00.patch
>
>  Time Spent: 16h 40m
>  Remaining Estimate: 0h
>
> Add default Acls for volume, bucket, keys and prefix
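As background on what "default ACLs" usually means, here is a hypothetical sketch assuming POSIX-style semantics; all type and method names below are illustrative, not Ozone's API. DEFAULT-scoped entries on a parent are inherited by newly created children:

{code:java}
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of default-ACL inheritance: a parent's DEFAULT-scoped
// ACLs become ACCESS ACLs on a new child, and are also carried forward as
// DEFAULT ACLs when the child can itself contain objects.
final class DefaultAclSketch {

  enum AclScope { ACCESS, DEFAULT }

  static final class Acl {
    final String principal;
    final String rights;
    final AclScope scope;

    Acl(String principal, String rights, AclScope scope) {
      this.principal = principal;
      this.rights = rights;
      this.scope = scope;
    }
  }

  static List<Acl> inheritAcls(List<Acl> parentAcls, boolean childIsContainer) {
    List<Acl> childAcls = new ArrayList<>();
    for (Acl acl : parentAcls) {
      if (acl.scope == AclScope.DEFAULT) {
        // The child gets the entry as an effective ACCESS ACL.
        childAcls.add(new Acl(acl.principal, acl.rights, AclScope.ACCESS));
        if (childIsContainer) {
          // e.g. a bucket or prefix: keep propagating the DEFAULT entry.
          childAcls.add(acl);
        }
      }
    }
    return childAcls;
  }
}
{code}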



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1775) Make OM KeyDeletingService compatible with HA model

2019-07-16 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1775?focusedWorklogId=277715=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-277715
 ]

ASF GitHub Bot logged work on HDDS-1775:


Author: ASF GitHub Bot
Created on: 16/Jul/19 20:00
Start Date: 16/Jul/19 20:00
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #1063: 
HDDS-1775. Make OM KeyDeletingService compatible with HA model
URL: https://github.com/apache/hadoop/pull/1063#discussion_r304093467
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/key/OMKeyPurgeRequest.java
 ##
 @@ -0,0 +1,50 @@
+package org.apache.hadoop.ozone.om.request.key;
+
+import org.apache.hadoop.ozone.om.OzoneManager;
+import org.apache.hadoop.ozone.om.request.OMClientRequest;
+import org.apache.hadoop.ozone.om.response.OMClientResponse;
+import org.apache.hadoop.ozone.om.response.key.OMKeyPurgeResponse;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos.OMRequest;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos.OMResponse;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos.PurgeKeysRequest;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos.Status;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos.Type;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.util.List;
+
+/**
+ * Handles purging of keys from OM DB.
+ */
+public class OMKeyPurgeRequest extends OMClientRequest
+implements OMKeyRequest {
 
 Review comment:
   OMKeyRequest is changed to abstract class in the latest trunk
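   Presumably the declaration then becomes something like the following after a rebase; this is a sketch based on the comment above, not the final patch:

{code:java}
// With OMKeyRequest now an abstract class on trunk (assumed here to subsume
// the OMClientRequest behaviour), the new request extends it instead of
// implementing an interface.
public class OMKeyPurgeRequest extends OMKeyRequest {
  // ... unchanged request handling ...
}
{code}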
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 277715)
Time Spent: 2.5h  (was: 2h 20m)

> Make OM KeyDeletingService compatible with HA model
> ---
>
> Key: HDDS-1775
> URL: https://issues.apache.org/jira/browse/HDDS-1775
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 2.5h
>  Remaining Estimate: 0h
>
> Currently OM KeyDeletingService directly deletes all the keys in DeletedTable 
> after deleting the corresponding blocks through SCM. For HA compatibility, 
> the key purging should happen through the OM Ratis server. This Jira 
> introduces a PurgeKeys request in the OM protocol. This request will be 
> submitted to the OM's Ratis server after SCM deletes the blocks 
> corresponding to deleted keys.
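A minimal sketch of that flow, assuming illustrative interface names rather than the real OM/SCM classes:

{code:java}
import java.io.IOException;
import java.util.List;

// Illustrative HA-compatible purge flow: blocks are deleted via SCM first,
// then a PurgeKeys request goes through the OM Ratis server so the DB
// deletion is replicated to every OM replica instead of applied locally.
final class KeyPurgeFlowSketch {

  interface ScmBlockClient {
    void deleteBlocksFor(List<String> keys) throws IOException;
  }

  interface OmRatisServer {
    void submitPurgeKeys(List<String> keys) throws IOException;
  }

  private final ScmBlockClient scm;
  private final OmRatisServer ratis;

  KeyPurgeFlowSketch(ScmBlockClient scm, OmRatisServer ratis) {
    this.scm = scm;
    this.ratis = ratis;
  }

  void purgeDeletedKeys(List<String> deletedKeys) throws IOException {
    scm.deleteBlocksFor(deletedKeys);   // 1. delete the data blocks via SCM
    ratis.submitPurgeKeys(deletedKeys); // 2. purge DB entries via Ratis
  }
}
{code}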



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1787) NPE thrown while trying to find DN closest to client

2019-07-16 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1787?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16886416#comment-16886416
 ] 

Hudson commented on HDDS-1787:
--

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #16928 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/16928/])
HDDS-1787. NPE thrown while trying to find DN closest to client. (xyao: rev 
493b0b57601afe3f9ce944d18ca09bdd058d2ce4)
* (edit) 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/SCMNodeManager.java
* (edit) 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/server/SCMBlockProtocolServer.java
* (add) 
hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/server/TestSCMBlockProtocolServer.java


> NPE thrown while trying to find DN closest to client
> 
>
> Key: HDDS-1787
> URL: https://issues.apache.org/jira/browse/HDDS-1787
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Datanode
>Affects Versions: 0.5.0
>Reporter: Siddharth Wagle
>Assignee: Sammi Chen
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> cc: [~xyao] This seems related to the client side topology changes, not sure 
> if some other Jira is already addressing this.
> {code}
> 2019-07-10 16:45:53,176 WARN  ipc.Server (Server.java:logException(2724)) - 
> IPC Server handler 14 on 35066, call Call#127037 Retry#0 
> org.apache.hadoop.hdds.scm.protocol.ScmBlockLocationProtocol.send from 17
> 2.31.116.73:52540
> java.lang.NullPointerException
> at 
> org.apache.hadoop.ozone.protocolPB.ScmBlockLocationProtocolServerSideTranslatorPB.lambda$sortDatanodes$0(ScmBlockLocationProtocolServerSideTranslatorPB.java:215)
> at 
> java.util.ArrayList$ArrayListSpliterator.forEachRemaining(ArrayList.java:1382)
> at 
> java.util.stream.ReferencePipeline$Head.forEach(ReferencePipeline.java:580)
> at 
> org.apache.hadoop.ozone.protocolPB.ScmBlockLocationProtocolServerSideTranslatorPB.sortDatanodes(ScmBlockLocationProtocolServerSideTranslatorPB.java:215)
> at 
> org.apache.hadoop.ozone.protocolPB.ScmBlockLocationProtocolServerSideTranslatorPB.send(ScmBlockLocationProtocolServerSideTranslatorPB.java:124)
> at 
> org.apache.hadoop.hdds.protocol.proto.ScmBlockLocationProtocolProtos$ScmBlockLocationProtocolService$2.callBlockingMethod(ScmBlockLocationProtocolProtos.java:13157)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:524)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1025)
> at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:876)
> at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:822)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2682)
> 2019-07-10 16:45:53,176 WARN  om.KeyManagerImpl 
> (KeyManagerImpl.java:lambda$sortDatanodeInPipeline$7(2129)) - Unable to sort 
> datanodes based on distance to client, volume=xqoyzocpse, bucket=vxwajaczqh, 
> key=pool-444-thread-7-201077822, client=127.0.0.1, 
> datanodes=[10f15723-45d7-4a0c-8f01-8b101744a110{ip: 172.31.116.73, host: 
> sid-minichaos.gce.cloudera.com, networkLocation: /default-rack, certSerialId: 
> null}, 7ac2777f-0a5c-4414-9e7f-bfbc47d696ea{ip: 172.31.116.73, host: 
> sid-minichaos.gce.cloudera.com, networkLocation: /default-rack, certSerialId: 
> null}], exception=java.lang.NullPointerException
> at 
> org.apache.hadoop.ozone.protocolPB.ScmBlockLocationProtocolServerSideTranslatorPB.lambda$sortDatanodes$0(ScmBlockLocationProtocolServerSideTranslatorPB.java:215)
> at 
> java.util.ArrayList$ArrayListSpliterator.forEachRemaining(ArrayList.java:1382)
> at 
> java.util.stream.ReferencePipeline$Head.forEach(ReferencePipeline.java:580)
> at 
> org.apache.hadoop.ozone.protocolPB.ScmBlockLocationProtocolServerSideTranslatorPB.sortDatanodes(ScmBlockLocationProtocolServerSideTranslatorPB.java:215)
> at 
> org.apache.hadoop.ozone.protocolPB.ScmBlockLocationProtocolServerSideTranslatorPB.send(ScmBlockLocationProtocolServerSideTranslatorPB.java:124)
> at 
> org.apache.hadoop.hdds.protocol.proto.ScmBlockLocationProtocolProtos$ScmBlockLocationProtocolService$2.callBlockingMethod(ScmBlockLocationProtocolProtos.java:13157)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:524)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1025)
> at 

[jira] [Updated] (HDDS-1787) NPE thrown while trying to find DN closest to client

2019-07-16 Thread Xiaoyu Yao (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1787?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDDS-1787:
-
Issue Type: Sub-task  (was: Bug)
Parent: HDDS-698

> NPE thrown while trying to find DN closest to client
> 
>
> Key: HDDS-1787
> URL: https://issues.apache.org/jira/browse/HDDS-1787
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Datanode
>Affects Versions: 0.5.0
>Reporter: Siddharth Wagle
>Assignee: Sammi Chen
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> cc: [~xyao] This seems related to the client side topology changes, not sure 
> if some other Jira is already addressing this.
> {code}
> 2019-07-10 16:45:53,176 WARN  ipc.Server (Server.java:logException(2724)) - 
> IPC Server handler 14 on 35066, call Call#127037 Retry#0 
> org.apache.hadoop.hdds.scm.protocol.ScmBlockLocationProtocol.send from 17
> 2.31.116.73:52540
> java.lang.NullPointerException
> at 
> org.apache.hadoop.ozone.protocolPB.ScmBlockLocationProtocolServerSideTranslatorPB.lambda$sortDatanodes$0(ScmBlockLocationProtocolServerSideTranslatorPB.java:215)
> at 
> java.util.ArrayList$ArrayListSpliterator.forEachRemaining(ArrayList.java:1382)
> at 
> java.util.stream.ReferencePipeline$Head.forEach(ReferencePipeline.java:580)
> at 
> org.apache.hadoop.ozone.protocolPB.ScmBlockLocationProtocolServerSideTranslatorPB.sortDatanodes(ScmBlockLocationProtocolServerSideTranslatorPB.java:215)
> at 
> org.apache.hadoop.ozone.protocolPB.ScmBlockLocationProtocolServerSideTranslatorPB.send(ScmBlockLocationProtocolServerSideTranslatorPB.java:124)
> at 
> org.apache.hadoop.hdds.protocol.proto.ScmBlockLocationProtocolProtos$ScmBlockLocationProtocolService$2.callBlockingMethod(ScmBlockLocationProtocolProtos.java:13157)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:524)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1025)
> at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:876)
> at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:822)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2682)
> 2019-07-10 16:45:53,176 WARN  om.KeyManagerImpl 
> (KeyManagerImpl.java:lambda$sortDatanodeInPipeline$7(2129)) - Unable to sort 
> datanodes based on distance to client, volume=xqoyzocpse, bucket=vxwajaczqh, 
> key=pool-444-thread-7-201077822, client=127.0.0.1, 
> datanodes=[10f15723-45d7-4a0c-8f01-8b101744a110{ip: 172.31.116.73, host: 
> sid-minichaos.gce.cloudera.com, networkLocation: /default-rack, certSerialId: 
> null}, 7ac2777f-0a5c-4414-9e7f-bfbc47d696ea{ip: 172.31.116.73, host: 
> sid-minichaos.gce.cloudera.com, networkLocation: /default-rack, certSerialId: 
> null}], exception=java.lang.NullPointerException
> at 
> org.apache.hadoop.ozone.protocolPB.ScmBlockLocationProtocolServerSideTranslatorPB.lambda$sortDatanodes$0(ScmBlockLocationProtocolServerSideTranslatorPB.java:215)
> at 
> java.util.ArrayList$ArrayListSpliterator.forEachRemaining(ArrayList.java:1382)
> at 
> java.util.stream.ReferencePipeline$Head.forEach(ReferencePipeline.java:580)
> at 
> org.apache.hadoop.ozone.protocolPB.ScmBlockLocationProtocolServerSideTranslatorPB.sortDatanodes(ScmBlockLocationProtocolServerSideTranslatorPB.java:215)
> at 
> org.apache.hadoop.ozone.protocolPB.ScmBlockLocationProtocolServerSideTranslatorPB.send(ScmBlockLocationProtocolServerSideTranslatorPB.java:124)
> at 
> org.apache.hadoop.hdds.protocol.proto.ScmBlockLocationProtocolProtos$ScmBlockLocationProtocolService$2.callBlockingMethod(ScmBlockLocationProtocolProtos.java:13157)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:524)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1025)
> at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:876)
> at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:822)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2682)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Work logged] (HDDS-1787) NPE thrown while trying to find DN closest to client

2019-07-16 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1787?focusedWorklogId=277705=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-277705
 ]

ASF GitHub Bot logged work on HDDS-1787:


Author: ASF GitHub Bot
Created on: 16/Jul/19 19:52
Start Date: 16/Jul/19 19:52
Worklog Time Spent: 10m 
  Work Description: xiaoyuyao commented on pull request #1094: HDDS-1787. 
NPE thrown while trying to find DN closest to client.
URL: https://github.com/apache/hadoop/pull/1094
 
 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 277705)
Time Spent: 1.5h  (was: 1h 20m)

> NPE thrown while trying to find DN closest to client
> 
>
> Key: HDDS-1787
> URL: https://issues.apache.org/jira/browse/HDDS-1787
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Affects Versions: 0.5.0
>Reporter: Siddharth Wagle
>Assignee: Sammi Chen
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> cc: [~xyao] This seems related to the client side topology changes, not sure 
> if some other Jira is already addressing this.
> {code}
> 2019-07-10 16:45:53,176 WARN  ipc.Server (Server.java:logException(2724)) - 
> IPC Server handler 14 on 35066, call Call#127037 Retry#0 
> org.apache.hadoop.hdds.scm.protocol.ScmBlockLocationProtocol.send from 17
> 2.31.116.73:52540
> java.lang.NullPointerException
> at 
> org.apache.hadoop.ozone.protocolPB.ScmBlockLocationProtocolServerSideTranslatorPB.lambda$sortDatanodes$0(ScmBlockLocationProtocolServerSideTranslatorPB.java:215)
> at 
> java.util.ArrayList$ArrayListSpliterator.forEachRemaining(ArrayList.java:1382)
> at 
> java.util.stream.ReferencePipeline$Head.forEach(ReferencePipeline.java:580)
> at 
> org.apache.hadoop.ozone.protocolPB.ScmBlockLocationProtocolServerSideTranslatorPB.sortDatanodes(ScmBlockLocationProtocolServerSideTranslatorPB.java:215)
> at 
> org.apache.hadoop.ozone.protocolPB.ScmBlockLocationProtocolServerSideTranslatorPB.send(ScmBlockLocationProtocolServerSideTranslatorPB.java:124)
> at 
> org.apache.hadoop.hdds.protocol.proto.ScmBlockLocationProtocolProtos$ScmBlockLocationProtocolService$2.callBlockingMethod(ScmBlockLocationProtocolProtos.java:13157)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:524)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1025)
> at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:876)
> at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:822)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2682)
> 2019-07-10 16:45:53,176 WARN  om.KeyManagerImpl 
> (KeyManagerImpl.java:lambda$sortDatanodeInPipeline$7(2129)) - Unable to sort 
> datanodes based on distance to client, volume=xqoyzocpse, bucket=vxwajaczqh, 
> key=pool-444-thread-7-201077822, client=127.0.0.1, 
> datanodes=[10f15723-45d7-4a0c-8f01-8b101744a110{ip: 172.31.116.73, host: 
> sid-minichaos.gce.cloudera.com, networkLocation: /default-rack, certSerialId: 
> null}, 7ac2777f-0a5c-4414-9e7f-bfbc47d696ea{ip: 172.31.116.73, host: 
> sid-minichaos.gce.cloudera.com, networkLocation: /default-rack, certSerialId: 
> null}], exception=java.lang.NullPointerException
> at 
> org.apache.hadoop.ozone.protocolPB.ScmBlockLocationProtocolServerSideTranslatorPB.lambda$sortDatanodes$0(ScmBlockLocationProtocolServerSideTranslatorPB.java:215)
> at 
> java.util.ArrayList$ArrayListSpliterator.forEachRemaining(ArrayList.java:1382)
> at 
> java.util.stream.ReferencePipeline$Head.forEach(ReferencePipeline.java:580)
> at 
> org.apache.hadoop.ozone.protocolPB.ScmBlockLocationProtocolServerSideTranslatorPB.sortDatanodes(ScmBlockLocationProtocolServerSideTranslatorPB.java:215)
> at 
> org.apache.hadoop.ozone.protocolPB.ScmBlockLocationProtocolServerSideTranslatorPB.send(ScmBlockLocationProtocolServerSideTranslatorPB.java:124)
> at 
> 

[jira] [Work logged] (HDDS-1787) NPE thrown while trying to find DN closest to client

2019-07-16 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1787?focusedWorklogId=277704=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-277704
 ]

ASF GitHub Bot logged work on HDDS-1787:


Author: ASF GitHub Bot
Created on: 16/Jul/19 19:51
Start Date: 16/Jul/19 19:51
Worklog Time Spent: 10m 
  Work Description: xiaoyuyao commented on issue #1094: HDDS-1787. NPE 
thrown while trying to find DN closest to client.
URL: https://github.com/apache/hadoop/pull/1094#issuecomment-511960824
 
 
   +1, I will merge it shortly. 
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 277704)
Time Spent: 1h 20m  (was: 1h 10m)

> NPE thrown while trying to find DN closest to client
> 
>
> Key: HDDS-1787
> URL: https://issues.apache.org/jira/browse/HDDS-1787
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Affects Versions: 0.5.0
>Reporter: Siddharth Wagle
>Assignee: Sammi Chen
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> cc: [~xyao] This seems related to the client side topology changes, not sure 
> if some other Jira is already addressing this.
> {code}
> 2019-07-10 16:45:53,176 WARN  ipc.Server (Server.java:logException(2724)) - 
> IPC Server handler 14 on 35066, call Call#127037 Retry#0 
> org.apache.hadoop.hdds.scm.protocol.ScmBlockLocationProtocol.send from 17
> 2.31.116.73:52540
> java.lang.NullPointerException
> at 
> org.apache.hadoop.ozone.protocolPB.ScmBlockLocationProtocolServerSideTranslatorPB.lambda$sortDatanodes$0(ScmBlockLocationProtocolServerSideTranslatorPB.java:215)
> at 
> java.util.ArrayList$ArrayListSpliterator.forEachRemaining(ArrayList.java:1382)
> at 
> java.util.stream.ReferencePipeline$Head.forEach(ReferencePipeline.java:580)
> at 
> org.apache.hadoop.ozone.protocolPB.ScmBlockLocationProtocolServerSideTranslatorPB.sortDatanodes(ScmBlockLocationProtocolServerSideTranslatorPB.java:215)
> at 
> org.apache.hadoop.ozone.protocolPB.ScmBlockLocationProtocolServerSideTranslatorPB.send(ScmBlockLocationProtocolServerSideTranslatorPB.java:124)
> at 
> org.apache.hadoop.hdds.protocol.proto.ScmBlockLocationProtocolProtos$ScmBlockLocationProtocolService$2.callBlockingMethod(ScmBlockLocationProtocolProtos.java:13157)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:524)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1025)
> at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:876)
> at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:822)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2682)
> 2019-07-10 16:45:53,176 WARN  om.KeyManagerImpl 
> (KeyManagerImpl.java:lambda$sortDatanodeInPipeline$7(2129)) - Unable to sort 
> datanodes based on distance to client, volume=xqoyzocpse, bucket=vxwajaczqh, 
> key=pool-444-thread-7-201077822, client=127.0.0.1, 
> datanodes=[10f15723-45d7-4a0c-8f01-8b101744a110{ip: 172.31.116.73, host: 
> sid-minichaos.gce.cloudera.com, networkLocation: /default-rack, certSerialId: 
> null}, 7ac2777f-0a5c-4414-9e7f-bfbc47d696ea{ip: 172.31.116.73, host: 
> sid-minichaos.gce.cloudera.com, networkLocation: /default-rack, certSerialId: 
> null}], exception=java.lang.NullPointerException
> at 
> org.apache.hadoop.ozone.protocolPB.ScmBlockLocationProtocolServerSideTranslatorPB.lambda$sortDatanodes$0(ScmBlockLocationProtocolServerSideTranslatorPB.java:215)
> at 
> java.util.ArrayList$ArrayListSpliterator.forEachRemaining(ArrayList.java:1382)
> at 
> java.util.stream.ReferencePipeline$Head.forEach(ReferencePipeline.java:580)
> at 
> org.apache.hadoop.ozone.protocolPB.ScmBlockLocationProtocolServerSideTranslatorPB.sortDatanodes(ScmBlockLocationProtocolServerSideTranslatorPB.java:215)
> at 
> org.apache.hadoop.ozone.protocolPB.ScmBlockLocationProtocolServerSideTranslatorPB.send(ScmBlockLocationProtocolServerSideTranslatorPB.java:124)
> at 
> 

[jira] [Work logged] (HDDS-1787) NPE thrown while trying to find DN closest to client

2019-07-16 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1787?focusedWorklogId=277703=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-277703
 ]

ASF GitHub Bot logged work on HDDS-1787:


Author: ASF GitHub Bot
Created on: 16/Jul/19 19:50
Start Date: 16/Jul/19 19:50
Worklog Time Spent: 10m 
  Work Description: xiaoyuyao commented on pull request #1094: HDDS-1787. 
NPE thrown while trying to find DN closest to client.
URL: https://github.com/apache/hadoop/pull/1094#discussion_r304089740
 
 

 ##
 File path: 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/server/SCMBlockProtocolServer.java
 ##
 @@ -290,7 +290,12 @@ public ScmInfo getScmInfo() throws IOException {
   NodeManager nodeManager = scm.getScmNodeManager();
   Node client = nodeManager.getNode(clientMachine);
  List<Node> nodeList = new ArrayList<>();
-  nodes.stream().forEach(path -> nodeList.add(nodeManager.getNode(path)));
+  nodes.stream().forEach(path -> {
+DatanodeDetails node = nodeManager.getNode(path);
+if (node != null) {
 
 Review comment:
   Thanks for the details. Makes sense to me. 
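   Reconstructed from the diff fragment above (so a sketch, not necessarily the exact committed code), the complete guarded loop would read:

{code:java}
// Guard against paths the NodeManager cannot resolve: previously a null was
// added to nodeList, which later triggered the NPE while sorting datanodes.
nodes.stream().forEach(path -> {
  DatanodeDetails node = nodeManager.getNode(path);
  if (node != null) {
    nodeList.add(node);
  }
});
{code}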
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 277703)
Time Spent: 1h 10m  (was: 1h)

> NPE thrown while trying to find DN closest to client
> 
>
> Key: HDDS-1787
> URL: https://issues.apache.org/jira/browse/HDDS-1787
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Affects Versions: 0.5.0
>Reporter: Siddharth Wagle
>Assignee: Sammi Chen
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> cc: [~xyao] This seems related to the client side topology changes, not sure 
> if some other Jira is already addressing this.
> {code}
> 2019-07-10 16:45:53,176 WARN  ipc.Server (Server.java:logException(2724)) - 
> IPC Server handler 14 on 35066, call Call#127037 Retry#0 
> org.apache.hadoop.hdds.scm.protocol.ScmBlockLocationProtocol.send from 17
> 2.31.116.73:52540
> java.lang.NullPointerException
> at 
> org.apache.hadoop.ozone.protocolPB.ScmBlockLocationProtocolServerSideTranslatorPB.lambda$sortDatanodes$0(ScmBlockLocationProtocolServerSideTranslatorPB.java:215)
> at 
> java.util.ArrayList$ArrayListSpliterator.forEachRemaining(ArrayList.java:1382)
> at 
> java.util.stream.ReferencePipeline$Head.forEach(ReferencePipeline.java:580)
> at 
> org.apache.hadoop.ozone.protocolPB.ScmBlockLocationProtocolServerSideTranslatorPB.sortDatanodes(ScmBlockLocationProtocolServerSideTranslatorPB.java:215)
> at 
> org.apache.hadoop.ozone.protocolPB.ScmBlockLocationProtocolServerSideTranslatorPB.send(ScmBlockLocationProtocolServerSideTranslatorPB.java:124)
> at 
> org.apache.hadoop.hdds.protocol.proto.ScmBlockLocationProtocolProtos$ScmBlockLocationProtocolService$2.callBlockingMethod(ScmBlockLocationProtocolProtos.java:13157)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:524)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1025)
> at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:876)
> at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:822)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2682)
> 2019-07-10 16:45:53,176 WARN  om.KeyManagerImpl 
> (KeyManagerImpl.java:lambda$sortDatanodeInPipeline$7(2129)) - Unable to sort 
> datanodes based on distance to client, volume=xqoyzocpse, bucket=vxwajaczqh, 
> key=pool-444-thread-7-201077822, client=127.0.0.1, 
> datanodes=[10f15723-45d7-4a0c-8f01-8b101744a110{ip: 172.31.116.73, host: 
> sid-minichaos.gce.cloudera.com, networkLocation: /default-rack, certSerialId: 
> null}, 7ac2777f-0a5c-4414-9e7f-bfbc47d696ea{ip: 172.31.116.73, host: 
> sid-minichaos.gce.cloudera.com, networkLocation: /default-rack, certSerialId: 
> null}], exception=java.lang.NullPointerException
> at 
> org.apache.hadoop.ozone.protocolPB.ScmBlockLocationProtocolServerSideTranslatorPB.lambda$sortDatanodes$0(ScmBlockLocationProtocolServerSideTranslatorPB.java:215)
> at 
> 

[jira] [Work logged] (HDDS-1775) Make OM KeyDeletingService compatible with HA model

2019-07-16 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1775?focusedWorklogId=277698=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-277698
 ]

ASF GitHub Bot logged work on HDDS-1775:


Author: ASF GitHub Bot
Created on: 16/Jul/19 19:44
Start Date: 16/Jul/19 19:44
Worklog Time Spent: 10m 
  Work Description: hanishakoneru commented on issue #1063: HDDS-1775. Make 
OM KeyDeletingService compatible with HA model
URL: https://github.com/apache/hadoop/pull/1063#issuecomment-511958269
 
 
   Thanks for the reviews @anuengineer and @bharatviswa504.
   I have opened a new Jira HDDS-1814 to address the Java 8 method reference 
comments.
   Fixed the checkstyle issues.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 277698)
Time Spent: 2h 20m  (was: 2h 10m)

> Make OM KeyDeletingService compatible with HA model
> ---
>
> Key: HDDS-1775
> URL: https://issues.apache.org/jira/browse/HDDS-1775
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 2h 20m
>  Remaining Estimate: 0h
>
> Currently OM KeyDeletingService directly deletes all the keys in DeletedTable 
> after deleting the corresponding blocks through SCM. For HA compatibility, 
> the key purging should happen through the OM Ratis server. This Jira 
> introduces a PurgeKeys request in the OM protocol. This request will be 
> submitted to the OM's Ratis server after SCM deletes the blocks 
> corresponding to deleted keys.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-1814) Improve KeyDeletingService using Java 8 method reference

2019-07-16 Thread Hanisha Koneru (JIRA)
Hanisha Koneru created HDDS-1814:


 Summary: Improve KeyDeletingService using Java 8 method reference 
 Key: HDDS-1814
 URL: https://issues.apache.org/jira/browse/HDDS-1814
 Project: Hadoop Distributed Data Store
  Issue Type: Improvement
Reporter: Hanisha Koneru


Based on [~bharatviswa]'s comments on the HDDS-1775 PR 
([https://github.com/apache/hadoop/pull/1063]), instead of null checks for 
OzoneManager we should use Java 8 method references.
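A minimal sketch of that idea, with illustrative names (the real service and OzoneManager methods may differ):

{code:java}
import java.util.function.BooleanSupplier;

// Sketch: rather than holding an OzoneManager reference that may be null and
// checking it at every use, the service takes the behaviour it needs as a
// method reference and lets callers decide what to supply.
final class KeyDeletingServiceSketch {
  private final BooleanSupplier isLeaderCheck;

  // Production:  new KeyDeletingServiceSketch(ozoneManager::isLeader)
  // Unit tests:  new KeyDeletingServiceSketch(() -> true)
  KeyDeletingServiceSketch(BooleanSupplier isLeaderCheck) {
    this.isLeaderCheck = isLeaderCheck;
  }

  void runOneIteration() {
    if (isLeaderCheck.getAsBoolean()) {
      // ... delete keys only when this OM is the leader
    }
  }
}
{code}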



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1782) Add an option to MiniOzoneChaosCluster to read files multiple times.

2019-07-16 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1782?focusedWorklogId=277695=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-277695
 ]

ASF GitHub Bot logged work on HDDS-1782:


Author: ASF GitHub Bot
Created on: 16/Jul/19 19:41
Start Date: 16/Jul/19 19:41
Worklog Time Spent: 10m 
  Work Description: arp7 commented on pull request #1076: HDDS-1782. Add an 
option to MiniOzoneChaosCluster to read files multiple times. Contributed by 
Mukul Kumar Singh.
URL: https://github.com/apache/hadoop/pull/1076#discussion_r304086677
 
 

 ##
 File path: 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/chaos/TestProbability.java
 ##
 @@ -0,0 +1,39 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.ozone.chaos;
+
+import org.apache.commons.lang3.RandomUtils;
+
+/**
+ * Class to keep track of test probability.
+ */
+public class TestProbability {
+  private int pct;
+
+  private TestProbability(int pct) {
+this.pct = pct;
+  }
+
+  public boolean isTrue() {
+return (RandomUtils.nextInt() * pct / 100) == 1;
 
 Review comment:
   @adoroszlai are you +1 on the latest changes?
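   For reference, the expression under review does not evaluate to true pct% of the time; a conventional formulation of "true with probability pct%" would be something like this sketch:

{code:java}
// Sketch of a percentage check: commons-lang3 RandomUtils.nextInt(0, 100)
// returns a value in [0, 100), so the comparison holds ~pct% of the time.
public boolean isTrue() {
  return RandomUtils.nextInt(0, 100) < pct;
}
{code}

The actual change under review may differ; this only illustrates the standard pattern.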
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 277695)
Time Spent: 1h 10m  (was: 1h)

> Add an option to MiniOzoneChaosCluster to read files multiple times.
> 
>
> Key: HDDS-1782
> URL: https://issues.apache.org/jira/browse/HDDS-1782
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: test
>Affects Versions: 0.4.0
>Reporter: Mukul Kumar Singh
>Assignee: Mukul Kumar Singh
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> Right now MiniOzoneChaosCluster writes a file, reads it, and deletes it 
> immediately. This Jira proposes to add an option to read the file multiple 
> times in MiniOzoneChaosCluster.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1301) Optimize recursive ozone filesystem apis

2019-07-16 Thread Arpit Agarwal (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1301?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDDS-1301:

Labels:   (was: pull-request-available)

> Optimize recursive ozone filesystem apis
> 
>
> Key: HDDS-1301
> URL: https://issues.apache.org/jira/browse/HDDS-1301
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Lokesh Jain
>Assignee: Lokesh Jain
>Priority: Major
> Attachments: HDDS-1301.001.patch
>
>  Time Spent: 2h 10m
>  Remaining Estimate: 0h
>
> This Jira aims to optimise the recursive APIs in the Ozone file system. These 
> are the APIs with a recursive flag, which requires an operation to be 
> performed on all the children of a directory. The Jira would add support for 
> recursive APIs in Ozone Manager in order to reduce the number of RPC calls to 
> Ozone Manager. Also, these operations are currently not atomic. This Jira 
> would make all such Ozone file system operations atomic.
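To make the RPC-count argument concrete, a sketch of the contrast under illustrative names (FileSystemClient is not the actual Ozone client API):

{code:java}
import java.io.IOException;
import java.util.List;

// Sketch of the contrast described above.
interface FileSystemClient {
  List<String> listChildren(String path) throws IOException;
  void delete(String path, boolean recursive) throws IOException;
}

final class RecursiveDeleteSketch {
  // Before: the client walks the tree itself, one RPC per entry,
  // with no atomicity across the individual calls.
  static void deleteClientSide(FileSystemClient client, String dir)
      throws IOException {
    for (String child : client.listChildren(dir)) {
      deleteClientSide(client, child); // recurse: one RPC per entry
    }
    client.delete(dir, false);         // delete the now-empty directory
  }

  // After: OM performs the recursion server-side in one atomic call.
  static void deleteServerSide(FileSystemClient client, String dir)
      throws IOException {
    client.delete(dir, true);          // a single RPC to Ozone Manager
  }
}
{code}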



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org


