[jira] [Work logged] (HDDS-1613) Opening of rocksDB in datanode fails with "No locks available"

2019-05-30 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1613?focusedWorklogId=251503&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-251503
 ]

ASF GitHub Bot logged work on HDDS-1613:


Author: ASF GitHub Bot
Created on: 31/May/19 05:32
Start Date: 31/May/19 05:32
Worklog Time Spent: 10m 
  Work Description: mukul1987 commented on issue #876: HDDS-1613. Opening 
of rocksDB in datanode fails with No locks available. Contributed by Mukul 
Kumar Singh.
URL: https://github.com/apache/hadoop/pull/876#issuecomment-497581469
 
 
   Thanks for the review @bshashikant, TestBCSID is passing locally.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 251503)
Time Spent: 40m  (was: 0.5h)

> Opening of rocksDB in datanode fails with "No locks available"
> --
>
> Key: HDDS-1613
> URL: https://issues.apache.org/jira/browse/HDDS-1613
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Client
>Affects Versions: 0.4.0
>Reporter: Mukul Kumar Singh
>Assignee: Mukul Kumar Singh
>Priority: Major
>  Labels: MiniOzoneChaosCluster, pull-request-available
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> Block read fails with 
> {code}
> org.apache.hadoop.hdds.scm.container.common.helpers.StorageContainerException:
>  Unable to find the block with bcsID 11777 .Container 68 bcsId is 0.
> at 
> org.apache.hadoop.hdds.scm.storage.ContainerProtocolCalls.validateContainerResponse(ContainerProtocolCalls.java:573)
> at 
> org.apache.hadoop.hdds.scm.storage.ContainerProtocolCalls.getBlock(ContainerProtocolCalls.java:120)
> at 
> org.apache.hadoop.ozone.client.io.KeyInputStream$ChunkInputStreamEntry.initializeBlockInputStream(KeyInputStream.java:295)
> at 
> org.apache.hadoop.ozone.client.io.KeyInputStream$ChunkInputStreamEntry.getStream(KeyInputStream.java:265)
> at 
> org.apache.hadoop.ozone.client.io.KeyInputStream$ChunkInputStreamEntry.access$000(KeyInputStream.java:229)
> at 
> org.apache.hadoop.ozone.client.io.KeyInputStream.getStreamEntry(KeyInputStream.java:107)
> at 
> org.apache.hadoop.ozone.client.io.KeyInputStream.read(KeyInputStream.java:140)
> at 
> org.apache.hadoop.ozone.client.io.OzoneInputStream.read(OzoneInputStream.java:47)
> at java.io.InputStream.read(InputStream.java:101)
> at 
> org.apache.hadoop.ozone.MiniOzoneLoadGenerator.load(MiniOzoneLoadGenerator.java:114)
> at 
> org.apache.hadoop.ozone.MiniOzoneLoadGenerator.lambda$startIO$0(MiniOzoneLoadGenerator.java:147)
> at 
> java.util.concurrent.CompletableFuture$AsyncRun.run(CompletableFuture.java:1626)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> at java.lang.Thread.run(Thread.java:748)
> {code}
> Looking at the 3 datanodes, the containers have bcsIds of 11748, 11748 and 0.
> {code}
> 2019-05-30 08:28:05,348 INFO  keyvalue.KeyValueHandler 
> (ContainerUtils.java:logAndReturnError(146)) - Operation: GetBlock : Trace 
> ID: 93a2a596076d2ee4:93a2a596076d2ee4:0:0 : Message: Unable to find the block 
> with bcsID 11777 .Container 68 bcsId is 11748. : Result: UNKNOWN_BCSID
> 2019-05-30 08:28:05,363 INFO  keyvalue.KeyValueHandler 
> (ContainerUtils.java:logAndReturnError(146)) - Operation: GetBlock : Trace 
> ID: 93a2a596076d2ee4:93a2a596076d2ee4:0:0 : Message: Unable to find the block 
> with bcsID 11777 .Container 68 bcsId is 11748. : Result: UNKNOWN_BCSID
> 2019-05-30 08:28:05,377 INFO  keyvalue.KeyValueHandler 
> (ContainerUtils.java:logAndReturnError(146)) - Operation: GetBlock : Trace 
> ID: 93a2a596076d2ee4:93a2a596076d2ee4:0:0 : Message: Unable to find the block 
> with bcsID 11777 .Container 68 bcsId is 0. : Result: UNKNOWN_BCSID
> {code}
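The UNKNOWN_BCSID result in the trace and logs above comes from comparing the block commit sequence id (bcsID) requested by the client against the highest bcsId the container replica has committed. A minimal sketch of that comparison, using a hypothetical helper and a plain IOException rather than the actual KeyValueHandler/StorageContainerException code path:

{code:java}
import java.io.IOException;

// Hedged sketch: a replica can only serve a block whose bcsID has been
// committed to the container; a replica whose container bcsId lags the
// requested bcsID answers UNKNOWN_BCSID, as in the logs above.
final class BcsIdCheck {
  static void validate(long requestedBcsId, long containerBcsId)
      throws IOException {
    if (requestedBcsId > containerBcsId) {
      throw new IOException("Unable to find the block with bcsID "
          + requestedBcsId + ". Container bcsId is " + containerBcsId
          + ". Result: UNKNOWN_BCSID");
    }
  }
}
{code}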



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDDS-1587) Support dynamically adding delegated class to filteredclass loader

2019-05-30 Thread Junjie Chen (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1587?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Junjie Chen reassigned HDDS-1587:
-

Assignee: Junjie Chen  (was: Xiaoyu Yao)

> Support dynamically adding delegated class to filteredclass loader
> --
>
> Key: HDDS-1587
> URL: https://issues.apache.org/jira/browse/HDDS-1587
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Affects Versions: 0.4.0
>Reporter: Xiaoyu Yao
>Assignee: Junjie Chen
>Priority: Major
>
> HDDS-922 added a filtered class loader with a list of delegated classes that 
> will be loaded with the app launcher's classloader. With security enabled on 
> ozone-0.4, there are some incompatible changes in the Hadoop-common and 
> hadoop-auth modules from Hadoop-2.x to Hadoop-3.x. Some examples can be seen 
> in HDDS-1080, where the fix had to be made along with a rebuild/release. 
>  
> This ticket is opened to allow dynamically adding delegated classes or class 
> prefixes via an environment variable. This way, we can easily adjust the 
> setting in different deployments without a rebuild/release.
>  
>  
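A minimal sketch of the proposed mechanism, assuming a hypothetical environment variable name (the actual key is not fixed by this ticket's description):

{code:java}
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Hedged sketch: merge a built-in delegation list with extra class-name
// prefixes taken from an environment variable. The variable name
// OZONE_CLASSLOADER_DELEGATED_CLASSES is an assumption, not a real key.
final class DelegationList {
  static List<String> delegatedPrefixes(List<String> builtIn) {
    List<String> prefixes = new ArrayList<>(builtIn);
    String extra = System.getenv("OZONE_CLASSLOADER_DELEGATED_CLASSES");
    if (extra != null && !extra.isEmpty()) {
      prefixes.addAll(Arrays.asList(extra.split(",")));
    }
    return prefixes;
  }

  // A filtered classloader would delegate to the app classloader for any
  // class whose name starts with one of these prefixes.
  static boolean isDelegated(String className, List<String> prefixes) {
    return prefixes.stream().anyMatch(className::startsWith);
  }
}
{code}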



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13955) RBF: Support secure Namenode in NamenodeHeartbeatService

2019-05-30 Thread Ayush Saxena (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13955?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16852666#comment-16852666
 ] 

Ayush Saxena commented on HDFS-13955:
-

Thanks [~crh] for the patch.
v004 LGTM, +1.
Will commit shortly.

> RBF: Support secure Namenode in NamenodeHeartbeatService
> 
>
> Key: HDFS-13955
> URL: https://issues.apache.org/jira/browse/HDFS-13955
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Assignee: CR Hota
>Priority: Major
> Attachments: HDFS-13955-HDFS-13532.000.patch, 
> HDFS-13955-HDFS-13532.001.patch, HDFS-13955-HDFS-13891.001.patch, 
> HDFS-13955-HDFS-13891.002.patch, HDFS-13955-HDFS-13891.003.patch, 
> HDFS-13955-HDFS-13891.004.patch
>
>
> Currently, the NamenodeHeartbeatService uses JMX to get the metrics from the 
> Namenodes. We should support HTTPS.
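A minimal sketch of the scheme selection involved, under the assumption that the service learns whether HTTPS is required from configuration (the helper and parameter names are illustrative):

{code:java}
import java.net.MalformedURLException;
import java.net.URL;

// Hedged sketch: pick http or https for the JMX endpoint. The real
// service would derive the flag from dfs.http.policy in the Hadoop
// Configuration; here it is a plain parameter.
final class JmxUrlBuilder {
  static URL jmxUrl(String host, int port, boolean httpsEnabled, String query)
      throws MalformedURLException {
    String scheme = httpsEnabled ? "https" : "http";
    return new URL(scheme + "://" + host + ":" + port + "/jmx?qry=" + query);
  }
}
{code}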



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1613) Opening of rocksDB in datanode fails with "No locks available"

2019-05-30 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1613?focusedWorklogId=251497&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-251497
 ]

ASF GitHub Bot logged work on HDDS-1613:


Author: ASF GitHub Bot
Created on: 31/May/19 04:55
Start Date: 31/May/19 04:55
Worklog Time Spent: 10m 
  Work Description: bshashikant commented on issue #876: HDDS-1613. Opening 
of rocksDB in datanode fails with No locks available. Contributed by Mukul 
Kumar Singh.
URL: https://github.com/apache/hadoop/pull/876#issuecomment-497575250
 
 
   The changes look good to me. There is a test failure reported related to 
this change. Can you please check?
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 251497)
Time Spent: 0.5h  (was: 20m)

> Opening of rocksDB in datanode fails with "No locks available"
> --
>
> Key: HDDS-1613
> URL: https://issues.apache.org/jira/browse/HDDS-1613
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Client
>Affects Versions: 0.4.0
>Reporter: Mukul Kumar Singh
>Assignee: Mukul Kumar Singh
>Priority: Major
>  Labels: MiniOzoneChaosCluster, pull-request-available
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> Block read fails with 
> {code}
> org.apache.hadoop.hdds.scm.container.common.helpers.StorageContainerException:
>  Unable to find the block with bcsID 11777 .Container 68 bcsId is 0.
> at 
> org.apache.hadoop.hdds.scm.storage.ContainerProtocolCalls.validateContainerResponse(ContainerProtocolCalls.java:573)
> at 
> org.apache.hadoop.hdds.scm.storage.ContainerProtocolCalls.getBlock(ContainerProtocolCalls.java:120)
> at 
> org.apache.hadoop.ozone.client.io.KeyInputStream$ChunkInputStreamEntry.initializeBlockInputStream(KeyInputStream.java:295)
> at 
> org.apache.hadoop.ozone.client.io.KeyInputStream$ChunkInputStreamEntry.getStream(KeyInputStream.java:265)
> at 
> org.apache.hadoop.ozone.client.io.KeyInputStream$ChunkInputStreamEntry.access$000(KeyInputStream.java:229)
> at 
> org.apache.hadoop.ozone.client.io.KeyInputStream.getStreamEntry(KeyInputStream.java:107)
> at 
> org.apache.hadoop.ozone.client.io.KeyInputStream.read(KeyInputStream.java:140)
> at 
> org.apache.hadoop.ozone.client.io.OzoneInputStream.read(OzoneInputStream.java:47)
> at java.io.InputStream.read(InputStream.java:101)
> at 
> org.apache.hadoop.ozone.MiniOzoneLoadGenerator.load(MiniOzoneLoadGenerator.java:114)
> at 
> org.apache.hadoop.ozone.MiniOzoneLoadGenerator.lambda$startIO$0(MiniOzoneLoadGenerator.java:147)
> at 
> java.util.concurrent.CompletableFuture$AsyncRun.run(CompletableFuture.java:1626)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> at java.lang.Thread.run(Thread.java:748)
> {code}
> Looking at the 3 datanodes, the containers have bcsIds of 11748, 11748 and 0.
> {code}
> 2019-05-30 08:28:05,348 INFO  keyvalue.KeyValueHandler 
> (ContainerUtils.java:logAndReturnError(146)) - Operation: GetBlock : Trace 
> ID: 93a2a596076d2ee4:93a2a596076d2ee4:0:0 : Message: Unable to find the block 
> with bcsID 11777 .Container 68 bcsId is 11748. : Result: UNKNOWN_BCSID
> 2019-05-30 08:28:05,363 INFO  keyvalue.KeyValueHandler 
> (ContainerUtils.java:logAndReturnError(146)) - Operation: GetBlock : Trace 
> ID: 93a2a596076d2ee4:93a2a596076d2ee4:0:0 : Message: Unable to find the block 
> with bcsID 11777 .Container 68 bcsId is 11748. : Result: UNKNOWN_BCSID
> 2019-05-30 08:28:05,377 INFO  keyvalue.KeyValueHandler 
> (ContainerUtils.java:logAndReturnError(146)) - Operation: GetBlock : Trace 
> ID: 93a2a596076d2ee4:93a2a596076d2ee4:0:0 : Message: Unable to find the block 
> with bcsID 11777 .Container 68 bcsId is 0. : Result: UNKNOWN_BCSID
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-13955) RBF: Support secure Namenode in NamenodeHeartbeatService

2019-05-30 Thread CR Hota (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13955?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16852650#comment-16852650
 ] 

CR Hota edited comment on HDFS-13955 at 5/31/19 4:41 AM:
-

[~elgoiri] [~ayushtkn] [~brahmareddy]

Could you please help to commit this?


was (Author: crh):
[~elgoiri] [~ayushtkn] [~brahmareddy]

Could you please help commit this?

> RBF: Support secure Namenode in NamenodeHeartbeatService
> 
>
> Key: HDFS-13955
> URL: https://issues.apache.org/jira/browse/HDFS-13955
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Assignee: CR Hota
>Priority: Major
> Attachments: HDFS-13955-HDFS-13532.000.patch, 
> HDFS-13955-HDFS-13532.001.patch, HDFS-13955-HDFS-13891.001.patch, 
> HDFS-13955-HDFS-13891.002.patch, HDFS-13955-HDFS-13891.003.patch, 
> HDFS-13955-HDFS-13891.004.patch
>
>
> Currently, the NamenodeHeartbeatService uses JMX to get the metrics from the 
> Namenodes. We should support HTTPS.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13955) RBF: Support secure Namenode in NamenodeHeartbeatService

2019-05-30 Thread CR Hota (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13955?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16852650#comment-16852650
 ] 

CR Hota commented on HDFS-13955:


[~elgoiri] [~ayushtkn] [~brahmareddy]

Could you please help commit this?

> RBF: Support secure Namenode in NamenodeHeartbeatService
> 
>
> Key: HDFS-13955
> URL: https://issues.apache.org/jira/browse/HDFS-13955
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Assignee: CR Hota
>Priority: Major
> Attachments: HDFS-13955-HDFS-13532.000.patch, 
> HDFS-13955-HDFS-13532.001.patch, HDFS-13955-HDFS-13891.001.patch, 
> HDFS-13955-HDFS-13891.002.patch, HDFS-13955-HDFS-13891.003.patch, 
> HDFS-13955-HDFS-13891.004.patch
>
>
> Currently, the NamenodeHeartbeatService uses JMX to get the metrics from the 
> Namenodes. We should support HTTPS.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14475) RBF: Expose router security enabled status on the UI

2019-05-30 Thread CR Hota (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14475?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16852649#comment-16852649
 ] 

CR Hota commented on HDFS-14475:


[~elgoiri] Thanks for the review and commit.

> RBF: Expose router security enabled status on the UI
> 
>
> Key: HDFS-14475
> URL: https://issues.apache.org/jira/browse/HDFS-14475
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: CR Hota
>Assignee: CR Hota
>Priority: Major
> Fix For: HDFS-13891
>
> Attachments: HDFS-14475-HDFS-13891.001.patch, 
> HDFS-14475-HDFS-13891.002.patch
>
>
> This is a branched off Jira to expose metric so that router's security status 
> can be displayed on the UI. We are still unclear if more work needs to be 
> done for dealing with CORS etc. 
> https://issues.apache.org/jira/browse/HDFS-12510 will continue to track that.
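For illustration, a hedged sketch of the sort of getter such a metric could expose; the class name is hypothetical, and only UserGroupInformation.isSecurityEnabled() is a real Hadoop API:

{code:java}
import org.apache.hadoop.security.UserGroupInformation;

// Hedged sketch of a bean getter that lets the UI display security
// status; illustrative only, not the actual router bean method.
public class RouterSecurityStatus {
  public boolean isSecurityEnabled() {
    return UserGroupInformation.isSecurityEnabled();
  }
}
{code}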



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14508) RBF: Clean-up and refactor UI components

2019-05-30 Thread CR Hota (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14508?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16852648#comment-16852648
 ] 

CR Hota commented on HDFS-14508:


{quote}
I'm a little concerned about having lots of metrics classes. Can we rename 
{{FederationMetrics}} to a more abstract name, {{RBFMetrics}} or 
{{RouterServerMetrics}} or something like that, to implement both beans?
{quote}
Sounds good.

> RBF: Clean-up and refactor UI components
> 
>
> Key: HDFS-14508
> URL: https://issues.apache.org/jira/browse/HDFS-14508
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: CR Hota
>Assignee: Takanobu Asanuma
>Priority: Minor
> Attachments: HDFS-14508-HDFS-13891.1.patch, 
> HDFS-14508-HDFS-13891.2.patch
>
>
> The Router UI has tags that are not used or are incorrectly set. The code 
> should be cleaned up.
> One such example is 
> Path: 
> (hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/webapps/router/federationhealth.js)
> {code:java}
> {"name": "routerstat", "url": 
> "/jmx?qry=Hadoop:service=NameNode,name=NameNodeStatus"},{code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1540) Implement addAcl,removeAcl,setAcl,getAcl for Bucket

2019-05-30 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1540?focusedWorklogId=251490&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-251490
 ]

ASF GitHub Bot logged work on HDDS-1540:


Author: ASF GitHub Bot
Created on: 31/May/19 04:32
Start Date: 31/May/19 04:32
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #874: HDDS-1540. 
Implement addAcl,removeAcl,setAcl,getAcl for Bucket. Contributed by Ajay Kumar.
URL: https://github.com/apache/hadoop/pull/874#issuecomment-497571655
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   | :----: | ----------: | :-------- | :------- |
   | 0 | reexec | 28 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 1 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 29 | Maven dependency ordering for branch |
   | +1 | mvninstall | 573 | trunk passed |
   | +1 | compile | 302 | trunk passed |
   | +1 | checkstyle | 85 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 974 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 169 | trunk passed |
   | 0 | spotbugs | 347 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 585 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 20 | Maven dependency ordering for patch |
   | +1 | mvninstall | 539 | the patch passed |
   | +1 | compile | 279 | the patch passed |
   | +1 | javac | 279 | the patch passed |
   | +1 | checkstyle | 81 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 730 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 153 | the patch passed |
   | +1 | findbugs | 523 | the patch passed |
   ||| _ Other Tests _ |
   | -1 | unit | 240 | hadoop-hdds in the patch failed. |
   | -1 | unit | 1195 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 49 | The patch does not generate ASF License warnings. |
   | | | 6736 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.hdds.scm.container.placement.algorithms.TestSCMContainerPlacementRackAware
 |
   |   | hadoop.ozone.om.TestOzoneManagerHA |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-874/6/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/874 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 2291ebe574ac 4.4.0-139-generic #165~14.04.1-Ubuntu SMP Wed 
Oct 31 10:55:11 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 35f1014 |
   | Default Java | 1.8.0_212 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-874/6/artifact/out/patch-unit-hadoop-hdds.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-874/6/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-874/6/testReport/ |
   | Max. process+thread count | 5160 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/ozone-manager hadoop-ozone/integration-test U: 
hadoop-ozone |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-874/6/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 251490)
Time Spent: 3.5h  (was: 3h 20m)

> Implement addAcl,removeAcl,setAcl,getAcl  for Bucket
> 
>
> Key: HDDS-1540
> URL: https://issues.apache.org/jira/browse/HDDS-1540
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 3.5h
>  

[jira] [Updated] (HDFS-14402) Use FileChannel.transferTo() method for transferring block to SCM cache

2019-05-30 Thread Uma Maheswara Rao G (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14402?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uma Maheswara Rao G updated HDFS-14402:
---
Description: 
We will consider using the transferTo API to improve SCM's cache performance.

Discussion: 5th comment in 
[HDFS-14355|https://issues.apache.org/jira/browse/HDFS-14355?focusedCommentId=16797627&page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-16797627]

Comment: FileMappableBlockLoader: Currently you are reading data from the input 
stream, verifying the checksum, and writing that buffer to the MBB. One thought 
here is: how about using the FileChannel#transferTo API for transferring data 
from one channel to the other natively, and then doing mmap on the destination 
file (assuming mmap may be faster on the target file) and doing checksum 
verification on it? 

  was:
We will consider using the transferTo API to improve SCM's cache performance.

Discussion: 5th comment in HDFS-14355

Comment: FileMappableBlockLoader: Currently you are reading data from the input 
stream, verifying the checksum, and writing that buffer to the MBB. One thought 
here is: how about using the FileChannel#transferTo API for transferring data 
from one channel to the other natively, and then doing mmap on the destination 
file (assuming mmap may be faster on the target file) and doing checksum 
verification on it? 


> Use FileChannel.transferTo() method for transferring block to SCM cache
> ---
>
> Key: HDFS-14402
> URL: https://issues.apache.org/jira/browse/HDFS-14402
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: caching, datanode
>Reporter: Feilong He
>Assignee: Feilong He
>Priority: Major
>  Labels: SCM
> Fix For: 3.3.0
>
> Attachments: HDFS-14402.000.patch, HDFS-14402.001.patch, 
> HDFS-14402.002.patch, With-Cache-Improvement-Patch.png, 
> Without-Cache-Improvement-Patch.png
>
>
> We will consider using the transferTo API to improve SCM's cache performance.
> Discussion: 5th comment in 
> [HDFS-14355|https://issues.apache.org/jira/browse/HDFS-14355?focusedCommentId=16797627&page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-16797627]
> Comment: FileMappableBlockLoader: Currently you are reading data from the input 
> stream, verifying the checksum, and writing that buffer to the MBB. One thought 
> here is: how about using the FileChannel#transferTo API for transferring data 
> from one channel to the other natively, and then doing mmap on the destination 
> file (assuming mmap may be faster on the target file) and doing checksum 
> verification on it? 
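A minimal sketch of the suggested flow, assuming hypothetical class and method names (verifyChecksum stands in for the existing checksum logic):

{code:java}
import java.io.IOException;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

// Hedged sketch: copy the block file into the cache file natively with
// FileChannel#transferTo, then mmap the destination and run checksum
// verification over the mapped buffer.
final class TransferToCacheLoader {
  static MappedByteBuffer loadAndMap(Path block, Path cacheFile)
      throws IOException {
    try (FileChannel src = FileChannel.open(block, StandardOpenOption.READ);
         FileChannel dst = FileChannel.open(cacheFile,
             StandardOpenOption.CREATE, StandardOpenOption.READ,
             StandardOpenOption.WRITE)) {
      long pos = 0;
      long size = src.size();
      while (pos < size) {
        // transferTo may copy fewer bytes than requested, so loop.
        pos += src.transferTo(pos, size - pos, dst);
      }
      MappedByteBuffer mapped = dst.map(FileChannel.MapMode.READ_ONLY, 0, size);
      verifyChecksum(mapped);  // checksum over the mmap'ed data
      return mapped;
    }
  }

  private static void verifyChecksum(MappedByteBuffer buf) {
    // placeholder: real code would compare against the block's metadata checksums
  }
}
{code}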



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14402) Use FileChannel.transferTo() method for transferring block to SCM cache

2019-05-30 Thread Uma Maheswara Rao G (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14402?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uma Maheswara Rao G updated HDFS-14402:
---
Description: 
We will consider using the transferTo API to improve SCM's cache performance.

Discussion: 5th comment in HDFS-14355

Comment: FileMappableBlockLoader: Currently you are reading data from the input 
stream, verifying the checksum, and writing that buffer to the MBB. One thought 
here is: how about using the FileChannel#transferTo API for transferring data 
from one channel to the other natively, and then doing mmap on the destination 
file (assuming mmap may be faster on the target file) and doing checksum 
verification on it? 

  was:
We will consider using the transferTo API to improve SCM's cache performance.

Discussion: 5th comment in HDFS-14355


> Use FileChannel.transferTo() method for transferring block to SCM cache
> ---
>
> Key: HDFS-14402
> URL: https://issues.apache.org/jira/browse/HDFS-14402
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: caching, datanode
>Reporter: Feilong He
>Assignee: Feilong He
>Priority: Major
>  Labels: SCM
> Fix For: 3.3.0
>
> Attachments: HDFS-14402.000.patch, HDFS-14402.001.patch, 
> HDFS-14402.002.patch, With-Cache-Improvement-Patch.png, 
> Without-Cache-Improvement-Patch.png
>
>
> We will consider using the transferTo API to improve SCM's cache performance.
> Discussion: 5th comment in HDFS-14355
> Comment: FileMappableBlockLoader: Currently you are reading data from the input 
> stream, verifying the checksum, and writing that buffer to the MBB. One thought 
> here is: how about using the FileChannel#transferTo API for transferring data 
> from one channel to the other natively, and then doing mmap on the destination 
> file (assuming mmap may be faster on the target file) and doing checksum 
> verification on it? 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14402) Use FileChannel.transferTo() method for transferring block to SCM cache

2019-05-30 Thread Uma Maheswara Rao G (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14402?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uma Maheswara Rao G updated HDFS-14402:
---
Description: 
We will consider using the transferTo API to improve SCM's cache performance.

Discussion: 5th comment in HDFS-14355

  was:We will consider using the transferTo API to improve SCM's cache performance.


> Use FileChannel.transferTo() method for transferring block to SCM cache
> ---
>
> Key: HDFS-14402
> URL: https://issues.apache.org/jira/browse/HDFS-14402
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: caching, datanode
>Reporter: Feilong He
>Assignee: Feilong He
>Priority: Major
>  Labels: SCM
> Fix For: 3.3.0
>
> Attachments: HDFS-14402.000.patch, HDFS-14402.001.patch, 
> HDFS-14402.002.patch, With-Cache-Improvement-Patch.png, 
> Without-Cache-Improvement-Patch.png
>
>
> We will consider using the transferTo API to improve SCM's cache performance.
> Discussion: 5th comment in HDFS-14355



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14090) RBF: Improved isolation for downstream name nodes.

2019-05-30 Thread TanYuxin (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14090?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16852590#comment-16852590
 ] 

TanYuxin commented on HDFS-14090:
-

[~crh] Thanks. It's a great feature; looking forward to seeing it resolved.

> RBF: Improved isolation for downstream name nodes.
> --
>
> Key: HDFS-14090
> URL: https://issues.apache.org/jira/browse/HDFS-14090
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: CR Hota
>Assignee: CR Hota
>Priority: Major
> Attachments: HDFS-14090-HDFS-13891.001.patch, RBF_ Isolation 
> design.pdf
>
>
> Router is a gateway to underlying name nodes. Gateway architectures should 
> help minimize the impact of clients connecting to healthy clusters vs 
> unhealthy clusters.
> For example, if there are 2 name nodes downstream and one of them is 
> heavily loaded with calls spiking rpc queue times, due to back pressure the 
> same will start reflecting on the router. As a result, clients 
> connecting to healthy/faster name nodes will also slow down, as the same rpc 
> queue is maintained for all calls at the router layer. Essentially the same 
> IPC thread pool is used by the router to connect to all name nodes.
> Currently the router uses one single rpc queue for all calls. Let's discuss 
> how we can change the architecture and add some throttling logic for 
> unhealthy/slow/overloaded name nodes.
> One way could be to read from the current call queue, immediately identify 
> the downstream name node, and maintain a separate queue for each underlying 
> name node. Another, simpler way is to maintain a rate limiter configured 
> for each name node and let routers drop/reject/send error requests after a 
> certain threshold. 
> This won't be a simple change, as the router's 'Server' layer would need 
> redesign and implementation. Currently this layer is the same as the name 
> node's.
> Opening this ticket to discuss, design and implement this feature.
>  
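A minimal sketch of the simpler rate-limiting option described above, using a per-nameservice permit pool; all names here are illustrative:

{code:java}
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.Semaphore;

// Hedged sketch: one permit pool per downstream nameservice. When a slow
// name node exhausts its permits, the router can reject the call instead
// of letting that nameservice monopolize the shared handler pool.
final class PerNameserviceLimiter {
  private final Map<String, Semaphore> permits = new ConcurrentHashMap<>();
  private final int permitsPerNameservice;

  PerNameserviceLimiter(int permitsPerNameservice) {
    this.permitsPerNameservice = permitsPerNameservice;
  }

  boolean tryAcquire(String nsId) {
    return permits
        .computeIfAbsent(nsId, k -> new Semaphore(permitsPerNameservice))
        .tryAcquire();
  }

  void release(String nsId) {
    Semaphore s = permits.get(nsId);
    if (s != null) {
      s.release();
    }
  }
}
{code}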



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1540) Implement addAcl,removeAcl,setAcl,getAcl for Bucket

2019-05-30 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1540?focusedWorklogId=251454&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-251454
 ]

ASF GitHub Bot logged work on HDDS-1540:


Author: ASF GitHub Bot
Created on: 31/May/19 02:23
Start Date: 31/May/19 02:23
Worklog Time Spent: 10m 
  Work Description: ajayydv commented on pull request #874: HDDS-1540. 
Implement addAcl,removeAcl,setAcl,getAcl for Bucket. Contributed by Ajay Kumar.
URL: https://github.com/apache/hadoop/pull/874#discussion_r289237945
 
 

 ##########
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/IOzoneAcl.java
 ##########
 @@ -0,0 +1,84 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with this
+ * work for additional information regarding copyright ownership.  The ASF
+ * licenses this file to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,WITHOUT
+ * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+ * License for the specific language governing permissions and limitations 
under
+ * the License.
+ */
+package org.apache.hadoop.ozone.om;
+
+import org.apache.hadoop.ozone.OzoneAcl;
+import org.apache.hadoop.ozone.security.acl.OzoneObj;
+
+import java.io.IOException;
+import java.util.List;
+
+/**
+ * Interface for Ozone Acl management.
+ */
+public interface IOzoneAcl {
+
+  /**
+   * Add acl for Ozone object. Return true if acl is added successfully else
+   * false.
+   * @param obj Ozone object for which acl should be added.
+   * @param acl ozone acl to be added.
+   *
+   * @throws IOException if there is error.
+   * */
+  boolean addAcl(OzoneObj obj, OzoneAcl acl) throws IOException;
+
+  /**
+   * Remove acl for Ozone object. Return true if acl is removed successfully
+   * else false.
+   * @param obj Ozone object.
+   * @param acl Ozone acl to be removed.
+   *
+   * @throws IOException if there is error.
+   * */
+  boolean removeAcl(OzoneObj obj, OzoneAcl acl) throws IOException;
+
+  /**
+   * ACLs to be set for given Ozone object. This operation resets the ACL
+   * for the given object to the list of ACLs provided in the argument.
+   * @param obj Ozone object.
+   * @param acls List of acls.
+   *
+   * @throws IOException if there is error.
+   * */
+  boolean setAcl(OzoneObj obj, List<OzoneAcl> acls) throws IOException;
+
+  /**
+   * Returns list of ACLs for given Ozone object.
+   * @param obj Ozone object.
+   *
+   * @throws IOException if there is error.
+   * */
+  List<OzoneAcl> getAcl(OzoneObj obj) throws IOException;
+
+  /**
+   * Validates whether newAcl can be added to the given list of ACLs.
+   * @param newAcl new acl to be added.
+   * @param currentAcls list of acls.
+   *
+   * @return true if newAcl addition to existing acls is valid, else false.
+   * */
+  static boolean validateNewAcl(OzoneAcl newAcl, List<OzoneAcl> currentAcls) {
+
+// Check 1: Check for duplicate.
+if(currentAcls.contains(newAcl)) {
 
 Review comment:
   Moved new function to BucketImpl. Remove check is already added. Separate 
helper function for addAcl validation, as we need to do further validation with 
dacls.
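For illustration, a hedged sketch of such an add-side validation helper, extending the duplicate check visible in the diff; the class and method names are hypothetical, and per the comment above the real helper lives in BucketImpl:

{code:java}
import java.util.List;
import org.apache.hadoop.ozone.OzoneAcl;

// Hedged sketch of add-side ACL validation: keep the duplicate check in
// its own helper so further checks (e.g. against default acls) can be
// layered in later.
final class AclValidation {
  static boolean validateAddAcl(OzoneAcl newAcl, List<OzoneAcl> currentAcls) {
    // Check 1: reject exact duplicates.
    if (currentAcls.contains(newAcl)) {
      return false;
    }
    // Further checks (default acls etc.) would go here.
    return true;
  }
}
{code}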
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 251454)
Time Spent: 3h 20m  (was: 3h 10m)

> Implement addAcl,removeAcl,setAcl,getAcl  for Bucket
> 
>
> Key: HDDS-1540
> URL: https://issues.apache.org/jira/browse/HDDS-1540
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 3h 20m
>  Remaining Estimate: 0h
>
> Implement addAcl,removeAcl,setAcl,getAcl  for Bucket



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1540) Implement addAcl,removeAcl,setAcl,getAcl for Bucket

2019-05-30 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1540?focusedWorklogId=251453&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-251453
 ]

ASF GitHub Bot logged work on HDDS-1540:


Author: ASF GitHub Bot
Created on: 31/May/19 02:23
Start Date: 31/May/19 02:23
Worklog Time Spent: 10m 
  Work Description: ajayydv commented on pull request #874: HDDS-1540. 
Implement addAcl,removeAcl,setAcl,getAcl for Bucket. Contributed by Ajay Kumar.
URL: https://github.com/apache/hadoop/pull/874#discussion_r289237945
 
 

 ##########
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/IOzoneAcl.java
 ##########
 @@ -0,0 +1,84 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with this
+ * work for additional information regarding copyright ownership.  The ASF
+ * licenses this file to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,WITHOUT
+ * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+ * License for the specific language governing permissions and limitations 
under
+ * the License.
+ */
+package org.apache.hadoop.ozone.om;
+
+import org.apache.hadoop.ozone.OzoneAcl;
+import org.apache.hadoop.ozone.security.acl.OzoneObj;
+
+import java.io.IOException;
+import java.util.List;
+
+/**
+ * Interface for Ozone Acl management.
+ */
+public interface IOzoneAcl {
+
+  /**
+   * Add acl for Ozone object. Return true if acl is added successfully else
+   * false.
+   * @param obj Ozone object for which acl should be added.
+   * @param acl ozone acl to be added.
+   *
+   * @throws IOException if there is error.
+   * */
+  boolean addAcl(OzoneObj obj, OzoneAcl acl) throws IOException;
+
+  /**
+   * Remove acl for Ozone object. Return true if acl is removed successfully
+   * else false.
+   * @param obj Ozone object.
+   * @param acl Ozone acl to be removed.
+   *
+   * @throws IOException if there is error.
+   * */
+  boolean removeAcl(OzoneObj obj, OzoneAcl acl) throws IOException;
+
+  /**
+   * ACLs to be set for given Ozone object. This operation resets the ACL
+   * for the given object to the list of ACLs provided in the argument.
+   * @param obj Ozone object.
+   * @param acls List of acls.
+   *
+   * @throws IOException if there is error.
+   * */
+  boolean setAcl(OzoneObj obj, List<OzoneAcl> acls) throws IOException;
+
+  /**
+   * Returns list of ACLs for given Ozone object.
+   * @param obj Ozone object.
+   *
+   * @throws IOException if there is error.
+   * */
+  List<OzoneAcl> getAcl(OzoneObj obj) throws IOException;
+
+  /**
+   * Validates whether newAcl can be added to the given list of ACLs.
+   * @param newAcl new acl to be added.
+   * @param currentAcls list of acls.
+   *
+   * @return true if newAcl addition to existing acls is valid, else false.
+   * */
+  static boolean validateNewAcl(OzoneAcl newAcl, List<OzoneAcl> currentAcls) {
+
+// Check 1: Check for duplicate.
+if(currentAcls.contains(newAcl)) {
 
 Review comment:
   Moved new function to BucketImpl. Remove check is already added. Separate 
helper function for addAcl validation as we need to do further validation with 
dacls.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 251453)
Time Spent: 3h 10m  (was: 3h)

> Implement addAcl,removeAcl,setAcl,getAcl  for Bucket
> 
>
> Key: HDDS-1540
> URL: https://issues.apache.org/jira/browse/HDDS-1540
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 3h 10m
>  Remaining Estimate: 0h
>
> Implement addAcl,removeAcl,setAcl,getAcl  for Bucket



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13654) Use a random secret when a secret file doesn't exist in HttpFS. This should be default.

2019-05-30 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13654?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16852585#comment-16852585
 ] 

Hudson commented on HDFS-13654:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #16640 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/16640/])
HDFS-13654. Use a random secret when a secret file doesn't exist in (tasanuma: 
rev 35f1014b3e10eee27f1976f4af9815a0c2d7dacd)
* (edit) hadoop-hdfs-project/hadoop-hdfs-httpfs/pom.xml
* (delete) 
hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/conf/httpfs-signature.secret
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/server/HttpFSAuthenticationFilter.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/resources/httpfs-default.xml
* (add) 
hadoop-hdfs-project/hadoop-hdfs-httpfs/src/test/java/org/apache/hadoop/fs/http/server/TestHttpFSServerWebServerWithRandomSecret.java


> Use a random secret when a secret file doesn't  exist in HttpFS. This should 
> be default.
> 
>
> Key: HDFS-13654
> URL: https://issues.apache.org/jira/browse/HDFS-13654
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: httpfs, security
>Reporter: Pulkit Bhardwaj
>Assignee: Takanobu Asanuma
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HDFS-13654.1.patch, HDFS-13654.2.patch, 
> HDFS-13654.3.patch, HDFS-13654.4.patch, HDFS-13654.5.patch, HDFS-13654.6.patch
>
>
> {code:java}
> curl -s 
> https://raw.githubusercontent.com/apache/hadoop/trunk/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/conf/httpfs-signature.secret
>  
> hadoop httpfs secret{code}
>  
> The "secret" is a known string, it is better to keep this a random string so 
> that it is not well known.
>  
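A minimal sketch of the random-secret fallback idea, not the actual HttpFSAuthenticationFilter change:

{code:java}
import java.security.SecureRandom;
import java.util.Base64;

// Hedged sketch: when no signature secret file exists, generate a random
// secret instead of shipping a well-known one.
final class RandomSecretFallback {
  static byte[] secretOrRandom(byte[] secretFromFile) {
    if (secretFromFile != null && secretFromFile.length > 0) {
      return secretFromFile;
    }
    byte[] random = new byte[32];
    new SecureRandom().nextBytes(random);
    // Encode so the secret is printable if it ever needs to be stored.
    return Base64.getEncoder().encode(random);
  }
}
{code}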



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1579) Create OMDoubleBuffer metrics

2019-05-30 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1579?focusedWorklogId=251451&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-251451
 ]

ASF GitHub Bot logged work on HDDS-1579:


Author: ASF GitHub Bot
Created on: 31/May/19 02:16
Start Date: 31/May/19 02:16
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #871: HDDS-1579. Create 
OMDoubleBuffer metrics.
URL: https://github.com/apache/hadoop/pull/871#issuecomment-497550204
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   | :----: | ----------: | :-------- | :------- |
   | 0 | reexec | 32 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 1 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 525 | trunk passed |
   | +1 | compile | 251 | trunk passed |
   | +1 | checkstyle | 62 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 809 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 160 | trunk passed |
   | 0 | spotbugs | 293 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 482 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 459 | the patch passed |
   | +1 | compile | 280 | the patch passed |
   | +1 | javac | 280 | the patch passed |
   | +1 | checkstyle | 87 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 657 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 161 | the patch passed |
   | +1 | findbugs | 500 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 243 | hadoop-hdds in the patch passed. |
   | -1 | unit | 1551 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 45 | The patch does not generate ASF License warnings. |
   | | | 6517 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.TestStorageContainerManager |
   |   | hadoop.hdds.scm.pipeline.TestRatisPipelineProvider |
   |   | hadoop.ozone.client.rpc.TestReadRetries |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-871/3/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/871 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux e6d3ed47546a 4.4.0-139-generic #165-Ubuntu SMP Wed Oct 24 
10:58:50 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 219e286 |
   | Default Java | 1.8.0_212 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-871/3/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-871/3/testReport/ |
   | Max. process+thread count | 3856 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/ozone-manager U: hadoop-ozone/ozone-manager |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-871/3/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 251451)
Time Spent: 1h 10m  (was: 1h)

> Create OMDoubleBuffer metrics
> -
>
> Key: HDDS-1579
> URL: https://issues.apache.org/jira/browse/HDDS-1579
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> This Jira is to implement OMDoubleBuffer metrics, to show metrics like:
>  # flushIterations
>  # totalTransactionsFlushed
>  
> and any other related metrics. This Jira is created based on a comment by 
> [~anu] during the HDDS-1512 review.
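A minimal sketch of what such a metrics2 source could look like; the counter names follow the list above, but the committed class may differ:

{code:java}
import org.apache.hadoop.metrics2.annotation.Metric;
import org.apache.hadoop.metrics2.annotation.Metrics;
import org.apache.hadoop.metrics2.lib.DefaultMetricsSystem;
import org.apache.hadoop.metrics2.lib.MutableCounterLong;

// Hedged sketch of a metrics2 source for the OM double buffer.
@Metrics(about = "OzoneManager DoubleBuffer Metrics", context = "ozone")
public class OzoneManagerDoubleBufferMetrics {

  // Registering through the metrics system instantiates the @Metric fields.
  public static OzoneManagerDoubleBufferMetrics create() {
    return DefaultMetricsSystem.instance().register(
        "OzoneManagerDoubleBufferMetrics",
        "Metrics for the OM double buffer",
        new OzoneManagerDoubleBufferMetrics());
  }

  @Metric private MutableCounterLong totalNumOfFlushOperations;
  @Metric private MutableCounterLong totalNumOfFlushedTransactions;

  public void incrFlushIterations() {
    totalNumOfFlushOperations.incr();
  }

  public void incrTransactionsFlushed(long delta) {
    totalNumOfFlushedTransactions.incr(delta);
  }
}
{code}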



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Work logged] (HDDS-1600) Add userName and IPAddress as part of OMRequest.

2019-05-30 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1600?focusedWorklogId=251444&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-251444
 ]

ASF GitHub Bot logged work on HDDS-1600:


Author: ASF GitHub Bot
Created on: 31/May/19 02:04
Start Date: 31/May/19 02:04
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #857: HDDS-1600. Add 
userName and IPAddress as part of OMRequest.
URL: https://github.com/apache/hadoop/pull/857#issuecomment-497548215
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   | :----: | ----------: | :-------- | :------- |
   | 0 | reexec | 42 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 6 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 44 | Maven dependency ordering for branch |
   | +1 | mvninstall | 573 | trunk passed |
   | +1 | compile | 273 | trunk passed |
   | +1 | checkstyle | 77 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 926 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 159 | trunk passed |
   | 0 | spotbugs | 348 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 577 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 21 | Maven dependency ordering for patch |
   | +1 | mvninstall | 523 | the patch passed |
   | +1 | compile | 290 | the patch passed |
   | +1 | cc | 290 | the patch passed |
   | +1 | javac | 290 | the patch passed |
   | +1 | checkstyle | 84 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 1 | The patch has no whitespace issues. |
   | +1 | xml | 1 | The patch has no ill-formed XML file. |
   | +1 | shadedclient | 720 | patch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 80 | hadoop-ozone generated 2 new + 8 unchanged - 0 fixed = 
10 total (was 8) |
   | +1 | findbugs | 531 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 269 | hadoop-hdds in the patch passed. |
   | -1 | unit | 1313 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 98 | The patch does not generate ASF License warnings. |
   | | | 6872 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.om.TestOmAcls |
   |   | hadoop.ozone.om.TestOzoneManagerHA |
   |   | hadoop.ozone.client.rpc.TestCommitWatcher |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClientWithRatis |
   |   | hadoop.ozone.client.rpc.TestBCSID |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-857/4/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/857 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle cc xml |
   | uname | Linux 2aad089df69e 4.4.0-144-generic #170~14.04.1-Ubuntu SMP Mon 
Mar 18 15:02:05 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 219e286 |
   | Default Java | 1.8.0_212 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-857/4/artifact/out/diff-javadoc-javadoc-hadoop-ozone.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-857/4/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-857/4/testReport/ |
   | Max. process+thread count | 5296 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/common hadoop-ozone/ozone-manager 
hadoop-ozone/integration-test U: hadoop-ozone |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-857/4/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 251444)
Time Spent: 50m  (was: 40m)

> Add userName and IPAddress as part of OMRequest.
> 
>
> Key: HDDS-1600
> URL: https://issues.apache.org/jira/browse/HDDS-1600
>  
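A minimal sketch of how a server could capture the caller's user name and IP address inside a Hadoop RPC call, using ipc.Server's thread-local accessors; how these land in the OMRequest proto is defined by the patch and elided here:

{code:java}
import java.io.IOException;
import java.net.InetAddress;
import org.apache.hadoop.ipc.Server;
import org.apache.hadoop.security.UserGroupInformation;

// Hedged sketch: inside an RPC handler, Hadoop's Server exposes the
// remote caller via thread-locals; outside a call these return null.
final class CallerInfo {
  static String callerUser() throws IOException {
    UserGroupInformation ugi = Server.getRemoteUser();
    if (ugi == null) {
      ugi = UserGroupInformation.getCurrentUser();  // fallback: local user
    }
    return ugi.getUserName();
  }

  static String callerAddress() {
    InetAddress ip = Server.getRemoteIp();
    return ip == null ? "" : ip.getHostAddress();
  }
}
{code}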

[jira] [Work logged] (HDDS-1600) Add userName and IPAddress as part of OMRequest.

2019-05-30 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1600?focusedWorklogId=251443&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-251443
 ]

ASF GitHub Bot logged work on HDDS-1600:


Author: ASF GitHub Bot
Created on: 31/May/19 01:59
Start Date: 31/May/19 01:59
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #857: HDDS-1600. Add 
userName and IPAddress as part of OMRequest.
URL: https://github.com/apache/hadoop/pull/857#issuecomment-497547441
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   | :----: | ----------: | :-------- | :------- |
   | 0 | reexec | 31 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 6 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 38 | Maven dependency ordering for branch |
   | +1 | mvninstall | 551 | trunk passed |
   | +1 | compile | 275 | trunk passed |
   | +1 | checkstyle | 75 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 828 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 149 | trunk passed |
   | 0 | spotbugs | 303 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 492 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 17 | Maven dependency ordering for patch |
   | +1 | mvninstall | 495 | the patch passed |
   | +1 | compile | 260 | the patch passed |
   | +1 | cc | 260 | the patch passed |
   | +1 | javac | 260 | the patch passed |
   | +1 | checkstyle | 69 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | xml | 1 | The patch has no ill-formed XML file. |
   | +1 | shadedclient | 649 | patch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 73 | hadoop-ozone generated 2 new + 8 unchanged - 0 fixed = 
10 total (was 8) |
   | +1 | findbugs | 496 | the patch passed |
   ||| _ Other Tests _ |
   | -1 | unit | 229 | hadoop-hdds in the patch failed. |
   | -1 | unit | 1273 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 44 | The patch does not generate ASF License warnings. |
   | | | 6307 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdds.scm.block.TestBlockManager |
   |   | hadoop.ozone.container.TestContainerReplication |
   |   | hadoop.ozone.om.TestOmAcls |
   |   | hadoop.ozone.client.rpc.TestWatchForCommit |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestSecureOzoneRpcClient |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-857/5/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/857 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle cc xml |
   | uname | Linux 1527891eb542 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 219e286 |
   | Default Java | 1.8.0_212 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-857/5/artifact/out/diff-javadoc-javadoc-hadoop-ozone.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-857/5/artifact/out/patch-unit-hadoop-hdds.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-857/5/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-857/5/testReport/ |
   | Max. process+thread count | 5238 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/common hadoop-ozone/ozone-manager 
hadoop-ozone/integration-test U: hadoop-ozone |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-857/5/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 251443)
Time Spent: 40m  (was: 0.5h)

> Add userName and IPAddress as part 

[jira] [Updated] (HDFS-13654) Use a random secret when a secret file doesn't exist in HttpFS. This should be default.

2019-05-30 Thread Takanobu Asanuma (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13654?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Takanobu Asanuma updated HDFS-13654:

   Resolution: Fixed
Fix Version/s: 3.3.0
   Status: Resolved  (was: Patch Available)

Committed to trunk. Thanks for your review, [~aajisaka]!

> Use a random secret when a secret file doesn't  exist in HttpFS. This should 
> be default.
> 
>
> Key: HDFS-13654
> URL: https://issues.apache.org/jira/browse/HDFS-13654
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: httpfs, security
>Reporter: Pulkit Bhardwaj
>Assignee: Takanobu Asanuma
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HDFS-13654.1.patch, HDFS-13654.2.patch, 
> HDFS-13654.3.patch, HDFS-13654.4.patch, HDFS-13654.5.patch, HDFS-13654.6.patch
>
>
> {code:java}
> curl -s 
> https://raw.githubusercontent.com/apache/hadoop/trunk/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/conf/httpfs-signature.secret
>  
> hadoop httpfs secret{code}
>  
> The "secret" is a known string, it is better to keep this a random string so 
> that it is not well known.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1371) Download RocksDB checkpoint from OM Leader to Follower

2019-05-30 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1371?focusedWorklogId=251440&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-251440
 ]

ASF GitHub Bot logged work on HDDS-1371:


Author: ASF GitHub Bot
Created on: 31/May/19 01:30
Start Date: 31/May/19 01:30
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #703: HDDS-1371. 
Download RocksDB checkpoint from OM Leader to Follower.
URL: https://github.com/apache/hadoop/pull/703#issuecomment-497542613
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 27 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 2 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 28 | Maven dependency ordering for branch |
   | +1 | mvninstall | 528 | trunk passed |
   | +1 | compile | 278 | trunk passed |
   | +1 | checkstyle | 76 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 911 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 182 | trunk passed |
   | 0 | spotbugs | 355 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 608 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 73 | Maven dependency ordering for patch |
   | +1 | mvninstall | 591 | the patch passed |
   | +1 | compile | 331 | the patch passed |
   | +1 | javac | 331 | the patch passed |
   | -0 | checkstyle | 48 | hadoop-hdds: The patch generated 1 new + 0 
unchanged - 0 fixed = 1 total (was 0) |
   | -0 | checkstyle | 41 | hadoop-ozone: The patch generated 5 new + 0 
unchanged - 0 fixed = 5 total (was 0) |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | xml | 2 | The patch has no ill-formed XML file. |
   | +1 | shadedclient | 756 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 166 | the patch passed |
   | +1 | findbugs | 655 | the patch passed |
   ||| _ Other Tests _ |
   | -1 | unit | 1041 | hadoop-hdds in the patch failed. |
   | -1 | unit | 153 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 47 | The patch does not generate ASF License warnings. |
   | | | 6737 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdds.server.events.TestEventWatcher |
   |   | hadoop.ozone.om.TestOzoneManagerHttpServer |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-703/4/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/703 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle xml |
   | uname | Linux 7445e1a4280d 4.4.0-143-generic #169~14.04.2-Ubuntu SMP Wed 
Feb 13 15:00:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 9122b9b |
   | Default Java | 1.8.0_212 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-703/4/artifact/out/diff-checkstyle-hadoop-hdds.txt
 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-703/4/artifact/out/diff-checkstyle-hadoop-ozone.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-703/4/artifact/out/patch-unit-hadoop-hdds.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-703/4/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-703/4/testReport/ |
   | Max. process+thread count | 1290 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/common hadoop-hdds/framework hadoop-ozone/client 
hadoop-ozone/common hadoop-ozone/integration-test hadoop-ozone/ozone-manager 
hadoop-ozone/ozone-recon U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-703/4/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog 

[jira] [Work logged] (HDDS-1558) IllegalArgumentException while processing container Reports

2019-05-30 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1558?focusedWorklogId=251437&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-251437
 ]

ASF GitHub Bot logged work on HDDS-1558:


Author: ASF GitHub Bot
Created on: 31/May/19 01:17
Start Date: 31/May/19 01:17
Worklog Time Spent: 10m 
  Work Description: nandakumar131 commented on issue #853: HDDS-1558. 
IllegalArgumentException while processing container Reports.
URL: https://github.com/apache/hadoop/pull/853#issuecomment-497540347
 
 
   @bshashikant We should use `KeyValueHandler` to mark the container as
unhealthy. We should not update the container directly and perform the state
changes ourselves.
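For context, a self-contained sketch of the pattern being suggested, using stand-in types; the interfaces and method names below are hypothetical, not the actual HDDS container API:

{code:java}
// Stand-in types for illustration only; not the real HDDS classes.
interface Container { void setUnhealthy(); }
interface ContainerHandler { void markContainerUnhealthy(Container c); }

class ReportProcessor {
  private final ContainerHandler handler;

  ReportProcessor(ContainerHandler handler) { this.handler = handler; }

  void onCorruptReplica(Container container) {
    // Preferred: delegate so the handler can validate the state transition.
    handler.markContainerUnhealthy(container);
    // Discouraged (what the review flags): container.setUnhealthy();
  }
}
{code}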
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 251437)
Time Spent: 20m  (was: 10m)

> IllegalArgumentException while processing container Reports
> ---
>
> Key: HDDS-1558
> URL: https://issues.apache.org/jira/browse/HDDS-1558
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: SCM
>Affects Versions: 0.4.0
>Reporter: Mukul Kumar Singh
>Assignee: Shashikant Banerjee
>Priority: Major
>  Labels: MiniOzoneChaosCluster, pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> IllegalArgumentException while processing container Reports
> {code}
> 2019-05-19 23:15:04,137 ERROR events.SingleThreadExecutor 
> (SingleThreadExecutor.java:lambda$onMessage$1(88)) - Error on execution 
> message 
> org.apache.hadoop.hdds.scm.server.SCMDatanodeHeartbeatDispatcher$ContainerReportFromDatanode@1a117ebc
> java.lang.IllegalArgumentException
> at 
> com.google.common.base.Preconditions.checkArgument(Preconditions.java:72)
> at 
> org.apache.hadoop.hdds.scm.container.AbstractContainerReportHandler.updateContainerState(AbstractContainerReportHandler.java:178)
> at 
> org.apache.hadoop.hdds.scm.container.AbstractContainerReportHandler.processContainerReplica(AbstractContainerReportHandler.java:85)
> at 
> org.apache.hadoop.hdds.scm.container.ContainerReportHandler.processContainerReplicas(ContainerReportHandler.java:124)
> at 
> org.apache.hadoop.hdds.scm.container.ContainerReportHandler.onMessage(ContainerReportHandler.java:97)
> at 
> org.apache.hadoop.hdds.scm.container.ContainerReportHandler.onMessage(ContainerReportHandler.java:46)
> at 
> org.apache.hadoop.hdds.server.events.SingleThreadExecutor.lambda$onMessage$1(SingleThreadExecutor.java:85)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> at java.lang.Thread.run(Thread.java:748)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14508) RBF: Clean-up and refactor UI components

2019-05-30 Thread Takanobu Asanuma (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14508?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16852536#comment-16852536
 ] 

Takanobu Asanuma commented on HDFS-14508:
-

Thanks for your reviews and your help, [~elgoiri] and [~crh]! I will create
another jira for updating the document.
{quote}We may want to have RouterMBean into a different implementation class 
(such as RouterCoreMetrics or RouterStateMetrics) instead of FederationMetrics, 
which seems misleading.
{quote}
I'm a little concerned about having lots of metrics classes. Can we rename
{{FederationMetrics}} to a more abstract name, such as {{RBFMetrics}} or
{{RouterServerMetrics}}, so that it can implement both beans?
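A minimal sketch of the proposed shape, with stand-in bean interfaces (the real beans have many more methods; the members below are illustrative only):

{code:java}
// Illustrative stand-ins for the two beans discussed above.
interface FederationMBean { int getNumNameservices(); }
interface RouterMBean { String getRouterStatus(); }

// One renamed metrics class implementing both beans, per the suggestion.
public class RBFMetrics implements FederationMBean, RouterMBean {
  @Override
  public int getNumNameservices() { return 0; /* stub */ }

  @Override
  public String getRouterStatus() { return "RUNNING"; /* stub */ }
}
{code}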

> RBF: Clean-up and refactor UI components
> 
>
> Key: HDFS-14508
> URL: https://issues.apache.org/jira/browse/HDFS-14508
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: CR Hota
>Assignee: Takanobu Asanuma
>Priority: Minor
> Attachments: HDFS-14508-HDFS-13891.1.patch, 
> HDFS-14508-HDFS-13891.2.patch
>
>
> Router UI has tags that are unused or incorrectly set. The code should be
> cleaned up.
> One such example is 
> Path : 
> (\hadoop-hdfs-project\hadoop-hdfs-rbf\src\main\webapps\router\federationhealth.js)
> {code:java}
> {"name": "routerstat", "url": 
> "/jmx?qry=Hadoop:service=NameNode,name=NameNodeStatus"},{code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1613) Opening of rocksDB in datanode fails with "No locks available"

2019-05-30 Thread Mukul Kumar Singh (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1613?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mukul Kumar Singh updated HDDS-1613:

Status: Patch Available  (was: Open)

> Opening of rocksDB in datanode fails with "No locks available"
> --
>
> Key: HDDS-1613
> URL: https://issues.apache.org/jira/browse/HDDS-1613
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Client
>Affects Versions: 0.4.0
>Reporter: Mukul Kumar Singh
>Assignee: Mukul Kumar Singh
>Priority: Major
>  Labels: MiniOzoneChaosCluster, pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Block read fails with 
> {code}
> org.apache.hadoop.hdds.scm.container.common.helpers.StorageContainerException:
>  Unable to find the block with bcsID 11777 .Container 68 bcsId is 0.
> at 
> org.apache.hadoop.hdds.scm.storage.ContainerProtocolCalls.validateContainerResponse(ContainerProtocolCalls.java:573)
> at 
> org.apache.hadoop.hdds.scm.storage.ContainerProtocolCalls.getBlock(ContainerProtocolCalls.java:120)
> at 
> org.apache.hadoop.ozone.client.io.KeyInputStream$ChunkInputStreamEntry.initializeBlockInputStream(KeyInputStream.java:295)
> at 
> org.apache.hadoop.ozone.client.io.KeyInputStream$ChunkInputStreamEntry.getStream(KeyInputStream.java:265)
> at 
> org.apache.hadoop.ozone.client.io.KeyInputStream$ChunkInputStreamEntry.access$000(KeyInputStream.java:229)
> at 
> org.apache.hadoop.ozone.client.io.KeyInputStream.getStreamEntry(KeyInputStream.java:107)
> at 
> org.apache.hadoop.ozone.client.io.KeyInputStream.read(KeyInputStream.java:140)
> at 
> org.apache.hadoop.ozone.client.io.OzoneInputStream.read(OzoneInputStream.java:47)
> at java.io.InputStream.read(InputStream.java:101)
> at 
> org.apache.hadoop.ozone.MiniOzoneLoadGenerator.load(MiniOzoneLoadGenerator.java:114)
> at 
> org.apache.hadoop.ozone.MiniOzoneLoadGenerator.lambda$startIO$0(MiniOzoneLoadGenerator.java:147)
> at 
> java.util.concurrent.CompletableFuture$AsyncRun.run(CompletableFuture.java:1626)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> at java.lang.Thread.run(Thread.java:748)
> {code}
> Looking at the 3 datanodes, the containers are in bcs id of 11748, 11748 and 
> 0.
> {code}
> 2019-05-30 08:28:05,348 INFO  keyvalue.KeyValueHandler 
> (ContainerUtils.java:logAndReturnError(146)) - Operation: GetBlock : Trace 
> ID: 93a2a596076d2ee4:93a2a596076d2ee4:0:0 : Message: Unable to find the block 
> with bcsID 11777 .Container 68 bcsId is 11748. : Result: UNKNOWN_BCSID
> 2019-05-30 08:28:05,363 INFO  keyvalue.KeyValueHandler 
> (ContainerUtils.java:logAndReturnError(146)) - Operation: GetBlock : Trace 
> ID: 93a2a596076d2ee4:93a2a596076d2ee4:0:0 : Message: Unable to find the block 
> with bcsID 11777 .Container 68 bcsId is 11748. : Result: UNKNOWN_BCSID
> 2019-05-30 08:28:05,377 INFO  keyvalue.KeyValueHandler 
> (ContainerUtils.java:logAndReturnError(146)) - Operation: GetBlock : Trace 
> ID: 93a2a596076d2ee4:93a2a596076d2ee4:0:0 : Message: Unable to find the block 
> with bcsID 11777 .Container 68 bcsId is 0. : Result: UNKNOWN_BCSID
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work started] (HDDS-1613) Opening of rocksDB in datanode fails with "No locks available"

2019-05-30 Thread Mukul Kumar Singh (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1613?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HDDS-1613 started by Mukul Kumar Singh.
---
> Opening of rocksDB in datanode fails with "No locks available"
> --
>
> Key: HDDS-1613
> URL: https://issues.apache.org/jira/browse/HDDS-1613
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Client
>Affects Versions: 0.4.0
>Reporter: Mukul Kumar Singh
>Assignee: Mukul Kumar Singh
>Priority: Major
>  Labels: MiniOzoneChaosCluster, pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Block read fails with 
> {code}
> org.apache.hadoop.hdds.scm.container.common.helpers.StorageContainerException:
>  Unable to find the block with bcsID 11777 .Container 68 bcsId is 0.
> at 
> org.apache.hadoop.hdds.scm.storage.ContainerProtocolCalls.validateContainerResponse(ContainerProtocolCalls.java:573)
> at 
> org.apache.hadoop.hdds.scm.storage.ContainerProtocolCalls.getBlock(ContainerProtocolCalls.java:120)
> at 
> org.apache.hadoop.ozone.client.io.KeyInputStream$ChunkInputStreamEntry.initializeBlockInputStream(KeyInputStream.java:295)
> at 
> org.apache.hadoop.ozone.client.io.KeyInputStream$ChunkInputStreamEntry.getStream(KeyInputStream.java:265)
> at 
> org.apache.hadoop.ozone.client.io.KeyInputStream$ChunkInputStreamEntry.access$000(KeyInputStream.java:229)
> at 
> org.apache.hadoop.ozone.client.io.KeyInputStream.getStreamEntry(KeyInputStream.java:107)
> at 
> org.apache.hadoop.ozone.client.io.KeyInputStream.read(KeyInputStream.java:140)
> at 
> org.apache.hadoop.ozone.client.io.OzoneInputStream.read(OzoneInputStream.java:47)
> at java.io.InputStream.read(InputStream.java:101)
> at 
> org.apache.hadoop.ozone.MiniOzoneLoadGenerator.load(MiniOzoneLoadGenerator.java:114)
> at 
> org.apache.hadoop.ozone.MiniOzoneLoadGenerator.lambda$startIO$0(MiniOzoneLoadGenerator.java:147)
> at 
> java.util.concurrent.CompletableFuture$AsyncRun.run(CompletableFuture.java:1626)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> at java.lang.Thread.run(Thread.java:748)
> {code}
> Looking at the 3 datanodes, the containers are in bcs id of 11748, 11748 and 
> 0.
> {code}
> 2019-05-30 08:28:05,348 INFO  keyvalue.KeyValueHandler 
> (ContainerUtils.java:logAndReturnError(146)) - Operation: GetBlock : Trace 
> ID: 93a2a596076d2ee4:93a2a596076d2ee4:0:0 : Message: Unable to find the block 
> with bcsID 11777 .Container 68 bcsId is 11748. : Result: UNKNOWN_BCSID
> 2019-05-30 08:28:05,363 INFO  keyvalue.KeyValueHandler 
> (ContainerUtils.java:logAndReturnError(146)) - Operation: GetBlock : Trace 
> ID: 93a2a596076d2ee4:93a2a596076d2ee4:0:0 : Message: Unable to find the block 
> with bcsID 11777 .Container 68 bcsId is 11748. : Result: UNKNOWN_BCSID
> 2019-05-30 08:28:05,377 INFO  keyvalue.KeyValueHandler 
> (ContainerUtils.java:logAndReturnError(146)) - Operation: GetBlock : Trace 
> ID: 93a2a596076d2ee4:93a2a596076d2ee4:0:0 : Message: Unable to find the block 
> with bcsID 11777 .Container 68 bcsId is 0. : Result: UNKNOWN_BCSID
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work stopped] (HDDS-1613) Opening of rocksDB in datanode fails with "No locks available"

2019-05-30 Thread Mukul Kumar Singh (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1613?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HDDS-1613 stopped by Mukul Kumar Singh.
---
> Opening of rocksDB in datanode fails with "No locks available"
> --
>
> Key: HDDS-1613
> URL: https://issues.apache.org/jira/browse/HDDS-1613
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Client
>Affects Versions: 0.4.0
>Reporter: Mukul Kumar Singh
>Assignee: Mukul Kumar Singh
>Priority: Major
>  Labels: MiniOzoneChaosCluster, pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Block read fails with 
> {code}
> org.apache.hadoop.hdds.scm.container.common.helpers.StorageContainerException:
>  Unable to find the block with bcsID 11777 .Container 68 bcsId is 0.
> at 
> org.apache.hadoop.hdds.scm.storage.ContainerProtocolCalls.validateContainerResponse(ContainerProtocolCalls.java:573)
> at 
> org.apache.hadoop.hdds.scm.storage.ContainerProtocolCalls.getBlock(ContainerProtocolCalls.java:120)
> at 
> org.apache.hadoop.ozone.client.io.KeyInputStream$ChunkInputStreamEntry.initializeBlockInputStream(KeyInputStream.java:295)
> at 
> org.apache.hadoop.ozone.client.io.KeyInputStream$ChunkInputStreamEntry.getStream(KeyInputStream.java:265)
> at 
> org.apache.hadoop.ozone.client.io.KeyInputStream$ChunkInputStreamEntry.access$000(KeyInputStream.java:229)
> at 
> org.apache.hadoop.ozone.client.io.KeyInputStream.getStreamEntry(KeyInputStream.java:107)
> at 
> org.apache.hadoop.ozone.client.io.KeyInputStream.read(KeyInputStream.java:140)
> at 
> org.apache.hadoop.ozone.client.io.OzoneInputStream.read(OzoneInputStream.java:47)
> at java.io.InputStream.read(InputStream.java:101)
> at 
> org.apache.hadoop.ozone.MiniOzoneLoadGenerator.load(MiniOzoneLoadGenerator.java:114)
> at 
> org.apache.hadoop.ozone.MiniOzoneLoadGenerator.lambda$startIO$0(MiniOzoneLoadGenerator.java:147)
> at 
> java.util.concurrent.CompletableFuture$AsyncRun.run(CompletableFuture.java:1626)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> at java.lang.Thread.run(Thread.java:748)
> {code}
> Looking at the 3 datanodes, the containers are in bcs id of 11748, 11748 and 
> 0.
> {code}
> 2019-05-30 08:28:05,348 INFO  keyvalue.KeyValueHandler 
> (ContainerUtils.java:logAndReturnError(146)) - Operation: GetBlock : Trace 
> ID: 93a2a596076d2ee4:93a2a596076d2ee4:0:0 : Message: Unable to find the block 
> with bcsID 11777 .Container 68 bcsId is 11748. : Result: UNKNOWN_BCSID
> 2019-05-30 08:28:05,363 INFO  keyvalue.KeyValueHandler 
> (ContainerUtils.java:logAndReturnError(146)) - Operation: GetBlock : Trace 
> ID: 93a2a596076d2ee4:93a2a596076d2ee4:0:0 : Message: Unable to find the block 
> with bcsID 11777 .Container 68 bcsId is 11748. : Result: UNKNOWN_BCSID
> 2019-05-30 08:28:05,377 INFO  keyvalue.KeyValueHandler 
> (ContainerUtils.java:logAndReturnError(146)) - Operation: GetBlock : Trace 
> ID: 93a2a596076d2ee4:93a2a596076d2ee4:0:0 : Message: Unable to find the block 
> with bcsID 11777 .Container 68 bcsId is 0. : Result: UNKNOWN_BCSID
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12703) Exceptions are fatal to decommissioning monitor

2019-05-30 Thread JIRA


[ 
https://issues.apache.org/jira/browse/HDFS-12703?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16852533#comment-16852533
 ] 

Íñigo Goiri commented on HDFS-12703:


{{TestWebHdfsTimeouts}} is unrelated.
{{TestDecommission}} seems reasonable; what about extending it to trigger these
kinds of exceptions?
Can we fix the checkstyle warnings?
In addition, for the log, use the logger format {}.
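For reference, the SLF4J-style parameterized logging being asked for; the class and variable names are illustrative:

{code:java}
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

class MonitorLoggingExample {
  private static final Logger LOG =
      LoggerFactory.getLogger(MonitorLoggingExample.class);

  void report(String node, Exception e) {
    // Preferred: {} placeholders defer string building, and a trailing
    // Throwable argument logs the full stack trace.
    LOG.warn("Decommission monitor failed for node {}", node, e);
    // Discouraged: LOG.warn("failed for " + node + ": " + e);
  }
}
{code}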

> Exceptions are fatal to decommissioning monitor
> ---
>
> Key: HDFS-12703
> URL: https://issues.apache.org/jira/browse/HDFS-12703
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.7.0
>Reporter: Daryn Sharp
>Assignee: Xue Liu
>Priority: Critical
> Attachments: HDFS-12703.001.patch
>
>
> The {{DecommissionManager.Monitor}} runs as an executor scheduled task.  If 
> an exception occurs, all decommissioning ceases until the NN is restarted.  
> Per javadoc for {{executor#scheduleAtFixedRate}}: *If any execution of the 
> task encounters an exception, subsequent executions are suppressed*.  The 
> monitor thread is alive but blocked waiting for an executor task that will 
> never come.  The code currently disposes of the future so the actual 
> exception that aborted the task is gone.
> Failover is insufficient since the task is also likely dead on the standby.  
> Replication queue init after the transition to active will fix the under 
> replication of blocks on currently decommissioning nodes but future nodes 
> never decommission.  The standby must be bounced prior to failover – and 
> hopefully the error condition does not reoccur.
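To make the failure mode concrete, a self-contained sketch of the {{scheduleAtFixedRate}} behavior described above, plus the usual catch-all wrapper that keeps a periodic monitor alive (illustrative, not the HDFS patch):

{code:java}
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class MonitorSuppressionDemo {
  public static void main(String[] args) {
    ScheduledExecutorService exec = Executors.newScheduledThreadPool(2);

    // As the javadoc warns: after the first throw, this task never runs
    // again, and the exception sits unseen in the discarded Future.
    exec.scheduleAtFixedRate(() -> {
      throw new RuntimeException("monitor bug");
    }, 0, 1, TimeUnit.SECONDS);

    // Defensive wrapper: catch everything inside the task so the
    // periodic schedule survives individual failures.
    exec.scheduleAtFixedRate(() -> {
      try {
        // ... monitor work would go here ...
      } catch (Exception e) {
        System.err.println("Monitor iteration failed: " + e);
      }
    }, 0, 1, TimeUnit.SECONDS);
    // exec.shutdown() omitted; this demo runs until killed.
  }
}
{code}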



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12703) Exceptions are fatal to decommissioning monitor

2019-05-30 Thread Xue Liu (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-12703?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16852532#comment-16852532
 ] 

Xue Liu commented on HDFS-12703:


Thanks [~goiri] for reviewing! I think TestDecommission already covers the 
scenarios pretty well.

The test failures are all due to TestWebHdfsTimeouts, so they should be
unrelated.

> Exceptions are fatal to decommissioning monitor
> ---
>
> Key: HDFS-12703
> URL: https://issues.apache.org/jira/browse/HDFS-12703
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.7.0
>Reporter: Daryn Sharp
>Assignee: Xue Liu
>Priority: Critical
> Attachments: HDFS-12703.001.patch
>
>
> The {{DecommissionManager.Monitor}} runs as an executor scheduled task.  If 
> an exception occurs, all decommissioning ceases until the NN is restarted.  
> Per javadoc for {{executor#scheduleAtFixedRate}}: *If any execution of the 
> task encounters an exception, subsequent executions are suppressed*.  The 
> monitor thread is alive but blocked waiting for an executor task that will 
> never come.  The code currently disposes of the future so the actual 
> exception that aborted the task is gone.
> Failover is insufficient since the task is also likely dead on the standby.  
> Replication queue init after the transition to active will fix the under 
> replication of blocks on currently decommissioning nodes but future nodes 
> never decommission.  The standby must be bounced prior to failover – and 
> hopefully the error condition does not reoccur.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1540) Implement addAcl,removeAcl,setAcl,getAcl for Bucket

2019-05-30 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1540?focusedWorklogId=251434&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-251434
 ]

ASF GitHub Bot logged work on HDDS-1540:


Author: ASF GitHub Bot
Created on: 31/May/19 01:09
Start Date: 31/May/19 01:09
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #874: HDDS-1540. 
Implement addAcl,removeAcl,setAcl,getAcl for Bucket. Contributed by Ajay Kumar.
URL: https://github.com/apache/hadoop/pull/874#issuecomment-497539183
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 31 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 1 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 13 | Maven dependency ordering for branch |
   | +1 | mvninstall | 547 | trunk passed |
   | +1 | compile | 293 | trunk passed |
   | +1 | checkstyle | 86 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 887 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 191 | trunk passed |
   | 0 | spotbugs | 311 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 500 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 25 | Maven dependency ordering for patch |
   | +1 | mvninstall | 502 | the patch passed |
   | +1 | compile | 295 | the patch passed |
   | +1 | javac | 295 | the patch passed |
   | +1 | checkstyle | 99 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 146 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 171 | the patch passed |
   | +1 | findbugs | 511 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 237 | hadoop-hdds in the patch passed. |
   | -1 | unit | 1036 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 45 | The patch does not generate ASF License warnings. |
   | | | 5809 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-874/5/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/874 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 8dda786bc398 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 9122b9b |
   | Default Java | 1.8.0_212 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-874/5/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-874/5/testReport/ |
   | Max. process+thread count | 4453 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/ozone-manager hadoop-ozone/integration-test U: 
hadoop-ozone |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-874/5/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 251434)
Time Spent: 3h  (was: 2h 50m)

> Implement addAcl,removeAcl,setAcl,getAcl  for Bucket
> 
>
> Key: HDDS-1540
> URL: https://issues.apache.org/jira/browse/HDDS-1540
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 3h
>  Remaining Estimate: 0h
>
> Implement addAcl,removeAcl,setAcl,getAcl  for Bucket



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1540) Implement addAcl,removeAcl,setAcl,getAcl for Bucket

2019-05-30 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1540?focusedWorklogId=251432&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-251432
 ]

ASF GitHub Bot logged work on HDDS-1540:


Author: ASF GitHub Bot
Created on: 31/May/19 01:05
Start Date: 31/May/19 01:05
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #874: HDDS-1540. 
Implement addAcl,removeAcl,setAcl,getAcl for Bucket. Contributed by Ajay Kumar.
URL: https://github.com/apache/hadoop/pull/874#issuecomment-497538327
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 32 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 1 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 1 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 37 | Maven dependency ordering for branch |
   | +1 | mvninstall | 589 | trunk passed |
   | +1 | compile | 288 | trunk passed |
   | +1 | checkstyle | 89 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 905 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 167 | trunk passed |
   | 0 | spotbugs | 299 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 491 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 38 | Maven dependency ordering for patch |
   | +1 | mvninstall | 502 | the patch passed |
   | +1 | compile | 316 | the patch passed |
   | +1 | javac | 316 | the patch passed |
   | +1 | checkstyle | 90 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 649 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 178 | the patch passed |
   | +1 | findbugs | 571 | the patch passed |
   ||| _ Other Tests _ |
   | -1 | unit | 146 | hadoop-hdds in the patch failed. |
   | -1 | unit | 1154 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 61 | The patch does not generate ASF License warnings. |
   | | | 6499 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.ozone.container.common.statemachine.commandhandler.TestCloseContainerCommandHandler
 |
   |   | hadoop.ozone.client.rpc.TestFailureHandlingByClient |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-874/4/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/874 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 666edfb3af78 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 9122b9b |
   | Default Java | 1.8.0_212 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-874/4/artifact/out/patch-unit-hadoop-hdds.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-874/4/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-874/4/testReport/ |
   | Max. process+thread count | 4615 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/ozone-manager hadoop-ozone/integration-test U: 
hadoop-ozone |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-874/4/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 251432)
Time Spent: 2h 50m  (was: 2h 40m)

> Implement addAcl,removeAcl,setAcl,getAcl  for Bucket
> 
>
> Key: HDDS-1540
> URL: https://issues.apache.org/jira/browse/HDDS-1540
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
>  Labels: pull-request-available
>  Time 

[jira] [Commented] (HDFS-13654) Use a random secret when a secret file doesn't exist in HttpFS. This should be default.

2019-05-30 Thread Akira Ajisaka (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13654?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16852528#comment-16852528
 ] 

Akira Ajisaka commented on HDFS-13654:
--

+1, thanks [~tasanuma].

> Use a random secret when a secret file doesn't  exist in HttpFS. This should 
> be default.
> 
>
> Key: HDFS-13654
> URL: https://issues.apache.org/jira/browse/HDFS-13654
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: httpfs, security
>Reporter: Pulkit Bhardwaj
>Assignee: Takanobu Asanuma
>Priority: Major
> Attachments: HDFS-13654.1.patch, HDFS-13654.2.patch, 
> HDFS-13654.3.patch, HDFS-13654.4.patch, HDFS-13654.5.patch, HDFS-13654.6.patch
>
>
> {code:java}
> curl -s 
> https://raw.githubusercontent.com/apache/hadoop/trunk/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/conf/httpfs-signature.secret
>  
> hadoop httpfs secret{code}
>  
> The "secret" is a known string, it is better to keep this a random string so 
> that it is not well known.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1540) Implement addAcl,removeAcl,setAcl,getAcl for Bucket

2019-05-30 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1540?focusedWorklogId=251430&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-251430
 ]

ASF GitHub Bot logged work on HDDS-1540:


Author: ASF GitHub Bot
Created on: 31/May/19 01:00
Start Date: 31/May/19 01:00
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #874: HDDS-1540. 
Implement addAcl,removeAcl,setAcl,getAcl for Bucket. Contributed by Ajay Kumar.
URL: https://github.com/apache/hadoop/pull/874#issuecomment-497537679
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 51 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 1 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 11 | Maven dependency ordering for branch |
   | +1 | mvninstall | 507 | trunk passed |
   | +1 | compile | 259 | trunk passed |
   | +1 | checkstyle | 75 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 813 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 143 | trunk passed |
   | 0 | spotbugs | 302 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 494 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 16 | Maven dependency ordering for patch |
   | +1 | mvninstall | 507 | the patch passed |
   | +1 | compile | 284 | the patch passed |
   | +1 | javac | 284 | the patch passed |
   | +1 | checkstyle | 85 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 668 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 145 | the patch passed |
   | +1 | findbugs | 519 | the patch passed |
   ||| _ Other Tests _ |
   | -1 | unit | 334 | hadoop-hdds in the patch failed. |
   | -1 | unit | 1313 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 39 | The patch does not generate ASF License warnings. |
   | | | 6418 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.hdds.scm.container.placement.algorithms.TestSCMContainerPlacementRackAware
 |
   |   | hadoop.ozone.client.rpc.TestHybridPipelineOnDatanode |
   |   | hadoop.ozone.client.rpc.TestOzoneAtRestEncryption |
   |   | hadoop.hdds.scm.pipeline.TestPipelineClose |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClient |
   |   | hadoop.hdds.scm.pipeline.TestSCMPipelineManager |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClientWithRatis |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-874/3/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/874 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 52f230d9b19a 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 9122b9b |
   | Default Java | 1.8.0_212 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-874/3/artifact/out/patch-unit-hadoop-hdds.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-874/3/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-874/3/testReport/ |
   | Max. process+thread count | 3231 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/ozone-manager hadoop-ozone/integration-test U: 
hadoop-ozone |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-874/3/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 251430)
Time Spent: 2h 40m  (was: 2.5h)

> Implement addAcl,removeAcl,setAcl,getAcl  for Bucket
> 
>
> Key: HDDS-1540
> URL: 

[jira] [Work logged] (HDDS-1618) Merge code for HA and Non-HA OM requests for bucket

2019-05-30 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1618?focusedWorklogId=251428&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-251428
 ]

ASF GitHub Bot logged work on HDDS-1618:


Author: ASF GitHub Bot
Created on: 31/May/19 00:55
Start Date: 31/May/19 00:55
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #877: HDDS-1618. Merge 
code for HA and Non-HA OM write type requests for bucket
URL: https://github.com/apache/hadoop/pull/877#issuecomment-497536806
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 71 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 1 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 1 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 822 | trunk passed |
   | +1 | compile | 397 | trunk passed |
   | +1 | checkstyle | 99 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 1075 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 207 | trunk passed |
   | 0 | spotbugs | 402 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 635 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 700 | the patch passed |
   | +1 | compile | 370 | the patch passed |
   | +1 | javac | 370 | the patch passed |
   | -0 | checkstyle | 62 | hadoop-ozone: The patch generated 2 new + 0 
unchanged - 0 fixed = 2 total (was 0) |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 867 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 227 | the patch passed |
   | +1 | findbugs | 676 | the patch passed |
   ||| _ Other Tests _ |
   | -1 | unit | 247 | hadoop-hdds in the patch failed. |
   | -1 | unit | 1492 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 69 | The patch does not generate ASF License warnings. |
   | | | 8361 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.container.ozoneimpl.TestOzoneContainer |
   |   | hadoop.ozone.client.rpc.TestReadRetries |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClient |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-877/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/877 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 940da78fc1df 3.13.0-153-generic #203-Ubuntu SMP Thu Jun 14 
08:52:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 9122b9b |
   | Default Java | 1.8.0_212 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-877/2/artifact/out/diff-checkstyle-hadoop-ozone.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-877/2/artifact/out/patch-unit-hadoop-hdds.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-877/2/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-877/2/testReport/ |
   | Max. process+thread count | 2470 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/ozone-manager U: hadoop-ozone/ozone-manager |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-877/2/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 251428)
Time Spent: 40m  (was: 0.5h)

> Merge code for HA and Non-HA OM requests for bucket
> ---
>
> Key: HDDS-1618
> URL: https://issues.apache.org/jira/browse/HDDS-1618
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  

[jira] [Work logged] (HDDS-1618) Merge code for HA and Non-HA OM requests for bucket

2019-05-30 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1618?focusedWorklogId=251424&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-251424
 ]

ASF GitHub Bot logged work on HDDS-1618:


Author: ASF GitHub Bot
Created on: 31/May/19 00:46
Start Date: 31/May/19 00:46
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #877: HDDS-1618. Merge 
code for HA and Non-HA OM write type requests for bucket
URL: https://github.com/apache/hadoop/pull/877#issuecomment-497535333
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 71 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 1 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 632 | trunk passed |
   | +1 | compile | 281 | trunk passed |
   | +1 | checkstyle | 79 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 961 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 157 | trunk passed |
   | 0 | spotbugs | 328 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 551 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 498 | the patch passed |
   | +1 | compile | 287 | the patch passed |
   | +1 | javac | 287 | the patch passed |
   | -0 | checkstyle | 41 | hadoop-ozone: The patch generated 2 new + 0 
unchanged - 0 fixed = 2 total (was 0) |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 717 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 173 | the patch passed |
   | +1 | findbugs | 512 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 263 | hadoop-hdds in the patch passed. |
   | -1 | unit | 1362 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 50 | The patch does not generate ASF License warnings. |
   | | | 6867 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.client.rpc.TestOzoneRpcClientWithRatis |
   |   | hadoop.ozone.om.TestOmAcls |
   |   | hadoop.ozone.client.rpc.TestFailureHandlingByClient |
   |   | hadoop.ozone.om.TestScmSafeMode |
   |   | hadoop.ozone.client.rpc.TestBCSID |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-877/3/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/877 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux f29c8cca7822 4.4.0-143-generic #169~14.04.2-Ubuntu SMP Wed 
Feb 13 15:00:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 9122b9b |
   | Default Java | 1.8.0_212 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-877/3/artifact/out/diff-checkstyle-hadoop-ozone.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-877/3/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-877/3/testReport/ |
   | Max. process+thread count | 4520 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/ozone-manager U: hadoop-ozone/ozone-manager |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-877/3/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 251424)
Time Spent: 0.5h  (was: 20m)

> Merge code for HA and Non-HA OM requests for bucket
> ---
>
> Key: HDDS-1618
> URL: https://issues.apache.org/jira/browse/HDDS-1618
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: 

[jira] [Updated] (HDDS-1620) Implement Volume Write Requests to use Cache and DoubleBuffer

2019-05-30 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1620?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-1620:
-
Description: 
Implement Volume write requests to use OM Cache, double buffer. 

In this Jira we will add the changes to implement volume operations. HA and
non-HA will have different code paths at first, but once all requests are
implemented there will be a single code path.

  was:
Implement Bucket write requests to use OM Cache, double buffer.

Also, in OM we previously used the Ratis client to communicate with the Ratis
server; instead of that, we now use the Ratis server APIs.

 

In this Jira we will add the changes to implement bucket operations. HA and
non-HA will have different code paths at first, but once all requests are
implemented there will be a single code path.


> Implement Volume Write Requests to use Cache and DoubleBuffer
> -
>
> Key: HDDS-1620
> URL: https://issues.apache.org/jira/browse/HDDS-1620
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Manager
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>
> Implement Volume write requests to use OM Cache, double buffer. 
> In this Jira we will add the changes to implement volume operations. HA and
> non-HA will have different code paths at first, but once all requests are
> implemented there will be a single code path.
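Since the double buffer carries the design here, a generic sketch of the pattern (illustrative only, not the actual OzoneManagerDoubleBuffer): writers append to the current buffer while the flusher swaps buffers under a lock and drains the ready buffer outside it, so writes never wait on the flush.

{code:java}
import java.util.ArrayDeque;
import java.util.Queue;
import java.util.function.Consumer;

// Generic double-buffer sketch, for illustration only.
public class DoubleBuffer<T> {
  private Queue<T> currentBuffer = new ArrayDeque<>();
  private Queue<T> readyBuffer = new ArrayDeque<>();

  // Writers only touch the current buffer, under a short lock.
  public synchronized void add(T item) {
    currentBuffer.add(item);
  }

  // The flush thread swaps buffers under the lock, then drains
  // outside it so writers are never blocked by flush I/O.
  public void flush(Consumer<T> sink) {
    Queue<T> toFlush;
    synchronized (this) {
      toFlush = currentBuffer;
      currentBuffer = readyBuffer;  // empty after the previous flush
      readyBuffer = toFlush;
    }
    T item;
    while ((item = toFlush.poll()) != null) {
      sink.accept(item);
    }
  }
}
{code}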



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-1620) Implement Volume Write Requests to use Cache and DoubleBuffer

2019-05-30 Thread Bharat Viswanadham (JIRA)
Bharat Viswanadham created HDDS-1620:


 Summary: Implement Volume Write Requests to use Cache and 
DoubleBuffer
 Key: HDDS-1620
 URL: https://issues.apache.org/jira/browse/HDDS-1620
 Project: Hadoop Distributed Data Store
  Issue Type: Sub-task
  Components: Ozone Manager
Reporter: Bharat Viswanadham
Assignee: Bharat Viswanadham
 Fix For: 0.5.0


Implement Bucket write requests to use OM Cache, double buffer.

Also, in OM we previously used the Ratis client to communicate with the Ratis
server; instead of that, we now use the Ratis server APIs.

 

In this Jira we will add the changes to implement bucket operations. HA and
non-HA will have different code paths at first, but once all requests are
implemented there will be a single code path.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1579) Create OMDoubleBuffer metrics

2019-05-30 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1579?focusedWorklogId=251421&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-251421
 ]

ASF GitHub Bot logged work on HDDS-1579:


Author: ASF GitHub Bot
Created on: 31/May/19 00:27
Start Date: 31/May/19 00:27
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #871: 
HDDS-1579. Create OMDoubleBuffer metrics.
URL: https://github.com/apache/hadoop/pull/871#discussion_r289220559
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/ratis/OzoneManagerDoubleBuffer.java
 ##
 @@ -134,6 +141,23 @@ private void flushTransactions() {
 }
   }
 
+  /**
+   * Set OzoneManagerDoubleBuffer metrics values.
+   * @param flushedTransactionsSize
+   */
+  private void setOzoneManagerDoubleBufferMetrics(
+  long flushedTransactionsSize) {
+ozoneManagerDoubleBufferMetrics.incTotalNumOfFlushIterations();
+ozoneManagerDoubleBufferMetrics.setTotalNumOfFlushedTransactions(
 
 Review comment:
   Renamed the function as suggested.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 251421)
Time Spent: 1h  (was: 50m)

> Create OMDoubleBuffer metrics
> -
>
> Key: HDDS-1579
> URL: https://issues.apache.org/jira/browse/HDDS-1579
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> This Jira is to implement OMDoubleBuffer metrics, to show metrics like:
>  # flushIterations
>  # totalTransactionsFlushed
>  
> plus any other related metrics. This Jira was created based on a comment by 
> [~anu] during the HDDS-1512 review.
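A minimal sketch of such a metrics source using Hadoop's metrics2 annotations; the class and counter names below are illustrative, not the committed OzoneManagerDoubleBufferMetrics:

{code:java}
import org.apache.hadoop.metrics2.annotation.Metric;
import org.apache.hadoop.metrics2.annotation.Metrics;
import org.apache.hadoop.metrics2.lib.MutableCounterLong;

// Illustrative metrics2 source for the counters listed above;
// registration with DefaultMetricsSystem is omitted for brevity.
@Metrics(about = "OM DoubleBuffer metrics", context = "ozone")
public class OMDoubleBufferMetricsSketch {
  @Metric private MutableCounterLong totalNumOfFlushIterations;
  @Metric private MutableCounterLong totalNumOfFlushedTransactions;

  public void incFlushIterations() {
    totalNumOfFlushIterations.incr();
  }

  public void incFlushedTransactions(long count) {
    totalNumOfFlushedTransactions.incr(count);
  }
}
{code}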



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1579) Create OMDoubleBuffer metrics

2019-05-30 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1579?focusedWorklogId=251419&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-251419
 ]

ASF GitHub Bot logged work on HDDS-1579:


Author: ASF GitHub Bot
Created on: 31/May/19 00:27
Start Date: 31/May/19 00:27
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on issue #871: HDDS-1579. 
Create OMDoubleBuffer metrics.
URL: https://github.com/apache/hadoop/pull/871#issuecomment-497532112
 
 
   Resolved conflicts with the latest trunk and addressed review comments.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 251419)
Time Spent: 50m  (was: 40m)

> Create OMDoubleBuffer metrics
> -
>
> Key: HDDS-1579
> URL: https://issues.apache.org/jira/browse/HDDS-1579
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> This Jira is to implement OMDoubleBuffer metrics, to show metrics like:
>  # flushIterations
>  # totalTransactionsFlushed
>  
> plus any other related metrics. This Jira was created based on a comment by 
> [~anu] during the HDDS-1512 review.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1618) Merge code for HA and Non-HA OM requests for bucket

2019-05-30 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1618?focusedWorklogId=251414&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-251414
 ]

ASF GitHub Bot logged work on HDDS-1618:


Author: ASF GitHub Bot
Created on: 31/May/19 00:15
Start Date: 31/May/19 00:15
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #877: HDDS-1618. Merge 
code for HA and Non-HA OM write type requests for bucket
URL: https://github.com/apache/hadoop/pull/877#issuecomment-497530156
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 54 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 2 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 17 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 69 | Maven dependency ordering for branch |
   | +1 | mvninstall | 586 | trunk passed |
   | +1 | compile | 274 | trunk passed |
   | +1 | checkstyle | 67 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 828 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 149 | trunk passed |
   | 0 | spotbugs | 316 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 524 | trunk passed |
   | -0 | patch | 363 | Used diff version of patch file. Binary files and 
potentially other changes not applied. Please rebase and squash commits if 
necessary. |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 29 | Maven dependency ordering for patch |
   | +1 | mvninstall | 521 | the patch passed |
   | +1 | compile | 304 | the patch passed |
   | +1 | cc | 304 | the patch passed |
   | +1 | javac | 304 | the patch passed |
   | -0 | checkstyle | 42 | hadoop-ozone: The patch generated 4 new + 0 
unchanged - 0 fixed = 4 total (was 0) |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 699 | patch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 76 | hadoop-ozone generated 3 new + 5 unchanged - 0 fixed = 
8 total (was 5) |
   | +1 | findbugs | 548 | the patch passed |
   ||| _ Other Tests _ |
   | -1 | unit | 266 | hadoop-hdds in the patch failed. |
   | -1 | unit | 1774 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 46 | The patch does not generate ASF License warnings. |
   | | | 7144 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.hdds.scm.container.placement.algorithms.TestSCMContainerPlacementRackAware
 |
   |   | hadoop.ozone.client.rpc.TestSecureOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClientWithRatis |
   |   | hadoop.ozone.client.rpc.TestOzoneAtRestEncryption |
   |   | hadoop.ozone.om.TestOmAcls |
   |   | hadoop.ozone.client.rpc.TestReadRetries |
   |   | hadoop.ozone.client.rpc.TestOzoneClientRetriesOnException |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-877/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/877 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle cc |
   | uname | Linux d00cb2bf67aa 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 33c62f8 |
   | Default Java | 1.8.0_212 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-877/1/artifact/out/diff-checkstyle-hadoop-ozone.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-877/1/artifact/out/diff-javadoc-javadoc-hadoop-ozone.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-877/1/artifact/out/patch-unit-hadoop-hdds.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-877/1/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-877/1/testReport/ |
   | Max. process+thread count | 5056 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/common hadoop-ozone/common 
hadoop-ozone/integration-test hadoop-ozone/ozone-manager U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-877/1/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

[jira] [Commented] (HDDS-1608) Support Ozone Prefix ACLs in OM metadata table

2019-05-30 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1608?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16852471#comment-16852471
 ] 

Hudson commented on HDDS-1608:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #16639 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/16639/])
HDDS-1608. Support Ozone Prefix ACLs in OM metadata table. Contributed (github: 
rev 219e2867220f805fb69ff9c991113f92c3f4ed2f)
* (edit) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/container/common/transport/server/ratis/TestCSMMetrics.java
* (add) 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/codec/OmPrefixInfoCodec.java
* (add) 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/helpers/OmPrefixInfo.java
* (edit) hadoop-ozone/common/src/main/proto/OzoneManagerProtocol.proto
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OmMetadataManagerImpl.java
* (edit) 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/OMMetadataManager.java
* (add) 
hadoop-ozone/common/src/test/java/org/apache/hadoop/ozone/om/codec/TestOmPrefixInfoCodec.java


> Support Ozone Prefix ACLs in OM metadata table
> --
>
> Key: HDDS-1608
> URL: https://issues.apache.org/jira/browse/HDDS-1608
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.4.1
>
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1540) Implement addAcl,removeAcl,setAcl,getAcl for Bucket

2019-05-30 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1540?focusedWorklogId=251408=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-251408
 ]

ASF GitHub Bot logged work on HDDS-1540:


Author: ASF GitHub Bot
Created on: 30/May/19 23:51
Start Date: 30/May/19 23:51
Worklog Time Spent: 10m 
  Work Description: xiaoyuyao commented on pull request #874: HDDS-1540. 
Implement addAcl,removeAcl,setAcl,getAcl for Bucket. Contributed by Ajay Kumar.
URL: https://github.com/apache/hadoop/pull/874#discussion_r289214524
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/IOzoneAcl.java
 ##
 @@ -0,0 +1,84 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with this
+ * work for additional information regarding copyright ownership.  The ASF
+ * licenses this file to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,WITHOUT
+ * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+ * License for the specific language governing permissions and limitations 
under
+ * the License.
+ */
+package org.apache.hadoop.ozone.om;
+
+import org.apache.hadoop.ozone.OzoneAcl;
+import org.apache.hadoop.ozone.security.acl.OzoneObj;
+
+import java.io.IOException;
+import java.util.List;
+
+/**
+ * Interface for Ozone Acl management.
+ */
+public interface IOzoneAcl {
+
+  /**
+   * Add acl for Ozone object. Return true if acl is added successfully,
+   * else false.
+   * @param obj Ozone object for which acl should be added.
+   * @param acl ozone acl to be added.
+   *
+   * @throws IOException if there is an error.
+   */
+  boolean addAcl(OzoneObj obj, OzoneAcl acl) throws IOException;
+
+  /**
+   * Remove acl for Ozone object. Return true if acl is removed successfully,
+   * else false.
+   * @param obj Ozone object.
+   * @param acl Ozone acl to be removed.
+   *
+   * @throws IOException if there is an error.
+   */
+  boolean removeAcl(OzoneObj obj, OzoneAcl acl) throws IOException;
+
+  /**
+   * Acls to be set for given Ozone object. This operation resets the ACL for
+   * the given object to the list of ACLs provided in the argument.
+   * @param obj Ozone object.
+   * @param acls List of acls.
+   *
+   * @throws IOException if there is an error.
+   */
+  boolean setAcl(OzoneObj obj, List<OzoneAcl> acls) throws IOException;
+
+  /**
+   * Returns list of ACLs for given Ozone object.
+   * @param obj Ozone object.
+   *
+   * @throws IOException if there is an error.
+   */
+  List<OzoneAcl> getAcl(OzoneObj obj) throws IOException;
+
+  /**
+   * Validates whether a new acl can be added to the given list of ACLs.
+   * @param newAcl new acl to be added.
+   * @param currentAcls list of current acls.
+   *
+   * @return true if newAcl addition to existing acls is valid, else false.
+   */
+  static boolean validateNewAcl(OzoneAcl newAcl, List<OzoneAcl> currentAcls) {
+
+// Check 1: Check for duplicates.
+if (currentAcls.contains(newAcl)) {
 
 Review comment:
   Let's remove the default implementation, as add and remove have different 
validation semantics:
   
   when you add, we check for duplication;
   when you remove, we check for existence.
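   
   For illustration, split validators along those lines might look like this 
(a sketch; the method names are assumptions, not the committed code):
   
   static boolean validateAddAcl(OzoneAcl newAcl, List<OzoneAcl> currentAcls) {
     // Adding is valid only if the ACL is not already present.
     return !currentAcls.contains(newAcl);
   }
   
   static boolean validateRemoveAcl(OzoneAcl acl, List<OzoneAcl> currentAcls) {
     // Removing is valid only if the ACL is currently present.
     return currentAcls.contains(acl);
   }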
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 251408)
Time Spent: 2.5h  (was: 2h 20m)

> Implement addAcl,removeAcl,setAcl,getAcl  for Bucket
> 
>
> Key: HDDS-1540
> URL: https://issues.apache.org/jira/browse/HDDS-1540
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 2.5h
>  Remaining Estimate: 0h
>
> Implement addAcl,removeAcl,setAcl,getAcl  for Bucket



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1608) Support Ozone Prefix ACLs in OM metadata table

2019-05-30 Thread Xiaoyu Yao (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1608?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDDS-1608:
-
   Resolution: Fixed
Fix Version/s: 0.4.1
   Status: Resolved  (was: Patch Available)

Thanks all for the reviews. I've committed the patch to trunk. 

> Support Ozone Prefix ACLs in OM metadata table
> --
>
> Key: HDDS-1608
> URL: https://issues.apache.org/jira/browse/HDDS-1608
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.4.1
>
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1608) Support Ozone Prefix ACLs in OM metadata table

2019-05-30 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1608?focusedWorklogId=251404=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-251404
 ]

ASF GitHub Bot logged work on HDDS-1608:


Author: ASF GitHub Bot
Created on: 30/May/19 23:44
Start Date: 30/May/19 23:44
Worklog Time Spent: 10m 
  Work Description: xiaoyuyao commented on pull request #875: HDDS-1608. 
Support Ozone Prefix ACLs in OM metadata table. Contribute…
URL: https://github.com/apache/hadoop/pull/875
 
 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 251404)
Time Spent: 1h 20m  (was: 1h 10m)

> Support Ozone Prefix ACLs in OM metadata table
> --
>
> Key: HDDS-1608
> URL: https://issues.apache.org/jira/browse/HDDS-1608
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1371) Download RocksDB checkpoint from OM Leader to Follower

2019-05-30 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1371?focusedWorklogId=251402=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-251402
 ]

ASF GitHub Bot logged work on HDDS-1371:


Author: ASF GitHub Bot
Created on: 30/May/19 23:38
Start Date: 30/May/19 23:38
Worklog Time Spent: 10m 
  Work Description: hanishakoneru commented on pull request #703: 
HDDS-1371. Download RocksDB checkpoint from OM Leader to Follower.
URL: https://github.com/apache/hadoop/pull/703#discussion_r289211998
 
 

 ##
 File path: hadoop-hdds/common/src/main/resources/ozone-default.xml
 ##
 @@ -597,6 +597,18 @@
   ozone.om.http-address.
 
   
+  
+ozone.om.http.policy
+HTTP_ONLY
+OM, MANAGEMENT
 
 Review comment:
   Updated BaseHttpServer to use ozone.http.policy when it is set, and to fall 
back to dfs.http.policy otherwise.
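   
   A minimal sketch of that fallback (the key names come from this comment and 
the diff above; the surrounding method is an illustrative assumption):
   
   HttpConfig.Policy getHttpPolicy(Configuration conf) {
     // Prefer the Ozone-specific key; fall back to the HDFS-wide one.
     String policy = conf.get("ozone.http.policy",
         conf.get("dfs.http.policy", HttpConfig.Policy.HTTP_ONLY.name()));
     return HttpConfig.Policy.fromString(policy);
   }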
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 251402)
Time Spent: 2.5h  (was: 2h 20m)

> Download RocksDB checkpoint from OM Leader to Follower
> --
>
> Key: HDDS-1371
> URL: https://issues.apache.org/jira/browse/HDDS-1371
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 2.5h
>  Remaining Estimate: 0h
>
> If a follower OM is lagging far behind the leader OM, or in the case of a 
> restart or bootstrapping, a follower OM might need a RocksDB checkpoint from 
> the leader to catch up with it, because the leader might have purged its 
> logs after taking a snapshot.
>  This Jira aims to add support for downloading a RocksDB checkpoint from the 
> leader OM to a follower OM through an HTTP servlet. We reuse the 
> DBCheckpoint servlet used by the Recon server.
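
For illustration, a hedged sketch of the follower-side download; the servlet
path, address, and file names here are assumptions for illustration, and
HDDS-1371 defines the real endpoint.

{code:java}
import java.io.InputStream;
import java.io.OutputStream;
import java.net.URL;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import org.apache.hadoop.io.IOUtils;

public class CheckpointDownloadSketch {
  public static void main(String[] args) throws Exception {
    // Hypothetical leader HTTP address and local destination path.
    String leaderHttpAddr = "om-leader.example.com:9874";
    Path checkpointTarball = Paths.get("/tmp/om.db.checkpoint.tar.gz");

    // Stream the checkpoint tarball served by the DBCheckpoint servlet.
    URL url = new URL("http://" + leaderHttpAddr + "/dbCheckpoint");
    try (InputStream in = url.openStream();
         OutputStream out = Files.newOutputStream(checkpointTarball)) {
      IOUtils.copyBytes(in, out, 4096);
    }
  }
}
{code}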



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1540) Implement addAcl,removeAcl,setAcl,getAcl for Bucket

2019-05-30 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1540?focusedWorklogId=251401=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-251401
 ]

ASF GitHub Bot logged work on HDDS-1540:


Author: ASF GitHub Bot
Created on: 30/May/19 23:32
Start Date: 30/May/19 23:32
Worklog Time Spent: 10m 
  Work Description: ajayydv commented on pull request #874: HDDS-1540. 
Implement addAcl,removeAcl,setAcl,getAcl for Bucket. Contributed by Ajay Kumar.
URL: https://github.com/apache/hadoop/pull/874#discussion_r289210837
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/BucketManagerImpl.java
 ##
 @@ -423,4 +428,205 @@ private void commitDeleteBucketInfoToOMDB(String 
dbBucketKey)
 volumeName, startBucket, bucketPrefix, maxNumOfBuckets);
 
   }
+
+  /**
+   * Add acl for Ozone object. Return true if acl is added successfully,
+   * else false.
+   *
+   * @param obj Ozone object for which acl should be added.
+   * @param acl ozone acl to be added.
+   * @throws IOException if there is an error.
+   */
+  @Override
+  public boolean addAcl(OzoneObj obj, OzoneAcl acl) throws IOException {
+Objects.requireNonNull(obj);
+Objects.requireNonNull(acl);
+if (!obj.getResourceType().equals(OzoneObj.ResourceType.BUCKET)) {
+  throw new IllegalArgumentException("Unexpected argument passed to " +
+  "BucketManager. OzoneObj type:" + obj.getResourceType());
+}
+String volume = obj.getVolumeName();
+String bucket = obj.getBucketName();
+metadataManager.getLock().acquireBucketLock(volume, bucket);
+try {
+  String dbBucketKey = metadataManager.getBucketKey(volume, bucket);
+  OmBucketInfo bucketInfo =
+  metadataManager.getBucketTable().get(dbBucketKey);
+  if (bucketInfo == null) {
+LOG.debug("Bucket:{}/{} does not exist", volume, bucket);
+throw new OMException("Bucket " + bucket + " is not found",
+BUCKET_NOT_FOUND);
+  }
+  List<OzoneAcl> list = bucketInfo.getAcls();
+  list.add(acl);
+  OmBucketInfo updatedBucket = OmBucketInfo.newBuilder()
+  .setVolumeName(bucketInfo.getVolumeName())
+  .setBucketName(bucketInfo.getBucketName())
+  .setStorageType(bucketInfo.getStorageType())
+  .setIsVersionEnabled(bucketInfo.getIsVersionEnabled())
+  .setCreationTime(bucketInfo.getCreationTime())
+  .setBucketEncryptionKey(bucketInfo.getEncryptionKeyInfo())
+  .addAllMetadata(bucketInfo.getMetadata())
+  .setAcls(list)
+  .build();
+
+  metadataManager.getBucketTable().put(dbBucketKey, updatedBucket);
+} catch (IOException ex) {
+  if (!(ex instanceof OMException)) {
+LOG.error("Add acl operation failed for bucket:{}/{} acl:{}",
+volume, bucket, acl, ex);
+  }
+  throw ex;
+} finally {
+  metadataManager.getLock().releaseBucketLock(volume, bucket);
+}
+
+return true;
+  }
+
+  /**
+   * Remove acl for Ozone object. Return true if acl is removed successfully,
+   * else false.
+   *
+   * @param obj Ozone object.
+   * @param acl Ozone acl to be removed.
+   * @throws IOException if there is an error.
+   */
+  @Override
+  public boolean removeAcl(OzoneObj obj, OzoneAcl acl) throws IOException {
+Objects.requireNonNull(obj);
+Objects.requireNonNull(acl);
+if (!obj.getResourceType().equals(OzoneObj.ResourceType.BUCKET)) {
+  throw new IllegalArgumentException("Unexpected argument passed to " +
+  "BucketManager. OzoneObj type:" + obj.getResourceType());
+}
+String volume = obj.getVolumeName();
+String bucket = obj.getBucketName();
+metadataManager.getLock().acquireBucketLock(volume, bucket);
+try {
+  String dbBucketKey = metadataManager.getBucketKey(volume, bucket);
+  OmBucketInfo bucketInfo =
+  metadataManager.getBucketTable().get(dbBucketKey);
+  if (bucketInfo == null) {
+LOG.debug("Bucket:{}/{} does not exist", volume, bucket);
+throw new OMException("Bucket " + bucket + " is not found",
+BUCKET_NOT_FOUND);
+  }
+  List<OzoneAcl> list = bucketInfo.getAcls();
+  list.remove(acl);
 
 Review comment:
   Added a check which returns false if the ACL doesn't exist.
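   
   A sketch of that check as applied to the diff above (relying on 
List.remove returning whether the element was present):
   
   if (!list.remove(acl)) {
     // The ACL was not present on the bucket; report failure to the caller.
     return false;
   }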
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 251401)
Time Spent: 2h 20m  (was: 2h 10m)

> Implement addAcl,removeAcl,setAcl,getAcl  for Bucket
> 
>
> Key: HDDS-1540
> URL: 

[jira] [Commented] (HDFS-14508) RBF: Clean-up and refactor UI components

2019-05-30 Thread JIRA


[ 
https://issues.apache.org/jira/browse/HDFS-14508?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16852451#comment-16852451
 ] 

Íñigo Goiri commented on HDFS-14508:


HDFS-14475 is committed to the branch now.
We should rebase and show the proper security setting.

> RBF: Clean-up and refactor UI components
> 
>
> Key: HDFS-14508
> URL: https://issues.apache.org/jira/browse/HDFS-14508
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: CR Hota
>Assignee: Takanobu Asanuma
>Priority: Minor
> Attachments: HDFS-14508-HDFS-13891.1.patch, 
> HDFS-14508-HDFS-13891.2.patch
>
>
> The Router UI has tags that are unused or incorrectly set. The code should 
> be cleaned up.
> One such example is 
> Path : 
> (\hadoop-hdfs-project\hadoop-hdfs-rbf\src\main\webapps\router\federationhealth.js)
> {code:java}
> {"name": "routerstat", "url": 
> "/jmx?qry=Hadoop:service=NameNode,name=NameNodeStatus"},{code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14475) RBF: Expose router security enabled status on the UI

2019-05-30 Thread JIRA


 [ 
https://issues.apache.org/jira/browse/HDFS-14475?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated HDFS-14475:
---
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: HDFS-13891
   Status: Resolved  (was: Patch Available)

> RBF: Expose router security enabled status on the UI
> 
>
> Key: HDFS-14475
> URL: https://issues.apache.org/jira/browse/HDFS-14475
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: CR Hota
>Assignee: CR Hota
>Priority: Major
> Fix For: HDFS-13891
>
> Attachments: HDFS-14475-HDFS-13891.001.patch, 
> HDFS-14475-HDFS-13891.002.patch
>
>
> This is a branched-off Jira to expose a metric so that the router's security 
> status can be displayed on the UI. We are still unclear whether more work 
> needs to be done to deal with CORS etc. 
> https://issues.apache.org/jira/browse/HDFS-12510 will continue to track that.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14475) RBF: Expose router security enabled status on the UI

2019-05-30 Thread JIRA


[ 
https://issues.apache.org/jira/browse/HDFS-14475?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16852450#comment-16852450
 ] 

Íñigo Goiri commented on HDFS-14475:


+1 on  [^HDFS-14475-HDFS-13891.002.patch].
Committed to HDFS-13891.

> RBF: Expose router security enabled status on the UI
> 
>
> Key: HDFS-14475
> URL: https://issues.apache.org/jira/browse/HDFS-14475
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: CR Hota
>Assignee: CR Hota
>Priority: Major
> Fix For: HDFS-13891
>
> Attachments: HDFS-14475-HDFS-13891.001.patch, 
> HDFS-14475-HDFS-13891.002.patch
>
>
> This is a branched-off Jira to expose a metric so that the router's security 
> status can be displayed on the UI. We are still unclear whether more work 
> needs to be done to deal with CORS etc. 
> https://issues.apache.org/jira/browse/HDFS-12510 will continue to track that.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14525) JspHelper ignores hadoop.http.authentication.type

2019-05-30 Thread Eric Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14525?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16852447#comment-16852447
 ] 

Eric Yang commented on HDFS-14525:
--

[~daryn], [~Prabhu Joseph] This boils down to whether there is any valid use 
case to keep hadoop.http.authentication.type independent of 
hadoop.security.authentication.

In various places the Hadoop code interchangeably uses 
UserGroupInformation.isSecurityEnabled() for the web protocol, for example in 
[DFSUtil.java|https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSUtil.java#L1614].
  This mistake indicates that developers at heart intend to standardize on the 
hadoop.security.authentication setting and keep anonymous users out.

Maybe it is time to revisit whether there is a valid use case for setting 
hadoop.http.authentication.type differently from 
hadoop.security.authentication.  If there is no valid use case, then we 
probably want to deprecate hadoop.http.authentication.type to avoid the 
circular discussions.  All web apps can depend on 
UserGroupInformation.isSecurityEnabled(), and this bug can be closed as 
invalid.

My vote is to deprecate the hadoop.http.authentication.type setting to avoid 
code incorrectness and confusion.
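
For reference, the two checks being conflated (an illustrative snippet, not a
patch; conf is assumed to be a Hadoop Configuration):

{code:java}
// RPC-level security, driven by hadoop.security.authentication:
boolean rpcSecure = UserGroupInformation.isSecurityEnabled();

// HTTP-level security, driven by hadoop.http.authentication.type:
boolean httpSecure =
    "kerberos".equals(conf.get("hadoop.http.authentication.type"));
{code}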

> JspHelper ignores hadoop.http.authentication.type
> -
>
> Key: HDFS-14525
> URL: https://issues.apache.org/jira/browse/HDFS-14525
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: webhdfs
>Affects Versions: 3.2.0
>Reporter: Prabhu Joseph
>Priority: Major
>
> On a secure cluster with hadoop.http.authentication.type set to simple and 
> hadoop.http.authentication.anonymous.allowed set to true, the WebHDFS REST 
> API fails when user.name is not set. It runs fine if user.name=ambari-qa is 
> set.
> {code}
> [knox@pjosephdocker-1 ~]$ curl -sS -L -w '%{http_code}' -X GET -d '' -H 
> 'Content-Length: 0' --negotiate -u : 
> 'http://pjosephdocker-1.openstacklocal:50070/webhdfs/v1/services/sync/yarn-ats?op=GETFILESTATUS'
> {"RemoteException":{"exception":"SecurityException","javaClassName":"java.lang.SecurityException","message":"Failed
>  to obtain user group information: java.io.IOException: Security enabled but 
> user not authenticated by filter"}}403[knox@pjosephdocker-1 ~]$ 
> {code}
> JspHelper#getUGI checks UserGroupInformation.isSecurityEnabled() instead of 
> conf.get("hadoop.http.authentication.type").equals("kerberos") to determine 
> whether HTTP is secure, causing the issue.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1540) Implement addAcl,removeAcl,setAcl,getAcl for Bucket

2019-05-30 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1540?focusedWorklogId=251398=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-251398
 ]

ASF GitHub Bot logged work on HDDS-1540:


Author: ASF GitHub Bot
Created on: 30/May/19 23:16
Start Date: 30/May/19 23:16
Worklog Time Spent: 10m 
  Work Description: ajayydv commented on pull request #874: HDDS-1540. 
Implement addAcl,removeAcl,setAcl,getAcl for Bucket. Contributed by Ajay Kumar.
URL: https://github.com/apache/hadoop/pull/874#discussion_r289207459
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/BucketManagerImpl.java
 ##
 @@ -423,4 +428,205 @@ private void commitDeleteBucketInfoToOMDB(String 
dbBucketKey)
 volumeName, startBucket, bucketPrefix, maxNumOfBuckets);
 
   }
+
+  /**
+   * Add acl for Ozone object. Return true if acl is added successfully,
+   * else false.
+   *
+   * @param obj Ozone object for which acl should be added.
+   * @param acl ozone acl to be added.
+   * @throws IOException if there is an error.
+   */
+  @Override
+  public boolean addAcl(OzoneObj obj, OzoneAcl acl) throws IOException {
+Objects.requireNonNull(obj);
+Objects.requireNonNull(acl);
+if (!obj.getResourceType().equals(OzoneObj.ResourceType.BUCKET)) {
+  throw new IllegalArgumentException("Unexpected argument passed to " +
+  "BucketManager. OzoneObj type:" + obj.getResourceType());
+}
+String volume = obj.getVolumeName();
+String bucket = obj.getBucketName();
+metadataManager.getLock().acquireBucketLock(volume, bucket);
+try {
+  String dbBucketKey = metadataManager.getBucketKey(volume, bucket);
+  OmBucketInfo bucketInfo =
+  metadataManager.getBucketTable().get(dbBucketKey);
+  if (bucketInfo == null) {
+LOG.debug("Bucket:{}/{} does not exist", volume, bucket);
+throw new OMException("Bucket " + bucket + " is not found",
+BUCKET_NOT_FOUND);
+  }
+  List<OzoneAcl> list = bucketInfo.getAcls();
+  list.add(acl);
+  OmBucketInfo updatedBucket = OmBucketInfo.newBuilder()
+  .setVolumeName(bucketInfo.getVolumeName())
+  .setBucketName(bucketInfo.getBucketName())
+  .setStorageType(bucketInfo.getStorageType())
+  .setIsVersionEnabled(bucketInfo.getIsVersionEnabled())
+  .setCreationTime(bucketInfo.getCreationTime())
+  .setBucketEncryptionKey(bucketInfo.getEncryptionKeyInfo())
+  .addAllMetadata(bucketInfo.getMetadata())
+  .setAcls(list)
+  .build();
+
+  metadataManager.getBucketTable().put(dbBucketKey, updatedBucket);
 
 Review comment:
   Added a TODO to handle this; the Jira is [HDDS-1619].
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 251398)
Time Spent: 2h 10m  (was: 2h)

> Implement addAcl,removeAcl,setAcl,getAcl  for Bucket
> 
>
> Key: HDDS-1540
> URL: https://issues.apache.org/jira/browse/HDDS-1540
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 2h 10m
>  Remaining Estimate: 0h
>
> Implement addAcl,removeAcl,setAcl,getAcl  for Bucket



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1540) Implement addAcl,removeAcl,setAcl,getAcl for Bucket

2019-05-30 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1540?focusedWorklogId=251397=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-251397
 ]

ASF GitHub Bot logged work on HDDS-1540:


Author: ASF GitHub Bot
Created on: 30/May/19 23:14
Start Date: 30/May/19 23:14
Worklog Time Spent: 10m 
  Work Description: ajayydv commented on pull request #874: HDDS-1540. 
Implement addAcl,removeAcl,setAcl,getAcl for Bucket. Contributed by Ajay Kumar.
URL: https://github.com/apache/hadoop/pull/874#discussion_r289207003
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/BucketManagerImpl.java
 ##
 @@ -423,4 +428,205 @@ private void commitDeleteBucketInfoToOMDB(String 
dbBucketKey)
 volumeName, startBucket, bucketPrefix, maxNumOfBuckets);
 
   }
+
+  /**
+   * Add acl for Ozone object. Return true if acl is added successfully,
+   * else false.
+   *
+   * @param obj Ozone object for which acl should be added.
+   * @param acl ozone acl to be added.
+   * @throws IOException if there is an error.
+   */
+  @Override
+  public boolean addAcl(OzoneObj obj, OzoneAcl acl) throws IOException {
+Objects.requireNonNull(obj);
+Objects.requireNonNull(acl);
+if (!obj.getResourceType().equals(OzoneObj.ResourceType.BUCKET)) {
+  throw new IllegalArgumentException("Unexpected argument passed to " +
+  "BucketManager. OzoneObj type:" + obj.getResourceType());
+}
+String volume = obj.getVolumeName();
+String bucket = obj.getBucketName();
+metadataManager.getLock().acquireBucketLock(volume, bucket);
+try {
+  String dbBucketKey = metadataManager.getBucketKey(volume, bucket);
+  OmBucketInfo bucketInfo =
+  metadataManager.getBucketTable().get(dbBucketKey);
+  if (bucketInfo == null) {
+LOG.debug("Bucket:{}/{} does not exist", volume, bucket);
+throw new OMException("Bucket " + bucket + " is not found",
+BUCKET_NOT_FOUND);
+  }
+  List<OzoneAcl> list = bucketInfo.getAcls();
+  list.add(acl);
 
 Review comment:
   Added a static function in the new interface to check for existing 
duplicate ACLs; we can add more rules when we add DACLs.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 251397)
Time Spent: 2h  (was: 1h 50m)

> Implement addAcl,removeAcl,setAcl,getAcl  for Bucket
> 
>
> Key: HDDS-1540
> URL: https://issues.apache.org/jira/browse/HDDS-1540
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 2h
>  Remaining Estimate: 0h
>
> Implement addAcl,removeAcl,setAcl,getAcl  for Bucket



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1540) Implement addAcl,removeAcl,setAcl,getAcl for Bucket

2019-05-30 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1540?focusedWorklogId=251392=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-251392
 ]

ASF GitHub Bot logged work on HDDS-1540:


Author: ASF GitHub Bot
Created on: 30/May/19 23:10
Start Date: 30/May/19 23:10
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #874: 
HDDS-1540. Implement addAcl,removeAcl,setAcl,getAcl for Bucket. Contributed by 
Ajay Kumar.
URL: https://github.com/apache/hadoop/pull/874#discussion_r289200410
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/BucketManagerImpl.java
 ##
 @@ -423,4 +428,205 @@ private void commitDeleteBucketInfoToOMDB(String 
dbBucketKey)
 volumeName, startBucket, bucketPrefix, maxNumOfBuckets);
 
   }
+
+  /**
+   * Add acl for Ozone object. Return true if acl is added successfully,
+   * else false.
+   *
+   * @param obj Ozone object for which acl should be added.
+   * @param acl ozone acl to be added.
+   * @throws IOException if there is an error.
+   */
+  @Override
+  public boolean addAcl(OzoneObj obj, OzoneAcl acl) throws IOException {
+Objects.requireNonNull(obj);
+Objects.requireNonNull(acl);
+if (!obj.getResourceType().equals(OzoneObj.ResourceType.BUCKET)) {
+  throw new IllegalArgumentException("Unexpected argument passed to " +
+  "BucketManager. OzoneObj type:" + obj.getResourceType());
+}
+String volume = obj.getVolumeName();
+String bucket = obj.getBucketName();
+metadataManager.getLock().acquireBucketLock(volume, bucket);
+try {
+  String dbBucketKey = metadataManager.getBucketKey(volume, bucket);
+  OmBucketInfo bucketInfo =
+  metadataManager.getBucketTable().get(dbBucketKey);
+  if (bucketInfo == null) {
+LOG.debug("Bucket:{}/{} does not exist", volume, bucket);
+throw new OMException("Bucket " + bucket + " is not found",
+BUCKET_NOT_FOUND);
+  }
+  List<OzoneAcl> list = bucketInfo.getAcls();
+  list.add(acl);
+  OmBucketInfo updatedBucket = OmBucketInfo.newBuilder()
+  .setVolumeName(bucketInfo.getVolumeName())
+  .setBucketName(bucketInfo.getBucketName())
+  .setStorageType(bucketInfo.getStorageType())
+  .setIsVersionEnabled(bucketInfo.getIsVersionEnabled())
+  .setCreationTime(bucketInfo.getCreationTime())
+  .setBucketEncryptionKey(bucketInfo.getEncryptionKeyInfo())
+  .addAllMetadata(bucketInfo.getMetadata())
+  .setAcls(list)
+  .build();
+
+  metadataManager.getBucketTable().put(dbBucketKey, updatedBucket);
 
 Review comment:
   Yes, in the HA path we shall not do a table put; we will use the cache and 
double buffer.
   For reference, have a look at HDDS-1551.
   
   I have opened HDDS-1618 to use the newly added code for both HA and 
non-HA. As this is a newly implemented OM request, it can follow the new 
approach, so that we can avoid refactoring again.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 251392)
Time Spent: 1h 50m  (was: 1h 40m)

> Implement addAcl,removeAcl,setAcl,getAcl  for Bucket
> 
>
> Key: HDDS-1540
> URL: https://issues.apache.org/jira/browse/HDDS-1540
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 50m
>  Remaining Estimate: 0h
>
> Implement addAcl,removeAcl,setAcl,getAcl  for Bucket



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-1619) Changes for OM HA.

2019-05-30 Thread Ajay Kumar (JIRA)
Ajay Kumar created HDDS-1619:


 Summary: Changes for OM HA.
 Key: HDDS-1619
 URL: https://issues.apache.org/jira/browse/HDDS-1619
 Project: Hadoop Distributed Data Store
  Issue Type: Sub-task
Reporter: Ajay Kumar


[HDDS-1539] adds 4 new APIs for the Ozone RPC client. The OM HA 
implementation needs to handle them.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14521) Suppress setReplication logging.

2019-05-30 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14521?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16852430#comment-16852430
 ] 

Hudson commented on HDFS-14521:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #16638 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/16638/])
HDFS-14521. Suppress setReplication logging. Contributed by Kihwal Lee (kihwal: 
rev 9122b9b6493d315c8910fbc06eaba96273ee335c)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirAttrOp.java


> Suppress setReplication logging.
> 
>
> Key: HDFS-14521
> URL: https://issues.apache.org/jira/browse/HDFS-14521
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Kihwal Lee
>Assignee: Kihwal Lee
>Priority: Major
> Fix For: 2.10.0, 3.3.0, 2.8.6, 3.2.1, 2.9.3, 3.1.3
>
> Attachments: HDFS-14521-01.patch, HDFS-14521.patch
>
>
> Currently, processing of setReplication on standby NNs causes logging.
> {noformat}
> 2101-14-29 17:49:04,026 [Edit log tailer] INFO namenode.FSDirectory: 
> Increasing replication from 3 to 10 for xxx
> {noformat}
> This should be suppressed during edit replay.
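
A minimal sketch of the kind of guard this asks for (the fromEditReplay flag
and all names here are assumptions for illustration, not the committed fix):

{code:java}
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

class ReplicationLogSketch {
  private static final Logger LOG =
      LoggerFactory.getLogger(ReplicationLogSketch.class);

  // Emit the INFO line only for client-driven changes, not for edit replay
  // on a standby NameNode.
  static void logReplicationChange(String src, short oldRepl, short newRepl,
      boolean fromEditReplay) {
    if (!fromEditReplay && oldRepl != newRepl) {
      LOG.info("Changing replication from {} to {} for {}",
          oldRepl, newRepl, src);
    }
  }
}
{code}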



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14508) RBF: Clean-up and refactor UI components

2019-05-30 Thread CR Hota (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14508?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16852426#comment-16852426
 ] 

CR Hota commented on HDFS-14508:


[~tasanuma] Thanks for working on this.

We may want to move RouterMBean into a different implementation class (such as 
RouterCoreMetrics or RouterStateMetrics) instead of FederationMetrics, which 
seems misleading.

We should update the metrics section in HDFSRouterFederation.md#Metrics as 
well. I am fine with a new Jira for that, as it may need us to touch 
hadoop-common.

[~elgoiri] Could you help commit HDFS-14475?

> RBF: Clean-up and refactor UI components
> 
>
> Key: HDFS-14508
> URL: https://issues.apache.org/jira/browse/HDFS-14508
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: CR Hota
>Assignee: Takanobu Asanuma
>Priority: Minor
> Attachments: HDFS-14508-HDFS-13891.1.patch, 
> HDFS-14508-HDFS-13891.2.patch
>
>
> The Router UI has tags that are unused or incorrectly set. The code should 
> be cleaned up.
> One such example is 
> Path : 
> (\hadoop-hdfs-project\hadoop-hdfs-rbf\src\main\webapps\router\federationhealth.js)
> {code:java}
> {"name": "routerstat", "url": 
> "/jmx?qry=Hadoop:service=NameNode,name=NameNodeStatus"},{code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1608) Support Ozone Prefix ACLs in OM metadata table

2019-05-30 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1608?focusedWorklogId=251371=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-251371
 ]

ASF GitHub Bot logged work on HDDS-1608:


Author: ASF GitHub Bot
Created on: 30/May/19 22:43
Start Date: 30/May/19 22:43
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #875: HDDS-1608. 
Support Ozone Prefix ACLs in OM metadata table. Contribute…
URL: https://github.com/apache/hadoop/pull/875#issuecomment-497511495
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 84 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 2 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 60 | Maven dependency ordering for branch |
   | +1 | mvninstall | 675 | trunk passed |
   | +1 | compile | 284 | trunk passed |
   | +1 | checkstyle | 77 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 892 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 147 | trunk passed |
   | 0 | spotbugs | 296 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 489 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 18 | Maven dependency ordering for patch |
   | +1 | mvninstall | 509 | the patch passed |
   | +1 | compile | 273 | the patch passed |
   | +1 | cc | 273 | the patch passed |
   | +1 | javac | 273 | the patch passed |
   | +1 | checkstyle | 44 | The patch passed checkstyle in hadoop-hdds |
   | +1 | checkstyle | 46 | hadoop-ozone: The patch generated 0 new + 0 
unchanged - 2 fixed = 0 total (was 2) |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 670 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 159 | the patch passed |
   | +1 | findbugs | 513 | the patch passed |
   ||| _ Other Tests _ |
   | -1 | unit | 314 | hadoop-hdds in the patch failed. |
   | -1 | unit | 1552 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 60 | The patch does not generate ASF License warnings. |
   | | | 7079 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdds.scm.block.TestBlockManager |
   |   | hadoop.ozone.client.rpc.TestOzoneAtRestEncryption |
   |   | hadoop.ozone.client.rpc.TestBCSID |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClientWithRatis |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-875/4/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/875 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle cc |
   | uname | Linux c171e1df5d5b 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 33c62f8 |
   | Default Java | 1.8.0_212 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-875/4/artifact/out/patch-unit-hadoop-hdds.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-875/4/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-875/4/testReport/ |
   | Max. process+thread count | 4952 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/common hadoop-ozone/ozone-manager 
hadoop-ozone/integration-test U: hadoop-ozone |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-875/4/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 251371)
Time Spent: 1h 10m  (was: 1h)

> Support Ozone Prefix ACLs in OM metadata table
> --
>
> Key: HDDS-1608
> URL: https://issues.apache.org/jira/browse/HDDS-1608
> Project: Hadoop 

[jira] [Work logged] (HDDS-1540) Implement addAcl,removeAcl,setAcl,getAcl for Bucket

2019-05-30 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1540?focusedWorklogId=251373=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-251373
 ]

ASF GitHub Bot logged work on HDDS-1540:


Author: ASF GitHub Bot
Created on: 30/May/19 22:43
Start Date: 30/May/19 22:43
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #874: 
HDDS-1540. Implement addAcl,removeAcl,setAcl,getAcl for Bucket. Contributed by 
Ajay Kumar.
URL: https://github.com/apache/hadoop/pull/874#discussion_r289200410
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/BucketManagerImpl.java
 ##
 @@ -423,4 +428,205 @@ private void commitDeleteBucketInfoToOMDB(String 
dbBucketKey)
 volumeName, startBucket, bucketPrefix, maxNumOfBuckets);
 
   }
+
+  /**
+   * Add acl for Ozone object. Return true if acl is added successfully,
+   * else false.
+   *
+   * @param obj Ozone object for which acl should be added.
+   * @param acl ozone acl to be added.
+   * @throws IOException if there is an error.
+   */
+  @Override
+  public boolean addAcl(OzoneObj obj, OzoneAcl acl) throws IOException {
+Objects.requireNonNull(obj);
+Objects.requireNonNull(acl);
+if (!obj.getResourceType().equals(OzoneObj.ResourceType.BUCKET)) {
+  throw new IllegalArgumentException("Unexpected argument passed to " +
+  "BucketManager. OzoneObj type:" + obj.getResourceType());
+}
+String volume = obj.getVolumeName();
+String bucket = obj.getBucketName();
+metadataManager.getLock().acquireBucketLock(volume, bucket);
+try {
+  String dbBucketKey = metadataManager.getBucketKey(volume, bucket);
+  OmBucketInfo bucketInfo =
+  metadataManager.getBucketTable().get(dbBucketKey);
+  if (bucketInfo == null) {
+LOG.debug("Bucket:{}/{} does not exist", volume, bucket);
+throw new OMException("Bucket " + bucket + " is not found",
+BUCKET_NOT_FOUND);
+  }
+  List<OzoneAcl> list = bucketInfo.getAcls();
+  list.add(acl);
+  OmBucketInfo updatedBucket = OmBucketInfo.newBuilder()
+  .setVolumeName(bucketInfo.getVolumeName())
+  .setBucketName(bucketInfo.getBucketName())
+  .setStorageType(bucketInfo.getStorageType())
+  .setIsVersionEnabled(bucketInfo.getIsVersionEnabled())
+  .setCreationTime(bucketInfo.getCreationTime())
+  .setBucketEncryptionKey(bucketInfo.getEncryptionKeyInfo())
+  .addAllMetadata(bucketInfo.getMetadata())
+  .setAcls(list)
+  .build();
+
+  metadataManager.getBucketTable().put(dbBucketKey, updatedBucket);
 
 Review comment:
   Yes, in the HA path we shall not do a table put; we will use the cache and 
double buffer.
   For reference, have a look at HDDS-1551.
   
   I have opened HDDS-1618 to use the same code for both HA and non-HA. As 
this is a newly implemented OM request, it can follow that approach, so that 
we can avoid refactoring again.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 251373)
Time Spent: 1h 40m  (was: 1.5h)

> Implement addAcl,removeAcl,setAcl,getAcl  for Bucket
> 
>
> Key: HDDS-1540
> URL: https://issues.apache.org/jira/browse/HDDS-1540
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 40m
>  Remaining Estimate: 0h
>
> Implement addAcl,removeAcl,setAcl,getAcl  for Bucket



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1551) Implement Bucket Write Requests to use Cache and DoubleBuffer

2019-05-30 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1551?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16852423#comment-16852423
 ] 

Hudson commented on HDDS-1551:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #16637 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/16637/])
HDDS-1551. Implement Bucket Write Requests to use Cache and (github: rev 
c965f7f499011d4497547304d754b1085fdf79d4)
* (add) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/ratis/utils/package-info.java
* (add) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/bucket/package-info.java
* (add) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/bucket/OMBucketDeleteRequest.java
* (delete) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/response/OMBucketDeleteResponse.java
* (edit) hadoop-ozone/common/src/main/proto/OzoneManagerProtocol.proto
* (add) 
hadoop-ozone/ozone-manager/src/test/java/org/apache/hadoop/ozone/om/request/TestOMRequestUtils.java
* (add) 
hadoop-ozone/ozone-manager/src/test/java/org/apache/hadoop/ozone/om/response/bucket/package-info.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/BucketManagerImpl.java
* (add) 
hadoop-ozone/ozone-manager/src/test/java/org/apache/hadoop/ozone/om/response/bucket/TestOMBucketDeleteResponse.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/ratis/OzoneManagerDoubleBuffer.java
* (add) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/ratis/utils/OzoneManagerRatisUtils.java
* (add) 
hadoop-ozone/ozone-manager/src/test/java/org/apache/hadoop/ozone/om/response/bucket/TestOMBucketCreateResponse.java
* (add) 
hadoop-ozone/ozone-manager/src/test/java/org/apache/hadoop/ozone/om/request/bucket/TestOMBucketDeleteRequest.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/response/OMVolumeCreateResponse.java
* (add) 
hadoop-ozone/ozone-manager/src/test/java/org/apache/hadoop/ozone/om/request/bucket/package-info.java
* (add) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/bucket/OMBucketSetPropertyRequest.java
* (delete) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/response/OMBucketCreateResponse.java
* (add) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/package-info.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/response/OMVolumeDeleteResponse.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/protocolPB/OzoneManagerProtocolServerSideTranslatorPB.java
* (edit) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/om/TestOmMetrics.java
* (edit) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/om/TestOzoneManagerHA.java
* (edit) 
hadoop-ozone/ozone-manager/src/test/java/org/apache/hadoop/ozone/om/ratis/TestOzoneManagerDoubleBufferWithOMResponse.java
* (add) 
hadoop-ozone/ozone-manager/src/test/resources/mockito-extensions/org.mockito.plugins.MockMaker
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/ratis/OzoneManagerStateMachine.java
* (add) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/response/bucket/OMBucketDeleteResponse.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/ratis/OzoneManagerRatisServer.java
* (edit) 
hadoop-hdds/common/src/main/java/org/apache/hadoop/utils/db/cache/PartialTableCache.java
* (add) 
hadoop-ozone/ozone-manager/src/test/java/org/apache/hadoop/ozone/om/request/bucket/TestOMBucketSetPropertyRequest.java
* (add) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/response/bucket/OMBucketCreateResponse.java
* (edit) 
hadoop-ozone/ozone-manager/src/test/java/org/apache/hadoop/ozone/om/ratis/TestOzoneManagerDoubleBufferWithDummyResponse.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/S3BucketManagerImpl.java
* (edit) 
hadoop-ozone/ozone-manager/src/test/java/org/apache/hadoop/ozone/om/ratis/TestOzoneManagerRatisServer.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/protocolPB/OzoneManagerHARequestHandlerImpl.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OzoneManager.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/protocolPB/OzoneManagerHARequestHandler.java
* (edit) 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/protocol/OzoneManagerHAProtocol.java
* (add) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/response/bucket/package-info.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/response/OMClientResponse.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/BucketManager.java
* (add) 

[jira] [Updated] (HDFS-14521) Suppress setReplication logging.

2019-05-30 Thread Kihwal Lee (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14521?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kihwal Lee updated HDFS-14521:
--
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 3.1.3
   2.9.3
   3.2.1
   2.8.6
   3.3.0
   2.10.0
   Status: Resolved  (was: Patch Available)

> Suppress setReplication logging.
> 
>
> Key: HDFS-14521
> URL: https://issues.apache.org/jira/browse/HDFS-14521
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Kihwal Lee
>Assignee: Kihwal Lee
>Priority: Major
> Fix For: 2.10.0, 3.3.0, 2.8.6, 3.2.1, 2.9.3, 3.1.3
>
> Attachments: HDFS-14521-01.patch, HDFS-14521.patch
>
>
> Currently, processing of setReplication on standby NNs causes logging.
> {noformat}
> 2101-14-29 17:49:04,026 [Edit log tailer] INFO namenode.FSDirectory: 
> Increasing replication from 3 to 10 for xxx
> {noformat}
> This should be suppressed during edit replay.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1551) Implement Bucket Write Requests to use Cache and DoubleBuffer

2019-05-30 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1551?focusedWorklogId=251368=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-251368
 ]

ASF GitHub Bot logged work on HDDS-1551:


Author: ASF GitHub Bot
Created on: 30/May/19 22:31
Start Date: 30/May/19 22:31
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on issue #850: HDDS-1551. 
Implement Bucket Write Requests to use Cache and DoubleBuf…
URL: https://github.com/apache/hadoop/pull/850#issuecomment-497508851
 
 
   I have committed this to the trunk.
   Thank you, @hanishakoneru, for the review.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 251368)
Time Spent: 11h 40m  (was: 11.5h)

> Implement Bucket Write Requests to use Cache and DoubleBuffer
> -
>
> Key: HDDS-1551
> URL: https://issues.apache.org/jira/browse/HDDS-1551
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Manager
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 11h 40m
>  Remaining Estimate: 0h
>
> Implement bucket write requests to use the OM cache and double buffer.
> Also, the OM previously used the Ratis client to communicate with the Ratis 
> server; instead, use the Ratis server APIs.
>  
> This Jira will add the changes to implement the bucket operations. HA and 
> non-HA will have different code paths, but once all requests are implemented 
> there will be a single code path.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1551) Implement Bucket Write Requests to use Cache and DoubleBuffer

2019-05-30 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1551?focusedWorklogId=251367&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-251367
 ]

ASF GitHub Bot logged work on HDDS-1551:


Author: ASF GitHub Bot
Created on: 30/May/19 22:29
Start Date: 30/May/19 22:29
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on issue #850: HDDS-1551. 
Implement Bucket Write Requests to use Cache and DoubleBuf…
URL: https://github.com/apache/hadoop/pull/850#issuecomment-497505653
 
 
   Thank you @hanishakoneru for the review.
   CI has already run for this. The checkstyle issue in 
OzoneManagerRequestHandler.java is related to this PR; I will take care of it 
during commit.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 251367)
Time Spent: 11.5h  (was: 11h 20m)

> Implement Bucket Write Requests to use Cache and DoubleBuffer
> -
>
> Key: HDDS-1551
> URL: https://issues.apache.org/jira/browse/HDDS-1551
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Manager
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 11.5h
>  Remaining Estimate: 0h
>
> Implement bucket write requests to use the OM cache and double buffer.
> Previously the OM used the Ratis client to communicate with the Ratis 
> server; instead, use the Ratis server APIs directly.
>  
> This Jira adds the changes to implement the bucket operations. HA and 
> non-HA will have different code paths for now, but once all requests are 
> implemented they will share a single code path.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1551) Implement Bucket Write Requests to use Cache and DoubleBuffer

2019-05-30 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1551?focusedWorklogId=251366&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-251366
 ]

ASF GitHub Bot logged work on HDDS-1551:


Author: ASF GitHub Bot
Created on: 30/May/19 22:28
Start Date: 30/May/19 22:28
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #850: HDDS-1551. 
Implement Bucket Write Requests to use Cache and DoubleBuf…
URL: https://github.com/apache/hadoop/pull/850#issuecomment-497508214
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 29 | Docker mode activated. |
   | -1 | patch | 14 | https://github.com/apache/hadoop/pull/850 does not apply 
to trunk. Rebase required? Wrong Branch? See 
https://wiki.apache.org/hadoop/HowToContribute for help. |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-850/16/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/850 |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-850/16/console |
   | versions | git=2.7.4 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 251366)
Time Spent: 11h 20m  (was: 11h 10m)

> Implement Bucket Write Requests to use Cache and DoubleBuffer
> -
>
> Key: HDDS-1551
> URL: https://issues.apache.org/jira/browse/HDDS-1551
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Manager
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 11h 20m
>  Remaining Estimate: 0h
>
> Implement bucket write requests to use the OM cache and double buffer.
> Previously the OM used the Ratis client to communicate with the Ratis 
> server; instead, use the Ratis server APIs directly.
>  
> This Jira adds the changes to implement the bucket operations. HA and 
> non-HA will have different code paths for now, but once all requests are 
> implemented they will share a single code path.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1551) Implement Bucket Write Requests to use Cache and DoubleBuffer

2019-05-30 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1551?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-1551:
-
   Resolution: Fixed
Fix Version/s: 0.5.0
   Status: Resolved  (was: Patch Available)

> Implement Bucket Write Requests to use Cache and DoubleBuffer
> -
>
> Key: HDDS-1551
> URL: https://issues.apache.org/jira/browse/HDDS-1551
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Manager
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 11h 20m
>  Remaining Estimate: 0h
>
> Implement bucket write requests to use the OM cache and double buffer.
> Previously the OM used the Ratis client to communicate with the Ratis 
> server; instead, use the Ratis server APIs directly.
>  
> This Jira adds the changes to implement the bucket operations. HA and 
> non-HA will have different code paths for now, but once all requests are 
> implemented they will share a single code path.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1551) Implement Bucket Write Requests to use Cache and DoubleBuffer

2019-05-30 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1551?focusedWorklogId=251365&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-251365
 ]

ASF GitHub Bot logged work on HDDS-1551:


Author: ASF GitHub Bot
Created on: 30/May/19 22:28
Start Date: 30/May/19 22:28
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #850: 
HDDS-1551. Implement Bucket Write Requests to use Cache and DoubleBuf…
URL: https://github.com/apache/hadoop/pull/850
 
 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 251365)
Time Spent: 11h 10m  (was: 11h)

> Implement Bucket Write Requests to use Cache and DoubleBuffer
> -
>
> Key: HDDS-1551
> URL: https://issues.apache.org/jira/browse/HDDS-1551
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Manager
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 11h 10m
>  Remaining Estimate: 0h
>
> Implement bucket write requests to use the OM cache and double buffer.
> Previously the OM used the Ratis client to communicate with the Ratis 
> server; instead, use the Ratis server APIs directly.
>  
> This Jira adds the changes to implement the bucket operations. HA and 
> non-HA will have different code paths for now, but once all requests are 
> implemented they will share a single code path.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14203) Refactor OIV Delimited output entry building mechanism

2019-05-30 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14203?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16852415#comment-16852415
 ] 

Hadoop QA commented on HDFS-14203:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
27s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 27m 
 1s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
55s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
15m 21s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
1s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
10s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 45s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 35 new + 23 unchanged - 0 fixed = 58 total (was 23) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m  7s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  2m 
38s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs generated 1 new + 0 
unchanged - 0 fixed = 1 total (was 0) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
0s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}129m 22s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
40s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}201m 59s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-hdfs-project/hadoop-hdfs |
|  |  Call to method of static java.text.DateFormat in 
org.apache.hadoop.hdfs.tools.offlineImageViewer.PBImageDelimitedTextWriter.formatDate(long)
  At PBImageDelimitedTextWriter.java:java.text.DateFormat in 
org.apache.hadoop.hdfs.tools.offlineImageViewer.PBImageDelimitedTextWriter.formatDate(long)
  At PBImageDelimitedTextWriter.java:[line 52] |
| Failed junit tests | 
hadoop.hdfs.server.balancer.TestBalancerWithMultipleNameNodes |
|   | hadoop.hdfs.server.namenode.TestFSImage |
|   | hadoop.hdfs.server.blockmanagement.TestBlockTokenWithDFS |
|   | hadoop.hdfs.tools.offlineImageViewer.TestOfflineImageViewer |
|   | hadoop.hdfs.web.TestWebHdfsTimeouts |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:bdbca0e |
| JIRA Issue | HDFS-14203 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12970367/HDFS-14203.001.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 2c95da8ef2d1 4.4.0-143-generic #169~14.04.2-Ubuntu 

[jira] [Work logged] (HDDS-1551) Implement Bucket Write Requests to use Cache and DoubleBuffer

2019-05-30 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1551?focusedWorklogId=251362&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-251362
 ]

ASF GitHub Bot logged work on HDDS-1551:


Author: ASF GitHub Bot
Created on: 30/May/19 22:23
Start Date: 30/May/19 22:23
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on issue #850: HDDS-1551. 
Implement Bucket Write Requests to use Cache and DoubleBuf…
URL: https://github.com/apache/hadoop/pull/850#issuecomment-497505653
 
 
   Thank you @hanishakoneru for the review. Below is the latest CI run.
   CI has already run for this. The checkstyle issue in 
OzoneManagerRequestHandler.java is related to this PR; I will take care of it 
during commit.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 251362)
Time Spent: 11h  (was: 10h 50m)

> Implement Bucket Write Requests to use Cache and DoubleBuffer
> -
>
> Key: HDDS-1551
> URL: https://issues.apache.org/jira/browse/HDDS-1551
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Manager
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 11h
>  Remaining Estimate: 0h
>
> Implement bucket write requests to use the OM cache and double buffer.
> Previously the OM used the Ratis client to communicate with the Ratis 
> server; instead, use the Ratis server APIs directly.
>  
> This Jira adds the changes to implement the bucket operations. HA and 
> non-HA will have different code paths for now, but once all requests are 
> implemented they will share a single code path.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14521) Suppress setReplication logging.

2019-05-30 Thread Daryn Sharp (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14521?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16852410#comment-16852410
 ] 

Daryn Sharp commented on HDFS-14521:


+1

> Suppress setReplication logging.
> 
>
> Key: HDFS-14521
> URL: https://issues.apache.org/jira/browse/HDFS-14521
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Kihwal Lee
>Assignee: Kihwal Lee
>Priority: Major
> Attachments: HDFS-14521-01.patch, HDFS-14521.patch
>
>
> Currently, processing of setReplication on standby NNs causes logging.
> {noformat}
> 2101-14-29 17:49:04,026 [Edit log tailer] INFO namenode.FSDirectory: 
> Increasing replication from 3 to 10 for xxx
> {noformat}
> This should be suppressed during edit replay.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1551) Implement Bucket Write Requests to use Cache and DoubleBuffer

2019-05-30 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1551?focusedWorklogId=251360&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-251360
 ]

ASF GitHub Bot logged work on HDDS-1551:


Author: ASF GitHub Bot
Created on: 30/May/19 22:17
Start Date: 30/May/19 22:17
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on issue #850: HDDS-1551. 
Implement Bucket Write Requests to use Cache and DoubleBuf…
URL: https://github.com/apache/hadoop/pull/850#issuecomment-497505653
 
 
   Thank you @hanishakoneru for the review.
   CI has already run for this. The checkstyle issue in 
OzoneManagerRequestHandler.java is related to this PR; I will take care of it 
during commit.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 251360)
Time Spent: 10h 50m  (was: 10h 40m)

> Implement Bucket Write Requests to use Cache and DoubleBuffer
> -
>
> Key: HDDS-1551
> URL: https://issues.apache.org/jira/browse/HDDS-1551
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Manager
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10h 50m
>  Remaining Estimate: 0h
>
> Implement bucket write requests to use the OM cache and double buffer.
> Previously the OM used the Ratis client to communicate with the Ratis 
> server; instead, use the Ratis server APIs directly.
>  
> This Jira adds the changes to implement the bucket operations. HA and 
> non-HA will have different code paths for now, but once all requests are 
> implemented they will share a single code path.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1551) Implement Bucket Write Requests to use Cache and DoubleBuffer

2019-05-30 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1551?focusedWorklogId=251359&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-251359
 ]

ASF GitHub Bot logged work on HDDS-1551:


Author: ASF GitHub Bot
Created on: 30/May/19 22:15
Start Date: 30/May/19 22:15
Worklog Time Spent: 10m 
  Work Description: hanishakoneru commented on issue #850: HDDS-1551. 
Implement Bucket Write Requests to use Cache and DoubleBuf…
URL: https://github.com/apache/hadoop/pull/850#issuecomment-497505122
 
 
   Thank you @bharatviswa504. +1 pending CI.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 251359)
Time Spent: 10h 40m  (was: 10.5h)

> Implement Bucket Write Requests to use Cache and DoubleBuffer
> -
>
> Key: HDDS-1551
> URL: https://issues.apache.org/jira/browse/HDDS-1551
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Manager
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10h 40m
>  Remaining Estimate: 0h
>
> Implement bucket write requests to use the OM cache and double buffer.
> Previously the OM used the Ratis client to communicate with the Ratis 
> server; instead, use the Ratis server APIs directly.
>  
> This Jira adds the changes to implement the bucket operations. HA and 
> non-HA will have different code paths for now, but once all requests are 
> implemented they will share a single code path.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1618) Merge code for HA and Non-HA OM requests for bucket

2019-05-30 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1618?focusedWorklogId=251358&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-251358
 ]

ASF GitHub Bot logged work on HDDS-1618:


Author: ASF GitHub Bot
Created on: 30/May/19 22:14
Start Date: 30/May/19 22:14
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #877: 
HDDS-1618. Merge code for HA and Non-HA OM write type requests for bucket
URL: https://github.com/apache/hadoop/pull/877
 
 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 251358)
Time Spent: 10m
Remaining Estimate: 0h

> Merge code for HA and Non-HA OM requests for bucket
> ---
>
> Key: HDDS-1618
> URL: https://issues.apache.org/jira/browse/HDDS-1618
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> In this Jira, we shall use the new code added in HDDS-1551 for the non-HA flow.
>  
> This Jira modifies the bucket requests only; the remaining request types 
> will be handled in subsequent Jiras.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1618) Merge code for HA and Non-HA OM requests for bucket

2019-05-30 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1618?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-1618:
-
Labels: pull-request-available  (was: )

> Merge code for HA and Non-HA OM requests for bucket
> ---
>
> Key: HDDS-1618
> URL: https://issues.apache.org/jira/browse/HDDS-1618
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>
> In this Jira, we shall use the new code added in HDDS-1551 for the non-HA flow.
>  
> This Jira modifies the bucket requests only; the remaining request types 
> will be handled in subsequent Jiras.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1446) Grpc channels are leaked in XceiverClientGrpc

2019-05-30 Thread Xiaoyu Yao (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1446?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16852407#comment-16852407
 ] 

Xiaoyu Yao commented on HDDS-1446:
--

Hit a similar issue when running Terasort on ozfs with the following stack.

 

{code}

May 30, 2019 10:07:32 PM 
org.apache.ratis.thirdparty.io.grpc.internal.ManagedChannelOrphanWrapper$ManagedChannelReference
 cleanQueue
SEVERE: *~*~*~ Channel ManagedChannelImpl{logId=1, target=172.20.0.2:9859} was 
not shutdown properly!!! ~*~*~*
 Make sure to call shutdown()/shutdownNow() and wait until awaitTermination() 
returns true.
java.lang.RuntimeException: ManagedChannel allocation site
 at 
org.apache.ratis.thirdparty.io.grpc.internal.ManagedChannelOrphanWrapper$ManagedChannelReference.<init>(ManagedChannelOrphanWrapper.java:103)
 at 
org.apache.ratis.thirdparty.io.grpc.internal.ManagedChannelOrphanWrapper.<init>(ManagedChannelOrphanWrapper.java:53)
 at 
org.apache.ratis.thirdparty.io.grpc.internal.ManagedChannelOrphanWrapper.<init>(ManagedChannelOrphanWrapper.java:44)
 at 
org.apache.ratis.thirdparty.io.grpc.internal.AbstractManagedChannelImplBuilder.build(AbstractManagedChannelImplBuilder.java:411)
 at 
org.apache.hadoop.hdds.scm.XceiverClientGrpc.connectToDatanode(XceiverClientGrpc.java:165)
 at 
org.apache.hadoop.hdds.scm.XceiverClientGrpc.reconnect(XceiverClientGrpc.java:389)
 at 
org.apache.hadoop.hdds.scm.XceiverClientGrpc.sendCommandAsync(XceiverClientGrpc.java:340)
 at 
org.apache.hadoop.hdds.scm.XceiverClientGrpc.sendCommandWithRetry(XceiverClientGrpc.java:268)
 at 
org.apache.hadoop.hdds.scm.XceiverClientGrpc.sendCommandWithTraceIDAndRetry(XceiverClientGrpc.java:236)
 at 
org.apache.hadoop.hdds.scm.XceiverClientGrpc.sendCommand(XceiverClientGrpc.java:210)
 at 
org.apache.hadoop.hdds.scm.storage.ContainerProtocolCalls.getBlock(ContainerProtocolCalls.java:119)
 at 
org.apache.hadoop.ozone.client.io.KeyInputStream.getFromOmKeyInfo(KeyInputStream.java:302)
 at org.apache.hadoop.ozone.client.rpc.RpcClient.getKey(RpcClient.java:676)
 at org.apache.hadoop.ozone.client.OzoneBucket.readKey(OzoneBucket.java:323)
 at 
org.apache.hadoop.fs.ozone.BasicOzoneClientAdapterImpl.createInputStream(BasicOzoneClientAdapterImpl.java:152)
 at 
org.apache.hadoop.fs.ozone.BasicOzoneFileSystem.open(BasicOzoneFileSystem.java:213)
 at org.apache.hadoop.fs.FileSystem.open(FileSystem.java:768)
 at 
org.apache.hadoop.examples.terasort.TeraInputFormat$TeraRecordReader.initialize(TeraInputFormat.java:226)
 at 
org.apache.hadoop.examples.terasort.TeraInputFormat$1.run(TeraInputFormat.java:152)

{code}
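
The SEVERE message above already names the remedy: every ManagedChannel must be shut down and awaited before it is garbage collected. A minimal sketch of that discipline (illustrative only; it uses the plain io.grpc package and a hard-coded address taken from the log, not XceiverClientGrpc's actual code, which builds channels via the ratis-shaded grpc):

{code:java}
import io.grpc.ManagedChannel;
import io.grpc.ManagedChannelBuilder;
import java.util.concurrent.TimeUnit;

class ChannelShutdownSketch {
  public static void main(String[] args) throws InterruptedException {
    ManagedChannel channel = ManagedChannelBuilder
        .forAddress("172.20.0.2", 9859)  // datanode address from the log above
        .usePlaintext()
        .build();
    try {
      // ... issue RPCs on the channel ...
    } finally {
      channel.shutdown();  // begin orderly shutdown
      if (!channel.awaitTermination(5, TimeUnit.SECONDS)) {
        channel.shutdownNow();  // force-cancel anything still in flight
        channel.awaitTermination(5, TimeUnit.SECONDS);
      }
    }
  }
}
{code}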

> Grpc channels are leaked in XceiverClientGrpc
> -
>
> Key: HDDS-1446
> URL: https://issues.apache.org/jira/browse/HDDS-1446
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Client
>Affects Versions: 0.3.0
>Reporter: Mukul Kumar Singh
>Priority: Major
>
> Grpc Channels are leaked in MiniOzoneChaosCluster runs.
> {code}
> SEVERE: *~*~*~ Channel ManagedChannelImpl{logId=522, 
> target=10.200.4.160:52415} was not shutdown properly!!! ~*~*~*
> Make sure to call shutdown()/shutdownNow() and wait until 
> awaitTermination() returns true.
> java.lang.RuntimeException: ManagedChannel allocation site
> at 
> org.apache.ratis.thirdparty.io.grpc.internal.ManagedChannelOrphanWrapper$ManagedChannelReference.<init>(ManagedChannelOrphanWrapper.java:103)
> at 
> org.apache.ratis.thirdparty.io.grpc.internal.ManagedChannelOrphanWrapper.<init>(ManagedChannelOrphanWrapper.java:53)
> at 
> org.apache.ratis.thirdparty.io.grpc.internal.ManagedChannelOrphanWrapper.<init>(ManagedChannelOrphanWrapper.java:44)
> at 
> org.apache.ratis.thirdparty.io.grpc.internal.AbstractManagedChannelImplBuilder.build(AbstractManagedChannelImplBuilder.java:411)
> at 
> org.apache.hadoop.hdds.scm.XceiverClientGrpc.connectToDatanode(XceiverClientGrpc.java:165)
> at 
> org.apache.hadoop.hdds.scm.XceiverClientGrpc.reconnect(XceiverClientGrpc.java:389)
> at 
> org.apache.hadoop.hdds.scm.XceiverClientGrpc.sendCommandAsync(XceiverClientGrpc.java:340)
> at 
> org.apache.hadoop.hdds.scm.XceiverClientGrpc.sendCommandWithRetry(XceiverClientGrpc.java:268)
> at 
> org.apache.hadoop.hdds.scm.XceiverClientGrpc.sendCommandWithTraceIDAndRetry(XceiverClientGrpc.java:236)
> at 
> org.apache.hadoop.hdds.scm.XceiverClientGrpc.sendCommand(XceiverClientGrpc.java:210)
> at 
> org.apache.hadoop.hdds.scm.storage.ContainerProtocolCalls.getBlock(ContainerProtocolCalls.java:119)
> at 
> org.apache.hadoop.ozone.client.io.KeyInputStream.getFromOmKeyInfo(KeyInputStream.java:302)
> at 
> org.apache.hadoop.ozone.client.rpc.RpcClient.createInputStream(RpcClient.java:993)
> at 
> 

[jira] [Created] (HDDS-1618) Merge code for HA and Non-HA OM requests for bucket

2019-05-30 Thread Bharat Viswanadham (JIRA)
Bharat Viswanadham created HDDS-1618:


 Summary: Merge code for HA and Non-HA OM requests for bucket
 Key: HDDS-1618
 URL: https://issues.apache.org/jira/browse/HDDS-1618
 Project: Hadoop Distributed Data Store
  Issue Type: Sub-task
Reporter: Bharat Viswanadham
Assignee: Bharat Viswanadham


In this Jira, we shall use the new code added in HDDS-1551 for the non-HA flow.

 

This Jira modifies the bucket requests only; the remaining request types will 
be handled in subsequent Jiras.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1609) Remove hard coded uid from Ozone docker image

2019-05-30 Thread Eric Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1609?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16852406#comment-16852406
 ] 

Eric Yang commented on HDDS-1609:
-

According to /usr/share/doc/setup-*/uidgid, the reserved uid for apache is 48.  
This could be a good candidate for the default user without creating another 
service user that can conflict with host users.  The home directory is set to 
/var/www, and www is a public read-only directory.

> Remove hard coded uid from Ozone docker image
> -
>
> Key: HDDS-1609
> URL: https://issues.apache.org/jira/browse/HDDS-1609
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Eric Yang
>Priority: Major
>
> The hadoop-runner image is hard coded to [USER 
> hadoop|https://github.com/apache/hadoop/blob/docker-hadoop-runner-jdk11/Dockerfile#L45],
>  and user hadoop is hard coded to uid 1000.  This arrangement complicates 
> development environments where the host user's uid differs from 1000: data 
> written to external bind mount locations is owned by uid 1000, which can 
> prevent the development environment from cleaning up test data.
> Docker documentation states that "The best way to prevent 
> privilege-escalation attacks from within a container is to configure your 
> container’s applications to run as unprivileged users."  From the Ozone 
> architecture point of view, there is no reason for the Ozone daemon to 
> require a privileged or hard coded user.
> h3. Solution 1
> It would be best to support running the docker container as the host user 
> to reduce friction.  The user should be able to run:
> {code}
> docker run -u $(id -u):$(id -g) ...
> {code}
> or, in a docker-compose file:
> {code}
> user: "${UID}:${GID}"
> {code}
> By doing this, the user will be nameless in the docker container.  Some 
> commands may warn that the user does not have a name.  This can be resolved 
> by mounting /etc/passwd, or a file that looks like /etc/passwd, containing 
> the host user entry.
> h3. Solution 2
> Move the hard coded user into the range < 200.  The default linux profile 
> reserves service users < 200 and gives them a umask that keeps data private 
> to the service user, or group writable if the service shares a group with 
> other service users.  Register the service user with Linux vendors to 
> ensure there is a reserved uid for the Hadoop user, or pick one that works 
> for Hadoop.  This is a longer route to pursue, and may not be fruitful.
> h3. Solution 3
> Default the docker image to have the sssd client installed.  This will 
> allow the docker image to see host-level names by binding the sssd socket.  
> The instructions for doing this are on the [Hadoop website| 
> https://hadoop.apache.org/docs/r3.1.2/hadoop-yarn/hadoop-yarn-site/DockerContainers.html#User_Management_in_Docker_Container].
> This approach requires the host system to have sssd installed; for 
> production systems, there is a 99% chance that sssd is installed.
> We may want to support a combination of solutions 1 and 3 to do this properly.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1613) Opening of rocksDB in datanode fails with "No locks available"

2019-05-30 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1613?focusedWorklogId=251347&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-251347
 ]

ASF GitHub Bot logged work on HDDS-1613:


Author: ASF GitHub Bot
Created on: 30/May/19 21:58
Start Date: 30/May/19 21:58
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #876: HDDS-1613. 
Opening of rocksDB in datanode fails with No locks available. Contributed by 
Mukul Kumar Singh.
URL: https://github.com/apache/hadoop/pull/876#issuecomment-497500840
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 26 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 1 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 515 | trunk passed |
   | +1 | compile | 282 | trunk passed |
   | +1 | checkstyle | 78 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 973 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 158 | trunk passed |
   | 0 | spotbugs | 332 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 547 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 496 | the patch passed |
   | +1 | compile | 293 | the patch passed |
   | +1 | javac | 293 | the patch passed |
   | +1 | checkstyle | 85 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 731 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 161 | the patch passed |
   | +1 | findbugs | 745 | the patch passed |
   ||| _ Other Tests _ |
   | -1 | unit | 169 | hadoop-hdds in the patch failed. |
   | -1 | unit | 1240 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 84 | The patch does not generate ASF License warnings. |
   | | | 6741 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.container.ozoneimpl.TestOzoneContainer |
   |   | hadoop.hdds.scm.pipeline.TestSCMPipelineManager |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-876/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/876 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux f963c23aace1 4.4.0-139-generic #165~14.04.1-Ubuntu SMP Wed 
Oct 31 10:55:11 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / f1552f6 |
   | Default Java | 1.8.0_212 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-876/1/artifact/out/patch-unit-hadoop-hdds.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-876/1/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-876/1/testReport/ |
   | Max. process+thread count | 4854 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/container-service U: 
hadoop-hdds/container-service |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-876/1/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 251347)
Time Spent: 20m  (was: 10m)

> Opening of rocksDB in datanode fails with "No locks available"
> --
>
> Key: HDDS-1613
> URL: https://issues.apache.org/jira/browse/HDDS-1613
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Client
>Affects Versions: 0.4.0
>Reporter: Mukul Kumar Singh
>Assignee: Mukul Kumar Singh
>Priority: Major
>  Labels: MiniOzoneChaosCluster, pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> 

[jira] [Commented] (HDDS-1609) Remove hard coded uid from Ozone docker image

2019-05-30 Thread Eric Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1609?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16852391#comment-16852391
 ] 

Eric Yang commented on HDDS-1609:
-

{code:java}
$ docker run -it -v /tmp/t1:/tmp/t1 -u $(id -u):$(id -g) apache/hadoop-runner 
bash
docker: Error response from daemon: OCI runtime create failed: 
container_linux.go:345: starting container process caused "chdir to cwd 
(\"/opt/hadoop\") set in config.json failed: permission denied": unknown.
{code}
Without the fix, the host user may not have access to the /opt/hadoop directory.

> Remove hard coded uid from Ozone docker image
> -
>
> Key: HDDS-1609
> URL: https://issues.apache.org/jira/browse/HDDS-1609
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Eric Yang
>Priority: Major
>
> The hadoop-runner image is hard coded to [USER 
> hadoop|https://github.com/apache/hadoop/blob/docker-hadoop-runner-jdk11/Dockerfile#L45],
>  and user hadoop is hard coded to uid 1000.  This arrangement complicates 
> development environments where the host user's uid differs from 1000: data 
> written to external bind mount locations is owned by uid 1000, which can 
> prevent the development environment from cleaning up test data.
> Docker documentation states that "The best way to prevent 
> privilege-escalation attacks from within a container is to configure your 
> container’s applications to run as unprivileged users."  From the Ozone 
> architecture point of view, there is no reason for the Ozone daemon to 
> require a privileged or hard coded user.
> h3. Solution 1
> It would be best to support running the docker container as the host user 
> to reduce friction.  The user should be able to run:
> {code}
> docker run -u $(id -u):$(id -g) ...
> {code}
> or, in a docker-compose file:
> {code}
> user: "${UID}:${GID}"
> {code}
> By doing this, the user will be nameless in the docker container.  Some 
> commands may warn that the user does not have a name.  This can be resolved 
> by mounting /etc/passwd, or a file that looks like /etc/passwd, containing 
> the host user entry.
> h3. Solution 2
> Move the hard coded user into the range < 200.  The default linux profile 
> reserves service users < 200 and gives them a umask that keeps data private 
> to the service user, or group writable if the service shares a group with 
> other service users.  Register the service user with Linux vendors to 
> ensure there is a reserved uid for the Hadoop user, or pick one that works 
> for Hadoop.  This is a longer route to pursue, and may not be fruitful.
> h3. Solution 3
> Default the docker image to have the sssd client installed.  This will 
> allow the docker image to see host-level names by binding the sssd socket.  
> The instructions for doing this are on the [Hadoop website| 
> https://hadoop.apache.org/docs/r3.1.2/hadoop-yarn/hadoop-yarn-site/DockerContainers.html#User_Management_in_Docker_Container].
> This approach requires the host system to have sssd installed; for 
> production systems, there is a 99% chance that sssd is installed.
> We may want to support a combination of solutions 1 and 3 to do this properly.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1608) Support Ozone Prefix ACLs in OM metadata table

2019-05-30 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1608?focusedWorklogId=251343&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-251343
 ]

ASF GitHub Bot logged work on HDDS-1608:


Author: ASF GitHub Bot
Created on: 30/May/19 21:46
Start Date: 30/May/19 21:46
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #875: HDDS-1608. 
Support Ozone Prefix ACLs in OM metadata table. Contribute…
URL: https://github.com/apache/hadoop/pull/875#issuecomment-497497632
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 56 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 1 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 18 | Maven dependency ordering for branch |
   | +1 | mvninstall | 573 | trunk passed |
   | +1 | compile | 258 | trunk passed |
   | +1 | checkstyle | 71 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 814 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 155 | trunk passed |
   | 0 | spotbugs | 297 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 492 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 16 | Maven dependency ordering for patch |
   | +1 | mvninstall | 481 | the patch passed |
   | +1 | compile | 291 | the patch passed |
   | +1 | cc | 291 | the patch passed |
   | +1 | javac | 291 | the patch passed |
   | -0 | checkstyle | 33 | hadoop-ozone: The patch generated 2 new + 0 
unchanged - 0 fixed = 2 total (was 0) |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 660 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 154 | the patch passed |
   | +1 | findbugs | 497 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 230 | hadoop-hdds in the patch passed. |
   | -1 | unit | 1741 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 66 | The patch does not generate ASF License warnings. |
   | | | 6823 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.client.rpc.TestOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestSecureOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestOzoneAtRestEncryption |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClientWithRatis |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-875/3/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/875 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle cc |
   | uname | Linux d8f1c068b93d 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / f1552f6 |
   | Default Java | 1.8.0_212 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-875/3/artifact/out/diff-checkstyle-hadoop-ozone.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-875/3/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-875/3/testReport/ |
   | Max. process+thread count | 4364 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/common hadoop-ozone/ozone-manager U: 
hadoop-ozone |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-875/3/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 251343)
Time Spent: 1h  (was: 50m)

> Support Ozone Prefix ACLs in OM metadata table
> --
>
> Key: HDDS-1608
> URL: https://issues.apache.org/jira/browse/HDDS-1608
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>

[jira] [Created] (HDDS-1617) Restructure the code layout for Ozone Manager

2019-05-30 Thread Anu Engineer (JIRA)
Anu Engineer created HDDS-1617:
--

 Summary: Restructure the code layout for Ozone Manager
 Key: HDDS-1617
 URL: https://issues.apache.org/jira/browse/HDDS-1617
 Project: Hadoop Distributed Data Store
  Issue Type: Improvement
  Components: Ozone Manager
Reporter: Anu Engineer
Assignee: Anu Engineer


The Ozone Manager has a flat structure that deals with a lot of specific 
functions. This Jira proposes to refactor the Ozone Manager's code base and 
move functionality into function-specific packages.

 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1458) Create a maven profile to run fault injection tests

2019-05-30 Thread Eric Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1458?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16852376#comment-16852376
 ] 

Eric Yang commented on HDDS-1458:
-

[~elek] 

Patch 14 keeps the ozoneblockade files in dist.  There is a separate set of 
compose files which runs in maven verify using the ozone image.  This helps to 
keep the hadoop-runner and apache/ozone images separated, and both can be 
tested.

{code}os.path.dirname(os.path.dirname(os.path.realpath(__file__))){code}

os.path.realpath has some limitations with [recursive 
symlinks|https://bugs.python.org/issue11397], directory prefixes starting with 
~, and [changed directories|https://bugs.python.org/issue24670].  We cannot 
guarantee that the user doesn't create a symlink to OZONE_HOME, nor can we 
guarantee that the user doesn't expand the ozone tarball in a home directory 
containing symlinks.  Pytest uses os.chdir to create a temp directory for 
report generation.  The chance of running into a problem is much higher using 
os.path.realpath(__file__) in permutations that were not thought out.

Many of these issues are only addressed in Python 3.4+, and given that we are 
working with an older version of python because of pytest and blockade, the 
fixes are not available in python 2.7.  This is my reasoning for using 
os.getcwd() as the OZONE_HOME reference.  I admit that getcwd() may 
inconvenience individuals who have already developed the habit of running the 
python code from the tests/blockade directory.  However, that was never in the 
documentation, and there is a shorter route: use mvn clean verify -Pit to run 
the tests.  May I suggest we keep using getcwd() until we can move to newer 
versions of python, pytest and blockade?

> Create a maven profile to run fault injection tests
> ---
>
> Key: HDDS-1458
> URL: https://issues.apache.org/jira/browse/HDDS-1458
> Project: Hadoop Distributed Data Store
>  Issue Type: Test
>Reporter: Eric Yang
>Assignee: Eric Yang
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.4.1
>
> Attachments: HDDS-1458.001.patch, HDDS-1458.002.patch, 
> HDDS-1458.003.patch, HDDS-1458.004.patch, HDDS-1458.005.patch, 
> HDDS-1458.006.patch, HDDS-1458.007.patch, HDDS-1458.008.patch, 
> HDDS-1458.009.patch, HDDS-1458.010.patch, HDDS-1458.011.patch, 
> HDDS-1458.012.patch, HDDS-1458.013.patch, HDDS-1458.014.patch
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> Some fault injection tests have been written using blockade.  It would be 
> nice to have the ability to start docker compose, exercise the blockade test 
> cases against Ozone docker containers, and generate reports.  These are 
> optional integration tests to catch race conditions and fault tolerance 
> defects. 
> We can introduce a profile with id: it (short for integration tests).  This 
> will launch docker compose via maven-exec-plugin and run blockade to simulate 
> container failures and timeouts.
> Usage command:
> {code}
> mvn clean verify -Pit
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1608) Support Ozone Prefix ACLs in OM metadata table

2019-05-30 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1608?focusedWorklogId=251327&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-251327
 ]

ASF GitHub Bot logged work on HDDS-1608:


Author: ASF GitHub Bot
Created on: 30/May/19 21:25
Start Date: 30/May/19 21:25
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #875: HDDS-1608. 
Support Ozone Prefix ACLs in OM metadata table. Contribute…
URL: https://github.com/apache/hadoop/pull/875#issuecomment-497491781
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 29 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 1 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 12 | Maven dependency ordering for branch |
   | +1 | mvninstall | 541 | trunk passed |
   | +1 | compile | 283 | trunk passed |
   | +1 | checkstyle | 89 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 908 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 161 | trunk passed |
   | 0 | spotbugs | 304 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 495 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 27 | Maven dependency ordering for patch |
   | +1 | mvninstall | 502 | the patch passed |
   | +1 | compile | 274 | the patch passed |
   | +1 | cc | 274 | the patch passed |
   | +1 | javac | 274 | the patch passed |
   | -0 | checkstyle | 38 | hadoop-ozone: The patch generated 2 new + 0 
unchanged - 0 fixed = 2 total (was 0) |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 656 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 148 | the patch passed |
   | +1 | findbugs | 494 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 238 | hadoop-hdds in the patch passed. |
   | -1 | unit | 1276 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 46 | The patch does not generate ASF License warnings. |
   | | | 6423 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.client.rpc.TestOzoneRpcClientWithRatis |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestSecureOzoneRpcClient |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-875/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/875 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle cc |
   | uname | Linux 5b1c3b32eed0 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / f1552f6 |
   | Default Java | 1.8.0_212 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-875/2/artifact/out/diff-checkstyle-hadoop-ozone.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-875/2/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-875/2/testReport/ |
   | Max. process+thread count | 4569 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/common hadoop-ozone/ozone-manager U: 
hadoop-ozone |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-875/2/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 251327)
Time Spent: 50m  (was: 40m)

> Support Ozone Prefix ACLs in OM metadata table
> --
>
> Key: HDDS-1608
> URL: https://issues.apache.org/jira/browse/HDDS-1608
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>

[jira] [Work logged] (HDDS-1608) Support Ozone Prefix ACLs in OM metadata table

2019-05-30 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1608?focusedWorklogId=251322&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-251322
 ]

ASF GitHub Bot logged work on HDDS-1608:


Author: ASF GitHub Bot
Created on: 30/May/19 21:12
Start Date: 30/May/19 21:12
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #875: HDDS-1608. 
Support Ozone Prefix ACLs in OM metadata table. Contribute…
URL: https://github.com/apache/hadoop/pull/875#issuecomment-497488070
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 31 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 1 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 19 | Maven dependency ordering for branch |
   | +1 | mvninstall | 583 | trunk passed |
   | +1 | compile | 273 | trunk passed |
   | +1 | checkstyle | 77 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 921 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 154 | trunk passed |
   | 0 | spotbugs | 304 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 500 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 20 | Maven dependency ordering for patch |
   | +1 | mvninstall | 479 | the patch passed |
   | +1 | compile | 277 | the patch passed |
   | +1 | cc | 277 | the patch passed |
   | +1 | javac | 277 | the patch passed |
   | -0 | checkstyle | 41 | hadoop-ozone: The patch generated 2 new + 0 
unchanged - 0 fixed = 2 total (was 0) |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 725 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 149 | the patch passed |
   | +1 | findbugs | 542 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 234 | hadoop-hdds in the patch passed. |
   | -1 | unit | 1446 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 49 | The patch does not generate ASF License warnings. |
   | | | 6747 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.ozone.client.rpc.TestOzoneClientRetriesOnException |
   |   | hadoop.ozone.client.rpc.TestBlockOutputStream |
   |   | hadoop.ozone.client.rpc.TestCommitWatcher |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-875/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/875 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle cc |
   | uname | Linux 29913530280b 4.4.0-143-generic #169~14.04.2-Ubuntu SMP Wed 
Feb 13 15:00:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / f1552f6 |
   | Default Java | 1.8.0_212 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-875/1/artifact/out/diff-checkstyle-hadoop-ozone.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-875/1/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-875/1/testReport/ |
   | Max. process+thread count | 5355 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/common hadoop-ozone/ozone-manager U: 
hadoop-ozone |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-875/1/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 251322)
Time Spent: 40m  (was: 0.5h)

> Support Ozone Prefix ACLs in OM metadata table
> --
>
> Key: HDDS-1608
> URL: https://issues.apache.org/jira/browse/HDDS-1608
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao

[jira] [Commented] (HDFS-14521) Suppress setReplication logging.

2019-05-30 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14521?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16852364#comment-16852364
 ] 

Hadoop QA commented on HDFS-14521:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
39s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
3m  7s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
2s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
47s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 15s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 77m  2s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
31s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}120m 54s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.server.datanode.checker.TestThrottledAsyncCheckerTimeout |
|   | hadoop.hdfs.web.TestWebHdfsTimeouts |
|   | hadoop.hdfs.tools.TestDFSZKFailoverController |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:bdbca0e |
| JIRA Issue | HDFS-14521 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12970370/HDFS-14521-01.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 733a0cf6bf4e 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / f1552f6 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_212 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/26875/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/26875/testReport/ |
| Max. process+thread count | 4761 (vs. ulimit of 

[jira] [Commented] (HDFS-14497) Write lock held by metasave impact following RPC processing

2019-05-30 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14497?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16852354#comment-16852354
 ] 

Hudson commented on HDFS-14497:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #16636 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/16636/])
HDFS-14497. Write lock held by metasave impact following RPC processing. 
(weichiu: rev 33c62f8f4e94442825fe286c2b18518925d980e6)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestMetaSave.java


> Write lock held by metasave impact following RPC processing
> ---
>
> Key: HDFS-14497
> URL: https://issues.apache.org/jira/browse/HDFS-14497
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Reporter: He Xiaoqiao
>Assignee: He Xiaoqiao
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HDFS-14497.001.patch
>
>
> NameNode metasave currently holds the global write lock, so subsequent RPC 
> read/write requests or internal NameNode threads can be paused if they try to 
> acquire the global read/write lock and must wait until metasave releases it.
> I propose changing the write lock to a read lock so that some read requests 
> can be processed normally. Allowing reads in parallel should not change the 
> information that metasave tries to collect.
> Actually, we need to ensure that only one thread executes metaSave at a time; 
> otherwise the output streams could hit exceptions, especially when both 
> streams hold the same file handle or share the same output stream.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14508) RBF: Clean-up and refactor UI components

2019-05-30 Thread JIRA


[ 
https://issues.apache.org/jira/browse/HDFS-14508?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16852352#comment-16852352
 ] 

Íñigo Goiri commented on HDFS-14508:


Oh, I thought HDFS-14475 was already in.
[~crh] can you take a look at [^HDFS-14508-HDFS-13891.2.patch]?
The new approach seems reasonable to me.
I remember there was some kind of documentation for the metrics too.

> RBF: Clean-up and refactor UI components
> 
>
> Key: HDFS-14508
> URL: https://issues.apache.org/jira/browse/HDFS-14508
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: CR Hota
>Assignee: Takanobu Asanuma
>Priority: Minor
> Attachments: HDFS-14508-HDFS-13891.1.patch, 
> HDFS-14508-HDFS-13891.2.patch
>
>
> Router UI has tags that are not used or incorrectly set. The code should be 
> cleaned-up.
> One such example is 
> Path : 
> (\hadoop-hdfs-project\hadoop-hdfs-rbf\src\main\webapps\router\federationhealth.js)
> {code:java}
> {"name": "routerstat", "url": 
> "/jmx?qry=Hadoop:service=NameNode,name=NameNodeStatus"},{code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14497) Write lock held by metasave impact following RPC processing

2019-05-30 Thread Wei-Chiu Chuang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14497?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HDFS-14497:
---
   Resolution: Fixed
Fix Version/s: 3.3.0
   Status: Resolved  (was: Patch Available)

Pushed to trunk. Thanks [~hexiaoqiao]!
There are conflicts cherry picking into branch-3.2 and lower so I'll stop at 
trunk for now.

> Write lock held by metasave impact following RPC processing
> ---
>
> Key: HDFS-14497
> URL: https://issues.apache.org/jira/browse/HDFS-14497
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Reporter: He Xiaoqiao
>Assignee: He Xiaoqiao
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HDFS-14497.001.patch
>
>
> NameNode metasave currently holds the global write lock, so subsequent RPC 
> read/write requests or internal NameNode threads can be paused if they try to 
> acquire the global read/write lock and must wait until metasave releases it.
> I propose changing the write lock to a read lock so that some read requests 
> can be processed normally. Allowing reads in parallel should not change the 
> information that metasave tries to collect.
> Actually, we need to ensure that only one thread executes metaSave at a time; 
> otherwise the output streams could hit exceptions, especially when both 
> streams hold the same file handle or share the same output stream.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14524) NNTop total counts does not add up as expected

2019-05-30 Thread Ahmed Hussein (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14524?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ahmed Hussein updated HDFS-14524:
-
Attachment: HDFS-14524.001.patch

> NNTop total counts does not add up as expected
> --
>
> Key: HDFS-14524
> URL: https://issues.apache.org/jira/browse/HDFS-14524
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Ahmed Hussein
>Assignee: Ahmed Hussein
>Priority: Minor
> Attachments: HDFS-14524.001.patch
>
>
> {{opType='*'}} is sometimes smaller than the sum of the individual operation 
> types.
> {code:java}
> {
>   "windows": [
> {
>   "windowLenMs": 30,
>   "ops": [
> {
>   "totalCount": 24158,
>   "opType": "rpc.complete",
>   "topUsers": [{ "count": 2944, "user": "user1" }]
> },
> {
>   "totalCount": 15921,
>   "opType": "rpc.rename",
>   "topUsers": [{ "count": 2891, "user": "user1" }]
> },
> {
>   "totalCount": 3015834,
>   "opType": "*",
>   "topUsers": [{ "count": 66652, "user": "user1" }]
> },
> {
>   "totalCount": 2086,
>   "opType": "rpc.abandonBlock",
>   "topUsers": [{ "count": 603, "user": "user1" }]
> },
> {
>   "totalCount": 30258,
>   "opType": "rpc.addBlock",
>   "topUsers": [{ "count": 3182, "user": "user1" }]
> },
> {
>   "totalCount": 101440,
>   "opType": "rpc.getServerDefaults",
>   "topUsers": [{ "count": 3521, "user": "user1" }]
> },
> {
>   "totalCount": 25258,
>   "opType": "rpc.create",
>   "topUsers": [{ "count": 1864, "user": "user1" }]
> },
> {
>   "totalCount": 1377563,
>   "opType": "rpc.getFileInfo",
>   "topUsers": [{ "count": 56541, "user": "user1" }]
> },
> {
>   "totalCount": 60836,
>   "opType": "rpc.renewLease",
>   "topUsers": [{ "count": 3783, "user": "user1" }]
> },
> {
>   "totalCount": 182212,
>   "opType": "rpc.getListing",
>   "topUsers": [{ "count": 1848, "user": "user1" }]
> },
> {
>   "totalCount": 380,
>   "opType": "rpc.updateBlockForPipeline",
>   "topUsers": [{ "count": 58, "user": "user1" }]
> },
> {
>   "totalCount": 215,
>   "opType": "rpc.updatePipeline",
>   "topUsers": [{ "count": 18, "user": "user1" }]
> }
>   ]
> }
>   ],
>   "timestamp": "2019-01-12"
> }
> {code}
>  
>  {{opType='*'}} from user {{user1}} is {{66652}}, but the sum of counts for 
> other {{optype}} values by {{user1}} is actually larger: {{77253}}
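
To make the discrepancy concrete: summing user1's per-opType counts from the 
sample above gives 2944 + 2891 + 603 + 3182 + 3521 + 1864 + 56541 + 3783 + 
1848 + 58 + 18 = 77253, which is larger than the 66652 reported for 
opType='*'. A quick, self-contained check (plain Java, with the sample values 
hard-coded):

{code:java}
public class NNTopSumCheck {
  public static void main(String[] args) {
    // user1's counts for each individual opType in the sample window.
    int[] perOpCounts = {2944, 2891, 603, 3182, 3521, 1864,
                         56541, 3783, 1848, 58, 18};
    int starCount = 66652;             // user1's count for opType='*'

    int sum = 0;
    for (int count : perOpCounts) {
      sum += count;
    }
    System.out.println("sum of per-opType counts = " + sum);       // 77253
    System.out.println("opType='*' count         = " + starCount); // 66652
    System.out.println("'*' covers the sum?      = " + (sum <= starCount));
  }
}
{code}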



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14497) Write lock held by metasave impact following RPC processing

2019-05-30 Thread Wei-Chiu Chuang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14497?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HDFS-14497:
---
Summary: Write lock held by metasave impact following RPC processing  (was: 
Write lock hold by metasave impact following RPC processing)

> Write lock held by metasave impact following RPC processing
> ---
>
> Key: HDFS-14497
> URL: https://issues.apache.org/jira/browse/HDFS-14497
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Reporter: He Xiaoqiao
>Assignee: He Xiaoqiao
>Priority: Major
> Attachments: HDFS-14497.001.patch
>
>
> NameNode metasave currently holds the global write lock, so subsequent RPC 
> read/write requests or internal NameNode threads can be paused if they try to 
> acquire the global read/write lock and must wait until metasave releases it.
> I propose changing the write lock to a read lock so that some read requests 
> can be processed normally. Allowing reads in parallel should not change the 
> information that metasave tries to collect.
> Actually, we need to ensure that only one thread executes metaSave at a time; 
> otherwise the output streams could hit exceptions, especially when both 
> streams hold the same file handle or share the same output stream.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14494) Move Server logging of StatedId inside receiveRequestState()

2019-05-30 Thread Shweta (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14494?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16852327#comment-16852327
 ] 

Shweta commented on HDFS-14494:
---

[~xkrogen], [~shv] Can one of you please review this?

> Move Server logging of StatedId inside receiveRequestState()
> 
>
> Key: HDFS-14494
> URL: https://issues.apache.org/jira/browse/HDFS-14494
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Konstantin Shvachko
>Assignee: Shweta
>Priority: Major
>  Labels: newbie++
> Attachments: HDFS-14494.001.patch
>
>
> HDFS-14270 introduced trace-level logging of the client and server StateIds. 
> Unfortunately, one of the arguments, {{alignmentContext.getLastSeenStateId()}}, 
> takes a lock on FSEdits and is evaluated even when trace logging is disabled. 
> I propose moving the logging message inside 
> {{GlobalStateIdContext.receiveRequestState()}}, where {{clientStateId}} and 
> {{serverStateId}} are already calculated and can easily be printed.
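
As an illustration of the pitfall described above (a hedged sketch with an 
SLF4J-style logger; apart from the names taken from the jira, the interface 
and method names are placeholders): arguments to a log call are evaluated 
before the call itself, so a lock-taking argument runs even when TRACE is off, 
whereas logging where the values already exist adds no extra work.

{code:java}
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

class StateIdLoggingSketch {

  private static final Logger LOG =
      LoggerFactory.getLogger(StateIdLoggingSketch.class);

  interface AlignmentContext {
    long getLastSeenStateId();         // takes the FSEdits lock in real code
  }

  // Anti-pattern: getLastSeenStateId() is evaluated to build the argument
  // list even when TRACE logging is disabled.
  void logAtCallSite(AlignmentContext ctx, long clientStateId) {
    LOG.trace("clientStateId={}, serverStateId={}",
        clientStateId, ctx.getLastSeenStateId());
  }

  // Better: log inside the method where both ids are already computed,
  // so no lock is taken solely for the log statement.
  long receiveRequestState(AlignmentContext ctx, long clientStateId) {
    long serverStateId = ctx.getLastSeenStateId(); // needed here anyway
    LOG.trace("clientStateId={}, serverStateId={}",
        clientStateId, serverStateId);
    return serverStateId;
  }
}
{code}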



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1551) Implement Bucket Write Requests to use Cache and DoubleBuffer

2019-05-30 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1551?focusedWorklogId=251272=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-251272
 ]

ASF GitHub Bot logged work on HDDS-1551:


Author: ASF GitHub Bot
Created on: 30/May/19 20:04
Start Date: 30/May/19 20:04
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #850: HDDS-1551. 
Implement Bucket Write Requests to use Cache and DoubleBuf…
URL: https://github.com/apache/hadoop/pull/850#issuecomment-497466452
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 29 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 1 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 17 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 25 | Maven dependency ordering for branch |
   | +1 | mvninstall | 596 | trunk passed |
   | +1 | compile | 323 | trunk passed |
   | +1 | checkstyle | 86 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 952 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 168 | trunk passed |
   | 0 | spotbugs | 348 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 588 | trunk passed |
   | -0 | patch | 398 | Used diff version of patch file. Binary files and 
potentially other changes not applied. Please rebase and squash commits if 
necessary. |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 30 | Maven dependency ordering for patch |
   | +1 | mvninstall | 539 | the patch passed |
   | +1 | compile | 301 | the patch passed |
   | +1 | cc | 301 | the patch passed |
   | +1 | javac | 301 | the patch passed |
   | -0 | checkstyle | 40 | hadoop-ozone: The patch generated 2 new + 0 
unchanged - 0 fixed = 2 total (was 0) |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 761 | patch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 77 | hadoop-ozone generated 3 new + 5 unchanged - 0 fixed = 
8 total (was 5) |
   | +1 | findbugs | 536 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 239 | hadoop-hdds in the patch passed. |
   | -1 | unit | 1276 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 83 | The patch does not generate ASF License warnings. |
   | | | 6952 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-850/15/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/850 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle cc |
   | uname | Linux 5e8e3ead135c 4.4.0-139-generic #165~14.04.1-Ubuntu SMP Wed 
Oct 31 10:55:11 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 30c6dd9 |
   | Default Java | 1.8.0_212 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-850/15/artifact/out/diff-checkstyle-hadoop-ozone.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-850/15/artifact/out/diff-javadoc-javadoc-hadoop-ozone.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-850/15/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-850/15/testReport/ |
   | Max. process+thread count | 5232 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/common hadoop-ozone/common 
hadoop-ozone/integration-test hadoop-ozone/ozone-manager U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-850/15/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 251272)
Time Spent: 10.5h  (was: 10h 20m)

> Implement Bucket Write Requests to use Cache and DoubleBuffer
> -
>
> Key: HDDS-1551
> URL: 

[jira] [Work logged] (HDDS-1540) Implement addAcl,removeAcl,setAcl,getAcl for Bucket

2019-05-30 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1540?focusedWorklogId=251254=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-251254
 ]

ASF GitHub Bot logged work on HDDS-1540:


Author: ASF GitHub Bot
Created on: 30/May/19 19:28
Start Date: 30/May/19 19:28
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #874: HDDS-1540. 
Implement addAcl,removeAcl,setAcl,getAcl for Bucket. Contributed by Ajay Kumar.
URL: https://github.com/apache/hadoop/pull/874#issuecomment-497454916
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 31 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 1 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 12 | Maven dependency ordering for branch |
   | +1 | mvninstall | 509 | trunk passed |
   | +1 | compile | 262 | trunk passed |
   | +1 | checkstyle | 73 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 842 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 147 | trunk passed |
   | 0 | spotbugs | 304 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 498 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 19 | Maven dependency ordering for patch |
   | +1 | mvninstall | 502 | the patch passed |
   | +1 | compile | 265 | the patch passed |
   | +1 | javac | 265 | the patch passed |
   | +1 | checkstyle | 72 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 639 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 141 | the patch passed |
   | +1 | findbugs | 527 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 237 | hadoop-hdds in the patch passed. |
   | -1 | unit | 1060 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 43 | The patch does not generate ASF License warnings. |
   | | | 6041 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.om.TestScmSafeMode |
   |   | hadoop.ozone.client.rpc.TestOzoneClientRetriesOnException |
   |   | hadoop.ozone.client.rpc.TestSecureOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClientWithRatis |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-874/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/874 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 3f1788c89eea 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 30c6dd9 |
   | Default Java | 1.8.0_212 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-874/2/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-874/2/testReport/ |
   | Max. process+thread count | 4469 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/ozone-manager hadoop-ozone/integration-test U: 
hadoop-ozone |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-874/2/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 251254)
Time Spent: 1.5h  (was: 1h 20m)

> Implement addAcl,removeAcl,setAcl,getAcl  for Bucket
> 
>
> Key: HDDS-1540
> URL: https://issues.apache.org/jira/browse/HDDS-1540
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> Implement 

[jira] [Commented] (HDFS-14525) JspHelper ignores hadoop.http.authentication.type

2019-05-30 Thread Prabhu Joseph (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14525?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16852267#comment-16852267
 ] 

Prabhu Joseph commented on HDFS-14525:
--

bq. You actually want a secure cluster to accept anonymous users?  Why do you 
even have security enabled?

Then why do we have a separate config, 
hadoop.http.authentication.simple.anonymous.allowed, which adds complexity to 
testing all the scenarios when making new changes?

Yes, the proposed change is wrong. I think the check below will work.

{code}
UserGroupInformation.isSecurityEnabled()
    && !"simple".equals(conf.get("hadoop.http.authentication.type"))
{code}
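
For illustration only, a minimal sketch of that check wrapped in a helper (the 
class and method names here are hypothetical, not part of any actual patch; 
comparing against the "simple" literal also guards against a null config 
value):

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.security.UserGroupInformation;

public final class HttpAuthCheck {

  private HttpAuthCheck() {
  }

  /**
   * Treat the HTTP endpoint as secure only when Kerberos security is
   * enabled AND the HTTP filter is not explicitly configured with
   * "simple" authentication.
   */
  static boolean isHttpSecure(Configuration conf) {
    String httpAuthType =
        conf.get("hadoop.http.authentication.type", "simple");
    return UserGroupInformation.isSecurityEnabled()
        && !"simple".equals(httpAuthType);
  }
}
{code}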





> JspHelper ignores hadoop.http.authentication.type
> -
>
> Key: HDFS-14525
> URL: https://issues.apache.org/jira/browse/HDFS-14525
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: webhdfs
>Affects Versions: 3.2.0
>Reporter: Prabhu Joseph
>Priority: Major
>
> On a secure cluster with hadoop.http.authentication.type set to simple and 
> hadoop.http.authentication.simple.anonymous.allowed set to true, the WebHDFS 
> REST API fails when user.name is not set. It runs fine if user.name=ambari-qa 
> is set.
> {code}
> [knox@pjosephdocker-1 ~]$ curl -sS -L -w '%{http_code}' -X GET -d '' -H 
> 'Content-Length: 0' --negotiate -u : 
> 'http://pjosephdocker-1.openstacklocal:50070/webhdfs/v1/services/sync/yarn-ats?op=GETFILESTATUS'
> {"RemoteException":{"exception":"SecurityException","javaClassName":"java.lang.SecurityException","message":"Failed
>  to obtain user group information: java.io.IOException: Security enabled but 
> user not authenticated by filter"}}403[knox@pjosephdocker-1 ~]$ 
> {code}
> JspHelper#getUGI checks UserGroupInformation.isSecurityEnabled() instead of 
> conf.get("hadoop.http.authentication.type").equals("kerberos") to decide 
> whether HTTP is secure, which causes the issue.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1613) Opening of rocksDB in datanode fails with "No locks available"

2019-05-30 Thread Mukul Kumar Singh (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1613?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16852264#comment-16852264
 ] 

Mukul Kumar Singh commented on HDDS-1613:
-

This problem occurs because, in the current container cache, when eviction is 
requested on an entry that still has an outstanding reference, the entry is 
removed from the map while that reference still holds the RocksDB handle. That 
reference has also acquired the underlying file lock.

When another consumer then tries to fetch the RocksDB handle from the cache, it 
does not find the entry, attempts to open the database, and tries to acquire 
the lock itself; this step fails because the evicted reference is still holding 
the lock.
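
A minimal sketch of the reference-counting idea behind a fix (the class and 
method names here are hypothetical, not the actual ContainerCache code): 
eviction should only remove the entry from the map, while the DB handle, and 
with it the file lock, is closed only when the last outstanding reference is 
released.

{code:java}
import java.util.HashMap;
import java.util.Map;

class RefCountedDbCache {

  static class DbHandle {
    private int refCount = 1;          // the cache's own reference

    synchronized DbHandle retain() {
      refCount++;
      return this;
    }

    synchronized void release() {
      if (--refCount == 0) {
        close();                       // frees the RocksDB LOCK file too
      }
    }

    private void close() {
      // close the underlying RocksDB instance here
    }
  }

  private final Map<String, DbHandle> map = new HashMap<>();

  synchronized DbHandle get(String containerDbPath) {
    DbHandle handle = map.get(containerDbPath);
    if (handle == null) {
      handle = new DbHandle();         // would open RocksDB, taking the lock
      map.put(containerDbPath, handle);
    }
    return handle.retain();
  }

  synchronized void evict(String containerDbPath) {
    DbHandle handle = map.remove(containerDbPath);
    if (handle != null) {
      handle.release();                // lock freed once the last user is done
    }
  }
}
{code}

With this scheme an evicted-but-referenced handle keeps the lock only until 
its last user calls release(), instead of leaving a locked, unreachable handle 
behind; a production version would additionally need to make reopening wait 
for the old handle to finish closing.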

> Opening of rocksDB in datanode fails with "No locks available"
> --
>
> Key: HDDS-1613
> URL: https://issues.apache.org/jira/browse/HDDS-1613
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Client
>Affects Versions: 0.4.0
>Reporter: Mukul Kumar Singh
>Assignee: Mukul Kumar Singh
>Priority: Major
>  Labels: MiniOzoneChaosCluster
>
> Block read fails with 
> {code}
> org.apache.hadoop.hdds.scm.container.common.helpers.StorageContainerException:
>  Unable to find the block with bcsID 11777 .Container 68 bcsId is 0.
> at 
> org.apache.hadoop.hdds.scm.storage.ContainerProtocolCalls.validateContainerResponse(ContainerProtocolCalls.java:573)
> at 
> org.apache.hadoop.hdds.scm.storage.ContainerProtocolCalls.getBlock(ContainerProtocolCalls.java:120)
> at 
> org.apache.hadoop.ozone.client.io.KeyInputStream$ChunkInputStreamEntry.initializeBlockInputStream(KeyInputStream.java:295)
> at 
> org.apache.hadoop.ozone.client.io.KeyInputStream$ChunkInputStreamEntry.getStream(KeyInputStream.java:265)
> at 
> org.apache.hadoop.ozone.client.io.KeyInputStream$ChunkInputStreamEntry.access$000(KeyInputStream.java:229)
> at 
> org.apache.hadoop.ozone.client.io.KeyInputStream.getStreamEntry(KeyInputStream.java:107)
> at 
> org.apache.hadoop.ozone.client.io.KeyInputStream.read(KeyInputStream.java:140)
> at 
> org.apache.hadoop.ozone.client.io.OzoneInputStream.read(OzoneInputStream.java:47)
> at java.io.InputStream.read(InputStream.java:101)
> at 
> org.apache.hadoop.ozone.MiniOzoneLoadGenerator.load(MiniOzoneLoadGenerator.java:114)
> at 
> org.apache.hadoop.ozone.MiniOzoneLoadGenerator.lambda$startIO$0(MiniOzoneLoadGenerator.java:147)
> at 
> java.util.concurrent.CompletableFuture$AsyncRun.run(CompletableFuture.java:1626)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> at java.lang.Thread.run(Thread.java:748)
> {code}
> Looking at the 3 datanodes, the containers are in bcs id of 11748, 11748 and 
> 0.
> {code}
> 2019-05-30 08:28:05,348 INFO  keyvalue.KeyValueHandler 
> (ContainerUtils.java:logAndReturnError(146)) - Operation: GetBlock : Trace 
> ID: 93a2a596076d2ee4:93a2a596076d2ee4:0:0 : Message: Unable to find the block 
> with bcsID 11777 .Container 68 bcsId is 11748. : Result: UNKNOWN_BCSID
> 2019-05-30 08:28:05,363 INFO  keyvalue.KeyValueHandler 
> (ContainerUtils.java:logAndReturnError(146)) - Operation: GetBlock : Trace 
> ID: 93a2a596076d2ee4:93a2a596076d2ee4:0:0 : Message: Unable to find the block 
> with bcsID 11777 .Container 68 bcsId is 11748. : Result: UNKNOWN_BCSID
> 2019-05-30 08:28:05,377 INFO  keyvalue.KeyValueHandler 
> (ContainerUtils.java:logAndReturnError(146)) - Operation: GetBlock : Trace 
> ID: 93a2a596076d2ee4:93a2a596076d2ee4:0:0 : Message: Unable to find the block 
> with bcsID 11777 .Container 68 bcsId is 0. : Result: UNKNOWN_BCSID
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1613) Opening of rocksDB in datanode fails with "No locks available"

2019-05-30 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1613?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-1613:
-
Labels: MiniOzoneChaosCluster pull-request-available  (was: 
MiniOzoneChaosCluster)

> Opening of rocksDB in datanode fails with "No locks available"
> --
>
> Key: HDDS-1613
> URL: https://issues.apache.org/jira/browse/HDDS-1613
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Client
>Affects Versions: 0.4.0
>Reporter: Mukul Kumar Singh
>Assignee: Mukul Kumar Singh
>Priority: Major
>  Labels: MiniOzoneChaosCluster, pull-request-available
>
> Block read fails with 
> {code}
> org.apache.hadoop.hdds.scm.container.common.helpers.StorageContainerException:
>  Unable to find the block with bcsID 11777 .Container 68 bcsId is 0.
> at 
> org.apache.hadoop.hdds.scm.storage.ContainerProtocolCalls.validateContainerResponse(ContainerProtocolCalls.java:573)
> at 
> org.apache.hadoop.hdds.scm.storage.ContainerProtocolCalls.getBlock(ContainerProtocolCalls.java:120)
> at 
> org.apache.hadoop.ozone.client.io.KeyInputStream$ChunkInputStreamEntry.initializeBlockInputStream(KeyInputStream.java:295)
> at 
> org.apache.hadoop.ozone.client.io.KeyInputStream$ChunkInputStreamEntry.getStream(KeyInputStream.java:265)
> at 
> org.apache.hadoop.ozone.client.io.KeyInputStream$ChunkInputStreamEntry.access$000(KeyInputStream.java:229)
> at 
> org.apache.hadoop.ozone.client.io.KeyInputStream.getStreamEntry(KeyInputStream.java:107)
> at 
> org.apache.hadoop.ozone.client.io.KeyInputStream.read(KeyInputStream.java:140)
> at 
> org.apache.hadoop.ozone.client.io.OzoneInputStream.read(OzoneInputStream.java:47)
> at java.io.InputStream.read(InputStream.java:101)
> at 
> org.apache.hadoop.ozone.MiniOzoneLoadGenerator.load(MiniOzoneLoadGenerator.java:114)
> at 
> org.apache.hadoop.ozone.MiniOzoneLoadGenerator.lambda$startIO$0(MiniOzoneLoadGenerator.java:147)
> at 
> java.util.concurrent.CompletableFuture$AsyncRun.run(CompletableFuture.java:1626)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> at java.lang.Thread.run(Thread.java:748)
> {code}
> Looking at the 3 datanodes, the containers are in bcs id of 11748, 11748 and 
> 0.
> {code}
> 2019-05-30 08:28:05,348 INFO  keyvalue.KeyValueHandler 
> (ContainerUtils.java:logAndReturnError(146)) - Operation: GetBlock : Trace 
> ID: 93a2a596076d2ee4:93a2a596076d2ee4:0:0 : Message: Unable to find the block 
> with bcsID 11777 .Container 68 bcsId is 11748. : Result: UNKNOWN_BCSID
> 2019-05-30 08:28:05,363 INFO  keyvalue.KeyValueHandler 
> (ContainerUtils.java:logAndReturnError(146)) - Operation: GetBlock : Trace 
> ID: 93a2a596076d2ee4:93a2a596076d2ee4:0:0 : Message: Unable to find the block 
> with bcsID 11777 .Container 68 bcsId is 11748. : Result: UNKNOWN_BCSID
> 2019-05-30 08:28:05,377 INFO  keyvalue.KeyValueHandler 
> (ContainerUtils.java:logAndReturnError(146)) - Operation: GetBlock : Trace 
> ID: 93a2a596076d2ee4:93a2a596076d2ee4:0:0 : Message: Unable to find the block 
> with bcsID 11777 .Container 68 bcsId is 0. : Result: UNKNOWN_BCSID
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1613) Opening of rocksDB in datanode fails with "No locks available"

2019-05-30 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1613?focusedWorklogId=251253=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-251253
 ]

ASF GitHub Bot logged work on HDDS-1613:


Author: ASF GitHub Bot
Created on: 30/May/19 19:21
Start Date: 30/May/19 19:21
Worklog Time Spent: 10m 
  Work Description: mukul1987 commented on pull request #876: HDDS-1613. 
Opening of rocksDB in datanode fails with No locks available. Contributed by 
Mukul Kumar Singh.
URL: https://github.com/apache/hadoop/pull/876
 
 
   This problem occurs because, in the current container cache, when eviction 
is requested on an entry that still has an outstanding reference, the entry is 
removed from the map while that reference still holds the RocksDB handle. That 
reference has also acquired the underlying file lock.
   
   When another consumer then tries to fetch the RocksDB handle from the cache, 
it does not find the entry, attempts to open the database, and tries to acquire 
the lock itself; this step fails because the evicted reference is still holding 
the lock.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 251253)
Time Spent: 10m
Remaining Estimate: 0h

> Opening of rocksDB in datanode fails with "No locks available"
> --
>
> Key: HDDS-1613
> URL: https://issues.apache.org/jira/browse/HDDS-1613
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Client
>Affects Versions: 0.4.0
>Reporter: Mukul Kumar Singh
>Assignee: Mukul Kumar Singh
>Priority: Major
>  Labels: MiniOzoneChaosCluster, pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Block read fails with 
> {code}
> org.apache.hadoop.hdds.scm.container.common.helpers.StorageContainerException:
>  Unable to find the block with bcsID 11777 .Container 68 bcsId is 0.
> at 
> org.apache.hadoop.hdds.scm.storage.ContainerProtocolCalls.validateContainerResponse(ContainerProtocolCalls.java:573)
> at 
> org.apache.hadoop.hdds.scm.storage.ContainerProtocolCalls.getBlock(ContainerProtocolCalls.java:120)
> at 
> org.apache.hadoop.ozone.client.io.KeyInputStream$ChunkInputStreamEntry.initializeBlockInputStream(KeyInputStream.java:295)
> at 
> org.apache.hadoop.ozone.client.io.KeyInputStream$ChunkInputStreamEntry.getStream(KeyInputStream.java:265)
> at 
> org.apache.hadoop.ozone.client.io.KeyInputStream$ChunkInputStreamEntry.access$000(KeyInputStream.java:229)
> at 
> org.apache.hadoop.ozone.client.io.KeyInputStream.getStreamEntry(KeyInputStream.java:107)
> at 
> org.apache.hadoop.ozone.client.io.KeyInputStream.read(KeyInputStream.java:140)
> at 
> org.apache.hadoop.ozone.client.io.OzoneInputStream.read(OzoneInputStream.java:47)
> at java.io.InputStream.read(InputStream.java:101)
> at 
> org.apache.hadoop.ozone.MiniOzoneLoadGenerator.load(MiniOzoneLoadGenerator.java:114)
> at 
> org.apache.hadoop.ozone.MiniOzoneLoadGenerator.lambda$startIO$0(MiniOzoneLoadGenerator.java:147)
> at 
> java.util.concurrent.CompletableFuture$AsyncRun.run(CompletableFuture.java:1626)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> at java.lang.Thread.run(Thread.java:748)
> {code}
> Looking at the 3 datanodes, the containers are in bcs id of 11748, 11748 and 
> 0.
> {code}
> 2019-05-30 08:28:05,348 INFO  keyvalue.KeyValueHandler 
> (ContainerUtils.java:logAndReturnError(146)) - Operation: GetBlock : Trace 
> ID: 93a2a596076d2ee4:93a2a596076d2ee4:0:0 : Message: Unable to find the block 
> with bcsID 11777 .Container 68 bcsId is 11748. : Result: UNKNOWN_BCSID
> 2019-05-30 08:28:05,363 INFO  keyvalue.KeyValueHandler 
> (ContainerUtils.java:logAndReturnError(146)) - Operation: GetBlock : Trace 
> ID: 93a2a596076d2ee4:93a2a596076d2ee4:0:0 : Message: Unable to find the block 
> with bcsID 11777 .Container 68 bcsId is 11748. : Result: UNKNOWN_BCSID
> 2019-05-30 08:28:05,377 INFO  keyvalue.KeyValueHandler 
> (ContainerUtils.java:logAndReturnError(146)) - Operation: GetBlock : Trace 
> ID: 93a2a596076d2ee4:93a2a596076d2ee4:0:0 : Message: Unable to find the block 
> with bcsID 11777 .Container 68 bcsId is 0. : Result: UNKNOWN_BCSID
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Work logged] (HDDS-1551) Implement Bucket Write Requests to use Cache and DoubleBuffer

2019-05-30 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1551?focusedWorklogId=251252=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-251252
 ]

ASF GitHub Bot logged work on HDDS-1551:


Author: ASF GitHub Bot
Created on: 30/May/19 19:19
Start Date: 30/May/19 19:19
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #850: 
HDDS-1551. Implement Bucket Write Requests to use Cache and DoubleBuf…
URL: https://github.com/apache/hadoop/pull/850#discussion_r288813544
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/ratis/OzoneManagerRatisServer.java
 ##
 @@ -97,6 +112,101 @@ private static long nextCallId() {
 return CALL_ID_COUNTER.getAndIncrement() & Long.MAX_VALUE;
   }
 
+  /**
+   * Submit request to Ratis server.
+   * @param omRequest
+   * @return OMResponse - response returned to the client.
+   * @throws ServiceException
+   */
+  public OMResponse submitRequest(OMRequest omRequest)
+      throws ServiceException {
+    RaftClientRequest raftClientRequest =
+        createWriteRaftClientRequest(omRequest);
+    RaftClientReply raftClientReply;
+    try {
+      raftClientReply = server.submitClientRequestAsync(raftClientRequest)
+          .get();
+    } catch (Exception ex) {
+      throw new ServiceException(ex.getMessage(), ex);
+    }
+
+    return processReply(omRequest, raftClientReply);
+  }
+
+  /**
+   * Create Write RaftClient request from OMRequest.
+   * @param omRequest
+   * @return
+   */
+  private RaftClientRequest createWriteRaftClientRequest(OMRequest omRequest) {
+    return new RaftClientRequest(clientId, server.getId(), raftGroupId,
+        nextCallId(),
+        Message.valueOf(OMRatisHelper.convertRequestToByteString(omRequest)),
+        RaftClientRequest.writeRequestType(), null);
+  }
+
+  /**
+   * Process the raftClientReply and return OMResponse.
+   * @param omRequest
+   * @param reply
+   * @return
+   * @throws ServiceException
+   */
+  private OMResponse processReply(OMRequest omRequest, RaftClientReply reply)
+      throws ServiceException {
+    // NotLeader exception is thrown only when the raft server to which the
+    // request is submitted is not the leader. This can happen first time
+    // when client is submitting request to OM.
+    NotLeaderException notLeaderException = reply.getNotLeaderException();
+    if (notLeaderException != null) {
+      throw new ServiceException(notLeaderException);
+    }
+    StateMachineException stateMachineException =
+        reply.getStateMachineException();
+    if (stateMachineException != null) {
+      OMResponse.Builder omResponse = OMResponse.newBuilder();
+      omResponse.setCmdType(omRequest.getCmdType());
+      omResponse.setSuccess(false);
+      omResponse.setMessage(stateMachineException.getCause().getMessage());
+      omResponse.setStatus(parseErrorStatus(
+          stateMachineException.getCause().getMessage()));
+      return omResponse.build();
+    }
+
+    try {
+      return OMRatisHelper.getOMResponseFromRaftClientReply(reply);
+    } catch (InvalidProtocolBufferException ex) {
+      if (ex.getMessage() != null) {
+        throw new ServiceException(ex.getMessage(), ex);
+      } else {
+        throw new ServiceException(ex);
+      }
+    }
+
+    // TODO: Still need to handle RaftRetry failure exception and
+    //  NotReplicated exception.
 
 Review comment:
   Currently, in the Ratis client path there is a TODO for the RaftRetryFailure 
exception, and I don't see anything being done to handle NotReplicatedException.
   
   NotReplicatedException is thrown only for watch-type requests, so this 
exception will never be thrown in our HA case, as requests submitted through HA 
are write requests.
   
   The RaftRetryFailure exception is thrown only from the Ratis client; when 
going through the server it will not be thrown.
   
   If you have any more comments, I will update the TODO in the next patch, or 
take care of it in a follow-up jira if you are okay with that.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 251252)
Time Spent: 10h 20m  (was: 10h 10m)

> Implement Bucket Write Requests to use Cache and DoubleBuffer
> -
>
> Key: HDDS-1551
> URL: https://issues.apache.org/jira/browse/HDDS-1551
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Manager
>Reporter: Bharat Viswanadham
>Assignee: Bharat 

[jira] [Work logged] (HDDS-1607) Create smoketest for non-secure mapreduce example

2019-05-30 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1607?focusedWorklogId=251249=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-251249
 ]

ASF GitHub Bot logged work on HDDS-1607:


Author: ASF GitHub Bot
Created on: 30/May/19 19:18
Start Date: 30/May/19 19:18
Worklog Time Spent: 10m 
  Work Description: xiaoyuyao commented on pull request #869: HDDS-1607. 
Create smoketest for non-secure mapreduce example
URL: https://github.com/apache/hadoop/pull/869#discussion_r289133688
 
 

 ##
 File path: hadoop-ozone/dist/src/main/compose/ozone-mr/docker-compose.yaml
 ##
 @@ -0,0 +1,95 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+version: "3"
+services:
+  datanode:
+image: apache/hadoop-runner
+volumes:
+  - ../..:/opt/hadoop
+ports:
+  - 9864
+command: ["/opt/hadoop/bin/ozone","datanode"]
+env_file:
+  - docker-config
+  om:
+image: apache/hadoop-runner
+hostname: om
+volumes:
+  - ../..:/opt/hadoop
+ports:
+  - 9874:9874
+environment:
+  WAITFOR: scm:9876
+  ENSURE_OM_INITIALIZED: /data/metadata/om/current/VERSION
+env_file:
+  - docker-config
+command: ["/opt/hadoop/bin/ozone","om"]
+  s3g:
+image: apache/hadoop-runner
+hostname: s3g
+volumes:
+  - ../..:/opt/hadoop
+ports:
+  - 9878:9878
+env_file:
+  - ./docker-config
+command: ["/opt/hadoop/bin/ozone","s3g"]
+  scm:
+image: apache/hadoop-runner:latest
+hostname: scm
+volumes:
+  - ../..:/opt/hadoop
+ports:
+  - 9876:9876
+env_file:
+  - docker-config
+environment:
+  ENSURE_SCM_INITIALIZED: /data/metadata/scm/current/VERSION
+command: ["/opt/hadoop/bin/ozone","scm"]
+  rm:
+image: ${HADOOP_IMAGE}:${HADOOP_VERSION}
+hostname: rm
+volumes:
+  - ../..:/opt/ozone
+ports:
+  - 8088:8088
+env_file:
+  - ./docker-config
+environment:
+  HADOOP_CLASSPATH: 
/opt/ozone/share/ozone/lib/hadoop-ozone-filesystem-lib-current-0.5.0-SNAPSHOT.jar
+command: ["yarn", "resourcemanager"]
+  nm:
+image: ${HADOOP_IMAGE}:${HADOOP_VERSION}
+hostname: nm
+volumes:
+  - ../..:/opt/ozone
+env_file:
+  - ./docker-config
+environment:
+  HADOOP_CLASSPATH: 
/opt/ozone/share/ozone/lib/hadoop-ozone-filesystem-lib-current-0.5.0-SNAPSHOT.jar
+  WAIT_FOR: rm:8088
+command: ["yarn","nodemanager"]
+  dns:
 
 Review comment:
   Thanks for the details. That's good to have. Let's keep it.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 251249)
Time Spent: 1.5h  (was: 1h 20m)

> Create smoketest for non-secure mapreduce example
> -
>
> Key: HDDS-1607
> URL: https://issues.apache.org/jira/browse/HDDS-1607
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> We had multiple problems earlier with the classpath separation and the 
> internal ozonefs classloader. Before fixing all the issues I propose to 
> create a smoketest to detect if the classpath separation is broken again.
> As a first step I created a smoketest/ozone-mr environment (based on the 
> work of [~xyao], which is secure) and a smoketest.
> Possible follow-up work:
>  * Adapt the test.sh for the ozonesecure-mr
>  * Include test runs with older Hadoop versions



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For 

[jira] [Work logged] (HDDS-1607) Create smoketest for non-secure mapreduce example

2019-05-30 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1607?focusedWorklogId=251251=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-251251
 ]

ASF GitHub Bot logged work on HDDS-1607:


Author: ASF GitHub Bot
Created on: 30/May/19 19:18
Start Date: 30/May/19 19:18
Worklog Time Spent: 10m 
  Work Description: xiaoyuyao commented on issue #869: HDDS-1607. Create 
smoketest for non-secure mapreduce example
URL: https://github.com/apache/hadoop/pull/869#issuecomment-497451763
 
 
   +1 pending CI.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 251251)
Time Spent: 1h 40m  (was: 1.5h)

> Create smoketest for non-secure mapreduce example
> -
>
> Key: HDDS-1607
> URL: https://issues.apache.org/jira/browse/HDDS-1607
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 40m
>  Remaining Estimate: 0h
>
> We had multiple problems earlier with the classpath separation and the 
> internal ozonefs classloader. Before fixing all the issues I propose to 
> create a smoketest to detect if the classpath separation is broken again.
> As a first step I created a smoketest/ozone-mr environment (based on the 
> work of [~xyao], which is secure) and a smoketest.
> Possible follow-up work:
>  * Adapt the test.sh for the ozonesecure-mr
>  * Include test runs with older Hadoop versions



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1551) Implement Bucket Write Requests to use Cache and DoubleBuffer

2019-05-30 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1551?focusedWorklogId=251248=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-251248
 ]

ASF GitHub Bot logged work on HDDS-1551:


Author: ASF GitHub Bot
Created on: 30/May/19 19:17
Start Date: 30/May/19 19:17
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #850: 
HDDS-1551. Implement Bucket Write Requests to use Cache and DoubleBuf…
URL: https://github.com/apache/hadoop/pull/850#discussion_r288813544
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/ratis/OzoneManagerRatisServer.java
 ##
 @@ -97,6 +112,101 @@ private static long nextCallId() {
 return CALL_ID_COUNTER.getAndIncrement() & Long.MAX_VALUE;
   }
 
+  /**
+   * Submit request to Ratis server.
+   * @param omRequest
+   * @return OMResponse - response returned to the client.
+   * @throws ServiceException
+   */
+  public OMResponse submitRequest(OMRequest omRequest)
+      throws ServiceException {
+    RaftClientRequest raftClientRequest =
+        createWriteRaftClientRequest(omRequest);
+    RaftClientReply raftClientReply;
+    try {
+      raftClientReply = server.submitClientRequestAsync(raftClientRequest)
+          .get();
+    } catch (Exception ex) {
+      throw new ServiceException(ex.getMessage(), ex);
+    }
+
+    return processReply(omRequest, raftClientReply);
+  }
+
+  /**
+   * Create Write RaftClient request from OMRequest.
+   * @param omRequest
+   * @return
+   */
+  private RaftClientRequest createWriteRaftClientRequest(OMRequest omRequest) {
+    return new RaftClientRequest(clientId, server.getId(), raftGroupId,
+        nextCallId(),
+        Message.valueOf(OMRatisHelper.convertRequestToByteString(omRequest)),
+        RaftClientRequest.writeRequestType(), null);
+  }
+
+  /**
+   * Process the raftClientReply and return OMResponse.
+   * @param omRequest
+   * @param reply
+   * @return
+   * @throws ServiceException
+   */
+  private OMResponse processReply(OMRequest omRequest, RaftClientReply reply)
+      throws ServiceException {
+    // NotLeader exception is thrown only when the raft server to which the
+    // request is submitted is not the leader. This can happen first time
+    // when client is submitting request to OM.
+    NotLeaderException notLeaderException = reply.getNotLeaderException();
+    if (notLeaderException != null) {
+      throw new ServiceException(notLeaderException);
+    }
+    StateMachineException stateMachineException =
+        reply.getStateMachineException();
+    if (stateMachineException != null) {
+      OMResponse.Builder omResponse = OMResponse.newBuilder();
+      omResponse.setCmdType(omRequest.getCmdType());
+      omResponse.setSuccess(false);
+      omResponse.setMessage(stateMachineException.getCause().getMessage());
+      omResponse.setStatus(parseErrorStatus(
+          stateMachineException.getCause().getMessage()));
+      return omResponse.build();
+    }
+
+    try {
+      return OMRatisHelper.getOMResponseFromRaftClientReply(reply);
+    } catch (InvalidProtocolBufferException ex) {
+      if (ex.getMessage() != null) {
+        throw new ServiceException(ex.getMessage(), ex);
+      } else {
+        throw new ServiceException(ex);
+      }
+    }
+
+    // TODO: Still need to handle RaftRetry failure exception and
+    //  NotReplicated exception.
 
 Review comment:
   Currently, in the Ratis client path there is a TODO for the RaftRetryFailure 
exception, and I don't see anything being done to handle NotReplicatedException.
   
   NotReplicatedException is thrown only for watch-type requests, so this 
exception will never be thrown in our HA case, as requests submitted through HA 
are write requests.
   
   If you have any more comments, I will update the TODO in the next patch, or 
take care of it in a follow-up jira if you are okay with that.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 251248)
Time Spent: 10h 10m  (was: 10h)

> Implement Bucket Write Requests to use Cache and DoubleBuffer
> -
>
> Key: HDDS-1551
> URL: https://issues.apache.org/jira/browse/HDDS-1551
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Manager
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10h 10m
>  
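
Since the jira title names the double-buffer technique without describing it, here is a generic sketch, independent of the actual OzoneManagerDoubleBuffer code (all names below are hypothetical): writers append to one buffer while a single flush thread drains the other, so writers never block on I/O.

{code}
import java.util.ArrayDeque;
import java.util.Queue;
import java.util.function.Consumer;

// Generic double-buffer sketch, not the Ozone Manager implementation.
public class DoubleBuffer<T> {
  private Queue<T> currentBuffer = new ArrayDeque<>();
  private Queue<T> readyBuffer = new ArrayDeque<>();

  public synchronized void add(T entry) {
    currentBuffer.add(entry);
    notify(); // wake the flush thread
  }

  /** Swap buffers under the lock, then flush outside it. */
  public void flushLoop(Consumer<T> sink) throws InterruptedException {
    while (!Thread.currentThread().isInterrupted()) {
      synchronized (this) {
        while (currentBuffer.isEmpty()) {
          wait();
        }
        Queue<T> tmp = currentBuffer; // O(1) swap under a short lock
        currentBuffer = readyBuffer;
        readyBuffer = tmp;
      }
      readyBuffer.forEach(sink); // e.g. batch-commit to RocksDB
      readyBuffer.clear();
    }
  }
}
{code}

The swap is O(1) and happens under a short lock, which is what lets the write path stay cache-backed while the flush thread batches its commits.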

[jira] [Updated] (HDDS-1613) Opening of rocksDB in datanode fails with "No locks available"

2019-05-30 Thread Mukul Kumar Singh (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1613?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mukul Kumar Singh updated HDDS-1613:

Summary: Opening of rocksDB in datanode fails with "No locks available"  
(was: opening of rocksDB fails with "No Such lock available")

> Opening of rocksDB in datanode fails with "No locks available"
> --
>
> Key: HDDS-1613
> URL: https://issues.apache.org/jira/browse/HDDS-1613
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Client
>Affects Versions: 0.4.0
>Reporter: Mukul Kumar Singh
>Assignee: Mukul Kumar Singh
>Priority: Major
>  Labels: MiniOzoneChaosCluster
>
> (description and stack trace identical to the original report quoted earlier in this thread)






[jira] [Assigned] (HDDS-1613) opening of rocksDB fails with "No Such lock available"

2019-05-30 Thread Mukul Kumar Singh (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1613?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mukul Kumar Singh reassigned HDDS-1613:
---

Assignee: Mukul Kumar Singh  (was: Shashikant Banerjee)

> opening of rocksDB fails with "No Such lock available"
> --
>
> Key: HDDS-1613
> URL: https://issues.apache.org/jira/browse/HDDS-1613
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Client
>Affects Versions: 0.4.0
>Reporter: Mukul Kumar Singh
>Assignee: Mukul Kumar Singh
>Priority: Major
>  Labels: MiniOzoneChaosCluster
>
> (description and stack trace identical to the original report quoted earlier in this thread)


