Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86

2019-07-10 Thread Apache Jenkins Server
For more details, see https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1194/ [Jul 10, 2019 2:19:36 AM] (msingh) HDDS-1603. Handle Ratis Append Failure in Container State Machine. [Jul 10, 2019 2:53:34 AM] (yqlin) HDFS-14632. Reduce useless #getNumLiveDataNodes call in

[jira] [Created] (HDDS-1783) Latency metric for applyTransaction in ContainerStateMachine

2019-07-10 Thread Supratim Deka (JIRA)
Supratim Deka created HDDS-1783: --- Summary: Latency metric for applyTransaction in ContainerStateMachine Key: HDDS-1783 URL: https://issues.apache.org/jira/browse/HDDS-1783 Project: Hadoop Distributed

[jira] [Created] (HDDS-1782) Add an option to MiniOzoneChaosCluster to read files multiple times.

2019-07-10 Thread Mukul Kumar Singh (JIRA)
Mukul Kumar Singh created HDDS-1782: --- Summary: Add an option to MiniOzoneChaosCluster to read files multiple times. Key: HDDS-1782 URL: https://issues.apache.org/jira/browse/HDDS-1782 Project:

Re: Incorrect NOTICE files in TLP releases

2019-07-10 Thread Akira Ajisaka
Hi Vinod, This issue is now tracked by https://issues.apache.org/jira/browse/HADOOP-15958 Thanks, Akira On Fri, Jul 5, 2019 at 1:29 PM Vinod Kumar Vavilapalli wrote: > > A bit of an old email, but want to make sure this isn't missed. > > Has anyone looked into this concern? > > Ref

[jira] [Created] (HDDS-1781) Add ContainerCache metrics in ContainerMetrics

2019-07-10 Thread Supratim Deka (JIRA)
Supratim Deka created HDDS-1781: --- Summary: Add ContainerCache metrics in ContainerMetrics Key: HDDS-1781 URL: https://issues.apache.org/jira/browse/HDDS-1781 Project: Hadoop Distributed Data Store

[jira] [Created] (HDFS-14644) That replication of block failed leads to decommission is blocked when the number of replicas of block is greater than the number of datanode

2019-07-10 Thread Lisheng Sun (JIRA)
Lisheng Sun created HDFS-14644: -- Summary: That replication of block failed leads to decommission is blocked when the number of replicas of block is greater than the number of datanode Key: HDFS-14644 URL:

[jira] [Resolved] (HDDS-1611) Evaluate ACL on volume bucket key and prefix to authorize access

2019-07-10 Thread Anu Engineer (JIRA)
[ https://issues.apache.org/jira/browse/HDDS-1611?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Anu Engineer resolved HDDS-1611. Resolution: Fixed > Evaluate ACL on volume bucket key and prefix to authorize access >

[jira] [Reopened] (HDDS-1611) Evaluate ACL on volume bucket key and prefix to authorize access

2019-07-10 Thread Anu Engineer (JIRA)
[ https://issues.apache.org/jira/browse/HDDS-1611?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Anu Engineer reopened HDDS-1611: > Evaluate ACL on volume bucket key and prefix to authorize access >

[jira] [Resolved] (HDDS-1611) Evaluate ACL on volume bucket key and prefix to authorize access

2019-07-10 Thread Anu Engineer (JIRA)
[ https://issues.apache.org/jira/browse/HDDS-1611?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Anu Engineer resolved HDDS-1611. Resolution: Fixed Fix Version/s: 0.4.1 0.5.0 Thanks for the patch. I have

Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86

2019-07-10 Thread Apache Jenkins Server
For more details, see https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1193/ [Jul 9, 2019 3:12:55 AM] (msingh) HDDS-1750. Add block allocation metrics for pipelines in SCM. [Jul 9, 2019 3:24:12 AM] (aengineer) HDDS-1550. MiniOzoneCluster is not shutting down all the threads

[jira] [Created] (HDFS-14643) [Dynamometer] Merge extra commits from GitHub to Hadoop

2019-07-10 Thread Erik Krogen (JIRA)
Erik Krogen created HDFS-14643: -- Summary: [Dynamometer] Merge extra commits from GitHub to Hadoop Key: HDFS-14643 URL: https://issues.apache.org/jira/browse/HDFS-14643 Project: Hadoop HDFS

[jira] [Created] (HDFS-14642) processMisReplicatedBlocks does not return correct processed count

2019-07-10 Thread Stephen O'Donnell (JIRA)
Stephen O'Donnell created HDFS-14642: Summary: processMisReplicatedBlocks does not return correct processed count Key: HDFS-14642 URL: https://issues.apache.org/jira/browse/HDFS-14642 Project:

Re: Any thoughts making Submarine a separate Apache project?

2019-07-10 Thread Wanqiang Ji
+1 This is a fantastic recommendation. I can see the community growing fast with good collaboration; Submarine can be an independent project now. Thanks to all contributors. FYI, Wanqiang Ji On Wed, Jul 10, 2019 at 3:34 PM Xun Liu wrote: > Hi all, > > This is Xun Liu contributing to the

[jira] [Resolved] (HDDS-1459) Docker compose of ozonefs has older hadoop image for hadoop 3.2

2019-07-10 Thread Elek, Marton (JIRA)
[ https://issues.apache.org/jira/browse/HDDS-1459?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Elek, Marton resolved HDDS-1459. Resolution: Duplicate Thanks for the report [~vivekratnavel]. It's fixed with HDDS-1525: ozonefs

[jira] [Created] (HDDS-1780) TestFailureHandlingByClient tests are flaky

2019-07-10 Thread Shashikant Banerjee (JIRA)
Shashikant Banerjee created HDDS-1780: - Summary: TestFailureHandlingByClient tests are flaky Key: HDDS-1780 URL: https://issues.apache.org/jira/browse/HDDS-1780 Project: Hadoop Distributed Data

Apache Hadoop qbt Report: branch2+JDK7 on Linux/x86

2019-07-10 Thread Apache Jenkins Server
For more details, see https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/378/ [Jul 9, 2019 3:54:37 PM] (stack) Backport HDFS-3246,HDFS-14111 ByteBuffer pread interface to branch-2.9 [Jul 9, 2019 3:57:57 PM] (stack) Revert "Backport HDFS-3246,HDFS-14111 ByteBuffer pread interface

[jira] [Created] (HDDS-1778) Fix existing blockade tests

2019-07-10 Thread Nanda kumar (JIRA)
Nanda kumar created HDDS-1778: - Summary: Fix existing blockade tests Key: HDDS-1778 URL: https://issues.apache.org/jira/browse/HDDS-1778 Project: Hadoop Distributed Data Store Issue Type: Bug

Any thoughts making Submarine a separate Apache project?

2019-07-10 Thread Xun Liu
Hi all, This is Xun Liu contributing to the Submarine project for deep learning workloads running together with big data workloads on Hadoop clusters. A number of integrations of Submarine with other projects are finished or in progress, such as Apache Zeppelin, TonY, and Azkaban. The next step