Apache Hadoop qbt Report: trunk+JDK8 on Windows/x64

2018-07-27 Thread Apache Jenkins Server
For more details, see https://builds.apache.org/job/hadoop-trunk-win/540/

[Jul 26, 2018 7:54:56 PM] (aw) YETUS-406. Publish Yetus Audience Annotations to 
Maven Central
[Jul 26, 2018 1:30:23 PM] (nanda) HDDS-201. Add name for LeaseManager. 
Contributed by Sandeep Nemuri.
[Jul 26, 2018 5:24:32 PM] (xiao) HDFS-13622. mkdir should print the parent 
directory in the error message
[Jul 26, 2018 8:15:55 PM] (xyao) HDDS-277. PipelineStateMachine should handle 
closure of pipelines in
[Jul 26, 2018 8:17:37 PM] (xyao) HDDS-291. Initialize hadoop metrics system in 
standalone hdds datanodes.
[Jul 26, 2018 10:22:57 PM] (eyang) YARN-8545.  Return allocated resource to RM 
for failed container.   
[Jul 26, 2018 10:35:36 PM] (eyang) HADOOP-15593.  Fixed NPE in UGI 
spawnAutoRenewalThreadForUserCreds. 
[Jul 27, 2018 12:02:13 AM] (eyang) YARN-8429. Improve diagnostic message when 
artifact is not set properly.




-1 overall


The following subsystems voted -1:
compile mvninstall pathlen unit


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc javac


The following subsystems are considered long running:
(runtime bigger than 1h 00m 00s)
unit


Specific tests:

Failed junit tests :

   hadoop.crypto.TestCryptoStreamsWithOpensslAesCtrCryptoCodec 
   hadoop.fs.contract.rawlocal.TestRawlocalContractAppend 
   hadoop.fs.TestFileUtil 
   hadoop.fs.TestFsShellCopy 
   hadoop.fs.TestFsShellList 
   hadoop.http.TestHttpServer 
   hadoop.http.TestHttpServerLogs 
   hadoop.io.nativeio.TestNativeIO 
   hadoop.ipc.TestSocketFactory 
   hadoop.metrics2.impl.TestStatsDMetrics 
   hadoop.security.TestSecurityUtil 
   hadoop.security.TestShellBasedUnixGroupsMapping 
   hadoop.security.token.TestDtUtilShell 
   hadoop.util.TestDiskCheckerWithDiskIo 
   hadoop.util.TestNativeCodeLoader 
   hadoop.hdfs.qjournal.server.TestJournalNode 
   hadoop.hdfs.qjournal.server.TestJournalNodeSync 
   hadoop.hdfs.server.balancer.TestBalancer 
   hadoop.hdfs.server.blockmanagement.TestBlocksWithNotEnoughRacks 
   hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistFiles 
   hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistPolicy 
   hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistReplicaPlacement 
   hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistReplicaRecovery 
   hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyWriter 
   hadoop.hdfs.server.datanode.fsdataset.impl.TestProvidedImpl 
   hadoop.hdfs.server.datanode.TestBlockPoolSliceStorage 
   hadoop.hdfs.server.datanode.TestBlockScanner 
   hadoop.hdfs.server.datanode.TestDataNodeFaultInjector 
   hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure 
   hadoop.hdfs.server.datanode.TestDirectoryScanner 
   hadoop.hdfs.server.datanode.TestNNHandlesBlockReportPerStorage 
   hadoop.hdfs.server.datanode.TestNNHandlesCombinedBlockReport 
   hadoop.hdfs.server.diskbalancer.TestDiskBalancerRPC 
   hadoop.hdfs.server.namenode.ha.TestDFSUpgradeWithHA 
   hadoop.hdfs.server.namenode.ha.TestHAAppend 
   hadoop.hdfs.server.namenode.ha.TestPipelinesFailover 
   hadoop.hdfs.server.namenode.ha.TestSeveralNameNodes 
   hadoop.hdfs.server.namenode.metrics.TestNameNodeMetrics 
   hadoop.hdfs.server.namenode.TestAuditLogs 
   hadoop.hdfs.server.namenode.TestEditLogAutoroll 
   hadoop.hdfs.server.namenode.TestReencryptionWithKMS 
   hadoop.hdfs.TestDatanodeStartupFixesLegacyStorageIDs 
   hadoop.hdfs.TestDFSShell 
   hadoop.hdfs.TestDFSStripedOutputStreamWithFailure 
   hadoop.hdfs.TestDFSStripedOutputStreamWithFailureWithRandomECPolicy 
   hadoop.hdfs.TestDFSStripedOutputStreamWithRandomECPolicy 
   hadoop.hdfs.TestDFSUpgrade 
   hadoop.hdfs.TestDFSUpgradeFromImage 
   hadoop.hdfs.TestErasureCodingPolicies 
   hadoop.hdfs.TestErasureCodingPolicyWithSnapshotWithRandomECPolicy 
   hadoop.hdfs.TestFetchImage 
   hadoop.hdfs.TestFileAppend 
   hadoop.hdfs.TestFileAppend2 
   hadoop.hdfs.TestFileCorruption 
   hadoop.hdfs.TestHDFSFileSystemContract 
   hadoop.hdfs.TestLeaseRecovery2 
   hadoop.hdfs.TestLeaseRecoveryStriped 
   hadoop.hdfs.TestPread 
   hadoop.hdfs.TestReconstructStripedFile 
   hadoop.hdfs.TestRestartDFS 
   hadoop.hdfs.TestRollingUpgrade 
   hadoop.hdfs.TestRollingUpgradeDowngrade 
   hadoop.hdfs.TestSafeMode 
   hadoop.hdfs.TestSafeModeWithStripedFile 
   hadoop.hdfs.TestSecureEncryptionZoneWithKMS 
   hadoop.hdfs.TestTrashWithSecureEncryptionZones 
   hadoop.hdfs.tools.TestDFSAdmin 
   hadoop.hdfs.web.TestWebHDFS 
   hadoop.hdfs.web.TestWebHdfsUrl 
   hadoop.fs.http.server.TestHttpFSServerWebServer 
   hadoop.yarn.logaggregation.filecontroller.ifile.TestLogAggregationIndexFileController 

Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86

2018-07-27 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/850/

No changes




-1 overall


The following subsystems voted -1:
asflicense findbugs pathlen unit xml


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

Failed CTEST tests :

   test_test_libhdfs_threaded_hdfs_static 
   test_libhdfs_threaded_hdfspp_test_shim_static 

Failed junit tests :

   hadoop.util.TestDiskCheckerWithDiskIo 
   hadoop.util.TestBasicDiskValidator 
   hadoop.hdfs.web.TestWebHdfsTimeouts 
   hadoop.hdfs.server.namenode.ha.TestDFSUpgradeWithHA 
   hadoop.hdfs.qjournal.server.TestJournalNodeSync 
   hadoop.yarn.server.resourcemanager.metrics.TestSystemMetricsPublisher 
   hadoop.mapred.TestMRTimelineEventHandling 
  

   cc:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/843/artifact/out/diff-compile-cc-root.txt
  [4.0K]

   javac:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/843/artifact/out/diff-compile-javac-root.txt
  [332K]

   checkstyle:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/843/artifact/out/diff-checkstyle-root.txt
  [4.0K]

   pathlen:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/843/artifact/out/pathlen.txt
  [12K]

   pylint:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/843/artifact/out/diff-patch-pylint.txt
  [24K]

   shellcheck:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/843/artifact/out/diff-patch-shellcheck.txt
  [20K]

   shelldocs:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/843/artifact/out/diff-patch-shelldocs.txt
  [16K]

   whitespace:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/843/artifact/out/whitespace-eol.txt
  [9.4M]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/843/artifact/out/whitespace-tabs.txt
  [1.1M]

   xml:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/843/artifact/out/xml.txt
  [4.0K]

   findbugs:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/843/artifact/out/branch-findbugs-hadoop-hdds_client.txt
  [56K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/843/artifact/out/branch-findbugs-hadoop-hdds_container-service.txt
  [52K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/843/artifact/out/branch-findbugs-hadoop-hdds_framework.txt
  [12K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/843/artifact/out/branch-findbugs-hadoop-hdds_server-scm.txt
  [56K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/843/artifact/out/branch-findbugs-hadoop-hdds_tools.txt
  [16K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/843/artifact/out/branch-findbugs-hadoop-ozone_client.txt
  [4.0K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/843/artifact/out/branch-findbugs-hadoop-ozone_common.txt
  [28K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/843/artifact/out/branch-findbugs-hadoop-ozone_objectstore-service.txt
  [4.0K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/843/artifact/out/branch-findbugs-hadoop-ozone_ozone-manager.txt
  [4.0K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/843/artifact/out/branch-findbugs-hadoop-ozone_ozonefs.txt
  [8.0K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/843/artifact/out/branch-findbugs-hadoop-ozone_tools.txt
  [4.0K]

   javadoc:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/843/artifact/out/diff-javadoc-javadoc-root.txt
  [760K]

   CTEST:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/843/artifact/out/patch-hadoop-hdfs-project_hadoop-hdfs-native-client-ctest.txt
  [116K]

   unit:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/843/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt
  [192K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/843/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
  [336K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/843/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-native-client.txt
  [112K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/843/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
  [80K]
   

[jira] [Created] (HDDS-299) Add InterfaceAudience/InterfaceStability annotations

2018-07-27 Thread Bharat Viswanadham (JIRA)
Bharat Viswanadham created HDDS-299:
---

 Summary: Add InterfaceAudience/InterfaceStability annotations
 Key: HDDS-299
 URL: https://issues.apache.org/jira/browse/HDDS-299
 Project: Hadoop Distributed Data Store
  Issue Type: Improvement
Reporter: Bharat Viswanadham


This Jira is to add InterfaceAudience annotations for the datanode code in the 
container-service module.
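
For context, a minimal sketch of what such annotations usually look like on a class. The class name below is hypothetical and only illustrative, not actual container-service code; the annotations themselves are the standard Hadoop ones.

{code:java}
import org.apache.hadoop.classification.InterfaceAudience;
import org.apache.hadoop.classification.InterfaceStability;

// Marked Private/Unstable so downstream users do not treat this
// datanode-internal class as a stable public API.
@InterfaceAudience.Private
@InterfaceStability.Unstable
public class ExampleContainerServiceComponent {
  // datanode-side container-service logic would live here
}
{code}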






[jira] [Created] (HDFS-13772) Erasure coding: Unnecessary NameNode Logs displaying for Enabling/Disabling Erasure coding policies which are already enabled/disabled

2018-07-27 Thread Souryakanta Dwivedy (JIRA)
Souryakanta Dwivedy created HDFS-13772:
--

 Summary: Erasure coding: Unnecessary NameNode Logs displaying for 
Enabling/Disabling Erasure coding policies which are already enabled/disabled
 Key: HDFS-13772
 URL: https://issues.apache.org/jira/browse/HDFS-13772
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: erasure-coding
Affects Versions: 3.0.0
 Environment: 3 Node SuSE Linux cluster !Capture1.PNG!
Reporter: Souryakanta Dwivedy
 Attachments: Capture1.PNG, Capture2.PNG, Capture3.PNG, Capture4.PNG

Unnecessary NameNode logs are printed when enabling/disabling erasure coding 
policies that are already enabled/disabled.

Steps to reproduce:
- Enable any erasure coding policy, e.g. "RS-LEGACY-6-3-1024k".
- The console reports "Erasure coding policy RS-LEGACY-6-3-1024k is enabled".
- Run "hdfs ec -enablePolicy -policy RS-LEGACY-6-3-1024k" again several times.
  Instead of reporting "policy already enabled", it prints the same message
  "Erasure coding policy RS-LEGACY-6-3-1024k is enabled" every time.
- The NameNode log also records the enable/disable messages multiple times,
  even though the policy is already in that state, for example:

2018-07-27 18:50:35,084 INFO org.apache.hadoop.hdfs.server.namenode.ErasureCodingPolicyManager: Disable the erasure coding policy RS-10-4-1024k
2018-07-27 18:50:35,084 INFO org.apache.hadoop.hdfs.server.namenode.ErasureCodingPolicyManager: Disable the erasure coding policy RS-10-4-1024k
2018-07-27 18:50:35,084 INFO org.apache.hadoop.hdfs.server.namenode.ErasureCodingPolicyManager: Disable the erasure coding policy RS-10-4-1024k
2018-07-27 18:50:35,084 INFO org.apache.hadoop.hdfs.server.namenode.ErasureCodingPolicyManager: Enable the erasure coding policy RS-LEGACY-6-3-1024k
2018-07-27 18:50:35,084 INFO org.apache.hadoop.hdfs.server.namenode.ErasureCodingPolicyManager: Enable the erasure coding policy RS-LEGACY-6-3-1024k
2018-07-27 18:50:35,084 INFO org.apache.hadoop.hdfs.server.namenode.ErasureCodingPolicyManager: Enable the erasure coding policy RS-LEGACY-6-3-1024k

- The disable command shows the same behaviour: the same log lines appear 
multiple times even though the policy is already disabled. It should instead 
report "policy is already disabled" for an already disabled policy.
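
A hedged sketch of the kind of idempotence check being asked for. This is not the real ErasureCodingPolicyManager code; it is a self-contained, hypothetical class that only illustrates the expected behaviour: act and log on an actual state change, and report "already enabled/disabled" otherwise.

{code:java}
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class PolicyStateTracker {
  private final Map<String, Boolean> enabledByName = new ConcurrentHashMap<>();

  /** Returns the message the console should show for an enable request. */
  public String enablePolicy(String name) {
    Boolean previous = enabledByName.put(name, Boolean.TRUE);
    if (Boolean.TRUE.equals(previous)) {
      return "Erasure coding policy " + name + " is already enabled";
    }
    // Only on a real state change would the NameNode emit its INFO line.
    return "Erasure coding policy " + name + " is enabled";
  }

  /** Returns the message the console should show for a disable request. */
  public String disablePolicy(String name) {
    Boolean previous = enabledByName.put(name, Boolean.FALSE);
    if (Boolean.FALSE.equals(previous)) {
      return "Erasure coding policy " + name + " is already disabled";
    }
    return "Erasure coding policy " + name + " is disabled";
  }
}
{code}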






[jira] [Created] (HDDS-298) Implement SCMClientProtocolServer.getContainerWithPipeline for closed containers

2018-07-27 Thread Elek, Marton (JIRA)
Elek, Marton created HDDS-298:
-

 Summary: Implement 
SCMClientProtocolServer.getContainerWithPipeline for closed containers
 Key: HDDS-298
 URL: https://issues.apache.org/jira/browse/HDDS-298
 Project: Hadoop Distributed Data Store
  Issue Type: Improvement
  Components: SCM
Reporter: Elek, Marton
 Fix For: 0.2.1


As [~ljain] mentioned during the review of HDDS-245, 
SCMClientProtocolServer.getContainerWithPipeline doesn't return valid data 
for closed containers. For closed containers, the datanodes for a containerId 
are maintained in ContainerStateMap.contReplicaMap. We need to create a fake 
Pipeline object on request and return it so the client can locate the right 
datanodes to download the data. 
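
A hedged sketch of the idea, using hypothetical types (PipelineStub and a plain map standing in for what ContainerStateMap.contReplicaMap tracks). It only illustrates assembling an on-request pipeline from the replica map so a client can find the datanodes holding a closed container; it is not the actual SCM implementation.

{code:java}
import java.util.List;
import java.util.Map;

public class ClosedContainerPipelineSketch {

  /** Minimal stand-in for a Pipeline: just the datanodes to read from. */
  public static final class PipelineStub {
    public final List<String> datanodeHosts;
    public PipelineStub(List<String> datanodeHosts) {
      this.datanodeHosts = datanodeHosts;
    }
  }

  /** Build a read-only pipeline for a closed container from its replica map entry. */
  public static PipelineStub pipelineForClosedContainer(
      long containerId, Map<Long, List<String>> replicasByContainerId) {
    List<String> replicas = replicasByContainerId.get(containerId);
    if (replicas == null || replicas.isEmpty()) {
      throw new IllegalStateException("No replicas known for container " + containerId);
    }
    return new PipelineStub(replicas);
  }
}
{code}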






[jira] [Created] (HDDS-297) Add pipeline reports in Ozone

2018-07-27 Thread Mukul Kumar Singh (JIRA)
Mukul Kumar Singh created HDDS-297:
--

 Summary: Add pipeline reports in Ozone
 Key: HDDS-297
 URL: https://issues.apache.org/jira/browse/HDDS-297
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
  Components: SCM
Reporter: Mukul Kumar Singh
Assignee: Mukul Kumar Singh


Pipelines in Ozone are created from a group of nodes, depending on the 
replication factor and type. These pipelines provide a transport protocol for 
data transfer.

In order to detect any pipeline failure, SCM should receive pipeline reports 
from datanodes and process them to identify the various Raft rings.
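
A hedged sketch of what processing such reports might look like, using hypothetical types; it is not the actual SCM report-handling code, only an illustration of grouping datanode reports by pipeline id so SCM can track the membership of each Raft ring and spot degraded pipelines.

{code:java}
import java.util.Collections;
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

public class PipelineReportTrackerSketch {
  /** pipelineId -> datanodes that have reported membership in that pipeline. */
  private final Map<String, Set<String>> membersByPipeline = new HashMap<>();

  /** Process one datanode's report listing the pipelines it participates in. */
  public void onReport(String datanodeId, Set<String> reportedPipelineIds) {
    for (String pipelineId : reportedPipelineIds) {
      membersByPipeline
          .computeIfAbsent(pipelineId, k -> new HashSet<>())
          .add(datanodeId);
    }
  }

  /** A pipeline with fewer reporting members than the replication factor may have failed. */
  public boolean looksDegraded(String pipelineId, int expectedReplication) {
    return membersByPipeline
        .getOrDefault(pipelineId, Collections.emptySet())
        .size() < expectedReplication;
  }
}
{code}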


