[jira] [Created] (HDFS-13560) Insufficient system resources exist to complete the requested service for some tests on Windows

2018-05-14 Thread Anbang Hu (JIRA)
Anbang Hu created HDFS-13560:


 Summary: Insufficient system resources exist to complete the 
requested service for some tests on Windows
 Key: HDFS-13560
 URL: https://issues.apache.org/jira/browse/HDFS-13560
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Anbang Hu
Assignee: Anbang Hu


On Windows, there are 30 tests in the HDFS component that fail with errors like 
the following:
[ERROR] Tests run: 7, Failures: 0, Errors: 7, Skipped: 0, Time elapsed: 50.149 s <<< FAILURE! - in org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistFiles
[ERROR] testDisableLazyPersistFileScrubber(org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistFiles)  Time elapsed: 16.513 s <<< ERROR!
1450: Insufficient system resources exist to complete the requested service.

    at org.apache.hadoop.io.nativeio.NativeIO$Windows.extendWorkingSetSize(Native Method)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:1339)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.<init>(DataNode.java:495)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:2695)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:2598)
    at org.apache.hadoop.hdfs.MiniDFSCluster.startDataNodes(MiniDFSCluster.java:1554)
    at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:904)
    at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:514)
    at org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:473)
    at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.LazyPersistTestCase.startUpCluster(LazyPersistTestCase.java:316)
    at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.LazyPersistTestCase$ClusterWithRamDiskBuilder.build(LazyPersistTestCase.java:415)
    at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistFiles.testDisableLazyPersistFileScrubber(TestLazyPersistFiles.java:128)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
    at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
    at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
    at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
    at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
    at org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)

The involved tests are:
{code:java}
TestLazyPersistFiles
TestLazyPersistPolicy
TestLazyPersistReplicaRecovery
TestLazyPersistLockedMemory#testWritePipelineFailure
TestLazyPersistLockedMemory#testShortBlockFinalized
TestLazyPersistReplicaPlacement#testRamDiskNotChosenByDefault
TestLazyPersistReplicaPlacement#testFallbackToDisk
TestLazyPersistReplicaPlacement#testPlacementOnSizeLimitedRamDisk
TestLazyPersistReplicaPlacement#testPlacementOnRamDisk
TestLazyWriter#testDfsUsageCreateDelete
TestLazyWriter#testDeleteAfterPersist
TestLazyWriter#testDeleteBeforePersist
TestLazyWriter#testLazyPersistBlocksAreSaved
TestDirectoryScanner#testDeleteBlockOnTransientStorage
TestDirectoryScanner#testRetainBlockOnPersistentStorage
TestDirectoryScanner#testExceptionHandlingWhileDirectoryScan
TestDirectoryScanner#testDirectoryScanner
TestDirectoryScanner#testThrottling
TestDirectoryScanner#testDirectoryScannerInFederatedCluster
TestNameNodeMXBean#testNameNodeMXBeanInfo
{code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



Apache Hadoop qbt Report: trunk+JDK8 on Windows/x64

2018-05-14 Thread Apache Jenkins Server
For more details, see https://builds.apache.org/job/hadoop-trunk-win/467/

[May 14, 2018 6:24:01 AM] (littlezhou) Add 2.9.1 release notes and changes 
documents
[May 14, 2018 6:38:40 AM] (sammichen) Revert "Add 2.9.1 release notes and 
changes documents"
[May 14, 2018 7:14:02 AM] (sammi.chen) Add 2.9.1 release notes and changes 
documents




-1 overall


The following subsystems voted -1:
compile mvninstall unit


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc javac


The following subsystems are considered long running:
(runtime bigger than 1h 00m 00s)
unit


Specific tests:

Failed junit tests :

   hadoop.crypto.TestCryptoStreamsWithOpensslAesCtrCryptoCodec 
   hadoop.fs.contract.rawlocal.TestRawlocalContractAppend 
   hadoop.fs.TestFileUtil 
   hadoop.fs.TestFsShellCopy 
   hadoop.fs.TestFsShellList 
   hadoop.fs.TestLocalFileSystem 
   hadoop.fs.TestRawLocalFileSystemContract 
   hadoop.fs.TestTrash 
   hadoop.http.TestHttpServer 
   hadoop.http.TestHttpServerLogs 
   hadoop.io.nativeio.TestNativeIO 
   hadoop.ipc.TestSocketFactory 
   hadoop.metrics2.impl.TestStatsDMetrics 
   hadoop.security.TestSecurityUtil 
   hadoop.security.TestShellBasedUnixGroupsMapping 
   hadoop.security.token.TestDtUtilShell 
   hadoop.util.TestNativeCodeLoader 
   hadoop.fs.TestResolveHdfsSymlink 
   hadoop.hdfs.client.impl.TestBlockReaderLocal 
   hadoop.hdfs.client.impl.TestBlockReaderLocalLegacy 
   hadoop.hdfs.crypto.TestHdfsCryptoStreams 
   hadoop.hdfs.qjournal.client.TestQuorumJournalManager 
   hadoop.hdfs.qjournal.server.TestJournalNode 
   hadoop.hdfs.qjournal.server.TestJournalNodeSync 
   hadoop.hdfs.security.TestDelegationTokenForProxyUser 
   hadoop.hdfs.server.blockmanagement.TestBlocksWithNotEnoughRacks 
   hadoop.hdfs.server.blockmanagement.TestNameNodePrunesMissingStorages 
   hadoop.hdfs.server.blockmanagement.TestOverReplicatedBlocks 
   hadoop.hdfs.server.datanode.fsdataset.impl.TestFsDatasetImpl 
   hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistFiles 
   hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistLockedMemory 
   hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistPolicy 
   hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistReplicaPlacement 
   hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistReplicaRecovery 
   hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyWriter 
   hadoop.hdfs.server.datanode.fsdataset.impl.TestProvidedImpl 
   hadoop.hdfs.server.datanode.fsdataset.impl.TestSpaceReservation 
   hadoop.hdfs.server.datanode.TestBlockPoolSliceStorage 
   hadoop.hdfs.server.datanode.TestBlockRecovery 
   hadoop.hdfs.server.datanode.TestBlockScanner 
   hadoop.hdfs.server.datanode.TestDataNodeFaultInjector 
   hadoop.hdfs.server.datanode.TestDataNodeMetrics 
   hadoop.hdfs.server.datanode.TestDataNodeUUID 
   hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure 
   hadoop.hdfs.server.datanode.TestDirectoryScanner 
   hadoop.hdfs.server.datanode.TestHSync 
   hadoop.hdfs.server.datanode.TestNNHandlesCombinedBlockReport 
   hadoop.hdfs.server.datanode.web.TestDatanodeHttpXFrame 
   hadoop.hdfs.server.diskbalancer.command.TestDiskBalancerCommand 
   hadoop.hdfs.server.diskbalancer.TestDiskBalancerRPC 
   hadoop.hdfs.server.mover.TestStorageMover 
   hadoop.hdfs.server.namenode.ha.TestDFSUpgradeWithHA 
   hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA 
   hadoop.hdfs.server.namenode.metrics.TestNameNodeMetrics 
   hadoop.hdfs.server.namenode.snapshot.TestINodeFileUnderConstructionWithSnapshot 
   hadoop.hdfs.server.namenode.snapshot.TestOpenFilesWithSnapshot 
   hadoop.hdfs.server.namenode.snapshot.TestRenameWithSnapshots 
   hadoop.hdfs.server.namenode.snapshot.TestSnapRootDescendantDiff 
   hadoop.hdfs.server.namenode.snapshot.TestSnapshotDiffReport 
   hadoop.hdfs.server.namenode.TestAddBlock 
   hadoop.hdfs.server.namenode.TestAuditLoggerWithCommands 
   hadoop.hdfs.server.namenode.TestCheckpoint 
   hadoop.hdfs.server.namenode.TestDiskspaceQuotaUpdate 
   hadoop.hdfs.server.namenode.TestEditLogRace 
   hadoop.hdfs.server.namenode.TestFSImage 
   hadoop.hdfs.server.namenode.TestFSImageWithSnapshot 
   hadoop.hdfs.server.namenode.TestNamenodeCapacityReport 
   hadoop.hdfs.server.namenode.TestNameNodeMXBean 
   hadoop.hdfs.server.namenode.TestNestedEncryptionZones 
   hadoop.hdfs.server.namenode.TestQuotaByStorageType 
   hadoop.hdfs.server.namenode.TestReencryptionHandler 
   hadoop.hdfs.server.namenode.TestStartup 
   hadoop.hdfs.TestDatanodeRegistration 
   hadoop.hdfs.TestDatanodeReport 
   hadoop.hdfs.TestDatanodeStartupFixesLegacyStorageIDs 
   hadoop.hdfs.TestDecommission 
   

RE: [VOTE] Release Apache Hadoop 2.8.4 (RC0)

2018-05-14 Thread Brahma Reddy Battula
Thanks Junping for driving this release.


+1  (binding)


-- Built successfully from the source code
-- Started an HA cluster
-- Verified basic shell operations
-- Ran pi and wordcount jobs
-- Browsed the NN and RM UIs

  


-Brahma Reddy Battula

-Original Message-
From: 俊平堵 [mailto:junping...@apache.org] 
Sent: 09 May 2018 01:41
To: Hadoop Common ; Hdfs-dev 
; mapreduce-...@hadoop.apache.org; 
yarn-...@hadoop.apache.org
Subject: [VOTE] Release Apache Hadoop 2.8.4 (RC0)

Hi all,
 I've created the first release candidate (RC0) for Apache Hadoop 2.8.4. 
This is our next maintenance release, following 2.8.3. It includes 77 important 
fixes and improvements.

The RC artifacts are available at:
http://home.apache.org/~junping_du/hadoop-2.8.4-RC0

The RC tag in git is: release-2.8.4-RC0

The Maven artifacts are available via repository.apache.org at:
https://repository.apache.org/content/repositories/orgapachehadoop-1118

Please try the release and vote; the vote will run for the usual 5 working 
days, ending on 5/14/2018 (PST).

Thanks,

Junping




[jira] [Created] (HDFS-13559) TestBlockScanner does not close TestContext properly

2018-05-14 Thread Anbang Hu (JIRA)
Anbang Hu created HDFS-13559:


 Summary: TestBlockScanner does not close TestContext properly
 Key: HDFS-13559
 URL: https://issues.apache.org/jira/browse/HDFS-13559
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Anbang Hu
Assignee: Anbang Hu


Because ctx is not closed in testMarkSuspectBlock, testIgnoreMisplacedBlock, 
and testAppendWhileScanning, some tests fail on Windows:

[INFO] Running org.apache.hadoop.hdfs.server.datanode.TestBlockScanner
[ERROR] Tests run: 14, Failures: 0, Errors: 8, Skipped: 0, Time elapsed: 113.398 s <<< FAILURE! - in org.apache.hadoop.hdfs.server.datanode.TestBlockScanner
[ERROR] testScanAllBlocksWithRescan(org.apache.hadoop.hdfs.server.datanode.TestBlockScanner)  Time elapsed: 0.031 s <<< ERROR!
java.io.IOException: Could not fully delete E:\OSS\hadoop-branch-2\hadoop-hdfs-project\hadoop-hdfs\target\test\data\dfs\name1
    at org.apache.hadoop.hdfs.MiniDFSCluster.createNameNodesAndSetConf(MiniDFSCluster.java:1047)
    at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:883)
    at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:514)
    at org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:473)
    at org.apache.hadoop.hdfs.server.datanode.TestBlockScanner$TestContext.<init>(TestBlockScanner.java:102)
    at org.apache.hadoop.hdfs.server.datanode.TestBlockScanner.testScanAllBlocksImpl(TestBlockScanner.java:366)
    at org.apache.hadoop.hdfs.server.datanode.TestBlockScanner.testScanAllBlocksWithRescan(TestBlockScanner.java:435)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
    at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
    at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
    at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
    at org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)

...

[INFO]
[INFO] Results:
[INFO]
[ERROR] Errors:
[ERROR] TestBlockScanner.testAppendWhileScanning:899 » IO Could not fully delete E:\OS...
[ERROR] TestBlockScanner.testCorruptBlockHandling:488 » IO Could not fully delete E:\O...
[ERROR] TestBlockScanner.testDatanodeCursor:531 » IO Could not fully delete E:\OSS\had...
[ERROR] TestBlockScanner.testMarkSuspectBlock:717 » IO Could not fully delete E:\OSS\h...
[ERROR] TestBlockScanner.testScanAllBlocksWithRescan:435->testScanAllBlocksImpl:366 » IO
[ERROR] TestBlockScanner.testScanRateLimit:450 » IO Could not fully delete E:\OSS\hado...
[ERROR] TestBlockScanner.testVolumeIteratorWithCaching:261->testVolumeIteratorImpl:169 » IO
[ERROR] TestBlockScanner.testVolumeIteratorWithoutCaching:256->testVolumeIteratorImpl:169 » IO
[INFO]
[ERROR] Tests run: 14, Failures: 0, Errors: 8, Skipped: 0
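One way to guarantee the context gets closed, regardless of whether the test body throws, is try-with-resources. A minimal sketch of the pattern, assuming the context type is (or can be made) Closeable; HypotheticalContext below stands in for the test's cluster-backed context and is not the actual TestBlockScanner code:

```java
import java.io.Closeable;

public class CloseContextSketch {
    // Stand-in for a context that wraps a MiniDFSCluster; closing it is what
    // lets the next test delete and re-create the name directories on Windows.
    static class HypotheticalContext implements Closeable {
        boolean closed = false;
        @Override
        public void close() {
            closed = true; // in the real test this would shut down the cluster
        }
    }

    public static void main(String[] args) {
        HypotheticalContext ctx = new HypotheticalContext();
        // try-with-resources calls close() on exit, pass or fail, so the
        // on-disk state is released even when an assertion throws.
        try (HypotheticalContext c = ctx) {
            // ... test body ...
        }
        System.out.println("ctx closed: " + ctx.closed);
    }
}
```

The same effect can be had with a try/finally that calls close() explicitly; try-with-resources is just harder to forget.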






[jira] [Created] (HDFS-13558) TestDatanodeHttpXFrame does not shut down cluster

2018-05-14 Thread Anbang Hu (JIRA)
Anbang Hu created HDFS-13558:


 Summary: TestDatanodeHttpXFrame does not shut down cluster
 Key: HDFS-13558
 URL: https://issues.apache.org/jira/browse/HDFS-13558
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Anbang Hu
Assignee: Anbang Hu


On Windows, the following failure occurs because the cluster is not shut down properly:

[INFO] Running org.apache.hadoop.hdfs.server.datanode.web.TestDatanodeHttpXFrame
[ERROR] Tests run: 3, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 32.32 s <<< FAILURE! - in org.apache.hadoop.hdfs.server.datanode.web.TestDatanodeHttpXFrame
[ERROR] testDataNodeXFrameOptionsEnabled(org.apache.hadoop.hdfs.server.datanode.web.TestDatanodeHttpXFrame)  Time elapsed: 0.034 s <<< ERROR!
java.io.IOException: Could not fully delete E:\OSS\hadoop-branch-2\hadoop-hdfs-project\hadoop-hdfs\target\test\data\dfs\name1
    at org.apache.hadoop.hdfs.MiniDFSCluster.createNameNodesAndSetConf(MiniDFSCluster.java:1047)
    at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:883)
    at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:514)
    at org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:473)
    at org.apache.hadoop.hdfs.server.datanode.web.TestDatanodeHttpXFrame.createCluster(TestDatanodeHttpXFrame.java:77)
    at org.apache.hadoop.hdfs.server.datanode.web.TestDatanodeHttpXFrame.testDataNodeXFrameOptionsEnabled(TestDatanodeHttpXFrame.java:45)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
    at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
    at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
    at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
    at org.junit.rules.ExpectedException$ExpectedExceptionStatement.evaluate(ExpectedException.java:168)
    at org.junit.rules.RunRules.evaluate(RunRules.java:20)
    at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271)
    at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
    at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
    at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
    at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
    at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
    at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
    at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
    at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
    at org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
    at org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
    at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
    at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
    at org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:379)
    at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:340)
    at org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:125)
    at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:413)

[INFO]
[INFO] Results:
[INFO]
[ERROR] Errors:
[ERROR] TestDatanodeHttpXFrame.testDataNodeXFrameOptionsEnabled:45->createCluster:77 » IO
[INFO]
[ERROR] Tests run: 3, Failures: 0, Errors: 1, Skipped: 0




Apache Hadoop qbt Report: branch2+JDK7 on Linux/x86

2018-05-14 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/225/

[May 14, 2018 6:00:00 AM] (rohithsharmaks) YARN-8247 Incorrect HTTP status code 
returned by ATSv2 for
[May 14, 2018 7:36:30 AM] (sammi.chen) Add 2.9.1 release notes and changes 
documents
[May 14, 2018 5:21:59 PM] (hanishakoneru) HDFS-13544. Improve logging for 
JournalNode in federated cluster.




-1 overall


The following subsystems voted -1:
docker


Powered by Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org


[jira] [Created] (HDFS-13557) TestDFSAdmin#testListOpenFiles fails on Windows

2018-05-14 Thread Anbang Hu (JIRA)
Anbang Hu created HDFS-13557:


 Summary: TestDFSAdmin#testListOpenFiles fails on Windows
 Key: HDFS-13557
 URL: https://issues.apache.org/jira/browse/HDFS-13557
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Anbang Hu
Assignee: Anbang Hu


Unlike Unix-like systems, Windows uses \r\n as the line separator, so an 
assertion that expects \n-terminated output fails there.

[ERROR] Tests run: 1, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 91.891 s <<< FAILURE! - in org.apache.hadoop.hdfs.tools.TestDFSAdmin
[ERROR] testListOpenFiles(org.apache.hadoop.hdfs.tools.TestDFSAdmin)  Time elapsed: 91.752 s <<< FAILURE!
java.lang.AssertionError:

Expected: is a string containing "/tmp/files/open-file-14\n"
     but: was "Formatting using clusterid: testClusterID
Client Host Client Name Open File Path
127.0.0.1 DFSClient_NONMAPREDUCE_-1619541836_214 /tmp/files/open-file-0
127.0.0.1 DFSClient_NONMAPREDUCE_-1619541836_214 /tmp/files/open-file-1
127.0.0.1 DFSClient_NONMAPREDUCE_-1619541836_214 /tmp/files/open-file-2
127.0.0.1 DFSClient_NONMAPREDUCE_-1619541836_214 /tmp/files/open-file-3
127.0.0.1 DFSClient_NONMAPREDUCE_-1619541836_214 /tmp/files/open-file-4
127.0.0.1 DFSClient_NONMAPREDUCE_-1619541836_214 /tmp/files/open-file-5
127.0.0.1 DFSClient_NONMAPREDUCE_-1619541836_214 /tmp/files/open-file-6
127.0.0.1 DFSClient_NONMAPREDUCE_-1619541836_214 /tmp/files/open-file-7
127.0.0.1 DFSClient_NONMAPREDUCE_-1619541836_214 /tmp/files/open-file-8
127.0.0.1 DFSClient_NONMAPREDUCE_-1619541836_214 /tmp/files/open-file-9
127.0.0.1 DFSClient_NONMAPREDUCE_-1619541836_214 /tmp/files/open-file-10
127.0.0.1 DFSClient_NONMAPREDUCE_-1619541836_214 /tmp/files/open-file-11
127.0.0.1 DFSClient_NONMAPREDUCE_-1619541836_214 /tmp/files/open-file-12
127.0.0.1 DFSClient_NONMAPREDUCE_-1619541836_214 /tmp/files/open-file-13
127.0.0.1 DFSClient_NONMAPREDUCE_-1619541836_214 /tmp/files/open-file-14
"
    at org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:20)
    at org.junit.Assert.assertThat(Assert.java:865)
    at org.junit.Assert.assertThat(Assert.java:832)
    at org.apache.hadoop.hdfs.tools.TestDFSAdmin.verifyOpenFilesListing(TestDFSAdmin.java:664)
    at org.apache.hadoop.hdfs.tools.TestDFSAdmin.testListOpenFiles(TestDFSAdmin.java:644)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
    at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
    at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
    at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
    at org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
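Since the mismatch is only in the line terminator, the portable fix is to build the expected string from the platform separator instead of hard-coding "\n". A minimal sketch of that idea; expectedLine is a hypothetical helper, not the actual TestDFSAdmin code:

```java
public class LineSeparatorSketch {
    // Hypothetical helper: append the platform line separator rather than a
    // literal "\n", so the expectation matches "\r\n" on Windows too.
    static String expectedLine(String path) {
        return path + System.lineSeparator();
    }

    public static void main(String[] args) {
        String expected = expectedLine("/tmp/files/open-file-14");
        // On Linux this ends with "\n"; on Windows with "\r\n".
        System.out.println(expected.endsWith(System.lineSeparator()));
    }
}
```

Alternatively, the assertion can normalize the actual output (e.g. replace "\r\n" with "\n") before matching; either way the test stops depending on the OS.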






[jira] [Created] (HDFS-13556) TestNestedEncryptionZones does not shut down cluster

2018-05-14 Thread Anbang Hu (JIRA)
Anbang Hu created HDFS-13556:


 Summary: TestNestedEncryptionZones does not shut down cluster
 Key: HDFS-13556
 URL: https://issues.apache.org/jira/browse/HDFS-13556
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Anbang Hu
Assignee: Anbang Hu


Because the cluster is not shut down, subsequent tests conflict with it, at least on Windows.

[INFO] Running org.apache.hadoop.hdfs.server.namenode.TestNestedEncryptionZones
[ERROR] Tests run: 2, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 33.631 s <<< FAILURE! - in org.apache.hadoop.hdfs.server.namenode.TestNestedEncryptionZones
[ERROR] testNestedEncryptionZones(org.apache.hadoop.hdfs.server.namenode.TestNestedEncryptionZones)  Time elapsed: 0.03 s <<< ERROR!
java.io.IOException: Could not fully delete E:\OSS\hadoop-branch-2\hadoop-hdfs-project\hadoop-hdfs\target\test\data\dfs\name1
    at org.apache.hadoop.hdfs.MiniDFSCluster.createNameNodesAndSetConf(MiniDFSCluster.java:1047)
    at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:883)
    at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:514)
    at org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:473)
    at org.apache.hadoop.hdfs.server.namenode.TestNestedEncryptionZones.setup(TestNestedEncryptionZones.java:104)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
    at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
    at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
    at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24)
    at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271)
    at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
    at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
    at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
    at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
    at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
    at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
    at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
    at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
    at org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
    at org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
    at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
    at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
    at org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:379)
    at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:340)
    at org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:125)
    at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:413)

[INFO]
[INFO] Results:
[INFO]
[ERROR] Errors:
[ERROR] TestNestedEncryptionZones.setup:104 » IO Could not fully delete E:\OSS\hadoop-...
[INFO]
[ERROR] Tests run: 2, Failures: 0, Errors: 1, Skipped: 0
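The usual fix for this class of failure is a null-checked teardown that always shuts the cluster down, so the name/data directories can be deleted on the next run. A minimal sketch of the pattern; HypotheticalCluster stands in for MiniDFSCluster, and in a real test tearDown would be a JUnit @After method:

```java
public class ShutdownSketch {
    // Stand-in for MiniDFSCluster: while "running", it holds on-disk state
    // (and, on Windows, open file handles) under target/test/data/dfs.
    static class HypotheticalCluster {
        boolean running = true;
        void shutdown() { running = false; } // releases dirs and handles
    }

    static HypotheticalCluster cluster;

    // In JUnit this would be annotated @After so it runs even when the
    // test body throws; the null check makes it safe if setup failed early.
    static void tearDown() {
        if (cluster != null) {
            cluster.shutdown();
            cluster = null;
        }
    }

    public static void main(String[] args) {
        cluster = new HypotheticalCluster();
        try {
            // ... test body that may throw ...
        } finally {
            tearDown();
        }
        System.out.println("cluster cleared: " + (cluster == null));
    }
}
```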






[jira] [Created] (HDFS-13555) TestNetworkTopology#testInvalidNetworkTopologiesNotCachedInHdfs times out on Windows

2018-05-14 Thread Anbang Hu (JIRA)
Anbang Hu created HDFS-13555:


 Summary: 
TestNetworkTopology#testInvalidNetworkTopologiesNotCachedInHdfs times out on 
Windows
 Key: HDFS-13555
 URL: https://issues.apache.org/jira/browse/HDFS-13555
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Anbang Hu
Assignee: Anbang Hu









[jira] [Created] (HDFS-13554) TestDatanodeRegistration#testForcedRegistration does not shut down cluster

2018-05-14 Thread Anbang Hu (JIRA)
Anbang Hu created HDFS-13554:


 Summary: TestDatanodeRegistration#testForcedRegistration does not 
shut down cluster
 Key: HDFS-13554
 URL: https://issues.apache.org/jira/browse/HDFS-13554
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Anbang Hu
Assignee: Anbang Hu









[jira] [Created] (HDFS-13553) RBF: Support global quota

2018-05-14 Thread Íñigo Goiri (JIRA)
Íñigo Goiri created HDFS-13553:
--

 Summary: RBF: Support global quota
 Key: HDFS-13553
 URL: https://issues.apache.org/jira/browse/HDFS-13553
 Project: Hadoop HDFS
  Issue Type: Task
Reporter: Íñigo Goiri
Assignee: Yiqun Lin


Add quota management to Router-based federation.






[jira] [Created] (HDFS-13552) TestFileAppend.testAppendCorruptedBlock,TestFileAppend.testConcurrentAppendRead time out on Windows

2018-05-14 Thread Anbang Hu (JIRA)
Anbang Hu created HDFS-13552:


 Summary: 
TestFileAppend.testAppendCorruptedBlock,TestFileAppend.testConcurrentAppendRead 
time out on Windows
 Key: HDFS-13552
 URL: https://issues.apache.org/jira/browse/HDFS-13552
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Anbang Hu
Assignee: Anbang Hu


{color:#d04437}[INFO] Running org.apache.hadoop.hdfs.TestFileAppend{color}
{color:#d04437}[ERROR] Tests run: 2, Failures: 0, Errors: 2, Skipped: 0, Time 
elapsed: 20.073 s <<< FAILURE! - in org.apache.hadoop.hdfs.TestFileAppend{color}
{color:#d04437}[ERROR] 
testConcurrentAppendRead(org.apache.hadoop.hdfs.TestFileAppend) Time elapsed: 
10.005 s <<< ERROR!{color}
{color:#d04437}java.lang.Exception: test timed out after 1 
milliseconds{color}
{color:#d04437} at java.net.Inet4AddressImpl.getHostByAddr(Native Method){color}
{color:#d04437} at 
java.net.InetAddress$2.getHostByAddr(InetAddress.java:932){color}
{color:#d04437} at 
java.net.InetAddress.getHostFromNameService(InetAddress.java:617){color}
{color:#d04437} at 
java.net.InetAddress.getCanonicalHostName(InetAddress.java:588){color}
{color:#d04437} at 
org.apache.hadoop.net.DNS.resolveLocalHostname(DNS.java:284){color}
{color:#d04437} at org.apache.hadoop.net.DNS.<clinit>(DNS.java:61){color}
{color:#d04437} at 
org.apache.hadoop.hdfs.server.namenode.NNStorage.newBlockPoolID(NNStorage.java:989){color}
{color:#d04437} at 
org.apache.hadoop.hdfs.server.namenode.NNStorage.newNamespaceInfo(NNStorage.java:599){color}
{color:#d04437} at 
org.apache.hadoop.hdfs.server.namenode.FSImage.format(FSImage.java:168){color}
{color:#d04437} at 
org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:1172){color}
{color:#d04437} at 
org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:403){color}
{color:#d04437} at 
org.apache.hadoop.hdfs.DFSTestUtil.formatNameNode(DFSTestUtil.java:234){color}
{color:#d04437} at 
org.apache.hadoop.hdfs.MiniDFSCluster.createNameNodesAndSetConf(MiniDFSCluster.java:1080){color}
{color:#d04437} at 
org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:883){color}
{color:#d04437} at 
org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:514){color}
{color:#d04437} at 
org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:473){color}
{color:#d04437} at 
org.apache.hadoop.hdfs.TestFileAppend.testConcurrentAppendRead(TestFileAppend.java:701){color}
{color:#d04437} at sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
Method){color}
{color:#d04437} at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62){color}
{color:#d04437} at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43){color}
{color:#d04437} at java.lang.reflect.Method.invoke(Method.java:498){color}
{color:#d04437} at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47){color}
{color:#d04437} at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12){color}
{color:#d04437} at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44){color}
{color:#d04437} at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17){color}
{color:#d04437} at 
org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74){color}

{color:#d04437}[ERROR] 
testAppendCorruptedBlock(org.apache.hadoop.hdfs.TestFileAppend) Time elapsed: 
10.001 s <<< ERROR!{color}
{color:#d04437}java.lang.Exception: test timed out after 10000 
milliseconds{color}
{color:#d04437} at java.net.Inet4AddressImpl.getHostByAddr(Native Method){color}
{color:#d04437} at 
java.net.InetAddress$2.getHostByAddr(InetAddress.java:932){color}
{color:#d04437} at 
java.net.InetAddress.getHostFromNameService(InetAddress.java:617){color}
{color:#d04437} at 
java.net.InetAddress.getCanonicalHostName(InetAddress.java:588){color}
{color:#d04437} at 
org.apache.hadoop.security.SecurityUtil.getLocalHostName(SecurityUtil.java:256){color}
{color:#d04437} at 
org.apache.hadoop.security.SecurityUtil.replacePattern(SecurityUtil.java:224){color}
{color:#d04437} at 
org.apache.hadoop.security.SecurityUtil.getServerPrincipal(SecurityUtil.java:179){color}
{color:#d04437} at 
org.apache.hadoop.security.AuthenticationFilterInitializer.getFilterConfigMap(AuthenticationFilterInitializer.java:90){color}
{color:#d04437} at 
org.apache.hadoop.http.HttpServer2.getFilterProperties(HttpServer2.java:521){color}
{color:#d04437} at 
org.apache.hadoop.http.HttpServer2.constructSecretProvider(HttpServer2.java:511){color}
{color:#d04437} at 
org.apache.hadoop.http.HttpServer2.<init>(HttpServer2.java:400){color}
{color:#d04437} at 
org.apache.hadoop.http.HttpServer2.<init>(HttpServer2.java:115){color}
{color:#d04437} at 
org.apache.hadoop.http.HttpServer2$Builder.build(HttpServer2.java:336){color}

[jira] [Created] (HDDS-71) Send ContainerType to Datanode during container creation

2018-05-14 Thread Bharat Viswanadham (JIRA)
Bharat Viswanadham created HDDS-71:
--

 Summary: Send ContainerType to Datanode during container creation
 Key: HDDS-71
 URL: https://issues.apache.org/jira/browse/HDDS-71
 Project: Hadoop Distributed Data Store
  Issue Type: Sub-task
Reporter: Bharat Viswanadham








[jira] [Created] (HDFS-13551) TestMiniDFSCluster#testClusterSetStorageCapacity does not shut down cluster

2018-05-14 Thread Anbang Hu (JIRA)
Anbang Hu created HDFS-13551:


 Summary: TestMiniDFSCluster#testClusterSetStorageCapacity does not 
shut down cluster
 Key: HDFS-13551
 URL: https://issues.apache.org/jira/browse/HDFS-13551
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Anbang Hu
Assignee: Anbang Hu


{color:#333333}TestMiniDFSCluster#testClusterSetStorageCapacity does not shut 
down the cluster properly, which leads to:{color}

{color:#d04437}[INFO] Running org.apache.hadoop.hdfs.TestMiniDFSCluster{color}
{color:#d04437}[ERROR] Tests run: 7, Failures: 0, Errors: 3, Skipped: 1, Time 
elapsed: 136.409 s <<< FAILURE! - in 
org.apache.hadoop.hdfs.TestMiniDFSCluster{color}
{color:#d04437}[ERROR] 
testClusterNoStorageTypeSetForDatanodes(org.apache.hadoop.hdfs.TestMiniDFSCluster)
 Time elapsed: 0.034 s <<< ERROR!{color}
{color:#d04437}java.io.IOException: Could not fully delete 
E:\OSS\hadoop-branch-2\hadoop-hdfs-project\hadoop-hdfs\target\test\data\dfs\name1{color}
{color:#d04437} at 
org.apache.hadoop.hdfs.MiniDFSCluster.createNameNodesAndSetConf(MiniDFSCluster.java:1047){color}
{color:#d04437} at 
org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:883){color}
{color:#d04437} at 
org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:514){color}
{color:#d04437} at 
org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:473){color}
{color:#d04437} at 
org.apache.hadoop.hdfs.TestMiniDFSCluster.testClusterNoStorageTypeSetForDatanodes(TestMiniDFSCluster.java:255){color}
{color:#d04437} at sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
Method){color}
{color:#d04437} at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62){color}
{color:#d04437} at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43){color}
{color:#d04437} at java.lang.reflect.Method.invoke(Method.java:498){color}
{color:#d04437} at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47){color}
{color:#d04437} at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12){color}
{color:#d04437} at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44){color}
{color:#d04437} at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17){color}
{color:#d04437} at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26){color}
{color:#d04437} at 
org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271){color}
{color:#d04437} at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70){color}
{color:#d04437} at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50){color}
{color:#d04437} at 
org.junit.runners.ParentRunner$3.run(ParentRunner.java:238){color}
{color:#d04437} at 
org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63){color}
{color:#d04437} at 
org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236){color}
{color:#d04437} at 
org.junit.runners.ParentRunner.access$000(ParentRunner.java:53){color}
{color:#d04437} at 
org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229){color}
{color:#d04437} at 
org.junit.runners.ParentRunner.run(ParentRunner.java:309){color}
{color:#d04437} at 
org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365){color}
{color:#d04437} at 
org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273){color}
{color:#d04437} at 
org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238){color}
{color:#d04437} at 
org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159){color}
{color:#d04437} at 
org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:379){color}
{color:#d04437} at 
org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:340){color}
{color:#d04437} at 
org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:125){color}
{color:#d04437} at 
org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:413){color}

{color:#d04437}[ERROR] 
testClusterSetDatanodeDifferentStorageType(org.apache.hadoop.hdfs.TestMiniDFSCluster)
 Time elapsed: 0.023 s <<< ERROR!{color}
{color:#d04437}java.io.IOException: Could not fully delete 
E:\OSS\hadoop-branch-2\hadoop-hdfs-project\hadoop-hdfs\target\test\data\dfs\name1{color}
{color:#d04437} at 
org.apache.hadoop.hdfs.MiniDFSCluster.createNameNodesAndSetConf(MiniDFSCluster.java:1047){color}
{color:#d04437} at 
org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:883){color}
{color:#d04437} at 
org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:514){color}
{color:#d04437} at 

[jira] [Created] (HDFS-13550) TestDebugAdmin#testComputeMetaCommand fails on Windows

2018-05-14 Thread Anbang Hu (JIRA)
Anbang Hu created HDFS-13550:


 Summary: TestDebugAdmin#testComputeMetaCommand fails on Windows
 Key: HDFS-13550
 URL: https://issues.apache.org/jira/browse/HDFS-13550
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Anbang Hu
Assignee: Anbang Hu


{color:#d04437}[INFO] Running org.apache.hadoop.hdfs.tools.TestDebugAdmin{color}
{color:#d04437}[ERROR] Tests run: 1, Failures: 1, Errors: 0, Skipped: 0, Time 
elapsed: 32.845 s <<< FAILURE! - in 
org.apache.hadoop.hdfs.tools.TestDebugAdmin{color}
{color:#d04437}[ERROR] 
testComputeMetaCommand(org.apache.hadoop.hdfs.tools.TestDebugAdmin) Time 
elapsed: 32.792 s <<< FAILURE!{color}
{color:#d04437}org.junit.ComparisonFailure:{color}
{color:#d04437}expected:<...file, and save it to[ the specified output metadata 
file.**NOTE: Use at your own risk! If the block file is corrupt and you 
overwrite it's meta file, it will show up as good in HDFS, but you can't read 
the data. Only use as a last measure, and when you are 100% certain the block 
file is good.]> but was:<...file, and save it to[{color}
{color:#d04437} the specified output metadata file.{color}

{color:#d04437}**NOTE: Use at your own risk!{color}
{color:#d04437} If the block file is corrupt and you overwrite it's meta 
file,{color}
{color:#d04437} it will show up as good in HDFS, but you can't read the 
data.{color}
{color:#d04437} Only use as a last measure, and when you are 100% certain the 
block file is good.{color}
{color:#d04437}]>{color}
{color:#d04437} at org.junit.Assert.assertEquals(Assert.java:115){color}
{color:#d04437} at org.junit.Assert.assertEquals(Assert.java:144){color}
{color:#d04437} at 
org.apache.hadoop.hdfs.tools.TestDebugAdmin.testComputeMetaCommand(TestDebugAdmin.java:137){color}
{color:#d04437} at sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
Method){color}
{color:#d04437} at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62){color}
{color:#d04437} at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43){color}
{color:#d04437} at java.lang.reflect.Method.invoke(Method.java:498){color}
{color:#d04437} at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47){color}
{color:#d04437} at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12){color}
{color:#d04437} at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44){color}
{color:#d04437} at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17){color}
{color:#d04437} at 
org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74){color}

{color:#d04437}[INFO]{color}
{color:#d04437}[INFO] Results:{color}
{color:#d04437}[INFO]{color}
{color:#d04437}[ERROR] Failures:{color}
{color:#d04437}[ERROR] TestDebugAdmin.testComputeMetaCommand:137 
expected:<...file, and save it to[ the specified output metadata file.**NOTE: 
Use at your own risk! If the block file is corrupt and you overwrite it's meta 
file, it will show up as good in HDFS, but you can't read the data. Only use as 
a last measure, and when you are 100% certain the block file is good.]> but 
was:<...file, and save it to[{color}
{color:#d04437} the specified output metadata file.{color}

{color:#d04437}**NOTE: Use at your own risk!{color}
{color:#d04437} If the block file is corrupt and you overwrite it's meta 
file,{color}
{color:#d04437} it will show up as good in HDFS, but you can't read the 
data.{color}
{color:#d04437} Only use as a last measure, and when you are 100% certain the 
block file is good.{color}
{color:#d04437}]>{color}
{color:#d04437}[INFO]{color}
{color:#d04437}[ERROR] Tests run: 1, Failures: 1, Errors: 0, Skipped: 0{color}
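The diff above shows the expected help text on a single line while the actual output spans several lines, which suggests the strings differ only in line breaks (Windows CRLF and wrapping), not in wording. One common remedy — a sketch of the general technique, not necessarily the actual HDFS fix — is to normalize line endings before comparing:

```java
// Sketch: make a string comparison platform-independent by normalizing
// line breaks first. Illustrative helpers, not Hadoop code.
public class NormalizeEol {
    /** Convert CRLF and lone CR to LF. */
    static String normalizeEol(String s) {
        return s.replace("\r\n", "\n").replace('\r', '\n');
    }

    /** Flatten all line breaks and runs of spaces for a one-line comparison. */
    static String flatten(String s) {
        return normalizeEol(s).replace('\n', ' ').replaceAll(" +", " ").trim();
    }

    public static void main(String[] args) {
        String expected = "**NOTE: Use at your own risk!";
        String actual = "**NOTE: Use at your own risk!\r\n"; // Windows output
        System.out.println(flatten(expected).equals(flatten(actual))); // true
    }
}
```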





[jira] [Created] (HDFS-13549) TestDoAsEffectiveUser#testRealUserSetup,TestDoAsEffectiveUser#testRealUserAuthorizationSuccess time out on Windows

2018-05-14 Thread Anbang Hu (JIRA)
Anbang Hu created HDFS-13549:


 Summary: 
TestDoAsEffectiveUser#testRealUserSetup,TestDoAsEffectiveUser#testRealUserAuthorizationSuccess
 time out on Windows
 Key: HDFS-13549
 URL: https://issues.apache.org/jira/browse/HDFS-13549
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Anbang Hu
Assignee: Anbang Hu


{color:#d04437}[INFO] Running 
org.apache.hadoop.security.TestDoAsEffectiveUser{color}
{color:#d04437}[ERROR] Tests run: 2, Failures: 0, Errors: 2, Skipped: 0, Time 
elapsed: 8.307 s <<< FAILURE! - in 
org.apache.hadoop.security.TestDoAsEffectiveUser{color}
{color:#d04437}[ERROR] 
testRealUserSetup(org.apache.hadoop.security.TestDoAsEffectiveUser) Time 
elapsed: 4.107 s <<< ERROR!{color}
{color:#d04437}java.lang.Exception: test timed out after 4000 
milliseconds{color}
{color:#d04437} at java.net.Inet4AddressImpl.getHostByAddr(Native Method){color}
{color:#d04437} at 
java.net.InetAddress$2.getHostByAddr(InetAddress.java:932){color}
{color:#d04437} at 
java.net.InetAddress.getHostFromNameService(InetAddress.java:617){color}
{color:#d04437} at 
java.net.InetAddress.getCanonicalHostName(InetAddress.java:588){color}
{color:#d04437} at 
org.apache.hadoop.security.TestDoAsEffectiveUser.configureSuperUserIPAddresses(TestDoAsEffectiveUser.java:103){color}
{color:#d04437} at 
org.apache.hadoop.security.TestDoAsEffectiveUser.testRealUserSetup(TestDoAsEffectiveUser.java:188){color}
{color:#d04437} at sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
Method){color}
{color:#d04437} at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62){color}
{color:#d04437} at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43){color}
{color:#d04437} at java.lang.reflect.Method.invoke(Method.java:498){color}
{color:#d04437} at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47){color}
{color:#d04437} at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12){color}
{color:#d04437} at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44){color}
{color:#d04437} at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17){color}
{color:#d04437} at 
org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74){color}

{color:#d04437}[ERROR] 
testRealUserAuthorizationSuccess(org.apache.hadoop.security.TestDoAsEffectiveUser)
 Time elapsed: 4.002 s <<< ERROR!{color}
{color:#d04437}java.lang.Exception: test timed out after 4000 
milliseconds{color}
{color:#d04437} at java.net.Inet4AddressImpl.getHostByAddr(Native Method){color}
{color:#d04437} at 
java.net.InetAddress$2.getHostByAddr(InetAddress.java:932){color}
{color:#d04437} at 
java.net.InetAddress.getHostFromNameService(InetAddress.java:617){color}
{color:#d04437} at 
java.net.InetAddress.getCanonicalHostName(InetAddress.java:588){color}
{color:#d04437} at 
org.apache.hadoop.security.TestDoAsEffectiveUser.configureSuperUserIPAddresses(TestDoAsEffectiveUser.java:103){color}
{color:#d04437} at 
org.apache.hadoop.security.TestDoAsEffectiveUser.testRealUserAuthorizationSuccess(TestDoAsEffectiveUser.java:218){color}
{color:#d04437} at sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
Method){color}
{color:#d04437} at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62){color}
{color:#d04437} at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43){color}
{color:#d04437} at java.lang.reflect.Method.invoke(Method.java:498){color}
{color:#d04437} at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47){color}
{color:#d04437} at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12){color}
{color:#d04437} at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44){color}
{color:#d04437} at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17){color}
{color:#d04437} at 
org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74){color}

{color:#d04437}[INFO]{color}
{color:#d04437}[INFO] Results:{color}
{color:#d04437}[INFO]{color}
{color:#d04437}[ERROR] Errors:{color}
{color:#d04437}[ERROR] 
TestDoAsEffectiveUser.testRealUserAuthorizationSuccess:218->configureSuperUserIPAddresses:103
 »{color}
{color:#d04437}[ERROR] 
TestDoAsEffectiveUser.testRealUserSetup:188->configureSuperUserIPAddresses:103 
»{color}
{color:#d04437}[INFO]{color}
{color:#d04437}[ERROR] Tests run: 2, Failures: 0, Errors: 2, Skipped: 0{color}
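Both timeouts above are spent inside a reverse DNS lookup (`InetAddress#getCanonicalHostName`), which can hang on Windows hosts with slow or misconfigured resolvers. A hedged sketch of one way to bound such a lookup — shown for illustration, not a patch Hadoop necessarily ships — wraps it in a `Future` with its own deadline:

```java
import java.net.InetAddress;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;

// Sketch: give a reverse-DNS lookup a hard deadline so a slow resolver fails
// fast instead of consuming the whole JUnit test timeout.
public class BoundedReverseLookup {
    static String canonicalHostName(InetAddress addr, long timeoutMs) {
        ExecutorService ex = Executors.newSingleThreadExecutor();
        try {
            Future<String> f = ex.submit(addr::getCanonicalHostName);
            return f.get(timeoutMs, TimeUnit.MILLISECONDS);
        } catch (Exception e) {
            return addr.getHostAddress(); // fall back to the literal address
        } finally {
            ex.shutdownNow();
        }
    }

    public static void main(String[] args) {
        // Loopback resolves locally, so this returns well within the deadline.
        System.out.println(canonicalHostName(InetAddress.getLoopbackAddress(), 2000));
    }
}
```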



[jira] [Created] (HDFS-13548) TestResolveHdfsSymlink#testFcResolveAfs fails on Windows

2018-05-14 Thread Anbang Hu (JIRA)
Anbang Hu created HDFS-13548:


 Summary: TestResolveHdfsSymlink#testFcResolveAfs fails on Windows
 Key: HDFS-13548
 URL: https://issues.apache.org/jira/browse/HDFS-13548
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Anbang Hu
Assignee: Anbang Hu


{color:#333333}TestResolveHdfsSymlink#testFcResolveAfs fails on Windows with the 
following error message:{color}

{color:#d04437}[INFO] Running org.apache.hadoop.fs.TestResolveHdfsSymlink{color}
{color:#d04437}[ERROR] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time 
elapsed: 28.574 s <<< FAILURE! - in 
org.apache.hadoop.fs.TestResolveHdfsSymlink{color}
{color:#d04437}[ERROR] 
testFcResolveAfs(org.apache.hadoop.fs.TestResolveHdfsSymlink) Time elapsed: 
0.039 s <<< ERROR!{color}
{color:#d04437}java.io.IOException: Mkdirs failed to create 
file:/E:/OSS/hadoop-branch-2/hadoop-hdfs-project/hadoop-hdfs/file:/E:/OSS/hadoop-branch-2/hadoop-hdfs-project/hadoop-hdfs/target/test/data/n014HnmeeA{color}
{color:#d04437} at 
org.apache.hadoop.hdfs.DFSTestUtil.createFile(DFSTestUtil.java:360){color}
{color:#d04437} at 
org.apache.hadoop.fs.TestResolveHdfsSymlink.testFcResolveAfs(TestResolveHdfsSymlink.java:88){color}
{color:#d04437} at sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
Method){color}
{color:#d04437} at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62){color}
{color:#d04437} at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43){color}
{color:#d04437} at java.lang.reflect.Method.invoke(Method.java:498){color}
{color:#d04437} at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47){color}
{color:#d04437} at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12){color}
{color:#d04437} at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44){color}
{color:#d04437} at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17){color}
{color:#d04437} at 
org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271){color}
{color:#d04437} at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70){color}
{color:#d04437} at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50){color}
{color:#d04437} at 
org.junit.runners.ParentRunner$3.run(ParentRunner.java:238){color}
{color:#d04437} at 
org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63){color}
{color:#d04437} at 
org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236){color}
{color:#d04437} at 
org.junit.runners.ParentRunner.access$000(ParentRunner.java:53){color}
{color:#d04437} at 
org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229){color}
{color:#d04437} at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26){color}
{color:#d04437} at 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27){color}
{color:#d04437} at 
org.junit.runners.ParentRunner.run(ParentRunner.java:309){color}
{color:#d04437} at 
org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365){color}
{color:#d04437} at 
org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273){color}
{color:#d04437} at 
org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238){color}
{color:#d04437} at 
org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159){color}
{color:#d04437} at 
org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:379){color}
{color:#d04437} at 
org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:340){color}
{color:#d04437} at 
org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:125){color}
{color:#d04437} at 
org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:413){color}

{color:#d04437}[INFO]{color}
{color:#d04437}[INFO] Results:{color}
{color:#d04437}[INFO]{color}
{color:#d04437}[ERROR] Errors:{color}
{color:#d04437}[ERROR] TestResolveHdfsSymlink.testFcResolveAfs:88 » IO Mkdirs 
failed to create file:/...{color}
{color:#d04437}[INFO]{color}
{color:#d04437}[ERROR] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0{color}
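The doubled `file:/E:/...file:/E:/...` prefix in the Mkdirs error suggests an already-absolute `file:` path was joined onto the base working directory as if it were relative. A minimal JDK-only illustration of that failure mode and the guard against it (helper names here are hypothetical, not the code in DFSTestUtil):

```java
// Sketch of the suspected bug: prepending a base directory to a child path
// that is already absolute produces the doubled "file:/E:/..." prefix seen
// in the error above. Both helpers are hypothetical.
public class DoubledPath {
    static String naiveJoin(String baseDir, String child) {
        return baseDir + "/" + child; // wrong when child is already absolute
    }

    static String safeJoin(String baseDir, String child) {
        // Keep an already-absolute child as-is instead of re-prefixing it.
        return (child.startsWith("file:/") || child.startsWith("/"))
                ? child
                : baseDir + "/" + child;
    }

    public static void main(String[] args) {
        String base = "file:/E:/OSS/hadoop-branch-2/hadoop-hdfs-project/hadoop-hdfs";
        String child = base + "/target/test/data/dir";
        System.out.println(naiveJoin(base, child)); // doubled prefix, like the error
        System.out.println(safeJoin(base, child));  // single prefix
    }
}
```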





[jira] [Resolved] (HDFS-8537) Fix hdfs debug command document in HDFSCommands.md

2018-05-14 Thread Anu Engineer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8537?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer resolved HDFS-8537.

Resolution: Not A Problem

> Fix hdfs debug command document in HDFSCommands.md
> --
>
> Key: HDFS-8537
> URL: https://issues.apache.org/jira/browse/HDFS-8537
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.7.0
>Reporter: Xiaoyu Yao
>Assignee: Anu Engineer
>Priority: Minor
>
> In HDFSCommands.md, the *dfs* should be *debug*:
> {code}
> Usage: `hdfs dfs verify [-meta <metadata-file>] [-block <block-file>]`
> Usage: `hdfs dfs recoverLease [-path <path>] [-retries <num-retries>]`
> {code}





[jira] [Resolved] (HDFS-8536) HDFS debug command is missed from top level help message

2018-05-14 Thread Anu Engineer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8536?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer resolved HDFS-8536.

Resolution: Not A Problem

> HDFS debug command is missed from top level help message
> 
>
> Key: HDFS-8536
> URL: https://issues.apache.org/jira/browse/HDFS-8536
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: tools
>Affects Versions: 2.7.0
>Reporter: Xiaoyu Yao
>Assignee: Anu Engineer
>Priority: Minor
>
> The HDFS top-level help message is missing the debug command. This JIRA is 
> opened to add it to the hdfs top-level command help. 
> {code}
> HW11217:hadoop xyao$ hdfs
> Usage: hdfs [--config confdir] [--daemon (start|stop|status)]
>[--loglevel loglevel] COMMAND
>where COMMAND is one of:
>   balancer run a cluster balancing utility
>   cacheadmin   configure the HDFS cache
>   classpathprints the class path needed to get the
>Hadoop jar and the required libraries
>   crypto   configure HDFS encryption zones
>   datanode run a DFS datanode
>   dfs  run a filesystem command on the file system
>   dfsadmin run a DFS admin client
>   fetchdt  fetch a delegation token from the NameNode
>   fsck run a DFS filesystem checking utility
>   getconf  get config values from configuration
>   groups   get the groups which users belong to
>   haadmin  run a DFS HA admin client
>   jmxget   get JMX exported values from NameNode or DataNode.
>   journalnode  run the DFS journalnode
>   lsSnapshottableDir   list all snapshottable dirs owned by the current user
>Use -help to see options
>   moverrun a utility to move block replicas across
>storage types
>   namenode run the DFS namenode
>Use -format to initialize the DFS filesystem
>   nfs3 run an NFS version 3 gateway
>   oev  apply the offline edits viewer to an edits file
>   oiv  apply the offline fsimage viewer to an fsimage
>   oiv_legacy   apply the offline fsimage viewer to a legacy fsimage
>   portmap  run a portmap service
>   secondarynamenoderun the DFS secondary namenode
>   snapshotDiff diff two snapshots of a directory or diff the
>current directory contents with a snapshot
>   storagepolicies  list/get/set block storage policies
>   version  print the version
>   zkfc run the ZK Failover Controller daemon
> Most commands print help when invoked w/o parameters.
> {code}
> {code}
> HW11217:hadoop xyao$ hdfs debug
> Usage: hdfs debug <command> [arguments]
> verify [-meta <metadata-file>] [-block <block-file>]
> recoverLease [-path <path>] [-retries <num-retries>]
> {code}





[jira] [Created] (HDDS-70) Fix config names for secure ksm and scm

2018-05-14 Thread Ajay Kumar (JIRA)
Ajay Kumar created HDDS-70:
--

 Summary: Fix config names for secure ksm and scm
 Key: HDDS-70
 URL: https://issues.apache.org/jira/browse/HDDS-70
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Ajay Kumar


There are some inconsistencies in the KSM and SCM configs for Kerberos. This 
JIRA intends to correct them.





[jira] [Resolved] (HDDS-56) TestContainerSupervisor#testAddingNewPoolWorks and TestContainerSupervisor#testDetectOverReplica fail consistently

2018-05-14 Thread Anu Engineer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDDS-56?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer resolved HDDS-56.
--
Resolution: Not A Problem

> TestContainerSupervisor#testAddingNewPoolWorks and 
> TestContainerSupervisor#testDetectOverReplica fail consistently
> --
>
> Key: HDDS-56
> URL: https://issues.apache.org/jira/browse/HDDS-56
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Manager
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Minor
>






[jira] [Resolved] (HDFS-10356) Ozone: Container server needs enhancements to control of bind address for greater flexibility and testability.

2018-05-14 Thread Anu Engineer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10356?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer resolved HDFS-10356.
-
Resolution: Fixed

> Ozone: Container server needs enhancements to control of bind address for 
> greater flexibility and testability.
> --
>
> Key: HDFS-10356
> URL: https://issues.apache.org/jira/browse/HDFS-10356
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Chris Nauroth
>Assignee: Anu Engineer
>Priority: Major
>  Labels: OzonePostMerge, tocheck
>
> The container server, as implemented in class 
> {{org.apache.hadoop.ozone.container.common.transport.server.XceiverServer}}, 
> currently does not offer the same degree of flexibility as our other RPC 
> servers for controlling the network interface and port used in the bind call. 
>  There is no "bind-host" property, so it is not possible to select all 
> available network interfaces via the 0.0.0.0 wildcard address.  If the 
> requested port is different from the actual bound port (i.e. setting port to 
> 0 in test cases), then there is no exposure of that actual bound port to 
> clients.





[jira] [Resolved] (HDFS-11910) Ozone:KSM: Add setVolumeAcls to allow adding/removing acls from a KSM volume

2018-05-14 Thread Anu Engineer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11910?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer resolved HDFS-11910.
-
Resolution: Duplicate

> Ozone:KSM: Add setVolumeAcls to allow adding/removing acls from a KSM volume
> 
>
> Key: HDFS-11910
> URL: https://issues.apache.org/jira/browse/HDFS-11910
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Mukul Kumar Singh
>Assignee: Mukul Kumar Singh
>Priority: Major
>  Labels: OzonePostMerge
>
> Creating a KSM volume sets the ACLs for the user creating it; however, it 
> would be desirable to have setVolumeAcls to change the set of ACLs on the 
> volume.





[jira] [Resolved] (HDFS-12559) Ozone: TestContainerPersistence#testListContainer sometimes timeout

2018-05-14 Thread Anu Engineer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12559?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer resolved HDFS-12559.
-
Resolution: Works for Me

> Ozone: TestContainerPersistence#testListContainer sometimes timeout
> ---
>
> Key: HDFS-12559
> URL: https://issues.apache.org/jira/browse/HDFS-12559
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Xiaoyu Yao
>Assignee: Ajay Kumar
>Priority: Major
>
> This test creates 1000 containers, reads them back 5 containers at a time, 
> and verifies that we got back all containers. On my laptop it takes 11 s to 
> finish, but on some slow Jenkins machines it could take longer. Currently 
> the whole test suite {{TestContainerPersistence}} has a timeout rule of 
> 5 min. We need to understand why the RocksDB open is taking such a long 
> time, as shown in the stack below.
> {code}
> java.lang.Exception: test timed out after 30 milliseconds
>   at org.rocksdb.RocksDB.open(Native Method)
>   at org.rocksdb.RocksDB.open(RocksDB.java:231)
>   at org.apache.hadoop.utils.RocksDBStore.(RocksDBStore.java:64)
>   at 
> org.apache.hadoop.utils.MetadataStoreBuilder.build(MetadataStoreBuilder.java:94)
>   at 
> org.apache.hadoop.ozone.container.common.helpers.ContainerUtils.createMetadata(ContainerUtils.java:254)
>   at 
> org.apache.hadoop.ozone.container.common.impl.ContainerManagerImpl.writeContainerInfo(ContainerManagerImpl.java:396)
>   at 
> org.apache.hadoop.ozone.container.common.impl.ContainerManagerImpl.createContainer(ContainerManagerImpl.java:329)
>   at 
> org.apache.hadoop.ozone.container.common.impl.TestContainerPersistence.testListContainer(TestContainerPersistence.java:341)
> {code}
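The listing pattern the test exercises (create many containers, page through them a few at a time, verify nothing is missed) can be sketched with a plain sorted map standing in for the container store. The names below are illustrative only, not the real ContainerManager API:

```java
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.NavigableMap;
import java.util.TreeMap;

public class PagedListingSketch {

    // Page through every key of `store`, `pageSize` keys at a time, the way
    // TestContainerPersistence#testListContainer pages through 1000
    // containers five at a time, and return everything seen.
    static List<String> listAll(NavigableMap<String, String> store, int pageSize) {
        List<String> result = new ArrayList<>();
        String startKey = null;
        boolean more = true;
        while (more) {
            // One "page": up to pageSize keys strictly after startKey.
            NavigableMap<String, String> tail =
                (startKey == null) ? store : store.tailMap(startKey, false);
            List<String> page = new ArrayList<>();
            for (String key : tail.keySet()) {
                page.add(key);
                if (page.size() == pageSize) break;
            }
            result.addAll(page);
            more = !page.isEmpty();
            if (more) startKey = page.get(page.size() - 1);
        }
        return result;
    }

    public static void main(String[] args) {
        NavigableMap<String, String> store = new TreeMap<>();
        for (int i = 0; i < 1000; i++) {
            store.put(String.format("container-%04d", i), "meta");
        }
        List<String> all = listAll(store, 5);
        // Every container comes back exactly once, in order.
        if (all.size() != 1000) throw new AssertionError(all.size());
        if (new HashSet<>(all).size() != 1000) throw new AssertionError("duplicates");
        System.out.println("listed " + all.size() + " containers");
    }
}
```

The paging itself is cheap; per the stack trace, the time goes into opening one RocksDB instance per created container, which is where any investigation should start.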






[jira] [Resolved] (HDFS-12962) Ozone: SCM: ContainerStateManager#updateContainerState updates incorrect AllocatedBytes to container info.

2018-05-14 Thread Anu Engineer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12962?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer resolved HDFS-12962.
-
Resolution: Not A Problem

> Ozone: SCM: ContainerStateManager#updateContainerState updates incorrect 
> AllocatedBytes to container info.
> --
>
> Key: HDFS-12962
> URL: https://issues.apache.org/jira/browse/HDFS-12962
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Nanda kumar
>Assignee: Nanda kumar
>Priority: Major
>
> While updating container state through 
> {{ContainerStateManager#updateContainerState}}, AllocatedBytes of 
> {{ContainerStateManager}} should be used, not the one from 
> {{ContainerMapping}}.






[jira] [Resolved] (HDFS-13047) Ozone: TestKeys, TestKeysRatis and TestOzoneShell are failing because of read timeout

2018-05-14 Thread Anu Engineer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13047?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer resolved HDFS-13047.
-
Resolution: Not A Problem

> Ozone: TestKeys, TestKeysRatis and TestOzoneShell are failing because of read 
> timeout
> -
>
> Key: HDFS-13047
> URL: https://issues.apache.org/jira/browse/HDFS-13047
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Mukul Kumar Singh
>Assignee: Mukul Kumar Singh
>Priority: Minor
> Fix For: HDFS-7240
>
> Attachments: HDFS-13047-HDFS-7240.001.patch
>
>
> The tests are failing because of the following error.
> {code}
> org.apache.hadoop.ozone.web.client.OzoneRestClientException: Failed to 
> putKey: keyName=01e8b923-5876-4d5e-8adc-4214caf33f64, 
> file=/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/2/TestKeys/f8f75b8d-f15b-482f-afa0-babe1d6c4bf6
>   at 
> org.apache.hadoop.ozone.web.client.OzoneBucket.putKey(OzoneBucket.java:253)
>   at 
> org.apache.hadoop.ozone.web.client.TestKeys$PutHelper.putKey(TestKeys.java:218)
>   at 
> org.apache.hadoop.ozone.web.client.TestKeys$PutHelper.access$100(TestKeys.java:168)
>   at 
> org.apache.hadoop.ozone.web.client.TestKeys.runTestPutAndGetKeyWithDnRestart(TestKeys.java:297)
>   at 
> org.apache.hadoop.ozone.web.client.TestKeys.testPutAndGetKeyWithDnRestart(TestKeys.java:287)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
> Caused by: java.net.SocketTimeoutException: Read timed out
>   at java.net.SocketInputStream.socketRead0(Native Method)
>   at java.net.SocketInputStream.socketRead(SocketInputStream.java:116)
>   at java.net.SocketInputStream.read(SocketInputStream.java:171)
>   at java.net.SocketInputStream.read(SocketInputStream.java:141)
>   at 
> org.apache.http.impl.io.SessionInputBufferImpl.streamRead(SessionInputBufferImpl.java:139)
>   at 
> org.apache.http.impl.io.SessionInputBufferImpl.fillBuffer(SessionInputBufferImpl.java:155)
>   at 
> org.apache.http.impl.io.SessionInputBufferImpl.readLine(SessionInputBufferImpl.java:284)
>   at 
> org.apache.http.impl.conn.DefaultHttpResponseParser.parseHead(DefaultHttpResponseParser.java:140)
>   at 
> org.apache.http.impl.conn.DefaultHttpResponseParser.parseHead(DefaultHttpResponseParser.java:57)
>   at 
> org.apache.http.impl.io.AbstractMessageParser.parse(AbstractMessageParser.java:261)
>   at 
> org.apache.http.impl.DefaultBHttpClientConnection.receiveResponseHeader(DefaultBHttpClientConnection.java:165)
>   at 
> org.apache.http.impl.conn.CPoolProxy.receiveResponseHeader(CPoolProxy.java:167)
>   at 
> org.apache.http.protocol.HttpRequestExecutor.doReceiveResponse(HttpRequestExecutor.java:272)
>   at 
> org.apache.http.protocol.HttpRequestExecutor.execute(HttpRequestExecutor.java:124)
>   at 
> org.apache.http.impl.execchain.MainClientExec.execute(MainClientExec.java:271)
>   at 
> org.apache.http.impl.execchain.ProtocolExec.execute(ProtocolExec.java:184)
>   at org.apache.http.impl.execchain.RetryExec.execute(RetryExec.java:88)
>   at 
> org.apache.http.impl.execchain.RedirectExec.execute(RedirectExec.java:110)
>   at 
> org.apache.http.impl.client.InternalHttpClient.doExecute(InternalHttpClient.java:184)
>   at 
> org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:82)
>   at 
> org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:107)
>   at 
> org.apache.hadoop.ozone.web.client.OzoneBucket.executePutKey(OzoneBucket.java:276)
>   at 
> org.apache.hadoop.ozone.web.client.OzoneBucket.putKey(OzoneBucket.java:250)
>   ... 13 more
> {code}




[jira] [Resolved] (HDFS-13067) Ozone: Update the allocatedBlock size in SCM when delete blocks happen.

2018-05-14 Thread Anu Engineer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13067?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer resolved HDFS-13067.
-
Resolution: Not A Problem

> Ozone: Update the allocatedBlock size in SCM when delete blocks happen.
> ---
>
> Key: HDFS-13067
> URL: https://issues.apache.org/jira/browse/HDFS-13067
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Anu Engineer
>Priority: Trivial
>
> We rely on container reports to learn the actual allocated size of a
> container. We also maintain another counter that tracks the logical
> allocations, i.e. the number of blocks allocated in the container. While this
> number is used only to queue containers for closing, it would be a good idea
> to make sure it is updated when a delete block operation is performed, simply
> because we have the data.
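The bookkeeping this issue suggests can be sketched as follows; `ContainerBlockCounter` and its method names are hypothetical stand-ins, not the actual SCM classes:

```java
import java.util.concurrent.atomic.AtomicLong;

// Sketch: keep the logical per-container block count in step with deletes,
// not just allocations. Without the decrement, the counter used to queue
// containers for closing only ever drifts upward.
public class ContainerBlockCounter {
    private final AtomicLong allocatedBlocks = new AtomicLong();

    void onBlockAllocated() {
        allocatedBlocks.incrementAndGet();
    }

    // Update on delete too, "simply because we have the data".
    void onBlockDeleted() {
        allocatedBlocks.decrementAndGet();
    }

    long count() {
        return allocatedBlocks.get();
    }

    public static void main(String[] args) {
        ContainerBlockCounter c = new ContainerBlockCounter();
        c.onBlockAllocated();
        c.onBlockAllocated();
        c.onBlockDeleted();
        if (c.count() != 1) throw new AssertionError(c.count());
        System.out.println("blocks = " + c.count());
    }
}
```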






[jira] [Resolved] (HDFS-12734) Ozone: generate optional, version specific documentation during the build

2018-05-14 Thread Anu Engineer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12734?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer resolved HDFS-12734.
-
Resolution: Not A Problem

> Ozone: generate optional, version specific documentation during the build
> -
>
> Key: HDFS-12734
> URL: https://issues.apache.org/jira/browse/HDFS-12734
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
> Attachments: HDFS-12734-HDFS-7240.001.patch, 
> HDFS-12734-HDFS-7240.002.patch
>
>
> HDFS-12664 suggested a new way to include documentation in the KSM web UI.
> This patch modifies the build lifecycle to automatically generate the
> documentation *if* hugo is on the PATH. If hugo is not there, the
> documentation won't be generated and it won't be displayed (see HDFS-12661).
> To test: apply this patch on top of HDFS-12664, do a full build, and check the
> KSM web UI.






[jira] [Resolved] (HDFS-13221) Ozone: Make hadoop-common ozone free

2018-05-14 Thread Anu Engineer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13221?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer resolved HDFS-13221.
-
Resolution: Not A Problem

> Ozone: Make hadoop-common ozone free
> 
>
> Key: HDFS-13221
> URL: https://issues.apache.org/jira/browse/HDFS-13221
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Anu Engineer
>Assignee: Ajay Kumar
>Priority: Critical
>
> From the voting thread comments from [~daryn]. 
> {noformat}
> Common
>  
> Appear to be a number of superfluous changes.  The conf servlet must not be
> polluted with specific references and logic for ozone.  We don’t create
> dependencies from common to hdfs, mapred, yarn, hive, etc.  Common must be
> “ozone free”. 
> {noformat}
> This JIRA is to make sure that no notions of HDSL or Ozone abstractions have
> leaked into hadoop-common, and to clean up the current instances.
> [~daryn] Thanks for pointing this out.
>  






[jira] [Resolved] (HDFS-13332) Ozone: update log4j.properties changes for hdsl/ozone.

2018-05-14 Thread Anu Engineer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13332?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer resolved HDFS-13332.
-
Resolution: Won't Do

> Ozone: update log4j.properties changes for hdsl/ozone.
> --
>
> Key: HDFS-13332
> URL: https://issues.apache.org/jira/browse/HDFS-13332
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Major
>
> *hadoop-hdfs-project/hadoop-hdfs/src/test/resources/log4j.properties*






[jira] [Resolved] (HDFS-13355) Create IO provider for hdsl

2018-05-14 Thread Anu Engineer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13355?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer resolved HDFS-13355.
-
Resolution: Won't Do

> Create IO provider for hdsl
> ---
>
> Key: HDFS-13355
> URL: https://issues.apache.org/jira/browse/HDFS-13355
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: HDFS-7240
>Reporter: Ajay Kumar
>Priority: Major
> Fix For: HDFS-7240
>
>
> Create an abstraction like FileIoProvider for hdsl to handle disk failure and 
> other issues.






Re: [VOTE] Release Apache Hadoop 2.8.4 (RC0)

2018-05-14 Thread Wangda Tan
+1 (binding)

- Build from source.
- Ran sanity jobs successfully.

Thanks,
Wangda

On Mon, May 14, 2018 at 5:44 AM, Sunil G  wrote:

> +1 (binding)
>
> 1. Build package from src
> 2. Ran few MR jobs and verified checked App Priority cases
> 3. Node Label basic functions are ok.
>
> Thanks
> Sunil
>
>
> On Tue, May 8, 2018 at 11:11 PM 俊平堵  wrote:
>
> > Hi all,
> >  I've created the first release candidate (RC0) for Apache Hadoop
> > 2.8.4. This is our next maint release to follow up 2.8.3. It includes 77
> > important fixes and improvements.
> >
> > The RC artifacts are available at:
> > http://home.apache.org/~junping_du/hadoop-2.8.4-RC0
> >
> > The RC tag in git is: release-2.8.4-RC0
> >
> > The maven artifacts are available via repository.apache.org<
> > http://repository.apache.org> at:
> > https://repository.apache.org/content/repositories/orgapachehadoop-1118
> >
> > Please try the release and vote; the vote will run for the usual 5
> > working days, ending on 5/14/2018 PST time.
> >
> > Thanks,
> >
> > Junping
> >
>


[jira] [Created] (HDDS-59) Ozone client should update blocksize in KSM for sub-block writes

2018-05-14 Thread Mukul Kumar Singh (JIRA)
Mukul Kumar Singh created HDDS-59:
-

 Summary: Ozone client should update blocksize in KSM for sub-block 
writes
 Key: HDDS-59
 URL: https://issues.apache.org/jira/browse/HDDS-59
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
  Components: Ozone Client
Affects Versions: 0.2.1
Reporter: Mukul Kumar Singh
Assignee: Mukul Kumar Singh
 Fix For: 0.2.1


Currently the ozone client allocates a block of the required length from SCM
through KSM. However, due to error cases or small writes, the allocated block
may not be completely written.

In these cases the client should update KSM with the actual length of the
block. This helps in error cases as well as cases where the client does not
write the complete block to Ozone.
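A minimal sketch of that commit path, assuming a hypothetical `KeyManager` stand-in for the KSM client interface (the real Ozone client API differs):

```java
// Sketch of the HDDS-59 idea: after writing, the client reports the bytes
// actually written, not the allocated size, back to KSM.
interface KeyManager {
    void commitKey(String keyName, long actualLength);
}

public class OzoneWriteSketch {

    static long writeAndCommit(KeyManager ksm, String key,
                               byte[] data, long allocatedLength) {
        // A short write (or an error mid-write) can leave the block smaller
        // than what was allocated via SCM through KSM.
        long written = Math.min(data.length, allocatedLength);
        ksm.commitKey(key, written);  // tell KSM the real block length
        return written;
    }

    public static void main(String[] args) {
        final long[] committed = new long[1];
        KeyManager ksm = (k, len) -> committed[0] = len;
        long w = writeAndCommit(ksm, "key1", new byte[100], 4096);
        if (w != 100 || committed[0] != 100) throw new AssertionError();
        System.out.println("committed length = " + committed[0]);
    }
}
```

The point is simply that the commit carries the observed length, so KSM's metadata stays correct even when the write ends early.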






Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86

2018-05-14 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/781/

No changes




-1 overall


The following subsystems voted -1:
asflicense findbugs unit xml


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

FindBugs :

   module:hadoop-hdds/common 
   Useless control flow in 
org.apache.hadoop.hdds.protocol.datanode.proto.ContainerProtos$CloseContainerRequestProto$Builder.maybeForceBuilderInitialization()
 At ContainerProtos.java: At ContainerProtos.java:[line 17815] 
   Useless control flow in 
org.apache.hadoop.hdds.protocol.datanode.proto.ContainerProtos$CloseContainerResponseProto$Builder.maybeForceBuilderInitialization()
 At ContainerProtos.java: At ContainerProtos.java:[line 18363] 
   Useless control flow in 
org.apache.hadoop.hdds.protocol.datanode.proto.ContainerProtos$CopyContainerRequestProto$Builder.maybeForceBuilderInitialization()
 At ContainerProtos.java: At ContainerProtos.java:[line 34624] 
   Useless control flow in 
org.apache.hadoop.hdds.protocol.datanode.proto.ContainerProtos$CopyContainerResponseProto$Builder.maybeForceBuilderInitialization()
 At ContainerProtos.java: At ContainerProtos.java:[line 35479] 
   Useless control flow in 
org.apache.hadoop.hdds.protocol.datanode.proto.ContainerProtos$CreateContainerResponseProto$Builder.maybeForceBuilderInitialization()
 At ContainerProtos.java: At ContainerProtos.java:[line 12991] 
   Useless control flow in 
org.apache.hadoop.hdds.protocol.datanode.proto.ContainerProtos$DatanodeBlockID$Builder.maybeForceBuilderInitialization()
 At ContainerProtos.java: At ContainerProtos.java:[line 1112] 
   Useless control flow in 
org.apache.hadoop.hdds.protocol.datanode.proto.ContainerProtos$DeleteChunkResponseProto$Builder.maybeForceBuilderInitialization()
 At ContainerProtos.java: At ContainerProtos.java:[line 30029] 
   Useless control flow in 
org.apache.hadoop.hdds.protocol.datanode.proto.ContainerProtos$DeleteContainerRequestProto$Builder.maybeForceBuilderInitialization()
 At ContainerProtos.java: At ContainerProtos.java:[line 15580] 
   Useless control flow in 
org.apache.hadoop.hdds.protocol.datanode.proto.ContainerProtos$DeleteContainerResponseProto$Builder.maybeForceBuilderInitialization()
 At ContainerProtos.java: At ContainerProtos.java:[line 16042] 
   Useless control flow in 
org.apache.hadoop.hdds.protocol.datanode.proto.ContainerProtos$DeleteKeyResponseProto$Builder.maybeForceBuilderInitialization()
 At ContainerProtos.java: At ContainerProtos.java:[line 23085] 
   Useless control flow in 
org.apache.hadoop.hdds.protocol.datanode.proto.ContainerProtos$KeyValue$Builder.maybeForceBuilderInitialization()
 At ContainerProtos.java: At ContainerProtos.java:[line 1739] 
   Useless control flow in 
org.apache.hadoop.hdds.protocol.datanode.proto.ContainerProtos$ListContainerRequestProto$Builder.maybeForceBuilderInitialization()
 At ContainerProtos.java: At ContainerProtos.java:[line 16530] 
   Useless control flow in 
org.apache.hadoop.hdds.protocol.datanode.proto.ContainerProtos$ListKeyRequestProto$Builder.maybeForceBuilderInitialization()
 At ContainerProtos.java: At ContainerProtos.java:[line 23608] 
   Useless control flow in 
org.apache.hadoop.hdds.protocol.datanode.proto.ContainerProtos$PutKeyResponseProto$Builder.maybeForceBuilderInitialization()
 At ContainerProtos.java: At ContainerProtos.java:[line 20936] 
   Useless control flow in 
org.apache.hadoop.hdds.protocol.datanode.proto.ContainerProtos$PutSmallFileResponseProto$Builder.maybeForceBuilderInitialization()
 At ContainerProtos.java: At ContainerProtos.java:[line 32916] 
   Useless control flow in 
org.apache.hadoop.hdds.protocol.datanode.proto.ContainerProtos$ReadContainerRequestProto$Builder.maybeForceBuilderInitialization()
 At ContainerProtos.java: At ContainerProtos.java:[line 13417] 
   Useless control flow in 
org.apache.hadoop.hdds.protocol.datanode.proto.ContainerProtos$UpdateContainerResponseProto$Builder.maybeForceBuilderInitialization()
 At ContainerProtos.java: At ContainerProtos.java:[line 15107] 
   Useless control flow in 
org.apache.hadoop.hdds.protocol.datanode.proto.ContainerProtos$WriteChunkResponseProto$Builder.maybeForceBuilderInitialization()
 At ContainerProtos.java: At ContainerProtos.java:[line 27144] 
   Found reliance on default encoding in 
org.apache.hadoop.utils.MetadataKeyFilters$KeyPrefixFilter.filterKey(byte[], 
byte[], byte[]):in 
org.apache.hadoop.utils.MetadataKeyFilters$KeyPrefixFilter.filterKey(byte[], 
byte[], byte[]): String.getBytes() At MetadataKeyFilters.java:[line 97] 

Failed junit tests :

   hadoop.hdfs.web.TestWebHdfsTimeouts 
   hadoop.yarn.applications.distributedshell.TestDistributedShell 
  

Re: [VOTE] Release Apache Hadoop 2.8.4 (RC0)

2018-05-14 Thread Sunil G
+1 (binding)

1. Build package from src
2. Ran few MR jobs and verified checked App Priority cases
3. Node Label basic functions are ok.

Thanks
Sunil


On Tue, May 8, 2018 at 11:11 PM 俊平堵  wrote:

> Hi all,
>  I've created the first release candidate (RC0) for Apache Hadoop
> 2.8.4. This is our next maint release to follow up 2.8.3. It includes 77
> important fixes and improvements.
>
> The RC artifacts are available at:
> http://home.apache.org/~junping_du/hadoop-2.8.4-RC0
>
> The RC tag in git is: release-2.8.4-RC0
>
> The maven artifacts are available via repository.apache.org<
> http://repository.apache.org> at:
> https://repository.apache.org/content/repositories/orgapachehadoop-1118
>
> Please try the release and vote; the vote will run for the usual 5
> working days, ending on 5/14/2018 PST time.
>
> Thanks,
>
> Junping
>


Re: [VOTE] Release Apache Hadoop 2.8.4 (RC0)

2018-05-14 Thread Rohith Sharma K S
+1 (binding)
- Downloaded source and built from it. Installed 2 node RM HA cluster.
- Verified for RM HA, RM Restart, work preserving restart.
- Ran sample MR jobs and Distributed shell with HA scenario.


-Rohith Sharma K S

On 8 May 2018 at 23:11, 俊平堵  wrote:

> Hi all,
>  I've created the first release candidate (RC0) for Apache Hadoop
> 2.8.4. This is our next maint release to follow up 2.8.3. It includes 77
> important fixes and improvements.
>
> The RC artifacts are available at:
> http://home.apache.org/~junping_du/hadoop-2.8.4-RC0
>
> The RC tag in git is: release-2.8.4-RC0
>
> The maven artifacts are available via repository.apache.org<
> http://repository.apache.org> at:
> https://repository.apache.org/content/repositories/orgapachehadoop-1118
>
> Please try the release and vote; the vote will run for the usual 5
> working days, ending on 5/14/2018 PST time.
>
> Thanks,
>
> Junping
>


[jira] [Created] (HDDS-57) TestContainerCloser#testRepeatedClose and TestContainerCloser#testCleanupThreadRuns fail consistently

2018-05-14 Thread Shashikant Banerjee (JIRA)
Shashikant Banerjee created HDDS-57:
---

 Summary: TestContainerCloser#testRepeatedClose and 
TestContainerCloser#testCleanupThreadRuns fail consistently
 Key: HDDS-57
 URL: https://issues.apache.org/jira/browse/HDDS-57
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
  Components: SCM
Reporter: Shashikant Banerjee
Assignee: Shashikant Banerjee









[jira] [Created] (HDDS-56) TestContainerSupervisor#testAddingNewPoolWorks and TestContainerSupervisor#testDetectOverReplica fail consistently

2018-05-14 Thread Shashikant Banerjee (JIRA)
Shashikant Banerjee created HDDS-56:
---

 Summary: TestContainerSupervisor#testAddingNewPoolWorks and 
TestContainerSupervisor#testDetectOverReplica fail consistently
 Key: HDDS-56
 URL: https://issues.apache.org/jira/browse/HDDS-56
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
  Components: Ozone Manager
Reporter: Shashikant Banerjee
Assignee: Shashikant Banerjee









[jira] [Created] (HDDS-55) Fix the findBug issue SCMDatanodeProtocolServer#updateContainerReportMetrics

2018-05-14 Thread Shashikant Banerjee (JIRA)
Shashikant Banerjee created HDDS-55:
---

 Summary: Fix the findBug issue 
SCMDatanodeProtocolServer#updateContainerReportMetrics
 Key: HDDS-55
 URL: https://issues.apache.org/jira/browse/HDDS-55
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Shashikant Banerjee
Assignee: Shashikant Banerjee


The findbugs issue is reported because we are using synchronized on a
ConcurrentHashMap.
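The usual fix for this findbugs pattern is to drop the external lock and rely on ConcurrentHashMap's own atomic operations. A minimal sketch; the class and method names are illustrative, not the actual SCMDatanodeProtocolServer code:

```java
import java.util.concurrent.ConcurrentHashMap;

// Synchronizing on a ConcurrentHashMap defeats its purpose; the map's own
// atomic operations (merge, computeIfAbsent) give the same safety lock-free.
public class MetricsUpdateSketch {
    private final ConcurrentHashMap<String, Long> reportedBytes =
        new ConcurrentHashMap<>();

    // Instead of: synchronized (reportedBytes) { get + put }
    void recordReport(String datanode, long bytes) {
        reportedBytes.merge(datanode, bytes, Long::sum);  // atomic read-modify-write
    }

    long total(String datanode) {
        return reportedBytes.getOrDefault(datanode, 0L);
    }

    public static void main(String[] args) {
        MetricsUpdateSketch m = new MetricsUpdateSketch();
        m.recordReport("dn1", 10);
        m.recordReport("dn1", 5);
        if (m.total("dn1") != 15) throw new AssertionError(m.total("dn1"));
        System.out.println("dn1 total = " + m.total("dn1"));
    }
}
```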






[jira] [Resolved] (HDFS-11442) Ozone: Fix the Cluster ID generation code in SCM

2018-05-14 Thread Mukul Kumar Singh (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11442?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mukul Kumar Singh resolved HDFS-11442.
--
Resolution: Fixed

All the tasks in this JIRA have been fixed; marking it as resolved. HDDS-54
will be tracked as a standalone improvement.

Thanks for the contribution [~shashikant].

> Ozone: Fix the Cluster ID generation code in SCM
> 
>
> Key: HDFS-11442
> URL: https://issues.apache.org/jira/browse/HDFS-11442
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Anu Engineer
>Assignee: Shashikant Banerjee
>Priority: Blocker
> Fix For: HDFS-7240
>
> Attachments: Ozone Cluster Life Cycle Management - Google Docs.pdf
>
>
> The Cluster ID is randomly generated right now when SCM is started, and we
> avoid verifying that the client's cluster ID matches what SCM expects. This
> JIRA is to track the comments in the code.


