[jira] [Commented] (HDFS-10333) Intermittent org.apache.hadoop.hdfs.TestFileAppend failure in trunk

2016-05-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10333?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15284172#comment-15284172
 ] 

Hudson commented on HDFS-10333:
---

FAILURE: Integrated in Hadoop-trunk-Commit #9764 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/9764/])
HDFS-10333. Intermittent org.apache.hadoop.hdfs.TestFileAppend failure (wang: 
rev 45788204ae2ac82ccb3b4fe2fd22aead1dd79f0d)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestFileAppend.java


> Intermittent org.apache.hadoop.hdfs.TestFileAppend failure in trunk
> ---
>
> Key: HDFS-10333
> URL: https://issues.apache.org/jira/browse/HDFS-10333
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Reporter: Yongjun Zhang
>Assignee: Yiqun Lin
> Fix For: 2.8.0
>
> Attachments: HDFS-10333.001.patch
>
>
> Java 8 (I used JAVA_HOME=/opt/toolchain/jdk1.8.0_25):
> {code}
> -------------------------------------------------------
>  T E S T S
> -------------------------------------------------------
> Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
> Running org.apache.hadoop.hdfs.TestFileAppend
> Tests run: 12, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 27.75 sec <<< FAILURE! - in org.apache.hadoop.hdfs.TestFileAppend
> testMultipleAppends(org.apache.hadoop.hdfs.TestFileAppend)  Time elapsed: 3.674 sec  <<< ERROR!
> java.io.IOException: Failed to replace a bad datanode on the existing pipeline due to no more good datanodes being available to try. (Nodes: current=[DatanodeInfoWithStorage[127.0.0.1:43067,DS-cf80da41-3697-4afa-8f89-93693cd5035d,DISK], DatanodeInfoWithStorage[127.0.0.1:32946,DS-3b08422c-959e-42f0-a624-91b2524c4371,DISK]], original=[DatanodeInfoWithStorage[127.0.0.1:43067,DS-cf80da41-3697-4afa-8f89-93693cd5035d,DISK], DatanodeInfoWithStorage[127.0.0.1:32946,DS-3b08422c-959e-42f0-a624-91b2524c4371,DISK]]). The current failed datanode replacement policy is DEFAULT, and a client may configure this via 'dfs.client.block.write.replace-datanode-on-failure.policy' in its configuration.
> 	at org.apache.hadoop.hdfs.DataStreamer.findNewDatanode(DataStreamer.java:1166)
> 	at org.apache.hadoop.hdfs.DataStreamer.addDatanode2ExistingPipeline(DataStreamer.java:1232)
> 	at org.apache.hadoop.hdfs.DataStreamer.handleDatanodeReplacement(DataStreamer.java:1423)
> 	at org.apache.hadoop.hdfs.DataStreamer.setupPipelineInternal(DataStreamer.java:1338)
> 	at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1321)
> 	at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:599)
> {code}
> However, when I run with Java 1.7, the test sometimes succeeds, and it sometimes fails with:
> {code}
> Tests run: 12, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 41.32 sec <<< FAILURE! - in org.apache.hadoop.hdfs.TestFileAppend
> testMultipleAppends(org.apache.hadoop.hdfs.TestFileAppend)  Time elapsed: 9.099 sec  <<< ERROR!
> java.io.IOException: Failed to replace a bad datanode on the existing pipeline due to no more good datanodes being available to try. (Nodes: current=[DatanodeInfoWithStorage[127.0.0.1:49006,DS-498240fa-d1c7-4ba1-b97e-a1761cbbefa5,DISK], DatanodeInfoWithStorage[127.0.0.1:43097,DS-b83b49ce-fc14-4b9e-a3fc-7df2cd9fc753,DISK]], original=[DatanodeInfoWithStorage[127.0.0.1:49006,DS-498240fa-d1c7-4ba1-b97e-a1761cbbefa5,DISK], DatanodeInfoWithStorage[127.0.0.1:43097,DS-b83b49ce-fc14-4b9e-a3fc-7df2cd9fc753,DISK]]). The current failed datanode replacement policy is DEFAULT, and a client may configure this via 'dfs.client.block.write.replace-datanode-on-failure.policy' in its configuration.
> 	at org.apache.hadoop.hdfs.DataStreamer.findNewDatanode(DataStreamer.java:1162)
> 	at org.apache.hadoop.hdfs.DataStreamer.addDatanode2ExistingPipeline(DataStreamer.java:1232)
> 	at org.apache.hadoop.hdfs.DataStreamer.handleDatanodeReplacement(DataStreamer.java:1423)
> 	at org.apache.hadoop.hdfs.DataStreamer.setupPipelineInternal(DataStreamer.java:1338)
> 	at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1321)
> 	at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:599)
> {code}
> The failure of this test is intermittent, but it fails pretty often.
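The exception itself points at the client-side knob. A minimal sketch of relaxing it for a two-datanode test cluster (illustrative only; the committed HDFS-10333 fix touches only TestFileAppend.java, per the Hudson notice above):
{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hdfs.HdfsConfiguration;
import org.apache.hadoop.hdfs.MiniDFSCluster;

public class ReplacePolicySketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = new HdfsConfiguration();
    // With only two datanodes there is no spare node to swap into a failed
    // pipeline, so tell the client not to attempt datanode replacement.
    conf.set("dfs.client.block.write.replace-datanode-on-failure.policy",
        "NEVER");
    MiniDFSCluster cluster =
        new MiniDFSCluster.Builder(conf).numDataNodes(2).build();
    try {
      cluster.waitActive();
      // ... append/recovery test body would go here ...
    } finally {
      cluster.shutdown();
    }
  }
}
{code}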






[jira] [Resolved] (HDFS-10392) Test failures on trunk: TestEditLog and TestReconstructStripedBlocks

2016-05-15 Thread Xiaobing Zhou (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10392?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaobing Zhou resolved HDFS-10392.
--
Resolution: Duplicate

Resolved this since it's been split into two separate JIRAs: HDFS-10405 and HDFS-10406.

> Test failures on trunk: TestEditLog and TestReconstructStripedBlocks
> 
>
> Key: HDFS-10392
> URL: https://issues.apache.org/jira/browse/HDFS-10392
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.8.0
>Reporter: Xiaobing Zhou
>
> Test failures have been observed on trunk: TestEditLog and TestReconstructStripedBlocks.






[jira] [Commented] (HDFS-9732) Remove DelegationTokenIdentifier.toString() —for better logging output

2016-05-15 Thread Yongjun Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9732?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15284168#comment-15284168
 ] 

Yongjun Zhang commented on HDFS-9732:
-

Hi [~steve_l], we now have Allen's agreement. Would you please +1? Allen's agreement itself is a +1, but I'd like to see your official one as well :-) Thanks.

> Remove DelegationTokenIdentifier.toString() —for better logging output
> --
>
> Key: HDFS-9732
> URL: https://issues.apache.org/jira/browse/HDFS-9732
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 2.7.2
>Reporter: Steve Loughran
>Assignee: Yongjun Zhang
> Attachments: HADOOP-12752-001.patch, HDFS-9732-000.patch, 
> HDFS-9732.001.patch, HDFS-9732.002.patch, HDFS-9732.003.patch, 
> HDFS-9732.004.patch
>
>   Original Estimate: 0.5h
>  Remaining Estimate: 0.5h
>
> HDFS {{DelegationTokenIdentifier.toString()}} adds some diagnostic info: owner, sequence number. But its superclass, {{AbstractDelegationTokenIdentifier}}, contains a lot more information, including the token issue and expiry times.
> Because {{DelegationTokenIdentifier.toString()}} doesn't include this data, information that is potentially useful for Kerberos diagnostics is lost.
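A minimal sketch of the direction (an assumed helper, not the attached patch; the accessors are those of {{AbstractDelegationTokenIdentifier}}): either drop the {{toString()}} override so the superclass's richer output is used, or pull the missing fields in explicitly:
{code}
// Sketch only (assumed helper, not the committed patch): surface the
// superclass diagnostics that DelegationTokenIdentifier.toString() drops.
static String describe(AbstractDelegationTokenIdentifier id) {
  return id.getKind() + " token " + id.getSequenceNumber()
      + " for " + id.getUser().getShortUserName()
      + ", issued " + new java.util.Date(id.getIssueDate())
      + ", expires " + new java.util.Date(id.getMaxDate());
}
{code}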






[jira] [Updated] (HDFS-2173) saveNamespace should not throw IOE when only one storage directory fails to write VERSION file

2016-05-15 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2173?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HDFS-2173:
--
Attachment: HDFS-2173.02.patch

> saveNamespace should not throw IOE when only one storage directory fails to 
> write VERSION file
> --
>
> Key: HDFS-2173
> URL: https://issues.apache.org/jira/browse/HDFS-2173
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: Edit log branch (HDFS-1073), 0.23.0
>Reporter: Todd Lipcon
>Assignee: Andras Bokor
> Attachments: HDFS-2173.01.patch, HDFS-2173.02.patch, 
> HDFS-2173.02.patch, HDFS-2173.03.patch
>
>
> This JIRA tracks a TODO in TestSaveNamespace. Currently, if one of the storage directories fails while the VERSION files are being written, the entire operation throws an IOE. This is unnecessary -- instead, just that directory should be marked as failed.
> This is targeted to be fixed _after_ HDFS-1073 is merged to trunk, since it never causes data loss and would rarely occur in practice (the dir would have to fail between writing the fsimage file and writing VERSION).
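The intended behavior can be sketched roughly as follows; this is a sketch against the {{Storage}} APIs, with the error-tracking step assumed rather than taken from the attached patches:
{code}
// Sketch: write VERSION per storage directory; a single-directory failure
// should mark that directory as failed rather than abort saveNamespace.
for (Iterator<StorageDirectory> it = storage.dirIterator(); it.hasNext();) {
  StorageDirectory sd = it.next();
  try {
    storage.writeProperties(sd); // writes the VERSION file for this dir
  } catch (IOException ioe) {
    LOG.warn("Failed to write VERSION file to " + sd.getRoot(), ioe);
    errorDirs.add(sd); // assumed: later reported via reportErrorsOnDirectories
  }
}
{code}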






[jira] [Updated] (HDFS-2173) saveNamespace should not throw IOE when only one storage directory fails to write VERSION file

2016-05-15 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2173?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HDFS-2173:
--
Target Version/s: 2.8.0

> saveNamespace should not throw IOE when only one storage directory fails to 
> write VERSION file
> --
>
> Key: HDFS-2173
> URL: https://issues.apache.org/jira/browse/HDFS-2173
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: Edit log branch (HDFS-1073), 0.23.0
>Reporter: Todd Lipcon
>Assignee: Andras Bokor
> Attachments: HDFS-2173.01.patch, HDFS-2173.02.patch, 
> HDFS-2173.02.patch, HDFS-2173.03.patch
>
>
> This JIRA tracks a TODO in TestSaveNamespace. Currently, if one of the storage directories fails while the VERSION files are being written, the entire operation throws an IOE. This is unnecessary -- instead, just that directory should be marked as failed.
> This is targeted to be fixed _after_ HDFS-1073 is merged to trunk, since it never causes data loss and would rarely occur in practice (the dir would have to fail between writing the fsimage file and writing VERSION).






[jira] [Commented] (HDFS-10406) Test failure on trunk: TestReconstructStripedBlocks

2016-05-15 Thread Xiaobing Zhou (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10406?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15284163#comment-15284163
 ] 

Xiaobing Zhou commented on HDFS-10406:
--

{noformat}
Running org.apache.hadoop.hdfs.server.namenode.TestReconstructStripedBlocks
Tests run: 4, Failures: 0, Errors: 4, Skipped: 0, Time elapsed: 18.55 sec <<< FAILURE! - in org.apache.hadoop.hdfs.server.namenode.TestReconstructStripedBlocks
testMissingStripedBlockWithBusyNode(org.apache.hadoop.hdfs.server.namenode.TestReconstructStripedBlocks)  Time elapsed: 7.417 sec  <<< ERROR!
java.lang.IllegalStateException: failed to create a child event loop
	at sun.nio.ch.KQueueArrayWrapper.init(Native Method)
	at sun.nio.ch.KQueueArrayWrapper.<init>(KQueueArrayWrapper.java:100)
	at sun.nio.ch.KQueueSelectorImpl.<init>(KQueueSelectorImpl.java:87)
	at sun.nio.ch.KQueueSelectorProvider.openSelector(KQueueSelectorProvider.java:42)
	at io.netty.channel.nio.NioEventLoop.openSelector(NioEventLoop.java:125)
	at io.netty.channel.nio.NioEventLoop.<init>(NioEventLoop.java:119)
	at io.netty.channel.nio.NioEventLoopGroup.newChild(NioEventLoopGroup.java:97)
	at io.netty.channel.nio.NioEventLoopGroup.newChild(NioEventLoopGroup.java:31)
	at io.netty.util.concurrent.MultithreadEventExecutorGroup.<init>(MultithreadEventExecutorGroup.java:77)
	at io.netty.channel.MultithreadEventLoopGroup.<init>(MultithreadEventLoopGroup.java:50)
	at io.netty.channel.nio.NioEventLoopGroup.<init>(NioEventLoopGroup.java:72)
	at io.netty.channel.nio.NioEventLoopGroup.<init>(NioEventLoopGroup.java:58)
	at io.netty.channel.nio.NioEventLoopGroup.<init>(NioEventLoopGroup.java:46)
	at io.netty.channel.nio.NioEventLoopGroup.<init>(NioEventLoopGroup.java:38)
	at org.apache.hadoop.hdfs.server.datanode.web.DatanodeHttpServer.<init>(DatanodeHttpServer.java:122)
	at org.apache.hadoop.hdfs.server.datanode.DataNode.startInfoServer(DataNode.java:891)
	at org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:1285)
	at org.apache.hadoop.hdfs.server.datanode.DataNode.<init>(DataNode.java:479)
	at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:2584)
	at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:2472)
	at org.apache.hadoop.hdfs.MiniDFSCluster.startDataNodes(MiniDFSCluster.java:1592)
	at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:844)
	at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:482)
	at org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:441)
	at org.apache.hadoop.hdfs.server.namenode.TestReconstructStripedBlocks.doTestMissingStripedBlock(TestReconstructStripedBlocks.java:108)
	at org.apache.hadoop.hdfs.server.namenode.TestReconstructStripedBlocks.testMissingStripedBlockWithBusyNode(TestReconstructStripedBlocks.java:92)

testMissingStripedBlock(org.apache.hadoop.hdfs.server.namenode.TestReconstructStripedBlocks)  Time elapsed: 3.685 sec  <<< ERROR!
java.lang.IllegalStateException: failed to create a child event loop
	at sun.nio.ch.IOUtil.makePipe(Native Method)
	at sun.nio.ch.KQueueSelectorImpl.<init>(KQueueSelectorImpl.java:84)
	at sun.nio.ch.KQueueSelectorProvider.openSelector(KQueueSelectorProvider.java:42)
	at io.netty.channel.nio.NioEventLoop.openSelector(NioEventLoop.java:125)
	at io.netty.channel.nio.NioEventLoop.<init>(NioEventLoop.java:119)
	at io.netty.channel.nio.NioEventLoopGroup.newChild(NioEventLoopGroup.java:97)
	at io.netty.channel.nio.NioEventLoopGroup.newChild(NioEventLoopGroup.java:31)
	at io.netty.util.concurrent.MultithreadEventExecutorGroup.<init>(MultithreadEventExecutorGroup.java:77)
	at io.netty.channel.MultithreadEventLoopGroup.<init>(MultithreadEventLoopGroup.java:50)
	at io.netty.channel.nio.NioEventLoopGroup.<init>(NioEventLoopGroup.java:72)
	at io.netty.channel.nio.NioEventLoopGroup.<init>(NioEventLoopGroup.java:58)
	at io.netty.channel.nio.NioEventLoopGroup.<init>(NioEventLoopGroup.java:46)
	at io.netty.channel.nio.NioEventLoopGroup.<init>(NioEventLoopGroup.java:38)
	at org.apache.hadoop.hdfs.server.datanode.web.DatanodeHttpServer.<init>(DatanodeHttpServer.java:121)
	at org.apache.hadoop.hdfs.server.datanode.DataNode.startInfoServer(DataNode.java:891)
	at org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:1285)
	at org.apache.hadoop.hdfs.server.datanode.DataNode.<init>(DataNode.java:479)
	at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:2584)
	at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:2472)
	at org.apache.hadoop.hdfs.MiniDFSCluster.startDataNodes(MiniDFSCluster.java:1592)
	at
{noformat}

[jira] [Commented] (HDFS-2173) saveNamespace should not throw IOE when only one storage directory fails to write VERSION file

2016-05-15 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2173?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15284164#comment-15284164
 ] 

Andrew Wang commented on HDFS-2173:
---

The 02 patch LGTM, thanks for the update! Now the challenge is getting the QA bot to pick it up. We've been having some JIRA issues over the last few days, which might be why it didn't work the first time.

I'll try reattaching the 02 patch to see if that does the trick.

> saveNamespace should not throw IOE when only one storage directory fails to 
> write VERSION file
> --
>
> Key: HDFS-2173
> URL: https://issues.apache.org/jira/browse/HDFS-2173
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: Edit log branch (HDFS-1073), 0.23.0
>Reporter: Todd Lipcon
>Assignee: Andras Bokor
> Attachments: HDFS-2173.01.patch, HDFS-2173.02.patch, 
> HDFS-2173.03.patch
>
>
> This JIRA tracks a TODO in TestSaveNamespace. Currently, if one of the storage directories fails while the VERSION files are being written, the entire operation throws an IOE. This is unnecessary -- instead, just that directory should be marked as failed.
> This is targeted to be fixed _after_ HDFS-1073 is merged to trunk, since it never causes data loss and would rarely occur in practice (the dir would have to fail between writing the fsimage file and writing VERSION).






[jira] [Commented] (HDFS-10405) Test failure on trunk: TestEditLog

2016-05-15 Thread Xiaobing Zhou (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10405?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15284162#comment-15284162
 ] 

Xiaobing Zhou commented on HDFS-10405:
--

The stack trace:
{noformat}
Picked up _JAVA_OPTIONS: -Dfile.encoding=UTF-8
Running org.apache.hadoop.hdfs.server.namenode.TestEditLog
Tests run: 48, Failures: 0, Errors: 2, Skipped: 0, Time elapsed: 109.01 sec <<< FAILURE! - in org.apache.hadoop.hdfs.server.namenode.TestEditLog
testResetThreadLocalCachedOps[0](org.apache.hadoop.hdfs.server.namenode.TestEditLog)  Time elapsed: 11.898 sec  <<< ERROR!
java.io.IOException: Timed out waiting for Mini HDFS Cluster to start
	at org.apache.hadoop.hdfs.MiniDFSCluster.waitClusterUp(MiniDFSCluster.java:1345)
	at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:848)
	at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:482)
	at org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:441)
	at org.apache.hadoop.hdfs.server.namenode.TestEditLog.testResetThreadLocalCachedOps(TestEditLog.java:1567)

testReadActivelyUpdatedLog[1](org.apache.hadoop.hdfs.server.namenode.TestEditLog)  Time elapsed: 2.433 sec  <<< ERROR!
java.lang.IllegalStateException: failed to create a child event loop
	at sun.nio.ch.KQueueArrayWrapper.init(Native Method)
	at sun.nio.ch.KQueueArrayWrapper.<init>(KQueueArrayWrapper.java:100)
	at sun.nio.ch.KQueueSelectorImpl.<init>(KQueueSelectorImpl.java:87)
	at sun.nio.ch.KQueueSelectorProvider.openSelector(KQueueSelectorProvider.java:42)
	at io.netty.channel.nio.NioEventLoop.openSelector(NioEventLoop.java:125)
	at io.netty.channel.nio.NioEventLoop.<init>(NioEventLoop.java:119)
	at io.netty.channel.nio.NioEventLoopGroup.newChild(NioEventLoopGroup.java:97)
	at io.netty.channel.nio.NioEventLoopGroup.newChild(NioEventLoopGroup.java:31)
	at io.netty.util.concurrent.MultithreadEventExecutorGroup.<init>(MultithreadEventExecutorGroup.java:77)
	at io.netty.channel.MultithreadEventLoopGroup.<init>(MultithreadEventLoopGroup.java:50)
	at io.netty.channel.nio.NioEventLoopGroup.<init>(NioEventLoopGroup.java:72)
	at io.netty.channel.nio.NioEventLoopGroup.<init>(NioEventLoopGroup.java:58)
	at io.netty.channel.nio.NioEventLoopGroup.<init>(NioEventLoopGroup.java:46)
	at io.netty.channel.nio.NioEventLoopGroup.<init>(NioEventLoopGroup.java:38)
	at org.apache.hadoop.hdfs.server.datanode.web.DatanodeHttpServer.<init>(DatanodeHttpServer.java:122)
	at org.apache.hadoop.hdfs.server.datanode.DataNode.startInfoServer(DataNode.java:891)
	at org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:1285)
	at org.apache.hadoop.hdfs.server.datanode.DataNode.<init>(DataNode.java:479)
	at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:2584)
	at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:2472)
	at org.apache.hadoop.hdfs.MiniDFSCluster.startDataNodes(MiniDFSCluster.java:1592)
	at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:844)
	at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:482)
	at org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:441)
	at org.apache.hadoop.hdfs.server.namenode.TestEditLog.testReadActivelyUpdatedLog(TestEditLog.java:1647)
{noformat}

> Test failure on trunk: TestEditLog
> --
>
> Key: HDFS-10405
> URL: https://issues.apache.org/jira/browse/HDFS-10405
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.8.0
>Reporter: Xiaobing Zhou
>
> Test failures have been observed on trunk: TestEditLog and TestReconstructStripedBlocks.






[jira] [Created] (HDFS-10406) Test failure on trunk: TestReconstructStripedBlocks

2016-05-15 Thread Xiaobing Zhou (JIRA)
Xiaobing Zhou created HDFS-10406:


 Summary: Test failure on trunk: TestReconstructStripedBlocks
 Key: HDFS-10406
 URL: https://issues.apache.org/jira/browse/HDFS-10406
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.8.0
Reporter: Xiaobing Zhou


Test failures have been observed on trunk: TestEditLog and TestReconstructStripedBlocks.






[jira] [Commented] (HDFS-8449) Add tasks count metrics to datanode for ECWorker

2016-05-15 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8449?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15284159#comment-15284159
 ] 

Hadoop QA commented on HDFS-8449:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 12m 17s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
55s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 38s 
{color} | {color:green} trunk passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 41s 
{color} | {color:green} trunk passed with JDK v1.7.0_101 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
29s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 50s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
14s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
57s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 6s 
{color} | {color:green} trunk passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 46s 
{color} | {color:green} trunk passed with JDK v1.7.0_101 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
46s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 36s 
{color} | {color:green} the patch passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 36s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 38s 
{color} | {color:green} the patch passed with JDK v1.7.0_101 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 38s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 25s 
{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs: patch generated 3 new + 
97 unchanged - 3 fixed = 100 total (was 100) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 47s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
11s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 8s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 2s 
{color} | {color:green} the patch passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 43s 
{color} | {color:green} the patch passed with JDK v1.7.0_101 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 57m 8s {color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.8.0_91. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 55m 49s 
{color} | {color:green} hadoop-hdfs in the patch passed with JDK v1.7.0_101. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
23s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 150m 32s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_91 Failed junit tests | 
hadoop.hdfs.TestDFSStripedOutputStreamWithFailure110 |
|   | hadoop.hdfs.TestDFSUpgradeFromImage |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:cf2ee45 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12804098/HDFS-8449-v12.patch |
| JIRA Issue | HDFS-8449 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 6fb24220a12c 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| 

[jira] [Created] (HDFS-10405) Test failure on trunk: TestEditLog

2016-05-15 Thread Xiaobing Zhou (JIRA)
Xiaobing Zhou created HDFS-10405:


 Summary: Test failure on trunk: TestEditLog
 Key: HDFS-10405
 URL: https://issues.apache.org/jira/browse/HDFS-10405
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.8.0
Reporter: Xiaobing Zhou


Test failures have been observed on trunk: TestEditLog and TestReconstructStripedBlocks.






[jira] [Commented] (HDFS-10392) Test failures on trunk: TestEditLog and TestReconstructStripedBlocks

2016-05-15 Thread Xiaobing Zhou (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10392?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15284158#comment-15284158
 ] 

Xiaobing Zhou commented on HDFS-10392:
--

Sure, let's do this. Thanks [~andrew.wang].

> Test failures on trunk: TestEditLog and TestReconstructStripedBlocks
> 
>
> Key: HDFS-10392
> URL: https://issues.apache.org/jira/browse/HDFS-10392
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.8.0
>Reporter: Xiaobing Zhou
>
> Test failures have been observed on trunk: TestEditLog and TestReconstructStripedBlocks.






[jira] [Commented] (HDFS-9833) Erasure coding: recomputing block checksum on the fly by reconstructing the missed/corrupt block data

2016-05-15 Thread Rakesh R (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9833?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15284155#comment-15284155
 ] 

Rakesh R commented on HDFS-9833:


[~drankye], [~umamaheswararao], I'm attaching a draft patch to show the proposed algorithm and the class responsibilities. Kindly go through the changes; I would like to see your feedback. Thanks! I will refine the patch and add more unit test cases in subsequent revisions.

> Erasure coding: recomputing block checksum on the fly by reconstructing the 
> missed/corrupt block data
> -
>
> Key: HDFS-9833
> URL: https://issues.apache.org/jira/browse/HDFS-9833
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Kai Zheng
>Assignee: Rakesh R
>  Labels: hdfs-ec-3.0-must-do
> Attachments: HDFS-9833-00-draft.patch
>
>
> As discussed in HDFS-8430 and HDFS-9694, to compute a striped file checksum even when some of the striped blocks are missing, we need to consider recomputing the block checksum on the fly for the missed/corrupt blocks. To recompute the block checksum, the block data needs to be reconstructed by erasure decoding, and the main code needed for the block reconstruction can be borrowed from HDFS-9719, the refactoring of the existing {{ErasureCodingWorker}}. In the EC worker, reconstructed blocks need to be written out to target datanodes, but in this case the remote writing isn't necessary, as the reconstructed block data is only used to recompute the checksum.
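As a rough illustration of the reconstruct-then-checksum idea, here is a sketch assuming a (6,3) Reed-Solomon schema and the {{CodecUtil}}/{{RawErasureDecoder}}/{{DataChecksum}} APIs; the draft patch, not this sketch, defines the real classes:
{code}
import java.nio.ByteBuffer;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.io.erasurecode.CodecUtil;
import org.apache.hadoop.io.erasurecode.rawcoder.RawErasureDecoder;
import org.apache.hadoop.util.DataChecksum;

public class RecomputeChecksumSketch {
  public static void main(String[] args) throws Exception {
    int cellSize = 64 * 1024;
    RawErasureDecoder decoder =
        CodecUtil.createRSRawDecoder(new Configuration(), 6, 3);
    // Surviving data/parity cells; a null entry marks the missed block.
    ByteBuffer[] inputs = new ByteBuffer[9];
    for (int i = 1; i < 9; i++) {
      inputs[i] = ByteBuffer.allocate(cellSize);
    }
    ByteBuffer[] outputs = { ByteBuffer.allocate(cellSize) };
    decoder.decode(inputs, new int[] {0}, outputs); // rebuild cell 0 locally
    // Feed the reconstructed bytes straight into the checksum computation
    // instead of writing them out to a target datanode.
    DataChecksum checksum =
        DataChecksum.newDataChecksum(DataChecksum.Type.CRC32C, 512);
    checksum.update(outputs[0].array(), 0, cellSize);
    System.out.println("crc=" + checksum.getValue());
  }
}
{code}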






[jira] [Updated] (HDFS-10392) Test failures on trunk: TestEditLog and TestReconstructStripedBlocks

2016-05-15 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10392?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HDFS-10392:
---
Target Version/s: 2.8.0

> Test failures on trunk: TestEditLog and TestReconstructStripedBlocks
> 
>
> Key: HDFS-10392
> URL: https://issues.apache.org/jira/browse/HDFS-10392
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.8.0
>Reporter: Xiaobing Zhou
>
> Test failures have been observed on trunk: TestEditLog and TestReconstructStripedBlocks.






[jira] [Updated] (HDFS-10392) Test failures on trunk: TestEditLog and TestReconstructStripedBlocks

2016-05-15 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10392?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HDFS-10392:
---
Affects Version/s: 2.8.0

> Test failures on trunk: TestEditLog and TestReconstructStripedBlocks
> 
>
> Key: HDFS-10392
> URL: https://issues.apache.org/jira/browse/HDFS-10392
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.8.0
>Reporter: Xiaobing Zhou
>
> Test failures have been observed on trunk: TestEditLog and TestReconstructStripedBlocks.






[jira] [Commented] (HDFS-10392) Test failures on trunk: TestEditLog and TestReconstructStripedBlocks

2016-05-15 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10392?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15284154#comment-15284154
 ] 

Andrew Wang commented on HDFS-10392:


Hi [~xiaobingo], do you mind filing these two as different JIRAs? The failures 
don't look related. This way we can discuss and fix them separately.

> Test failures on trunk: TestEditLog and TestReconstructStripedBlocks
> 
>
> Key: HDFS-10392
> URL: https://issues.apache.org/jira/browse/HDFS-10392
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Xiaobing Zhou
>
> Test failures have been observed on trunk: TestEditLog and TestReconstructStripedBlocks.






[jira] [Updated] (HDFS-9833) Erasure coding: recomputing block checksum on the fly by reconstructing the missed/corrupt block data

2016-05-15 Thread Rakesh R (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9833?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rakesh R updated HDFS-9833:
---
Attachment: HDFS-9833-00-draft.patch

> Erasure coding: recomputing block checksum on the fly by reconstructing the 
> missed/corrupt block data
> -
>
> Key: HDFS-9833
> URL: https://issues.apache.org/jira/browse/HDFS-9833
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Kai Zheng
>Assignee: Rakesh R
>  Labels: hdfs-ec-3.0-must-do
> Attachments: HDFS-9833-00-draft.patch
>
>
> As discussed in HDFS-8430 and HDFS-9694, to compute a striped file checksum even when some of the striped blocks are missing, we need to consider recomputing the block checksum on the fly for the missed/corrupt blocks. To recompute the block checksum, the block data needs to be reconstructed by erasure decoding, and the main code needed for the block reconstruction can be borrowed from HDFS-9719, the refactoring of the existing {{ErasureCodingWorker}}. In the EC worker, reconstructed blocks need to be written out to target datanodes, but in this case the remote writing isn't necessary, as the reconstructed block data is only used to recompute the checksum.






[jira] [Updated] (HDFS-10333) Intermittent org.apache.hadoop.hdfs.TestFileAppend failure in trunk

2016-05-15 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10333?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HDFS-10333:
---
  Resolution: Fixed
   Fix Version/s: 2.8.0
Target Version/s: 2.8.0
  Status: Resolved  (was: Patch Available)

LGTM +1, committed back through branch-2.8. Thank you for the contribution 
[~linyiqun]!

> Intermittent org.apache.hadoop.hdfs.TestFileAppend failure in trunk
> ---
>
> Key: HDFS-10333
> URL: https://issues.apache.org/jira/browse/HDFS-10333
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Reporter: Yongjun Zhang
>Assignee: Yiqun Lin
> Fix For: 2.8.0
>
> Attachments: HDFS-10333.001.patch
>
>
> Java 8 (I used JAVA_HOME=/opt/toolchain/jdk1.8.0_25):
> {code}
> -------------------------------------------------------
>  T E S T S
> -------------------------------------------------------
> Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
> Running org.apache.hadoop.hdfs.TestFileAppend
> Tests run: 12, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 27.75 sec <<< FAILURE! - in org.apache.hadoop.hdfs.TestFileAppend
> testMultipleAppends(org.apache.hadoop.hdfs.TestFileAppend)  Time elapsed: 3.674 sec  <<< ERROR!
> java.io.IOException: Failed to replace a bad datanode on the existing pipeline due to no more good datanodes being available to try. (Nodes: current=[DatanodeInfoWithStorage[127.0.0.1:43067,DS-cf80da41-3697-4afa-8f89-93693cd5035d,DISK], DatanodeInfoWithStorage[127.0.0.1:32946,DS-3b08422c-959e-42f0-a624-91b2524c4371,DISK]], original=[DatanodeInfoWithStorage[127.0.0.1:43067,DS-cf80da41-3697-4afa-8f89-93693cd5035d,DISK], DatanodeInfoWithStorage[127.0.0.1:32946,DS-3b08422c-959e-42f0-a624-91b2524c4371,DISK]]). The current failed datanode replacement policy is DEFAULT, and a client may configure this via 'dfs.client.block.write.replace-datanode-on-failure.policy' in its configuration.
> 	at org.apache.hadoop.hdfs.DataStreamer.findNewDatanode(DataStreamer.java:1166)
> 	at org.apache.hadoop.hdfs.DataStreamer.addDatanode2ExistingPipeline(DataStreamer.java:1232)
> 	at org.apache.hadoop.hdfs.DataStreamer.handleDatanodeReplacement(DataStreamer.java:1423)
> 	at org.apache.hadoop.hdfs.DataStreamer.setupPipelineInternal(DataStreamer.java:1338)
> 	at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1321)
> 	at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:599)
> {code}
> However, when I run with Java 1.7, the test sometimes succeeds, and it sometimes fails with:
> {code}
> Tests run: 12, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 41.32 sec <<< FAILURE! - in org.apache.hadoop.hdfs.TestFileAppend
> testMultipleAppends(org.apache.hadoop.hdfs.TestFileAppend)  Time elapsed: 9.099 sec  <<< ERROR!
> java.io.IOException: Failed to replace a bad datanode on the existing pipeline due to no more good datanodes being available to try. (Nodes: current=[DatanodeInfoWithStorage[127.0.0.1:49006,DS-498240fa-d1c7-4ba1-b97e-a1761cbbefa5,DISK], DatanodeInfoWithStorage[127.0.0.1:43097,DS-b83b49ce-fc14-4b9e-a3fc-7df2cd9fc753,DISK]], original=[DatanodeInfoWithStorage[127.0.0.1:49006,DS-498240fa-d1c7-4ba1-b97e-a1761cbbefa5,DISK], DatanodeInfoWithStorage[127.0.0.1:43097,DS-b83b49ce-fc14-4b9e-a3fc-7df2cd9fc753,DISK]]). The current failed datanode replacement policy is DEFAULT, and a client may configure this via 'dfs.client.block.write.replace-datanode-on-failure.policy' in its configuration.
> 	at org.apache.hadoop.hdfs.DataStreamer.findNewDatanode(DataStreamer.java:1162)
> 	at org.apache.hadoop.hdfs.DataStreamer.addDatanode2ExistingPipeline(DataStreamer.java:1232)
> 	at org.apache.hadoop.hdfs.DataStreamer.handleDatanodeReplacement(DataStreamer.java:1423)
> 	at org.apache.hadoop.hdfs.DataStreamer.setupPipelineInternal(DataStreamer.java:1338)
> 	at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1321)
> 	at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:599)
> {code}
> The failure of this test is intermittent, but it fails pretty often.






[jira] [Commented] (HDFS-10404) CacheAdmin command usage message not shows completely

2016-05-15 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10404?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15284150#comment-15284150
 ] 

Andrew Wang commented on HDFS-10404:


Hi [~linyiqun], thanks for the patch! I added you as a contributor on JIRA, so you can be assigned JIRAs now.

One tiny nit: do you mind appending the {{]\n}} string on a new line? That helps future-proof it in case we add more options later. Otherwise +1.
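For context, the nit concerns the shape of the usage string; an illustrative fragment (not the exact {{CacheAdmin}} source) of keeping the closing bracket on its own line:
{code}
// Illustrative sketch: with "]\n" on its own line, a future option can be
// appended without touching the line that closes the command, which is
// exactly the bracket that went missing here.
@Override
public String getShortUsage() {
  return "[" + getName()
      + " [-stats] [-path <path>] [-pool <pool>] [-id <id>]"
      + "]\n";
}
{code}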

> CacheAdmin command usage message not shows completely
> -
>
> Key: HDFS-10404
> URL: https://issues.apache.org/jira/browse/HDFS-10404
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: caching
>Affects Versions: 2.7.1
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
> Attachments: HDFS-10404.001.patch
>
>
> In {{CacheAdmin}}, there are two places where the command usage message is not shown completely.
> {code}
> $ hdfs cacheadmin
> Usage: bin/hdfs cacheadmin [COMMAND]
>   [-addDirective -path <path> -pool <pool-name> [-force] [-replication <replication>] [-ttl <time-to-live>]]
>   [-modifyDirective -id <id> [-path <path>] [-force] [-replication <replication>] [-pool <pool-name>] [-ttl <time-to-live>]]
>   [-listDirectives [-stats] [-path <path>] [-pool <pool>] [-id <id>]
>   [-removeDirective <id>]
>   [-removeDirectives -path <path>]
>   [-addPool <name> [-owner <owner>] [-group <group>] [-mode <mode>] [-limit <limit>] [-maxTtl <maxTtl>]
> {code}
> The {{-listDirectives}} and {{-addPool}} commands are not shown completely; they are both missing a ']' at the end of the line.
> There is a similar problem in {{CentralizedCacheManagement.md}}; the published {{CentralizedCacheManagement}} page also shows it:
> https://hadoop.apache.org/docs/r2.7.2/hadoop-project-dist/hadoop-hdfs/CentralizedCacheManagement.html






[jira] [Updated] (HDFS-10404) CacheAdmin command usage message not shows completely

2016-05-15 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10404?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HDFS-10404:
---
Assignee: Yiqun Lin

> CacheAdmin command usage message not shows completely
> -
>
> Key: HDFS-10404
> URL: https://issues.apache.org/jira/browse/HDFS-10404
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: caching
>Affects Versions: 2.7.1
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
> Attachments: HDFS-10404.001.patch
>
>
> In {{CacheAdmin}}, there are two places where the command usage message is not shown completely.
> {code}
> $ hdfs cacheadmin
> Usage: bin/hdfs cacheadmin [COMMAND]
>   [-addDirective -path <path> -pool <pool-name> [-force] [-replication <replication>] [-ttl <time-to-live>]]
>   [-modifyDirective -id <id> [-path <path>] [-force] [-replication <replication>] [-pool <pool-name>] [-ttl <time-to-live>]]
>   [-listDirectives [-stats] [-path <path>] [-pool <pool>] [-id <id>]
>   [-removeDirective <id>]
>   [-removeDirectives -path <path>]
>   [-addPool <name> [-owner <owner>] [-group <group>] [-mode <mode>] [-limit <limit>] [-maxTtl <maxTtl>]
> {code}
> The {{-listDirectives}} and {{-addPool}} commands are not shown completely; they are both missing a ']' at the end of the line.
> There is a similar problem in {{CentralizedCacheManagement.md}}; the published {{CentralizedCacheManagement}} page also shows it:
> https://hadoop.apache.org/docs/r2.7.2/hadoop-project-dist/hadoop-hdfs/CentralizedCacheManagement.html






[jira] [Commented] (HDFS-10404) CacheAdmin command usage message not shows completely

2016-05-15 Thread Yiqun Lin (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10404?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15284146#comment-15284146
 ] 

Yiqun Lin commented on HDFS-10404:
--

The failed unit test is tracked by HDFS-10333.

> CacheAdmin command usage message not shows completely
> -
>
> Key: HDFS-10404
> URL: https://issues.apache.org/jira/browse/HDFS-10404
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: caching
>Affects Versions: 2.7.1
>Reporter: Yiqun Lin
> Attachments: HDFS-10404.001.patch
>
>
> In {{CacheAdmin}}, there are two places where the command usage message is not shown completely.
> {code}
> $ hdfs cacheadmin
> Usage: bin/hdfs cacheadmin [COMMAND]
>   [-addDirective -path <path> -pool <pool-name> [-force] [-replication <replication>] [-ttl <time-to-live>]]
>   [-modifyDirective -id <id> [-path <path>] [-force] [-replication <replication>] [-pool <pool-name>] [-ttl <time-to-live>]]
>   [-listDirectives [-stats] [-path <path>] [-pool <pool>] [-id <id>]
>   [-removeDirective <id>]
>   [-removeDirectives -path <path>]
>   [-addPool <name> [-owner <owner>] [-group <group>] [-mode <mode>] [-limit <limit>] [-maxTtl <maxTtl>]
> {code}
> The {{-listDirectives}} and {{-addPool}} commands are not shown completely; they are both missing a ']' at the end of the line.
> There is a similar problem in {{CentralizedCacheManagement.md}}; the published {{CentralizedCacheManagement}} page also shows it:
> https://hadoop.apache.org/docs/r2.7.2/hadoop-project-dist/hadoop-hdfs/CentralizedCacheManagement.html






[jira] [Commented] (HDFS-10383) Safely close resources in DFSTestUtil

2016-05-15 Thread Walter Su (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10383?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15284143#comment-15284143
 ] 

Walter Su commented on HDFS-10383:
--

bq. IOUtils#cleanup swallows it in the finally block.
Great work! And good analysis of {{createStripedFile()}}. We already had {{createStripedFile()}} before {{DFSStripedOutputStream}} was implemented. The test still prints a warning stacktrace because of the secondary {{completeFile()}}. So, although it is not related to this issue, how about changing it together:
{code}
-  out = dfs.create(file, (short) 1); // create an empty file
+  cluster.getNameNodeRpc()
+      .create(file.toString(), new FsPermission((short)0755),
+          dfs.getClient().getClientName(),
+          new EnumSetWritable<>(EnumSet.of(CreateFlag.CREATE)),
+          false, (short)1, 128*1024*1024L, null);
{code}

> Safely close resources in DFSTestUtil
> -
>
> Key: HDFS-10383
> URL: https://issues.apache.org/jira/browse/HDFS-10383
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: test
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
> Attachments: HDFS-10383.000.patch, HDFS-10383.001.patch, 
> HDFS-10383.002.patch
>
>
> There are a few methods in {{DFSTestUtil}} that do not close resources safely or elegantly. We can use the try-with-resources statement to address this problem.
> Specifically, as {{DFSTestUtil}} is popularly used in tests, we need to preserve any exceptions thrown during the processing of the resource while still guaranteeing it is closed in the end. For example, the current implementation of {{DFSTestUtil#createFile()}} closes the FSDataOutputStream in the {{finally}} block, and when closing, if the internal {{DFSOutputStream#close()}} throws any exception, which it often does, the exception thrown during the processing will be lost. See this [test failure|https://builds.apache.org/job/PreCommit-HADOOP-Build/9320/testReport/org.apache.hadoop.hdfs/TestAsyncDFSRename/testAggressiveConcurrentAsyncRenameWithOverwrite/], and we have to guess what the root cause was.
> Using try-with-resources, we can close the resources safely, and the exceptions thrown both during processing and closing will be available (the closing exception will be suppressed).
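The difference is easy to demonstrate in isolation; a minimal, self-contained sketch (not code from the patch):
{code}
import java.io.Closeable;
import java.io.IOException;

public class SuppressionDemo {
  static class FailingStream implements Closeable {
    void write() throws IOException { throw new IOException("write failed"); }
    @Override
    public void close() throws IOException { throw new IOException("close failed"); }
  }

  public static void main(String[] args) {
    // try-with-resources: the write exception stays primary and the close
    // exception is attached via addSuppressed(), so neither is lost.
    try (FailingStream out = new FailingStream()) {
      out.write();
    } catch (IOException e) {
      System.out.println("caught: " + e.getMessage());        // write failed
      for (Throwable s : e.getSuppressed()) {
        System.out.println("suppressed: " + s.getMessage());  // close failed
      }
    }
  }
}
{code}
With a bare close in {{finally}} instead, the "close failed" exception would propagate and replace "write failed", which is exactly the guessing game described above.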






[jira] [Commented] (HDFS-10404) CacheAdmin command usage message not shows completely

2016-05-15 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10404?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15284138#comment-15284138
 ] 

Hadoop QA commented on HDFS-10404:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 13m 39s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
56s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 45s 
{color} | {color:green} trunk passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 45s 
{color} | {color:green} trunk passed with JDK v1.7.0_101 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
31s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 54s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
15s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 3s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 9s 
{color} | {color:green} trunk passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 58s 
{color} | {color:green} trunk passed with JDK v1.7.0_101 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
49s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 42s 
{color} | {color:green} the patch passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 42s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 42s 
{color} | {color:green} the patch passed with JDK v1.7.0_101 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 42s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
26s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 53s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
12s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
28s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 9s 
{color} | {color:green} the patch passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 56s 
{color} | {color:green} the patch passed with JDK v1.7.0_101 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 59m 45s {color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.8.0_91. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 53m 17s 
{color} | {color:green} hadoop-hdfs in the patch passed with JDK v1.7.0_101. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
23s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 154m 55s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_91 Failed junit tests | hadoop.hdfs.TestFileAppend |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:cf2ee45 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12804095/HDFS-10404.001.patch |
| JIRA Issue | HDFS-10404 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux bc7073e0860f 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | 

[jira] [Commented] (HDFS-10402) DiskBalancer: Add QueryStatus command

2016-05-15 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10402?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15284131#comment-15284131
 ] 

Hadoop QA commented on HDFS-10402:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 12s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
31s {color} | {color:green} HDFS-1312 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 39s 
{color} | {color:green} HDFS-1312 passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 41s 
{color} | {color:green} HDFS-1312 passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
28s {color} | {color:green} HDFS-1312 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 50s 
{color} | {color:green} HDFS-1312 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
13s {color} | {color:green} HDFS-1312 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
59s {color} | {color:green} HDFS-1312 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 5s 
{color} | {color:green} HDFS-1312 passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 45s 
{color} | {color:green} HDFS-1312 passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
46s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 36s 
{color} | {color:green} the patch passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 36s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 39s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 39s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
24s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 48s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
11s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
12s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 2s 
{color} | {color:green} the patch passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 45s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 80m 11s {color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.8.0_91. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 82m 9s {color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.7.0_95. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
24s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 187m 37s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_91 Failed junit tests | hadoop.tools.TestHdfsConfigFields |
| JDK v1.7.0_95 Failed junit tests | hadoop.hdfs.TestDFSUpgradeFromImage |
|   | hadoop.tools.TestHdfsConfigFields |
|   | hadoop.hdfs.server.datanode.TestFsDatasetCache |
|   | hadoop.hdfs.server.namenode.TestFileTruncate |
|   | hadoop.hdfs.TestEncryptionZones |
|   | hadoop.hdfs.TestAsyncDFSRename |
| JDK v1.7.0_95 Timed out junit tests | 
org.apache.hadoop.hdfs.TestLeaseRecovery2 |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  

[jira] [Commented] (HDFS-10188) libhdfs++: Implement debug allocators

2016-05-15 Thread Xiaowei Zhu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10188?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15284110#comment-15284110
 ] 

Xiaowei Zhu commented on HDFS-10188:


The new 002 patch uses a macro to override the new/delete operators.

> libhdfs++: Implement debug allocators
> -
>
> Key: HDFS-10188
> URL: https://issues.apache.org/jira/browse/HDFS-10188
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: James Clampffer
>Assignee: Xiaowei Zhu
> Attachments: HDFS-10188.HDFS-8707.000.patch, 
> HDFS-10188.HDFS-8707.001.patch, HDFS-10188.HDFS-8707.002.patch
>
>
> I propose implementing a set of memory new/delete pairs with additional 
> checking to detect double-delete, read-after-delete, and write-after-delete 
> errors, to help debug resource ownership issues and prevent new ones from 
> entering the library.
> One of the most common issues we have is use-after-free. The continuation 
> pattern makes these really tricky to debug because by the time a SIGSEGV is 
> raised, the context of what caused the error is long gone.
> The plan is to add allocators that can be turned on to do the following, in 
> order of runtime cost:
> 1: no-op, forward through to the default new/delete
> 2: make sure the memory handed to the constructor is dirty; memset freed 
> memory to 0
> 3: implement operator new with mmap and lock that region of memory once it 
> has been deleted; obviously this can't be left on forever because the memory 
> is never unmapped
> This should also put some groundwork in place for implementing specialized 
> allocators for tiny objects that we churn through, like std::string.
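In libhdfs++ itself the modes above would be implemented by overriding C++ operator new/delete (per the comment above, patch 002 reportedly wires this in via a macro). As a rough, hedged illustration of the mode-2 idea only (memory is deliberately dirty on allocation and zeroed on free), here is a Java sketch applied to a recycled buffer pool; this is not libhdfs++ code, and every name in it is invented for illustration:

{code}
import java.util.ArrayDeque;
import java.util.Arrays;

// Hypothetical sketch, not libhdfs++ code: a recycling buffer pool that
// imitates "mode 2" -- buffers handed out are deliberately dirty, and
// buffers handed back are zeroed -- so a stale reader of a recycled
// buffer sees an obvious pattern instead of plausible old data.
final class DebugBufferPool {
  private static final byte ALLOC_FILL = (byte) 0xAB; // dirty-on-allocate marker

  private final ArrayDeque<byte[]> freeList = new ArrayDeque<>();
  private final int bufSize;

  DebugBufferPool(int bufSize) { this.bufSize = bufSize; }

  // Hand out a buffer whose prior contents have been clobbered.
  synchronized byte[] allocate() {
    byte[] buf = freeList.isEmpty() ? new byte[bufSize] : freeList.pop();
    Arrays.fill(buf, ALLOC_FILL); // callers must not rely on leftover data
    return buf;
  }

  // "memset freed memory to 0" before recycling, so use-after-release
  // reads return zeros deterministically rather than stale payloads.
  synchronized void release(byte[] buf) {
    Arrays.fill(buf, (byte) 0);
    freeList.push(buf);
  }
}
{code}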






[jira] [Updated] (HDFS-10188) libhdfs++: Implement debug allocators

2016-05-15 Thread Xiaowei Zhu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10188?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaowei Zhu updated HDFS-10188:
---
Attachment: (was: HDFS-10188.HDFS-8707.002.patch)

> libhdfs++: Implement debug allocators
> -
>
> Key: HDFS-10188
> URL: https://issues.apache.org/jira/browse/HDFS-10188
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: James Clampffer
>Assignee: Xiaowei Zhu
> Attachments: HDFS-10188.HDFS-8707.000.patch, 
> HDFS-10188.HDFS-8707.001.patch, HDFS-10188.HDFS-8707.002.patch
>
>
> I propose implementing a set of memory new/delete pairs with additional 
> checking to detect double-delete, read-after-delete, and write-after-delete 
> errors, to help debug resource ownership issues and prevent new ones from 
> entering the library.
> One of the most common issues we have is use-after-free. The continuation 
> pattern makes these really tricky to debug because by the time a SIGSEGV is 
> raised, the context of what caused the error is long gone.
> The plan is to add allocators that can be turned on to do the following, in 
> order of runtime cost:
> 1: no-op, forward through to the default new/delete
> 2: make sure the memory handed to the constructor is dirty; memset freed 
> memory to 0
> 3: implement operator new with mmap and lock that region of memory once it 
> has been deleted; obviously this can't be left on forever because the memory 
> is never unmapped
> This should also put some groundwork in place for implementing specialized 
> allocators for tiny objects that we churn through, like std::string.






[jira] [Updated] (HDFS-10188) libhdfs++: Implement debug allocators

2016-05-15 Thread Xiaowei Zhu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10188?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaowei Zhu updated HDFS-10188:
---
Attachment: HDFS-10188.HDFS-8707.002.patch

> libhdfs++: Implement debug allocators
> -
>
> Key: HDFS-10188
> URL: https://issues.apache.org/jira/browse/HDFS-10188
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: James Clampffer
>Assignee: Xiaowei Zhu
> Attachments: HDFS-10188.HDFS-8707.000.patch, 
> HDFS-10188.HDFS-8707.001.patch, HDFS-10188.HDFS-8707.002.patch
>
>
> I propose implementing a set of memory new/delete pairs with additional 
> checking to detect double-delete, read-after-delete, and write-after-delete 
> errors, to help debug resource ownership issues and prevent new ones from 
> entering the library.
> One of the most common issues we have is use-after-free. The continuation 
> pattern makes these really tricky to debug because by the time a SIGSEGV is 
> raised, the context of what caused the error is long gone.
> The plan is to add allocators that can be turned on to do the following, in 
> order of runtime cost:
> 1: no-op, forward through to the default new/delete
> 2: make sure the memory handed to the constructor is dirty; memset freed 
> memory to 0
> 3: implement operator new with mmap and lock that region of memory once it 
> has been deleted; obviously this can't be left on forever because the memory 
> is never unmapped
> This should also put some groundwork in place for implementing specialized 
> allocators for tiny objects that we churn through, like std::string.






[jira] [Updated] (HDFS-8449) Add tasks count metrics to datanode for ECWorker

2016-05-15 Thread Li Bo (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8449?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Li Bo updated HDFS-8449:

Attachment: HDFS-8449-v12.patch

Uploading v12 (same as v11) to trigger the test.

> Add tasks count metrics to datanode for ECWorker
> 
>
> Key: HDFS-8449
> URL: https://issues.apache.org/jira/browse/HDFS-8449
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Li Bo
>Assignee: Li Bo
> Attachments: HDFS-8449-000.patch, HDFS-8449-001.patch, 
> HDFS-8449-002.patch, HDFS-8449-003.patch, HDFS-8449-004.patch, 
> HDFS-8449-005.patch, HDFS-8449-006.patch, HDFS-8449-007.patch, 
> HDFS-8449-008.patch, HDFS-8449-009.patch, HDFS-8449-010.patch, 
> HDFS-8449-v10.patch, HDFS-8449-v11.patch, HDFS-8449-v12.patch
>
>
> This sub-task tries to record the EC recovery tasks that a datanode has done, 
> including total, failed, and successful task counts.
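DataNode counters of this kind are conventionally built on Hadoop's metrics2 framework. A minimal sketch of what such counters could look like, with invented metric names (the actual patch may differ):

{code}
import org.apache.hadoop.metrics2.annotation.Metric;
import org.apache.hadoop.metrics2.annotation.Metrics;
import org.apache.hadoop.metrics2.lib.DefaultMetricsSystem;
import org.apache.hadoop.metrics2.lib.MutableCounterLong;

// Hypothetical sketch of ECWorker task counters in the metrics2 style.
@Metrics(name = "ECWorkerMetrics", about = "EC reconstruction task counters",
         context = "dfs")
class ECWorkerMetrics {
  // These fields are instantiated by the metrics system during register().
  @Metric("Total EC reconstruction tasks")
  MutableCounterLong ecReconstructionTasks;
  @Metric("Failed EC reconstruction tasks")
  MutableCounterLong ecFailedReconstructionTasks;

  static ECWorkerMetrics create() {
    return DefaultMetricsSystem.instance()
        .register("ECWorkerMetrics", null, new ECWorkerMetrics());
  }

  void incrTotal()  { ecReconstructionTasks.incr(); }
  void incrFailed() { ecFailedReconstructionTasks.incr(); }

  // Successful tasks can be derived rather than stored separately.
  long successful() {
    return ecReconstructionTasks.value() - ecFailedReconstructionTasks.value();
  }
}
{code}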






[jira] [Updated] (HDFS-10404) CacheAdmin command usage message not shows completely

2016-05-15 Thread Yiqun Lin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10404?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yiqun Lin updated HDFS-10404:
-
Attachment: HDFS-10404.001.patch

> CacheAdmin command usage message not shows completely
> -
>
> Key: HDFS-10404
> URL: https://issues.apache.org/jira/browse/HDFS-10404
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: caching
>Affects Versions: 2.7.1
>Reporter: Yiqun Lin
> Attachments: HDFS-10404.001.patch
>
>
> In {{CacheAdmin}}, there are two places where the command usage message is 
> not shown completely.
> {code}
> $ hdfs cacheadmin
> Usage: bin/hdfs cacheadmin [COMMAND]
>   [-addDirective -path <path> -pool <pool-name> [-force] 
> [-replication <replication>] [-ttl <time-to-live>]]
>   [-modifyDirective -id <id> [-path <path>] [-force] [-replication 
> <replication>] [-pool <pool-name>] [-ttl <time-to-live>]]
>   [-listDirectives [-stats] [-path <path>] [-pool <pool>] [-id <id>]
>   [-removeDirective <id>]
>   [-removeDirectives -path <path>]
>   [-addPool <name> [-owner <owner>] [-group <group>] [-mode <mode>] 
> [-limit <limit>] [-maxTtl <maxTtl>]
> {code}
> The commands {{-listDirectives}} and {{-addPool}} are not shown completely; 
> both are missing a ']' at the end of the line.
> There is a similar problem in {{CentralizedCacheManagement.md}}; the rendered 
> {{CentralizedCacheManagement}} page shows it too: 
> https://hadoop.apache.org/docs/r2.7.2/hadoop-project-dist/hadoop-hdfs/CentralizedCacheManagement.html
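If the fix is simply the missing bracket, the two corrected lines would presumably read (placeholder names reconstructed from the 2.7 help text):

{code}
  [-listDirectives [-stats] [-path <path>] [-pool <pool>] [-id <id>]]
  [-addPool <name> [-owner <owner>] [-group <group>] [-mode <mode>] [-limit <limit>] [-maxTtl <maxTtl>]]
{code}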






[jira] [Updated] (HDFS-10404) CacheAdmin command usage message not shows completely

2016-05-15 Thread Yiqun Lin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10404?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yiqun Lin updated HDFS-10404:
-
Status: Patch Available  (was: Open)

Attaching a simple patch for this. Could someone assign this JIRA to me? Thanks.

> CacheAdmin command usage message not shows completely
> -
>
> Key: HDFS-10404
> URL: https://issues.apache.org/jira/browse/HDFS-10404
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: caching
>Affects Versions: 2.7.1
>Reporter: Yiqun Lin
> Attachments: HDFS-10404.001.patch
>
>
> In {{CacheAdmin}}, there are two places where the command usage message is 
> not shown completely.
> {code}
> $ hdfs cacheadmin
> Usage: bin/hdfs cacheadmin [COMMAND]
>   [-addDirective -path <path> -pool <pool-name> [-force] 
> [-replication <replication>] [-ttl <time-to-live>]]
>   [-modifyDirective -id <id> [-path <path>] [-force] [-replication 
> <replication>] [-pool <pool-name>] [-ttl <time-to-live>]]
>   [-listDirectives [-stats] [-path <path>] [-pool <pool>] [-id <id>]
>   [-removeDirective <id>]
>   [-removeDirectives -path <path>]
>   [-addPool <name> [-owner <owner>] [-group <group>] [-mode <mode>] 
> [-limit <limit>] [-maxTtl <maxTtl>]
> {code}
> The commands {{-listDirectives}} and {{-addPool}} are not shown completely; 
> both are missing a ']' at the end of the line.
> There is a similar problem in {{CentralizedCacheManagement.md}}; the rendered 
> {{CentralizedCacheManagement}} page shows it too: 
> https://hadoop.apache.org/docs/r2.7.2/hadoop-project-dist/hadoop-hdfs/CentralizedCacheManagement.html






[jira] [Commented] (HDFS-10404) CacheAdmin command usage message not shows completely

2016-05-15 Thread Yiqun Lin (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10404?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15284059#comment-15284059
 ] 

Yiqun Lin commented on HDFS-10404:
--

I will post a patch later.

> CacheAdmin command usage message not shows completely
> -
>
> Key: HDFS-10404
> URL: https://issues.apache.org/jira/browse/HDFS-10404
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: caching
>Affects Versions: 2.7.1
>Reporter: Yiqun Lin
>
> In {{CacheAdmin}}, there are two places where the command usage message is 
> not shown completely.
> {code}
> $ hdfs cacheadmin
> Usage: bin/hdfs cacheadmin [COMMAND]
>   [-addDirective -path <path> -pool <pool-name> [-force] 
> [-replication <replication>] [-ttl <time-to-live>]]
>   [-modifyDirective -id <id> [-path <path>] [-force] [-replication 
> <replication>] [-pool <pool-name>] [-ttl <time-to-live>]]
>   [-listDirectives [-stats] [-path <path>] [-pool <pool>] [-id <id>]
>   [-removeDirective <id>]
>   [-removeDirectives -path <path>]
>   [-addPool <name> [-owner <owner>] [-group <group>] [-mode <mode>] 
> [-limit <limit>] [-maxTtl <maxTtl>]
> {code}
> The commands {{-listDirectives}} and {{-addPool}} are not shown completely; 
> both are missing a ']' at the end of the line.
> There is a similar problem in {{CentralizedCacheManagement.md}}; the rendered 
> {{CentralizedCacheManagement}} page shows it too: 
> https://hadoop.apache.org/docs/r2.7.2/hadoop-project-dist/hadoop-hdfs/CentralizedCacheManagement.html






[jira] [Created] (HDFS-10404) CacheAdmin command usage message not shows completely

2016-05-15 Thread Yiqun Lin (JIRA)
Yiqun Lin created HDFS-10404:


 Summary: CacheAdmin command usage message not shows completely
 Key: HDFS-10404
 URL: https://issues.apache.org/jira/browse/HDFS-10404
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: caching
Affects Versions: 2.7.1
Reporter: Yiqun Lin


In {{CacheAdmin}}, there are two places where the command usage message is 
not shown completely.
{code}
$ hdfs cacheadmin
Usage: bin/hdfs cacheadmin [COMMAND]
  [-addDirective -path <path> -pool <pool-name> [-force] [-replication 
<replication>] [-ttl <time-to-live>]]
  [-modifyDirective -id <id> [-path <path>] [-force] [-replication 
<replication>] [-pool <pool-name>] [-ttl <time-to-live>]]
  [-listDirectives [-stats] [-path <path>] [-pool <pool>] [-id <id>]
  [-removeDirective <id>]
  [-removeDirectives -path <path>]
  [-addPool <name> [-owner <owner>] [-group <group>] [-mode <mode>] 
[-limit <limit>] [-maxTtl <maxTtl>]
{code}
The commands {{-listDirectives}} and {{-addPool}} are not shown completely; 
both are missing a ']' at the end of the line.

There is a similar problem in {{CentralizedCacheManagement.md}}; the rendered 
{{CentralizedCacheManagement}} page shows it too: 
https://hadoop.apache.org/docs/r2.7.2/hadoop-project-dist/hadoop-hdfs/CentralizedCacheManagement.html






[jira] [Created] (HDFS-10403) DiskBalancer: Add cancel command

2016-05-15 Thread Anu Engineer (JIRA)
Anu Engineer created HDFS-10403:
---

 Summary: DiskBalancer: Add cancel command
 Key: HDFS-10403
 URL: https://issues.apache.org/jira/browse/HDFS-10403
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: balancer & mover
Affects Versions: HDFS-1312
Reporter: Anu Engineer
Assignee: Anu Engineer
 Fix For: HDFS-1312


Allows the user to cancel an on-going disk balancing operation.






[jira] [Updated] (HDFS-10402) DiskBalancer: Add QueryStatus command

2016-05-15 Thread Anu Engineer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10402?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDFS-10402:

Status: Patch Available  (was: Open)

> DiskBalancer: Add QueryStatus command
> -
>
> Key: HDFS-10402
> URL: https://issues.apache.org/jira/browse/HDFS-10402
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: balancer & mover
>Affects Versions: HDFS-1312
>Reporter: Anu Engineer
>Assignee: Anu Engineer
> Fix For: HDFS-1312
>
> Attachments: HDFS-10402-HDFS-1312.001.patch
>
>
> Adds QueryStatus, which gets the disk balancer status from a specific node.






[jira] [Updated] (HDFS-10402) DiskBalancer: Add QueryStatus command

2016-05-15 Thread Anu Engineer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10402?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDFS-10402:

Attachment: HDFS-10402-HDFS-1312.001.patch

> DiskBalancer: Add QueryStatus command
> -
>
> Key: HDFS-10402
> URL: https://issues.apache.org/jira/browse/HDFS-10402
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: balancer & mover
>Affects Versions: HDFS-1312
>Reporter: Anu Engineer
>Assignee: Anu Engineer
> Fix For: HDFS-1312
>
> Attachments: HDFS-10402-HDFS-1312.001.patch
>
>
> Adds QueryStatus, which gets the disk balancer status from a specific node.






[jira] [Created] (HDFS-10402) DiskBalancer: Add QueryStatus command

2016-05-15 Thread Anu Engineer (JIRA)
Anu Engineer created HDFS-10402:
---

 Summary: DiskBalancer: Add QueryStatus command
 Key: HDFS-10402
 URL: https://issues.apache.org/jira/browse/HDFS-10402
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: balancer & mover
Affects Versions: HDFS-1312
Reporter: Anu Engineer
Assignee: Anu Engineer
 Fix For: HDFS-1312


Adds QueryStatus, which gets the disk balancer status from a specific node.
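Since this sub-task is what introduces the command, the eventual syntax is speculative; a hedged sketch of how a query (and the companion cancel from HDFS-10403) might be invoked:

{code}
# hypothetical invocations -- exact option names are assumptions
hdfs diskbalancer -query <datanode-host:port>   # report balancing status of one node
hdfs diskbalancer -cancel <planfile>            # cancel an on-going plan (HDFS-10403)
{code}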






[jira] [Commented] (HDFS-9547) DiskBalancer : Add user documentation

2016-05-15 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9547?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15284016#comment-15284016
 ] 

Hadoop QA commented on HDFS-9547:
-

| (/) *+1 overall* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 0m 22s | Docker mode activated. |
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | mvninstall | 7m 17s | HDFS-1312 passed |
| +1 | mvnsite | 1m 4s | HDFS-1312 passed |
| +1 | mvnsite | 0m 56s | the patch passed |
| +1 | whitespace | 0m 0s | Patch has no whitespace issues. |
| +1 | asflicense | 0m 20s | Patch does not generate ASF License warnings. |
| | | 10m 20s | |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Image:yetus/hadoop:cf2ee45 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12804086/HDFS-9547-HDFS-1312.001.patch |
| JIRA Issue | HDFS-9547 |
| Optional Tests | asflicense mvnsite |
| uname | Linux 662904ca2f15 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh |
| git revision | HDFS-1312 / 82b1bd5 |
| modules | C: hadoop-common-project/hadoop-common U: hadoop-common-project/hadoop-common |
| Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/15436/console |
| Powered by | Apache Yetus 0.2.0   http://yetus.apache.org |


This message was automatically generated.



> DiskBalancer : Add user documentation
> -
>
> Key: HDFS-9547
> URL: https://issues.apache.org/jira/browse/HDFS-9547
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode
>Reporter: Anu Engineer
>Assignee: Anu Engineer
> Attachments: HDFS-9547-HDFS-1312.001.patch
>
>
> Write diskbalancer.md since this is a new tool and explain the usage with 
> examples.






[jira] [Updated] (HDFS-9547) DiskBalancer : Add user documentation

2016-05-15 Thread Anu Engineer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9547?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDFS-9547:
---
Status: Patch Available  (was: Open)

> DiskBalancer : Add user documentation
> -
>
> Key: HDFS-9547
> URL: https://issues.apache.org/jira/browse/HDFS-9547
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode
>Reporter: Anu Engineer
>Assignee: Anu Engineer
> Attachments: HDFS-9547-HDFS-1312.001.patch
>
>
> Write diskbalancer.md since this is a new tool and explain the usage with 
> examples.






[jira] [Updated] (HDFS-9547) DiskBalancer : Add user documentation

2016-05-15 Thread Anu Engineer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9547?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDFS-9547:
---
Attachment: HDFS-9547-HDFS-1312.001.patch

> DiskBalancer : Add user documentation
> -
>
> Key: HDFS-9547
> URL: https://issues.apache.org/jira/browse/HDFS-9547
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode
>Reporter: Anu Engineer
>Assignee: Anu Engineer
> Attachments: HDFS-9547-HDFS-1312.001.patch
>
>
> Write diskbalancer.md since this is a new tool and explain the usage with 
> examples.






[jira] [Resolved] (HDFS-10401) Negative memory stats in NameNode web interface

2016-05-15 Thread Xiaoyu Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10401?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao resolved HDFS-10401.
---
Resolution: Duplicate

This is a dup of HADOOP-11098.

> Negative memory stats in NameNode web interface
> ---
>
> Key: HDFS-10401
> URL: https://issues.apache.org/jira/browse/HDFS-10401
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.7.2
> Environment: Ubuntu 16.04, openjdk version "1.8.0_91", Hadoop-2.7.2
>Reporter: Dmitry Sivachenko
>Priority: Minor
>
> In the NameNode's web interface I see negative memory usage, which looks like 
> a bug:
> Non Heap Memory used 46.2 MB of 46.84 MB Committed Non Heap Memory. Max Non 
> Heap Memory is -1 B.






[jira] [Commented] (HDFS-10401) Negative memory stats in NameNode web interface

2016-05-15 Thread Yiqun Lin (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10401?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15283840#comment-15283840
 ] 

Yiqun Lin commented on HDFS-10401:
--

The max non-heap memory showing as -1 B probably means the value is undefined. 
If I'm right, maybe we can improve how this is displayed in {{dfshealth.html}}.
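For reference, the -1 does come from the JVM: {{MemoryUsage.getMax()}} is documented to return -1 when the maximum is undefined, so the page could special-case it. A minimal Java sketch of that guard (the helper name is invented):

{code}
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryUsage;

class NonHeapDisplay {
  // Format the max non-heap figure the way dfshealth.html might.
  static String maxNonHeap() {
    MemoryUsage nonHeap =
        ManagementFactory.getMemoryMXBean().getNonHeapMemoryUsage();
    long max = nonHeap.getMax(); // -1 means "undefined" per the JMX contract
    return max < 0 ? "unbounded" : (max + " B");
  }
}
{code}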

> Negative memory stats in NameNode web interface
> ---
>
> Key: HDFS-10401
> URL: https://issues.apache.org/jira/browse/HDFS-10401
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.7.2
> Environment: Ubuntu 16.04, openjdk version "1.8.0_91", Hadoop-2.7.2
>Reporter: Dmitry Sivachenko
>Priority: Minor
>
> In the NameNode's web interface I see negative memory usage, which looks like 
> a bug:
> Non Heap Memory used 46.2 MB of 46.84 MB Committed Non Heap Memory. Max Non 
> Heap Memory is -1 B.






[jira] [Created] (HDFS-10401) Negative memory stats in NameNode web interface

2016-05-15 Thread Dmitry Sivachenko (JIRA)
Dmitry Sivachenko created HDFS-10401:


 Summary: Negative memory stats in NameNode web interface
 Key: HDFS-10401
 URL: https://issues.apache.org/jira/browse/HDFS-10401
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.7.2
 Environment: Ubuntu 16.04, openjdk version "1.8.0_91", Hadoop-2.7.2
Reporter: Dmitry Sivachenko
Priority: Minor


In the NameNode's web interface I see negative memory usage, which looks like a 
bug:

Non Heap Memory used 46.2 MB of 46.84 MB Committed Non Heap Memory. Max Non 
Heap Memory is -1 B.


