[jira] [Updated] (HDFS-9950) TestDecommissioningStatus fails intermittently in trunk

2016-03-26 Thread Lin Yiqun (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9950?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lin Yiqun updated HDFS-9950:

Resolution: Duplicate
Status: Resolved  (was: Patch Available)

> TestDecommissioningStatus fails intermittently in trunk
> ---
>
> Key: HDFS-9950
> URL: https://issues.apache.org/jira/browse/HDFS-9950
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Reporter: Lin Yiqun
>Assignee: Lin Yiqun
> Attachments: HDFS-9950.001.patch
>
>
> I have often seen the test case {{TestDecommissioningStatus}} fail 
> intermittently. Looking at the test failure report, it always shows this 
> error:
> {code}
> testDecommissionStatus(org.apache.hadoop.hdfs.server.namenode.TestDecommissioningStatus)
>   Time elapsed: 0.462 sec  <<< FAILURE!
> java.lang.AssertionError: Unexpected num under-replicated blocks expected:<3> 
> but was:<4>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:743)
>   at org.junit.Assert.assertEquals(Assert.java:118)
>   at org.junit.Assert.assertEquals(Assert.java:555)
>   at 
> org.apache.hadoop.hdfs.server.namenode.TestDecommissioningStatus.checkDecommissionStatus(TestDecommissioningStatus.java:196)
>   at 
> org.apache.hadoop.hdfs.server.namenode.TestDecommissioningStatus.testDecommissionStatus(TestDecommissioningStatus.java:291)
> {code}
> The reason is that the under-replicated block count checked by 
> checkDecommissionStatus in {{TestDecommissioningStatus#testDecommissionStatus}} 
> is not what the test expects. 
> In this test case, each datanode should hold 4 blocks (2 for decommission.dat, 
> 2 for decommission.dat1). The expected count of 3 on the first node comes from 
> the fact that the last block of the under-construction block collection is not 
> replicated as long as its live-replica count is at least the BlockManager's 
> minReplication (1 in this case). Before the second datanode is decommissioned, 
> that node already holds one live replica of the under-construction block 
> collection's last block. 
> So in the failing case, the first node's under-replicated count changing to 4 
> indicates that the live-replica count of that last block was already 0 before 
> the second datanode was decommissioned. I see two possible causes: 
> * The second datanode was decommissioned before the first one.
> * Creating the file decommission.dat1 failed, so the second datanode never 
> received this block.
> Reading the code, I see it already checks the decommission-in-progress nodes 
> here:
> {code}
> if (iteration == 0) {
>   assertEquals(decommissioningNodes.size(), 1);
>   DatanodeDescriptor decommNode = decommissioningNodes.get(0);
>   checkDecommissionStatus(decommNode, 3, 0, 1);
>   checkDFSAdminDecommissionStatus(decommissioningNodes.subList(0, 1),
>       fileSys, admin);
> }
> {code}
> So the second possibility seems the more likely cause. In addition, the test 
> does not check the block count after the files are created. We could add a 
> check here and retry the operation if the block count is not as expected.
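
As a rough sketch of the check-and-retry idea above (assuming the usual 
org.apache.hadoop.fs imports, JUnit's {{Assert}}, and this test's existing 
{{writeFile}} helper; the retry bound is an arbitrary illustration, not the 
actual patch):

{code}
// Hedged sketch: verify the expected block count right after creating a test
// file, retrying the write a bounded number of times if blocks are missing.
private void createFileWithBlockCheck(FileSystem fileSys, Path name,
    short repl, int expectedBlocks) throws IOException {
  for (int attempt = 0; attempt < 3; attempt++) {
    writeFile(fileSys, name, repl);  // existing helper in this test (assumed)
    BlockLocation[] locs = fileSys.getFileBlockLocations(
        fileSys.getFileStatus(name), 0, Long.MAX_VALUE);
    if (locs.length == expectedBlocks) {
      return;  // all expected blocks exist; safe to start decommissioning
    }
    fileSys.delete(name, true);  // incomplete write; retry from scratch
  }
  Assert.fail("Could not create " + name + " with " + expectedBlocks
      + " blocks");
}
{code}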





[jira] [Commented] (HDFS-9950) TestDecommissioningStatus fails intermittently in trunk

2016-03-26 Thread Lin Yiqun (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9950?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15213344#comment-15213344
 ] 

Lin Yiqun commented on HDFS-9950:
-

Duplicate of HDFS-9599, which has a thorough analysis. See the latest patch 
there.

> TestDecommissioningStatus fails intermittently in trunk
> ---
>
> Key: HDFS-9950
> URL: https://issues.apache.org/jira/browse/HDFS-9950
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Reporter: Lin Yiqun
>Assignee: Lin Yiqun
> Attachments: HDFS-9950.001.patch
>
>
> I have often seen the test case {{TestDecommissioningStatus}} fail 
> intermittently. Looking at the test failure report, it always shows this 
> error:
> {code}
> testDecommissionStatus(org.apache.hadoop.hdfs.server.namenode.TestDecommissioningStatus)
>   Time elapsed: 0.462 sec  <<< FAILURE!
> java.lang.AssertionError: Unexpected num under-replicated blocks expected:<3> 
> but was:<4>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:743)
>   at org.junit.Assert.assertEquals(Assert.java:118)
>   at org.junit.Assert.assertEquals(Assert.java:555)
>   at 
> org.apache.hadoop.hdfs.server.namenode.TestDecommissioningStatus.checkDecommissionStatus(TestDecommissioningStatus.java:196)
>   at 
> org.apache.hadoop.hdfs.server.namenode.TestDecommissioningStatus.testDecommissionStatus(TestDecommissioningStatus.java:291)
> {code}
> The reason is that the under-replicated block count checked by 
> checkDecommissionStatus in {{TestDecommissioningStatus#testDecommissionStatus}} 
> is not what the test expects. 
> In this test case, each datanode should hold 4 blocks (2 for decommission.dat, 
> 2 for decommission.dat1). The expected count of 3 on the first node comes from 
> the fact that the last block of the under-construction block collection is not 
> replicated as long as its live-replica count is at least the BlockManager's 
> minReplication (1 in this case). Before the second datanode is decommissioned, 
> that node already holds one live replica of the under-construction block 
> collection's last block. 
> So in the failing case, the first node's under-replicated count changing to 4 
> indicates that the live-replica count of that last block was already 0 before 
> the second datanode was decommissioned. I see two possible causes: 
> * The second datanode was decommissioned before the first one.
> * Creating the file decommission.dat1 failed, so the second datanode never 
> received this block.
> Reading the code, I see it already checks the decommission-in-progress nodes 
> here:
> {code}
> if (iteration == 0) {
>   assertEquals(decommissioningNodes.size(), 1);
>   DatanodeDescriptor decommNode = decommissioningNodes.get(0);
>   checkDecommissionStatus(decommNode, 3, 0, 1);
>   checkDFSAdminDecommissionStatus(decommissioningNodes.subList(0, 1),
>       fileSys, admin);
> }
> {code}
> So the second possibility seems the more likely cause. In addition, the test 
> does not check the block count after the files are created. We could add a 
> check here and retry the operation if the block count is not as expected.





[jira] [Updated] (HDFS-9599) TestDecommissioningStatus.testDecommissionStatus occasionally fails

2016-03-26 Thread Lin Yiqun (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9599?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lin Yiqun updated HDFS-9599:

Status: Patch Available  (was: Open)

> TestDecommissioningStatus.testDecommissionStatus occasionally fails
> ---
>
> Key: HDFS-9599
> URL: https://issues.apache.org/jira/browse/HDFS-9599
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
> Environment: Jenkins
>Reporter: Wei-Chiu Chuang
>Assignee: Lin Yiqun
> Attachments: HDFS-9599.001.patch
>
>
> From the test results of a recent Jenkins nightly: 
> https://builds.apache.org/job/Hadoop-Hdfs-trunk/2663/testReport/junit/org.apache.hadoop.hdfs.server.namenode/TestDecommissioningStatus/testDecommissionStatus/
> The test failed because the number of under-replicated blocks was 4 instead 
> of 3.
> Looking at the log, there is a stray block, which might have caused the 
> failure:
> {noformat}
> 2015-12-23 00:42:05,820 [Block report processor] INFO  BlockStateChange 
> (BlockManager.java:processReport(2131)) - BLOCK* processReport: 
> blk_1073741825_1001 on node 127.0.0.1:57382 size 16384 does not belong to any 
> file
> {noformat}
> The block size 16384 suggests this is left over from the sibling test case 
> testDecommissionStatusAfterDNRestart. This can happen because the same 
> MiniDFSCluster is reused between tests.
> The test implementation should do a better job of isolating tests.
> Another failure case is when the load factor comes into play and a block 
> cannot find sufficient datanodes on which to place replicas. In this test, 
> the runtime should not consider the load factor:
> {noformat}
> conf.setBoolean(DFSConfigKeys.DFS_NAMENODE_REPLICATION_CONSIDERLOAD_KEY, 
> false);
> {noformat}





[jira] [Updated] (HDFS-9599) TestDecommissioningStatus.testDecommissionStatus occasionally fails

2016-03-26 Thread Lin Yiqun (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9599?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lin Yiqun updated HDFS-9599:

Attachment: HDFS-9599.001.patch

> TestDecommissioningStatus.testDecommissionStatus occasionally fails
> ---
>
> Key: HDFS-9599
> URL: https://issues.apache.org/jira/browse/HDFS-9599
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
> Environment: Jenkins
>Reporter: Wei-Chiu Chuang
>Assignee: Lin Yiqun
> Attachments: HDFS-9599.001.patch
>
>
> From the test results of a recent Jenkins nightly: 
> https://builds.apache.org/job/Hadoop-Hdfs-trunk/2663/testReport/junit/org.apache.hadoop.hdfs.server.namenode/TestDecommissioningStatus/testDecommissionStatus/
> The test failed because the number of under-replicated blocks was 4 instead 
> of 3.
> Looking at the log, there is a stray block, which might have caused the 
> failure:
> {noformat}
> 2015-12-23 00:42:05,820 [Block report processor] INFO  BlockStateChange 
> (BlockManager.java:processReport(2131)) - BLOCK* processReport: 
> blk_1073741825_1001 on node 127.0.0.1:57382 size 16384 does not belong to any 
> file
> {noformat}
> The block size 16384 suggests this is left over from the sibling test case 
> testDecommissionStatusAfterDNRestart. This can happen because the same 
> MiniDFSCluster is reused between tests.
> The test implementation should do a better job of isolating tests.
> Another failure case is when the load factor comes into play and a block 
> cannot find sufficient datanodes on which to place replicas. In this test, 
> the runtime should not consider the load factor:
> {noformat}
> conf.setBoolean(DFSConfigKeys.DFS_NAMENODE_REPLICATION_CONSIDERLOAD_KEY, 
> false);
> {noformat}





[jira] [Assigned] (HDFS-9599) TestDecommissioningStatus.testDecommissionStatus occasionally fails

2016-03-26 Thread Lin Yiqun (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9599?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lin Yiqun reassigned HDFS-9599:
---

Assignee: Lin Yiqun

> TestDecommissioningStatus.testDecommissionStatus occasionally fails
> ---
>
> Key: HDFS-9599
> URL: https://issues.apache.org/jira/browse/HDFS-9599
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
> Environment: Jenkins
>Reporter: Wei-Chiu Chuang
>Assignee: Lin Yiqun
> Attachments: HDFS-9599.001.patch
>
>
> From the test results of a recent Jenkins nightly: 
> https://builds.apache.org/job/Hadoop-Hdfs-trunk/2663/testReport/junit/org.apache.hadoop.hdfs.server.namenode/TestDecommissioningStatus/testDecommissionStatus/
> The test failed because the number of under-replicated blocks was 4 instead 
> of 3.
> Looking at the log, there is a stray block, which might have caused the 
> failure:
> {noformat}
> 2015-12-23 00:42:05,820 [Block report processor] INFO  BlockStateChange 
> (BlockManager.java:processReport(2131)) - BLOCK* processReport: 
> blk_1073741825_1001 on node 127.0.0.1:57382 size 16384 does not belong to any 
> file
> {noformat}
> The block size 16384 suggests this is left over from the sibling test case 
> testDecommissionStatusAfterDNRestart. This can happen because the same 
> MiniDFSCluster is reused between tests.
> The test implementation should do a better job of isolating tests.
> Another failure case is when the load factor comes into play and a block 
> cannot find sufficient datanodes on which to place replicas. In this test, 
> the runtime should not consider the load factor:
> {noformat}
> conf.setBoolean(DFSConfigKeys.DFS_NAMENODE_REPLICATION_CONSIDERLOAD_KEY, 
> false);
> {noformat}





[jira] [Commented] (HDFS-9599) TestDecommissioningStatus.testDecommissionStatus occasionally fails

2016-03-26 Thread Lin Yiqun (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9599?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15213341#comment-15213341
 ] 

Lin Yiqun commented on HDFS-9599:
-

Thanks [~jojochuang] for the thorough analysis. I agree with this comment:
{quote}
This can happen, because the same minidfs cluster is reused between tests.
The test implementation should do a better job isolating tests.
{quote}
Since the config {{DFS_NAMENODE_REPLICATION_CONSIDERLOAD_KEY}} is already set 
to false in this test, the first cause seems to be the main reason.
I am uploading an initial patch that isolates the tests in the MiniDFSCluster, 
pending Jenkins.
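
A minimal sketch of what such isolation could look like (assuming JUnit 4 and 
the test's shared {{cluster}} field; the actual patch may differ):

{code}
// Hedged sketch: wipe the test files after each case so a later test's block
// report cannot see blocks left over from a sibling test.
@After
public void cleanupTestFiles() throws IOException {
  FileSystem fs = cluster.getFileSystem();
  for (FileStatus status : fs.listStatus(new Path("/"))) {
    fs.delete(status.getPath(), true);
  }
}
{code}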

> TestDecommissioningStatus.testDecommissionStatus occasionally fails
> ---
>
> Key: HDFS-9599
> URL: https://issues.apache.org/jira/browse/HDFS-9599
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
> Environment: Jenkins
>Reporter: Wei-Chiu Chuang
>
> From the test results of a recent Jenkins nightly: 
> https://builds.apache.org/job/Hadoop-Hdfs-trunk/2663/testReport/junit/org.apache.hadoop.hdfs.server.namenode/TestDecommissioningStatus/testDecommissionStatus/
> The test failed because the number of under-replicated blocks was 4 instead 
> of 3.
> Looking at the log, there is a stray block, which might have caused the 
> failure:
> {noformat}
> 2015-12-23 00:42:05,820 [Block report processor] INFO  BlockStateChange 
> (BlockManager.java:processReport(2131)) - BLOCK* processReport: 
> blk_1073741825_1001 on node 127.0.0.1:57382 size 16384 does not belong to any 
> file
> {noformat}
> The block size 16384 suggests this is left over from the sibling test case 
> testDecommissionStatusAfterDNRestart. This can happen because the same 
> MiniDFSCluster is reused between tests.
> The test implementation should do a better job of isolating tests.
> Another failure case is when the load factor comes into play and a block 
> cannot find sufficient datanodes on which to place replicas. In this test, 
> the runtime should not consider the load factor:
> {noformat}
> conf.setBoolean(DFSConfigKeys.DFS_NAMENODE_REPLICATION_CONSIDERLOAD_KEY, 
> false);
> {noformat}





[jira] [Commented] (HDFS-9694) Make existing DFSClient#getFileChecksum() work for striped blocks

2016-03-26 Thread Kai Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9694?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15213312#comment-15213312
 ] 

Kai Zheng commented on HDFS-9694:
-

Thanks all for the effort!
Thanks [~umamaheswararao] for the great review and for committing this!

> Make existing DFSClient#getFileChecksum() work for striped blocks
> -
>
> Key: HDFS-9694
> URL: https://issues.apache.org/jira/browse/HDFS-9694
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Kai Zheng
>Assignee: Kai Zheng
> Fix For: 3.0.0
>
> Attachments: HDFS-9694-v1.patch, HDFS-9694-v2.patch, 
> HDFS-9694-v3.patch, HDFS-9694-v4.patch, HDFS-9694-v5.patch, 
> HDFS-9694-v6.patch, HDFS-9694-v7.patch, HDFS-9694-v8.patch, HDFS-9694-v9.patch
>
>
> This is a sub-task of HDFS-8430 and will make the existing API 
> {{FileSystem#getFileChecksum(path)}} work for striped files. It will also 
> refactor existing code and lay out basic groundwork for subsequent tasks, 
> such as support for the new API proposed there.
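
For context, the client-side call that this work extends is shown below (a 
minimal usage sketch; the path is illustrative):

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileChecksum;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class StripedChecksumExample {
  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.get(new Configuration());
    // After this change, the same call works whether /ec/file is an ordinary
    // replicated file or lives in an erasure-coded (striped) directory.
    FileChecksum checksum = fs.getFileChecksum(new Path("/ec/file"));
    System.out.println("checksum: " + checksum);
  }
}
{code}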





[jira] [Commented] (HDFS-9694) Make existing DFSClient#getFileChecksum() work for striped blocks

2016-03-26 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9694?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15213302#comment-15213302
 ] 

Hudson commented on HDFS-9694:
--

FAILURE: Integrated in Hadoop-trunk-Commit #9505 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/9505/])
HDFS-9694. Make existing DFSClient#getFileChecksum() work for striped 
(uma.gangumalla: rev 3a4ff7776e8fab6cc87932b9aa8fb48f7b69c720)
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/FileChecksumHelper.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocolPB/PBHelperClient.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataXceiver.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/datatransfer/Op.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/util/StripedBlockUtil.java
* hadoop-hdfs-project/hadoop-hdfs-client/dev-support/findbugsExcludeFile.xml
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/datatransfer/Receiver.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/StripedBlockInfo.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockChecksumHelper.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/datatransfer/DataTransferProtocol.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestFileChecksum.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/datatransfer/Sender.java
* hadoop-hdfs-project/hadoop-hdfs-client/src/main/proto/datatransfer.proto


> Make existing DFSClient#getFileChecksum() work for striped blocks
> -
>
> Key: HDFS-9694
> URL: https://issues.apache.org/jira/browse/HDFS-9694
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Kai Zheng
>Assignee: Kai Zheng
> Fix For: 3.0.0
>
> Attachments: HDFS-9694-v1.patch, HDFS-9694-v2.patch, 
> HDFS-9694-v3.patch, HDFS-9694-v4.patch, HDFS-9694-v5.patch, 
> HDFS-9694-v6.patch, HDFS-9694-v7.patch, HDFS-9694-v8.patch, HDFS-9694-v9.patch
>
>
> This is a sub-task of HDFS-8430 and will make the existing API 
> {{FileSystem#getFileChecksum(path)}} work for striped files. It will also 
> refactor existing code and lay out basic groundwork for subsequent tasks, 
> such as support for the new API proposed there.





[jira] [Resolved] (HDFS-9694) Make existing DFSClient#getFileChecksum() work for striped blocks

2016-03-26 Thread Uma Maheswara Rao G (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9694?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uma Maheswara Rao G resolved HDFS-9694.
---
Resolution: Fixed

I have just committed this. Earlier it was my mistake: I missed adding the 
newly added file. Thanks.

> Make existing DFSClient#getFileChecksum() work for striped blocks
> -
>
> Key: HDFS-9694
> URL: https://issues.apache.org/jira/browse/HDFS-9694
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Kai Zheng
>Assignee: Kai Zheng
> Fix For: 3.0.0
>
> Attachments: HDFS-9694-v1.patch, HDFS-9694-v2.patch, 
> HDFS-9694-v3.patch, HDFS-9694-v4.patch, HDFS-9694-v5.patch, 
> HDFS-9694-v6.patch, HDFS-9694-v7.patch, HDFS-9694-v8.patch, HDFS-9694-v9.patch
>
>
> This is a sub-task of HDFS-8430 and will make the existing API 
> {{FileSystem#getFileChecksum(path)}} work for striped files. It will also 
> refactor existing code and lay out basic groundwork for subsequent tasks, 
> such as support for the new API proposed there.





[jira] [Commented] (HDFS-9694) Make existing DFSClient#getFileChecksum() work for striped blocks

2016-03-26 Thread Uma Maheswara Rao G (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9694?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15213294#comment-15213294
 ] 

Uma Maheswara Rao G commented on HDFS-9694:
---

Thanks [~arpitagarwal] and [~kaisasak] for noticing and reverting. I will check 
and recommit it. Thanks

> Make existing DFSClient#getFileChecksum() work for striped blocks
> -
>
> Key: HDFS-9694
> URL: https://issues.apache.org/jira/browse/HDFS-9694
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Kai Zheng
>Assignee: Kai Zheng
> Fix For: 3.0.0
>
> Attachments: HDFS-9694-v1.patch, HDFS-9694-v2.patch, 
> HDFS-9694-v3.patch, HDFS-9694-v4.patch, HDFS-9694-v5.patch, 
> HDFS-9694-v6.patch, HDFS-9694-v7.patch, HDFS-9694-v8.patch, HDFS-9694-v9.patch
>
>
> This is a sub-task of HDFS-8430 and will make the existing API 
> {{FileSystem#getFileChecksum(path)}} work for striped files. It will also 
> refactor existing code and lay out basic groundwork for subsequent tasks, 
> such as support for the new API proposed there.





[jira] [Commented] (HDFS-9694) Make existing DFSClient#getFileChecksum() work for striped blocks

2016-03-26 Thread Kai Sasaki (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9694?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15213284#comment-15213284
 ] 

Kai Sasaki commented on HDFS-9694:
--

[~arpiagariu] Thank you so much!

> Make existing DFSClient#getFileChecksum() work for striped blocks
> -
>
> Key: HDFS-9694
> URL: https://issues.apache.org/jira/browse/HDFS-9694
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Kai Zheng
>Assignee: Kai Zheng
> Fix For: 3.0.0
>
> Attachments: HDFS-9694-v1.patch, HDFS-9694-v2.patch, 
> HDFS-9694-v3.patch, HDFS-9694-v4.patch, HDFS-9694-v5.patch, 
> HDFS-9694-v6.patch, HDFS-9694-v7.patch, HDFS-9694-v8.patch, HDFS-9694-v9.patch
>
>
> This is a sub-task of HDFS-8430 and will make the existing API 
> {{FileSystem#getFileChecksum(path)}} work for striped files. It will also 
> refactor existing code and lay out basic groundwork for subsequent tasks, 
> such as support for the new API proposed there.





[jira] [Updated] (HDFS-8101) DFSClient use of non-constant DFSConfigKeys pulls in WebHDFS classes at runtime

2016-03-26 Thread Haohui Mai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8101?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haohui Mai updated HDFS-8101:
-
Fix Version/s: (was: 2.8.0)
   2.7.3

> DFSClient use of non-constant DFSConfigKeys pulls in WebHDFS classes at 
> runtime
> ---
>
> Key: HDFS-8101
> URL: https://issues.apache.org/jira/browse/HDFS-8101
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs-client
>Affects Versions: 2.7.0
>Reporter: Sean Busbey
>Assignee: Sean Busbey
>Priority: Minor
> Fix For: 2.7.3
>
> Attachments: HDFS-8101.1.patch.txt
>
>
> Previously, all references to DFSConfigKeys in DFSClient were compile-time 
> constants, which meant that normal users of DFSClient wouldn't resolve 
> DFSConfigKeys at run time. As of HDFS-7718, DFSClient has a reference to a 
> member of DFSConfigKeys that isn't a compile-time constant 
> (DFS_CLIENT_KEY_PROVIDER_CACHE_EXPIRY_DEFAULT).
> Since the class must be resolved now, this particular member
> {code}
> public static final String DFS_WEBHDFS_AUTHENTICATION_FILTER_DEFAULT =
>     AuthFilter.class.getName();
> {code}
> means that javax.servlet.Filter needs to be on the classpath.
> javax-servlet-api is one of the properly listed dependencies for HDFS; 
> however, if we replace {{AuthFilter.class.getName()}} with the equivalent 
> String literal, then downstream folks can avoid including it while 
> maintaining compatibility.
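
In other words, the proposed fix amounts to something like the following (a 
sketch; the literal inlines the fully qualified class name so the field 
becomes a compile-time constant):

{code}
// Before: referencing AuthFilter forces class resolution, dragging
// javax.servlet.Filter onto the classpath of every DFSConfigKeys user.
public static final String DFS_WEBHDFS_AUTHENTICATION_FILTER_DEFAULT =
    AuthFilter.class.getName();

// After: a true compile-time constant with the same value, so DFSClient
// callers no longer need javax-servlet-api at run time.
public static final String DFS_WEBHDFS_AUTHENTICATION_FILTER_DEFAULT =
    "org.apache.hadoop.hdfs.web.AuthFilter";
{code}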





[jira] [Commented] (HDFS-9005) Provide support for upgrade domain script

2016-03-26 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9005?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15213269#comment-15213269
 ] 

Allen Wittenauer commented on HDFS-9005:


It's concerning that:

a) anyone would even suggest that it is ok to add a file that fails the ASF 
license check
b) no one bothered to see how the *other* JSON files are handled in the source 
tree
c) there are a ton of watchers (including PMC) and not a single one of you said 
anything

To save you folks some time: no, it's not OK to just let a license violation 
hang out in the source tree. Never ever ever.  That's an instant "do not 
commit."  No, you don't need to generate it dynamically.  Just add the file to 
the RAT exclusions list in the pom file and all will be well.

Next up, is there a reason why the fix version is set to both 2.8.0 and 3.0.0?

> Provide support for upgrade domain script
> -
>
> Key: HDFS-9005
> URL: https://issues.apache.org/jira/browse/HDFS-9005
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Ming Ma
>Assignee: Ming Ma
> Fix For: 3.0.0, 2.8.0
>
> Attachments: HDFS-9005-2.patch, HDFS-9005-3.patch, HDFS-9005-4.patch, 
> HDFS-9005.patch
>
>
> As part of the upgrade domain feature, we need to provide a mechanism to 
> specify the upgrade domain for each datanode. One way to accomplish that is 
> to allow admins to specify an upgrade domain script that takes a DN IP or 
> hostname as input and returns the upgrade domain. The namenode will then use 
> it at run time to set {{DatanodeInfo}}'s upgrade domain string. The 
> configuration can be something like:
> {noformat}
> <property>
>   <name>dfs.namenode.upgrade.domain.script.file.name</name>
>   <value>/etc/hadoop/conf/upgrade-domain.sh</value>
> </property>
> {noformat}
> just like the topology script.





[jira] [Commented] (HDFS-9005) Provide support for upgrade domain script

2016-03-26 Thread Lei (Eddy) Xu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9005?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15213213#comment-15213213
 ] 

Lei (Eddy) Xu commented on HDFS-9005:
-

Hi, [~aw], please see [~mingma]'s comment:

bq. For asflicense issue, it is due to the test json file, which won't allow 
comments. 

If this is unacceptable, let us file a new JIRA to generate this JSON file on 
demand, as [~mingma] suggested.

> Provide support for upgrade domain script
> -
>
> Key: HDFS-9005
> URL: https://issues.apache.org/jira/browse/HDFS-9005
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Ming Ma
>Assignee: Ming Ma
> Fix For: 3.0.0, 2.8.0
>
> Attachments: HDFS-9005-2.patch, HDFS-9005-3.patch, HDFS-9005-4.patch, 
> HDFS-9005.patch
>
>
> As part of the upgrade domain feature, we need to provide a mechanism to 
> specify the upgrade domain for each datanode. One way to accomplish that is 
> to allow admins to specify an upgrade domain script that takes a DN IP or 
> hostname as input and returns the upgrade domain. The namenode will then use 
> it at run time to set {{DatanodeInfo}}'s upgrade domain string. The 
> configuration can be something like:
> {noformat}
> <property>
>   <name>dfs.namenode.upgrade.domain.script.file.name</name>
>   <value>/etc/hadoop/conf/upgrade-domain.sh</value>
> </property>
> {noformat}
> just like the topology script.





[jira] [Comment Edited] (HDFS-9732) Remove DelegationTokenIdentifier.toString() —for better logging output

2016-03-26 Thread Yongjun Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9732?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15213210#comment-15213210
 ] 

Yongjun Zhang edited comment on HDFS-9732 at 3/26/16 9:29 PM:
--

Thanks [~aw]. 

I like your idea of checking all the places where {{toString()}} is used in 
CLI output. However, I think we can do that in a separate master jira and let 
each subtask fix some related commands, or even one particular issue (like 
this one). Covering everything in a single jira would be really ambitious, 
IMHO. The good thing is that, based on the discussion here, we are aware this 
has been a long-standing issue, and we need to keep it in mind when reviewing 
jiras that may introduce output incompatibility.

About method naming: I wonder if we could use {{toStringFrozen()}} or 
{{toStringStable()}} and javadoc it as a "stable API for backward 
compatibility, currently only for CLI"? I can see that if we put the {{CLI}} 
keyword in the name, calling these methods for anything else would look 
awkward, yet I suspect there is still a chance of them being called for 
something else. 

The point we are trying to make is that *the method needs to be stable* and 
must not change, or compatibility will be broken; and it is *not just* because 
it is used by the CLI. For example, some methods called by the CLI don't need 
to be stable at all.

Perhaps we should use the name {{toStringStable()}} instead?

Thanks.

 


was (Author: yzhangal):
Thanks [~aw]. 

I like your idea of checking all the places where {{toString()}} is used in 
CLI output. However, I think we can do that in a separate master jira and let 
each subtask fix some related commands, or even one particular issue (like 
this one). Covering everything in a single jira would be really ambitious, 
IMHO. The good thing is that, based on the discussion here, we are aware this 
has been an issue for a long time.

About method naming: I wonder if we could use {{toStringFrozen()}} or 
{{toStringStable()}} and javadoc it as a "stable API for backward 
compatibility, currently only for CLI"? I can see that if we put the {{CLI}} 
keyword in the name, calling these methods for anything else would look 
awkward, yet I suspect there is still a chance of them being called for 
something else. 

The point we are trying to make is that *the method needs to be stable* and 
must not change, or compatibility will be broken; and it is *not just* because 
it is used by the CLI. For example, some methods called by the CLI don't need 
to be stable at all.

Perhaps we should use the name {{toStringStable()}} instead?

Thanks.

 

> Remove DelegationTokenIdentifier.toString() —for better logging output
> --
>
> Key: HDFS-9732
> URL: https://issues.apache.org/jira/browse/HDFS-9732
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 2.7.2
>Reporter: Steve Loughran
>Assignee: Yongjun Zhang
> Attachments: HADOOP-12752-001.patch, HDFS-9732.001.patch, 
> HDFS-9732.002.patch
>
>   Original Estimate: 0.5h
>  Remaining Estimate: 0.5h
>
> HDFS {{DelegationTokenIdentifier.toString()}} adds some diagnostic info 
> (owner, sequence number), but its superclass, 
> {{AbstractDelegationTokenIdentifier}}, contains a lot more information, 
> including token issue and expiry times.
> Because {{DelegationTokenIdentifier.toString()}} doesn't include this data, 
> information that is potentially useful for Kerberos diagnostics is lost.





[jira] [Commented] (HDFS-9732) Remove DelegationTokenIdentifier.toString() —for better logging output

2016-03-26 Thread Yongjun Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9732?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15213210#comment-15213210
 ] 

Yongjun Zhang commented on HDFS-9732:
-

Thanks [~aw]. 

I like your idea of checking all the places where {{toString()}} is used in 
CLI output. However, I think we can do that in a separate master jira and let 
each subtask fix some related commands, or even one particular issue (like 
this one). Covering everything in a single jira would be really ambitious, 
IMHO. The good thing is that, based on the discussion here, we are aware this 
has been an issue for a long time.

About method naming: I wonder if we could use {{toStringFrozen()}} or 
{{toStringStable()}} and javadoc it as a "stable API for backward 
compatibility, currently only for CLI"? I can see that if we put the {{CLI}} 
keyword in the name, calling these methods for anything else would look 
awkward, yet I suspect there is still a chance of them being called for 
something else. 

The point we are trying to make is that *the method needs to be stable* and 
must not change, or compatibility will be broken; and it is *not just* because 
it is used by the CLI. For example, some methods called by the CLI don't need 
to be stable at all.

Perhaps we should use the name {{toStringStable()}} instead?

Thanks.
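
To make the naming proposal concrete, the split could look roughly like this 
on {{AbstractDelegationTokenIdentifier}}, using its existing getters (a 
sketch only, not the committed patch):

{code}
@Override
public String toString() {
  // Free to evolve: rich diagnostics for logs and debugging.
  return getKind() + " token " + getSequenceNumber() + " for "
      + getUser().getShortUserName() + " with renewer " + getRenewer();
}

/**
 * Stable output for backward compatibility; currently only relied on by
 * CLI tools such as "hdfs fetchdt". Do not change this format.
 */
public String toStringStable() {
  return "owner=" + getOwner() + ", renewer=" + getRenewer()
      + ", realUser=" + getRealUser() + ", issueDate=" + getIssueDate()
      + ", maxDate=" + getMaxDate() + ", sequenceNumber="
      + getSequenceNumber() + ", masterKeyId=" + getMasterKeyId();
}
{code}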

 

> Remove DelegationTokenIdentifier.toString() —for better logging output
> --
>
> Key: HDFS-9732
> URL: https://issues.apache.org/jira/browse/HDFS-9732
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 2.7.2
>Reporter: Steve Loughran
>Assignee: Yongjun Zhang
> Attachments: HADOOP-12752-001.patch, HDFS-9732.001.patch, 
> HDFS-9732.002.patch
>
>   Original Estimate: 0.5h
>  Remaining Estimate: 0.5h
>
> HDFS {{DelegationTokenIdentifier.toString()}} adds some diagnostic info 
> (owner, sequence number), but its superclass, 
> {{AbstractDelegationTokenIdentifier}}, contains a lot more information, 
> including token issue and expiry times.
> Because {{DelegationTokenIdentifier.toString()}} doesn't include this data, 
> information that is potentially useful for Kerberos diagnostics is lost.





[jira] [Commented] (HDFS-9732) Remove DelegationTokenIdentifier.toString() —for better logging output

2016-03-26 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9732?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15213203#comment-15213203
 ] 

Hadoop QA commented on HDFS-9732:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 15s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 15s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
38s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 53s 
{color} | {color:green} trunk passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 38s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
4s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 21s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
42s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 5m 5s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 18s 
{color} | {color:green} trunk passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 3m 13s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 14s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 
1s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 48s 
{color} | {color:green} the patch passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 5m 48s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 42s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 42s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
4s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 20s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
40s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 5m 
46s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 19s 
{color} | {color:green} the patch passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 3m 14s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 6m 29s {color} 
| {color:red} hadoop-common in the patch failed with JDK v1.8.0_74. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 50s 
{color} | {color:green} hadoop-hdfs-client in the patch passed with JDK 
v1.8.0_74. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 67m 44s {color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.8.0_74. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 6m 50s {color} 
| {color:red} hadoop-common in the patch failed with JDK v1.7.0_95. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 59s 
{color} | {color:green} hadoop-hdfs-client in the patch passed with JDK 
v1.7.0_95. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 67m 30s {color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.7.0_95. {color} |
| {color:red}-1{color} | 

[jira] [Updated] (HDFS-10195) Ozone: Add container persistence

2016-03-26 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10195?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HDFS-10195:
-
  Resolution: Fixed
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

+1 for the patch.  I have committed this to the HDFS-7240 feature branch.  
[~anu], thank you for the patch.  Jing, thank you for the code review.

> Ozone: Add container persistence
> 
>
> Key: HDFS-10195
> URL: https://issues.apache.org/jira/browse/HDFS-10195
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Anu Engineer
>Assignee: Anu Engineer
> Fix For: HDFS-7240
>
> Attachments: HDFS-10195-HDFS-7240.001.patch, 
> HDFS-10195-HDFS-7240.002.patch, HDFS-10195-HDFS-7240.003.patch, 
> HDFS-10195-HDFS-7240.004.patch
>
>
> Adds file based persistence for containers.





[jira] [Commented] (HDFS-9732) Remove DelegationTokenIdentifier.toString() —for better logging output

2016-03-26 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9732?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15213131#comment-15213131
 ] 

Allen Wittenauer commented on HDFS-9732:


bq. do we have any other occasion that we want a "frozen" output then CLI? 

Double-check the compat guidelines, but I'm fairly confident that the only 
output that can't change is the CLI's.  Web output, for example, is 
specifically called out as Unstable.  (Because if it weren't, the deadly web 
UI redo in 2.5/2.6 that basically screwed over most of the people with 
security deployed wouldn't have gone in.)

> Remove DelegationTokenIdentifier.toString() —for better logging output
> --
>
> Key: HDFS-9732
> URL: https://issues.apache.org/jira/browse/HDFS-9732
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 2.7.2
>Reporter: Steve Loughran
>Assignee: Yongjun Zhang
> Attachments: HADOOP-12752-001.patch, HDFS-9732.001.patch, 
> HDFS-9732.002.patch
>
>   Original Estimate: 0.5h
>  Remaining Estimate: 0.5h
>
> HDFS {{DelegationTokenIdentifier.toString()}} adds some diagnostic info 
> (owner, sequence number), but its superclass, 
> {{AbstractDelegationTokenIdentifier}}, contains a lot more information, 
> including token issue and expiry times.
> Because {{DelegationTokenIdentifier.toString()}} doesn't include this data, 
> information that is potentially useful for Kerberos diagnostics is lost.





[jira] [Commented] (HDFS-9732) Remove DelegationTokenIdentifier.toString() —for better logging output

2016-03-26 Thread Yongjun Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9732?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15213129#comment-15213129
 ] 

Yongjun Zhang commented on HDFS-9732:
-

Hi [~ste...@apache.org],

Ah, sorry, thanks for your comments; I did not see them until now.

About the method name {{detailsForCLI()}}: is there any other occasion where 
we want a "frozen" output other than the CLI? For example, might we one day 
need to call the frozen version for the web UI? If the CLI is the only 
situation, I can change the new method name per your suggestion.

One question:
{quote}
here subclasses would need to know to not call super.toString() and instead 
call some other method
{quote}
A derived class may have new fields to print on top of the base class's. If 
we don't call {{super.toString()}}, we would probably need to introduce a new 
method called by both the base class's and the child class's {{toString()}}.

In this case, all the info printed by the base {{toString()}} is applicable 
to the child classes, and in my rev 2 I added {{getKind()}}, which is 
overridden by child classes. Basically we are in full control of what the 
output looks like. Would you please explain why calling super.toString() is a 
bad idea (especially when we define our own base-class {{toString()}})? I can 
see a problem when we don't define {{toString()}} for the base, in which case 
{{Object}}'s {{toString()}} would be called.

Thanks.

 
 


> Remove DelegationTokenIdentifier.toString() —for better logging output
> --
>
> Key: HDFS-9732
> URL: https://issues.apache.org/jira/browse/HDFS-9732
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 2.7.2
>Reporter: Steve Loughran
>Assignee: Yongjun Zhang
> Attachments: HADOOP-12752-001.patch, HDFS-9732.001.patch, 
> HDFS-9732.002.patch
>
>   Original Estimate: 0.5h
>  Remaining Estimate: 0.5h
>
> HDFS {{DelegationTokenIdentifier.toString()}} adds some diagnostic info 
> (owner, sequence number), but its superclass, 
> {{AbstractDelegationTokenIdentifier}}, contains a lot more information, 
> including token issue and expiry times.
> Because {{DelegationTokenIdentifier.toString()}} doesn't include this data, 
> information that is potentially useful for Kerberos diagnostics is lost.





[jira] [Comment Edited] (HDFS-9732) Remove DelegationTokenIdentifier.toString() —for better logging output

2016-03-26 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9732?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15213127#comment-15213127
 ] 

Allen Wittenauer edited comment on HDFS-9732 at 3/26/16 5:49 PM:
-

While I can appreciate what you folks are saying, experience has shown that 
Hadoop devs are not disciplined enough to use this appropriately.  Incredibly 
bad code gets tossed over the fence all the time--my current favorite for 2.8 
is still the "log metrics to files outside of the metrics subsystem via log4j 
rather than just fixing the file metrics plug-in".  Something subtle like 
this difference is going to blow up big time. I still believe that it should 
be harder to do the wrong thing, but I'll acquiesce in order to move this 
forward.

That said, I'd still like to see:

a) an audit of *every* direct and indirect usage of toString to make sure it 
isn't getting used for CLI output
b) javadoc on the toString method explicitly saying that it is not to be used 
for CLI output because it evolves, with a pointer to the relevant section in 
the compat guidelines
c) toStringFrozen renamed to toStringCLI or something similar, to state what 
it does rather than what it is, so that in 3.x it can be changed.




was (Author: aw):
While I can appreciate what you folks are saying, experience has shown that 
Hadoop devs are not disciplined enough to use this appropriately.  Incredibly 
bad code gets tossed over the fence all the time--my current favorite is 
still the "log metrics to files outside of the metrics subsystem via log4j 
rather than just fixing the file metrics plug-in".  Something subtle like 
this difference is going to blow up big time. I still believe that it should 
be harder to do the wrong thing, but I'll acquiesce in order to move this 
forward.

That said, I'd still like to see:

a) an audit of *every* direct and indirect usage of toString to make sure it 
isn't getting used for CLI output
b) javadoc on the toString method explicitly saying that it is not to be used 
for CLI output because it evolves, with a pointer to the relevant section in 
the compat guidelines
c) toStringFrozen renamed to toStringCLI or something similar, to state what 
it does rather than what it is, so that in 3.x it can be changed.



> Remove DelegationTokenIdentifier.toString() —for better logging output
> --
>
> Key: HDFS-9732
> URL: https://issues.apache.org/jira/browse/HDFS-9732
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 2.7.2
>Reporter: Steve Loughran
>Assignee: Yongjun Zhang
> Attachments: HADOOP-12752-001.patch, HDFS-9732.001.patch, 
> HDFS-9732.002.patch
>
>   Original Estimate: 0.5h
>  Remaining Estimate: 0.5h
>
> HDFS {{DelegationTokenIdentifier.toString()}} adds some diagnostic info 
> (owner, sequence number), but its superclass, 
> {{AbstractDelegationTokenIdentifier}}, contains a lot more information, 
> including token issue and expiry times.
> Because {{DelegationTokenIdentifier.toString()}} doesn't include this data, 
> information that is potentially useful for Kerberos diagnostics is lost.





[jira] [Commented] (HDFS-9732) Remove DelegationTokenIdentifier.toString() —for better logging output

2016-03-26 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9732?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15213127#comment-15213127
 ] 

Allen Wittenauer commented on HDFS-9732:


While I can appreciate what you folks are saying, experience has shown that 
Hadoop devs are not disciplined enough to use this appropriately.  Incredibly 
bad code gets tossed over the fence all the time--my current favorite is 
still the "log metrics to files outside of the metrics subsystem via log4j 
rather than just fixing the file metrics plug-in".  Something subtle like 
this difference is going to blow up big time. I still believe that it should 
be harder to do the wrong thing, but I'll acquiesce in order to move this 
forward.

That said, I'd still like to see:

a) an audit of *every* direct and indirect usage of toString to make sure it 
isn't getting used for CLI output
b) javadoc on the toString method explicitly saying that it is not to be used 
for CLI output because it evolves, with a pointer to the relevant section in 
the compat guidelines
c) toStringFrozen renamed to toStringCLI or something similar, to state what 
it does rather than what it is, so that in 3.x it can be changed.



> Remove DelegationTokenIdentifier.toString() —for better logging output
> --
>
> Key: HDFS-9732
> URL: https://issues.apache.org/jira/browse/HDFS-9732
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 2.7.2
>Reporter: Steve Loughran
>Assignee: Yongjun Zhang
> Attachments: HADOOP-12752-001.patch, HDFS-9732.001.patch, 
> HDFS-9732.002.patch
>
>   Original Estimate: 0.5h
>  Remaining Estimate: 0.5h
>
> HDFS {{DelegationTokenIdentifier.toString()}} adds some diagnostic info 
> (owner, sequence number), but its superclass, 
> {{AbstractDelegationTokenIdentifier}}, contains a lot more information, 
> including token issue and expiry times.
> Because {{DelegationTokenIdentifier.toString()}} doesn't include this data, 
> information that is potentially useful for Kerberos diagnostics is lost.





[jira] [Commented] (HDFS-9732) Remove DelegationTokenIdentifier.toString() —for better logging output

2016-03-26 Thread Yongjun Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9732?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15213123#comment-15213123
 ] 

Yongjun Zhang commented on HDFS-9732:
-

Thanks [~cnauroth], I did not see your comment until now. I hope [~aw] will 
agree too. Thanks Allen.



> Remove DelegationTokenIdentifier.toString() —for better logging output
> --
>
> Key: HDFS-9732
> URL: https://issues.apache.org/jira/browse/HDFS-9732
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 2.7.2
>Reporter: Steve Loughran
>Assignee: Yongjun Zhang
> Attachments: HADOOP-12752-001.patch, HDFS-9732.001.patch, 
> HDFS-9732.002.patch
>
>   Original Estimate: 0.5h
>  Remaining Estimate: 0.5h
>
> HDFS {{DelegationTokenIdentifier.toString()}} adds some diagnostic info 
> (owner, sequence number), but its superclass, 
> {{AbstractDelegationTokenIdentifier}}, contains a lot more information, 
> including token issue and expiry times.
> Because {{DelegationTokenIdentifier.toString()}} doesn't include this data, 
> information that is potentially useful for Kerberos diagnostics is lost.





[jira] [Updated] (HDFS-9732) Remove DelegationTokenIdentifier.toString() —for better logging output

2016-03-26 Thread Yongjun Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9732?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yongjun Zhang updated HDFS-9732:

Attachment: HDFS-9732.002.patch

New patch rev 002 to fix a test failure, plus some minor cleanup.


> Remove DelegationTokenIdentifier.toString() —for better logging output
> --
>
> Key: HDFS-9732
> URL: https://issues.apache.org/jira/browse/HDFS-9732
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 2.7.2
>Reporter: Steve Loughran
>Assignee: Yongjun Zhang
> Attachments: HADOOP-12752-001.patch, HDFS-9732.001.patch, 
> HDFS-9732.002.patch
>
>   Original Estimate: 0.5h
>  Remaining Estimate: 0.5h
>
> HDFS {{DelegationTokenIdentifier.toString()}} adds some diagnostic info 
> (owner, sequence number), but its superclass, 
> {{AbstractDelegationTokenIdentifier}}, contains a lot more information, 
> including token issue and expiry times.
> Because {{DelegationTokenIdentifier.toString()}} doesn't include this data, 
> information that is potentially useful for Kerberos diagnostics is lost.





[jira] [Commented] (HDFS-7651) [ NN Bench ] Refactor nnbench as a Tool implementation.

2016-03-26 Thread Akira AJISAKA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7651?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15213106#comment-15213106
 ] 

Akira AJISAKA commented on HDFS-7651:
-

Thanks Brahma for the refactoring, and thanks Vinayakumar for pinging me. I 
think we can refactor NNBench.java further.

{code}
writer = SequenceFile.createWriter(tempFS, getConf(), filePath, Text.class,
    LongWritable.class, CompressionType.NONE);
{code}
1. {{createWriter(FileSystem, Configuration, Path, Class, Class, 
CompressionType)}} is deprecated. Would you use {{createWriter(Configuration, 
Writer.Option...)}} instead?

{code}
PrintStream res = new PrintStream(
    new FileOutputStream(new File(DEFAULT_RES_FILE_NAME), true));
try {
{code}
2. Would you use a try-with-resources statement?

{code}
  for (int i = 0; i < resultLines.length; i++) {
LOG.info(resultLines[i]);
res.println(resultLines[i]);
  }
{code}
3. A foreach statement can be used instead.

{code}
  throw new HadoopIllegalArgumentException(
  "\"Error: Unknown operation: \" + operation");
{code}
4. I don't think we need to escape {{"}}.
5. Would you add a regression test for NNBench.java? I don't want to 
introduce a bug such as MAPREDUCE-6656. I'd like to test the following two 
cases:
* create_write -> open_read -> delete
* create_write -> rename
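
Putting suggestions 1-3 together, the rewritten section could look roughly 
like this (a sketch against the names quoted above, not a tested patch):

{code}
// 1. Non-deprecated createWriter taking Writer.Option varargs.
writer = SequenceFile.createWriter(getConf(),
    SequenceFile.Writer.file(filePath),
    SequenceFile.Writer.keyClass(Text.class),
    SequenceFile.Writer.valueClass(LongWritable.class),
    SequenceFile.Writer.compression(CompressionType.NONE));

// 2. try-with-resources closes the stream even on error paths.
try (PrintStream res = new PrintStream(
    new FileOutputStream(new File(DEFAULT_RES_FILE_NAME), true))) {
  // 3. foreach instead of an indexed loop.
  for (String line : resultLines) {
    LOG.info(line);
    res.println(line);
  }
}
{code}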

> [ NN Bench ] Refactor nnbench as a Tool implementation.
> ---
>
> Key: HDFS-7651
> URL: https://issues.apache.org/jira/browse/HDFS-7651
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.6.0
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
> Attachments: HDFS-7651-001.patch, HDFS-7651-002.patch
>
>
> {code}
> public class NNBench {
>   private static final Log LOG = LogFactory.getLog(
>       "org.apache.hadoop.hdfs.NNBench");
> {code}





[jira] [Commented] (HDFS-9694) Make existing DFSClient#getFileChecksum() work for striped blocks

2016-03-26 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9694?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15213099#comment-15213099
 ] 

Hudson commented on HDFS-9694:
--

FAILURE: Integrated in Hadoop-trunk-Commit #9504 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/9504/])
Revert "HDFS-9694. Make existing DFSClient#getFileChecksum() work for (arp: rev 
a337ceb74e984991dbf976236d2e785cf5921b16)
* hadoop-hdfs-project/hadoop-hdfs-client/src/main/proto/datatransfer.proto
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockChecksumHelper.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataXceiver.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/datatransfer/DataTransferProtocol.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/datatransfer/Op.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/datatransfer/Receiver.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/util/StripedBlockUtil.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocolPB/PBHelperClient.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/datatransfer/Sender.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/FileChecksumHelper.java
* hadoop-hdfs-project/hadoop-hdfs-client/dev-support/findbugsExcludeFile.xml


> Make existing DFSClient#getFileChecksum() work for striped blocks
> -
>
> Key: HDFS-9694
> URL: https://issues.apache.org/jira/browse/HDFS-9694
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Kai Zheng
>Assignee: Kai Zheng
> Fix For: 3.0.0
>
> Attachments: HDFS-9694-v1.patch, HDFS-9694-v2.patch, 
> HDFS-9694-v3.patch, HDFS-9694-v4.patch, HDFS-9694-v5.patch, 
> HDFS-9694-v6.patch, HDFS-9694-v7.patch, HDFS-9694-v8.patch, HDFS-9694-v9.patch
>
>
> This is a sub-task of HDFS-8430 and will make the existing API 
> {{FileSystem#getFileChecksum(path)}} work for striped files. It will also 
> refactor existing code and lay out basic groundwork for subsequent tasks, 
> such as support for the new API proposed there.





[jira] [Reopened] (HDFS-9694) Make existing DFSClient#getFileChecksum() work for striped blocks

2016-03-26 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9694?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal reopened HDFS-9694:
-

Reopening - I've reverted this commit for now as it broke the trunk build.

> Make existing DFSClient#getFileChecksum() work for striped blocks
> -
>
> Key: HDFS-9694
> URL: https://issues.apache.org/jira/browse/HDFS-9694
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Kai Zheng
>Assignee: Kai Zheng
> Fix For: 3.0.0
>
> Attachments: HDFS-9694-v1.patch, HDFS-9694-v2.patch, 
> HDFS-9694-v3.patch, HDFS-9694-v4.patch, HDFS-9694-v5.patch, 
> HDFS-9694-v6.patch, HDFS-9694-v7.patch, HDFS-9694-v8.patch, HDFS-9694-v9.patch
>
>
> This is a sub-task of HDFS-8430 and will make the existing API 
> {{FileSystem#getFileChecksum(path)}} work for striped files. It will also 
> refactor existing code and lay out basic work for subsequent tasks like 
> support of the new API proposed there.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9732) Remove DelegationTokenIdentifier.toString() —for better logging output

2016-03-26 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9732?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15213081#comment-15213081
 ] 

Chris Nauroth commented on HDFS-9732:
-

I agree with the approach of using {{toString}} as the method for detailed 
debugging information that may freely evolve as we think of more helpful 
troubleshooting tips.  {{hdfs fetchdt}} can move to another dedicated method, 
with output that must adhere to the compatibility policy.

In general, I think developers expect the primary use case for {{toString}} to 
be debugging and logging of object innards.  I consider it poor practice to use 
{{toString}} for any kind of user-facing object serialization.  I've seen 
projects that go as far as forbidding use of {{toString}} for serialization.

> Remove DelegationTokenIdentifier.toString() —for better logging output
> --
>
> Key: HDFS-9732
> URL: https://issues.apache.org/jira/browse/HDFS-9732
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 2.7.2
>Reporter: Steve Loughran
>Assignee: Yongjun Zhang
> Attachments: HADOOP-12752-001.patch, HDFS-9732.001.patch
>
>   Original Estimate: 0.5h
>  Remaining Estimate: 0.5h
>
> HDFS {{DelegationTokenIdentifier.toString()}} adds some diagnostic info: 
> owner, sequence number. But its superclass, 
> {{AbstractDelegationTokenIdentifier}}, contains a lot more information, 
> including token issue and expiry times.
> Because {{DelegationTokenIdentifier.toString()}} doesn't include this data, 
> information that is potentially useful for Kerberos diagnostics is lost.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9005) Provide support for upgrade domain script

2016-03-26 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9005?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15213065#comment-15213065
 ] 

Allen Wittenauer commented on HDFS-9005:


bq. Patch generated 1 ASF License warnings. 

Why was this committed with a license warning?

> Provide support for upgrade domain script
> -
>
> Key: HDFS-9005
> URL: https://issues.apache.org/jira/browse/HDFS-9005
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Ming Ma
>Assignee: Ming Ma
> Fix For: 3.0.0, 2.8.0
>
> Attachments: HDFS-9005-2.patch, HDFS-9005-3.patch, HDFS-9005-4.patch, 
> HDFS-9005.patch
>
>
> As part of the upgrade domain feature, we need to provide a mechanism to 
> specify the upgrade domain for each datanode. One way to accomplish that is 
> to allow admins to specify an upgrade domain script that takes a DN IP or 
> hostname as input and returns the upgrade domain. The namenode will then use 
> it at run time to set {{DatanodeInfo}}'s upgrade domain string. The 
> configuration can be something like:
> {noformat}
> <property>
>   <name>dfs.namenode.upgrade.domain.script.file.name</name>
>   <value>/etc/hadoop/conf/upgrade-domain.sh</value>
> </property>
> {noformat}
> just like the topology script.
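
As an aside for readers: a minimal sketch of how such a script could be 
invoked from Java, in the spirit of the topology script. The class name, the 
one-hostname-per-invocation contract, and the first-line-of-output convention 
are assumptions for illustration, not the committed behaviour.

{code}
// Illustrative sketch only: resolve a datanode's upgrade domain by running
// an admin-supplied script with the DN hostname or IP as its argument.
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.nio.charset.StandardCharsets;

public class UpgradeDomainScriptResolver {
  private final String scriptPath; // e.g. /etc/hadoop/conf/upgrade-domain.sh

  public UpgradeDomainScriptResolver(String scriptPath) {
    this.scriptPath = scriptPath;
  }

  /** Runs the script and returns its first output line as the domain. */
  public String resolve(String dnHostOrIp)
      throws IOException, InterruptedException {
    Process p = new ProcessBuilder(scriptPath, dnHostOrIp).start();
    try (BufferedReader r = new BufferedReader(
        new InputStreamReader(p.getInputStream(), StandardCharsets.UTF_8))) {
      String domain = r.readLine();
      if (p.waitFor() != 0 || domain == null) {
        throw new IOException("upgrade domain script failed for " + dnHostOrIp);
      }
      return domain.trim();
    }
  }
}
{code}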



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Moved] (HDFS-10218) Pull QuotaException from HDFS into Common

2016-03-26 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10218?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran moved HADOOP-12965 to HDFS-10218:


Affects Version/s: (was: 3.0.0)
   3.0.0
  Key: HDFS-10218  (was: HADOOP-12965)
  Project: Hadoop HDFS  (was: Hadoop Common)

> Pull QuotaException from HDFS into Common
> -
>
> Key: HDFS-10218
> URL: https://issues.apache.org/jira/browse/HDFS-10218
> Project: Hadoop HDFS
>  Issue Type: Wish
>Affects Versions: 3.0.0
>Reporter: Plamen Jeliazkov
>Priority: Minor
>
> While QuotaException is currently HDFS-specific, there is little reason why 
> other FileSystems could not leverage it, or why an FS-agnostic client 
> couldn't attempt to handle it.
> In order to do this we should move QuotaException to the hadoop-common 
> project.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9732) Remove DelegationTokenIdentifier.toString() —for better logging output

2016-03-26 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9732?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15213041#comment-15213041
 ] 

Steve Loughran commented on HDFS-9732:
--

The thing about those log files is that they are the ones making the most use 
of this: you only look at them when things have gone wrong, and then you want 
as much detail as you can get. There's also the tradition of expanding 
toString diagnostics in subclasses; here subclasses would need to know not to 
call {{super.toString()}} and instead call some other method.


Update that, and alongside it add a public @stable method, say 
{{detailsForCLI()}}, javadoced as "do not change this output". A unique name 
and text will keep anyone from adding to it later.

I absolutely do not want to break CLI output here, and I'm glad you picked up 
on it. All we need to do now is work out the way to both improve log 
diagnostics and ensure that nobody else tries to improve the output later.
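
To make the split concrete, a minimal sketch of one class carrying both a 
freely evolving {{toString()}} and a frozen CLI method. The class is 
hypothetical, and {{detailsForCLI()}} is only the name suggested above, not a 
committed API.

{code}
// Sketch only: toString() stays free to grow diagnostics, while a dedicated
// method carries the stable output that the CLI compatibility policy covers.
public class TokenIdentifierExample {
  private final String owner;
  private final long issueDate;
  private final long maxDate;
  private final int sequenceNumber;

  public TokenIdentifierExample(String owner, long issueDate,
                                long maxDate, int sequenceNumber) {
    this.owner = owner;
    this.issueDate = issueDate;
    this.maxDate = maxDate;
    this.sequenceNumber = sequenceNumber;
  }

  /** Debug/log output: may gain fields at any time; never parse this. */
  @Override
  public String toString() {
    return "TokenIdentifierExample{owner=" + owner
        + ", issueDate=" + issueDate
        + ", maxDate=" + maxDate
        + ", sequenceNumber=" + sequenceNumber + '}';
  }

  /** Stable output for the CLI; "do not change this output". */
  public String detailsForCLI() {
    return owner + " " + sequenceNumber;
  }
}
{code}

A subclass can then extend {{toString()}} (calling {{super.toString()}} as 
usual) without any risk of breaking what the CLI prints.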

> Remove DelegationTokenIdentifier.toString() —for better logging output
> --
>
> Key: HDFS-9732
> URL: https://issues.apache.org/jira/browse/HDFS-9732
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 2.7.2
>Reporter: Steve Loughran
>Assignee: Yongjun Zhang
> Attachments: HADOOP-12752-001.patch, HDFS-9732.001.patch
>
>   Original Estimate: 0.5h
>  Remaining Estimate: 0.5h
>
> HDFS {{DelegationTokenIdentifier.toString()}} adds some diagnostic info: 
> owner, sequence number. But its superclass, 
> {{AbstractDelegationTokenIdentifier}}, contains a lot more information, 
> including token issue and expiry times.
> Because {{DelegationTokenIdentifier.toString()}} doesn't include this data, 
> information that is potentially useful for Kerberos diagnostics is lost.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9694) Make existing DFSClient#getFileChecksum() work for striped blocks

2016-03-26 Thread Kai Sasaki (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9694?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15213029#comment-15213029
 ] 

Kai Sasaki commented on HDFS-9694:
--

[~umamaheswararao] Did you commit the latest patch? We cannot find 
{{StripedBlockInfo}}, and the trunk build fails.
{code}
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-compiler-plugin:3.1:compile (default-compile) on 
project hadoop-hdfs-client: Compilation failure: Compilation failure:
[ERROR] 
/Users/sasakikai/dev/hadoop/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/FileChecksumHelper.java:[31,39]
 cannot find symbol
[ERROR] symbol:   class StripedBlockInfo
[ERROR] location: package org.apache.hadoop.hdfs.protocol
[ERROR] 
/Users/sasakikai/dev/hadoop/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/datatransfer/Sender.java:[31,39]
 cannot find symbol
{code}

https://github.com/apache/hadoop/commit/e5ff0ea7ba087984262f1f27200ae5bb40d9b838


> Make existing DFSClient#getFileChecksum() work for striped blocks
> -
>
> Key: HDFS-9694
> URL: https://issues.apache.org/jira/browse/HDFS-9694
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Kai Zheng
>Assignee: Kai Zheng
> Fix For: 3.0.0
>
> Attachments: HDFS-9694-v1.patch, HDFS-9694-v2.patch, 
> HDFS-9694-v3.patch, HDFS-9694-v4.patch, HDFS-9694-v5.patch, 
> HDFS-9694-v6.patch, HDFS-9694-v7.patch, HDFS-9694-v8.patch, HDFS-9694-v9.patch
>
>
> This is a sub-task of HDFS-8430 and will make the existing API 
> {{FileSystem#getFileChecksum(path)}} work for striped files. It will also 
> refactor existing code and lay out basic work for subsequent tasks like 
> support of the new API proposed there.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-10217) show "blockScheduled count" in datanodes table.

2016-03-26 Thread Brahma Reddy Battula (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10217?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15212951#comment-15212951
 ] 

Brahma Reddy Battula commented on HDFS-10217:
-

bq. Patch generated 1 ASF License warnings.

Not related to this patch. This was introduced by HDFS-9005.

> show "blockScheduled count" in datanodes table.
> ---
>
> Key: HDFS-10217
> URL: https://issues.apache.org/jira/browse/HDFS-10217
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
> Attachments: HDFS-10217.patch
>
>
> It will be more useful for debugging purposes: users can see how many 
> blocks got scheduled for a DN.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-10217) show "blockScheduled count" in datanodes table.

2016-03-26 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10217?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15212949#comment-15212949
 ] 

Hadoop QA commented on HDFS-10217:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 15s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 0m 18s 
{color} | {color:red} Patch generated 1 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 0m 44s {color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:fbe3e86 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12795509/HDFS-10217.patch |
| JIRA Issue | HDFS-10217 |
| Optional Tests |  asflicense  |
| uname | Linux 7546561d723e 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / e5ff0ea |
| asflicense | 
https://builds.apache.org/job/PreCommit-HDFS-Build/14949/artifact/patchprocess/patch-asflicense-problems.txt
 |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/14949/console |
| Powered by | Apache Yetus 0.2.0   http://yetus.apache.org |


This message was automatically generated.



> show "blockScheduled count" in datanodes table.
> ---
>
> Key: HDFS-10217
> URL: https://issues.apache.org/jira/browse/HDFS-10217
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
> Attachments: HDFS-10217.patch
>
>
> It will be more useful for debugging purposes: users can see how many 
> blocks got scheduled for a DN.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-10217) show "blockScheduled count" in datanodes table.

2016-03-26 Thread Brahma Reddy Battula (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10217?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15212947#comment-15212947
 ] 

Brahma Reddy Battula commented on HDFS-10217:
-

Uploaded the patch. Kindly review.

> show "blockScheduled count" in datanodes table.
> ---
>
> Key: HDFS-10217
> URL: https://issues.apache.org/jira/browse/HDFS-10217
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
> Attachments: HDFS-10217.patch
>
>
> It will be more useful for debugging purposes: users can see how many 
> blocks got scheduled for a DN.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-10217) show "blockScheduled count" in datanodes table.

2016-03-26 Thread Brahma Reddy Battula (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10217?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brahma Reddy Battula updated HDFS-10217:

Status: Patch Available  (was: Open)

> show "blockScheduled count" in datanodes table.
> ---
>
> Key: HDFS-10217
> URL: https://issues.apache.org/jira/browse/HDFS-10217
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
> Attachments: HDFS-10217.patch
>
>
> It will be more useful for debugging purposes: users can see how many 
> blocks got scheduled for a DN.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-10217) show "blockScheduled count" in datanodes table.

2016-03-26 Thread Brahma Reddy Battula (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10217?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brahma Reddy Battula updated HDFS-10217:

Attachment: HDFS-10217.patch

> show "blockScheduled count" in datanodes table.
> ---
>
> Key: HDFS-10217
> URL: https://issues.apache.org/jira/browse/HDFS-10217
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
> Attachments: HDFS-10217.patch
>
>
> It will be more useful for debugging purposes: users can see how many 
> blocks got scheduled for a DN.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-10217) show "blockScheduled count" in datanodes table.

2016-03-26 Thread Brahma Reddy Battula (JIRA)
Brahma Reddy Battula created HDFS-10217:
---

 Summary: show "blockScheduled count" in datanodes table.
 Key: HDFS-10217
 URL: https://issues.apache.org/jira/browse/HDFS-10217
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Brahma Reddy Battula
Assignee: Brahma Reddy Battula


It will be more useful for debugging purposes: users can see how many blocks 
got scheduled for a DN.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-5177) blocksScheduled count should be decremented for abandoned blocks

2016-03-26 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5177?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15212930#comment-15212930
 ] 

Hadoop QA commented on HDFS-5177:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 15s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 3m 4s 
{color} | {color:red} root in trunk failed. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red} 0m 37s 
{color} | {color:red} hadoop-hdfs in trunk failed with JDK v1.8.0_74. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red} 0m 27s 
{color} | {color:red} hadoop-hdfs in trunk failed with JDK v1.7.0_95. {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
21s {color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red} 0m 28s 
{color} | {color:red} hadoop-hdfs in trunk failed. {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
14s {color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 24s 
{color} | {color:red} hadoop-hdfs in trunk failed. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 14s 
{color} | {color:green} trunk passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 1s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 0m 24s 
{color} | {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red} 0m 31s 
{color} | {color:red} hadoop-hdfs in the patch failed with JDK v1.8.0_74. 
{color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 0m 31s {color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.8.0_74. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red} 0m 30s 
{color} | {color:red} hadoop-hdfs in the patch failed with JDK v1.7.0_95. 
{color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 0m 30s {color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.7.0_95. {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
19s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red} 0m 27s 
{color} | {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
12s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 28s 
{color} | {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 22s 
{color} | {color:green} the patch passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 8s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 0m 41s {color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.8.0_74. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 0m 26s {color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.7.0_95. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 0m 18s 
{color} | {color:red} Patch generated 1 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 18m 55s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:fbe3e86 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12795507/HDFS-5177-04.patch |
| JIRA Issue | HDFS-5177 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux c21667e765dd 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / e5ff0ea |
| 

[jira] [Updated] (HDFS-5177) blocksScheduled count should be decremented for abandoned blocks

2016-03-26 Thread Vinayakumar B (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5177?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinayakumar B updated HDFS-5177:

Labels:   (was: BB2015-05-TBR)

> blocksScheduled  count should be decremented for abandoned blocks
> -
>
> Key: HDFS-5177
> URL: https://issues.apache.org/jira/browse/HDFS-5177
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 3.0.0, 2.1.0-beta
>Reporter: Vinayakumar B
>Assignee: Vinayakumar B
> Attachments: HDFS-5177-04.patch, HDFS-5177.patch, HDFS-5177.patch, 
> HDFS-5177.patch
>
>
> DatanodeDescriptor#incBlocksScheduled() is called for all datanodes of the 
> block on each allocation, but the same count should be decremented for 
> abandoned blocks.
> When one of the datanodes is down and is allocated for the block along with 
> other live datanodes, the block will be abandoned, yet the scheduled count 
> still marks the other live datanodes as loaded, even though they may not be.
> In any case this scheduled count is rolled every 20 minutes.
> The problem appears when files are created at a high rate: due to the 
> inflated scheduled count, the local datanode may be skipped for writes, and 
> on small clusters writes can sometimes fail.
> So we need to decrement the unnecessary count on the abandon-block call.
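
A minimal sketch of the accounting the description asks for; the counter 
class is hypothetical and only mirrors the inc/dec pairing named above, not 
the actual patch.

{code}
// Sketch only: per-datanode scheduled-block counter, incremented on block
// allocation and decremented again when the client abandons the block.
import java.util.concurrent.atomic.AtomicInteger;

public class ScheduledBlockCounter {
  private final AtomicInteger blocksScheduled = new AtomicInteger();

  /** Called for each datanode chosen when a new block is allocated. */
  public void incBlocksScheduled() {
    blocksScheduled.incrementAndGet();
  }

  /** Called for each datanode of a block the client abandons. */
  public void decBlocksScheduled() {
    // Guard against going negative if an inc/dec pairing is ever missed.
    blocksScheduled.updateAndGet(n -> Math.max(0, n - 1));
  }

  /** Read by the block placement policy as a load hint. */
  public int getBlocksScheduled() {
    return blocksScheduled.get();
  }
}
{code}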



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-5177) blocksScheduled count should be decremented for abandoned blocks

2016-03-26 Thread Vinayakumar B (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5177?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinayakumar B updated HDFS-5177:

Attachment: HDFS-5177-04.patch

Updated the patch.
No protocol change in the latest patch (previous patches changed the 
ClientProtocol).

Please review.


> blocksScheduled  count should be decremented for abandoned blocks
> -
>
> Key: HDFS-5177
> URL: https://issues.apache.org/jira/browse/HDFS-5177
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 3.0.0, 2.1.0-beta
>Reporter: Vinayakumar B
>Assignee: Vinayakumar B
> Attachments: HDFS-5177-04.patch, HDFS-5177.patch, HDFS-5177.patch, 
> HDFS-5177.patch
>
>
> DatanodeDescriptor#incBlocksScheduled() is called for all datanodes of the 
> block on each allocation, but the same count should be decremented for 
> abandoned blocks.
> When one of the datanodes is down and is allocated for the block along with 
> other live datanodes, the block will be abandoned, yet the scheduled count 
> still marks the other live datanodes as loaded, even though they may not be.
> In any case this scheduled count is rolled every 20 minutes.
> The problem appears when files are created at a high rate: due to the 
> inflated scheduled count, the local datanode may be skipped for writes, and 
> on small clusters writes can sometimes fail.
> So we need to decrement the unnecessary count on the abandon-block call.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9694) Make existing DFSClient#getFileChecksum() work for striped blocks

2016-03-26 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9694?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15212876#comment-15212876
 ] 

Hudson commented on HDFS-9694:
--

FAILURE: Integrated in Hadoop-trunk-Commit #9503 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/9503/])
HDFS-9694. Make existing DFSClient#getFileChecksum() work for striped 
(uma.gangumalla: rev e5ff0ea7ba087984262f1f27200ae5bb40d9b838)
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/FileChecksumHelper.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataXceiver.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/datatransfer/Op.java
* hadoop-hdfs-project/hadoop-hdfs-client/dev-support/findbugsExcludeFile.xml
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/datatransfer/Receiver.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/util/StripedBlockUtil.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocolPB/PBHelperClient.java
* hadoop-hdfs-project/hadoop-hdfs-client/src/main/proto/datatransfer.proto
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockChecksumHelper.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/datatransfer/Sender.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/datatransfer/DataTransferProtocol.java


> Make existing DFSClient#getFileChecksum() work for striped blocks
> -
>
> Key: HDFS-9694
> URL: https://issues.apache.org/jira/browse/HDFS-9694
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Kai Zheng
>Assignee: Kai Zheng
> Fix For: 3.0.0
>
> Attachments: HDFS-9694-v1.patch, HDFS-9694-v2.patch, 
> HDFS-9694-v3.patch, HDFS-9694-v4.patch, HDFS-9694-v5.patch, 
> HDFS-9694-v6.patch, HDFS-9694-v7.patch, HDFS-9694-v8.patch, HDFS-9694-v9.patch
>
>
> This is a sub-task of HDFS-8430 and will make the existing API 
> {{FileSystem#getFileChecksum(path)}} work for striped files. It will also 
> refactor existing code and lay out basic work for subsequent tasks like 
> support of the new API proposed there.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9694) Make existing DFSClient#getFileChecksum() work for striped blocks

2016-03-26 Thread Uma Maheswara Rao G (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9694?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uma Maheswara Rao G updated HDFS-9694:
--
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 3.0.0
 Release Note: Makes the getFileChecksum API work with striped-layout EC 
files. Checksum computation is done at the block level in a distributed 
fashion. The current API does not support comparing the checksum generated 
for a normal file with the checksum generated for the same file in striped 
layout.
   Status: Resolved  (was: Patch Available)

I have just committed this patch to trunk.
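
For readers following along, a short example of the public call this change 
extends to striped files; the path is illustrative.

{code}
// Fetching a file checksum through the public FileSystem API. With this
// change the same call also works for striped-layout (EC) files.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileChecksum;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class GetChecksumExample {
  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.get(new Configuration());
    // /ec/data is an illustrative path to a striped-layout file.
    FileChecksum checksum = fs.getFileChecksum(new Path("/ec/data"));
    System.out.println(checksum == null ? "no checksum" : checksum.toString());
  }
}
{code}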

> Make existing DFSClient#getFileChecksum() work for striped blocks
> -
>
> Key: HDFS-9694
> URL: https://issues.apache.org/jira/browse/HDFS-9694
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Kai Zheng
>Assignee: Kai Zheng
> Fix For: 3.0.0
>
> Attachments: HDFS-9694-v1.patch, HDFS-9694-v2.patch, 
> HDFS-9694-v3.patch, HDFS-9694-v4.patch, HDFS-9694-v5.patch, 
> HDFS-9694-v6.patch, HDFS-9694-v7.patch, HDFS-9694-v8.patch, HDFS-9694-v9.patch
>
>
> This is a sub-task of HDFS-8430 and will make the existing API 
> {{FileSystem#getFileChecksum(path)}} work for striped files. It will also 
> refactor existing code and lay out basic work for subsequent tasks like 
> support of the new API proposed there.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9579) Provide bytes-read-by-network-distance metrics at FileSystem.Statistics level

2016-03-26 Thread Brahma Reddy Battula (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9579?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15212860#comment-15212860
 ] 

Brahma Reddy Battula commented on HDFS-9579:


After this went in, I can see there is one extra log line for each client 
operation: "Adding a new node:"

{noformat}BLR106554:/home/Trunk/hadoop/bin # ./hdfs dfs -put hadoop /test2
16/03/26 15:07:22 WARN util.NativeCodeLoader: Unable to load native-hadoop 
library for your platform... using builtin-java classes where applicable
16/03/26 15:07:23 INFO net.NetworkTopology: Adding a new node: 
/default-rack/BLR106554 {noformat}

If ScriptBasedMapping is used, the topology script should be configured and 
placed on every machine where HDFS clients are created in order to get 
correct values. It will still work without that, but the statistics will not 
be correct, since every such client will be resolved to DEFAULT_RACK.

> Provide bytes-read-by-network-distance metrics at FileSystem.Statistics level
> -
>
> Key: HDFS-9579
> URL: https://issues.apache.org/jira/browse/HDFS-9579
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Ming Ma
>Assignee: Ming Ma
> Fix For: 3.0.0, 2.9.0
>
> Attachments: HDFS-9579-10.patch, HDFS-9579-2.patch, 
> HDFS-9579-3.patch, HDFS-9579-4.patch, HDFS-9579-5.patch, HDFS-9579-6.patch, 
> HDFS-9579-7.patch, HDFS-9579-8.patch, HDFS-9579-9.patch, 
> HDFS-9579-branch-2.patch, HDFS-9579.patch, MR job counters.png
>
>
> For cross DC distcp or other applications, it becomes useful to have insight 
> as to the traffic volume for each network distance to distinguish cross-DC 
> traffic, local-DC-remote-rack, etc.
> FileSystem's existing {{bytesRead}} metric tracks all the bytes read. To 
> provide additional metrics for each network distance, we can add additional 
> metrics at the FileSystem level and have {{DFSInputStream}} update the value 
> based on the network distance between the client and the datanode.
> {{DFSClient}} will resolve the client machine's network location as part of 
> its initialization. It doesn't need to resolve the datanode's network 
> location for each read, as {{DatanodeInfo}} already has the info.
> There are existing HDFS-specific metrics such as {{ReadStatistics}} and 
> {{DFSHedgedReadMetrics}}, but these metrics are only accessible via 
> {{DFSClient}} or {{DFSInputStream}}, not something that application 
> frameworks such as MR and Tez can get to. That is the benefit of storing 
> these new metrics in FileSystem.Statistics.
> This jira only includes metrics generation by HDFS. The consumption of 
> these metrics in MR and Tez will be tracked by separate jiras.
> We can add similar metrics for the HDFS write scenario later if necessary.
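
A hedged sketch of the kind of per-distance accounting described above; the 
bucket layout and method names are illustrative, not the committed 
FileSystem.Statistics API.

{code}
// Sketch only: bytes read bucketed by network distance between client and
// datanode (0 = same node, 2 = same rack, 4 = off-rack, 6 = off-DC, ...).
import java.util.concurrent.atomic.AtomicLongArray;

public class BytesReadByDistance {
  // One bucket per distance step of two, capped at the last bucket.
  private final AtomicLongArray bytesByBucket = new AtomicLongArray(4);

  /** Called from the read path with the resolved client-DN distance. */
  public void record(int networkDistance, long bytes) {
    int bucket = Math.min(networkDistance / 2, bytesByBucket.length() - 1);
    bytesByBucket.addAndGet(bucket, bytes);
  }

  public long bytesAtBucket(int bucket) {
    return bytesByBucket.get(bucket);
  }
}
{code}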



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9694) Make existing DFSClient#getFileChecksum() work for striped blocks

2016-03-26 Thread Uma Maheswara Rao G (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9694?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15212855#comment-15212855
 ] 

Uma Maheswara Rao G commented on HDFS-9694:
---

Overall latest patch looking good to me. +1
I will go ahead and push this patch shortly. Thanks Kai for your hard work on 
this.

> Make existing DFSClient#getFileChecksum() work for striped blocks
> -
>
> Key: HDFS-9694
> URL: https://issues.apache.org/jira/browse/HDFS-9694
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Kai Zheng
>Assignee: Kai Zheng
> Attachments: HDFS-9694-v1.patch, HDFS-9694-v2.patch, 
> HDFS-9694-v3.patch, HDFS-9694-v4.patch, HDFS-9694-v5.patch, 
> HDFS-9694-v6.patch, HDFS-9694-v7.patch, HDFS-9694-v8.patch, HDFS-9694-v9.patch
>
>
> This is a sub-task of HDFS-8430 and will make the existing API 
> {{FileSystem#getFileChecksum(path)}} work for striped files. It will also 
> refactor existing code and lay out basic work for subsequent tasks like 
> support of the new API proposed there.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)