[jira] [Commented] (HDFS-8101) DFSClient use of non-constant DFSConfigKeys pulls in WebHDFS classes at runtime

2015-04-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8101?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14487656#comment-14487656
 ] 

Hudson commented on HDFS-8101:
--

FAILURE: Integrated in Hadoop-trunk-Commit #7546 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/7546/])
HDFS-8101. DFSClient use of non-constant DFSConfigKeys pulls in WebHDFS classes 
at runtime. Contributed by Sean Busbey. (atm: rev 
3fe61e0bb0d025a6acbb754027f73f3084b2f4d1)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSConfigKeys.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


 DFSClient use of non-constant DFSConfigKeys pulls in WebHDFS classes at 
 runtime
 ---

 Key: HDFS-8101
 URL: https://issues.apache.org/jira/browse/HDFS-8101
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: hdfs-client
Affects Versions: 2.7.0
Reporter: Sean Busbey
Assignee: Sean Busbey
Priority: Minor
 Fix For: 2.8.0

 Attachments: HDFS-8101.1.patch.txt


 Previously, all references to DFSConfigKeys in DFSClient were compile-time 
 constants, which meant that normal users of DFSClient wouldn't resolve 
 DFSConfigKeys at run time. As of HDFS-7718, DFSClient has a reference to a 
 member of DFSConfigKeys that isn't a compile-time constant 
 (DFS_CLIENT_KEY_PROVIDER_CACHE_EXPIRY_DEFAULT).
 Since the class must be resolved now, this particular member
 {code}
 public static final String DFS_WEBHDFS_AUTHENTICATION_FILTER_DEFAULT =
     AuthFilter.class.getName();
 {code}
 means that javax.servlet.Filter needs to be on the classpath.
 javax-servlet-api is one of the properly listed dependencies for HDFS; 
 however, if we replace {{AuthFilter.class.getName()}} with the equivalent 
 String literal, then downstream folks can avoid including it while 
 maintaining compatibility.
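The reason the String-literal replacement works is Java's treatment of constants: a {{static final String}} initialized from a literal is a compile-time constant and gets inlined into referencing classes, so the defining class need not be resolved at run time. A minimal self-contained sketch of the distinction, using hypothetical classes rather than the real HDFS ones:

```java
// Hypothetical illustration; Keys stands in for DFSConfigKeys.
class Keys {
    // A String literal initializer is a compile-time constant (JLS 15.28);
    // javac inlines it, so readers of LITERAL_KEY never resolve Keys at run time.
    public static final String LITERAL_KEY = "org.example.AuthFilter";

    // A computed initializer is NOT a constant; reading it forces Keys (and,
    // transitively, anything its initializers reference) to be resolved.
    public static final String COMPUTED_KEY = Keys.class.getName();
}

public class ConstantDemo {
    public static void main(String[] args) {
        System.out.println(Keys.LITERAL_KEY);
    }
}
```

With {{LITERAL_KEY}}, javac copies the literal into the caller's constant pool; with {{COMPUTED_KEY}}, the reference triggers resolution and initialization of {{Keys}}.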



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8090) Erasure Coding: Add RPC to client-namenode to list all ECSchemas loaded in Namenode.

2015-04-09 Thread Vinayakumar B (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8090?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14487167#comment-14487167
 ] 

Vinayakumar B commented on HDFS-8090:
-

bq. FSNamesystem#getECSchemas - do we need audit log here? Actually I got 
confused after going through the existing logs, it looks like audit log has 
been done for getter calls also. For example, getEZForPath, getfileinfo, 
listStatus etc.
IMO an audit log is not required for this call, since it is a system-wide 
call, not particular to any path; all the calls referred to above have a path.
Similar to this call is {{getStoragePolicies()}}, which also doesn't have an 
audit log.

 Erasure Coding: Add RPC to client-namenode to list all ECSchemas loaded in 
 Namenode.
 

 Key: HDFS-8090
 URL: https://issues.apache.org/jira/browse/HDFS-8090
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Vinayakumar B
Assignee: Vinayakumar B
 Attachments: HDFS-8090-01.patch, HDFS-8090-02.patch


 ECSchemas will be configured and loaded only at the Namenode to avoid 
 conflicts.
 The client has to specify one of these schemas during creation of ecZones.
 So, add an RPC to ClientProtocol to get all ECSchemas loaded at the namenode, 
 so that the client can choose any one of these.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7240) Object store in HDFS

2015-04-09 Thread Edward Bortnikov (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7240?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14487635#comment-14487635
 ] 

Edward Bortnikov commented on HDFS-7240:


Great stuff. Block- and object-level storage scales much better from the 
metadata perspective (flat space). Could play really well with the 
block-management-as-a-service proposal (HDFS-5477) that splits the namenode 
into the FS manager and the block manager services, and scales the latter 
horizontally. 

 Object store in HDFS
 

 Key: HDFS-7240
 URL: https://issues.apache.org/jira/browse/HDFS-7240
 Project: Hadoop HDFS
  Issue Type: New Feature
Reporter: Jitendra Nath Pandey
Assignee: Jitendra Nath Pandey
 Attachments: Ozone-architecture-v1.pdf


 This jira proposes to add object store capabilities into HDFS. 
 As part of the federation work (HDFS-1052) we separated block storage as a 
 generic storage layer. Using the Block Pool abstraction, new kinds of 
 namespaces can be built on top of the storage layer i.e. datanodes.
 In this jira I will explore building an object store using the datanode 
 storage, but independent of namespace metadata.
 I will soon update with a detailed design document.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8096) DatanodeMetrics#blocksReplicated will get incremented early and even for failed transfers

2015-04-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8096?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14487179#comment-14487179
 ] 

Hudson commented on HDFS-8096:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk #2090 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/2090/])
HDFS-8096. DatanodeMetrics#blocksReplicated will get incremented early and even 
for failed transfers (Contributed by Vinayakumar B) (vinayakumarb: rev 
9d8952f97f638ede27e4336b9601507d7bb1de7b)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDataNodeMetrics.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/metrics/DataNodeMetrics.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BPOfferService.java


 DatanodeMetrics#blocksReplicated will get incremented early and even for 
 failed transfers
 -

 Key: HDFS-8096
 URL: https://issues.apache.org/jira/browse/HDFS-8096
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: datanode
Reporter: Vinayakumar B
Assignee: Vinayakumar B
 Fix For: 2.8.0

 Attachments: HDFS-8096-01.patch


 {code}
 case DatanodeProtocol.DNA_TRANSFER:
   // Send a copy of a block to another datanode
   dn.transferBlocks(bcmd.getBlockPoolId(), bcmd.getBlocks(),
       bcmd.getTargets(), bcmd.getTargetStorageTypes());
   dn.metrics.incrBlocksReplicated(bcmd.getBlocks().length);
 {code}
 In the above code, which handles replication transfers from the namenode, 
 {{DatanodeMetrics#blocksReplicated}} is incremented too early, since the 
 transfer happens in the background, and even failed transfers are counted.
 The correct place to increment this counter is {{DataTransfer#run()}}.
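As a sketch of the proposed fix, the increment can live inside the background transfer task and fire only on success. The names below are illustrative stand-ins, not the real HDFS classes:

```java
import java.util.concurrent.atomic.AtomicLong;

public class TransferMetricsDemo {
    // Stand-in for DatanodeMetrics#blocksReplicated.
    static final AtomicLong blocksReplicated = new AtomicLong();

    // Hypothetical stand-in for DataTransfer#run(): increment only after the
    // transfer actually succeeds, not when the command is merely scheduled.
    static void runTransfer(boolean succeeds) {
        try {
            if (!succeeds) {
                throw new RuntimeException("transfer failed");
            }
            blocksReplicated.incrementAndGet(); // counted on success only
        } catch (RuntimeException e) {
            // failed transfers are not counted
        }
    }

    public static void main(String[] args) {
        runTransfer(true);
        runTransfer(false);
        System.out.println(blocksReplicated.get());
    }
}
```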



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8115) Make PermissionStatusFormat public

2015-04-09 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8115?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14487981#comment-14487981
 ] 

Hadoop QA commented on HDFS-8115:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12724291/HDFS-8115.1.patch
  against trunk revision 30acb73.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:red}-1 javac{color:red}.  The patch appears to cause the build to 
fail.

Console output: 
https://builds.apache.org/job/PreCommit-HDFS-Build/10235//console

This message is automatically generated.

 Make PermissionStatusFormat public
 --

 Key: HDFS-8115
 URL: https://issues.apache.org/jira/browse/HDFS-8115
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Arun Suresh
Assignee: Arun Suresh
Priority: Minor
 Attachments: HDFS-8115.1.patch


 Implementations of {{INodeAttributeProvider}} are required to provide an 
 implementation of the {{getPermissionLong()}} method. Unfortunately, the long 
 permission format is an encoding of the user, group and mode, with each field 
 converted to an int using {{SerialNumberManager}}, which is package protected.
 Thus it would be nice to make the {{PermissionStatusFormat}} enum public (and 
 also make the {{toLong()}} static method public) so that user-specified 
 implementations of {{INodeAttributeProvider}} may use it.
 This would also make it more consistent with {{AclStatusFormat}}, which I 
 guess has been made public for the same reason.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8091) ACLStatus and XAttributes not properly presented to INodeAttributesProvider before returning to client

2015-04-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8091?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14488274#comment-14488274
 ] 

Hudson commented on HDFS-8091:
--

FAILURE: Integrated in Hadoop-trunk-Commit #7554 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/7554/])
Fix CHANGES.txt for HDFS-8091 (Arun Suresh: rev 
a813db0b1bed36dc846705640db9a8f9e2cc33de)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


 ACLStatus and XAttributes not properly presented to INodeAttributesProvider 
 before returning to client 
 ---

 Key: HDFS-8091
 URL: https://issues.apache.org/jira/browse/HDFS-8091
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: HDFS
Reporter: Arun Suresh
Assignee: Arun Suresh
 Fix For: 2.8.0

 Attachments: HDFS-8091.1.patch


 HDFS-6826 introduced the concept of an {{INodeAttributesProvider}}, an 
 implementation of which can be plugged in so that the attributes (user / 
 group / permission / acls and xattrs) returned for an HDFS path can be 
 altered/enhanced by user-specified code before being returned to the client.
 Unfortunately, it looks like the AclStatus and XAttributes are not properly 
 presented to the user-specified {{INodeAttributesProvider}} before being 
 returned to the client.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8117) More accurate verification in SimulatedFSDataset: replace DEFAULT_DATABYTE with patterned data

2015-04-09 Thread Zhe Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8117?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhe Zhang updated HDFS-8117:

Status: Patch Available  (was: Open)

 More accurate verification in SimulatedFSDataset: replace DEFAULT_DATABYTE 
 with patterned data
 --

 Key: HDFS-8117
 URL: https://issues.apache.org/jira/browse/HDFS-8117
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Zhe Zhang
Assignee: Zhe Zhang

 Currently {{SimulatedFSDataset}} uses a single {{DEFAULT_DATABYTE}} to 
 simulate _all_ block content. This is not accurate because the return of this 
 byte just means the read request has hit an arbitrary position in an 
 arbitrary simulated block.
 This JIRA aims to improve it with a more accurate verification. When position 
 {{p}} of a simulated block {{b}} is accessed, the returned byte is {{b}}'s 
 block ID plus {{p}}, modulo the max value of a byte.
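The proposed pattern can be sketched as a pure function (a hypothetical helper, not the actual patch):

```java
public class PatternedData {
    // Sketch of the proposed rule: the byte at position p of simulated block b
    // is (blockId + p) modulo the max value of a byte.
    static byte simulatedByte(long blockId, long p) {
        return (byte) ((blockId + p) % Byte.MAX_VALUE);
    }

    public static void main(String[] args) {
        // Unlike a single DEFAULT_DATABYTE, two different blocks now disagree
        // at the same offset, so a read hitting the wrong block is detectable.
        System.out.println(simulatedByte(1, 0) != simulatedByte(2, 0));
    }
}
```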



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8096) DatanodeMetrics#blocksReplicated will get incremented early and even for failed transfers

2015-04-09 Thread Uma Maheswara Rao G (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8096?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14486802#comment-14486802
 ] 

Uma Maheswara Rao G commented on HDFS-8096:
---

Thanks Vinay for the patch. Makes sense to me to increment the metric count 
only when the real transfer has happened. I agree that having a test for 
intermediate failures would be difficult. Thanks for having one positive test here.
+1


 DatanodeMetrics#blocksReplicated will get incremented early and even for 
 failed transfers
 -

 Key: HDFS-8096
 URL: https://issues.apache.org/jira/browse/HDFS-8096
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: datanode
Reporter: Vinayakumar B
Assignee: Vinayakumar B
 Attachments: HDFS-8096-01.patch


 {code}
 case DatanodeProtocol.DNA_TRANSFER:
   // Send a copy of a block to another datanode
   dn.transferBlocks(bcmd.getBlockPoolId(), bcmd.getBlocks(),
       bcmd.getTargets(), bcmd.getTargetStorageTypes());
   dn.metrics.incrBlocksReplicated(bcmd.getBlocks().length);
 {code}
 In the above code, which handles replication transfers from the namenode, 
 {{DatanodeMetrics#blocksReplicated}} is incremented too early, since the 
 transfer happens in the background, and even failed transfers are counted.
 The correct place to increment this counter is {{DataTransfer#run()}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8117) More accurate verification in SimulatedFSDataset: replace DEFAULT_DATABYTE with patterned data

2015-04-09 Thread Zhe Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8117?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhe Zhang updated HDFS-8117:

Attachment: HDFS-8117.000.patch

 More accurate verification in SimulatedFSDataset: replace DEFAULT_DATABYTE 
 with patterned data
 --

 Key: HDFS-8117
 URL: https://issues.apache.org/jira/browse/HDFS-8117
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Zhe Zhang
Assignee: Zhe Zhang
 Attachments: HDFS-8117.000.patch


 Currently {{SimulatedFSDataset}} uses a single {{DEFAULT_DATABYTE}} to 
 simulate _all_ block content. This is not accurate because the return of this 
 byte just means the read request has hit an arbitrary position in an 
 arbitrary simulated block.
 This JIRA aims to improve it with a more accurate verification. When position 
 {{p}} of a simulated block {{b}} is accessed, the returned byte is {{b}}'s 
 block ID plus {{p}}, modulo the max value of a byte.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8113) NullPointerException in BlockInfoContiguous causes block report failure

2015-04-09 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8113?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14487492#comment-14487492
 ] 

Hadoop QA commented on HDFS-8113:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12724205/HDFS-8113.patch
  against trunk revision 6495940.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-hdfs-project/hadoop-hdfs.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/10230//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HDFS-Build/10230//console

This message is automatically generated.

 NullPointerException in BlockInfoContiguous causes block report failure
 ---

 Key: HDFS-8113
 URL: https://issues.apache.org/jira/browse/HDFS-8113
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 2.6.0, 2.7.0
Reporter: Chengbing Liu
Assignee: Chengbing Liu
 Attachments: HDFS-8113.patch


 The following copy constructor can throw a NullPointerException if {{bc}} is 
 null.
 {code}
 protected BlockInfoContiguous(BlockInfoContiguous from) {
   this(from, from.bc.getBlockReplication());
   this.bc = from.bc;
 }
 {code}
 We have observed that some DataNodes keep failing to do block reports with 
 the NameNode. The stack trace is as follows. Though we are not using the 
 latest version, the problem still exists.
 {quote}
 2015-03-08 19:28:13,442 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: 
 RemoteException in offerService
 org.apache.hadoop.ipc.RemoteException(java.lang.NullPointerException): 
 java.lang.NullPointerException
 at org.apache.hadoop.hdfs.server.blockmanagement.BlockInfo.<init>(BlockInfo.java:80)
 at 
 org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$BlockToMarkCorrupt.<init>(BlockManager.java:1696)
 at 
 org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.checkReplicaCorrupt(BlockManager.java:2185)
 at 
 org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.processReportedBlock(BlockManager.java:2047)
 at 
 org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.reportDiff(BlockManager.java:1950)
 at 
 org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.processReport(BlockManager.java:1823)
 at 
 org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.processReport(BlockManager.java:1750)
 at 
 org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.blockReport(NameNodeRpcServer.java:1069)
 at 
 org.apache.hadoop.hdfs.protocolPB.DatanodeProtocolServerSideTranslatorPB.blockReport(DatanodeProtocolServerSideTranslatorPB.java:152)
 at 
 org.apache.hadoop.hdfs.protocol.proto.DatanodeProtocolProtos$DatanodeProtocolService$2.callBlockingMethod(DatanodeProtocolProtos.java:26382)
 at 
 org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:587)
 at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1026)
 at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2013)
 at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2009)
 at java.security.AccessController.doPrivileged(Native Method)
 at javax.security.auth.Subject.doAs(Subject.java:415)
 at 
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1623)
 at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)
 {quote}
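One possible shape of a fix is a null check in the copy constructor, so a block with no block collection no longer trips the NPE. A minimal self-contained stand-in (illustrative names, not the real HDFS classes):

```java
public class CopyCtorDemo {
    // Hypothetical stand-ins for BlockCollection / BlockInfoContiguous.
    static class BlockCollection {
        int getBlockReplication() { return 3; }
    }

    static class BlockInfo {
        BlockCollection bc;
        int replication;

        BlockInfo(BlockCollection bc, int replication) {
            this.bc = bc;
            this.replication = replication;
        }

        // Defensive copy constructor: tolerate a null bc instead of
        // dereferencing it unconditionally, which is what throws the NPE.
        BlockInfo(BlockInfo from) {
            this(from.bc, from.bc == null ? 0 : from.bc.getBlockReplication());
        }
    }

    public static void main(String[] args) {
        BlockInfo orphan = new BlockInfo((BlockCollection) null, 0);
        BlockInfo copy = new BlockInfo(orphan); // no NPE with the null check
        System.out.println(copy.bc == null);
    }
}
```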



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7933) fsck should also report decommissioning replicas.

2015-04-09 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7933?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HDFS-7933:

Target Version/s: 2.8.0

+1 for patch v03 pending Jenkins.  Thanks for incorporating the feedback.  I'm 
targeting this to 2.8.0 since the release process has begun for 2.7.0.

 fsck should also report decommissioning replicas. 
 --

 Key: HDFS-7933
 URL: https://issues.apache.org/jira/browse/HDFS-7933
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Reporter: Jitendra Nath Pandey
Assignee: Xiaoyu Yao
 Attachments: HDFS-7933.00.patch, HDFS-7933.01.patch, 
 HDFS-7933.02.patch, HDFS-7933.03.patch


 Fsck doesn't count replicas that are on decommissioning nodes. If a block has 
 all replicas on the decommissioning nodes, it will be marked as missing, 
 which is alarming for the admins, although the system will replicate them 
 before nodes are decommissioned.
 Fsck output should also show decommissioning replicas along with the live 
 replicas.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8096) DatanodeMetrics#blocksReplicated will get incremented early and even for failed transfers

2015-04-09 Thread Vinayakumar B (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8096?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinayakumar B updated HDFS-8096:

   Resolution: Fixed
Fix Version/s: 2.8.0
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

 DatanodeMetrics#blocksReplicated will get incremented early and even for 
 failed transfers
 -

 Key: HDFS-8096
 URL: https://issues.apache.org/jira/browse/HDFS-8096
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: datanode
Reporter: Vinayakumar B
Assignee: Vinayakumar B
 Fix For: 2.8.0

 Attachments: HDFS-8096-01.patch


 {code}
 case DatanodeProtocol.DNA_TRANSFER:
   // Send a copy of a block to another datanode
   dn.transferBlocks(bcmd.getBlockPoolId(), bcmd.getBlocks(),
       bcmd.getTargets(), bcmd.getTargetStorageTypes());
   dn.metrics.incrBlocksReplicated(bcmd.getBlocks().length);
 {code}
 In the above code, which handles replication transfers from the namenode, 
 {{DatanodeMetrics#blocksReplicated}} is incremented too early, since the 
 transfer happens in the background, and even failed transfers are counted.
 The correct place to increment this counter is {{DataTransfer#run()}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8096) DatanodeMetrics#blocksReplicated will get incremented early and even for failed transfers

2015-04-09 Thread Vinayakumar B (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8096?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14486821#comment-14486821
 ] 

Vinayakumar B commented on HDFS-8096:
-

Thanks [~umamaheswararao] for review.
Committed to trunk and branch-2

 DatanodeMetrics#blocksReplicated will get incremented early and even for 
 failed transfers
 -

 Key: HDFS-8096
 URL: https://issues.apache.org/jira/browse/HDFS-8096
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: datanode
Reporter: Vinayakumar B
Assignee: Vinayakumar B
 Attachments: HDFS-8096-01.patch


 {code}
 case DatanodeProtocol.DNA_TRANSFER:
   // Send a copy of a block to another datanode
   dn.transferBlocks(bcmd.getBlockPoolId(), bcmd.getBlocks(),
       bcmd.getTargets(), bcmd.getTargetStorageTypes());
   dn.metrics.incrBlocksReplicated(bcmd.getBlocks().length);
 {code}
 In the above code, which handles replication transfers from the namenode, 
 {{DatanodeMetrics#blocksReplicated}} is incremented too early, since the 
 transfer happens in the background, and even failed transfers are counted.
 The correct place to increment this counter is {{DataTransfer#run()}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8114) Erasure coding: Add auditlog FSNamesystem#createErasureCodingZone if this operation fails

2015-04-09 Thread Rakesh R (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8114?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rakesh R updated HDFS-8114:
---
Description: 
While reviewing, I've noticed {{createErasureCodingZone}} does not add an 
audit log entry if the operation fails. IMHO it's good to capture the failure 
case also.
{code}
logAuditEvent(true, "createErasureCodingZone", srcArg, null, resultingStat);
{code}

  was:
While reviewing, I've noticed {{createErasureCodingZone}} does not add an 
audit log entry if the operation fails. IMHO it's good to capture the failure 
case also.
{code}
logAuditEvent(success, "createErasureCodingZone", srcArg, null, resultingStat);
{code}


 Erasure coding: Add auditlog FSNamesystem#createErasureCodingZone if this 
 operation fails
 -

 Key: HDFS-8114
 URL: https://issues.apache.org/jira/browse/HDFS-8114
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Rakesh R
Assignee: Rakesh R
Priority: Minor
 Attachments: HDFS-8114-001.patch


 While reviewing, I've noticed {{createErasureCodingZone}} does not add an 
 audit log entry if the operation fails. IMHO it's good to capture the failure 
 case also.
 {code}
 logAuditEvent(true, "createErasureCodingZone", srcArg, null, resultingStat);
 {code}
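The failure case is typically captured with a catch block that logs the event with success=false. A self-contained sketch of the pattern (stand-in names, not the actual FSNamesystem code):

```java
public class AuditDemo {
    // Stand-in audit sink; the real code writes to the namenode audit logger.
    static final StringBuilder auditLog = new StringBuilder();

    static void logAuditEvent(boolean success, String cmd) {
        auditLog.append(cmd).append('=').append(success).append('\n');
    }

    // Hypothetical stand-in for FSNamesystem#createErasureCodingZone: log
    // success=true on the happy path and success=false when the op throws.
    static void createZone(boolean fail) {
        try {
            if (fail) {
                throw new RuntimeException("op failed");
            }
            logAuditEvent(true, "createErasureCodingZone");
        } catch (RuntimeException e) {
            logAuditEvent(false, "createErasureCodingZone"); // failure captured
        }
    }

    public static void main(String[] args) {
        createZone(false);
        createZone(true);
        System.out.print(auditLog);
    }
}
```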



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-8114) Erasure coding: Add auditlog FSNamesystem#createErasureCodingZone if this operation fails

2015-04-09 Thread Rakesh R (JIRA)
Rakesh R created HDFS-8114:
--

 Summary: Erasure coding: Add auditlog 
FSNamesystem#createErasureCodingZone if this operation fails
 Key: HDFS-8114
 URL: https://issues.apache.org/jira/browse/HDFS-8114
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Rakesh R
Assignee: Rakesh R
Priority: Minor


While reviewing, I've noticed {{createErasureCodingZone}} does not add an 
audit log entry if the operation fails. IMHO it's good to capture the failure 
case also.
{code}
logAuditEvent(success, "createErasureCodingZone", srcArg, null, resultingStat);
{code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8111) NPE thrown when invalid FSImage filename given for hdfs oiv_legacy cmd

2015-04-09 Thread Brahma Reddy Battula (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8111?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14487275#comment-14487275
 ] 

Brahma Reddy Battula commented on HDFS-8111:


[~surendrasingh] Thanks for taking this issue. LGTM, +1 (non-binding)

 NPE thrown when invalid FSImage filename given for hdfs oiv_legacy cmd
 

 Key: HDFS-8111
 URL: https://issues.apache.org/jira/browse/HDFS-8111
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: tools
Affects Versions: 2.6.0
Reporter: Archana T
Assignee: surendra singh lilhore
Priority: Minor
 Fix For: 3.0.0

 Attachments: HDFS-8111.patch


 NPE thrown when invalid filename is given as argument for hdfs oiv_legacy 
 command
 {code}
 ./hdfs oiv_legacy -i 
 /home/hadoop/hadoop/hadoop-3.0.0/dfs/name/current/fsimage_00042 
 -o fsimage.txt 
 Exception in thread "main" java.lang.NullPointerException
 at 
 org.apache.hadoop.hdfs.tools.offlineImageViewer.OfflineImageViewer.go(OfflineImageViewer.java:140)
 at 
 org.apache.hadoop.hdfs.tools.offlineImageViewer.OfflineImageViewer.main(OfflineImageViewer.java:260)
 {code}
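A plausible guard, sketched here hypothetically rather than taken from the attached patch, is to validate the input file up front and fail with a clear message instead of an NPE:

```java
import java.io.File;

public class ImageViewerDemo {
    // Hypothetical pre-check mirroring the likely fix in OfflineImageViewer:
    // verify the fsimage file exists before processing it.
    static String check(String path) {
        File f = new File(path);
        if (!f.exists() || !f.isFile()) {
            return "Input file " + path + " not found.";
        }
        return "ok";
    }

    public static void main(String[] args) {
        System.out.println(check("/no/such/fsimage_00042"));
    }
}
```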



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8110) Remove unsupported operation , -rollingUpgrade downgrade related information from document

2015-04-09 Thread J.Andreina (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8110?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14487299#comment-14487299
 ] 

J.Andreina commented on HDFS-8110:
--

Test case failures are not related to this patch.

 Remove unsupported operation , -rollingUpgrade downgrade related information 
 from document
 --

 Key: HDFS-8110
 URL: https://issues.apache.org/jira/browse/HDFS-8110
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: documentation
Reporter: J.Andreina
Assignee: J.Andreina
Priority: Minor
 Attachments: HDFS-8110.1.patch


 Support for -rollingUpgrade downgrade has been removed as part of HDFS-7302.
 The corresponding information should be removed from the document as well.
 {noformat}
 Downgrade with Downtime
 Administrator may choose to first shutdown the cluster and then downgrade it. 
 The following are the steps:
 Shutdown all NNs and DNs.
 Restore the pre-upgrade release in all machines.
 Start NNs with the -rollingUpgrade downgrade option.
 Start DNs normally.
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8109) ECManager should be able to manage multiple ECSchemas

2015-04-09 Thread Kai Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8109?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14486985#comment-14486985
 ] 

Kai Zheng commented on HDFS-8109:
-

Hi [~rakeshr],

Yes, I'm working on HDFS-7866. This should be resolved as a duplicate of it. 
Thanks.
If you're interested or would like to help, I will see if there is any 
sub-task available from HDFS-7866.

 ECManager should be able to manage multiple ECSchemas
 -

 Key: HDFS-8109
 URL: https://issues.apache.org/jira/browse/HDFS-8109
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Hui Zheng

 [HDFS-8074|https://issues.apache.org/jira/browse/HDFS-8074] has implemented a 
 default EC schema.
 But a user may use another predefined schema when he creates an EC zone.
 Maybe we need to implement getting an ECSchema from ECManager by its schema 
 name.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8063) Fix intermittent test failures in TestTracing

2015-04-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8063?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14487978#comment-14487978
 ] 

Hudson commented on HDFS-8063:
--

FAILURE: Integrated in Hadoop-trunk-Commit #7550 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/7550/])
HDFS-8063: Fix intermittent test failures in TestTracing (Masatake Iwasaki via 
Colin P. McCabe) (cmccabe: rev 61dc2ea3fee4085b19cd2d01de9eacdc4c42e21f)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/tracing/TestTracing.java


 Fix intermittent test failures in TestTracing
 -

 Key: HDFS-8063
 URL: https://issues.apache.org/jira/browse/HDFS-8063
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Reporter: Masatake Iwasaki
Assignee: Masatake Iwasaki
Priority: Minor
 Fix For: 2.7.0

 Attachments: HDFS-8063.001.patch, HDFS-8063.002.patch, 
 testReadTraceHooks.html


 Tests in TestTracing sometimes fail, especially on slow machines. The cause 
 is that spans may arrive at the receiver after {{assertSpanNamesFound}} has 
 passed and {{SetSpanReceiver.SetHolder.spans.clear()}} has been called for 
 the next test case.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8109) ECManager should be able to manage multiple ECSchemas

2015-04-09 Thread Rakesh R (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8109?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14487009#comment-14487009
 ] 

Rakesh R commented on HDFS-8109:


bq. If you're interested or would like to help, I will see if there is any 
sub-task available from HDFS-7866.
That's great!

 ECManager should be able to manage multiple ECSchemas
 -

 Key: HDFS-8109
 URL: https://issues.apache.org/jira/browse/HDFS-8109
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Hui Zheng

 [HDFS-8074|https://issues.apache.org/jira/browse/HDFS-8074] has implemented a 
 default EC schema.
 But a user may use another predefined schema when he creates an EC zone.
 Maybe we need to implement getting an ECSchema from ECManager by its schema 
 name.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (HDFS-8115) Make PermissionStatusFormat public

2015-04-09 Thread Arun Suresh (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8115?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun Suresh reassigned HDFS-8115:
-

Assignee: Arun Suresh

 Make PermissionStatusFormat public
 --

 Key: HDFS-8115
 URL: https://issues.apache.org/jira/browse/HDFS-8115
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Arun Suresh
Assignee: Arun Suresh
Priority: Minor

 Implementations of {{INodeAttributeProvider}} are required to provide an 
 implementation of the {{getPermissionLong()}} method. Unfortunately, the long 
 permission format is an encoding of the user, group and mode, with each field 
 converted to an int using {{SerialNumberManager}}, which is package protected.
 Thus it would be nice to make the {{PermissionStatusFormat}} enum public (and 
 also make the {{toLong()}} static method public) so that user-specified 
 implementations of {{INodeAttributeProvider}} may use it.
 This would also make it more consistent with {{AclStatusFormat}}, which I 
 guess has been made public for the same reason.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8111) NPE thrown when invalid FSImage filename given for hdfs oiv_legacy cmd

2015-04-09 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8111?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14487541#comment-14487541
 ] 

Hadoop QA commented on HDFS-8111:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12724208/HDFS-8111.patch
  against trunk revision 6495940.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-hdfs-project/hadoop-hdfs:

  
org.apache.hadoop.hdfs.server.namenode.ha.TestPipelinesFailover

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/10231//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HDFS-Build/10231//console

This message is automatically generated.

 NPE thrown when invalid FSImage filename given for hdfs oiv_legacy cmd
 

 Key: HDFS-8111
 URL: https://issues.apache.org/jira/browse/HDFS-8111
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: tools
Affects Versions: 2.6.0
Reporter: Archana T
Assignee: surendra singh lilhore
Priority: Minor
 Fix For: 3.0.0

 Attachments: HDFS-8111.patch


 NPE thrown when invalid filename is given as argument for hdfs oiv_legacy 
 command
 {code}
 ./hdfs oiv_legacy -i 
 /home/hadoop/hadoop/hadoop-3.0.0/dfs/name/current/fsimage_00042 
 -o fsimage.txt 
 Exception in thread "main" java.lang.NullPointerException
 at 
 org.apache.hadoop.hdfs.tools.offlineImageViewer.OfflineImageViewer.go(OfflineImageViewer.java:140)
 at 
 org.apache.hadoop.hdfs.tools.offlineImageViewer.OfflineImageViewer.main(OfflineImageViewer.java:260)
 {code}
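 A fix in the direction this report suggests is a pre-flight check on the input path before the viewer dereferences it. This is a hedged sketch with illustrative names, not the actual OfflineImageViewer code.

```java
import java.io.File;

// Sketch: fail fast with a clear message instead of an NPE when the
// fsimage path does not exist. Class and method names are illustrative.
public class ImageFileCheck {
    static void validate(String path) {
        File f = new File(path);
        if (!f.isFile()) {
            throw new IllegalArgumentException(
                "Input file " + path + " does not exist or is not a file");
        }
    }

    public static void main(String[] args) {
        try {
            validate("/definitely/missing/fsimage");
        } catch (IllegalArgumentException e) {
            System.out.println(e.getMessage());
        }
    }
}
```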



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8117) More accurate verification in SimulatedFSDataset: replace DEFAULT_DATABYTE with patterned data

2015-04-09 Thread Zhe Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8117?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14488305#comment-14488305
 ] 

Zhe Zhang commented on HDFS-8117:
-

I also had to change some unit tests to actually obtain the list of blocks of a 
file, in order to verify file content.

 More accurate verification in SimulatedFSDataset: replace DEFAULT_DATABYTE 
 with patterned data
 --

 Key: HDFS-8117
 URL: https://issues.apache.org/jira/browse/HDFS-8117
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Zhe Zhang
Assignee: Zhe Zhang
 Attachments: HDFS-8117.000.patch


 Currently {{SimulatedFSDataset}} uses a single {{DEFAULT_DATABYTE}} to 
 simulate _all_ block content. This is not accurate because returning this 
 byte only means the read request has hit an arbitrary position in an 
 arbitrary simulated block.
 This JIRA aims to improve it with a more accurate verification. When position 
 {{p}} of a simulated block {{b}} is accessed, the returned byte is {{b}}'s 
 block ID plus {{p}}, modulo the max value of a byte.
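 The patterned-data idea described above can be sketched in a few lines; the function name is illustrative, not {{SimulatedFSDataset}}'s actual API.

```java
// Sketch: the byte at position p of simulated block b depends on both the
// block ID and the offset, so different blocks yield distinguishable data.
public class PatternedBlock {
    static byte simulatedByte(long blockId, long position) {
        return (byte) ((blockId + position) % Byte.MAX_VALUE);
    }

    public static void main(String[] args) {
        // With a single DEFAULT_DATABYTE these two reads would be identical;
        // with patterned data they differ.
        System.out.println(simulatedByte(100, 0));
        System.out.println(simulatedByte(101, 0));
    }
}
```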



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7725) Incorrect nodes in service metrics caused all writes to fail

2015-04-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7725?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14487189#comment-14487189
 ] 

Hudson commented on HDFS-7725:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk #2090 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/2090/])
HDFS-7725. Incorrect 'nodes in service' metrics caused all writes to fail. 
Contributed by Ming Ma. (wang: rev 6af0d74a75f0f58d5e92e2e91e87735b9a62bb12)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/HeartbeatManager.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DecommissionManager.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestNamenodeCapacityReport.java


 Incorrect nodes in service metrics caused all writes to fail
 --

 Key: HDFS-7725
 URL: https://issues.apache.org/jira/browse/HDFS-7725
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Ming Ma
Assignee: Ming Ma
 Fix For: 2.8.0

 Attachments: HDFS-7725-2.patch, HDFS-7725-3.patch, HDFS-7725.patch


 One of our clusters sometimes couldn't allocate blocks from any DNs. 
 BlockPlacementPolicyDefault complains with the following messages for all DNs.
 {noformat}
 the node is too busy (load:x > y)
 {noformat}
 It turns out the {{HeartbeatManager}}'s {{nodesInService}} was computed 
 incorrectly when admins decomm or recomm dead nodes. Here are two scenarios.
 * Decomm dead nodes. It turns out HDFS-7374 has fixed it; not sure if it is 
 intentional. cc / [~zhz], [~andrew.wang], [~atm] Here is the sequence of 
 event without HDFS-7374.
 ** Cluster has one live node. nodesInService == 1
 ** The node becomes dead. nodesInService == 0
 ** Decomm the node. nodesInService == -1
 * However, HDFS-7374 introduces another inconsistency when recomm is involved.
 ** Cluster has one live node. nodesInService == 1
 ** The node becomes dead. nodesInService == 0
 ** Decomm the node. nodesInService == 0
 ** Recomm the node. nodesInService == 1
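 The counter scenarios above can be reproduced with a toy model: if the decommission path decrements the counter without checking liveness, decommissioning an already-dead node double-counts the decrement. Method names here are illustrative, not {{HeartbeatManager}}'s actual API.

```java
// Toy model of the nodesInService accounting bug described above.
public class NodesInServiceCounter {
    int nodesInService;
    boolean alive;
    boolean decommissioned;

    void register() { alive = true; nodesInService++; }            // live node joins

    void markDead() {                                              // dead nodes leave service
        if (alive && !decommissioned) nodesInService--;
        alive = false;
    }

    // Buggy variant: decrements unconditionally, so a dead node's
    // decommission subtracts a second time.
    void decommissionBuggy() { nodesInService--; decommissioned = true; }

    // Fixed variant: only live, in-service nodes affect the counter.
    void decommissionFixed() {
        if (alive && !decommissioned) nodesInService--;
        decommissioned = true;
    }

    public static void main(String[] args) {
        NodesInServiceCounter c = new NodesInServiceCounter();
        c.register();           // nodesInService == 1
        c.markDead();           // nodesInService == 0
        c.decommissionBuggy();  // nodesInService == -1, the bug above
        System.out.println(c.nodesInService);
    }
}
```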



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7931) Spurious Error message Could not find uri with key [dfs.encryption.key.provider.uri] to create a key appears even when Encryption is disabled

2015-04-09 Thread Arun Suresh (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7931?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun Suresh updated HDFS-7931:
--
Attachment: HDFS-7931.3.patch

Thanks [~xyao] for the review.
Re-uploading the trunk-rebased patch and kicking off Jenkins again.
Will commit after that.

 Spurious Error message "Could not find uri with key 
 [dfs.encryption.key.provider.uri] to create a key" appears even when 
 Encryption is disabled
 

 Key: HDFS-7931
 URL: https://issues.apache.org/jira/browse/HDFS-7931
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: hdfs-client
Affects Versions: 2.7.0
Reporter: Arun Suresh
Assignee: Arun Suresh
Priority: Minor
 Attachments: HDFS-7931.1.patch, HDFS-7931.2.patch, HDFS-7931.2.patch, 
 HDFS-7931.3.patch


 The {{addDelegationTokens}} method in {{DistributedFileSystem}} calls 
 {{DFSClient#getKeyProvider()}} which attempts to get a provider from the 
 {{KeyProviderCache}}, but since the required key, 
 *dfs.encryption.key.provider.uri*, is not present (due to encryption being 
 disabled), it throws an exception.
 {noformat}
 2015-03-11 23:55:47,849 [JobControl] ERROR 
 org.apache.hadoop.hdfs.KeyProviderCache - Could not find uri with key 
 [dfs.encryption.key.provider.uri] to create a keyProvider !!
 {noformat}
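 One direction for suppressing the spurious error is to check whether the provider URI is configured at all before attempting to build a provider. The configuration key below matches the description; the surrounding API is an illustrative stand-in, not the actual {{KeyProviderCache}} code.

```java
import java.util.HashMap;
import java.util.Map;

// Sketch: return null quietly when encryption is disabled (key absent),
// instead of logging an error while trying to create a KeyProvider.
public class KeyProviderLookup {
    static final String KEY = "dfs.encryption.key.provider.uri";

    static String providerUriOrNull(Map<String, String> conf) {
        String uri = conf.get(KEY);
        if (uri == null || uri.isEmpty()) {
            return null;  // encryption disabled: no provider, no error log
        }
        return uri;
    }

    public static void main(String[] args) {
        Map<String, String> conf = new HashMap<>();
        System.out.println(providerUriOrNull(conf));  // encryption off
        conf.put(KEY, "kms://http@kms-host:9600/kms");
        System.out.println(providerUriOrNull(conf));  // encryption on
    }
}
```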



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8112) Enforce authorization policy to protect administration operations for EC zone and schemas

2015-04-09 Thread Kai Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8112?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14487083#comment-14487083
 ] 

Kai Zheng commented on HDFS-8112:
-

Hmm, maybe you could learn a little bit about [Hadoop Service Level 
Authorization|https://hadoop.apache.org/docs/r2.6.0/hadoop-project-dist/hadoop-common/ServiceLevelAuth.html]
 I guess? In code, please see {{RefreshAuthorizationPolicyProtocol}}, which 
ensures that only privileged users/admins are able to update and load the 
{{hadoop-policy.xml}} ACL file. This is nothing different from that. Hope this 
helps, thanks.

 Enforce authorization policy to protect administration operations for EC zone 
 and schemas
 -

 Key: HDFS-8112
 URL: https://issues.apache.org/jira/browse/HDFS-8112
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Kai Zheng
Assignee: Kai Zheng

 We should allow enforcing an authorization policy to protect administration 
 operations for EC zones and schemas, as such operations can have a large 
 impact on the system.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7993) Incorrect descriptions in fsck when nodes are decommissioned

2015-04-09 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7993?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14487424#comment-14487424
 ] 

Hadoop QA commented on HDFS-7993:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12724199/HDFS-7993.2.patch
  against trunk revision 6495940.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-hdfs-project/hadoop-hdfs.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/10229//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HDFS-Build/10229//console

This message is automatically generated.

 Incorrect descriptions in fsck when nodes are decommissioned
 

 Key: HDFS-7993
 URL: https://issues.apache.org/jira/browse/HDFS-7993
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.6.0
Reporter: Ming Ma
Assignee: J.Andreina
 Attachments: HDFS-7993.1.patch, HDFS-7993.2.patch


 When you run fsck with -files or -racks, you will get something like 
 below if one of the replicas is decommissioned.
 {noformat}
 blk_x len=y repl=3 [dn1, dn2, dn3, dn4]
 {noformat}
 That is because in NamenodeFsck, the repl count comes from the live replica 
 count, while the actual nodes come from the LocatedBlock, which includes 
 decommissioned nodes.
 Another issue in NamenodeFsck is that BlockPlacementPolicy's verifyBlockPlacement 
 verifies a LocatedBlock that includes decommissioned nodes. However, it seems 
 better to exclude the decommissioned nodes in the verification, just like how 
 fsck excludes decommissioned nodes when it checks for under-replicated blocks.
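 The fix direction described above, counting only non-decommissioned replicas for the repl= figure while still listing every location, can be sketched as follows. The types and names are illustrative, not NamenodeFsck's actual code.

```java
import java.util.Arrays;
import java.util.List;

// Sketch: a block's repl= count should exclude decommissioned nodes even
// though the location list still shows them.
public class FsckReplCount {
    static class Replica {
        final String node;
        final boolean decommissioned;
        Replica(String node, boolean decommissioned) {
            this.node = node;
            this.decommissioned = decommissioned;
        }
    }

    static long liveReplicas(List<Replica> located) {
        return located.stream().filter(r -> !r.decommissioned).count();
    }

    public static void main(String[] args) {
        List<Replica> locations = Arrays.asList(
                new Replica("dn1", false), new Replica("dn2", false),
                new Replica("dn3", false), new Replica("dn4", true));
        // Three live replicas, four listed locations (dn4 is decommissioned).
        System.out.println("repl=" + liveReplicas(locations)
                + " locations=" + locations.size());
    }
}
```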



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Moved] (HDFS-8115) Make PermissionStatusFormat public

2015-04-09 Thread Arun Suresh (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8115?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun Suresh moved YARN-3470 to HDFS-8115:
-

Target Version/s: 2.8.0  (was: 2.7.0)
 Key: HDFS-8115  (was: YARN-3470)
 Project: Hadoop HDFS  (was: Hadoop YARN)

 Make PermissionStatusFormat public
 --

 Key: HDFS-8115
 URL: https://issues.apache.org/jira/browse/HDFS-8115
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Arun Suresh
Priority: Minor

 implementations of {{INodeAttributeProvider}} are required to provide an 
 implementation of {{getPermissionLong()}} method. Unfortunately, the long 
 permission format is an encoding of the user, group and mode with each field 
 converted to int using {{SerialNumberManager}} which is package protected.
 Thus it would be nice to make the {{PermissionStatusFormat}} enum public (and 
 also make the {{toLong()}} static method public) so that user specified 
 implementations of {{INodeAttributeProvider}} may use it.
 This would also make it more consistent with {{AclStatusFormat}} which I 
 guess has been made public for the same reason.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7240) Object store in HDFS

2015-04-09 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7240?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14487322#comment-14487322
 ] 

Steve Loughran commented on HDFS-7240:
--

cluster level

# is there a limit on the number of storage volumes in a cluster? does GET/ return 
all of them? 

Storage Volume Level 
# any way to enum users? e.g. GET /admin/user/
# 

Bucket Level
# what if I want to GET the 1001st entry in an object store? The GET spec doesn't 
allow this.
# Propose: the listing of entries should be a structure that includes length, block 
sizes, everything needed to rebuild a FileStatus

Object level
# GET on object must support ranges
# HEAD should supply content-length



 Object store in HDFS
 

 Key: HDFS-7240
 URL: https://issues.apache.org/jira/browse/HDFS-7240
 Project: Hadoop HDFS
  Issue Type: New Feature
Reporter: Jitendra Nath Pandey
Assignee: Jitendra Nath Pandey
 Attachments: Ozone-architecture-v1.pdf


 This jira proposes to add object store capabilities into HDFS. 
 As part of the federation work (HDFS-1052) we separated block storage as a 
 generic storage layer. Using the Block Pool abstraction, new kinds of 
 namespaces can be built on top of the storage layer i.e. datanodes.
 In this jira I will explore building an object store using the datanode 
 storage, but independent of namespace metadata.
 I will soon update with a detailed design document.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8062) Remove hard-coded values in favor of EC schema

2015-04-09 Thread Kai Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8062?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14486784#comment-14486784
 ] 

Kai Zheng commented on HDFS-8062:
-

The change looks great. Comments so far:
1. Is it possible to use the default ECSchema object itself in 
{{DFSStripedInputStream}}, since it will need a specific schema when 
recovering erased block data?
2. Better to mention the JIRA (HDFS-7866) in the related TODOs;
3. Please use {{ECSchema#getNumTotalUnits}} when possible since you added it;
4. Could we have a TODO in {{BlockInfoStriped#write}}? It should persist the 
schema object itself I guess.
5. In the following, I guess we should always require and use a passed-in 
schema object instead of the default one. Please also check other places. I 
think we should limit the places that use the default schema, as it should 
only be used at higher entry points, where a dynamic schema object will 
replace it later.
{code}
+  public BlockInfoStripedUnderConstruction(Block blk) {
+this(blk, ECSchemaManager.getSystemDefaultSchema());
+  }
{code}
6. {{TestReadStripedFile}} is refactored in HDFS-8104; we could remove the 
change for it here.

 Remove hard-coded values in favor of EC schema
 --

 Key: HDFS-8062
 URL: https://issues.apache.org/jira/browse/HDFS-8062
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Kai Zheng
Assignee: Kai Sasaki
 Attachments: HDFS-8062.1.patch, HDFS-8062.2.patch


 Related issues about EC schema in NameNode side:
 HDFS-7859 is to change fsimage and editlog in NameNode to persist EC schemas;
 HDFS-7866 is to manage EC schemas in NameNode, loading, syncing between 
 persisted ones in image and predefined ones in XML.
 This is to revisit all the places in NameNode that use hard-coded values in 
 favor of {{ECSchema}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8104) Make hard-coded values consistent with the system default schema first before removing them

2015-04-09 Thread Kai Zheng (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8104?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kai Zheng updated HDFS-8104:

Attachment: HDFS-8104-v2.patch

Update the patch according to review comments.

 Make hard-coded values consistent with the system default schema first before 
 removing them
 -

 Key: HDFS-8104
 URL: https://issues.apache.org/jira/browse/HDFS-8104
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Kai Zheng
Assignee: Kai Zheng
 Attachments: HDFS-8104-v1.patch, HDFS-8104-v2.patch


 It's not easy to remove the hard-coded values to use the system default 
 schema. We may need several steps/issues to cover relevant aspects. First of 
 all, let's make the hard-coded values consistent with the system default 
 schema. This might not be so easy: as an experimental test indicated, when 
 changing the following two lines, some tests failed.
 {code}
 -  public static final byte NUM_DATA_BLOCKS = 3;
 -  public static final byte NUM_PARITY_BLOCKS = 2;
 +  public static final byte NUM_DATA_BLOCKS = 6;
 +  public static final byte NUM_PARITY_BLOCKS = 3;
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8088) Reduce the number of HTrace spans generated by HDFS reads

2015-04-09 Thread Yi Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8088?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14487376#comment-14487376
 ] 

Yi Liu commented on HDFS-8088:
--

The patch needs to be rebased.
I agree that we don't need a trace span for each read, which would affect 
performance.  I have checked that we have not added a trace span for {{pread}}. 
The patch looks good to me.
BTW,
{code}
-int hedgedReadId = 0;
+int hedgedReadId = 1;
{code}
This change is not necessary.

 Reduce the number of HTrace spans generated by HDFS reads
 -

 Key: HDFS-8088
 URL: https://issues.apache.org/jira/browse/HDFS-8088
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
 Attachments: HDFS-8088.001.patch


 HDFS generates too many trace spans on read right now.  Every call to read() 
 we make generates its own span, which is not very practical for things like 
 HBase or Accumulo that do many such reads as part of a single operation.  
 Instead of tracing every call to read(), we should only trace the cases where 
 we refill the buffer inside a BlockReader.
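 The span-reduction idea, tracing buffer refills rather than every read() call, can be sketched with a toy reader; the tracer hook here is a stand-in counter, not HTrace's actual API.

```java
// Sketch: 10 reads against a 4-byte buffer trigger only 3 refills, so
// only 3 spans would be opened instead of 10.
public class BufferedTracedReader {
    final byte[] buffer = new byte[4];
    int pos = buffer.length;   // start with an empty buffer
    int spansOpened = 0;

    int read() {
        if (pos == buffer.length) {  // refill: this is the only traced event
            spansOpened++;           // stand-in for tracer.newScope("refill")
            pos = 0;
        }
        return buffer[pos++];
    }

    public static void main(String[] args) {
        BufferedTracedReader r = new BufferedTracedReader();
        for (int i = 0; i < 10; i++) r.read();
        System.out.println(r.spansOpened);
    }
}
```

 This keeps trace volume proportional to network round trips rather than to the caller's read granularity, which matters for HBase/Accumulo-style workloads that issue many small reads per operation.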



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7933) fsck should also report decommissioning replicas.

2015-04-09 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7933?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14488267#comment-14488267
 ] 

Hadoop QA commented on HDFS-7933:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12724284/HDFS-7933.03.patch
  against trunk revision 63c659d.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 3 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-hdfs-project/hadoop-hdfs:

  org.apache.hadoop.hdfs.server.balancer.TestBalancer
  
org.apache.hadoop.hdfs.server.namenode.snapshot.TestAclWithSnapshot

  The following test timeouts occurred in 
hadoop-hdfs-project/hadoop-hdfs:

org.apache.hadoop.hdfs.server.namenode.TestBackupNode
org.apache.hadoop.hdfs.server.namenode.TestSecondaryWebUi

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/10234//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HDFS-Build/10234//console

This message is automatically generated.

 fsck should also report decommissioning replicas. 
 --

 Key: HDFS-7933
 URL: https://issues.apache.org/jira/browse/HDFS-7933
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Reporter: Jitendra Nath Pandey
Assignee: Xiaoyu Yao
 Attachments: HDFS-7933.00.patch, HDFS-7933.01.patch, 
 HDFS-7933.02.patch, HDFS-7933.03.patch


 Fsck doesn't count replicas that are on decommissioning nodes. If a block has 
 all replicas on the decommissioning nodes, it will be marked as missing, 
 which is alarming for the admins, although the system will replicate them 
 before nodes are decommissioned.
 Fsck output should also show decommissioning replicas along with the live 
 replicas.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8112) Enforce authorization policy to protect administration operations for EC zone and schemas

2015-04-09 Thread Kai Zheng (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8112?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kai Zheng updated HDFS-8112:

Assignee: Rakesh R  (was: Kai Zheng)

 Enforce authorization policy to protect administration operations for EC zone 
 and schemas
 -

 Key: HDFS-8112
 URL: https://issues.apache.org/jira/browse/HDFS-8112
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Kai Zheng
Assignee: Rakesh R

 We should allow enforcing an authorization policy to protect administration 
 operations for EC zones and schemas, as such operations can have a large 
 impact on the system.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8112) Enforce authorization policy to protect administration operations for EC zone and schemas

2015-04-09 Thread Rakesh R (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8112?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14487107#comment-14487107
 ] 

Rakesh R commented on HDFS-8112:


Thanks again for the details. I will go through it. Kindly assign the issue to 
me.

 Enforce authorization policy to protect administration operations for EC zone 
 and schemas
 -

 Key: HDFS-8112
 URL: https://issues.apache.org/jira/browse/HDFS-8112
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Kai Zheng
Assignee: Kai Zheng

 We should allow enforcing an authorization policy to protect administration 
 operations for EC zones and schemas, as such operations can have a large 
 impact on the system.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8091) ACLStatus and XAttributes not properly presented to INodeAttributesProvider before returning to client

2015-04-09 Thread Arun Suresh (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8091?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun Suresh updated HDFS-8091:
--
Resolution: Fixed
Status: Resolved  (was: Patch Available)

 ACLStatus and XAttributes not properly presented to INodeAttributesProvider 
 before returning to client 
 ---

 Key: HDFS-8091
 URL: https://issues.apache.org/jira/browse/HDFS-8091
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: HDFS
Reporter: Arun Suresh
Assignee: Arun Suresh
 Fix For: 2.8.0

 Attachments: HDFS-8091.1.patch


 HDFS-6826 introduced the concept of an {{INodeAttributesProvider}}, an 
 implementation of which can be plugged in so that the attributes (user / 
 group / permission / ACLs and xattrs) returned for an HDFS path can 
 be altered/enhanced by user-specified code before they are returned to the 
 client.
 Unfortunately, it looks like the AclStatus and XAttributes are not properly 
 presented to the user-specified {{INodeAttributesProvider}} before they are 
 returned to the client.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7933) fsck should also report decommissioning replicas.

2015-04-09 Thread Xiaoyu Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7933?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDFS-7933:
-
Attachment: HDFS-7933.03.patch

Thanks Chris for the review. I've updated the patch based on your comments.

bq.In BlockManager#chooseSourceDatanode, should the decommissioning counter be 
incremented by countableReplica, like for the other counters?

Good catch. Fixed.

bq.Could the deprecated NumberReplicas#decommissionedReplicas be implemented to 
forward the call to decommissionedAndDecommissioning? If so, then we can 
eliminate the decommissionedReplicas member variable.

Agree and fixed.

bq. The new test in TestFsck appears to create a StringBuilder, append data to 
it, and then never use it for anything. Can it be removed?

Fixed.

 fsck should also report decommissioning replicas. 
 --

 Key: HDFS-7933
 URL: https://issues.apache.org/jira/browse/HDFS-7933
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Reporter: Jitendra Nath Pandey
Assignee: Xiaoyu Yao
 Attachments: HDFS-7933.00.patch, HDFS-7933.01.patch, 
 HDFS-7933.02.patch, HDFS-7933.03.patch


 Fsck doesn't count replicas that are on decommissioning nodes. If a block has 
 all replicas on the decommissioning nodes, it will be marked as missing, 
 which is alarming for the admins, although the system will replicate them 
 before nodes are decommissioned.
 Fsck output should also show decommissioning replicas along with the live 
 replicas.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8055) NullPointerException when topology script is missing.

2015-04-09 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8055?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14488202#comment-14488202
 ] 

Chris Nauroth commented on HDFS-8055:
-

Hi Anu.  Nice work tracking down this bug.  The patch looks mostly good.  I 
have just a few comments on the tests.
# I think we can get the tests running on Windows.  To do that, we'd remove the 
{{assumeTrue}} calls and add cmd scripts equivalent to the bash scripts.  In 
the calls to your helper function, you can use 
{{org.apache.hadoop.util.Shell#appendScriptExtension}} to set it up to call 
either the sh or the cmd file, i.e.: {{HelperFunction("/" + 
Shell.appendScriptExtension("topology-script"))}}.
# Instead of using a script that is syntactically broken, can we have a script 
that is hard-coded to do {{exit 1}}?  We're about to add ShellCheck static 
analysis of the bash code into our pre-commit runs.  I can see that ShellCheck 
will flag an error in this script, so I'd like to avoid false notifications.

Thanks!

 NullPointerException when topology script is missing.
 -

 Key: HDFS-8055
 URL: https://issues.apache.org/jira/browse/HDFS-8055
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 2.2.0
Reporter: Anu Engineer
Assignee: Anu Engineer
 Fix For: 2.7.0

 Attachments: hdfs-8055.001.patch


 We've received reports that the NameNode can get a NullPointerException when 
 the topology script is missing. This issue tracks investigating whether or 
 not we can improve the validation logic and give a more informative error 
 message.
 Here is a sample stack trace :
 Getting NPE from HDFS:
  
  2015-02-06 23:02:12,250 ERROR [pool-4-thread-1] util.HFileV1Detector: Got 
 exception while reading trailer for 
 file:hdfs://hqhd02nm01.pclc0.merkle.local:8020/hbase/.META./1028785192/info/1490a396aea448b693da563f76a28486
 org.apache.hadoop.ipc.RemoteException(java.lang.NullPointerException): 
 java.lang.NullPointerException
  at 
 org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager.sortLocatedBlocks(DatanodeManager.java:359)
  at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocations(FSNamesystem.java:1789)
  at 
 org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getBlockLocations(NameNodeRpcServer.java:542)
  at 
 org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getBlockLocations(ClientNamenodeProtocolServerSideTranslatorPB.java:362)
  at 
 org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
  at 
 org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:619)
  at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:962)
  at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2039)
  at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2035)
  at java.security.AccessController.doPrivileged(Native Method)
  at javax.security.auth.Subject.doAs(Subject.java:415)
  at 
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
  at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2033)

  at org.apache.hadoop.ipc.Client.call(Client.java:1468)
  at org.apache.hadoop.ipc.Client.call(Client.java:1399)
  at 
 org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232)
  at com.sun.proxy.$Proxy14.getBlockLocations(Unknown Source)
  at 
 org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getBlockLocations(ClientNamenodeProtocolTranslatorPB.java:254)
  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
  at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
  at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
  at java.lang.reflect.Method.invoke(Method.java:606)
  at 
 org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187)
  at 
 org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
  at com.sun.proxy.$Proxy15.getBlockLocations(Unknown Source)
  at 
 org.apache.hadoop.hdfs.DFSClient.callGetBlockLocations(DFSClient.java:1220)
  at 
 org.apache.hadoop.hdfs.DFSClient.getLocatedBlocks(DFSClient.java:1210)
  at 
 org.apache.hadoop.hdfs.DFSClient.getLocatedBlocks(DFSClient.java:1200)
  at 
 

[jira] [Assigned] (HDFS-8070) ShortCircuitShmManager goes into dead mode, stopping all operations

2015-04-09 Thread Kihwal Lee (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8070?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kihwal Lee reassigned HDFS-8070:


Assignee: Kihwal Lee

 ShortCircuitShmManager goes into dead mode, stopping all operations
 ---

 Key: HDFS-8070
 URL: https://issues.apache.org/jira/browse/HDFS-8070
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: caching
Affects Versions: 2.8.0
Reporter: Gopal V
Assignee: Kihwal Lee

 HDFS ShortCircuitShm layer keeps the task locked up during multi-threaded 
 split-generation.
 I hit this immediately after I upgraded the data, so I wonder if the 
 ShortCircuitShm wire protocol has trouble when a 2.8.0 DN talks to a 2.7.0 
 client?
 {code}
 2015-04-06 00:04:30,780 INFO [ORC_GET_SPLITS #3] orc.OrcInputFormat: ORC pushdown predicate: leaf-0 = (IS_NULL ss_sold_date_sk)
 expr = (not leaf-0)
 2015-04-06 00:04:30,781 ERROR [ShortCircuitCache_SlotReleaser] shortcircuit.ShortCircuitCache: ShortCircuitCache(0x29e82045): failed to release short-circuit shared memory slot Slot(slotIdx=2, shm=DfsClientShm(a86ee34576d93c4964005d90b0d97c38)) by sending ReleaseShortCircuitAccessRequestProto to /grid/0/cluster/hdfs/dn_socket.  Closing shared memory segment.
 java.io.IOException: ERROR_INVALID: there is no shared memory segment registered with shmId a86ee34576d93c4964005d90b0d97c38
   at org.apache.hadoop.hdfs.shortcircuit.ShortCircuitCache$SlotReleaser.run(ShortCircuitCache.java:208)
   at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
   at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
   at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
   at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
   at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
   at java.lang.Thread.run(Thread.java:745)
 2015-04-06 00:04:30,781 INFO [ORC_GET_SPLITS #5] orc.OrcInputFormat: ORC pushdown predicate: leaf-0 = (IS_NULL ss_sold_date_sk)
 expr = (not leaf-0)
 2015-04-06 00:04:30,781 WARN [ShortCircuitCache_SlotReleaser] shortcircuit.DfsClientShmManager: EndpointShmManager(172.19.128.60:50010, parent=ShortCircuitShmManager(5e763476)): error shutting down shm: got IOException calling shutdown(SHUT_RDWR)
 java.nio.channels.ClosedChannelException
   at org.apache.hadoop.util.CloseableReferenceCount.reference(CloseableReferenceCount.java:57)
   at org.apache.hadoop.net.unix.DomainSocket.shutdown(DomainSocket.java:387)
   at org.apache.hadoop.hdfs.shortcircuit.DfsClientShmManager$EndpointShmManager.shutdown(DfsClientShmManager.java:378)
   at org.apache.hadoop.hdfs.shortcircuit.ShortCircuitCache$SlotReleaser.run(ShortCircuitCache.java:223)
   at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
   at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
   at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
   at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
   at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
   at java.lang.Thread.run(Thread.java:745)
 2015-04-06 00:04:30,783 INFO [ORC_GET_SPLITS #7] orc.OrcInputFormat: ORC pushdown predicate: leaf-0 = (IS_NULL cs_sold_date_sk)
 expr = (not leaf-0)
 2015-04-06 00:04:30,785 ERROR [ShortCircuitCache_SlotReleaser] shortcircuit.ShortCircuitCache: ShortCircuitCache(0x29e82045): failed to release short-circuit shared memory slot Slot(slotIdx=4, shm=DfsClientShm(a86ee34576d93c4964005d90b0d97c38)) by sending ReleaseShortCircuitAccessRequestProto to /grid/0/cluster/hdfs/dn_socket.  Closing shared memory segment.
 java.io.IOException: ERROR_INVALID: there is no shared memory segment registered with shmId a86ee34576d93c4964005d90b0d97c38
   at org.apache.hadoop.hdfs.shortcircuit.ShortCircuitCache$SlotReleaser.run(ShortCircuitCache.java:208)
   at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
   at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
   at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
   at ...
 

[jira] [Created] (HDFS-8117) More accurate verification in SimulatedFSDataset: replace DEFAULT_DATABYTE with patterned data

2015-04-09 Thread Zhe Zhang (JIRA)
Zhe Zhang created HDFS-8117:
---

 Summary: More accurate verification in SimulatedFSDataset: replace 
DEFAULT_DATABYTE with patterned data
 Key: HDFS-8117
 URL: https://issues.apache.org/jira/browse/HDFS-8117
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Zhe Zhang
Assignee: Zhe Zhang


Currently {{SimulatedFSDataset}} uses a single {{DEFAULT_DATABYTE}} to simulate 
_all_ block content. This is not accurate: receiving that byte only shows the 
read request hit an arbitrary position in an arbitrary simulated block.

This JIRA aims to improve it with a more accurate verification. When position 
{{p}} of a simulated block {{b}} is accessed, the returned byte is {{b}}'s 
block ID plus {{p}}, modulo the max value of a byte.
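The proposed scheme can be sketched as a tiny helper (names here are hypothetical; the actual SimulatedFSDataset change may differ):

```java
/** Sketch of the proposed patterned-data scheme for simulated blocks
 *  (hypothetical helper, not the actual SimulatedFSDataset code). */
public class PatternedData {
  /** Byte returned when position p of the block with the given ID is read:
   *  (blockId + p) modulo the max value of a byte. */
  public static byte byteAt(long blockId, long p) {
    return (byte) ((blockId + p) % Byte.MAX_VALUE);
  }

  public static void main(String[] args) {
    // Unlike a single DEFAULT_DATABYTE, distinct blocks and offsets
    // now yield distinguishable data, so reads can be verified precisely.
    System.out.println(byteAt(5L, 0L));  // 5
    System.out.println(byteAt(5L, 3L));  // 8
    System.out.println(byteAt(6L, 0L));  // 6
  }
}
```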



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7808) Remove obsolete -ns options in in DFSHAAdmin.java

2015-04-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7808?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14487822#comment-14487822
 ] 

Hudson commented on HDFS-7808:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk-Java8 #159 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/159/])
Revert HDFS-7808. (wheat9: rev bd4c99bece56d1671c6f89eff8a529f4e7ac2933)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/tools/TestDFSHAAdminMiniCluster.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/DFSHAAdmin.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/tools/TestDFSHAAdmin.java


 Remove obsolete -ns options in in DFSHAAdmin.java
 -

 Key: HDFS-7808
 URL: https://issues.apache.org/jira/browse/HDFS-7808
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Arshad Mohammad
Assignee: Arshad Mohammad
Priority: Minor
 Attachments: HDFS-7808-1.patch


 After the HDFS-7324 fix, the following piece of code became unused. It should 
 be removed.
 {code}
 int i = 0;
 String cmd = argv[i++];
 if ("-ns".equals(cmd)) {
   if (i == argv.length) {
 errOut.println("Missing nameservice ID");
 printUsage(errOut);
 return -1;
   }
   nameserviceId = argv[i++];
   if (i >= argv.length) {
 errOut.println("Missing command");
 printUsage(errOut);
 return -1;
   }
   argv = Arrays.copyOfRange(argv, i, argv.length);
 }
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-3087) Decomissioning on NN restart can complete without blocks being replicated

2015-04-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3087?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14487809#comment-14487809
 ] 

Hudson commented on HDFS-3087:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk-Java8 #159 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/159/])
HDFS-8025. Addendum fix for HDFS-3087 Decomissioning on NN restart can complete 
without blocks being replicated. Contributed by Ming Ma. (wang: rev 
5a540c3d3107199f4632e2ad7ee8ff913b107a04)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDecommission.java


 Decomissioning on NN restart can complete without blocks being replicated
 -

 Key: HDFS-3087
 URL: https://issues.apache.org/jira/browse/HDFS-3087
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 0.23.0
Reporter: Kihwal Lee
Assignee: Rushabh S Shah
Priority: Critical
 Fix For: 2.5.0

 Attachments: HDFS-3087.patch


 If a data node is added to the exclude list and the name node is restarted, 
 the decommissioning happens right away on data node registration. At this 
 point the initial block report has not been sent, so the name node thinks the 
 node has zero blocks and the decommissioning completes very quickly, without 
 replicating the blocks on that node.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8079) Separate the client retry conf from DFSConfigKeys

2015-04-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8079?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14487813#comment-14487813
 ] 

Hudson commented on HDFS-8079:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk-Java8 #159 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/159/])
HDFS-8079. Move CorruptFileBlockIterator to a new hdfs.client.impl package. 
(szetszwo: rev c931a3c7760e417f593f5e73f4cf55f6fe1defc5)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DistributedFileSystem.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestListCorruptFileBlocks.java
* hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/fs/Hdfs.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/CorruptFileBlockIterator.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/client/impl/CorruptFileBlockIterator.java


 Separate the client retry conf from DFSConfigKeys
 -

 Key: HDFS-8079
 URL: https://issues.apache.org/jira/browse/HDFS-8079
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: hdfs-client
Reporter: Tsz Wo Nicholas Sze
Assignee: Tsz Wo Nicholas Sze
 Fix For: 2.8.0

 Attachments: h8079_20150407.patch, h8079_20150407b.patch


 A part of HDFS-8050, move dfs.client.retry.* conf from DFSConfigKeys to a new 
 class HdfsClientConfigKeys. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7725) Incorrect nodes in service metrics caused all writes to fail

2015-04-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7725?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14487819#comment-14487819
 ] 

Hudson commented on HDFS-7725:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk-Java8 #159 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/159/])
HDFS-7725. Incorrect 'nodes in service' metrics caused all writes to fail. 
Contributed by Ming Ma. (wang: rev 6af0d74a75f0f58d5e92e2e91e87735b9a62bb12)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DecommissionManager.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestNamenodeCapacityReport.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/HeartbeatManager.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


 Incorrect nodes in service metrics caused all writes to fail
 --

 Key: HDFS-7725
 URL: https://issues.apache.org/jira/browse/HDFS-7725
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Ming Ma
Assignee: Ming Ma
 Fix For: 2.8.0

 Attachments: HDFS-7725-2.patch, HDFS-7725-3.patch, HDFS-7725.patch


 One of our clusters sometimes couldn't allocate blocks from any DNs. 
 BlockPlacementPolicyDefault complains with the following messages for all DNs.
 {noformat}
 the node is too busy (load:x > y)
 {noformat}
 It turns out the {{HeartbeatManager}}'s {{nodesInService}} was computed 
 incorrectly when admins decomm or recomm dead nodes. Here are two scenarios.
 * Decomm dead nodes. It turns out HDFS-7374 has fixed it; not sure if it is 
 intentional. cc / [~zhz], [~andrew.wang], [~atm] Here is the sequence of 
 event without HDFS-7374.
 ** Cluster has one live node. nodesInService == 1
 ** The node becomes dead. nodesInService == 0
 ** Decomm the node. nodesInService == -1
 * However, HDFS-7374 introduces another inconsistency when recomm is involved.
 ** Cluster has one live node. nodesInService == 1
 ** The node becomes dead. nodesInService == 0
 ** Decomm the node. nodesInService == 0
 ** Recomm the node. nodesInService == 1
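The two sequences above can be modeled with a toy counter (illustrative only; the real bookkeeping lives in {{HeartbeatManager}}, and these method names are made up):

```java
/** Toy model of the nodesInService accounting described above
 *  (illustrative; not the actual HeartbeatManager code). */
public class NodesInServiceModel {
  int nodesInService;
  boolean live, decommissioned;

  void register()       { live = true; if (!decommissioned) nodesInService++; }
  void becomeDead()     { if (live && !decommissioned) nodesInService--; live = false; }
  // Pre-HDFS-7374: decrement unconditionally, so decomm of a dead node underflows.
  void decommPre7374()  { decommissioned = true; nodesInService--; }
  // Post-HDFS-7374: only decrement live nodes...
  void decommPost7374() { if (live) nodesInService--; decommissioned = true; }
  // ...but recomm increments unconditionally, so recomm of a dead node overcounts.
  void recommPost7374() { decommissioned = false; nodesInService++; }

  public static void main(String[] args) {
    NodesInServiceModel a = new NodesInServiceModel();
    a.register(); a.becomeDead(); a.decommPre7374();
    System.out.println(a.nodesInService);  // -1: the first scenario

    NodesInServiceModel b = new NodesInServiceModel();
    b.register(); b.becomeDead(); b.decommPost7374(); b.recommPost7374();
    System.out.println(b.nodesInService);  // 1, although the node is dead
  }
}
```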



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8114) Erasure coding: Add auditlog FSNamesystem#createErasureCodingZone if this operation fails

2015-04-09 Thread Uma Maheswara Rao G (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8114?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14487734#comment-14487734
 ] 

Uma Maheswara Rao G commented on HDFS-8114:
---

+1

 Erasure coding: Add auditlog FSNamesystem#createErasureCodingZone if this 
 operation fails
 -

 Key: HDFS-8114
 URL: https://issues.apache.org/jira/browse/HDFS-8114
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Rakesh R
Assignee: Rakesh R
Priority: Minor
 Attachments: HDFS-8114-001.patch


 While reviewing, I've noticed {{createErasureCodingZone}} does not add an 
 audit log entry if the operation fails. IMHO it's good to capture the failure case also.
 {code}
 logAuditEvent(true, "createErasureCodingZone", srcArg, null, resultingStat);
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8112) Enforce authorization policy to protect administration operations for EC zone and schemas

2015-04-09 Thread Rakesh R (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8112?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14487063#comment-14487063
 ] 

Rakesh R commented on HDFS-8112:


Thanks a lot [~drankye]. Yes, I'm happy to take this up. BTW I'd like to hear 
any draft ideas/thoughts you have about possible ways to enforce the 
authorization policy in order to protect EC zones and schemas.

 Enforce authorization policy to protect administration operations for EC zone 
 and schemas
 -

 Key: HDFS-8112
 URL: https://issues.apache.org/jira/browse/HDFS-8112
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Kai Zheng
Assignee: Kai Zheng

 We should allow enforcing an authorization policy to protect administration 
 operations for EC zones and schemas, as such operations can have a large 
 impact on a system.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8063) Fix intermittent test failures in TestTracing

2015-04-09 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8063?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe updated HDFS-8063:
---
Summary: Fix intermittent test failures in TestTracing  (was: Fix test 
failure in TestTracing)

 Fix intermittent test failures in TestTracing
 -

 Key: HDFS-8063
 URL: https://issues.apache.org/jira/browse/HDFS-8063
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Reporter: Masatake Iwasaki
Assignee: Masatake Iwasaki
Priority: Minor
 Attachments: HDFS-8063.001.patch, HDFS-8063.002.patch, 
 testReadTraceHooks.html


 Tests in TestTracing sometimes fail, especially on slow machines. The cause 
 is that spans can arrive at the receiver after 
 {{assertSpanNamesFound}} has passed and 
 {{SetSpanReceiver.SetHolder.spans.clear()}} has been called for the next test case.
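One common remedy for this kind of race (a sketch under assumed names, not necessarily the approach of the HDFS-8063 patch) is to poll until the expected spans have arrived before asserting and clearing the receiver:

```java
// Sketch of a wait-for-spans helper to avoid the race described above
// (illustrative; SpanWaiter/receivedSpans are made-up stand-ins for the
// test's span receiver state).
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

public class SpanWaiter {
  static final Set<String> receivedSpans = ConcurrentHashMap.newKeySet();

  /** Poll until all expected span names have arrived; false on timeout. */
  static boolean awaitSpans(Set<String> expected, long timeoutMs) {
    long deadline = System.currentTimeMillis() + timeoutMs;
    while (!receivedSpans.containsAll(expected)) {
      if (System.currentTimeMillis() > deadline) {
        return false;  // caller fails the test instead of racing ahead
      }
      try {
        Thread.sleep(50);  // give the receiver time to deliver spans
      } catch (InterruptedException e) {
        Thread.currentThread().interrupt();
        return false;
      }
    }
    return true;
  }

  public static void main(String[] args) {
    receivedSpans.add("testReadTraceHooks");
    System.out.println(awaitSpans(Set.of("testReadTraceHooks"), 1000L)); // true
    System.out.println(awaitSpans(Set.of("not-yet-sent"), 100L));        // false
  }
}
```

Only after `awaitSpans` returns true would the test assert on the spans and then clear them for the next case.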



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8096) DatanodeMetrics#blocksReplicated will get incremented early and even for failed transfers

2015-04-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8096?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14487197#comment-14487197
 ] 

Hudson commented on HDFS-8096:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk-Java8 #149 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/149/])
HDFS-8096. DatanodeMetrics#blocksReplicated will get incremented early and even 
for failed transfers (Contributed by Vinayakumar B) (vinayakumarb: rev 
9d8952f97f638ede27e4336b9601507d7bb1de7b)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDataNodeMetrics.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/metrics/DataNodeMetrics.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BPOfferService.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java


 DatanodeMetrics#blocksReplicated will get incremented early and even for 
 failed transfers
 -

 Key: HDFS-8096
 URL: https://issues.apache.org/jira/browse/HDFS-8096
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: datanode
Reporter: Vinayakumar B
Assignee: Vinayakumar B
 Fix For: 2.8.0

 Attachments: HDFS-8096-01.patch


 {code}case DatanodeProtocol.DNA_TRANSFER:
   // Send a copy of a block to another datanode
   dn.transferBlocks(bcmd.getBlockPoolId(), bcmd.getBlocks(),
   bcmd.getTargets(), bcmd.getTargetStorageTypes());
   dn.metrics.incrBlocksReplicated(bcmd.getBlocks().length);{code}
 In the above code, which handles replication transfers from the namenode, 
 {{DatanodeMetrics#blocksReplicated}} is incremented early, since the 
 transfer happens in the background. 
 Failed transfers also get counted.
 The correct place to increment this counter is {{DataTransfer#run()}}.
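The suggested fix boils down to counting only after the transfer completes, roughly like this (a simplified sketch with stand-in names; not the actual DataNode/DataTransfer code):

```java
// Sketch of counting a replication only after the transfer succeeds
// (TransferCounting/runTransfer are illustrative stand-ins, not the patch).
import java.util.concurrent.atomic.AtomicLong;

public class TransferCounting {
  static final AtomicLong blocksReplicated = new AtomicLong();

  /** Stand-in for DataTransfer#run(): increment only on success. */
  static void runTransfer(Runnable sendBlock) {
    try {
      sendBlock.run();                     // actually stream the block
      blocksReplicated.incrementAndGet();  // count only successful transfers
    } catch (RuntimeException e) {
      // failed transfer: no increment
    }
  }

  public static void main(String[] args) {
    runTransfer(() -> {});                                      // success
    runTransfer(() -> { throw new RuntimeException("fail"); }); // failure
    System.out.println(blocksReplicated.get());                 // 1
  }
}
```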



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8104) Make hard-coded values consistent with the system default schema first before remove them

2015-04-09 Thread Kai Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8104?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14486961#comment-14486961
 ] 

Kai Zheng commented on HDFS-8104:
-

It's committed in the branch. Thanks [~vinayrpet] for the review!

 Make hard-coded values consistent with the system default schema first before 
 remove them
 -

 Key: HDFS-8104
 URL: https://issues.apache.org/jira/browse/HDFS-8104
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Kai Zheng
Assignee: Kai Zheng
 Attachments: HDFS-8104-v1.patch, HDFS-8104-v2.patch


 It's not easy to remove the hard-coded values to use the system default 
 schema. We may need several steps/issues to cover relevant aspects. First of 
 all, let's make the hard-coded values consistent with the system default 
 schema. This might not be so easy: as experiments indicated, when the 
 following two lines are changed, some tests fail.
 {code}
 -  public static final byte NUM_DATA_BLOCKS = 3;
 -  public static final byte NUM_PARITY_BLOCKS = 2;
 +  public static final byte NUM_DATA_BLOCKS = 6;
 +  public static final byte NUM_PARITY_BLOCKS = 3;
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8081) Split getAdditionalBlock() into two methods.

2015-04-09 Thread Konstantin Shvachko (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8081?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Shvachko updated HDFS-8081:
--
Attachment: HDFS-8081-03.patch

This actually allows simplifying {{TestAddBlockRetry}}: we no longer need to mock 
{{chooseTargets()}}. This version simplifies the test accordingly. 

 Split getAdditionalBlock() into two methods.
 

 Key: HDFS-8081
 URL: https://issues.apache.org/jira/browse/HDFS-8081
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.7.0
Reporter: Konstantin Shvachko
Assignee: Konstantin Shvachko
 Attachments: HDFS-8081-01.patch, HDFS-8081-02.patch, 
 HDFS-8081-03.patch


 A minor refactoring to introduce two methods one corresponding to Part I and 
 another to Part II to make {{getAdditionalBlock()}} more readable.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8115) Make PermissionStatusFormat public

2015-04-09 Thread Arun Suresh (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8115?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun Suresh updated HDFS-8115:
--
Status: Patch Available  (was: Open)

 Make PermissionStatusFormat public
 --

 Key: HDFS-8115
 URL: https://issues.apache.org/jira/browse/HDFS-8115
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Arun Suresh
Assignee: Arun Suresh
Priority: Minor
 Attachments: HDFS-8115.1.patch


 Implementations of {{INodeAttributeProvider}} are required to provide an 
 implementation of the {{getPermissionLong()}} method. Unfortunately, the long 
 permission format is an encoding of the user, group and mode, with each field 
 converted to an int using {{SerialNumberManager}}, which is package protected.
 Thus it would be nice to make the {{PermissionStatusFormat}} enum public (and 
 also make the {{toLong()}} static method public) so that user-specified 
 implementations of {{INodeAttributeProvider}} may use it.
 This would also make it more consistent with {{AclStatusFormat}}, which I 
 guess has been made public for the same reason.
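For context, the long format packs three integer fields into a single 64-bit value. A generic sketch of that bit-packing pattern (the bit widths and names below are illustrative only, not HDFS's actual {{PermissionStatusFormat}} layout):

```java
/** Generic sketch of packing user/group ids and mode bits into one long.
 *  Bit widths are illustrative; HDFS's actual layout differs. */
public class PackedPermission {
  static final int MODE_BITS = 16, GROUP_BITS = 24, USER_BITS = 24;

  static long toLong(int userId, int groupId, int mode) {
    return ((long) userId << (GROUP_BITS + MODE_BITS))
         | ((long) groupId << MODE_BITS)
         | (mode & 0xFFFF);
  }

  static int mode(long packed)  { return (int) (packed & 0xFFFF); }
  static int group(long packed) { return (int) ((packed >>> MODE_BITS) & ((1 << GROUP_BITS) - 1)); }
  static int user(long packed)  { return (int) (packed >>> (GROUP_BITS + MODE_BITS)); }

  public static void main(String[] args) {
    long p = toLong(42, 7, 0644);  // ids come from a serial-number table in HDFS
    System.out.println(user(p) + " " + group(p) + " " + mode(p));
  }
}
```

The point of the JIRA is that the ids fed into such an encoding come from the package-protected {{SerialNumberManager}}, so external providers cannot produce the packed long today.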



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8025) Addendum fix for HDFS-3087 Decomissioning on NN restart can complete without blocks being replicated

2015-04-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8025?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14487823#comment-14487823
 ] 

Hudson commented on HDFS-8025:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk-Java8 #159 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/159/])
HDFS-8025. Addendum fix for HDFS-3087 Decomissioning on NN restart can complete 
without blocks being replicated. Contributed by Ming Ma. (wang: rev 
5a540c3d3107199f4632e2ad7ee8ff913b107a04)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDecommission.java


 Addendum fix for HDFS-3087 Decomissioning on NN restart can complete without 
 blocks being replicated
 

 Key: HDFS-8025
 URL: https://issues.apache.org/jira/browse/HDFS-8025
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Ming Ma
Assignee: Ming Ma
 Fix For: 2.7.0

 Attachments: HDFS-8025-2.patch, HDFS-8025.patch


 Per discussion with [~andrew.wang] on HDFS-7411, we should include HDFS-3087 
 and enhance the unit test.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8115) Make PermissionStatusFormat public

2015-04-09 Thread Haohui Mai (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8115?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14487931#comment-14487931
 ] 

Haohui Mai commented on HDFS-8115:
--

The id of the user / group only makes sense for a particular run of the NN.

This is an implementation detail and it should not be exposed to the public.




 Make PermissionStatusFormat public
 --

 Key: HDFS-8115
 URL: https://issues.apache.org/jira/browse/HDFS-8115
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Arun Suresh
Assignee: Arun Suresh
Priority: Minor
 Attachments: HDFS-8115.1.patch


 Implementations of {{INodeAttributeProvider}} are required to provide an 
 implementation of the {{getPermissionLong()}} method. Unfortunately, the long 
 permission format is an encoding of the user, group and mode, with each field 
 converted to an int using {{SerialNumberManager}}, which is package protected.
 Thus it would be nice to make the {{PermissionStatusFormat}} enum public (and 
 also make the {{toLong()}} static method public) so that user-specified 
 implementations of {{INodeAttributeProvider}} may use it.
 This would also make it more consistent with {{AclStatusFormat}}, which I 
 guess has been made public for the same reason.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8101) DFSClient use of non-constant DFSConfigKeys pulls in WebHDFS classes at runtime

2015-04-09 Thread Sean Busbey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8101?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HDFS-8101:
--
Attachment: HDFS-8101.1.patch.txt

Manually inspected javap output for DFSConfigKeys and NameNodeHttpServer (it's 
what uses AuthFilter) to verify that NameNodeHttpServer didn't change. Checked 
DFSConfigKeys for other webhdfs class references.

 DFSClient use of non-constant DFSConfigKeys pulls in WebHDFS classes at 
 runtime
 ---

 Key: HDFS-8101
 URL: https://issues.apache.org/jira/browse/HDFS-8101
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: hdfs-client
Affects Versions: 2.7.0
Reporter: Sean Busbey
Assignee: Sean Busbey
Priority: Minor
 Attachments: HDFS-8101.1.patch.txt


 Previously, all references to DFSConfigKeys in DFSClient were compile time 
 constants which meant that normal users of DFSClient wouldn't resolve 
 DFSConfigKeys at run time. As of HDFS-7718, DFSClient has a reference to a 
 member of DFSConfigKeys that isn't compile time constant 
 (DFS_CLIENT_KEY_PROVIDER_CACHE_EXPIRY_DEFAULT).
 Since the class must be resolved now, this particular member
 {code}
 public static final String  DFS_WEBHDFS_AUTHENTICATION_FILTER_DEFAULT = 
 AuthFilter.class.getName();
 {code}
 means that javax.servlet.Filter needs to be on the classpath.
 javax-servlet-api is one of the properly listed dependencies for HDFS, 
 however if we replace {{AuthFilter.class.getName()}} with the equivalent 
 String literal then downstream folks can avoid including it while maintaining 
 compatibility.
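The distinction driving this issue: a {{static final String}} initialized from a string literal is a compile-time constant, so javac inlines its value into callers and the defining class is never resolved at run time; a field initialized via a method call like {{AuthFilter.class.getName()}} is not a constant, so callers must resolve the defining class, which in turn pulls in every type it references. A minimal, generic illustration (not the HDFS source):

```java
// Minimal illustration of compile-time constant vs. runtime-resolved fields
// (generic example; class and field names are made up).
class Keys {
  // Constant expression: javac inlines this into call sites, so using it
  // never triggers loading of class Keys at run time.
  static final String LITERAL = "org.example.AuthFilter";

  // Not a constant expression: call sites must resolve Keys at run time,
  // which also resolves every other type Keys references.
  static final String COMPUTED = String.class.getName();
}

public class ConstantDemo {
  public static void main(String[] args) {
    System.out.println(Keys.LITERAL);   // value baked in at compile time
    System.out.println(Keys.COMPUTED);  // forces class Keys to load
  }
}
```

This is why replacing the method call with the equivalent literal lets downstream users reference DFSConfigKeys without `javax.servlet.Filter` on the classpath.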



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7889) Subclass DFSOutputStream to support writing striping layout files

2015-04-09 Thread Li Bo (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7889?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14487041#comment-14487041
 ] 

Li Bo commented on HDFS-7889:
-

Changes in patch 010:
check file length in the unit test;
check that the parity blocks are correctly generated;
the leading streamer will wait for the other streamers before committing the block group to the NN, 
because it has to calculate the bytes written for this block writer;
fix other problems raised in Zhe's review.

 Subclass DFSOutputStream to support writing striping layout files
 -

 Key: HDFS-7889
 URL: https://issues.apache.org/jira/browse/HDFS-7889
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Li Bo
Assignee: Li Bo
 Attachments: HDFS-7889-001.patch, HDFS-7889-002.patch, 
 HDFS-7889-003.patch, HDFS-7889-004.patch, HDFS-7889-005.patch, 
 HDFS-7889-006.patch, HDFS-7889-007.patch, HDFS-7889-008.patch, 
 HDFS-7889-009.patch, HDFS-7889-010.patch


 After HDFS-7888, we can subclass  {{DFSOutputStream}} to support writing 
 striping layout files. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8104) Make hard-coded values consistent with the system default schema first before remove them

2015-04-09 Thread Vinayakumar B (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8104?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14486947#comment-14486947
 ] 

Vinayakumar B commented on HDFS-8104:
-

+1

 Make hard-coded values consistent with the system default schema first before 
 remove them
 -

 Key: HDFS-8104
 URL: https://issues.apache.org/jira/browse/HDFS-8104
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Kai Zheng
Assignee: Kai Zheng
 Attachments: HDFS-8104-v1.patch, HDFS-8104-v2.patch


 It's not easy to remove the hard-coded values to use the system default 
 schema. We may need several steps/issues to cover relevant aspects. First of 
 all, let's make the hard-coded values consistent with the system default 
 schema. This might not be so easy: as experiments indicated, when the 
 following two lines are changed, some tests fail.
 {code}
 -  public static final byte NUM_DATA_BLOCKS = 3;
 -  public static final byte NUM_PARITY_BLOCKS = 2;
 +  public static final byte NUM_DATA_BLOCKS = 6;
 +  public static final byte NUM_PARITY_BLOCKS = 3;
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8115) Make PermissionStatusFormat public

2015-04-09 Thread Arun Suresh (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8115?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun Suresh updated HDFS-8115:
--
Target Version/s: 2.7.0  (was: 2.8.0)

 Make PermissionStatusFormat public
 --

 Key: HDFS-8115
 URL: https://issues.apache.org/jira/browse/HDFS-8115
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Arun Suresh
Priority: Minor

 Implementations of {{INodeAttributeProvider}} are required to provide an 
 implementation of the {{getPermissionLong()}} method. Unfortunately, the long 
 permission format is an encoding of the user, group and mode, with each field 
 converted to an int using {{SerialNumberManager}}, which is package protected.
 Thus it would be nice to make the {{PermissionStatusFormat}} enum public (and 
 also make the {{toLong()}} static method public) so that user-specified 
 implementations of {{INodeAttributeProvider}} may use it.
 This would also make it more consistent with {{AclStatusFormat}}, which I 
 guess has been made public for the same reason.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8072) Reserved RBW space is not released if client terminates while writing block

2015-04-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8072?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14487899#comment-14487899
 ] 

Hudson commented on HDFS-8072:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk #2108 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2108/])
HDFS-8072. Reserved RBW space is not released if client terminates while 
writing block. (Arpit Agarwal) (arp: rev 
608c4998419c18fd95019b28cc56b5bd5aa4cc01)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockReceiver.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/TestRbwSpaceReservation.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/ReplicaInPipelineInterface.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/ReplicaInPipeline.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/SimulatedFSDataset.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/extdataset/ExternalReplicaInPipeline.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


 Reserved RBW space is not released if client terminates while writing block
 ---

 Key: HDFS-8072
 URL: https://issues.apache.org/jira/browse/HDFS-8072
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: datanode
Affects Versions: 2.6.0
Reporter: Arpit Agarwal
Assignee: Arpit Agarwal
 Fix For: 2.7.0

 Attachments: HDFS-8072.01.patch, HDFS-8072.02.patch


 The DataNode reserves space for a full block when creating an RBW block 
 (introduced in HDFS-6898).
 The reserved space is released incrementally as data is written to disk and 
 fully when the block is finalized. However if the client process terminates 
 unexpectedly mid-write then the reserved space is not released until the DN 
 is restarted.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-3087) Decomissioning on NN restart can complete without blocks being replicated

2015-04-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3087?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14487901#comment-14487901
 ] 

Hudson commented on HDFS-3087:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk #2108 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2108/])
HDFS-8025. Addendum fix for HDFS-3087 Decomissioning on NN restart can complete 
without blocks being replicated. Contributed by Ming Ma. (wang: rev 
5a540c3d3107199f4632e2ad7ee8ff913b107a04)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDecommission.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


 Decomissioning on NN restart can complete without blocks being replicated
 -

 Key: HDFS-3087
 URL: https://issues.apache.org/jira/browse/HDFS-3087
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 0.23.0
Reporter: Kihwal Lee
Assignee: Rushabh S Shah
Priority: Critical
 Fix For: 2.5.0

 Attachments: HDFS-3087.patch


 If a data node is added to the exclude list and the name node is restarted, 
 decommissioning happens right away on the data node registration. At this 
 point the initial block report has not been sent, so the name node thinks the 
 node has zero blocks and the decommissioning completes very quickly, without 
 replicating the blocks on that node.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8089) Move o.a.h.hdfs.web.resources.* to the client jars

2015-04-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8089?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14487814#comment-14487814
 ] 

Hudson commented on HDFS-8089:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk-Java8 #159 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/159/])
HDFS-8089. Move o.a.h.hdfs.web.resources.* to the client jars. Contributed by 
Haohui Mai. (wheat9: rev cc25823546643caf22bab63ec85fe0c8939593d8)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/resources/SnapshotNameParam.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/resources/StringParam.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/resources/CreateParentParam.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/resources/TokenArgumentParam.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/resources/OldSnapshotNameParam.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/resources/RecursiveParam.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/resources/XAttrEncodingParam.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/resources/XAttrNameParam.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/resources/UserParam.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/resources/HttpOpParam.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/resources/XAttrSetFlagParam.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/resources/PutOpParam.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/resources/NewLengthParam.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/resources/Param.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/resources/DelegationParam.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/resources/RecursiveParam.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/resources/LongParam.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/resources/RenewerParam.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/client/HdfsClientConfigKeys.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/resources/GroupParam.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/resources/GroupParam.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/resources/IntegerParam.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/resources/BlockSizeParam.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/resources/BooleanParam.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/resources/EnumSetParam.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/resources/TokenArgumentParam.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/resources/AclPermissionParam.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/resources/BooleanParam.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/resources/ReplicationParam.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/resources/DoAsParam.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/resources/AccessTimeParam.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/resources/EnumSetParam.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/resources/RenameOptionSetParam.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/resources/PutOpParam.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/resources/ExcludeDatanodesParam.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/resources/LengthParam.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/resources/AclPermissionParam.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/resources/IntegerParam.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/resources/DestinationParam.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/resources/OverwriteParam.java
* 

[jira] [Commented] (HDFS-8096) DatanodeMetrics#blocksReplicated will get incremented early and even for failed transfers

2015-04-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8096?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14487808#comment-14487808
 ] 

Hudson commented on HDFS-8096:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk-Java8 #159 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/159/])
HDFS-8096. DatanodeMetrics#blocksReplicated will get incremented early and even 
for failed transfers (Contributed by Vinayakumar B) (vinayakumarb: rev 
9d8952f97f638ede27e4336b9601507d7bb1de7b)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BPOfferService.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDataNodeMetrics.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/metrics/DataNodeMetrics.java


 DatanodeMetrics#blocksReplicated will get incremented early and even for 
 failed transfers
 -

 Key: HDFS-8096
 URL: https://issues.apache.org/jira/browse/HDFS-8096
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: datanode
Reporter: Vinayakumar B
Assignee: Vinayakumar B
 Fix For: 2.8.0

 Attachments: HDFS-8096-01.patch


 {code}case DatanodeProtocol.DNA_TRANSFER:
   // Send a copy of a block to another datanode
   dn.transferBlocks(bcmd.getBlockPoolId(), bcmd.getBlocks(),
   bcmd.getTargets(), bcmd.getTargetStorageTypes());
   dn.metrics.incrBlocksReplicated(bcmd.getBlocks().length);{code}
 In the above code, which handles replication transfers from the namenode, 
 {{DatanodeMetrics#blocksReplicated}} is incremented too early, since the 
 transfer happens in the background, and even failed transfers get counted.
 The correct place to increment this counter is {{DataTransfer#run()}}.
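 As a hedged sketch of the suggested fix (the class and wiring are simplified 
 stand-ins, not the real DataNode/DataTransfer code), the increment moves out 
 of the command dispatch and into the transfer task itself, so it only fires 
 after a successful transfer:

```java
// Simplified stand-in for moving the metric increment into the transfer
// task itself; not the real DataNode/DataTransfer code.
public class TransferMetricsSketch {
    private long blocksReplicated;

    // Wraps the actual transfer so the counter only moves on success.
    public Runnable countingTransfer(Runnable actualTransfer) {
        return () -> {
            try {
                actualTransfer.run();
                blocksReplicated++; // incremented only after a successful transfer
            } catch (RuntimeException e) {
                // failed transfer: metric is left untouched
            }
        };
    }

    public long getBlocksReplicated() {
        return blocksReplicated;
    }
}
```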



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7813) TestDFSHAAdminMiniCluster#testFencer testcase is failing frequently

2015-04-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7813?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14487911#comment-14487911
 ] 

Hudson commented on HDFS-7813:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk #2108 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2108/])
Revert HDFS-7813. (wheat9: rev 82d56b337d468f4065df5005f9f67487ac97d2d7)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/tools/TestDFSHAAdminMiniCluster.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


 TestDFSHAAdminMiniCluster#testFencer testcase is failing frequently
 ---

 Key: HDFS-7813
 URL: https://issues.apache.org/jira/browse/HDFS-7813
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: ha, test
Reporter: Rakesh R
Assignee: Rakesh R
 Fix For: 2.7.0

 Attachments: HDFS-7813-001.patch


 Following is the failure trace.
 {code}
 java.lang.AssertionError: expected:<0> but was:<-1>
   at org.junit.Assert.fail(Assert.java:88)
   at org.junit.Assert.failNotEquals(Assert.java:743)
   at org.junit.Assert.assertEquals(Assert.java:118)
   at org.junit.Assert.assertEquals(Assert.java:555)
   at org.junit.Assert.assertEquals(Assert.java:542)
   at 
 org.apache.hadoop.hdfs.tools.TestDFSHAAdminMiniCluster.testFencer(TestDFSHAAdminMiniCluster.java:163)
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7808) Remove obsolete -ns options in in DFSHAAdmin.java

2015-04-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7808?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14487913#comment-14487913
 ] 

Hudson commented on HDFS-7808:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk #2108 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2108/])
Revert HDFS-7808. (wheat9: rev bd4c99bece56d1671c6f89eff8a529f4e7ac2933)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/DFSHAAdmin.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/tools/TestDFSHAAdminMiniCluster.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/tools/TestDFSHAAdmin.java


 Remove obsolete -ns options in in DFSHAAdmin.java
 -

 Key: HDFS-7808
 URL: https://issues.apache.org/jira/browse/HDFS-7808
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Arshad Mohammad
Assignee: Arshad Mohammad
Priority: Minor
 Attachments: HDFS-7808-1.patch


 After the HDFS-7324 fix, the following piece of code has become unused. It 
 should be removed.
 {code}
 int i = 0;
 String cmd = argv[i++];
 if ("-ns".equals(cmd)) {
   if (i == argv.length) {
     errOut.println("Missing nameservice ID");
     printUsage(errOut);
     return -1;
   }
   nameserviceId = argv[i++];
   if (i >= argv.length) {
     errOut.println("Missing command");
     printUsage(errOut);
     return -1;
   }
   argv = Arrays.copyOfRange(argv, i, argv.length);
 }
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8089) Move o.a.h.hdfs.web.resources.* to the client jars

2015-04-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8089?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14487905#comment-14487905
 ] 

Hudson commented on HDFS-8089:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk #2108 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2108/])
HDFS-8089. Move o.a.h.hdfs.web.resources.* to the client jars. Contributed by 
Haohui Mai. (wheat9: rev cc25823546643caf22bab63ec85fe0c8939593d8)
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/resources/TokenArgumentParam.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/resources/NewLengthParam.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/resources/XAttrValueParam.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/resources/OverwriteParam.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/resources/EnumSetParam.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/resources/GetOpParam.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/resources/AclPermissionParam.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/resources/Param.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/resources/IntegerParam.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/resources/XAttrSetFlagParam.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/resources/ReplicationParam.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/resources/PostOpParam.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/resources/SnapshotNameParam.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/resources/ShortParam.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/resources/ShortParam.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/resources/ReplicationParam.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/resources/LengthParam.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/resources/PutOpParam.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/resources/IntegerParam.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/resources/RenameOptionSetParam.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/resources/OldSnapshotNameParam.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/resources/CreateParentParam.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/resources/ConcatSourcesParam.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/resources/UserParam.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/resources/RenewerParam.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/resources/RecursiveParam.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/resources/BlockSizeParam.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/resources/EnumSetParam.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/resources/XAttrNameParam.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/resources/XAttrEncodingParam.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/resources/LongParam.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/resources/HttpOpParam.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/resources/BooleanParam.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/resources/FsActionParam.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/resources/GroupParam.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/resources/ModificationTimeParam.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/resources/LongParam.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/resources/PutOpParam.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/resources/SnapshotNameParam.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/resources/Param.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/resources/OwnerParam.java
* 

[jira] [Updated] (HDFS-8110) Remove unsupported operation , -rollingUpgrade downgrade related information from document

2015-04-09 Thread J.Andreina (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8110?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

J.Andreina updated HDFS-8110:
-
Attachment: HDFS-8110.1.patch

Attached an initial patch for this issue. 
Please review.

 Remove unsupported operation , -rollingUpgrade downgrade related information 
 from document
 --

 Key: HDFS-8110
 URL: https://issues.apache.org/jira/browse/HDFS-8110
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: documentation
Reporter: J.Andreina
Assignee: J.Andreina
Priority: Minor
 Attachments: HDFS-8110.1.patch


 Support for -rollingUpgrade downgrade has been removed as part of HDFS-7302.
 The corresponding information should be removed from the document as well.
 {noformat}
 Downgrade with Downtime
 Administrator may choose to first shutdown the cluster and then downgrade it. 
 The following are the steps:
 Shutdown all NNs and DNs.
 Restore the pre-upgrade release in all machines.
 Start NNs with the -rollingUpgrade downgrade option.
 Start DNs normally.
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8062) Remove hard-coded values in favor of EC schema

2015-04-09 Thread Kai Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8062?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14486819#comment-14486819
 ] 

Kai Zheng commented on HDFS-8062:
-

Right now, for the client to write and read, is it possible to use the schema 
object passed from the NameNode, and to limit the use of the system default one 
to the NameNode side? If that is not easy, we could get it done separately.

 Remove hard-coded values in favor of EC schema
 --

 Key: HDFS-8062
 URL: https://issues.apache.org/jira/browse/HDFS-8062
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Kai Zheng
Assignee: Kai Sasaki
 Attachments: HDFS-8062.1.patch, HDFS-8062.2.patch


 Related issues about EC schema in NameNode side:
 HDFS-7859 is to change fsimage and editlog in NameNode to persist EC schemas;
 HDFS-7866 is to manage EC schemas in the NameNode: loading them, and syncing 
 between the ones persisted in the image and the ones predefined in XML.
 This is to revisit all the places in the NameNode that use hard-coded values 
 in favor of {{ECSchema}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8102) Separate webhdfs retry configuration keys from DFSConfigKeys

2015-04-09 Thread Haohui Mai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8102?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haohui Mai updated HDFS-8102:
-
   Resolution: Fixed
Fix Version/s: 2.8.0
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

I've committed the patch to trunk and branch-2. Thanks Brandon for reviews.

 Separate webhdfs retry configuration keys from DFSConfigKeys
 

 Key: HDFS-8102
 URL: https://issues.apache.org/jira/browse/HDFS-8102
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Haohui Mai
Assignee: Haohui Mai
Priority: Minor
 Fix For: 2.8.0

 Attachments: HDFS-8102.000.patch, HDFS-8102.001.patch






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7859) Erasure Coding: Persist EC schemas in NameNode

2015-04-09 Thread Xinwei Qin (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7859?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14487291#comment-14487291
 ] 

Xinwei Qin  commented on HDFS-7859:
---

Hi [~drankye],
Thanks for your clarification and suggestion. The issue is clearer to me now, 
and I will post the patch ASAP.

 Erasure Coding: Persist EC schemas in NameNode
 --

 Key: HDFS-7859
 URL: https://issues.apache.org/jira/browse/HDFS-7859
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Kai Zheng
Assignee: Xinwei Qin 

 In meetup discussion with [~zhz] and [~jingzhao], it's suggested that we 
 persist EC schemas in NameNode centrally and reliably, so that EC zones can 
 reference them by name efficiently.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8081) Split getAdditionalBlock() into two methods.

2015-04-09 Thread Konstantin Shvachko (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8081?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Shvachko updated HDFS-8081:
--
Attachment: HDFS-8081-02.patch

Fixed the nits. Thanks Yi.

 Split getAdditionalBlock() into two methods.
 

 Key: HDFS-8081
 URL: https://issues.apache.org/jira/browse/HDFS-8081
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.7.0
Reporter: Konstantin Shvachko
Assignee: Konstantin Shvachko
 Attachments: HDFS-8081-01.patch, HDFS-8081-02.patch


 A minor refactoring to introduce two methods, one corresponding to Part I and 
 another to Part II, to make {{getAdditionalBlock()}} more readable.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8090) Erasure Coding: Add RPC to client-namenode to list all ECSchemas loaded in Namenode.

2015-04-09 Thread Vinayakumar B (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8090?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinayakumar B updated HDFS-8090:

Attachment: HDFS-8090-02.patch

Attached the patch with a test.

 Erasure Coding: Add RPC to client-namenode to list all ECSchemas loaded in 
 Namenode.
 

 Key: HDFS-8090
 URL: https://issues.apache.org/jira/browse/HDFS-8090
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Vinayakumar B
Assignee: Vinayakumar B
 Attachments: HDFS-8090-01.patch, HDFS-8090-02.patch


 ECSchemas will be configured and loaded only at the NameNode to avoid 
 conflicts.
 The client has to specify one of these schemas during creation of EC zones.
 So, add an RPC to ClientProtocol to get all ECSchemas loaded at the NameNode, 
 so that the client can choose any one of them.
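 A minimal sketch of what such an RPC might look like from the client side 
 (ECSchema, the method name, and the selection helper are all placeholders 
 for illustration, not the actual patch):

```java
import java.util.List;

// Placeholder types sketching the proposed list-schemas RPC.
public class EcSchemaRpcSketch {
    static class ECSchema {
        final String name;
        ECSchema(String name) { this.name = name; }
    }

    // Hypothetical extension to ClientProtocol: list every schema the
    // NameNode has loaded, so clients never invent their own.
    interface ClientProtocolExtension {
        List<ECSchema> getECSchemas();
    }

    // A client must pick one of the NameNode-provided schemas for its EC zone.
    static String chooseSchema(ClientProtocolExtension nn, String wanted) {
        for (ECSchema s : nn.getECSchemas()) {
            if (s.name.equals(wanted)) {
                return s.name;
            }
        }
        throw new IllegalArgumentException("schema not loaded at NameNode: " + wanted);
    }
}
```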



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8096) DatanodeMetrics#blocksReplicated will get incremented early and even for failed transfers

2015-04-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8096?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14487118#comment-14487118
 ] 

Hudson commented on HDFS-8096:
--

FAILURE: Integrated in Hadoop-Yarn-trunk-Java8 #158 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/158/])
HDFS-8096. DatanodeMetrics#blocksReplicated will get incremented early and even 
for failed transfers (Contributed by Vinayakumar B) (vinayakumarb: rev 
9d8952f97f638ede27e4336b9601507d7bb1de7b)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/metrics/DataNodeMetrics.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BPOfferService.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDataNodeMetrics.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java


 DatanodeMetrics#blocksReplicated will get incremented early and even for 
 failed transfers
 -

 Key: HDFS-8096
 URL: https://issues.apache.org/jira/browse/HDFS-8096
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: datanode
Reporter: Vinayakumar B
Assignee: Vinayakumar B
 Fix For: 2.8.0

 Attachments: HDFS-8096-01.patch


 {code}case DatanodeProtocol.DNA_TRANSFER:
   // Send a copy of a block to another datanode
   dn.transferBlocks(bcmd.getBlockPoolId(), bcmd.getBlocks(),
   bcmd.getTargets(), bcmd.getTargetStorageTypes());
   dn.metrics.incrBlocksReplicated(bcmd.getBlocks().length);{code}
 In the above code, which handles replication transfers from the namenode, 
 {{DatanodeMetrics#blocksReplicated}} is incremented too early, since the 
 transfer happens in the background, and even failed transfers get counted.
 The correct place to increment this counter is {{DataTransfer#run()}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8079) Separate the client retry conf from DFSConfigKeys

2015-04-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8079?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14487122#comment-14487122
 ] 

Hudson commented on HDFS-8079:
--

FAILURE: Integrated in Hadoop-Yarn-trunk-Java8 #158 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/158/])
HDFS-8079. Move CorruptFileBlockIterator to a new hdfs.client.impl package. 
(szetszwo: rev c931a3c7760e417f593f5e73f4cf55f6fe1defc5)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/client/impl/CorruptFileBlockIterator.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DistributedFileSystem.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/fs/Hdfs.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestListCorruptFileBlocks.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/CorruptFileBlockIterator.java


 Separate the client retry conf from DFSConfigKeys
 -

 Key: HDFS-8079
 URL: https://issues.apache.org/jira/browse/HDFS-8079
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: hdfs-client
Reporter: Tsz Wo Nicholas Sze
Assignee: Tsz Wo Nicholas Sze
 Fix For: 2.8.0

 Attachments: h8079_20150407.patch, h8079_20150407b.patch


 As part of HDFS-8050, move the dfs.client.retry.* conf keys from DFSConfigKeys 
 to a new class, HdfsClientConfigKeys.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8046) Allow better control of getContentSummary

2015-04-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8046?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14487134#comment-14487134
 ] 

Hudson commented on HDFS-8046:
--

FAILURE: Integrated in Hadoop-Yarn-trunk-Java8 #158 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/158/])
HDFS-8046. Allow better control of getContentSummary. Contributed by Kihwal 
Lee. (kihwal: rev 285b31e75e51ec8e3a796c2cb0208739368ca9b8)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirStatAndListingOp.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/ContentSummaryComputationContext.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirectory.java


 Allow better control of getContentSummary
 -

 Key: HDFS-8046
 URL: https://issues.apache.org/jira/browse/HDFS-8046
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Kihwal Lee
Assignee: Kihwal Lee
 Fix For: 2.8.0

 Attachments: HDFS-8046.v1.patch


 On busy clusters, users performing quota checks against a big directory 
 structure can affect namenode performance. It has become a lot better 
 after HDFS-4995, but as clusters get bigger and busier, it is apparent that 
 we need finer-grained control to avoid a long read lock causing a throughput 
 drop.
 Even with the unfair namesystem lock setting, a long read lock (tens of 
 milliseconds) can starve many readers and especially writers. So the locking 
 duration should be reduced, which can be done by imposing a lower 
 count-per-iteration limit in the existing implementation. But HDFS-4995 came 
 with a fixed amount of sleep between locks. This needs to be made 
 configurable, so that {{getContentSummary()}} doesn't get exceedingly slow.
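 The yielding pattern being discussed can be sketched as a toy model (assumed 
 names, not the actual ContentSummaryComputationContext): process at most a 
 fixed count of items per lock hold, then release the lock and pause for a 
 now-configurable interval before re-acquiring it.

```java
// Toy model of lock yielding during a long traversal; names are assumptions.
public class YieldingTraversalSketch {
    private final int countLimit;   // items processed per lock hold
    private final long sleepMillis; // configurable pause between holds

    public YieldingTraversalSketch(int countLimit, long sleepMillis) {
        this.countLimit = countLimit;
        this.sleepMillis = sleepMillis;
    }

    // Returns how many times the lock was released and re-acquired.
    public int traverse(int totalItems) {
        int yields = 0;
        int processed = 0;
        while (processed < totalItems) {
            // ... hold the read lock, process up to countLimit items ...
            processed += Math.min(countLimit, totalItems - processed);
            if (processed < totalItems) {
                try {
                    Thread.sleep(sleepMillis); // release the lock; let writers in
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                    break;
                }
                yields++;
            }
        }
        return yields;
    }
}
```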



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7979) Initialize block report IDs with a random number

2015-04-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7979?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14487124#comment-14487124
 ] 

Hudson commented on HDFS-7979:
--

FAILURE: Integrated in Hadoop-Yarn-trunk-Java8 #158 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/158/])
HDFS-7979. Initialize block report IDs with a random number. (andrew.wang: rev 
b1e059089d6a5b2b7006d7d384c6df81ed268bd9)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/protocol/BlockReportContext.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BPServiceActor.java


 Initialize block report IDs with a random number
 

 Key: HDFS-7979
 URL: https://issues.apache.org/jira/browse/HDFS-7979
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: datanode
Affects Versions: 2.7.0
Reporter: Andrew Wang
Assignee: Andrew Wang
Priority: Minor
 Fix For: 2.8.0

 Attachments: HDFS-7979.001.patch, HDFS-7979.002.patch, 
 HDFS-7979.003.patch, HDFS-7979.004.patch


 Right now block report IDs use the system nanotime. This isn't very random, 
 so let's start them at a random number for some extra safety.
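 A minimal sketch of the idea (field and method names are assumptions, not 
 the actual BPServiceActor code): seed the ID with a random value once, then 
 keep incrementing, so IDs still advance monotonically per datanode but no 
 longer start from a predictable nanotime.

```java
import java.util.concurrent.ThreadLocalRandom;

// Sketch: block report IDs start at a random seed instead of nanotime.
public class BlockReportIdSketch {
    private long lastReportId = ThreadLocalRandom.current().nextLong();

    // Still monotonically advancing, just from an unpredictable starting point.
    public long nextReportId() {
        return ++lastReportId;
    }
}
```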



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8076) Code cleanup for DFSInputStream: use offset instead of LocatedBlock when possible

2015-04-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8076?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14487126#comment-14487126
 ] 

Hudson commented on HDFS-8076:
--

FAILURE: Integrated in Hadoop-Yarn-trunk-Java8 #158 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/158/])
HDFS-8076. Code cleanup for DFSInputStream: use offset instead of LocatedBlock 
when possible. Contributed by Zhe Zhang. (wang: rev 
a42bb1cd915abe5dc33eda3c01e8c74c64f35748)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSInputStream.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


 Code cleanup for DFSInputStream: use offset instead of LocatedBlock when 
 possible
 -

 Key: HDFS-8076
 URL: https://issues.apache.org/jira/browse/HDFS-8076
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Zhe Zhang
Assignee: Zhe Zhang
 Fix For: 2.8.0

 Attachments: HDFS-8076-000.patch


 This JIRA aims to refactor the signatures of {{fetchBlockByteRange}} and 
 {{actualGetFromOneDataNode}}. Instead of taking a {{LocatedBlock}}, I think 
 they should just take the starting offset of that block, since they'll later 
 call {{getBlockAt}} to refresh the location anyway. I think we should make 
 this clear so the callers are not surprised if the {{LocatedBlock}} finally 
 used is not the one they passed in.
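 To illustrate the direction of the refactoring (types here are simplified 
 stand-ins, not the real DFSInputStream API), the method takes a block start 
 offset and resolves the possibly refreshed block internally, so callers 
 never hold a stale {{LocatedBlock}}:

```java
// Simplified stand-ins illustrating the DFSInputStream refactoring direction.
public class OffsetRefactorSketch {
    static class LocatedBlock {
        final long startOffset;
        LocatedBlock(long startOffset) { this.startOffset = startOffset; }
    }

    // After the refactoring: callers pass only the offset; the freshest
    // block locations are resolved here (the real code calls getBlockAt()).
    public LocatedBlock fetchBlockAt(long blockStartOffset) {
        return new LocatedBlock(blockStartOffset); // stand-in for a NN lookup
    }
}
```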



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7725) Incorrect nodes in service metrics caused all writes to fail

2015-04-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7725?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14487128#comment-14487128
 ] 

Hudson commented on HDFS-7725:
--

FAILURE: Integrated in Hadoop-Yarn-trunk-Java8 #158 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/158/])
HDFS-7725. Incorrect 'nodes in service' metrics caused all writes to fail. 
Contributed by Ming Ma. (wang: rev 6af0d74a75f0f58d5e92e2e91e87735b9a62bb12)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestNamenodeCapacityReport.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DecommissionManager.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/HeartbeatManager.java


 Incorrect 'nodes in service' metrics caused all writes to fail
 --

 Key: HDFS-7725
 URL: https://issues.apache.org/jira/browse/HDFS-7725
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Ming Ma
Assignee: Ming Ma
 Fix For: 2.8.0

 Attachments: HDFS-7725-2.patch, HDFS-7725-3.patch, HDFS-7725.patch


 One of our clusters sometimes couldn't allocate blocks from any DNs. 
 BlockPlacementPolicyDefault complains with the following messages for all DNs.
 {noformat}
 the node is too busy (load:x > y)
 {noformat}
 It turns out the {{HeartbeatManager}}'s {{nodesInService}} was computed 
 incorrectly when admins decomm or recomm dead nodes. Here are two scenarios.
 * Decomm dead nodes. It turns out HDFS-7374 has fixed it; not sure if it is 
 intentional. cc / [~zhz], [~andrew.wang], [~atm] Here is the sequence of 
 event without HDFS-7374.
 ** Cluster has one live node. nodesInService == 1
 ** The node becomes dead. nodesInService == 0
 ** Decomm the node. nodesInService == -1
 * However, HDFS-7374 introduces another inconsistency when recomm is involved.
 ** Cluster has one live node. nodesInService == 1
 ** The node becomes dead. nodesInService == 0
 ** Decomm the node. nodesInService == 0
 ** Recomm the node. nodesInService == 1
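The scenarios above boil down to one invariant: {{nodesInService}} must only change for nodes that are live and in service, so decommissioning or recommissioning a dead node leaves the count untouched. A hypothetical counter illustrating that invariant (not the actual HeartbeatManager code):

```java
// Hypothetical sketch of the nodesInService accounting invariant: only
// live, in-service nodes affect the counter, so decomm/recomm of a dead
// node can never drive it negative or make it inconsistent.
class NodesInServiceCounter {
    private int nodesInService;

    void register(boolean live) { if (live) nodesInService++; }

    void markDead(boolean inService) { if (inService) nodesInService--; }

    // Decommissioning a dead node must be a no-op on the counter.
    void startDecommission(boolean live) { if (live) nodesInService--; }

    void stopDecommission(boolean live) { if (live) nodesInService++; }

    int get() { return nodesInService; }
}
```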



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8025) Addendum fix for HDFS-3087 Decomissioning on NN restart can complete without blocks being replicated

2015-04-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8025?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14487132#comment-14487132
 ] 

Hudson commented on HDFS-8025:
--

FAILURE: Integrated in Hadoop-Yarn-trunk-Java8 #158 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/158/])
HDFS-8025. Addendum fix for HDFS-3087 Decomissioning on NN restart can complete 
without blocks being replicated. Contributed by Ming Ma. (wang: rev 
5a540c3d3107199f4632e2ad7ee8ff913b107a04)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDecommission.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java


 Addendum fix for HDFS-3087 Decomissioning on NN restart can complete without 
 blocks being replicated
 

 Key: HDFS-8025
 URL: https://issues.apache.org/jira/browse/HDFS-8025
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Ming Ma
Assignee: Ming Ma
 Fix For: 2.8.0

 Attachments: HDFS-8025-2.patch, HDFS-8025.patch


 Per discussion with [~andrew.wang] on HDFS-7411, we should include HDFS-3087 
 and enhance the unit test.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7813) TestDFSHAAdminMiniCluster#testFencer testcase is failing frequently

2015-04-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7813?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14487129#comment-14487129
 ] 

Hudson commented on HDFS-7813:
--

FAILURE: Integrated in Hadoop-Yarn-trunk-Java8 #158 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/158/])
Revert HDFS-7813. (wheat9: rev 82d56b337d468f4065df5005f9f67487ac97d2d7)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/tools/TestDFSHAAdminMiniCluster.java


 TestDFSHAAdminMiniCluster#testFencer testcase is failing frequently
 ---

 Key: HDFS-7813
 URL: https://issues.apache.org/jira/browse/HDFS-7813
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: ha, test
Reporter: Rakesh R
Assignee: Rakesh R
 Fix For: 2.7.0

 Attachments: HDFS-7813-001.patch


 Following is the failure trace.
 {code}
 java.lang.AssertionError: expected:<0> but was:<-1>
   at org.junit.Assert.fail(Assert.java:88)
   at org.junit.Assert.failNotEquals(Assert.java:743)
   at org.junit.Assert.assertEquals(Assert.java:118)
   at org.junit.Assert.assertEquals(Assert.java:555)
   at org.junit.Assert.assertEquals(Assert.java:542)
   at 
 org.apache.hadoop.hdfs.tools.TestDFSHAAdminMiniCluster.testFencer(TestDFSHAAdminMiniCluster.java:163)
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7808) Remove obsolete -ns options in DFSHAAdmin.java

2015-04-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7808?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14487131#comment-14487131
 ] 

Hudson commented on HDFS-7808:
--

FAILURE: Integrated in Hadoop-Yarn-trunk-Java8 #158 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/158/])
Revert HDFS-7808. (wheat9: rev bd4c99bece56d1671c6f89eff8a529f4e7ac2933)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/tools/TestDFSHAAdmin.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/DFSHAAdmin.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/tools/TestDFSHAAdminMiniCluster.java


 Remove obsolete -ns options in DFSHAAdmin.java
 -

 Key: HDFS-7808
 URL: https://issues.apache.org/jira/browse/HDFS-7808
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Arshad Mohammad
Assignee: Arshad Mohammad
Priority: Minor
 Attachments: HDFS-7808-1.patch


 After the HDFS-7324 fix, the following piece of code became unused. It should 
 be removed.
 {code}
 int i = 0;
 String cmd = argv[i++];
 if ("-ns".equals(cmd)) {
   if (i == argv.length) {
     errOut.println("Missing nameservice ID");
     printUsage(errOut);
     return -1;
   }
   nameserviceId = argv[i++];
   if (i >= argv.length) {
     errOut.println("Missing command");
     printUsage(errOut);
     return -1;
   }
   argv = Arrays.copyOfRange(argv, i, argv.length);
 }
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HDFS-7188) support build libhdfs3 on windows

2015-04-09 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7188?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe resolved HDFS-7188.

   Resolution: Fixed
Fix Version/s: HDFS-6994

committed to HDFS-6994

 support build libhdfs3 on windows
 -

 Key: HDFS-7188
 URL: https://issues.apache.org/jira/browse/HDFS-7188
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: hdfs-client
 Environment: Windows System, Visual Studio 2010
Reporter: Zhanwei Wang
Assignee: Thanh Do
 Fix For: HDFS-6994

 Attachments: HDFS-7188-branch-HDFS-6994-0.patch, 
 HDFS-7188-branch-HDFS-6994-1.patch, HDFS-7188-branch-HDFS-6994-2.patch, 
 HDFS-7188-branch-HDFS-6994-3.patch


 libhdfs3 should work on windows



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8111) NPE thrown when invalid FSImage filename given for hdfs oiv_legacy cmd

2015-04-09 Thread surendra singh lilhore (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8111?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

surendra singh lilhore updated HDFS-8111:
-
Fix Version/s: 3.0.0
Affects Version/s: 2.6.0
   Status: Patch Available  (was: Open)

 NPE thrown when invalid FSImage filename given for hdfs oiv_legacy cmd
 

 Key: HDFS-8111
 URL: https://issues.apache.org/jira/browse/HDFS-8111
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: tools
Affects Versions: 2.6.0
Reporter: Archana T
Assignee: surendra singh lilhore
Priority: Minor
 Fix For: 3.0.0

 Attachments: HDFS-8111.patch


 NPE thrown when invalid filename is given as argument for hdfs oiv_legacy 
 command
 {code}
 ./hdfs oiv_legacy -i 
 /home/hadoop/hadoop/hadoop-3.0.0/dfs/name/current/fsimage_00042 
 -o fsimage.txt 
 Exception in thread "main" java.lang.NullPointerException
 at 
 org.apache.hadoop.hdfs.tools.offlineImageViewer.OfflineImageViewer.go(OfflineImageViewer.java:140)
 at 
 org.apache.hadoop.hdfs.tools.offlineImageViewer.OfflineImageViewer.main(OfflineImageViewer.java:260)
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8089) Move o.a.h.hdfs.web.resources.* to the client jars

2015-04-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8089?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14487123#comment-14487123
 ] 

Hudson commented on HDFS-8089:
--

FAILURE: Integrated in Hadoop-Yarn-trunk-Java8 #158 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/158/])
HDFS-8089. Move o.a.h.hdfs.web.resources.* to the client jars. Contributed by 
Haohui Mai. (wheat9: rev cc25823546643caf22bab63ec85fe0c8939593d8)
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/resources/XAttrNameParam.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/resources/StringParam.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/resources/EnumSetParam.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/resources/ConcatSourcesParam.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/resources/ModificationTimeParam.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/client/HdfsClientConfigKeys.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/resources/CreateParentParam.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/resources/UserParam.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/resources/PutOpParam.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/resources/EnumParam.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/resources/GetOpParam.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/resources/IntegerParam.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/resources/LongParam.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/resources/PermissionParam.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/resources/ShortParam.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/resources/DelegationParam.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/resources/OverwriteParam.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/resources/GroupParam.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/resources/RecursiveParam.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/resources/AccessTimeParam.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/resources/DestinationParam.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/resources/HttpOpParam.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/resources/NewLengthParam.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/resources/DelegationParam.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/resources/SnapshotNameParam.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/resources/BooleanParam.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/resources/XAttrSetFlagParam.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/resources/GroupParam.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/resources/TokenArgumentParam.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/resources/ExcludeDatanodesParam.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/resources/FsActionParam.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/resources/XAttrEncodingParam.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/resources/FsActionParam.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/resources/ExcludeDatanodesParam.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/resources/SnapshotNameParam.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/resources/OldSnapshotNameParam.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/resources/EnumParam.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/resources/XAttrValueParam.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/resources/XAttrSetFlagParam.java
* 

[jira] [Commented] (HDFS-3087) Decomissioning on NN restart can complete without blocks being replicated

2015-04-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3087?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14487119#comment-14487119
 ] 

Hudson commented on HDFS-3087:
--

FAILURE: Integrated in Hadoop-Yarn-trunk-Java8 #158 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/158/])
HDFS-8025. Addendum fix for HDFS-3087 Decomissioning on NN restart can complete 
without blocks being replicated. Contributed by Ming Ma. (wang: rev 
5a540c3d3107199f4632e2ad7ee8ff913b107a04)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDecommission.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java


 Decomissioning on NN restart can complete without blocks being replicated
 -

 Key: HDFS-3087
 URL: https://issues.apache.org/jira/browse/HDFS-3087
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 0.23.0
Reporter: Kihwal Lee
Assignee: Rushabh S Shah
Priority: Critical
 Fix For: 2.5.0

 Attachments: HDFS-3087.patch


 If a data node is added to the exclude list and the name node is restarted, 
 the decommissioning happens right away upon data node registration. At that 
 point the initial block report has not been sent, so the name node thinks the 
 node has zero blocks and the decommissioning completes very quickly, without 
 replicating the blocks on that node.
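One way to express the guard the description calls for: never declare decommissioning complete before the node's first block report has arrived. This is a hypothetical sketch of the check, not the actual DecommissionManager logic.

```java
// Hypothetical sketch: a decommission-completion check that refuses to
// finish until the initial block report has been received, so a freshly
// restarted NameNode does not see "zero blocks" and complete instantly.
class DecommissionCheck {
    static boolean isDecommissionComplete(boolean initialBlockReportReceived,
                                          int underReplicatedBlocks) {
        if (!initialBlockReportReceived) {
            return false; // block counts are unknown; keep waiting
        }
        return underReplicatedBlocks == 0;
    }
}
```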



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8072) Reserved RBW space is not released if client terminates while writing block

2015-04-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8072?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14487117#comment-14487117
 ] 

Hudson commented on HDFS-8072:
--

FAILURE: Integrated in Hadoop-Yarn-trunk-Java8 #158 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/158/])
HDFS-8072. Reserved RBW space is not released if client terminates while 
writing block. (Arpit Agarwal) (arp: rev 
608c4998419c18fd95019b28cc56b5bd5aa4cc01)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/ReplicaInPipeline.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/TestRbwSpaceReservation.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/ReplicaInPipelineInterface.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/SimulatedFSDataset.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/extdataset/ExternalReplicaInPipeline.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockReceiver.java


 Reserved RBW space is not released if client terminates while writing block
 ---

 Key: HDFS-8072
 URL: https://issues.apache.org/jira/browse/HDFS-8072
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: datanode
Affects Versions: 2.6.0
Reporter: Arpit Agarwal
Assignee: Arpit Agarwal
 Fix For: 2.7.0

 Attachments: HDFS-8072.01.patch, HDFS-8072.02.patch


 The DataNode reserves space for a full block when creating an RBW block 
 (introduced in HDFS-6898).
 The reserved space is released incrementally as data is written to disk and 
 fully when the block is finalized. However if the client process terminates 
 unexpectedly mid-write then the reserved space is not released until the DN 
 is restarted.
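The reservation lifecycle described above can be sketched as a tiny accounting class: reserve a full block up front, shrink the reservation as bytes reach disk, and release whatever remains on finalize or when the writer dies mid-write. Names here are hypothetical, not the real ReplicaInPipeline API.

```java
// Hypothetical sketch of incremental RBW space accounting. The key point of
// the fix is that releaseRemaining() must run not only on finalize but also
// when the client connection drops before the block is finalized.
class RbwReservation {
    private long reserved;

    RbwReservation(long blockSize) { this.reserved = blockSize; }

    // Release space incrementally as data is flushed to disk.
    void onBytesWritten(long n) {
        reserved = Math.max(0, reserved - n);
    }

    // Called on finalize OR when the writer terminates mid-write.
    long releaseRemaining() {
        long r = reserved;
        reserved = 0;
        return r;
    }

    long getReserved() { return reserved; }
}
```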



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8101) DFSClient use of non-constant DFSConfigKeys pulls in WebHDFS classes at runtime

2015-04-09 Thread Aaron T. Myers (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8101?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aaron T. Myers updated HDFS-8101:
-
   Resolution: Fixed
Fix Version/s: 2.8.0
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

I've just committed this to trunk and branch-2.

Thanks very much for the contribution, Sean.

 DFSClient use of non-constant DFSConfigKeys pulls in WebHDFS classes at 
 runtime
 ---

 Key: HDFS-8101
 URL: https://issues.apache.org/jira/browse/HDFS-8101
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: hdfs-client
Affects Versions: 2.7.0
Reporter: Sean Busbey
Assignee: Sean Busbey
Priority: Minor
 Fix For: 2.8.0

 Attachments: HDFS-8101.1.patch.txt


 Previously, all references to DFSConfigKeys in DFSClient were compile time 
 constants which meant that normal users of DFSClient wouldn't resolve 
 DFSConfigKeys at run time. As of HDFS-7718, DFSClient has a reference to a 
 member of DFSConfigKeys that isn't compile time constant 
 (DFS_CLIENT_KEY_PROVIDER_CACHE_EXPIRY_DEFAULT).
 Since the class must be resolved now, this particular member
 {code}
 public static final String  DFS_WEBHDFS_AUTHENTICATION_FILTER_DEFAULT = 
 AuthFilter.class.getName();
 {code}
 means that javax.servlet.Filter needs to be on the classpath.
 javax-servlet-api is one of the properly listed dependencies for HDFS, 
 however if we replace {{AuthFilter.class.getName()}} with the equivalent 
 String literal then downstream folks can avoid including it while maintaining 
 compatibility.
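The compile-time-constant distinction the description relies on can be demonstrated in miniature. In the sketch below (hypothetical names, using {{String.class}} in place of {{AuthFilter.class}}), the literal-initialized field is inlined into callers by javac, while the computed one forces the declaring class to resolve at run time.

```java
// Hypothetical demonstration: a String field initialized from a literal is a
// compile-time constant (JLS constant expression) and is inlined into the
// caller's class file, so the declaring class need not be loaded. A field
// initialized via a method call -- like AuthFilter.class.getName() -- is not
// a constant, so reading it resolves the whole declaring class at run time.
class ConstantKeys {
    // Inlined into callers at compile time.
    static final String LITERAL_KEY = "dfs.client.example.key";

    // Not a compile-time constant; reading it loads ConstantKeys and, in the
    // real DFSConfigKeys case, everything its other fields reference.
    static final String COMPUTED_KEY = String.class.getName();
}
```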



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8101) DFSClient use of non-constant DFSConfigKeys pulls in WebHDFS classes at runtime

2015-04-09 Thread Aaron T. Myers (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8101?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14487632#comment-14487632
 ] 

Aaron T. Myers commented on HDFS-8101:
--

+1, the patch looks good to me. Good sleuthing, Sean.

I'm going to commit this momentarily.

 DFSClient use of non-constant DFSConfigKeys pulls in WebHDFS classes at 
 runtime
 ---

 Key: HDFS-8101
 URL: https://issues.apache.org/jira/browse/HDFS-8101
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: hdfs-client
Affects Versions: 2.7.0
Reporter: Sean Busbey
Assignee: Sean Busbey
Priority: Minor
 Attachments: HDFS-8101.1.patch.txt


 Previously, all references to DFSConfigKeys in DFSClient were compile time 
 constants which meant that normal users of DFSClient wouldn't resolve 
 DFSConfigKeys at run time. As of HDFS-7718, DFSClient has a reference to a 
 member of DFSConfigKeys that isn't compile time constant 
 (DFS_CLIENT_KEY_PROVIDER_CACHE_EXPIRY_DEFAULT).
 Since the class must be resolved now, this particular member
 {code}
 public static final String  DFS_WEBHDFS_AUTHENTICATION_FILTER_DEFAULT = 
 AuthFilter.class.getName();
 {code}
 means that javax.servlet.Filter needs to be on the classpath.
 javax-servlet-api is one of the properly listed dependencies for HDFS, 
 however if we replace {{AuthFilter.class.getName()}} with the equivalent 
 String literal then downstream folks can avoid including it while maintaining 
 compatibility.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7993) Incorrect descriptions in fsck when nodes are decommissioned

2015-04-09 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7993?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14487025#comment-14487025
 ] 

Hadoop QA commented on HDFS-7993:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12724131/HDFS-7993.1.patch
  against trunk revision b1e0590.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:red}-1 findbugs{color}.  The patch appears to introduce 1 new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-hdfs-project/hadoop-hdfs.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/10226//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HDFS-Build/10226//artifact/patchprocess/newPatchFindbugsWarningshadoop-hdfs.html
Console output: 
https://builds.apache.org/job/PreCommit-HDFS-Build/10226//console

This message is automatically generated.

 Incorrect descriptions in fsck when nodes are decommissioned
 

 Key: HDFS-7993
 URL: https://issues.apache.org/jira/browse/HDFS-7993
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.6.0
Reporter: Ming Ma
Assignee: J.Andreina
 Attachments: HDFS-7993.1.patch


 When you run fsck with "-files" or "-racks", you will get something like 
 below if one of the replicas is decommissioned.
 {noformat}
 blk_x len=y repl=3 [dn1, dn2, dn3, dn4]
 {noformat}
 That is because in NamenodeFsck, the repl count comes from the live replica 
 count, while the actual nodes come from the LocatedBlock, which includes 
 decommissioned nodes.
 Another issue in NamenodeFsck is that BlockPlacementPolicy's verifyBlockPlacement 
 verifies a LocatedBlock that includes decommissioned nodes. However, it seems 
 better to exclude the decommissioned nodes in the verification, just like how 
 fsck excludes decommissioned nodes when it checks for under-replicated blocks.
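The fix direction suggested above amounts to counting only non-decommissioned replicas for the "repl=" figure, rather than the raw length of the node list. A minimal illustrative sketch (not the actual NamenodeFsck code):

```java
// Hypothetical sketch: compute the replica count fsck should report by
// skipping decommissioned locations, instead of using the full length of
// the LocatedBlock's node array.
class FsckReplicaCount {
    // Each entry says whether the corresponding replica's node is
    // decommissioned; true entries are excluded from the count.
    static int liveReplicaCount(boolean[] decommissioned) {
        int live = 0;
        for (boolean d : decommissioned) {
            if (!d) live++;
        }
        return live;
    }
}
```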



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8114) Erasure coding: Add auditlog FSNamesystem#createErasureCodingZone if this operation fails

2015-04-09 Thread Rakesh R (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8114?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14487428#comment-14487428
 ] 

Rakesh R commented on HDFS-8114:


Attached a patch to make the audit logging better. Please review. Thanks!

 Erasure coding: Add auditlog FSNamesystem#createErasureCodingZone if this 
 operation fails
 -

 Key: HDFS-8114
 URL: https://issues.apache.org/jira/browse/HDFS-8114
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Rakesh R
Assignee: Rakesh R
Priority: Minor
 Attachments: HDFS-8114-001.patch


 While reviewing, I've noticed {{createErasureCodingZone}} is not adding an 
 auditlog entry if this operation fails. IMHO it's good to capture the failure 
 case also.
 {code}
 logAuditEvent(success, "createErasureCodingZone", srcArg, null, 
 resultingStat);
 {code}
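The pattern being proposed is to audit both outcomes: log the success case on the happy path and the failure case in the catch block before rethrowing. A hypothetical self-contained sketch (the real FSNamesystem signature differs):

```java
// Hypothetical sketch of auditing both success and failure of an operation,
// in the style of FSNamesystem's logAuditEvent(success, cmd, src) pattern.
class AuditedOp {
    static String lastAudit;

    static void logAuditEvent(boolean success, String cmd, String src) {
        lastAudit = (success ? "allowed=" : "denied=") + cmd + " src=" + src;
    }

    static void createZone(String src, boolean shouldFail) throws Exception {
        try {
            if (shouldFail) {
                throw new Exception("zone creation failed");
            }
            logAuditEvent(true, "createErasureCodingZone", src);
        } catch (Exception e) {
            // The addition: capture the failure case too, then rethrow.
            logAuditEvent(false, "createErasureCodingZone", src);
            throw e;
        }
    }
}
```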



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Work started] (HDFS-8114) Erasure coding: Add auditlog FSNamesystem#createErasureCodingZone if this operation fails

2015-04-09 Thread Rakesh R (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8114?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HDFS-8114 started by Rakesh R.
--
 Erasure coding: Add auditlog FSNamesystem#createErasureCodingZone if this 
 operation fails
 -

 Key: HDFS-8114
 URL: https://issues.apache.org/jira/browse/HDFS-8114
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Rakesh R
Assignee: Rakesh R
Priority: Minor
 Attachments: HDFS-8114-001.patch


 While reviewing, I've noticed {{createErasureCodingZone}} is not adding an 
 auditlog entry if this operation fails. IMHO it's good to capture the failure 
 case also.
 {code}
 logAuditEvent(success, "createErasureCodingZone", srcArg, null, 
 resultingStat);
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8113) NullPointerException in BlockInfoContiguous causes block report failure

2015-04-09 Thread Chengbing Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8113?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chengbing Liu updated HDFS-8113:

Status: Patch Available  (was: Open)

 NullPointerException in BlockInfoContiguous causes block report failure
 ---

 Key: HDFS-8113
 URL: https://issues.apache.org/jira/browse/HDFS-8113
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 2.6.0
Reporter: Chengbing Liu
Assignee: Chengbing Liu
 Attachments: HDFS-8113.patch


 The following copy constructor can throw NullPointerException if {{bc}} is 
 null.
 {code}
   protected BlockInfoContiguous(BlockInfoContiguous from) {
 this(from, from.bc.getBlockReplication());
 this.bc = from.bc;
   }
 {code}
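One null-safe direction is to stop dereferencing the possibly-null {{bc}} field in the copy constructor and read the replication from the source block itself. The sketch below is a simplified hypothetical model, not the actual BlockInfoContiguous class.

```java
// Hypothetical sketch of a null-safe copy constructor: avoid
// from.bc.getBlockReplication(), which throws NPE when bc == null, by
// carrying the replication on the block itself.
class BlockInfoSketch {
    Object bc;                 // may legitimately be null for a detached block
    final short replication;

    BlockInfoSketch(short replication) {
        this.replication = replication;
    }

    BlockInfoSketch(BlockInfoSketch from) {
        // was (conceptually): this(from, from.bc.getBlockReplication())
        this(from.replication);
        this.bc = from.bc;     // copying a null bc is now harmless
    }
}
```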
 We have observed that some DataNodes keep failing to send block reports to the 
 NameNode. The stack trace is as follows. Though we are not using the latest 
 version, the problem still exists.
 {quote}
 2015-03-08 19:28:13,442 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: 
 RemoteException in offerService
 org.apache.hadoop.ipc.RemoteException(java.lang.NullPointerException): 
 java.lang.NullPointerException
 at org.apache.hadoop.hdfs.server.blockmanagement.BlockInfo.<init>(BlockInfo.java:80)
 at 
 org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$BlockToMarkCorrupt.<init>(BlockManager.java:1696)
 at 
 org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.checkReplicaCorrupt(BlockManager.java:2185)
 at 
 org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.processReportedBlock(BlockManager.java:2047)
 at 
 org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.reportDiff(BlockManager.java:1950)
 at 
 org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.processReport(BlockManager.java:1823)
 at 
 org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.processReport(BlockManager.java:1750)
 at 
 org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.blockReport(NameNodeRpcServer.java:1069)
 at 
 org.apache.hadoop.hdfs.protocolPB.DatanodeProtocolServerSideTranslatorPB.blockReport(DatanodeProtocolServerSideTranslatorPB.java:152)
 at 
 org.apache.hadoop.hdfs.protocol.proto.DatanodeProtocolProtos$DatanodeProtocolService$2.callBlockingMethod(DatanodeProtocolProtos.java:26382)
 at 
 org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:587)
 at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1026)
 at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2013)
 at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2009)
 at java.security.AccessController.doPrivileged(Native Method)
 at javax.security.auth.Subject.doAs(Subject.java:415)
 at 
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1623)
 at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)
 {quote}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-8109) ECManager should be able to manage multiple ECSchemas

2015-04-09 Thread Hui Zheng (JIRA)
Hui Zheng created HDFS-8109:
---

 Summary: ECManager should be able to manage multiple ECSchemas
 Key: HDFS-8109
 URL: https://issues.apache.org/jira/browse/HDFS-8109
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Hui Zheng


[HDFS-8074|https://issues.apache.org/jira/browse/HDFS-8074] has implemented a 
default EC schema.
But a user may use another predefined schema when creating an EC zone. 
Maybe we need a way to get an ECSchema from the ECManager by its schema name.
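A lookup-by-name manager could be as simple as a map of predefined schemas with the default registered up front. The sketch below is purely illustrative; the class, field, and schema names ("RS-6-3" etc.) are hypothetical, not the real ECSchema API.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch: manage multiple predefined EC schemas keyed by name,
// with a default schema registered at construction time.
class ECSchemaManager {
    static final class Schema {
        final String name;
        final int dataUnits;
        final int parityUnits;
        Schema(String name, int dataUnits, int parityUnits) {
            this.name = name;
            this.dataUnits = dataUnits;
            this.parityUnits = parityUnits;
        }
    }

    private final Map<String, Schema> schemas = new HashMap<>();

    ECSchemaManager() {
        addSchema(new Schema("RS-6-3", 6, 3)); // hypothetical default schema
    }

    void addSchema(Schema s) { schemas.put(s.name, s); }

    Schema getSchema(String name) {
        Schema s = schemas.get(name);
        if (s == null) {
            throw new IllegalArgumentException("Unknown EC schema: " + name);
        }
        return s;
    }
}
```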



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8072) Reserved RBW space is not released if client terminates while writing block

2015-04-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8072?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14487196#comment-14487196
 ] 

Hudson commented on HDFS-8072:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk-Java8 #149 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/149/])
HDFS-8072. Reserved RBW space is not released if client terminates while 
writing block. (Arpit Agarwal) (arp: rev 
608c4998419c18fd95019b28cc56b5bd5aa4cc01)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/TestRbwSpaceReservation.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/ReplicaInPipeline.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/extdataset/ExternalReplicaInPipeline.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/ReplicaInPipelineInterface.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/SimulatedFSDataset.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockReceiver.java


 Reserved RBW space is not released if client terminates while writing block
 ---

 Key: HDFS-8072
 URL: https://issues.apache.org/jira/browse/HDFS-8072
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: datanode
Affects Versions: 2.6.0
Reporter: Arpit Agarwal
Assignee: Arpit Agarwal
 Fix For: 2.7.0

 Attachments: HDFS-8072.01.patch, HDFS-8072.02.patch


 The DataNode reserves space for a full block when creating an RBW block 
 (introduced in HDFS-6898).
 The reserved space is released incrementally as data is written to disk and 
 fully when the block is finalized. However if the client process terminates 
 unexpectedly mid-write then the reserved space is not released until the DN 
 is restarted.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8046) Allow better control of getContentSummary

2015-04-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8046?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14487195#comment-14487195
 ] 

Hudson commented on HDFS-8046:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk #2090 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/2090/])
HDFS-8046. Allow better control of getContentSummary. Contributed by Kihwal 
Lee. (kihwal: rev 285b31e75e51ec8e3a796c2cb0208739368ca9b8)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/ContentSummaryComputationContext.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirStatAndListingOp.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirectory.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java


 Allow better control of getContentSummary
 -

 Key: HDFS-8046
 URL: https://issues.apache.org/jira/browse/HDFS-8046
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Kihwal Lee
Assignee: Kihwal Lee
 Fix For: 2.8.0

 Attachments: HDFS-8046.v1.patch


 On busy clusters, users performing quota checks against a big directory 
 structure can degrade NameNode performance. It has become a lot better 
 after HDFS-4995, but as clusters get bigger and busier, it is apparent that 
 we need finer-grained control to keep a long read lock from causing throughput drops.
 Even with the unfair namesystem lock setting, a long read lock (tens of 
 milliseconds) can starve many readers and especially writers. So the locking 
 duration should be reduced, which can be done by imposing a lower 
 count-per-iteration limit in the existing implementation. But HDFS-4995 came 
 with a fixed amount of sleep between lock holds. This needs to be made 
 configurable so that {{getContentSummary()}} doesn't become exceedingly slow.
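The throttling pattern described above (a bounded count per lock acquisition plus a configurable sleep between lock holds) can be sketched as follows; the class and parameter names are hypothetical, not the actual ContentSummaryComputationContext code.

```java
import java.util.List;
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Illustrative sketch of chunked lock holding: process at most countPerLock
// entries per read-lock acquisition, then sleep a configurable interval so
// writers can make progress between lock holds. Names are hypothetical.
class ContentSummarySketch {
    private final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();

    /** Counts entries in chunks, releasing the read lock between chunks. */
    long summarize(List<String> entries, int countPerLock, long sleepMs) {
        long total = 0;
        int i = 0;
        while (i < entries.size()) {
            lock.readLock().lock();
            try {
                // Hold the lock for at most countPerLock entries.
                int end = Math.min(i + countPerLock, entries.size());
                for (; i < end; i++) {
                    total++;  // stand-in for per-inode accounting
                }
            } finally {
                lock.readLock().unlock();
            }
            if (i < entries.size() && sleepMs > 0) {
                try {
                    Thread.sleep(sleepMs);  // configurable yield between holds
                } catch (InterruptedException ie) {
                    Thread.currentThread().interrupt();
                    break;  // stop early; caller sees a partial count
                }
            }
        }
        return total;
    }
}
```

A lower `countPerLock` shortens each lock hold at the cost of more sleep cycles, which is exactly the trade-off a configurable sleep interval addresses.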



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-3087) Decomissioning on NN restart can complete without blocks being replicated

2015-04-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3087?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14487180#comment-14487180
 ] 

Hudson commented on HDFS-3087:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk #2090 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/2090/])
HDFS-8025. Addendum fix for HDFS-3087 Decomissioning on NN restart can complete 
without blocks being replicated. Contributed by Ming Ma. (wang: rev 
5a540c3d3107199f4632e2ad7ee8ff913b107a04)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDecommission.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


 Decomissioning on NN restart can complete without blocks being replicated
 -

 Key: HDFS-3087
 URL: https://issues.apache.org/jira/browse/HDFS-3087
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 0.23.0
Reporter: Kihwal Lee
Assignee: Rushabh S Shah
Priority: Critical
 Fix For: 2.5.0

 Attachments: HDFS-3087.patch


 If a data node is added to the exclude list and the name node is restarted, 
 decommissioning happens right away on data node registration. At that 
 point the initial block report has not yet been sent, so the name node thinks the 
 node has zero blocks and decommissioning completes very quickly, without 
 replicating the blocks on that node.
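The safety condition implied above can be made explicit: decommissioning must not be declared complete until the node's initial block report has been received, since before that the NameNode's zero-block view is ignorance, not emptiness. The sketch below is illustrative only, not the actual BlockManager logic.

```java
// Illustrative check (hypothetical names, not the actual BlockManager code):
// a registering node on the exclude list must not be marked
// decommission-complete before its initial block report has arrived.
class DecommissionCheckSketch {
    boolean initialBlockReportReceived = false;
    int blocksStillNeedingReplication = 0;

    /** Completion requires actually knowing the node's blocks first. */
    boolean canCompleteDecommission() {
        return initialBlockReportReceived && blocksStillNeedingReplication == 0;
    }
}
```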



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7813) TestDFSHAAdminMiniCluster#testFencer testcase is failing frequently

2015-04-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7813?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14487190#comment-14487190
 ] 

Hudson commented on HDFS-7813:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk #2090 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/2090/])
Revert HDFS-7813. (wheat9: rev 82d56b337d468f4065df5005f9f67487ac97d2d7)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/tools/TestDFSHAAdminMiniCluster.java


 TestDFSHAAdminMiniCluster#testFencer testcase is failing frequently
 ---

 Key: HDFS-7813
 URL: https://issues.apache.org/jira/browse/HDFS-7813
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: ha, test
Reporter: Rakesh R
Assignee: Rakesh R
 Fix For: 2.7.0

 Attachments: HDFS-7813-001.patch


 Following is the failure trace.
 {code}
 java.lang.AssertionError: expected:<0> but was:<-1>
   at org.junit.Assert.fail(Assert.java:88)
   at org.junit.Assert.failNotEquals(Assert.java:743)
   at org.junit.Assert.assertEquals(Assert.java:118)
   at org.junit.Assert.assertEquals(Assert.java:555)
   at org.junit.Assert.assertEquals(Assert.java:542)
   at 
 org.apache.hadoop.hdfs.tools.TestDFSHAAdminMiniCluster.testFencer(TestDFSHAAdminMiniCluster.java:163)
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7725) Incorrect nodes in service metrics caused all writes to fail

2015-04-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7725?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14487207#comment-14487207
 ] 

Hudson commented on HDFS-7725:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk-Java8 #149 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/149/])
HDFS-7725. Incorrect 'nodes in service' metrics caused all writes to fail. 
Contributed by Ming Ma. (wang: rev 6af0d74a75f0f58d5e92e2e91e87735b9a62bb12)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DecommissionManager.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/HeartbeatManager.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestNamenodeCapacityReport.java


 Incorrect nodes in service metrics caused all writes to fail
 --

 Key: HDFS-7725
 URL: https://issues.apache.org/jira/browse/HDFS-7725
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Ming Ma
Assignee: Ming Ma
 Fix For: 2.8.0

 Attachments: HDFS-7725-2.patch, HDFS-7725-3.patch, HDFS-7725.patch


 One of our clusters sometimes couldn't allocate blocks from any DNs. 
 BlockPlacementPolicyDefault complains with the following messages for all DNs.
 {noformat}
 the node is too busy (load:x > y)
 {noformat}
 It turns out the {{HeartbeatManager}}'s {{nodesInService}} was computed 
 incorrectly when admins decomm or recomm dead nodes. Here are two scenarios.
 * Decomm dead nodes. It turns out HDFS-7374 has fixed it; not sure if it is 
 intentional. cc / [~zhz], [~andrew.wang], [~atm] Here is the sequence of 
 events without HDFS-7374.
 ** Cluster has one live node. nodesInService == 1
 ** The node becomes dead. nodesInService == 0
 ** Decomm the node. nodesInService == -1
 * However, HDFS-7374 introduces another inconsistency when recomm is involved.
 ** Cluster has one live node. nodesInService == 1
 ** The node becomes dead. nodesInService == 0
 ** Decomm the node. nodesInService == 0
 ** Recomm the node. nodesInService == 1
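The consistent accounting that avoids both failure modes listed above can be modeled in a few lines: the counter should change only for transitions involving a live, non-decommissioned node, so decommissioning a dead node never drives it negative and recommissioning a dead node never inflates it. This is an illustrative model with hypothetical names, not the actual HeartbeatManager code.

```java
// Illustrative model of consistent nodesInService accounting: only live,
// non-decommissioned nodes count, so decomm/recomm of a dead node is a no-op.
class NodesInServiceSketch {
    int nodesInService = 0;
    boolean live = false;
    boolean decommissioned = false;

    void nodeBecomesLive()  { live = true;  if (!decommissioned) nodesInService++; }
    void nodeBecomesDead()  { if (live && !decommissioned) nodesInService--; live = false; }
    void decommission()     { if (live && !decommissioned) nodesInService--; decommissioned = true; }
    void recommission()     { if (live && decommissioned)  nodesInService++; decommissioned = false; }
}
```

Replaying the second scenario from the description against this model leaves the counter at 0 after the recommission of the still-dead node, rather than the incorrect 1.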



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7808) Remove obsolete -ns options in in DFSHAAdmin.java

2015-04-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7808?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14487210#comment-14487210
 ] 

Hudson commented on HDFS-7808:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk-Java8 #149 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/149/])
Revert HDFS-7808. (wheat9: rev bd4c99bece56d1671c6f89eff8a529f4e7ac2933)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/tools/TestDFSHAAdmin.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/tools/TestDFSHAAdminMiniCluster.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/DFSHAAdmin.java


 Remove obsolete -ns options in in DFSHAAdmin.java
 -

 Key: HDFS-7808
 URL: https://issues.apache.org/jira/browse/HDFS-7808
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Arshad Mohammad
Assignee: Arshad Mohammad
Priority: Minor
 Attachments: HDFS-7808-1.patch


 After the HDFS-7324 fix, the following piece of code became unused. It should be 
 removed.
 {code}
 int i = 0;
 String cmd = argv[i++];
 if ("-ns".equals(cmd)) {
   if (i == argv.length) {
     errOut.println("Missing nameservice ID");
     printUsage(errOut);
     return -1;
   }
   nameserviceId = argv[i++];
   if (i >= argv.length) {
     errOut.println("Missing command");
     printUsage(errOut);
     return -1;
   }
   argv = Arrays.copyOfRange(argv, i, argv.length);
 }
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7808) Remove obsolete -ns options in in DFSHAAdmin.java

2015-04-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7808?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14487192#comment-14487192
 ] 

Hudson commented on HDFS-7808:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk #2090 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/2090/])
Revert HDFS-7808. (wheat9: rev bd4c99bece56d1671c6f89eff8a529f4e7ac2933)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/tools/TestDFSHAAdmin.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/tools/TestDFSHAAdminMiniCluster.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/DFSHAAdmin.java


 Remove obsolete -ns options in in DFSHAAdmin.java
 -

 Key: HDFS-7808
 URL: https://issues.apache.org/jira/browse/HDFS-7808
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Arshad Mohammad
Assignee: Arshad Mohammad
Priority: Minor
 Attachments: HDFS-7808-1.patch


 After the HDFS-7324 fix, the following piece of code became unused. It should be 
 removed.
 {code}
 int i = 0;
 String cmd = argv[i++];
 if ("-ns".equals(cmd)) {
   if (i == argv.length) {
     errOut.println("Missing nameservice ID");
     printUsage(errOut);
     return -1;
   }
   nameserviceId = argv[i++];
   if (i >= argv.length) {
     errOut.println("Missing command");
     printUsage(errOut);
     return -1;
   }
   argv = Arrays.copyOfRange(argv, i, argv.length);
 }
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7813) TestDFSHAAdminMiniCluster#testFencer testcase is failing frequently

2015-04-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7813?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14487208#comment-14487208
 ] 

Hudson commented on HDFS-7813:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk-Java8 #149 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/149/])
Revert HDFS-7813. (wheat9: rev 82d56b337d468f4065df5005f9f67487ac97d2d7)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/tools/TestDFSHAAdminMiniCluster.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


 TestDFSHAAdminMiniCluster#testFencer testcase is failing frequently
 ---

 Key: HDFS-7813
 URL: https://issues.apache.org/jira/browse/HDFS-7813
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: ha, test
Reporter: Rakesh R
Assignee: Rakesh R
 Fix For: 2.7.0

 Attachments: HDFS-7813-001.patch


 Following is the failure trace.
 {code}
 java.lang.AssertionError: expected:<0> but was:<-1>
   at org.junit.Assert.fail(Assert.java:88)
   at org.junit.Assert.failNotEquals(Assert.java:743)
   at org.junit.Assert.assertEquals(Assert.java:118)
   at org.junit.Assert.assertEquals(Assert.java:555)
   at org.junit.Assert.assertEquals(Assert.java:542)
   at 
 org.apache.hadoop.hdfs.tools.TestDFSHAAdminMiniCluster.testFencer(TestDFSHAAdminMiniCluster.java:163)
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8089) Move o.a.h.hdfs.web.resources.* to the client jars

2015-04-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8089?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14487184#comment-14487184
 ] 

Hudson commented on HDFS-8089:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk #2090 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/2090/])
HDFS-8089. Move o.a.h.hdfs.web.resources.* to the client jars. Contributed by 
Haohui Mai. (wheat9: rev cc25823546643caf22bab63ec85fe0c8939593d8)
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/resources/ReplicationParam.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/resources/Param.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/resources/EnumSetParam.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/resources/DoAsParam.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/resources/GroupParam.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/resources/HttpOpParam.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/resources/BlockSizeParam.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/resources/IntegerParam.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/resources/ModificationTimeParam.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/resources/AccessTimeParam.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/resources/EnumSetParam.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/resources/NewLengthParam.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/resources/DeleteOpParam.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/resources/XAttrSetFlagParam.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/resources/GetOpParam.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/resources/CreateParentParam.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/resources/ExcludeDatanodesParam.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/resources/LengthParam.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/resources/DelegationParam.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/resources/AccessTimeParam.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/resources/AclPermissionParam.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/resources/OverwriteParam.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/resources/HttpOpParam.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/resources/EnumParam.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/resources/PutOpParam.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/resources/BooleanParam.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/resources/ModificationTimeParam.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/resources/XAttrEncodingParam.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/resources/ExcludeDatanodesParam.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/resources/XAttrEncodingParam.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/resources/XAttrNameParam.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/resources/UserParam.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/resources/BooleanParam.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/resources/LengthParam.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/resources/XAttrSetFlagParam.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/resources/ConcatSourcesParam.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/resources/TokenArgumentParam.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/resources/ShortParam.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/resources/OverwriteParam.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/resources/PostOpParam.java
* 
