[jira] [Updated] (HDFS-7998) HDFS Federation : Command mentioned to add a NN to existing federated cluster is wrong

2015-04-08 Thread Ajith S (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7998?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajith S updated HDFS-7998:
--
Status: Patch Available  (was: Open)

Submitting the patch. Updated the document with the correct command. Please review.

 HDFS Federation : Command mentioned to add a NN to existing federated cluster 
 is wrong 
 ---

 Key: HDFS-7998
 URL: https://issues.apache.org/jira/browse/HDFS-7998
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: documentation
Reporter: Ajith S
Assignee: Ajith S
Priority: Minor
 Attachments: HDFS-7998.patch


 HDFS Federation documentation 
 http://hadoop.apache.org/docs/r2.6.0/hadoop-project-dist/hadoop-hdfs/Federation.html
 has the following command to add a NameNode to an existing cluster:
  $HADOOP_PREFIX_HOME/bin/hdfs dfadmin -refreshNameNode 
  datanode_host_name:datanode_rpc_port
 This command is incorrect; the correct command is:
  $HADOOP_PREFIX_HOME/bin/hdfs dfsadmin -refreshNamenodes 
  datanode_host_name:datanode_rpc_port
 The documentation needs to be updated accordingly.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7998) HDFS Federation : Command mentioned to add a NN to existing federated cluster is wrong

2015-04-08 Thread Ajith S (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7998?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajith S updated HDFS-7998:
--
Target Version/s: 2.7.0

 HDFS Federation : Command mentioned to add a NN to existing federated cluster 
 is wrong 
 ---

 Key: HDFS-7998
 URL: https://issues.apache.org/jira/browse/HDFS-7998
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: documentation
Reporter: Ajith S
Assignee: Ajith S
Priority: Minor
 Attachments: HDFS-7998.patch


 HDFS Federation documentation 
 http://hadoop.apache.org/docs/r2.6.0/hadoop-project-dist/hadoop-hdfs/Federation.html
 has the following command to add a NameNode to an existing cluster:
  $HADOOP_PREFIX_HOME/bin/hdfs dfadmin -refreshNameNode 
  datanode_host_name:datanode_rpc_port
 This command is incorrect; the correct command is:
  $HADOOP_PREFIX_HOME/bin/hdfs dfsadmin -refreshNamenodes 
  datanode_host_name:datanode_rpc_port
 The documentation needs to be updated accordingly.





[jira] [Updated] (HDFS-8023) Erasure Coding: retrieve erasure coding schema for a file from NameNode

2015-04-08 Thread Kai Zheng (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8023?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kai Zheng updated HDFS-8023:

Summary: Erasure Coding: retrieve erasure coding schema for a file from 
NameNode  (was: Erasure Coding: retrieve erasure coding policy and schema for a 
block or file from NameNode)

 Erasure Coding: retrieve erasure coding schema for a file from NameNode
 --

 Key: HDFS-8023
 URL: https://issues.apache.org/jira/browse/HDFS-8023
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Kai Zheng
Assignee: Vinayakumar B
 Attachments: HDFS-8023-01.patch, HDFS-8023-02.patch


 NameNode needs to provide an RPC call for clients, tools, or DataNodes to 
 retrieve the erasure coding policy and schema for a block or file.
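A minimal sketch of what such an RPC addition could look like, using toy stand-in types. The interface, method, and class names below are illustrative assumptions for this mailing-list thread, not the actual HDFS-8023 patch:

```java
// Sketch of a client-NameNode RPC that returns the erasure coding schema
// for a file path. All names here are illustrative assumptions, not the
// real HDFS API.
public class ECSchemaRpcSketch {

    // Minimal stand-in for an EC schema: a codec plus data/parity unit counts.
    static final class ECSchema {
        final String codec;
        final int dataUnits;
        final int parityUnits;
        ECSchema(String codec, int dataUnits, int parityUnits) {
            this.codec = codec;
            this.dataUnits = dataUnits;
            this.parityUnits = parityUnits;
        }
        @Override public String toString() {
            return codec + "-" + dataUnits + "-" + parityUnits;
        }
    }

    // Stand-in for the proposed ClientProtocol addition: given a file path,
    // return its EC schema, or null if the file is not erasure coded.
    interface ClientProtocol {
        ECSchema getErasureCodingSchema(String src);
    }

    // Toy NameNode-side implementation: files under /ec use RS(6,3).
    static final class ToyNameNode implements ClientProtocol {
        @Override public ECSchema getErasureCodingSchema(String src) {
            return src.startsWith("/ec/") ? new ECSchema("rs", 6, 3) : null;
        }
    }

    public static void main(String[] args) {
        ClientProtocol nn = new ToyNameNode();
        System.out.println(nn.getErasureCodingSchema("/ec/file1"));
        System.out.println(nn.getErasureCodingSchema("/plain/file"));
    }
}
```

A real implementation would resolve the schema from the file's EC zone on the NameNode side rather than from a path prefix.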





[jira] [Updated] (HDFS-8023) Erasure Coding: retrieve erasure coding schema for a file from NameNode

2015-04-08 Thread Kai Zheng (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8023?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kai Zheng updated HDFS-8023:

Description: NameNode needs to provide RPC call for client and tool to 
retrieve erasure coding schema for a file from NameNode.  (was: NameNode needs 
to provide RPC call for client, tool, or DataNode to retrieve erasure coding 
policy and schema for a block or file from NameNode.)

 Erasure Coding: retrieve erasure coding schema for a file from NameNode
 --

 Key: HDFS-8023
 URL: https://issues.apache.org/jira/browse/HDFS-8023
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Kai Zheng
Assignee: Vinayakumar B
 Attachments: HDFS-8023-01.patch, HDFS-8023-02.patch


 NameNode needs to provide an RPC call for clients and tools to retrieve the 
 erasure coding schema for a file.





[jira] [Commented] (HDFS-8023) Erasure Coding: retrieve erasure coding schema for a file from NameNode

2015-04-08 Thread Kai Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8023?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14484784#comment-14484784
 ] 

Kai Zheng commented on HDFS-8023:
-

bq.Will skip in this Jira. If required we can add it later.
I'm OK with skipping the one for the DataNode and considering it in the future, 
though we may well need it for the next phase. 

I modified the JIRA description and title to reflect the scope change.

 Erasure Coding: retrieve erasure coding schema for a file from NameNode
 --

 Key: HDFS-8023
 URL: https://issues.apache.org/jira/browse/HDFS-8023
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Kai Zheng
Assignee: Vinayakumar B
 Attachments: HDFS-8023-01.patch, HDFS-8023-02.patch


 NameNode needs to provide an RPC call for clients and tools to retrieve the 
 erasure coding schema for a file.





[jira] [Commented] (HDFS-8023) Erasure Coding: retrieve erasure coding schema for a file from NameNode

2015-04-08 Thread Kai Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8023?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14484789#comment-14484789
 ] 

Kai Zheng commented on HDFS-8023:
-

Hi [~vinayrpet],

I looked at the latest patch and it looks fine to me. One thing I would like to 
make sure of: I have uploaded a patch in HDFS-8074 adding the system default 
schema. Would you take a look and see whether it would be better to get that in 
first and rebase this patch on it? That may save an upcoming revisit JIRA. Thanks.

Otherwise, +1 for the new patch.

 Erasure Coding: retrieve erasure coding schema for a file from NameNode
 --

 Key: HDFS-8023
 URL: https://issues.apache.org/jira/browse/HDFS-8023
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Kai Zheng
Assignee: Vinayakumar B
 Attachments: HDFS-8023-01.patch, HDFS-8023-02.patch


 NameNode needs to provide an RPC call for clients and tools to retrieve the 
 erasure coding schema for a file.





[jira] [Commented] (HDFS-8090) Erasure Coding: Add RPC to client-namenode to list all ECSchemas loaded in Namenode.

2015-04-08 Thread Kai Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8090?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14484796#comment-14484796
 ] 

Kai Zheng commented on HDFS-8090:
-

Hi [~vinayrpet],

Good to have this. Thanks! Let me accelerate HDFS-7866 to prepare the 
necessary facilities, such as {{ECSchemaManager#getSchemas}} and 
{{ECSchemaManager#getSchema}}, for this issue. 

 Erasure Coding: Add RPC to client-namenode to list all ECSchemas loaded in 
 Namenode.
 

 Key: HDFS-8090
 URL: https://issues.apache.org/jira/browse/HDFS-8090
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Vinayakumar B
Assignee: Vinayakumar B

 ECSchemas will be configured and loaded only at the NameNode to avoid 
 conflicts.
 The client has to specify one of these schemas when creating EC zones.
 So, add an RPC to ClientProtocol to get all ECSchemas loaded at the NameNode, 
 so that the client can choose any one of them.
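A toy sketch of the flow described in this issue: the NameNode holds the loaded schemas, and a listing RPC lets the client pick only from what the NameNode advertises. The class and method names are assumptions for illustration, not the actual patch:

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Illustrative sketch (names are assumptions): the NameNode loads a fixed
// set of EC schemas, and a client-NameNode RPC returns all of them so the
// client can choose one when creating an EC zone.
public class ListECSchemasSketch {

    // Stand-in schema manager holding the schemas loaded at the NameNode.
    static final class SchemaManager {
        private final Map<String, String> schemas = new LinkedHashMap<>();
        SchemaManager() {
            schemas.put("RS-6-3", "rs codec, 6 data + 3 parity");
            schemas.put("RS-10-4", "rs codec, 10 data + 4 parity");
        }
        // The proposed RPC: list every schema name loaded at the NameNode.
        List<String> getSchemaNames() {
            return new ArrayList<>(schemas.keySet());
        }
    }

    public static void main(String[] args) {
        SchemaManager nn = new SchemaManager();
        // The client may only choose from what the NameNode advertises.
        List<String> names = nn.getSchemaNames();
        System.out.println(names);
        System.out.println(names.contains("RS-6-3"));
    }
}
```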





[jira] [Created] (HDFS-8091) ACLStatus and XAttributes emitted by custom INodeAttributesProvider not being picked up correctly

2015-04-08 Thread Arun Suresh (JIRA)
Arun Suresh created HDFS-8091:
-

 Summary: ACLStatus and XAttributes emitted by custom 
INodeAttributesProvider not being picked up correctly 
 Key: HDFS-8091
 URL: https://issues.apache.org/jira/browse/HDFS-8091
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: HDFS
Reporter: Arun Suresh
Assignee: Arun Suresh
 Fix For: 2.7.0


HDFS-6826 introduced the concept of an {{INodeAttributesProvider}}, an 
implementation of which can be plugged in so that the attributes (user / group 
/ permission / ACLs and xattrs) returned for an HDFS path can be altered or 
enhanced by user-specified code before they are returned to the client.

Unfortunately, it looks like the AclStatus and XAttributes are not properly 
presented to the user-specified {{INodeAttributesProvider}} before they are 
returned to the client.
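To illustrate the plug-in idea, here is a toy sketch of the interception point: attributes computed for a path pass through user code before reaching the client. The types below are simplified stand-ins, not the real HDFS interfaces:

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of the provider hook behind INodeAttributesProvider: attributes
// computed for a path are passed through user code, which may alter or
// enhance them before they reach the client. Toy stand-in types only.
public class AttributesProviderSketch {

    static final class Attributes {
        String user;
        final Map<String, String> xattrs = new HashMap<>();
        Attributes(String user) { this.user = user; }
    }

    // Stand-in for the provider hook: given the path and the attributes the
    // NameNode computed, return the attributes the client should see.
    interface AttributesProvider {
        Attributes getAttributes(String path, Attributes defaults);
    }

    // Example provider that tags every path with an extra xattr.
    static final class TaggingProvider implements AttributesProvider {
        @Override public Attributes getAttributes(String path, Attributes defaults) {
            defaults.xattrs.put("user.tag", "from-provider");
            return defaults;
        }
    }

    public static void main(String[] args) {
        AttributesProvider provider = new TaggingProvider();
        // The bug reported here was that this hook was effectively skipped
        // for AclStatus and XAttrs; below, the hook is applied as intended.
        Attributes result = provider.getAttributes("/data/f", new Attributes("hdfs"));
        System.out.println(result.xattrs.get("user.tag"));
    }
}
```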





[jira] [Updated] (HDFS-8091) ACLStatus and XAttributes not properly presented to INodeAttributesProvider before returning to client

2015-04-08 Thread Arun Suresh (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8091?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun Suresh updated HDFS-8091:
--
Summary: ACLStatus and XAttributes not properly presented to 
INodeAttributesProvider before returning to client   (was: ACLStatus and 
XAttributes emitted by custom INodeAttributesProvider not being picked up 
correctly )

 ACLStatus and XAttributes not properly presented to INodeAttributesProvider 
 before returning to client 
 ---

 Key: HDFS-8091
 URL: https://issues.apache.org/jira/browse/HDFS-8091
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: HDFS
Reporter: Arun Suresh
Assignee: Arun Suresh
 Fix For: 2.7.0


 HDFS-6826 introduced the concept of an {{INodeAttributesProvider}}, an 
 implementation of which can be plugged in so that the attributes (user / 
 group / permission / ACLs and xattrs) returned for an HDFS path can be 
 altered or enhanced by user-specified code before they are returned to the 
 client.
 Unfortunately, it looks like the AclStatus and XAttributes are not properly 
 presented to the user-specified {{INodeAttributesProvider}} before they are 
 returned to the client.





[jira] [Updated] (HDFS-8091) ACLStatus and XAttributes not properly presented to INodeAttributesProvider before returning to client

2015-04-08 Thread Arun Suresh (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8091?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun Suresh updated HDFS-8091:
--
Status: Patch Available  (was: Open)

 ACLStatus and XAttributes not properly presented to INodeAttributesProvider 
 before returning to client 
 ---

 Key: HDFS-8091
 URL: https://issues.apache.org/jira/browse/HDFS-8091
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: HDFS
Reporter: Arun Suresh
Assignee: Arun Suresh
 Fix For: 2.7.0

 Attachments: HDFS-8091.1.patch


 HDFS-6826 introduced the concept of an {{INodeAttributesProvider}}, an 
 implementation of which can be plugged in so that the attributes (user / 
 group / permission / ACLs and xattrs) returned for an HDFS path can be 
 altered or enhanced by user-specified code before they are returned to the 
 client.
 Unfortunately, it looks like the AclStatus and XAttributes are not properly 
 presented to the user-specified {{INodeAttributesProvider}} before they are 
 returned to the client.





[jira] [Updated] (HDFS-8091) ACLStatus and XAttributes not properly presented to INodeAttributesProvider before returning to client

2015-04-08 Thread Arun Suresh (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8091?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun Suresh updated HDFS-8091:
--
Attachment: HDFS-8091.1.patch

Attaching a trivial patch to fix this.
[~jnp], wondering if you could give this a quick review, since it is 
required for the completeness of HDFS-6826. 

 ACLStatus and XAttributes not properly presented to INodeAttributesProvider 
 before returning to client 
 ---

 Key: HDFS-8091
 URL: https://issues.apache.org/jira/browse/HDFS-8091
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: HDFS
Reporter: Arun Suresh
Assignee: Arun Suresh
 Fix For: 2.7.0

 Attachments: HDFS-8091.1.patch


 HDFS-6826 introduced the concept of an {{INodeAttributesProvider}}, an 
 implementation of which can be plugged in so that the attributes (user / 
 group / permission / ACLs and xattrs) returned for an HDFS path can be 
 altered or enhanced by user-specified code before they are returned to the 
 client.
 Unfortunately, it looks like the AclStatus and XAttributes are not properly 
 presented to the user-specified {{INodeAttributesProvider}} before they are 
 returned to the client.





[jira] [Assigned] (HDFS-7994) Detect if reserved EC Block ID is already used

2015-04-08 Thread Hui Zheng (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7994?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hui Zheng reassigned HDFS-7994:
---

Assignee: Hui Zheng  (was: Tsz Wo Nicholas Sze)

 Detect if reserved EC Block ID is already used
 ---

 Key: HDFS-7994
 URL: https://issues.apache.org/jira/browse/HDFS-7994
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Tsz Wo Nicholas Sze
Assignee: Hui Zheng

 Since random block IDs were supported by some early versions of HDFS, the 
 block IDs reserved for EC blocks could already be in use by existing blocks 
 in a cluster. During startup, the NameNode detects whether any reserved EC 
 block IDs are used by non-EC blocks. If so, the NameNode will do an 
 additional blocksMap lookup whenever a blockGroupsMap lookup misses.
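A toy sketch of the fallback lookup described above, with stand-in maps instead of the real NameNode data structures (the reserved-range constant and field names are assumptions for illustration):

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of the fallback: if, at startup, the NameNode finds non-EC blocks
// already using IDs in the range reserved for EC blocks, then a miss in the
// block-group map triggers an extra lookup in the ordinary blocks map.
// Toy stand-ins, not the real NameNode data structures.
public class ReservedIdFallbackSketch {

    static final long EC_RESERVED_START = 1L << 62; // assumed reserved range

    final Map<Long, String> blocksMap = new HashMap<>();
    final Map<Long, String> blockGroupsMap = new HashMap<>();
    boolean legacyBlocksInReservedRange; // set during the startup scan

    String lookup(long blockId) {
        if (blockId >= EC_RESERVED_START) {
            String group = blockGroupsMap.get(blockId);
            if (group != null) return group;
            // Extra lookup, only needed when legacy blocks collide with
            // the reserved EC ID range.
            if (legacyBlocksInReservedRange) return blocksMap.get(blockId);
            return null;
        }
        return blocksMap.get(blockId);
    }

    public static void main(String[] args) {
        ReservedIdFallbackSketch bm = new ReservedIdFallbackSketch();
        long collidingId = EC_RESERVED_START + 7;
        bm.blocksMap.put(collidingId, "legacy-block");
        bm.legacyBlocksInReservedRange = true; // collision detected at startup
        System.out.println(bm.lookup(collidingId));
    }
}
```

When no collision is detected at startup, the extra lookup is skipped entirely, so the common case pays no additional cost.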





[jira] [Commented] (HDFS-8085) Move CorruptFileBlockIterator to the hdfs.client.impl package

2015-04-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8085?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14484824#comment-14484824
 ] 

Hadoop QA commented on HDFS-8085:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12723816/h8085_20150407.patch
  against trunk revision 4be648b.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-hdfs-project/hadoop-hdfs:

  org.apache.hadoop.fs.TestHDFSFileContextMainOperations

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/10204//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HDFS-Build/10204//console

This message is automatically generated.

 Move CorruptFileBlockIterator to the hdfs.client.impl package
 -

 Key: HDFS-8085
 URL: https://issues.apache.org/jira/browse/HDFS-8085
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: hdfs-client
Reporter: Tsz Wo Nicholas Sze
Assignee: Tsz Wo Nicholas Sze
Priority: Minor
 Attachments: h8085_20150407.patch








[jira] [Commented] (HDFS-7994) Detect if reserved EC Block ID is already used

2015-04-08 Thread Hui Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7994?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14484823#comment-14484823
 ] 

Hui Zheng commented on HDFS-7994:
-

Hi Nicholas,
I would like to work on this issue.

 Detect if reserved EC Block ID is already used
 ---

 Key: HDFS-7994
 URL: https://issues.apache.org/jira/browse/HDFS-7994
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Tsz Wo Nicholas Sze
Assignee: Hui Zheng

 Since random block IDs were supported by some early versions of HDFS, the 
 block IDs reserved for EC blocks could already be in use by existing blocks 
 in a cluster. During startup, the NameNode detects whether any reserved EC 
 block IDs are used by non-EC blocks. If so, the NameNode will do an 
 additional blocksMap lookup whenever a blockGroupsMap lookup misses.





[jira] [Commented] (HDFS-7998) HDFS Federation : Command mentioned to add a NN to existing federated cluster is wrong

2015-04-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7998?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14484827#comment-14484827
 ] 

Hadoop QA commented on HDFS-7998:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12723840/HDFS-7998.patch
  against trunk revision ab04ff9.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+0 tests included{color}.  The patch appears to be a 
documentation patch that doesn't require tests.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The test build failed in 
hadoop-hdfs-project/hadoop-hdfs 

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/10206//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HDFS-Build/10206//console

This message is automatically generated.

 HDFS Federation : Command mentioned to add a NN to existing federated cluster 
 is wrong 
 ---

 Key: HDFS-7998
 URL: https://issues.apache.org/jira/browse/HDFS-7998
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: documentation
Reporter: Ajith S
Assignee: Ajith S
Priority: Minor
 Attachments: HDFS-7998.patch


 HDFS Federation documentation 
 http://hadoop.apache.org/docs/r2.6.0/hadoop-project-dist/hadoop-hdfs/Federation.html
 has the following command to add a NameNode to an existing cluster:
  $HADOOP_PREFIX_HOME/bin/hdfs dfadmin -refreshNameNode 
  datanode_host_name:datanode_rpc_port
 This command is incorrect; the correct command is:
  $HADOOP_PREFIX_HOME/bin/hdfs dfsadmin -refreshNamenodes 
  datanode_host_name:datanode_rpc_port
 The documentation needs to be updated accordingly.





[jira] [Commented] (HDFS-8023) Erasure Coding: retrieve erasure coding schema for a file from NameNode

2015-04-08 Thread Vinayakumar B (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8023?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14484855#comment-14484855
 ] 

Vinayakumar B commented on HDFS-8023:
-

Hi [~drankye],
I saw the latest patch on HDFS-8074. It is more about getting the ECSchema from 
the ECZone; currently the ECZone doesn't carry the schema.
So I feel a revisit is required anyway. The revisit just needs to modify 
{{FSNameSystem#getErasureCodingInfo()}} to return the correct ECSchema from 
the ECZone.

 Erasure Coding: retrieve erasure coding schema for a file from NameNode
 --

 Key: HDFS-8023
 URL: https://issues.apache.org/jira/browse/HDFS-8023
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Kai Zheng
Assignee: Vinayakumar B
 Attachments: HDFS-8023-01.patch, HDFS-8023-02.patch


 NameNode needs to provide an RPC call for clients and tools to retrieve the 
 erasure coding schema for a file.





[jira] [Updated] (HDFS-8074) Define a system-wide default EC schema

2015-04-08 Thread Kai Zheng (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8074?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kai Zheng updated HDFS-8074:

Attachment: HDFS-8074-v3.patch

Updated the patch again to address the review comments.

 Define a system-wide default EC schema
 --

 Key: HDFS-8074
 URL: https://issues.apache.org/jira/browse/HDFS-8074
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Kai Zheng
Assignee: Kai Zheng
 Attachments: HDFS-8074-v1.patch, HDFS-8074-v2.patch, 
 HDFS-8074-v3.patch


 It's good to have a system default EC schema with fixed values first, before 
 we support more schemas. This helps resolve some dependencies before 
 HDFS-7866 can be completed in whole. The system default schema is also needed 
 in any case, for when the admin simply wants to use it.
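A minimal sketch of the "fixed system-wide default" idea. The codec and unit counts below are assumed values for illustration; the actual defaults are whatever the HDFS-8074 patch defines:

```java
// Sketch of a fixed system-wide default EC schema. The codec and the
// data/parity unit counts are assumptions for illustration only.
public class DefaultSchemaSketch {
    static final String DEFAULT_CODEC = "rs";
    static final int DEFAULT_DATA_UNITS = 6;
    static final int DEFAULT_PARITY_UNITS = 3;

    // Admins who do not configure anything get this schema by name.
    static String systemDefaultSchemaName() {
        return "SYS-DEFAULT-" + DEFAULT_CODEC.toUpperCase()
                + "-" + DEFAULT_DATA_UNITS + "-" + DEFAULT_PARITY_UNITS;
    }

    public static void main(String[] args) {
        System.out.println(systemDefaultSchemaName());
    }
}
```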





[jira] [Updated] (HDFS-8094) Cluster web console (dfsclusterhealth.jsp) is not working

2015-04-08 Thread Vinayakumar B (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8094?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinayakumar B updated HDFS-8094:

Affects Version/s: 2.7.0

 Cluster web console (dfsclusterhealth.jsp) is not working
 -

 Key: HDFS-8094
 URL: https://issues.apache.org/jira/browse/HDFS-8094
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: federation
Affects Versions: 2.7.0
Reporter: Ajith S
Assignee: Ajith S

 According to the documentation, the cluster can be monitored at 
 http://any_nn_host:port/dfsclusterhealth.jsp
 Currently this URL does not work; the page appears to have been removed as 
 part of HDFS-6252.





[jira] [Commented] (HDFS-8062) Remove hard-coded values in favor of EC schema

2015-04-08 Thread Kai Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8062?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14484994#comment-14484994
 ] 

Kai Zheng commented on HDFS-8062:
-

Note HDFS-8074 was done.

 Remove hard-coded values in favor of EC schema
 --

 Key: HDFS-8062
 URL: https://issues.apache.org/jira/browse/HDFS-8062
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Kai Zheng
Assignee: Kai Sasaki
 Attachments: HDFS-8062.1.patch


 Related issues about EC schemas on the NameNode side:
 HDFS-7859 changes the fsimage and editlog in the NameNode to persist EC schemas;
 HDFS-7866 manages EC schemas in the NameNode: loading them and syncing the 
 persisted ones in the image with the predefined ones in XML.
 This issue is to revisit all the places in the NameNode that use hard-coded 
 values in favor of {{ECSchema}}.





[jira] [Commented] (HDFS-8080) Separate JSON related routines used by WebHdfsFileSystem to a package local class

2015-04-08 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8080?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14485104#comment-14485104
 ] 

Hudson commented on HDFS-8080:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk #2089 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/2089/])
HDFS-8080. Separate JSON related routines used by WebHdfsFileSystem to a 
package local class. Contributed by Haohui Mai. (wheat9: rev 
ab04ff9efe632b4eca6faca7407ac35e00e6a379)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/web/TestJsonUtil.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/WebHdfsFileSystem.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/JsonUtil.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/web/TestWebHdfsTokens.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestDelegationTokensWithHA.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/web/TestWebHDFSForHA.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/WebHdfsConstants.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/JsonUtilClient.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


 Separate JSON related routines used by WebHdfsFileSystem to a package local 
 class
 -

 Key: HDFS-8080
 URL: https://issues.apache.org/jira/browse/HDFS-8080
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: hdfs-client
Reporter: Haohui Mai
Assignee: Haohui Mai
Priority: Minor
 Fix For: 2.8.0

 Attachments: HDFS-8080.000.patch, HDFS-8080.001.patch


 Currently {{JSONUtil}} contains routines used by both {{WebHdfsFileSystem}} 
 and {{NamenodeWebHdfsMethods}}. This jira proposes to separate them into 
 different classes so that it is easier to move {{WebHdfsFileSystem}} into a 
 separate module.





[jira] [Commented] (HDFS-8038) PBImageDelimitedTextWriter#getEntry output HDFS path in platform-specific format.

2015-04-08 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8038?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14485106#comment-14485106
 ] 

Hudson commented on HDFS-8038:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk #2089 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/2089/])
HDFS-8038. PBImageDelimitedTextWriter#getEntry output HDFS path in 
platform-specific format. Contributed by Xiaoyu Yao. (cnauroth: rev 
1e72d98c69bef3526cf0eb617de69e0b6d2fc13c)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/offlineImageViewer/PBImageDelimitedTextWriter.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/offlineImageViewer/PBImageTextWriter.java


 PBImageDelimitedTextWriter#getEntry output HDFS path in platform-specific 
 format.
 -

 Key: HDFS-8038
 URL: https://issues.apache.org/jira/browse/HDFS-8038
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: tools
Reporter: Xiaoyu Yao
Assignee: Xiaoyu Yao
Priority: Minor
 Fix For: 2.7.0

 Attachments: HDFS-8038.00.patch, HDFS-8038.01.patch, 
 HDFS-8038.02.patch


 PBImageDelimitedTextWriter#getEntry takes the HDFS path and passes it 
 through java.io.File, which causes platform-specific behavior, as shown by 
 the actual results of TestOfflineImageViewer#testPBDelimitedWriter() on Windows.
 {code}
 expected:[/emptydir, /dir0, /dir1/file2, /dir1, /dir1/file3, /dir2/file3, 
 /dir1/file0, /dir1/file1, /dir2/file1, /dir2/file2, /dir2, /dir0/file0, 
 /dir2/file0, /dir0/file1, /dir0/file2, /dir0/file3, /xattr] 
 but was:[\dir0, \dir0\file3, \dir0\file2, \dir0\file1, \xattr, \emptydir, 
 \dir0\file0, \dir1\file1, \dir1\file0, \dir1\file3, \dir1\file2, \dir2\file3, 
 \, \dir1, \dir2\file0, \dirContainingInvalidXMLChar#0;here, \dir2, 
 \dir2\file2, \dir2\file1]
 {code}
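The root cause is that java.io.File joins path components with the platform separator ('\' on Windows), while HDFS paths must always use '/'. A small sketch of the contrast, with a hypothetical helper showing the platform-independent alternative (not the actual patch code):

```java
import java.io.File;

// Sketch of the portability issue: java.io.File uses the platform
// separator ('\' on Windows), while HDFS paths must always use '/'.
// Building the string explicitly keeps output identical on every platform.
public class HdfsPathJoinSketch {

    // Platform-dependent: "/dir1/file2" on Unix, "\dir1\file2" on Windows.
    static String viaFile(String parent, String child) {
        return new File(parent, child).getPath();
    }

    // Platform-independent: always uses '/' as HDFS expects.
    static String viaString(String parent, String child) {
        return parent.endsWith("/") ? parent + child : parent + "/" + child;
    }

    public static void main(String[] args) {
        System.out.println(viaString("/dir1", "file2"));
        System.out.println(viaString("/", "emptydir"));
    }
}
```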





[jira] [Commented] (HDFS-8080) Separate JSON related routines used by WebHdfsFileSystem to a package local class

2015-04-08 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8080?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14485117#comment-14485117
 ] 

Hudson commented on HDFS-8080:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk-Java8 #148 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/148/])
HDFS-8080. Separate JSON related routines used by WebHdfsFileSystem to a 
package local class. Contributed by Haohui Mai. (wheat9: rev 
ab04ff9efe632b4eca6faca7407ac35e00e6a379)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/WebHdfsConstants.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/web/TestJsonUtil.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/JsonUtilClient.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/web/TestWebHDFSForHA.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/WebHdfsFileSystem.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestDelegationTokensWithHA.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/JsonUtil.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/web/TestWebHdfsTokens.java


 Separate JSON related routines used by WebHdfsFileSystem to a package local 
 class
 -

 Key: HDFS-8080
 URL: https://issues.apache.org/jira/browse/HDFS-8080
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: hdfs-client
Reporter: Haohui Mai
Assignee: Haohui Mai
Priority: Minor
 Fix For: 2.8.0

 Attachments: HDFS-8080.000.patch, HDFS-8080.001.patch


 Currently {{JSONUtil}} contains routines used by both {{WebHdfsFileSystem}} 
 and {{NamenodeWebHdfsMethods}}. This jira proposes to separate them into 
 different classes so that it is easier to move {{WebHdfsFileSystem}} into a 
 separate module.





[jira] [Commented] (HDFS-8073) Split BlockPlacementPolicyDefault.chooseTarget(..) so it can be easily overridden.

2015-04-08 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8073?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14485110#comment-14485110
 ] 

Hudson commented on HDFS-8073:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk #2089 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/2089/])
HDFS-8073. Split BlockPlacementPolicyDefault.chooseTarget(..) so it can be 
easily overridden. (Contributed by Walter Su) (vinayakumarb: rev 
d505c8acd30d6f40d0632fe9c93c886a4499a9fc)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockPlacementPolicyDefault.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


 Split BlockPlacementPolicyDefault.chooseTarget(..) so it can be easily 
 overridden.
 -

 Key: HDFS-8073
 URL: https://issues.apache.org/jira/browse/HDFS-8073
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: namenode
Reporter: Walter Su
Assignee: Walter Su
Priority: Trivial
 Fix For: 2.8.0

 Attachments: HDFS-8073.001.patch, HDFS-8073.002.patch, 
 HDFS-8073.003.patch


 We want to implement a placement policy (e.g. HDFS-7891) based on the default 
 policy. It will be easier if we can split 
 BlockPlacementPolicyDefault.chooseTarget(..).
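The refactoring idea can be sketched as extracting steps of one long method into protected hooks that a subclass overrides individually. The class and method names below are illustrative assumptions, not the actual patch structure:

```java
// Sketch of splitting a monolithic chooseTarget(..) into overridable
// protected steps, so a custom policy can replace a single step without
// copying the whole method. Names are illustrative assumptions.
public class ChooseTargetSplitSketch {

    static class DefaultPolicy {
        // The top-level method now delegates to overridable steps.
        final String chooseTarget() {
            return chooseLocalStorage() + "," + chooseRemoteRack();
        }
        protected String chooseLocalStorage() { return "local"; }
        protected String chooseRemoteRack()   { return "remote-rack"; }
    }

    // A custom policy (e.g. an EC-aware one) overrides a single step.
    static class CustomPolicy extends DefaultPolicy {
        @Override protected String chooseRemoteRack() { return "striped-rack"; }
    }

    public static void main(String[] args) {
        System.out.println(new DefaultPolicy().chooseTarget());
        System.out.println(new CustomPolicy().chooseTarget());
    }
}
```

Keeping the top-level method final while the steps are overridable preserves the overall placement pipeline while letting subclasses vary individual choices.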





[jira] [Created] (HDFS-8096) DatanodeMetrics#blocksReplicated will get incremented early and even for failed transfers

2015-04-08 Thread Vinayakumar B (JIRA)
Vinayakumar B created HDFS-8096:
---

 Summary: DatanodeMetrics#blocksReplicated will get incremented 
early and even for failed transfers
 Key: HDFS-8096
 URL: https://issues.apache.org/jira/browse/HDFS-8096
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: datanode
Reporter: Vinayakumar B
Assignee: Vinayakumar B


{code}case DatanodeProtocol.DNA_TRANSFER:
  // Send a copy of a block to another datanode
  dn.transferBlocks(bcmd.getBlockPoolId(), bcmd.getBlocks(),
  bcmd.getTargets(), bcmd.getTargetStorageTypes());
  dn.metrics.incrBlocksReplicated(bcmd.getBlocks().length);{code}
In the above code, which handles replication commands from the namenode, 
{{DatanodeMetrics#blocksReplicated}} is incremented too early, since the actual 
transfer happens in the background. Even failed transfers get counted.

The correct place to increment this counter is {{DataTransfer#run()}}.
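A minimal standalone sketch of the idea (hypothetical class names; the real change would live in the DataTransfer runnable inside the DataNode): count a block as replicated only when its transfer task completes successfully, not when the command is dispatched.

```java
import java.util.concurrent.atomic.AtomicLong;

// Stand-in for DataNodeMetrics.
class TransferMetrics {
    final AtomicLong blocksReplicated = new AtomicLong();
}

// Stand-in for DataTransfer#run(): the counter moves inside the task and
// is bumped only after the transfer succeeds.
class DataTransferTask implements Runnable {
    private final TransferMetrics metrics;
    private final boolean willSucceed; // stand-in for the real transfer outcome

    DataTransferTask(TransferMetrics metrics, boolean willSucceed) {
        this.metrics = metrics;
        this.willSucceed = willSucceed;
    }

    @Override
    public void run() {
        // ... perform the actual block transfer here ...
        if (willSucceed) {
            // Counted only once the transfer has actually completed.
            metrics.blocksReplicated.incrementAndGet();
        }
    }
}
```

With this shape, dispatching a DNA_TRANSFER command no longer touches the metric at all, so failed or still-running transfers are never counted.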





[jira] [Commented] (HDFS-8073) Split BlockPlacementPolicyDefault.chooseTarget(..) so it can be easily overrided.

2015-04-08 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8073?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14485050#comment-14485050
 ] 

Hudson commented on HDFS-8073:
--

FAILURE: Integrated in Hadoop-Yarn-trunk-Java8 #157 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/157/])
HDFS-8073. Split BlockPlacementPolicyDefault.chooseTarget(..) so it can be 
easily overrided. (Contributed by Walter Su) (vinayakumarb: rev 
d505c8acd30d6f40d0632fe9c93c886a4499a9fc)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockPlacementPolicyDefault.java


 Split BlockPlacementPolicyDefault.chooseTarget(..) so it can be easily 
 overrided.
 -

 Key: HDFS-8073
 URL: https://issues.apache.org/jira/browse/HDFS-8073
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: namenode
Reporter: Walter Su
Assignee: Walter Su
Priority: Trivial
 Fix For: 2.8.0

 Attachments: HDFS-8073.001.patch, HDFS-8073.002.patch, 
 HDFS-8073.003.patch


 We want to implement a placement policy (e.g. HDFS-7891) based on the default 
 policy. It will be easier if we can split 
 BlockPlacementPolicyDefault.chooseTarget(..).





[jira] [Commented] (HDFS-5215) dfs.datanode.du.reserved is not considered while computing available space

2015-04-08 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5215?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14485053#comment-14485053
 ] 

Hudson commented on HDFS-5215:
--

FAILURE: Integrated in Hadoop-Yarn-trunk-Java8 #157 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/157/])
HDFS-5215. dfs.datanode.du.reserved is not considered while computing (yzhang: 
rev 66763bb06f107f0e072c773a5feb25903c688ddc)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsVolumeImpl.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetImpl.java


 dfs.datanode.du.reserved is not considered while computing available space
 --

 Key: HDFS-5215
 URL: https://issues.apache.org/jira/browse/HDFS-5215
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: datanode
Affects Versions: 3.0.0
Reporter: Brahma Reddy Battula
Assignee: Brahma Reddy Battula
 Fix For: 2.8.0

 Attachments: HDFS-5215-002.patch, HDFS-5215-003.patch, 
 HDFS-5215-004.patch, HDFS-5215-005.patch, HDFS-5215.patch


 {code}
 public long getAvailable() throws IOException {
   long remaining = getCapacity() - getDfsUsed();
   long available = usage.getAvailable();
   if (remaining > available) {
     remaining = available;
   }
   return (remaining > 0) ? remaining : 0;
 }
 {code}
 Here the reserved space ({{dfs.datanode.du.reserved}}) is not considered 
 while computing the available space.
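A self-contained sketch of the fix (hypothetical VolumeSketch class with plain fields; the real change is in FsVolumeImpl and the reserved value comes from dfs.datanode.du.reserved): subtract the reserved bytes from both estimates before taking the minimum. All values are bytes.

```java
// Simplified model of a DataNode volume's space accounting.
class VolumeSketch {
    final long capacity, dfsUsed, diskAvailable, reserved;

    VolumeSketch(long capacity, long dfsUsed, long diskAvailable, long reserved) {
        this.capacity = capacity;
        this.dfsUsed = dfsUsed;
        this.diskAvailable = diskAvailable;
        this.reserved = reserved;
    }

    long getAvailable() {
        // Take the reserved bytes off the configured capacity...
        long remaining = capacity - reserved - dfsUsed;
        // ...and off what the filesystem reports as free.
        long available = diskAvailable - reserved;
        if (remaining > available) {
            remaining = available;
        }
        return (remaining > 0) ? remaining : 0;
    }
}
```

For example, with 100 bytes capacity, 30 used, 50 free on disk, and 10 reserved, the reported available space is min(100 - 10 - 30, 50 - 10) = 40 rather than 50.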





[jira] [Commented] (HDFS-8079) Separate the client retry conf from DFSConfigKeys

2015-04-08 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8079?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14485048#comment-14485048
 ] 

Hudson commented on HDFS-8079:
--

FAILURE: Integrated in Hadoop-Yarn-trunk-Java8 #157 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/157/])
HDFS-8079. Move dfs.client.retry.* confs from DFSConfigKeys to 
HdfsClientConfigKeys.Retry. (szetszwo: rev 
4be648b55c1ce8743f6e0ea1683168e9ed9c3ee4)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestEncryptedTransfer.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFsck.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/client/HdfsUtils.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestMissingBlocksAlert.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestPread.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSShell.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/NameNodeProxies.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSClientRetries.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestBlockMissingException.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestCrcCorruption.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/client/HdfsClientConfigKeys.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestClientReportBadBlock.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestBlockReaderLocalLegacy.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestFailoverWithBlockTokensEnabled.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestListCorruptFileBlocks.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/protocol/datatransfer/sasl/TestSaslDataTransfer.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestBlockTokenWithDFS.java


 Separate the client retry conf from DFSConfigKeys
 -

 Key: HDFS-8079
 URL: https://issues.apache.org/jira/browse/HDFS-8079
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: hdfs-client
Reporter: Tsz Wo Nicholas Sze
Assignee: Tsz Wo Nicholas Sze
 Fix For: 2.8.0

 Attachments: h8079_20150407.patch, h8079_20150407b.patch


 As part of HDFS-8050, move the dfs.client.retry.* conf keys from DFSConfigKeys 
 to a new class, HdfsClientConfigKeys.
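The refactoring pattern can be sketched as follows (hypothetical class and key names chosen for illustration; the actual keys and defaults are defined by the patch): retry-related constants move out of one large keys class into a nested Retry class that shares a common prefix.

```java
// Simplified stand-in for HdfsClientConfigKeys: grouping the
// dfs.client.retry.* keys under a nested class keeps related settings
// together and out of the monolithic DFSConfigKeys.
class HdfsClientConfigKeysSketch {
    static final String PREFIX = "dfs.client.retry.";

    static class Retry {
        // Illustrative keys; the real set comes from the patch.
        static final String POLICY_ENABLED_KEY = PREFIX + "policy.enabled";
        static final String MAX_ATTEMPTS_KEY = PREFIX + "max.attempts";
    }
}
```

Client code then reads HdfsClientConfigKeysSketch.Retry.MAX_ATTEMPTS_KEY instead of a flat constant, and the client module no longer needs the server-side DFSConfigKeys class at all.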





[jira] [Commented] (HDFS-8049) Annotation client implementation as private

2015-04-08 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8049?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14485052#comment-14485052
 ] 

Hudson commented on HDFS-8049:
--

FAILURE: Integrated in Hadoop-Yarn-trunk-Java8 #157 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/157/])
HDFS-8049. Add @InterfaceAudience.Private annotation to hdfs client 
implementation. Contributed by Takuya Fukudome (szetszwo: rev 
571a1ce9d037d99e7c9042bcb77ae7a2c4daf6d3)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/KeyProviderCache.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/BlockReaderUtil.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/CorruptFileBlockIterator.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/RemotePeerFactory.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/LeaseRenewer.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/BlockReaderLocalLegacy.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DataStreamer.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/NameNodeProxies.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/HAUtil.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSPacket.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSHedgedReadMetrics.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/BlockReader.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/ExtendedBlockId.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/BlockReaderLocal.java


 Annotation client implementation as private
 ---

 Key: HDFS-8049
 URL: https://issues.apache.org/jira/browse/HDFS-8049
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: hdfs-client
Reporter: Tsz Wo Nicholas Sze
Assignee: Takuya Fukudome
 Fix For: 2.8.0

 Attachments: HDFS-8049.1.patch, HDFS-8049.2.patch


 The @InterfaceAudience annotation is missing from quite a few client classes.
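The shape of the change looks roughly like this. The @Private annotation declared below is a simplified stand-in for org.apache.hadoop.classification.InterfaceAudience.Private (used so the sketch is self-contained), and LeaseRenewerSketch is a hypothetical example class:

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

// Stand-in for InterfaceAudience.Private: marks a type as internal to the
// project, not a stable public API.
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.TYPE)
@interface Private {}

// An internal client implementation class annotated as project-private.
@Private
class LeaseRenewerSketch {
    // ... implementation detail; downstream users should not depend on it ...
}
```

The annotation carries no behavior by itself; its value is that compatibility-checking tools and downstream developers can tell at a glance which client classes are fair game and which are implementation details.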





[jira] [Commented] (HDFS-8038) PBImageDelimitedTextWriter#getEntry output HDFS path in platform-specific format.

2015-04-08 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8038?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14485046#comment-14485046
 ] 

Hudson commented on HDFS-8038:
--

FAILURE: Integrated in Hadoop-Yarn-trunk-Java8 #157 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/157/])
HDFS-8038. PBImageDelimitedTextWriter#getEntry output HDFS path in 
platform-specific format. Contributed by Xiaoyu Yao. (cnauroth: rev 
1e72d98c69bef3526cf0eb617de69e0b6d2fc13c)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/offlineImageViewer/PBImageDelimitedTextWriter.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/offlineImageViewer/PBImageTextWriter.java


 PBImageDelimitedTextWriter#getEntry output HDFS path in platform-specific 
 format.
 -

 Key: HDFS-8038
 URL: https://issues.apache.org/jira/browse/HDFS-8038
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: tools
Reporter: Xiaoyu Yao
Assignee: Xiaoyu Yao
Priority: Minor
 Fix For: 2.7.0

 Attachments: HDFS-8038.00.patch, HDFS-8038.01.patch, 
 HDFS-8038.02.patch


 PBImageDelimitedTextWriter#getEntry takes the HDFS path and passes it 
 through java.io.File, which causes platform-specific behavior, as shown by the 
 actual results of TestOfflineImageViewer#testPBDelimitedWriter() on Windows.
 {code}
 expected:[/emptydir, /dir0, /dir1/file2, /dir1, /dir1/file3, /dir2/file3, 
 /dir1/file0, /dir1/file1, /dir2/file1, /dir2/file2, /dir2, /dir0/file0, 
 /dir2/file0, /dir0/file1, /dir0/file2, /dir0/file3, /xattr] 
 but was:[\dir0, \dir0\file3, \dir0\file2, \dir0\file1, \xattr, \emptydir, 
 \dir0\file0, \dir1\file1, \dir1\file0, \dir1\file3, \dir1\file2, \dir2\file3, 
 \, \dir1, \dir2\file0, \dirContainingInvalidXMLChar#0;here, \dir2, 
 \dir2\file2, \dir2\file1]
 {code}
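The essence of the fix can be sketched with a small helper (PathJoinSketch is a hypothetical name for illustration): join the parent and child with an explicit "/" separator instead of going through java.io.File, which would emit "\" on Windows.

```java
// HDFS paths always use "/", regardless of the local platform's
// File.separator, so the path must be assembled by hand.
class PathJoinSketch {
    static String append(String parent, String child) {
        if (parent.endsWith("/")) {
            return parent + child;
        }
        return parent + "/" + child;
    }
}
```

With this helper, "/dir1" plus "file2" yields "/dir1/file2" on every platform, matching the expected output in the test above.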





[jira] [Commented] (HDFS-8080) Separate JSON related routines used by WebHdfsFileSystem to a package local class

2015-04-08 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8080?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14485044#comment-14485044
 ] 

Hudson commented on HDFS-8080:
--

FAILURE: Integrated in Hadoop-Yarn-trunk-Java8 #157 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/157/])
HDFS-8080. Separate JSON related routines used by WebHdfsFileSystem to a 
package local class. Contributed by Haohui Mai. (wheat9: rev 
ab04ff9efe632b4eca6faca7407ac35e00e6a379)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/web/TestWebHDFSForHA.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/WebHdfsFileSystem.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/JsonUtilClient.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestDelegationTokensWithHA.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/JsonUtil.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/web/TestJsonUtil.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/web/TestWebHdfsTokens.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/WebHdfsConstants.java


 Separate JSON related routines used by WebHdfsFileSystem to a package local 
 class
 -

 Key: HDFS-8080
 URL: https://issues.apache.org/jira/browse/HDFS-8080
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: hdfs-client
Reporter: Haohui Mai
Assignee: Haohui Mai
Priority: Minor
 Fix For: 2.8.0

 Attachments: HDFS-8080.000.patch, HDFS-8080.001.patch


 Currently {{JSONUtil}} contains routines used by both {{WebHdfsFileSystem}} 
 and {{NamenodeWebHdfsMethods}}. This JIRA proposes to separate them into 
 different classes so that it is easier to move {{WebHdfsFileSystem}} into a 
 separate module.
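The direction of the split can be sketched like this (JsonUtilSketch and JsonUtilClientSketch are hypothetical simplified stand-ins for JsonUtil and the new package-local JsonUtilClient; the toy JSON handling below is only for illustration):

```java
// Server-side serialization stays with NamenodeWebHdfsMethods.
class JsonUtilSketch {
    static String toJsonString(long fileLength) {
        return "{\"length\":" + fileLength + "}";
    }
}

// Client-side parsing moves to a package-local class used only by
// WebHdfsFileSystem, so the client carries no server dependencies.
class JsonUtilClientSketch {
    static long toFileLength(String json) {
        int colon = json.indexOf(':');
        int brace = json.indexOf('}');
        return Long.parseLong(json.substring(colon + 1, brace));
    }
}
```

Once the client only touches the parsing half, WebHdfsFileSystem can move to a separate module without dragging server-side serialization code along with it.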





[jira] [Updated] (HDFS-8096) DatanodeMetrics#blocksReplicated will get incremented early and even for failed transfers

2015-04-08 Thread Vinayakumar B (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8096?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinayakumar B updated HDFS-8096:

Attachment: HDFS-8096-01.patch

Attached the patch for the same.
Please review.

 DatanodeMetrics#blocksReplicated will get incremented early and even for 
 failed transfers
 -

 Key: HDFS-8096
 URL: https://issues.apache.org/jira/browse/HDFS-8096
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: datanode
Reporter: Vinayakumar B
Assignee: Vinayakumar B
 Attachments: HDFS-8096-01.patch


 {code}case DatanodeProtocol.DNA_TRANSFER:
   // Send a copy of a block to another datanode
   dn.transferBlocks(bcmd.getBlockPoolId(), bcmd.getBlocks(),
   bcmd.getTargets(), bcmd.getTargetStorageTypes());
   dn.metrics.incrBlocksReplicated(bcmd.getBlocks().length);{code}
 In the above code, which handles replication commands from the namenode, 
 {{DatanodeMetrics#blocksReplicated}} is incremented too early, since the 
 actual transfer happens in the background. Even failed transfers get counted.
 The correct place to increment this counter is {{DataTransfer#run()}}.





[jira] [Updated] (HDFS-8096) DatanodeMetrics#blocksReplicated will get incremented early and even for failed transfers

2015-04-08 Thread Vinayakumar B (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8096?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinayakumar B updated HDFS-8096:

Status: Patch Available  (was: Open)

 DatanodeMetrics#blocksReplicated will get incremented early and even for 
 failed transfers
 -

 Key: HDFS-8096
 URL: https://issues.apache.org/jira/browse/HDFS-8096
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: datanode
Reporter: Vinayakumar B
Assignee: Vinayakumar B
 Attachments: HDFS-8096-01.patch


 {code}case DatanodeProtocol.DNA_TRANSFER:
   // Send a copy of a block to another datanode
   dn.transferBlocks(bcmd.getBlockPoolId(), bcmd.getBlocks(),
   bcmd.getTargets(), bcmd.getTargetStorageTypes());
   dn.metrics.incrBlocksReplicated(bcmd.getBlocks().length);{code}
 In the above code, which handles replication commands from the namenode, 
 {{DatanodeMetrics#blocksReplicated}} is incremented too early, since the 
 actual transfer happens in the background. Even failed transfers get counted.
 The correct place to increment this counter is {{DataTransfer#run()}}.





[jira] [Commented] (HDFS-8038) PBImageDelimitedTextWriter#getEntry output HDFS path in platform-specific format.

2015-04-08 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8038?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14485139#comment-14485139
 ] 

Hudson commented on HDFS-8038:
--

FAILURE: Integrated in Hadoop-Yarn-trunk #891 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/891/])
HDFS-8038. PBImageDelimitedTextWriter#getEntry output HDFS path in 
platform-specific format. Contributed by Xiaoyu Yao. (cnauroth: rev 
1e72d98c69bef3526cf0eb617de69e0b6d2fc13c)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/offlineImageViewer/PBImageDelimitedTextWriter.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/offlineImageViewer/PBImageTextWriter.java


 PBImageDelimitedTextWriter#getEntry output HDFS path in platform-specific 
 format.
 -

 Key: HDFS-8038
 URL: https://issues.apache.org/jira/browse/HDFS-8038
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: tools
Reporter: Xiaoyu Yao
Assignee: Xiaoyu Yao
Priority: Minor
 Fix For: 2.7.0

 Attachments: HDFS-8038.00.patch, HDFS-8038.01.patch, 
 HDFS-8038.02.patch


 PBImageDelimitedTextWriter#getEntry takes the HDFS path and passes it 
 through java.io.File, which causes platform-specific behavior, as shown by the 
 actual results of TestOfflineImageViewer#testPBDelimitedWriter() on Windows.
 {code}
 expected:[/emptydir, /dir0, /dir1/file2, /dir1, /dir1/file3, /dir2/file3, 
 /dir1/file0, /dir1/file1, /dir2/file1, /dir2/file2, /dir2, /dir0/file0, 
 /dir2/file0, /dir0/file1, /dir0/file2, /dir0/file3, /xattr] 
 but was:[\dir0, \dir0\file3, \dir0\file2, \dir0\file1, \xattr, \emptydir, 
 \dir0\file0, \dir1\file1, \dir1\file0, \dir1\file3, \dir1\file2, \dir2\file3, 
 \, \dir1, \dir2\file0, \dirContainingInvalidXMLChar#0;here, \dir2, 
 \dir2\file2, \dir2\file1]
 {code}





[jira] [Commented] (HDFS-8079) Separate the client retry conf from DFSConfigKeys

2015-04-08 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8079?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14485141#comment-14485141
 ] 

Hudson commented on HDFS-8079:
--

FAILURE: Integrated in Hadoop-Yarn-trunk #891 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/891/])
HDFS-8079. Move dfs.client.retry.* confs from DFSConfigKeys to 
HdfsClientConfigKeys.Retry. (szetszwo: rev 
4be648b55c1ce8743f6e0ea1683168e9ed9c3ee4)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestBlockTokenWithDFS.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestBlockMissingException.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/protocol/datatransfer/sasl/TestSaslDataTransfer.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSClientRetries.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/client/HdfsUtils.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestMissingBlocksAlert.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestPread.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestBlockReaderLocalLegacy.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/NameNodeProxies.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestClientReportBadBlock.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestCrcCorruption.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFsck.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestFailoverWithBlockTokensEnabled.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSShell.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestEncryptedTransfer.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestListCorruptFileBlocks.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/client/HdfsClientConfigKeys.java


 Separate the client retry conf from DFSConfigKeys
 -

 Key: HDFS-8079
 URL: https://issues.apache.org/jira/browse/HDFS-8079
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: hdfs-client
Reporter: Tsz Wo Nicholas Sze
Assignee: Tsz Wo Nicholas Sze
 Fix For: 2.8.0

 Attachments: h8079_20150407.patch, h8079_20150407b.patch


 As part of HDFS-8050, move the dfs.client.retry.* conf keys from DFSConfigKeys 
 to a new class, HdfsClientConfigKeys.





[jira] [Commented] (HDFS-8080) Separate JSON related routines used by WebHdfsFileSystem to a package local class

2015-04-08 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8080?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14485137#comment-14485137
 ] 

Hudson commented on HDFS-8080:
--

FAILURE: Integrated in Hadoop-Yarn-trunk #891 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/891/])
HDFS-8080. Separate JSON related routines used by WebHdfsFileSystem to a 
package local class. Contributed by Haohui Mai. (wheat9: rev 
ab04ff9efe632b4eca6faca7407ac35e00e6a379)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/JsonUtilClient.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/web/TestJsonUtil.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestDelegationTokensWithHA.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/web/TestWebHdfsTokens.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/web/TestWebHDFSForHA.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/JsonUtil.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/WebHdfsConstants.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/WebHdfsFileSystem.java


 Separate JSON related routines used by WebHdfsFileSystem to a package local 
 class
 -

 Key: HDFS-8080
 URL: https://issues.apache.org/jira/browse/HDFS-8080
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: hdfs-client
Reporter: Haohui Mai
Assignee: Haohui Mai
Priority: Minor
 Fix For: 2.8.0

 Attachments: HDFS-8080.000.patch, HDFS-8080.001.patch


 Currently {{JSONUtil}} contains routines used by both {{WebHdfsFileSystem}} 
 and {{NamenodeWebHdfsMethods}}. This JIRA proposes to separate them into 
 different classes so that it is easier to move {{WebHdfsFileSystem}} into a 
 separate module.





[jira] [Commented] (HDFS-8073) Split BlockPlacementPolicyDefault.chooseTarget(..) so it can be easily overrided.

2015-04-08 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8073?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14485143#comment-14485143
 ] 

Hudson commented on HDFS-8073:
--

FAILURE: Integrated in Hadoop-Yarn-trunk #891 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/891/])
HDFS-8073. Split BlockPlacementPolicyDefault.chooseTarget(..) so it can be 
easily overrided. (Contributed by Walter Su) (vinayakumarb: rev 
d505c8acd30d6f40d0632fe9c93c886a4499a9fc)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockPlacementPolicyDefault.java


 Split BlockPlacementPolicyDefault.chooseTarget(..) so it can be easily 
 overrided.
 -

 Key: HDFS-8073
 URL: https://issues.apache.org/jira/browse/HDFS-8073
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: namenode
Reporter: Walter Su
Assignee: Walter Su
Priority: Trivial
 Fix For: 2.8.0

 Attachments: HDFS-8073.001.patch, HDFS-8073.002.patch, 
 HDFS-8073.003.patch


 We want to implement a placement policy (e.g. HDFS-7891) based on the default 
 policy. It will be easier if we can split 
 BlockPlacementPolicyDefault.chooseTarget(..).





[jira] [Commented] (HDFS-8049) Annotation client implementation as private

2015-04-08 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8049?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14485145#comment-14485145
 ] 

Hudson commented on HDFS-8049:
--

FAILURE: Integrated in Hadoop-Yarn-trunk #891 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/891/])
HDFS-8049. Add @InterfaceAudience.Private annotation to hdfs client 
implementation. Contributed by Takuya Fukudome (szetszwo: rev 
571a1ce9d037d99e7c9042bcb77ae7a2c4daf6d3)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/HAUtil.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/KeyProviderCache.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/CorruptFileBlockIterator.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DataStreamer.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/BlockReaderLocal.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/BlockReaderUtil.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSHedgedReadMetrics.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/RemotePeerFactory.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/BlockReaderLocalLegacy.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSPacket.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/NameNodeProxies.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/BlockReader.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/LeaseRenewer.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/ExtendedBlockId.java


 Annotation client implementation as private
 ---

 Key: HDFS-8049
 URL: https://issues.apache.org/jira/browse/HDFS-8049
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: hdfs-client
Reporter: Tsz Wo Nicholas Sze
Assignee: Takuya Fukudome
 Fix For: 2.8.0

 Attachments: HDFS-8049.1.patch, HDFS-8049.2.patch


 The @InterfaceAudience annotation is missing from quite a few client classes.





[jira] [Commented] (HDFS-5215) dfs.datanode.du.reserved is not considered while computing available space

2015-04-08 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5215?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14485146#comment-14485146
 ] 

Hudson commented on HDFS-5215:
--

FAILURE: Integrated in Hadoop-Yarn-trunk #891 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/891/])
HDFS-5215. dfs.datanode.du.reserved is not considered while computing (yzhang: 
rev 66763bb06f107f0e072c773a5feb25903c688ddc)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetImpl.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsVolumeImpl.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


 dfs.datanode.du.reserved is not considered while computing available space
 --

 Key: HDFS-5215
 URL: https://issues.apache.org/jira/browse/HDFS-5215
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: datanode
Affects Versions: 3.0.0
Reporter: Brahma Reddy Battula
Assignee: Brahma Reddy Battula
 Fix For: 2.8.0

 Attachments: HDFS-5215-002.patch, HDFS-5215-003.patch, 
 HDFS-5215-004.patch, HDFS-5215-005.patch, HDFS-5215.patch


 {code}
 public long getAvailable() throws IOException {
   long remaining = getCapacity() - getDfsUsed();
   long available = usage.getAvailable();
   if (remaining > available) {
     remaining = available;
   }
   return (remaining > 0) ? remaining : 0;
 }
 {code}
 Here the reserved space ({{dfs.datanode.du.reserved}}) is not considered 
 while computing the available space.





[jira] [Commented] (HDFS-5215) dfs.datanode.du.reserved is not considered while computing available space

2015-04-08 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5215?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14485113#comment-14485113
 ] 

Hudson commented on HDFS-5215:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk #2089 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/2089/])
HDFS-5215. dfs.datanode.du.reserved is not considered while computing (yzhang: 
rev 66763bb06f107f0e072c773a5feb25903c688ddc)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsVolumeImpl.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetImpl.java


 dfs.datanode.du.reserved is not considered while computing available space
 --

 Key: HDFS-5215
 URL: https://issues.apache.org/jira/browse/HDFS-5215
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: datanode
Affects Versions: 3.0.0
Reporter: Brahma Reddy Battula
Assignee: Brahma Reddy Battula
 Fix For: 2.8.0

 Attachments: HDFS-5215-002.patch, HDFS-5215-003.patch, 
 HDFS-5215-004.patch, HDFS-5215-005.patch, HDFS-5215.patch


 {code}
 public long getAvailable() throws IOException {
   long remaining = getCapacity() - getDfsUsed();
   long available = usage.getAvailable();
   if (remaining > available) {
     remaining = available;
   }
   return (remaining > 0) ? remaining : 0;
 }
 {code}
 Here the reserved space ({{dfs.datanode.du.reserved}}) is not considered 
 while computing the available space.





[jira] [Commented] (HDFS-8049) Annotation client implementation as private

2015-04-08 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8049?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14485112#comment-14485112
 ] 

Hudson commented on HDFS-8049:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk #2089 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/2089/])
HDFS-8049. Add @InterfaceAudience.Private annotation to hdfs client 
implementation. Contributed by Takuya Fukudome (szetszwo: rev 
571a1ce9d037d99e7c9042bcb77ae7a2c4daf6d3)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/BlockReader.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/BlockReaderLocal.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/BlockReaderLocalLegacy.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/HAUtil.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/LeaseRenewer.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSPacket.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/BlockReaderUtil.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/CorruptFileBlockIterator.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/ExtendedBlockId.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSHedgedReadMetrics.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/KeyProviderCache.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DataStreamer.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/NameNodeProxies.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/RemotePeerFactory.java


 Annotation client implementation as private
 ---

 Key: HDFS-8049
 URL: https://issues.apache.org/jira/browse/HDFS-8049
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: hdfs-client
Reporter: Tsz Wo Nicholas Sze
Assignee: Takuya Fukudome
 Fix For: 2.8.0

 Attachments: HDFS-8049.1.patch, HDFS-8049.2.patch


 The @InterfaceAudience Annotation is missing for quite a few client classes.
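For illustration, the audience-annotation pattern the patch applies looks like the following. This is a minimal self-contained stand-in; the real annotations live in org.apache.hadoop.classification.InterfaceAudience and carry project-specific semantics:

```java
import java.lang.annotation.Documented;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;

// Minimal stand-in for the audience-annotation pattern; not the real
// Hadoop annotation, only a shape-alike for demonstration.
public class AudienceDemo {
    @Documented
    @Retention(RetentionPolicy.RUNTIME)
    public @interface Private {}

    // A client-implementation class marked as private to the project,
    // signalling that external users must not depend on it.
    @Private
    public static class BlockReaderLike {}

    public static boolean isPrivate(Class<?> c) {
        return c.isAnnotationPresent(Private.class);
    }
}
```

Annotating each client class this way makes the private status machine-checkable instead of relying on package conventions.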



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8073) Split BlockPlacementPolicyDefault.chooseTarget(..) so it can be easily overrided.

2015-04-08 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8073?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14485123#comment-14485123
 ] 

Hudson commented on HDFS-8073:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk-Java8 #148 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/148/])
HDFS-8073. Split BlockPlacementPolicyDefault.chooseTarget(..) so it can be 
easily overrided. (Contributed by Walter Su) (vinayakumarb: rev 
d505c8acd30d6f40d0632fe9c93c886a4499a9fc)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockPlacementPolicyDefault.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


 Split BlockPlacementPolicyDefault.chooseTarget(..) so it can be easily 
 overrided.
 -

 Key: HDFS-8073
 URL: https://issues.apache.org/jira/browse/HDFS-8073
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: namenode
Reporter: Walter Su
Assignee: Walter Su
Priority: Trivial
 Fix For: 2.8.0

 Attachments: HDFS-8073.001.patch, HDFS-8073.002.patch, 
 HDFS-8073.003.patch


 We want to implement a placement policy (e.g. HDFS-7891) based on the default 
 policy. It'll be easier if we could split 
 BlockPlacementPolicyDefault.chooseTarget(..).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8049) Annotation client implementation as private

2015-04-08 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8049?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14485125#comment-14485125
 ] 

Hudson commented on HDFS-8049:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk-Java8 #148 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/148/])
HDFS-8049. Add @InterfaceAudience.Private annotation to hdfs client 
implementation. Contributed by Takuya Fukudome (szetszwo: rev 
571a1ce9d037d99e7c9042bcb77ae7a2c4daf6d3)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/BlockReaderLocalLegacy.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/NameNodeProxies.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSPacket.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/KeyProviderCache.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/RemotePeerFactory.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/BlockReaderLocal.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/LeaseRenewer.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/BlockReaderUtil.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/ExtendedBlockId.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSHedgedReadMetrics.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/BlockReader.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/HAUtil.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/CorruptFileBlockIterator.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DataStreamer.java


 Annotation client implementation as private
 ---

 Key: HDFS-8049
 URL: https://issues.apache.org/jira/browse/HDFS-8049
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: hdfs-client
Reporter: Tsz Wo Nicholas Sze
Assignee: Takuya Fukudome
 Fix For: 2.8.0

 Attachments: HDFS-8049.1.patch, HDFS-8049.2.patch


 The @InterfaceAudience Annotation is missing for quite a few client classes.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8038) PBImageDelimitedTextWriter#getEntry output HDFS path in platform-specific format.

2015-04-08 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8038?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14485119#comment-14485119
 ] 

Hudson commented on HDFS-8038:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk-Java8 #148 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/148/])
HDFS-8038. PBImageDelimitedTextWriter#getEntry output HDFS path in 
platform-specific format. Contributed by Xiaoyu Yao. (cnauroth: rev 
1e72d98c69bef3526cf0eb617de69e0b6d2fc13c)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/offlineImageViewer/PBImageDelimitedTextWriter.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/offlineImageViewer/PBImageTextWriter.java


 PBImageDelimitedTextWriter#getEntry output HDFS path in platform-specific 
 format.
 -

 Key: HDFS-8038
 URL: https://issues.apache.org/jira/browse/HDFS-8038
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: tools
Reporter: Xiaoyu Yao
Assignee: Xiaoyu Yao
Priority: Minor
 Fix For: 2.7.0

 Attachments: HDFS-8038.00.patch, HDFS-8038.01.patch, 
 HDFS-8038.02.patch


 PBImageDelimitedTextWriter#getEntry takes the HDFS path and passes it 
 through java.io.File, which causes platform-specific behavior, as shown by the 
 actual results of TestOfflineImageViewer#testPBDelimitedWriter() on Windows.
 {code}
 expected:[/emptydir, /dir0, /dir1/file2, /dir1, /dir1/file3, /dir2/file3, 
 /dir1/file0, /dir1/file1, /dir2/file1, /dir2/file2, /dir2, /dir0/file0, 
 /dir2/file0, /dir0/file1, /dir0/file2, /dir0/file3, /xattr] 
 but was:[\dir0, \dir0\file3, \dir0\file2, \dir0\file1, \xattr, \emptydir, 
 \dir0\file0, \dir1\file1, \dir1\file0, \dir1\file3, \dir1\file2, \dir2\file3, 
 \, \dir1, \dir2\file0, \dirContainingInvalidXMLChar#0;here, \dir2, 
 \dir2\file2, \dir2\file1]
 {code}
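The pitfall can be reproduced outside HDFS: java.io.File joins paths with the platform separator (backslash on Windows), while HDFS paths must always use "/". A minimal sketch, with hypothetical helper names:

```java
import java.io.File;

// Demonstrates why round-tripping an HDFS path through java.io.File is
// platform-dependent, and a portable alternative that always emits "/".
public class HdfsPathJoin {
    // Platform-dependent: on Windows this yields "dir0\file1".
    public static String platformJoin(String parent, String child) {
        return new File(parent, child).toString();
    }

    // Portable: always uses "/" regardless of OS, as HDFS paths require.
    public static String hdfsJoin(String parent, String child) {
        return parent.equals("/") ? "/" + child : parent + "/" + child;
    }
}
```

Building the output path with an explicit "/" join (instead of File) makes the writer's output identical on every platform.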



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-5215) dfs.datanode.du.reserved is not considered while computing available space

2015-04-08 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5215?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14485126#comment-14485126
 ] 

Hudson commented on HDFS-5215:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk-Java8 #148 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/148/])
HDFS-5215. dfs.datanode.du.reserved is not considered while computing (yzhang: 
rev 66763bb06f107f0e072c773a5feb25903c688ddc)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetImpl.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsVolumeImpl.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


 dfs.datanode.du.reserved is not considered while computing available space
 --

 Key: HDFS-5215
 URL: https://issues.apache.org/jira/browse/HDFS-5215
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: datanode
Affects Versions: 3.0.0
Reporter: Brahma Reddy Battula
Assignee: Brahma Reddy Battula
 Fix For: 2.8.0

 Attachments: HDFS-5215-002.patch, HDFS-5215-003.patch, 
 HDFS-5215-004.patch, HDFS-5215-005.patch, HDFS-5215.patch


 {code}
 public long getAvailable() throws IOException {
   long remaining = getCapacity() - getDfsUsed();
   long available = usage.getAvailable();
   if (remaining > available) {
     remaining = available;
   }
   return (remaining > 0) ? remaining : 0;
 }
 {code}
 Here the reserved space (dfs.datanode.du.reserved) is not considered while 
 computing the available space.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8079) Separate the client retry conf from DFSConfigKeys

2015-04-08 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8079?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14485121#comment-14485121
 ] 

Hudson commented on HDFS-8079:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk-Java8 #148 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/148/])
HDFS-8079. Move dfs.client.retry.* confs from DFSConfigKeys to 
HdfsClientConfigKeys.Retry. (szetszwo: rev 
4be648b55c1ce8743f6e0ea1683168e9ed9c3ee4)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestBlockTokenWithDFS.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSClientRetries.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSShell.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestCrcCorruption.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestFailoverWithBlockTokensEnabled.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestBlockReaderLocalLegacy.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestListCorruptFileBlocks.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/protocol/datatransfer/sasl/TestSaslDataTransfer.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/NameNodeProxies.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestMissingBlocksAlert.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestEncryptedTransfer.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/client/HdfsUtils.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFsck.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/client/HdfsClientConfigKeys.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestClientReportBadBlock.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestBlockMissingException.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestPread.java


 Separate the client retry conf from DFSConfigKeys
 -

 Key: HDFS-8079
 URL: https://issues.apache.org/jira/browse/HDFS-8079
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: hdfs-client
Reporter: Tsz Wo Nicholas Sze
Assignee: Tsz Wo Nicholas Sze
 Fix For: 2.8.0

 Attachments: h8079_20150407.patch, h8079_20150407b.patch


 As part of HDFS-8050, move the dfs.client.retry.* confs from DFSConfigKeys to a 
 new class, HdfsClientConfigKeys. 
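A rough sketch of the key-split this issue describes: the client retry keys move out of a monolithic key holder into a nested Retry namespace. The interface shape and the two example keys below are illustrative assumptions, not the exact contents of the real HdfsClientConfigKeys:

```java
// Illustrative sketch of grouping client retry configuration keys under a
// nested holder, as the JIRA describes; names are assumptions, not the
// verbatim Hadoop source.
public class HdfsClientConfigKeysSketch {
    public interface Retry {
        String PREFIX = "dfs.client.retry.";
        String POLICY_ENABLED_KEY = PREFIX + "policy.enabled";
        boolean POLICY_ENABLED_DEFAULT = false;
        String MAX_ATTEMPTS_KEY = PREFIX + "max.attempts";
        int MAX_ATTEMPTS_DEFAULT = 10;
    }
}
```

Grouping the keys under one prefix-derived namespace keeps client-only configuration separable from the server-side DFSConfigKeys.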



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8079) Separate the client retry conf from DFSConfigKeys

2015-04-08 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8079?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14485108#comment-14485108
 ] 

Hudson commented on HDFS-8079:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk #2089 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/2089/])
HDFS-8079. Move dfs.client.retry.* confs from DFSConfigKeys to 
HdfsClientConfigKeys.Retry. (szetszwo: rev 
4be648b55c1ce8743f6e0ea1683168e9ed9c3ee4)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestClientReportBadBlock.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/protocol/datatransfer/sasl/TestSaslDataTransfer.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSClientRetries.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestListCorruptFileBlocks.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestBlockReaderLocalLegacy.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestCrcCorruption.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestBlockTokenWithDFS.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/NameNodeProxies.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestPread.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSShell.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestFailoverWithBlockTokensEnabled.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/client/HdfsUtils.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/client/HdfsClientConfigKeys.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestMissingBlocksAlert.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFsck.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestEncryptedTransfer.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestBlockMissingException.java


 Separate the client retry conf from DFSConfigKeys
 -

 Key: HDFS-8079
 URL: https://issues.apache.org/jira/browse/HDFS-8079
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: hdfs-client
Reporter: Tsz Wo Nicholas Sze
Assignee: Tsz Wo Nicholas Sze
 Fix For: 2.8.0

 Attachments: h8079_20150407.patch, h8079_20150407b.patch


 As part of HDFS-8050, move the dfs.client.retry.* confs from DFSConfigKeys to a 
 new class, HdfsClientConfigKeys. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7949) WebImageViewer need support file size calculation with striped blocks

2015-04-08 Thread Rakesh R (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7949?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rakesh R updated HDFS-7949:
---
Attachment: HDFS-7949-002.patch

 WebImageViewer need support file size calculation with striped blocks
 -

 Key: HDFS-7949
 URL: https://issues.apache.org/jira/browse/HDFS-7949
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Hui Zheng
Assignee: Rakesh R
Priority: Minor
 Attachments: HDFS-7949-001.patch, HDFS-7949-002.patch


 The file size calculation should be changed when the blocks of the file are 
 striped in WebImageViewer.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Work started] (HDFS-8090) Erasure Coding: Add RPC to client-namenode to list all ECSchemas loaded in Namenode.

2015-04-08 Thread Vinayakumar B (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8090?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HDFS-8090 started by Vinayakumar B.
---
 Erasure Coding: Add RPC to client-namenode to list all ECSchemas loaded in 
 Namenode.
 

 Key: HDFS-8090
 URL: https://issues.apache.org/jira/browse/HDFS-8090
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Vinayakumar B
Assignee: Vinayakumar B
 Attachments: HDFS-8090-01.patch


 ECSchemas will be configured and loaded only at the Namenode to avoid 
 conflicts.
 The client has to specify one of these schemas during creation of EC zones.
 So, add an RPC to ClientProtocol to get all ECSchemas loaded at the Namenode, so 
 that the client can choose any one of them.
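A hedged sketch of the proposal above: the Namenode holds the loaded schemas and exposes them so a client can pick one when creating an EC zone. Every name below is illustrative, not the real ClientProtocol API:

```java
import java.util.Arrays;
import java.util.List;

// Illustrative model of the proposed RPC: list the schemas the NN loaded,
// and validate a client's choice against that list. Not the real API.
public class EcSchemaRegistry {
    private final List<String> loadedSchemas;

    private EcSchemaRegistry(List<String> schemas) {
        this.loadedSchemas = schemas;
    }

    public static EcSchemaRegistry of(String... schemas) {
        return new EcSchemaRegistry(Arrays.asList(schemas));
    }

    /** Stand-in for the proposed ClientProtocol RPC returning all schemas. */
    public List<String> getECSchemas() {
        return loadedSchemas;
    }

    /** Client-side check: an EC zone may only use a schema the NN loaded. */
    public boolean isValidForZone(String schemaName) {
        return loadedSchemas.contains(schemaName);
    }
}
```

Centralizing the schema list at the Namenode means a client can never create a zone with a schema the cluster does not know about.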



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8090) Erasure Coding: Add RPC to client-namenode to list all ECSchemas loaded in Namenode.

2015-04-08 Thread Vinayakumar B (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8090?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinayakumar B updated HDFS-8090:

Attachment: HDFS-8090-01.patch

Initial patch for review.
Please review.

 Erasure Coding: Add RPC to client-namenode to list all ECSchemas loaded in 
 Namenode.
 

 Key: HDFS-8090
 URL: https://issues.apache.org/jira/browse/HDFS-8090
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Vinayakumar B
Assignee: Vinayakumar B
 Attachments: HDFS-8090-01.patch


 ECSchemas will be configured and loaded only at the Namenode to avoid 
 conflicts.
 The client has to specify one of these schemas during creation of EC zones.
 So, add an RPC to ClientProtocol to get all ECSchemas loaded at the Namenode, so 
 that the client can choose any one of them.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8074) Define a system-wide default EC schema

2015-04-08 Thread Kai Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8074?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14484887#comment-14484887
 ] 

Kai Zheng commented on HDFS-8074:
-

Thanks for your good catch. Please let me know how you would like the new name. 
{{SYS-DEFAULT-RS-6-3}} is now used, and the similar entry is removed from the 
predefined XML file to avoid duplication.

 Define a system-wide default EC schema
 --

 Key: HDFS-8074
 URL: https://issues.apache.org/jira/browse/HDFS-8074
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Kai Zheng
Assignee: Kai Zheng
 Attachments: HDFS-8074-v1.patch, HDFS-8074-v2.patch


 It's good to have a system default EC schema with fixed values before we 
 support more schemas. This helps resolve some dependencies before HDFS-7866 
 can be done in whole. The default system schema is also needed when an admin 
 just wants to use it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8092) dfs -count -q should not consider snapshots under REM_QUOTA

2015-04-08 Thread Brahma Reddy Battula (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8092?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brahma Reddy Battula updated HDFS-8092:
---
Hadoop Flags:   (was: Reviewed)

 dfs -count -q should not consider snapshots under REM_QUOTA
 ---

 Key: HDFS-8092
 URL: https://issues.apache.org/jira/browse/HDFS-8092
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: snapshots, tools
Reporter: Archana T
Assignee: Rakesh R
Priority: Minor

 dfs -count -q should not consider snapshots under Remaining quota
 List of operations performed:
 1. hdfs dfs -mkdir /Dir1
 2. hdfs dfsadmin -setQuota 2 /Dir1
 3. hadoop fs -count -q -h -v /Dir1
  
QUOTA   {color:red} REM_QUOTA{color}  SPACE_QUOTA 
 REM_SPACE_QUOTADIR_COUNT   FILE_COUNT   CONTENT_SIZE PATHNAME
2   {color:red} 1 {color}none 
 inf10  0 /Dir1
 4. hdfs dfs -put hdfs /Dir1/f1
 5. hadoop fs -count -q -h -v /Dir1
  QUOTA   {color:red} REM_QUOTA{color}  SPACE_QUOTA 
 REM_SPACE_QUOTADIR_COUNT   FILE_COUNT   CONTENT_SIZE PATHNAME
2  {color:red}  0{color} none 
 inf11 11.4 K /Dir1
 6. hdfs dfsadmin -allowSnapshot /Dir1
 7. hdfs dfs -createSnapshot /Dir1
 8. hadoop fs -count -q -h -v /Dir1
  QUOTA   {color:red} REM_QUOTA{color}  SPACE_QUOTA REM_SPACE_QUOTA
 DIR_COUNT   FILE_COUNT   CONTENT_SIZE PATHNAME
2 {color:red}  -1 {color}none 
 inf21 11.4 K /Dir1
 Whenever a snapshot is created, the value of REM_QUOTA gets decremented.
 Since snapshot creation is not counted against the quota of the respective 
 directory, dfs -count should not decrement the REM_QUOTA value.
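The arithmetic behind the report can be sketched as follows; remQuota and its parameters are hypothetical names, used only to show why the observed REM_QUOTA goes to -1 when snapshot objects are counted:

```java
// Illustrative model of the remaining-namespace-quota computation:
// REM_QUOTA = QUOTA - counted namespace objects, where snapshot objects
// are optionally excluded, as this report argues they should be.
public class QuotaMath {
    public static long remQuota(long quota, long dirCount, long fileCount,
                                long snapshotCount, boolean excludeSnapshots) {
        long counted = dirCount + fileCount + (excludeSnapshots ? 0 : snapshotCount);
        return quota - counted;
    }
}
```

With QUOTA=2, one directory, one file, and one snapshot, counting the snapshot yields -1 (the bug observed in step 8), while excluding it yields the expected 0.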



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8074) Define a system-wide default EC schema

2015-04-08 Thread Vinayakumar B (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8074?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14484984#comment-14484984
 ] 

Vinayakumar B commented on HDFS-8074:
-

Thanks [~drankye] for the follow-up jira.
+1 for the current patch.

 Define a system-wide default EC schema
 --

 Key: HDFS-8074
 URL: https://issues.apache.org/jira/browse/HDFS-8074
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Kai Zheng
Assignee: Kai Zheng
 Attachments: HDFS-8074-v1.patch, HDFS-8074-v2.patch, 
 HDFS-8074-v3.patch


 It's good to have a system default EC schema with fixed values before we 
 support more schemas. This helps resolve some dependencies before HDFS-7866 
 can be done in whole. The default system schema is also needed when an admin 
 just wants to use it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8074) Define a system-wide default EC schema

2015-04-08 Thread Vinayakumar B (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8074?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14484871#comment-14484871
 ] 

Vinayakumar B commented on HDFS-8074:
-

The patch looks fine, but one nit:
{{DEFAULT_SCHEMA_NAME = rs6x3;}} could be named the same as the one defined 
in ecschema-def.xml, {{RS-6-3}}, so that we need not have duplicates.

 Define a system-wide default EC schema
 --

 Key: HDFS-8074
 URL: https://issues.apache.org/jira/browse/HDFS-8074
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Kai Zheng
Assignee: Kai Zheng
 Attachments: HDFS-8074-v1.patch


 It's good to have a system default EC schema with fixed values before we 
 support more schemas. This helps resolve some dependencies before HDFS-7866 
 can be done in whole. The default system schema is also needed when an admin 
 just wants to use it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8074) Define a system-wide default EC schema

2015-04-08 Thread Kai Zheng (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8074?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kai Zheng updated HDFS-8074:

Attachment: HDFS-8074-v2.patch

Updated the patch according to the review comments.

 Define a system-wide default EC schema
 --

 Key: HDFS-8074
 URL: https://issues.apache.org/jira/browse/HDFS-8074
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Kai Zheng
Assignee: Kai Zheng
 Attachments: HDFS-8074-v1.patch, HDFS-8074-v2.patch


 It's good to have a system default EC schema with fixed values before we 
 support more schemas. This helps resolve some dependencies before HDFS-7866 
 can be done in whole. The default system schema is also needed when an admin 
 just wants to use it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8074) Define a system-wide default EC schema

2015-04-08 Thread Kai Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8074?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14484971#comment-14484971
 ] 

Kai Zheng commented on HDFS-8074:
-

Thanks [~umamaheswararao] and [~vinayrpet] for the thoughts here.

Vinay, I'm thinking that if we have the system default schema defined in the XML 
file, it may not be as reliable as you meant, i.e. never changed after 
installation. That's because the XML file is for admins to define (additional) 
schemas for a deployment. The system default schema should definitely be 
reliable there as the fallback by default. I agree we may need to configure the 
values, but perhaps not in the XML file? How about just having the key 
parameters configured in core-site.xml? Given that it must use the fixed RS 
codec, we only need 2 properties for the purpose.

bq. why cant make schema parameter mandatory for ec zone at the time of 
creation.? instead of having system default schema
Sure, you're right. We should change the ECZone constructor as you said, making 
it mandatory. We need to revisit that aspect, like adding the schema parameter 
to create an EC zone. Having the system default schema isn't for that purpose; 
it's useful when an admin just wants to have a global one and then use it by 
default in most cases. Makes sense?


 Define a system-wide default EC schema
 --

 Key: HDFS-8074
 URL: https://issues.apache.org/jira/browse/HDFS-8074
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Kai Zheng
Assignee: Kai Zheng
 Attachments: HDFS-8074-v1.patch, HDFS-8074-v2.patch, 
 HDFS-8074-v3.patch


 It's good to have a system default EC schema with fixed values before we 
 support more schemas. This helps resolve some dependencies before HDFS-7866 
 can be done in whole. The default system schema is also needed when an admin 
 just wants to use it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7980) Incremental BlockReport will dramatically slow down the startup of a namenode

2015-04-08 Thread Walter Su (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7980?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Walter Su updated HDFS-7980:

Attachment: HDFS-7980.002.patch

 Incremental BlockReport will dramatically slow down the startup of  a namenode
 --

 Key: HDFS-7980
 URL: https://issues.apache.org/jira/browse/HDFS-7980
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Hui Zheng
Assignee: Walter Su
 Attachments: HDFS-7980.001.patch, HDFS-7980.002.patch


 In the current implementation the datanode calls the 
 reportReceivedDeletedBlocks() method (an IncrementalBlockReport) before 
 calling the bpNamenode.blockReport() method. So in a large (several thousands 
 of datanodes) and busy cluster it will slow down the startup of the namenode 
 by more than one hour.
 {code}
 List<DatanodeCommand> blockReport() throws IOException {
   // send block report if timer has expired.
   final long startTime = now();
   if (startTime - lastBlockReport <= dnConf.blockReportInterval) {
     return null;
   }
   final ArrayList<DatanodeCommand> cmds = new ArrayList<DatanodeCommand>();
   // Flush any block information that precedes the block report. Otherwise
   // we have a chance that we will miss the delHint information
   // or we will report an RBW replica after the BlockReport already reports
   // a FINALIZED one.
   reportReceivedDeletedBlocks();
   lastDeletedReport = startTime;
   ...
   // Send the reports to the NN.
   int numReportsSent = 0;
   int numRPCs = 0;
   boolean success = false;
   long brSendStartTime = now();
   try {
     if (totalBlockCount < dnConf.blockReportSplitThreshold) {
       // Below split threshold, send all reports in a single message.
       DatanodeCommand cmd = bpNamenode.blockReport(
           bpRegistration, bpos.getBlockPoolId(), reports);
 {code}
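The scheduling change the patch aims at can be sketched as: only flush the incremental report when the full block report is actually due, instead of on every call. All names below are illustrative, not the actual BPServiceActor code:

```java
// Illustrative model of the report-due check quoted above: the incremental
// flush should be gated behind the same timer as the full block report.
public class BlockReportScheduler {
    private final long blockReportIntervalMs;
    private final long lastBlockReportMs;

    public BlockReportScheduler(long intervalMs, long lastReportMs) {
        this.blockReportIntervalMs = intervalMs;
        this.lastBlockReportMs = lastReportMs;
    }

    /** True when a full block report (and the incremental flush that must
     *  precede it) is due at time nowMs. */
    public boolean fullReportDue(long nowMs) {
        return nowMs - lastBlockReportMs >= blockReportIntervalMs;
    }
}
```

Gating the incremental flush this way avoids the per-heartbeat reportReceivedDeletedBlocks() cost that slows namenode startup on large clusters.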



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8091) ACLStatus and XAttributes not properly presented to INodeAttributesProvider before returning to client

2015-04-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8091?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14485013#comment-14485013
 ] 

Hadoop QA commented on HDFS-8091:
-

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12723850/HDFS-8091.1.patch
  against trunk revision ab04ff9.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-hdfs-project/hadoop-hdfs.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/10208//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HDFS-Build/10208//console

This message is automatically generated.

 ACLStatus and XAttributes not properly presented to INodeAttributesProvider 
 before returning to client 
 ---

 Key: HDFS-8091
 URL: https://issues.apache.org/jira/browse/HDFS-8091
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: HDFS
Reporter: Arun Suresh
Assignee: Arun Suresh
 Fix For: 2.7.0

 Attachments: HDFS-8091.1.patch


 HDFS-6826 introduced the concept of an {{INodeAttributesProvider}}, an 
 implementation of which can be plugged-in so that the Attributes (user / 
 group / permission / acls and xattrs) that are returned for an HDFS path can 
 be altered/enhanced by the user specified code before it is returned to the 
 client.
 Unfortunately, it looks like the AclStatus and XAttributes are not properly 
 presented to the user-specified {{INodeAttributesProvider}} before they are 
 returned to the client.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-8093) BP does not exist or is not under Constructionnull

2015-04-08 Thread LINTE (JIRA)
LINTE created HDFS-8093:
---

 Summary: BP does not exist or is not under Constructionnull
 Key: HDFS-8093
 URL: https://issues.apache.org/jira/browse/HDFS-8093
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: balancer & mover
Affects Versions: 2.6.0
 Environment: Centos 6.5
Reporter: LINTE


The HDFS balancer ran for several hours balancing blocks between datanodes; it 
ended by failing with the following error.

The getStoredBlock function returned a null BlockInfo.

java.io.IOException: Bad response ERROR for block 
BP-970443206-192.168.0.208-1397583979378:blk_1086729930_13046030 from datanode 
192.168.0.18:1004
at 
org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer$ResponseProcessor.run(DFSOutputStream.java:897)
15/04/08 05:52:51 WARN hdfs.DFSClient: Error Recovery for block 
BP-970443206-192.168.0.208-1397583979378:blk_1086729930_13046030 in pipeline 
192.168.0.63:1004, 192.168.0.1:1004, 192.168.0.18:1004: bad datanode 
192.168.0.18:1004
15/04/08 05:52:51 WARN hdfs.DFSClient: DataStreamer Exception
org.apache.hadoop.ipc.RemoteException(java.io.IOException): 
BP-970443206-192.168.0.208-1397583979378:blk_1086729930_13046030 does not exist 
or is not under Constructionnull
at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkUCBlock(FSNamesystem.java:6913)
at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.updateBlockForPipeline(FSNamesystem.java:6980)
at 
org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.updateBlockForPipeline(NameNodeRpcServer.java:717)
at 
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolServerSideTranslatorPB.java:931)
at 
org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:619)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:962)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2039)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2035)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2033)

at org.apache.hadoop.ipc.Client.call(Client.java:1468)
at org.apache.hadoop.ipc.Client.call(Client.java:1399)
at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232)
at com.sun.proxy.$Proxy11.updateBlockForPipeline(Unknown Source)
at 
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolTranslatorPB.java:877)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187)
at 
org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
at com.sun.proxy.$Proxy12.updateBlockForPipeline(Unknown Source)
at 
org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.setupPipelineForAppendOrRecovery(DFSOutputStream.java:1266)
at 
org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.processDatanodeError(DFSOutputStream.java:1004)
at 
org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:548)
15/04/08 05:52:51 ERROR hdfs.DFSClient: Failed to close inode 19801755
org.apache.hadoop.ipc.RemoteException(java.io.IOException): 
BP-970443206-192.168.0.208-1397583979378:blk_1086729930_13046030 does not exist 
or is not under Constructionnull
at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkUCBlock(FSNamesystem.java:6913)
at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.updateBlockForPipeline(FSNamesystem.java:6980)
at 
org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.updateBlockForPipeline(NameNodeRpcServer.java:717)
at 
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolServerSideTranslatorPB.java:931)
at 
org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:619)
at 
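The trailing "Constructionnull" in the message is itself a clue: the NameNode appears to build the exception text by appending the stored block to a string, so a null result from getStoredBlock() is printed as the literal "null". A minimal sketch of that concatenation (the method and variable names here are illustrative, not the actual FSNamesystem code):

```java
public class UcBlockMessage {
    // Illustrative only: mimics how "... is not under Construction" + null
    // produces the "Constructionnull" seen in the stack trace above.
    static String buildMessage(String blockId, Object storedBlock) {
        return blockId + " does not exist or is not under Construction" + storedBlock;
    }

    public static void main(String[] args) {
        // getStoredBlock() returned null for the block being recovered
        System.out.println(buildMessage("blk_1086729930_13046030", null));
    }
}
```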

[jira] [Commented] (HDFS-8074) Define a system-wide default EC schema

2015-04-08 Thread Vinayakumar B (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8074?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14484952#comment-14484952
 ] 

Vinayakumar B commented on HDFS-8074:
-

I also feel that the system default schema should be one of the codecs present in 
the xml file, and should never be changed after installation of the cluster; this 
gives the user the option to specify which one is the default at first. Or make it 
hardcoded inside the code itself, so that it doesn't depend on configured codecs 
and is always present.

On another note, why can't we make the schema parameter mandatory for an EC zone 
at creation time, instead of having a system default schema?
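The hardcoded-in-code option mentioned above could look roughly like the sketch below; the class shape and the RS-6-3 values are assumptions for illustration, not the committed API:

```java
public final class SystemDefaultSchema {
    // Hypothetical fixed system default: Reed-Solomon with 6 data units
    // and 3 parity units. Because it is hardcoded, it does not depend on
    // any configured codecs and is always present, even if the schema
    // definition file is missing or edited after installation.
    public static final String CODEC = "rs";
    public static final int NUM_DATA_UNITS = 6;
    public static final int NUM_PARITY_UNITS = 3;

    // Total stripe width implied by the default layout.
    public static int stripeWidth() {
        return NUM_DATA_UNITS + NUM_PARITY_UNITS;
    }

    private SystemDefaultSchema() {}
}
```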

 Define a system-wide default EC schema
 --

 Key: HDFS-8074
 URL: https://issues.apache.org/jira/browse/HDFS-8074
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Kai Zheng
Assignee: Kai Zheng
 Attachments: HDFS-8074-v1.patch, HDFS-8074-v2.patch, 
 HDFS-8074-v3.patch


 It's good to have a system default EC schema with fixed values first, before 
 we support more schemas. This helps resolve some dependencies before 
 HDFS-7866 can be done in whole. The default system schema is also needed 
 anyway, essentially for when an admin just wants to use it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8074) Define a system-wide default EC schema

2015-04-08 Thread Uma Maheswara Rao G (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8074?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14484953#comment-14484953
 ] 

Uma Maheswara Rao G commented on HDFS-8074:
---

I had an offline chat with Kai. Since other JIRAs (e.g. HDFS-7866) want to use 
ECSchemaManager and do the remaining stuff, I am ok with the current changes.

+1


 Define a system-wide default EC schema
 --

 Key: HDFS-8074
 URL: https://issues.apache.org/jira/browse/HDFS-8074
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Kai Zheng
Assignee: Kai Zheng
 Attachments: HDFS-8074-v1.patch, HDFS-8074-v2.patch, 
 HDFS-8074-v3.patch


 It's good to have a system default EC schema with fixed values first, before 
 we support more schemas. This helps resolve some dependencies before 
 HDFS-7866 can be done in whole. The default system schema is also needed 
 anyway, essentially for when an admin just wants to use it.





[jira] [Created] (HDFS-8094) Cluster web console (dfsclusterhealth.jsp) is not working

2015-04-08 Thread Ajith S (JIRA)
Ajith S created HDFS-8094:
-

 Summary: Cluster web console (dfsclusterhealth.jsp) is not working
 Key: HDFS-8094
 URL: https://issues.apache.org/jira/browse/HDFS-8094
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Ajith S
Assignee: Ajith S


According to the documentation, the cluster can be monitored at 
http://any_nn_host:port/dfsclusterhealth.jsp

Currently this URL doesn't work; it seems to have been removed as 
part of HDFS-6252.





[jira] [Updated] (HDFS-8094) Cluster web console (dfsclusterhealth.jsp) is not working

2015-04-08 Thread Vinayakumar B (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8094?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinayakumar B updated HDFS-8094:

Component/s: federation

 Cluster web console (dfsclusterhealth.jsp) is not working
 -

 Key: HDFS-8094
 URL: https://issues.apache.org/jira/browse/HDFS-8094
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: federation
Reporter: Ajith S
Assignee: Ajith S

 According to the documentation, the cluster can be monitored at 
 http://any_nn_host:port/dfsclusterhealth.jsp
 Currently this URL doesn't work; it seems to have been removed as 
 part of HDFS-6252





[jira] [Updated] (HDFS-7980) Incremental BlockReport will dramatically slow down the startup of a namenode

2015-04-08 Thread Walter Su (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7980?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Walter Su updated HDFS-7980:

Attachment: (was: HDFS-7980.002.patch)

 Incremental BlockReport will dramatically slow down the startup of  a namenode
 --

 Key: HDFS-7980
 URL: https://issues.apache.org/jira/browse/HDFS-7980
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Hui Zheng
Assignee: Walter Su
 Attachments: HDFS-7980.001.patch


 In the current implementation the datanode calls the 
 reportReceivedDeletedBlocks() method (an incremental block report) before 
 calling the bpNamenode.blockReport() method. So in a large (several thousand 
 datanodes) and busy cluster it will slow down the startup of the namenode by 
 more than an hour. 
 {code}
 List<DatanodeCommand> blockReport() throws IOException {
   // send block report if timer has expired.
   final long startTime = now();
   if (startTime - lastBlockReport <= dnConf.blockReportInterval) {
     return null;
   }
   final ArrayList<DatanodeCommand> cmds = new ArrayList<DatanodeCommand>();
   // Flush any block information that precedes the block report. Otherwise
   // we have a chance that we will miss the delHint information
   // or we will report an RBW replica after the BlockReport already reports
   // a FINALIZED one.
   reportReceivedDeletedBlocks();
   lastDeletedReport = startTime;
   ...
   // Send the reports to the NN.
   int numReportsSent = 0;
   int numRPCs = 0;
   boolean success = false;
   long brSendStartTime = now();
   try {
     if (totalBlockCount < dnConf.blockReportSplitThreshold) {
       // Below split threshold, send all reports in a single message.
       DatanodeCommand cmd = bpNamenode.blockReport(
           bpRegistration, bpos.getBlockPoolId(), reports);
 {code}
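The split decision at the end of the quoted excerpt can be sketched in isolation; the semantics are assumed from the code comment (below the threshold, everything goes in one RPC, otherwise one RPC per storage):

```java
public class ReportSplit {
    // Mirrors the quoted logic: below the split threshold all storage
    // reports go to the NameNode in a single RPC; otherwise the datanode
    // sends one RPC per storage directory.
    static int rpcCount(long totalBlockCount, long splitThreshold, int numStorages) {
        return (totalBlockCount < splitThreshold) ? 1 : numStorages;
    }

    public static void main(String[] args) {
        System.out.println(rpcCount(1_000, 1_000_000, 4));     // single message
        System.out.println(rpcCount(2_000_000, 1_000_000, 4)); // per-storage RPCs
    }
}
```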





[jira] [Commented] (HDFS-8074) Define a system-wide default EC schema

2015-04-08 Thread Kai Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8074?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14484902#comment-14484902
 ] 

Kai Zheng commented on HDFS-8074:
-

Thanks [~umamaheswararao] for the good thoughts and review!
bq.ECSchemaManager should load defaults from xml right
I thought it may be beneficial to load the system default schema from 
configuration, so that any customer has a good chance to define their own 
SYSTEM DEFAULT SCHEMA. Can I open a follow-up issue to address this? Or should 
we wait for more thoughts about this?
bq.Did I miss something?
Nothing you missed, you're quite right.
bq.Please have braces for if conditions.
Yes, good catch! I shouldn't have relied on my IDE so much. :)

 Define a system-wide default EC schema
 --

 Key: HDFS-8074
 URL: https://issues.apache.org/jira/browse/HDFS-8074
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Kai Zheng
Assignee: Kai Zheng
 Attachments: HDFS-8074-v1.patch, HDFS-8074-v2.patch


 It's good to have a system default EC schema with fixed values first, before 
 we support more schemas. This helps resolve some dependencies before 
 HDFS-7866 can be done in whole. The default system schema is also needed 
 anyway, essentially for when an admin just wants to use it.





[jira] [Comment Edited] (HDFS-8074) Define a system-wide default EC schema

2015-04-08 Thread Uma Maheswara Rao G (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8074?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14484928#comment-14484928
 ] 

Uma Maheswara Rao G edited comment on HDFS-8074 at 4/8/15 8:26 AM:
---

Yeah, but I thought ECSchemaManager could load configs from XML. When instantiating 
ECSchemaManager, we can load the required configs from the ecschema-def.xml file. If 
you are also thinking of a similar change, then we can incorporate it here itself; 
maybe ECSchemaManager can take the config through its constructor. BTW, I have no 
concern if we want to handle the improvements in follow-ups.


was (Author: umamaheswararao):
Yeah, but I thought ECSchemaManager could load configs from XML. When instantiating 
ECSchemaManager, we can load the required configs from the ecschema-def.xml file. If 
you are also thinking of a similar change, then we can incorporate it here itself; 
maybe ECSchemaManager can take the config through its constructor.

 Define a system-wide default EC schema
 --

 Key: HDFS-8074
 URL: https://issues.apache.org/jira/browse/HDFS-8074
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Kai Zheng
Assignee: Kai Zheng
 Attachments: HDFS-8074-v1.patch, HDFS-8074-v2.patch, 
 HDFS-8074-v3.patch


 It's good to have a system default EC schema with fixed values first, before 
 we support more schemas. This helps resolve some dependencies before 
 HDFS-7866 can be done in whole. The default system schema is also needed 
 anyway, essentially for when an admin just wants to use it.





[jira] [Commented] (HDFS-8074) Define a system-wide default EC schema

2015-04-08 Thread Uma Maheswara Rao G (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8074?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14484928#comment-14484928
 ] 

Uma Maheswara Rao G commented on HDFS-8074:
---

Yeah, but I thought ECSchemaManager could load configs from XML. When instantiating 
ECSchemaManager, we can load the required configs from the ecschema-def.xml file. If 
you are also thinking of a similar change, then we can incorporate it here itself; 
maybe ECSchemaManager can take the config through its constructor.

 Define a system-wide default EC schema
 --

 Key: HDFS-8074
 URL: https://issues.apache.org/jira/browse/HDFS-8074
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Kai Zheng
Assignee: Kai Zheng
 Attachments: HDFS-8074-v1.patch, HDFS-8074-v2.patch, 
 HDFS-8074-v3.patch


 It's good to have a system default EC schema with fixed values first, before 
 we support more schemas. This helps resolve some dependencies before 
 HDFS-7866 can be done in whole. The default system schema is also needed 
 anyway, essentially for when an admin just wants to use it.





[jira] [Commented] (HDFS-8074) Define a system-wide default EC schema

2015-04-08 Thread Kai Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8074?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14484990#comment-14484990
 ] 

Kai Zheng commented on HDFS-8074:
-

I just committed this in the branch. Thanks [~umamaheswararao] and [~vinayrpet] 
for the review!

 Define a system-wide default EC schema
 --

 Key: HDFS-8074
 URL: https://issues.apache.org/jira/browse/HDFS-8074
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Kai Zheng
Assignee: Kai Zheng
 Attachments: HDFS-8074-v1.patch, HDFS-8074-v2.patch, 
 HDFS-8074-v3.patch


 It's good to have a system default EC schema with fixed values first, before 
 we support more schemas. This helps resolve some dependencies before 
 HDFS-7866 can be done in whole. The default system schema is also needed 
 anyway, essentially for when an admin just wants to use it.





[jira] [Resolved] (HDFS-8074) Define a system-wide default EC schema

2015-04-08 Thread Kai Zheng (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8074?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kai Zheng resolved HDFS-8074.
-
   Resolution: Fixed
Fix Version/s: HDFS-7285
 Hadoop Flags: Reviewed

 Define a system-wide default EC schema
 --

 Key: HDFS-8074
 URL: https://issues.apache.org/jira/browse/HDFS-8074
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Kai Zheng
Assignee: Kai Zheng
 Fix For: HDFS-7285

 Attachments: HDFS-8074-v1.patch, HDFS-8074-v2.patch, 
 HDFS-8074-v3.patch


 It's good to have a system default EC schema with fixed values first, before 
 we support more schemas. This helps resolve some dependencies before 
 HDFS-7866 can be done in whole. The default system schema is also needed 
 anyway, essentially for when an admin just wants to use it.





[jira] [Commented] (HDFS-8023) Erasure Coding: retrieve erasure coding schema for a file from NameNode

2015-04-08 Thread Kai Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8023?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14484865#comment-14484865
 ] 

Kai Zheng commented on HDFS-8023:
-

bq.So I feel revisit required anyway. 
Yeah, I agree. Thanks for the clarification.

+1

 Erasure Coding: retrieve erasure coding schema for a file from NameNode
 --

 Key: HDFS-8023
 URL: https://issues.apache.org/jira/browse/HDFS-8023
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Kai Zheng
Assignee: Vinayakumar B
 Attachments: HDFS-8023-01.patch, HDFS-8023-02.patch


 NameNode needs to provide an RPC call for clients and tools to retrieve the 
 erasure coding schema for a file from the NameNode.





[jira] [Resolved] (HDFS-8023) Erasure Coding: retrieve erasure coding schema for a file from NameNode

2015-04-08 Thread Vinayakumar B (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8023?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinayakumar B resolved HDFS-8023.
-
   Resolution: Fixed
Fix Version/s: HDFS-7285
 Hadoop Flags: Reviewed

Thanks [~drankye] and [~jingzhao] for reviews.
Committed to HDFS-7285 branch.

 Erasure Coding: retrieve erasure coding schema for a file from NameNode
 --

 Key: HDFS-8023
 URL: https://issues.apache.org/jira/browse/HDFS-8023
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Kai Zheng
Assignee: Vinayakumar B
 Fix For: HDFS-7285

 Attachments: HDFS-8023-01.patch, HDFS-8023-02.patch


 NameNode needs to provide an RPC call for clients and tools to retrieve the 
 erasure coding schema for a file from the NameNode.





[jira] [Commented] (HDFS-8074) Define a system-wide default EC schema

2015-04-08 Thread Uma Maheswara Rao G (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8074?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14484895#comment-14484895
 ] 

Uma Maheswara Rao G commented on HDFS-8074:
---

ECSchemaManager should load defaults from XML, right? I think right now it is 
not loading from XML; it is simply holding an ECSchema object built from local 
defaults. Did I miss something?

Please have braces for if conditions.
{code}
 if (this == o) return true;
+if (o == null || getClass() != o.getClass()) return false;
+
+ECSchema ecSchema = (ECSchema) o;
+
+if (numDataUnits != ecSchema.numDataUnits) return false;
+if (numParityUnits != ecSchema.numParityUnits) return false;
+if (chunkSize != ecSchema.chunkSize) return false;
+if (!schemaName.equals(ecSchema.schemaName)) return false;
+if (!codecName.equals(ecSchema.codecName)) return false;
{code}
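A braced rewrite of the quoted checks would read as below; the field set is taken from the patch excerpt, but the class itself is a stand-in, not the real ECSchema:

```java
public class ECSchemaEquals {
    // Stand-in for ECSchema, with the fields compared in the patch excerpt.
    final String schemaName, codecName;
    final int numDataUnits, numParityUnits, chunkSize;

    ECSchemaEquals(String schemaName, String codecName,
                   int numDataUnits, int numParityUnits, int chunkSize) {
        this.schemaName = schemaName;
        this.codecName = codecName;
        this.numDataUnits = numDataUnits;
        this.numParityUnits = numParityUnits;
        this.chunkSize = chunkSize;
    }

    @Override
    public boolean equals(Object o) {
        if (this == o) {
            return true;
        }
        if (o == null || getClass() != o.getClass()) {
            return false;
        }
        ECSchemaEquals that = (ECSchemaEquals) o;
        if (numDataUnits != that.numDataUnits) {
            return false;
        }
        if (numParityUnits != that.numParityUnits) {
            return false;
        }
        if (chunkSize != that.chunkSize) {
            return false;
        }
        if (!schemaName.equals(that.schemaName)) {
            return false;
        }
        return codecName.equals(that.codecName);
    }

    @Override
    public int hashCode() {
        // Keep hashCode consistent with equals, as the contract requires.
        return schemaName.hashCode() * 31 + numDataUnits;
    }
}
```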



 Define a system-wide default EC schema
 --

 Key: HDFS-8074
 URL: https://issues.apache.org/jira/browse/HDFS-8074
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Kai Zheng
Assignee: Kai Zheng
 Attachments: HDFS-8074-v1.patch, HDFS-8074-v2.patch


 It's good to have a system default EC schema with fixed values first, before 
 we support more schemas. This helps resolve some dependencies before 
 HDFS-7866 can be done in whole. The default system schema is also needed 
 anyway, essentially for when an admin just wants to use it.





[jira] [Updated] (HDFS-8092) dfs -count -q should not consider snapshots under REM_QUOTA

2015-04-08 Thread Rakesh R (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8092?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rakesh R updated HDFS-8092:
---
Assignee: Rakesh R

 dfs -count -q should not consider snapshots under REM_QUOTA
 ---

 Key: HDFS-8092
 URL: https://issues.apache.org/jira/browse/HDFS-8092
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: snapshots, tools
Reporter: Archana T
Assignee: Rakesh R
Priority: Minor

 dfs -count -q should not consider snapshots under the remaining quota.
 List of operations performed:
 1. hdfs dfs -mkdir /Dir1
 2. hdfs dfsadmin -setQuota 2 /Dir1
 3. hadoop fs -count -q -h -v /Dir1
    QUOTA  {color:red}REM_QUOTA{color}  SPACE_QUOTA  REM_SPACE_QUOTA  DIR_COUNT  FILE_COUNT  CONTENT_SIZE  PATHNAME
        2  {color:red}1{color}          none         inf              1          0           0             /Dir1
 4. hdfs dfs -put hdfs /Dir1/f1
 5. hadoop fs -count -q -h -v /Dir1
    QUOTA  {color:red}REM_QUOTA{color}  SPACE_QUOTA  REM_SPACE_QUOTA  DIR_COUNT  FILE_COUNT  CONTENT_SIZE  PATHNAME
        2  {color:red}0{color}          none         inf              1          1           11.4 K        /Dir1
 6. hdfs dfsadmin -allowSnapshot /Dir1
 7. hdfs dfs -createSnapshot /Dir1
 8. hadoop fs -count -q -h -v /Dir1
    QUOTA  {color:red}REM_QUOTA{color}  SPACE_QUOTA  REM_SPACE_QUOTA  DIR_COUNT  FILE_COUNT  CONTENT_SIZE  PATHNAME
        2  {color:red}-1{color}         none         inf              2          1           11.4 K        /Dir1
 Whenever a snapshot is created, the value of REM_QUOTA is decremented.
 When creation of snapshots is not counted against the quota of the respective 
 directory, dfs -count should not decrement the REM_QUOTA value.





[jira] [Commented] (HDFS-8074) Define a system-wide default EC schema

2015-04-08 Thread Kai Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8074?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14484981#comment-14484981
 ] 

Kai Zheng commented on HDFS-8074:
-

HDFS-8095 was opened to document the idea of allowing the system default schema 
to be configurable. It's not wanted right now, but is left for the following 
phases to consider.

 Define a system-wide default EC schema
 --

 Key: HDFS-8074
 URL: https://issues.apache.org/jira/browse/HDFS-8074
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Kai Zheng
Assignee: Kai Zheng
 Attachments: HDFS-8074-v1.patch, HDFS-8074-v2.patch, 
 HDFS-8074-v3.patch


 It's good to have a system default EC schema with fixed values first, before 
 we support more schemas. This helps resolve some dependencies before 
 HDFS-7866 can be done in whole. The default system schema is also needed 
 anyway, essentially for when an admin just wants to use it.





[jira] [Commented] (HDFS-8067) haadmin commands don't work in Federation with HA

2015-04-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8067?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14484983#comment-14484983
 ] 

Hadoop QA commented on HDFS-8067:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12723837/HDFS-8067-01.patch
  against trunk revision ab04ff9.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-hdfs-project/hadoop-hdfs.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/10207//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HDFS-Build/10207//console

This message is automatically generated.

 haadmin commands don't work in Federation with HA
 ---

 Key: HDFS-8067
 URL: https://issues.apache.org/jira/browse/HDFS-8067
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: hdfs-client
Reporter: Ajith S
Assignee: Ajith S
Priority: Blocker
 Attachments: HDFS-8067-01.patch


 Scenario : Setting up multiple nameservices with HA configuration for each 
 nameservice (manual failover)
 After starting the journal nodes and namenodes, both nodes are in standby 
 mode. 
 All of the following haadmin commands
  *haadmin*
-transitionToActive
-transitionToStandby 
-failover 
-getServiceState 
-checkHealth  
 failed with the exception:
 _Illegal argument: Unable to determine the nameservice id._
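The error suggests the admin tool tries to resolve a single nameservice id from configuration and gives up when federation defines several. A hedged sketch of that resolution step (the names here are illustrative, not the actual DFSHAAdmin code):

```java
import java.util.List;

public class NameserviceResolver {
    // Illustrative: with federated HA, dfs.nameservices lists several ids,
    // so resolution must be told which one to target (e.g. via a flag);
    // with exactly one id it can be inferred, otherwise it fails.
    static String resolve(List<String> configuredNameservices, String requestedNs) {
        if (requestedNs != null) {
            return requestedNs;
        }
        if (configuredNameservices.size() == 1) {
            return configuredNameservices.get(0);
        }
        throw new IllegalArgumentException("Unable to determine the nameservice id.");
    }
}
```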





[jira] [Updated] (HDFS-7980) Incremental BlockReport will dramatically slow down the startup of a namenode

2015-04-08 Thread Walter Su (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7980?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Walter Su updated HDFS-7980:

Attachment: HDFS-7980.002.patch

 Incremental BlockReport will dramatically slow down the startup of  a namenode
 --

 Key: HDFS-7980
 URL: https://issues.apache.org/jira/browse/HDFS-7980
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Hui Zheng
Assignee: Walter Su
 Attachments: HDFS-7980.001.patch, HDFS-7980.002.patch


 In the current implementation the datanode calls the 
 reportReceivedDeletedBlocks() method (an incremental block report) before 
 calling the bpNamenode.blockReport() method. So in a large (several thousand 
 datanodes) and busy cluster it will slow down the startup of the namenode by 
 more than an hour. 
 {code}
 List<DatanodeCommand> blockReport() throws IOException {
   // send block report if timer has expired.
   final long startTime = now();
   if (startTime - lastBlockReport <= dnConf.blockReportInterval) {
     return null;
   }
   final ArrayList<DatanodeCommand> cmds = new ArrayList<DatanodeCommand>();
   // Flush any block information that precedes the block report. Otherwise
   // we have a chance that we will miss the delHint information
   // or we will report an RBW replica after the BlockReport already reports
   // a FINALIZED one.
   reportReceivedDeletedBlocks();
   lastDeletedReport = startTime;
   ...
   // Send the reports to the NN.
   int numReportsSent = 0;
   int numRPCs = 0;
   boolean success = false;
   long brSendStartTime = now();
   try {
     if (totalBlockCount < dnConf.blockReportSplitThreshold) {
       // Below split threshold, send all reports in a single message.
       DatanodeCommand cmd = bpNamenode.blockReport(
           bpRegistration, bpos.getBlockPoolId(), reports);
 {code}





[jira] [Updated] (HDFS-7980) Incremental BlockReport will dramatically slow down the startup of a namenode

2015-04-08 Thread Walter Su (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7980?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Walter Su updated HDFS-7980:

Attachment: (was: HDFS-7980.002.patch)

 Incremental BlockReport will dramatically slow down the startup of  a namenode
 --

 Key: HDFS-7980
 URL: https://issues.apache.org/jira/browse/HDFS-7980
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Hui Zheng
Assignee: Walter Su
 Attachments: HDFS-7980.001.patch


 In the current implementation the datanode calls the 
 reportReceivedDeletedBlocks() method (an incremental block report) before 
 calling the bpNamenode.blockReport() method. So in a large (several thousand 
 datanodes) and busy cluster it will slow down the startup of the namenode by 
 more than an hour. 
 {code}
 List<DatanodeCommand> blockReport() throws IOException {
   // send block report if timer has expired.
   final long startTime = now();
   if (startTime - lastBlockReport <= dnConf.blockReportInterval) {
     return null;
   }
   final ArrayList<DatanodeCommand> cmds = new ArrayList<DatanodeCommand>();
   // Flush any block information that precedes the block report. Otherwise
   // we have a chance that we will miss the delHint information
   // or we will report an RBW replica after the BlockReport already reports
   // a FINALIZED one.
   reportReceivedDeletedBlocks();
   lastDeletedReport = startTime;
   ...
   // Send the reports to the NN.
   int numReportsSent = 0;
   int numRPCs = 0;
   boolean success = false;
   long brSendStartTime = now();
   try {
     if (totalBlockCount < dnConf.blockReportSplitThreshold) {
       // Below split threshold, send all reports in a single message.
       DatanodeCommand cmd = bpNamenode.blockReport(
           bpRegistration, bpos.getBlockPoolId(), reports);
 {code}





[jira] [Created] (HDFS-8092) dfs -count -q should not consider snapshots under REM_QUOTA

2015-04-08 Thread Archana T (JIRA)
Archana T created HDFS-8092:
---

 Summary: dfs -count -q should not consider snapshots under 
REM_QUOTA
 Key: HDFS-8092
 URL: https://issues.apache.org/jira/browse/HDFS-8092
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: snapshots, tools
Reporter: Archana T
Priority: Minor


dfs -count -q should not consider snapshots under the remaining quota.

List of operations performed:
1. hdfs dfs -mkdir /Dir1
2. hdfs dfsadmin -setQuota 2 /Dir1
3. hadoop fs -count -q -h -v /Dir1
   QUOTA  {color:red}REM_QUOTA{color}  SPACE_QUOTA  REM_SPACE_QUOTA  DIR_COUNT  FILE_COUNT  CONTENT_SIZE  PATHNAME
       2  {color:red}1{color}          none         inf              1          0           0             /Dir1

4. hdfs dfs -put hdfs /Dir1/f1
5. hadoop fs -count -q -h -v /Dir1
   QUOTA  {color:red}REM_QUOTA{color}  SPACE_QUOTA  REM_SPACE_QUOTA  DIR_COUNT  FILE_COUNT  CONTENT_SIZE  PATHNAME
       2  {color:red}0{color}          none         inf              1          1           11.4 K        /Dir1
6. hdfs dfsadmin -allowSnapshot /Dir1
7. hdfs dfs -createSnapshot /Dir1
8. hadoop fs -count -q -h -v /Dir1
   QUOTA  {color:red}REM_QUOTA{color}  SPACE_QUOTA  REM_SPACE_QUOTA  DIR_COUNT  FILE_COUNT  CONTENT_SIZE  PATHNAME
       2  {color:red}-1{color}         none         inf              2          1           11.4 K        /Dir1

Whenever a snapshot is created, the value of REM_QUOTA is decremented.

When creation of snapshots is not counted against the quota of the respective 
directory, dfs -count should not decrement the REM_QUOTA value.
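The REM_QUOTA values reported in steps 3, 5, and 8 follow from simple namespace-quota arithmetic, if the snapshot entry is counted like a directory (which is exactly the behaviour this issue questions):

```java
public class RemQuota {
    // Remaining namespace quota = quota - (directories + files) counted
    // under the directory. The snapshot shows up as an extra dir entry.
    static long remQuota(long quota, long dirCount, long fileCount) {
        return quota - (dirCount + fileCount);
    }

    public static void main(String[] args) {
        System.out.println(remQuota(2, 1, 0)); // step 3: /Dir1 only
        System.out.println(remQuota(2, 1, 1)); // step 5: /Dir1 plus f1
        System.out.println(remQuota(2, 2, 1)); // step 8: snapshot counted too
    }
}
```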





[jira] [Commented] (HDFS-8089) Move o.a.h.hdfs.web.resources.* to the client jars

2015-04-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8089?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14484919#comment-14484919
 ] 

Hadoop QA commented on HDFS-8089:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12723824/HDFS-8089.000.patch
  against trunk revision ab04ff9.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-hdfs-project/hadoop-hdfs hadoop-hdfs-project/hadoop-hdfs-client:

  org.apache.hadoop.hdfs.TestEncryptionZonesWithKMS

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/10205//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HDFS-Build/10205//console

This message is automatically generated.

 Move o.a.h.hdfs.web.resources.* to the client jars
 --

 Key: HDFS-8089
 URL: https://issues.apache.org/jira/browse/HDFS-8089
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: build
Reporter: Haohui Mai
Assignee: Haohui Mai
Priority: Minor
 Attachments: HDFS-8089.000.patch


 This jira proposes to move the parameters of used by {{WebHdfsFileSystem}} in 
 {{o.a.h.hdfs.web.resources.*}} to the hdfs client jars.





[jira] [Created] (HDFS-8095) Allow to configure the system default EC schema

2015-04-08 Thread Kai Zheng (JIRA)
Kai Zheng created HDFS-8095:
---

 Summary: Allow to configure the system default EC schema
 Key: HDFS-8095
 URL: https://issues.apache.org/jira/browse/HDFS-8095
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Kai Zheng
Assignee: Kai Zheng


As suggested by [~umamaheswararao] and [~vinayrpet] in HDFS-8074, we may want 
to allow configuring the system default EC schema, so that in any deployment a 
cluster admin can define their own system default. In the discussion, we had 
two approaches to configure the system default schema: 1) predefine it in the 
{{ecschema-def.xml}} file, making sure it's not changed; 2) configure the key 
parameter values as properties in {{core-site.xml}}. Opening this for future 
consideration in case it's forgotten.
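Approach 2 above (key parameter values as properties) might be sketched like this; the property keys are invented for illustration and are not committed configuration keys:

```java
import java.util.Properties;

public class DefaultSchemaConfig {
    // Hypothetical property keys; in a real deployment these would live
    // in core-site.xml and be read through Hadoop's Configuration class.
    static final String KEY_DATA = "io.erasurecode.schema.default.dataUnits";
    static final String KEY_PARITY = "io.erasurecode.schema.default.parityUnits";

    // Returns {dataUnits, parityUnits}, falling back to RS-6-3-style
    // defaults when the admin has not overridden them.
    static int[] loadUnits(Properties conf) {
        int data = Integer.parseInt(conf.getProperty(KEY_DATA, "6"));
        int parity = Integer.parseInt(conf.getProperty(KEY_PARITY, "3"));
        return new int[] {data, parity};
    }
}
```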





[jira] [Updated] (HDFS-7980) Incremental BlockReport will dramatically slow down the startup of a namenode

2015-04-08 Thread Walter Su (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7980?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Walter Su updated HDFS-7980:

Attachment: HDFS-7980.002.patch

 Incremental BlockReport will dramatically slow down the startup of a namenode
 --

 Key: HDFS-7980
 URL: https://issues.apache.org/jira/browse/HDFS-7980
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Hui Zheng
Assignee: Walter Su
 Attachments: HDFS-7980.001.patch, HDFS-7980.002.patch


 In the current implementation the datanode calls the 
 reportReceivedDeletedBlocks() method, which is an incremental block report, before 
 calling the bpNamenode.blockReport() method. So in a large (several thousands 
 of datanodes) and busy cluster it will slow down (by more than one hour) the 
 startup of the namenode. 
 {code}
 List<DatanodeCommand> blockReport() throws IOException {
   // send block report if timer has expired.
   final long startTime = now();
   if (startTime - lastBlockReport <= dnConf.blockReportInterval) {
     return null;
   }
   final ArrayList<DatanodeCommand> cmds = new ArrayList<DatanodeCommand>();
   // Flush any block information that precedes the block report. Otherwise
   // we have a chance that we will miss the delHint information
   // or we will report an RBW replica after the BlockReport already reports
   // a FINALIZED one.
   reportReceivedDeletedBlocks();
   lastDeletedReport = startTime;
   ...
   // Send the reports to the NN.
   int numReportsSent = 0;
   int numRPCs = 0;
   boolean success = false;
   long brSendStartTime = now();
   try {
     if (totalBlockCount < dnConf.blockReportSplitThreshold) {
       // Below split threshold, send all reports in a single message.
       DatanodeCommand cmd = bpNamenode.blockReport(
           bpRegistration, bpos.getBlockPoolId(), reports);
 {code}





[jira] [Commented] (HDFS-8092) dfs -count -q should not consider snapshots under REM_QUOTA

2015-04-08 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8092?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14485402#comment-14485402
 ] 

Allen Wittenauer commented on HDFS-8092:


Snapshots should most definitely be considered part of the quota calculation. 
They are not free and do take up space.

 dfs -count -q should not consider snapshots under REM_QUOTA
 ---

 Key: HDFS-8092
 URL: https://issues.apache.org/jira/browse/HDFS-8092
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: snapshots, tools
Reporter: Archana T
Assignee: Rakesh R
Priority: Minor

 dfs -count -q should not consider snapshots under the remaining quota.
 List of operations performed:
 1. hdfs dfs -mkdir /Dir1
 2. hdfs dfsadmin -setQuota 2 /Dir1
 3. hadoop fs -count -q -h -v /Dir1
 QUOTA  {color:red}REM_QUOTA{color}  SPACE_QUOTA  REM_SPACE_QUOTA  DIR_COUNT  FILE_COUNT  CONTENT_SIZE  PATHNAME
 2  {color:red}1{color}  none  inf  1  0  0  /Dir1
 4. hdfs dfs -put hdfs /Dir1/f1
 5. hadoop fs -count -q -h -v /Dir1
 QUOTA  {color:red}REM_QUOTA{color}  SPACE_QUOTA  REM_SPACE_QUOTA  DIR_COUNT  FILE_COUNT  CONTENT_SIZE  PATHNAME
 2  {color:red}0{color}  none  inf  1  1  11.4 K  /Dir1
 6. hdfs dfsadmin -allowSnapshot /Dir1
 7. hdfs dfs -createSnapshot /Dir1
 8. hadoop fs -count -q -h -v /Dir1
 QUOTA  {color:red}REM_QUOTA{color}  SPACE_QUOTA  REM_SPACE_QUOTA  DIR_COUNT  FILE_COUNT  CONTENT_SIZE  PATHNAME
 2  {color:red}-1{color}  none  inf  2  1  11.4 K  /Dir1
 Whenever a snapshot is created, the value of REM_QUOTA gets decremented.
 If the creation of snapshots is not counted against the quota of the respective 
 directory, then dfs -count should not decrement the REM_QUOTA value.
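
The namespace quota arithmetic behind the REM_QUOTA column can be sanity-checked with a tiny dependency-free sketch (illustrative code, not Hadoop's actual accounting): REM_QUOTA is the namespace quota minus the total of directories and files counted under the path, and the snapshot adds one directory entry.

```java
public class RemQuotaSketch {
    // REM_QUOTA as shown by 'fs -count -q': namespace quota minus the
    // total number of directories and files counted under the path.
    static long remQuota(long quota, long dirCount, long fileCount) {
        return quota - (dirCount + fileCount);
    }

    public static void main(String[] args) {
        System.out.println(remQuota(2, 1, 0)); // step 3: prints 1
        System.out.println(remQuota(2, 1, 1)); // step 5: prints 0
        System.out.println(remQuota(2, 2, 1)); // step 8, after snapshot: prints -1
    }
}
```

This reproduces the 1, 0, -1 sequence observed above, which is why the reporter argues the snapshot directory should not be charged against the quota.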





[jira] [Created] (HDFS-8099) Remove extraneous warning from DFSInputStream.close()

2015-04-08 Thread Charles Lamb (JIRA)
Charles Lamb created HDFS-8099:
--

 Summary: Remove extraneous warning from DFSInputStream.close()
 Key: HDFS-8099
 URL: https://issues.apache.org/jira/browse/HDFS-8099
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: hdfs-client
Affects Versions: 2.7.0
Reporter: Charles Lamb
Assignee: Charles Lamb
Priority: Minor


The hadoop fs -get command always shows this warning:

{noformat}
$ hadoop fs -get /data/schemas/sfdc/BusinessHours-2014-12-09.avsc
15/04/06 06:22:19 WARN hdfs.DFSClient: DFSInputStream has been closed already
{noformat}

This was introduced by HDFS-7494. The easiest thing is to just remove the 
warning from the code.
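
A common alternative to deleting the log line outright is to make close() idempotent and quiet on repeat calls. A minimal sketch with hypothetical names (this is not the actual DFSInputStream code):

```java
import java.io.Closeable;
import java.io.IOException;
import java.util.concurrent.atomic.AtomicBoolean;

public class QuietCloseSketch implements Closeable {
    private final AtomicBoolean closed = new AtomicBoolean(false);

    @Override
    public void close() throws IOException {
        // First close wins; later calls return quietly instead of warning.
        if (!closed.compareAndSet(false, true)) {
            return; // already closed; a debug-level log could go here
        }
        // ... release resources exactly once ...
    }

    public boolean isClosed() {
        return closed.get();
    }

    public static void main(String[] args) throws IOException {
        QuietCloseSketch s = new QuietCloseSketch();
        s.close();
        s.close(); // no warning, no exception
        System.out.println(s.isClosed()); // prints "true"
    }
}
```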





[jira] [Updated] (HDFS-8099) Remove extraneous warning from DFSInputStream.close()

2015-04-08 Thread Charles Lamb (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8099?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Charles Lamb updated HDFS-8099:
---
Attachment: HDFS-8099.000.patch

 Remove extraneous warning from DFSInputStream.close()
 -

 Key: HDFS-8099
 URL: https://issues.apache.org/jira/browse/HDFS-8099
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: hdfs-client
Affects Versions: 2.7.0
Reporter: Charles Lamb
Assignee: Charles Lamb
Priority: Minor
 Attachments: HDFS-8099.000.patch


 The hadoop fs -get command always shows this warning:
 {noformat}
 $ hadoop fs -get /data/schemas/sfdc/BusinessHours-2014-12-09.avsc
 15/04/06 06:22:19 WARN hdfs.DFSClient: DFSInputStream has been closed already
 {noformat}
 This was introduced by HDFS-7494. The easiest thing is to just remove the 
 warning from the code.





[jira] [Updated] (HDFS-8099) Remove extraneous warning from DFSInputStream.close()

2015-04-08 Thread Charles Lamb (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8099?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Charles Lamb updated HDFS-8099:
---
Status: Patch Available  (was: Open)

 Remove extraneous warning from DFSInputStream.close()
 -

 Key: HDFS-8099
 URL: https://issues.apache.org/jira/browse/HDFS-8099
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: hdfs-client
Affects Versions: 2.7.0
Reporter: Charles Lamb
Assignee: Charles Lamb
Priority: Minor
 Attachments: HDFS-8099.000.patch


 The hadoop fs -get command always shows this warning:
 {noformat}
 $ hadoop fs -get /data/schemas/sfdc/BusinessHours-2014-12-09.avsc
 15/04/06 06:22:19 WARN hdfs.DFSClient: DFSInputStream has been closed already
 {noformat}
 This was introduced by HDFS-7494. The easiest thing is to just remove the 
 warning from the code.





[jira] [Updated] (HDFS-8094) Cluster web console (dfsclusterhealth.jsp) is not working

2015-04-08 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8094?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HDFS-8094:
---
Component/s: (was: federation)
 documentation

 Cluster web console (dfsclusterhealth.jsp) is not working
 -

 Key: HDFS-8094
 URL: https://issues.apache.org/jira/browse/HDFS-8094
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: documentation
Affects Versions: 2.7.0
Reporter: Ajith S
Assignee: Ajith S

 According to the documentation, the cluster can be monitored at 
 http://any_nn_host:port/dfsclusterhealth.jsp
 Currently, this URL doesn't seem to be working. It appears to have been removed 
 as part of HDFS-6252.





[jira] [Created] (HDFS-8097) TestFileTruncate.testTruncate4Symlink is failing intermittently

2015-04-08 Thread Rakesh R (JIRA)
Rakesh R created HDFS-8097:
--

 Summary: TestFileTruncate.testTruncate4Symlink is failing 
intermittently
 Key: HDFS-8097
 URL: https://issues.apache.org/jira/browse/HDFS-8097
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Reporter: Rakesh R
Assignee: Rakesh R


{code}
java.lang.AssertionError: Bad disk space usage expected:<45> but was:<12>
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.failNotEquals(Assert.java:743)
at org.junit.Assert.assertEquals(Assert.java:118)
at org.junit.Assert.assertEquals(Assert.java:555)
at 
org.apache.hadoop.hdfs.server.namenode.TestFileTruncate.testTruncate4Symlink(TestFileTruncate.java:1158)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:601)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
at 
org.eclipse.jdt.internal.junit4.runner.JUnit4TestReference.run(JUnit4TestReference.java:50)
at 
org.eclipse.jdt.internal.junit.runner.TestExecution.run(TestExecution.java:38)
at 
org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:467)
at 
org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:683)
at 
org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.run(RemoteTestRunner.java:390)
at 
org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.main(RemoteTestRunner.java:197)
{code}





[jira] [Updated] (HDFS-8098) Erasure coding: fix bug in TestFSImage

2015-04-08 Thread Rakesh R (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8098?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rakesh R updated HDFS-8098:
---
Attachment: HDFS-8098-001.patch

 Erasure coding: fix bug in TestFSImage
 --

 Key: HDFS-8098
 URL: https://issues.apache.org/jira/browse/HDFS-8098
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Rakesh R
Assignee: Rakesh R
 Attachments: HDFS-8098-001.patch


 The following test cases are failing consistently, expecting a striped redundancy 
 value of {{HdfsConstants.NUM_DATA_BLOCKS + HdfsConstants.NUM_PARITY_BLOCKS}}:
 1) {code}
 java.lang.AssertionError: expected:<0> but was:<5>
   at org.junit.Assert.fail(Assert.java:88)
   at org.junit.Assert.failNotEquals(Assert.java:743)
   at org.junit.Assert.assertEquals(Assert.java:118)
   at org.junit.Assert.assertEquals(Assert.java:555)
   at org.junit.Assert.assertEquals(Assert.java:542)
   at 
 org.apache.hadoop.hdfs.server.namenode.TestFSImage.testSaveAndLoadStripedINodeFile(TestFSImage.java:202)
   at 
 org.apache.hadoop.hdfs.server.namenode.TestFSImage.testSaveAndLoadStripedINodeFile(TestFSImage.java:241)
 {code}
 2) 
 {code}
 java.lang.AssertionError: expected:<0> but was:<5>
   at org.junit.Assert.fail(Assert.java:88)
   at org.junit.Assert.failNotEquals(Assert.java:743)
   at org.junit.Assert.assertEquals(Assert.java:118)
   at org.junit.Assert.assertEquals(Assert.java:555)
   at org.junit.Assert.assertEquals(Assert.java:542)
   at 
 org.apache.hadoop.hdfs.server.namenode.TestFSImage.testSaveAndLoadStripedINodeFile(TestFSImage.java:202)
   at 
 org.apache.hadoop.hdfs.server.namenode.TestFSImage.testSaveAndLoadStripedINodeFileUC(TestFSImage.java:261)
 {code}





[jira] [Commented] (HDFS-7949) WebImageViewer need support file size calculation with striped blocks

2015-04-08 Thread Zhe Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7949?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14485578#comment-14485578
 ] 

Zhe Zhang commented on HDFS-7949:
-

Thanks for the update Rakesh. 
# I think the static method should calculate the consumed space of a striped 
block, instead of a file. Something like the below under {{BlockInfoStriped}}:
{code}
  public static long spaceConsumed(long numBytes, short dataBlockNum,
      short parityBlockNum) {
    // For striped blocks, the total usage should be the total of data
    // blocks and parity blocks, because numBytes covers only the actual
    // data size.
    return ((numBytes - 1) / (dataBlockNum * BLOCK_STRIPED_CELL_SIZE) + 1)
        * BLOCK_STRIPED_CELL_SIZE * parityBlockNum + numBytes;
  }
{code}
The file level logic should still be in {{FSImageLoader}}.
# Javadoc format issue:
{code}
+   * @param f
+   *  inode file
+   * @return file size
{code}
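To make the suggested per-block formula concrete, here is a standalone compilable version with a worked example (the cell size value and class name are illustrative assumptions; HDFS defines its own BLOCK_STRIPED_CELL_SIZE constant):

```java
public class StripedSpaceSketch {
    // Illustrative cell size; not the real HDFS constant.
    static final long CELL_SIZE = 1024;

    // Space consumed by one striped block: round the data up to whole
    // stripes, charge one parity cell per parity block per stripe,
    // then add the data bytes themselves.
    static long spaceConsumed(long numBytes, short dataBlockNum,
                              short parityBlockNum) {
        long stripes = (numBytes - 1) / (dataBlockNum * CELL_SIZE) + 1;
        return stripes * CELL_SIZE * parityBlockNum + numBytes;
    }

    public static void main(String[] args) {
        // One full 6+3 stripe of 6 * 1024 data bytes adds 3 parity cells:
        // 6144 data + 3 * 1024 parity = 9216 bytes consumed.
        System.out.println(spaceConsumed(6144, (short) 6, (short) 3));
    }
}
```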

 WebImageViewer need support file size calculation with striped blocks
 -

 Key: HDFS-7949
 URL: https://issues.apache.org/jira/browse/HDFS-7949
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Hui Zheng
Assignee: Rakesh R
Priority: Minor
 Attachments: HDFS-7949-001.patch, HDFS-7949-002.patch


 The file size calculation should be changed when the blocks of the file are 
 striped in WebImageViewer.





[jira] [Updated] (HDFS-7702) Move metadata across namenode - Effort to a real distributed namenode

2015-04-08 Thread Ray Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7702?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ray Zhang updated HDFS-7702:

Attachment: Namespace Moving Tool Design Proposal.pdf

 Move metadata across namenode - Effort to a real distributed namenode
 -

 Key: HDFS-7702
 URL: https://issues.apache.org/jira/browse/HDFS-7702
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Ray Zhang
Assignee: Ray Zhang
 Attachments: Namespace Moving Tool Design Proposal.pdf


 Implement a tool that can show the in-memory namespace tree structure with 
 weight (size), and an API that can move metadata across different namenodes. The 
 purpose is to move data efficiently and quickly, without moving blocks on the 
 datanodes.





[jira] [Updated] (HDFS-7702) Move metadata across namenode - Effort to a real distributed namenode

2015-04-08 Thread Ray Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7702?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ray Zhang updated HDFS-7702:

Attachment: (was: Metadata Moving Tool.pdf)

 Move metadata across namenode - Effort to a real distributed namenode
 -

 Key: HDFS-7702
 URL: https://issues.apache.org/jira/browse/HDFS-7702
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Ray Zhang
Assignee: Ray Zhang
 Attachments: Namespace Moving Tool Design Proposal.pdf


 Implement a tool that can show the in-memory namespace tree structure with 
 weight (size), and an API that can move metadata across different namenodes. The 
 purpose is to move data efficiently and quickly, without moving blocks on the 
 datanodes.





[jira] [Updated] (HDFS-8097) TestFileTruncate.testTruncate4Symlink is failing intermittently

2015-04-08 Thread Rakesh R (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8097?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rakesh R updated HDFS-8097:
---
Status: Patch Available  (was: Open)

 TestFileTruncate.testTruncate4Symlink is failing intermittently
 ---

 Key: HDFS-8097
 URL: https://issues.apache.org/jira/browse/HDFS-8097
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Reporter: Rakesh R
Assignee: Rakesh R
 Attachments: HDFS-8097-001.patch


 {code}
 java.lang.AssertionError: Bad disk space usage expected:<45> but was:<12>
   at org.junit.Assert.fail(Assert.java:88)
   at org.junit.Assert.failNotEquals(Assert.java:743)
   at org.junit.Assert.assertEquals(Assert.java:118)
   at org.junit.Assert.assertEquals(Assert.java:555)
   at 
 org.apache.hadoop.hdfs.server.namenode.TestFileTruncate.testTruncate4Symlink(TestFileTruncate.java:1158)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
   at java.lang.reflect.Method.invoke(Method.java:601)
   at 
 org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
   at 
 org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
   at 
 org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
   at 
 org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271)
   at 
 org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
   at 
 org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
   at 
 org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
   at 
 org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
   at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
   at 
 org.eclipse.jdt.internal.junit4.runner.JUnit4TestReference.run(JUnit4TestReference.java:50)
   at 
 org.eclipse.jdt.internal.junit.runner.TestExecution.run(TestExecution.java:38)
   at 
 org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:467)
   at 
 org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:683)
   at 
 org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.run(RemoteTestRunner.java:390)
   at 
 org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.main(RemoteTestRunner.java:197)
 {code}





[jira] [Updated] (HDFS-8097) TestFileTruncate.testTruncate4Symlink is failing intermittently

2015-04-08 Thread Rakesh R (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8097?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rakesh R updated HDFS-8097:
---
Attachment: HDFS-8097-001.patch

 TestFileTruncate.testTruncate4Symlink is failing intermittently
 ---

 Key: HDFS-8097
 URL: https://issues.apache.org/jira/browse/HDFS-8097
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Reporter: Rakesh R
Assignee: Rakesh R
 Attachments: HDFS-8097-001.patch


 {code}
 java.lang.AssertionError: Bad disk space usage expected:<45> but was:<12>
   at org.junit.Assert.fail(Assert.java:88)
   at org.junit.Assert.failNotEquals(Assert.java:743)
   at org.junit.Assert.assertEquals(Assert.java:118)
   at org.junit.Assert.assertEquals(Assert.java:555)
   at 
 org.apache.hadoop.hdfs.server.namenode.TestFileTruncate.testTruncate4Symlink(TestFileTruncate.java:1158)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
   at java.lang.reflect.Method.invoke(Method.java:601)
   at 
 org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
   at 
 org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
   at 
 org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
   at 
 org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271)
   at 
 org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
   at 
 org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
   at 
 org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
   at 
 org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
   at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
   at 
 org.eclipse.jdt.internal.junit4.runner.JUnit4TestReference.run(JUnit4TestReference.java:50)
   at 
 org.eclipse.jdt.internal.junit.runner.TestExecution.run(TestExecution.java:38)
   at 
 org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:467)
   at 
 org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:683)
   at 
 org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.run(RemoteTestRunner.java:390)
   at 
 org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.main(RemoteTestRunner.java:197)
 {code}





[jira] [Updated] (HDFS-8097) TestFileTruncate.testTruncate4Symlink is failing intermittently

2015-04-08 Thread Rakesh R (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8097?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rakesh R updated HDFS-8097:
---
Attachment: HDFS-8097-002.patch

 TestFileTruncate.testTruncate4Symlink is failing intermittently
 ---

 Key: HDFS-8097
 URL: https://issues.apache.org/jira/browse/HDFS-8097
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Reporter: Rakesh R
Assignee: Rakesh R
 Attachments: HDFS-8097-001.patch, HDFS-8097-002.patch


 {code}
 java.lang.AssertionError: Bad disk space usage expected:<45> but was:<12>
   at org.junit.Assert.fail(Assert.java:88)
   at org.junit.Assert.failNotEquals(Assert.java:743)
   at org.junit.Assert.assertEquals(Assert.java:118)
   at org.junit.Assert.assertEquals(Assert.java:555)
   at 
 org.apache.hadoop.hdfs.server.namenode.TestFileTruncate.testTruncate4Symlink(TestFileTruncate.java:1158)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
   at java.lang.reflect.Method.invoke(Method.java:601)
   at 
 org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
   at 
 org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
   at 
 org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
   at 
 org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271)
   at 
 org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
   at 
 org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
   at 
 org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
   at 
 org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
   at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
   at 
 org.eclipse.jdt.internal.junit4.runner.JUnit4TestReference.run(JUnit4TestReference.java:50)
   at 
 org.eclipse.jdt.internal.junit.runner.TestExecution.run(TestExecution.java:38)
   at 
 org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:467)
   at 
 org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:683)
   at 
 org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.run(RemoteTestRunner.java:390)
   at 
 org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.main(RemoteTestRunner.java:197)
 {code}





[jira] [Commented] (HDFS-8097) TestFileTruncate.testTruncate4Symlink is failing intermittently

2015-04-08 Thread Rakesh R (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8097?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14485318#comment-14485318
 ] 

Rakesh R commented on HDFS-8097:


I could see that many test cases use the {{/test}} parent path. If any of these 
test cases fails or misses cleaning up the /test parent path, it would affect 
the {{cs.getSpaceConsumed()}} value.

{code}
ContentSummary cs = fs.getContentSummary(parent);
assertEquals("Bad disk space usage",
    cs.getSpaceConsumed(), newLength * REPLICATION);
{code}
I think a {{cleanup}} before every test would help resolve this case. I've 
updated the patch to do the cleanups. Please review. Thanks!
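
The cleanup-before-each-test idea can be sketched with a dependency-free analogue, using java.nio.file in place of the Hadoop FileSystem API (class and method names are illustrative):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Comparator;
import java.util.stream.Stream;

public class CleanupSketch {
    // Recursively delete the shared test root, as an @Before method would,
    // so leftovers from a failed test can't skew later space accounting.
    static void cleanTestRoot(Path root) throws IOException {
        if (!Files.exists(root)) {
            return;
        }
        // Delete children before parents (deepest paths sort last, so reverse).
        try (Stream<Path> paths = Files.walk(root)) {
            paths.sorted(Comparator.reverseOrder())
                 .forEach(p -> p.toFile().delete());
        }
    }

    public static void main(String[] args) throws IOException {
        Path root = Files.createTempDirectory("test");
        Files.createDirectories(root.resolve("a/b"));
        Files.writeString(root.resolve("a/b/f1"), "data");
        cleanTestRoot(root);
        System.out.println(Files.exists(root)); // prints "false"
    }
}
```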

 TestFileTruncate.testTruncate4Symlink is failing intermittently
 ---

 Key: HDFS-8097
 URL: https://issues.apache.org/jira/browse/HDFS-8097
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Reporter: Rakesh R
Assignee: Rakesh R
 Attachments: HDFS-8097-001.patch, HDFS-8097-002.patch


 {code}
 java.lang.AssertionError: Bad disk space usage expected:<45> but was:<12>
   at org.junit.Assert.fail(Assert.java:88)
   at org.junit.Assert.failNotEquals(Assert.java:743)
   at org.junit.Assert.assertEquals(Assert.java:118)
   at org.junit.Assert.assertEquals(Assert.java:555)
   at 
 org.apache.hadoop.hdfs.server.namenode.TestFileTruncate.testTruncate4Symlink(TestFileTruncate.java:1158)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
   at java.lang.reflect.Method.invoke(Method.java:601)
   at 
 org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
   at 
 org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
   at 
 org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
   at 
 org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271)
   at 
 org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
   at 
 org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
   at 
 org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
   at 
 org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
   at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
   at 
 org.eclipse.jdt.internal.junit4.runner.JUnit4TestReference.run(JUnit4TestReference.java:50)
   at 
 org.eclipse.jdt.internal.junit.runner.TestExecution.run(TestExecution.java:38)
   at 
 org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:467)
   at 
 org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:683)
   at 
 org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.run(RemoteTestRunner.java:390)
   at 
 org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.main(RemoteTestRunner.java:197)
 {code}





[jira] [Work started] (HDFS-8098) Erasure coding: fix bug in TestFSImage

2015-04-08 Thread Rakesh R (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8098?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HDFS-8098 started by Rakesh R.
--
 Erasure coding: fix bug in TestFSImage
 --

 Key: HDFS-8098
 URL: https://issues.apache.org/jira/browse/HDFS-8098
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Rakesh R
Assignee: Rakesh R
 Attachments: HDFS-8098-001.patch


 The following test cases are failing consistently, expecting a striped redundancy 
 value of {{HdfsConstants.NUM_DATA_BLOCKS + HdfsConstants.NUM_PARITY_BLOCKS}}:
 1) {code}
 java.lang.AssertionError: expected:<0> but was:<5>
   at org.junit.Assert.fail(Assert.java:88)
   at org.junit.Assert.failNotEquals(Assert.java:743)
   at org.junit.Assert.assertEquals(Assert.java:118)
   at org.junit.Assert.assertEquals(Assert.java:555)
   at org.junit.Assert.assertEquals(Assert.java:542)
   at 
 org.apache.hadoop.hdfs.server.namenode.TestFSImage.testSaveAndLoadStripedINodeFile(TestFSImage.java:202)
   at 
 org.apache.hadoop.hdfs.server.namenode.TestFSImage.testSaveAndLoadStripedINodeFile(TestFSImage.java:241)
 {code}
 2) 
 {code}
 java.lang.AssertionError: expected:<0> but was:<5>
   at org.junit.Assert.fail(Assert.java:88)
   at org.junit.Assert.failNotEquals(Assert.java:743)
   at org.junit.Assert.assertEquals(Assert.java:118)
   at org.junit.Assert.assertEquals(Assert.java:555)
   at org.junit.Assert.assertEquals(Assert.java:542)
   at 
 org.apache.hadoop.hdfs.server.namenode.TestFSImage.testSaveAndLoadStripedINodeFile(TestFSImage.java:202)
   at 
 org.apache.hadoop.hdfs.server.namenode.TestFSImage.testSaveAndLoadStripedINodeFileUC(TestFSImage.java:261)
 {code}





[jira] [Commented] (HDFS-8077) Erasure coding: fix bug in EC zone and symlinks

2015-04-08 Thread Rakesh R (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8077?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14485360#comment-14485360
 ] 

Rakesh R commented on HDFS-8077:


Hi [~jingzhao], [~zhz], I've had a chance to look at a few of the [test 
failures|https://builds.apache.org/job/Hadoop-HDFS-7285-nightly/85/testReport/].
 Please see:
- It looks like {{TestFileTruncate}} is failing in trunk as well. I've raised 
HDFS-8097 to handle one case.
- Raised HDFS-8098 to handle the {{TestFSImage}} failures.

 Erasure coding: fix bug in EC zone and symlinks
 ---

 Key: HDFS-8077
 URL: https://issues.apache.org/jira/browse/HDFS-8077
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Zhe Zhang
Assignee: Jing Zhao
 Attachments: HDFS-8077-000.patch


 The EC zone manager tries to get the XAttr of an inode to determine the EC 
 policy, which doesn't work with symlinks. This patch has a simple fix to get rid 
 of the test failures.
 Ideally we should also add logic to disallow creating symlinks in several 
 EC-related scenarios. But since symlinks are disabled in branch-2 and will 
 likely be disabled in trunk, this step is skipped for now.
 The patch also fixes a small test error around {{getBlockReplication}}.





[jira] [Created] (HDFS-8098) Erasure coding: fix bug in TestFSImage

2015-04-08 Thread Rakesh R (JIRA)
Rakesh R created HDFS-8098:
--

 Summary: Erasure coding: fix bug in TestFSImage
 Key: HDFS-8098
 URL: https://issues.apache.org/jira/browse/HDFS-8098
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Rakesh R
Assignee: Rakesh R


The following test cases are failing consistently, expecting a striped redundancy 
value of {{HdfsConstants.NUM_DATA_BLOCKS + HdfsConstants.NUM_PARITY_BLOCKS}}:

1) {code}
java.lang.AssertionError: expected:<0> but was:<5>
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.failNotEquals(Assert.java:743)
at org.junit.Assert.assertEquals(Assert.java:118)
at org.junit.Assert.assertEquals(Assert.java:555)
at org.junit.Assert.assertEquals(Assert.java:542)
at 
org.apache.hadoop.hdfs.server.namenode.TestFSImage.testSaveAndLoadStripedINodeFile(TestFSImage.java:202)
at 
org.apache.hadoop.hdfs.server.namenode.TestFSImage.testSaveAndLoadStripedINodeFile(TestFSImage.java:241)
{code}
2) 
{code}
java.lang.AssertionError: expected:<0> but was:<5>
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.failNotEquals(Assert.java:743)
at org.junit.Assert.assertEquals(Assert.java:118)
at org.junit.Assert.assertEquals(Assert.java:555)
at org.junit.Assert.assertEquals(Assert.java:542)
at 
org.apache.hadoop.hdfs.server.namenode.TestFSImage.testSaveAndLoadStripedINodeFile(TestFSImage.java:202)
at 
org.apache.hadoop.hdfs.server.namenode.TestFSImage.testSaveAndLoadStripedINodeFileUC(TestFSImage.java:261)
{code}





[jira] [Commented] (HDFS-8098) Erasure coding: fix bug in TestFSImage

2015-04-08 Thread Rakesh R (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8098?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14485353#comment-14485353
 ] 

Rakesh R commented on HDFS-8098:


Attached a patch that expects the {{HdfsConstants.NUM_DATA_BLOCKS + 
HdfsConstants.NUM_PARITY_BLOCKS}} redundancy value. Please review. Thanks!
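The shape of the fix can be sketched as below. This is only an illustration of the corrected expectation, not the contents of HDFS-8098-001.patch: the constant values (6 data + 3 parity) and the helper names are assumptions, and the real constants live in {{HdfsConstants}}.

```java
public class StripedRedundancySketch {
    // Stand-ins for HdfsConstants.NUM_DATA_BLOCKS / NUM_PARITY_BLOCKS;
    // the 6+3 layout is an assumed example, not taken from the patch.
    static final int NUM_DATA_BLOCKS = 6;
    static final int NUM_PARITY_BLOCKS = 3;

    // Stand-in for what INodeFile#getBlockReplication reports
    // for a striped (erasure-coded) file.
    static int getBlockReplication() {
        return NUM_DATA_BLOCKS + NUM_PARITY_BLOCKS;
    }

    public static void main(String[] args) {
        // Old expectation (fails): assertEquals(0, getBlockReplication())
        // Corrected expectation, as the comment above describes:
        int expected = NUM_DATA_BLOCKS + NUM_PARITY_BLOCKS;
        if (expected != getBlockReplication()) {
            throw new AssertionError(
                "expected:<" + expected + "> but was:<" + getBlockReplication() + ">");
        }
    }
}
```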

 Erasure coding: fix bug in TestFSImage
 --

 Key: HDFS-8098
 URL: https://issues.apache.org/jira/browse/HDFS-8098
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Rakesh R
Assignee: Rakesh R
 Attachments: HDFS-8098-001.patch


 The following test cases are failing consistently, expecting a striped redundancy 
 value of {{HdfsConstants.NUM_DATA_BLOCKS + HdfsConstants.NUM_PARITY_BLOCKS}}:
 1) {code}
 java.lang.AssertionError: expected:<0> but was:<5>
   at org.junit.Assert.fail(Assert.java:88)
   at org.junit.Assert.failNotEquals(Assert.java:743)
   at org.junit.Assert.assertEquals(Assert.java:118)
   at org.junit.Assert.assertEquals(Assert.java:555)
   at org.junit.Assert.assertEquals(Assert.java:542)
   at 
 org.apache.hadoop.hdfs.server.namenode.TestFSImage.testSaveAndLoadStripedINodeFile(TestFSImage.java:202)
   at 
 org.apache.hadoop.hdfs.server.namenode.TestFSImage.testSaveAndLoadStripedINodeFile(TestFSImage.java:241)
 {code}
 2) 
 {code}
 java.lang.AssertionError: expected:<0> but was:<5>
   at org.junit.Assert.fail(Assert.java:88)
   at org.junit.Assert.failNotEquals(Assert.java:743)
   at org.junit.Assert.assertEquals(Assert.java:118)
   at org.junit.Assert.assertEquals(Assert.java:555)
   at org.junit.Assert.assertEquals(Assert.java:542)
   at 
 org.apache.hadoop.hdfs.server.namenode.TestFSImage.testSaveAndLoadStripedINodeFile(TestFSImage.java:202)
   at 
 org.apache.hadoop.hdfs.server.namenode.TestFSImage.testSaveAndLoadStripedINodeFileUC(TestFSImage.java:261)
 {code}





[jira] [Updated] (HDFS-6874) Add GET_BLOCK_LOCATIONS operation to HttpFS

2015-04-08 Thread Gao Zhong Liang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6874?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gao Zhong Liang updated HDFS-6874:
--
Attachment: HDFS-6874-1.patch

Hi Charles,
Thanks so much for your review. I've put up another patch based on your comments 
and the current trunk.

 Add GET_BLOCK_LOCATIONS operation to HttpFS
 ---

 Key: HDFS-6874
 URL: https://issues.apache.org/jira/browse/HDFS-6874
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.4.1
Reporter: Gao Zhong Liang
Assignee: Gao Zhong Liang
 Attachments: HDFS-6874-1.patch, HDFS-6874-branch-2.6.0.patch, 
 HDFS-6874.patch


 The GET_BLOCK_LOCATIONS operation is missing in HttpFS, although it is already 
 supported in WebHDFS. For a GETFILEBLOCKLOCATIONS request, 
 org.apache.hadoop.fs.http.server.HttpFSServer currently returns BAD_REQUEST:
 {code}
 ...
 case GETFILEBLOCKLOCATIONS: {
   response = Response.status(Response.Status.BAD_REQUEST).build();
   break;
 }
 {code}





[jira] [Commented] (HDFS-8073) Split BlockPlacementPolicyDefault.chooseTarget(..) so it can be easily overrided.

2015-04-08 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8073?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14485729#comment-14485729
 ] 

Hudson commented on HDFS-8073:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk #2107 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2107/])
HDFS-8073. Split BlockPlacementPolicyDefault.chooseTarget(..) so it can be 
easily overrided. (Contributed by Walter Su) (vinayakumarb: rev 
d505c8acd30d6f40d0632fe9c93c886a4499a9fc)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockPlacementPolicyDefault.java


 Split BlockPlacementPolicyDefault.chooseTarget(..) so it can be easily 
 overrided.
 -

 Key: HDFS-8073
 URL: https://issues.apache.org/jira/browse/HDFS-8073
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: namenode
Reporter: Walter Su
Assignee: Walter Su
Priority: Trivial
 Fix For: 2.8.0

 Attachments: HDFS-8073.001.patch, HDFS-8073.002.patch, 
 HDFS-8073.003.patch


 We want to implement a placement policy (e.g. HDFS-7891) based on the default 
 policy. This will be easier if we can split 
 BlockPlacementPolicyDefault.chooseTarget(..).
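The refactoring idea can be sketched as a template method: the monolithic chooseTarget(..) is split into smaller protected steps so a subclass overrides only the step it needs. This is a hypothetical illustration; the method names and logic below are stand-ins, not the actual BlockPlacementPolicyDefault signatures from the committed patch.

```java
import java.util.ArrayList;
import java.util.List;

class DefaultPlacementPolicySketch {
    // Entry point delegates to overridable steps instead of doing
    // everything inline.
    List<String> chooseTarget(int numReplicas, List<String> candidates) {
        List<String> chosen = new ArrayList<>();
        chooseLocalNode(candidates, chosen);
        while (chosen.size() < numReplicas) {
            chooseRemoteNode(candidates, chosen);
        }
        return chosen;
    }

    // Step 1: place the first replica locally (simplified).
    protected void chooseLocalNode(List<String> candidates, List<String> chosen) {
        chosen.add(candidates.get(0));
    }

    // Step 2: place remaining replicas on other nodes (simplified).
    protected void chooseRemoteNode(List<String> candidates, List<String> chosen) {
        for (String c : candidates) {
            if (!chosen.contains(c)) {
                chosen.add(c);
                return;
            }
        }
    }
}

// A custom policy (e.g. an EC-aware one as in HDFS-7891) overrides only
// the step whose behavior it needs to change.
class CustomPlacementPolicySketch extends DefaultPlacementPolicySketch {
    @Override
    protected void chooseRemoteNode(List<String> candidates, List<String> chosen) {
        // Illustrative variation: pick from the end of the candidate list
        // instead of the front.
        for (int i = candidates.size() - 1; i >= 0; i--) {
            String c = candidates.get(i);
            if (!chosen.contains(c)) {
                chosen.add(c);
                return;
            }
        }
    }
}
```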




