[jira] [Commented] (HDFS-7518) Heartbeat processing doesn't have to take FSN readLock

2014-12-12 Thread Haohui Mai (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7518?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14245232#comment-14245232
 ] 

Haohui Mai commented on HDFS-7518:
--

Currently the FSN lock implements mutual exclusion for both {{FSNamesystem}} 
and {{BlockManager}}. More importantly, it also implements mutual exclusion 
when updating individual {{BlockInfo}} objects.

I think that as a first step, we can move the workflow of managing datanodes 
out of the FSN lock, but keep things like populating replication queues inside 
the lock.

In the slightly longer term, I think we need to work on HDFS-7437 to decouple 
the implicit dependency introduced by {{BlockInfo}} objects, and gradually 
decouple other functionality from the FSN lock.

> Heartbeat processing doesn't have to take FSN readLock
> --
>
> Key: HDFS-7518
> URL: https://issues.apache.org/jira/browse/HDFS-7518
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Reporter: Ming Ma
>
> NameNode takes a global read lock when it processes heartbeat RPCs from 
> DataNodes. This increases lock contention and can impact overall NN 
> throughput. Since heartbeat processing only needs to access data specific to 
> the DataNode that invokes the RPC, it could instead synchronize on the 
> specific DataNode and on datanodeMap.
> It looks like each DatanodeDescriptor already keeps its own lists of blocks 
> to recover, replicate, and invalidate. There are several places that need to 
> be changed to remove the FSN lock.
> As mentioned in other jiras, we need some mechanism to reason about the 
> correctness of the solution.
> Thoughts?
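A minimal sketch of the per-DataNode synchronization the description proposes. This is an illustration only, not the actual NameNode code: the class and field names here are hypothetical, and the real DatanodeDescriptor carries far more state.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch: a heartbeat update locks only the reporting node's
// descriptor, so heartbeats from different DataNodes proceed in parallel
// instead of contending on a namesystem-wide lock.
class HeartbeatSketch {
  static class DatanodeDescriptor {
    long lastHeartbeatMillis;
    long capacity;
    long remaining;

    synchronized void updateHeartbeat(long capacity, long remaining, long now) {
      this.capacity = capacity;
      this.remaining = remaining;
      this.lastHeartbeatMillis = now;
    }
  }

  // A concurrent map lets lookup/registration avoid any global lock.
  final Map<String, DatanodeDescriptor> datanodeMap = new ConcurrentHashMap<>();

  void handleHeartbeat(String nodeId, long capacity, long remaining) {
    DatanodeDescriptor dn =
        datanodeMap.computeIfAbsent(nodeId, k -> new DatanodeDescriptor());
    // Synchronizes on the single descriptor only.
    dn.updateHeartbeat(capacity, remaining, System.currentTimeMillis());
  }
}
```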



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7516) Fix findbugs warnings in hdfs-nfs project

2014-12-12 Thread Haohui Mai (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7516?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14245231#comment-14245231
 ] 

Haohui Mai commented on HDFS-7516:
--

The patch looks good. A nit:

{code}
-byte[] in = s.getBytes();
+byte[] in = s.getBytes(Charsets.UTF_8);
 for (int i = 0; i < in.length; i++) {
   digest.update(in[i]);
 }
{code}

You can do:

{code}
digest.update(s.getBytes(Charsets.UTF_8));
{code}
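For reference, a small self-contained check that the one-shot {{update(byte[])}} call produces the same digest as the byte-by-byte loop. This sketch uses {{java.util.StandardCharsets}} and MD5 as assumptions; the patch itself uses Guava's {{Charsets}} and whatever algorithm the NFS code selects.

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import java.util.Arrays;

class DigestEquivalence {
  // Returns true iff the per-byte loop and the one-shot update yield the
  // same digest for the given string.
  static boolean sameDigest(String s) {
    try {
      byte[] in = s.getBytes(StandardCharsets.UTF_8);

      // Byte-by-byte form, as in the patch.
      MessageDigest d1 = MessageDigest.getInstance("MD5");
      for (int i = 0; i < in.length; i++) {
        d1.update(in[i]);
      }

      // Suggested one-shot form.
      MessageDigest d2 = MessageDigest.getInstance("MD5");
      d2.update(in);

      return Arrays.equals(d1.digest(), d2.digest());
    } catch (NoSuchAlgorithmException e) {
      throw new AssertionError("MD5 is required to be present", e);
    }
  }
}
```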

+1 after addressing it.

> Fix findbugs warnings in hdfs-nfs project
> -
>
> Key: HDFS-7516
> URL: https://issues.apache.org/jira/browse/HDFS-7516
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: nfs
>Affects Versions: 2.7.0
>Reporter: Brandon Li
>Assignee: Brandon Li
> Attachments: HDFS-7516.001.patch, findbugsXml.xml
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7495) Lock inversion in DFSInputStream#getBlockAt()

2014-12-12 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7495?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14245179#comment-14245179
 ] 

Hadoop QA commented on HDFS-7495:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12686989/hdfs-7495-001.patch
  against trunk revision fa7b924.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:red}-1 findbugs{color}.  The patch appears to introduce 3 new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-hdfs-project/hadoop-hdfs.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/9033//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HDFS-Build/9033//artifact/patchprocess/newPatchFindbugsWarningshadoop-hdfs.html
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/9033//console

This message is automatically generated.

> Lock inversion in DFSInputStream#getBlockAt()
> -
>
> Key: HDFS-7495
> URL: https://issues.apache.org/jira/browse/HDFS-7495
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Ted Yu
>Priority: Minor
> Attachments: hdfs-7495-001.patch
>
>
> There are two locks: one on DFSInputStream.this, and one on 
> DFSInputStream.infoLock
> Normally the lock on DFSInputStream.this is obtained first, then the lock on 
> infoLock
> However, that order is not observed in DFSInputStream#getBlockAt() :
> {code}
> synchronized(infoLock) {
> ...
>   if (updatePosition) {
> // synchronized not strictly needed, since we only get here
> // from synchronized caller methods
> synchronized(this) {
> {code}
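As an illustration of the hazard (this is a toy sketch, not the actual DFSInputStream code): a consistent ordering acquires the outer monitor before {{infoLock}} on every path. The snippet above does the reverse, so it can deadlock against a thread that already holds {{this}} and is waiting for {{infoLock}}.

```java
// Hypothetical sketch of lock-ordering discipline: always this, then infoLock.
class LockOrderSketch {
  private final Object infoLock = new Object();
  private long pos;
  private long blockEnd;

  // Correct order: outer monitor (this) first, inner infoLock second.
  synchronized void updatePosition(long newPos) {
    synchronized (infoLock) {
      pos = newPos;
    }
  }

  synchronized long getPos() {
    synchronized (infoLock) {
      return pos;
    }
  }

  // Inverted order, as in the reported snippet: infoLock first, then this.
  // If another thread runs updatePosition() concurrently, each thread can end
  // up holding one lock and waiting for the other -- a classic deadlock.
  void getBlockAtInverted(long offset) {
    synchronized (infoLock) {
      synchronized (this) {
        blockEnd = offset;
      }
    }
  }
}
```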



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7513) HDFS inotify: add defaultBlockSize to CreateEvent

2014-12-12 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7513?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14245150#comment-14245150
 ] 

Hadoop QA commented on HDFS-7513:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12686973/HDFS-7513.003.patch
  against trunk revision c78e3a7.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

  {color:red}-1 javac{color}.  The applied patch generated 1267 javac 
compiler warnings (more than the trunk's current 1227 warnings).

{color:red}-1 javadoc{color}.  The javadoc tool appears to have generated 
40 warning messages.
See 
https://builds.apache.org/job/PreCommit-HDFS-Build/9031//artifact/patchprocess/diffJavadocWarnings.txt
 for details.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-hdfs-project/hadoop-hdfs.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/9031//testReport/
Javac warnings: 
https://builds.apache.org/job/PreCommit-HDFS-Build/9031//artifact/patchprocess/diffJavacWarnings.txt
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/9031//console

This message is automatically generated.

> HDFS inotify: add defaultBlockSize to CreateEvent
> -
>
> Key: HDFS-7513
> URL: https://issues.apache.org/jira/browse/HDFS-7513
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Affects Versions: 2.6.0
>Reporter: Colin Patrick McCabe
>Assignee: Colin Patrick McCabe
> Attachments: HDFS-7513.001.patch, HDFS-7513.002.patch, 
> HDFS-7513.003.patch
>
>
> HDFS inotify: add defaultBlockSize to CreateEvent



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7506) Consolidate implementation of setting inode attributes into a single class

2014-12-12 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7506?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14245148#comment-14245148
 ] 

Hadoop QA commented on HDFS-7506:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12686971/HDFS-7506.003.patch
  against trunk revision c78e3a7.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

  {color:red}-1 javac{color}.  The applied patch generated 1269 javac 
compiler warnings (more than the trunk's current 1227 warnings).

{color:red}-1 javadoc{color}.  The javadoc tool appears to have generated 
31 warning messages.
See 
https://builds.apache.org/job/PreCommit-HDFS-Build/9032//artifact/patchprocess/diffJavadocWarnings.txt
 for details.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-hdfs-project/hadoop-hdfs:

  org.apache.hadoop.hdfs.server.namenode.TestCacheDirectives

  The following test timeouts occurred in 
hadoop-hdfs-project/hadoop-hdfs:

org.apache.hadoop.cli.TestHDFSCLI

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/9032//testReport/
Javac warnings: 
https://builds.apache.org/job/PreCommit-HDFS-Build/9032//artifact/patchprocess/diffJavacWarnings.txt
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/9032//console

This message is automatically generated.

> Consolidate implementation of setting inode attributes into a single class
> --
>
> Key: HDFS-7506
> URL: https://issues.apache.org/jira/browse/HDFS-7506
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Haohui Mai
>Assignee: Haohui Mai
> Attachments: HDFS-7506.000.patch, HDFS-7506.001.patch, 
> HDFS-7506.001.patch, HDFS-7506.002.patch, HDFS-7506.003.patch
>
>
> This jira proposes to consolidate the implementation of setting inode 
> attributes (i.e., times, permissions, owner, etc.) to a single class for 
> better maintainability.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7513) HDFS inotify: add defaultBlockSize to CreateEvent

2014-12-12 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7513?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14245141#comment-14245141
 ] 

Hadoop QA commented on HDFS-7513:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12686973/HDFS-7513.003.patch
  against trunk revision c78e3a7.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-hdfs-project/hadoop-hdfs.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/9030//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/9030//console

This message is automatically generated.

> HDFS inotify: add defaultBlockSize to CreateEvent
> -
>
> Key: HDFS-7513
> URL: https://issues.apache.org/jira/browse/HDFS-7513
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Affects Versions: 2.6.0
>Reporter: Colin Patrick McCabe
>Assignee: Colin Patrick McCabe
> Attachments: HDFS-7513.001.patch, HDFS-7513.002.patch, 
> HDFS-7513.003.patch
>
>
> HDFS inotify: add defaultBlockSize to CreateEvent



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7494) Checking of closed in DFSInputStream#pread() should be protected by synchronization

2014-12-12 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7494?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14245139#comment-14245139
 ] 

Hadoop QA commented on HDFS-7494:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12686991/hdfs-7494-002.patch
  against trunk revision fa7b924.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The test build failed in 
hadoop-hdfs-project/hadoop-hdfs 

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/9034//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/9034//console

This message is automatically generated.

> Checking of closed in DFSInputStream#pread() should be protected by 
> synchronization
> ---
>
> Key: HDFS-7494
> URL: https://issues.apache.org/jira/browse/HDFS-7494
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Ted Yu
>Assignee: Ted Yu
>Priority: Minor
> Attachments: hdfs-7494-001.patch, hdfs-7494-002.patch
>
>
> {code}
>   private int pread(long position, byte[] buffer, int offset, int length)
>   throws IOException {
> // sanity checks
> dfsClient.checkOpen();
> if (closed) {
> {code}
> Checking of closed should be protected by holding lock on 
> "DFSInputStream.this"
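A minimal sketch of the race and one common fix, shown as an assumption rather than the patch's actual approach: the unsynchronized read of {{closed}} may not observe a concurrent {{close()}}, so the check is guarded by the same monitor that {{close()}} uses.

```java
// Hypothetical sketch: guard the closed flag with the monitor close() holds,
// so the reading thread is guaranteed to see the latest value.
class ClosedCheckSketch {
  private boolean closed = false;   // guarded by "this"

  synchronized void close() {
    closed = true;
  }

  int pread(long position, byte[] buffer, int offset, int length) {
    synchronized (this) {           // same monitor as close(): visibility holds
      if (closed) {
        throw new IllegalStateException("stream is closed");
      }
    }
    // ... the positional read itself proceeds without holding the lock ...
    return 0;
  }
}
```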



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7521) Refactor DN state management

2014-12-12 Thread Ming Ma (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7521?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ming Ma updated HDFS-7521:
--
Description: 
There are two aspects w.r.t. DN state management in NN.

* State machine management within active NN
NN maintains the state of each data node regarding whether it is running or 
being decommissioned, but the state machine isn’t well defined, and we have 
dealt with corner-case bugs in this area. It would be useful to refactor the 
code to use a clear state machine definition that defines events, available 
states, and the actions taken on state transitions. This has two benefits.
** It makes it easy to define the correctness of DN state management. 
Currently some state transitions aren't defined in the code. For example, if 
admins remove a node from the include host file while the node is being 
decommissioned, it will be transitioned to both DEAD and DECOMM_IN_PROGRESS, 
which might not be the intention. With a state machine definition, we can 
identify this case.
** It makes it easy to add a new DN state later. For example, people have 
discussed a new “maintenance” state for DNs to support the scenario where 
admins need to take a machine/rack down for 30 minutes for repair.

We can refactor the DN state handling with a clear state machine definition 
based on the YARN state machine components.



* State machine consistency between active and standby NN
Another dimension of state machine management is consistency across NN pairs. 
We have dealt with bugs caused by the active and standby NNs seeing different 
sets of live nodes. The current design has each NN manage its own state based 
on the events it receives; for example, DNs send heartbeats to both NNs, and 
admins issue decommission commands to both NNs. An alternative design could be 
to have ZK manage the state.

Thoughts?

  was:
There are two aspects w.r.t. DN state management in NN.

* State machine management within active NN
NN maintains states of each data node regarding whether it is running or being 
decommissioned. But the state machine isn’t well defined. We have dealt with 
some corner case bug in this area. It will be useful if we can refactor the 
code to use clear state machine definition that define events, available states 
and actions for state transitions. It has these benefits.
** Make it easy to define correctness of DN state management. Currently some of 
the state transitions aren't defined in the code. For example, if admins remove 
a node from include host file while the node is being decommissioned, it will 
be transitioned to DEAD and DECOMM_IN_PROGRESS. That might not be the 
intention. If we have state machine definition, we can identify this case.
** Make it easy to add new state for DN later. For example, people discussed 
about new “maintenance” state for DN to support the scenario where admins need 
to take the machine/rack down for 30 minutes for repair.
We can refactor DN with clear state machine definition based on YARN state 
related components.


* State machine consistency between active and standby NN
Another dimension of state machine management is consistency across NN pairs. 
We have dealt with bugs due to different live nodes between active NN and 
standby NN. Current design is to have each NN manage its own state based on the 
events it receives. For example, DNs will send heartbeat to both NNs; admins 
will issue decommission commands to both NNs. Alternative design approach could 
be to have ZK manage the state.

Thoughts?


> Refactor DN state management
> 
>
> Key: HDFS-7521
> URL: https://issues.apache.org/jira/browse/HDFS-7521
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Ming Ma
>
> There are two aspects w.r.t. DN state management in NN.
> * State machine management within active NN
> NN maintains states of each data node regarding whether it is running or 
> being decommissioned. But the state machine isn’t well defined. We have dealt 
> with some corner case bug in this area. It will be useful if we can refactor 
> the code to use clear state machine definition that define events, available 
> states and actions for state transitions. It has these benefits.
> ** Make it easy to define correctness of DN state management. Currently some 
> of the state transitions aren't defined in the code. For example, if admins 
> remove a node from include host file while the node is being decommissioned, 
> it will be transitioned to DEAD and DECOMM_IN_PROGRESS. That might not be the 
> intention. If we have state machine definition, we can identify this case.
> ** Make it easy to add new state for DN later. For example, people discussed 
> about new “maintenance” state for DN to support the scenario where admins 
> need to take the machine/rack down for 30 minutes for repair.
> We can refactor DN with clear state machine definition based on YARN 

[jira] [Updated] (HDFS-7521) Refactor DN state management

2014-12-12 Thread Ming Ma (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7521?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ming Ma updated HDFS-7521:
--
Description: 
There are two aspects w.r.t. DN state management in NN.

* State machine management within active NN
NN maintains the state of each data node regarding whether it is running or 
being decommissioned, but the state machine isn’t well defined, and we have 
dealt with corner-case bugs in this area. It would be useful to refactor the 
code to use a clear state machine definition that defines events, available 
states, and the actions taken on state transitions. This has two benefits.
** It makes it easy to define the correctness of DN state management. 
Currently some state transitions aren't defined in the code. For example, if 
admins remove a node from the include host file while the node is being 
decommissioned, it will be transitioned to both DEAD and DECOMM_IN_PROGRESS, 
which might not be the intention. With a state machine definition, we can 
identify this case.
** It makes it easy to add a new DN state later. For example, people have 
discussed a new “maintenance” state for DNs to support the scenario where 
admins need to take a machine/rack down for 30 minutes for repair.

We can refactor the DN state handling with a clear state machine definition 
based on the YARN state machine components.



* State machine consistency between active and standby NN
Another dimension of state machine management is consistency across NN pairs. 
We have dealt with bugs caused by the active and standby NNs seeing different 
sets of live nodes. The current design has each NN manage its own state based 
on the events it receives; for example, DNs send heartbeats to both NNs, and 
admins issue decommission commands to both NNs. An alternative design could be 
to have ZK manage the state.

Thoughts?

  was:
There are two aspects w.r.t. DN state management in NN.

* State machine management within active NN
NN maintains states of each data node regarding whether it is running or being 
decommissioned. But the state machine isn’t well defined. We have dealt with 
some corner case bug in this area. It will be useful if we can refactor the 
code to use clear state machine definition that define events, available states 
and actions for state transitions. It has these benefits.

** Make it easy to define correctness of DN state management. Currently some of 
the state transitions aren't defined in the code. For example, if admins remove 
a node from include host file while the node is being decommissioned, it will 
be transitioned to DEAD and DECOMM_IN_PROGRESS. That might not be the 
intention. If we have state machine definition, we can identify this case.

** Make it easy to add new state for DN later. For example, people discussed 
about new “maintenance” state for DN to support the scenario where admins need 
to take the machine/rack down for 30 minutes for repair.

We can refactor DN with clear state machine definition based on YARN state 
related components.


* State machine consistency between active and standby NN

Another dimension of state machine management is consistency across NN pairs. 
We have dealt with bugs due to different live nodes between active NN and 
standby NN. Current design is to have each NN manage its own state based on the 
events it receives. For example, DNs will send heartbeat to both NNs; admins 
will issue decommission commands to both NNs. Alternative design approach we 
discuss is to have ZK manage the state.

Thoughts?


> Refactor DN state management
> 
>
> Key: HDFS-7521
> URL: https://issues.apache.org/jira/browse/HDFS-7521
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Ming Ma
>
> There are two aspects w.r.t. DN state management in NN.
> * State machine management within active NN
> NN maintains states of each data node regarding whether it is running or 
> being decommissioned. But the state machine isn’t well defined. We have dealt 
> with some corner case bug in this area. It will be useful if we can refactor 
> the code to use clear state machine definition that define events, available 
> states and actions for state transitions. It has these benefits.
> ** Make it easy to define correctness of DN state management. Currently some 
> of the state transitions aren't defined in the code. For example, if admins 
> remove a node from include host file while the node is being decommissioned, 
> it will be transitioned to DEAD and DECOMM_IN_PROGRESS. That might not be the 
> intention. If we have state machine definition, we can identify this case.
> ** Make it easy to add new state for DN later. For example, people discussed 
> about new “maintenance” state for DN to support the scenario where admins 
> need to take the machine/rack down for 30 minutes for repair.
> We can refactor DN with clear state machine definition based

[jira] [Updated] (HDFS-7521) Refactor DN state management

2014-12-12 Thread Ming Ma (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7521?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ming Ma updated HDFS-7521:
--
Description: 
There are two aspects w.r.t. DN state management in NN.

* State machine management within active NN
NN maintains the state of each data node regarding whether it is running or 
being decommissioned, but the state machine isn’t well defined, and we have 
dealt with corner-case bugs in this area. It would be useful to refactor the 
code to use a clear state machine definition that defines events, available 
states, and the actions taken on state transitions. This has two benefits.
** It makes it easy to define the correctness of DN state management. 
Currently some state transitions aren't defined in the code. For example, if 
admins remove a node from the include host file while the node is being 
decommissioned, it will be transitioned to both DEAD and DECOMM_IN_PROGRESS, 
which might not be the intention. With a state machine definition, we can 
identify this case.
** It makes it easy to add a new DN state later. For example, people have 
discussed a new “maintenance” state for DNs to support the scenario where 
admins need to take a machine/rack down for 30 minutes for repair.
We can refactor the DN state handling with a clear state machine definition 
based on the YARN state machine components.


* State machine consistency between active and standby NN
Another dimension of state machine management is consistency across NN pairs. 
We have dealt with bugs caused by the active and standby NNs seeing different 
sets of live nodes. The current design has each NN manage its own state based 
on the events it receives; for example, DNs send heartbeats to both NNs, and 
admins issue decommission commands to both NNs. An alternative design could be 
to have ZK manage the state.

Thoughts?

  was:
There are two aspects w.r.t. DN state management in NN.

* State machine management within active NN
NN maintains states of each data node regarding whether it is running or being 
decommissioned. But the state machine isn’t well defined. We have dealt with 
some corner case bug in this area. It will be useful if we can refactor the 
code to use clear state machine definition that define events, available states 
and actions for state transitions. It has these benefits.
** Make it easy to define correctness of DN state management. Currently some of 
the state transitions aren't defined in the code. For example, if admins remove 
a node from include host file while the node is being decommissioned, it will 
be transitioned to DEAD and DECOMM_IN_PROGRESS. That might not be the 
intention. If we have state machine definition, we can identify this case.
** Make it easy to add new state for DN later. For example, people discussed 
about new “maintenance” state for DN to support the scenario where admins need 
to take the machine/rack down for 30 minutes for repair.

We can refactor DN with clear state machine definition based on YARN state 
related components.



* State machine consistency between active and standby NN
Another dimension of state machine management is consistency across NN pairs. 
We have dealt with bugs due to different live nodes between active NN and 
standby NN. Current design is to have each NN manage its own state based on the 
events it receives. For example, DNs will send heartbeat to both NNs; admins 
will issue decommission commands to both NNs. Alternative design approach could 
be to have ZK manage the state.

Thoughts?


> Refactor DN state management
> 
>
> Key: HDFS-7521
> URL: https://issues.apache.org/jira/browse/HDFS-7521
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Ming Ma
>
> There are two aspects w.r.t. DN state management in NN.
> * State machine management within active NN
> NN maintains states of each data node regarding whether it is running or 
> being decommissioned. But the state machine isn’t well defined. We have dealt 
> with some corner case bug in this area. It will be useful if we can refactor 
> the code to use clear state machine definition that define events, available 
> states and actions for state transitions. It has these benefits.
> ** Make it easy to define correctness of DN state management. Currently some 
> of the state transitions aren't defined in the code. For example, if admins 
> remove a node from include host file while the node is being decommissioned, 
> it will be transitioned to DEAD and DECOMM_IN_PROGRESS. That might not be the 
> intention. If we have state machine definition, we can identify this case.
> ** Make it easy to add new state for DN later. For example, people discussed 
> about new “maintenance” state for DN to support the scenario where admins 
> need to take the machine/rack down for 30 minutes for repair.
> We can refactor DN with clear state machine definition based on YARN 

[jira] [Created] (HDFS-7521) Refactor DN state management

2014-12-12 Thread Ming Ma (JIRA)
Ming Ma created HDFS-7521:
-

 Summary: Refactor DN state management
 Key: HDFS-7521
 URL: https://issues.apache.org/jira/browse/HDFS-7521
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Ming Ma


There are two aspects w.r.t. DN state management in NN.

* State machine management within active NN
NN maintains the state of each data node regarding whether it is running or 
being decommissioned, but the state machine isn’t well defined, and we have 
dealt with corner-case bugs in this area. It would be useful to refactor the 
code to use a clear state machine definition that defines events, available 
states, and the actions taken on state transitions. This has two benefits.

** It makes it easy to define the correctness of DN state management. 
Currently some state transitions aren't defined in the code. For example, if 
admins remove a node from the include host file while the node is being 
decommissioned, it will be transitioned to both DEAD and DECOMM_IN_PROGRESS, 
which might not be the intention. With a state machine definition, we can 
identify this case.

** It makes it easy to add a new DN state later. For example, people have 
discussed a new “maintenance” state for DNs to support the scenario where 
admins need to take a machine/rack down for 30 minutes for repair.

We can refactor the DN state handling with a clear state machine definition 
based on the YARN state machine components.


* State machine consistency between active and standby NN

Another dimension of state machine management is consistency across NN pairs. 
We have dealt with bugs caused by the active and standby NNs seeing different 
sets of live nodes. The current design has each NN manage its own state based 
on the events it receives; for example, DNs send heartbeats to both NNs, and 
admins issue decommission commands to both NNs. An alternative design we have 
discussed is to have ZK manage the state.

Thoughts?
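A toy sketch of what an explicit transition table buys: undefined transitions fail loudly instead of silently combining states. The states, events, and hand-rolled table here are all hypothetical; a real refactor would presumably reuse YARN's StateMachineFactory rather than this simplified form.

```java
import java.util.EnumMap;
import java.util.Map;

// Hypothetical DN state machine with an explicit, reviewable transition table.
class DnStateMachine {
  enum State { LIVE, DECOMMISSION_IN_PROGRESS, DECOMMISSIONED, DEAD }
  enum Event { START_DECOMMISSION, DECOMMISSION_DONE,
               HEARTBEAT_LOST, REMOVED_FROM_INCLUDES }

  private final Map<State, Map<Event, State>> table = new EnumMap<>(State.class);
  private State current = State.LIVE;

  DnStateMachine() {
    addTransition(State.LIVE, Event.START_DECOMMISSION,
                  State.DECOMMISSION_IN_PROGRESS);
    addTransition(State.LIVE, Event.HEARTBEAT_LOST, State.DEAD);
    addTransition(State.DECOMMISSION_IN_PROGRESS, Event.DECOMMISSION_DONE,
                  State.DECOMMISSIONED);
    // The corner case from the description gets an explicit entry instead of
    // an accidental combination of DEAD and DECOMM_IN_PROGRESS:
    addTransition(State.DECOMMISSION_IN_PROGRESS, Event.REMOVED_FROM_INCLUDES,
                  State.DEAD);
  }

  private void addTransition(State from, Event ev, State to) {
    table.computeIfAbsent(from, s -> new EnumMap<>(Event.class)).put(ev, to);
  }

  // Any transition not in the table is rejected, which makes missing cases
  // visible during review and testing.
  State handle(Event ev) {
    State next = table.getOrDefault(current, Map.of()).get(ev);
    if (next == null) {
      throw new IllegalStateException("no transition for " + ev + " in " + current);
    }
    current = next;
    return current;
  }
}
```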



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7430) Refactor the BlockScanner to use O(1) memory and use multiple threads

2014-12-12 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7430?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14245106#comment-14245106
 ] 

Colin Patrick McCabe commented on HDFS-7430:


bq. Any reason BlockScanner is contained in DataNode rather than FsDatasetImpl? 
It seems like their lifecycles are the same. Might also move to the fsdataset 
package if you agree.

I think the logic is sufficiently general that it could be used in {{FsDataset}} 
implementations other than {{FsDatasetImpl}}, so I'd prefer to keep it 
in {{DataNode}}.  The block iterator abstracts away the details of reading the 
blocks from a volume, and could be implemented by other {{Volume}} 
implementations.  Actually I think the abstraction is better now because we cut 
the link to reading paths.

bq. There's a bunch of time conversion scattered about, it'd be better to use 
TimeUnit.MILLISECONDS.toHours(millis) and similar where we can. I like this 
form better than TimeUnit#convert since it's very obvious.

Good idea.
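For reference, the two equivalent conversion forms being compared (plain JDK, nothing HDFS-specific):

```java
import java.util.concurrent.TimeUnit;

// Plain-JDK illustration of the conversion style suggested in the review.
class TimeUnitDemo {
    static long hoursFromMillisDirect(long millis) {
        // Direct form: the source and target units are obvious at the call site.
        return TimeUnit.MILLISECONDS.toHours(millis);
    }

    static long hoursFromMillisViaConvert(long millis) {
        // Equivalent TimeUnit#convert form, which reads less clearly because
        // the direction of the conversion is easy to get backwards.
        return TimeUnit.HOURS.convert(millis, TimeUnit.MILLISECONDS);
    }
}
```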

bq. Javadoc, need to put a {{<p>}} tag to actually get line-breaks.

added... hopefully I got all the spots

bq. Can add a log when removeVolumeScanner is called and not enabled

added

I have to close this window now, will address the other comments in a bit

> Refactor the BlockScanner to use O(1) memory and use multiple threads
> -
>
> Key: HDFS-7430
> URL: https://issues.apache.org/jira/browse/HDFS-7430
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 2.7.0
>Reporter: Colin Patrick McCabe
>Assignee: Colin Patrick McCabe
> Attachments: HDFS-7430.002.patch, HDFS-7430.003.patch, 
> HDFS-7430.004.patch, HDFS-7430.005.patch, memory.png
>
>
> We should update the BlockScanner to use a constant amount of memory by 
> keeping track of what block was scanned last, rather than by tracking the 
> scan status of all blocks in memory.  Also, instead of having just one 
> thread, we should have a verification thread per hard disk (or other volume), 
> scanning at a configurable rate of bytes per second.
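A minimal sketch of the per-volume throttling idea, assuming a simple bytes-per-second budget (class and method names are illustrative, not from the patch): each volume's scanner thread records how many bytes it has read and sleeps whenever it runs ahead of the configured rate.

```java
// Illustrative rate limiter for a per-volume scanner thread; not the
// actual BlockScanner implementation.
class ScanThrottle {
    private final long bytesPerSecond;
    private long periodStartNanos;
    private long bytesThisPeriod;

    ScanThrottle(long bytesPerSecond) {
        this.bytesPerSecond = bytesPerSecond;
        this.periodStartNanos = System.nanoTime();
    }

    /** Record a scanned chunk; sleep if we are ahead of the configured rate. */
    void recordScanned(long bytes) throws InterruptedException {
        bytesThisPeriod += bytes;
        long elapsedNanos = System.nanoTime() - periodStartNanos;
        // Time the scanned bytes *should* have taken at the target rate.
        long targetNanos = bytesThisPeriod * 1_000_000_000L / bytesPerSecond;
        if (targetNanos > elapsedNanos) {
            Thread.sleep((targetNanos - elapsedNanos) / 1_000_000L);
            elapsedNanos = System.nanoTime() - periodStartNanos;
        }
        if (elapsedNanos >= 1_000_000_000L) {  // reset the accounting window
            periodStartNanos = System.nanoTime();
            bytesThisPeriod = 0;
        }
    }
}
```

Because the throttle state is one counter and two timestamps per volume, memory stays O(1) regardless of how many blocks the volume holds.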



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-7520) checknative should display a nicer error message when openssl support is not compiled in

2014-12-12 Thread Colin Patrick McCabe (JIRA)
Colin Patrick McCabe created HDFS-7520:
--

 Summary: checknative should display a nicer error message when 
openssl support is not compiled in
 Key: HDFS-7520
 URL: https://issues.apache.org/jira/browse/HDFS-7520
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.6.0
Reporter: Colin Patrick McCabe


checknative should display a nicer error message when openssl support is not 
compiled in.  Currently, it displays this:

{code}
[cmccabe@keter hadoop]$ hadoop checknative
14/12/12 14:08:43 INFO bzip2.Bzip2Factory: Successfully loaded & initialized 
native-bzip2 library system-native
14/12/12 14:08:43 INFO zlib.ZlibFactory: Successfully loaded & initialized 
native-zlib library
Native library checking:
hadoop:  true /usr/lib/hadoop/lib/native/libhadoop.so.1.0.0
zlib:true /lib64/libz.so.1
snappy:  true /usr/lib64/libsnappy.so.1
lz4: true revision:99
bzip2:   true /lib64/libbz2.so.1
openssl: false org.apache.hadoop.crypto.OpensslCipher.initIDs()V
{code}

Instead, we should display something like this, if openssl is not supported by 
the current build:
{code}
openssl: false Hadoop was built without openssl support.
{code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7426) Change nntop JMX format to be a JSON blob

2014-12-12 Thread Maysam Yabandeh (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7426?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14245096#comment-14245096
 ] 

Maysam Yabandeh commented on HDFS-7426:
---

Thanks to [~cmccabe] for the review, and thank you [~andrew.wang] for taking 
care of this jira.

> Change nntop JMX format to be a JSON blob
> -
>
> Key: HDFS-7426
> URL: https://issues.apache.org/jira/browse/HDFS-7426
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Affects Versions: 2.7.0
>Reporter: Andrew Wang
>Assignee: Andrew Wang
> Fix For: 2.7.0
>
> Attachments: hdfs-7426.001.patch, hdfs-7426.002.patch, 
> hdfs-7426.003.patch, hdfs-7426.004.patch, hdfs-7426.005.patch
>
>
> After discussion with [~maysamyabandeh], we think we can adjust the JMX 
> output to instead be a richer JSON blob. This should be easier to parse and 
> also be more informative.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7494) Checking of closed in DFSInputStream#pread() should be protected by synchronization

2014-12-12 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7494?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HDFS-7494:
-
Status: Patch Available  (was: Open)

> Checking of closed in DFSInputStream#pread() should be protected by 
> synchronization
> ---
>
> Key: HDFS-7494
> URL: https://issues.apache.org/jira/browse/HDFS-7494
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Ted Yu
>Assignee: Ted Yu
>Priority: Minor
> Attachments: hdfs-7494-001.patch, hdfs-7494-002.patch
>
>
> {code}
>   private int pread(long position, byte[] buffer, int offset, int length)
>   throws IOException {
> // sanity checks
> dfsClient.checkOpen();
> if (closed) {
> {code}
> Checking of closed should be protected by holding lock on 
> "DFSInputStream.this"



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7494) Checking of closed in DFSInputStream#pread() should be protected by synchronization

2014-12-12 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7494?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HDFS-7494:
-
Attachment: hdfs-7494-002.patch

> Checking of closed in DFSInputStream#pread() should be protected by 
> synchronization
> ---
>
> Key: HDFS-7494
> URL: https://issues.apache.org/jira/browse/HDFS-7494
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Ted Yu
>Assignee: Ted Yu
>Priority: Minor
> Attachments: hdfs-7494-001.patch, hdfs-7494-002.patch
>
>
> {code}
>   private int pread(long position, byte[] buffer, int offset, int length)
>   throws IOException {
> // sanity checks
> dfsClient.checkOpen();
> if (closed) {
> {code}
> Checking of closed should be protected by holding lock on 
> "DFSInputStream.this"



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7426) Change nntop JMX format to be a JSON blob

2014-12-12 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7426?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14245080#comment-14245080
 ] 

Hudson commented on HDFS-7426:
--

FAILURE: Integrated in Hadoop-trunk-Commit #6711 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/6711/])
HDFS-7426. Change nntop JMX format to be a JSON blob. (wang: rev 
fa7b9248e415c04bb555772f44fadaf8d9f34974)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/top/window/RollingWindowManager.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/top/metrics/TopMetrics.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/metrics/FSNamesystemMBean.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/top/window/TestRollingWindowManager.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/top/TopAuditLogger.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestNameNodeMXBean.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/top/TopConf.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFSNamesystemMBean.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/metrics/TestNameNodeMetrics.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


> Change nntop JMX format to be a JSON blob
> -
>
> Key: HDFS-7426
> URL: https://issues.apache.org/jira/browse/HDFS-7426
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Affects Versions: 2.7.0
>Reporter: Andrew Wang
>Assignee: Andrew Wang
> Fix For: 2.7.0
>
> Attachments: hdfs-7426.001.patch, hdfs-7426.002.patch, 
> hdfs-7426.003.patch, hdfs-7426.004.patch, hdfs-7426.005.patch
>
>
> After discussion with [~maysamyabandeh], we think we can adjust the JMX 
> output to instead be a richer JSON blob. This should be easier to parse and 
> also be more informative.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7426) Change nntop JMX format to be a JSON blob

2014-12-12 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7426?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HDFS-7426:
--
   Resolution: Fixed
Fix Version/s: 2.7.0
   Status: Resolved  (was: Patch Available)

Thanks for the reviews Colin and Maysam, I've committed this to trunk and 
branch-2.

> Change nntop JMX format to be a JSON blob
> -
>
> Key: HDFS-7426
> URL: https://issues.apache.org/jira/browse/HDFS-7426
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Affects Versions: 2.7.0
>Reporter: Andrew Wang
>Assignee: Andrew Wang
> Fix For: 2.7.0
>
> Attachments: hdfs-7426.001.patch, hdfs-7426.002.patch, 
> hdfs-7426.003.patch, hdfs-7426.004.patch, hdfs-7426.005.patch
>
>
> After discussion with [~maysamyabandeh], we think we can adjust the JMX 
> output to instead be a richer JSON blob. This should be easier to parse and 
> also be more informative.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7495) Lock inversion in DFSInputStream#getBlockAt()

2014-12-12 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7495?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HDFS-7495:
-
Status: Patch Available  (was: Open)

> Lock inversion in DFSInputStream#getBlockAt()
> -
>
> Key: HDFS-7495
> URL: https://issues.apache.org/jira/browse/HDFS-7495
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Ted Yu
>Priority: Minor
> Attachments: hdfs-7495-001.patch
>
>
> There're two locks: one on DFSInputStream.this , one on 
> DFSInputStream.infoLock
> Normally the lock is obtained on DFSInputStream.this first, then on 
> DFSInputStream.infoLock
> However, such order is not observed in DFSInputStream#getBlockAt() :
> {code}
> synchronized(infoLock) {
> ...
>   if (updatePosition) {
> // synchronized not strictly needed, since we only get here
> // from synchronized caller methods
> synchronized(this) {
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7495) Lock inversion in DFSInputStream#getBlockAt()

2014-12-12 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7495?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HDFS-7495:
-
Attachment: hdfs-7495-001.patch

Looking a bit closer, updatePosition is not modified within getBlockAt().
Its value is only true when getBlockAt() is called from blockSeekTo(), which 
already holds the lock on this.
Here is a proposed patch which makes this part of the code more readable.
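To illustrate the ordering fix in the abstract (simplified stand-in code, not the actual DFSInputStream): once every code path acquires the locks in a single global order, here always {{this}} before {{infoLock}}, the inversion cannot deadlock.

```java
// Simplified stand-in for the two-lock pattern; field names mirror the
// JIRA discussion but this is not the real DFSInputStream code.
class LockOrderDemo {
    private final Object infoLock = new Object();
    private long position;           // guarded by this
    private String blockInfo = "block-0";  // guarded by infoLock

    // Anti-pattern (what getBlockAt did): synchronized(infoLock) first and
    // synchronized(this) inside it. A thread locking in the opposite order
    // can then deadlock with it. Shown only as a comment, never executed:
    //   synchronized (infoLock) { synchronized (this) { ... } }

    // Fixed pattern: the lock on "this" is taken first, so the order
    // this -> infoLock is consistent everywhere.
    synchronized void seekTo(long newPos) {
        position = newPos;
        synchronized (infoLock) {
            blockInfo = "block-" + (newPos / 128);
        }
    }

    synchronized long getPosition() { return position; }

    synchronized String getBlockInfo() {
        synchronized (infoLock) {   // same this -> infoLock order
            return blockInfo;
        }
    }
}
```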

> Lock inversion in DFSInputStream#getBlockAt()
> -
>
> Key: HDFS-7495
> URL: https://issues.apache.org/jira/browse/HDFS-7495
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Ted Yu
>Priority: Minor
> Attachments: hdfs-7495-001.patch
>
>
> There're two locks: one on DFSInputStream.this , one on 
> DFSInputStream.infoLock
> Normally the lock is obtained on DFSInputStream.this first, then on 
> DFSInputStream.infoLock
> However, such order is not observed in DFSInputStream#getBlockAt() :
> {code}
> synchronized(infoLock) {
> ...
>   if (updatePosition) {
> // synchronized not strictly needed, since we only get here
> // from synchronized caller methods
> synchronized(this) {
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7495) Lock inversion in DFSInputStream#getBlockAt()

2014-12-12 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7495?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14245064#comment-14245064
 ] 

Colin Patrick McCabe commented on HDFS-7495:


ugh, I was afraid we'd have something like this.  Good find, Ted.

Looks like we can just move the updatePosition segment out of the 
synchronized(infoLock) section in getBlockAt... or is there more?

> Lock inversion in DFSInputStream#getBlockAt()
> -
>
> Key: HDFS-7495
> URL: https://issues.apache.org/jira/browse/HDFS-7495
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Ted Yu
>Priority: Minor
>
> There're two locks: one on DFSInputStream.this , one on 
> DFSInputStream.infoLock
> Normally the lock is obtained on DFSInputStream.this first, then on 
> DFSInputStream.infoLock
> However, such order is not observed in DFSInputStream#getBlockAt() :
> {code}
> synchronized(infoLock) {
> ...
>   if (updatePosition) {
> // synchronized not strictly needed, since we only get here
> // from synchronized caller methods
> synchronized(this) {
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-7519) Support for a Reconfigurable NameNode

2014-12-12 Thread Mike Yoder (JIRA)
Mike Yoder created HDFS-7519:


 Summary: Support for a Reconfigurable NameNode
 Key: HDFS-7519
 URL: https://issues.apache.org/jira/browse/HDFS-7519
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: namenode
Affects Versions: 2.6.0
Reporter: Mike Yoder


The DataNode gained the use of the "Reconfigurable" code in HDFS-6727.  The 
purpose of this jira is to also use the Reconfigurable code in the Namenode.

Use cases:
* Take the variety of refresh-something-in-the-namenode RPCs and bring them all 
under one roof
* Allow for future reconfiguration of parameters, and parameters that plugins 
might make use of




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7494) Checking of closed in DFSInputStream#pread() should be protected by synchronization

2014-12-12 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7494?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14245059#comment-14245059
 ] 

Colin Patrick McCabe commented on HDFS-7494:


{code}
if (!closed.compareAndSet(false, true)) {
  DFSClient.LOG.warn("DFSInputStream has been closed already");
}
{code}

We should return here, not keep going

+1 when that's addressed
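A minimal sketch of the addressed version, assuming an {{AtomicBoolean}} closed flag and stand-in cleanup work (not the exact patch): the second and later calls log and return early, so the cleanup path runs exactly once.

```java
import java.util.concurrent.atomic.AtomicBoolean;

// Illustrative close() pattern from the review comment; cleanup work is a
// stand-in counter, not the real DFSInputStream teardown.
class StreamCloseDemo {
    private final AtomicBoolean closed = new AtomicBoolean(false);
    private int closeCount = 0;

    void close() {
        // First close() wins the compareAndSet; any later call logs and
        // returns immediately instead of running the cleanup path again.
        if (!closed.compareAndSet(false, true)) {
            System.err.println("DFSInputStream has been closed already");
            return;
        }
        closeCount++;  // stand-in for releasing buffers, block readers, etc.
    }

    int getCloseCount() { return closeCount; }
}
```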

> Checking of closed in DFSInputStream#pread() should be protected by 
> synchronization
> ---
>
> Key: HDFS-7494
> URL: https://issues.apache.org/jira/browse/HDFS-7494
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Ted Yu
>Assignee: Ted Yu
>Priority: Minor
> Attachments: hdfs-7494-001.patch
>
>
> {code}
>   private int pread(long position, byte[] buffer, int offset, int length)
>   throws IOException {
> // sanity checks
> dfsClient.checkOpen();
> if (closed) {
> {code}
> Checking of closed should be protected by holding lock on 
> "DFSInputStream.this"



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7426) Change nntop JMX format to be a JSON blob

2014-12-12 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7426?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14245024#comment-14245024
 ] 

Colin Patrick McCabe commented on HDFS-7426:


bq. e.g., a loop based on an iterator that is also being modified concurrently. 
anyhow, the point was to make the purpose of the try/catch clear for 
future contributors, and I think such a try/catch would be best placed inside the 
audit logger.

Thanks for the explanation... makes sense.

bq. Moved the int/long casting up to TopConf so we can use ints internally 
everywhere.

good idea.

bq. Added some tests for when nntop is disabled, or no windows configured. 
Looks like the null is okay, the KV pair just doesn't show up at all in the 
output.

thanks

bq. I didn't see any whitespace changes to avoid in TestNamenodeMXBean, but 
maybe I just missed it.

yeah, I don't see them in v2

Findbugs and javac warnings are bogus.

+1, thanks Andrew and Maysam.

> Change nntop JMX format to be a JSON blob
> -
>
> Key: HDFS-7426
> URL: https://issues.apache.org/jira/browse/HDFS-7426
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Affects Versions: 2.7.0
>Reporter: Andrew Wang
>Assignee: Andrew Wang
> Attachments: hdfs-7426.001.patch, hdfs-7426.002.patch, 
> hdfs-7426.003.patch, hdfs-7426.004.patch, hdfs-7426.005.patch
>
>
> After discussion with [~maysamyabandeh], we think we can adjust the JMX 
> output to instead be a richer JSON blob. This should be easier to parse and 
> also be more informative.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7513) HDFS inotify: add defaultBlockSize to CreateEvent

2014-12-12 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7513?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14245015#comment-14245015
 ] 

Colin Patrick McCabe commented on HDFS-7513:


bq. Maybe drop set from setDefaultBlockSize for consistency with other methods?

ok

will commit pending jenkins, thanks for the review!

> HDFS inotify: add defaultBlockSize to CreateEvent
> -
>
> Key: HDFS-7513
> URL: https://issues.apache.org/jira/browse/HDFS-7513
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Affects Versions: 2.6.0
>Reporter: Colin Patrick McCabe
>Assignee: Colin Patrick McCabe
> Attachments: HDFS-7513.001.patch, HDFS-7513.002.patch, 
> HDFS-7513.003.patch
>
>
> HDFS inotify: add defaultBlockSize to CreateEvent



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7516) Fix findbugs warnings in hdfs-nfs project

2014-12-12 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7516?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14245009#comment-14245009
 ] 

Hadoop QA commented on HDFS-7516:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12686969/HDFS-7516.001.patch
  against trunk revision c78e3a7.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-common-project/hadoop-nfs.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/9029//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/9029//console

This message is automatically generated.

> Fix findbugs warnings in hdfs-nfs project
> -
>
> Key: HDFS-7516
> URL: https://issues.apache.org/jira/browse/HDFS-7516
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: nfs
>Affects Versions: 2.7.0
>Reporter: Brandon Li
>Assignee: Brandon Li
> Attachments: HDFS-7516.001.patch, findbugsXml.xml
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7513) HDFS inotify: add defaultBlockSize to CreateEvent

2014-12-12 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7513?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe updated HDFS-7513:
---
Attachment: HDFS-7513.003.patch

> HDFS inotify: add defaultBlockSize to CreateEvent
> -
>
> Key: HDFS-7513
> URL: https://issues.apache.org/jira/browse/HDFS-7513
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Affects Versions: 2.6.0
>Reporter: Colin Patrick McCabe
>Assignee: Colin Patrick McCabe
> Attachments: HDFS-7513.001.patch, HDFS-7513.002.patch, 
> HDFS-7513.003.patch
>
>
> HDFS inotify: add defaultBlockSize to CreateEvent



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7513) HDFS inotify: add defaultBlockSize to CreateEvent

2014-12-12 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7513?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe updated HDFS-7513:
---
Attachment: (was: HDFS-7513.003.patch)

> HDFS inotify: add defaultBlockSize to CreateEvent
> -
>
> Key: HDFS-7513
> URL: https://issues.apache.org/jira/browse/HDFS-7513
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Affects Versions: 2.6.0
>Reporter: Colin Patrick McCabe
>Assignee: Colin Patrick McCabe
> Attachments: HDFS-7513.001.patch, HDFS-7513.002.patch
>
>
> HDFS inotify: add defaultBlockSize to CreateEvent



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7513) HDFS inotify: add defaultBlockSize to CreateEvent

2014-12-12 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7513?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe updated HDFS-7513:
---
Attachment: HDFS-7513.003.patch

> HDFS inotify: add defaultBlockSize to CreateEvent
> -
>
> Key: HDFS-7513
> URL: https://issues.apache.org/jira/browse/HDFS-7513
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Affects Versions: 2.6.0
>Reporter: Colin Patrick McCabe
>Assignee: Colin Patrick McCabe
> Attachments: HDFS-7513.001.patch, HDFS-7513.002.patch
>
>
> HDFS inotify: add defaultBlockSize to CreateEvent



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7506) Consolidate implementation of setting inode attributes into a single class

2014-12-12 Thread Haohui Mai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7506?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haohui Mai updated HDFS-7506:
-
Attachment: HDFS-7506.003.patch

> Consolidate implementation of setting inode attributes into a single class
> --
>
> Key: HDFS-7506
> URL: https://issues.apache.org/jira/browse/HDFS-7506
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Haohui Mai
>Assignee: Haohui Mai
> Attachments: HDFS-7506.000.patch, HDFS-7506.001.patch, 
> HDFS-7506.001.patch, HDFS-7506.002.patch, HDFS-7506.003.patch
>
>
> This jira proposes to consolidate the implementation of setting inode 
> attributes (i.e., times, permissions, owner, etc.) to a single class for 
> better maintainability.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (HDFS-7271) Find a way to make encryption zone deletion work with HDFS trash.

2014-12-12 Thread Justin Kestelyn (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7271?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Justin Kestelyn reassigned HDFS-7271:
-

Assignee: Justin Kestelyn  (was: Yi Liu)

> Find a way to make encryption zone deletion work with HDFS trash.
> -
>
> Key: HDFS-7271
> URL: https://issues.apache.org/jira/browse/HDFS-7271
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: encryption
>Affects Versions: 2.6.0
>Reporter: Yi Liu
>Assignee: Justin Kestelyn
>
> Currently when HDFS trash is enabled, deletion of an encryption zone fails 
> with:
> {quote}
> rmr: Failed to move to trash: ... can't be moved from an encryption zone.
> {quote}
> A simple way is to add an ignore-trash flag to the fs rm operation.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7516) Fix findbugs warnings in hdfs-nfs project

2014-12-12 Thread Brandon Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7516?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brandon Li updated HDFS-7516:
-
Status: Patch Available  (was: Open)

> Fix findbugs warnings in hdfs-nfs project
> -
>
> Key: HDFS-7516
> URL: https://issues.apache.org/jira/browse/HDFS-7516
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: nfs
>Affects Versions: 2.7.0
>Reporter: Brandon Li
>Assignee: Brandon Li
> Attachments: HDFS-7516.001.patch, findbugsXml.xml
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7516) Fix findbugs warnings in hdfs-nfs project

2014-12-12 Thread Brandon Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7516?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brandon Li updated HDFS-7516:
-
Attachment: HDFS-7516.001.patch

> Fix findbugs warnings in hdfs-nfs project
> -
>
> Key: HDFS-7516
> URL: https://issues.apache.org/jira/browse/HDFS-7516
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: nfs
>Affects Versions: 2.7.0
>Reporter: Brandon Li
>Assignee: Brandon Li
> Attachments: HDFS-7516.001.patch, findbugsXml.xml
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7059) HAadmin transtionToActive with forceActive option can show confusing message.

2014-12-12 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7059?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14244976#comment-14244976
 ] 

Hudson commented on HDFS-7059:
--

SUCCESS: Integrated in Hadoop-trunk-Commit #6709 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/6709/])
HDFS-7059. Avoid resolving path multiple times. Contributed by Jing Zhao. 
(jing9: rev c78e3a7cdd10c40454e9acb06986ba6d8573cb19)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodesInPath.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/snapshot/TestSnapshotReplication.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirectory.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/LeaseManager.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSImageFormat.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirRenameOp.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestSnapshotPathINodes.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/snapshot/TestOpenFilesWithSnapshot.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirMkdirOp.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirXAttrOp.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirAclOp.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/FSAclBaseTest.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirStatAndListingOp.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLogLoader.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFsck.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestLeaseManager.java


> HAadmin transtionToActive with forceActive option can show confusing message.
> -
>
> Key: HDFS-7059
> URL: https://issues.apache.org/jira/browse/HDFS-7059
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 2.0.0-alpha, 3.0.0
>Reporter: Rushabh S Shah
>Assignee: Rushabh S Shah
>Priority: Minor
> Fix For: 2.6.0
>
> Attachments: HDFS-7059.patch
>
>
> Ran into this confusing message on our local HA setup.
> One of the namenodes was down and the other was in standby mode.
> The namenode was not able to come out of safe mode, so we ran 
> transitionToActive with the forceActive switch enabled.
> Due to the change in HDFS-2949, it will try connecting to all the namenodes to 
> see whether they are active or not.
> But since the other namenode is down, it will try to connect to that namenode 
> 'ipc.client.connect.max.retries' times.
> Every time it fails to connect, it logs a message:
> INFO ipc.Client: Retrying connect to server: . Already tried 0 
> time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=50, 
> sleepTime=1000 MILLISECONDS)
> Since the number of retries in our configuration is 50, it will show this 
> message 50 times.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7516) Fix findbugs warnings in hdfs-nfs project

2014-12-12 Thread Brandon Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7516?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brandon Li updated HDFS-7516:
-
Attachment: findbugsXml.xml

> Fix findbugs warnings in hdfs-nfs project
> -
>
> Key: HDFS-7516
> URL: https://issues.apache.org/jira/browse/HDFS-7516
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: nfs
>Affects Versions: 2.7.0
>Reporter: Brandon Li
>Assignee: Brandon Li
> Attachments: findbugsXml.xml
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7491) Add incremental blockreport latency to DN metrics

2014-12-12 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7491?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14244966#comment-14244966
 ] 

Hadoop QA commented on HDFS-7491:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12685855/HDFS-7491.patch
  against trunk revision 3681de2.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:red}-1 findbugs{color}.  The patch appears to introduce 1 new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-hdfs-project/hadoop-hdfs:

  org.apache.hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA
  
org.apache.hadoop.hdfs.server.blockmanagement.TestDatanodeManager

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/9028//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HDFS-Build/9028//artifact/patchprocess/newPatchFindbugsWarningshadoop-hdfs.html
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/9028//console

This message is automatically generated.

> Add incremental blockreport latency to DN metrics
> -
>
> Key: HDFS-7491
> URL: https://issues.apache.org/jira/browse/HDFS-7491
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Ming Ma
>Assignee: Ming Ma
>Priority: Minor
> Attachments: HDFS-7491.patch
>
>
> In a busy cluster, IBR processing could be delayed due to the NN FSNamesystem 
> lock, causing the NN to throw NotReplicatedYetException to DFSClient and thus 
> increasing the overall application latency.
> This will be taken care of when we address the NN FSNamesystem lock 
> contention issue.
> It would be useful if we could provide IBR latency metrics from the DN's point 
> of view.





[jira] [Updated] (HDFS-7509) Avoid resolving path multiple times

2014-12-12 Thread Jing Zhao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7509?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jing Zhao updated HDFS-7509:

   Resolution: Fixed
Fix Version/s: 2.7.0
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

The test failures should be unrelated; they passed on my local machine.

I've committed this to trunk and branch-2. Thanks Charles and Haohui for the 
review!

> Avoid resolving path multiple times
> ---
>
> Key: HDFS-7509
> URL: https://issues.apache.org/jira/browse/HDFS-7509
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Jing Zhao
>Assignee: Jing Zhao
> Fix For: 2.7.0
>
> Attachments: HDFS-7509.000.patch, HDFS-7509.001.patch, 
> HDFS-7509.002.patch, HDFS-7509.003.patch
>
>






[jira] [Commented] (HDFS-7509) Avoid resolving path multiple times

2014-12-12 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7509?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14244950#comment-14244950
 ] 

Hadoop QA commented on HDFS-7509:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12686926/HDFS-7509.003.patch
  against trunk revision 3681de2.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 6 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:red}-1 findbugs{color}.  The patch appears to introduce 1 new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-hdfs-project/hadoop-hdfs:

  
org.apache.hadoop.hdfs.server.namenode.ha.TestPipelinesFailover
  org.apache.hadoop.hdfs.TestDistributedFileSystem

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/9027//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HDFS-Build/9027//artifact/patchprocess/newPatchFindbugsWarningshadoop-hdfs.html
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/9027//console

This message is automatically generated.

> Avoid resolving path multiple times
> ---
>
> Key: HDFS-7509
> URL: https://issues.apache.org/jira/browse/HDFS-7509
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Jing Zhao
>Assignee: Jing Zhao
> Attachments: HDFS-7509.000.patch, HDFS-7509.001.patch, 
> HDFS-7509.002.patch, HDFS-7509.003.patch
>
>






[jira] [Commented] (HDFS-7514) TestTextCommand fails on Windows

2014-12-12 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7514?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14244929#comment-14244929
 ] 

Arpit Agarwal commented on HDFS-7514:
-

By the way, the test failure and Findbugs warning flagged by Jenkins were 
unrelated to the patch.

> TestTextCommand fails on Windows
> 
>
> Key: HDFS-7514
> URL: https://issues.apache.org/jira/browse/HDFS-7514
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Affects Versions: 2.6.0
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
> Fix For: 2.7.0
>
> Attachments: HDFS-7514.branch-2.01.patch, HDFS-7514.trunk.01.patch
>
>
> TestTextCommand fails on Windows
> *Error Message*
> {code}
> Pathname 
> /D:/w/hbk/hadoop-hdfs-project/hadoop-hdfs/target/test/data/testText/weather.avro
>  from 
> D:/w/hbk/hadoop-hdfs-project/hadoop-hdfs/target/test/data/testText/weather.avro
>  is not a valid DFS filename.
> {code}
> *Stacktrace*
> {code}
> java.lang.IllegalArgumentException: Pathname 
> /D:/w/hbk/hadoop-hdfs-project/hadoop-hdfs/target/test/data/testText/weather.avro
>  from 
> D:/w/hbk/hadoop-hdfs-project/hadoop-hdfs/target/test/data/testText/weather.avro
>  is not a valid DFS filename.
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem.getPathName(DistributedFileSystem.java:196)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem.access$000(DistributedFileSystem.java:105)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem$6.doCall(DistributedFileSystem.java:397)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem$6.doCall(DistributedFileSystem.java:393)
>   at 
> org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:393)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:337)
>   at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:908)
>   at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:889)
>   at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:786)
>   at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:775)
>   at 
> org.apache.hadoop.fs.shell.TestTextCommand.createAvroFile(TestTextCommand.java:113)
>   at 
> org.apache.hadoop.fs.shell.TestTextCommand.testDisplayForAvroFiles(TestTextCommand.java:76)
> {code}





[jira] [Commented] (HDFS-7056) Snapshot support for truncate

2014-12-12 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7056?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14244927#comment-14244927
 ] 

Hadoop QA commented on HDFS-7056:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12686918/HDFS-3107-HDFS-7056-combined.patch
  against trunk revision 3681de2.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 10 new 
or modified test files.

  {color:red}-1 javac{color}.  The applied patch generated 1221 javac 
compiler warnings (more than the trunk's current 0 warnings).

{color:red}-1 javadoc{color}.  The javadoc tool appears to have generated 
49 warning messages.
See 
https://builds.apache.org/job/PreCommit-HDFS-Build/9026//artifact/patchprocess/diffJavadocWarnings.txt
 for details.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:red}-1 findbugs{color}.  The patch appears to introduce 2 new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-hdfs-project/hadoop-hdfs:

  
org.apache.hadoop.hdfs.server.blockmanagement.TestDatanodeManager
  org.apache.hadoop.hdfs.server.namenode.TestCacheDirectives
  
org.apache.hadoop.hdfs.tools.offlineEditsViewer.TestOfflineEditsViewer
  org.apache.hadoop.hdfs.TestDistributedFileSystem

  The following test timeouts occurred in 
hadoop-hdfs-project/hadoop-hdfs:

org.apache.hadoop.hdfs.server.namenode.ha.TestPipelinesFailover

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/9026//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HDFS-Build/9026//artifact/patchprocess/newPatchFindbugsWarningshadoop-hdfs.html
Javac warnings: 
https://builds.apache.org/job/PreCommit-HDFS-Build/9026//artifact/patchprocess/diffJavacWarnings.txt
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/9026//console

This message is automatically generated.

> Snapshot support for truncate
> -
>
> Key: HDFS-7056
> URL: https://issues.apache.org/jira/browse/HDFS-7056
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Affects Versions: 3.0.0
>Reporter: Konstantin Shvachko
>Assignee: Plamen Jeliazkov
> Attachments: HDFS-3107-HDFS-7056-combined.patch, 
> HDFS-3107-HDFS-7056-combined.patch, HDFS-3107-HDFS-7056-combined.patch, 
> HDFS-3107-HDFS-7056-combined.patch, HDFS-3107-HDFS-7056-combined.patch, 
> HDFS-3107-HDFS-7056-combined.patch, HDFS-7056.patch, HDFS-7056.patch, 
> HDFS-7056.patch, HDFS-7056.patch, HDFS-7056.patch, HDFS-7056.patch, 
> HDFS-7056.patch, HDFSSnapshotWithTruncateDesign.docx
>
>
> Implementation of truncate in HDFS-3107 does not allow truncating files which 
> are in a snapshot. It is desirable to be able to truncate a file and still 
> keep its old state in the snapshot.





[jira] [Commented] (HDFS-7514) TestTextCommand fails on Windows

2014-12-12 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7514?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14244908#comment-14244908
 ] 

Hudson commented on HDFS-7514:
--

FAILURE: Integrated in Hadoop-trunk-Commit #6708 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/6708/])
HDFS-7514. TestTextCommand fails on Windows. (Arpit Agarwal) (arp: rev 
7784b10808c2146cde8025d56e80f042ec3581c6)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/shell/TestHdfsTextCommand.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


> TestTextCommand fails on Windows
> 
>
> Key: HDFS-7514
> URL: https://issues.apache.org/jira/browse/HDFS-7514
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Affects Versions: 2.6.0
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
> Fix For: 2.7.0
>
> Attachments: HDFS-7514.branch-2.01.patch, HDFS-7514.trunk.01.patch
>
>
> TestTextCommand fails on Windows
> *Error Message*
> {code}
> Pathname 
> /D:/w/hbk/hadoop-hdfs-project/hadoop-hdfs/target/test/data/testText/weather.avro
>  from 
> D:/w/hbk/hadoop-hdfs-project/hadoop-hdfs/target/test/data/testText/weather.avro
>  is not a valid DFS filename.
> {code}
> *Stacktrace*
> {code}
> java.lang.IllegalArgumentException: Pathname 
> /D:/w/hbk/hadoop-hdfs-project/hadoop-hdfs/target/test/data/testText/weather.avro
>  from 
> D:/w/hbk/hadoop-hdfs-project/hadoop-hdfs/target/test/data/testText/weather.avro
>  is not a valid DFS filename.
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem.getPathName(DistributedFileSystem.java:196)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem.access$000(DistributedFileSystem.java:105)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem$6.doCall(DistributedFileSystem.java:397)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem$6.doCall(DistributedFileSystem.java:393)
>   at 
> org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:393)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:337)
>   at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:908)
>   at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:889)
>   at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:786)
>   at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:775)
>   at 
> org.apache.hadoop.fs.shell.TestTextCommand.createAvroFile(TestTextCommand.java:113)
>   at 
> org.apache.hadoop.fs.shell.TestTextCommand.testDisplayForAvroFiles(TestTextCommand.java:76)
> {code}





[jira] [Updated] (HDFS-7514) TestTextCommand fails on Windows

2014-12-12 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7514?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDFS-7514:

  Resolution: Fixed
   Fix Version/s: 2.7.0
Target Version/s:   (was: 2.6.1)
  Status: Resolved  (was: Patch Available)

Thank you for the review and verification Chris!

Committed to trunk and branch-2.

> TestTextCommand fails on Windows
> 
>
> Key: HDFS-7514
> URL: https://issues.apache.org/jira/browse/HDFS-7514
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Affects Versions: 2.6.0
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
> Fix For: 2.7.0
>
> Attachments: HDFS-7514.branch-2.01.patch, HDFS-7514.trunk.01.patch
>
>
> TestTextCommand fails on Windows
> *Error Message*
> {code}
> Pathname 
> /D:/w/hbk/hadoop-hdfs-project/hadoop-hdfs/target/test/data/testText/weather.avro
>  from 
> D:/w/hbk/hadoop-hdfs-project/hadoop-hdfs/target/test/data/testText/weather.avro
>  is not a valid DFS filename.
> {code}
> *Stacktrace*
> {code}
> java.lang.IllegalArgumentException: Pathname 
> /D:/w/hbk/hadoop-hdfs-project/hadoop-hdfs/target/test/data/testText/weather.avro
>  from 
> D:/w/hbk/hadoop-hdfs-project/hadoop-hdfs/target/test/data/testText/weather.avro
>  is not a valid DFS filename.
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem.getPathName(DistributedFileSystem.java:196)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem.access$000(DistributedFileSystem.java:105)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem$6.doCall(DistributedFileSystem.java:397)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem$6.doCall(DistributedFileSystem.java:393)
>   at 
> org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:393)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:337)
>   at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:908)
>   at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:889)
>   at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:786)
>   at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:775)
>   at 
> org.apache.hadoop.fs.shell.TestTextCommand.createAvroFile(TestTextCommand.java:113)
>   at 
> org.apache.hadoop.fs.shell.TestTextCommand.testDisplayForAvroFiles(TestTextCommand.java:76)
> {code}





[jira] [Commented] (HDFS-7509) Avoid resolving path multiple times

2014-12-12 Thread Haohui Mai (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7509?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14244872#comment-14244872
 ] 

Haohui Mai commented on HDFS-7509:
--

The patch also makes changes to clean up various issues introduced during the 
work of flattening the INode hierarchy. The changes look good to me. +1.

> Avoid resolving path multiple times
> ---
>
> Key: HDFS-7509
> URL: https://issues.apache.org/jira/browse/HDFS-7509
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Jing Zhao
>Assignee: Jing Zhao
> Attachments: HDFS-7509.000.patch, HDFS-7509.001.patch, 
> HDFS-7509.002.patch, HDFS-7509.003.patch
>
>






[jira] [Commented] (HDFS-7495) Lock inversion in DFSInputStream#getBlockAt()

2014-12-12 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7495?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14244853#comment-14244853
 ] 

Ted Yu commented on HDFS-7495:
--

Colin:
Can you take a look at the following method which calls getBlockAt() ?
{code}
  private synchronized DatanodeInfo blockSeekTo(long target) throws IOException 
{
{code}
Thanks

> Lock inversion in DFSInputStream#getBlockAt()
> -
>
> Key: HDFS-7495
> URL: https://issues.apache.org/jira/browse/HDFS-7495
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Ted Yu
>Priority: Minor
>
> There're two locks: one on DFSInputStream.this, the other on 
> DFSInputStream.infoLock.
> Normally a lock is obtained on DFSInputStream.this first, then on 
> DFSInputStream.infoLock.
> However, this order is not observed in DFSInputStream#getBlockAt():
> {code}
> synchronized(infoLock) {
> ...
>   if (updatePosition) {
> // synchronized not strictly needed, since we only get here
> // from synchronized caller methods
> synchronized(this) {
> {code}
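For context on why the reported inversion matters: two threads can deadlock 
only if they acquire the same pair of locks in opposite orders, so enforcing a 
single global acquisition order removes the hazard. A minimal generic sketch 
(hypothetical class, not the actual DFSInputStream code; which concrete order 
is correct is exactly what this issue discusses, so the order below is only an 
assumption):

```java
// Sketch of deadlock-free locking via a single, consistent acquisition order.
class LockOrderSketch {
  private final Object infoLock = new Object();
  private int updates;

  // Every writer takes the coarse lock (this) first, then infoLock.
  // A path doing synchronized(infoLock) { synchronized(this) {...} }
  // would invert the order and could deadlock against this one.
  void consistentPath() {
    synchronized (this) {
      synchronized (infoLock) {
        updates++;
      }
    }
  }

  // Taking only the inner lock on its own is always safe.
  int updates() {
    synchronized (infoLock) {
      return updates;
    }
  }
}
```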





[jira] [Updated] (HDFS-7494) Checking of closed in DFSInputStream#pread() should be protected by synchronization

2014-12-12 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7494?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HDFS-7494:
-
Attachment: hdfs-7494-001.patch

Thanks for the suggestion, Colin.

Please take a look at patch v1.

> Checking of closed in DFSInputStream#pread() should be protected by 
> synchronization
> ---
>
> Key: HDFS-7494
> URL: https://issues.apache.org/jira/browse/HDFS-7494
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Ted Yu
>Assignee: Ted Yu
>Priority: Minor
> Attachments: hdfs-7494-001.patch
>
>
> {code}
>   private int pread(long position, byte[] buffer, int offset, int length)
>   throws IOException {
> // sanity checks
> dfsClient.checkOpen();
> if (closed) {
> {code}
> Checking of closed should be protected by holding lock on 
> "DFSInputStream.this"





[jira] [Assigned] (HDFS-7494) Checking of closed in DFSInputStream#pread() should be protected by synchronization

2014-12-12 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7494?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu reassigned HDFS-7494:


Assignee: Ted Yu

> Checking of closed in DFSInputStream#pread() should be protected by 
> synchronization
> ---
>
> Key: HDFS-7494
> URL: https://issues.apache.org/jira/browse/HDFS-7494
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Ted Yu
>Assignee: Ted Yu
>Priority: Minor
>
> {code}
>   private int pread(long position, byte[] buffer, int offset, int length)
>   throws IOException {
> // sanity checks
> dfsClient.checkOpen();
> if (closed) {
> {code}
> Checking of closed should be protected by holding lock on 
> "DFSInputStream.this"





[jira] [Commented] (HDFS-7212) Huge number of BLOCKED threads rendering DataNodes useless

2014-12-12 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7212?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14244818#comment-14244818
 ] 

Colin Patrick McCabe commented on HDFS-7212:


This may also be HADOOP-11333.  In either case, I would suggest trying a later 
release that contains both fixes and verifying that it resolves the issue.

> Huge number of BLOCKED threads rendering DataNodes useless
> --
>
> Key: HDFS-7212
> URL: https://issues.apache.org/jira/browse/HDFS-7212
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 2.4.0
> Environment: PROD
>Reporter: Istvan Szukacs
>
> There are 3000 - 8000 threads in each datanode JVM, blocking the entire VM 
> and rendering the service unusable, missing heartbeats and stopping data 
> access. The threads look like this:
> {code}
> 3415 (state = BLOCKED)
> - sun.misc.Unsafe.park(boolean, long) @bci=0 (Compiled frame; information may 
> be imprecise)
> - java.util.concurrent.locks.LockSupport.park(java.lang.Object) @bci=14, 
> line=186 (Compiled frame)
> - 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt() 
> @bci=1, line=834 (Interpreted frame)
> - 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireQueued(java.util.concurrent.locks.AbstractQueuedSynchronizer$Node,
>  int) @bci=67, line=867 (Interpreted frame)
> - java.util.concurrent.locks.AbstractQueuedSynchronizer.acquire(int) @bci=17, 
> line=1197 (Interpreted frame)
> - java.util.concurrent.locks.ReentrantLock$NonfairSync.lock() @bci=21, 
> line=214 (Compiled frame)
> - java.util.concurrent.locks.ReentrantLock.lock() @bci=4, line=290 (Compiled 
> frame)
> - 
> org.apache.hadoop.net.unix.DomainSocketWatcher.add(org.apache.hadoop.net.unix.DomainSocket,
>  org.apache.hadoop.net.unix.DomainSocketWatcher$Handler) @bci=4, line=286 
> (Interpreted frame)
> - 
> org.apache.hadoop.hdfs.server.datanode.ShortCircuitRegistry.createNewMemorySegment(java.lang.String,
>  org.apache.hadoop.net.unix.DomainSocket) @bci=169, line=283 (Interpreted 
> frame)
> - 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.requestShortCircuitShm(java.lang.String)
>  @bci=212, line=413 (Interpreted frame)
> - 
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opRequestShortCircuitShm(java.io.DataInputStream)
>  @bci=13, line=172 (Interpreted frame)
> - 
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(org.apache.hadoop.hdfs.protocol.datatransfer.Op)
>  @bci=149, line=92 (Compiled frame)
> - org.apache.hadoop.hdfs.server.datanode.DataXceiver.run() @bci=510, line=232 
> (Compiled frame)
> - java.lang.Thread.run() @bci=11, line=744 (Interpreted frame)
> {code}
> Has anybody seen this before?





[jira] [Commented] (HDFS-7400) More reliable namenode health check to detect OS/HW issues

2014-12-12 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7400?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14244814#comment-14244814
 ] 

Hadoop QA commented on HDFS-7400:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12686895/HDFS-7400.patch
  against trunk revision 3681de2.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:red}-1 findbugs{color}.  The patch appears to introduce 1 new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-hdfs-project/hadoop-hdfs:

  org.apache.hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA

  The following test timeouts occurred in 
hadoop-hdfs-project/hadoop-hdfs:

org.apache.hadoop.hdfs.server.blockmanagement.TestDatanodeManager

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/9025//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HDFS-Build/9025//artifact/patchprocess/newPatchFindbugsWarningshadoop-hdfs.html
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/9025//console

This message is automatically generated.

> More reliable namenode health check to detect OS/HW issues
> --
>
> Key: HDFS-7400
> URL: https://issues.apache.org/jira/browse/HDFS-7400
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Ming Ma
>Assignee: Ming Ma
> Attachments: HDFS-7400.patch
>
>
> We had this scenario on an active NN machine.
> * Disk array controller firmware has a bug. So disks stop working.
> * ZKFC and NN still considered the node healthy; Communications between ZKFC 
> and ZK as well as ZKFC and NN are good.
> * The machine can be pinged.
> * The machine can't be sshed.
> So all clients and DNs can't use the NN. But ZKFC and NN still consider the 
> node healthy.
> The question is how we can have ZKFC and NN detect such OS/HW specific issues 
> quickly? Some ideas we discussed briefly,
> * Have other machines help to make the decision whether the NN is actually 
> healthy. Then you have to figure out how to make the decision accurate in the 
> case of network issues, etc.
> * Run OS/HW health check script external to ZKFC/NN on the same machine. If 
> it detects disk or other issues, it can reboot the machine for example.
> * Run OS/HW health check script inside ZKFC/NN. For example NN's 
> HAServiceProtocol#monitorHealth can be modified to call such health check 
> script.
> Thoughts?
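The third option above (an OS/HW check invoked from inside the NN/ZKFC health 
check) could be sketched as below. This is a hypothetical illustration, not 
actual Hadoop code; the script path, timeout, and wiring into 
HAServiceProtocol#monitorHealth are all assumptions. The key point is the 
timeout: on a machine with a wedged disk controller the check script itself may 
hang, so a hung check must count as unhealthy.

```java
import java.io.IOException;
import java.util.concurrent.TimeUnit;

// Sketch: run an external host health-check script with a hard timeout,
// so a hung disk makes the check fail instead of blocking the caller.
class HostHealthCheck {
  private final String script;
  private final long timeoutSeconds;

  HostHealthCheck(String script, long timeoutSeconds) {
    this.script = script;
    this.timeoutSeconds = timeoutSeconds;
  }

  boolean isHealthy() {
    try {
      Process p = new ProcessBuilder(script).start();
      // If the script hangs (e.g. on a dead disk), kill it and report unhealthy.
      if (!p.waitFor(timeoutSeconds, TimeUnit.SECONDS)) {
        p.destroyForcibly();
        return false;
      }
      return p.exitValue() == 0;
    } catch (IOException | InterruptedException e) {
      return false;
    }
  }
}
```

A real integration would call something like this from the health-check RPC 
handler and escalate (fail over, or reboot the host) on repeated failures.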





[jira] [Commented] (HDFS-7495) Lock inversion in DFSInputStream#getBlockAt()

2014-12-12 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7495?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14244809#comment-14244809
 ] 

Colin Patrick McCabe commented on HDFS-7495:


Hi [~tedyu],

The comment at the top of {{DFSInputStream.java}} says:

{code}
  // lock for state shared between read and pread
  // Note: Never acquire a lock on  with this lock held to avoid deadlocks
  //   (it's OK to acquire this lock when the lock on  is held)
  private final Object infoLock = new Object();
{code}

It is normal and expected to acquire {{infoLock}} first, and then the stream 
lock.  Have you found any places where this order is reversed?

> Lock inversion in DFSInputStream#getBlockAt()
> -
>
> Key: HDFS-7495
> URL: https://issues.apache.org/jira/browse/HDFS-7495
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Ted Yu
>Priority: Minor
>
> There're two locks: one on DFSInputStream.this, the other on 
> DFSInputStream.infoLock.
> Normally a lock is obtained on DFSInputStream.this first, then on 
> DFSInputStream.infoLock.
> However, this order is not observed in DFSInputStream#getBlockAt():
> {code}
> synchronized(infoLock) {
> ...
>   if (updatePosition) {
> // synchronized not strictly needed, since we only get here
> // from synchronized caller methods
> synchronized(this) {
> {code}





[jira] [Commented] (HDFS-7494) Checking of closed in DFSInputStream#pread() should be protected by synchronization

2014-12-12 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7494?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14244802#comment-14244802
 ] 

Colin Patrick McCabe commented on HDFS-7494:


Hi [~tedyu],

We cannot take the stream lock in pread without introducing some big 
performance regressions.  See HDFS-6735 for why we stopped taking the lock 
here... basically, it's because we wanted {{pread()}} to be able to proceed 
independently of {{read()}}.

Let's make this an {{AtomicBoolean}} instead.  Then, we can just check it with 
{{AtomicBoolean#get}}.  In {{DFSInputStream#close}}, we can do 
{{AtomicBoolean#compareAndSet}} to ensure close happens only once, even if 
there are concurrent calls.
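The AtomicBoolean approach described above can be sketched as follows. This is 
a simplified illustration, not the actual DFSInputStream code; everything 
except the {{closed}} flag itself is hypothetical:

```java
import java.io.IOException;
import java.util.concurrent.atomic.AtomicBoolean;

// Sketch: lock-free closed check so pread() never contends with read().
class StreamSketch {
  private final AtomicBoolean closed = new AtomicBoolean(false);

  int pread(long position, byte[] buffer, int offset, int length)
      throws IOException {
    // AtomicBoolean#get needs no lock, so pread() stays independent of read().
    if (closed.get()) {
      throw new IOException("Stream is closed");
    }
    return 0; // the actual positioned read is elided
  }

  void close() {
    // compareAndSet flips false -> true exactly once, so the cleanup
    // below runs at most once even under concurrent close() calls.
    if (!closed.compareAndSet(false, true)) {
      return; // already closed
    }
    // release sockets/buffers here
  }
}
```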

> Checking of closed in DFSInputStream#pread() should be protected by 
> synchronization
> ---
>
> Key: HDFS-7494
> URL: https://issues.apache.org/jira/browse/HDFS-7494
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Ted Yu
>Priority: Minor
>
> {code}
>   private int pread(long position, byte[] buffer, int offset, int length)
>   throws IOException {
> // sanity checks
> dfsClient.checkOpen();
> if (closed) {
> {code}
> Checking of closed should be protected by holding lock on 
> "DFSInputStream.this"





[jira] [Commented] (HDFS-7411) Refactor and improve decommissioning logic into DecommissionManager

2014-12-12 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7411?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14244728#comment-14244728
 ] 

Arpit Agarwal commented on HDFS-7411:
-

bq. So what's broken right now is that these UC blocks aren't picked up by 
decom, since decom scans the DN block list. This means these DNs can be 
erroneously decommissioned. 

That makes sense. Logically it makes sense to include UC blocks in the DN 
block list. Perhaps it's safe to do so in addStoredBlockUnderConstruction if we 
can check that it doesn't break any assumptions elsewhere in BlockManager.

> Refactor and improve decommissioning logic into DecommissionManager
> ---
>
> Key: HDFS-7411
> URL: https://issues.apache.org/jira/browse/HDFS-7411
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 2.5.1
>Reporter: Andrew Wang
>Assignee: Andrew Wang
> Attachments: hdfs-7411.001.patch, hdfs-7411.002.patch, 
> hdfs-7411.003.patch, hdfs-7411.004.patch
>
>
> Would be nice to split out decommission logic from DatanodeManager to 
> DecommissionManager.





[jira] [Created] (HDFS-7518) Heartbeat processing doesn't have to take FSN readLock

2014-12-12 Thread Ming Ma (JIRA)
Ming Ma created HDFS-7518:
-

 Summary: Heartbeat processing doesn't have to take FSN readLock
 Key: HDFS-7518
 URL: https://issues.apache.org/jira/browse/HDFS-7518
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Ming Ma


NameNode takes a global read lock when it processes heartbeat RPCs from 
DataNodes. This increases lock contention and could impact overall NN 
throughput. Given that heartbeat processing only needs to access data specific 
to the DataNode that invokes the RPC, it could instead synchronize on the 
specific DataNode and on datanodeMap.

It looks like each DatanodeDescriptor already keeps its own recovery blocks, 
replication blocks and invalidate blocks. There are several places that need 
to be changed to remove the FSN lock.

As mentioned in other jiras, we need some mechanism to reason about the 
correctness of the solution.

Thoughts?
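The proposed change, synchronizing on the reporting DataNode's own state 
instead of the namesystem-wide lock, can be sketched as below. Class and method 
names are hypothetical, not the actual NameNode code:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Sketch: heartbeat handling that locks only the reporting datanode's
// descriptor (plus a concurrent map), not a global namesystem lock.
class HeartbeatSketch {
  static class DatanodeDescriptor {
    long lastHeartbeatMillis;
  }

  // ConcurrentHashMap allows lock-free lookups by datanode id.
  private final Map<String, DatanodeDescriptor> datanodeMap =
      new ConcurrentHashMap<>();

  void register(String datanodeId) {
    datanodeMap.putIfAbsent(datanodeId, new DatanodeDescriptor());
  }

  void handleHeartbeat(String datanodeId, long nowMillis) {
    DatanodeDescriptor dn = datanodeMap.get(datanodeId);
    if (dn == null) {
      return; // unknown node; real code would ask it to re-register
    }
    // Only this datanode's state is locked, so heartbeats from
    // different datanodes are processed concurrently.
    synchronized (dn) {
      dn.lastHeartbeatMillis = nowMillis;
    }
  }

  long lastHeartbeat(String datanodeId) {
    DatanodeDescriptor dn = datanodeMap.get(datanodeId);
    synchronized (dn) {
      return dn.lastHeartbeatMillis;
    }
  }
}
```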





[jira] [Commented] (HDFS-7509) Avoid resolving path multiple times

2014-12-12 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7509?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14244703#comment-14244703
 ] 

Hadoop QA commented on HDFS-7509:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12686869/HDFS-7509.002.patch
  against trunk revision bda748a.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 6 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:red}-1 findbugs{color}.  The patch appears to introduce 1 new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-hdfs-project/hadoop-hdfs:

  
org.apache.hadoop.hdfs.server.blockmanagement.TestDatanodeManager
  org.apache.hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA
  org.apache.hadoop.hdfs.TestDecommission

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/9023//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HDFS-Build/9023//artifact/patchprocess/newPatchFindbugsWarningshadoop-hdfs.html
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/9023//console

This message is automatically generated.

> Avoid resolving path multiple times
> ---
>
> Key: HDFS-7509
> URL: https://issues.apache.org/jira/browse/HDFS-7509
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Jing Zhao
>Assignee: Jing Zhao
> Attachments: HDFS-7509.000.patch, HDFS-7509.001.patch, 
> HDFS-7509.002.patch, HDFS-7509.003.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7517) Remove redundant non-null checks in FSNamesystem#getBlockLocations

2014-12-12 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7517?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14244696#comment-14244696
 ] 

Hudson commented on HDFS-7517:
--

FAILURE: Integrated in Hadoop-trunk-Commit #6707 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/6707/])
HDFS-7517. Remove redundant non-null checks in FSNamesystem#getBlockLocations. 
Contributed by Haohui Mai. (wheat9: rev 
46612c7a5135d20b20403780b47dd00654aab057)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


> Remove redundant non-null checks in FSNamesystem#getBlockLocations
> --
>
> Key: HDFS-7517
> URL: https://issues.apache.org/jira/browse/HDFS-7517
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Haohui Mai
>Assignee: Haohui Mai
> Fix For: 2.7.0
>
> Attachments: HDFS-7517.000.patch
>
>
> There are redundant non-null checks in the function, which findbugs 
> complains about. They should be fixed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7517) Remove redundant non-null checks in FSNamesystem#getBlockLocations

2014-12-12 Thread Haohui Mai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7517?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haohui Mai updated HDFS-7517:
-
Summary: Remove redundant non-null checks in FSNamesystem#getBlockLocations 
 (was: Remove redundant non-null checks in {{FSNamesystem#getBlockLocations}})

> Remove redundant non-null checks in FSNamesystem#getBlockLocations
> --
>
> Key: HDFS-7517
> URL: https://issues.apache.org/jira/browse/HDFS-7517
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Haohui Mai
>Assignee: Haohui Mai
> Fix For: 2.7.0
>
> Attachments: HDFS-7517.000.patch
>
>
> There are redundant non-null checks in the function, which findbugs 
> complains about. They should be fixed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7517) Remove redundant non-null checks in FSNamesystem#getBlockLocations

2014-12-12 Thread Haohui Mai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7517?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haohui Mai updated HDFS-7517:
-
   Resolution: Fixed
Fix Version/s: 2.7.0
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

I've committed the patch to trunk and branch-2. Thanks Jing for the reviews.

> Remove redundant non-null checks in FSNamesystem#getBlockLocations
> --
>
> Key: HDFS-7517
> URL: https://issues.apache.org/jira/browse/HDFS-7517
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Haohui Mai
>Assignee: Haohui Mai
> Fix For: 2.7.0
>
> Attachments: HDFS-7517.000.patch
>
>
> There are redundant non-null checks in the function, which findbugs 
> complains about. They should be fixed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7491) Add incremental blockreport latency to DN metrics

2014-12-12 Thread Ming Ma (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7491?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ming Ma updated HDFS-7491:
--
Assignee: Ming Ma
  Status: Patch Available  (was: Open)

> Add incremental blockreport latency to DN metrics
> -
>
> Key: HDFS-7491
> URL: https://issues.apache.org/jira/browse/HDFS-7491
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Ming Ma
>Assignee: Ming Ma
>Priority: Minor
> Attachments: HDFS-7491.patch
>
>
> In a busy cluster, IBR processing could be delayed due to NN FSNamesystem 
> lock and cause NN to throw NotReplicatedYetException to DFSClient and thus 
> increase the overall application latency.
> This will be taken care of when we address the NN FSNamesystem lock 
> contention issue.
> It would be useful if we could provide IBR latency metrics from the DN's point of view.
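A DN-side metric of the kind proposed above can be sketched with a simple count/total accumulator. The {{LatencyMetric}} class below is a hypothetical stand-in, not Hadoop's metrics2 {{MutableRate}}:

```java
import java.util.concurrent.atomic.AtomicLong;

// Hypothetical accumulator: tracks count and total so an average latency
// can be reported, similar in spirit to a metrics2 rate metric.
class LatencyMetric {
    private final AtomicLong count = new AtomicLong();
    private final AtomicLong totalMillis = new AtomicLong();

    void add(long millis) {
        count.incrementAndGet();
        totalMillis.addAndGet(millis);
    }

    double avgMillis() {
        long c = count.get();
        return c == 0 ? 0.0 : (double) totalMillis.get() / c;
    }
}

public class IbrMetricDemo {
    public static void main(String[] args) {
        LatencyMetric ibrLatency = new LatencyMetric();
        // The DN would record wall-clock time around each IBR RPC;
        // here two simulated observations are recorded.
        ibrLatency.add(12);
        ibrLatency.add(8);
        System.out.println(ibrLatency.avgMillis()); // prints 10.0
    }
}
```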



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7509) Avoid resolving path multiple times

2014-12-12 Thread Jing Zhao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7509?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jing Zhao updated HDFS-7509:

Summary: Avoid resolving path multiple times  (was: Avoid resolving path 
multiple times in rename/mkdir)

> Avoid resolving path multiple times
> ---
>
> Key: HDFS-7509
> URL: https://issues.apache.org/jira/browse/HDFS-7509
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Jing Zhao
>Assignee: Jing Zhao
> Attachments: HDFS-7509.000.patch, HDFS-7509.001.patch, 
> HDFS-7509.002.patch, HDFS-7509.003.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7509) Avoid resolving path multiple times in rename/mkdir

2014-12-12 Thread Jing Zhao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7509?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jing Zhao updated HDFS-7509:

Attachment: HDFS-7509.003.patch

Actually I just found that there are not many remaining places that still 
resolve the path multiple times. Uploaded a patch with more complete coverage.

> Avoid resolving path multiple times in rename/mkdir
> ---
>
> Key: HDFS-7509
> URL: https://issues.apache.org/jira/browse/HDFS-7509
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Jing Zhao
>Assignee: Jing Zhao
> Attachments: HDFS-7509.000.patch, HDFS-7509.001.patch, 
> HDFS-7509.002.patch, HDFS-7509.003.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7400) More reliable namenode health check to detect OS/HW issues

2014-12-12 Thread Ming Ma (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7400?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14244648#comment-14244648
 ] 

Ming Ma commented on HDFS-7400:
---

Thanks, Allen. If nobody raises any objection to providing a health check script 
for the NN in the next couple of days, I will create a YARN jira to refactor the 
health check related code into hadoop-common.

> More reliable namenode health check to detect OS/HW issues
> --
>
> Key: HDFS-7400
> URL: https://issues.apache.org/jira/browse/HDFS-7400
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Ming Ma
>Assignee: Ming Ma
> Attachments: HDFS-7400.patch
>
>
> We had this scenario on an active NN machine.
> * Disk array controller firmware has a bug. So disks stop working.
> * ZKFC and NN still considered the node healthy; Communications between ZKFC 
> and ZK as well as ZKFC and NN are good.
> * The machine can be pinged.
> * The machine can't be sshed.
> So all clients and DNs can't use the NN. But ZKFC and NN still consider the 
> node healthy.
> The question is how we can have ZKFC and NN detect such OS/HW specific issues 
> quickly? Some ideas we discussed briefly,
> * Have other machines help to make the decision whether the NN is actually 
> healthy. Then you have to figure out how to make the decision accurate in the 
> case of a network issue, etc.
> * Run OS/HW health check script external to ZKFC/NN on the same machine. If 
> it detects disk or other issues, it can reboot the machine for example.
> * Run OS/HW health check script inside ZKFC/NN. For example NN's 
> HAServiceProtocol#monitorHealth can be modified to call such health check 
> script.
> Thoughts?
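As a rough illustration of the external health check idea above, a disk-liveness probe can simply attempt a small write on the volume it is asked to check. This is a hedged sketch of the general technique, not the attached patch's implementation:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class DiskProbe {
    // Returns true if a small file can be created, written, and deleted
    // under dir — a crude proxy for "the disk behind dir is alive".
    static boolean diskWritable(Path dir) {
        try {
            Path p = Files.createTempFile(dir, "probe", ".tmp");
            Files.write(p, new byte[]{1});
            Files.delete(p);
            return true;
        } catch (IOException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        Path dir = Paths.get(args.length > 0
                ? args[0] : System.getProperty("java.io.tmpdir"));
        System.out.println(diskWritable(dir) ? "HEALTHY" : "UNHEALTHY");
    }
}
```

A wrapper (for example ZKFC's health monitoring, or an external script that reboots the box) would treat any output other than HEALTHY as a failed check.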



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7411) Refactor and improve decommissioning logic into DecommissionManager

2014-12-12 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7411?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14244641#comment-14244641
 ] 

Andrew Wang commented on HDFS-7411:
---

Hey Arpit,

So what's broken right now is that these UC blocks aren't picked up by decom, 
since decom scans the DN block list. This means these DNs can be erroneously 
decommissioned. I'm not sure how to resolve this without adding the UC blocks 
to the DN block list. Good point about that second if statement too.

I guess I'll dig more on this myself; if anyone else knows about this, a 
comment would be appreciated. At a logical level, I'm not sure why we'd have 
the DN locations in the blocksMap, but not also add the block to the DN's block 
list.
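A toy model (not Hadoop code) of the gap described above: a decommission scan that iterates only the DN's finalized block list never checks an under-construction block stored on that DN, so the node can look safe to decommission when it is not:

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

public class DecomScan {
    // Toy setup: only the under-construction block is under-replicated.
    static boolean sufficientlyReplicated(String blk) {
        return !blk.equals("blk_3");
    }

    public static void main(String[] args) {
        Set<String> finalized = new HashSet<>(Arrays.asList("blk_1", "blk_2"));
        Set<String> underConstruction = new HashSet<>(Arrays.asList("blk_3"));

        // Scanning finalized blocks alone misses blk_3 and reports "safe".
        boolean safeNaive = finalized.stream()
                .allMatch(DecomScan::sufficientlyReplicated);

        // Including UC blocks in the scan catches the under-replicated one.
        Set<String> all = new HashSet<>(finalized);
        all.addAll(underConstruction);
        boolean safeFull = all.stream()
                .allMatch(DecomScan::sufficientlyReplicated);

        System.out.println(safeNaive + " " + safeFull); // prints "true false"
    }
}
```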

> Refactor and improve decommissioning logic into DecommissionManager
> ---
>
> Key: HDFS-7411
> URL: https://issues.apache.org/jira/browse/HDFS-7411
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 2.5.1
>Reporter: Andrew Wang
>Assignee: Andrew Wang
> Attachments: hdfs-7411.001.patch, hdfs-7411.002.patch, 
> hdfs-7411.003.patch, hdfs-7411.004.patch
>
>
> Would be nice to split out decommission logic from DatanodeManager to 
> DecommissionManager.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7056) Snapshot support for truncate

2014-12-12 Thread Plamen Jeliazkov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7056?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Plamen Jeliazkov updated HDFS-7056:
---
Attachment: HDFS-3107-HDFS-7056-combined.patch

Attaching new combined patch based on Konstantin's latest HDFS-3107 and 
HDFS-7056 patch.

The editsStored files from HDFS-3107 JIRA continue to work.

> Snapshot support for truncate
> -
>
> Key: HDFS-7056
> URL: https://issues.apache.org/jira/browse/HDFS-7056
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Affects Versions: 3.0.0
>Reporter: Konstantin Shvachko
>Assignee: Plamen Jeliazkov
> Attachments: HDFS-3107-HDFS-7056-combined.patch, 
> HDFS-3107-HDFS-7056-combined.patch, HDFS-3107-HDFS-7056-combined.patch, 
> HDFS-3107-HDFS-7056-combined.patch, HDFS-3107-HDFS-7056-combined.patch, 
> HDFS-3107-HDFS-7056-combined.patch, HDFS-7056.patch, HDFS-7056.patch, 
> HDFS-7056.patch, HDFS-7056.patch, HDFS-7056.patch, HDFS-7056.patch, 
> HDFS-7056.patch, HDFSSnapshotWithTruncateDesign.docx
>
>
> Implementation of truncate in HDFS-3107 does not allow truncating files which 
> are in a snapshot. It is desirable to be able to truncate and still keep the 
> old state of the file in the snapshot.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7056) Snapshot support for truncate

2014-12-12 Thread Plamen Jeliazkov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7056?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Plamen Jeliazkov updated HDFS-7056:
---
Status: Patch Available  (was: Open)

> Snapshot support for truncate
> -
>
> Key: HDFS-7056
> URL: https://issues.apache.org/jira/browse/HDFS-7056
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Affects Versions: 3.0.0
>Reporter: Konstantin Shvachko
>Assignee: Plamen Jeliazkov
> Attachments: HDFS-3107-HDFS-7056-combined.patch, 
> HDFS-3107-HDFS-7056-combined.patch, HDFS-3107-HDFS-7056-combined.patch, 
> HDFS-3107-HDFS-7056-combined.patch, HDFS-3107-HDFS-7056-combined.patch, 
> HDFS-3107-HDFS-7056-combined.patch, HDFS-7056.patch, HDFS-7056.patch, 
> HDFS-7056.patch, HDFS-7056.patch, HDFS-7056.patch, HDFS-7056.patch, 
> HDFS-7056.patch, HDFSSnapshotWithTruncateDesign.docx
>
>
> Implementation of truncate in HDFS-3107 does not allow truncating files which 
> are in a snapshot. It is desirable to be able to truncate and still keep the 
> old state of the file in the snapshot.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7411) Refactor and improve decommissioning logic into DecommissionManager

2014-12-12 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7411?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14244625#comment-14244625
 ] 

Arpit Agarwal commented on HDFS-7411:
-

Hi Andrew, thank you for the heads up.

Looks like the change to {{addStoredBlockUnderConstruction}} is not necessary. 
The function was updated by HDFS-2832 when we added the concept of datanode as 
a collection of storages. Looking at the pre-2832 code:

{code}
block.addReplicaIfNotPresent(node, block, reportedState);
if (reportedState == ReplicaState.FINALIZED && block.findDatanode(node) < 0) {
  addStoredBlock(block, node, null, true);
}
{code}

And in {{addStoredBlock}} we have:

{code}
...
// add block to the datanode
boolean added = node.addBlock(storageID, storedBlock);
{code}

I am not sure what the original reason was for not adding UC blocks to the 
DataNode/storage, but if nothing is broken right now perhaps we should leave it 
as it is. One more side effect of this change is that the subsequent if block 
will never be taken, since block.findDatanode will no longer return a negative 
value for the node.


> Refactor and improve decommissioning logic into DecommissionManager
> ---
>
> Key: HDFS-7411
> URL: https://issues.apache.org/jira/browse/HDFS-7411
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 2.5.1
>Reporter: Andrew Wang
>Assignee: Andrew Wang
> Attachments: hdfs-7411.001.patch, hdfs-7411.002.patch, 
> hdfs-7411.003.patch, hdfs-7411.004.patch
>
>
> Would be nice to split out decommission logic from DatanodeManager to 
> DecommissionManager.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7067) ClassCastException while using a key created by keytool to create encryption zone.

2014-12-12 Thread Charles Lamb (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7067?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14244600#comment-14244600
 ] 

Charles Lamb commented on HDFS-7067:


The three FB warnings appear to be unrelated. I ran FB with and without the 
patch and it produced the same results. The test failure is expected since 
test-patch does not apply the hdfs7067.keystore file to src/test/resources.

Code  Warning
RV    Return value of java.util.concurrent.CountDownLatch.await(long, TimeUnit) ignored in org.apache.hadoop.ha.ActiveStandbyElector$WatcherWithClientRef.process(WatchedEvent)

Multithreaded correctness Warnings
Code  Warning
AT    Sequence of calls to java.util.concurrent.ConcurrentHashMap may not be atomic in org.apache.hadoop.net.NetUtils.canonicalizeHost(String)

Security Warnings
Code  Warning
XSS   HTTP parameter written to Servlet output in org.apache.hadoop.jmx.JMXJsonServlet.doGet(HttpServletRequest, HttpServletResponse)
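For reference, the RV warning above flags that the boolean returned by {{CountDownLatch.await(long, TimeUnit)}} is ignored; checking it is what distinguishes a timeout from a successful countdown. A minimal sketch:

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;

public class AwaitCheck {
    public static void main(String[] args) throws InterruptedException {
        CountDownLatch latch = new CountDownLatch(1);
        latch.countDown(); // in real code another thread counts down

        // await(timeout) returns false on timeout; don't discard that result.
        boolean reached = latch.await(1, TimeUnit.SECONDS);
        if (!reached) {
            throw new IllegalStateException("timed out waiting for latch");
        }
        System.out.println("latch reached: " + reached);
    }
}
```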

> ClassCastException while using a key created by keytool to create encryption 
> zone. 
> ---
>
> Key: HDFS-7067
> URL: https://issues.apache.org/jira/browse/HDFS-7067
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: encryption
>Affects Versions: 2.6.0
>Reporter: Yi Yao
>Assignee: Charles Lamb
>Priority: Minor
> Attachments: HDFS-7067.001.patch, HDFS-7067.002.patch, 
> hdfs7067.keystore
>
>
> I'm using transparent encryption. If I create a key for the KMS keystore via 
> keytool and use the key to create an encryption zone, I get a 
> ClassCastException rather than an exception with a decent error message. I know 
> we should use 'hadoop key create' to create a key. It's better to provide a 
> decent error message to remind the user of the right way to create a KMS key.
> [LOG]
> ERROR[user=hdfs] Method:'GET' Exception:'java.lang.ClassCastException: 
> javax.crypto.spec.SecretKeySpec cannot be cast to 
> org.apache.hadoop.crypto.key.JavaKeyStoreProvider$KeyMetadata'
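The decent-error-message fix suggested above amounts to checking the key's runtime type before casting. A self-contained sketch in plain JCE terms — the helper name and the expected type used here are illustrative, since {{JavaKeyStoreProvider$KeyMetadata}} is provider-internal:

```java
import java.security.Key;
import javax.crypto.spec.SecretKeySpec;

public class KeyTypeCheck {
    // Hypothetical helper: fail with a readable message instead of letting
    // a blind cast throw ClassCastException.
    static <T extends Key> T expectKeyType(Key k, Class<T> expected) {
        if (!expected.isInstance(k)) {
            throw new IllegalArgumentException(
                "Expected " + expected.getName() + " but the keystore holds "
                + k.getClass().getName()
                + "; was the key created with keytool instead of 'hadoop key create'?");
        }
        return expected.cast(k);
    }

    public static void main(String[] args) {
        // A keytool-created secret key materializes as SecretKeySpec.
        Key keytoolKey = new SecretKeySpec(new byte[16], "AES");
        try {
            // Illustrative expected type; the real provider expects its own
            // internal metadata class.
            expectKeyType(keytoolKey, javax.crypto.interfaces.PBEKey.class);
        } catch (IllegalArgumentException e) {
            System.out.println(e.getMessage());
        }
    }
}
```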



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7400) More reliable namenode health check to detect OS/HW issues

2014-12-12 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7400?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14244563#comment-14244563
 ] 

Allen Wittenauer commented on HDFS-7400:


bq. The code could have been reused between YARN and HDFS. We can put health 
check related code in hadoop-common if people prefer.

Yes, definitely. 





> More reliable namenode health check to detect OS/HW issues
> --
>
> Key: HDFS-7400
> URL: https://issues.apache.org/jira/browse/HDFS-7400
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Ming Ma
>Assignee: Ming Ma
> Attachments: HDFS-7400.patch
>
>
> We had this scenario on an active NN machine.
> * Disk array controller firmware has a bug. So disks stop working.
> * ZKFC and NN still considered the node healthy; Communications between ZKFC 
> and ZK as well as ZKFC and NN are good.
> * The machine can be pinged.
> * The machine can't be sshed.
> So all clients and DNs can't use the NN. But ZKFC and NN still consider the 
> node healthy.
> The question is how we can have ZKFC and NN detect such OS/HW specific issues 
> quickly? Some ideas we discussed briefly,
> * Have other machines help to make the decision whether the NN is actually 
> healthy. Then you have to figure out how to make the decision accurate in the 
> case of a network issue, etc.
> * Run OS/HW health check script external to ZKFC/NN on the same machine. If 
> it detects disk or other issues, it can reboot the machine for example.
> * Run OS/HW health check script inside ZKFC/NN. For example NN's 
> HAServiceProtocol#monitorHealth can be modified to call such health check 
> script.
> Thoughts?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7400) More reliable namenode health check to detect OS/HW issues

2014-12-12 Thread Ming Ma (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7400?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ming Ma updated HDFS-7400:
--
Attachment: HDFS-7400.patch

> More reliable namenode health check to detect OS/HW issues
> --
>
> Key: HDFS-7400
> URL: https://issues.apache.org/jira/browse/HDFS-7400
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Ming Ma
>Assignee: Ming Ma
> Attachments: HDFS-7400.patch
>
>
> We had this scenario on an active NN machine.
> * Disk array controller firmware has a bug. So disks stop working.
> * ZKFC and NN still considered the node healthy; Communications between ZKFC 
> and ZK as well as ZKFC and NN are good.
> * The machine can be pinged.
> * The machine can't be sshed.
> So all clients and DNs can't use the NN. But ZKFC and NN still consider the 
> node healthy.
> The question is how we can have ZKFC and NN detect such OS/HW specific issues 
> quickly? Some ideas we discussed briefly,
> * Have other machines help to make the decision whether the NN is actually 
> healthy. Then you have to figure out how to make the decision accurate in the 
> case of a network issue, etc.
> * Run OS/HW health check script external to ZKFC/NN on the same machine. If 
> it detects disk or other issues, it can reboot the machine for example.
> * Run OS/HW health check script inside ZKFC/NN. For example NN's 
> HAServiceProtocol#monitorHealth can be modified to call such health check 
> script.
> Thoughts?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7400) More reliable namenode health check to detect OS/HW issues

2014-12-12 Thread Ming Ma (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7400?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ming Ma updated HDFS-7400:
--
Attachment: (was: HDFS-7400.patch)

> More reliable namenode health check to detect OS/HW issues
> --
>
> Key: HDFS-7400
> URL: https://issues.apache.org/jira/browse/HDFS-7400
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Ming Ma
>Assignee: Ming Ma
> Attachments: HDFS-7400.patch
>
>
> We had this scenario on an active NN machine.
> * Disk array controller firmware has a bug. So disks stop working.
> * ZKFC and NN still considered the node healthy; Communications between ZKFC 
> and ZK as well as ZKFC and NN are good.
> * The machine can be pinged.
> * The machine can't be sshed.
> So all clients and DNs can't use the NN. But ZKFC and NN still consider the 
> node healthy.
> The question is how we can have ZKFC and NN detect such OS/HW specific issues 
> quickly? Some ideas we discussed briefly,
> * Have other machines help to make the decision whether the NN is actually 
> healthy. Then you have to figure out how to make the decision accurate in the 
> case of a network issue, etc.
> * Run OS/HW health check script external to ZKFC/NN on the same machine. If 
> it detects disk or other issues, it can reboot the machine for example.
> * Run OS/HW health check script inside ZKFC/NN. For example NN's 
> HAServiceProtocol#monitorHealth can be modified to call such health check 
> script.
> Thoughts?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7067) ClassCastException while using a key created by keytool to create encryption zone.

2014-12-12 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7067?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14244487#comment-14244487
 ] 

Hadoop QA commented on HDFS-7067:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12686872/HDFS-7067.002.patch
  against trunk revision bda748a.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:red}-1 findbugs{color}.  The patch appears to introduce 3 new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-common-project/hadoop-common:

  org.apache.hadoop.crypto.key.TestKeyProviderFactory

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/9024//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HDFS-Build/9024//artifact/patchprocess/newPatchFindbugsWarningshadoop-common.html
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/9024//console

This message is automatically generated.

> ClassCastException while using a key created by keytool to create encryption 
> zone. 
> ---
>
> Key: HDFS-7067
> URL: https://issues.apache.org/jira/browse/HDFS-7067
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: encryption
>Affects Versions: 2.6.0
>Reporter: Yi Yao
>Assignee: Charles Lamb
>Priority: Minor
> Attachments: HDFS-7067.001.patch, HDFS-7067.002.patch, 
> hdfs7067.keystore
>
>
> I'm using transparent encryption. If I create a key for the KMS keystore via 
> keytool and use the key to create an encryption zone, I get a 
> ClassCastException rather than an exception with a decent error message. I know 
> we should use 'hadoop key create' to create a key. It's better to provide a 
> decent error message to remind the user of the right way to create a KMS key.
> [LOG]
> ERROR[user=hdfs] Method:'GET' Exception:'java.lang.ClassCastException: 
> javax.crypto.spec.SecretKeySpec cannot be cast to 
> org.apache.hadoop.crypto.key.JavaKeyStoreProvider$KeyMetadata'



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7517) Remove redundant non-null checks in {{FSNamesystem#getBlockLocations}}

2014-12-12 Thread Jing Zhao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7517?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14244465#comment-14244465
 ] 

Jing Zhao commented on HDFS-7517:
-

+1

> Remove redundant non-null checks in {{FSNamesystem#getBlockLocations}}
> --
>
> Key: HDFS-7517
> URL: https://issues.apache.org/jira/browse/HDFS-7517
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Haohui Mai
>Assignee: Haohui Mai
> Attachments: HDFS-7517.000.patch
>
>
> There are redundant non-null checks in the function, which findbugs 
> complains about. They should be fixed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7506) Consolidate implementation of setting inode attributes into a single class

2014-12-12 Thread Jing Zhao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7506?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=1420#comment-1420
 ] 

Jing Zhao commented on HDFS-7506:
-

The patch looks pretty good to me. One minor comment is that it may be 
unnecessary to define {{getStoragePolicies(blockManager)}} in {{FSDirAttrOp}}; 
it may be more direct to call the method through blockManager. Besides this, 
+1.
{code}
@@ -2181,25 +2046,18 @@ private void setStoragePolicyInt(String src, final 
String policyName)
 readLock();
 try {
   checkOperation(OperationCategory.READ);
-  return blockManager.getStoragePolicies();
+  return FSDirAttrOp.getStoragePolicies(blockManager);
 } finally {
   readUnlock();
 }
   }
{code}

> Consolidate implementation of setting inode attributes into a single class
> --
>
> Key: HDFS-7506
> URL: https://issues.apache.org/jira/browse/HDFS-7506
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Haohui Mai
>Assignee: Haohui Mai
> Attachments: HDFS-7506.000.patch, HDFS-7506.001.patch, 
> HDFS-7506.001.patch, HDFS-7506.002.patch
>
>
> This jira proposes to consolidate the implementation of setting inode 
> attributes (i.e., times, permissions, owner, etc.) to a single class for 
> better maintainability.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7067) ClassCastException while using a key created by keytool to create encryption zone.

2014-12-12 Thread Charles Lamb (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7067?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14244408#comment-14244408
 ] 

Charles Lamb commented on HDFS-7067:


To test this, the hdfs7067.keystore will need to be placed in src/test/resources.

This will not pass jenkins since test-patch.sh won't apply hdfs7067.keystore.


> ClassCastException while using a key created by keytool to create encryption 
> zone. 
> ---
>
> Key: HDFS-7067
> URL: https://issues.apache.org/jira/browse/HDFS-7067
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: encryption
>Affects Versions: 2.6.0
>Reporter: Yi Yao
>Assignee: Charles Lamb
>Priority: Minor
> Attachments: HDFS-7067.001.patch, HDFS-7067.002.patch, 
> hdfs7067.keystore
>
>
> I'm using transparent encryption. If I create a key for the KMS keystore via 
> keytool and use the key to create an encryption zone, I get a 
> ClassCastException rather than an exception with a decent error message. I know 
> we should use 'hadoop key create' to create a key. It's better to provide a 
> decent error message to remind the user of the right way to create a KMS key.
> [LOG]
> ERROR[user=hdfs] Method:'GET' Exception:'java.lang.ClassCastException: 
> javax.crypto.spec.SecretKeySpec cannot be cast to 
> org.apache.hadoop.crypto.key.JavaKeyStoreProvider$KeyMetadata'



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7067) ClassCastException while using a key created by keytool to create encryption zone.

2014-12-12 Thread Charles Lamb (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7067?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Charles Lamb updated HDFS-7067:
---
Attachment: HDFS-7067.002.patch

Rebased.

> ClassCastException while using a key created by keytool to create encryption 
> zone. 
> ---
>
> Key: HDFS-7067
> URL: https://issues.apache.org/jira/browse/HDFS-7067
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: encryption
>Affects Versions: 2.6.0
>Reporter: Yi Yao
>Assignee: Charles Lamb
>Priority: Minor
> Attachments: HDFS-7067.001.patch, HDFS-7067.002.patch, 
> hdfs7067.keystore
>
>
> I'm using transparent encryption. If I create a key for the KMS keystore via 
> keytool and use the key to create an encryption zone, I get a 
> ClassCastException rather than an exception with a decent error message. I know 
> we should use 'hadoop key create' to create a key. It's better to provide a 
> decent error message to remind the user of the right way to create a KMS key.
> [LOG]
> ERROR[user=hdfs] Method:'GET' Exception:'java.lang.ClassCastException: 
> javax.crypto.spec.SecretKeySpec cannot be cast to 
> org.apache.hadoop.crypto.key.JavaKeyStoreProvider$KeyMetadata'



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7509) Avoid resolving path multiple times in rename/mkdir

2014-12-12 Thread Jing Zhao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7509?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jing Zhao updated HDFS-7509:

Attachment: HDFS-7509.002.patch

Thanks again for the review, Charles! Updated the patch to fix the typo.

> Avoid resolving path multiple times in rename/mkdir
> ---
>
> Key: HDFS-7509
> URL: https://issues.apache.org/jira/browse/HDFS-7509
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Jing Zhao
>Assignee: Jing Zhao
> Attachments: HDFS-7509.000.patch, HDFS-7509.001.patch, 
> HDFS-7509.002.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7515) Fix new findbugs warnings in hadoop-hdfs

2014-12-12 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7515?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14244276#comment-14244276
 ] 

Hudson commented on HDFS-7515:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk #1990 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1990/])
HDFS-7515. Fix new findbugs warnings in hadoop-hdfs. Contributed by Haohui Mai. 
(wheat9: rev b9f6d0c956f0278c8b9b83e05b523a442a730ebb)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/BlockReaderFactory.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockReceiver.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSOutputStream.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/tools/offlineImageViewer/TestOfflineImageViewer.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetImpl.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/DFSAdmin.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataStorage.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/offlineImageViewer/FSImageLoader.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/web/webhdfs/WebHdfsHandler.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/mover/Mover.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSImageUtil.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/qjournal/server/JournalNode.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/offlineImageViewer/OfflineImageViewerPB.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/offlineImageViewer/FSImageHandler.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/offlineImageViewer/FileDistributionCalculator.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/offlineImageViewer/LsImageVisitor.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/snapshot/TestSnapshot.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/BlockPoolSlice.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/XAttrPermissionFilter.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/offlineImageViewer/FileDistributionVisitor.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/web/webhdfs/ExceptionHandler.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNodeRpcServer.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/offlineImageViewer/PBImageXmlWriter.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FileJournalManager.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INode.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirStatAndListingOp.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/Storage.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/offlineImageViewer/DelimitedImageVisitor.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestEncryptionZones.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NamenodeFsck.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirRenameOp.java


> Fix new findbugs warnings in hadoop-hdfs
> 
>
> Key: HDFS-7515
> URL: https://issues.apache.org/jira/browse/HDFS-7515
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Haohui Mai
>Assignee: Haohui Mai
> Fix For: 2.7.0
>
> Attachments: HADOOP-10480.000.patch, HADOOP-10480.001.patch, 
> HADOOP-10480.002.patch, HADOOP-10480.003.patch, HADOOP-10480.2.patch, 
> HADOOP-10480.patch
>
>
> The following findbugs warnings need to be fixed:
> {noformat}
> [INFO] --- findbugs-maven-plugin:2.5.3:check (default-cli) @ hadoop-hdfs ---
> [INFO] BugInstance size is 14
> [INFO] Error size is 0
> [INFO] Total bugs: 14
> [INFO] Redundant nullcheck of curPeer, which is known to be non-null in 
> org.apache.hadoop.hdfs.BlockReaderFactory.getRemoteBlockReaderFromTcp() 
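[Editor's note] The redundant-nullcheck warning quoted above follows a common pattern that findbugs flags as RCN: a value assigned from a call that either returns non-null or throws is later checked against null. The sketch below is hypothetical simplified code (the names `acquirePeer`, `before`, and `after` are illustrative, not the actual `BlockReaderFactory` logic):

```java
// Hypothetical sketch of the redundant-nullcheck (RCN) pattern findbugs
// reports: curPeer comes from a call that either returns a non-null value
// or throws, so the later null check is dead code.
class RedundantNullcheckExample {
    // Stands in for a helper like nextTcpPeer(): returns non-null or throws.
    static String acquirePeer() {
        return "peer";
    }

    // Before: findbugs reports "Redundant nullcheck of curPeer,
    // which is known to be non-null".
    static String before() {
        String curPeer = acquirePeer();
        if (curPeer != null) { // redundant: acquirePeer() never returns null
            return curPeer;
        }
        return "unreachable";
    }

    // After: drop the dead branch; behavior is unchanged.
    static String after() {
        return acquirePeer();
    }
}
```

The fix is purely mechanical: removing the dead branch silences the warning without changing behavior.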

[jira] [Commented] (HDFS-7497) Inconsistent report of decommissioning DataNodes between dfsadmin and NameNode webui

2014-12-12 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7497?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14244268#comment-14244268
 ] 

Hudson commented on HDFS-7497:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk #1990 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1990/])
HDFS-7497. Inconsistent report of decommissioning DataNodes between dfsadmin 
and NameNode webui. Contributed by Yongjun Zhang. (wang: rev 
b437f5eef40874287d4fbf9d8e43f1a857b5621f)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeManager.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestDecommissioningStatus.java


> Inconsistent report of decommissioning DataNodes between dfsadmin and 
> NameNode webui
> 
>
> Key: HDFS-7497
> URL: https://issues.apache.org/jira/browse/HDFS-7497
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode, namenode
>Reporter: Yongjun Zhang
>Assignee: Yongjun Zhang
>  Labels: supportability
> Fix For: 2.7.0
>
> Attachments: HDFS-7497.001.patch
>
>
> It's observed that the dfsadmin report lists DNs in the decommissioning 
> state while the NN web UI lists them as dead.
> What happens is:
> The NN web UI uses two steps to get the result:
> * first collect a list of all alive DNs,
> * then traverse all live DNs to find decommissioning DNs.
> It calls the following method to decide whether a DN is dead or alive:
> {code}
>   /** Is the datanode dead? */
>   boolean isDatanodeDead(DatanodeDescriptor node) {
> return (node.getLastUpdate() <
> (Time.now() - heartbeatExpireInterval));
>   }
> {code}
> On the other hand, dfsadmin traverses all DNs to find all decommissioning 
> DNs (checking whether a DN is in the {{AdminStates.DECOMMISSION_INPROGRESS}} 
> state), without checking whether the DN is dead or alive as above.
> The problem is that when a DN is determined to be dead, its state may still 
> be {{AdminStates.DECOMMISSION_INPROGRESS}}.
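[Editor's note] The inconsistency described above can be sketched as follows. This is hypothetical simplified code, not the actual DatanodeManager implementation: the point is that the decommissioning report must apply the same liveness predicate the web UI uses before checking the admin state.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Sketch of the HDFS-7497 inconsistency: a node can be past its heartbeat
// expiry (dead) while its admin state is still DECOMMISSION_INPROGRESS.
// Filtering dead nodes first makes dfsadmin agree with the web UI.
class DecommReportSketch {
    enum AdminState { NORMAL, DECOMMISSION_INPROGRESS, DECOMMISSIONED }

    static class DatanodeDescriptor {
        final String name;
        final AdminState state;
        final long lastUpdate; // last heartbeat time, ms
        DatanodeDescriptor(String name, AdminState state, long lastUpdate) {
            this.name = name; this.state = state; this.lastUpdate = lastUpdate;
        }
    }

    // Illustrative value; the real interval is derived from configuration.
    static final long HEARTBEAT_EXPIRE_INTERVAL_MS = 10 * 60 * 1000;

    // Same predicate as the isDatanodeDead method quoted above.
    static boolean isDatanodeDead(DatanodeDescriptor node, long now) {
        return node.lastUpdate < (now - HEARTBEAT_EXPIRE_INTERVAL_MS);
    }

    // Consistent report: skip dead nodes, then check the admin state.
    static List<String> decommissioningReport(List<DatanodeDescriptor> all,
                                              long now) {
        List<String> result = new ArrayList<>();
        for (DatanodeDescriptor dn : all) {
            if (!isDatanodeDead(dn, now)
                    && dn.state == AdminState.DECOMMISSION_INPROGRESS) {
                result.add(dn.name);
            }
        }
        return result;
    }
}
```

With this filter, a dead node stuck in {{DECOMMISSION_INPROGRESS}} is reported as dead rather than decommissioning, matching the web UI.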





[jira] [Commented] (HDFS-7449) Add metrics to NFS gateway

2014-12-12 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7449?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14244271#comment-14244271
 ] 

Hudson commented on HDFS-7449:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk #1990 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1990/])
HDFS-7449. Add metrics to NFS gateway. Contributed by Brandon Li (brandonli: 
rev f6f2a3f1c73266bfedd802eacde60d8b19b81015)
* 
hadoop-hdfs-project/hadoop-hdfs-nfs/src/main/java/org/apache/hadoop/hdfs/nfs/nfs3/WriteCtx.java
* 
hadoop-hdfs-project/hadoop-hdfs-nfs/src/main/java/org/apache/hadoop/hdfs/nfs/nfs3/OpenFileCtx.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs-nfs/src/main/java/org/apache/hadoop/hdfs/nfs/nfs3/Nfs3Utils.java
* 
hadoop-hdfs-project/hadoop-hdfs-nfs/src/main/java/org/apache/hadoop/hdfs/nfs/nfs3/WriteManager.java
* 
hadoop-hdfs-project/hadoop-hdfs-nfs/src/test/java/org/apache/hadoop/hdfs/nfs/nfs3/TestNfs3HttpServer.java
* 
hadoop-hdfs-project/hadoop-hdfs-nfs/src/main/java/org/apache/hadoop/hdfs/nfs/nfs3/Nfs3.java
* 
hadoop-hdfs-project/hadoop-hdfs-nfs/src/main/java/org/apache/hadoop/hdfs/nfs/conf/NfsConfigKeys.java
* 
hadoop-hdfs-project/hadoop-hdfs-nfs/src/main/java/org/apache/hadoop/hdfs/nfs/nfs3/RpcProgramNfs3.java
* 
hadoop-hdfs-project/hadoop-hdfs-nfs/src/main/java/org/apache/hadoop/hdfs/nfs/nfs3/Nfs3Metrics.java


> Add metrics to NFS gateway
> --
>
> Key: HDFS-7449
> URL: https://issues.apache.org/jira/browse/HDFS-7449
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: nfs
>Affects Versions: 2.7.0
>Reporter: Brandon Li
>Assignee: Brandon Li
> Fix For: 2.7.0
>
> Attachments: HDFS-7449.001.patch, HDFS-7449.002.patch, 
> HDFS-7449.003.patch, HDFS-7449.004.patch, HDFS-7449.005.patch, 
> HDFS-7449.006.patch, HDFS-7449.007.patch
>
>
> Add metrics to collect NFSv3 handler operation counts, response times, etc.
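[Editor's note] The kind of data this feature collects (per-operation counts plus response times) can be sketched as below. This is a hypothetical illustration using only the JDK; the actual patch uses Hadoop's metrics2 framework, and the class and method names here are invented:

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.LongAdder;

// Minimal sketch of per-operation NFS gateway metrics: a count and a
// cumulative elapsed time per handler operation (e.g. "READ", "WRITE").
class NfsOpMetricsSketch {
    static final class OpStat {
        final LongAdder count = new LongAdder();
        final LongAdder totalNanos = new LongAdder();
    }

    private final ConcurrentHashMap<String, OpStat> stats =
        new ConcurrentHashMap<>();

    // Record one completed handler call and how long it took.
    void record(String op, long elapsedNanos) {
        OpStat s = stats.computeIfAbsent(op, k -> new OpStat());
        s.count.increment();
        s.totalNanos.add(elapsedNanos);
    }

    long count(String op) {
        OpStat s = stats.get(op);
        return s == null ? 0 : s.count.sum();
    }

    // Average response time in milliseconds; 0.0 if the op was never seen.
    double avgMillis(String op) {
        OpStat s = stats.get(op);
        long c = s == null ? 0 : s.count.sum();
        return c == 0 ? 0.0 : s.totalNanos.sum() / 1.0e6 / c;
    }
}
```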





[jira] [Commented] (HDFS-7515) Fix new findbugs warnings in hadoop-hdfs

2014-12-12 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7515?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14244249#comment-14244249
 ] 

Hudson commented on HDFS-7515:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk-Java8 #40 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/40/])
HDFS-7515. Fix new findbugs warnings in hadoop-hdfs. Contributed by Haohui Mai. 
(wheat9: rev b9f6d0c956f0278c8b9b83e05b523a442a730ebb)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSImageUtil.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockReceiver.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirRenameOp.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/Storage.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetImpl.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSOutputStream.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNodeRpcServer.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/offlineImageViewer/OfflineImageViewerPB.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/qjournal/server/JournalNode.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FileJournalManager.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/XAttrPermissionFilter.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/offlineImageViewer/FSImageHandler.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestEncryptionZones.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/offlineImageViewer/DelimitedImageVisitor.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NamenodeFsck.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/BlockPoolSlice.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/BlockReaderFactory.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/snapshot/TestSnapshot.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/web/webhdfs/WebHdfsHandler.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/mover/Mover.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirStatAndListingOp.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/offlineImageViewer/FileDistributionCalculator.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/DFSAdmin.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INode.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/offlineImageViewer/FileDistributionVisitor.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/offlineImageViewer/PBImageXmlWriter.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/web/webhdfs/ExceptionHandler.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/offlineImageViewer/FSImageLoader.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/offlineImageViewer/LsImageVisitor.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/tools/offlineImageViewer/TestOfflineImageViewer.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataStorage.java


> Fix new findbugs warnings in hadoop-hdfs
> 
>
> Key: HDFS-7515
> URL: https://issues.apache.org/jira/browse/HDFS-7515
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Haohui Mai
>Assignee: Haohui Mai
> Fix For: 2.7.0
>
> Attachments: HADOOP-10480.000.patch, HADOOP-10480.001.patch, 
> HADOOP-10480.002.patch, HADOOP-10480.003.patch, HADOOP-10480.2.patch, 
> HADOOP-10480.patch
>
>
> The following findbugs warnings need to be fixed:
> {noformat}
> [INFO] --- findbugs-maven-plugin:2.5.3:check (default-cli) @ hadoop-hdfs ---
> [INFO] BugInstance size is 14
> [INFO] Error size is 0
> [INFO] Total bugs: 14
> [INFO] Redundant nullcheck of curPeer, which is known to be non-null in 
> org.apache.hadoop.hdfs.BlockReaderFactory.getRemoteBlockReaderFro

[jira] [Commented] (HDFS-7449) Add metrics to NFS gateway

2014-12-12 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7449?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14244244#comment-14244244
 ] 

Hudson commented on HDFS-7449:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk-Java8 #40 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/40/])
HDFS-7449. Add metrics to NFS gateway. Contributed by Brandon Li (brandonli: 
rev f6f2a3f1c73266bfedd802eacde60d8b19b81015)
* 
hadoop-hdfs-project/hadoop-hdfs-nfs/src/main/java/org/apache/hadoop/hdfs/nfs/nfs3/Nfs3Metrics.java
* 
hadoop-hdfs-project/hadoop-hdfs-nfs/src/test/java/org/apache/hadoop/hdfs/nfs/nfs3/TestNfs3HttpServer.java
* 
hadoop-hdfs-project/hadoop-hdfs-nfs/src/main/java/org/apache/hadoop/hdfs/nfs/nfs3/OpenFileCtx.java
* 
hadoop-hdfs-project/hadoop-hdfs-nfs/src/main/java/org/apache/hadoop/hdfs/nfs/conf/NfsConfigKeys.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs-nfs/src/main/java/org/apache/hadoop/hdfs/nfs/nfs3/Nfs3Utils.java
* 
hadoop-hdfs-project/hadoop-hdfs-nfs/src/main/java/org/apache/hadoop/hdfs/nfs/nfs3/RpcProgramNfs3.java
* 
hadoop-hdfs-project/hadoop-hdfs-nfs/src/main/java/org/apache/hadoop/hdfs/nfs/nfs3/Nfs3.java
* 
hadoop-hdfs-project/hadoop-hdfs-nfs/src/main/java/org/apache/hadoop/hdfs/nfs/nfs3/WriteManager.java
* 
hadoop-hdfs-project/hadoop-hdfs-nfs/src/main/java/org/apache/hadoop/hdfs/nfs/nfs3/WriteCtx.java


> Add metrics to NFS gateway
> --
>
> Key: HDFS-7449
> URL: https://issues.apache.org/jira/browse/HDFS-7449
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: nfs
>Affects Versions: 2.7.0
>Reporter: Brandon Li
>Assignee: Brandon Li
> Fix For: 2.7.0
>
> Attachments: HDFS-7449.001.patch, HDFS-7449.002.patch, 
> HDFS-7449.003.patch, HDFS-7449.004.patch, HDFS-7449.005.patch, 
> HDFS-7449.006.patch, HDFS-7449.007.patch
>
>
> Add metrics to collect NFSv3 handler operation counts, response times, etc.





[jira] [Commented] (HDFS-7497) Inconsistent report of decommissioning DataNodes between dfsadmin and NameNode webui

2014-12-12 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7497?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14244241#comment-14244241
 ] 

Hudson commented on HDFS-7497:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk-Java8 #40 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/40/])
HDFS-7497. Inconsistent report of decommissioning DataNodes between dfsadmin 
and NameNode webui. Contributed by Yongjun Zhang. (wang: rev 
b437f5eef40874287d4fbf9d8e43f1a857b5621f)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeManager.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestDecommissioningStatus.java


> Inconsistent report of decommissioning DataNodes between dfsadmin and 
> NameNode webui
> 
>
> Key: HDFS-7497
> URL: https://issues.apache.org/jira/browse/HDFS-7497
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode, namenode
>Reporter: Yongjun Zhang
>Assignee: Yongjun Zhang
>  Labels: supportability
> Fix For: 2.7.0
>
> Attachments: HDFS-7497.001.patch
>
>
> It's observed that the dfsadmin report lists DNs in the decommissioning 
> state while the NN web UI lists them as dead.
> What happens is:
> The NN web UI uses two steps to get the result:
> * first collect a list of all alive DNs,
> * then traverse all live DNs to find decommissioning DNs.
> It calls the following method to decide whether a DN is dead or alive:
> {code}
>   /** Is the datanode dead? */
>   boolean isDatanodeDead(DatanodeDescriptor node) {
> return (node.getLastUpdate() <
> (Time.now() - heartbeatExpireInterval));
>   }
> {code}
> On the other hand, dfsadmin traverses all DNs to find all decommissioning 
> DNs (checking whether a DN is in the {{AdminStates.DECOMMISSION_INPROGRESS}} 
> state), without checking whether the DN is dead or alive as above.
> The problem is that when a DN is determined to be dead, its state may still 
> be {{AdminStates.DECOMMISSION_INPROGRESS}}.





[jira] [Commented] (HDFS-7449) Add metrics to NFS gateway

2014-12-12 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7449?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14244201#comment-14244201
 ] 

Hudson commented on HDFS-7449:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk-Java8 #36 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/36/])
HDFS-7449. Add metrics to NFS gateway. Contributed by Brandon Li (brandonli: 
rev f6f2a3f1c73266bfedd802eacde60d8b19b81015)
* 
hadoop-hdfs-project/hadoop-hdfs-nfs/src/main/java/org/apache/hadoop/hdfs/nfs/nfs3/RpcProgramNfs3.java
* 
hadoop-hdfs-project/hadoop-hdfs-nfs/src/main/java/org/apache/hadoop/hdfs/nfs/nfs3/WriteCtx.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs-nfs/src/test/java/org/apache/hadoop/hdfs/nfs/nfs3/TestNfs3HttpServer.java
* 
hadoop-hdfs-project/hadoop-hdfs-nfs/src/main/java/org/apache/hadoop/hdfs/nfs/nfs3/WriteManager.java
* 
hadoop-hdfs-project/hadoop-hdfs-nfs/src/main/java/org/apache/hadoop/hdfs/nfs/conf/NfsConfigKeys.java
* 
hadoop-hdfs-project/hadoop-hdfs-nfs/src/main/java/org/apache/hadoop/hdfs/nfs/nfs3/OpenFileCtx.java
* 
hadoop-hdfs-project/hadoop-hdfs-nfs/src/main/java/org/apache/hadoop/hdfs/nfs/nfs3/Nfs3.java
* 
hadoop-hdfs-project/hadoop-hdfs-nfs/src/main/java/org/apache/hadoop/hdfs/nfs/nfs3/Nfs3Utils.java
* 
hadoop-hdfs-project/hadoop-hdfs-nfs/src/main/java/org/apache/hadoop/hdfs/nfs/nfs3/Nfs3Metrics.java


> Add metrics to NFS gateway
> --
>
> Key: HDFS-7449
> URL: https://issues.apache.org/jira/browse/HDFS-7449
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: nfs
>Affects Versions: 2.7.0
>Reporter: Brandon Li
>Assignee: Brandon Li
> Fix For: 2.7.0
>
> Attachments: HDFS-7449.001.patch, HDFS-7449.002.patch, 
> HDFS-7449.003.patch, HDFS-7449.004.patch, HDFS-7449.005.patch, 
> HDFS-7449.006.patch, HDFS-7449.007.patch
>
>
> Add metrics to collect NFSv3 handler operation counts, response times, etc.





[jira] [Commented] (HDFS-7497) Inconsistent report of decommissioning DataNodes between dfsadmin and NameNode webui

2014-12-12 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7497?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14244189#comment-14244189
 ] 

Hudson commented on HDFS-7497:
--

SUCCESS: Integrated in Hadoop-Hdfs-trunk #1970 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1970/])
HDFS-7497. Inconsistent report of decommissioning DataNodes between dfsadmin 
and NameNode webui. Contributed by Yongjun Zhang. (wang: rev 
b437f5eef40874287d4fbf9d8e43f1a857b5621f)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestDecommissioningStatus.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeManager.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


> Inconsistent report of decommissioning DataNodes between dfsadmin and 
> NameNode webui
> 
>
> Key: HDFS-7497
> URL: https://issues.apache.org/jira/browse/HDFS-7497
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode, namenode
>Reporter: Yongjun Zhang
>Assignee: Yongjun Zhang
>  Labels: supportability
> Fix For: 2.7.0
>
> Attachments: HDFS-7497.001.patch
>
>
> It's observed that the dfsadmin report lists DNs in the decommissioning 
> state while the NN web UI lists them as dead.
> What happens is:
> The NN web UI uses two steps to get the result:
> * first collect a list of all alive DNs,
> * then traverse all live DNs to find decommissioning DNs.
> It calls the following method to decide whether a DN is dead or alive:
> {code}
>   /** Is the datanode dead? */
>   boolean isDatanodeDead(DatanodeDescriptor node) {
> return (node.getLastUpdate() <
> (Time.now() - heartbeatExpireInterval));
>   }
> {code}
> On the other hand, dfsadmin traverses all DNs to find all decommissioning 
> DNs (checking whether a DN is in the {{AdminStates.DECOMMISSION_INPROGRESS}} 
> state), without checking whether the DN is dead or alive as above.
> The problem is that when a DN is determined to be dead, its state may still 
> be {{AdminStates.DECOMMISSION_INPROGRESS}}.





[jira] [Commented] (HDFS-7449) Add metrics to NFS gateway

2014-12-12 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7449?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14244192#comment-14244192
 ] 

Hudson commented on HDFS-7449:
--

SUCCESS: Integrated in Hadoop-Hdfs-trunk #1970 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1970/])
HDFS-7449. Add metrics to NFS gateway. Contributed by Brandon Li (brandonli: 
rev f6f2a3f1c73266bfedd802eacde60d8b19b81015)
* 
hadoop-hdfs-project/hadoop-hdfs-nfs/src/main/java/org/apache/hadoop/hdfs/nfs/nfs3/WriteCtx.java
* 
hadoop-hdfs-project/hadoop-hdfs-nfs/src/test/java/org/apache/hadoop/hdfs/nfs/nfs3/TestNfs3HttpServer.java
* 
hadoop-hdfs-project/hadoop-hdfs-nfs/src/main/java/org/apache/hadoop/hdfs/nfs/nfs3/OpenFileCtx.java
* 
hadoop-hdfs-project/hadoop-hdfs-nfs/src/main/java/org/apache/hadoop/hdfs/nfs/nfs3/WriteManager.java
* 
hadoop-hdfs-project/hadoop-hdfs-nfs/src/main/java/org/apache/hadoop/hdfs/nfs/nfs3/RpcProgramNfs3.java
* 
hadoop-hdfs-project/hadoop-hdfs-nfs/src/main/java/org/apache/hadoop/hdfs/nfs/nfs3/Nfs3Metrics.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs-nfs/src/main/java/org/apache/hadoop/hdfs/nfs/nfs3/Nfs3Utils.java
* 
hadoop-hdfs-project/hadoop-hdfs-nfs/src/main/java/org/apache/hadoop/hdfs/nfs/conf/NfsConfigKeys.java
* 
hadoop-hdfs-project/hadoop-hdfs-nfs/src/main/java/org/apache/hadoop/hdfs/nfs/nfs3/Nfs3.java


> Add metrics to NFS gateway
> --
>
> Key: HDFS-7449
> URL: https://issues.apache.org/jira/browse/HDFS-7449
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: nfs
>Affects Versions: 2.7.0
>Reporter: Brandon Li
>Assignee: Brandon Li
> Fix For: 2.7.0
>
> Attachments: HDFS-7449.001.patch, HDFS-7449.002.patch, 
> HDFS-7449.003.patch, HDFS-7449.004.patch, HDFS-7449.005.patch, 
> HDFS-7449.006.patch, HDFS-7449.007.patch
>
>
> Add metrics to collect NFSv3 handler operation counts, response times, etc.





[jira] [Commented] (HDFS-7515) Fix new findbugs warnings in hadoop-hdfs

2014-12-12 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7515?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14244206#comment-14244206
 ] 

Hudson commented on HDFS-7515:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk-Java8 #36 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/36/])
HDFS-7515. Fix new findbugs warnings in hadoop-hdfs. Contributed by Haohui Mai. 
(wheat9: rev b9f6d0c956f0278c8b9b83e05b523a442a730ebb)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/web/webhdfs/ExceptionHandler.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/mover/Mover.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestEncryptionZones.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/offlineImageViewer/PBImageXmlWriter.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSImageUtil.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/snapshot/TestSnapshot.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/Storage.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FileJournalManager.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirStatAndListingOp.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/offlineImageViewer/FSImageLoader.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/BlockReaderFactory.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataStorage.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INode.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/DFSAdmin.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/offlineImageViewer/FileDistributionCalculator.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/offlineImageViewer/DelimitedImageVisitor.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/offlineImageViewer/FileDistributionVisitor.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/BlockPoolSlice.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/offlineImageViewer/OfflineImageViewerPB.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirRenameOp.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNodeRpcServer.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NamenodeFsck.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/XAttrPermissionFilter.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/tools/offlineImageViewer/TestOfflineImageViewer.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/offlineImageViewer/LsImageVisitor.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSOutputStream.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetImpl.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/qjournal/server/JournalNode.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/offlineImageViewer/FSImageHandler.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/web/webhdfs/WebHdfsHandler.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockReceiver.java


> Fix new findbugs warnings in hadoop-hdfs
> 
>
> Key: HDFS-7515
> URL: https://issues.apache.org/jira/browse/HDFS-7515
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Haohui Mai
>Assignee: Haohui Mai
> Fix For: 2.7.0
>
> Attachments: HADOOP-10480.000.patch, HADOOP-10480.001.patch, 
> HADOOP-10480.002.patch, HADOOP-10480.003.patch, HADOOP-10480.2.patch, 
> HADOOP-10480.patch
>
>
> The following findbugs warnings need to be fixed:
> {noformat}
> [INFO] --- findbugs-maven-plugin:2.5.3:check (default-cli) @ hadoop-hdfs ---
> [INFO] BugInstance size is 14
> [INFO] Error size is 0
> [INFO] Total bugs: 14
> [INFO] Redundant nullcheck of curPeer, which is known to be non-null in 
> org.apache.hadoop.hdfs.BlockReaderFactory.getRemoteBlockReaderFromTcp() 
> 

[jira] [Commented] (HDFS-7515) Fix new findbugs warnings in hadoop-hdfs

2014-12-12 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7515?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14244197#comment-14244197
 ] 

Hudson commented on HDFS-7515:
--

SUCCESS: Integrated in Hadoop-Hdfs-trunk #1970 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1970/])
HDFS-7515. Fix new findbugs warnings in hadoop-hdfs. Contributed by Haohui Mai. 
(wheat9: rev b9f6d0c956f0278c8b9b83e05b523a442a730ebb)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/mover/Mover.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/snapshot/TestSnapshot.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/offlineImageViewer/LsImageVisitor.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSImageUtil.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataStorage.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NamenodeFsck.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/BlockReaderFactory.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirStatAndListingOp.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/Storage.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/tools/offlineImageViewer/TestOfflineImageViewer.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestEncryptionZones.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INode.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/XAttrPermissionFilter.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/offlineImageViewer/FSImageLoader.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FileJournalManager.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/BlockPoolSlice.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetImpl.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/offlineImageViewer/FSImageHandler.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/web/webhdfs/ExceptionHandler.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/offlineImageViewer/FileDistributionVisitor.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/offlineImageViewer/FileDistributionCalculator.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/offlineImageViewer/DelimitedImageVisitor.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockReceiver.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/qjournal/server/JournalNode.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNodeRpcServer.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/web/webhdfs/WebHdfsHandler.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSOutputStream.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/DFSAdmin.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/offlineImageViewer/PBImageXmlWriter.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/offlineImageViewer/OfflineImageViewerPB.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirRenameOp.java


> Fix new findbugs warnings in hadoop-hdfs
> 
>
> Key: HDFS-7515
> URL: https://issues.apache.org/jira/browse/HDFS-7515
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Haohui Mai
>Assignee: Haohui Mai
> Fix For: 2.7.0
>
> Attachments: HADOOP-10480.000.patch, HADOOP-10480.001.patch, 
> HADOOP-10480.002.patch, HADOOP-10480.003.patch, HADOOP-10480.2.patch, 
> HADOOP-10480.patch
>
>
> The following findbugs warnings need to be fixed:
> {noformat}
> [INFO] --- findbugs-maven-plugin:2.5.3:check (default-cli) @ hadoop-hdfs ---
> [INFO] BugInstance size is 14
> [INFO] Error size is 0
> [INFO] Total bugs: 14
> [INFO] Redundant nullcheck of curPeer, which is known to be non-null in 
> org.apache.hadoop.hdfs.BlockReaderFactory.getRemoteBlockReaderFromTcp() 
> ["org.ap

[jira] [Commented] (HDFS-7497) Inconsistent report of decommissioning DataNodes between dfsadmin and NameNode webui

2014-12-12 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7497?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14244198#comment-14244198
 ] 

Hudson commented on HDFS-7497:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk-Java8 #36 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/36/])
HDFS-7497. Inconsistent report of decommissioning DataNodes between dfsadmin 
and NameNode webui. Contributed by Yongjun Zhang. (wang: rev 
b437f5eef40874287d4fbf9d8e43f1a857b5621f)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestDecommissioningStatus.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeManager.java


> Inconsistent report of decommissioning DataNodes between dfsadmin and 
> NameNode webui
> 
>
> Key: HDFS-7497
> URL: https://issues.apache.org/jira/browse/HDFS-7497
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode, namenode
>Reporter: Yongjun Zhang
>Assignee: Yongjun Zhang
>  Labels: supportability
> Fix For: 2.7.0
>
> Attachments: HDFS-7497.001.patch
>
>
> It's observed that the dfsadmin report lists DNs in the decommissioning state 
> while the NN UI lists the same DNs as dead.
> What happens is:
> the NN web UI uses two steps to get the result:
> * first collect a list of all alive DNs, 
> * then traverse all live DNs to find decommissioning DNs. 
> It calls the following method to decide whether a DN is dead or alive:
> {code}
>   /** Is the datanode dead? */
>   boolean isDatanodeDead(DatanodeDescriptor node) {
>     return (node.getLastUpdate() <
>         (Time.now() - heartbeatExpireInterval));
>   }
> {code}
> On the other hand, dfsadmin traverses all DNs to find all decommissioning 
> DNs (checking whether a DN is in the {{AdminStates.DECOMMISSION_INPROGRESS}} state), 
> without checking whether a DN is dead or alive as above.
> The problem is that when a DN is determined to be dead, its state may still be 
> {{AdminStates.DECOMMISSION_INPROGRESS}}.
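The fix direction described above can be sketched as a standalone filter. The class and field names below are hypothetical stand-ins for {{DatanodeDescriptor}}, but the liveness check mirrors the quoted {{isDatanodeDead}} logic: a dead node still marked DECOMMISSION_INPROGRESS should not be reported as decommissioning.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch only: a decommissioning report that applies the same liveness
// check the NN web UI uses. "DN" is a hypothetical stand-in for
// DatanodeDescriptor; it is not the actual HDFS class.
public class DecommReportSketch {
    enum AdminState { NORMAL, DECOMMISSION_INPROGRESS, DECOMMISSIONED }

    static class DN {
        final String name;
        final long lastUpdate;       // ms timestamp of last heartbeat
        final AdminState adminState;
        DN(String name, long lastUpdate, AdminState s) {
            this.name = name; this.lastUpdate = lastUpdate; this.adminState = s;
        }
    }

    // Mirrors the quoted isDatanodeDead() check.
    static boolean isDead(DN node, long now, long heartbeatExpireInterval) {
        return node.lastUpdate < (now - heartbeatExpireInterval);
    }

    // Unlike the old dfsadmin path, this excludes dead nodes from the
    // decommissioning list, matching the web UI's behavior.
    static List<String> decommissioning(List<DN> nodes, long now, long expire) {
        List<String> out = new ArrayList<>();
        for (DN n : nodes) {
            if (n.adminState == AdminState.DECOMMISSION_INPROGRESS
                    && !isDead(n, now, expire)) {
                out.add(n.name);
            }
        }
        return out;
    }

    public static void main(String[] args) {
        long now = 100_000L, expire = 10_000L;
        List<DN> nodes = List.of(
            new DN("dn1", 95_000L, AdminState.DECOMMISSION_INPROGRESS), // alive
            new DN("dn2", 50_000L, AdminState.DECOMMISSION_INPROGRESS)  // dead
        );
        System.out.println(decommissioning(nodes, now, expire)); // prints [dn1]
    }
}
```

With the expire interval of 10s, dn2's last heartbeat at t=50s is stale at t=100s, so only dn1 is reported.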



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7509) Avoid resolving path multiple times in rename/mkdir

2014-12-12 Thread Charles Lamb (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7509?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14244144#comment-14244144
 ] 

Charles Lamb commented on HDFS-7509:


[~jingzhao],

In the .001 patch, there is still one more of these to be fixed:

s/does not operates on/does not operate on/g

I'm a non-binding +1 pending the fix to FSN#getBlockLocations.




> Avoid resolving path multiple times in rename/mkdir
> ---
>
> Key: HDFS-7509
> URL: https://issues.apache.org/jira/browse/HDFS-7509
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Jing Zhao
>Assignee: Jing Zhao
> Attachments: HDFS-7509.000.patch, HDFS-7509.001.patch
>
>






[jira] [Commented] (HDFS-7509) Avoid resolving path multiple times in rename/mkdir

2014-12-12 Thread Charles Lamb (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7509?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14244122#comment-14244122
 ] 

Charles Lamb commented on HDFS-7509:


Hi [~wheat9],

bq. However, given that (1) the format of the file is somewhat inconsistent 
today, and (2) the interfaces might continue to change significantly in 
subsequent jiras, maybe it might even make sense to revisit the formatting 
issues in a separate jira once the work has reached some milestones. Please 
feel free to file jiras for cleaning up the throw clauses, etc. That would be a 
great improvement of the current code base.

I agree with your sentiment that, in general, formatting issues should be 
addressed in a separate Jira. If you have the fortitude to do that, then I can 
definitely get behind it. Unfortunately, I've seen that movie and I know how it 
ends. My comments were directed at formatting issues introduced by this patch. 
I'm not suggesting that we address all formatting errors in a particular file, 
since that would cause far too much code churn and make the patch even larger 
than it is now. Rather, I was suggesting that we not introduce new breakages 
of the existing standards with this patch; hence, my comments were confined to 
changes caused by the patch and nothing more. Regarding the 'throw clause' 
cleanups, I was suggesting that we not address those now, since they belong in 
a separate Jira; the removal of AccessControlException should probably go in 
that (to be filed) Jira's patch, not this one.



> Avoid resolving path multiple times in rename/mkdir
> ---
>
> Key: HDFS-7509
> URL: https://issues.apache.org/jira/browse/HDFS-7509
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Jing Zhao
>Assignee: Jing Zhao
> Attachments: HDFS-7509.000.patch, HDFS-7509.001.patch
>
>






[jira] [Commented] (HDFS-7517) Remove redundant non-null checks in {{FSNamesystem#getBlockLocations}}

2014-12-12 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7517?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14244065#comment-14244065
 ] 

Hadoop QA commented on HDFS-7517:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12686811/HDFS-7517.000.patch
  against trunk revision bda748a.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-hdfs-project/hadoop-hdfs:

  org.apache.hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/9022//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/9022//console

This message is automatically generated.

> Remove redundant non-null checks in {{FSNamesystem#getBlockLocations}}
> --
>
> Key: HDFS-7517
> URL: https://issues.apache.org/jira/browse/HDFS-7517
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Haohui Mai
>Assignee: Haohui Mai
> Attachments: HDFS-7517.000.patch
>
>
> There are redundant non-null checks in the function that findbugs complains 
> about. They should be fixed.
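For context, the class of warning involved (findbugs' redundant-null-check pattern, RCN) has the shape shown in the following snippet. This is illustrative only; the names are hypothetical, not taken from the actual {{FSNamesystem}} code.

```java
// Illustrative only: the shape of code that findbugs reports as
// RCN_REDUNDANT_NULLCHECK_OF_NONNULL_VALUE. The names here are
// hypothetical stand-ins, not the actual FSNamesystem code.
public class RedundantNullCheckSketch {
    static String describe(String blocks) {
        if (blocks == null) {
            throw new IllegalArgumentException("blocks");
        }
        // At this point 'blocks' is provably non-null, so a second check
        // like `if (blocks != null) { ... }` is dead code and triggers the
        // findbugs warning; the fix is simply to delete the redundant check.
        return "locations: " + blocks;
    }

    public static void main(String[] args) {
        System.out.println(describe("blk_1")); // prints locations: blk_1
    }
}
```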





[jira] [Commented] (HDFS-7449) Add metrics to NFS gateway

2014-12-12 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7449?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14244022#comment-14244022
 ] 

Hudson commented on HDFS-7449:
--

FAILURE: Integrated in Hadoop-Yarn-trunk #773 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/773/])
HDFS-7449. Add metrics to NFS gateway. Contributed by Brandon Li (brandonli: 
rev f6f2a3f1c73266bfedd802eacde60d8b19b81015)
* 
hadoop-hdfs-project/hadoop-hdfs-nfs/src/main/java/org/apache/hadoop/hdfs/nfs/nfs3/WriteCtx.java
* 
hadoop-hdfs-project/hadoop-hdfs-nfs/src/main/java/org/apache/hadoop/hdfs/nfs/nfs3/OpenFileCtx.java
* 
hadoop-hdfs-project/hadoop-hdfs-nfs/src/main/java/org/apache/hadoop/hdfs/nfs/nfs3/Nfs3.java
* 
hadoop-hdfs-project/hadoop-hdfs-nfs/src/main/java/org/apache/hadoop/hdfs/nfs/nfs3/Nfs3Metrics.java
* 
hadoop-hdfs-project/hadoop-hdfs-nfs/src/test/java/org/apache/hadoop/hdfs/nfs/nfs3/TestNfs3HttpServer.java
* 
hadoop-hdfs-project/hadoop-hdfs-nfs/src/main/java/org/apache/hadoop/hdfs/nfs/nfs3/Nfs3Utils.java
* 
hadoop-hdfs-project/hadoop-hdfs-nfs/src/main/java/org/apache/hadoop/hdfs/nfs/nfs3/RpcProgramNfs3.java
* 
hadoop-hdfs-project/hadoop-hdfs-nfs/src/main/java/org/apache/hadoop/hdfs/nfs/conf/NfsConfigKeys.java
* 
hadoop-hdfs-project/hadoop-hdfs-nfs/src/main/java/org/apache/hadoop/hdfs/nfs/nfs3/WriteManager.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


> Add metrics to NFS gateway
> --
>
> Key: HDFS-7449
> URL: https://issues.apache.org/jira/browse/HDFS-7449
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: nfs
>Affects Versions: 2.7.0
>Reporter: Brandon Li
>Assignee: Brandon Li
> Fix For: 2.7.0
>
> Attachments: HDFS-7449.001.patch, HDFS-7449.002.patch, 
> HDFS-7449.003.patch, HDFS-7449.004.patch, HDFS-7449.005.patch, 
> HDFS-7449.006.patch, HDFS-7449.007.patch
>
>
> Add metrics to collect NFSv3 handler operation counts, response times, etc.





[jira] [Commented] (HDFS-7497) Inconsistent report of decommissioning DataNodes between dfsadmin and NameNode webui

2014-12-12 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7497?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14244019#comment-14244019
 ] 

Hudson commented on HDFS-7497:
--

FAILURE: Integrated in Hadoop-Yarn-trunk #773 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/773/])
HDFS-7497. Inconsistent report of decommissioning DataNodes between dfsadmin 
and NameNode webui. Contributed by Yongjun Zhang. (wang: rev 
b437f5eef40874287d4fbf9d8e43f1a857b5621f)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestDecommissioningStatus.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeManager.java


> Inconsistent report of decommissioning DataNodes between dfsadmin and 
> NameNode webui
> 
>
> Key: HDFS-7497
> URL: https://issues.apache.org/jira/browse/HDFS-7497
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode, namenode
>Reporter: Yongjun Zhang
>Assignee: Yongjun Zhang
>  Labels: supportability
> Fix For: 2.7.0
>
> Attachments: HDFS-7497.001.patch
>
>
> It's observed that the dfsadmin report lists DNs in the decommissioning state 
> while the NN UI lists the same DNs as dead.
> What happens is:
> the NN web UI uses two steps to get the result:
> * first collect a list of all alive DNs, 
> * then traverse all live DNs to find decommissioning DNs. 
> It calls the following method to decide whether a DN is dead or alive:
> {code}
>   /** Is the datanode dead? */
>   boolean isDatanodeDead(DatanodeDescriptor node) {
>     return (node.getLastUpdate() <
>         (Time.now() - heartbeatExpireInterval));
>   }
> {code}
> On the other hand, dfsadmin traverses all DNs to find all decommissioning 
> DNs (checking whether a DN is in the {{AdminStates.DECOMMISSION_INPROGRESS}} state), 
> without checking whether a DN is dead or alive as above.
> The problem is that when a DN is determined to be dead, its state may still be 
> {{AdminStates.DECOMMISSION_INPROGRESS}}.





[jira] [Commented] (HDFS-7515) Fix new findbugs warnings in hadoop-hdfs

2014-12-12 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7515?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14244027#comment-14244027
 ] 

Hudson commented on HDFS-7515:
--

FAILURE: Integrated in Hadoop-Yarn-trunk #773 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/773/])
HDFS-7515. Fix new findbugs warnings in hadoop-hdfs. Contributed by Haohui Mai. 
(wheat9: rev b9f6d0c956f0278c8b9b83e05b523a442a730ebb)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/BlockPoolSlice.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/offlineImageViewer/OfflineImageViewerPB.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/BlockReaderFactory.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/XAttrPermissionFilter.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/offlineImageViewer/FileDistributionCalculator.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/tools/offlineImageViewer/TestOfflineImageViewer.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestEncryptionZones.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/snapshot/TestSnapshot.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirStatAndListingOp.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/offlineImageViewer/LsImageVisitor.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetImpl.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/offlineImageViewer/DelimitedImageVisitor.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSImageUtil.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FileJournalManager.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/offlineImageViewer/FSImageLoader.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockReceiver.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/qjournal/server/JournalNode.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/Storage.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataStorage.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/offlineImageViewer/FileDistributionVisitor.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNodeRpcServer.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/DFSAdmin.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/offlineImageViewer/PBImageXmlWriter.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/web/webhdfs/WebHdfsHandler.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INode.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/offlineImageViewer/FSImageHandler.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/web/webhdfs/ExceptionHandler.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirRenameOp.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSOutputStream.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/mover/Mover.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NamenodeFsck.java


> Fix new findbugs warnings in hadoop-hdfs
> 
>
> Key: HDFS-7515
> URL: https://issues.apache.org/jira/browse/HDFS-7515
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Haohui Mai
>Assignee: Haohui Mai
> Fix For: 2.7.0
>
> Attachments: HADOOP-10480.000.patch, HADOOP-10480.001.patch, 
> HADOOP-10480.002.patch, HADOOP-10480.003.patch, HADOOP-10480.2.patch, 
> HADOOP-10480.patch
>
>
> The following findbugs warnings need to be fixed:
> {noformat}
> [INFO] --- findbugs-maven-plugin:2.5.3:check (default-cli) @ hadoop-hdfs ---
> [INFO] BugInstance size is 14
> [INFO] Error size is 0
> [INFO] Total bugs: 14
> [INFO] Redundant nullcheck of curPeer, which is known to be non-null in 
> org.apache.hadoop.hdfs.BlockReaderFactory.getRemoteBlockReaderFromTcp() 
> ["org.apac

[jira] [Commented] (HDFS-7426) Change nntop JMX format to be a JSON blob

2014-12-12 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7426?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14244013#comment-14244013
 ] 

Hadoop QA commented on HDFS-7426:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12686794/hdfs-7426.005.patch
  against trunk revision bda748a.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 4 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:red}-1 findbugs{color}.  The patch appears to introduce 1 new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-hdfs-project/hadoop-hdfs:

  org.apache.hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/9021//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HDFS-Build/9021//artifact/patchprocess/newPatchFindbugsWarningshadoop-hdfs.html
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/9021//console

This message is automatically generated.

> Change nntop JMX format to be a JSON blob
> -
>
> Key: HDFS-7426
> URL: https://issues.apache.org/jira/browse/HDFS-7426
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Affects Versions: 2.7.0
>Reporter: Andrew Wang
>Assignee: Andrew Wang
> Attachments: hdfs-7426.001.patch, hdfs-7426.002.patch, 
> hdfs-7426.003.patch, hdfs-7426.004.patch, hdfs-7426.005.patch
>
>
> After discussion with [~maysamyabandeh], we think we can adjust the JMX 
> output to instead be a richer JSON blob. This should be easier to parse and 
> also be more informative.
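As a rough illustration of what "a richer JSON blob" exposed through JMX might look like, the sketch below builds one with plain string formatting. The key names ({{windowLenMs}}, {{topUsers}}, etc.) are invented for illustration and are not the format HDFS-7426 actually settled on.

```java
// Hypothetical sketch of a JSON-formatted top-users metric blob, built
// with plain StringBuilder to stay dependency-free. Key names are
// invented for illustration; they are not the actual nntop output.
public class TopMetricsJsonSketch {
    static String topUsersJson(long windowMs, String op,
                               String[] users, long[] counts) {
        StringBuilder sb = new StringBuilder();
        sb.append("{\"windowLenMs\":").append(windowMs)
          .append(",\"op\":\"").append(op).append("\",\"topUsers\":[");
        for (int i = 0; i < users.length; i++) {
            if (i > 0) sb.append(',');
            sb.append("{\"user\":\"").append(users[i])
              .append("\",\"count\":").append(counts[i]).append('}');
        }
        return sb.append("]}").toString();
    }

    public static void main(String[] args) {
        // One window per (windowLength, op) pair, with ranked user counts.
        System.out.println(topUsersJson(60_000L, "listStatus",
            new String[]{"alice", "bob"}, new long[]{120, 45}));
    }
}
```

A structured blob like this is a single JMX attribute that monitoring tools can parse in one pass, rather than many flat per-user attributes.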





[jira] [Commented] (HDFS-7449) Add metrics to NFS gateway

2014-12-12 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7449?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14243966#comment-14243966
 ] 

Hudson commented on HDFS-7449:
--

FAILURE: Integrated in Hadoop-Yarn-trunk-Java8 #38 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/38/])
HDFS-7449. Add metrics to NFS gateway. Contributed by Brandon Li (brandonli: 
rev f6f2a3f1c73266bfedd802eacde60d8b19b81015)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs-nfs/src/main/java/org/apache/hadoop/hdfs/nfs/nfs3/WriteManager.java
* 
hadoop-hdfs-project/hadoop-hdfs-nfs/src/main/java/org/apache/hadoop/hdfs/nfs/nfs3/OpenFileCtx.java
* 
hadoop-hdfs-project/hadoop-hdfs-nfs/src/main/java/org/apache/hadoop/hdfs/nfs/nfs3/RpcProgramNfs3.java
* 
hadoop-hdfs-project/hadoop-hdfs-nfs/src/main/java/org/apache/hadoop/hdfs/nfs/nfs3/WriteCtx.java
* 
hadoop-hdfs-project/hadoop-hdfs-nfs/src/main/java/org/apache/hadoop/hdfs/nfs/nfs3/Nfs3.java
* 
hadoop-hdfs-project/hadoop-hdfs-nfs/src/main/java/org/apache/hadoop/hdfs/nfs/nfs3/Nfs3Utils.java
* 
hadoop-hdfs-project/hadoop-hdfs-nfs/src/main/java/org/apache/hadoop/hdfs/nfs/nfs3/Nfs3Metrics.java
* 
hadoop-hdfs-project/hadoop-hdfs-nfs/src/test/java/org/apache/hadoop/hdfs/nfs/nfs3/TestNfs3HttpServer.java
* 
hadoop-hdfs-project/hadoop-hdfs-nfs/src/main/java/org/apache/hadoop/hdfs/nfs/conf/NfsConfigKeys.java


> Add metrics to NFS gateway
> --
>
> Key: HDFS-7449
> URL: https://issues.apache.org/jira/browse/HDFS-7449
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: nfs
>Affects Versions: 2.7.0
>Reporter: Brandon Li
>Assignee: Brandon Li
> Fix For: 2.7.0
>
> Attachments: HDFS-7449.001.patch, HDFS-7449.002.patch, 
> HDFS-7449.003.patch, HDFS-7449.004.patch, HDFS-7449.005.patch, 
> HDFS-7449.006.patch, HDFS-7449.007.patch
>
>
> Add metrics to collect NFSv3 handler operation counts, response times, etc.





[jira] [Commented] (HDFS-7497) Inconsistent report of decommissioning DataNodes between dfsadmin and NameNode webui

2014-12-12 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7497?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14243963#comment-14243963
 ] 

Hudson commented on HDFS-7497:
--

FAILURE: Integrated in Hadoop-Yarn-trunk-Java8 #38 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/38/])
HDFS-7497. Inconsistent report of decommissioning DataNodes between dfsadmin 
and NameNode webui. Contributed by Yongjun Zhang. (wang: rev 
b437f5eef40874287d4fbf9d8e43f1a857b5621f)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeManager.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestDecommissioningStatus.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


> Inconsistent report of decommissioning DataNodes between dfsadmin and 
> NameNode webui
> 
>
> Key: HDFS-7497
> URL: https://issues.apache.org/jira/browse/HDFS-7497
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode, namenode
>Reporter: Yongjun Zhang
>Assignee: Yongjun Zhang
>  Labels: supportability
> Fix For: 2.7.0
>
> Attachments: HDFS-7497.001.patch
>
>
> It's observed that the dfsadmin report lists DNs in the decommissioning state 
> while the NN UI lists the same DNs as dead.
> What happens is:
> the NN web UI uses two steps to get the result:
> * first collect a list of all alive DNs, 
> * then traverse all live DNs to find decommissioning DNs. 
> It calls the following method to decide whether a DN is dead or alive:
> {code}
>   /** Is the datanode dead? */
>   boolean isDatanodeDead(DatanodeDescriptor node) {
>     return (node.getLastUpdate() <
>         (Time.now() - heartbeatExpireInterval));
>   }
> {code}
> On the other hand, dfsadmin traverses all DNs to find all decommissioning 
> DNs (checking whether a DN is in the {{AdminStates.DECOMMISSION_INPROGRESS}} state), 
> without checking whether a DN is dead or alive as above.
> The problem is that when a DN is determined to be dead, its state may still be 
> {{AdminStates.DECOMMISSION_INPROGRESS}}.





[jira] [Commented] (HDFS-7515) Fix new findbugs warnings in hadoop-hdfs

2014-12-12 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7515?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14243971#comment-14243971
 ] 

Hudson commented on HDFS-7515:
--

FAILURE: Integrated in Hadoop-Yarn-trunk-Java8 #38 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/38/])
HDFS-7515. Fix new findbugs warnings in hadoop-hdfs. Contributed by Haohui Mai. 
(wheat9: rev b9f6d0c956f0278c8b9b83e05b523a442a730ebb)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestEncryptionZones.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INode.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/offlineImageViewer/FSImageHandler.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/snapshot/TestSnapshot.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockReceiver.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/offlineImageViewer/DelimitedImageVisitor.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirStatAndListingOp.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNodeRpcServer.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetImpl.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/offlineImageViewer/PBImageXmlWriter.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirRenameOp.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/Storage.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/offlineImageViewer/FileDistributionCalculator.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/offlineImageViewer/OfflineImageViewerPB.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/offlineImageViewer/FSImageLoader.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/BlockPoolSlice.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataStorage.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/web/webhdfs/ExceptionHandler.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/tools/offlineImageViewer/TestOfflineImageViewer.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/qjournal/server/JournalNode.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/DFSAdmin.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/mover/Mover.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FileJournalManager.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/web/webhdfs/WebHdfsHandler.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/XAttrPermissionFilter.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSImageUtil.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/offlineImageViewer/LsImageVisitor.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/offlineImageViewer/FileDistributionVisitor.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSOutputStream.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NamenodeFsck.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/BlockReaderFactory.java


> Fix new findbugs warnings in hadoop-hdfs
> 
>
> Key: HDFS-7515
> URL: https://issues.apache.org/jira/browse/HDFS-7515
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Haohui Mai
>Assignee: Haohui Mai
> Fix For: 2.7.0
>
> Attachments: HADOOP-10480.000.patch, HADOOP-10480.001.patch, 
> HADOOP-10480.002.patch, HADOOP-10480.003.patch, HADOOP-10480.2.patch, 
> HADOOP-10480.patch
>
>
> The following findbugs warnings need to be fixed:
> {noformat}
> [INFO] --- findbugs-maven-plugin:2.5.3:check (default-cli) @ hadoop-hdfs ---
> [INFO] BugInstance size is 14
> [INFO] Error size is 0
> [INFO] Total bugs: 14
> [INFO] Redundant nullcheck of curPeer, which is known to be non-null in 
> org.apache.hadoop.hdfs.BlockReaderFactory.getRemoteBlockReaderFromTcp() 
> 

[jira] [Updated] (HDFS-7056) Snapshot support for truncate

2014-12-12 Thread Konstantin Shvachko (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7056?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Shvachko updated HDFS-7056:
--
Attachment: HDFS-7056.patch

Updating patch to current trunk.
[~Byron Wong] has been extensively testing snapshots with truncate over the 
last few weeks. He found two corner cases:
# file size was not computed in ContentSummary for a file that has snapshots 
and is not deleted
# disk space computation was incorrect when snapshots are deleted in a certain 
sequence after multiple truncates.

Both are fixed in the latest patch, and we added a series of test cases to 
capture them.

It's been a while. Jing and Colin had good comments, which have been addressed 
by now.
Could you guys please review so that we can finally commit this? It really 
takes cycles to keep up with the evolving trunk.

> Snapshot support for truncate
> -
>
> Key: HDFS-7056
> URL: https://issues.apache.org/jira/browse/HDFS-7056
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Affects Versions: 3.0.0
>Reporter: Konstantin Shvachko
>Assignee: Plamen Jeliazkov
> Attachments: HDFS-3107-HDFS-7056-combined.patch, 
> HDFS-3107-HDFS-7056-combined.patch, HDFS-3107-HDFS-7056-combined.patch, 
> HDFS-3107-HDFS-7056-combined.patch, HDFS-3107-HDFS-7056-combined.patch, 
> HDFS-7056.patch, HDFS-7056.patch, HDFS-7056.patch, HDFS-7056.patch, 
> HDFS-7056.patch, HDFS-7056.patch, HDFS-7056.patch, 
> HDFSSnapshotWithTruncateDesign.docx
>
>
> Implementation of truncate in HDFS-3107 does not allow truncating files which 
> are in a snapshot. It is desirable to be able to truncate and still keep the 
> old file state of the file in the snapshot.





[jira] [Updated] (HDFS-7056) Snapshot support for truncate

2014-12-12 Thread Konstantin Shvachko (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7056?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Shvachko updated HDFS-7056:
--
Status: Open  (was: Patch Available)

> Snapshot support for truncate
> -
>
> Key: HDFS-7056
> URL: https://issues.apache.org/jira/browse/HDFS-7056
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Affects Versions: 3.0.0
>Reporter: Konstantin Shvachko
>Assignee: Plamen Jeliazkov
> Attachments: HDFS-3107-HDFS-7056-combined.patch, 
> HDFS-3107-HDFS-7056-combined.patch, HDFS-3107-HDFS-7056-combined.patch, 
> HDFS-3107-HDFS-7056-combined.patch, HDFS-3107-HDFS-7056-combined.patch, 
> HDFS-7056.patch, HDFS-7056.patch, HDFS-7056.patch, HDFS-7056.patch, 
> HDFS-7056.patch, HDFS-7056.patch, HDFSSnapshotWithTruncateDesign.docx
>
>
> Implementation of truncate in HDFS-3107 does not allow truncating files which 
> are in a snapshot. It is desirable to be able to truncate and still keep the 
> old file state of the file in the snapshot.





[jira] [Updated] (HDFS-3107) HDFS truncate

2014-12-12 Thread Konstantin Shvachko (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3107?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Shvachko updated HDFS-3107:
--
Attachment: HDFS-3107.patch

Updating patch to current trunk.

> HDFS truncate
> -
>
> Key: HDFS-3107
> URL: https://issues.apache.org/jira/browse/HDFS-3107
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: datanode, namenode
>Reporter: Lei Chang
>Assignee: Plamen Jeliazkov
> Attachments: HDFS-3107-HDFS-7056-combined.patch, HDFS-3107.008.patch, 
> HDFS-3107.patch, HDFS-3107.patch, HDFS-3107.patch, HDFS-3107.patch, 
> HDFS-3107.patch, HDFS-3107.patch, HDFS-3107.patch, HDFS-3107.patch, 
> HDFS-3107.patch, HDFS_truncate.pdf, HDFS_truncate.pdf, HDFS_truncate.pdf, 
> HDFS_truncate_semantics_Mar15.pdf, HDFS_truncate_semantics_Mar21.pdf, 
> editsStored, editsStored.xml
>
>   Original Estimate: 1,344h
>  Remaining Estimate: 1,344h
>
> Systems with transaction support often need to undo changes made to the 
> underlying storage when a transaction is aborted. Currently HDFS does not 
> support truncate, a standard POSIX operation and the reverse of append. To 
> overcome this limitation, upper-layer applications resort to ugly 
> workarounds, such as tracking the discarded byte range per file in a 
> separate metadata store and periodically running a vacuum process to 
> rewrite compacted files.
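The append/truncate symmetry described above can be sketched on a local file. This is only an illustration of the POSIX byte-range semantics using java.nio's FileChannel#truncate; the HDFS FileSystem#truncate call from the attached patches would need a running cluster and is not shown here.

```java
import java.io.IOException;
import java.nio.channels.FileChannel;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

// Local-file sketch of truncate as the reverse operation of append.
public class TruncateDemo {
    public static void main(String[] args) throws IOException {
        Path f = Files.createTempFile("truncate-demo", ".txt");
        Files.write(f, "0123456789".getBytes(StandardCharsets.UTF_8));

        // Append grows the file past its original length ...
        Files.write(f, "abcde".getBytes(StandardCharsets.UTF_8),
                    StandardOpenOption.APPEND);
        System.out.println(Files.size(f));                       // 15

        // ... and truncate undoes it by discarding every byte past the new
        // length -- no separate metadata store or vacuum rewrite needed.
        try (FileChannel ch = FileChannel.open(f, StandardOpenOption.WRITE)) {
            ch.truncate(10);
        }
        System.out.println(new String(Files.readAllBytes(f),
                                      StandardCharsets.UTF_8)); // 0123456789
        Files.delete(f);
    }
}
```

An aborted transaction then reduces to a single truncate back to the last committed length, instead of the per-file byte-range bookkeeping described above.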





[jira] [Updated] (HDFS-7517) Remove redundant non-null checks in {{FSNamesystem#getBlockLocations}}

2014-12-12 Thread Haohui Mai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7517?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haohui Mai updated HDFS-7517:
-
Attachment: HDFS-7517.000.patch

> Remove redundant non-null checks in {{FSNamesystem#getBlockLocations}}
> --
>
> Key: HDFS-7517
> URL: https://issues.apache.org/jira/browse/HDFS-7517
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Haohui Mai
>Assignee: Haohui Mai
> Attachments: HDFS-7517.000.patch
>
>
> There are redundant non-null checks in this function that findbugs 
> complains about. They should be removed.
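For illustration, here is a minimal made-up reproduction of the kind of pattern findbugs reports as a redundant nullcheck of a value known to be non-null. The names are invented; this is not the actual FSNamesystem#getBlockLocations code.

```java
// Hypothetical example of a redundant non-null check; names are invented.
public class RedundantCheckDemo {
    static String[] locatedBlocks() {
        // `blocks` is assigned a freshly constructed array and therefore
        // cannot be null on the next line ...
        String[] blocks = new String[] {"blk_1", "blk_2"};
        // ... so this check is dead code, and findbugs flags it.
        if (blocks != null) {
            return blocks;
        }
        return new String[0]; // unreachable
    }

    public static void main(String[] args) {
        System.out.println(locatedBlocks().length); // 2
    }
}
```

The fix in such cases is simply to drop the dead branch and return the value directly.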




