[jira] [Commented] (HDFS-3059) ssl-server.xml causes NullPointer

2015-10-18 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3059?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14962724#comment-14962724
 ] 

Hadoop QA commented on HDFS-3059:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | pre-patch |  21m 19s | Pre-patch trunk has 1 extant 
Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:red}-1{color} | tests included |   0m  0s | The patch doesn't appear 
to include any new or modified tests.  Please justify why no new tests are 
needed for this patch. Also please list what manual steps were performed to 
verify this patch. |
| {color:green}+1{color} | javac |   8m 35s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |  11m 10s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 24s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:red}-1{color} | checkstyle |   1m 30s | The applied patch generated  2 
new checkstyle issues (total was 508, now 510). |
| {color:green}+1{color} | whitespace |   0m  0s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 39s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 36s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   2m 39s | The patch does not introduce 
any new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | native |   3m 25s | Pre-build of native portion |
| {color:red}-1{color} | hdfs tests |  62m 57s | Tests failed in hadoop-hdfs. |
| | | 114m 18s | |
\\
\\
|| Reason || Tests ||
| Failed unit tests | hadoop.hdfs.TestReplaceDatanodeOnFailure |
|   | hadoop.hdfs.TestRollingUpgrade |
|   | hadoop.hdfs.server.namenode.TestFileTruncate |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12766944/HDFS-3059.06.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / 0ab3f9d |
| Pre-patch Findbugs warnings | 
https://builds.apache.org/job/PreCommit-HDFS-Build/13044/artifact/patchprocess/trunkFindbugsWarningshadoop-hdfs.html
 |
| checkstyle |  
https://builds.apache.org/job/PreCommit-HDFS-Build/13044/artifact/patchprocess/diffcheckstylehadoop-hdfs.txt
 |
| hadoop-hdfs test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/13044/artifact/patchprocess/testrun_hadoop-hdfs.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/13044/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf909.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/13044/console |


This message was automatically generated.

> ssl-server.xml causes NullPointer
> -
>
> Key: HDFS-3059
> URL: https://issues.apache.org/jira/browse/HDFS-3059
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode, security
>Affects Versions: 2.7.1
> Environment: in core-site.xml:
> {code:xml}
>   <property>
>     <name>hadoop.security.authentication</name>
>     <value>kerberos</value>
>   </property>
>   <property>
>     <name>hadoop.security.authorization</name>
>     <value>true</value>
>   </property>
> {code}
> in hdfs-site.xml:
> {code:xml}
>   <property>
>     <name>dfs.https.server.keystore.resource</name>
>     <value>/etc/hadoop/conf/ssl-server.xml</value>
>   </property>
>   <property>
>     <name>dfs.https.enable</name>
>     <value>true</value>
>   </property>
>   <property>
>     ...other security props
>   </property>
> {code}
>Reporter: Evert Lammerts
>Assignee: Xiao Chen
>Priority: Minor
>  Labels: BB2015-05-TBR
> Attachments: HDFS-3059.02.patch, HDFS-3059.03.patch, 
> HDFS-3059.04.patch, HDFS-3059.05.patch, HDFS-3059.06.patch, HDFS-3059.patch, 
> HDFS-3059.patch.2
>
>
> If ssl is enabled (dfs.https.enable) but ssl-server.xml is not available, a 
> DN will crash during startup while setting up an SSL socket with a 
> NullPointerException:
> {noformat}12/03/07 17:08:36 DEBUG security.Krb5AndCertsSslSocketConnector: 
> useKerb = false, useCerts = true
> jetty.ssl.password : jetty.ssl.keypassword : 12/03/07 17:08:36 INFO 
> mortbay.log: jetty-6.1.26.cloudera.1
> 12/03/07 17:08:36 INFO mortbay.log: Started 
> selectchannelconnec...@p-worker35.alley.sara.nl:1006
> 12/03/07 17:08:36 DEBUG security.Krb5AndCertsSslSocketConnector: Creating new 
> KrbServerSocket for: 0.0.0.0
> 12/03/07 17:08:36 WARN mortbay.log: java.lang.NullPointerException
> 12/03/07 17:08:36 WARN mortbay.log: failed 
> Krb5AndCertsSslSocketConnector@0.0.0.0:50475: java.io.IOException: 
> !JsseListener: java.lang.NullPointerException
> 12/03/07 17:08:36 WARN mortbay.log: failed Server@604788d5: 
> java.io.IOException: !JsseListener: java.lang.NullPointerException

[jira] [Commented] (HDFS-7964) Add support for async edit logging

2015-10-18 Thread Yi Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7964?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14962767#comment-14962767
 ] 

Yi Liu commented on HDFS-7964:
--

1. That's right.
2. You can keep it.

> Add support for async edit logging
> --
>
> Key: HDFS-7964
> URL: https://issues.apache.org/jira/browse/HDFS-7964
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Affects Versions: 2.0.2-alpha
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
> Attachments: HDFS-7964.patch, HDFS-7964.patch
>
>
> Edit logging is a major source of contention within the NN.  LogEdit is 
> called within the namespace write lock, while logSync is called outside of the 
> lock to allow greater concurrency.  The handler thread remains busy until 
> logSync returns to provide the client with a durability guarantee for the 
> response.
> Write heavy RPC load and/or slow IO causes handlers to stall in logSync.  
> Although the write lock is not held, readers are limited/starved and the call 
> queue fills.  Combining an edit log thread with postponed RPC responses from 
> HADOOP-10300 will provide the same durability guarantee but immediately free 
> up the handlers.
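
The pattern described (append under the lock, sync and respond from a separate
thread) can be sketched as follows. This is a hedged illustration, not the
attached patch: the names are invented, and Java 8's CompletableFuture stands
in for the postponed RPC responses of HADOOP-10300.

{code}
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.LinkedBlockingQueue;

class AsyncEditLogSketch {
  private final BlockingQueue<CompletableFuture<Void>> pending =
      new LinkedBlockingQueue<>();

  AsyncEditLogSketch() {
    Thread syncer = new Thread(this::syncLoop, "edit-log-syncer");
    syncer.setDaemon(true);
    syncer.start();
  }

  // Called by a handler thread: append the edit, get back a future that
  // completes once the edit is durable. The handler attaches the postponed
  // response to the future and is immediately free for the next call.
  CompletableFuture<Void> logEditAsync(Runnable appendEdit) {
    appendEdit.run();                         // logEdit: append to the buffer
    CompletableFuture<Void> durable = new CompletableFuture<>();
    pending.add(durable);
    return durable;
  }

  private void syncLoop() {
    List<CompletableFuture<Void>> batch = new ArrayList<>();
    try {
      while (true) {
        batch.add(pending.take());            // wait for at least one edit
        pending.drainTo(batch);               // group-commit whatever piled up
        syncToDisk();                         // one logSync covers the batch
        for (CompletableFuture<Void> f : batch) {
          f.complete(null);                   // now release the responses
        }
        batch.clear();
      }
    } catch (InterruptedException e) {
      Thread.currentThread().interrupt();
    }
  }

  private void syncToDisk() { /* fsync the current edit log segment */ }
}
{code}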



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9255) Consolidate block recovery related implementation into a single class

2015-10-18 Thread Walter Su (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9255?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Walter Su updated HDFS-9255:

Attachment: HDFS-9255.04.patch

Thanks [~rakeshr]! Uploaded the 04 patch addressing all your comments.

Many of the checkstyle issues already existed before this patch. I'll fix them 
when I move the code.

> Consolidate block recovery related implementation into a single class
> -
>
> Key: HDFS-9255
> URL: https://issues.apache.org/jira/browse/HDFS-9255
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Reporter: Walter Su
>Assignee: Walter Su
>Priority: Minor
> Attachments: HDFS-9255.01.patch, HDFS-9255.02.patch, 
> HDFS-9255.03.patch, HDFS-9255.04.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9241) HDFS clients can't construct HdfsConfiguration instances

2015-10-18 Thread Haohui Mai (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9241?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14962773#comment-14962773
 ] 

Haohui Mai commented on HDFS-9241:
--

Though I don't think this is a strictly incompatible change, since 
applications won't break if they still depend on hadoop-hdfs, I see a lot of 
value in allowing applications to just change their dependency to 
hadoop-hdfs-client without changing any code.

I think the proposal makes a lot of sense. +1 on the proposal. [~liuml07], do 
you agree?

> HDFS clients can't construct HdfsConfiguration instances
> 
>
> Key: HDFS-9241
> URL: https://issues.apache.org/jira/browse/HDFS-9241
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Reporter: Steve Loughran
>Assignee: Mingliang Liu
> Attachments: HDFS-9241.000.patch
>
>
> the changes for the hdfs client classpath make instantiating 
> {{HdfsConfiguration}} from the client impossible; it only lives server side. 
> This breaks any app which creates one.
> I know people will look at the {{@Private}} tag and say "don't do that then", 
> but it's worth considering precisely why I, at least, do this: it's the only 
> way to guarantee that the hdfs-default and hdfs-site resources get on the 
> classpath, including all the security settings. It's precisely the use case 
> which {{HdfsConfigurationLoader.init();}} offers internally to the hdfs code.
> What am I meant to do now? 
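
For context, what {{HdfsConfiguration}} buys the client can be approximated by
hand. A minimal sketch, assuming hdfs-default.xml and hdfs-site.xml are on the
classpath (this illustrates the workaround being discussed, not a recommended
API):

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;

public class HdfsClientConf {
  public static void main(String[] args) throws Exception {
    // What new HdfsConfiguration() guarantees implicitly: hdfs-default.xml
    // and hdfs-site.xml are registered as resources, so security and other
    // HDFS settings are picked up from the classpath.
    Configuration conf = new Configuration();
    conf.addResource("hdfs-default.xml");
    conf.addResource("hdfs-site.xml");
    FileSystem fs = FileSystem.get(conf);
    System.out.println(fs.getUri());
  }
}
{code}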



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-3059) ssl-server.xml causes NullPointer

2015-10-18 Thread Xiao Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3059?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HDFS-3059:

Status: Patch Available  (was: Open)

> ssl-server.xml causes NullPointer
> -
>
> Key: HDFS-3059
> URL: https://issues.apache.org/jira/browse/HDFS-3059
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode, security
>Affects Versions: 2.7.1
> Environment: in core-site.xml:
> {code:xml}
>   <property>
>     <name>hadoop.security.authentication</name>
>     <value>kerberos</value>
>   </property>
>   <property>
>     <name>hadoop.security.authorization</name>
>     <value>true</value>
>   </property>
> {code}
> in hdfs-site.xml:
> {code:xml}
>   <property>
>     <name>dfs.https.server.keystore.resource</name>
>     <value>/etc/hadoop/conf/ssl-server.xml</value>
>   </property>
>   <property>
>     <name>dfs.https.enable</name>
>     <value>true</value>
>   </property>
>   <property>
>     ...other security props
>   </property>
> {code}
>Reporter: Evert Lammerts
>Assignee: Xiao Chen
>Priority: Minor
>  Labels: BB2015-05-TBR
> Attachments: HDFS-3059.02.patch, HDFS-3059.03.patch, 
> HDFS-3059.04.patch, HDFS-3059.05.patch, HDFS-3059.06.patch, HDFS-3059.patch, 
> HDFS-3059.patch.2
>
>
> If ssl is enabled (dfs.https.enable) but ssl-server.xml is not available, a 
> DN will crash during startup while setting up an SSL socket with a 
> NullPointerException:
> {noformat}12/03/07 17:08:36 DEBUG security.Krb5AndCertsSslSocketConnector: 
> useKerb = false, useCerts = true
> jetty.ssl.password : jetty.ssl.keypassword : 12/03/07 17:08:36 INFO 
> mortbay.log: jetty-6.1.26.cloudera.1
> 12/03/07 17:08:36 INFO mortbay.log: Started 
> selectchannelconnec...@p-worker35.alley.sara.nl:1006
> 12/03/07 17:08:36 DEBUG security.Krb5AndCertsSslSocketConnector: Creating new 
> KrbServerSocket for: 0.0.0.0
> 12/03/07 17:08:36 WARN mortbay.log: java.lang.NullPointerException
> 12/03/07 17:08:36 WARN mortbay.log: failed 
> Krb5AndCertsSslSocketConnector@0.0.0.0:50475: java.io.IOException: 
> !JsseListener: java.lang.NullPointerException
> 12/03/07 17:08:36 WARN mortbay.log: failed Server@604788d5: 
> java.io.IOException: !JsseListener: java.lang.NullPointerException
> 12/03/07 17:08:36 INFO mortbay.log: Stopped 
> Krb5AndCertsSslSocketConnector@0.0.0.0:50475
> 12/03/07 17:08:36 INFO mortbay.log: Stopped 
> selectchannelconnec...@p-worker35.alley.sara.nl:1006
> 12/03/07 17:08:37 INFO datanode.DataNode: Waiting for threadgroup to exit, 
> active threads is 0{noformat}
> The same happens if I set dfs.https.server.keystore.resource to an absolute 
> path to an existing file - in this case the file cannot be found, but not 
> even a WARN is given.
> Since we know the resource referenced by dfs.https.server.keystore.resource 
> must specify 4 properties (ssl.server.truststore.location, 
> ssl.server.keystore.location, ssl.server.keystore.password, and 
> ssl.server.keystore.keypassword), we should check that they are set and 
> throw an IOException if they are not.
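
A minimal sketch of the proposed check, assuming the ssl-server.xml resource
has already been loaded into a {{Configuration}} (class and method names here
are hypothetical):

{code}
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;

class SslConfCheckSketch {
  private static final String[] REQUIRED_SSL_PROPS = {
      "ssl.server.truststore.location",
      "ssl.server.keystore.location",
      "ssl.server.keystore.password",
      "ssl.server.keystore.keypassword"
  };

  // Fail fast with a clear message instead of a later NullPointerException.
  static void validateSslConf(Configuration sslConf) throws IOException {
    for (String prop : REQUIRED_SSL_PROPS) {
      if (sslConf.get(prop) == null) {
        throw new IOException("Required SSL property " + prop
            + " is not set; check ssl-server.xml");
      }
    }
  }
}
{code}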



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8914) Documentation conflict regarding fail-over of Namenode

2015-10-18 Thread Ravindra Babu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8914?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14962798#comment-14962798
 ] 

Ravindra Babu commented on HDFS-8914:
-

Hi, please deploy this patch as early as possible to avoid confusion in 
readers' minds. Without this patch, readers of the HDFS design article will 
think that the HDFS NameNode is a single point of failure in the system, even 
though it is not. 

> Documentation conflict regarding fail-over of Namenode
> --
>
> Key: HDFS-8914
> URL: https://issues.apache.org/jira/browse/HDFS-8914
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 2.7.1
> Environment: Documentation page in live
>Reporter: Ravindra Babu
>Assignee: Lars Francke
>Priority: Trivial
> Attachments: HDFS-8914.1.patch
>
>
> Please refer to these two links and correct one of them.
> http://hadoop.apache.org/docs/r2.7.1/hadoop-project-dist/hadoop-hdfs/HdfsDesign.html
> The NameNode machine is a single point of failure for an HDFS cluster. If the 
> NameNode machine fails, manual intervention is necessary. Currently, 
> automatic restart and failover of the NameNode software to another machine is 
> not supported.
> http://hadoop.apache.org/docs/r2.7.1/hadoop-project-dist/hadoop-hdfs/HDFSHighAvailabilityWithQJM.html
> The HDFS High Availability feature addresses the above problems by providing 
> the option of running two redundant NameNodes in the same cluster in an 
> Active/Passive configuration with a hot standby. This allows a fast failover 
> to a new NameNode in the case that a machine crashes, or a graceful 
> administrator-initiated failover for the purpose of planned maintenance.
> Please keep one right version regarding failover statements.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8914) Documentation conflict regarding fail-over of Namenode

2015-10-18 Thread Ravindra Babu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8914?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14962799#comment-14962799
 ] 

Ravindra Babu commented on HDFS-8914:
-

Hi, please deploy this patch as early as possible to avoid confusion in 
readers' minds. Without this patch, readers of the HDFS design article will 
think that the HDFS NameNode is a single point of failure in the system, even 
though it is not. 

> Documentation conflict regarding fail-over of Namenode
> --
>
> Key: HDFS-8914
> URL: https://issues.apache.org/jira/browse/HDFS-8914
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 2.7.1
> Environment: Documentation page in live
>Reporter: Ravindra Babu
>Assignee: Lars Francke
>Priority: Trivial
> Attachments: HDFS-8914.1.patch
>
>
> Please refer to these two links and correct one of them.
> http://hadoop.apache.org/docs/r2.7.1/hadoop-project-dist/hadoop-hdfs/HdfsDesign.html
> The NameNode machine is a single point of failure for an HDFS cluster. If the 
> NameNode machine fails, manual intervention is necessary. Currently, 
> automatic restart and failover of the NameNode software to another machine is 
> not supported.
> http://hadoop.apache.org/docs/r2.7.1/hadoop-project-dist/hadoop-hdfs/HDFSHighAvailabilityWithQJM.html
> The HDFS High Availability feature addresses the above problems by providing 
> the option of running two redundant NameNodes in the same cluster in an 
> Active/Passive configuration with a hot standby. This allows a fast failover 
> to a new NameNode in the case that a machine crashes, or a graceful 
> administrator-initiated failover for the purpose of planned maintenance.
> Please keep one right version regarding failover statements.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8647) Abstract BlockManager's rack policy into BlockPlacementPolicy

2015-10-18 Thread Ming Ma (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8647?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14962697#comment-14962697
 ] 

Ming Ma commented on HDFS-8647:
---

Most of the test failures are due to a mismatch between BlockManager and the 
function definition of {{chooseReplicasToDelete}}; BlockManager should swap 
{{addedNode}} and {{delNodeHint}}.

After the fix, the only failing unit test left is 
{{TestBalancer#testBalancerWithPinnedBlocks}}. Can you please investigate 
whether it is related to the patch?

Nits:
* {{public}} can be removed from {{public DatanodeStorageInfo 
chooseReplicaToDelete}}.
* {{final List<DatanodeStorageInfo> moreThanOne = new 
ArrayList<DatanodeStorageInfo>();}} Given hadoop uses JDK7, you can simplify 
this with type inference: {{final List<DatanodeStorageInfo> moreThanOne = new 
ArrayList<>();}} Same for other places in the patch.
* {{chooseReplicaToDelete}}'s parameter {{Collection<DatanodeStorageInfo> 
first}}. How about naming it {{moreThanOne}} instead? Similarly for {{second}}.


> Abstract BlockManager's rack policy into BlockPlacementPolicy
> -
>
> Key: HDFS-8647
> URL: https://issues.apache.org/jira/browse/HDFS-8647
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Ming Ma
>Assignee: Brahma Reddy Battula
> Attachments: HDFS-8647-001.patch, HDFS-8647-002.patch, 
> HDFS-8647-003.patch, HDFS-8647-004.patch, HDFS-8647-004.patch, 
> HDFS-8647-005.patch, HDFS-8647-006.patch
>
>
> Sometimes we want to have namenode use alternative block placement policy 
> such as upgrade domains in HDFS-7541.
> BlockManager has built-in assumption about rack policy in functions such as 
> useDelHint, blockHasEnoughRacks. That means when we have new block placement 
> policy, we need to modify BlockManager to account for the new policy. Ideally 
> BlockManager should ask BlockPlacementPolicy object instead. That will allow 
> us to provide new BlockPlacementPolicy without changing BlockManager.
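
A hypothetical sketch of the proposed delegation; the signatures are
illustrative, not the actual BlockPlacementPolicy interface:

{code}
import java.util.Collection;

// Stand-in for the real HDFS DatanodeStorageInfo (illustrative only).
class DatanodeStorageInfo {}

abstract class PlacementPolicySketch {
  // Replaces BlockManager's built-in blockHasEnoughRacks: does this replica
  // set satisfy the policy's fault domains (racks, upgrade domains, ...)?
  abstract boolean isPlacementSatisfied(DatanodeStorageInfo[] replicas);

  // Replaces BlockManager's built-in useDelHint logic: which excess replica
  // should be removed according to this policy?
  abstract DatanodeStorageInfo chooseReplicaToDelete(
      Collection<DatanodeStorageInfo> candidates);
}
{code}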



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9226) MiniDFSCluster leaks dependency Mockito via DataNodeTestUtils

2015-10-18 Thread Josh Elser (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9226?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14962801#comment-14962801
 ] 

Josh Elser commented on HDFS-9226:
--

Anything else that needs to be done here? This at least fixed the problem I was 
seeing in Accumulo tests against 2.8.0-SNAPSHOT.

> MiniDFSCluster leaks dependency Mockito via DataNodeTestUtils
> -
>
> Key: HDFS-9226
> URL: https://issues.apache.org/jira/browse/HDFS-9226
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: HDFS, test
>Reporter: Josh Elser
>Assignee: Josh Elser
> Attachments: HDFS-9226.001.patch, HDFS-9226.002.patch, 
> HDFS-9226.003.patch, HDFS-9226.004.patch, HDFS-9226.005.patch
>
>
> Noticed a test failure when attempting to run Accumulo unit tests against 
> 2.8.0-SNAPSHOT:
> {noformat}
> java.lang.NoClassDefFoundError: org/mockito/stubbing/Answer
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.shouldWait(MiniDFSCluster.java:2421)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.waitActive(MiniDFSCluster.java:2323)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.waitActive(MiniDFSCluster.java:2367)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.startDataNodes(MiniDFSCluster.java:1529)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:841)
>   at org.apache.hadoop.hdfs.MiniDFSCluster.(MiniDFSCluster.java:479)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:438)
>   at 
> org.apache.accumulo.start.test.AccumuloDFSBase.miniDfsClusterSetup(AccumuloDFSBase.java:67)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:283)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:173)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:153)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:128)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:203)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:155)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:103)
> Caused by: java.lang.ClassNotFoundException: org.mockito.stubbing.Answer
>   at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
>   at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
>   at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.shouldWait(MiniDFSCluster.java:2421)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.waitActive(MiniDFSCluster.java:2323)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.waitActive(MiniDFSCluster.java:2367)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.startDataNodes(MiniDFSCluster.java:1529)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:841)
>   at org.apache.hadoop.hdfs.MiniDFSCluster.(MiniDFSCluster.java:479)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:438)
>   at 
> org.apache.accumulo.start.test.AccumuloDFSBase.miniDfsClusterSetup(AccumuloDFSBase.java:67)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> 

[jira] [Updated] (HDFS-8914) Documentation conflict regarding fail-over of Namenode

2015-10-18 Thread Ravindra Babu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8914?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ravindra Babu updated HDFS-8914:

Description: 
Please refer to these two links and correct one of them.

http://hadoop.apache.org/docs/r2.7.1/hadoop-project-dist/hadoop-hdfs/HdfsDesign.html

The NameNode machine is a single point of failure for an HDFS cluster. If the 
NameNode machine fails, manual intervention is necessary. Currently, automatic 
restart and failover of the NameNode software to another machine is not 
supported.

http://hadoop.apache.org/docs/r2.7.1/hadoop-project-dist/hadoop-hdfs/HDFSHighAvailabilityWithQJM.html

The HDFS High Availability feature addresses the above problems by providing 
the option of running two redundant NameNodes in the same cluster in an 
Active/Passive configuration with a hot standby. This allows a fast failover to 
a new NameNode in the case that a machine crashes, or a graceful 
administrator-initiated failover for the purpose of planned maintenance.

Please update hdfsDesign article with same facts to avoid confusion in Reader's 
mind..

  was:
Please refer to these two links and correct one of them.

http://hadoop.apache.org/docs/r2.7.1/hadoop-project-dist/hadoop-hdfs/HdfsDesign.html

The NameNode machine is a single point of failure for an HDFS cluster. If the 
NameNode machine fails, manual intervention is necessary. Currently, automatic 
restart and failover of the NameNode software to another machine is not 
supported.

http://hadoop.apache.org/docs/r2.7.1/hadoop-project-dist/hadoop-hdfs/HDFSHighAvailabilityWithQJM.html

The HDFS High Availability feature addresses the above problems by providing 
the option of running two redundant NameNodes in the same cluster in an 
Active/Passive configuration with a hot standby. This allows a fast failover to 
a new NameNode in the case that a machine crashes, or a graceful 
administrator-initiated failover for the purpose of planned maintenance.

Please keep one right version regarding failover statements.


> Documentation conflict regarding fail-over of Namenode
> --
>
> Key: HDFS-8914
> URL: https://issues.apache.org/jira/browse/HDFS-8914
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 2.7.1
> Environment: Documentation page in live
>Reporter: Ravindra Babu
>Assignee: Lars Francke
>Priority: Trivial
> Attachments: HDFS-8914.1.patch
>
>
> Please refer to these two links and correct one of them.
> http://hadoop.apache.org/docs/r2.7.1/hadoop-project-dist/hadoop-hdfs/HdfsDesign.html
> The NameNode machine is a single point of failure for an HDFS cluster. If the 
> NameNode machine fails, manual intervention is necessary. Currently, 
> automatic restart and failover of the NameNode software to another machine is 
> not supported.
> http://hadoop.apache.org/docs/r2.7.1/hadoop-project-dist/hadoop-hdfs/HDFSHighAvailabilityWithQJM.html
> The HDFS High Availability feature addresses the above problems by providing 
> the option of running two redundant NameNodes in the same cluster in an 
> Active/Passive configuration with a hot standby. This allows a fast failover 
> to a new NameNode in the case that a machine crashes, or a graceful 
> administrator-initiated failover for the purpose of planned maintenance.
> Please update hdfsDesign article with same facts to avoid confusion in 
> Reader's mind..



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9070) Allow fsck display pending replica location information for being-written blocks

2015-10-18 Thread GAO Rui (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9070?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14962737#comment-14962737
 ] 

GAO Rui commented on HDFS-9070:
---

Thanks [~andreina]. For nit 1, I think the explicit check for {{isComplete}} 
could be removed, because an under-construction block shouldn't be included in 
corruptReplicas or blocksExcess. I will revise the code. For nit 2, I will add 
an EC-related assert in {{testFsckOpenECFiles}}. I think it's better to test 
against open EC files as well as normal files, because both EC and non-EC 
files are necessary test scenarios. I'll upload a new patch soon. Thank you 
very much for your comment. 

> Allow fsck display pending replica location information for being-written 
> blocks
> 
>
> Key: HDFS-9070
> URL: https://issues.apache.org/jira/browse/HDFS-9070
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: GAO Rui
>Assignee: GAO Rui
> Attachments: HDFS-9070--HDFS-7285.00.patch, 
> HDFS-9070-HDFS-7285.00.patch, HDFS-9070-HDFS-7285.01.patch, 
> HDFS-9070-HDFS-7285.02.patch, HDFS-9070-trunk.03.patch, 
> HDFS-9070-trunk.04.patch, HDFS-9070-trunk.05.patch, HDFS-9070-trunk.06.patch
>
>
> When an EC file is being written, it can be helpful to allow fsck to display 
> datanode information for the block group of the EC file being written. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7964) Add support for async edit logging

2015-10-18 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7964?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14962829#comment-14962829
 ] 

Hadoop QA commented on HDFS-7964:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | pre-patch |  21m 28s | Findbugs (version 3.0.0) 
appears to be broken on trunk. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 11 new or modified test files. |
| {color:green}+1{color} | javac |   9m 38s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |  13m 19s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 29s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:red}-1{color} | checkstyle |   1m 53s | The applied patch generated  
10 new checkstyle issues (total was 1190, now 1165). |
| {color:red}-1{color} | whitespace |   0m 20s | The patch has 6  line(s) that 
end in whitespace. Use git apply --whitespace=fix. |
| {color:green}+1{color} | install |   1m 50s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 42s | The patch built with 
eclipse:eclipse. |
| {color:red}-1{color} | findbugs |   4m 12s | The patch appears to introduce 1 
new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | native |   4m 27s | Pre-build of native portion |
| {color:red}-1{color} | hdfs tests |  71m 37s | Tests failed in hadoop-hdfs. |
| {color:green}+1{color} | hdfs tests |   6m 58s | Tests passed in bkjournal. |
| | | 137m  8s | |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-hdfs |
| Failed unit tests | hadoop.hdfs.TestReplaceDatanodeOnFailure |
|   | hadoop.hdfs.shortcircuit.TestShortCircuitCache |
|   | hadoop.hdfs.server.datanode.TestDirectoryScanner |
| Timed out tests | 
org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure |
|   | org.apache.hadoop.hdfs.TestFileCreationClient |
|   | org.apache.hadoop.hdfs.qjournal.client.TestQJMWithFaults |
|   | org.apache.hadoop.hdfs.TestEncryptionZonesWithKMS |
|   | org.apache.hadoop.hdfs.TestLeaseRecovery |
|   | org.apache.hadoop.hdfs.server.namenode.TestFileTruncate |
|   | org.apache.hadoop.hdfs.server.mover.TestStorageMover |
|   | org.apache.hadoop.hdfs.server.mover.TestMover |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12766854/HDFS-7964.patch |
| Optional Tests | javac unit findbugs checkstyle javadoc |
| git revision | trunk / 476a251 |
| checkstyle |  
https://builds.apache.org/job/PreCommit-HDFS-Build/13046/artifact/patchprocess/diffcheckstylehadoop-hdfs.txt
 |
| whitespace | 
https://builds.apache.org/job/PreCommit-HDFS-Build/13046/artifact/patchprocess/whitespace.txt
 |
| Findbugs warnings | 
https://builds.apache.org/job/PreCommit-HDFS-Build/13046/artifact/patchprocess/newPatchFindbugsWarningshadoop-hdfs.html
 |
| hadoop-hdfs test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/13046/artifact/patchprocess/testrun_hadoop-hdfs.txt
 |
| bkjournal test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/13046/artifact/patchprocess/testrun_bkjournal.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/13046/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf909.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/13046/console |


This message was automatically generated.

> Add support for async edit logging
> --
>
> Key: HDFS-7964
> URL: https://issues.apache.org/jira/browse/HDFS-7964
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Affects Versions: 2.0.2-alpha
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
> Attachments: HDFS-7964.patch, HDFS-7964.patch
>
>
> Edit logging is a major source of contention within the NN.  LogEdit is 
> called within the namespace write lock, while logSync is called outside of the 
> lock to allow greater concurrency.  The handler thread remains busy until 
> logSync returns to provide the client with a durability guarantee for the 
> response.
> Write heavy RPC load and/or slow IO causes handlers to stall in logSync.  
> Although the write lock is not held, readers are limited/starved and the call 
> queue fills.  Combining an edit log thread with postponed RPC responses from 
> HADOOP-10300 will provide the same durability guarantee but immediately free 
> up the handlers.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9255) Consolidate block recovery related implementation into a single class

2015-10-18 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9255?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14962814#comment-14962814
 ] 

Hadoop QA commented on HDFS-9255:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | pre-patch |  19m 48s | Pre-patch trunk has 1 extant 
Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 1 new or modified test files. |
| {color:green}+1{color} | javac |   9m  4s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |  11m 58s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 26s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:red}-1{color} | checkstyle |   1m 36s | The applied patch generated  3 
new checkstyle issues (total was 513, now 489). |
| {color:red}-1{color} | whitespace |   0m  1s | The patch has 1  line(s) that 
end in whitespace. Use git apply --whitespace=fix. |
| {color:green}+1{color} | install |   1m 38s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 38s | The patch built with 
eclipse:eclipse. |
| {color:red}-1{color} | findbugs |   2m 57s | The patch appears to introduce 1 
new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | native |   3m 40s | Pre-build of native portion |
| {color:red}-1{color} | hdfs tests |  70m 56s | Tests failed in hadoop-hdfs. |
| | | 122m 46s | |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-hdfs |
| Failed unit tests | 
hadoop.hdfs.server.datanode.fsdataset.impl.TestInterDatanodeProtocol |
|   | hadoop.hdfs.TestRecoverStripedFile |
|   | hadoop.hdfs.server.namenode.TestFSImageWithSnapshot |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12767274/HDFS-9255.04.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / 476a251 |
| Pre-patch Findbugs warnings | 
https://builds.apache.org/job/PreCommit-HDFS-Build/13045/artifact/patchprocess/trunkFindbugsWarningshadoop-hdfs.html
 |
| checkstyle |  
https://builds.apache.org/job/PreCommit-HDFS-Build/13045/artifact/patchprocess/diffcheckstylehadoop-hdfs.txt
 |
| whitespace | 
https://builds.apache.org/job/PreCommit-HDFS-Build/13045/artifact/patchprocess/whitespace.txt
 |
| Findbugs warnings | 
https://builds.apache.org/job/PreCommit-HDFS-Build/13045/artifact/patchprocess/newPatchFindbugsWarningshadoop-hdfs.html
 |
| hadoop-hdfs test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/13045/artifact/patchprocess/testrun_hadoop-hdfs.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/13045/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf909.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/13045/console |


This message was automatically generated.

> Consolidate block recovery related implementation into a single class
> -
>
> Key: HDFS-9255
> URL: https://issues.apache.org/jira/browse/HDFS-9255
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Reporter: Walter Su
>Assignee: Walter Su
>Priority: Minor
> Attachments: HDFS-9255.01.patch, HDFS-9255.02.patch, 
> HDFS-9255.03.patch, HDFS-9255.04.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9229) Expose size of NameNode directory as a metric

2015-10-18 Thread Rakesh R (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9229?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14962229#comment-14962229
 ] 

Rakesh R commented on HDFS-9229:


Thank you [~surendrasingh] for the contribution; the patch looks almost good. 
Just a few comments, please take a look.
# The unit is missing, please add it (for example, bytes):
{code}
+| `NameDirSize` | NameNode name directories size|
{code}
# In tests, it would be good to assert the size of the map with 
{{nnDirMap.size()}} (see the sketch after these comments). Presently that 
assertion is missing, so the test won't fail if the map wrongly returns 
content with zero length, right?
{code}
+  Map nnDirMap =
+  (Map) JSON.parse(
+  (String) mbs.getAttribute(mxbeanName, "NameDirSize"));
{code}
# Also, there are a couple of checkstyle warnings related to the patch; please 
take care of them. Thanks!
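
A sketch of the assertion suggested in comment 2, assuming a JUnit test with a
{{MiniDFSCluster}} named {{cluster}} and the {{nnDirMap}} from the quoted
snippet (the expected count is illustrative):

{code}
// Hypothetical: one NameDirSize entry is expected per configured name dir.
assertEquals("NameDirSize should report one entry per name directory",
    cluster.getNameDirs(0).size(), nnDirMap.size());
{code}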

> Expose size of NameNode directory as a metric
> -
>
> Key: HDFS-9229
> URL: https://issues.apache.org/jira/browse/HDFS-9229
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Affects Versions: 2.7.1
>Reporter: Zhe Zhang
>Assignee: Surendra Singh Lilhore
>Priority: Minor
> Attachments: HDFS-9229.001.patch
>
>
> Useful for admins in reserving / managing NN local file system space. Also 
> useful when transferring NN backups.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8650) Erasure Coding: use thread pool for StripedDataStreamer

2015-10-18 Thread Rakesh R (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8650?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14962253#comment-14962253
 ] 

Rakesh R commented on HDFS-8650:


I could see HDFS-8287 is adding a few more threads to make 
{{DFSStripedOutputStream#writeParityCells}} non-blocking. Now two sets of 
threads are spawned while writing; how about sharing the same {{thread pool}} 
between the parity generator threads and the StripedDataStreamer threads?
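
A minimal sketch of the sharing being suggested, assuming both kinds of work
can be expressed as plain {{Runnable}} tasks (names are illustrative, not the
HDFS-8287 API):

{code}
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

class SharedPoolSketch {
  // One pool serves both the StripedDataStreamer tasks and the
  // writeParityCells work, instead of each spawning its own threads.
  static void runStripedWrite(List<Runnable> streamerTasks,
                              Runnable parityGeneratorTask) {
    ExecutorService shared = Executors.newCachedThreadPool();
    for (Runnable task : streamerTasks) {
      shared.submit(task);
    }
    shared.submit(parityGeneratorTask);
    shared.shutdown();   // let queued tasks drain, accept no new work
  }
}
{code}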

> Erasure Coding: use thread pool for StripedDataStreamer
> ---
>
> Key: HDFS-8650
> URL: https://issues.apache.org/jira/browse/HDFS-8650
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Walter Su
>Assignee: GAO Rui
>Priority: Minor
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9255) Consolidate block recovery related implementation into a single class

2015-10-18 Thread Rakesh R (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9255?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14962251#comment-14962251
 ] 

Rakesh R commented on HDFS-9255:


Adding one more comment:
- Please add {{@InterfaceAudience.Private}} to the newly added 
BlockRecoveryWorker class.

> Consolidate block recovery related implementation into a single class
> -
>
> Key: HDFS-9255
> URL: https://issues.apache.org/jira/browse/HDFS-9255
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Reporter: Walter Su
>Assignee: Walter Su
>Priority: Minor
> Attachments: HDFS-9255.01.patch, HDFS-9255.02.patch, 
> HDFS-9255.03.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9255) Consolidate block recovery related implementation into a single class

2015-10-18 Thread Rakesh R (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9255?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14962246#comment-14962246
 ] 

Rakesh R commented on HDFS-9255:


Thanks [~walter.k.su]. Nice work!

I have a few comments, please see:
# In unit tests you could probably use 
{{GenericTestUtils.assertExceptionContains}} to assert the exception 
message rather than swallowing it (see the sketch after this list).
{code}
+  try {
+    spyTask.recover();
+  } catch (IOException e) {
+    // IOException: All datanodes failed
+  }
{code}
# The RecoveryTask class does not need to be public. Also, how about renaming 
it to RecoveryTaskContiguous?
# Do you have any plans to incorporate the 
[comments|https://issues.apache.org/jira/browse/HDFS-9173?focusedCommentId=14960162&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14960162]
 discussed in HDFS-9173 here in this jira?
# Please take care of the checkstyle warnings related to the patch.
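
A sketch of the pattern suggested in comment 1. {{GenericTestUtils.assertExceptionContains}}
is an existing Hadoop test helper; the expected message string is taken from
the comment in the quoted snippet, and {{spyTask}} comes from the test under
review:

{code}
import static org.junit.Assert.fail;

import java.io.IOException;
import org.apache.hadoop.test.GenericTestUtils;

// Inside the test, instead of swallowing the exception:
try {
  spyTask.recover();
  fail("recover() should have thrown when all datanodes failed");
} catch (IOException e) {
  GenericTestUtils.assertExceptionContains("All datanodes failed", e);
}
{code}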

> Consolidate block recovery related implementation into a single class
> -
>
> Key: HDFS-9255
> URL: https://issues.apache.org/jira/browse/HDFS-9255
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Reporter: Walter Su
>Assignee: Walter Su
>Priority: Minor
> Attachments: HDFS-9255.01.patch, HDFS-9255.02.patch, 
> HDFS-9255.03.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9242) Fix Findbugs warning from webhdfs.DataNodeUGIProvider.ugiCache

2015-10-18 Thread Brahma Reddy Battula (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9242?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14962314#comment-14962314
 ] 

Brahma Reddy Battula commented on HDFS-9242:


[~liuml07] thanks for taking a look into this issue. Yes, we can exclude this 
one. Uploaded a patch that excludes it and removes the unused import. 
[~xyao], kindly review.

> Fix Findbugs warning from webhdfs.DataNodeUGIProvider.ugiCache 
> ---
>
> Key: HDFS-9242
> URL: https://issues.apache.org/jira/browse/HDFS-9242
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Xiaoyu Yao
>Assignee: Brahma Reddy Battula
> Attachments: HDFS-9242.patch
>
>
> This was introduced by HDFS-8855 and pre-patch warning can be found at 
> https://builds.apache.org/job/PreCommit-HDFS-Build/12975/artifact/patchprocess/trunkFindbugsWarningshadoop-hdfs.html#DC_DOUBLECHECK
> {code}
> Code  Warning
> DC    Possible doublecheck on 
> org.apache.hadoop.hdfs.server.datanode.web.webhdfs.DataNodeUGIProvider.ugiCache
>  in new 
> org.apache.hadoop.hdfs.server.datanode.web.webhdfs.DataNodeUGIProvider(ParameterParser,
>  Configuration)
> Bug type DC_DOUBLECHECK (click for details) 
> In class 
> org.apache.hadoop.hdfs.server.datanode.web.webhdfs.DataNodeUGIProvider
> In method new 
> org.apache.hadoop.hdfs.server.datanode.web.webhdfs.DataNodeUGIProvider(ParameterParser,
>  Configuration)
> On field 
> org.apache.hadoop.hdfs.server.datanode.web.webhdfs.DataNodeUGIProvider.ugiCache
> At DataNodeUGIProvider.java:[lines 49-51]
> {code}
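
For reference, the initialization-on-demand holder idiom is one standard way
to avoid the double-checked-locking pattern that FindBugs flags as
DC_DOUBLECHECK. A generic sketch follows (the attached patch instead excludes
the warning; names here are illustrative):

{code}
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

final class UgiCacheHolderSketch {
  private UgiCacheHolderSketch() {}

  // The JVM guarantees Holder's static initializer runs exactly once, on
  // first access, with no explicit locking: no DC_DOUBLECHECK risk.
  private static final class Holder {
    static final Map<String, Object> UGI_CACHE = new ConcurrentHashMap<>();
  }

  static Map<String, Object> getCache() {
    return Holder.UGI_CACHE;
  }
}
{code}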



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9242) Fix Findbugs warning from webhdfs.DataNodeUGIProvider.ugiCache

2015-10-18 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9242?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14962374#comment-14962374
 ] 

Hadoop QA commented on HDFS-9242:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | pre-patch |  18m 11s | Pre-patch trunk has 1 extant 
Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:red}-1{color} | tests included |   0m  0s | The patch doesn't appear 
to include any new or modified tests.  Please justify why no new tests are 
needed for this patch. Also please list what manual steps were performed to 
verify this patch. |
| {color:green}+1{color} | javac |   8m  6s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |  10m 28s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 24s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:green}+1{color} | checkstyle |   1m 25s | There were no new checkstyle 
issues. |
| {color:green}+1{color} | whitespace |   0m  0s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 29s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 33s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   2m 29s | The patch does not introduce 
any new Findbugs (version 3.0.0) warnings, and fixes 1 pre-existing warnings. |
| {color:green}+1{color} | native |   3m 12s | Pre-build of native portion |
| {color:red}-1{color} | hdfs tests |  49m 40s | Tests failed in hadoop-hdfs. |
| | |  96m  1s | |
\\
\\
|| Reason || Tests ||
| Failed unit tests | hadoop.hdfs.server.datanode.TestDirectoryScanner |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12767247/HDFS-9242.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / e286512 |
| Pre-patch Findbugs warnings | 
https://builds.apache.org/job/PreCommit-HDFS-Build/13041/artifact/patchprocess/trunkFindbugsWarningshadoop-hdfs.html
 |
| hadoop-hdfs test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/13041/artifact/patchprocess/testrun_hadoop-hdfs.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/13041/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf900.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/13041/console |


This message was automatically generated.

> Fix Findbugs warning from webhdfs.DataNodeUGIProvider.ugiCache 
> ---
>
> Key: HDFS-9242
> URL: https://issues.apache.org/jira/browse/HDFS-9242
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Xiaoyu Yao
>Assignee: Brahma Reddy Battula
> Attachments: HDFS-9242.patch
>
>
> This was introduced by HDFS-8855 and pre-patch warning can be found at 
> https://builds.apache.org/job/PreCommit-HDFS-Build/12975/artifact/patchprocess/trunkFindbugsWarningshadoop-hdfs.html#DC_DOUBLECHECK
> {code}
> Code  Warning
> DC    Possible doublecheck on 
> org.apache.hadoop.hdfs.server.datanode.web.webhdfs.DataNodeUGIProvider.ugiCache
>  in new 
> org.apache.hadoop.hdfs.server.datanode.web.webhdfs.DataNodeUGIProvider(ParameterParser,
>  Configuration)
> Bug type DC_DOUBLECHECK (click for details) 
> In class 
> org.apache.hadoop.hdfs.server.datanode.web.webhdfs.DataNodeUGIProvider
> In method new 
> org.apache.hadoop.hdfs.server.datanode.web.webhdfs.DataNodeUGIProvider(ParameterParser,
>  Configuration)
> On field 
> org.apache.hadoop.hdfs.server.datanode.web.webhdfs.DataNodeUGIProvider.ugiCache
> At DataNodeUGIProvider.java:[lines 49-51]
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9242) Fix Findbugs warning from webhdfs.DataNodeUGIProvider.ugiCache

2015-10-18 Thread Brahma Reddy Battula (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9242?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brahma Reddy Battula updated HDFS-9242:
---
Status: Patch Available  (was: Open)

> Fix Findbugs warning from webhdfs.DataNodeUGIProvider.ugiCache 
> ---
>
> Key: HDFS-9242
> URL: https://issues.apache.org/jira/browse/HDFS-9242
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Xiaoyu Yao
>Assignee: Brahma Reddy Battula
> Attachments: HDFS-9242.patch
>
>
> This was introduced by HDFS-8855 and pre-patch warning can be found at 
> https://builds.apache.org/job/PreCommit-HDFS-Build/12975/artifact/patchprocess/trunkFindbugsWarningshadoop-hdfs.html#DC_DOUBLECHECK
> {code}
> Code  Warning
> DC    Possible doublecheck on 
> org.apache.hadoop.hdfs.server.datanode.web.webhdfs.DataNodeUGIProvider.ugiCache
>  in new 
> org.apache.hadoop.hdfs.server.datanode.web.webhdfs.DataNodeUGIProvider(ParameterParser,
>  Configuration)
> Bug type DC_DOUBLECHECK (click for details) 
> In class 
> org.apache.hadoop.hdfs.server.datanode.web.webhdfs.DataNodeUGIProvider
> In method new 
> org.apache.hadoop.hdfs.server.datanode.web.webhdfs.DataNodeUGIProvider(ParameterParser,
>  Configuration)
> On field 
> org.apache.hadoop.hdfs.server.datanode.web.webhdfs.DataNodeUGIProvider.ugiCache
> At DataNodeUGIProvider.java:[lines 49-51]
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9242) Fix Findbugs warning from webhdfs.DataNodeUGIProvider.ugiCache

2015-10-18 Thread Brahma Reddy Battula (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9242?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brahma Reddy Battula updated HDFS-9242:
---
Attachment: HDFS-9242.patch

> Fix Findbugs warning from webhdfs.DataNodeUGIProvider.ugiCache 
> ---
>
> Key: HDFS-9242
> URL: https://issues.apache.org/jira/browse/HDFS-9242
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Xiaoyu Yao
>Assignee: Brahma Reddy Battula
> Attachments: HDFS-9242.patch
>
>
> This was introduced by HDFS-8855 and pre-patch warning can be found at 
> https://builds.apache.org/job/PreCommit-HDFS-Build/12975/artifact/patchprocess/trunkFindbugsWarningshadoop-hdfs.html#DC_DOUBLECHECK
> {code}
> Code  Warning
> DC    Possible doublecheck on 
> org.apache.hadoop.hdfs.server.datanode.web.webhdfs.DataNodeUGIProvider.ugiCache
>  in new 
> org.apache.hadoop.hdfs.server.datanode.web.webhdfs.DataNodeUGIProvider(ParameterParser,
>  Configuration)
> Bug type DC_DOUBLECHECK (click for details) 
> In class 
> org.apache.hadoop.hdfs.server.datanode.web.webhdfs.DataNodeUGIProvider
> In method new 
> org.apache.hadoop.hdfs.server.datanode.web.webhdfs.DataNodeUGIProvider(ParameterParser,
>  Configuration)
> On field 
> org.apache.hadoop.hdfs.server.datanode.web.webhdfs.DataNodeUGIProvider.ugiCache
> At DataNodeUGIProvider.java:[lines 49-51]
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9242) Fix Findbugs warning from webhdfs.DataNodeUGIProvider.ugiCache

2015-10-18 Thread Brahma Reddy Battula (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9242?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14962388#comment-14962388
 ] 

Brahma Reddy Battula commented on HDFS-9242:


The {{test failures}} are unrelated. Kindly review.

> Fix Findbugs warning from webhdfs.DataNodeUGIProvider.ugiCache 
> ---
>
> Key: HDFS-9242
> URL: https://issues.apache.org/jira/browse/HDFS-9242
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Xiaoyu Yao
>Assignee: Brahma Reddy Battula
> Attachments: HDFS-9242.patch
>
>
> This was introduced by HDFS-8855 and pre-patch warning can be found at 
> https://builds.apache.org/job/PreCommit-HDFS-Build/12975/artifact/patchprocess/trunkFindbugsWarningshadoop-hdfs.html#DC_DOUBLECHECK
> {code}
> Code  Warning
> DC    Possible doublecheck on 
> org.apache.hadoop.hdfs.server.datanode.web.webhdfs.DataNodeUGIProvider.ugiCache
>  in new 
> org.apache.hadoop.hdfs.server.datanode.web.webhdfs.DataNodeUGIProvider(ParameterParser,
>  Configuration)
> Bug type DC_DOUBLECHECK (click for details) 
> In class 
> org.apache.hadoop.hdfs.server.datanode.web.webhdfs.DataNodeUGIProvider
> In method new 
> org.apache.hadoop.hdfs.server.datanode.web.webhdfs.DataNodeUGIProvider(ParameterParser,
>  Configuration)
> On field 
> org.apache.hadoop.hdfs.server.datanode.web.webhdfs.DataNodeUGIProvider.ugiCache
> At DataNodeUGIProvider.java:[lines 49-51]
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9229) Expose size of NameNode directory as a metric

2015-10-18 Thread Surendra Singh Lilhore (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9229?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Surendra Singh Lilhore updated HDFS-9229:
-
Attachment: HDFS-9229.002.patch

Thanks [~rakeshr] for the review.
Attached an updated patch. Please review.

> Expose size of NameNode directory as a metric
> -
>
> Key: HDFS-9229
> URL: https://issues.apache.org/jira/browse/HDFS-9229
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Affects Versions: 2.7.1
>Reporter: Zhe Zhang
>Assignee: Surendra Singh Lilhore
>Priority: Minor
> Attachments: HDFS-9229.001.patch, HDFS-9229.002.patch
>
>
> Useful for admins in reserving / managing NN local file system space. Also 
> useful when transferring NN backups.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8344) NameNode doesn't recover lease for files with missing blocks

2015-10-18 Thread Ravi Prakash (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8344?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ravi Prakash updated HDFS-8344:
---
Attachment: TestHadoop.java

The TestHadoop.java file I referenced in the description.

> NameNode doesn't recover lease for files with missing blocks
> 
>
> Key: HDFS-8344
> URL: https://issues.apache.org/jira/browse/HDFS-8344
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.7.0
>Reporter: Ravi Prakash
>Assignee: Ravi Prakash
> Fix For: 2.8.0
>
> Attachments: HDFS-8344.01.patch, HDFS-8344.02.patch, 
> HDFS-8344.03.patch, HDFS-8344.04.patch, HDFS-8344.05.patch, 
> HDFS-8344.06.patch, HDFS-8344.07.patch, HDFS-8344.08.patch, 
> HDFS-8344.09.patch, TestHadoop.java
>
>
> I found another(?) instance in which the lease is not recovered. This is 
> easily reproducible on a pseudo-distributed single-node cluster.
> # Before you start, it helps if you set the following. This is not 
> necessary, but it simply reduces how long you have to wait:
>   public static final long LEASE_SOFTLIMIT_PERIOD = 30 * 1000;
>   public static final long LEASE_HARDLIMIT_PERIOD = 2 * 
> LEASE_SOFTLIMIT_PERIOD;
> {code}
> # Client starts to write a file (could be less than 1 block, but it is 
> hflushed, so some of the data has landed on the datanodes). (I'm copying the 
> client code I am using. I generate a jar and run it using $ hadoop jar 
> TestHadoop.jar)
> # Client crashes. (I simulate this by kill -9 of the $(hadoop jar 
> TestHadoop.jar) process after it has printed "Wrote to the bufferedWriter".)
> # Shoot the datanode. (Since I ran on a pseudo-distributed cluster, there was 
> only 1)
> I believe the lease should be recovered and the block should be marked 
> missing. However this is not happening. The lease is never recovered.
> The effect of this bug for us was that nodes could not be decommissioned 
> cleanly. Although we knew that the client had crashed, the Namenode never 
> released the leases, even after restarting the Namenode and even months 
> afterwards. There are actually several other cases too where we don't 
> consider what happens if ALL the datanodes die while the file is being 
> written, but I am going to punt on that for another time.
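
A hypothetical sketch of the kind of client the steps describe; the actual
TestHadoop.java is attached to the issue, and the path and strings here are
illustrative:

{code}
import java.io.BufferedWriter;
import java.io.OutputStreamWriter;
import java.nio.charset.StandardCharsets;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class LeaseRepro {
  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.get(new Configuration());
    FSDataOutputStream out = fs.create(new Path("/tmp/lease-test"));
    BufferedWriter writer = new BufferedWriter(
        new OutputStreamWriter(out, StandardCharsets.UTF_8));
    writer.write("some data");
    writer.flush();      // push the data through the BufferedWriter
    out.hflush();        // data reaches the datanodes; lease still held
    System.out.println("Wrote to the bufferedWriter");
    Thread.sleep(Long.MAX_VALUE);  // kill -9 here: file is never closed
  }
}
{code}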



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9234) WebHdfs : getContentSummary() should give quota for storage types

2015-10-18 Thread Surendra Singh Lilhore (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9234?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Surendra Singh Lilhore updated HDFS-9234:
-
Attachment: HDFS-9234-005.patch

Thanks [~andreina] for the review.
Attached an updated patch, please review.

> WebHdfs : getContentSummary() should give quota for storage types
> -
>
> Key: HDFS-9234
> URL: https://issues.apache.org/jira/browse/HDFS-9234
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: webhdfs
>Affects Versions: 2.7.1
>Reporter: Surendra Singh Lilhore
>Assignee: Surendra Singh Lilhore
> Attachments: HDFS-9234-001.patch, HDFS-9234-002.patch, 
> HDFS-9234-003.patch, HDFS-9234-004.patch, HDFS-9234-005.patch
>
>
> Currently the webhdfs API for ContentSummary gives only the name quota and 
> space quota, but not the storage-type quotas.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-6481) DatanodeManager#getDatanodeStorageInfos() should check the length of storageIDs

2015-10-18 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6481?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HDFS-6481:
-
Description: 
Ian Brooks reported the following stack trace:
{code}
2014-06-03 13:05:03,915 WARN  [DataStreamer for file 
/user/hbase/WALs/,16020,1401716790638/%2C16020%2C1401716790638.1401796562200
 block BP-2121456822-10.143.38.149-1396953188241:blk_1074073683_332932] 
hdfs.DFSClient: DataStreamer Exception
org.apache.hadoop.ipc.RemoteException(java.lang.ArrayIndexOutOfBoundsException):
 0
at 
org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager.getDatanodeStorageInfos(DatanodeManager.java:467)
at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalDatanode(FSNamesystem.java:2779)
at 
org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getAdditionalDatanode(NameNodeRpcServer.java:594)
at 
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getAdditionalDatanode(ClientNamenodeProtocolServerSideTranslatorPB.java:430)
at 
org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:928)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1962)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1958)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1548)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1956)

at org.apache.hadoop.ipc.Client.call(Client.java:1347)
at org.apache.hadoop.ipc.Client.call(Client.java:1300)
at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)
at com.sun.proxy.$Proxy13.getAdditionalDatanode(Unknown Source)
at 
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getAdditionalDatanode(ClientNamenodeProtocolTranslatorPB.java:352)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:186)
at 
org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
at com.sun.proxy.$Proxy14.getAdditionalDatanode(Unknown Source)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:266)
at com.sun.proxy.$Proxy15.getAdditionalDatanode(Unknown Source)
at 
org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.addDatanode2ExistingPipeline(DFSOutputStream.java:919)
at 
org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.setupPipelineForAppendOrRecovery(DFSOutputStream.java:1031)
at 
org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.processDatanodeError(DFSOutputStream.java:823)
at 
org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:475)
2014-06-03 13:05:48,489 ERROR [RpcServer.handler=22,port=16020] wal.FSHLog: 
syncer encountered error, will retry. txid=211
org.apache.hadoop.ipc.RemoteException(java.lang.ArrayIndexOutOfBoundsException):
 0
at 
org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager.getDatanodeStorageInfos(DatanodeManager.java:467)
at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalDatanode(FSNamesystem.java:2779)
at 
org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getAdditionalDatanode(NameNodeRpcServer.java:594)
at 
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getAdditionalDatanode(ClientNamenodeProtocolServerSideTranslatorPB.java:430)
at 
org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585)
at 
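
The exception above is the symptom the summary describes: 
getDatanodeStorageInfos() indexes the storageIDs array in lockstep with the 
datanode array, so a too-short storageIDs array fails later with 
ArrayIndexOutOfBoundsException: 0. Below is a minimal sketch of the kind of 
length check the title calls for; it is an illustration under that 
assumption, not the committed patch.

{code:java}
import org.apache.hadoop.HadoopIllegalArgumentException;
import org.apache.hadoop.hdfs.protocol.DatanodeID;

public class StorageIdCheck {
  // Illustration only: validate the parallel arrays up front, instead of
  // failing mid-lookup with ArrayIndexOutOfBoundsException.
  static void checkStorageIDs(DatanodeID[] datanodeID, String[] storageIDs) {
    if (storageIDs.length != datanodeID.length) {
      throw new HadoopIllegalArgumentException("storageIDs.length="
          + storageIDs.length + " != datanodeID.length=" + datanodeID.length);
    }
  }
}
{code}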

[jira] [Commented] (HDFS-9229) Expose size of NameNode directory as a metric

2015-10-18 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9229?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14962533#comment-14962533
 ] 

Hadoop QA commented on HDFS-9229:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | pre-patch |  26m  6s | Pre-patch trunk has 1 extant 
Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 1 new or modified test files. |
| {color:green}+1{color} | javac |   9m 26s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |  11m 40s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 24s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:green}+1{color} | site |   3m 26s | Site still builds. |
| {color:red}-1{color} | checkstyle |   2m 55s | The applied patch generated  1 
new checkstyle issue (total was 421, now 421). |
| {color:green}+1{color} | whitespace |   0m  0s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 41s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 36s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   5m  7s | The patch does not introduce 
any new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | common tests |   8m 39s | Tests passed in 
hadoop-common. |
| {color:red}-1{color} | hdfs tests |  65m 48s | Tests failed in hadoop-hdfs. |
| | | 135m 53s | |
\\
\\
|| Reason || Tests ||
| Failed unit tests | hadoop.hdfs.TestRecoverStripedFile |
|   | hadoop.hdfs.server.datanode.TestDataNodeMetrics |
|   | hadoop.hdfs.util.TestByteArrayManager |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12767257/HDFS-9229.002.patch |
| Optional Tests | site javadoc javac unit findbugs checkstyle |
| git revision | trunk / 0ab3f9d |
| Pre-patch Findbugs warnings | 
https://builds.apache.org/job/PreCommit-HDFS-Build/13042/artifact/patchprocess/trunkFindbugsWarningshadoop-hdfs.html
 |
| checkstyle |  
https://builds.apache.org/job/PreCommit-HDFS-Build/13042/artifact/patchprocess/diffcheckstylehadoop-hdfs.txt
 |
| hadoop-common test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/13042/artifact/patchprocess/testrun_hadoop-common.txt
 |
| hadoop-hdfs test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/13042/artifact/patchprocess/testrun_hadoop-hdfs.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/13042/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf909.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/13042/console |


This message was automatically generated.

> Expose size of NameNode directory as a metric
> -
>
> Key: HDFS-9229
> URL: https://issues.apache.org/jira/browse/HDFS-9229
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Affects Versions: 2.7.1
>Reporter: Zhe Zhang
>Assignee: Surendra Singh Lilhore
>Priority: Minor
> Attachments: HDFS-9229.001.patch, HDFS-9229.002.patch
>
>
> Useful for admins in reserving / managing NN local file system space. Also 
> useful when transferring NN backups.
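
The metric implementation itself is in the attached patches and is not 
reproduced here; as a rough illustration of the quantity being exposed, the 
size of a NameNode storage directory can be computed by recursively summing 
file lengths. This is a sketch under that assumption, not the patch.

{code:java}
import java.io.File;

public class DirSize {
  // Sketch: total bytes under a directory such as the location configured
  // by dfs.namenode.name.dir.
  static long directorySize(File dir) {
    long total = 0;
    File[] children = dir.listFiles();
    if (children == null) {
      return dir.length();  // plain file, or an unreadable directory
    }
    for (File f : children) {
      total += f.isDirectory() ? directorySize(f) : f.length();
    }
    return total;
  }

  public static void main(String[] args) {
    System.out.println(directorySize(new File(args[0])) + " bytes");
  }
}
{code}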



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9234) WebHdfs : getContentSummary() should give quota for storage types

2015-10-18 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9234?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14962532#comment-14962532
 ] 

Hadoop QA commented on HDFS-9234:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | pre-patch |  19m 58s | Pre-patch trunk has 1 extant 
Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 1 new or modified test files. |
| {color:green}+1{color} | javac |   7m 59s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |  10m 20s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 23s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:green}+1{color} | checkstyle |   2m 54s | There were no new checkstyle 
issues. |
| {color:green}+1{color} | whitespace |   0m  0s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 28s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 33s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   4m 33s | The patch does not introduce 
any new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | native |   3m 13s | Pre-build of native portion |
| {color:red}-1{color} | hdfs tests |  49m 47s | Tests failed in hadoop-hdfs. |
| {color:green}+1{color} | hdfs tests |   0m 30s | Tests passed in 
hadoop-hdfs-client. |
| | | 101m 43s | |
\\
\\
|| Reason || Tests ||
| Failed unit tests | hadoop.hdfs.server.datanode.TestDirectoryScanner |
|   | hadoop.hdfs.server.blockmanagement.TestBlockManager |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12767260/HDFS-9234-005.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / 0ab3f9d |
| Pre-patch Findbugs warnings | 
https://builds.apache.org/job/PreCommit-HDFS-Build/13043/artifact/patchprocess/trunkFindbugsWarningshadoop-hdfs.html
 |
| hadoop-hdfs test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/13043/artifact/patchprocess/testrun_hadoop-hdfs.txt
 |
| hadoop-hdfs-client test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/13043/artifact/patchprocess/testrun_hadoop-hdfs-client.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/13043/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf900.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/13043/console |


This message was automatically generated.

> WebHdfs : getContentSummary() should give quota for storage types
> -
>
> Key: HDFS-9234
> URL: https://issues.apache.org/jira/browse/HDFS-9234
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: webhdfs
>Affects Versions: 2.7.1
>Reporter: Surendra Singh Lilhore
>Assignee: Surendra Singh Lilhore
> Attachments: HDFS-9234-001.patch, HDFS-9234-002.patch, 
> HDFS-9234-003.patch, HDFS-9234-004.patch, HDFS-9234-005.patch
>
>
> Currently the webhdfs API for ContentSummary gives only the name quota and 
> space quota; it does not give the storage-type quotas.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)