[jira] [Commented] (HDFS-9242) Fix Findbugs warning from webhdfs.DataNodeUGIProvider.ugiCache

2015-11-01 Thread Tsuyoshi Ozawa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9242?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14984419#comment-14984419
 ] 

Tsuyoshi Ozawa commented on HDFS-9242:
--

[~brahmareddy] [~liuml07] I think this is not a false positive. For more detail, 
please check this article: http://www.cs.umd.edu/~pugh/java/memoryModel/

A correct way to fix this is to make ugiCache volatile. Could you update it?
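
For reference, a minimal sketch of the shape of code that trips DC_DOUBLECHECK 
(a hedged reconstruction, not the exact lines from DataNodeUGIProvider):

{code}
import java.util.concurrent.TimeUnit;

import com.google.common.cache.Cache;
import com.google.common.cache.CacheBuilder;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.security.UserGroupInformation;

// Double-checked locking on a non-volatile static field: the unsynchronized
// first check may observe a reference to a partially constructed cache, as
// the memory-model article above explains.
class UgiProviderSketch {
  private static Cache<String, UserGroupInformation> ugiCache;

  // Mirrors the warned constructor DataNodeUGIProvider(ParameterParser,
  // Configuration); conf is unused in this sketch.
  UgiProviderSketch(Configuration conf) {
    if (ugiCache == null) {                    // unsynchronized first check
      synchronized (UgiProviderSketch.class) {
        if (ugiCache == null) {                // second check under the lock
          ugiCache = CacheBuilder.newBuilder()
              .expireAfterAccess(10, TimeUnit.MINUTES)
              .build();
        }
      }
    }
  }
}
{code}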

> Fix Findbugs warning from webhdfs.DataNodeUGIProvider.ugiCache 
> ---
>
> Key: HDFS-9242
> URL: https://issues.apache.org/jira/browse/HDFS-9242
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Xiaoyu Yao
>Assignee: Brahma Reddy Battula
> Attachments: HDFS-9242.patch
>
>
> This was introduced by HDFS-8855; the pre-patch warning can be found at 
> https://builds.apache.org/job/PreCommit-HDFS-Build/12975/artifact/patchprocess/trunkFindbugsWarningshadoop-hdfs.html#DC_DOUBLECHECK
> {code}
> Code  Warning
> DC  Possible doublecheck on 
> org.apache.hadoop.hdfs.server.datanode.web.webhdfs.DataNodeUGIProvider.ugiCache
>  in new 
> org.apache.hadoop.hdfs.server.datanode.web.webhdfs.DataNodeUGIProvider(ParameterParser,
>  Configuration)
> Bug type DC_DOUBLECHECK (click for details) 
> In class 
> org.apache.hadoop.hdfs.server.datanode.web.webhdfs.DataNodeUGIProvider
> In method new 
> org.apache.hadoop.hdfs.server.datanode.web.webhdfs.DataNodeUGIProvider(ParameterParser,
>  Configuration)
> On field 
> org.apache.hadoop.hdfs.server.datanode.web.webhdfs.DataNodeUGIProvider.ugiCache
> At DataNodeUGIProvider.java:[lines 49-51]
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9242) Fix Findbugs warning from webhdfs.DataNodeUGIProvider.ugiCache

2015-11-01 Thread Haohui Mai (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9242?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14984482#comment-14984482
 ] 

Haohui Mai commented on HDFS-9242:
--

bq. I think the warning is false positive. We can write a filter rule in the 
dev-support/findbugsExcludeFile.xml file.

Findbugs is right. The code is simply broken. It is possible to end up with 
multiple instances of the cache map.

bq. A correct way to fix is to make ugiCache volatile. Could you update it?

Fixing it with volatile is a bad idea as it bars many compiler optimizations.

The suggested pattern is to do it through a static initializer block, where the 
synchronization is properly handled.
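
A minimal sketch of that pattern (illustrative only, not the actual patch):

{code}
import com.google.common.cache.Cache;
import com.google.common.cache.CacheBuilder;

import org.apache.hadoop.security.UserGroupInformation;

// The JVM runs a class's static initializer exactly once, under the
// class-initialization lock, so the cache cannot be constructed twice and
// no volatile or explicit locking is needed.
final class UgiCacheHolder {
  static final Cache<String, UserGroupInformation> UGI_CACHE;

  static {
    UGI_CACHE = CacheBuilder.newBuilder().build();
  }

  private UgiCacheHolder() {}
}
{code}

The trade-off, raised in a later comment, is that a static initializer cannot 
take the Configuration passed to the constructor.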

> Fix Findbugs warning from webhdfs.DataNodeUGIProvider.ugiCache 
> ---
>
> Key: HDFS-9242
> URL: https://issues.apache.org/jira/browse/HDFS-9242
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Xiaoyu Yao
>Assignee: Brahma Reddy Battula
> Attachments: HDFS-9242.patch
>
>
> This was introduced by HDFS-8855; the pre-patch warning can be found at 
> https://builds.apache.org/job/PreCommit-HDFS-Build/12975/artifact/patchprocess/trunkFindbugsWarningshadoop-hdfs.html#DC_DOUBLECHECK
> {code}
> Code  Warning
> DC  Possible doublecheck on 
> org.apache.hadoop.hdfs.server.datanode.web.webhdfs.DataNodeUGIProvider.ugiCache
>  in new 
> org.apache.hadoop.hdfs.server.datanode.web.webhdfs.DataNodeUGIProvider(ParameterParser,
>  Configuration)
> Bug type DC_DOUBLECHECK (click for details) 
> In class 
> org.apache.hadoop.hdfs.server.datanode.web.webhdfs.DataNodeUGIProvider
> In method new 
> org.apache.hadoop.hdfs.server.datanode.web.webhdfs.DataNodeUGIProvider(ParameterParser,
>  Configuration)
> On field 
> org.apache.hadoop.hdfs.server.datanode.web.webhdfs.DataNodeUGIProvider.ugiCache
> At DataNodeUGIProvider.java:[lines 49-51]
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9242) Fix Findbugs warning from webhdfs.DataNodeUGIProvider.ugiCache

2015-11-01 Thread Haohui Mai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9242?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haohui Mai updated HDFS-9242:
-
Priority: Critical  (was: Major)

> Fix Findbugs warning from webhdfs.DataNodeUGIProvider.ugiCache 
> ---
>
> Key: HDFS-9242
> URL: https://issues.apache.org/jira/browse/HDFS-9242
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Xiaoyu Yao
>Assignee: Brahma Reddy Battula
>Priority: Critical
> Attachments: HDFS-9242.patch
>
>
> This was introduced by HDFS-8855; the pre-patch warning can be found at 
> https://builds.apache.org/job/PreCommit-HDFS-Build/12975/artifact/patchprocess/trunkFindbugsWarningshadoop-hdfs.html#DC_DOUBLECHECK
> {code}
> Code  Warning
> DC  Possible doublecheck on 
> org.apache.hadoop.hdfs.server.datanode.web.webhdfs.DataNodeUGIProvider.ugiCache
>  in new 
> org.apache.hadoop.hdfs.server.datanode.web.webhdfs.DataNodeUGIProvider(ParameterParser,
>  Configuration)
> Bug type DC_DOUBLECHECK (click for details) 
> In class 
> org.apache.hadoop.hdfs.server.datanode.web.webhdfs.DataNodeUGIProvider
> In method new 
> org.apache.hadoop.hdfs.server.datanode.web.webhdfs.DataNodeUGIProvider(ParameterParser,
>  Configuration)
> On field 
> org.apache.hadoop.hdfs.server.datanode.web.webhdfs.DataNodeUGIProvider.ugiCache
> At DataNodeUGIProvider.java:[lines 49-51]
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8855) Webhdfs client leaks active NameNode connections

2015-11-01 Thread Haohui Mai (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8855?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14984485#comment-14984485
 ] 

Haohui Mai commented on HDFS-8855:
--

The patch introduced findbugs warnings that have been in trunk for more than 2 
weeks. The findbugs warning is tracked in HDFS-9242.

Please fix it as soon as possible.

> Webhdfs client leaks active NameNode connections
> 
>
> Key: HDFS-8855
> URL: https://issues.apache.org/jira/browse/HDFS-8855
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: webhdfs
>Reporter: Bob Hansen
>Assignee: Xiaobing Zhou
> Fix For: 2.8.0
>
> Attachments: HDFS-8855.005.patch, HDFS-8855.006.patch, 
> HDFS-8855.007.patch, HDFS-8855.1.patch, HDFS-8855.2.patch, HDFS-8855.3.patch, 
> HDFS-8855.4.patch, HDFS_8855.prototype.patch
>
>
> The attached script simulates a process opening ~50 files via webhdfs and 
> performing random reads.  Note that there are at most 50 concurrent reads, 
> and all webhdfs sessions are kept open.  Each read is ~64k at a random 
> position.  
> The script periodically (once per second) shells into the NameNode and 
> produces a summary of the socket states.  For my test cluster with 5 nodes, 
> it took ~30 seconds for the NameNode to accumulate ~25000 active connections 
> and fail.
> It appears that each request to the webhdfs client is opening a new 
> connection to the NameNode and keeping it open after the request is complete. 
>  If the process continues to run, eventually (~30-60 seconds), all of the 
> open connections are closed and the NameNode recovers.  
> This smells like SoftReference reaping.  Are we using SoftReferences in the 
> webhdfs client to cache NameNode connections but never re-using them?
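
For illustration, one way to produce such a socket-state summary on the 
NameNode host (assuming the default RPC port 8020; the attached script may 
differ):

{noformat}
$ netstat -tan | awk '/:8020 / {print $6}' | sort | uniq -c
{noformat}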



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9343) Empty caller context considered invalid

2015-11-01 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9343?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDFS-9343:

   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 2.8.0
   Status: Resolved  (was: Patch Available)

Committed for 2.8.0. Thanks for the contribution [~liuml07], and for the 
reviews [~jnp] and [~hitesh].

> Empty caller context considered invalid
> ---
>
> Key: HDFS-9343
> URL: https://issues.apache.org/jira/browse/HDFS-9343
> Project: Hadoop HDFS
>  Issue Type: Task
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
> Fix For: 2.8.0
>
> Attachments: HDFS-9343.000.patch, HDFS-9343.001.patch, 
> HDFS-9343.002.patch, HDFS-9343.003.patch, HDFS-9343.004.patch
>
>
> A caller context with an empty context string is considered invalid, and it 
> should not appear in the audit log.
> Meanwhile, an overly long signature will not be written to the audit log.
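
A hedged sketch of the rule described above (the real CallerContext API in 
hadoop-common may differ in names and limits):

{code}
// Invalid (null or empty) contexts are skipped when writing the audit log.
static boolean isContextValid(String context) {
  return context != null && !context.isEmpty();
}

// An overly long signature is simply omitted from the audit log entry.
static byte[] signatureForAudit(byte[] signature, int maxSignatureLen) {
  return (signature != null && signature.length <= maxSignatureLen)
      ? signature : null;
}
{code}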



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9343) Empty caller context considered invalid

2015-11-01 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9343?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14984626#comment-14984626
 ] 

Hudson commented on HDFS-9343:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk #2555 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2555/])
HDFS-9343. Empty caller context considered invalid. (Contributed by Mingliang 
Liu) (arp: rev 3cde6931cb5055a9d92503f4ecefa35571e7b07f)
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/CallerContext.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestAuditLogger.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/ProtoUtil.java


> Empty caller context considered invalid
> ---
>
> Key: HDFS-9343
> URL: https://issues.apache.org/jira/browse/HDFS-9343
> Project: Hadoop HDFS
>  Issue Type: Task
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
> Fix For: 2.8.0
>
> Attachments: HDFS-9343.000.patch, HDFS-9343.001.patch, 
> HDFS-9343.002.patch, HDFS-9343.003.patch, HDFS-9343.004.patch
>
>
> A caller context with an empty context string is considered invalid, and it 
> should not appear in the audit log.
> Meanwhile, an overly long signature will not be written to the audit log.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9276) Failed to Update HDFS Delegation Token for long running application in HA mode

2015-11-01 Thread Liangliang Gu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9276?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Liangliang Gu updated HDFS-9276:

Attachment: HDFS-9276.10.patch

> Failed to Update HDFS Delegation Token for long running application in HA mode
> --
>
> Key: HDFS-9276
> URL: https://issues.apache.org/jira/browse/HDFS-9276
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: fs, ha, security
>Affects Versions: 2.7.1
>Reporter: Liangliang Gu
>Assignee: Liangliang Gu
> Attachments: HDFS-9276.01.patch, HDFS-9276.02.patch, 
> HDFS-9276.03.patch, HDFS-9276.04.patch, HDFS-9276.05.patch, 
> HDFS-9276.06.patch, HDFS-9276.07.patch, HDFS-9276.08.patch, 
> HDFS-9276.09.patch, HDFS-9276.10.patch, debug1.PNG, debug2.PNG
>
>
> The scenario is as follows:
> 1. NameNode HA is enabled.
> 2. Kerberos is enabled.
> 3. HDFS Delegation Token (not Keytab or TGT) is used to communicate with 
> NameNode.
> 4. We want to update the HDFS Delegation Token for long running applications. 
> The HDFS client will generate private tokens for each NameNode. When we update 
> the HDFS Delegation Token, these private tokens will not be updated, which 
> will cause the token to expire.
> This bug can be reproduced by the following program:
> {code}
> import java.security.PrivilegedExceptionAction
> import org.apache.hadoop.conf.Configuration
> import org.apache.hadoop.fs.{FileSystem, Path}
> import org.apache.hadoop.security.UserGroupInformation
> object HadoopKerberosTest {
>   def main(args: Array[String]): Unit = {
>     val keytab = "/path/to/keytab/xxx.keytab"
>     val principal = "x...@abc.com"
>     val creds1 = new org.apache.hadoop.security.Credentials()
>     val ugi1 = UserGroupInformation.loginUserFromKeytabAndReturnUGI(principal, keytab)
>     ugi1.doAs(new PrivilegedExceptionAction[Void] {
>       // Get a copy of the credentials
>       override def run(): Void = {
>         val fs = FileSystem.get(new Configuration())
>         fs.addDelegationTokens("test", creds1)
>         null
>       }
>     })
>     val ugi = UserGroupInformation.createRemoteUser("test")
>     ugi.addCredentials(creds1)
>     ugi.doAs(new PrivilegedExceptionAction[Void] {
>       // Get a copy of the credentials
>       override def run(): Void = {
>         var i = 0
>         while (true) {
>           val creds1 = new org.apache.hadoop.security.Credentials()
>           val ugi1 = UserGroupInformation.loginUserFromKeytabAndReturnUGI(principal, keytab)
>           ugi1.doAs(new PrivilegedExceptionAction[Void] {
>             // Get a copy of the credentials
>             override def run(): Void = {
>               val fs = FileSystem.get(new Configuration())
>               fs.addDelegationTokens("test", creds1)
>               null
>             }
>           })
>           UserGroupInformation.getCurrentUser.addCredentials(creds1)
>           val fs = FileSystem.get(new Configuration())
>           i += 1
>           println()
>           println(i)
>           println(fs.listFiles(new Path("/user"), false))
>           Thread.sleep(60 * 1000)
>         }
>         null
>       }
>     })
>   }
> }
> {code}
> To reproduce the bug, please set the following configuration on the NameNode:
> {code}
> dfs.namenode.delegation.token.max-lifetime = 10min
> dfs.namenode.delegation.key.update-interval = 3min
> dfs.namenode.delegation.token.renew-interval = 3min
> {code}
> The bug will occur after 3 minutes.
> The stack trace is:
> {code}
> Exception in thread "main" 
> org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.token.SecretManager$InvalidToken):
>  token (HDFS_DELEGATION_TOKEN token 330156 for test) is expired
>   at org.apache.hadoop.ipc.Client.call(Client.java:1347)
>   at org.apache.hadoop.ipc.Client.call(Client.java:1300)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)
>   at com.sun.proxy.$Proxy9.getFileInfo(Unknown Source)
>   at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getFileInfo(ClientNamenodeProtocolTranslatorPB.java:651)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:186)
>   at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
>   at com.sun.proxy.$Proxy10.getFileInfo(Unknown Source)
>   at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:1679)
>   at 
> 

[jira] [Commented] (HDFS-9343) Empty caller context considered invalid

2015-11-01 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9343?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14984634#comment-14984634
 ] 

Hudson commented on HDFS-9343:
--

FAILURE: Integrated in Hadoop-Yarn-trunk-Java8 #625 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/625/])
HDFS-9343. Empty caller context considered invalid. (Contributed by Mingliang 
Liu) (arp: rev 3cde6931cb5055a9d92503f4ecefa35571e7b07f)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestAuditLogger.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/ProtoUtil.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/CallerContext.java


> Empty caller context considered invalid
> ---
>
> Key: HDFS-9343
> URL: https://issues.apache.org/jira/browse/HDFS-9343
> Project: Hadoop HDFS
>  Issue Type: Task
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
> Fix For: 2.8.0
>
> Attachments: HDFS-9343.000.patch, HDFS-9343.001.patch, 
> HDFS-9343.002.patch, HDFS-9343.003.patch, HDFS-9343.004.patch
>
>
> A caller context with an empty context string is considered invalid, and it 
> should not appear in the audit log.
> Meanwhile, an overly long signature will not be written to the audit log.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9343) Empty caller context considered invalid

2015-11-01 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9343?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14984640#comment-14984640
 ] 

Hudson commented on HDFS-9343:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk-Java8 #613 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/613/])
HDFS-9343. Empty caller context considered invalid. (Contributed by Mingliang 
Liu) (arp: rev 3cde6931cb5055a9d92503f4ecefa35571e7b07f)
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/CallerContext.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestAuditLogger.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/ProtoUtil.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


> Empty caller context considered invalid
> ---
>
> Key: HDFS-9343
> URL: https://issues.apache.org/jira/browse/HDFS-9343
> Project: Hadoop HDFS
>  Issue Type: Task
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
> Fix For: 2.8.0
>
> Attachments: HDFS-9343.000.patch, HDFS-9343.001.patch, 
> HDFS-9343.002.patch, HDFS-9343.003.patch, HDFS-9343.004.patch
>
>
> A caller context with an empty context string is considered invalid, and it 
> should not appear in the audit log.
> Meanwhile, an overly long signature will not be written to the audit log.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9343) Empty caller context considered invalid

2015-11-01 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9343?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14984700#comment-14984700
 ] 

Hudson commented on HDFS-9343:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk #2498 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/2498/])
HDFS-9343. Empty caller context considered invalid. (Contributed by Mingliang 
Liu) (arp: rev 3cde6931cb5055a9d92503f4ecefa35571e7b07f)
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/CallerContext.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/ProtoUtil.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestAuditLogger.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java


> Empty caller context considered invalid
> ---
>
> Key: HDFS-9343
> URL: https://issues.apache.org/jira/browse/HDFS-9343
> Project: Hadoop HDFS
>  Issue Type: Task
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
> Fix For: 2.8.0
>
> Attachments: HDFS-9343.000.patch, HDFS-9343.001.patch, 
> HDFS-9343.002.patch, HDFS-9343.003.patch, HDFS-9343.004.patch
>
>
> A caller context with an empty context string is considered invalid, and it 
> should not appear in the audit log.
> Meanwhile, an overly long signature will not be written to the audit log.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9343) Empty caller context considered invalid

2015-11-01 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9343?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14984623#comment-14984623
 ] 

Hudson commented on HDFS-9343:
--

FAILURE: Integrated in Hadoop-Yarn-trunk #1348 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/1348/])
HDFS-9343. Empty caller context considered invalid. (Contributed by Mingliang 
Liu) (arp: rev 3cde6931cb5055a9d92503f4ecefa35571e7b07f)
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/ProtoUtil.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestAuditLogger.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/CallerContext.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


> Empty caller context considered invalid
> ---
>
> Key: HDFS-9343
> URL: https://issues.apache.org/jira/browse/HDFS-9343
> Project: Hadoop HDFS
>  Issue Type: Task
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
> Fix For: 2.8.0
>
> Attachments: HDFS-9343.000.patch, HDFS-9343.001.patch, 
> HDFS-9343.002.patch, HDFS-9343.003.patch, HDFS-9343.004.patch
>
>
> A caller context with an empty context string is considered invalid, and it 
> should not appear in the audit log.
> Meanwhile, an overly long signature will not be written to the audit log.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8425) [umbrella] Performance tuning, investigation and optimization for erasure coding

2015-11-01 Thread Takuya Fukudome (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8425?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14984631#comment-14984631
 ] 

Takuya Fukudome commented on HDFS-8425:
---

Thanks for the comment, [~zhz]! The X-axis means the number of files 
(TestDFSIO's nrFiles parameter). And
bq. And I guess you didn't kill any DN in read tests?
Yes, you are right. I will do the read tests with a failure situation later. 
Thank you!
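
For context, the invocation shape where nrFiles is the X-axis parameter (jar 
path and size are placeholders; flags vary slightly across versions):

{noformat}
$ hadoop jar hadoop-mapreduce-client-jobclient-*-tests.jar TestDFSIO \
    -read -nrFiles 64 -size 1GB
{noformat}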

> [umbrella] Performance tuning, investigation and optimization for erasure 
> coding
> 
>
> Key: HDFS-8425
> URL: https://issues.apache.org/jira/browse/HDFS-8425
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: HDFS-7285
>Reporter: GAO Rui
> Attachments: testClientWriteReadFile_v1.pdf, 
> testdfsio-read-mbsec.png, testdfsio-write-mbsec.png
>
>
> This {{umbrella}} jira aims to track performance tuning, investigation and 
> optimization for erasure coding.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9343) Empty caller context considered invalid

2015-11-01 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9343?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14984703#comment-14984703
 ] 

Hudson commented on HDFS-9343:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk-Java8 #561 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/561/])
HDFS-9343. Empty caller context considered invalid. (Contributed by Mingliang 
Liu) (arp: rev 3cde6931cb5055a9d92503f4ecefa35571e7b07f)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestAuditLogger.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/ProtoUtil.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/CallerContext.java


> Empty caller context considered invalid
> ---
>
> Key: HDFS-9343
> URL: https://issues.apache.org/jira/browse/HDFS-9343
> Project: Hadoop HDFS
>  Issue Type: Task
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
> Fix For: 2.8.0
>
> Attachments: HDFS-9343.000.patch, HDFS-9343.001.patch, 
> HDFS-9343.002.patch, HDFS-9343.003.patch, HDFS-9343.004.patch
>
>
> A caller context with an empty context string is considered invalid, and it 
> should not appear in the audit log.
> Meanwhile, an overly long signature will not be written to the audit log.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9276) Failed to Update HDFS Delegation Token for long running application in HA mode

2015-11-01 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9276?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14984735#comment-14984735
 ] 

Hadoop QA commented on HDFS-9276:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 8s 
{color} | {color:blue} docker + precommit patch detected. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 
43s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 22s 
{color} | {color:green} trunk passed with JDK v1.8.0_60 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 4m 50s 
{color} | {color:green} trunk passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
10s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
32s {color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 2m 17s 
{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs in trunk cannot run 
convertXmlToText from findbugs {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 30s 
{color} | {color:green} trunk passed with JDK v1.8.0_60 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 3m 19s 
{color} | {color:green} trunk passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 
21s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 8s 
{color} | {color:green} the patch passed with JDK v1.8.0_60 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 5m 8s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 4m 54s 
{color} | {color:green} the patch passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 4m 54s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
15s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
32s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 
41s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 7s 
{color} | {color:green} the patch passed with JDK v1.8.0_60 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 3m 4s 
{color} | {color:green} the patch passed with JDK v1.7.0_79 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 7m 16s {color} 
| {color:red} hadoop-common in the patch failed with JDK v1.8.0_60. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 57m 9s {color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.8.0_60. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 7m 53s 
{color} | {color:green} hadoop-common in the patch passed with JDK v1.7.0_79. 
{color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 55m 47s {color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.7.0_79. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 0m 21s 
{color} | {color:red} Patch generated 58 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 179m 47s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_60 Failed junit tests | hadoop.fs.shell.TestCopyPreserveFlag |
|   | hadoop.hdfs.TestDFSShell |
|   | hadoop.hdfs.server.blockmanagement.TestNodeCount |
|   | hadoop.hdfs.server.namenode.ha.TestSeveralNameNodes |
|   | hadoop.hdfs.server.namenode.TestAddOverReplicatedStripedBlocks |
|   | hadoop.hdfs.server.datanode.TestDataNodeMetrics |
| JDK v1.7.0_79 Failed junit tests | 
hadoop.hdfs.server.blockmanagement.TestNodeCount |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure140 |
|   | 

[jira] [Updated] (HDFS-9242) Fix Findbugs warning from webhdfs.DataNodeUGIProvider.ugiCache

2015-11-01 Thread Brahma Reddy Battula (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9242?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brahma Reddy Battula updated HDFS-9242:
---
Attachment: HDFS-9242-002.patch

Thanks [~ozawa] and [~wheat9] for taking a look into this issue.

After taking a deeper look, I think:

1. Can't use volatile because it bars compiler optimizations.
2. Can't use a static initializer block, because the cache needs the conf.

So I extracted the initialization into a static method init(conf), called from 
DatanodeHttpServer during startup. Hence the multi-threaded case will not occur.

Uploaded the patch. Kindly review the same.
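
A minimal sketch of the approach described above (names follow the comment; 
the config key is hypothetical and the actual patch may differ):

{code}
import java.util.concurrent.TimeUnit;

import com.google.common.cache.Cache;
import com.google.common.cache.CacheBuilder;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.security.UserGroupInformation;

class DataNodeUGIProviderSketch {
  private static Cache<String, UserGroupInformation> ugiCache;

  // Called once from DatanodeHttpServer during startup, before any request
  // handler threads exist, so no double-checked locking is required.
  static synchronized void init(Configuration conf) {
    if (ugiCache == null) {
      ugiCache = CacheBuilder.newBuilder()
          .expireAfterAccess(
              conf.getLong("dfs.webhdfs.ugi.expire.after.access", 600),
              TimeUnit.SECONDS)
          .build();
    }
  }
}
{code}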

> Fix Findbugs warning from webhdfs.DataNodeUGIProvider.ugiCache 
> ---
>
> Key: HDFS-9242
> URL: https://issues.apache.org/jira/browse/HDFS-9242
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Xiaoyu Yao
>Assignee: Brahma Reddy Battula
>Priority: Critical
> Attachments: HDFS-9242-002.patch, HDFS-9242.patch
>
>
> This was introduced by HDFS-8855; the pre-patch warning can be found at 
> https://builds.apache.org/job/PreCommit-HDFS-Build/12975/artifact/patchprocess/trunkFindbugsWarningshadoop-hdfs.html#DC_DOUBLECHECK
> {code}
> Code  Warning
> DC  Possible doublecheck on 
> org.apache.hadoop.hdfs.server.datanode.web.webhdfs.DataNodeUGIProvider.ugiCache
>  in new 
> org.apache.hadoop.hdfs.server.datanode.web.webhdfs.DataNodeUGIProvider(ParameterParser,
>  Configuration)
> Bug type DC_DOUBLECHECK (click for details) 
> In class 
> org.apache.hadoop.hdfs.server.datanode.web.webhdfs.DataNodeUGIProvider
> In method new 
> org.apache.hadoop.hdfs.server.datanode.web.webhdfs.DataNodeUGIProvider(ParameterParser,
>  Configuration)
> On field 
> org.apache.hadoop.hdfs.server.datanode.web.webhdfs.DataNodeUGIProvider.ugiCache
> At DataNodeUGIProvider.java:[lines 49-51]
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9343) Empty caller context considered invalid

2015-11-01 Thread Mingliang Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9343?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14984686#comment-14984686
 ] 

Mingliang Liu commented on HDFS-9343:
-

Thanks [~arpitagarwal] for your review and commit.

> Empty caller context considered invalid
> ---
>
> Key: HDFS-9343
> URL: https://issues.apache.org/jira/browse/HDFS-9343
> Project: Hadoop HDFS
>  Issue Type: Task
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
> Fix For: 2.8.0
>
> Attachments: HDFS-9343.000.patch, HDFS-9343.001.patch, 
> HDFS-9343.002.patch, HDFS-9343.003.patch, HDFS-9343.004.patch
>
>
> A caller context with an empty context string is considered invalid, and it 
> should not appear in the audit log.
> Meanwhile, an overly long signature will not be written to the audit log.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9337) In webhdfs Nullpoint exception will be thrown in renamesnapshot when oldsnapshotname is not given

2015-11-01 Thread Jagadesh Kiran N (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9337?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jagadesh Kiran N updated HDFS-9337:
---
Attachment: HDFS-9337_01.patch

Thanks [~surendrasingh], I have updated the patch with your comments. Please 
review.

> In webhdfs Nullpoint exception will be thrown in renamesnapshot when 
> oldsnapshotname is not given
> -
>
> Key: HDFS-9337
> URL: https://issues.apache.org/jira/browse/HDFS-9337
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Jagadesh Kiran N
>Assignee: Jagadesh Kiran N
> Attachments: HDFS-9337_00.patch, HDFS-9337_01.patch
>
>
> {code}
>  curl -i -X PUT 
> "http://10.19.92.127:50070/webhdfs/v1/kiran/sreenu?op=RENAMESNAPSHOT=SNAPSHOTNAME;
> {code}
> A NullPointerException will be thrown:
> {code}
> {"RemoteException":{"exception":"NullPointerException","javaClassName":"java.lang.NullPointerException","message":null}}
> {code}
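
For comparison, a request that supplies both snapshot name parameters (values 
are placeholders) avoids the NPE path:

{code}
curl -i -X PUT "http://10.19.92.127:50070/webhdfs/v1/kiran/sreenu?op=RENAMESNAPSHOT&oldsnapshotname=OLDNAME&snapshotname=SNAPSHOTNAME"
{code}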



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9343) Empty caller context considered invalid

2015-11-01 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9343?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14984600#comment-14984600
 ] 

Hudson commented on HDFS-9343:
--

FAILURE: Integrated in Hadoop-trunk-Commit #8739 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/8739/])
HDFS-9343. Empty caller context considered invalid. (Contributed by Mingliang 
Liu) (arp: rev 3cde6931cb5055a9d92503f4ecefa35571e7b07f)
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/CallerContext.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestAuditLogger.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/ProtoUtil.java


> Empty caller context considered invalid
> ---
>
> Key: HDFS-9343
> URL: https://issues.apache.org/jira/browse/HDFS-9343
> Project: Hadoop HDFS
>  Issue Type: Task
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
> Fix For: 2.8.0
>
> Attachments: HDFS-9343.000.patch, HDFS-9343.001.patch, 
> HDFS-9343.002.patch, HDFS-9343.003.patch, HDFS-9343.004.patch
>
>
> A caller context with an empty context string is considered invalid, and it 
> should not appear in the audit log.
> Meanwhile, an overly long signature will not be written to the audit log.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9242) Fix Findbugs warning from webhdfs.DataNodeUGIProvider.ugiCache

2015-11-01 Thread Tsuyoshi Ozawa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9242?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14984572#comment-14984572
 ] 

Tsuyoshi Ozawa commented on HDFS-9242:
--

[~wheat9] Agree with you.

> Fix Findbugs warning from webhdfs.DataNodeUGIProvider.ugiCache 
> ---
>
> Key: HDFS-9242
> URL: https://issues.apache.org/jira/browse/HDFS-9242
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Xiaoyu Yao
>Assignee: Brahma Reddy Battula
>Priority: Critical
> Attachments: HDFS-9242.patch
>
>
> This was introduced by HDFS-8855; the pre-patch warning can be found at 
> https://builds.apache.org/job/PreCommit-HDFS-Build/12975/artifact/patchprocess/trunkFindbugsWarningshadoop-hdfs.html#DC_DOUBLECHECK
> {code}
> Code  Warning
> DC  Possible doublecheck on 
> org.apache.hadoop.hdfs.server.datanode.web.webhdfs.DataNodeUGIProvider.ugiCache
>  in new 
> org.apache.hadoop.hdfs.server.datanode.web.webhdfs.DataNodeUGIProvider(ParameterParser,
>  Configuration)
> Bug type DC_DOUBLECHECK (click for details) 
> In class 
> org.apache.hadoop.hdfs.server.datanode.web.webhdfs.DataNodeUGIProvider
> In method new 
> org.apache.hadoop.hdfs.server.datanode.web.webhdfs.DataNodeUGIProvider(ParameterParser,
>  Configuration)
> On field 
> org.apache.hadoop.hdfs.server.datanode.web.webhdfs.DataNodeUGIProvider.ugiCache
> At DataNodeUGIProvider.java:[lines 49-51]
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9343) Empty caller context considered invalid

2015-11-01 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9343?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14984597#comment-14984597
 ] 

Arpit Agarwal commented on HDFS-9343:
-

The test failures look unrelated; the failed tests passed for me locally. I 
will commit this shortly.

> Empty caller context considered invalid
> ---
>
> Key: HDFS-9343
> URL: https://issues.apache.org/jira/browse/HDFS-9343
> Project: Hadoop HDFS
>  Issue Type: Task
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
> Attachments: HDFS-9343.000.patch, HDFS-9343.001.patch, 
> HDFS-9343.002.patch, HDFS-9343.003.patch, HDFS-9343.004.patch
>
>
> A caller context with an empty context string is considered invalid, and it 
> should not appear in the audit log.
> Meanwhile, an overly long signature will not be written to the audit log.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9049) Make Datanode Netty reverse proxy port to be configurable

2015-11-01 Thread Vinayakumar B (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9049?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinayakumar B updated HDFS-9049:

Status: Patch Available  (was: Open)

> Make Datanode Netty reverse proxy port to be configurable
> -
>
> Key: HDFS-9049
> URL: https://issues.apache.org/jira/browse/HDFS-9049
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Reporter: Vinayakumar B
>Assignee: Vinayakumar B
> Attachments: HDFS-9049-01.patch
>
>
> In DatanodeHttpServer.java, Netty is used as a reverse proxy, but it starts on 
> a random port bound to localhost. This port can be made configurable for 
> better deployments.
> {code}
>  HttpServer2.Builder builder = new HttpServer2.Builder()
> .setName("datanode")
> .setConf(confForInfoServer)
> .setACL(new AccessControlList(conf.get(DFS_ADMIN, " ")))
> .hostName(getHostnameForSpnegoPrincipal(confForInfoServer))
> .addEndpoint(URI.create("http://localhost:0"))
> .setFindPort(true);
> {code}
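
A hedged sketch of the proposed change to the snippet above (the config key is 
hypothetical and may not match the patch):

{code}
// 0 keeps today's behavior: bind to an ephemeral port chosen by the OS.
int proxyPort = conf.getInt("dfs.datanode.internal.proxy.port", 0);
HttpServer2.Builder builder = new HttpServer2.Builder()
    .setName("datanode")
    .setConf(confForInfoServer)
    .setACL(new AccessControlList(conf.get(DFS_ADMIN, " ")))
    .hostName(getHostnameForSpnegoPrincipal(confForInfoServer))
    .addEndpoint(URI.create("http://localhost:" + proxyPort))
    .setFindPort(proxyPort == 0);  // only hunt for a free port when unspecified
{code}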



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9219) Even if permission is enabled in an environment, while resolving reserved paths there is no check on permission.

2015-11-01 Thread J.Andreina (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9219?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

J.Andreina updated HDFS-9219:
-
Attachment: HDFS-9219.3.patch

Updated the patch to use GenericTestUtils.assertExceptionContains(..) instead 
of assertTrue(e.getMessage().contains(..)).

Test failures are unrelated to this patch.
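
For reference, the assertion style referred to (a sketch; the actual tests may 
differ):

{code}
try {
  fs.setOwner(new Path("/.reserved/raw/foo"), "user", "group");
  fail("expected AccessControlException");
} catch (AccessControlException e) {
  GenericTestUtils.assertExceptionContains("Permission denied", e);
}
{code}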

> Even if permission is enabled in an environment, while resolving reserved 
> paths there is no check on permission.
> 
>
> Key: HDFS-9219
> URL: https://issues.apache.org/jira/browse/HDFS-9219
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: J.Andreina
>Assignee: J.Andreina
> Attachments: HDFS-9219.1.patch, HDFS-9219.2.patch, HDFS-9219.3.patch
>
>
> Currently, in a few places, reserved paths are resolved without checking 
> permissions, even if "dfs.permissions.enabled" is set to true.
> {code}
> FSPermissionChecker pc = fsd.getPermissionChecker();
> byte[][] pathComponents = 
> FSDirectory.getPathComponentsForReservedPath(src);
> INodesInPath iip;
> fsd.writeLock();
> try {
>   src = *FSDirectory.resolvePath(src, pathComponents, fsd);*
> {code}
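
A hedged sketch of the kind of check the issue calls for (method names follow 
FSDirectory/FSPermissionChecker; the actual patch may differ):

{code}
src = FSDirectory.resolvePath(src, pathComponents, fsd);
INodesInPath iip = fsd.getINodesInPath(src, true);
if (fsd.isPermissionEnabled()) {
  // Enforce permissions on the resolved path before any further use.
  fsd.checkPathAccess(pc, iip, FsAction.WRITE);
}
{code}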



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9049) Make Datanode Netty reverse proxy port to be configurable

2015-11-01 Thread Vinayakumar B (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9049?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinayakumar B updated HDFS-9049:

Attachment: HDFS-9049-02.patch

Attaching rebased patch

> Make Datanode Netty reverse proxy port to be configurable
> -
>
> Key: HDFS-9049
> URL: https://issues.apache.org/jira/browse/HDFS-9049
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Reporter: Vinayakumar B
>Assignee: Vinayakumar B
> Attachments: HDFS-9049-01.patch, HDFS-9049-02.patch
>
>
> In DatanodeHttpServer.java, Netty is used as a reverse proxy, but it starts on 
> a random port bound to localhost. This port can be made configurable for 
> better deployments.
> {code}
>  HttpServer2.Builder builder = new HttpServer2.Builder()
> .setName("datanode")
> .setConf(confForInfoServer)
> .setACL(new AccessControlList(conf.get(DFS_ADMIN, " ")))
> .hostName(getHostnameForSpnegoPrincipal(confForInfoServer))
> .addEndpoint(URI.create("http://localhost:0"))
> .setFindPort(true);
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8578) On upgrade, Datanode should process all storage/data dirs in parallel

2015-11-01 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8578?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14984829#comment-14984829
 ] 

Hadoop QA commented on HDFS-8578:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 5s 
{color} | {color:blue} docker + precommit patch detected. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 
15s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 33s 
{color} | {color:green} trunk passed with JDK v1.8.0_60 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 31s 
{color} | {color:green} trunk passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
15s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
14s {color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 1m 53s 
{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs in trunk cannot run 
convertXmlToText from findbugs {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 6s 
{color} | {color:green} trunk passed with JDK v1.8.0_60 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 53s 
{color} | {color:green} trunk passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
45s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 37s 
{color} | {color:green} the patch passed with JDK v1.8.0_60 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 37s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 38s 
{color} | {color:green} the patch passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 38s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
17s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
15s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
25s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 16s 
{color} | {color:green} the patch passed with JDK v1.8.0_60 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 8s 
{color} | {color:green} the patch passed with JDK v1.7.0_79 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 65m 17s {color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.8.0_60. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 55m 26s {color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.7.0_79. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 0m 25s 
{color} | {color:red} Patch generated 58 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 142m 33s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_60 Failed junit tests | 
hadoop.hdfs.server.datanode.TestDataNodeMetrics |
|   | hadoop.hdfs.server.blockmanagement.TestNodeCount |
|   | hadoop.hdfs.TestDFSUpgradeFromImage |
|   | hadoop.hdfs.server.namenode.ha.TestSeveralNameNodes |
|   | hadoop.hdfs.server.datanode.TestDataNodeRollingUpgrade |
|   | hadoop.hdfs.TestReplaceDatanodeOnFailure |
|   | hadoop.hdfs.TestAbandonBlock |
|   | hadoop.hdfs.server.datanode.TestBlockScanner |
| JDK v1.8.0_60 Timed out junit tests | 
org.apache.hadoop.hdfs.server.datanode.TestDeleteBlockPool |
| JDK v1.7.0_79 Failed junit tests | 
hadoop.hdfs.server.balancer.TestBalancerWithMultipleNameNodes |
|   | hadoop.hdfs.server.balancer.TestBalancer |
|   | hadoop.hdfs.TestPersistBlocks |
| JDK v1.7.0_79 Timed out junit tests | 
org.apache.hadoop.hdfs.server.datanode.TestDeleteBlockPool |
\\
\\
|| Subsystem || Report/Notes 

[jira] [Commented] (HDFS-9337) In webhdfs Nullpoint exception will be thrown in renamesnapshot when oldsnapshotname is not given

2015-11-01 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9337?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14984824#comment-14984824
 ] 

Hadoop QA commented on HDFS-9337:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 8s 
{color} | {color:blue} docker + precommit patch detected. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 
32s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 39s 
{color} | {color:green} trunk passed with JDK v1.8.0_60 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 35s 
{color} | {color:green} trunk passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
18s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
15s {color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 2m 8s 
{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs in trunk cannot run 
convertXmlToText from findbugs {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 17s 
{color} | {color:green} trunk passed with JDK v1.8.0_60 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 6s 
{color} | {color:green} trunk passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
46s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 40s 
{color} | {color:green} the patch passed with JDK v1.8.0_60 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 40s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 36s 
{color} | {color:green} the patch passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 36s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
17s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
15s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s 
{color} | {color:red} The patch has 1 line(s) that end in whitespace. Use git 
apply --whitespace=fix. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
24s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 16s 
{color} | {color:green} the patch passed with JDK v1.8.0_60 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 12s 
{color} | {color:green} the patch passed with JDK v1.7.0_79 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 63m 55s {color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.8.0_60. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 61m 12s {color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.7.0_79. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 0m 28s 
{color} | {color:red} Patch generated 56 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 147m 53s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_60 Failed junit tests | 
hadoop.hdfs.server.datanode.TestDataNodeMetrics |
|   | hadoop.hdfs.TestReadStripedFileWithDecoding |
|   | hadoop.hdfs.security.TestDelegationTokenForProxyUser |
|   | hadoop.hdfs.server.namenode.ha.TestSeveralNameNodes |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure030 |
|   | hadoop.hdfs.server.datanode.TestBlockRecovery |
| JDK v1.7.0_79 Failed junit tests | 
hadoop.hdfs.server.blockmanagement.TestNodeCount |
|   | hadoop.hdfs.server.namenode.ha.TestFailureToReadEdits |
|   | hadoop.hdfs.server.namenode.snapshot.TestRenameWithSnapshots |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=1.7.1 Server=1.7.1 

[jira] [Commented] (HDFS-8986) Add option to -du to calculate directory space usage excluding snapshots

2015-11-01 Thread Jagadesh Kiran N (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8986?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14984749#comment-14984749
 ] 

Jagadesh Kiran N commented on HDFS-8986:


[~qwertymaniac] I am not opposing it; I was collecting opinions before starting 
to implement it.

> Add option to -du to calculate directory space usage excluding snapshots
> 
>
> Key: HDFS-8986
> URL: https://issues.apache.org/jira/browse/HDFS-8986
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: snapshots
>Reporter: Gautam Gopalakrishnan
>Assignee: Jagadesh Kiran N
>
> When running {{hadoop fs -du}} on a snapshotted directory (or one of its 
> children), the report includes space consumed by blocks that are only present 
> in the snapshots. This is confusing for end users.
> {noformat}
> $  hadoop fs -du -h -s /tmp/parent /tmp/parent/*
> 799.7 M  2.3 G  /tmp/parent
> 799.7 M  2.3 G  /tmp/parent/sub1
> $ hdfs dfs -createSnapshot /tmp/parent snap1
> Created snapshot /tmp/parent/.snapshot/snap1
> $ hadoop fs -rm -skipTrash /tmp/parent/sub1/*
> ...
> $ hadoop fs -du -h -s /tmp/parent /tmp/parent/*
> 799.7 M  2.3 G  /tmp/parent
> 799.7 M  2.3 G  /tmp/parent/sub1
> $ hdfs dfs -deleteSnapshot /tmp/parent snap1
> $ hadoop fs -du -h -s /tmp/parent /tmp/parent/*
> 0  0  /tmp/parent
> 0  0  /tmp/parent/sub1
> {noformat}
> It would be helpful if we had a flag, say -X, to exclude any snapshot-related 
> disk usage from the output.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-9357) NN UI is not showing which DN is "Decommissioned" and "Decommissioned & dead".

2015-11-01 Thread Archana T (JIRA)
Archana T created HDFS-9357:
---

 Summary: NN UI is not showing which DN is "Decommissioned" and 
"Decommissioned & dead".
 Key: HDFS-9357
 URL: https://issues.apache.org/jira/browse/HDFS-9357
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Archana T
Assignee: Surendra Singh Lilhore
Priority: Critical


NN UI is not showing which DN is "Decommissioned" and "Decommissioned & dead".

Root cause --
The "Decommissioned" and "Decommissioned & dead" icons are not reflected on the 
NN UI.

When a DN is in "Decommissioned" or "Decommissioned & dead" status, that status 
is not reflected on the NN UI.

DN status is as below --

hdfs dfsadmin -report

Name: 10.xx.xx.xx1:50076 (host-xx1)
Hostname: host-xx
Decommission Status : Decommissioned
Configured Capacity: 230501634048 (214.67 GB)
DFS Used: 36864 (36 KB)


Dead datanodes (1):
Name: 10.xx.xx.xx2:50076 (host-xx2)
Hostname: host-xx
Decommission Status : Decommissioned

This status is not reflected on the NN UI.

Attached NN UI snapshots for the same.




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9242) Fix Findbugs warning from webhdfs.DataNodeUGIProvider.ugiCache

2015-11-01 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9242?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14984788#comment-14984788
 ] 

Hadoop QA commented on HDFS-9242:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 7s 
{color} | {color:blue} docker + precommit patch detected. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 
9s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 30s 
{color} | {color:green} trunk passed with JDK v1.8.0_60 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 31s 
{color} | {color:green} trunk passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
15s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
14s {color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 1m 52s 
{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs in trunk cannot run 
convertXmlToText from findbugs {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 5s 
{color} | {color:green} trunk passed with JDK v1.8.0_60 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 46s 
{color} | {color:green} trunk passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
37s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 30s 
{color} | {color:green} the patch passed with JDK v1.8.0_60 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 30s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 30s 
{color} | {color:green} the patch passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 30s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 15s 
{color} | {color:red} Patch generated 1 new checkstyle issues in 
hadoop-hdfs-project/hadoop-hdfs (total was 29, now 28). {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
14s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 0s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 4s 
{color} | {color:green} the patch passed with JDK v1.8.0_60 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 44s 
{color} | {color:green} the patch passed with JDK v1.7.0_79 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 50m 37s {color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.8.0_60. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 55m 23s {color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.7.0_79. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 0m 31s 
{color} | {color:red} Patch generated 56 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 125m 21s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_60 Failed junit tests | 
hadoop.hdfs.server.datanode.TestDataNodeMetrics |
|   | hadoop.hdfs.TestLeaseRecovery2 |
| JDK v1.7.0_79 Failed junit tests | 
hadoop.hdfs.server.namenode.TestDecommissioningStatus |
|   | hadoop.hdfs.server.namenode.ha.TestEditLogTailer |
|   | hadoop.hdfs.tools.offlineEditsViewer.TestOfflineEditsViewer |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure090 |
|   | hadoop.hdfs.shortcircuit.TestShortCircuitCache |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=1.7.1 Server=1.7.1 
Image:test-patch-base-hadoop-date2015-11-02 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12770030/HDFS-9242-002.patch |
| JIRA Issue | HDFS-9242 |
| Optional Tests |  asflicense  javac  javadoc  mvninstall  unit  findbugs  
checkstyle  compile 

[jira] [Created] (HDFS-9356) Last Contact value is empty in Datanode Info tab while Decommissioning

2015-11-01 Thread Archana T (JIRA)
Archana T created HDFS-9356:
---

 Summary: Last Contact value is empty in Datanode Info tab while 
Decommissioning 
 Key: HDFS-9356
 URL: https://issues.apache.org/jira/browse/HDFS-9356
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Archana T
Assignee: Surendra Singh Lilhore


While a DN is in decommissioning state, the Last Contact value is empty in the 
Datanode Information tab of the NameNode UI.

Attaching a snapshot of the same.





--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9356) Last Contact value is empty in Datanode Info tab while Decommissioning

2015-11-01 Thread Archana T (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9356?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Archana T updated HDFS-9356:

Attachment: decomm.png

> Last Contact value is empty in Datanode Info tab while Decommissioning 
> ---
>
> Key: HDFS-9356
> URL: https://issues.apache.org/jira/browse/HDFS-9356
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Archana T
>Assignee: Surendra Singh Lilhore
> Attachments: decomm.png
>
>
> While a DN is in decommissioning state, the Last Contact value is empty in the 
> Datanode Information tab of the NameNode UI.
> Attaching a snapshot of the same.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9202) Deprecation cases are not handled in hadoop-hdfs-client#DistributedFileSystem

2015-11-01 Thread Vinayakumar B (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9202?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinayakumar B updated HDFS-9202:

Resolution: Duplicate
Status: Resolved  (was: Patch Available)

HDFS-9301 has resolved this issue by moving HdfsConfiguration.java to 
hadoop-hdfs-client.

> Deprecation cases are not handled in hadoop-hdfs-client#DistributedFileSystem
> -
>
> Key: HDFS-9202
> URL: https://issues.apache.org/jira/browse/HDFS-9202
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Reporter: Bibin A Chundatt
>Assignee: Vinayakumar B
>Priority: Critical
> Attachments: HDFS-9202.01.patch
>
>
> Deprecated keys are not taken care of in 
> hadoop-hdfs-client#DistributedFileSystem.
> Client-side deprecated keys are not usable.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9242) Fix Findbugs warning from webhdfs.DataNodeUGIProvider.ugiCache

2015-11-01 Thread Brahma Reddy Battula (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9242?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brahma Reddy Battula updated HDFS-9242:
---
Attachment: HDFS-9242-003.patch

Uploaded the patch to fix the checkstyle issue. Test case failures are 
unrelated.

> Fix Findbugs warning from webhdfs.DataNodeUGIProvider.ugiCache 
> ---
>
> Key: HDFS-9242
> URL: https://issues.apache.org/jira/browse/HDFS-9242
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Xiaoyu Yao
>Assignee: Brahma Reddy Battula
>Priority: Critical
> Attachments: HDFS-9242-002.patch, HDFS-9242-003.patch, HDFS-9242.patch
>
>
> This was introduced by HDFS-8855; the pre-patch warning can be found at 
> https://builds.apache.org/job/PreCommit-HDFS-Build/12975/artifact/patchprocess/trunkFindbugsWarningshadoop-hdfs.html#DC_DOUBLECHECK
> {code}
> Code  Warning
> DC  Possible doublecheck on 
> org.apache.hadoop.hdfs.server.datanode.web.webhdfs.DataNodeUGIProvider.ugiCache
>  in new 
> org.apache.hadoop.hdfs.server.datanode.web.webhdfs.DataNodeUGIProvider(ParameterParser,
>  Configuration)
> Bug type DC_DOUBLECHECK (click for details) 
> In class 
> org.apache.hadoop.hdfs.server.datanode.web.webhdfs.DataNodeUGIProvider
> In method new 
> org.apache.hadoop.hdfs.server.datanode.web.webhdfs.DataNodeUGIProvider(ParameterParser,
>  Configuration)
> On field 
> org.apache.hadoop.hdfs.server.datanode.web.webhdfs.DataNodeUGIProvider.ugiCache
> At DataNodeUGIProvider.java:[lines 49-51]
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)