[ https://issues.apache.org/jira/browse/HDFS-9276?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14980760#comment-14980760 ]
Hadoop QA commented on HDFS-9276:
---------------------------------
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 6s {color} | {color:blue} docker + precommit patch detected. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s {color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s {color} | {color:green} The patch appears to include 1 new or modified test files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 14s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 4m 43s {color} | {color:green} trunk passed with JDK v1.8.0_60 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 4m 33s {color} | {color:green} trunk passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 6s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 27s {color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 2m 3s {color} | {color:red} hadoop-hdfs-project/hadoop-hdfs in trunk cannot run convertXmlToText from findbugs {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 16s {color} | {color:green} trunk passed with JDK v1.8.0_60 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 3m 4s {color} | {color:green} trunk passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 20s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 4m 41s {color} | {color:green} the patch passed with JDK v1.8.0_60 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 4m 41s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 4m 29s {color} | {color:green} the patch passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 4m 29s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 2s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 28s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s {color} | {color:red} The patch has 2 line(s) that end in whitespace. Use git apply --whitespace=fix. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 2m 2s {color} | {color:red} hadoop-common-project/hadoop-common introduced 1 new FindBugs issues. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 11s {color} | {color:green} the patch passed with JDK v1.8.0_60 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 3m 6s {color} | {color:green} the patch passed with JDK v1.7.0_79 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 7m 27s {color} | {color:red} hadoop-common in the patch failed with JDK v1.8.0_60. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 55m 48s {color} | {color:red} hadoop-hdfs in the patch failed with JDK v1.8.0_60. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 19m 15s {color} | {color:red} hadoop-common in the patch failed with JDK v1.7.0_79. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 57m 46s {color} | {color:red} hadoop-hdfs in the patch failed with JDK v1.7.0_79. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 0m 21s {color} | {color:red} Patch generated 58 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 187m 55s {color} | {color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-common-project/hadoop-common |
| | org.apache.hadoop.security.token.Token$PrivateToken doesn't override Token.equals(Object) At Token.java:At Token.java:[line 1] |
| JDK v1.7.0_79 Failed junit tests | hadoop.ipc.TestRPC |
| | hadoop.ha.TestZKFailoverController |
| | hadoop.hdfs.server.datanode.TestFsDatasetCache |
| | hadoop.hdfs.TestLeaseRecovery2 |
| | hadoop.hdfs.server.datanode.TestBlockScanner |
| | hadoop.hdfs.server.namenode.TestNamenodeRetryCache |
| | hadoop.hdfs.security.TestDelegationTokenForProxyUser |
| JDK v1.7.0_79 Timed out junit tests | org.apache.hadoop.http.TestHttpServerLifecycle |
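
The new FindBugs warning is the standard equals-in-subclass pitfall: a subclass that adds state but inherits the parent's equals(Object) lets two instances that differ only in the added field compare equal. A minimal, self-contained sketch of the pattern FindBugs expects, using illustrative class names rather than the actual Token.java:
{code}
// BaseToken stands in for a parent class that defines equality over
// its own state (here, the identifier bytes).
class BaseToken(val identifier: Array[Byte]) {
  override def equals(other: Any): Boolean = other match {
    case t: BaseToken => identifier.sameElements(t.identifier)
    case _            => false
  }
  override def hashCode: Int = java.util.Arrays.hashCode(identifier)
}

// A PrivateToken-like subclass adds a field, so it should override
// equals/hashCode as well; otherwise two tokens differing only in
// publicService compare equal via the inherited equals.
class PrivateTokenSketch(identifier: Array[Byte], val publicService: String)
    extends BaseToken(identifier) {
  override def equals(other: Any): Boolean = other match {
    case t: PrivateTokenSketch =>
      super.equals(t) && publicService == t.publicService
    case _ => false
  }
  override def hashCode: Int = 31 * super.hashCode + publicService.hashCode
}
{code}
Overriding hashCode alongside equals keeps the two consistent for hash-based collections such as the credentials map.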
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=1.7.1 Server=1.7.1 Image:test-patch-base-hadoop-date2015-10-29 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12769535/HDFS-9276.07.patch |
| JIRA Issue | HDFS-9276 |
| Optional Tests | asflicense javac javadoc mvninstall unit findbugs checkstyle compile |
| uname | Linux ac6e55b587f6 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /home/jenkins/jenkins-slave/workspace/PreCommit-HDFS-Build/patchprocess/apache-yetus-67f42f1/precommit/personality/hadoop.sh |
| git revision | trunk / c416999 |
| Default Java | 1.7.0_79 |
| Multi-JDK versions | /usr/lib/jvm/java-8-oracle:1.8.0_60 /usr/lib/jvm/java-7-openjdk-amd64:1.7.0_79 |
| findbugs | v3.0.0 |
| findbugs | https://builds.apache.org/job/PreCommit-HDFS-Build/13276/artifact/patchprocess/branch-findbugs-hadoop-hdfs-project_hadoop-hdfs.html |
| whitespace | https://builds.apache.org/job/PreCommit-HDFS-Build/13276/artifact/patchprocess/whitespace-eol.txt |
| findbugs | https://builds.apache.org/job/PreCommit-HDFS-Build/13276/artifact/patchprocess/new-findbugs-hadoop-common-project_hadoop-common.html |
| unit | https://builds.apache.org/job/PreCommit-HDFS-Build/13276/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common-jdk1.8.0_60.txt |
| unit | https://builds.apache.org/job/PreCommit-HDFS-Build/13276/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs-jdk1.8.0_60.txt |
| unit | https://builds.apache.org/job/PreCommit-HDFS-Build/13276/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common-jdk1.7.0_79.txt |
| unit | https://builds.apache.org/job/PreCommit-HDFS-Build/13276/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs-jdk1.7.0_79.txt |
| unit test logs | https://builds.apache.org/job/PreCommit-HDFS-Build/13276/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common-jdk1.8.0_60.txt https://builds.apache.org/job/PreCommit-HDFS-Build/13276/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs-jdk1.8.0_60.txt https://builds.apache.org/job/PreCommit-HDFS-Build/13276/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common-jdk1.7.0_79.txt https://builds.apache.org/job/PreCommit-HDFS-Build/13276/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs-jdk1.7.0_79.txt |
| JDK v1.7.0_79 Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/13276/testReport/ |
| asflicense | https://builds.apache.org/job/PreCommit-HDFS-Build/13276/artifact/patchprocess/patch-asflicense-problems.txt |
| modules | C: hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs U: . |
| Max memory used | 225MB |
| Powered by | Apache Yetus http://yetus.apache.org |
| Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/13276/console |
This message was automatically generated.
> Failed to Update HDFS Delegation Token for long running application in HA mode
> ------------------------------------------------------------------------------
>
> Key: HDFS-9276
> URL: https://issues.apache.org/jira/browse/HDFS-9276
> Project: Hadoop HDFS
> Issue Type: Bug
> Components: fs, ha, security
> Affects Versions: 2.7.1
> Reporter: Liangliang Gu
> Assignee: Liangliang Gu
> Attachments: HDFS-9276.01.patch, HDFS-9276.02.patch,
> HDFS-9276.03.patch, HDFS-9276.04.patch, HDFS-9276.05.patch,
> HDFS-9276.06.patch, HDFS-9276.07.patch, debug1.PNG, debug2.PNG
>
>
> The scenario is as follows:
> 1. NameNode HA is enabled.
> 2. Kerberos is enabled.
> 3. An HDFS Delegation Token (not a keytab or TGT) is used to communicate with the NameNode.
> 4. We want to update the HDFS Delegation Token for long-running applications.
> The HDFS client generates a private token for each NameNode. When we update the HDFS Delegation Token, these private tokens are not updated, so they keep their old identifiers and eventually expire.
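> A hypothetical model of that failure mode, to make the mechanism concrete (the map keys and names here are illustrative, not the HDFS client's actual internals):
> {code}
> import scala.collection.mutable
>
> object TokenRefreshModel extends App {
>   // A token is modeled as its service name plus a serial number that
>   // changes on every refresh.
>   case class Tok(service: String, serial: Int)
>
>   // Credentials keyed by token service name.
>   val creds = mutable.Map[String, Tok]()
>
>   // Storing a token for the logical HA URI also stores one "private"
>   // clone per physical NameNode address.
>   def addLogicalToken(t: Tok, nnAddrs: Seq[String]): Unit = {
>     creds(t.service) = t
>     for (addr <- nnAddrs) creds(addr) = t.copy(service = addr)
>   }
>
>   addLogicalToken(Tok("ha-hdfs:mycluster", serial = 1), Seq("nn1:8020", "nn2:8020"))
>
>   // A refresh that replaces only the logical entry leaves the private
>   // clones at serial 1; the client keeps presenting them until they expire.
>   creds("ha-hdfs:mycluster") = Tok("ha-hdfs:mycluster", serial = 2)
>   assert(creds("nn1:8020").serial == 1)
> }
> {code}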
> This bug can be reproduced by the following program:
> {code}
> import java.security.PrivilegedExceptionAction
> import org.apache.hadoop.conf.Configuration
> import org.apache.hadoop.fs.{FileSystem, Path}
> import org.apache.hadoop.security.UserGroupInformation
>
> object HadoopKerberosTest {
>   def main(args: Array[String]): Unit = {
>     val keytab = "/path/to/keytab/xxx.keytab"
>     val principal = "[email protected]"
>     val creds1 = new org.apache.hadoop.security.Credentials()
>     val ugi1 = UserGroupInformation.loginUserFromKeytabAndReturnUGI(principal, keytab)
>     // Fetch the initial delegation tokens into creds1.
>     ugi1.doAs(new PrivilegedExceptionAction[Void] {
>       override def run(): Void = {
>         val fs = FileSystem.get(new Configuration())
>         fs.addDelegationTokens("test", creds1)
>         null
>       }
>     })
>     // Run as a token-only user holding the fetched credentials.
>     val ugi = UserGroupInformation.createRemoteUser("test")
>     ugi.addCredentials(creds1)
>     ugi.doAs(new PrivilegedExceptionAction[Void] {
>       override def run(): Void = {
>         var i = 0
>         while (true) {
>           // Periodically fetch fresh delegation tokens ...
>           val creds1 = new org.apache.hadoop.security.Credentials()
>           val ugi1 = UserGroupInformation.loginUserFromKeytabAndReturnUGI(principal, keytab)
>           ugi1.doAs(new PrivilegedExceptionAction[Void] {
>             override def run(): Void = {
>               val fs = FileSystem.get(new Configuration())
>               fs.addDelegationTokens("test", creds1)
>               null
>             }
>           })
>           // ... and add them to the current user. The per-NameNode
>           // private tokens are not replaced by this call.
>           UserGroupInformation.getCurrentUser.addCredentials(creds1)
>           val fs = FileSystem.get(new Configuration())
>           i += 1
>           println()
>           println(i)
>           println(fs.listFiles(new Path("/user"), false))
>           Thread.sleep(60 * 1000)
>         }
>         null
>       }
>     })
>   }
> }
> {code}
> To reproduce the bug, please set the following configuration on the NameNode:
> {code}
> dfs.namenode.delegation.token.max-lifetime = 10min
> dfs.namenode.delegation.key.update-interval = 3min
> dfs.namenode.delegation.token.renew-interval = 3min
> {code}
> The bug will occur after 3 minutes.
> The stack trace is:
> {code}
> Exception in thread "main" org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.token.SecretManager$InvalidToken): token (HDFS_DELEGATION_TOKEN token 330156 for test) is expired
>     at org.apache.hadoop.ipc.Client.call(Client.java:1347)
>     at org.apache.hadoop.ipc.Client.call(Client.java:1300)
>     at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)
>     at com.sun.proxy.$Proxy9.getFileInfo(Unknown Source)
>     at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getFileInfo(ClientNamenodeProtocolTranslatorPB.java:651)
>     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>     at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>     at java.lang.reflect.Method.invoke(Method.java:606)
>     at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:186)
>     at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
>     at com.sun.proxy.$Proxy10.getFileInfo(Unknown Source)
>     at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:1679)
>     at org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:1106)
>     at org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:1102)
>     at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
>     at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1102)
>     at org.apache.hadoop.fs.FileSystem.resolvePath(FileSystem.java:747)
>     at org.apache.hadoop.hdfs.DistributedFileSystem$15.<init>(DistributedFileSystem.java:726)
>     at org.apache.hadoop.hdfs.DistributedFileSystem.listLocatedStatus(DistributedFileSystem.java:717)
>     at org.apache.hadoop.fs.FileSystem.listLocatedStatus(FileSystem.java:1780)
>     at org.apache.hadoop.fs.FileSystem$5.<init>(FileSystem.java:1842)
>     at org.apache.hadoop.fs.FileSystem.listFiles(FileSystem.java:1839)
>     at HadoopKerberosTest6$$anon$2.run(HadoopKerberosTest6.scala:55)
>     at HadoopKerberosTest6$$anon$2.run(HadoopKerberosTest6.scala:32)
>     at java.security.AccessController.doPrivileged(Native Method)
>     at javax.security.auth.Subject.doAs(Subject.java:415)
>     at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1491)
>     at HadoopKerberosTest6$.main(HadoopKerberosTest6.scala:32)
>     at HadoopKerberosTest6.main(HadoopKerberosTest6.scala)
>     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>     at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>     at java.lang.reflect.Method.invoke(Method.java:606)
>     at com.intellij.rt.execution.application.AppMain.main(AppMain.java:140)
> {code}
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)