[jira] [Commented] (HADOOP-12122) Fix Hadoop should avoid unsafe split and append on fields that might be IPv6 literals
[ https://issues.apache.org/jira/browse/HADOOP-12122?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14962694#comment-14962694 ]

Nemanja Matkovic commented on HADOOP-12122:
-------------------------------------------

[~aw] - I guess I have to split this patch into multiple smaller patches, as there are no QA bot results coming in?

> Fix Hadoop should avoid unsafe split and append on fields that might be IPv6 literals
> -------------------------------------------------------------------------------------
>
>                 Key: HADOOP-12122
>                 URL: https://issues.apache.org/jira/browse/HADOOP-12122
>             Project: Hadoop Common
>          Issue Type: Sub-task
>    Affects Versions: HADOOP-11890
>            Reporter: Nate Edel
>            Assignee: Nemanja Matkovic
>         Attachments: HADOOP-12122-HADOOP-11890.0.patch, HADOOP-12122-HADOOP-11890.3.patch, HADOOP-12122-HADOOP-11890.4.patch, HADOOP-12122-HADOOP-11890.5.patch, HADOOP-12122-HADOOP-11890.6.patch, HADOOP-12122-HADOOP-11890.7.patch, HADOOP-12122-HADOOP-11890.8.patch, HADOOP-12122-HADOOP-11890.9.patch, HADOOP-12122-HADOOP-12122.2.patch, HADOOP-12122-HADOOP-12122.3.patch, HADOOP-12122.0.patch, lets_blow_up_a_lot_of_tests.patch
>
>
> There are a fairly extensive number of locations, found via code inspection, which use unsafe methods of handling addresses in a dual-stack or IPv6-only world:
> - splits on the first ":", assuming that it delimits a host from a port
> - produces a host:port pair by appending :port blindly (Java prefers [ipv6]:port, which is the standard form for IPv6 URIs)
> - depends on the behavior of InetSocketAddress.toString(), which produces the above.
> This patch fixes those patterns where I can find them, and replaces calls to InetSocketAddress.toString() with a wrapper that properly brackets the IPv6 address if there is one.

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
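The unsafe patterns listed in the issue description, and the [ipv6]:port bracketing convention the patch adopts, can be sketched in isolation. This is a minimal illustration only; the helper names `bracketHostPort` and `splitHostPort` are hypothetical and are not the wrapper added by the actual patch:

```java
// Illustrative only: a safe join/split for host:port strings that may hold
// IPv6 literals. Raw IPv6 literals contain ':', so splitting on the FIRST
// colon (the bug described above) cuts "::1" in the wrong place; RFC 3986
// requires brackets around the host instead.
final class HostPortDemo {

    // Join host and port, bracketing bare IPv6 literals.
    static String bracketHostPort(String host, int port) {
        if (host.contains(":") && !host.startsWith("[")) {
            return "[" + host + "]:" + port;   // IPv6 literal
        }
        return host + ":" + port;              // hostname or IPv4
    }

    // Split on the LAST ':' so IPv6 literals are not cut at the first colon,
    // then strip brackets if present.
    static String[] splitHostPort(String hostPort) {
        int i = hostPort.lastIndexOf(':');
        String host = hostPort.substring(0, i);
        if (host.startsWith("[") && host.endsWith("]")) {
            host = host.substring(1, host.length() - 1);
        }
        return new String[] { host, hostPort.substring(i + 1) };
    }
}
```

Splitting on the last colon rather than the first is what makes the same code path work for hostnames, IPv4, and bracketed or bare IPv6 input.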
[jira] [Commented] (HADOOP-12483) Maintain wrapped SASL ordering for postponed IPC responses
[ https://issues.apache.org/jira/browse/HADOOP-12483?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14962783#comment-14962783 ]

Hudson commented on HADOOP-12483:
---------------------------------

FAILURE: Integrated in Hadoop-Yarn-trunk-Java8 #563 (See [https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/563/])
HADOOP-12483. Maintain wrapped SASL ordering for postponed IPC responses. (yliu: rev 476a251e5efe5e5850671f924e622b587c262653)
* hadoop-common-project/hadoop-common/CHANGES.txt
* hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/ipc/TestSaslRPC.java
* hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Server.java

> Maintain wrapped SASL ordering for postponed IPC responses
> ----------------------------------------------------------
>
>                 Key: HADOOP-12483
>                 URL: https://issues.apache.org/jira/browse/HADOOP-12483
>             Project: Hadoop Common
>          Issue Type: Bug
>          Components: ipc
>    Affects Versions: 2.8.0
>            Reporter: Daryn Sharp
>            Assignee: Daryn Sharp
>            Priority: Critical
>             Fix For: 2.8.0
>
>         Attachments: HADOOP-12483.patch
>
>
> A SASL encryption algorithm (wrapping) may have a required ordering for encrypted responses. The IPC layer encrypts when the response is set, based on the assumption that it is being immediately sent. Postponed responses violate that assumption.
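The ordering constraint in the description can be modeled with a toy example. This is NOT the actual Server.java change; the class names are illustrative, and a simple sequence counter stands in for a stateful SASL stream cipher whose output depends on how many buffers were wrapped before the current one:

```java
// Toy model of the hazard: if replies are wrapped at creation time but sent
// in a different order (as postponed responses allow), cipher-state order and
// wire order diverge and the peer unwraps garbage. Wrapping under one lock at
// send time keeps the two orders aligned.
import java.util.ArrayList;
import java.util.List;

final class OrderedWrapDemo {
    // Stand-in for a SASL stream cipher: output depends on prior wrap count.
    static final class StatefulWrapper {
        private int seq = 0;
        synchronized String wrap(String payload) { return (seq++) + ":" + payload; }
    }

    private final StatefulWrapper wrapper = new StatefulWrapper();
    private final List<String> wire = new ArrayList<>();
    private final Object sendLock = new Object();

    // Encrypt-at-send: even if this response was built earlier and postponed,
    // its ciphertext sequence number matches its position on the wire.
    void send(String payload) {
        synchronized (sendLock) {
            wire.add(wrapper.wrap(payload));
        }
    }

    List<String> wire() { return wire; }
}
```

With this shape, a response created first but sent second is still wrapped second, which is the invariant the bug title asks for.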
[jira] [Commented] (HADOOP-10941) Proxy user verification NPEs if remote host is unresolvable
[ https://issues.apache.org/jira/browse/HADOOP-10941?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14962595#comment-14962595 ]

Hudson commented on HADOOP-10941:
---------------------------------

FAILURE: Integrated in Hadoop-Hdfs-trunk #2447 (See [https://builds.apache.org/job/Hadoop-Hdfs-trunk/2447/])
HADOOP-10941. Proxy user verification NPEs if remote host is unresolvable. (stevel: rev 0ab3f9d56465bf31668159c562305a3b8222004c)
* hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/authorize/DefaultImpersonationProvider.java
* hadoop-common-project/hadoop-common/CHANGES.txt
* hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/MachineList.java
* hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/authorize/TestProxyUsers.java
* hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/util/TestMachineList.java

> Proxy user verification NPEs if remote host is unresolvable
> -----------------------------------------------------------
>
>                 Key: HADOOP-10941
>                 URL: https://issues.apache.org/jira/browse/HADOOP-10941
>             Project: Hadoop Common
>          Issue Type: Bug
>          Components: ipc, security
>    Affects Versions: 2.5.0, 3.0.0
>            Reporter: Daryn Sharp
>            Assignee: Benoy Antony
>            Priority: Critical
>              Labels: BB2015-05-TBR
>             Fix For: 2.8.0
>
>         Attachments: HADOOP-10941.patch
>
>
> A null is passed to the impersonation providers for the remote address if it is unresolvable. {{DefaultImpersonationProvider}} will NPE, ipc will close the connection immediately (correct behavior for such unexpected exceptions), and the client fails on {{EOFException}}.
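The defensive check implied by the bug report can be sketched as follows. The class and method names are hypothetical; this is not the actual MachineList/DefaultImpersonationProvider change, just the shape of the fix (treat a null remote address as a clean authorization failure rather than dereferencing it):

```java
// Minimal sketch: an impersonation-style check that tolerates a null remote
// address. remoteAddr may be null when the caller's host did not resolve;
// denying cleanly avoids the NPE -> closed connection -> EOFException chain
// described above.
import java.net.InetAddress;
import java.util.Set;

final class ProxyHostCheck {
    static boolean isAllowed(InetAddress remoteAddr, Set<String> allowedIps) {
        if (remoteAddr == null) {
            return false;   // unresolvable host: deny, don't dereference
        }
        return allowedIps.contains(remoteAddr.getHostAddress());
    }
}
```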
[jira] [Commented] (HADOOP-12483) Maintain wrapped SASL ordering for postponed IPC responses
[ https://issues.apache.org/jira/browse/HADOOP-12483?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14962749#comment-14962749 ]

Yi Liu commented on HADOOP-12483:
---------------------------------

+1, looks good to me. Thanks [~daryn], will commit shortly.

> Maintain wrapped SASL ordering for postponed IPC responses
> ----------------------------------------------------------
>
>                 Key: HADOOP-12483
>                 URL: https://issues.apache.org/jira/browse/HADOOP-12483
>             Project: Hadoop Common
>          Issue Type: Bug
>          Components: ipc
>    Affects Versions: 2.8.0
>            Reporter: Daryn Sharp
>            Assignee: Daryn Sharp
>            Priority: Critical
>
>         Attachments: HADOOP-12483.patch
>
>
> A SASL encryption algorithm (wrapping) may have a required ordering for encrypted responses. The IPC layer encrypts when the response is set, based on the assumption that it is being immediately sent. Postponed responses violate that assumption.
[jira] [Commented] (HADOOP-12483) Maintain wrapped SASL ordering for postponed IPC responses
[ https://issues.apache.org/jira/browse/HADOOP-12483?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14962759#comment-14962759 ]

Hudson commented on HADOOP-12483:
---------------------------------

FAILURE: Integrated in Hadoop-trunk-Commit #8659 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/8659/])
HADOOP-12483. Maintain wrapped SASL ordering for postponed IPC responses. (yliu: rev 476a251e5efe5e5850671f924e622b587c262653)
* hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/ipc/TestSaslRPC.java
* hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Server.java
* hadoop-common-project/hadoop-common/CHANGES.txt

> Maintain wrapped SASL ordering for postponed IPC responses
> ----------------------------------------------------------
>
>                 Key: HADOOP-12483
>                 URL: https://issues.apache.org/jira/browse/HADOOP-12483
>             Project: Hadoop Common
>          Issue Type: Bug
>          Components: ipc
>    Affects Versions: 2.8.0
>            Reporter: Daryn Sharp
>            Assignee: Daryn Sharp
>            Priority: Critical
>             Fix For: 2.8.0
>
>         Attachments: HADOOP-12483.patch
>
>
> A SASL encryption algorithm (wrapping) may have a required ordering for encrypted responses. The IPC layer encrypts when the response is set, based on the assumption that it is being immediately sent. Postponed responses violate that assumption.
[jira] [Updated] (HADOOP-12483) Maintain wrapped SASL ordering for postponed IPC responses
[ https://issues.apache.org/jira/browse/HADOOP-12483?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Yi Liu updated HADOOP-12483:
----------------------------
       Resolution: Fixed
     Hadoop Flags: Reviewed
    Fix Version/s: 2.8.0
           Status: Resolved  (was: Patch Available)

Committed to trunk and branch-2.

> Maintain wrapped SASL ordering for postponed IPC responses
> ----------------------------------------------------------
>
>                 Key: HADOOP-12483
>                 URL: https://issues.apache.org/jira/browse/HADOOP-12483
>             Project: Hadoop Common
>          Issue Type: Bug
>          Components: ipc
>    Affects Versions: 2.8.0
>            Reporter: Daryn Sharp
>            Assignee: Daryn Sharp
>            Priority: Critical
>             Fix For: 2.8.0
>
>         Attachments: HADOOP-12483.patch
>
>
> A SASL encryption algorithm (wrapping) may have a required ordering for encrypted responses. The IPC layer encrypts when the response is set, based on the assumption that it is being immediately sent. Postponed responses violate that assumption.
[jira] [Commented] (HADOOP-12122) Fix Hadoop should avoid unsafe split and append on fields that might be IPv6 literals
[ https://issues.apache.org/jira/browse/HADOOP-12122?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14962705#comment-14962705 ]

Allen Wittenauer commented on HADOOP-12122:
-------------------------------------------

Yeah, it just got killed off a bit ago. To put this in perspective, you're triggering all of the long unit tests:

* hadoop-yarn-server-resourcemanager - 56 minutes
* hadoop-hdfs - 54 minutes
* hadoop-mapreduce-client-jobclient - 102 minutes (and that's *with* parallel tests turned on!)

The precommit job needs to finish in less than 500 minutes or Jenkins will shoot it. You're at 300 minutes just in unit tests, and that's only for one JDK. I'd recommend splitting this up into four patches, one for each of the major Hadoop projects. Patch+commit order should be common, hdfs, yarn, then mapreduce.

> Fix Hadoop should avoid unsafe split and append on fields that might be IPv6 literals
> -------------------------------------------------------------------------------------
>
>                 Key: HADOOP-12122
>                 URL: https://issues.apache.org/jira/browse/HADOOP-12122
>             Project: Hadoop Common
>          Issue Type: Sub-task
>    Affects Versions: HADOOP-11890
>            Reporter: Nate Edel
>            Assignee: Nemanja Matkovic
>         Attachments: HADOOP-12122-HADOOP-11890.0.patch, HADOOP-12122-HADOOP-11890.3.patch, HADOOP-12122-HADOOP-11890.4.patch, HADOOP-12122-HADOOP-11890.5.patch, HADOOP-12122-HADOOP-11890.6.patch, HADOOP-12122-HADOOP-11890.7.patch, HADOOP-12122-HADOOP-11890.8.patch, HADOOP-12122-HADOOP-11890.9.patch, HADOOP-12122-HADOOP-12122.2.patch, HADOOP-12122-HADOOP-12122.3.patch, HADOOP-12122.0.patch, lets_blow_up_a_lot_of_tests.patch
>
>
> There are a fairly extensive number of locations, found via code inspection, which use unsafe methods of handling addresses in a dual-stack or IPv6-only world:
> - splits on the first ":", assuming that it delimits a host from a port
> - produces a host:port pair by appending :port blindly (Java prefers [ipv6]:port, which is the standard form for IPv6 URIs)
> - depends on the behavior of InetSocketAddress.toString(), which produces the above.
> This patch fixes those patterns where I can find them, and replaces calls to InetSocketAddress.toString() with a wrapper that properly brackets the IPv6 address if there is one.
[jira] [Commented] (HADOOP-10941) Proxy user verification NPEs if remote host is unresolvable
[ https://issues.apache.org/jira/browse/HADOOP-10941?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14962587#comment-14962587 ]

Hudson commented on HADOOP-10941:
---------------------------------

FAILURE: Integrated in Hadoop-Hdfs-trunk-Java8 #510 (See [https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/510/])
HADOOP-10941. Proxy user verification NPEs if remote host is unresolvable. (stevel: rev 0ab3f9d56465bf31668159c562305a3b8222004c)
* hadoop-common-project/hadoop-common/CHANGES.txt
* hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/util/TestMachineList.java
* hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/authorize/DefaultImpersonationProvider.java
* hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/authorize/TestProxyUsers.java
* hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/MachineList.java

> Proxy user verification NPEs if remote host is unresolvable
> -----------------------------------------------------------
>
>                 Key: HADOOP-10941
>                 URL: https://issues.apache.org/jira/browse/HADOOP-10941
>             Project: Hadoop Common
>          Issue Type: Bug
>          Components: ipc, security
>    Affects Versions: 2.5.0, 3.0.0
>            Reporter: Daryn Sharp
>            Assignee: Benoy Antony
>            Priority: Critical
>              Labels: BB2015-05-TBR
>             Fix For: 2.8.0
>
>         Attachments: HADOOP-10941.patch
>
>
> A null is passed to the impersonation providers for the remote address if it is unresolvable. {{DefaultImpersonationProvider}} will NPE, ipc will close the connection immediately (correct behavior for such unexpected exceptions), and the client fails on {{EOFException}}.
[jira] [Commented] (HADOOP-12483) Maintain wrapped SASL ordering for postponed IPC responses
[ https://issues.apache.org/jira/browse/HADOOP-12483?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14962865#comment-14962865 ]

Hudson commented on HADOOP-12483:
---------------------------------

FAILURE: Integrated in Hadoop-Mapreduce-trunk #2497 (See [https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2497/])
HADOOP-12483. Maintain wrapped SASL ordering for postponed IPC responses. (yliu: rev 476a251e5efe5e5850671f924e622b587c262653)
* hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/ipc/TestSaslRPC.java
* hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Server.java
* hadoop-common-project/hadoop-common/CHANGES.txt

> Maintain wrapped SASL ordering for postponed IPC responses
> ----------------------------------------------------------
>
>                 Key: HADOOP-12483
>                 URL: https://issues.apache.org/jira/browse/HADOOP-12483
>             Project: Hadoop Common
>          Issue Type: Bug
>          Components: ipc
>    Affects Versions: 2.8.0
>            Reporter: Daryn Sharp
>            Assignee: Daryn Sharp
>            Priority: Critical
>             Fix For: 2.8.0
>
>         Attachments: HADOOP-12483.patch
>
>
> A SASL encryption algorithm (wrapping) may have a required ordering for encrypted responses. The IPC layer encrypts when the response is set, based on the assumption that it is being immediately sent. Postponed responses violate that assumption.
[jira] [Commented] (HADOOP-12483) Maintain wrapped SASL ordering for postponed IPC responses
[ https://issues.apache.org/jira/browse/HADOOP-12483?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14962825#comment-14962825 ]

Hudson commented on HADOOP-12483:
---------------------------------

SUCCESS: Integrated in Hadoop-Yarn-trunk #1285 (See [https://builds.apache.org/job/Hadoop-Yarn-trunk/1285/])
HADOOP-12483. Maintain wrapped SASL ordering for postponed IPC responses. (yliu: rev 476a251e5efe5e5850671f924e622b587c262653)
* hadoop-common-project/hadoop-common/CHANGES.txt
* hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Server.java
* hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/ipc/TestSaslRPC.java

> Maintain wrapped SASL ordering for postponed IPC responses
> ----------------------------------------------------------
>
>                 Key: HADOOP-12483
>                 URL: https://issues.apache.org/jira/browse/HADOOP-12483
>             Project: Hadoop Common
>          Issue Type: Bug
>          Components: ipc
>    Affects Versions: 2.8.0
>            Reporter: Daryn Sharp
>            Assignee: Daryn Sharp
>            Priority: Critical
>             Fix For: 2.8.0
>
>         Attachments: HADOOP-12483.patch
>
>
> A SASL encryption algorithm (wrapping) may have a required ordering for encrypted responses. The IPC layer encrypts when the response is set, based on the assumption that it is being immediately sent. Postponed responses violate that assumption.
[jira] [Commented] (HADOOP-12483) Maintain wrapped SASL ordering for postponed IPC responses
[ https://issues.apache.org/jira/browse/HADOOP-12483?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14962820#comment-14962820 ]

Hudson commented on HADOOP-12483:
---------------------------------

FAILURE: Integrated in Hadoop-Mapreduce-trunk-Java8 #548 (See [https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/548/])
HADOOP-12483. Maintain wrapped SASL ordering for postponed IPC responses. (yliu: rev 476a251e5efe5e5850671f924e622b587c262653)
* hadoop-common-project/hadoop-common/CHANGES.txt
* hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/ipc/TestSaslRPC.java
* hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Server.java

> Maintain wrapped SASL ordering for postponed IPC responses
> ----------------------------------------------------------
>
>                 Key: HADOOP-12483
>                 URL: https://issues.apache.org/jira/browse/HADOOP-12483
>             Project: Hadoop Common
>          Issue Type: Bug
>          Components: ipc
>    Affects Versions: 2.8.0
>            Reporter: Daryn Sharp
>            Assignee: Daryn Sharp
>            Priority: Critical
>             Fix For: 2.8.0
>
>         Attachments: HADOOP-12483.patch
>
>
> A SASL encryption algorithm (wrapping) may have a required ordering for encrypted responses. The IPC layer encrypts when the response is set, based on the assumption that it is being immediately sent. Postponed responses violate that assumption.
[jira] [Commented] (HADOOP-7266) Deprecate metrics v1
[ https://issues.apache.org/jira/browse/HADOOP-7266?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14962242#comment-14962242 ]

Hadoop QA commented on HADOOP-7266:
-----------------------------------

(x) *-1 overall*

|| Vote || Subsystem || Runtime || Comment ||
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 3 new or modified test files. |
| +1 | mvninstall | 8m 51s | trunk passed |
| +1 | compile | 4m 38s | trunk passed with JDK v1.8.0 |
| +1 | compile | 4m 18s | trunk passed with JDK v1.7.0_79 |
| +1 | checkstyle | 1m 12s | trunk passed |
| +1 | mvneclipse | 0m 45s | trunk passed |
| +1 | findbugs | 4m 7s | trunk passed |
| +1 | javadoc | 2m 2s | trunk passed with JDK v1.8.0 |
| +1 | javadoc | 1m 58s | trunk passed with JDK v1.7.0_79 |
| +1 | mvninstall | 1m 35s | the patch passed |
| +1 | compile | 5m 25s | the patch passed with JDK v1.8.0 |
| +1 | javac | 5m 25s | the patch passed |
| +1 | compile | 4m 12s | the patch passed with JDK v1.7.0_79 |
| +1 | javac | 4m 12s | the patch passed |
| +1 | checkstyle | 1m 2s | the patch passed |
| +1 | mvneclipse | 0m 40s | the patch passed |
| +1 | whitespace | 0m 0s | Patch has no whitespace issues. |
| +1 | findbugs | 4m 4s | the patch passed |
| +1 | javadoc | 1m 43s | the patch passed with JDK v1.8.0 |
| +1 | javadoc | 1m 46s | the patch passed with JDK v1.7.0_79 |
| +1 | unit | 22m 2s | hadoop-common in the patch passed with JDK v1.8.0. |
| +1 | unit | 1m 36s | hadoop-mapreduce-client-core in the patch passed with JDK v1.8.0. |
| +1 | unit | 6m 30s | hadoop-streaming in the patch passed with JDK v1.8.0. |
| -1 | unit | 24m 11s | hadoop-common in the patch failed with JDK v1.7.0_79. |
| +1 | unit | 1m 54s | hadoop-mapreduce-client-core in the patch passed with JDK v1.7.0_79. |
| +1 | unit | 6m 29s | hadoop-streaming in the patch passed with JDK v1.7.0_79. |
| -1 | asflicense | 0m 21s | Patch generated 1 ASF License warnings. |
| | | 113m 10s | |

|| Reason || Tests ||
| JDK v1.7.0_79 Failed junit tests | hadoop.metrics2.impl.TestMetricsSystemImpl |

|| Subsystem || Report/Notes ||
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12767244/HADOOP-7266.005.patch |
| JIRA Issue | HADOOP-7266 |
| Optional Tests | asflicense javac javadoc mvninstall unit findbugs checkstyle compile |
| uname | Linux
[jira] [Commented] (HADOOP-7266) Deprecate metrics v1
[ https://issues.apache.org/jira/browse/HADOOP-7266?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14962206#comment-14962206 ]

Akira AJISAKA commented on HADOOP-7266:
---------------------------------------

For example, we need to add {{@SuppressWarnings}} to {{GangliaContext}} to suppress the warning for extending a deprecated class.
{code}
@Deprecated
@SuppressWarnings("deprecation")
@InterfaceAudience.Public
@InterfaceStability.Evolving
public class GangliaContext extends AbstractMetricsContext {
{code}
I suppressed warnings only to reduce javac warnings as much as possible. I have no other reason to do that.

> Deprecate metrics v1
> --------------------
>
>                 Key: HADOOP-7266
>                 URL: https://issues.apache.org/jira/browse/HADOOP-7266
>             Project: Hadoop Common
>          Issue Type: Improvement
>          Components: metrics
>    Affects Versions: 2.8.0
>            Reporter: Luke Lu
>            Assignee: Akira AJISAKA
>            Priority: Blocker
>         Attachments: HADOOP-7266-remove.001.patch, HADOOP-7266.001.patch, HADOOP-7266.002.patch, HADOOP-7266.003.patch, HADOOP-7266.004.patch
>
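The pattern in the comment can be reproduced in isolation with stand-in classes. The names below are illustrative placeholders, not the real hadoop-common classes (`AbstractMetricsContext`, `GangliaContext`):

```java
// Extending a @Deprecated base class triggers a javac deprecation warning.
// Marking the subclass @Deprecated as well (it is part of the same legacy
// API) and adding @SuppressWarnings("deprecation") keeps the build
// warning-clean while still signaling deprecation to downstream users.
@Deprecated
class LegacyMetricsContext {                 // stand-in for AbstractMetricsContext
    String emit() { return "metrics v1 record"; }
}

@Deprecated
@SuppressWarnings("deprecation")             // silences "extends deprecated class"
class LegacyGangliaContext extends LegacyMetricsContext {
    // body unchanged; the annotations are the point of the example
}
```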
[jira] [Updated] (HADOOP-7266) Deprecate metrics v1
[ https://issues.apache.org/jira/browse/HADOOP-7266?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Akira AJISAKA updated HADOOP-7266:
----------------------------------
    Attachment: HADOOP-7266.005.patch

05 patch: Removed useless {{@SuppressWarnings}}

> Deprecate metrics v1
> --------------------
>
>                 Key: HADOOP-7266
>                 URL: https://issues.apache.org/jira/browse/HADOOP-7266
>             Project: Hadoop Common
>          Issue Type: Improvement
>          Components: metrics
>    Affects Versions: 2.8.0
>            Reporter: Luke Lu
>            Assignee: Akira AJISAKA
>            Priority: Blocker
>         Attachments: HADOOP-7266-remove.001.patch, HADOOP-7266.001.patch, HADOOP-7266.002.patch, HADOOP-7266.003.patch, HADOOP-7266.004.patch, HADOOP-7266.005.patch
>
[jira] [Updated] (HADOOP-10941) Proxy user verification NPEs if remote host is unresolvable
[ https://issues.apache.org/jira/browse/HADOOP-10941?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Steve Loughran updated HADOOP-10941:
------------------------------------
    Status: Open  (was: Patch Available)

> Proxy user verification NPEs if remote host is unresolvable
> -----------------------------------------------------------
>
>                 Key: HADOOP-10941
>                 URL: https://issues.apache.org/jira/browse/HADOOP-10941
>             Project: Hadoop Common
>          Issue Type: Bug
>          Components: ipc, security
>    Affects Versions: 2.5.0, 3.0.0
>            Reporter: Daryn Sharp
>            Assignee: Benoy Antony
>            Priority: Critical
>              Labels: BB2015-05-TBR
>
>         Attachments: HADOOP-10941.patch
>
>
> A null is passed to the impersonation providers for the remote address if it is unresolvable. {{DefaultImpersonationProvider}} will NPE, ipc will close the connection immediately (correct behavior for such unexpected exceptions), and the client fails on {{EOFException}}.
[jira] [Updated] (HADOOP-10941) Proxy user verification NPEs if remote host is unresolvable
[ https://issues.apache.org/jira/browse/HADOOP-10941?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Steve Loughran updated HADOOP-10941:
------------------------------------
    Status: Patch Available  (was: Open)

> Proxy user verification NPEs if remote host is unresolvable
> -----------------------------------------------------------
>
>                 Key: HADOOP-10941
>                 URL: https://issues.apache.org/jira/browse/HADOOP-10941
>             Project: Hadoop Common
>          Issue Type: Bug
>          Components: ipc, security
>    Affects Versions: 2.5.0, 3.0.0
>            Reporter: Daryn Sharp
>            Assignee: Benoy Antony
>            Priority: Critical
>              Labels: BB2015-05-TBR
>
>         Attachments: HADOOP-10941.patch
>
>
> A null is passed to the impersonation providers for the remote address if it is unresolvable. {{DefaultImpersonationProvider}} will NPE, ipc will close the connection immediately (correct behavior for such unexpected exceptions), and the client fails on {{EOFException}}.
[jira] [Commented] (HADOOP-11820) aw jira testing, ignore
[ https://issues.apache.org/jira/browse/HADOOP-11820?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14962283#comment-14962283 ]

Hadoop QA commented on HADOOP-11820:
------------------------------------

(x) *-1 overall*

|| Vote || Subsystem || Runtime || Comment ||
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 2 new or modified test files. |
| +1 | mvninstall | 9m 9s | trunk passed |
| +1 | compile | 4m 18s | trunk passed with JDK v1.8.0 |
| +1 | compile | 3m 48s | trunk passed with JDK v1.7.0_79 |
| +1 | checkstyle | 1m 0s | trunk passed |
| +1 | mvneclipse | 0m 30s | trunk passed |
| -1 | findbugs | 1m 49s | hadoop-hdfs-project/hadoop-hdfs in trunk cannot run convertXmlToText from findbugs |
| +1 | javadoc | 2m 6s | trunk passed with JDK v1.8.0 |
| +1 | javadoc | 2m 56s | trunk passed with JDK v1.7.0_79 |
| +1 | mvninstall | 1m 10s | the patch passed |
| +1 | compile | 4m 17s | the patch passed with JDK v1.8.0 |
| +1 | cc | 4m 17s | the patch passed |
| +1 | javac | 4m 17s | the patch passed |
| +1 | compile | 3m 48s | the patch passed with JDK v1.7.0_79 |
| +1 | cc | 3m 48s | the patch passed |
| +1 | javac | 3m 48s | the patch passed |
| -1 | checkstyle | 0m 56s | Patch generated 9 new checkstyle issues in . (total was 654, now 658). |
| +1 | mvneclipse | 0m 26s | the patch passed |
| +1 | whitespace | 0m 0s | Patch has no whitespace issues. |
| +1 | findbugs | 3m 53s | the patch passed |
| +1 | javadoc | 2m 3s | the patch passed with JDK v1.8.0 |
| +1 | javadoc | 2m 50s | the patch passed with JDK v1.7.0_79 |
| +1 | unit | 21m 28s | hadoop-common in the patch passed with JDK v1.8.0. |
| -1 | unit | 195m 8s | hadoop-hdfs in the patch failed with JDK v1.8.0. |
| +1 | unit | 22m 50s | hadoop-common in the patch passed with JDK v1.7.0_79. |
| -1 | unit | 190m 27s | hadoop-hdfs in the patch failed with JDK v1.7.0_79. |
| -1 | asflicense | 0m 20s | Patch generated 56 ASF License warnings. |
| | | 478m 32s | |

|| Reason || Tests ||
| JDK v1.7.0_79 Failed junit tests | hadoop.hdfs.web.TestWebHDFSOAuth2 |
| | hadoop.hdfs.web.TestWebHDFSOAuth2 |

|| Subsystem || Report/Notes ||
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12767235/HDFS-9184.008.patch |
| JIRA Issue | HADOOP-11820 |
| Optional Tests | asflicense javac javadoc mvninstall unit findbugs checkstyle compile cc |
| uname
[jira] [Updated] (HADOOP-11628) SPNEGO auth does not work with CNAMEs in JDK8
[ https://issues.apache.org/jira/browse/HADOOP-11628?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Steve Loughran updated HADOOP-11628:
------------------------------------
       Resolution: Fixed
    Fix Version/s: 2.8.0
           Status: Resolved  (was: Patch Available)

> SPNEGO auth does not work with CNAMEs in JDK8
> ---------------------------------------------
>
>                 Key: HADOOP-11628
>                 URL: https://issues.apache.org/jira/browse/HADOOP-11628
>             Project: Hadoop Common
>          Issue Type: Bug
>          Components: security
>    Affects Versions: 2.6.0
>            Reporter: Daryn Sharp
>            Assignee: Daryn Sharp
>            Priority: Blocker
>              Labels: jdk8
>             Fix For: 2.8.0
>
>         Attachments: HADOOP-11628.patch
>
>
> Pre-JDK8, GSSName auto-canonicalized the hostname when constructing the principal for SPNEGO. JDK8 no longer does this, which breaks the use of user-friendly CNAMEs for services.
[jira] [Commented] (HADOOP-11244) The HCFS contract test testRenameFileBeingAppended doesn't do a rename
[ https://issues.apache.org/jira/browse/HADOOP-11244?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14962297#comment-14962297 ]

Steve Loughran commented on HADOOP-11244:
-----------------------------------------

this got duplicated & fixed by HADOOP-12268. Sorry Jay - you did get this patch in first, but we weren't keeping on top of the patches enough.

> The HCFS contract test testRenameFileBeingAppended doesn't do a rename
> ----------------------------------------------------------------------
>
>                 Key: HADOOP-11244
>                 URL: https://issues.apache.org/jira/browse/HADOOP-11244
>             Project: Hadoop Common
>          Issue Type: Bug
>          Components: test
>            Reporter: Noah Watkins
>            Assignee: jay vyas
>              Labels: BB2015-05-TBR
>             Fix For: 2.8.0
>
>         Attachments: HADOOP-11244.patch, HADOOP-11244.patch
>
>
> The test AbstractContractAppendTest::testRenameFileBeingAppended appears to assert the behavior of renaming a file opened for writing. However, the assertion "assertPathExists("renamed destination file does not exist", renamed);" fails because it appears that the file "renamed" is never created (ostensibly it should be the "target" file that has been renamed).
[jira] [Updated] (HADOOP-11244) The HCFS contract test testRenameFileBeingAppended doesn't do a rename
[ https://issues.apache.org/jira/browse/HADOOP-11244?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran updated HADOOP-11244: Resolution: Duplicate Fix Version/s: 2.8.0 Status: Resolved (was: Patch Available) > The HCFS contract test testRenameFileBeingAppended doesn't do a rename > -- > > Key: HADOOP-11244 > URL: https://issues.apache.org/jira/browse/HADOOP-11244 > Project: Hadoop Common > Issue Type: Bug > Components: test >Reporter: Noah Watkins >Assignee: jay vyas > Labels: BB2015-05-TBR > Fix For: 2.8.0 > > Attachments: HADOOP-11244.patch, HADOOP-11244.patch > > > The test AbstractContractAppendTest::testRenameFileBeingAppended appears to > assert the behavior of renaming a file opened for writing. However, the > assertion "assertPathExists("renamed destination file does not exist", > renamed);" fails because it appears that the file "renamed" is never created > (ostensibly it should be the "target" file that has been renamed). -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HADOOP-11628) SPNEGO auth does not work with CNAMEs in JDK8
[ https://issues.apache.org/jira/browse/HADOOP-11628?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14962332#comment-14962332 ] Hudson commented on HADOOP-11628: - FAILURE: Integrated in Hadoop-Yarn-trunk #1282 (See [https://builds.apache.org/job/Hadoop-Yarn-trunk/1282/]) HADOOP-11628. SPNEGO auth does not work with CNAMEs in JDK8. (Daryn (stevel: rev bafeb6c7bc50efd11c6637921a50dd9cfdd53841) * hadoop-common-project/hadoop-common/CHANGES.txt * hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/server/KerberosAuthenticationHandler.java > SPNEGO auth does not work with CNAMEs in JDK8 > - > > Key: HADOOP-11628 > URL: https://issues.apache.org/jira/browse/HADOOP-11628 > Project: Hadoop Common > Issue Type: Bug > Components: security >Affects Versions: 2.6.0 >Reporter: Daryn Sharp >Assignee: Daryn Sharp >Priority: Blocker > Labels: jdk8 > Fix For: 2.8.0 > > Attachments: HADOOP-11628.patch > > > Pre-JDK8, GSSName auto-canonicalized the hostname when constructing the > principal for SPNEGO. JDK8 no longer does this which breaks the use of > user-friendly CNAMEs for services. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HADOOP-11628) SPNEGO auth does not work with CNAMEs in JDK8
[ https://issues.apache.org/jira/browse/HADOOP-11628?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14962344#comment-14962344 ] Hudson commented on HADOOP-11628: - FAILURE: Integrated in Hadoop-Yarn-trunk-Java8 #561 (See [https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/561/]) HADOOP-11628. SPNEGO auth does not work with CNAMEs in JDK8. (Daryn (stevel: rev bafeb6c7bc50efd11c6637921a50dd9cfdd53841) * hadoop-common-project/hadoop-common/CHANGES.txt * hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/server/KerberosAuthenticationHandler.java > SPNEGO auth does not work with CNAMEs in JDK8 > - > > Key: HADOOP-11628 > URL: https://issues.apache.org/jira/browse/HADOOP-11628 > Project: Hadoop Common > Issue Type: Bug > Components: security >Affects Versions: 2.6.0 >Reporter: Daryn Sharp >Assignee: Daryn Sharp >Priority: Blocker > Labels: jdk8 > Fix For: 2.8.0 > > Attachments: HADOOP-11628.patch > > > Pre-JDK8, GSSName auto-canonicalized the hostname when constructing the > principal for SPNEGO. JDK8 no longer does this which breaks the use of > user-friendly CNAMEs for services. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HADOOP-12450) UserGroupInformation should not log at WARN level if no groups are found
[ https://issues.apache.org/jira/browse/HADOOP-12450?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14962346#comment-14962346 ] Hudson commented on HADOOP-12450: - FAILURE: Integrated in Hadoop-Yarn-trunk-Java8 #561 (See [https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/561/]) HADOOP-12450. UserGroupInformation should not log at WARN level if no (stevel: rev e286512a7143427f2975ec92cdc4fad0a093a456) * hadoop-common-project/hadoop-common/CHANGES.txt * hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/UserGroupInformation.java > UserGroupInformation should not log at WARN level if no groups are found > > > Key: HADOOP-12450 > URL: https://issues.apache.org/jira/browse/HADOOP-12450 > Project: Hadoop Common > Issue Type: Improvement > Components: security >Reporter: Elliott Clark >Assignee: Elliott Clark >Priority: Minor > Fix For: 2.8.0 > > Attachments: HADOOP-12450-v1.patch, HADOOP-12450-v2.patch > > > HBase tries to get the groups of a user on every request. That user may or > may not exist on the box running Hadoop/HBase. > If that user doesn't exist currently Hadoop will log at the WARN level > everytime. This leads to gigs of log spam and serious GC issues. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
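The HADOOP-12450 change described above is small: when group lookup returns nothing for a user, log at a debug level instead of WARN, so a hot path (HBase resolves groups on every request) cannot flood the logs. A minimal sketch of the idea follows; java.util.logging stands in for the commons-logging Log that UserGroupInformation actually uses, and the class name is hypothetical.

```java
import java.util.logging.Level;
import java.util.logging.Logger;

// Sketch of the HADOOP-12450 idea: demote the "no groups" message from
// WARN to a fine-grained level, and guard it so the string is not even
// built unless that level is enabled.
class GroupLookupLogSketch {
    private static final Logger LOG = Logger.getLogger("ugi.sketch");

    // Previously the equivalent of Level.WARNING; the fix drops it.
    static Level levelForEmptyGroups() {
        return Level.FINE;
    }

    static void logNoGroups(String user) {
        if (LOG.isLoggable(levelForEmptyGroups())) {
            LOG.log(levelForEmptyGroups(), "No groups available for user " + user);
        }
    }
}
```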
[jira] [Commented] (HADOOP-11090) [Umbrella] Support Java 8 in Hadoop
[ https://issues.apache.org/jira/browse/HADOOP-11090?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14962276#comment-14962276 ] Steve Loughran commented on HADOOP-11090: - link to HADOOP-11628: SPNEGO auth does not work with CNAMEs in JDK8 > [Umbrella] Support Java 8 in Hadoop > --- > > Key: HADOOP-11090 > URL: https://issues.apache.org/jira/browse/HADOOP-11090 > Project: Hadoop Common > Issue Type: New Feature >Reporter: Mohammad Kamrul Islam >Assignee: Mohammad Kamrul Islam > > Java 8 is coming quickly to various clusters. Making sure Hadoop seamlessly > works with Java 8 is important for the Apache community. > > This JIRA is to track the issues/experiences encountered during Java 8 > migration. If you find a potential bug , please create a separate JIRA either > as a sub-task or linked into this JIRA. > If you find a Hadoop or JVM configuration tuning, you can create a JIRA as > well. Or you can add a comment here. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HADOOP-11628) SPNEGO auth does not work with CNAMEs in JDK8
[ https://issues.apache.org/jira/browse/HADOOP-11628?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14962288#comment-14962288 ] Hudson commented on HADOOP-11628: - FAILURE: Integrated in Hadoop-trunk-Commit #8655 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/8655/]) HADOOP-11628. SPNEGO auth does not work with CNAMEs in JDK8. (Daryn (stevel: rev bafeb6c7bc50efd11c6637921a50dd9cfdd53841) * hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/server/KerberosAuthenticationHandler.java * hadoop-common-project/hadoop-common/CHANGES.txt > SPNEGO auth does not work with CNAMEs in JDK8 > - > > Key: HADOOP-11628 > URL: https://issues.apache.org/jira/browse/HADOOP-11628 > Project: Hadoop Common > Issue Type: Bug > Components: security >Affects Versions: 2.6.0 >Reporter: Daryn Sharp >Assignee: Daryn Sharp >Priority: Blocker > Labels: jdk8 > Fix For: 2.8.0 > > Attachments: HADOOP-11628.patch > > > Pre-JDK8, GSSName auto-canonicalized the hostname when constructing the > principal for SPNEGO. JDK8 no longer does this which breaks the use of > user-friendly CNAMEs for services. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
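The HADOOP-11628 descriptions above explain that pre-JDK8, GSSName canonicalized the hostname itself, so a CNAME like a load-balancer alias resolved to the host the keytab principal was issued for. A sketch of the kind of canonicalization the fix needs is below. This is illustrative only (the real change is in KerberosAuthenticationHandler, and the class name here is hypothetical); it shows the JDK call that maps a CNAME to its canonical name before building the HTTP/<host> SPNEGO principal.

```java
import java.net.InetAddress;
import java.net.UnknownHostException;

// Sketch: canonicalize a possibly-CNAME'd hostname before constructing the
// SPNEGO service principal, since JDK8's GSSName no longer does this.
class SpnegoPrincipalSketch {
    static String servicePrincipal(String hostname) {
        String canonical = hostname;
        try {
            // Resolve the CNAME to its canonical name, as pre-JDK8 GSSName did.
            canonical = InetAddress.getByName(hostname).getCanonicalHostName();
        } catch (UnknownHostException e) {
            // Fall back to the name as given if resolution fails.
        }
        return "HTTP/" + canonical.toLowerCase();
    }
}
```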
[jira] [Updated] (HADOOP-12472) Make GenericTestUtils.assertExceptionContains robust
[ https://issues.apache.org/jira/browse/HADOOP-12472?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran updated HADOOP-12472: Status: Patch Available (was: Open) > Make GenericTestUtils.assertExceptionContains robust > > > Key: HADOOP-12472 > URL: https://issues.apache.org/jira/browse/HADOOP-12472 > Project: Hadoop Common > Issue Type: Improvement > Components: test >Affects Versions: 3.0.0 >Reporter: Steve Loughran >Assignee: Steve Loughran >Priority: Minor > Attachments: HADOOP-12472-001.patch > > > {{GenericTestUtils.assertExceptionContains}} calls > {{Exception.getMessage()}}, followed by msg.contains(). > This will NPE for an exception with a null message, such as NPE. > # it should call toString() > # and do an assertNotNull on the result in case some subclass does something > very bad > # and for safety, check the asser -- This message was sent by Atlassian JIRA (v6.3.4#6332)
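The HADOOP-12472 description lists the hardening steps: use toString() instead of getMessage() (which is null for a bare NullPointerException, so the assert itself would NPE), and null-check the result. A sketch of the hardened helper is below; it is a hypothetical stand-in for the real GenericTestUtils method, not its actual implementation.

```java
// Sketch of a robust assertExceptionContains, per the steps in HADOOP-12472:
// toString() is never null (unlike getMessage()), and a defective subclass
// returning null from toString() produces a clear failure, not an NPE.
class RobustAssertSketch {
    static void assertExceptionContains(String expected, Throwable t) {
        if (t == null) {
            throw new AssertionError("null throwable");
        }
        String text = t.toString();   // never null for well-behaved throwables
        if (text == null || !text.contains(expected)) {
            throw new AssertionError(
                "Expected to find '" + expected + "' in: " + text, t);
        }
    }
}
```

With this shape, `assertExceptionContains("anything", new NullPointerException())` fails with a readable message instead of throwing a secondary NPE from inside the assertion.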
[jira] [Commented] (HADOOP-12450) UserGroupInformation should not log at WARN level if no groups are found
[ https://issues.apache.org/jira/browse/HADOOP-12450?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14962309#comment-14962309 ] Hudson commented on HADOOP-12450: - FAILURE: Integrated in Hadoop-trunk-Commit #8657 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/8657/]) HADOOP-12450. UserGroupInformation should not log at WARN level if no (stevel: rev e286512a7143427f2975ec92cdc4fad0a093a456) * hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/UserGroupInformation.java * hadoop-common-project/hadoop-common/CHANGES.txt > UserGroupInformation should not log at WARN level if no groups are found > > > Key: HADOOP-12450 > URL: https://issues.apache.org/jira/browse/HADOOP-12450 > Project: Hadoop Common > Issue Type: Improvement > Components: security >Reporter: Elliott Clark >Assignee: Elliott Clark >Priority: Minor > Fix For: 2.8.0 > > Attachments: HADOOP-12450-v1.patch, HADOOP-12450-v2.patch > > > HBase tries to get the groups of a user on every request. That user may or > may not exist on the box running Hadoop/HBase. > If that user doesn't exist currently Hadoop will log at the WARN level > everytime. This leads to gigs of log spam and serious GC issues. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HADOOP-12427) Upgrade Mockito version to 1.10.19
[ https://issues.apache.org/jira/browse/HADOOP-12427?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran updated HADOOP-12427: Target Version/s: 3.0.0 Status: Patch Available (was: Open) I just tried to apply this locally and it didn't apply. Let's see what jenkins does... > Upgrade Mockito version to 1.10.19 > -- > > Key: HADOOP-12427 > URL: https://issues.apache.org/jira/browse/HADOOP-12427 > Project: Hadoop Common > Issue Type: Improvement > Components: build >Reporter: Giovanni Matteo Fumarola >Assignee: Giovanni Matteo Fumarola >Priority: Minor > Attachments: HADOOP-12427.v0.patch > > > The current version is 1.8.5 - inserted in 2011. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HADOOP-12427) Upgrade Mockito version to 1.10.19
[ https://issues.apache.org/jira/browse/HADOOP-12427?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran updated HADOOP-12427: Status: Open (was: Patch Available) > Upgrade Mockito version to 1.10.19 > -- > > Key: HADOOP-12427 > URL: https://issues.apache.org/jira/browse/HADOOP-12427 > Project: Hadoop Common > Issue Type: Improvement > Components: build >Reporter: Giovanni Matteo Fumarola >Assignee: Giovanni Matteo Fumarola >Priority: Minor > Attachments: HADOOP-12427.v0.patch > > > The current version is 1.8.5 - inserted in 2011. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HADOOP-12423) ShutdownHookManager throws exception if JVM is already being shut down
[ https://issues.apache.org/jira/browse/HADOOP-12423?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran updated HADOOP-12423: Affects Version/s: 2.7.1 Status: Open (was: Patch Available) > ShutdownHookManager throws exception if JVM is already being shut down > -- > > Key: HADOOP-12423 > URL: https://issues.apache.org/jira/browse/HADOOP-12423 > Project: Hadoop Common > Issue Type: Bug > Components: fs >Affects Versions: 2.7.1 >Reporter: Abhishek Agarwal >Assignee: Abhishek Agarwal >Priority: Minor > Attachments: HADOOP-12423.patch > > > If JVM is under shutdown, static method in ShutdownHookManager will throw an > IllegalStateException. This exception should be caught and ignored while > registering the hooks. > Stack trace: > {noformat} > java.lang.NoClassDefFoundError: Could not initialize class > org.apache.hadoop.util.ShutdownHookManager >at > org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2639) > ~[stormjar.jar:1.4.0-SNAPSHOT] >at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2612) > ~[stormjar.jar:1.4.0-SNAPSHOT] >at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:370) > ~[stormjar.jar:1.4.0-SNAPSHOT] >... >... >at > backtype.storm.daemon.executor$fn__6647$fn__6659.invoke(executor.clj:692) > ~[storm-core-0.9.5.jar:0.9.5] >at backtype.storm.util$async_loop$fn__459.invoke(util.clj:461) > ~[storm-core-0.9.5.jar:0.9.5] >at clojure.lang.AFn.run(AFn.java:24) [clojure-1.5.1.jar:na] >at java.lang.Thread.run(Thread.java:745) [na:1.8.0_60] > {noformat} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HADOOP-12423) ShutdownHookManager throws exception if JVM is already being shut down
[ https://issues.apache.org/jira/browse/HADOOP-12423?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran updated HADOOP-12423: Target Version/s: 2.8.0 Status: Patch Available (was: Open) > ShutdownHookManager throws exception if JVM is already being shut down > -- > > Key: HADOOP-12423 > URL: https://issues.apache.org/jira/browse/HADOOP-12423 > Project: Hadoop Common > Issue Type: Bug > Components: fs >Affects Versions: 2.7.1 >Reporter: Abhishek Agarwal >Assignee: Abhishek Agarwal >Priority: Minor > Attachments: HADOOP-12423.patch > > > If JVM is under shutdown, static method in ShutdownHookManager will throw an > IllegalStateException. This exception should be caught and ignored while > registering the hooks. > Stack trace: > {noformat} > java.lang.NoClassDefFoundError: Could not initialize class > org.apache.hadoop.util.ShutdownHookManager >at > org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2639) > ~[stormjar.jar:1.4.0-SNAPSHOT] >at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2612) > ~[stormjar.jar:1.4.0-SNAPSHOT] >at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:370) > ~[stormjar.jar:1.4.0-SNAPSHOT] >... >... >at > backtype.storm.daemon.executor$fn__6647$fn__6659.invoke(executor.clj:692) > ~[storm-core-0.9.5.jar:0.9.5] >at backtype.storm.util$async_loop$fn__459.invoke(util.clj:461) > ~[storm-core-0.9.5.jar:0.9.5] >at clojure.lang.AFn.run(AFn.java:24) [clojure-1.5.1.jar:na] >at java.lang.Thread.run(Thread.java:745) [na:1.8.0_60] > {noformat} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
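The fix HADOOP-12423 describes is to catch and ignore the IllegalStateException that the JVM throws when a shutdown hook is registered while shutdown is already in progress. A minimal sketch of that pattern follows; the helper name is hypothetical, and the real change lives inside ShutdownHookManager's static initializer and registration path.

```java
// Sketch of the HADOOP-12423 fix: Runtime.addShutdownHook throws
// IllegalStateException once JVM shutdown has begun, so registration
// should swallow that case rather than let it propagate (where it can
// surface as the NoClassDefFoundError in the stack trace above).
class SafeHookSketch {
    static boolean tryAddShutdownHook(Runnable hook) {
        try {
            Runtime.getRuntime().addShutdownHook(new Thread(hook));
            return true;
        } catch (IllegalStateException e) {
            // JVM already shutting down: too late to register, but not fatal.
            return false;
        }
    }
}
```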
[jira] [Updated] (HADOOP-10941) Proxy user verification NPEs if remote host is unresolvable
[ https://issues.apache.org/jira/browse/HADOOP-10941?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran updated HADOOP-10941: Resolution: Fixed Fix Version/s: 2.8.0 Status: Resolved (was: Patch Available) +1, committed > Proxy user verification NPEs if remote host is unresolvable > --- > > Key: HADOOP-10941 > URL: https://issues.apache.org/jira/browse/HADOOP-10941 > Project: Hadoop Common > Issue Type: Bug > Components: ipc, security >Affects Versions: 2.5.0, 3.0.0 >Reporter: Daryn Sharp >Assignee: Benoy Antony >Priority: Critical > Labels: BB2015-05-TBR > Fix For: 2.8.0 > > Attachments: HADOOP-10941.patch > > > A null is passed to the impersonation providers for the remote address if it > is unresolvable. {{DefaultImpersationProvider}} will NPE, ipc will close the > connection immediately (correct behavior for such unexpected exceptions), > client fails on {{EOFException}}. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
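Per the HADOOP-10941 description, the impersonation provider receives null for the remote address when the host is unresolvable and NPEs on it, so the check must treat null as "not authorized" rather than crash the connection. A sketch of the null-safe shape is below; the class and method are hypothetical illustrations, not the actual DefaultImpersonationProvider code.

```java
import java.util.Set;

// Sketch of the null-safety gap in HADOOP-10941: a null remote address
// (unresolvable host) must be denied cleanly, not dereferenced.
class ProxyCheckSketch {
    static boolean ipAllowed(Set<String> allowedIps, String remoteAddr) {
        if (remoteAddr == null) {
            return false; // unresolvable remote host: deny rather than NPE
        }
        return allowedIps.contains(remoteAddr);
    }
}
```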
[jira] [Commented] (HADOOP-12425) Branch-2 pom has conflicting curator dependency declarations
[ https://issues.apache.org/jira/browse/HADOOP-12425?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14962294#comment-14962294 ] Steve Loughran commented on HADOOP-12425: - Duplicate of HADOOP-12230 : funny, filed same jira twice; patch only on one, and someone else got the same patch in on the other. > Branch-2 pom has conflicting curator dependency declarations > > > Key: HADOOP-12425 > URL: https://issues.apache.org/jira/browse/HADOOP-12425 > Project: Hadoop Common > Issue Type: Bug > Components: build >Affects Versions: 2.8.0 >Reporter: Steve Loughran >Assignee: Steve Loughran > Fix For: 2.8.0 > > Attachments: HADOOP-12425-branch-2-001.patch > > > Post-HADOOP-11492 ; there is duplicate entries of curator in branch-2 -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HADOOP-12425) Branch-2 pom has conflicting curator dependency declarations
[ https://issues.apache.org/jira/browse/HADOOP-12425?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran updated HADOOP-12425: Resolution: Duplicate Fix Version/s: 2.8.0 Status: Resolved (was: Patch Available) > Branch-2 pom has conflicting curator dependency declarations > > > Key: HADOOP-12425 > URL: https://issues.apache.org/jira/browse/HADOOP-12425 > Project: Hadoop Common > Issue Type: Bug > Components: build >Affects Versions: 2.8.0 >Reporter: Steve Loughran >Assignee: Steve Loughran > Fix For: 2.8.0 > > Attachments: HADOOP-12425-branch-2-001.patch > > > Post-HADOOP-11492 ; there is duplicate entries of curator in branch-2 -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HADOOP-12472) Make GenericTestUtils.assertExceptionContains robust
[ https://issues.apache.org/jira/browse/HADOOP-12472?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran updated HADOOP-12472: Status: Open (was: Patch Available) > Make GenericTestUtils.assertExceptionContains robust > > > Key: HADOOP-12472 > URL: https://issues.apache.org/jira/browse/HADOOP-12472 > Project: Hadoop Common > Issue Type: Improvement > Components: test >Affects Versions: 3.0.0 >Reporter: Steve Loughran >Assignee: Steve Loughran >Priority: Minor > Attachments: HADOOP-12472-001.patch > > > {{GenericTestUtils.assertExceptionContains}} calls > {{Exception.getMessage()}}, followed by msg.contains(). > This will NPE for an exception with a null message, such as NPE. > # it should call toString() > # and do an assertNotNull on the result in case some subclass does something > very bad > # and for safety, check the asser -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HADOOP-12415) hdfs and nfs builds broken on -missing compile-time dependency on netty
[ https://issues.apache.org/jira/browse/HADOOP-12415?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran updated HADOOP-12415: Status: Open (was: Patch Available) > hdfs and nfs builds broken on -missing compile-time dependency on netty > --- > > Key: HADOOP-12415 > URL: https://issues.apache.org/jira/browse/HADOOP-12415 > Project: Hadoop Common > Issue Type: Bug > Components: nfs >Affects Versions: 2.7.1 > Environment: Bigtop, plain Linux distro of any kind >Reporter: Konstantin Boudnik >Assignee: Tom Zeng > Attachments: HADOOP-12415.patch > > > As discovered in BIGTOP-2049 {{hadoop-nfs}} module compilation is broken. > Looks like that HADOOP-11489 is the root-cause of it. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HADOOP-12421) Add jitter to RetryInvocationHandler
[ https://issues.apache.org/jira/browse/HADOOP-12421?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran updated HADOOP-12421: Target Version/s: 2.8.0 Status: Patch Available (was: Open) > Add jitter to RetryInvocationHandler > > > Key: HADOOP-12421 > URL: https://issues.apache.org/jira/browse/HADOOP-12421 > Project: Hadoop Common > Issue Type: Bug >Affects Versions: 2.8.0 >Reporter: Elliott Clark >Assignee: Elliott Clark > Attachments: HADOOP-12421-v1.patch, HADOOP-12421-v2.patch, > HADOOP-12421-v3.patch > > > Calls to NN can become synchronized across a cluster during NN failover. This > leads to a spike in requests until things recover. Making an already tricky > time worse. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
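The HADOOP-12421 description explains the problem: after a NameNode failover, many clients retry in lockstep, producing a request spike. Adding random jitter to each retry delay decorrelates them. A sketch of the core calculation is below; it is illustrative only (the class name is hypothetical, and the real change is inside RetryInvocationHandler's retry policies).

```java
import java.util.concurrent.ThreadLocalRandom;

// Sketch of jittered retry backoff per HADOOP-12421: each client waits
// baseDelayMs plus a random extra of up to jitterFraction of the base,
// so synchronized retries spread out instead of arriving together.
class RetryJitterSketch {
    static long jitteredDelay(long baseDelayMs, double jitterFraction) {
        double jitter = ThreadLocalRandom.current().nextDouble() * jitterFraction;
        return (long) (baseDelayMs * (1.0 + jitter));
    }
}
```

For example, with a 1000 ms base and a jitter fraction of 0.5, each client sleeps somewhere in [1000, 1500) ms, so a thousand clients that failed at the same instant no longer retry at the same instant.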
[jira] [Updated] (HADOOP-12421) Add jitter to RetryInvocationHandler
[ https://issues.apache.org/jira/browse/HADOOP-12421?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran updated HADOOP-12421: Status: Open (was: Patch Available) > Add jitter to RetryInvocationHandler > > > Key: HADOOP-12421 > URL: https://issues.apache.org/jira/browse/HADOOP-12421 > Project: Hadoop Common > Issue Type: Bug >Affects Versions: 2.8.0 >Reporter: Elliott Clark >Assignee: Elliott Clark > Attachments: HADOOP-12421-v1.patch, HADOOP-12421-v2.patch, > HADOOP-12421-v3.patch > > > Calls to NN can become synchronized across a cluster during NN failover. This > leads to a spike in requests until things recover. Making an already tricky > time worse. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HADOOP-12415) hdfs and nfs builds broken on -missing compile-time dependency on netty
[ https://issues.apache.org/jira/browse/HADOOP-12415?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran updated HADOOP-12415: Target Version/s: 2.8.0 Status: Patch Available (was: Open) > hdfs and nfs builds broken on -missing compile-time dependency on netty > --- > > Key: HADOOP-12415 > URL: https://issues.apache.org/jira/browse/HADOOP-12415 > Project: Hadoop Common > Issue Type: Bug > Components: nfs >Affects Versions: 2.7.1 > Environment: Bigtop, plain Linux distro of any kind >Reporter: Konstantin Boudnik >Assignee: Tom Zeng > Attachments: HADOOP-12415.patch > > > As discovered in BIGTOP-2049 {{hadoop-nfs}} module compilation is broken. > Looks like that HADOOP-11489 is the root-cause of it. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HADOOP-12450) UserGroupInformation should not log at WARN level if no groups are found
[ https://issues.apache.org/jira/browse/HADOOP-12450?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14962387#comment-14962387 ] Hudson commented on HADOOP-12450: - FAILURE: Integrated in Hadoop-Yarn-trunk #1283 (See [https://builds.apache.org/job/Hadoop-Yarn-trunk/1283/]) HADOOP-12450. UserGroupInformation should not log at WARN level if no (stevel: rev e286512a7143427f2975ec92cdc4fad0a093a456) * hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/UserGroupInformation.java * hadoop-common-project/hadoop-common/CHANGES.txt > UserGroupInformation should not log at WARN level if no groups are found > > > Key: HADOOP-12450 > URL: https://issues.apache.org/jira/browse/HADOOP-12450 > Project: Hadoop Common > Issue Type: Improvement > Components: security >Reporter: Elliott Clark >Assignee: Elliott Clark >Priority: Minor > Fix For: 2.8.0 > > Attachments: HADOOP-12450-v1.patch, HADOOP-12450-v2.patch > > > HBase tries to get the groups of a user on every request. That user may or > may not exist on the box running Hadoop/HBase. > If that user doesn't exist currently Hadoop will log at the WARN level > everytime. This leads to gigs of log spam and serious GC issues. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HADOOP-11820) aw jira testing, ignore
[ https://issues.apache.org/jira/browse/HADOOP-11820?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14962268#comment-14962268 ] Hadoop QA commented on HADOOP-11820: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s {color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s {color} | {color:green} The patch appears to include 2 new or modified test files. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 49s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 4m 6s {color} | {color:green} trunk passed with JDK v1.8.0 {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 3m 42s {color} | {color:green} trunk passed with JDK v1.7.0_79 {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 56s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 26s {color} | {color:green} trunk passed {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 1m 53s {color} | {color:red} hadoop-hdfs-project/hadoop-hdfs in trunk cannot run convertXmlToText from findbugs {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 2s {color} | {color:green} trunk passed with JDK v1.8.0 {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 54s {color} | {color:green} trunk passed with JDK v1.7.0_79 {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 8s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 4m 6s {color} | 
{color:green} the patch passed with JDK v1.8.0 {color} | | {color:green}+1{color} | {color:green} cc {color} | {color:green} 4m 6s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 4m 6s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 3m 53s {color} | {color:green} the patch passed with JDK v1.7.0_79 {color} | | {color:green}+1{color} | {color:green} cc {color} | {color:green} 3m 53s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 3m 53s {color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 54s {color} | {color:red} Patch generated 9 new checkstyle issues in . (total was 654, now 658). {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 23s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s {color} | {color:green} Patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 51s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 5s {color} | {color:green} the patch passed with JDK v1.8.0 {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 53s {color} | {color:green} the patch passed with JDK v1.7.0_79 {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 21m 28s {color} | {color:green} hadoop-common in the patch passed with JDK v1.8.0. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 195m 55s {color} | {color:red} hadoop-hdfs in the patch failed with JDK v1.8.0. 
{color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 22m 39s {color} | {color:green} hadoop-common in the patch passed with JDK v1.7.0_79. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 190m 17s {color} | {color:red} hadoop-hdfs in the patch failed with JDK v1.7.0_79. {color} | | {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 0m 16s {color} | {color:red} Patch generated 56 ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 470m 44s {color} | {color:black} {color} | \\ \\ || Reason || Tests || | JDK v1.7.0_79 Failed junit tests | hadoop.hdfs.server.blockmanagement.TestBlockManager | | | hadoop.hdfs.TestLeaseRecovery2 | | | hadoop.hdfs.web.TestWebHDFSOAuth2 | | | hadoop.hdfs.server.datanode.TestDataNodeRollingUpgrade | | | hadoop.hdfs.web.TestWebHDFSOAuth2 | \\ \\ || Subsystem || Report/Notes || | JIRA Patch URL |
[jira] [Updated] (HADOOP-12321) Make JvmPauseMonitor to AbstractService
[ https://issues.apache.org/jira/browse/HADOOP-12321?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran updated HADOOP-12321: Status: Patch Available (was: Open) Time to get this in... let's see what the new patch runner says > Make JvmPauseMonitor to AbstractService > --- > > Key: HADOOP-12321 > URL: https://issues.apache.org/jira/browse/HADOOP-12321 > Project: Hadoop Common > Issue Type: New Feature >Affects Versions: 2.8.0 >Reporter: Steve Loughran >Assignee: Sunil G > Attachments: 0001-HADOOP-12321.patch, 0002-HADOOP-12321.patch, > 0004-HADOOP-12321.patch, HADOOP-12321-003.patch, > HADOOP-12321-005-aggregated.patch > > Original Estimate: 1h > Remaining Estimate: 1h > > The new JVM pause monitor has been written with its own start/stop lifecycle > which has already proven brittle to both ordering of operations and, even > after HADOOP-12313, is not thread safe (both start and stop are potentially > re-entrant). > It also requires every class which supports the monitor to add another field > and perform the lifecycle operations in its own lifecycle, which, for all > Yarn services, is the YARN app lifecycle (as implemented in Hadoop common) > Making the monitor a subclass of {{AbstractService}} and moving the > init/start & stop operations in {{serviceInit()}}, {{serviceStart()}} & > {{serviceStop()}} methods will fix the concurrency and state model issues, > and make it trivial to add as a child to any YARN service which subclasses > {{CompositeService}} (most the NM and RM apps) will be able to hook up the > monitor simply by creating one in the ctor and adding it as a child. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HADOOP-12321) Make JvmPauseMonitor to AbstractService
[ https://issues.apache.org/jira/browse/HADOOP-12321?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran updated HADOOP-12321: Status: Open (was: Patch Available) > Make JvmPauseMonitor to AbstractService > --- > > Key: HADOOP-12321 > URL: https://issues.apache.org/jira/browse/HADOOP-12321 > Project: Hadoop Common > Issue Type: New Feature >Affects Versions: 2.8.0 >Reporter: Steve Loughran >Assignee: Sunil G > Attachments: 0001-HADOOP-12321.patch, 0002-HADOOP-12321.patch, > 0004-HADOOP-12321.patch, HADOOP-12321-003.patch, > HADOOP-12321-005-aggregated.patch > > Original Estimate: 1h > Remaining Estimate: 1h > > The new JVM pause monitor has been written with its own start/stop lifecycle > which has already proven brittle to both ordering of operations and, even > after HADOOP-12313, is not thread safe (both start and stop are potentially > re-entrant). > It also requires every class which supports the monitor to add another field > and perform the lifecycle operations in its own lifecycle, which, for all > Yarn services, is the YARN app lifecycle (as implemented in Hadoop common) > Making the monitor a subclass of {{AbstractService}} and moving the > init/start & stop operations in {{serviceInit()}}, {{serviceStart()}} & > {{serviceStop()}} methods will fix the concurrency and state model issues, > and make it trivial to add as a child to any YARN service which subclasses > {{CompositeService}} (most the NM and RM apps) will be able to hook up the > monitor simply by creating one in the ctor and adding it as a child. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
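The HADOOP-12321 description above identifies the core defect: start and stop are potentially re-entrant, so a hand-rolled lifecycle needs an explicit state guard (or, as the JIRA proposes, the state machine it would inherit from AbstractService). The concurrency fix can be sketched with an AtomicBoolean making both operations idempotent. The class below is a hypothetical illustration, not the real JvmPauseMonitor.

```java
import java.util.concurrent.atomic.AtomicBoolean;

// Sketch of a re-entrant-safe start/stop lifecycle: compareAndSet makes a
// second start() or stop() a no-op instead of a race or an error, which is
// the state-model guarantee AbstractService provides for free.
class PauseMonitorSketch {
    private final AtomicBoolean running = new AtomicBoolean(false);
    private Thread monitorThread;

    void start() {
        if (!running.compareAndSet(false, true)) {
            return; // already started: re-entrant call is a no-op
        }
        monitorThread = new Thread(() -> {
            while (running.get()) {
                try { Thread.sleep(50); } catch (InterruptedException e) { return; }
            }
        });
        monitorThread.setDaemon(true);
        monitorThread.start();
    }

    void stop() {
        if (!running.compareAndSet(true, false)) {
            return; // never started or already stopped
        }
        monitorThread.interrupt();
    }

    boolean isRunning() {
        return running.get();
    }
}
```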
[jira] [Updated] (HADOOP-12450) UserGroupInformation should not log at WARN level if no groups are found
[ https://issues.apache.org/jira/browse/HADOOP-12450?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran updated HADOOP-12450: Resolution: Fixed Fix Version/s: 2.8.0 Status: Resolved (was: Patch Available) +1, committed —thanks! > UserGroupInformation should not log at WARN level if no groups are found > > > Key: HADOOP-12450 > URL: https://issues.apache.org/jira/browse/HADOOP-12450 > Project: Hadoop Common > Issue Type: Improvement > Components: security >Reporter: Elliott Clark >Assignee: Elliott Clark >Priority: Minor > Fix For: 2.8.0 > > Attachments: HADOOP-12450-v1.patch, HADOOP-12450-v2.patch > > > HBase tries to get the groups of a user on every request. That user may or > may not exist on the box running Hadoop/HBase. > If that user doesn't exist currently Hadoop will log at the WARN level > everytime. This leads to gigs of log spam and serious GC issues. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HADOOP-12464) Interrupted client may try to fail-over and retry
[ https://issues.apache.org/jira/browse/HADOOP-12464?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14962305#comment-14962305 ]

Steve Loughran commented on HADOOP-12464:
-----------------------------------------

+1 —I'll let you do the commit

> Interrupted client may try to fail-over and retry
> -------------------------------------------------
>
>                 Key: HADOOP-12464
>                 URL: https://issues.apache.org/jira/browse/HADOOP-12464
>             Project: Hadoop Common
>          Issue Type: Bug
>          Components: ipc
>            Reporter: Kihwal Lee
>         Attachments: HADOOP-12464.patch
>
> When an IPC client is interrupted, it sometimes tries to fail over to a different namenode and retry. We've seen this cause hangs during shutdown.

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
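The fix direction described above can be sketched with a hypothetical retry wrapper (this is not Hadoop's actual retry/failover machinery; the names are illustrative): check the thread's interrupt status before each attempt and abort instead of failing over.

```java
import java.io.IOException;
import java.io.InterruptedIOException;

// Hypothetical retry wrapper: an interrupt (e.g. during shutdown) should
// abort the retry/failover loop rather than trigger another attempt.
class RetryingInvoker {
    interface Call<T> { T run() throws IOException; }

    static <T> T invokeWithFailover(Call<T> call, int maxAttempts) throws IOException {
        IOException last = null;
        for (int attempt = 0; attempt < maxAttempts; attempt++) {
            // Bail out before retrying if this thread was interrupted,
            // instead of failing over to another namenode.
            if (Thread.currentThread().isInterrupted()) {
                throw new InterruptedIOException("Call interrupted; not retrying");
            }
            try {
                return call.run();
            } catch (IOException e) {
                last = e;  // a real client would consult its failover policy here
            }
        }
        throw last;
    }
}
```

With this shape, a shutdown that interrupts the calling thread surfaces promptly as an `InterruptedIOException` rather than hanging through further failover attempts.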
[jira] [Commented] (HADOOP-10941) Proxy user verification NPEs if remote host is unresolvable
[ https://issues.apache.org/jira/browse/HADOOP-10941?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14962321#comment-14962321 ]

Hadoop QA commented on HADOOP-10941:
------------------------------------

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s {color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s {color} | {color:green} The patch appears to include 2 new or modified test files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 51s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 4m 6s {color} | {color:green} trunk passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 3m 50s {color} | {color:green} trunk passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 14s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 13s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 32s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 53s {color} | {color:green} trunk passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 0s {color} | {color:green} trunk passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 32s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 4m 13s {color} | {color:green} the patch passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 4m 13s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 3m 51s {color} | {color:green} the patch passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 3m 51s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 14s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 12s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 48s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 54s {color} | {color:green} the patch passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 3s {color} | {color:green} the patch passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 20m 55s {color} | {color:green} hadoop-common in the patch passed with JDK v1.8.0. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 22m 40s {color} | {color:green} hadoop-common in the patch passed with JDK v1.7.0_79. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 20s {color} | {color:green} Patch does not generate ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 71m 39s {color} | {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12662300/HADOOP-10941.patch |
| JIRA Issue | HADOOP-10941 |
| Optional Tests | asflicense javac javadoc mvninstall unit findbugs checkstyle compile |
| uname | Linux asf906.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /home/jenkins/jenkins-slave/workspace/PreCommit-HADOOP-Build/patchprocess/apache-yetus-ef9723d/dev-support/personality/hadoop.sh |
| git revision | trunk / bafeb6c |
| Default Java | 1.7.0_79 |
| Multi-JDK versions | /home/jenkins/tools/java/jdk1.8.0:1.8.0 /usr/local/jenkins/java/jdk1.7.0_79:1.7.0_79 |
| findbugs | v3.0.0 |
| JDK v1.7.0_79 Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/7859/testReport/ |
| Max memory used | 313MB |
| Powered by | Apache Yetus http://yetus.apache.org |
| Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/7859/console |

This message was sent by Atlassian JIRA
(v6.3.4#6332)
[jira] [Commented] (HADOOP-10941) Proxy user verification NPEs if remote host is unresolvable
[ https://issues.apache.org/jira/browse/HADOOP-10941?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14962370#comment-14962370 ]

Hudson commented on HADOOP-10941:
---------------------------------

FAILURE: Integrated in Hadoop-trunk-Commit #8658 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/8658/])
HADOOP-10941. Proxy user verification NPEs if remote host is (stevel: rev 0ab3f9d56465bf31668159c562305a3b8222004c)
* hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/authorize/TestProxyUsers.java
* hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/MachineList.java
* hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/util/TestMachineList.java
* hadoop-common-project/hadoop-common/CHANGES.txt
* hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/authorize/DefaultImpersonationProvider.java

> Proxy user verification NPEs if remote host is unresolvable
> -----------------------------------------------------------
>
>                 Key: HADOOP-10941
>                 URL: https://issues.apache.org/jira/browse/HADOOP-10941
>             Project: Hadoop Common
>          Issue Type: Bug
>          Components: ipc, security
>    Affects Versions: 2.5.0, 3.0.0
>            Reporter: Daryn Sharp
>            Assignee: Benoy Antony
>            Priority: Critical
>              Labels: BB2015-05-TBR
>             Fix For: 2.8.0
>
>         Attachments: HADOOP-10941.patch
>
> A null is passed to the impersonation providers for the remote address if it is unresolvable. {{DefaultImpersonationProvider}} will NPE, ipc will close the connection immediately (correct behavior for such unexpected exceptions), and the client fails on an {{EOFException}}.

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
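The defensive pattern behind the fix can be sketched with a simplified stand-in for the host allow-list check (this is not the real `MachineList` API; the class and method names are illustrative): a null remote address from a failed resolution is rejected cleanly instead of being dereferenced inside the impersonation provider.

```java
import java.util.Set;

// Simplified stand-in illustrating the null-safety fix: treat an
// unresolvable (null) remote address as "not included" instead of
// letting it NPE deep in the proxy-user authorization path.
class MachineListSketch {
    private final Set<String> allowedHosts;

    MachineListSketch(Set<String> allowedHosts) {
        this.allowedHosts = allowedHosts;
    }

    boolean includes(String remoteAddr) {
        if (remoteAddr == null) {
            // Unresolvable host: deny cleanly so the server can return a
            // proper authorization failure rather than dropping the
            // connection on an unexpected NullPointerException.
            return false;
        }
        return allowedHosts.contains(remoteAddr);
    }
}
```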
[jira] [Commented] (HADOOP-11628) SPNEGO auth does not work with CNAMEs in JDK8
[ https://issues.apache.org/jira/browse/HADOOP-11628?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14962377#comment-14962377 ]

Hudson commented on HADOOP-11628:
---------------------------------

FAILURE: Integrated in Hadoop-Mapreduce-trunk-Java8 #546 (See [https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/546/])
HADOOP-11628. SPNEGO auth does not work with CNAMEs in JDK8. (Daryn (stevel: rev bafeb6c7bc50efd11c6637921a50dd9cfdd53841)
* hadoop-common-project/hadoop-common/CHANGES.txt
* hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/server/KerberosAuthenticationHandler.java

> SPNEGO auth does not work with CNAMEs in JDK8
> ---------------------------------------------
>
>                 Key: HADOOP-11628
>                 URL: https://issues.apache.org/jira/browse/HADOOP-11628
>             Project: Hadoop Common
>          Issue Type: Bug
>          Components: security
>    Affects Versions: 2.6.0
>            Reporter: Daryn Sharp
>            Assignee: Daryn Sharp
>            Priority: Blocker
>              Labels: jdk8
>             Fix For: 2.8.0
>
>         Attachments: HADOOP-11628.patch
>
> Pre-JDK8, GSSName auto-canonicalized the hostname when constructing the principal for SPNEGO. JDK8 no longer does this, which breaks the use of user-friendly CNAMEs for services.

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
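The canonicalization step that JDK8 stopped doing implicitly can be sketched as follows. This is a hedged illustration, not the actual `KerberosAuthenticationHandler` change: the helper names are hypothetical, and only `InetAddress.getCanonicalHostName()` is a real JDK API.

```java
import java.net.InetAddress;
import java.net.UnknownHostException;
import java.util.Locale;
import java.util.function.UnaryOperator;

// Sketch: resolve a (possibly CNAME) hostname to its canonical form before
// building the HTTP/<host> service principal, mirroring what pre-JDK8
// GSSName did during principal construction.
class SpnegoPrincipalBuilder {
    /** Build the SPNEGO service principal using an explicit canonicalizer. */
    static String servicePrincipal(String host, UnaryOperator<String> canonicalizer) {
        return "HTTP/" + canonicalizer.apply(host).toLowerCase(Locale.ROOT);
    }

    /** DNS-backed canonicalizer; falls back to the input name on failure. */
    static String dnsCanonicalize(String host) {
        try {
            // Follows CNAME records where the resolver can; on lookup
            // failure we keep the caller-supplied name unchanged.
            return InetAddress.getByName(host).getCanonicalHostName();
        } catch (UnknownHostException e) {
            return host;
        }
    }
}
```

Injecting the canonicalizer keeps the DNS dependency out of the principal-formatting logic, which makes the behavior testable without live name resolution.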
[jira] [Commented] (HADOOP-11628) SPNEGO auth does not work with CNAMEs in JDK8
[ https://issues.apache.org/jira/browse/HADOOP-11628?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14962397#comment-14962397 ] Hudson commented on HADOOP-11628: - FAILURE: Integrated in Hadoop-Mapreduce-trunk #2495 (See [https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2495/]) HADOOP-11628. SPNEGO auth does not work with CNAMEs in JDK8. (Daryn (stevel: rev bafeb6c7bc50efd11c6637921a50dd9cfdd53841) * hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/server/KerberosAuthenticationHandler.java * hadoop-common-project/hadoop-common/CHANGES.txt > SPNEGO auth does not work with CNAMEs in JDK8 > - > > Key: HADOOP-11628 > URL: https://issues.apache.org/jira/browse/HADOOP-11628 > Project: Hadoop Common > Issue Type: Bug > Components: security >Affects Versions: 2.6.0 >Reporter: Daryn Sharp >Assignee: Daryn Sharp >Priority: Blocker > Labels: jdk8 > Fix For: 2.8.0 > > Attachments: HADOOP-11628.patch > > > Pre-JDK8, GSSName auto-canonicalized the hostname when constructing the > principal for SPNEGO. JDK8 no longer does this which breaks the use of > user-friendly CNAMEs for services. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HADOOP-12450) UserGroupInformation should not log at WARN level if no groups are found
[ https://issues.apache.org/jira/browse/HADOOP-12450?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14962399#comment-14962399 ]

Hudson commented on HADOOP-12450:
---------------------------------

FAILURE: Integrated in Hadoop-Mapreduce-trunk #2495 (See [https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2495/])
HADOOP-12450. UserGroupInformation should not log at WARN level if no (stevel: rev e286512a7143427f2975ec92cdc4fad0a093a456)
* hadoop-common-project/hadoop-common/CHANGES.txt
* hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/UserGroupInformation.java

> UserGroupInformation should not log at WARN level if no groups are found
> ------------------------------------------------------------------------
>
>                 Key: HADOOP-12450
>                 URL: https://issues.apache.org/jira/browse/HADOOP-12450
>             Project: Hadoop Common
>          Issue Type: Improvement
>          Components: security
>            Reporter: Elliott Clark
>            Assignee: Elliott Clark
>            Priority: Minor
>             Fix For: 2.8.0
>
>         Attachments: HADOOP-12450-v1.patch, HADOOP-12450-v2.patch
>
> HBase tries to get the groups of a user on every request. That user may or may not exist on the box running Hadoop/HBase.
> If that user doesn't exist, Hadoop currently logs at the WARN level every time. This leads to gigs of log spam and serious GC issues.

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
[jira] [Commented] (HADOOP-10941) Proxy user verification NPEs if remote host is unresolvable
[ https://issues.apache.org/jira/browse/HADOOP-10941?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14962429#comment-14962429 ]

Hudson commented on HADOOP-10941:
---------------------------------

SUCCESS: Integrated in Hadoop-Yarn-trunk-Java8 #562 (See [https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/562/])
HADOOP-10941. Proxy user verification NPEs if remote host is (stevel: rev 0ab3f9d56465bf31668159c562305a3b8222004c)
* hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/MachineList.java
* hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/authorize/DefaultImpersonationProvider.java
* hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/authorize/TestProxyUsers.java
* hadoop-common-project/hadoop-common/CHANGES.txt
* hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/util/TestMachineList.java

> Proxy user verification NPEs if remote host is unresolvable
> -----------------------------------------------------------
>
>                 Key: HADOOP-10941
>                 URL: https://issues.apache.org/jira/browse/HADOOP-10941
>             Project: Hadoop Common
>          Issue Type: Bug
>          Components: ipc, security
>    Affects Versions: 2.5.0, 3.0.0
>            Reporter: Daryn Sharp
>            Assignee: Benoy Antony
>            Priority: Critical
>              Labels: BB2015-05-TBR
>             Fix For: 2.8.0
>
>         Attachments: HADOOP-10941.patch
>
> A null is passed to the impersonation providers for the remote address if it is unresolvable. {{DefaultImpersonationProvider}} will NPE, ipc will close the connection immediately (correct behavior for such unexpected exceptions), and the client fails on an {{EOFException}}.

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
[jira] [Updated] (HADOOP-11820) aw jira testing, ignore
[ https://issues.apache.org/jira/browse/HADOOP-11820?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Allen Wittenauer updated HADOOP-11820: -- Attachment: (was: HDFS-9184.008.patch) > aw jira testing, ignore > --- > > Key: HADOOP-11820 > URL: https://issues.apache.org/jira/browse/HADOOP-11820 > Project: Hadoop Common > Issue Type: Task >Affects Versions: 3.0.0 >Reporter: Allen Wittenauer > Attachments: HADOOP-12334.06.patch > > -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HADOOP-11820) aw jira testing, ignore
[ https://issues.apache.org/jira/browse/HADOOP-11820?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14962445#comment-14962445 ]

Hadoop QA commented on HADOOP-11820:
------------------------------------

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 7s {color} | {color:blue} docker + precommit patch detected. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s {color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s {color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 1s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 13s {color} | {color:green} trunk passed with JDK v1.8.0_60 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 14s {color} | {color:green} trunk passed with JDK v1.7.0_80 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 13s {color} | {color:green} trunk passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 7s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 40s {color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 47s {color} | {color:blue} findbugs executables are not available. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 12s {color} | {color:green} trunk passed with JDK v1.8.0_60 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 14s {color} | {color:green} trunk passed with JDK v1.7.0_80 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 13s {color} | {color:green} trunk passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 15s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 12s {color} | {color:green} the patch passed with JDK v1.8.0_60 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 12s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 13s {color} | {color:green} the patch passed with JDK v1.7.0_80 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 13s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 13s {color} | {color:green} the patch passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 13s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 8s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 11s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s {color} | {color:red} The patch has 1 line(s) with tabs. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 12s {color} | {color:green} the patch passed with JDK v1.8.0_60 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 14s {color} | {color:green} the patch passed with JDK v1.7.0_80 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 13s {color} | {color:green} the patch passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 0s {color} | {color:green} hadoop-azure in the patch passed with JDK v1.8.0_60. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 11s {color} | {color:green} hadoop-azure in the patch passed with JDK v1.7.0_80. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 12s {color} | {color:green} hadoop-azure in the patch passed with JDK v1.7.0_79. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 0m 22s {color} | {color:red} Patch generated 1 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 10m 54s {color} | {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
|
[jira] [Commented] (HADOOP-12450) UserGroupInformation should not log at WARN level if no groups are found
[ https://issues.apache.org/jira/browse/HADOOP-12450?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14962455#comment-14962455 ]

Hudson commented on HADOOP-12450:
---------------------------------

FAILURE: Integrated in Hadoop-Mapreduce-trunk-Java8 #547 (See [https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/547/])
HADOOP-12450. UserGroupInformation should not log at WARN level if no (stevel: rev e286512a7143427f2975ec92cdc4fad0a093a456)
* hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/UserGroupInformation.java
* hadoop-common-project/hadoop-common/CHANGES.txt

> UserGroupInformation should not log at WARN level if no groups are found
> ------------------------------------------------------------------------
>
>                 Key: HADOOP-12450
>                 URL: https://issues.apache.org/jira/browse/HADOOP-12450
>             Project: Hadoop Common
>          Issue Type: Improvement
>          Components: security
>            Reporter: Elliott Clark
>            Assignee: Elliott Clark
>            Priority: Minor
>             Fix For: 2.8.0
>
>         Attachments: HADOOP-12450-v1.patch, HADOOP-12450-v2.patch
>
> HBase tries to get the groups of a user on every request. That user may or may not exist on the box running Hadoop/HBase.
> If that user doesn't exist, Hadoop currently logs at the WARN level every time. This leads to gigs of log spam and serious GC issues.

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
[jira] [Commented] (HADOOP-10941) Proxy user verification NPEs if remote host is unresolvable
[ https://issues.apache.org/jira/browse/HADOOP-10941?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14962454#comment-14962454 ]

Hudson commented on HADOOP-10941:
---------------------------------

FAILURE: Integrated in Hadoop-Mapreduce-trunk-Java8 #547 (See [https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/547/])
HADOOP-10941. Proxy user verification NPEs if remote host is (stevel: rev 0ab3f9d56465bf31668159c562305a3b8222004c)
* hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/MachineList.java
* hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/authorize/TestProxyUsers.java
* hadoop-common-project/hadoop-common/CHANGES.txt
* hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/util/TestMachineList.java
* hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/authorize/DefaultImpersonationProvider.java

> Proxy user verification NPEs if remote host is unresolvable
> -----------------------------------------------------------
>
>                 Key: HADOOP-10941
>                 URL: https://issues.apache.org/jira/browse/HADOOP-10941
>             Project: Hadoop Common
>          Issue Type: Bug
>          Components: ipc, security
>    Affects Versions: 2.5.0, 3.0.0
>            Reporter: Daryn Sharp
>            Assignee: Benoy Antony
>            Priority: Critical
>              Labels: BB2015-05-TBR
>             Fix For: 2.8.0
>
>         Attachments: HADOOP-10941.patch
>
> A null is passed to the impersonation providers for the remote address if it is unresolvable. {{DefaultImpersonationProvider}} will NPE, ipc will close the connection immediately (correct behavior for such unexpected exceptions), and the client fails on an {{EOFException}}.

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
[jira] [Commented] (HADOOP-11628) SPNEGO auth does not work with CNAMEs in JDK8
[ https://issues.apache.org/jira/browse/HADOOP-11628?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14962458#comment-14962458 ] Hudson commented on HADOOP-11628: - FAILURE: Integrated in Hadoop-Hdfs-trunk-Java8 #509 (See [https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/509/]) HADOOP-11628. SPNEGO auth does not work with CNAMEs in JDK8. (Daryn (stevel: rev bafeb6c7bc50efd11c6637921a50dd9cfdd53841) * hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/server/KerberosAuthenticationHandler.java * hadoop-common-project/hadoop-common/CHANGES.txt > SPNEGO auth does not work with CNAMEs in JDK8 > - > > Key: HADOOP-11628 > URL: https://issues.apache.org/jira/browse/HADOOP-11628 > Project: Hadoop Common > Issue Type: Bug > Components: security >Affects Versions: 2.6.0 >Reporter: Daryn Sharp >Assignee: Daryn Sharp >Priority: Blocker > Labels: jdk8 > Fix For: 2.8.0 > > Attachments: HADOOP-11628.patch > > > Pre-JDK8, GSSName auto-canonicalized the hostname when constructing the > principal for SPNEGO. JDK8 no longer does this which breaks the use of > user-friendly CNAMEs for services. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HADOOP-12122) Fix Hadoop should avoid unsafe split and append on fields that might be IPv6 literals
[ https://issues.apache.org/jira/browse/HADOOP-12122?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14962423#comment-14962423 ]

Allen Wittenauer commented on HADOOP-12122:
-------------------------------------------

OK, it looks like I missed the patch where parallel builds were fully enabled in Hadoop. I've updated the Yetus personality for Hadoop and re-kicked this patch off. Cross your fingers. :)

> Fix Hadoop should avoid unsafe split and append on fields that might be IPv6 literals
> -------------------------------------------------------------------------------------
>
>                 Key: HADOOP-12122
>                 URL: https://issues.apache.org/jira/browse/HADOOP-12122
>             Project: Hadoop Common
>          Issue Type: Sub-task
>    Affects Versions: HADOOP-11890
>            Reporter: Nate Edel
>            Assignee: Nemanja Matkovic
>         Attachments: HADOOP-12122-HADOOP-11890.0.patch, HADOOP-12122-HADOOP-11890.3.patch, HADOOP-12122-HADOOP-11890.4.patch, HADOOP-12122-HADOOP-11890.5.patch, HADOOP-12122-HADOOP-11890.6.patch, HADOOP-12122-HADOOP-11890.7.patch, HADOOP-12122-HADOOP-11890.8.patch, HADOOP-12122-HADOOP-11890.9.patch, HADOOP-12122-HADOOP-12122.2.patch, HADOOP-12122-HADOOP-12122.3.patch, HADOOP-12122.0.patch, lets_blow_up_a_lot_of_tests.patch
>
> There are a fairly extensive number of locations, found via code inspection, which use unsafe methods of handling addresses in a dual-stack or IPv6-only world:
> - splits on the first ":", assuming that delimits a host from a port
> - produces a host:port pair by appending ":port" blindly (Java prefers [ipv6]:port, which is the standard for IPv6 URIs)
> - depends on the behavior of InetSocketAddress.toString(), which produces the above.
> This patch fixes those patterns where I can find them, and replaces calls to InetSocketAddress.toString() with a wrapper that properly brackets the IPv6 address if there is one.

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
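The unsafe idioms listed in the description can be contrasted with a hedged sketch of the safe versions. The helper names below are hypothetical, not the wrapper the patch actually introduces: the point is bracketing raw IPv6 literals when appending a port, and splitting host from port on the *last* ':' while honoring brackets.

```java
// Sketch of IPv6-safe host:port handling in a dual-stack world.
class HostPortUtil {
    /** Join host and port; raw IPv6 literals become [addr]:port as in IPv6 URIs. */
    static String toHostPort(String host, int port) {
        // A ':' in an unbracketed host means it is an IPv6 literal, not host:port.
        boolean rawIpv6 = host.indexOf(':') >= 0 && !host.startsWith("[");
        return (rawIpv6 ? "[" + host + "]" : host) + ":" + port;
    }

    /** Recover the host part without splitting on the *first* ':'. */
    static String hostOf(String hostPort) {
        if (hostPort.startsWith("[")) {
            // Bracketed IPv6 literal: "[::1]:8020" -> "::1"
            return hostPort.substring(1, hostPort.indexOf(']'));
        }
        int idx = hostPort.lastIndexOf(':');
        return idx < 0 ? hostPort : hostPort.substring(0, idx);
    }
}
```

Splitting on the first ':' would turn "[::1]:8020" (or a bare "::1") into the nonsense host "[" (or ":"), which is exactly the class of bug the patch targets.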
[jira] [Updated] (HADOOP-11820) aw jira testing, ignore
[ https://issues.apache.org/jira/browse/HADOOP-11820?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Allen Wittenauer updated HADOOP-11820: -- Attachment: (was: HADOOP-7266-remove.001.patch) > aw jira testing, ignore > --- > > Key: HADOOP-11820 > URL: https://issues.apache.org/jira/browse/HADOOP-11820 > Project: Hadoop Common > Issue Type: Task >Affects Versions: 3.0.0 >Reporter: Allen Wittenauer > Attachments: HADOOP-12334.06.patch, HDFS-9184.008.patch > > -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HADOOP-11820) aw jira testing, ignore
[ https://issues.apache.org/jira/browse/HADOOP-11820?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14962447#comment-14962447 ]

Hadoop QA commented on HADOOP-11820:
------------------------------------

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 9s {color} | {color:blue} docker + precommit patch detected. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s {color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s {color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 14s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 14s {color} | {color:green} trunk passed with JDK v1.8.0_60 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 15s {color} | {color:green} trunk passed with JDK v1.7.0_80 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 13s {color} | {color:green} trunk passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 8s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 36s {color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 44s {color} | {color:blue} findbugs executables are not available. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 13s {color} | {color:green} trunk passed with JDK v1.8.0_60 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 14s {color} | {color:green} trunk passed with JDK v1.7.0_80 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 14s {color} | {color:green} trunk passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 16s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 12s {color} | {color:green} the patch passed with JDK v1.8.0_60 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 12s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 14s {color} | {color:green} the patch passed with JDK v1.7.0_80 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 14s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 13s {color} | {color:green} the patch passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 13s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 8s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 12s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s {color} | {color:red} The patch has 1 line(s) with tabs. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 12s {color} | {color:green} the patch passed with JDK v1.8.0_60 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 14s {color} | {color:green} the patch passed with JDK v1.7.0_80 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 14s {color} | {color:green} the patch passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 6s {color} | {color:green} hadoop-azure in the patch passed with JDK v1.8.0_60. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 16s {color} | {color:green} hadoop-azure in the patch passed with JDK v1.7.0_80. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 18s {color} | {color:green} hadoop-azure in the patch passed with JDK v1.7.0_79. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 0m 23s {color} | {color:red} Patch generated 1 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 11m 39s {color} | {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
|
[jira] [Commented] (HADOOP-12450) UserGroupInformation should not log at WARN level if no groups are found
[ https://issues.apache.org/jira/browse/HADOOP-12450?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14962460#comment-14962460 ]

Hudson commented on HADOOP-12450:
---------------------------------

FAILURE: Integrated in Hadoop-Hdfs-trunk-Java8 #509 (See [https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/509/])
HADOOP-12450. UserGroupInformation should not log at WARN level if no (stevel: rev e286512a7143427f2975ec92cdc4fad0a093a456)
* hadoop-common-project/hadoop-common/CHANGES.txt
* hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/UserGroupInformation.java

> UserGroupInformation should not log at WARN level if no groups are found
> ------------------------------------------------------------------------
>
>                 Key: HADOOP-12450
>                 URL: https://issues.apache.org/jira/browse/HADOOP-12450
>             Project: Hadoop Common
>          Issue Type: Improvement
>          Components: security
>            Reporter: Elliott Clark
>            Assignee: Elliott Clark
>            Priority: Minor
>             Fix For: 2.8.0
>
>         Attachments: HADOOP-12450-v1.patch, HADOOP-12450-v2.patch
>
> HBase tries to get the groups of a user on every request. That user may or may not exist on the box running Hadoop/HBase.
> If that user doesn't exist, Hadoop currently logs at the WARN level every time. This leads to gigs of log spam and serious GC issues.

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
[jira] [Commented] (HADOOP-11628) SPNEGO auth does not work with CNAMEs in JDK8
[ https://issues.apache.org/jira/browse/HADOOP-11628?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14962466#comment-14962466 ] Hudson commented on HADOOP-11628: - FAILURE: Integrated in Hadoop-Hdfs-trunk #2446 (See [https://builds.apache.org/job/Hadoop-Hdfs-trunk/2446/]) HADOOP-11628. SPNEGO auth does not work with CNAMEs in JDK8. (Daryn (stevel: rev bafeb6c7bc50efd11c6637921a50dd9cfdd53841) * hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/server/KerberosAuthenticationHandler.java * hadoop-common-project/hadoop-common/CHANGES.txt > SPNEGO auth does not work with CNAMEs in JDK8 > - > > Key: HADOOP-11628 > URL: https://issues.apache.org/jira/browse/HADOOP-11628 > Project: Hadoop Common > Issue Type: Bug > Components: security >Affects Versions: 2.6.0 >Reporter: Daryn Sharp >Assignee: Daryn Sharp >Priority: Blocker > Labels: jdk8 > Fix For: 2.8.0 > > Attachments: HADOOP-11628.patch > > > Pre-JDK8, GSSName auto-canonicalized the hostname when constructing the > principal for SPNEGO. JDK8 no longer does this which breaks the use of > user-friendly CNAMEs for services. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
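As context for the HADOOP-11628 fix above: pre-JDK8, GSSName resolved a CNAME to its canonical host before building the SPNEGO principal. A minimal sketch of the principal-building step is below; the class and method names are hypothetical, not the actual KerberosAuthenticationHandler code, and the DNS canonicalization itself (which in real code would come from something like InetAddress.getByName(host).getCanonicalHostName()) is assumed to have already happened.

```java
import java.util.Locale;

// Hypothetical sketch: build the HTTP service principal from an
// already-canonicalized hostname, as Kerberos expects a lower-case
// canonical host in "HTTP/<host>" form.
public class SpnegoPrincipalSketch {
    public static String httpPrincipal(String canonicalHost) {
        return "HTTP/" + canonicalHost.toLowerCase(Locale.ROOT);
    }
}
```

The point of the patch is that this canonicalization must now be done explicitly, since JDK8's GSSName no longer does it for you.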
[jira] [Commented] (HADOOP-11820) aw jira testing, ignore
[ https://issues.apache.org/jira/browse/HADOOP-11820?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14962465#comment-14962465 ] Hadoop QA commented on HADOOP-11820: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 4s {color} | {color:blue} docker + precommit patch detected. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s {color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s {color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 56s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 13s {color} | {color:green} trunk passed with JDK v1.8.0_60 {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 14s {color} | {color:green} trunk passed with JDK v1.7.0_80 {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 14s {color} | {color:green} trunk passed with JDK v1.7.0_79 {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 8s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 10s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 29s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 12s {color} | {color:green} trunk passed with JDK v1.8.0_60 {color} | | 
{color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 13s {color} | {color:green} trunk passed with JDK v1.7.0_80 {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 13s {color} | {color:green} trunk passed with JDK v1.7.0_79 {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 14s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 11s {color} | {color:green} the patch passed with JDK v1.8.0_60 {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 11s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 13s {color} | {color:green} the patch passed with JDK v1.7.0_80 {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 13s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 13s {color} | {color:green} the patch passed with JDK v1.7.0_79 {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 13s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 7s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 11s {color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s {color} | {color:red} The patch has 1 line(s) with tabs. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 36s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 12s {color} | {color:green} the patch passed with JDK v1.8.0_60 {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 13s {color} | {color:green} the patch passed with JDK v1.7.0_80 {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 15s {color} | {color:green} the patch passed with JDK v1.7.0_79 {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 1s {color} | {color:green} hadoop-azure in the patch passed with JDK v1.8.0_60. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 11s {color} | {color:green} hadoop-azure in the patch passed with JDK v1.7.0_80. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 12s {color} | {color:green} hadoop-azure in the patch passed with JDK v1.7.0_79. {color} | | {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 0m 21s {color} | {color:red} Patch generated 1 ASF License warnings. {color} | | {color:black}{color} | {color:black}
[jira] [Commented] (HADOOP-12450) UserGroupInformation should not log at WARN level if no groups are found
[ https://issues.apache.org/jira/browse/HADOOP-12450?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14962468#comment-14962468 ] Hudson commented on HADOOP-12450: - FAILURE: Integrated in Hadoop-Hdfs-trunk #2446 (See [https://builds.apache.org/job/Hadoop-Hdfs-trunk/2446/]) HADOOP-12450. UserGroupInformation should not log at WARN level if no (stevel: rev e286512a7143427f2975ec92cdc4fad0a093a456) * hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/UserGroupInformation.java * hadoop-common-project/hadoop-common/CHANGES.txt > UserGroupInformation should not log at WARN level if no groups are found > > > Key: HADOOP-12450 > URL: https://issues.apache.org/jira/browse/HADOOP-12450 > Project: Hadoop Common > Issue Type: Improvement > Components: security >Reporter: Elliott Clark >Assignee: Elliott Clark >Priority: Minor > Fix For: 2.8.0 > > Attachments: HADOOP-12450-v1.patch, HADOOP-12450-v2.patch > > > HBase tries to get the groups of a user on every request. That user may or > may not exist on the box running Hadoop/HBase. > If that user doesn't exist currently Hadoop will log at the WARN level > everytime. This leads to gigs of log spam and serious GC issues. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HADOOP-10642) Provide option to limit heap memory consumed by dynamic metrics2 metrics
[ https://issues.apache.org/jira/browse/HADOOP-10642?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ted Yu updated HADOOP-10642: Description: User sunweiei provided the following jmap output in HBase 0.96 deployment: {code} num #instances #bytes class name -- 1: 14917882 3396492464 [C 2: 1996994 2118021808 [B 3: 43341650 1733666000 java.util.LinkedHashMap$Entry 4: 14453983 1156550896 [Ljava.util.HashMap$Entry; 5: 14446577 924580928 org.apache.hadoop.metrics2.lib.Interns$CacheWith2Keys$2 {code} Heap consumption by Interns$CacheWith2Keys$2 (and indirectly by [C) could be due to calls to Interns.info() in DynamicMetricsRegistry which was cloned off metrics2/lib/MetricsRegistry.java. This scenario would arise when large number of regions are tracked through metrics2 dynamically. Interns class doesn't provide API to remove entries in its internal Map. One solution is to provide an option that allows skipping calls to Interns.info() in metrics2/lib/MetricsRegistry.java was: User sunweiei provided the following jmap output in HBase 0.96 deployment: {code} num #instances #bytes class name -- 1: 14917882 3396492464 [C 2: 1996994 2118021808 [B 3: 43341650 1733666000 java.util.LinkedHashMap$Entry 4: 14453983 1156550896 [Ljava.util.HashMap$Entry; 5: 14446577 924580928 org.apache.hadoop.metrics2.lib.Interns$CacheWith2Keys$2 {code} Heap consumption by Interns$CacheWith2Keys$2 (and indirectly by [C) could be due to calls to Interns.info() in DynamicMetricsRegistry which was cloned off metrics2/lib/MetricsRegistry.java. This scenario would arise when large number of regions are tracked through metrics2 dynamically. Interns class doesn't provide API to remove entries in its internal Map. 
One solution is to provide an option that allows skipping calls to Interns.info() in metrics2/lib/MetricsRegistry.java > Provide option to limit heap memory consumed by dynamic metrics2 metrics > > > Key: HADOOP-10642 > URL: https://issues.apache.org/jira/browse/HADOOP-10642 > Project: Hadoop Common > Issue Type: Improvement > Components: metrics >Reporter: Ted Yu > > User sunweiei provided the following jmap output in HBase 0.96 deployment: > {code} > num #instances #bytes class name > -- >1: 14917882 3396492464 [C >2: 1996994 2118021808 [B >3: 43341650 1733666000 java.util.LinkedHashMap$Entry >4: 14453983 1156550896 [Ljava.util.HashMap$Entry; >5: 14446577 924580928 > org.apache.hadoop.metrics2.lib.Interns$CacheWith2Keys$2 > {code} > Heap consumption by Interns$CacheWith2Keys$2 (and indirectly by [C) could be > due to calls to Interns.info() in DynamicMetricsRegistry which was cloned off > metrics2/lib/MetricsRegistry.java. > This scenario would arise when large number of regions are tracked through > metrics2 dynamically. > Interns class doesn't provide API to remove entries in its internal Map. > One solution is to provide an option that allows skipping calls to > Interns.info() in metrics2/lib/MetricsRegistry.java -- This message was sent by Atlassian JIRA (v6.3.4#6332)
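The unbounded growth described in HADOOP-10642 above can be illustrated with a minimal stand-in for the intern cache. This is only a sketch of the failure mode, not the real Interns implementation: every distinct metric name/description pair adds a map entry that is never evicted, so tracking many dynamic regions grows the heap without bound.

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative only: mimics an intern cache with no eviction, which
// retains one entry per distinct (name, description) key forever --
// the growth pattern visible in the jmap output above.
public class InternGrowthSketch {
    private static final Map<String, String> CACHE = new HashMap<>();

    public static String intern(String name, String description) {
        return CACHE.computeIfAbsent(name + "|" + description, k -> k);
    }

    public static int cacheSize() {
        return CACHE.size();
    }
}
```

With thousands of regions each contributing many metric names, the cache size (and the retained char[] data behind it) scales with the number of distinct keys ever seen, which is why an option to skip interning is proposed.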
[jira] [Commented] (HADOOP-7266) Deprecate metrics v1
[ https://issues.apache.org/jira/browse/HADOOP-7266?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14962484#comment-14962484 ] Tsuyoshi Ozawa commented on HADOOP-7266: I see. Should we also remove the SuppressWarnings annotations on fields and callers of deprecated methods? IMHO, the javac warnings added by the patch help us fix them in follow-up JIRAs. What do you think? > Deprecate metrics v1 > > > Key: HADOOP-7266 > URL: https://issues.apache.org/jira/browse/HADOOP-7266 > Project: Hadoop Common > Issue Type: Improvement > Components: metrics >Affects Versions: 2.8.0 >Reporter: Luke Lu >Assignee: Akira AJISAKA >Priority: Blocker > Attachments: HADOOP-7266-remove.001.patch, HADOOP-7266.001.patch, > HADOOP-7266.002.patch, HADOOP-7266.003.patch, HADOOP-7266.004.patch, > HADOOP-7266.005.patch > > -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HADOOP-12122) Fix Hadoop should avoid unsafe split and append on fields that might be IPv6 literals
[ https://issues.apache.org/jira/browse/HADOOP-12122?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14962401#comment-14962401 ] Allen Wittenauer commented on HADOOP-12122: --- FYI: This patch was already so large (module-count-wise) that it barely fit in the 500m limit on the old test-patch code/setup. Now that both JDK 1.7 and JDK 1.8 are being tested, it goes over the limit and Jenkins kills it before completion. So this patch absolutely must get broken into multiple parts. > Fix Hadoop should avoid unsafe split and append on fields that might be IPv6 > literals > - > > Key: HADOOP-12122 > URL: https://issues.apache.org/jira/browse/HADOOP-12122 > Project: Hadoop Common > Issue Type: Sub-task >Affects Versions: HADOOP-11890 >Reporter: Nate Edel >Assignee: Nemanja Matkovic > Attachments: HADOOP-12122-HADOOP-11890.0.patch, > HADOOP-12122-HADOOP-11890.3.patch, HADOOP-12122-HADOOP-11890.4.patch, > HADOOP-12122-HADOOP-11890.5.patch, HADOOP-12122-HADOOP-11890.6.patch, > HADOOP-12122-HADOOP-11890.7.patch, HADOOP-12122-HADOOP-11890.8.patch, > HADOOP-12122-HADOOP-11890.9.patch, HADOOP-12122-HADOOP-12122.2.patch, > HADOOP-12122-HADOOP-12122.3.patch, HADOOP-12122.0.patch, > lets_blow_up_a_lot_of_tests.patch > > > There are a fairly extensive number of locations found via code inspection > which use unsafe methods of handling addresses in a dual-stack or IPv6-only > world: > - splits on the first ":" assuming that delimits a host from a port > - produces a host port pair by appending :port blindly (Java prefers > [ipv6]:port which is the standard for IPv6 URIs) > - depends on the behavior of InetSocketAddress.toString() which produces the > above. > This patch fixes those metaphors that I can find above, and replaces calls to > InetSocketAddress.toString() with a wrapper that properly brackets the IPv6 > address if there is one. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
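The unsafe patterns HADOOP-12122 targets (splitting on the first ":" and blindly appending ":port") and the bracketed [ipv6]:port form the patch moves to can be sketched roughly as follows. The class and method names here are illustrative, not Hadoop's actual NetUtils API; real code would also need to handle inputs with no port at all.

```java
// Hypothetical sketch of IPv6-safe host:port handling. An IPv6 literal
// contains ':' itself, so splitting on the first ':' is wrong; the
// bracketed form "[::1]:8020" is the standard (RFC 3986) representation.
public class HostPortSketch {
    /** Join host and port, bracketing IPv6 literals. */
    public static String join(String host, int port) {
        return (host.contains(":") ? "[" + host + "]" : host) + ":" + port;
    }

    /** Split "[ipv6]:port" or "host:port" into {host, port}. */
    public static String[] split(String hostPort) {
        if (hostPort.startsWith("[")) {
            int end = hostPort.indexOf(']');
            return new String[] { hostPort.substring(1, end),
                                  hostPort.substring(end + 2) };
        }
        // lastIndexOf, not indexOf: a first-':' split would truncate
        // an IPv6 literal at its first group.
        int idx = hostPort.lastIndexOf(':');
        return new String[] { hostPort.substring(0, idx),
                              hostPort.substring(idx + 1) };
    }
}
```

This is also why the patch wraps InetSocketAddress.toString(): the default output must be normalized into the bracketed form before anything appends or splits on it.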
[jira] [Updated] (HADOOP-11820) aw jira testing, ignore
[ https://issues.apache.org/jira/browse/HADOOP-11820?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Allen Wittenauer updated HADOOP-11820: -- Attachment: HADOOP-12334.06.patch > aw jira testing, ignore > --- > > Key: HADOOP-11820 > URL: https://issues.apache.org/jira/browse/HADOOP-11820 > Project: Hadoop Common > Issue Type: Task >Affects Versions: 3.0.0 >Reporter: Allen Wittenauer > Attachments: HADOOP-12334.06.patch, HADOOP-12334.06.patch > > -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HADOOP-10941) Proxy user verification NPEs if remote host is unresolvable
[ https://issues.apache.org/jira/browse/HADOOP-10941?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14962452#comment-14962452 ] Hudson commented on HADOOP-10941: - SUCCESS: Integrated in Hadoop-Yarn-trunk #1284 (See [https://builds.apache.org/job/Hadoop-Yarn-trunk/1284/]) HADOOP-10941. Proxy user verification NPEs if remote host is (stevel: rev 0ab3f9d56465bf31668159c562305a3b8222004c) * hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/authorize/TestProxyUsers.java * hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/MachineList.java * hadoop-common-project/hadoop-common/CHANGES.txt * hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/util/TestMachineList.java * hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/authorize/DefaultImpersonationProvider.java > Proxy user verification NPEs if remote host is unresolvable > --- > > Key: HADOOP-10941 > URL: https://issues.apache.org/jira/browse/HADOOP-10941 > Project: Hadoop Common > Issue Type: Bug > Components: ipc, security >Affects Versions: 2.5.0, 3.0.0 >Reporter: Daryn Sharp >Assignee: Benoy Antony >Priority: Critical > Labels: BB2015-05-TBR > Fix For: 2.8.0 > > Attachments: HADOOP-10941.patch > > > A null is passed to the impersonation providers for the remote address if it > is unresolvable. {{DefaultImpersationProvider}} will NPE, ipc will close the > connection immediately (correct behavior for such unexpected exceptions), > client fails on {{EOFException}}. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
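The HADOOP-10941 failure mode above (a null remote address reaching the impersonation provider) suggests a simple guard. The sketch below is illustrative only, not DefaultImpersonationProvider's real code: it denies the request when the host is unresolvable instead of dereferencing null.

```java
import java.util.Set;

// Hypothetical sketch: null-safe host authorization. A null remoteAddr
// (unresolvable host) is rejected explicitly rather than triggering an
// NPE that tears down the IPC connection.
public class NullSafeAddrCheck {
    public static boolean isAllowedHost(String remoteAddr, Set<String> allowed) {
        if (remoteAddr == null) {
            return false; // unresolvable host: deny, don't NPE
        }
        return allowed.contains(remoteAddr);
    }
}
```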
[jira] [Commented] (HADOOP-11820) aw jira testing, ignore
[ https://issues.apache.org/jira/browse/HADOOP-11820?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14962474#comment-14962474 ] Hadoop QA commented on HADOOP-11820: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 8s {color} | {color:blue} docker + precommit patch detected. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s {color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s {color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 5s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 14s {color} | {color:green} trunk passed with JDK v1.8.0_60 {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 12s {color} | {color:green} trunk passed with JDK v1.7.0_79 {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 7s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 30s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 28s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 12s {color} | {color:green} trunk passed with JDK v1.8.0_60 {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 12s {color} | {color:green} trunk passed with JDK v1.7.0_79 {color} | | 
{color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 14s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 11s {color} | {color:green} the patch passed with JDK v1.8.0_60 {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 11s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 13s {color} | {color:green} the patch passed with JDK v1.7.0_79 {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 13s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 8s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 10s {color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s {color} | {color:red} The patch has 1 line(s) with tabs. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 37s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 13s {color} | {color:green} the patch passed with JDK v1.8.0_60 {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 13s {color} | {color:green} the patch passed with JDK v1.7.0_79 {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 6s {color} | {color:green} hadoop-azure in the patch passed with JDK v1.8.0_60. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 13s {color} | {color:green} hadoop-azure in the patch passed with JDK v1.7.0_79. {color} | | {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 0m 21s {color} | {color:red} Patch generated 1 ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 9m 41s {color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=1.7.1 Server=1.7.1 Image:test-patch-base-hadoop-date2015-10-18 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12767256/HADOOP-12334.06.patch | | JIRA Issue | HADOOP-11820 | | Optional Tests | asflicense javac javadoc mvninstall unit findbugs checkstyle compile | | uname | Linux e5e110cc79d5 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /home/jenkins/jenkins-slave/workspace/PreCommit-HADOOP-Build/patchprocess/apache-yetus-5d3b14f/dev-support/personality/hadoop.sh | | git revision | trunk / 0ab3f9d | | Default Java | 1.7.0_79 | | Multi-JDK versions | /usr/lib/jvm/java-8-oracle:1.8.0_60
[jira] [Commented] (HADOOP-10941) Proxy user verification NPEs if remote host is unresolvable
[ https://issues.apache.org/jira/browse/HADOOP-10941?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14962511#comment-14962511 ] Hudson commented on HADOOP-10941: - FAILURE: Integrated in Hadoop-Mapreduce-trunk #2496 (See [https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2496/]) HADOOP-10941. Proxy user verification NPEs if remote host is (stevel: rev 0ab3f9d56465bf31668159c562305a3b8222004c) * hadoop-common-project/hadoop-common/CHANGES.txt * hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/authorize/DefaultImpersonationProvider.java * hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/authorize/TestProxyUsers.java * hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/util/TestMachineList.java * hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/MachineList.java > Proxy user verification NPEs if remote host is unresolvable > --- > > Key: HADOOP-10941 > URL: https://issues.apache.org/jira/browse/HADOOP-10941 > Project: Hadoop Common > Issue Type: Bug > Components: ipc, security >Affects Versions: 2.5.0, 3.0.0 >Reporter: Daryn Sharp >Assignee: Benoy Antony >Priority: Critical > Labels: BB2015-05-TBR > Fix For: 2.8.0 > > Attachments: HADOOP-10941.patch > > > A null is passed to the impersonation providers for the remote address if it > is unresolvable. {{DefaultImpersationProvider}} will NPE, ipc will close the > connection immediately (correct behavior for such unexpected exceptions), > client fails on {{EOFException}}. -- This message was sent by Atlassian JIRA (v6.3.4#6332)