[jira] [Updated] (HADOOP-11820) aw jira testing, ignore
[ https://issues.apache.org/jira/browse/HADOOP-11820?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Allen Wittenauer updated HADOOP-11820: -- Attachment: dp.1 aw jira testing, ignore --- Key: HADOOP-11820 URL: https://issues.apache.org/jira/browse/HADOOP-11820 Project: Hadoop Common Issue Type: Task Affects Versions: 3.0.0 Reporter: Allen Wittenauer Attachments: dp.1 -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HADOOP-11820) aw jira testing, ignore
[ https://issues.apache.org/jira/browse/HADOOP-11820?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Allen Wittenauer updated HADOOP-11820: -- Attachment: (was: dp.1) aw jira testing, ignore --- Key: HADOOP-11820 URL: https://issues.apache.org/jira/browse/HADOOP-11820 Project: Hadoop Common Issue Type: Task Affects Versions: 3.0.0 Reporter: Allen Wittenauer Attachments: dp.1 -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HADOOP-11820) aw jira testing, ignore
[ https://issues.apache.org/jira/browse/HADOOP-11820?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14601924#comment-14601924 ] Hadoop QA commented on HADOOP-11820: (!) A patch to the files used for the QA process has been detected. Re-executing against the patched versions to perform further tests. The console is at https://builds.apache.org/job/PreCommit-HADOOP-Build/7043/console in case of problems. aw jira testing, ignore --- Key: HADOOP-11820 URL: https://issues.apache.org/jira/browse/HADOOP-11820 Project: Hadoop Common Issue Type: Task Affects Versions: 3.0.0 Reporter: Allen Wittenauer Attachments: dp.1 -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HADOOP-11820) aw jira testing, ignore
[ https://issues.apache.org/jira/browse/HADOOP-11820?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Allen Wittenauer updated HADOOP-11820: -- Attachment: dp.1 aw jira testing, ignore --- Key: HADOOP-11820 URL: https://issues.apache.org/jira/browse/HADOOP-11820 Project: Hadoop Common Issue Type: Task Affects Versions: 3.0.0 Reporter: Allen Wittenauer Attachments: dp.1 -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HADOOP-11335) KMS ACL in meta data or database
[ https://issues.apache.org/jira/browse/HADOOP-11335?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14601883#comment-14601883 ] Steve Ross commented on HADOOP-11335: - I like the overall premise of this JIRA, particularly the concept of storing the ACLs with the keys. Question: If the method for setting ACLs becomes the hadoop command-line utilities outlined in the design doc, how could one prevent the hadoop admin from having the ability to give themselves access to decrypt all data? A key design requirement of HDFS encryption is to be able to restrict HDFS superusers from having access to key material, thereby providing a layer of protection even against admins. This prevents a malicious superuser from having access to both (a) all the key material and (b) all the encrypted data, and thus being able to decrypt everything. For example, see http://hadoop.apache.org/docs/r2.7.0/hadoop-kms/index.html, the section titled KMS Access Control; the blacklist example includes the hdfs user. KMS ACL in meta data or database Key: HADOOP-11335 URL: https://issues.apache.org/jira/browse/HADOOP-11335 Project: Hadoop Common Issue Type: Improvement Components: kms Affects Versions: 2.6.0 Reporter: Jerry Chen Assignee: Dian Fu Labels: BB2015-05-TBR, Security Attachments: HADOOP-11335.001.patch, HADOOP-11335.002.patch, HADOOP-11335.003.patch, HADOOP-11335.004.patch, HADOOP-11335.005.patch, HADOOP-11335.006.patch, HADOOP-11335.007.patch, HADOOP-11335.008.patch, HADOOP-11335.re-design.patch, KMS ACL in metadata or database.pdf Original Estimate: 504h Remaining Estimate: 504h Currently Hadoop KMS implements ACLs for keys, and the per-key ACLs are stored in the configuration file kms-acls.xml. Managing ACLs in a configuration file is not easy in enterprise usage and creates difficulties for backup and recovery. It would be ideal to store the ACLs for keys in the key metadata, similar to what file system ACLs do.
In this way, backup and recovery that works on keys should work for the key ACLs too. Moreover, with the ACLs in the metadata, the ACL of each key can be easily manipulated with an API or command-line tool and take effect instantly. This is very important for enterprise-level access control management. This feature can be addressed by a separate JIRA. With the configuration file, these would be hard to provide. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HADOOP-11820) aw jira testing, ignore
[ https://issues.apache.org/jira/browse/HADOOP-11820?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Allen Wittenauer updated HADOOP-11820: -- Attachment: (was: dp.1) aw jira testing, ignore --- Key: HADOOP-11820 URL: https://issues.apache.org/jira/browse/HADOOP-11820 Project: Hadoop Common Issue Type: Task Affects Versions: 3.0.0 Reporter: Allen Wittenauer Attachments: dp.1 -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HADOOP-12117) Potential NPE from Configuration#loadProperty with allowNullValueProperties set.
[ https://issues.apache.org/jira/browse/HADOOP-12117?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14602385#comment-14602385 ] zhihai xu commented on HADOOP-12117: Hi [~raviprak], thanks for the review! The current unit tests don't trigger this issue. bq. I'd be wary of opening up the access level of a Configuration method (even if it is labeled @VisibleForTesting). Are you sure we can't unit test without changing the access level? Also please log the exception in testLoadProperty(). Good suggestions! Fixed in the new patch. I uploaded a new patch, HADOOP-12117.001.patch, which addresses all your comments; please review it. Potential NPE from Configuration#loadProperty with allowNullValueProperties set. Key: HADOOP-12117 URL: https://issues.apache.org/jira/browse/HADOOP-12117 Project: Hadoop Common Issue Type: Bug Components: conf Affects Versions: 2.7.1 Reporter: zhihai xu Assignee: zhihai xu Attachments: HADOOP-12117.000.patch, HADOOP-12117.001.patch Potential NPE from Configuration#loadProperty with allowNullValueProperties set. The following code can cause the NPE: {code} } else if (!value.equals(properties.getProperty(attr))) { {code} If {{allowNullValueProperties}} is true, {{value}} is null, and {{finalParameters}} contains {{attr}}, a NullPointerException will occur. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
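To illustrate the NPE described above, here is a minimal, self-contained sketch (not the actual Hadoop Configuration code; the helper name is invented) showing how a null-safe comparison with java.util.Objects.equals avoids dereferencing a null value:

```java
import java.util.Objects;
import java.util.Properties;

// Minimal sketch of the guard discussed in HADOOP-12117: when
// allowNullValueProperties is true, `value` may be null, so calling
// value.equals(...) throws a NullPointerException. Objects.equals
// tolerates a null on either side.
public class LoadPropertySketch {
    static boolean differsFromExisting(Properties properties, String attr, String value) {
        // null-safe replacement for: !value.equals(properties.getProperty(attr))
        return !Objects.equals(value, properties.getProperty(attr));
    }

    public static void main(String[] args) {
        Properties props = new Properties();
        props.setProperty("fs.defaultFS", "hdfs://hacluster");
        // null value (the allowNullValueProperties case): no NPE with the null-safe form
        System.out.println(differsFromExisting(props, "fs.defaultFS", null));              // true
        System.out.println(differsFromExisting(props, "fs.defaultFS", "hdfs://hacluster")); // false
    }
}
```

With the original `value.equals(...)` form, the first call in main would throw instead of returning true.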
[jira] [Commented] (HADOOP-11820) aw jira testing, ignore
[ https://issues.apache.org/jira/browse/HADOOP-11820?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14602397#comment-14602397 ] Hadoop QA commented on HADOOP-11820: \\ \\ | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s {color} | {color:blue} precommit patch detected. {color} | | {color:blue}0{color} | {color:blue} patch {color} | {color:blue} 0m 0s {color} | {color:blue} The patch file was not named according to hadoop's naming conventions. Please see https://wiki.apache.org/hadoop/HowToContribute for instructions. {color} | | {color:blue}0{color} | {color:blue} @author {color} | {color:blue} 0m 0s {color} | {color:blue} Skipping @author checks as test-patch.sh has been patched. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 0s {color} | {color:green} Patch does not generate ASF License warnings. {color} | | {color:red}-1{color} | {color:red} shellcheck {color} | {color:red} 0m 10s {color} | {color:red} The applied patch generated 1 new shellcheck (v0.3.3) issues (total was 59, now 51). {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s {color} | {color:green} Patch has no whitespace issues. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 0m 18s {color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Patch URL | http://issues.apache.org/jira/secure/attachment/12742040/dp.1 | | git revision | trunk / 8ef07f7 | | Optional Tests | asflicense shellcheck | | uname | Linux asf901.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Default Java | 1.7.0_55 | | Multi-JDK versions | /home/jenkins/tools/java/jdk1.7.0_55:1.7.0_55 | | shellcheck | v0.3.3 (This is an old version that has serious bugs. 
Consider upgrading.) | | shellcheck | https://builds.apache.org/job/PreCommit-HADOOP-Build/7047/artifact/patchprocess/diffpatchshellcheck.txt | | Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/7047/console | This message was automatically generated. aw jira testing, ignore --- Key: HADOOP-11820 URL: https://issues.apache.org/jira/browse/HADOOP-11820 Project: Hadoop Common Issue Type: Task Affects Versions: 3.0.0 Reporter: Allen Wittenauer Attachments: dp.1 -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HADOOP-9565) Add a Blobstore interface to add to blobstore FileSystems
[ https://issues.apache.org/jira/browse/HADOOP-9565?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14602430#comment-14602430 ] Chen He commented on HADOOP-9565: - The ._COPYING_ mechanism actually has a problem. I created HDFS-8673 for the bug. Add a Blobstore interface to add to blobstore FileSystems - Key: HADOOP-9565 URL: https://issues.apache.org/jira/browse/HADOOP-9565 Project: Hadoop Common Issue Type: Improvement Components: fs, fs/s3, fs/swift Affects Versions: 2.6.0 Reporter: Steve Loughran Assignee: Steve Loughran Labels: BB2015-05-TBR Attachments: HADOOP-9565-001.patch, HADOOP-9565-002.patch, HADOOP-9565-003.patch We can make explicit the fact that some {{FileSystem}} implementations are really blobstores, with different atomicity and consistency guarantees, by adding a {{Blobstore}} interface to them. This could also be a place to add a {{Copy(Path,Path)}} method, assuming that all blobstores implement a server-side copy operation as a substitute for rename. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HADOOP-12107) long running apps may have a huge number of StatisticsData instances under FileSystem
[ https://issues.apache.org/jira/browse/HADOOP-12107?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Gera Shegalov updated HADOOP-12107: --- Priority: Critical (was: Minor) long running apps may have a huge number of StatisticsData instances under FileSystem - Key: HADOOP-12107 URL: https://issues.apache.org/jira/browse/HADOOP-12107 Project: Hadoop Common Issue Type: Bug Components: fs Affects Versions: 2.7.0 Reporter: Sangjin Lee Assignee: Sangjin Lee Priority: Critical Attachments: HADOOP-12107.001.patch, HADOOP-12107.002.patch, HADOOP-12107.003.patch, HADOOP-12107.004.patch, HADOOP-12107.005.patch We observed with some of our apps (non-mapreduce apps that use filesystems) that they end up accumulating a huge memory footprint coming from {{FileSystem$Statistics$StatisticsData}} (in the {{allData}} list of {{Statistics}}). Although the thread reference from {{StatisticsData}} is a weak reference, and thus can get cleared once a thread goes away, the actual {{StatisticsData}} instances in the list won't get cleared until any of these following methods is called on {{Statistics}}: - {{getBytesRead()}} - {{getBytesWritten()}} - {{getReadOps()}} - {{getLargeReadOps()}} - {{getWriteOps()}} - {{toString()}} It is quite possible to have an application that interacts with a filesystem but does not call any of these methods on the {{Statistics}}. If such an application runs for a long time and has a large amount of thread churn, the memory footprint will grow significantly. The current workaround is either to limit the thread churn or to invoke these operations occasionally to pare down the memory. However, this is still a deficiency with {{FileSystem$Statistics}} itself in that the memory is controlled only as a side effect of those operations. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
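The accumulation pattern described above can be sketched with a simplified, hypothetical model (this is not Hadoop's actual FileSystem$Statistics class; names are invented for illustration). Entries hold only a weak reference to their thread, but the entries themselves stay in the list until an aggregating method walks it and prunes dead ones:

```java
import java.lang.ref.WeakReference;
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;

// Simplified model of the leak in HADOOP-12107: per-thread data entries
// accumulate in `allData`, and are removed only as a side effect of an
// aggregating call such as getBytesRead(). If callers never aggregate,
// the list grows with every short-lived thread.
class StatsSketch {
    static final class Data {
        final WeakReference<Thread> owner; // weak: does not keep the thread alive
        long bytesRead;
        Data(Thread t) { owner = new WeakReference<>(t); }
    }

    private final List<Data> allData = new ArrayList<>();

    synchronized Data register(Thread t) {
        Data d = new Data(t);
        allData.add(d);
        return d;
    }

    // Aggregation is the only place dead entries get removed.
    synchronized long getBytesRead() {
        long total = 0;
        for (Iterator<Data> it = allData.iterator(); it.hasNext();) {
            Data d = it.next();
            total += d.bytesRead;
            if (d.owner.get() == null) {
                it.remove(); // prune entries whose thread has been collected
            }
        }
        return total;
    }

    synchronized int size() { return allData.size(); }

    public static void main(String[] args) {
        StatsSketch stats = new StatsSketch();
        Data d = stats.register(Thread.currentThread());
        d.bytesRead = 42;
        System.out.println(stats.getBytesRead() + " bytes, " + stats.size() + " entries");
    }
}
```

This is why the current workaround in the issue is to invoke one of the aggregating getters occasionally: pruning happens only inside them.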
[jira] [Commented] (HADOOP-11820) aw jira testing, ignore
[ https://issues.apache.org/jira/browse/HADOOP-11820?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14602396#comment-14602396 ] Hadoop QA commented on HADOOP-11820: (!) A patch to the files used for the QA process has been detected. Re-executing against the patched versions to perform further tests. The console is at https://builds.apache.org/job/PreCommit-HADOOP-Build/7047/console in case of problems. aw jira testing, ignore --- Key: HADOOP-11820 URL: https://issues.apache.org/jira/browse/HADOOP-11820 Project: Hadoop Common Issue Type: Task Affects Versions: 3.0.0 Reporter: Allen Wittenauer Attachments: dp.1 -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HADOOP-11203) Allow ditscp to accept bandwitdh in fraction MegaBytes
[ https://issues.apache.org/jira/browse/HADOOP-11203?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14602435#comment-14602435 ] Hudson commented on HADOOP-11203: - FAILURE: Integrated in Hadoop-trunk-Commit #8071 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/8071/]) HADOOP-11203. Allow ditscp to accept bandwitdh in fraction MegaBytes. Contributed by Raju Bairishetti (amareshwari: rev 8ef07f767f0421b006b0fc77e5daf36c7b06abf1) * hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/mapred/CopyMapper.java * hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/DistCpOptions.java * hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/OptionsParser.java * hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/DistCpOptionSwitch.java * hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/util/ThrottledInputStream.java * hadoop-tools/hadoop-distcp/src/test/java/org/apache/hadoop/tools/TestOptionsParser.java * hadoop-common-project/hadoop-common/CHANGES.txt * hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/mapred/RetriableFileCopyCommand.java * hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/DistCpConstants.java Allow ditscp to accept bandwitdh in fraction MegaBytes -- Key: HADOOP-11203 URL: https://issues.apache.org/jira/browse/HADOOP-11203 Project: Hadoop Common Issue Type: Improvement Components: tools/distcp Reporter: Raju Bairishetti Assignee: Raju Bairishetti Fix For: 3.0.0 Attachments: HADOOP-11203.001.patch, HADOOP-11203.patch DistCp uses ThrottledInputStream, which provides bandwidth throttling on a specified stream. Currently, DistCp accepts the max bandwidth value only in whole megabytes, which does not allow fractional values. It would be better if it accepted the max bandwidth in fractional megabytes; as it stands, we are not able to throttle the bandwidth in KBs in our prod setup.
-- This message was sent by Atlassian JIRA (v6.3.4#6332)
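The change described above boils down to parsing the bandwidth option as a fractional number and converting it to a bytes-per-second limit. A minimal sketch (the method and class names here are invented for illustration, not DistCp's actual code):

```java
// Sketch of fractional-megabyte bandwidth parsing as discussed in
// HADOOP-11203: parsing with Float.parseFloat instead of Integer.parseInt
// makes values like 0.5 (i.e. 512 KB/s) expressible.
public class BandwidthSketch {
    static final long BYTES_PER_MB = 1024L * 1024L;

    static long maxBytesPerSecond(String mapBandwidthOption) {
        float mb = Float.parseFloat(mapBandwidthOption); // accepts "0.5", "2", ...
        if (mb <= 0) {
            throw new IllegalArgumentException("bandwidth must be positive: " + mb);
        }
        return (long) (mb * BYTES_PER_MB);
    }

    public static void main(String[] args) {
        System.out.println(maxBytesPerSecond("0.5")); // 524288 bytes/s, i.e. 512 KB/s
        System.out.println(maxBytesPerSecond("2"));   // 2097152 bytes/s
    }
}
```

The resulting bytes-per-second figure is what a throttled stream would compare its running transfer rate against.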
[jira] [Commented] (HADOOP-11820) aw jira testing, ignore
[ https://issues.apache.org/jira/browse/HADOOP-11820?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14602444#comment-14602444 ] Hadoop QA commented on HADOOP-11820: (!) A patch to the files used for the QA process has been detected. Re-executing against the patched versions to perform further tests. The console is at https://builds.apache.org/job/PreCommit-HADOOP-Build/7048/console in case of problems. aw jira testing, ignore --- Key: HADOOP-11820 URL: https://issues.apache.org/jira/browse/HADOOP-11820 Project: Hadoop Common Issue Type: Task Affects Versions: 3.0.0 Reporter: Allen Wittenauer Attachments: dp.1 -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HADOOP-11820) aw jira testing, ignore
[ https://issues.apache.org/jira/browse/HADOOP-11820?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14602446#comment-14602446 ] Hadoop QA commented on HADOOP-11820: \\ \\ | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s {color} | {color:blue} precommit patch detected. {color} | | {color:blue}0{color} | {color:blue} patch {color} | {color:blue} 0m 0s {color} | {color:blue} The patch file was not named according to hadoop's naming conventions. Please see https://wiki.apache.org/hadoop/HowToContribute for instructions. {color} | | {color:blue}0{color} | {color:blue} @author {color} | {color:blue} 0m 0s {color} | {color:blue} Skipping @author checks as test-patch.sh has been patched. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 0s {color} | {color:green} Patch does not generate ASF License warnings. {color} | | {color:red}-1{color} | {color:red} shellcheck {color} | {color:red} 0m 11s {color} | {color:red} The applied patch generated 1 new shellcheck (v0.3.3) issues (total was 59, now 51). {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s {color} | {color:green} Patch has no whitespace issues. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 0m 19s {color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Patch URL | http://issues.apache.org/jira/secure/attachment/12742040/dp.1 | | git revision | trunk / 8ef07f7 | | Optional Tests | asflicense shellcheck | | uname | Linux asf903.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Default Java | 1.7.0_55 | | Multi-JDK versions | /home/jenkins/tools/java/jdk1.7.0_55:1.7.0_55 | | shellcheck | v0.3.3 (This is an old version that has serious bugs. 
Consider upgrading.) | | shellcheck | https://builds.apache.org/job/PreCommit-HADOOP-Build/7048/artifact/patchprocess/diffpatchshellcheck.txt | | Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/7048/console | This message was automatically generated. aw jira testing, ignore --- Key: HADOOP-11820 URL: https://issues.apache.org/jira/browse/HADOOP-11820 Project: Hadoop Common Issue Type: Task Affects Versions: 3.0.0 Reporter: Allen Wittenauer Attachments: dp.1 -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HADOOP-12053) Harfs defaulturiport should be Zero ( should not -1)
[ https://issues.apache.org/jira/browse/HADOOP-12053?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14602447#comment-14602447 ] Hadoop QA commented on HADOOP-12053: \\ \\ | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | pre-patch | 17m 30s | Pre-patch trunk compilation is healthy. | | {color:green}+1{color} | @author | 0m 0s | The patch does not contain any @author tags. | | {color:green}+1{color} | tests included | 0m 0s | The patch appears to include 1 new or modified test files. | | {color:green}+1{color} | javac | 7m 44s | There were no new javac warning messages. | | {color:green}+1{color} | javadoc | 9m 51s | There were no new javadoc warning messages. | | {color:green}+1{color} | release audit | 0m 23s | The applied patch does not increase the total number of release audit warnings. | | {color:green}+1{color} | checkstyle | 1m 27s | There were no new checkstyle issues. | | {color:green}+1{color} | whitespace | 0m 0s | The patch has no lines that end in whitespace. | | {color:green}+1{color} | install | 1m 35s | mvn install still works. | | {color:green}+1{color} | eclipse:eclipse | 0m 33s | The patch built with eclipse:eclipse. | | {color:green}+1{color} | findbugs | 2m 42s | The patch does not introduce any new Findbugs (version 3.0.0) warnings. | | {color:red}-1{color} | common tests | 22m 5s | Tests failed in hadoop-common. | | {color:green}+1{color} | tools/hadoop tests | 1m 11s | Tests passed in hadoop-azure. 
| | | | 65m 5s | | \\ \\ || Reason || Tests || | Timed out tests | org.apache.hadoop.io.file.tfile.TestTFileSeqFileComparison | \\ \\ || Subsystem || Report/Notes || | Patch URL | http://issues.apache.org/jira/secure/attachment/12742021/HADOOP-12053.003.patch | | Optional Tests | javadoc javac unit findbugs checkstyle | | git revision | trunk / 8ef07f7 | | hadoop-common test log | https://builds.apache.org/job/PreCommit-HADOOP-Build/7046/artifact/patchprocess/testrun_hadoop-common.txt | | hadoop-azure test log | https://builds.apache.org/job/PreCommit-HADOOP-Build/7046/artifact/patchprocess/testrun_hadoop-azure.txt | | Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/7046/testReport/ | | Java | 1.7.0_55 | | uname | Linux asf903.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux | | Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/7046/console | This message was automatically generated. Harfs defaulturiport should be Zero ( should not -1) Key: HADOOP-12053 URL: https://issues.apache.org/jira/browse/HADOOP-12053 Project: Hadoop Common Issue Type: Bug Affects Versions: 2.7.0 Reporter: Brahma Reddy Battula Assignee: Gera Shegalov Attachments: HADOOP-12053.001.patch, HADOOP-12053.002.patch, HADOOP-12053.003.patch The harfs overrides the getUriDefaultPort method of AbstractFilesystem, and returns -1 . 
But -1 can't pass the checkPath method when {{fs.defaultFS}} is set without a port (like hdfs://hacluster). *Test Code:* {code} for (FileStatus file : files) { String[] edges = file.getPath().getName().split("-"); if (applicationId.toString().compareTo(edges[0]) >= 0 && applicationId.toString().compareTo(edges[1]) <= 0) { Path harPath = new Path("har://" + file.getPath().toUri().getPath()); harPath = harPath.getFileSystem(conf).makeQualified(harPath); remoteAppDir = LogAggregationUtils.getRemoteAppLogDir( harPath, applicationId, appOwner, LogAggregationUtils.getRemoteNodeLogDirSuffix(conf)); if (FileContext.getFileContext(remoteAppDir.toUri()).util().exists(remoteAppDir)) { remoteDirSet.add(remoteAppDir); } } } {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HADOOP-12049) Control http authentication cookie persistence via configuration
[ https://issues.apache.org/jira/browse/HADOOP-12049?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14601069#comment-14601069 ] Hudson commented on HADOOP-12049: - FAILURE: Integrated in Hadoop-Yarn-trunk #969 (See [https://builds.apache.org/job/Hadoop-Yarn-trunk/969/]) HADOOP-12049. Control http authentication cookie persistence via configuration. Contributed by Huizhi Lu. (benoy: rev a815cc157ceb24e02189634a85abed8e874568e0) * hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/http/TestHttpCookieFlag.java * hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/server/AuthenticationFilter.java * hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/http/TestAuthenticationSessionCookie.java Control http authentication cookie persistence via configuration Key: HADOOP-12049 URL: https://issues.apache.org/jira/browse/HADOOP-12049 Project: Hadoop Common Issue Type: Improvement Components: security Affects Versions: 3.0.0 Reporter: Benoy Antony Assignee: hzlu Labels: patch Fix For: 3.0.0, 2.8.0 Attachments: HADOOP-12049.001.patch, HADOOP-12049.003.patch, HADOOP-12049.005.patch, HADOOP-12049.007.patch During http authentication, a cookie is dropped. This is a persistent cookie. The cookie is valid across browser sessions. For clusters which require enhanced security, it is desirable to have a session cookie so that cookie gets deleted when the user closes browser session. It should be possible to specify cookie persistence (session or persistent) via configuration -- This message was sent by Atlassian JIRA (v6.3.4#6332)
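The session-vs-persistent distinction described above can be illustrated with the JDK's own java.net.HttpCookie (the actual change lives in Hadoop's AuthenticationFilter, which writes the Set-Cookie header itself; the helper here is hypothetical). A negative max age yields a session cookie that the browser discards on close; a positive max age makes it persistent:

```java
import java.net.HttpCookie;

// Illustrative sketch of cookie persistence as discussed in HADOOP-12049:
// the configuration decides whether the auth cookie is a session cookie
// (maxAge -1) or a persistent one (positive maxAge in seconds).
public class CookiePersistenceSketch {
    static HttpCookie authCookie(String token, boolean persistent, long maxAgeSecs) {
        HttpCookie cookie = new HttpCookie("hadoop.auth", token);
        cookie.setHttpOnly(true);
        // -1 => session cookie, deleted when the browser session ends
        cookie.setMaxAge(persistent ? maxAgeSecs : -1);
        return cookie;
    }

    public static void main(String[] args) {
        System.out.println(authCookie("t", false, 0).getMaxAge());    // -1 (session)
        System.out.println(authCookie("t", true, 3600).getMaxAge());  // 3600 (persistent)
    }
}
```

For the enhanced-security clusters mentioned in the issue, the session variant is the desirable default, since the credential disappears with the browser session.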
[jira] [Commented] (HADOOP-11958) MetricsSystemImpl fails to show backtrace when an error occurs
[ https://issues.apache.org/jira/browse/HADOOP-11958?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14601071#comment-14601071 ] Hudson commented on HADOOP-11958: - FAILURE: Integrated in Hadoop-Yarn-trunk #969 (See [https://builds.apache.org/job/Hadoop-Yarn-trunk/969/]) HADOOP-11958. MetricsSystemImpl fails to show backtrace when an error occurs (Jason Lowe via jeagles) (jeagles: rev 2236b577a34b069c0d1f91da99f63a199f260ac2) * hadoop-common-project/hadoop-common/CHANGES.txt * hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/metrics2/impl/MetricsSystemImpl.java MetricsSystemImpl fails to show backtrace when an error occurs -- Key: HADOOP-11958 URL: https://issues.apache.org/jira/browse/HADOOP-11958 Project: Hadoop Common Issue Type: Bug Reporter: Jason Lowe Assignee: Jason Lowe Fix For: 3.0.0, 2.8.0 Attachments: HADOOP-11958.001.patch While investigating YARN-3619 it was frustrating that MetricsSystemImpl was logging a ConcurrentModificationException but without any backtrace. Logging a backtrace would be very beneficial to tracking down the cause of the problem. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HADOOP-12120) detect and flag flakey tests
[ https://issues.apache.org/jira/browse/HADOOP-12120?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14601202#comment-14601202 ] Allen Wittenauer commented on HADOOP-12120: --- See also HADOOP-11965 . detect and flag flakey tests Key: HADOOP-12120 URL: https://issues.apache.org/jira/browse/HADOOP-12120 Project: Hadoop Common Issue Type: Sub-task Components: yetus Reporter: Sean Busbey we run patch-test very often and in HBase there are some tests that are flakey but don't appear to fail in pre-commit. we should be able to detect these and get them followed up on. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Moved] (HADOOP-12119) hadoop fs -expunge does not work for federated namespace
[ https://issues.apache.org/jira/browse/HADOOP-12119?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vinayakumar B moved HDFS-5277 to HADOOP-12119: -- Affects Version/s: (was: 2.0.5-alpha) 2.0.5-alpha Key: HADOOP-12119 (was: HDFS-5277) Project: Hadoop Common (was: Hadoop HDFS) hadoop fs -expunge does not work for federated namespace - Key: HADOOP-12119 URL: https://issues.apache.org/jira/browse/HADOOP-12119 Project: Hadoop Common Issue Type: Bug Affects Versions: 2.0.5-alpha Reporter: Vrushali C Assignee: J.Andreina Attachments: HDFS-5277.1.patch, HDFS-5277.2.patch, HDFS-5277.3.patch We noticed that hadoop fs -expunge command does not work across federated namespace. This seems to look at only /user/username/.Trash instead of traversing all available namespace and expunging from individual namespace. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HADOOP-12120) detect and flag flakey tests
[ https://issues.apache.org/jira/browse/HADOOP-12120?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14601118#comment-14601118 ] Sean Busbey commented on HADOOP-12120: -- we might be able to do this via turning on surefire's retry mechanism and then tracking when something is flagged as flakey by it. detect and flag flakey tests Key: HADOOP-12120 URL: https://issues.apache.org/jira/browse/HADOOP-12120 Project: Hadoop Common Issue Type: Sub-task Components: yetus Reporter: Sean Busbey we run patch-test very often and in HBase there are some tests that are flakey but don't appear to fail in pre-commit. we should be able to detect these and get them followed up on. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (HADOOP-12121) smarter branch detection
Allen Wittenauer created HADOOP-12121: - Summary: smarter branch detection Key: HADOOP-12121 URL: https://issues.apache.org/jira/browse/HADOOP-12121 Project: Hadoop Common Issue Type: Sub-task Affects Versions: HADOOP-12111 Reporter: Allen Wittenauer We should make branch detection smarter so that it works on micro versions. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HADOOP-12113) update test-patch branch to latest code
[ https://issues.apache.org/jira/browse/HADOOP-12113?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14601200#comment-14601200 ] Allen Wittenauer commented on HADOOP-12113: --- There is a bug here, but I also didn't name the patch in a way that is known to work. So 'both' is the correct answer. :) I think what I'd like to do is go ahead and commit this then file JIRAs against the two known major (but non-blocking) bugs and the features, redesigns, etc, that I'd like to see worked on either by me or someone else. Thoughts? update test-patch branch to latest code --- Key: HADOOP-12113 URL: https://issues.apache.org/jira/browse/HADOOP-12113 Project: Hadoop Common Issue Type: Sub-task Components: yetus Reporter: Allen Wittenauer Assignee: Allen Wittenauer Attachments: HADOOP-12113-HADOOP-12111.patch [~sekikn] and I have been working on github. We should update the codebase to reflect all of those changes. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HADOOP-12121) smarter branch detection
[ https://issues.apache.org/jira/browse/HADOOP-12121?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14601230#comment-14601230 ] Allen Wittenauer commented on HADOOP-12121: --- I've had a few thoughts rolling around in my head on how to make this work better/more predictably. The key, I think, is that when we get the patch name, we should perform a few operations on it before passing it through a parser: a) replace .patch, .diff, .txt with a . b) de-dupe any . c) strip end-of-filename . After that, we can make some reasonable assumptions about what's left over. Additionally, it makes it much easier to build a string that contains multiple periods which can then be passed through verify_valid_branch. smarter branch detection Key: HADOOP-12121 URL: https://issues.apache.org/jira/browse/HADOOP-12121 Project: Hadoop Common Issue Type: Sub-task Components: yetus Affects Versions: HADOOP-12111 Reporter: Allen Wittenauer We should make branch detection smarter so that it works on micro versions. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
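The three normalization steps (a), (b), (c) listed above could be sketched as follows. This is illustrative Java only (the real implementation would live in test-patch.sh, and the method name is invented):

```java
// Sketch of the patch-filename normalization proposed in HADOOP-12121:
// (a) replace .patch/.diff/.txt extensions with a "."
// (b) collapse any runs of "." into a single "."
// (c) strip a trailing "."
public class PatchNameSketch {
    static String normalize(String filename) {
        String s = filename
            .replaceAll("\\.(patch|diff|txt)", ".") // (a) extensions -> "."
            .replaceAll("\\.{2,}", ".");            // (b) de-dupe "."
        return s.replaceAll("\\.$", "");            // (c) strip trailing "."
    }

    public static void main(String[] args) {
        // What remains can then be split on "." for branch detection.
        System.out.println(normalize("HADOOP-12121.branch-2.7.00.patch"));
        // HADOOP-12121.branch-2.7.00
    }
}
```

After normalization, the leftover dot-separated segments are what a branch parser (e.g. something like verify_valid_branch) would inspect.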
[jira] [Commented] (HADOOP-12119) hadoop fs -expunge does not work for federated namespace
[ https://issues.apache.org/jira/browse/HADOOP-12119?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14601122#comment-14601122 ] J.Andreina commented on HADOOP-12119: - Test case failures are not related to this patch. Please review. hadoop fs -expunge does not work for federated namespace - Key: HADOOP-12119 URL: https://issues.apache.org/jira/browse/HADOOP-12119 Project: Hadoop Common Issue Type: Bug Affects Versions: 2.0.5-alpha Reporter: Vrushali C Assignee: J.Andreina Attachments: HDFS-5277.1.patch, HDFS-5277.2.patch, HDFS-5277.3.patch We noticed that hadoop fs -expunge command does not work across federated namespace. This seems to look at only /user/username/.Trash instead of traversing all available namespace and expunging from individual namespace. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HADOOP-12113) update test-patch branch to latest code
[ https://issues.apache.org/jira/browse/HADOOP-12113?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Allen Wittenauer updated HADOOP-12113: -- Target Version/s: HADOOP-12111 update test-patch branch to latest code --- Key: HADOOP-12113 URL: https://issues.apache.org/jira/browse/HADOOP-12113 Project: Hadoop Common Issue Type: Sub-task Components: yetus Reporter: Allen Wittenauer Assignee: Allen Wittenauer Attachments: HADOOP-12113-HADOOP-12111.patch [~sekikn] and I have been working on github. We should update the codebase to reflect all of those changes. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HADOOP-12119) hadoop fs -expunge does not work for federated namespace
[ https://issues.apache.org/jira/browse/HADOOP-12119?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14601009#comment-14601009 ] Vinayakumar B commented on HADOOP-12119: Moved to HADOOP since changes are only in common module. hadoop fs -expunge does not work for federated namespace - Key: HADOOP-12119 URL: https://issues.apache.org/jira/browse/HADOOP-12119 Project: Hadoop Common Issue Type: Bug Affects Versions: 2.0.5-alpha Reporter: Vrushali C Assignee: J.Andreina Attachments: HDFS-5277.1.patch, HDFS-5277.2.patch, HDFS-5277.3.patch We noticed that hadoop fs -expunge command does not work across federated namespace. This seems to look at only /user/username/.Trash instead of traversing all available namespace and expunging from individual namespace. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HADOOP-12049) Control http authentication cookie persistence via configuration
[ https://issues.apache.org/jira/browse/HADOOP-12049?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14601050#comment-14601050 ] Hudson commented on HADOOP-12049: - FAILURE: Integrated in Hadoop-Yarn-trunk-Java8 #239 (See [https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/239/]) HADOOP-12049. Control http authentication cookie persistence via configuration. Contributed by Huizhi Lu. (benoy: rev a815cc157ceb24e02189634a85abed8e874568e0) * hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/http/TestHttpCookieFlag.java * hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/server/AuthenticationFilter.java * hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/http/TestAuthenticationSessionCookie.java Control http authentication cookie persistence via configuration Key: HADOOP-12049 URL: https://issues.apache.org/jira/browse/HADOOP-12049 Project: Hadoop Common Issue Type: Improvement Components: security Affects Versions: 3.0.0 Reporter: Benoy Antony Assignee: hzlu Labels: patch Fix For: 3.0.0, 2.8.0 Attachments: HADOOP-12049.001.patch, HADOOP-12049.003.patch, HADOOP-12049.005.patch, HADOOP-12049.007.patch During http authentication, a cookie is dropped. This is a persistent cookie. The cookie is valid across browser sessions. For clusters which require enhanced security, it is desirable to have a session cookie so that cookie gets deleted when the user closes browser session. It should be possible to specify cookie persistence (session or persistent) via configuration -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HADOOP-11958) MetricsSystemImpl fails to show backtrace when an error occurs
[ https://issues.apache.org/jira/browse/HADOOP-11958?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14601052#comment-14601052 ] Hudson commented on HADOOP-11958: - FAILURE: Integrated in Hadoop-Yarn-trunk-Java8 #239 (See [https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/239/]) HADOOP-11958. MetricsSystemImpl fails to show backtrace when an error occurs (Jason Lowe via jeagles) (jeagles: rev 2236b577a34b069c0d1f91da99f63a199f260ac2) * hadoop-common-project/hadoop-common/CHANGES.txt * hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/metrics2/impl/MetricsSystemImpl.java MetricsSystemImpl fails to show backtrace when an error occurs -- Key: HADOOP-11958 URL: https://issues.apache.org/jira/browse/HADOOP-11958 Project: Hadoop Common Issue Type: Bug Reporter: Jason Lowe Assignee: Jason Lowe Fix For: 3.0.0, 2.8.0 Attachments: HADOOP-11958.001.patch While investigating YARN-3619 it was frustrating that MetricsSystemImpl was logging a ConcurrentModificationException but without any backtrace. Logging a backtrace would be very beneficial to tracking down the cause of the problem. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Comment Edited] (HADOOP-12121) smarter branch detection
[ https://issues.apache.org/jira/browse/HADOOP-12121?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14601230#comment-14601230 ] Allen Wittenauer edited comment on HADOOP-12121 at 6/25/15 1:56 PM: I've had a few thoughts rolling around in my head on how to make this work better/more predictable. The key I think is that when we get the patch name, we should perform a few operations on it before passing through a parser: a) replace .patch, .diff, .txt with a . b) de-dupe any . c) strip end-of-filename . After that, we can make some reasonable assumptions about what's left over. Additionally, it makes it much easier to build a string that contains multiple periods which can then be passed through verify_valid_branch. was (Author: aw): I've had a few thoughts rolling around in my head on how to make this work better/more predictable. The key I think is that when we get the patch name, we should a few operations on it before passing through a parser: a) replace .patch, .diff, .txt with a . b) de-dupe any . c) strip end-of-filename . After that, we can make some reasonable assumptions about what's left over. Additionally, it makes it much easier to build a string that contains multiple periods which can then be passed through verify_valid_branch. smarter branch detection Key: HADOOP-12121 URL: https://issues.apache.org/jira/browse/HADOOP-12121 Project: Hadoop Common Issue Type: Sub-task Components: yetus Affects Versions: HADOOP-12111 Reporter: Allen Wittenauer We should make branch detection smarter so that it works on micro versions. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HADOOP-12119) hadoop fs -expunge does not work for federated namespace
[ https://issues.apache.org/jira/browse/HADOOP-12119?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14601095#comment-14601095 ] Hadoop QA commented on HADOOP-12119:

| (x) *{color:red}-1 overall{color}* |

|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch | 16m 30s | Pre-patch trunk compilation is healthy. |
| {color:green}+1{color} | @author | 0m 0s | The patch does not contain any @author tags. |
| {color:green}+1{color} | tests included | 0m 0s | The patch appears to include 1 new or modified test files. |
| {color:green}+1{color} | javac | 7m 37s | There were no new javac warning messages. |
| {color:green}+1{color} | javadoc | 9m 46s | There were no new javadoc warning messages. |
| {color:green}+1{color} | release audit | 0m 23s | The applied patch does not increase the total number of release audit warnings. |
| {color:green}+1{color} | checkstyle | 1m 3s | There were no new checkstyle issues. |
| {color:green}+1{color} | whitespace | 0m 0s | The patch has no lines that end in whitespace. |
| {color:green}+1{color} | install | 1m 35s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse | 0m 34s | The patch built with eclipse:eclipse. |
| {color:green}+1{color} | findbugs | 1m 50s | The patch does not introduce any new Findbugs (version 3.0.0) warnings. |
| {color:red}-1{color} | common tests | 21m 37s | Tests failed in hadoop-common. |
| | | | 60m 58s | |

|| Reason || Tests ||
| Failed unit tests | hadoop.security.token.delegation.web.TestWebDelegationToken |

|| Subsystem || Report/Notes ||
| Patch URL | http://issues.apache.org/jira/secure/attachment/12741824/HDFS-5277.3.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / 57f1a01 |
| hadoop-common test log | https://builds.apache.org/job/PreCommit-HADOOP-Build/7041/artifact/patchprocess/testrun_hadoop-common.txt |
| Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/7041/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf900.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/7041/console |

This message was automatically generated. hadoop fs -expunge does not work for federated namespace - Key: HADOOP-12119 URL: https://issues.apache.org/jira/browse/HADOOP-12119 Project: Hadoop Common Issue Type: Bug Affects Versions: 2.0.5-alpha Reporter: Vrushali C Assignee: J.Andreina Attachments: HDFS-5277.1.patch, HDFS-5277.2.patch, HDFS-5277.3.patch We noticed that hadoop fs -expunge command does not work across federated namespace. This seems to look at only /user/username/.Trash instead of traversing all available namespace and expunging from individual namespace. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (HADOOP-12120) detect and flag flakey tests
Sean Busbey created HADOOP-12120: Summary: detect and flag flakey tests Key: HADOOP-12120 URL: https://issues.apache.org/jira/browse/HADOOP-12120 Project: Hadoop Common Issue Type: Sub-task Components: yetus Reporter: Sean Busbey We run test-patch very often, and in HBase there are some tests that are flakey but don't appear to fail in pre-commit. We should be able to detect these and get them followed up on. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HADOOP-11820) aw jira testing, ignore
[ https://issues.apache.org/jira/browse/HADOOP-11820?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Allen Wittenauer updated HADOOP-11820: -- Attachment: dp.1 aw jira testing, ignore --- Key: HADOOP-11820 URL: https://issues.apache.org/jira/browse/HADOOP-11820 Project: Hadoop Common Issue Type: Task Affects Versions: 3.0.0 Reporter: Allen Wittenauer Attachments: dp.1 -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HADOOP-12050) Enable MaxInactiveInterval for hadoop http auth token
[ https://issues.apache.org/jira/browse/HADOOP-12050?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14601931#comment-14601931 ] Hadoop QA commented on HADOOP-12050:

| (/) *{color:green}+1 overall{color}* |

|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch | 15m 8s | Pre-patch trunk compilation is healthy. |
| {color:green}+1{color} | @author | 0m 0s | The patch does not contain any @author tags. |
| {color:green}+1{color} | tests included | 0m 0s | The patch appears to include 1 new or modified test files. |
| {color:green}+1{color} | javac | 7m 30s | There were no new javac warning messages. |
| {color:green}+1{color} | javadoc | 9m 39s | There were no new javadoc warning messages. |
| {color:green}+1{color} | release audit | 0m 21s | The applied patch does not increase the total number of release audit warnings. |
| {color:green}+1{color} | checkstyle | 0m 21s | There were no new checkstyle issues. |
| {color:green}+1{color} | whitespace | 0m 0s | The patch has no lines that end in whitespace. |
| {color:green}+1{color} | install | 1m 34s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse | 0m 34s | The patch built with eclipse:eclipse. |
| {color:green}+1{color} | findbugs | 0m 42s | The patch does not introduce any new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | common tests | 5m 19s | Tests passed in hadoop-auth. |
| | | | 41m 11s | |

|| Subsystem || Report/Notes ||
| Patch URL | http://issues.apache.org/jira/secure/attachment/12741920/HADOOP-12050.002.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / aa5b15b |
| hadoop-auth test log | https://builds.apache.org/job/PreCommit-HADOOP-Build/7042/artifact/patchprocess/testrun_hadoop-auth.txt |
| Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/7042/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf906.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/7042/console |

This message was automatically generated. Enable MaxInactiveInterval for hadoop http auth token - Key: HADOOP-12050 URL: https://issues.apache.org/jira/browse/HADOOP-12050 Project: Hadoop Common Issue Type: Improvement Components: security Affects Versions: 3.0.0 Reporter: Benoy Antony Assignee: hzlu Fix For: 3.0.0 Attachments: HADOOP-12050.002.patch During http authentication, a cookie which contains the authentication token is dropped. The expiry time of the authentication token can be configured via hadoop.http.authentication.token.validity. The default value is 10 hours. For clusters which require enhanced security, it is desirable to have a configurable MaxInActiveInterval for the authentication token. If there is no activity during MaxInActiveInterval, the authentication token will be invalidated. The MaxInActiveInterval will be less than hadoop.http.authentication.token.validity. The default value will be 30 minutes. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HADOOP-11820) aw jira testing, ignore
[ https://issues.apache.org/jira/browse/HADOOP-11820?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14601930#comment-14601930 ] Hadoop QA commented on HADOOP-11820:

| (x) *{color:red}-1 overall{color}* |

|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s {color} | {color:blue} precommit patch detected. {color} |
| {color:blue}0{color} | {color:blue} patch {color} | {color:blue} 0m 0s {color} | {color:blue} The patch file was not named according to hadoop's naming conventions. Please see https://wiki.apache.org/hadoop/HowToContribute for instructions. {color} |
| {color:blue}0{color} | {color:blue} @author {color} | {color:blue} 0m 0s {color} | {color:blue} Skipping @author checks as test-patch.sh has been patched. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 0s {color} | {color:green} Patch does not generate ASF License warnings. {color} |
| {color:red}-1{color} | {color:red} shellcheck {color} | {color:red} 0m 10s {color} | {color:red} The applied patch generated 1 new shellcheck (v0.3.3) issues (total was 59, now 48). {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 2m 3s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 2m 20s {color} | {color:black} {color} |

|| Subsystem || Report/Notes ||
| Patch URL | http://issues.apache.org/jira/secure/attachment/12741940/dp.1 |
| git revision | trunk / aa5b15b |
| Optional Tests | asflicense shellcheck |
| uname | Linux asf905.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Default Java | 1.7.0_55 |
| Multi-JDK versions | /home/jenkins/tools/java/jdk1.7.0_55:1.7.0_55 |
| shellcheck | v0.3.3 (This is an old version that has serious bugs. Consider upgrading.) |
| shellcheck | https://builds.apache.org/job/PreCommit-HADOOP-Build/7043/artifact/patchprocess/diffpatchshellcheck.txt |
| Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/7043/console |

This message was automatically generated. aw jira testing, ignore --- Key: HADOOP-11820 URL: https://issues.apache.org/jira/browse/HADOOP-11820 Project: Hadoop Common Issue Type: Task Affects Versions: 3.0.0 Reporter: Allen Wittenauer Attachments: dp.1 -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HADOOP-11820) aw jira testing, ignore
[ https://issues.apache.org/jira/browse/HADOOP-11820?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14601951#comment-14601951 ] Hadoop QA commented on HADOOP-11820: (!) A patch to the files used for the QA process has been detected. Re-executing against the patched versions to perform further tests. The console is at https://builds.apache.org/job/PreCommit-HADOOP-Build/7044/console in case of problems. aw jira testing, ignore --- Key: HADOOP-11820 URL: https://issues.apache.org/jira/browse/HADOOP-11820 Project: Hadoop Common Issue Type: Task Affects Versions: 3.0.0 Reporter: Allen Wittenauer Attachments: dp.1 -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HADOOP-11820) aw jira testing, ignore
[ https://issues.apache.org/jira/browse/HADOOP-11820?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14601954#comment-14601954 ] Hadoop QA commented on HADOOP-11820:

| (x) *{color:red}-1 overall{color}* |

|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s {color} | {color:blue} precommit patch detected. {color} |
| {color:blue}0{color} | {color:blue} patch {color} | {color:blue} 0m 1s {color} | {color:blue} The patch file was not named according to hadoop's naming conventions. Please see https://wiki.apache.org/hadoop/HowToContribute for instructions. {color} |
| {color:blue}0{color} | {color:blue} @author {color} | {color:blue} 0m 0s {color} | {color:blue} Skipping @author checks as test-patch.sh has been patched. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 0s {color} | {color:green} Patch does not generate ASF License warnings. {color} |
| {color:red}-1{color} | {color:red} shellcheck {color} | {color:red} 0m 11s {color} | {color:red} The applied patch generated 1 new shellcheck (v0.3.3) issues (total was 59, now 48). {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 2m 8s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 2m 25s {color} | {color:black} {color} |

|| Subsystem || Report/Notes ||
| Patch URL | http://issues.apache.org/jira/secure/attachment/12741940/dp.1 |
| git revision | trunk / aa5b15b |
| Optional Tests | asflicense shellcheck |
| uname | Linux asf904.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Default Java | 1.7.0_55 |
| Multi-JDK versions | /home/jenkins/tools/java/jdk1.7.0_55:1.7.0_55 |
| shellcheck | v0.3.3 (This is an old version that has serious bugs. Consider upgrading.) |
| shellcheck | https://builds.apache.org/job/PreCommit-HADOOP-Build/7044/artifact/patchprocess/diffpatchshellcheck.txt |
| Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/7044/console |

This message was automatically generated. aw jira testing, ignore --- Key: HADOOP-11820 URL: https://issues.apache.org/jira/browse/HADOOP-11820 Project: Hadoop Common Issue Type: Task Affects Versions: 3.0.0 Reporter: Allen Wittenauer Attachments: dp.1 -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HADOOP-12049) Control http authentication cookie persistence via configuration
[ https://issues.apache.org/jira/browse/HADOOP-12049?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14601408#comment-14601408 ] Hudson commented on HADOOP-12049: - FAILURE: Integrated in Hadoop-Hdfs-trunk-Java8 #228 (See [https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/228/]) HADOOP-12049. Control http authentication cookie persistence via configuration. Contributed by Huizhi Lu. (benoy: rev a815cc157ceb24e02189634a85abed8e874568e0) * hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/http/TestHttpCookieFlag.java * hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/http/TestAuthenticationSessionCookie.java * hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/server/AuthenticationFilter.java Control http authentication cookie persistence via configuration Key: HADOOP-12049 URL: https://issues.apache.org/jira/browse/HADOOP-12049 Project: Hadoop Common Issue Type: Improvement Components: security Affects Versions: 3.0.0 Reporter: Benoy Antony Assignee: hzlu Labels: patch Fix For: 3.0.0, 2.8.0 Attachments: HADOOP-12049.001.patch, HADOOP-12049.003.patch, HADOOP-12049.005.patch, HADOOP-12049.007.patch During http authentication, a cookie is dropped. This is a persistent cookie. The cookie is valid across browser sessions. For clusters which require enhanced security, it is desirable to have a session cookie so that cookie gets deleted when the user closes browser session. It should be possible to specify cookie persistence (session or persistent) via configuration -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HADOOP-11958) MetricsSystemImpl fails to show backtrace when an error occurs
[ https://issues.apache.org/jira/browse/HADOOP-11958?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14601410#comment-14601410 ] Hudson commented on HADOOP-11958: - FAILURE: Integrated in Hadoop-Hdfs-trunk-Java8 #228 (See [https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/228/]) HADOOP-11958. MetricsSystemImpl fails to show backtrace when an error occurs (Jason Lowe via jeagles) (jeagles: rev 2236b577a34b069c0d1f91da99f63a199f260ac2) * hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/metrics2/impl/MetricsSystemImpl.java * hadoop-common-project/hadoop-common/CHANGES.txt MetricsSystemImpl fails to show backtrace when an error occurs -- Key: HADOOP-11958 URL: https://issues.apache.org/jira/browse/HADOOP-11958 Project: Hadoop Common Issue Type: Bug Reporter: Jason Lowe Assignee: Jason Lowe Fix For: 3.0.0, 2.8.0 Attachments: HADOOP-11958.001.patch While investigating YARN-3619 it was frustrating that MetricsSystemImpl was logging a ConcurrentModificationException but without any backtrace. Logging a backtrace would be very beneficial to tracking down the cause of the problem. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HADOOP-12036) Consolidate all of the cmake extensions in one directory
[ https://issues.apache.org/jira/browse/HADOOP-12036?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Alan Burlison updated HADOOP-12036: --- Attachment: HADOOP-12036.005.patch Consolidate all of the cmake extensions in one directory Key: HADOOP-12036 URL: https://issues.apache.org/jira/browse/HADOOP-12036 Project: Hadoop Common Issue Type: Sub-task Reporter: Allen Wittenauer Assignee: Alan Burlison Attachments: HADOOP-12036.001.patch, HADOOP-12036.002.patch, HADOOP-12036.004.patch, HADOOP-12036.005.patch Rather than have a half-dozen redefinitions, custom extensions, etc, we should move them all to one location so that the cmake environment is consistent between the various native components. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HADOOP-11958) MetricsSystemImpl fails to show backtrace when an error occurs
[ https://issues.apache.org/jira/browse/HADOOP-11958?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14601370#comment-14601370 ] Hudson commented on HADOOP-11958: - FAILURE: Integrated in Hadoop-Hdfs-trunk #2167 (See [https://builds.apache.org/job/Hadoop-Hdfs-trunk/2167/]) HADOOP-11958. MetricsSystemImpl fails to show backtrace when an error occurs (Jason Lowe via jeagles) (jeagles: rev 2236b577a34b069c0d1f91da99f63a199f260ac2) * hadoop-common-project/hadoop-common/CHANGES.txt * hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/metrics2/impl/MetricsSystemImpl.java MetricsSystemImpl fails to show backtrace when an error occurs -- Key: HADOOP-11958 URL: https://issues.apache.org/jira/browse/HADOOP-11958 Project: Hadoop Common Issue Type: Bug Reporter: Jason Lowe Assignee: Jason Lowe Fix For: 3.0.0, 2.8.0 Attachments: HADOOP-11958.001.patch While investigating YARN-3619 it was frustrating that MetricsSystemImpl was logging a ConcurrentModificationException but without any backtrace. Logging a backtrace would be very beneficial to tracking down the cause of the problem. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HADOOP-12049) Control http authentication cookie persistence via configuration
[ https://issues.apache.org/jira/browse/HADOOP-12049?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14601368#comment-14601368 ] Hudson commented on HADOOP-12049: - FAILURE: Integrated in Hadoop-Hdfs-trunk #2167 (See [https://builds.apache.org/job/Hadoop-Hdfs-trunk/2167/]) HADOOP-12049. Control http authentication cookie persistence via configuration. Contributed by Huizhi Lu. (benoy: rev a815cc157ceb24e02189634a85abed8e874568e0) * hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/http/TestHttpCookieFlag.java * hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/server/AuthenticationFilter.java * hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/http/TestAuthenticationSessionCookie.java Control http authentication cookie persistence via configuration Key: HADOOP-12049 URL: https://issues.apache.org/jira/browse/HADOOP-12049 Project: Hadoop Common Issue Type: Improvement Components: security Affects Versions: 3.0.0 Reporter: Benoy Antony Assignee: hzlu Labels: patch Fix For: 3.0.0, 2.8.0 Attachments: HADOOP-12049.001.patch, HADOOP-12049.003.patch, HADOOP-12049.005.patch, HADOOP-12049.007.patch During http authentication, a cookie is dropped. This is a persistent cookie. The cookie is valid across browser sessions. For clusters which require enhanced security, it is desirable to have a session cookie so that cookie gets deleted when the user closes browser session. It should be possible to specify cookie persistence (session or persistent) via configuration -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HADOOP-12049) Control http authentication cookie persistence via configuration
[ https://issues.apache.org/jira/browse/HADOOP-12049?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14601444#comment-14601444 ] Hudson commented on HADOOP-12049: - FAILURE: Integrated in Hadoop-Mapreduce-trunk-Java8 #237 (See [https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/237/]) HADOOP-12049. Control http authentication cookie persistence via configuration. Contributed by Huizhi Lu. (benoy: rev a815cc157ceb24e02189634a85abed8e874568e0) * hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/http/TestAuthenticationSessionCookie.java * hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/server/AuthenticationFilter.java * hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/http/TestHttpCookieFlag.java Control http authentication cookie persistence via configuration Key: HADOOP-12049 URL: https://issues.apache.org/jira/browse/HADOOP-12049 Project: Hadoop Common Issue Type: Improvement Components: security Affects Versions: 3.0.0 Reporter: Benoy Antony Assignee: hzlu Labels: patch Fix For: 3.0.0, 2.8.0 Attachments: HADOOP-12049.001.patch, HADOOP-12049.003.patch, HADOOP-12049.005.patch, HADOOP-12049.007.patch During http authentication, a cookie is dropped. This is a persistent cookie. The cookie is valid across browser sessions. For clusters which require enhanced security, it is desirable to have a session cookie so that cookie gets deleted when the user closes browser session. It should be possible to specify cookie persistence (session or persistent) via configuration -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HADOOP-11958) MetricsSystemImpl fails to show backtrace when an error occurs
[ https://issues.apache.org/jira/browse/HADOOP-11958?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14601446#comment-14601446 ] Hudson commented on HADOOP-11958: - FAILURE: Integrated in Hadoop-Mapreduce-trunk-Java8 #237 (See [https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/237/]) HADOOP-11958. MetricsSystemImpl fails to show backtrace when an error occurs (Jason Lowe via jeagles) (jeagles: rev 2236b577a34b069c0d1f91da99f63a199f260ac2) * hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/metrics2/impl/MetricsSystemImpl.java * hadoop-common-project/hadoop-common/CHANGES.txt MetricsSystemImpl fails to show backtrace when an error occurs -- Key: HADOOP-11958 URL: https://issues.apache.org/jira/browse/HADOOP-11958 Project: Hadoop Common Issue Type: Bug Reporter: Jason Lowe Assignee: Jason Lowe Fix For: 3.0.0, 2.8.0 Attachments: HADOOP-11958.001.patch While investigating YARN-3619 it was frustrating that MetricsSystemImpl was logging a ConcurrentModificationException but without any backtrace. Logging a backtrace would be very beneficial to tracking down the cause of the problem. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HADOOP-11958) MetricsSystemImpl fails to show backtrace when an error occurs
[ https://issues.apache.org/jira/browse/HADOOP-11958?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14601464#comment-14601464 ] Hudson commented on HADOOP-11958: - FAILURE: Integrated in Hadoop-Mapreduce-trunk #2185 (See [https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2185/]) HADOOP-11958. MetricsSystemImpl fails to show backtrace when an error occurs (Jason Lowe via jeagles) (jeagles: rev 2236b577a34b069c0d1f91da99f63a199f260ac2) * hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/metrics2/impl/MetricsSystemImpl.java * hadoop-common-project/hadoop-common/CHANGES.txt MetricsSystemImpl fails to show backtrace when an error occurs -- Key: HADOOP-11958 URL: https://issues.apache.org/jira/browse/HADOOP-11958 Project: Hadoop Common Issue Type: Bug Reporter: Jason Lowe Assignee: Jason Lowe Fix For: 3.0.0, 2.8.0 Attachments: HADOOP-11958.001.patch While investigating YARN-3619 it was frustrating that MetricsSystemImpl was logging a ConcurrentModificationException but without any backtrace. Logging a backtrace would be very beneficial to tracking down the cause of the problem. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HADOOP-12049) Control http authentication cookie persistence via configuration
[ https://issues.apache.org/jira/browse/HADOOP-12049?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14601462#comment-14601462 ] Hudson commented on HADOOP-12049: - FAILURE: Integrated in Hadoop-Mapreduce-trunk #2185 (See [https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2185/]) HADOOP-12049. Control http authentication cookie persistence via configuration. Contributed by Huizhi Lu. (benoy: rev a815cc157ceb24e02189634a85abed8e874568e0) * hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/server/AuthenticationFilter.java * hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/http/TestAuthenticationSessionCookie.java * hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/http/TestHttpCookieFlag.java Control http authentication cookie persistence via configuration Key: HADOOP-12049 URL: https://issues.apache.org/jira/browse/HADOOP-12049 Project: Hadoop Common Issue Type: Improvement Components: security Affects Versions: 3.0.0 Reporter: Benoy Antony Assignee: hzlu Labels: patch Fix For: 3.0.0, 2.8.0 Attachments: HADOOP-12049.001.patch, HADOOP-12049.003.patch, HADOOP-12049.005.patch, HADOOP-12049.007.patch During http authentication, a cookie is dropped. This is a persistent cookie. The cookie is valid across browser sessions. For clusters which require enhanced security, it is desirable to have a session cookie so that cookie gets deleted when the user closes browser session. It should be possible to specify cookie persistence (session or persistent) via configuration -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HADOOP-12050) Enable MaxInactiveInterval for hadoop http auth token
[ https://issues.apache.org/jira/browse/HADOOP-12050?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] hzlu updated HADOOP-12050: -- Attachment: HADOOP-12050.002.patch Enable MaxInactiveInterval for hadoop http auth token - Key: HADOOP-12050 URL: https://issues.apache.org/jira/browse/HADOOP-12050 Project: Hadoop Common Issue Type: Improvement Components: security Affects Versions: 3.0.0 Reporter: Benoy Antony Assignee: hzlu Fix For: 3.0.0 Attachments: HADOOP-12050.002.patch During http authentication, a cookie which contains the authentication token is dropped. The expiry time of the authentication token can be configured via hadoop.http.authentication.token.validity. The default value is 10 hours. For clusters which require enhanced security, it is desirable to have a configurable MaxInActiveInterval for the authentication token. If there is no activity during MaxInActiveInterval, the authentication token will be invalidated. The MaxInActiveInterval will be less than hadoop.http.authentication.token.validity. The default value will be 30 minutes. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
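The interaction between the absolute validity window and the proposed inactivity window can be sketched as follows (defaults taken from the description above; the code is an illustration of the check, not the actual patch):

```java
// A token stays valid only while BOTH limits hold: the absolute expiry
// (hadoop.http.authentication.token.validity, default 10 hours) and the
// proposed MaxInactiveInterval (default 30 minutes) since the last activity.
public class TokenValidity {
    static final long VALIDITY_MS = 10L * 60L * 60L * 1000L;  // 10 hours
    static final long MAX_INACTIVE_MS = 30L * 60L * 1000L;    // 30 minutes

    static boolean isValid(long issuedAtMs, long lastActivityMs, long nowMs) {
        boolean expired  = nowMs - issuedAtMs     > VALIDITY_MS;
        boolean inactive = nowMs - lastActivityMs > MAX_INACTIVE_MS;
        return !expired && !inactive;
    }
}
```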
[jira] [Updated] (HADOOP-12050) Enable MaxInactiveInterval for hadoop http auth token
[ https://issues.apache.org/jira/browse/HADOOP-12050?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] hzlu updated HADOOP-12050: -- Attachment: (was: HADOOP-12050.001.patch) Enable MaxInactiveInterval for hadoop http auth token - Key: HADOOP-12050 URL: https://issues.apache.org/jira/browse/HADOOP-12050 Project: Hadoop Common Issue Type: Improvement Components: security Affects Versions: 3.0.0 Reporter: Benoy Antony Assignee: hzlu Fix For: 3.0.0 During http authentication, a cookie which contains the authentication token is dropped. The expiry time of the authentication token can be configured via hadoop.http.authentication.token.validity. The default value is 10 hours. For clusters which require enhanced security, it is desirable to have a configurable MaxInActiveInterval for the authentication token. If there is no activity during MaxInActiveInterval, the authentication token will be invalidated. The MaxInActiveInterval will be less than hadoop.http.authentication.token.validity. The default value will be 30 minutes. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HADOOP-12050) Enable MaxInactiveInterval for hadoop http auth token
[ https://issues.apache.org/jira/browse/HADOOP-12050?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] hzlu updated HADOOP-12050: -- Status: Open (was: Patch Available) Enable MaxInactiveInterval for hadoop http auth token - Key: HADOOP-12050 URL: https://issues.apache.org/jira/browse/HADOOP-12050 Project: Hadoop Common Issue Type: Improvement Components: security Affects Versions: 3.0.0 Reporter: Benoy Antony Assignee: hzlu Fix For: 3.0.0 During http authentication, a cookie which contains the authentication token is dropped. The expiry time of the authentication token can be configured via hadoop.http.authentication.token.validity. The default value is 10 hours. For clusters which require enhanced security, it is desirable to have a configurable MaxInActiveInterval for the authentication token. If there is no activity during MaxInActiveInterval, the authentication token will be invalidated. The MaxInActiveInterval will be less than hadoop.http.authentication.token.validity. The default value will be 30 minutes. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HADOOP-12050) Enable MaxInactiveInterval for hadoop http auth token
[ https://issues.apache.org/jira/browse/HADOOP-12050?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] hzlu updated HADOOP-12050: -- Status: Patch Available (was: Open) Enable MaxInactiveInterval for hadoop http auth token - Key: HADOOP-12050 URL: https://issues.apache.org/jira/browse/HADOOP-12050 Project: Hadoop Common Issue Type: Improvement Components: security Affects Versions: 3.0.0 Reporter: Benoy Antony Assignee: hzlu Fix For: 3.0.0 Attachments: HADOOP-12050.002.patch During http authentication, a cookie which contains the authentication token is dropped. The expiry time of the authentication token can be configured via hadoop.http.authentication.token.validity. The default value is 10 hours. For clusters which require enhanced security, it is desirable to have a configurable MaxInActiveInterval for the authentication token. If there is no activity during MaxInActiveInterval, the authentication token will be invalidated. The MaxInActiveInterval will be less than hadoop.http.authentication.token.validity. The default value will be 30 minutes. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HADOOP-12121) smarter branch detection
[ https://issues.apache.org/jira/browse/HADOOP-12121?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14602092#comment-14602092 ] Anu Engineer commented on HADOOP-12121: --- Since you are planning to work on this, I wanted to flag something that recently tripped me. Currently branch names are case-sensitive. I was building on branch HDFS-7240, but I used lowercase letters (hdfs-7240) and got a build failure, since the patch got applied to trunk and built there. I was wondering if it makes sense to do a case-insensitive branch name compare, or just document it somewhere? Here is the build in question: https://issues.apache.org/jira/browse/HDFS-8448?focusedCommentId=14592800&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14592800 smarter branch detection Key: HADOOP-12121 URL: https://issues.apache.org/jira/browse/HADOOP-12121 Project: Hadoop Common Issue Type: Sub-task Components: yetus Affects Versions: HADOOP-12111 Reporter: Allen Wittenauer We should make branch detection smarter so that it works on micro versions. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
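The suggested fix amounts to a case-insensitive comparison when matching the JIRA-derived name against git branches. A minimal sketch (method and names are illustrative, not the actual test-patch code):

```java
// Sketch: compare the branch name from the patch/issue to a git branch
// case-insensitively, so "hdfs-7240" matches branch "HDFS-7240" instead of
// silently falling back to trunk.
public class BranchMatch {
    static boolean matchesBranch(String patchBranch, String gitBranch) {
        return patchBranch != null && patchBranch.equalsIgnoreCase(gitBranch);
    }
}
```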
[jira] [Commented] (HADOOP-12117) Potential NPE from Configuration#loadProperty with allowNullValueProperties set.
[ https://issues.apache.org/jira/browse/HADOOP-12117?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14602134#comment-14602134 ] Ravi Prakash commented on HADOOP-12117: --- Thanks for the contribution Zhihai! setAllowNullValueProperties seems to have been added to support unit test cases downstream. Is there a unit test case you can point me to which is causing this? I'd be wary of opening up the access level of a Configuration method (even if it is labeled @VisibleForTesting) . Are you sure we can't unit test without changing the access level? Also please log the exception in testLoadProperty(). Potential NPE from Configuration#loadProperty with allowNullValueProperties set. Key: HADOOP-12117 URL: https://issues.apache.org/jira/browse/HADOOP-12117 Project: Hadoop Common Issue Type: Bug Components: conf Affects Versions: 2.7.1 Reporter: zhihai xu Assignee: zhihai xu Attachments: HADOOP-12117.000.patch Potential NPE from Configuration#loadProperty with allowNullValueProperties set. The following code will cause NPE: {code} } else if (!value.equals(properties.getProperty(attr))) { {code} Because if {{allowNullValueProperties}} is true, {{value}} is null and {{finalParameters}} contains {{attr}}, NullPointerException will happen -- This message was sent by Atlassian JIRA (v6.3.4#6332)
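A null-safe version of the quoted comparison, e.g. via Objects.equals, avoids the NPE when allowNullValueProperties makes value null (a sketch of the idea, not necessarily the shape of the committed patch):

```java
import java.util.Objects;
import java.util.Properties;

// Sketch: Objects.equals tolerates value == null, whereas
// value.equals(properties.getProperty(attr)) throws NullPointerException
// when value is null.
public class NullSafeCompare {
    static boolean valueDiffers(String value, Properties properties, String attr) {
        return !Objects.equals(value, properties.getProperty(attr));
    }
}
```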
[jira] [Updated] (HADOOP-12053) Harfs defaulturiport should be Zero ( should not -1)
[ https://issues.apache.org/jira/browse/HADOOP-12053?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Gera Shegalov updated HADOOP-12053: --- Attachment: HADOOP-12053.003.patch Fixed checkstyle warning. [~cnauroth], do you mind taking a look? Harfs defaulturiport should be Zero ( should not -1) Key: HADOOP-12053 URL: https://issues.apache.org/jira/browse/HADOOP-12053 Project: Hadoop Common Issue Type: Bug Affects Versions: 2.7.0 Reporter: Brahma Reddy Battula Assignee: Gera Shegalov Attachments: HADOOP-12053.001.patch, HADOOP-12053.002.patch, HADOOP-12053.003.patch The harfs overrides the getUriDefaultPort method of AbstractFilesystem, and returns -1. But -1 can't pass the checkPath method when {{fs.defaultfs}} is set without a port (like hdfs://hacluster). *Test Code:* {code} for (FileStatus file : files) { String[] edges = file.getPath().getName().split("-"); if (applicationId.toString().compareTo(edges[0]) >= 0 && applicationId.toString().compareTo(edges[1]) <= 0) { Path harPath = new Path("har://" + file.getPath().toUri().getPath()); harPath = harPath.getFileSystem(conf).makeQualified(harPath); remoteAppDir = LogAggregationUtils.getRemoteAppLogDir( harPath, applicationId, appOwner, LogAggregationUtils.getRemoteNodeLogDirSuffix(conf)); if (FileContext.getFileContext(remoteAppDir.toUri()).util().exists(remoteAppDir)) { remoteDirSet.add(remoteAppDir); } {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HADOOP-11820) aw jira testing, ignore
[ https://issues.apache.org/jira/browse/HADOOP-11820?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Allen Wittenauer updated HADOOP-11820: -- Attachment: dp.1 aw jira testing, ignore --- Key: HADOOP-11820 URL: https://issues.apache.org/jira/browse/HADOOP-11820 Project: Hadoop Common Issue Type: Task Affects Versions: 3.0.0 Reporter: Allen Wittenauer Attachments: dp.1 -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HADOOP-11820) aw jira testing, ignore
[ https://issues.apache.org/jira/browse/HADOOP-11820?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Allen Wittenauer updated HADOOP-11820: -- Attachment: (was: dp.1) aw jira testing, ignore --- Key: HADOOP-11820 URL: https://issues.apache.org/jira/browse/HADOOP-11820 Project: Hadoop Common Issue Type: Task Affects Versions: 3.0.0 Reporter: Allen Wittenauer Attachments: dp.1 -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HADOOP-12053) Harfs defaulturiport should be Zero ( should not -1)
[ https://issues.apache.org/jira/browse/HADOOP-12053?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14602274#comment-14602274 ] Gera Shegalov commented on HADOOP-12053: Thanks for the comment, Brahma. Can you elaborate on the incompatibility? Harfs defaulturiport should be Zero ( should not -1) Key: HADOOP-12053 URL: https://issues.apache.org/jira/browse/HADOOP-12053 Project: Hadoop Common Issue Type: Bug Affects Versions: 2.7.0 Reporter: Brahma Reddy Battula Assignee: Gera Shegalov Attachments: HADOOP-12053.001.patch, HADOOP-12053.002.patch, HADOOP-12053.003.patch The harfs overrides the getUriDefaultPort method of AbstractFilesystem, and returns -1. But -1 can't pass the checkPath method when {{fs.defaultfs}} is set without a port (like hdfs://hacluster). *Test Code:* {code} for (FileStatus file : files) { String[] edges = file.getPath().getName().split("-"); if (applicationId.toString().compareTo(edges[0]) >= 0 && applicationId.toString().compareTo(edges[1]) <= 0) { Path harPath = new Path("har://" + file.getPath().toUri().getPath()); harPath = harPath.getFileSystem(conf).makeQualified(harPath); remoteAppDir = LogAggregationUtils.getRemoteAppLogDir( harPath, applicationId, appOwner, LogAggregationUtils.getRemoteNodeLogDirSuffix(conf)); if (FileContext.getFileContext(remoteAppDir.toUri()).util().exists(remoteAppDir)) { remoteDirSet.add(remoteAppDir); } {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HADOOP-11820) aw jira testing, ignore
[ https://issues.apache.org/jira/browse/HADOOP-11820?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Allen Wittenauer updated HADOOP-11820: -- Attachment: (was: dp.1) aw jira testing, ignore --- Key: HADOOP-11820 URL: https://issues.apache.org/jira/browse/HADOOP-11820 Project: Hadoop Common Issue Type: Task Affects Versions: 3.0.0 Reporter: Allen Wittenauer -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Issue Comment Deleted] (HADOOP-11820) aw jira testing, ignore
[ https://issues.apache.org/jira/browse/HADOOP-11820?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Allen Wittenauer updated HADOOP-11820: -- Comment: was deleted (was: \\ \\ | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s {color} | {color:blue} precommit patch detected. {color} | | {color:blue}0{color} | {color:blue} patch {color} | {color:blue} 0m 0s {color} | {color:blue} The patch file was not named according to hadoop's naming conventions. Please see https://wiki.apache.org/hadoop/HowToContribute for instructions. {color} | | {color:blue}0{color} | {color:blue} @author {color} | {color:blue} 0m 0s {color} | {color:blue} Skipping @author checks as test-patch.sh has been patched. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 0s {color} | {color:green} Patch does not generate ASF License warnings. {color} | | {color:red}-1{color} | {color:red} shellcheck {color} | {color:red} 0m 10s {color} | {color:red} The applied patch generated 1 new shellcheck (v0.3.3) issues (total was 59, now 48). {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 2m 3s {color} | {color:green} Patch has no whitespace issues. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 2m 20s {color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Patch URL | http://issues.apache.org/jira/secure/attachment/12741940/dp.1 | | git revision | trunk / aa5b15b | | Optional Tests | asflicense shellcheck | | uname | Linux asf905.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Default Java | 1.7.0_55 | | Multi-JDK versions | /home/jenkins/tools/java/jdk1.7.0_55:1.7.0_55 | | shellcheck | v0.3.3 (This is an old version that has serious bugs. Consider upgrading.) 
| | shellcheck | https://builds.apache.org/job/PreCommit-HADOOP-Build/7043/artifact/patchprocess/diffpatchshellcheck.txt | | Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/7043/console | This message was automatically generated.) aw jira testing, ignore --- Key: HADOOP-11820 URL: https://issues.apache.org/jira/browse/HADOOP-11820 Project: Hadoop Common Issue Type: Task Affects Versions: 3.0.0 Reporter: Allen Wittenauer -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Issue Comment Deleted] (HADOOP-11820) aw jira testing, ignore
[ https://issues.apache.org/jira/browse/HADOOP-11820?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Allen Wittenauer updated HADOOP-11820: -- Comment: was deleted (was: \\ \\ | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s {color} | {color:blue} precommit patch detected. {color} | | {color:blue}0{color} | {color:blue} patch {color} | {color:blue} 0m 1s {color} | {color:blue} The patch file was not named according to hadoop's naming conventions. Please see https://wiki.apache.org/hadoop/HowToContribute for instructions. {color} | | {color:blue}0{color} | {color:blue} @author {color} | {color:blue} 0m 0s {color} | {color:blue} Skipping @author checks as test-patch.sh has been patched. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 0s {color} | {color:green} Patch does not generate ASF License warnings. {color} | | {color:red}-1{color} | {color:red} shellcheck {color} | {color:red} 0m 11s {color} | {color:red} The applied patch generated 1 new shellcheck (v0.3.3) issues (total was 59, now 48). {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 2m 8s {color} | {color:green} Patch has no whitespace issues. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 2m 25s {color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Patch URL | http://issues.apache.org/jira/secure/attachment/12741940/dp.1 | | git revision | trunk / aa5b15b | | Optional Tests | asflicense shellcheck | | uname | Linux asf904.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Default Java | 1.7.0_55 | | Multi-JDK versions | /home/jenkins/tools/java/jdk1.7.0_55:1.7.0_55 | | shellcheck | v0.3.3 (This is an old version that has serious bugs. Consider upgrading.) 
| | shellcheck | https://builds.apache.org/job/PreCommit-HADOOP-Build/7044/artifact/patchprocess/diffpatchshellcheck.txt | | Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/7044/console | This message was automatically generated.) aw jira testing, ignore --- Key: HADOOP-11820 URL: https://issues.apache.org/jira/browse/HADOOP-11820 Project: Hadoop Common Issue Type: Task Affects Versions: 3.0.0 Reporter: Allen Wittenauer -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Issue Comment Deleted] (HADOOP-11820) aw jira testing, ignore
[ https://issues.apache.org/jira/browse/HADOOP-11820?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Allen Wittenauer updated HADOOP-11820: -- Comment: was deleted (was: (!) A patch to the files used for the QA process has been detected. Re-executing against the patched versions to perform further tests. The console is at https://builds.apache.org/job/PreCommit-HADOOP-Build/7043/console in case of problems.) aw jira testing, ignore --- Key: HADOOP-11820 URL: https://issues.apache.org/jira/browse/HADOOP-11820 Project: Hadoop Common Issue Type: Task Affects Versions: 3.0.0 Reporter: Allen Wittenauer -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Issue Comment Deleted] (HADOOP-11820) aw jira testing, ignore
[ https://issues.apache.org/jira/browse/HADOOP-11820?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Allen Wittenauer updated HADOOP-11820: -- Comment: was deleted (was: (!) A patch to the files used for the QA process has been detected. Re-executing against the patched versions to perform further tests. The console is at https://builds.apache.org/job/PreCommit-HADOOP-Build/7044/console in case of problems.) aw jira testing, ignore --- Key: HADOOP-11820 URL: https://issues.apache.org/jira/browse/HADOOP-11820 Project: Hadoop Common Issue Type: Task Affects Versions: 3.0.0 Reporter: Allen Wittenauer -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HADOOP-11203) Allow distcp to accept bandwidth in fractional MegaBytes
[ https://issues.apache.org/jira/browse/HADOOP-11203?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Amareshwari Sriramadasu updated HADOOP-11203: - Resolution: Fixed Hadoop Flags: Reviewed Fix Version/s: (was: 2.7.1) 3.0.0 Status: Resolved (was: Patch Available) Committed. Thanks [~raju.bairishetti] Allow distcp to accept bandwidth in fractional MegaBytes -- Key: HADOOP-11203 URL: https://issues.apache.org/jira/browse/HADOOP-11203 Project: Hadoop Common Issue Type: Improvement Components: tools/distcp Reporter: Raju Bairishetti Assignee: Raju Bairishetti Fix For: 3.0.0 Attachments: HADOOP-11203.001.patch, HADOOP-11203.patch DistCp uses ThrottleInputStream, which provides bandwidth throttling on a specified stream. Currently, DistCp allows the max bandwidth value in megabytes, which does not accept fractional values. It would be better if it accepted the max bandwidth in fractional megabytes. Due to this we are not able to throttle the bandwidth in KBs in our prod setup. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HADOOP-12117) Potential NPE from Configuration#loadProperty with allowNullValueProperties set.
[ https://issues.apache.org/jira/browse/HADOOP-12117?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] zhihai xu updated HADOOP-12117: --- Attachment: HADOOP-12117.001.patch Potential NPE from Configuration#loadProperty with allowNullValueProperties set. Key: HADOOP-12117 URL: https://issues.apache.org/jira/browse/HADOOP-12117 Project: Hadoop Common Issue Type: Bug Components: conf Affects Versions: 2.7.1 Reporter: zhihai xu Assignee: zhihai xu Attachments: HADOOP-12117.000.patch, HADOOP-12117.001.patch Potential NPE from Configuration#loadProperty with allowNullValueProperties set. The following code will cause NPE: {code} } else if (!value.equals(properties.getProperty(attr))) { {code} Because if {{allowNullValueProperties}} is true, {{value}} is null and {{finalParameters}} contains {{attr}}, NullPointerException will happen -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HADOOP-12050) Enable MaxInactiveInterval for hadoop http auth token
[ https://issues.apache.org/jira/browse/HADOOP-12050?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14602193#comment-14602193 ] hzlu commented on HADOOP-12050: --- No problem. Will do. Enable MaxInactiveInterval for hadoop http auth token - Key: HADOOP-12050 URL: https://issues.apache.org/jira/browse/HADOOP-12050 Project: Hadoop Common Issue Type: Improvement Components: security Affects Versions: 3.0.0 Reporter: Benoy Antony Assignee: hzlu Fix For: 3.0.0 Attachments: HADOOP-12050.002.patch During http authentication, a cookie which contains the authentication token is dropped. The expiry time of the authentication token can be configured via hadoop.http.authentication.token.validity. The default value is 10 hours. For clusters which require enhanced security, it is desirable to have a configurable MaxInActiveInterval for the authentication token. If there is no activity during MaxInActiveInterval, the authentication token will be invalidated. The MaxInActiveInterval will be less than hadoop.http.authentication.token.validity. The default value will be 30 minutes. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HADOOP-12122) Hadoop should avoid unsafe split and append on fields that might be IPv6 literals
[ https://issues.apache.org/jira/browse/HADOOP-12122?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Nate Edel updated HADOOP-12122: --- Status: Patch Available (was: Open) Hadoop should avoid unsafe split and append on fields that might be IPv6 literals - Key: HADOOP-12122 URL: https://issues.apache.org/jira/browse/HADOOP-12122 Project: Hadoop Common Issue Type: Bug Reporter: Nate Edel Assignee: Nate Edel Attachments: lets_blow_up_a_lot_of_tests.patch -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HADOOP-12122) Hadoop should avoid unsafe split and append on fields that might be IPv6 literals
[ https://issues.apache.org/jira/browse/HADOOP-12122?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Nate Edel updated HADOOP-12122: --- Attachment: lets_blow_up_a_lot_of_tests.patch Hadoop should avoid unsafe split and append on fields that might be IPv6 literals - Key: HADOOP-12122 URL: https://issues.apache.org/jira/browse/HADOOP-12122 Project: Hadoop Common Issue Type: Bug Reporter: Nate Edel Assignee: Nate Edel Attachments: lets_blow_up_a_lot_of_tests.patch -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HADOOP-12122) Hadoop should avoid unsafe split and append on fields that might be IPv6 literals
[ https://issues.apache.org/jira/browse/HADOOP-12122?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14602250#comment-14602250 ] Nate Edel commented on HADOOP-12122: There are a LOT of places where we split (or use indexOf to split) IP:Port or Host:port pairs, which should use a smarter method like the Guava HostAndPort. Not yet ready for review, but want to see if the current version of the changelist passes tests. This requires HDFS-8078 to usefully test this on IPv6, although the two are nominally independent. Hadoop should avoid unsafe split and append on fields that might be IPv6 literals - Key: HADOOP-12122 URL: https://issues.apache.org/jira/browse/HADOOP-12122 Project: Hadoop Common Issue Type: Bug Reporter: Nate Edel Assignee: Nate Edel -- This message was sent by Atlassian JIRA (v6.3.4#6332)
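The hazard with naive splitting is that IPv6 literals contain multiple colons, so splitting "host:port" on ':' picks the wrong one. A bracket-aware parse in the spirit of Guava's HostAndPort (sketched here with the JDK only; not Hadoop's actual code) looks like:

```java
// Sketch: "[::1]:8020" must be parsed by bracket, not by colon; plain
// "nn1.example.com:8020" can still use the last colon. Error handling for
// malformed input is omitted for brevity.
public class HostPort {
    static String[] parse(String s) {
        if (s.startsWith("[")) {                   // bracketed IPv6 literal
            int close = s.indexOf(']');
            String host = s.substring(1, close);
            String port = s.substring(close + 2);  // skip the "]:" separator
            return new String[] { host, port };
        }
        int colon = s.lastIndexOf(':');            // hostname or IPv4 case
        return new String[] { s.substring(0, colon), s.substring(colon + 1) };
    }
}
```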
[jira] [Created] (HADOOP-12122) Hadoop should avoid unsafe split and append on fields that might be IPv6 literals
Nate Edel created HADOOP-12122: -- Summary: Hadoop should avoid unsafe split and append on fields that might be IPv6 literals Key: HADOOP-12122 URL: https://issues.apache.org/jira/browse/HADOOP-12122 Project: Hadoop Common Issue Type: Bug Reporter: Nate Edel Assignee: Nate Edel -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HADOOP-12050) Enable MaxInactiveInterval for hadoop http auth token
[ https://issues.apache.org/jira/browse/HADOOP-12050?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14602158#comment-14602158 ] Benoy Antony commented on HADOOP-12050: --- Thanks for working on this, [~hzlu] . A few comments on the patch. 1. Please add test cases to test the following scenarios a. Both expiry period and InActiveInterval are not reached. b. Expiry period is reached, InActiveInterval is not reached c. Expiry period is not reached, InActiveInterval is reached d. Both expiry period and InActiveInterval are reached. 2. Update the http auth documentation with enhancements introduced in HADOOP-12049 and HADOOP-12050. 3. A nit: change maxInactive to maxInActive (camel case). Enable MaxInactiveInterval for hadoop http auth token - Key: HADOOP-12050 URL: https://issues.apache.org/jira/browse/HADOOP-12050 Project: Hadoop Common Issue Type: Improvement Components: security Affects Versions: 3.0.0 Reporter: Benoy Antony Assignee: hzlu Fix For: 3.0.0 Attachments: HADOOP-12050.002.patch During http authentication, a cookie which contains the authentication token is dropped. The expiry time of the authentication token can be configured via hadoop.http.authentication.token.validity. The default value is 10 hours. For clusters which require enhanced security, it is desirable to have a configurable MaxInActiveInterval for the authentication token. If there is no activity during MaxInActiveInterval, the authentication token will be invalidated. The MaxInActiveInterval will be less than hadoop.http.authentication.token.validity. The default value will be 30 minutes. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HADOOP-12102) Add option to list up allowed hosts that can do any operation as generic ACL.
[ https://issues.apache.org/jira/browse/HADOOP-12102?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14602162#comment-14602162 ] Kai Sasaki commented on HADOOP-12102: - [~cnauroth] Thank you so much for the comment! I think we should treat backwards compatibility as the top priority. If we must keep the protocol layer as it is now, we can implement this ACL in the HDFS component rather than the Common component. Or, if we can change the protocol layer when upgrading to 3.0.0, that might also be an option. Implementing a host/IP-based ACL outside of the service-level ACL is also possible, but it may introduce complexity and code duplication. Add option to list up allowed hosts that can do any operation as generic ACL. - Key: HADOOP-12102 URL: https://issues.apache.org/jira/browse/HADOOP-12102 Project: Hadoop Common Issue Type: Improvement Affects Versions: 2.7.0 Reporter: Kai Sasaki Assignee: Kai Sasaki Priority: Minor Currently the NameNode accepts all operations through the client protocol from any host. However, some critical operations such as format should be restricted not only by Kerberos authentication but also by host name, in order to prevent us from formatting the NameNode by mistake. It would be better to add an option listing allowed hosts that can perform any operation on the NameNode. Although this originally concerns HDFS daemons, the feature should be implemented as a generic ACL. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
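A host-based allowlist of this kind could be sketched as follows; the class and method names are hypothetical, not part of any proposed patch:

```java
import java.util.Set;

// Hypothetical sketch of a host allowlist layered on top of Kerberos auth,
// as proposed in HADOOP-12102. Names are illustrative only.
class HostAcl {
    private final Set<String> allowedHosts;

    HostAcl(Set<String> allowedHosts) {
        this.allowedHosts = allowedHosts;
    }

    // "*" preserves today's behavior of accepting requests from any host.
    boolean isAllowed(String host) {
        return allowedHosts.contains("*") || allowedHosts.contains(host);
    }
}
```

Whether such a check lives in Common as a generic ACL or in HDFS, as discussed above, does not change the shape of the lookup itself.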
[jira] [Updated] (HADOOP-12034) Wrong comment for the filefilter function in test-patch checkstyle plugin
[ https://issues.apache.org/jira/browse/HADOOP-12034?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kengo Seki updated HADOOP-12034: Resolution: Implemented Status: Resolved (was: Patch Available) Wrong comment for the filefilter function in test-patch checkstyle plugin - Key: HADOOP-12034 URL: https://issues.apache.org/jira/browse/HADOOP-12034 Project: Hadoop Common Issue Type: Bug Components: build Reporter: Kengo Seki Assignee: Kengo Seki Priority: Minor Labels: newbie, test-patch Attachments: HADOOP-12034.001.patch This comment is attached to the checkstyle_filefilter function, but it is actually a comment for shellcheck_filefilter: {code} # if it ends in an explicit .sh, then this is shell code. # if it doesn't have an extension, we assume it is shell code too {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HADOOP-12107) long running apps may have a huge number of StatisticsData instances under FileSystem
[ https://issues.apache.org/jira/browse/HADOOP-12107?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14602230#comment-14602230 ] Gera Shegalov commented on HADOOP-12107: bq. StatisticsData is a public class, but its constructor is not public. [~cmccabe], good point on the one hand, but on the other hand this constructor is package-scope, and technically usable if one creates a class with the same package name, regardless of how unlikely or illegal (in terms of specified audience) that is. How about we defensively keep that constructor for branch-2 at least? long running apps may have a huge number of StatisticsData instances under FileSystem - Key: HADOOP-12107 URL: https://issues.apache.org/jira/browse/HADOOP-12107 Project: Hadoop Common Issue Type: Bug Components: fs Affects Versions: 2.7.0 Reporter: Sangjin Lee Assignee: Sangjin Lee Priority: Minor Attachments: HADOOP-12107.001.patch, HADOOP-12107.002.patch, HADOOP-12107.003.patch, HADOOP-12107.004.patch, HADOOP-12107.005.patch We observed with some of our apps (non-mapreduce apps that use filesystems) that they end up accumulating a huge memory footprint coming from {{FileSystem$Statistics$StatisticsData}} (in the {{allData}} list of {{Statistics}}). Although the thread reference from {{StatisticsData}} is a weak reference, and thus can get cleared once a thread goes away, the actual {{StatisticsData}} instances in the list won't get cleared until one of the following methods is called on {{Statistics}}:
- {{getBytesRead()}}
- {{getBytesWritten()}}
- {{getReadOps()}}
- {{getLargeReadOps()}}
- {{getWriteOps()}}
- {{toString()}}
It is quite possible to have an application that interacts with a filesystem but does not call any of these methods on the {{Statistics}}. If such an application runs for a long time and has a large amount of thread churn, the memory footprint will grow significantly.
The current workaround is either to limit the thread churn or to invoke these operations occasionally to pare down the memory. However, this is still a deficiency with {{FileSystem$Statistics}} itself in that the memory is controlled only as a side effect of those operations. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
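The retention behavior described above can be mirrored in a self-contained sketch (illustrative code, not the actual FileSystem$Statistics source): entries whose owning thread has died are removed only as a side effect of an aggregate getter, so an application that never calls one accumulates entries indefinitely. Registering a null owner here stands in for an already-collected thread.

```java
import java.lang.ref.WeakReference;
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;

// Simplified mirror of the pattern described in HADOOP-12107; not Hadoop source.
class StatsSketch {
    static class Data { long bytesRead; }

    static class Entry {
        final WeakReference<Thread> owner;
        final Data data;
        Entry(Thread t, Data d) { owner = new WeakReference<>(t); data = d; }
    }

    private final List<Entry> allData = new ArrayList<>();

    synchronized void register(Thread t, Data d) {
        allData.add(new Entry(t, d));
    }

    // Aggregation is the ONLY place dead-thread entries are removed, which is
    // exactly the deficiency the issue describes.
    synchronized long getBytesRead() {
        long total = 0;
        for (Iterator<Entry> it = allData.iterator(); it.hasNext(); ) {
            Entry e = it.next();
            total += e.data.bytesRead;
            if (e.owner.get() == null) {
                it.remove(); // pruning happens only as a side effect
            }
        }
        return total;
    }

    synchronized int trackedEntries() { return allData.size(); }
}
```

With heavy thread churn and no aggregation calls, `allData` grows without bound, matching the observed footprint.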
[jira] [Commented] (HADOOP-12113) update test-patch branch to latest code
[ https://issues.apache.org/jira/browse/HADOOP-12113?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14600750#comment-14600750 ] Sean Busbey commented on HADOOP-12113: -- IIRC, branches named for jiras were problematic, but maybe we fixed that. update test-patch branch to latest code --- Key: HADOOP-12113 URL: https://issues.apache.org/jira/browse/HADOOP-12113 Project: Hadoop Common Issue Type: Sub-task Components: yetus Reporter: Allen Wittenauer Assignee: Allen Wittenauer Attachments: HADOOP-12113-HADOOP-12111.patch [~sekikn] and I have been working on github. We should update the codebase to reflect all of those changes. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HADOOP-12081) Support Hadoop on zLinux - fix JAAS authentication issue
[ https://issues.apache.org/jira/browse/HADOOP-12081?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14600871#comment-14600871 ] Adam Roberts commented on HADOOP-12081: --- All, when can we realistically expect to see this in a Hadoop release? The implications for the zLinux platform are enormous, and code that isn't run on said platform won't be impacted in the slightest (thus having zero impact on the community but a BIG impact for our Spark on zLinux efforts). Support Hadoop on zLinux - fix JAAS authentication issue Key: HADOOP-12081 URL: https://issues.apache.org/jira/browse/HADOOP-12081 Project: Hadoop Common Issue Type: Bug Components: security Affects Versions: 2.6.0 Environment: zLinux. Reporter: Adam Roberts Labels: zlinux Attachments: HADOOP-12081.001.patch Currently the 64-bit check in security/UserGroupInformation.java uses os.arch and checks for "64". s390x is returned on IBM's z platform: s390x is 64-bit. Without this change, if we try to use HDFS with Spark, we get a fatal error (unable to login as we can't find a login class). This addresses said issue by identifying s390x as a 64-bit platform and thus allowing Spark to run on zLinux. A simple fix with very big implications! -- This message was sent by Atlassian JIRA (v6.3.4#6332)
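The brittle check and the proposed fix can be sketched as follows (illustrative, not the exact patch code): os.arch values such as amd64 or x86_64 contain "64", but s390x does not, even though it is a 64-bit architecture.

```java
// Illustration of the 64-bit architecture check discussed in HADOOP-12081.
// A bare substring test for "64" misclassifies s390x; special-casing it fixes
// the JAAS login-module selection on zLinux. Not the exact patch code.
class ArchCheck {
    static boolean is64Bit(String osArch) {
        return osArch.contains("64") || osArch.equals("s390x");
    }
}
```

Any platform whose os.arch string omits "64" would need the same treatment, which is why the check is worth centralizing.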
[jira] [Assigned] (HADOOP-12081) Support Hadoop on zLinux - fix JAAS authentication issue
[ https://issues.apache.org/jira/browse/HADOOP-12081?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Akira AJISAKA reassigned HADOOP-12081: -- Assignee: Akira AJISAKA Support Hadoop on zLinux - fix JAAS authentication issue Key: HADOOP-12081 URL: https://issues.apache.org/jira/browse/HADOOP-12081 Project: Hadoop Common Issue Type: Bug Components: security Affects Versions: 2.6.0 Environment: zLinux. Reporter: Adam Roberts Assignee: Akira AJISAKA Labels: zlinux Attachments: HADOOP-12081.001.patch Currently the 64-bit check in security/UserGroupInformation.java uses os.arch and checks for "64". s390x is returned on IBM's z platform: s390x is 64-bit. Without this change, if we try to use HDFS with Spark, we get a fatal error (unable to login as we can't find a login class). This addresses said issue by identifying s390x as a 64-bit platform and thus allowing Spark to run on zLinux. A simple fix with very big implications! -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HADOOP-12081) Support Hadoop on zLinux - fix JAAS authentication issue
[ https://issues.apache.org/jira/browse/HADOOP-12081?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14600873#comment-14600873 ] Akira AJISAKA commented on HADOOP-12081: I'd like to include the patch in the 2.8 release. [~steve_l] and [~aw], would you review the patch? Support Hadoop on zLinux - fix JAAS authentication issue Key: HADOOP-12081 URL: https://issues.apache.org/jira/browse/HADOOP-12081 Project: Hadoop Common Issue Type: Bug Components: security Affects Versions: 2.6.0 Environment: zLinux. Reporter: Adam Roberts Assignee: Akira AJISAKA Labels: zlinux Attachments: HADOOP-12081.001.patch Currently the 64-bit check in security/UserGroupInformation.java uses os.arch and checks for "64". s390x is returned on IBM's z platform: s390x is 64-bit. Without this change, if we try to use HDFS with Spark, we get a fatal error (unable to login as we can't find a login class). This addresses said issue by identifying s390x as a 64-bit platform and thus allowing Spark to run on zLinux. A simple fix with very big implications! -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HADOOP-12081) Support Hadoop on zLinux - fix JAAS authentication issue
[ https://issues.apache.org/jira/browse/HADOOP-12081?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Akira AJISAKA updated HADOOP-12081: --- Target Version/s: 2.8.0 Environment: zLinux (was: zLinux.) Support Hadoop on zLinux - fix JAAS authentication issue Key: HADOOP-12081 URL: https://issues.apache.org/jira/browse/HADOOP-12081 Project: Hadoop Common Issue Type: Bug Components: security Affects Versions: 2.6.0 Environment: zLinux Reporter: Adam Roberts Assignee: Akira AJISAKA Labels: zlinux Attachments: HADOOP-12081.001.patch Currently the 64-bit check in security/UserGroupInformation.java uses os.arch and checks for "64". s390x is returned on IBM's z platform: s390x is 64-bit. Without this change, if we try to use HDFS with Spark, we get a fatal error (unable to login as we can't find a login class). This addresses said issue by identifying s390x as a 64-bit platform and thus allowing Spark to run on zLinux. A simple fix with very big implications! -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HADOOP-10392) Use FileSystem#makeQualified(Path) instead of Path#makeQualified(FileSystem)
[ https://issues.apache.org/jira/browse/HADOOP-10392?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14600929#comment-14600929 ] Hadoop QA commented on HADOOP-10392:
| (x) *{color:red}-1 overall{color}* |
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch | 20m 55s | Pre-patch trunk compilation is healthy. |
| {color:green}+1{color} | @author | 0m 0s | The patch does not contain any @author tags. |
| {color:green}+1{color} | tests included | 0m 0s | The patch appears to include 24 new or modified test files. |
| {color:red}-1{color} | javac | 7m 29s | The applied patch generated 221 additional warning messages. |
| {color:green}+1{color} | javadoc | 9m 40s | There were no new javadoc warning messages. |
| {color:green}+1{color} | release audit | 0m 21s | The applied patch does not increase the total number of release audit warnings. |
| {color:green}+1{color} | checkstyle | 3m 46s | There were no new checkstyle issues. |
| {color:red}-1{color} | whitespace | 0m 3s | The patch has 1 line(s) that end in whitespace. Use git apply --whitespace=fix. |
| {color:green}+1{color} | install | 1m 33s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse | 0m 32s | The patch built with eclipse:eclipse. |
| {color:green}+1{color} | findbugs | 7m 11s | The patch does not introduce any new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | common tests | 22m 5s | Tests passed in hadoop-common. |
| {color:red}-1{color} | mapreduce tests | 106m 23s | Tests failed in hadoop-mapreduce-client-jobclient. |
| {color:green}+1{color} | tools/hadoop tests | 0m 51s | Tests passed in hadoop-archives. |
| {color:green}+1{color} | tools/hadoop tests | 0m 17s | Tests passed in hadoop-aws. |
| {color:red}-1{color} | tools/hadoop tests | 14m 44s | Tests failed in hadoop-gridmix. |
| {color:green}+1{color} | tools/hadoop tests | 0m 18s | Tests passed in hadoop-openstack. |
| {color:green}+1{color} | tools/hadoop tests | 0m 23s | Tests passed in hadoop-rumen. |
| {color:green}+1{color} | tools/hadoop tests | 6m 12s | Tests passed in hadoop-streaming. |
| | | 202m 47s | |
|| Reason || Tests ||
| Failed unit tests | hadoop.mapred.TestLocalJobSubmission |
| | hadoop.mapred.gridmix.TestRecordFactory |
|| Subsystem || Report/Notes ||
| Patch URL | http://issues.apache.org/jira/secure/attachment/12739767/HADOOP-10392.009.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / a815cc1 |
| javac | https://builds.apache.org/job/PreCommit-HADOOP-Build/7040/artifact/patchprocess/diffJavacWarnings.txt |
| whitespace | https://builds.apache.org/job/PreCommit-HADOOP-Build/7040/artifact/patchprocess/whitespace.txt |
| hadoop-common test log | https://builds.apache.org/job/PreCommit-HADOOP-Build/7040/artifact/patchprocess/testrun_hadoop-common.txt |
| hadoop-mapreduce-client-jobclient test log | https://builds.apache.org/job/PreCommit-HADOOP-Build/7040/artifact/patchprocess/testrun_hadoop-mapreduce-client-jobclient.txt |
| hadoop-archives test log | https://builds.apache.org/job/PreCommit-HADOOP-Build/7040/artifact/patchprocess/testrun_hadoop-archives.txt |
| hadoop-aws test log | https://builds.apache.org/job/PreCommit-HADOOP-Build/7040/artifact/patchprocess/testrun_hadoop-aws.txt |
| hadoop-gridmix test log | https://builds.apache.org/job/PreCommit-HADOOP-Build/7040/artifact/patchprocess/testrun_hadoop-gridmix.txt |
| hadoop-openstack test log | https://builds.apache.org/job/PreCommit-HADOOP-Build/7040/artifact/patchprocess/testrun_hadoop-openstack.txt |
| hadoop-rumen test log | https://builds.apache.org/job/PreCommit-HADOOP-Build/7040/artifact/patchprocess/testrun_hadoop-rumen.txt |
| hadoop-streaming test log | https://builds.apache.org/job/PreCommit-HADOOP-Build/7040/artifact/patchprocess/testrun_hadoop-streaming.txt |
| Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/7040/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf907.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/7040/console |
This message was automatically generated. Use FileSystem#makeQualified(Path) instead of Path#makeQualified(FileSystem) Key: HADOOP-10392 URL: https://issues.apache.org/jira/browse/HADOOP-10392 Project: Hadoop Common Issue Type: Sub-task Components: fs Affects Versions: 2.3.0 Reporter: Akira AJISAKA Assignee: Akira AJISAKA Priority: Minor Labels: BB2015-05-TBR, newbie Attachments: HADOOP-10392.009.patch,