[jira] [Commented] (HADOOP-14775) Change junit dependency in parent pom file to junit 5 while maintaining backward compatibility to junit4.
[ https://issues.apache.org/jira/browse/HADOOP-14775?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16137902#comment-16137902 ]

Akira Ajisaka commented on HADOOP-14775:
----------------------------------------

Thank you for providing the patch, [~ajayydv]. After applying the patch, {{mvn test}} fails:
{noformat}
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-surefire-plugin:2.17:test (default-test) on project hadoop-minikdc: Execution default-test of goal org.apache.maven.plugins:maven-surefire-plugin:2.17:test failed: java.lang.NoSuchMethodError: org.apache.maven.surefire.providerapi.ProviderParameters.getProviderProperties()Ljava/util/Map; -> [Help 1]
{noformat}
Updating the maven-surefire-plugin version to 2.19.1 fixes this error, but then another error occurs (HADOOP-13514). I'll check HADOOP-13514 again.

> Change junit dependency in parent pom file to junit 5 while maintaining
> backward compatibility to junit4.
> ----------------------------------------------------------------------
>
>        Key: HADOOP-14775
>        URL: https://issues.apache.org/jira/browse/HADOOP-14775
>    Project: Hadoop Common
> Issue Type: Improvement
> Components: build
> Affects Versions: 3.0.0-alpha4
>   Reporter: Ajay Kumar
>   Assignee: Ajay Kumar
>     Labels: junit5
> Attachments: HADOOP-14775.01.patch
>
> Change junit dependency in parent pom file to junit 5 while maintaining
> backward compatibility to junit4.

--
This message was sent by Atlassian JIRA (v6.4.14#64029)

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org
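For readers following along, the general shape of such a change is a {{dependencyManagement}} entry that pulls in the JUnit 5 (Jupiter) engine while keeping existing JUnit 4 tests runnable through the vintage engine. The snippet below is an illustrative sketch only, not the contents of HADOOP-14775.01.patch; the exact artifact versions and placement in hadoop-project/pom.xml are assumptions, and note the surefire error above suggests maven-surefire-plugin 2.19.1 or later is needed to host the JUnit platform provider.

```xml
<!-- Hypothetical sketch (not the attached patch): manage JUnit 5 plus the
     vintage engine so existing JUnit 4 tests keep running unchanged. -->
<dependencyManagement>
  <dependencies>
    <!-- JUnit 5 engine for new Jupiter-style tests -->
    <dependency>
      <groupId>org.junit.jupiter</groupId>
      <artifactId>junit-jupiter-engine</artifactId>
      <version>5.0.0</version>
      <scope>test</scope>
    </dependency>
    <!-- Vintage engine runs legacy JUnit 4 tests on the JUnit 5 platform -->
    <dependency>
      <groupId>org.junit.vintage</groupId>
      <artifactId>junit-vintage-engine</artifactId>
      <version>4.12.0</version>
      <scope>test</scope>
    </dependency>
    <!-- Existing JUnit 4 dependency stays, so test sources still compile -->
    <dependency>
      <groupId>junit</groupId>
      <artifactId>junit</artifactId>
      <version>4.12</version>
      <scope>test</scope>
    </dependency>
  </dependencies>
</dependencyManagement>
```

With such a setup, modules opt into JUnit 5 gradually while the vintage engine keeps the existing suite green.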
[jira] [Commented] (HADOOP-14649) Update aliyun-sdk-oss version to 2.8.1
[ https://issues.apache.org/jira/browse/HADOOP-14649?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16137802#comment-16137802 ]

Hadoop QA commented on HADOOP-14649:
------------------------------------

(x) -1 overall

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 0m 19s | Docker mode activated. |
|| || || || Prechecks ||
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| -1 | test4tests | 0m 0s | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. |
|| || || || trunk Compile Tests ||
| +1 | mvninstall | 16m 39s | trunk passed |
| +1 | compile | 0m 11s | trunk passed |
| +1 | mvnsite | 0m 16s | trunk passed |
| +1 | javadoc | 0m 12s | trunk passed |
|| || || || Patch Compile Tests ||
| +1 | mvninstall | 0m 10s | the patch passed |
| +1 | compile | 0m 9s | the patch passed |
| +1 | javac | 0m 9s | the patch passed |
| +1 | mvnsite | 0m 12s | the patch passed |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | xml | 0m 1s | The patch has no ill-formed XML file. |
| +1 | javadoc | 0m 9s | the patch passed |
|| || || || Other Tests ||
| +1 | unit | 0m 9s | hadoop-project in the patch passed. |
| +1 | asflicense | 0m 19s | The patch does not generate ASF License warnings. |
| | | 19m 33s | |

|| Subsystem || Report/Notes ||
| Docker | Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HADOOP-14649 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12883246/HADOOP-14649.000.patch |
| Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit xml |
| uname | Linux c900c32e9139 3.13.0-117-generic #164-Ubuntu SMP Fri Apr 7 11:05:26 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 4249172 |
| Default Java | 1.8.0_144 |
| Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/13097/testReport/ |
| modules | C: hadoop-project U: hadoop-project |
| Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/13097/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT http://yetus.apache.org |

This message was automatically generated.

> Update aliyun-sdk-oss version to 2.8.1
> --------------------------------------
>
>        Key: HADOOP-14649
>        URL: https://issues.apache.org/jira/browse/HADOOP-14649
>    Project: Hadoop Common
> Issue Type: Sub-task
>   Reporter: Ray Chiang
>   Assignee: Genmao Yu
> Attachments: HADOOP-14649.000.patch
>
> Update the dependency com.aliyun.oss:aliyun-sdk-oss:2.4.1 to the latest (2.8.1).
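For context, a version bump like this usually reduces to a one-line change where hadoop-project/pom.xml manages the artifact. The fragment below is an illustration of the resulting managed dependency after the bump, not the attached patch itself (the exact placement and whether a version property is used are assumptions):

```xml
<!-- Illustrative only: the managed Aliyun OSS SDK dependency after the bump. -->
<dependency>
  <groupId>com.aliyun.oss</groupId>
  <artifactId>aliyun-sdk-oss</artifactId>
  <version>2.8.1</version>
</dependency>
```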
[jira] [Commented] (HADOOP-14649) Update aliyun-sdk-oss version to 2.8.1
[ https://issues.apache.org/jira/browse/HADOOP-14649?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16137780#comment-16137780 ]

Kai Zheng commented on HADOOP-14649:
------------------------------------

On annual leave and vacation; email responses will be delayed.
[jira] [Commented] (HADOOP-14649) Update aliyun-sdk-oss version to 2.8.1
[ https://issues.apache.org/jira/browse/HADOOP-14649?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16137779#comment-16137779 ]

Genmao Yu commented on HADOOP-14649:
------------------------------------

Unit tests passed:
{code}
-------------------------------------------------------
 T E S T S
-------------------------------------------------------
Running org.apache.hadoop.fs.aliyun.oss.contract.TestAliyunOSSContractCreate
Tests run: 11, Failures: 0, Errors: 0, Skipped: 2, Time elapsed: 10.676 sec - in org.apache.hadoop.fs.aliyun.oss.contract.TestAliyunOSSContractCreate
Running org.apache.hadoop.fs.aliyun.oss.contract.TestAliyunOSSContractDelete
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.602 sec - in org.apache.hadoop.fs.aliyun.oss.contract.TestAliyunOSSContractDelete
Running org.apache.hadoop.fs.aliyun.oss.contract.TestAliyunOSSContractDistCp
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 58.933 sec - in org.apache.hadoop.fs.aliyun.oss.contract.TestAliyunOSSContractDistCp
Running org.apache.hadoop.fs.aliyun.oss.contract.TestAliyunOSSContractGetFileStatus
Tests run: 18, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 26.563 sec - in org.apache.hadoop.fs.aliyun.oss.contract.TestAliyunOSSContractGetFileStatus
Running org.apache.hadoop.fs.aliyun.oss.contract.TestAliyunOSSContractMkdir
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.159 sec - in org.apache.hadoop.fs.aliyun.oss.contract.TestAliyunOSSContractMkdir
Running org.apache.hadoop.fs.aliyun.oss.contract.TestAliyunOSSContractOpen
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.133 sec - in org.apache.hadoop.fs.aliyun.oss.contract.TestAliyunOSSContractOpen
Running org.apache.hadoop.fs.aliyun.oss.contract.TestAliyunOSSContractRename
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.935 sec - in org.apache.hadoop.fs.aliyun.oss.contract.TestAliyunOSSContractRename
Running org.apache.hadoop.fs.aliyun.oss.contract.TestAliyunOSSContractRootDir
Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.182 sec - in org.apache.hadoop.fs.aliyun.oss.contract.TestAliyunOSSContractRootDir
Running org.apache.hadoop.fs.aliyun.oss.contract.TestAliyunOSSContractSeek
Tests run: 19, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 10.058 sec - in org.apache.hadoop.fs.aliyun.oss.contract.TestAliyunOSSContractSeek
Running org.apache.hadoop.fs.aliyun.oss.TestAliyunCredentials
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.08 sec - in org.apache.hadoop.fs.aliyun.oss.TestAliyunCredentials
Running org.apache.hadoop.fs.aliyun.oss.TestAliyunOSSFileSystemContract
Tests run: 46, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 24.729 sec - in org.apache.hadoop.fs.aliyun.oss.TestAliyunOSSFileSystemContract
Running org.apache.hadoop.fs.aliyun.oss.TestAliyunOSSFileSystemStore
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 34.165 sec - in org.apache.hadoop.fs.aliyun.oss.TestAliyunOSSFileSystemStore
Running org.apache.hadoop.fs.aliyun.oss.TestAliyunOSSInputStream
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.788 sec - in org.apache.hadoop.fs.aliyun.oss.TestAliyunOSSInputStream
Running org.apache.hadoop.fs.aliyun.oss.TestAliyunOSSOutputStream
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.453 sec - in org.apache.hadoop.fs.aliyun.oss.TestAliyunOSSOutputStream

Results :

Tests run: 144, Failures: 0, Errors: 0, Skipped: 2
{code}
[jira] [Updated] (HADOOP-14649) Update aliyun-sdk-oss version to 2.8.1
[ https://issues.apache.org/jira/browse/HADOOP-14649?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Genmao Yu updated HADOOP-14649:
-------------------------------
    Attachment: HADOOP-14649.000.patch
[jira] [Updated] (HADOOP-14649) Update aliyun-sdk-oss version to 2.8.1
[ https://issues.apache.org/jira/browse/HADOOP-14649?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Genmao Yu updated HADOOP-14649:
-------------------------------
    Status: Patch Available  (was: Open)
[jira] [Commented] (HADOOP-14729) Upgrade JUnit 3 TestCase to JUnit 4
[ https://issues.apache.org/jira/browse/HADOOP-14729?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16137566#comment-16137566 ]

Hadoop QA commented on HADOOP-14729:
------------------------------------

(/) +1 overall

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 0m 21s | Docker mode activated. |
|| || || || Prechecks ||
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 56 new or modified test files. |
|| || || || trunk Compile Tests ||
| 0 | mvndep | 1m 36s | Maven dependency ordering for branch |
| +1 | mvninstall | 15m 6s | trunk passed |
| +1 | compile | 14m 53s | trunk passed |
| +1 | checkstyle | 2m 10s | trunk passed |
| +1 | mvnsite | 6m 42s | trunk passed |
| +1 | findbugs | 8m 27s | trunk passed |
| +1 | javadoc | 5m 0s | trunk passed |
|| || || || Patch Compile Tests ||
| 0 | mvndep | 0m 16s | Maven dependency ordering for patch |
| +1 | mvninstall | 4m 33s | the patch passed |
| +1 | compile | 11m 14s | the patch passed |
| +1 | javac | 11m 14s | root generated 0 new + 1289 unchanged - 2 fixed = 1289 total (was 1291) |
| -0 | checkstyle | 2m 15s | root: The patch generated 41 new + 771 unchanged - 89 fixed = 812 total (was 860) |
| +1 | mvnsite | 7m 13s | the patch passed |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | findbugs | 11m 23s | the patch passed |
| +1 | javadoc | 9m 24s | the patch passed |
|| || || || Other Tests ||
| +1 | unit | 8m 7s | hadoop-common in the patch passed. |
| +1 | unit | 0m 42s | hadoop-yarn-server-web-proxy in the patch passed. |
| +1 | unit | 2m 50s | hadoop-mapreduce-client-core in the patch passed. |
| +1 | unit | 0m 56s | hadoop-mapreduce-client-common in the patch passed. |
| +1 | unit | 98m 20s | hadoop-mapreduce-client-jobclient in the patch passed. |
| +1 | unit | 9m 53s | hadoop-mapreduce-client-nativetask in the patch passed. |
| +1 | unit | 0m 51s | hadoop-mapreduce-examples in the patch passed. |
| +1 | unit | 5m 53s | hadoop-streaming in the patch passed. |
| +1 | unit | 0m 43s | hadoop-datajoin in the patch passed. |
| +1 | unit | 1m 5s | hadoop-extras in the patch passed. |
| +1 | unit | 0m 42s | hadoop-aws in the patch passed. |
[jira] [Commented] (HADOOP-14687) AuthenticatedURL will reuse bad/expired session cookies
[ https://issues.apache.org/jira/browse/HADOOP-14687?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16137541#comment-16137541 ]

Hudson commented on HADOOP-14687:
---------------------------------

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #12228 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/12228/])
HADOOP-14687. AuthenticatedURL will reuse bad/expired session cookies. (jlowe: rev c3793102121767c46091805eae65ef3919a5f368)
* (edit) hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/client/AuthenticatedURL.java
* (edit) hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/http/TestHttpServerWithSpengo.java
* (edit) hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/kms/KMSClientProvider.java
* (edit) hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/client/KerberosAuthenticator.java
* (edit) hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/client/PseudoAuthenticator.java

> AuthenticatedURL will reuse bad/expired session cookies
> -------------------------------------------------------
>
>        Key: HADOOP-14687
>        URL: https://issues.apache.org/jira/browse/HADOOP-14687
>    Project: Hadoop Common
> Issue Type: Bug
> Components: common
> Affects Versions: 2.6.0
>   Reporter: Daryn Sharp
>   Assignee: Daryn Sharp
>   Priority: Critical
>    Fix For: 2.9.0, 3.0.0-beta1, 2.8.2
>
> Attachments: HADOOP-14687.2.trunk.patch, HADOOP-14687.branch-2.8.patch, HADOOP-14687.trunk.patch
>
> AuthenticatedURL with kerberos was designed to perform SPNEGO, then use a
> session cookie to avoid renegotiation overhead. Unfortunately the client
> will continue to use a cookie after it expires. Every request then elicits a 401,
> the connection closes (despite keepalive, because a 401 is an "error"), a TGS is
> obtained, the connection is re-opened, the request is retried with the TGS, and
> the cycle repeats. This places a strain on the KDC and creates lots of
> TIME_WAIT sockets.
>
> The main problem is that, unbeknownst to the auth URL, the JDK transparently
> performs SPNEGO. The server issues a new cookie, but the auth URL doesn't scrape
> the cookie from the response because it doesn't know the JDK re-authenticated.
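To make the failure mode concrete: the fix has to notice a fresh {{Set-Cookie}} on every response, not only on the response to an explicit SPNEGO handshake. The standalone Java sketch below illustrates the general header-scraping technique; it is an assumption about the approach, not the committed patch, and the helper class and method names are hypothetical (though "hadoop.auth" is the cookie name used by hadoop-auth).

```java
import java.util.List;
import java.util.Map;

// Illustrative sketch, not the committed patch: scrape a fresh "hadoop.auth"
// cookie from *every* response, so a JDK-transparent SPNEGO re-authentication
// is not silently dropped by the client.
public class AuthCookieScraper {

    // Returns the "hadoop.auth=..." cookie from the Set-Cookie headers
    // (as returned by HttpURLConnection#getHeaderFields), or null if the
    // server did not issue one on this response.
    public static String extractAuthCookie(Map<String, List<String>> headers) {
        List<String> setCookies = headers.get("Set-Cookie");
        if (setCookies == null) {
            return null;
        }
        for (String cookie : setCookies) {
            // Typical header form: "hadoop.auth=<token>; Path=/; HttpOnly"
            if (cookie.startsWith("hadoop.auth=")) {
                int end = cookie.indexOf(';');
                return end < 0 ? cookie : cookie.substring(0, end).trim();
            }
        }
        return null;
    }
}
```

In a real client this check would run after each exchange, replacing the cached token whenever a new cookie appears rather than only during the initial negotiation.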
[jira] [Commented] (HADOOP-9747) Reduce unnecessary UGI synchronization
[ https://issues.apache.org/jira/browse/HADOOP-9747?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16137504#comment-16137504 ]

Daryn Sharp commented on HADOOP-9747:
-------------------------------------

Had an incident today with an RM going dead in the water that this patch would have allowed to self-heal. A re-login was triggered, and it failed, leaving the UGI with no private credentials. New calls to getCurrentUser() found neither a keytab nor a ticket instance, so any call to relogin was a no-op. This patch "remembers" that the UGI was logged in from a keytab, so re-logins would have been attempted.

Regarding earlier concerns, we've been running with the locking on the subject's private credentials since early February 2017, after experiencing DN lockups. We just aren't running the code that remembers the login conf.

> Reduce unnecessary UGI synchronization
> --------------------------------------
>
>        Key: HADOOP-9747
>        URL: https://issues.apache.org/jira/browse/HADOOP-9747
>    Project: Hadoop Common
> Issue Type: Bug
> Components: security
> Affects Versions: 0.23.0, 2.0.0-alpha, 3.0.0-alpha1
>   Reporter: Daryn Sharp
>   Assignee: Daryn Sharp
>   Priority: Critical
> Attachments: HADOOP-9747.2.branch-2.patch, HADOOP-9747.2.trunk.patch, HADOOP-9747.branch-2.patch, HADOOP-9747.trunk.patch
>
> Jstacks of heavily loaded NNs show up to dozens of threads blocking in the UGI.
[jira] [Commented] (HADOOP-13835) Move Google Test Framework code from mapreduce to hadoop-common
[ https://issues.apache.org/jira/browse/HADOOP-13835?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16137478#comment-16137478 ]

Wangda Tan commented on HADOOP-13835:
-------------------------------------

[~vvasudev], I may not have a chance to do this for at least the next two weeks. Please feel free to take over if you have bandwidth. When I do get a chance, I will try to backport to branch-2.8 as well.

> Move Google Test Framework code from mapreduce to hadoop-common
> ---------------------------------------------------------------
>
>        Key: HADOOP-13835
>        URL: https://issues.apache.org/jira/browse/HADOOP-13835
>    Project: Hadoop Common
> Issue Type: Task
> Components: test
>   Reporter: Varun Vasudev
>   Assignee: Varun Vasudev
>    Fix For: 3.0.0-alpha2
>
> Attachments: HADOOP-13835.001.patch, HADOOP-13835.002.patch, HADOOP-13835.003.patch, HADOOP-13835.004.patch, HADOOP-13835.005.patch, HADOOP-13835.006.patch, HADOOP-13835.007.patch, HADOOP-13835.branch-2.007.patch
>
> The mapreduce project has Google Test Framework code to allow testing of
> native libraries. This should be moved to hadoop-common so that other
> projects can use it as well.
[jira] [Updated] (HADOOP-14687) AuthenticatedURL will reuse bad/expired session cookies
[ https://issues.apache.org/jira/browse/HADOOP-14687?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Jason Lowe updated HADOOP-14687:
--------------------------------
       Resolution: Fixed
     Hadoop Flags: Reviewed
    Fix Version/s: 2.8.2
                   3.0.0-beta1
                   2.9.0
           Status: Resolved  (was: Patch Available)

Thanks, Daryn! I committed this to trunk, branch-2, branch-2.8, and branch-2.8.2.
[jira] [Commented] (HADOOP-14687) AuthenticatedURL will reuse bad/expired session cookies
[ https://issues.apache.org/jira/browse/HADOOP-14687?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16137467#comment-16137467 ]

Jason Lowe commented on HADOOP-14687:
-------------------------------------

+1 for the branch-2.8 patch as well. Committing this.
[jira] [Commented] (HADOOP-14652) Update metrics-core version
[ https://issues.apache.org/jira/browse/HADOOP-14652?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16137377#comment-16137377 ]

Ray Chiang commented on HADOOP-14652:
-------------------------------------

Comparing against my baseline testing, I'm not seeing any new test failures.

> Update metrics-core version
> ---------------------------
>
>        Key: HADOOP-14652
>        URL: https://issues.apache.org/jira/browse/HADOOP-14652
>    Project: Hadoop Common
> Issue Type: Sub-task
> Affects Versions: 3.0.0-beta1
>   Reporter: Ray Chiang
>   Assignee: Ray Chiang
> Attachments: HADOOP-14652.001.patch, HADOOP-14652.002.patch
>
> The current artifact is com.codahale.metrics:metrics-core:3.0.1. That version
> could either be bumped to 3.0.2 (the latest of that line), or replaced with
> the latest artifact, io.dropwizard.metrics:metrics-core:3.2.3.
[jira] [Commented] (HADOOP-14655) Update httpcore version to 4.4.6
[ https://issues.apache.org/jira/browse/HADOOP-14655?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16137376#comment-16137376 ]

Ray Chiang commented on HADOOP-14655:
-------------------------------------

Comparing against my baseline testing, I'm not seeing any new test failures.

> Update httpcore version to 4.4.6
> --------------------------------
>
>        Key: HADOOP-14655
>        URL: https://issues.apache.org/jira/browse/HADOOP-14655
>    Project: Hadoop Common
> Issue Type: Sub-task
>   Reporter: Ray Chiang
>   Assignee: Ray Chiang
> Attachments: HADOOP-14655.001.patch
>
> Update the dependency org.apache.httpcomponents:httpcore:4.4.4 to the latest (4.4.6).
[jira] [Commented] (HADOOP-14653) Update joda-time version to 2.9.9
[ https://issues.apache.org/jira/browse/HADOOP-14653?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16137375#comment-16137375 ]

Ray Chiang commented on HADOOP-14653:
-------------------------------------

Comparing against my baseline testing, I'm not seeing any new test failures.

> Update joda-time version to 2.9.9
> ---------------------------------
>
>        Key: HADOOP-14653
>        URL: https://issues.apache.org/jira/browse/HADOOP-14653
>    Project: Hadoop Common
> Issue Type: Sub-task
> Affects Versions: 3.0.0-beta1
>   Reporter: Ray Chiang
>   Assignee: Ray Chiang
>    Fix For: 3.0.0-beta1
>
> Attachments: HADOOP-14653.001.patch
>
> Update the dependency joda-time:joda-time:2.9.4 to the latest (2.9.9).
[jira] [Commented] (HADOOP-14648) Bump commons-configuration2 to 2.1.1
[ https://issues.apache.org/jira/browse/HADOOP-14648?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16137374#comment-16137374 ]

Ray Chiang commented on HADOOP-14648:
-------------------------------------

The SLS unit test failures also occur for me in a clean tree without this change. Comparing against my baseline testing, I'm not seeing any new test failures.

> Bump commons-configuration2 to 2.1.1
> ------------------------------------
>
>        Key: HADOOP-14648
>        URL: https://issues.apache.org/jira/browse/HADOOP-14648
>    Project: Hadoop Common
> Issue Type: Sub-task
> Affects Versions: 3.0.0-beta1
>   Reporter: Ray Chiang
>   Assignee: Ray Chiang
> Attachments: HADOOP-14648.001.patch
>
> Update the dependency org.apache.commons:commons-configuration2:2.1 to the latest (2.1.1).
[jira] [Commented] (HADOOP-14251) Credential provider should handle property key deprecation
[ https://issues.apache.org/jira/browse/HADOOP-14251?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16137295#comment-16137295 ]

John Zhuge commented on HADOOP-14251:
-------------------------------------

Hi [~steve_l], could you please take another look? I'd really appreciate your help. Sorry to keep bugging you, but this is blocking a few of our internal tests.

> Credential provider should handle property key deprecation
> ----------------------------------------------------------
>
>        Key: HADOOP-14251
>        URL: https://issues.apache.org/jira/browse/HADOOP-14251
>    Project: Hadoop Common
> Issue Type: Improvement
> Components: security
> Affects Versions: 2.6.0
>   Reporter: John Zhuge
>   Assignee: John Zhuge
>   Priority: Critical
> Attachments: HADOOP-14251.001.patch, HADOOP-14251.002.patch, HADOOP-14251.003.patch, HADOOP-14251.004.patch, HADOOP-14251.005.patch
>
> Properties stored in a credential store under old keys cannot be read via
> the new property keys, even though the old keys have been deprecated.
[jira] [Commented] (HADOOP-14775) Change junit dependency in parent pom file to junit 5 while maintaining backward compatibility to junit4.
[ https://issues.apache.org/jira/browse/HADOOP-14775?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16137280#comment-16137280 ]

Hadoop QA commented on HADOOP-14775:
------------------------------------

(x) -1 overall

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 0m 17s | Docker mode activated. |
|| || || || Prechecks ||
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| -1 | test4tests | 0m 0s | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. |
|| || || || trunk Compile Tests ||
| +1 | mvninstall | 15m 44s | trunk passed |
| +1 | compile | 0m 9s | trunk passed |
| +1 | mvnsite | 0m 13s | trunk passed |
| +1 | javadoc | 0m 10s | trunk passed |
|| || || || Patch Compile Tests ||
| +1 | mvninstall | 0m 8s | the patch passed |
| +1 | compile | 0m 6s | the patch passed |
| +1 | javac | 0m 6s | the patch passed |
| +1 | mvnsite | 0m 8s | the patch passed |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | xml | 0m 2s | The patch has no ill-formed XML file. |
| +1 | javadoc | 0m 8s | the patch passed |
|| || || || Other Tests ||
| +1 | unit | 0m 7s | hadoop-project in the patch passed. |
| +1 | asflicense | 0m 15s | The patch does not generate ASF License warnings. |
| | | 18m 10s | |

|| Subsystem || Report/Notes ||
| Docker | Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HADOOP-14775 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12883174/HADOOP-14775.01.patch |
| Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit xml |
| uname | Linux 5ffc3cf24cf7 3.13.0-119-generic #166-Ubuntu SMP Wed May 3 12:18:55 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 657dd59 |
| Default Java | 1.8.0_144 |
| Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/13096/testReport/ |
| modules | C: hadoop-project U: hadoop-project |
| Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/13096/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT http://yetus.apache.org |

This message was automatically generated.
[jira] [Commented] (HADOOP-14703) ConsoleSink for metrics2
[ https://issues.apache.org/jira/browse/HADOOP-14703?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16137265#comment-16137265 ] Hadoop QA commented on HADOOP-14703: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 19s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 52s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m 6s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 36s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 30s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 27s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 52s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 50s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 12m 23s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 
12m 23s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 37s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 30s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 46s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 52s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red} 9m 16s{color} | {color:red} hadoop-common in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 33s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 66m 20s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.ha.TestZKFailoverController | | | hadoop.security.TestKDiag | | | hadoop.metrics2.sink.TestLogSink | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:14b5c93 | | JIRA Issue | HADOOP-14703 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12883160/HADOOP-14703.004.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux 764152fa6230 3.13.0-117-generic #164-Ubuntu SMP Fri Apr 7 11:05:26 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 657dd59 | | Default Java | 1.8.0_144 | | findbugs | v3.1.0-RC1 | | unit | https://builds.apache.org/job/PreCommit-HADOOP-Build/13093/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common.txt | | Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/13093/testReport/ | | modules | C: hadoop-common-project/hadoop-common U: hadoop-common-project/hadoop-common | | Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/13093/console | | Powered by | Apache Yetus 0.6.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > ConsoleSink for metrics2 > > > Key: HADOOP-14703 > URL: https://issues.apache.org/jira/browse/HADOOP-14703 > Project: Hadoop Common > Issue Type: Improvement > Components: common, metrics >Affects Versions: 3.0.0-beta1 >Reporter: Ronald Macmaster >Assignee:
[jira] [Updated] (HADOOP-14775) Change junit dependency in parent pom file to junit 5 while maintaining backward compatibility to junit4.
[ https://issues.apache.org/jira/browse/HADOOP-14775?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ajay Kumar updated HADOOP-14775: Status: Patch Available (was: Open) > Change junit dependency in parent pom file to junit 5 while maintaining > backward compatibility to junit4. > -- > > Key: HADOOP-14775 > URL: https://issues.apache.org/jira/browse/HADOOP-14775 > Project: Hadoop Common > Issue Type: Improvement > Components: build >Affects Versions: 3.0.0-alpha4 >Reporter: Ajay Kumar >Assignee: Ajay Kumar > Labels: junit5 > Attachments: HADOOP-14775.01.patch > > > Change junit dependency in parent pom file to junit 5 while maintaining > backward compatibility to junit4.
[jira] [Updated] (HADOOP-14775) Change junit dependency in parent pom file to junit 5 while maintaining backward compatibility to junit4.
[ https://issues.apache.org/jira/browse/HADOOP-14775?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ajay Kumar updated HADOOP-14775: Attachment: HADOOP-14775.01.patch > Change junit dependency in parent pom file to junit 5 while maintaining > backward compatibility to junit4. > -- > > Key: HADOOP-14775 > URL: https://issues.apache.org/jira/browse/HADOOP-14775 > Project: Hadoop Common > Issue Type: Improvement > Components: build >Affects Versions: 3.0.0-alpha4 >Reporter: Ajay Kumar >Assignee: Ajay Kumar > Labels: junit5 > Attachments: HADOOP-14775.01.patch > > > Change junit dependency in parent pom file to junit 5 while maintaining > backward compatibility to junit4.
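For context on what this kind of patch typically contains: running existing JUnit 4 tests on the JUnit 5 platform is usually done through the vintage engine, with new tests written against the Jupiter API. A hedged sketch of the parent-pom dependency block (artifact versions are illustrative placeholders for the 2017 timeframe, not taken from HADOOP-14775.01.patch):

```xml
<!-- Illustrative only: JUnit 5 (Jupiter) API for new tests, plus the
     vintage engine so existing JUnit 4 tests keep running unchanged.
     Versions below are placeholders, not the patch's actual values. -->
<dependency>
  <groupId>org.junit.jupiter</groupId>
  <artifactId>junit-jupiter-api</artifactId>
  <version>5.0.0</version>
  <scope>test</scope>
</dependency>
<dependency>
  <groupId>org.junit.vintage</groupId>
  <artifactId>junit-vintage-engine</artifactId>
  <version>4.12.0</version>
  <scope>test</scope>
</dependency>
```

Note that maven-surefire-plugin additionally needs a version (and, at that time, the junit-platform-surefire-provider) capable of discovering JUnit Platform tests, which is consistent with the surefire-version errors reported on this issue.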
[jira] [Commented] (HADOOP-14655) Update httpcore version to 4.4.6
[ https://issues.apache.org/jira/browse/HADOOP-14655?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16137186#comment-16137186 ] Hadoop QA commented on HADOOP-14655: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 19s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 1s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 8s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 13s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 9s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 8s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 7s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 7s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 
10s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 1s{color} | {color:green} The patch has no ill-formed XML file. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 8s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 7s{color} | {color:green} hadoop-project in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 17s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 18m 31s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:14b5c93 | | JIRA Issue | HADOOP-14655 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12878849/HADOOP-14655.001.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit xml | | uname | Linux 8080ad6f74be 3.13.0-117-generic #164-Ubuntu SMP Fri Apr 7 11:05:26 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 657dd59 | | Default Java | 1.8.0_144 | | Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/13094/testReport/ | | modules | C: hadoop-project U: hadoop-project | | Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/13094/console | | Powered by | Apache Yetus 0.6.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. 
> Update httpcore version to 4.4.6 > > > Key: HADOOP-14655 > URL: https://issues.apache.org/jira/browse/HADOOP-14655 > Project: Hadoop Common > Issue Type: Sub-task >Reporter: Ray Chiang >Assignee: Ray Chiang > Attachments: HADOOP-14655.001.patch > > > Update the dependency > org.apache.httpcomponents:httpcore:4.4.4 > to the latest (4.4.6).
[jira] [Commented] (HADOOP-14787) AliyunOSS: Implement the `createNonRecursive` operator
[ https://issues.apache.org/jira/browse/HADOOP-14787?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16137171#comment-16137171 ] Ray Chiang commented on HADOOP-14787: - Thanks [~ste...@apache.org]! > AliyunOSS: Implement the `createNonRecursive` operator > -- > > Key: HADOOP-14787 > URL: https://issues.apache.org/jira/browse/HADOOP-14787 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs, fs/oss >Affects Versions: 3.0.0-beta1 >Reporter: Genmao Yu >Assignee: Genmao Yu > Fix For: 3.0.0-beta1 > > Attachments: HADOOP-14787.000.patch > > > {code} > testOverwriteNonEmptyDirectory(org.apache.hadoop.fs.aliyun.oss.contract.TestAliyunOSSContractCreate) > Time elapsed: 1.146 sec <<< ERROR! > java.io.IOException: createNonRecursive unsupported for this filesystem class > org.apache.hadoop.fs.aliyun.oss.AliyunOSSFileSystem > at > org.apache.hadoop.fs.FileSystem.createNonRecursive(FileSystem.java:1304) > at > org.apache.hadoop.fs.FileSystem$FileSystemDataOutputStreamBuilder.build(FileSystem.java:4163) > at > org.apache.hadoop.fs.contract.ContractTestUtils.writeDataset(ContractTestUtils.java:179) > at > org.apache.hadoop.fs.contract.AbstractContractCreateTest.testOverwriteNonEmptyDirectory(AbstractContractCreateTest.java:178) > at > org.apache.hadoop.fs.contract.AbstractContractCreateTest.testOverwriteNonEmptyDirectory(AbstractContractCreateTest.java:208) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:498) > at > org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47) > at > org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) > at > org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44) > at > 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) > at > org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) > at > org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) > at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55) > at > org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74) > testOverwriteEmptyDirectory(org.apache.hadoop.fs.aliyun.oss.contract.TestAliyunOSSContractCreate) > Time elapsed: 0.145 sec <<< ERROR! > java.io.IOException: createNonRecursive unsupported for this filesystem class > org.apache.hadoop.fs.aliyun.oss.AliyunOSSFileSystem > at > org.apache.hadoop.fs.FileSystem.createNonRecursive(FileSystem.java:1304) > at > org.apache.hadoop.fs.FileSystem$FileSystemDataOutputStreamBuilder.build(FileSystem.java:4163) > at > org.apache.hadoop.fs.contract.ContractTestUtils.writeDataset(ContractTestUtils.java:179) > at > org.apache.hadoop.fs.contract.AbstractContractCreateTest.testOverwriteEmptyDirectory(AbstractContractCreateTest.java:133) > at > org.apache.hadoop.fs.contract.AbstractContractCreateTest.testOverwriteEmptyDirectory(AbstractContractCreateTest.java:155) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:498) > at > org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47) > at > org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) > at > org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44) > at > org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) > at > org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) > at > 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) > at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55) > at > org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74) > testCreateFileOverExistingFileNoOverwrite(org.apache.hadoop.fs.aliyun.oss.contract.TestAliyunOSSContractCreate) > Time elapsed: 0.147 sec <<< ERROR! > java.io.IOException: createNonRecursive unsupported for this filesystem class > org.apache.hadoop.fs.aliyun.oss.AliyunOSSFileSystem > at >
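The truncated stack traces above all fail at the same spot: the default `FileSystem.createNonRecursive` implementation throws `IOException("createNonRecursive unsupported ...")`, so the contract tests fail until the connector overrides it. A hedged sketch of the shape such an override commonly takes in object-store connectors (this is not the actual HADOOP-14787 patch, which may differ):

```java
// Hedged sketch only. For a flat-namespace object store, "non-recursive"
// create reduces to: verify the parent exists and is a directory, then
// delegate to the normal create() path.
@Override
public FSDataOutputStream createNonRecursive(Path path, FsPermission permission,
    EnumSet<CreateFlag> flags, int bufferSize, short replication,
    long blockSize, Progressable progress) throws IOException {
  Path parent = path.getParent();
  if (parent != null && !getFileStatus(parent).isDirectory()) {
    // getFileStatus throws FileNotFoundException if the parent is absent,
    // which is the required createNonRecursive failure mode.
    throw new FileAlreadyExistsException(parent + " is not a directory");
  }
  return create(path, permission, flags.contains(CreateFlag.OVERWRITE),
      bufferSize, replication, blockSize, progress);
}
```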
[jira] [Commented] (HADOOP-14729) Upgrade JUnit 3 TestCase to JUnit 4
[ https://issues.apache.org/jira/browse/HADOOP-14729?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16137168#comment-16137168 ] Ajay Kumar commented on HADOOP-14729: - [~ajisakaa] thanks for review. Updated patch with suggested changes. > Upgrade JUnit 3 TestCase to JUnit 4 > --- > > Key: HADOOP-14729 > URL: https://issues.apache.org/jira/browse/HADOOP-14729 > Project: Hadoop Common > Issue Type: Test >Reporter: Akira Ajisaka >Assignee: Ajay Kumar > Labels: newbie > Attachments: HADOOP-14729.001.patch, HADOOP-14729.002.patch, > HADOOP-14729.003.patch, HADOOP-14729.004.patch, HADOOP-14729.005.patch, > HADOOP-14729.006.patch, HADOOP-14729.007.patch, HADOOP-14729.008.patch, > HADOOP-14729.009.patch, HADOOP-14729.010.patch > > > There are still test classes that extend from junit.framework.TestCase in > hadoop-common. Upgrade them to JUnit4.
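The mechanical shape of this migration is well established; a minimal before/after sketch (class and method names are hypothetical, not from the patch):

```java
// JUnit 3 style, as still found in hadoop-common:
//
//   public class TestFoo extends TestCase {          // junit.framework.TestCase
//     protected void setUp() { /* fixture */ }        // inherited lifecycle hook
//     public void testBar() { assertEquals(1, 1); }   // name-based discovery
//   }
//
// JUnit 4 equivalent: drop the TestCase superclass; lifecycle and discovery
// move to annotations, assertions to static imports.
import org.junit.Before;
import org.junit.Test;
import static org.junit.Assert.assertEquals;

public class TestFoo {
  @Before
  public void setUp() { /* fixture setup */ }

  @Test
  public void testBar() {
    assertEquals(1, 1);
  }
}
```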
[jira] [Updated] (HADOOP-14729) Upgrade JUnit 3 TestCase to JUnit 4
[ https://issues.apache.org/jira/browse/HADOOP-14729?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ajay Kumar updated HADOOP-14729: Attachment: HADOOP-14729.010.patch > Upgrade JUnit 3 TestCase to JUnit 4 > --- > > Key: HADOOP-14729 > URL: https://issues.apache.org/jira/browse/HADOOP-14729 > Project: Hadoop Common > Issue Type: Test >Reporter: Akira Ajisaka >Assignee: Ajay Kumar > Labels: newbie > Attachments: HADOOP-14729.001.patch, HADOOP-14729.002.patch, > HADOOP-14729.003.patch, HADOOP-14729.004.patch, HADOOP-14729.005.patch, > HADOOP-14729.006.patch, HADOOP-14729.007.patch, HADOOP-14729.008.patch, > HADOOP-14729.009.patch, HADOOP-14729.010.patch > > > There are still test classes that extend from junit.framework.TestCase in > hadoop-common. Upgrade them to JUnit4.
[jira] [Updated] (HADOOP-14703) ConsoleSink for metrics2
[ https://issues.apache.org/jira/browse/HADOOP-14703?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ronald Macmaster updated HADOOP-14703: -- Attachment: HADOOP-14703.004.patch Force sequential test execution. > ConsoleSink for metrics2 > > > Key: HADOOP-14703 > URL: https://issues.apache.org/jira/browse/HADOOP-14703 > Project: Hadoop Common > Issue Type: Improvement > Components: common, metrics >Affects Versions: 3.0.0-beta1 >Reporter: Ronald Macmaster >Assignee: Ronald Macmaster > Labels: newbie > Attachments: > 0001-HADOOP-14703.-ConsoleSink-for-simple-metrics-printin.patch, > HADOOP-14703.001.patch, HADOOP-14703.002.patch, HADOOP-14703.003.patch, > HADOOP-14703.004.patch > > Original Estimate: 6h > Remaining Estimate: 6h > > The ConsoleSink will provide a simple solution to dump metrics to the console > through std.out. > Quick access to metrics through the console will simplify the development, > testing, and debugging process.
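A sink like this plugs into the standard metrics2 `MetricsSink` interface (`init`/`putMetrics`/`flush`). A hedged sketch of what a console sink could look like; the actual HADOOP-14703 patch may differ in naming and formatting:

```java
// Hedged sketch, not the patch itself. Each MetricsRecord is flattened to a
// single stdout line: timestamp, record name, then name=value pairs.
import org.apache.commons.configuration2.SubsetConfiguration;
import org.apache.hadoop.metrics2.AbstractMetric;
import org.apache.hadoop.metrics2.MetricsRecord;
import org.apache.hadoop.metrics2.MetricsSink;

public class ConsoleSink implements MetricsSink {
  @Override
  public void init(SubsetConfiguration conf) {
    // nothing to configure for plain stdout output
  }

  @Override
  public void putMetrics(MetricsRecord record) {
    StringBuilder sb = new StringBuilder();
    sb.append(record.timestamp()).append(' ').append(record.name());
    for (AbstractMetric metric : record.metrics()) {
      sb.append(' ').append(metric.name()).append('=').append(metric.value());
    }
    System.out.println(sb);
  }

  @Override
  public void flush() {
    System.out.flush();
  }
}
```

It would then be wired up like any other sink in hadoop-metrics2.properties, e.g. a line of the form `*.sink.console.class=...ConsoleSink` (the exact property name and package are assumptions here).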
[jira] [Updated] (HADOOP-14655) Update httpcore version to 4.4.6
[ https://issues.apache.org/jira/browse/HADOOP-14655?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ray Chiang updated HADOOP-14655: Status: Patch Available (was: Open) > Update httpcore version to 4.4.6 > > > Key: HADOOP-14655 > URL: https://issues.apache.org/jira/browse/HADOOP-14655 > Project: Hadoop Common > Issue Type: Sub-task >Reporter: Ray Chiang >Assignee: Ray Chiang > Attachments: HADOOP-14655.001.patch > > > Update the dependency > org.apache.httpcomponents:httpcore:4.4.4 > to the latest (4.4.6).
[jira] [Commented] (HADOOP-14687) AuthenticatedURL will reuse bad/expired session cookies
[ https://issues.apache.org/jira/browse/HADOOP-14687?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16137098#comment-16137098 ] Hadoop QA commented on HADOOP-14687: | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 17m 4s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} branch-2.8 Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 1m 22s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 49s{color} | {color:green} branch-2.8 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 59s{color} | {color:green} branch-2.8 passed with JDK v1.8.0_144 {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 59s{color} | {color:green} branch-2.8 passed with JDK v1.7.0_151 {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 26s{color} | {color:green} branch-2.8 passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 18s{color} | {color:green} branch-2.8 passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 6s{color} | {color:green} branch-2.8 passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 51s{color} | {color:green} branch-2.8 passed with JDK v1.8.0_144 {color} | | 
{color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 1s{color} | {color:green} branch-2.8 passed with JDK v1.7.0_151 {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 8s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 54s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 49s{color} | {color:green} the patch passed with JDK v1.8.0_144 {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 5m 49s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 50s{color} | {color:green} the patch passed with JDK v1.7.0_151 {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 50s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 24s{color} | {color:green} hadoop-common-project: The patch generated 0 new + 72 unchanged - 2 fixed = 72 total (was 74) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 19s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 28s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 50s{color} | {color:green} the patch passed with JDK v1.8.0_144 {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 59s{color} | {color:green} the patch passed with JDK v1.7.0_151 {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 4m 24s{color} | {color:green} hadoop-auth in the patch passed with JDK v1.7.0_151. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 8m 2s{color} | {color:green} hadoop-common in the patch passed with JDK v1.7.0_151. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 24s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 93m 25s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:d946387 | | JIRA Issue | HADOOP-14687 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12883133/HADOOP-14687.branch-2.8.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite
[jira] [Commented] (HADOOP-14703) ConsoleSink for metrics2
[ https://issues.apache.org/jira/browse/HADOOP-14703?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16137071#comment-16137071 ] Hadoop QA commented on HADOOP-14703: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 29s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 9s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m 3s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 40s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 35s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 30s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 58s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 44s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m 51s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 
11m 51s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 38s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 32s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 33s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 55s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red} 7m 36s{color} | {color:red} hadoop-common in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 34s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 63m 44s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.metrics2.sink.TestLogSink | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:14b5c93 | | JIRA Issue | HADOOP-14703 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12883131/HADOOP-14703.003.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux 2d7d60f1b8da 3.13.0-123-generic #172-Ubuntu SMP Mon Jun 26 18:04:35 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 4ec5acc | | Default Java | 1.8.0_144 | | findbugs | v3.1.0-RC1 | | unit | https://builds.apache.org/job/PreCommit-HADOOP-Build/13091/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common.txt | | Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/13091/testReport/ | | modules | C: hadoop-common-project/hadoop-common U: hadoop-common-project/hadoop-common | | Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/13091/console | | Powered by | Apache Yetus 0.6.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > ConsoleSink for metrics2 > > > Key: HADOOP-14703 > URL: https://issues.apache.org/jira/browse/HADOOP-14703 > Project: Hadoop Common > Issue Type: Improvement > Components: common, metrics >Affects Versions: 3.0.0-beta1 >Reporter: Ronald Macmaster >Assignee: Ronald Macmaster > Labels: newbie > Attachments: >
[jira] [Commented] (HADOOP-14705) Add batched interface reencryptEncryptedKeys to KMS
[ https://issues.apache.org/jira/browse/HADOOP-14705?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16137062#comment-16137062 ] Xiao Chen commented on HADOOP-14705: This was committed to trunk. Thanks again > Add batched interface reencryptEncryptedKeys to KMS > --- > > Key: HADOOP-14705 > URL: https://issues.apache.org/jira/browse/HADOOP-14705 > Project: Hadoop Common > Issue Type: Improvement > Components: kms >Reporter: Xiao Chen >Assignee: Xiao Chen > Fix For: 3.0.0-beta1 > > Attachments: HADOOP-14705.01.patch, HADOOP-14705.02.patch, > HADOOP-14705.03.patch, HADOOP-14705.04.patch, HADOOP-14705.05.patch, > HADOOP-14705.06.patch, HADOOP-14705.07.patch, HADOOP-14705.08.patch, > HADOOP-14705.09.patch, HADOOP-14705.10.patch, HADOOP-14705.11.patch > > > HADOOP-13827 already enabled the KMS to re-encrypt a {{EncryptedKeyVersion}}. > As the performance results of HDFS-10899 turns out, communication overhead > with the KMS occupies the majority of the time. So this jira proposes to add > a batched interface to re-encrypt multiple EDEKs in 1 call. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-14687) AuthenticatedURL will reuse bad/expired session cookies
[ https://issues.apache.org/jira/browse/HADOOP-14687?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Daryn Sharp updated HADOOP-14687: - Attachment: HADOOP-14687.branch-2.8.patch Conflicts in 2.8 essentially due to logging (2.8 didn't have a logger). > AuthenticatedURL will reuse bad/expired session cookies > --- > > Key: HADOOP-14687 > URL: https://issues.apache.org/jira/browse/HADOOP-14687 > Project: Hadoop Common > Issue Type: Bug > Components: common >Affects Versions: 2.6.0 >Reporter: Daryn Sharp >Assignee: Daryn Sharp >Priority: Critical > Attachments: HADOOP-14687.2.trunk.patch, > HADOOP-14687.branch-2.8.patch, HADOOP-14687.trunk.patch > > > AuthenticatedURL with kerberos was designed to perform spnego, then use a > session cookie to avoid renegotiation overhead. Unfortunately the client > will continue to use a cookie after it expires. Every request elicits a 401, > connection closes (despite keepalive because 401 is an "error"), TGS is > obtained, connection re-opened, re-requests with TGS, repeat cycle. This > places a strain on the kdc and creates lots of time_wait sockets. > > The main problem is unbeknownst to the auth url, the JDK transparently does > spnego. The server issues a new cookie but the auth url doesn't scrape the > cookie from the response because it doesn't know the JDK re-authenticated. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-12862) LDAP Group Mapping over SSL can not specify trust store
[ https://issues.apache.org/jira/browse/HADOOP-12862?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16136955#comment-16136955 ] Kai Zheng commented on HADOOP-12862: Thanks for this, [~jojochuang]. Will look at this later if not urgent, am on vacation. > LDAP Group Mapping over SSL can not specify trust store > --- > > Key: HADOOP-12862 > URL: https://issues.apache.org/jira/browse/HADOOP-12862 > Project: Hadoop Common > Issue Type: Bug >Reporter: Wei-Chiu Chuang >Assignee: Wei-Chiu Chuang > Attachments: HADOOP-12862.001.patch, HADOOP-12862.002.patch, > HADOOP-12862.003.patch, HADOOP-12862.004.patch, HADOOP-12862.005.patch, > HADOOP-12862.006.patch, HADOOP-12862.007.patch > > > In a secure environment, SSL is used to encrypt LDAP requests for group > mapping resolution. > We (+[~yoderme], +[~tgrayson]) have found that its implementation is strange. > For information, Hadoop name node, as an LDAP client, talks to a LDAP server > to resolve the group mapping of a user. In the case of LDAP over SSL, a > typical scenario is to establish one-way authentication (the client verifies > the server's certificate is real) by storing the server's certificate in the > client's truststore. > A rarer scenario is to establish two-way authentication: in addition to storing a > truststore for the client to verify the server, the server also verifies the > client's certificate is real, and the client stores its own certificate in > its keystore. > However, the current implementation for LDAP over SSL does not seem to be > correct in that it only configures keystore but no truststore (so LDAP server > can verify Hadoop's certificate, but Hadoop may not be able to verify LDAP > server's certificate) > I think there should be an extra pair of properties to specify the > truststore/password for LDAP server, and use that to configure system > properties {{javax.net.ssl.trustStore}}/{{javax.net.ssl.trustStorePassword}} > I am a security layman so my words can be imprecise. 
But I hope this makes > sense. > Oracle's SSL LDAP documentation: > http://docs.oracle.com/javase/jndi/tutorial/ldap/security/ssl.html > JSSE reference guide: > http://docs.oracle.com/javase/7/docs/technotes/guides/security/jsse/JSSERefGuide.html -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
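The fix the description proposes, pushing a truststore/password pair into the standard JSSE system properties, can be sketched as follows. The `javax.net.ssl.*` property names are the standard JSSE ones mentioned in the issue; the `ldapssl.*` config keys and the helper itself are hypothetical illustrations, not Hadoop's actual configuration names.

```java
import java.util.Map;

public class LdapSslTrustStoreConfig {

    /**
     * Copies truststore settings (hypothetical ldapssl.* keys) into the JSSE
     * system properties so an LDAPS client can verify the server certificate
     * (one-way authentication).
     */
    public static void applyTrustStore(Map<String, String> conf) {
        String store = conf.get("ldapssl.truststore");              // hypothetical key
        String password = conf.get("ldapssl.truststore.password");  // hypothetical key
        if (store != null && !store.isEmpty()) {
            System.setProperty("javax.net.ssl.trustStore", store);
        }
        if (password != null && !password.isEmpty()) {
            System.setProperty("javax.net.ssl.trustStorePassword", password);
        }
    }

    public static void main(String[] args) {
        applyTrustStore(Map.of(
            "ldapssl.truststore", "/etc/pki/ldap-truststore.jks",
            "ldapssl.truststore.password", "changeit"));
        System.out.println(System.getProperty("javax.net.ssl.trustStore"));
    }
}
```

Whether system properties (process-global) or a per-connection `SSLContext` is the right mechanism is a design choice the JIRA leaves open; system properties are the simpler option the reporter names.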
[jira] [Commented] (HADOOP-14687) AuthenticatedURL will reuse bad/expired session cookies
[ https://issues.apache.org/jira/browse/HADOOP-14687?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16136961#comment-16136961 ] Jason Lowe commented on HADOOP-14687: - Thanks for clarifying. +1 lgtm. I'll commit this later today if there are no objections. > AuthenticatedURL will reuse bad/expired session cookies > --- > > Key: HADOOP-14687 > URL: https://issues.apache.org/jira/browse/HADOOP-14687 > Project: Hadoop Common > Issue Type: Bug > Components: common >Affects Versions: 2.6.0 >Reporter: Daryn Sharp >Assignee: Daryn Sharp >Priority: Critical > Attachments: HADOOP-14687.2.trunk.patch, HADOOP-14687.trunk.patch > > > AuthenticatedURL with kerberos was designed to perform spnego, then use a > session cookie to avoid renegotiation overhead. Unfortunately the client > will continue to use a cookie after it expires. Every request elicits a 401, > connection closes (despite keepalive because 401 is an "error"), TGS is > obtained, connection re-opened, re-requests with TGS, repeat cycle. This > places a strain on the kdc and creates lots of time_wait sockets. > > The main problem is unbeknownst to the auth url, the JDK transparently does > spnego. The server issues a new cookie but the auth url doesn't scrape the > cookie from the response because it doesn't know the JDK re-authenticated. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
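The failure mode described above hinges on reusing a session cookie past its lifetime. A stdlib-only sketch of the guard, using `java.net.HttpCookie` rather than the actual AuthenticatedURL code, shows the idea: scrape the `Set-Cookie` value and refuse to replay it once it has expired, forcing a fresh SPNEGO negotiation instead of an endless 401 cycle.

```java
import java.net.HttpCookie;
import java.util.List;

public class SessionCookieCache {

    private HttpCookie cached;

    /** Stores the cookie scraped from a Set-Cookie response header value. */
    public void store(String setCookieHeader) {
        List<HttpCookie> cookies = HttpCookie.parse(setCookieHeader);
        cached = cookies.isEmpty() ? null : cookies.get(0);
    }

    /** Returns the cookie only while still valid; null means re-authenticate. */
    public String validCookieOrNull() {
        if (cached == null || cached.hasExpired()) {
            cached = null;  // drop the dead cookie instead of replaying it
            return null;
        }
        return cached.toString();
    }

    public static void main(String[] args) {
        SessionCookieCache cache = new SessionCookieCache();
        cache.store("hadoop.auth=\"token\"; Max-Age=86400");
        System.out.println("fresh cookie reusable: " + (cache.validCookieOrNull() != null));
        cache.store("hadoop.auth=\"token\"; Max-Age=0");  // server expired the session
        System.out.println("expired cookie reusable: " + (cache.validCookieOrNull() != null));
    }
}
```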
[jira] [Updated] (HADOOP-14703) ConsoleSink for metrics2
[ https://issues.apache.org/jira/browse/HADOOP-14703?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ronald Macmaster updated HADOOP-14703: -- Attachment: HADOOP-14703.003.patch > ConsoleSink for metrics2 > > > Key: HADOOP-14703 > URL: https://issues.apache.org/jira/browse/HADOOP-14703 > Project: Hadoop Common > Issue Type: Improvement > Components: common, metrics >Affects Versions: 3.0.0-beta1 >Reporter: Ronald Macmaster >Assignee: Ronald Macmaster > Labels: newbie > Attachments: > 0001-HADOOP-14703.-ConsoleSink-for-simple-metrics-printin.patch, > HADOOP-14703.001.patch, HADOOP-14703.002.patch, HADOOP-14703.003.patch > > Original Estimate: 6h > Remaining Estimate: 6h > > The ConsoleSink will provide a simple solution to dump metrics to the console > through std.out. > Quick access to metrics through the console will simplify the development, > testing, and debugging process. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
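The core of the proposed ConsoleSink, dumping each metrics record to standard output, can be sketched with a simplified stand-in. The real metrics2 `MetricsSink`/`MetricsRecord` interfaces are not reproduced here; only the formatting idea is.

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class ConsoleSinkSketch {

    /** Formats one metrics record as a single console line. */
    public static String format(String recordName, Map<String, Number> metrics) {
        StringBuilder sb = new StringBuilder(recordName).append(':');
        metrics.forEach((k, v) -> sb.append(' ').append(k).append('=').append(v));
        return sb.toString();
    }

    public static void main(String[] args) {
        // Order-preserving map so the output line is stable.
        LinkedHashMap<String, Number> record = new LinkedHashMap<>();
        record.put("memHeapUsedM", 42);
        record.put("gcCount", 7);
        System.out.println(format("jvm", record));
    }
}
```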
[jira] [Commented] (HADOOP-14687) AuthenticatedURL will reuse bad/expired session cookies
[ https://issues.apache.org/jira/browse/HADOOP-14687?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16136948#comment-16136948 ] Daryn Sharp commented on HADOOP-14687: -- I think it's fine because the api to explicitly set the value isn't public and the former behavior wouldn't preserve, expose, or even parse metadata like the expiration time. Even if the non-public api is invoked multiple times, the artificial reduction in lifetime does not have a cumulative effect. It's relative to the current moment in time. > AuthenticatedURL will reuse bad/expired session cookies > --- > > Key: HADOOP-14687 > URL: https://issues.apache.org/jira/browse/HADOOP-14687 > Project: Hadoop Common > Issue Type: Bug > Components: common >Affects Versions: 2.6.0 >Reporter: Daryn Sharp >Assignee: Daryn Sharp >Priority: Critical > Attachments: HADOOP-14687.2.trunk.patch, HADOOP-14687.trunk.patch > > > AuthenticatedURL with kerberos was designed to perform spnego, then use a > session cookie to avoid renegotiation overhead. Unfortunately the client > will continue to use a cookie after it expires. Every request elicits a 401, > connection closes (despite keepalive because 401 is an "error"), TGS is > obtained, connection re-opened, re-requests with TGS, repeat cycle. This > places a strain on the kdc and creates lots of time_wait sockets. > > The main problem is unbeknownst to the auth url, the JDK transparently does > spnego. The server issues a new cookie but the auth url doesn't scrape the > cookie from the response because it doesn't know the JDK re-authenticated. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Issue Comment Deleted] (HADOOP-12862) LDAP Group Mapping over SSL can not specify trust store
[ https://issues.apache.org/jira/browse/HADOOP-12862?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kai Zheng updated HADOOP-12862: --- Comment: was deleted (was: In annual leave and vacation, email response will be delayed. For SSM and Hadoop 3.0 related please contact with Wei Zhou; for benchmark with NSG related, please contact with Shunyang; for HAS related, Jiajia. ) > LDAP Group Mapping over SSL can not specify trust store > --- > > Key: HADOOP-12862 > URL: https://issues.apache.org/jira/browse/HADOOP-12862 > Project: Hadoop Common > Issue Type: Bug >Reporter: Wei-Chiu Chuang >Assignee: Wei-Chiu Chuang > Attachments: HADOOP-12862.001.patch, HADOOP-12862.002.patch, > HADOOP-12862.003.patch, HADOOP-12862.004.patch, HADOOP-12862.005.patch, > HADOOP-12862.006.patch, HADOOP-12862.007.patch > > > In a secure environment, SSL is used to encrypt LDAP request for group > mapping resolution. > We (+[~yoderme], +[~tgrayson]) have found that its implementation is strange. > For information, Hadoop name node, as an LDAP client, talks to a LDAP server > to resolve the group mapping of a user. In the case of LDAP over SSL, a > typical scenario is to establish one-way authentication (the client verifies > the server's certificate is real) by storing the server's certificate in the > client's truststore. > A rarer scenario is to establish two-way authentication: in addition to store > truststore for the client to verify the server, the server also verifies the > client's certificate is real, and the client stores its own certificate in > its keystore. 
> However, the current implementation for LDAP over SSL does not seem to be > correct in that it only configures keystore but no truststore (so LDAP server > can verify Hadoop's certificate, but Hadoop may not be able to verify LDAP > server's certificate) > I think there should be an extra pair of properties to specify the > truststore/password for LDAP server, and use that to configure system > properties {{javax.net.ssl.trustStore}}/{{javax.net.ssl.trustStorePassword}} > I am a security layman so my words can be imprecise. But I hope this makes > sense. > Oracle's SSL LDAP documentation: > http://docs.oracle.com/javase/jndi/tutorial/ldap/security/ssl.html > JSSE reference guide: > http://docs.oracle.com/javase/7/docs/technotes/guides/security/jsse/JSSERefGuide.html -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-14194) Aliyun OSS should not use empty endpoint as default
[ https://issues.apache.org/jira/browse/HADOOP-14194?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16136934#comment-16136934 ] Kai Zheng commented on HADOOP-14194: Yes, my missed +1 on this. > Aliyun OSS should not use empty endpoint as default > --- > > Key: HADOOP-14194 > URL: https://issues.apache.org/jira/browse/HADOOP-14194 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/oss >Reporter: Mingliang Liu >Assignee: Genmao Yu > Fix For: 3.0.0-beta1 > > Attachments: HADOOP-14194.000.patch, HADOOP-14194.001.patch > > > In {{AliyunOSSFileSystemStore::initialize()}}, it retrieves the endPoint and > using empty string as a default value. > {code} > String endPoint = conf.getTrimmed(ENDPOINT_KEY, ""); > {code} > The plain value without validation is passed to OSSClient. If the endPoint is > not provided (empty string) or the endPoint is not valid, users will get > exception from Aliyun OSS sdk with raw exception message like: > {code} > java.lang.IllegalArgumentException: java.net.URISyntaxException: Expected > authority at index 8: https:// > at com.aliyun.oss.OSSClient.toURI(OSSClient.java:359) > at com.aliyun.oss.OSSClient.setEndpoint(OSSClient.java:313) > at com.aliyun.oss.OSSClient.(OSSClient.java:297) > at > org.apache.hadoop.fs.aliyun.oss.AliyunOSSFileSystemStore.initialize(AliyunOSSFileSystemStore.java:134) > at > org.apache.hadoop.fs.aliyun.oss.AliyunOSSFileSystem.initialize(AliyunOSSFileSystem.java:272) > at > org.apache.hadoop.fs.aliyun.oss.AliyunOSSTestUtils.createTestFileSystem(AliyunOSSTestUtils.java:63) > at > org.apache.hadoop.fs.aliyun.oss.TestAliyunOSSFileSystemContract.setUp(TestAliyunOSSFileSystemContract.java:47) > at junit.framework.TestCase.runBare(TestCase.java:139) > at junit.framework.TestResult$1.protect(TestResult.java:122) > at junit.framework.TestResult.runProtected(TestResult.java:142) > at junit.framework.TestResult.run(TestResult.java:125) > at junit.framework.TestCase.run(TestCase.java:129) > at 
junit.framework.TestSuite.runTest(TestSuite.java:255) > at junit.framework.TestSuite.run(TestSuite.java:250) > at > org.junit.internal.runners.JUnit38ClassRunner.run(JUnit38ClassRunner.java:84) > at org.junit.runner.JUnitCore.run(JUnitCore.java:160) > at > com.intellij.junit4.JUnit4IdeaTestRunner.startRunnerWithArgs(JUnit4IdeaTestRunner.java:68) > at > com.intellij.rt.execution.junit.IdeaTestRunner$Repeater.startRunnerWithArgs(IdeaTestRunner.java:51) > at > com.intellij.rt.execution.junit.JUnitStarter.prepareStreamsAndStart(JUnitStarter.java:237) > at > com.intellij.rt.execution.junit.JUnitStarter.main(JUnitStarter.java:70) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:497) > at com.intellij.rt.execution.application.AppMain.main(AppMain.java:147) > Caused by: java.net.URISyntaxException: Expected authority at index 8: > https:// > at java.net.URI$Parser.fail(URI.java:2848) > at java.net.URI$Parser.failExpecting(URI.java:2854) > at java.net.URI$Parser.parseHierarchical(URI.java:3102) > at java.net.URI$Parser.parse(URI.java:3053) > at java.net.URI.(URI.java:588) > at com.aliyun.oss.OSSClient.toURI(OSSClient.java:357) > {code} > Let's check endPoint is not null or empty, catch the IllegalArgumentException > and log it, wrapping the exception with clearer message stating the > misconfiguration in endpoint or credentials. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
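The validation the comment proposes can be sketched as follows: reject a missing or malformed endpoint up front with a clear message, instead of letting the SDK fail later with a raw `URISyntaxException` ("Expected authority at index 8: https://"). The method below is illustrative, not the actual `AliyunOSSFileSystemStore` code.

```java
import java.net.URI;
import java.net.URISyntaxException;

public class EndpointValidator {

    /** Returns the trimmed endpoint if usable, else throws with a clear message. */
    public static String checkEndpoint(String endPoint) {
        if (endPoint == null || endPoint.trim().isEmpty()) {
            throw new IllegalArgumentException(
                "Aliyun OSS endpoint is not configured; "
                + "set the endpoint property or check credentials");
        }
        String trimmed = endPoint.trim();
        try {
            // Validate the same way the SDK will eventually use it.
            URI uri = new URI("https://" + trimmed);
            if (uri.getHost() == null) {
                throw new IllegalArgumentException("Invalid Aliyun OSS endpoint: " + trimmed);
            }
        } catch (URISyntaxException e) {
            throw new IllegalArgumentException("Invalid Aliyun OSS endpoint: " + trimmed, e);
        }
        return trimmed;
    }

    public static void main(String[] args) {
        System.out.println(checkEndpoint("oss-cn-hangzhou.aliyuncs.com"));
    }
}
```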
[jira] [Commented] (HADOOP-14705) Add batched interface reencryptEncryptedKeys to KMS
[ https://issues.apache.org/jira/browse/HADOOP-14705?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16136926#comment-16136926 ] Hudson commented on HADOOP-14705: - ABORTED: Integrated in Jenkins build Hadoop-trunk-Commit #12225 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/12225/]) HADOOP-14705. Add batched interface reencryptEncryptedKeys to KMS. (xiao: rev 4ec5acc70418a3f2327cf83ecae1789a057fdd99) * (edit) hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/kms/KMSClientProvider.java * (edit) hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/KMSUtil.java * (edit) hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/kms/KMSRESTConstants.java * (edit) hadoop-common-project/hadoop-kms/src/main/java/org/apache/hadoop/crypto/key/kms/server/KeyAuthorizationKeyProvider.java * (edit) hadoop-common-project/hadoop-kms/src/main/java/org/apache/hadoop/crypto/key/kms/server/KMSWebApp.java * (edit) hadoop-common-project/hadoop-kms/src/test/java/org/apache/hadoop/crypto/key/kms/server/TestKMS.java * (edit) hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/kms/LoadBalancingKMSClientProvider.java * (edit) hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/crypto/key/TestKeyProviderCryptoExtension.java * (edit) hadoop-common-project/hadoop-kms/src/main/java/org/apache/hadoop/crypto/key/kms/server/KMSServerJSONUtils.java * (edit) hadoop-common-project/hadoop-kms/src/test/java/org/apache/hadoop/crypto/key/kms/server/TestKMSAudit.java * (edit) hadoop-common-project/hadoop-kms/src/main/java/org/apache/hadoop/crypto/key/kms/server/EagerKeyGeneratorKeyProviderCryptoExtension.java * (edit) hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/KeyProviderCryptoExtension.java * (edit) hadoop-common-project/hadoop-kms/src/main/java/org/apache/hadoop/crypto/key/kms/server/KMS.java * (edit) 
hadoop-common-project/hadoop-kms/src/main/java/org/apache/hadoop/crypto/key/kms/server/KMSJSONReader.java * (edit) hadoop-common-project/hadoop-kms/src/site/markdown/index.md.vm > Add batched interface reencryptEncryptedKeys to KMS > --- > > Key: HADOOP-14705 > URL: https://issues.apache.org/jira/browse/HADOOP-14705 > Project: Hadoop Common > Issue Type: Improvement > Components: kms >Reporter: Xiao Chen >Assignee: Xiao Chen > Fix For: 3.0.0-beta1 > > Attachments: HADOOP-14705.01.patch, HADOOP-14705.02.patch, > HADOOP-14705.03.patch, HADOOP-14705.04.patch, HADOOP-14705.05.patch, > HADOOP-14705.06.patch, HADOOP-14705.07.patch, HADOOP-14705.08.patch, > HADOOP-14705.09.patch, HADOOP-14705.10.patch, HADOOP-14705.11.patch > > > HADOOP-13827 already enabled the KMS to re-encrypt a {{EncryptedKeyVersion}}. > As the performance results of HDFS-10899 turns out, communication overhead > with the KMS occupies the majority of the time. So this jira proposes to add > a batched interface to re-encrypt multiple EDEKs in 1 call. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
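The shape of the batched interface can be sketched as below: one call covering N EDEKs instead of N per-key round trips. `EncryptedKeyVersion` and the extension interface here are simplified stand-ins for the real KMS types, and the default method is a naive per-key fallback, not the committed server-side batching.

```java
import java.util.ArrayList;
import java.util.List;

public class BatchedReencryptSketch {

    static class EncryptedKeyVersion {
        final String material;
        EncryptedKeyVersion(String material) { this.material = material; }
    }

    interface CryptoExtension {
        /** Per-key call: one KMS round trip each (the HADOOP-13827 shape). */
        EncryptedKeyVersion reencryptEncryptedKey(EncryptedKeyVersion ekv);

        /** Batched call: intended to be one round trip for the whole list. */
        default List<EncryptedKeyVersion> reencryptEncryptedKeys(
                List<EncryptedKeyVersion> ekvs) {
            List<EncryptedKeyVersion> out = new ArrayList<>(ekvs.size());
            for (EncryptedKeyVersion ekv : ekvs) {
                out.add(reencryptEncryptedKey(ekv));  // naive fallback: still N trips
            }
            return out;
        }
    }

    public static void main(String[] args) {
        CryptoExtension ext = ekv -> new EncryptedKeyVersion(ekv.material + "-v2");
        List<EncryptedKeyVersion> batch = ext.reencryptEncryptedKeys(
            List.of(new EncryptedKeyVersion("edek1"), new EncryptedKeyVersion("edek2")));
        System.out.println(batch.size() + " keys re-encrypted");
    }
}
```

The win comes from overriding the batched default on the KMS client so the list is shipped in a single HTTP request, which is exactly the communication overhead HDFS-10899 measured.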
[jira] [Updated] (HADOOP-14705) Add batched interface reencryptEncryptedKeys to KMS
[ https://issues.apache.org/jira/browse/HADOOP-14705?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xiao Chen updated HADOOP-14705: --- Summary: Add batched interface reencryptEncryptedKeys to KMS (was: Add batched reencryptEncryptedKey interface to KMS) > Add batched interface reencryptEncryptedKeys to KMS > --- > > Key: HADOOP-14705 > URL: https://issues.apache.org/jira/browse/HADOOP-14705 > Project: Hadoop Common > Issue Type: Improvement > Components: kms >Reporter: Xiao Chen >Assignee: Xiao Chen > Fix For: 3.0.0-beta1 > > Attachments: HADOOP-14705.01.patch, HADOOP-14705.02.patch, > HADOOP-14705.03.patch, HADOOP-14705.04.patch, HADOOP-14705.05.patch, > HADOOP-14705.06.patch, HADOOP-14705.07.patch, HADOOP-14705.08.patch, > HADOOP-14705.09.patch, HADOOP-14705.10.patch, HADOOP-14705.11.patch > > > HADOOP-13827 already enabled the KMS to re-encrypt a {{EncryptedKeyVersion}}. > As the performance results of HDFS-10899 turns out, communication overhead > with the KMS occupies the majority of the time. So this jira proposes to add > a batched interface to re-encrypt multiple EDEKs in 1 call. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-14705) Add batched interface reencryptEncryptedKeys to KMS
[ https://issues.apache.org/jira/browse/HADOOP-14705?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xiao Chen updated HADOOP-14705: --- Resolution: Fixed Hadoop Flags: Reviewed Fix Version/s: 3.0.0-beta1 Status: Resolved (was: Patch Available) > Add batched interface reencryptEncryptedKeys to KMS > --- > > Key: HADOOP-14705 > URL: https://issues.apache.org/jira/browse/HADOOP-14705 > Project: Hadoop Common > Issue Type: Improvement > Components: kms >Reporter: Xiao Chen >Assignee: Xiao Chen > Fix For: 3.0.0-beta1 > > Attachments: HADOOP-14705.01.patch, HADOOP-14705.02.patch, > HADOOP-14705.03.patch, HADOOP-14705.04.patch, HADOOP-14705.05.patch, > HADOOP-14705.06.patch, HADOOP-14705.07.patch, HADOOP-14705.08.patch, > HADOOP-14705.09.patch, HADOOP-14705.10.patch, HADOOP-14705.11.patch > > > HADOOP-13827 already enabled the KMS to re-encrypt a {{EncryptedKeyVersion}}. > As the performance results of HDFS-10899 turns out, communication overhead > with the KMS occupies the majority of the time. So this jira proposes to add > a batched interface to re-encrypt multiple EDEKs in 1 call. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-14705) Add batched reencryptEncryptedKey interface to KMS
[ https://issues.apache.org/jira/browse/HADOOP-14705?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16136877#comment-16136877 ] Xiao Chen commented on HADOOP-14705: Committing this given [~jojochuang]'s pending +1. Thanks for the reviews Wei-Chiu and Rushabh! > Add batched reencryptEncryptedKey interface to KMS > -- > > Key: HADOOP-14705 > URL: https://issues.apache.org/jira/browse/HADOOP-14705 > Project: Hadoop Common > Issue Type: Improvement > Components: kms >Reporter: Xiao Chen >Assignee: Xiao Chen > Attachments: HADOOP-14705.01.patch, HADOOP-14705.02.patch, > HADOOP-14705.03.patch, HADOOP-14705.04.patch, HADOOP-14705.05.patch, > HADOOP-14705.06.patch, HADOOP-14705.07.patch, HADOOP-14705.08.patch, > HADOOP-14705.09.patch, HADOOP-14705.10.patch, HADOOP-14705.11.patch > > > HADOOP-13827 already enabled the KMS to re-encrypt a {{EncryptedKeyVersion}}. > As the performance results of HDFS-10899 turns out, communication overhead > with the KMS occupies the majority of the time. So this jira proposes to add > a batched interface to re-encrypt multiple EDEKs in 1 call. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Created] (HADOOP-14801) s3guard diff demand creates a new table
Steve Loughran created HADOOP-14801: --- Summary: s3guard diff demand creates a new table Key: HADOOP-14801 URL: https://issues.apache.org/jira/browse/HADOOP-14801 Project: Hadoop Common Issue Type: Sub-task Components: fs/s3 Affects Versions: HADOOP-13345 Reporter: Steve Loughran Priority: Minor If you call {{s3guard diff}} to diff a bucket and a table, it creates the table if not already there. I don't see that as being the right thing to do. {code} hadoop s3guard diff $bucket 2017-08-22 15:14:47,025 INFO s3guard.DynamoDBMetadataStore: Creating non-existent DynamoDB table hwdev-steve-ireland-new in region eu-west-1 2017-08-22 15:14:52,384 INFO s3guard.S3GuardTool: Metadata store DynamoDBMetadataStore{region=eu-west-1, tableName=hwdev-steve-ireland-new} is initialized. {code} -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
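The behavior the report questions amounts to a missing guard: a read-only command such as {{s3guard diff}} should fail fast when the table is absent rather than create it. A hedged sketch of that guard (the `tableExists`/`autoCreate` parameters are hypothetical, not the `DynamoDBMetadataStore` API):

```java
import java.io.FileNotFoundException;

public class TableInitGuard {

    /** Only creates the table when the caller explicitly opted into creation. */
    public static void initTable(String table, boolean tableExists, boolean autoCreate)
            throws FileNotFoundException {
        if (!tableExists) {
            if (!autoCreate) {
                throw new FileNotFoundException("DynamoDB table '" + table
                    + "' does not exist; auto-creation is turned off");
            }
            System.out.println("Creating table " + table);
        }
    }

    public static void main(String[] args) throws Exception {
        initTable("example-table", true, false);      // existing table: fine
        try {
            initTable("missing-table", false, false); // diff-style, read-only command
        } catch (FileNotFoundException expected) {
            System.out.println(expected.getMessage());
        }
    }
}
```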
[jira] [Commented] (HADOOP-13139) Branch-2: S3a to use thread pool that blocks clients
[ https://issues.apache.org/jira/browse/HADOOP-13139?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16136843#comment-16136843 ] Jason Lowe commented on HADOOP-13139: - Right. If >=2.8.x configs had a fs.s3a.threads.core=10 setting in core-default then this would have "worked" for <=2.7.x. I say that in quotes since the old code would be running with a default of core=10 and max=10 which is a bit less than the 15 core threads and far less than the 256 max threads it defaulted to before. I'm not familiar with S3AFileSystem and don't know if that's reasonable, hence my asking what others thought should be done, if anything, for this scenario. At first glance it's a bit weird to be mixing Hadoop versions between the job client and job which is required to hit this problem, but running via Oozie makes this more likely to happen in practice. In hindsight it would have been a smoother transition to abandon the old config names and use new configs. > Branch-2: S3a to use thread pool that blocks clients > > > Key: HADOOP-13139 > URL: https://issues.apache.org/jira/browse/HADOOP-13139 > Project: Hadoop Common > Issue Type: Task > Components: fs/s3 >Affects Versions: 2.8.0 >Reporter: Pieter Reuse >Assignee: Pieter Reuse > Fix For: 2.8.0 > > Attachments: HADOOP-13139-001.patch, HADOOP-13139-branch-2.001.patch, > HADOOP-13139-branch-2.002.patch, HADOOP-13139-branch-2-003.patch, > HADOOP-13139-branch-2-004.patch, HADOOP-13139-branch-2-005.patch, > HADOOP-13139-branch-2-006.patch > > > HADOOP-11684 is accepted into trunk, but was not applied to branch-2. I will > attach a patch applicable to branch-2. > It should be noted in CHANGES-2.8.0.txt that the config parameter > 'fs.s3a.threads.core' has been been removed and the behavior of the > ThreadPool for s3a has been changed. 
-- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
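The "thread pool that blocks clients" idea under discussion can be sketched with plain `java.util.concurrent`: a fixed pool over a bounded queue whose rejection handler re-queues via `put()`, so a submitter blocks once `max` threads and the queue are saturated instead of failing or growing unboundedly. This mirrors the concept, not the actual S3AFileSystem code or its config keys.

```java
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.RejectedExecutionException;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class BlockingThreadPool {

    public static ThreadPoolExecutor create(int maxThreads, int queueCapacity) {
        return new ThreadPoolExecutor(
            maxThreads, maxThreads, 60L, TimeUnit.SECONDS,
            new LinkedBlockingQueue<>(queueCapacity),
            (task, executor) -> {
                try {
                    // Block the submitting client until queue space frees up.
                    executor.getQueue().put(task);
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                    throw new RejectedExecutionException(e);
                }
            });
    }

    public static void main(String[] args) throws InterruptedException {
        ThreadPoolExecutor pool = create(2, 2);
        AtomicInteger done = new AtomicInteger();
        for (int i = 0; i < 10; i++) {
            pool.execute(done::incrementAndGet);  // blocks when pool + queue are full
        }
        pool.shutdown();
        pool.awaitTermination(10, TimeUnit.SECONDS);
        System.out.println(done.get() + " tasks ran");
    }
}
```

One caveat of the `put()`-based handler: it bypasses shutdown checks, so it is only safe when submissions cannot race with `shutdown()`, as in the sketch above.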
[jira] [Commented] (HADOOP-14705) Add batched reencryptEncryptedKey interface to KMS
[ https://issues.apache.org/jira/browse/HADOOP-14705?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16136836#comment-16136836 ] Rushabh S Shah commented on HADOOP-14705: - +1 (non-binding) ltgm. Thanks [~xiaochen] for being so patient and following up with the review changes immediately. > Add batched reencryptEncryptedKey interface to KMS > -- > > Key: HADOOP-14705 > URL: https://issues.apache.org/jira/browse/HADOOP-14705 > Project: Hadoop Common > Issue Type: Improvement > Components: kms >Reporter: Xiao Chen >Assignee: Xiao Chen > Attachments: HADOOP-14705.01.patch, HADOOP-14705.02.patch, > HADOOP-14705.03.patch, HADOOP-14705.04.patch, HADOOP-14705.05.patch, > HADOOP-14705.06.patch, HADOOP-14705.07.patch, HADOOP-14705.08.patch, > HADOOP-14705.09.patch, HADOOP-14705.10.patch, HADOOP-14705.11.patch > > > HADOOP-13827 already enabled the KMS to re-encrypt a {{EncryptedKeyVersion}}. > As the performance results of HDFS-10899 turns out, communication overhead > with the KMS occupies the majority of the time. So this jira proposes to add > a batched interface to re-encrypt multiple EDEKs in 1 call. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-14800) eliminate double stack trace on some s3guard CLI failures
[ https://issues.apache.org/jira/browse/HADOOP-14800?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16136811#comment-16136811 ] Steve Loughran commented on HADOOP-14800: - {code} hadoop s3guard destroy $bucket 2017-08-22 14:40:12,353 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable 2017-08-22 14:40:13,836 INFO Configuration.deprecation: fs.s3a.server-side-encryption-key is deprecated. Instead, use fs.s3a.server-side-encryption.key 2017-08-22 14:40:14,945 ERROR s3guard.S3Guard: Failed to instantiate metadata store org.apache.hadoop.fs.s3a.s3guard.DynamoDBMetadataStore defined in fs.s3a.metadatastore.impl: java.io.FileNotFoundException: DynamoDB table 'hwdev-steve-ireland-new' does not exist in region eu-west-1; auto-creation is turned off java.io.FileNotFoundException: DynamoDB table 'hwdev-steve-ireland-new' does not exist in region eu-west-1; auto-creation is turned off at org.apache.hadoop.fs.s3a.s3guard.DynamoDBMetadataStore.initTable(DynamoDBMetadataStore.java:827) at org.apache.hadoop.fs.s3a.s3guard.DynamoDBMetadataStore.initialize(DynamoDBMetadataStore.java:244) at org.apache.hadoop.fs.s3a.s3guard.S3Guard.getMetadataStore(S3Guard.java:96) at org.apache.hadoop.fs.s3a.S3AFileSystem.initialize(S3AFileSystem.java:299) at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3258) at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:123) at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:3307) at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:3275) at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:476) at org.apache.hadoop.fs.s3a.s3guard.S3GuardTool.initS3AFileSystem(S3GuardTool.java:245) at org.apache.hadoop.fs.s3a.s3guard.S3GuardTool.parseDynamoDBRegion(S3GuardTool.java:164) at org.apache.hadoop.fs.s3a.s3guard.S3GuardTool$Destroy.run(S3GuardTool.java:388) at 
org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76) at org.apache.hadoop.fs.s3a.s3guard.S3GuardTool.run(S3GuardTool.java:904) at org.apache.hadoop.fs.s3a.s3guard.S3GuardTool.main(S3GuardTool.java:913) java.io.FileNotFoundException: DynamoDB table 'hwdev-steve-ireland-new' does not exist in region eu-west-1; auto-creation is turned off at org.apache.hadoop.fs.s3a.s3guard.DynamoDBMetadataStore.initTable(DynamoDBMetadataStore.java:827) at org.apache.hadoop.fs.s3a.s3guard.DynamoDBMetadataStore.initialize(DynamoDBMetadataStore.java:244) at org.apache.hadoop.fs.s3a.s3guard.S3Guard.getMetadataStore(S3Guard.java:96) at org.apache.hadoop.fs.s3a.S3AFileSystem.initialize(S3AFileSystem.java:299) at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3258) at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:123) at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:3307) at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:3275) at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:476) at org.apache.hadoop.fs.s3a.s3guard.S3GuardTool.initS3AFileSystem(S3GuardTool.java:245) at org.apache.hadoop.fs.s3a.s3guard.S3GuardTool.parseDynamoDBRegion(S3GuardTool.java:164) at org.apache.hadoop.fs.s3a.s3guard.S3GuardTool$Destroy.run(S3GuardTool.java:388) at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76) at org.apache.hadoop.fs.s3a.s3guard.S3GuardTool.run(S3GuardTool.java:904) at org.apache.hadoop.fs.s3a.s3guard.S3GuardTool.main(S3GuardTool.java:913) {code} > eliminate double stack trace on some s3guard CLI failures > - > > Key: HADOOP-14800 > URL: https://issues.apache.org/jira/browse/HADOOP-14800 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Affects Versions: HADOOP-13345 >Reporter: Steve Loughran >Priority: Minor > Fix For: HADOOP-13345 > > > {{s3guard destroy}} when there's no bucket ends up double-listing the stack > trace, which is somewhat confusing -- This message was sent by Atlassian JIRA 
(v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
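The double listing above is the same FileNotFoundException being printed once by the metadata-store logger and once again on the tool's exit path. One common shape of the fix, sketched with a hypothetical helper (not the actual S3GuardTool code): catch the expected exception at the entry point and emit its message a single time, reserving full stack traces for unexpected errors.

```java
import java.io.FileNotFoundException;

// Hypothetical sketch: turn an "expected" failure such as a missing DynamoDB
// table into one error line at the CLI exit path, instead of letting both the
// logger and the exit handler dump the same stack trace.
public class SingleTraceExit {
    // Format the failure once; full stack traces stay for unexpected errors.
    static String oneLineError(Exception e) {
        return e.getClass().getSimpleName() + ": " + e.getMessage();
    }

    public static void main(String[] args) {
        try {
            throw new FileNotFoundException(
                "DynamoDB table 'demo-table' does not exist; auto-creation is turned off");
        } catch (FileNotFoundException e) {
            System.err.println(oneLineError(e)); // printed once, no stack dump
        }
    }
}
```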
[jira] [Created] (HADOOP-14800) eliminate double stack trace on some s3guard CLI failures
Steve Loughran created HADOOP-14800: --- Summary: eliminate double stack trace on some s3guard CLI failures Key: HADOOP-14800 URL: https://issues.apache.org/jira/browse/HADOOP-14800 Project: Hadoop Common Issue Type: Sub-task Components: fs/s3 Affects Versions: HADOOP-13345 Reporter: Steve Loughran Priority: Minor Fix For: HADOOP-13345 {{s3guard destroy}} when there's no bucket ends up double-listing the stack trace, which is somewhat confusing
[jira] [Commented] (HADOOP-13835) Move Google Test Framework code from mapreduce to hadoop-common
[ https://issues.apache.org/jira/browse/HADOOP-13835?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16136787#comment-16136787 ] Varun Vasudev commented on HADOOP-13835: [~leftnoteasy] - did you get a chance to do this? Can you backport to branch-2.8 as well? Thanks! > Move Google Test Framework code from mapreduce to hadoop-common > --- > > Key: HADOOP-13835 > URL: https://issues.apache.org/jira/browse/HADOOP-13835 > Project: Hadoop Common > Issue Type: Task > Components: test >Reporter: Varun Vasudev >Assignee: Varun Vasudev > Fix For: 3.0.0-alpha2 > > Attachments: HADOOP-13835.001.patch, HADOOP-13835.002.patch, > HADOOP-13835.003.patch, HADOOP-13835.004.patch, HADOOP-13835.005.patch, > HADOOP-13835.006.patch, HADOOP-13835.007.patch, > HADOOP-13835.branch-2.007.patch > > > The mapreduce project has Google Test Framework code to allow testing of > native libraries. This should be moved to hadoop-common so that other > projects can use it as well. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-14777) S3Guard premerge changes: java 7 build & test tuning
[ https://issues.apache.org/jira/browse/HADOOP-14777?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16136685#comment-16136685 ] Steve Loughran commented on HADOOP-14777: - IJ will go either way with a popup, provided the language level == java 8. Set it to 7 and it forgets about lambdas, so the one-button convert-to-Callable disappears. I did a mixture of IntelliJ work (the initial bits) and then manual edits to deal with those I'd missed, plus the need to mark more variables as final. > S3Guard premerge changes: java 7 build & test tuning > > > Key: HADOOP-14777 > URL: https://issues.apache.org/jira/browse/HADOOP-14777 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3, test >Affects Versions: HADOOP-13345 >Reporter: Steve Loughran >Assignee: Steve Loughran >Priority: Minor > Fix For: HADOOP-13345 > > Attachments: HADOOP-14777-HADOOP-13345-001.patch > > > Another set of changes for S3Guard in preparation for merging via HADOOP-13998 > * checkstyle issues > * Made Java 7 friendly (indeed, tested applied to branch-2 with some POM > changes & tested there) > * improve diagnostics on some test failures. This would address HADOOP-14750.
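The Java 7-vs-8 point above is concrete: IntelliJ's one-button conversion only exists at language level 8, because it rewrites anonymous classes into lambdas. A minimal illustration of the two equivalent forms (hypothetical `makeCallable` and `invoke` helpers, not Hadoop code):

```java
import java.util.concurrent.Callable;

public class Java7Callable {
    // Java 8 form (what the one-button convert produces):
    //   Callable<String> c = () -> result;
    // Java 7-compatible equivalent, written out as an anonymous class;
    // note the captured variable must be (effectively) final.
    static Callable<String> makeCallable(final String result) {
        return new Callable<String>() {
            @Override
            public String call() {
                return result;
            }
        };
    }

    // Convenience wrapper so callers need not handle Callable's checked Exception.
    static String invoke(Callable<String> c) {
        try {
            return c.call();
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        System.out.println(invoke(makeCallable("ok")));
    }
}
```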
[jira] [Commented] (HADOOP-12071) conftest is not documented
[ https://issues.apache.org/jira/browse/HADOOP-12071?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16136658#comment-16136658 ] Hadoop QA commented on HADOOP-12071: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 23s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 21m 28s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 22m 40s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 51s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 7s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 14s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 27s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 16s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 
21m 38s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 21m 38s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 6s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 24s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 51s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 36s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red} 12m 5s{color} | {color:red} hadoop-common in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 1m 4s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 98m 33s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.fs.sftp.TestSFTPFileSystem | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:14b5c93 | | JIRA Issue | HADOOP-12071 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12883090/HADOOP-12071-002.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux 2d49a1439305 3.13.0-117-generic #164-Ubuntu SMP Fri Apr 7 11:05:26 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / d5ff57a | | Default Java | 1.8.0_144 | | findbugs | v3.1.0-RC1 | | unit | https://builds.apache.org/job/PreCommit-HADOOP-Build/13090/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common.txt | | Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/13090/testReport/ | | modules | C: hadoop-common-project/hadoop-common U: hadoop-common-project/hadoop-common | | Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/13090/console | | Powered by | Apache Yetus 0.6.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > conftest is not documented > -- > > Key: HADOOP-12071 > URL: https://issues.apache.org/jira/browse/HADOOP-12071 > Project: Hadoop Common > Issue Type: Bug > Components: documentation >Reporter: Kengo Seki >
[jira] [Commented] (HADOOP-14787) AliyunOSS: Implement the `createNonRecursive` operator
[ https://issues.apache.org/jira/browse/HADOOP-14787?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16136654#comment-16136654 ] Hudson commented on HADOOP-14787: - SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #12224 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/12224/]) HADOOP-14787. AliyunOSS: Implement the `createNonRecursive` operator. (stevel: rev 27ab5f7385c70f16fd593edc336c573c69f19331) * (edit) hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSFileSystem.java > AliyunOSS: Implement the `createNonRecursive` operator > -- > > Key: HADOOP-14787 > URL: https://issues.apache.org/jira/browse/HADOOP-14787 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs, fs/oss >Affects Versions: 3.0.0-beta1 >Reporter: Genmao Yu >Assignee: Genmao Yu > Fix For: 3.0.0-beta1 > > Attachments: HADOOP-14787.000.patch > > > {code} > testOverwriteNonEmptyDirectory(org.apache.hadoop.fs.aliyun.oss.contract.TestAliyunOSSContractCreate) > Time elapsed: 1.146 sec <<< ERROR! 
> java.io.IOException: createNonRecursive unsupported for this filesystem class > org.apache.hadoop.fs.aliyun.oss.AliyunOSSFileSystem > at > org.apache.hadoop.fs.FileSystem.createNonRecursive(FileSystem.java:1304) > at > org.apache.hadoop.fs.FileSystem$FileSystemDataOutputStreamBuilder.build(FileSystem.java:4163) > at > org.apache.hadoop.fs.contract.ContractTestUtils.writeDataset(ContractTestUtils.java:179) > at > org.apache.hadoop.fs.contract.AbstractContractCreateTest.testOverwriteNonEmptyDirectory(AbstractContractCreateTest.java:178) > at > org.apache.hadoop.fs.contract.AbstractContractCreateTest.testOverwriteNonEmptyDirectory(AbstractContractCreateTest.java:208) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:498) > at > org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47) > at > org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) > at > org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44) > at > org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) > at > org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) > at > org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) > at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55) > at > org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74) > testOverwriteEmptyDirectory(org.apache.hadoop.fs.aliyun.oss.contract.TestAliyunOSSContractCreate) > Time elapsed: 0.145 sec <<< ERROR! 
> java.io.IOException: createNonRecursive unsupported for this filesystem class > org.apache.hadoop.fs.aliyun.oss.AliyunOSSFileSystem > at > org.apache.hadoop.fs.FileSystem.createNonRecursive(FileSystem.java:1304) > at > org.apache.hadoop.fs.FileSystem$FileSystemDataOutputStreamBuilder.build(FileSystem.java:4163) > at > org.apache.hadoop.fs.contract.ContractTestUtils.writeDataset(ContractTestUtils.java:179) > at > org.apache.hadoop.fs.contract.AbstractContractCreateTest.testOverwriteEmptyDirectory(AbstractContractCreateTest.java:133) > at > org.apache.hadoop.fs.contract.AbstractContractCreateTest.testOverwriteEmptyDirectory(AbstractContractCreateTest.java:155) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:498) > at > org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47) > at > org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) > at > org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44) > at > org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) > at > org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) > at > org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) > at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55) > at > org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74) >
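The contract these failing tests exercise is that createNonRecursive must refuse to create a file whose parent directory is missing, rather than creating parents implicitly. A plain-JDK analogue of that semantics (an illustrative sketch, not the hadoop-aliyun patch, which works against Hadoop's FileSystem API):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.NoSuchFileException;
import java.nio.file.Path;

public class NonRecursiveCreate {
    // Fail when the parent directory is absent; never create parents.
    static Path createNonRecursive(Path file) throws IOException {
        Path parent = file.getParent();
        if (parent != null && !Files.isDirectory(parent)) {
            throw new NoSuchFileException("parent does not exist: " + parent);
        }
        return Files.createFile(file); // createFile itself never creates parents
    }

    // Tiny self-check: create succeeds under an existing dir, fails otherwise.
    static boolean demo() {
        try {
            Path d = Files.createTempDirectory("nrc");
            if (!Files.exists(createNonRecursive(d.resolve("a.txt")))) {
                return false;
            }
            try {
                createNonRecursive(d.resolve("missing").resolve("b.txt"));
                return false; // should have thrown
            } catch (NoSuchFileException expected) {
                return true;
            }
        } catch (IOException e) {
            return false;
        }
    }
}
```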
[jira] [Commented] (HADOOP-13139) Branch-2: S3a to use thread pool that blocks clients
[ https://issues.apache.org/jira/browse/HADOOP-13139?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16136640#comment-16136640 ] Steve Loughran commented on HADOOP-13139: - I see. What to do? Change core threads in 2.8.x so it gets picked up and passed down > Branch-2: S3a to use thread pool that blocks clients > > > Key: HADOOP-13139 > URL: https://issues.apache.org/jira/browse/HADOOP-13139 > Project: Hadoop Common > Issue Type: Task > Components: fs/s3 >Affects Versions: 2.8.0 >Reporter: Pieter Reuse >Assignee: Pieter Reuse > Fix For: 2.8.0 > > Attachments: HADOOP-13139-001.patch, HADOOP-13139-branch-2.001.patch, > HADOOP-13139-branch-2.002.patch, HADOOP-13139-branch-2-003.patch, > HADOOP-13139-branch-2-004.patch, HADOOP-13139-branch-2-005.patch, > HADOOP-13139-branch-2-006.patch > > > HADOOP-11684 is accepted into trunk, but was not applied to branch-2. I will > attach a patch applicable to branch-2. > It should be noted in CHANGES-2.8.0.txt that the config parameter > 'fs.s3a.threads.core' has been removed and the behavior of the > ThreadPool for s3a has been changed.
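For reference, a "thread pool that blocks clients" is commonly built from a semaphore in front of a fixed pool: when the pool and queue are full, submit() blocks the caller instead of rejecting the task. A hedged sketch with illustrative names (not the actual BlockingThreadPoolExecutorService from HADOOP-11684):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.Semaphore;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class BlockingSubmitter {
    private final ExecutorService pool;
    private final Semaphore permits;

    public BlockingSubmitter(int maxThreads, int maxQueued) {
        this.pool = Executors.newFixedThreadPool(maxThreads);
        this.permits = new Semaphore(maxThreads + maxQueued);
    }

    public Future<?> submit(final Runnable task) {
        permits.acquireUninterruptibly(); // caller blocks when pool is saturated
        return pool.submit(new Runnable() {
            @Override public void run() {
                try {
                    task.run();
                } finally {
                    permits.release(); // free a slot for a blocked submitter
                }
            }
        });
    }

    // Tiny self-check: push more tasks than slots through a 1-thread pool.
    static boolean demo() {
        BlockingSubmitter s = new BlockingSubmitter(1, 1);
        final AtomicInteger done = new AtomicInteger();
        for (int i = 0; i < 4; i++) {
            s.submit(new Runnable() {
                @Override public void run() { done.incrementAndGet(); }
            });
        }
        s.pool.shutdown();
        try {
            s.pool.awaitTermination(10, TimeUnit.SECONDS);
        } catch (InterruptedException e) {
            return false;
        }
        return done.get() == 4;
    }
}
```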
[jira] [Commented] (HADOOP-14194) Aliyun OSS should not use empty endpoint as default
[ https://issues.apache.org/jira/browse/HADOOP-14194?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16136637#comment-16136637 ] Steve Loughran commented on HADOOP-14194: - [~drankye]: don't forget to add an explicit +1 on the commit, for the record > Aliyun OSS should not use empty endpoint as default > --- > > Key: HADOOP-14194 > URL: https://issues.apache.org/jira/browse/HADOOP-14194 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/oss >Reporter: Mingliang Liu >Assignee: Genmao Yu > Fix For: 3.0.0-beta1 > > Attachments: HADOOP-14194.000.patch, HADOOP-14194.001.patch > > > In {{AliyunOSSFileSystemStore::initialize()}}, it retrieves the endPoint and > using empty string as a default value. > {code} > String endPoint = conf.getTrimmed(ENDPOINT_KEY, ""); > {code} > The plain value without validation is passed to OSSClient. If the endPoint is > not provided (empty string) or the endPoint is not valid, users will get > exception from Aliyun OSS sdk with raw exception message like: > {code} > java.lang.IllegalArgumentException: java.net.URISyntaxException: Expected > authority at index 8: https:// > at com.aliyun.oss.OSSClient.toURI(OSSClient.java:359) > at com.aliyun.oss.OSSClient.setEndpoint(OSSClient.java:313) > at com.aliyun.oss.OSSClient.(OSSClient.java:297) > at > org.apache.hadoop.fs.aliyun.oss.AliyunOSSFileSystemStore.initialize(AliyunOSSFileSystemStore.java:134) > at > org.apache.hadoop.fs.aliyun.oss.AliyunOSSFileSystem.initialize(AliyunOSSFileSystem.java:272) > at > org.apache.hadoop.fs.aliyun.oss.AliyunOSSTestUtils.createTestFileSystem(AliyunOSSTestUtils.java:63) > at > org.apache.hadoop.fs.aliyun.oss.TestAliyunOSSFileSystemContract.setUp(TestAliyunOSSFileSystemContract.java:47) > at junit.framework.TestCase.runBare(TestCase.java:139) > at junit.framework.TestResult$1.protect(TestResult.java:122) > at junit.framework.TestResult.runProtected(TestResult.java:142) > at junit.framework.TestResult.run(TestResult.java:125) > 
at junit.framework.TestCase.run(TestCase.java:129) > at junit.framework.TestSuite.runTest(TestSuite.java:255) > at junit.framework.TestSuite.run(TestSuite.java:250) > at > org.junit.internal.runners.JUnit38ClassRunner.run(JUnit38ClassRunner.java:84) > at org.junit.runner.JUnitCore.run(JUnitCore.java:160) > at > com.intellij.junit4.JUnit4IdeaTestRunner.startRunnerWithArgs(JUnit4IdeaTestRunner.java:68) > at > com.intellij.rt.execution.junit.IdeaTestRunner$Repeater.startRunnerWithArgs(IdeaTestRunner.java:51) > at > com.intellij.rt.execution.junit.JUnitStarter.prepareStreamsAndStart(JUnitStarter.java:237) > at > com.intellij.rt.execution.junit.JUnitStarter.main(JUnitStarter.java:70) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:497) > at com.intellij.rt.execution.application.AppMain.main(AppMain.java:147) > Caused by: java.net.URISyntaxException: Expected authority at index 8: > https:// > at java.net.URI$Parser.fail(URI.java:2848) > at java.net.URI$Parser.failExpecting(URI.java:2854) > at java.net.URI$Parser.parseHierarchical(URI.java:3102) > at java.net.URI$Parser.parse(URI.java:3053) > at java.net.URI.(URI.java:588) > at com.aliyun.oss.OSSClient.toURI(OSSClient.java:357) > {code} > Let's check endPoint is not null or empty, catch the IllegalArgumentException > and log it, wrapping the exception with clearer message stating the > misconfiguration in endpoint or credentials. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
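The validation proposed in the last paragraph can be as small as a fail-fast check before the value reaches OSSClient. A sketch with a hypothetical helper (the message text is illustrative, not the wording of the patch):

```java
// Hypothetical validation sketch: reject a missing/blank endpoint up front
// with a configuration-oriented message, instead of surfacing the SDK's raw
// "Expected authority at index 8: https://" URISyntaxException.
public class EndpointCheck {
    static String validateEndpoint(String endPoint) {
        if (endPoint == null || endPoint.trim().isEmpty()) {
            throw new IllegalArgumentException(
                "Aliyun OSS endpoint is null or empty; "
                + "check the endpoint configuration and credentials");
        }
        return endPoint.trim();
    }
}
```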
[jira] [Commented] (HADOOP-14649) Update aliyun-sdk-oss version to 2.8.1
[ https://issues.apache.org/jira/browse/HADOOP-14649?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16136635#comment-16136635 ] Steve Loughran commented on HADOOP-14649: - OK, got HADOOP-14787 in...what else is needed for this? > Update aliyun-sdk-oss version to 2.8.1 > -- > > Key: HADOOP-14649 > URL: https://issues.apache.org/jira/browse/HADOOP-14649 > Project: Hadoop Common > Issue Type: Sub-task >Reporter: Ray Chiang >Assignee: Genmao Yu > > Update the dependency > com.aliyun.oss:aliyun-sdk-oss:2.4.1 > to the latest (2.8.1).
[jira] [Updated] (HADOOP-14787) AliyunOSS: Implement the `createNonRecursive` operator
[ https://issues.apache.org/jira/browse/HADOOP-14787?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran updated HADOOP-14787: Resolution: Fixed Fix Version/s: 3.0.0-beta1 Status: Resolved (was: Patch Available) LGTM +1 committed to trunk. Thanks! > AliyunOSS: Implement the `createNonRecursive` operator > -- > > Key: HADOOP-14787 > URL: https://issues.apache.org/jira/browse/HADOOP-14787 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs, fs/oss >Affects Versions: 3.0.0-beta1 >Reporter: Genmao Yu >Assignee: Genmao Yu > Fix For: 3.0.0-beta1 > > Attachments: HADOOP-14787.000.patch > > > {code} > testOverwriteNonEmptyDirectory(org.apache.hadoop.fs.aliyun.oss.contract.TestAliyunOSSContractCreate) > Time elapsed: 1.146 sec <<< ERROR! > java.io.IOException: createNonRecursive unsupported for this filesystem class > org.apache.hadoop.fs.aliyun.oss.AliyunOSSFileSystem > at > org.apache.hadoop.fs.FileSystem.createNonRecursive(FileSystem.java:1304) > at > org.apache.hadoop.fs.FileSystem$FileSystemDataOutputStreamBuilder.build(FileSystem.java:4163) > at > org.apache.hadoop.fs.contract.ContractTestUtils.writeDataset(ContractTestUtils.java:179) > at > org.apache.hadoop.fs.contract.AbstractContractCreateTest.testOverwriteNonEmptyDirectory(AbstractContractCreateTest.java:178) > at > org.apache.hadoop.fs.contract.AbstractContractCreateTest.testOverwriteNonEmptyDirectory(AbstractContractCreateTest.java:208) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:498) > at > org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47) > at > org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) > at > 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44) > at > org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) > at > org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) > at > org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) > at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55) > at > org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74) > testOverwriteEmptyDirectory(org.apache.hadoop.fs.aliyun.oss.contract.TestAliyunOSSContractCreate) > Time elapsed: 0.145 sec <<< ERROR! > java.io.IOException: createNonRecursive unsupported for this filesystem class > org.apache.hadoop.fs.aliyun.oss.AliyunOSSFileSystem > at > org.apache.hadoop.fs.FileSystem.createNonRecursive(FileSystem.java:1304) > at > org.apache.hadoop.fs.FileSystem$FileSystemDataOutputStreamBuilder.build(FileSystem.java:4163) > at > org.apache.hadoop.fs.contract.ContractTestUtils.writeDataset(ContractTestUtils.java:179) > at > org.apache.hadoop.fs.contract.AbstractContractCreateTest.testOverwriteEmptyDirectory(AbstractContractCreateTest.java:133) > at > org.apache.hadoop.fs.contract.AbstractContractCreateTest.testOverwriteEmptyDirectory(AbstractContractCreateTest.java:155) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:498) > at > org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47) > at > org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) > at > org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44) > at > org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) > at > 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) > at > org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) > at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55) > at > org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74) > testCreateFileOverExistingFileNoOverwrite(org.apache.hadoop.fs.aliyun.oss.contract.TestAliyunOSSContractCreate) > Time elapsed: 0.147 sec <<< ERROR! > java.io.IOException: createNonRecursive unsupported for this filesystem class >
[jira] [Updated] (HADOOP-12071) conftest is not documented
[ https://issues.apache.org/jira/browse/HADOOP-12071?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran updated HADOOP-12071: Priority: Minor (was: Major) > conftest is not documented > -- > > Key: HADOOP-12071 > URL: https://issues.apache.org/jira/browse/HADOOP-12071 > Project: Hadoop Common > Issue Type: Bug > Components: documentation >Reporter: Kengo Seki >Assignee: Kengo Seki >Priority: Minor > Attachments: HADOOP-12071.001.patch, HADOOP-12071.001.patch, > HADOOP-12071-002.patch > > > HADOOP-7947 introduced new hadoop subcommand conftest, but it is not > documented yet.
[jira] [Updated] (HADOOP-12071) conftest is not documented
[ https://issues.apache.org/jira/browse/HADOOP-12071?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran updated HADOOP-12071: Attachment: HADOOP-12071-002.patch HADOOP-12071 patch 002 * patch applies to trunk * quick review of ConfTest itself, with IDE-suggested/applied cleanup I looked at what it'd take to add XInclude support (it fails right now), but concluded it'd take real work, so I just documented the limitation. If we really wanted to improve conftest I'd go for XSD/RelaxNG validation, followed by reading the DOM and checking that way. > conftest is not documented > -- > > Key: HADOOP-12071 > URL: https://issues.apache.org/jira/browse/HADOOP-12071 > Project: Hadoop Common > Issue Type: Bug > Components: documentation >Reporter: Kengo Seki >Assignee: Kengo Seki > Attachments: HADOOP-12071.001.patch, HADOOP-12071.001.patch, > HADOOP-12071-002.patch > > > HADOOP-7947 introduced new hadoop subcommand conftest, but it is not > documented yet.
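The XSD-validation idea floated in that comment can be sketched with the stock javax.xml.validation API shipped in the JDK; the class name and inline schema below are illustrative, not part of conftest:

```java
import java.io.StringReader;
import javax.xml.XMLConstants;
import javax.xml.transform.stream.StreamSource;
import javax.xml.validation.Schema;
import javax.xml.validation.SchemaFactory;
import javax.xml.validation.Validator;

public class XsdValidateSketch {
    // Validate an XML document against an XSD; any parse or validation
    // failure is reported as "invalid" rather than a raw exception.
    static boolean isValid(String xsd, String xml) {
        try {
            SchemaFactory sf =
                SchemaFactory.newInstance(XMLConstants.W3C_XML_SCHEMA_NS_URI);
            Schema schema = sf.newSchema(new StreamSource(new StringReader(xsd)));
            Validator v = schema.newValidator();
            v.validate(new StreamSource(new StringReader(xml)));
            return true;
        } catch (Exception e) {
            return false;
        }
    }
}
```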
[jira] [Updated] (HADOOP-12071) conftest is not documented
[ https://issues.apache.org/jira/browse/HADOOP-12071?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran updated HADOOP-12071: Status: Patch Available (was: Open) > conftest is not documented > -- > > Key: HADOOP-12071 > URL: https://issues.apache.org/jira/browse/HADOOP-12071 > Project: Hadoop Common > Issue Type: Bug > Components: documentation >Reporter: Kengo Seki >Assignee: Kengo Seki > Attachments: HADOOP-12071.001.patch, HADOOP-12071.001.patch, > HADOOP-12071-002.patch > > > HADOOP-7947 introduced new hadoop subcommand conftest, but it is not > documented yet.
[jira] [Updated] (HADOOP-12071) conftest is not documented
[ https://issues.apache.org/jira/browse/HADOOP-12071?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran updated HADOOP-12071: Status: Open (was: Patch Available) > conftest is not documented > -- > > Key: HADOOP-12071 > URL: https://issues.apache.org/jira/browse/HADOOP-12071 > Project: Hadoop Common > Issue Type: Bug > Components: documentation >Reporter: Kengo Seki >Assignee: Kengo Seki > Attachments: HADOOP-12071.001.patch, HADOOP-12071.001.patch > > > HADOOP-7947 introduced new hadoop subcommand conftest, but it is not > documented yet.
[jira] [Commented] (HADOOP-14795) TestMapFileOutputFormat missing @after annotation
[ https://issues.apache.org/jira/browse/HADOOP-14795?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16136513#comment-16136513 ] Akira Ajisaka commented on HADOOP-14795: bq. I noticed that this test exists in multiple places; o.a.h.mapred and o.a.h.mapreduce.lib.output. I'm not sure I see the point in this teardown method, but if we are fixing it, any reason we wouldn't want to fix both tests (or eliminate one?). Yes. We need to fix both. > TestMapFileOutputFormat missing @after annotation > - > > Key: HADOOP-14795 > URL: https://issues.apache.org/jira/browse/HADOOP-14795 > Project: Hadoop Common > Issue Type: Bug >Reporter: Ajay Kumar >Assignee: Ajay Kumar >Priority: Minor > Attachments: HADOOP-14795.01.patch > > > TestMapFileOutputFormat missing @after annotation. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-14729) Upgrade JUnit 3 TestCase to JUnit 4
[ https://issues.apache.org/jira/browse/HADOOP-14729?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16136509#comment-16136509 ] Akira Ajisaka commented on HADOOP-14729: Thank you for updating the patch, [~ajayydv]! My comments: * Would you remove @Test from TestTrash#performanceTestDeleteSameFile since this should not be run as a unit test? Sorry for back and forth. * Would you undo the change in TestActiveStandbyElectorRealZK and TestWritableName since the classes are already migrated to JUnit 4 style? * org.apache.hadoop.mapreduce.lib.output.TestFileOutputCommitter#testMapFileOutputCommitterV2 - missing @Test annotation. > Upgrade JUnit 3 TestCase to JUnit 4 > --- > > Key: HADOOP-14729 > URL: https://issues.apache.org/jira/browse/HADOOP-14729 > Project: Hadoop Common > Issue Type: Test >Reporter: Akira Ajisaka >Assignee: Ajay Kumar > Labels: newbie > Attachments: HADOOP-14729.001.patch, HADOOP-14729.002.patch, > HADOOP-14729.003.patch, HADOOP-14729.004.patch, HADOOP-14729.005.patch, > HADOOP-14729.006.patch, HADOOP-14729.007.patch, HADOOP-14729.008.patch, > HADOOP-14729.009.patch > > > There are still test classes that extend from junit.framework.TestCase in > hadoop-common. Upgrade them to JUnit4. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-12077) Provide a multi-URI replication Inode for ViewFs
[ https://issues.apache.org/jira/browse/HADOOP-12077?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16136444#comment-16136444 ] Gera Shegalov commented on HADOOP-12077: Hi [~chris.douglas], I just came back from vacation. I'll test everything I can as unprivileged user and will talk to [~mingma] if I need some root functionality. > Provide a multi-URI replication Inode for ViewFs > > > Key: HADOOP-12077 > URL: https://issues.apache.org/jira/browse/HADOOP-12077 > Project: Hadoop Common > Issue Type: New Feature > Components: fs >Reporter: Gera Shegalov >Assignee: Gera Shegalov > Attachments: HADOOP-12077.001.patch, HADOOP-12077.002.patch, > HADOOP-12077.003.patch, HADOOP-12077.004.patch, HADOOP-12077.005.patch, > HADOOP-12077.006.patch, HADOOP-12077.007.patch, HADOOP-12077.008.patch, > HADOOP-12077.009.patch > > > This JIRA is to provide simple "replication" capabilities for applications > that maintain logically equivalent paths in multiple locations for caching or > failover (e.g., S3 and HDFS). We noticed a simple common HDFS usage pattern > in our applications. They host their data on some logical cluster C. There > are corresponding HDFS clusters in multiple datacenters. When the application > runs in DC1, it prefers to read from C in DC1, and the applications prefers > to failover to C in DC2 if the application is migrated to DC2 or when C in > DC1 is unavailable. New application data versions are created > periodically/relatively infrequently. > In order to address many common scenarios in a general fashion, and to avoid > unnecessary code duplication, we implement this functionality in ViewFs (our > default FileSystem spanning all clusters in all datacenters) in a project > code-named Nfly (N as in N datacenters). Currently each ViewFs Inode points > to a single URI via ChRootedFileSystem. Consequently, we introduce a new type > of links that points to a list of URIs that are each going to be wrapped in > ChRootedFileSystem. 
> A typical usage: /nfly/C/user->/DC1/C/user,/DC2/C/user,...
> This collection of ChRootedFileSystem instances is fronted by the Nfly filesystem object that is actually used for the mount point/Inode. The Nfly filesystem backs a single logical path /nfly/C/user//path by multiple physical paths.
> The Nfly filesystem supports setting minReplication. As long as the number of URIs on which an update has succeeded is greater than or equal to minReplication, exceptions are only logged but not thrown. Each update operation is currently executed serially (client-bandwidth-driven parallelism will be added later).
> A file create/write:
> # Creates a temporary invisible _nfly_tmp_file in the intended chrooted filesystem.
> # Returns a FSDataOutputStream that wraps the output streams returned by 1.
> # All writes are forwarded to each output stream.
> # On close of the stream created by 2, all n streams are closed, and the files are renamed from _nfly_tmp_file to file. All files receive the same mtime corresponding to the client system time as of the beginning of this step.
> # If at least minReplication destinations have gone through steps 1-4 without failures, the transaction is considered logically committed; otherwise a best-effort attempt at cleaning up the temporary files is made.
> As for reads, we support a notion of locality similar to HDFS /DC/rack/node. We sort Inode URIs using NetworkTopology by their authorities. These are typically host names in simple HDFS URIs. If the authority is missing, as is the case with the local file:///, the local host name InetAddress.getLocalHost() is assumed. This makes sure that the local file system is always the closest one to the reader in this approach. For our Hadoop 2 hdfs URIs that are based on nameservice ids instead of hostnames it is very easy to adjust the topology script since our nameservice ids already contain the datacenter.
> As for rack and node we can simply output any string such as /DC/rack-nsid/node-nsid, since we only care about datacenter locality for such filesystem clients.
> There are 2 policies/additions to the read call path that make it more expensive but improve the user experience:
> - readMostRecent - when this policy is enabled, Nfly first checks the mtime for the path under all URIs and sorts them from most recent to least recent. Nfly then sorts the set of most recent URIs topologically in the same manner as described above.
> - repairOnRead - when readMostRecent is enabled, Nfly already has to RPC all underlying destinations. With repairOnRead, the Nfly filesystem would additionally attempt to refresh destinations with the path missing or a stale version of the path, using the nearest available most recent destination.
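The create/write transaction described above can be sketched in miniature. This is an illustrative sketch only, not the actual patch code: the {{Destination}} interface stands in for a ChRootedFileSystem, and every name here is invented for the example.

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch of the Nfly commit rule. An update is logically
// committed iff the temporary _nfly_tmp_ copy succeeded on at least
// minReplication destinations; below that threshold the temporary files
// are cleaned up on a best-effort basis.
public class NflyCommitSketch {

  public interface Destination {
    // step 1: create the invisible _nfly_tmp_ copy and write the data
    void writeTemp(String path, byte[] data) throws Exception;
    // step 4: promote the temp file, stamping the shared client mtime
    void renameTempToFinal(String path, long mtime) throws Exception;
    // cleanup path for aborted transactions
    void deleteTemp(String path);
  }

  public static boolean commit(List<Destination> dests, int minReplication,
                               String path, byte[] data) {
    List<Destination> succeeded = new ArrayList<>();
    for (Destination d : dests) {          // serial, as in the description
      try {
        d.writeTemp(path, data);
        succeeded.add(d);
      } catch (Exception e) {
        // a failed destination is only logged, not immediately fatal
      }
    }
    if (succeeded.size() >= minReplication) {
      // all surviving copies receive the same client-side mtime
      long mtime = System.currentTimeMillis();
      for (Destination d : succeeded) {
        try {
          d.renameTempToFinal(path, mtime);
        } catch (Exception e) {
          // rename failures are likewise only logged
        }
      }
      return true;                         // logically committed
    }
    for (Destination d : succeeded) {      // best-effort cleanup
      d.deleteTemp(path);
    }
    return false;
  }
}
```

The point of the threshold is that partial failure is tolerated (logged, not thrown) only while at least minReplication physical copies make it through all four steps.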
[jira] [Commented] (HADOOP-12862) LDAP Group Mapping over SSL can not specify trust store
[ https://issues.apache.org/jira/browse/HADOOP-12862?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16136364#comment-16136364 ]

Hadoop QA commented on HADOOP-12862:
------------------------------------

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 31s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 24m 47s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 26m 53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 5s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 9s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 17m 34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 17m 34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 50s{color} | {color:green} hadoop-common-project/hadoop-common: The patch generated 0 new + 22 unchanged - 2 fixed = 22 total (was 24) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 12s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 10m 47s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 40s{color} | {color:green} The patch does not generate ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 98m 10s{color} | {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HADOOP-12862 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12794837/HADOOP-12862.007.patch |
| Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle xml |
| uname | Linux 233e9ca2dd31 3.13.0-117-generic #164-Ubuntu SMP Fri Apr 7 11:05:26 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / b6bfb2f |
| Default Java | 1.8.0_144 |
| findbugs | v3.1.0-RC1 |
| Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/13089/testReport/ |
| modules | C: hadoop-common-project/hadoop-common U: hadoop-common-project/hadoop-common |
| Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/13089/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT http://yetus.apache.org |

This message was automatically generated.

> LDAP Group Mapping over SSL can not specify trust store
>
>
>                 Key: HADOOP-12862
>                 URL: https://issues.apache.org/jira/browse/HADOOP-12862
>             Project: Hadoop Common
>          Issue Type: Bug
>
[jira] [Commented] (HADOOP-14519) Client$Connection#waitForWork may suffer from spurious wakeups
[ https://issues.apache.org/jira/browse/HADOOP-14519?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16136359#comment-16136359 ]

John Zhuge commented on HADOOP-14519:
-------------------------------------

Thanks [~vagarychen] and [~djp] for looking at this subtle issue carefully.

bq. I think the patch slightly changed the syntax of this function. As pointed out by Junping, it should loop until any 1 out of 3 conditions is broken, in order to avoid spurious wakeup.

bq. However, when we do stop() - where we set running to false, it sounds like we are missing to send notification. May be safe to add notification?

stop() does get it out of the loop by sending an interrupt:
{code:java|title=Client#close}
if (!running.compareAndSet(true, false)) {
  return;
}
// wake up all connections
for (Connection conn : connections.values()) {
  conn.interrupt();    <<<=
}
{code}
{code:java|title=Client$Connection#waitForWork}
while (calls.isEmpty() && !shouldCloseConnection.get() && running.get()) {
  long timeout = maxIdleTime - (Time.now() - lastActivity.get());
  if (timeout > 0) {
    try {
      wait(timeout);
    } catch (InterruptedException e) {
      // Restore the interrupted status
      Thread.currentThread().interrupt();
      break;    <=
    }
  }
}
{code}
Since this is a day-one bug, quite subtle, and its consequence is not that severe (we have lived with it for a long time), the worst case it can cause is extra TIME_WAIT sockets after the connection thread quits prematurely, thus not taking full advantage of the connection pool. I am ok if you'd like to push it out to 2.8.2.

> Client$Connection#waitForWork may suffer from spurious wakeups
>
>
>                 Key: HADOOP-14519
>                 URL: https://issues.apache.org/jira/browse/HADOOP-14519
>             Project: Hadoop Common
>          Issue Type: Bug
>          Components: ipc
>    Affects Versions: 2.8.0
>            Reporter: John Zhuge
>            Assignee: John Zhuge
>            Priority: Critical
>         Attachments: HADOOP-14519.001.patch
>
>
> {{Client$Connection#waitForWork}} may suffer spurious wakeup because the {{wait}} is not surrounded by a loop.
> See [https://docs.oracle.com/javase/7/docs/api/java/lang/Object.html#wait()].
> {code:title=Client$Connection#waitForWork}
> if (calls.isEmpty() && !shouldCloseConnection.get() && running.get()) {
>   long timeout = maxIdleTime - (Time.now() - lastActivity.get());
>   if (timeout > 0) {
>     try {
>       wait(timeout);    << spurious wakeup
>     } catch (InterruptedException e) {}
>   }
> }
> {code}

--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

---------------------------------------------------------------------
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org
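The fix under discussion is the classic guarded-wait idiom from the Object.wait() javadoc: re-check the guard conditions in a loop so that a spurious wakeup (or a stray notify) cannot end the idle wait early. A minimal standalone sketch of that idiom, with invented names rather than Hadoop's actual Client internals:

```java
// Minimal sketch of the spurious-wakeup-safe wait idiom (names invented;
// not Hadoop's actual Client code). The 'while' is the fix: with an 'if',
// a spurious wakeup would fall through even though no work arrived and
// the idle timeout had not expired.
public class IdleWaiter {
  private boolean hasWork = false;
  private final long maxIdleMillis;

  public IdleWaiter(long maxIdleMillis) {
    this.maxIdleMillis = maxIdleMillis;
  }

  public synchronized void submitWork() {
    hasWork = true;
    notifyAll();                  // wake any thread parked in waitForWork
  }

  /** Returns true if work arrived before the idle timeout expired. */
  public synchronized boolean waitForWork() throws InterruptedException {
    long deadline = System.currentTimeMillis() + maxIdleMillis;
    while (!hasWork) {
      // recompute the remaining timeout on every (possibly spurious) wakeup
      long remaining = deadline - System.currentTimeMillis();
      if (remaining <= 0) {
        return false;             // the idle period genuinely elapsed
      }
      wait(remaining);
    }
    return true;
  }
}
```

Recomputing the remaining timeout inside the loop also prevents a spurious wakeup from restarting the full idle period, which an unconditional `wait(maxIdleMillis)` in a loop would do.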