[jira] [Comment Edited] (HADOOP-12862) LDAP Group Mapping over SSL can not specify trust store
[ https://issues.apache.org/jira/browse/HADOOP-12862?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16136258#comment-16136258 ] Wei-Chiu Chuang edited comment on HADOOP-12862 at 8/22/17 4:48 AM: --- Hello, trying to bump this patch. I have tested the last patch against Cloudera's latest CDH version. Our internal LDAP server's certificate is self-signed, so Hadoop needs to specify a trust store. Without this patch, LdapGroupsMapping fails with an SSLHandshakeException: {quote} 2017-08-21 21:07:51,517 WARN org.apache.hadoop.security.LdapGroupsMapping: Failed to get groups for user mapred (retry=2) by javax.naming.CommunicationException : simple bind failed: scale-ad.ad.halxg.cloudera.com:636 [Root exception is javax.net.ssl.SSLHandshakeException: sun.security.validator.ValidatorException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target] {quote} After applying the patch, group mapping works successfully.
> LDAP Group Mapping over SSL can not specify trust store > --- > > Key: HADOOP-12862 > URL: https://issues.apache.org/jira/browse/HADOOP-12862 > Project: Hadoop Common > Issue Type: Bug > Reporter: Wei-Chiu Chuang > Assignee: Wei-Chiu Chuang > Attachments: HADOOP-12862.001.patch, HADOOP-12862.002.patch, > HADOOP-12862.003.patch, HADOOP-12862.004.patch, HADOOP-12862.005.patch, > HADOOP-12862.006.patch, HADOOP-12862.007.patch > > > In a secure environment, SSL is used to encrypt LDAP requests for group > mapping resolution. > We (+[~yoderme], +[~tgrayson]) have found that its implementation is strange. > For context: the Hadoop NameNode, as an LDAP client, talks to an LDAP server > to resolve the group mapping of a user. With LDAP over SSL, the typical > scenario is one-way authentication (the client verifies that the server's > certificate is genuine) by storing the server's certificate in the client's > truststore. > A rarer scenario is two-way authentication: in addition to the client > verifying the server through its truststore, the server also verifies the > client's certificate, which the client stores in its own keystore. > However, the current implementation of LDAP over SSL does not seem correct, > in that it configures only a keystore and no truststore (so the LDAP server > can verify Hadoop's certificate, but Hadoop may not be able to verify the > LDAP server's certificate). > I think there should be an extra pair of properties to specify the > truststore/password for the LDAP server, and these should be used to > configure the system properties > {{javax.net.ssl.trustStore}}/{{javax.net.ssl.trustStorePassword}}. > I am a security layman, so my words may be imprecise, but I hope this makes > sense. 
> Oracle's SSL LDAP documentation: > http://docs.oracle.com/javase/jndi/tutorial/ldap/security/ssl.html > JSSE reference guide: > http://docs.oracle.com/javase/7/docs/technotes/guides/security/jsse/JSSERefGuide.html -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
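The issue above proposes wiring a client-side trust store into the JNDI LDAP connection via the standard JSSE system properties. As a minimal sketch (the method and class names here are illustrative, and the trust-store path and password are placeholders; only the `javax.net.ssl.*` property names and JNDI environment keys are standard), the client side might look like:

```java
import java.util.Hashtable;
import javax.naming.Context;

public class LdapTrustStoreSketch {
    // Sketch: set the standard JSSE trust-store system properties before
    // creating a JNDI LDAP context, as the issue proposes. JSSE reads
    // these properties when building the default SSLContext used by
    // "ldaps://" connections.
    public static Hashtable<String, String> buildEnv(String trustStore,
                                                     String trustStorePassword) {
        System.setProperty("javax.net.ssl.trustStore", trustStore);
        System.setProperty("javax.net.ssl.trustStorePassword", trustStorePassword);

        Hashtable<String, String> env = new Hashtable<>();
        env.put(Context.INITIAL_CONTEXT_FACTORY, "com.sun.jndi.ldap.LdapCtxFactory");
        env.put(Context.PROVIDER_URL, "ldaps://ldap.example.com:636"); // placeholder host
        env.put(Context.SECURITY_PROTOCOL, "ssl");
        // new InitialDirContext(env) would now verify the server's
        // certificate against the configured trust store (needs a live server).
        return env;
    }

    public static void main(String[] args) {
        Hashtable<String, String> env =
            buildEnv("/etc/hadoop/ldap-truststore.jks", "changeit");
        System.out.println(env.get(Context.PROVIDER_URL));
    }
}
```

With a self-signed server certificate imported into that trust store, the PKIX path-building failure quoted above goes away; without it, the default JRE cacerts cannot validate the chain.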
[jira] [Commented] (HADOOP-14705) Add batched reencryptEncryptedKey interface to KMS
[ https://issues.apache.org/jira/browse/HADOOP-14705?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16136270#comment-16136270 ] Hadoop QA commented on HADOOP-14705: | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 17s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 3 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 20s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 1s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 18m 49s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 49s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 48s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 25s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 30s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 10s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} 
mvninstall {color} | {color:green} 1m 19s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 19s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 15m 19s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 49s{color} | {color:green} hadoop-common-project: The patch generated 0 new + 148 unchanged - 3 fixed = 148 total (was 151) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 34s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 36s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 23s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 10m 28s{color} | {color:green} hadoop-common in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 3m 19s{color} | {color:green} hadoop-kms in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 34s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 85m 6s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:14b5c93 | | JIRA Issue | HADOOP-14705 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12883037/HADOOP-14705.11.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux b18cbcbbf34b 3.13.0-119-generic #166-Ubuntu SMP Wed May 3 12:18:55 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / b6bfb2f | | Default Java | 1.8.0_144 | | findbugs | v3.1.0-RC1 | | Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/13088/testReport/ | | modules | C: hadoop-common-project/hadoop-common hadoop-common-project/hadoop-kms U: hadoop-common-project | | Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/13088/console | | Powered by | Apache Yetus 0.6.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > Add batched reencryptEncryptedKey interface to KMS > -- > > Key:
[jira] [Commented] (HADOOP-12862) LDAP Group Mapping over SSL can not specify trust store
[ https://issues.apache.org/jira/browse/HADOOP-12862?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16136263#comment-16136263 ] Kai Zheng commented on HADOOP-12862: I am on annual leave and vacation, so email responses will be delayed. For SSM and Hadoop 3.0 matters, please contact Wei Zhou; for NSG-related benchmarks, please contact Shunyang; for HAS, Jiajia.
[jira] [Commented] (HADOOP-12862) LDAP Group Mapping over SSL can not specify trust store
[ https://issues.apache.org/jira/browse/HADOOP-12862?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16136262#comment-16136262 ] Wei-Chiu Chuang commented on HADOOP-12862: -- [~drankye] would you be interested in reviewing this patch? It's pretty straightforward but makes LdapGroupsMapping more secure. Thanks in advance.
[jira] [Commented] (HADOOP-12862) LDAP Group Mapping over SSL can not specify trust store
[ https://issues.apache.org/jira/browse/HADOOP-12862?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16136258#comment-16136258 ] Wei-Chiu Chuang commented on HADOOP-12862: -- Hello, trying to bump this patch. I have tested the last patch against Cloudera's latest CDH version. Our internal LDAP server's certificate is self-signed, so Hadoop needs to specify a trust store. Without this patch, LdapGroupsMapping fails with an SSLHandshakeException: {quote} 2017-08-21 21:07:51,517 WARN org.apache.hadoop.security.LdapGroupsMapping: Failed to get groups for user mapred (retry=2) by javax.naming.CommunicationException : simple bind failed: scale-ad.ad.halxg.cloudera.com:636 [Root exception is javax.net.ssl.SSLHandshakeException: sun.security.validator.ValidatorException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target] {quote} After applying the patch, group mapping works successfully.
[jira] [Updated] (HADOOP-14705) Add batched reencryptEncryptedKey interface to KMS
[ https://issues.apache.org/jira/browse/HADOOP-14705?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xiao Chen updated HADOOP-14705: --- Attachment: HADOOP-14705.11.patch Patch 11 to address comments from Rushabh. Thanks for the review; hopefully #11 works. The check here is just a safety check to make sure the batch isn't insanely huge. It is not set to the same level as the NameNode's, because the KMS, as part of hadoop-common, should not depend on the HDFS NameNode. For the test comment, I added the '// Decrypt it again and it should be the same' test. I don't think we need the '// Generate another EEK and make sure it's different from the first' test, since we already compare with the original EEKs, which shouldn't be the same; that case is covered by the existing test in TestGenerate. > Add batched reencryptEncryptedKey interface to KMS > -- > > Key: HADOOP-14705 > URL: https://issues.apache.org/jira/browse/HADOOP-14705 > Project: Hadoop Common > Issue Type: Improvement > Components: kms > Reporter: Xiao Chen > Assignee: Xiao Chen > Attachments: HADOOP-14705.01.patch, HADOOP-14705.02.patch, > HADOOP-14705.03.patch, HADOOP-14705.04.patch, HADOOP-14705.05.patch, > HADOOP-14705.06.patch, HADOOP-14705.07.patch, HADOOP-14705.08.patch, > HADOOP-14705.09.patch, HADOOP-14705.10.patch, HADOOP-14705.11.patch > > > HADOOP-13827 already enabled the KMS to re-encrypt an {{EncryptedKeyVersion}}. > As the performance results of HDFS-10899 show, communication overhead > with the KMS occupies the majority of the time. So this jira proposes to add > a batched interface to re-encrypt multiple EDEKs in one call.
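The rationale above is that per-call communication overhead dominates, so sending many EDEKs per request amortizes it. A standalone illustration of the batching arithmetic (this is not the real KMS client API; `partition` and the batch size are hypothetical): with a fixed per-call overhead, N items in batches of B cost ceil(N/B) round trips instead of N.

```java
import java.util.ArrayList;
import java.util.List;

public class BatchSketch {
    // Split a list into consecutive batches of at most batchSize elements.
    // Each batch would correspond to one KMS round trip in the batched API.
    public static <T> List<List<T>> partition(List<T> items, int batchSize) {
        List<List<T>> batches = new ArrayList<>();
        for (int i = 0; i < items.size(); i += batchSize) {
            batches.add(items.subList(i, Math.min(i + batchSize, items.size())));
        }
        return batches;
    }

    public static void main(String[] args) {
        List<Integer> edeks = new ArrayList<>();
        for (int i = 0; i < 1000; i++) edeks.add(i);
        // 1000 per-key calls collapse into 10 batched calls of 100 each.
        System.out.println(partition(edeks, 100).size()); // prints 10
    }
}
```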
[jira] [Commented] (HADOOP-14729) Upgrade JUnit 3 TestCase to JUnit 4
[ https://issues.apache.org/jira/browse/HADOOP-14729?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16136146#comment-16136146 ] Hadoop QA commented on HADOOP-14729: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 18s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 58 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 17s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 39s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m 12s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 2m 11s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 6m 43s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 8m 17s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 5m 12s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 17s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} 
mvninstall {color} | {color:green} 4m 56s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 13m 2s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 13m 2s{color} | {color:green} root generated 0 new + 1295 unchanged - 2 fixed = 1295 total (was 1297) {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 2m 17s{color} | {color:orange} root: The patch generated 41 new + 779 unchanged - 90 fixed = 820 total (was 869) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 7m 15s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 11m 20s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 5m 46s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red} 8m 31s{color} | {color:red} hadoop-common in the patch failed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 37s{color} | {color:green} hadoop-yarn-server-web-proxy in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 56s{color} | {color:green} hadoop-mapreduce-client-core in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 56s{color} | {color:green} hadoop-mapreduce-client-common in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green}109m 10s{color} | {color:green} hadoop-mapreduce-client-jobclient in the patch passed. 
{color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 12m 12s{color} | {color:green} hadoop-mapreduce-client-nativetask in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 21s{color} | {color:green} hadoop-mapreduce-examples in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 6m 46s{color} | {color:green} hadoop-streaming in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 44s{color} | {color:green} hadoop-datajoin in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 10s{color} | {color:green} hadoop-extras in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 50s{color} | {color:green} hadoop-aws in the patch passed. {color} |
[jira] [Commented] (HADOOP-14729) Upgrade JUnit 3 TestCase to JUnit 4
[ https://issues.apache.org/jira/browse/HADOOP-14729?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16136134#comment-16136134 ] Hadoop QA commented on HADOOP-14729: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 18s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 1s{color} | {color:green} The patch appears to include 76 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 1m 30s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 20m 24s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 21m 37s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 2m 46s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 11m 35s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 16m 57s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 10m 24s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 22s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} 
mvninstall {color} | {color:green} 11m 28s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 18m 19s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 18m 19s{color} | {color:green} root generated 0 new + 1295 unchanged - 2 fixed = 1295 total (was 1297) {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 3m 24s{color} | {color:orange} root: The patch generated 42 new + 1136 unchanged - 110 fixed = 1178 total (was 1246) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 16m 4s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 20m 17s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 9m 30s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red} 10m 12s{color} | {color:red} hadoop-common in the patch failed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 68m 52s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 51s{color} | {color:green} hadoop-yarn-server-web-proxy in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 4m 21s{color} | {color:green} hadoop-mapreduce-client-core in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 34s{color} | {color:green} hadoop-mapreduce-client-common in the patch passed. 
{color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 12m 42s{color} | {color:green} hadoop-mapreduce-client-app in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 5m 38s{color} | {color:green} hadoop-mapreduce-client-hs in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green}120m 50s{color} | {color:green} hadoop-mapreduce-client-jobclient in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 12m 4s{color} | {color:green} hadoop-mapreduce-client-nativetask in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 49s{color} | {color:green} hadoop-mapreduce-examples in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 8m 40s{color} | {color:green} hadoop-streaming in the
[jira] [Commented] (HADOOP-14787) AliyunOSS: Implement the `createNonRecursive` operator
[ https://issues.apache.org/jira/browse/HADOOP-14787?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16136040#comment-16136040 ] Ray Chiang commented on HADOOP-14787: - Thanks [~fabbri]. This code looks simple enough to me. I'll commit this tomorrow if I don't hear any objections. > AliyunOSS: Implement the `createNonRecursive` operator > -- > > Key: HADOOP-14787 > URL: https://issues.apache.org/jira/browse/HADOOP-14787 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs, fs/oss >Affects Versions: 3.0.0-beta1 >Reporter: Genmao Yu >Assignee: Genmao Yu > Attachments: HADOOP-14787.000.patch > > > {code} > testOverwriteNonEmptyDirectory(org.apache.hadoop.fs.aliyun.oss.contract.TestAliyunOSSContractCreate) > Time elapsed: 1.146 sec <<< ERROR! > java.io.IOException: createNonRecursive unsupported for this filesystem class > org.apache.hadoop.fs.aliyun.oss.AliyunOSSFileSystem > at > org.apache.hadoop.fs.FileSystem.createNonRecursive(FileSystem.java:1304) > at > org.apache.hadoop.fs.FileSystem$FileSystemDataOutputStreamBuilder.build(FileSystem.java:4163) > at > org.apache.hadoop.fs.contract.ContractTestUtils.writeDataset(ContractTestUtils.java:179) > at > org.apache.hadoop.fs.contract.AbstractContractCreateTest.testOverwriteNonEmptyDirectory(AbstractContractCreateTest.java:178) > at > org.apache.hadoop.fs.contract.AbstractContractCreateTest.testOverwriteNonEmptyDirectory(AbstractContractCreateTest.java:208) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:498) > at > org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47) > at > org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) > at > 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44) > at > org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) > at > org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) > at > org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) > at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55) > at > org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74) > testOverwriteEmptyDirectory(org.apache.hadoop.fs.aliyun.oss.contract.TestAliyunOSSContractCreate) > Time elapsed: 0.145 sec <<< ERROR! > java.io.IOException: createNonRecursive unsupported for this filesystem class > org.apache.hadoop.fs.aliyun.oss.AliyunOSSFileSystem > at > org.apache.hadoop.fs.FileSystem.createNonRecursive(FileSystem.java:1304) > at > org.apache.hadoop.fs.FileSystem$FileSystemDataOutputStreamBuilder.build(FileSystem.java:4163) > at > org.apache.hadoop.fs.contract.ContractTestUtils.writeDataset(ContractTestUtils.java:179) > at > org.apache.hadoop.fs.contract.AbstractContractCreateTest.testOverwriteEmptyDirectory(AbstractContractCreateTest.java:133) > at > org.apache.hadoop.fs.contract.AbstractContractCreateTest.testOverwriteEmptyDirectory(AbstractContractCreateTest.java:155) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:498) > at > org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47) > at > org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) > at > org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44) > at > org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) > at > 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) > at > org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) > at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55) > at > org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74) > testCreateFileOverExistingFileNoOverwrite(org.apache.hadoop.fs.aliyun.oss.contract.TestAliyunOSSContractCreate) > Time elapsed: 0.147 sec <<< ERROR! > java.io.IOException: createNonRecursive unsupported for this filesystem class > org.apache.hadoop.fs.aliyun.oss.AliyunOSSFileSystem > at >
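The contract the failing tests exercise is that {{createNonRecursive}} must create a file only when its parent directory already exists, never creating missing ancestors. A minimal model of that contract using plain java.nio.file is sketched below; this is an illustration only, not the actual {{AliyunOSSFileSystem}} override, whose Hadoop signature also carries permissions, overwrite flags, buffer/replication/block-size arguments, and a {{Progressable}} callback.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class NonRecursiveCreate {
    // Models the createNonRecursive contract: create the file only if its
    // parent directory already exists; never create missing ancestors.
    public static Path createNonRecursive(Path file) throws IOException {
        Path parent = file.getParent();
        if (parent != null && !Files.isDirectory(parent)) {
            throw new IOException("Parent directory does not exist: " + parent);
        }
        return Files.createFile(file);
    }
}
```

A real filesystem implementation would additionally honor the overwrite flag and return an output stream, but the parent-existence check above is the behavior the contract tests demand.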
[jira] [Commented] (HADOOP-12077) Provide a multi-URI replication Inode for ViewFs
[ https://issues.apache.org/jira/browse/HADOOP-12077?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16136013#comment-16136013 ] Chris Douglas commented on HADOOP-12077: [~jira.shegalov], [~mingma] Have you had a chance to test the patch? > Provide a multi-URI replication Inode for ViewFs > > > Key: HADOOP-12077 > URL: https://issues.apache.org/jira/browse/HADOOP-12077 > Project: Hadoop Common > Issue Type: New Feature > Components: fs >Reporter: Gera Shegalov >Assignee: Gera Shegalov > Attachments: HADOOP-12077.001.patch, HADOOP-12077.002.patch, > HADOOP-12077.003.patch, HADOOP-12077.004.patch, HADOOP-12077.005.patch, > HADOOP-12077.006.patch, HADOOP-12077.007.patch, HADOOP-12077.008.patch, > HADOOP-12077.009.patch > > > This JIRA is to provide simple "replication" capabilities for applications > that maintain logically equivalent paths in multiple locations for caching or > failover (e.g., S3 and HDFS). We noticed a simple common HDFS usage pattern > in our applications. They host their data on some logical cluster C. There > are corresponding HDFS clusters in multiple datacenters. When the application > runs in DC1, it prefers to read from C in DC1, and the applications prefers > to failover to C in DC2 if the application is migrated to DC2 or when C in > DC1 is unavailable. New application data versions are created > periodically/relatively infrequently. > In order to address many common scenarios in a general fashion, and to avoid > unnecessary code duplication, we implement this functionality in ViewFs (our > default FileSystem spanning all clusters in all datacenters) in a project > code-named Nfly (N as in N datacenters). Currently each ViewFs Inode points > to a single URI via ChRootedFileSystem. Consequently, we introduce a new type > of links that points to a list of URIs that are each going to be wrapped in > ChRootedFileSystem. A typical usage: > /nfly/C/user->/DC1/C/user,/DC2/C/user,... 
This collection of > ChRootedFileSystem instances is fronted by the Nfly filesystem object that is > actually used for the mount point/Inode. Nfly filesystems backs a single > logical path /nfly/C/user//path by multiple physical paths. > Nfly filesystem supports setting minReplication. As long as the number of > URIs on which an update has succeeded is greater than or equal to > minReplication exceptions are only logged but not thrown. Each update > operation is currently executed serially (client-bandwidth driven parallelism > will be added later). > A file create/write: > # Creates a temporary invisible _nfly_tmp_file in the intended chrooted > filesystem. > # Returns a FSDataOutputStream that wraps output streams returned by 1 > # All writes are forwarded to each output stream. > # On close of stream created by 2, all n streams are closed, and the files > are renamed from _nfly_tmp_file to file. All files receive the same mtime > corresponding to the client system time as of beginning of this step. > # If at least minReplication destinations has gone through steps 1-4 without > failures the transaction is considered logically committed, otherwise a > best-effort attempt of cleaning up the temporary files is attempted. > As for reads, we support a notion of locality similar to HDFS /DC/rack/node. > We sort Inode URIs using NetworkTopology by their authorities. These are > typically host names in simple HDFS URIs. If the authority is missing as is > the case with the local file:/// the local host name is assumed > InetAddress.getLocalHost(). This makes sure that the local file system is > always the closest one to the reader in this approach. For our Hadoop 2 hdfs > URIs that are based on nameservice ids instead of hostnames it is very easy > to adjust the topology script since our nameservice ids already contain the > datacenter. 
As for rack and node we can simply output any string such as > /DC/rack-nsid/node-nsid, since we only care about datacenter locality for > such filesystem clients. > There are two policies/additions to the read call path that make it more > expensive but improve the user experience: > - readMostRecent - when this policy is enabled, Nfly first checks the mtime for > the path under all URIs and sorts them from most recent to least recent. Nfly > then sorts the set of most recent URIs topologically in the same manner as > described above. > - repairOnRead - when readMostRecent is enabled, Nfly already has to RPC all > underlying destinations. With repairOnRead, the Nfly filesystem would > additionally attempt to refresh destinations with the path missing or a stale > version of the path using the nearest available most recent destination. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
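The create/write commit protocol described in the issue (write to an invisible {{_nfly_tmp_}} file per destination, fan out every write, rename on close, and commit only if at least minReplication destinations succeed) can be sketched with plain java.nio.file. The class and method names here are hypothetical, the fan-out is serial, and the real Nfly implementation streams writes rather than buffering a byte array:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;
import java.util.ArrayList;
import java.util.List;

public class NflyCommitSketch {
    // Steps 1-3: write a temporary _nfly_tmp_ file in each destination;
    // step 4: rename the temps into place; step 5: the transaction commits
    // only if at least minReplication destinations went through cleanly.
    public static int writeAndCommit(List<Path> destDirs, String name,
            byte[] data, int minReplication) throws IOException {
        List<Path> tmps = new ArrayList<>();
        for (Path dir : destDirs) {
            try {
                Path tmp = dir.resolve("_nfly_tmp_" + name);
                Files.write(tmp, data);
                tmps.add(tmp);
            } catch (IOException e) {
                // a per-destination failure only reduces the replica count
            }
        }
        int committed = 0;
        for (Path tmp : tmps) {
            try {
                Files.move(tmp, tmp.resolveSibling(name),
                        StandardCopyOption.REPLACE_EXISTING);
                committed++;
            } catch (IOException e) {
                // rename failure likewise just reduces the replica count
            }
        }
        if (committed < minReplication) {
            throw new IOException("only " + committed + " of "
                    + destDirs.size() + " replicas committed");
        }
        return committed;
    }
}
```

The sketch omits the uniform-mtime step and the best-effort cleanup of leftover temporary files on a failed transaction, both of which the issue description calls for.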
[jira] [Commented] (HADOOP-14799) Update nimbus-jose-jwt to 4.41.1
[ https://issues.apache.org/jira/browse/HADOOP-14799?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16135985#comment-16135985 ] Hadoop QA commented on HADOOP-14799: | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 17s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 2 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 15s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 11s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 13m 42s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 43s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 7s{color} | {color:green} trunk passed {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Skipped patched modules with no Java source: hadoop-project {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 17s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 55s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | 
{color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 32s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 50s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m 5s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 10m 5s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 49s{color} | {color:green} root: The patch generated 0 new + 130 unchanged - 4 fixed = 130 total (was 134) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 29s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 2s{color} | {color:green} The patch has no ill-formed XML file. {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Skipped patched modules with no Java source: hadoop-project {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 51s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 11s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 16s{color} | {color:green} hadoop-project in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 55s{color} | {color:green} hadoop-auth in the patch passed. 
{color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 3m 6s{color} | {color:green} hadoop-mapreduce-client-core in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 29s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 78m 39s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:14b5c93 | | JIRA Issue | HADOOP-14799 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12882978/HADOOP-14799.001.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle xml | | uname | Linux 7794116871d7 4.4.0-43-generic #63-Ubuntu SMP Wed Oct 12 13:48:03 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh
[jira] [Commented] (HADOOP-14798) Update sshd-core and related mina-core library versions
[ https://issues.apache.org/jira/browse/HADOOP-14798?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16135924#comment-16135924 ] Hadoop QA commented on HADOOP-14798: | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 19s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 1m 32s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 37s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 57s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 2m 2s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 0s{color} | {color:green} trunk passed {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Skipped patched modules with no Java source: hadoop-project {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 39s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 13s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | 
{color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 22s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 57s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 12m 16s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 12m 16s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 2m 6s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 58s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 1s{color} | {color:green} The patch has no ill-formed XML file. {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Skipped patched modules with no Java source: hadoop-project {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 42s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 15s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 19s{color} | {color:green} hadoop-project in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 8m 27s{color} | {color:green} hadoop-common in the patch passed. 
{color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 35s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 90m 51s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:14b5c93 | | JIRA Issue | HADOOP-14798 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12882958/HADOOP-14798.001.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle xml | | uname | Linux 378acf5aae58 3.13.0-123-generic #172-Ubuntu SMP Mon Jun 26 18:04:35 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 736ceab | | Default Java | 1.8.0_144 | | findbugs | v3.1.0-RC1 | | Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/13085/testReport/ | | modules | C: hadoop-project
[jira] [Commented] (HADOOP-14777) S3Guard premerge changes: java 7 build & test tuning
[ https://issues.apache.org/jira/browse/HADOOP-14777?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16135906#comment-16135906 ] Aaron Fabbri commented on HADOOP-14777: --- qq [~ste...@apache.org] did you do the lambda -> callable stuff manually or can you recommend a tool? Not finding anything in Intellij at first glance. > S3Guard premerge changes: java 7 build & test tuning > > > Key: HADOOP-14777 > URL: https://issues.apache.org/jira/browse/HADOOP-14777 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3, test >Affects Versions: HADOOP-13345 >Reporter: Steve Loughran >Assignee: Steve Loughran >Priority: Minor > Fix For: HADOOP-13345 > > Attachments: HADOOP-14777-HADOOP-13345-001.patch > > > Another set of changes for S3Guard in preparation for merging via HADOOP-13998 > * checkstyle issues > * Made Java 7 friendly (indeed, tested applied to branch-2 with some POM > changes & tested there) > * improve diagnostics on some test failure. This would address HADOOP-14750. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Assigned] (HADOOP-14799) Update nimbus-jose-jwt to 4.41.1
[ https://issues.apache.org/jira/browse/HADOOP-14799?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ray Chiang reassigned HADOOP-14799: --- Assignee: Ray Chiang > Update nimbus-jose-jwt to 4.41.1 > > > Key: HADOOP-14799 > URL: https://issues.apache.org/jira/browse/HADOOP-14799 > Project: Hadoop Common > Issue Type: Sub-task >Reporter: Ray Chiang >Assignee: Ray Chiang > Attachments: HADOOP-14799.001.patch > > > Update the dependency > com.nimbusds:nimbus-jose-jwt:3.9 > to the latest (4.41.1) -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-14799) Update nimbus-jose-jwt to 4.41.1
[ https://issues.apache.org/jira/browse/HADOOP-14799?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ray Chiang updated HADOOP-14799: Status: Patch Available (was: Open) > Update nimbus-jose-jwt to 4.41.1 > > > Key: HADOOP-14799 > URL: https://issues.apache.org/jira/browse/HADOOP-14799 > Project: Hadoop Common > Issue Type: Sub-task >Reporter: Ray Chiang >Assignee: Ray Chiang > Attachments: HADOOP-14799.001.patch > > > Update the dependency > com.nimbusds:nimbus-jose-jwt:3.9 > to the latest (4.41.1) -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-14799) Update nimbus-jose-jwt to 4.41.1
[ https://issues.apache.org/jira/browse/HADOOP-14799?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ray Chiang updated HADOOP-14799: Attachment: HADOOP-14799.001.patch > Update nimbus-jose-jwt to 4.41.1 > > > Key: HADOOP-14799 > URL: https://issues.apache.org/jira/browse/HADOOP-14799 > Project: Hadoop Common > Issue Type: Sub-task >Reporter: Ray Chiang >Assignee: Ray Chiang > Attachments: HADOOP-14799.001.patch > > > Update the dependency > com.nimbusds:nimbus-jose-jwt:3.9 > to the latest (4.41.1) -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Created] (HADOOP-14799) Update XXX to 4.41.1
Ray Chiang created HADOOP-14799: --- Summary: Update XXX to 4.41.1 Key: HADOOP-14799 URL: https://issues.apache.org/jira/browse/HADOOP-14799 Project: Hadoop Common Issue Type: Sub-task Reporter: Ray Chiang Update the dependency com.nimbusds:nimbus-jose-jwt:3.9 to the latest (4.41.1) -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-14799) Update nimbus-jose-jwt to 4.41.1
[ https://issues.apache.org/jira/browse/HADOOP-14799?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ray Chiang updated HADOOP-14799: Summary: Update nimbus-jose-jwt to 4.41.1 (was: Update XXX to 4.41.1) > Update nimbus-jose-jwt to 4.41.1 > > > Key: HADOOP-14799 > URL: https://issues.apache.org/jira/browse/HADOOP-14799 > Project: Hadoop Common > Issue Type: Sub-task >Reporter: Ray Chiang > > Update the dependency > com.nimbusds:nimbus-jose-jwt:3.9 > to the latest (4.41.1) -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Comment Edited] (HADOOP-14705) Add batched reencryptEncryptedKey interface to KMS
[ https://issues.apache.org/jira/browse/HADOOP-14705?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16135797#comment-16135797 ] Rushabh S Shah edited comment on HADOOP-14705 at 8/21/17 8:50 PM: -- The last patch looks very very close. {quote} Not on the request itself, but the client sending it and the server receiving it both need to be able to hold and parse it. As it turned out from HDFS-10899, bigger than 2k may trigger edit log sync and impact performance. For KMS here, I added a static 10k maxNumPerBatch as a safeguard too. Security-wise okay be cause ACL is checked before iterating through the json payload. {quote} If I understand this comment correctly, for a batch of more than 2000, it will impact namenode performance. Then why do we have limit of {{5x}} on server side. Am I missing something ? Couple of minor comments. +TestKeyProviderCryptoExtension.java+ Below line is from patch. bq. // Verify the decrypting the new EEK and orig EEK gives the same material. We should add the same check in {{TestKeyProviderCryptoExtension#testGenerateEncryptedKey}} also. +KMS.java+ Below line is from patch. bq. LOG.trace("Exiting handleEncryptedKeyOp method."); * Log line is carried over from {{handleEncryptedKeyOp}} method. * If I read the existing method code properly, the log line {{Exiting }} is logged only when the call completes _gracefully_. In case of any exceptions, all other calls logs the exception at {{debug}} level and don't log the exiting log line. Just to be consistent, we should also follow the same pattern in {{reencryptEncryptedKeys}} method. was (Author: shahrs87): {quote} Not on the request itself, but the client sending it and the server receiving it both need to be able to hold and parse it. As it turned out from HDFS-10899, bigger than 2k may trigger edit log sync and impact performance. For KMS here, I added a static 10k maxNumPerBatch as a safeguard too. 
Security-wise okay be cause ACL is checked before iterating through the json payload. {quote} If I understand this comment correctly, for a batch of more than 2000, it will impact namenode performance. Then why do we have limit of {{5x}} on server side. Am I missing something ? Couple of minor comments. +TestKeyProviderCryptoExtension.java+ Below line is from patch. bq. // Verify the decrypting the new EEK and orig EEK gives the same material. We should add the same check in {{TestKeyProviderCryptoExtension#testGenerateEncryptedKey}} also. +KMS.java+ Below line is from patch. bq. LOG.trace("Exiting handleEncryptedKeyOp method."); * Log line is carried over from {{handleEncryptedKeyOp}} method. * If I read the existing method code properly, the log line {{Exiting }} is logged only when the call completes _gracefully_. In case of any exceptions, all other calls logs the exception at {{debug}} level and don't log the exiting log line. Just to be consistent, we should also follow the same pattern in {{reencryptEncryptedKeys}} method. > Add batched reencryptEncryptedKey interface to KMS > -- > > Key: HADOOP-14705 > URL: https://issues.apache.org/jira/browse/HADOOP-14705 > Project: Hadoop Common > Issue Type: Improvement > Components: kms >Reporter: Xiao Chen >Assignee: Xiao Chen > Attachments: HADOOP-14705.01.patch, HADOOP-14705.02.patch, > HADOOP-14705.03.patch, HADOOP-14705.04.patch, HADOOP-14705.05.patch, > HADOOP-14705.06.patch, HADOOP-14705.07.patch, HADOOP-14705.08.patch, > HADOOP-14705.09.patch, HADOOP-14705.10.patch > > > HADOOP-13827 already enabled the KMS to re-encrypt a {{EncryptedKeyVersion}}. > As the performance results of HDFS-10899 turns out, communication overhead > with the KMS occupies the majority of the time. So this jira proposes to add > a batched interface to re-encrypt multiple EDEKs in 1 call. 
[jira] [Commented] (HADOOP-14705) Add batched reencryptEncryptedKey interface to KMS
[ https://issues.apache.org/jira/browse/HADOOP-14705?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16135797#comment-16135797 ] Rushabh S Shah commented on HADOOP-14705: - {quote} Not on the request itself, but the client sending it and the server receiving it both need to be able to hold and parse it. As it turned out from HDFS-10899, bigger than 2k may trigger edit log sync and impact performance. For KMS here, I added a static 10k maxNumPerBatch as a safeguard too. Security-wise okay because ACL is checked before iterating through the json payload. {quote} If I understand this comment correctly, a batch of more than 2000 will impact namenode performance. Then why do we have a limit of {{5x}} on the server side? Am I missing something? A couple of minor comments. +TestKeyProviderCryptoExtension.java+ The line below is from the patch. bq. // Verify the decrypting the new EEK and orig EEK gives the same material. We should add the same check in {{TestKeyProviderCryptoExtension#testGenerateEncryptedKey}} also. +KMS.java+ The line below is from the patch. bq. LOG.trace("Exiting handleEncryptedKeyOp method."); * The log line is carried over from the {{handleEncryptedKeyOp}} method. * If I read the existing method code properly, the {{Exiting }} log line is logged only when the call completes _gracefully_. In case of any exceptions, all other calls log the exception at {{debug}} level and don't log the exiting log line. Just to be consistent, we should follow the same pattern in the {{reencryptEncryptedKeys}} method. 
> Add batched reencryptEncryptedKey interface to KMS > -- > > Key: HADOOP-14705 > URL: https://issues.apache.org/jira/browse/HADOOP-14705 > Project: Hadoop Common > Issue Type: Improvement > Components: kms >Reporter: Xiao Chen >Assignee: Xiao Chen > Attachments: HADOOP-14705.01.patch, HADOOP-14705.02.patch, > HADOOP-14705.03.patch, HADOOP-14705.04.patch, HADOOP-14705.05.patch, > HADOOP-14705.06.patch, HADOOP-14705.07.patch, HADOOP-14705.08.patch, > HADOOP-14705.09.patch, HADOOP-14705.10.patch > > > HADOOP-13827 already enabled the KMS to re-encrypt a {{EncryptedKeyVersion}}. > As the performance results of HDFS-10899 turns out, communication overhead > with the KMS occupies the majority of the time. So this jira proposes to add > a batched interface to re-encrypt multiple EDEKs in 1 call. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
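The client-side cap discussed in the comments above (splitting a large set of EDEKs so that no single re-encrypt call exceeds maxNumPerBatch entries) amounts to a simple partition helper. The names {{EdekBatcher}} and {{partition}} are hypothetical illustrations, not part of the KMS API:

```java
import java.util.ArrayList;
import java.util.List;

public class EdekBatcher {
    // Split a large list of EDEKs into chunks of at most maxNumPerBatch
    // entries, so each batched re-encrypt call carries a bounded payload.
    public static <T> List<List<T>> partition(List<T> edeks, int maxNumPerBatch) {
        List<List<T>> batches = new ArrayList<>();
        for (int i = 0; i < edeks.size(); i += maxNumPerBatch) {
            batches.add(new ArrayList<>(
                    edeks.subList(i, Math.min(i + maxNumPerBatch, edeks.size()))));
        }
        return batches;
    }
}
```

With a cap of 2000 per request, re-encrypting 10000 EDEKs would issue five bounded calls instead of one oversized payload, which is the trade-off the HDFS-10899 discussion is weighing.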
[jira] [Commented] (HADOOP-14729) Upgrade JUnit 3 TestCase to JUnit 4
[ https://issues.apache.org/jira/browse/HADOOP-14729?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16135784#comment-16135784 ] Ajay Kumar commented on HADOOP-14729: - [~ajisakaa],[~arpitagarwal] Attached new patch (HADOOP-14729.009.patch) with all suggested changes. Please review when possible. > Upgrade JUnit 3 TestCase to JUnit 4 > --- > > Key: HADOOP-14729 > URL: https://issues.apache.org/jira/browse/HADOOP-14729 > Project: Hadoop Common > Issue Type: Test >Reporter: Akira Ajisaka >Assignee: Ajay Kumar > Labels: newbie > Attachments: HADOOP-14729.001.patch, HADOOP-14729.002.patch, > HADOOP-14729.003.patch, HADOOP-14729.004.patch, HADOOP-14729.005.patch, > HADOOP-14729.006.patch, HADOOP-14729.007.patch, HADOOP-14729.008.patch, > HADOOP-14729.009.patch > > > There are still test classes that extend from junit.framework.TestCase in > hadoop-common. Upgrade them to JUnit4. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-14795) TestMapFileOutputFormat missing @after annotation
[ https://issues.apache.org/jira/browse/HADOOP-14795?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16135780#comment-16135780 ] Hadoop QA commented on HADOOP-14795: | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 1m 38s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 20m 18s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 37s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 30s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 40s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 8s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 37s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 32s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 30s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 
0m 30s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 21s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 36s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 11s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 29s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 3m 12s{color} | {color:green} hadoop-mapreduce-client-core in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 26s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 34m 45s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:14b5c93 | | JIRA Issue | HADOOP-14795 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12882942/HADOOP-14795.01.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux ba18cf1158bd 3.13.0-119-generic #166-Ubuntu SMP Wed May 3 12:18:55 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 736ceab | | Default Java | 1.8.0_144 | | findbugs | v3.1.0-RC1 | | Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/13084/testReport/ | | modules | C: hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core U: hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core | | Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/13084/console | | Powered by | Apache Yetus 0.6.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > TestMapFileOutputFormat missing @after annotation > - > > Key: HADOOP-14795 > URL: https://issues.apache.org/jira/browse/HADOOP-14795 > Project: Hadoop Common > Issue Type: Bug >Reporter: Ajay Kumar >Assignee: Ajay Kumar >Priority: Minor > Attachments: HADOOP-14795.01.patch > > > TestMapFileOutputFormat missing @after annotation. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Updated] (HADOOP-14729) Upgrade JUnit 3 TestCase to JUnit 4
[ https://issues.apache.org/jira/browse/HADOOP-14729?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ajay Kumar updated HADOOP-14729: Attachment: HADOOP-14729.009.patch > Upgrade JUnit 3 TestCase to JUnit 4 > --- > > Key: HADOOP-14729 > URL: https://issues.apache.org/jira/browse/HADOOP-14729 > Project: Hadoop Common > Issue Type: Test >Reporter: Akira Ajisaka >Assignee: Ajay Kumar > Labels: newbie > Attachments: HADOOP-14729.001.patch, HADOOP-14729.002.patch, > HADOOP-14729.003.patch, HADOOP-14729.004.patch, HADOOP-14729.005.patch, > HADOOP-14729.006.patch, HADOOP-14729.007.patch, HADOOP-14729.008.patch, > HADOOP-14729.009.patch > > > There are still test classes that extend from junit.framework.TestCase in > hadoop-common. Upgrade them to JUnit4. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-14795) TestMapFileOutputFormat missing @after annotation
[ https://issues.apache.org/jira/browse/HADOOP-14795?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16135699#comment-16135699 ] Shane Kumpf commented on HADOOP-14795: -- Thanks for the patch, [~ajayydv]. I noticed that this test exists in multiple places; o.a.h.mapred and o.a.h.mapreduce.lib.output. I'm not sure I see the point in this teardown method, but if we are fixing it, any reason we wouldn't want to fix both tests (or eliminate one?). > TestMapFileOutputFormat missing @after annotation > - > > Key: HADOOP-14795 > URL: https://issues.apache.org/jira/browse/HADOOP-14795 > Project: Hadoop Common > Issue Type: Bug >Reporter: Ajay Kumar >Assignee: Ajay Kumar >Priority: Minor > Attachments: HADOOP-14795.01.patch > > > TestMapFileOutputFormat missing @after annotation. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
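The comment and the issue title both turn on a teardown method missing the {{@After}} annotation: a JUnit 4 runner discovers lifecycle methods by annotation, so an unannotated teardown simply never executes, no matter what it is named. The sketch below is a self-contained illustration of that lookup — the annotations and runner are hand-rolled stand-ins, not JUnit itself, and real JUnit also runs @After when the test fails, which is elided here.

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.lang.reflect.Method;
import java.util.ArrayList;
import java.util.List;

// Hand-rolled stand-ins for org.junit.Test / org.junit.After, so the
// sketch compiles without the JUnit jar; the lookup mirrors a JUnit 4 runner.
@Retention(RetentionPolicy.RUNTIME) @Target(ElementType.METHOD) @interface Test {}
@Retention(RetentionPolicy.RUNTIME) @Target(ElementType.METHOD) @interface After {}

class MiniRunner {
    static List<String> log = new ArrayList<>();

    static class SomeTest {
        @Test public void testWrite() { log.add("testWrite"); }
        @After public void cleanup() { log.add("cleanup"); }
        // No annotation: the runner never calls this, regardless of its name.
        public void teardown() { log.add("teardown"); }
    }

    static void run(Class<?> cls) {
        try {
            Object t = cls.getDeclaredConstructor().newInstance();
            for (Method m : cls.getDeclaredMethods()) {
                if (m.isAnnotationPresent(Test.class)) {
                    m.setAccessible(true);
                    m.invoke(t);
                    for (Method a : cls.getDeclaredMethods()) {
                        if (a.isAnnotationPresent(After.class)) {
                            a.setAccessible(true);
                            a.invoke(t); // teardown runs only when annotated
                        }
                    }
                }
            }
        } catch (ReflectiveOperationException e) {
            throw new RuntimeException(e);
        }
    }
}
```

Running {{MiniRunner.run(MiniRunner.SomeTest.class)}} records the test and the annotated cleanup, but never the unannotated {{teardown}} — which is exactly the bug HADOOP-14795 fixes.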
[jira] [Commented] (HADOOP-14729) Upgrade JUnit 3 TestCase to JUnit 4
[ https://issues.apache.org/jira/browse/HADOOP-14729?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16135687#comment-16135687 ] Ajay Kumar commented on HADOOP-14729: - [~ajisakaa], thanks for detailed review. I missed some changes you suggested in patch 008. Will upload new one with suggested changes. > Upgrade JUnit 3 TestCase to JUnit 4 > --- > > Key: HADOOP-14729 > URL: https://issues.apache.org/jira/browse/HADOOP-14729 > Project: Hadoop Common > Issue Type: Test >Reporter: Akira Ajisaka >Assignee: Ajay Kumar > Labels: newbie > Attachments: HADOOP-14729.001.patch, HADOOP-14729.002.patch, > HADOOP-14729.003.patch, HADOOP-14729.004.patch, HADOOP-14729.005.patch, > HADOOP-14729.006.patch, HADOOP-14729.007.patch, HADOOP-14729.008.patch > > > There are still test classes that extend from junit.framework.TestCase in > hadoop-common. Upgrade them to JUnit4. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Assigned] (HADOOP-14798) Update sshd-core and related mina-core library versions
[ https://issues.apache.org/jira/browse/HADOOP-14798?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ray Chiang reassigned HADOOP-14798: --- Assignee: Ray Chiang > Update sshd-core and related mina-core library versions > --- > > Key: HADOOP-14798 > URL: https://issues.apache.org/jira/browse/HADOOP-14798 > Project: Hadoop Common > Issue Type: Sub-task >Reporter: Ray Chiang >Assignee: Ray Chiang > Attachments: HADOOP-14798.001.patch > > > Update the dependencies > org.apache.mina:mina-core:2.0.0-M5 > org.apache.sshd:sshd-core:0.14.0 > mina-core can be updated to 2.0.16 and sshd-core to 1.6.0 -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-14798) Update sshd-core and related mina-core library versions
[ https://issues.apache.org/jira/browse/HADOOP-14798?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ray Chiang updated HADOOP-14798: Attachment: HADOOP-14798.001.patch > Update sshd-core and related mina-core library versions > --- > > Key: HADOOP-14798 > URL: https://issues.apache.org/jira/browse/HADOOP-14798 > Project: Hadoop Common > Issue Type: Sub-task >Reporter: Ray Chiang > Attachments: HADOOP-14798.001.patch > > > Update the dependencies > org.apache.mina:mina-core:2.0.0-M5 > org.apache.sshd:sshd-core:0.14.0 > mina-core can be updated to 2.0.16 and sshd-core to 1.6.0 -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-14798) Update sshd-core and related mina-core library versions
[ https://issues.apache.org/jira/browse/HADOOP-14798?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ray Chiang updated HADOOP-14798: Status: Patch Available (was: Open) > Update sshd-core and related mina-core library versions > --- > > Key: HADOOP-14798 > URL: https://issues.apache.org/jira/browse/HADOOP-14798 > Project: Hadoop Common > Issue Type: Sub-task >Reporter: Ray Chiang >Assignee: Ray Chiang > Attachments: HADOOP-14798.001.patch > > > Update the dependencies > org.apache.mina:mina-core:2.0.0-M5 > org.apache.sshd:sshd-core:0.14.0 > mina-core can be updated to 2.0.16 and sshd-core to 1.6.0 -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-14798) Update sshd-core and related mina-core library versions
[ https://issues.apache.org/jira/browse/HADOOP-14798?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ray Chiang updated HADOOP-14798: Summary: Update sshd-core and related mina-core library versions (was: Update sshd-core and related mina-core library) > Update sshd-core and related mina-core library versions > --- > > Key: HADOOP-14798 > URL: https://issues.apache.org/jira/browse/HADOOP-14798 > Project: Hadoop Common > Issue Type: Sub-task >Reporter: Ray Chiang > > Update the dependencies > org.apache.mina:mina-core:2.0.0-M5 > org.apache.sshd:sshd-core:0.14.0 > mina-core can be updated to 2.0.16 and sshd-core to 1.6.0 -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Created] (HADOOP-14798) Update sshd-core and related mina-core library
Ray Chiang created HADOOP-14798: --- Summary: Update sshd-core and related mina-core library Key: HADOOP-14798 URL: https://issues.apache.org/jira/browse/HADOOP-14798 Project: Hadoop Common Issue Type: Sub-task Reporter: Ray Chiang Update the dependencies org.apache.mina:mina-core:2.0.0-M5 org.apache.sshd:sshd-core:0.14.0 mina-core can be updated to 2.0.16 and sshd-core to 1.6.0
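The issue text gives the coordinates and target versions; in a Maven build like Hadoop's, the change itself is just a bump of the dependency coordinates. A sketch of the fragment — the placement in a dependencyManagement section is an assumption, only the coordinates and versions come from the issue:

```xml
<!-- hypothetical fragment of a Maven dependencyManagement section -->
<dependency>
  <groupId>org.apache.mina</groupId>
  <artifactId>mina-core</artifactId>
  <version>2.0.16</version> <!-- was 2.0.0-M5 -->
</dependency>
<dependency>
  <groupId>org.apache.sshd</groupId>
  <artifactId>sshd-core</artifactId>
  <version>1.6.0</version> <!-- was 0.14.0 -->
</dependency>
```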
[jira] [Updated] (HADOOP-14795) TestMapFileOutputFormat missing @after annotation
[ https://issues.apache.org/jira/browse/HADOOP-14795?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ajay Kumar updated HADOOP-14795: Status: Patch Available (was: In Progress) > TestMapFileOutputFormat missing @after annotation > - > > Key: HADOOP-14795 > URL: https://issues.apache.org/jira/browse/HADOOP-14795 > Project: Hadoop Common > Issue Type: Bug >Reporter: Ajay Kumar >Assignee: Ajay Kumar >Priority: Minor > Attachments: HADOOP-14795.01.patch > > > TestMapFileOutputFormat missing @after annotation. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Created] (HADOOP-14797) Update re2j version to 1.1
Ray Chiang created HADOOP-14797: --- Summary: Update re2j version to 1.1 Key: HADOOP-14797 URL: https://issues.apache.org/jira/browse/HADOOP-14797 Project: Hadoop Common Issue Type: Sub-task Reporter: Ray Chiang Update the dependency com.google.re2j:re2j:1.0 to the latest (1.1).
[jira] [Commented] (HADOOP-14787) AliyunOSS: Implement the `createNonRecursive` operator
[ https://issues.apache.org/jira/browse/HADOOP-14787?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16135605#comment-16135605 ] Aaron Fabbri commented on HADOOP-14787: --- Looks good to me (+1, non binding). I did not test this, however. > AliyunOSS: Implement the `createNonRecursive` operator > -- > > Key: HADOOP-14787 > URL: https://issues.apache.org/jira/browse/HADOOP-14787 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs, fs/oss >Affects Versions: 3.0.0-beta1 >Reporter: Genmao Yu >Assignee: Genmao Yu > Attachments: HADOOP-14787.000.patch > > > {code} > testOverwriteNonEmptyDirectory(org.apache.hadoop.fs.aliyun.oss.contract.TestAliyunOSSContractCreate) > Time elapsed: 1.146 sec <<< ERROR! > java.io.IOException: createNonRecursive unsupported for this filesystem class > org.apache.hadoop.fs.aliyun.oss.AliyunOSSFileSystem > at > org.apache.hadoop.fs.FileSystem.createNonRecursive(FileSystem.java:1304) > at > org.apache.hadoop.fs.FileSystem$FileSystemDataOutputStreamBuilder.build(FileSystem.java:4163) > at > org.apache.hadoop.fs.contract.ContractTestUtils.writeDataset(ContractTestUtils.java:179) > at > org.apache.hadoop.fs.contract.AbstractContractCreateTest.testOverwriteNonEmptyDirectory(AbstractContractCreateTest.java:178) > at > org.apache.hadoop.fs.contract.AbstractContractCreateTest.testOverwriteNonEmptyDirectory(AbstractContractCreateTest.java:208) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:498) > at > org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47) > at > org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) > at > org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44) > at > 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) > at > org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) > at > org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) > at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55) > at > org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74) > testOverwriteEmptyDirectory(org.apache.hadoop.fs.aliyun.oss.contract.TestAliyunOSSContractCreate) > Time elapsed: 0.145 sec <<< ERROR! > java.io.IOException: createNonRecursive unsupported for this filesystem class > org.apache.hadoop.fs.aliyun.oss.AliyunOSSFileSystem > at > org.apache.hadoop.fs.FileSystem.createNonRecursive(FileSystem.java:1304) > at > org.apache.hadoop.fs.FileSystem$FileSystemDataOutputStreamBuilder.build(FileSystem.java:4163) > at > org.apache.hadoop.fs.contract.ContractTestUtils.writeDataset(ContractTestUtils.java:179) > at > org.apache.hadoop.fs.contract.AbstractContractCreateTest.testOverwriteEmptyDirectory(AbstractContractCreateTest.java:133) > at > org.apache.hadoop.fs.contract.AbstractContractCreateTest.testOverwriteEmptyDirectory(AbstractContractCreateTest.java:155) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:498) > at > org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47) > at > org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) > at > org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44) > at > org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) > at > org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) > at > 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) > at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55) > at > org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74) > testCreateFileOverExistingFileNoOverwrite(org.apache.hadoop.fs.aliyun.oss.contract.TestAliyunOSSContractCreate) > Time elapsed: 0.147 sec <<< ERROR! > java.io.IOException: createNonRecursive unsupported for this filesystem class > org.apache.hadoop.fs.aliyun.oss.AliyunOSSFileSystem > at >
[jira] [Created] (HADOOP-14796) Update json-simple version to 1.1.1
Ray Chiang created HADOOP-14796: --- Summary: Update json-simple version to 1.1.1 Key: HADOOP-14796 URL: https://issues.apache.org/jira/browse/HADOOP-14796 Project: Hadoop Common Issue Type: Sub-task Reporter: Ray Chiang Update the dependency com.googlecode.json-simple:json-simple:1.1 to the latest (1.1.1).
[jira] [Assigned] (HADOOP-14776) clean up ITestS3AFileSystemContract
[ https://issues.apache.org/jira/browse/HADOOP-14776?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ajay Kumar reassigned HADOOP-14776: --- Assignee: (was: Ajay Kumar) > clean up ITestS3AFileSystemContract > --- > > Key: HADOOP-14776 > URL: https://issues.apache.org/jira/browse/HADOOP-14776 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3, test >Affects Versions: 2.9.0 >Reporter: Steve Loughran >Priority: Minor > Attachments: HADOOP-14776.01.patch > > > With the move of {{FileSystemContractTest}} test to JUnit4, the bits of > {{ITestS3AFileSystemContract}} which override existing methods just to skip > them can be cleaned up: The subclasses could throw assume() so their skippage > gets noted. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-14795) TestMapFileOutputFormat missing @after annotation
[ https://issues.apache.org/jira/browse/HADOOP-14795?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ajay Kumar updated HADOOP-14795: Attachment: HADOOP-14795.01.patch > TestMapFileOutputFormat missing @after annotation > - > > Key: HADOOP-14795 > URL: https://issues.apache.org/jira/browse/HADOOP-14795 > Project: Hadoop Common > Issue Type: Bug >Reporter: Ajay Kumar >Assignee: Ajay Kumar >Priority: Minor > Attachments: HADOOP-14795.01.patch > > > TestMapFileOutputFormat missing @after annotation. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-14705) Add batched reencryptEncryptedKey interface to KMS
[ https://issues.apache.org/jira/browse/HADOOP-14705?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16135562#comment-16135562 ] Rushabh S Shah commented on HADOOP-14705: - I will take a final pass today or max tomorrow. > Add batched reencryptEncryptedKey interface to KMS > -- > > Key: HADOOP-14705 > URL: https://issues.apache.org/jira/browse/HADOOP-14705 > Project: Hadoop Common > Issue Type: Improvement > Components: kms >Reporter: Xiao Chen >Assignee: Xiao Chen > Attachments: HADOOP-14705.01.patch, HADOOP-14705.02.patch, > HADOOP-14705.03.patch, HADOOP-14705.04.patch, HADOOP-14705.05.patch, > HADOOP-14705.06.patch, HADOOP-14705.07.patch, HADOOP-14705.08.patch, > HADOOP-14705.09.patch, HADOOP-14705.10.patch > > > HADOOP-13827 already enabled the KMS to re-encrypt a {{EncryptedKeyVersion}}. > As the performance results of HDFS-10899 turns out, communication overhead > with the KMS occupies the majority of the time. So this jira proposes to add > a batched interface to re-encrypt multiple EDEKs in 1 call. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-14649) Update aliyun-sdk-oss version to 2.8.1
[ https://issues.apache.org/jira/browse/HADOOP-14649?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16135538#comment-16135538 ] Ray Chiang commented on HADOOP-14649: - Thanks [~uncleGen]. I left a quick comment on HADOOP-14787 and hopefully I can get one more set of eyeballs on the patch there. > Update aliyun-sdk-oss version to 2.8.1 > -- > > Key: HADOOP-14649 > URL: https://issues.apache.org/jira/browse/HADOOP-14649 > Project: Hadoop Common > Issue Type: Sub-task >Reporter: Ray Chiang >Assignee: Genmao Yu > > Update the dependency > com.aliyun.oss:aliyun-sdk-oss:2.4.1 > to the latest (2.8.1). -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Work started] (HADOOP-14795) TestMapFileOutputFormat missing @after annotation
[ https://issues.apache.org/jira/browse/HADOOP-14795?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Work on HADOOP-14795 started by Ajay Kumar. --- > TestMapFileOutputFormat missing @after annotation > - > > Key: HADOOP-14795 > URL: https://issues.apache.org/jira/browse/HADOOP-14795 > Project: Hadoop Common > Issue Type: Bug >Reporter: Ajay Kumar >Assignee: Ajay Kumar >Priority: Minor > > TestMapFileOutputFormat missing @after annotation.
[jira] [Created] (HADOOP-14795) TestMapFileOutputFormat missing @after annotation
Ajay Kumar created HADOOP-14795: --- Summary: TestMapFileOutputFormat missing @after annotation Key: HADOOP-14795 URL: https://issues.apache.org/jira/browse/HADOOP-14795 Project: Hadoop Common Issue Type: Bug Reporter: Ajay Kumar Assignee: Ajay Kumar Priority: Minor TestMapFileOutputFormat missing @after annotation.
[jira] [Updated] (HADOOP-14794) Standalone MiniKdc server
[ https://issues.apache.org/jira/browse/HADOOP-14794?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] John Zhuge updated HADOOP-14794: Attachment: HADOOP-14794.002.patch Patch 002 * Fix usage text * Fix shellcheck, shelldocs, whitespace, asflicense * Make sure $HADOOP_HOME/minikdc is not created for stop or status mode > Standalone MiniKdc server > - > > Key: HADOOP-14794 > URL: https://issues.apache.org/jira/browse/HADOOP-14794 > Project: Hadoop Common > Issue Type: New Feature > Components: security, test >Affects Versions: 2.7.0 >Reporter: John Zhuge >Assignee: John Zhuge > Attachments: HADOOP-14794.001.patch, HADOOP-14794.002.patch > > > Add a new subcommand {{hadoop minikdc}} to start a standalone MiniKdc server. > This will make it easier to test Kerberos in pseudo-distributed mode without > an external KDC server. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-14729) Upgrade JUnit 3 TestCase to JUnit 4
[ https://issues.apache.org/jira/browse/HADOOP-14729?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ajay Kumar updated HADOOP-14729: Attachment: HADOOP-14729.008.patch > Upgrade JUnit 3 TestCase to JUnit 4 > --- > > Key: HADOOP-14729 > URL: https://issues.apache.org/jira/browse/HADOOP-14729 > Project: Hadoop Common > Issue Type: Test >Reporter: Akira Ajisaka >Assignee: Ajay Kumar > Labels: newbie > Attachments: HADOOP-14729.001.patch, HADOOP-14729.002.patch, > HADOOP-14729.003.patch, HADOOP-14729.004.patch, HADOOP-14729.005.patch, > HADOOP-14729.006.patch, HADOOP-14729.007.patch, HADOOP-14729.008.patch > > > There are still test classes that extend from junit.framework.TestCase in > hadoop-common. Upgrade them to JUnit4. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-14787) AliyunOSS: Implement the `createNonRecursive` operator
[ https://issues.apache.org/jira/browse/HADOOP-14787?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16135493#comment-16135493 ] Ray Chiang commented on HADOOP-14787: - Pretty much a straight copy from S3AFileSystem. [~jzhuge] or [~fabbri], any comments? > AliyunOSS: Implement the `createNonRecursive` operator > -- > > Key: HADOOP-14787 > URL: https://issues.apache.org/jira/browse/HADOOP-14787 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs, fs/oss >Affects Versions: 3.0.0-beta1 >Reporter: Genmao Yu >Assignee: Genmao Yu > Attachments: HADOOP-14787.000.patch > > > {code} > testOverwriteNonEmptyDirectory(org.apache.hadoop.fs.aliyun.oss.contract.TestAliyunOSSContractCreate) > Time elapsed: 1.146 sec <<< ERROR! > java.io.IOException: createNonRecursive unsupported for this filesystem class > org.apache.hadoop.fs.aliyun.oss.AliyunOSSFileSystem > at > org.apache.hadoop.fs.FileSystem.createNonRecursive(FileSystem.java:1304) > at > org.apache.hadoop.fs.FileSystem$FileSystemDataOutputStreamBuilder.build(FileSystem.java:4163) > at > org.apache.hadoop.fs.contract.ContractTestUtils.writeDataset(ContractTestUtils.java:179) > at > org.apache.hadoop.fs.contract.AbstractContractCreateTest.testOverwriteNonEmptyDirectory(AbstractContractCreateTest.java:178) > at > org.apache.hadoop.fs.contract.AbstractContractCreateTest.testOverwriteNonEmptyDirectory(AbstractContractCreateTest.java:208) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:498) > at > org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47) > at > org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) > at > 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44) > at > org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) > at > org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) > at > org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) > at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55) > at > org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74) > testOverwriteEmptyDirectory(org.apache.hadoop.fs.aliyun.oss.contract.TestAliyunOSSContractCreate) > Time elapsed: 0.145 sec <<< ERROR! > java.io.IOException: createNonRecursive unsupported for this filesystem class > org.apache.hadoop.fs.aliyun.oss.AliyunOSSFileSystem > at > org.apache.hadoop.fs.FileSystem.createNonRecursive(FileSystem.java:1304) > at > org.apache.hadoop.fs.FileSystem$FileSystemDataOutputStreamBuilder.build(FileSystem.java:4163) > at > org.apache.hadoop.fs.contract.ContractTestUtils.writeDataset(ContractTestUtils.java:179) > at > org.apache.hadoop.fs.contract.AbstractContractCreateTest.testOverwriteEmptyDirectory(AbstractContractCreateTest.java:133) > at > org.apache.hadoop.fs.contract.AbstractContractCreateTest.testOverwriteEmptyDirectory(AbstractContractCreateTest.java:155) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:498) > at > org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47) > at > org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) > at > org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44) > at > org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) > at > 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) > at > org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) > at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55) > at > org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74) > testCreateFileOverExistingFileNoOverwrite(org.apache.hadoop.fs.aliyun.oss.contract.TestAliyunOSSContractCreate) > Time elapsed: 0.147 sec <<< ERROR! > java.io.IOException: createNonRecursive unsupported for this filesystem class > org.apache.hadoop.fs.aliyun.oss.AliyunOSSFileSystem > at >
[jira] [Commented] (HADOOP-14705) Add batched reencryptEncryptedKey interface to KMS
[ https://issues.apache.org/jira/browse/HADOOP-14705?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16135490#comment-16135490 ] Xiao Chen commented on HADOOP-14705: Same KDiag failure, tracked at HADOOP-14030. Plan to commit EOB today, unless [~shahrs87] or other watchers have additional comments. > Add batched reencryptEncryptedKey interface to KMS > -- > > Key: HADOOP-14705 > URL: https://issues.apache.org/jira/browse/HADOOP-14705 > Project: Hadoop Common > Issue Type: Improvement > Components: kms >Reporter: Xiao Chen >Assignee: Xiao Chen > Attachments: HADOOP-14705.01.patch, HADOOP-14705.02.patch, > HADOOP-14705.03.patch, HADOOP-14705.04.patch, HADOOP-14705.05.patch, > HADOOP-14705.06.patch, HADOOP-14705.07.patch, HADOOP-14705.08.patch, > HADOOP-14705.09.patch, HADOOP-14705.10.patch > > > HADOOP-13827 already enabled the KMS to re-encrypt a {{EncryptedKeyVersion}}. > As the performance results of HDFS-10899 turns out, communication overhead > with the KMS occupies the majority of the time. So this jira proposes to add > a batched interface to re-encrypt multiple EDEKs in 1 call. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-14794) Standalone MiniKdc server
[ https://issues.apache.org/jira/browse/HADOOP-14794?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] John Zhuge updated HADOOP-14794: Status: Open (was: Patch Available) Cancel patch to fix usage text, shelldoc, etc. > Standalone MiniKdc server > - > > Key: HADOOP-14794 > URL: https://issues.apache.org/jira/browse/HADOOP-14794 > Project: Hadoop Common > Issue Type: New Feature > Components: security, test >Affects Versions: 2.7.0 >Reporter: John Zhuge >Assignee: John Zhuge > Attachments: HADOOP-14794.001.patch > > > Add a new subcommand {{hadoop minikdc}} to start a standalone MiniKdc server. > This will make it easier to test Kerberos in pseudo-distributed mode without > an external KDC server. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-14794) Standalone MiniKdc server
[ https://issues.apache.org/jira/browse/HADOOP-14794?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16135418#comment-16135418 ] Hadoop QA commented on HADOOP-14794: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 17s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 19s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 58s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m 16s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 48s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 44s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 19s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 21s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} 
compile {color} | {color:green} 10m 41s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 10m 41s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 48s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} shellcheck {color} | {color:red} 0m 1s{color} | {color:red} The patch generated 3 new + 0 unchanged - 0 fixed = 3 total (was 0) {color} | | {color:orange}-0{color} | {color:orange} shelldocs {color} | {color:orange} 0m 27s{color} | {color:orange} The patch generated 2 new + 364 unchanged - 0 fixed = 366 total (was 364) {color} | | {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s{color} | {color:red} The patch has 1 line(s) that end in whitespace. Use git apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 2s{color} | {color:green} The patch has no ill-formed XML file. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 46s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 22s{color} | {color:green} hadoop-assemblies in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 32s{color} | {color:green} hadoop-minikdc in the patch passed. {color} | | {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 0m 34s{color} | {color:red} The patch generated 1 ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 46m 48s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:14b5c93 | | JIRA Issue | HADOOP-14794 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12882896/HADOOP-14794.001.patch | | Optional Tests | asflicense shellcheck shelldocs compile javac javadoc mvninstall mvnsite unit xml | | uname | Linux 4271e9b6178d 3.13.0-117-generic #164-Ubuntu SMP Fri Apr 7 11:05:26 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 267e19a | | Default Java | 1.8.0_144 | | shellcheck | v0.4.6 | | shellcheck | https://builds.apache.org/job/PreCommit-HADOOP-Build/13082/artifact/patchprocess/diff-patch-shellcheck.txt | | shelldocs | https://builds.apache.org/job/PreCommit-HADOOP-Build/13082/artifact/patchprocess/diff-patch-shelldocs.txt | | whitespace | https://builds.apache.org/job/PreCommit-HADOOP-Build/13082/artifact/patchprocess/whitespace-eol.txt | | Test Results |
[jira] [Commented] (HADOOP-14687) AuthenticatedURL will reuse bad/expired session cookies
[ https://issues.apache.org/jira/browse/HADOOP-14687?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16135391#comment-16135391 ] Jason Lowe commented on HADOOP-14687: - Thanks for the patch! Wondering if it is worth protecting the code from a case where someone tries to set the same cookie redundantly. Looks like the code will reduce the max age of the cookie each time. Seems like a simple "is this the same cookie we already have" check before we lower the max age could make it do something sane in that unexpected case. Otherwise patch looks good to me. > AuthenticatedURL will reuse bad/expired session cookies > --- > > Key: HADOOP-14687 > URL: https://issues.apache.org/jira/browse/HADOOP-14687 > Project: Hadoop Common > Issue Type: Bug > Components: common >Affects Versions: 2.6.0 >Reporter: Daryn Sharp >Assignee: Daryn Sharp >Priority: Critical > Attachments: HADOOP-14687.2.trunk.patch, HADOOP-14687.trunk.patch > > > AuthenticatedURL with kerberos was designed to perform spnego, then use a > session cookie to avoid renegotiation overhead. Unfortunately the client > will continue to use a cookie after it expires. Every request elicits a 401, > connection closes (despite keepalive because 401 is an "error"), TGS is > obtained, connection re-opened, re-requests with TGS, repeat cycle. This > places a strain on the kdc and creates lots of time_wait sockets. > > The main problem is unbeknownst to the auth url, the JDK transparently does > spnego. The server issues a new cookie but the auth url doesn't scrape the > cookie from the response because it doesn't know the JDK re-authenticated. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
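A minimal sketch of the redundant-cookie guard suggested above (all names here are hypothetical; the real fix would live inside AuthenticatedURL's cookie handling): only replace the stored cookie, and thus restart its max-age accounting, when the incoming cookie actually differs from the one already held.

```java
import java.net.HttpCookie;

public class CookieGuard {
    private HttpCookie current;

    // Hypothetical helper: accept a cookie only if it differs from the one
    // already held, so setting the same cookie redundantly cannot keep
    // lowering its remaining max age.
    public boolean maybeUpdate(HttpCookie candidate) {
        if (current != null
                && current.getName().equals(candidate.getName())
                && current.getValue().equals(candidate.getValue())) {
            return false; // same cookie we already have: leave max age alone
        }
        current = candidate;
        return true;
    }

    public static void main(String[] args) {
        CookieGuard guard = new CookieGuard();
        System.out.println(guard.maybeUpdate(new HttpCookie("hadoop.auth", "t1"))); // true: new cookie
        System.out.println(guard.maybeUpdate(new HttpCookie("hadoop.auth", "t1"))); // false: redundant set
        System.out.println(guard.maybeUpdate(new HttpCookie("hadoop.auth", "t2"))); // true: value changed
    }
}
```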
[jira] [Commented] (HADOOP-13139) Branch-2: S3a to use thread pool that blocks clients
[ https://issues.apache.org/jira/browse/HADOOP-13139?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16135370#comment-16135370 ] Jason Lowe commented on HADOOP-13139: - Had a user who ran into this on one of our clusters that upgraded to 2.8. They were running a pre-2.8 version of the S3AFileSystem code with their job and it failed like this:
{noformat}
java.lang.IllegalArgumentException
        at java.util.concurrent.ThreadPoolExecutor.<init>(ThreadPoolExecutor.java:1307)
        at java.util.concurrent.ThreadPoolExecutor.<init>(ThreadPoolExecutor.java:1230)
        at org.apache.hadoop.fs.s3a.S3AFileSystem.initialize(S3AFileSystem.java:280)
        at com.yahoo.prism.UseLocalKeyS3AFileSystem.initializeFileSystem(UseLocalKeyS3AFileSystem.java:68)
        at com.yahoo.prism.UseLocalKeyS3AFileSystem.initialize(UseLocalKeyS3AFileSystem.java:113)
        at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2670)
        at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:95)
        at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2704)
        at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2686)
        at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:374)
        at org.apache.hadoop.fs.Path.getFileSystem(Path.java:295)
[...]
{noformat}
The problem is that core-default in 2.8 removed fs.s3a.threads.core but changed the existing fs.s3a.threads.max to 10. The old pre-2.8 S3AFileSystem code had code defaults of 15 and 256, respectively. So when a 2.8 job client (in this case an Oozie server) submits the job, it picks up the 2.8 core-default setting for fs.s3a.threads.max in job.xml, but the job itself runs with the older S3AFileSystem code, so the job fails because it tries to initialize a thread pool with core threads=15 and max threads=10. 
Not sure if this is considered simply an invalid setup, but I suspect this won't be the first case of someone submitting a job with a 2.8 or later client (e.g.: via an Oozie server upgraded independently of a user's job code) and failing because the user hasn't upgraded to the 2.8 or later S3AFileSystem code yet. If we had added a deprecated core-default value for fs.s3a.threads.core then the older code would have gotten consistent values for core and max threads. As it is now, it gets half of the new default settings, and those aren't compatible with the older, other half of the defaults. Thoughts on whether this is worth doing in a followup JIRA? > Branch-2: S3a to use thread pool that blocks clients > > > Key: HADOOP-13139 > URL: https://issues.apache.org/jira/browse/HADOOP-13139 > Project: Hadoop Common > Issue Type: Task > Components: fs/s3 >Affects Versions: 2.8.0 >Reporter: Pieter Reuse >Assignee: Pieter Reuse > Fix For: 2.8.0 > > Attachments: HADOOP-13139-001.patch, HADOOP-13139-branch-2.001.patch, > HADOOP-13139-branch-2.002.patch, HADOOP-13139-branch-2-003.patch, > HADOOP-13139-branch-2-004.patch, HADOOP-13139-branch-2-005.patch, > HADOOP-13139-branch-2-006.patch > > > HADOOP-11684 is accepted into trunk, but was not applied to branch-2. I will > attach a patch applicable to branch-2. > It should be noted in CHANGES-2.8.0.txt that the config parameter > 'fs.s3a.threads.core' has been removed and the behavior of the > ThreadPool for s3a has been changed.
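The constructor-time failure above is easy to reproduce outside Hadoop: java.util.concurrent.ThreadPoolExecutor rejects any configuration where the core pool size exceeds the maximum pool size. A self-contained sketch (class and method names are mine) of the 15-core / 10-max mismatch described in the comment:

```java
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class ThreadPoolMismatch {
    // Returns true if constructing a pool with the given core/max sizes
    // throws IllegalArgumentException, mirroring pre-2.8 S3AFileSystem
    // defaults (core=15) combined with the 2.8 fs.s3a.threads.max of 10.
    public static boolean poolRejects(int core, int max) {
        try {
            new ThreadPoolExecutor(core, max, 60L, TimeUnit.SECONDS,
                    new LinkedBlockingQueue<Runnable>()).shutdown();
            return false;
        } catch (IllegalArgumentException e) {
            return true; // core > max is rejected before any thread starts
        }
    }

    public static void main(String[] args) {
        System.out.println(poolRejects(15, 10));  // core > max: constructor throws
        System.out.println(poolRejects(15, 256)); // pre-2.8 defaults: constructs fine
    }
}
```

This is why a deprecated core-default entry for fs.s3a.threads.core would have helped: the old code would then read a consistent core/max pair instead of mixing defaults from two releases.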
[jira] [Updated] (HADOOP-14794) Standalone MiniKdc server
[ https://issues.apache.org/jira/browse/HADOOP-14794?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] John Zhuge updated HADOOP-14794: Status: Patch Available (was: Open) > Standalone MiniKdc server > - > > Key: HADOOP-14794 > URL: https://issues.apache.org/jira/browse/HADOOP-14794 > Project: Hadoop Common > Issue Type: New Feature > Components: security, test >Affects Versions: 2.7.0 >Reporter: John Zhuge >Assignee: John Zhuge > Attachments: HADOOP-14794.001.patch > > > Add a new subcommand {{hadoop minikdc}} to start a standalone MiniKdc server. > This will make it easier to test Kerberos in pseudo-distributed mode without > an external KDC server.
[jira] [Updated] (HADOOP-14794) Standalone MiniKdc server
[ https://issues.apache.org/jira/browse/HADOOP-14794?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] John Zhuge updated HADOOP-14794: Attachment: HADOOP-14794.001.patch Patch 001 * Add subcommand "hadoop minikdc" Testing Done * hadoop minikdc {noformat} Usage: hadoop [--config confdir] [start|stop|status] minikdc + {noformat} * hadoop --daemon start minikdc {noformat} Usage: hadoop [--config confdir] [start|stop|status] minikdc + {noformat} * hadoop --daemon stop minikdc * hadoop --daemon status minikdc * hadoop minikdc jzh...@example.com {noformat} WARNING: /Users/jzhuge/hadoop/hadoop-dist/target/hadoop-3.0.0-beta1-SNAPSHOT/minikdc does not exist. Creating. WARNING: /Users/jzhuge/hadoop/hadoop-dist/target/hadoop-3.0.0-beta1-SNAPSHOT/logs does not exist. Creating. 2017-08-21 08:54:02,511 INFO minikdc.MiniKdc: Configuration: 2017-08-21 08:54:02,511 INFO minikdc.MiniKdc: --- 2017-08-21 08:54:02,513 INFO minikdc.MiniKdc: debug: false 2017-08-21 08:54:02,513 INFO minikdc.MiniKdc: transport: UDP 2017-08-21 08:54:02,513 INFO minikdc.MiniKdc: max.ticket.lifetime: 8640 2017-08-21 08:54:02,513 INFO minikdc.MiniKdc: org.name: EXAMPLE 2017-08-21 08:54:02,513 INFO minikdc.MiniKdc: kdc.port: 0 2017-08-21 08:54:02,513 INFO minikdc.MiniKdc: org.domain: COM 2017-08-21 08:54:02,513 INFO minikdc.MiniKdc: max.renewable.lifetime: 60480 2017-08-21 08:54:02,513 INFO minikdc.MiniKdc: instance: DefaultKrbServer 2017-08-21 08:54:02,513 INFO minikdc.MiniKdc: kdc.bind.address: localhost 2017-08-21 08:54:02,513 INFO minikdc.MiniKdc: --- 2017-08-21 08:54:02,653 INFO minikdc.MiniKdc: MiniKdc started. 
Standalone MiniKdc Running --- Realm : EXAMPLE.COM Running at : localhost:localhost krb5conf: /Users/jzhuge/hadoop/hadoop-dist/target/hadoop-3.0.0-beta1-SNAPSHOT/minikdc/krb5.conf created keytab : /Users/jzhuge/hadoop/hadoop-dist/target/hadoop-3.0.0-beta1-SNAPSHOT/minikdc/keytab with principals : [jzh...@example.com] Do <CTRL-C> or kill <pid> to stop it --- {noformat} * hadoop --daemon start minikdc jzh...@example.com > Standalone MiniKdc server > - > > Key: HADOOP-14794 > URL: https://issues.apache.org/jira/browse/HADOOP-14794 > Project: Hadoop Common > Issue Type: New Feature > Components: security, test >Affects Versions: 2.7.0 >Reporter: John Zhuge >Assignee: John Zhuge > Attachments: HADOOP-14794.001.patch > > > Add a new subcommand {{hadoop minikdc}} to start a standalone MiniKdc server. > This will make it easier to test Kerberos in pseudo-distributed mode without > an external KDC server.
[jira] [Created] (HADOOP-14794) Standalone MiniKdc server
John Zhuge created HADOOP-14794: --- Summary: Standalone MiniKdc server Key: HADOOP-14794 URL: https://issues.apache.org/jira/browse/HADOOP-14794 Project: Hadoop Common Issue Type: New Feature Components: security, test Affects Versions: 2.7.0 Reporter: John Zhuge Assignee: John Zhuge Add a new subcommand {{hadoop minikdc}} to start a standalone MiniKdc server. This will make it easier to test Kerberos in pseudo-distributed mode without an external KDC server.
[jira] [Commented] (HADOOP-12071) conftest is not documented
[ https://issues.apache.org/jira/browse/HADOOP-12071?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16135261#comment-16135261 ] Hadoop QA commented on HADOOP-12071: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s{color} | {color:blue} Docker mode activated. {color} | | {color:red}-1{color} | {color:red} patch {color} | {color:red} 0m 5s{color} | {color:red} HADOOP-12071 does not apply to trunk. Rebase required? Wrong Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} | \\ \\ || Subsystem || Report/Notes || | JIRA Issue | HADOOP-12071 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12882885/HADOOP-12071.001.patch | | Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/13081/console | | Powered by | Apache Yetus 0.6.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > conftest is not documented > -- > > Key: HADOOP-12071 > URL: https://issues.apache.org/jira/browse/HADOOP-12071 > Project: Hadoop Common > Issue Type: Bug > Components: documentation >Reporter: Kengo Seki >Assignee: Kengo Seki > Attachments: HADOOP-12071.001.patch, HADOOP-12071.001.patch > > > HADOOP-7947 introduced new hadoop subcommand conftest, but it is not > documented yet.
[jira] [Created] (HADOOP-14793) conftest command can't handle XInclude
Steve Loughran created HADOOP-14793: --- Summary: conftest command can't handle XInclude Key: HADOOP-14793 URL: https://issues.apache.org/jira/browse/HADOOP-14793 Project: Hadoop Common Issue Type: Bug Components: scripts, util Affects Versions: 3.0.0-beta1 Reporter: Steve Loughran Priority: Minor hadoop conftest fails if there's an {code} <xi:include xmlns:xi="http://www.w3.org/2001/XInclude" href="something.xml"/> {code} expected: follows & validates the include. Actual: reports an error: {code} hadoop-dist/target/hadoop-3.0.0-beta1-SNAPSHOT/etc/hadoop/core-site.xml: Line 23: element not {code}
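For context, the include element in the report above appears to have lost its tags in transit; a core-site.xml using XInclude would look roughly like the following sketch (the href and the property shown are illustrative, not from the report):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!-- core-site.xml pulling shared properties in via XInclude -->
<configuration xmlns:xi="http://www.w3.org/2001/XInclude">
  <xi:include href="something.xml"/>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://localhost:9000</value>
  </property>
</configuration>
```

The complaint is that `hadoop conftest` flags the `xi:include` element as invalid instead of resolving it the way Configuration loading does.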
[jira] [Updated] (HADOOP-12071) conftest is not documented
[ https://issues.apache.org/jira/browse/HADOOP-12071?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran updated HADOOP-12071: Status: Open (was: Patch Available) > conftest is not documented > -- > > Key: HADOOP-12071 > URL: https://issues.apache.org/jira/browse/HADOOP-12071 > Project: Hadoop Common > Issue Type: Bug > Components: documentation >Reporter: Kengo Seki >Assignee: Kengo Seki > Attachments: HADOOP-12071.001.patch, HADOOP-12071.001.patch > > > HADOOP-7947 introduced new hadoop subcommand conftest, but it is not > documented yet.
[jira] [Updated] (HADOOP-12071) conftest is not documented
[ https://issues.apache.org/jira/browse/HADOOP-12071?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran updated HADOOP-12071: Target Version/s: 3.0.0-beta1 Status: Patch Available (was: Open) > conftest is not documented > -- > > Key: HADOOP-12071 > URL: https://issues.apache.org/jira/browse/HADOOP-12071 > Project: Hadoop Common > Issue Type: Bug > Components: documentation >Reporter: Kengo Seki >Assignee: Kengo Seki > Attachments: HADOOP-12071.001.patch, HADOOP-12071.001.patch > > > HADOOP-7947 introduced new hadoop subcommand conftest, but it is not > documented yet.
[jira] [Updated] (HADOOP-12071) conftest is not documented
[ https://issues.apache.org/jira/browse/HADOOP-12071?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran updated HADOOP-12071: Attachment: HADOOP-12071.001.patch Missed this. Resubmitting to see if patch can take it as is. > conftest is not documented > -- > > Key: HADOOP-12071 > URL: https://issues.apache.org/jira/browse/HADOOP-12071 > Project: Hadoop Common > Issue Type: Bug > Components: documentation >Reporter: Kengo Seki >Assignee: Kengo Seki > Attachments: HADOOP-12071.001.patch, HADOOP-12071.001.patch > > > HADOOP-7947 introduced new hadoop subcommand conftest, but it is not > documented yet.
[jira] [Commented] (HADOOP-13327) Add OutputStream + Syncable to the Filesystem Specification
[ https://issues.apache.org/jira/browse/HADOOP-13327?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16135230#comment-16135230 ] Hadoop QA commented on HADOOP-13327: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 17s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 6 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 17s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 57s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 24s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 2m 6s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 4m 3s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 5m 21s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 53s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 18s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} 
mvninstall {color} | {color:green} 2m 32s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m 32s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 11m 32s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 2m 4s{color} | {color:orange} root: The patch generated 3 new + 38 unchanged - 3 fixed = 41 total (was 41) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 4m 5s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s{color} | {color:red} The patch has 40 line(s) that end in whitespace. Use git apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 4s{color} | {color:green} The patch has no ill-formed XML file. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 5m 50s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 3m 1s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red} 8m 8s{color} | {color:red} hadoop-common in the patch failed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 65m 44s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 36s{color} | {color:green} hadoop-aws in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 17s{color} | {color:green} hadoop-azure in the patch passed. 
{color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 3m 43s{color} | {color:green} hadoop-azure-datalake in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 36s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}178m 46s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.security.TestKDiag | | | hadoop.net.TestDNS | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure080 | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:14b5c93 | | JIRA Issue | HADOOP-13327 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12882859/HADOOP-13327-002.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle xml | | uname | Linux ac211ceacc06 3.13.0-117-generic
[jira] [Commented] (HADOOP-14402) roll out StreamCapabilities across output streams of all filesystems
[ https://issues.apache.org/jira/browse/HADOOP-14402?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16135064#comment-16135064 ] Steve Loughran commented on HADOOP-14402: - I'm not doing anything about declaring new capabilities, not here > roll out StreamCapabilities across output streams of all filesystems > > > Key: HADOOP-14402 > URL: https://issues.apache.org/jira/browse/HADOOP-14402 > Project: Hadoop Common > Issue Type: Improvement > Components: fs, fs/adl, fs/azure, fs/oss, fs/s3, fs/swift >Affects Versions: 3.0.0-alpha4 >Reporter: Steve Loughran >Assignee: Steve Loughran > > HDFS-11644 added a way to probe streams for capabilities, with dfsoutput > stream implementing some. > We should roll this out to all our output streams, with capabilities to define > * is metadata in sync with flushed data (HDFS: no) > * does LocalFileSystem really do a sync/flush (as I don't think any subclass > of ChecksumFS does, so {{FSOutputSummer}} should declare this in its > capabilities > * are intermediate writes visible *at all*? > * Is close() potentially a long operation (for object stores, yes, on a > case-by-case basis) > We'll need useful names for these options, obviously, tests, etc etc
[jira] [Resolved] (HADOOP-14790) PageBlobOutputStream to declare hflush/hsync StreamCapabilities for
[ https://issues.apache.org/jira/browse/HADOOP-14790?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran resolved HADOOP-14790. - Resolution: Won't Fix > PageBlobOutputStream to declare hflush/hsync StreamCapabilities for > > > Key: HADOOP-14790 > URL: https://issues.apache.org/jira/browse/HADOOP-14790 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/azure >Affects Versions: 2.9.0 >Reporter: Steve Loughran >Priority: Minor > > HDFS-11644 added an interface for streams to export when dynamically > declaring support for features, as the static service APIs weren't reliable. > As {{PageBlobOutputStream}} does support hsync/hflush, it should override > {{StreamCapabilities.hasCapability()}} and declare that it supports them.
[jira] [Resolved] (HADOOP-14719) Add StreamCapabilities support to WASB
[ https://issues.apache.org/jira/browse/HADOOP-14719?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran resolved HADOOP-14719. - Resolution: Duplicate > Add StreamCapabilities support to WASB > -- > > Key: HADOOP-14719 > URL: https://issues.apache.org/jira/browse/HADOOP-14719 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/azure >Affects Versions: 3.0.0-alpha4 >Reporter: John Zhuge >
[jira] [Resolved] (HADOOP-14718) Add StreamCapabilities support to ADLS
[ https://issues.apache.org/jira/browse/HADOOP-14718?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran resolved HADOOP-14718. - Resolution: Duplicate > Add StreamCapabilities support to ADLS > -- > > Key: HADOOP-14718 > URL: https://issues.apache.org/jira/browse/HADOOP-14718 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/adl >Affects Versions: 3.0.0-alpha4 >Reporter: John Zhuge >
[jira] [Commented] (HADOOP-14718) Add StreamCapabilities support to ADLS
[ https://issues.apache.org/jira/browse/HADOOP-14718?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16135054#comment-16135054 ] Steve Loughran commented on HADOOP-14718: - Fixed in HADOOP-13327 with tests; closing as a duplicate > Add StreamCapabilities support to ADLS > -- > > Key: HADOOP-14718 > URL: https://issues.apache.org/jira/browse/HADOOP-14718 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/adl >Affects Versions: 3.0.0-alpha4 >Reporter: John Zhuge >
[jira] [Commented] (HADOOP-14402) roll out StreamCapabilities across output streams of all filesystems
[ https://issues.apache.org/jira/browse/HADOOP-14402?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16135052#comment-16135052 ] Steve Loughran commented on HADOOP-14402: - As oss://, s3a:// and swift don't do syncable, they're already sorted with the base "return false". Adding the others in HADOOP-13327. > roll out StreamCapabilities across output streams of all filesystems > > > Key: HADOOP-14402 > URL: https://issues.apache.org/jira/browse/HADOOP-14402 > Project: Hadoop Common > Issue Type: Improvement > Components: fs, fs/adl, fs/azure, fs/oss, fs/s3, fs/swift >Affects Versions: 3.0.0-alpha4 >Reporter: Steve Loughran > > HDFS-11644 added a way to probe streams for capabilities, with dfsoutput > stream implementing some. > We should roll this out to all our output streams, with capabilities to define > * is metadata in sync with flushed data (HDFS: no) > * does LocalFileSystem really do a sync/flush (as I don't think any subclass > of ChecksumFS does, so {{FSOutputSummer}} should declare this in its > capabilities > * are intermediate writes visible *at all*? > * Is close() potentially a long operation (for object stores, yes, on a > case-by-case basis) > We'll need useful names for these options, obviously, tests, etc etc
[jira] [Assigned] (HADOOP-14402) roll out StreamCapabilities across output streams of all filesystems
[ https://issues.apache.org/jira/browse/HADOOP-14402?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran reassigned HADOOP-14402: --- Assignee: Steve Loughran > roll out StreamCapabilities across output streams of all filesystems > > > Key: HADOOP-14402 > URL: https://issues.apache.org/jira/browse/HADOOP-14402 > Project: Hadoop Common > Issue Type: Improvement > Components: fs, fs/adl, fs/azure, fs/oss, fs/s3, fs/swift >Affects Versions: 3.0.0-alpha4 >Reporter: Steve Loughran >Assignee: Steve Loughran > > HDFS-11644 added a way to probe streams for capabilities, with dfsoutput > stream implementing some. > We should roll this out to all our output streams, with capabilities to define > * is metadata in sync with flushed data (HDFS: no) > * does LocalFileSystem really do a sync/flush (as I don't think any subclass > of ChecksumFS does, so {{FSOutputSummer}} should declare this in its > capabilities > * are intermediate writes visible *at all*? > * Is close() potentially a long operation (for object stores, yes, on a > case-by-case basis) > We'll need useful names for these options, obviously, tests, etc etc
[jira] [Updated] (HADOOP-13327) Add OutputStream + Syncable to the Filesystem Specification
[ https://issues.apache.org/jira/browse/HADOOP-13327?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran updated HADOOP-13327: Target Version/s: 3.0.0-beta1 Status: Patch Available (was: Open) > Add OutputStream + Syncable to the Filesystem Specification > --- > > Key: HADOOP-13327 > URL: https://issues.apache.org/jira/browse/HADOOP-13327 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs >Affects Versions: 2.8.0 >Reporter: Steve Loughran >Assignee: Steve Loughran > Attachments: HADOOP-13327-002.patch, HADOOP-13327-branch-2-001.patch > > > Write down what a Filesystem output stream should do. While the core API is > defined in Java, that doesn't say what's expected about visibility, > durability, etc —and Hadoop Syncable interface is entirely ours to define.
[jira] [Updated] (HADOOP-13327) Add OutputStream + Syncable to the Filesystem Specification
[ https://issues.apache.org/jira/browse/HADOOP-13327?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran updated HADOOP-13327: Attachment: HADOOP-13327-002.patch HADOOP-13327 OutputStream, Syncable and StreamCapabilities * The docs state that you must rely on StreamCapabilities.hasCapability() as the cue as to whether Syncable methods are supported; tests are there to see what goes on. The Azure and ADL streams now support this interface. * Issues with FSOutputSummer are documented. It needs to make close() at-most-once idempotent & check for it being closed in write(int); explicitly downgrade flush() to a no-op. * Object store semantics listed. I don't think we've looked at `FSOutputSummer` hard enough to see what it does wrong, not for a while. This does need fixing, but I'd rather delay that until Hadoop 3.1; something, somewhere, will be using it wrong. > Add OutputStream + Syncable to the Filesystem Specification > --- > > Key: HADOOP-13327 > URL: https://issues.apache.org/jira/browse/HADOOP-13327 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs >Affects Versions: 2.8.0 >Reporter: Steve Loughran >Assignee: Steve Loughran > Attachments: HADOOP-13327-002.patch, HADOOP-13327-branch-2-001.patch > > > Write down what a Filesystem output stream should do. While the core API is > defined in Java, that doesn't say what's expected about visibility, > durability, etc —and Hadoop Syncable interface is entirely ours to define.
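A self-contained sketch of the probe-before-sync pattern the patch documents, using stand-in classes rather than Hadoop's real stream types (only the hasCapability(String) probe and the "hflush"/"hsync" capability names come from the actual StreamCapabilities interface added in HDFS-11644; everything else here is illustrative):

```java
public class CapabilityProbe {
    // Stand-in for Hadoop's StreamCapabilities interface; in real code the
    // probe targets an FSDataOutputStream or the wrapped stream inside it.
    public interface StreamCapabilities {
        boolean hasCapability(String capability);
    }

    // Hypothetical HDFS-like stream: implements durable flush, so it
    // advertises both Syncable capabilities.
    public static class SyncableStream implements StreamCapabilities {
        public boolean hasCapability(String capability) {
            return "hflush".equals(capability) || "hsync".equals(capability);
        }
    }

    // Hypothetical object-store stream: data only becomes visible on close(),
    // so it advertises nothing.
    public static class ObjectStoreStream implements StreamCapabilities {
        public boolean hasCapability(String capability) {
            return false;
        }
    }

    // Callers should probe before relying on Syncable durability guarantees.
    public static boolean safeToHsync(StreamCapabilities out) {
        return out.hasCapability("hsync");
    }

    public static void main(String[] args) {
        System.out.println(safeToHsync(new SyncableStream()));    // true
        System.out.println(safeToHsync(new ObjectStoreStream())); // false
    }
}
```

This dynamic probe is exactly why a static "does filesystem X sync?" table is avoided: the answer varies per stream instance, not per filesystem class.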
[jira] [Updated] (HADOOP-13327) Add OutputStream + Syncable to the Filesystem Specification
[ https://issues.apache.org/jira/browse/HADOOP-13327?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran updated HADOOP-13327: Status: Open (was: Patch Available) > Add OutputStream + Syncable to the Filesystem Specification > --- > > Key: HADOOP-13327 > URL: https://issues.apache.org/jira/browse/HADOOP-13327 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs >Affects Versions: 2.8.0 >Reporter: Steve Loughran >Assignee: Steve Loughran > Attachments: HADOOP-13327-branch-2-001.patch > > > Write down what a Filesystem output stream should do. While the core API is > defined in Java, that doesn't say what's expected about visibility, > durability, etc —and Hadoop Syncable interface is entirely ours to define.
[jira] [Resolved] (HADOOP-14792) Package on windows fail
[ https://issues.apache.org/jira/browse/HADOOP-14792?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ajith S resolved HADOOP-14792. -- Resolution: Not A Problem As BUILDING.txt clearly states, Cygwin is not supported; removing Cygwin from PATH and using the Git Unix tools instead resolved this. Marking as not an issue. > Package on windows fail > --- > > Key: HADOOP-14792 > URL: https://issues.apache.org/jira/browse/HADOOP-14792 > Project: Hadoop Common > Issue Type: Bug >Reporter: Ajith S >Assignee: Ajith S > Attachments: packagefail.png > > > {{mvn package -Pdist -Pnative-win -DskipTests -Dtar > -Dmaven.javadoc.skip=true}} > command fails on windows > this is because > dev-support/bin/dist-copynativelibs needs dos2unix conversion > to avoid the failure, we can add the conversion before bash executes it
[jira] [Resolved] (HADOOP-7793) mvn clean package -Dsrc does not work
[ https://issues.apache.org/jira/browse/HADOOP-7793?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andras Bokor resolved HADOOP-7793. -- Resolution: Invalid > mvn clean package -Dsrc does not work > - > > Key: HADOOP-7793 > URL: https://issues.apache.org/jira/browse/HADOOP-7793 > Project: Hadoop Common > Issue Type: Bug > Components: build >Affects Versions: 0.23.0 >Reporter: Bruno Mahé > Labels: bigtop > > "mvn clean package -Dsrc" is supposed to create a source tarball in > hadoop-dist/target/ but some interactions with the clean target will prevent > this from happening. > The following would happen: > * -Dsrc makes maven create a source tarball in hadoop-dist/target/ for the > root module as well as each submodule > 1. The root module would first create a source tarball correctly in > hadoop-dist/target/ > 2. Each submodule will also create a source tarball for its module in > /hadoop-dist/target/. But not before executing the clean target and > therefore deleting the main source tarball created in step 1.
[jira] [Commented] (HADOOP-9383) mvn clean compile fails without install goal
[ https://issues.apache.org/jira/browse/HADOOP-9383?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16134989#comment-16134989 ] Andras Bokor commented on HADOOP-9383: -- I cannot reproduce it on Mac. Does anybody still face this issue? > mvn clean compile fails without install goal > > > Key: HADOOP-9383 > URL: https://issues.apache.org/jira/browse/HADOOP-9383 > Project: Hadoop Common > Issue Type: Bug >Affects Versions: 3.0.0-alpha1 >Reporter: Arpit Agarwal > > 'mvn -Pnative-win clean compile' fails with the following error: > [ERROR] Could not find goal 'protoc' in plugin > org.apache.hadoop:hadoop-maven-plugins:3.0.0-SNAPSHOT among available goals > -> [Help 1] > The build succeeds if the install goal is specified.
[jira] [Commented] (HADOOP-14705) Add batched reencryptEncryptedKey interface to KMS
[ https://issues.apache.org/jira/browse/HADOOP-14705?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16134806#comment-16134806 ] Hadoop QA commented on HADOOP-14705:
-1 overall

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 0m 17s | Docker mode activated. |
|| || || || Prechecks ||
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 3 new or modified test files. |
|| || || || trunk Compile Tests ||
| 0 | mvndep | 0m 15s | Maven dependency ordering for branch |
| +1 | mvninstall | 13m 46s | trunk passed |
| +1 | compile | 14m 8s | trunk passed |
| +1 | checkstyle | 0m 40s | trunk passed |
| +1 | mvnsite | 2m 16s | trunk passed |
| +1 | findbugs | 1m 54s | trunk passed |
| +1 | javadoc | 1m 10s | trunk passed |
|| || || || Patch Compile Tests ||
| 0 | mvndep | 0m 9s | Maven dependency ordering for patch |
| +1 | mvninstall | 0m 54s | the patch passed |
| +1 | compile | 10m 35s | the patch passed |
| +1 | javac | 10m 35s | the patch passed |
| +1 | checkstyle | 0m 41s | hadoop-common-project: The patch generated 0 new + 149 unchanged - 3 fixed = 149 total (was 152) |
| +1 | mvnsite | 2m 16s | the patch passed |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | findbugs | 2m 10s | the patch passed |
| +1 | javadoc | 1m 8s | the patch passed |
|| || || || Other Tests ||
| -1 | unit | 8m 6s | hadoop-common in the patch failed. |
| +1 | unit | 3m 2s | hadoop-kms in the patch passed. |
| +1 | asflicense | 0m 28s | The patch does not generate ASF License warnings. |
| | | 68m 12s | |

|| Reason || Tests ||
| Failed junit tests | hadoop.security.TestKDiag |

|| Subsystem || Report/Notes ||
| Docker | Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HADOOP-14705 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12882821/HADOOP-14705.10.patch |
| Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle |
| uname | Linux 7cda21a3000d 3.13.0-117-generic #164-Ubuntu SMP Fri Apr 7 11:05:26 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 267e19a |
| Default Java | 1.8.0_144 |
| findbugs | v3.1.0-RC1 |
| unit | https://builds.apache.org/job/PreCommit-HADOOP-Build/13079/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common.txt |
| Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/13079/testReport/ |
| modules | C: hadoop-common-project/hadoop-common hadoop-common-project/hadoop-kms U: hadoop-common-project |
| Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/13079/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT |
[jira] [Commented] (HADOOP-14792) Package on windows fail
[ https://issues.apache.org/jira/browse/HADOOP-14792?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16134802#comment-16134802 ] Ajith S commented on HADOOP-14792: -- As Cygwin does not have the dos2unix command installed by default, we can strip the carriage returns with the awk or tr command, as below:
{noformat}
/cygdrive/d/hadoop/code/hadoop
$ D:/hadoop/code/hadoop/hadoop-project/../dev-support/bin/dist-copynativelibs --version=3.0.0-beta1-SNAPSHOT --builddir=D:\hadoop\code\hadoop\hadoop-project-dist\target --artifactid=hadoop-project-dist --isalbundle=false --isallib= --openssllib= --opensslbinbundle=false --openssllibbundle=false --snappybinbundle=false --snappylib= --snappylibbundle=false --zstdbinbundle=false --zstdlib= --zstdlibbundle=false
D:/hadoop/code/hadoop/hadoop-project/../dev-support/bin/dist-copynativelibs: line 16: $'\r': command not found
: invalid option name/hadoop-project/../dev-support/bin/dist-copynativelibs: line 17: set: pipefail
D:/hadoop/code/hadoop/hadoop-project/../dev-support/bin/dist-copynativelibs: line 18: $'\r': command not found
D:/hadoop/code/hadoop/hadoop-project/../dev-support/bin/dist-copynativelibs: line 21: syntax error near unexpected token `$'\r''
':/hadoop/code/hadoop/hadoop-project/../dev-support/bin/dist-copynativelibs: line 21: `function bundle_native_lib()
/cygdrive/d/hadoop/code/hadoop
$ tr -d '\15\32' < D:/hadoop/code/hadoop/hadoop-project/../dev-support/bin/dist-copynativelibs > D:/hadoop/code/hadoop/hadoop-project/../dev-support/bin/dist-copynativelibs
/cygdrive/d/hadoop/code/hadoop
$ D:/hadoop/code/hadoop/hadoop-project/../dev-support/bin/dist-copynativelibs --version=3.0.0-beta1-SNAPSHOT --builddir=D:\hadoop\code\hadoop\hadoop-project-dist\target --artifactid=hadoop-project-dist --isalbundle=false --isallib= --openssllib= --opensslbinbundle=false --openssllibbundle=false --snappybinbundle=false --snappylib= --snappylibbundle=false --zstdbinbundle=false --zstdlib= --zstdlibbundle=false
/cygdrive/d/hadoop/code/hadoop{noformat} > Package on windows fail > --- > > Key: HADOOP-14792 > URL: https://issues.apache.org/jira/browse/HADOOP-14792 > Project: Hadoop Common > Issue Type: Bug >Reporter: Ajith S >Assignee: Ajith S > Attachments: packagefail.png > > > {{mvn package -Pdist -Pnative-win -DskipTests -Dtar > -Dmaven.javadoc.skip=true}} > command fails on windows > this is because > dev-support/bin/dist-copynativelibs needs dos2unix conversion > to avoid the failure, we can add the conversion before bash executes it
[jira] [Updated] (HADOOP-14792) Package on windows fail
[ https://issues.apache.org/jira/browse/HADOOP-14792?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ajith S updated HADOOP-14792: - Attachment: packagefail.png > Package on windows fail > --- > > Key: HADOOP-14792 > URL: https://issues.apache.org/jira/browse/HADOOP-14792 > Project: Hadoop Common > Issue Type: Bug >Reporter: Ajith S >Assignee: Ajith S > Attachments: packagefail.png > > > {{mvn package -Pdist -Pnative-win -DskipTests -Dtar > -Dmaven.javadoc.skip=true}} > command fails on windows > this is because > dev-support/bin/dist-copynativelibs needs dos2unix conversion > to avoid the failure, we can add the conversion before bash executes it
[jira] [Commented] (HADOOP-14194) Aliyun OSS should not use empty endpoint as default
[ https://issues.apache.org/jira/browse/HADOOP-14194?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16134747#comment-16134747 ] Hudson commented on HADOOP-14194: - SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #12217 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/12217/]) HADOOP-14194. Aliyun OSS should not use empty endpoint as default. (kai.zheng: rev 267e19a09f366a965b30c8d4dc75e377b0d92fff) * (edit) hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSFileSystemStore.java > Aliyun OSS should not use empty endpoint as default > --- > > Key: HADOOP-14194 > URL: https://issues.apache.org/jira/browse/HADOOP-14194 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/oss >Reporter: Mingliang Liu >Assignee: Genmao Yu > Fix For: 3.0.0-beta1 > > Attachments: HADOOP-14194.000.patch, HADOOP-14194.001.patch > > > In {{AliyunOSSFileSystemStore::initialize()}}, it retrieves the endPoint, > using the empty string as the default value. > {code} > String endPoint = conf.getTrimmed(ENDPOINT_KEY, ""); > {code} > The plain value is passed to OSSClient without validation. 
If the endPoint is > not provided (empty string) or the endPoint is not valid, users will get > exception from Aliyun OSS sdk with raw exception message like: > {code} > java.lang.IllegalArgumentException: java.net.URISyntaxException: Expected > authority at index 8: https:// > at com.aliyun.oss.OSSClient.toURI(OSSClient.java:359) > at com.aliyun.oss.OSSClient.setEndpoint(OSSClient.java:313) > at com.aliyun.oss.OSSClient.(OSSClient.java:297) > at > org.apache.hadoop.fs.aliyun.oss.AliyunOSSFileSystemStore.initialize(AliyunOSSFileSystemStore.java:134) > at > org.apache.hadoop.fs.aliyun.oss.AliyunOSSFileSystem.initialize(AliyunOSSFileSystem.java:272) > at > org.apache.hadoop.fs.aliyun.oss.AliyunOSSTestUtils.createTestFileSystem(AliyunOSSTestUtils.java:63) > at > org.apache.hadoop.fs.aliyun.oss.TestAliyunOSSFileSystemContract.setUp(TestAliyunOSSFileSystemContract.java:47) > at junit.framework.TestCase.runBare(TestCase.java:139) > at junit.framework.TestResult$1.protect(TestResult.java:122) > at junit.framework.TestResult.runProtected(TestResult.java:142) > at junit.framework.TestResult.run(TestResult.java:125) > at junit.framework.TestCase.run(TestCase.java:129) > at junit.framework.TestSuite.runTest(TestSuite.java:255) > at junit.framework.TestSuite.run(TestSuite.java:250) > at > org.junit.internal.runners.JUnit38ClassRunner.run(JUnit38ClassRunner.java:84) > at org.junit.runner.JUnitCore.run(JUnitCore.java:160) > at > com.intellij.junit4.JUnit4IdeaTestRunner.startRunnerWithArgs(JUnit4IdeaTestRunner.java:68) > at > com.intellij.rt.execution.junit.IdeaTestRunner$Repeater.startRunnerWithArgs(IdeaTestRunner.java:51) > at > com.intellij.rt.execution.junit.JUnitStarter.prepareStreamsAndStart(JUnitStarter.java:237) > at > com.intellij.rt.execution.junit.JUnitStarter.main(JUnitStarter.java:70) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > at > 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:497) > at com.intellij.rt.execution.application.AppMain.main(AppMain.java:147) > Caused by: java.net.URISyntaxException: Expected authority at index 8: > https:// > at java.net.URI$Parser.fail(URI.java:2848) > at java.net.URI$Parser.failExpecting(URI.java:2854) > at java.net.URI$Parser.parseHierarchical(URI.java:3102) > at java.net.URI$Parser.parse(URI.java:3053) > at java.net.URI.(URI.java:588) > at com.aliyun.oss.OSSClient.toURI(OSSClient.java:357) > {code} > Let's check that the endPoint is not null or empty, catch the IllegalArgumentException > and log it, wrapping the exception with a clearer message stating the > misconfiguration in the endpoint or credentials.
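The validation the description asks for can be sketched with plain java.net.URI. The method name and error messages below are illustrative, not the actual patch; the configuration key fs.oss.endpoint is an assumption based on the hadoop-aliyun module's conventions.

```java
import java.net.URI;
import java.net.URISyntaxException;

public class EndpointCheck {
    /**
     * Validate an OSS-style endpoint before handing it to a client.
     * Throws IllegalArgumentException with a configuration-oriented message
     * instead of surfacing the SDK's raw URISyntaxException.
     */
    static URI validateEndpoint(String endPoint) {
        if (endPoint == null || endPoint.trim().isEmpty()) {
            throw new IllegalArgumentException(
                "Aliyun OSS endpoint is not configured; set fs.oss.endpoint");
        }
        // The SDK prefixes bare hostnames with a scheme; mirror that here.
        String spec = endPoint.contains("://") ? endPoint : "https://" + endPoint;
        try {
            URI uri = new URI(spec);
            if (uri.getHost() == null) {
                throw new IllegalArgumentException(
                    "Aliyun OSS endpoint has no host: " + endPoint);
            }
            return uri;
        } catch (URISyntaxException e) {
            throw new IllegalArgumentException(
                "Invalid Aliyun OSS endpoint: " + endPoint, e);
        }
    }

    public static void main(String[] args) {
        // A well-formed endpoint parses and exposes its host.
        System.out.println(validateEndpoint("oss-cn-hangzhou.aliyuncs.com").getHost());
    }
}
```

With this guard, a missing endpoint fails fast with a message pointing at the configuration rather than the "Expected authority at index 8: https://" stack trace quoted above.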
[jira] [Updated] (HADOOP-14705) Add batched reencryptEncryptedKey interface to KMS
[ https://issues.apache.org/jira/browse/HADOOP-14705?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xiao Chen updated HADOOP-14705: --- Attachment: HADOOP-14705.10.patch Thanks for the review [~jojochuang], patch 10 to address all comments. > Add batched reencryptEncryptedKey interface to KMS > -- > > Key: HADOOP-14705 > URL: https://issues.apache.org/jira/browse/HADOOP-14705 > Project: Hadoop Common > Issue Type: Improvement > Components: kms >Reporter: Xiao Chen >Assignee: Xiao Chen > Attachments: HADOOP-14705.01.patch, HADOOP-14705.02.patch, > HADOOP-14705.03.patch, HADOOP-14705.04.patch, HADOOP-14705.05.patch, > HADOOP-14705.06.patch, HADOOP-14705.07.patch, HADOOP-14705.08.patch, > HADOOP-14705.09.patch, HADOOP-14705.10.patch > > > HADOOP-13827 already enabled the KMS to re-encrypt an {{EncryptedKeyVersion}}. > As the performance results of HDFS-10899 show, communication overhead > with the KMS occupies the majority of the time. So this jira proposes to add > a batched interface to re-encrypt multiple EDEKs in one call.
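The thread doesn't show the batched interface itself; as a sketch of the batching idea it proposes, the client-side loop below splits the EDEK list into fixed-size batches so the per-request KMS overhead is paid once per batch rather than once per key. The reencryptBatch callback is a hypothetical stand-in for the proposed KMS call, not the actual API.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.UnaryOperator;

public class BatchReencrypt {
    /**
     * Re-encrypt items in fixed-size batches. Each apply() stands in for one
     * round trip to the KMS, so n items cost ceil(n / batchSize) calls
     * instead of n calls.
     */
    static <T> List<T> reencryptAll(List<T> edeks, int batchSize,
                                    UnaryOperator<List<T>> reencryptBatch) {
        List<T> out = new ArrayList<>(edeks.size());
        for (int i = 0; i < edeks.size(); i += batchSize) {
            List<T> batch = edeks.subList(i, Math.min(i + batchSize, edeks.size()));
            out.addAll(reencryptBatch.apply(batch)); // one KMS round trip per batch
        }
        return out;
    }

    public static void main(String[] args) {
        final int[] calls = {0};
        List<Integer> edeks = java.util.Arrays.asList(1, 2, 3, 4, 5, 6, 7, 8, 9, 10);
        reencryptAll(edeks, 4, b -> { calls[0]++; return new ArrayList<>(b); });
        System.out.println(calls[0]); // 10 items in batches of 4 -> 3 calls
    }
}
```

Ten EDEKs with a batch size of four turn into three round trips, which is the amortization the HDFS-10899 measurements motivate.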