[jira] [Updated] (HADOOP-14449) The ASF Header in ComparableVersion.java and SSLHostnameVerifier.java is not correct
[ https://issues.apache.org/jira/browse/HADOOP-14449?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ZhangBing Lin updated HADOOP-14449: --- Component/s: common > The ASF Header in ComparableVersion.java and SSLHostnameVerifier.java is not > correct > > > Key: HADOOP-14449 > URL: https://issues.apache.org/jira/browse/HADOOP-14449 > Project: Hadoop Common > Issue Type: Bug > Components: common >Affects Versions: 3.0.0-alpha3 >Reporter: ZhangBing Lin >Assignee: ZhangBing Lin >Priority: Minor > Attachments: HADOOP-14449.001.patch > > > The ASF Header in ComparableVersion.java and SSLHostnameVerifier.java is not > correct -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-14449) The ASF Header in ComparableVersion.java and SSLHostnameVerifier.java is not correct
[ https://issues.apache.org/jira/browse/HADOOP-14449?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ZhangBing Lin updated HADOOP-14449: --- Status: Patch Available (was: Open) > The ASF Header in ComparableVersion.java and SSLHostnameVerifier.java is not > correct > > > Key: HADOOP-14449 > URL: https://issues.apache.org/jira/browse/HADOOP-14449 > Project: Hadoop Common > Issue Type: Bug >Affects Versions: 3.0.0-alpha3 >Reporter: ZhangBing Lin >Assignee: ZhangBing Lin >Priority: Minor > Attachments: HADOOP-14449.001.patch > > > The ASF Header in ComparableVersion.java and SSLHostnameVerifier.java is not > correct -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-14449) The ASF Header in ComparableVersion.java and SSLHostnameVerifier.java is not correct
[ https://issues.apache.org/jira/browse/HADOOP-14449?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ZhangBing Lin updated HADOOP-14449: --- Attachment: HADOOP-14449.001.patch > The ASF Header in ComparableVersion.java and SSLHostnameVerifier.java is not > correct > > > Key: HADOOP-14449 > URL: https://issues.apache.org/jira/browse/HADOOP-14449 > Project: Hadoop Common > Issue Type: Bug >Affects Versions: 3.0.0-alpha3 >Reporter: ZhangBing Lin >Assignee: ZhangBing Lin >Priority: Minor > Attachments: HADOOP-14449.001.patch > > > The ASF Header in ComparableVersion.java and SSLHostnameVerifier.java is not > correct -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-14449) The ASF Header in ComparableVersion.java and SSLHostnameVerifier.java is not correct
[ https://issues.apache.org/jira/browse/HADOOP-14449?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ZhangBing Lin updated HADOOP-14449: --- Status: Open (was: Patch Available) > The ASF Header in ComparableVersion.java and SSLHostnameVerifier.java is not > correct > > > Key: HADOOP-14449 > URL: https://issues.apache.org/jira/browse/HADOOP-14449 > Project: Hadoop Common > Issue Type: Bug >Affects Versions: 3.0.0-alpha3 >Reporter: ZhangBing Lin >Assignee: ZhangBing Lin >Priority: Minor > Attachments: HADOOP-14449.001.patch > > > The ASF Header in ComparableVersion.java and SSLHostnameVerifier.java is not > correct -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-14449) The ASF Header in ComparableVersion.java and SSLHostnameVerifier.java is not correct
[ https://issues.apache.org/jira/browse/HADOOP-14449?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ZhangBing Lin updated HADOOP-14449: --- Attachment: (was: HADOOP-14449.001.patch) > The ASF Header in ComparableVersion.java and SSLHostnameVerifier.java is not > correct > > > Key: HADOOP-14449 > URL: https://issues.apache.org/jira/browse/HADOOP-14449 > Project: Hadoop Common > Issue Type: Bug >Affects Versions: 3.0.0-alpha3 >Reporter: ZhangBing Lin >Assignee: ZhangBing Lin >Priority: Minor > Attachments: HADOOP-14449.001.patch > > > The ASF Header in ComparableVersion.java and SSLHostnameVerifier.java is not > correct -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-14452) Add adl and aliyun to cloud storage module
[ https://issues.apache.org/jira/browse/HADOOP-14452?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16022344#comment-16022344 ] Mingliang Liu commented on HADOOP-14452: We may link or resolve [HADOOP-14122] here. > Add adl and aliyun to cloud storage module > -- > > Key: HADOOP-14452 > URL: https://issues.apache.org/jira/browse/HADOOP-14452 > Project: Hadoop Common > Issue Type: Improvement > Components: fs/adl, fs/oss >Affects Versions: 3.0.0-alpha2 >Reporter: Wei-Chiu Chuang > > HADOOP-13687 added the then-existing cloud connector file systems: aws, azure and > openstack to a new module hadoop-cloud-storage. Azure Data Lake and Aliyun > were not included in it. > I think we should add them too. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Created] (HADOOP-14452) Add adl and aliyun to cloud storage module
Wei-Chiu Chuang created HADOOP-14452: Summary: Add adl and aliyun to cloud storage module Key: HADOOP-14452 URL: https://issues.apache.org/jira/browse/HADOOP-14452 Project: Hadoop Common Issue Type: Improvement Components: fs/adl, fs/oss Affects Versions: 3.0.0-alpha2 Reporter: Wei-Chiu Chuang HADOOP-13687 added the then-existing cloud connector file systems: aws, azure and openstack to a new module hadoop-cloud-storage. Azure Data Lake and Aliyun were not included in it. I think we should add them too. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-14407) DistCp - Introduce a configurable copy buffer size
[ https://issues.apache.org/jira/browse/HADOOP-14407?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16022336#comment-16022336 ] Yongjun Zhang commented on HADOOP-14407: Welcome [~omkarksa]. My bad, I did not catch an issue in your branch-2 patch in time. {code} -- Running org.apache.hadoop.tools.TestDistCpOptions Tests run: 24, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 0.266 sec <<< FAILURE! - in org.apache.hadoop.tools.TestDistCpOptions testToString(org.apache.hadoop.tools.TestDistCpOptions) Time elapsed: 0.018 sec <<< FAILURE! org.junit.ComparisonFailure: expected:<..., filtersFile='null'[]}> but was:<..., filtersFile='null'[, blocksPerChunk=0, copyBufferSize=8192]}> at org.junit.Assert.assertEquals(Assert.java:115) at org.junit.Assert.assertEquals(Assert.java:144) at org.apache.hadoop.tools.TestDistCpOptions.testToString(TestDistCpOptions.java:317) {code} TestDistCpOptions.java is somehow missing from the branch-2 patch. Would you please create a new jira for branch-2 only and submit the patch asap? Thanks. > DistCp - Introduce a configurable copy buffer size > -- > > Key: HADOOP-14407 > URL: https://issues.apache.org/jira/browse/HADOOP-14407 > Project: Hadoop Common > Issue Type: Improvement > Components: tools/distcp >Affects Versions: 2.9.0 >Reporter: Omkar Aradhya K S >Assignee: Omkar Aradhya K S > Fix For: 2.9.0, 3.0.0-alpha3 > > Attachments: HADOOP-14407.001.patch, HADOOP-14407.002.patch, > HADOOP-14407.002.patch, HADOOP-14407.003.patch, > HADOOP-14407.004.branch2.patch, HADOOP-14407.004.patch, > HADOOP-14407.004.patch, TotalTime-vs-CopyBufferSize.jpg > > > Currently, the RetriableFileCopyCommand has a fixed copy buffer size of just > 8KB. We have noticed in our performance tests that with bigger buffer sizes > we saw up to a ~3x performance boost. Hence, making the copy buffer size a > configurable setting via the new parameter . 
-- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
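The exact parameter name is elided in the issue text above, but the effect the reporter measured — larger copy buffers giving large speedups — comes from fewer read/write calls per byte copied. The sketch below is illustrative plain Java only (it is not the actual RetriableFileCopyCommand code) and counts read calls as a proxy for per-call overhead:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

// Illustrative sketch only: why a configurable copy buffer size matters.
// A copy loop issues one read/write pair per buffer fill, so an 8 KB buffer
// needs roughly 16x more calls than a 128 KB buffer for the same data.
public class CopyBufferSketch {

    // Copies everything from in to out using the given buffer size;
    // returns the number of read calls issued (a proxy for syscall count).
    public static int copy(InputStream in, OutputStream out, int bufferSize)
            throws IOException {
        byte[] buffer = new byte[bufferSize];
        int reads = 0;
        int n;
        while ((n = in.read(buffer)) != -1) {
            out.write(buffer, 0, n);
            reads++;
        }
        return reads;
    }

    public static void main(String[] args) throws IOException {
        byte[] data = new byte[1 << 20]; // 1 MiB payload
        ByteArrayOutputStream small = new ByteArrayOutputStream();
        ByteArrayOutputStream large = new ByteArrayOutputStream();
        int readsSmall = copy(new ByteArrayInputStream(data), small, 8 * 1024);
        int readsLarge = copy(new ByteArrayInputStream(data), large, 128 * 1024);
        System.out.println("8 KB buffer reads:   " + readsSmall);
        System.out.println("128 KB buffer reads: " + readsLarge);
    }
}
```

Against a real remote filesystem each call also carries network round-trip cost, which is where the reported ~3x improvement plausibly comes from.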
[jira] [Moved] (HADOOP-14451) Deadlock in NativeIO
[ https://issues.apache.org/jira/browse/HADOOP-14451?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ajith S moved YARN-6637 to HADOOP-14451: Key: HADOOP-14451 (was: YARN-6637) Project: Hadoop Common (was: Hadoop YARN) > Deadlock in NativeIO > > > Key: HADOOP-14451 > URL: https://issues.apache.org/jira/browse/HADOOP-14451 > Project: Hadoop Common > Issue Type: Bug >Reporter: Ajith S >Assignee: Ajith S >Priority: Critical > -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-14180) FileSystem contract tests to replace JUnit 3 with 4
[ https://issues.apache.org/jira/browse/HADOOP-14180?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16022240#comment-16022240 ] Hadoop QA commented on HADOOP-14180:

-1 overall

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 0m 20s | Docker mode activated. |
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 12 new or modified test files. |
| 0 | mvndep | 0m 22s | Maven dependency ordering for branch |
| +1 | mvninstall | 15m 39s | trunk passed |
| +1 | compile | 14m 20s | trunk passed |
| +1 | checkstyle | 2m 25s | trunk passed |
| +1 | mvnsite | 4m 19s | trunk passed |
| +1 | mvneclipse | 2m 56s | trunk passed |
| -1 | findbugs | 1m 36s | hadoop-common-project/hadoop-common in trunk has 19 extant Findbugs warnings. |
| +1 | javadoc | 3m 30s | trunk passed |
| 0 | mvndep | 0m 17s | Maven dependency ordering for patch |
| +1 | mvninstall | 3m 9s | the patch passed |
| +1 | compile | 14m 6s | the patch passed |
| +1 | javac | 14m 6s | the patch passed |
| +1 | checkstyle | 1m 58s | root: The patch generated 0 new + 85 unchanged - 14 fixed = 85 total (was 99) |
| +1 | mvnsite | 4m 33s | the patch passed |
| +1 | mvneclipse | 2m 34s | the patch passed |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | findbugs | 7m 39s | the patch passed |
| +1 | javadoc | 3m 38s | the patch passed |
| -1 | unit | 7m 54s | hadoop-common in the patch failed. |
| -1 | unit | 70m 20s | hadoop-hdfs in the patch failed. |
| +1 | unit | 0m 29s | hadoop-openstack in the patch passed. |
| +1 | unit | 0m 37s | hadoop-aws in the patch passed. |
| +1 | unit | 1m 34s | hadoop-azure in the patch passed. |
| +1 | unit | 0m 29s | hadoop-aliyun in the patch passed. |
| +1 | unit | 3m 44s | hadoop-azure-datalake in the patch passed. |
| +1 | asflicense | 0m 50s | The patch does not generate ASF License warnings. |
| | | 199m 24s | |

|| Reason || Tests ||
| Failed junit tests | hadoop.ha.TestZKFailoverController |
| | hadoop.hdfs.server.namenode.ha.TestPipelinesFailover |
| | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure090 |
| | hadoop.hdfs.TestDFSRSDefault10x4StripedOutputStreamWithFailure |

|| Subsystem || Report/Notes ||
| Docker | Image:yetus/hadoop:14b5c93 |
| JIRA Issue |
[jira] [Updated] (HADOOP-14449) The ASF Header in ComparableVersion.java and SSLHostnameVerifier.java is not correct
[ https://issues.apache.org/jira/browse/HADOOP-14449?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ZhangBing Lin updated HADOOP-14449: --- Status: Open (was: Patch Available) > The ASF Header in ComparableVersion.java and SSLHostnameVerifier.java is not > correct > > > Key: HADOOP-14449 > URL: https://issues.apache.org/jira/browse/HADOOP-14449 > Project: Hadoop Common > Issue Type: Bug >Affects Versions: 3.0.0-alpha3 >Reporter: ZhangBing Lin >Assignee: ZhangBing Lin >Priority: Minor > Attachments: HADOOP-14449.001.patch > > > The ASF Header in ComparableVersion.java and SSLHostnameVerifier.java is not > correct -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-14449) The ASF Header in ComparableVersion.java and SSLHostnameVerifier.java is not correct
[ https://issues.apache.org/jira/browse/HADOOP-14449?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ZhangBing Lin updated HADOOP-14449: --- Status: Patch Available (was: Open) > The ASF Header in ComparableVersion.java and SSLHostnameVerifier.java is not > correct > > > Key: HADOOP-14449 > URL: https://issues.apache.org/jira/browse/HADOOP-14449 > Project: Hadoop Common > Issue Type: Bug >Affects Versions: 3.0.0-alpha3 >Reporter: ZhangBing Lin >Assignee: ZhangBing Lin >Priority: Minor > Attachments: HADOOP-14449.001.patch > > > The ASF Header in ComparableVersion.java and SSLHostnameVerifier.java is not > correct -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-14180) FileSystem contract tests to replace JUnit 3 with 4
[ https://issues.apache.org/jira/browse/HADOOP-14180?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xiaobing Zhou updated HADOOP-14180: --- Attachment: HADOOP-14180.003.patch > FileSystem contract tests to replace JUnit 3 with 4 > --- > > Key: HADOOP-14180 > URL: https://issues.apache.org/jira/browse/HADOOP-14180 > Project: Hadoop Common > Issue Type: Bug > Components: fs >Reporter: Mingliang Liu >Assignee: Xiaobing Zhou > Labels: test > Attachments: HADOOP-14180.000.patch, HADOOP-14180.001.patch, > HADOOP-14180.002.patch, HADOOP-14180.003.patch > > > This is from discussion in [HADOOP-14170], as Steve commented: > {quote} > ...it's time to move this to JUnit 4, annotate all tests with @test, and make > the test cases skip if they don't have the test FS defined. JUnit 3 doesn't > support Assume, so when I do test runs without the s3n or s3 fs specced, I > get lots of errors I just ignore. > ...Move to Junit 4, and, in our own code, find everywhere we've subclassed a > method to make the test a no-op, and insert an Assume.assumeTrue(false) in > there so they skip properly. > {quote} -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-14180) FileSystem contract tests to replace JUnit 3 with 4
[ https://issues.apache.org/jira/browse/HADOOP-14180?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16022033#comment-16022033 ] Xiaobing Zhou commented on HADOOP-14180: Posted v3 to fix it, thanks [~ajisakaa] > FileSystem contract tests to replace JUnit 3 with 4 > --- > > Key: HADOOP-14180 > URL: https://issues.apache.org/jira/browse/HADOOP-14180 > Project: Hadoop Common > Issue Type: Bug > Components: fs >Reporter: Mingliang Liu >Assignee: Xiaobing Zhou > Labels: test > Attachments: HADOOP-14180.000.patch, HADOOP-14180.001.patch, > HADOOP-14180.002.patch, HADOOP-14180.003.patch > > > This is from discussion in [HADOOP-14170], as Steve commented: > {quote} > ...it's time to move this to JUnit 4, annotate all tests with @test, and make > the test cases skip if they don't have the test FS defined. JUnit 3 doesn't > support Assume, so when I do test runs without the s3n or s3 fs specced, I > get lots of errors I just ignore. > ...Move to Junit 4, and, in our own code, find everywhere we've subclassed a > method to make the test a no-op, and insert an Assume.assumeTrue(false) in > there so they skip properly. > {quote} -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
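The migration quoted in HADOOP-14180 relies on JUnit 4's assumption mechanism: an unmet assumption marks a test "skipped" rather than "failed". The sketch below illustrates those semantics in pure JDK Java so it is self-contained; real test code would simply call org.junit.Assume.assumeTrue inside an @Test method, and the stand-in exception and runner here are hypothetical:

```java
// Pure-JDK illustration of the JUnit 4 Assume semantics discussed above:
// a failed assumption marks the test "skipped" rather than "failed".
// (Real code would use org.junit.Assume.assumeTrue inside an @Test method.)
public class AssumeSketch {

    // Stand-in for org.junit.AssumptionViolatedException.
    static class AssumptionViolated extends RuntimeException {
        AssumptionViolated(String msg) { super(msg); }
    }

    public static void assumeTrue(String message, boolean condition) {
        if (!condition) throw new AssumptionViolated(message);
    }

    // Runs a test body and classifies the outcome the way JUnit 4 does.
    public static String run(Runnable testBody) {
        try {
            testBody.run();
            return "PASSED";
        } catch (AssumptionViolated e) {
            return "SKIPPED";   // unmet assumption: not an error
        } catch (AssertionError e) {
            return "FAILED";
        }
    }

    public static void main(String[] args) {
        boolean testFsConfigured = false; // hypothetical: no s3/s3n test FS defined
        String result = run(() -> {
            assumeTrue("test fs not configured", testFsConfigured);
            // ... contract test body would go here ...
        });
        System.out.println(result); // SKIPPED, not FAILED
    }
}
```

This is exactly why the quoted comment asks for Assume.assumeTrue(false) in subclassed no-op tests: the run then reports them as skipped instead of silently passing or erroring.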
[jira] [Commented] (HADOOP-14450) ADLS Python client inconsistent when used in tandem with AdlFileSystem
[ https://issues.apache.org/jira/browse/HADOOP-14450?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16021792#comment-16021792 ] Mingliang Liu commented on HADOOP-14450: I'd suggest a simple unit test to reproduce the bug (if any), or commands in FSShell. Ad hoc Python clients wrapping AdlFileSystem may have their own bugs. > ADLS Python client inconsistent when used in tandem with AdlFileSystem > -- > > Key: HADOOP-14450 > URL: https://issues.apache.org/jira/browse/HADOOP-14450 > Project: Hadoop Common > Issue Type: Bug > Components: fs/adl >Reporter: Sailesh Mukil >Assignee: Atul Sikaria > Labels: infrastructure > > Impala uses the AdlFileSystem connector to talk to ADLS. As a part of the > Impala tests, we drop tables and verify that the files belonging to that > table have been dropped for all filesystems that Impala supports. These tests > however, fail with ADLS. > If I use the Hadoop ADLS connector to delete a file, and then list the parent > directory of that file using the above Python client within the second, the > client still says that the file is available in ADLS. > This is the Python client from Microsoft that we're using in our testing: > https://github.com/Azure/azure-data-lake-store-python > Their release notes say that it's still a "pre-release preview": > https://github.com/Azure/azure-data-lake-store-python/releases > Questions for the ADLS folks: > Is this a known issue? If so, will it be fixed soon? > Or is this expected behavior? > I'm able to deterministically reproduce it in my tests, with Impala on ADLS. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
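The "simple unit test" being asked for is a delete-then-list check. The sketch below shows that shape using java.nio.file as a local, strongly consistent stand-in; a real repro would go through AdlFileSystem.delete() and then the Python client's listing instead. The class and file names here are hypothetical:

```java
import java.io.IOException;
import java.nio.file.DirectoryStream;
import java.nio.file.Files;
import java.nio.file.Path;

// Sketch of the suggested delete-then-list repro, with java.nio.file as a
// local stand-in for the filesystem under test. On a strongly consistent
// store a freshly deleted file must not appear in the parent listing.
public class DeleteThenListSketch {

    // Deletes 'file' and returns true iff it still shows up when the
    // parent directory is listed immediately afterwards.
    public static boolean visibleAfterDelete(Path file) throws IOException {
        Path parent = file.getParent();
        Files.delete(file);
        try (DirectoryStream<Path> listing = Files.newDirectoryStream(parent)) {
            for (Path entry : listing) {
                if (entry.getFileName().equals(file.getFileName())) {
                    return true; // stale entry: the inconsistency being reported
                }
            }
        }
        return false;
    }

    public static void main(String[] args) throws IOException {
        Path dir = Files.createTempDirectory("adl-repro");
        Path file = Files.createFile(dir.resolve("table-data.parq"));
        System.out.println("visible after delete: " + visibleAfterDelete(file));
    }
}
```

Run against ADLS via two different clients, a true result here would confirm the reported listing lag rather than an Impala bug.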
[jira] [Commented] (HADOOP-14426) Upgrade Kerby version from 1.0.0-RC2 to 1.0.0
[ https://issues.apache.org/jira/browse/HADOOP-14426?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16021763#comment-16021763 ] Wei-Chiu Chuang commented on HADOOP-14426: -- Bump the priority to blocker. Kerby 1.0 fixed a critical bug in multiple SPN support, so I believe this deserves special treatment. If no one objects, I will commit the rev01 patch by end of tomorrow. > Upgrade Kerby version from 1.0.0-RC2 to 1.0.0 > - > > Key: HADOOP-14426 > URL: https://issues.apache.org/jira/browse/HADOOP-14426 > Project: Hadoop Common > Issue Type: Improvement > Components: security >Reporter: Jiajia Li >Assignee: Jiajia Li >Priority: Blocker > Attachments: HADOOP-14426-001.patch > > > Apache Kerby 1.0.0 with some bug fixes. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Resolved] (HADOOP-14330) Kerby breaks multiple SPN support
[ https://issues.apache.org/jira/browse/HADOOP-14330?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Wei-Chiu Chuang resolved HADOOP-14330. -- Resolution: Duplicate This one duplicates HADOOP-14426. That jira has the patch pending and this one doesn't. So I'll mark this as a dup. > Kerby breaks multiple SPN support > - > > Key: HADOOP-14330 > URL: https://issues.apache.org/jira/browse/HADOOP-14330 > Project: Hadoop Common > Issue Type: Bug > Components: security >Affects Versions: 3.0.0-alpha1 >Reporter: Wei-Chiu Chuang >Priority: Blocker > > Hadoop 3 currently pulls in Kerby 1.0.0 RC2. Via downstream application > tests, we found a bug in this version that breaks multiple SPN support > implemented in HADOOP-10158, because Kerby can't read keytabs generated by > some applications, and it gets malformed SPNs. The bug is fixed by > DIRKRB-621, targeting Kerby 1.0.0 GA release. > I also verified this regression is fixed in latest Kerby, by having my local > Hadoop repo depend on my local 1.0.0-RC3-SNAPSHOT Kerby artifacts. So the > easiest fix is to wait for a Kerby 1.0 RC3/GA, and update Hadoop pom.xml to > depend on Kerby 1.0 GA. > The planned alpha 3 release is approaching, so I don't anticipate this can > get resolved by then. Beta1 is probably a more realistic target version. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-14426) Upgrade Kerby version from 1.0.0-RC2 to 1.0.0
[ https://issues.apache.org/jira/browse/HADOOP-14426?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Wei-Chiu Chuang updated HADOOP-14426: - Priority: Blocker (was: Major) > Upgrade Kerby version from 1.0.0-RC2 to 1.0.0 > - > > Key: HADOOP-14426 > URL: https://issues.apache.org/jira/browse/HADOOP-14426 > Project: Hadoop Common > Issue Type: Improvement > Components: security >Reporter: Jiajia Li >Assignee: Jiajia Li >Priority: Blocker > Attachments: HADOOP-14426-001.patch > > > Apache Kerby 1.0.0 with some bug fixes. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-14450) ADLS Python client inconsistent when used in tandem with AdlFileSystem
[ https://issues.apache.org/jira/browse/HADOOP-14450?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16021753#comment-16021753 ] Steve Loughran commented on HADOOP-14450: - sounds like an impala bug, not Hadoop > ADLS Python client inconsistent when used in tandem with AdlFileSystem > -- > > Key: HADOOP-14450 > URL: https://issues.apache.org/jira/browse/HADOOP-14450 > Project: Hadoop Common > Issue Type: Bug > Components: fs/adl >Reporter: Sailesh Mukil >Assignee: Atul Sikaria > Labels: infrastructure > > Impala uses the AdlFileSystem connector to talk to ADLS. As a part of the > Impala tests, we drop tables and verify that the files belonging to that > table have been dropped for all filesystems that Impala supports. These tests > however, fail with ADLS. > If I use the Hadoop ADLS connector to delete a file, and then list the parent > directory of that file using the above Python client within the second, the > client still says that the file is available in ADLS. > This is the Python client from Microsoft that we're using in our testing: > https://github.com/Azure/azure-data-lake-store-python > Their release notes say that it's still a "pre-release preview": > https://github.com/Azure/azure-data-lake-store-python/releases > Questions for the ADLS folks: > Is this a known issue? If so, will it be fixed soon? > Or is this expected behavior? > I'm able to deterministically reproduce it in my tests, with Impala on ADLS. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-13921) Remove Log4j classes from JobConf
[ https://issues.apache.org/jira/browse/HADOOP-13921?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16021723#comment-16021723 ] Sean Busbey commented on HADOOP-13921: -- If folks would like me to come up with a way of testing for log4j classes or some such, let me know. Otherwise I don't think this kind of removal warrants a test case specific to it. > Remove Log4j classes from JobConf > - > > Key: HADOOP-13921 > URL: https://issues.apache.org/jira/browse/HADOOP-13921 > Project: Hadoop Common > Issue Type: Sub-task > Components: conf >Affects Versions: 3.0.0-alpha2 >Reporter: Sean Busbey >Assignee: Sean Busbey >Priority: Critical > Attachments: HADOOP-13921.0.patch, HADOOP-13921.1.patch > > > Replace the use of log4j classes from JobConf so that the dependency is not > needed unless folks are making use of our custom log4j appenders or loading a > logging bridge to use that system. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-14450) ADLS Python client inconsistent when used in tandem with AdlFileSystem
[ https://issues.apache.org/jira/browse/HADOOP-14450?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16021720#comment-16021720 ] John Zhuge commented on HADOOP-14450: - [~ASikaria], could you please take a look? > ADLS Python client inconsistent when used in tandem with AdlFileSystem > -- > > Key: HADOOP-14450 > URL: https://issues.apache.org/jira/browse/HADOOP-14450 > Project: Hadoop Common > Issue Type: Bug > Components: fs/adl >Reporter: Sailesh Mukil >Assignee: Atul Sikaria > Labels: infrastructure > > Impala uses the AdlFileSystem connector to talk to ADLS. As a part of the > Impala tests, we drop tables and verify that the files belonging to that > table have been dropped for all filesystems that Impala supports. These tests > however, fail with ADLS. > If I use the Hadoop ADLS connector to delete a file, and then list the parent > directory of that file using the above Python client within the second, the > client still says that the file is available in ADLS. > This is the Python client from Microsoft that we're using in our testing: > https://github.com/Azure/azure-data-lake-store-python > Their release notes say that it's still a "pre-release preview": > https://github.com/Azure/azure-data-lake-store-python/releases > Questions for the ADLS folks: > Is this a known issue? If so, will it be fixed soon? > Or is this expected behavior? > I'm able to deterministically reproduce it in my tests, with Impala on ADLS. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Assigned] (HADOOP-14450) ADLS Python client inconsistent when used in tandem with AdlFileSystem
[ https://issues.apache.org/jira/browse/HADOOP-14450?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] John Zhuge reassigned HADOOP-14450: --- Assignee: Atul Sikaria > ADLS Python client inconsistent when used in tandem with AdlFileSystem > -- > > Key: HADOOP-14450 > URL: https://issues.apache.org/jira/browse/HADOOP-14450 > Project: Hadoop Common > Issue Type: Bug > Components: fs/adl >Reporter: Sailesh Mukil >Assignee: Atul Sikaria > Labels: infrastructure > > Impala uses the AdlFileSystem connector to talk to ADLS. As a part of the > Impala tests, we drop tables and verify that the files belonging to that > table have been dropped for all filesystems that Impala supports. These tests > however, fail with ADLS. > If I use the Hadoop ADLS connector to delete a file, and then list the parent > directory of that file using the above Python client within the second, the > client still says that the file is available in ADLS. > This is the Python client from Microsoft that we're using in our testing: > https://github.com/Azure/azure-data-lake-store-python > Their release notes say that it's still a "pre-release preview": > https://github.com/Azure/azure-data-lake-store-python/releases > Questions for the ADLS folks: > Is this a known issue? If so, will it be fixed soon? > Or is this expected behavior? > I'm able to deterministically reproduce it in my tests, with Impala on ADLS. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-14441) LoadBalancingKMSClientProvider#addDelegationTokens should add delegation tokens from all KMS instances
[ https://issues.apache.org/jira/browse/HADOOP-14441?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16021687#comment-16021687 ] Yongjun Zhang commented on HADOOP-14441: Thanks for working on the issue here, guys. Hi [~shahrs87], it seems your patch fits HADOOP-14445 better. If HADOOP-14445 works compatibly, we may not need HADOOP-14441. Would you please post your patch there even though you are polishing the test now? Thanks. > LoadBalancingKMSClientProvider#addDelegationTokens should add delegation > tokens from all KMS instances > -- > > Key: HADOOP-14441 > URL: https://issues.apache.org/jira/browse/HADOOP-14441 > Project: Hadoop Common > Issue Type: Bug > Components: kms >Affects Versions: 2.7.0 > Environment: CDH5.7.4, Kerberized, SSL, KMS-HA, at rest encryption >Reporter: Wei-Chiu Chuang >Assignee: Wei-Chiu Chuang > Attachments: HADOOP-14441.001.patch, HADOOP-14441.002.patch, > HADOOP-14441.003.patch > > > LoadBalancingKMSClientProvider only gets delegation token from one KMS > instance, in a round-robin fashion. This is arguably a bug, as JavaDoc for > {{KeyProviderDelegationTokenExtension#addDelegationTokens}} states: > {quote} > /** > * The implementer of this class will take a renewer and add all > * delegation tokens associated with the renewer to the > * Credentials object if it is not already present, > ... > **/ > {quote} > This bug doesn't pop up very often, because HDFS clients such as MapReduce > unintentionally call {{FileSystem#addDelegationTokens}} multiple times. > We have a custom client that accesses HDFS/KMS-HA using delegation token, and > we were puzzled why it always throws "Failed to find any Kerberos tgt" > exceptions talking to one KMS but not the other. Turns out that client > couldn't talk to the KMS because {{FileSystem#addDelegationTokens}} only gets > one KMS delegation token at a time. 
-- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Created] (HADOOP-14450) ADLS Python client inconsistent when used in tandem with AdlFileSystem
Sailesh Mukil created HADOOP-14450: -- Summary: ADLS Python client inconsistent when used in tandem with AdlFileSystem Key: HADOOP-14450 URL: https://issues.apache.org/jira/browse/HADOOP-14450 Project: Hadoop Common Issue Type: Bug Components: fs/adl Reporter: Sailesh Mukil Impala uses the AdlFileSystem connector to talk to ADLS. As part of the Impala tests, we drop tables and verify that the files belonging to that table have been dropped, for all filesystems that Impala supports. These tests, however, fail with ADLS. If I use the Hadoop ADLS connector to delete a file, and then list the parent directory of that file using the Python client linked below within the same second, the client still says that the file is available in ADLS. This is the Python client from Microsoft that we're using in our testing: https://github.com/Azure/azure-data-lake-store-python Their release notes say that it's still a "pre-release preview": https://github.com/Azure/azure-data-lake-store-python/releases Questions for the ADLS folks: Is this a known issue? If so, will it be fixed soon? Or is this expected behavior? I'm able to deterministically reproduce it in my tests, with Impala on ADLS.
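Until the consistency question is settled, test harnesses typically work around this kind of read-after-delete lag by polling with a deadline instead of asserting immediately. A generic, filesystem-agnostic sketch; the helper name, timings, and the simulated condition in `main` are all assumptions, not part of any Hadoop or ADLS API.

```java
import java.util.function.BooleanSupplier;

public class EventualConsistency {
    // Poll a condition until it holds or a deadline passes; returns whether
    // the condition became true in time. In an ADLS test the condition
    // would be "the deleted file no longer appears in the listing".
    static boolean awaitCondition(BooleanSupplier condition,
                                  long timeoutMillis, long pollMillis)
            throws InterruptedException {
        long deadline = System.currentTimeMillis() + timeoutMillis;
        while (!condition.getAsBoolean()) {
            if (System.currentTimeMillis() >= deadline) {
                return false;
            }
            Thread.sleep(pollMillis);
        }
        return true;
    }

    public static void main(String[] args) throws InterruptedException {
        // Simulate a listing that only catches up ~50ms after the delete.
        long start = System.currentTimeMillis();
        boolean fileGone = awaitCondition(
                () -> System.currentTimeMillis() - start >= 50, 1000, 10);
        System.out.println(fileGone); // true
    }
}
```

This only masks the lag in tests; it does not answer whether the client's listing should be strongly consistent with AdlFileSystem in the first place.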
[jira] [Commented] (HADOOP-14441) LoadBalancingKMSClientProvider#addDelegationTokens should add delegation tokens from all KMS instances
[ https://issues.apache.org/jira/browse/HADOOP-14441?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16021628#comment-16021628 ] Wei-Chiu Chuang commented on HADOOP-14441: -- [~shahrs87] thanks for your comments and your effort in creating the patch. I've been thinking about alternative ways to fix it, but they all turn out to be either incompatible (adding extra parameters to a public API) or unable to let a client get delegation tokens from multiple KMS clusters. If your patch is incompatible, would you mind moving it over to HADOOP-14445 and using this one for a short-term fix? Thanks
[jira] [Commented] (HADOOP-13921) Remove Log4j classes from JobConf
[ https://issues.apache.org/jira/browse/HADOOP-13921?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16021602#comment-16021602 ] Hadoop QA commented on HADOOP-13921: -1 overall
|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 0m 18s | Docker mode activated. |
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| -1 | test4tests | 0m 0s | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. |
| 0 | mvndep | 0m 15s | Maven dependency ordering for branch |
| +1 | mvninstall | 14m 7s | trunk passed |
| +1 | compile | 13m 54s | trunk passed |
| +1 | checkstyle | 2m 0s | trunk passed |
| +1 | mvnsite | 1m 28s | trunk passed |
| +1 | mvneclipse | 1m 5s | trunk passed |
| 0 | findbugs | 0m 0s | Skipped patched modules with no Java source: hadoop-client-modules/hadoop-client-runtime |
| +1 | findbugs | 1m 50s | trunk passed |
| +1 | javadoc | 1m 17s | trunk passed |
| 0 | mvndep | 0m 18s | Maven dependency ordering for patch |
| +1 | mvninstall | 2m 45s | the patch passed |
| +1 | compile | 14m 11s | the patch passed |
| +1 | javac | 14m 11s | the patch passed |
| +1 | checkstyle | 2m 2s | the patch passed |
| +1 | mvnsite | 1m 34s | the patch passed |
| +1 | mvneclipse | 1m 15s | the patch passed |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | xml | 0m 2s | The patch has no ill-formed XML file. |
| 0 | findbugs | 0m 0s | Skipped patched modules with no Java source: hadoop-client-modules/hadoop-client-runtime |
| +1 | findbugs | 2m 8s | the patch passed |
| +1 | javadoc | 1m 23s | the patch passed |
| +1 | unit | 3m 0s | hadoop-mapreduce-client-core in the patch passed. |
| +1 | unit | 0m 59s | hadoop-mapreduce-client-common in the patch passed. |
| +1 | unit | 0m 22s | hadoop-client-runtime in the patch passed. |
| +1 | asflicense | 0m 41s | The patch does not generate ASF License warnings. |
| | | 90m 56s | |
|| Subsystem || Report/Notes ||
| Docker | Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HADOOP-13921 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12869485/HADOOP-13921.1.patch |
| Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit xml findbugs checkstyle |
| uname | Linux
[jira] [Commented] (HADOOP-14445) Delegation tokens are not shared between KMS instances
[ https://issues.apache.org/jira/browse/HADOOP-14445?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16021558#comment-16021558 ] Wei-Chiu Chuang commented on HADOOP-14445: -- Another design requirement is to make sure this works for a client accessing two different KMS clusters. > Delegation tokens are not shared between KMS instances > -- > > Key: HADOOP-14445 > URL: https://issues.apache.org/jira/browse/HADOOP-14445 > Project: Hadoop Common > Issue Type: Bug > Components: documentation, kms >Affects Versions: 2.8.0, 3.0.0-alpha1 >Reporter: Wei-Chiu Chuang > > As discovered in HADOOP-14441, KMS HA using LoadBalancingKMSClientProvider does > not share delegation tokens. (A client uses the KMS address/port as the key for > a delegation token.) > {code:title=DelegationTokenAuthenticatedURL#openConnection} > if (!creds.getAllTokens().isEmpty()) { > InetSocketAddress serviceAddr = new InetSocketAddress(url.getHost(), > url.getPort()); > Text service = SecurityUtil.buildTokenService(serviceAddr); > dToken = creds.getToken(service); > {code} > But the KMS doc states: > {quote} > Delegation Tokens > Similar to HTTP authentication, KMS uses Hadoop Authentication for delegation > tokens too. > Under HA, a KMS instance must verify the delegation token given by another > KMS instance, by checking the shared secret used to sign the delegation > token. To do this, all KMS instances must be able to retrieve the shared > secret from ZooKeeper. > {quote} > We should either update the KMS documentation, or fix this code to share > delegation tokens.
[jira] [Commented] (HADOOP-14441) LoadBalancingKMSClientProvider#addDelegationTokens should add delegation tokens from all KMS instances
[ https://issues.apache.org/jira/browse/HADOOP-14441?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16021541#comment-16021541 ] Rushabh S Shah commented on HADOOP-14441: - bq. If this helps, RM HA gets around the problem of different host:port for different RMs by setting the token's service to host1:port1,host2:port2 (which gets stored in ZK and used by both RMs). Thanks [~rkanter]. That's exactly what I am trying to do. The test case attached in the patch works on my local machine. I am trying to put together a clean patch now.
[jira] [Commented] (HADOOP-14441) LoadBalancingKMSClientProvider#addDelegationTokens should add delegation tokens from all KMS instances
[ https://issues.apache.org/jira/browse/HADOOP-14441?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16021522#comment-16021522 ] Robert Kanter commented on HADOOP-14441: If this helps, RM HA gets around the problem of different host:port for different RMs by setting the token's service to {{host1:port1,host2:port2}} (which gets stored in ZK and used by both RMs). https://github.com/apache/hadoop/blob/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/client/ClientRMProxy.java#L144
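The RM HA trick described above, one logical token whose service name lists every HA instance, can be sketched without any YARN or Hadoop dependencies. All names below are illustrative; the real logic lives in ClientRMProxy and operates on Text service names rather than plain strings.

```java
import java.util.Arrays;
import java.util.List;

public class CombinedTokenService {
    // Build one service string covering every HA instance, the way
    // ClientRMProxy joins RM addresses so a single shared token is
    // stored under one combined service name.
    static String buildTokenService(List<String> instanceAddresses) {
        return String.join(",", instanceAddresses);
    }

    // Any individual instance can then check whether the combined
    // service name covers its own address.
    static boolean covers(String combinedService, String address) {
        return Arrays.asList(combinedService.split(",")).contains(address);
    }

    public static void main(String[] args) {
        String service = buildTokenService(List.of("kms1:9600", "kms2:9600"));
        System.out.println(service);                      // kms1:9600,kms2:9600
        System.out.println(covers(service, "kms2:9600")); // true
        System.out.println(covers(service, "kms3:9600")); // false
    }
}
```

The design trade-off versus one-token-per-instance: a single token with a combined service name requires every instance to share the signing secret (as the RMs do via ZooKeeper), whereas per-instance tokens work even when instances do not share state.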
[jira] [Commented] (HADOOP-14448) Play nice with ITestS3AEncryptionSSEC
[ https://issues.apache.org/jira/browse/HADOOP-14448?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16021493#comment-16021493 ] Steve Loughran commented on HADOOP-14448: - BTW, I was thinking it's time to resync the branch... I guess this is one of the prereqs. > Play nice with ITestS3AEncryptionSSEC > - > > Key: HADOOP-14448 > URL: https://issues.apache.org/jira/browse/HADOOP-14448 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Affects Versions: HADOOP-13345 >Reporter: Sean Mackrory > > HADOOP-14035 hasn't yet been merged with HADOOP-13345, but it adds tests that > will break when run with S3Guard enabled. It expects that certain filesystem > actions will throw exceptions when the client-provided encryption key is not > configured properly, but those actions may sometimes bypass S3 entirely > thanks to S3Guard (for example, getFileStatus may not actually need to invoke > s3GetFileStatus). If the exception is never thrown, the test fails. > At a minimum we should tweak the tests so they definitely invoke S3 directly, > or just skip the offending tests when anything but the Null implementation is > in use. This also opens the larger question of whether or not S3Guard should > be serving up metadata that is otherwise only accessible when an encryption > key is provided.
[jira] [Updated] (HADOOP-14447) Backport HADOOP-13026 to branch 2.7
[ https://issues.apache.org/jira/browse/HADOOP-14447?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Inigo Goiri updated HADOOP-14447: - Status: Patch Available (was: In Progress) > Backport HADOOP-13026 to branch 2.7 > --- > > Key: HADOOP-14447 > URL: https://issues.apache.org/jira/browse/HADOOP-14447 > Project: Hadoop Common > Issue Type: Bug >Reporter: Inigo Goiri >Assignee: Inigo Goiri > Attachments: HADOOP-13026-branch-2.7.patch > > > Should not wrap IOExceptions into an AuthenticationException in > KerberosAuthenticator.
[jira] [Updated] (HADOOP-14447) Backport HADOOP-13026 to branch 2.7
[ https://issues.apache.org/jira/browse/HADOOP-14447?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Inigo Goiri updated HADOOP-14447: - Description: Should not wrap IOExceptions into an AuthenticationException in KerberosAuthenticator.
[jira] [Updated] (HADOOP-14447) Backport HADOOP-13026 to branch 2.7
[ https://issues.apache.org/jira/browse/HADOOP-14447?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Inigo Goiri updated HADOOP-14447: - Status: In Progress (was: Patch Available)
[jira] [Commented] (HADOOP-13921) Remove Log4j classes from JobConf
[ https://issues.apache.org/jira/browse/HADOOP-13921?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16021406#comment-16021406 ] Sean Busbey commented on HADOOP-13921: -- bq. Is it basically that we're removing log4j from mapred client, so if users need log4j they'll have to manually add it from now on? It was a change in type. The patch as is doesn't alter where Log4j itself shows up as a dependency. FWIW, the module this changes didn't list log4j as a dependency in the first place; it's relying on it happening to get pulled in by Maven from some other dependency. > Remove Log4j classes from JobConf > - > > Key: HADOOP-13921 > URL: https://issues.apache.org/jira/browse/HADOOP-13921 > Project: Hadoop Common > Issue Type: Sub-task > Components: conf >Affects Versions: 3.0.0-alpha2 >Reporter: Sean Busbey >Assignee: Sean Busbey >Priority: Critical > Attachments: HADOOP-13921.0.patch, HADOOP-13921.1.patch > > > Replace the use of log4j classes in JobConf so that the dependency is not > needed unless folks are making use of our custom log4j appenders or loading a > logging bridge to use that system.
[jira] [Updated] (HADOOP-13921) Remove Log4j classes from JobConf
[ https://issues.apache.org/jira/browse/HADOOP-13921?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sean Busbey updated HADOOP-13921: - Release Note: Changes the type of JobConf.DEFAULT_LOG_LEVEL from a Log4J Level to a String. Clients that referenced this field will need to be recompiled and may need to alter their source to account for the type change. The level itself remains conceptually at "INFO". Status: Patch Available (was: In Progress)
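A hedged sketch of what this release note means for client code. Only the DEFAULT_LOG_LEVEL name and its conceptual "INFO" value come from the note; the surrounding class and comparison method are illustrative, not the actual JobConf API.

```java
public class LogLevelClient {
    // After HADOOP-13921 the constant is a plain String rather than an
    // org.apache.log4j.Level, so merely referencing it no longer forces
    // Log4j onto the client's classpath.
    static final String DEFAULT_LOG_LEVEL = "INFO";

    // Clients that used to compare against Level.INFO now compare strings.
    static boolean isDefaultLevel(String configuredLevel) {
        return DEFAULT_LOG_LEVEL.equalsIgnoreCase(configuredLevel);
    }

    public static void main(String[] args) {
        System.out.println(isDefaultLevel("info"));  // true
        System.out.println(isDefaultLevel("DEBUG")); // false
    }
}
```

This is why recompilation is required: the field's type changed, so bytecode compiled against the old Level-typed field fails to link against the new String-typed one.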
[jira] [Updated] (HADOOP-13921) Remove Log4j classes from JobConf
[ https://issues.apache.org/jira/browse/HADOOP-13921?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sean Busbey updated HADOOP-13921: - Attachment: HADOOP-13921.1.patch -01 - removes references to log4j Level in javadocs
[jira] [Commented] (HADOOP-14436) The ViewFs.md's minor error about a redundant colon
[ https://issues.apache.org/jira/browse/HADOOP-14436?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16021322#comment-16021322 ] Brahma Reddy Battula commented on HADOOP-14436: --- [~maobaolong] I added you as a Hadoop contributor; from now on you can assign JIRAs to yourself. Happy to help you. > The ViewFs.md's minor error about a redundant colon > --- > > Key: HADOOP-14436 > URL: https://issues.apache.org/jira/browse/HADOOP-14436 > Project: Hadoop Common > Issue Type: Bug > Components: documentation >Affects Versions: 2.7.1, 3.0.0-alpha2 >Reporter: maobaolong >Assignee: maobaolong > > Minor mistakes can lead beginners down the wrong path and drive them away from > us.
[jira] [Assigned] (HADOOP-14436) The ViewFs.md's minor error about a redundant colon
[ https://issues.apache.org/jira/browse/HADOOP-14436?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Brahma Reddy Battula reassigned HADOOP-14436: - Assignee: maobaolong
[jira] [Commented] (HADOOP-14448) Play nice with ITestS3AEncryptionSSEC
[ https://issues.apache.org/jira/browse/HADOOP-14448?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16021250#comment-16021250 ] Steve Loughran commented on HADOOP-14448: - SSE-C is trouble; the tests are in because if you use the wrong key, you can't even do metadata operations. You cannot mix SSE-C and SSE-S3 or SSE-KMS on the same bucket. This may be the time to think about any future client-side encryption. There's a patch for that, but I've said "wait until s3guard is in", and even then I don't like it, because you can get less data back in read() calls than the declared length of the store. Everything will break.
[jira] [Commented] (HADOOP-14444) New implementation of ftp and sftp filesystems
[ https://issues.apache.org/jira/browse/HADOOP-1?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16021191#comment-16021191 ] Lukas Waldmann commented on HADOOP-1: - OK, I will try to work on it; not sure when I will get time for it, as some of your suggestions will need some work :) By moving "all ftp" stuff, do you mean both my packages and the existing ftp and sftp packages? > New implementation of ftp and sftp filesystems > -- > > Key: HADOOP-1 > URL: https://issues.apache.org/jira/browse/HADOOP-1 > Project: Hadoop Common > Issue Type: New Feature > Components: fs >Affects Versions: 2.8.0 >Reporter: Lukas Waldmann >Assignee: Lukas Waldmann > Attachments: HADOOP-1.patch > > > The current implementation of the FTP and SFTP filesystems has severe limitations > and performance issues when dealing with a high number of files. My patch > solves those issues and integrates both filesystems in such a way that most of the > core functionality is common to both, thereby simplifying maintainability. > The core features: > * Support for HTTP/SOCKS proxies > * Support for passive FTP > * Support for connection pooling - a new connection is not created for every > single command but reused from the pool. > For a huge number of files this shows an order-of-magnitude performance improvement > over unpooled connections. > * Caching of directory trees. For ftp you always need to list the whole directory > whenever you ask for information about a particular file. > Again, for a huge number of files this shows an order-of-magnitude performance > improvement over uncached connections. > * Support for keep-alive (NOOP) messages to avoid connection drops > * Support for Unix-style or regexp wildcard globs - useful for listing > particular files across a whole directory tree > * Support for reestablishing broken ftp data transfers - which can happen > surprisingly often
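The connection-pooling feature in the list above, reusing an open connection instead of dialing one per command, reduces to a small borrow/return pool. A generic sketch with no FTP library involved; the class and method names are illustrative, not taken from the patch.

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.function.Supplier;

public class ConnectionPool<C> {
    private final Deque<C> idle = new ArrayDeque<>();
    private final Supplier<C> factory;
    private int created = 0;

    ConnectionPool(Supplier<C> factory) {
        this.factory = factory;
    }

    // Reuse an idle connection when one exists; only create a new one when
    // the pool is empty. Amortizing connection setup across many small
    // commands is where the claimed order-of-magnitude win comes from.
    synchronized C borrow() {
        if (!idle.isEmpty()) {
            return idle.pop();
        }
        created++;
        return factory.get();
    }

    synchronized void release(C connection) {
        idle.push(connection);
    }

    synchronized int connectionsCreated() {
        return created;
    }

    public static void main(String[] args) {
        ConnectionPool<Object> pool = new ConnectionPool<>(Object::new);
        for (int i = 0; i < 100; i++) {  // 100 sequential "commands"
            Object c = pool.borrow();
            pool.release(c);             // returned to the pool after each one
        }
        System.out.println(pool.connectionsCreated()); // 1, not 100
    }
}
```

A real FTP pool would also need the keep-alive (NOOP) probing from the feature list, since a pooled connection can be silently dropped by the server while idle.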
[jira] [Comment Edited] (HADOOP-14430) the accessTime of FileStatus got by SFTPFileSystem's getFileStatus method is always 0
[ https://issues.apache.org/jira/browse/HADOOP-14430?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16015736#comment-16015736 ] Hongyuan Li edited comment on HADOOP-14430 at 5/23/17 1:20 PM: --- None of the findbugs/unit test warnings seem to be related to the change. [~brahmareddy] was (Author: hongyuan li): None of the findbugs/unit test warnings seem to be related to the change. > the accessTime of FileStatus got by SFTPFileSystem's getFileStatus method is > always 0 > - > > Key: HADOOP-14430 > URL: https://issues.apache.org/jira/browse/HADOOP-14430 > Project: Hadoop Common > Issue Type: Bug > Components: fs >Affects Versions: 3.0.0-alpha2 >Reporter: Hongyuan Li >Assignee: Hongyuan Li >Priority: Trivial > Attachments: HADOOP-14430-001.patch, HADOOP-14430-002.patch > > > The accessTime of the FileStatus returned by SFTPFileSystem's getFileStatus method is > always 0, because of {{long accessTime = 0}} in the code below: > {code} private FileStatus getFileStatus(ChannelSftp channel, LsEntry > sftpFile, > Path parentPath) throws IOException { > SftpATTRS attr = sftpFile.getAttrs(); >…… > long modTime = attr.getMTime() * 1000; // convert to milliseconds (this is > wrong too, according to HADOOP-14431) > long accessTime = 0; > …… > } {code}
[jira] [Commented] (HADOOP-14447) Backport HADOOP-13026 to branch 2.7
[ https://issues.apache.org/jira/browse/HADOOP-14447?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16021134#comment-16021134 ] Steve Loughran commented on HADOOP-14447: - Once you've had a successful yetus build: +1 from me
[jira] [Assigned] (HADOOP-14444) New implementation of ftp and sftp filesystems
[ https://issues.apache.org/jira/browse/HADOOP-1?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran reassigned HADOOP-1: --- Assignee: Lukas Waldmann
[jira] [Updated] (HADOOP-14444) New implementation of ftp and sftp filesystems
[ https://issues.apache.org/jira/browse/HADOOP-14444?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran updated HADOOP-14444: Affects Version/s: 2.8.0 Target Version/s: 3.0.0-alpha2 > New implementation of ftp and sftp filesystems > -- > > Key: HADOOP-14444 > URL: https://issues.apache.org/jira/browse/HADOOP-14444 > Project: Hadoop Common > Issue Type: New Feature > Components: fs >Affects Versions: 2.8.0 >Reporter: Lukas Waldmann > Attachments: HADOOP-14444.patch > > > The current implementation of the FTP and SFTP filesystems has severe limitations > and performance issues when dealing with a high number of files. My patch > solves those issues and integrates both filesystems so that most of the core > functionality is shared, simplifying maintainability. > The core features: > * Support for HTTP/SOCKS proxies > * Support for passive FTP > * Support for connection pooling - a new connection is not created for every > single command but is reused from the pool. > For a huge number of files this shows an order-of-magnitude performance improvement > over non-pooled connections. > * Caching of directory trees. For FTP you always need to list the whole directory > whenever you ask for information about a particular file. > Again, for a huge number of files this shows an order-of-magnitude performance > improvement over non-cached connections. > * Support for keep-alive (NOOP) messages to avoid connection drops > * Support for Unix-style or regexp wildcard globs - useful for listing particular > files across a whole directory tree > * Support for reestablishing broken FTP data transfers - which can happen > surprisingly often -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-14444) New implementation of ftp and sftp filesystems
[ https://issues.apache.org/jira/browse/HADOOP-14444?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16021012#comment-16021012 ] Steve Loughran commented on HADOOP-14444: - Hit the Submit Patch button to get a Yetus run on the patch. Given how unloved that code has been, it's good to have the new stuff. But: no more dependencies in Hadoop common. I propose # having a new {{hadoop-tools/hadoop-ftp}} module for this # Make all tests which require an FTP server integration tests, prefixed {{ITest}} and run by failsafe. Look at hadoop-aws for this, and note how the parallel execution works, *provided all tests use unique paths* # move all existing FTP stuff over there, with tests # adopt the S3 policy: [no patches without declaration of test endpoint|https://github.com/apache/hadoop/blob/trunk/hadoop-tools/hadoop-aws/src/site/markdown/tools/hadoop-aws/testing.md]. Here something like "Ubuntu 16 ftpd" would suffice # Go for full FSContractTest coverage # if we are happy, cut the older ftp client out. This'll be Hadoop 3.x only, presumably, given the use of Java 8 streams. And given the time it invariably takes to get a new FS client to stabilise (one public release, generally), I think it'd be good to find some other volunteers to help nurture this in. I am not going to do that, as I have too many other commitments in the object store space. Quick look at bits of the code, without going into the real details {code} LOGGER.debug(String.format(ErrorStrings.E_FILE_NOTFOUND, f)); {code} As well as not using the name "LOG", this is very inefficient; it formats the string even when it is not printed. Use SLF4J's own mechanism: {code} LOG.debug("File not found {}", f); {code} Some logging (e.g. {{LOGGER.info("Finish glob processing")}}) should go to debug, as it isn't relevant to users h4. {{AbstractFTPFileSystem}} L557. Don't catch and wrap here. It only loses subclass info for no benefit. If you must, use NetUtils.wrapException and add in the extra diagnostics info expected there. 
L600: {{open}} should throw {{FileNotFoundException}} on a directory. Similarly, {{create}} raises {{FileAlreadyExistsException}}. These should all be documented in the [FS spec|http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/filesystem/filesystem.html]. If you set up the FS contract tests with "Strict" exception handling, they'll make sure the exception classes match HDFS's h4. {{ConnectionInfo}} Just use Guava's {{Preconditions.checkArgument}} to validate things; it raises {{IllegalArgumentException}}, and does the string construction too. It looks like you are trying to support per-FTP-endpoint config of user & password. Take a look at what we have done with S3A with per-bucket config, all done in S3AUtils. We could maybe pull that up and make it more generic for all the FS clients: ftp and the object stores, specifically. > New implementation of ftp and sftp filesystems > -- > > Key: HADOOP-14444 > URL: https://issues.apache.org/jira/browse/HADOOP-14444 > Project: Hadoop Common > Issue Type: New Feature > Components: fs >Reporter: Lukas Waldmann > Attachments: HADOOP-14444.patch > > > The current implementation of the FTP and SFTP filesystems has severe limitations > and performance issues when dealing with a high number of files. My patch > solves those issues and integrates both filesystems so that most of the core > functionality is shared, simplifying maintainability. > The core features: > * Support for HTTP/SOCKS proxies > * Support for passive FTP > * Support for connection pooling - a new connection is not created for every > single command but is reused from the pool. > For a huge number of files this shows an order-of-magnitude performance improvement > over non-pooled connections. > * Caching of directory trees. For FTP you always need to list the whole directory > whenever you ask for information about a particular file. > Again, for a huge number of files this shows an order-of-magnitude performance > improvement over non-cached connections. 
> * Support for keep-alive (NOOP) messages to avoid connection drops > * Support for Unix-style or regexp wildcard globs - useful for listing particular > files across a whole directory tree > * Support for reestablishing broken FTP data transfers - which can happen > surprisingly often -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
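The review above argues that {{LOGGER.debug(String.format(...))}} pays the formatting cost even when debug logging is disabled, while SLF4J's {{"{}"}} placeholders defer it. The following standalone sketch (the {{LazyLog}} class, its fields, and both method names are hypothetical, not code from the patch or from SLF4J) makes the difference concrete:

```java
// Hypothetical sketch: eager vs. SLF4J-style lazy log-message construction.
public class LazyLog {
    static boolean debugEnabled = false; // stands in for the logger config
    static int formats = 0;              // counts how often we pay format cost

    // Eager style, as in the reviewed code: String.format runs even when
    // the debug message is ultimately discarded.
    static void debugEager(String fmt, Object arg) {
        String msg = String.format(fmt, arg);
        formats++;
        if (debugEnabled) {
            System.out.println(msg);
        }
    }

    // Lazy, SLF4J-style: bail out before any string construction happens.
    static void debugLazy(String template, Object arg) {
        if (!debugEnabled) {
            return;
        }
        formats++;
        System.out.println(template.replace("{}", String.valueOf(arg)));
    }

    public static void main(String[] args) {
        debugEager("File not found %s", "/tmp/a"); // pays format cost anyway
        debugLazy("File not found {}", "/tmp/a");  // costs ~nothing when off
        System.out.println("format calls with debug disabled: " + formats);
    }
}
```

With debug disabled, only the eager call does any string work; in a hot path (directory listings over thousands of files, as in this patch) that difference is measurable.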
[jira] [Commented] (HADOOP-14443) Azure: Add retry and client side failover for authorization, SASKey generation and delegation token generation requests to remote service
[ https://issues.apache.org/jira/browse/HADOOP-14443?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16020980#comment-16020980 ] Steve Loughran commented on HADOOP-14443: - # I've not reviewed the actual functionality, as that goes near UGI and the like. Someone who understands that will need to review it. # We have the same policy for Azure patches as for the other object stores: can you state which endpoint you tested against (e.g. Azure Ireland)? # Hit the "Submit Patch" button for Yetus to review General * The line-length style checker is going to be unhappy. Please cut lines down where it doesn't destroy readability. Why: it helps side-by-side review. * embrace {{Configuration.getTrimmedStrings()}} * SLF4J constructs strings only when needed, which is critical for the performance of debug log messages. Switch to the {{LOG.debug("connecting to {}", uri)}} structure. h4. {{RemoteSASKeyGeneratorImpl}} L124 {code} commaSeparatedUrlsString = conf.get(KEY_CRED_SERVICE_URLS); {code} This should use conf.getTrimmedStrings() to have the whitespace stripped and the split done, with tests for all of this. Same for {{RemoteWasbAuthorizerImpl}} and {{RemoteWasbDelegationTokenManager}} L163: can you include the URI at fault in the exception text h4. {{RemoteWasbAuthorizerImpl}} L157: Unless it's always in the inner messages, can you somehow include the URI/endpoint details in the wrapped exception. Your support team will appreciate this. L169: time to use {{}} in the javadocs. h4. {{SecureWasbRemoteCallHelper}} L143. Why not make that commented-out LOG.info an uncommented LOG.debug? L159. Use try-with-resources to manage closing of the response L174/175. Log the message at WARN, print the stack at DEBUG L217: SLF4J constructs strings only when needed; switch to the {{"connecting to {}", uri}} style. L225. Catching any Exception is overbroad. Maybe: let all IOEs pass up as-is. h4. {{JsonUtils}} L42. Again, {{LOG.debug("JSON Parsing exception: {}", e.getMessage())}} L42. 
Maybe: log at debug the errant JSON string L43. String.toLowerCase needs to specify {{Locale.ENGLISH}} to work reliably around the world > Azure: Add retry and client side failover for authorization, SASKey > generation and delegation token generation requests to remote service > - > > Key: HADOOP-14443 > URL: https://issues.apache.org/jira/browse/HADOOP-14443 > Project: Hadoop Common > Issue Type: Improvement > Components: fs/azure >Affects Versions: 2.9.0 >Reporter: Santhosh G Nayak >Assignee: Santhosh G Nayak > Fix For: 2.9.0, 3.0.0-alpha3 > > Attachments: HADOOP-14443.1.patch > > > Currently, {{WasbRemoteCallHelper}} can be configured to talk to only one URL > for authorization, SASKey generation and delegation token generation. If for > some reason the service is down, all requests will fail. > So the proposal is to: > - Add support for configuring multiple URLs, so that if communication to one URL > fails, the client can retry another instance of the service running on a > different node for authorization, SASKey generation and delegation token > generation. > - Rename the configurations {{fs.azure.authorization.remote.service.url}} to > {{fs.azure.authorization.remote.service.urls}} and > {{fs.azure.cred.service.url}} to {{fs.azure.cred.service.urls}} to support > comma-separated lists of URLs. > - Introduce a new configuration {{fs.azure.delegation.token.service.urls}} to > configure the comma-separated list of service URLs to get the delegation > token. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
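The review above recommends {{Configuration.getTrimmedStrings()}} over a raw {{conf.get()}} for the comma-separated service URLs. As a rough illustration of what that buys you, here is a standalone sketch of the trim-and-split behaviour (a hypothetical helper, not Hadoop's actual {{Configuration}} class):

```java
// Hypothetical sketch of getTrimmedStrings-style parsing: split a
// comma-separated config value and strip whitespace from each entry,
// so " url1 , url2 " and "url1,url2" are treated identically.
import java.util.Arrays;

public class TrimmedStrings {
    static String[] getTrimmedStrings(String value) {
        if (value == null || value.trim().isEmpty()) {
            return new String[0];       // missing config -> empty list
        }
        return Arrays.stream(value.split(","))
                     .map(String::trim)                 // strip spaces
                     .filter(s -> !s.isEmpty())         // drop blank entries
                     .toArray(String[]::new);
    }

    public static void main(String[] args) {
        String raw = " https://cred1.example:8080 , https://cred2.example:8080 ";
        System.out.println(Arrays.toString(getTrimmedStrings(raw)));
    }
}
```

The point of doing this in one well-tested place is exactly the reviewer's: stray whitespace in a user's {{fs.azure.*.urls}} value should never produce a malformed endpoint URI at request time.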
[jira] [Commented] (HADOOP-14449) The ASF Header in ComparableVersion.java and SSLHostnameVerifier.java is not correct
[ https://issues.apache.org/jira/browse/HADOOP-14449?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16020823#comment-16020823 ] ZhangBing Lin commented on HADOOP-14449: Looking at the compilation log, I found that the Hadoop Common module compiled successfully. Modifying Hadoop Common's ASF header alone does not cause the YARN UI module to fail to compile. The compilation failure is caused by errors compiling json and js files in [Apache Hadoop YARN UI], such as: [ERROR] bower ember#2.2.0 invalid-meta ember is missing "ignore" entry in bower.json [ERROR] bower select2#4.0.0 invalid-meta select2 is missing "ignore" entry in bower.json > The ASF Header in ComparableVersion.java and SSLHostnameVerifier.java is not > correct > > > Key: HADOOP-14449 > URL: https://issues.apache.org/jira/browse/HADOOP-14449 > Project: Hadoop Common > Issue Type: Bug >Affects Versions: 3.0.0-alpha3 >Reporter: ZhangBing Lin >Assignee: ZhangBing Lin >Priority: Minor > Attachments: HADOOP-14449.001.patch > > > The ASF Header in ComparableVersion.java and SSLHostnameVerifier.java is not > correct -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-14449) The ASF Header in ComparableVersion.java and SSLHostnameVerifier.java is not correct
[ https://issues.apache.org/jira/browse/HADOOP-14449?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16020802#comment-16020802 ] Hadoop QA commented on HADOOP-14449: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 22s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 37s{color} | {color:green} trunk passed {color} | | {color:red}-1{color} | {color:red} compile {color} | {color:red} 7m 20s{color} | {color:red} root in trunk failed. {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 48s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 6s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 20s{color} | {color:green} trunk passed {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 1m 25s{color} | {color:red} hadoop-common-project/hadoop-common in trunk has 19 extant Findbugs warnings. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 52s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 44s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 13m 34s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} javac {color} | {color:red} 13m 34s{color} | {color:red} root generated 501 new + 286 unchanged - 0 fixed = 787 total (was 286) {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 51s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 4s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 21s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 38s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 50s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 7m 35s{color} | {color:green} hadoop-common in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 35s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 56m 1s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:14b5c93 | | JIRA Issue | HADOOP-14449 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12869406/HADOOP-14449.001.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux c43af384aa4e 3.13.0-107-generic #154-Ubuntu SMP Tue Dec 20 09:57:27 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / d0f346a | | Default Java | 1.8.0_131 | | compile | https://builds.apache.org/job/PreCommit-HADOOP-Build/12377/artifact/patchprocess/branch-compile-root.txt | | findbugs | v3.1.0-RC1 | | findbugs | https://builds.apache.org/job/PreCommit-HADOOP-Build/12377/artifact/patchprocess/branch-findbugs-hadoop-common-project_hadoop-common-warnings.html | | javac | https://builds.apache.org/job/PreCommit-HADOOP-Build/12377/artifact/patchprocess/diff-compile-javac-root.txt | | Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/12377/testReport/ | | modules | C: hadoop-common-project/hadoop-common U: hadoop-common-project/hadoop-common | | Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/12377/console | | Powered by | Apache Yetus 0.5.0-SNAPSHOT http://yetus.apache.org | This message was
[jira] [Commented] (HADOOP-14449) The ASF Header in ComparableVersion.java and SSLHostnameVerifier.java is not correct
[ https://issues.apache.org/jira/browse/HADOOP-14449?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16020763#comment-16020763 ] ZhangBing Lin commented on HADOOP-14449: [~brahmareddy],thank you! > The ASF Header in ComparableVersion.java and SSLHostnameVerifier.java is not > correct > > > Key: HADOOP-14449 > URL: https://issues.apache.org/jira/browse/HADOOP-14449 > Project: Hadoop Common > Issue Type: Bug >Affects Versions: 3.0.0-alpha3 >Reporter: ZhangBing Lin >Assignee: ZhangBing Lin >Priority: Minor > Attachments: HADOOP-14449.001.patch > > > The ASF Header in ComparableVersion.java and SSLHostnameVerifier.java is not > correct -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-14449) The ASF Header in ComparableVersion.java and SSLHostnameVerifier.java is not correct
[ https://issues.apache.org/jira/browse/HADOOP-14449?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16020753#comment-16020753 ] Brahma Reddy Battula commented on HADOOP-14449: --- [~linzhangbing] thanks for reporting the issue. I've added you as a Hadoop contributor; from now on you can assign JIRAs. > The ASF Header in ComparableVersion.java and SSLHostnameVerifier.java is not > correct > > > Key: HADOOP-14449 > URL: https://issues.apache.org/jira/browse/HADOOP-14449 > Project: Hadoop Common > Issue Type: Bug >Affects Versions: 3.0.0-alpha3 >Reporter: ZhangBing Lin >Assignee: ZhangBing Lin >Priority: Minor > Attachments: HADOOP-14449.001.patch > > > The ASF Header in ComparableVersion.java and SSLHostnameVerifier.java is not > correct -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Assigned] (HADOOP-14449) The ASF Header in ComparableVersion.java and SSLHostnameVerifier.java is not correct
[ https://issues.apache.org/jira/browse/HADOOP-14449?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Brahma Reddy Battula reassigned HADOOP-14449: - Assignee: ZhangBing Lin > The ASF Header in ComparableVersion.java and SSLHostnameVerifier.java is not > correct > > > Key: HADOOP-14449 > URL: https://issues.apache.org/jira/browse/HADOOP-14449 > Project: Hadoop Common > Issue Type: Bug >Affects Versions: 3.0.0-alpha3 >Reporter: ZhangBing Lin >Assignee: ZhangBing Lin >Priority: Minor > Attachments: HADOOP-14449.001.patch > > > The ASF Header in ComparableVersion.java and SSLHostnameVerifier.java is not > correct -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-14449) The ASF Header in ComparableVersion.java and SSLHostnameVerifier.java is not correct
[ https://issues.apache.org/jira/browse/HADOOP-14449?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16020729#comment-16020729 ] ZhangBing Lin commented on HADOOP-14449: Submit a patch! > The ASF Header in ComparableVersion.java and SSLHostnameVerifier.java is not > correct > > > Key: HADOOP-14449 > URL: https://issues.apache.org/jira/browse/HADOOP-14449 > Project: Hadoop Common > Issue Type: Bug >Affects Versions: 3.0.0-alpha3 >Reporter: ZhangBing Lin >Priority: Minor > Attachments: HADOOP-14449.001.patch > > > The ASF Header in ComparableVersion.java and SSLHostnameVerifier.java is not > correct -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-14449) The ASF Header in ComparableVersion.java and SSLHostnameVerifier.java is not correct
[ https://issues.apache.org/jira/browse/HADOOP-14449?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ZhangBing Lin updated HADOOP-14449: --- Attachment: HADOOP-14449.001.patch > The ASF Header in ComparableVersion.java and SSLHostnameVerifier.java is not > correct > > > Key: HADOOP-14449 > URL: https://issues.apache.org/jira/browse/HADOOP-14449 > Project: Hadoop Common > Issue Type: Bug >Affects Versions: 3.0.0-alpha3 >Reporter: ZhangBing Lin >Priority: Minor > Attachments: HADOOP-14449.001.patch > > -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-14449) The ASF Header in ComparableVersion.java and SSLHostnameVerifier.java is not correct
[ https://issues.apache.org/jira/browse/HADOOP-14449?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ZhangBing Lin updated HADOOP-14449: --- Status: Patch Available (was: Open) > The ASF Header in ComparableVersion.java and SSLHostnameVerifier.java is not > correct > > > Key: HADOOP-14449 > URL: https://issues.apache.org/jira/browse/HADOOP-14449 > Project: Hadoop Common > Issue Type: Bug >Affects Versions: 3.0.0-alpha3 >Reporter: ZhangBing Lin >Priority: Minor > Attachments: HADOOP-14449.001.patch > > > The ASF Header in ComparableVersion.java and SSLHostnameVerifier.java is not > correct -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-14449) The ASF Header in ComparableVersion.java and SSLHostnameVerifier.java is not correct
[ https://issues.apache.org/jira/browse/HADOOP-14449?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ZhangBing Lin updated HADOOP-14449: --- Description: The ASF Header in ComparableVersion.java and SSLHostnameVerifier.java is not correct > The ASF Header in ComparableVersion.java and SSLHostnameVerifier.java is not > correct > > > Key: HADOOP-14449 > URL: https://issues.apache.org/jira/browse/HADOOP-14449 > Project: Hadoop Common > Issue Type: Bug >Affects Versions: 3.0.0-alpha3 >Reporter: ZhangBing Lin >Priority: Minor > Attachments: HADOOP-14449.001.patch > > > The ASF Header in ComparableVersion.java and SSLHostnameVerifier.java is not > correct -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-14449) The ASF Header in ComparableVersion.java and SSLHostnameVerifier.java is not correct
[ https://issues.apache.org/jira/browse/HADOOP-14449?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ZhangBing Lin updated HADOOP-14449: --- Affects Version/s: 3.0.0-alpha3 > The ASF Header in ComparableVersion.java and SSLHostnameVerifier.java is not > correct > > > Key: HADOOP-14449 > URL: https://issues.apache.org/jira/browse/HADOOP-14449 > Project: Hadoop Common > Issue Type: Bug >Affects Versions: 3.0.0-alpha3 >Reporter: ZhangBing Lin >Priority: Minor > -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Created] (HADOOP-14449) The ASF Header in ComparableVersion.java and SSLHostnameVerifier.java is not correct
ZhangBing Lin created HADOOP-14449: -- Summary: The ASF Header in ComparableVersion.java and SSLHostnameVerifier.java is not correct Key: HADOOP-14449 URL: https://issues.apache.org/jira/browse/HADOOP-14449 Project: Hadoop Common Issue Type: Bug Reporter: ZhangBing Lin Priority: Minor -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
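For readers following the HADOOP-14449 thread: the "correct" ASF header the patch restores is the standard Apache source header. As a reference sketch (the canonical wording is defined by the ASF licensing how-to, not by this thread), the Java comment form Hadoop files are expected to carry is:

```java
/**
 * Licensed to the Apache Software Foundation (ASF) under one
 * or more contributor license agreements.  See the NOTICE file
 * distributed with this work for additional information
 * regarding copyright ownership.  The ASF licenses this file
 * to you under the Apache License, Version 2.0 (the
 * "License"); you may not use this file except in compliance
 * with the License.  You may obtain a copy of the License at
 *
 *     http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */
```

Deviations from this block (for example, a header naming a different copyright holder or an older license text) are what automated checks such as Yetus's asflicense step flag.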