[jira] [Comment Edited] (HADOOP-13597) Switch KMS from Tomcat to Jetty
[ https://issues.apache.org/jira/browse/HADOOP-13597?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15717586#comment-15717586 ] John Zhuge edited comment on HADOOP-13597 at 12/3/16 7:03 AM: -- Could have used the [Decorator Pattern|https://en.wikipedia.org/wiki/Decorator_pattern] to design a wrapper that logs configuration access:
{code}
interface ConfigurationAccess {
  String get(String name);
  int getInt(String name, int defaultValue);
}

class Configuration implements ConfigurationAccess { ... }

class LoggedConfigurationAccess implements ConfigurationAccess {
  LoggedConfigurationAccess(ConfigurationAccess conf, Log log) {
    this.conf = conf;
    this.log = log;
  }
  String get(String name) {
    String value = conf.get(name);
    log.info(..., name, value);
    return value;
  }
  int getInt(String name, int defaultValue) {
    int value = conf.getInt(name, defaultValue);
    log.info(..., name, value, defaultValue);
    return value;
  }
}
{code}
A small downside though: {{LoggedConfigurationAccess#getInt}} will log 2 messages because {{Configuration#getInt}} calls {{LoggedConfigurationAccess#get}}.

was (Author: jzhuge): Could have used the [Decorator Pattern|https://en.wikipedia.org/wiki/Decorator_pattern] to design a wrapper that logs configuration access:
{code}
interface ConfigurationAccess {
  String get(String name);
  int getInt(String name, int defaultValue);
}

class Configuration implements ConfigurationAccess { ... }

class LoggedConfigurationAccess {
  LoggedConfigurationAccess(ConfigurationAccess conf, Log log) {
    this.conf = conf;
    this.log = log;
  }
  String get(String name) {
    String value = conf.get(name);
    log.info(..., name, value);
    return value;
  }
  int getInt(String name, int defaultValue) {
    int value = conf.getInt(name, defaultValue);
    log.info(..., name, value, defaultValue);
    return value;
  }
}
{code}
A small downside though: {{LoggedConfigurationAccess#getInt}} will log 2 messages because {{Configuration#getInt}} calls {{LoggedConfigurationAccess#get}}.
> Switch KMS from Tomcat to Jetty
> ---
>
> Key: HADOOP-13597
> URL: https://issues.apache.org/jira/browse/HADOOP-13597
> Project: Hadoop Common
> Issue Type: New Feature
> Components: kms
> Affects Versions: 2.6.0
> Reporter: John Zhuge
> Assignee: John Zhuge
> Attachments: HADOOP-13597.001.patch, HADOOP-13597.002.patch
>
> The Tomcat 6 we are using will reach EOL at the end of 2017. While there are other good options, I would propose switching to {{Jetty 9}} for the following reasons:
> * Easier migration. Both Tomcat and Jetty are based on {{Servlet Containers}}, so we don't have to change client code that much. It would require more work to switch to {{JAX-RS}}.
> * Well established.
> * Good performance and scalability.
> Other alternatives:
> * Jersey + Grizzly
> * Tomcat 8
> Your opinions will be greatly appreciated.

-- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
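For reference, here is a compilable version of the decorator sketched in the comment above. {{SimpleConfiguration}} and the {{StringBuilder}} log are hypothetical stand-ins for Hadoop's {{Configuration}} and a real logger, and the property name is made up; only the wrapping mechanics match the sketch.

```java
import java.util.HashMap;
import java.util.Map;

// Stand-in for Hadoop's Configuration: a thin map of string properties.
interface ConfigurationAccess {
    String get(String name);
    int getInt(String name, int defaultValue);
}

class SimpleConfiguration implements ConfigurationAccess {
    private final Map<String, String> props = new HashMap<>();
    void set(String name, String value) { props.put(name, value); }
    public String get(String name) { return props.get(name); }
    public int getInt(String name, int defaultValue) {
        String v = get(name);  // calls its own get, not the wrapper's
        return v == null ? defaultValue : Integer.parseInt(v);
    }
}

// The decorator: wraps any ConfigurationAccess and records each lookup.
class LoggedConfigurationAccess implements ConfigurationAccess {
    private final ConfigurationAccess conf;
    private final StringBuilder log;  // stand-in for a real logger

    LoggedConfigurationAccess(ConfigurationAccess conf, StringBuilder log) {
        this.conf = conf;
        this.log = log;
    }
    public String get(String name) {
        String value = conf.get(name);
        log.append("get ").append(name).append('=').append(value).append('\n');
        return value;
    }
    public int getInt(String name, int defaultValue) {
        int value = conf.getInt(name, defaultValue);
        log.append("getInt ").append(name).append('=').append(value).append('\n');
        return value;
    }
}

public class DecoratorDemo {
    public static void main(String[] args) {
        SimpleConfiguration conf = new SimpleConfiguration();
        conf.set("kms.http.port", "9600");
        StringBuilder log = new StringBuilder();
        ConfigurationAccess logged = new LoggedConfigurationAccess(conf, log);
        System.out.println(logged.getInt("kms.http.port", 16000)); // prints 9600
        System.out.print(log);  // the recorded "getInt ..." line
    }
}
```

Note that with pure delegation the inner object's {{getInt}} calls its own {{get}}, which bypasses the wrapper, so how many messages a single {{getInt}} call logs depends on how the wrapping is wired.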
[jira] [Commented] (HADOOP-11804) POC Hadoop Client w/o transitive dependencies
[ https://issues.apache.org/jira/browse/HADOOP-11804?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15717571#comment-15717571 ] Hadoop QA commented on HADOOP-11804:
| (x) *{color:red}-1 overall{color}* |
\\ \\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 16s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 1m 41s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m 20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 11m 15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 1m 28s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Skipped patched modules with no Java source: hadoop-project hadoop-client hadoop-dist . {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 4m 52s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 24s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 27m 30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m 33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 11m 33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 2m 0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 9m 17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 1m 27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} shellcheck {color} | {color:green} 0m 13s{color} | {color:green} There were no new shellcheck issues. {color} |
| {color:green}+1{color} | {color:green} shelldocs {color} | {color:green} 0m 10s{color} | {color:green} There were no new shelldocs issues. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 13s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Skipped patched modules with no Java source: hadoop-project hadoop-client-modules/hadoop-client hadoop-client-modules/hadoop-client-api . hadoop-client-modules hadoop-client-modules/hadoop-client-check-invariants hadoop-client-modules/hadoop-client-minicluster hadoop-client-modules/hadoop-client-runtime hadoop-dist {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 5m 26s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 117m 53s{color} | {color:red} root in the patch failed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 0m 50s{color} | {color:red} The patch generated 5 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 241m 54s{color} | {color:black} {color} |
\\ \\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestDecommissionWithStriped |
| | hadoop.hdfs.TestCrcCorruption |
| |
[jira] [Created] (HADOOP-13862) AbstractWadlGeneratorGrammarGenerator couldn't find grammar element for class java.util.Map
John Zhuge created HADOOP-13862:
---
Summary: AbstractWadlGeneratorGrammarGenerator couldn't find grammar element for class java.util.Map
Key: HADOOP-13862
URL: https://issues.apache.org/jira/browse/HADOOP-13862
Project: Hadoop Common
Issue Type: Bug
Components: kms
Affects Versions: 3.0.0-alpha2
Reporter: John Zhuge
Assignee: John Zhuge
Priority: Minor

Annoying messages in kms.log:
{noformat}
2016-12-02 22:23:33,580 INFO AbstractWadlGeneratorGrammarGenerator - Couldn't find grammar element for class javax.ws.rs.core.Response
2016-12-02 22:23:33,580 INFO AbstractWadlGeneratorGrammarGenerator - Couldn't find grammar element for class java.util.Map
2016-12-02 22:23:33,580 INFO AbstractWadlGeneratorGrammarGenerator - Couldn't find grammar element for class java.util.Map
2016-12-02 22:23:33,580 INFO AbstractWadlGeneratorGrammarGenerator - Couldn't find grammar element for class java.util.Map
2016-12-02 22:23:33,581 INFO AbstractWadlGeneratorGrammarGenerator - Couldn't find grammar element for class javax.ws.rs.core.Response
2016-12-02 22:23:33,581 INFO AbstractWadlGeneratorGrammarGenerator - Couldn't find grammar element for class javax.ws.rs.core.Response
2016-12-02 22:23:33,581 INFO AbstractWadlGeneratorGrammarGenerator - Couldn't find grammar element for class java.util.Map
{noformat}
See http://stackoverflow.com/questions/15767973/jersey-what-does-couldnt-find-grammar-element-mean. Tried disabling WADL, but KMS didn't work: {{hadoop key list}} authentication failed.
[jira] [Commented] (HADOOP-13847) KMSWebApp should close KeyProviderCryptoExtension
[ https://issues.apache.org/jira/browse/HADOOP-13847?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15717539#comment-15717539 ] Yongjun Zhang commented on HADOOP-13847: Thanks for the new rev John, +1 on rev2.

> KMSWebApp should close KeyProviderCryptoExtension
> -
>
> Key: HADOOP-13847
> URL: https://issues.apache.org/jira/browse/HADOOP-13847
> Project: Hadoop Common
> Issue Type: Bug
> Components: kms
> Reporter: Anthony Young-Garner
> Assignee: John Zhuge
> Attachments: HADOOP-13847.001.patch, HADOOP-13847.002.patch
>
> KeyProviderCryptoExtension should be closed in KMSWebApp.contextDestroyed so that all KeyProviders are also closed. See related HADOOP-13838.
[jira] [Commented] (HADOOP-13847) KMSWebApp should close KeyProviderCryptoExtension
[ https://issues.apache.org/jira/browse/HADOOP-13847?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15717404#comment-15717404 ] Hadoop QA commented on HADOOP-13847:
| (x) *{color:red}-1 overall{color}* |
\\ \\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 16s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 24s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 9m 46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 52s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 6s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 7s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 9m 14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 9m 14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 32s{color} | {color:green} hadoop-common-project: The patch generated 0 new + 14 unchanged - 1 fixed = 14 total (was 15) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 7m 33s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 20s{color} | {color:green} hadoop-kms in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 33s{color} | {color:green} The patch does not generate ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 54m 10s{color} | {color:black} {color} |
\\ \\
|| Subsystem || Report/Notes ||
| Docker | Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | HADOOP-13847 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12841611/HADOOP-13847.002.patch |
| Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle |
| uname | Linux d4c7a8455fff 3.13.0-95-generic #142-Ubuntu SMP Fri Aug 12 17:00:09 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / f885160 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/11192/testReport/ |
| modules | C: hadoop-common-project/hadoop-common hadoop-common-project/hadoop-kms U: hadoop-common-project |
| Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/11192/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT http://yetus.apache.org |

This message was automatically generated.
[jira] [Commented] (HADOOP-13257) Improve Azure Data Lake contract tests.
[ https://issues.apache.org/jira/browse/HADOOP-13257?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15717350#comment-15717350 ] Vishwajeet Dusane commented on HADOOP-13257: Thanks a lot [~liuml07] for the code feedback and for pushing this patch through. Thanks also to [~chris.douglas], [~cnauroth] and [~ste...@apache.org] for valuable feedback and support.

> Improve Azure Data Lake contract tests.
> ---
>
> Key: HADOOP-13257
> URL: https://issues.apache.org/jira/browse/HADOOP-13257
> Project: Hadoop Common
> Issue Type: Sub-task
> Reporter: Chris Nauroth
> Assignee: Vishwajeet Dusane
> Fix For: 3.0.0-alpha2
>
> Attachments: HADOOP-13257.001.patch, HADOOP-13257.002.patch
>
> HADOOP-12875 provided the initial implementation of the FileSystem contract tests covering Azure Data Lake. This issue tracks subsequent improvements on those test suites for improved coverage and matching the specified semantics more closely.
[jira] [Updated] (HADOOP-13847) KMSWebApp should close KeyProviderCryptoExtension
[ https://issues.apache.org/jira/browse/HADOOP-13847?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] John Zhuge updated HADOOP-13847:
Attachment: HADOOP-13847.002.patch

Patch 002
* Show exception stack trace

Thanks [~yzhangal] for catching it.
[jira] [Commented] (HADOOP-13449) S3Guard: Implement DynamoDBMetadataStore.
[ https://issues.apache.org/jira/browse/HADOOP-13449?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15717170#comment-15717170 ] Mingliang Liu commented on HADOOP-13449: Yes, I did consider putting all the ancestors into the metadata store when putting a single path. Another benefit is that {{isEmpty}} becomes much easier: simply issue a query request (limit the return size to 1) whose hash key ("parent" field) is the specific directory; if any data is returned, the directory is non-empty, else it is empty. Then the case where the store has {{/a, /a/b/c, /a/b/d}} and {{/a}} is not empty, yet a query for its direct children returns nothing, does not exist. Plus, we don't have to store/maintain the {{isEmpty}} field any longer.

I gave up this constraint when implementing DDB and let the file system enforce it, for the sake of performance. Consider a simple case: to {{put(PathMetadata meta)}} 1K files in a deep directory (say 10 layers), every put operation will check whether all the ancestors exist, and 1K operations become 10K operations to DDB. For {{put(DirListingMetadata meta)}} it will be efficient, so we can blame users for not using that one instead. So overall, not changing MetadataStore is possible and we can make this change in the {{DynamoDBMetadataStore}} implementation. I'll post a patch (maybe a WIP one) soon.

So we did find real bugs/problems/limitations via integration tests; they're helpful. Thanks,

> S3Guard: Implement DynamoDBMetadataStore.
> -
>
> Key: HADOOP-13449
> URL: https://issues.apache.org/jira/browse/HADOOP-13449
> Project: Hadoop Common
> Issue Type: Sub-task
> Components: fs/s3
> Reporter: Chris Nauroth
> Assignee: Mingliang Liu
> Attachments: HADOOP-13449-HADOOP-13345.000.patch, HADOOP-13449-HADOOP-13345.001.patch, HADOOP-13449-HADOOP-13345.002.patch, HADOOP-13449-HADOOP-13345.003.patch, HADOOP-13449-HADOOP-13345.004.patch, HADOOP-13449-HADOOP-13345.005.patch, HADOOP-13449-HADOOP-13345.006.patch, HADOOP-13449-HADOOP-13345.007.patch, HADOOP-13449-HADOOP-13345.008.patch, HADOOP-13449-HADOOP-13345.009.patch, HADOOP-13449-HADOOP-13345.010.patch
>
> Provide an implementation of the metadata store backed by DynamoDB.
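The query-by-parent idea described in the comment above can be modeled with an in-memory stand-in for the DynamoDB table (toy code, not the real {{DynamoDBMetadataStore}}): items are grouped under a "parent" hash key, and checking whether a directory is empty is just a lookup for any one item under that key.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Toy model of the layout discussed above: each entry is indexed by its
// parent directory, mirroring a DynamoDB "parent" hash key.
public class ParentKeyedStore {
    private final Map<String, List<String>> byParent = new HashMap<>();

    void put(String parent, String child) {
        byParent.computeIfAbsent(parent, k -> new ArrayList<>()).add(child);
    }

    // Mirrors: query(hashKey = dir, limit = 1) -> did anything come back?
    boolean isEmptyDirectory(String dir) {
        List<String> children = byParent.get(dir);
        return children == null || children.isEmpty();
    }

    public static void main(String[] args) {
        ParentKeyedStore store = new ParentKeyedStore();
        store.put("/a/b", "c");  // deep entry only; "/a/b" never recorded under "/a"
        System.out.println(store.isEmptyDirectory("/a"));  // true -- wrongly looks empty
        store.put("/a", "b");    // restoring the ancestor entry fixes the answer
        System.out.println(store.isEmptyDirectory("/a"));  // false
    }
}
```

The main method also shows why the ancestor invariant matters: a deep entry whose intermediate directory was never recorded makes the parent-key query report an ancestor as empty.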
[jira] [Updated] (HADOOP-11804) POC Hadoop Client w/o transitive dependencies
[ https://issues.apache.org/jira/browse/HADOOP-11804?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sean Busbey updated HADOOP-11804:
Attachment: HADOOP-11804.10.patch

-10
- rebased for trunk (0cfd7ad)
- cleaned up new pom uses of duplicate group ID

ping [~sjlee0], [~zhz], [~andrew.wang]. I think the attached example shows v10 working against a webHDFS instance. Please let me know if there's a different test folks would prefer.

> POC Hadoop Client w/o transitive dependencies
> -
>
> Key: HADOOP-11804
> URL: https://issues.apache.org/jira/browse/HADOOP-11804
> Project: Hadoop Common
> Issue Type: Sub-task
> Components: build
> Reporter: Sean Busbey
> Assignee: Sean Busbey
> Attachments: HADOOP-11804.1.patch, HADOOP-11804.10.patch, HADOOP-11804.2.patch, HADOOP-11804.3.patch, HADOOP-11804.4.patch, HADOOP-11804.5.patch, HADOOP-11804.6.patch, HADOOP-11804.7.patch, HADOOP-11804.8.patch, HADOOP-11804.9.patch, hadoop-11804-client-test.tar.gz
>
> make a hadoop-client-api and hadoop-client-runtime that e.g. HBase can use to talk with a Hadoop cluster without seeing any of the implementation dependencies.
> see proposal on parent for details.
[jira] [Updated] (HADOOP-11804) POC Hadoop Client w/o transitive dependencies
[ https://issues.apache.org/jira/browse/HADOOP-11804?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sean Busbey updated HADOOP-11804:
Attachment: hadoop-11804-client-test.tar.gz

- hadoop-11804-client-test.tar.gz Including an example program and the steps I used to verify that it can use the shaded client/runtime to talk to a WebHDFS instance.
[jira] [Commented] (HADOOP-11626) Comment ReadStatistics to indicate that it tracks the actual read occurred
[ https://issues.apache.org/jira/browse/HADOOP-11626?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15717073#comment-15717073 ] Hadoop QA commented on HADOOP-11626:
| (x) *{color:red}-1 overall{color}* |
\\ \\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red} 0m 5s{color} | {color:red} HADOOP-11626 does not apply to trunk. Rebase required? Wrong Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\ \\
|| Subsystem || Report/Notes ||
| JIRA Issue | HADOOP-11626 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12703201/HADOOP-11626.002.patch |
| Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/11190/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT http://yetus.apache.org |

This message was automatically generated.

> Comment ReadStatistics to indicate that it tracks the actual read occurred
> --
>
> Key: HADOOP-11626
> URL: https://issues.apache.org/jira/browse/HADOOP-11626
> Project: Hadoop Common
> Issue Type: Bug
> Affects Versions: 2.6.0
> Reporter: Lei (Eddy) Xu
> Assignee: Lei (Eddy) Xu
> Priority: Trivial
> Attachments: HADOOP-11626.000.patch, HADOOP-11626.001.patch, HADOOP-11626.002.patch
>
> In {{DFSOutputStream#actualGetFromOneDataNode()}}, it updates the {{ReadStatistics}} even if the read failed:
> {code}
> int nread = reader.readAll(buf, offset, len);
> updateReadStatistics(readStatistics, nread, reader);
> if (nread != len) {
>   throw new IOException("truncated return from reader.read(): " +
>       "excpected " + len + ", got " + nread);
> }
> {code}
> This indicates that {{ReadStatistics}} tracks the reads that actually occurred. We need to add a comment to {{ReadStatistics}} to make this clear.
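The behavior the issue description above calls out can be shown with a toy sketch (hypothetical code, not HDFS's): the statistics counter is bumped with the number of bytes actually read, even when the read comes up short and the caller then treats it as an error.

```java
// Toy illustration of "statistics track the actual read that occurred":
// a short read still updates the counter before the caller rejects it.
public class ReadStats {
    long totalBytesRead;

    // Copies whatever is available, which may be less than the requested len.
    int readAll(byte[] src, byte[] buf, int offset, int len) {
        int n = Math.min(len, src.length);  // possibly a short read
        System.arraycopy(src, 0, buf, offset, n);
        totalBytesRead += n;                // track the bytes actually read...
        return n;
    }

    public static void main(String[] args) {
        ReadStats stats = new ReadStats();
        int nread = stats.readAll(new byte[]{1, 2, 3}, new byte[10], 0, 5);
        // ...even though the caller then treats the short read as a failure:
        if (nread != 5) {
            System.out.println("truncated read, totalBytesRead = " + stats.totalBytesRead); // 3
        }
    }
}
```

This is the semantics the JIRA asks to document on {{ReadStatistics}}: the counters reflect bytes actually transferred, not bytes successfully delivered to the caller.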
[jira] [Updated] (HADOOP-11626) Comment ReadStatistics to indicate that it tracks the actual read occurred
[ https://issues.apache.org/jira/browse/HADOOP-11626?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Junping Du updated HADOOP-11626:
Target Version/s: 2.9.0 (was: 2.8.0)
[jira] [Commented] (HADOOP-11626) Comment ReadStatistics to indicate that it tracks the actual read occurred
[ https://issues.apache.org/jira/browse/HADOOP-11626?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15717058#comment-15717058 ] Junping Du commented on HADOOP-11626: - Moving this out of 2.8 given this jira hasn't made any progress for more than a year.
[jira] [Updated] (HADOOP-11626) Comment ReadStatistics to indicate that it tracks the actual read occurred
[ https://issues.apache.org/jira/browse/HADOOP-11626?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Junping Du updated HADOOP-11626:
Labels: (was: BB2015-05-TBR)
[jira] [Commented] (HADOOP-13861) Spelling errors in logging and exceptions for code
[ https://issues.apache.org/jira/browse/HADOOP-13861?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15717026#comment-15717026 ] Grant Sohn commented on HADOOP-13861: - The code changes address spelling errors in strings, therefore no new tests are needed.

> Spelling errors in logging and exceptions for code
> --
>
> Key: HADOOP-13861
> URL: https://issues.apache.org/jira/browse/HADOOP-13861
> Project: Hadoop Common
> Issue Type: Bug
> Components: common, fs, io, security
> Reporter: Grant Sohn
> Assignee: Grant Sohn
> Attachments: HADOOP-13861.1.patch
>
> Found a set of spelling errors in the logging and exception messages.
> Examples:
> Bufer -> Buffer
> princial -> principal
> existance -> existence
[jira] [Commented] (HADOOP-13449) S3Guard: Implement DynamoDBMetadataStore.
[ https://issues.apache.org/jira/browse/HADOOP-13449?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15717021#comment-15717021 ] Aaron Fabbri commented on HADOOP-13449: --- I did a little research on #3. It looks like you cannot do a prefix scan on a partition key for DynamoDB. This seems to imply that, considering an operation {{deleteSubtree(delete_path)}}, a simple search by prefix to find all entries with paths that begin with {{delete_path}} would actually be a full table scan. If I'm right, that is unfortunate. The problem with the existing deleteSubtree(delete_path) implementation is that all the children under delete_path might not be reachable from delete_path by doing a simple tree walk over the state in the MetadataStore. The algorithm would work, however, if, when we created a file, we also created all its ancestor directories up to the root. This would establish an invariant that {quote} For any path p in DDB MetadataStore For each ancestor a_i from p to the root a_i is in DDB MetadataStore {quote} This actually sounds reasonable. Can we do it without changing the {{MetadataStore}} interface? I think we can: when we create(path), we always have the full absolute 'path', so we know the names of the ancestors all the way to the root. Thoughts? > S3Guard: Implement DynamoDBMetadataStore. 
> - > > Key: HADOOP-13449 > URL: https://issues.apache.org/jira/browse/HADOOP-13449 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Reporter: Chris Nauroth >Assignee: Mingliang Liu > Attachments: HADOOP-13449-HADOOP-13345.000.patch, > HADOOP-13449-HADOOP-13345.001.patch, HADOOP-13449-HADOOP-13345.002.patch, > HADOOP-13449-HADOOP-13345.003.patch, HADOOP-13449-HADOOP-13345.004.patch, > HADOOP-13449-HADOOP-13345.005.patch, HADOOP-13449-HADOOP-13345.006.patch, > HADOOP-13449-HADOOP-13345.007.patch, HADOOP-13449-HADOOP-13345.008.patch, > HADOOP-13449-HADOOP-13345.009.patch, HADOOP-13449-HADOOP-13345.010.patch > > > Provide an implementation of the metadata store backed by DynamoDB.
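The ancestor invariant proposed in the comment above can be sketched concretely. This is a hypothetical illustration (class and method names are not from any patch): given the absolute path passed to create(), enumerate every ancestor up to the root so the store can record them as directory entries, establishing the invariant that every stored path has all of its ancestors in the store.

```java
import java.util.ArrayList;
import java.util.List;

public class Ancestors {
    /**
     * Returns every ancestor of an absolute path, from the root down to the
     * immediate parent. For "/a/b/c" this yields ["/", "/a", "/a/b"].
     * Recording these entries at create() time establishes the invariant
     * that any path in the store has all of its ancestors in the store.
     */
    static List<String> ancestorsOf(String absolutePath) {
        List<String> ancestors = new ArrayList<>();
        ancestors.add("/");
        String[] parts = absolutePath.split("/");
        StringBuilder current = new StringBuilder();
        // parts[0] is empty (leading slash); stop before the final component.
        for (int i = 1; i < parts.length - 1; i++) {
            current.append('/').append(parts[i]);
            ancestors.add(current.toString());
        }
        return ancestors;
    }

    public static void main(String[] args) {
        System.out.println(ancestorsOf("/a/b/c")); // [/, /a, /a/b]
    }
}
```

With that invariant in place, deleteSubtree(delete_path) can reach every descendant by a plain tree walk over the MetadataStore, without any prefix scan.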
[jira] [Commented] (HADOOP-13861) Spelling errors in logging and exceptions for code
[ https://issues.apache.org/jira/browse/HADOOP-13861?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15717011#comment-15717011 ] Hadoop QA commented on HADOOP-13861: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 13s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 7s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 27s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m 8s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 41s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 37s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 34s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 10s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 8s{color} | {color:green} trunk passed {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 7s{color} | {color:blue} 
Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 5s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m 6s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 10m 6s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 37s{color} | {color:orange} hadoop-common-project: The patch generated 1 new + 355 unchanged - 1 fixed = 356 total (was 356) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 26s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 38s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 32s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 6s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 3m 30s{color} | {color:green} hadoop-auth in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 7m 41s{color} | {color:green} hadoop-common in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 31s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 58m 48s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:a9ad5d6 | | JIRA Issue | HADOOP-13861 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12841589/HADOOP-13861.1.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux 88c89b7ee800 3.13.0-95-generic #142-Ubuntu SMP Fri Aug 12 17:00:09 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 51211a7 | | Default Java | 1.8.0_111 | | findbugs | v3.0.0 | | checkstyle | https://builds.apache.org/job/PreCommit-HADOOP-Build/11189/artifact/patchprocess/diff-checkstyle-hadoop-common-project.txt | | Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/11189/testReport/ | | modules | C: hadoop-common-project/hadoop-auth hadoop-common-project/hadoop-common U: hadoop-common-project | | Console output |
[jira] [Commented] (HADOOP-13257) Improve Azure Data Lake contract tests.
[ https://issues.apache.org/jira/browse/HADOOP-13257?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15716957#comment-15716957 ] Hudson commented on HADOOP-13257: - SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #10933 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/10933/]) HADOOP-13257. Improve Azure Data Lake contract tests. Contributed by (liuml07: rev 4113ec5fa5ca049ebaba039b1faf3911c6a34f7b) * (edit) hadoop-tools/hadoop-azure-datalake/src/test/java/org/apache/hadoop/fs/adl/live/TestAdlContractOpenLive.java * (add) hadoop-tools/hadoop-azure-datalake/src/test/java/org/apache/hadoop/fs/adl/live/TestMetadata.java * (edit) hadoop-tools/hadoop-azure-datalake/src/test/java/org/apache/hadoop/fs/adl/TestAdlRead.java * (edit) hadoop-tools/hadoop-azure-datalake/src/test/java/org/apache/hadoop/fs/adl/live/TestAdlFileSystemContractLive.java * (add) hadoop-tools/hadoop-azure-datalake/src/test/java/org/apache/hadoop/fs/adl/live/TestAdlSupportedCharsetInPath.java * (edit) hadoop-tools/hadoop-azure-datalake/src/test/java/org/apache/hadoop/fs/adl/live/TestAdlContractRenameLive.java * (add) hadoop-tools/hadoop-azure-datalake/src/test/java/org/apache/hadoop/fs/adl/live/TestAdlPermissionLive.java * (edit) hadoop-tools/hadoop-azure-datalake/src/main/java/org/apache/hadoop/fs/adl/AdlFileSystem.java * (add) hadoop-tools/hadoop-azure-datalake/src/test/java/org/apache/hadoop/fs/adl/live/TestAdlFileContextMainOperationsLive.java * (edit) hadoop-tools/hadoop-azure-datalake/src/test/java/org/apache/hadoop/fs/adl/live/TestAdlContractRootDirLive.java * (edit) hadoop-tools/hadoop-azure-datalake/src/test/java/org/apache/hadoop/fs/adl/live/TestAdlDifferentSizeWritesLive.java * (edit) hadoop-tools/hadoop-azure-datalake/src/test/java/org/apache/hadoop/fs/adl/live/TestAdlContractAppendLive.java * (edit) hadoop-tools/hadoop-azure-datalake/src/test/java/org/apache/hadoop/fs/adl/live/TestAdlContractMkdirLive.java * (add) 
hadoop-tools/hadoop-azure-datalake/src/test/java/org/apache/hadoop/fs/adl/live/TestAdlFileContextCreateMkdirLive.java * (edit) hadoop-tools/hadoop-azure-datalake/src/test/java/org/apache/hadoop/fs/adl/live/TestAdlContractConcatLive.java * (edit) hadoop-tools/hadoop-azure-datalake/src/test/java/org/apache/hadoop/fs/adl/live/TestAdlContractDeleteLive.java * (edit) hadoop-tools/hadoop-azure-datalake/src/test/java/org/apache/hadoop/fs/adl/live/TestAdlContractCreateLive.java * (edit) hadoop-tools/hadoop-azure-datalake/src/test/java/org/apache/hadoop/fs/adl/live/TestAdlContractSeekLive.java * (add) hadoop-tools/hadoop-azure-datalake/src/test/java/org/apache/hadoop/fs/adl/live/TestAdlContractGetFileStatusLive.java * (add) hadoop-tools/hadoop-azure-datalake/src/test/java/org/apache/hadoop/fs/adl/live/TestAdlInternalCreateNonRecursive.java * (edit) hadoop-tools/hadoop-azure-datalake/src/test/java/org/apache/hadoop/fs/adl/TestListStatus.java > Improve Azure Data Lake contract tests. > --- > > Key: HADOOP-13257 > URL: https://issues.apache.org/jira/browse/HADOOP-13257 > Project: Hadoop Common > Issue Type: Sub-task >Reporter: Chris Nauroth >Assignee: Vishwajeet Dusane > Fix For: 3.0.0-alpha2 > > Attachments: HADOOP-13257.001.patch, HADOOP-13257.002.patch > > > HADOOP-12875 provided the initial implementation of the FileSystem contract > tests covering Azure Data Lake. This issue tracks subsequent improvements on > those test suites for improved coverage and matching the specified semantics > more closely. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-13847) KMSWebApp should close KeyProviderCryptoExtension
[ https://issues.apache.org/jira/browse/HADOOP-13847?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15716939#comment-15716939 ] Yongjun Zhang commented on HADOOP-13847: Thanks Anthony for reporting the issue and John for the patch. The patch looks good to me, except that I'd suggest we change LOG.error("Error closing KeyProviderCryptoExtension: " + ioe); to LOG.error("Error closing KeyProviderCryptoExtension", ioe); Thanks. > KMSWebApp should close KeyProviderCryptoExtension > - > > Key: HADOOP-13847 > URL: https://issues.apache.org/jira/browse/HADOOP-13847 > Project: Hadoop Common > Issue Type: Bug > Components: kms >Reporter: Anthony Young-Garner >Assignee: John Zhuge > Attachments: HADOOP-13847.001.patch > > > KeyProviderCryptoExtension should be closed in KMSWebApp.contextDestroyed so > that all KeyProviders are also closed. See related HADOOP-13838.
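The suggested change matters because of how logging frameworks treat a trailing throwable argument. A small runnable sketch using java.util.logging (Hadoop uses its own logging setup; this only illustrates the pattern): string concatenation keeps just ioe.toString() in the message, while passing the throwable as a separate argument lets the framework record the full stack trace.

```java
import java.io.IOException;
import java.util.logging.Level;
import java.util.logging.Logger;

public class CloseLogging {
    private static final Logger LOG = Logger.getLogger("KMSWebApp");

    public static void main(String[] args) {
        IOException ioe = new IOException("provider already closed");

        // Discouraged: concatenation embeds only ioe.toString() in the
        // message; the stack trace never reaches the log.
        LOG.severe("Error closing KeyProviderCryptoExtension: " + ioe);

        // Preferred: pass the throwable as a separate argument so the
        // framework records the full stack trace alongside the message.
        LOG.log(Level.SEVERE, "Error closing KeyProviderCryptoExtension", ioe);
    }
}
```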
[jira] [Updated] (HADOOP-13257) Improve Azure Data Lake contract tests.
[ https://issues.apache.org/jira/browse/HADOOP-13257?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mingliang Liu updated HADOOP-13257: --- Resolution: Fixed Hadoop Flags: Reviewed Fix Version/s: 3.0.0-alpha2 Status: Resolved (was: Patch Available) +1 Committed to {{trunk}} branch. Thanks for your contribution, [~vishwajeet.dusane]. > Improve Azure Data Lake contract tests. > --- > > Key: HADOOP-13257 > URL: https://issues.apache.org/jira/browse/HADOOP-13257 > Project: Hadoop Common > Issue Type: Sub-task >Reporter: Chris Nauroth >Assignee: Vishwajeet Dusane > Fix For: 3.0.0-alpha2 > > Attachments: HADOOP-13257.001.patch, HADOOP-13257.002.patch > > > HADOOP-12875 provided the initial implementation of the FileSystem contract > tests covering Azure Data Lake. This issue tracks subsequent improvements on > those test suites for improved coverage and matching the specified semantics > more closely.
[jira] [Updated] (HADOOP-13861) Spelling errors in logging and exceptions for code
[ https://issues.apache.org/jira/browse/HADOOP-13861?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Grant Sohn updated HADOOP-13861: Attachment: HADOOP-13861.1.patch Fixes for spelling errors. > Spelling errors in logging and exceptions for code > -- > > Key: HADOOP-13861 > URL: https://issues.apache.org/jira/browse/HADOOP-13861 > Project: Hadoop Common > Issue Type: Bug > Components: common, fs, io, security >Reporter: Grant Sohn >Assignee: Grant Sohn > Attachments: HADOOP-13861.1.patch > > > Found a set of spelling errors in the logging and exception messages. > Examples: > Bufer -> Buffer > princial -> principal > existance -> existence
[jira] [Updated] (HADOOP-13861) Spelling errors in logging and exceptions for code
[ https://issues.apache.org/jira/browse/HADOOP-13861?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Grant Sohn updated HADOOP-13861: Status: Patch Available (was: Open) > Spelling errors in logging and exceptions for code > -- > > Key: HADOOP-13861 > URL: https://issues.apache.org/jira/browse/HADOOP-13861 > Project: Hadoop Common > Issue Type: Bug > Components: common, fs, io, security >Reporter: Grant Sohn >Assignee: Grant Sohn > Attachments: HADOOP-13861.1.patch > > > Found a set of spelling errors in the logging and exception messages. > Examples: > Bufer -> Buffer > princial -> principal > existance -> existence
[jira] [Created] (HADOOP-13861) Spelling errors in logging and exceptions for code
Grant Sohn created HADOOP-13861: --- Summary: Spelling errors in logging and exceptions for code Key: HADOOP-13861 URL: https://issues.apache.org/jira/browse/HADOOP-13861 Project: Hadoop Common Issue Type: Bug Components: common, fs, io, security Reporter: Grant Sohn Assignee: Grant Sohn Found a set of spelling errors in the logging and exception messages. Examples: Bufer -> Buffer princial -> principal existance -> existence
[jira] [Commented] (HADOOP-13847) KMSWebApp should close KeyProviderCryptoExtension
[ https://issues.apache.org/jira/browse/HADOOP-13847?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15716864#comment-15716864 ] Hadoop QA commented on HADOOP-13847: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 15s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 7s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 48s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m 24s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 37s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 40s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 36s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 2s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 6s{color} | {color:green} trunk passed {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 7s{color} | {color:blue} 
Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 0s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 9m 52s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 9m 52s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 32s{color} | {color:green} hadoop-common-project: The patch generated 0 new + 14 unchanged - 1 fixed = 14 total (was 15) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 24s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 37s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 28s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 10s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 8m 28s{color} | {color:green} hadoop-common in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 17s{color} | {color:green} hadoop-kms in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 34s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 56m 37s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:a9ad5d6 | | JIRA Issue | HADOOP-13847 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12841575/HADOOP-13847.001.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux 0e9417479ca9 3.13.0-93-generic #140-Ubuntu SMP Mon Jul 18 21:21:05 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 51211a7 | | Default Java | 1.8.0_111 | | findbugs | v3.0.0 | | Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/11188/testReport/ | | modules | C: hadoop-common-project/hadoop-common hadoop-common-project/hadoop-kms U: hadoop-common-project | | Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/11188/console | | Powered by | Apache Yetus 0.4.0-SNAPSHOT http://yetus.apache.org | This
[jira] [Commented] (HADOOP-13859) TestConfigurationFieldsBase fails for fields that are DEFAULT values of skipped properties.
[ https://issues.apache.org/jira/browse/HADOOP-13859?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15716752#comment-15716752 ] Haibo Chen commented on HADOOP-13859: - The unit test failure and the findbugs warning look unrelated to this patch. > TestConfigurationFieldsBase fails for fields that are DEFAULT values of > skipped properties. > --- > > Key: HADOOP-13859 > URL: https://issues.apache.org/jira/browse/HADOOP-13859 > Project: Hadoop Common > Issue Type: Bug > Components: common >Affects Versions: 3.0.0-alpha1 >Reporter: Haibo Chen >Assignee: Haibo Chen > Attachments: HADOOP-13859.01.patch > > > In YARN-5922, two new default values are added in YarnConfiguration for two > timeline-service properties that are skipped in TestConfigurationFieldsBase. > TestConfigurationFieldsBase fails as it mistakenly treats the two newly added > default-values as regular properties.
[jira] [Updated] (HADOOP-13847) KMSWebApp should close KeyProviderCryptoExtension
[ https://issues.apache.org/jira/browse/HADOOP-13847?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] John Zhuge updated HADOOP-13847: Status: Patch Available (was: Open) > KMSWebApp should close KeyProviderCryptoExtension > - > > Key: HADOOP-13847 > URL: https://issues.apache.org/jira/browse/HADOOP-13847 > Project: Hadoop Common > Issue Type: Bug > Components: kms >Reporter: Anthony Young-Garner >Assignee: John Zhuge > Attachments: HADOOP-13847.001.patch > > > KeyProviderCryptoExtension should be closed in KMSWebApp.contextDestroyed so > that all KeyProviders are also closed. See related HADOOP-13838.
[jira] [Updated] (HADOOP-13847) KMSWebApp should close KeyProviderCryptoExtension
[ https://issues.apache.org/jira/browse/HADOOP-13847?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] John Zhuge updated HADOOP-13847: Attachment: HADOOP-13847.001.patch Patch 001: * KMSWebApp closes KeyProviderCryptoExtension * KeyProviderCryptoExtension#close should avoid an infinite loop because keyProvider can be the extension itself. TestKMSWithZK exercises the real close(). > KMSWebApp should close KeyProviderCryptoExtension > - > > Key: HADOOP-13847 > URL: https://issues.apache.org/jira/browse/HADOOP-13847 > Project: Hadoop Common > Issue Type: Bug > Components: kms >Reporter: Anthony Young-Garner >Assignee: John Zhuge > Attachments: HADOOP-13847.001.patch > > > KeyProviderCryptoExtension should be closed in KMSWebApp.contextDestroyed so > that all KeyProviders are also closed. See related HADOOP-13838.
[jira] [Commented] (HADOOP-13449) S3Guard: Implement DynamoDBMetadataStore.
[ https://issues.apache.org/jira/browse/HADOOP-13449?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15716725#comment-15716725 ] Mingliang Liu commented on HADOOP-13449: Thanks [~fabbri]. Quick reply (I'm working on this as well, will keep you posted): # For point 1, let's track it elsewhere. # For point 2, the explanation makes sense. My current in-progress change is to remove the "isEmpty" field from the DynamoDB (DDB) table for directories, and to issue a DDB query whose "parent" field is the current directory. Then I realized that there may be items in the table whose ancestor (parent of parent, say) is the given directory, but whose parent directories are missing, e.g. for {{/a, /a/b/c, /a/b/d}}, {{/a}} is not empty. This is similar to the problem in point 3. A simple query seems insufficient. # For point 3, yes, we may have to use scan, as the hash key is not known. Let's figure out the best solution. > S3Guard: Implement DynamoDBMetadataStore. > - > > Key: HADOOP-13449 > URL: https://issues.apache.org/jira/browse/HADOOP-13449 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Reporter: Chris Nauroth >Assignee: Mingliang Liu > Attachments: HADOOP-13449-HADOOP-13345.000.patch, > HADOOP-13449-HADOOP-13345.001.patch, HADOOP-13449-HADOOP-13345.002.patch, > HADOOP-13449-HADOOP-13345.003.patch, HADOOP-13449-HADOOP-13345.004.patch, > HADOOP-13449-HADOOP-13345.005.patch, HADOOP-13449-HADOOP-13345.006.patch, > HADOOP-13449-HADOOP-13345.007.patch, HADOOP-13449-HADOOP-13345.008.patch, > HADOOP-13449-HADOOP-13345.009.patch, HADOOP-13449-HADOOP-13345.010.patch > > > Provide an implementation of the metadata store backed by DynamoDB.
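The {{/a, /a/b/c, /a/b/d}} example in the comment above can be made concrete with a toy model (hypothetical names, not the actual DDB schema): a query on the "parent" attribute only finds direct children, so if the intermediate entry {{/a/b}} was never written, {{/a}} looks empty even though descendants exist.

```java
import java.util.HashSet;
import java.util.Set;

public class EmptyDirCheck {
    // Toy model of the DDB table: the set of "parent" attribute values
    // present across all items.
    static boolean hasChildByParentQuery(Set<String> parentsInTable, String dir) {
        // Mirrors a DDB query on the "parent" field: only direct children match.
        return parentsInTable.contains(dir);
    }

    public static void main(String[] args) {
        // Paths present: /a, /a/b/c, /a/b/d -- but /a/b itself was never written.
        Set<String> parents = new HashSet<>();
        parents.add("/");     // parent of /a
        parents.add("/a/b");  // parent of /a/b/c and /a/b/d
        // Querying parent == "/a" finds nothing, so /a looks empty even
        // though descendants exist. Writing every ancestor at create()
        // time would make the query reliable.
        System.out.println(hasChildByParentQuery(parents, "/a")); // false
    }
}
```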
[jira] [Commented] (HADOOP-13597) Switch KMS from Tomcat to Jetty
[ https://issues.apache.org/jira/browse/HADOOP-13597?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15716720#comment-15716720 ] John Zhuge commented on HADOOP-13597: - I was wrong. The test case {{HADOOP_KMS_OPTS=-Dkms.config.dir=.. bin/hadoop kms}} works in both approaches, though for different reasons: * All in one: {{hadoop_subcommand_kms}} adds {{-Dkms_config_dir}} first, then HADOOP_KMS_OPTS is appended to HADOOP_OPTS. The later {{-Dkms.config.dir}} on HADOOP_OPTS takes effect. * Multiple handlers: HADOOP_KMS_OPTS is appended to HADOOP_OPTS first, then _kms_hadoop_finalize calls {{hadoop_add_param HADOOP_OPTS "-Dkms.config.dir="}}, which checks for the existence of the string "-Dkms.config.dir=" and decides not to add the param. > Switch KMS from Tomcat to Jetty > --- > > Key: HADOOP-13597 > URL: https://issues.apache.org/jira/browse/HADOOP-13597 > Project: Hadoop Common > Issue Type: New Feature > Components: kms >Affects Versions: 2.6.0 >Reporter: John Zhuge >Assignee: John Zhuge > Attachments: HADOOP-13597.001.patch, HADOOP-13597.002.patch > > > The Tomcat 6 we are using will reach EOL at the end of 2017. While there are > other good options, I would propose switching to {{Jetty 9}} for the > following reasons: > * Easier migration. Both Tomcat and Jetty are based on {{Servlet > Containers}}, so we don't have to change client code that much. It would require > more work to switch to {{JAX-RS}}. > * Well established. > * Good performance and scalability. > Other alternatives: > * Jersey + Grizzly > * Tomcat 8 > Your opinions will be greatly appreciated.
[jira] [Commented] (HADOOP-13855) Fix a couple of the s3a statistic names to be consistent with the rest
[ https://issues.apache.org/jira/browse/HADOOP-13855?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15716713#comment-15716713 ] Hudson commented on HADOOP-13855: - SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #10932 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/10932/]) HADOOP-13855. Fix a couple of the s3a statistic names to be consistent (liuml07: rev 51211a7d7aa342b93951fe61da3f624f0652e101) * (edit) hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/Statistic.java > Fix a couple of the s3a statistic names to be consistent with the rest > -- > > Key: HADOOP-13855 > URL: https://issues.apache.org/jira/browse/HADOOP-13855 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Affects Versions: 2.8.0 >Reporter: Steve Loughran >Assignee: Steve Loughran > Fix For: 2.8.0, 3.0.0-alpha2 > > Attachments: HADOOP-13855-001.patch > > > The S3a streamOpened and streamClosed statistics are camel case, rather than > stream_opened and stream_closed, the way the others are. Fix.
[jira] [Commented] (HADOOP-13859) TestConfigurationFieldsBase fails for fields that are DEFAULT values of skipped properties.
[ https://issues.apache.org/jira/browse/HADOOP-13859?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15716703#comment-15716703 ] Hadoop QA commented on HADOOP-13859: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 15s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 3 new or modified test files. {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 15s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 53s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m 3s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 36s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 49s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 58s{color} | {color:green} trunk passed {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 1m 58s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs in trunk has 1 extant Findbugs warnings. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 3s{color} | {color:green} trunk passed {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 18s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 3s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m 42s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 10m 42s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 56s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 3m 13s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 1m 9s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 6m 14s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 30s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 8m 46s{color} | {color:red} hadoop-common in the patch failed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 66m 10s{color} | {color:green} hadoop-hdfs in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 41s{color} | {color:green} hadoop-yarn-api in the patch passed. 
{color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 39s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}160m 2s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.security.token.delegation.TestZKDelegationTokenSecretManager | | | hadoop.ha.TestZKFailoverController | | | hadoop.test.TestLambdaTestUtils | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:a9ad5d6 | | JIRA Issue | HADOOP-13859 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12841549/HADOOP-13859.01.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux 66e9b96dece0 3.13.0-93-generic #140-Ubuntu SMP Mon Jul 18 21:21:05 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / c7ff34f | | Default Java | 1.8.0_111 | | findbugs | v3.0.0 | | findbugs | https://builds.apache.org/job/PreCommit-HADOOP-Build/11186/artifact/patchprocess/branch-findbugs-hadoop-hdfs-project_hadoop-hdfs-warnings.html | | unit |
[jira] [Commented] (HADOOP-13449) S3Guard: Implement DynamoDBMetadataStore.
[ https://issues.apache.org/jira/browse/HADOOP-13449?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15716696#comment-15716696 ] Aaron Fabbri commented on HADOOP-13449: --- I think this is the list of outstanding items to get integration tests passing.
1. Dealing with anonymous / reduced-privilege bucket credentials ([~steve_l]'s previous comment). We should discuss separately... maybe a separate JIRA? I have some other related requirements around table <-> bucket mappings.
2. Updating {{S3AFileStatus#isEmptyDirectory()}}. move(), put(), delete(), and deleteSubtree() will need to maintain the parent dir's empty bit and/or invalidate its state. I think the basic logic used in LocalMetadataStore should work fine for now.
3. deleteSubtree(path) assumes that any deleted subtree is fully recorded in the MetadataStore. The best solution, IMO, is to query for all entries that have 'path' as an ancestor. Hoping we can use a prefix scan to keep that efficient. [~liuml07], would love to hear your DynamoDB expertise on that idea.
I'm working on #2 at the moment. I wrote a new integration test {{ITestS3AEmptyDirectory}} that exercises a directory going from empty->non-empty and vice versa. Much easier to debug that case in isolation! It passes for LocalMetadataStore but, as expected, still fails for DDB. > S3Guard: Implement DynamoDBMetadataStore. 
> - > > Key: HADOOP-13449 > URL: https://issues.apache.org/jira/browse/HADOOP-13449 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Reporter: Chris Nauroth >Assignee: Mingliang Liu > Attachments: HADOOP-13449-HADOOP-13345.000.patch, > HADOOP-13449-HADOOP-13345.001.patch, HADOOP-13449-HADOOP-13345.002.patch, > HADOOP-13449-HADOOP-13345.003.patch, HADOOP-13449-HADOOP-13345.004.patch, > HADOOP-13449-HADOOP-13345.005.patch, HADOOP-13449-HADOOP-13345.006.patch, > HADOOP-13449-HADOOP-13345.007.patch, HADOOP-13449-HADOOP-13345.008.patch, > HADOOP-13449-HADOOP-13345.009.patch, HADOOP-13449-HADOOP-13345.010.patch > > > Provide an implementation of the metadata store backed by DynamoDB. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
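The "prefix scan" idea from item 3 above can be illustrated against a sorted in-memory map standing in for a DynamoDB range key. This is an illustrative sketch only, not the actual DynamoDBMetadataStore code; the class and method names are hypothetical:

```java
import java.util.NavigableMap;
import java.util.TreeMap;

// Toy version of deleteSubtree(): remove every stored path that has
// 'root' as an ancestor, by scanning only the contiguous key range that
// shares the "root/" prefix rather than the whole table.
public class PrefixDelete {
  public static void deleteSubtree(TreeMap<String, String> store, String root) {
    String prefix = root.endsWith("/") ? root : root + "/";
    // In a sorted store, all descendants of root sit in the half-open
    // range [prefix, prefix + MAX_CHAR), so one range scan finds them.
    NavigableMap<String, String> descendants =
        store.subMap(prefix, true, prefix + Character.MAX_VALUE, false);
    descendants.clear();
    store.remove(root); // finally drop the subtree root itself
  }
}
```

The same range-restriction trick is what makes a sort-key prefix query cheap in a store like DynamoDB, compared with filtering a full table scan.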
[jira] [Commented] (HADOOP-13855) Fix a couple of the s3a statistic names to be consistent with the rest
[ https://issues.apache.org/jira/browse/HADOOP-13855?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15716662#comment-15716662 ] Mingliang Liu commented on HADOOP-13855: +1 > Fix a couple of the s3a statistic names to be consistent with the rest > -- > > Key: HADOOP-13855 > URL: https://issues.apache.org/jira/browse/HADOOP-13855 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Affects Versions: 2.8.0 >Reporter: Steve Loughran >Assignee: Steve Loughran > Fix For: 2.8.0, 3.0.0-alpha2 > > Attachments: HADOOP-13855-001.patch > > > The S3a streamOpened and streamClosed statistics are camel case, rather than > stream_opened and stream_closed, the way the others are. Fix. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
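The renaming the issue asks for is purely mechanical. The real fix simply renames the constants in the S3A source, but as an illustration the camelCase-to-snake_case mapping looks like:

```java
// Illustrative only: convert a camelCase statistic name such as
// "streamOpened" to the snake_case form "stream_opened" used by the
// other S3A statistics.
public class SnakeCase {
  public static String toSnake(String name) {
    // Insert an underscore at each lower/upper boundary, then lowercase.
    return name.replaceAll("([a-z0-9])([A-Z])", "$1_$2").toLowerCase();
  }
}
```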
[jira] [Updated] (HADOOP-13855) Fix a couple of the s3a statistic names to be consistent with the rest
[ https://issues.apache.org/jira/browse/HADOOP-13855?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mingliang Liu updated HADOOP-13855: --- Resolution: Fixed Hadoop Flags: Reviewed Fix Version/s: 3.0.0-alpha2 2.8.0 Status: Resolved (was: Patch Available) Committed to {{trunk}} through {{branch-2.8}} branches. Thanks for your contribution [~ste...@apache.org]. > Fix a couple of the s3a statistic names to be consistent with the rest > -- > > Key: HADOOP-13855 > URL: https://issues.apache.org/jira/browse/HADOOP-13855 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Affects Versions: 2.8.0 >Reporter: Steve Loughran >Assignee: Steve Loughran > Fix For: 2.8.0, 3.0.0-alpha2 > > Attachments: HADOOP-13855-001.patch > > > The S3a streamOpened and streamClosed statistics are camel case, rather than > stream_opened and stream_closed, the way the others are. Fix. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-13857) S3AUtils.translateException to map (wrapped) InterruptedExceptions to InterruptedIOEs
[ https://issues.apache.org/jira/browse/HADOOP-13857?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15716659#comment-15716659 ] Hudson commented on HADOOP-13857: - SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #10931 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/10931/]) HADOOP-13857. S3AUtils.translateException to map (wrapped) (liuml07: rev 2ff84a00405e977b1fd791cfb974244580dd5ae8) * (edit) hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AUtils.java * (edit) hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/TestS3AExceptionTranslation.java > S3AUtils.translateException to map (wrapped) InterruptedExceptions to > InterruptedIOEs > - > > Key: HADOOP-13857 > URL: https://issues.apache.org/jira/browse/HADOOP-13857 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Affects Versions: 2.8.0 >Reporter: Steve Loughran >Assignee: Steve Loughran > Fix For: 2.8.0, 3.0.0-alpha2 > > Attachments: HADOOP-13857-001.patch > > > Currently {{S3AUtils.translateException}} doesn't recognise interruptions; it > just sees an AmazonClientException chain which is then relayed up. > Proposed: look for an {{InterruptedException}} at the base of the chain of > exceptions, map it to an {{InterruptedIOException}}
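The proposed translation, walking to the base of the cause chain and mapping a wrapped InterruptedException to an InterruptedIOException, can be sketched as follows. This is illustrative only, not the actual S3AUtils code:

```java
import java.io.IOException;
import java.io.InterruptedIOException;

// Toy version of the exception translation: if an InterruptedException
// sits at the base of the cause chain, surface the whole chain as an
// InterruptedIOException so callers can recognise the interruption.
public class TranslateSketch {
  public static IOException translate(Exception e) {
    Throwable base = e;
    while (base.getCause() != null) {
      base = base.getCause(); // descend to the root cause
    }
    if (base instanceof InterruptedException) {
      InterruptedIOException iioe = new InterruptedIOException(e.getMessage());
      iioe.initCause(e); // keep the original chain for diagnostics
      return iioe;
    }
    return new IOException(e);
  }
}
```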
[jira] [Commented] (HADOOP-13811) s3a: getFileStatus fails with com.amazonaws.AmazonClientException: Failed to sanitize XML document destined for handler class
[ https://issues.apache.org/jira/browse/HADOOP-13811?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15716636#comment-15716636 ] Luke Miner commented on HADOOP-13811: - I double checked and {{SPARK_HOME}} is unset and there doesn't appear to be any other {{spark-submit}} on the {{PATH}}. I'm stumped. Is there a prebuilt distribution I can get my hands on? > s3a: getFileStatus fails with com.amazonaws.AmazonClientException: Failed to > sanitize XML document destined for handler class > - > > Key: HADOOP-13811 > URL: https://issues.apache.org/jira/browse/HADOOP-13811 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Affects Versions: 2.8.0, 2.7.3 >Reporter: Steve Loughran >Assignee: Steve Loughran > > Sometimes, occasionally, getFileStatus() fails with a stack trace starting > with {{com.amazonaws.AmazonClientException: Failed to sanitize XML document > destined for handler class}}. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-13857) S3AUtils.translateException to map (wrapped) InterruptedExceptions to InterruptedIOEs
[ https://issues.apache.org/jira/browse/HADOOP-13857?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mingliang Liu updated HADOOP-13857: --- Resolution: Fixed Hadoop Flags: Reviewed Fix Version/s: 3.0.0-alpha2 2.8.0 Status: Resolved (was: Patch Available) +1 Committed to all branches from {{trunk}} through {{branch-2.8}}. Thanks for your contribution [~ste...@apache.org]. > S3AUtils.translateException to map (wrapped) InterruptedExceptions to > InterruptedIOEs > - > > Key: HADOOP-13857 > URL: https://issues.apache.org/jira/browse/HADOOP-13857 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Affects Versions: 2.8.0 >Reporter: Steve Loughran >Assignee: Steve Loughran > Fix For: 2.8.0, 3.0.0-alpha2 > > Attachments: HADOOP-13857-001.patch > > > Currently {{S3AUtils.translateException}} doesn't recognise interruptions; it > just sees an AmazonClientException chain which is then relayed up. > Proposed: look for an {{InterruptedException}} at the base of the chain of > exceptions, map it to an {{InterruptedIOException}}
[jira] [Commented] (HADOOP-13709) Clean up subprocesses spawned by Shell.java:runCommand when the shell process exits
[ https://issues.apache.org/jira/browse/HADOOP-13709?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15716549#comment-15716549 ] Hadoop QA commented on HADOOP-13709: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 17s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 56s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m 43s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 30s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 7s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 16s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 26s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 50s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 39s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m 30s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 10m 30s{color} | {color:green} 
the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 29s{color} | {color:orange} hadoop-common-project/hadoop-common: The patch generated 2 new + 54 unchanged - 0 fixed = 56 total (was 54) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 1s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 17s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 49s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 49s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 9m 14s{color} | {color:red} hadoop-common in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 32s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 49m 10s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.ipc.TestIPC | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:a9ad5d6 | | JIRA Issue | HADOOP-13709 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12841553/HADOOP-13709.009.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux cf19f8246344 3.13.0-96-generic #143-Ubuntu SMP Mon Aug 29 20:15:20 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / c7ff34f | | Default Java | 1.8.0_111 | | findbugs | v3.0.0 | | checkstyle | https://builds.apache.org/job/PreCommit-HADOOP-Build/11187/artifact/patchprocess/diff-checkstyle-hadoop-common-project_hadoop-common.txt | | unit | https://builds.apache.org/job/PreCommit-HADOOP-Build/11187/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common.txt | | Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/11187/testReport/ | | modules | C: hadoop-common-project/hadoop-common U: hadoop-common-project/hadoop-common | | Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/11187/console | | Powered by | Apache Yetus 0.4.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > Clean up subprocesses spawned by Shell.java:runCommand when the shell process > exits >
[jira] [Updated] (HADOOP-13709) Clean up subprocesses spawned by Shell.java:runCommand when the shell process exits
[ https://issues.apache.org/jira/browse/HADOOP-13709?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Eric Badger updated HADOOP-13709: - Issue Type: Improvement (was: Bug) > Clean up subprocesses spawned by Shell.java:runCommand when the shell process > exits > --- > > Key: HADOOP-13709 > URL: https://issues.apache.org/jira/browse/HADOOP-13709 > Project: Hadoop Common > Issue Type: Improvement >Affects Versions: 2.2.0 >Reporter: Eric Badger >Assignee: Eric Badger > Attachments: HADOOP-13709.001.patch, HADOOP-13709.002.patch, > HADOOP-13709.003.patch, HADOOP-13709.004.patch, HADOOP-13709.005.patch, > HADOOP-13709.006.patch, HADOOP-13709.007.patch, HADOOP-13709.008.patch, > HADOOP-13709.009.patch > > > The runCommand code in Shell.java can get into a situation where it will > ignore InterruptedExceptions and refuse to shutdown due to being in I/O > waiting for the return value of the subprocess that was spawned. We need to > allow for the subprocess to be interrupted and killed when the shell process > gets killed. Currently the JVM will shutdown and all of the subprocesses will > be orphaned and not killed. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Comment Edited] (HADOOP-13709) Clean up subprocesses spawned by Shell.java:runCommand when the shell process exits
[ https://issues.apache.org/jira/browse/HADOOP-13709?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15716191#comment-15716191 ] Eric Badger edited comment on HADOOP-13709 at 12/2/16 8:32 PM: --- Uploading a new patch. [~jlowe], I took out the static block that registers the shutdown hook in {{Shell}}. We can add this shutdown hook for the localizer via YARN-5641. Also added a javadoc for {{destroyAllProcesses}}. was (Author: ebadger): Uploading a new patch. [~jlowe], I took out the static block that registers the shutdown hook in {{Shell}}. We can add this shutdown hook for the localizer via YARN-5461. Also added a javadoc for {{destroyAllProcesses}}. > Clean up subprocesses spawned by Shell.java:runCommand when the shell process > exits > --- > > Key: HADOOP-13709 > URL: https://issues.apache.org/jira/browse/HADOOP-13709 > Project: Hadoop Common > Issue Type: Bug >Affects Versions: 2.2.0 >Reporter: Eric Badger >Assignee: Eric Badger > Attachments: HADOOP-13709.001.patch, HADOOP-13709.002.patch, > HADOOP-13709.003.patch, HADOOP-13709.004.patch, HADOOP-13709.005.patch, > HADOOP-13709.006.patch, HADOOP-13709.007.patch, HADOOP-13709.008.patch, > HADOOP-13709.009.patch > > > The runCommand code in Shell.java can get into a situation where it will > ignore InterruptedExceptions and refuse to shutdown due to being in I/O > waiting for the return value of the subprocess that was spawned. We need to > allow for the subprocess to be interrupted and killed when the shell process > gets killed. Currently the JVM will shutdown and all of the subprocesses will > be orphaned and not killed. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-13709) Clean up subprocesses spawned by Shell.java:runCommand when the shell process exits
[ https://issues.apache.org/jira/browse/HADOOP-13709?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Eric Badger updated HADOOP-13709: - Attachment: HADOOP-13709.009.patch Uploading a new patch. [~jlowe], I took out the static block that registers the shutdown hook in {{Shell}}. We can add this shutdown hook for the localizer via YARN-5461. Also added a javadoc for {{destroyAllProcesses}}. > Clean up subprocesses spawned by Shell.java:runCommand when the shell process > exits > --- > > Key: HADOOP-13709 > URL: https://issues.apache.org/jira/browse/HADOOP-13709 > Project: Hadoop Common > Issue Type: Bug >Affects Versions: 2.2.0 >Reporter: Eric Badger >Assignee: Eric Badger > Attachments: HADOOP-13709.001.patch, HADOOP-13709.002.patch, > HADOOP-13709.003.patch, HADOOP-13709.004.patch, HADOOP-13709.005.patch, > HADOOP-13709.006.patch, HADOOP-13709.007.patch, HADOOP-13709.008.patch, > HADOOP-13709.009.patch > > > The runCommand code in Shell.java can get into a situation where it will > ignore InterruptedExceptions and refuse to shutdown due to being in I/O > waiting for the return value of the subprocess that was spawned. We need to > allow for the subprocess to be interrupted and killed when the shell process > gets killed. Currently the JVM will shutdown and all of the subprocesses will > be orphaned and not killed. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-13859) TestConfigurationFieldsBase fails for fields that are DEFAULT values of skipped properties.
[ https://issues.apache.org/jira/browse/HADOOP-13859?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Haibo Chen updated HADOOP-13859: Attachment: HADOOP-13859.01.patch > TestConfigurationFieldsBase fails for fields that are DEFAULT values of > skipped properties. > --- > > Key: HADOOP-13859 > URL: https://issues.apache.org/jira/browse/HADOOP-13859 > Project: Hadoop Common > Issue Type: Bug > Components: common >Affects Versions: 3.0.0-alpha1 >Reporter: Haibo Chen >Assignee: Haibo Chen > Attachments: HADOOP-13859.01.patch > > > In YARN-5922, two new default values are added in YarnConfiguration for two > timeline-service properties that are skipped in TestConfigurationFieldsBase. > TestConfigurationFieldsBase fails as it mistakenly treats the two newly added > default-values as regular properties. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-13859) TestConfigurationFieldsBase fails for fields that are DEFAULT values of skipped properties.
[ https://issues.apache.org/jira/browse/HADOOP-13859?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Haibo Chen updated HADOOP-13859: Status: Patch Available (was: Open) > TestConfigurationFieldsBase fails for fields that are DEFAULT values of > skipped properties. > --- > > Key: HADOOP-13859 > URL: https://issues.apache.org/jira/browse/HADOOP-13859 > Project: Hadoop Common > Issue Type: Bug > Components: common >Affects Versions: 3.0.0-alpha1 >Reporter: Haibo Chen >Assignee: Haibo Chen > Attachments: HADOOP-13859.01.patch > > > In YARN-5922, two new default values are added in YarnConfiguration for two > timeline-service properties that are skipped in TestConfigurationFieldsBase. > TestConfigurationFieldsBase fails as it mistakenly treats the two newly added > default-values as regular properties. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-13856) FileSystem.rename(final Path src, final Path dst, final Rename... options) to become public; specified, tested
[ https://issues.apache.org/jira/browse/HADOOP-13856?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mingliang Liu updated HADOOP-13856: --- Target Version/s: 2.8.0, 3.0.0-alpha2 Component/s: (was: fs/s3) fs +1 for the proposal. I set the components as "fs" and target versions "2.8.0+". Correct me if I'm wrong. > FileSystem.rename(final Path src, final Path dst, final Rename... options) to > become public; specified, tested > -- > > Key: HADOOP-13856 > URL: https://issues.apache.org/jira/browse/HADOOP-13856 > Project: Hadoop Common > Issue Type: Improvement > Components: fs >Affects Versions: 2.8.0 >Reporter: Steve Loughran > > A lot of code within Hadoop (e.g. committers, filesystem) and downstream > (Hive, spark), don't know what to do when rename() returns false, as it can > be a sign of nothing important, or something major. > In contrast, {{rename(final Path src, final Path dst, final Rename... > options)}} has stricter semantics and throws up all exceptions to be caught > or relayed by callers. Yet it cannot be used as its scoped at {{protected}} > and tagged as {{@Deprected}}. > If it was made public then it could be used in committers and elsewhere; if > we backport the making of it public, then life will be even better -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
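The contrast between the boolean {{rename()}} and the stricter variant can be sketched with {{java.nio}} as a stand-in. The helper names here are hypothetical and this is not the Hadoop FileSystem API:

```java
import java.io.FileNotFoundException;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class RenameSketch {
  // Old-style rename: false could mean "source missing", "destination
  // exists", or something worse; the caller cannot tell which.
  public static boolean renameQuiet(Path src, Path dst) {
    try {
      Files.move(src, dst);
      return true;
    } catch (IOException e) {
      return false;
    }
  }

  // Stricter semantics: the failure cause is thrown, so callers can
  // catch or relay a meaningful exception instead of guessing.
  public static void renameChecked(Path src, Path dst) throws IOException {
    if (!Files.exists(src)) {
      throw new FileNotFoundException("rename source not found: " + src);
    }
    Files.move(src, dst);
  }
}
```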
[jira] [Created] (HADOOP-13860) ZKFailoverController.ElectorCallbacks should have a non-trivial implementation for enterNeutralMode
Karthik Kambatla created HADOOP-13860: - Summary: ZKFailoverController.ElectorCallbacks should have a non-trivial implementation for enterNeutralMode Key: HADOOP-13860 URL: https://issues.apache.org/jira/browse/HADOOP-13860 Project: Hadoop Common Issue Type: Bug Reporter: Karthik Kambatla ZKFailoverController.ElectorCallbacks implements enterNeutralMode trivially. This can lead to a master staying active for longer than necessary, unless the fencing scheme ensures the first active is transitioned to standby before transitioning another master to active (e.g. ssh fencing). YARN-5677 does this for YARN in EmbeddedElectorService. If we choose not to implement, we should at least document this so any user of ZKFailoverController in the future is aware. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
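A non-trivial {{enterNeutralMode}} along the lines described might look like the following toy sketch. It is hypothetical and does not reflect the real ZKFailoverController internals:

```java
// Toy illustration: when ZK connectivity is lost and leadership becomes
// uncertain, proactively drop to standby instead of staying active until
// fencing catches up. This is the behaviour YARN-5677 adds for YARN.
public class ElectorCallbacksSketch {
  public enum State { ACTIVE, STANDBY }

  private State state = State.ACTIVE;

  // Called by the elector on entering neutral mode.
  public void enterNeutralMode() {
    if (state == State.ACTIVE) {
      // Without this, an old active can keep serving until it is fenced.
      state = State.STANDBY;
    }
  }

  public State getState() {
    return state;
  }
}
```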
[jira] [Commented] (HADOOP-13257) Improve Azure Data Lake contract tests.
[ https://issues.apache.org/jira/browse/HADOOP-13257?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15716007#comment-15716007 ] Mingliang Liu commented on HADOOP-13257: Thanks for updating the patch; congrats on a clean Jenkins pre-commit run. I'll review and/or commit this today. > Improve Azure Data Lake contract tests. > --- > > Key: HADOOP-13257 > URL: https://issues.apache.org/jira/browse/HADOOP-13257 > Project: Hadoop Common > Issue Type: Sub-task >Reporter: Chris Nauroth >Assignee: Vishwajeet Dusane > Attachments: HADOOP-13257.001.patch, HADOOP-13257.002.patch > > > HADOOP-12875 provided the initial implementation of the FileSystem contract > tests covering Azure Data Lake. This issue tracks subsequent improvements on > those test suites for improved coverage and matching the specified semantics > more closely. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-13847) KMSWebApp should close KeyProviderCryptoExtension
[ https://issues.apache.org/jira/browse/HADOOP-13847?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Anthony Young-Garner updated HADOOP-13847: -- Description: KeyProviderCryptoExtension should be closed in KMSWebApp.contextDestroyed so that all KeyProviders are also closed. See related HADOOP-13838. (was: KeyProviderCryptoExtension should be closed so that all KeyProviders are also closed. See related HADOOP-13838.) > KMSWebApp should close KeyProviderCryptoExtension > - > > Key: HADOOP-13847 > URL: https://issues.apache.org/jira/browse/HADOOP-13847 > Project: Hadoop Common > Issue Type: Bug > Components: kms >Reporter: Anthony Young-Garner >Assignee: John Zhuge > > KeyProviderCryptoExtension should be closed in KMSWebApp.contextDestroyed so > that all KeyProviders are also closed. See related HADOOP-13838. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
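The intended close propagation can be sketched as below. The names are hypothetical stand-ins, not the actual KMS classes:

```java
import java.io.Closeable;
import java.io.IOException;

// Toy illustration: an extension that closes its wrapped provider when it
// is closed, so a single close() call in contextDestroyed releases the
// underlying KeyProvider resources as well.
public class CryptoExtensionSketch implements Closeable {
  private final Closeable keyProvider;
  private boolean closed = false;

  public CryptoExtensionSketch(Closeable keyProvider) {
    this.keyProvider = keyProvider;
  }

  @Override
  public void close() throws IOException {
    if (!closed) {
      closed = true;
      keyProvider.close(); // propagate close to the wrapped provider
    }
  }
}
```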
[jira] [Commented] (HADOOP-13709) Clean up subprocesses spawned by Shell.java:runCommand when the shell process exits
[ https://issues.apache.org/jira/browse/HADOOP-13709?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15715954#comment-15715954 ] Jason Lowe commented on HADOOP-13709: - Thanks for updating the patch! Synchronization changes look good. Thinking about the patch further, I believe this will break YARN nodemanager work-preserving restart. Currently the nodemanager does not kill the subprocesses when work-preserving restart is enabled, but this kill-all-on-shutdown feature will do it anyway. Therefore minimally I think we need to change it so the shell is capable of tracking shell processes but doesn't always kill them on shutdown. Anything that needs to kill things on shutdown (i.e.: the YARN localizer problematic case that caused this to be filed) can register their own shutdown hook to call Shell.destroyAllProcesses. Since this interface will be public, it would be good to provide some javadoc for it. > Clean up subprocesses spawned by Shell.java:runCommand when the shell process > exits > --- > > Key: HADOOP-13709 > URL: https://issues.apache.org/jira/browse/HADOOP-13709 > Project: Hadoop Common > Issue Type: Bug >Affects Versions: 2.2.0 >Reporter: Eric Badger >Assignee: Eric Badger > Attachments: HADOOP-13709.001.patch, HADOOP-13709.002.patch, > HADOOP-13709.003.patch, HADOOP-13709.004.patch, HADOOP-13709.005.patch, > HADOOP-13709.006.patch, HADOOP-13709.007.patch, HADOOP-13709.008.patch > > > The runCommand code in Shell.java can get into a situation where it will > ignore InterruptedExceptions and refuse to shutdown due to being in I/O > waiting for the return value of the subprocess that was spawned. We need to > allow for the subprocess to be interrupted and killed when the shell process > gets killed. Currently the JVM will shutdown and all of the subprocesses will > be orphaned and not killed. 
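The opt-in tracking suggested above, which leaves work-preserving restart unaffected because only components that want kill-on-shutdown register the hook, might be sketched like this. The class and method names are hypothetical stand-ins for the Shell changes in the patch:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Toy illustration: track spawned subprocesses and expose a destroyAll()
// that a caller (e.g. the YARN localizer) can wire into its own shutdown
// hook. Components relying on work-preserving restart simply never call
// registerShutdownHook().
public class SubprocessTracker {
  private static final Map<Process, Boolean> PROCESSES = new ConcurrentHashMap<>();

  public static void track(Process p) {
    PROCESSES.put(p, Boolean.TRUE);
  }

  public static void untrack(Process p) {
    PROCESSES.remove(p);
  }

  // Kill every tracked subprocess; safe to call from a shutdown hook.
  public static void destroyAll() {
    for (Process p : PROCESSES.keySet()) {
      p.destroy();
    }
    PROCESSES.clear();
  }

  // Opt-in: only callers that want kill-on-shutdown register this.
  public static void registerShutdownHook() {
    Runtime.getRuntime().addShutdownHook(new Thread(SubprocessTracker::destroyAll));
  }
}
```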
[jira] [Created] (HADOOP-13859) TestConfigurationFieldsBase fails for fields that are DEFAULT values of skipped properties.
Haibo Chen created HADOOP-13859: --- Summary: TestConfigurationFieldsBase fails for fields that are DEFAULT values of skipped properties. Key: HADOOP-13859 URL: https://issues.apache.org/jira/browse/HADOOP-13859 Project: Hadoop Common Issue Type: Bug Components: common Affects Versions: 3.0.0-alpha1 Reporter: Haibo Chen Assignee: Haibo Chen In YARN-5922, two new default values are added in YarnConfiguration for two timeline-service properties that are skipped in TestConfigurationFieldsBase. TestConfigurationFieldsBase fails as it mistakenly treats the two newly added default-values as regular properties. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
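The skip logic the fix needs can be illustrated as follows. This is a hypothetical sketch, not the actual TestConfigurationFieldsBase code:

```java
import java.util.ArrayList;
import java.util.List;

// Toy illustration: when collecting String constants from a Configuration
// class, skip fields that are DEFAULT_* values rather than property names,
// so defaults of skipped properties are not treated as regular properties.
public class FieldFilterSketch {
  public static List<String> propertyFields(String[] fieldNames) {
    List<String> props = new ArrayList<>();
    for (String name : fieldNames) {
      if (!name.startsWith("DEFAULT_")) {
        props.add(name);
      }
    }
    return props;
  }
}
```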
[jira] [Commented] (HADOOP-13709) Clean up subprocesses spawned by Shell.java:runCommand when the shell process exits
[ https://issues.apache.org/jira/browse/HADOOP-13709?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15715894#comment-15715894 ] Hadoop QA commented on HADOOP-13709: | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 12s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 44s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 9m 29s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 29s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 1s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 17s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 23s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 48s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 37s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 9m 8s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 9m 8s{color} | {color:green} the 
patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 28s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 58s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 18s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 31s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 47s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 7m 36s{color} | {color:green} hadoop-common in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 31s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 44m 2s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:a9ad5d6 | | JIRA Issue | HADOOP-13709 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12841534/HADOOP-13709.008.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux a42c27d58596 3.13.0-95-generic #142-Ubuntu SMP Fri Aug 12 17:00:09 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 0cfd7ad | | Default Java | 1.8.0_111 | | findbugs | v3.0.0 | | Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/11185/testReport/ | | modules | C: hadoop-common-project/hadoop-common U: hadoop-common-project/hadoop-common | | Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/11185/console | | Powered by | Apache Yetus 0.4.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > Clean up subprocesses spawned by Shell.java:runCommand when the shell process > exits > --- > > Key: HADOOP-13709 > URL: https://issues.apache.org/jira/browse/HADOOP-13709 > Project: Hadoop Common > Issue Type: Bug >Affects Versions: 2.2.0 >Reporter: Eric Badger >Assignee: Eric Badger > Attachments: HADOOP-13709.001.patch, HADOOP-13709.002.patch, > HADOOP-13709.003.patch, HADOOP-13709.004.patch, HADOOP-13709.005.patch, > HADOOP-13709.006.patch,
[jira] [Commented] (HADOOP-13857) S3AUtils.translateException to map (wrapped) InterruptedExceptions to InterruptedIOEs
[ https://issues.apache.org/jira/browse/HADOOP-13857?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15715803#comment-15715803 ] Hadoop QA commented on HADOOP-13857: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 15s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 23s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 20s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 14s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 30s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 17s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 32s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 15s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 20s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 19s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 19s{color} | {color:green} the 
patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 11s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 23s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 15s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s{color} | {color:red} The patch has 1 line(s) that end in whitespace. Use git apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 40s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 14s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 23s{color} | {color:green} hadoop-aws in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 15m 9s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:a9ad5d6 | | JIRA Issue | HADOOP-13857 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12841529/HADOOP-13857-001.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux 7693fb93c4cc 3.13.0-93-generic #140-Ubuntu SMP Mon Jul 18 21:21:05 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 0cfd7ad | | Default Java | 1.8.0_111 | | findbugs | v3.0.0 | | whitespace | https://builds.apache.org/job/PreCommit-HADOOP-Build/11184/artifact/patchprocess/whitespace-eol.txt | | Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/11184/testReport/ | | modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws | | Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/11184/console | | Powered by | Apache Yetus 0.4.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > S3AUtils.translateException to map (wrapped) InterruptedExceptions to > InterruptedIOEs > - > > Key: HADOOP-13857 > URL: https://issues.apache.org/jira/browse/HADOOP-13857 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Affects Versions: 2.8.0 >Reporter: Steve
[jira] [Created] (HADOOP-13858) Skip some tests in TestGridmixMemoryEmulation and TestResourceUsageEmulators on the environment other than Linux or Windows
Akira Ajisaka created HADOOP-13858: -- Summary: Skip some tests in TestGridmixMemoryEmulation and TestResourceUsageEmulators on the environment other than Linux or Windows Key: HADOOP-13858 URL: https://issues.apache.org/jira/browse/HADOOP-13858 Project: Hadoop Common Issue Type: Bug Components: test Environment: other than Linux or Windows, such as Mac Reporter: Akira Ajisaka TestGridmixMemoryEmulation.testTotalHeapUsageEmulatorPlugin fails on Mac. {noformat} Running org.apache.hadoop.mapred.gridmix.TestGridmixMemoryEmulation Tests run: 4, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 0.896 sec <<< FAILURE! - in org.apache.hadoop.mapred.gridmix.TestGridmixMemoryEmulation testTotalHeapUsageEmulatorPlugin(org.apache.hadoop.mapred.gridmix.TestGridmixMemoryEmulation) Time elapsed: 0.009 sec <<< ERROR! java.lang.UnsupportedOperationException: Could not determine OS at org.apache.hadoop.util.SysInfo.newInstance(SysInfo.java:43) at org.apache.hadoop.yarn.util.ResourceCalculatorPlugin.(ResourceCalculatorPlugin.java:42) at org.apache.hadoop.mapred.gridmix.DummyResourceCalculatorPlugin.(DummyResourceCalculatorPlugin.java:32) at org.apache.hadoop.mapred.gridmix.TestGridmixMemoryEmulation.testTotalHeapUsageEmulatorPlugin(TestGridmixMemoryEmulation.java:131) {noformat} The following tests fail on Mac as well: * TestResourceUsageEmulators.testCumulativeCpuUsageEmulatorPlugin * TestResourceUsageEmulators.testCpuUsageEmulator * TestResourceUsageEmulators.testResourceUsageMatcherRunner -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
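A conventional way to make such tests skip cleanly rather than error on unsupported platforms is an OS assumption guard at the top of each test. A minimal sketch of the check (the class and method names here are invented for illustration and are not Hadoop's actual API; in JUnit this would typically sit behind {{Assume.assumeTrue(...)}}):

```java
// Illustrative guard mirroring the platform check that SysInfo.newInstance()
// enforces: only Linux and Windows are supported, so skip elsewhere.
public class OsGuard {
    public static boolean isSupportedOs() {
        String os = System.getProperty("os.name").toLowerCase();
        return os.contains("linux") || os.contains("windows");
    }

    public static void main(String[] args) {
        if (!isSupportedOs()) {
            // In a JUnit test: Assume.assumeTrue(isSupportedOs());
            System.out.println("skipping OS-dependent test on "
                + System.getProperty("os.name"));
            return;
        }
        System.out.println("running OS-dependent test body");
    }
}
```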
[jira] [Updated] (HADOOP-13709) Clean up subprocesses spawned by Shell.java:runCommand when the shell process exits
[ https://issues.apache.org/jira/browse/HADOOP-13709?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Eric Badger updated HADOOP-13709: - Attachment: HADOOP-13709.008.patch That all makes sense, [~jlowe]. I believe that I've fixed those all now. I left the synchronized map in and just got rid of the redundant synchronized blocks. Let me know if you'd rather use a regular map along with the synchronized blocks. > Clean up subprocesses spawned by Shell.java:runCommand when the shell process > exits > --- > > Key: HADOOP-13709 > URL: https://issues.apache.org/jira/browse/HADOOP-13709 > Project: Hadoop Common > Issue Type: Bug >Affects Versions: 2.2.0 >Reporter: Eric Badger >Assignee: Eric Badger > Attachments: HADOOP-13709.001.patch, HADOOP-13709.002.patch, > HADOOP-13709.003.patch, HADOOP-13709.004.patch, HADOOP-13709.005.patch, > HADOOP-13709.006.patch, HADOOP-13709.007.patch, HADOOP-13709.008.patch > > > The runCommand code in Shell.java can get into a situation where it will > ignore InterruptedExceptions and refuse to shutdown due to being in I/O > waiting for the return value of the subprocess that was spawned. We need to > allow for the subprocess to be interrupted and killed when the shell process > gets killed. Currently the JVM will shutdown and all of the subprocesses will > be orphaned and not killed. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-13842) Update jackson from 1.9.13 to 2.x in hadoop-maven-plugins
[ https://issues.apache.org/jira/browse/HADOOP-13842?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15715776#comment-15715776 ] Akira Ajisaka commented on HADOOP-13842: I ran qbt with the patch. Only TestTimelineWebServices failed but it is not related to the patch. (YARN-5934) > Update jackson from 1.9.13 to 2.x in hadoop-maven-plugins > - > > Key: HADOOP-13842 > URL: https://issues.apache.org/jira/browse/HADOOP-13842 > Project: Hadoop Common > Issue Type: Sub-task > Components: build >Reporter: Akira Ajisaka >Assignee: Akira Ajisaka >Priority: Minor > Attachments: HADOOP-13842.01.patch > > -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-13826) S3A Deadlock in multipart copy due to thread pool limits.
[ https://issues.apache.org/jira/browse/HADOOP-13826?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15715769#comment-15715769 ] Steve Loughran commented on HADOOP-13826: - one thing to consider: now that we've gone to fast upload streams each with their own bounded queues, does the main queue need to be bounded at all? I guess we have to limit it by the size of the active connections. We know the big ones are PUT, PUT-PART, COPY and COPY-PART, right? The puts are limited per stream; if that is locked down (todo: check those numbers), then there's no overload on memory unless there are many streams trying to write simultaneously. Copy could get the same treatment: we will need to throttle the number of active copy requests across the entire FS. Which means the parallel renaming should share a blocking pool with all other threads trying to do copies. Oh, and I'd like to submit async delete and simple zero-byte put operations somewhere, alongside any async parent directory check (goal: do the full parent treewalk on create(), but async and only validate the results before that first PUT). > S3A Deadlock in multipart copy due to thread pool limits. > - > > Key: HADOOP-13826 > URL: https://issues.apache.org/jira/browse/HADOOP-13826 > Project: Hadoop Common > Issue Type: Bug > Components: fs/s3 >Affects Versions: 2.7.3 >Reporter: Sean Mackrory >Assignee: Sean Mackrory >Priority: Critical > Attachments: HADOOP-13826.001.patch, HADOOP-13826.002.patch > > > In testing HIVE-15093 we have encountered deadlocks in the s3a connector. The > TransferManager javadocs > (http://docs.aws.amazon.com/AWSJavaSDK/latest/javadoc/com/amazonaws/services/s3/transfer/TransferManager.html) > explain how this is possible: > {quote}It is not recommended to use a single threaded executor or a thread > pool with a bounded work queue as control tasks may submit subtasks that > can't complete until all sub tasks complete. 
Using an incorrectly configured > thread pool may cause a deadlock (I.E. the work queue is filled with control > tasks that can't finish until subtasks complete but subtasks can't execute > because the queue is filled).{quote} -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
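The filesystem-wide copy throttle suggested above can be sketched with caller-side permits rather than a bounded work queue; the deadlock the javadoc warns about comes from bounding the queue itself, so the limit is applied to the calling threads instead. All names below are invented for illustration and this is not the S3A implementation:

```java
import java.util.concurrent.Callable;
import java.util.concurrent.Semaphore;

// Sketch: bound the number of in-flight COPY operations by blocking the
// calling thread on a semaphore, instead of giving the executor a bounded
// work queue. Control tasks never pile up in a queue that their subtasks
// also need, which is the deadlock the TransferManager javadoc describes.
public class CopyThrottle {
    private final Semaphore activeCopies;

    public CopyThrottle(int maxActiveCopies) {
        this.activeCopies = new Semaphore(maxActiveCopies);
    }

    public <T> T withCopyPermit(Callable<T> copyOp) throws Exception {
        activeCopies.acquire();       // blocks the caller, not the pool
        try {
            return copyOp.call();
        } finally {
            activeCopies.release();
        }
    }
}
```

Parallel rename could then route every COPY/COPY-PART through one shared throttle per filesystem instance.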
[jira] [Comment Edited] (HADOOP-13597) Switch KMS from Tomcat to Jetty
[ https://issues.apache.org/jira/browse/HADOOP-13597?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15715569#comment-15715569 ] John Zhuge edited comment on HADOOP-13597 at 12/2/16 5:31 PM: -- [~aw] Found mistake in {{hadoop-kms.sh}} for Patch 002 where everything is in {{hadoop_subcommand_kms}}. Should be: {code} if [[ "${HADOOP_SHELL_EXECNAME}" = hadoop ]]; then hadoop_add_profile kms hadoop_add_subcommand "kms" "run KMS, the Key Management Server" fi function _kms_hadoop_init { # init variables } function hadoop_subcommand_kms { # Called by bin/hadoop to provide subcommand case statement if any HADOOP_SUBCMD_SUPPORTDAEMONIZATION=true HADOOP_CLASSNAME=org.apache.hadoop.crypto.key.kms.server.KMSHttpServer } function _kms_hadoop_finalize { # Called in finalize phase, all env vars are settled hadoop_add_param HADOOP_OPTS "-Dkms.config.dir=" \ "-Dkms.config.dir=${HADOOP_CONF_DIR}" hadoop_add_param HADOOP_OPTS "-Dkms.log.dir=" \ "-Dkms.log.dir=${HADOOP_LOG_DIR}" } {code} The 3 functions are called in this order: # _kms_hadoop_init # hadoop_subcommand_kms # _kms_hadoop_finalize was (Author: jzhuge): [~aw] Found mistake in {{hadoop-kms.sh}} for Patch 002 where everything is in {{hadoop_subcommand_kms}}. 
Should be: {code} if [[ "${HADOOP_SHELL_EXECNAME}" = hadoop ]]; then hadoop_add_profile kms hadoop_add_subcommand "kms" "run KMS, the Key Management Server" fi function _kms_hadoop_init { # init variables } function hadoop_subcommand_kms { # Called by bin/hadoop to provide subcommand case statement if any HADOOP_SUBCMD_SUPPORTDAEMONIZATION=true HADOOP_CLASSNAME=org.apache.hadoop.crypto.key.kms.server.KMSHttpServer } function _kms_hadoop_finalize { # Called in finalize phase, all env vars are settled hadoop_add_param HADOOP_OPTS "-Dkms.config.dir=" \ "-Dkms.config.dir=${HADOOP_CONF_DIR}" hadoop_add_param HADOOP_OPTS "-Dkms.log.dir=" \ "-Dkms.log.dir=${HADOOP_LOG_DIR}" } {code} The 3 functions are called in this order: # _kms_hadoop_init # hadoop_subcommand_kms # _kmd_hadoop_finalize > Switch KMS from Tomcat to Jetty > --- > > Key: HADOOP-13597 > URL: https://issues.apache.org/jira/browse/HADOOP-13597 > Project: Hadoop Common > Issue Type: New Feature > Components: kms >Affects Versions: 2.6.0 >Reporter: John Zhuge >Assignee: John Zhuge > Attachments: HADOOP-13597.001.patch, HADOOP-13597.002.patch > > > The Tomcat 6 we are using will reach EOL at the end of 2017. While there are > other good options, I would propose switching to {{Jetty 9}} for the > following reasons: > * Easier migration. Both Tomcat and Jetty are based on {{Servlet > Containers}}, so we don't have change client code that much. It would require > more work to switch to {{JAX-RS}}. > * Well established. > * Good performance and scalability. > Other alternatives: > * Jersey + Grizzly > * Tomcat 8 > Your opinions will be greatly appreciated. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Comment Edited] (HADOOP-13597) Switch KMS from Tomcat to Jetty
[ https://issues.apache.org/jira/browse/HADOOP-13597?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15715747#comment-15715747 ] John Zhuge edited comment on HADOOP-13597 at 12/2/16 5:30 PM: -- {{hadoop_add_param HADOOP_OPTS "-Dkms.config.dir=" ...}} may be called too early. Since {{hadoop_subcommand_opts}} is called between {{hadoop_subcommand_kms}} and {{_kms_hadoop_finalize}}, in the all-in-one approach, {{HADOOP_KMS_OPTS=-Dkms.config.dir=..}} would not have taken effect. was (Author: jzhuge): {{hadoop_add_param HADOOP_OPTS "-Dkms.config.dir=" "-Dkms.config.dir=${HADOOP_CONF_DIR}"}} may be called too early. Since {{hadoop_subcommand_opts}} is called between {{hadoop_subcommand_kms}} and {{_kms_hadoop_finalize}}, in the all-in-one approach, {{HADOOP_KMS_OPTS=-Dkms.config.dir=..}} would not have taken effect. > Switch KMS from Tomcat to Jetty > --- > > Key: HADOOP-13597 > URL: https://issues.apache.org/jira/browse/HADOOP-13597 > Project: Hadoop Common > Issue Type: New Feature > Components: kms >Affects Versions: 2.6.0 >Reporter: John Zhuge >Assignee: John Zhuge > Attachments: HADOOP-13597.001.patch, HADOOP-13597.002.patch > > > The Tomcat 6 we are using will reach EOL at the end of 2017. While there are > other good options, I would propose switching to {{Jetty 9}} for the > following reasons: > * Easier migration. Both Tomcat and Jetty are based on {{Servlet > Containers}}, so we don't have change client code that much. It would require > more work to switch to {{JAX-RS}}. > * Well established. > * Good performance and scalability. > Other alternatives: > * Jersey + Grizzly > * Tomcat 8 > Your opinions will be greatly appreciated. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-13597) Switch KMS from Tomcat to Jetty
[ https://issues.apache.org/jira/browse/HADOOP-13597?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15715747#comment-15715747 ] John Zhuge commented on HADOOP-13597: - {{hadoop_add_param HADOOP_OPTS "-Dkms.config.dir=" "-Dkms.config.dir=${HADOOP_CONF_DIR}"}} may be called too early. Since {{hadoop_subcommand_opts}} is called between {{hadoop_subcommand_kms}} and {{_kms_hadoop_finalize}}, in the all-in-one approach, {{HADOOP_KMS_OPTS=-Dkms.config.dir=..}} would not have taken effect. > Switch KMS from Tomcat to Jetty > --- > > Key: HADOOP-13597 > URL: https://issues.apache.org/jira/browse/HADOOP-13597 > Project: Hadoop Common > Issue Type: New Feature > Components: kms >Affects Versions: 2.6.0 >Reporter: John Zhuge >Assignee: John Zhuge > Attachments: HADOOP-13597.001.patch, HADOOP-13597.002.patch > > > The Tomcat 6 we are using will reach EOL at the end of 2017. While there are > other good options, I would propose switching to {{Jetty 9}} for the > following reasons: > * Easier migration. Both Tomcat and Jetty are based on {{Servlet > Containers}}, so we don't have change client code that much. It would require > more work to switch to {{JAX-RS}}. > * Well established. > * Good performance and scalability. > Other alternatives: > * Jersey + Grizzly > * Tomcat 8 > Your opinions will be greatly appreciated. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-13857) S3AUtils.translateException to map (wrapped) InterruptedExceptions to InterruptedIOEs
[ https://issues.apache.org/jira/browse/HADOOP-13857?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran updated HADOOP-13857: Status: Patch Available (was: Open) test: s3a ireland > S3AUtils.translateException to map (wrapped) InterruptedExceptions to > InterruptedIOEs > - > > Key: HADOOP-13857 > URL: https://issues.apache.org/jira/browse/HADOOP-13857 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Affects Versions: 2.8.0 >Reporter: Steve Loughran >Assignee: Steve Loughran > Attachments: HADOOP-13857-001.patch > > > Currently {{S3AUtils.translateException}} doesn't recognise interruptions; it > just sees an AmazonClientException chain which is then relayed up. > Proposed: look for an {{InterruptedIOException}} at the base of the chain of > exceptions, map it to an {{InterruptedIOException}} -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-13597) Switch KMS from Tomcat to Jetty
[ https://issues.apache.org/jira/browse/HADOOP-13597?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15715710#comment-15715710 ] Allen Wittenauer commented on HADOOP-13597: --- I'm sort of surprised that the "all in one" doesn't work, other than -e should probably be -x. It's almost certainly safer and doesn't have nearly as many side effects. Anything in particular that is breaking that I could help with? > Switch KMS from Tomcat to Jetty > --- > > Key: HADOOP-13597 > URL: https://issues.apache.org/jira/browse/HADOOP-13597 > Project: Hadoop Common > Issue Type: New Feature > Components: kms >Affects Versions: 2.6.0 >Reporter: John Zhuge >Assignee: John Zhuge > Attachments: HADOOP-13597.001.patch, HADOOP-13597.002.patch > > > The Tomcat 6 we are using will reach EOL at the end of 2017. While there are > other good options, I would propose switching to {{Jetty 9}} for the > following reasons: > * Easier migration. Both Tomcat and Jetty are based on {{Servlet > Containers}}, so we don't have change client code that much. It would require > more work to switch to {{JAX-RS}}. > * Well established. > * Good performance and scalability. > Other alternatives: > * Jersey + Grizzly > * Tomcat 8 > Your opinions will be greatly appreciated. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
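For context on the {{-e}} vs {{-x}} point above: {{-e}} only tests that a path exists, while {{-x}} tests that it is executable, which is the safer guard before invoking a script. A standalone bash illustration (not taken from the patch):

```shell
#!/usr/bin/env bash
# -e: path exists; -x: path exists and is executable by the current user.
f=$(mktemp)                     # fresh temp files exist but have mode 0600
[[ -e "$f" ]] && echo "exists"
[[ -x "$f" ]] || echo "not executable"
chmod +x "$f"
[[ -x "$f" ]] && echo "now executable"
rm -f "$f"
```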
[jira] [Updated] (HADOOP-13857) S3AUtils.translateException to map (wrapped) InterruptedExceptions to InterruptedIOEs
[ https://issues.apache.org/jira/browse/HADOOP-13857?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran updated HADOOP-13857: Attachment: HADOOP-13857-001.patch Patch 001; scan for type package scoped for testing; translateException using the probe & optional convert during handling of AmazonClientException > S3AUtils.translateException to map (wrapped) InterruptedExceptions to > InterruptedIOEs > - > > Key: HADOOP-13857 > URL: https://issues.apache.org/jira/browse/HADOOP-13857 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Affects Versions: 2.8.0 >Reporter: Steve Loughran >Assignee: Steve Loughran > Attachments: HADOOP-13857-001.patch > > > Currently {{S3AUtils.translateException}} doesn't recognise interruptions; it > just sees an AmazonClientException chain which is then relayed up. > Proposed: look for an {{InterruptedIOException}} at the base of the chain of > exceptions, map it to an {{InterruptedIOException}} -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
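The probe described in the patch note can be sketched as a walk down the exception's cause chain; the method names below are illustrative and are not the actual S3AUtils signatures:

```java
import java.io.IOException;
import java.io.InterruptedIOException;

// Sketch: if an InterruptedException or InterruptedIOException sits anywhere
// in the cause chain of a client-side exception, translate the whole chain
// to an InterruptedIOException so callers can recognise cancellation.
public class InterruptProbe {
    static boolean containsInterrupted(Throwable t) {
        while (t != null) {
            if (t instanceof InterruptedException
                || t instanceof InterruptedIOException) {
                return true;
            }
            t = t.getCause();
        }
        return false;
    }

    static IOException translate(String operation, Exception e) {
        if (containsInterrupted(e)) {
            InterruptedIOException iio =
                new InterruptedIOException(operation + " interrupted");
            iio.initCause(e);
            return iio;
        }
        return new IOException(operation + " failed", e);
    }
}
```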
[jira] [Commented] (HADOOP-13709) Clean up subprocesses spawned by Shell.java:runCommand when the shell process exits
[ https://issues.apache.org/jira/browse/HADOOP-13709?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15715670#comment-15715670 ] Jason Lowe commented on HADOOP-13709: - The synchronized blocks are unnecessary on the get and put methods for a synchronized map. That's what a synchronized map brings to the table over a normal map -- it adds a lock to those methods. However, a synchronized map cannot auto-synchronize iteration, which is why we need to explicitly lock it when walking it. It'd also be nice to mark process1 and process2 as final in the unit test since otherwise the patch will only compile on JDK8 or later (i.e.: only trunk at this point). > Clean up subprocesses spawned by Shell.java:runCommand when the shell process > exits > --- > > Key: HADOOP-13709 > URL: https://issues.apache.org/jira/browse/HADOOP-13709 > Project: Hadoop Common > Issue Type: Bug >Affects Versions: 2.2.0 >Reporter: Eric Badger >Assignee: Eric Badger > Attachments: HADOOP-13709.001.patch, HADOOP-13709.002.patch, > HADOOP-13709.003.patch, HADOOP-13709.004.patch, HADOOP-13709.005.patch, > HADOOP-13709.006.patch, HADOOP-13709.007.patch > > > The runCommand code in Shell.java can get into a situation where it will > ignore InterruptedExceptions and refuse to shutdown due to being in I/O > waiting for the return value of the subprocess that was spawned. We need to > allow for the subprocess to be interrupted and killed when the shell process > gets killed. Currently the JVM will shutdown and all of the subprocesses will > be orphaned and not killed. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
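The locking rules described above, in a minimal sketch (an illustrative class, not the Shell.java code): single get/put calls on a synchronized map need no extra locking, but iteration must hold the map's monitor explicitly.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Sketch: Collections.synchronizedMap locks each get/put internally, so
// wrapping those single calls in synchronized blocks is redundant. Iteration
// spans multiple calls, so it must synchronize on the map object itself.
public class SyncMapDemo {
    private final Map<Integer, String> subprocesses =
        Collections.synchronizedMap(new HashMap<>());

    void register(int pid, String command) {
        subprocesses.put(pid, command);   // already atomic, no extra lock
    }

    List<String> snapshot() {
        synchronized (subprocesses) {     // explicit lock for iteration
            return new ArrayList<>(subprocesses.values());
        }
    }
}
```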
[jira] [Commented] (HADOOP-6240) Rename operation is not consistent between different implementations of FileSystem
[ https://issues.apache.org/jira/browse/HADOOP-6240?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15715592#comment-15715592 ] Steve Loughran commented on HADOOP-6240: Just a heads up that I've filed HADOOP-13856 to make this protected, @Deprecated {{rename(src, dest, opts)}} method public and undeprecated. For all its failings, {{FileSystem}} is the universal API for FS access, used in the downstream projects and the one generally implemented by object stores, in the hadoop code and elsewhere. Much of Hadoop's own code uses it, in particular {{FileOutputFormat}}, which is used for committing work to filesystems. Making the {{rename(src, dest, opts)}} method public allows {{FileOutputFormat}} to adopt it along with its stricter semantics, and should provide the motivation for more FS subclasses to implement the method directly, for better atomicity, or at least performance. > Rename operation is not consistent between different implementations of > FileSystem > -- > > Key: HADOOP-6240 > URL: https://issues.apache.org/jira/browse/HADOOP-6240 > Project: Hadoop Common > Issue Type: Bug > Components: fs >Affects Versions: 0.21.0 >Reporter: Suresh Srinivas >Assignee: Suresh Srinivas > Fix For: 0.21.0 > > Attachments: hadoop-6240-1.patch, hadoop-6240-2.patch, > hadoop-6240-3.patch, hadoop-6240-4.patch, hadoop-6240-5.patch, > hadoop-6240.patch > > > The rename operation has many scenarios that are not consistently implemented > across file systems. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-13597) Switch KMS from Tomcat to Jetty
[ https://issues.apache.org/jira/browse/HADOOP-13597?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15715569#comment-15715569 ] John Zhuge commented on HADOOP-13597: - [~aw] Found a mistake in {{hadoop-kms.sh}} for Patch 002 where everything is in {{hadoop_subcommand_kms}}. Should be: {code} if [[ "${HADOOP_SHELL_EXECNAME}" = hadoop ]]; then hadoop_add_profile kms hadoop_add_subcommand "kms" "run KMS, the Key Management Server" fi function _kms_hadoop_init { # init variables } function hadoop_subcommand_kms { # Called by bin/hadoop to provide subcommand case statement if any HADOOP_SUBCMD_SUPPORTDAEMONIZATION=true HADOOP_CLASSNAME=org.apache.hadoop.crypto.key.kms.server.KMSHttpServer } function _kms_hadoop_finalize { # Called in finalize phase, all env vars are settled hadoop_add_param HADOOP_OPTS "-Dkms.config.dir=" \ "-Dkms.config.dir=${HADOOP_CONF_DIR}" hadoop_add_param HADOOP_OPTS "-Dkms.log.dir=" \ "-Dkms.log.dir=${HADOOP_LOG_DIR}" } {code} The 3 functions are called in this order: # _kms_hadoop_init # hadoop_subcommand_kms # _kms_hadoop_finalize > Switch KMS from Tomcat to Jetty > --- > > Key: HADOOP-13597 > URL: https://issues.apache.org/jira/browse/HADOOP-13597 > Project: Hadoop Common > Issue Type: New Feature > Components: kms >Affects Versions: 2.6.0 >Reporter: John Zhuge >Assignee: John Zhuge > Attachments: HADOOP-13597.001.patch, HADOOP-13597.002.patch > > > The Tomcat 6 we are using will reach EOL at the end of 2017. While there are > other good options, I would propose switching to {{Jetty 9}} for the > following reasons: > * Easier migration. Both Tomcat and Jetty are based on {{Servlet > Containers}}, so we don't have change client code that much. It would require > more work to switch to {{JAX-RS}}. > * Well established. > * Good performance and scalability. > Other alternatives: > * Jersey + Grizzly > * Tomcat 8 > Your opinions will be greatly appreciated. 
-- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-13786) add output committer which uses s3guard for consistent O(1) commits to S3
[ https://issues.apache.org/jira/browse/HADOOP-13786?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15715389#comment-15715389 ] Steve Loughran commented on HADOOP-13786: - Update: the way s3a does commits via mergeOrUpdate() isn't really that atomic. It's atomic per destination rename, but as it treewalks doing a merge, there is real potential for races/inconsistent outcomes. What s3guard does at least guarantee is that the view of the FS during that walk is consistent across clients. I'm checking in/pushing up an update, where I've been slowly moving up to some s3a internals, more for performance than consistency. Examples: * switch to {{innerDelete(FileStatus)}} for the delete, adding a parameter there to allow the check for maybe creating a parent dir to be skipped. As we know there's about to be a new child, skipping that avoids 1-3 HEAD calls and a PUT which will soon be deleted * switch to an implementation of {{FileSystem.rename/3}}. This raises exceptions and lets us choose overwrite policy (good), but it still does two getFileStatus calls, one for the src and one for the dest. As we have the source and have just deleted the dest, we don't need them again. Having an @Private method which took the source and dest values would strip out another 2-6 HTTP requests. Admittedly, on a large commit the cost of this preamble is low; the COPY becomes the expense. > add output committer which uses s3guard for consistent O(1) commits to S3 > - > > Key: HADOOP-13786 > URL: https://issues.apache.org/jira/browse/HADOOP-13786 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Affects Versions: 3.0.0-alpha2 >Reporter: Steve Loughran >Assignee: Steve Loughran > > A goal of this code is "support O(1) commits to S3 repositories in the > presence of failures". Implement it, including whatever is needed to > demonstrate the correctness of the algorithm. 
(that is, assuming that s3guard > provides a consistent view of the presence/absence of blobs, show that we can > commit directly). > I consider ourselves free to expose the blobstore-ness of the s3 output > streams (ie. not visible until the close()), if we need to use that to allow > us to abort commit operations. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
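The {{innerDelete}} optimisation described in the comment above can be sketched as follows. This is a hedged illustration only: {{Store}}, {{parentHasOtherChildren}} and {{putEmptyDirMarker}} are hypothetical stand-ins for the real S3AFileSystem internals, which issue HEAD and PUT requests against S3 itself.

```java
// Sketch: when the caller knows a new child is about to be created,
// skip the "maybe recreate a parent directory marker" step after a
// delete, avoiding HEAD calls and a PUT that would soon be deleted.
public class DeleteSketch {

  /** Hypothetical stand-in for the object-store operations involved. */
  interface Store {
    void deleteObject(String key);
    boolean parentHasOtherChildren(String key); // HEAD calls in practice
    void putEmptyDirMarker(String parent);      // the PUT we want to avoid
  }

  static void innerDelete(Store store, String key, boolean skipParentCheck) {
    store.deleteObject(key);
    if (!skipParentCheck && !store.parentHasOtherChildren(key)) {
      // Without the skip flag, an empty-directory marker is recreated
      // here, only to be deleted again when the new child lands.
      store.putEmptyDirMarker(parentOf(key));
    }
  }

  static String parentOf(String key) {
    int i = key.lastIndexOf('/');
    return i < 0 ? "" : key.substring(0, i);
  }

  public static void main(String[] args) {
    // Prints the parent key of a nested object key.
    System.out.println(parentOf("spark-cloud/S3AStreamingSuite/streaming"));
  }
}
```

The saving comes purely from the skipped round trips; the semantics of the delete itself are unchanged.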
[jira] [Commented] (HADOOP-13597) Switch KMS from Tomcat to Jetty
[ https://issues.apache.org/jira/browse/HADOOP-13597?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15715380#comment-15715380 ] John Zhuge commented on HADOOP-13597: - [Private branch for Patch 002|https://github.com/jzhuge/hadoop/tree/HADOOP-13597.002]. > Switch KMS from Tomcat to Jetty > --- > > Key: HADOOP-13597 > URL: https://issues.apache.org/jira/browse/HADOOP-13597 > Project: Hadoop Common > Issue Type: New Feature > Components: kms >Affects Versions: 2.6.0 >Reporter: John Zhuge >Assignee: John Zhuge > Attachments: HADOOP-13597.001.patch, HADOOP-13597.002.patch > > > The Tomcat 6 we are using will reach EOL at the end of 2017. While there are > other good options, I would propose switching to {{Jetty 9}} for the > following reasons: > * Easier migration. Both Tomcat and Jetty are based on {{Servlet > Containers}}, so we don't have to change client code that much. It would require > more work to switch to {{JAX-RS}}. > * Well established. > * Good performance and scalability. > Other alternatives: > * Jersey + Grizzly > * Tomcat 8 > Your opinions will be greatly appreciated. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-13857) S3AUtils.translateException to map (wrapped) InterruptedExceptions to InterruptedIOEs
[ https://issues.apache.org/jira/browse/HADOOP-13857?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15715291#comment-15715291 ] Steve Loughran commented on HADOOP-13857: - One issue: should all AbortedExceptions be uprated to InterruptedExceptions? HADOOP-13811 hints at other ways in which interruptions may surface. However, looking for specific error texts is dangerous and brittle. > S3AUtils.translateException to map (wrapped) InterruptedExceptions to > InterruptedIOEs > - > > Key: HADOOP-13857 > URL: https://issues.apache.org/jira/browse/HADOOP-13857 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Affects Versions: 2.8.0 >Reporter: Steve Loughran >Assignee: Steve Loughran > > Currently {{S3AUtils.translateException}} doesn't recognise interruptions; it > just sees an AmazonClientException chain which is then relayed up. > Proposed: look for an {{InterruptedException}} at the base of the chain of > exceptions, and map it to an {{InterruptedIOException}} -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
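The proposed mapping amounts to walking an exception's cause chain. A hedged sketch only: {{InterruptMapper}}, {{containsInterrupted}} and {{translate}} are illustrative names, not the actual {{S3AUtils.translateException}} code.

```java
import java.io.IOException;
import java.io.InterruptedIOException;

// Sketch of the HADOOP-13857 proposal: detect a wrapped interruption
// anywhere in the cause chain and surface it as InterruptedIOException.
public class InterruptMapper {

  /** Return true if any cause in the chain signals an interruption. */
  static boolean containsInterrupted(Throwable t) {
    for (Throwable cause = t; cause != null; cause = cause.getCause()) {
      if (cause instanceof InterruptedException
          || cause instanceof InterruptedIOException) {
        return true;
      }
    }
    return false;
  }

  /** Map interrupted chains to InterruptedIOException, else wrap as IOException. */
  static IOException translate(String message, Exception e) {
    if (containsInterrupted(e)) {
      IOException ioe = new InterruptedIOException(message + ": " + e);
      ioe.initCause(e);
      return ioe;
    }
    return new IOException(message, e);
  }

  public static void main(String[] args) {
    Exception wrapped = new RuntimeException("client exception",
        new InterruptedException("interrupted"));
    // Prints InterruptedIOException: the interruption was found in the chain.
    System.out.println(translate("getFileStatus", wrapped).getClass().getSimpleName());
  }
}
```

An instanceof check on the chain avoids the brittle error-text matching the comment above warns against.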
[jira] [Commented] (HADOOP-13857) S3AUtils.translateException to map (wrapped) InterruptedExceptions to InterruptedIOEs
[ https://issues.apache.org/jira/browse/HADOOP-13857?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15715284#comment-15715284 ] Steve Loughran commented on HADOOP-13857: - After {code} 2016-12-02 14:30:25,692 [JobGenerator] WARN dstream.FileInputDStream (Logging.scala:logWarning(87)) - Error finding new files java.io.InterruptedIOException: getFileStatus on s3a://hwdev-steve-new/spark-cloud/S3AStreamingSuite/streaming/streaming: com.amazonaws.AbortedException: at org.apache.hadoop.fs.s3a.S3AUtils.translateException(S3AUtils.java:118) at org.apache.hadoop.fs.s3a.S3AUtils.translateException(S3AUtils.java:94) at org.apache.hadoop.fs.s3a.S3AFileSystem.getFileStatus(S3AFileSystem.java:1685) at org.apache.hadoop.fs.s3a.S3AFileSystem.innerListStatus(S3AFileSystem.java:1476) at org.apache.hadoop.fs.s3a.S3AFileSystem.listStatus(S3AFileSystem.java:1452) at org.apache.hadoop.fs.Globber.listStatus(Globber.java:76) at org.apache.hadoop.fs.Globber.doGlob(Globber.java:234) at org.apache.hadoop.fs.Globber.glob(Globber.java:148) at org.apache.hadoop.fs.FileSystem.globStatus(FileSystem.java:1978) at org.apache.hadoop.fs.s3a.S3AFileSystem.globStatus(S3AFileSystem.java:2168) at org.apache.spark.streaming.dstream.FileInputDStream.findNewFiles(FileInputDStream.scala:205) at org.apache.spark.streaming.dstream.FileInputDStream.compute(FileInputDStream.scala:149) at org.apache.spark.streaming.dstream.DStream$$anonfun$getOrCompute$1$$anonfun$1$$anonfun$apply$7.apply(DStream.scala:342) at org.apache.spark.streaming.dstream.DStream$$anonfun$getOrCompute$1$$anonfun$1$$anonfun$apply$7.apply(DStream.scala:342) at scala.util.DynamicVariable.withValue(DynamicVariable.scala:58) at org.apache.spark.streaming.dstream.DStream$$anonfun$getOrCompute$1$$anonfun$1.apply(DStream.scala:341) at org.apache.spark.streaming.dstream.DStream$$anonfun$getOrCompute$1$$anonfun$1.apply(DStream.scala:341) at 
org.apache.spark.streaming.dstream.DStream.createRDDWithLocalProperties(DStream.scala:416) at org.apache.spark.streaming.dstream.DStream$$anonfun$getOrCompute$1.apply(DStream.scala:336) at org.apache.spark.streaming.dstream.DStream$$anonfun$getOrCompute$1.apply(DStream.scala:334) at scala.Option.orElse(Option.scala:289) at org.apache.spark.streaming.dstream.DStream.getOrCompute(DStream.scala:331) at org.apache.spark.streaming.dstream.MappedDStream.compute(MappedDStream.scala:36) at org.apache.spark.streaming.dstream.DStream$$anonfun$getOrCompute$1$$anonfun$1$$anonfun$apply$7.apply(DStream.scala:342) at org.apache.spark.streaming.dstream.DStream$$anonfun$getOrCompute$1$$anonfun$1$$anonfun$apply$7.apply(DStream.scala:342) at scala.util.DynamicVariable.withValue(DynamicVariable.scala:58) at org.apache.spark.streaming.dstream.DStream$$anonfun$getOrCompute$1$$anonfun$1.apply(DStream.scala:341) at org.apache.spark.streaming.dstream.DStream$$anonfun$getOrCompute$1$$anonfun$1.apply(DStream.scala:341) at org.apache.spark.streaming.dstream.DStream.createRDDWithLocalProperties(DStream.scala:416) at org.apache.spark.streaming.dstream.DStream$$anonfun$getOrCompute$1.apply(DStream.scala:336) at org.apache.spark.streaming.dstream.DStream$$anonfun$getOrCompute$1.apply(DStream.scala:334) at scala.Option.orElse(Option.scala:289) at org.apache.spark.streaming.dstream.DStream.getOrCompute(DStream.scala:331) at org.apache.spark.streaming.dstream.FilteredDStream.compute(FilteredDStream.scala:36) at org.apache.spark.streaming.dstream.DStream$$anonfun$getOrCompute$1$$anonfun$1$$anonfun$apply$7.apply(DStream.scala:342) at org.apache.spark.streaming.dstream.DStream$$anonfun$getOrCompute$1$$anonfun$1$$anonfun$apply$7.apply(DStream.scala:342) at scala.util.DynamicVariable.withValue(DynamicVariable.scala:58) at org.apache.spark.streaming.dstream.DStream$$anonfun$getOrCompute$1$$anonfun$1.apply(DStream.scala:341) at 
org.apache.spark.streaming.dstream.DStream$$anonfun$getOrCompute$1$$anonfun$1.apply(DStream.scala:341) at org.apache.spark.streaming.dstream.DStream.createRDDWithLocalProperties(DStream.scala:416) at org.apache.spark.streaming.dstream.DStream$$anonfun$getOrCompute$1.apply(DStream.scala:336) at org.apache.spark.streaming.dstream.DStream$$anonfun$getOrCompute$1.apply(DStream.scala:334) at scala.Option.orElse(Option.scala:289) at org.apache.spark.streaming.dstream.DStream.getOrCompute(DStream.scala:331) at org.apache.spark.streaming.dstream.MappedDStream.compute(MappedDStream.scala:36) at
[jira] [Commented] (HADOOP-13857) S3AUtils.translateException to map (wrapped) InterruptedExceptions to InterruptedIOEs
[ https://issues.apache.org/jira/browse/HADOOP-13857?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15715282#comment-15715282 ] Steve Loughran commented on HADOOP-13857: - before {code} 2016-12-02 11:59:22,623 [JobGenerator] WARN dstream.FileInputDStream (Logging.scala:logWarning(87)) - Error finding new files org.apache.hadoop.fs.s3a.AWSClientIOException: getFileStatus on s3a://hwdev-steve-new/spark-cloud/S3AStreamingSuite/streaming/streaming: com.amazonaws.AbortedException: : at org.apache.hadoop.fs.s3a.S3AUtils.translateException(S3AUtils.java:116) at org.apache.hadoop.fs.s3a.S3AUtils.translateException(S3AUtils.java:93) at org.apache.hadoop.fs.s3a.S3AFileSystem.getFileStatus(S3AFileSystem.java:1636) at org.apache.hadoop.fs.s3a.S3AFileSystem.innerListStatus(S3AFileSystem.java:1427) at org.apache.hadoop.fs.s3a.S3AFileSystem.listStatus(S3AFileSystem.java:1403) at org.apache.hadoop.fs.Globber.listStatus(Globber.java:76) at org.apache.hadoop.fs.Globber.doGlob(Globber.java:234) at org.apache.hadoop.fs.Globber.glob(Globber.java:148) at org.apache.hadoop.fs.FileSystem.globStatus(FileSystem.java:1978) at org.apache.hadoop.fs.s3a.S3AFileSystem.globStatus(S3AFileSystem.java:2119) at org.apache.spark.streaming.dstream.FileInputDStream.findNewFiles(FileInputDStream.scala:205) at org.apache.spark.streaming.dstream.FileInputDStream.compute(FileInputDStream.scala:149) at org.apache.spark.streaming.dstream.DStream$$anonfun$getOrCompute$1$$anonfun$1$$anonfun$apply$7.apply(DStream.scala:342) at org.apache.spark.streaming.dstream.DStream$$anonfun$getOrCompute$1$$anonfun$1$$anonfun$apply$7.apply(DStream.scala:342) at scala.util.DynamicVariable.withValue(DynamicVariable.scala:58) at org.apache.spark.streaming.dstream.DStream$$anonfun$getOrCompute$1$$anonfun$1.apply(DStream.scala:341) at org.apache.spark.streaming.dstream.DStream$$anonfun$getOrCompute$1$$anonfun$1.apply(DStream.scala:341) at 
org.apache.spark.streaming.dstream.DStream.createRDDWithLocalProperties(DStream.scala:416) at org.apache.spark.streaming.dstream.DStream$$anonfun$getOrCompute$1.apply(DStream.scala:336) at org.apache.spark.streaming.dstream.DStream$$anonfun$getOrCompute$1.apply(DStream.scala:334) at scala.Option.orElse(Option.scala:289) at org.apache.spark.streaming.dstream.DStream.getOrCompute(DStream.scala:331) at org.apache.spark.streaming.dstream.MappedDStream.compute(MappedDStream.scala:36) at org.apache.spark.streaming.dstream.DStream$$anonfun$getOrCompute$1$$anonfun$1$$anonfun$apply$7.apply(DStream.scala:342) at org.apache.spark.streaming.dstream.DStream$$anonfun$getOrCompute$1$$anonfun$1$$anonfun$apply$7.apply(DStream.scala:342) at scala.util.DynamicVariable.withValue(DynamicVariable.scala:58) at org.apache.spark.streaming.dstream.DStream$$anonfun$getOrCompute$1$$anonfun$1.apply(DStream.scala:341) at org.apache.spark.streaming.dstream.DStream$$anonfun$getOrCompute$1$$anonfun$1.apply(DStream.scala:341) at org.apache.spark.streaming.dstream.DStream.createRDDWithLocalProperties(DStream.scala:416) at org.apache.spark.streaming.dstream.DStream$$anonfun$getOrCompute$1.apply(DStream.scala:336) at org.apache.spark.streaming.dstream.DStream$$anonfun$getOrCompute$1.apply(DStream.scala:334) at scala.Option.orElse(Option.scala:289) at org.apache.spark.streaming.dstream.DStream.getOrCompute(DStream.scala:331) at org.apache.spark.streaming.dstream.FilteredDStream.compute(FilteredDStream.scala:36) at org.apache.spark.streaming.dstream.DStream$$anonfun$getOrCompute$1$$anonfun$1$$anonfun$apply$7.apply(DStream.scala:342) at org.apache.spark.streaming.dstream.DStream$$anonfun$getOrCompute$1$$anonfun$1$$anonfun$apply$7.apply(DStream.scala:342) at scala.util.DynamicVariable.withValue(DynamicVariable.scala:58) at org.apache.spark.streaming.dstream.DStream$$anonfun$getOrCompute$1$$anonfun$1.apply(DStream.scala:341) at 
org.apache.spark.streaming.dstream.DStream$$anonfun$getOrCompute$1$$anonfun$1.apply(DStream.scala:341) at org.apache.spark.streaming.dstream.DStream.createRDDWithLocalProperties(DStream.scala:416) at org.apache.spark.streaming.dstream.DStream$$anonfun$getOrCompute$1.apply(DStream.scala:336) at org.apache.spark.streaming.dstream.DStream$$anonfun$getOrCompute$1.apply(DStream.scala:334) at scala.Option.orElse(Option.scala:289) at org.apache.spark.streaming.dstream.DStream.getOrCompute(DStream.scala:331) at org.apache.spark.streaming.dstream.MappedDStream.compute(MappedDStream.scala:36) at
[jira] [Created] (HADOOP-13857) S3AUtils.translateException to map (wrapped) InterruptedExceptions to InterruptedIOEs
Steve Loughran created HADOOP-13857: --- Summary: S3AUtils.translateException to map (wrapped) InterruptedExceptions to InterruptedIOEs Key: HADOOP-13857 URL: https://issues.apache.org/jira/browse/HADOOP-13857 Project: Hadoop Common Issue Type: Sub-task Components: fs/s3 Affects Versions: 2.8.0 Reporter: Steve Loughran Assignee: Steve Loughran Currently {{S3AUtils.translateException}} doesn't recognise interruptions; it just sees an AmazonClientException chain which is then relayed up. Proposed: look for an {{InterruptedException}} at the base of the chain of exceptions, and map it to an {{InterruptedIOException}} -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-13856) FileSystem.rename(final Path src, final Path dst, final Rename... options) to become public; specified, tested
[ https://issues.apache.org/jira/browse/HADOOP-13856?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15715183#comment-15715183 ] Steve Loughran commented on HADOOP-13856: - Requires # base method to become public # all subclassed implementations to become public (HDFS, ADL (Hadoop 3)). # FS spec to cover it. As it's explicitly a more consistent rename(), this should be easier. # Add contract tests. # ViewFS to relay it. Filesystems which currently swallow exceptions in rename (S3A, WASB) can expose the IOExceptions instead; this can take a bit of refactoring similar to HADOOP-13823 and S3A; we may even want to make that {{RenameFailedException}} with its return code something inside {{hadoop-common/org.apache.hadoop.fs}} > FileSystem.rename(final Path src, final Path dst, final Rename... options) to > become public; specified, tested > -- > > Key: HADOOP-13856 > URL: https://issues.apache.org/jira/browse/HADOOP-13856 > Project: Hadoop Common > Issue Type: Improvement > Components: fs/s3 >Affects Versions: 2.8.0 >Reporter: Steve Loughran > > A lot of code within Hadoop (e.g. committers, filesystem) and downstream > (Hive, Spark) doesn't know what to do when rename() returns false, as it can > be a sign of nothing important, or something major. > In contrast, {{rename(final Path src, final Path dst, final Rename... > options)}} has stricter semantics and throws up all exceptions to be caught > or relayed by callers. Yet it cannot be used as it's scoped at {{protected}} > and tagged as {{@Deprecated}}. > If it was made public then it could be used in committers and elsewhere; if > we backport the making of it public, then life will be even better -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
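The ambiguity of a false return is the motivation here: until the exception-raising variant is public, callers must upgrade the boolean themselves. A minimal caller-side shim looks like the following; note that {{Fs}} and {{renameOrThrow}} are hypothetical stand-ins for illustration, not the real {{FileSystem}} API.

```java
import java.io.IOException;

// Sketch: turn rename()'s ambiguous false return into an exception the
// caller can catch or relay, which is what making rename/3 public avoids.
public class RenameHelper {

  /** Hypothetical stand-in for the two-argument FileSystem.rename signature. */
  interface Fs {
    boolean rename(String src, String dst) throws IOException;
  }

  /** Upgrade a false return into an IOException with some context. */
  static void renameOrThrow(Fs fs, String src, String dst) throws IOException {
    if (!fs.rename(src, dst)) {
      throw new IOException("rename(" + src + ", " + dst + ") returned false");
    }
  }

  public static void main(String[] args) throws IOException {
    // A rename that reports success passes through silently.
    renameOrThrow((src, dst) -> true, "src", "dst");
    System.out.println("rename succeeded");
  }
}
```

The shim still cannot say *why* the rename failed; only the stricter rename/3 contract, which raises the underlying exception, can do that.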
[jira] [Created] (HADOOP-13856) FileSystem.rename(final Path src, final Path dst, final Rename... options) to become public; specified, tested
Steve Loughran created HADOOP-13856: --- Summary: FileSystem.rename(final Path src, final Path dst, final Rename... options) to become public; specified, tested Key: HADOOP-13856 URL: https://issues.apache.org/jira/browse/HADOOP-13856 Project: Hadoop Common Issue Type: Improvement Components: fs/s3 Affects Versions: 2.8.0 Reporter: Steve Loughran A lot of code within Hadoop (e.g. committers, filesystem) and downstream (Hive, Spark) doesn't know what to do when rename() returns false, as it can be a sign of nothing important, or something major. In contrast, {{rename(final Path src, final Path dst, final Rename... options)}} has stricter semantics and throws up all exceptions to be caught or relayed by callers. Yet it cannot be used as it's scoped at {{protected}} and tagged as {{@Deprecated}}. If it was made public then it could be used in committers and elsewhere; if we backport the making of it public, then life will be even better -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-13846) S3A to implement rename(final Path src, final Path dst, final Rename... options)
[ https://issues.apache.org/jira/browse/HADOOP-13846?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15715152#comment-15715152 ] Steve Loughran commented on HADOOP-13846: - + rename/3 is also marked as protected; we will actually have to make it public. ViewFS will need to implement it too. > S3A to implement rename(final Path src, final Path dst, final Rename... > options) > > > Key: HADOOP-13846 > URL: https://issues.apache.org/jira/browse/HADOOP-13846 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Affects Versions: 2.8.0 >Reporter: Steve Loughran > > S3a now raises exceptions on invalid rename operations, but these get lost. I > plan to use them in my s3guard committer HADOOP-13786. > Rather than just make innerRename() private, S3A could implement > {{FileSystem.rename(final Path src, final Path dst, final Rename... > options)}} and so have an exception-raising rename which can be called > without going more into the internals. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-13855) Fix a couple of the s3a statistic names to be consistent with the rest
[ https://issues.apache.org/jira/browse/HADOOP-13855?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15714904#comment-15714904 ] Hadoop QA commented on HADOOP-13855: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 16s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 15s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 20s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 13s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 25s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 27s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 29s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 14s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 20s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 16s{color} | {color:green} the 
patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 16s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 12s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 23s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 12s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 37s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 11s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 22s{color} | {color:green} hadoop-aws in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 18s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 13m 47s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:a9ad5d6 | | JIRA Issue | HADOOP-13855 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12841458/HADOOP-13855-001.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux 2a893548cf60 3.13.0-95-generic #142-Ubuntu SMP Fri Aug 12 17:00:09 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / c87b3a4 | | Default Java | 1.8.0_111 | | findbugs | v3.0.0 | | Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/11183/testReport/ | | modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws | | Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/11183/console | | Powered by | Apache Yetus 0.4.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > Fix a couple of the s3a statistic names to be consistent with the rest > -- > > Key: HADOOP-13855 > URL: https://issues.apache.org/jira/browse/HADOOP-13855 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Affects Versions: 2.8.0 >Reporter: Steve Loughran >Assignee: Steve Loughran > Attachments: HADOOP-13855-001.patch > > > The S3a
[jira] [Updated] (HADOOP-13855) Fix a couple of the s3a statistic names to be consistent with the rest
[ https://issues.apache.org/jira/browse/HADOOP-13855?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran updated HADOOP-13855: Status: Patch Available (was: Open) > Fix a couple of the s3a statistic names to be consistent with the rest > -- > > Key: HADOOP-13855 > URL: https://issues.apache.org/jira/browse/HADOOP-13855 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Affects Versions: 2.8.0 >Reporter: Steve Loughran >Assignee: Steve Loughran > Attachments: HADOOP-13855-001.patch > > > The S3a streamOpened and streamClosed statistics are camel case, rather than > stream_opened and stream_closed, the way the others are. Fix. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-13855) Fix a couple of the s3a statistic names to be consistent with the rest
[ https://issues.apache.org/jira/browse/HADOOP-13855?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran updated HADOOP-13855: Attachment: HADOOP-13855-001.patch Patch 001: renames the properties. No, no tests. I'd like this in for 2.8, because we haven't shipped with these properties published as JMX or test run entries yet; it's not too late to change them > Fix a couple of the s3a statistic names to be consistent with the rest > -- > > Key: HADOOP-13855 > URL: https://issues.apache.org/jira/browse/HADOOP-13855 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Affects Versions: 2.8.0 >Reporter: Steve Loughran >Assignee: Steve Loughran > Attachments: HADOOP-13855-001.patch > > > The S3a streamOpened and streamClosed statistics are camel case, rather than > stream_opened and stream_closed, the way the others are. Fix. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
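The rename itself is mechanical. A sketch of the camel-case to snake_case conversion involved; the helper name is illustrative and not part of the actual patch, which simply renames the two statistic keys:

```java
// Sketch: convert camelCase statistic names such as "streamOpened" to
// the snake_case form ("stream_opened") the other s3a statistics use.
public class StatNames {

  /** Convert a camelCase statistic name to snake_case. */
  static String toSnakeCase(String name) {
    StringBuilder sb = new StringBuilder();
    for (char c : name.toCharArray()) {
      if (Character.isUpperCase(c)) {
        sb.append('_').append(Character.toLowerCase(c));
      } else {
        sb.append(c);
      }
    }
    return sb.toString();
  }

  public static void main(String[] args) {
    System.out.println(toSnakeCase("streamOpened"));  // stream_opened
    System.out.println(toSnakeCase("streamClosed"));  // stream_closed
  }
}
```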
[jira] [Updated] (HADOOP-13855) Fix a couple of the s3a statistic names to be consistent with the rest
[ https://issues.apache.org/jira/browse/HADOOP-13855?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran updated HADOOP-13855: Summary: Fix a couple of the s3a statistic names to be consistent with the rest (was: fix a couple of the s3a statistic names to be consistent with the rest) > Fix a couple of the s3a statistic names to be consistent with the rest > -- > > Key: HADOOP-13855 > URL: https://issues.apache.org/jira/browse/HADOOP-13855 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Affects Versions: 2.8.0 >Reporter: Steve Loughran >Assignee: Steve Loughran > > The S3a streamOpened and streamClosed statistics are camel case, rather than > stream_opened and stream_closed, the way the others are. Fix. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Created] (HADOOP-13855) fix a couple of the s3a statistic names to be consistent with the rest
Steve Loughran created HADOOP-13855: --- Summary: fix a couple of the s3a statistic names to be consistent with the rest Key: HADOOP-13855 URL: https://issues.apache.org/jira/browse/HADOOP-13855 Project: Hadoop Common Issue Type: Sub-task Components: fs/s3 Affects Versions: 2.8.0 Reporter: Steve Loughran Assignee: Steve Loughran The S3a streamOpened and streamClosed statistics are camel case, rather than stream_opened and stream_closed, the way the others are. Fix. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-13852) hadoop build to allow hadoop version property to be explicitly set
[ https://issues.apache.org/jira/browse/HADOOP-13852?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15714717#comment-15714717 ] Steve Loughran commented on HADOOP-13852: - I guess the YARN one should be kept in sync. One thing that the patch does is let today's Spark releases be tested against Hadoop 3.x, otherwise it will need (a) an updated org.spark-project.hive JAR *and* a matching Spark release. > hadoop build to allow hadoop version property to be explicitly set > -- > > Key: HADOOP-13852 > URL: https://issues.apache.org/jira/browse/HADOOP-13852 > Project: Hadoop Common > Issue Type: New Feature > Components: build >Affects Versions: 3.0.0-alpha1, 3.0.0-alpha2 >Reporter: Steve Loughran >Assignee: Steve Loughran >Priority: Minor > Attachments: HADOOP-13852-001.patch > > > Hive (and transitively) Spark, won't start on Hadoop 3.x as the shim layer > rejects Hadoop v3. As a workaround pending a Hive fix, allow the build to > have the Hadoop version (currently set to pom.version) to be overridden > manually. > This will not affect version names of artifacts, merely the declared Hadoop > version visible in {{VersionInfo.getVersion()}} -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-13811) s3a: getFileStatus fails with com.amazonaws.AmazonClientException: Failed to sanitize XML document destined for handler class
[ https://issues.apache.org/jira/browse/HADOOP-13811?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15714708#comment-15714708 ] Steve Loughran commented on HADOOP-13811: - I don't see anything wrong with the build. Do a quick git pull to make sure all of the hadoop code is up to date, though I'm not confident here, just from the lines where the stack is coming from. One thing to consider: which process is being run here. That is, is there some other SPARK_HOME/bin being executed? Make sure that {{SPARK_HOME}} is unset, that there aren't other copies of spark-submit on the {{PATH}} > s3a: getFileStatus fails with com.amazonaws.AmazonClientException: Failed to > sanitize XML document destined for handler class > - > > Key: HADOOP-13811 > URL: https://issues.apache.org/jira/browse/HADOOP-13811 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Affects Versions: 2.8.0, 2.7.3 >Reporter: Steve Loughran >Assignee: Steve Loughran > > Sometimes, occasionally, getFileStatus() fails with a stack trace starting > with {{com.amazonaws.AmazonClientException: Failed to sanitize XML document > destined for handler class}}. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-13257) Improve Azure Data Lake contract tests.
[ https://issues.apache.org/jira/browse/HADOOP-13257?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15714596#comment-15714596 ]
Hadoop QA commented on HADOOP-13257:

| (/) *{color:green}+1 overall{color}* |
\\ \\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 13s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 20 new or modified test files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 11s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 11s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 3m 28s{color} | {color:green} hadoop-azure-datalake in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 19s{color} | {color:green} The patch does not generate ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 26m 48s{color} | {color:black} {color} |
\\ \\
|| Subsystem || Report/Notes ||
| Docker | Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | HADOOP-13257 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12841428/HADOOP-13257.002.patch |
| Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle |
| uname | Linux 19c50b5490fe 3.13.0-93-generic #140-Ubuntu SMP Mon Jul 18 21:21:05 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / c87b3a4 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/11182/testReport/ |
| modules | C: hadoop-tools/hadoop-azure-datalake U: hadoop-tools/hadoop-azure-datalake |
| Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/11182/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT http://yetus.apache.org |

This message was automatically generated.

> Improve Azure Data Lake contract tests.
> ---
>
> Key: HADOOP-13257
> URL: https://issues.apache.org/jira/browse/HADOOP-13257
> Project: Hadoop Common
> Issue Type: Sub-task
> Reporter: Chris Nauroth
> Assignee: Vishwajeet Dusane
> Attachments: HADOOP-13257.001.patch, HADOOP-13257.002.patch
>
>
> HADOOP-12875 provided the initial implementation of the FileSystem contract
> tests covering Azure Data Lake. This issue tracks subsequent improvements on
> those test suites for improved coverage and matching the specified semantics
> more closely.
[jira] [Commented] (HADOOP-13597) Switch KMS from Tomcat to Jetty
[ https://issues.apache.org/jira/browse/HADOOP-13597?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15714570#comment-15714570 ]
Hadoop QA commented on HADOOP-13597:

| (x) *{color:red}-1 overall{color}* |
\\ \\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 13s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 1m 48s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m 0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 1s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 1m 10s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Skipped patched modules with no Java source: hadoop-assemblies {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 10s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 47s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 21s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m 29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 10m 29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 2m 1s{color} | {color:green} root: The patch generated 0 new + 236 unchanged - 9 fixed = 236 total (was 245) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 1m 25s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} shellcheck {color} | {color:red} 0m 17s{color} | {color:red} The patch generated 5 new + 559 unchanged - 8 fixed = 564 total (was 567) {color} |
| {color:green}+1{color} | {color:green} shelldocs {color} | {color:green} 0m 22s{color} | {color:green} The patch generated 0 new + 342 unchanged - 4 fixed = 342 total (was 346) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 7s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Skipped patched modules with no Java source: hadoop-assemblies {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 28s{color} | {color:green} hadoop-assemblies in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 9m 38s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 31s{color} | {color:green} hadoop-kms in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 48s{color} | {color:green} The patch does not generate ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 90m 6s{color} | {color:black} {color} |
\\ \\
|| Subsystem || Report/Notes ||
| Docker | Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | HADOOP-13597 |
| JIRA Patch URL |
[jira] [Updated] (HADOOP-13257) Improve Azure Data Lake contract tests.
[ https://issues.apache.org/jira/browse/HADOOP-13257?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Vishwajeet Dusane updated HADOOP-13257:
---
Status: Patch Available (was: Open)

-- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-13835) Move Google Test Framework code from mapreduce to hadoop-common
[ https://issues.apache.org/jira/browse/HADOOP-13835?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15714448#comment-15714448 ]
Hadoop QA commented on HADOOP-13835:

| (x) *{color:red}-1 overall{color}* |
\\ \\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 10s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 15s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 9m 28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 9m 33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 1m 13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 4m 22s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 15s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m 0s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} cc {color} | {color:red} 10m 0s{color} | {color:red} root generated 25 new + 7 unchanged - 0 fixed = 32 total (was 7) {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 10m 0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 11m 12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 1m 13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 1s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 5m 30s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}110m 54s{color} | {color:red} root in the patch failed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 0m 37s{color} | {color:red} The patch generated 2 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black}180m 2s{color} | {color:black} {color} |
\\ \\
|| Reason || Tests ||
| Failed junit tests | hadoop.yarn.server.timeline.webapp.TestTimelineWebServices |
\\ \\
|| Subsystem || Report/Notes ||
| Docker | Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | HADOOP-13835 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12841412/HADOOP-13835.006.patch |
| Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit xml cc |
| uname | Linux c451f945ca3d 3.13.0-95-generic #142-Ubuntu SMP Fri Aug 12 17:00:09 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / c87b3a4 |
| Default Java | 1.8.0_111 |
| cc | https://builds.apache.org/job/PreCommit-HADOOP-Build/11180/artifact/patchprocess/diff-compile-cc-root.txt |
| unit | https://builds.apache.org/job/PreCommit-HADOOP-Build/11180/artifact/patchprocess/patch-unit-root.txt |
| Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/11180/testReport/ |
| asflicense | https://builds.apache.org/job/PreCommit-HADOOP-Build/11180/artifact/patchprocess/patch-asflicense-problems.txt |
| modules | C: hadoop-common-project/hadoop-common hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-nativetask . U: . |
| Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/11180/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT http://yetus.apache.org |
[jira] [Commented] (HADOOP-13257) Improve Azure Data Lake contract tests.
[ https://issues.apache.org/jira/browse/HADOOP-13257?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15714399#comment-15714399 ]
Vishwajeet Dusane commented on HADOOP-13257:

Thanks for the clarification, [~liuml07], and +1 for the comment on looping through {{FsAction.values()}}.
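The review suggestion above — driving the permission cases with a loop over {{FsAction.values()}} instead of hand-listing each combination — can be sketched as follows. This is a minimal, self-contained illustration: the enum here is a local stand-in mirroring the constants of Hadoop's org.apache.hadoop.fs.permission.FsAction, and the class and method names are hypothetical, not from the patch.

```java
// Sketch of a values()-driven permission loop. FsAction below is a local
// stand-in for org.apache.hadoop.fs.permission.FsAction (hypothetical setup,
// not the actual HADOOP-13257 test code).
public class FsActionLoopSketch {

    // Mirrors the eight constants of Hadoop's FsAction enum.
    enum FsAction { NONE, EXECUTE, WRITE, WRITE_EXECUTE, READ, READ_EXECUTE, READ_WRITE, ALL }

    // Counts the cases a values()-driven loop covers; a real contract test
    // would set the permission and assert filesystem behavior inside the
    // loop rather than just counting.
    static int coveredCases() {
        int count = 0;
        for (FsAction action : FsAction.values()) {
            count++;
        }
        return count;
    }

    public static void main(String[] args) {
        System.out.println("cases covered: " + coveredCases());
    }
}
```

The benefit of the loop is that a newly added enum constant is exercised automatically, with no test edits, whereas a hand-enumerated list silently falls out of date.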
[jira] [Updated] (HADOOP-13257) Improve Azure Data Lake contract tests.
[ https://issues.apache.org/jira/browse/HADOOP-13257?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Vishwajeet Dusane updated HADOOP-13257:
---
Attachment: HADOOP-13257.002.patch

Incorporated review comments from [~liuml07].