[jira] [Updated] (HADOOP-15887) Add an option to avoid writing data locally in Distcp
[ https://issues.apache.org/jira/browse/HADOOP-15887?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Tao Jie updated HADOOP-15887:
-----------------------------
    Attachment: HADOOP-15887.001.patch

> Add an option to avoid writing data locally in Distcp
> -----------------------------------------------------
>
>                 Key: HADOOP-15887
>                 URL: https://issues.apache.org/jira/browse/HADOOP-15887
>             Project: Hadoop Common
>          Issue Type: Improvement
>    Affects Versions: 2.8.2, 3.0.0
>            Reporter: Tao Jie
>            Assignee: Tao Jie
>            Priority: Major
>         Attachments: HADOOP-15887.001.patch
>
> When copying a large amount of data from one cluster to another via Distcp,
> with the Distcp jobs running in the target cluster, datanode disk usage
> becomes imbalanced, because the default placement policy chooses the local
> node to store the first replica.
> In https://issues.apache.org/jira/browse/HDFS-3702 we added a flag in
> DFSClient to avoid replicating to the local datanode. We can make use of
> this flag in Distcp.

--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Created] (HADOOP-15887) Add an option to avoid writing data locally in Distcp
Tao Jie created HADOOP-15887:
--------------------------------

             Summary: Add an option to avoid writing data locally in Distcp
                 Key: HADOOP-15887
                 URL: https://issues.apache.org/jira/browse/HADOOP-15887
             Project: Hadoop Common
          Issue Type: Improvement
    Affects Versions: 3.0.0, 2.8.2
            Reporter: Tao Jie
            Assignee: Tao Jie

When copying a large amount of data from one cluster to another via Distcp, with the Distcp jobs running in the target cluster, datanode disk usage becomes imbalanced, because the default placement policy chooses the local node to store the first replica.
In https://issues.apache.org/jira/browse/HDFS-3702 we added a flag in DFSClient to avoid replicating to the local datanode. We can make use of this flag in Distcp.
[jira] [Updated] (HADOOP-15878) website should have a list of CVEs w/impacted versions and guidance
[ https://issues.apache.org/jira/browse/HADOOP-15878?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Akira Ajisaka updated HADOOP-15878:
-----------------------------------
    Resolution: Fixed
        Status: Resolved  (was: Patch Available)

Committed. Thanks [~busbey]!

> website should have a list of CVEs w/impacted versions and guidance
> -------------------------------------------------------------------
>
>                 Key: HADOOP-15878
>                 URL: https://issues.apache.org/jira/browse/HADOOP-15878
>             Project: Hadoop Common
>          Issue Type: Task
>          Components: documentation, website
>            Reporter: Sean Busbey
>            Assignee: Sean Busbey
>            Priority: Minor
>         Attachments: HADOOP-15878.0.patch, HADOOP-15878.0.rendered.patch
>
> Our website should have a page with publicly disclosed CVEs listed. They
> should include the community's understanding of impacted and fixed versions.
> For a simple example, see what kafka does: https://kafka.apache.org/cve-list
[jira] [Updated] (HADOOP-15878) website should have a list of CVEs w/impacted versions and guidance
[ https://issues.apache.org/jira/browse/HADOOP-15878?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Akira Ajisaka updated HADOOP-15878:
-----------------------------------
    Component/s: website

> website should have a list of CVEs w/impacted versions and guidance
> -------------------------------------------------------------------
>
>                 Key: HADOOP-15878
>                 URL: https://issues.apache.org/jira/browse/HADOOP-15878
>             Project: Hadoop Common
>          Issue Type: Task
>          Components: documentation, website
>            Reporter: Sean Busbey
>            Assignee: Sean Busbey
>            Priority: Minor
>         Attachments: HADOOP-15878.0.patch, HADOOP-15878.0.rendered.patch
>
> Our website should have a page with publicly disclosed CVEs listed. They
> should include the community's understanding of impacted and fixed versions.
> For a simple example, see what kafka does: https://kafka.apache.org/cve-list
[jira] [Commented] (HADOOP-15886) Fix findbugs warnings in RegistryDNS.java
[ https://issues.apache.org/jira/browse/HADOOP-15886?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16668043#comment-16668043 ]

Akira Ajisaka commented on HADOOP-15886:
----------------------------------------

HADOOP-15821 moved the YARN registry to the Hadoop registry; however, the findbugs exclude sections for the YARN registry were not moved to Hadoop Common. This patch moves those sections.

> Fix findbugs warnings in RegistryDNS.java
> -----------------------------------------
>
>                 Key: HADOOP-15886
>                 URL: https://issues.apache.org/jira/browse/HADOOP-15886
>             Project: Hadoop Common
>          Issue Type: Bug
>            Reporter: Akira Ajisaka
>            Assignee: Akira Ajisaka
>            Priority: Major
>         Attachments: YARN-8956.01.patch
>
> {noformat}
> FindBugs :
>    module:hadoop-common-project/hadoop-registry
>    Exceptional return value of java.util.concurrent.ExecutorService.submit(Callable)
>    ignored in org.apache.hadoop.registry.server.dns.RegistryDNS.addNIOTCP(InetAddress, int)
>    At RegistryDNS.java:[line 900]
>    Exceptional return value of java.util.concurrent.ExecutorService.submit(Callable)
>    ignored in org.apache.hadoop.registry.server.dns.RegistryDNS.addNIOUDP(InetAddress, int)
>    At RegistryDNS.java:[line 926]
>    Exceptional return value of java.util.concurrent.ExecutorService.submit(Callable)
>    ignored in org.apache.hadoop.registry.server.dns.RegistryDNS.serveNIOTCP(ServerSocketChannel, InetAddress, int)
>    At RegistryDNS.java:[line 850]
> {noformat}
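For reference, the warning itself concerns ExecutorService.submit(), whose returned Future swallows any exception the task throws if the Future is discarded. The patch here moves the findbugs exclusion sections rather than changing RegistryDNS; the usual code-level remedy is sketched below with JDK-only code (not the RegistryDNS source):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;

public class SubmitDemo {
    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newSingleThreadExecutor();

        // FindBugs flags pool.submit(callable) when the returned Future is
        // discarded: an exception thrown by the task is captured inside the
        // Future and never observed. Keeping the Future makes it visible.
        Future<Integer> result = pool.submit(() -> 6 * 7);
        System.out.println(result.get());  // prints 42; get() rethrows task failures

        // For genuine fire-and-forget tasks, execute(Runnable) avoids the
        // warning: failures go to the thread's uncaught-exception handler.
        pool.execute(() -> System.out.println("served"));

        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.SECONDS);
    }
}
```

Calling `result.get()` is what surfaces a task failure as an ExecutionException; dropping the Future is what the checker objects to.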
[jira] [Updated] (HADOOP-15886) Fix findbugs warnings in RegistryDNS.java
[ https://issues.apache.org/jira/browse/HADOOP-15886?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Akira Ajisaka updated HADOOP-15886:
-----------------------------------
    Status: Patch Available  (was: Open)

> Fix findbugs warnings in RegistryDNS.java
> -----------------------------------------
>
>                 Key: HADOOP-15886
>                 URL: https://issues.apache.org/jira/browse/HADOOP-15886
>             Project: Hadoop Common
>          Issue Type: Bug
>            Reporter: Akira Ajisaka
>            Assignee: Akira Ajisaka
>            Priority: Major
>         Attachments: YARN-8956.01.patch
>
> {noformat}
> FindBugs :
>    module:hadoop-common-project/hadoop-registry
>    Exceptional return value of java.util.concurrent.ExecutorService.submit(Callable)
>    ignored in org.apache.hadoop.registry.server.dns.RegistryDNS.addNIOTCP(InetAddress, int)
>    At RegistryDNS.java:[line 900]
>    Exceptional return value of java.util.concurrent.ExecutorService.submit(Callable)
>    ignored in org.apache.hadoop.registry.server.dns.RegistryDNS.addNIOUDP(InetAddress, int)
>    At RegistryDNS.java:[line 926]
>    Exceptional return value of java.util.concurrent.ExecutorService.submit(Callable)
>    ignored in org.apache.hadoop.registry.server.dns.RegistryDNS.serveNIOTCP(ServerSocketChannel, InetAddress, int)
>    At RegistryDNS.java:[line 850]
> {noformat}
[jira] [Updated] (HADOOP-15886) Fix findbugs warnings in RegistryDNS.java
[ https://issues.apache.org/jira/browse/HADOOP-15886?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Akira Ajisaka updated HADOOP-15886:
-----------------------------------
    Target Version/s: 3.3.0

> Fix findbugs warnings in RegistryDNS.java
> -----------------------------------------
>
>                 Key: HADOOP-15886
>                 URL: https://issues.apache.org/jira/browse/HADOOP-15886
>             Project: Hadoop Common
>          Issue Type: Bug
>            Reporter: Akira Ajisaka
>            Assignee: Akira Ajisaka
>            Priority: Major
>         Attachments: YARN-8956.01.patch
>
> {noformat}
> FindBugs :
>    module:hadoop-common-project/hadoop-registry
>    Exceptional return value of java.util.concurrent.ExecutorService.submit(Callable)
>    ignored in org.apache.hadoop.registry.server.dns.RegistryDNS.addNIOTCP(InetAddress, int)
>    At RegistryDNS.java:[line 900]
>    Exceptional return value of java.util.concurrent.ExecutorService.submit(Callable)
>    ignored in org.apache.hadoop.registry.server.dns.RegistryDNS.addNIOUDP(InetAddress, int)
>    At RegistryDNS.java:[line 926]
>    Exceptional return value of java.util.concurrent.ExecutorService.submit(Callable)
>    ignored in org.apache.hadoop.registry.server.dns.RegistryDNS.serveNIOTCP(ServerSocketChannel, InetAddress, int)
>    At RegistryDNS.java:[line 850]
> {noformat}
[jira] [Moved] (HADOOP-15886) Fix findbugs warnings in RegistryDNS.java
[ https://issues.apache.org/jira/browse/HADOOP-15886?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Akira Ajisaka moved YARN-8956 to HADOOP-15886:
----------------------------------------------
        Key: HADOOP-15886  (was: YARN-8956)
    Project: Hadoop Common  (was: Hadoop YARN)

> Fix findbugs warnings in RegistryDNS.java
> -----------------------------------------
>
>                 Key: HADOOP-15886
>                 URL: https://issues.apache.org/jira/browse/HADOOP-15886
>             Project: Hadoop Common
>          Issue Type: Bug
>            Reporter: Akira Ajisaka
>            Assignee: Akira Ajisaka
>            Priority: Major
>         Attachments: YARN-8956.01.patch
>
> {noformat}
> FindBugs :
>    module:hadoop-common-project/hadoop-registry
>    Exceptional return value of java.util.concurrent.ExecutorService.submit(Callable)
>    ignored in org.apache.hadoop.registry.server.dns.RegistryDNS.addNIOTCP(InetAddress, int)
>    At RegistryDNS.java:[line 900]
>    Exceptional return value of java.util.concurrent.ExecutorService.submit(Callable)
>    ignored in org.apache.hadoop.registry.server.dns.RegistryDNS.addNIOUDP(InetAddress, int)
>    At RegistryDNS.java:[line 926]
>    Exceptional return value of java.util.concurrent.ExecutorService.submit(Callable)
>    ignored in org.apache.hadoop.registry.server.dns.RegistryDNS.serveNIOTCP(ServerSocketChannel, InetAddress, int)
>    At RegistryDNS.java:[line 850]
> {noformat}
[jira] [Commented] (HADOOP-15885) Add base64 (urlString) support to DTUtil
[ https://issues.apache.org/jira/browse/HADOOP-15885?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16667972#comment-16667972 ]

Hadoop QA commented on HADOOP-15885:
------------------------------------

(/) +1 overall

|| Vote || Subsystem || Runtime || Comment ||
|  0 | reexec | 0m 16s | Docker mode activated. |
|| || Prechecks || ||
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 1 new or modified test files. |
|| || trunk Compile Tests || ||
| +1 | mvninstall | 21m 32s | trunk passed |
| +1 | compile | 16m 44s | trunk passed |
| +1 | checkstyle | 0m 56s | trunk passed |
| +1 | mvnsite | 1m 17s | trunk passed |
| +1 | shadedclient | 14m 34s | branch has no errors when building and testing our client artifacts. |
| +1 | findbugs | 1m 39s | trunk passed |
| +1 | javadoc | 0m 59s | trunk passed |
|| || Patch Compile Tests || ||
| +1 | mvninstall | 0m 46s | the patch passed |
| +1 | compile | 16m 29s | the patch passed |
| +1 | javac | 16m 29s | the patch passed |
| -0 | checkstyle | 0m 55s | hadoop-common-project/hadoop-common: The patch generated 2 new + 15 unchanged - 0 fixed = 17 total (was 15) |
| +1 | mvnsite | 1m 15s | the patch passed |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | shadedclient | 12m 20s | patch has no errors when building and testing our client artifacts. |
| +1 | findbugs | 1m 48s | the patch passed |
| +1 | javadoc | 1m 7s | the patch passed |
|| || Other Tests || ||
| +1 | unit | 8m 18s | hadoop-common in the patch passed. |
| +1 | asflicense | 0m 42s | The patch does not generate ASF License warnings. |
| | | 101m 20s | |

|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HADOOP-15885 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12946118/HADOOP-15885.003.patch |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle |
| uname | Linux 9a177255ba86 3.13.0-153-generic #203-Ubuntu SMP Thu Jun 14 08:52:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 3655e57 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
| checkstyle | https://builds.apache.org/job/PreCommit-HADOOP-Build/15427/artifact/out/diff-checkstyle-hadoop-common-project_hadoop-common.txt |
| Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/15427/testReport/ |
| Max. process+thread count | 1449 (vs. ulimit of 1) |
| modules | C: hadoop-common-project/hadoop-common U: hadoop-common-project/hadoop-common |
| Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/15427/console |
| Powered by |
[jira] [Updated] (HADOOP-15866) Renamed HADOOP_SECURITY_GROUP_SHELL_COMMAND_TIMEOUT keys break compatibility
[ https://issues.apache.org/jira/browse/HADOOP-15866?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Akira Ajisaka updated HADOOP-15866:
-----------------------------------
    Fix Version/s: 2.9.2

Committed to branch-2 and branch-2.9 as well.

> Renamed HADOOP_SECURITY_GROUP_SHELL_COMMAND_TIMEOUT keys break compatibility
> ----------------------------------------------------------------------------
>
>                 Key: HADOOP-15866
>                 URL: https://issues.apache.org/jira/browse/HADOOP-15866
>             Project: Hadoop Common
>          Issue Type: Bug
>    Affects Versions: 2.10.0, 3.2.0, 3.1.1, 2.9.2, 3.0.4, 3.3.0
>            Reporter: Wei-Chiu Chuang
>            Assignee: Wei-Chiu Chuang
>            Priority: Blocker
>             Fix For: 2.10.0, 3.2.0, 2.9.2, 3.0.4, 3.3.0, 3.1.2
>
>         Attachments: HADOOP-15866.001.patch
>
> Our internal tool found that HADOOP-15523 breaks public API compatibility in
> class CommonConfigurationKeysPublic:
> || ||Change||Effect||
> |1|Field HADOOP_SECURITY_GROUP_SHELL_COMMAND_TIMEOUT_SECS has been renamed to HADOOP_SECURITY_GROUP_SHELL_COMMAND_TIMEOUT_KEY.|Recompilation of a client program may be terminated with the message: cannot find variable HADOOP_SECURITY_GROUP_SHELL_COMMAND_TIMEOUT_SECS in CommonConfigurationKeysPublic.|
> |2|Field HADOOP_SECURITY_GROUP_SHELL_COMMAND_TIMEOUT_SECS_DEFAULT has been renamed to HADOOP_SECURITY_GROUP_SHELL_COMMAND_TIMEOUT_DEFAULT.|Recompilation of a client program may be terminated with the message: cannot find variable HADOOP_SECURITY_GROUP_SHELL_COMMAND_TIMEOUT_SECS_DEFAULT in CommonConfigurationKeysPublic.|
> HADOOP_SECURITY_GROUP_SHELL_COMMAND_TIMEOUT_SECS_DEFAULT is used to instantiate a variable in ShellBasedGroupsMapping objects, and since almost all applications require groups mapping, this can cause a runtime error if an application loads multiple versions of the Hadoop library.
> IMO this is a blocker for 3.2.0
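The standard way to rename a public constant without breaking source compatibility is to keep the old name as a deprecated alias of the new one. A hypothetical sketch of that pattern; the class name is invented and the key string is an assumption (the real fields live in CommonConfigurationKeysPublic):

```java
// Hypothetical sketch of the deprecated-alias pattern, not the actual
// Hadoop class. The key string below is assumed for illustration.
public class ConfigKeys {
    // New names introduced by the rename.
    public static final String HADOOP_SECURITY_GROUP_SHELL_COMMAND_TIMEOUT_KEY =
            "hadoop.security.groups.shell.command.timeout";
    public static final long HADOOP_SECURITY_GROUP_SHELL_COMMAND_TIMEOUT_DEFAULT = 0L;

    // Old names kept as deprecated aliases so client source compiled against
    // the previous release still finds the fields it references.
    @Deprecated
    public static final String HADOOP_SECURITY_GROUP_SHELL_COMMAND_TIMEOUT_SECS =
            HADOOP_SECURITY_GROUP_SHELL_COMMAND_TIMEOUT_KEY;
    @Deprecated
    public static final long HADOOP_SECURITY_GROUP_SHELL_COMMAND_TIMEOUT_SECS_DEFAULT =
            HADOOP_SECURITY_GROUP_SHELL_COMMAND_TIMEOUT_DEFAULT;

    public static void main(String[] args) {
        // Both names resolve to the same value.
        System.out.println(HADOOP_SECURITY_GROUP_SHELL_COMMAND_TIMEOUT_SECS
                .equals(HADOOP_SECURITY_GROUP_SHELL_COMMAND_TIMEOUT_KEY));
    }
}
```

Note that because compile-time constants are inlined into client bytecode, the alias only restores source compatibility; that matches the report above, which is about recompilation failing rather than already-compiled code.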
[jira] [Updated] (HADOOP-15885) Add base64 (urlString) support to DTUtil
[ https://issues.apache.org/jira/browse/HADOOP-15885?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Íñigo Goiri updated HADOOP-15885:
---------------------------------
    Attachment: HADOOP-15885.003.patch

> Add base64 (urlString) support to DTUtil
> ----------------------------------------
>
>                 Key: HADOOP-15885
>                 URL: https://issues.apache.org/jira/browse/HADOOP-15885
>             Project: Hadoop Common
>          Issue Type: New Feature
>            Reporter: Íñigo Goiri
>            Assignee: Íñigo Goiri
>            Priority: Minor
>         Attachments: HADOOP-15885.000.patch, HADOOP-15885.001.patch,
>                      HADOOP-15885.002.patch, HADOOP-15885.003.patch
>
> HADOOP-12563 added a utility to manage Delegation Token files. Currently, it
> supports Java and Protobuf formats. However, when interacting with WebHDFS,
> we use base64. In addition, when printing a token, we also print the base64
> value. We should be able to import base64 tokens in the utility.
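For context, the base64 form mentioned here is the URL-safe encoding (no '+', '/', or padding) that Token#encodeToUrlString and Token#decodeFromUrlString produce and consume. A JDK-only sketch of that kind of round-trip; the method names and token bytes below are illustrative, not the DtUtil patch itself:

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class UrlBase64Demo {
    // Encode raw token bytes into a URL-safe base64 string, the general
    // shape of what a base64 import/export format would handle.
    static String encode(byte[] raw) {
        return Base64.getUrlEncoder().withoutPadding().encodeToString(raw);
    }

    // The JDK URL-safe decoder accepts unpadded input, so the round-trip
    // works without re-adding '=' characters.
    static byte[] decode(String urlString) {
        return Base64.getUrlDecoder().decode(urlString);
    }

    public static void main(String[] args) {
        byte[] raw = "fake-token-identifier".getBytes(StandardCharsets.UTF_8);
        String printable = encode(raw);
        System.out.println(printable);
        System.out.println(new String(decode(printable), StandardCharsets.UTF_8));
        // second line prints: fake-token-identifier
    }
}
```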
[jira] [Commented] (HADOOP-15781) S3A assumed role tests failing due to changed error text in AWS exceptions
[ https://issues.apache.org/jira/browse/HADOOP-15781?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16667818#comment-16667818 ]

Steve Loughran commented on HADOOP-15781:
-----------------------------------------

OK, this is fun. I'm seeing the same error on an older build *which hasn't had this SDK update*.

New hypothesis: AWS S3 changed its error text; our tests were brittle to it.

If this is true it means that
(a) I wasn't quite as incompetent as I believed.
(b) we're going to need to backport some/all of this

> S3A assumed role tests failing due to changed error text in AWS exceptions
> --------------------------------------------------------------------------
>
>                 Key: HADOOP-15781
>                 URL: https://issues.apache.org/jira/browse/HADOOP-15781
>             Project: Hadoop Common
>          Issue Type: Bug
>          Components: fs/s3, test
>    Affects Versions: 3.2.0
>         Environment: some of the fault-catching tests in {{ITestAssumeRole}}
>                      are failing as the SDK update of HADOOP-15642 changed the text. Fix the
>                      tests, perhaps by removing the text check entirely
>                      —it's clearly too brittle
>            Reporter: Steve Loughran
>            Assignee: Steve Loughran
>            Priority: Major
>             Fix For: 3.2.0
>
>         Attachments: HADOOP-15781-001.patch
>
> This is caused by HADOOP-15642 but I'd missed it because I'd been playing
> with assumed roles locally (restricting their rights) and mistook the
> failures for "steve's misconfigured the test role", not "the SDK
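The brittleness described here, asserting on provider-controlled message text, is commonly reduced by asserting on the exception type and treating the message as informational. Hadoop's own LambdaTestUtils.intercept helper follows roughly this shape; the sketch below is a simplified stand-alone version, not the ITestAssumeRole code:

```java
public class ExceptionAssertDemo {
    // Minimal stand-in for a test helper: run an action, require that it
    // throws the expected exception type, and return the exception for any
    // further (deliberately loose) inspection.
    interface Action { void run() throws Exception; }

    static <E extends Exception> E intercept(Class<E> expected, Action action) {
        try {
            action.run();
        } catch (Exception e) {
            if (expected.isInstance(e)) {
                return expected.cast(e);
            }
            throw new AssertionError("Wrong exception type: " + e, e);
        }
        throw new AssertionError("Expected " + expected.getName()
                + " but nothing was thrown");
    }

    public static void main(String[] args) {
        // Assert on the type, not the service-controlled message text, so a
        // provider-side rewording does not break the test.
        IllegalStateException e = intercept(IllegalStateException.class,
                () -> { throw new IllegalStateException("AWS changed this wording"); });
        System.out.println(e.getClass().getSimpleName());
    }
}
```

The message can still be logged for diagnostics; it just stops being part of the pass/fail condition.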
[jira] [Commented] (HADOOP-15855) Review hadoop credential doc, including object store details
[ https://issues.apache.org/jira/browse/HADOOP-15855?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16667771#comment-16667771 ]

Larry McCay commented on HADOOP-15855:
--------------------------------------

LGTM +1

> Review hadoop credential doc, including object store details
> ------------------------------------------------------------
>
>                 Key: HADOOP-15855
>                 URL: https://issues.apache.org/jira/browse/HADOOP-15855
>             Project: Hadoop Common
>          Issue Type: Improvement
>          Components: documentation, security
>    Affects Versions: 3.2.0
>            Reporter: Steve Loughran
>            Assignee: Steve Loughran
>            Priority: Minor
>         Attachments: HADOOP-15855-001.patch, HADOOP-15855-002.patch
>
> I've got some changes to make to the hadoop credentials API doc; some minor
> editing and examples of credential paths in object stores with some extra
> details (i.e. how you can't refer to a store from the same store URI)
> these examples need to come with unit tests to verify that the examples are
> correct, obviously
[jira] [Commented] (HADOOP-12437) Allow SecurityUtil to lookup alternate hostnames
[ https://issues.apache.org/jira/browse/HADOOP-12437?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16667744#comment-16667744 ]

Brahma Reddy Battula commented on HADOOP-12437:
-----------------------------------------------

Is this not also applicable to other processes (namenode/JN/RM/NM) when multiple hosts are configured?

> Allow SecurityUtil to lookup alternate hostnames
> -------------------------------------------------
>
>                 Key: HADOOP-12437
>                 URL: https://issues.apache.org/jira/browse/HADOOP-12437
>             Project: Hadoop Common
>          Issue Type: Bug
>          Components: net, security
>         Environment: multi-homed
>            Reporter: Arpit Agarwal
>            Assignee: Arpit Agarwal
>            Priority: Major
>             Fix For: 2.8.0, 3.0.0-alpha1
>
>         Attachments: HADOOP-12437.04.patch, HADOOP-12437.05.patch,
>                      HDFS-9109.01.patch, HDFS-9109.02.patch, HDFS-9109.03.patch
>
> The configuration setting {{dfs.datanode.dns.interface}} lets the DataNode
> select its hostname by doing a reverse lookup of IP addresses on the specific
> network interface. This does not work when {{/etc/hosts}} is used to set up
> alternate hostnames, since {{DNS#reverseDns}} only queries the DNS servers.
[jira] [Commented] (HADOOP-15885) Add base64 (urlString) support to DTUtil
[ https://issues.apache.org/jira/browse/HADOOP-15885?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16667692#comment-16667692 ]

Hadoop QA commented on HADOOP-15885:
------------------------------------

(x) -1 overall

|| Vote || Subsystem || Runtime || Comment ||
|  0 | reexec | 0m 15s | Docker mode activated. |
|| || Prechecks || ||
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 1 new or modified test files. |
|| || trunk Compile Tests || ||
| +1 | mvninstall | 20m 59s | trunk passed |
| +1 | compile | 16m 42s | trunk passed |
| +1 | checkstyle | 0m 53s | trunk passed |
| +1 | mvnsite | 1m 24s | trunk passed |
| +1 | shadedclient | 14m 26s | branch has no errors when building and testing our client artifacts. |
| +1 | findbugs | 1m 38s | trunk passed |
| +1 | javadoc | 0m 56s | trunk passed |
|| || Patch Compile Tests || ||
| +1 | mvninstall | 0m 49s | the patch passed |
| +1 | compile | 15m 23s | the patch passed |
| +1 | javac | 15m 23s | the patch passed |
| -0 | checkstyle | 0m 50s | hadoop-common-project/hadoop-common: The patch generated 2 new + 15 unchanged - 0 fixed = 17 total (was 15) |
| +1 | mvnsite | 1m 11s | the patch passed |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | shadedclient | 12m 33s | patch has no errors when building and testing our client artifacts. |
| -1 | findbugs | 2m 10s | hadoop-common-project/hadoop-common generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) |
| +1 | javadoc | 1m 17s | the patch passed |
|| || Other Tests || ||
| -1 | unit | 10m 13s | hadoop-common in the patch failed. |
| +1 | asflicense | 0m 50s | The patch does not generate ASF License warnings. |
| | | 102m 6s | |

|| Reason || Tests ||
| FindBugs | module:hadoop-common-project/hadoop-common |
| | Format-string method String.format(String, Object[]) called with format string "%n%s%n %s%n %s%n %s%n %s%n %s%n %s%n %s%n%n" wants 8 arguments but is given 9 in org.apache.hadoop.security.token.DtUtilShell.getCommandUsage() At DtUtilShell.java:[line 181] |
| Failed junit tests | hadoop.security.token.TestDtUtilShell |

|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HADOOP-15885 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12946083/HADOOP-15885.002.patch |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle |
| uname | Linux ac2cce030886 3.13.0-139-generic #188-Ubuntu SMP Tue Jan 9 14:43:09 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality
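The new FindBugs finding above is the format-string arity pattern: String.format silently ignores surplus arguments rather than failing, so a "wants 8 arguments but is given 9" mismatch never surfaces at runtime and only static analysis catches it. A JDK-only illustration of both directions of the mismatch (not the DtUtilShell code itself):

```java
public class FormatArgsDemo {
    public static void main(String[] args) {
        // Two %s placeholders but three arguments: String.format ignores the
        // surplus argument instead of failing, which is why FindBugs flags
        // this pattern (the extra value is silently lost).
        String s = String.format("%s %s", "one", "two", "three");
        System.out.println(s);  // prints: one two

        // The inverse mismatch (too few arguments) does fail at runtime.
        try {
            String.format("%s %s %s", "one", "two");
        } catch (java.util.MissingFormatArgumentException e) {
            System.out.println("too few arguments: " + e.getMessage());
        }
    }
}
```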
[jira] [Commented] (HADOOP-15855) Review hadoop credential doc, including object store details
[ https://issues.apache.org/jira/browse/HADOOP-15855?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16667685#comment-16667685 ]

Steve Loughran commented on HADOOP-15855:
-----------------------------------------

[~lmccay] have you had a chance to look @ this? thx

> Review hadoop credential doc, including object store details
> ------------------------------------------------------------
>
>                 Key: HADOOP-15855
>                 URL: https://issues.apache.org/jira/browse/HADOOP-15855
>             Project: Hadoop Common
>          Issue Type: Improvement
>          Components: documentation, security
>    Affects Versions: 3.2.0
>            Reporter: Steve Loughran
>            Assignee: Steve Loughran
>            Priority: Minor
>         Attachments: HADOOP-15855-001.patch, HADOOP-15855-002.patch
>
> I've got some changes to make to the hadoop credentials API doc; some minor
> editing and examples of credential paths in object stores with some extra
> details (i.e. how you can't refer to a store from the same store URI)
> these examples need to come with unit tests to verify that the examples are
> correct, obviously
[jira] [Assigned] (HADOOP-15227) add mapreduce.outputcommitter.factory.scheme.s3a to core-default
[ https://issues.apache.org/jira/browse/HADOOP-15227?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Steve Loughran reassigned HADOOP-15227:
---------------------------------------
    Assignee: (was: Steve Loughran)

> add mapreduce.outputcommitter.factory.scheme.s3a to core-default
> -----------------------------------------------------------------
>
>                 Key: HADOOP-15227
>                 URL: https://issues.apache.org/jira/browse/HADOOP-15227
>             Project: Hadoop Common
>          Issue Type: Sub-task
>          Components: fs/s3
>    Affects Versions: 3.1.0
>            Reporter: Steve Loughran
>            Priority: Blocker
>
> Need to add this property to core-default.xml. It's documented as being
> there, but it isn't.
> {code}
> <property>
>   <name>mapreduce.outputcommitter.factory.scheme.s3a</name>
>   <value>org.apache.hadoop.fs.s3a.commit.S3ACommitterFactory</value>
>   <description>
>     The committer factory to use when writing data to S3A filesystems.
>   </description>
> </property>
> {code}
[jira] [Assigned] (HADOOP-13371) S3A globber to use bulk listObject call over recursive directory scan
[ https://issues.apache.org/jira/browse/HADOOP-13371?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Steve Loughran reassigned HADOOP-13371:
---------------------------------------
    Assignee: (was: Steve Loughran)

> S3A globber to use bulk listObject call over recursive directory scan
> ----------------------------------------------------------------------
>
>                 Key: HADOOP-13371
>                 URL: https://issues.apache.org/jira/browse/HADOOP-13371
>             Project: Hadoop Common
>          Issue Type: Sub-task
>          Components: fs, fs/s3
>    Affects Versions: 2.8.0
>            Reporter: Steve Loughran
>            Priority: Major
>
> HADOOP-13208 produces O(1) listing of directory trees in
> {{FileSystem.listStatus}} calls, but doesn't do anything for
> {{FileSystem.globStatus()}}, which uses a completely different codepath, one
> which does a selective recursive scan by pattern matching as it goes down,
> filtering out those patterns which don't match. Cost is
> O(matching-directories) + cost of examining the files.
> It should be possible to do the glob status listing in S3A not through the
> filtered treewalk, but through a list + filter operation. This would be an
> O(files) lookup *before any filtering took place*.
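The proposed list-plus-filter approach can be sketched with the JDK's glob PathMatcher applied to the keys that a single bulk listing would return. This is an illustration of the idea with made-up keys, not the S3A globber implementation:

```java
import java.nio.file.FileSystems;
import java.nio.file.PathMatcher;
import java.nio.file.Paths;
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

public class GlobFilterDemo {
    public static void main(String[] args) {
        // Stand-in for the result of one bulk listObjects call: every key
        // under the prefix, with no per-directory round trips.
        List<String> listing = Arrays.asList(
                "logs/2018/10/30/part-0000",
                "logs/2018/10/31/part-0000",
                "logs/2018/10/31/part-0001",
                "data/2018/10/31/part-0000");

        // Client-side glob filtering replaces the recursive treewalk;
        // '*' does not cross path-separator boundaries.
        PathMatcher matcher = FileSystems.getDefault()
                .getPathMatcher("glob:logs/2018/10/31/*");
        List<String> matched = listing.stream()
                .filter(key -> matcher.matches(Paths.get(key)))
                .collect(Collectors.toList());

        matched.forEach(System.out::println);
        // logs/2018/10/31/part-0000
        // logs/2018/10/31/part-0001
    }
}
```

The trade-off noted in the issue shows up here directly: the bulk listing enumerates O(files) keys before filtering, whereas the treewalk prunes non-matching directories early.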
[jira] [Assigned] (HADOOP-13811) s3a: getFileStatus fails with com.amazonaws.AmazonClientException: Failed to sanitize XML document destined for handler class
[ https://issues.apache.org/jira/browse/HADOOP-13811?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran reassigned HADOOP-13811: --- Assignee: (was: Steve Loughran) > s3a: getFileStatus fails with com.amazonaws.AmazonClientException: Failed to > sanitize XML document destined for handler class > - > > Key: HADOOP-13811 > URL: https://issues.apache.org/jira/browse/HADOOP-13811 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Affects Versions: 2.8.0, 2.7.3 >Reporter: Steve Loughran >Priority: Major > > Sometimes, occasionally, getFileStatus() fails with a stack trace starting > with {{com.amazonaws.AmazonClientException: Failed to sanitize XML document > destined for handler class}}. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Assigned] (HADOOP-13059) S3a over-reacts to potentially transient network problems in its init() logic
[ https://issues.apache.org/jira/browse/HADOOP-13059?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran reassigned HADOOP-13059: --- Assignee: (was: Steve Loughran) > S3a over-reacts to potentially transient network problems in its init() logic > - > > Key: HADOOP-13059 > URL: https://issues.apache.org/jira/browse/HADOOP-13059 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Affects Versions: 2.8.0 >Reporter: Steve Loughran >Priority: Minor > Attachments: HADOOP-13059-001.patch > > > If there's a reason for s3a not being able to connect to AWS, then the > constructor fails, even if this is a potentially transient event. > This happens because the code to check for a bucket existing will relay the > exceptions. > The constructor should catch IOEs against the remote FS, downgrade to warn > and let the code continue; it may fail later, but it may also recover. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
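The behaviour proposed in the description — catch IOEs from the bucket-existence probe, downgrade to a warning, and let later operations fail or recover — can be sketched as follows. This is an illustrative sketch only; the class name, logging, and probe shape are assumptions, not the actual S3AFileSystem init code:

```java
import java.util.concurrent.Callable;

public class LenientInitSketch {
    private boolean bucketVerified = false;

    /**
     * Run the bucket-existence probe, but treat a failure as a
     * potentially transient event: log a warning and let construction
     * continue instead of failing the whole constructor.
     */
    public void verifyBucketExists(Callable<Boolean> probe) {
        try {
            bucketVerified = probe.call();
        } catch (Exception e) {
            // Possibly a transient network problem: warn, don't fail init.
            System.err.println("WARN: could not verify bucket: " + e);
            bucketVerified = false;
        }
    }

    public boolean isBucketVerified() {
        return bucketVerified;
    }
}
```

The filesystem may still fail on the first real operation, but it also gets the chance to recover if the network event was transient, which is the trade-off the issue argues for.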
[jira] [Assigned] (HADOOP-13973) S3A GET/HEAD requests failing: java.lang.IllegalStateException: Connection is not open/Connection pool shut down
[ https://issues.apache.org/jira/browse/HADOOP-13973?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran reassigned HADOOP-13973: --- Assignee: (was: Steve Loughran) > S3A GET/HEAD requests failing: java.lang.IllegalStateException: Connection is > not open/Connection pool shut down > > > Key: HADOOP-13973 > URL: https://issues.apache.org/jira/browse/HADOOP-13973 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Affects Versions: 2.8.0 > Environment: EC2 cluster >Reporter: Rajesh Balamohan >Priority: Major > > S3 requests failing with an error coming from Http client, > "java.lang.IllegalStateException: Connection is not open" > Some online discussion implies that this is related to shared connection pool > shutdown & fixed in http client 4.4+. Hadoop & AWS SDK use v 4.5.2 so the fix > is in, we just need to make sure the pool is being set up right. > There's a problem here of course: it may require moving to a later version of > the AWS SDK, with the consequences on jackson , as seen in HADOOP-13050. > And that's if there is a patched version out there -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Assigned] (HADOOP-14714) handle InternalError in bulk object delete through retries
[ https://issues.apache.org/jira/browse/HADOOP-14714?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran reassigned HADOOP-14714: --- Assignee: (was: Steve Loughran) > handle InternalError in bulk object delete through retries > -- > > Key: HADOOP-14714 > URL: https://issues.apache.org/jira/browse/HADOOP-14714 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Affects Versions: 2.8.0 >Reporter: Steve Loughran >Priority: Major > > There's some more detail appearing on HADOOP-11572 about the errors seen > here; sounds like its large fileset related (or just probability working > against you). Most importantly: retries may make it go away. > Proposed: implement a retry policy. > Issue: delete is not idempotent, not if someone else adds things. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
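A retry policy of the kind proposed here can be sketched as a generic bounded-retry helper. The helper below is an assumption-laden sketch, not Hadoop's actual retry machinery, and it deliberately carries the issue's caveat in a comment: bulk delete is not idempotent if another writer adds objects under the same keys, so a real policy must decide whether a retry is safe.

```java
import java.util.concurrent.Callable;

public class RetrySketch {
    /**
     * Retry an operation a bounded number of times with linear backoff.
     * Caveat from the issue: delete is not idempotent when someone else
     * adds entries concurrently, so callers must only use this where a
     * repeated attempt is acceptable.
     */
    public static <T> T withRetries(Callable<T> op, int maxAttempts,
                                    long sleepMillis) throws Exception {
        Exception last = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                return op.call();
            } catch (Exception e) {
                last = e;
                if (attempt < maxAttempts) {
                    // Linear backoff before the next attempt.
                    Thread.sleep(sleepMillis * attempt);
                }
            }
        }
        throw last;
    }
}
```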
[jira] [Commented] (HADOOP-15782) Clarify committers.md around v2 failure handling
[ https://issues.apache.org/jira/browse/HADOOP-15782?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16667656#comment-16667656 ] Steve Loughran commented on HADOOP-15782: - BTW, do you have a patch for this? > Clarify committers.md around v2 failure handling > > > Key: HADOOP-15782 > URL: https://issues.apache.org/jira/browse/HADOOP-15782 > Project: Hadoop Common > Issue Type: Bug > Components: documentation >Affects Versions: 3.1.0, 3.1.1 >Reporter: Gera Shegalov >Priority: Major > > The doc file > {{hadoop-tools/hadoop-aws/src/site/markdown/tools/hadoop-aws/committers.md}} > refers to the default file output committer (v2) as not supporting job and > task recovery throughout the doc: > {quote}or just by rerunning everything (The "v2" algorithm and Spark). > {quote} > This is incorrect. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-15885) Add base64 (urlString) support to DTUtil
[ https://issues.apache.org/jira/browse/HADOOP-15885?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16667571#comment-16667571 ] Íñigo Goiri commented on HADOOP-15885: -- Added documentation and fixed the unit test issue on Windows (I had to output the URI instead of doing the old construct). > Add base64 (urlString) support to DTUtil > > > Key: HADOOP-15885 > URL: https://issues.apache.org/jira/browse/HADOOP-15885 > Project: Hadoop Common > Issue Type: New Feature >Reporter: Íñigo Goiri >Assignee: Íñigo Goiri >Priority: Minor > Attachments: HADOOP-15885.000.patch, HADOOP-15885.001.patch, > HADOOP-15885.002.patch > > > HADOOP-12563 added a utility to manage Delegation Token files. Currently, it > supports Java and Protobuf formats. However, When interacting with WebHDFS, > we use base64. In addition, when printing a token, we also print the base64 > value. We should be able to import base64 tokens in the utility. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-15885) Add base64 (urlString) support to DTUtil
[ https://issues.apache.org/jira/browse/HADOOP-15885?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Íñigo Goiri updated HADOOP-15885: - Attachment: HADOOP-15885.002.patch > Add base64 (urlString) support to DTUtil > > > Key: HADOOP-15885 > URL: https://issues.apache.org/jira/browse/HADOOP-15885 > Project: Hadoop Common > Issue Type: New Feature >Reporter: Íñigo Goiri >Assignee: Íñigo Goiri >Priority: Minor > Attachments: HADOOP-15885.000.patch, HADOOP-15885.001.patch, > HADOOP-15885.002.patch > > > HADOOP-12563 added a utility to manage Delegation Token files. Currently, it > supports Java and Protobuf formats. However, When interacting with WebHDFS, > we use base64. In addition, when printing a token, we also print the base64 > value. We should be able to import base64 tokens in the utility. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-15885) Add base64 (urlString) support to DTUtil
[ https://issues.apache.org/jira/browse/HADOOP-15885?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Íñigo Goiri updated HADOOP-15885: - Attachment: HADOOP-15885.001.patch > Add base64 (urlString) support to DTUtil > > > Key: HADOOP-15885 > URL: https://issues.apache.org/jira/browse/HADOOP-15885 > Project: Hadoop Common > Issue Type: New Feature >Reporter: Íñigo Goiri >Priority: Minor > Attachments: HADOOP-15885.000.patch, HADOOP-15885.001.patch > > > HADOOP-12563 added a utility to manage Delegation Token files. Currently, it > supports Java and Protobuf formats. However, When interacting with WebHDFS, > we use base64. In addition, when printing a token, we also print the base64 > value. We should be able to import base64 tokens in the utility. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-15885) Add base64 (urlString) support to DTUtil
[ https://issues.apache.org/jira/browse/HADOOP-15885?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Íñigo Goiri updated HADOOP-15885: - Assignee: Íñigo Goiri Status: Patch Available (was: Open) > Add base64 (urlString) support to DTUtil > > > Key: HADOOP-15885 > URL: https://issues.apache.org/jira/browse/HADOOP-15885 > Project: Hadoop Common > Issue Type: New Feature >Reporter: Íñigo Goiri >Assignee: Íñigo Goiri >Priority: Minor > Attachments: HADOOP-15885.000.patch, HADOOP-15885.001.patch > > > HADOOP-12563 added a utility to manage Delegation Token files. Currently, it > supports Java and Protobuf formats. However, When interacting with WebHDFS, > we use base64. In addition, when printing a token, we also print the base64 > value. We should be able to import base64 tokens in the utility. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-15885) Add base64 (urlString) support to DTUtil
[ https://issues.apache.org/jira/browse/HADOOP-15885?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16667517#comment-16667517 ] Íñigo Goiri commented on HADOOP-15885: -- I added [^HADOOP-15885.000.patch] following the import concept where we run: {code} dtutil import {code} I need to add the unit tests but it looks like they are broken in Windows (issues with the filename format), so I need to fix that first. > Add base64 (urlString) support to DTUtil > > > Key: HADOOP-15885 > URL: https://issues.apache.org/jira/browse/HADOOP-15885 > Project: Hadoop Common > Issue Type: New Feature >Reporter: Íñigo Goiri >Priority: Minor > Attachments: HADOOP-15885.000.patch > > > HADOOP-12563 added a utility to manage Delegation Token files. Currently, it > supports Java and Protobuf formats. However, When interacting with WebHDFS, > we use base64. In addition, when printing a token, we also print the base64 > value. We should be able to import base64 tokens in the utility. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-15885) Add base64 (urlString) support to DTUtil
[ https://issues.apache.org/jira/browse/HADOOP-15885?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Íñigo Goiri updated HADOOP-15885: - Attachment: HADOOP-15885.000.patch > Add base64 (urlString) support to DTUtil > > > Key: HADOOP-15885 > URL: https://issues.apache.org/jira/browse/HADOOP-15885 > Project: Hadoop Common > Issue Type: New Feature >Reporter: Íñigo Goiri >Priority: Minor > Attachments: HADOOP-15885.000.patch > > > HADOOP-12563 added a utility to manage Delegation Token files. Currently, it > supports Java and Protobuf formats. However, When interacting with WebHDFS, > we use base64. In addition, when printing a token, we also print the base64 > value. We should be able to import base64 tokens in the utility. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-15339) Support additional key/value propereties in JMX bean registration
[ https://issues.apache.org/jira/browse/HADOOP-15339?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16667500#comment-16667500 ] Anu Engineer commented on HADOOP-15339: --- +1, on this change. thanks > Support additional key/value propereties in JMX bean registration > - > > Key: HADOOP-15339 > URL: https://issues.apache.org/jira/browse/HADOOP-15339 > Project: Hadoop Common > Issue Type: Improvement > Components: common >Reporter: Elek, Marton >Assignee: Elek, Marton >Priority: Major > Fix For: 3.2.0 > > Attachments: HADOOP-15339-branch-3.1.004.patch, > HADOOP-15339.001.patch, HADOOP-15339.002.patch, HADOOP-15339.003.patch > > > org.apache.hadoop.metrics2.util.MBeans.register is a utility function to > register objects to the JMX registry with a given name prefix and name. > JMX supports any additional key value pairs which could be part of the > address of the jmx bean. For example: > _java.lang:type=MemoryManager,name=CodeCacheManager_ > Using this method we can query a group of mbeans, for example we can add the > same tag to similar mbeans from namenode and datanode. > This patch adds a small modification to support custom key value pairs and > also introduces a new unit test for the MBeans utility, which was missing until now. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-15885) Add base64 (urlString) support to DTUtil
[ https://issues.apache.org/jira/browse/HADOOP-15885?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16667429#comment-16667429 ] Íñigo Goiri commented on HADOOP-15885: -- I'm not sure what would be the interface here but currently we have: {code} dtutil append filename1 filename2 filenameoutput {code} One option would be to just support files with one base64 DT per line, so it would be adding: {code} [-format (java|protobuf|base64)] {code} Another option would be to follow what Azure has been doing with {{org.apache.hadoop.fs.azure.security.TokenUtils}} and just add an import option which would add the base64 token into a DT file. [~mattpaduano], you have been working a bunch on the DTUtil; any thoughts on this? > Add base64 (urlString) support to DTUtil > > > Key: HADOOP-15885 > URL: https://issues.apache.org/jira/browse/HADOOP-15885 > Project: Hadoop Common > Issue Type: New Feature >Reporter: Íñigo Goiri >Priority: Minor > > HADOOP-12563 added a utility to manage Delegation Token files. Currently, it > supports Java and Protobuf formats. However, When interacting with WebHDFS, > we use base64. In addition, when printing a token, we also print the base64 > value. We should be able to import base64 tokens in the utility. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
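For reference, the base64 form under discussion is a URL-safe encoding of the token's serialized bytes, so it can be pasted into a WebHDFS URL. A JDK-only sketch of the round trip is below; Hadoop's real Token.encodeToUrlString/decodeFromUrlString use their own codec, so this is an approximation of the format, not the actual dtutil code:

```java
import java.util.Base64;

public class TokenBase64Sketch {
    /**
     * URL-safe base64 without padding: no '+', '/' or '=' characters,
     * so the result is safe to embed in a WebHDFS URL.
     */
    public static String encodeToUrlString(byte[] tokenBytes) {
        return Base64.getUrlEncoder().withoutPadding()
                .encodeToString(tokenBytes);
    }

    public static byte[] decodeFromUrlString(String s) {
        // The URL decoder accepts unpadded input.
        return Base64.getUrlDecoder().decode(s);
    }
}
```

An import subcommand along the lines proposed above would decode such a string and append the resulting token to a DT file in one of the formats dtutil already writes.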
[jira] [Created] (HADOOP-15885) Add base64 (urlString) support to DTUtil
Íñigo Goiri created HADOOP-15885: Summary: Add base64 (urlString) support to DTUtil Key: HADOOP-15885 URL: https://issues.apache.org/jira/browse/HADOOP-15885 Project: Hadoop Common Issue Type: New Feature Reporter: Íñigo Goiri HADOOP-12563 added a utility to manage Delegation Token files. Currently, it supports Java and Protobuf formats. However, when interacting with WebHDFS, we use base64. In addition, when printing a token, we also print the base64 value. We should be able to import base64 tokens in the utility. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-15865) ConcurrentModificationException in Configuration.overlay() method
[ https://issues.apache.org/jira/browse/HADOOP-15865?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16667410#comment-16667410 ] Oleksandr Shevchenko commented on HADOOP-15865: --- Could someone review the attached changes? Thanks. > ConcurrentModificationException in Configuration.overlay() method > - > > Key: HADOOP-15865 > URL: https://issues.apache.org/jira/browse/HADOOP-15865 > Project: Hadoop Common > Issue Type: Bug >Reporter: Oleksandr Shevchenko >Assignee: Oleksandr Shevchenko >Priority: Major > Attachments: HADOOP-15865.001.patch > > > Configuration.overlay() is not thread-safe and can be the cause of > ConcurrentModificationException since we use iteration over Properties > object. > {code} > private void overlay(Properties to, Properties from) { > for (Map.Entry<Object, Object> entry : from.entrySet()) { > to.put(entry.getKey(), entry.getValue()); > } > } > {code} > Properties class is thread-safe but iterator is not. We should manually > synchronize on the returned set of entries which we use for iteration. 
> We faced with ResourceManger fails during recovery caused by > ConcurrentModificationException: > {noformat} > 2018-10-12 08:00:56,968 INFO org.apache.hadoop.service.AbstractService: > Service ResourceManager failed in state STARTED; cause: > java.util.ConcurrentModificationException > java.util.ConcurrentModificationException > at java.util.Hashtable$Enumerator.next(Hashtable.java:1383) > at org.apache.hadoop.conf.Configuration.overlay(Configuration.java:2801) > at org.apache.hadoop.conf.Configuration.loadResource(Configuration.java:2696) > at > org.apache.hadoop.conf.Configuration.loadResources(Configuration.java:2632) > at org.apache.hadoop.conf.Configuration.getProps(Configuration.java:2528) > at org.apache.hadoop.conf.Configuration.get(Configuration.java:1062) > at > org.apache.hadoop.conf.Configuration.getStringCollection(Configuration.java:1914) > at > org.apache.hadoop.security.alias.CredentialProviderFactory.getProviders(CredentialProviderFactory.java:53) > at > org.apache.hadoop.conf.Configuration.getPasswordFromCredentialProviders(Configuration.java:2043) > at org.apache.hadoop.conf.Configuration.getPassword(Configuration.java:2023) > at > org.apache.hadoop.yarn.webapp.util.WebAppUtils.getPassword(WebAppUtils.java:452) > at > org.apache.hadoop.yarn.webapp.util.WebAppUtils.loadSslConfiguration(WebAppUtils.java:428) > at org.apache.hadoop.yarn.webapp.WebApps$Builder.start(WebApps.java:293) > at > org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.startWepApp(ResourceManager.java:1017) > at > org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.serviceStart(ResourceManager.java:1117) > at org.apache.hadoop.service.AbstractService.start(AbstractService.java:193) > at > org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.main(ResourceManager.java:1251) > 2018-10-12 08:00:56,968 INFO > org.apache.hadoop.yarn.server.resourcemanager.security.RMDelegationTokenSecretManager: > removing RMDelegation token with sequence number: 
3489914 > 2018-10-12 08:00:56,968 INFO > org.apache.hadoop.yarn.server.resourcemanager.recovery.RMStateStore: Removing > RMDelegationToken and SequenceNumber > 2018-10-12 08:00:56,968 INFO > org.apache.hadoop.yarn.server.resourcemanager.recovery.FileSystemRMStateStore: > Removing RMDelegationToken_3489914 > 2018-10-12 08:00:56,969 INFO org.apache.hadoop.ipc.Server: Stopping server on > 8032 > {noformat} -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
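The fix described in the report — hold a lock for the whole iteration, since Properties' individual methods are synchronized but enumeration over its entry set is not — can be sketched like this (a standalone sketch of the idea, not the actual Configuration patch):

```java
import java.util.Map;
import java.util.Properties;

public class OverlaySketch {
    /**
     * Copy every entry of 'from' into 'to'. Properties extends
     * Hashtable, so holding the monitor of 'from' for the whole loop
     * blocks concurrent put/remove calls that would otherwise trigger
     * ConcurrentModificationException mid-iteration.
     */
    public static void overlay(Properties to, Properties from) {
        synchronized (from) {
            for (Map.Entry<Object, Object> entry : from.entrySet()) {
                to.put(entry.getKey(), entry.getValue());
            }
        }
    }
}
```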
[jira] [Commented] (HADOOP-15339) Support additional key/value propereties in JMX bean registration
[ https://issues.apache.org/jira/browse/HADOOP-15339?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16667373#comment-16667373 ] Hadoop QA commented on HADOOP-15339: | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 22m 48s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 2 new or modified test files. {color} | || || || || {color:brown} branch-3.1 Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 28m 59s{color} | {color:green} branch-3.1 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 24m 51s{color} | {color:green} branch-3.1 passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 11s{color} | {color:green} branch-3.1 passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 45s{color} | {color:green} branch-3.1 passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 16m 39s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 9s{color} | {color:green} branch-3.1 passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 23s{color} | {color:green} branch-3.1 passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 8s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 22m 14s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 22m 14s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 59s{color} | {color:green} hadoop-common-project/hadoop-common: The patch generated 0 new + 0 unchanged - 3 fixed = 0 total (was 3) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 31s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 12m 10s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 45s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 57s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 9m 59s{color} | {color:green} hadoop-common in the patch passed. 
{color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 1m 2s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}151m 42s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:080e9d0 | | JIRA Issue | HADOOP-15339 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12946039/HADOOP-15339-branch-3.1.004.patch | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux 3e78285e1b7e 3.13.0-153-generic #203-Ubuntu SMP Thu Jun 14 08:52:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | branch-3.1 / 7dd8eaf | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_181 | | findbugs | v3.1.0-RC1 | | Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/15424/testReport/ | | Max. process+thread count | 1347 (vs. ulimit of 1) | | modules | C: hadoop-common-project/hadoop-common U: hadoop-common-project/hadoop-common | | Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/15424/console | | Powered by | Apache Yetus 0.8.0 http://yetus.apache.org | This message was automatically generated. > Support
[jira] [Commented] (HADOOP-13327) Add OutputStream + Syncable to the Filesystem Specification
[ https://issues.apache.org/jira/browse/HADOOP-13327?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16667361#comment-16667361 ] Hadoop QA commented on HADOOP-13327: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s{color} | {color:blue} Docker mode activated. {color} | | {color:red}-1{color} | {color:red} patch {color} | {color:red} 0m 30s{color} | {color:red} HADOOP-13327 does not apply to trunk. Rebase required? Wrong Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} | \\ \\ || Subsystem || Report/Notes || | JIRA Issue | HADOOP-13327 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12928014/HADOOP-13327-003.patch | | Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/15425/console | | Powered by | Apache Yetus 0.8.0 http://yetus.apache.org | This message was automatically generated. > Add OutputStream + Syncable to the Filesystem Specification > --- > > Key: HADOOP-13327 > URL: https://issues.apache.org/jira/browse/HADOOP-13327 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs >Affects Versions: 2.8.0 >Reporter: Steve Loughran >Assignee: Steve Loughran >Priority: Major > Attachments: HADOOP-13327-002.patch, HADOOP-13327-003.patch, > HADOOP-13327-branch-2-001.patch > > > Write down what a Filesystem output stream should do. While core the API is > defined in Java, that doesn't say what's expected about visibility, > durability, etc —and Hadoop Syncable interface is entirely ours to define. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-15339) Support additional key/value propereties in JMX bean registration
[ https://issues.apache.org/jira/browse/HADOOP-15339?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Elek, Marton updated HADOOP-15339: -- Attachment: HADOOP-15339-branch-3.1.004.patch > Support additional key/value propereties in JMX bean registration > - > > Key: HADOOP-15339 > URL: https://issues.apache.org/jira/browse/HADOOP-15339 > Project: Hadoop Common > Issue Type: Improvement > Components: common >Reporter: Elek, Marton >Assignee: Elek, Marton >Priority: Major > Fix For: 3.2.0 > > Attachments: HADOOP-15339-branch-3.1.004.patch, > HADOOP-15339.001.patch, HADOOP-15339.002.patch, HADOOP-15339.003.patch > > > org.apache.hadoop.metrics2.util.MBeans.register is a utility function to > register objects to the JMX registry with a given name prefix and name. > JMX supports any additional key value pairs which could be part the the > address of the jmx bean. For example: > _java.lang:type=MemoryManager,name=CodeCacheManager_ > Using this method we can query a group of mbeans, for example we can add the > same tag to similar mbeans from namenode and datanode. > This patch adds a small modification to support custom key value pairs and > also introduce a new unit test for MBeans utility which was missing until now. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-15339) Support additional key/value propereties in JMX bean registration
[ https://issues.apache.org/jira/browse/HADOOP-15339?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Elek, Marton updated HADOOP-15339: -- Status: Patch Available (was: Reopened) The branch could be cherry-picked but I re-uploaded the patch to get an actual jenkins response. > Support additional key/value propereties in JMX bean registration > - > > Key: HADOOP-15339 > URL: https://issues.apache.org/jira/browse/HADOOP-15339 > Project: Hadoop Common > Issue Type: Improvement > Components: common >Reporter: Elek, Marton >Assignee: Elek, Marton >Priority: Major > Fix For: 3.2.0 > > Attachments: HADOOP-15339-branch-3.1.004.patch, > HADOOP-15339.001.patch, HADOOP-15339.002.patch, HADOOP-15339.003.patch > > > org.apache.hadoop.metrics2.util.MBeans.register is a utility function to > register objects to the JMX registry with a given name prefix and name. > JMX supports any additional key value pairs which could be part the the > address of the jmx bean. For example: > _java.lang:type=MemoryManager,name=CodeCacheManager_ > Using this method we can query a group of mbeans, for example we can add the > same tag to similar mbeans from namenode and datanode. > This patch adds a small modification to support custom key value pairs and > also introduce a new unit test for MBeans utility which was missing until now. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-15339) Support additional key/value propereties in JMX bean registration
[ https://issues.apache.org/jira/browse/HADOOP-15339?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Elek, Marton updated HADOOP-15339: -- Target Version/s: 3.2.0, 3.1.2 (was: 3.2.0) > Support additional key/value propereties in JMX bean registration > - > > Key: HADOOP-15339 > URL: https://issues.apache.org/jira/browse/HADOOP-15339 > Project: Hadoop Common > Issue Type: Improvement > Components: common >Reporter: Elek, Marton >Assignee: Elek, Marton >Priority: Major > Fix For: 3.2.0 > > Attachments: HADOOP-15339-branch-3.1.004.patch, > HADOOP-15339.001.patch, HADOOP-15339.002.patch, HADOOP-15339.003.patch > > > org.apache.hadoop.metrics2.util.MBeans.register is a utility function to > register objects to the JMX registry with a given name prefix and name. > JMX supports any additional key value pairs which could be part the the > address of the jmx bean. For example: > _java.lang:type=MemoryManager,name=CodeCacheManager_ > Using this method we can query a group of mbeans, for example we can add the > same tag to similar mbeans from namenode and datanode. > This patch adds a small modification to support custom key value pairs and > also introduce a new unit test for MBeans utility which was missing until now. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Reopened] (HADOOP-15339) Support additional key/value propereties in JMX bean registration
[ https://issues.apache.org/jira/browse/HADOOP-15339?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Elek, Marton reopened HADOOP-15339: --- Since the commit we have been using this change from ozone/hdds, and it has worked well. This change is required for a working ozone/hdds web UI, as the shared code path tags the common JMX beans with generic key/value tags. I am reopening this issue and propose backporting it to branch-3.1 to make it easier to use hdds/ozone with older Hadoop versions. # It's a small change # Backward compatible # Safe to use (no issues during the last 6 months) # No conflicts for cherry-pick. > Support additional key/value properties in JMX bean registration > - > > Key: HADOOP-15339 > URL: https://issues.apache.org/jira/browse/HADOOP-15339 > Project: Hadoop Common > Issue Type: Improvement > Components: common >Reporter: Elek, Marton >Assignee: Elek, Marton >Priority: Major > Fix For: 3.2.0 > > Attachments: HADOOP-15339.001.patch, HADOOP-15339.002.patch, > HADOOP-15339.003.patch
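The kind of tagged registration HADOOP-15339 describes can be sketched with the plain JDK `javax.management` API. This is only an illustration of the underlying JMX mechanism, not the patch itself: the actual utility lives in `org.apache.hadoop.metrics2.util.MBeans`, and the class, interface, and tag names below are made up for the example.

```java
import java.lang.management.ManagementFactory;
import java.util.Hashtable;
import javax.management.MBeanServer;
import javax.management.ObjectName;
import javax.management.StandardMBean;

public class JmxTagExample {
    // Trivial management interface; the name is illustrative.
    public interface DemoMBean {
        int getValue();
    }

    // Register one MBean whose ObjectName carries an extra "tag" property,
    // then count the beans matching that tag via a property pattern.
    public static int registerAndCount() throws Exception {
        MBeanServer server = ManagementFactory.getPlatformMBeanServer();

        // The extra key/value pairs become part of the bean's address,
        // e.g. Hadoop:name=Bean1,service=Demo,tag=shared
        Hashtable<String, String> props = new Hashtable<>();
        props.put("service", "Demo");
        props.put("name", "Bean1");
        props.put("tag", "shared"); // custom tag shared across similar beans

        ObjectName name = new ObjectName("Hadoop", props);
        server.registerMBean(
                new StandardMBean((DemoMBean) () -> 42, DemoMBean.class), name);

        // Query every bean in the domain that carries the shared tag,
        // regardless of its service or name keys.
        return server.queryNames(new ObjectName("Hadoop:tag=shared,*"), null).size();
    }

    public static void main(String[] args) throws Exception {
        System.out.println(registerAndCount());
    }
}
```

The pattern query at the end is what makes the extra tags useful: beans registered by different components (e.g. namenode and datanode) with the same tag can be fetched in one `queryNames` call.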
[jira] [Commented] (HADOOP-15864) Job submitter / executor fail when SBN domain name cannot be resolved
[ https://issues.apache.org/jira/browse/HADOOP-15864?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16667065#comment-16667065 ] He Xiaoqiao commented on HADOOP-15864: -- {quote}a number of other callers of SecurityUtil.buildTokenService in YARN and MAPREDUCE and none seem to handle a null response value{quote} OMG, I will try to fix this issue in the next few days while keeping compatibility with YARN and the other components. Thanks [~jojochuang], [~wilfreds] again. > Job submitter / executor fail when SBN domain name cannot be resolved > --- > > Key: HADOOP-15864 > URL: https://issues.apache.org/jira/browse/HADOOP-15864 > Project: Hadoop Common > Issue Type: Bug >Reporter: He Xiaoqiao >Assignee: He Xiaoqiao >Priority: Critical > Fix For: 3.0.4, 3.3.0, 3.1.2, 3.2.1 > > Attachments: HADOOP-15864-branch.2.7.001.patch, > HADOOP-15864-branch.2.7.002.patch, HADOOP-15864.003.patch, > HADOOP-15864.branch.2.7.004.patch > > > Job submission and task execution fail if the Standby NameNode's domain name > cannot be resolved on HDFS HA with the DelegationToken feature. > The issue is triggered when creating a {{ConfiguredFailoverProxyProvider}} > instance, which invokes {{HAUtil.cloneDelegationTokenForLogicalUri}} in HA mode > with security enabled. In HDFS HA mode the UGI needs to include a separate token > for each NameNode in order to deal with Active-Standby switches; the two tokens' > content is of course the same. > However, #setTokenService in {{HAUtil.cloneDelegationTokenForLogicalUri}} > checks whether the NameNode address has been resolved; if not, it throws an > #IllegalArgumentException, and the job submitter / task executor fails. > HDFS-8068 and HADOOP-12125 tried to fix this, but I don't think the two tickets > resolve it completely. > Another question many people raise is why the NameNode domain name cannot be > resolved. There are many scenarios, for instance replacing a node after a > fault, or an occasional DNS refresh. 
Anyway, Standby NameNode failure > should not impact Hadoop cluster stability in my opinion. > a. code ref: org.apache.hadoop.security.SecurityUtil line373-386 > {code:java} > public static Text buildTokenService(InetSocketAddress addr) { > String host = null; > if (useIpForTokenService) { > if (addr.isUnresolved()) { // host has no ip address > throw new IllegalArgumentException( > new UnknownHostException(addr.getHostName()) > ); > } > host = addr.getAddress().getHostAddress(); > } else { > host = StringUtils.toLowerCase(addr.getHostName()); > } > return new Text(host + ":" + addr.getPort()); > } > {code} > b.exception log ref: > {code:xml} > at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:227) > at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) > at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) > at java.lang.Thread.run(Thread.java:745) > Caused by: java.io.IOException: Couldn't create proxy provider class > org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider > at > org.apache.hadoop.hdfs.NameNodeProxies.createFailoverProxyProvider(NameNodeProxies.java:515) > at > org.apache.hadoop.hdfs.NameNodeProxies.createProxy(NameNodeProxies.java:170) > at org.apache.hadoop.hdfs.DFSClient.(DFSClient.java:761) > at org.apache.hadoop.hdfs.DFSClient.(DFSClient.java:691) > at > org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:150) > at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2713) > at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:93) > at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2747) > at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2729) > at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:385) > at > org.apache.hadoop.fs.viewfs.ChRootedFileSystem.(ChRootedFileSystem.java:106) > at > 
org.apache.hadoop.fs.viewfs.ViewFileSystem$1.getTargetFileSystem(ViewFileSystem.java:178) > at > org.apache.hadoop.fs.viewfs.ViewFileSystem$1.getTargetFileSystem(ViewFileSystem.java:172) > at org.apache.hadoop.fs.viewfs.InodeTree.createLink(InodeTree.java:303) > at org.apache.hadoop.fs.viewfs.InodeTree.(InodeTree.java:377) > at > org.apache.hadoop.fs.viewfs.ViewFileSystem$1.(ViewFileSystem.java:172) > at > org.apache.hadoop.fs.viewfs.ViewFileSystem.initialize(ViewFileSystem.java:172) > at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2713) > at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:93) > at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2747) > at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2729) > at
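The failure mode in the quoted SecurityUtil code can be reproduced with a small self-contained mimic. Only `InetSocketAddress` and the exception types below come from the JDK; the method body paraphrases the quoted {{buildTokenService}} (with the {{useIpForTokenService}} flag turned into a parameter), so it is a sketch, not the real Hadoop class.

```java
import java.net.InetSocketAddress;
import java.net.UnknownHostException;

public class TokenServiceDemo {
    // Simplified mimic of org.apache.hadoop.security.SecurityUtil#buildTokenService,
    // quoted above; useIp stands in for the static useIpForTokenService flag.
    public static String buildTokenService(InetSocketAddress addr, boolean useIp) {
        String host;
        if (useIp) {
            if (addr.isUnresolved()) { // host has no IP address
                throw new IllegalArgumentException(
                        new UnknownHostException(addr.getHostName()));
            }
            host = addr.getAddress().getHostAddress();
        } else {
            host = addr.getHostName().toLowerCase();
        }
        return host + ":" + addr.getPort();
    }

    public static void main(String[] args) {
        // createUnresolved skips the DNS lookup entirely, modeling a standby
        // NameNode whose domain name cannot be resolved.
        InetSocketAddress sbn =
                InetSocketAddress.createUnresolved("standby-nn.example.com", 8020);

        try {
            buildTokenService(sbn, true); // IP-based token service
        } catch (IllegalArgumentException e) {
            System.out.println("throws: " + e.getCause().getClass().getSimpleName());
        }

        // A hostname-based token service needs no resolution, so the
        // unresolved standby address is tolerated.
        System.out.println(buildTokenService(sbn, false));
    }
}
```

This makes the report concrete: with IP-based token services, one unresolvable standby address is enough to abort proxy-provider creation, even though the hostname-based path would have produced a perfectly usable service string.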
[jira] [Updated] (HADOOP-14999) AliyunOSS: provide one asynchronous multi-part based uploading mechanism
[ https://issues.apache.org/jira/browse/HADOOP-14999?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Akira Ajisaka updated HADOOP-14999: --- Fix Version/s: (was: 2.9.2) > AliyunOSS: provide one asynchronous multi-part based uploading mechanism > > > Key: HADOOP-14999 > URL: https://issues.apache.org/jira/browse/HADOOP-14999 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/oss >Affects Versions: 3.0.0-beta1 >Reporter: Genmao Yu >Assignee: Genmao Yu >Priority: Major > Fix For: 2.10.0, 2.9.1, 3.2.0, 3.1.1, 3.0.3 > > Attachments: HADOOP-14999-branch-2.001.patch, > HADOOP-14999-branch-2.002.patch, HADOOP-14999.001.patch, > HADOOP-14999.002.patch, HADOOP-14999.003.patch, HADOOP-14999.004.patch, > HADOOP-14999.005.patch, HADOOP-14999.006.patch, HADOOP-14999.007.patch, > HADOOP-14999.008.patch, HADOOP-14999.009.patch, HADOOP-14999.010.patch, > HADOOP-14999.011.patch, asynchronous_file_uploading.pdf, > diff-between-patch7-and-patch8.txt > > > This mechanism is designed for uploading files in parallel and asynchronously: > - improve the performance of uploading files to the OSS server. First, the > mechanism splits the result into multiple small blocks and uploads them in parallel. > Producing the result and uploading the blocks then proceed asynchronously. > - avoid buffering too large a result on local disk. As an extreme > example, a task may output 100GB or even more; we would need > to write the whole 100GB to local disk and then upload it, which is > inefficient and limited by disk space. > This patch reuses {{SemaphoredDelegatingExecutor}} as the executor service and > depends on HADOOP-15039. > The attached {{asynchronous_file_uploading.pdf}} illustrates the difference > between the previous {{AliyunOSSOutputStream}} and > {{AliyunOSSBlockOutputStream}}, i.e. this asynchronous multi-part based > uploading mechanism. > 1. {{AliyunOSSOutputStream}}: we need to write the whole result to local > disk before we can upload it to OSS. 
This poses two problems: > - if the output file is too large, it can exhaust the local disk. > - if the output file is too large, the task will wait a long time to upload the result > to OSS before it finishes, wasting compute resources. > 2. {{AliyunOSSBlockOutputStream}}: we cut the task output into small blocks, > i.e. small local files, and each block is packaged into an upload task. > These tasks are submitted to {{SemaphoredDelegatingExecutor}}, > which uploads the blocks in parallel, greatly improving performance. > 3. Each task retries up to 3 times to upload its block to Aliyun OSS. If one of > those tasks fails, the whole file upload fails and we abort the > current upload.
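The scheme in points 2 and 3 can be sketched with a plain ExecutorService plus a Semaphore bounding the number of in-flight block uploads, in the spirit of {{SemaphoredDelegatingExecutor}}. This is a rough stand-in, not the patch's code: the class name, pool sizes, and the `uploadPart` stub are all illustrative.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.Semaphore;

public class BlockUploadSketch {
    // Bound both the worker threads and the number of queued/running uploads.
    private final ExecutorService pool = Executors.newFixedThreadPool(4);
    private final Semaphore permits = new Semaphore(8);
    private final List<Future<Integer>> parts = new ArrayList<>();

    // Submit one block; each upload retries up to 3 times, mirroring point 3.
    void submitBlock(int partNumber, byte[] block) throws InterruptedException {
        permits.acquire(); // blocks the producer when too many parts are in flight
        parts.add(pool.submit(() -> {
            try {
                for (int attempt = 1; ; attempt++) {
                    try {
                        return uploadPart(partNumber, block);
                    } catch (Exception e) {
                        if (attempt == 3) throw e; // give up; whole upload aborts
                    }
                }
            } finally {
                permits.release();
            }
        }));
    }

    // Stand-in for the real OSS multi-part upload call; returns a part id.
    private int uploadPart(int partNumber, byte[] block) {
        return partNumber;
    }

    // Wait for every part; any failed part fails the whole file upload.
    List<Integer> complete() throws Exception {
        List<Integer> ids = new ArrayList<>();
        for (Future<Integer> f : parts) ids.add(f.get());
        pool.shutdown();
        return ids;
    }

    public static void main(String[] args) throws Exception {
        BlockUploadSketch u = new BlockUploadSketch();
        for (int i = 1; i <= 5; i++) u.submitBlock(i, new byte[128]);
        System.out.println(u.complete()); // part ids in submit order
    }
}
```

The semaphore is the key design point: it lets the producer keep generating blocks asynchronously while capping local buffering, which is exactly the disk-space problem the flat {{AliyunOSSOutputStream}} suffered from.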