[jira] [Commented] (HADOOP-16071) Fix typo in DistCp Counters - Bandwidth in Bytes
[ https://issues.apache.org/jira/browse/HADOOP-16071?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16751502#comment-16751502 ] Arpit Agarwal commented on HADOOP-16071: I think Steve is talking about any automation that parses the text output of distcp. > Fix typo in DistCp Counters - Bandwidth in Bytes > > > Key: HADOOP-16071 > URL: https://issues.apache.org/jira/browse/HADOOP-16071 > Project: Hadoop Common > Issue Type: Bug > Components: tools/distcp >Affects Versions: 3.2.0 >Reporter: Siyao Meng >Assignee: Siyao Meng >Priority: Major > Attachments: HADOOP-16071.001.patch > > > {code:bash|title=DistCp MR Job Counters} > ... > DistCp Counters > Bandwidth in Btyes=20971520 > Bytes Copied=20971520 > Bytes Expected=20971520 > Files Copied=1 > {code} > {noformat} > Bandwidth in Btyes -> Bandwidth in Bytes > {noformat} -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
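The concern about automation parsing distcp's text output can be made concrete with a toy sketch (this is not Hadoop code; the parser, class name, and output lines are invented for illustration): a script keyed on the misspelled counter name keeps working against old output but silently matches nothing once the display string is corrected.

```java
// Toy illustration of why renaming a counter's display string can break
// downstream automation. All names here are invented for this example.
public class CounterParseSketch {
    // Returns the counter value if the line starts with "<key>=", else null.
    static Long parse(String line, String key) {
        String prefix = key + "=";
        String trimmed = line.trim();
        return trimmed.startsWith(prefix)
            ? Long.valueOf(trimmed.substring(prefix.length()))
            : null;
    }

    public static void main(String[] args) {
        String oldOutput = "Bandwidth in Btyes=20971520";  // current output (typo)
        String newOutput = "Bandwidth in Bytes=20971520";  // output after the fix
        // A script written against the typo works on existing clusters...
        System.out.println(parse(oldOutput, "Bandwidth in Btyes"));
        // ...but silently gets nothing once the display name is corrected.
        System.out.println(parse(newOutput, "Bandwidth in Btyes"));
    }
}
```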
[jira] [Updated] (HADOOP-15281) Distcp to add no-rename copy option
[ https://issues.apache.org/jira/browse/HADOOP-15281?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrew Olson updated HADOOP-15281: -- Status: Open (was: Patch Available) > Distcp to add no-rename copy option > --- > > Key: HADOOP-15281 > URL: https://issues.apache.org/jira/browse/HADOOP-15281 > Project: Hadoop Common > Issue Type: Improvement > Components: tools/distcp >Affects Versions: 3.0.0 >Reporter: Steve Loughran >Assignee: Andrew Olson >Priority: Major > Attachments: HADOOP-15281-001.patch > > > Currently Distcp uploads a file by two strategies > # append parts > # copy to temp then rename > option 2 executes the following sequence in {{promoteTmpToTarget}} > {code} > if ((fs.exists(target) && !fs.delete(target, false)) > || (!fs.exists(target.getParent()) && !fs.mkdirs(target.getParent())) > || !fs.rename(tmpTarget, target)) { > throw new IOException("Failed to promote tmp-file:" + tmpTarget > + " to: " + target); > } > {code} > For any object store, that's a lot of HTTP requests; for S3A you are looking > at 12+ requests and an O(data) copy call. > This is not a good upload strategy for any store which manifests its output > atomically at the end of the write(). > Proposed: add a switch to write direct to the dest path. either a conf option > or a CLI option -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
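The cost difference between the two strategies can be sketched without Hadoop at all. The following plain-Java model (OpCountingStore, copyViaRename, copyDirect are all invented names; this is not the DistCp patch) counts store operations for the promoteTmpToTarget-style commit versus a direct write:

```java
// Illustrative sketch only: models the two DistCp commit strategies and
// counts filesystem operations, showing why temp-then-rename is expensive
// on object stores. No Hadoop dependency; all names are invented.
import java.util.ArrayList;
import java.util.List;

public class CommitStrategySketch {
    /** Records every store operation so the two strategies can be compared. */
    static class OpCountingStore {
        final List<String> ops = new ArrayList<>();
        boolean exists(String p) { ops.add("exists " + p); return false; }
        void delete(String p) { ops.add("delete " + p); }
        void mkdirs(String p) { ops.add("mkdirs " + p); }
        void rename(String a, String b) { ops.add("rename " + a + " -> " + b); }
        void write(String p) { ops.add("write " + p); }
    }

    /** Mirrors promoteTmpToTarget: write tmp, probe target, rename into place. */
    static int copyViaRename(OpCountingStore fs, String tmp, String target) {
        fs.write(tmp);
        if (fs.exists(target)) { fs.delete(target); }
        if (!fs.exists(parent(target))) { fs.mkdirs(parent(target)); }
        fs.rename(tmp, target);  // on S3A, rename is an O(data) server-side copy
        return fs.ops.size();
    }

    /** The proposed no-rename option: write straight to the destination. */
    static int copyDirect(OpCountingStore fs, String target) {
        fs.write(target);  // object stores manifest the object atomically on close
        return fs.ops.size();
    }

    static String parent(String p) { return p.substring(0, p.lastIndexOf('/')); }

    public static void main(String[] args) {
        System.out.println("rename strategy ops: "
            + copyViaRename(new OpCountingStore(), "/dst/.tmp.f", "/dst/f"));  // 5
        System.out.println("direct strategy ops: "
            + copyDirect(new OpCountingStore(), "/dst/f"));  // 1
    }
}
```

Each modeled operation maps to at least one HTTP round trip on a store like S3, which is why the issue estimates 12+ requests plus the copy for the rename path.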
[jira] [Updated] (HADOOP-15281) Distcp to add no-rename copy option
[ https://issues.apache.org/jira/browse/HADOOP-15281?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrew Olson updated HADOOP-15281: -- Attachment: HADOOP-15281-002.patch Status: Patch Available (was: Open) Attaching an updated patch. > Distcp to add no-rename copy option > --- > > Key: HADOOP-15281 > URL: https://issues.apache.org/jira/browse/HADOOP-15281 > Project: Hadoop Common > Issue Type: Improvement > Components: tools/distcp >Affects Versions: 3.0.0 >Reporter: Steve Loughran >Assignee: Andrew Olson >Priority: Major > Attachments: HADOOP-15281-001.patch, HADOOP-15281-002.patch > > > Currently Distcp uploads a file by two strategies > # append parts > # copy to temp then rename > option 2 executes the following sequence in {{promoteTmpToTarget}} > {code} > if ((fs.exists(target) && !fs.delete(target, false)) > || (!fs.exists(target.getParent()) && !fs.mkdirs(target.getParent())) > || !fs.rename(tmpTarget, target)) { > throw new IOException("Failed to promote tmp-file:" + tmpTarget > + " to: " + target); > } > {code} > For any object store, that's a lot of HTTP requests; for S3A you are looking > at 12+ requests and an O(data) copy call. > This is not a good upload strategy for any store which manifests its output > atomically at the end of the write(). > Proposed: add a switch to write direct to the dest path. either a conf option > or a CLI option -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-11223) Offer a read-only conf alternative to new Configuration()
[ https://issues.apache.org/jira/browse/HADOOP-11223?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16751630#comment-16751630 ] Michael Miller commented on HADOOP-11223: - Did anything ever come from this ticket? I did work recently for Apache Accumulo that would greatly benefit from having a read only copy of Configuration. > Offer a read-only conf alternative to new Configuration() > - > > Key: HADOOP-11223 > URL: https://issues.apache.org/jira/browse/HADOOP-11223 > Project: Hadoop Common > Issue Type: Bug > Components: conf >Reporter: Gopal V >Assignee: Varun Saxena >Priority: Major > Labels: Performance > Attachments: HADOOP-11223.001.patch > > > new Configuration() is called from several static blocks across Hadoop. > This is incredibly inefficient, since each one of those involves primarily > XML parsing at a point where the JIT won't be triggered & interpreter mode is > essentially forced on the JVM. > The alternate solution would be to offer a {{Configuration::getDefault()}} > alternative which disallows any modifications. > At the very least, such a method would need to be called from > # org.apache.hadoop.io.nativeio.NativeIO::() > # org.apache.hadoop.security.SecurityUtil::() > # org.apache.hadoop.yarn.factory.providers.RecordFactoryProvider:: -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
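The shape of the proposed {{Configuration::getDefault()}} can be sketched as follows. This is not Hadoop's Configuration class; ReadOnlyConf and its contents are invented to illustrate the pattern of parsing defaults once, caching the instance, and rejecting mutation:

```java
// Sketch of the idea behind a cached, read-only default configuration.
// ReadOnlyConf is an invented stand-in, not Hadoop's Configuration.
import java.util.HashMap;
import java.util.Map;

public class ReadOnlyConfSketch {
    static class ReadOnlyConf {
        private static volatile ReadOnlyConf DEFAULT;  // parsed once, shared
        private final Map<String, String> props;

        private ReadOnlyConf(Map<String, String> props) { this.props = props; }

        /** Shared default; avoids re-parsing XML in every static block. */
        static ReadOnlyConf getDefault() {
            if (DEFAULT == null) {
                synchronized (ReadOnlyConf.class) {
                    if (DEFAULT == null) {
                        Map<String, String> m = new HashMap<>();
                        // Stand-in for loading core-default.xml etc. exactly once.
                        m.put("fs.defaultFS", "file:///");
                        DEFAULT = new ReadOnlyConf(m);
                    }
                }
            }
            return DEFAULT;
        }

        String get(String key) { return props.get(key); }

        /** Mutation is disallowed, so the shared instance stays safe. */
        void set(String key, String value) {
            throw new UnsupportedOperationException("default configuration is read-only");
        }
    }

    public static void main(String[] args) {
        ReadOnlyConf conf = ReadOnlyConf.getDefault();
        System.out.println(conf.get("fs.defaultFS"));
        System.out.println(ReadOnlyConf.getDefault() == conf);  // same instance
    }
}
```

Callers such as the static initializers listed in the issue would then share one parsed instance instead of each paying the XML-parsing cost before the JIT warms up.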
[jira] [Commented] (HADOOP-16071) Fix typo in DistCp Counters - Bandwidth in Bytes
[ https://issues.apache.org/jira/browse/HADOOP-16071?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16751592#comment-16751592 ] Siyao Meng commented on HADOOP-16071: - [~arpitagarwal] Hmm okay. Who shall we pull in to answer this question? or how should I proceed? > Fix typo in DistCp Counters - Bandwidth in Bytes > > > Key: HADOOP-16071 > URL: https://issues.apache.org/jira/browse/HADOOP-16071 > Project: Hadoop Common > Issue Type: Bug > Components: tools/distcp >Affects Versions: 3.2.0, 3.1.1 >Reporter: Siyao Meng >Assignee: Siyao Meng >Priority: Major > Attachments: HADOOP-16071.001.patch > > > {code:bash|title=DistCp MR Job Counters} > ... > DistCp Counters > Bandwidth in Btyes=20971520 > Bytes Copied=20971520 > Bytes Expected=20971520 > Files Copied=1 > {code} > {noformat} > Bandwidth in Btyes -> Bandwidth in Bytes > {noformat} -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-15998) Jar validation bash scripts don't work on Windows due to platform differences (colons in paths, \r\n)
[ https://issues.apache.org/jira/browse/HADOOP-15998?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16751603#comment-16751603 ] Hadoop QA commented on HADOOP-15998: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s{color} | {color:blue} Docker mode activated. {color} | | {color:red}-1{color} | {color:red} patch {color} | {color:red} 0m 6s{color} | {color:red} HADOOP-15998 does not apply to trunk. Rebase required? Wrong Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} | \\ \\ || Subsystem || Report/Notes || | JIRA Issue | HADOOP-15998 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12956055/HADOOP-15998.v3.patch | | Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/15839/console | | Powered by | Apache Yetus 0.8.0 http://yetus.apache.org | This message was automatically generated. > Jar validation bash scripts don't work on Windows due to platform differences > (colons in paths, \r\n) > - > > Key: HADOOP-15998 > URL: https://issues.apache.org/jira/browse/HADOOP-15998 > Project: Hadoop Common > Issue Type: Improvement > Components: build >Affects Versions: 3.2.0, 3.3.0 > Environment: Windows 10 > Visual Studio 2017 >Reporter: Brian Grunkemeyer >Assignee: Brian Grunkemeyer >Priority: Blocker > Labels: build, windows > Fix For: 3.3.0 > > Attachments: HADOOP-15998.v2.patch, HADOOP-15998.v3.patch > > Original Estimate: 24h > Remaining Estimate: 24h > > Building Hadoop fails on Windows due to a few shell scripts that make invalid > assumptions: > 1) Colon shouldn't be used to separate multiple paths in command line > parameters. Colons occur in Windows paths. 
> 2) Shell scripts that rely on running external tools need to deal with > carriage return - line feed differences (lines ending in \r\n, not just \n)
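The two portability problems above can be sketched in bash (illustrative only, not the HADOOP-15998 patch; the variable names and sample paths are invented): strip trailing carriage returns from external-tool output before matching, and split path lists on a character that cannot occur in Windows paths.

```shell
# 1) External tools on Windows may emit CRLF line endings; delete the \r
#    before comparing, or string matches against "expected\n" fail.
version_line=$(printf 'hadoop 3.3.0\r\n' | tr -d '\r')
echo "[$version_line]"

# 2) Split a path list on '|' instead of ':', since ':' appears inside
#    Windows paths such as C:\work\hadoop.
paths='C:\work\hadoop|C:\tools\jdk'
IFS='|' read -r first second <<< "$paths"
echo "$first"
echo "$second"
```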
[jira] [Commented] (HADOOP-15998) Jar validation bash scripts don't work on Windows due to platform differences (colons in paths, \r\n)
[ https://issues.apache.org/jira/browse/HADOOP-15998?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16751627#comment-16751627 ] Giovanni Matteo Fumarola commented on HADOOP-15998: --- [~briangru], can you rebase it? > Jar validation bash scripts don't work on Windows due to platform differences > (colons in paths, \r\n) > - > > Key: HADOOP-15998 > URL: https://issues.apache.org/jira/browse/HADOOP-15998 > Project: Hadoop Common > Issue Type: Improvement > Components: build >Affects Versions: 3.2.0, 3.3.0 > Environment: Windows 10 > Visual Studio 2017 >Reporter: Brian Grunkemeyer >Assignee: Brian Grunkemeyer >Priority: Blocker > Labels: build, windows > Fix For: 3.3.0 > > Attachments: HADOOP-15998.v2.patch, HADOOP-15998.v3.patch > > Original Estimate: 24h > Remaining Estimate: 24h > > Building Hadoop fails on Windows due to a few shell scripts that make invalid > assumptions: > 1) Colon shouldn't be used to separate multiple paths in command line > parameters. Colons occur in Windows paths. > 2) Shell scripts that rely on running external tools need to deal with > carriage return - line feed differences (lines ending in \r\n, not just \n) -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-11223) Offer a read-only conf alternative to new Configuration()
[ https://issues.apache.org/jira/browse/HADOOP-11223?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16751720#comment-16751720 ] Hadoop QA commented on HADOOP-11223: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 19s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 20m 30s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 33s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 55s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 17s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 14m 44s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 42s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 1s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 49s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m 48s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 14m 48s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 52s{color} | {color:orange} hadoop-common-project/hadoop-common: The patch generated 2 new + 0 unchanged - 0 fixed = 2 total (was 0) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 12s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 12m 13s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 50s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 0s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 9m 18s{color} | {color:green} hadoop-common in the patch passed. 
{color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 38s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 98m 19s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f | | JIRA Issue | HADOOP-11223 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12689236/HADOOP-11223.001.patch | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux be3e9287311a 4.4.0-138-generic #164~14.04.1-Ubuntu SMP Fri Oct 5 08:56:16 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 4e0aa2c | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_191 | | findbugs | v3.1.0-RC1 | | checkstyle | https://builds.apache.org/job/PreCommit-HADOOP-Build/15841/artifact/out/diff-checkstyle-hadoop-common-project_hadoop-common.txt | | Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/15841/testReport/ | | Max. process+thread count | 1347 (vs. ulimit of 1) | | modules | C: hadoop-common-project/hadoop-common U:
[jira] [Updated] (HADOOP-16071) Fix typo in DistCp Counters - Bandwidth in Bytes
[ https://issues.apache.org/jira/browse/HADOOP-16071?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Siyao Meng updated HADOOP-16071: Affects Version/s: 3.1.1 > Fix typo in DistCp Counters - Bandwidth in Bytes > > > Key: HADOOP-16071 > URL: https://issues.apache.org/jira/browse/HADOOP-16071 > Project: Hadoop Common > Issue Type: Bug > Components: tools/distcp >Affects Versions: 3.2.0, 3.1.1 >Reporter: Siyao Meng >Assignee: Siyao Meng >Priority: Major > Attachments: HADOOP-16071.001.patch > > > {code:bash|title=DistCp MR Job Counters} > ... > DistCp Counters > Bandwidth in Btyes=20971520 > Bytes Copied=20971520 > Bytes Expected=20971520 > Files Copied=1 > {code} > {noformat} > Bandwidth in Btyes -> Bandwidth in Bytes > {noformat} -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-15998) Jar validation bash scripts don't work on Windows due to platform differences (colons in paths, \r\n)
[ https://issues.apache.org/jira/browse/HADOOP-15998?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16751690#comment-16751690 ] Brian Grunkemeyer commented on HADOOP-15998: I recreated the patch on top of trunk. > Jar validation bash scripts don't work on Windows due to platform differences > (colons in paths, \r\n) > - > > Key: HADOOP-15998 > URL: https://issues.apache.org/jira/browse/HADOOP-15998 > Project: Hadoop Common > Issue Type: Improvement > Components: build >Affects Versions: 3.2.0, 3.3.0 > Environment: Windows 10 > Visual Studio 2017 >Reporter: Brian Grunkemeyer >Assignee: Brian Grunkemeyer >Priority: Blocker > Labels: build, windows > Fix For: 3.3.0 > > Attachments: HADOOP-15998.v3.patch, HADOOP-15998.v4.patch > > Original Estimate: 24h > Remaining Estimate: 24h > > Building Hadoop fails on Windows due to a few shell scripts that make invalid > assumptions: > 1) Colon shouldn't be used to separate multiple paths in command line > parameters. Colons occur in Windows paths. > 2) Shell scripts that rely on running external tools need to deal with > carriage return - line feed differences (lines ending in \r\n, not just \n) -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-15998) Jar validation bash scripts don't work on Windows due to platform differences (colons in paths, \r\n)
[ https://issues.apache.org/jira/browse/HADOOP-15998?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Brian Grunkemeyer updated HADOOP-15998: --- Attachment: (was: HADOOP-15998.v2.patch) > Jar validation bash scripts don't work on Windows due to platform differences > (colons in paths, \r\n) > - > > Key: HADOOP-15998 > URL: https://issues.apache.org/jira/browse/HADOOP-15998 > Project: Hadoop Common > Issue Type: Improvement > Components: build >Affects Versions: 3.2.0, 3.3.0 > Environment: Windows 10 > Visual Studio 2017 >Reporter: Brian Grunkemeyer >Assignee: Brian Grunkemeyer >Priority: Blocker > Labels: build, windows > Fix For: 3.3.0 > > Attachments: HADOOP-15998.v3.patch > > Original Estimate: 24h > Remaining Estimate: 24h > > Building Hadoop fails on Windows due to a few shell scripts that make invalid > assumptions: > 1) Colon shouldn't be used to separate multiple paths in command line > parameters. Colons occur in Windows paths. > 2) Shell scripts that rely on running external tools need to deal with > carriage return - line feed differences (lines ending in \r\n, not just \n) -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Comment Edited] (HADOOP-16071) Fix typo in DistCp Counters - Bandwidth in Bytes
[ https://issues.apache.org/jira/browse/HADOOP-16071?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16751502#comment-16751502 ] Arpit Agarwal edited comment on HADOOP-16071 at 1/24/19 8:13 PM: - I think Steve is talking about any automation that parses the text output of distcp. That may break if it is looking for the specific output string with the typo. was (Author: arpitagarwal): I think Steve is talking about any automation that parses the text output of distcp. > Fix typo in DistCp Counters - Bandwidth in Bytes > > > Key: HADOOP-16071 > URL: https://issues.apache.org/jira/browse/HADOOP-16071 > Project: Hadoop Common > Issue Type: Bug > Components: tools/distcp >Affects Versions: 3.2.0 >Reporter: Siyao Meng >Assignee: Siyao Meng >Priority: Major > Attachments: HADOOP-16071.001.patch > > > {code:bash|title=DistCp MR Job Counters} > ... > DistCp Counters > Bandwidth in Btyes=20971520 > Bytes Copied=20971520 > Bytes Expected=20971520 > Files Copied=1 > {code} > {noformat} > Bandwidth in Btyes -> Bandwidth in Bytes > {noformat} -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-15998) Jar validation bash scripts don't work on Windows due to platform differences (colons in paths, \r\n)
[ https://issues.apache.org/jira/browse/HADOOP-15998?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16751596#comment-16751596 ] Giovanni Matteo Fumarola commented on HADOOP-15998: --- I can't move this Jira back to "Patch available". I will try to move to complete and reopening. > Jar validation bash scripts don't work on Windows due to platform differences > (colons in paths, \r\n) > - > > Key: HADOOP-15998 > URL: https://issues.apache.org/jira/browse/HADOOP-15998 > Project: Hadoop Common > Issue Type: Improvement > Components: build >Affects Versions: 3.2.0, 3.3.0 > Environment: Windows 10 > Visual Studio 2017 >Reporter: Brian Grunkemeyer >Assignee: Brian Grunkemeyer >Priority: Blocker > Labels: build, windows > Fix For: 3.3.0 > > Attachments: HADOOP-15998.v2.patch, HADOOP-15998.v3.patch > > Original Estimate: 24h > Remaining Estimate: 24h > > Building Hadoop fails on Windows due to a few shell scripts that make invalid > assumptions: > 1) Colon shouldn't be used to separate multiple paths in command line > parameters. Colons occur in Windows paths. > 2) Shell scripts that rely on running external tools need to deal with > carriage return - line feed differences (lines ending in \r\n, not just \n) -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Issue Comment Deleted] (HADOOP-15998) Jar validation bash scripts don't work on Windows due to platform differences (colons in paths, \r\n)
[ https://issues.apache.org/jira/browse/HADOOP-15998?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Giovanni Matteo Fumarola updated HADOOP-15998: -- Comment: was deleted (was: I can't move this Jira back to "Patch available". I will try to move to complete and reopening.) > Jar validation bash scripts don't work on Windows due to platform differences > (colons in paths, \r\n) > - > > Key: HADOOP-15998 > URL: https://issues.apache.org/jira/browse/HADOOP-15998 > Project: Hadoop Common > Issue Type: Improvement > Components: build >Affects Versions: 3.2.0, 3.3.0 > Environment: Windows 10 > Visual Studio 2017 >Reporter: Brian Grunkemeyer >Assignee: Brian Grunkemeyer >Priority: Blocker > Labels: build, windows > Fix For: 3.3.0 > > Attachments: HADOOP-15998.v2.patch, HADOOP-15998.v3.patch > > Original Estimate: 24h > Remaining Estimate: 24h > > Building Hadoop fails on Windows due to a few shell scripts that make invalid > assumptions: > 1) Colon shouldn't be used to separate multiple paths in command line > parameters. Colons occur in Windows paths. > 2) Shell scripts that rely on running external tools need to deal with > carriage return - line feed differences (lines ending in \r\n, not just \n) -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-15998) Jar validation bash scripts don't work on Windows due to platform differences (colons in paths, \r\n)
[ https://issues.apache.org/jira/browse/HADOOP-15998?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Giovanni Matteo Fumarola updated HADOOP-15998: -- Status: Patch Available (was: Reopened) > Jar validation bash scripts don't work on Windows due to platform differences > (colons in paths, \r\n) > - > > Key: HADOOP-15998 > URL: https://issues.apache.org/jira/browse/HADOOP-15998 > Project: Hadoop Common > Issue Type: Improvement > Components: build >Affects Versions: 3.2.0, 3.3.0 > Environment: Windows 10 > Visual Studio 2017 >Reporter: Brian Grunkemeyer >Assignee: Brian Grunkemeyer >Priority: Blocker > Labels: build, windows > Fix For: 3.3.0 > > Attachments: HADOOP-15998.v2.patch, HADOOP-15998.v3.patch > > Original Estimate: 24h > Remaining Estimate: 24h > > Building Hadoop fails on Windows due to a few shell scripts that make invalid > assumptions: > 1) Colon shouldn't be used to separate multiple paths in command line > parameters. Colons occur in Windows paths. > 2) Shell scripts that rely on running external tools need to deal with > carriage return - line feed differences (lines ending in \r\n, not just \n) -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-15281) Distcp to add no-rename copy option
[ https://issues.apache.org/jira/browse/HADOOP-15281?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16751669#comment-16751669 ] Hadoop QA commented on HADOOP-15281: | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 12s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 3 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 3m 22s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 23m 38s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 52s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 41s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 4s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 14m 10s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 15s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 42s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 12s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 0s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 50s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 50s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 35s{color} | {color:orange} hadoop-tools: The patch generated 10 new + 69 unchanged - 1 fixed = 79 total (was 70) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 55s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 13m 45s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 30s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 38s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 12m 35s{color} | {color:green} hadoop-distcp in the patch passed. 
{color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 4m 27s{color} | {color:green} hadoop-aws in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 29s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 84m 34s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f | | JIRA Issue | HADOOP-15281 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12956200/HADOOP-15281-002.patch | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux a99661bf0d47 4.4.0-139-generic #165~14.04.1-Ubuntu SMP Wed Oct 31 10:55:11 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 4e0aa2c | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_191 | | findbugs | v3.1.0-RC1 | | checkstyle |
[jira] [Updated] (HADOOP-15998) Jar validation bash scripts don't work on Windows due to platform differences (colons in paths, \r\n)
[ https://issues.apache.org/jira/browse/HADOOP-15998?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Brian Grunkemeyer updated HADOOP-15998: --- Attachment: (was: HADOOP-15998.v3.patch) > Jar validation bash scripts don't work on Windows due to platform differences > (colons in paths, \r\n) > - > > Key: HADOOP-15998 > URL: https://issues.apache.org/jira/browse/HADOOP-15998 > Project: Hadoop Common > Issue Type: Improvement > Components: build >Affects Versions: 3.2.0, 3.3.0 > Environment: Windows 10 > Visual Studio 2017 >Reporter: Brian Grunkemeyer >Assignee: Brian Grunkemeyer >Priority: Blocker > Labels: build, windows > Fix For: 3.3.0 > > Attachments: HADOOP-15998.v4.patch > > Original Estimate: 24h > Remaining Estimate: 24h > > Building Hadoop fails on Windows due to a few shell scripts that make invalid > assumptions: > 1) Colon shouldn't be used to separate multiple paths in command line > parameters. Colons occur in Windows paths. > 2) Shell scripts that rely on running external tools need to deal with > carriage return - line feed differences (lines ending in \r\n, not just \n) -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-16072) Visual Studio projects are out of date
[ https://issues.apache.org/jira/browse/HADOOP-16072?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Brian Grunkemeyer updated HADOOP-16072: --- Status: Patch Available (was: Open) I'm checking in what the build automatically updates on Windows. > Visual Studio projects are out of date > -- > > Key: HADOOP-16072 > URL: https://issues.apache.org/jira/browse/HADOOP-16072 > Project: Hadoop Common > Issue Type: Bug > Components: build >Affects Versions: 3.2.0 > Environment: Building on Windows 10 with Visual Studio 2017. >Reporter: Brian Grunkemeyer >Assignee: Brian Grunkemeyer >Priority: Minor > Labels: build, windows > Fix For: 3.2.0 > > Attachments: HADOOP-16072.v1.patch > > Original Estimate: 6h > Remaining Estimate: 6h > > On Windows when you build, a part of the build process updates some Visual > Studio solution and project files. We should simply check in the updated > version, so that everyone who builds doesn't get stuck with extra changes > they need to manage. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-16071) Fix typo in DistCp Counters - Bandwidth in Bytes
[ https://issues.apache.org/jira/browse/HADOOP-16071?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16751452#comment-16751452 ] Siyao Meng commented on HADOOP-16071: - [~ste...@apache.org] I found the only references of this counter are here: {code:java|title=CopyMapper.java} /** * Hadoop counters for the DistCp CopyMapper. * (These have been kept identical to the old DistCp, * for backward compatibility.) */ public static enum Counter { COPY, // Number of files received by the mapper for copy. DIR_COPY, // Number of directories received by the mapper for copy. SKIP, // Number of files skipped. FAIL, // Number of files that failed to be copied. BYTESCOPIED, // Number of bytes actually copied by the copy-mapper, total. BYTESEXPECTED,// Number of bytes expected to be copied. BYTESFAILED, // Number of bytes that failed to be copied. BYTESSKIPPED, // Number of bytes that were skipped from copy. SLEEP_TIME_MS, // Time map slept while trying to honor bandwidth cap. BANDWIDTH_IN_BYTES, // Effective transfer rate in B/s. } ... @Override protected void cleanup(Context context) throws IOException, InterruptedException { super.cleanup(context); long secs = (System.currentTimeMillis() - startEpoch) / 1000; incrementCounter(context, Counter.BANDWIDTH_IN_BYTES, totalBytesCopied / ((secs == 0 ? 1 : secs))); } {code} So it seems to me that the typo in CopyMapper_Counter.properties is just a display name. > Fix typo in DistCp Counters - Bandwidth in Bytes > > > Key: HADOOP-16071 > URL: https://issues.apache.org/jira/browse/HADOOP-16071 > Project: Hadoop Common > Issue Type: Bug > Components: tools/distcp >Affects Versions: 3.2.0 >Reporter: Siyao Meng >Assignee: Siyao Meng >Priority: Major > Attachments: HADOOP-16071.001.patch > > > {code:bash|title=DistCp MR Job Counters} > ... 
> DistCp Counters > Bandwidth in Btyes=20971520 > Bytes Copied=20971520 > Bytes Expected=20971520 > Files Copied=1 > {code} > {noformat} > Bandwidth in Btyes -> Bandwidth in Bytes > {noformat} -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-15998) Jar validation bash scripts don't work on Windows due to platform differences (colons in paths, \r\n)
[ https://issues.apache.org/jira/browse/HADOOP-15998?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Brian Grunkemeyer updated HADOOP-15998: --- Attachment: HADOOP-15998.v4.patch > Jar validation bash scripts don't work on Windows due to platform differences > (colons in paths, \r\n) > - > > Key: HADOOP-15998 > URL: https://issues.apache.org/jira/browse/HADOOP-15998 > Project: Hadoop Common > Issue Type: Improvement > Components: build >Affects Versions: 3.2.0, 3.3.0 > Environment: Windows 10 > Visual Studio 2017 >Reporter: Brian Grunkemeyer >Assignee: Brian Grunkemeyer >Priority: Blocker > Labels: build, windows > Fix For: 3.3.0 > > Attachments: HADOOP-15998.v3.patch, HADOOP-15998.v4.patch > > Original Estimate: 24h > Remaining Estimate: 24h > > Building Hadoop fails on Windows due to a few shell scripts that make invalid > assumptions: > 1) Colon shouldn't be used to separate multiple paths in command line > parameters. Colons occur in Windows paths. > 2) Shell scripts that rely on running external tools need to deal with > carriage return - line feed differences (lines ending in \r\n, not just \n) -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-15281) Distcp to add no-rename copy option
[ https://issues.apache.org/jira/browse/HADOOP-15281?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrew Olson updated HADOOP-15281: -- Attachment: HADOOP-15281-003.patch Status: Patch Available (was: Open) Updated to fix the checkstyle (line length) issues > Distcp to add no-rename copy option > --- > > Key: HADOOP-15281 > URL: https://issues.apache.org/jira/browse/HADOOP-15281 > Project: Hadoop Common > Issue Type: Improvement > Components: tools/distcp >Affects Versions: 3.0.0 >Reporter: Steve Loughran >Assignee: Andrew Olson >Priority: Major > Attachments: HADOOP-15281-001.patch, HADOOP-15281-002.patch, > HADOOP-15281-003.patch > > > Currently Distcp uploads a file by two strategies > # append parts > # copy to temp then rename > option 2 executes the following sequence in {{promoteTmpToTarget}} > {code} > if ((fs.exists(target) && !fs.delete(target, false)) > || (!fs.exists(target.getParent()) && !fs.mkdirs(target.getParent())) > || !fs.rename(tmpTarget, target)) { > throw new IOException("Failed to promote tmp-file:" + tmpTarget > + " to: " + target); > } > {code} > For any object store, that's a lot of HTTP requests; for S3A you are looking > at 12+ requests and an O(data) copy call. > This is not a good upload strategy for any store which manifests its output > atomically at the end of the write(). > Proposed: add a switch to write direct to the dest path. either a conf option > or a CLI option -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
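The request-count argument behind the no-rename option can be seen with a toy store that counts calls; the `ToyStore` below is a hypothetical stand-in, not the Hadoop `FileSystem` API, and the rename path is simplified relative to the quoted `promoteTmpToTarget` sequence:

```java
import java.util.HashSet;
import java.util.Set;

// Toy call-counting store (hypothetical, not org.apache.hadoop.fs.FileSystem).
class ToyStore {
    final Set<String> paths = new HashSet<>();
    int requests = 0;
    boolean exists(String p) { requests++; return paths.contains(p); }
    boolean delete(String p) { requests++; return paths.remove(p); }
    void write(String p)     { requests++; paths.add(p); }
    void rename(String a, String b) { requests++; paths.remove(a); paths.add(b); }
}

class UploadSketch {
    // Strategy 2 above, simplified: copy to a temp path, then promote it.
    static void viaRename(ToyStore s, String target) {
        String tmp = target + ".tmp";
        s.write(tmp);
        if (s.exists(target)) { s.delete(target); }
        s.rename(tmp, target);
    }

    // Proposed switch: write straight to the destination path.
    static void direct(ToyStore s, String target) {
        s.write(target);
    }
}
```

Even in this reduced form the rename route costs three requests to the direct route's one; on a real object store the rename itself is an O(data) server-side copy, which is what the proposal avoids.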
[jira] [Commented] (HADOOP-15281) Distcp to add no-rename copy option
[ https://issues.apache.org/jira/browse/HADOOP-15281?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16751494#comment-16751494 ] Andrew Olson commented on HADOOP-15281: --- I will fix the test failure. > Distcp to add no-rename copy option > --- > > Key: HADOOP-15281 > URL: https://issues.apache.org/jira/browse/HADOOP-15281 > Project: Hadoop Common > Issue Type: Improvement > Components: tools/distcp >Affects Versions: 3.0.0 >Reporter: Steve Loughran >Assignee: Andrew Olson >Priority: Major > Attachments: HADOOP-15281-001.patch > > > Currently Distcp uploads a file by two strategies > # append parts > # copy to temp then rename > option 2 executes the following sequence in {{promoteTmpToTarget}} > {code} > if ((fs.exists(target) && !fs.delete(target, false)) > || (!fs.exists(target.getParent()) && !fs.mkdirs(target.getParent())) > || !fs.rename(tmpTarget, target)) { > throw new IOException("Failed to promote tmp-file:" + tmpTarget > + " to: " + target); > } > {code} > For any object store, that's a lot of HTTP requests; for S3A you are looking > at 12+ requests and an O(data) copy call. > This is not a good upload strategy for any store which manifests its output > atomically at the end of the write(). > Proposed: add a switch to write direct to the dest path. either a conf option > or a CLI option -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-16005) NativeAzureFileSystem does not support setXAttr
[ https://issues.apache.org/jira/browse/HADOOP-16005?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran updated HADOOP-16005: Issue Type: Sub-task (was: Bug) Parent: HADOOP-15763 > NativeAzureFileSystem does not support setXAttr > --- > > Key: HADOOP-16005 > URL: https://issues.apache.org/jira/browse/HADOOP-16005 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/azure >Reporter: Clemens Wolff >Priority: Major > > When interacting with Azure Blob Storage via the Hadoop FileSystem client, > it's currently (as of > [a8bbd81|https://github.com/apache/hadoop/commit/a8bbd818d5bc4762324bcdb7cf1fdd5c2f93891b]) > not possible to set custom metadata attributes. > Here is a snippet that demonstrates the missing behavior (throws an > UnsupportedOperationException): > {code:java} > val blobAccount = "SET ME" > val blobKey = "SET ME" > val blobContainer = "SET ME" > val blobFile = "SET ME" > import org.apache.hadoop.conf.Configuration > import org.apache.hadoop.fs.{FileSystem, Path} > val conf = new Configuration() > conf.set("fs.wasbs.impl", "org.apache.hadoop.fs.azure.NativeAzureFileSystem") > conf.set(s"fs.azure.account.key.$blobAccount.blob.core.windows.net", blobKey) > val path = new > Path(s"wasbs://$blobContainer@$blobAccount.blob.core.windows.net/$blobFile") > val fs = FileSystem.get(path, conf) > fs.setXAttr(path, "somekey", "somevalue".getBytes) > {code} > Looking at the code in hadoop-tools/hadoop-azure, NativeAzureFileSystem > inherits the default setXAttr from FileSystem which throws the > UnsupportedOperationException. > The underlying Azure Blob Storage service does support custom metadata > ([service > docs|https://docs.microsoft.com/en-us/azure/storage/blobs/storage-properties-metadata]) > as does the azure-storage SDK that's being used by NativeAzureFileSystem > ([SDK > docs|http://javadox.com/com.microsoft.azure/azure-storage/2.0.0/com/microsoft/azure/storage/blob/CloudBlob.html#setMetadata(java.util.HashMap)]). 
> Is there another way that I should be setting custom metadata on Azure Blob > Storage files? Is there a specific reason why setXAttr hasn't been > implemented on NativeAzureFileSystem? If not, I can take a shot at > implementing it. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
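The inheritance situation described above can be modelled with two stand-in classes (hypothetical, not the real `org.apache.hadoop.fs` types): the base class supplies a `setXAttr` default that throws, and a subclass that never overrides it inherits the failure, which is exactly what callers of `NativeAzureFileSystem` hit.

```java
// Hypothetical stand-ins for FileSystem and NativeAzureFileSystem.
abstract class BaseFs {
    // Default implementation, as in the FileSystem base class: unsupported.
    public void setXAttr(String path, String name, byte[] value) {
        throw new UnsupportedOperationException(
            getClass().getSimpleName() + " doesn't support setXAttr");
    }
}

class ToyAzureFs extends BaseFs {
    // No setXAttr override, so callers fall through to the base-class throw.
}
```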
[jira] [Reopened] (HADOOP-15998) Jar validation bash scripts don't work on Windows due to platform differences (colons in paths, \r\n)
[ https://issues.apache.org/jira/browse/HADOOP-15998?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Giovanni Matteo Fumarola reopened HADOOP-15998: --- > Jar validation bash scripts don't work on Windows due to platform differences > (colons in paths, \r\n) > - > > Key: HADOOP-15998 > URL: https://issues.apache.org/jira/browse/HADOOP-15998 > Project: Hadoop Common > Issue Type: Improvement > Components: build >Affects Versions: 3.2.0, 3.3.0 > Environment: Windows 10 > Visual Studio 2017 >Reporter: Brian Grunkemeyer >Assignee: Brian Grunkemeyer >Priority: Blocker > Labels: build, windows > Fix For: 3.3.0 > > Attachments: HADOOP-15998.v2.patch, HADOOP-15998.v3.patch > > Original Estimate: 24h > Remaining Estimate: 24h > > Building Hadoop fails on Windows due to a few shell scripts that make invalid > assumptions: > 1) Colon shouldn't be used to separate multiple paths in command line > parameters. Colons occur in Windows paths. > 2) Shell scripts that rely on running external tools need to deal with > carriage return - line feed differences (lines ending in \r\n, not just \n) -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Created] (HADOOP-16072) Visual Studio projects are out of date
Brian Grunkemeyer created HADOOP-16072: -- Summary: Visual Studio projects are out of date Key: HADOOP-16072 URL: https://issues.apache.org/jira/browse/HADOOP-16072 Project: Hadoop Common Issue Type: Bug Components: build Affects Versions: 3.2.0 Environment: Building on Windows 10 with Visual Studio 2017. Reporter: Brian Grunkemeyer Fix For: 3.2.0 On Windows when you build, a part of the build process updates some Visual Studio solution and project files. We should simply check in the updated version, so that everyone who builds doesn't get stuck with extra changes they need to manage. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-15281) Distcp to add no-rename copy option
[ https://issues.apache.org/jira/browse/HADOOP-15281?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16751490#comment-16751490 ] Hadoop QA commented on HADOOP-15281: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 19s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 2 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 15s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 21m 22s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 56s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 39s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 0s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 13m 36s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 13s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 41s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 11s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 0s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 50s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 50s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 36s{color} | {color:orange} hadoop-tools: The patch generated 10 new + 69 unchanged - 1 fixed = 79 total (was 70) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 55s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 13m 18s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 30s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 36s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red} 12m 31s{color} | {color:red} hadoop-distcp in the patch failed. 
{color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 4m 32s{color} | {color:green} hadoop-aws in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 27s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 78m 18s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.tools.TestDistCpOptions | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f | | JIRA Issue | HADOOP-15281 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12956179/HADOOP-15281-001.patch | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux 4cd00da04e47 4.4.0-138-generic #164~14.04.1-Ubuntu SMP Fri Oct 5 08:56:16 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 3c7d700 | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_191 | | findbugs | v3.1.0-RC1 | |
[jira] [Resolved] (HADOOP-15998) Jar validation bash scripts don't work on Windows due to platform differences (colons in paths, \r\n)
[ https://issues.apache.org/jira/browse/HADOOP-15998?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Giovanni Matteo Fumarola resolved HADOOP-15998. --- Resolution: Incomplete > Jar validation bash scripts don't work on Windows due to platform differences > (colons in paths, \r\n) > - > > Key: HADOOP-15998 > URL: https://issues.apache.org/jira/browse/HADOOP-15998 > Project: Hadoop Common > Issue Type: Improvement > Components: build >Affects Versions: 3.2.0, 3.3.0 > Environment: Windows 10 > Visual Studio 2017 >Reporter: Brian Grunkemeyer >Assignee: Brian Grunkemeyer >Priority: Blocker > Labels: build, windows > Fix For: 3.3.0 > > Attachments: HADOOP-15998.v2.patch, HADOOP-15998.v3.patch > > Original Estimate: 24h > Remaining Estimate: 24h > > Building Hadoop fails on Windows due to a few shell scripts that make invalid > assumptions: > 1) Colon shouldn't be used to separate multiple paths in command line > parameters. Colons occur in Windows paths. > 2) Shell scripts that rely on running external tools need to deal with > carriage return - line feed differences (lines ending in \r\n, not just \n) -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-16072) Visual Studio projects are out of date
[ https://issues.apache.org/jira/browse/HADOOP-16072?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Brian Grunkemeyer updated HADOOP-16072: --- Attachment: HADOOP-16072.v1.patch > Visual Studio projects are out of date > -- > > Key: HADOOP-16072 > URL: https://issues.apache.org/jira/browse/HADOOP-16072 > Project: Hadoop Common > Issue Type: Bug > Components: build >Affects Versions: 3.2.0 > Environment: Building on Windows 10 with Visual Studio 2017. >Reporter: Brian Grunkemeyer >Assignee: Brian Grunkemeyer >Priority: Minor > Labels: build, windows > Fix For: 3.2.0 > > Attachments: HADOOP-16072.v1.patch > > Original Estimate: 6h > Remaining Estimate: 6h > > On Windows when you build, a part of the build process updates some Visual > Studio solution and project files. We should simply check in the updated > version, so that everyone who builds doesn't get stuck with extra changes > they need to manage. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Assigned] (HADOOP-16072) Visual Studio projects are out of date
[ https://issues.apache.org/jira/browse/HADOOP-16072?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Brian Grunkemeyer reassigned HADOOP-16072: -- Assignee: Brian Grunkemeyer > Visual Studio projects are out of date > -- > > Key: HADOOP-16072 > URL: https://issues.apache.org/jira/browse/HADOOP-16072 > Project: Hadoop Common > Issue Type: Bug > Components: build >Affects Versions: 3.2.0 > Environment: Building on Windows 10 with Visual Studio 2017. >Reporter: Brian Grunkemeyer >Assignee: Brian Grunkemeyer >Priority: Minor > Labels: build, windows > Fix For: 3.2.0 > > Original Estimate: 6h > Remaining Estimate: 6h > > On Windows when you build, a part of the build process updates some Visual > Studio solution and project files. We should simply check in the updated > version, so that everyone who builds doesn't get stuck with extra changes > they need to manage. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-15281) Distcp to add no-rename copy option
[ https://issues.apache.org/jira/browse/HADOOP-15281?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrew Olson updated HADOOP-15281: -- Status: Open (was: Patch Available) > Distcp to add no-rename copy option > --- > > Key: HADOOP-15281 > URL: https://issues.apache.org/jira/browse/HADOOP-15281 > Project: Hadoop Common > Issue Type: Improvement > Components: tools/distcp >Affects Versions: 3.0.0 >Reporter: Steve Loughran >Assignee: Andrew Olson >Priority: Major > Attachments: HADOOP-15281-001.patch, HADOOP-15281-002.patch > > > Currently Distcp uploads a file by two strategies > # append parts > # copy to temp then rename > option 2 executes the following sequence in {{promoteTmpToTarget}} > {code} > if ((fs.exists(target) && !fs.delete(target, false)) > || (!fs.exists(target.getParent()) && !fs.mkdirs(target.getParent())) > || !fs.rename(tmpTarget, target)) { > throw new IOException("Failed to promote tmp-file:" + tmpTarget > + " to: " + target); > } > {code} > For any object store, that's a lot of HTTP requests; for S3A you are looking > at 12+ requests and an O(data) copy call. > This is not a good upload strategy for any store which manifests its output > atomically at the end of the write(). > Proposed: add a switch to write direct to the dest path. either a conf option > or a CLI option -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-15229) Add FileSystem builder-based openFile() API to match createFile() + S3 Select
[ https://issues.apache.org/jira/browse/HADOOP-15229?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16751744#comment-16751744 ] Aaron Fabbri commented on HADOOP-15229: --- I looked at the latest large patch today. Also played around with the builder / future stuff. Ran a subset (due to cost) of integration tests including the select tests and some contract tests. This LGTM (+1) assuming yetus is happy and other folks have had a chance to conclude their reviews. Please do file followup JIRAs and link here (e.g. more MR integration, etc). > Add FileSystem builder-based openFile() API to match createFile() + S3 Select > - > > Key: HADOOP-15229 > URL: https://issues.apache.org/jira/browse/HADOOP-15229 > Project: Hadoop Common > Issue Type: New Feature > Components: fs, fs/azure, fs/s3 >Affects Versions: 3.2.0 >Reporter: Steve Loughran >Assignee: Steve Loughran >Priority: Major > Attachments: HADOOP-15229-001.patch, HADOOP-15229-002.patch, > HADOOP-15229-003.patch, HADOOP-15229-004.patch, HADOOP-15229-004.patch, > HADOOP-15229-005.patch, HADOOP-15229-006.patch, HADOOP-15229-007.patch, > HADOOP-15229-009.patch, HADOOP-15229-010.patch, HADOOP-15229-011.patch, > HADOOP-15229-012.patch, HADOOP-15229-013.patch, HADOOP-15229-014.patch, > HADOOP-15229-015.patch, HADOOP-15229-016.patch, HADOOP-15229-017.patch, > HADOOP-15229-018.patch, HADOOP-15229-019.patch > > > Replicate HDFS-1170 and HADOOP-14365 with an API to open files. > A key requirement of this is not HDFS, it's to put in the fadvise policy for > working with object stores, where getting the decision to do a full GET and > TCP abort on seek vs smaller GETs is fundamentally different: the wrong > option can cost you minutes. S3A and Azure both have adaptive policies now > (first backward seek), but they still don't do it that well. > Columnar formats (ORC, Parquet) should be able to say "fs.input.fadvise" > "random" as an option when they open files; I can imagine other options too. 
> The Builder model of [~eddyxu] is the one to mimic, method for method. > Ideally with as much code reuse as possible -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
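The builder shape the issue asks for, where callers like ORC or Parquet chain options such as "fs.input.fadvise" before the open is issued, can be sketched as follows. This toy class is an illustration only; the real builder API in the patches may differ in names and return types.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical toy illustrating the builder-based openFile() shape discussed
// above; not the actual Hadoop FSDataInputStreamBuilder API.
class OpenFileBuilderSketch {
    private final Map<String, String> opts = new HashMap<>();
    private final String path;

    OpenFileBuilderSketch(String path) { this.path = path; }

    // Options are collected up front, before any store round trip happens.
    OpenFileBuilderSketch opt(String key, String value) {
        opts.put(key, value);
        return this; // fluent chaining, mirroring the createFile() builder
    }

    // What an implementation would consult when finally opening the stream.
    String option(String key, String dflt) {
        return opts.getOrDefault(key, dflt);
    }
}
```

A columnar reader would then write something like `new OpenFileBuilderSketch("s3a://bucket/data.orc").opt("fs.input.fadvise", "random")`, letting the store pick a random-IO read policy before the first GET.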
[jira] [Commented] (HADOOP-16072) Visual Studio projects are out of date
[ https://issues.apache.org/jira/browse/HADOOP-16072?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16751788#comment-16751788 ] Hadoop QA commented on HADOOP-16072: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 21s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 21m 47s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m 11s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 20s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 52m 26s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 6s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 59s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 19m 14s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 19m 14s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 17s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s{color} | {color:red} The patch has 6 line(s) with tabs. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 12m 10s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 0s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 8m 43s{color} | {color:green} hadoop-common in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 41s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 99m 39s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f | | JIRA Issue | HADOOP-16072 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12956222/HADOOP-16072.v1.patch | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient | | uname | Linux 7bad6ebd8351 4.4.0-138-generic #164~14.04.1-Ubuntu SMP Fri Oct 5 08:56:16 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | trunk / a33ef4f | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_191 | | whitespace | https://builds.apache.org/job/PreCommit-HADOOP-Build/15844/artifact/out/whitespace-tabs.txt | | Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/15844/testReport/ | | Max. process+thread count | 1716 (vs. ulimit of 1) | | modules | C: hadoop-common-project/hadoop-common U: hadoop-common-project/hadoop-common | | Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/15844/console | | Powered by | Apache Yetus 0.8.0 http://yetus.apache.org | This message was automatically generated. > Visual Studio projects are out of date > -- > > Key: HADOOP-16072 > URL: https://issues.apache.org/jira/browse/HADOOP-16072 > Project: Hadoop Common > Issue Type: Bug > Components: build >Affects Versions: 3.2.0 > Environment: Building on Windows 10 with Visual Studio 2017. >Reporter: Brian Grunkemeyer >Assignee: Brian Grunkemeyer >Priority: Minor >
[jira] [Commented] (HADOOP-16072) Visual Studio projects are out of date
[ https://issues.apache.org/jira/browse/HADOOP-16072?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16751827#comment-16751827 ] Brian Grunkemeyer commented on HADOOP-16072: About the QA feedback: 1) I didn't add any new tests. My test was building on Windows. 2) Visual Studio solution files contain tabs. We ideally shouldn't tamper with that to satisfy a code review requirement. I'll try seeking an authoritative comment from the MSBuild experts, however I'd like to point out the file already does contain tabs. > Visual Studio projects are out of date > -- > > Key: HADOOP-16072 > URL: https://issues.apache.org/jira/browse/HADOOP-16072 > Project: Hadoop Common > Issue Type: Bug > Components: build >Affects Versions: 3.2.0 > Environment: Building on Windows 10 with Visual Studio 2017. >Reporter: Brian Grunkemeyer >Assignee: Brian Grunkemeyer >Priority: Minor > Labels: build, windows > Fix For: 3.2.0 > > Attachments: HADOOP-16072.v1.patch > > Original Estimate: 6h > Remaining Estimate: 6h > > On Windows when you build, a part of the build process updates some Visual > Studio solution and project files. We should simply check in the updated > version, so that everyone who builds doesn't get stuck with extra changes > they need to manage. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Assigned] (HADOOP-15686) Suppress bogus AbstractWadlGeneratorGrammarGenerator in KMS logs
[ https://issues.apache.org/jira/browse/HADOOP-15686?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Wei-Chiu Chuang reassigned HADOOP-15686: Assignee: Wei-Chiu Chuang > Supress bogus AbstractWadlGeneratorGrammarGenerator in KMS logs > --- > > Key: HADOOP-15686 > URL: https://issues.apache.org/jira/browse/HADOOP-15686 > Project: Hadoop Common > Issue Type: Bug > Components: kms >Affects Versions: 3.0.0 >Reporter: Wei-Chiu Chuang >Assignee: Wei-Chiu Chuang >Priority: Major > > After we switched underlying system of KMS from Tomcat to Jetty, we started > to observe a lot of bogus messages like the follow [1]. It is harmless but > very annoying. Let's suppress it in log4j configuration. > [1] > {quote} > Aug 20, 2018 11:26:17 AM > com.sun.jersey.server.wadl.generators.WadlGeneratorJAXBGrammarGenerator > buildModelAndSchemas > SEVERE: Failed to generate the schema for the JAX-B elements > com.sun.xml.bind.v2.runtime.IllegalAnnotationsException: 2 counts of > IllegalAnnotationExceptions > java.util.Map is an interface, and JAXB can't handle interfaces. > this problem is related to the following location: > at java.util.Map > java.util.Map does not have a no-arg default constructor. 
> this problem is related to the following location: > at java.util.Map > at > com.sun.xml.bind.v2.runtime.IllegalAnnotationsException$Builder.check(IllegalAnnotationsException.java:106) > at > com.sun.xml.bind.v2.runtime.JAXBContextImpl.getTypeInfoSet(JAXBContextImpl.java:489) > at > com.sun.xml.bind.v2.runtime.JAXBContextImpl.(JAXBContextImpl.java:319) > at > com.sun.xml.bind.v2.runtime.JAXBContextImpl$JAXBContextBuilder.build(JAXBContextImpl.java:1170) > at > com.sun.xml.bind.v2.ContextFactory.createContext(ContextFactory.java:145) > at sun.reflect.GeneratedMethodAccessor32.invoke(Unknown Source) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:498) > at javax.xml.bind.ContextFinder.newInstance(ContextFinder.java:247) > at javax.xml.bind.ContextFinder.newInstance(ContextFinder.java:234) > at javax.xml.bind.ContextFinder.find(ContextFinder.java:441) > at javax.xml.bind.JAXBContext.newInstance(JAXBContext.java:641) > at javax.xml.bind.JAXBContext.newInstance(JAXBContext.java:584) > at > com.sun.jersey.server.wadl.generators.WadlGeneratorJAXBGrammarGenerator.buildModelAndSchemas(WadlGeneratorJAXBGrammarGenerator.java:169) > at > com.sun.jersey.server.wadl.generators.AbstractWadlGeneratorGrammarGenerator.createExternalGrammar(AbstractWadlGeneratorGrammarGenerator.java:405) > at com.sun.jersey.server.wadl.WadlBuilder.generate(WadlBuilder.java:149) > at > com.sun.jersey.server.impl.wadl.WadlApplicationContextImpl.getApplication(WadlApplicationContextImpl.java:119) > at > com.sun.jersey.server.impl.wadl.WadlApplicationContextImpl.getApplication(WadlApplicationContextImpl.java:138) > at > com.sun.jersey.server.impl.wadl.WadlMethodFactory$WadlOptionsMethodDispatcher.dispatch(WadlMethodFactory.java:110) > at > com.sun.jersey.server.impl.uri.rules.HttpMethodRule.accept(HttpMethodRule.java:302) > at > 
com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147) > at > com.sun.jersey.server.impl.uri.rules.ResourceClassRule.accept(ResourceClassRule.java:108) > at > com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147) > at > com.sun.jersey.server.impl.uri.rules.RootResourceClassesRule.accept(RootResourceClassesRule.java:84) > at > com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1542) > at > com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1473) > at > com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1419) > at > com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1409) > at > com.sun.jersey.spi.container.servlet.WebComponent.service(WebComponent.java:409) > at > com.sun.jersey.spi.container.servlet.ServletContainer.service(ServletContainer.java:558) > at > com.sun.jersey.spi.container.servlet.ServletContainer.service(ServletContainer.java:733) > at javax.servlet.http.HttpServlet.service(HttpServlet.java:790) > at > org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:848) > at > org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1772) > at >
[jira] [Updated] (HADOOP-16069) Support configure ZK_DTSM_ZK_KERBEROS_PRINCIPAL in ZKDelegationTokenSecretManager using principal with Schema /_HOST
[ https://issues.apache.org/jira/browse/HADOOP-16069?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] luhuachao updated HADOOP-16069: --- Description: when use ZKDelegationTokenSecretManager with Kerberos, we cannot configure ZK_DTSM_ZK_KERBEROS_PRINCIPAL with principal like 'nn/_h...@example.com', we have to use principal like 'nn/hostn...@example.com' here. (was: when use ZKDelegationTokenSecretManager with Kerberos, we cannot configure ZK_DTSM_ZK_KERBEROS_PRINCIPAL with principal like 'nn/_h...@example.com', we have to user principal like 'nn/hostn...@example.com' here.) > Support configure ZK_DTSM_ZK_KERBEROS_PRINCIPAL in > ZKDelegationTokenSecretManager using principal with Schema /_HOST > > > Key: HADOOP-16069 > URL: https://issues.apache.org/jira/browse/HADOOP-16069 > Project: Hadoop Common > Issue Type: Improvement >Affects Versions: 3.1.0 >Reporter: luhuachao >Priority: Critical > Attachments: HADOOP-16069.001.patch > > > when use ZKDelegationTokenSecretManager with Kerberos, we cannot configure > ZK_DTSM_ZK_KERBEROS_PRINCIPAL with principal like 'nn/_h...@example.com', we > have to use principal like 'nn/hostn...@example.com' here. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-16069) Support configure ZK_DTSM_ZK_KERBEROS_PRINCIPAL in ZKDelegationTokenSecretManager using principal with Schema /_HOST
[ https://issues.apache.org/jira/browse/HADOOP-16069?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] luhuachao updated HADOOP-16069: --- Description: when use ZKDelegationTokenSecretManager with Kerberos, we cannot configure ZK_DTSM_ZK_KERBEROS_PRINCIPAL with principal like 'nn/_h...@example.com', we have to use principal like 'nn/hostn...@example.com' . (was: when use ZKDelegationTokenSecretManager with Kerberos, we cannot configure ZK_DTSM_ZK_KERBEROS_PRINCIPAL with principal like 'nn/_h...@example.com', we have to use principal like 'nn/hostn...@example.com' here.) > Support configure ZK_DTSM_ZK_KERBEROS_PRINCIPAL in > ZKDelegationTokenSecretManager using principal with Schema /_HOST > > > Key: HADOOP-16069 > URL: https://issues.apache.org/jira/browse/HADOOP-16069 > Project: Hadoop Common > Issue Type: Improvement >Affects Versions: 3.1.0 >Reporter: luhuachao >Priority: Critical > Attachments: HADOOP-16069.001.patch > > > when use ZKDelegationTokenSecretManager with Kerberos, we cannot configure > ZK_DTSM_ZK_KERBEROS_PRINCIPAL with principal like 'nn/_h...@example.com', we > have to use principal like 'nn/hostn...@example.com' . -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
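For context, the `_HOST` convention the reporter wants ZKDelegationTokenSecretManager to support is the substitution Hadoop applies elsewhere (via `SecurityUtil.getServerPrincipal`): the `_HOST` token in a configured Kerberos principal is replaced at startup with the node's fully-qualified hostname, so one config value works on every host. A standalone sketch of that substitution — `resolve` is a hypothetical helper and the principal names are examples, not the actual Hadoop code:

```java
// Illustration of the Kerberos _HOST substitution convention discussed above.
// Hadoop performs this via SecurityUtil.getServerPrincipal; this standalone
// mimic only shows the shape of the transformation.
public class HostPatternPrincipal {
    static final String HOST_PATTERN = "_HOST";

    // Replace the _HOST token in a principal such as "nn/_HOST@EXAMPLE.COM"
    // with the concrete fully-qualified hostname of the local machine.
    static String resolve(String principalConfig, String fqdn) {
        String[] parts = principalConfig.split("[/@]");
        if (parts.length == 3 && HOST_PATTERN.equals(parts[1])) {
            return parts[0] + "/" + fqdn.toLowerCase() + "@" + parts[2];
        }
        return principalConfig; // no _HOST token: use the principal as-is
    }

    public static void main(String[] args) {
        System.out.println(resolve("nn/_HOST@EXAMPLE.COM", "host1.example.com"));
        // nn/host1.example.com@EXAMPLE.COM
    }
}
```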
[jira] [Commented] (HADOOP-15686) Suppress bogus AbstractWadlGeneratorGrammarGenerator in KMS logs
[ https://issues.apache.org/jira/browse/HADOOP-15686?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16751787#comment-16751787 ] Wei-Chiu Chuang commented on HADOOP-15686: -- I figured it out after reviewing the patch at ATLAS-16. Essentially I need to install a SLF4JBridgeHandler when KMS starts. > Supress bogus AbstractWadlGeneratorGrammarGenerator in KMS logs > --- > > Key: HADOOP-15686 > URL: https://issues.apache.org/jira/browse/HADOOP-15686 > Project: Hadoop Common > Issue Type: Bug > Components: kms >Affects Versions: 3.0.0 >Reporter: Wei-Chiu Chuang >Assignee: Wei-Chiu Chuang >Priority: Major > Attachments: HADOOP-15686.001.patch > > > After we switched underlying system of KMS from Tomcat to Jetty, we started > to observe a lot of bogus messages like the follow [1]. It is harmless but > very annoying. Let's suppress it in log4j configuration. > [1] > {quote} > Aug 20, 2018 11:26:17 AM > com.sun.jersey.server.wadl.generators.WadlGeneratorJAXBGrammarGenerator > buildModelAndSchemas > SEVERE: Failed to generate the schema for the JAX-B elements > com.sun.xml.bind.v2.runtime.IllegalAnnotationsException: 2 counts of > IllegalAnnotationExceptions > java.util.Map is an interface, and JAXB can't handle interfaces. > this problem is related to the following location: > at java.util.Map > java.util.Map does not have a no-arg default constructor. 
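The messages above come out of java.util.logging (note the JUL-style "SEVERE:" header), which is why log4j configuration alone doesn't catch them; the approach Wei-Chiu describes installs SLF4J's `SLF4JBridgeHandler` at KMS startup so JUL records flow into SLF4J/log4j and can be filtered there. As a dependency-free sketch (not the actual patch), the same noise can also be silenced directly at the JUL level:

```java
import java.util.logging.Level;
import java.util.logging.Logger;

// Sketch: silence the specific java.util.logging logger that emits the bogus
// SEVERE messages (logger name taken from the stack trace above). The real
// patch instead bridges JUL into SLF4J with SLF4JBridgeHandler.
public class SilenceWadlLogger {
    static final String WADL_LOGGER =
        "com.sun.jersey.server.wadl.generators.WadlGeneratorJAXBGrammarGenerator";

    // Hold a strong reference so the configured logger is not garbage-collected
    // (JUL keeps loggers weakly; an unreferenced logger can lose its level).
    static Logger silenced;

    public static void install() {
        silenced = Logger.getLogger(WADL_LOGGER);
        silenced.setLevel(Level.OFF); // drop everything, including SEVERE
    }

    public static void main(String[] args) {
        install();
        System.out.println(Logger.getLogger(WADL_LOGGER).isLoggable(Level.SEVERE)); // false
    }
}
```

This would be called once during server startup, before the first Jersey request triggers WADL generation.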
[jira] [Commented] (HADOOP-15281) Distcp to add no-rename copy option
[ https://issues.apache.org/jira/browse/HADOOP-15281?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16751761#comment-16751761 ] Hadoop QA commented on HADOOP-15281: | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 15s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 3 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 14s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 17s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 1s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 11s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 8s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 14m 4s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 19s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 44s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 12s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 0s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 51s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 51s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 34s{color} | {color:green} hadoop-tools: The patch generated 0 new + 68 unchanged - 2 fixed = 68 total (was 70) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 58s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 12m 59s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 21s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 34s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 12m 49s{color} | {color:green} hadoop-distcp in the patch passed. 
{color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 4m 26s{color} | {color:green} hadoop-aws in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 24s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 76m 30s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f | | JIRA Issue | HADOOP-15281 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12956219/HADOOP-15281-003.patch | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux 30bd5cfdc894 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 4e0aa2c | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_191 | | findbugs | v3.1.0-RC1 | | Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/15843/testReport/ | |
[jira] [Updated] (HADOOP-16065) -Ddynamodb should be -Ddynamo in AWS SDK testing document
[ https://issues.apache.org/jira/browse/HADOOP-16065?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Akira Ajisaka updated HADOOP-16065: --- Resolution: Fixed Fix Version/s: 3.2.1 3.3.0 Status: Resolved (was: Patch Available) Committed to trunk and branch-3.2. Thanks [~ste...@apache.org] for the review. > -Ddynamodb should be -Ddynamo in AWS SDK testing document > - > > Key: HADOOP-16065 > URL: https://issues.apache.org/jira/browse/HADOOP-16065 > Project: Hadoop Common > Issue Type: Bug > Components: documentation >Reporter: Akira Ajisaka >Assignee: Akira Ajisaka >Priority: Minor > Labels: newbie > Fix For: 3.3.0, 3.2.1 > > Attachments: HADOOP-16065.01.patch, HADOOP-16065.02.patch, > HADOOP-16065.03.patch > > > https://github.com/apache/hadoop/blob/trunk/hadoop-tools/hadoop-aws/src/site/markdown/tools/hadoop-aws/testing.md > {{-Ddynamodb}} should be {{-Ddynamo}}. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-15686) Suppress bogus AbstractWadlGeneratorGrammarGenerator in KMS logs
[ https://issues.apache.org/jira/browse/HADOOP-15686?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Wei-Chiu Chuang updated HADOOP-15686: - Status: Patch Available (was: Open) > Supress bogus AbstractWadlGeneratorGrammarGenerator in KMS logs > --- > > Key: HADOOP-15686 > URL: https://issues.apache.org/jira/browse/HADOOP-15686 > Project: Hadoop Common > Issue Type: Bug > Components: kms >Affects Versions: 3.0.0 >Reporter: Wei-Chiu Chuang >Assignee: Wei-Chiu Chuang >Priority: Major > Attachments: HADOOP-15686.001.patch > > > After we switched underlying system of KMS from Tomcat to Jetty, we started > to observe a lot of bogus messages like the follow [1]. It is harmless but > very annoying. Let's suppress it in log4j configuration. > [1] > {quote} > Aug 20, 2018 11:26:17 AM > com.sun.jersey.server.wadl.generators.WadlGeneratorJAXBGrammarGenerator > buildModelAndSchemas > SEVERE: Failed to generate the schema for the JAX-B elements > com.sun.xml.bind.v2.runtime.IllegalAnnotationsException: 2 counts of > IllegalAnnotationExceptions > java.util.Map is an interface, and JAXB can't handle interfaces. > this problem is related to the following location: > at java.util.Map > java.util.Map does not have a no-arg default constructor. 
[jira] [Updated] (HADOOP-15686) Suppress bogus AbstractWadlGeneratorGrammarGenerator in KMS logs
[ https://issues.apache.org/jira/browse/HADOOP-15686?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Wei-Chiu Chuang updated HADOOP-15686: - Attachment: HADOOP-15686.001.patch > Supress bogus AbstractWadlGeneratorGrammarGenerator in KMS logs > --- > > Key: HADOOP-15686 > URL: https://issues.apache.org/jira/browse/HADOOP-15686 > Project: Hadoop Common > Issue Type: Bug > Components: kms >Affects Versions: 3.0.0 >Reporter: Wei-Chiu Chuang >Assignee: Wei-Chiu Chuang >Priority: Major > Attachments: HADOOP-15686.001.patch > > > After we switched underlying system of KMS from Tomcat to Jetty, we started > to observe a lot of bogus messages like the follow [1]. It is harmless but > very annoying. Let's suppress it in log4j configuration. > [1] > {quote} > Aug 20, 2018 11:26:17 AM > com.sun.jersey.server.wadl.generators.WadlGeneratorJAXBGrammarGenerator > buildModelAndSchemas > SEVERE: Failed to generate the schema for the JAX-B elements > com.sun.xml.bind.v2.runtime.IllegalAnnotationsException: 2 counts of > IllegalAnnotationExceptions > java.util.Map is an interface, and JAXB can't handle interfaces. > this problem is related to the following location: > at java.util.Map > java.util.Map does not have a no-arg default constructor. 
[jira] [Commented] (HADOOP-15998) Jar validation bash scripts don't work on Windows due to platform differences (colons in paths, \r\n)
[ https://issues.apache.org/jira/browse/HADOOP-15998?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16751728#comment-16751728 ] Hadoop QA commented on HADOOP-15998: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 19s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 2 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 39s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 21m 44s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 33s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 49s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 12m 52s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 44s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 29s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 38s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 25s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 25s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 30s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} shellcheck {color} | {color:red} 0m 2s{color} | {color:red} The patch generated 6 new + 0 unchanged - 0 fixed = 6 total (was 0) {color} | | {color:green}+1{color} | {color:green} shelldocs {color} | {color:green} 0m 19s{color} | {color:green} There were no new shelldocs issues. {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 2s{color} | {color:green} The patch has no ill-formed XML file. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 13m 43s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 35s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 18s{color} | {color:green} hadoop-client-check-invariants in the patch passed. 
{color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 17s{color} | {color:green} hadoop-client-check-test-invariants in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 32s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 56m 35s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f | | JIRA Issue | HADOOP-15998 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12956216/HADOOP-15998.v4.patch | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient xml shellcheck shelldocs | | uname | Linux a572bac5f647 3.13.0-153-generic #203-Ubuntu SMP Thu Jun 14 08:52:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 4e0aa2c | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_191 | | shellcheck | v0.4.6 | | shellcheck | https://builds.apache.org/job/PreCommit-HADOOP-Build/15842/artifact/out/diff-patch-shellcheck.txt | | Test Results |
[jira] [Commented] (HADOOP-16049) DistCp result has data and checksum mismatch when blocks per chunk > 0
[ https://issues.apache.org/jira/browse/HADOOP-16049?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16751909#comment-16751909 ] Kai Xie commented on HADOOP-16049: -- So the patch (branch-2-005) should be ready for review. FYI I'll be traveling, so please expect a delay in my responses. > DistCp result has data and checksum mismatch when blocks per chunk > 0 > -- > > Key: HADOOP-16049 > URL: https://issues.apache.org/jira/browse/HADOOP-16049 > Project: Hadoop Common > Issue Type: Bug > Components: tools/distcp >Affects Versions: 2.9.2 >Reporter: Kai Xie >Assignee: Kai Xie >Priority: Major > Attachments: HADOOP-16049-branch-2-003.patch, > HADOOP-16049-branch-2-003.patch, HADOOP-16049-branch-2-004.patch, > HADOOP-16049-branch-2-005.patch > > > In 2.9.2 RetriableFileCopyCommand.copyBytes, > {code:java} > int bytesRead = readBytes(inStream, buf, sourceOffset); > while (bytesRead >= 0) { > ... > if (action == FileAction.APPEND) { > sourceOffset += bytesRead; > } > ... // write to dst > bytesRead = readBytes(inStream, buf, sourceOffset); > }{code} > it does a positioned read, but the position (`sourceOffset` here) is never > updated when blocks per chunk is set to > 0 (which always disables the append > action). So for a chunk with offset != 0, it will keep copying the first few > bytes again and again, causing the result to have a data & checksum mismatch. > To reproduce this issue, in branch-2, update BLOCK_SIZE to 10240 (> default > copy buffer size) in class TestDistCpSystem and run it. > HADOOP-15292 has resolved the issue reported in this ticket in > trunk/branch-3.1/branch-3.2 by not using the positioned read, but it has not > been backported to branch-2 yet. > > -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
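The loop quoted above only advances the read position for the APPEND action, so when blocks per chunk > 0 every positioned read of a non-zero-offset chunk rereads the same leading bytes. A minimal standalone sketch of the corrected loop follows; it simulates the positioned read over a byte array rather than using Hadoop's actual RetriableFileCopyCommand, and all names here are illustrative, not Hadoop's.

```java
import java.io.ByteArrayOutputStream;
import java.util.Arrays;

public class PositionedCopySketch {
    // Simulates a positioned read (cf. FSDataInputStream.read(position, buf, ...))
    // over an in-memory source; returns -1 at end of data.
    static int readBytes(byte[] src, byte[] buf, long offset) {
        if (offset >= src.length) {
            return -1;
        }
        int n = (int) Math.min(buf.length, src.length - offset);
        System.arraycopy(src, (int) offset, buf, 0, n);
        return n;
    }

    static byte[] copyBytes(byte[] src, int bufSize) {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        byte[] buf = new byte[bufSize];
        long sourceOffset = 0;
        int bytesRead = readBytes(src, buf, sourceOffset);
        while (bytesRead >= 0) {
            out.write(buf, 0, bytesRead);
            // The fix: always advance the position after a positioned read,
            // not only when the action is APPEND.
            sourceOffset += bytesRead;
            bytesRead = readBytes(src, buf, sourceOffset);
        }
        return out.toByteArray();
    }

    public static void main(String[] args) {
        byte[] data = new byte[10240];  // > copy buffer size, as in the repro
        for (int i = 0; i < data.length; i++) {
            data[i] = (byte) i;
        }
        byte[] copy = copyBytes(data, 4096);
        System.out.println(Arrays.equals(data, copy));  // prints true
    }
}
```

With the original conditional in place, every read after the first would restart at offset 0 and the copy would repeat the first buffer's bytes, which is exactly the data/checksum mismatch described.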
[jira] [Commented] (HADOOP-15686) Suppress bogus AbstractWadlGeneratorGrammarGenerator messages in KMS logs
[ https://issues.apache.org/jira/browse/HADOOP-15686?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16751835#comment-16751835 ] Hadoop QA commented on HADOOP-15686: | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 13s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 51s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m 38s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 21s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 29s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 11m 33s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 38s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 25s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 24s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 13m 58s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 13m 58s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 27s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 35s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 11m 22s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 43s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 23s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 4m 14s{color} | {color:green} hadoop-kms in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 41s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 80m 40s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f | | JIRA Issue | HADOOP-15686 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12956246/HADOOP-15686.001.patch | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux 216e1787fdef 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 3c60303 | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_191 | | findbugs | v3.1.0-RC1 | | Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/15845/testReport/ | | Max. process+thread count | 470 (vs. ulimit of 1) | | modules | C: hadoop-common-project/hadoop-kms U: hadoop-common-project/hadoop-kms | | Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/15845/console | | Powered by | Apache Yetus 0.8.0 http://yetus.apache.org | This message was automatically generated. > Supress bogus AbstractWadlGeneratorGrammarGenerator in KMS logs > --- > > Key:
[jira] [Commented] (HADOOP-15711) Fix branch-2 builds
[ https://issues.apache.org/jira/browse/HADOOP-15711?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16751846#comment-16751846 ] Jonathan Hung commented on HADOOP-15711: In the qbt runs there's fatal errors in the logs such as {noformat} --- T H R E A D --- Current thread (0x7f3cc031d800): VMThread [stack: 0x7f3ca0dce000,0x7f3ca0ecf000] [id=23500] Stack: [0x7f3ca0dce000,0x7f3ca0ecf000], sp=0x7f3ca0ecdb10, free space=1022k Native frames: (J=compiled Java code, j=interpreted, Vv=VM code, C=native code) V [libjvm.so+0x966c25] V [libjvm.so+0x49b96e] V [libjvm.so+0x872b51] V [libjvm.so+0x96b69a] V [libjvm.so+0x96baf2] V [libjvm.so+0x7da992] VM_Operation (0x7f3c95bafad0): RevokeBias, mode: safepoint, requested by thread 0x7f3cc0744800 {noformat} Suspected it might be related to [https://bugs.openjdk.java.net/browse/JDK-6869327,] so I tried adding {{-XX:+UseCountedLoopSafepoints}} to one of the runs but it didn't seem to do anything Then tried porting HADOOP-14816 (and HADOOP-15610) to a test branch forked off branch-2, getting similar results as reported in HDFS-12711, here's a test run : [https://builds.apache.org/view/H-L/view/Hadoop/job/hadoop-qbt-branch2-java7-linux-x86-jhung/39/] (run with openjdk8) > Fix branch-2 builds > --- > > Key: HADOOP-15711 > URL: https://issues.apache.org/jira/browse/HADOOP-15711 > Project: Hadoop Common > Issue Type: Task >Reporter: Jonathan Hung >Priority: Critical > Attachments: HADOOP-15711.001.branch-2.patch > > > Branch-2 builds have been disabled for a while: > https://builds.apache.org/view/H-L/view/Hadoop/job/hadoop-qbt-branch2-java7-linux-x86/ > A test run here causes hdfs tests to hang: > https://builds.apache.org/view/H-L/view/Hadoop/job/hadoop-qbt-branch2-java7-linux-x86-jhung/4/ > Running hadoop-hdfs tests locally reveal some errors such > as:{noformat}[ERROR] > testComplexAppend2(org.apache.hadoop.hdfs.TestFileAppend2) Time elapsed: > 0.059 s <<< ERROR! 
> java.lang.OutOfMemoryError: unable to create new native thread > at java.lang.Thread.start0(Native Method) > at java.lang.Thread.start(Thread.java:714) > at > org.apache.hadoop.hdfs.server.namenode.FSImage.saveFSImageInAllDirs(FSImage.java:1164) > at > org.apache.hadoop.hdfs.server.namenode.FSImage.saveFSImageInAllDirs(FSImage.java:1128) > at > org.apache.hadoop.hdfs.server.namenode.FSImage.format(FSImage.java:174) > at > org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:1172) > at > org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:403) > at > org.apache.hadoop.hdfs.DFSTestUtil.formatNameNode(DFSTestUtil.java:234) > at > org.apache.hadoop.hdfs.MiniDFSCluster.createNameNodesAndSetConf(MiniDFSCluster.java:1080) > at > org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:883) > at > org.apache.hadoop.hdfs.MiniDFSCluster.(MiniDFSCluster.java:514) > at > org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:473) > at > org.apache.hadoop.hdfs.TestFileAppend2.testComplexAppend(TestFileAppend2.java:489) > at > org.apache.hadoop.hdfs.TestFileAppend2.testComplexAppend2(TestFileAppend2.java:543) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43){noformat} > I was able to get more tests passing locally by increasing the max user > process count on my machine. But the error suggests that there's an issue in > the tests themselves. Not sure if the error seen locally is the same reason > as why jenkins builds are failing, I wasn't able to confirm based on the > jenkins builds' lack of output. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Comment Edited] (HADOOP-15711) Fix branch-2 builds
[ https://issues.apache.org/jira/browse/HADOOP-15711?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16751846#comment-16751846 ] Jonathan Hung edited comment on HADOOP-15711 at 1/25/19 3:23 AM: - In the qbt runs there's fatal errors in the logs such as {noformat} --- T H R E A D --- Current thread (0x7f3cc031d800): VMThread [stack: 0x7f3ca0dce000,0x7f3ca0ecf000] [id=23500] Stack: [0x7f3ca0dce000,0x7f3ca0ecf000], sp=0x7f3ca0ecdb10, free space=1022k Native frames: (J=compiled Java code, j=interpreted, Vv=VM code, C=native code) V [libjvm.so+0x966c25] V [libjvm.so+0x49b96e] V [libjvm.so+0x872b51] V [libjvm.so+0x96b69a] V [libjvm.so+0x96baf2] V [libjvm.so+0x7da992] VM_Operation (0x7f3c95bafad0): RevokeBias, mode: safepoint, requested by thread 0x7f3cc0744800 {noformat} Suspected it might be related to [https://bugs.openjdk.java.net/browse/JDK-6869327,] so I tried adding {{-XX:+UseCountedLoopSafepoints}} to one of the runs but it didn't seem to do anything Then tried porting HADOOP-14816 (and HADOOP-15610) to a test branch forked off branch-2, getting similar results as reported in HDFS-12711, here's a test run : [https://builds.apache.org/view/H-L/view/Hadoop/job/hadoop-qbt-branch2-java7-linux-x86-jhung/39/] (run with openjdk8) - so at least it appears the unit tests are running to completion with openjdk8. 
was (Author: jhung): In the qbt runs there's fatal errors in the logs such as {noformat} --- T H R E A D --- Current thread (0x7f3cc031d800): VMThread [stack: 0x7f3ca0dce000,0x7f3ca0ecf000] [id=23500] Stack: [0x7f3ca0dce000,0x7f3ca0ecf000], sp=0x7f3ca0ecdb10, free space=1022k Native frames: (J=compiled Java code, j=interpreted, Vv=VM code, C=native code) V [libjvm.so+0x966c25] V [libjvm.so+0x49b96e] V [libjvm.so+0x872b51] V [libjvm.so+0x96b69a] V [libjvm.so+0x96baf2] V [libjvm.so+0x7da992] VM_Operation (0x7f3c95bafad0): RevokeBias, mode: safepoint, requested by thread 0x7f3cc0744800 {noformat} Suspected it might be related to [https://bugs.openjdk.java.net/browse/JDK-6869327,] so I tried adding {{-XX:+UseCountedLoopSafepoints}} to one of the runs but it didn't seem to do anything Then tried porting HADOOP-14816 (and HADOOP-15610) to a test branch forked off branch-2, getting similar results as reported in HDFS-12711, here's a test run : [https://builds.apache.org/view/H-L/view/Hadoop/job/hadoop-qbt-branch2-java7-linux-x86-jhung/39/] (run with openjdk8) > Fix branch-2 builds > --- > > Key: HADOOP-15711 > URL: https://issues.apache.org/jira/browse/HADOOP-15711 > Project: Hadoop Common > Issue Type: Task >Reporter: Jonathan Hung >Priority: Critical > Attachments: HADOOP-15711.001.branch-2.patch > > > Branch-2 builds have been disabled for a while: > https://builds.apache.org/view/H-L/view/Hadoop/job/hadoop-qbt-branch2-java7-linux-x86/ > A test run here causes hdfs tests to hang: > https://builds.apache.org/view/H-L/view/Hadoop/job/hadoop-qbt-branch2-java7-linux-x86-jhung/4/ > Running hadoop-hdfs tests locally reveal some errors such > as:{noformat}[ERROR] > testComplexAppend2(org.apache.hadoop.hdfs.TestFileAppend2) Time elapsed: > 0.059 s <<< ERROR! 
> java.lang.OutOfMemoryError: unable to create new native thread > at java.lang.Thread.start0(Native Method) > at java.lang.Thread.start(Thread.java:714) > at > org.apache.hadoop.hdfs.server.namenode.FSImage.saveFSImageInAllDirs(FSImage.java:1164) > at > org.apache.hadoop.hdfs.server.namenode.FSImage.saveFSImageInAllDirs(FSImage.java:1128) > at > org.apache.hadoop.hdfs.server.namenode.FSImage.format(FSImage.java:174) > at > org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:1172) > at > org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:403) > at > org.apache.hadoop.hdfs.DFSTestUtil.formatNameNode(DFSTestUtil.java:234) > at > org.apache.hadoop.hdfs.MiniDFSCluster.createNameNodesAndSetConf(MiniDFSCluster.java:1080) > at > org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:883) > at > org.apache.hadoop.hdfs.MiniDFSCluster.(MiniDFSCluster.java:514) > at > org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:473) > at > org.apache.hadoop.hdfs.TestFileAppend2.testComplexAppend(TestFileAppend2.java:489) > at > org.apache.hadoop.hdfs.TestFileAppend2.testComplexAppend2(TestFileAppend2.java:543) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > at >
[jira] [Commented] (HADOOP-15686) Suppress bogus AbstractWadlGeneratorGrammarGenerator messages in KMS logs
[ https://issues.apache.org/jira/browse/HADOOP-15686?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16751856#comment-16751856 ] Wei-Chiu Chuang commented on HADOOP-15686: -- [~ste...@apache.org] do you think you can help with a quick review? I verified TestKMS does not output these bogus log messages after the patch. > Supress bogus AbstractWadlGeneratorGrammarGenerator in KMS logs > --- > > Key: HADOOP-15686 > URL: https://issues.apache.org/jira/browse/HADOOP-15686 > Project: Hadoop Common > Issue Type: Bug > Components: kms >Affects Versions: 3.0.0 >Reporter: Wei-Chiu Chuang >Assignee: Wei-Chiu Chuang >Priority: Major > Attachments: HADOOP-15686.001.patch > > > After we switched underlying system of KMS from Tomcat to Jetty, we started > to observe a lot of bogus messages like the follow [1]. It is harmless but > very annoying. Let's suppress it in log4j configuration. > [1] > {quote} > Aug 20, 2018 11:26:17 AM > com.sun.jersey.server.wadl.generators.WadlGeneratorJAXBGrammarGenerator > buildModelAndSchemas > SEVERE: Failed to generate the schema for the JAX-B elements > com.sun.xml.bind.v2.runtime.IllegalAnnotationsException: 2 counts of > IllegalAnnotationExceptions > java.util.Map is an interface, and JAXB can't handle interfaces. > this problem is related to the following location: > at java.util.Map > java.util.Map does not have a no-arg default constructor. 
> this problem is related to the following location: > at java.util.Map > at > com.sun.xml.bind.v2.runtime.IllegalAnnotationsException$Builder.check(IllegalAnnotationsException.java:106) > at > com.sun.xml.bind.v2.runtime.JAXBContextImpl.getTypeInfoSet(JAXBContextImpl.java:489) > at > com.sun.xml.bind.v2.runtime.JAXBContextImpl.(JAXBContextImpl.java:319) > at > com.sun.xml.bind.v2.runtime.JAXBContextImpl$JAXBContextBuilder.build(JAXBContextImpl.java:1170) > at > com.sun.xml.bind.v2.ContextFactory.createContext(ContextFactory.java:145) > at sun.reflect.GeneratedMethodAccessor32.invoke(Unknown Source) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:498) > at javax.xml.bind.ContextFinder.newInstance(ContextFinder.java:247) > at javax.xml.bind.ContextFinder.newInstance(ContextFinder.java:234) > at javax.xml.bind.ContextFinder.find(ContextFinder.java:441) > at javax.xml.bind.JAXBContext.newInstance(JAXBContext.java:641) > at javax.xml.bind.JAXBContext.newInstance(JAXBContext.java:584) > at > com.sun.jersey.server.wadl.generators.WadlGeneratorJAXBGrammarGenerator.buildModelAndSchemas(WadlGeneratorJAXBGrammarGenerator.java:169) > at > com.sun.jersey.server.wadl.generators.AbstractWadlGeneratorGrammarGenerator.createExternalGrammar(AbstractWadlGeneratorGrammarGenerator.java:405) > at com.sun.jersey.server.wadl.WadlBuilder.generate(WadlBuilder.java:149) > at > com.sun.jersey.server.impl.wadl.WadlApplicationContextImpl.getApplication(WadlApplicationContextImpl.java:119) > at > com.sun.jersey.server.impl.wadl.WadlApplicationContextImpl.getApplication(WadlApplicationContextImpl.java:138) > at > com.sun.jersey.server.impl.wadl.WadlMethodFactory$WadlOptionsMethodDispatcher.dispatch(WadlMethodFactory.java:110) > at > com.sun.jersey.server.impl.uri.rules.HttpMethodRule.accept(HttpMethodRule.java:302) > at > 
com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147) > at > com.sun.jersey.server.impl.uri.rules.ResourceClassRule.accept(ResourceClassRule.java:108) > at > com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147) > at > com.sun.jersey.server.impl.uri.rules.RootResourceClassesRule.accept(RootResourceClassesRule.java:84) > at > com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1542) > at > com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1473) > at > com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1419) > at > com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1409) > at > com.sun.jersey.spi.container.servlet.WebComponent.service(WebComponent.java:409) > at > com.sun.jersey.spi.container.servlet.ServletContainer.service(ServletContainer.java:558) > at > com.sun.jersey.spi.container.servlet.ServletContainer.service(ServletContainer.java:733) > at javax.servlet.http.HttpServlet.service(HttpServlet.java:790) > at
[jira] [Comment Edited] (HADOOP-15711) Fix branch-2 builds
[ https://issues.apache.org/jira/browse/HADOOP-15711?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16751846#comment-16751846 ] Jonathan Hung edited comment on HADOOP-15711 at 1/25/19 3:24 AM: - In the qbt runs there's fatal errors in the logs such as {noformat} # # A fatal error has been detected by the Java Runtime Environment: # # Internal Error (safepoint.cpp:325), pid=30102, tid=140265819887360 # guarantee(PageArmed == 0) failed: invariant # # JRE version: OpenJDK Runtime Environment (7.0_181-b01) (build 1.7.0_181-b01) # Java VM: OpenJDK 64-Bit Server VM (24.181-b01 mixed mode linux-amd64 compressed oops) # Derivative: IcedTea 2.6.14 # Distribution: Ubuntu 14.04 LTS, package 7u181-2.6.14-0ubuntu0.3 # Core dump written. Default location: /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/core or core.30102 # # If you would like to submit a bug report, please include # instructions on how to reproduce the bug and visit: # http://icedtea.classpath.org/bugzilla # --- T H R E A D --- Current thread (0x7f923c31d800): VMThread [stack: 0x7f922e4e5000,0x7f922e5e6000] [id=30122] Stack: [0x7f922e4e5000,0x7f922e5e6000], sp=0x7f922e5e4b10, free space=1022k Native frames: (J=compiled Java code, j=interpreted, Vv=VM code, C=native code) V [libjvm.so+0x966c25] V [libjvm.so+0x49b96e] V [libjvm.so+0x872b51] V [libjvm.so+0x96b69a] V [libjvm.so+0x96baf2] V [libjvm.so+0x7da992] VM_Operation (0x7f9210b2b920): RevokeBias, mode: safepoint, requested by thread 0x7f923dd0f800 {noformat} Suspected it might be related to [https://bugs.openjdk.java.net/browse/JDK-6869327,] so I tried adding {{-XX:+UseCountedLoopSafepoints}} to one of the runs but it didn't seem to do anything Then tried porting HADOOP-14816 (and HADOOP-15610) to a test branch forked off branch-2, getting similar results as reported in HDFS-12711, here's a test run : [https://builds.apache.org/view/H-L/view/Hadoop/job/hadoop-qbt-branch2-java7-linux-x86-jhung/39/] (run with openjdk8) - so at least it 
appears the unit tests are running to completion with openjdk8. was (Author: jhung): In the qbt runs there's fatal errors in the logs such as {noformat} --- T H R E A D --- Current thread (0x7f3cc031d800): VMThread [stack: 0x7f3ca0dce000,0x7f3ca0ecf000] [id=23500] Stack: [0x7f3ca0dce000,0x7f3ca0ecf000], sp=0x7f3ca0ecdb10, free space=1022k Native frames: (J=compiled Java code, j=interpreted, Vv=VM code, C=native code) V [libjvm.so+0x966c25] V [libjvm.so+0x49b96e] V [libjvm.so+0x872b51] V [libjvm.so+0x96b69a] V [libjvm.so+0x96baf2] V [libjvm.so+0x7da992] VM_Operation (0x7f3c95bafad0): RevokeBias, mode: safepoint, requested by thread 0x7f3cc0744800 {noformat} Suspected it might be related to [https://bugs.openjdk.java.net/browse/JDK-6869327,] so I tried adding {{-XX:+UseCountedLoopSafepoints}} to one of the runs but it didn't seem to do anything Then tried porting HADOOP-14816 (and HADOOP-15610) to a test branch forked off branch-2, getting similar results as reported in HDFS-12711, here's a test run : [https://builds.apache.org/view/H-L/view/Hadoop/job/hadoop-qbt-branch2-java7-linux-x86-jhung/39/] (run with openjdk8) - so at least it appears the unit tests are running to completion with openjdk8. > Fix branch-2 builds > --- > > Key: HADOOP-15711 > URL: https://issues.apache.org/jira/browse/HADOOP-15711 > Project: Hadoop Common > Issue Type: Task >Reporter: Jonathan Hung >Priority: Critical > Attachments: HADOOP-15711.001.branch-2.patch > > > Branch-2 builds have been disabled for a while: > https://builds.apache.org/view/H-L/view/Hadoop/job/hadoop-qbt-branch2-java7-linux-x86/ > A test run here causes hdfs tests to hang: > https://builds.apache.org/view/H-L/view/Hadoop/job/hadoop-qbt-branch2-java7-linux-x86-jhung/4/ > Running hadoop-hdfs tests locally reveal some errors such > as:{noformat}[ERROR] > testComplexAppend2(org.apache.hadoop.hdfs.TestFileAppend2) Time elapsed: > 0.059 s <<< ERROR! 
> java.lang.OutOfMemoryError: unable to create new native thread > at java.lang.Thread.start0(Native Method) > at java.lang.Thread.start(Thread.java:714) > at > org.apache.hadoop.hdfs.server.namenode.FSImage.saveFSImageInAllDirs(FSImage.java:1164) > at > org.apache.hadoop.hdfs.server.namenode.FSImage.saveFSImageInAllDirs(FSImage.java:1128) > at > org.apache.hadoop.hdfs.server.namenode.FSImage.format(FSImage.java:174) > at > org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:1172) > at > org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:403) > at >
[jira] [Updated] (HADOOP-16069) Support configuring ZK_DTSM_ZK_KERBEROS_PRINCIPAL in ZKDelegationTokenSecretManager using a principal with the /_HOST pattern
[ https://issues.apache.org/jira/browse/HADOOP-16069?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] luhuachao updated HADOOP-16069: --- Component/s: common > Support configuring ZK_DTSM_ZK_KERBEROS_PRINCIPAL in > ZKDelegationTokenSecretManager using a principal with the /_HOST pattern > > > Key: HADOOP-16069 > URL: https://issues.apache.org/jira/browse/HADOOP-16069 > Project: Hadoop Common > Issue Type: Improvement > Components: common >Affects Versions: 3.1.0 >Reporter: luhuachao >Priority: Critical > Labels: kerberos > Attachments: HADOOP-16069.001.patch > > > When using ZKDelegationTokenSecretManager with Kerberos, we cannot configure > ZK_DTSM_ZK_KERBEROS_PRINCIPAL with a principal like 'nn/_h...@example.com'; we > have to use a principal like 'nn/hostn...@example.com'. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-16069) Support configuring ZK_DTSM_ZK_KERBEROS_PRINCIPAL in ZKDelegationTokenSecretManager using a principal with the /_HOST pattern
[ https://issues.apache.org/jira/browse/HADOOP-16069?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] luhuachao updated HADOOP-16069: --- Labels: kerberos (was: ) > Support configuring ZK_DTSM_ZK_KERBEROS_PRINCIPAL in > ZKDelegationTokenSecretManager using a principal with the /_HOST pattern > > > Key: HADOOP-16069 > URL: https://issues.apache.org/jira/browse/HADOOP-16069 > Project: Hadoop Common > Issue Type: Improvement >Affects Versions: 3.1.0 >Reporter: luhuachao >Priority: Critical > Labels: kerberos > Attachments: HADOOP-16069.001.patch > > > When using ZKDelegationTokenSecretManager with Kerberos, we cannot configure > ZK_DTSM_ZK_KERBEROS_PRINCIPAL with a principal like 'nn/_h...@example.com'; we > have to use a principal like 'nn/hostn...@example.com'. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
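For background on the feature being requested: elsewhere in Hadoop (cf. org.apache.hadoop.security.SecurityUtil.getServerPrincipal) a principal written as 'nn/_HOST@REALM' has its _HOST token replaced by the local canonical hostname at startup. The following is a standalone, illustrative sketch of that substitution, not Hadoop's actual implementation; the class and method names are hypothetical.

```java
public class HostPrincipalSketch {
    // Expands the _HOST token in the instance component of a Kerberos
    // principal of the form primary/instance@REALM, mimicking the
    // convention used by SecurityUtil.getServerPrincipal.
    static String replaceHostPattern(String principal, String fqdn) {
        String[] parts = principal.split("[/@]");
        if (parts.length == 3 && "_HOST".equals(parts[1])) {
            // Hostnames in Kerberos principals are conventionally lowercase.
            return parts[0] + "/" + fqdn.toLowerCase() + "@" + parts[2];
        }
        return principal;  // no _HOST token: return unchanged
    }

    public static void main(String[] args) {
        System.out.println(replaceHostPattern("nn/_HOST@EXAMPLE.COM",
            "host1.example.com"));
        // prints nn/host1.example.com@EXAMPLE.COM
    }
}
```

The improvement asks for ZKDelegationTokenSecretManager to apply this same expansion to ZK_DTSM_ZK_KERBEROS_PRINCIPAL instead of requiring a literal hostname in the configured principal.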
[jira] [Commented] (HADOOP-16065) -Ddynamodb should be -Ddynamo in AWS SDK testing document
[ https://issues.apache.org/jira/browse/HADOOP-16065?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16751795#comment-16751795 ] Hudson commented on HADOOP-16065: - SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #15825 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/15825/]) HADOOP-16065. -Ddynamodb should be -Ddynamo in AWS SDK testing document. (aajisaka: rev 3c60303ac59d3b6cc375e7ac10214fc36d330fa4) * (edit) hadoop-tools/hadoop-aws/src/site/markdown/tools/hadoop-aws/testing.md > -Ddynamodb should be -Ddynamo in AWS SDK testing document > - > > Key: HADOOP-16065 > URL: https://issues.apache.org/jira/browse/HADOOP-16065 > Project: Hadoop Common > Issue Type: Bug > Components: documentation >Reporter: Akira Ajisaka >Assignee: Akira Ajisaka >Priority: Minor > Labels: newbie > Fix For: 3.3.0, 3.2.1 > > Attachments: HADOOP-16065.01.patch, HADOOP-16065.02.patch, > HADOOP-16065.03.patch > > > https://github.com/apache/hadoop/blob/trunk/hadoop-tools/hadoop-aws/src/site/markdown/tools/hadoop-aws/testing.md > {{-Ddynamodb}} should be {{-Ddynamo}}. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-15205) maven release: missing source attachments for hadoop-mapreduce-client-core
[ https://issues.apache.org/jira/browse/HADOOP-15205?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16750884#comment-16750884 ] Sunil Govindan commented on HADOOP-15205: - Yes. release 3.2.0 is updated with this. [~kgyrtkirk] could we close this given documentation is updated > maven release: missing source attachments for hadoop-mapreduce-client-core > -- > > Key: HADOOP-15205 > URL: https://issues.apache.org/jira/browse/HADOOP-15205 > Project: Hadoop Common > Issue Type: Bug >Affects Versions: 2.9.0, 2.8.2, 2.8.3, 2.7.5, 3.0.0, 3.1.0, 3.0.1, 2.8.4, > 2.9.2, 2.8.5 >Reporter: Zoltan Haindrich >Priority: Major > Attachments: chk.bash > > > I wanted to use the source attachment; however it looks like since 2.7.5 that > artifact is not present at maven central ; it looks like the last release > which had source attachments / javadocs was 2.7.4 > http://central.maven.org/maven2/org/apache/hadoop/hadoop-mapreduce-client-core/2.7.4/ > http://central.maven.org/maven2/org/apache/hadoop/hadoop-mapreduce-client-core/2.7.5/ > this seems to be not limited to mapreduce; as the same change is present for > yarn-common as well > http://central.maven.org/maven2/org/apache/hadoop/hadoop-yarn-common/2.7.4/ > http://central.maven.org/maven2/org/apache/hadoop/hadoop-yarn-common/2.7.5/ > and also hadoop-common > http://central.maven.org/maven2/org/apache/hadoop/hadoop-common/2.7.4/ > http://central.maven.org/maven2/org/apache/hadoop/hadoop-common/2.7.5/ > http://central.maven.org/maven2/org/apache/hadoop/hadoop-common/3.0.0/ > http://central.maven.org/maven2/org/apache/hadoop/hadoop-common/3.1.0/ -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-15616) Incorporate Tencent Cloud COS File System Implementation
[ https://issues.apache.org/jira/browse/HADOOP-15616?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] YangY updated HADOOP-15616: --- Attachment: HADOOP-15616.008.patch > Incorporate Tencent Cloud COS File System Implementation > > > Key: HADOOP-15616 > URL: https://issues.apache.org/jira/browse/HADOOP-15616 > Project: Hadoop Common > Issue Type: New Feature > Components: fs/cos >Reporter: Junping Du >Assignee: YangY >Priority: Major > Attachments: HADOOP-15616.001.patch, HADOOP-15616.002.patch, > HADOOP-15616.003.patch, HADOOP-15616.004.patch, HADOOP-15616.005.patch, > HADOOP-15616.006.patch, HADOOP-15616.007.patch, HADOOP-15616.008.patch, > Tencent-COS-Integrated.pdf > > > Tencent Cloud is one of the top two cloud vendors in the China market, and its object store COS > ([https://intl.cloud.tencent.com/product/cos]) is widely used among China’s > cloud users, but it is currently hard for Hadoop users to access data stored on COS, > as there is no native support for COS in Hadoop. > This work aims to integrate Tencent Cloud COS with Hadoop/Spark/Hive, just > as was done before for S3, ADL, OSS, etc. With simple configuration, > Hadoop applications can read/write data from COS without any code change. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-15616) Incorporate Tencent Cloud COS File System Implementation
[ https://issues.apache.org/jira/browse/HADOOP-15616?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] YangY updated HADOOP-15616: --- Attachment: (was: HADOOP-15616.008.patch) > Incorporate Tencent Cloud COS File System Implementation > > > Key: HADOOP-15616 > URL: https://issues.apache.org/jira/browse/HADOOP-15616 > Project: Hadoop Common > Issue Type: New Feature > Components: fs/cos >Reporter: Junping Du >Assignee: YangY >Priority: Major > Attachments: HADOOP-15616.001.patch, HADOOP-15616.002.patch, > HADOOP-15616.003.patch, HADOOP-15616.004.patch, HADOOP-15616.005.patch, > HADOOP-15616.006.patch, HADOOP-15616.007.patch, Tencent-COS-Integrated.pdf > > > Tencent Cloud is one of the top two cloud vendors in the China market, and its object store COS > ([https://intl.cloud.tencent.com/product/cos]) is widely used among China’s > cloud users, but it is currently hard for Hadoop users to access data stored on COS, > as there is no native support for COS in Hadoop. > This work aims to integrate Tencent Cloud COS with Hadoop/Spark/Hive, just > as was done before for S3, ADL, OSS, etc. With simple configuration, > Hadoop applications can read/write data from COS without any code change. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-15616) Incorporate Tencent Cloud COS File System Implementation
[ https://issues.apache.org/jira/browse/HADOOP-15616?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] YangY updated HADOOP-15616: --- Attachment: HADOOP-15616.008.patch > Incorporate Tencent Cloud COS File System Implementation > > > Key: HADOOP-15616 > URL: https://issues.apache.org/jira/browse/HADOOP-15616 > Project: Hadoop Common > Issue Type: New Feature > Components: fs/cos > Reporter: Junping Du > Assignee: YangY > Priority: Major > Attachments: HADOOP-15616.001.patch, HADOOP-15616.002.patch, HADOOP-15616.003.patch, HADOOP-15616.004.patch, HADOOP-15616.005.patch, HADOOP-15616.006.patch, HADOOP-15616.007.patch, HADOOP-15616.008.patch, Tencent-COS-Integrated.pdf > > > Tencent Cloud is one of the top two cloud vendors in the China market, and its object store COS ([https://intl.cloud.tencent.com/product/cos]) is widely used among China’s cloud users; however, it is currently hard for Hadoop users to access data stored on COS because Hadoop has no native COS support. > This work aims to integrate Tencent Cloud COS with Hadoop/Spark/Hive, just as was done before for S3, ADL, OSS, etc. With simple configuration, Hadoop applications can read and write data on COS without any code change.
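The "simple configuration" the description promises would, in the style of Hadoop's existing object-store connectors (S3A, ADL, OSS), look roughly like the core-site.xml fragment below. The property names and the CosNFileSystem class name are illustrative assumptions modeled on those connectors, not the final merged names:

```xml
<!-- Hypothetical core-site.xml fragment: property names are illustrative,
     modeled on Hadoop's other object-store connectors. -->
<configuration>
  <property>
    <name>fs.cosn.impl</name>
    <value>org.apache.hadoop.fs.cosn.CosNFileSystem</value>
  </property>
  <property>
    <name>fs.cosn.userinfo.secretId</name>
    <value>YOUR_SECRET_ID</value>
  </property>
  <property>
    <name>fs.cosn.userinfo.secretKey</name>
    <value>YOUR_SECRET_KEY</value>
  </property>
  <property>
    <name>fs.cosn.bucket.region</name>
    <value>ap-guangzhou</value>
  </property>
</configuration>
```

With settings of this shape in place, a path such as cosn://bucket/path would be usable by any FileSystem-based application, which is what the "without any code change" claim above amounts to.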
[jira] [Created] (HADOOP-16071) Fix typo in DistCp Counters - Bandwidth in Bytes
Siyao Meng created HADOOP-16071: --- Summary: Fix typo in DistCp Counters - Bandwidth in Bytes Key: HADOOP-16071 URL: https://issues.apache.org/jira/browse/HADOOP-16071 Project: Hadoop Common Issue Type: Bug Components: tools/distcp Affects Versions: 3.2.0 Reporter: Siyao Meng Assignee: Siyao Meng {code:bash|title=DistCp MR Job Counters} ... DistCp Counters Bandwidth in Btyes=20971520 Bytes Copied=20971520 Bytes Expected=20971520 Files Copied=1 {code} {noformat} Bandwidth in Btyes -> Bandwidth in Bytes {noformat}
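Because the counter name is part of DistCp's textual job output, any automation that scrapes that output by literal string match is tied to the spelling, which is why fixing even a typo like this is worth flagging. A small self-contained Java sketch (the scrape helper is hypothetical, not part of DistCp; the counter strings mirror the job output quoted above):

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Illustrative only: automation keyed to the literal counter name sees the
// spelling fix as a missing counter until its pattern is updated.
public class CounterScrape {
    // Returns the counter value, or -1 if the named counter is absent.
    public static long scrape(String output, String counterName) {
        Matcher m = Pattern.compile(Pattern.quote(counterName) + "=(\\d+)")
                           .matcher(output);
        return m.find() ? Long.parseLong(m.group(1)) : -1L;
    }

    public static void main(String[] args) {
        String preFixOutput = "DistCp Counters\n"
            + "\tBandwidth in Btyes=20971520\n"
            + "\tBytes Copied=20971520\n";
        System.out.println(scrape(preFixOutput, "Bandwidth in Btyes")); // 20971520
        System.out.println(scrape(preFixOutput, "Bandwidth in Bytes")); // -1
    }
}
```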
[jira] [Updated] (HADOOP-15922) DelegationTokenAuthenticationFilter get wrong doAsUser since it does not decode URL
[ https://issues.apache.org/jira/browse/HADOOP-15922?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Takanobu Asanuma updated HADOOP-15922: -- Fix Version/s: (was: 3.1.2) 3.1.3 > DelegationTokenAuthenticationFilter get wrong doAsUser since it does not > decode URL > --- > > Key: HADOOP-15922 > URL: https://issues.apache.org/jira/browse/HADOOP-15922 > Project: Hadoop Common > Issue Type: Bug > Components: common, kms > Reporter: He Xiaoqiao > Assignee: He Xiaoqiao > Priority: Major > Fix For: 3.3.0, 3.2.1, 3.1.3 > > Attachments: HADOOP-15922.001.patch, HADOOP-15922.002.patch, HADOOP-15922.003.patch, HADOOP-15922.004.patch, HADOOP-15922.005.patch, HADOOP-15922.006.patch, HADOOP-15922.007.patch > > > DelegationTokenAuthenticationFilter gets the wrong doAsUser when the proxy user sent by the client is a complete Kerberos name (e.g., user/hostn...@realm.com, which is acceptable), because DelegationTokenAuthenticationFilter does not decode the DOAS parameter in the URL, which was encoded by {{URLEncoder}} at the client. > Taking KMS as an example: > a. KMSClientProvider creates a connection to the KMS server using DelegationTokenAuthenticatedURL#openConnection. > b. If KMSClientProvider acts for a doAsUser, it puts {{doas}} with the URL-encoded user as one parameter of the HTTP request. > {code:java} > // proxyuser > if (doAs != null) { > extraParams.put(DO_AS, URLEncoder.encode(doAs, "UTF-8")); > } > {code} > c. When the KMS server receives the request, it does not decode the proxy user. As a result, the KMS server gets the wrong proxy user whenever the proxy user is a complete Kerberos name or includes special characters, and authentication and authorization exceptions follow.
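The mismatch described above is easy to reproduce outside Hadoop: the client applies URLEncoder, so a server that skips the matching URLDecoder step sees the %-escaped literal instead of the principal. A minimal self-contained sketch (the principal string is a made-up example, and this is not the KMS code path itself; it only demonstrates the encode/decode asymmetry):

```java
import java.net.URLDecoder;
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;

public class DoAsRoundTrip {
    // What the client side does before putting doAs on the query string.
    public static String encode(String doAs) {
        return URLEncoder.encode(doAs, StandardCharsets.UTF_8);
    }

    // The step the issue says the server side is missing.
    public static String decode(String wire) {
        return URLDecoder.decode(wire, StandardCharsets.UTF_8);
    }

    public static void main(String[] args) {
        String doAs = "user/host.example.com@REALM.COM"; // made-up principal
        String wire = encode(doAs);
        // A server that skips decode() authorizes against the escaped literal.
        System.out.println(wire); // user%2Fhost.example.com%40REALM.COM
        System.out.println(decode(wire).equals(doAs)); // true
    }
}
```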
[jira] [Commented] (HADOOP-15616) Incorporate Tencent Cloud COS File System Implementation
[ https://issues.apache.org/jira/browse/HADOOP-15616?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16751064#comment-16751064 ] Hadoop QA commented on HADOOP-15616: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 23s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 17 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 6m 13s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 20m 57s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 22m 14s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 3m 54s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 50s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 20m 11s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Skipped patched modules with no Java source: hadoop-project hadoop-cloud-storage-project {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 5s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 54s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 27s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 51s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 20m 58s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 20m 58s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 3m 43s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 3m 0s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s{color} | {color:red} The patch 10 line(s) with tabs. {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 9s{color} | {color:green} The patch has no ill-formed XML file. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 14m 2s{color} | {color:green} patch has no errors when building and testing our client artifacts. 
{color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Skipped patched modules with no Java source: hadoop-project hadoop-cloud-storage-project {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 16s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 36s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 31s{color} | {color:green} hadoop-project in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 5m 1s{color} | {color:green} hadoop-aws in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 36s{color} | {color:green} hadoop-cos in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 39s{color} | {color:green} hadoop-cloud-storage-project in the patch passed. {color} | | {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 0m 52s{color} | {color:red} The patch generated 1 ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}133m 37s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker |
[jira] [Commented] (HADOOP-15481) Emit FairCallQueue stats as metrics
[ https://issues.apache.org/jira/browse/HADOOP-15481?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16751065#comment-16751065 ] Takanobu Asanuma commented on HADOOP-15481: --- Updated the fix versions. > Emit FairCallQueue stats as metrics > --- > > Key: HADOOP-15481 > URL: https://issues.apache.org/jira/browse/HADOOP-15481 > Project: Hadoop Common > Issue Type: Improvement > Components: metrics, rpc-server >Reporter: Erik Krogen >Assignee: Christopher Gregorian >Priority: Major > Fix For: 3.0.4, 3.3.0, 3.2.1, 3.1.3 > > Attachments: HADOOP-15481-branch-2.003.patch, HADOOP-15481.001.patch, > HADOOP-15481.001.patch, HADOOP-15481.002.patch, HADOOP-15481.003.patch > > > Currently FairCallQueue has some statistics which are exported via JMX: the > size of each queue, and the number of overflowed calls per queue. These are > useful statistics to track over time to determine, for example, if queues > need to be resized. We should emit them via the standard metrics system.
[jira] [Updated] (HADOOP-15481) Emit FairCallQueue stats as metrics
[ https://issues.apache.org/jira/browse/HADOOP-15481?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Takanobu Asanuma updated HADOOP-15481: -- Fix Version/s: 3.1.3 3.2.1 3.0.4 > Emit FairCallQueue stats as metrics > --- > > Key: HADOOP-15481 > URL: https://issues.apache.org/jira/browse/HADOOP-15481 > Project: Hadoop Common > Issue Type: Improvement > Components: metrics, rpc-server >Reporter: Erik Krogen >Assignee: Christopher Gregorian >Priority: Major > Fix For: 3.0.4, 3.3.0, 3.2.1, 3.1.3 > > Attachments: HADOOP-15481-branch-2.003.patch, HADOOP-15481.001.patch, > HADOOP-15481.001.patch, HADOOP-15481.002.patch, HADOOP-15481.003.patch > > > Currently FairCallQueue has some statistics which are exported via JMX: the > size of each queue, and the number of overflowed calls per queue. These are > useful statistics to track over time to determine, for example, if queues > need to be resized. We should emit them via the standard metrics system.
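Conceptually, the two statistics in question are a size gauge and an overflow counter per priority level. A self-contained sketch of that shape (illustrative only; it does not use Hadoop's metrics2 API, and the metric names are invented):

```java
import java.util.concurrent.atomic.AtomicLong;

// Sketch of per-priority-level queue stats: the size of each queue and the
// number of overflowed calls, the two values the issue proposes publishing
// through the metrics system in addition to JMX.
public class FairCallQueueStats {
    private final AtomicLong[] sizes;
    private final AtomicLong[] overflows;

    public FairCallQueueStats(int levels) {
        sizes = new AtomicLong[levels];
        overflows = new AtomicLong[levels];
        for (int i = 0; i < levels; i++) {
            sizes[i] = new AtomicLong();
            overflows[i] = new AtomicLong();
        }
    }

    // Record an offer: accepted calls grow the queue, rejected ones overflow.
    public void offered(int level, boolean accepted) {
        if (accepted) sizes[level].incrementAndGet();
        else overflows[level].incrementAndGet();
    }

    public void taken(int level) { sizes[level].decrementAndGet(); }

    // One "name=value" pair per stat, the flat form most metrics sinks accept.
    public String snapshot() {
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < sizes.length; i++) {
            sb.append("FairCallQueueSize_p").append(i).append('=')
              .append(sizes[i].get())
              .append(" FairCallQueueOverflowedCalls_p").append(i).append('=')
              .append(overflows[i].get()).append('\n');
        }
        return sb.toString();
    }
}
```

Tracking these as monotonic counters and gauges, rather than JMX-only values, is what lets operators graph them over time and decide when queues need resizing.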
[GitHub] aajisaka closed pull request #83: HDFS-9941. Do not log StandbyException on NN, other minor logging fixes.
aajisaka closed pull request #83: HDFS-9941. Do not log StandbyException on NN, other minor logging fixes. URL: https://github.com/apache/hadoop/pull/83 This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] aajisaka commented on issue #86: HADOOP-12916
aajisaka commented on issue #86: HADOOP-12916 URL: https://github.com/apache/hadoop/pull/86#issuecomment-457149117 This issue has been fixed. Closing.
[GitHub] aajisaka closed pull request #72: YARN-4563
aajisaka closed pull request #72: YARN-4563 URL: https://github.com/apache/hadoop/pull/72
[GitHub] aajisaka commented on issue #72: YARN-4563
aajisaka commented on issue #72: YARN-4563 URL: https://github.com/apache/hadoop/pull/72#issuecomment-457149943 This issue has been fixed by https://jira.apache.org/jira/browse/YARN-4653. Closing.
[GitHub] aajisaka commented on issue #59: HADOOP-12321
aajisaka commented on issue #59: HADOOP-12321 URL: https://github.com/apache/hadoop/pull/59#issuecomment-457151939 This issue has been fixed. Closing.
[GitHub] aajisaka closed pull request #59: HADOOP-12321
aajisaka closed pull request #59: HADOOP-12321 URL: https://github.com/apache/hadoop/pull/59
[jira] [Updated] (HADOOP-16071) Fix typo in DistCp Counters - Bandwidth in Bytes
[ https://issues.apache.org/jira/browse/HADOOP-16071?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Siyao Meng updated HADOOP-16071: Attachment: HADOOP-16071.001.patch Status: Patch Available (was: Open) > Fix typo in DistCp Counters - Bandwidth in Bytes > > > Key: HADOOP-16071 > URL: https://issues.apache.org/jira/browse/HADOOP-16071 > Project: Hadoop Common > Issue Type: Bug > Components: tools/distcp >Affects Versions: 3.2.0 >Reporter: Siyao Meng >Assignee: Siyao Meng >Priority: Major > Attachments: HADOOP-16071.001.patch > > > {code:bash|title=DistCp MR Job Counters} > ... > DistCp Counters > Bandwidth in Btyes=20971520 > Bytes Copied=20971520 > Bytes Expected=20971520 > Files Copied=1 > {code} > {noformat} > Bandwidth in Btyes -> Bandwidth in Bytes > {noformat}
[GitHub] aajisaka closed pull request #86: HADOOP-12916
aajisaka closed pull request #86: HADOOP-12916 URL: https://github.com/apache/hadoop/pull/86
[GitHub] aajisaka commented on issue #83: HDFS-9941. Do not log StandbyException on NN, other minor logging fixes.
aajisaka commented on issue #83: HDFS-9941. Do not log StandbyException on NN, other minor logging fixes. URL: https://github.com/apache/hadoop/pull/83#issuecomment-457149269 This issue has been fixed. Closing.
[GitHub] aajisaka commented on issue #49: HDFS-9443. Disabling HDFS client socket cache causes logging message …
aajisaka commented on issue #49: HDFS-9443. Disabling HDFS client socket cache causes logging message … URL: https://github.com/apache/hadoop/pull/49#issuecomment-457150962 This issue has been fixed. Closing.
[jira] [Commented] (HADOOP-15616) Incorporate Tencent Cloud COS File System Implementation
[ https://issues.apache.org/jira/browse/HADOOP-15616?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16751066#comment-16751066 ] Hadoop QA commented on HADOOP-15616: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 21s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 17 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 1m 11s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 22m 27s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 17m 11s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 3m 28s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 31s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 17m 38s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Skipped patched modules with no Java source: hadoop-project hadoop-cloud-storage-project {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 51s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 9s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 25s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 54s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 21m 3s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 21m 3s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 4m 0s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 25s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s{color} | {color:red} The patch 10 line(s) with tabs. {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 10s{color} | {color:green} The patch has no ill-formed XML file. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 12m 57s{color} | {color:green} patch has no errors when building and testing our client artifacts. 
{color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Skipped patched modules with no Java source: hadoop-project hadoop-cloud-storage-project {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 8s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 57s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 20s{color} | {color:green} hadoop-project in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 4m 44s{color} | {color:green} hadoop-aws in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 28s{color} | {color:green} hadoop-cos in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 31s{color} | {color:green} hadoop-cloud-storage-project in the patch passed. {color} | | {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 0m 50s{color} | {color:red} The patch generated 1 ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}117m 42s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker |
[jira] [Commented] (HADOOP-15616) Incorporate Tencent Cloud COS File System Implementation
[ https://issues.apache.org/jira/browse/HADOOP-15616?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16750981#comment-16750981 ] YangY commented on HADOOP-15616: [~Sammi] Thanks for your comment on this work. I have carefully revised the code based on your suggestions. Please check again. > Incorporate Tencent Cloud COS File System Implementation > > > Key: HADOOP-15616 > URL: https://issues.apache.org/jira/browse/HADOOP-15616 > Project: Hadoop Common > Issue Type: New Feature > Components: fs/cos > Reporter: Junping Du > Assignee: YangY > Priority: Major > Attachments: HADOOP-15616.001.patch, HADOOP-15616.002.patch, HADOOP-15616.003.patch, HADOOP-15616.004.patch, HADOOP-15616.005.patch, HADOOP-15616.006.patch, HADOOP-15616.007.patch, HADOOP-15616.008.patch, Tencent-COS-Integrated.pdf > > > Tencent Cloud is one of the top two cloud vendors in the China market, and its object store COS ([https://intl.cloud.tencent.com/product/cos]) is widely used among China’s cloud users; however, it is currently hard for Hadoop users to access data stored on COS because Hadoop has no native COS support. > This work aims to integrate Tencent Cloud COS with Hadoop/Spark/Hive, just as was done before for S3, ADL, OSS, etc. With simple configuration, Hadoop applications can read and write data on COS without any code change.
[GitHub] aajisaka commented on issue #148: HDFS-11060. Make DEFAULT_MAX_CORRUPT_FILEBLOCKS_RETURNED configurable
aajisaka commented on issue #148: HDFS-11060. Make DEFAULT_MAX_CORRUPT_FILEBLOCKS_RETURNED configurable URL: https://github.com/apache/hadoop/pull/148#issuecomment-457148077 This issue is fixed. Closing.
[GitHub] aajisaka closed pull request #148: HDFS-11060. Make DEFAULT_MAX_CORRUPT_FILEBLOCKS_RETURNED configurable
aajisaka closed pull request #148: HDFS-11060. Make DEFAULT_MAX_CORRUPT_FILEBLOCKS_RETURNED configurable URL: https://github.com/apache/hadoop/pull/148
[GitHub] aajisaka commented on issue #150: HADOOP-13773, set heap args for HADOOP_CLIENT_OPTS when HADOOP_HEAPSI…
aajisaka commented on issue #150: HADOOP-13773, set heap args for HADOOP_CLIENT_OPTS when HADOOP_HEAPSI… URL: https://github.com/apache/hadoop/pull/150#issuecomment-457148495 This issue has been fixed. Closing.
[GitHub] aajisaka closed pull request #150: HADOOP-13773, set heap args for HADOOP_CLIENT_OPTS when HADOOP_HEAPSI…
aajisaka closed pull request #150: HADOOP-13773, set heap args for HADOOP_CLIENT_OPTS when HADOOP_HEAPSI… URL: https://github.com/apache/hadoop/pull/150
[GitHub] aajisaka closed pull request #43: HDFS-9144: libhdfs++ refactoring
aajisaka closed pull request #43: HDFS-9144: libhdfs++ refactoring URL: https://github.com/apache/hadoop/pull/43
[GitHub] aajisaka closed pull request #49: HDFS-9443. Disabling HDFS client socket cache causes logging message …
aajisaka closed pull request #49: HDFS-9443. Disabling HDFS client socket cache causes logging message … URL: https://github.com/apache/hadoop/pull/49
[GitHub] aajisaka commented on issue #43: HDFS-9144: libhdfs++ refactoring
aajisaka commented on issue #43: HDFS-9144: libhdfs++ refactoring URL: https://github.com/apache/hadoop/pull/43#issuecomment-457150806 This issue has been fixed. Closing.
[GitHub] aajisaka commented on issue #7: YARN-1964 Launching containers from docker
aajisaka commented on issue #7: YARN-1964 Launching containers from docker URL: https://github.com/apache/hadoop/pull/7#issuecomment-457152999 This issue has been fixed. Closing.
[GitHub] aajisaka closed pull request #7: YARN-1964 Launching containers from docker
aajisaka closed pull request #7: YARN-1964 Launching containers from docker URL: https://github.com/apache/hadoop/pull/7
[GitHub] aajisaka commented on issue #6: YARN-1964 Launching containers from docker
aajisaka commented on issue #6: YARN-1964 Launching containers from docker URL: https://github.com/apache/hadoop/pull/6#issuecomment-457153081 This issue has been fixed. Closing.
[GitHub] aajisaka closed pull request #6: YARN-1964 Launching containers from docker
aajisaka closed pull request #6: YARN-1964 Launching containers from docker URL: https://github.com/apache/hadoop/pull/6
[jira] [Commented] (HADOOP-15922) DelegationTokenAuthenticationFilter get wrong doAsUser since it does not decode URL
[ https://issues.apache.org/jira/browse/HADOOP-15922?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16751062#comment-16751062 ] Takanobu Asanuma commented on HADOOP-15922: --- Updated the fix versions since branch-3.1 is 3.1.3. > DelegationTokenAuthenticationFilter get wrong doAsUser since it does not > decode URL > --- > > Key: HADOOP-15922 > URL: https://issues.apache.org/jira/browse/HADOOP-15922 > Project: Hadoop Common > Issue Type: Bug > Components: common, kms >Reporter: He Xiaoqiao >Assignee: He Xiaoqiao >Priority: Major > Fix For: 3.3.0, 3.2.1, 3.1.3 > > Attachments: HADOOP-15922.001.patch, HADOOP-15922.002.patch, > HADOOP-15922.003.patch, HADOOP-15922.004.patch, HADOOP-15922.005.patch, > HADOOP-15922.006.patch, HADOOP-15922.007.patch > > > DelegationTokenAuthenticationFilter get wrong doAsUser when proxy user from > client is complete kerberos name (e.g., user/hostn...@realm.com, actually it > is acceptable), because DelegationTokenAuthenticationFilter does not decode > DOAS parameter in URL which is encoded by {{URLEncoder}} at client. > e.g. KMS as example: > a. KMSClientProvider creates connection to KMS Server using > DelegationTokenAuthenticatedURL#openConnection. > b. If KMSClientProvider is a doAsUser, KMSClientProvider will put {{doas}} > with url encoded user as one parameter of http request. > {code:java} > // proxyuser > if (doAs != null) { > extraParams.put(DO_AS, URLEncoder.encode(doAs, "UTF-8")); > } > {code} > c. when KMS server receives the request, it does not decode the proxy user. > As result, KMS Server will get the wrong proxy user if this proxy user is > complete Kerberos Name or it includes some special character. Some other > authentication and authorization exception will throws next to it. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
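The failure mode described above can be shown in a minimal stand-alone sketch (the class and method names here are illustrative, not the actual Hadoop patch): the client URL-encodes the doAs principal before putting it on the query string, so unless the server decodes the parameter exactly once, a full Kerberos name arrives mangled.

```java
import java.io.UnsupportedEncodingException;
import java.net.URLDecoder;
import java.net.URLEncoder;

// Sketch of the HADOOP-15922 round trip; DoAsRoundTrip is a hypothetical
// name, not a Hadoop class.
public class DoAsRoundTrip {
    // mirrors what KMSClientProvider does with the DO_AS parameter
    static String encodeForQuery(String doAs) throws UnsupportedEncodingException {
        return URLEncoder.encode(doAs, "UTF-8");
    }

    // the step the filter was missing: decode before using as the proxy user
    static String decodeOnServer(String rawParam) throws UnsupportedEncodingException {
        return URLDecoder.decode(rawParam, "UTF-8");
    }

    public static void main(String[] args) throws Exception {
        String principal = "user/hostname@REALM.COM";
        String onTheWire = encodeForQuery(principal);
        // without the decode, the server would treat the percent-encoded
        // form as the proxy user and authorization checks would fail
        System.out.println("wire form: " + onTheWire);
        System.out.println("decoded:   " + decodeOnServer(onTheWire));
    }
}
```

The `/` and `@` in a complete Kerberos name are exactly the characters `URLEncoder` percent-encodes, which is why simple user names never triggered the bug.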
[jira] [Commented] (HADOOP-16071) Fix typo in DistCp Counters - Bandwidth in Bytes
[ https://issues.apache.org/jira/browse/HADOOP-16071?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16751152#comment-16751152 ] Hadoop QA commented on HADOOP-16071:

-1 overall

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 0m 14s | Docker mode activated. |
|| || || || Prechecks ||
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| -1 | test4tests | 0m 0s | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. |
|| || || || trunk Compile Tests ||
| +1 | mvninstall | 23m 22s | trunk passed |
| +1 | compile | 0m 24s | trunk passed |
| +1 | mvnsite | 0m 29s | trunk passed |
| +1 | shadedclient | 35m 14s | branch has no errors when building and testing our client artifacts. |
| +1 | javadoc | 0m 21s | trunk passed |
|| || || || Patch Compile Tests ||
| +1 | mvninstall | 0m 28s | the patch passed |
| +1 | compile | 0m 23s | the patch passed |
| +1 | javac | 0m 23s | the patch passed |
| +1 | mvnsite | 0m 24s | the patch passed |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | shadedclient | 12m 22s | patch has no errors when building and testing our client artifacts. |
| +1 | javadoc | 0m 16s | the patch passed |
|| || || || Other Tests ||
| +1 | unit | 11m 53s | hadoop-distcp in the patch passed. |
| +1 | asflicense | 0m 22s | The patch does not generate ASF License warnings. |
| | | 62m 57s | |

|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HADOOP-16071 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12956137/HADOOP-16071.001.patch |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient |
| uname | Linux 31f8b6826d2e 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / f3d8265 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_191 |
| Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/15837/testReport/ |
| Max. process+thread count | 441 (vs. ulimit of 1) |
| modules | C: hadoop-tools/hadoop-distcp U: hadoop-tools/hadoop-distcp |
| Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/15837/console |
| Powered by | Apache Yetus 0.8.0 http://yetus.apache.org |

This message was automatically generated.

> Fix typo in DistCp Counters - Bandwidth in Bytes
[jira] [Commented] (HADOOP-10850) KerberosAuthenticator should not do the SPNEGO handshake
[ https://issues.apache.org/jira/browse/HADOOP-10850?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16751232#comment-16751232 ] Steve Loughran commented on HADOOP-10850: - Anyone know if this problem still exists on newer JVMs? > KerberosAuthenticator should not do the SPNEGO handshake > > > Key: HADOOP-10850 > URL: https://issues.apache.org/jira/browse/HADOOP-10850 > Project: Hadoop Common > Issue Type: Bug > Components: security >Affects Versions: 2.4.1 >Reporter: Alejandro Abdelnur >Assignee: Alejandro Abdelnur >Priority: Major > Attachments: HADOOP-10850.patch, testFailures.png, testorder.patch > > > As mentioned in HADOOP-10453, the JDK automatically does a SPNEGO handshake > when opening a connection with a URL within a Kerberos login context, there > is no need to do the SPNEGO handshake in the {{KerberosAuthenticator}}, > simply extract the auth token (hadoop-auth cookie) and do the fallback if > necessary. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-15781) S3A assumed role tests failing due to changed error text in AWS exceptions
[ https://issues.apache.org/jira/browse/HADOOP-15781?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran updated HADOOP-15781: Resolution: Fixed Fix Version/s: 3.1.2 Status: Resolved (was: Patch Available) > S3A assumed role tests failing due to changed error text in AWS exceptions > -- > > Key: HADOOP-15781 > URL: https://issues.apache.org/jira/browse/HADOOP-15781 > Project: Hadoop Common > Issue Type: Bug > Components: fs/s3, test >Affects Versions: 3.1.0, 3.2.0 > Environment: some of the fault-catching tests in {{ITestAssumeRole}} > are failing as the SDK update of HADOOP-15642 changed the text. Fix the > tests, perhaps by removing the text check entirely > —it's clearly too brittle >Reporter: Steve Loughran >Assignee: Steve Loughran >Priority: Major > Fix For: 3.2.0, 3.1.2 > > Attachments: HADOOP-15781-001.patch, HADOOP-15781-branch-3.1-002.patch > > > This is caused by HADOOP-15642 but I'd missed it because I'd been playing > with assumed roles locally (restricting their rights) and mistook the > failures for "steve's misconfigured the test role", not "the SDK -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-16071) Fix typo in DistCp Counters - Bandwidth in Bytes
[ https://issues.apache.org/jira/browse/HADOOP-16071?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16751328#comment-16751328 ] Arpit Agarwal commented on HADOOP-16071: +1
[jira] [Commented] (HADOOP-16071) Fix typo in DistCp Counters - Bandwidth in Bytes
[ https://issues.apache.org/jira/browse/HADOOP-16071?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16751363#comment-16751363 ] Steve Loughran commented on HADOOP-16071: - This is actually something I've seen for a while and wondered about fixing for the following reason: Are we all confident that changing this counter name isn't going to break things?
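The compatibility concern is concrete for any automation that parses the text counters. A hedged sketch (`DistCpCounterParse` is a hypothetical helper, not Hadoop code) of how such a script can tolerate both the old misspelled name and the fixed one:

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Illustrative only: a parser that keys on the literal misspelled counter
// name silently stops matching once the typo is fixed; accepting both
// spellings keeps the automation working across Hadoop versions.
public class DistCpCounterParse {
    private static final Pattern BANDWIDTH =
        Pattern.compile("Bandwidth in B(?:tyes|ytes)=(\\d+)");

    // returns the counter value, or -1 if the line is absent
    static long bandwidthBytes(String countersText) {
        Matcher m = BANDWIDTH.matcher(countersText);
        return m.find() ? Long.parseLong(m.group(1)) : -1L;
    }

    public static void main(String[] args) {
        System.out.println(bandwidthBytes("Bandwidth in Btyes=20971520")); // pre-fix output
        System.out.println(bandwidthBytes("Bandwidth in Bytes=20971520")); // post-fix output
    }
}
```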
[jira] [Commented] (HADOOP-15281) Distcp to add no-rename copy option
[ https://issues.apache.org/jira/browse/HADOOP-15281?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16751375#comment-16751375 ] Andrew Olson commented on HADOOP-15281: --- [~ste...@apache.org] thanks, I've attached the patch.
[jira] [Commented] (HADOOP-15281) Distcp to add no-rename copy option
[ https://issues.apache.org/jira/browse/HADOOP-15281?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16751310#comment-16751310 ] Steve Loughran commented on HADOOP-15281: - oh, let me give you the permission; restrictions are there to keep out spam not code
[jira] [Commented] (HADOOP-15281) Distcp to add no-rename copy option
[ https://issues.apache.org/jira/browse/HADOOP-15281?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16751318#comment-16751318 ] Steve Loughran commented on HADOOP-15281: - try now
[jira] [Commented] (HADOOP-16065) -Ddynamodb should be -Ddynamo in AWS SDK testing document
[ https://issues.apache.org/jira/browse/HADOOP-16065?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16751323#comment-16751323 ] Steve Loughran commented on HADOOP-16065: - LGTM +1 > -Ddynamodb should be -Ddynamo in AWS SDK testing document > - > > Key: HADOOP-16065 > URL: https://issues.apache.org/jira/browse/HADOOP-16065 > Project: Hadoop Common > Issue Type: Bug > Components: documentation >Reporter: Akira Ajisaka >Assignee: Akira Ajisaka >Priority: Minor > Labels: newbie > Attachments: HADOOP-16065.01.patch, HADOOP-16065.02.patch, > HADOOP-16065.03.patch > > > https://github.com/apache/hadoop/blob/trunk/hadoop-tools/hadoop-aws/src/site/markdown/tools/hadoop-aws/testing.md > {{-Ddynamodb}} should be {{-Ddynamo}}. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-16055) Upgrade AWS SDK to fix license issue in branch-2.8 and branch-2.7
[ https://issues.apache.org/jira/browse/HADOOP-16055?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16751329#comment-16751329 ] Steve Loughran commented on HADOOP-16055: - in that case, +1 what was up with the root dir tests? Sometimes, because of S3's inconsistency, they can be unhappy. > Upgrade AWS SDK to fix license issue in branch-2.8 and branch-2.7 > - > > Key: HADOOP-16055 > URL: https://issues.apache.org/jira/browse/HADOOP-16055 > Project: Hadoop Common > Issue Type: Bug > Components: fs/s3 >Reporter: Akira Ajisaka >Assignee: Akira Ajisaka >Priority: Blocker > Attachments: HADOOP-16055-branch-2-01.patch, > HADOOP-16055-branch-2.8-01.patch, HADOOP-16055-branch-2.8-02.patch > > > Per HADOOP-13794, we must exclude the JSON license. > The upgrade will contain incompatible changes, however, the license issue is > much more important. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-16049) DistCp result has data and checksum mismatch when blocks per chunk > 0
[ https://issues.apache.org/jira/browse/HADOOP-16049?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16751336#comment-16751336 ] Steve Loughran commented on HADOOP-16049: - We can't update the JDK as branch-2 is java 7: testing with jdk7 keeps us honest > DistCp result has data and checksum mismatch when blocks per chunk > 0 > -- > > Key: HADOOP-16049 > URL: https://issues.apache.org/jira/browse/HADOOP-16049 > Project: Hadoop Common > Issue Type: Bug > Components: tools/distcp >Affects Versions: 2.9.2 >Reporter: Kai Xie >Assignee: Kai Xie >Priority: Major > Attachments: HADOOP-16049-branch-2-003.patch, > HADOOP-16049-branch-2-003.patch, HADOOP-16049-branch-2-004.patch, > HADOOP-16049-branch-2-005.patch > > > In 2.9.2 RetriableFileCopyCommand.copyBytes, > {code:java} > int bytesRead = readBytes(inStream, buf, sourceOffset); > while (bytesRead >= 0) { > ... > if (action == FileAction.APPEND) { > sourceOffset += bytesRead; > } > ... // write to dst > bytesRead = readBytes(inStream, buf, sourceOffset); > }{code} > it does a positioned read but the position (`sourceOffset` here) is never > updated when blocks per chunk is set to > 0 (which always disables append > action). So for chunk with offset != 0, it will keep copying the first few > bytes again and again, causing result to have data & checksum mismatch. > To re-produce this issue, in branch-2, update BLOCK_SIZE to 10240 (> default > copy buffer size) in class TestDistCpSystem and run it. > HADOOP-15292 has resolved the issue reported in this ticket in > trunk/branch-3.1/branch-3.2 by not using the positioned read, but has not > been backported to branch-2 yet > > -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
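The off-by-offset behaviour in the quoted loop can be reproduced in a toy model (class and method names are illustrative stand-ins, not the Hadoop source): a positioned read whose position is only advanced for the APPEND action keeps re-reading the start of the chunk.

```java
// Toy model of the HADOOP-16049 bug: when advanceOffset is false (the
// blocks-per-chunk > 0 case in branch-2), the copy re-reads the first
// buffer's worth of bytes again and again.
public class ChunkOffsetBug {
    // stands in for the positioned readBytes(inStream, buf, sourceOffset)
    static int readBytes(byte[] src, byte[] buf, long position) {
        int n = (int) Math.min(buf.length, src.length - position);
        System.arraycopy(src, (int) position, buf, 0, n);
        return n;
    }

    static byte[] copy(byte[] src, int bufSize, boolean advanceOffset) {
        byte[] out = new byte[src.length];
        byte[] buf = new byte[bufSize];
        long sourceOffset = 0;
        int written = 0;
        while (written < out.length) {
            int n = readBytes(src, buf, sourceOffset);
            System.arraycopy(buf, 0, out, written, n);
            written += n;
            if (advanceOffset) {
                sourceOffset += n; // the fix: always track the position read so far
            }
        }
        return out;
    }

    public static void main(String[] args) {
        byte[] src = "ABCDEFGH".getBytes();
        System.out.println(new String(copy(src, 4, true)));  // ABCDEFGH
        System.out.println(new String(copy(src, 4, false))); // ABCDABCD
    }
}
```

This is the same checksum-mismatch symptom the ticket describes: the destination file has the right length but the wrong bytes past the first buffer.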
[jira] [Assigned] (HADOOP-15281) Distcp to add no-rename copy option
[ https://issues.apache.org/jira/browse/HADOOP-15281?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrew Olson reassigned HADOOP-15281: - Assignee: Andrew Olson
[jira] [Updated] (HADOOP-15281) Distcp to add no-rename copy option
[ https://issues.apache.org/jira/browse/HADOOP-15281?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrew Olson updated HADOOP-15281: -- Attachment: HADOOP-15281-001.patch
[jira] [Commented] (HADOOP-15281) Distcp to add no-rename copy option
[ https://issues.apache.org/jira/browse/HADOOP-15281?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16751403#comment-16751403 ] Steve Loughran commented on HADOOP-15281: - I see it. Hit the "submit patch" button and jenkins will build it
[jira] [Commented] (HADOOP-15566) Remove HTrace support
[ https://issues.apache.org/jira/browse/HADOOP-15566?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16751408#comment-16751408 ] Colin P. McCabe commented on HADOOP-15566: -- HTrace *is* "a lightweight Hadoop API for the tracing where multiple implementation can be plugged in." :) The "H" originally stood for "Hadoop." So you could just move the HTrace API classes into hadoop-common, and then have people continue using Zipkin or something as the backend. And / or write an opentracing backend to interface with those systems. > Remove HTrace support > - > > Key: HADOOP-15566 > URL: https://issues.apache.org/jira/browse/HADOOP-15566 > Project: Hadoop Common > Issue Type: Improvement > Components: metrics >Affects Versions: 3.1.0 >Reporter: Todd Lipcon >Priority: Major > Labels: security > Attachments: Screen Shot 2018-06-29 at 11.59.16 AM.png, > ss-trace-s3a.png > > > The HTrace incubator project has voted to retire itself and won't be making > further releases. The Hadoop project currently has various hooks with HTrace. > It seems in some cases (eg HDFS-13702) these hooks have had measurable > performance overhead. Given these two factors, I think we should consider > removing the HTrace integration. If there is someone willing to do the work, > replacing it with OpenTracing might be a better choice since there is an > active community. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
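The plug-in idea in the comment above can be sketched in a few lines. This is illustrative only: none of these types exist in Hadoop under these names; they just show a lightweight tracing API owned by common code with the backend (no-op, Zipkin, an OpenTracing bridge, ...) chosen at runtime.

```java
// Hypothetical sketch of "a lightweight tracing API where multiple
// implementations can be plugged in".
public class PluggableTracing {
    interface TraceScope extends AutoCloseable { @Override void close(); }
    interface Tracer { TraceScope newScope(String name); }

    // default backend: does nothing, so instrumented code pays near-zero cost
    static final Tracer NOOP = name -> () -> { };

    // a recording backend standing in for a real Zipkin/OpenTracing bridge
    static final class RecordingTracer implements Tracer {
        final java.util.List<String> spans = new java.util.ArrayList<>();
        public TraceScope newScope(String name) {
            return () -> spans.add(name); // record the span when it closes
        }
    }

    // code instrumented once against the API, backend-agnostic
    static void instrumentedOperation(Tracer tracer) {
        try (TraceScope scope = tracer.newScope("readBlock")) {
            // ... the traced work would happen here ...
        }
    }

    public static void main(String[] args) {
        RecordingTracer rec = new RecordingTracer();
        instrumentedOperation(NOOP); // nothing recorded anywhere
        instrumentedOperation(rec);  // one "readBlock" span recorded
        System.out.println(rec.spans);
    }
}
```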
[jira] [Updated] (HADOOP-15281) Distcp to add no-rename copy option
[ https://issues.apache.org/jira/browse/HADOOP-15281?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrew Olson updated HADOOP-15281: -- Status: Patch Available (was: Open)
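The two copy strategies from the HADOOP-15281 description can be contrasted in a small stand-alone sketch, with java.nio.file standing in for the Hadoop FileSystem API. The `direct` flag models the proposed switch; its name and wiring here are hypothetical, not the final option.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

// Sketch only: java.nio.file calls stand in for fs.exists/delete/mkdirs/rename.
public class CopyStrategySketch {
    static void copyTo(Path tmpTarget, Path target, byte[] data, boolean direct)
            throws IOException {
        if (direct) {
            // one write that manifests when the stream closes -- matching how
            // object stores like S3 publish an upload anyway, with no extra
            // round trips and no O(data) server-side copy
            Files.write(target, data);
        } else {
            // tmp-then-rename: the exists/delete/mkdirs/rename dance from
            // promoteTmpToTarget; on an object store each call is one or
            // more HTTP requests
            Files.write(tmpTarget, data);
            Files.deleteIfExists(target);
            Files.createDirectories(target.getParent());
            Files.move(tmpTarget, target, StandardCopyOption.REPLACE_EXISTING);
        }
    }

    public static void main(String[] args) throws IOException {
        Path dir = Files.createTempDirectory("distcp-sketch");
        byte[] data = "payload".getBytes();
        copyTo(dir.resolve(".tmp1"), dir.resolve("a"), data, true);
        copyTo(dir.resolve(".tmp2"), dir.resolve("b"), data, false);
        System.out.println(Files.readAllBytes(dir.resolve("a")).length
            + " " + Files.readAllBytes(dir.resolve("b")).length);
    }
}
```

On a local or HDFS filesystem the rename is cheap and gives atomic promotion; the trade-off only tips toward direct write on stores whose writes are already atomic at close.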