[jira] [Commented] (HADOOP-13192) org.apache.hadoop.util.LineReader match recordDelimiter has a bug
[ https://issues.apache.org/jira/browse/HADOOP-13192?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15337573#comment-15337573 ] Hadoop QA commented on HADOOP-13192: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 28s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 20s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 8m 51s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 27s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 8s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 16s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 35s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 54s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 47s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 8m 28s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 8m 28s{color} | {color:green} the 
patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 28s{color} | {color:green} hadoop-common-project/hadoop-common: The patch generated 0 new + 17 unchanged - 12 fixed = 17 total (was 29) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 57s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 13s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 42s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 46s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 7m 45s{color} | {color:red} hadoop-common in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 25s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 44m 16s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.metrics2.impl.TestGangliaMetrics | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:e2f6409 | | JIRA Issue | HADOOP-13192 | | GITHUB PR | https://github.com/apache/hadoop/pull/99 | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux cb8ca8844db9 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 0761379 | | Default Java | 1.8.0_91 | | findbugs | v3.0.0 | | unit | https://builds.apache.org/job/PreCommit-HADOOP-Build/9823/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common.txt | | Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/9823/testReport/ | | modules | C: hadoop-common-project/hadoop-common U: hadoop-common-project/hadoop-common | | Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/9823/console | | Powered by | Apache Yetus 0.4.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > org.apache.hadoop.util.LineReader match recordDelimiter has a bug > -- > > Key: HADOOP-13192 > URL: https://issues.apache.org/jira/browse/HADOOP-13192 > Project: Hadoop Common > Issue Type: Bug > Components: util >
[jira] [Commented] (HADOOP-13149) Windows distro build fails on dist-copynativelibs.
[ https://issues.apache.org/jira/browse/HADOOP-13149?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15337541#comment-15337541 ] Akira AJISAKA commented on HADOOP-13149: Thank you, Chris! > Windows distro build fails on dist-copynativelibs. > -- > > Key: HADOOP-13149 > URL: https://issues.apache.org/jira/browse/HADOOP-13149 > Project: Hadoop Common > Issue Type: Bug > Components: build >Reporter: Chris Nauroth >Assignee: Chris Nauroth >Priority: Blocker > Fix For: 2.8.0 > > Attachments: HADOOP-13149.001.patch, HADOOP-13149.branch-2.01.patch > > > HADOOP-12892 pulled the dist-copynativelibs script into an external file. > The call to this script is failing when running a distro build on Windows. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-13288) Guard null stats key in FileSystemStorageStatistics
[ https://issues.apache.org/jira/browse/HADOOP-13288?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15337514#comment-15337514 ] Hitesh Shah commented on HADOOP-13288: -- {code} if (key == null) { return null; } {code} Shouldn't this be a precondition assert where the key passed in should never be null? i.e. if a bad app asks for a value for a null key, throw an error. > Guard null stats key in FileSystemStorageStatistics > --- > > Key: HADOOP-13288 > URL: https://issues.apache.org/jira/browse/HADOOP-13288 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs >Affects Versions: 2.8.0, 3.0.0-alpha1 >Reporter: Mingliang Liu >Assignee: Mingliang Liu > Attachments: HADOOP-13288.000.patch > > > Currently in {{FileSystemStorageStatistics}} we simply return data from > {{FileSystem#Statistics}}. However there is no null key check, which leads to > NPE problems in downstream applications. For example, we got an NPE when > passing a null key to {{FileSystemStorageStatistics#getLong()}}, exception > stack as follows: > {quote} > NullPointerException > at > org.apache.hadoop.fs.FileSystemStorageStatistics.fetch(FileSystemStorageStatistics.java:80) > at > org.apache.hadoop.fs.FileSystemStorageStatistics.getLong(FileSystemStorageStatistics.java:108) > at > org.apache.tez.runtime.metrics.FileSystemStatisticsUpdater2.updateCounters(FileSystemStatisticsUpdater2.java:60) > at > org.apache.tez.runtime.metrics.TaskCounterUpdater.updateCounters(TaskCounterUpdater.java:118) > at > org.apache.tez.runtime.RuntimeTask.setFrameworkCounters(RuntimeTask.java:172) > at > org.apache.tez.runtime.task.TaskRunner2Callable.callInternal(TaskRunner2Callable.java:100) > at > org.apache.tez.runtime.task.TaskRunner2Callable.callInternal(TaskRunner2Callable.java:37) > at org.apache.tez.common.CallableWithNdc.call(CallableWithNdc.java:36) > at java.util.concurrent.FutureTask.run(FutureTask.java:262) > at > 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) > at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) > at java.lang.Thread.run(Thread.java:745) > {quote} > This jira is to add null stat key check to {{FileSystemStorageStatistics}}. > Thanks [~hitesh] for trying in Tez and reporting this.
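The precondition-style guard Hitesh suggests can be sketched as follows. This is a hypothetical illustration, not the actual {{FileSystemStorageStatistics}} code: the class name, the "bytesRead" key, and the stored value are invented, and {{Objects.requireNonNull}} stands in for whatever precondition utility the project would use.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Objects;

// Hypothetical sketch: reject a null key up front instead of returning
// null or letting an NPE surface deep inside fetch(). Not the real
// FileSystemStorageStatistics; names and values are made up.
public class StatsGuard {
    private final Map<String, Long> stats = new HashMap<>();

    public StatsGuard() {
        stats.put("bytesRead", 42L);   // made-up sample stat
    }

    public Long getLong(String key) {
        // Fail fast with a clear message; a caller passing null is buggy.
        Objects.requireNonNull(key, "stat key must not be null");
        return stats.get(key);
    }

    public static void main(String[] args) {
        StatsGuard s = new StatsGuard();
        System.out.println(s.getLong("bytesRead"));   // prints 42
        try {
            s.getLong(null);
        } catch (NullPointerException e) {
            System.out.println("rejected: " + e.getMessage());
        }
    }
}
```

The trade-off versus the patch's null-return is that the error surfaces at the bad call site with a clear message, rather than propagating a null (or an opaque NPE) into the caller.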
[jira] [Issue Comment Deleted] (HADOOP-12899) External distribution stitching scripts do not work correctly on Windows.
[ https://issues.apache.org/jira/browse/HADOOP-12899?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Roni Burd updated HADOOP-12899: --- Comment: was deleted (was: I'm still getting failures due to CR LF line endings in the .sh files on Windows. Cygwin is complaining with "$'\r': command not found". I'm trying to set -o igncr globally in Cygwin (version 2.5.1).) > External distribution stitching scripts do not work correctly on Windows. > - > > Key: HADOOP-12899 > URL: https://issues.apache.org/jira/browse/HADOOP-12899 > Project: Hadoop Common > Issue Type: Bug > Components: build > Environment: Windows >Reporter: Chris Nauroth >Assignee: Chris Nauroth >Priority: Blocker > Fix For: 3.0.0-alpha1 > > Attachments: HADOOP-12899.001.patch, HADOOP-12899.002.patch > > > In HADOOP-12850, we pulled the dist-layout-stitching and dist-tar-stitching > scripts out of hadoop-dist/pom.xml and into external files. It appears this > change is not working correctly on Windows.
[jira] [Commented] (HADOOP-13019) Implement ErasureCodec for HitchHiker XOR coding
[ https://issues.apache.org/jira/browse/HADOOP-13019?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15337404#comment-15337404 ] Kai Zheng commented on HADOOP-13019: It sounds like a good idea. Since we did HADOOP-13010 and we have HADOOP-13061, we can probably simplify the code. > Implement ErasureCodec for HitchHiker XOR coding > > > Key: HADOOP-13019 > URL: https://issues.apache.org/jira/browse/HADOOP-13019 > Project: Hadoop Common > Issue Type: Sub-task > Components: io >Reporter: Kai Sasaki >Assignee: Kai Sasaki > Attachments: HADOOP-13019.01.patch, HADOOP-13019.02.patch > > > Implement a missing {{ErasureCodec}} that uses {{HHXORErasureEncoder}} and > {{HHXORErasureDecoder}} in order to align the interfaces of the coding > algorithms
[jira] [Commented] (HADOOP-12899) External distribution stitching scripts do not work correctly on Windows.
[ https://issues.apache.org/jira/browse/HADOOP-12899?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15337398#comment-15337398 ] Roni Burd commented on HADOOP-12899: I'm still getting failures due to CR LF line endings in the .sh files on Windows. Cygwin is complaining with "$'\r': command not found". I'm trying to set -o igncr globally in Cygwin (version 2.5.1). > External distribution stitching scripts do not work correctly on Windows. > - > > Key: HADOOP-12899 > URL: https://issues.apache.org/jira/browse/HADOOP-12899 > Project: Hadoop Common > Issue Type: Bug > Components: build > Environment: Windows >Reporter: Chris Nauroth >Assignee: Chris Nauroth >Priority: Blocker > Fix For: 3.0.0-alpha1 > > Attachments: HADOOP-12899.001.patch, HADOOP-12899.002.patch > > > In HADOOP-12850, we pulled the dist-layout-stitching and dist-tar-stitching > scripts out of hadoop-dist/pom.xml and into external files. It appears this > change is not working correctly on Windows.
[jira] [Created] (HADOOP-13290) Appropriate use of generics in FairCallQueue
Konstantin Shvachko created HADOOP-13290: Summary: Appropriate use of generics in FairCallQueue Key: HADOOP-13290 URL: https://issues.apache.org/jira/browse/HADOOP-13290 Project: Hadoop Common Issue Type: Bug Components: ipc Affects Versions: 2.6.0 Reporter: Konstantin Shvachko # {{BlockingQueue}} is intermittently used with and without generic parameters in the {{FairCallQueue}} class. Should be parameterized. # Same for {{FairCallQueue}} itself. Should be parameterized. Could be a bit more tricky for that one.
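The raw-versus-parameterized distinction behind the two items above can be sketched as follows. This is a toy example with invented names and queue contents, not the actual {{FairCallQueue}} internals:

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Illustrative sketch of why raw BlockingQueue usage is worth fixing:
// a raw type defers element-type errors to runtime, while a
// parameterized one catches them at compile time.
public class GenericsDemo {

    // Parameterized: the compiler enforces the element type.
    public static int roundTrip(int value) {
        BlockingQueue<Integer> typed = new LinkedBlockingQueue<>();
        typed.offer(value);                  // only Integers accepted
        // typed.offer("not-a-call");        // would fail to compile
        Integer head = typed.poll();
        return head == null ? -1 : head;
    }

    public static void main(String[] args) {
        // Raw type: compiles (with a rawtypes warning) but accepts any
        // object, silently losing type safety.
        @SuppressWarnings({"rawtypes", "unchecked"})
        BlockingQueue raw = new LinkedBlockingQueue();
        raw.offer("not-a-call");             // accepted without complaint

        System.out.println(roundTrip(42));   // prints 42
    }
}
```

With the raw queue, a wrong element type only fails when some consumer casts the element; with the parameterized queue the mistake cannot compile, which is the point of the cleanup.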
[jira] [Updated] (HADOOP-13289) Remove unused variables in TestFairCallQueue
[ https://issues.apache.org/jira/browse/HADOOP-13289?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Konstantin Shvachko updated HADOOP-13289: - Labels: newbie (was: newbiee) > Remove unused variables in TestFairCallQueue > > > Key: HADOOP-13289 > URL: https://issues.apache.org/jira/browse/HADOOP-13289 > Project: Hadoop Common > Issue Type: Bug > Components: test >Reporter: Konstantin Shvachko > Labels: newbie > > # Remove unused member {{alwaysZeroScheduler}} and related initialization in > {{TestFairCallQueue}} > # Remove unused local variable {{sched}} in > {{testOfferSucceedsWhenScheduledLowPriority()}} > And propagate to applicable release branches.
[jira] [Created] (HADOOP-13289) Remove unused variables in TestFairCallQueue
Konstantin Shvachko created HADOOP-13289: Summary: Remove unused variables in TestFairCallQueue Key: HADOOP-13289 URL: https://issues.apache.org/jira/browse/HADOOP-13289 Project: Hadoop Common Issue Type: Bug Components: test Reporter: Konstantin Shvachko # Remove unused member {{alwaysZeroScheduler}} and related initialization in {{TestFairCallQueue}} # Remove unused local variable {{sched}} in {{testOfferSucceedsWhenScheduledLowPriority()}} And propagate to applicable release branches.
[jira] [Commented] (HADOOP-13019) Implement ErasureCodec for HitchHiker XOR coding
[ https://issues.apache.org/jira/browse/HADOOP-13019?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15337333#comment-15337333 ] Zhe Zhang commented on HADOOP-13019: Thanks [~lewuathe]. I think enabling HH is a very useful task for the 3.0-alpha1 release. The patch itself looks good. [~drankye] I have a more general question about the {{AbstractErasureCodec}} structure. If all classes extending the abstract class only contain trivial code, should we consider removing this layer of abstraction? > Implement ErasureCodec for HitchHiker XOR coding > > > Key: HADOOP-13019 > URL: https://issues.apache.org/jira/browse/HADOOP-13019 > Project: Hadoop Common > Issue Type: Sub-task > Components: io >Reporter: Kai Sasaki >Assignee: Kai Sasaki > Attachments: HADOOP-13019.01.patch, HADOOP-13019.02.patch > > > Implement a missing {{ErasureCodec}} that uses {{HHXORErasureEncoder}} and > {{HHXORErasureDecoder}} in order to align the interfaces of the coding > algorithms
[jira] [Commented] (HADOOP-12718) Incorrect error message by fs -put local dir without permission
[ https://issues.apache.org/jira/browse/HADOOP-12718?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15337284#comment-15337284 ] Hadoop QA commented on HADOOP-12718: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 15s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 27s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 18s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 25s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 2s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 12s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 26s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 47s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 42s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 55s{color} | {color:green} the 
patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 55s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 24s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 53s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 12s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 1s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 30s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 46s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 7m 54s{color} | {color:green} hadoop-common in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 39m 16s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:e2f6409 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12808090/HADOOP-12718.004.patch | | JIRA Issue | HADOOP-12718 | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux 69908b0a79d4 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 0761379 | | Default Java | 1.8.0_91 | | findbugs | v3.0.0 | | Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/9822/testReport/ | | modules | C: hadoop-common-project/hadoop-common U: hadoop-common-project/hadoop-common | | Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/9822/console | | Powered by | Apache Yetus 0.4.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > Incorrect error message by fs -put local dir without permission > --- > > Key: HADOOP-12718 > URL: https://issues.apache.org/jira/browse/HADOOP-12718 > Project: Hadoop Common > Issue Type: Bug >Reporter: John Zhuge >Assignee: John Zhuge >Priority: Blocker > Labels: supportability > Attachments: HADOOP-12718.001.patch,
[jira] [Commented] (HADOOP-13255) KMSClientProvider should check and renew tgt when doing delegation token operations.
[ https://issues.apache.org/jira/browse/HADOOP-13255?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15337264#comment-15337264 ] Zhe Zhang commented on HADOOP-13255: Sorry for chiming in late. Yes, I was having a hard time setting the expiry time shorter than 6 mins. I think it is reasonable to backport the patch without the unit test to branch-2 and downward. > KMSClientProvider should check and renew tgt when doing delegation token > operations. > > > Key: HADOOP-13255 > URL: https://issues.apache.org/jira/browse/HADOOP-13255 > Project: Hadoop Common > Issue Type: Bug > Components: kms >Reporter: Xiao Chen >Assignee: Xiao Chen > Fix For: 2.8.0, 2.7.3, 2.6.5, 3.0.0-alpha1 > > Attachments: HADOOP-13255.01.patch, HADOOP-13255.02.patch, > HADOOP-13255.03.patch, HADOOP-13255.04.patch, HADOOP-13255.05.patch, > HADOOP-13255.branch-2.patch, HADOOP-13255.test.patch > >
[jira] [Commented] (HADOOP-13288) Guard null stats key in FileSystemStorageStatistics
[ https://issues.apache.org/jira/browse/HADOOP-13288?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15337217#comment-15337217 ] Hadoop QA commented on HADOOP-13288: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 28s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 1s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 6s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 24s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 7s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 13s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 35s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 53s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 43s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 55s{color} | {color:green} the 
patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 7m 55s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 24s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 54s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 13s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 40s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 46s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 7m 53s{color} | {color:red} hadoop-common in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 22s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 40m 20s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.metrics2.impl.TestGangliaMetrics | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:e2f6409 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12811480/HADOOP-13288.000.patch | | JIRA Issue | HADOOP-13288 | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux 8326ba5440fa 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 0761379 | | Default Java | 1.8.0_91 | | findbugs | v3.0.0 | | unit | https://builds.apache.org/job/PreCommit-HADOOP-Build/9821/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common.txt | | Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/9821/testReport/ | | modules | C: hadoop-common-project/hadoop-common U: hadoop-common-project/hadoop-common | | Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/9821/console | | Powered by | Apache Yetus 0.4.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > Guard null stats key in FileSystemStorageStatistics > --- > > Key: HADOOP-13288 > URL: https://issues.apache.org/jira/browse/HADOOP-13288 > Project: Hadoop Common >
[jira] [Updated] (HADOOP-13149) Windows distro build fails on dist-copynativelibs.
[ https://issues.apache.org/jira/browse/HADOOP-13149?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Chris Nauroth updated HADOOP-13149: --- Resolution: Fixed Fix Version/s: 2.8.0 Status: Resolved (was: Patch Available) +1. I committed this to branch-2 and branch-2.8. [~ajisakaa], thank you. It turns out that we didn't need to bring in HADOOP-12899, because the patch that caused that problem is still only in trunk. We did need to bring in HDFS-10353 though. I'll comment over there. > Windows distro build fails on dist-copynativelibs. > -- > > Key: HADOOP-13149 > URL: https://issues.apache.org/jira/browse/HADOOP-13149 > Project: Hadoop Common > Issue Type: Bug > Components: build >Reporter: Chris Nauroth >Assignee: Chris Nauroth >Priority: Blocker > Fix For: 2.8.0 > > Attachments: HADOOP-13149.001.patch, HADOOP-13149.branch-2.01.patch > > > HADOOP-12892 pulled the dist-copynativelibs script into an external file. > The call to this script is failing when running a distro build on Windows.
[jira] [Commented] (HADOOP-13285) DecayRpcScheduler MXBean should only report decayed CallVolumeSummary
[ https://issues.apache.org/jira/browse/HADOOP-13285?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15337170#comment-15337170 ] Hudson commented on HADOOP-13285: - SUCCESS: Integrated in Hadoop-trunk-Commit #9980 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/9980/]) HADOOP-13285. DecayRpcScheduler MXBean should only report decayed (xyao: rev 0761379fe45898c44c8f161834c298ef932e4d8c) * hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/DecayRpcScheduler.java * hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/ipc/TestDecayRpcScheduler.java > DecayRpcScheduler MXBean should only report decayed CallVolumeSummary > - > > Key: HADOOP-13285 > URL: https://issues.apache.org/jira/browse/HADOOP-13285 > Project: Hadoop Common > Issue Type: Bug > Components: ipc >Reporter: Namit Maheshwari >Assignee: Xiaoyu Yao > Fix For: 2.8.0 > > Attachments: HADOOP-13285.00.patch > > > HADOOP-13197 added non-decayed call metrics in metrics2 source for > DecayedRpcScheduler. However, CallVolumeSummary in MXBean was affected > unexpectedly to include both decayed and non-decayed call volume. The root > cause is that Jackson ObjectMapper simply serializes all the content of the > callCounts map, which contains both non-decayed and decayed counters after > HADOOP-13197. This ticket is opened to fix the CallVolumeSummary in MXBean to > include only decayed call volume for backward compatibility and add a unit test > for DecayRpcScheduler MXBean to catch this in the future. > CallVolumeSummary JMX example before HADOOP-13197 > {code} > "CallVolumeSummary" : "{\"hbase\":1,\"mapred\":1}" > {code} > CallVolumeSummary JMX example after HADOOP-13197 > {code} > "CallVolumeSummary" : "{\"user_x\":[1,2]}" > {code}
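The fix described in the issue — exposing only the decayed counter over JMX — can be sketched roughly as follows. This is a hypothetical, Jackson-free illustration, not the committed patch: the helper name is invented, and the `[nonDecayed, decayed]` array layout is assumed from the `"user_x":[1,2]` example in the description.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical sketch of the fix idea: project only the decayed counter
// out of a map whose values hold [nonDecayed, decayed] pairs, so the
// serialized summary keeps its pre-HADOOP-13197 shape (one number per user).
public class CallVolumeSummaryDemo {
    public static Map<String, Long> decayedOnly(Map<String, long[]> callCounts) {
        Map<String, Long> summary = new LinkedHashMap<>();
        for (Map.Entry<String, long[]> e : callCounts.entrySet()) {
            // Index 1 = decayed count (assumed layout, per the [1,2] example).
            summary.put(e.getKey(), e.getValue()[1]);
        }
        return summary;
    }

    public static void main(String[] args) {
        Map<String, long[]> counts = new LinkedHashMap<>();
        counts.put("user_x", new long[] {1, 2});
        System.out.println(decayedOnly(counts));   // prints {user_x=2}
    }
}
```

Serializing the projected map instead of the raw callCounts map is what restores the backward-compatible one-value-per-user summary.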
[jira] [Commented] (HADOOP-13263) Reload cached groups in background after expiry
[ https://issues.apache.org/jira/browse/HADOOP-13263?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15337145#comment-15337145 ] Hadoop QA commented on HADOOP-13263: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 41s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 19s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 52s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 25s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 57s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 13s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 22s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 45s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 38s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 7s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 7m 7s{color} | {color:green} the 
patch passed {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 27s{color} | {color:red} hadoop-common-project/hadoop-common: The patch generated 93 new + 216 unchanged - 0 fixed = 309 total (was 216) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 57s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 13s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 1m 35s{color} | {color:red} hadoop-common-project/hadoop-common generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 45s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 7m 8s{color} | {color:green} hadoop-common in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 25s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 40m 33s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | FindBugs | module:hadoop-common-project/hadoop-common | | | Inconsistent synchronization of org.apache.hadoop.security.Groups$GroupCacheLoader.executorService; locked 66% of time Unsynchronized access at Groups.java:66% of time Unsynchronized access at Groups.java:[line 334] | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:e2f6409 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12811466/HADOOP-13263.003.patch | | JIRA Issue | HADOOP-13263 | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux fe6b7590ae89 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 2800695 | | Default Java | 1.8.0_91 | | findbugs | v3.0.0 | | checkstyle | https://builds.apache.org/job/PreCommit-HADOOP-Build/9819/artifact/patchprocess/diff-checkstyle-hadoop-common-project_hadoop-common.txt | | findbugs | https://builds.apache.org/job/PreCommit-HADOOP-Build/9819/artifact/patchprocess/new-findbugs-hadoop-common-project_hadoop-common.html | | Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/9819/testReport/ | | modules | C: hadoop-common-project/hadoop-common U: hadoop-common-project/hadoop-common | | Console output |
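The FindBugs warning above flags a field (`executorService`) that is read under a lock only some of the time. The sketch below is a simplified illustration of that failure mode and the usual remedy, not the actual `Groups$GroupCacheLoader` code: route every read and write of the lazily created executor through one synchronized accessor so the detector (and the memory model) sees consistent guarding.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Simplified illustration (not the actual Groups code): a lazily created
// executor read both inside and outside synchronized blocks trips FindBugs'
// "inconsistent synchronization" detector. Funneling all access through a
// single synchronized accessor keeps the guarding consistent.
public class BackgroundLoader {
  private ExecutorService executorService;   // guarded by "this"

  // All reads and writes of the field happen under the same lock.
  public synchronized ExecutorService getExecutor() {
    if (executorService == null) {
      executorService = Executors.newSingleThreadExecutor(r -> {
        Thread t = new Thread(r, "group-refresh");
        t.setDaemon(true);
        return t;
      });
    }
    return executorService;
  }

  public void refreshInBackground(Runnable reload) {
    getExecutor().execute(reload);   // never touches the field directly
  }
}
```

An alternative is eager initialization of a `final` field, which removes the lock from the read path entirely.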
[jira] [Updated] (HADOOP-13285) DecayRpcScheduler MXBean should only report decayed CallVolumeSummary
[ https://issues.apache.org/jira/browse/HADOOP-13285?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xiaoyu Yao updated HADOOP-13285: Resolution: Fixed Hadoop Flags: Reviewed Fix Version/s: 2.8.0 Target Version/s: 2.8.0 Status: Resolved (was: Patch Available) Thanks [~jnp] for the review. I've committed the patch to trunk, branch-2, and branch-2.8. > DecayRpcScheduler MXBean should only report decayed CallVolumeSummary > - > > Key: HADOOP-13285 > URL: https://issues.apache.org/jira/browse/HADOOP-13285 > Project: Hadoop Common > Issue Type: Bug > Components: ipc >Reporter: Namit Maheshwari >Assignee: Xiaoyu Yao > Fix For: 2.8.0 > > Attachments: HADOOP-13285.00.patch > > > HADOOP-13197 added non-decayed call metrics in metrics2 source for > DecayedRpcScheduler. However, CallVolumeSummary in MXBean was affected > unexpectedly to include both decayed and non-decayed call volume. The root > cause is Jackson ObjectMapper simply serialize all the content of the > callCounts map which contains both non-decayed and decayed counter after > HADOOP-13197. This ticket is opened to fix the CallVolumeSummary in MXBean to > include only decayed call volume for backward compatibility and add unit test > for DecayRpcScheduler MXBean to catch this in future. > CallVolumeSummary JMX example before HADOOP-13197 > {code} > "CallVolumeSummary" : "{\"hbase\":1,\"mapred\":1}" > {code} > CallVolumeSummary JMX example after HADOOP-13197 > {code} > "CallVolumeSummary" : "{\"user_x\":[1,2]}" > {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
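The regression described above came from serializing the whole `callCounts` map, which after HADOOP-13197 holds both counters per caller. The hand-rolled sketch below illustrates the idea of the fix only: project out the decayed counter before building the JMX string. The value layout is assumed for illustration, and the real code serializes with Jackson rather than by hand.

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.stream.Collectors;

// Hand-rolled sketch of the fix's idea (value layout is hypothetical, and the
// real code uses Jackson): dumping callCounts wholesale leaks both counters
// into the JMX string, so the summary must project out only the decayed value
// before serializing.
public class CallVolume {
  // value[0] = decayed count, value[1] = raw (non-decayed) count -- assumed layout
  public static String summarize(Map<String, long[]> callCounts) {
    return callCounts.entrySet().stream()
        .map(e -> "\"" + e.getKey() + "\":" + e.getValue()[0])
        .collect(Collectors.joining(",", "{", "}"));
  }

  public static void main(String[] args) {
    Map<String, long[]> counts = new LinkedHashMap<>();
    counts.put("hbase", new long[]{1, 4});
    counts.put("mapred", new long[]{1, 2});
    // Only the decayed counters survive, matching the pre-HADOOP-13197 shape.
    System.out.println(summarize(counts));
  }
}
```

This restores the old `{"hbase":1,"mapred":1}` shape instead of the accidental `{"user_x":[1,2]}` array form.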
[jira] [Updated] (HADOOP-12804) Read Proxy Password from Credential Providers in S3 FileSystem
[ https://issues.apache.org/jira/browse/HADOOP-12804?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Larry McCay updated HADOOP-12804: - Status: Open (was: Patch Available) Canceling the patch due to checkstyle errors and test error. Will provide a new one shortly. > Read Proxy Password from Credential Providers in S3 FileSystem > -- > > Key: HADOOP-12804 > URL: https://issues.apache.org/jira/browse/HADOOP-12804 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Reporter: Larry McCay >Assignee: Larry McCay >Priority: Minor > Attachments: HADOOP-12804-001.patch, HADOOP-12804-branch-2-002.patch > > > HADOOP-12548 added credential provider support for the AWS credentials to > S3FileSystem. This JIRA is for considering the use of the credential > providers for the proxy password as well. > Instead of adding the proxy password to the config file directly and in clear > text, we could provision it in addition to the AWS credentials into a > credential provider and keep it out of clear text. > In terms of usage, it could be added to the same credential store as the AWS > credentials or potentially to a more universally available path - since it is > the same for everyone. This would however require multiple providers to be > configured in the provider.path property and more open file permissions on > the store itself. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-13288) Guard null stats key in FileSystemStorageStatistics
[ https://issues.apache.org/jira/browse/HADOOP-13288?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mingliang Liu updated HADOOP-13288: --- Status: Patch Available (was: Open) > Guard null stats key in FileSystemStorageStatistics > --- > > Key: HADOOP-13288 > URL: https://issues.apache.org/jira/browse/HADOOP-13288 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs >Affects Versions: 2.8.0, 3.0.0-alpha1 >Reporter: Mingliang Liu >Assignee: Mingliang Liu > Attachments: HADOOP-13288.000.patch > > > Currently in {{FileSystemStorageStatistics}} we simply returns data from > {{FileSystem#Statistics}}. However there is no null key check, which leads to > NPE problems to downstream applications. For example, we got a NPE when > passing a null key to {{FileSystemStorageStatistics#getLong()}}, exception > stack as following: > {quote} > NullPointerException > at > org.apache.hadoop.fs.FileSystemStorageStatistics.fetch(FileSystemStorageStatistics.java:80) > at > org.apache.hadoop.fs.FileSystemStorageStatistics.getLong(FileSystemStorageStatistics.java:108) > at > org.apache.tez.runtime.metrics.FileSystemStatisticsUpdater2.updateCounters(FileSystemStatisticsUpdater2.java:60) > at > org.apache.tez.runtime.metrics.TaskCounterUpdater.updateCounters(TaskCounterUpdater.java:118) > at > org.apache.tez.runtime.RuntimeTask.setFrameworkCounters(RuntimeTask.java:172) > at > org.apache.tez.runtime.task.TaskRunner2Callable.callInternal(TaskRunner2Callable.java:100) > at > org.apache.tez.runtime.task.TaskRunner2Callable.callInternal(TaskRunner2Callable.java:37) > at org.apache.tez.common.CallableWithNdc.call(CallableWithNdc.java:36) > at java.util.concurrent.FutureTask.run(FutureTask.java:262) > at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) > at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) > at java.lang.Thread.run(Thread.java:745) > {quote} > This jira is to add null stat key check 
to {{FileSystemStorageStatistics}}. > Thanks [~hitesh] for trying in Tez and reporting this. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-13288) Guard null stats key in FileSystemStorageStatistics
[ https://issues.apache.org/jira/browse/HADOOP-13288?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mingliang Liu updated HADOOP-13288: --- Attachment: HADOOP-13288.000.patch The NPE was caused by the fact that {{switch}} statement does not accept a null value in {{fetch()}}. The v0 patch adds the null stat (key and data) check. > Guard null stats key in FileSystemStorageStatistics > --- > > Key: HADOOP-13288 > URL: https://issues.apache.org/jira/browse/HADOOP-13288 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs >Affects Versions: 2.8.0, 3.0.0-alpha1 >Reporter: Mingliang Liu >Assignee: Mingliang Liu > Attachments: HADOOP-13288.000.patch > > > Currently in {{FileSystemStorageStatistics}} we simply returns data from > {{FileSystem#Statistics}}. However there is no null key check, which leads to > NPE problems to downstream applications. For example, we got a NPE when > passing a null key to {{FileSystemStorageStatistics#getLong()}}, exception > stack as following: > {quote} > NullPointerException > at > org.apache.hadoop.fs.FileSystemStorageStatistics.fetch(FileSystemStorageStatistics.java:80) > at > org.apache.hadoop.fs.FileSystemStorageStatistics.getLong(FileSystemStorageStatistics.java:108) > at > org.apache.tez.runtime.metrics.FileSystemStatisticsUpdater2.updateCounters(FileSystemStatisticsUpdater2.java:60) > at > org.apache.tez.runtime.metrics.TaskCounterUpdater.updateCounters(TaskCounterUpdater.java:118) > at > org.apache.tez.runtime.RuntimeTask.setFrameworkCounters(RuntimeTask.java:172) > at > org.apache.tez.runtime.task.TaskRunner2Callable.callInternal(TaskRunner2Callable.java:100) > at > org.apache.tez.runtime.task.TaskRunner2Callable.callInternal(TaskRunner2Callable.java:37) > at org.apache.tez.common.CallableWithNdc.call(CallableWithNdc.java:36) > at java.util.concurrent.FutureTask.run(FutureTask.java:262) > at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) > at > 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) > at java.lang.Thread.run(Thread.java:745) > {quote} > This jira is to add null stat key check to {{FileSystemStorageStatistics}}. > Thanks [~hitesh] for trying in Tez and reporting this. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
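The root cause noted in the comment above is a language-level one: a `switch` on a `String` dereferences the value (via `hashCode`) before any case label is compared, so a null key throws NPE immediately. A minimal reproduction with the guard the patch adds might look like the following; the names here are illustrative, not the actual `FileSystemStorageStatistics` code.

```java
// Minimal reproduction of the failure mode: switching on a null String
// throws NPE before any case is reached, so the guard must come first.
// Names and values are illustrative, not the actual Hadoop code.
public class StatsFetcher {
  public static Long fetch(String key) {
    if (key == null) {        // the added null-key guard
      return null;
    }
    switch (key) {            // would NPE here on a null key without the guard
      case "bytesRead":    return 100L;
      case "bytesWritten": return 42L;
      default:             return null;
    }
  }
}
```

Returning null for an unknown or null key gives downstream callers such as the Tez counter updater a defined "no such statistic" answer instead of an exception.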
[jira] [Commented] (HADOOP-10048) LocalDirAllocator should avoid holding locks while accessing the filesystem
[ https://issues.apache.org/jira/browse/HADOOP-10048?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15337119#comment-15337119 ] Zhe Zhang commented on HADOOP-10048: Thanks [~jlowe] for the fix and [~djp] for the review. Is this a valid bug fix for 2.7 and 2.6 as well? Should we consider backporting it? > LocalDirAllocator should avoid holding locks while accessing the filesystem > --- > > Key: HADOOP-10048 > URL: https://issues.apache.org/jira/browse/HADOOP-10048 > Project: Hadoop Common > Issue Type: Improvement >Affects Versions: 2.3.0 >Reporter: Jason Lowe >Assignee: Jason Lowe > Fix For: 2.8.0 > > Attachments: HADOOP-10048.003.patch, HADOOP-10048.004.patch, > HADOOP-10048.005.patch, HADOOP-10048.006.patch, HADOOP-10048.patch, > HADOOP-10048.trunk.patch > > > As noted in MAPREDUCE-5584 and HADOOP-7016, LocalDirAllocator can be a > bottleneck for multithreaded setups like the ShuffleHandler. We should > consider moving to a lockless design or minimizing the critical sections to a > very small amount of time that does not involve I/O operations. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
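The locking pattern the JIRA advocates can be sketched as follows. This is a simplification under assumed names, not the committed patch: snapshot the shared state inside a short critical section, then perform the slow filesystem probing outside the lock so concurrent callers (e.g. ShuffleHandler threads) are never serialized behind disk I/O.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Sketch of the "minimize the critical section" pattern (a simplification,
// not the actual LocalDirAllocator patch): copy state under the lock, probe
// directories unlocked.
public class DirPicker {
  private final List<String> dirs = new ArrayList<>();
  private int nextIndex = 0;              // guarded by "this"

  public DirPicker(List<String> configured) {
    dirs.addAll(configured);
  }

  public String pickDir() {
    List<String> snapshot;
    int start;
    synchronized (this) {                 // short critical section: copy state only
      snapshot = new ArrayList<>(dirs);
      start = nextIndex;
      nextIndex = (nextIndex + 1) % Math.max(1, dirs.size());
    }
    // Slow checks (a stand-in for real disk-space probing) run unlocked.
    for (int i = 0; i < snapshot.size(); i++) {
      String candidate = snapshot.get((start + i) % snapshot.size());
      if (hasCapacity(candidate)) {
        return candidate;
      }
    }
    return null;
  }

  private boolean hasCapacity(String dir) {
    return !dir.isEmpty();                // placeholder for a filesystem check
  }
}
```

The trade-off is that the snapshot can be momentarily stale, which is acceptable for directory selection but would not be for stricter invariants.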
[jira] [Commented] (HADOOP-13285) DecayRpcScheduler MXBean should only report decayed CallVolumeSummary
[ https://issues.apache.org/jira/browse/HADOOP-13285?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15337107#comment-15337107 ] Jitendra Nath Pandey commented on HADOOP-13285: --- +1 > DecayRpcScheduler MXBean should only report decayed CallVolumeSummary > - > > Key: HADOOP-13285 > URL: https://issues.apache.org/jira/browse/HADOOP-13285 > Project: Hadoop Common > Issue Type: Bug > Components: ipc >Reporter: Namit Maheshwari >Assignee: Xiaoyu Yao > Attachments: HADOOP-13285.00.patch > > > HADOOP-13197 added non-decayed call metrics in metrics2 source for > DecayedRpcScheduler. However, CallVolumeSummary in MXBean was affected > unexpectedly to include both decayed and non-decayed call volume. The root > cause is Jackson ObjectMapper simply serialize all the content of the > callCounts map which contains both non-decayed and decayed counter after > HADOOP-13197. This ticket is opened to fix the CallVolumeSummary in MXBean to > include only decayed call volume for backward compatibility and add unit test > for DecayRpcScheduler MXBean to catch this in future. > CallVolumeSummary JMX example before HADOOP-13197 > {code} > "CallVolumeSummary" : "{\"hbase\":1,\"mapred\":1}" > {code} > CallVolumeSummary JMX example after HADOOP-13197 > {code} > "CallVolumeSummary" : "{\"user_x\":[1,2]}" > {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-13285) DecayRpcScheduler MXBean should only report decayed CallVolumeSummary
[ https://issues.apache.org/jira/browse/HADOOP-13285?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jitendra Nath Pandey updated HADOOP-13285: -- Description: HADOOP-13197 added non-decayed call metrics in metrics2 source for DecayedRpcScheduler. However, CallVolumeSummary in MXBean was affected unexpectedly to include both decayed and non-decayed call volume. The root cause is Jackson ObjectMapper simply serialize all the content of the callCounts map which contains both non-decayed and decayed counter after HADOOP-13197. This ticket is opened to fix the CallVolumeSummary in MXBean to include only decayed call volume for backward compatibility and add unit test for DecayRpcScheduler MXBean to catch this in future. CallVolumeSummary JMX example before HADOOP-13197 {code} "CallVolumeSummary" : "{\"hbase\":1,\"mapred\":1}" {code} CallVolumeSummary JMX example after HADOOP-13197 {code} "CallVolumeSummary" : "{\"user_x\":[1,2]}" {code} was: HADOOP-13197 added non-decayed call metrics in metrics2 source for DecayedRpcScheduler. However, CallVolumeSummary in MXBean was affected unexpectedly to include both decayed and non-decayed call volume. The root cause is Jackson ObjectMapper simply serialize all the content of the callCounts map which contains both non-decayed and decayed counter after HADOOP-13197. This ticket is opened to fix the CallVolumeSummary in MXBean to include only decayed call volume for backward compatibility and add unit test for DecayRpcScheduler MXBean to catch this in future. 
CallVolumeSummary JMX example before HADOOP-13197 {code} "CallVolumeSummary" : "{\"hbase\":1,\"mapred\":1}" {code} CallVolumeSummary JMX example after HADOOP-13197 {code} "CallVolumeSummary" : "{\"hrt_qa\":[1,2]}" {code} > DecayRpcScheduler MXBean should only report decayed CallVolumeSummary > - > > Key: HADOOP-13285 > URL: https://issues.apache.org/jira/browse/HADOOP-13285 > Project: Hadoop Common > Issue Type: Bug > Components: ipc >Reporter: Namit Maheshwari >Assignee: Xiaoyu Yao > Attachments: HADOOP-13285.00.patch > > > HADOOP-13197 added non-decayed call metrics in metrics2 source for > DecayedRpcScheduler. However, CallVolumeSummary in MXBean was affected > unexpectedly to include both decayed and non-decayed call volume. The root > cause is Jackson ObjectMapper simply serialize all the content of the > callCounts map which contains both non-decayed and decayed counter after > HADOOP-13197. This ticket is opened to fix the CallVolumeSummary in MXBean to > include only decayed call volume for backward compatibility and add unit test > for DecayRpcScheduler MXBean to catch this in future. > CallVolumeSummary JMX example before HADOOP-13197 > {code} > "CallVolumeSummary" : "{\"hbase\":1,\"mapred\":1}" > {code} > CallVolumeSummary JMX example after HADOOP-13197 > {code} > "CallVolumeSummary" : "{\"user_x\":[1,2]}" > {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-13288) Guard null stats key in FileSystemStorageStatistics
[ https://issues.apache.org/jira/browse/HADOOP-13288?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mingliang Liu updated HADOOP-13288: --- Fix Version/s: (was: 2.8.0) > Guard null stats key in FileSystemStorageStatistics > --- > > Key: HADOOP-13288 > URL: https://issues.apache.org/jira/browse/HADOOP-13288 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs >Affects Versions: 2.8.0, 3.0.0-alpha1 >Reporter: Mingliang Liu >Assignee: Mingliang Liu > > Currently in {{FileSystemStorageStatistics}} we simply returns data from > {{FileSystem#Statistics}}. However there is no null key check, which leads to > NPE problems to downstream applications. For example, we got a NPE when > passing a null key to {{FileSystemStorageStatistics#getLong()}}, exception > stack as following: > {quote} > NullPointerException > at > org.apache.hadoop.fs.FileSystemStorageStatistics.fetch(FileSystemStorageStatistics.java:80) > at > org.apache.hadoop.fs.FileSystemStorageStatistics.getLong(FileSystemStorageStatistics.java:108) > at > org.apache.tez.runtime.metrics.FileSystemStatisticsUpdater2.updateCounters(FileSystemStatisticsUpdater2.java:60) > at > org.apache.tez.runtime.metrics.TaskCounterUpdater.updateCounters(TaskCounterUpdater.java:118) > at > org.apache.tez.runtime.RuntimeTask.setFrameworkCounters(RuntimeTask.java:172) > at > org.apache.tez.runtime.task.TaskRunner2Callable.callInternal(TaskRunner2Callable.java:100) > at > org.apache.tez.runtime.task.TaskRunner2Callable.callInternal(TaskRunner2Callable.java:37) > at org.apache.tez.common.CallableWithNdc.call(CallableWithNdc.java:36) > at java.util.concurrent.FutureTask.run(FutureTask.java:262) > at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) > at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) > at java.lang.Thread.run(Thread.java:745) > {quote} > This jira is to add null stat key check to {{FileSystemStorageStatistics}}. 
> Thanks [~hitesh] for trying in Tez and reporting this. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-13288) Guard null stats key in FileSystemStorageStatistics
[ https://issues.apache.org/jira/browse/HADOOP-13288?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mingliang Liu updated HADOOP-13288: --- Affects Version/s: 3.0.0-alpha1 2.8.0 > Guard null stats key in FileSystemStorageStatistics > --- > > Key: HADOOP-13288 > URL: https://issues.apache.org/jira/browse/HADOOP-13288 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs >Affects Versions: 2.8.0, 3.0.0-alpha1 >Reporter: Mingliang Liu >Assignee: Mingliang Liu > > Currently in {{FileSystemStorageStatistics}} we simply returns data from > {{FileSystem#Statistics}}. However there is no null key check, which leads to > NPE problems to downstream applications. For example, we got a NPE when > passing a null key to {{FileSystemStorageStatistics#getLong()}}, exception > stack as following: > {quote} > NullPointerException > at > org.apache.hadoop.fs.FileSystemStorageStatistics.fetch(FileSystemStorageStatistics.java:80) > at > org.apache.hadoop.fs.FileSystemStorageStatistics.getLong(FileSystemStorageStatistics.java:108) > at > org.apache.tez.runtime.metrics.FileSystemStatisticsUpdater2.updateCounters(FileSystemStatisticsUpdater2.java:60) > at > org.apache.tez.runtime.metrics.TaskCounterUpdater.updateCounters(TaskCounterUpdater.java:118) > at > org.apache.tez.runtime.RuntimeTask.setFrameworkCounters(RuntimeTask.java:172) > at > org.apache.tez.runtime.task.TaskRunner2Callable.callInternal(TaskRunner2Callable.java:100) > at > org.apache.tez.runtime.task.TaskRunner2Callable.callInternal(TaskRunner2Callable.java:37) > at org.apache.tez.common.CallableWithNdc.call(CallableWithNdc.java:36) > at java.util.concurrent.FutureTask.run(FutureTask.java:262) > at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) > at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) > at java.lang.Thread.run(Thread.java:745) > {quote} > This jira is to add null stat key check to {{FileSystemStorageStatistics}}. 
> Thanks [~hitesh] for trying in Tez and reporting this. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-13288) Guard null stats key in FileSystemStorageStatistics
[ https://issues.apache.org/jira/browse/HADOOP-13288?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mingliang Liu updated HADOOP-13288: --- Description: Currently in {{FileSystemStorageStatistics}} we simply returns data from {{FileSystem#Statistics}}. However there is no null key check, which leads to NPE problems to downstream applications. For example, we got a NPE when passing a null key to {{FileSystemStorageStatistics#getLong()}}, exception stack as following: {quote} NullPointerException at org.apache.hadoop.fs.FileSystemStorageStatistics.fetch(FileSystemStorageStatistics.java:80) at org.apache.hadoop.fs.FileSystemStorageStatistics.getLong(FileSystemStorageStatistics.java:108) at org.apache.tez.runtime.metrics.FileSystemStatisticsUpdater2.updateCounters(FileSystemStatisticsUpdater2.java:60) at org.apache.tez.runtime.metrics.TaskCounterUpdater.updateCounters(TaskCounterUpdater.java:118) at org.apache.tez.runtime.RuntimeTask.setFrameworkCounters(RuntimeTask.java:172) at org.apache.tez.runtime.task.TaskRunner2Callable.callInternal(TaskRunner2Callable.java:100) at org.apache.tez.runtime.task.TaskRunner2Callable.callInternal(TaskRunner2Callable.java:37) at org.apache.tez.common.CallableWithNdc.call(CallableWithNdc.java:36) at java.util.concurrent.FutureTask.run(FutureTask.java:262) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) at java.lang.Thread.run(Thread.java:745) {quote} This jira is to add null stat key check to {{FileSystemStorageStatistics}}. Thanks [~hitesh] for trying in Tez and reporting this. was: Currently in {{FileSystemStorageStatistics}} we simply returns data from {{FileSystem#Statistics}}. However there is no null key check, which leads to NPE problems to downstream applications. 
For example, we got a NPE when passing a null key to {{FileSystemStorageStatistics#getLong()}}, exception stack as following: {quote} NullPointerException at org.apache.hadoop.fs.FileSystemStorageStatistics.fetch(FileSystemStorageStatistics.java:80) at org.apache.hadoop.fs.FileSystemStorageStatistics.getLong(FileSystemStorageStatistics.java:108) at org.apache.tez.runtime.metrics.FileSystemStatisticsUpdater2.updateCounters(FileSystemStatisticsUpdater2.java:60) at org.apache.tez.runtime.metrics.TaskCounterUpdater.updateCounters(TaskCounterUpdater.java:118) at org.apache.tez.runtime.RuntimeTask.setFrameworkCounters(RuntimeTask.java:172) at org.apache.tez.runtime.task.TaskRunner2Callable.callInternal(TaskRunner2Callable.java:100) at org.apache.tez.runtime.task.TaskRunner2Callable.callInternal(TaskRunner2Callable.java:37) at org.apache.tez.common.CallableWithNdc.call(CallableWithNdc.java:36) at java.util.concurrent.FutureTask.run(FutureTask.java:262) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) at java.lang.Thread.run(Thread.java:745) {quote} This jira is to add null stat key check to {{FileSystemStorageStatistics}}. > Guard null stats key in FileSystemStorageStatistics > --- > > Key: HADOOP-13288 > URL: https://issues.apache.org/jira/browse/HADOOP-13288 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs >Affects Versions: 2.8.0, 3.0.0-alpha1 >Reporter: Mingliang Liu >Assignee: Mingliang Liu > > Currently in {{FileSystemStorageStatistics}} we simply returns data from > {{FileSystem#Statistics}}. However there is no null key check, which leads to > NPE problems to downstream applications. 
For example, we got a NPE when > passing a null key to {{FileSystemStorageStatistics#getLong()}}, exception > stack as following: > {quote} > NullPointerException > at > org.apache.hadoop.fs.FileSystemStorageStatistics.fetch(FileSystemStorageStatistics.java:80) > at > org.apache.hadoop.fs.FileSystemStorageStatistics.getLong(FileSystemStorageStatistics.java:108) > at > org.apache.tez.runtime.metrics.FileSystemStatisticsUpdater2.updateCounters(FileSystemStatisticsUpdater2.java:60) > at > org.apache.tez.runtime.metrics.TaskCounterUpdater.updateCounters(TaskCounterUpdater.java:118) > at > org.apache.tez.runtime.RuntimeTask.setFrameworkCounters(RuntimeTask.java:172) > at > org.apache.tez.runtime.task.TaskRunner2Callable.callInternal(TaskRunner2Callable.java:100) > at > org.apache.tez.runtime.task.TaskRunner2Callable.callInternal(TaskRunner2Callable.java:37) > at org.apache.tez.common.CallableWithNdc.call(CallableWithNdc.java:36) > at java.util.concurrent.FutureTask.run(FutureTask.java:262) > at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) > at >
[jira] [Commented] (HADOOP-13287) TestS3ACredentials#testInstantiateFromURL fails if AWS secret key contains '+'.
[ https://issues.apache.org/jira/browse/HADOOP-13287?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15337091#comment-15337091 ] Hadoop QA commented on HADOOP-13287: | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 20s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 52s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 15s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 13s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 18s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 14s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 24s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 13s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 12s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 12s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 12s{color} | {color:green} 
the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 10s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 14s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 9s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 30s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 10s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 12s{color} | {color:green} hadoop-aws in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 15s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 12m 31s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:e2f6409 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12811471/HADOOP-13287.001.patch | | JIRA Issue | HADOOP-13287 | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux ec989efeb769 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 2800695 | | Default Java | 1.8.0_91 | | findbugs | v3.0.0 | | Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/9820/testReport/ | | modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws | | Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/9820/console | | Powered by | Apache Yetus 0.4.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > TestS3ACredentials#testInstantiateFromURL fails if AWS secret key contains > '+'. > --- > > Key: HADOOP-13287 > URL: https://issues.apache.org/jira/browse/HADOOP-13287 > Project: Hadoop Common > Issue Type: Bug > Components: fs/s3, test >Reporter: Chris Nauroth >Assignee: Chris Nauroth >Priority: Minor > Attachments: HADOOP-13287.001.patch > > > HADOOP-3733 fixed accessing S3A with credentials on the command line for an > AWS secret key containing a '/'.
[jira] [Created] (HADOOP-13288) Guard null stats key in FileSystemStorageStatistics
Mingliang Liu created HADOOP-13288: -- Summary: Guard null stats key in FileSystemStorageStatistics Key: HADOOP-13288 URL: https://issues.apache.org/jira/browse/HADOOP-13288 Project: Hadoop Common Issue Type: Sub-task Reporter: Mingliang Liu Assignee: Mingliang Liu Currently in {{FileSystemStorageStatistics}} we simply return data from {{FileSystem#Statistics}}. However, there is no null key check, which leads to NPEs in downstream applications. For example, we got an NPE when passing a null key to {{FileSystemStorageStatistics#getLong()}}, with the exception stack as follows: {quote} NullPointerException at org.apache.hadoop.fs.FileSystemStorageStatistics.fetch(FileSystemStorageStatistics.java:80) at org.apache.hadoop.fs.FileSystemStorageStatistics.getLong(FileSystemStorageStatistics.java:108) at org.apache.tez.runtime.metrics.FileSystemStatisticsUpdater2.updateCounters(FileSystemStatisticsUpdater2.java:60) at org.apache.tez.runtime.metrics.TaskCounterUpdater.updateCounters(TaskCounterUpdater.java:118) at org.apache.tez.runtime.RuntimeTask.setFrameworkCounters(RuntimeTask.java:172) at org.apache.tez.runtime.task.TaskRunner2Callable.callInternal(TaskRunner2Callable.java:100) at org.apache.tez.runtime.task.TaskRunner2Callable.callInternal(TaskRunner2Callable.java:37) at org.apache.tez.common.CallableWithNdc.call(CallableWithNdc.java:36) at java.util.concurrent.FutureTask.run(FutureTask.java:262) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) at java.lang.Thread.run(Thread.java:745) {quote} This jira is to add a null stat key check to {{FileSystemStorageStatistics}}. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-3733) "s3:" URLs break when Secret Key contains a slash, even if encoded
[ https://issues.apache.org/jira/browse/HADOOP-3733?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15337054#comment-15337054 ] Chris Nauroth commented on HADOOP-3733: --- The new test fails if your AWS secret key contains a '+'. I have posted a patch on HADOOP-13287. > "s3:" URLs break when Secret Key contains a slash, even if encoded > -- > > Key: HADOOP-3733 > URL: https://issues.apache.org/jira/browse/HADOOP-3733 > Project: Hadoop Common > Issue Type: Bug > Components: fs/s3 >Affects Versions: 0.17.1, 2.0.2-alpha >Reporter: Stuart Sierra >Assignee: Steve Loughran >Priority: Minor > Fix For: 2.8.0 > > Attachments: HADOOP-3733-20130223T011025Z.patch, > HADOOP-3733-branch-2-001.patch, HADOOP-3733-branch-2-002.patch, > HADOOP-3733-branch-2-003.patch, HADOOP-3733-branch-2-004.patch, > HADOOP-3733-branch-2-005.patch, HADOOP-3733-branch-2-006.patch, > HADOOP-3733-branch-2-007.patch, HADOOP-3733.patch, hadoop-3733.patch > > > When using URLs of the form s3://ID:SECRET@BUCKET/ at the command line, > distcp fails if the SECRET contains a slash, even when the slash is > URL-encoded as %2F. > Say your AWS Access Key ID is RYWX12N9WCY42XVOL8WH > And your AWS Secret Key is Xqj1/NMvKBhl1jqKlzbYJS66ua0e8z7Kkvptl9bv > And your bucket is called "mybucket" > You can URL-encode the Secret Key as > Xqj1%2FNMvKBhl1jqKlzbYJS66ua0e8z7Kkvptl9bv > But this doesn't work: > {noformat} > $ bin/hadoop distcp file:///source > s3://RYWX12N9WCY42XVOL8WH:Xqj1%2FNMvKBhl1jqKlzbYJS66ua0e8z7Kkvptl9bv@mybucket/dest > 08/07/09 15:05:22 INFO util.CopyFiles: srcPaths=[file:///source] > 08/07/09 15:05:22 INFO util.CopyFiles: > destPath=s3://RYWX12N9WCY42XVOL8WH:Xqj1%2FNMvKBhl1jqKlzbYJS66ua0e8z7Kkvptl9bv@mybucket/dest > 08/07/09 15:05:23 WARN httpclient.RestS3Service: Unable to access bucket: > mybucket > org.jets3t.service.S3ServiceException: S3 HEAD request failed. 
> ResponseCode=403, ResponseMessage=Forbidden > at > org.jets3t.service.impl.rest.httpclient.RestS3Service.performRequest(RestS3Service.java:339) > ... > With failures, global counters are inaccurate; consider running with -i > Copy failed: org.apache.hadoop.fs.s3.S3Exception: > org.jets3t.service.S3ServiceException: S3 PUT failed. XML Error Message: > <?xml version="1.0" encoding="UTF-8"?><Error><Code>SignatureDoesNotMatch</Code><Message>The > request signature we calculated does not match the signature you provided. > Check your key and signing method.</Message></Error> > at > org.apache.hadoop.fs.s3.Jets3tFileSystemStore.createBucket(Jets3tFileSystemStore.java:141) > ... > {noformat} -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-13287) TestS3ACredentials#testInstantiateFromURL fails if AWS secret key contains '+'.
[ https://issues.apache.org/jira/browse/HADOOP-13287?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Chris Nauroth updated HADOOP-13287: --- Status: Patch Available (was: Open) > TestS3ACredentials#testInstantiateFromURL fails if AWS secret key contains > '+'. > --- > > Key: HADOOP-13287 > URL: https://issues.apache.org/jira/browse/HADOOP-13287 > Project: Hadoop Common > Issue Type: Bug > Components: fs/s3, test >Reporter: Chris Nauroth >Assignee: Chris Nauroth >Priority: Minor > Attachments: HADOOP-13287.001.patch > > > HADOOP-3733 fixed accessing S3A with credentials on the command line for an > AWS secret key containing a '/'. The patch added a new test suite: > {{TestS3ACredentialsInURL}}. One of the tests fails if your AWS secret key > contains a '+'. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-13287) TestS3ACredentials#testInstantiateFromURL fails if AWS secret key contains '+'.
[ https://issues.apache.org/jira/browse/HADOOP-13287?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Chris Nauroth updated HADOOP-13287: --- Attachment: HADOOP-13287.001.patch Reattaching correct patch. > TestS3ACredentials#testInstantiateFromURL fails if AWS secret key contains > '+'. > --- > > Key: HADOOP-13287 > URL: https://issues.apache.org/jira/browse/HADOOP-13287 > Project: Hadoop Common > Issue Type: Bug > Components: fs/s3, test >Reporter: Chris Nauroth >Assignee: Chris Nauroth >Priority: Minor > Attachments: HADOOP-13287.001.patch > > > HADOOP-3733 fixed accessing S3A with credentials on the command line for an > AWS secret key containing a '/'. The patch added a new test suite: > {{TestS3ACredentialsInURL}}. One of the tests fails if your AWS secret key > contains a '+'. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-13287) TestS3ACredentials#testInstantiateFromURL fails if AWS secret key contains '+'.
[ https://issues.apache.org/jira/browse/HADOOP-13287?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Chris Nauroth updated HADOOP-13287: --- Attachment: (was: HADOOP-13287.001.patch) > TestS3ACredentials#testInstantiateFromURL fails if AWS secret key contains > '+'. > --- > > Key: HADOOP-13287 > URL: https://issues.apache.org/jira/browse/HADOOP-13287 > Project: Hadoop Common > Issue Type: Bug > Components: fs/s3, test >Reporter: Chris Nauroth >Assignee: Chris Nauroth >Priority: Minor > > HADOOP-3733 fixed accessing S3A with credentials on the command line for an > AWS secret key containing a '/'. The patch added a new test suite: > {{TestS3ACredentialsInURL}}. One of the tests fails if your AWS secret key > contains a '+'. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Comment Edited] (HADOOP-13287) TestS3ACredentials#testInstantiateFromURL fails if AWS secret key contains '+'.
[ https://issues.apache.org/jira/browse/HADOOP-13287?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15337042#comment-15337042 ] Chris Nauroth edited comment on HADOOP-13287 at 6/17/16 9:49 PM: - My test run was on branch-2.8 against an S3 bucket in US-west-2. What I saw happening was a double decoding in {{S3xLoginHelper#extractLoginDetails}}: {code} public static Login extractLoginDetails(URI name) { try { String authority = name.getAuthority(); ... String password = URLDecoder.decode(login.substring(loginSplit + 1), "UTF-8"); {code} According to the JavaDocs for [{{URI#getAuthority}}|http://docs.oracle.com/javase/7/docs/api/java/net/URI.html#getAuthority()], it performs decoding already on the output. Then we do a second explicit decoding by calling {{URLDecoder#decode}}. First, {{getAuthority}} translates "%2B" to "\+". Then, {{URLDecoder#decode}} translates "\+" to " ", which isn't correct for the credentials. However, this appears to be a problem only in the JUnit test runs. I also built a distro and tested manually with URIs that contain '\+' encoded as "%2B", and that worked just fine. The reason it works fine there is because of different encoding rules applied by round-tripping through a {{Path}} before the {{FileSystem#get}} call gets triggered. With {{Path}}, the '\+' gets double-encoded to "%252B", so double-decoding at the S3A layer is correct logic. To make this work, the test should follow the same encoding as would be used on the CLI. The attached patch switches from constructing a {{URI}} to constructing a {{Path}}. I switched the exception stifling logic to catch {{IllegalArgumentException}}, because that's what {{Path}} throws. With this, the test passes with a secret containing a '+'. [~ste...@apache.org] or [~raviprak], I understand one of you might have a secret with a '/' from your work on HADOOP-3733. Would you mind testing this patch to make sure the test still passes with '/'? 
was (Author: cnauroth): My test run was on branch-2.8 against an S3 bucket in US-west-2. What I saw happening was a double decoding in {{S3xLoginHelper#extractLoginDetails}}: {code} public static Login extractLoginDetails(URI name) { try { String authority = name.getAuthority(); ... String password = URLDecoder.decode(login.substring(loginSplit + 1), "UTF-8"); {code} According to the JavaDocs for [{{URI#getAuthority}}|http://docs.oracle.com/javase/7/docs/api/java/net/URI.html#getAuthority()], it performs decoding already on the output. Then we do a second explicit decoding by calling {{URLDecoder#decode}}. First, {{getAuthority}} translates "%2B" to "\+". Then, {{URLDecoder#decode}} translates "\+" to " ", which isn't correct for the credentials. However, this appear to be only a problem in the JUnit test runs. I also built a distro and tested manually with URIs that contain '+' encoded as "%2B", and that worked just fine. The reason it works fine there is because of different encoding rules applied by round-tripping through a {{Path}} before the {{FileSystem#get}} call gets triggered. With {{Path}}, the '+' gets double-encoded to "%252B", so double-decoding at the S3A layer is correct logic. To make this work, the test should follow the same encoding as would be used on the CLI. The attached path switches from constructing a {{URI}} to constructing a {{Path}}. I switched the exception stifling logic to catch {{IllegalArgumentException}}, becauses that's what {{Path}} throws. With this, the test passes with a secret containing a '+'. [~ste...@apache.org] or [~raviprak], I understand one of you might have a secret with a '/' from your work on HADOOP-3733. Would you mind testing this patch to make sure the test still passes with '/'? > TestS3ACredentials#testInstantiateFromURL fails if AWS secret key contains > '+'. 
> --- > > Key: HADOOP-13287 > URL: https://issues.apache.org/jira/browse/HADOOP-13287 > Project: Hadoop Common > Issue Type: Bug > Components: fs/s3, test >Reporter: Chris Nauroth >Assignee: Chris Nauroth >Priority: Minor > Attachments: HADOOP-13287.001.patch > > > HADOOP-3733 fixed accessing S3A with credentials on the command line for an > AWS secret key containing a '/'. The patch added a new test suite: > {{TestS3ACredentialsInURL}}. One of the tests fails if your AWS secret key > contains a '+'. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-13287) TestS3ACredentials#testInstantiateFromURL fails if AWS secret key contains '+'.
[ https://issues.apache.org/jira/browse/HADOOP-13287?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Chris Nauroth updated HADOOP-13287: --- Attachment: HADOOP-13287.001.patch My test run was on branch-2.8 against an S3 bucket in US-west-2. What I saw happening was a double decoding in {{S3xLoginHelper#extractLoginDetails}}: {code} public static Login extractLoginDetails(URI name) { try { String authority = name.getAuthority(); ... String password = URLDecoder.decode(login.substring(loginSplit + 1), "UTF-8"); {code} According to the JavaDocs for [{{URI#getAuthority}}|http://docs.oracle.com/javase/7/docs/api/java/net/URI.html#getAuthority()], it performs decoding already on the output. Then we do a second explicit decoding by calling {{URLDecoder#decode}}. First, {{getAuthority}} translates "%2B" to "\+". Then, {{URLDecoder#decode}} translates "\+" to " ", which isn't correct for the credentials. However, this appears to be a problem only in the JUnit test runs. I also built a distro and tested manually with URIs that contain '+' encoded as "%2B", and that worked just fine. The reason it works fine there is because of different encoding rules applied by round-tripping through a {{Path}} before the {{FileSystem#get}} call gets triggered. With {{Path}}, the '+' gets double-encoded to "%252B", so double-decoding at the S3A layer is correct logic. To make this work, the test should follow the same encoding as would be used on the CLI. The attached patch switches from constructing a {{URI}} to constructing a {{Path}}. I switched the exception stifling logic to catch {{IllegalArgumentException}}, because that's what {{Path}} throws. With this, the test passes with a secret containing a '+'. [~ste...@apache.org] or [~raviprak], I understand one of you might have a secret with a '/' from your work on HADOOP-3733. Would you mind testing this patch to make sure the test still passes with '/'? 
> TestS3ACredentials#testInstantiateFromURL fails if AWS secret key contains > '+'. > --- > > Key: HADOOP-13287 > URL: https://issues.apache.org/jira/browse/HADOOP-13287 > Project: Hadoop Common > Issue Type: Bug > Components: fs/s3, test >Reporter: Chris Nauroth >Assignee: Chris Nauroth >Priority: Minor > Attachments: HADOOP-13287.001.patch > > > HADOOP-3733 fixed accessing S3A with credentials on the command line for an > AWS secret key containing a '/'. The patch added a new test suite: > {{TestS3ACredentialsInURL}}. One of the tests fails if your AWS secret key > contains a '+'. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
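The double decoding described in the comment above is reproducible with plain JDK calls. A minimal standalone sketch (the access key, secret, and bucket below are made up for illustration; the variable names loosely mirror {{S3xLoginHelper#extractLoginDetails}} but this is not the Hadoop code):

```java
import java.net.URI;
import java.net.URLDecoder;

public class DoubleDecodeDemo {
    public static void main(String[] args) throws Exception {
        // A secret containing '+', percent-encoded once as %2B in the URI.
        URI name = new URI("s3a://AKID:secret%2Bkey@mybucket/");

        // First decode: URI#getAuthority already turns %2B back into '+'.
        String authority = name.getAuthority();
        String login = authority.substring(0, authority.indexOf('@'));

        // Second decode: URLDecoder treats the now-bare '+' as an encoded
        // space, corrupting the credential exactly as the comment describes.
        String password = URLDecoder.decode(
                login.substring(login.indexOf(':') + 1), "UTF-8");

        System.out.println(authority); // AKID:secret+key@mybucket
        System.out.println(password);  // secret key  (the '+' is lost)
    }
}
```

If the secret were double-encoded first (the {{Path}} round-trip turns '+' into "%252B"), the two decodes would compose correctly, which is why the CLI path works while the test's direct {{URI}} construction fails.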
[jira] [Created] (HADOOP-13287) TestS3ACredentials#testInstantiateFromURL fails if AWS secret key contains '+'.
Chris Nauroth created HADOOP-13287: -- Summary: TestS3ACredentials#testInstantiateFromURL fails if AWS secret key contains '+'. Key: HADOOP-13287 URL: https://issues.apache.org/jira/browse/HADOOP-13287 Project: Hadoop Common Issue Type: Bug Components: fs/s3, test Reporter: Chris Nauroth Assignee: Chris Nauroth Priority: Minor HADOOP-3733 fixed accessing S3A with credentials on the command line for an AWS secret key containing a '/'. The patch added a new test suite: {{TestS3ACredentialsInURL}}. One of the tests fails if your AWS secret key contains a '+'. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-13280) FileSystemStorageStatistics#getLong(“readOps“) should return readOps + largeReadOps
[ https://issues.apache.org/jira/browse/HADOOP-13280?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15337000#comment-15337000 ] Mingliang Liu commented on HADOOP-13280: Thanks [~cmccabe] for the review and comments. {code} Long.valueOf(data.getLargeReadOps()); {code} {{data.getReadOps()}} returns an int, while {{Long.valueOf(long)}} takes a long parameter, so the implicit widening cast cannot be avoided, I think. Besides, my IntelliJ 2016.1 with Java 8 suggests the boxing is unnecessary. But I think it's actually a matter of coding style. > FileSystemStorageStatistics#getLong(“readOps“) should return readOps + > largeReadOps > --- > > Key: HADOOP-13280 > URL: https://issues.apache.org/jira/browse/HADOOP-13280 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs >Affects Versions: 2.8.0 >Reporter: Mingliang Liu >Assignee: Mingliang Liu > Fix For: 2.8.0 > > Attachments: HADOOP-13280-branch-2.8.000.patch, > HADOOP-13280.000.patch, HADOOP-13280.001.patch > > > Currently a {{FileSystemStorageStatistics}} instance simply returns data from > {{FileSystem$Statistics}}. As to {{readOps}}, the > {{FileSystem$Statistics#getReadOps()}} returns {{readOps + largeReadOps}}. We > should make the {{FileSystemStorageStatistics#getLong(“readOps“)}} return the > sum as well. > Moreover, there are no unit tests for {{FileSystemStorageStatistics}} and this > JIRA will also address this. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
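The widening-versus-boxing point in the comment above can be checked with a two-line experiment (a plain int stands in for the {{FileSystem$Statistics}} counters; this is illustrative, not the Hadoop code):

```java
public class BoxingDemo {
    public static void main(String[] args) {
        int readOps = 42;  // the underlying counters are int

        // Long.valueOf(long): the int argument first widens implicitly to
        // long, then Long.valueOf boxes it; the widening cannot be avoided.
        Long explicit = Long.valueOf(readOps);

        // Autoboxing after an explicit widening cast yields an equal value,
        // so which form to use is indeed a matter of coding style.
        Long auto = (long) readOps;

        System.out.println(explicit.equals(auto)); // true
    }
}
```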
[jira] [Commented] (HADOOP-13263) Reload cached groups in background after expiry
[ https://issues.apache.org/jira/browse/HADOOP-13263?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15336992#comment-15336992 ] Stephen O'Donnell commented on HADOOP-13263: Thanks for the further review [~arpitagarwal]. Nice spot on the missing synchronized block and thanks for pointing out the AtomicLong - I had not come across it before. I uploaded a 3rd version of the patch incorporating the second round of review comments and 1 more test to check the counters. I spotted the compile error in the QA run, and I got the same one when I ran the compile from the top level - looks like I removed the 'public' from one of the constructors by mistake, so I have fixed that too. > Reload cached groups in background after expiry > --- > > Key: HADOOP-13263 > URL: https://issues.apache.org/jira/browse/HADOOP-13263 > Project: Hadoop Common > Issue Type: Improvement >Reporter: Stephen O'Donnell >Assignee: Stephen O'Donnell > Attachments: HADOOP-13263.001.patch, HADOOP-13263.002.patch, > HADOOP-13263.003.patch > > > In HADOOP-11238 the Guava cache was introduced to allow refreshes on the > Namenode group cache to run in the background, avoiding many slow group > lookups. Even with this change, I have seen quite a few clusters with issues > due to slow group lookups. The problem is most prevalent in HA clusters, > where a slow group lookup on the hdfs user can fail to return for over 45 > seconds causing the Failover Controller to kill it. > The way the current Guava cache implementation works is approximately: > 1) On initial load, the first thread to request groups for a given user > blocks until it returns. Any subsequent threads requesting that user block > until that first thread populates the cache. > 2) When the key expires, the first thread to hit the cache after expiry > blocks. While it is blocked, other threads will return the old value. > I feel it is this blocking thread that still gives the Namenode issues on > slow group lookups. 
If the call from the FC is the one that blocks and > lookups are slow, it can cause the NN to be killed. > Guava has the ability to refresh expired keys completely in the background, > where the first thread that hits an expired key schedules a background cache > reload, but still returns the old value. Then the cache is eventually > updated. This patch introduces this background reload feature. There are two > new parameters: > 1) hadoop.security.groups.cache.background.reload - default false to keep the > current behaviour. Set to true to enable a small thread pool and background > refresh for expired keys > 2) hadoop.security.groups.cache.background.reload.threads - only relevant if > the above is set to true. Controls how many threads are in the background > refresh pool. Default is 1, which is likely to be enough. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
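The refresh-after-expiry semantics described above can be sketched in plain JDK code. This is NOT Guava's implementation and not the patch's {{Groups}} code; it is a minimal illustration of the behavior the patch enables: an expired key keeps serving its stale value while a single background thread reloads it, so no caller blocks on a slow lookup after the initial load.

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.function.Function;

public class BackgroundRefreshCache<K, V> {
    private static final class Entry<V> {
        final V value;
        final long loadedAt;
        Entry(V value, long loadedAt) { this.value = value; this.loadedAt = loadedAt; }
    }

    private final ConcurrentHashMap<K, Entry<V>> cache = new ConcurrentHashMap<>();
    private final ConcurrentHashMap<K, Boolean> refreshing = new ConcurrentHashMap<>();
    // One daemon thread, mirroring the proposed default of
    // hadoop.security.groups.cache.background.reload.threads = 1.
    private final ExecutorService pool = Executors.newFixedThreadPool(1, r -> {
        Thread t = new Thread(r, "cache-refresh");
        t.setDaemon(true);
        return t;
    });
    private final Function<K, V> loader;
    private final long refreshAfterMs;

    public BackgroundRefreshCache(Function<K, V> loader, long refreshAfterMs) {
        this.loader = loader;
        this.refreshAfterMs = refreshAfterMs;
    }

    public V get(K key) {
        Entry<V> e = cache.get(key);
        if (e == null) {
            // Initial load still blocks, as in the current behaviour.
            V v = loader.apply(key);
            cache.put(key, new Entry<>(v, System.currentTimeMillis()));
            return v;
        }
        if (System.currentTimeMillis() - e.loadedAt > refreshAfterMs
                && refreshing.putIfAbsent(key, Boolean.TRUE) == null) {
            // The first caller after expiry schedules the reload; it and all
            // later callers keep receiving the stale value in the meantime.
            pool.submit(() -> {
                try {
                    cache.put(key, new Entry<>(loader.apply(key),
                            System.currentTimeMillis()));
                } finally {
                    refreshing.remove(key);
                }
            });
        }
        return e.value;
    }
}
```

Guava provides the same pattern via {{CacheBuilder#refreshAfterWrite}} with an async-reloading {{CacheLoader}}, which is what the patch builds on.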
[jira] [Updated] (HADOOP-13263) Reload cached groups in background after expiry
[ https://issues.apache.org/jira/browse/HADOOP-13263?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Stephen O'Donnell updated HADOOP-13263: --- Attachment: HADOOP-13263.003.patch > Reload cached groups in background after expiry > --- > > Key: HADOOP-13263 > URL: https://issues.apache.org/jira/browse/HADOOP-13263 > Project: Hadoop Common > Issue Type: Improvement >Reporter: Stephen O'Donnell >Assignee: Stephen O'Donnell > Attachments: HADOOP-13263.001.patch, HADOOP-13263.002.patch, > HADOOP-13263.003.patch > > > In HADOOP-11238 the Guava cache was introduced to allow refreshes on the > Namenode group cache to run in the background, avoiding many slow group > lookups. Even with this change, I have seen quite a few clusters with issues > due to slow group lookups. The problem is most prevalent in HA clusters, > where a slow group lookup on the hdfs user can fail to return for over 45 > seconds causing the Failover Controller to kill it. > The way the current Guava cache implementation works is approximately: > 1) On initial load, the first thread to request groups for a given user > blocks until it returns. Any subsequent threads requesting that user block > until that first thread populates the cache. > 2) When the key expires, the first thread to hit the cache after expiry > blocks. While it is blocked, other threads will return the old value. > I feel it is this blocking thread that still gives the Namenode issues on > slow group lookups. If the call from the FC is the one that blocks and > lookups are slow, if can cause the NN to be killed. > Guava has the ability to refresh expired keys completely in the background, > where the first thread that hits an expired key schedules a background cache > reload, but still returns the old value. Then the cache is eventually > updated. This patch introduces this background reload feature. 
There are two > new parameters: > 1) hadoop.security.groups.cache.background.reload - default false to keep the > current behaviour. Set to true to enable a small thread pool and background > refresh for expired keys > 2) hadoop.security.groups.cache.background.reload.threads - only relevant if > the above is set to true. Controls how many threads are in the background > refresh pool. Default is 1, which is likely to be enough. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-13263) Reload cached groups in background after expiry
[ https://issues.apache.org/jira/browse/HADOOP-13263?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Stephen O'Donnell updated HADOOP-13263: --- Attachment: (was: HADOOP-13263.003.patch) > Reload cached groups in background after expiry > --- > > Key: HADOOP-13263 > URL: https://issues.apache.org/jira/browse/HADOOP-13263 > Project: Hadoop Common > Issue Type: Improvement >Reporter: Stephen O'Donnell >Assignee: Stephen O'Donnell > Attachments: HADOOP-13263.001.patch, HADOOP-13263.002.patch > > > In HADOOP-11238 the Guava cache was introduced to allow refreshes on the > Namenode group cache to run in the background, avoiding many slow group > lookups. Even with this change, I have seen quite a few clusters with issues > due to slow group lookups. The problem is most prevalent in HA clusters, > where a slow group lookup on the hdfs user can fail to return for over 45 > seconds causing the Failover Controller to kill it. > The way the current Guava cache implementation works is approximately: > 1) On initial load, the first thread to request groups for a given user > blocks until it returns. Any subsequent threads requesting that user block > until that first thread populates the cache. > 2) When the key expires, the first thread to hit the cache after expiry > blocks. While it is blocked, other threads will return the old value. > I feel it is this blocking thread that still gives the Namenode issues on > slow group lookups. If the call from the FC is the one that blocks and > lookups are slow, if can cause the NN to be killed. > Guava has the ability to refresh expired keys completely in the background, > where the first thread that hits an expired key schedules a background cache > reload, but still returns the old value. Then the cache is eventually > updated. This patch introduces this background reload feature. There are two > new parameters: > 1) hadoop.security.groups.cache.background.reload - default false to keep the > current behaviour. 
Set to true to enable a small thread pool and background > refresh for expired keys > 2) hadoop.security.groups.cache.background.reload.threads - only relevant if > the above is set to true. Controls how many threads are in the background > refresh pool. Default is 1, which is likely to be enough. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-13263) Reload cached groups in background after expiry
[ https://issues.apache.org/jira/browse/HADOOP-13263?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Stephen O'Donnell updated HADOOP-13263: --- Attachment: HADOOP-13263.003.patch > Reload cached groups in background after expiry > --- > > Key: HADOOP-13263 > URL: https://issues.apache.org/jira/browse/HADOOP-13263 > Project: Hadoop Common > Issue Type: Improvement >Reporter: Stephen O'Donnell >Assignee: Stephen O'Donnell > Attachments: HADOOP-13263.001.patch, HADOOP-13263.002.patch, > HADOOP-13263.003.patch > > > In HADOOP-11238 the Guava cache was introduced to allow refreshes on the > Namenode group cache to run in the background, avoiding many slow group > lookups. Even with this change, I have seen quite a few clusters with issues > due to slow group lookups. The problem is most prevalent in HA clusters, > where a slow group lookup on the hdfs user can fail to return for over 45 > seconds causing the Failover Controller to kill it. > The way the current Guava cache implementation works is approximately: > 1) On initial load, the first thread to request groups for a given user > blocks until it returns. Any subsequent threads requesting that user block > until that first thread populates the cache. > 2) When the key expires, the first thread to hit the cache after expiry > blocks. While it is blocked, other threads will return the old value. > I feel it is this blocking thread that still gives the Namenode issues on > slow group lookups. If the call from the FC is the one that blocks and > lookups are slow, if can cause the NN to be killed. > Guava has the ability to refresh expired keys completely in the background, > where the first thread that hits an expired key schedules a background cache > reload, but still returns the old value. Then the cache is eventually > updated. This patch introduces this background reload feature. 
There are two > new parameters: > 1) hadoop.security.groups.cache.background.reload - default false to keep the > current behaviour. Set to true to enable a small thread pool and background > refresh for expired keys > 2) hadoop.security.groups.cache.background.reload.threads - only relevant if > the above is set to true. Controls how many threads are in the background > refresh pool. Default is 1, which is likely to be enough. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-13263) Reload cached groups in background after expiry
[ https://issues.apache.org/jira/browse/HADOOP-13263?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15336819#comment-15336819 ] Hadoop QA commented on HADOOP-13263: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 27s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 8s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 8m 25s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 25s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 6s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 15s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 39s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 47s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 43s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} compile {color} | {color:red} 2m 27s{color} | {color:red} root in the patch failed. 
{color} | | {color:red}-1{color} | {color:red} javac {color} | {color:red} 2m 27s{color} | {color:red} root in the patch failed. {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 27s{color} | {color:red} hadoop-common-project/hadoop-common: The patch generated 85 new + 216 unchanged - 0 fixed = 301 total (was 216) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 57s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 11s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 1m 46s{color} | {color:red} hadoop-common-project/hadoop-common generated 3 new + 0 unchanged - 0 fixed = 3 total (was 0) {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 46s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 17m 25s{color} | {color:red} hadoop-common in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 17s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 45m 53s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | FindBugs | module:hadoop-common-project/hadoop-common | | | Should org.apache.hadoop.security.Groups$Counter be a _static_ inner class? At Groups.java:inner class? 
At Groups.java:[lines 249-263] | | | Increment of volatile field org.apache.hadoop.security.Groups$Counter.value in org.apache.hadoop.security.Groups$Counter.decr() At Groups.java:in org.apache.hadoop.security.Groups$Counter.decr() At Groups.java:[line 262] | | | Increment of volatile field org.apache.hadoop.security.Groups$Counter.value in org.apache.hadoop.security.Groups$Counter.incr() At Groups.java:in org.apache.hadoop.security.Groups$Counter.incr() At Groups.java:[line 258] | | Failed junit tests | hadoop.metrics2.impl.TestGangliaMetrics | | Timed out junit tests | org.apache.hadoop.http.TestHttpServerLifecycle | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:e2f6409 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12811219/HADOOP-13263.002.patch | | JIRA Issue | HADOOP-13263 | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux 884937add4bd 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 2800695 | | Default Java | 1.8.0_91 | | findbugs | v3.0.0 | | compile |
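The three FindBugs warnings above all flag the same hazard: {{value++}} on a volatile field is a non-atomic read-modify-write, so concurrent increments can be lost. A standalone demonstration of the hazard and of the {{AtomicLong}} fix suggested earlier in the thread (illustrative code, not the actual {{Groups$Counter}}):

```java
import java.util.concurrent.atomic.AtomicLong;

public class VolatileCounterDemo {
    // What FindBugs flags: '++' on a volatile is read, add, write; two
    // threads can read the same value and one increment gets lost.
    static volatile long unsafeValue = 0;
    // The usual fix: AtomicLong performs the increment atomically.
    static final AtomicLong safeValue = new AtomicLong();

    public static void main(String[] args) throws InterruptedException {
        Runnable work = () -> {
            for (int i = 0; i < 100_000; i++) {
                unsafeValue++;
                safeValue.incrementAndGet();
            }
        };
        Thread a = new Thread(work);
        Thread b = new Thread(work);
        a.start(); b.start();
        a.join(); b.join();

        System.out.println(safeValue.get()); // always 200000
        System.out.println(unsafeValue);     // usually less: lost updates
    }
}
```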
[jira] [Commented] (HADOOP-13203) S3a: Consider reducing the number of connection aborts by setting correct length in s3 request
[ https://issues.apache.org/jira/browse/HADOOP-13203?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15336814#comment-15336814 ] Hadoop QA commented on HADOOP-13203: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 19s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 2 new or modified test files. {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 19s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 37s{color} | {color:green} branch-2 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 30s{color} | {color:green} branch-2 passed with JDK v1.8.0_91 {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 21s{color} | {color:green} branch-2 passed with JDK v1.7.0_101 {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 21s{color} | {color:green} branch-2 passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 19s{color} | {color:green} branch-2 passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 33s{color} | {color:green} branch-2 passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 12s{color} | {color:green} branch-2 passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 59s{color} | {color:green} branch-2 passed with JDK v1.8.0_91 {color} | | 
{color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 11s{color} | {color:green} branch-2 passed with JDK v1.7.0_101 {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 14s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 56s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 0s{color} | {color:green} the patch passed with JDK v1.8.0_91 {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 7m 0s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 43s{color} | {color:green} the patch passed with JDK v1.7.0_101 {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 43s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 1m 21s{color} | {color:red} root: The patch generated 5 new + 44 unchanged - 7 fixed = 49 total (was 51) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 20s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 28s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s{color} | {color:red} The patch has 50 line(s) that end in whitespace. Use git apply --whitespace=fix. 
{color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 46s{color} | {color:red} hadoop-tools/hadoop-aws generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 57s{color} | {color:green} the patch passed with JDK v1.8.0_91 {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 13s{color} | {color:green} the patch passed with JDK v1.7.0_101 {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 7m 50s{color} | {color:green} hadoop-common in the patch passed with JDK v1.7.0_101. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 16s{color} | {color:green} hadoop-aws in the patch passed with JDK v1.7.0_101. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 21s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 67m 21s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | FindBugs | module:hadoop-tools/hadoop-aws | | | Unread field:S3AInputStream.java:[line 173] | \\ \\ || Subsystem || Report/Notes || | Docker |
[jira] [Commented] (HADOOP-13286) add a scale test to do gunzip and linecount
[ https://issues.apache.org/jira/browse/HADOOP-13286?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15336786#comment-15336786 ] Chris Nauroth commented on HADOOP-13286: It's not clear to me that this test is distinct enough from others that it justifies the increased test runtime, shown here as ~2 minutes (though parallel execution can mask that). Using a compression codec and line-oriented text formats is a common pattern, but that's just extra pieces on top of a sequential file access pattern at the {{FileSystem}} layer. In HADOOP-13203, the existing {{TestS3AInputStreamPerformance#testReadAheadDefault}} was sufficient for me to flag a performance regression on sequential reads. Could the {{logStreamStatistics}} and {{NanoTimer}} usage be applied to that test or other pre-existing tests instead of adding a new test? If I missed something unique about what this test is covering, please let me know, and I'll go ahead and review it. > add a scale test to do gunzip and linecount > --- > > Key: HADOOP-13286 > URL: https://issues.apache.org/jira/browse/HADOOP-13286 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Affects Versions: 2.8.0 >Reporter: Steve Loughran >Assignee: Steve Loughran > Attachments: HADOOP-13286-branch-2-001.patch > > > the HADOOP-13203 patch proposal showed that there were performance problems > downstream which weren't surfacing in the current scale tests. > Trying to decompress the .gz test file and then go through it with LineReader > models a basic use case: parse a .csv.gz data source. > Add this, with metric printing -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
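For reference, the access pattern the issue description models — gunzip a file and count its lines with a line reader, printing timing — looks roughly like this in plain JDK streams. This is an illustrative sketch, not the actual scale test: GZIPInputStream/BufferedReader stand in for Hadoop's codec and {{LineReader}}, and System.nanoTime for the {{NanoTimer}} mentioned above.

```java
import java.io.*;
import java.nio.charset.StandardCharsets;
import java.util.zip.GZIPInputStream;
import java.util.zip.GZIPOutputStream;

public class GunzipLineCount {
    // Count lines in a gzipped text stream, timing the full pass.
    static long countLines(InputStream gzipped) throws IOException {
        long start = System.nanoTime();
        long lines = 0;
        try (BufferedReader r = new BufferedReader(
                new InputStreamReader(new GZIPInputStream(gzipped), StandardCharsets.UTF_8))) {
            while (r.readLine() != null) {
                lines++;
            }
        }
        long elapsedMs = (System.nanoTime() - start) / 1_000_000;
        System.out.println(lines + " lines in " + elapsedMs + " ms");
        return lines;
    }

    public static void main(String[] args) throws IOException {
        // Build a small in-memory .gz "file" with three CSV lines.
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        try (Writer w = new OutputStreamWriter(new GZIPOutputStream(buf), StandardCharsets.UTF_8)) {
            w.write("a,1\nb,2\nc,3\n");
        }
        long n = countLines(new ByteArrayInputStream(buf.toByteArray()));
        if (n != 3) throw new AssertionError("expected 3 lines, got " + n);
    }
}
```

Against a real .csv.gz object the InputStream would come from FileSystem.open(), which is where the stream statistics being discussed would be logged.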
[jira] [Commented] (HADOOP-13263) Reload cached groups in background after expiry
[ https://issues.apache.org/jira/browse/HADOOP-13263?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15336748#comment-15336748 ] Arpit Agarwal commented on HADOOP-13263: Thank you for the updated patch [~sodonnell]. This is looking good. A few comments: # We can use [AtomicLong|https://docs.oracle.com/javase/7/docs/api/java/util/concurrent/atomic/AtomicLong.html] in place of {{Counter}}. # The {{if (executorService == null)}} check in reload should be protected with a synchronized block. # This exception block in {{reload}} exists just to increment the counter. Instead you can have a boolean {{success}} that is set to true at the end of the try block. You can increment {{backgroundRefreshException}} in {{finally}} if the boolean is false. # Nitpick: {{new LinkedBlockingQueue()}} can be replaced with {{new LinkedBlockingQueue<>()}}. Rest lgtm. I am still reviewing the tests (thank you for the extensive new tests!). > Reload cached groups in background after expiry > --- > > Key: HADOOP-13263 > URL: https://issues.apache.org/jira/browse/HADOOP-13263 > Project: Hadoop Common > Issue Type: Improvement >Reporter: Stephen O'Donnell >Assignee: Stephen O'Donnell > Attachments: HADOOP-13263.001.patch, HADOOP-13263.002.patch > > > In HADOOP-11238 the Guava cache was introduced to allow refreshes on the > Namenode group cache to run in the background, avoiding many slow group > lookups. Even with this change, I have seen quite a few clusters with issues > due to slow group lookups. The problem is most prevalent in HA clusters, > where a slow group lookup on the hdfs user can fail to return for over 45 > seconds causing the Failover Controller to kill it. > The way the current Guava cache implementation works is approximately: > 1) On initial load, the first thread to request groups for a given user > blocks until it returns. Any subsequent threads requesting that user block > until that first thread populates the cache. 
> 2) When the key expires, the first thread to hit the cache after expiry > blocks. While it is blocked, other threads will return the old value. > I feel it is this blocking thread that still gives the Namenode issues on > slow group lookups. If the call from the FC is the one that blocks and > lookups are slow, it can cause the NN to be killed. > Guava has the ability to refresh expired keys completely in the background, > where the first thread that hits an expired key schedules a background cache > reload, but still returns the old value. Then the cache is eventually > updated. This patch introduces this background reload feature. There are two > new parameters: > 1) hadoop.security.groups.cache.background.reload - default false to keep the > current behaviour. Set to true to enable a small thread pool and background > refresh for expired keys > 2) hadoop.security.groups.cache.background.reload.threads - only relevant if > the above is set to true. Controls how many threads are in the background > refresh pool. Default is 1, which is likely to be enough. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
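The three review suggestions above (AtomicLong instead of the custom {{Counter}}, a synchronized guard around the lazy executor creation, and a success flag incremented in {{finally}}) can be sketched as follows. The class and method names here are illustrative stand-ins, not the actual Groups code from the patch.

```java
import java.util.Arrays;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.atomic.AtomicLong;

public class BackgroundReloadSketch {
    // AtomicLong replaces a hand-rolled Counter; note that ++ on a
    // volatile long is not atomic (the FindBugs warning above).
    private final AtomicLong backgroundRefreshException = new AtomicLong();
    private ExecutorService executorService; // created lazily

    // Synchronized so two threads racing past the null check cannot
    // both create a thread pool.
    public synchronized ExecutorService getExecutor(int threads) {
        if (executorService == null) {
            executorService = Executors.newFixedThreadPool(threads);
        }
        return executorService;
    }

    // Count failures with a success flag in finally, instead of a catch
    // block whose only job is incrementing the counter.
    public List<String> reload(String user) {
        boolean success = false;
        try {
            List<String> groups = fetchGroups(user);
            success = true;
            return groups;
        } finally {
            if (!success) {
                backgroundRefreshException.incrementAndGet();
            }
        }
    }

    public long failures() {
        return backgroundRefreshException.get();
    }

    // Stand-in for the real (possibly slow) group lookup.
    private List<String> fetchGroups(String user) {
        if ("unknown".equals(user)) {
            throw new RuntimeException("no groups for " + user);
        }
        return Arrays.asList(user, "users");
    }
}
```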
[jira] [Updated] (HADOOP-13263) Reload cached groups in background after expiry
[ https://issues.apache.org/jira/browse/HADOOP-13263?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Arpit Agarwal updated HADOOP-13263: --- Status: Patch Available (was: Open) > Reload cached groups in background after expiry > --- > > Key: HADOOP-13263 > URL: https://issues.apache.org/jira/browse/HADOOP-13263 > Project: Hadoop Common > Issue Type: Improvement >Reporter: Stephen O'Donnell >Assignee: Stephen O'Donnell > Attachments: HADOOP-13263.001.patch, HADOOP-13263.002.patch > > > In HADOOP-11238 the Guava cache was introduced to allow refreshes on the > Namenode group cache to run in the background, avoiding many slow group > lookups. Even with this change, I have seen quite a few clusters with issues > due to slow group lookups. The problem is most prevalent in HA clusters, > where a slow group lookup on the hdfs user can fail to return for over 45 > seconds causing the Failover Controller to kill it. > The way the current Guava cache implementation works is approximately: > 1) On initial load, the first thread to request groups for a given user > blocks until it returns. Any subsequent threads requesting that user block > until that first thread populates the cache. > 2) When the key expires, the first thread to hit the cache after expiry > blocks. While it is blocked, other threads will return the old value. > I feel it is this blocking thread that still gives the Namenode issues on > slow group lookups. If the call from the FC is the one that blocks and > lookups are slow, it can cause the NN to be killed. > Guava has the ability to refresh expired keys completely in the background, > where the first thread that hits an expired key schedules a background cache > reload, but still returns the old value. Then the cache is eventually > updated. This patch introduces this background reload feature. There are two > new parameters: > 1) hadoop.security.groups.cache.background.reload - default false to keep the > current behaviour.
Set to true to enable a small thread pool and background > refresh for expired keys > 2) hadoop.security.groups.cache.background.reload.threads - only relevant if > the above is set to true. Controls how many threads are in the background > refresh pool. Default is 1, which is likely to be enough. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-13251) DelegationTokenAuthenticationHandler should detect actual renewer when renew token
[ https://issues.apache.org/jira/browse/HADOOP-13251?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15336698#comment-15336698 ] Hadoop QA commented on HADOOP-13251: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 19s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 8s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 16s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 42s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 29s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 18s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 25s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 38s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 56s{color} | {color:green} trunk passed {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 6s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
54s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 17s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 7m 17s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 34s{color} | {color:red} hadoop-common-project: The patch generated 13 new + 188 unchanged - 3 fixed = 201 total (was 191) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 25s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 27s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 15s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 56s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 8m 32s{color} | {color:red} hadoop-common in the patch failed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 2s{color} | {color:green} hadoop-kms in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 22s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 44m 54s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.ipc.TestRPC | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:e2f6409 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12811417/HADOOP-13251.03.patch | | JIRA Issue | HADOOP-13251 | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux 93a394404e65 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 2800695 | | Default Java | 1.8.0_91 | | findbugs | v3.0.0 | | checkstyle | https://builds.apache.org/job/PreCommit-HADOOP-Build/9816/artifact/patchprocess/diff-checkstyle-hadoop-common-project.txt | | unit | https://builds.apache.org/job/PreCommit-HADOOP-Build/9816/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common.txt | | Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/9816/testReport/ | | modules | C: hadoop-common-project/hadoop-common
[jira] [Commented] (HADOOP-13188) S3A file-create should throw error rather than overwrite directories
[ https://issues.apache.org/jira/browse/HADOOP-13188?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15336655#comment-15336655 ] Steve Loughran commented on HADOOP-13188: - Thanks, I'll commit it over the weekend. If you get S3a.toString() on your FS instance, you'll get the full io stats printed out. Be interesting to see the changes. I don't think you should be seeing any perf diff. There's a call to exists(path), which is getFileStatus in a try/catch block. Same overhead, simply without the check for the path referring to a directory if it is there. > S3A file-create should throw error rather than overwrite directories > > > Key: HADOOP-13188 > URL: https://issues.apache.org/jira/browse/HADOOP-13188 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Affects Versions: 2.7.2 >Reporter: Raymie Stata >Assignee: Steve Loughran >Priority: Minor > Attachments: HADOOP-13188-branch-2-001.patch > > > S3A.create(Path,FsPermission,boolean,int,short,long,Progressable) is not > checking to see if it's being asked to overwrite a directory. It could > easily do so, and should throw an error in this case. > There is a test-case for this in AbstractFSContractTestBase, but it's being > skipped because S3A is a blobstore. However, both the Azure and Swift file > systems make this test, and the new S3 one should as well. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
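For context, the check being discussed — a single getFileStatus() in a try/catch, the same metadata round trip as exists(), plus a directory test — can be sketched with stand-in types. This is illustrative only, not the actual S3A patch (which throws Hadoop's FileAlreadyExistsException rather than a plain IOException):

```java
import java.io.FileNotFoundException;
import java.io.IOException;

public class CreateCheckSketch {
    // Minimal stand-ins for Hadoop's FileStatus and FileSystem lookup.
    interface FileStatus { boolean isDirectory(); }
    interface StatusLookup { FileStatus getFileStatus(String path) throws IOException; }

    // Same single metadata round trip as exists(path), but with an extra
    // in-memory check that any existing entry is not a directory.
    static void checkCreate(StatusLookup fs, String path, boolean overwrite) throws IOException {
        try {
            FileStatus status = fs.getFileStatus(path);
            if (status.isDirectory()) {
                // Hadoop's create() would throw FileAlreadyExistsException here.
                throw new IOException(path + " is a directory");
            }
            if (!overwrite) {
                throw new IOException(path + " already exists");
            }
        } catch (FileNotFoundException e) {
            // Nothing at the path: safe to create.
        }
    }
}
```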
[jira] [Commented] (HADOOP-13239) Deprecate s3:// in branch-2
[ https://issues.apache.org/jira/browse/HADOOP-13239?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15336654#comment-15336654 ] Mingliang Liu commented on HADOOP-13239: Thanks [~ste...@apache.org] for your review. This and [HADOOP-12709] together should address the proposal for abandoning s3:// in {{trunk}} and {{branch-2}}. > Deprecate s3:// in branch-2 > --- > > Key: HADOOP-13239 > URL: https://issues.apache.org/jira/browse/HADOOP-13239 > Project: Hadoop Common > Issue Type: Improvement > Components: fs/s3 >Affects Versions: 2.8.0 >Reporter: Mingliang Liu >Assignee: Mingliang Liu > Attachments: HADOOP-13239-branch-2.000.patch, > HADOOP-13239-branch-2.001.patch > > > The fact that s3:// was broken in Hadoop 2.7 *and nobody noticed until now* > shows that it's not being used. While invaluable at the time, s3n and > especially s3a render it obsolete except for reading existing data. > [HADOOP-12709] cuts the s3:// from {{trunk}} branch, and this JIRA ticket is > to deprecate it from {{branch-2}}. > # Mark Java source as {{@deprecated}} > # Warn the first time in a JVM that an S3 instance is created, "deprecated > -will be removed in future releases" > Thanks [~ste...@apache.org] for the proposal. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
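The "warn the first time in a JVM that an S3 instance is created" behavior in the proposal is a once-only guard; a minimal sketch (the class name is illustrative, not the actual S3FileSystem change):

```java
import java.util.concurrent.atomic.AtomicBoolean;

@Deprecated
public class S3DeprecationSketch {
    private static final AtomicBoolean WARNED = new AtomicBoolean(false);

    public S3DeprecationSketch() {
        // compareAndSet guarantees the warning fires exactly once per JVM,
        // even when instances are created concurrently.
        if (WARNED.compareAndSet(false, true)) {
            System.err.println("s3:// is deprecated and will be removed in future releases");
        }
    }

    public static boolean hasWarned() {
        return WARNED.get();
    }
}
```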
[jira] [Commented] (HADOOP-13280) FileSystemStorageStatistics#getLong(“readOps“) should return readOps + largeReadOps
[ https://issues.apache.org/jira/browse/HADOOP-13280?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15336647#comment-15336647 ] Colin Patrick McCabe commented on HADOOP-13280: --- Thanks, [~liuml07]. Does {{Long.valueOf(...)}} work? It would be nice to avoid the typecast if possible. > FileSystemStorageStatistics#getLong(“readOps“) should return readOps + > largeReadOps > --- > > Key: HADOOP-13280 > URL: https://issues.apache.org/jira/browse/HADOOP-13280 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs >Affects Versions: 2.8.0 >Reporter: Mingliang Liu >Assignee: Mingliang Liu > Fix For: 2.8.0 > > Attachments: HADOOP-13280-branch-2.8.000.patch, > HADOOP-13280.000.patch, HADOOP-13280.001.patch > > > Currently {{FileSystemStorageStatistics}} instance simply returns data from > {{FileSystem$Statistics}}. As to {{readOps}}, the > {{FileSystem$Statistics#getReadOps()}} returns {{readOps + largeReadOps}}. We > should make the {{FileSystemStorageStatistics#getLong(“readOps“)}} return the > sum as well. > Moreover, there are no unit tests for {{FileSystemStorageStatistics}} and this > JIRA will also address this. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
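The fix under discussion — have getLong("readOps") return the sum, matching {{FileSystem$Statistics#getReadOps()}}, and box via Long.valueOf rather than a cast — might look like this simplified stand-in (not the real FileSystemStorageStatistics class):

```java
public class StatsSketch {
    private long readOps;
    private long largeReadOps;

    public void record(long small, long large) {
        readOps += small;
        largeReadOps += large;
    }

    // "readOps" reports the combined count, as getReadOps() does;
    // Long.valueOf boxes the primitive sum with no explicit (Long) cast.
    public Long getLong(String key) {
        switch (key) {
            case "readOps":      return Long.valueOf(readOps + largeReadOps);
            case "largeReadOps": return Long.valueOf(largeReadOps);
            default:             return null;
        }
    }
}
```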
[jira] [Updated] (HADOOP-13203) S3a: Consider reducing the number of connection aborts by setting correct length in s3 request
[ https://issues.apache.org/jira/browse/HADOOP-13203?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran updated HADOOP-13203: Status: Patch Available (was: Open) > S3a: Consider reducing the number of connection aborts by setting correct > length in s3 request > -- > > Key: HADOOP-13203 > URL: https://issues.apache.org/jira/browse/HADOOP-13203 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Reporter: Rajesh Balamohan >Assignee: Rajesh Balamohan >Priority: Minor > Attachments: HADOOP-13203-branch-2-001.patch, > HADOOP-13203-branch-2-002.patch, HADOOP-13203-branch-2-003.patch, > HADOOP-13203-branch-2-004.patch, HADOOP-13203-branch-2-005.patch, > stream_stats.tar.gz > > > Currently file's "contentLength" is set as the "requestedStreamLen", when > invoking S3AInputStream::reopen(). As a part of lazySeek(), sometimes the > stream had to be closed and reopened. But lots of times the stream was closed > with abort() causing the internal http connection to be unusable. This incurs > lots of connection establishment cost in some jobs. It would be good to set > the correct value for the stream length to avoid connection aborts. > I will post the patch once aws tests passes in my machine. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-13203) S3a: Consider reducing the number of connection aborts by setting correct length in s3 request
[ https://issues.apache.org/jira/browse/HADOOP-13203?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran updated HADOOP-13203: Affects Version/s: 2.8.0 Priority: Major (was: Minor) > S3a: Consider reducing the number of connection aborts by setting correct > length in s3 request > -- > > Key: HADOOP-13203 > URL: https://issues.apache.org/jira/browse/HADOOP-13203 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Affects Versions: 2.8.0 >Reporter: Rajesh Balamohan >Assignee: Rajesh Balamohan > Attachments: HADOOP-13203-branch-2-001.patch, > HADOOP-13203-branch-2-002.patch, HADOOP-13203-branch-2-003.patch, > HADOOP-13203-branch-2-004.patch, HADOOP-13203-branch-2-005.patch, > stream_stats.tar.gz > > > Currently file's "contentLength" is set as the "requestedStreamLen", when > invoking S3AInputStream::reopen(). As a part of lazySeek(), sometimes the > stream had to be closed and reopened. But lots of times the stream was closed > with abort() causing the internal http connection to be unusable. This incurs > lots of connection establishment cost in some jobs. It would be good to set > the correct value for the stream length to avoid connection aborts. > I will post the patch once aws tests passes in my machine. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-13203) S3a: Consider reducing the number of connection aborts by setting correct length in s3 request
[ https://issues.apache.org/jira/browse/HADOOP-13203?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15336641#comment-15336641 ] Steve Loughran commented on HADOOP-13203: - Patch 005. This is a WiP; just wanted to push it up to show where I'm going here. The key change is that it introduces the notion of an InputStrategy to S3a, currently: general, positioned, sequential. As of now, there's also no diff between positioned and general: they both say "to end of stream"; I think general may want to consider having a slightly shorter range, though still something big. Logic of seekInStream enhanced to not try seeking if the end of the range passed in is beyond the end of the current read. The metrics track more details on range overshoot.
Limits:
- now need to test both codepaths. The strategy can be set on an instantiated FS instance to allow testing without recreating FS instances.
- still wasteful of data in the current read if the next read overshoots (maybe a counter could track the missed quantity there), then go to having read(bytes[]) return the amount of available data, with the readFully() calls handling the incomplete response by asking for more.
- what would a good policy for "general" be? Not positioned, clearly... but is sequential it?
> S3a: Consider reducing the number of connection aborts by setting correct > length in s3 request > -- > > Key: HADOOP-13203 > URL: https://issues.apache.org/jira/browse/HADOOP-13203 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Affects Versions: 2.8.0 >Reporter: Rajesh Balamohan >Assignee: Rajesh Balamohan >Priority: Minor > Attachments: HADOOP-13203-branch-2-001.patch, > HADOOP-13203-branch-2-002.patch, HADOOP-13203-branch-2-003.patch, > HADOOP-13203-branch-2-004.patch, HADOOP-13203-branch-2-005.patch, > stream_stats.tar.gz > > > Currently file's "contentLength" is set as the "requestedStreamLen", when > invoking S3AInputStream::reopen().
As a part of lazySeek(), sometimes the > stream had to be closed and reopened. But lots of times the stream was closed > with abort() causing the internal http connection to be unusable. This incurs > lots of connection establishment cost in some jobs. It would be good to set > the correct value for the stream length to avoid connection aborts. > I will post the patch once aws tests passes in my machine. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
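A rough sketch of the seek logic described in the comment above: reuse the open stream only when the wanted range ends within the range already requested, otherwise reopen. The strategy names come from the comment; the range arithmetic (an illustrative 1 MiB window for the non-sequential strategies) is an assumption, not the actual patch.

```java
public class SeekSketch {
    enum InputStrategy { GENERAL, POSITIONED, SEQUENTIAL }

    // How far past targetPos to request on (re)open. SEQUENTIAL asks for the
    // rest of the object; the others are assumed to request a bounded window
    // (here 1 MiB), trading reopen cost against the cost of aborting a
    // long-range GET whose connection then cannot be reused.
    static long requestEnd(InputStrategy s, long targetPos, long len, long contentLength) {
        switch (s) {
            case SEQUENTIAL:
                return contentLength;
            default:
                return Math.min(contentLength,
                        Math.max(targetPos + len, targetPos + 1024 * 1024));
        }
    }

    // Seek within the open stream only if the bytes wanted lie inside the
    // range already requested; otherwise the caller must reopen the stream.
    static boolean canSeekInStream(long targetPos, long len, long pos, long requestedEnd) {
        return targetPos >= pos && targetPos + len <= requestedEnd;
    }
}
```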
[jira] [Updated] (HADOOP-13203) S3a: Consider reducing the number of connection aborts by setting correct length in s3 request
[ https://issues.apache.org/jira/browse/HADOOP-13203?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran updated HADOOP-13203: Attachment: HADOOP-13203-branch-2-005.patch > S3a: Consider reducing the number of connection aborts by setting correct > length in s3 request > -- > > Key: HADOOP-13203 > URL: https://issues.apache.org/jira/browse/HADOOP-13203 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Reporter: Rajesh Balamohan >Assignee: Rajesh Balamohan >Priority: Minor > Attachments: HADOOP-13203-branch-2-001.patch, > HADOOP-13203-branch-2-002.patch, HADOOP-13203-branch-2-003.patch, > HADOOP-13203-branch-2-004.patch, HADOOP-13203-branch-2-005.patch, > stream_stats.tar.gz > > > Currently file's "contentLength" is set as the "requestedStreamLen", when > invoking S3AInputStream::reopen(). As a part of lazySeek(), sometimes the > stream had to be closed and reopened. But lots of times the stream was closed > with abort() causing the internal http connection to be unusable. This incurs > lots of connection establishment cost in some jobs. It would be good to set > the correct value for the stream length to avoid connection aborts. > I will post the patch once aws tests passes in my machine. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Comment Edited] (HADOOP-13188) S3A file-create should throw error rather than overwrite directories
[ https://issues.apache.org/jira/browse/HADOOP-13188?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15336614#comment-15336614 ] Ravi Prakash edited comment on HADOOP-13188 at 6/17/16 6:18 PM: Looks good to me. +1. FWIW, I was concerned about the performance penalty. So I uploaded 16 files (using {{hadoop fs -put s3a://:@bucket/path}}). With the patch it took
{code}
real    2m16.564s
user    0m11.571s
sys     0m0.582s
{code}
Without the patch:
{code}
real    2m5.481s
user    0m10.811s
sys     0m0.472s
{code}
So it's not that bad a degradation... Still pretty bad performance though, but that is clearly another JIRA. was (Author: raviprak): Looks good to me. +1. FWIW, I was concerned about the performance penalty. So I uploaded 16 files (using {{hadoop fs -put s3a://:@bucket/path}}). With the patch it took
{code}
real    2m16.564s
user    0m11.571s
sys     0m0.582s
{code}
Without the patch:
{code}
real    2m5.481s
user    0m10.811s
sys     0m0.472s
{code}
So it's not that bad a degradation... Still pretty bad performance though, but that is clearly another JIRA > S3A file-create should throw error rather than overwrite directories > > > Key: HADOOP-13188 > URL: https://issues.apache.org/jira/browse/HADOOP-13188 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Affects Versions: 2.7.2 >Reporter: Raymie Stata >Assignee: Steve Loughran >Priority: Minor > Attachments: HADOOP-13188-branch-2-001.patch > > > S3A.create(Path,FsPermission,boolean,int,short,long,Progressable) is not > checking to see if it's being asked to overwrite a directory. It could > easily do so, and should throw an error in this case. > There is a test-case for this in AbstractFSContractTestBase, but it's being > skipped because S3A is a blobstore. However, both the Azure and Swift file > systems make this test, and the new S3 one should as well.
-- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-13188) S3A file-create should throw error rather than overwrite directories
[ https://issues.apache.org/jira/browse/HADOOP-13188?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15336614#comment-15336614 ] Ravi Prakash commented on HADOOP-13188: --- Looks good to me. +1. FWIW, I was concerned about the performance penalty. So I uploaded 16 files (using {{hadoop fs -put s3a://:@bucket/path}}). With the patch it took
{code}
real    2m16.564s
user    0m11.571s
sys     0m0.582s
{code}
Without the patch:
{code}
real    2m5.481s
user    0m10.811s
sys     0m0.472s
{code}
So it's not that bad a degradation... Still pretty bad performance though, but that is clearly another JIRA > S3A file-create should throw error rather than overwrite directories > > > Key: HADOOP-13188 > URL: https://issues.apache.org/jira/browse/HADOOP-13188 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Affects Versions: 2.7.2 >Reporter: Raymie Stata >Assignee: Steve Loughran >Priority: Minor > Attachments: HADOOP-13188-branch-2-001.patch > > > S3A.create(Path,FsPermission,boolean,int,short,long,Progressable) is not > checking to see if it's being asked to overwrite a directory. It could > easily do so, and should throw an error in this case. > There is a test-case for this in AbstractFSContractTestBase, but it's being > skipped because S3A is a blobstore. However, both the Azure and Swift file > systems make this test, and the new S3 one should as well. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-13251) DelegationTokenAuthenticationHandler should detect actual renewer when renew token
[ https://issues.apache.org/jira/browse/HADOOP-13251?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xiao Chen updated HADOOP-13251: --- Attachment: HADOOP-13251.03.patch Patch 3 limits the request user scope to only the renewer and canceler, after a sharp point brought up by ATM in an offline chat. > DelegationTokenAuthenticationHandler should detect actual renewer when renew > token > -- > > Key: HADOOP-13251 > URL: https://issues.apache.org/jira/browse/HADOOP-13251 > Project: Hadoop Common > Issue Type: Bug > Components: kms >Affects Versions: 2.8.0 >Reporter: Xiao Chen >Assignee: Xiao Chen > Attachments: HADOOP-13251.01.patch, HADOOP-13251.02.patch, > HADOOP-13251.03.patch, HADOOP-13251.innocent.patch > > > It turns out the KMS delegation token renewal feature (HADOOP-13155) does not work > well with client-side impersonation. > In an MR example, an end user (UGI:user) gets all kinds of DTs (with > renewer=yarn) and passes them to Yarn. Yarn's resource manager (UGI:yarn) then > renews these DTs as long as the MR jobs are running. But currently, the token > is used at the KMS server side to decide the renewer, which is always > the token's owner. This ends up rejecting the renew request due to renewer > mismatch.
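The mismatch described in the issue can be reduced to a one-line check; the sketch below models it with invented class and field names (not the real DelegationTokenAuthenticationHandler types):

```java
// Hypothetical sketch of the renewer check discussed above. Token and its
// fields are invented for illustration; they are not Hadoop classes.
public class RenewerCheck {
    public static final class Token {
        public final String owner;    // end user who obtained the token, e.g. "user"
        public final String renewer;  // principal allowed to renew it, e.g. "yarn"
        public Token(String owner, String renewer) {
            this.owner = owner;
            this.renewer = renewer;
        }
    }

    /**
     * The fix: compare the authenticated *request user* against the token's
     * designated renewer, rather than deriving the renewer from the token
     * itself (which always yields the owner and rejects Yarn's RM).
     */
    public static boolean mayRenew(Token token, String requestUser) {
        return token.renewer.equals(requestUser);
    }

    public static void main(String[] args) {
        Token t = new Token("user", "yarn");
        System.out.println("yarn may renew: " + mayRenew(t, "yarn"));   // the RM's request
        System.out.println("owner may renew: " + mayRenew(t, "user"));  // not the renewer
    }
}
```

The buggy path effectively compared the request against {{t.owner}}, so the RM's renew calls always failed with a renewer mismatch.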
[jira] [Commented] (HADOOP-13286) add a scale test to do gunzip and linecount
[ https://issues.apache.org/jira/browse/HADOOP-13286?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15336387#comment-15336387 ] Hadoop QA commented on HADOOP-13286: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 24s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 35s{color} | {color:green} branch-2 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 13s{color} | {color:green} branch-2 passed with JDK v1.8.0_91 {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 15s{color} | {color:green} branch-2 passed with JDK v1.7.0_101 {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 14s{color} | {color:green} branch-2 passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 21s{color} | {color:green} branch-2 passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 15s{color} | {color:green} branch-2 passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 35s{color} | {color:green} branch-2 passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 13s{color} | {color:green} branch-2 passed with JDK v1.8.0_91 {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 15s{color} | {color:green} branch-2 passed with JDK v1.7.0_101 {color} | | 
{color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 14s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 12s{color} | {color:green} the patch passed with JDK v1.8.0_91 {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 12s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 13s{color} | {color:green} the patch passed with JDK v1.7.0_101 {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 13s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 12s{color} | {color:red} hadoop-tools/hadoop-aws: The patch generated 2 new + 4 unchanged - 7 fixed = 6 total (was 11) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 18s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 12s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 41s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 13s{color} | {color:green} the patch passed with JDK v1.8.0_91 {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 14s{color} | {color:green} the patch passed with JDK v1.7.0_101 {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 15s{color} | {color:green} hadoop-aws in the patch passed with JDK v1.7.0_101. 
{color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 20s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 13m 46s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:d1c475d | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12811391/HADOOP-13286-branch-2-001.patch | | JIRA Issue | HADOOP-13286 | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux 6d30b1e0924f 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | branch-2 / 6afa044 | | Default Java | 1.7.0_101 | | Multi-JDK versions | /usr/lib/jvm/java-8-oracle:1.8.0_91
[jira] [Commented] (HADOOP-13203) S3a: Consider reducing the number of connection aborts by setting correct length in s3 request
[ https://issues.apache.org/jira/browse/HADOOP-13203?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15336378#comment-15336378 ] Steve Loughran commented on HADOOP-13203: - Performance of the HADOOP-13286 patch {code} testDecompression128K: Decompress with a 128K readahead 2016-06-17 17:14:57,072 [Thread-0] INFO compress.CodecPool (CodecPool.java:getDecompressor(181)) - Got brand-new decompressor [.gz] 2016-06-17 17:15:32,986 [Thread-0] INFO contract.ContractTestUtils (ContractTestUtils.java:end(1262)) - Duration of Time to read 514690 lines [99896260 bytes expanded, 22633778 raw] with readahead = 131072: 36,078,064,490 nS 2016-06-17 17:15:32,986 [Thread-0] INFO scale.TestS3AInputStreamPerformance (TestS3AInputStreamPerformance.java:logTimePerIOP(144)) - Time per IOP: 70,096 nS 2016-06-17 17:15:32,987 [Thread-0] INFO scale.TestS3AInputStreamPerformance (TestS3AInputStreamPerformance.java:logStreamStatistics(306)) - Stream Statistics StreamStatistics{OpenOperations=175, CloseOperations=175, Closed=175, Aborted=0, SeekOperations=0, ReadExceptions=0, ForwardSeekOperations=0, BackwardSeekOperations=0, BytesSkippedOnSeek=0, BytesBackwardsOnSeek=0, BytesRead=22633778, BytesRead excluding skipped=22633778, ReadOperations=6680, ReadFullyOperations=0, ReadsIncomplete=1583} {code} > S3a: Consider reducing the number of connection aborts by setting correct > length in s3 request > -- > > Key: HADOOP-13203 > URL: https://issues.apache.org/jira/browse/HADOOP-13203 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Reporter: Rajesh Balamohan >Assignee: Rajesh Balamohan >Priority: Minor > Attachments: HADOOP-13203-branch-2-001.patch, > HADOOP-13203-branch-2-002.patch, HADOOP-13203-branch-2-003.patch, > HADOOP-13203-branch-2-004.patch, stream_stats.tar.gz > > > Currently file's "contentLength" is set as the "requestedStreamLen", when > invoking S3AInputStream::reopen(). 
As a part of lazySeek(), sometimes the > stream had to be closed and reopened. But lots of times the stream was closed > with abort(), causing the internal http connection to be unusable. This incurs > lots of connection establishment cost in some jobs. It would be good to set > the correct value for the stream length to avoid connection aborts. > I will post the patch once aws tests pass on my machine.
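The idea behind the patch can be captured as simple arithmetic; the sketch below is illustrative only (the method and parameter names are invented, not the actual S3AInputStream code):

```java
// Illustrative model of the HADOOP-13203 idea: instead of always requesting a
// GET range up to the object's full contentLength, bound the requested stream
// length by what the caller will actually read (plus readahead). A short range
// can be drained and close()d cheaply; an over-long one forces abort(), which
// discards the HTTP connection. All names here are invented for illustration.
public class ReopenLength {
    public static long requestedStreamLength(long targetPos, long bytesToRead,
                                             long readahead, long contentLength) {
        long wanted = targetPos + Math.max(bytesToRead, readahead);
        return Math.min(wanted, contentLength);  // never request past end of object
    }

    public static void main(String[] args) {
        // Reading 4 KB at offset 1 MB of a 1 GB object, with 64 KB readahead:
        long len = requestedStreamLength(1L << 20, 4096, 64 * 1024, 1L << 30);
        System.out.println("request range ends at byte " + len
                + " rather than " + (1L << 30));
    }
}
```

With the range bounded this way, only the small unread tail needs draining when the stream is closed, so the connection can go back to the pool instead of being aborted.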
[jira] [Commented] (HADOOP-13286) add a scale test to do gunzip and linecount
[ https://issues.apache.org/jira/browse/HADOOP-13286?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15336352#comment-15336352 ] Steve Loughran commented on HADOOP-13286: - tested reading from amazon US west landsat; test time {code} Running org.apache.hadoop.fs.s3a.scale.TestS3AInputStreamPerformance Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 134.879 sec - in org.apache.hadoop.fs.s3a.scale.TestS3AInputStreamPerformance {code} > add a scale test to do gunzip and linecount > --- > > Key: HADOOP-13286 > URL: https://issues.apache.org/jira/browse/HADOOP-13286 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Affects Versions: 2.8.0 >Reporter: Steve Loughran >Assignee: Steve Loughran > Attachments: HADOOP-13286-branch-2-001.patch > > > the HADOOP-13203 patch proposal showed that there were performance problems > downstream which weren't surfacing in the current scale tests. > Trying to decompress the .gz test file and then go through it with LineReader > models a basic use case: parse a .csv.gz data source. > Add this, with metric printing -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-13286) add a scale test to do gunzip and linecount
[ https://issues.apache.org/jira/browse/HADOOP-13286?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran updated HADOOP-13286: Status: Patch Available (was: Open) > add a scale test to do gunzip and linecount > --- > > Key: HADOOP-13286 > URL: https://issues.apache.org/jira/browse/HADOOP-13286 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Affects Versions: 2.8.0 >Reporter: Steve Loughran >Assignee: Steve Loughran > Attachments: HADOOP-13286-branch-2-001.patch > > > the HADOOP-13203 patch proposal showed that there were performance problems > downstream which weren't surfacing in the current scale tests. > Trying to decompress the .gz test file and then go through it with LineReader > models a basic use case: parse a .csv.gz data source. > Add this, with metric printing -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-13286) add a scale test to do gunzip and linecount
[ https://issues.apache.org/jira/browse/HADOOP-13286?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran updated HADOOP-13286: Attachment: HADOOP-13286-branch-2-001.patch Patch 001; streams the test data through the (presumably) non-native gz codec, then into LineReader. Simulates a mapper applied to a .csv.gz file. Timings: {code} testDecompression128K: Decompress with a 128K readahead 2016-06-17 16:30:42,408 [Thread-0] INFO compress.CodecPool (CodecPool.java:getDecompressor(181)) - Got brand-new decompressor [.gz] 2016-06-17 16:30:47,345 [Thread-0] INFO contract.ContractTestUtils (ContractTestUtils.java:end(1262)) - Duration of Time to read 514690 lines [99896260 bytes expanded, 22633778 raw] with readahead = 131072: 5,107,155,982 nS 2016-06-17 16:30:47,345 [Thread-0] INFO scale.TestS3AInputStreamPerformance (TestS3AInputStreamPerformance.java:logTimePerIOP(144)) - Time per IOP: 9,922 nS 2016-06-17 16:30:47,346 [Thread-0] INFO scale.TestS3AInputStreamPerformance (TestS3AInputStreamPerformance.java:logStreamStatistics(301)) - Stream Statistics StreamStatistics{OpenOperations=1, CloseOperations=1, Closed=1, Aborted=0, SeekOperations=0, ReadExceptions=0, ForwardSeekOperations=0, BackwardSeekOperations=0, BytesSkippedOnSeek=0, BytesBackwardsOnSeek=0, BytesRead=22633778, BytesRead excluding skipped=22633778, ReadOperations=5708, ReadFullyOperations=0, ReadsIncomplete=243} {code} that is: 1 microsecond/line; 5.1s for the entire 20MB file, which expands to 99MB on the way through the pipeline > add a scale test to do gunzip and linecount > --- > > Key: HADOOP-13286 > URL: https://issues.apache.org/jira/browse/HADOOP-13286 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Affects Versions: 2.8.0 >Reporter: Steve Loughran >Assignee: Steve Loughran > Attachments: HADOOP-13286-branch-2-001.patch > > > the HADOOP-13203 patch proposal showed that there were performance problems > downstream which weren't surfacing in the current
scale tests. > Trying to decompress the .gz test file and then go through it with LineReader > models a basic use case: parse a .csv.gz data source. > Add this, with metric printing
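The decompress-and-linecount pipeline the test exercises can be sketched self-contained with JDK classes (the real test uses Hadoop's CodecPool and LineReader over an S3A stream; java.util.zip stands in for both here):

```java
import java.io.*;
import java.nio.charset.StandardCharsets;
import java.util.zip.*;

// Self-contained model of the scale test's workload: stream gzipped bytes
// through a decompressor and count lines with a reader, as a mapper over a
// .csv.gz source would.
public class GunzipLineCount {
    public static long countLines(byte[] gzipped) throws IOException {
        try (BufferedReader r = new BufferedReader(new InputStreamReader(
                new GZIPInputStream(new ByteArrayInputStream(gzipped)),
                StandardCharsets.UTF_8))) {
            long lines = 0;
            while (r.readLine() != null) lines++;
            return lines;
        }
    }

    /** Helper to build a small .gz payload in memory for the demo. */
    public static byte[] gzip(String text) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (OutputStream gz = new GZIPOutputStream(bos)) {
            gz.write(text.getBytes(StandardCharsets.UTF_8));
        }
        return bos.toByteArray();
    }

    public static void main(String[] args) throws IOException {
        byte[] data = gzip("a,1\nb,2\nc,3\n");
        System.out.println("lines: " + countLines(data));  // prints "lines: 3"
    }
}
```

Because gzip is not splittable, the whole object has to stream through a single decompressor, which is exactly why this workload surfaces the read-path costs the existing scale tests missed.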
[jira] [Created] (HADOOP-13286) add a scale test to do gunzip and linecount
Steve Loughran created HADOOP-13286: --- Summary: add a scale test to do gunzip and linecount Key: HADOOP-13286 URL: https://issues.apache.org/jira/browse/HADOOP-13286 Project: Hadoop Common Issue Type: Sub-task Affects Versions: 2.8.0 Reporter: Steve Loughran Assignee: Steve Loughran the HADOOP-13203 patch proposal showed that there were performance problems downstream which weren't surfacing in the current scale tests. Trying to decompress the .gz test file and then go through it with LineReader models a basic use case: parse a .csv.gz data source. Add this, with metric printing
[jira] [Commented] (HADOOP-13239) Deprecate s3:// in branch-2
[ https://issues.apache.org/jira/browse/HADOOP-13239?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15336206#comment-15336206 ] Steve Loughran commented on HADOOP-13239: - +1 tested against S3 Ireland; saw a transient failure in an s3a test, but that is presumably unrelated to this. > Deprecate s3:// in branch-2 > --- > > Key: HADOOP-13239 > URL: https://issues.apache.org/jira/browse/HADOOP-13239 > Project: Hadoop Common > Issue Type: Improvement > Components: fs/s3 >Affects Versions: 2.8.0 >Reporter: Mingliang Liu >Assignee: Mingliang Liu > Attachments: HADOOP-13239-branch-2.000.patch, > HADOOP-13239-branch-2.001.patch > > > The fact that s3:// was broken in Hadoop 2.7 *and nobody noticed until now* > shows that it's not being used. While invaluable at the time, s3n and > especially s3a render it obsolete except for reading existing data. > [HADOOP-12709] cuts s3:// from the {{trunk}} branch, and this JIRA ticket is > to deprecate it in {{branch-2}}. > # Mark Java source as {{@deprecated}} > # Warn the first time in a JVM that an S3 instance is created, "deprecated - > will be removed in future releases" > Thanks [~ste...@apache.org] for the proposal.
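The "warn the first time in a JVM" behaviour the ticket asks for is a common idiom; a minimal sketch (class name and message invented, not the actual patch) looks like:

```java
import java.util.concurrent.atomic.AtomicBoolean;

// Sketch of a once-per-JVM deprecation warning, as described in HADOOP-13239.
// The class, method, and message text are illustrative, not the real patch.
public class DeprecationWarning {
    private static final AtomicBoolean WARNED = new AtomicBoolean(false);

    /** Returns true only on the first call in this JVM, when the warning is emitted. */
    public static boolean warnOnce() {
        // compareAndSet makes the "first" check race-free across threads.
        if (WARNED.compareAndSet(false, true)) {
            System.err.println(
                "s3:// is deprecated and will be removed in future releases");
            return true;
        }
        return false;
    }

    public static void main(String[] args) {
        System.out.println(warnOnce());  // first instantiation warns: true
        System.out.println(warnOnce());  // later ones stay quiet: false
    }
}
```

Gating the warning on an atomic flag keeps log noise down when a job creates many filesystem instances while still making the deprecation visible once per process.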
[jira] [Comment Edited] (HADOOP-12893) Verify LICENSE.txt and NOTICE.txt
[ https://issues.apache.org/jira/browse/HADOOP-12893?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15336188#comment-15336188 ] Eric Badger edited comment on HADOOP-12893 at 6/17/16 2:17 PM: --- If hadoop-project depends on hadoop-build-tools, shouldn't we make an explicit dependency on hadoop-build-tools rather than rely on specifying a specific module build order? I tried adding hadoop-build-tools as a dependency in the hadoop-project pom.xml file, but was unable to get the build to succeed. However, that seems to me to be the better fix. I'm not a maven expert, so input from someone with more expertise in this area would be good. Edit: I should've reloaded the page before adding my comment. Suffice it to say that I agree with [~busbey]. was (Author: ebadger): If hadoop-project depends on hadoop-build-tools, shouldn't we make an explicit dependency on hadoop-build-tools rather than rely on specifying a specific module build order? I tried adding hadoop-build-tools as a dependency in the hadoop-project pom.xml file, but was unable to get the build to succeed. However, that seems to me to be the better fix. I'm not a maven expert, so input from someone with more expertise in this area would be good. 
> Verify LICENSE.txt and NOTICE.txt > - > > Key: HADOOP-12893 > URL: https://issues.apache.org/jira/browse/HADOOP-12893 > Project: Hadoop Common > Issue Type: Bug >Affects Versions: 2.8.0, 2.7.3, 2.6.5, 3.0.0-alpha1 >Reporter: Allen Wittenauer >Assignee: Xiao Chen >Priority: Blocker > Fix For: 2.7.3, 2.6.5 > > Attachments: HADOOP-12893.002.patch, HADOOP-12893.003.patch, > HADOOP-12893.004.patch, HADOOP-12893.005.patch, HADOOP-12893.006.patch, > HADOOP-12893.007.patch, HADOOP-12893.008.patch, HADOOP-12893.009.patch, > HADOOP-12893.01.patch, HADOOP-12893.011.patch, HADOOP-12893.012.patch, > HADOOP-12893.10.patch, HADOOP-12893.branch-2.01.patch, > HADOOP-12893.branch-2.6.01.patch, HADOOP-12893.branch-2.7.01.patch, > HADOOP-12893.branch-2.7.02.patch, HADOOP-12893.branch-2.7.3.01.patch > > > We have many bundled dependencies in both the source and the binary artifacts > that are not in LICENSE.txt and NOTICE.txt. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-12893) Verify LICENSE.txt and NOTICE.txt
[ https://issues.apache.org/jira/browse/HADOOP-12893?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15336188#comment-15336188 ] Eric Badger commented on HADOOP-12893: -- If hadoop-project depends on hadoop-build-tools, shouldn't we make an explicit dependency on hadoop-build-tools rather than rely on specifying a specific module build order? I tried adding hadoop-build-tools as a dependency in the hadoop-project pom.xml file, but was unable to get the build to succeed. However, that seems to me to be the better fix. I'm not a maven expert, so input from someone with more expertise in this area would be good. > Verify LICENSE.txt and NOTICE.txt > - > > Key: HADOOP-12893 > URL: https://issues.apache.org/jira/browse/HADOOP-12893 > Project: Hadoop Common > Issue Type: Bug >Affects Versions: 2.8.0, 2.7.3, 2.6.5, 3.0.0-alpha1 >Reporter: Allen Wittenauer >Assignee: Xiao Chen >Priority: Blocker > Fix For: 2.7.3, 2.6.5 > > Attachments: HADOOP-12893.002.patch, HADOOP-12893.003.patch, > HADOOP-12893.004.patch, HADOOP-12893.005.patch, HADOOP-12893.006.patch, > HADOOP-12893.007.patch, HADOOP-12893.008.patch, HADOOP-12893.009.patch, > HADOOP-12893.01.patch, HADOOP-12893.011.patch, HADOOP-12893.012.patch, > HADOOP-12893.10.patch, HADOOP-12893.branch-2.01.patch, > HADOOP-12893.branch-2.6.01.patch, HADOOP-12893.branch-2.7.01.patch, > HADOOP-12893.branch-2.7.02.patch, HADOOP-12893.branch-2.7.3.01.patch > > > We have many bundled dependencies in both the source and the binary artifacts > that are not in LICENSE.txt and NOTICE.txt. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-12893) Verify LICENSE.txt and NOTICE.txt
[ https://issues.apache.org/jira/browse/HADOOP-12893?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15336116#comment-15336116 ] Sean Busbey commented on HADOOP-12893: -- we should instead list hadoop-build-tools as a dependency of hadoop-project so that maven will correctly order the modules. Relying on pom module definition order is brittle and coincidental behavior. > Verify LICENSE.txt and NOTICE.txt > - > > Key: HADOOP-12893 > URL: https://issues.apache.org/jira/browse/HADOOP-12893 > Project: Hadoop Common > Issue Type: Bug >Affects Versions: 2.8.0, 2.7.3, 2.6.5, 3.0.0-alpha1 >Reporter: Allen Wittenauer >Assignee: Xiao Chen >Priority: Blocker > Fix For: 2.7.3, 2.6.5 > > Attachments: HADOOP-12893.002.patch, HADOOP-12893.003.patch, > HADOOP-12893.004.patch, HADOOP-12893.005.patch, HADOOP-12893.006.patch, > HADOOP-12893.007.patch, HADOOP-12893.008.patch, HADOOP-12893.009.patch, > HADOOP-12893.01.patch, HADOOP-12893.011.patch, HADOOP-12893.012.patch, > HADOOP-12893.10.patch, HADOOP-12893.branch-2.01.patch, > HADOOP-12893.branch-2.6.01.patch, HADOOP-12893.branch-2.7.01.patch, > HADOOP-12893.branch-2.7.02.patch, HADOOP-12893.branch-2.7.3.01.patch > > > We have many bundled dependencies in both the source and the binary artifacts > that are not in LICENSE.txt and NOTICE.txt. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-12804) Read Proxy Password from Credential Providers in S3 FileSystem
[ https://issues.apache.org/jira/browse/HADOOP-12804?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15335997#comment-15335997 ] Hadoop QA commented on HADOOP-12804: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 29s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 41s{color} | {color:green} branch-2 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 13s{color} | {color:green} branch-2 passed with JDK v1.8.0_91 {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 16s{color} | {color:green} branch-2 passed with JDK v1.7.0_101 {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 14s{color} | {color:green} branch-2 passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 22s{color} | {color:green} branch-2 passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 16s{color} | {color:green} branch-2 passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 31s{color} | {color:green} branch-2 passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 13s{color} | {color:green} branch-2 passed with JDK v1.8.0_91 {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 15s{color} | {color:green} branch-2 passed with JDK v1.7.0_101 {color} | | 
{color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 14s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 11s{color} | {color:green} the patch passed with JDK v1.8.0_91 {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 11s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 13s{color} | {color:green} the patch passed with JDK v1.7.0_101 {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 13s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 12s{color} | {color:red} hadoop-tools/hadoop-aws: The patch generated 3 new + 18 unchanged - 0 fixed = 21 total (was 18) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 18s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 12s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 44s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 10s{color} | {color:green} the patch passed with JDK v1.8.0_91 {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 13s{color} | {color:green} the patch passed with JDK v1.7.0_101 {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 16s{color} | {color:green} hadoop-aws in the patch passed with JDK v1.7.0_101. 
{color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 18s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 13m 41s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:d1c475d | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12811355/HADOOP-12804-branch-2-002.patch | | JIRA Issue | HADOOP-12804 | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux facd20348bca 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | branch-2 / a36aa92 | | Default Java | 1.7.0_101 | | Multi-JDK versions | /usr/lib/jvm/java-8-oracle:1.8.0_91
[jira] [Updated] (HADOOP-12804) Read Proxy Password from Credential Providers in S3 FileSystem
[ https://issues.apache.org/jira/browse/HADOOP-12804?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Larry McCay updated HADOOP-12804: - Status: Open (was: Patch Available) > Read Proxy Password from Credential Providers in S3 FileSystem > -- > > Key: HADOOP-12804 > URL: https://issues.apache.org/jira/browse/HADOOP-12804 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Reporter: Larry McCay >Assignee: Larry McCay >Priority: Minor > Attachments: HADOOP-12804-001.patch, HADOOP-12804-branch-2-002.patch > > > HADOOP-12548 added credential provider support for the AWS credentials to > S3FileSystem. This JIRA is for considering the use of the credential > providers for the proxy password as well. > Instead of adding the proxy password to the config file directly and in clear > text, we could provision it in addition to the AWS credentials into a > credential provider and keep it out of clear text. > In terms of usage, it could be added to the same credential store as the AWS > credentials or potentially to a more universally available path - since it is > the same for everyone. This would however require multiple providers to be > configured in the provider.path property and more open file permissions on > the store itself. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-12804) Read Proxy Password from Credential Providers in S3 FileSystem
[ https://issues.apache.org/jira/browse/HADOOP-12804?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Larry McCay updated HADOOP-12804: - Status: Patch Available (was: Open) > Read Proxy Password from Credential Providers in S3 FileSystem > -- > > Key: HADOOP-12804 > URL: https://issues.apache.org/jira/browse/HADOOP-12804 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Reporter: Larry McCay >Assignee: Larry McCay >Priority: Minor > Attachments: HADOOP-12804-001.patch, HADOOP-12804-branch-2-002.patch > > > HADOOP-12548 added credential provider support for the AWS credentials to > S3FileSystem. This JIRA is for considering the use of the credential > providers for the proxy password as well. > Instead of adding the proxy password to the config file directly and in clear > text, we could provision it in addition to the AWS credentials into a > credential provider and keep it out of clear text. > In terms of usage, it could be added to the same credential store as the AWS > credentials or potentially to a more universally available path - since it is > the same for everyone. This would however require multiple providers to be > configured in the provider.path property and more open file permissions on > the store itself. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-12804) Read Proxy Password from Credential Providers in S3 FileSystem
[ https://issues.apache.org/jira/browse/HADOOP-12804?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Larry McCay updated HADOOP-12804: - Attachment: HADOOP-12804-branch-2-002.patch Branch-2 patch > Read Proxy Password from Credential Providers in S3 FileSystem > -- > > Key: HADOOP-12804 > URL: https://issues.apache.org/jira/browse/HADOOP-12804 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Reporter: Larry McCay >Assignee: Larry McCay >Priority: Minor > Attachments: HADOOP-12804-001.patch, HADOOP-12804-branch-2-002.patch > > > HADOOP-12548 added credential provider support for the AWS credentials to > S3FileSystem. This JIRA is for considering the use of the credential > providers for the proxy password as well. > Instead of adding the proxy password to the config file directly and in clear > text, we could provision it in addition to the AWS credentials into a > credential provider and keep it out of clear text. > In terms of usage, it could be added to the same credential store as the AWS > credentials or potentially to a more universally available path - since it is > the same for everyone. This would however require multiple providers to be > configured in the provider.path property and more open file permissions on > the store itself. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-12804) Read Proxy Password from Credential Providers in S3 FileSystem
[ https://issues.apache.org/jira/browse/HADOOP-12804?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15335889#comment-15335889 ]

Larry McCay commented on HADOOP-12804:
--------------------------------------
Yes, I will get this done today - [~ste...@apache.org].
[jira] [Commented] (HADOOP-13203) S3a: Consider reducing the number of connection aborts by setting correct length in s3 request
[ https://issues.apache.org/jira/browse/HADOOP-13203?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15335844#comment-15335844 ]

Steve Loughran commented on HADOOP-13203:
-----------------------------------------
Chris, it's work over a .gz file; that codec has to go through the entire file.

We also have to do some measurements of seeks on large server-side-encrypted files: if the decryption is in blocks, seeks should be affordable. If it has to start from the front each time, we'd expect the duration of open(), seek(pos), read() to be O(pos).

> S3a: Consider reducing the number of connection aborts by setting correct
> length in s3 request
> -----------------------------------------------------------------------------
>
>                 Key: HADOOP-13203
>                 URL: https://issues.apache.org/jira/browse/HADOOP-13203
>             Project: Hadoop Common
>          Issue Type: Sub-task
>          Components: fs/s3
>            Reporter: Rajesh Balamohan
>            Assignee: Rajesh Balamohan
>            Priority: Minor
>         Attachments: HADOOP-13203-branch-2-001.patch,
>                      HADOOP-13203-branch-2-002.patch, HADOOP-13203-branch-2-003.patch,
>                      HADOOP-13203-branch-2-004.patch, stream_stats.tar.gz
>
> Currently the file's "contentLength" is set as the "requestedStreamLen" when
> invoking S3AInputStream::reopen(). As a part of lazySeek(), sometimes the
> stream has to be closed and reopened. But lots of times the stream was closed
> with abort(), causing the internal HTTP connection to be unusable. This incurs
> lots of connection-establishment cost in some jobs. It would be good to set
> the correct value for the stream length to avoid connection aborts.
> I will post the patch once the AWS tests pass on my machine.
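The idea the issue describes — stop requesting the full contentLength on every reopen — can be sketched in a few lines. This is an illustrative toy, not the actual S3AInputStream code; the `READAHEAD` constant and method name are assumptions. Requesting only a window past the seek target keeps the number of unread bytes small, so close() can cheaply drain the stream instead of abort()ing the HTTP connection:

```java
// Illustrative sketch of clamping the requested stream length.
// READAHEAD and requestedStreamLen are hypothetical names, not the
// real S3AInputStream fields.
public class RequestLength {
    static final long READAHEAD = 64 * 1024;  // assumed readahead window

    /** Length to request from S3 when reopening the stream at targetPos. */
    public static long requestedStreamLen(long contentLength, long targetPos) {
        // Old behaviour: always contentLength, leaving a huge number of
        // unread bytes at close(), which forces an abort(). Clamping to a
        // readahead window keeps the connection reusable.
        return Math.min(contentLength, targetPos + READAHEAD);
    }

    public static void main(String[] args) {
        // Reopening a 1 GiB object at offset 1024 requests only a small window.
        System.out.println(requestedStreamLen(1L << 30, 1024L));
    }
}
```

The trade-off is that a caller who reads past the window forces another reopen, which is why the patch series experiments with the exact length to request.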
[jira] [Updated] (HADOOP-13203) S3a: Consider reducing the number of connection aborts by setting correct length in s3 request
[ https://issues.apache.org/jira/browse/HADOOP-13203?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Steve Loughran updated HADOOP-13203:
------------------------------------
    Status: Open  (was: Patch Available)
[jira] [Issue Comment Deleted] (HADOOP-13192) org.apache.hadoop.util.LineReader match recordDelimiter has a bug
[ https://issues.apache.org/jira/browse/HADOOP-13192?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

binde updated HADOOP-13192:
---------------------------
    Comment: was deleted

(was: ok)

> org.apache.hadoop.util.LineReader match recordDelimiter has a bug
> -----------------------------------------------------------------
>
>                 Key: HADOOP-13192
>                 URL: https://issues.apache.org/jira/browse/HADOOP-13192
>             Project: Hadoop Common
>          Issue Type: Bug
>          Components: util
>    Affects Versions: 2.6.2
>            Reporter: binde
>            Assignee: binde
>         Attachments: 0001-HADOOP-13192-org.apache.hadoop.util.LineReader-match.patch,
>                      0002-fix-bug-hadoop-1392-add-test-case-for-LineReader.patch
>
>   Original Estimate: 5m
>  Remaining Estimate: 5m
>
> org.apache.hadoop.util.LineReader.readCustomLine() has a bug:
> when the line is aaaabccc and the recordDelimiter is aaab, the result should be
> a, ccc.
> See the code at line 310:
>
>   for (; bufferPosn < bufferLength; ++bufferPosn) {
>     if (buffer[bufferPosn] == recordDelimiterBytes[delPosn]) {
>       delPosn++;
>       if (delPosn >= recordDelimiterBytes.length) {
>         bufferPosn++;
>         break;
>       }
>     } else if (delPosn != 0) {
>       bufferPosn--;
>       delPosn = 0;
>     }
>   }
>
> It should be:
>
>   for (; bufferPosn < bufferLength; ++bufferPosn) {
>     if (buffer[bufferPosn] == recordDelimiterBytes[delPosn]) {
>       delPosn++;
>       if (delPosn >= recordDelimiterBytes.length) {
>         bufferPosn++;
>         break;
>       }
>     } else if (delPosn != 0) {
>       // --- change here - start ---
>       bufferPosn -= delPosn;
>       // --- change here - end ---
>       delPosn = 0;
>     }
>   }
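The flaw in the loop above is the backtrack on a mismatch: decrementing `bufferPosn` by one means the bytes that began a partial delimiter match are never rescanned as a possible new match start, so a delimiter that overlaps its own prefix (like aaab inside aaaab) is missed. A standalone reproduction of both variants of the loop — `findRecord` is a test harness written for this sketch, not the real LineReader API:

```java
import java.nio.charset.StandardCharsets;

// Standalone reproduction of the delimiter-matching loop from
// LineReader.readCustomLine(). The boolean switches between the buggy
// backtrack (bufferPosn--) and the fixed one (bufferPosn -= delPosn).
public class DelimiterMatch {
    /** Returns the record before the first delimiter match, or the
     *  whole buffer if the delimiter is never found. */
    public static String findRecord(byte[] buffer, byte[] delim, boolean fixed) {
        int delPosn = 0;
        int bufferPosn = 0;
        boolean matched = false;
        for (; bufferPosn < buffer.length; ++bufferPosn) {
            if (buffer[bufferPosn] == delim[delPosn]) {
                delPosn++;
                if (delPosn >= delim.length) {
                    bufferPosn++;
                    matched = true;
                    break;
                }
            } else if (delPosn != 0) {
                if (fixed) {
                    bufferPosn -= delPosn;  // re-scan from one past the match start
                } else {
                    bufferPosn--;           // buggy: backs up only one byte
                }
                delPosn = 0;
            }
        }
        int recordEnd = matched ? bufferPosn - delim.length : bufferPosn;
        return new String(buffer, 0, recordEnd, StandardCharsets.US_ASCII);
    }

    public static void main(String[] args) {
        byte[] line = "aaaabccc".getBytes(StandardCharsets.US_ASCII);
        byte[] delim = "aaab".getBytes(StandardCharsets.US_ASCII);
        // Buggy loop misses the delimiter starting at offset 1 entirely.
        System.out.println(findRecord(line, delim, false)); // aaaabccc
        System.out.println(findRecord(line, delim, true));  // a
    }
}
```

With the fix, the partial match aaa at offset 0 is abandoned and scanning resumes at offset 1, where the real delimiter aaab begins, yielding the record a.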
[jira] [Commented] (HADOOP-13192) org.apache.hadoop.util.LineReader match recordDelimiter has a bug
[ https://issues.apache.org/jira/browse/HADOOP-13192?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15335664#comment-15335664 ]

binde commented on HADOOP-13192:
--------------------------------
ok
[jira] [Commented] (HADOOP-13192) org.apache.hadoop.util.LineReader match recordDelimiter has a bug
[ https://issues.apache.org/jira/browse/HADOOP-13192?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15335663#comment-15335663 ]

binde commented on HADOOP-13192:
--------------------------------
ok
[jira] [Commented] (HADOOP-9613) [JDK8] Update jersey version to latest 1.x release
[ https://issues.apache.org/jira/browse/HADOOP-9613?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15335661#comment-15335661 ]

Hadoop QA commented on HADOOP-9613:
-----------------------------------
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 30s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 32 new or modified test files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 32s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 9m 2s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 8m 6s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 5m 27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 2m 11s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Skipped patched modules with no Java source: hadoop-project {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 7m 45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 3m 43s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 23s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 4m 15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 9m 13s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 9m 13s{color} | {color:red} root generated 2 new + 693 unchanged - 0 fixed = 695 total (was 693) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 1m 50s{color} | {color:red} root: The patch generated 8 new + 371 unchanged - 56 fixed = 379 total (was 427) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 5m 35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 2m 17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 4s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Skipped patched modules with no Java source: hadoop-project {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 9m 20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 3m 6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 9s{color} | {color:green} hadoop-project in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 7m 49s{color} | {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 12s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 13m 0s{color} | {color:red} hadoop-yarn-server-nodemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 46s{color} | {color:green} hadoop-yarn-server-applicationhistoryservice in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 34m 47s{color} | {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 8m 45s{color} | {color:red} hadoop-yarn-client in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 10m 4s{color} | {color:green}
[jira] [Commented] (HADOOP-13192) org.apache.hadoop.util.LineReader match recordDelimiter has a bug
[ https://issues.apache.org/jira/browse/HADOOP-13192?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15335613#comment-15335613 ]

Akira AJISAKA commented on HADOOP-13192:
----------------------------------------
Thanks [~zhudebin] for updating the patch. The fix looks good to me. I ran {{mvn test -Dtest=*Reader*}} and all the tests succeeded. Would you fix the checkstyle warnings? I'm +1 if that is addressed.
[jira] [Comment Edited] (HADOOP-13192) org.apache.hadoop.util.LineReader match recordDelimiter has a bug
[ https://issues.apache.org/jira/browse/HADOOP-13192?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15335613#comment-15335613 ]

Akira AJISAKA edited comment on HADOOP-13192 at 6/17/16 7:47 AM:
-----------------------------------------------------------------
Thanks [~zhudebin] for updating the patch. The fix looks good to me. I ran {{mvn test -Dtest=\*Reader\*}} and all the tests succeeded. Would you fix the checkstyle warnings? I'm +1 if that is addressed.

was (Author: ajisakaa):
Thanks [~zhudebin] for updating the patch. The fix looks good to me. I ran {{mvn test -Dtest=*Reader*}} and all the tests succeeded. Would you fix the checkstyle warnings? I'm +1 if that is addressed.
[jira] [Commented] (HADOOP-12943) Add -w -r options in dfs -test command
[ https://issues.apache.org/jira/browse/HADOOP-12943?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15335598#comment-15335598 ]

Hudson commented on HADOOP-12943:
---------------------------------
SUCCESS: Integrated in Hadoop-trunk-Commit #9977 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/9977/])
HADOOP-12943. Add -w -r options in dfs -test command. Contributed by (aajisaka: rev 09e82acaf9a6d7663bc51bbca0cdeca4b582b535)
* hadoop-common-project/hadoop-common/src/site/markdown/FileSystemShell.md
* hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSShell.java
* hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/shell/Test.java

> Add -w -r options in dfs -test command
> --------------------------------------
>
>                 Key: HADOOP-12943
>                 URL: https://issues.apache.org/jira/browse/HADOOP-12943
>             Project: Hadoop Common
>          Issue Type: Improvement
>          Components: fs, scripts, tools
>            Reporter: Weiwei Yang
>            Assignee: Weiwei Yang
>             Fix For: 2.8.0
>
>         Attachments: HADOOP-12943.001.patch, HADOOP-12943.002.patch,
>                      HADOOP-12943.003.patch, HADOOP-12943.004.patch, HADOOP-12943.005.patch
>
> Currently the dfs -test command only supports the
> -d, -e, -f, -s, -z
> options. It would be helpful if we add
> -w, -r
> to verify permission of r/w before actual read or write. This will help
> script programming.
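The scripting use case the issue mentions looks roughly like this — a hypothetical sketch (paths are examples; per the summary, -w checks write permission and -r checks read permission, and -test signals the result through its exit status):

```shell
# Probe write access before attempting the upload, so the script can
# fail with a clear message instead of a mid-copy permission error.
if hdfs dfs -test -w /data/incoming; then
  hdfs dfs -put local.csv /data/incoming/
else
  echo "no write access to /data/incoming" >&2
  exit 1
fi
```

Like the existing -e and -d flags, this keeps permission handling in the script's control flow rather than in parsed error output.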
[jira] [Commented] (HADOOP-12943) Add -w -r options in dfs -test command
[ https://issues.apache.org/jira/browse/HADOOP-12943?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15335582#comment-15335582 ]

Weiwei Yang commented on HADOOP-12943:
--------------------------------------
Thanks [~ajisakaa]
[jira] [Updated] (HADOOP-12943) Add -w -r options in dfs -test command
[ https://issues.apache.org/jira/browse/HADOOP-12943?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Akira AJISAKA updated HADOOP-12943:
-----------------------------------
      Resolution: Fixed
    Hadoop Flags: Reviewed
   Fix Version/s: 2.8.0
          Status: Resolved  (was: Patch Available)

Committed this to trunk, branch-2, and branch-2.8. Thanks [~cheersyang] for the contribution.
[jira] [Commented] (HADOOP-12943) Add -w -r options in dfs -test command
[ https://issues.apache.org/jira/browse/HADOOP-12943?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15335579#comment-15335579 ]

Akira AJISAKA commented on HADOOP-12943:
----------------------------------------
+1, the checkstyle warnings and test failures seem to be unrelated to the patch.
[jira] [Commented] (HADOOP-13192) org.apache.hadoop.util.LineReader match recordDelimiter has a bug
[ https://issues.apache.org/jira/browse/HADOOP-13192?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15335566#comment-15335566 ]

Hadoop QA commented on HADOOP-13192:
------------------------------------
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 31s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 7m 35s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 24s{color} | {color:red} hadoop-common-project/hadoop-common: The patch generated 4 new + 29 unchanged - 0 fixed = 33 total (was 29) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 8m 20s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 24s{color} | {color:green} The patch does not generate ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 39m 57s{color} | {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Image:yetus/hadoop:e2f6409 |
| JIRA Issue | HADOOP-13192 |
| GITHUB PR | https://github.com/apache/hadoop/pull/99 |
| Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle |
| uname | Linux 89017c85d9eb 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 51d497f |
| Default Java | 1.8.0_91 |
| findbugs | v3.0.0 |
| checkstyle | https://builds.apache.org/job/PreCommit-HADOOP-Build/9813/artifact/patchprocess/diff-checkstyle-hadoop-common-project_hadoop-common.txt |
| Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/9813/testReport/ |
| modules | C: hadoop-common-project/hadoop-common U: hadoop-common-project/hadoop-common |
| Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/9813/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT http://yetus.apache.org |

This message was automatically generated.
[jira] [Commented] (HADOOP-13242) Authenticate to Azure Data Lake using client ID and keys
[ https://issues.apache.org/jira/browse/HADOOP-13242?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15335556#comment-15335556 ]

Hudson commented on HADOOP-13242:
---------------------------------
SUCCESS: Integrated in Hadoop-trunk-Commit #9976 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/9976/])
HADOOP-13242. Authenticate to Azure Data Lake using client ID and keys. (cnauroth: rev 51d16e7b38d247f73b0ec2ffd8b2b02069c05a33)
* hadoop-tools/hadoop-azure-datalake/src/main/java/org/apache/hadoop/hdfs/web/oauth2/AzureADClientCredentialBasedAccesTokenProvider.java
* hadoop-tools/hadoop-azure-datalake/src/site/markdown/index.md
* hadoop-tools/hadoop-azure-datalake/pom.xml

> Authenticate to Azure Data Lake using client ID and keys
> --------------------------------------------------------
>
>                 Key: HADOOP-13242
>                 URL: https://issues.apache.org/jira/browse/HADOOP-13242
>             Project: Hadoop Common
>          Issue Type: Bug
>          Components: fs/azure
>         Environment: All
>            Reporter: Atul Sikaria
>            Assignee: Atul Sikaria
>             Fix For: 3.0.0-alpha1
>
>         Attachments: HADOOP-13242-003.patch, HADOOP-13242-004.patch,
>                      HADOOP-13242-005.patch, HADOOP-13242-006.patch, HDFS-10462-001.patch,
>                      HDFS-10462-002.patch
>
>   Original Estimate: 168h
>  Remaining Estimate: 168h
>
> Current OAuth2 support (used by HADOOP-12666) supports getting a token using
> client creds. However, the client creds support does not pass the "resource"
> parameter required by Azure AD. This work adds support for the "resource"
> parameter when acquiring the OAuth2 token from Azure AD, so the client
> credentials can be used to authenticate to Azure Data Lake.
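The missing piece the patch adds is the resource field in the standard OAuth2 client-credentials request body. A self-contained sketch of such a body — the field names follow the generic OAuth2/Azure AD client-credentials grant, not the actual Hadoop token-provider class, and the example values are illustrative:

```java
import java.io.UnsupportedEncodingException;
import java.net.URLEncoder;

// Builds the form-encoded body of an OAuth2 client-credentials token
// request, including the "resource" parameter Azure AD requires.
public class TokenRequest {
    public static String clientCredentialsBody(String clientId, String secret, String resource) {
        return "grant_type=client_credentials"
            + "&client_id=" + url(clientId)
            + "&client_secret=" + url(secret)
            + "&resource=" + url(resource);  // the parameter this patch adds support for
    }

    private static String url(String s) {
        try {
            return URLEncoder.encode(s, "UTF-8");
        } catch (UnsupportedEncodingException e) {
            throw new AssertionError(e);  // UTF-8 is always available
        }
    }

    public static void main(String[] args) {
        // Hypothetical client id, key, and resource URI.
        System.out.println(clientCredentialsBody(
            "app-id", "app-key", "https://datalake.azure.net/"));
    }
}
```

Without the resource field, Azure AD rejects the grant, which is why the generic client-credentials support could not previously authenticate to Data Lake.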
[jira] [Commented] (HADOOP-12943) Add -w -r options in dfs -test command
[ https://issues.apache.org/jira/browse/HADOOP-12943?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15335548#comment-15335548 ]

Hadoop QA commented on HADOOP-12943:
------------------------------------
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 25s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 12s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 47s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 55s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 8s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 51s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 11s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 7m 7s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 1m 24s{color} | {color:red} root: The patch generated 2 new + 186 unchanged - 25 fixed = 188 total (was 211) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 57s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 8m 36s{color} | {color:red} hadoop-common in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 73m 9s{color} | {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 25s{color} | {color:green} The patch does not generate ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black}124m 50s{color} | {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.ha.TestZKFailoverController |
|   | hadoop.hdfs.server.namenode.TestEditLog |
|   | hadoop.hdfs.server.blockmanagement.TestRBWBlockInvalidation |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Image:yetus/hadoop:e2f6409 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12810783/HADOOP-12943.005.patch |
| JIRA Issue | HADOOP-12943 |
| Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle |
| uname | Linux c1cc34c4cfe8 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 51d497f |
| Default Java | 1.8.0_91 |
| findbugs | v3.0.0 |
| checkstyle | https://builds.apache.org/job/PreCommit-HADOOP-Build/9810/artifact/patchprocess/diff-checkstyle-root.txt |
| unit | https://builds.apache.org/job/PreCommit-HADOOP-Build/9810/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common.txt |
| unit |
[jira] [Updated] (HADOOP-13242) Authenticate to Azure Data Lake using client ID and keys
[ https://issues.apache.org/jira/browse/HADOOP-13242?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Chris Nauroth updated HADOOP-13242: --- Resolution: Fixed Hadoop Flags: Reviewed Fix Version/s: 3.0.0-alpha1 Status: Resolved (was: Patch Available) +1 for patch 006. I have committed this to trunk. [~ASikaria], thank you for the contribution.
> Authenticate to Azure Data Lake using client ID and keys
> --
>
> Key: HADOOP-13242
> URL: https://issues.apache.org/jira/browse/HADOOP-13242
> Project: Hadoop Common
> Issue Type: Bug
> Components: fs/azure
> Environment: All
> Reporter: Atul Sikaria
> Assignee: Atul Sikaria
> Fix For: 3.0.0-alpha1
>
> Attachments: HADOOP-13242-003.patch, HADOOP-13242-004.patch, HADOOP-13242-005.patch, HADOOP-13242-006.patch, HDFS-10462-001.patch, HDFS-10462-002.patch
>
> Original Estimate: 168h
> Remaining Estimate: 168h
>
> Current OAuth2 support (used by HADOOP-12666) supports getting a token using client creds. However, the client creds support does not pass the "resource" parameter required by Azure AD. This work adds support for the "resource" parameter when acquiring the OAuth2 token from Azure AD, so the client credentials can be used to authenticate to Azure Data Lake.
--
This message was sent by Atlassian JIRA (v6.3.4#6332)
-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org
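As an illustration of the change described above, the sketch below shows what adding the "resource" parameter to a client-credentials token request looks like. This is not Hadoop's actual ADL connector code; the class name, method name, and the example resource URI are hypothetical.

```java
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.stream.Collectors;

public class TokenRequest {
  // Builds the form-encoded body for an OAuth2 client-credentials
  // token request, including the extra "resource" parameter that
  // Azure AD requires and that plain client-creds flows omit.
  static String buildBody(String clientId, String clientSecret, String resource) {
    Map<String, String> params = new LinkedHashMap<>();
    params.put("grant_type", "client_credentials");
    params.put("client_id", clientId);
    params.put("client_secret", clientSecret);
    params.put("resource", resource); // the parameter this patch adds support for
    return params.entrySet().stream()
        .map(e -> e.getKey() + "=" + URLEncoder.encode(e.getValue(), StandardCharsets.UTF_8))
        .collect(Collectors.joining("&"));
  }
}
```

The resulting string would be POSTed as application/x-www-form-urlencoded to the Azure AD token endpoint; the endpoint URL and credential values here are placeholders.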
[jira] [Commented] (HADOOP-13192) org.apache.hadoop.util.LineReader match recordDelimiter has a bug
[ https://issues.apache.org/jira/browse/HADOOP-13192?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15335515#comment-15335515 ] binde commented on HADOOP-13192: Okay, I understand.
> org.apache.hadoop.util.LineReader match recordDelimiter has a bug
> --
>
> Key: HADOOP-13192
> URL: https://issues.apache.org/jira/browse/HADOOP-13192
> Project: Hadoop Common
> Issue Type: Bug
> Components: util
> Affects Versions: 2.6.2
> Reporter: binde
> Assignee: binde
> Attachments: 0001-HADOOP-13192-org.apache.hadoop.util.LineReader-match.patch, 0002-fix-bug-hadoop-1392-add-test-case-for-LineReader.patch
>
> Original Estimate: 5m
> Remaining Estimate: 5m
>
> org.apache.hadoop.util.LineReader.readCustomLine() has a bug:
> when the line is bccc and the recordDelimiter is aaab, the result should be a,ccc.
> The code at line 310:
>
>   for (; bufferPosn < bufferLength; ++bufferPosn) {
>     if (buffer[bufferPosn] == recordDelimiterBytes[delPosn]) {
>       delPosn++;
>       if (delPosn >= recordDelimiterBytes.length) {
>         bufferPosn++;
>         break;
>       }
>     } else if (delPosn != 0) {
>       bufferPosn--;
>       delPosn = 0;
>     }
>   }
>
> should be:
>
>   for (; bufferPosn < bufferLength; ++bufferPosn) {
>     if (buffer[bufferPosn] == recordDelimiterBytes[delPosn]) {
>       delPosn++;
>       if (delPosn >= recordDelimiterBytes.length) {
>         bufferPosn++;
>         break;
>       }
>     } else if (delPosn != 0) {
>       // - change here - start
>       bufferPosn -= delPosn;
>       // - change here - end
>       delPosn = 0;
>     }
>   }
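To see why rewinding by delPosn matters: when a partial delimiter match fails, stepping back only one byte can skip over the real start of the delimiter. The standalone sketch below is not Hadoop's actual LineReader; it just demonstrates the corrected matching loop, and it assumes an input like aaaabccc, since the example strings in the report above appear truncated.

```java
import java.util.ArrayList;
import java.util.List;

public class DelimiterSplit {
  // Splits data on a multi-byte delimiter using the corrected loop:
  // on a failed partial match, rewind bufferPosn by delPosn so the
  // scan resumes one byte after where the partial match began.
  static List<String> split(String data, String delim) {
    byte[] buffer = data.getBytes();
    byte[] d = delim.getBytes();
    List<String> records = new ArrayList<>();
    int start = 0, delPosn = 0;
    for (int bufferPosn = 0; bufferPosn < buffer.length; ++bufferPosn) {
      if (buffer[bufferPosn] == d[delPosn]) {
        delPosn++;
        if (delPosn >= d.length) {                       // full delimiter matched
          records.add(data.substring(start, bufferPosn + 1 - d.length));
          start = bufferPosn + 1;
          delPosn = 0;
        }
      } else if (delPosn != 0) {
        bufferPosn -= delPosn;                           // the fix: rewind the whole partial match
        delPosn = 0;
      }
    }
    records.add(data.substring(start));                  // trailing record
    return records;
  }
}
```

With input aaaabccc and delimiter aaab, the partial match a-a-a starting at offset 0 fails at the fourth byte; rewinding by delPosn lets the scan retry from offset 1 and find the real delimiter at offsets 1-4, yielding the records "a" and "ccc". With the original single-byte step-back, that delimiter is missed entirely.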
[jira] [Commented] (HADOOP-13192) org.apache.hadoop.util.LineReader match recordDelimiter has a bug
[ https://issues.apache.org/jira/browse/HADOOP-13192?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15335478#comment-15335478 ] Akira AJISAKA commented on HADOOP-13192: Thanks [~zhudebin] for attaching the patches in the jira, but we actually don't need to attach patches when there is a corresponding GitHub pull request, because the Jenkins precommit job runs on the pull request. However, we do need to hit "Submit Patch" to change the status to "Patch Available" so that the precommit job runs.
> org.apache.hadoop.util.LineReader match recordDelimiter has a bug