[jira] [Commented] (HADOOP-11844) SLS docs point to invalid rumen link
[ https://issues.apache.org/jira/browse/HADOOP-11844?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14517116#comment-14517116 ] Vinayakumar B commented on HADOOP-11844: [~andreina], I think you are creating the patch on branch-2.6, which is why it is not applying on branch-2. In later versions all .apt files were converted to Markdown (.md) files, and exactly this issue was fixed in 2.7.0 through HADOOP-11558. [~aw], do you need this in branch-2.6 as well, in case there are any more releases from branch-2.6?

SLS docs point to invalid rumen link
Key: HADOOP-11844
URL: https://issues.apache.org/jira/browse/HADOOP-11844
Project: Hadoop Common
Issue Type: Bug
Components: documentation
Affects Versions: 2.6.0
Reporter: Allen Wittenauer
Assignee: J.Andreina
Priority: Trivial
Labels: newbie
Attachments: HADOOP-11844-branch-2.002.patch, HADOOP-11844-branch-2.003.patch, HADOOP-11844.1.patch

SchedulerLoadSimulator at least on 2.6.0 points to an invalid link to rumen. Need to verify and potentially fix this link in newer releases.

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
[jira] [Commented] (HADOOP-11844) SLS docs point to invalid rumen link
[ https://issues.apache.org/jira/browse/HADOOP-11844?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14517082#comment-14517082 ] Hadoop QA commented on HADOOP-11844:

| (x) *{color:red}-1 overall{color}* |

|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | patch | 0m 0s | The patch command could not apply the patch during dryrun. |

|| Subsystem || Report/Notes ||
| Patch URL | http://issues.apache.org/jira/secure/attachment/12728795/HADOOP-11844-branch-2.003.patch |
| Optional Tests | site |
| git revision | branch-2 / 1d03ac3 |
| Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/6205/console |

This message was automatically generated.
[jira] [Commented] (HADOOP-11715) azureFs::getFileStatus doesn't check the file system scheme and thus could throw a misleading exception.
[ https://issues.apache.org/jira/browse/HADOOP-11715?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14517035#comment-14517035 ] Hadoop QA commented on HADOOP-11715:

| (x) *{color:red}-1 overall{color}* |

|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch | 14m 34s | Pre-patch trunk compilation is healthy. |
| {color:green}+1{color} | @author | 0m 0s | The patch does not contain any @author tags. |
| {color:green}+1{color} | tests included | 0m 0s | The patch appears to include 1 new or modified test files. |
| {color:green}+1{color} | whitespace | 0m 0s | The patch has no lines that end in whitespace. |
| {color:green}+1{color} | javac | 7m 30s | There were no new javac warning messages. |
| {color:green}+1{color} | javadoc | 9m 32s | There were no new javadoc warning messages. |
| {color:green}+1{color} | release audit | 0m 22s | The applied patch does not increase the total number of release audit warnings. |
| {color:red}-1{color} | checkstyle | 7m 54s | The applied patch generated 1 additional checkstyle issues. |
| {color:green}+1{color} | install | 1m 33s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse | 0m 32s | The patch built with eclipse:eclipse. |
| {color:green}+1{color} | findbugs | 0m 39s | The patch does not introduce any new Findbugs (version 2.0.3) warnings. |
| {color:green}+1{color} | tools/hadoop tests | 1m 9s | Tests passed in hadoop-azure. |
| | | | 43m 53s | |

|| Subsystem || Report/Notes ||
| Patch URL | http://issues.apache.org/jira/secure/attachment/12728778/HADOOP-11715.2.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / 99fe03e |
| checkstyle | https://builds.apache.org/job/PreCommit-HADOOP-Build/6203/artifact/patchprocess/checkstyle-result-diff.txt |
| hadoop-azure test log | https://builds.apache.org/job/PreCommit-HADOOP-Build/6203/artifact/patchprocess/testrun_hadoop-azure.txt |
| Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/6203/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf902.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/6203/console |

This message was automatically generated.

azureFs::getFileStatus doesn't check the file system scheme and thus could throw a misleading exception.
Key: HADOOP-11715
URL: https://issues.apache.org/jira/browse/HADOOP-11715
Project: Hadoop Common
Issue Type: Bug
Components: fs
Affects Versions: 2.7.0
Reporter: Brandon Li
Assignee: nijel
Fix For: 2.8.0
Attachments: HADOOP-11715.1.patch, HADOOP-11715.2.patch

azureFs::getFileStatus doesn't check the file system scheme and thus could throw a misleading exception. For example, it complains filenotfound instead of wrong-fs for an hdfs path:

Caused by: java.io.FileNotFoundException: hdfs://headnode0:9000/hive/scratch/hadoopqa/a7d34a22-57eb-4678-84b4-43d84027d45f/hive_2015-03-02_23-13-04_713_5722627238053417441-1/hadoopqa/_tez_scratch_dir/_tez_scratch_dir/split_Map_1/job.split: No such file or directory.
at org.apache.hadoop.fs.azure.NativeAzureFileSystem.getFileStatus(NativeAzureFileSystem.java:1625)
[jira] [Updated] (HADOOP-11844) SLS docs point to invalid rumen link
[ https://issues.apache.org/jira/browse/HADOOP-11844?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] J.Andreina updated HADOOP-11844: Attachment: HADOOP-11844-branch-2.003.patch Updated the patch file
[jira] [Commented] (HADOOP-1540) distcp should support an exclude list
[ https://issues.apache.org/jira/browse/HADOOP-1540?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14517921#comment-14517921 ] Rich Haase commented on HADOOP-1540: [~jingzhao] Just finished rebasing against trunk and testing the patch.

distcp should support an exclude list
Key: HADOOP-1540
URL: https://issues.apache.org/jira/browse/HADOOP-1540
Project: Hadoop Common
Issue Type: Improvement
Components: util
Affects Versions: 2.6.0
Reporter: Senthil Subramanian
Assignee: Rich Haase
Priority: Minor
Labels: patch
Fix For: 2.6.0
Attachments: HADOOP-1540.001.patch, HADOOP-1540.branch-2.6.0.001.patch

There should be a way to ignore specific paths (eg: those that have already been copied over under the current srcPath).
[jira] [Updated] (HADOOP-1540) distcp should support an exclude list
[ https://issues.apache.org/jira/browse/HADOOP-1540?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Rich Haase updated HADOOP-1540: --- Attachment: HADOOP-1540.001.patch rebased patch against trunk
[jira] [Updated] (HADOOP-1540) distcp should support an exclude list
[ https://issues.apache.org/jira/browse/HADOOP-1540?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Rich Haase updated HADOOP-1540: --- Attachment: (was: HADOOP-1540.001.patch)
[jira] [Commented] (HADOOP-1540) distcp should support an exclude list
[ https://issues.apache.org/jira/browse/HADOOP-1540?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14518495#comment-14518495 ] Hadoop QA commented on HADOOP-1540:

| (x) *{color:red}-1 overall{color}* |

|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch | 14m 54s | Pre-patch trunk compilation is healthy. |
| {color:green}+1{color} | @author | 0m 0s | The patch does not contain any @author tags. |
| {color:green}+1{color} | tests included | 0m 0s | The patch appears to include 2 new or modified test files. |
| {color:green}+1{color} | whitespace | 0m 0s | The patch has no lines that end in whitespace. |
| {color:green}+1{color} | javac | 7m 37s | There were no new javac warning messages. |
| {color:green}+1{color} | javadoc | 9m 50s | There were no new javadoc warning messages. |
| {color:green}+1{color} | release audit | 0m 23s | The applied patch does not increase the total number of release audit warnings. |
| {color:red}-1{color} | checkstyle | 4m 1s | The applied patch generated 5 additional checkstyle issues. |
| {color:green}+1{color} | install | 1m 35s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse | 0m 35s | The patch built with eclipse:eclipse. |
| {color:green}+1{color} | findbugs | 0m 38s | The patch does not introduce any new Findbugs (version 2.0.3) warnings. |
| {color:green}+1{color} | tools/hadoop tests | 6m 20s | Tests passed in hadoop-distcp. |
| | | | 45m 59s | |

|| Subsystem || Report/Notes ||
| Patch URL | http://issues.apache.org/jira/secure/attachment/12728958/HADOOP-1540.002.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / 5190923 |
| checkstyle | https://builds.apache.org/job/PreCommit-HADOOP-Build/6208/artifact/patchprocess/checkstyle-result-diff.txt |
| hadoop-distcp test log | https://builds.apache.org/job/PreCommit-HADOOP-Build/6208/artifact/patchprocess/testrun_hadoop-distcp.txt |
| Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/6208/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf903.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/6208/console |

This message was automatically generated.
[jira] [Updated] (HADOOP-11885) hadoop-dist dist-layout-stitching.sh does not work with dash
[ https://issues.apache.org/jira/browse/HADOOP-11885?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrew Wang updated HADOOP-11885:
Assignee: Andrew Wang
Affects Version/s: (was: 3.0.0) 2.7.0
Status: Patch Available (was: Open)

hadoop-dist dist-layout-stitching.sh does not work with dash
Key: HADOOP-11885
URL: https://issues.apache.org/jira/browse/HADOOP-11885
Project: Hadoop Common
Issue Type: Bug
Components: build
Affects Versions: 2.7.0
Reporter: Andrew Wang
Assignee: Andrew Wang
Attachments: hadoop-11885.001.patch

Saw this while building the EC branch, pretty sure it'll repro on trunk though too.

{noformat}
[exec] ./dist-layout-stitching.sh: 53: [: bin: unexpected operator
[exec] ./dist-layout-stitching.sh: 53: [: etc: unexpected operator
[exec] ./dist-layout-stitching.sh: 53: [: lib: unexpected operator
[exec] ./dist-layout-stitching.sh: 53: [: libexec: unexpected operator
[exec] ./dist-layout-stitching.sh: 53: [: sbin: unexpected operator
[exec] ./dist-layout-stitching.sh: 53: [: share: unexpected operator
[exec] ./dist-layout-stitching.sh: 53: [: share: unexpected operator
[exec] ./dist-layout-stitching.sh: 53: [: bin: unexpected operator
[exec] ./dist-layout-stitching.sh: 53: [: etc: unexpected operator
[exec] ./dist-layout-stitching.sh: 53: [: include: unexpected operator
[exec] ./dist-layout-stitching.sh: 53: [: lib: unexpected operator
[exec] ./dist-layout-stitching.sh: 53: [: libexec: unexpected operator
[exec] ./dist-layout-stitching.sh: 53: [: sbin: unexpected operator
[exec] ./dist-layout-stitching.sh: 53: [: share: unexpected operator
[exec] $ copy /home/andrew/dev/hadoop/hdfs-7285/hadoop-common-project/hadoop-nfs/target/hadoop-nfs-3.0.0-SNAPSHOT .
[exec] $ copy /home/andrew/dev/hadoop/hdfs-7285/hadoop-hdfs-project/hadoop-hdfs/target/hadoop-hdfs-3.0.0-SNAPSHOT .
[exec] $ copy /home/andrew/dev/hadoop/hdfs-7285/hadoop-hdfs-project/hadoop-hdfs-nfs/target/hadoop-hdfs-nfs-3.0.0-SNAPSHOT .
[exec] $ copy /home/andrew/dev/hadoop/hdfs-7285/hadoop-yarn-project/target/hadoop-yarn-project-3.0.0-SNAPSHOT .
[exec] ./dist-layout-stitching.sh: 53: [: share: unexpected operator
[exec] ./dist-layout-stitching.sh: 53: [: bin: unexpected operator
[exec] ./dist-layout-stitching.sh: 53: [: etc: unexpected operator
[exec] ./dist-layout-stitching.sh: 53: [: libexec: unexpected operator
[exec] ./dist-layout-stitching.sh: 53: [: sbin: unexpected operator
[exec] $ copy /home/andrew/dev/hadoop/hdfs-7285/hadoop-mapreduce-project/target/hadoop-mapreduce-3.0.0-SNAPSHOT .
[exec] $ copy /home/andrew/dev/hadoop/hdfs-7285/hadoop-tools/hadoop-tools-dist/target/hadoop-tools-dist-3.0.0-SNAPSHOT .
[exec] ./dist-layout-stitching.sh: 53: [: share: unexpected operator
[exec] ./dist-layout-stitching.sh: 53: [: bin: unexpected operator
[exec] ./dist-layout-stitching.sh: 53: [: etc: unexpected operator
[exec] ./dist-layout-stitching.sh: 53: [: libexec: unexpected operator
[exec] ./dist-layout-stitching.sh: 53: [: sbin: unexpected operator
[exec] ./dist-layout-stitching.sh: 53: [: share: unexpected operator
[exec] ./dist-layout-stitching.sh: 53: [: include: unexpected operator
[exec] ./dist-layout-stitching.sh: 53: [: share: unexpected operator
{noformat}
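The "[: ... unexpected operator" messages are the classic symptom of a bashism run under a POSIX shell such as dash. The sketch below is a hypothetical reconstruction (the actual contents of line 53 of dist-layout-stitching.sh are an assumption here): bash's `[` builtin accepts `==`, but POSIX `test` does not, so dash reports the operator as unexpected, evaluates the test as false, and keeps running with exit status 0 -- which would also explain why the build appears to succeed while silently skipping those branches.

```shell
#!/bin/sh
# Suspected bashism at line 53 (an assumption, not the verified source):
#
#   if [ "${d}" == "bin" ]; then ...
#
# Under dash, '[' prints "[: bin: unexpected operator", the condition is
# treated as false, and the script continues with exit status 0.

d="bin"

# Portable POSIX spelling: a single '=' works under both bash and dash.
if [ "${d}" = "bin" ]; then
  echo "matched ${d}"
fi

# Alternative fix (the approach taken in the attached patch): force bash
# explicitly, e.g. a '#!/usr/bin/env bash' shebang, so '==' stays valid.
```

Either fix is small; the `=` spelling keeps the script portable, while forcing bash matches the project's stated policy that the official shell is bash.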
[jira] [Commented] (HADOOP-11885) hadoop-dist dist-layout-stitching.sh does not work with dash
[ https://issues.apache.org/jira/browse/HADOOP-11885?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14518523#comment-14518523 ] Brahma Reddy Battula commented on HADOOP-11885: --- Nice catch..Shall I post patch ..?
[jira] [Commented] (HADOOP-11885) hadoop-dist dist-layout-stitching.sh does not work with dash
[ https://issues.apache.org/jira/browse/HADOOP-11885?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14518284#comment-14518284 ] Andrew Wang commented on HADOOP-11885: -- [~aw] I think you were the last person to touch this script, any thoughts?
[jira] [Updated] (HADOOP-11821) Fix findbugs warnings in hadoop-sls
[ https://issues.apache.org/jira/browse/HADOOP-11821?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Brahma Reddy Battula updated HADOOP-11821: -- Attachment: HADOOP-11821-005.patch

Fix findbugs warnings in hadoop-sls
Key: HADOOP-11821
URL: https://issues.apache.org/jira/browse/HADOOP-11821
Project: Hadoop Common
Issue Type: Bug
Components: tools
Reporter: Akira AJISAKA
Assignee: Brahma Reddy Battula
Attachments: HADOOP-11821-001.patch, HADOOP-11821-002.patch, HADOOP-11821-003.patch, HADOOP-11821-004.patch, HADOOP-11821-005.patch, HADOOP-11821.patch

Per https://builds.apache.org/job/PreCommit-MAPREDUCE-Build/5388//artifact/patchprocess/newPatchFindbugsWarningshadoop-sls.html, there are 13 warnings to be fixed.
[jira] [Commented] (HADOOP-11821) Fix findbugs warnings in hadoop-sls
[ https://issues.apache.org/jira/browse/HADOOP-11821?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14518565#comment-14518565 ] Brahma Reddy Battula commented on HADOOP-11821: --- [~ajisakaa] Updated the patch based on your comment..Kindly review..
[jira] [Commented] (HADOOP-11886) Failed to run distcp against ftp server installed on Windows
[ https://issues.apache.org/jira/browse/HADOOP-11886?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14518604#comment-14518604 ] sam liu commented on HADOOP-11886: -- I found that the FTPClient code hangs on the line 'client.setFileTransferMode(FTP.BLOCK_TRANSFER_MODE)' against an IIS ftp server on Windows, and then fails with an FTP connection exception. So I removed the line 'client.setFileTransferMode(FTP.BLOCK_TRANSFER_MODE)' from hadoop/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/ftp/FTPFileSystem.java#connect(), and things improved somewhat: the distcp tool can now launch the MR job against the IIS ftp server on Windows (it could not before the code change), but it still always fails at the 'map 100% reduce 0%' step:

hadoop distcp ftp://Viewer:passw...@hostname1.com:1121/ftp_file1.txt /tmp/

15/04/28 01:28:48 INFO tools.DistCp: Input Options: DistCpOptions{atomicCommit=false, syncFolder=false, deleteMissing=false, ignoreFailures=false, maxMaps=20, sslConfigurationFile='null', copyStrategy='uniformsize', sourceFileListing=null, sourcePaths=[ftp://Viewer:passw...@hostname1.com:1121/ftp_file1.txt], targetPath=/tmp, targetPathExists=true, preserveRawXattrs=false}
15/04/28 01:28:49 INFO impl.TimelineClientImpl: Timeline service address: http://hostname2.com:8188/ws/v1/timeline/
15/04/28 01:28:50 INFO client.RMProxy: Connecting to ResourceManager at hostname2.com/9.30.249.187:8050
15/04/28 01:29:12 INFO impl.TimelineClientImpl: Timeline service address: http://hostname2.com:8188/ws/v1/timeline/
15/04/28 01:29:12 INFO client.RMProxy: Connecting to ResourceManager at hostname2.com/9.30.249.187:8050
15/04/28 01:29:13 INFO mapreduce.JobSubmitter: number of splits:1
15/04/28 01:29:13 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1430212928460_0002
15/04/28 01:29:13 INFO impl.YarnClientImpl: Submitted application application_1430212928460_0002
15/04/28 01:29:13 INFO mapreduce.Job: The url to track the job: http://hostname2.com:8088/proxy/application_1430212928460_0002/
15/04/28 01:29:13 INFO tools.DistCp: DistCp job-id: job_1430212928460_0002
15/04/28 01:29:13 INFO mapreduce.Job: Running job: job_1430212928460_0002
15/04/28 01:29:20 INFO mapreduce.Job: Job job_1430212928460_0002 running in uber mode : false
15/04/28 01:29:20 INFO mapreduce.Job: map 0% reduce 0%
15/04/28 01:29:31 INFO mapreduce.Job: map 100% reduce 0%
15/04/28 01:31:38 INFO mapreduce.Job: Task Id : attempt_1430212928460_0002_m_00_0, Status : FAILED
Error: java.net.SocketException: Connection reset
at java.net.SocketOutputStream.socketWrite(SocketOutputStream.java:118)
at java.net.SocketOutputStream.write(SocketOutputStream.java:159)
at sun.nio.cs.StreamEncoder.writeBytes(StreamEncoder.java:221)
at sun.nio.cs.StreamEncoder.implFlushBuffer(StreamEncoder.java:291)
at sun.nio.cs.StreamEncoder.implFlush(StreamEncoder.java:295)
at sun.nio.cs.StreamEncoder.flush(StreamEncoder.java:141)
at java.io.OutputStreamWriter.flush(OutputStreamWriter.java:229)
at java.io.BufferedWriter.flush(BufferedWriter.java:254)
at org.apache.commons.net.ftp.FTP.__send(FTP.java:501)
at org.apache.commons.net.ftp.FTP.sendCommand(FTP.java:475)
at org.apache.commons.net.ftp.FTP.sendCommand(FTP.java:552)
at org.apache.commons.net.ftp.FTP.sendCommand(FTP.java:601)
at org.apache.commons.net.ftp.FTP.quit(FTP.java:809)
at org.apache.commons.net.ftp.FTPClient.logout(FTPClient.java:979)
at org.apache.hadoop.fs.ftp.FTPFileSystem.disconnect(FTPFileSystem.java:162)
at org.apache.hadoop.fs.ftp.FTPFileSystem.getFileStatus(FTPFileSystem.java:410)
at org.apache.hadoop.tools.mapred.CopyMapper.map(CopyMapper.java:218)
at org.apache.hadoop.tools.mapred.CopyMapper.map(CopyMapper.java:50)
at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:145)
at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:784)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341)
at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:163)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)
[jira] [Created] (HADOOP-11886) Failed to run distcp against ftp server installed on Windows
sam liu created HADOOP-11886:
Summary: Failed to run distcp against ftp server installed on Windows
Key: HADOOP-11886
URL: https://issues.apache.org/jira/browse/HADOOP-11886
Project: Hadoop Common
Issue Type: Bug
Components: tools/distcp
Reporter: sam liu
Assignee: sam liu
Priority: Blocker

distcp runs fine against an ftp server installed on Linux, but could NOT run against an ftp server installed on Windows (such as the IIS ftp service). However, distcp works well with a FileZilla ftp server installed on Windows.
[jira] [Commented] (HADOOP-11885) hadoop-dist dist-layout-stitching.sh does not work with dash
[ https://issues.apache.org/jira/browse/HADOOP-11885?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14518354#comment-14518354 ] Allen Wittenauer commented on HADOOP-11885: --- Oh, FWIW, it was decided a while back (like 0.x days) that the official shell was going to be bash. So I agree that forcing this to bash should be a quick and 'legal' fix.
[jira] [Updated] (HADOOP-11885) hadoop-dist dist-layout-stitching.sh does not work with dash
[ https://issues.apache.org/jira/browse/HADOOP-11885?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrew Wang updated HADOOP-11885: - Attachment: hadoop-11885.001.patch Thanks for commenting Allen. This attached patch changes sh to bash where my grep found it. I'm also somewhat puzzled why dash doesn't end up with a non-zero error code when it's spitting all these parse errors. I played with -e and the run() wrapper to no avail. This is pretty scary to me since it means we are silently ignoring errors. If you have a sec, would appreciate a quick look from a more experienced shell scripter. {noformat} - % mvn -f hadoop-dist/pom.xml package -Pdist ...or... - % dash target/dist-layout-stitching.sh {noformat} Sounds like we really need that dev-support refactor, or maven assembly. hadoop-dist dist-layout-stitching.sh does not work with dash Key: HADOOP-11885 URL: https://issues.apache.org/jira/browse/HADOOP-11885 Project: Hadoop Common Issue Type: Bug Components: build Affects Versions: 3.0.0 Reporter: Andrew Wang Attachments: hadoop-11885.001.patch Saw this while building the EC branch, pretty sure it'll repro on trunk though too. 
-- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HADOOP-10532) Jenkins test-patch timed out on a large patch touching files in multiple modules.
[ https://issues.apache.org/jira/browse/HADOOP-10532?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14518561#comment-14518561 ] Allen Wittenauer commented on HADOOP-10532: --- This is pretty much Working As Designed. Jenkins has a setting for how long a given patch run should be. If it goes over that value, it thinks the job is hung and kills it. The problem we've got is that hadoop-hdfs unit tests runs for an extremely long time. It would probably be well worth the effort to break it up from a single module to multiple modules, similarly to how yarn is currently designed. This would limit the possibility of running over that time for the vast majority of patches. We probably still couldn't run ALL of the unit tests, but those types of patches are extremely rare and/or can be broken up into multiple patches. Jenkins test-patch timed out on a large patch touching files in multiple modules. - Key: HADOOP-10532 URL: https://issues.apache.org/jira/browse/HADOOP-10532 Project: Hadoop Common Issue Type: Bug Components: build Affects Versions: 3.0.0 Reporter: Chris Nauroth Assignee: Jean-Pierre Matsumoto Attachments: PreCommit-HADOOP-Build-3821-consoleText.txt.gz On HADOOP-10503, I had posted a consolidated patch touching multiple files across all sub-modules: Hadoop, HDFS, YARN and MapReduce. The Jenkins test-patch runs for these consolidated patches timed out. I also experimented with a dummy patch that simply added one-line comment changes to files. This patch also timed out, which seems to indicate a bug in our automation rather than a problem with any patch in particular. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HADOOP-11813) releasedocmaker.py should use today's date instead of unreleased
[ https://issues.apache.org/jira/browse/HADOOP-11813?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14518387#comment-14518387 ] Allen Wittenauer commented on HADOOP-11813: --- Yeah, the escape of the asterisk completely broke the release notes that use asterisks as bullet points. releasedocmaker.py should use today's date instead of unreleased Key: HADOOP-11813 URL: https://issues.apache.org/jira/browse/HADOOP-11813 Project: Hadoop Common Issue Type: Task Components: build Affects Versions: 3.0.0 Reporter: Allen Wittenauer Assignee: Darrell Taylor Priority: Minor Labels: newbie Attachments: HADOOP-11813.patch After discussing with a few folks, it'd be more convenient if releasedocmaker used the current date rather than unreleased when processing a version that JIRA hasn't declared released. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Resolved] (HADOOP-9572) Enhance Pre-Commit Admin job to test-patch multiple branches
[ https://issues.apache.org/jira/browse/HADOOP-9572?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Allen Wittenauer resolved HADOOP-9572. -- Resolution: Fixed Closing this out as contained by HADOOP-11746 based upon the previous comments. Enhance Pre-Commit Admin job to test-patch multiple branches Key: HADOOP-9572 URL: https://issues.apache.org/jira/browse/HADOOP-9572 Project: Hadoop Common Issue Type: Bug Reporter: Giridharan Kesavan Assignee: Giridharan Kesavan Currently PreCommit-Admin job supports triggering PreCommit test jobs on trunk for a given project. This jira it to enhance the admin job to support running test-patch on any branches for a given project based on the uploaded patch name. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HADOOP-11885) hadoop-dist dist-layout-stitching.sh does not work with dash
[ https://issues.apache.org/jira/browse/HADOOP-11885?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14518310#comment-14518310 ] Allen Wittenauer commented on HADOOP-11885: --- Actually, I committed the last change but it wasn't my change... you'd know if it was mine. ;) Ideally, assuming we want to keep this as a separate script and not something that maven does itself using maven facilities (I think [~busbey] mentioned assemblies), it should really get pushed into dev-support and then rewritten to use modern shell practices, etc, etc. That way shellcheck could pick it up, etc. hadoop-dist dist-layout-stitching.sh does not work with dash Key: HADOOP-11885 URL: https://issues.apache.org/jira/browse/HADOOP-11885 Project: Hadoop Common Issue Type: Bug Components: build Affects Versions: 3.0.0 Reporter: Andrew Wang Saw this while building the EC branch, pretty sure it'll repro on trunk though too. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
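One way a "modern shell practices" rewrite could sidestep the `[` portability trap entirely is a POSIX `case` statement, which dash, bash, and shellcheck all handle cleanly. A hypothetical sketch only: the real dist-layout-stitching.sh body is not quoted in this thread, so the function name and directory list below are illustrative assumptions.

```shell
#!/usr/bin/env bash
# Illustrative rewrite of a "is this entry part of the dist layout?"
# check using 'case' instead of '[ ... == ... ]'. The pattern list is
# guessed from the directory names in the error dump above.
copy_if_layout_dir() {
  local entry=$1
  case "${entry}" in
    bin|etc|include|lib|libexec|sbin|share)
      printf 'would copy %s\n' "${entry}"
      ;;
    *)
      : # not part of the dist layout; skip quietly
      ;;
  esac
}

copy_if_layout_dir bin
copy_if_layout_dir target
```

The `case` form needs no quoting gymnastics, works identically under dash and bash, and gives shellcheck something it can fully analyze, which fits the suggestion above to move the script into dev-support.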
[jira] [Commented] (HADOOP-11886) Failed to run distcp against ftp server installed on Windows
[ https://issues.apache.org/jira/browse/HADOOP-11886?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14518597#comment-14518597 ] sam liu commented on HADOOP-11886: -- [Scenario 1] I installed a BI cluster using trunk build on HadoopNode1, and then could copy file from a ftp installed on Linux to hdfs using command: hadoop distcp ftp://user1:user1@9.185.68.201/home/user1/ftp.txt hdfs://HadoopNode1:9000/tmp/ [Scenario 2] [Success on FileZilla ftp server on Windows7]: [h...@hostname2.com ~]$ hadoop distcp ftp://ftp:f...@hostname1.com:121/ftp_test.txt /tmp/ 15/04/26 22:56:20 INFO tools.DistCp: Input Options: DistCpOptions{atomicCommit=false, syncFolder=false, deleteMissing=false, ignoreFailures=false, maxMaps=20, sslConfigurationFile='null', copyStrategy='uniformsize', sourceFileListing=null, sourcePaths=[ftp://ftp:f...@hostname1.com:121/ftp_test.txt], targetPath=/tmp, targetPathExists=true, preserveRawXattrs=false} 15/04/26 22:56:21 INFO impl.TimelineClientImpl: Timeline service address: http://hostname2.com:8188/ws/v1/timeline/ 15/04/26 22:56:21 INFO client.RMProxy: Connecting to ResourceManager at hostname2.com/9.32.249.181:8050 15/04/26 22:56:43 INFO impl.TimelineClientImpl: Timeline service address: http://hostname2.com:8188/ws/v1/timeline/ 15/04/26 22:56:43 INFO client.RMProxy: Connecting to ResourceManager at hostname2.com/9.32.249.181:8050 15/04/26 22:56:43 INFO mapreduce.JobSubmitter: number of splits:1 15/04/26 22:56:44 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1429858372957_0002 15/04/26 22:56:44 INFO impl.YarnClientImpl: Submitted application application_1429858372957_0002 15/04/26 22:56:44 INFO mapreduce.Job: The url to track the job: http://hostname2.com:8088/proxy/application_1429858372957_0002/ 15/04/26 22:56:44 INFO tools.DistCp: DistCp job-id: job_1429858372957_0002 15/04/26 22:56:44 INFO mapreduce.Job: Running job: job_1429858372957_0002 15/04/26 22:56:51 INFO mapreduce.Job: Job 
job_1429858372957_0002 running in uber mode : false 15/04/26 22:56:51 INFO mapreduce.Job: map 0% reduce 0% [Scenario 3] On the same hadoop node, I can copy file from a remote ftp server installed on Windows7 using command: wget ftp://Viewer:password1@9.126.148.79/ftp-win.txt. But I failed to copy file from a ftp installed on Windows7 to hdfs using command: [user1@HadoopNode1 ~]$ hadoop distcp ftp://Viewer:password1@9.126.148.79/ftp-win.txt /tmp/ 15/02/01 23:03:37 INFO tools.DistCp: Input Options: DistCpOptions{atomicCommit=false, syncFolder=false, deleteMissing=false, ignoreFailures=false, maxMaps=20, sslConfigurationFile='null', copyStrategy='uniformsize', sourceFileListing=null, sourcePaths=[ftp://Viewer:password1@9.126.148.79/ftp-win.txt], targetPath=/tmp, targetPathExists=true} 15/02/01 23:03:38 INFO client.RMProxy: Connecting to ResourceManager at HadoopNode1/9.30.239.166:8032 15/02/01 23:05:50 ERROR tools.DistCp: Exception encountered org.apache.commons.net.ftp.FTPConnectionClosedException: Connection closed without indication. 
at org.apache.commons.net.ftp.FTP.__getReply(FTP.java:313) at org.apache.commons.net.ftp.FTP.__getReply(FTP.java:290) at org.apache.commons.net.ftp.FTP.sendCommand(FTP.java:479) at org.apache.commons.net.ftp.FTP.sendCommand(FTP.java:552) at org.apache.commons.net.ftp.FTP.sendCommand(FTP.java:601) at org.apache.commons.net.ftp.FTP.quit(FTP.java:809) at org.apache.commons.net.ftp.FTPClient.logout(FTPClient.java:979) at org.apache.hadoop.fs.ftp.FTPFileSystem.disconnect(FTPFileSystem.java:151) at org.apache.hadoop.fs.ftp.FTPFileSystem.getFileStatus(FTPFileSystem.java:395) at org.apache.hadoop.fs.Globber.getFileStatus(Globber.java:57) at org.apache.hadoop.fs.Globber.glob(Globber.java:248) at org.apache.hadoop.fs.FileSystem.globStatus(FileSystem.java:1632) at org.apache.hadoop.tools.GlobbedCopyListing.doBuildListing(GlobbedCopyListing.java:77) at org.apache.hadoop.tools.CopyListing.buildListing(CopyListing.java:80) at org.apache.hadoop.tools.DistCp.createInputFileListing(DistCp.java:342) at org.apache.hadoop.tools.DistCp.execute(DistCp.java:154) at org.apache.hadoop.tools.DistCp.run(DistCp.java:121) at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70) at org.apache.hadoop.tools.DistCp.main(DistCp.java:390) Failed to run distcp against ftp server installed on Windows Key: HADOOP-11886 URL: https://issues.apache.org/jira/browse/HADOOP-11886 Project: Hadoop Common Issue Type: Bug Components: tools/distcp Reporter: sam liu Assignee: sam liu Priority: Blocker Could run distcp against ftp server installed on Linux, but could NOT run distcp
[jira] [Commented] (HADOOP-11870) [JDK8] AuthenticationFilter, CertificateUtil, SignerSecretProviders, KeyAuthorizationKeyProvider Javadoc issues
[ https://issues.apache.org/jira/browse/HADOOP-11870?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14516892#comment-14516892 ] Hudson commented on HADOOP-11870: - FAILURE: Integrated in Hadoop-Yarn-trunk-Java8 #177 (See [https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/177/]) HADOOP-11870. [JDK8] AuthenticationFilter, CertificateUtil, SignerSecretProviders, KeyAuthorizationKeyProvider Javadoc issues (rkanter) (rkanter: rev 9fec02c069f9bb24b5ee99031917075b4c7a7682) * hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/util/SignerSecretProvider.java * hadoop-common-project/hadoop-common/CHANGES.txt * hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/util/CertificateUtil.java * hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/server/AuthenticationFilter.java * hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/util/ZKSignerSecretProvider.java * hadoop-common-project/hadoop-kms/src/main/java/org/apache/hadoop/crypto/key/kms/server/KeyAuthorizationKeyProvider.java * hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/util/RolloverSignerSecretProvider.java [JDK8] AuthenticationFilter, CertificateUtil, SignerSecretProviders, KeyAuthorizationKeyProvider Javadoc issues --- Key: HADOOP-11870 URL: https://issues.apache.org/jira/browse/HADOOP-11870 Project: Hadoop Common Issue Type: Bug Components: build Affects Versions: 3.0.0 Environment: Jenkins on Java8 Reporter: Robert Kanter Assignee: Robert Kanter Fix For: 2.8.0 Attachments: HADOOP-11870.001.patch Jenkins on Java8 is failing due to a number of Javadoc violations that are now considered ERRORs in the following classes: - AuthenticationFilter.java - CertificateUtil.java - RolloverSignerSecretProvider.java - SignerSecretProvider.java - ZKSignerSecretProvider.java - 
KeyAuthorizationKeyProvider.java -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HADOOP-11828) Implement the Hitchhiker erasure coding algorithm
[ https://issues.apache.org/jira/browse/HADOOP-11828?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] jack liuquan updated HADOOP-11828: -- Attachment: HADOOP-11828-hitchhikerXOR-V4.zip Implement the Hitchhiker erasure coding algorithm - Key: HADOOP-11828 URL: https://issues.apache.org/jira/browse/HADOOP-11828 Project: Hadoop Common Issue Type: Sub-task Reporter: Zhe Zhang Assignee: jack liuquan Attachments: 7715-hitchhikerXOR-v2-testcode.patch, 7715-hitchhikerXOR-v2.patch, HADOOP-11828-hitchhikerXOR-V3.patch, HADOOP-11828-hitchhikerXOR-V4.zip, HDFS-7715-hhxor-decoder.patch, HDFS-7715-hhxor-encoder.patch [Hitchhiker | http://www.eecs.berkeley.edu/~nihar/publications/Hitchhiker_SIGCOMM14.pdf] is a new erasure coding algorithm developed as a research project at UC Berkeley. It has been shown to reduce network traffic and disk I/O by 25%-45% during data reconstruction. This JIRA aims to introduce Hitchhiker to the HDFS-EC framework, as one of the pluggable codec algorithms. The existing implementation is based on HDFS-RAID. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HADOOP-11715) azureFs::getFileStatus doesn't check the file system scheme and thus could throw a misleading exception.
[ https://issues.apache.org/jira/browse/HADOOP-11715?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14516978#comment-14516978 ] nijel commented on HADOOP-11715: Updated the patch to fix the whitespace issue and test failures. I did not see any details of the checkstyle comments. Can anyone guide me? Thanks in advance. azureFs::getFileStatus doesn't check the file system scheme and thus could throw a misleading exception. - Key: HADOOP-11715 URL: https://issues.apache.org/jira/browse/HADOOP-11715 Project: Hadoop Common Issue Type: Bug Components: fs Affects Versions: 2.7.0 Reporter: Brandon Li Assignee: nijel Fix For: 2.8.0 Attachments: HADOOP-11715.1.patch, HADOOP-11715.2.patch azureFs::getFileStatus doesn't check the file system scheme and thus could throw a misleading exception. For example, it complains filenotfound instead of wrong-fs for an hdfs path: Caused by: java.io.FileNotFoundException: hdfs://headnode0:9000/hive/scratch/hadoopqa/a7d34a22-57eb-4678-84b4-43d84027d45f/hive_2015-03-02_23-13-04_713_5722627238053417441-1/hadoopqa/_tez_scratch_dir/_tez_scratch_dir/split_Map_1/job.split: No such file or directory. at org.apache.hadoop.fs.azure.NativeAzureFileSystem.getFileStatus(NativeAzureFileSystem.java:1625) -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HADOOP-11844) SLS docs point to invalid rumen link
[ https://issues.apache.org/jira/browse/HADOOP-11844?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14517002#comment-14517002 ] Hadoop QA commented on HADOOP-11844: \\ \\ | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:red}-1{color} | patch | 0m 0s | The patch command could not apply the patch during dryrun. | \\ \\ || Subsystem || Report/Notes || | Patch URL | http://issues.apache.org/jira/secure/attachment/12728781/HADOOP-11844-branch-2.002.patch | | Optional Tests | site | | git revision | branch-2 / 1d03ac3 | | Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/6204/console | This message was automatically generated. SLS docs point to invalid rumen link Key: HADOOP-11844 URL: https://issues.apache.org/jira/browse/HADOOP-11844 Project: Hadoop Common Issue Type: Bug Components: documentation Affects Versions: 2.6.0 Reporter: Allen Wittenauer Assignee: J.Andreina Priority: Trivial Labels: newbie Attachments: HADOOP-11844-branch-2.002.patch, HADOOP-11844.1.patch SchedulerLoadSimulator at least on 2.6.0 points to an invalid link to rumen. Need to verify and potentially fix this link in newer releases. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HADOOP-11715) azureFs::getFileStatus doesn't check the file system scheme and thus could throw a misleading exception.
[ https://issues.apache.org/jira/browse/HADOOP-11715?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14516813#comment-14516813 ] Hadoop QA commented on HADOOP-11715: \\ \\ | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | pre-patch | 14m 38s | Pre-patch trunk compilation is healthy. | | {color:green}+1{color} | @author | 0m 0s | The patch does not contain any @author tags. | | {color:green}+1{color} | tests included | 0m 0s | The patch appears to include 1 new or modified test files. | | {color:red}-1{color} | whitespace | 0m 0s | The patch has 3 line(s) that end in whitespace. | | {color:green}+1{color} | javac | 7m 31s | There were no new javac warning messages. | | {color:green}+1{color} | javadoc | 9m 36s | There were no new javadoc warning messages. | | {color:green}+1{color} | release audit | 0m 24s | The applied patch does not increase the total number of release audit warnings. | | {color:red}-1{color} | checkstyle | 7m 46s | The applied patch generated 1 additional checkstyle issues. | | {color:green}+1{color} | install | 1m 31s | mvn install still works. | | {color:green}+1{color} | eclipse:eclipse | 0m 33s | The patch built with eclipse:eclipse. | | {color:green}+1{color} | findbugs | 0m 38s | The patch does not introduce any new Findbugs (version 2.0.3) warnings. | | {color:red}-1{color} | tools/hadoop tests | 0m 50s | Tests failed in hadoop-azure. 
| | | | 43m 36s | | \\ \\ || Reason || Tests || | Failed unit tests | hadoop.fs.azure.TestNativeAzureFileSystemMocked | | | hadoop.fs.azure.TestBlobMetadata | | | hadoop.fs.azure.TestNativeAzureFileSystemBlockLocations | | | hadoop.fs.azure.TestOutOfBandAzureBlobOperations | | | hadoop.fs.azure.TestNativeAzureFileSystemFileNameCheck | | | hadoop.fs.azure.TestWasbUriAndConfiguration | | | hadoop.fs.azure.TestWasbFsck | | | hadoop.fs.azure.TestNativeAzureFileSystemConcurrency | | | hadoop.fs.azure.TestNativeAzureFileSystemOperationsMocked | | | hadoop.fs.azure.TestNativeAzureFileSystemContractMocked | \\ \\ || Subsystem || Report/Notes || | Patch URL | http://issues.apache.org/jira/secure/attachment/12728752/HADOOP-11715.1.patch | | Optional Tests | javadoc javac unit findbugs checkstyle | | git revision | trunk / 99fe03e | | whitespace | https://builds.apache.org/job/PreCommit-HADOOP-Build/6202/artifact/patchprocess/whitespace.txt | | checkstyle | https://builds.apache.org/job/PreCommit-HADOOP-Build/6202/artifact/patchprocess/checkstyle-result-diff.txt | | hadoop-azure test log | https://builds.apache.org/job/PreCommit-HADOOP-Build/6202/artifact/patchprocess/testrun_hadoop-azure.txt | | Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/6202/testReport/ | | Java | 1.7.0_55 | | uname | Linux asf902.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux | | Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/6202/console | This message was automatically generated. azureFs::getFileStatus doesn't check the file system scheme and thus could throw a misleading exception. 
- Key: HADOOP-11715 URL: https://issues.apache.org/jira/browse/HADOOP-11715 Project: Hadoop Common Issue Type: Bug Components: fs Affects Versions: 2.7.0 Reporter: Brandon Li Assignee: nijel Fix For: 2.8.0 Attachments: HADOOP-11715.1.patch azureFs::getFileStatus doesn't check the file system scheme and thus could throw a misleading exception. For example, it complains filenotfound instead of wrong-fs for an hdfs path: Caused by: java.io.FileNotFoundException: hdfs://headnode0:9000/hive/scratch/hadoopqa/a7d34a22-57eb-4678-84b4-43d84027d45f/hive_2015-03-02_23-13-04_713_5722627238053417441-1/hadoopqa/_tez_scratch_dir/_tez_scratch_dir/split_Map_1/job.split: No such file or directory. at org.apache.hadoop.fs.azure.NativeAzureFileSystem.getFileStatus(NativeAzureFileSystem.java:1625) -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HADOOP-11870) [JDK8] AuthenticationFilter, CertificateUtil, SignerSecretProviders, KeyAuthorizationKeyProvider Javadoc issues
[ https://issues.apache.org/jira/browse/HADOOP-11870?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14516862#comment-14516862 ] Hudson commented on HADOOP-11870: - FAILURE: Integrated in Hadoop-Hdfs-trunk #2109 (See [https://builds.apache.org/job/Hadoop-Hdfs-trunk/2109/]) HADOOP-11870. [JDK8] AuthenticationFilter, CertificateUtil, SignerSecretProviders, KeyAuthorizationKeyProvider Javadoc issues (rkanter) (rkanter: rev 9fec02c069f9bb24b5ee99031917075b4c7a7682) * hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/util/SignerSecretProvider.java * hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/util/RolloverSignerSecretProvider.java * hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/util/CertificateUtil.java * hadoop-common-project/hadoop-kms/src/main/java/org/apache/hadoop/crypto/key/kms/server/KeyAuthorizationKeyProvider.java * hadoop-common-project/hadoop-common/CHANGES.txt * hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/server/AuthenticationFilter.java * hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/util/ZKSignerSecretProvider.java [JDK8] AuthenticationFilter, CertificateUtil, SignerSecretProviders, KeyAuthorizationKeyProvider Javadoc issues --- Key: HADOOP-11870 URL: https://issues.apache.org/jira/browse/HADOOP-11870 Project: Hadoop Common Issue Type: Bug Components: build Affects Versions: 3.0.0 Environment: Jenkins on Java8 Reporter: Robert Kanter Assignee: Robert Kanter Fix For: 2.8.0 Attachments: HADOOP-11870.001.patch Jenkins on Java8 is failing due to a number of Javadoc violations that are now considered ERRORs in the following classes: - AuthenticationFilter.java - CertificateUtil.java - RolloverSignerSecretProvider.java - SignerSecretProvider.java - ZKSignerSecretProvider.java - 
KeyAuthorizationKeyProvider.java -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HADOOP-11844) SLS docs point to invalid rumen link
[ https://issues.apache.org/jira/browse/HADOOP-11844?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] J.Andreina updated HADOOP-11844: Attachment: HADOOP-11844-branch-2.002.patch Thanks [~aw] for clarifying. I have updated the patch name. bq.Also, is this broken in trunk as well? We need to check. Verified in trunk; it is not broken. Please review. SLS docs point to invalid rumen link Key: HADOOP-11844 URL: https://issues.apache.org/jira/browse/HADOOP-11844 Project: Hadoop Common Issue Type: Bug Components: documentation Affects Versions: 2.6.0 Reporter: Allen Wittenauer Assignee: J.Andreina Priority: Trivial Labels: newbie Attachments: HADOOP-11844-branch-2.002.patch, HADOOP-11844.1.patch SchedulerLoadSimulator at least on 2.6.0 points to an invalid link to rumen. Need to verify and potentially fix this link in newer releases. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HADOOP-11715) azureFs::getFileStatus doesn't check the file system scheme and thus could throw a misleading exception.
[ https://issues.apache.org/jira/browse/HADOOP-11715?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] nijel updated HADOOP-11715: --- Attachment: HADOOP-11715.2.patch azureFs::getFileStatus doesn't check the file system scheme and thus could throw a misleading exception. - Key: HADOOP-11715 URL: https://issues.apache.org/jira/browse/HADOOP-11715 Project: Hadoop Common Issue Type: Bug Components: fs Affects Versions: 2.7.0 Reporter: Brandon Li Assignee: nijel Fix For: 2.8.0 Attachments: HADOOP-11715.1.patch, HADOOP-11715.2.patch azureFs::getFileStatus doesn't check the file system scheme and thus could throw a misleading exception. For example, it complains filenotfound instead of wrong-fs for an hdfs path: Caused by: java.io.FileNotFoundException: hdfs://headnode0:9000/hive/scratch/hadoopqa/a7d34a22-57eb-4678-84b4-43d84027d45f/hive_2015-03-02_23-13-04_713_5722627238053417441-1/hadoopqa/_tez_scratch_dir/_tez_scratch_dir/split_Map_1/job.split: No such file or directory. at org.apache.hadoop.fs.azure.NativeAzureFileSystem.getFileStatus(NativeAzureFileSystem.java:1625) -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HADOOP-11870) [JDK8] AuthenticationFilter, CertificateUtil, SignerSecretProviders, KeyAuthorizationKeyProvider Javadoc issues
[ https://issues.apache.org/jira/browse/HADOOP-11870?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14516914#comment-14516914 ] Hudson commented on HADOOP-11870: - FAILURE: Integrated in Hadoop-Yarn-trunk #911 (See [https://builds.apache.org/job/Hadoop-Yarn-trunk/911/]) HADOOP-11870. [JDK8] AuthenticationFilter, CertificateUtil, SignerSecretProviders, KeyAuthorizationKeyProvider Javadoc issues (rkanter) (rkanter: rev 9fec02c069f9bb24b5ee99031917075b4c7a7682) * hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/util/RolloverSignerSecretProvider.java * hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/util/SignerSecretProvider.java * hadoop-common-project/hadoop-common/CHANGES.txt * hadoop-common-project/hadoop-kms/src/main/java/org/apache/hadoop/crypto/key/kms/server/KeyAuthorizationKeyProvider.java * hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/server/AuthenticationFilter.java * hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/util/ZKSignerSecretProvider.java * hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/util/CertificateUtil.java [JDK8] AuthenticationFilter, CertificateUtil, SignerSecretProviders, KeyAuthorizationKeyProvider Javadoc issues --- Key: HADOOP-11870 URL: https://issues.apache.org/jira/browse/HADOOP-11870 Project: Hadoop Common Issue Type: Bug Components: build Affects Versions: 3.0.0 Environment: Jenkins on Java8 Reporter: Robert Kanter Assignee: Robert Kanter Fix For: 2.8.0 Attachments: HADOOP-11870.001.patch Jenkins on Java8 is failing due to a number of Javadoc violations that are now considered ERRORs in the following classes: - AuthenticationFilter.java - CertificateUtil.java - RolloverSignerSecretProvider.java - SignerSecretProvider.java - ZKSignerSecretProvider.java - KeyAuthorizationKeyProvider.java 
-- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HADOOP-11870) [JDK8] AuthenticationFilter, CertificateUtil, SignerSecretProviders, KeyAuthorizationKeyProvider Javadoc issues
[ https://issues.apache.org/jira/browse/HADOOP-11870?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14516876#comment-14516876 ] Hudson commented on HADOOP-11870: - FAILURE: Integrated in Hadoop-Hdfs-trunk-Java8 #168 (See [https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/168/]) HADOOP-11870. [JDK8] AuthenticationFilter, CertificateUtil, SignerSecretProviders, KeyAuthorizationKeyProvider Javadoc issues (rkanter) (rkanter: rev 9fec02c069f9bb24b5ee99031917075b4c7a7682) * hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/util/CertificateUtil.java * hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/util/ZKSignerSecretProvider.java * hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/util/SignerSecretProvider.java * hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/server/AuthenticationFilter.java * hadoop-common-project/hadoop-kms/src/main/java/org/apache/hadoop/crypto/key/kms/server/KeyAuthorizationKeyProvider.java * hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/util/RolloverSignerSecretProvider.java * hadoop-common-project/hadoop-common/CHANGES.txt [JDK8] AuthenticationFilter, CertificateUtil, SignerSecretProviders, KeyAuthorizationKeyProvider Javadoc issues --- Key: HADOOP-11870 URL: https://issues.apache.org/jira/browse/HADOOP-11870 Project: Hadoop Common Issue Type: Bug Components: build Affects Versions: 3.0.0 Environment: Jenkins on Java8 Reporter: Robert Kanter Assignee: Robert Kanter Fix For: 2.8.0 Attachments: HADOOP-11870.001.patch Jenkins on Java8 is failing due to a number of Javadoc violations that are now considered ERRORs in the following classes: - AuthenticationFilter.java - CertificateUtil.java - RolloverSignerSecretProvider.java - SignerSecretProvider.java - ZKSignerSecretProvider.java - 
KeyAuthorizationKeyProvider.java -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HADOOP-11885) hadoop-dist dist-layout-stitching.sh does not work with dash
[ https://issues.apache.org/jira/browse/HADOOP-11885?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14518279#comment-14518279 ] Andrew Wang commented on HADOOP-11885: -- I see this line in the pom.xml: {noformat} <exec executable="sh" dir="${project.build.directory}" failonerror="true"> <arg line="./dist-layout-stitching.sh"/> </exec> {noformat} sh is dash by default on Ubuntu. I tried invoking the script directly with dash and got the above errors; with bash, it worked. The fix might be as easy as changing sh to bash. hadoop-dist dist-layout-stitching.sh does not work with dash Key: HADOOP-11885 URL: https://issues.apache.org/jira/browse/HADOOP-11885 Project: Hadoop Common Issue Type: Bug Components: build Affects Versions: 3.0.0 Reporter: Andrew Wang Saw this while building the EC branch; pretty sure it'll repro on trunk too. {noformat} [exec] ./dist-layout-stitching.sh: 53: [: bin: unexpected operator [exec] ./dist-layout-stitching.sh: 53: [: etc: unexpected operator [exec] ./dist-layout-stitching.sh: 53: [: lib: unexpected operator [exec] ./dist-layout-stitching.sh: 53: [: libexec: unexpected operator [exec] ./dist-layout-stitching.sh: 53: [: sbin: unexpected operator [exec] ./dist-layout-stitching.sh: 53: [: share: unexpected operator [exec] ./dist-layout-stitching.sh: 53: [: share: unexpected operator [exec] ./dist-layout-stitching.sh: 53: [: bin: unexpected operator [exec] ./dist-layout-stitching.sh: 53: [: etc: unexpected operator [exec] ./dist-layout-stitching.sh: 53: [: include: unexpected operator [exec] ./dist-layout-stitching.sh: 53: [: lib: unexpected operator [exec] ./dist-layout-stitching.sh: 53: [: libexec: unexpected operator [exec] ./dist-layout-stitching.sh: 53: [: sbin: unexpected operator [exec] ./dist-layout-stitching.sh: 53: [: share: unexpected operator [exec] $ copy /home/andrew/dev/hadoop/hdfs-7285/hadoop-common-project/hadoop-nfs/target/hadoop-nfs-3.0.0-SNAPSHOT . 
[exec] $ copy /home/andrew/dev/hadoop/hdfs-7285/hadoop-hdfs-project/hadoop-hdfs/target/hadoop-hdfs-3.0.0-SNAPSHOT . [exec] $ copy /home/andrew/dev/hadoop/hdfs-7285/hadoop-hdfs-project/hadoop-hdfs-nfs/target/hadoop-hdfs-nfs-3.0.0-SNAPSHOT . [exec] $ copy /home/andrew/dev/hadoop/hdfs-7285/hadoop-yarn-project/target/hadoop-yarn-project-3.0.0-SNAPSHOT . [exec] ./dist-layout-stitching.sh: 53: [: share: unexpected operator [exec] ./dist-layout-stitching.sh: 53: [: bin: unexpected operator [exec] ./dist-layout-stitching.sh: 53: [: etc: unexpected operator [exec] ./dist-layout-stitching.sh: 53: [: libexec: unexpected operator [exec] ./dist-layout-stitching.sh: 53: [: sbin: unexpected operator [exec] $ copy /home/andrew/dev/hadoop/hdfs-7285/hadoop-mapreduce-project/target/hadoop-mapreduce-3.0.0-SNAPSHOT . [exec] $ copy /home/andrew/dev/hadoop/hdfs-7285/hadoop-tools/hadoop-tools-dist/target/hadoop-tools-dist-3.0.0-SNAPSHOT . [exec] ./dist-layout-stitching.sh: 53: [: share: unexpected operator [exec] ./dist-layout-stitching.sh: 53: [: bin: unexpected operator [exec] ./dist-layout-stitching.sh: 53: [: etc: unexpected operator [exec] ./dist-layout-stitching.sh: 53: [: libexec: unexpected operator [exec] ./dist-layout-stitching.sh: 53: [: sbin: unexpected operator [exec] ./dist-layout-stitching.sh: 53: [: share: unexpected operator [exec] ./dist-layout-stitching.sh: 53: [: include: unexpected operator [exec] ./dist-layout-stitching.sh: 53: [: share: unexpected operator {noformat} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
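The `unexpected operator` messages above are dash's complaint about a bashism: `==` inside `[ ]` is bash-only, while POSIX `test` spells string equality `=`. A minimal sketch of the difference; `dir` is a hypothetical stand-in for whatever dist-layout-stitching.sh actually tests on its line 53:

```shell
# dash rejects '==' inside single brackets; '=' is the POSIX spelling.
dir="share"

# Bashism -- under dash this prints "[: share: unexpected operator":
#   [ "$dir" == "share" ] && echo "matched"

# POSIX-compliant form, accepted by dash and bash alike:
if [ "$dir" = "share" ]; then
  echo "matched"
fi
```

The alternative fix suggested in the comment (pointing the pom's exec at bash instead of sh) avoids rewriting the script but ties the build to having bash installed.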
[jira] [Created] (HADOOP-11885) hadoop-dist dist-layout-stitching.sh does not work with dash
Andrew Wang created HADOOP-11885: Summary: hadoop-dist dist-layout-stitching.sh does not work with dash Key: HADOOP-11885 URL: https://issues.apache.org/jira/browse/HADOOP-11885 Project: Hadoop Common Issue Type: Bug Components: build Affects Versions: 3.0.0 Reporter: Andrew Wang Saw this while building the EC branch, pretty sure it'll repro on trunk though too. {noformat} [exec] ./dist-layout-stitching.sh: 53: [: bin: unexpected operator [exec] ./dist-layout-stitching.sh: 53: [: etc: unexpected operator [exec] ./dist-layout-stitching.sh: 53: [: lib: unexpected operator [exec] ./dist-layout-stitching.sh: 53: [: libexec: unexpected operator [exec] ./dist-layout-stitching.sh: 53: [: sbin: unexpected operator [exec] ./dist-layout-stitching.sh: 53: [: share: unexpected operator [exec] ./dist-layout-stitching.sh: 53: [: share: unexpected operator [exec] ./dist-layout-stitching.sh: 53: [: bin: unexpected operator [exec] ./dist-layout-stitching.sh: 53: [: etc: unexpected operator [exec] ./dist-layout-stitching.sh: 53: [: include: unexpected operator [exec] ./dist-layout-stitching.sh: 53: [: lib: unexpected operator [exec] ./dist-layout-stitching.sh: 53: [: libexec: unexpected operator [exec] ./dist-layout-stitching.sh: 53: [: sbin: unexpected operator [exec] ./dist-layout-stitching.sh: 53: [: share: unexpected operator [exec] $ copy /home/andrew/dev/hadoop/hdfs-7285/hadoop-common-project/hadoop-nfs/target/hadoop-nfs-3.0.0-SNAPSHOT . [exec] $ copy /home/andrew/dev/hadoop/hdfs-7285/hadoop-hdfs-project/hadoop-hdfs/target/hadoop-hdfs-3.0.0-SNAPSHOT . [exec] $ copy /home/andrew/dev/hadoop/hdfs-7285/hadoop-hdfs-project/hadoop-hdfs-nfs/target/hadoop-hdfs-nfs-3.0.0-SNAPSHOT . [exec] $ copy /home/andrew/dev/hadoop/hdfs-7285/hadoop-yarn-project/target/hadoop-yarn-project-3.0.0-SNAPSHOT . 
[exec] ./dist-layout-stitching.sh: 53: [: share: unexpected operator [exec] ./dist-layout-stitching.sh: 53: [: bin: unexpected operator [exec] ./dist-layout-stitching.sh: 53: [: etc: unexpected operator [exec] ./dist-layout-stitching.sh: 53: [: libexec: unexpected operator [exec] ./dist-layout-stitching.sh: 53: [: sbin: unexpected operator [exec] $ copy /home/andrew/dev/hadoop/hdfs-7285/hadoop-mapreduce-project/target/hadoop-mapreduce-3.0.0-SNAPSHOT . [exec] $ copy /home/andrew/dev/hadoop/hdfs-7285/hadoop-tools/hadoop-tools-dist/target/hadoop-tools-dist-3.0.0-SNAPSHOT . [exec] ./dist-layout-stitching.sh: 53: [: share: unexpected operator [exec] ./dist-layout-stitching.sh: 53: [: bin: unexpected operator [exec] ./dist-layout-stitching.sh: 53: [: etc: unexpected operator [exec] ./dist-layout-stitching.sh: 53: [: libexec: unexpected operator [exec] ./dist-layout-stitching.sh: 53: [: sbin: unexpected operator [exec] ./dist-layout-stitching.sh: 53: [: share: unexpected operator [exec] ./dist-layout-stitching.sh: 53: [: include: unexpected operator [exec] ./dist-layout-stitching.sh: 53: [: share: unexpected operator {noformat} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HADOOP-11870) [JDK8] AuthenticationFilter, CertificateUtil, SignerSecretProviders, KeyAuthorizationKeyProvider Javadoc issues
[ https://issues.apache.org/jira/browse/HADOOP-11870?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14517236#comment-14517236 ] Hudson commented on HADOOP-11870: - SUCCESS: Integrated in Hadoop-Mapreduce-trunk #2127 (See [https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2127/]) HADOOP-11870. [JDK8] AuthenticationFilter, CertificateUtil, SignerSecretProviders, KeyAuthorizationKeyProvider Javadoc issues (rkanter) (rkanter: rev 9fec02c069f9bb24b5ee99031917075b4c7a7682) * hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/server/AuthenticationFilter.java * hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/util/RolloverSignerSecretProvider.java * hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/util/SignerSecretProvider.java * hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/util/CertificateUtil.java * hadoop-common-project/hadoop-common/CHANGES.txt * hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/util/ZKSignerSecretProvider.java * hadoop-common-project/hadoop-kms/src/main/java/org/apache/hadoop/crypto/key/kms/server/KeyAuthorizationKeyProvider.java [JDK8] AuthenticationFilter, CertificateUtil, SignerSecretProviders, KeyAuthorizationKeyProvider Javadoc issues --- Key: HADOOP-11870 URL: https://issues.apache.org/jira/browse/HADOOP-11870 Project: Hadoop Common Issue Type: Bug Components: build Affects Versions: 3.0.0 Environment: Jenkins on Java8 Reporter: Robert Kanter Assignee: Robert Kanter Fix For: 2.8.0 Attachments: HADOOP-11870.001.patch Jenkins on Java8 is failing due to a number of Javadoc violations that are now considered ERRORs in the following classes: - AuthenticationFilter.java - CertificateUtil.java - RolloverSignerSecretProvider.java - SignerSecretProvider.java - ZKSignerSecretProvider.java - 
KeyAuthorizationKeyProvider.java -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HADOOP-11828) Implement the Hitchhiker erasure coding algorithm
[ https://issues.apache.org/jira/browse/HADOOP-11828?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] jack liuquan updated HADOOP-11828: -- Attachment: HADOOP-11828-hitchhikerXOR-V4.patch Hi Kai, I have uploaded a new patch covering your comments. Please review, thank you! Implement the Hitchhiker erasure coding algorithm - Key: HADOOP-11828 URL: https://issues.apache.org/jira/browse/HADOOP-11828 Project: Hadoop Common Issue Type: Sub-task Reporter: Zhe Zhang Assignee: jack liuquan Attachments: 7715-hitchhikerXOR-v2-testcode.patch, 7715-hitchhikerXOR-v2.patch, HADOOP-11828-hitchhikerXOR-V3.patch, HADOOP-11828-hitchhikerXOR-V4.patch, HDFS-7715-hhxor-decoder.patch, HDFS-7715-hhxor-encoder.patch [Hitchhiker | http://www.eecs.berkeley.edu/~nihar/publications/Hitchhiker_SIGCOMM14.pdf] is a new erasure coding algorithm developed as a research project at UC Berkeley. It has been shown to reduce network traffic and disk I/O by 25%-45% during data reconstruction. This JIRA aims to introduce Hitchhiker to the HDFS-EC framework, as one of the pluggable codec algorithms. The existing implementation is based on HDFS-RAID. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HADOOP-11847) Enhance raw coder allowing to read least required inputs in decoding
[ https://issues.apache.org/jira/browse/HADOOP-11847?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14517142#comment-14517142 ] Zhe Zhang commented on HADOOP-11847: bq. erasedIndexes - erasedIndices I did a search and found the same thing. So let's keep the current names. Enhance raw coder allowing to read least required inputs in decoding Key: HADOOP-11847 URL: https://issues.apache.org/jira/browse/HADOOP-11847 Project: Hadoop Common Issue Type: Sub-task Components: io Reporter: Kai Zheng Assignee: Kai Zheng Attachments: HADOOP-11847-v1.patch, HADOOP-11847-v2.patch This is to enhance the raw erasure coder to allow reading only the least required inputs while decoding. It will also refine and document the relevant APIs for better understanding and usage. Using the least required inputs may add computing overhead but will possibly outperform overall, since less network traffic and disk I/O are involved. This was planned work, but I was just reminded of it by [~zhz]'s question raised in HDFS-7678, also copied here: bq.Kai Zheng I have a question about decoding: in a (6+3) schema, if block #2 is missing, and I want to repair it with blocks 0, 1, 3, 4, 5, 8, how should I construct the inputs to RawErasureDecoder#decode? With this work, hopefully the answer to the above question will be obvious. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
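To make [~zhz]'s (6+3) question concrete, here is a hedged sketch of one plausible input layout: one slot per unit in codec order, with null for anything not supplied. This illustrates the convention under discussion, not the actual RawErasureDecoder API, and the class and array names below are made up for the example:

```java
// Hypothetical illustration of the decode-input convention for a
// (6 data + 3 parity) schema where block #2 is missing and blocks
// 0, 1, 3, 4, 5, 8 are available.
public class DecodeInputsSketch {
    public static void main(String[] args) {
        byte[][] inputs = new byte[9][];          // units 0-5 data, 6-8 parity
        for (int i : new int[] {0, 1, 3, 4, 5, 8}) {
            inputs[i] = new byte[] {(byte) i};    // dummy payload per unit
        }
        // Slot 2 is the erased unit to reconstruct. Slots 6 and 7 are left
        // null but are not "erased": they are simply not read, which is the
        // "least required inputs" behavior this issue proposes.
        int[] erasedIndexes = {2};
        System.out.println("repair unit " + erasedIndexes[0]
            + " from units 0,1,3,4,5,8; units 6,7 unread");
    }
}
```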
[jira] [Updated] (HADOOP-11828) Implement the Hitchhiker erasure coding algorithm
[ https://issues.apache.org/jira/browse/HADOOP-11828?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] jack liuquan updated HADOOP-11828: -- Attachment: (was: HADOOP-11828-hitchhikerXOR-V4.zip) Implement the Hitchhiker erasure coding algorithm - Key: HADOOP-11828 URL: https://issues.apache.org/jira/browse/HADOOP-11828 Project: Hadoop Common Issue Type: Sub-task Reporter: Zhe Zhang Assignee: jack liuquan Attachments: 7715-hitchhikerXOR-v2-testcode.patch, 7715-hitchhikerXOR-v2.patch, HADOOP-11828-hitchhikerXOR-V3.patch, HADOOP-11828-hitchhikerXOR-V4.patch, HDFS-7715-hhxor-decoder.patch, HDFS-7715-hhxor-encoder.patch [Hitchhiker | http://www.eecs.berkeley.edu/~nihar/publications/Hitchhiker_SIGCOMM14.pdf] is a new erasure coding algorithm developed as a research project at UC Berkeley. It has been shown to reduce network traffic and disk I/O by 25%-45% during data reconstruction. This JIRA aims to introduce Hitchhiker to the HDFS-EC framework, as one of the pluggable codec algorithms. The existing implementation is based on HDFS-RAID. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HADOOP-11870) [JDK8] AuthenticationFilter, CertificateUtil, SignerSecretProviders, KeyAuthorizationKeyProvider Javadoc issues
[ https://issues.apache.org/jira/browse/HADOOP-11870?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14517168#comment-14517168 ] Hudson commented on HADOOP-11870: - FAILURE: Integrated in Hadoop-Mapreduce-trunk-Java8 #178 (See [https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/178/]) HADOOP-11870. [JDK8] AuthenticationFilter, CertificateUtil, SignerSecretProviders, KeyAuthorizationKeyProvider Javadoc issues (rkanter) (rkanter: rev 9fec02c069f9bb24b5ee99031917075b4c7a7682) * hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/util/RolloverSignerSecretProvider.java * hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/util/CertificateUtil.java * hadoop-common-project/hadoop-kms/src/main/java/org/apache/hadoop/crypto/key/kms/server/KeyAuthorizationKeyProvider.java * hadoop-common-project/hadoop-common/CHANGES.txt * hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/util/ZKSignerSecretProvider.java * hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/util/SignerSecretProvider.java * hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/server/AuthenticationFilter.java [JDK8] AuthenticationFilter, CertificateUtil, SignerSecretProviders, KeyAuthorizationKeyProvider Javadoc issues --- Key: HADOOP-11870 URL: https://issues.apache.org/jira/browse/HADOOP-11870 Project: Hadoop Common Issue Type: Bug Components: build Affects Versions: 3.0.0 Environment: Jenkins on Java8 Reporter: Robert Kanter Assignee: Robert Kanter Fix For: 2.8.0 Attachments: HADOOP-11870.001.patch Jenkins on Java8 is failing due to a number of Javadoc violations that are now considered ERRORs in the following classes: - AuthenticationFilter.java - CertificateUtil.java - RolloverSignerSecretProvider.java - SignerSecretProvider.java - ZKSignerSecretProvider.java - 
KeyAuthorizationKeyProvider.java -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HADOOP-11594) Improve the readability of site index of documentation
[ https://issues.apache.org/jira/browse/HADOOP-11594?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Masatake Iwasaki updated HADOOP-11594: -- Attachment: HADOOP-11594.003.patch I attached a rebased patch. Improve the readability of site index of documentation -- Key: HADOOP-11594 URL: https://issues.apache.org/jira/browse/HADOOP-11594 Project: Hadoop Common Issue Type: Improvement Components: documentation Reporter: Masatake Iwasaki Assignee: Masatake Iwasaki Priority: Minor Attachments: HADOOP-11594.001.patch, HADOOP-11594.002.patch, HADOOP-11594.003.patch * change the order of items * make redundant titles shorter and fit them on a single line as far as possible -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HADOOP-11821) Fix findbugs warnings in hadoop-sls
[ https://issues.apache.org/jira/browse/HADOOP-11821?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14518673#comment-14518673 ] Hadoop QA commented on HADOOP-11821: \\ \\ | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | pre-patch | 14m 52s | Pre-patch trunk compilation is healthy. | | {color:green}+1{color} | @author | 0m 0s | The patch does not contain any @author tags. | | {color:red}-1{color} | tests included | 0m 0s | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. | | {color:red}-1{color} | whitespace | 0m 0s | The patch has 4 line(s) that end in whitespace. | | {color:green}+1{color} | javac | 7m 33s | There were no new javac warning messages. | | {color:green}+1{color} | javadoc | 9m 41s | There were no new javadoc warning messages. | | {color:green}+1{color} | release audit | 0m 23s | The applied patch does not increase the total number of release audit warnings. | | {color:red}-1{color} | checkstyle | 5m 24s | The applied patch generated 2 additional checkstyle issues. | | {color:green}+1{color} | install | 1m 32s | mvn install still works. | | {color:green}+1{color} | eclipse:eclipse | 0m 33s | The patch built with eclipse:eclipse. | | {color:green}+1{color} | findbugs | 0m 40s | The patch does not introduce any new Findbugs (version 2.0.3) warnings. | | {color:green}+1{color} | tools/hadoop tests | 0m 52s | Tests passed in hadoop-sls. 
| | | | 41m 39s | | \\ \\ || Subsystem || Report/Notes || | Patch URL | http://issues.apache.org/jira/secure/attachment/12729025/HADOOP-11821-005.patch | | Optional Tests | javadoc javac unit findbugs checkstyle | | git revision | trunk / 439614b | | whitespace | https://builds.apache.org/job/PreCommit-HADOOP-Build/6210/artifact/patchprocess/whitespace.txt | | checkstyle | https://builds.apache.org/job/PreCommit-HADOOP-Build/6210/artifact/patchprocess/checkstyle-result-diff.txt | | hadoop-sls test log | https://builds.apache.org/job/PreCommit-HADOOP-Build/6210/artifact/patchprocess/testrun_hadoop-sls.txt | | Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/6210/testReport/ | | Java | 1.7.0_55 | | uname | Linux asf904.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux | | Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/6210/console | This message was automatically generated. Fix findbugs warnings in hadoop-sls --- Key: HADOOP-11821 URL: https://issues.apache.org/jira/browse/HADOOP-11821 Project: Hadoop Common Issue Type: Bug Components: tools Reporter: Akira AJISAKA Assignee: Brahma Reddy Battula Attachments: HADOOP-11821-001.patch, HADOOP-11821-002.patch, HADOOP-11821-003.patch, HADOOP-11821-004.patch, HADOOP-11821-005.patch, HADOOP-11821.patch Per https://builds.apache.org/job/PreCommit-MAPREDUCE-Build/5388//artifact/patchprocess/newPatchFindbugsWarningshadoop-sls.html, there are 13 warnings to be fixed. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HADOOP-11866) increase readability of the output of white space and checkstyle script
[ https://issues.apache.org/jira/browse/HADOOP-11866?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14518655#comment-14518655 ] Naganarasimha G R commented on HADOOP-11866: Hi [~jeagles] [~aw], can we get this patch in and address other checkstyle issues in either HADOOP-11869 or HADOOP-11778? increase readability of the output of white space and checkstyle script --- Key: HADOOP-11866 URL: https://issues.apache.org/jira/browse/HADOOP-11866 Project: Hadoop Common Issue Type: Bug Reporter: Naganarasimha G R Assignee: Naganarasimha G R Priority: Minor Attachments: HADOOP-11866-checkstyle.patch, HADOOP-11866.20150422-1.patch, HADOOP-11866.20150423-1.patch, HADOOP-11866.20150427-1.patch HADOOP-11746 supports listing the lines that have trailing white space but doesn't report the patch line number. Without that, the report output is not of much help, as in most cases it reports blank lines. Also, for first-timers it would be difficult to understand the output of the checkstyle script, hence adding a header. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HADOOP-11881) test-patch.sh javac result is wildly wrong
[ https://issues.apache.org/jira/browse/HADOOP-11881?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kengo Seki updated HADOOP-11881: Assignee: Kengo Seki Status: Patch Available (was: Open) test-patch.sh javac result is wildly wrong -- Key: HADOOP-11881 URL: https://issues.apache.org/jira/browse/HADOOP-11881 Project: Hadoop Common Issue Type: Test Components: build, test Reporter: Allen Wittenauer Assignee: Kengo Seki Labels: newbie Attachments: HADOOP-11881.001.patch The summary report appears to list the total amount of javac warnings, not the amount of new ones. See MAPREDUCE-6192 as an example. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HADOOP-11881) test-patch.sh javac result is wildly wrong
[ https://issues.apache.org/jira/browse/HADOOP-11881?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kengo Seki updated HADOOP-11881: Attachment: HADOOP-11881.001.patch Attaching a patch. I applied MAPREDUCE-6192.006.patch and confirmed that the number of javac warnings in the summary report was the same as the number of differences that appeared in diffJavacWarnings.txt. test-patch.sh javac result is wildly wrong -- Key: HADOOP-11881 URL: https://issues.apache.org/jira/browse/HADOOP-11881 Project: Hadoop Common Issue Type: Test Components: build, test Reporter: Allen Wittenauer Labels: newbie Attachments: HADOOP-11881.001.patch The summary report appears to list the total number of javac warnings, not the number of new ones. See MAPREDUCE-6192 as an example. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
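The fix being verified amounts to counting only warnings that are new relative to trunk, not the total. A hedged sketch of that arithmetic; the file names are stand-ins, not the real test-patch.sh internals:

```shell
# Count only warnings added by the patch: lines present after the
# patch but absent before (file names are hypothetical).
printf 'A\nB\n'       > before.txt   # pre-patch javac warnings
printf 'A\nB\nC\nD\n' > after.txt    # post-patch javac warnings

# Lines unique to after.txt are prefixed '>' in diff's normal output.
new_warnings=$(diff before.txt after.txt | grep -c '^>')
echo "$new_warnings"   # 2 new warnings, not the total of 4
```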
[jira] [Commented] (HADOOP-11821) Fix findbugs warnings in hadoop-sls
[ https://issues.apache.org/jira/browse/HADOOP-11821?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14517318#comment-14517318 ] Brahma Reddy Battula commented on HADOOP-11821: --- {quote}Would you render the line within 80 characters?{quote} I think we can ignore this, since this variable (even for packages)... Please let me know your opinion. Fix findbugs warnings in hadoop-sls --- Key: HADOOP-11821 URL: https://issues.apache.org/jira/browse/HADOOP-11821 Project: Hadoop Common Issue Type: Bug Components: tools Reporter: Akira AJISAKA Assignee: Brahma Reddy Battula Attachments: HADOOP-11821-001.patch, HADOOP-11821-002.patch, HADOOP-11821-003.patch, HADOOP-11821-004.patch, HADOOP-11821.patch Per https://builds.apache.org/job/PreCommit-MAPREDUCE-Build/5388//artifact/patchprocess/newPatchFindbugsWarningshadoop-sls.html, there are 13 warnings to be fixed. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (HADOOP-11884) test-patch.sh should pull the real findbugs version
Allen Wittenauer created HADOOP-11884: - Summary: test-patch.sh should pull the real findbugs version Key: HADOOP-11884 URL: https://issues.apache.org/jira/browse/HADOOP-11884 Project: Hadoop Common Issue Type: Improvement Components: test Reporter: Allen Wittenauer test-patch.sh currently uses the CLI utilities for findbugs to discover the version. This isn't really accurate since maven pulls down the jars as part of the pom. It should be possible to either read the generated HTML file(s) or perhaps read the pom to discover the real version that was used. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HADOOP-11830) resolved
[ https://issues.apache.org/jira/browse/HADOOP-11830?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ankush updated HADOOP-11830: Description: resolved (was: While for configuring Netezza Drivers on Access nodes getting below bugs. please suggest on this. ERROR sqoop.Sqoop: Got exception running Sqoop: java.lang.RuntimeException: Could not load db driver class: org.netezza.Driver java.lang.RuntimeException: Could not load db driver class: org.netezza.Driver at org.apache.sqoop.manager.SqlManager.makeConnection(SqlManager.java:848) at org.apache.sqoop.manager.GenericJdbcManager.getConnection(GenericJdbcManager.java:52) at org.apache.sqoop.manager.SqlManager.execute(SqlManager.java:736) at org.apache.sqoop.manager.SqlManager.execute(SqlManager.java:759) ) Summary: resolved (was: configuring Netezza Drivers on Access nodes) resolved Key: HADOOP-11830 URL: https://issues.apache.org/jira/browse/HADOOP-11830 Project: Hadoop Common Issue Type: Bug Reporter: ankush resolved -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HADOOP-11873) Include disk read/write time in FileSystem.Statistics
[ https://issues.apache.org/jira/browse/HADOOP-11873?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14517987#comment-14517987 ] Anu Engineer commented on HADOOP-11873: --- You can read them via http://localhost:datanodeport/jmx or via the JMX APIs with Java. If you use HTTP, you can see the data in JSON format. Here is an example: {code} curl -i http://localhost:50075/jmx HTTP/1.1 200 OK Cache-Control: no-cache Expires: Tue, 28 Apr 2015 20:29:38 GMT Date: Tue, 28 Apr 2015 20:29:38 GMT Pragma: no-cache Expires: Tue, 28 Apr 2015 20:29:38 GMT Date: Tue, 28 Apr 2015 20:29:38 GMT Pragma: no-cache Content-Type: application/json; charset=utf-8 Access-Control-Allow-Methods: GET Access-Control-Allow-Origin: * Connection: close Server: Jetty(6.1.26) { "beans" : [ { "name" : "JMImplementation:type=MBeanServerDelegate", "modelerType" : "javax.management.MBeanServerDelegate", "MBeanServerId" : "hw11767.local_1430252919240", "SpecificationName" : "Java Management Extensions", "SpecificationVersion" : "1.4", snip {code} For your purpose, if you are running the computation on the same node (like MapReduce does), the time reported by the datanode should be very close to the time spent reading data. Include disk read/write time in FileSystem.Statistics - Key: HADOOP-11873 URL: https://issues.apache.org/jira/browse/HADOOP-11873 Project: Hadoop Common Issue Type: New Feature Components: metrics Reporter: Kay Ousterhout Priority: Minor Measuring the time spent blocking on reading / writing data from / to disk is very useful for debugging performance problems in applications that read data from Hadoop, and can give much more information (e.g., to reflect disk contention) than just knowing the total amount of data read. I'd like to add something like diskMillis to FileSystem#Statistics to track this. 
For data read from HDFS, this can be done with very low overhead by adding logging around calls to RemoteBlockReader2.readNextPacket (because this reads larger chunks of data, the time added by the instrumentation is very small relative to the time to actually read the data). For data written to HDFS, this can be done in DFSOutputStream.waitAndQueueCurrentPacket. As far as I know, if you want this information today, it is only currently accessible by turning on HTrace. It looks like HTrace can't be selectively enabled, so a user can't just turn on the tracing on RemoteBlockReader2.readNextPacket for example, and instead needs to turn on tracing everywhere (which then introduces a bunch of overhead -- so sampling is necessary). It would be hugely helpful to have native metrics for time reading / writing to disk that are sufficiently low-overhead to be always on. (Please correct me if I'm wrong here about what's possible today!) -- This message was sent by Atlassian JIRA (v6.3.4#6332)
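Anu's curl example can be scripted. A hedged sketch of pulling one field out of the /jmx JSON; the here-doc string stands in for a live datanode's response, and the bean and key names mirror the sample above:

```shell
# Stand-in for: curl -s http://localhost:50075/jmx
# (bean/key names copied from the sample response in the comment).
json='{"beans":[{"name":"JMImplementation:type=MBeanServerDelegate","SpecificationVersion":"1.4"}]}'

# Extract one attribute with stdlib json parsing.
version=$(printf '%s' "$json" |
  python3 -c 'import json,sys; print(json.load(sys.stdin)["beans"][0]["SpecificationVersion"])')
echo "$version"
```

Against a real datanode you would replace the literal string with the curl call and pick the relevant bean out of the `beans` array.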
[jira] [Commented] (HADOOP-1540) distcp should support an exclude list
[ https://issues.apache.org/jira/browse/HADOOP-1540?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14518148#comment-14518148 ] Rich Haase commented on HADOOP-1540: Working on fixing the items Jenkins is complaining about. distcp should support an exclude list - Key: HADOOP-1540 URL: https://issues.apache.org/jira/browse/HADOOP-1540 Project: Hadoop Common Issue Type: Improvement Components: util Affects Versions: 2.6.0 Reporter: Senthil Subramanian Assignee: Rich Haase Priority: Minor Labels: patch Fix For: 2.6.0 Attachments: HADOOP-1540.001.patch There should be a way to ignore specific paths (eg: those that have already been copied over under the current srcPath). -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HADOOP-1540) distcp should support an exclude list
[ https://issues.apache.org/jira/browse/HADOOP-1540?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14518046#comment-14518046 ] Hadoop QA commented on HADOOP-1540: --- \\ \\ | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | pre-patch | 14m 34s | Pre-patch trunk compilation is healthy. | | {color:green}+1{color} | @author | 0m 0s | The patch does not contain any @author tags. | | {color:green}+1{color} | tests included | 0m 0s | The patch appears to include 2 new or modified test files. | | {color:green}+1{color} | whitespace | 0m 0s | The patch has no lines that end in whitespace. | | {color:red}-1{color} | javac | 7m 30s | The applied patch generated 1 additional warning messages. | | {color:green}+1{color} | javadoc | 9m 35s | There were no new javadoc warning messages. | | {color:green}+1{color} | release audit | 0m 22s | The applied patch does not increase the total number of release audit warnings. | | {color:red}-1{color} | checkstyle | 5m 25s | The applied patch generated 5 additional checkstyle issues. | | {color:green}+1{color} | install | 1m 33s | mvn install still works. | | {color:green}+1{color} | eclipse:eclipse | 0m 32s | The patch built with eclipse:eclipse. | | {color:red}-1{color} | findbugs | 0m 43s | The patch appears to introduce 2 new Findbugs (version 2.0.3) warnings. | | {color:green}+1{color} | tools/hadoop tests | 6m 17s | Tests passed in hadoop-distcp. 
| | | | 46m 37s | | \\ \\ || Reason || Tests || | FindBugs | module:hadoop-distcp | | | Dead store to localFS in org.apache.hadoop.tools.DistCp.addExclusionsFileToDistCache(Job, Path) At DistCp.java:org.apache.hadoop.tools.DistCp.addExclusionsFileToDistCache(Job, Path) At DistCp.java:[line 270] | | | Found reliance on default encoding in org.apache.hadoop.tools.mapred.CopyMapper.initializeExclusionPatterns(Mapper$Context):in org.apache.hadoop.tools.mapred.CopyMapper.initializeExclusionPatterns(Mapper$Context): new java.io.InputStreamReader(InputStream) At CopyMapper.java:[line 163] | \\ \\ || Subsystem || Report/Notes || | Patch URL | http://issues.apache.org/jira/secure/attachment/12728904/HADOOP-1540.001.patch | | Optional Tests | javadoc javac unit findbugs checkstyle | | git revision | trunk / 5639bf0 | | javac | https://builds.apache.org/job/PreCommit-HADOOP-Build/6207/artifact/patchprocess/diffJavacWarnings.txt | | checkstyle | https://builds.apache.org/job/PreCommit-HADOOP-Build/6207/artifact/patchprocess/checkstyle-result-diff.txt | | Findbugs warnings | https://builds.apache.org/job/PreCommit-HADOOP-Build/6207/artifact/patchprocess/newPatchFindbugsWarningshadoop-distcp.html | | hadoop-distcp test log | https://builds.apache.org/job/PreCommit-HADOOP-Build/6207/artifact/patchprocess/testrun_hadoop-distcp.txt | | Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/6207/testReport/ | | Java | 1.7.0_55 | | uname | Linux asf905.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux | | Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/6207/console | This message was automatically generated. 
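For reference, the "reliance on default encoding" FindBugs warning flagged above is usually resolved by passing an explicit charset when wrapping the stream. A minimal sketch of reading an exclusion-pattern file that way (hypothetical helper, not the actual patch code):

```java
import java.io.BufferedReader;
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.InputStreamReader;
import java.nio.charset.StandardCharsets;
import java.util.ArrayList;
import java.util.List;
import java.util.regex.Pattern;

public class ExclusionPatterns {
    // Reads one regex per line, using an explicit charset so the result
    // does not depend on the JVM's platform default encoding.
    static List<Pattern> read(InputStream in) throws IOException {
        List<Pattern> patterns = new ArrayList<>();
        try (BufferedReader reader =
                 new BufferedReader(new InputStreamReader(in, StandardCharsets.UTF_8))) {
            String line;
            while ((line = reader.readLine()) != null) {
                if (!line.isEmpty()) {
                    patterns.add(Pattern.compile(line));
                }
            }
        }
        return patterns;
    }

    public static void main(String[] args) throws IOException {
        InputStream in = new ByteArrayInputStream(
            ".*_COPYING_$\n.*/_temporary/.*\n".getBytes(StandardCharsets.UTF_8));
        List<Pattern> patterns = read(in);
        System.out.println(patterns.size()); // 2
        System.out.println(patterns.get(0).matcher("/a/b_COPYING_").matches()); // true
    }
}
```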
[jira] [Updated] (HADOOP-1540) distcp should support an exclude list
[ https://issues.apache.org/jira/browse/HADOOP-1540?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Rich Haase updated HADOOP-1540: --- Attachment: HADOOP-1540.002.patch This revision of the patch should fix findbugs/javac warnings.
[jira] [Updated] (HADOOP-1540) distcp should support an exclude list
[ https://issues.apache.org/jira/browse/HADOOP-1540?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Rich Haase updated HADOOP-1540: --- Attachment: (was: HADOOP-1540.branch-2.6.0.001.patch)
[jira] [Commented] (HADOOP-11813) releasedocmaker.py should use today's date instead of unreleased
[ https://issues.apache.org/jira/browse/HADOOP-11813?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14518237#comment-14518237 ] Allen Wittenauer commented on HADOOP-11813: --- I use: {code} mvn clean site site:stage -Preleasedocs -DstagingDirectory=/tmp/hadoop-site {code} Just so you don't have to hunt for it, output will be at: file:///tmp/hadoop-site/hadoop-project/hadoop-project-dist/hadoop-common/release/ with that command line. If pandoc is breaking, though, we should probably escape it. Maven's markdown to html conversion is... err, lenient, in some instances. releasedocmaker.py should use today's date instead of unreleased Key: HADOOP-11813 URL: https://issues.apache.org/jira/browse/HADOOP-11813 Project: Hadoop Common Issue Type: Task Components: build Affects Versions: 3.0.0 Reporter: Allen Wittenauer Assignee: Darrell Taylor Priority: Minor Labels: newbie Attachments: HADOOP-11813.patch After discussing with a few folks, it'd be more convenient if releasedocmaker used the current date rather than unreleased when processing a version that JIRA hasn't declared released. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HADOOP-11712) resolved
[ https://issues.apache.org/jira/browse/HADOOP-11712?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ankush updated HADOOP-11712: Description: resolved (was: when running a query to retrieve row data , receiving the other error “ExecuteStatement finished with operation state: CLOSED_STATE” Error type: Odbc error. Odbc operation attempted: SQLExecDirect. [S1000:35: on HSTMT] [Cloudera][HiveODBC] (35) Error from Hive: error code: '0' error message: 'ExecuteStatement finished with operation state: CLOSED_STATE'. Connection String: DSN=MSTR_HIVE;UID=srv-hdp-mstry-d;. SQL Statement: select a11.region_number region_number,) Summary: resolved (was: Error : ExecuteStatement finished with operation state: CLOSED_STATE” :) resolved Key: HADOOP-11712 URL: https://issues.apache.org/jira/browse/HADOOP-11712 Project: Hadoop Common Issue Type: Bug Components: build, scripts, tools, tools/distcp Environment: Cloudera ODBC Driver v. 2.5.13 (32 bit) used by Microstrategy application to connect to HiveServer2 Reporter: ankush Assignee: ankush Labels: build, features, hadoop resolved -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HADOOP-11873) Include disk read/write time in FileSystem.Statistics
[ https://issues.apache.org/jira/browse/HADOOP-11873?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14518007#comment-14518007 ] Anu Engineer commented on HADOOP-11873: --- Nope, it is not possible to get the read time of a particular block. Think of this as counters that offer a view of the total performance of your datanode. I have not seen block-level counters even on Linux. If you really need that, I would suppose it is something you would do in your application as opposed to in the infrastructure. Include disk read/write time in FileSystem.Statistics - Key: HADOOP-11873 URL: https://issues.apache.org/jira/browse/HADOOP-11873 Project: Hadoop Common Issue Type: New Feature Components: metrics Reporter: Kay Ousterhout Priority: Minor Measuring the time spent blocking on reading / writing data from / to disk is very useful for debugging performance problems in applications that read data from Hadoop, and can give much more information (e.g., to reflect disk contention) than just knowing the total amount of data read. I'd like to add something like diskMillis to FileSystem#Statistics to track this. For data read from HDFS, this can be done with very low overhead by adding logging around calls to RemoteBlockReader2.readNextPacket (because this reads larger chunks of data, the time added by the instrumentation is very small relative to the time to actually read the data). For data written to HDFS, this can be done in DFSOutputStream.waitAndQueueCurrentPacket. As far as I know, if you want this information today, it is only currently accessible by turning on HTrace. It looks like HTrace can't be selectively enabled, so a user can't just turn on the tracing on RemoteBlockReader2.readNextPacket for example, and instead needs to turn on tracing everywhere (which then introduces a bunch of overhead -- so sampling is necessary). 
It would be hugely helpful to have native metrics for time reading / writing to disk that are sufficiently low-overhead to be always on. (Please correct me if I'm wrong here about what's possible today!) -- This message was sent by Atlassian JIRA (v6.3.4#6332)
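The kind of always-on, low-overhead client-side metric proposed above can be sketched with a stream wrapper that accumulates time spent in read calls. This is a hypothetical illustration of the diskMillis idea, not Hadoop's actual Statistics API:

```java
import java.io.ByteArrayInputStream;
import java.io.FilterInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.util.concurrent.atomic.AtomicLong;

// Hypothetical wrapper: time every array read and accumulate the total,
// analogous to adding a diskMillis counter to FileSystem.Statistics.
// One System.nanoTime() pair per (large) read keeps overhead small
// relative to the I/O itself, which is the point made in the JIRA.
public class TimedInputStream extends FilterInputStream {
    private final AtomicLong readNanos = new AtomicLong();

    public TimedInputStream(InputStream in) { super(in); }

    @Override
    public int read(byte[] b, int off, int len) throws IOException {
        long start = System.nanoTime();
        try {
            return super.read(b, off, len);
        } finally {
            readNanos.addAndGet(System.nanoTime() - start);
        }
    }

    public long readMillis() { return readNanos.get() / 1_000_000; }

    public static void main(String[] args) throws IOException {
        TimedInputStream in = new TimedInputStream(new ByteArrayInputStream(new byte[4096]));
        byte[] buf = new byte[1024];
        while (in.read(buf, 0, buf.length) != -1) { /* drain */ }
        System.out.println(in.readMillis() >= 0); // accumulated, never negative
    }
}
```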
[jira] [Commented] (HADOOP-11873) Include disk read/write time in FileSystem.Statistics
[ https://issues.apache.org/jira/browse/HADOOP-11873?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14517998#comment-14517998 ] Kay Ousterhout commented on HADOOP-11873: - Thanks for the example! This will give the total time across all reads though, right? So it's still not possible to get the read time for a particular block?
[jira] [Assigned] (HADOOP-8728) Display (fs -text) shouldn't hard-depend on Writable serialized sequence files.
[ https://issues.apache.org/jira/browse/HADOOP-8728?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Akira AJISAKA reassigned HADOOP-8728: - Assignee: Akira AJISAKA Display (fs -text) shouldn't hard-depend on Writable serialized sequence files. --- Key: HADOOP-8728 URL: https://issues.apache.org/jira/browse/HADOOP-8728 Project: Hadoop Common Issue Type: Bug Components: fs Affects Versions: 2.6.0 Reporter: Harsh J Assignee: Akira AJISAKA Priority: Minor Attachments: HADOOP-8728-002.patch, HADOOP-8728-003.patch, HADOOP-8728.patch The Display command (fs -text) currently reads only Writable-based SequenceFiles. This isn't necessary to do, and prevents reading non-Writable-based serialization in SequenceFiles from the shell. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HADOOP-11813) releasedocmaker.py should use today's date instead of unreleased
[ https://issues.apache.org/jira/browse/HADOOP-11813?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14516647#comment-14516647 ] Darrell Taylor commented on HADOOP-11813: - Allen - would you be able to provide me with the magical maven runes that I need to run to actually generate the HTML without having to do an entire build each time? Myself and maven have not reached an understanding yet :/
[jira] [Updated] (HADOOP-11881) test-patch.sh javac result is wildly wrong
[ https://issues.apache.org/jira/browse/HADOOP-11881?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Allen Wittenauer updated HADOOP-11881: -- Resolution: Fixed Fix Version/s: 2.8.0 Status: Resolved (was: Patch Available) test-patch.sh javac result is wildly wrong -- Key: HADOOP-11881 URL: https://issues.apache.org/jira/browse/HADOOP-11881 Project: Hadoop Common Issue Type: Test Components: build, test Reporter: Allen Wittenauer Assignee: Kengo Seki Labels: newbie Fix For: 2.8.0 Attachments: HADOOP-11881.001.patch The summary report appears to list the total amount of javac warnings, not the amount of new ones. See MAPREDUCE-6192 as an example. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HADOOP-11881) test-patch.sh javac result is wildly wrong
[ https://issues.apache.org/jira/browse/HADOOP-11881?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14517509#comment-14517509 ] Hudson commented on HADOOP-11881: - FAILURE: Integrated in Hadoop-trunk-Commit #7693 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/7693/]) HADOOP-11881. test-patch.sh javac result is wildly wrong (Kengo Seki via aw) (aw: rev eccf709a619b05aaa92b27693a9c302d349acf22) * hadoop-common-project/hadoop-common/CHANGES.txt * dev-support/test-patch.sh
[jira] [Commented] (HADOOP-11881) test-patch.sh javac result is wildly wrong
[ https://issues.apache.org/jira/browse/HADOOP-11881?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14517456#comment-14517456 ] Hadoop QA commented on HADOOP-11881: (!) A patch to test-patch or smart-apply-patch has been detected. Re-executing against the patched versions to perform further tests. The console is at https://builds.apache.org/job/PreCommit-HADOOP-Build/6206/console in case of problems.
[jira] [Commented] (HADOOP-11881) test-patch.sh javac result is wildly wrong
[ https://issues.apache.org/jira/browse/HADOOP-11881?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14517457#comment-14517457 ] Hadoop QA commented on HADOOP-11881: \\ \\ | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | reexec | 0m 0s | dev-support patch detected. | | {color:blue}0{color} | pre-patch | 0m 0s | Pre-patch trunk compilation is healthy. | | {color:blue}0{color} | @author | 0m 0s | Skipping @author checks as test-patch has been patched. | | {color:green}+1{color} | whitespace | 0m 0s | The patch has no lines that end in whitespace. | | {color:green}+1{color} | release audit | 0m 18s | The applied patch does not increase the total number of release audit warnings. | | {color:blue}0{color} | shellcheck | 0m 18s | Shellcheck was not available. | | | | 0m 26s | | \\ \\ || Subsystem || Report/Notes || | Patch URL | http://issues.apache.org/jira/secure/attachment/12728839/HADOOP-11881.001.patch | | Optional Tests | shellcheck | | git revision | trunk / 99fe03e | | Java | 1.7.0_55 | | uname | Linux asf907.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux | | Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/6206/console | This message was automatically generated.
[jira] [Commented] (HADOOP-11881) test-patch.sh javac result is wildly wrong
[ https://issues.apache.org/jira/browse/HADOOP-11881?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14517482#comment-14517482 ] Allen Wittenauer commented on HADOOP-11881: --- I figured it was something dumb I did. :) +1 will commit here in a bit. Thanks!
[jira] [Commented] (HADOOP-9984) FileSystem#globStatus and FileSystem#listStatus should resolve symlinks by default
[ https://issues.apache.org/jira/browse/HADOOP-9984?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14517600#comment-14517600 ] Colin Patrick McCabe commented on HADOOP-9984: -- Hi Sanjay, The problem with dereferencing all symlinks in listStatus is that it's disastrously inefficient. In a directory with 100 symlinks, it leads to 101 RPCs to the NameNode. 1 to do the listStatus, and 100 to dereference the symlinks. RPC load on the NameNode is already a concern for us. A scheme like this is just not practical. I understand the concerns that led to this idea. People are unsure if their software can handle symlinks in the listStatus return value. But in my opinion a better solution to this is for people to keep symlinks disabled until they can test it with their software. I also want to clarify that there are also a lot of blocker issues in HADOOP-10019. There's at least 5 or 6 other JIRAs we would need to implement to get symlinks anywhere near usable. For example, cross-filesystem symlinks are even more controversial than this JIRA (some people want to get rid of them altogether), isSymlink is broken for dangling symlinks, FileSystem#rename is broken for symlinks, the behavior of symlinks in globStatus is controversial, distCp doesn't support it, etc. etc. 
The application-level security issues are even worse (will post a follow-up about them) FileSystem#globStatus and FileSystem#listStatus should resolve symlinks by default -- Key: HADOOP-9984 URL: https://issues.apache.org/jira/browse/HADOOP-9984 Project: Hadoop Common Issue Type: Sub-task Components: fs Affects Versions: 2.1.0-beta Reporter: Colin Patrick McCabe Assignee: Colin Patrick McCabe Priority: Critical Attachments: HADOOP-9984.001.patch, HADOOP-9984.003.patch, HADOOP-9984.005.patch, HADOOP-9984.007.patch, HADOOP-9984.009.patch, HADOOP-9984.010.patch, HADOOP-9984.011.patch, HADOOP-9984.012.patch, HADOOP-9984.013.patch, HADOOP-9984.014.patch, HADOOP-9984.015.patch During the process of adding symlink support to FileSystem, we realized that many existing HDFS clients would be broken by listStatus and globStatus returning symlinks. One example is applications that assume that !FileStatus#isFile implies that the inode is a directory. As we discussed in HADOOP-9972 and HADOOP-9912, we should default these APIs to returning resolved paths. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HADOOP-1540) distcp should support an exclude list
[ https://issues.apache.org/jira/browse/HADOOP-1540?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14517639#comment-14517639 ] Jing Zhao commented on HADOOP-1540: --- Hi [~rhaase], thanks for uploading the patch! Do you mind rebasing the patch against the current trunk branch? 2.6 has already been released. And for new features and improvements we usually first commit them into trunk and then merge into branch-2 (which is currently aiming for 2.8).
[jira] [Commented] (HADOOP-10724) `hadoop fs -du -h` incorrectly formatted
[ https://issues.apache.org/jira/browse/HADOOP-10724?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14517661#comment-14517661 ] Sam Steingold commented on HADOOP-10724: I updated the pull request `hadoop fs -du -h` incorrectly formatted Key: HADOOP-10724 URL: https://issues.apache.org/jira/browse/HADOOP-10724 Project: Hadoop Common Issue Type: Bug Components: fs Reporter: Sam Steingold Assignee: Sam Steingold {{hadoop fs -du -h}} prints sizes with a space between the number and the unit: {code} $ hadoop fs -du -h . 91.7 G 583.1 M 97.6 K . {code} The standard unix {{du -h}} does not: {code} $ du -h 400K... 404K 480K. {code} The result is that the output of {{du -h}} is properly sorted by {{sort -h}} while the output of {{hadoop fs -du -h}} is *not* properly sorted by it. Please see * [sort|http://linux.die.net/man/1/sort]: -h --human-numeric-sort compare human readable numbers (e.g., 2K 1G) * [du|http://linux.die.net/man/1/du]: -h, --human-readable print sizes in human readable format (e.g., 1K 234M 2G) -- This message was sent by Atlassian JIRA (v6.3.4#6332)
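The sorting problem described above is easy to reproduce with GNU coreutils {{sort}} (assuming a GNU userland; the sizes below are taken from the issue's example output):

```shell
# With a space, sort -h never sees the unit as an SI suffix and
# compares only the leading numbers, so 91.7 G sorts below 583.1 M:
printf '91.7 G\n583.1 M\n97.6 K\n' | sort -h
# Without the space, sort -h honors the K < M < G suffix ordering:
printf '91.7G\n583.1M\n97.6K\n' | sort -h
```

This is why dropping the space (matching plain `du -h` output) makes `hadoop fs -du -h` sortable.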
[jira] [Commented] (HADOOP-11873) Include disk read/write time in FileSystem.Statistics
[ https://issues.apache.org/jira/browse/HADOOP-11873?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14517668#comment-14517668 ] Kay Ousterhout commented on HADOOP-11873: - Thanks [~anu]! I took a look at that patch and it looks like all of that code is on the server side. I was hoping for data on the client side, so frameworks that, for example, read data from HDFS can measure how long the data read took compared to other operations done (e.g., computing on that data).
[jira] [Updated] (HADOOP-10724) `hadoop fs -du -h` incorrectly formatted
[ https://issues.apache.org/jira/browse/HADOOP-10724?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sam Steingold updated HADOOP-10724: --- Attachment: 0001-HADOOP-10724-do-not-insert-a-space-between-number-an.patch rebased patch
[jira] [Commented] (HADOOP-11813) releasedocmaker.py should use today's date instead of unreleased
[ https://issues.apache.org/jira/browse/HADOOP-11813?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14516575#comment-14516575 ] Darrell Taylor commented on HADOOP-11813: - Looking at the code now, I should have probably read the comments around tableclean() and notableclean(); they are pretty clear. I'm going to do a bit more investigation because the asterisk that broke it for me was in the middle of a cell and not a bullet-point. Also I initially compiled the markdown using pandoc, but I'll double check if the asterisk also breaks whatever maven uses to compile the markdown. The jira that broke it (for pandoc) is MAPREDUCE-5785. {code} | [MAPREDUCE-5785](https://issues.apache.org/jira/browse/MAPREDUCE-5785) | Derive heap size or mapreduce.*.memory.mb automatically | Major | mr-am, task | Gera Shegalov | Gera Shegalov | {code} I'll update with my findings later.
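The usual remedy for a bare asterisk mid-cell is to backslash-escape markdown metacharacters before emitting the table row. A rough Python sketch of what such a sanitizer could do with the MAPREDUCE-5785 summary (hypothetical function, not releasedocmaker.py's actual code):

```python
import re

def escape_md_cell(text):
    """Escape characters that markdown converters may treat as emphasis
    markers or row delimiters inside a table cell."""
    # Backslash-escape *, _ and | so "mapreduce.*.memory.mb" survives
    # conversion instead of opening an emphasis span or splitting the row.
    return re.sub(r'([*_|])', r'\\\1', text)

cell = "Derive heap size or mapreduce.*.memory.mb automatically"
print(escape_md_cell(cell))
# Derive heap size or mapreduce.\*.memory.mb automatically
```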
[jira] [Commented] (HADOOP-11878) NPE in FileContext.java # fixRelativePart(Path p)
[ https://issues.apache.org/jira/browse/HADOOP-11878?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14516467#comment-14516467 ] Tsuyoshi Ozawa commented on HADOOP-11878: - [~brahmareddy] Good catch. It's correct to call Preconditions.checkState() for the assertion here. NPE in FileContext.java # fixRelativePart(Path p) - Key: HADOOP-11878 URL: https://issues.apache.org/jira/browse/HADOOP-11878 Project: Hadoop Common Issue Type: Bug Reporter: Brahma Reddy Battula Assignee: Brahma Reddy Battula Attachments: HADOOP-11878.patch The following occurs when a job fails and the deletion service tries to delete the log files: 2015-04-27 14:56:17,113 INFO org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor: Deleting absolute path : null 2015-04-27 14:56:17,113 ERROR org.apache.hadoop.yarn.server.nodemanager.DeletionService: Exception during execution of task in DeletionService java.lang.NullPointerException at org.apache.hadoop.fs.FileContext.fixRelativePart(FileContext.java:274) at org.apache.hadoop.fs.FileContext.delete(FileContext.java:761) at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.deleteAsUser(DefaultContainerExecutor.java:457) at org.apache.hadoop.yarn.server.nodemanager.DeletionService$FileDeletionTask.run(DeletionService.java:293) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180) at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HADOOP-11878) NPE in FileContext.java # fixRelativePart(Path p)
[ https://issues.apache.org/jira/browse/HADOOP-11878?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14516470#comment-14516470 ] Tsuyoshi Ozawa commented on HADOOP-11878: - Sorry, s/checkState/checkArgument/
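The suggested fix pattern is to validate the argument up front with a Guava-style checkArgument so a null path fails fast with a clear message instead of surfacing as an NPE deep inside fixRelativePart. A self-contained sketch (the helper and the simplified method are hypothetical stand-ins, not the actual FileContext code):

```java
// Sketch of the proposed precondition check. checkArgument is defined
// locally here to keep the example dependency-free; the real patch
// would use com.google.common.base.Preconditions.checkArgument.
public class FixRelativePartDemo {
    static void checkArgument(boolean ok, String msg) {
        if (!ok) throw new IllegalArgumentException(msg);
    }

    // Hypothetical stand-in for FileContext#fixRelativePart(Path):
    // qualify a relative path against the working directory.
    static String fixRelativePart(String path, String workingDir) {
        checkArgument(path != null, "null path argument");
        return path.startsWith("/") ? path : workingDir + "/" + path;
    }

    public static void main(String[] args) {
        System.out.println(fixRelativePart("logs/app.log", "/user/nm")); // /user/nm/logs/app.log
        try {
            fixRelativePart(null, "/user/nm");
        } catch (IllegalArgumentException e) {
            System.out.println("rejected: " + e.getMessage());
        }
    }
}
```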
[jira] [Commented] (HADOOP-11828) Implement the Hitchhiker erasure coding algorithm
[ https://issues.apache.org/jira/browse/HADOOP-11828?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14516526#comment-14516526 ] Kai Zheng commented on HADOOP-11828:

Hi Jack,
You're still using {{JRSRawEncoder}}, but it was renamed to {{RSRawEncoder}} quite some time ago.
How about renaming the new coder from {{HitchhikerXORErasureEncoder}} to {{HHXORErasureEncoder}}, similar to the other coders?

Implement the Hitchhiker erasure coding algorithm
-------------------------------------------------
                Key: HADOOP-11828
                URL: https://issues.apache.org/jira/browse/HADOOP-11828
            Project: Hadoop Common
         Issue Type: Sub-task
           Reporter: Zhe Zhang
           Assignee: jack liuquan
        Attachments: 7715-hitchhikerXOR-v2-testcode.patch, 7715-hitchhikerXOR-v2.patch, HADOOP-11828-hitchhikerXOR-V3.patch, HDFS-7715-hhxor-decoder.patch, HDFS-7715-hhxor-encoder.patch

[Hitchhiker|http://www.eecs.berkeley.edu/~nihar/publications/Hitchhiker_SIGCOMM14.pdf] is a new erasure coding algorithm developed as a research project at UC Berkeley. It has been shown to reduce network traffic and disk I/O by 25%-45% during data reconstruction. This JIRA aims to introduce Hitchhiker to the HDFS-EC framework as one of the pluggable codec algorithms. The existing implementation is based on HDFS-RAID.

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
[jira] [Updated] (HADOOP-11715) azureFs::getFileStatus doesn't check the file system scheme and thus could throw a misleading exception.
[ https://issues.apache.org/jira/browse/HADOOP-11715?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] nijel updated HADOOP-11715:
---------------------------
    Attachment: HADOOP-11715.1.patch

Attaching the patch. Please review.

azureFs::getFileStatus doesn't check the file system scheme and thus could throw a misleading exception.
--------------------------------------------------------------------------------------------------------
                Key: HADOOP-11715
                URL: https://issues.apache.org/jira/browse/HADOOP-11715
            Project: Hadoop Common
         Issue Type: Bug
         Components: fs
   Affects Versions: 2.7.0
           Reporter: Brandon Li
           Assignee: nijel
        Attachments: HADOOP-11715.1.patch

azureFs::getFileStatus doesn't check the file system scheme and thus could throw a misleading exception. For example, it complains FileNotFound instead of Wrong FS for an hdfs path:

Caused by: java.io.FileNotFoundException: hdfs://headnode0:9000/hive/scratch/hadoopqa/a7d34a22-57eb-4678-84b4-43d84027d45f/hive_2015-03-02_23-13-04_713_5722627238053417441-1/hadoopqa/_tez_scratch_dir/_tez_scratch_dir/split_Map_1/job.split: No such file or directory.
	at org.apache.hadoop.fs.azure.NativeAzureFileSystem.getFileStatus(NativeAzureFileSystem.java:1625)

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
[jira] [Updated] (HADOOP-11715) azureFs::getFileStatus doesn't check the file system scheme and thus could throw a misleading exception.
[ https://issues.apache.org/jira/browse/HADOOP-11715?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] nijel updated HADOOP-11715:
---------------------------
    Fix Version/s: 2.8.0
           Status: Patch Available  (was: Open)

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
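The fix being tracked here amounts to validating the path's URI scheme up front so the caller sees a "Wrong FS" error instead of a misleading `FileNotFoundException`. A minimal standalone sketch of that idea follows; it is not the actual patch, and `SchemeCheckSketch` is a hypothetical class (the real code path would go through Hadoop's `FileSystem.checkPath`):

```java
import java.net.URI;

// Sketch: a file system that rejects paths whose scheme does not match its own,
// instead of failing later while looking the path up in the wrong store.
public class SchemeCheckSketch {
    private final String scheme;  // e.g. "wasb" for the Azure file system

    SchemeCheckSketch(String scheme) { this.scheme = scheme; }

    void checkPath(URI path) {
        String pathScheme = path.getScheme();
        if (pathScheme != null && !pathScheme.equalsIgnoreCase(scheme)) {
            throw new IllegalArgumentException(
                "Wrong FS: " + path + ", expected scheme: " + scheme);
        }
    }

    public static void main(String[] args) {
        SchemeCheckSketch azureFs = new SchemeCheckSketch("wasb");
        azureFs.checkPath(URI.create("wasb://container@account/file"));  // accepted
        try {
            // An hdfs:// path handed to the Azure FS now fails fast and clearly.
            azureFs.checkPath(URI.create("hdfs://headnode0:9000/job.split"));
        } catch (IllegalArgumentException e) {
            System.out.println(e.getMessage());
        }
    }
}
```

A scheme-less (relative) path passes the check, matching the usual convention that such paths are resolved against the current file system.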
[jira] [Commented] (HADOOP-11828) Implement the Hitchhiker erasure coding algorithm
[ https://issues.apache.org/jira/browse/HADOOP-11828?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14516731#comment-14516731 ] jack liuquan commented on HADOOP-11828:

Hi Kai,

bq. I mentioned that in my rough thought, would you clarify it in details and provide your thoughts? This would be desired as it's a rather major design change.

The Hitchhiker algorithm builds on top of RS codes and XOR codes, so architecturally it fits better in the ErasureCoder layer. Placing it there also makes it easy to swap out Hitchhiker's underlying RS and XOR codes for better performance.

bq. You're still using JRSRawEncoder, but it was renamed to RSRawEncoder quite some time ago.

Oh yes, I see. I will check and modify it.

bq. How about renaming the new coder from HitchhikerXORErasureEncoder to HHXORErasureEncoder, similar to the other coders?

OK, sounds great.

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
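The layering described above can be illustrated with a toy sketch. This is emphatically not the real Hitchhiker math or the HDFS-EC API: `RawEncoder`, `XorRawEncoder`, and `HHXOREncoder` are hypothetical names, and the "RS" coder is a stand-in. It only shows how a coder at the ErasureCoder layer can compose pluggable raw coders, which is the design point being made:

```java
import java.util.Arrays;

public class HHXORLayeringSketch {
    // Minimal raw-coder abstraction: data units in, one parity unit out.
    interface RawEncoder {
        byte[] encode(byte[][] dataUnits);
    }

    // Toy XOR "code": the parity unit is the byte-wise XOR of all data units.
    static class XorRawEncoder implements RawEncoder {
        public byte[] encode(byte[][] dataUnits) {
            byte[] parity = new byte[dataUnits[0].length];
            for (byte[] unit : dataUnits) {
                for (int i = 0; i < unit.length; i++) parity[i] ^= unit[i];
            }
            return parity;
        }
    }

    // The composed coder: it never does field arithmetic itself, it only
    // delegates to whatever raw coders it was constructed with, so either
    // underlying code can be replaced independently.
    static class HHXOREncoder {
        private final RawEncoder rs;   // stand-in for an RS raw encoder
        private final RawEncoder xor;
        HHXOREncoder(RawEncoder rs, RawEncoder xor) {
            this.rs = rs;
            this.xor = xor;
        }
        byte[][] encode(byte[][] dataUnits) {
            // Real Hitchhiker couples an RS pass with XOR-based piggybacks;
            // here we only show the delegation structure.
            return new byte[][] { rs.encode(dataUnits), xor.encode(dataUnits) };
        }
    }

    public static void main(String[] args) {
        RawEncoder xor = new XorRawEncoder();
        HHXOREncoder coder = new HHXOREncoder(xor /* RS stand-in */, xor);
        byte[][] parities = coder.encode(new byte[][] { {1, 2}, {3, 4} });
        System.out.println(Arrays.toString(parities[1]));  // prints [2, 6]
    }
}
```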
[jira] [Commented] (HADOOP-11335) KMS ACL in meta data or database
[ https://issues.apache.org/jira/browse/HADOOP-11335?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14516595#comment-14516595 ] Hadoop QA commented on HADOOP-11335:

| (x) *{color:red}-1 overall{color}* |

|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch | 14m 36s | Pre-patch trunk compilation is healthy. |
| {color:green}+1{color} | @author | 0m 0s | The patch does not contain any @author tags. |
| {color:green}+1{color} | tests included | 0m 0s | The patch appears to include 6 new or modified test files. |
| {color:red}-1{color} | whitespace | 0m 0s | The patch has 6 line(s) that end in whitespace. |
| {color:green}+1{color} | javac | 7m 30s | There were no new javac warning messages. |
| {color:red}-1{color} | javadoc | 9m 36s | The applied patch generated 4 additional warning messages. |
| {color:green}+1{color} | release audit | 0m 22s | The applied patch does not increase the total number of release audit warnings. |
| {color:red}-1{color} | checkstyle | 5m 22s | The applied patch generated 16 additional checkstyle issues. |
| {color:green}+1{color} | install | 1m 33s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse | 0m 32s | The patch built with eclipse:eclipse. |
| {color:red}-1{color} | findbugs | 5m 22s | The patch appears to introduce 3 new Findbugs (version 2.0.3) warnings. |
| {color:green}+1{color} | common tests | 24m 26s | Tests passed in hadoop-common. |
| {color:green}+1{color} | common tests | 1m 42s | Tests passed in hadoop-kms. |
| {color:red}-1{color} | hdfs tests | 185m 12s | Tests failed in hadoop-hdfs. |
| | | | 256m 33s | |

|| Reason || Tests ||
| FindBugs | module:hadoop-common |
| | Impossible downcast of toArray() result to String[] in org.apache.hadoop.crypto.key.KeyShell$RemoveAttributeCommand.execute() At KeyShell.java:to String[] in org.apache.hadoop.crypto.key.KeyShell$RemoveAttributeCommand.execute() At KeyShell.java:[line 599] |
| | Impossible downcast of toArray() result to String[] in org.apache.hadoop.crypto.key.KeyShell$RemoveAttributeCommand.validate() At KeyShell.java:to String[] in org.apache.hadoop.crypto.key.KeyShell$RemoveAttributeCommand.validate() At KeyShell.java:[line 573] |
| FindBugs | module:hadoop-kms |
| | Write to static field org.apache.hadoop.crypto.key.kms.server.keyacls.PerKeyACLs.perKeyACLs from instance method org.apache.hadoop.crypto.key.kms.server.keyacls.PerKeyACLs.clear() At PerKeyACLs.java:from instance method org.apache.hadoop.crypto.key.kms.server.keyacls.PerKeyACLs.clear() At PerKeyACLs.java:[line 59] |
| Timed out tests | org.apache.hadoop.hdfs.server.blockmanagement.TestDatanodeManager |

|| Subsystem || Report/Notes ||
| Patch URL | http://issues.apache.org/jira/secure/attachment/12728686/HADOOP-11335.007.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / feb68cb |
| whitespace | https://builds.apache.org/job/PreCommit-HADOOP-Build/6200/artifact/patchprocess/whitespace.txt |
| javadoc | https://builds.apache.org/job/PreCommit-HADOOP-Build/6200/artifact/patchprocess/diffJavadocWarnings.txt |
| checkstyle | https://builds.apache.org/job/PreCommit-HADOOP-Build/6200/artifact/patchprocess/checkstyle-result-diff.txt |
| Findbugs warnings | https://builds.apache.org/job/PreCommit-HADOOP-Build/6200/artifact/patchprocess/newPatchFindbugsWarningshadoop-common.html |
| Findbugs warnings | https://builds.apache.org/job/PreCommit-HADOOP-Build/6200/artifact/patchprocess/newPatchFindbugsWarningshadoop-kms.html |
| hadoop-common test log | https://builds.apache.org/job/PreCommit-HADOOP-Build/6200/artifact/patchprocess/testrun_hadoop-common.txt |
| hadoop-kms test log | https://builds.apache.org/job/PreCommit-HADOOP-Build/6200/artifact/patchprocess/testrun_hadoop-kms.txt |
| hadoop-hdfs test log | https://builds.apache.org/job/PreCommit-HADOOP-Build/6200/artifact/patchprocess/testrun_hadoop-hdfs.txt |
| Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/6200/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf905.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/6200/console |

This message was automatically generated.

KMS ACL in meta data or database
--------------------------------
                Key: HADOOP-11335
                URL: https://issues.apache.org/jira/browse/HADOOP-11335
            Project: Hadoop Common
         Issue Type: Improvement
         Components: kms
   Affects Versions: 2.6.0
           Reporter: Jerry Chen
           Assignee: Dian Fu
             Labels: Security
        Attachments: HADOOP-11335.001.patch, HADOOP-11335.002.patch,
[jira] [Commented] (HADOOP-11821) Fix findbugs warnings in hadoop-sls
[ https://issues.apache.org/jira/browse/HADOOP-11821?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14516631#comment-14516631 ] Akira AJISAKA commented on HADOOP-11821:

Hi [~brahmareddy], would you fix the checkstyle warnings?

{code:title=SLSWebApp}
  private transient Map<SchedulerEventType, Histogram> handleOperTimecostHistogramMap;
{code}

Would you wrap the line to fit within 80 characters?

{code:title=RumenToSLSConverter#generateSLSLoadFile}
    try(Writer output = new OutputStreamWriter(new FileOutputStream(outputFile), "UTF-8");) {
{code}

* Would you please add a whitespace between 'try' and '(' ?
* The semicolon is unnecessary.

Fix findbugs warnings in hadoop-sls
-----------------------------------
                Key: HADOOP-11821
                URL: https://issues.apache.org/jira/browse/HADOOP-11821
            Project: Hadoop Common
         Issue Type: Bug
         Components: tools
           Reporter: Akira AJISAKA
           Assignee: Brahma Reddy Battula
        Attachments: HADOOP-11821-001.patch, HADOOP-11821-002.patch, HADOOP-11821-003.patch, HADOOP-11821-004.patch, HADOOP-11821.patch

Per https://builds.apache.org/job/PreCommit-MAPREDUCE-Build/5388//artifact/patchprocess/newPatchFindbugsWarningshadoop-sls.html, there are 13 warnings to be fixed.

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
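Putting the review points together, the checkstyle-clean form of that try-with-resources would look roughly like this. It is a self-contained sketch, not the RumenToSLSConverter code itself; the temp-file name and written content are illustrative. Note the space after `try`, no trailing semicolon in the resource list, and the wrapped line:

```java
import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.OutputStreamWriter;
import java.io.Writer;
import java.nio.charset.StandardCharsets;

public class TryWithResourcesSketch {
    public static void main(String[] args) throws IOException {
        File outputFile = File.createTempFile("sls", ".json");
        outputFile.deleteOnExit();
        // Space after "try", no semicolon after the resource, wrapped < 80 cols.
        try (Writer output = new OutputStreamWriter(
                new FileOutputStream(outputFile), StandardCharsets.UTF_8)) {
            output.write("{}");
        }  // the Writer is closed automatically here
        System.out.println(outputFile.length());  // prints 2
    }
}
```

Using `StandardCharsets.UTF_8` instead of the string `"UTF-8"` has the side benefit of removing the checked `UnsupportedEncodingException` from the constructor.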