[jira] [Commented] (HADOOP-14587) Use GenericTestUtils.setLogLevel when available in hadoop-common
[ https://issues.apache.org/jira/browse/HADOOP-14587?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16073122#comment-16073122 ] Hadoop QA commented on HADOOP-14587: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 16s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 23 new or modified test files. {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 11s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 20s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m 0s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 45s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 28s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 56s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 11s{color} | {color:green} trunk passed {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 8s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 53s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 
10m 11s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} javac {color} | {color:red} 10m 11s{color} | {color:red} root generated 20 new + 825 unchanged - 0 fixed = 845 total (was 825) {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 45s{color} | {color:green} hadoop-common-project: The patch generated 0 new + 480 unchanged - 2 fixed = 480 total (was 482) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 26s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 1s{color} | {color:green} The patch has no ill-formed XML file. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 8s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 9s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 8m 34s{color} | {color:green} hadoop-common in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 30s{color} | {color:green} hadoop-nfs in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 35s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 63m 43s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:14b5c93 | | JIRA Issue | HADOOP-14587 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12875567/HADOOP-14587.004.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle xml | | uname | Linux f323add3cbd2 3.13.0-116-generic #163-Ubuntu SMP Fri Mar 31 14:13:22 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / bf1f599 | | Default Java | 1.8.0_131 | | findbugs | v3.1.0-RC1 | | javac | https://builds.apache.org/job/PreCommit-HADOOP-Build/12709/artifact/patchprocess/diff-compile-javac-root.txt | | Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/12709/testReport/ | | modules | C: hadoop-common-project/hadoop-common hadoop-common-project/hadoop-nfs U: hadoop-common-project | | Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/12709/console | | Powered by | Apache Yetus 0.5.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > Use
[jira] [Comment Edited] (HADOOP-13414) Hide Jetty Server version header in HTTP responses
[ https://issues.apache.org/jira/browse/HADOOP-13414?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16073100#comment-16073100 ] Vinayakumar B edited comment on HADOOP-13414 at 7/4/17 4:53 AM: +1 for the trunk patch. A separate patch is needed for branch-2*. Please attach a patch for branch-2. Thanks was (Author: vinayrpet): +1, > Hide Jetty Server version header in HTTP responses > -- > > Key: HADOOP-13414 > URL: https://issues.apache.org/jira/browse/HADOOP-13414 > Project: Hadoop Common > Issue Type: Bug > Components: security >Reporter: Vinayakumar B >Assignee: Surendra Singh Lilhore > Attachments: Aftrerfix.png, BeforeFix.png, HADOOP-13414-001.patch, > HADOOP-13414-002.patch > > > Hide the Jetty Server version in the HTTP response header. Some security analyzers > would flag this as an issue. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-13414) Hide Jetty Server version header in HTTP responses
[ https://issues.apache.org/jira/browse/HADOOP-13414?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16073100#comment-16073100 ] Vinayakumar B commented on HADOOP-13414: +1, > Hide Jetty Server version header in HTTP responses > -- > > Key: HADOOP-13414 > URL: https://issues.apache.org/jira/browse/HADOOP-13414 > Project: Hadoop Common > Issue Type: Bug > Components: security >Reporter: Vinayakumar B >Assignee: Surendra Singh Lilhore > Attachments: Aftrerfix.png, BeforeFix.png, HADOOP-13414-001.patch, > HADOOP-13414-002.patch > > > Hide the Jetty Server version in the HTTP response header. Some security analyzers > would flag this as an issue.
[jira] [Updated] (HADOOP-14587) Use GenericTestUtils.setLogLevel when available in hadoop-common
[ https://issues.apache.org/jira/browse/HADOOP-14587?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Wenxin He updated HADOOP-14587: --- Status: Patch Available (was: Open) > Use GenericTestUtils.setLogLevel when available in hadoop-common > > > Key: HADOOP-14587 > URL: https://issues.apache.org/jira/browse/HADOOP-14587 > Project: Hadoop Common > Issue Type: Sub-task >Reporter: Wenxin He >Assignee: Wenxin He > Attachments: HADOOP-14587.001.patch, HADOOP-14587.002.patch, > HADOOP-14587.003.patch, HADOOP-14587.004.patch > > > Based on Brahma's comment in HADOOP-14296, it's better to use > GenericTestUtils.setLogLevel where possible to make the migration easier. > Based on Akira Ajisaka's comment in HADOOP-14549, a separate jira was created for the > hadoop-common change.
[jira] [Updated] (HADOOP-14587) Use GenericTestUtils.setLogLevel when available in hadoop-common
[ https://issues.apache.org/jira/browse/HADOOP-14587?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Wenxin He updated HADOOP-14587: --- Status: Open (was: Patch Available) > Use GenericTestUtils.setLogLevel when available in hadoop-common > > > Key: HADOOP-14587 > URL: https://issues.apache.org/jira/browse/HADOOP-14587 > Project: Hadoop Common > Issue Type: Sub-task >Reporter: Wenxin He >Assignee: Wenxin He > Attachments: HADOOP-14587.001.patch, HADOOP-14587.002.patch, > HADOOP-14587.003.patch, HADOOP-14587.004.patch > > > Based on Brahma's comment in HADOOP-14296, it's better to use > GenericTestUtils.setLogLevel where possible to make the migration easier. > Based on Akira Ajisaka's comment in HADOOP-14549, a separate jira was created for the > hadoop-common change.
[jira] [Updated] (HADOOP-14587) Use GenericTestUtils.setLogLevel when available in hadoop-common
[ https://issues.apache.org/jira/browse/HADOOP-14587?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Wenxin He updated HADOOP-14587: --- Attachment: HADOOP-14587.004.patch 004.patch: fixes checkstyle issues and removes an unused import > Use GenericTestUtils.setLogLevel when available in hadoop-common > > > Key: HADOOP-14587 > URL: https://issues.apache.org/jira/browse/HADOOP-14587 > Project: Hadoop Common > Issue Type: Sub-task >Reporter: Wenxin He >Assignee: Wenxin He > Attachments: HADOOP-14587.001.patch, HADOOP-14587.002.patch, > HADOOP-14587.003.patch, HADOOP-14587.004.patch > > > Based on Brahma's comment in HADOOP-14296, it's better to use > GenericTestUtils.setLogLevel where possible to make the migration easier. > Based on Akira Ajisaka's comment in HADOOP-14549, a separate jira was created for the > hadoop-common change.
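The point of routing every test's log-level change through GenericTestUtils.setLogLevel is that only one place needs to know which logging backend is in use, so a later backend migration becomes mechanical. A minimal analogue of that pattern, sketched here with java.util.logging rather than Hadoop's actual helper (the class and method below are illustrative, not Hadoop code):

```java
import java.util.logging.Level;
import java.util.logging.Logger;

// Illustrative analogue of GenericTestUtils.setLogLevel, not Hadoop's code:
// tests call this single helper instead of touching the logging backend
// directly, so switching backends later means rewriting only this method.
public class TestLogLevels {

    static void setLogLevel(Logger logger, Level level) {
        // All backend-specific calls are confined to this one method.
        logger.setLevel(level);
    }

    public static void main(String[] args) {
        Logger log = Logger.getLogger("org.example.test");
        setLogLevel(log, Level.FINE);
        System.out.println(log.getLevel()); // FINE
    }
}
```

A test that today calls `setLogLevel(log, Level.FINE)` never needs to change when the helper's body is rewritten for a different backend, which is exactly the migration-easing argument quoted above.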
[jira] [Commented] (HADOOP-13786) Add S3Guard committer for zero-rename commits to S3 endpoints
[ https://issues.apache.org/jira/browse/HADOOP-13786?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16072870#comment-16072870 ] Hadoop QA commented on HADOOP-13786: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 22s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 42 new or modified test files. {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 1m 33s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 1s{color} | {color:green} HADOOP-13345 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 6s{color} | {color:green} HADOOP-13345 passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 57s{color} | {color:green} HADOOP-13345 passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 37s{color} | {color:green} HADOOP-13345 passed {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 38s{color} | {color:red} hadoop-tools/hadoop-aws in HADOOP-13345 has 1 extant Findbugs warnings. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 6s{color} | {color:green} HADOOP-13345 passed {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 16s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 50s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m 16s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 10m 16s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 1m 58s{color} | {color:orange} root: The patch generated 46 new + 121 unchanged - 24 fixed = 167 total (was 145) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 49s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s{color} | {color:red} The patch has 13 line(s) that end in whitespace. Use git apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 2s{color} | {color:green} The patch has no ill-formed XML file. {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 53s{color} | {color:red} hadoop-tools/hadoop-aws generated 1 new + 1 unchanged - 0 fixed = 2 total (was 1) {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 54s{color} | {color:green} hadoop-common in the patch passed. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 25s{color} | {color:green} hadoop-yarn-project_hadoop-yarn_hadoop-yarn-registry generated 0 new + 45 unchanged - 3 fixed = 45 total (was 48) {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 33s{color} | {color:green} hadoop-mapreduce-client-core in the patch passed. {color} | | {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 28s{color} | {color:red} hadoop-aws in the patch failed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 8m 9s{color} | {color:green} hadoop-common in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 51s{color} | {color:green} hadoop-yarn-registry in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 51s{color} | {color:green} hadoop-mapreduce-client-core in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 11s{color} | {color:green} hadoop-aws in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 41s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}102m 26s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | FindBugs | module:hadoop-tools/hadoop-aws | | | Format-string method String.format(String, Object[]) called with format string
[jira] [Commented] (HADOOP-12490) Add default resources to Configuration which are not inside the class path
[ https://issues.apache.org/jira/browse/HADOOP-12490?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16072812#comment-16072812 ] Hadoop QA commented on HADOOP-12490: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 16s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 24s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 13m 42s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 38s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 4s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 27s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 50s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 47s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m 34s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 11m 34s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 38s{color} | 
{color:orange} hadoop-common-project/hadoop-common: The patch generated 25 new + 146 unchanged - 2 fixed = 171 total (was 148) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 4s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 46s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 54s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 8m 24s{color} | {color:red} hadoop-common in the patch failed. {color} | | {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 0m 34s{color} | {color:red} The patch generated 1 ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 58m 54s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.ha.TestZKFailoverController | | | hadoop.fs.viewfs.TestViewFileSystemLocalFileSystem | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:14b5c93 | | JIRA Issue | HADOOP-12490 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12767355/HADOOP-12490.001.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux d693c85d0d19 3.13.0-116-generic #163-Ubuntu SMP Fri Mar 31 14:13:22 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / bf1f599 | | Default Java | 1.8.0_131 | | findbugs | v3.1.0-RC1 | | checkstyle | https://builds.apache.org/job/PreCommit-HADOOP-Build/12707/artifact/patchprocess/diff-checkstyle-hadoop-common-project_hadoop-common.txt | 
| unit | https://builds.apache.org/job/PreCommit-HADOOP-Build/12707/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common.txt | | Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/12707/testReport/ | | asflicense | https://builds.apache.org/job/PreCommit-HADOOP-Build/12707/artifact/patchprocess/patch-asflicense-problems.txt | | modules | C: hadoop-common-project/hadoop-common U: hadoop-common-project/hadoop-common | | Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/12707/console | | Powered by | Apache Yetus 0.5.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > Add default resources to Configuration which are not inside the class path > -- > > Key: HADOOP-12490 > URL:
[jira] [Updated] (HADOOP-13786) Add S3Guard committer for zero-rename commits to S3 endpoints
[ https://issues.apache.org/jira/browse/HADOOP-13786?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran updated HADOOP-13786: Status: Patch Available (was: Open) > Add S3Guard committer for zero-rename commits to S3 endpoints > - > > Key: HADOOP-13786 > URL: https://issues.apache.org/jira/browse/HADOOP-13786 > Project: Hadoop Common > Issue Type: New Feature > Components: fs/s3 >Affects Versions: HADOOP-13345 >Reporter: Steve Loughran >Assignee: Steve Loughran > Attachments: HADOOP-13786-HADOOP-13345-001.patch, > HADOOP-13786-HADOOP-13345-002.patch, HADOOP-13786-HADOOP-13345-003.patch, > HADOOP-13786-HADOOP-13345-004.patch, HADOOP-13786-HADOOP-13345-005.patch, > HADOOP-13786-HADOOP-13345-006.patch, HADOOP-13786-HADOOP-13345-006.patch, > HADOOP-13786-HADOOP-13345-007.patch, HADOOP-13786-HADOOP-13345-009.patch, > HADOOP-13786-HADOOP-13345-010.patch, HADOOP-13786-HADOOP-13345-011.patch, > HADOOP-13786-HADOOP-13345-012.patch, HADOOP-13786-HADOOP-13345-013.patch, > HADOOP-13786-HADOOP-13345-015.patch, HADOOP-13786-HADOOP-13345-016.patch, > HADOOP-13786-HADOOP-13345-017.patch, HADOOP-13786-HADOOP-13345-018.patch, > HADOOP-13786-HADOOP-13345-019.patch, HADOOP-13786-HADOOP-13345-020.patch, > HADOOP-13786-HADOOP-13345-021.patch, HADOOP-13786-HADOOP-13345-022.patch, > HADOOP-13786-HADOOP-13345-023.patch, HADOOP-13786-HADOOP-13345-024.patch, > HADOOP-13786-HADOOP-13345-025.patch, HADOOP-13786-HADOOP-13345-026.patch, > HADOOP-13786-HADOOP-13345-027.patch, HADOOP-13786-HADOOP-13345-028.patch, > HADOOP-13786-HADOOP-13345-028.patch, HADOOP-13786-HADOOP-13345-029.patch, > HADOOP-13786-HADOOP-13345-030.patch, HADOOP-13786-HADOOP-13345-031.patch, > HADOOP-13786-HADOOP-13345-032.patch, HADOOP-13786-HADOOP-13345-033.patch, > objectstore.pdf, s3committer-master.zip > > > A goal of this code is "support O(1) commits to S3 repositories in the > presence of failures". 
Implement it, including whatever is needed to > demonstrate the correctness of the algorithm. (That is, assuming that s3guard > provides a consistent view of the presence/absence of blobs, show that we can > commit directly.) > I consider us free to expose the blobstore-ness of the s3 output > streams (i.e. not visible until the close()), if we need to use that to allow > us to abort commit operations.
[jira] [Resolved] (HADOOP-14537) FindBugs warning in ECSchema#toString
[ https://issues.apache.org/jira/browse/HADOOP-14537?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Hongyuan Li resolved HADOOP-14537. -- Resolution: Duplicate [~ste...@apache.org] resolved it. > FindBugs warning in ECSchema#toString > - > > Key: HADOOP-14537 > URL: https://issues.apache.org/jira/browse/HADOOP-14537 > Project: Hadoop Common > Issue Type: Bug > Components: common, io >Reporter: Hongyuan Li >Assignee: Hongyuan Li > > Should we use entrySet instead of keySet, which is more efficient?
[jira] [Updated] (HADOOP-13786) Add S3Guard committer for zero-rename commits to S3 endpoints
[ https://issues.apache.org/jira/browse/HADOOP-13786?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran updated HADOOP-13786: Status: Open (was: Patch Available) > Add S3Guard committer for zero-rename commits to S3 endpoints > - > > Key: HADOOP-13786 > URL: https://issues.apache.org/jira/browse/HADOOP-13786 > Project: Hadoop Common > Issue Type: New Feature > Components: fs/s3 >Affects Versions: HADOOP-13345 >Reporter: Steve Loughran >Assignee: Steve Loughran > Attachments: HADOOP-13786-HADOOP-13345-001.patch, > HADOOP-13786-HADOOP-13345-002.patch, HADOOP-13786-HADOOP-13345-003.patch, > HADOOP-13786-HADOOP-13345-004.patch, HADOOP-13786-HADOOP-13345-005.patch, > HADOOP-13786-HADOOP-13345-006.patch, HADOOP-13786-HADOOP-13345-006.patch, > HADOOP-13786-HADOOP-13345-007.patch, HADOOP-13786-HADOOP-13345-009.patch, > HADOOP-13786-HADOOP-13345-010.patch, HADOOP-13786-HADOOP-13345-011.patch, > HADOOP-13786-HADOOP-13345-012.patch, HADOOP-13786-HADOOP-13345-013.patch, > HADOOP-13786-HADOOP-13345-015.patch, HADOOP-13786-HADOOP-13345-016.patch, > HADOOP-13786-HADOOP-13345-017.patch, HADOOP-13786-HADOOP-13345-018.patch, > HADOOP-13786-HADOOP-13345-019.patch, HADOOP-13786-HADOOP-13345-020.patch, > HADOOP-13786-HADOOP-13345-021.patch, HADOOP-13786-HADOOP-13345-022.patch, > HADOOP-13786-HADOOP-13345-023.patch, HADOOP-13786-HADOOP-13345-024.patch, > HADOOP-13786-HADOOP-13345-025.patch, HADOOP-13786-HADOOP-13345-026.patch, > HADOOP-13786-HADOOP-13345-027.patch, HADOOP-13786-HADOOP-13345-028.patch, > HADOOP-13786-HADOOP-13345-028.patch, HADOOP-13786-HADOOP-13345-029.patch, > HADOOP-13786-HADOOP-13345-030.patch, HADOOP-13786-HADOOP-13345-031.patch, > HADOOP-13786-HADOOP-13345-032.patch, objectstore.pdf, s3committer-master.zip > > > A goal of this code is "support O(1) commits to S3 repositories in the > presence of failures". Implement it, including whatever is needed to > demonstrate the correctness of the algorithm. 
(That is, assuming that s3guard > provides a consistent view of the presence/absence of blobs, show that we can > commit directly.) > I consider us free to expose the blobstore-ness of the s3 output > streams (i.e. not visible until the close()), if we need to use that to allow > us to abort commit operations.
[jira] [Commented] (HADOOP-14586) org.apache.hadoop.util.Shell in 2.7 breaks on Java 9 RC build; backport HADOOP-10775 to 2.7.x
[ https://issues.apache.org/jira/browse/HADOOP-14586?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16072777#comment-16072777 ] Konstantin Shvachko commented on HADOOP-14586: -- Hey Steve. I really don't like the idea of cherry-picking parts of a patch. It makes it harder to backport when you really need that change as a whole. I like the simplicity of [~thetaphi]'s change. We just need a JavaDoc comment to minimize confusion. > org.apache.hadoop.util.Shell in 2.7 breaks on Java 9 RC build; > backport HADOOP-10775 to 2.7.x > --- > > Key: HADOOP-14586 > URL: https://issues.apache.org/jira/browse/HADOOP-14586 > Project: Hadoop Common > Issue Type: Bug > Components: common >Affects Versions: 2.7.2 > Environment: Java 9, build 175 (Java 9 release candidate as of June > 25th, 2017) >Reporter: Uwe Schindler >Assignee: Akira Ajisaka >Priority: Minor > Labels: Java9 > Attachments: HADOOP-14586-branch-2.7-01.patch, > HADOOP-14586-branch-2.7-02.patch > > > You cannot use any pre-Hadoop 2.8 component anymore with the latest release > candidate build of Java 9, because it fails with a > StringIndexOutOfBoundsException in {{org.apache.hadoop.util.Shell#}}. > This leads to a whole cascade of failing classes (next in chain is > StringUtils). > The reason is that the release candidate build of Java 9 no longer has "-ea" > in the version string and the system property "java.version" is now simply > "9". This causes the following line to fail fatally: > {code:java} > private static boolean IS_JAVA7_OR_ABOVE = > System.getProperty("java.version").substring(0, 3).compareTo("1.7") >= > 0; > {code} > Analysis: > - This code looks wrong, as comparing a version this way is incorrect. > - The {{substring(0, 3)}} is not needed, {{compareTo}} also works without it, > although it is still an invalid way to compare a version.
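Uwe's analysis can be shown concretely: "9".substring(0, 3) throws StringIndexOutOfBoundsException because the string has only one character, and even where substring succeeds, lexicographic comparison misorders versions ("1.10" compares less than "1.9"). A hedged sketch of a check that parses the major version number instead (the class and method names are illustrative, not the actual HADOOP-10775 fix):

```java
// Hypothetical sketch, not Hadoop's actual patch: extract the numeric major
// version instead of comparing a fixed-length substring of "java.version".
public class JavaVersionCheck {

    // Returns the major version: "1.8.0_131" -> 8, "9" -> 9, "11.0.2" -> 11.
    static int parseMajorVersion(String version) {
        // Pre-Java 9 version strings start with "1."; the major number follows.
        if (version.startsWith("1.")) {
            version = version.substring(2);
        }
        // Keep only the leading digits ("9-ea" -> "9", "11.0.2" -> "11").
        int end = 0;
        while (end < version.length() && Character.isDigit(version.charAt(end))) {
            end++;
        }
        return Integer.parseInt(version.substring(0, end));
    }

    public static void main(String[] args) {
        // Works for both the old ("1.7.0_80") and the new ("9") formats,
        // so it cannot throw the StringIndexOutOfBoundsException above.
        System.out.println(parseMajorVersion(System.getProperty("java.version")) >= 7);
    }
}
```

On Java 9 and later, `Runtime.version()` makes even this parsing unnecessary, but branch-2.7 must still run on Java 7, so a string-based parse like the above is the kind of shape such a fix takes.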
[jira] [Commented] (HADOOP-14444) New implementation of ftp and sftp filesystems
[ https://issues.apache.org/jira/browse/HADOOP-1?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16072775#comment-16072775 ] Hadoop QA commented on HADOOP-1: | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 17s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 31 new or modified test files. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 39s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 24s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 24s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 26s{color} | {color:green} trunk passed {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Skipped patched modules with no Java source: hadoop-tools {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 0s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 50s{color} | {color:green} trunk passed {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 11s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 3s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | 
{color:green} 1m 28s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 28s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 32s{color} | {color:orange} hadoop-tools: The patch generated 488 new + 0 unchanged - 0 fixed = 488 total (was 0) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 51s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 5s{color} | {color:green} The patch has no ill-formed XML file. {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Skipped patched modules with no Java source: hadoop-tools {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 28s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 4s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 18s{color} | {color:green} hadoop-ftp in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 46m 19s{color} | {color:green} hadoop-tools in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 24s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 78m 40s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:14b5c93 | | JIRA Issue | HADOOP-1 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12875529/HADOOP-1.4.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit xml findbugs checkstyle | | uname | Linux 3ac0f56a39bc 3.13.0-116-generic #163-Ubuntu SMP Fri Mar 31 14:13:22 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / bf1f599 | | Default Java | 1.8.0_131 | | checkstyle | https://builds.apache.org/job/PreCommit-HADOOP-Build/12706/artifact/patchprocess/diff-checkstyle-hadoop-tools.txt | | Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/12706/testReport/ | | modules | C: hadoop-tools/hadoop-ftp hadoop-tools U: hadoop-tools | | Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/12706/console | | Powered by | Apache Yetus 0.5.0-SNAPSHOT
[jira] [Commented] (HADOOP-14537) FindBugs warning in ECSchema#toString
[ https://issues.apache.org/jira/browse/HADOOP-14537?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16072774#comment-16072774 ] Steve Loughran commented on HADOOP-14537: - Find the fix which addressed it and close this as a duplicate of (or contained by) that fix. Thanks. It's probably the checkstyle/findbugs thing: someone updated the style checkers, and the resulting flood of complaints forced someone to do a big patch to quiet the build. > FindBugs warning in ECSchema#toString > - > > Key: HADOOP-14537 > URL: https://issues.apache.org/jira/browse/HADOOP-14537 > Project: Hadoop Common > Issue Type: Bug > Components: common, io >Reporter: Hongyuan Li >Assignee: Hongyuan Li > > Should we use entrySet instead of keySet, which is more efficient? -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
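The entrySet-versus-keySet question is about avoiding a per-key lookup: iterating {{keySet()}} forces an extra {{get()}} for every value, while {{entrySet()}} yields both key and value in a single traversal. A minimal standalone sketch (illustrative names, not the actual {{ECSchema}} code):

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class EntrySetDemo {
  // keySet() iteration: every value costs an extra get() lookup per key.
  static String viaKeySet(Map<String, String> m) {
    StringBuilder sb = new StringBuilder();
    for (String k : m.keySet()) {
      sb.append(k).append('=').append(m.get(k)).append(", ");
    }
    return sb.toString();
  }

  // entrySet() iteration: key and value come from a single traversal.
  static String viaEntrySet(Map<String, String> m) {
    StringBuilder sb = new StringBuilder();
    for (Map.Entry<String, String> e : m.entrySet()) {
      sb.append(e.getKey()).append('=').append(e.getValue()).append(", ");
    }
    return sb.toString();
  }

  public static void main(String[] args) {
    Map<String, String> opts = new LinkedHashMap<>();
    opts.put("codec", "rs");
    opts.put("numDataUnits", "6");
    // Both produce identical output; entrySet just does less work.
    System.out.println(viaKeySet(opts).equals(viaEntrySet(opts)));
  }
}
```

Both forms produce the same string, so a toString() switch-over is behavior-preserving; the saving is the avoided hash lookup per key.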
[jira] [Updated] (HADOOP-14470) CommandWithDestination#create used redundant ternary operator
[ https://issues.apache.org/jira/browse/HADOOP-14470?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran updated HADOOP-14470: Resolution: Duplicate Status: Resolved (was: Patch Available) > CommandWithDestination#create used redundant ternary operator > --- > > Key: HADOOP-14470 > URL: https://issues.apache.org/jira/browse/HADOOP-14470 > Project: Hadoop Common > Issue Type: Improvement > Components: common, fs >Affects Versions: 3.0.0-alpha3 >Reporter: Hongyuan Li >Assignee: Hongyuan Li >Priority: Trivial > Attachments: HADOOP-14470-001.patch > > > Inside the if statement, lazyPersist is always true, so the ternary operator is > redundant: with {{lazyPersist == true}} in the if branch, {{lazyPersist ? 1 : > getDefaultReplication(item.path)}} can only evaluate to 1. > The related code, from > {{org.apache.hadoop.fs.shell.CommandWithDestination}} around line 504: > {code:java} >FSDataOutputStream create(PathData item, boolean lazyPersist, > boolean direct) > throws IOException { > try { > if (lazyPersist) { // inside this branch, lazyPersist is always true > ... > return create(item.path, > FsPermission.getFileDefault().applyUMask( > FsPermission.getUMask(getConf())), > createFlags, > getConf().getInt(IO_FILE_BUFFER_SIZE_KEY, > IO_FILE_BUFFER_SIZE_DEFAULT), > lazyPersist ? 1 : getDefaultReplication(item.path), > // *this is redundant* > getDefaultBlockSize(), > null, > null); > } else { > return create(item.path, true); > } > } finally { // might have been created but stream was interrupted > if (!direct) { > deleteOnExit(item.path); > } > } > } > {code} -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
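The redundancy pattern itself can be shown in isolation: inside a branch guarded by the condition, a ternary on that same condition always takes its first arm. A sketch with hypothetical names (not the Hadoop source):

```java
// Sketch of the branch-constant ternary and its folded equivalent.
public class TernaryDemo {
  static int replicationBefore(boolean lazyPersist, int defaultReplication) {
    if (lazyPersist) {
      // lazyPersist is always true here, so this ternary always yields 1.
      return lazyPersist ? 1 : defaultReplication;
    }
    return defaultReplication;
  }

  static int replicationAfter(boolean lazyPersist, int defaultReplication) {
    if (lazyPersist) {
      return 1; // the folded, behaviorally identical form
    }
    return defaultReplication;
  }

  public static void main(String[] args) {
    // Equivalent for every input, so the fold is a pure simplification.
    System.out.println(replicationBefore(true, 3) == replicationAfter(true, 3)
        && replicationBefore(false, 3) == replicationAfter(false, 3));
  }
}
```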
[jira] [Commented] (HADOOP-14621) S3A client raising ConnectionPoolTimeoutException
[ https://issues.apache.org/jira/browse/HADOOP-14621?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16072705#comment-16072705 ] Steve Loughran commented on HADOOP-14621: - Stack {code} testCommitterWithNoOutputs(org.apache.hadoop.fs.s3a.commit.magic.ITestMagicCommitProtocol) Time elapsed: 3.15 sec <<< ERROR! java.io.InterruptedIOException: getFileStatus on s3a://hwdev-steve-ireland-new/fork-0007/test: com.amazonaws.SdkClientException: Unable to execute HTTP request: Timeout waiting for connection from pool at org.apache.hadoop.fs.s3a.S3AUtils.translateException(S3AUtils.java:145) at org.apache.hadoop.fs.s3a.S3AUtils.translateException(S3AUtils.java:119) at org.apache.hadoop.fs.s3a.S3AFileSystem.s3GetFileStatus(S3AFileSystem.java:2040) at org.apache.hadoop.fs.s3a.S3AFileSystem.checkPathForDirectory(S3AFileSystem.java:1857) at org.apache.hadoop.fs.s3a.S3AFileSystem.innerMkdirs(S3AFileSystem.java:1890) at org.apache.hadoop.fs.s3a.S3AFileSystem.mkdirs(S3AFileSystem.java:1826) at org.apache.hadoop.fs.FileSystem.mkdirs(FileSystem.java:2230) at org.apache.hadoop.fs.contract.AbstractFSContractTestBase.mkdirs(AbstractFSContractTestBase.java:338) at org.apache.hadoop.fs.contract.AbstractFSContractTestBase.setup(AbstractFSContractTestBase.java:193) at org.apache.hadoop.fs.s3a.commit.AbstractCommitITest.setup(AbstractCommitITest.java:93) at org.apache.hadoop.fs.s3a.commit.AbstractITCommitProtocol.setup(AbstractITCommitProtocol.java:140) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55) at org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74) Caused by: com.amazonaws.SdkClientException: Unable to execute HTTP request: Timeout waiting for connection from pool at com.amazonaws.http.AmazonHttpClient$RequestExecutor.handleRetryableException(AmazonHttpClient.java:1069) at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeHelper(AmazonHttpClient.java:1035) at com.amazonaws.http.AmazonHttpClient$RequestExecutor.doExecute(AmazonHttpClient.java:742) at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeWithTimer(AmazonHttpClient.java:716) at com.amazonaws.http.AmazonHttpClient$RequestExecutor.execute(AmazonHttpClient.java:699) at com.amazonaws.http.AmazonHttpClient$RequestExecutor.access$500(AmazonHttpClient.java:667) at com.amazonaws.http.AmazonHttpClient$RequestExecutionBuilderImpl.execute(AmazonHttpClient.java:649) at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:513) at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:4221) at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:4168) at com.amazonaws.services.s3.AmazonS3Client.getObjectMetadata(AmazonS3Client.java:1249) at org.apache.hadoop.fs.s3a.S3AFileSystem.getObjectMetadata(S3AFileSystem.java:1162) at org.apache.hadoop.fs.s3a.S3AFileSystem.s3GetFileStatus(S3AFileSystem.java:2022) at org.apache.hadoop.fs.s3a.S3AFileSystem.checkPathForDirectory(S3AFileSystem.java:1857) at org.apache.hadoop.fs.s3a.S3AFileSystem.innerMkdirs(S3AFileSystem.java:1890) at org.apache.hadoop.fs.s3a.S3AFileSystem.mkdirs(S3AFileSystem.java:1826) at org.apache.hadoop.fs.FileSystem.mkdirs(FileSystem.java:2230) at 
org.apache.hadoop.fs.contract.AbstractFSContractTestBase.mkdirs(AbstractFSContractTestBase.java:338) at org.apache.hadoop.fs.contract.AbstractFSContractTestBase.setup(AbstractFSContractTestBase.java:193) at org.apache.hadoop.fs.s3a.commit.AbstractCommitITest.setup(AbstractCommitITest.java:93) at org.apache.hadoop.fs.s3a.commit.AbstractITCommitProtocol.setup(AbstractITCommitProtocol.java:140) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at
[jira] [Created] (HADOOP-14621) S3A client raising ConnectionPoolTimeoutException
Steve Loughran created HADOOP-14621: --- Summary: S3A client raising ConnectionPoolTimeoutException Key: HADOOP-14621 URL: https://issues.apache.org/jira/browse/HADOOP-14621 Project: Hadoop Common Issue Type: Sub-task Components: fs/s3, test Affects Versions: 3.0.0-beta1 Environment: Home network with 2+ other users on high bandwidth activities Reporter: Steve Loughran Priority: Minor Parallel test with threads = 12 triggering connection pool timeout. Hypothesis? Congested network triggering pool timeout. Fix? For tests, could increase pool size For retry logic, this should be considered retriable, even on idempotent calls (as its a failure to acquire a connection -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
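For the test-side mitigation mentioned above, the size of the S3A client's HTTP connection pool is controlled by {{fs.s3a.connection.maximum}}. A sketch of enlarging it in core-site.xml for parallel test runs (the value 48 is an arbitrary example, not a recommendation):

```xml
<!-- Assumption: 12 concurrent test threads can exhaust the default
     S3A connection pool; raising the limit avoids pool-acquire timeouts. -->
<property>
  <name>fs.s3a.connection.maximum</name>
  <value>48</value>
</property>
```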
[jira] [Comment Edited] (HADOOP-14444) New implementation of ftp and sftp filesystems
[ https://issues.apache.org/jira/browse/HADOOP-1?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16072688#comment-16072688 ] Hongyuan Li edited comment on HADOOP-1 at 7/3/17 4:48 PM: -- 1、{{SFTPChannel.java}}#{{getFileStatus}} should use {{long accessTime = attr.getATime() * 1000L;}} instead of {{long accessTime = 0;}}. 2、{{String user = Integer.toString(attr.getUId());}} and {{String group = Integer.toString(attr.getGId())}} should get the real user and group names, which can be parsed from {{LsEntry sftpFile}}#{{sftpFile.getLongname()}}; this is what my own SFTPFileSystem implementation uses. 3、{{SFTPChannel}}#{{listFiles}} uses {code} try { //Get all items from the parent directory sftpFiles = client.ls(pathName); } catch (SftpException e) { LOG.debug("Error when listing files", e); throw new FileNotFoundException(String.format(ErrorStrings.E_FILE_NOTFOUND, file)); } {code}, but looking into ChannelSftp, only {{SftpException.id}} == {{ChannelSftp.SSH_FX_NO_SUCH_FILE}} means file not found. I suggest you look deeper into the {{jcraft}} code. It is only my suggestion. *Update* 4、SFTPFileSystem may implement {{FileSystem}}#{{access}}; FTPFileSystem cannot, because the ftp protocol cannot return the actual user and group of a file or directory. was (Author: hongyuan li): 1、{{SFTPChannel.java}}#{{getFileStatus}} should use {{long acessTime =attr.getATime() * 1000L;}} instead of {{long accessTime = 0;}}. 2、{{String user = Integer.toString(attr.getUId());}} and {{String group = Integer.toString(attr.getGId())}} should got real user and group name which can be parsed from {LsEntry sftpFile}#{{sftpFile.getLongname()}}, which is used by my own written SFTPFileSystem. 
3、{{SFTPChannel }}#{{listFiles}} using {code} try { //Get all items from the parent directory sftpFiles = client.ls(pathName); } catch (SftpException e) { LOG.debug("Error when listing files", e); throw new FileNotFoundException(String.format(ErrorStrings.E_FILE_NOTFOUND, file)); } {code}, but deep into ChannelSFTP only {{SftpException.id}} == {{ChannelSftp.SSH_FX_NO_SUCH_FILE}} means file not found. suggest you to deep into the {{jcraft}} code It is only my suggestion. *Update* 4、SFTPFileSystem may implements {{FileSystem}}#{{access}}, FTPFileSystem cannot because ftp protocol cannot return actual user and group of file or directory. > New implementation of ftp and sftp filesystems > -- > > Key: HADOOP-1 > URL: https://issues.apache.org/jira/browse/HADOOP-1 > Project: Hadoop Common > Issue Type: New Feature > Components: fs >Affects Versions: 2.8.0 >Reporter: Lukas Waldmann >Assignee: Lukas Waldmann > Attachments: HADOOP-1.2.patch, HADOOP-1.3.patch, > HADOOP-1.4.patch, HADOOP-1.patch > > > Current implementation of FTP and SFTP filesystems have severe limitations > and performance issues when dealing with high number of files. Mine patch > solve those issues and integrate both filesystems such a way that most of the > core functionality is common for both and therefore simplifying the > maintainability. > The core features: > * Support for HTTP/SOCKS proxies > * Support for passive FTP > * Support of connection pooling - new connection is not created for every > single command but reused from the pool. > For huge number of files it shows order of magnitude performance improvement > over not pooled connections. > * Caching of directory trees. For ftp you always need to list whole directory > whenever you ask information about particular file. > Again for huge number of files it shows order of magnitude performance > improvement over not cached connections. 
> * Support of keep alive (NOOP) messages to avoid connection drops > * Support for Unix style or regexp wildcard glob - useful for listing a > particular files across whole directory tree > * Support for reestablishing broken ftp data transfers - can happen > surprisingly often -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
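The point about {{ChannelSftp}} error codes in the comment above can be sketched in isolation: only the SFTP status code for "no such file" should become a {{FileNotFoundException}}; every other {{SftpException}} id (permission denied, connection loss, ...) should stay a generic {{IOException}}. This is a standalone illustration, not the patch itself; the constants mirror JSch's {{ChannelSftp.SSH_FX_*}} values from the SFTP draft specification:

```java
import java.io.FileNotFoundException;
import java.io.IOException;

public class SftpErrorMapping {
  // SFTP protocol status codes (same numeric values JSch exposes).
  static final int SSH_FX_NO_SUCH_FILE = 2;
  static final int SSH_FX_PERMISSION_DENIED = 3;

  static IOException translate(int sftpStatusId, String path) {
    if (sftpStatusId == SSH_FX_NO_SUCH_FILE) {
      return new FileNotFoundException("File " + path + " does not exist");
    }
    // Everything else stays a generic IOException so real failures
    // are not silently masked as "file not found".
    return new IOException("SFTP error " + sftpStatusId + " on " + path);
  }

  public static void main(String[] args) {
    System.out.println(translate(SSH_FX_NO_SUCH_FILE, "/a") instanceof FileNotFoundException);      // true
    System.out.println(translate(SSH_FX_PERMISSION_DENIED, "/a") instanceof FileNotFoundException); // false
  }
}
```

In the real catch block this means inspecting {{SftpException.id}} before deciding which exception type to throw, rather than mapping every failure to {{FileNotFoundException}}.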
[jira] [Comment Edited] (HADOOP-14444) New implementation of ftp and sftp filesystems
[ https://issues.apache.org/jira/browse/HADOOP-1?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16072688#comment-16072688 ] Hongyuan Li edited comment on HADOOP-1 at 7/3/17 4:45 PM: -- 1、{{SFTPChannel.java}}#{{getFileStatus}} should use {{long acessTime =attr.getATime() * 1000L;}} instead of {{long accessTime = 0;}}. 2、{{String user = Integer.toString(attr.getUId());}} and {{String group = Integer.toString(attr.getGId())}} should got real user and group name which can be parsed from {LsEntry sftpFile}#{{sftpFile.getLongname()}}, which is used by my own written SFTPFileSystem. 3、{{SFTPChannel }}#{{listFiles}} using {code} try { //Get all items from the parent directory sftpFiles = client.ls(pathName); } catch (SftpException e) { LOG.debug("Error when listing files", e); throw new FileNotFoundException(String.format(ErrorStrings.E_FILE_NOTFOUND, file)); } {code}, but deep into ChannelSFTP only {{SftpException.id}} == {{ChannelSftp.SSH_FX_NO_SUCH_FILE}} means file not found. suggest you to deep into the {{jcraft}} code It is only my suggestion. *Update* 4、SFTPFileSystem may implements {{FileSystem}}#{{access}}, FTPFileSystem cannot because ftp protocol cannot return actual user and group of file or directory. was (Author: hongyuan li): 1、{{SFTPChannel.java}}#{{getFileStatus}} should use {{long acessTime =attr.getATime() * 1000L;}} instead of {{long accessTime = 0;}}. 2、{{String user = Integer.toString(attr.getUId());}} and {{String group = Integer.toString(attr.getGId())}} should got real user and group name which can be parsed from {LsEntry sftpFile}#{{sftpFile.getLongname()}}, which is used by my own written SFTPFileSystem. 
3、{{SFTPChannel }}#{{listFiles}} using {code} try { //Get all items from the parent directory sftpFiles = client.ls(pathName); } catch (SftpException e) { LOG.debug("Error when listing files", e); throw new FileNotFoundException(String.format(ErrorStrings.E_FILE_NOTFOUND, file)); } {code}, but deep into ChannelSFTP only {{SftpException.id}} == {{ChannelSftp.SSH_FX_NO_SUCH_FILE}} means file not found. suggest you to deep into the {{jcraft}} code It is only my suggestion. > New implementation of ftp and sftp filesystems > -- > > Key: HADOOP-1 > URL: https://issues.apache.org/jira/browse/HADOOP-1 > Project: Hadoop Common > Issue Type: New Feature > Components: fs >Affects Versions: 2.8.0 >Reporter: Lukas Waldmann >Assignee: Lukas Waldmann > Attachments: HADOOP-1.2.patch, HADOOP-1.3.patch, > HADOOP-1.4.patch, HADOOP-1.patch > > > Current implementation of FTP and SFTP filesystems have severe limitations > and performance issues when dealing with high number of files. Mine patch > solve those issues and integrate both filesystems such a way that most of the > core functionality is common for both and therefore simplifying the > maintainability. > The core features: > * Support for HTTP/SOCKS proxies > * Support for passive FTP > * Support of connection pooling - new connection is not created for every > single command but reused from the pool. > For huge number of files it shows order of magnitude performance improvement > over not pooled connections. > * Caching of directory trees. For ftp you always need to list whole directory > whenever you ask information about particular file. > Again for huge number of files it shows order of magnitude performance > improvement over not cached connections. 
> * Support of keep alive (NOOP) messages to avoid connection drops > * Support for Unix style or regexp wildcard glob - useful for listing a > particular files across whole directory tree > * Support for reestablishing broken ftp data transfers - can happen > surprisingly often -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-14444) New implementation of ftp and sftp filesystems
[ https://issues.apache.org/jira/browse/HADOOP-1?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16072688#comment-16072688 ] Hongyuan Li commented on HADOOP-1: -- 1、{{SFTPChannel.java}}#{{getFileStatus}} should use {{long acessTime =attr.getATime() * 1000L;}} instead of {{long accessTime = 0;}}. 2、{{String user = Integer.toString(attr.getUId());}} and {{String group = Integer.toString(attr.getGId())}} should got real user and group name which can be parsed from {LsEntry sftpFile}#{{sftpFile.getLongname()}}, which is used by my own written SFTPFileSystem. 3、{{SFTPChannel }}#{{listFiles}} using {code} try { //Get all items from the parent directory sftpFiles = client.ls(pathName); } catch (SftpException e) { LOG.debug("Error when listing files", e); throw new FileNotFoundException(String.format(ErrorStrings.E_FILE_NOTFOUND, file)); } {code}, but deep into ChannelSFTP only {{SftpException.id}} == {{ChannelSftp.SSH_FX_NO_SUCH_FILE}} means file not found. suggest you to deep into the {{jcraft}} code It is only my suggestion. > New implementation of ftp and sftp filesystems > -- > > Key: HADOOP-1 > URL: https://issues.apache.org/jira/browse/HADOOP-1 > Project: Hadoop Common > Issue Type: New Feature > Components: fs >Affects Versions: 2.8.0 >Reporter: Lukas Waldmann >Assignee: Lukas Waldmann > Attachments: HADOOP-1.2.patch, HADOOP-1.3.patch, > HADOOP-1.4.patch, HADOOP-1.patch > > > Current implementation of FTP and SFTP filesystems have severe limitations > and performance issues when dealing with high number of files. Mine patch > solve those issues and integrate both filesystems such a way that most of the > core functionality is common for both and therefore simplifying the > maintainability. > The core features: > * Support for HTTP/SOCKS proxies > * Support for passive FTP > * Support of connection pooling - new connection is not created for every > single command but reused from the pool. 
> For huge number of files it shows order of magnitude performance improvement > over not pooled connections. > * Caching of directory trees. For ftp you always need to list whole directory > whenever you ask information about particular file. > Again for huge number of files it shows order of magnitude performance > improvement over not cached connections. > * Support of keep alive (NOOP) messages to avoid connection drops > * Support for Unix style or regexp wildcard glob - useful for listing a > particular files across whole directory tree > * Support for reestablishing broken ftp data transfers - can happen > surprisingly often -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Comment Edited] (HADOOP-14444) New implementation of ftp and sftp filesystems
[ https://issues.apache.org/jira/browse/HADOOP-1?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16072674#comment-16072674 ] Hongyuan Li edited comment on HADOOP-1 at 7/3/17 4:17 PM: -- i update the comment, {{i suggest use redis or LevelDB to implement the dirTree}} This may be very difficult. Ignore this. was (Author: hongyuan li): i update the comment, {{i suggest use redis or LevelDB to implement the dirTree}} This may be very difficult. > New implementation of ftp and sftp filesystems > -- > > Key: HADOOP-1 > URL: https://issues.apache.org/jira/browse/HADOOP-1 > Project: Hadoop Common > Issue Type: New Feature > Components: fs >Affects Versions: 2.8.0 >Reporter: Lukas Waldmann >Assignee: Lukas Waldmann > Attachments: HADOOP-1.2.patch, HADOOP-1.3.patch, > HADOOP-1.4.patch, HADOOP-1.patch > > > Current implementation of FTP and SFTP filesystems have severe limitations > and performance issues when dealing with high number of files. Mine patch > solve those issues and integrate both filesystems such a way that most of the > core functionality is common for both and therefore simplifying the > maintainability. > The core features: > * Support for HTTP/SOCKS proxies > * Support for passive FTP > * Support of connection pooling - new connection is not created for every > single command but reused from the pool. > For huge number of files it shows order of magnitude performance improvement > over not pooled connections. > * Caching of directory trees. For ftp you always need to list whole directory > whenever you ask information about particular file. > Again for huge number of files it shows order of magnitude performance > improvement over not cached connections. 
> * Support of keep alive (NOOP) messages to avoid connection drops > * Support for Unix style or regexp wildcard glob - useful for listing a > particular files across whole directory tree > * Support for reestablishing broken ftp data transfers - can happen > surprisingly often -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Created] (HADOOP-14620) S3A authentication failure for regions other than us-east-1
Ilya Fourmanov created HADOOP-14620: --- Summary: S3A authentication failure for regions other than us-east-1 Key: HADOOP-14620 URL: https://issues.apache.org/jira/browse/HADOOP-14620 Project: Hadoop Common Issue Type: Bug Components: fs/s3 Affects Versions: 2.7.3, 2.8.0 Reporter: Ilya Fourmanov hadoop fs operations against s3a:// URLs fail authentication for S3 buckets hosted in regions other than the default us-east-1. Steps to reproduce: # Create an S3 bucket in eu-west-1 # Using an IAM instance profile or fs.s3a.access.key/fs.s3a.secret.key, run the following command: {code} hadoop --loglevel DEBUG fs -D fs.s3a.endpoint=s3.eu-west-1.amazonaws.com -ls s3a://your-eu-west-1-hosted-bucket/ {code} Expected behaviour: You will see a listing of the bucket. Actual behaviour: You will get a 403 Authentication Denied response from AWS S3. The reason is a mismatch between the string to sign, as defined in http://docs.aws.amazon.com/AmazonS3/latest/dev/RESTAuthentication.html, provided by hadoop and the one expected by AWS. If you use https://aws.amazon.com/code/199 to analyse the StringToSignBytes returned by AWS, you will see that AWS expects CanonicalizedResource to be in the form /your-eu-west-1-hosted-bucket{color:red}.s3.eu-west-1.amazonaws.com{color}/. Hadoop provides it as /your-eu-west-1-hosted-bucket/ Note that the AWS documentation doesn't explicitly state that the endpoint or full DNS address should be appended to CanonicalizedResource; however, practice shows it is actually required. I've also submitted this to AWS for them to correct the behaviour or documentation. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
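The mismatch the reporter describes can be made concrete with a toy comparison of the two CanonicalizedResource forms. This is illustrative only; it is neither AWS's nor Hadoop's signing code, and the "expected" form is the one the reporter observed, not documented AWS behaviour:

```java
// Toy comparison of the CanonicalizedResource strings described above.
public class StringToSignDemo {
  // The form Hadoop's client builds for the V2 string to sign.
  static String clientResource(String bucket, String key) {
    return "/" + bucket + "/" + key;
  }

  // The form the reporter observed the service expecting for a
  // virtual-hosted bucket on a non-default regional endpoint.
  static String observedExpected(String bucket, String endpoint, String key) {
    return "/" + bucket + "." + endpoint + "/" + key;
  }

  public static void main(String[] args) {
    System.out.println(clientResource("your-eu-west-1-hosted-bucket", ""));
    System.out.println(observedExpected("your-eu-west-1-hosted-bucket",
        "s3.eu-west-1.amazonaws.com", ""));
  }
}
```

Since the two strings differ, the HMAC signatures computed over them differ as well, which is exactly why the service rejects the request with a 403.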
[jira] [Commented] (HADOOP-14444) New implementation of ftp and sftp filesystems
[ https://issues.apache.org/jira/browse/HADOOP-1?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16072669#comment-16072669 ] Lukas Waldmann commented on HADOOP-1: - should this property be identified by host or something? : possibly - granularity on directory level seems to me over the top i suggest use redis or LevelDB to implement the dirTree: I understand your point - but I feel it's out of the scope of this commit. Generally speaking I would rather now concentrate on making the FS part of Hadoop and leave improvements for later stage (if there are not critical for functionality) But as we all know critical functionality is different for each of us :) > New implementation of ftp and sftp filesystems > -- > > Key: HADOOP-1 > URL: https://issues.apache.org/jira/browse/HADOOP-1 > Project: Hadoop Common > Issue Type: New Feature > Components: fs >Affects Versions: 2.8.0 >Reporter: Lukas Waldmann >Assignee: Lukas Waldmann > Attachments: HADOOP-1.2.patch, HADOOP-1.3.patch, > HADOOP-1.4.patch, HADOOP-1.patch > > > Current implementation of FTP and SFTP filesystems have severe limitations > and performance issues when dealing with high number of files. Mine patch > solve those issues and integrate both filesystems such a way that most of the > core functionality is common for both and therefore simplifying the > maintainability. > The core features: > * Support for HTTP/SOCKS proxies > * Support for passive FTP > * Support of connection pooling - new connection is not created for every > single command but reused from the pool. > For huge number of files it shows order of magnitude performance improvement > over not pooled connections. > * Caching of directory trees. For ftp you always need to list whole directory > whenever you ask information about particular file. > Again for huge number of files it shows order of magnitude performance > improvement over not cached connections. 
> * Support of keep alive (NOOP) messages to avoid connection drops > * Support for Unix style or regexp wildcard glob - useful for listing a > particular files across whole directory tree > * Support for reestablishing broken ftp data transfers - can happen > surprisingly often -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Comment Edited] (HADOOP-14444) New implementation of ftp and sftp filesystems
[ https://issues.apache.org/jira/browse/HADOOP-1?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16072637#comment-16072637 ] Hongyuan Li edited comment on HADOOP-1 at 7/3/17 3:57 PM: -- should this property be identified by host or something ? like {{fscache.enabled}}. I'm not good at naming properties. *Update* i suggest use {{redis}} or {{LevelDB}} to implement th de {{dirTree}}, which is seen by all process that staring at this directories. *Update* Ignore update ablove, hard to implment it. was (Author: hongyuan li): should this property be identified by host or something ? like {{fscache.enabled}}. I'm not good at naming properties. *Update* i suggest use {{redis}} or {{LevelDB}} to implement th de {{dirTree}}, which is seen by all process that staring at this directories. > New implementation of ftp and sftp filesystems > -- > > Key: HADOOP-1 > URL: https://issues.apache.org/jira/browse/HADOOP-1 > Project: Hadoop Common > Issue Type: New Feature > Components: fs >Affects Versions: 2.8.0 >Reporter: Lukas Waldmann >Assignee: Lukas Waldmann > Attachments: HADOOP-1.2.patch, HADOOP-1.3.patch, > HADOOP-1.4.patch, HADOOP-1.patch > > > Current implementation of FTP and SFTP filesystems have severe limitations > and performance issues when dealing with high number of files. Mine patch > solve those issues and integrate both filesystems such a way that most of the > core functionality is common for both and therefore simplifying the > maintainability. > The core features: > * Support for HTTP/SOCKS proxies > * Support for passive FTP > * Support of connection pooling - new connection is not created for every > single command but reused from the pool. > For huge number of files it shows order of magnitude performance improvement > over not pooled connections. > * Caching of directory trees. For ftp you always need to list whole directory > whenever you ask information about particular file. 
> Again for huge number of files it shows order of magnitude performance > improvement over not cached connections. > * Support of keep alive (NOOP) messages to avoid connection drops > * Support for Unix style or regexp wildcard glob - useful for listing a > particular files across whole directory tree > * Support for reestablishing broken ftp data transfers - can happen > surprisingly often -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Comment Edited] (HADOOP-14444) New implementation of ftp and sftp filesystems
[ https://issues.apache.org/jira/browse/HADOOP-1?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16072637#comment-16072637 ] Hongyuan Li edited comment on HADOOP-1 at 7/3/17 3:53 PM: -- should this property be identified by host or something ? like {{fscache.enabled}}. I'm not good at naming properties. *Update* i suggest use {{redis}} or {{LevelDB}} to implement th de {{dirTree}}, which is seen by all process that staring at this directories. was (Author: hongyuan li): should this property be identified by host or something ? like {{fscache.enabled}}. I'm not good at naming properties. *Update* i suggest use {{redis}} or {{LevelDB}} to implement th de {{dirTree}}, which is seen by all process that staring at this progress. > New implementation of ftp and sftp filesystems > -- > > Key: HADOOP-1 > URL: https://issues.apache.org/jira/browse/HADOOP-1 > Project: Hadoop Common > Issue Type: New Feature > Components: fs >Affects Versions: 2.8.0 >Reporter: Lukas Waldmann >Assignee: Lukas Waldmann > Attachments: HADOOP-1.2.patch, HADOOP-1.3.patch, > HADOOP-1.4.patch, HADOOP-1.patch > > > Current implementation of FTP and SFTP filesystems have severe limitations > and performance issues when dealing with high number of files. Mine patch > solve those issues and integrate both filesystems such a way that most of the > core functionality is common for both and therefore simplifying the > maintainability. > The core features: > * Support for HTTP/SOCKS proxies > * Support for passive FTP > * Support of connection pooling - new connection is not created for every > single command but reused from the pool. > For huge number of files it shows order of magnitude performance improvement > over not pooled connections. > * Caching of directory trees. For ftp you always need to list whole directory > whenever you ask information about particular file. 
> Again for huge number of files it shows order of magnitude performance > improvement over not cached connections. > * Support of keep alive (NOOP) messages to avoid connection drops > * Support for Unix style or regexp wildcard glob - useful for listing a > particular files across whole directory tree > * Support for reestablishing broken ftp data transfers - can happen > surprisingly often -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-14444) New implementation of ftp and sftp filesystems
[ https://issues.apache.org/jira/browse/HADOOP-14444?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Lukas Waldmann updated HADOOP-14444: Status: Patch Available (was: In Progress)
[jira] [Updated] (HADOOP-14444) New implementation of ftp and sftp filesystems
[ https://issues.apache.org/jira/browse/HADOOP-14444?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Lukas Waldmann updated HADOOP-14444: Attachment: HADOOP-14444.4.patch
[jira] [Updated] (HADOOP-14444) New implementation of ftp and sftp filesystems
[ https://issues.apache.org/jira/browse/HADOOP-14444?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Lukas Waldmann updated HADOOP-14444: Status: In Progress (was: Patch Available) Fix for doc issues
[jira] [Comment Edited] (HADOOP-14444) New implementation of ftp and sftp filesystems
[ https://issues.apache.org/jira/browse/HADOOP-14444?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16072637#comment-16072637 ] Hongyuan Li edited comment on HADOOP-14444 at 7/3/17 3:40 PM:
--
Should this property be identified by host or something, like {{fscache.enabled}}? I'm not good at naming properties.
*Update*: I suggest using {{redis}} or {{LevelDB}} to implement the {{dirTree}}, so that it is seen by all processes watching this progress.

was (Author: hongyuan li):
Should this property be identified by host or something, like {{fscache.enabled}}? I'm not good at naming properties.
[jira] [Commented] (HADOOP-14444) New implementation of ftp and sftp filesystems
[ https://issues.apache.org/jira/browse/HADOOP-14444?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16072637#comment-16072637 ] Hongyuan Li commented on HADOOP-14444:
--
Should this property be identified by host or something, like {{fscache.enabled}}? I'm not good at naming properties.
[jira] [Commented] (HADOOP-14444) New implementation of ftp and sftp filesystems
[ https://issues.apache.org/jira/browse/HADOOP-14444?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16072628#comment-16072628 ] Lukas Waldmann commented on HADOOP-14444:
-
This boolean property defines whether the cache is used: fs..cache.directories
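A minimal {{core-site.xml}} sketch of how such a toggle might look. The key name in the comment above is abbreviated ("fs..cache.directories", with the scheme segment elided), so the {{ftp}} scheme used here is an assumption:

```xml
<!-- Hypothetical sketch: enables directory-tree caching for the ftp scheme.
     The scheme segment ("ftp") is an assumption; the comment above elides it. -->
<configuration>
  <property>
    <name>fs.ftp.cache.directories</name>
    <value>true</value>
  </property>
</configuration>
```

Setting the value to {{false}} would then fall back to issuing a fresh directory listing for every lookup.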
[jira] [Commented] (HADOOP-14444) New implementation of ftp and sftp filesystems
[ https://issues.apache.org/jira/browse/HADOOP-14444?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16072553#comment-16072553 ] Hongyuan Li commented on HADOOP-14444:
--
Adding a configuration property to control whether the cache is enabled is good, I think.
[jira] [Commented] (HADOOP-14444) New implementation of ftp and sftp filesystems
[ https://issues.apache.org/jira/browse/HADOOP-14444?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16072541#comment-16072541 ] Hadoop QA commented on HADOOP-14444:
| (x) *{color:red}-1 overall{color}* |
\\ \\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 17s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 31 new or modified test files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 45s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Skipped patched modules with no Java source: hadoop-tools {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 2s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 18s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 32s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 1m 32s{color} | {color:red} hadoop-tools generated 2 new + 159 unchanged - 0 fixed = 161 total (was 159) {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 32s{color} | {color:orange} hadoop-tools: The patch generated 487 new + 0 unchanged - 0 fixed = 487 total (was 0) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 5s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Skipped patched modules with no Java source: hadoop-tools {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 31s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 13s{color} | {color:red} hadoop-ftp in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 52s{color} | {color:red} hadoop-tools in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 18s{color} | {color:green} hadoop-ftp in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 50m 58s{color} | {color:green} hadoop-tools in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 25s{color} | {color:green} The patch does not generate ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 84m 35s{color} | {color:black} {color} |
\\ \\
|| Subsystem || Report/Notes ||
| Docker | Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HADOOP-14444 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12875506/HADOOP-14444.3.patch |
| Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit xml findbugs checkstyle |
| uname | Linux 561dca8e8c67 3.13.0-116-generic #163-Ubuntu SMP Fri Mar 31 14:13:22 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / bf1f599 |
| Default Java | 1.8.0_131 |
| javac | https://builds.apache.org/job/PreCommit-HADOOP-Build/12705/artifact/patchprocess/diff-compile-javac-hadoop-tools.txt |
| checkstyle |
[jira] [Commented] (HADOOP-14444) New implementation of ftp and sftp filesystems
[ https://issues.apache.org/jira/browse/HADOOP-14444?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16072439#comment-16072439 ] Lukas Waldmann commented on HADOOP-14444:
-
Hongyuan, the files on the FTP server are changed by a different application, right? In any case, the cache is updated for any create/mkdir/write operation performed on a filesystem created by FileSystem fs = FileSystem.get(conf); outside changes are not monitored, and if such a situation is common the cache should be disabled. Please try the new filesystem and let me know if you come across any problems. It can be used independently of the current filesystems - see readme.md. So far we have used it internally to our full satisfaction.
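The consistency model described in the comment above can be sketched in Java. This is illustrative only: it assumes the patched FTP filesystem is on the classpath and a reachable server; the URI and paths are invented for the example.

```java
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class FtpCacheSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // Hypothetical endpoint; host and paths are placeholders.
    FileSystem fs = FileSystem.get(URI.create("ftp://ftp.example.com/"), conf);

    // create/mkdir/write through this FileSystem instance update the cached
    // directory tree, so the exists() check below does not need a fresh LIST.
    Path dir = new Path("/upload/2017-07-03");
    fs.mkdirs(dir);
    System.out.println(fs.exists(dir));

    // Changes made by another application directly on the FTP server are NOT
    // monitored; if that is common, disable the cache via the boolean
    // property discussed earlier in this thread.
    fs.close();
  }
}
```

The design trade-off is the usual one for client-side caches: fast metadata lookups for a single writer, at the cost of staleness when an out-of-band writer exists.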
[jira] [Commented] (HADOOP-12502) SetReplication OutOfMemoryError
[ https://issues.apache.org/jira/browse/HADOOP-12502?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16072420#comment-16072420 ] Hadoop QA commented on HADOOP-12502:
| (/) *{color:green}+1 overall{color}* |
\\ \\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 15s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 3 new or modified test files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 37s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m 28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 11m 28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 42s{color} | {color:green} hadoop-common-project/hadoop-common: The patch generated 0 new + 431 unchanged - 1 fixed = 431 total (was 432) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 8m 1s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 35s{color} | {color:green} The patch does not generate ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 65m 8s{color} | {color:black} {color} |
\\ \\
|| Subsystem || Report/Notes ||
| Docker | Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HADOOP-12502 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12875489/HADOOP-12502-06.patch |
| Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle |
| uname | Linux 4e9413e6d9a6 3.13.0-116-generic #163-Ubuntu SMP Fri Mar 31 14:13:22 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / bf1f599 |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC1 |
| Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/12703/testReport/ |
| modules | C: hadoop-common-project/hadoop-common U: hadoop-common-project/hadoop-common |
| Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/12703/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT http://yetus.apache.org |
This message was automatically generated.

> SetReplication OutOfMemoryError
> ---
>
> Key: HADOOP-12502
> URL: https://issues.apache.org/jira/browse/HADOOP-12502
> Project: Hadoop Common
> Issue Type: Bug
> Affects Versions: 2.3.0
> Reporter: Philipp Schuegerl
> Assignee: Vinayakumar B
> Attachments: HADOOP-12502-01.patch, HADOOP-12502-02.patch, HADOOP-12502-03.patch, HADOOP-12502-04.patch, HADOOP-12502-05.patch, HADOOP-12502-06.patch
>
>
> Setting the replication of an HDFS folder recursively can run out of memory. E.g. with a large /var/log directory:
> hdfs dfs -setrep -R -w 1 /var/log
> Exception in thread "main" java.lang.OutOfMemoryError: GC overhead limit exceeded
> at
[jira] [Updated] (HADOOP-14444) New implementation of ftp and sftp filesystems
[ https://issues.apache.org/jira/browse/HADOOP-14444?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Lukas Waldmann updated HADOOP-14444: Attachment: HADOOP-14444.3.patch
[jira] [Updated] (HADOOP-14444) New implementation of ftp and sftp filesystems
[ https://issues.apache.org/jira/browse/HADOOP-14444?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Lukas Waldmann updated HADOOP-14444: Status: Patch Available (was: In Progress) Various build warnings fixed
[jira] [Updated] (HADOOP-14444) New implementation of ftp and sftp filesystems
[ https://issues.apache.org/jira/browse/HADOOP-14444?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Lukas Waldmann updated HADOOP-14444: Status: In Progress (was: Patch Available) Fixing various build issues
[jira] [Commented] (HADOOP-14443) Azure: Support retry and client side failover for authorization, SASKey and delegation token generation
[ https://issues.apache.org/jira/browse/HADOOP-14443?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16072412#comment-16072412 ] Hadoop QA commented on HADOOP-14443: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s{color} | {color:blue} Docker mode activated. {color} | | {color:red}-1{color} | {color:red} patch {color} | {color:red} 0m 5s{color} | {color:red} HADOOP-14443 does not apply to trunk. Rebase required? Wrong Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} | \\ \\ || Subsystem || Report/Notes || | JIRA Issue | HADOOP-14443 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12875503/HADOOP-14443-branch2-1.patch | | Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/12704/console | | Powered by | Apache Yetus 0.5.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > Azure: Support retry and client side failover for authorization, SASKey and > delegation token generation > --- > > Key: HADOOP-14443 > URL: https://issues.apache.org/jira/browse/HADOOP-14443 > Project: Hadoop Common > Issue Type: Improvement > Components: fs/azure >Affects Versions: 2.9.0 >Reporter: Santhosh G Nayak >Assignee: Santhosh G Nayak > Fix For: 3.0.0-alpha4 > > Attachments: HADOOP-14443.1.patch, HADOOP-14443.2.patch, > HADOOP-14443.3.patch, HADOOP-14443.4.patch, HADOOP-14443.5.patch, > HADOOP-14443.6.patch, HADOOP-14443.7.patch, HADOOP-14443-branch2-1.patch > > > Currently, {{WasbRemoteCallHelper}} can be configured to talk to only one URL > for authorization, SASKey generation and delegation token generation. If for > some reason the service is down, all the requests will fail. 
> So the proposal is to: > - Add support to configure multiple URLs, so that if communication to one URL > fails, the client can retry against another instance of the service running on a > different node for authorization, SASKey generation and delegation token > generation. > - Rename the configurations {{fs.azure.authorization.remote.service.url}} to > {{fs.azure.authorization.remote.service.urls}} and > {{fs.azure.cred.service.url}} to {{fs.azure.cred.service.urls}} to support > comma-separated lists of URLs. > - Introduce a new configuration {{fs.azure.delegation.token.service.urls}} to > configure the comma-separated list of service URLs to get the delegation > token. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
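The retry-on-the-next-URL behaviour proposed above can be sketched as follows. `callWithFailover` and its callback are hypothetical stand-ins for illustration, not the patch's WASB classes:

```java
import java.util.function.Function;

// Split the comma-separated URL list and try each endpoint in order;
// the first successful call wins, and only if every endpoint fails is
// the last failure rethrown to the caller.
class FailoverCaller {
  static <R> R callWithFailover(String urls, Function<String, R> tryUrl) {
    RuntimeException last = null;
    for (String url : urls.split(",")) {
      try {
        return tryUrl.apply(url.trim());
      } catch (RuntimeException e) {
        last = e; // remember the failure, move to the next endpoint
      }
    }
    throw last != null ? last : new IllegalStateException("no URLs configured");
  }
}
```

A production version would also need per-endpoint timeouts and, as the patch discussion suggests, awareness of which failures are worth retrying.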
[jira] [Updated] (HADOOP-14443) Azure: Support retry and client side failover for authorization, SASKey and delegation token generation
[ https://issues.apache.org/jira/browse/HADOOP-14443?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Santhosh G Nayak updated HADOOP-14443: -- Attachment: HADOOP-14443-branch2-1.patch Thanks [~liuml07] for committing the patch to {{trunk}}. Attached a patch for {{branch-2}} with the {{ObjectMapper}} optimization and resolved conflicts. Could you please review and commit it? > Azure: Support retry and client side failover for authorization, SASKey and > delegation token generation > --- > > Key: HADOOP-14443 > URL: https://issues.apache.org/jira/browse/HADOOP-14443 > Project: Hadoop Common > Issue Type: Improvement > Components: fs/azure >Affects Versions: 2.9.0 >Reporter: Santhosh G Nayak >Assignee: Santhosh G Nayak > Fix For: 3.0.0-alpha4 > > Attachments: HADOOP-14443.1.patch, HADOOP-14443.2.patch, > HADOOP-14443.3.patch, HADOOP-14443.4.patch, HADOOP-14443.5.patch, > HADOOP-14443.6.patch, HADOOP-14443.7.patch, HADOOP-14443-branch2-1.patch > > > Currently, {{WasbRemoteCallHelper}} can be configured to talk to only one URL > for authorization, SASKey generation and delegation token generation. If for > some reason the service is down, all the requests will fail. > So the proposal is to: > - Add support to configure multiple URLs, so that if communication to one URL > fails, the client can retry against another instance of the service running on a > different node for authorization, SASKey generation and delegation token > generation. > - Rename the configurations {{fs.azure.authorization.remote.service.url}} to > {{fs.azure.authorization.remote.service.urls}} and > {{fs.azure.cred.service.url}} to {{fs.azure.cred.service.urls}} to support > comma-separated lists of URLs. > - Introduce a new configuration {{fs.azure.delegation.token.service.urls}} to > configure the comma-separated list of service URLs to get the delegation > token. 
-- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-14275) Rename 'seq.io.sort.mb' and 'seq.io.sort.factor' with prefix 'io.seqfile'
[ https://issues.apache.org/jira/browse/HADOOP-14275?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16072400#comment-16072400 ] Hadoop QA commented on HADOOP-14275: | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 17s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 15s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 17s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 43s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 13s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 39s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 52s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 41s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m 13s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 11m 13s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 46s{color} | 
{color:green} hadoop-common-project/hadoop-common: The patch generated 0 new + 460 unchanged - 1 fixed = 460 total (was 461) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 8s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 1s{color} | {color:green} The patch has no ill-formed XML file. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 45s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 55s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 8m 40s{color} | {color:green} hadoop-common in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 35s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 60m 54s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:14b5c93 | | JIRA Issue | HADOOP-14275 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12875488/HADOOP-14275-02.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle xml | | uname | Linux 6fc08238dfc2 3.13.0-116-generic #163-Ubuntu SMP Fri Mar 31 14:13:22 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / bf1f599 | | Default Java | 1.8.0_131 | | findbugs | v3.1.0-RC1 | | Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/12702/testReport/ | | modules | C: hadoop-common-project/hadoop-common U: hadoop-common-project/hadoop-common | | Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/12702/console | | Powered by | Apache Yetus 0.5.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > Rename 'seq.io.sort.mb' and 'seq.io.sort.factor' with prefix 'io.seqfile' > - > > Key: HADOOP-14275 > URL: https://issues.apache.org/jira/browse/HADOOP-14275 > Project: Hadoop Common > Issue Type: Improvement >Reporter: Vinayakumar B >Assignee: Vinayakumar B > Attachments: HADOOP-14275-01.patch, HADOOP-14275-02.patch > > > HADOOP-6801 introduced new configs 'seq.io.sort.mb' and 'seq.io.sort.factor' . > These can be renamed to have prefix 'io.seqfile' to be consistent with
[jira] [Commented] (HADOOP-13435) Add thread local mechanism for aggregating file system storage stats
[ https://issues.apache.org/jira/browse/HADOOP-13435?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16072341#comment-16072341 ] Hongyuan Li commented on HADOOP-13435: -- should FileSystem#Cache be made a standalone class, with a {{list}} method added to make debugging easier? > Add thread local mechanism for aggregating file system storage stats > > > Key: HADOOP-13435 > URL: https://issues.apache.org/jira/browse/HADOOP-13435 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs >Reporter: Mingliang Liu >Assignee: Mingliang Liu > Attachments: HADOOP-13435.000.patch, HADOOP-13435.001.patch, > HADOOP-13435.002.patch > > > As discussed in [HADOOP-13032], this is to add a thread-local mechanism for > aggregating file system storage stats. This class will also be used in > [HADOOP-13031], which separates the distance-oriented rack-aware read > bytes logic from {{FileSystemStorageStatistics}} into a new > DFSRackAwareStorageStatistics, as it's DFS-specific. After this patch, > {{FileSystemStorageStatistics}} can live without the to-be-removed > {{FileSystem$Statistics}} implementation. > A unit test should also be added. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Comment Edited] (HADOOP-13435) Add thread local mechanism for aggregating file system storage stats
[ https://issues.apache.org/jira/browse/HADOOP-13435?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16072341#comment-16072341 ] Hongyuan Li edited comment on HADOOP-13435 at 7/3/17 12:24 PM: --- should {{FileSystem}}#{{Cache}} be made a standalone class, with a {{list}} method added to make debugging easier? was (Author: hongyuan li): should FileSystem#Cache be set as a single class and and a {{list}} method to make it easier for debugging? > Add thread local mechanism for aggregating file system storage stats > > > Key: HADOOP-13435 > URL: https://issues.apache.org/jira/browse/HADOOP-13435 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs >Reporter: Mingliang Liu >Assignee: Mingliang Liu > Attachments: HADOOP-13435.000.patch, HADOOP-13435.001.patch, > HADOOP-13435.002.patch > > > As discussed in [HADOOP-13032], this is to add a thread-local mechanism for > aggregating file system storage stats. This class will also be used in > [HADOOP-13031], which separates the distance-oriented rack-aware read > bytes logic from {{FileSystemStorageStatistics}} into a new > DFSRackAwareStorageStatistics, as it's DFS-specific. After this patch, > {{FileSystemStorageStatistics}} can live without the to-be-removed > {{FileSystem$Statistics}} implementation. > A unit test should also be added. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-14470) CommandWithDestination#create used redundant ternary operator
[ https://issues.apache.org/jira/browse/HADOOP-14470?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16072329#comment-16072329 ] Hongyuan Li commented on HADOOP-14470: -- seems this has been resolved. Should this be closed? [~ste...@apache.org] > CommandWithDestination#create used redundant ternary operator > --- > > Key: HADOOP-14470 > URL: https://issues.apache.org/jira/browse/HADOOP-14470 > Project: Hadoop Common > Issue Type: Improvement > Components: common, fs >Affects Versions: 3.0.0-alpha3 >Reporter: Hongyuan Li >Assignee: Hongyuan Li >Priority: Trivial > Attachments: HADOOP-14470-001.patch > > > In the if statement, lazyPersist is always true, thus the ternary operator is > redundant: > {{lazyPersist == true}} in the if statement, so {{lazyPersist ? 1 : > getDefaultReplication(item.path)}} is redundant. > related code like below, which is in > {{org.apache.hadoop.fs.shell.CommandWithDestination}} lineNumber : 504 : > {code:java} >FSDataOutputStream create(PathData item, boolean lazyPersist, > boolean direct) > throws IOException { > try { > if (lazyPersist) { // in this branch, lazyPersist is always true > …… > return create(item.path, > FsPermission.getFileDefault().applyUMask( > FsPermission.getUMask(getConf())), > createFlags, > getConf().getInt(IO_FILE_BUFFER_SIZE_KEY, > IO_FILE_BUFFER_SIZE_DEFAULT), > lazyPersist ? 1 : getDefaultReplication(item.path), > // *this is redundant* > getDefaultBlockSize(), > null, > null); > } else { > return create(item.path, true); > } > } finally { // might have been created but stream was interrupted > if (!direct) { > deleteOnExit(item.path); > } > } > } > {code} -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-14537) FindBugs warning in ECSchema#toString
[ https://issues.apache.org/jira/browse/HADOOP-14537?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16072321#comment-16072321 ] Hongyuan Li commented on HADOOP-14537: -- seems resolved in the latest code, ping [~ste...@apache.org] to close this issue. > FindBugs warning in ECSchema#toString > - > > Key: HADOOP-14537 > URL: https://issues.apache.org/jira/browse/HADOOP-14537 > Project: Hadoop Common > Issue Type: Bug > Components: common, io >Reporter: Hongyuan Li >Assignee: Hongyuan Li > > should we use entrySet instead of keySet, which is more efficient? -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-12502) SetReplication OutOfMemoryError
[ https://issues.apache.org/jira/browse/HADOOP-12502?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vinayakumar B updated HADOOP-12502: --- Attachment: HADOOP-12502-06.patch Updated the patch to fix checkstyle issues. > SetReplication OutOfMemoryError > --- > > Key: HADOOP-12502 > URL: https://issues.apache.org/jira/browse/HADOOP-12502 > Project: Hadoop Common > Issue Type: Bug >Affects Versions: 2.3.0 >Reporter: Philipp Schuegerl >Assignee: Vinayakumar B > Attachments: HADOOP-12502-01.patch, HADOOP-12502-02.patch, > HADOOP-12502-03.patch, HADOOP-12502-04.patch, HADOOP-12502-05.patch, > HADOOP-12502-06.patch > > > Setting the replication of a HDFS folder recursively can run out of memory. > E.g. with a large /var/log directory: > hdfs dfs -setrep -R -w 1 /var/log > Exception in thread "main" java.lang.OutOfMemoryError: GC overhead limit > exceeded > at java.util.Arrays.copyOfRange(Arrays.java:2694) > at java.lang.String.(String.java:203) > at java.lang.String.substring(String.java:1913) > at java.net.URI$Parser.substring(URI.java:2850) > at java.net.URI$Parser.parse(URI.java:3046) > at java.net.URI.(URI.java:753) > at org.apache.hadoop.fs.Path.initialize(Path.java:203) > at org.apache.hadoop.fs.Path.(Path.java:116) > at org.apache.hadoop.fs.Path.(Path.java:94) > at > org.apache.hadoop.hdfs.protocol.HdfsFileStatus.getFullPath(HdfsFileStatus.java:222) > at > org.apache.hadoop.hdfs.protocol.HdfsFileStatus.makeQualified(HdfsFileStatus.java:246) > at > org.apache.hadoop.hdfs.DistributedFileSystem.listStatusInternal(DistributedFileSystem.java:689) > at > org.apache.hadoop.hdfs.DistributedFileSystem.access$600(DistributedFileSystem.java:102) > at > org.apache.hadoop.hdfs.DistributedFileSystem$14.doCall(DistributedFileSystem.java:712) > at > org.apache.hadoop.hdfs.DistributedFileSystem$14.doCall(DistributedFileSystem.java:708) > at > org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81) > at > 
org.apache.hadoop.hdfs.DistributedFileSystem.listStatus(DistributedFileSystem.java:708) > at > org.apache.hadoop.fs.shell.PathData.getDirectoryContents(PathData.java:268) > at org.apache.hadoop.fs.shell.Command.recursePath(Command.java:347) > at org.apache.hadoop.fs.shell.Command.processPaths(Command.java:308) > at org.apache.hadoop.fs.shell.Command.recursePath(Command.java:347) > at org.apache.hadoop.fs.shell.Command.processPaths(Command.java:308) > at org.apache.hadoop.fs.shell.Command.recursePath(Command.java:347) > at org.apache.hadoop.fs.shell.Command.processPaths(Command.java:308) > at org.apache.hadoop.fs.shell.Command.recursePath(Command.java:347) > at org.apache.hadoop.fs.shell.Command.processPaths(Command.java:308) > at org.apache.hadoop.fs.shell.Command.recursePath(Command.java:347) > at org.apache.hadoop.fs.shell.Command.processPaths(Command.java:308) > at > org.apache.hadoop.fs.shell.Command.processPathArgument(Command.java:278) > at org.apache.hadoop.fs.shell.Command.processArgument(Command.java:260) > at org.apache.hadoop.fs.shell.Command.processArguments(Command.java:244) > at > org.apache.hadoop.fs.shell.SetReplication.processArguments(SetReplication.java:76) -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
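The stack trace above shows every {{FileStatus}} of the recursion being materialized at once. The usual remedy for this class of OutOfMemoryError is to walk the tree while holding only the not-yet-visited paths; a minimal standalone sketch of that idea follows (`TreeWalker` and the `listDir` callback are illustrative, not the HDFS API, which offers {{listStatusIterator}} for the same purpose):

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.List;
import java.util.function.Function;

// Depth-first walk that keeps a stack of pending paths instead of the
// full listing of the whole tree, so memory use is bounded by tree
// depth times directory width rather than total file count.
class TreeWalker {
  static int countPaths(String root, Function<String, List<String>> listDir) {
    Deque<String> pending = new ArrayDeque<>();
    pending.push(root);
    int visited = 0;
    while (!pending.isEmpty()) {
      String path = pending.pop();
      visited++;                          // "process" the path here
      for (String child : listDir.apply(path)) {
        pending.push(child);              // defer children, don't recurse
      }
    }
    return visited;
  }
}
```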
[jira] [Updated] (HADOOP-14537) FindBugs warning in ECSchema#toString
[ https://issues.apache.org/jira/browse/HADOOP-14537?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Hongyuan Li updated HADOOP-14537: - Description: should we use entrySet instead of keySet, which is more efficient? code like below: {code} for (Map.Entry<String, String> entry : extraOptions.entrySet()) { sb.append(entry.getKey() + "=" + entry.getValue() + (++i < extraOptions.size() ? ", " : "")); } {code} was: should we use entrySet instead of keySet, which is more efficient? code like below: {code} {code} > FindBugs warning in ECSchema#toString > - > > Key: HADOOP-14537 > URL: https://issues.apache.org/jira/browse/HADOOP-14537 > Project: Hadoop Common > Issue Type: Bug > Components: common, io >Reporter: Hongyuan Li >Assignee: Hongyuan Li > > should we use entrySet instead of keySet, which is more efficient? > code like below: > {code} > for (Map.Entry<String, String> entry : extraOptions.entrySet()) { > sb.append(entry.getKey() + "=" + entry.getValue() + > (++i < extraOptions.size() ? ", " : "")); > } > {code} -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
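The entrySet() form the report asks about looks like this in isolation: one pass over the map with key and value delivered together, versus keySet(), which costs an extra get() lookup per key. `MapJoin` is an illustrative wrapper for the loop, not ECSchema itself:

```java
import java.util.Map;

// Join a map into "k1=v1, k2=v2" form, iterating entries rather than
// keys so each value comes from the entry, not a separate get() call.
class MapJoin {
  static String join(Map<String, String> opts) {
    StringBuilder sb = new StringBuilder();
    int i = 0;
    for (Map.Entry<String, String> e : opts.entrySet()) {
      sb.append(e.getKey()).append('=').append(e.getValue())
        .append(++i < opts.size() ? ", " : "");
    }
    return sb.toString();
  }
}
```

For a HashMap, keySet() iteration with per-key get() roughly doubles the hashing work; entrySet() is also what FindBugs' WMI_WRONG_MAP_ITERATOR check suggests.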
[jira] [Updated] (HADOOP-14537) FindBugs warning in ECSchema#toString
[ https://issues.apache.org/jira/browse/HADOOP-14537?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Hongyuan Li updated HADOOP-14537: - Description: should we use entrySet instead of keySet, which is more efficient? code like below: {code} {code} was: should we use entrySet instead of keySet, which is more efficient? > FindBugs warning in ECSchema#toString > - > > Key: HADOOP-14537 > URL: https://issues.apache.org/jira/browse/HADOOP-14537 > Project: Hadoop Common > Issue Type: Bug > Components: common, io >Reporter: Hongyuan Li >Assignee: Hongyuan Li > > should we use entrySet instead of keySet, which is more efficient? > code like below: > {code} > {code} -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-14275) Rename 'seq.io.sort.mb' and 'seq.io.sort.factor' with prefix 'io.seqfile'
[ https://issues.apache.org/jira/browse/HADOOP-14275?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vinayakumar B updated HADOOP-14275: --- Attachment: HADOOP-14275-02.patch Updated the patch. Fixed test, checkstyle, javac and javadoc warnings. > Rename 'seq.io.sort.mb' and 'seq.io.sort.factor' with prefix 'io.seqfile' > - > > Key: HADOOP-14275 > URL: https://issues.apache.org/jira/browse/HADOOP-14275 > Project: Hadoop Common > Issue Type: Improvement >Reporter: Vinayakumar B >Assignee: Vinayakumar B > Attachments: HADOOP-14275-01.patch, HADOOP-14275-02.patch > > > HADOOP-6801 introduced new configs 'seq.io.sort.mb' and 'seq.io.sort.factor' . > These can be renamed to have prefix 'io.seqfile' to be consistent with other > configs related to sequence file. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-14587) Use GenericTestUtils.setLogLevel when available in hadoop-common
[ https://issues.apache.org/jira/browse/HADOOP-14587?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16072249#comment-16072249 ] Hadoop QA commented on HADOOP-14587: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 15s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 23 new or modified test files. {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 8s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 45s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m 31s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 45s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 32s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 9s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 27s{color} | {color:green} trunk passed {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 10s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 14s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 
13m 52s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} javac {color} | {color:red} 13m 52s{color} | {color:red} root generated 20 new + 825 unchanged - 0 fixed = 845 total (was 825) {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 56s{color} | {color:orange} hadoop-common-project: The patch generated 1 new + 480 unchanged - 2 fixed = 481 total (was 482) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 46s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 1s{color} | {color:green} The patch has no ill-formed XML file. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 58s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 27s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 9m 40s{color} | {color:red} hadoop-common in the patch failed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 37s{color} | {color:green} hadoop-nfs in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 42s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 72m 32s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.security.TestRaceWhenRelogin | | | hadoop.ha.TestZKFailoverController | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:14b5c93 | | JIRA Issue | HADOOP-14587 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12875468/HADOOP-14587.003.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle xml | | uname | Linux a42e3024e9cd 3.13.0-116-generic #163-Ubuntu SMP Fri Mar 31 14:13:22 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / bf1f599 | | Default Java | 1.8.0_131 | | findbugs | v3.1.0-RC1 | | javac | https://builds.apache.org/job/PreCommit-HADOOP-Build/12701/artifact/patchprocess/diff-compile-javac-root.txt | | checkstyle | https://builds.apache.org/job/PreCommit-HADOOP-Build/12701/artifact/patchprocess/diff-checkstyle-hadoop-common-project.txt | | unit | https://builds.apache.org/job/PreCommit-HADOOP-Build/12701/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common.txt | | Test Results |
[jira] [Assigned] (HADOOP-14581) Restrict setOwner to list of user when security is enabled in wasb
[ https://issues.apache.org/jira/browse/HADOOP-14581?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran reassigned HADOOP-14581: --- Assignee: Varada Hemeswari (was: Steve Loughran) > Restrict setOwner to list of user when security is enabled in wasb > -- > > Key: HADOOP-14581 > URL: https://issues.apache.org/jira/browse/HADOOP-14581 > Project: Hadoop Common > Issue Type: Bug > Components: fs/azure >Affects Versions: 3.0.0-alpha3 >Reporter: Varada Hemeswari >Assignee: Varada Hemeswari > Labels: azure, fs, secure, wasb > Attachments: HADOOP-14581.1.patch, HADOOP-14581.2.patch > > > Currently in azure FS, setOwner api is exposed to all the users accessing the > file system. > When Authorization is enabled, access to some files/folders is given to > particular users based on whether the user is the owner of the file. > So setOwner has to be restricted to limited set of users to prevent users > from exploiting owner based authorization of files and folders. > Introducing a new config called fs.azure.chown.allowed.userlist which is a > comma seperated list of users who are allowed to perform chown operation when > authorization is enabled. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-14586) org.apache.hadoop.util.Shell in 2.7 breaks on Java 9 RC build; backport HADOOP-10775 to 2.7.x
[ https://issues.apache.org/jira/browse/HADOOP-14586?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16072248#comment-16072248 ] Steve Loughran commented on HADOOP-14586: - Konstantin: Uwe is right, the probe isn't needed. 2.7 is Java 7+. 2.8 contains the patch which removes the probe as part of a big cleanup of Shell itself. Cherry picking the smallest bits of that patch needed to fix the condition and remove some codepaths which can be guaranteed never to be executed would make this problem go away > org.apache.hadoop.util.Shell in 2.7 breaks on Java 9 RC build; > backport HADOOP-10775 to 2.7.x > --- > > Key: HADOOP-14586 > URL: https://issues.apache.org/jira/browse/HADOOP-14586 > Project: Hadoop Common > Issue Type: Bug > Components: common >Affects Versions: 2.7.2 > Environment: Java 9, build 175 (Java 9 release candidate as of June > 25th, 2017) >Reporter: Uwe Schindler >Assignee: Akira Ajisaka >Priority: Minor > Labels: Java9 > Attachments: HADOOP-14586-branch-2.7-01.patch, > HADOOP-14586-branch-2.7-02.patch > > > You cannot use any pre-Hadoop 2.8 component anymore with the latest release > candidate build of Java 9, because it fails with an > StringIndexOutOfBoundsException in {{org.apache.hadoop.util.Shell#}}. > This leads to a whole cascade of failing classes (next in chain is > StringUtils). > The reason is that the release candidate build of Java 9 no longer has "-ea" > in the version string and the system property "java.version" is now simply > "9". This causes the following line to fail fatally: > {code:java} > private static boolean IS_JAVA7_OR_ABOVE = > System.getProperty("java.version").substring(0, 3).compareTo("1.7") >= > 0; > {code} > Analysis: > - This code looks wrong, as comparing a version this way is incorrect. > - The {{substring(0, 3)}} is not needed, {{compareTo}} also works without it, > although it is still an invalid way to compare a version. 
-- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
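The failure mode described above — `substring(0, 3)` throws on the two-character string "9", and lexicographic comparison mis-orders versions anyway — can be made concrete with a sketch of a major-version probe. This is illustrative only; the actual fix backported from 2.8 removes the probe entirely, since branch-2.7 already requires Java 7:

```java
// Parse the major version out of "java.version" strings from both the
// old scheme ("1.8.0_131" -> 8) and the Java 9+ scheme ("9", "9.0.1" -> 9),
// instead of comparing raw strings of unpredictable length.
class JavaVersionCheck {
  static boolean isJava7OrAbove(String version) {
    // drop the legacy "1." prefix so only the major number remains first
    String v = version.startsWith("1.") ? version.substring(2) : version;
    int dot = v.indexOf('.');
    int major = Integer.parseInt(dot < 0 ? v : v.substring(0, dot));
    return major >= 7;
  }
}
```

Note why the original line fails twice over: `"9".substring(0, 3)` raises StringIndexOutOfBoundsException, and even `"9.0.1".substring(0, 3).compareTo("1.7")` would succeed only by accident of character ordering.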
[jira] [Commented] (HADOOP-14615) Add ServiceOperations.stopQuietly that accept slf4j logger API
[ https://issues.apache.org/jira/browse/HADOOP-14615?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16072246#comment-16072246 ] Steve Loughran commented on HADOOP-14615: - Should we add a test for this? Once the API gets adopted, it's less relevant, but it's probably worth doing for completeness, at least with a dummy service which throws an IOE in its close(). That way we get to verify that the method really is resilient to failures > Add ServiceOperations.stopQuietly that accept slf4j logger API > -- > > Key: HADOOP-14615 > URL: https://issues.apache.org/jira/browse/HADOOP-14615 > Project: Hadoop Common > Issue Type: Sub-task >Reporter: Wenxin He >Assignee: Wenxin He > Attachments: HADOOP-14615.001.patch, HADOOP-14615.002.patch > > > Split from HADOOP-14539. > Now ServiceOperations.stopQuietly accepts only the commons-logging logger API. > Now that we are migrating the APIs to slf4j, the slf4j logger API should be accepted > as well. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
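The suggested test can be sketched against a standalone analogue of stopQuietly: a dummy service that throws from its stop path, with the helper expected to swallow the failure. `QuietStop` and the use of `Closeable` here stand in for Hadoop's Service/ServiceOperations API; they are not the real signatures:

```java
import java.io.Closeable;
import java.io.IOException;

// Analogue of the "resilient to failures" contract: stop the service,
// swallow anything it throws, and report whether the stop was clean.
class QuietStop {
  static boolean stopQuietly(Closeable service) {
    try {
      service.close();
      return true;
    } catch (IOException | RuntimeException e) {
      // the real helper would log e through the supplied slf4j Logger
      return false;
    }
  }
}
```

The test then asserts two things: the dummy's close() really ran, and the exception never escaped — exactly what the comment wants verified.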
[jira] [Created] (HADOOP-14619) S3A authenticators to log origin of .secret.key options
Steve Loughran created HADOOP-14619: --- Summary: S3A authenticators to log origin of .secret.key options Key: HADOOP-14619 URL: https://issues.apache.org/jira/browse/HADOOP-14619 Project: Hadoop Common Issue Type: Sub-task Components: s3 Affects Versions: 2.8.1 Reporter: Steve Loughran Priority: Minor Even though we can't log the values of the id, secret and session options, we could aid debugging what's going on with auth failures by logging the origin of the values. e.g. {code} DEBUG authenticating with secrets obtained from hive-site.xml DEBUG authenticating with secrets obtained from hive-site.xml and bucket options landsat {code}
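The core idea here — log where a secret came from, never the secret itself — can be sketched generically. The class and option names below are invented for illustration and are not the S3A authenticator code; the sketch only records provenance per key so DEBUG output can report origin without leaking values:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Illustration only: track the origin (source file) of each sensitive option
// so debug logging can report provenance without ever printing the value.
public final class SecretOriginLog {
    private final Map<String, String> origins = new LinkedHashMap<>();

    void set(String key, String value, String origin) {
        // The value is intentionally discarded here; only the origin is kept.
        origins.put(key, origin);
    }

    String describe(String key) {
        String origin = origins.getOrDefault(key, "unset");
        return "authenticating with " + key + " obtained from " + origin;
    }

    public static void main(String[] args) {
        SecretOriginLog log = new SecretOriginLog();
        log.set("fs.s3a.secret.key", "REDACTED", "hive-site.xml and bucket options landsat");
        System.out.println(log.describe("fs.s3a.secret.key"));
    }
}
```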
[jira] [Updated] (HADOOP-14587) Use GenericTestUtils.setLogLevel when available in hadoop-common
[ https://issues.apache.org/jira/browse/HADOOP-14587?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Wenxin He updated HADOOP-14587: --- Status: Open (was: Patch Available) > Use GenericTestUtils.setLogLevel when available in hadoop-common > > > Key: HADOOP-14587 > URL: https://issues.apache.org/jira/browse/HADOOP-14587 > Project: Hadoop Common > Issue Type: Sub-task >Reporter: Wenxin He >Assignee: Wenxin He > Attachments: HADOOP-14587.001.patch, HADOOP-14587.002.patch, > HADOOP-14587.003.patch > > > Based on Brahma's comment in HADOOP-14296, it's better to use > GenericTestUtils.setLogLevel as much as possible to make the migration easier. > Based on Akira Ajisaka's comment in HADOOP-14549, create a separate jira for > hadoop-common change.
[jira] [Updated] (HADOOP-14587) Use GenericTestUtils.setLogLevel when available in hadoop-common
[ https://issues.apache.org/jira/browse/HADOOP-14587?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Wenxin He updated HADOOP-14587: --- Attachment: HADOOP-14587.003.patch Yes, simpler and clearer. Thanks. 003.patch attached: change to {{setLogLevel(LogManager.getRootLogger(), Level.toLevel(level.toString()));}} > Use GenericTestUtils.setLogLevel when available in hadoop-common > > > Key: HADOOP-14587 > URL: https://issues.apache.org/jira/browse/HADOOP-14587 > Project: Hadoop Common > Issue Type: Sub-task >Reporter: Wenxin He >Assignee: Wenxin He > Attachments: HADOOP-14587.001.patch, HADOOP-14587.002.patch, > HADOOP-14587.003.patch > > > Based on Brahma's comment in HADOOP-14296, it's better to use > GenericTestUtils.setLogLevel as much as possible to make the migration easier. > Based on Akira Ajisaka's comment in HADOOP-14549, create a separate jira for > hadoop-common change.
[jira] [Updated] (HADOOP-14587) Use GenericTestUtils.setLogLevel when available in hadoop-common
[ https://issues.apache.org/jira/browse/HADOOP-14587?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Wenxin He updated HADOOP-14587: --- Status: Patch Available (was: Open) > Use GenericTestUtils.setLogLevel when available in hadoop-common > > > Key: HADOOP-14587 > URL: https://issues.apache.org/jira/browse/HADOOP-14587 > Project: Hadoop Common > Issue Type: Sub-task >Reporter: Wenxin He >Assignee: Wenxin He > Attachments: HADOOP-14587.001.patch, HADOOP-14587.002.patch, > HADOOP-14587.003.patch > > > Based on Brahma's comment in HADOOP-14296, it's better to use > GenericTestUtils.setLogLevel as much as possible to make the migration easier. > Based on Akira Ajisaka's comment in HADOOP-14549, create a separate jira for > hadoop-common change.
[jira] [Commented] (HADOOP-14587) Use GenericTestUtils.setLogLevel when available in hadoop-common
[ https://issues.apache.org/jira/browse/HADOOP-14587?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16072146#comment-16072146 ] Akira Ajisaka commented on HADOOP-14587: Thank you for the update. {code:title=GenericTestUtils.java} + public static void setRootLogLevel(org.slf4j.event.Level level) { +setLogLevel(toLog4j(getLogger("org")), Level.toLevel(level.toString())); + } {code} In this method, {{setLogLevel(LogManager.getRootLogger(), Level.toLevel(level.toString()));}} is straightforward to me. > Use GenericTestUtils.setLogLevel when available in hadoop-common > > > Key: HADOOP-14587 > URL: https://issues.apache.org/jira/browse/HADOOP-14587 > Project: Hadoop Common > Issue Type: Sub-task >Reporter: Wenxin He >Assignee: Wenxin He > Attachments: HADOOP-14587.001.patch, HADOOP-14587.002.patch > > > Based on Brahma's comment in HADOOP-14296, it's better to use > GenericTestUtils.setLogLevel as much as possible to make the migration easier. > Based on Akira Ajisaka's comment in HADOOP-14549, create a separate jira for > hadoop-common change.
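Akira's simplification — address the root logger directly rather than resolving the "org" logger — can be illustrated with JDK logging as a dependency-free stand-in for the log4j call quoted in the comment. The class and method names below are invented for illustration; the actual patch targets log4j's LogManager.getRootLogger():

```java
import java.util.logging.Level;
import java.util.logging.Logger;

// JDK-logging stand-in for a setRootLogLevel helper: setting the level on the
// root logger covers every logger that has not explicitly set its own level.
public final class RootLogLevel {
    static void setRootLogLevel(Level level) {
        Logger.getLogger("").setLevel(level);  // "" names the root logger in java.util.logging
    }

    public static void main(String[] args) {
        setRootLogLevel(Level.FINE);
        System.out.println(Logger.getLogger("").getLevel());
    }
}
```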
[jira] [Created] (HADOOP-14618) Build failure due to failing Bats test
Sonia Garudi created HADOOP-14618: - Summary: Build failure due to failing Bats test Key: HADOOP-14618 URL: https://issues.apache.org/jira/browse/HADOOP-14618 Project: Hadoop Common Issue Type: Test Components: common Affects Versions: 3.0.0-alpha4 Environment: Ubuntu 14.04 x86, ppc64le $ java -version openjdk version "1.8.0_111" OpenJDK Runtime Environment (build 1.8.0_111-8u111-b14-3~14.04.1-b14) OpenJDK 64-Bit Server VM (build 25.111-b14, mixed mode) Reporter: Sonia Garudi Priority: Minor The build fails with the following error: {code} [ERROR] Failed to execute goal org.apache.maven.plugins:maven-antrun-plugin:1.7:run (common-test-bats-driver) on project hadoop-common: An Ant BuildException has occured: exec returned: 1 [ERROR] around Ant part .. @ 4:69 in /ws/hadoop/hadoop-common-project/hadoop-common/target/antrun/build-main.xml {code} There is a failure in the bats test as follows: {code} [exec] Running bats -t hadoop_mkdir.bats [exec] 1..3 [exec] ok 1 hadoop_mkdir (create) [exec] ok 2 hadoop_mkdir (exists) [exec] not ok 3 hadoop_mkdir (failed) [exec] # (in test file hadoop_mkdir.bats, line 41) [exec] # `[ "${status}" != 0 ]' failed [exec] # bindir: /var/lib/jenkins/workspace/hadoop-master/hadoop-common-project/hadoop-common/src/test/scripts {code} The required directories are getting created, yet the test still fails. I am using the following bats version: {code} # bats -version Bats 0.4.0 {code}
[jira] [Commented] (HADOOP-14587) Use GenericTestUtils.setLogLevel when available in hadoop-common
[ https://issues.apache.org/jira/browse/HADOOP-14587?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16072127#comment-16072127 ] Hadoop QA commented on HADOOP-14587: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 16s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 23 new or modified test files. {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 11s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 28s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 13m 50s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 44s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 29s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 51s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 10s{color} | {color:green} trunk passed {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 9s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 52s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 
10m 6s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} javac {color} | {color:red} 10m 5s{color} | {color:red} root generated 20 new + 825 unchanged - 0 fixed = 845 total (was 825) {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 44s{color} | {color:green} hadoop-common-project: The patch generated 0 new + 480 unchanged - 2 fixed = 480 total (was 482) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 25s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 1s{color} | {color:green} The patch has no ill-formed XML file. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 7s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 9s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 7m 36s{color} | {color:red} hadoop-common in the patch failed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 30s{color} | {color:green} hadoop-nfs in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 35s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 68m 29s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.fs.sftp.TestSFTPFileSystem | | | hadoop.fs.viewfs.TestViewFileSystemLocalFileSystem | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:14b5c93 | | JIRA Issue | HADOOP-14587 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12875460/HADOOP-14587.002.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle xml | | uname | Linux 3051f1684965 3.13.0-116-generic #163-Ubuntu SMP Fri Mar 31 14:13:22 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / bf1f599 | | Default Java | 1.8.0_131 | | findbugs | v3.1.0-RC1 | | javac | https://builds.apache.org/job/PreCommit-HADOOP-Build/12700/artifact/patchprocess/diff-compile-javac-root.txt | | unit | https://builds.apache.org/job/PreCommit-HADOOP-Build/12700/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common.txt | | Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/12700/testReport/ | | modules | C: hadoop-common-project/hadoop-common
[jira] [Updated] (HADOOP-14163) Refactor existing hadoop site to use more usable static website generator
[ https://issues.apache.org/jira/browse/HADOOP-14163?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Akira Ajisaka updated HADOOP-14163: --- Target Version/s: asf-site > Refactor existing hadoop site to use more usable static website generator > - > > Key: HADOOP-14163 > URL: https://issues.apache.org/jira/browse/HADOOP-14163 > Project: Hadoop Common > Issue Type: Improvement > Components: site >Reporter: Elek, Marton >Assignee: Elek, Marton > Attachments: HADOOP-14163-001.zip, HADOOP-14163-002.zip, > HADOOP-14163-003.zip, hadoop-site.tar.gz, hadop-site-rendered.tar.gz > > > From the dev mailing list: > "Publishing can be attacked via a mix of scripting and revamping the darned > website. Forrest is pretty bad compared to the newer static site generators > out there (e.g. need to write XML instead of markdown, it's hard to review a > staging site because of all the absolute links, hard to customize, did I > mention XML?), and the look and feel of the site is from the 00s. We don't > actually have that much site content, so it should be possible to migrate to > a new system." > This issue is to find a solution to migrate the old site to a new, modern static > site generator using a more contemporary theme. > Goals: > * existing links should work (or at least redirected) > * It should be easy to add more content required by a release automatically > (most probably with creating separated markdown files)
[jira] [Commented] (HADOOP-14163) Refactor existing hadoop site to use more usable static website generator
[ https://issues.apache.org/jira/browse/HADOOP-14163?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16072122#comment-16072122 ] Akira Ajisaka commented on HADOOP-14163: Thank you for merging this! +1, I'll push this to the asf-site branch tomorrow if there is no comment. > Refactor existing hadoop site to use more usable static website generator > - > > Key: HADOOP-14163 > URL: https://issues.apache.org/jira/browse/HADOOP-14163 > Project: Hadoop Common > Issue Type: Improvement > Components: site >Reporter: Elek, Marton >Assignee: Elek, Marton > Attachments: HADOOP-14163-001.zip, HADOOP-14163-002.zip, > HADOOP-14163-003.zip, hadoop-site.tar.gz, hadop-site-rendered.tar.gz > > > From the dev mailing list: > "Publishing can be attacked via a mix of scripting and revamping the darned > website. Forrest is pretty bad compared to the newer static site generators > out there (e.g. need to write XML instead of markdown, it's hard to review a > staging site because of all the absolute links, hard to customize, did I > mention XML?), and the look and feel of the site is from the 00s. We don't > actually have that much site content, so it should be possible to migrate to > a new system." > This issue is to find a solution to migrate the old site to a new, modern static > site generator using a more contemporary theme. > Goals: > * existing links should work (or at least redirected) > * It should be easy to add more content required by a release automatically > (most probably with creating separated markdown files)
[jira] [Updated] (HADOOP-14587) Use GenericTestUtils.setLogLevel when available in hadoop-common
[ https://issues.apache.org/jira/browse/HADOOP-14587?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Wenxin He updated HADOOP-14587: --- Status: Open (was: Patch Available) > Use GenericTestUtils.setLogLevel when available in hadoop-common > > > Key: HADOOP-14587 > URL: https://issues.apache.org/jira/browse/HADOOP-14587 > Project: Hadoop Common > Issue Type: Sub-task >Reporter: Wenxin He >Assignee: Wenxin He > Attachments: HADOOP-14587.001.patch, HADOOP-14587.002.patch > > > Based on Brahma's comment in HADOOP-14296, it's better to use > GenericTestUtils.setLogLevel as much as possible to make the migration easier. > Based on Akira Ajisaka's comment in HADOOP-14549, create a separate jira for > hadoop-common change.
[jira] [Updated] (HADOOP-14587) Use GenericTestUtils.setLogLevel when available in hadoop-common
[ https://issues.apache.org/jira/browse/HADOOP-14587?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Wenxin He updated HADOOP-14587: --- Status: Patch Available (was: Open) > Use GenericTestUtils.setLogLevel when available in hadoop-common > > > Key: HADOOP-14587 > URL: https://issues.apache.org/jira/browse/HADOOP-14587 > Project: Hadoop Common > Issue Type: Sub-task >Reporter: Wenxin He >Assignee: Wenxin He > Attachments: HADOOP-14587.001.patch, HADOOP-14587.002.patch > > > Based on Brahma's comment in HADOOP-14296, it's better to use > GenericTestUtils.setLogLevel as much as possible to make the migration easier. > Based on Akira Ajisaka's comment in HADOOP-14549, create a separate jira for > hadoop-common change.
[jira] [Updated] (HADOOP-14587) Use GenericTestUtils.setLogLevel when available in hadoop-common
[ https://issues.apache.org/jira/browse/HADOOP-14587?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Wenxin He updated HADOOP-14587: --- Attachment: HADOOP-14587.002.patch Thanks, [~ajisakaa]. Creating a helper method in GenericTestUtils is much better; we can reuse it elsewhere. Attaching 002.patch: add {{setRootLogLevel(org.slf4j.event.Level level)}} > Use GenericTestUtils.setLogLevel when available in hadoop-common > > > Key: HADOOP-14587 > URL: https://issues.apache.org/jira/browse/HADOOP-14587 > Project: Hadoop Common > Issue Type: Sub-task >Reporter: Wenxin He >Assignee: Wenxin He > Attachments: HADOOP-14587.001.patch, HADOOP-14587.002.patch > > > Based on Brahma's comment in HADOOP-14296, it's better to use > GenericTestUtils.setLogLevel as much as possible to make the migration easier. > Based on Akira Ajisaka's comment in HADOOP-14549, create a separate jira for > hadoop-common change.
[jira] [Commented] (HADOOP-14188) Remove the usage of org.mockito.internal.util.reflection.Whitebox
[ https://issues.apache.org/jira/browse/HADOOP-14188?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16071991#comment-16071991 ] Hadoop QA commented on HADOOP-14188: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 17s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 48 new or modified test files. {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 1m 26s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 35s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 17s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 2m 5s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 9m 51s{color} | {color:green} trunk passed {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Skipped patched modules with no Java source: . {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 1m 30s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs-client in trunk has 2 extant Findbugs warnings. {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 1m 48s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs in trunk has 10 extant Findbugs warnings. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 5m 40s{color} | {color:green} trunk passed {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 21s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 54s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m 20s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} javac {color} | {color:red} 10m 20s{color} | {color:red} root generated 164 new + 825 unchanged - 0 fixed = 989 total (was 825) {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 2m 9s{color} | {color:green} root: The patch generated 0 new + 972 unchanged - 1 fixed = 972 total (was 973) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 9m 32s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 2s{color} | {color:green} The patch has no ill-formed XML file. {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Skipped patched modules with no Java source: . {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 7m 14s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 4m 30s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 13m 5s{color} | {color:red} root in the patch failed. 
{color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 38s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}143m 59s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.fs.viewfs.TestViewFileSystemWithAuthorityLocalFileSystem | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:14b5c93 | | JIRA Issue | HADOOP-14188 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12875444/HADOOP-14188.06.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle xml | | uname | Linux 10c3bcdd24e8 3.13.0-116-generic #163-Ubuntu SMP Fri Mar 31 14:13:22 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / bf1f599 | | Default Java | 1.8.0_131 | | findbugs | v3.1.0-RC1 | | findbugs |