[jira] [Commented] (HADOOP-14866) Backport implementation of parallel block copy in Distcp to hadoop 2.8
[ https://issues.apache.org/jira/browse/HADOOP-14866?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16165779#comment-16165779 ]

Huafeng Wang commented on HADOOP-14866:
---------------------------------------

Replaced the original patch with one targeting branch-2.8.

> Backport implementation of parallel block copy in Distcp to hadoop 2.8
> ----------------------------------------------------------------------
>
>                 Key: HADOOP-14866
>                 URL: https://issues.apache.org/jira/browse/HADOOP-14866
>             Project: Hadoop Common
>          Issue Type: Improvement
>          Components: tools/distcp
>            Reporter: Huafeng Wang
>            Assignee: Huafeng Wang
>         Attachments: HADOOP-14866.001.branch.2.8.patch
>
> The implementation of parallel block copy in Distcp targets version 2.9.
> It would be great to have this feature in version 2.8.

--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

---------------------------------------------------------------------
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org
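For context, "parallel block copy" means splitting one large file into byte ranges that separate tasks can copy concurrently, instead of a single mapper streaming the whole file. A toy sketch of the chunking step, with hypothetical names (this is not the DistCp patch itself):

```java
import java.util.ArrayList;
import java.util.List;

/**
 * Illustrative sketch only: split a file of a given size into
 * fixed-size byte ranges that could be copied in parallel.
 */
public class ParallelBlockCopy {

  /** A half-open byte range [offset, offset + length) of the source file. */
  public static final class Range {
    public final long offset;
    public final long length;

    Range(long offset, long length) {
      this.offset = offset;
      this.length = length;
    }
  }

  /** Split a file of the given size into chunks of at most chunkSize bytes. */
  public static List<Range> splitRanges(long fileSize, long chunkSize) {
    List<Range> ranges = new ArrayList<>();
    for (long off = 0; off < fileSize; off += chunkSize) {
      // The final chunk may be shorter than chunkSize.
      ranges.add(new Range(off, Math.min(chunkSize, fileSize - off)));
    }
    return ranges;
  }
}
```

Each range can then be handed to its own copy task, and the destination blocks stitched back together on completion.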
[jira] [Updated] (HADOOP-14866) Backport implementation of parallel block copy in Distcp to hadoop 2.8
[ https://issues.apache.org/jira/browse/HADOOP-14866?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Huafeng Wang updated HADOOP-14866:
----------------------------------
    Status: Patch Available  (was: Open)
[jira] [Updated] (HADOOP-14866) Backport implementation of parallel block copy in Distcp to hadoop 2.8
[ https://issues.apache.org/jira/browse/HADOOP-14866?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Huafeng Wang updated HADOOP-14866:
----------------------------------
    Attachment: HADOOP-14866.001.branch.2.8.patch
[jira] [Updated] (HADOOP-14866) Backport implementation of parallel block copy in Distcp to hadoop 2.8
[ https://issues.apache.org/jira/browse/HADOOP-14866?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Huafeng Wang updated HADOOP-14866:
----------------------------------
    Attachment: (was: HADOOP-14866.001.branch2.8.2.patch)
[jira] [Commented] (HADOOP-14738) Remove S3N and obsolete bits of S3A; rework docs
[ https://issues.apache.org/jira/browse/HADOOP-14738?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16165775#comment-16165775 ]

Hadoop QA commented on HADOOP-14738:
------------------------------------

(x) -1 overall

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 0m 12s | Docker mode activated. |
|| || || || Prechecks ||
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 30 new or modified test files. |
|| || || || trunk Compile Tests ||
| 0 | mvndep | 0m 18s | Maven dependency ordering for branch |
| +1 | mvninstall | 17m 9s | trunk passed |
| +1 | compile | 18m 51s | trunk passed |
| +1 | checkstyle | 2m 22s | trunk passed |
| +1 | mvnsite | 2m 33s | trunk passed |
| 0 | findbugs | 0m 0s | Skipped patched modules with no Java source: hadoop-project hadoop-client-modules/hadoop-client-minicluster |
| +1 | findbugs | 2m 31s | trunk passed |
| +1 | javadoc | 1m 56s | trunk passed |
|| || || || Patch Compile Tests ||
| 0 | mvndep | 0m 20s | Maven dependency ordering for patch |
| +1 | mvninstall | 4m 8s | the patch passed |
| +1 | compile | 13m 46s | the patch passed |
| -1 | javac | 13m 46s | root generated 1 new + 1282 unchanged - 1 fixed = 1283 total (was 1283) |
| -0 | checkstyle | 2m 23s | root: The patch generated 2 new + 33 unchanged - 105 fixed = 35 total (was 138) |
| +1 | mvnsite | 2m 23s | the patch passed |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | xml | 0m 8s | The patch has no ill-formed XML file. |
| 0 | findbugs | 0m 0s | Skipped patched modules with no Java source: hadoop-project hadoop-client-modules/hadoop-client-minicluster |
| +1 | findbugs | 2m 30s | the patch passed |
| +1 | javadoc | 2m 8s | the patch passed |
|| || || || Other Tests ||
| +1 | unit | 0m 20s | hadoop-project in the patch passed. |
| +1 | unit | 9m 8s | hadoop-common in the patch passed. |
| +1 | unit | 1m 1s | hadoop-aws in the patch passed. |
| +1 | unit | 0m 29s | hadoop-client-minicluster in the patch passed. |
| +1 | asflicense | 0m 36s | The patch does not generate ASF License warnings. |
| | | 111m 3s | |

|| Subsystem || Report/Notes ||
| Docker | Image:yetus/hadoop:71bbb86 |
| JIRA Issue | HADOOP-14738 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12886978/HADOOP-14738-006.patch |
| Optional Tests | asflicense compile javac
[jira] [Commented] (HADOOP-14238) [Umbrella] Rechecking Guava's object is not exposed to user-facing API
[ https://issues.apache.org/jira/browse/HADOOP-14238?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16165754#comment-16165754 ]

Bharat Viswanadham commented on HADOOP-14238:
---------------------------------------------

[~andrew.wang] Since this is an umbrella JIRA, I created a subtask for AMRMClient. The comments note that hadoop-hdfs-project/hadoop-hdfs-client/ does not have any references. I can look into the apilyzer plugin, but I have not used it before and it may take me some time; if anyone is familiar with it, please feel free to pick this work item up, if it needs to be done by the end of the week.

> [Umbrella] Rechecking Guava's object is not exposed to user-facing API
> ----------------------------------------------------------------------
>
>                 Key: HADOOP-14238
>                 URL: https://issues.apache.org/jira/browse/HADOOP-14238
>             Project: Hadoop Common
>          Issue Type: Improvement
>            Reporter: Tsuyoshi Ozawa
>            Assignee: Bharat Viswanadham
>            Priority: Blocker
>
> This is reported by [~hitesh] on HADOOP-10101.
> At least, AMRMClient#waitFor takes Guava's Supplier instance as an argument.
[jira] [Commented] (HADOOP-14238) [Umbrella] Rechecking Guava's object is not exposed to user-facing API
[ https://issues.apache.org/jira/browse/HADOOP-14238?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16165751#comment-16165751 ]

Andrew Wang commented on HADOOP-14238:
--------------------------------------

Hi [~bharatviswa], is this JIRA tracking towards completion for beta1 (end of week)? [~busbey] mentioned the [apilyzer|https://github.com/revelc/apilyzer-maven-plugin] plugin to me, which Accumulo uses; it might help with finding these classes.
[jira] [Comment Edited] (HADOOP-14864) FSDataInputStream#unbuffer UOE should include stream class name
[ https://issues.apache.org/jira/browse/HADOOP-14864?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16165704#comment-16165704 ]

Bharat Viswanadham edited comment on HADOOP-14864 at 9/14/17 4:59 AM:
----------------------------------------------------------------------

[~jzhuge] Thanks for review, and for updating patch with checkstyle issues.
I think previous failed testcases are not related to this patch.

was (Author: bharatviswa):
[~jzhuge] Thanks for review, and updating patch with checkstyle issues.
I think previous failed testcases are not related to this patch.

> FSDataInputStream#unbuffer UOE should include stream class name
> ---------------------------------------------------------------
>
>                 Key: HADOOP-14864
>                 URL: https://issues.apache.org/jira/browse/HADOOP-14864
>             Project: Hadoop Common
>          Issue Type: Improvement
>          Components: fs
>    Affects Versions: 2.6.4
>            Reporter: John Zhuge
>            Assignee: Bharat Viswanadham
>            Priority: Minor
>              Labels: newbie, supportability
>         Attachments: HADOOP-14864.01.patch, HADOOP-14864.02.patch, HADOOP-14864.patch
>
> The current exception message:
> {noformat}
> org/apache/hadoop/fs/ failed: error:
> UnsupportedOperationException: this stream does not support
> unbuffering.java.lang.UnsupportedOperationException: this stream does not
> support unbuffering.
>       at
> org.apache.hadoop.fs.FSDataInputStream.unbuffer(FSDataInputStream.java:233)
> {noformat}
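The supportability fix tracked here is small but useful: when the wrapped stream cannot unbuffer, the UnsupportedOperationException message should name the concrete stream class so it can be identified from logs alone. A hedged sketch of that idea (illustrative names; not the actual FSDataInputStream code):

```java
import java.io.InputStream;

/**
 * Illustrative sketch, not the real FSDataInputStream code: build the
 * UnsupportedOperationException message so that it includes the
 * concrete class name of the wrapped stream.
 */
public class UnbufferSketch {

  /** Message naming the wrapped stream's class, per the JIRA's intent. */
  public static String unsupportedMessage(InputStream wrapped) {
    return "this stream " + wrapped.getClass().getName()
        + " does not support unbuffering.";
  }

  public static void unbuffer(InputStream wrapped) {
    // A real implementation would first check whether the wrapped stream
    // implements the unbuffer capability; here we always fail, purely to
    // show the improved message format.
    throw new UnsupportedOperationException(unsupportedMessage(wrapped));
  }
}
```

With this change the log line identifies the offending stream type directly, instead of the opaque "this stream does not support unbuffering." shown in the issue description.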
[jira] [Commented] (HADOOP-14864) FSDataInputStream#unbuffer UOE should include stream class name
[ https://issues.apache.org/jira/browse/HADOOP-14864?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16165704#comment-16165704 ]

Bharat Viswanadham commented on HADOOP-14864:
---------------------------------------------

[~jzhuge] Thanks for review, and updating patch with checkstyle issues.
I think previous failed testcases are not related to this patch.
[jira] [Commented] (HADOOP-13600) S3a rename() to copy files in a directory in parallel
[ https://issues.apache.org/jira/browse/HADOOP-13600?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16165701#comment-16165701 ]

ASF GitHub Bot commented on HADOOP-13600:
-----------------------------------------

Github user sahilTakiar commented on the issue:

    https://github.com/apache/hadoop/pull/157

    Updates:
    * Moved the parallel rename logic into a dedicated class called `ParallelDirectoryRenamer`
    * A few other bug fixes; the core logic remains the same

    @steveloughran your last comment on HADOOP-13786 suggested you may move the retry logic out into a separate patch? Are you planning to do that? If not, do you think this patch requires waiting for all the work in HADOOP-13786 to be completed? If there are concerns with retry behavior, we could also set the default value of the copy thread pool to 1; that way this feature is essentially off by default.

    Also, what do you mean by "isn't going to be resilient to large copies where you are much more likely to hit parallel IO"? What parallel IO are you referring to?

> S3a rename() to copy files in a directory in parallel
> -----------------------------------------------------
>
>                 Key: HADOOP-13600
>                 URL: https://issues.apache.org/jira/browse/HADOOP-13600
>             Project: Hadoop Common
>          Issue Type: Sub-task
>          Components: fs/s3
>    Affects Versions: 2.7.3
>            Reporter: Steve Loughran
>            Assignee: Sahil Takiar
>         Attachments: HADOOP-13600.001.patch
>
> Currently a directory rename does a one-by-one copy, making the request
> O(files * data). If the copy operations were launched in parallel, the
> duration of the copy may be reducible to the duration of the longest copy.
> For a directory with many files, this will be significant
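The one-by-one copy described in the issue can be parallelized by submitting each per-file copy to a thread pool and blocking on all the futures, so the rename takes roughly as long as the single longest copy rather than the sum of all copies. A minimal sketch of that idea under assumed names (this is not the real S3AFileSystem or ParallelDirectoryRenamer code; FileCopier stands in for the S3 COPY request):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

/**
 * Illustrative sketch: run the per-file copies of a directory rename
 * on a fixed thread pool and wait for all of them to finish.
 */
public class ParallelRenameSketch {

  /** Stand-in for the underlying object-store COPY call. */
  public interface FileCopier {
    void copyFile(String srcKey, String dstKey) throws Exception;
  }

  /** Copy every {srcKey, dstKey} pair in parallel; fail fast on error. */
  public static void copyAll(FileCopier copier, List<String[]> pairs,
      int threads) {
    ExecutorService pool = Executors.newFixedThreadPool(threads);
    try {
      List<Future<Void>> results = new ArrayList<>();
      for (String[] p : pairs) {
        final String src = p[0];
        final String dst = p[1];
        results.add(pool.submit((Callable<Void>) () -> {
          copier.copyFile(src, dst);
          return null;
        }));
      }
      // Block until every copy finishes; the first failure propagates.
      for (Future<Void> f : results) {
        f.get();
      }
    } catch (Exception e) {
      throw new RuntimeException("rename copy failed", e);
    } finally {
      pool.shutdown();
    }
  }
}
```

Note that setting the pool size to 1 recovers sequential behavior, which matches the comment's suggestion of defaulting the copy thread pool to 1 so the feature is effectively off by default.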
[jira] [Updated] (HADOOP-14864) FSDataInputStream#unbuffer UOE should include stream class name
[ https://issues.apache.org/jira/browse/HADOOP-14864?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

John Zhuge updated HADOOP-14864:
--------------------------------
    Attachment: HADOOP-14864.02.patch

+1 LGTM [~bharatviswa]. Uploaded patch 02 with checkstyle fixes. Will commit after pre-commit tests pass.
[jira] [Commented] (HADOOP-13600) S3a rename() to copy files in a directory in parallel
[ https://issues.apache.org/jira/browse/HADOOP-13600?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16165696#comment-16165696 ]

ASF GitHub Bot commented on HADOOP-13600:
-----------------------------------------

Github user sahilTakiar commented on a diff in the pull request:

    https://github.com/apache/hadoop/pull/157#discussion_r138791863

    --- Diff: hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java ---
    @@ -241,26 +242,17 @@ public StorageStatistics provide() {
               }
             });
    -    int maxThreads = conf.getInt(MAX_THREADS, DEFAULT_MAX_THREADS);
    -    if (maxThreads < 2) {
    -      LOG.warn(MAX_THREADS + " must be at least 2: forcing to 2.");
    -      maxThreads = 2;
    -    }
    +    int maxThreads = getMaxThreads(conf, MAX_THREADS, DEFAULT_MAX_THREADS);
         int totalTasks = intOption(conf, MAX_TOTAL_TASKS, DEFAULT_MAX_TOTAL_TASKS, 1);
         long keepAliveTime = longOption(conf, KEEPALIVE_TIME, DEFAULT_KEEPALIVE_TIME, 0);
    +
    --- End diff --

    Fixed
[jira] [Commented] (HADOOP-13600) S3a rename() to copy files in a directory in parallel
[ https://issues.apache.org/jira/browse/HADOOP-13600?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16165698#comment-16165698 ]

ASF GitHub Bot commented on HADOOP-13600:
-----------------------------------------

Github user sahilTakiar commented on a diff in the pull request:

    https://github.com/apache/hadoop/pull/157#discussion_r138791881

    --- Diff: hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java ---
    @@ -303,7 +296,37 @@ public StorageStatistics provide() {
         } catch (AmazonClientException e) {
           throw translateException("initializing ", new Path(name), e);
         }
    +  }
    +
    +  private int getMaxThreads(Configuration conf, String maxThreadsKey, int defaultMaxThreads) {
    +    int maxThreads = conf.getInt(maxThreadsKey, defaultMaxThreads);
    +    if (maxThreads < 2) {
    +      LOG.warn(maxThreadsKey + " must be at least 2: forcing to 2.");
    +      maxThreads = 2;
    +    }
    +    return maxThreads;
    +  }
    +
    +  private LazyTransferManager createLazyUploadTransferManager(Configuration conf) {
    --- End diff --

    Done
[jira] [Commented] (HADOOP-13600) S3a rename() to copy files in a directory in parallel
[ https://issues.apache.org/jira/browse/HADOOP-13600?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16165697#comment-16165697 ]

ASF GitHub Bot commented on HADOOP-13600:
-----------------------------------------

Github user sahilTakiar commented on a diff in the pull request:

    https://github.com/apache/hadoop/pull/157#discussion_r138791871

    --- Diff: hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java ---
    @@ -303,7 +296,37 @@ public StorageStatistics provide() {
    +  private int getMaxThreads(Configuration conf, String maxThreadsKey, int defaultMaxThreads) {
    --- End diff --

    Done
[jira] [Updated] (HADOOP-14864) FSDataInputStream#unbuffer UOE should include stream class name
[ https://issues.apache.org/jira/browse/HADOOP-14864?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

John Zhuge updated HADOOP-14864:
--------------------------------
    Summary: FSDataInputStream#unbuffer UOE should include stream class name  (was: FSDataInputStream#unbuffer UOE exception should print the stream class name)
[jira] [Commented] (HADOOP-13600) S3a rename() to copy files in a directory in parallel
[ https://issues.apache.org/jira/browse/HADOOP-13600?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16165695#comment-16165695 ]

ASF GitHub Bot commented on HADOOP-13600:
-----------------------------------------

Github user sahilTakiar commented on a diff in the pull request:

    https://github.com/apache/hadoop/pull/157#discussion_r138791767

    --- Diff: hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/CopyContext.java ---
    @@ -0,0 +1,34 @@
    +package org.apache.hadoop.fs.s3a;
    +
    +import com.amazonaws.services.s3.transfer.Copy;
    +
    +class CopyContext {
    +
    +  private final Copy copy;
    +  private final String srcKey;
    +  private final String dstKey;
    +  private final long length;
    +
    +  CopyContext(Copy copy, String srcKey, String dstKey, long length) {
    +    this.copy = copy;
    +    this.srcKey = srcKey;
    +    this.dstKey = dstKey;
    +    this.length = length;
    +  }
    +
    +  Copy getCopy() {
    +    return copy;
    +  }
    +
    +  String getSrcKey() {
    +    return srcKey;
    +  }
    +
    +  String getDstKey() {
    --- End diff --

    Fixed
[jira] [Commented] (HADOOP-13600) S3a rename() to copy files in a directory in parallel
[ https://issues.apache.org/jira/browse/HADOOP-13600?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16165693#comment-16165693 ]

ASF GitHub Bot commented on HADOOP-13600:
-----------------------------------------

Github user sahilTakiar commented on a diff in the pull request:

    https://github.com/apache/hadoop/pull/157#discussion_r138791721

    --- Diff: hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java ---
    @@ -891,50 +902,123 @@ private boolean innerRename(Path source, Path dest)
         }

         List<DeleteObjectsRequest.KeyVersion> keysToDelete = new ArrayList<>();
    +    List<DeleteObjectsRequest.KeyVersion> dirKeysToDelete = new ArrayList<>();
         if (dstStatus != null && dstStatus.isEmptyDirectory() == Tristate.TRUE) {
           // delete unnecessary fake directory.
           keysToDelete.add(new DeleteObjectsRequest.KeyVersion(dstKey));
         }

    -    Path parentPath = keyToPath(srcKey);
    -    RemoteIterator<LocatedFileStatus> iterator = listFilesAndEmptyDirectories(
    -        parentPath, true);
    -    while (iterator.hasNext()) {
    -      LocatedFileStatus status = iterator.next();
    -      long length = status.getLen();
    -      String key = pathToKey(status.getPath());
    -      if (status.isDirectory() && !key.endsWith("/")) {
    -        key += "/";
    -      }
    -      keysToDelete
    -          .add(new DeleteObjectsRequest.KeyVersion(key));
    -      String newDstKey =
    -          dstKey + key.substring(srcKey.length());
    -      copyFile(key, newDstKey, length);
    -
    -      if (hasMetadataStore()) {
    -        // with a metadata store, the object entries need to be updated,
    -        // including, potentially, the ancestors
    -        Path childSrc = keyToQualifiedPath(key);
    -        Path childDst = keyToQualifiedPath(newDstKey);
    -        if (objectRepresentsDirectory(key, length)) {
    -          S3Guard.addMoveDir(metadataStore, srcPaths, dstMetas, childSrc,
    -              childDst, username);
    +    // A blocking queue that tracks all objects that need to be deleted
    +    BlockingQueue<Optional<DeleteObjectsRequest.KeyVersion>> deleteQueue =
    +        new ArrayBlockingQueue<>((int) Math.round(MAX_ENTRIES_TO_DELETE * 1.5));
    +
    +    // Used to track if the delete thread was gracefully shutdown
    +    boolean deleteFutureComplete = false;
    +    FutureTask<Void> deleteFuture = null;
    +
    +    try {
    +      // Launch a thread that will read from the deleteQueue and batch
    +      // delete any files that have already been copied
    +      deleteFuture = new FutureTask<>(() -> {
    +        while (true) {
    +          while (keysToDelete.size() < MAX_ENTRIES_TO_DELETE) {
    +            Optional<DeleteObjectsRequest.KeyVersion> key = deleteQueue.take();
    +
    +            // The thread runs until it is given an EOF message (an Optional#empty())
    +            if (key.isPresent()) {
    --- End diff --

    I removed the usage of `Optional`. Using a `private static final DeleteObjectsRequest.KeyVersion END_OF_KEYS_TO_DELETE = new DeleteObjectsRequest.KeyVersion(null, null);` as the EOF instead.
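The sentinel ("poison pill") pattern described in the comment above, a shared end-of-stream object checked by identity instead of an Optional#empty(), can be shown in isolation like this (illustrative names, not the patch's code; the patch's sentinel is a KeyVersion rather than a String):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.BlockingQueue;

/**
 * Illustrative sketch: a consumer drains a BlockingQueue, batching
 * items until it sees the shared "poison pill" instance, then flushes
 * the final partial batch and exits.
 */
public class PoisonPillConsumer {

  /** Sentinel identity object: seeing this exact instance means "no more keys". */
  public static final String END_OF_KEYS = new String("EOF");

  /** Drain the queue into batches of at most batchSize keys each. */
  public static List<List<String>> drain(BlockingQueue<String> queue,
      int batchSize) {
    List<List<String>> batches = new ArrayList<>();
    List<String> batch = new ArrayList<>();
    try {
      while (true) {
        String key = queue.take();
        if (key == END_OF_KEYS) { // identity check, like the sentinel KeyVersion
          if (!batch.isEmpty()) {
            batches.add(batch);   // flush the remainder before exiting
          }
          return batches;
        }
        batch.add(key);
        if (batch.size() == batchSize) {
          batches.add(batch);     // full batch: flush and start a new one
          batch = new ArrayList<>();
        }
      }
    } catch (InterruptedException e) {
      Thread.currentThread().interrupt();
      throw new RuntimeException(e);
    }
  }
}
```

The identity comparison is what makes the pill safe: any equal-valued key coming from producers is still data, and only the one shared instance terminates the consumer.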
[jira] [Commented] (HADOOP-13600) S3a rename() to copy files in a directory in parallel
[ https://issues.apache.org/jira/browse/HADOOP-13600?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16165694#comment-16165694 ]

ASF GitHub Bot commented on HADOOP-13600:
-----------------------------------------

Github user sahilTakiar commented on a diff in the pull request:

    https://github.com/apache/hadoop/pull/157#discussion_r138791756

    --- Diff: hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java ---
    @@ -891,50 +902,123 @@ private boolean innerRename(Path source, Path dest)
         }

         List<DeleteObjectsRequest.KeyVersion> keysToDelete = new ArrayList<>();
    +    List<DeleteObjectsRequest.KeyVersion> dirKeysToDelete = new ArrayList<>();
         if (dstStatus != null && dstStatus.isEmptyDirectory() == Tristate.TRUE) {
           // delete unnecessary fake directory.
           keysToDelete.add(new DeleteObjectsRequest.KeyVersion(dstKey));
         }

    -    Path parentPath = keyToPath(srcKey);
    -    RemoteIterator<LocatedFileStatus> iterator = listFilesAndEmptyDirectories(
    -        parentPath, true);
    -    while (iterator.hasNext()) {
    -      LocatedFileStatus status = iterator.next();
    -      long length = status.getLen();
    -      String key = pathToKey(status.getPath());
    -      if (status.isDirectory() && !key.endsWith("/")) {
    -        key += "/";
    -      }
    -      keysToDelete
    -          .add(new DeleteObjectsRequest.KeyVersion(key));
    -      String newDstKey =
    -          dstKey + key.substring(srcKey.length());
    -      copyFile(key, newDstKey, length);
    -
    -      if (hasMetadataStore()) {
    -        // with a metadata store, the object entries need to be updated,
    -        // including, potentially, the ancestors
    -        Path childSrc = keyToQualifiedPath(key);
    -        Path childDst = keyToQualifiedPath(newDstKey);
    -        if (objectRepresentsDirectory(key, length)) {
    -          S3Guard.addMoveDir(metadataStore, srcPaths, dstMetas, childSrc,
    -              childDst, username);
    +    // A blocking queue that tracks all objects that need to be deleted
    +    BlockingQueue<Optional<DeleteObjectsRequest.KeyVersion>> deleteQueue =
    +        new ArrayBlockingQueue<>((int) Math.round(MAX_ENTRIES_TO_DELETE * 1.5));
    +
    +    // Used to track if the delete thread was gracefully shutdown
    +    boolean deleteFutureComplete = false;
    +    FutureTask<Void> deleteFuture = null;
    +
    +    try {
    +      // Launch a thread that will read from the deleteQueue and batch
    +      // delete any files that have already been copied
    +      deleteFuture = new FutureTask<>(() -> {
    +        while (true) {
    +          while (keysToDelete.size() < MAX_ENTRIES_TO_DELETE) {
    +            Optional<DeleteObjectsRequest.KeyVersion> key = deleteQueue.take();
    +
    +            // The thread runs until it is given an EOF message (an Optional#empty())
    +            if (key.isPresent()) {
    +              keysToDelete.add(key.get());
    +            } else {
    +
    +              // Delete any remaining keys and exit
    +              removeKeys(keysToDelete, true, false);
    +              return null;
    +            }
    +          }
    +          removeKeys(keysToDelete, true, false);
    +        }
    +      });
    +
    +      Thread deleteThread = new Thread(deleteFuture);
    +      deleteThread.setName("s3a-rename-delete-thread");
    +      deleteThread.start();
    +
    +      // Used to abort future copy tasks as soon as one copy task fails
    +      AtomicBoolean copyFailure = new AtomicBoolean(false);
    +      List<CopyContext> copies = new ArrayList<>();
    +
    +      Path parentPath = keyToPath(srcKey);
    +      RemoteIterator<LocatedFileStatus> iterator = listFilesAndEmptyDirectories(
    +          parentPath, true);
    +      while (iterator.hasNext()) {
    +        LocatedFileStatus status = iterator.next();
    +        long length = status.getLen();
    +        String key = pathToKey(status.getPath());
    +        if (status.isDirectory() && !key.endsWith("/")) {
    +          key += "/";
    +        }
    +        if (status.isDirectory()) {
    +          dirKeysToDelete.add(new DeleteObjectsRequest.KeyVersion(key));
    +        }
    +        String newDstKey =
    +            dstKey + key.substring(srcKey.length());
    +
    +        // If no previous file hit a copy failure, copy this file
    +        if (!copyFailure.get()) {
    +          copies.add(new CopyContext(copyFileAsync(key, newDstKey,
    +              new RenameProgressListener(this, srcStatus, status.isDirectory() ? null :
    +              new DeleteObjectsRequest.KeyVersion(key), deleteQueue, copyFailure)),
    +              key, newDstKey, length));
             } else {
    -        S3Guard.addMoveFile(metadataStore, srcPaths, dstMetas, childSrc,
    -
[jira] [Commented] (HADOOP-14835) mvn site build throws SAX errors
[ https://issues.apache.org/jira/browse/HADOOP-14835?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16165691#comment-16165691 ] Hadoop QA commented on HADOOP-14835: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 31s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 1m 54s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 59s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 17m 5s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 10m 11s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 5m 58s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 20s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 1m 49s{color} | {color:red} hadoop-yarn in the patch failed. 
{color} | | {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 0m 29s{color} | {color:red} hadoop-mapreduce-client in the patch failed. {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 13m 12s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 13m 12s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 11m 5s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} shellcheck {color} | {color:green} 0m 0s{color} | {color:green} There were no new shellcheck issues. {color} | | {color:green}+1{color} | {color:green} shelldocs {color} | {color:green} 0m 10s{color} | {color:green} There were no new shelldocs issues. {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 5s{color} | {color:green} The patch has no ill-formed XML file. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 5m 8s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red}184m 16s{color} | {color:red} root in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 42s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black}288m 3s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.yarn.server.nodemanager.containermanager.TestContainerManager | | | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure | | | hadoop.hdfs.web.TestWebHDFSXAttr | | | hadoop.hdfs.TestLeaseRecoveryStriped | | | hadoop.hdfs.TestClientProtocolForPipelineRecovery | | | hadoop.hdfs.TestReconstructStripedFile | | | hadoop.hdfs.server.namenode.ha.TestDNFencingWithReplication | | | hadoop.hdfs.server.namenode.ha.TestPipelinesFailover | | | hadoop.hdfs.web.TestHttpsFileSystem | | | hadoop.hdfs.TestEncryptedTransfer | | | hadoop.hdfs.server.blockmanagement.TestReplicationPolicyWithNodeGroup | | | hadoop.hdfs.TestReplaceDatanodeOnFailure | | Timed out junit tests | org.apache.hadoop.hdfs.TestWriteReadStripedFile | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:71bbb86 | | JIRA Issue | HADOOP-14835 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12886979/HADOOP-14835.003.patch | | Optional Tests | asflicense shellcheck shelldocs compile javac javadoc mvninstall mvnsite unit xml | | uname | Linux 2f773b481c6d 3.13.0-116-generic #163-Ubuntu SMP Fri Mar 31 14:13:22 UTC 2017
[jira] [Commented] (HADOOP-13600) S3a rename() to copy files in a directory in parallel
[ https://issues.apache.org/jira/browse/HADOOP-13600?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16165689#comment-16165689 ] ASF GitHub Bot commented on HADOOP-13600: - Github user sahilTakiar commented on a diff in the pull request: https://github.com/apache/hadoop/pull/157#discussion_r138791626 --- Diff: hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/CopyContext.java --- @@ -0,0 +1,34 @@ +package org.apache.hadoop.fs.s3a; --- End diff -- Whoops, always forget those. Fixed. > S3a rename() to copy files in a directory in parallel > - > > Key: HADOOP-13600 > URL: https://issues.apache.org/jira/browse/HADOOP-13600 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Affects Versions: 2.7.3 >Reporter: Steve Loughran >Assignee: Sahil Takiar > Attachments: HADOOP-13600.001.patch > > > Currently a directory rename does a one-by-one copy, making the request > O(files * data). If the copy operations were launched in parallel, the > duration of the copy may be reducible to the duration of the longest copy. > For a directory with many files, this will be significant -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-13600) S3a rename() to copy files in a directory in parallel
[ https://issues.apache.org/jira/browse/HADOOP-13600?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16165690#comment-16165690 ] ASF GitHub Bot commented on HADOOP-13600: - Github user sahilTakiar commented on a diff in the pull request: https://github.com/apache/hadoop/pull/157#discussion_r138791635 --- Diff: hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/LazyTransferManager.java --- @@ -0,0 +1,63 @@ +package org.apache.hadoop.fs.s3a; --- End diff -- Fixed > S3a rename() to copy files in a directory in parallel > - > > Key: HADOOP-13600 > URL: https://issues.apache.org/jira/browse/HADOOP-13600 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Affects Versions: 2.7.3 >Reporter: Steve Loughran >Assignee: Sahil Takiar > Attachments: HADOOP-13600.001.patch > > > Currently a directory rename does a one-by-one copy, making the request > O(files * data). If the copy operations were launched in parallel, the > duration of the copy may be reducible to the duration of the longest copy. > For a directory with many files, this will be significant -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
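The issue description's core claim — that launching the per-file COPY requests concurrently drops the rename's wall-clock cost from the sum of all copies to roughly the longest single copy — is the standard fan-out/join pattern. A generic sketch with an executor (names illustrative; the actual patch appears to issue asynchronous S3 copy requests rather than blocking tasks):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class ParallelCopySketch {
    /** Submits one task per file, then waits for all of them; the wall-clock
     *  cost is bounded by the slowest copy rather than the sum of all copies. */
    static List<String> copyAll(List<String> keys, int threads) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(threads);
        try {
            List<Future<String>> futures = new ArrayList<>();
            for (String key : keys) {
                // Stand-in for an S3 COPY request for one object.
                futures.add(pool.submit(() -> "copied:" + key));
            }
            List<String> results = new ArrayList<>();
            for (Future<String> f : futures) {
                results.add(f.get()); // rethrows if any copy failed
            }
            return results;
        } finally {
            pool.shutdown();
        }
    }

    public static void main(String[] args) throws Exception {
        // Results come back in submission order regardless of completion order.
        System.out.println(copyAll(Arrays.asList("dir/a", "dir/b"), 2));
        // prints [copied:dir/a, copied:dir/b]
    }
}
```

Collecting the futures before calling get() is what makes the copies overlap; calling get() inside the submission loop would serialize them again.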
[jira] [Commented] (HADOOP-13917) Ensure nightly builds run the integration tests for the shaded client
[ https://issues.apache.org/jira/browse/HADOOP-13917?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16165683#comment-16165683 ] Hadoop QA commented on HADOOP-13917: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 16s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 39s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 14s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 16s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 11s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 12s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 11s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 11s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 
0m 13s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 1s{color} | {color:green} The patch has no ill-formed XML file. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 7s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 11s{color} | {color:green} hadoop-minicluster in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 13s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 16m 24s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:71bbb86 | | JIRA Issue | HADOOP-13917 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12887020/HADOOP-13917.WIP.0.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit xml | | uname | Linux 83eccf481b25 3.13.0-117-generic #164-Ubuntu SMP Fri Apr 7 11:05:26 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / e0b3c64 | | Default Java | 1.8.0_144 | | Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/13284/testReport/ | | modules | C: hadoop-minicluster U: hadoop-minicluster | | Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/13284/console | | Powered by | Apache Yetus 0.6.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. 
> Ensure nightly builds run the integration tests for the shaded client > - > > Key: HADOOP-13917 > URL: https://issues.apache.org/jira/browse/HADOOP-13917 > Project: Hadoop Common > Issue Type: Sub-task > Components: build, test >Affects Versions: 3.0.0-alpha2 >Reporter: Sean Busbey >Assignee: Sean Busbey >Priority: Critical > Attachments: HADOOP-13917.WIP.0.patch > > > Either QBT or a different jenkins job should run our integration tests, > specifically the ones added for the shaded client. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Comment Edited] (HADOOP-14651) Update okhttp version to 2.7.5
[ https://issues.apache.org/jira/browse/HADOOP-14651?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16165677#comment-16165677 ] John Zhuge edited comment on HADOOP-14651 at 9/14/17 3:39 AM: -- Gotta bump "com.squareup.okhttp.mockwebserver" to 2.7.5 as well. Uploaded patch 002. All ADLS live unit tests passed. was (Author: jzhuge): Uploaded patch 002. All ADLS live unit tests passed. > Update okhttp version to 2.7.5 > -- > > Key: HADOOP-14651 > URL: https://issues.apache.org/jira/browse/HADOOP-14651 > Project: Hadoop Common > Issue Type: Sub-task >Affects Versions: 3.0.0-beta1 >Reporter: Ray Chiang >Assignee: Ray Chiang > Fix For: 3.0.0-beta1 > > Attachments: HADOOP-14651.001.patch, HADOOP-14651.002.patch > > > The current artifact is: > com.squareup.okhttp:okhttp:2.4.0 > That version could either be bumped to 2.7.5 (the latest of that line), or > use the latest artifact: > com.squareup.okhttp3:okhttp:3.8.1 -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-14651) Update okhttp version to 2.7.5
[ https://issues.apache.org/jira/browse/HADOOP-14651?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] John Zhuge updated HADOOP-14651: Attachment: HADOOP-14651.002.patch Uploaded patch 002. All ADLS live unit tests passed. > Update okhttp version to 2.7.5 > -- > > Key: HADOOP-14651 > URL: https://issues.apache.org/jira/browse/HADOOP-14651 > Project: Hadoop Common > Issue Type: Sub-task >Affects Versions: 3.0.0-beta1 >Reporter: Ray Chiang >Assignee: Ray Chiang > Fix For: 3.0.0-beta1 > > Attachments: HADOOP-14651.001.patch, HADOOP-14651.002.patch > > > The current artifact is: > com.squareup.okhttp:okhttp:2.4.0 > That version could either be bumped to 2.7.5 (the latest of that line), or > use the latest artifact: > com.squareup.okhttp3:okhttp:3.8.1 -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Comment Edited] (HADOOP-14738) Remove S3N and obsolete bits of S3A; rework docs
[ https://issues.apache.org/jira/browse/HADOOP-14738?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16165668#comment-16165668 ] Aaron Fabbri edited comment on HADOOP-14738 at 9/14/17 3:38 AM: javac warning is existing FAST_UPLOAD constant deprecation. mvninstall issue w/ minicluster is a dependency version issue, with minicluster using a non-SNAPSHOT build number: {noformat} [INFO] --- maven-enforcer-plugin:3.0.0-M1:enforce (depcheck) @ hadoop-client-minicluster --- [WARNING] Dependency convergence error for org.apache.hadoop:hadoop-annotations:3.1.0-20170913.230947-58 paths to dependency are: +-org.apache.hadoop:hadoop-client-minicluster:3.1.0-SNAPSHOT +-org.apache.hadoop:hadoop-annotations:3.1.0-20170913.230947-58 and +-org.apache.hadoop:hadoop-client-minicluster:3.1.0-SNAPSHOT +-org.apache.hadoop:hadoop-minicluster:3.1.0-SNAPSHOT +-org.apache.hadoop:hadoop-yarn-server-tests:3.1.0-SNAPSHOT +-org.apache.hadoop:hadoop-annotations:3.1.0-SNAPSHOT {noformat} I'm not sure where that 3.1.0-2017... version is coming from. was (Author: fabbri): javac warning is existing FAST_UPLOAD constant deprecation. 
mvninstall issue w/ minicluster is a dependency version issue, with minicluster using a non-SNAPSHOT build number: {noformat} [INFO] --- maven-enforcer-plugin:3.0.0-M1:enforce (depcheck) @ hadoop-client-minicluster --- [WARNING] Dependency convergence error for org.apache.hadoop:hadoop-annotations:3.1.0-20170913.230947-58 paths to dependency are: +-org.apache.hadoop:hadoop-client-minicluster:3.1.0-SNAPSHOT +-org.apache.hadoop:hadoop-annotations:3.1.0-20170913.230947-58 and +-org.apache.hadoop:hadoop-client-minicluster:3.1.0-SNAPSHOT +-org.apache.hadoop:hadoop-minicluster:3.1.0-SNAPSHOT +-org.apache.hadoop:hadoop-yarn-server-tests:3.1.0-SNAPSHOT +-org.apache.hadoop:hadoop-annotations:3.1.0-SNAPSHOT {noformat} > Remove S3N and obsolete bits of S3A; rework docs > > > Key: HADOOP-14738 > URL: https://issues.apache.org/jira/browse/HADOOP-14738 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Affects Versions: 2.9.0, 3.0.0-beta1 >Reporter: Steve Loughran >Assignee: Steve Loughran >Priority: Blocker > Attachments: HADOOP-14738-002.patch, HADOOP-14738-003.patch, > HADOOP-14738-004.patch, HADOOP-14738-005.patch, HADOOP-14738-006.patch, > HADOOP-14739-001.patch > > > We are all happy with S3A; it's been stable since Hadoop 2.7 and high-perf > since Hadoop 2.8 > It's now time to kill S3N off, remove the source, the tests, the transitive > dependencies. This patch does that. > It also removes the obsolete, original s3a output stream; the fast/block > upload stream has been stable and is much more manageable and maintained (put > differently: we don't ever look at the original S3A output stream, and tell > people not to use it for performance reasons). 
> As well as cutting the features, this patch updates the aws docs with > * split out s3n migration page > * split out troubleshooting page > * rework of the "uploading data with s3a" section of index.md, as there's no > need to discuss the slow upload except in the past tense...all that is needed > is to list the buffering and thread tuning options of the block uploader. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-14738) Remove S3N and obsolete bits of S3A; rework docs
[ https://issues.apache.org/jira/browse/HADOOP-14738?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16165668#comment-16165668 ] Aaron Fabbri commented on HADOOP-14738: --- javac warning is existing FAST_UPLOAD constant deprecation. mvninstall issue w/ minicluster is a dependency version issue, with minicluster using a non-SNAPSHOT build number: {noformat} [INFO] --- maven-enforcer-plugin:3.0.0-M1:enforce (depcheck) @ hadoop-client-minicluster --- [WARNING] Dependency convergence error for org.apache.hadoop:hadoop-annotations:3.1.0-20170913.230947-58 paths to dependency are: +-org.apache.hadoop:hadoop-client-minicluster:3.1.0-SNAPSHOT +-org.apache.hadoop:hadoop-annotations:3.1.0-20170913.230947-58 and +-org.apache.hadoop:hadoop-client-minicluster:3.1.0-SNAPSHOT +-org.apache.hadoop:hadoop-minicluster:3.1.0-SNAPSHOT +-org.apache.hadoop:hadoop-yarn-server-tests:3.1.0-SNAPSHOT +-org.apache.hadoop:hadoop-annotations:3.1.0-SNAPSHOT {noformat} > Remove S3N and obsolete bits of S3A; rework docs > > > Key: HADOOP-14738 > URL: https://issues.apache.org/jira/browse/HADOOP-14738 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Affects Versions: 2.9.0, 3.0.0-beta1 >Reporter: Steve Loughran >Assignee: Steve Loughran >Priority: Blocker > Attachments: HADOOP-14738-002.patch, HADOOP-14738-003.patch, > HADOOP-14738-004.patch, HADOOP-14738-005.patch, HADOOP-14738-006.patch, > HADOOP-14739-001.patch > > > We are all happy with S3A; it's been stable since Hadoop 2.7 and high-perf > since Hadoop 2.8 > It's now time to kill S3N off, remove the source, the tests, the transitive > dependencies. This patch does that. > It also removes the obsolete, original s3a output stream; the fast/block > upload stream has been stable and is much more manageable and maintained (put > differently: we don't ever look at the original S3A output stream, and tell > people not to use it for performance reasons). 
> As well as cutting the features, this patch updates the aws docs with > * split out s3n migration page > * split out troubleshooting page > * rework of the "uploading data with s3a" section of index.md, as there's no > need to discuss the slow upload except in the past tense...all that is needed > is to list the buffering and thread tuning options of the block uploader. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-14864) FSDataInputStream#unbuffer UOE exception should print the stream class name
[ https://issues.apache.org/jira/browse/HADOOP-14864?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16165666#comment-16165666 ] Hadoop QA commented on HADOOP-14864: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 27s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 1m 56s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 48s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 17m 20s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 2m 56s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 20s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 49s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 48s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 32s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:red}-1{color} | {color:red} 
mvninstall {color} | {color:red} 1m 4s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 13m 19s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 13m 19s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 2m 15s{color} | {color:orange} root: The patch generated 3 new + 15 unchanged - 0 fixed = 18 total (was 15) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 9s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 52s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 13s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red} 10m 16s{color} | {color:red} hadoop-common in the patch failed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red}126m 17s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 47s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black}236m 10s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.http.TestHttpServer | | | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting | | | hadoop.hdfs.server.blockmanagement.TestReplicationPolicy | | | hadoop.hdfs.web.TestWebHDFSForHA | | | hadoop.hdfs.TestLeaseRecoveryStriped | | | hadoop.hdfs.TestClientProtocolForPipelineRecovery | | | hadoop.hdfs.TestReconstructStripedFile | | | hadoop.hdfs.server.namenode.ha.TestDNFencingWithReplication | | | hadoop.hdfs.TestReadStripedFileWithMissingBlocks | | | hadoop.hdfs.TestEncryptedTransfer | | | hadoop.hdfs.server.blockmanagement.TestReplicationPolicyWithNodeGroup | | Timed out junit tests | org.apache.hadoop.hdfs.TestWriteReadStripedFile | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:71bbb86 | | JIRA Issue | HADOOP-14864 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12886981/HADOOP-14864.01.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux 8e4322416ab5 3.13.0-116-generic #163-Ubuntu SMP Fri Mar 31 14:13:22 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality |
[jira] [Updated] (HADOOP-14807) should prevent the possibility of NPE about ReconfigurableBase.java
[ https://issues.apache.org/jira/browse/HADOOP-14807?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] hu xiaodong updated HADOOP-14807: - Status: In Progress (was: Patch Available) > should prevent the possibility of NPE about ReconfigurableBase.java > > > Key: HADOOP-14807 > URL: https://issues.apache.org/jira/browse/HADOOP-14807 > Project: Hadoop Common > Issue Type: Improvement >Affects Versions: 3.0.0-alpha3 >Reporter: hu xiaodong >Assignee: hu xiaodong >Priority: Minor > Attachments: HADOOP-14807.001.patch > > > 1. NameNode.java may throw a ReconfigurationException whose getCause() is null > {code:title=NameNode.java|borderStyle=solid} > protected String reconfigurePropertyImpl(String property, String newVal) > throws ReconfigurationException { > final DatanodeManager datanodeManager = namesystem.getBlockManager() > .getDatanodeManager(); > if (property.equals(DFS_HEARTBEAT_INTERVAL_KEY)) { > return reconfHeartbeatInterval(datanodeManager, property, newVal); > } else if (property.equals(DFS_NAMENODE_HEARTBEAT_RECHECK_INTERVAL_KEY)) { > return reconfHeartbeatRecheckInterval(datanodeManager, property, > newVal); > } else if (property.equals(FS_PROTECTED_DIRECTORIES)) { > return reconfProtectedDirectories(newVal); > } else if (property.equals(HADOOP_CALLER_CONTEXT_ENABLED_KEY)) { > return reconfCallerContextEnabled(newVal); > } else if (property.equals(ipcClientRPCBackoffEnable)) { > return reconfigureIPCBackoffEnabled(newVal); > } >//=== >//here may throw a ReconfigurationException whose getCause() is null >//=== >else { > throw new ReconfigurationException(property, newVal, getConf().get( > property)); > } > } > {code} > 2. ReconfigurationThread.java will call > ReconfigurationException.getCause().getMessage(), which will cause an NPE. 
> {code:title=ReconfigurationThread.java|borderStyle=solid} > private static class ReconfigurationThread extends Thread { > private ReconfigurableBase parent; > ReconfigurationThread(ReconfigurableBase base) { > this.parent = base; > } > // See {@link ReconfigurationServlet#applyChanges} > public void run() { > LOG.info("Starting reconfiguration task."); > final Configuration oldConf = parent.getConf(); > final Configuration newConf = parent.getNewConf(); > final Collection<PropertyChange> changes = > parent.getChangedProperties(newConf, oldConf); > Map<PropertyChange, Optional<String>> results = Maps.newHashMap(); > ConfigRedactor oldRedactor = new ConfigRedactor(oldConf); > ConfigRedactor newRedactor = new ConfigRedactor(newConf); > for (PropertyChange change : changes) { > String errorMessage = null; > String oldValRedacted = oldRedactor.redact(change.prop, > change.oldVal); > String newValRedacted = newRedactor.redact(change.prop, > change.newVal); > if (!parent.isPropertyReconfigurable(change.prop)) { > LOG.info(String.format( > "Property %s is not configurable: old value: %s, new value: %s", > change.prop, > oldValRedacted, > newValRedacted)); > continue; > } > LOG.info("Change property: " + change.prop + " from \"" > + ((change.oldVal == null) ? "" : oldValRedacted) > + "\" to \"" > + ((change.newVal == null) ? "" : newValRedacted) > + "\"."); > try { > String effectiveValue = > parent.reconfigurePropertyImpl(change.prop, change.newVal); > if (change.newVal != null) { > oldConf.set(change.prop, effectiveValue); > } else { > oldConf.unset(change.prop); > } > } catch (ReconfigurationException e) { > //=== > // an NPE may occur here, because e.getCause() may be null. 
> //=== > errorMessage = e.getCause().getMessage(); > } > results.put(change, Optional.fromNullable(errorMessage)); > } > synchronized (parent.reconfigLock) { > parent.endTime = Time.now(); > parent.status = Collections.unmodifiableMap(results); > parent.reconfigThread = null; > } > } > } > {code} -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail:
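The defensive fix implied by the report above is to stop dereferencing getCause() unconditionally and fall back to the exception's own message when the cause is absent. A minimal sketch of that null-safe extraction (this is an illustration, not the attached HADOOP-14807.001.patch):

```java
public class NullSafeCause {
    /** Prefer the cause's message when a cause exists; otherwise fall back
     *  to the exception's own message instead of risking an NPE. */
    static String errorMessage(Exception e) {
        Throwable cause = e.getCause();
        return (cause != null) ? cause.getMessage() : e.getMessage();
    }

    public static void main(String[] args) {
        // Mirrors the JIRA: a ReconfigurationException built without a cause.
        Exception noCause = new Exception("property is not reconfigurable");
        Exception withCause = new Exception("wrapper", new IllegalStateException("bad value"));
        System.out.println(errorMessage(noCause));   // property is not reconfigurable
        System.out.println(errorMessage(withCause)); // bad value
    }
}
```

In ReconfigurationThread's catch block, this shape would replace the bare `e.getCause().getMessage()` call, covering both code paths in reconfigurePropertyImpl.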
[jira] [Commented] (HADOOP-14553) Add (parallelized) integration tests to hadoop-azure
[ https://issues.apache.org/jira/browse/HADOOP-14553?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16165663#comment-16165663 ] Aaron Fabbri commented on HADOOP-14553: --- Just noticed your comment above which answers one of my questions (CleanupTestContainers must be explicitly specified to run). I applied the v16 patch and ran all tests in hadoop-azure and hit a bunch of failures.. Not sure if I've done something stupid but this is what it looks like: {noformat} Failed tests: TestNativeAzureFileSystemConcurrency.testNoTempBlobsVisible:100->Assert.assertEquals:144->Assert.assertEquals:118->Assert.failNotEquals:743->Assert.fail:88 expected: but was: TestNativeAzureFileSystemContractMocked>FileSystemContractBaseTest.testListStatusRootDir:866->FileSystemContractBaseTest.assertListStatusFinds:896 Path wasb://mockcontai...@mockaccount.blob.core.windows.net/FileSystemContractBaseTest not found in directory wasb://mockcontai...@mockaccount.blob.core.windows.net/:FileStatus{path=wasb://mockAccount.blob.core.windows.net/mockContainer/FileSystemContractBaseTest; isDirectory=false; length=2048; replication=1; blocksize=536870912; modification_time=1505351566081; access_time=0; owner=fabbri; group=supergroup; permission=rw-r--r--; isSymlink=false; hasAcl=false; isEncrypted=false; isErasureCoded=false} TestNativeAzureFileSystemContractMocked>FileSystemContractBaseTest.testListStatus:330 expected:<1> but was:<2> TestNativeAzureFileSystemContractMocked>FileSystemContractBaseTest.testLSRootDir:856->FileSystemContractBaseTest.assertListFilesFinds:881 Path wasb://mockcontai...@mockaccount.blob.core.windows.net/FileSystemContractBaseTest not found in directory wasb://mockcontai...@mockaccount.blob.core.windows.net/:LocatedFileStatus{path=wasb://mockAccount.blob.core.windows.net/mockContainer/FileSystemContractBaseTest; isDirectory=false; length=2048; replication=1; blocksize=536870912; modification_time=1505351566450; access_time=0; owner=fabbri; 
group=supergroup; permission=rw-r--r--; isSymlink=false; hasAcl=false; isEncrypted=false; isErasureCoded=false} TestNativeAzureFileSystemContractMocked>FileSystemContractBaseTest.testDeleteRecursively:452 File doesn't exist TestNativeAzureFileSystemFileNameCheck.testWasbFsck:120->Assert.assertTrue:52->Assert.assertTrue:41->Assert.fail:86 null TestNativeAzureFileSystemMocked>NativeAzureFileSystemBaseTest.testDeepFileCreation:228->NativeAzureFileSystemBaseTest.testDeepFileCreationBase:218->NativeAzureFileSystemBaseTest.assertPathDoesNotExist:87->Assert.fail:88 deleted file: unexpectedly found deep/file/creation/test as FileStatus{path=wasb://mockcontai...@mockaccount.blob.core.windows.net/user/fabbri/deep/file/creation/test; isDirectory=false; length=0; replication=1; blocksize=536870912; modification_time=1505351569092; access_time=0; owner=fabbri; group=supergroup; permission=rw-r--r--; isSymlink=false; hasAcl=false; isEncrypted=false; isErasureCoded=false} TestNativeAzureFileSystemOperationsMocked>FSMainOperationsBaseTest.testListStatus:312 expected:<1> but was:<2> TestNativeAzureFileSystemOperationsMocked>FSMainOperationsBaseTest.testListStatusFilterWithSomeMatches:369 null TestNativeAzureFileSystemOperationsMocked>FSMainOperationsBaseTest.testGlobStatusSomeMatchesInDirectories:426 expected:<2> but was:<4> TestNativeAzureFileSystemOperationsMocked>FSMainOperationsBaseTest.testGlobStatusWithMultipleWildCardMatches:451 expected:<4> but was:<8> TestNativeAzureFileSystemOperationsMocked>FSMainOperationsBaseTest.testDeleteRecursively:777 File doesn't exist TestOutOfBandAzureBlobOperations.testImplicitFolderDeleted:94->Assert.assertFalse:74->Assert.assertFalse:64->Assert.assertTrue:41->Assert.fail:86 null TestOutOfBandAzureBlobOperations.testImplicitFolderListed:80->Assert.assertEquals:144->Assert.assertEquals:115 expected: but was: 
TestWasbFsck.testDelete:130->Assert.assertEquals:542->Assert.assertEquals:555->Assert.assertEquals:118->Assert.failNotEquals:743->Assert.fail:88 expected:<0> but was:<1> TestNativeAzureFileSystemMocked>NativeAzureFileSystemBaseTest.testListDirectory:358->Assert.assertEquals:542->Assert.assertEquals:555->Assert.assertEquals:118->Assert.failNotEquals:743->Assert.fail:88 expected:<1> but was:<2> TestNativeAzureFileSystemMocked>NativeAzureFileSystemBaseTest.testRedoRenameFolder:780->Assert.assertTrue:52->Assert.assertTrue:41->Assert.fail:86 null TestNativeAzureFileSystemMocked>NativeAzureFileSystemBaseTest.testStoreDeleteFolder:137->NativeAzureFileSystemBaseTest.assertPathDoesNotExist:87->Assert.fail:88 inner file: unexpectedly found wasb://mockcontai...@mockaccount.blob.core.windows.net/user/fabbri/fork-2/testStoreDeleteFolder/innerFile as FileStatus{path=wasb://mockcontai...@mockaccount.blob.core.windows.net/user/fabbri/fork-2/testStoreDeleteFolder/innerFile; isDirectory=false; length=0;
[jira] [Commented] (HADOOP-14652) Update metrics-core version to 3.2.4
[ https://issues.apache.org/jira/browse/HADOOP-14652?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16165662#comment-16165662 ] Hadoop QA commented on HADOOP-14652: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 20s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 20s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 11s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m 46s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 10m 22s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 5m 36s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 18s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 48s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} 
compile {color} | {color:green} 14m 22s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 14m 22s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 12m 40s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 6s{color} | {color:green} The patch has no ill-formed XML file. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 6m 50s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red}147m 0s{color} | {color:red} root in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 34s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black}247m 22s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure020 | | | hadoop.hdfs.TestReconstructStripedFile | | | hadoop.hdfs.server.namenode.ha.TestPipelinesFailover | | | hadoop.hdfs.TestLeaseRecoveryStriped | | | hadoop.hdfs.TestReplaceDatanodeOnFailure | | | hadoop.hdfs.TestClientProtocolForPipelineRecovery | | | hadoop.yarn.server.nodemanager.containermanager.TestContainerManager | | Timed out junit tests | org.apache.hadoop.hdfs.TestWriteReadStripedFile | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:71bbb86 | | JIRA Issue | HADOOP-14652 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12886755/HADOOP-14652.005.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit xml | | uname | Linux c32437e7471e 3.13.0-119-generic #166-Ubuntu SMP Wed May 3 12:18:55 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / bb34ae9 | | Default Java | 1.8.0_144 | | unit | https://builds.apache.org/job/PreCommit-HADOOP-Build/13282/artifact/patchprocess/patch-unit-root.txt | | Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/13282/testReport/ | | modules | C: hadoop-project hadoop-common-project/hadoop-kms hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager hadoop-tools/hadoop-sls . U: . | | Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/13282/console | | Powered by | Apache Yetus 0.6.0-SNAPSHOT http://yetus.apache.org | This message was
[jira] [Updated] (HADOOP-13917) Ensure nightly builds run the integration tests for the shaded client
[ https://issues.apache.org/jira/browse/HADOOP-13917?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sean Busbey updated HADOOP-13917: - Attachment: HADOOP-13917.WIP.0.patch -0 test > Ensure nightly builds run the integration tests for the shaded client > - > > Key: HADOOP-13917 > URL: https://issues.apache.org/jira/browse/HADOOP-13917 > Project: Hadoop Common > Issue Type: Sub-task > Components: build, test >Affects Versions: 3.0.0-alpha2 >Reporter: Sean Busbey >Assignee: Sean Busbey >Priority: Critical > Attachments: HADOOP-13917.WIP.0.patch > > > Either QBT or a different jenkins job should run our integration tests, > specifically the ones added for the shaded client. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-13917) Ensure nightly builds run the integration tests for the shaded client
[ https://issues.apache.org/jira/browse/HADOOP-13917?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sean Busbey updated HADOOP-13917: - Status: Patch Available (was: In Progress) > Ensure nightly builds run the integration tests for the shaded client > - > > Key: HADOOP-13917 > URL: https://issues.apache.org/jira/browse/HADOOP-13917 > Project: Hadoop Common > Issue Type: Sub-task > Components: build, test >Affects Versions: 3.0.0-alpha2 >Reporter: Sean Busbey >Assignee: Sean Busbey >Priority: Critical > Attachments: HADOOP-13917.WIP.0.patch > > > Either QBT or a different jenkins job should run our integration tests, > specifically the ones added for the shaded client. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-14865) Mvnsite fail to execute macro defined in the document HDFSErasureCoding.md
[ https://issues.apache.org/jira/browse/HADOOP-14865?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16165647#comment-16165647 ] Huafeng Wang commented on HADOOP-14865: --- Hi [~Sammi], I cannot reproduce this problem on the latest trunk; maybe we can close this one. > Mvnsite fail to execute macro defined in the document HDFSErasureCoding.md > -- > > Key: HADOOP-14865 > URL: https://issues.apache.org/jira/browse/HADOOP-14865 > Project: Hadoop Common > Issue Type: Bug > Components: build >Reporter: SammiChen >Assignee: Huafeng Wang > > [ERROR] Failed to execute goal > org.apache.maven.plugins:maven-site-plugin:3.6:site (default-site) on project > hadoop-hdfs: Error parsing > '/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HDFSErasureCoding.md': > line [-1] Error parsing the model: Unable to execute macro in the document: > toc -> [Help 1] -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-14651) Update okhttp version to 2.7.5
[ https://issues.apache.org/jira/browse/HADOOP-14651?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16165644#comment-16165644 ] John Zhuge commented on HADOOP-14651: - Hit TestACLFeatures failure during ADLS live unit tests: {noformat} 2017-09-13 20:02:34,204 [main] DEBUG HttpTransport - HTTPRequest,Failed,cReqId:ea949b65-d369-46f6-a557-ae8e4164bf1a.0,lat:19,err:java.net.ConnectException,Reqlen:0,Resplen:0,token_ns:670772,sReqId:null,path:/test1/test2,qp:op=GETACLSTATUS=true=2016-11-01Sep 13, 2017 8:02:34 PM com.squareup.okhttp.mockwebserver.MockWebServer$2 execute WARNING: MockWebServer[56116] failed unexpectedly java.lang.NoClassDefFoundError: com/squareup/okhttp/internal/spdy/IncomingStreamHandler at com.squareup.okhttp.mockwebserver.MockWebServer.serveConnection(MockWebServer.java:384) at com.squareup.okhttp.mockwebserver.MockWebServer.access$700(MockWebServer.java:88) at com.squareup.okhttp.mockwebserver.MockWebServer$2.acceptConnections(MockWebServer.java:360) at com.squareup.okhttp.mockwebserver.MockWebServer$2.execute(MockWebServer.java:327) at com.squareup.okhttp.internal.NamedRunnable.run(NamedRunnable.java:33) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) Caused by: java.lang.ClassNotFoundException: com.squareup.okhttp.internal.spdy.IncomingStreamHandler at java.net.URLClassLoader.findClass(URLClassLoader.java:381) at java.lang.ClassLoader.loadClass(ClassLoader.java:424) at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:331) at java.lang.ClassLoader.loadClass(ClassLoader.java:357) ... 
8 more {noformat} > Update okhttp version to 2.7.5 > -- > > Key: HADOOP-14651 > URL: https://issues.apache.org/jira/browse/HADOOP-14651 > Project: Hadoop Common > Issue Type: Sub-task >Affects Versions: 3.0.0-beta1 >Reporter: Ray Chiang >Assignee: Ray Chiang > Fix For: 3.0.0-beta1 > > Attachments: HADOOP-14651.001.patch > > > The current artifact is: > com.squareup.okhttp:okhttp:2.4.0 > That version could either be bumped to 2.7.5 (the latest of that line), or > use the latest artifact: > com.squareup.okhttp3:okhttp:3.8.1 -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-14738) Remove S3N and obsolete bits of S3A; rework docs
[ https://issues.apache.org/jira/browse/HADOOP-14738?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16165570#comment-16165570 ] Hadoop QA commented on HADOOP-14738: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 27s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 30 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 41s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 18s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 20m 53s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 2m 28s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 57s{color} | {color:green} trunk passed {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Skipped patched modules with no Java source: hadoop-project hadoop-client-modules/hadoop-client-minicluster {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 39s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 45s{color} | {color:green} trunk passed {color} | || || || || {color:brown} 
Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 41s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 3m 10s{color} | {color:red} hadoop-client-minicluster in the patch failed. {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m 22s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} javac {color} | {color:red} 16m 22s{color} | {color:red} root generated 1 new + 1282 unchanged - 1 fixed = 1283 total (was 1283) {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 2m 20s{color} | {color:orange} root: The patch generated 2 new + 33 unchanged - 105 fixed = 35 total (was 138) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 33s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 7s{color} | {color:green} The patch has no ill-formed XML file. {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Skipped patched modules with no Java source: hadoop-project hadoop-client-modules/hadoop-client-minicluster {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 33s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 11s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 22s{color} | {color:green} hadoop-project in the patch passed. 
{color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 8m 2s{color} | {color:green} hadoop-common in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 51s{color} | {color:green} hadoop-aws in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 30s{color} | {color:green} hadoop-client-minicluster in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 40s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}117m 54s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:71bbb86 | | JIRA Issue | HADOOP-14738 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12886978/HADOOP-14738-006.patch | | Optional Tests |
[jira] [Commented] (HADOOP-14553) Add (parallelized) integration tests to hadoop-azure
[ https://issues.apache.org/jira/browse/HADOOP-14553?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16165549#comment-16165549 ] Aaron Fabbri commented on HADOOP-14553: --- Coming back to test and review this again [~ste...@apache.org]. Have to remember my WASB setup. ;-) Quick question: Why is the container cleanup step a unit test? {noformat} mvn test -Dtest=CleanupTestContainers {noformat} Does that not run automatically in the unit test phase? I assume the other non-ITest extensions of {{AbstractWasbTestBase}} are all mocked tests? > Add (parallelized) integration tests to hadoop-azure > > > Key: HADOOP-14553 > URL: https://issues.apache.org/jira/browse/HADOOP-14553 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/azure >Affects Versions: 2.9.0 >Reporter: Steve Loughran >Assignee: Steve Loughran > Attachments: HADOOP-14553-001.patch, HADOOP-14553-002.patch, > HADOOP-14553-003.patch, HADOOP-14553-004.patch, HADOOP-14553-005.patch, > HADOOP-14553-006.patch, HADOOP-14553-007.patch, HADOOP-14553-008.patch, > HADOOP-14553-009.patch, HADOOP-14553-010.patch, HADOOP-14553-011.patch, > HADOOP-14553-012.patch, HADOOP-14553-014.patch, HADOOP-14553-015.patch, > HADOOP-14553-016.patch > > > The Azure tests are slow to run as they are serialized, as they are all > called Test* there's no clear differentiation from unit tests which Jenkins > can run, and integration tests which it can't. > Move the azure tests {{Test*}} to integration tests {{ITest*}}, parallelize > (which includes having separate paths for every test suite). The code in > hadoop-aws's POM show what to do. 
> *UPDATE August 4, 2017*: Adding a list of requirements to clarify the > acceptance criteria for this JIRA: > # Parallelize test execution > # Define test groups: i) UnitTests - self-contained, executed by Jenkins, ii) > IntegrationTests - requires Azure Storage account, executed by engineers > prior to check-in, and if needed, iii) ScaleTests – long running performance > and scalability tests. > # Define configuration profiles to run tests with different settings. Allows > an engineer to run “IntegrationTests” with fs.azure.secure.mode = true and > false. Need to review settings to see what else would benefit. > # Maven commands to run b) and c). Turns out it is not easy to do with > Maven, so we might have to run it multiple times to run with different > configuration settings. > # Document how to add and run tests and the process for contributing to > Apache Hadoop. Steve shared an example at > https://github.com/apache/hadoop/blob/trunk/hadoop-tools/hadoop-aws/src/site/markdown/tools/hadoop-aws/testing.md > > # UnitTests should run in under 2 minutes and IntegrationTests should run in > under 15 minutes, even on slower network connections. (These are rough goals) > # Ensure test data (containers/blobs/etc) is deleted. Exceptions for large > persistent content used repeatedly to expedite test execution. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
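The Test*/ITest* split that this JIRA introduces rests on a simple naming convention: Jenkins can pick up {{Test*}} classes as self-contained unit tests, while {{ITest*}} classes require a real Azure Storage account and are run by engineers before check-in. A minimal sketch of that classification (the helper name is illustrative, not part of the patch):

```shell
# Classify a test source file by the Test*/ITest* naming convention
# described in the acceptance criteria above. "classify_test" is an
# illustrative helper, not something shipped in hadoop-azure.
classify_test() {
  case "$(basename "$1" .java)" in
    ITest*) echo integration ;;  # needs an Azure Storage account
    Test*)  echo unit ;;         # self-contained, runnable by Jenkins
    *)      echo other ;;        # e.g. abstract bases, utilities
  esac
}
```

Under this convention a plain `mvn test` (what Jenkins runs) never touches the `ITest*` suites, which is also why `CleanupTestContainers` must be requested explicitly.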
[jira] [Commented] (HADOOP-14835) mvn site build throws SAX errors
[ https://issues.apache.org/jira/browse/HADOOP-14835?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16165517#comment-16165517 ] Allen Wittenauer commented on HADOOP-14835: --- Sorry, I've been super behind on everything. I'm a little hesitant to disable shading though, especially if the goal is to push users towards it. I'd be more inclined to disable javadoc honestly. > mvn site build throws SAX errors > > > Key: HADOOP-14835 > URL: https://issues.apache.org/jira/browse/HADOOP-14835 > Project: Hadoop Common > Issue Type: Bug > Components: build, site >Affects Versions: 3.0.0-beta1 >Reporter: Allen Wittenauer >Assignee: Andrew Wang >Priority: Blocker > Attachments: HADOOP-14835.001.patch, HADOOP-14835.002.patch, > HADOOP-14835.003.patch > > > Running mvn install site site:stage -DskipTests -Pdist,src > -Preleasedocs,docs results in a stack trace when run on a fresh .m2 > directory. It appears to be coming from the jdiff doclets in the annotations > code. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-14089) Automated checking for malformed client artifacts.
[ https://issues.apache.org/jira/browse/HADOOP-14089?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16165513#comment-16165513 ] Hudson commented on HADOOP-14089: - SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #12866 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/12866/]) HADOOP-14089. Automated checking for malformed client. Contributed by (wang: rev c3f35c422bbb7fe9c8e6509063896de549b127d1) * (edit) hadoop-client-modules/hadoop-client-api/pom.xml * (add) hadoop-client-modules/hadoop-client-check-invariants/src/test/resources/ensure-jars-have-correct-contents.sh * (add) hadoop-client-modules/hadoop-client-check-test-invariants/src/test/resources/ensure-jars-have-correct-contents.sh * (edit) hadoop-client-modules/hadoop-client-minicluster/pom.xml * (edit) hadoop-client-modules/hadoop-client-check-test-invariants/pom.xml * (edit) hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-shuffle/pom.xml * (edit) hadoop-client-modules/hadoop-client-check-invariants/pom.xml * (edit) hadoop-client-modules/hadoop-client-runtime/pom.xml > Automated checking for malformed client artifacts. > -- > > Key: HADOOP-14089 > URL: https://issues.apache.org/jira/browse/HADOOP-14089 > Project: Hadoop Common > Issue Type: Sub-task >Affects Versions: 3.0.0-alpha2 >Reporter: David Phillips >Assignee: Sean Busbey >Priority: Blocker > Fix For: 3.0.0-beta1 > > Attachments: HADOOP-14089.2.patch, HADOOP-14089.3.patch, > HADOOP-14089.WIP.0.patch, HADOOP-14089.WIP.1.patch > > > The jar includes things like {{assets}}, {{okio}}, {{javax/annotation}}, > {{javax/ws}}, {{mozilla}}, etc. 
> An easy way to verify this is to look at the contents of the jar: > {code} > jar tf hadoop-client-runtime-xxx.jar | sort | grep -v '^org/apache/hadoop' > {code} > For standard dependencies, such as the JSR 305 {{javax.annotation}} or JAX-RS > {{javax.ws}}, it makes sense for those to be normal dependencies in the POM > -- they are standard, so version conflicts shouldn't be a problem. The JSR > 305 annotations can be {{true}} since they aren't needed > at runtime (this is what Guava does). -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
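The one-liner quoted above can be wrapped into a reusable filter for automated invariant checking, in the spirit of the `ensure-jars-have-correct-contents.sh` scripts this patch adds. This is only a sketch: the allow-list below is illustrative, not the committed invariant set.

```shell
# Print any jar entries that should NOT appear in the shaded client jar.
# The allowed prefixes here are an assumption for illustration; the real
# check lives in ensure-jars-have-correct-contents.sh.
list_bad_entries() {
  grep -v '/$' \
    | grep -v -e '^org/apache/hadoop' \
              -e '^META-INF/' \
              -e '^LICENSE' -e '^NOTICE' \
    || true   # grep exits 1 when nothing matches; that means "clean"
}

# Usage against a real artifact:
#   jar tf hadoop-client-runtime-xxx.jar | list_bad_entries
```

Failing the build when the function prints anything is then a one-line `test -z` check, which is essentially what wiring this into `hadoop-client-check-invariants` amounts to.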
[jira] [Comment Edited] (HADOOP-13578) Add Codec for ZStandard Compression
[ https://issues.apache.org/jira/browse/HADOOP-13578?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16165440#comment-16165440 ] Adam Kennedy edited comment on HADOOP-13578 at 9/14/17 12:07 AM: - ZStd support in 2.7x would be hugely valuable, particularly because it finally gets us the promised land of fast, high compression and splittability all in one file format. (But I yield to my betters in this regard) was (Author: adamkennedy): ZStd support in 2.7x would be hugely valuable, particularly because it finally gets us the promised land of fast, high compression and splittability all in one file format (without having to do an upgrade of everything else) > Add Codec for ZStandard Compression > --- > > Key: HADOOP-13578 > URL: https://issues.apache.org/jira/browse/HADOOP-13578 > Project: Hadoop Common > Issue Type: New Feature >Reporter: churro morales >Assignee: churro morales > Fix For: 2.9.0, 3.0.0-alpha2 > > Attachments: HADOOP-13578-branch-2.v9.patch, HADOOP-13578.patch, > HADOOP-13578.v1.patch, HADOOP-13578.v2.patch, HADOOP-13578.v3.patch, > HADOOP-13578.v4.patch, HADOOP-13578.v5.patch, HADOOP-13578.v6.patch, > HADOOP-13578.v7.patch, HADOOP-13578.v8.patch, HADOOP-13578.v9.patch > > > ZStandard: https://github.com/facebook/zstd has been used in production for 6 > months by facebook now. v1.0 was recently released. Create a codec for this > library. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Comment Edited] (HADOOP-13578) Add Codec for ZStandard Compression
[ https://issues.apache.org/jira/browse/HADOOP-13578?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16165440#comment-16165440 ] Adam Kennedy edited comment on HADOOP-13578 at 9/14/17 12:05 AM: - ZStd support in 2.7x would be hugely valuable, particularly because it finally gets us the promised land of fast, high compression and splittability all in one file format (without having to do an upgrade of everything else) was (Author: adamkennedy): ZStd support in 2.7x would be hugely valuable, particularly because it finally gets us the promised land of fast, high compression and splittability all in one file format. > Add Codec for ZStandard Compression > --- > > Key: HADOOP-13578 > URL: https://issues.apache.org/jira/browse/HADOOP-13578 > Project: Hadoop Common > Issue Type: New Feature >Reporter: churro morales >Assignee: churro morales > Fix For: 2.9.0, 3.0.0-alpha2 > > Attachments: HADOOP-13578-branch-2.v9.patch, HADOOP-13578.patch, > HADOOP-13578.v1.patch, HADOOP-13578.v2.patch, HADOOP-13578.v3.patch, > HADOOP-13578.v4.patch, HADOOP-13578.v5.patch, HADOOP-13578.v6.patch, > HADOOP-13578.v7.patch, HADOOP-13578.v8.patch, HADOOP-13578.v9.patch > > > ZStandard: https://github.com/facebook/zstd has been used in production for 6 > months by facebook now. v1.0 was recently released. Create a codec for this > library. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-14089) Automated checking for malformed client artifacts.
[ https://issues.apache.org/jira/browse/HADOOP-14089?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrew Wang updated HADOOP-14089: - Resolution: Fixed Fix Version/s: 3.0.0-beta1 Status: Resolved (was: Patch Available) +1 LGTM, committed to trunk and branch-3.0. Thanks for the contribution Sean, and for reviewing Bharat! Let's figure out the lack of new precommit in follow-on work, no need to block on this. > Automated checking for malformed client artifacts. > -- > > Key: HADOOP-14089 > URL: https://issues.apache.org/jira/browse/HADOOP-14089 > Project: Hadoop Common > Issue Type: Sub-task >Affects Versions: 3.0.0-alpha2 >Reporter: David Phillips >Assignee: Sean Busbey >Priority: Blocker > Fix For: 3.0.0-beta1 > > Attachments: HADOOP-14089.2.patch, HADOOP-14089.3.patch, > HADOOP-14089.WIP.0.patch, HADOOP-14089.WIP.1.patch > > > The jar includes things like {{assets}}, {{okio}}, {{javax/annotation}}, > {{javax/ws}}, {{mozilla}}, etc. > An easy way to verify this is to look at the contents of the jar: > {code} > jar tf hadoop-client-runtime-xxx.jar | sort | grep -v '^org/apache/hadoop' > {code} > For standard dependencies, such as the JSR 305 {{javax.annotation}} or JAX-RS > {{javax.ws}}, it makes sense for those to be normal dependencies in the POM > -- they are standard, so version conflicts shouldn't be a problem. The JSR > 305 annotations can be {{true}} since they aren't needed > at runtime (this is what Guava does). -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-14089) Automated checking for malformed client artifacts.
[ https://issues.apache.org/jira/browse/HADOOP-14089?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16165486#comment-16165486 ] Sean Busbey commented on HADOOP-14089: -- well I'm super happy to see jenkins give a thumbs up, but I'm curious why there isn't a feedback item on the new precommit test. :/ If you don't want to wait to push the commit that's fine by me [~andrew.wang]. I'll just post example patches on the jira for getting automated tests in place. > Automated checking for malformed client artifacts. > -- > > Key: HADOOP-14089 > URL: https://issues.apache.org/jira/browse/HADOOP-14089 > Project: Hadoop Common > Issue Type: Sub-task >Affects Versions: 3.0.0-alpha2 >Reporter: David Phillips >Assignee: Sean Busbey >Priority: Blocker > Attachments: HADOOP-14089.2.patch, HADOOP-14089.3.patch, > HADOOP-14089.WIP.0.patch, HADOOP-14089.WIP.1.patch > > > The jar includes things like {{assets}}, {{okio}}, {{javax/annotation}}, > {{javax/ws}}, {{mozilla}}, etc. > An easy way to verify this is to look at the contents of the jar: > {code} > jar tf hadoop-client-runtime-xxx.jar | sort | grep -v '^org/apache/hadoop' > {code} > For standard dependencies, such as the JSR 305 {{javax.annotation}} or JAX-RS > {{javax.ws}}, it makes sense for those to be normal dependencies in the POM > -- they are standard, so version conflicts shouldn't be a problem. The JSR > 305 annotations can be {{true}} since they aren't needed > at runtime (this is what Guava does). -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-13578) Add Codec for ZStandard Compression
[ https://issues.apache.org/jira/browse/HADOOP-13578?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16165480#comment-16165480 ] Wei-Chiu Chuang commented on HADOOP-13578: -- -1 for including it in 2.7 or 2.8. I am sorry, but a new feature is not supposed to land in a maintenance release. > Add Codec for ZStandard Compression > --- > > Key: HADOOP-13578 > URL: https://issues.apache.org/jira/browse/HADOOP-13578 > Project: Hadoop Common > Issue Type: New Feature >Reporter: churro morales >Assignee: churro morales > Fix For: 2.9.0, 3.0.0-alpha2 > > Attachments: HADOOP-13578-branch-2.v9.patch, HADOOP-13578.patch, > HADOOP-13578.v1.patch, HADOOP-13578.v2.patch, HADOOP-13578.v3.patch, > HADOOP-13578.v4.patch, HADOOP-13578.v5.patch, HADOOP-13578.v6.patch, > HADOOP-13578.v7.patch, HADOOP-13578.v8.patch, HADOOP-13578.v9.patch > > > ZStandard: https://github.com/facebook/zstd has been used in production for 6 > months by facebook now. v1.0 was recently released. Create a codec for this > library. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-14089) Automated checking for malformed client artifacts.
[ https://issues.apache.org/jira/browse/HADOOP-14089?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16165465#comment-16165465 ] Hadoop QA commented on HADOOP-14089: | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 22m 50s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 2 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 2m 6s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 33m 15s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 35m 44s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 6m 21s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 4m 57s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 35s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 50s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 33m 34s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | 
{color:green} javac {color} | {color:green} 33m 34s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 6m 2s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} shellcheck {color} | {color:green} 0m 1s{color} | {color:green} There were no new shellcheck issues. {color} | | {color:green}+1{color} | {color:green} shelldocs {color} | {color:green} 0m 46s{color} | {color:green} There were no new shelldocs issues. {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 19s{color} | {color:green} The patch has no ill-formed XML file. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 5m 1s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 18s{color} | {color:green} hadoop-mapreduce-client-shuffle in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 54s{color} | {color:green} hadoop-client-api in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 51s{color} | {color:green} hadoop-client-runtime in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 46s{color} | {color:green} hadoop-client-check-invariants in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 3s{color} | {color:green} hadoop-client-minicluster in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 54s{color} | {color:green} hadoop-client-check-test-invariants in the patch passed. 
{color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 1m 29s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}179m 0s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:71bbb86 | | JIRA Issue | HADOOP-14089 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12886950/HADOOP-14089.3.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit xml shellcheck shelldocs | | uname | Linux 161675e51f87 3.13.0-129-generic #178-Ubuntu SMP Fri Aug 11 12:48:20 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / f153e60 | | Default Java | 1.8.0_144 | | shellcheck | v0.4.6 | | Test Results |
[jira] [Comment Edited] (HADOOP-14864) FSDataInputStream#unbuffer UOE exception should print the stream class name
[ https://issues.apache.org/jira/browse/HADOOP-14864?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16165456#comment-16165456 ] Bharat Viswanadham edited comment on HADOOP-14864 at 9/13/17 11:26 PM: --- [~jzhuge] Thanks for review. Updated the patch to add testcase. was (Author: bharatviswa): [~jzhuge] Updated the patch to add testcase. > FSDataInputStream#unbuffer UOE exception should print the stream class name > --- > > Key: HADOOP-14864 > URL: https://issues.apache.org/jira/browse/HADOOP-14864 > Project: Hadoop Common > Issue Type: Improvement > Components: fs >Affects Versions: 2.6.4 >Reporter: John Zhuge >Assignee: Bharat Viswanadham >Priority: Minor > Labels: newbie, supportability > Attachments: HADOOP-14864.01.patch, HADOOP-14864.patch > > > The current exception message: > {noformat} > org/apache/hadoop/fs/ failed: error: > UnsupportedOperationException: this stream does not support > unbuffering.java.lang.UnsupportedOperationException: this stream does not > support unbuffering. > at > org.apache.hadoop.fs.FSDataInputStream.unbuffer(FSDataInputStream.java:233) > {noformat} -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-14864) FSDataInputStream#unbuffer UOE exception should print the stream class name
[ https://issues.apache.org/jira/browse/HADOOP-14864?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bharat Viswanadham updated HADOOP-14864: Attachment: HADOOP-14864.01.patch > FSDataInputStream#unbuffer UOE exception should print the stream class name > --- > > Key: HADOOP-14864 > URL: https://issues.apache.org/jira/browse/HADOOP-14864 > Project: Hadoop Common > Issue Type: Improvement > Components: fs >Affects Versions: 2.6.4 >Reporter: John Zhuge >Assignee: Bharat Viswanadham >Priority: Minor > Labels: newbie, supportability > Attachments: HADOOP-14864.01.patch, HADOOP-14864.patch > > > The current exception message: > {noformat} > org/apache/hadoop/fs/ failed: error: > UnsupportedOperationException: this stream does not support > unbuffering.java.lang.UnsupportedOperationException: this stream does not > support unbuffering. > at > org.apache.hadoop.fs.FSDataInputStream.unbuffer(FSDataInputStream.java:233) > {noformat} -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-14864) FSDataInputStream#unbuffer UOE exception should print the stream class name
[ https://issues.apache.org/jira/browse/HADOOP-14864?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16165456#comment-16165456 ] Bharat Viswanadham commented on HADOOP-14864: - [~jzhuge] Updated the patch to add testcase. > FSDataInputStream#unbuffer UOE exception should print the stream class name > --- > > Key: HADOOP-14864 > URL: https://issues.apache.org/jira/browse/HADOOP-14864 > Project: Hadoop Common > Issue Type: Improvement > Components: fs >Affects Versions: 2.6.4 >Reporter: John Zhuge >Assignee: Bharat Viswanadham >Priority: Minor > Labels: newbie, supportability > Attachments: HADOOP-14864.01.patch, HADOOP-14864.patch > > > The current exception message: > {noformat} > org/apache/hadoop/fs/ failed: error: > UnsupportedOperationException: this stream does not support > unbuffering.java.lang.UnsupportedOperationException: this stream does not > support unbuffering. > at > org.apache.hadoop.fs.FSDataInputStream.unbuffer(FSDataInputStream.java:233) > {noformat} -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
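For illustration only, here is a minimal sketch of the improvement being discussed: naming the underlying stream class in the UnsupportedOperationException. The class and method names below are invented for the demo; the real change belongs in FSDataInputStream#unbuffer and is not the attached patch.

```java
// Hypothetical sketch: include the wrapped stream's class name in the
// UnsupportedOperationException message so the offending stream can be
// identified from logs. Names are invented for the demo.
import java.io.ByteArrayInputStream;
import java.io.FilterInputStream;
import java.io.InputStream;

public class UnbufferDemo {
    static class WrappingStream extends FilterInputStream {
        WrappingStream(InputStream in) {
            super(in);
        }

        // Mirrors the shape of FSDataInputStream#unbuffer; the CanUnbuffer
        // capability check is elided, assuming the wrapped stream lacks it.
        public void unbuffer() {
            throw new UnsupportedOperationException(
                "this stream " + in.getClass().getName()
                    + " does not support unbuffering.");
        }
    }

    public static void main(String[] args) {
        WrappingStream s =
            new WrappingStream(new ByteArrayInputStream(new byte[0]));
        try {
            s.unbuffer();
        } catch (UnsupportedOperationException e) {
            // Prints: this stream java.io.ByteArrayInputStream does not support unbuffering.
            System.out.println(e.getMessage());
        }
    }
}
```

With the class name in the message, a stack trace like the one quoted in the description would point directly at the stream implementation that lacks unbuffer support.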
[jira] [Commented] (HADOOP-13578) Add Codec for ZStandard Compression
[ https://issues.apache.org/jira/browse/HADOOP-13578?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16165440#comment-16165440 ] Adam Kennedy commented on HADOOP-13578: --- ZStd support in 2.7.x would be hugely valuable, particularly because it finally gets us to the promised land of fast, high compression and splittability all in one file format. > Add Codec for ZStandard Compression > --- > > Key: HADOOP-13578 > URL: https://issues.apache.org/jira/browse/HADOOP-13578 > Project: Hadoop Common > Issue Type: New Feature >Reporter: churro morales >Assignee: churro morales > Fix For: 2.9.0, 3.0.0-alpha2 > > Attachments: HADOOP-13578-branch-2.v9.patch, HADOOP-13578.patch, > HADOOP-13578.v1.patch, HADOOP-13578.v2.patch, HADOOP-13578.v3.patch, > HADOOP-13578.v4.patch, HADOOP-13578.v5.patch, HADOOP-13578.v6.patch, > HADOOP-13578.v7.patch, HADOOP-13578.v8.patch, HADOOP-13578.v9.patch > > > ZStandard: https://github.com/facebook/zstd has been used in production for 6 > months by facebook now. v1.0 was recently released. Create a codec for this > library. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-14651) Update okhttp version to 2.7.5
[ https://issues.apache.org/jira/browse/HADOOP-14651?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16165438#comment-16165438 ] Ray Chiang commented on HADOOP-14651: - Version 2.7.5 passes tests. Version 3.8.1 of the library has some issues. Going to go with 2.7.5 for now. Version 2.7.5 is APLv2 and has no NOTICE file. > Update okhttp version to 2.7.5 > -- > > Key: HADOOP-14651 > URL: https://issues.apache.org/jira/browse/HADOOP-14651 > Project: Hadoop Common > Issue Type: Sub-task >Affects Versions: 3.0.0-beta1 >Reporter: Ray Chiang >Assignee: Ray Chiang > Fix For: 3.0.0-beta1 > > Attachments: HADOOP-14651.001.patch > > > The current artifact is: > com.squareup.okhttp:okhttp:2.4.0 > That version could either be bumped to 2.7.5 (the latest of that line), or > use the latest artifact: > com.squareup.okhttp3:okhttp:3.8.1 -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
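In pom terms, the bump under discussion amounts to the following dependency coordinates. This is a sketch only; where the version is actually declared in the Hadoop build (e.g. via a managed property) is not shown here.

```xml
<dependency>
  <groupId>com.squareup.okhttp</groupId>
  <artifactId>okhttp</artifactId>
  <version>2.7.5</version>
</dependency>
```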
[jira] [Commented] (HADOOP-14651) Update okhttp version to 2.7.5
[ https://issues.apache.org/jira/browse/HADOOP-14651?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16165439#comment-16165439 ] Ray Chiang commented on HADOOP-14651: - [~jzhuge], since this touches ADLS, is there anything else special I should do to test this change? > Update okhttp version to 2.7.5 > -- > > Key: HADOOP-14651 > URL: https://issues.apache.org/jira/browse/HADOOP-14651 > Project: Hadoop Common > Issue Type: Sub-task >Affects Versions: 3.0.0-beta1 >Reporter: Ray Chiang >Assignee: Ray Chiang > Fix For: 3.0.0-beta1 > > Attachments: HADOOP-14651.001.patch > > > The current artifact is: > com.squareup.okhttp:okhttp:2.4.0 > That version could either be bumped to 2.7.5 (the latest of that line), or > use the latest artifact: > com.squareup.okhttp3:okhttp:3.8.1 -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-14651) Update okhttp version to 2.7.5
[ https://issues.apache.org/jira/browse/HADOOP-14651?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ray Chiang updated HADOOP-14651: Summary: Update okhttp version to 2.7.5 (was: Update okhttp version) > Update okhttp version to 2.7.5 > -- > > Key: HADOOP-14651 > URL: https://issues.apache.org/jira/browse/HADOOP-14651 > Project: Hadoop Common > Issue Type: Sub-task >Affects Versions: 3.0.0-beta1 >Reporter: Ray Chiang >Assignee: Ray Chiang > Fix For: 3.0.0-beta1 > > Attachments: HADOOP-14651.001.patch > > > The current artifact is: > com.squareup.okhttp:okhttp:2.4.0 > That version could either be bumped to 2.7.5 (the latest of that line), or > use the latest artifact: > com.squareup.okhttp3:okhttp:3.8.1 -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-14738) Remove S3N and obsolete bits of S3A; rework docs
[ https://issues.apache.org/jira/browse/HADOOP-14738?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aaron Fabbri updated HADOOP-14738: -- Status: Patch Available (was: Open) > Remove S3N and obsolete bits of S3A; rework docs > > > Key: HADOOP-14738 > URL: https://issues.apache.org/jira/browse/HADOOP-14738 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Affects Versions: 2.9.0, 3.0.0-beta1 >Reporter: Steve Loughran >Assignee: Steve Loughran >Priority: Blocker > Attachments: HADOOP-14738-002.patch, HADOOP-14738-003.patch, > HADOOP-14738-004.patch, HADOOP-14738-005.patch, HADOOP-14738-006.patch, > HADOOP-14739-001.patch > > > We are all happy with S3A; it's been stable since Hadoop 2.7 and high-perf > since Hadoop 2.8 > It's now time to kill S3N off, remove the source, the tests, the transitive > dependencies. This patch does that. > It also removes the obsolete, original s3a output stream; the fast/block > upload stream has been stable and is much more manageable and maintained (put > differently: we don't ever look at the original S3A output stream, and tell > people not to use it for performance reasons). > As well as cutting the features, this patch updates the aws docs with > * split out s3n migration page > * split out troubleshooting page > * rework of the "uploading data with s3a" section of index.md, as there's no > need to discuss the slow upload except in the past tense...all that is needed > is to list the buffering and thread tuning options of the block uploader. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-14835) mvn site build throws SAX errors
[ https://issues.apache.org/jira/browse/HADOOP-14835?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sean Mackrory updated HADOOP-14835: --- Attachment: HADOOP-14835.003.patch +1 - but attaching a v3 patch that also adds a note to BUILDING.txt, as you suggested, about the site build having to be done in a second pass, which I think is a good idea. > mvn site build throws SAX errors > > > Key: HADOOP-14835 > URL: https://issues.apache.org/jira/browse/HADOOP-14835 > Project: Hadoop Common > Issue Type: Bug > Components: build, site >Affects Versions: 3.0.0-beta1 >Reporter: Allen Wittenauer >Assignee: Andrew Wang >Priority: Blocker > Attachments: HADOOP-14835.001.patch, HADOOP-14835.002.patch, > HADOOP-14835.003.patch > > > Running mvn install site site:stage -DskipTests -Pdist,src > -Preleasedocs,docs results in a stack trace when run on a fresh .m2 > directory. It appears to be coming from the jdiff doclets in the annotations > code. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-14738) Remove S3N and obsolete bits of S3A; rework docs
[ https://issues.apache.org/jira/browse/HADOOP-14738?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aaron Fabbri updated HADOOP-14738: -- Attachment: HADOOP-14738-006.patch Attaching v6 patch: just the v5 patch rebased (and whitespace fixed) on latest trunk (there was a conflict with the recent list v2 change in index.md). > Remove S3N and obsolete bits of S3A; rework docs > > > Key: HADOOP-14738 > URL: https://issues.apache.org/jira/browse/HADOOP-14738 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Affects Versions: 2.9.0, 3.0.0-beta1 >Reporter: Steve Loughran >Assignee: Steve Loughran >Priority: Blocker > Attachments: HADOOP-14738-002.patch, HADOOP-14738-003.patch, > HADOOP-14738-004.patch, HADOOP-14738-005.patch, HADOOP-14738-006.patch, > HADOOP-14739-001.patch > > > We are all happy with S3A; it's been stable since Hadoop 2.7 and high-perf > since Hadoop 2.8 > It's now time to kill S3N off, remove the source, the tests, the transitive > dependencies. This patch does that. > It also removes the obsolete, original s3a output stream; the fast/block > upload stream has been stable and is much more manageable and maintained (put > differently: we don't ever look at the original S3A output stream, and tell > people not to use it for performance reasons). > As well as cutting the features, this patch updates the aws docs with > * split out s3n migration page > * split out troubleshooting page > * rework of the "uploading data with s3a" section of index.md, as there's no > need to discuss the slow upload except in the past tense...all that is needed > is to list the buffering and thread tuning options of the block uploader. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-14089) Automated checking for malformed client artifacts.
[ https://issues.apache.org/jira/browse/HADOOP-14089?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16165360#comment-16165360 ] Bharat Viswanadham commented on HADOOP-14089: - +1(non-binding). > Automated checking for malformed client artifacts. > -- > > Key: HADOOP-14089 > URL: https://issues.apache.org/jira/browse/HADOOP-14089 > Project: Hadoop Common > Issue Type: Sub-task >Affects Versions: 3.0.0-alpha2 >Reporter: David Phillips >Assignee: Sean Busbey >Priority: Blocker > Attachments: HADOOP-14089.2.patch, HADOOP-14089.3.patch, > HADOOP-14089.WIP.0.patch, HADOOP-14089.WIP.1.patch > > > The jar includes things like {{assets}}, {{okio}}, {{javax/annotation}}, > {{javax/ws}}, {{mozilla}}, etc. > An easy way to verify this is to look at the contents of the jar: > {code} > jar tf hadoop-client-runtime-xxx.jar | sort | grep -v '^org/apache/hadoop' > {code} > For standard dependencies, such as the JSR 305 {{javax.annotation}} or JAX-RS > {{javax.ws}}, it makes sense for those to be normal dependencies in the POM > -- they are standard, so version conflicts shouldn't be a problem. The JSR > 305 annotations can be {{true}} since they aren't needed > at runtime (this is what Guava does). -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-14089) Automated checking for malformed client artifacts.
[ https://issues.apache.org/jira/browse/HADOOP-14089?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16165341#comment-16165341 ] Andrew Wang commented on HADOOP-14089: -- Maybe my env is working now; I don't see the Microsoft stuff anymore. +1 pending Jenkins. > Automated checking for malformed client artifacts. > -- > > Key: HADOOP-14089 > URL: https://issues.apache.org/jira/browse/HADOOP-14089 > Project: Hadoop Common > Issue Type: Sub-task >Affects Versions: 3.0.0-alpha2 >Reporter: David Phillips >Assignee: Sean Busbey >Priority: Blocker > Attachments: HADOOP-14089.2.patch, HADOOP-14089.3.patch, > HADOOP-14089.WIP.0.patch, HADOOP-14089.WIP.1.patch > > > The jar includes things like {{assets}}, {{okio}}, {{javax/annotation}}, > {{javax/ws}}, {{mozilla}}, etc. > An easy way to verify this is to look at the contents of the jar: > {code} > jar tf hadoop-client-runtime-xxx.jar | sort | grep -v '^org/apache/hadoop' > {code} > For standard dependencies, such as the JSR 305 {{javax.annotation}} or JAX-RS > {{javax.ws}}, it makes sense for those to be normal dependencies in the POM > -- they are standard, so version conflicts shouldn't be a problem. The JSR > 305 annotations can be {{true}} since they aren't needed > at runtime (this is what Guava does). -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
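The manual `jar tf | sort | grep -v` check from the issue description can be automated along these lines. This is a sketch under an assumed allow-list; the invariants the actual patch enforces (via its build-time checks) may differ.

```java
// Hypothetical sketch: flag entries in a shaded client jar that sit outside
// the org/apache/hadoop relocation prefix, automating the `jar tf | sort |
// grep -v '^org/apache/hadoop'` one-liner from the description. The
// allow-list below is an assumption, not the patch's actual rule set.
import java.io.IOException;
import java.util.jar.JarEntry;
import java.util.jar.JarFile;

public class ShadedJarCheck {
    // Entries considered acceptable in a properly shaded artifact.
    static boolean isAllowed(String name) {
        return name.startsWith("org/apache/hadoop/")
            || name.startsWith("META-INF/")
            || name.endsWith("/"); // bare directory entries
    }

    public static void main(String[] args) throws IOException {
        try (JarFile jar = new JarFile(args[0])) {
            jar.stream()
                .map(JarEntry::getName)
                .filter(n -> !isAllowed(n))
                .sorted()
                .forEach(n -> System.out.println("unexpected entry: " + n));
        }
    }
}
```

Run against hadoop-client-runtime, any printed entry ({{assets}}, {{okio}}, {{mozilla}}, etc.) indicates a malformed artifact.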
[jira] [Commented] (HADOOP-13578) Add Codec for ZStandard Compression
[ https://issues.apache.org/jira/browse/HADOOP-13578?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16165329#comment-16165329 ] churro morales commented on HADOOP-13578: - This was never backported; we did it internally here but never pushed upstream. The build is quite different from Hadoop 2.9; that's where the majority of the changes lie. If there is enough interest, I could possibly put up a 2.7.x patch. > Add Codec for ZStandard Compression > --- > > Key: HADOOP-13578 > URL: https://issues.apache.org/jira/browse/HADOOP-13578 > Project: Hadoop Common > Issue Type: New Feature >Reporter: churro morales >Assignee: churro morales > Fix For: 2.9.0, 3.0.0-alpha2 > > Attachments: HADOOP-13578-branch-2.v9.patch, HADOOP-13578.patch, > HADOOP-13578.v1.patch, HADOOP-13578.v2.patch, HADOOP-13578.v3.patch, > HADOOP-13578.v4.patch, HADOOP-13578.v5.patch, HADOOP-13578.v6.patch, > HADOOP-13578.v7.patch, HADOOP-13578.v8.patch, HADOOP-13578.v9.patch > > > ZStandard: https://github.com/facebook/zstd has been used in production for 6 > months by facebook now. v1.0 was recently released. Create a codec for this > library. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-14864) FSDataInputStream#unbuffer UOE exception should print the stream class name
[ https://issues.apache.org/jira/browse/HADOOP-14864?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16165296#comment-16165296 ] Hadoop QA commented on HADOOP-14864: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 18s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 25s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 17m 7s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 39s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 9s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 41s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 0s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 48s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m 
59s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 10m 59s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 36s{color} | {color:orange} hadoop-common-project/hadoop-common: The patch generated 1 new + 11 unchanged - 0 fixed = 12 total (was 11) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 58s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 34s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 51s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red} 8m 5s{color} | {color:red} hadoop-common in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 30s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 64m 35s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.security.TestKDiag | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:71bbb86 | | JIRA Issue | HADOOP-14864 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12886941/HADOOP-14864.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux bdf0c49aec15 3.13.0-117-generic #164-Ubuntu SMP Fri Apr 7 11:05:26 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / f153e60 | | Default Java | 1.8.0_144 | | findbugs | v3.1.0-RC1 | | checkstyle | https://builds.apache.org/job/PreCommit-HADOOP-Build/13278/artifact/patchprocess/diff-checkstyle-hadoop-common-project_hadoop-common.txt | | unit | https://builds.apache.org/job/PreCommit-HADOOP-Build/13278/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common.txt | | Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/13278/testReport/ | | modules | C: hadoop-common-project/hadoop-common U: hadoop-common-project/hadoop-common | | Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/13278/console | | Powered by | Apache Yetus 0.6.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > FSDataInputStream#unbuffer UOE exception should print the stream class name >
[jira] [Commented] (HADOOP-14867) Update HDFS Federation setup document, for incorrect property name for secondary name node http address
[ https://issues.apache.org/jira/browse/HADOOP-14867?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16165247#comment-16165247 ] Hudson commented on HADOOP-14867: - SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #12864 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/12864/]) HADOOP-14867. Update HDFS Federation setup document, for incorrect (arp: rev f153e60576016ddc237aa66eb36e2c2c91efdbf3) * (edit) hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/Federation.md > Update HDFS Federation setup document, for incorrect property name for > secondary name node http address > --- > > Key: HADOOP-14867 > URL: https://issues.apache.org/jira/browse/HADOOP-14867 > Project: Hadoop Common > Issue Type: Bug >Reporter: Bharat Viswanadham >Assignee: Bharat Viswanadham > Fix For: 2.9.0, 3.0.0-beta1, 2.8.3, 2.7.5 > > Attachments: HADOOP-14867.patch > > > The HDFS Federation setup documentation has an incorrect property name for > the secondary namenode HTTP address. > It is mentioned as > {code} > <property> > <name>dfs.namenode.secondaryhttp-address.ns1</name> > <value>snn-host1:http-port</value> > </property> > <property> > <name>dfs.namenode.rpc-address.ns2</name> > <value>nn-host2:rpc-port</value> > </property> > {code} > The actual property should be dfs.namenode.secondary.http-address.ns. > Because of this documentation error, when the document is followed and a user > tries to set up an HDFS federated cluster, the secondary namenode will not be started, > and > hdfs getconf -secondarynamenodes throws an exception -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
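For reference, the corrected form of the snippet from the issue description would read as follows (host and port placeholders kept as in the original; the ns1/ns2 nameservice IDs come from the description):

```xml
<property>
  <name>dfs.namenode.secondary.http-address.ns1</name>
  <value>snn-host1:http-port</value>
</property>
<property>
  <name>dfs.namenode.rpc-address.ns2</name>
  <value>nn-host2:rpc-port</value>
</property>
```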
[jira] [Commented] (HADOOP-13918) Add integration tests for shaded client based on use by HBase
[ https://issues.apache.org/jira/browse/HADOOP-13918?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16165243#comment-16165243 ] Sean Busbey commented on HADOOP-13918: -- converted to a top-level improvement and targeting the 3.1.0 release. thanks! > Add integration tests for shaded client based on use by HBase > - > > Key: HADOOP-13918 > URL: https://issues.apache.org/jira/browse/HADOOP-13918 > Project: Hadoop Common > Issue Type: Improvement > Components: test >Affects Versions: 3.0.0-alpha1 >Reporter: Sean Busbey >Assignee: Sean Busbey >Priority: Minor > > Look at the tests that HBase runs against the Hadoop Minicluster and make > sure that functionality is tested in our integration tests. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-13918) Add integration tests for shaded client based on use by HBase
[ https://issues.apache.org/jira/browse/HADOOP-13918?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sean Busbey updated HADOOP-13918: - Target Version/s: 3.1.0 (was: 3.0.0-beta1) > Add integration tests for shaded client based on use by HBase > - > > Key: HADOOP-13918 > URL: https://issues.apache.org/jira/browse/HADOOP-13918 > Project: Hadoop Common > Issue Type: Improvement > Components: test >Affects Versions: 3.0.0-alpha1 >Reporter: Sean Busbey >Assignee: Sean Busbey >Priority: Minor > > Look at the tests that HBase runs against the Hadoop Minicluster and make > sure that functionality is tested in our integration tests. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-13918) Add integration tests for shaded client based on use by HBase
[ https://issues.apache.org/jira/browse/HADOOP-13918?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sean Busbey updated HADOOP-13918: - Issue Type: Improvement (was: Sub-task) Parent: (was: HADOOP-11656) > Add integration tests for shaded client based on use by HBase > - > > Key: HADOOP-13918 > URL: https://issues.apache.org/jira/browse/HADOOP-13918 > Project: Hadoop Common > Issue Type: Improvement > Components: test >Affects Versions: 3.0.0-alpha1 >Reporter: Sean Busbey >Assignee: Sean Busbey >Priority: Minor > > Look at the tests that HBase runs against the Hadoop Minicluster and make > sure that functionality is tested in our integration tests. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-13918) Add integration tests for shaded client based on use by HBase
[ https://issues.apache.org/jira/browse/HADOOP-13918?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16165237#comment-16165237 ] Bharat Viswanadham commented on HADOOP-13918: - [~busbey] Thanks for the info. I am fine with it; we can move this to 3.1, as it does not affect any actual shaded-client work. > Add integration tests for shaded client based on use by HBase > - > > Key: HADOOP-13918 > URL: https://issues.apache.org/jira/browse/HADOOP-13918 > Project: Hadoop Common > Issue Type: Sub-task > Components: test >Affects Versions: 3.0.0-alpha1 >Reporter: Sean Busbey >Assignee: Sean Busbey >Priority: Minor > > Look at the tests that HBase runs against the Hadoop Minicluster and make > sure that functionality is tested in our integration tests. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-14089) Automated checking for malformed client artifacts.
[ https://issues.apache.org/jira/browse/HADOOP-14089?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sean Busbey updated HADOOP-14089: - Status: Patch Available (was: In Progress) > Automated checking for malformed client artifacts. > -- > > Key: HADOOP-14089 > URL: https://issues.apache.org/jira/browse/HADOOP-14089 > Project: Hadoop Common > Issue Type: Sub-task >Affects Versions: 3.0.0-alpha2 >Reporter: David Phillips >Assignee: Sean Busbey >Priority: Blocker > Attachments: HADOOP-14089.2.patch, HADOOP-14089.3.patch, > HADOOP-14089.WIP.0.patch, HADOOP-14089.WIP.1.patch > > > The jar includes things like {{assets}}, {{okio}}, {{javax/annotation}}, > {{javax/ws}}, {{mozilla}}, etc. > An easy way to verify this is to look at the contents of the jar: > {code} > jar tf hadoop-client-runtime-xxx.jar | sort | grep -v '^org/apache/hadoop' > {code} > For standard dependencies, such as the JSR 305 {{javax.annotation}} or JAX-RS > {{javax.ws}}, it makes sense for those to be normal dependencies in the POM > -- they are standard, so version conflicts shouldn't be a problem. The JSR > 305 annotations can be {{true}} since they aren't needed > at runtime (this is what Guava does). -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-14089) Automated checking for malformed client artifacts.
[ https://issues.apache.org/jira/browse/HADOOP-14089?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sean Busbey updated HADOOP-14089: - Attachment: HADOOP-14089.3.patch -3 - rebase to trunk as of f153e60 - fix shellcheck warnings. > Automated checking for malformed client artifacts. > -- > > Key: HADOOP-14089 > URL: https://issues.apache.org/jira/browse/HADOOP-14089 > Project: Hadoop Common > Issue Type: Sub-task >Affects Versions: 3.0.0-alpha2 >Reporter: David Phillips >Assignee: Sean Busbey >Priority: Blocker > Attachments: HADOOP-14089.2.patch, HADOOP-14089.3.patch, > HADOOP-14089.WIP.0.patch, HADOOP-14089.WIP.1.patch > > > The jar includes things like {{assets}}, {{okio}}, {{javax/annotation}}, > {{javax/ws}}, {{mozilla}}, etc. > An easy way to verify this is to look at the contents of the jar: > {code} > jar tf hadoop-client-runtime-xxx.jar | sort | grep -v '^org/apache/hadoop' > {code} > For standard dependencies, such as the JSR 305 {{javax.annotation}} or JAX-RS > {{javax.ws}}, it makes sense for those to be normal dependencies in the POM > -- they are standard, so version conflicts shouldn't be a problem. The JSR > 305 annotations can be {{true}} since they aren't needed > at runtime (this is what Guava does). -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
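The `jar tf ... | grep -v '^org/apache/hadoop'` one-liner above can be automated. As a rough sketch of the kind of check this issue adds (the allow-list below is an illustrative assumption, not the patch's actual list), given the entry names of the shaded client jar, report anything that leaked outside the relocated package:

```java
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

// Hypothetical sketch: flag jar entries outside the expected shaded packages.
// Entry names could come from `jar tf hadoop-client-runtime-xxx.jar`.
public class ShadedJarCheck {

    // Illustrative allow-list; the real check's list may differ.
    private static final List<String> ALLOWED_PREFIXES = Arrays.asList(
            "org/apache/hadoop/",  // relocated Hadoop classes
            "META-INF/");          // jar metadata is expected

    /** Entries that should not appear in a well-formed shaded client jar. */
    public static List<String> unexpectedEntries(List<String> entries) {
        return entries.stream()
                .filter(e -> ALLOWED_PREFIXES.stream().noneMatch(e::startsWith))
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<String> sample = Arrays.asList(
                "org/apache/hadoop/fs/FileSystem.class",
                "okio/Okio.class",            // a leak like those reported here
                "META-INF/MANIFEST.MF");
        // Only the okio entry is reported as unexpected.
        System.out.println(unexpectedEntries(sample));
    }
}
```

A non-empty result would fail the build, which is how such a check catches entries like {{okio}} or {{mozilla}} before release.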
[jira] [Commented] (HADOOP-13918) Add integration tests for shaded client based on use by HBase
[ https://issues.apache.org/jira/browse/HADOOP-13918?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16165219#comment-16165219 ] Sean Busbey commented on HADOOP-13918: -- fine with putting off until 3.1 [~bharatviswa]? > Add integration tests for shaded client based on use by HBase > - > > Key: HADOOP-13918 > URL: https://issues.apache.org/jira/browse/HADOOP-13918 > Project: Hadoop Common > Issue Type: Sub-task > Components: test >Affects Versions: 3.0.0-alpha1 >Reporter: Sean Busbey >Assignee: Sean Busbey >Priority: Minor > > Look at the tests that HBase runs against the Hadoop Minicluster and make > sure that functionality is tested in our integration tests. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-13917) Ensure nightly builds run the integration tests for the shaded client
[ https://issues.apache.org/jira/browse/HADOOP-13917?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16165216#comment-16165216 ] Sean Busbey commented on HADOOP-13917: -- YETUS-543 has merged now. I started a new qbt run: https://builds.apache.org/view/H-L/view/Hadoop/job/hadoop-qbt-trunk-java8-linux-x86/523/ looks like normal is ~10 hours these days. > Ensure nightly builds run the integration tests for the shaded client > - > > Key: HADOOP-13917 > URL: https://issues.apache.org/jira/browse/HADOOP-13917 > Project: Hadoop Common > Issue Type: Sub-task > Components: build, test >Affects Versions: 3.0.0-alpha2 >Reporter: Sean Busbey >Assignee: Sean Busbey >Priority: Critical > > Either QBT or a different jenkins job should run our integration tests, > specifically the ones added for the shaded client. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-14864) FSDataInputStream#unbuffer UOE exception should print the stream class name
[ https://issues.apache.org/jira/browse/HADOOP-14864?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16165209#comment-16165209 ] John Zhuge commented on HADOOP-14864: - Thanks for working on it! Could you share sample test output? Is it worthwhile to add a simple unit test? BTW, the naming convention for the patch file is "..patch". See https://wiki.apache.org/hadoop/HowToContribute#Naming_your_patch. > FSDataInputStream#unbuffer UOE exception should print the stream class name > --- > > Key: HADOOP-14864 > URL: https://issues.apache.org/jira/browse/HADOOP-14864 > Project: Hadoop Common > Issue Type: Improvement > Components: fs >Affects Versions: 2.6.4 >Reporter: John Zhuge >Assignee: Bharat Viswanadham >Priority: Minor > Labels: newbie, supportability > Attachments: HADOOP-14864.patch > > > The current exception message: > {noformat} > org/apache/hadoop/fs/ failed: error: > UnsupportedOperationException: this stream does not support > unbuffering.java.lang.UnsupportedOperationException: this stream does not > support unbuffering. > at > org.apache.hadoop.fs.FSDataInputStream.unbuffer(FSDataInputStream.java:233) > {noformat} -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
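A minimal sketch of the requested improvement: when unbuffer() is not supported, name the concrete stream class in the exception so the operator knows which implementation to look at. The wrapper below is a stand-in for illustration, not Hadoop's actual FSDataInputStream code.

```java
import java.io.ByteArrayInputStream;
import java.io.InputStream;

// Illustrative stand-in for the FSDataInputStream#unbuffer error path.
public class UnbufferSketch {

    /** Build the improved error message for a stream that cannot unbuffer. */
    public static String unsupportedMessage(InputStream wrapped) {
        return "this stream does not support unbuffering: "
                + wrapped.getClass().getName();
    }

    public static void unbuffer(InputStream wrapped) {
        // Real code would first test whether the wrapped stream supports
        // unbuffering; here we always fail, to show the message format.
        throw new UnsupportedOperationException(unsupportedMessage(wrapped));
    }

    public static void main(String[] args) {
        InputStream in = new ByteArrayInputStream(new byte[0]);
        // prints: this stream does not support unbuffering: java.io.ByteArrayInputStream
        System.out.println(unsupportedMessage(in));
    }
}
```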
[jira] [Updated] (HADOOP-14089) Automated checking for malformed client artifacts.
[ https://issues.apache.org/jira/browse/HADOOP-14089?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sean Busbey updated HADOOP-14089: - Status: In Progress (was: Patch Available) > Automated checking for malformed client artifacts. > -- > > Key: HADOOP-14089 > URL: https://issues.apache.org/jira/browse/HADOOP-14089 > Project: Hadoop Common > Issue Type: Sub-task >Affects Versions: 3.0.0-alpha2 >Reporter: David Phillips >Assignee: Sean Busbey >Priority: Blocker > Attachments: HADOOP-14089.2.patch, HADOOP-14089.WIP.0.patch, > HADOOP-14089.WIP.1.patch > > > The jar includes things like {{assets}}, {{okio}}, {{javax/annotation}}, > {{javax/ws}}, {{mozilla}}, etc. > An easy way to verify this is to look at the contents of the jar: > {code} > jar tf hadoop-client-runtime-xxx.jar | sort | grep -v '^org/apache/hadoop' > {code} > For standard dependencies, such as the JSR 305 {{javax.annotation}} or JAX-RS > {{javax.ws}}, it makes sense for those to be normal dependencies in the POM > -- they are standard, so version conflicts shouldn't be a problem. The JSR > 305 annotations can be {{true}} since they aren't needed > at runtime (this is what Guava does). -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-14864) FSDataInputStream#unbuffer UOE exception should print the stream class name
[ https://issues.apache.org/jira/browse/HADOOP-14864?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16165193#comment-16165193 ] Bharat Viswanadham commented on HADOOP-14864: - [~jzhuge] Pls review the change. > FSDataInputStream#unbuffer UOE exception should print the stream class name > --- > > Key: HADOOP-14864 > URL: https://issues.apache.org/jira/browse/HADOOP-14864 > Project: Hadoop Common > Issue Type: Improvement > Components: fs >Affects Versions: 2.6.4 >Reporter: John Zhuge >Assignee: Bharat Viswanadham >Priority: Minor > Labels: newbie, supportability > Attachments: HADOOP-14864.patch > > > The current exception message: > {noformat} > org/apache/hadoop/fs/ failed: error: > UnsupportedOperationException: this stream does not support > unbuffering.java.lang.UnsupportedOperationException: this stream does not > support unbuffering. > at > org.apache.hadoop.fs.FSDataInputStream.unbuffer(FSDataInputStream.java:233) > {noformat} -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-14864) FSDataInputStream#unbuffer UOE exception should print the stream class name
[ https://issues.apache.org/jira/browse/HADOOP-14864?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bharat Viswanadham updated HADOOP-14864: Status: Patch Available (was: In Progress) > FSDataInputStream#unbuffer UOE exception should print the stream class name > --- > > Key: HADOOP-14864 > URL: https://issues.apache.org/jira/browse/HADOOP-14864 > Project: Hadoop Common > Issue Type: Improvement > Components: fs >Affects Versions: 2.6.4 >Reporter: John Zhuge >Assignee: Bharat Viswanadham >Priority: Minor > Labels: newbie, supportability > Attachments: HADOOP-14864.patch > > > The current exception message: > {noformat} > org/apache/hadoop/fs/ failed: error: > UnsupportedOperationException: this stream does not support > unbuffering.java.lang.UnsupportedOperationException: this stream does not > support unbuffering. > at > org.apache.hadoop.fs.FSDataInputStream.unbuffer(FSDataInputStream.java:233) > {noformat} -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-14864) FSDataInputStream#unbuffer UOE exception should print the stream class name
[ https://issues.apache.org/jira/browse/HADOOP-14864?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bharat Viswanadham updated HADOOP-14864: Attachment: HADOOP-14864.patch > FSDataInputStream#unbuffer UOE exception should print the stream class name > --- > > Key: HADOOP-14864 > URL: https://issues.apache.org/jira/browse/HADOOP-14864 > Project: Hadoop Common > Issue Type: Improvement > Components: fs >Affects Versions: 2.6.4 >Reporter: John Zhuge >Assignee: Bharat Viswanadham >Priority: Minor > Labels: newbie, supportability > Attachments: HADOOP-14864.patch > > > The current exception message: > {noformat} > org/apache/hadoop/fs/ failed: error: > UnsupportedOperationException: this stream does not support > unbuffering.java.lang.UnsupportedOperationException: this stream does not > support unbuffering. > at > org.apache.hadoop.fs.FSDataInputStream.unbuffer(FSDataInputStream.java:233) > {noformat} -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-14867) Update HDFS Federation setup document, for incorrect property name for secondary name node http address
[ https://issues.apache.org/jira/browse/HADOOP-14867?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Arpit Agarwal updated HADOOP-14867: --- Description: HDFS Federation setup documentation is having incorrect property name for secondary namenode http port It is mentioned as {code} dfs.namenode.secondaryhttp-address.ns1 snn-host1:http-port dfs.namenode.rpc-address.ns2 nn-host2:rpc-port {code} Actual property should be dfs.namenode.secondary.http-address.ns. Because of this documentation error, when the document is followed and user tries to setup HDFS federated cluster, secondary namenode will not be started and also hdfs getconf -secondarynamenodes throw's an exception was: HDFS Federation setup documentation is having incorrect property name for secondary namenode http port It is mentioned as dfs.namenode.secondaryhttp-address.ns1 snn-host1:http-port dfs.namenode.rpc-address.ns2 nn-host2:rpc-port Actual property should be dfs.namenode.secondary.http-address.ns. Because of this documentation error, when the document is followed and user tries to setup HDFS federated cluster, secondary namenode will not be started and also hdfs getconf -secondarynamenodes throw's an exception > Update HDFS Federation setup document, for incorrect property name for > secondary name node http address > --- > > Key: HADOOP-14867 > URL: https://issues.apache.org/jira/browse/HADOOP-14867 > Project: Hadoop Common > Issue Type: Bug >Reporter: Bharat Viswanadham >Assignee: Bharat Viswanadham > Fix For: 2.9.0, 3.0.0-beta1, 2.8.3, 2.7.5 > > Attachments: HADOOP-14867.patch > > > HDFS Federation setup documentation is having incorrect property name for > secondary namenode http port > It is mentioned as > {code} > > dfs.namenode.secondaryhttp-address.ns1 > snn-host1:http-port > > > dfs.namenode.rpc-address.ns2 > nn-host2:rpc-port > > {code} > Actual property should be dfs.namenode.secondary.http-address.ns. 
> Because of this documentation error, when the document is followed and a user > tries to set up an HDFS federated cluster, the secondary namenode will not be started, > and > hdfs getconf -secondarynamenodes throws an exception -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
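The XML markup in the description's {code} block was stripped by the mailer. Reconstructed, a corrected hdfs-site configuration would look roughly like this (host names and ports are placeholders from the original, and the property suffix depends on the nameservice ID):

```xml
<configuration>
  <property>
    <!-- corrected: "secondary.http-address", not "secondaryhttp-address" -->
    <name>dfs.namenode.secondary.http-address.ns1</name>
    <value>snn-host1:http-port</value>
  </property>
  <property>
    <name>dfs.namenode.rpc-address.ns2</name>
    <value>nn-host2:rpc-port</value>
  </property>
</configuration>
```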
[jira] [Updated] (HADOOP-14867) Update HDFS Federation setup document, for incorrect property name for secondary name node http address
[ https://issues.apache.org/jira/browse/HADOOP-14867?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Arpit Agarwal updated HADOOP-14867: --- Resolution: Fixed Hadoop Flags: Reviewed Fix Version/s: 2.7.5 2.8.3 3.0.0-beta1 2.9.0 Status: Resolved (was: Patch Available) +1 nice find [~bharatviswa]. I've committed this. Thanks! > Update HDFS Federation setup document, for incorrect property name for > secondary name node http address > --- > > Key: HADOOP-14867 > URL: https://issues.apache.org/jira/browse/HADOOP-14867 > Project: Hadoop Common > Issue Type: Bug >Reporter: Bharat Viswanadham >Assignee: Bharat Viswanadham > Fix For: 2.9.0, 3.0.0-beta1, 2.8.3, 2.7.5 > > Attachments: HADOOP-14867.patch > > > HDFS Federation setup documentation is having incorrect property name for > secondary namenode http port > It is mentioned as > > dfs.namenode.secondaryhttp-address.ns1 > snn-host1:http-port > > > dfs.namenode.rpc-address.ns2 > nn-host2:rpc-port > > Actual property should be dfs.namenode.secondary.http-address.ns. > Because of this documentation error, when the document is followed and user > tries to setup HDFS federated cluster, secondary namenode will not be started > and also > hdfs getconf -secondarynamenodes throw's an exception -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-14867) Update HDFS Federation setup document, for incorrect property name for secondary name node http address
[ https://issues.apache.org/jira/browse/HADOOP-14867?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16165114#comment-16165114 ] Hadoop QA commented on HADOOP-14867: | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 14s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 0s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 59s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 3s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 17s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 18m 56s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:71bbb86 | | JIRA Issue | HADOOP-14867 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12886919/HADOOP-14867.patch | | Optional Tests | asflicense mvnsite | | uname | Linux 5a0835422440 3.13.0-123-generic #172-Ubuntu SMP Mon Jun 26 18:04:35 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 5324388 | | modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs | | Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/13277/console | | Powered by | Apache Yetus 0.6.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > Update HDFS Federation setup document, for incorrect property name for > secondary name node http address > --- > > Key: HADOOP-14867 > URL: https://issues.apache.org/jira/browse/HADOOP-14867 > Project: Hadoop Common > Issue Type: Bug >Reporter: Bharat Viswanadham >Assignee: Bharat Viswanadham > Attachments: HADOOP-14867.patch > > > HDFS Federation setup documentation is having incorrect property name for > secondary namenode http port > It is mentioned as > > dfs.namenode.secondaryhttp-address.ns1 > snn-host1:http-port > > > dfs.namenode.rpc-address.ns2 > nn-host2:rpc-port > > Actual property should be dfs.namenode.secondary.http-address.ns. > Because of this documentation error, when the document is followed and user > tries to setup HDFS federated cluster, secondary namenode will not be started > and also > hdfs getconf -secondarynamenodes throw's an exception -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-13714) Tighten up our compatibility guidelines for Hadoop 3
[ https://issues.apache.org/jira/browse/HADOOP-13714?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16165113#comment-16165113 ] Miklos Szegedi commented on HADOOP-13714: - Thank you, [~steve_l] for the explanations. > Tighten up our compatibility guidelines for Hadoop 3 > > > Key: HADOOP-13714 > URL: https://issues.apache.org/jira/browse/HADOOP-13714 > Project: Hadoop Common > Issue Type: Improvement > Components: documentation >Affects Versions: 2.7.3 >Reporter: Karthik Kambatla >Assignee: Daniel Templeton >Priority: Blocker > Attachments: Compatibility.pdf, HADOOP-13714.001.patch, > HADOOP-13714.002.patch, HADOOP-13714.003.patch, HADOOP-13714.004.patch, > HADOOP-13714.WIP-001.patch, InterfaceClassification.pdf > > > Our current compatibility guidelines are incomplete and loose. For many > categories, we do not have a policy. It would be nice to actually define > those policies so our users know what to expect and the developers know what > releases to target their changes. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Work started] (HADOOP-14864) FSDataInputStream#unbuffer UOE exception should print the stream class name
[ https://issues.apache.org/jira/browse/HADOOP-14864?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Work on HADOOP-14864 started by Bharat Viswanadham. --- > FSDataInputStream#unbuffer UOE exception should print the stream class name > --- > > Key: HADOOP-14864 > URL: https://issues.apache.org/jira/browse/HADOOP-14864 > Project: Hadoop Common > Issue Type: Improvement > Components: fs >Affects Versions: 2.6.4 >Reporter: John Zhuge >Assignee: Bharat Viswanadham >Priority: Minor > Labels: newbie, supportability > > The current exception message: > {noformat} > org/apache/hadoop/fs/ failed: error: > UnsupportedOperationException: this stream does not support > unbuffering.java.lang.UnsupportedOperationException: this stream does not > support unbuffering. > at > org.apache.hadoop.fs.FSDataInputStream.unbuffer(FSDataInputStream.java:233) > {noformat} -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-14866) Backport implementation of parallel block copy in Distcp to hadoop 2.8
[ https://issues.apache.org/jira/browse/HADOOP-14866?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16165091#comment-16165091 ] Junping Du commented on HADOOP-14866: - Thanks for notifying me, [~drankye]. While I fully agree this is a very nice enhancement to land on 2.8.x branches, but considering 2.8.2 is in RC stage, I will recommend to commit to branch-2.8 only to get released in 2.8.3. Make sense? > Backport implementation of parallel block copy in Distcp to hadoop 2.8 > -- > > Key: HADOOP-14866 > URL: https://issues.apache.org/jira/browse/HADOOP-14866 > Project: Hadoop Common > Issue Type: Improvement > Components: tools/distcp >Reporter: Huafeng Wang >Assignee: Huafeng Wang > Attachments: HADOOP-14866.001.branch2.8.2.patch > > > The implementation of parallel block copy in Distcp targets to version 2.9. > It would be great to have this feature in version 2.8. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-14866) Backport implementation of parallel block copy in Distcp to hadoop 2.8
[ https://issues.apache.org/jira/browse/HADOOP-14866?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16165092#comment-16165092 ] Junping Du commented on HADOOP-14866: - Also drop the fix version as the patch hasn't get landed. > Backport implementation of parallel block copy in Distcp to hadoop 2.8 > -- > > Key: HADOOP-14866 > URL: https://issues.apache.org/jira/browse/HADOOP-14866 > Project: Hadoop Common > Issue Type: Improvement > Components: tools/distcp >Reporter: Huafeng Wang >Assignee: Huafeng Wang > Attachments: HADOOP-14866.001.branch2.8.2.patch > > > The implementation of parallel block copy in Distcp targets to version 2.9. > It would be great to have this feature in version 2.8. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-14866) Backport implementation of parallel block copy in Distcp to hadoop 2.8
[ https://issues.apache.org/jira/browse/HADOOP-14866?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Junping Du updated HADOOP-14866: Fix Version/s: (was: 2.8.2) > Backport implementation of parallel block copy in Distcp to hadoop 2.8 > -- > > Key: HADOOP-14866 > URL: https://issues.apache.org/jira/browse/HADOOP-14866 > Project: Hadoop Common > Issue Type: Improvement > Components: tools/distcp >Reporter: Huafeng Wang >Assignee: Huafeng Wang > Attachments: HADOOP-14866.001.branch2.8.2.patch > > > The implementation of parallel block copy in Distcp targets to version 2.9. > It would be great to have this feature in version 2.8. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-14867) Update HDFS Federation setup document, for incorrect property name for secondary name node http address
[ https://issues.apache.org/jira/browse/HADOOP-14867?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16165086#comment-16165086 ] Bharat Viswanadham commented on HADOOP-14867: - [~arpitagarwal] Could you please review the changes. > Update HDFS Federation setup document, for incorrect property name for > secondary name node http address > --- > > Key: HADOOP-14867 > URL: https://issues.apache.org/jira/browse/HADOOP-14867 > Project: Hadoop Common > Issue Type: Bug >Reporter: Bharat Viswanadham >Assignee: Bharat Viswanadham > Attachments: HADOOP-14867.patch > > > HDFS Federation setup documentation is having incorrect property name for > secondary namenode http port > It is mentioned as > > dfs.namenode.secondaryhttp-address.ns1 > snn-host1:http-port > > > dfs.namenode.rpc-address.ns2 > nn-host2:rpc-port > > Actual property should be dfs.namenode.secondary.http-address.ns. > Because of this documentation error, when the document is followed and user > tries to setup HDFS federated cluster, secondary namenode will not be started > and also > hdfs getconf -secondarynamenodes throw's an exception -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-14867) Update HDFS Federation setup document, for incorrect property name for secondary name node http address
[ https://issues.apache.org/jira/browse/HADOOP-14867?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bharat Viswanadham updated HADOOP-14867: Status: Patch Available (was: In Progress) > Update HDFS Federation setup document, for incorrect property name for > secondary name node http address > --- > > Key: HADOOP-14867 > URL: https://issues.apache.org/jira/browse/HADOOP-14867 > Project: Hadoop Common > Issue Type: Bug >Reporter: Bharat Viswanadham >Assignee: Bharat Viswanadham > Attachments: HADOOP-14867.patch > > > HDFS Federation setup documentation is having incorrect property name for > secondary namenode http port > It is mentioned as > > dfs.namenode.secondaryhttp-address.ns1 > snn-host1:http-port > > > dfs.namenode.rpc-address.ns2 > nn-host2:rpc-port > > Actual property should be dfs.namenode.secondary.http-address.ns. > Because of this documentation error, when the document is followed and user > tries to setup HDFS federated cluster, secondary namenode will not be started > and also > hdfs getconf -secondarynamenodes will throw an exception -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-14867) Update HDFS Federation setup document, for incorrect property name for secondary name node http address
[ https://issues.apache.org/jira/browse/HADOOP-14867?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bharat Viswanadham updated HADOOP-14867: Description: HDFS Federation setup documentation is having incorrect property name for secondary namenode http port It is mentioned as dfs.namenode.secondaryhttp-address.ns1 snn-host1:http-port dfs.namenode.rpc-address.ns2 nn-host2:rpc-port Actual property should be dfs.namenode.secondary.http-address.ns. Because of this documentation error, when the document is followed and user tries to setup HDFS federated cluster, secondary namenode will not be started and also hdfs getconf -secondarynamenodes throw's an exception was: HDFS Federation setup documentation is having incorrect property name for secondary namenode http port It is mentioned as dfs.namenode.secondaryhttp-address.ns1 snn-host1:http-port dfs.namenode.rpc-address.ns2 nn-host2:rpc-port Actual property should be dfs.namenode.secondary.http-address.ns. Because of this documentation error, when the document is followed and user tries to setup HDFS federated cluster, secondary namenode will not be started and also hdfs getconf -secondarynamenodes will throw an exception > Update HDFS Federation setup document, for incorrect property name for > secondary name node http address > --- > > Key: HADOOP-14867 > URL: https://issues.apache.org/jira/browse/HADOOP-14867 > Project: Hadoop Common > Issue Type: Bug >Reporter: Bharat Viswanadham >Assignee: Bharat Viswanadham > Attachments: HADOOP-14867.patch > > > HDFS Federation setup documentation is having incorrect property name for > secondary namenode http port > It is mentioned as > > dfs.namenode.secondaryhttp-address.ns1 > snn-host1:http-port > > > dfs.namenode.rpc-address.ns2 > nn-host2:rpc-port > > Actual property should be dfs.namenode.secondary.http-address.ns. 
> Because of this documentation error, when the document is followed and a user > tries to set up an HDFS federated cluster, the secondary namenode will not be started, > and > hdfs getconf -secondarynamenodes throws an exception -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-14867) Update HDFS Federation setup document, for incorrect property name for secondary name node http address
[ https://issues.apache.org/jira/browse/HADOOP-14867?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bharat Viswanadham updated HADOOP-14867: Attachment: HADOOP-14867.patch > Update HDFS Federation setup document, for incorrect property name for > secondary name node http address > --- > > Key: HADOOP-14867 > URL: https://issues.apache.org/jira/browse/HADOOP-14867 > Project: Hadoop Common > Issue Type: Bug >Reporter: Bharat Viswanadham >Assignee: Bharat Viswanadham > Attachments: HADOOP-14867.patch > > > HDFS Federation setup documentation is having incorrect property name for > secondary namenode http port > It is mentioned as > > dfs.namenode.secondaryhttp-address.ns1 > snn-host1:http-port > > > dfs.namenode.rpc-address.ns2 > nn-host2:rpc-port > > Actual property should be dfs.namenode.secondary.http-address.ns. > Because of this documentation error, when the document is followed and user > tries to setup HDFS federated cluster, secondary namenode will not be started > and also > hdfs getconf -secondarynamenodes will throw an exception -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-14867) Update HDFS Federation setup document, for incorrect property name for secondary name node http address
[ https://issues.apache.org/jira/browse/HADOOP-14867?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bharat Viswanadham updated HADOOP-14867: Summary: Update HDFS Federation setup document, for incorrect property name for secondary name node http address (was: Update HDFS Federation Document, for incorrect property name for secondary name node) > Update HDFS Federation setup document, for incorrect property name for > secondary name node http address > --- > > Key: HADOOP-14867 > URL: https://issues.apache.org/jira/browse/HADOOP-14867 > Project: Hadoop Common > Issue Type: Bug >Reporter: Bharat Viswanadham >Assignee: Bharat Viswanadham > > HDFS Federation setup documentation is having incorrect property name for > secondary namenode http port > It is mentioned as > > dfs.namenode.secondaryhttp-address.ns1 > snn-host1:http-port > > > dfs.namenode.rpc-address.ns2 > nn-host2:rpc-port > > Actual property should be dfs.namenode.secondary.http-address.ns. > Because of this documentation error, when the document is followed and user > tries to setup HDFS federated cluster, secondary namenode will not be started > and also > hdfs getconf -secondarynamenodes will throw an exception -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Work started] (HADOOP-14867) Update HDFS Federation Document, for incorrect property name for secondary name node
[ https://issues.apache.org/jira/browse/HADOOP-14867?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Work on HADOOP-14867 started by Bharat Viswanadham.

> Update HDFS Federation Document, for incorrect property name for secondary
> name node
> ---
>
> Key: HADOOP-14867
> URL: https://issues.apache.org/jira/browse/HADOOP-14867
> Project: Hadoop Common
> Issue Type: Bug
> Reporter: Bharat Viswanadham
> Assignee: Bharat Viswanadham
>
> The HDFS Federation setup documentation has an incorrect property name for
> the secondary namenode http address. It is given as:
>
> <property>
>   <name>dfs.namenode.secondaryhttp-address.ns1</name>
>   <value>snn-host1:http-port</value>
> </property>
> <property>
>   <name>dfs.namenode.rpc-address.ns2</name>
>   <value>nn-host2:rpc-port</value>
> </property>
>
> The actual property should be dfs.namenode.secondary.http-address.ns.
> Because of this documentation error, when the document is followed and a user
> tries to set up an HDFS federated cluster, the secondary namenode will not
> start, and hdfs getconf -secondarynamenodes will throw an exception.
[jira] [Created] (HADOOP-14867) Update HDFS Federation Document, for incorrect property name for secondary name node
Bharat Viswanadham created HADOOP-14867:
Summary: Update HDFS Federation Document, for incorrect property name for secondary name node
Key: HADOOP-14867
URL: https://issues.apache.org/jira/browse/HADOOP-14867
Project: Hadoop Common
Issue Type: Bug
Reporter: Bharat Viswanadham

The HDFS Federation setup documentation has an incorrect property name for the secondary namenode http address. It is given as:

<property>
  <name>dfs.namenode.secondaryhttp-address.ns1</name>
  <value>snn-host1:http-port</value>
</property>
<property>
  <name>dfs.namenode.rpc-address.ns2</name>
  <value>nn-host2:rpc-port</value>
</property>

The actual property should be dfs.namenode.secondary.http-address.ns. Because of this documentation error, when the document is followed and a user tries to set up an HDFS federated cluster, the secondary namenode will not start, and hdfs getconf -secondarynamenodes will throw an exception.
[jira] [Assigned] (HADOOP-14867) Update HDFS Federation Document, for incorrect property name for secondary name node
[ https://issues.apache.org/jira/browse/HADOOP-14867?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bharat Viswanadham reassigned HADOOP-14867: Assignee: Bharat Viswanadham

> Update HDFS Federation Document, for incorrect property name for secondary
> name node
> ---
>
> Key: HADOOP-14867
> URL: https://issues.apache.org/jira/browse/HADOOP-14867
> Project: Hadoop Common
> Issue Type: Bug
> Reporter: Bharat Viswanadham
> Assignee: Bharat Viswanadham
>
> The HDFS Federation setup documentation has an incorrect property name for
> the secondary namenode http address. It is given as:
>
> <property>
>   <name>dfs.namenode.secondaryhttp-address.ns1</name>
>   <value>snn-host1:http-port</value>
> </property>
> <property>
>   <name>dfs.namenode.rpc-address.ns2</name>
>   <value>nn-host2:rpc-port</value>
> </property>
>
> The actual property should be dfs.namenode.secondary.http-address.ns.
> Because of this documentation error, when the document is followed and a user
> tries to set up an HDFS federated cluster, the secondary namenode will not
> start, and hdfs getconf -secondarynamenodes will throw an exception.
[jira] [Commented] (HADOOP-14857) Fix downstream shaded client integration test
[ https://issues.apache.org/jira/browse/HADOOP-14857?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16165072#comment-16165072 ] Hudson commented on HADOOP-14857:
SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #12860 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/12860/])
HADOOP-14857. Fix downstream shaded client integration test. Contributed (wang: rev 8277fab2be3b0898ba326d15e4cb641da2ac51ce)
* (edit) hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/http/HttpServer2.java
* (edit) hadoop-client-modules/hadoop-client-integration-tests/src/test/java/org/apache/hadoop/example/ITUseMiniCluster.java
* (edit) pom.xml
* (edit) hadoop-client-modules/hadoop-client-minicluster/pom.xml

> Fix downstream shaded client integration test
> ---
>
> Key: HADOOP-14857
> URL: https://issues.apache.org/jira/browse/HADOOP-14857
> Project: Hadoop Common
> Issue Type: Sub-task
> Affects Versions: 3.0.0-alpha2
> Reporter: Sean Busbey
> Assignee: Sean Busbey
> Priority: Blocker
> Fix For: 3.0.0-beta1
>
> Attachments: HADOOP-14857.1.patch, HADOOP-14857.2.patch,
> HADOOP-14857.3.patch, HADOOP-18457.0.patch
>
> HADOOP-11804 added an IT to make sure downstreamers can use our client
> artifacts post-shading. It is currently broken:
> {code}
> useWebHDFS(org.apache.hadoop.example.ITUseMiniCluster) Time elapsed: 6.776 sec <<< ERROR!
> java.lang.NullPointerException: null
> at org.apache.hadoop.example.ITUseMiniCluster.clusterDown(ITUseMiniCluster.java:80)
> useHdfsFileSystem(org.apache.hadoop.example.ITUseMiniCluster) Time elapsed: 2.954 sec <<< ERROR!
> java.lang.NoClassDefFoundError: org/apache/hadoop/shaded/org/mockito/stubbing/Answer
> at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
> at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
> at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:335)
> at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
> at org.apache.hadoop.hdfs.MiniDFSCluster.shouldWait(MiniDFSCluster.java:2668)
> at org.apache.hadoop.hdfs.MiniDFSCluster.waitActive(MiniDFSCluster.java:2564)
> at org.apache.hadoop.hdfs.MiniDFSCluster.waitActive(MiniDFSCluster.java:2607)
> at org.apache.hadoop.hdfs.MiniDFSCluster.startDataNodes(MiniDFSCluster.java:1667)
> at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:874)
> at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:494)
> at org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:453)
> at org.apache.hadoop.example.ITUseMiniCluster.clusterUp(ITUseMiniCluster.java:74)
> useHdfsFileSystem(org.apache.hadoop.example.ITUseMiniCluster) Time elapsed: 2.954 sec <<< ERROR!
> java.lang.NullPointerException: null
> at org.apache.hadoop.example.ITUseMiniCluster.clusterDown(ITUseMiniCluster.java:80)
> {code}
> (edited after I fixed a downed loopback device)
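The odd class name in the NoClassDefFoundError is the clue to what broke: the shade plugin rewrote bytecode references to org.mockito.* under the org.apache.hadoop.shaded prefix, but the relocated classes themselves were not packaged into the minicluster jar. Relocation is essentially a prefix rewrite on class names; the sketch below mimics a typical relocation rule (the pattern/prefix pair is illustrative, not copied from Hadoop's actual pom):

```python
def relocate(class_name: str, pattern: str, shaded_prefix: str) -> str:
    """Mimic a maven-shade-plugin relocation: class names under `pattern`
    are rewritten beneath `shaded_prefix`; everything else is untouched."""
    if class_name == pattern or class_name.startswith(pattern + "."):
        return shaded_prefix + class_name
    return class_name

# A rule like <pattern>org.mockito</pattern> /
# <shadedPattern>org.apache.hadoop.shaded.org.mockito</shadedPattern>:
print(relocate("org.mockito.stubbing.Answer",
               "org.mockito", "org.apache.hadoop.shaded."))
# org.apache.hadoop.shaded.org.mockito.stubbing.Answer
# -- the exact name the JVM failed to find in the stack trace above.

# Classes outside the pattern are left alone:
print(relocate("java.lang.String", "org.mockito", "org.apache.hadoop.shaded."))
# java.lang.String
```

If the rewritten reference survives but the class under the new name is never bundled, every load attempt at runtime fails exactly as shown in the trace.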
[jira] [Commented] (HADOOP-14804) correct wrong parameters format order in core-default.xml
[ https://issues.apache.org/jira/browse/HADOOP-14804?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16165071#comment-16165071 ] Hudson commented on HADOOP-14804:
SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #12860 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/12860/])
HADOOP-14804. correct wrong parameters format order in core-default.xml. (cliang: rev 10b2cfa96cec8582799c9ae864dfb4eb8a42aeb7)
* (edit) hadoop-common-project/hadoop-common/src/main/resources/core-default.xml

> correct wrong parameters format order in core-default.xml
> ---
>
> Key: HADOOP-14804
> URL: https://issues.apache.org/jira/browse/HADOOP-14804
> Project: Hadoop Common
> Issue Type: Improvement
> Affects Versions: 3.0.0-alpha4
> Reporter: Chen Hongfei
> Assignee: Chen Hongfei
> Priority: Trivial
> Fix For: 3.1.0
>
> Attachments: HADOOP-14804.001.patch, HADOOP-14804.002.patch,
> HADOOP-14804.003.patch
>
> The descriptions of the "HTTP CORS" parameters come before the names:
>
> <property>
>   <description>Comma separated list of headers that are allowed for web
>   services needing cross-origin (CORS) support.</description>
>   <name>hadoop.http.cross-origin.allowed-headers</name>
>   <value>X-Requested-With,Content-Type,Accept,Origin</value>
> </property>
> ...
>
> but the description should follow the value, as in the other properties.
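Since Hadoop's Configuration looks each <property> child up by tag name, the misordering is a readability problem rather than a parsing one; the fix just moves <description> after <name> and <value> to match the rest of the file. A throwaway checker in the same spirit (the second sample property is illustrative, not quoted from core-default.xml):

```python
import xml.etree.ElementTree as ET

def properties_with_description_first(xml_text: str) -> list:
    """Return names of <property> entries whose <description> precedes <name>."""
    out = []
    for prop in ET.fromstring(xml_text).iter("property"):
        tags = [child.tag for child in prop]
        if "description" in tags and "name" in tags:
            if tags.index("description") < tags.index("name"):
                out.append(prop.findtext("name"))
    return out

SAMPLE = """
<configuration>
  <property>
    <description>Comma separated list of headers that are allowed for web
    services needing cross-origin (CORS) support.</description>
    <name>hadoop.http.cross-origin.allowed-headers</name>
    <value>X-Requested-With,Content-Type,Accept,Origin</value>
  </property>
  <property>
    <name>some.example.property</name>
    <value>example</value>
    <description>A correctly ordered entry, for contrast.</description>
  </property>
</configuration>
"""

print(properties_with_description_first(SAMPLE))
# ['hadoop.http.cross-origin.allowed-headers']
```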
[jira] [Commented] (HADOOP-13786) Add S3Guard committer for zero-rename commits to S3 endpoints
[ https://issues.apache.org/jira/browse/HADOOP-13786?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16165069#comment-16165069 ] Hadoop QA commented on HADOOP-13786:

-1 overall

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 0m 20s | Docker mode activated. |
|| || || || Prechecks ||
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 49 new or modified test files. |
|| || || || trunk Compile Tests ||
| 0 | mvndep | 0m 15s | Maven dependency ordering for branch |
| +1 | mvninstall | 13m 50s | trunk passed |
| +1 | compile | 16m 11s | trunk passed |
| +1 | checkstyle | 2m 17s | trunk passed |
| +1 | mvnsite | 2m 59s | trunk passed |
| +1 | findbugs | 3m 47s | trunk passed |
| +1 | javadoc | 2m 10s | trunk passed |
|| || || || Patch Compile Tests ||
| 0 | mvndep | 0m 17s | Maven dependency ordering for patch |
| +1 | mvninstall | 1m 50s | the patch passed |
| +1 | compile | 12m 4s | the patch passed |
| +1 | javac | 12m 4s | the patch passed |
| -0 | checkstyle | 2m 15s | root: The patch generated 23 new + 166 unchanged - 30 fixed = 189 total (was 196) |
| +1 | mvnsite | 3m 14s | the patch passed |
| -1 | whitespace | 0m 0s | The patch has 3 line(s) that end in whitespace. Use git apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply |
| +1 | xml | 0m 4s | The patch has no ill-formed XML file. |
| -1 | findbugs | 1m 57s | hadoop-common-project/hadoop-common generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) |
| -1 | findbugs | 0m 59s | hadoop-tools/hadoop-aws generated 2 new + 0 unchanged - 0 fixed = 2 total (was 0) |
| +1 | javadoc | 0m 58s | hadoop-common in the patch passed. |
| +1 | javadoc | 0m 26s | hadoop-yarn-project_hadoop-yarn_hadoop-yarn-registry generated 0 new + 46 unchanged - 2 fixed = 46 total (was 48) |
| +1 | javadoc | 0m 36s | hadoop-mapreduce-client-core in the patch passed. |
| -1 | javadoc | 0m 28s | hadoop-tools_hadoop-aws generated 5 new + 0 unchanged - 0 fixed = 5 total (was 0) |
|| || || || Other Tests ||
| -1 | unit | 8m 56s | hadoop-common in the patch failed. |
| +1 | unit | 0m 56s | hadoop-yarn-registry in the patch passed. |
| +1 | unit | 3m 16s | hadoop-mapreduce-client-core in the patch passed. |
| +1 | unit | 2m 10s | hadoop-aws in the patch passed. |
| +1 | asflicense |
[jira] [Updated] (HADOOP-14857) Fix downstream shaded client integration test
[ https://issues.apache.org/jira/browse/HADOOP-14857?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrew Wang updated HADOOP-14857:
Resolution: Fixed
Fix Version/s: 3.0.0-beta1
Status: Resolved (was: Patch Available)

Committed to trunk and branch-3.0. Thanks for the contribution, Sean, and for the review, Bharat!

> Fix downstream shaded client integration test
> ---
>
> Key: HADOOP-14857
> URL: https://issues.apache.org/jira/browse/HADOOP-14857
> Project: Hadoop Common
> Issue Type: Sub-task
> Affects Versions: 3.0.0-alpha2
> Reporter: Sean Busbey
> Assignee: Sean Busbey
> Priority: Blocker
> Fix For: 3.0.0-beta1
>
> Attachments: HADOOP-14857.1.patch, HADOOP-14857.2.patch,
> HADOOP-14857.3.patch, HADOOP-18457.0.patch
>
> HADOOP-11804 added an IT to make sure downstreamers can use our client
> artifacts post-shading. It is currently broken:
> {code}
> useWebHDFS(org.apache.hadoop.example.ITUseMiniCluster) Time elapsed: 6.776 sec <<< ERROR!
> java.lang.NullPointerException: null
> at org.apache.hadoop.example.ITUseMiniCluster.clusterDown(ITUseMiniCluster.java:80)
> useHdfsFileSystem(org.apache.hadoop.example.ITUseMiniCluster) Time elapsed: 2.954 sec <<< ERROR!
> java.lang.NoClassDefFoundError: org/apache/hadoop/shaded/org/mockito/stubbing/Answer
> at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
> at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
> at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:335)
> at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
> at org.apache.hadoop.hdfs.MiniDFSCluster.shouldWait(MiniDFSCluster.java:2668)
> at org.apache.hadoop.hdfs.MiniDFSCluster.waitActive(MiniDFSCluster.java:2564)
> at org.apache.hadoop.hdfs.MiniDFSCluster.waitActive(MiniDFSCluster.java:2607)
> at org.apache.hadoop.hdfs.MiniDFSCluster.startDataNodes(MiniDFSCluster.java:1667)
> at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:874)
> at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:494)
> at org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:453)
> at org.apache.hadoop.example.ITUseMiniCluster.clusterUp(ITUseMiniCluster.java:74)
> useHdfsFileSystem(org.apache.hadoop.example.ITUseMiniCluster) Time elapsed: 2.954 sec <<< ERROR!
> java.lang.NullPointerException: null
> at org.apache.hadoop.example.ITUseMiniCluster.clusterDown(ITUseMiniCluster.java:80)
> {code}
> (edited after I fixed a downed loopback device)
[jira] [Updated] (HADOOP-14857) Fix downstream shaded client integration test
[ https://issues.apache.org/jira/browse/HADOOP-14857?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrew Wang updated HADOOP-14857: Summary: Fix downstream shaded client integration test (was: downstream client artifact IT fails)

> Fix downstream shaded client integration test
> ---
>
> Key: HADOOP-14857
> URL: https://issues.apache.org/jira/browse/HADOOP-14857
> Project: Hadoop Common
> Issue Type: Sub-task
> Affects Versions: 3.0.0-alpha2
> Reporter: Sean Busbey
> Assignee: Sean Busbey
> Priority: Blocker
>
> Attachments: HADOOP-14857.1.patch, HADOOP-14857.2.patch,
> HADOOP-14857.3.patch, HADOOP-18457.0.patch
>
> HADOOP-11804 added an IT to make sure downstreamers can use our client
> artifacts post-shading. It is currently broken:
> {code}
> useWebHDFS(org.apache.hadoop.example.ITUseMiniCluster) Time elapsed: 6.776 sec <<< ERROR!
> java.lang.NullPointerException: null
> at org.apache.hadoop.example.ITUseMiniCluster.clusterDown(ITUseMiniCluster.java:80)
> useHdfsFileSystem(org.apache.hadoop.example.ITUseMiniCluster) Time elapsed: 2.954 sec <<< ERROR!
> java.lang.NoClassDefFoundError: org/apache/hadoop/shaded/org/mockito/stubbing/Answer
> at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
> at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
> at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:335)
> at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
> at org.apache.hadoop.hdfs.MiniDFSCluster.shouldWait(MiniDFSCluster.java:2668)
> at org.apache.hadoop.hdfs.MiniDFSCluster.waitActive(MiniDFSCluster.java:2564)
> at org.apache.hadoop.hdfs.MiniDFSCluster.waitActive(MiniDFSCluster.java:2607)
> at org.apache.hadoop.hdfs.MiniDFSCluster.startDataNodes(MiniDFSCluster.java:1667)
> at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:874)
> at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:494)
> at org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:453)
> at org.apache.hadoop.example.ITUseMiniCluster.clusterUp(ITUseMiniCluster.java:74)
> useHdfsFileSystem(org.apache.hadoop.example.ITUseMiniCluster) Time elapsed: 2.954 sec <<< ERROR!
> java.lang.NullPointerException: null
> at org.apache.hadoop.example.ITUseMiniCluster.clusterDown(ITUseMiniCluster.java:80)
> {code}
> (edited after I fixed a downed loopback device)
[jira] [Commented] (HADOOP-14857) downstream client artifact IT fails
[ https://issues.apache.org/jira/browse/HADOOP-14857?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16165040#comment-16165040 ] Andrew Wang commented on HADOOP-14857:
+1 LGTM, I'll commit this shortly.

> downstream client artifact IT fails
> ---
>
> Key: HADOOP-14857
> URL: https://issues.apache.org/jira/browse/HADOOP-14857
> Project: Hadoop Common
> Issue Type: Sub-task
> Affects Versions: 3.0.0-alpha2
> Reporter: Sean Busbey
> Assignee: Sean Busbey
> Priority: Blocker
>
> Attachments: HADOOP-14857.1.patch, HADOOP-14857.2.patch,
> HADOOP-14857.3.patch, HADOOP-18457.0.patch
>
> HADOOP-11804 added an IT to make sure downstreamers can use our client
> artifacts post-shading. It is currently broken:
> {code}
> useWebHDFS(org.apache.hadoop.example.ITUseMiniCluster) Time elapsed: 6.776 sec <<< ERROR!
> java.lang.NullPointerException: null
> at org.apache.hadoop.example.ITUseMiniCluster.clusterDown(ITUseMiniCluster.java:80)
> useHdfsFileSystem(org.apache.hadoop.example.ITUseMiniCluster) Time elapsed: 2.954 sec <<< ERROR!
> java.lang.NoClassDefFoundError: org/apache/hadoop/shaded/org/mockito/stubbing/Answer
> at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
> at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
> at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:335)
> at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
> at org.apache.hadoop.hdfs.MiniDFSCluster.shouldWait(MiniDFSCluster.java:2668)
> at org.apache.hadoop.hdfs.MiniDFSCluster.waitActive(MiniDFSCluster.java:2564)
> at org.apache.hadoop.hdfs.MiniDFSCluster.waitActive(MiniDFSCluster.java:2607)
> at org.apache.hadoop.hdfs.MiniDFSCluster.startDataNodes(MiniDFSCluster.java:1667)
> at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:874)
> at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:494)
> at org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:453)
> at org.apache.hadoop.example.ITUseMiniCluster.clusterUp(ITUseMiniCluster.java:74)
> useHdfsFileSystem(org.apache.hadoop.example.ITUseMiniCluster) Time elapsed: 2.954 sec <<< ERROR!
> java.lang.NullPointerException: null
> at org.apache.hadoop.example.ITUseMiniCluster.clusterDown(ITUseMiniCluster.java:80)
> {code}
> (edited after I fixed a downed loopback device)
[jira] [Resolved] (HADOOP-14804) correct wrong parameters format order in core-default.xml
[ https://issues.apache.org/jira/browse/HADOOP-14804?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Chen Liang resolved HADOOP-14804: Resolution: Fixed

> correct wrong parameters format order in core-default.xml
> ---
>
> Key: HADOOP-14804
> URL: https://issues.apache.org/jira/browse/HADOOP-14804
> Project: Hadoop Common
> Issue Type: Improvement
> Affects Versions: 3.0.0-alpha4
> Reporter: Chen Hongfei
> Assignee: Chen Hongfei
> Priority: Trivial
> Fix For: 3.1.0
>
> Attachments: HADOOP-14804.001.patch, HADOOP-14804.002.patch,
> HADOOP-14804.003.patch
>
> The descriptions of the "HTTP CORS" parameters come before the names:
>
> <property>
>   <description>Comma separated list of headers that are allowed for web
>   services needing cross-origin (CORS) support.</description>
>   <name>hadoop.http.cross-origin.allowed-headers</name>
>   <value>X-Requested-With,Content-Type,Accept,Origin</value>
> </property>
> ...
>
> but the description should follow the value, as in the other properties.
[jira] [Commented] (HADOOP-14804) correct wrong parameters format order in core-default.xml
[ https://issues.apache.org/jira/browse/HADOOP-14804?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16165038#comment-16165038 ] Chen Liang commented on HADOOP-14804:
Thanks [~Hongfei Chen] for the follow-up, I've committed this to trunk and branch-3.0. Thanks for the contribution!

> correct wrong parameters format order in core-default.xml
> ---
>
> Key: HADOOP-14804
> URL: https://issues.apache.org/jira/browse/HADOOP-14804
> Project: Hadoop Common
> Issue Type: Improvement
> Affects Versions: 3.0.0-alpha4
> Reporter: Chen Hongfei
> Assignee: Chen Hongfei
> Priority: Trivial
> Fix For: 3.1.0
>
> Attachments: HADOOP-14804.001.patch, HADOOP-14804.002.patch,
> HADOOP-14804.003.patch
>
> The descriptions of the "HTTP CORS" parameters come before the names:
>
> <property>
>   <description>Comma separated list of headers that are allowed for web
>   services needing cross-origin (CORS) support.</description>
>   <name>hadoop.http.cross-origin.allowed-headers</name>
>   <value>X-Requested-With,Content-Type,Accept,Origin</value>
> </property>
> ...
>
> but the description should follow the value, as in the other properties.
[jira] [Updated] (HADOOP-14804) correct wrong parameters format order in core-default.xml
[ https://issues.apache.org/jira/browse/HADOOP-14804?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Chen Liang updated HADOOP-14804: Target Version/s: 3.0.0-beta1 (was: 3.0.0-alpha4)

> correct wrong parameters format order in core-default.xml
> ---
>
> Key: HADOOP-14804
> URL: https://issues.apache.org/jira/browse/HADOOP-14804
> Project: Hadoop Common
> Issue Type: Improvement
> Affects Versions: 3.0.0-alpha4
> Reporter: Chen Hongfei
> Assignee: Chen Hongfei
> Priority: Trivial
> Fix For: 3.1.0
>
> Attachments: HADOOP-14804.001.patch, HADOOP-14804.002.patch,
> HADOOP-14804.003.patch
>
> The descriptions of the "HTTP CORS" parameters come before the names:
>
> <property>
>   <description>Comma separated list of headers that are allowed for web
>   services needing cross-origin (CORS) support.</description>
>   <name>hadoop.http.cross-origin.allowed-headers</name>
>   <value>X-Requested-With,Content-Type,Accept,Origin</value>
> </property>
> ...
>
> but the description should follow the value, as in the other properties.
[jira] [Updated] (HADOOP-14804) correct wrong parameters format order in core-default.xml
[ https://issues.apache.org/jira/browse/HADOOP-14804?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Chen Liang updated HADOOP-14804: Fix Version/s: 3.1.0 (was: 3.0.0-alpha4)

> correct wrong parameters format order in core-default.xml
> ---
>
> Key: HADOOP-14804
> URL: https://issues.apache.org/jira/browse/HADOOP-14804
> Project: Hadoop Common
> Issue Type: Improvement
> Affects Versions: 3.0.0-alpha4
> Reporter: Chen Hongfei
> Assignee: Chen Hongfei
> Priority: Trivial
> Fix For: 3.1.0
>
> Attachments: HADOOP-14804.001.patch, HADOOP-14804.002.patch,
> HADOOP-14804.003.patch
>
> The descriptions of the "HTTP CORS" parameters come before the names:
>
> <property>
>   <description>Comma separated list of headers that are allowed for web
>   services needing cross-origin (CORS) support.</description>
>   <name>hadoop.http.cross-origin.allowed-headers</name>
>   <value>X-Requested-With,Content-Type,Accept,Origin</value>
> </property>
> ...
>
> but the description should follow the value, as in the other properties.
[jira] [Updated] (HADOOP-14738) Remove S3N and obsolete bits of S3A; rework docs
[ https://issues.apache.org/jira/browse/HADOOP-14738?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran updated HADOOP-14738: Status: Open (was: Patch Available)

> Remove S3N and obsolete bits of S3A; rework docs
> ---
>
> Key: HADOOP-14738
> URL: https://issues.apache.org/jira/browse/HADOOP-14738
> Project: Hadoop Common
> Issue Type: Sub-task
> Components: fs/s3
> Affects Versions: 2.9.0, 3.0.0-beta1
> Reporter: Steve Loughran
> Assignee: Steve Loughran
> Priority: Blocker
>
> Attachments: HADOOP-14738-002.patch, HADOOP-14738-003.patch,
> HADOOP-14738-004.patch, HADOOP-14738-005.patch, HADOOP-14739-001.patch
>
> We are all happy with S3A; it's been stable since Hadoop 2.7 and high-perf
> since Hadoop 2.8.
> It's now time to kill S3N off: remove the source, the tests, the transitive
> dependencies. This patch does that.
> It also removes the obsolete, original S3A output stream; the fast/block
> upload stream has been stable and is much more manageable and maintained (put
> differently: we don't ever look at the original S3A output stream, and we
> tell people not to use it, for performance reasons).
> As well as cutting the features, this patch updates the AWS docs with:
> * a split-out S3N migration page
> * a split-out troubleshooting page
> * a rework of the "uploading data with s3a" section of index.md, as there's
> no need to discuss the slow upload except in the past tense... all that is
> needed is to list the buffering and thread tuning options of the block
> uploader.
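For readers landing here from the docs rework: the "buffering and thread tuning options of the block uploader" referred to above are ordinary s3a settings in core-site.xml. A rough sketch of the main knobs (option names as in the Hadoop 2.8+ s3a documentation; the values shown are illustrative placeholders, not recommendations):

```
<property>
  <name>fs.s3a.fast.upload.buffer</name>
  <value>disk</value>  <!-- disk, array, or bytebuffer -->
</property>
<property>
  <name>fs.s3a.threads.max</name>
  <value>10</value>    <!-- upload thread pool size -->
</property>
<property>
  <name>fs.s3a.multipart.size</name>
  <value>100M</value>  <!-- size of each uploaded block -->
</property>
```

Consult the s3a documentation shipped with your Hadoop release before tuning these; defaults differ between versions.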
[jira] [Updated] (HADOOP-14738) Remove S3N and obsolete bits of S3A; rework docs
[ https://issues.apache.org/jira/browse/HADOOP-14738?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran updated HADOOP-14738: Attachment: HADOOP-14738-005.patch
Patch 005: address Aaron's comments. test: s3 ireland

> Remove S3N and obsolete bits of S3A; rework docs
> ---
>
> Key: HADOOP-14738
> URL: https://issues.apache.org/jira/browse/HADOOP-14738
> Project: Hadoop Common
> Issue Type: Sub-task
> Components: fs/s3
> Affects Versions: 2.9.0, 3.0.0-beta1
> Reporter: Steve Loughran
> Assignee: Steve Loughran
> Priority: Blocker
>
> Attachments: HADOOP-14738-002.patch, HADOOP-14738-003.patch,
> HADOOP-14738-004.patch, HADOOP-14738-005.patch, HADOOP-14739-001.patch
>
> We are all happy with S3A; it's been stable since Hadoop 2.7 and high-perf
> since Hadoop 2.8.
> It's now time to kill S3N off: remove the source, the tests, the transitive
> dependencies. This patch does that.
> It also removes the obsolete, original S3A output stream; the fast/block
> upload stream has been stable and is much more manageable and maintained (put
> differently: we don't ever look at the original S3A output stream, and we
> tell people not to use it, for performance reasons).
> As well as cutting the features, this patch updates the AWS docs with:
> * a split-out S3N migration page
> * a split-out troubleshooting page
> * a rework of the "uploading data with s3a" section of index.md, as there's
> no need to discuss the slow upload except in the past tense... all that is
> needed is to list the buffering and thread tuning options of the block
> uploader.