[jira] [Commented] (HADOOP-14872) CryptoInputStream should implement unbuffer
[ https://issues.apache.org/jira/browse/HADOOP-14872?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16283081#comment-16283081 ]

Xiao Chen commented on HADOOP-14872:
------------------------------------

Cherry-picked to branch-2 and branch-3.0

> CryptoInputStream should implement unbuffer
> -------------------------------------------
>
> Key: HADOOP-14872
> URL: https://issues.apache.org/jira/browse/HADOOP-14872
> Project: Hadoop Common
> Issue Type: Improvement
> Components: fs, security
> Affects Versions: 2.6.4
> Reporter: John Zhuge
> Assignee: John Zhuge
> Fix For: 3.1.0, 2.10.0, 3.0.1
>
> Attachments: HADOOP-14872.001.patch, HADOOP-14872.002.patch,
> HADOOP-14872.003.patch, HADOOP-14872.004.patch, HADOOP-14872.005.patch,
> HADOOP-14872.006.patch, HADOOP-14872.007.patch, HADOOP-14872.008.patch,
> HADOOP-14872.009.patch, HADOOP-14872.010.patch, HADOOP-14872.011.patch,
> HADOOP-14872.012.patch, HADOOP-14872.013.patch
>
>
> Discovered in IMPALA-5909.
> Opening an encrypted HDFS file returns a chain of wrapped input streams:
> {noformat}
> HdfsDataInputStream
>   CryptoInputStream
>     DFSInputStream
> {noformat}
> If an application such as Impala or HBase calls HdfsDataInputStream#unbuffer,
> FSDataInputStream#unbuffer will be called:
> {code:java}
> try {
>   ((CanUnbuffer) in).unbuffer();
> } catch (ClassCastException e) {
>   throw new UnsupportedOperationException("this stream does not " +
>       "support unbuffering.");
> }
> {code}
> If the {{in}} class does not implement CanUnbuffer, a UOE will be thrown. If
> the application is not careful, tons of UOEs will show up in logs.
> In comparison, opening a non-encrypted HDFS file returns this chain:
> {noformat}
> HdfsDataInputStream
>   DFSInputStream
> {noformat}
> DFSInputStream implements CanUnbuffer.
> It is good for CryptoInputStream to implement CanUnbuffer for three reasons:
> * Release buffers, caches, or any other resources when instructed
> * Be able to call unbuffer on the DFSInputStream it wraps
> * Avoid the UOE described above. Applications may not handle the UOE very well.
-- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
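The delegation described above can be sketched with minimal stand-in types. CanUnbuffer here is a one-method placeholder, and InnerStream/CryptoWrapper are illustrative names, not Hadoop's real classes; the point is only that the wrapper releases its own state and then forwards unbuffer to the stream it wraps.

```java
// Stand-in for Hadoop's CanUnbuffer interface (one method, same idea).
interface CanUnbuffer {
    void unbuffer();
}

// Stand-in for DFSInputStream: owns buffers it can release on demand.
class InnerStream implements CanUnbuffer {
    boolean buffered = true;
    public void unbuffer() { buffered = false; }   // drop cached data
}

// Stand-in for CryptoInputStream: releases its own buffer, then delegates.
class CryptoWrapper implements CanUnbuffer {
    final InnerStream wrapped;
    boolean decryptBufferHeld = true;
    CryptoWrapper(InnerStream in) { this.wrapped = in; }
    public void unbuffer() {
        decryptBufferHeld = false;   // release the decryption buffer
        wrapped.unbuffer();          // propagate to the wrapped stream
    }
}

public class UnbufferChain {
    public static void main(String[] args) {
        InnerStream dfs = new InnerStream();
        CryptoWrapper crypto = new CryptoWrapper(dfs);
        crypto.unbuffer();
        // Both layers have released their state.
        System.out.println(crypto.decryptBufferHeld + " " + dfs.buffered); // false false
    }
}
```

Without the middle layer implementing CanUnbuffer, the outermost unbuffer call has no way to reach the inner stream, which is exactly the gap the issue describes.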
[jira] [Commented] (HADOOP-15056) Fix TestUnbuffer#testUnbufferException failure
[ https://issues.apache.org/jira/browse/HADOOP-15056?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16283082#comment-16283082 ]

Hudson commented on HADOOP-15056:
---------------------------------

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13346 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/13346/])
HADOOP-15056. Fix TestUnbuffer#testUnbufferException failure. (xiao: rev 19e089420999dd9d97d981dcd0abd64b6166152d)
* (edit) hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/TestUnbuffer.java
* (edit) hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/StreamCapabilitiesPolicy.java

> Fix TestUnbuffer#testUnbufferException failure
> ----------------------------------------------
>
> Key: HADOOP-15056
> URL: https://issues.apache.org/jira/browse/HADOOP-15056
> Project: Hadoop Common
> Issue Type: Improvement
> Components: test
> Affects Versions: 2.9.0
> Reporter: Jack Bearden
> Assignee: Jack Bearden
> Priority: Minor
> Fix For: 3.1.0, 2.10.0, 3.0.1
>
> Attachments: HADOOP-15056.001.patch, HADOOP-15056.002.patch,
> HADOOP-15056.003.patch, HADOOP-15056.004.patch, HADOOP-15056.005.patch,
> HADOOP-15056.006.patch, HADOOP-15056.007.patch
>
>
> Hello! I am a new contributor and actually contributing to open source for
> the very first time. :)
> I pulled down Hadoop today and, when running the tests, I encountered a
> failure in the TestUnbuffer#testUnbufferException test.
> The unbuffer code has recently gone through some changes and I believe this
> test case may have been overlooked. Using today's git commit
> (659e85e304d070f9908a96cf6a0e1cbafde6a434) and running the test case, there
> is a mock expectation for an UnsupportedOperationException that is no longer
> thrown.
> A test like this appears valuable, so my initial proposed patch did not
> remove it. Instead, I removed the conditions that were guarding the cast
> from being able to fire -- as was the previous behavior.
> Now, when we encounter an object that doesn't have the UNBUFFERED
> StreamCapability, it will throw an error as it did prior to the recent
> changes.
> Please review and let me know what you think! :D
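The capability-based check this thread revolves around can be sketched as follows. The interface and the "in:unbuffer" capability string mirror Hadoop's StreamCapabilities convention, but the types here are simplified stand-ins, not the real API: the caller asks the stream what it supports instead of casting and catching ClassCastException.

```java
import java.util.Set;

// Stand-in for Hadoop's StreamCapabilities: a stream advertises what it supports.
interface StreamCapabilities {
    boolean hasCapability(String capability);
}

class CapableStream implements StreamCapabilities {
    public boolean hasCapability(String c) {
        return Set.of("in:readahead", "in:unbuffer").contains(c);
    }
}

class PlainStream implements StreamCapabilities {
    public boolean hasCapability(String c) { return false; }
}

public class CapabilityCheck {
    // Ask first; only a stream that claims the capability gets unbuffered.
    static boolean tryUnbuffer(StreamCapabilities in) {
        if (in.hasCapability("in:unbuffer")) {
            // here the real code would call ((CanUnbuffer) in).unbuffer()
            return true;
        }
        return false;   // skip quietly instead of flooding logs with UOEs
    }

    public static void main(String[] args) {
        System.out.println(tryUnbuffer(new CapableStream())); // true
        System.out.println(tryUnbuffer(new PlainStream()));   // false
    }
}
```

The fix discussed above restores the opposite policy for streams lacking the capability (throw, as before), but the ask-first shape of the check is the same.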
[jira] [Updated] (HADOOP-14872) CryptoInputStream should implement unbuffer
[ https://issues.apache.org/jira/browse/HADOOP-14872?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Xiao Chen updated HADOOP-14872:
-------------------------------
    Fix Version/s: 3.0.1
                   2.10.0

> CryptoInputStream should implement unbuffer
> -------------------------------------------
>
> Key: HADOOP-14872
> URL: https://issues.apache.org/jira/browse/HADOOP-14872
> Project: Hadoop Common
> Issue Type: Improvement
> Components: fs, security
> Affects Versions: 2.6.4
> Reporter: John Zhuge
> Assignee: John Zhuge
> Fix For: 3.1.0, 2.10.0, 3.0.1
>
> Attachments: HADOOP-14872.001.patch, HADOOP-14872.002.patch,
> HADOOP-14872.003.patch, HADOOP-14872.004.patch, HADOOP-14872.005.patch,
> HADOOP-14872.006.patch, HADOOP-14872.007.patch, HADOOP-14872.008.patch,
> HADOOP-14872.009.patch, HADOOP-14872.010.patch, HADOOP-14872.011.patch,
> HADOOP-14872.012.patch, HADOOP-14872.013.patch
>
>
> Discovered in IMPALA-5909.
> Opening an encrypted HDFS file returns a chain of wrapped input streams:
> {noformat}
> HdfsDataInputStream
>   CryptoInputStream
>     DFSInputStream
> {noformat}
> If an application such as Impala or HBase calls HdfsDataInputStream#unbuffer,
> FSDataInputStream#unbuffer will be called:
> {code:java}
> try {
>   ((CanUnbuffer) in).unbuffer();
> } catch (ClassCastException e) {
>   throw new UnsupportedOperationException("this stream does not " +
>       "support unbuffering.");
> }
> {code}
> If the {{in}} class does not implement CanUnbuffer, a UOE will be thrown. If
> the application is not careful, tons of UOEs will show up in logs.
> In comparison, opening a non-encrypted HDFS file returns this chain:
> {noformat}
> HdfsDataInputStream
>   DFSInputStream
> {noformat}
> DFSInputStream implements CanUnbuffer.
> It is good for CryptoInputStream to implement CanUnbuffer for three reasons:
> * Release buffers, caches, or any other resources when instructed
> * Be able to call unbuffer on the DFSInputStream it wraps
> * Avoid the UOE described above. Applications may not handle the UOE very well.
[jira] [Comment Edited] (HADOOP-15012) Add readahead, dropbehind, and unbuffer to StreamCapabilities
[ https://issues.apache.org/jira/browse/HADOOP-15012?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16283080#comment-16283080 ]

Xiao Chen edited comment on HADOOP-15012 at 12/8/17 5:22 AM:
-------------------------------------------------------------

Pushed to branch-3.0 and branch-2 (with HADOOP-15056). Thanks Steve!

was (Author: xiaochen):
Pushed to branch-3.0 and branch-2. Thanks Steve!

> Add readahead, dropbehind, and unbuffer to StreamCapabilities
> -------------------------------------------------------------
>
> Key: HADOOP-15012
> URL: https://issues.apache.org/jira/browse/HADOOP-15012
> Project: Hadoop Common
> Issue Type: Improvement
> Components: fs
> Affects Versions: 2.9.0
> Reporter: John Zhuge
> Assignee: John Zhuge
> Fix For: 3.1.0, 2.10.0, 3.0.1
>
> Attachments: HADOOP-15012.branch-2.01.patch
>
>
> A split from HADOOP-14872 to track changes that enhance the
> StreamCapabilities class with the READAHEAD, DROPBEHIND, and UNBUFFER
> capabilities.
> Discussions and code reviews are done in HADOOP-14872.
[jira] [Updated] (HADOOP-15012) Add readahead, dropbehind, and unbuffer to StreamCapabilities
[ https://issues.apache.org/jira/browse/HADOOP-15012?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Xiao Chen updated HADOOP-15012:
-------------------------------
       Resolution: Fixed
    Fix Version/s: 3.0.1
                   2.10.0
           Status: Resolved  (was: Patch Available)

Pushed to branch-3.0 and branch-2. Thanks Steve!

> Add readahead, dropbehind, and unbuffer to StreamCapabilities
> -------------------------------------------------------------
>
> Key: HADOOP-15012
> URL: https://issues.apache.org/jira/browse/HADOOP-15012
> Project: Hadoop Common
> Issue Type: Improvement
> Components: fs
> Affects Versions: 2.9.0
> Reporter: John Zhuge
> Assignee: John Zhuge
> Fix For: 3.1.0, 2.10.0, 3.0.1
>
> Attachments: HADOOP-15012.branch-2.01.patch
>
>
> A split from HADOOP-14872 to track changes that enhance the
> StreamCapabilities class with the READAHEAD, DROPBEHIND, and UNBUFFER
> capabilities.
> Discussions and code reviews are done in HADOOP-14872.
[jira] [Updated] (HADOOP-15056) Fix TestUnbuffer#testUnbufferException failure
[ https://issues.apache.org/jira/browse/HADOOP-15056?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Xiao Chen updated HADOOP-15056:
-------------------------------
       Resolution: Fixed
     Hadoop Flags: Reviewed
    Fix Version/s: 3.0.1
                   2.10.0
                   3.1.0
           Status: Resolved  (was: Patch Available)

Pushed to trunk, branch-3.0 and branch-2. Thanks for the contribution [~jackbearden], and [~jzhuge] for reviewing!

> Fix TestUnbuffer#testUnbufferException failure
> ----------------------------------------------
>
> Key: HADOOP-15056
> URL: https://issues.apache.org/jira/browse/HADOOP-15056
> Project: Hadoop Common
> Issue Type: Improvement
> Components: test
> Affects Versions: 2.9.0
> Reporter: Jack Bearden
> Assignee: Jack Bearden
> Priority: Minor
> Fix For: 3.1.0, 2.10.0, 3.0.1
>
> Attachments: HADOOP-15056.001.patch, HADOOP-15056.002.patch,
> HADOOP-15056.003.patch, HADOOP-15056.004.patch, HADOOP-15056.005.patch,
> HADOOP-15056.006.patch, HADOOP-15056.007.patch
>
>
> Hello! I am a new contributor and actually contributing to open source for
> the very first time. :)
> I pulled down Hadoop today and, when running the tests, I encountered a
> failure in the TestUnbuffer#testUnbufferException test.
> The unbuffer code has recently gone through some changes and I believe this
> test case may have been overlooked. Using today's git commit
> (659e85e304d070f9908a96cf6a0e1cbafde6a434) and running the test case, there
> is a mock expectation for an UnsupportedOperationException that is no longer
> thrown.
> A test like this appears valuable, so my initial proposed patch did not
> remove it. Instead, I removed the conditions that were guarding the cast
> from being able to fire -- as was the previous behavior.
> Now, when we encounter an object that doesn't have the UNBUFFERED
> StreamCapability, it will throw an error as it did prior to the recent
> changes.
> Please review and let me know what you think! :D
[jira] [Commented] (HADOOP-15056) Fix TestUnbuffer#testUnbufferException failure
[ https://issues.apache.org/jira/browse/HADOOP-15056?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16283063#comment-16283063 ]

Xiao Chen commented on HADOOP-15056:
------------------------------------

Test failures look unrelated, committing this.

> Fix TestUnbuffer#testUnbufferException failure
> ----------------------------------------------
>
> Key: HADOOP-15056
> URL: https://issues.apache.org/jira/browse/HADOOP-15056
> Project: Hadoop Common
> Issue Type: Improvement
> Components: test
> Affects Versions: 2.9.0
> Reporter: Jack Bearden
> Assignee: Jack Bearden
> Priority: Minor
>
> Attachments: HADOOP-15056.001.patch, HADOOP-15056.002.patch,
> HADOOP-15056.003.patch, HADOOP-15056.004.patch, HADOOP-15056.005.patch,
> HADOOP-15056.006.patch, HADOOP-15056.007.patch
>
>
> Hello! I am a new contributor and actually contributing to open source for
> the very first time. :)
> I pulled down Hadoop today and, when running the tests, I encountered a
> failure in the TestUnbuffer#testUnbufferException test.
> The unbuffer code has recently gone through some changes and I believe this
> test case may have been overlooked. Using today's git commit
> (659e85e304d070f9908a96cf6a0e1cbafde6a434) and running the test case, there
> is a mock expectation for an UnsupportedOperationException that is no longer
> thrown.
> A test like this appears valuable, so my initial proposed patch did not
> remove it. Instead, I removed the conditions that were guarding the cast
> from being able to fire -- as was the previous behavior.
> Now, when we encounter an object that doesn't have the UNBUFFERED
> StreamCapability, it will throw an error as it did prior to the recent
> changes.
> Please review and let me know what you think! :D
[jira] [Updated] (HADOOP-15101) what testListStatusFile verified not consistent with listStatus declaration in FileSystem
[ https://issues.apache.org/jira/browse/HADOOP-15101?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

zhoutai.zt updated HADOOP-15101:
--------------------------------
    Description:
  @Test
  public void testListStatusFile() throws Throwable {
    describe("test the listStatus(path) on a file");
    Path f = touchf("liststatusfile");
    verifyStatusArrayMatchesFile(f, getFileSystem().listStatus(f));
  }
In this case, first create a file _f_, then call listStatus on _f_; the expectation is that listStatus returns an array of one FileStatus. But this is not consistent with the declaration in FileSystem, i.e.
"
List the statuses of the files/directories in the given path if the path is a directory.
Parameters: f given path
Returns: the statuses of the files/directories in the given patch
"
Which is expected? The behavior in the fs contract test or in FileSystem?

  was:
  @Test
  public void testListStatusFile() throws Throwable {
    describe("test the listStatus(path) on a file");
    Path f = touchf("liststatusfile");
    verifyStatusArrayMatchesFile(f, getFileSystem().listStatus(f));
  }
In this case, first create a file _f_, then call listStatus on _f_; the expectation is that listStatus returns an array of one FileStatus. But this is not consistent with the declaration in FileSystem, i.e.
??List the statuses of the files/directories in the given path if the path is a directory.
Parameters: f given path
Returns: the statuses of the files/directories in the given patch??
Which is expected? The behavior in the fs contract test or in FileSystem?

> what testListStatusFile verified not consistent with listStatus declaration
> in FileSystem
> ---------------------------------------------------------------------------
>
> Key: HADOOP-15101
> URL: https://issues.apache.org/jira/browse/HADOOP-15101
> Project: Hadoop Common
> Issue Type: Bug
> Components: fs, test
> Affects Versions: 3.0.0-beta1
> Reporter: zhoutai.zt
> Priority: Critical
>
> @Test
> public void testListStatusFile() throws Throwable {
>   describe("test the listStatus(path) on a file");
>   Path f = touchf("liststatusfile");
>   verifyStatusArrayMatchesFile(f, getFileSystem().listStatus(f));
> }
> In this case, first create a file _f_, then call listStatus on _f_; the
> expectation is that listStatus returns an array of one FileStatus. But this
> is not consistent with the declaration in FileSystem, i.e.
> "
> List the statuses of the files/directories in the given path if the path is
> a directory.
> Parameters: f given path
> Returns: the statuses of the files/directories in the given patch
> "
> Which is expected? The behavior in the fs contract test or in FileSystem?
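The disputed behavior, listStatus on a plain file yielding a one-element array holding that file's status, can be modeled with java.io.File instead of Hadoop's FileSystem. The listStatus helper below is hypothetical, written only to illustrate the contract the fs contract test checks; it is not the real Hadoop API:

```java
import java.io.File;
import java.io.IOException;
import java.nio.file.Files;

// Illustrative model: listing a directory returns its children; "listing" a
// plain file returns a one-element array holding the file itself, which is
// what the contract test's verifyStatusArrayMatchesFile expects.
public class ListStatusFile {
    static File[] listStatus(File f) {
        if (f.isDirectory()) {
            return f.listFiles();
        }
        return new File[] { f };   // a file "lists" as itself
    }

    public static void main(String[] args) throws IOException {
        File f = Files.createTempFile("liststatusfile", null).toFile();
        f.deleteOnExit();
        System.out.println(listStatus(f).length);   // 1
    }
}
```

The javadoc quoted in the issue only describes the directory case, which is why the reporter asks whether the file case is part of the contract at all.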
[jira] [Commented] (HADOOP-12502) SetReplication OutOfMemoryError
[ https://issues.apache.org/jira/browse/HADOOP-12502?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16282957#comment-16282957 ] genericqa commented on HADOOP-12502: | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 10s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 22s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 12m 32s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 38s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 5s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 11m 37s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 27s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 55s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 43s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m 38s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 11m 38s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 39s{color} | {color:green} hadoop-common-project/hadoop-common: The patch generated 0 new + 169 unchanged - 1 fixed = 169 total (was 170) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 3s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 9m 52s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 34s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 52s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 8m 39s{color} | {color:green} hadoop-common in the patch passed. 
{color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 33s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 80m 11s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 | | JIRA Issue | HADOOP-12502 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12901180/HADOOP-12502-07.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux 8ff09a0ac0e9 3.13.0-135-generic #184-Ubuntu SMP Wed Oct 18 11:55:51 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | trunk / d6c31a3 | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_151 | | findbugs | v3.1.0-RC1 | | Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/13807/testReport/ | | Max. process+thread count | 1468 (vs. ulimit of 5000) | | modules | C: hadoop-common-project/hadoop-common U: hadoop-common-project/hadoop-common | | Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/13807/console | | Powered by | Apache Yetus 0.7.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > SetReplication OutOfMemoryError >
[jira] [Created] (HADOOP-15101) what testListStatusFile verified not consistent with listStatus declaration in FileSystem
zhoutai.zt created HADOOP-15101:
-----------------------------------

             Summary: what testListStatusFile verified not consistent with listStatus declaration in FileSystem
                 Key: HADOOP-15101
                 URL: https://issues.apache.org/jira/browse/HADOOP-15101
             Project: Hadoop Common
          Issue Type: Bug
          Components: fs, test
    Affects Versions: 3.0.0-beta1
            Reporter: zhoutai.zt
            Priority: Critical

  @Test
  public void testListStatusFile() throws Throwable {
    describe("test the listStatus(path) on a file");
    Path f = touchf("liststatusfile");
    verifyStatusArrayMatchesFile(f, getFileSystem().listStatus(f));
  }
In this case, first create a file _f_, then call listStatus on _f_; the expectation is that listStatus returns an array of one FileStatus. But this is not consistent with the declaration in FileSystem, i.e.
??List the statuses of the files/directories in the given path if the path is a directory.
Parameters: f given path
Returns: the statuses of the files/directories in the given patch??
Which is expected? The behavior in the fs contract test or in FileSystem?
[jira] [Updated] (HADOOP-12502) SetReplication OutOfMemoryError
[ https://issues.apache.org/jira/browse/HADOOP-12502?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Wei-Chiu Chuang updated HADOOP-12502:
-------------------------------------
    Attachment: HADOOP-12502-07.patch

[~vinayrpet] the last patch you posted is almost good. I posted a new patch by removing the redundant FileSystem#listStatusIterator() from the v06 patch.

> SetReplication OutOfMemoryError
> -------------------------------
>
> Key: HADOOP-12502
> URL: https://issues.apache.org/jira/browse/HADOOP-12502
> Project: Hadoop Common
> Issue Type: Bug
> Affects Versions: 2.3.0
> Reporter: Philipp Schuegerl
> Assignee: Vinayakumar B
> Attachments: HADOOP-12502-01.patch, HADOOP-12502-02.patch,
> HADOOP-12502-03.patch, HADOOP-12502-04.patch, HADOOP-12502-05.patch,
> HADOOP-12502-06.patch, HADOOP-12502-07.patch
>
>
> Setting the replication of a HDFS folder recursively can run out of memory.
> E.g. with a large /var/log directory:
> hdfs dfs -setrep -R -w 1 /var/log
> Exception in thread "main" java.lang.OutOfMemoryError: GC overhead limit exceeded
>     at java.util.Arrays.copyOfRange(Arrays.java:2694)
>     at java.lang.String.<init>(String.java:203)
>     at java.lang.String.substring(String.java:1913)
>     at java.net.URI$Parser.substring(URI.java:2850)
>     at java.net.URI$Parser.parse(URI.java:3046)
>     at java.net.URI.<init>(URI.java:753)
>     at org.apache.hadoop.fs.Path.initialize(Path.java:203)
>     at org.apache.hadoop.fs.Path.<init>(Path.java:116)
>     at org.apache.hadoop.fs.Path.<init>(Path.java:94)
>     at org.apache.hadoop.hdfs.protocol.HdfsFileStatus.getFullPath(HdfsFileStatus.java:222)
>     at org.apache.hadoop.hdfs.protocol.HdfsFileStatus.makeQualified(HdfsFileStatus.java:246)
>     at org.apache.hadoop.hdfs.DistributedFileSystem.listStatusInternal(DistributedFileSystem.java:689)
>     at org.apache.hadoop.hdfs.DistributedFileSystem.access$600(DistributedFileSystem.java:102)
>     at org.apache.hadoop.hdfs.DistributedFileSystem$14.doCall(DistributedFileSystem.java:712)
>     at org.apache.hadoop.hdfs.DistributedFileSystem$14.doCall(DistributedFileSystem.java:708)
>     at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
>     at org.apache.hadoop.hdfs.DistributedFileSystem.listStatus(DistributedFileSystem.java:708)
>     at org.apache.hadoop.fs.shell.PathData.getDirectoryContents(PathData.java:268)
>     at org.apache.hadoop.fs.shell.Command.recursePath(Command.java:347)
>     at org.apache.hadoop.fs.shell.Command.processPaths(Command.java:308)
>     at org.apache.hadoop.fs.shell.Command.recursePath(Command.java:347)
>     at org.apache.hadoop.fs.shell.Command.processPaths(Command.java:308)
>     at org.apache.hadoop.fs.shell.Command.recursePath(Command.java:347)
>     at org.apache.hadoop.fs.shell.Command.processPaths(Command.java:308)
>     at org.apache.hadoop.fs.shell.Command.recursePath(Command.java:347)
>     at org.apache.hadoop.fs.shell.Command.processPaths(Command.java:308)
>     at org.apache.hadoop.fs.shell.Command.recursePath(Command.java:347)
>     at org.apache.hadoop.fs.shell.Command.processPaths(Command.java:308)
>     at org.apache.hadoop.fs.shell.Command.processPathArgument(Command.java:278)
>     at org.apache.hadoop.fs.shell.Command.processArgument(Command.java:260)
>     at org.apache.hadoop.fs.shell.Command.processArguments(Command.java:244)
>     at org.apache.hadoop.fs.shell.SetReplication.processArguments(SetReplication.java:76)
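The OOM in the trace above comes from recursing through recursePath/processPaths while every intermediate FileStatus array stays reachable. The general idea behind the iterator-based patches (Hadoop's actual fix uses FileSystem#listStatusIterator in the shell) can be sketched with an explicit work stack over java.io.File; this is a self-contained stand-in, not the patched Hadoop code:

```java
import java.io.File;
import java.util.ArrayDeque;
import java.util.Deque;

// Walk a directory tree with an explicit stack of pending entries instead of
// Java recursion, so no chain of sibling arrays is pinned on the call stack.
public class IterativeSetrep {
    static int process(File root) {
        int visited = 0;
        Deque<File> pending = new ArrayDeque<>();
        pending.push(root);
        while (!pending.isEmpty()) {
            File f = pending.pop();
            visited++;                        // the real command would set replication here
            File[] children = f.listFiles();  // null for plain files
            if (children != null) {
                for (File c : children) {
                    pending.push(c);
                }
            }
        }
        return visited;
    }

    public static void main(String[] args) {
        // Count entries under the current directory; at least the root itself.
        System.out.println(process(new File(".")) >= 1);   // true
    }
}
```

Each sibling array becomes garbage as soon as its children are pushed, which is what keeps memory bounded on very wide or deep trees.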
[jira] [Updated] (HADOOP-15059) 3.0 deployment cannot work with old version MR tar ball which breaks rolling upgrade
[ https://issues.apache.org/jira/browse/HADOOP-15059?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Ray Chiang updated HADOOP-15059:
--------------------------------
    Release Note:
This change reverses the default delegation token format implemented by HADOOP-12563, but preserves the capability to read the new delegation token format. When the new format becomes the default, MR deployment job runs will be compatible with releases that contain this change.

> 3.0 deployment cannot work with old version MR tar ball which breaks rolling
> upgrade
> ----------------------------------------------------------------------------
>
> Key: HADOOP-15059
> URL: https://issues.apache.org/jira/browse/HADOOP-15059
> Project: Hadoop Common
> Issue Type: Bug
> Components: security
> Reporter: Junping Du
> Assignee: Jason Lowe
> Priority: Blocker
> Attachments: HADOOP-15059.001.patch, HADOOP-15059.002.patch,
> HADOOP-15059.003.patch, HADOOP-15059.004.patch, HADOOP-15059.005.patch,
> HADOOP-15059.006.patch
>
>
> I tried to deploy a 3.0 cluster with the 2.9 MR tar ball. The MR job failed
> because of the following error:
> {noformat}
> 2017-11-21 12:42:50,911 INFO [main] org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Created MRAppMaster for application appattempt_1511295641738_0003_01
> 2017-11-21 12:42:51,070 WARN [main] org.apache.hadoop.util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
> 2017-11-21 12:42:51,118 FATAL [main] org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Error starting MRAppMaster
> java.lang.RuntimeException: Unable to determine current user
>     at org.apache.hadoop.conf.Configuration$Resource.getRestrictParserDefault(Configuration.java:254)
>     at org.apache.hadoop.conf.Configuration$Resource.<init>(Configuration.java:220)
>     at org.apache.hadoop.conf.Configuration$Resource.<init>(Configuration.java:212)
>     at org.apache.hadoop.conf.Configuration.addResource(Configuration.java:888)
>     at org.apache.hadoop.mapreduce.v2.app.MRAppMaster.main(MRAppMaster.java:1638)
> Caused by: java.io.IOException: Exception reading /tmp/nm-local-dir/usercache/jdu/appcache/application_1511295641738_0003/container_e03_1511295641738_0003_01_01/container_tokens
>     at org.apache.hadoop.security.Credentials.readTokenStorageFile(Credentials.java:208)
>     at org.apache.hadoop.security.UserGroupInformation.loginUserFromSubject(UserGroupInformation.java:907)
>     at org.apache.hadoop.security.UserGroupInformation.getLoginUser(UserGroupInformation.java:820)
>     at org.apache.hadoop.security.UserGroupInformation.getCurrentUser(UserGroupInformation.java:689)
>     at org.apache.hadoop.conf.Configuration$Resource.getRestrictParserDefault(Configuration.java:252)
>     ... 4 more
> Caused by: java.io.IOException: Unknown version 1 in token storage.
>     at org.apache.hadoop.security.Credentials.readTokenStorageStream(Credentials.java:226)
>     at org.apache.hadoop.security.Credentials.readTokenStorageFile(Credentials.java:205)
>     ... 8 more
> 2017-11-21 12:42:51,122 INFO [main] org.apache.hadoop.util.ExitUtil: Exiting with status 1: java.lang.RuntimeException: Unable to determine current user
> {noformat}
> I think it is due to a token incompatibility change between 2.9 and 3.0. As
> we claim "rolling upgrade" is supported in Hadoop 3, we should fix this
> before we ship 3.0; otherwise all running MR applications will get stuck
> during/after upgrade.
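The failure mode, an older reader rejecting a newer version byte at the front of the token file, can be reproduced in miniature. The field layout below is invented for illustration and is not Hadoop's actual token storage format; only the shape of the check (read version first, throw on an unknown one) mirrors the "Unknown version 1 in token storage" error above:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;

// Toy versioned container: one version byte followed by a UTF string payload.
public class TokenFormat {
    static byte[] write(int version, String token) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        DataOutputStream out = new DataOutputStream(bos);
        out.writeByte(version);   // format version comes first
        out.writeUTF(token);
        return bos.toByteArray();
    }

    // "Old" reader: only understands version 0, so version-1 files are rejected.
    static String readOld(byte[] data) throws IOException {
        DataInputStream in = new DataInputStream(new ByteArrayInputStream(data));
        int version = in.readByte();
        if (version != 0) {
            throw new IOException("Unknown version " + version + " in token storage.");
        }
        return in.readUTF();
    }

    public static void main(String[] args) throws IOException {
        System.out.println(readOld(write(0, "t")));   // t
        try {
            readOld(write(1, "t"));
        } catch (IOException e) {
            System.out.println(e.getMessage());       // Unknown version 1 in token storage.
        }
    }
}
```

Writing the old format by default while still accepting the new one on read, as the release note describes, lets mixed-version clusters exchange token files during a rolling upgrade.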
[jira] [Commented] (HADOOP-14959) DelegationTokenAuthenticator.authenticate() to wrap network exceptions
[ https://issues.apache.org/jira/browse/HADOOP-14959?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16282798#comment-16282798 ] genericqa commented on HADOOP-14959:

| (x) *{color:red}-1 overall{color}* | \\ \\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 19s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 6s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 13m 39s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 9s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 11m 42s{color} | {color:green} branch has no errors when building and testing our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 54s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 13m 18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 13m 18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 9m 50s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 7s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 8m 43s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 34s{color} | {color:green} The patch does not generate ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 84m 29s{color} | {color:black} {color} | \\ \\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | HADOOP-14959 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12901148/HADOOP-14959.002.patch |
| Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle |
| uname | Linux 128c5340dd64 3.13.0-129-generic #178-Ubuntu SMP Fri Aug 11 12:48:20 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / d6c31a3 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
| Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/13805/testReport/ |
| Max. process+thread count | 1417 (vs. ulimit of 5000) |
| modules | C: hadoop-common-project/hadoop-common U: hadoop-common-project/hadoop-common |
| Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/13805/console |
| Powered by | Apache Yetus 0.7.0-SNAPSHOT http://yetus.apache.org |

This message was automatically generated.
[jira] [Commented] (HADOOP-13974) S3a CLI to support list/purge of pending multipart commits
[ https://issues.apache.org/jira/browse/HADOOP-13974?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16282777#comment-16282777 ] genericqa commented on HADOOP-13974:

| (x) *{color:red}-1 overall{color}* | \\ \\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 9m 55s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 3 new or modified test files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 10m 52s{color} | {color:green} branch has no errors when building and testing our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 20s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 26s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 14s{color} | {color:orange} hadoop-tools/hadoop-aws: The patch generated 4 new + 11 unchanged - 0 fixed = 15 total (was 11) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 11m 22s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 1m 33s{color} | {color:red} hadoop-tools/hadoop-aws generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 19s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 4m 35s{color} | {color:green} hadoop-aws in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 20s{color} | {color:green} The patch does not generate ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 59m 41s{color} | {color:black} {color} | \\ \\
|| Reason || Tests ||
| FindBugs | module:hadoop-tools/hadoop-aws |
| | Found reliance on default encoding in org.apache.hadoop.fs.s3a.s3guard.S3GuardTool$Uploads.promptBeforeAbort(PrintStream):in org.apache.hadoop.fs.s3a.s3guard.S3GuardTool$Uploads.promptBeforeAbort(PrintStream): new java.util.Scanner(InputStream) At S3GuardTool.java:[line 1193] | \\ \\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | HADOOP-13974 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12901151/HADOOP-13974.004.patch |
| Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle |
| uname | Linux f75cd3d56078 3.13.0-133-generic #182-Ubuntu SMP Tue Sep 19 15:49:21 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / d6c31a3 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
| checkstyle |
[jira] [Updated] (HADOOP-14959) DelegationTokenAuthenticator.authenticate() to wrap network exceptions
[ https://issues.apache.org/jira/browse/HADOOP-14959?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ajay Kumar updated HADOOP-14959: Attachment: HADOOP-14959.002.patch [~ste...@apache.org] thanks for review. Addressed checkstyle issue in new patch. > DelegationTokenAuthenticator.authenticate() to wrap network exceptions > -- > > Key: HADOOP-14959 > URL: https://issues.apache.org/jira/browse/HADOOP-14959 > Project: Hadoop Common > Issue Type: Improvement > Components: net, security >Affects Versions: 2.8.1 >Reporter: Steve Loughran >Assignee: Ajay Kumar >Priority: Minor > Attachments: HADOOP-14959.001.patch, HADOOP-14959.002.patch > > > network errors raised in {{DelegationTokenAuthenticator.authenticate()}} > aren't being wrapped, so only return the usual limited-value java.net error > text. using {{NetUtils.wrapException()}} can address that -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
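[Editor's note] The fix above wraps terse java.net errors so they carry the destination host and port. A minimal self-contained sketch of that pattern (the {{wrap}} and {{authenticate}} names are illustrative stand-ins, not Hadoop's actual {{NetUtils.wrapException}} code, which also tries to preserve the original exception class):

```java
import java.io.IOException;
import java.net.ConnectException;

public class WrapDemo {
    // Hypothetical stand-in for NetUtils.wrapException: attach the destination
    // to an otherwise terse java.net error, keeping the original as the cause.
    static IOException wrap(String host, int port, IOException e) {
        IOException wrapped = new ConnectException(
            "Call to " + host + ":" + port + " failed: " + e.getMessage());
        wrapped.initCause(e);
        return wrapped;
    }

    // Simulated authenticate(): a raw ConnectException says only
    // "Connection refused"; the wrapped one says where the call was going.
    static void authenticate(String host, int port) throws IOException {
        try {
            throw new ConnectException("Connection refused");
        } catch (IOException e) {
            throw wrap(host, port, e);
        }
    }
}
```

The host/port values below are made up for illustration.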
[jira] [Updated] (HADOOP-15080) Aliyun OSS: update oss sdk from 2.8.1 to 2.8.3 to remove its dependency on Cat-x "json-lib"
[ https://issues.apache.org/jira/browse/HADOOP-15080?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sean Mackrory updated HADOOP-15080: --- Fix Version/s: 2.9.1 2.10.0 Yes, you're right. Cherry-picked to branch-2.9 and branch-2. I checked for any other branches containing references to Aliyun to confirm that's it. > Aliyun OSS: update oss sdk from 2.8.1 to 2.8.3 to remove its dependency on > Cat-x "json-lib" > --- > > Key: HADOOP-15080 > URL: https://issues.apache.org/jira/browse/HADOOP-15080 > Project: Hadoop Common > Issue Type: Bug > Components: fs/oss >Affects Versions: 3.0.0-beta1 >Reporter: Chris Douglas >Assignee: SammiChen >Priority: Blocker > Fix For: 3.0.0, 3.1.0, 2.10.0, 2.9.1 > > Attachments: HADOOP-15080-branch-3.0.0.001.patch, > HADOOP-15080-branch-3.0.0.002.patch > > > Cat-X dependency on org.json via derived json-lib. OSS SDK has a dependency > on json-lib. In LEGAL-245, the org.json library (from which json-lib may be > derived) is released under a > [category-x|https://www.apache.org/legal/resolved.html#json] license. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Resolved] (HADOOP-15100) Configuration#Resource constructor change broke Hive tests
[ https://issues.apache.org/jira/browse/HADOOP-15100?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xiao Chen resolved HADOOP-15100. Resolution: Won't Fix After some discussion with Aihua, he will fix the hive tests to always start the minihivekdc before initializing {{HiveConf}}. > Configuration#Resource constructor change broke Hive tests > -- > > Key: HADOOP-15100 > URL: https://issues.apache.org/jira/browse/HADOOP-15100 > Project: Hadoop Common > Issue Type: Bug >Affects Versions: 2.8.3, 2.7.5, 3.0.0, 2.9.1 >Reporter: Xiao Chen >Priority: Critical > > In CDH's C6 rebased testing, the following Hive tests started failing: > {noformat} > org.apache.hive.minikdc.TestJdbcWithMiniKdcCookie.org.apache.hive.minikdc.TestJdbcWithMiniKdcCookie > org.apache.hive.minikdc.TestJdbcWithMiniKdcCookie.org.apache.hive.minikdc.TestJdbcWithMiniKdcCookie > org.apache.hive.minikdc.TestHiveAuthFactory.org.apache.hive.minikdc.TestHiveAuthFactory > org.apache.hive.minikdc.TestJdbcWithMiniKdcSQLAuthHttp.org.apache.hive.minikdc.TestJdbcWithMiniKdcSQLAuthHttp > org.apache.hive.minikdc.TestJdbcWithMiniKdcSQLAuthHttp.org.apache.hive.minikdc.TestJdbcWithMiniKdcSQLAuthHttp > org.apache.hive.minikdc.TestJdbcWithDBTokenStoreNoDoAs.org.apache.hive.minikdc.TestJdbcWithDBTokenStoreNoDoAs > org.apache.hive.minikdc.TestJdbcWithDBTokenStoreNoDoAs.org.apache.hive.minikdc.TestJdbcWithDBTokenStoreNoDoAs > org.apache.hive.minikdc.TestJdbcWithMiniKdc.org.apache.hive.minikdc.TestJdbcWithMiniKdc > org.apache.hive.minikdc.TestJdbcWithMiniKdc.org.apache.hive.minikdc.TestJdbcWithMiniKdc > org.apache.hive.minikdc.TestHs2HooksWithMiniKdc.org.apache.hive.minikdc.TestHs2HooksWithMiniKdc > org.apache.hive.minikdc.TestHs2HooksWithMiniKdc.org.apache.hive.minikdc.TestHs2HooksWithMiniKdc > org.apache.hive.minikdc.TestJdbcNonKrbSASLWithMiniKdc.org.apache.hive.minikdc.TestJdbcNonKrbSASLWithMiniKdc > 
org.apache.hive.minikdc.TestJdbcNonKrbSASLWithMiniKdc.org.apache.hive.minikdc.TestJdbcNonKrbSASLWithMiniKdc > org.apache.hive.minikdc.TestJdbcWithMiniKdcSQLAuthBinary.org.apache.hive.minikdc.TestJdbcWithMiniKdcSQLAuthBinary > org.apache.hive.minikdc.TestJdbcWithMiniKdcSQLAuthBinary.org.apache.hive.minikdc.TestJdbcWithMiniKdcSQLAuthBinary > org.apache.hive.minikdc.TestMiniHiveKdc.testLogin > org.apache.hive.minikdc.TestMiniHiveKdc.testLogin > org.apache.hive.minikdc.TestJdbcWithDBTokenStore.org.apache.hive.minikdc.TestJdbcWithDBTokenStore > org.apache.hive.minikdc.TestJdbcWithDBTokenStore.org.apache.hive.minikdc.TestJdbcWithDBTokenStore > org.apache.hadoop.hive.ql.TestMetaStoreLimitPartitionRequest.testQueryWithInWithFallbackToORM > org.apache.hive.jdbc.TestJdbcWithMiniHS2.testSelectThriftSerializeInTasks > org.apache.hive.jdbc.TestJdbcWithMiniHS2.testEmptyResultsetThriftSerializeInTasks > org.apache.hive.jdbc.TestJdbcWithMiniHS2.testParallelCompilation2 > org.apache.hive.jdbc.TestJdbcWithMiniHS2.testJoinThriftSerializeInTasks > org.apache.hive.jdbc.TestJdbcWithMiniHS2.testParallelCompilation > org.apache.hive.jdbc.TestJdbcWithMiniHS2.testConcurrentStatements > org.apache.hive.jdbc.TestJdbcWithMiniHS2.testFloatCast2DoubleThriftSerializeInTasks > org.apache.hive.jdbc.TestJdbcWithMiniHS2.testEnableThriftSerializeInTasks > org.apache.hive.service.cli.TestEmbeddedThriftBinaryCLIService.testExecuteStatementParallel > {noformat} > The exception is > {noformat} > java.lang.ExceptionInInitializerError: null > at sun.security.krb5.Config.getRealmFromDNS(Config.java:1102) > at sun.security.krb5.Config.getDefaultRealm(Config.java:987) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:483) > at > 
org.apache.hadoop.security.authentication.util.KerberosUtil.getDefaultRealm(KerberosUtil.java:110) > at > org.apache.hadoop.security.HadoopKerberosName.setConfiguration(HadoopKerberosName.java:63) > at > org.apache.hadoop.security.UserGroupInformation.initialize(UserGroupInformation.java:332) > at > org.apache.hadoop.security.UserGroupInformation.ensureInitialized(UserGroupInformation.java:317) > at > org.apache.hadoop.security.UserGroupInformation.loginUserFromSubject(UserGroupInformation.java:907) > at > org.apache.hadoop.security.UserGroupInformation.getLoginUser(UserGroupInformation.java:873) > at > org.apache.hadoop.security.UserGroupInformation.getCurrentUser(UserGroupInformation.java:740) > at > org.apache.hadoop.conf.Configuration$Resource.getRestrictParserDefault(Configuration.java:261) > at >
[jira] [Commented] (HADOOP-15080) Aliyun OSS: update oss sdk from 2.8.1 to 2.8.3 to remove its dependency on Cat-x "json-lib"
[ https://issues.apache.org/jira/browse/HADOOP-15080?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16282678#comment-16282678 ] Chris Douglas commented on HADOOP-15080: Thanks for taking care of this, [~Sammi]. And thanks [~mackrorysd] for following up here and on LEGAL-349 > Aliyun OSS: update oss sdk from 2.8.1 to 2.8.3 to remove its dependency on > Cat-x "json-lib" > --- > > Key: HADOOP-15080 > URL: https://issues.apache.org/jira/browse/HADOOP-15080 > Project: Hadoop Common > Issue Type: Bug > Components: fs/oss >Affects Versions: 3.0.0-beta1 >Reporter: Chris Douglas >Assignee: SammiChen >Priority: Blocker > Fix For: 3.0.0, 3.1.0, 2.10.0, 2.9.1 > > Attachments: HADOOP-15080-branch-3.0.0.001.patch, > HADOOP-15080-branch-3.0.0.002.patch > > > Cat-X dependency on org.json via derived json-lib. OSS SDK has a dependency > on json-lib. In LEGAL-245, the org.json library (from which json-lib may be > derived) is released under a > [category-x|https://www.apache.org/legal/resolved.html#json] license. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-13974) S3a CLI to support list/purge of pending multipart commits
[ https://issues.apache.org/jira/browse/HADOOP-13974?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aaron Fabbri updated HADOOP-13974: -- Attachment: HADOOP-13974.004.patch Attaching v4 patch. - Reworks code after related changes were merged from HADOOP-13786. - Fixes typos in docs mentioned by [~ste...@apache.org]. There is one "XXX" comment that needs to be removed; I left it in for discussion (I suggest making WriteOperationHelper methods static instead of instantiating objects just to, essentially, hold on to a couple of parameters). Another thing I'd like comments on is: Should we just remove listMultipartUploads() and use the new iterator-based listing introduced here? That would take some reworking of committer code, so I've left both versions in for now. > S3a CLI to support list/purge of pending multipart commits > -- > > Key: HADOOP-13974 > URL: https://issues.apache.org/jira/browse/HADOOP-13974 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Affects Versions: 3.0.0-beta1 >Reporter: Steve Loughran >Assignee: Aaron Fabbri > Attachments: HADOOP-13974.001.patch, HADOOP-13974.002.patch, > HADOOP-13974.003.patch, HADOOP-13974.004.patch > > > The S3A CLI will need to be able to list and delete pending multipart > commits. > We can do the cleanup already via fs.s3a properties. The CLI will let scripts > stat for outstanding data (have a different exit code) and permit batch jobs > to explicitly trigger cleanups. > This will become critical with the multipart committer, as there's a > significantly higher likelihood of commits remaining outstanding. > We may also want to be able to enumerate/cancel all pending commits in the FS > tree -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Comment Edited] (HADOOP-13974) S3a CLI to support list/purge of pending multipart commits
[ https://issues.apache.org/jira/browse/HADOOP-13974?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16282675#comment-16282675 ] Aaron Fabbri edited comment on HADOOP-13974 at 12/7/17 11:06 PM: - Attaching v4 patch. - Reworks code after related changes were merged from HADOOP-13786. - Fixes typos in docs mentioned by [~ste...@apache.org]. There is one "XXX" comment that needs to be removed; I left it in for discussion (I suggest making WriteOperationHelper methods static instead of instantiating objects just to, essentially, hold on to a couple of parameters). Another thing I'd like comments on is: Should we just remove listMultipartUploads() and use the new iterator-based listing introduced here? That would take some reworking of committer code, so I've left both versions in for now. In general, I want suggestions on any code deduplication I may have missed here. I tried to reuse similar code from the S3 Committer stuff but there may be some things I missed. was (Author: fabbri): Attaching v4 patch. - Reworks code after related changes were merged from HADOOP-13786. - Fixes typos in docs mentioned by [~ste...@apache.org]. There is one "XXX" comment that needs to be removed; I left it in for discussion (I suggest making WriteOperationHelper methods static instead of instantiating objects just to, essentially, hold on to a couple of parameters). Another thing I'd like comments on is: Should we just remove listMultipartUploads() and use the new iterator-based listing introduced here? That would take some reworking of committer code, so I've left both versions in for now. 
> S3a CLI to support list/purge of pending multipart commits > -- > > Key: HADOOP-13974 > URL: https://issues.apache.org/jira/browse/HADOOP-13974 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Affects Versions: 3.0.0-beta1 >Reporter: Steve Loughran >Assignee: Aaron Fabbri > Attachments: HADOOP-13974.001.patch, HADOOP-13974.002.patch, > HADOOP-13974.003.patch, HADOOP-13974.004.patch > > > The S3A CLI will need to be able to list and delete pending multipart > commits. > We can do the cleanup already via fs.s3a properties. The CLI will let scripts > stat for outstanding data (have a different exit code) and permit batch jobs > to explicitly trigger cleanups. > This will become critical with the multipart committer, as there's a > significantly higher likelihood of commits remaining outstanding. > We may also want to be able to enumerate/cancel all pending commits in the FS > tree -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-15085) Output streams closed with IOUtils suppressing write errors
[ https://issues.apache.org/jira/browse/HADOOP-15085?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16282498#comment-16282498 ] Jim Brennan commented on HADOOP-15085: -- The one remaining checkstyle issue is about an empty block in a try-with-resources block where all the work is done in the resource section. I think this is ready for review. > Output streams closed with IOUtils suppressing write errors > --- > > Key: HADOOP-15085 > URL: https://issues.apache.org/jira/browse/HADOOP-15085 > Project: Hadoop Common > Issue Type: Bug >Reporter: Jason Lowe >Assignee: Jim Brennan > Attachments: HADOOP-15085.001.patch, HADOOP-15085.002.patch > > > There are a few places in hadoop-common that are closing an output stream > with IOUtils.cleanupWithLogger like this: > {code} > try { > ...write to outStream... > } finally { > IOUtils.cleanupWithLogger(LOG, outStream); > } > {code} > This suppresses any IOException that occurs during the close() method which > could lead to partial/corrupted output without throwing a corresponding > exception. The code should either use try-with-resources or explicitly close > the stream within the try block so the exception thrown during close() is > properly propagated as exceptions during write operations are. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
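[Editor's note] The failure mode described above can be reproduced without Hadoop at all. A minimal sketch, assuming a hypothetical buffered stream whose close() flushes to a full disk ({{FailingStream}} and the method names are illustrative, not Hadoop code):

```java
import java.io.IOException;
import java.io.OutputStream;

public class SuppressedCloseDemo {
    // A stream whose close() fails, e.g. flushing buffered bytes to a full disk.
    static class FailingStream extends OutputStream {
        @Override public void write(int b) { /* buffered; no error yet */ }
        @Override public void close() throws IOException {
            throw new IOException("close failed: buffered data lost");
        }
    }

    // Anti-pattern: close() error is swallowed (as IOUtils.cleanupWithLogger
    // does), so the caller believes the write succeeded.
    static boolean writeQuietly() {
        FailingStream out = new FailingStream();
        try {
            out.write(42);
            return true;
        } finally {
            try { out.close(); } catch (IOException ignored) { /* logged only */ }
        }
    }

    // Fix: try-with-resources propagates the close() failure to the caller.
    static void writeSafely() throws IOException {
        try (FailingStream out = new FailingStream()) {
            out.write(42);
        }
    }
}
```

With writeQuietly() the caller sees success despite the lost data; with writeSafely() the IOException from close() reaches the caller, which is the behavior the patch restores.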
[jira] [Commented] (HADOOP-15059) 3.0 deployment cannot work with old version MR tar ball which breaks rolling upgrade
[ https://issues.apache.org/jira/browse/HADOOP-15059?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16282500#comment-16282500 ] Ray Chiang commented on HADOOP-15059: - Never mind. I found the cut-and-paste error in the config file name. Confirmed that I can duplicate the error that [~djp] saw on a cluster without patch 005 and the error is no longer there on the 005 patched cluster. +1 (binding) from me. > 3.0 deployment cannot work with old version MR tar ball which breaks rolling > upgrade > > > Key: HADOOP-15059 > URL: https://issues.apache.org/jira/browse/HADOOP-15059 > Project: Hadoop Common > Issue Type: Bug > Components: security >Reporter: Junping Du >Assignee: Jason Lowe >Priority: Blocker > Attachments: HADOOP-15059.001.patch, HADOOP-15059.002.patch, > HADOOP-15059.003.patch, HADOOP-15059.004.patch, HADOOP-15059.005.patch, > HADOOP-15059.006.patch > > > I tried to deploy 3.0 cluster with 2.9 MR tar ball. The MR job is failed > because following error: > {noformat} > 2017-11-21 12:42:50,911 INFO [main] > org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Created MRAppMaster for > application appattempt_1511295641738_0003_01 > 2017-11-21 12:42:51,070 WARN [main] org.apache.hadoop.util.NativeCodeLoader: > Unable to load native-hadoop library for your platform... 
using builtin-java > classes where applicable > 2017-11-21 12:42:51,118 FATAL [main] > org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Error starting MRAppMaster > java.lang.RuntimeException: Unable to determine current user > at > org.apache.hadoop.conf.Configuration$Resource.getRestrictParserDefault(Configuration.java:254) > at > org.apache.hadoop.conf.Configuration$Resource.(Configuration.java:220) > at > org.apache.hadoop.conf.Configuration$Resource.(Configuration.java:212) > at > org.apache.hadoop.conf.Configuration.addResource(Configuration.java:888) > at > org.apache.hadoop.mapreduce.v2.app.MRAppMaster.main(MRAppMaster.java:1638) > Caused by: java.io.IOException: Exception reading > /tmp/nm-local-dir/usercache/jdu/appcache/application_1511295641738_0003/container_e03_1511295641738_0003_01_01/container_tokens > at > org.apache.hadoop.security.Credentials.readTokenStorageFile(Credentials.java:208) > at > org.apache.hadoop.security.UserGroupInformation.loginUserFromSubject(UserGroupInformation.java:907) > at > org.apache.hadoop.security.UserGroupInformation.getLoginUser(UserGroupInformation.java:820) > at > org.apache.hadoop.security.UserGroupInformation.getCurrentUser(UserGroupInformation.java:689) > at > org.apache.hadoop.conf.Configuration$Resource.getRestrictParserDefault(Configuration.java:252) > ... 4 more > Caused by: java.io.IOException: Unknown version 1 in token storage. > at > org.apache.hadoop.security.Credentials.readTokenStorageStream(Credentials.java:226) > at > org.apache.hadoop.security.Credentials.readTokenStorageFile(Credentials.java:205) > ... 8 more > 2017-11-21 12:42:51,122 INFO [main] org.apache.hadoop.util.ExitUtil: Exiting > with status 1: java.lang.RuntimeException: Unable to determine current user > {noformat} > I think it is due to token incompatiblity change between 2.9 and 3.0. 
As we > claim "rolling upgrade" is supported in Hadoop 3, we should fix this before > we ship 3.0 otherwise all MR running applications will get stuck during/after > upgrade. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
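[Editor's note] The "Unknown version 1 in token storage" failure above is a plain format-version mismatch: the newer side writes a version byte the older reader rejects. A toy sketch of that reader-side check (the constant and method are illustrative; the real logic lives in org.apache.hadoop.security.Credentials):

```java
import java.io.IOException;

public class TokenVersionDemo {
    // Hypothetical: the highest token-storage format version the old reader knows.
    static final byte READER_MAX_VERSION = 0;

    // Illustrative reader-side check producing the error seen in the log above:
    // any newer version byte is rejected outright, breaking rolling upgrade.
    static void checkVersion(byte versionInFile) throws IOException {
        if (versionInFile > READER_MAX_VERSION) {
            throw new IOException(
                "Unknown version " + versionInFile + " in token storage.");
        }
    }
}
```

The compatible fix is for the new writer to keep emitting the old version (or for readers to tolerate newer ones) until both sides are upgraded.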
[jira] [Commented] (HADOOP-15056) Fix TestUnbuffer#testUnbufferException failure
[ https://issues.apache.org/jira/browse/HADOOP-15056?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16282537#comment-16282537 ] Jack Bearden commented on HADOOP-15056: --- Thanks again [~jzhuge] and [~xiaochen] for all the help > Fix TestUnbuffer#testUnbufferException failure > -- > > Key: HADOOP-15056 > URL: https://issues.apache.org/jira/browse/HADOOP-15056 > Project: Hadoop Common > Issue Type: Improvement >Affects Versions: 2.9.0 >Reporter: Jack Bearden >Assignee: Jack Bearden >Priority: Minor > Attachments: HADOOP-15056.001.patch, HADOOP-15056.002.patch, > HADOOP-15056.003.patch, HADOOP-15056.004.patch, HADOOP-15056.005.patch, > HADOOP-15056.006.patch, HADOOP-15056.007.patch > > > Hello! I am a new contributor and actually contributing to open source for > the very first time. :) > I pulled down Hadoop today and when running the tests I encountered a failure > with the TestUnbuffer#testUnbufferException test. > The unbuffer code has recently gone through some changes and I believe this > test case may have been overlooked. Using today's git commit > (659e85e304d070f9908a96cf6a0e1cbafde6a434), and upon running the test case, > there is an expect mock for an exception UnsupportedOperationException that > is no longer being thrown. > It would appear that a test like this would be valuable so my initial > proposed patch did not remove it. Instead, I removed the conditions that were > guarding the cast from being able to fire -- as was the previous behavior. > Now when we encounter an object that doesn't have the UNBUFFERED > StreamCapability, it will throw an error as it did prior to the recent > changes. > Please review and let me know what you think! :D -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
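[Editor's note] The behavior under test reduces to a capability check before delegating. A self-contained sketch of the dispatch pattern (the interface name mirrors Hadoop's CanUnbuffer, but this is an illustration, not the actual FSDataInputStream code):

```java
public class UnbufferDemo {
    interface CanUnbuffer { void unbuffer(); }

    // A stream that can release its buffers on demand.
    static class BufferedStream implements CanUnbuffer {
        boolean unbuffered = false;
        @Override public void unbuffer() { unbuffered = true; }
    }

    // Delegate when the wrapped stream supports it; otherwise fail loudly so
    // callers notice they are holding buffers they cannot release.
    static void unbuffer(Object in) {
        if (in instanceof CanUnbuffer) {
            ((CanUnbuffer) in).unbuffer();
        } else {
            throw new UnsupportedOperationException(
                "this stream does not support unbuffering.");
        }
    }
}
```

The test fix discussed above re-establishes the else branch: a stream without the capability triggers the UnsupportedOperationException the test expects.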
[jira] [Commented] (HADOOP-9747) Reduce unnecessary UGI synchronization
[ https://issues.apache.org/jira/browse/HADOOP-9747?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16282532#comment-16282532 ] Daryn Sharp commented on HADOOP-9747: - Working on this today. A few quick points: bq. System.setProperty(KRB5CCNAME) is not being set, previously this is being set in the case of IBM_JAVA Intentional. If a specific ticket cache is defined, it must be used. It's wrong to set a property for one of the locations to look in and then specify the default cache, which means it might find a ticket cache _somewhere other than specifically defined_. Not to mention a system property has the same thread-safety issues as the statics I removed. bq. getLoginUser is no longer Synchronized. That's definitely the intent. I think it's fine, will re-verify correctness. bq. Can we get away by saying that it’s user’s responsibility to renew external subjects? That external subject behavior is/was completely broken and just an attempt to work around a subject containing a keytab that wasn't in sync with the (now removed) class static. I think I'm going to no-op relogin from keytab anyway since Java caches and re-reads keytab contents as necessary. We've dropped new keytabs with updated kvnos and Java picked them up, but I'll re-verify. bq. In unprotectedLoginUserFromSubject we should change the local variable name instead of overloading loginUser, only for better readability. Sure. Was only trying to minimize patch size. 
> Reduce unnecessary UGI synchronization > -- > > Key: HADOOP-9747 > URL: https://issues.apache.org/jira/browse/HADOOP-9747 > Project: Hadoop Common > Issue Type: Bug > Components: security >Affects Versions: 0.23.0, 2.0.0-alpha, 3.0.0-alpha1 >Reporter: Daryn Sharp >Assignee: Daryn Sharp >Priority: Critical > Attachments: HADOOP-9747-trunk.01.patch, > HADOOP-9747.2.branch-2.patch, HADOOP-9747.2.trunk.patch, > HADOOP-9747.branch-2.patch, HADOOP-9747.trunk.patch > > > Jstacks of heavily loaded NNs show up to dozens of threads blocking in the > UGI. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-15080) Aliyun OSS: update oss sdk from 2.8.1 to 2.8.3 to remove its dependency on Cat-x "json-lib"
[ https://issues.apache.org/jira/browse/HADOOP-15080?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16282475#comment-16282475 ] Sean Mackrory commented on HADOOP-15080: I just checked and Sammi already backported it. I expected to see Hudson messages for the branch-3* lines as well as trunk. I'll mark this as resolved... > Aliyun OSS: update oss sdk from 2.8.1 to 2.8.3 to remove its dependency on > Cat-x "json-lib" > --- > > Key: HADOOP-15080 > URL: https://issues.apache.org/jira/browse/HADOOP-15080 > Project: Hadoop Common > Issue Type: Bug > Components: fs/oss >Affects Versions: 3.0.0-beta1 >Reporter: Chris Douglas >Priority: Blocker > Fix For: 3.0.0, 3.1.0, 3.0.1 > > Attachments: HADOOP-15080-branch-3.0.0.001.patch, > HADOOP-15080-branch-3.0.0.002.patch > > > Cat-X dependency on org.json via derived json-lib. OSS SDK has a dependency > on json-lib. In LEGAL-245, the org.json library (from which json-lib may be > derived) is released under a > [category-x|https://www.apache.org/legal/resolved.html#json] license. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Created] (HADOOP-15100) Configuration#Resource constructor change broke Hive tests
Xiao Chen created HADOOP-15100: -- Summary: Configuration#Resource constructor change broke Hive tests Key: HADOOP-15100 URL: https://issues.apache.org/jira/browse/HADOOP-15100 Project: Hadoop Common Issue Type: Bug Affects Versions: 2.8.3, 2.7.5, 3.0.0, 2.9.1 Reporter: Xiao Chen Priority: Critical In CDH's C6 rebased testing, the following Hive tests started failing: {noformat} org.apache.hive.minikdc.TestJdbcWithMiniKdcCookie.org.apache.hive.minikdc.TestJdbcWithMiniKdcCookie org.apache.hive.minikdc.TestJdbcWithMiniKdcCookie.org.apache.hive.minikdc.TestJdbcWithMiniKdcCookie org.apache.hive.minikdc.TestHiveAuthFactory.org.apache.hive.minikdc.TestHiveAuthFactory org.apache.hive.minikdc.TestJdbcWithMiniKdcSQLAuthHttp.org.apache.hive.minikdc.TestJdbcWithMiniKdcSQLAuthHttp org.apache.hive.minikdc.TestJdbcWithMiniKdcSQLAuthHttp.org.apache.hive.minikdc.TestJdbcWithMiniKdcSQLAuthHttp org.apache.hive.minikdc.TestJdbcWithDBTokenStoreNoDoAs.org.apache.hive.minikdc.TestJdbcWithDBTokenStoreNoDoAs org.apache.hive.minikdc.TestJdbcWithDBTokenStoreNoDoAs.org.apache.hive.minikdc.TestJdbcWithDBTokenStoreNoDoAs org.apache.hive.minikdc.TestJdbcWithMiniKdc.org.apache.hive.minikdc.TestJdbcWithMiniKdc org.apache.hive.minikdc.TestJdbcWithMiniKdc.org.apache.hive.minikdc.TestJdbcWithMiniKdc org.apache.hive.minikdc.TestHs2HooksWithMiniKdc.org.apache.hive.minikdc.TestHs2HooksWithMiniKdc org.apache.hive.minikdc.TestHs2HooksWithMiniKdc.org.apache.hive.minikdc.TestHs2HooksWithMiniKdc org.apache.hive.minikdc.TestJdbcNonKrbSASLWithMiniKdc.org.apache.hive.minikdc.TestJdbcNonKrbSASLWithMiniKdc org.apache.hive.minikdc.TestJdbcNonKrbSASLWithMiniKdc.org.apache.hive.minikdc.TestJdbcNonKrbSASLWithMiniKdc org.apache.hive.minikdc.TestJdbcWithMiniKdcSQLAuthBinary.org.apache.hive.minikdc.TestJdbcWithMiniKdcSQLAuthBinary org.apache.hive.minikdc.TestJdbcWithMiniKdcSQLAuthBinary.org.apache.hive.minikdc.TestJdbcWithMiniKdcSQLAuthBinary org.apache.hive.minikdc.TestMiniHiveKdc.testLogin 
org.apache.hive.minikdc.TestMiniHiveKdc.testLogin org.apache.hive.minikdc.TestJdbcWithDBTokenStore.org.apache.hive.minikdc.TestJdbcWithDBTokenStore org.apache.hive.minikdc.TestJdbcWithDBTokenStore.org.apache.hive.minikdc.TestJdbcWithDBTokenStore org.apache.hadoop.hive.ql.TestMetaStoreLimitPartitionRequest.testQueryWithInWithFallbackToORM org.apache.hive.jdbc.TestJdbcWithMiniHS2.testSelectThriftSerializeInTasks org.apache.hive.jdbc.TestJdbcWithMiniHS2.testEmptyResultsetThriftSerializeInTasks org.apache.hive.jdbc.TestJdbcWithMiniHS2.testParallelCompilation2 org.apache.hive.jdbc.TestJdbcWithMiniHS2.testJoinThriftSerializeInTasks org.apache.hive.jdbc.TestJdbcWithMiniHS2.testParallelCompilation org.apache.hive.jdbc.TestJdbcWithMiniHS2.testConcurrentStatements org.apache.hive.jdbc.TestJdbcWithMiniHS2.testFloatCast2DoubleThriftSerializeInTasks org.apache.hive.jdbc.TestJdbcWithMiniHS2.testEnableThriftSerializeInTasks org.apache.hive.service.cli.TestEmbeddedThriftBinaryCLIService.testExecuteStatementParallel {noformat} The exception is {noformat} java.lang.ExceptionInInitializerError: null at sun.security.krb5.Config.getRealmFromDNS(Config.java:1102) at sun.security.krb5.Config.getDefaultRealm(Config.java:987) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:483) at org.apache.hadoop.security.authentication.util.KerberosUtil.getDefaultRealm(KerberosUtil.java:110) at org.apache.hadoop.security.HadoopKerberosName.setConfiguration(HadoopKerberosName.java:63) at org.apache.hadoop.security.UserGroupInformation.initialize(UserGroupInformation.java:332) at org.apache.hadoop.security.UserGroupInformation.ensureInitialized(UserGroupInformation.java:317) at org.apache.hadoop.security.UserGroupInformation.loginUserFromSubject(UserGroupInformation.java:907) 
at org.apache.hadoop.security.UserGroupInformation.getLoginUser(UserGroupInformation.java:873) at org.apache.hadoop.security.UserGroupInformation.getCurrentUser(UserGroupInformation.java:740) at org.apache.hadoop.conf.Configuration$Resource.getRestrictParserDefault(Configuration.java:261) at org.apache.hadoop.conf.Configuration$Resource.<init>(Configuration.java:229) at org.apache.hadoop.conf.Configuration$Resource.<init>(Configuration.java:221) at org.apache.hadoop.conf.Configuration.addResource(Configuration.java:916) at org.apache.hadoop.hive.conf.HiveConf.initialize(HiveConf.java:3864) at org.apache.hadoop.hive.conf.HiveConf.<init>(HiveConf.java:3816) at
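For context on the failure mode in the stack trace above: in Java, when a class's static initializer throws, the first use of the class fails with ExceptionInInitializerError and every later use fails with NoClassDefFoundError, which is why one failed Kerberos realm lookup can cascade across an entire Hive test run. A minimal, self-contained sketch (the `KrbConfig` class here is illustrative, not Hadoop or JDK code):

```java
// Sketch: a static initializer failure poisons the class for the rest of the JVM.
class StaticInitDemo {
    static class KrbConfig {
        static {
            // Stand-in for sun.security.krb5.Config failing to resolve a realm.
            // (The "if (true)" keeps javac's "can complete normally" check happy.)
            if (true) {
                throw new RuntimeException("cannot determine default realm");
            }
        }
        static String defaultRealm() { return "EXAMPLE.COM"; }
    }

    // Touch the class and report which error (if any) came out.
    static String touch() {
        try {
            return KrbConfig.defaultRealm();
        } catch (Throwable t) {
            return t.getClass().getSimpleName();
        }
    }
}
```

The first call to `touch()` reports `ExceptionInInitializerError`; all later calls report `NoClassDefFoundError`, matching the cascade of failures listed above.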
[jira] [Updated] (HADOOP-15080) Aliyun OSS: update oss sdk from 2.8.1 to 2.8.3 to remove its dependency on Cat-x "json-lib"
[ https://issues.apache.org/jira/browse/HADOOP-15080?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrew Wang updated HADOOP-15080: - Fix Version/s: (was: 3.0.1) > Aliyun OSS: update oss sdk from 2.8.1 to 2.8.3 to remove its dependency on > Cat-x "json-lib" > --- > > Key: HADOOP-15080 > URL: https://issues.apache.org/jira/browse/HADOOP-15080 > Project: Hadoop Common > Issue Type: Bug > Components: fs/oss >Affects Versions: 3.0.0-beta1 >Reporter: Chris Douglas >Assignee: SammiChen >Priority: Blocker > Fix For: 3.0.0, 3.1.0 > > Attachments: HADOOP-15080-branch-3.0.0.001.patch, > HADOOP-15080-branch-3.0.0.002.patch > > > Cat-X dependency on org.json via derived json-lib. OSS SDK has a dependency > on json-lib. In LEGAL-245, the org.json library (from which json-lib may be > derived) is released under a > [category-x|https://www.apache.org/legal/resolved.html#json] license. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-15080) Aliyun OSS: update oss sdk from 2.8.1 to 2.8.3 to remove its dependency on Cat-x "json-lib"
[ https://issues.apache.org/jira/browse/HADOOP-15080?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16282523#comment-16282523 ] Andrew Wang commented on HADOOP-15080: -- This needs to go to 2.9.1 also right? > Aliyun OSS: update oss sdk from 2.8.1 to 2.8.3 to remove its dependency on > Cat-x "json-lib" > --- > > Key: HADOOP-15080 > URL: https://issues.apache.org/jira/browse/HADOOP-15080 > Project: Hadoop Common > Issue Type: Bug > Components: fs/oss >Affects Versions: 3.0.0-beta1 >Reporter: Chris Douglas >Assignee: SammiChen >Priority: Blocker > Fix For: 3.0.0, 3.1.0 > > Attachments: HADOOP-15080-branch-3.0.0.001.patch, > HADOOP-15080-branch-3.0.0.002.patch > > > Cat-X dependency on org.json via derived json-lib. OSS SDK has a dependency > on json-lib. In LEGAL-245, the org.json library (from which json-lib may be > derived) is released under a > [category-x|https://www.apache.org/legal/resolved.html#json] license.
[jira] [Commented] (HADOOP-15056) Fix TestUnbuffer#testUnbufferException failure
[ https://issues.apache.org/jira/browse/HADOOP-15056?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16282508#comment-16282508 ] John Zhuge commented on HADOOP-15056: - +1 LGTM > Fix TestUnbuffer#testUnbufferException failure > -- > > Key: HADOOP-15056 > URL: https://issues.apache.org/jira/browse/HADOOP-15056 > Project: Hadoop Common > Issue Type: Improvement >Affects Versions: 2.9.0 >Reporter: Jack Bearden >Assignee: Jack Bearden >Priority: Minor > Attachments: HADOOP-15056.001.patch, HADOOP-15056.002.patch, > HADOOP-15056.003.patch, HADOOP-15056.004.patch, HADOOP-15056.005.patch, > HADOOP-15056.006.patch, HADOOP-15056.007.patch > > > Hello! I am a new contributor and actually contributing to open source for > the very first time. :) > I pulled down Hadoop today and when running the tests I encountered a failure > with the TestUnbuffer#testUnbufferException test. > The unbuffer code has recently gone through some changes and I believe this > test case may have been overlooked. Using today's git commit > (659e85e304d070f9908a96cf6a0e1cbafde6a434), and upon running the test case, > there is an expected mock for an UnsupportedOperationException that > is no longer being thrown. > It would appear that a test like this would be valuable so my initial > proposed patch did not remove it. Instead, I removed the conditions that were > guarding the cast from being able to fire -- as was the previous behavior. > Now when we encounter an object that doesn't have the UNBUFFERED > StreamCapability, it will throw an error as it did prior to the recent > changes. > Please review and let me know what you think! :D
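The guard under discussion can be sketched as follows. The interfaces below are local stand-ins for Hadoop's `StreamCapabilities`/`CanUnbuffer`, and the `"in:unbuffer"` capability string is illustrative:

```java
// Sketch: check the advertised capability before casting and calling unbuffer(),
// and fail loudly (as the test expects) when the stream cannot unbuffer.
class UnbufferDemo {
    interface StreamCapabilities { boolean hasCapability(String capability); }
    interface CanUnbuffer { void unbuffer(); }

    static class BufferedStream implements StreamCapabilities, CanUnbuffer {
        boolean buffered = true;
        public boolean hasCapability(String c) { return "in:unbuffer".equals(c); }
        public void unbuffer() { buffered = false; }
    }

    static class PlainStream implements StreamCapabilities {
        public boolean hasCapability(String c) { return false; }
    }

    static void tryUnbuffer(StreamCapabilities in) {
        if (in.hasCapability("in:unbuffer") && in instanceof CanUnbuffer) {
            ((CanUnbuffer) in).unbuffer();   // safe: the capability was advertised
        } else {
            throw new UnsupportedOperationException(
                "this stream does not support unbuffering.");
        }
    }
}
```

A stream that advertises the capability is unbuffered; one that does not triggers the UnsupportedOperationException the test mocks for.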
[jira] [Commented] (HADOOP-15085) Output streams closed with IOUtils suppressing write errors
[ https://issues.apache.org/jira/browse/HADOOP-15085?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16282482#comment-16282482 ] genericqa commented on HADOOP-15085: | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 11s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 3 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 20s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 40s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 12m 43s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 47s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 35s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 12m 3s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 1s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 14s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 8s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 3s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m 35s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 11m 35s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 45s{color} | {color:orange} hadoop-common-project: The patch generated 1 new + 531 unchanged - 0 fixed = 532 total (was 531) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 31s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 9m 47s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 15s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 15s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 8m 25s{color} | {color:green} hadoop-common in the patch passed. 
{color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 3m 53s{color} | {color:green} hadoop-kms in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 32s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 87m 57s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 | | JIRA Issue | HADOOP-15085 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12901122/HADOOP-15085.002.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux 692ad39a38d2 3.13.0-129-generic #178-Ubuntu SMP Fri Aug 11 12:48:20 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 67b2661 | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_151 | | findbugs | v3.1.0-RC1 | | checkstyle |
[jira] [Commented] (HADOOP-15056) Fix TestUnbuffer#testUnbufferException failure
[ https://issues.apache.org/jira/browse/HADOOP-15056?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16282481#comment-16282481 ] Xiao Chen commented on HADOOP-15056: Thanks for revving [~jackbearden], +1 on patch 7. Any additional comments [~jzhuge]? > Fix TestUnbuffer#testUnbufferException failure > -- > > Key: HADOOP-15056 > URL: https://issues.apache.org/jira/browse/HADOOP-15056 > Project: Hadoop Common > Issue Type: Improvement >Affects Versions: 2.9.0 >Reporter: Jack Bearden >Assignee: Jack Bearden >Priority: Minor > Attachments: HADOOP-15056.001.patch, HADOOP-15056.002.patch, > HADOOP-15056.003.patch, HADOOP-15056.004.patch, HADOOP-15056.005.patch, > HADOOP-15056.006.patch, HADOOP-15056.007.patch > > > Hello! I am a new contributor and actually contributing to open source for > the very first time. :) > I pulled down Hadoop today and when running the tests I encountered a failure > with the TestUnbuffer#testUnbufferException test. > The unbuffer code has recently gone through some changes and I believe this > test case may have been overlooked. Using today's git commit > (659e85e304d070f9908a96cf6a0e1cbafde6a434), and upon running the test case, > there is an expected mock for an UnsupportedOperationException that > is no longer being thrown. > It would appear that a test like this would be valuable so my initial > proposed patch did not remove it. Instead, I removed the conditions that were > guarding the cast from being able to fire -- as was the previous behavior. > Now when we encounter an object that doesn't have the UNBUFFERED > StreamCapability, it will throw an error as it did prior to the recent > changes. > Please review and let me know what you think! :D
[jira] [Updated] (HADOOP-15098) TestClusterTopology#testChooseRandom fails intermittently
[ https://issues.apache.org/jira/browse/HADOOP-15098?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sean Mackrory updated HADOOP-15098: --- Resolution: Fixed Fix Version/s: 3.0.1 3.1.0 Status: Resolved (was: Patch Available) > TestClusterTopology#testChooseRandom fails intermittently > - > > Key: HADOOP-15098 > URL: https://issues.apache.org/jira/browse/HADOOP-15098 > Project: Hadoop Common > Issue Type: Bug > Components: test >Affects Versions: 3.0.0 >Reporter: Zsolt Venczel >Assignee: Zsolt Venczel > Labels: flaky-test > Fix For: 3.1.0, 3.0.1 > > Attachments: HADOOP-15098.01.patch > > > Flaky test failure: > {code:java} > java.lang.AssertionError > Error > Not choosing nodes randomly > Stack Trace > java.lang.AssertionError: Not choosing nodes randomly > at > org.apache.hadoop.net.TestClusterTopology.testChooseRandom(TestClusterTopology.java:170) > {code}
[jira] [Updated] (HADOOP-15080) Aliyun OSS: update oss sdk from 2.8.1 to 2.8.3 to remove its dependency on Cat-x "json-lib"
[ https://issues.apache.org/jira/browse/HADOOP-15080?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sean Mackrory updated HADOOP-15080: --- Resolution: Fixed Assignee: SammiChen Fix Version/s: 3.0.1 3.1.0 3.0.0 Status: Resolved (was: Patch Available) > Aliyun OSS: update oss sdk from 2.8.1 to 2.8.3 to remove its dependency on > Cat-x "json-lib" > --- > > Key: HADOOP-15080 > URL: https://issues.apache.org/jira/browse/HADOOP-15080 > Project: Hadoop Common > Issue Type: Bug > Components: fs/oss >Affects Versions: 3.0.0-beta1 >Reporter: Chris Douglas >Assignee: SammiChen >Priority: Blocker > Fix For: 3.0.0, 3.1.0, 3.0.1 > > Attachments: HADOOP-15080-branch-3.0.0.001.patch, > HADOOP-15080-branch-3.0.0.002.patch > > > Cat-X dependency on org.json via derived json-lib. OSS SDK has a dependency > on json-lib. In LEGAL-245, the org.json library (from which json-lib may be > derived) is released under a > [category-x|https://www.apache.org/legal/resolved.html#json] license.
[jira] [Commented] (HADOOP-15080) Aliyun OSS: update oss sdk from 2.8.1 to 2.8.3 to remove its dependency on Cat-x "json-lib"
[ https://issues.apache.org/jira/browse/HADOOP-15080?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16282458#comment-16282458 ] Andrew Wang commented on HADOOP-15080: -- Let's get this backported to branch-3.0 and branch-3.0.0; based on the prior discussion (and that precommit ran against branch-3.0.0), I think it's clear the intent was to get this in for 3.0.0. > Aliyun OSS: update oss sdk from 2.8.1 to 2.8.3 to remove its dependency on > Cat-x "json-lib" > --- > > Key: HADOOP-15080 > URL: https://issues.apache.org/jira/browse/HADOOP-15080 > Project: Hadoop Common > Issue Type: Bug > Components: fs/oss >Affects Versions: 3.0.0-beta1 >Reporter: Chris Douglas >Priority: Blocker > Attachments: HADOOP-15080-branch-3.0.0.001.patch, > HADOOP-15080-branch-3.0.0.002.patch > > > Cat-X dependency on org.json via derived json-lib. OSS SDK has a dependency > on json-lib. In LEGAL-245, the org.json library (from which json-lib may be > derived) is released under a > [category-x|https://www.apache.org/legal/resolved.html#json] license.
[jira] [Commented] (HADOOP-15059) 3.0 deployment cannot work with old version MR tar ball which breaks rolling upgrade
[ https://issues.apache.org/jira/browse/HADOOP-15059?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16282465#comment-16282465 ] Daryn Sharp commented on HADOOP-15059: -- bq. Daryn Sharp, don't get scared by exception handling! I'm just grumbly about the chain of catching ArrayIndexOutOfBoundsException, rethrowing as IllegalArgumentException, catching that, rethrowing as IOException. In the end, it doesn't really matter though. I'm still +1. Full speed ahead. > 3.0 deployment cannot work with old version MR tar ball which breaks rolling > upgrade > > > Key: HADOOP-15059 > URL: https://issues.apache.org/jira/browse/HADOOP-15059 > Project: Hadoop Common > Issue Type: Bug > Components: security >Reporter: Junping Du >Assignee: Jason Lowe >Priority: Blocker > Attachments: HADOOP-15059.001.patch, HADOOP-15059.002.patch, > HADOOP-15059.003.patch, HADOOP-15059.004.patch, HADOOP-15059.005.patch, > HADOOP-15059.006.patch > > > I tried to deploy 3.0 cluster with 2.9 MR tar ball. The MR job is failed > because following error: > {noformat} > 2017-11-21 12:42:50,911 INFO [main] > org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Created MRAppMaster for > application appattempt_1511295641738_0003_01 > 2017-11-21 12:42:51,070 WARN [main] org.apache.hadoop.util.NativeCodeLoader: > Unable to load native-hadoop library for your platform... 
using builtin-java > classes where applicable > 2017-11-21 12:42:51,118 FATAL [main] > org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Error starting MRAppMaster > java.lang.RuntimeException: Unable to determine current user > at > org.apache.hadoop.conf.Configuration$Resource.getRestrictParserDefault(Configuration.java:254) > at > org.apache.hadoop.conf.Configuration$Resource.<init>(Configuration.java:220) > at > org.apache.hadoop.conf.Configuration$Resource.<init>(Configuration.java:212) > at > org.apache.hadoop.conf.Configuration.addResource(Configuration.java:888) > at > org.apache.hadoop.mapreduce.v2.app.MRAppMaster.main(MRAppMaster.java:1638) > Caused by: java.io.IOException: Exception reading > /tmp/nm-local-dir/usercache/jdu/appcache/application_1511295641738_0003/container_e03_1511295641738_0003_01_01/container_tokens > at > org.apache.hadoop.security.Credentials.readTokenStorageFile(Credentials.java:208) > at > org.apache.hadoop.security.UserGroupInformation.loginUserFromSubject(UserGroupInformation.java:907) > at > org.apache.hadoop.security.UserGroupInformation.getLoginUser(UserGroupInformation.java:820) > at > org.apache.hadoop.security.UserGroupInformation.getCurrentUser(UserGroupInformation.java:689) > at > org.apache.hadoop.conf.Configuration$Resource.getRestrictParserDefault(Configuration.java:252) > ... 4 more > Caused by: java.io.IOException: Unknown version 1 in token storage. > at > org.apache.hadoop.security.Credentials.readTokenStorageStream(Credentials.java:226) > at > org.apache.hadoop.security.Credentials.readTokenStorageFile(Credentials.java:205) > ... 8 more > 2017-11-21 12:42:51,122 INFO [main] org.apache.hadoop.util.ExitUtil: Exiting > with status 1: java.lang.RuntimeException: Unable to determine current user > {noformat} > I think it is due to token incompatibility change between 2.9 and 3.0. 
As we > claim "rolling upgrade" is supported in Hadoop 3, we should fix this before > we ship 3.0 otherwise all MR running applications will get stuck during/after > upgrade.
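The root cause above ("Unknown version 1 in token storage") is a format-version check: an older reader refuses serialized data stamped with a newer version number, which is exactly what breaks when a 2.9 MR AM reads tokens written by a 3.0 NodeManager. A hypothetical sketch of that pattern — the magic bytes, version numbers, and method names here are illustrative, not Hadoop's actual `Credentials` wire format:

```java
import java.io.*;

// Sketch: a versioned binary header; the reader rejects versions it was not
// built to understand, mirroring the "Unknown version N" failure above.
class TokenVersionDemo {
    static final byte[] MAGIC = {'H', 'D', 'T', 'S'};  // illustrative header
    static final int MAX_READABLE = 0;                 // an "old" reader only knows version 0

    static byte[] write(int version) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        DataOutputStream out = new DataOutputStream(bos);
        out.write(MAGIC);
        out.writeByte(version);
        // ... serialized tokens would follow here ...
        return bos.toByteArray();
    }

    static void read(byte[] data) throws IOException {
        DataInputStream in = new DataInputStream(new ByteArrayInputStream(data));
        byte[] magic = new byte[4];
        in.readFully(magic);
        for (int i = 0; i < MAGIC.length; i++) {
            if (magic[i] != MAGIC[i]) throw new IOException("Bad header found in token storage.");
        }
        int version = in.readByte();
        if (version > MAX_READABLE) {
            throw new IOException("Unknown version " + version + " in token storage.");
        }
        // ... tokens would be deserialized here ...
    }
}
```

A rolling upgrade requires either the new writer to keep emitting the old version, or the old reader to tolerate the new one; neither held here, hence the blocker.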
[jira] [Commented] (HADOOP-15098) TestClusterTopology#testChooseRandom fails intermittently
[ https://issues.apache.org/jira/browse/HADOOP-15098?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16282343#comment-16282343 ] Sean Mackrory commented on HADOOP-15098: +1. The reasoning is sound, code changes look good, and I can confirm 3/100 failures without your patch, 0/100 failures with your patch. Still not entirely deterministic, but in the exceptionally unlikely event of failure, I think the comments and git history will make it sufficiently clear. Will commit later today. > TestClusterTopology#testChooseRandom fails intermittently > - > > Key: HADOOP-15098 > URL: https://issues.apache.org/jira/browse/HADOOP-15098 > Project: Hadoop Common > Issue Type: Bug > Components: test >Affects Versions: 3.0.0 >Reporter: Zsolt Venczel >Assignee: Zsolt Venczel > Labels: flaky-test > Attachments: HADOOP-15098.01.patch > > > Flaky test failure: > {code:java} > java.lang.AssertionError > Error > Not choosing nodes randomly > Stack Trace > java.lang.AssertionError: Not choosing nodes randomly > at > org.apache.hadoop.net.TestClusterTopology.testChooseRandom(TestClusterTopology.java:170) > {code}
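For readers wondering why such a test can never be fully deterministic: any assertion about random choice has a nonzero failure probability, which shrinks as the number of trials grows. An illustrative sketch (not the actual TestClusterTopology code; the method and bound are assumptions for the example):

```java
import java.util.Random;

// Sketch: assert that every candidate is picked at least once over many draws.
// With c candidates and n draws, the failure probability is at most c*(1-1/c)^n,
// so raising n makes the test arbitrarily (but never perfectly) reliable.
class RandomChoiceDemo {
    static boolean allChosen(int candidates, int trials, long seed) {
        Random rng = new Random(seed);
        boolean[] seen = new boolean[candidates];
        for (int i = 0; i < trials; i++) {
            seen[rng.nextInt(candidates)] = true;   // record each uniform draw
        }
        for (boolean s : seen) {
            if (!s) return false;                   // some candidate never chosen
        }
        return true;
    }
}
```

With 3 candidates and 1000 draws the miss probability is about 3·(2/3)^1000, i.e. negligible, while a handful of draws fails routinely — which is the trade-off the 3/100 vs 0/100 numbers above reflect.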
[jira] [Updated] (HADOOP-15085) Output streams closed with IOUtils suppressing write errors
[ https://issues.apache.org/jira/browse/HADOOP-15085?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jim Brennan updated HADOOP-15085: - Attachment: HADOOP-15085.002.patch Attaching a new patch that addresses most of the style complaints. > Output streams closed with IOUtils suppressing write errors > --- > > Key: HADOOP-15085 > URL: https://issues.apache.org/jira/browse/HADOOP-15085 > Project: Hadoop Common > Issue Type: Bug >Reporter: Jason Lowe >Assignee: Jim Brennan > Attachments: HADOOP-15085.001.patch, HADOOP-15085.002.patch > > > There are a few places in hadoop-common that are closing an output stream > with IOUtils.cleanupWithLogger like this: > {code} > try { > ...write to outStream... > } finally { > IOUtils.cleanupWithLogger(LOG, outStream); > } > {code} > This suppresses any IOException that occurs during the close() method, which > could lead to partial/corrupted output without throwing a corresponding > exception. The code should either use try-with-resources or explicitly close > the stream within the try block so the exception thrown during close() is > properly propagated, as exceptions during write operations are.
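The bug class in this issue can be reproduced in a few lines; `FlakyStream` below is illustrative, standing in for any stream whose buffered data only fails when flushed at close():

```java
import java.io.IOException;

// Sketch: an exception thrown by close() is lost when the close happens in a
// finally block that swallows it, but propagates with try-with-resources.
class CloseSuppressionDemo {
    static class FlakyStream implements AutoCloseable {
        void write(String s) { /* buffered; the error only surfaces at close */ }
        @Override public void close() throws IOException {
            throw new IOException("flush failed on close");
        }
    }

    // Anti-pattern: the close() failure is swallowed, so the caller sees success.
    static boolean writeSwallowing() {
        FlakyStream out = new FlakyStream();
        try {
            out.write("data");
            return true;                                      // reports success...
        } finally {
            try { out.close(); } catch (IOException ignored) { }  // ...close error lost
        }
    }

    // Fix: try-with-resources propagates the close() exception to the caller.
    static void writePropagating() throws IOException {
        try (FlakyStream out = new FlakyStream()) {
            out.write("data");
        }
    }
}
```

The first variant returns success despite the failed close — exactly the silent partial/corrupted output described above — while the second raises the IOException to the caller.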
[jira] [Comment Edited] (HADOOP-15099) YARN Federation Link not working
[ https://issues.apache.org/jira/browse/HADOOP-15099?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16282311#comment-16282311 ] Anirudh edited comment on HADOOP-15099 at 12/7/17 6:51 PM: --- Fix added, PR raised: [https://github.com/apache/hadoop/pull/300] was (Author: animenon): Fix added, PR raised: [pull 300](HADOOP-15099) > YARN Federation Link not working > > > Key: HADOOP-15099 > URL: https://issues.apache.org/jira/browse/HADOOP-15099 > Project: Hadoop Common > Issue Type: Bug > Components: documentation >Affects Versions: 2.9.0 >Reporter: Anirudh >Priority: Trivial > Labels: documentation, easyfix, newbie > Original Estimate: 1m > Remaining Estimate: 1m > > YARN federation link isn't working on [YARN > page](http://hadoop.apache.org/docs/current/hadoop-yarn/hadoop-yarn-site/YARN.html).
[jira] [Updated] (HADOOP-15099) YARN Federation Link not working
[ https://issues.apache.org/jira/browse/HADOOP-15099?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Anirudh updated HADOOP-15099: - Target Version/s: 2.9.1 Status: Patch Available (was: Open) fixed, PR raised: [https://github.com/apache/hadoop/pull/300] > YARN Federation Link not working > > > Key: HADOOP-15099 > URL: https://issues.apache.org/jira/browse/HADOOP-15099 > Project: Hadoop Common > Issue Type: Bug > Components: documentation >Affects Versions: 2.9.0 >Reporter: Anirudh >Priority: Trivial > Labels: documentation, easyfix, newbie > Original Estimate: 1m > Remaining Estimate: 1m > > YARN federation link isn't working on [YARN > page](http://hadoop.apache.org/docs/current/hadoop-yarn/hadoop-yarn-site/YARN.html).
[jira] [Issue Comment Deleted] (HADOOP-15099) YARN Federation Link not working
[ https://issues.apache.org/jira/browse/HADOOP-15099?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Anirudh updated HADOOP-15099: - Comment: was deleted (was: Fix added, PR raised: [https://github.com/apache/hadoop/pull/300]) > YARN Federation Link not working > > > Key: HADOOP-15099 > URL: https://issues.apache.org/jira/browse/HADOOP-15099 > Project: Hadoop Common > Issue Type: Bug > Components: documentation >Affects Versions: 2.9.0 >Reporter: Anirudh >Priority: Trivial > Labels: documentation, easyfix, newbie > Original Estimate: 1m > Remaining Estimate: 1m > > YARN federation link isn't working on [YARN > page](http://hadoop.apache.org/docs/current/hadoop-yarn/hadoop-yarn-site/YARN.html).
[jira] [Commented] (HADOOP-15099) YARN Federation Link not working
[ https://issues.apache.org/jira/browse/HADOOP-15099?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16282311#comment-16282311 ] Anirudh commented on HADOOP-15099: -- Fix added, PR raised: [pull 300](HADOOP-15099) > YARN Federation Link not working > > > Key: HADOOP-15099 > URL: https://issues.apache.org/jira/browse/HADOOP-15099 > Project: Hadoop Common > Issue Type: Bug > Components: documentation >Affects Versions: 2.9.0 >Reporter: Anirudh >Priority: Trivial > Labels: documentation, easyfix, newbie > Original Estimate: 1m > Remaining Estimate: 1m > > YARN federation link isn't working on [YARN > page](http://hadoop.apache.org/docs/current/hadoop-yarn/hadoop-yarn-site/YARN.html).
[jira] [Updated] (HADOOP-15099) YARN Federation Link not working
[ https://issues.apache.org/jira/browse/HADOOP-15099?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Anirudh updated HADOOP-15099: - Description: YARN federation(in the last paragraph on the page) link isn't working on [(http://hadoop.apache.org/docs/current/hadoop-yarn/hadoop-yarn-site/YARN.html]. was: YARN federation link isn't working on [YARN page](http://hadoop.apache.org/docs/current/hadoop-yarn/hadoop-yarn-site/YARN.html). > YARN Federation Link not working > > > Key: HADOOP-15099 > URL: https://issues.apache.org/jira/browse/HADOOP-15099 > Project: Hadoop Common > Issue Type: Bug > Components: documentation >Affects Versions: 2.9.0 >Reporter: Anirudh >Priority: Trivial > Labels: documentation, easyfix, newbie > Original Estimate: 1m > Remaining Estimate: 1m > > YARN federation(in the last paragraph on the page) link isn't working on > [(http://hadoop.apache.org/docs/current/hadoop-yarn/hadoop-yarn-site/YARN.html].
[jira] [Updated] (HADOOP-15099) YARN Federation Link not working
[ https://issues.apache.org/jira/browse/HADOOP-15099?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Anirudh updated HADOOP-15099: - Description: YARN federation(in the last paragraph on the page) link isn't working on [http://hadoop.apache.org/docs/current/hadoop-yarn/hadoop-yarn-site/YARN.html]. was: YARN federation(in the last paragraph on the page) link isn't working on [(http://hadoop.apache.org/docs/current/hadoop-yarn/hadoop-yarn-site/YARN.html]. > YARN Federation Link not working > > > Key: HADOOP-15099 > URL: https://issues.apache.org/jira/browse/HADOOP-15099 > Project: Hadoop Common > Issue Type: Bug > Components: documentation >Affects Versions: 2.9.0 >Reporter: Anirudh >Priority: Trivial > Labels: documentation, easyfix, newbie > Original Estimate: 1m > Remaining Estimate: 1m > > YARN federation(in the last paragraph on the page) link isn't working on > [http://hadoop.apache.org/docs/current/hadoop-yarn/hadoop-yarn-site/YARN.html].
[jira] [Created] (HADOOP-15099) YARN Federation Link not working
Anirudh created HADOOP-15099: Summary: YARN Federation Link not working Key: HADOOP-15099 URL: https://issues.apache.org/jira/browse/HADOOP-15099 Project: Hadoop Common Issue Type: Bug Components: documentation Affects Versions: 2.9.0 Reporter: Anirudh Priority: Trivial YARN federation link isn't working on [YARN page](http://hadoop.apache.org/docs/current/hadoop-yarn/hadoop-yarn-site/YARN.html).
[jira] [Commented] (HADOOP-15012) Add readahead, dropbehind, and unbuffer to StreamCapabilities
[ https://issues.apache.org/jira/browse/HADOOP-15012?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16282227#comment-16282227 ] Xiao Chen commented on HADOOP-15012: Thanks for the review [~ste...@apache.org]. This is the same patch as trunk. Specifically it's the cherry-pick of the trunk commit bf6a660232b01642b07697a289c773ea5b97217c, with the minimal resolution of conflicts in the following files: {noformat} both modified: hadoop-common-project/hadoop-common/src/site/markdown/filesystem/filesystem.md both modified: hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSInputStream.java both modified: hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSOutputStream.java {noformat} Will commit to branch-2, and cherry-pick to branch-3.0 once HADOOP-15056 is fixed in trunk. > Add readahead, dropbehind, and unbuffer to StreamCapabilities > - > > Key: HADOOP-15012 > URL: https://issues.apache.org/jira/browse/HADOOP-15012 > Project: Hadoop Common > Issue Type: Improvement > Components: fs >Affects Versions: 2.9.0 >Reporter: John Zhuge >Assignee: John Zhuge > Fix For: 3.1.0 > > Attachments: HADOOP-15012.branch-2.01.patch > > > A split from HADOOP-14872 to track changes that enhance StreamCapabilities > class with READAHEAD, DROPBEHIND, and UNBUFFER capability. > Discussions and code reviews are done in HADOOP-14872.
[jira] [Commented] (HADOOP-15085) Output streams closed with IOUtils suppressing write errors
[ https://issues.apache.org/jira/browse/HADOOP-15085?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16282199#comment-16282199 ] genericqa commented on HADOOP-15085: | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 10m 19s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 3 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 23s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 2s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 13m 55s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 51s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 38s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 12m 36s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 9s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 19s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 9s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 12s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 12m 41s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 12m 41s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 44s{color} | {color:orange} hadoop-common-project: The patch generated 6 new + 531 unchanged - 0 fixed = 537 total (was 531) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 41s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 9m 53s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 23s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 16s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 8m 36s{color} | {color:green} hadoop-common in the patch passed. 
{color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 4m 12s{color} | {color:green} hadoop-kms in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 36s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}103m 44s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 | | JIRA Issue | HADOOP-15085 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12901082/HADOOP-15085.001.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux a279f861b795 3.13.0-129-generic #178-Ubuntu SMP Fri Aug 11 12:48:20 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 67b2661 | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_151 | | findbugs | v3.1.0-RC1 | | checkstyle |
[jira] [Commented] (HADOOP-15098) TestClusterTopology#testChooseRandom fails intermittently
[ https://issues.apache.org/jira/browse/HADOOP-15098?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16282164#comment-16282164 ] genericqa commented on HADOOP-15098: | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 8s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 18s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 12m 31s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 36s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 4s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 11m 30s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 28s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 53s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 43s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m 32s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 11m 32s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 37s{color} | {color:green} hadoop-common-project/hadoop-common: The patch generated 0 new + 4 unchanged - 3 fixed = 4 total (was 7) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 2s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 9m 46s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 33s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 54s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 8m 38s{color} | {color:green} hadoop-common in the patch passed. 
{color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 33s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 79m 38s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 | | JIRA Issue | HADOOP-15098 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12901056/HADOOP-15098.01.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux fda5a27300f3 3.13.0-135-generic #184-Ubuntu SMP Wed Oct 18 11:55:51 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 67b2661 | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_151 | | findbugs | v3.1.0-RC1 | | Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/13802/testReport/ | | Max. process+thread count | 1468 (vs. ulimit of 5000) | | modules | C: hadoop-common-project/hadoop-common U: hadoop-common-project/hadoop-common | | Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/13802/console | | Powered by | Apache Yetus 0.7.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > TestClusterTopology#testChooseRandom fails intermittently >
[jira] [Commented] (HADOOP-15080) Aliyun OSS: update oss sdk from 2.8.1 to 2.8.3 to remove its dependency on Cat-x "json-lib"
[ https://issues.apache.org/jira/browse/HADOOP-15080?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16282134#comment-16282134 ] Sean Mackrory commented on HADOOP-15080: Thanks for the quick turn-around on the new SDK - I can confirm the dependency is gone in trunk. Are you also going to backport to branch-3.0 and branch-3.0.0? Happy to do it if you're not already on it. > Aliyun OSS: update oss sdk from 2.8.1 to 2.8.3 to remove its dependency on > Cat-x "json-lib" > --- > > Key: HADOOP-15080 > URL: https://issues.apache.org/jira/browse/HADOOP-15080 > Project: Hadoop Common > Issue Type: Bug > Components: fs/oss >Affects Versions: 3.0.0-beta1 >Reporter: Chris Douglas >Priority: Blocker > Attachments: HADOOP-15080-branch-3.0.0.001.patch, > HADOOP-15080-branch-3.0.0.002.patch > > > Cat-X dependency on org.json via derived json-lib. OSS SDK has a dependency > on json-lib. In LEGAL-245, the org.json library (from which json-lib may be > derived) is released under a > [category-x|https://www.apache.org/legal/resolved.html#json] license. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-15080) Aliyun OSS: update oss sdk from 2.8.1 to 2.8.3 to remove its dependency on Cat-x "json-lib"
[ https://issues.apache.org/jira/browse/HADOOP-15080?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16282062#comment-16282062 ] Hudson commented on HADOOP-15080: - SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13340 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/13340/]) HADOOP-15080. Aliyun OSS: update oss sdk from 2.8.1 to 2.8.3 to remove (sammi.chen: rev 67b2661e3d73a68ba7ca73b112bf6baea128631e) * (edit) hadoop-project/pom.xml > Aliyun OSS: update oss sdk from 2.8.1 to 2.8.3 to remove its dependency on > Cat-x "json-lib" > --- > > Key: HADOOP-15080 > URL: https://issues.apache.org/jira/browse/HADOOP-15080 > Project: Hadoop Common > Issue Type: Bug > Components: fs/oss >Affects Versions: 3.0.0-beta1 >Reporter: Chris Douglas >Priority: Blocker > Attachments: HADOOP-15080-branch-3.0.0.001.patch, > HADOOP-15080-branch-3.0.0.002.patch > > > Cat-X dependency on org.json via derived json-lib. OSS SDK has a dependency > on json-lib. In LEGAL-245, the org.json library (from which json-lib may be > derived) is released under a > [category-x|https://www.apache.org/legal/resolved.html#json] license. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-15085) Output streams closed with IOUtils suppressing write errors
[ https://issues.apache.org/jira/browse/HADOOP-15085?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jim Brennan updated HADOOP-15085: - Status: Patch Available (was: Open) Submitting patch. > Output streams closed with IOUtils suppressing write errors > --- > > Key: HADOOP-15085 > URL: https://issues.apache.org/jira/browse/HADOOP-15085 > Project: Hadoop Common > Issue Type: Bug >Reporter: Jason Lowe >Assignee: Jim Brennan > Attachments: HADOOP-15085.001.patch > > > There are a few places in hadoop-common that are closing an output stream > with IOUtils.cleanupWithLogger like this: > {code} > try { > ...write to outStream... > } finally { > IOUtils.cleanupWithLogger(LOG, outStream); > } > {code} > This suppresses any IOException that occurs during the close() method which > could lead to partial/corrupted output without throwing a corresponding > exception. The code should either use try-with-resources or explicitly close > the stream within the try block so the exception thrown during close() is > properly propagated as exceptions during write operations are. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-15085) Output streams closed with IOUtils suppressing write errors
[ https://issues.apache.org/jira/browse/HADOOP-15085?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jim Brennan updated HADOOP-15085: - Attachment: HADOOP-15085.001.patch Submitting a patch which uses try-with-resources to address these issues. > Output streams closed with IOUtils suppressing write errors > --- > > Key: HADOOP-15085 > URL: https://issues.apache.org/jira/browse/HADOOP-15085 > Project: Hadoop Common > Issue Type: Bug >Reporter: Jason Lowe >Assignee: Jim Brennan > Attachments: HADOOP-15085.001.patch > > > There are a few places in hadoop-common that are closing an output stream > with IOUtils.cleanupWithLogger like this: > {code} > try { > ...write to outStream... > } finally { > IOUtils.cleanupWithLogger(LOG, outStream); > } > {code} > This suppresses any IOException that occurs during the close() method which > could lead to partial/corrupted output without throwing a corresponding > exception. The code should either use try-with-resources or explicitly close > the stream within the try block so the exception thrown during close() is > properly propagated as exceptions during write operations are. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
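The failure mode described in HADOOP-15085 can be reproduced with plain java.io, independent of Hadoop. The class and method names below are illustrative only, not from the actual patch: a close-in-finally that logs and drops the error (what IOUtils.cleanupWithLogger effectively does) hides a failed close(), while try-with-resources surfaces it to the caller.

```java
import java.io.IOException;
import java.io.OutputStream;

public class CloseErrorDemo {
    // Models a buffered stream whose write succeeds but whose close()
    // (i.e. the final flush) fails -- the case the JIRA describes.
    static class FailingOnCloseStream extends OutputStream {
        @Override public void write(int b) { /* buffered; no error yet */ }
        @Override public void close() throws IOException {
            throw new IOException("flush-on-close failed");
        }
    }

    // Anti-pattern: close in finally, error logged-and-dropped.
    // Returns true when the caller saw no exception despite lost output.
    static boolean cleanupInFinallySwallows() {
        OutputStream out = new FailingOnCloseStream();
        try {
            out.write(42);
        } catch (IOException e) {
            return false;                  // the write itself did not fail
        } finally {
            try { out.close(); } catch (IOException logged) { /* dropped */ }
        }
        return true;                       // caller believes all went well
    }

    // Fix: try-with-resources propagates the close() exception.
    static boolean tryWithResourcesPropagates() {
        try (OutputStream out = new FailingOnCloseStream()) {
            out.write(42);
        } catch (IOException e) {
            return true;                   // caller is told output was lost
        }
        return false;
    }

    public static void main(String[] args) {
        System.out.println("swallowed=" + cleanupInFinallySwallows());
        System.out.println("propagated=" + tryWithResourcesPropagates());
    }
}
```

Both helpers return true, which is exactly the asymmetry the patch removes: the first caller silently keeps partial output, the second is forced to handle the error.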
[jira] [Updated] (HADOOP-15080) Aliyun OSS: update oss sdk from 2.8.1 to 2.8.3 to remove its dependency on Cat-x "json-lib"
[ https://issues.apache.org/jira/browse/HADOOP-15080?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] SammiChen updated HADOOP-15080: --- Summary: Aliyun OSS: update oss sdk from 2.8.1 to 2.8.3 to remove its dependency on Cat-x "json-lib" (was: Cat-X dependency on org.json via derived json-lib) > Aliyun OSS: update oss sdk from 2.8.1 to 2.8.3 to remove its dependency on > Cat-x "json-lib" > --- > > Key: HADOOP-15080 > URL: https://issues.apache.org/jira/browse/HADOOP-15080 > Project: Hadoop Common > Issue Type: Bug > Components: fs/oss >Affects Versions: 3.0.0-beta1 >Reporter: Chris Douglas >Priority: Blocker > Attachments: HADOOP-15080-branch-3.0.0.001.patch, > HADOOP-15080-branch-3.0.0.002.patch > > > Cat-X dependency on org.json via derived json-lib. OSS SDK has a dependency > on json-lib. In LEGAL-245, the org.json library (from which json-lib may be > derived) is released under a > [category-x|https://www.apache.org/legal/resolved.html#json] license. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-15080) Cat-X dependency on org.json via derived json-lib
[ https://issues.apache.org/jira/browse/HADOOP-15080?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] SammiChen updated HADOOP-15080: --- Description: Cat-X dependency on org.json via derived json-lib. OSS SDK has a dependency on json-lib. In LEGAL-245, the org.json library (from which json-lib may be derived) is released under a [category-x|https://www.apache.org/legal/resolved.html#json] license. (was: The OSS SDK has a dependency on json-lib. In LEGAL-245, the org.json library (from which json-lib may be derived) is released under a [category-x|https://www.apache.org/legal/resolved.html#json] license.) > Cat-X dependency on org.json via derived json-lib > - > > Key: HADOOP-15080 > URL: https://issues.apache.org/jira/browse/HADOOP-15080 > Project: Hadoop Common > Issue Type: Bug > Components: fs/oss >Affects Versions: 3.0.0-beta1 >Reporter: Chris Douglas >Priority: Blocker > Attachments: HADOOP-15080-branch-3.0.0.001.patch, > HADOOP-15080-branch-3.0.0.002.patch > > > Cat-X dependency on org.json via derived json-lib. OSS SDK has a dependency > on json-lib. In LEGAL-245, the org.json library (from which json-lib may be > derived) is released under a > [category-x|https://www.apache.org/legal/resolved.html#json] license. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-15056) Fix TestUnbuffer#testUnbufferException failure
[ https://issues.apache.org/jira/browse/HADOOP-15056?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16281909#comment-16281909 ] genericqa commented on HADOOP-15056: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 11s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 15s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 27s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 13m 32s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 2m 6s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 19s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 14m 55s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 32s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 52s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 16s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 44s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 12m 49s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 12m 49s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 2m 6s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 20s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 10m 0s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 51s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 50s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red} 8m 32s{color} | {color:red} hadoop-common in the patch failed. 
{color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 91m 27s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 36s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}189m 32s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.security.TestRaceWhenRelogin | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure110 | | | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure | | | hadoop.hdfs.TestSafeMode | | | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting | | | hadoop.hdfs.TestDistributedFileSystemWithECFile | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure040 | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure070 | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 | | JIRA Issue | HADOOP-15056 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12901032/HADOOP-15056.007.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux f059d50c097f 3.13.0-135-generic
[jira] [Commented] (HADOOP-14959) DelegationTokenAuthenticator.authenticate() to wrap network exceptions
[ https://issues.apache.org/jira/browse/HADOOP-14959?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16281896#comment-16281896 ] Steve Loughran commented on HADOOP-14959: - LGTM; checkstyle is only about a line length of 82 chars after the extra indentation. That should really be fixed by chopping down the line earlier than it currently is done. What do other people who understand UGI have to say about this? > DelegationTokenAuthenticator.authenticate() to wrap network exceptions > -- > > Key: HADOOP-14959 > URL: https://issues.apache.org/jira/browse/HADOOP-14959 > Project: Hadoop Common > Issue Type: Improvement > Components: net, security >Affects Versions: 2.8.1 >Reporter: Steve Loughran >Assignee: Ajay Kumar >Priority: Minor > Attachments: HADOOP-14959.001.patch > > > network errors raised in {{DelegationTokenAuthenticator.authenticate()}} > aren't being wrapped, so only return the usual limited-value java.net error > text. using {{NetUtils.wrapException()}} can address that -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
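The value of the wrapping proposed in HADOOP-14959 can be shown without Hadoop on the classpath. The helper below is a hypothetical stand-in for NetUtils.wrapException (the real method additionally tries to preserve the concrete exception class), and the host/port values are made up for the example: the point is that the "limited-value java.net error text" gains the destination that actually failed.

```java
import java.io.IOException;
import java.net.ConnectException;

public class WrapNetErrorDemo {
    // Hypothetical stand-in for NetUtils.wrapException: keep the original
    // exception as the cause, but put the destination in the message.
    static IOException wrap(String host, int port, IOException e) {
        return new IOException(
            "Failed to connect to " + host + ":" + port + ": " + e.getMessage(), e);
    }

    public static void main(String[] args) {
        // Raw java.net text says only "Connection refused" -- refused by whom?
        IOException raw = new ConnectException("Connection refused");
        IOException wrapped = wrap("kms.example.com", 9600, raw);
        // The wrapped message now carries the endpoint that failed,
        // while getCause() still exposes the original exception.
        System.out.println(wrapped.getMessage());
    }
}
```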
[jira] [Commented] (HADOOP-15080) Cat-X dependency on org.json via derived json-lib
[ https://issues.apache.org/jira/browse/HADOOP-15080?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16281777#comment-16281777 ] SammiChen commented on HADOOP-15080: Thanks [~drankye] for the review. I will commit it later. Thanks [~chris.douglas], [~ste...@apache.org] , [~mackrorysd] and [~andrew.wang] for all your support. > Cat-X dependency on org.json via derived json-lib > - > > Key: HADOOP-15080 > URL: https://issues.apache.org/jira/browse/HADOOP-15080 > Project: Hadoop Common > Issue Type: Bug > Components: fs/oss >Affects Versions: 3.0.0-beta1 >Reporter: Chris Douglas >Priority: Blocker > Attachments: HADOOP-15080-branch-3.0.0.001.patch, > HADOOP-15080-branch-3.0.0.002.patch > > > The OSS SDK has a dependency on json-lib. In LEGAL-245, the org.json library > (from which json-lib may be derived) is released under a > [category-x|https://www.apache.org/legal/resolved.html#json] license. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-15098) TestClusterTopology#testChooseRandom fails intermittently
[ https://issues.apache.org/jira/browse/HADOOP-15098?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Zsolt Venczel updated HADOOP-15098: --- Status: Patch Available (was: In Progress) > TestClusterTopology#testChooseRandom fails intermittently > - > > Key: HADOOP-15098 > URL: https://issues.apache.org/jira/browse/HADOOP-15098 > Project: Hadoop Common > Issue Type: Bug > Components: test >Affects Versions: 3.0.0 >Reporter: Zsolt Venczel >Assignee: Zsolt Venczel > Labels: flaky-test > Attachments: HADOOP-15098.01.patch > > > Flaky test failure: > {code:java} > java.lang.AssertionError > Error > Not choosing nodes randomly > Stack Trace > java.lang.AssertionError: Not choosing nodes randomly > at > org.apache.hadoop.net.TestClusterTopology.testChooseRandom(TestClusterTopology.java:170) > {code} -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-15098) TestClusterTopology#testChooseRandom fails intermittently
[ https://issues.apache.org/jira/browse/HADOOP-15098?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Zsolt Venczel updated HADOOP-15098: --- Attachment: HADOOP-15098.01.patch Check ChiSquareTest three times, as suggested by Knuth, Donald E., The Art of Computer Programming, vol. 2, 2nd ed., Reading, MA: Addison-Wesley, 1981, p. 44 > TestClusterTopology#testChooseRandom fails intermittently > - > > Key: HADOOP-15098 > URL: https://issues.apache.org/jira/browse/HADOOP-15098 > Project: Hadoop Common > Issue Type: Bug > Components: test >Affects Versions: 3.0.0 >Reporter: Zsolt Venczel >Assignee: Zsolt Venczel > Labels: flaky-test > Attachments: HADOOP-15098.01.patch > > > Flaky test failure: > {code:java} > java.lang.AssertionError > Error > Not choosing nodes randomly > Stack Trace > java.lang.AssertionError: Not choosing nodes randomly > at > org.apache.hadoop.net.TestClusterTopology.testChooseRandom(TestClusterTopology.java:170) > {code} -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
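The retry structure behind that suggestion is simple. This is a sketch only: the real test drives commons-math's ChiSquareTest, whereas the toy uniformity check here just buckets draws from java.util.Random. A check with spurious-failure probability p, repeated up to three independent times, reports a false failure only with probability roughly p cubed.

```java
import java.util.Random;
import java.util.function.Supplier;

public class RetriedRandomnessCheck {
    // Pass if any of `attempts` independent runs of the check passes.
    // A genuinely broken RNG still fails all attempts.
    static boolean passes(Supplier<Boolean> check, int attempts) {
        for (int i = 0; i < attempts; i++) {
            if (check.get()) {
                return true;
            }
        }
        return false;
    }

    // Toy stand-in for the chi-square check: are 10-bucket counts of
    // 10,000 uniform draws all within a loose band around the expected 1,000?
    static boolean roughlyUniform(Random rng) {
        int[] buckets = new int[10];
        for (int i = 0; i < 10_000; i++) {
            buckets[rng.nextInt(10)]++;
        }
        for (int count : buckets) {
            if (count < 800 || count > 1200) {
                return false;
            }
        }
        return true;
    }

    public static void main(String[] args) {
        Random rng = new Random(42);
        System.out.println(passes(() -> roughlyUniform(rng), 3));
    }
}
```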
[jira] [Commented] (HADOOP-15097) AbstractContractDeleteTest::testDeleteNonEmptyDirRecursive with misleading path
[ https://issues.apache.org/jira/browse/HADOOP-15097?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16281743#comment-16281743 ] Steve Loughran commented on HADOOP-15097: - Well spotted; no doubt a copy and paste error. its important to have unique pathnames so that eventually consistent object stores don't have tests interfering with each other. Patch welcome, ideally tested against any object stores to which you have credentials > AbstractContractDeleteTest::testDeleteNonEmptyDirRecursive with misleading > path > --- > > Key: HADOOP-15097 > URL: https://issues.apache.org/jira/browse/HADOOP-15097 > Project: Hadoop Common > Issue Type: Bug > Components: fs, test >Affects Versions: 3.0.0-beta1 >Reporter: zhoutai.zt >Priority: Minor > > @Test > public void testDeleteNonEmptyDirRecursive() throws Throwable { > Path path = path("{color:red}testDeleteNonEmptyDirNonRecursive{color}"); > mkdirs(path); > Path file = new Path(path, "childfile"); > ContractTestUtils.writeTextFile(getFileSystem(), file, "goodbye, world", > true); > assertDeleted(path, true); > assertPathDoesNotExist("not deleted", file); > } > change testDeleteNonEmptyDirNonRecursive to testDeleteNonEmptyDirRecursive -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-15012) Add readahead, dropbehind, and unbuffer to StreamCapabilities
[ https://issues.apache.org/jira/browse/HADOOP-15012?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16281741#comment-16281741 ] Steve Loughran commented on HADOOP-15012: - This is the same patch as in trunk, right? So we shouldn't be making changes here which increase the diff between branch-2 & trunk? Given that, even though checkstyle is complaining, +1 to this patch > Add readahead, dropbehind, and unbuffer to StreamCapabilities > - > > Key: HADOOP-15012 > URL: https://issues.apache.org/jira/browse/HADOOP-15012 > Project: Hadoop Common > Issue Type: Improvement > Components: fs >Affects Versions: 2.9.0 >Reporter: John Zhuge >Assignee: John Zhuge > Fix For: 3.1.0 > > Attachments: HADOOP-15012.branch-2.01.patch > > > A split from HADOOP-14872 to track changes that enhance StreamCapabilities > class with READAHEAD, DROPBEHIND, and UNBUFFER capability. > Discussions and code reviews are done in HADOOP-14872. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
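The caller-side benefit of the capability flags can be modeled without Hadoop. The interfaces below only mirror the shape of Hadoop's StreamCapabilities and CanUnbuffer (they are not the real classes, and the capability string is illustrative): a caller probes for support instead of calling blindly and catching ClassCastException or UnsupportedOperationException, which is the log-spam problem HADOOP-14872 describes.

```java
public class CapabilityProbeDemo {
    // Minimal mirrors of Hadoop's StreamCapabilities / CanUnbuffer shape.
    interface StreamCapabilities {
        boolean hasCapability(String capability);
    }
    interface CanUnbuffer {
        void unbuffer();
    }

    // Illustrative capability key, in the style of Hadoop's constants.
    static final String UNBUFFER = "in:unbuffer";

    // A stream that both advertises and implements unbuffer.
    static class BufferedInput implements StreamCapabilities, CanUnbuffer {
        boolean buffered = true;
        @Override public boolean hasCapability(String c) {
            return UNBUFFER.equals(c);
        }
        @Override public void unbuffer() { buffered = false; }
    }

    // Probe first, act second: no exceptions thrown or caught on the
    // common path, so unsupported streams cost nothing but a false return.
    static boolean unbufferIfSupported(Object in) {
        if (in instanceof StreamCapabilities
                && ((StreamCapabilities) in).hasCapability(UNBUFFER)) {
            ((CanUnbuffer) in).unbuffer();
            return true;
        }
        return false;
    }

    public static void main(String[] args) {
        BufferedInput in = new BufferedInput();
        System.out.println(unbufferIfSupported(in) && !in.buffered);
        System.out.println(unbufferIfSupported(new Object()));
    }
}
```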
[jira] [Commented] (HADOOP-15080) Cat-X dependency on org.json via derived json-lib
[ https://issues.apache.org/jira/browse/HADOOP-15080?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16281673#comment-16281673 ] genericqa commented on HADOOP-15080: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 10s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | || || || || {color:brown} branch-3.0.0 Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 31s{color} | {color:green} branch-3.0.0 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 11s{color} | {color:green} branch-3.0.0 passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 15s{color} | {color:green} branch-3.0.0 passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 26m 31s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 12s{color} | {color:green} branch-3.0.0 passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 12s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 8s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 8s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 11s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 1s{color} | {color:green} The patch has no ill-formed XML file. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 11m 13s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 11s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 10s{color} | {color:green} hadoop-project in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 19s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 40m 4s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:20ca677 | | JIRA Issue | HADOOP-15080 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12901021/HADOOP-15080-branch-3.0.0.002.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient xml | | uname | Linux 706eed3ae3f5 3.13.0-135-generic #184-Ubuntu SMP Wed Oct 18 11:55:51 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | branch-3.0.0 / 7f35409 | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_151 | | Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/13800/testReport/ | | Max. process+thread count | 298 (vs. ulimit of 5000) | | modules | C: hadoop-project U: hadoop-project | | Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/13800/console | | Powered by | Apache Yetus 0.7.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > Cat-X dependency on org.json via derived json-lib > - > > Key: HADOOP-15080 > URL: https://issues.apache.org/jira/browse/HADOOP-15080 > Project: Hadoop Common > Issue Type: Bug > Components: fs/oss >Affects Versions: 3.0.0-beta1 >Reporter: Chris Douglas >Priority: Blocker > Attachments:
[jira] [Moved] (HADOOP-15098) TestClusterTopology#testChooseRandom fails intermittently
[ https://issues.apache.org/jira/browse/HADOOP-15098?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Zsolt Venczel moved HDFS-12892 to HADOOP-15098: --- Affects Version/s: (was: 3.0.0) 3.0.0 Target Version/s: 3.0.0 (was: 3.0.0) Component/s: (was: test) test Key: HADOOP-15098 (was: HDFS-12892) Project: Hadoop Common (was: Hadoop HDFS) > TestClusterTopology#testChooseRandom fails intermittently > - > > Key: HADOOP-15098 > URL: https://issues.apache.org/jira/browse/HADOOP-15098 > Project: Hadoop Common > Issue Type: Bug > Components: test >Affects Versions: 3.0.0 >Reporter: Zsolt Venczel >Assignee: Zsolt Venczel > Labels: flaky-test > > Flaky test failure: > {code:java} > java.lang.AssertionError > Error > Not choosing nodes randomly > Stack Trace > java.lang.AssertionError: Not choosing nodes randomly > at > org.apache.hadoop.net.TestClusterTopology.testChooseRandom(TestClusterTopology.java:170) > {code}
[jira] [Created] (HADOOP-15097) AbstractContractDeleteTest::testDeleteNonEmptyDirRecursive with misleading path
zhoutai.zt created HADOOP-15097: --- Summary: AbstractContractDeleteTest::testDeleteNonEmptyDirRecursive with misleading path Key: HADOOP-15097 URL: https://issues.apache.org/jira/browse/HADOOP-15097 Project: Hadoop Common Issue Type: Bug Components: fs, test Affects Versions: 3.0.0-beta1 Reporter: zhoutai.zt Priority: Minor

{code:java}
@Test
public void testDeleteNonEmptyDirRecursive() throws Throwable {
  Path path = path("testDeleteNonEmptyDirNonRecursive");
  mkdirs(path);
  Path file = new Path(path, "childfile");
  ContractTestUtils.writeTextFile(getFileSystem(), file, "goodbye, world", true);
  assertDeleted(path, true);
  assertPathDoesNotExist("not deleted", file);
}
{code}

The path string {{testDeleteNonEmptyDirNonRecursive}} should be changed to {{testDeleteNonEmptyDirRecursive}} so that it matches the name of the test method.
[jira] [Updated] (HADOOP-15056) Fix TestUnbuffer#testUnbufferException failure
[ https://issues.apache.org/jira/browse/HADOOP-15056?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jack Bearden updated HADOOP-15056: -- Attachment: HADOOP-15056.007.patch #7 Correcting a missed whitespace character in the error message > Fix TestUnbuffer#testUnbufferException failure > -- > > Key: HADOOP-15056 > URL: https://issues.apache.org/jira/browse/HADOOP-15056 > Project: Hadoop Common > Issue Type: Improvement >Affects Versions: 2.9.0 >Reporter: Jack Bearden >Assignee: Jack Bearden >Priority: Minor > Attachments: HADOOP-15056.001.patch, HADOOP-15056.002.patch, > HADOOP-15056.003.patch, HADOOP-15056.004.patch, HADOOP-15056.005.patch, > HADOOP-15056.006.patch, HADOOP-15056.007.patch > > > Hello! I am a new contributor and actually contributing to open source for > the very first time. :) > I pulled down Hadoop today and when running the tests I encountered a failure > with the TestUnbuffer#testUnbufferException test. > The unbuffer code has recently gone through some changes and I believe this > test case may have been overlooked. Using today's git commit > (659e85e304d070f9908a96cf6a0e1cbafde6a434), and upon running the test case, > there is an expected mock for an UnsupportedOperationException that > is no longer being thrown. > It would appear that a test like this would be valuable, so my initial > proposed patch did not remove it. Instead, I removed the conditions that were > guarding the cast from being able to fire -- as was the previous behavior. > Now when we encounter an object that doesn't have the UNBUFFERED > StreamCapability, it will throw an error as it did prior to the recent > changes. > Please review and let me know what you think! :D
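The guard described in the issue can be sketched as follows. The interfaces are re-declared locally and are not the real org.apache.hadoop.fs types; the class and constant names are illustrative assumptions, not the actual patch:

```java
// Self-contained sketch: unbuffer() is only forwarded when the wrapped
// stream advertises the UNBUFFER capability; otherwise an
// UnsupportedOperationException is thrown, as in the pre-change behavior.
// Local re-declarations, NOT the real org.apache.hadoop.fs types.
interface CanUnbuffer {
    void unbuffer();
}

interface StreamCapabilities {
    String UNBUFFER = "in:unbuffer";  // value assumed for illustration
    boolean hasCapability(String capability);
}

class UnbufferGuard {
    private final Object in;  // the wrapped stream

    UnbufferGuard(Object in) { this.in = in; }

    void unbuffer() {
        if (in instanceof StreamCapabilities
            && ((StreamCapabilities) in).hasCapability(StreamCapabilities.UNBUFFER)
            && in instanceof CanUnbuffer) {
            ((CanUnbuffer) in).unbuffer();
        } else {
            throw new UnsupportedOperationException(
                "this stream does not support unbuffering.");
        }
    }
}

public class UnbufferDemo {
    public static void main(String[] args) {
        class Unbufferable implements CanUnbuffer, StreamCapabilities {
            public void unbuffer() { System.out.println("buffers released"); }
            public boolean hasCapability(String c) {
                return StreamCapabilities.UNBUFFER.equals(c);
            }
        }
        new UnbufferGuard(new Unbufferable()).unbuffer();  // prints: buffers released
        try {
            new UnbufferGuard(new Object()).unbuffer();
        } catch (UnsupportedOperationException e) {
            System.out.println("UOE: " + e.getMessage());
        }
    }
}
```

This restores the old "fail loudly" behavior for streams without the capability while still letting capable streams release their buffers.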
[jira] [Commented] (HADOOP-15056) Fix TestUnbuffer#testUnbufferException failure
[ https://issues.apache.org/jira/browse/HADOOP-15056?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16281578#comment-16281578 ] Jack Bearden commented on HADOOP-15056: --- Thanks [~xiaochen], patch #6 has fixes for the last review. Let me know if you find anything else :)
[jira] [Updated] (HADOOP-15056) Fix TestUnbuffer#testUnbufferException failure
[ https://issues.apache.org/jira/browse/HADOOP-15056?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jack Bearden updated HADOOP-15056: -- Attachment: HADOOP-15056.006.patch
[jira] [Commented] (HADOOP-15080) Cat-X dependency on org.json via derived json-lib
[ https://issues.apache.org/jira/browse/HADOOP-15080?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16281518#comment-16281518 ] Kai Zheng commented on HADOOP-15080: Thanks [~Sammi] for the quick tweak on this! The change LGTM and +1, also having a check on the new sdk and it looks clean. > Cat-X dependency on org.json via derived json-lib > - > > Key: HADOOP-15080 > URL: https://issues.apache.org/jira/browse/HADOOP-15080 > Project: Hadoop Common > Issue Type: Bug > Components: fs/oss >Affects Versions: 3.0.0-beta1 >Reporter: Chris Douglas >Priority: Blocker > Attachments: HADOOP-15080-branch-3.0.0.001.patch, > HADOOP-15080-branch-3.0.0.002.patch > > > The OSS SDK has a dependency on json-lib. In LEGAL-245, the org.json library > (from which json-lib may be derived) is released under a > [category-x|https://www.apache.org/legal/resolved.html#json] license.
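For builds stuck on an SDK version that still pulls json-lib transitively, the generic remedy is a Maven dependency exclusion. The fragment below is a hedged sketch; the Maven coordinates are assumptions for illustration, not taken from the attached patches:

```xml
<!-- Hypothetical pom.xml fragment: exclude the category-x json-lib
     artifact from the OSS SDK's transitive dependencies.
     Coordinates are illustrative assumptions. -->
<dependency>
  <groupId>com.aliyun.oss</groupId>
  <artifactId>aliyun-sdk-oss</artifactId>
  <exclusions>
    <exclusion>
      <groupId>net.sf.json-lib</groupId>
      <artifactId>json-lib</artifactId>
    </exclusion>
  </exclusions>
</dependency>
```

The fix taken here instead upgrades the SDK to 2.8.3, where json-lib is only a test-scope dependency of the SDK itself and therefore never reaches Hadoop's runtime classpath.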
[jira] [Commented] (HADOOP-15012) Add readahead, dropbehind, and unbuffer to StreamCapabilities
[ https://issues.apache.org/jira/browse/HADOOP-15012?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16281513#comment-16281513 ] genericqa commented on HADOOP-15012: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 14m 45s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | || || || || {color:brown} branch-2 Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 40s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 9m 53s{color} | {color:green} branch-2 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m 33s{color} | {color:green} branch-2 passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 40s{color} | {color:green} branch-2 passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 28s{color} | {color:green} branch-2 passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 24s{color} | {color:green} branch-2 passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 56s{color} | {color:green} branch-2 passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} 
mvndep {color} | {color:blue} 0m 15s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 46s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m 39s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} javac {color} | {color:red} 10m 39s{color} | {color:red} root generated 3 new + 1436 unchanged - 0 fixed = 1439 total (was 1436) {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 1m 41s{color} | {color:orange} root: The patch generated 1 new + 105 unchanged - 0 fixed = 106 total (was 105) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 27s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 5m 7s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 57s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 8m 37s{color} | {color:green} hadoop-common in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 22s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 9s{color} | {color:green} hadoop-azure in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 38s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 86m 11s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:17213a0 | | JIRA Issue | HADOOP-15012 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12901015/HADOOP-15012.branch-2.01.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux f8bce292021c 3.13.0-135-generic #184-Ubuntu SMP Wed Oct 18 11:55:51 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | branch-2 / 046424c | | maven | version: Apache Maven 3.3.9 (bb52d8502b132ec0a5a3f4c09453c07478323dc5; 2015-11-10T16:41:47+00:00) | | Default Java | 1.7.0_151 | | findbugs | v3.0.0 | | javac |
[jira] [Comment Edited] (HADOOP-15080) Cat-X dependency on org.json via derived json-lib
[ https://issues.apache.org/jira/browse/HADOOP-15080?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16281496#comment-16281496 ] SammiChen edited comment on HADOOP-15080 at 12/7/17 8:32 AM: - Aliyun OSS team provides oss sdk 2.8.3 to replace 2.8.1. json-lib is replaced by Jersey-json 1.9 as a "test" scope dependency of oss sdk 2.8.3. Here are my verification steps: 1. delete json-lib from the local maven repository 2. clean build Hadoop 3. all Hadoop OSS module UTs passed 4. check the local maven repository; json-lib is not downloaded
[jira] [Commented] (HADOOP-15080) Cat-X dependency on org.json via derived json-lib
[ https://issues.apache.org/jira/browse/HADOOP-15080?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16281496#comment-16281496 ] SammiChen commented on HADOOP-15080: Aliyun OSS team provides oss sdk 2.8.3 to replace 2.8.1. json-lib is replaced by Jersey-json 1.9 as a "test" scope dependency of oss sdk 2.8.3. Here are my verification steps: 1. delete json-lib from the local maven repository 2. clean build Hadoop 3. all Hadoop OSS module UTs passed 4. check the local maven repository; json-lib is not downloaded
[jira] [Comment Edited] (HADOOP-15080) Cat-X dependency on org.json via derived json-lib
[ https://issues.apache.org/jira/browse/HADOOP-15080?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16281496#comment-16281496 ] SammiChen edited comment on HADOOP-15080 at 12/7/17 8:31 AM: - Aliyun OSS team provides oss sdk 2.8.3 to replace 2.8.1. json-lib is replaced by Jersey-json 1.9 as "test" scope dependency of oss sdk 2.8.3. Here is my verification steps, 1. delete json-lib in local maven repository 2. clean compiled Hadoop 3. all Hadoop OSS module UT passed 4. check local maven repository, json-lib is not downloaded
[jira] [Updated] (HADOOP-15080) Cat-X dependency on org.json via derived json-lib
[ https://issues.apache.org/jira/browse/HADOOP-15080?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] SammiChen updated HADOOP-15080: --- Attachment: HADOOP-15080-branch-3.0.0.002.patch