[jira] [Commented] (HADOOP-15101) what testListStatusFile verified not consistent with listStatus declaration in FileSystem
[ https://issues.apache.org/jira/browse/HADOOP-15101?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16284550#comment-16284550 ] zhoutai.zt commented on HADOOP-15101: - Thanks Steve Loughran. Will the details in filesystem.md be added to FileSystem.java? At least add a link to filesystem.md. > what testListStatusFile verified not consistent with listStatus declaration > in FileSystem > --- > > Key: HADOOP-15101 > URL: https://issues.apache.org/jira/browse/HADOOP-15101 > Project: Hadoop Common > Issue Type: Bug > Components: fs, test >Affects Versions: 3.0.0-beta1 >Reporter: zhoutai.zt >Priority: Critical > > {code} > @Test > public void testListStatusFile() throws Throwable { > describe("test the listStatus(path) on a file"); > Path f = touchf("liststatusfile"); > verifyStatusArrayMatchesFile(f, getFileSystem().listStatus(f)); > } > {code} > In this case, first create a file _f_, then call listStatus on _f_, expecting > listStatus to return an array of one FileStatus. But this is not consistent > with the declaration in FileSystem, i.e. > {code} > " > List the statuses of the files/directories in the given path if the path is a > directory. > Parameters: > f given path > Returns: > the statuses of the files/directories in the given patch > " > {code} > Which is expected? The behavior in the fs contract test or in FileSystem? -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
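The semantics the contract test asserts — listStatus on a plain file yields a one-element array describing that file — can be sketched outside Hadoop. The following is an illustration only, emulating the expected behavior with java.nio rather than Hadoop's actual FileSystem code; the class and method names are hypothetical:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.stream.Stream;

// Illustration of the listStatus semantics the fs contract test expects,
// emulated with java.nio (NOT Hadoop's implementation): for a plain file,
// the result is a one-element array describing the file itself; for a
// directory, it is the statuses of the directory's children.
public class ListStatusSemantics {
    static Path[] listStatus(Path p) throws IOException {
        if (Files.isRegularFile(p)) {
            // Single-element array, as testListStatusFile asserts.
            return new Path[] { p };
        }
        try (Stream<Path> children = Files.list(p)) {
            return children.toArray(Path[]::new);
        }
    }

    public static void main(String[] args) throws IOException {
        Path dir = Files.createTempDirectory("liststatus");
        Path f = Files.createFile(dir.resolve("liststatusfile"));
        System.out.println(listStatus(f).length);   // 1: the file itself
        System.out.println(listStatus(dir).length); // 1: the one child we created
    }
}
```

Read this way, the javadoc text ("if the path is a directory") and the contract test are not in conflict: the directory case lists children, while the file case degenerates to the file's own status.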
[jira] [Issue Comment Deleted] (HADOOP-15101) what testListStatusFile verified not consistent with listStatus declaration in FileSystem
[ https://issues.apache.org/jira/browse/HADOOP-15101?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] zhoutai.zt updated HADOOP-15101: Comment: was deleted (was: Where can I find the file filesystem.md?) > what testListStatusFile verified not consistent with listStatus declaration > in FileSystem
[jira] [Commented] (HADOOP-15101) what testListStatusFile verified not consistent with listStatus declaration in FileSystem
[ https://issues.apache.org/jira/browse/HADOOP-15101?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16284546#comment-16284546 ] zhoutai.zt commented on HADOOP-15101: - Where can I find the file filesystem.md? > what testListStatusFile verified not consistent with listStatus declaration > in FileSystem
[jira] [Commented] (HADOOP-14788) Credentials readTokenStorageFile to stop wrapping IOEs in IOEs
[ https://issues.apache.org/jira/browse/HADOOP-14788?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16284479#comment-16284479 ] genericqa commented on HADOOP-14788: | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 9m 38s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 19s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 12m 52s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 37s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 5s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 11m 44s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 37s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 54s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 45s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m 35s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 11m 35s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 37s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 1s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 9m 46s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 36s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 54s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 8m 39s{color} | {color:green} hadoop-common in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 33s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 92m 30s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 | | JIRA Issue | HADOOP-14788 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12892863/HADOOP-14788.003.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux e1c58f3eb4b5 3.13.0-135-generic #184-Ubuntu SMP Wed Oct 18 11:55:51 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 04b84da | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_151 | | findbugs | v3.1.0-RC1 | | Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/13808/testReport/ | | Max. process+thread count | 1767 (vs. ulimit of 5000) | | modules | C: hadoop-common-project/hadoop-common U: hadoop-common-project/hadoop-common | | Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/13808/console | | Powered by | Apache Yetus 0.7.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > Credentials readTokenStorageFile to stop wrapping IOEs in IOEs > -- > > Key:
[jira] [Updated] (HADOOP-15106) FileSystem::open(PathHandle) should throw a specific exception on validation failure
[ https://issues.apache.org/jira/browse/HADOOP-15106?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Chris Douglas updated HADOOP-15106: --- Attachment: HADOOP-15106.00.patch Add exception type. Will rebase on HDFS-12882 after commit. > FileSystem::open(PathHandle) should throw a specific exception on validation > failure > > > Key: HADOOP-15106 > URL: https://issues.apache.org/jira/browse/HADOOP-15106 > Project: Hadoop Common > Issue Type: Improvement >Reporter: Chris Douglas >Priority: Minor > Attachments: HADOOP-15106.00.patch > > > Callers of {{FileSystem::open(PathHandle)}} cannot distinguish between I/O > errors and an invalid handle. The signature should include a specific, > checked exception for this case. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Created] (HADOOP-15106) FileSystem::open(PathHandle) should throw a specific exception on validation failure
Chris Douglas created HADOOP-15106: -- Summary: FileSystem::open(PathHandle) should throw a specific exception on validation failure Key: HADOOP-15106 URL: https://issues.apache.org/jira/browse/HADOOP-15106 Project: Hadoop Common Issue Type: Improvement Reporter: Chris Douglas Priority: Minor Callers of {{FileSystem::open(PathHandle)}} cannot distinguish between I/O errors and an invalid handle. The signature should include a specific, checked exception for this case. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
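The improvement asked for here — a specific, checked exception so callers can distinguish an invalid handle from an ordinary I/O error — can be sketched as follows. This is a self-contained illustration of the idea, not Hadoop's final API; the exception name and the stand-in `open` method are assumptions:

```java
import java.io.IOException;

// Sketch of the HADOOP-15106 proposal: a dedicated checked IOException
// subtype lets callers of open(PathHandle) tell "handle failed validation"
// apart from other I/O failures. Names here are illustrative.
class InvalidPathHandleException extends IOException {
    InvalidPathHandleException(String msg) { super(msg); }
}

public class PathHandleDemo {
    // Stand-in for FileSystem::open(PathHandle): throws the specific
    // subtype on validation failure, plain IOException otherwise.
    static String open(String handle) throws IOException {
        if (handle == null || handle.isEmpty()) {
            throw new InvalidPathHandleException("handle failed validation");
        }
        return "stream-for-" + handle;
    }

    public static void main(String[] args) {
        try {
            open("");
        } catch (InvalidPathHandleException e) {
            // The caller can react specifically, e.g. re-resolve the path.
            System.out.println("stale handle: " + e.getMessage());
        } catch (IOException e) {
            System.out.println("other I/O error");
        }
    }
}
```

Because the subtype extends IOException, existing callers that only catch IOException keep compiling; only callers that care about the distinction need the extra catch clause.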
[jira] [Commented] (HADOOP-15085) Output streams closed with IOUtils suppressing write errors
[ https://issues.apache.org/jira/browse/HADOOP-15085?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16284091#comment-16284091 ] Jason Lowe commented on HADOOP-15085: - Thanks for the patch! For input streams we want the close errors to be suppressed because we've read what we needed to read. Anything else that happens to that stream after that isn't interesting and shouldn't fail the operation. The same isn't true for output streams since the things written aren't guaranteed to be persisted until the close() completes successfully. So the cases where we are using try-with-resources on input streams are undesirable since we don't want errors on close to be a problem. In short, we shouldn't apply the code transformation to input streams, just to the output streams. bq. The one remaining checkstyle issue is about an empty block in a try-with-resources block where all the work is done in the resource section. Nit: I think the code would be a bit more readable if that inner try-with-resources were just the constructor call followed by a close call, to make it explicit that the stream is being opened and then immediately closed. The try-with-resources with an empty block isn't adding readability or brevity in that case. > Output streams closed with IOUtils suppressing write errors > --- > > Key: HADOOP-15085 > URL: https://issues.apache.org/jira/browse/HADOOP-15085 > Project: Hadoop Common > Issue Type: Bug >Reporter: Jason Lowe >Assignee: Jim Brennan > Attachments: HADOOP-15085.001.patch, HADOOP-15085.002.patch > > > There are a few places in hadoop-common that are closing an output stream > with IOUtils.cleanupWithLogger like this: > {code} > try { > ...write to outStream... > } finally { > IOUtils.cleanupWithLogger(LOG, outStream); > } > {code} > This suppresses any IOException that occurs during the close() method which > could lead to partial/corrupted output without throwing a corresponding > exception. 
The code should either use try-with-resources or explicitly close > the stream within the try block so the exception thrown during close() is > properly propagated as exceptions during write operations are. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
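The failure mode described in this issue can be demonstrated with a stand-in stream whose close() fails (as a buffered stream's final flush might). The stream class below is hypothetical, built only to show the difference between swallowing the close error (the IOUtils.cleanupWithLogger pattern) and letting try-with-resources propagate it:

```java
import java.io.IOException;
import java.io.OutputStream;

// Demonstrates the HADOOP-15085 bug pattern. FailsOnClose is a stand-in for
// a stream whose buffered data only hits storage on close(), and whose
// close() can therefore fail with the real write error.
public class CloseErrorDemo {
    static class FailsOnClose extends OutputStream {
        @Override public void write(int b) { /* buffered; appears to succeed */ }
        @Override public void close() throws IOException {
            throw new IOException("flush on close failed");
        }
    }

    // The anti-pattern: close in finally with the exception swallowed.
    static boolean writeSwallowingCloseError() {
        OutputStream out = new FailsOnClose();
        try {
            out.write(42);
            return true;                 // caller believes the data persisted
        } catch (IOException e) {
            return false;
        } finally {
            try { out.close(); } catch (IOException ignored) { } // error lost
        }
    }

    // The fix: try-with-resources propagates the close() failure.
    static void writePropagatingCloseError() throws IOException {
        try (OutputStream out = new FailsOnClose()) {
            out.write(42);
        }                                // close() failure surfaces here
    }

    public static void main(String[] args) {
        System.out.println(writeSwallowingCloseError()); // true: corruption unnoticed
        try {
            writePropagatingCloseError();
        } catch (IOException e) {
            System.out.println("caught: " + e.getMessage());
        }
    }
}
```

This also illustrates the review comment above: the transformation matters for output streams, where close() is part of the write path, and not for input streams, where a close failure after a successful read is harmless.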
[jira] [Updated] (HADOOP-15042) Azure PageBlobInputStream.skip() can return negative value when numberOfPagesRemaining is 0
[ https://issues.apache.org/jira/browse/HADOOP-15042?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrew Wang updated HADOOP-15042: - Fix Version/s: (was: 3.0.0) 3.0.1 3.1.0 > Azure PageBlobInputStream.skip() can return negative value when > numberOfPagesRemaining is 0 > --- > > Key: HADOOP-15042 > URL: https://issues.apache.org/jira/browse/HADOOP-15042 > Project: Hadoop Common > Issue Type: Bug > Components: fs/azure >Affects Versions: 2.9.0 >Reporter: Rajesh Balamohan >Assignee: Rajesh Balamohan >Priority: Minor > Fix For: 3.1.0, 2.10.0, 3.0.1 > > Attachments: HADOOP-15042.001.patch > > > {{PageBlobInputStream::skip-->skipImpl}} returns negative values when > {{numberOfPagesRemaining=0}}. This can cause wrong position to be set in > NativeAzureFileSystem::seek() and can lead to errors. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
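The defensive shape of the fix — computing how far skip() can actually advance and clamping at zero when nothing remains, instead of returning a negative value that then corrupts the seek position — can be sketched as below. The fields and arithmetic are illustrative, not the actual PageBlobInputStream code:

```java
// Sketch of the HADOOP-15042 fix idea: skip() must never return a negative
// value, even when no pages remain (the numberOfPagesRemaining == 0 case),
// because callers use the return value to advance their position.
public class SkipDemo {
    long position;   // current offset in the stream
    long length;     // total bytes available

    SkipDemo(long position, long length) {
        this.position = position;
        this.length = length;
    }

    long skip(long n) {
        long remaining = length - position;                 // 0 at end of stream
        long skipped = Math.max(0, Math.min(n, remaining)); // clamp: never negative
        position += skipped;
        return skipped;
    }

    public static void main(String[] args) {
        SkipDemo atEnd = new SkipDemo(100, 100);
        System.out.println(atEnd.skip(10)); // 0, not a negative value
    }
}
```

A negative return here would propagate into NativeAzureFileSystem::seek() as described above, which is why clamping at the source is the safer contract.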
[jira] [Updated] (HADOOP-15080) Aliyun OSS: update oss sdk from 2.8.1 to 2.8.3 to remove its dependency on Cat-x "json-lib"
[ https://issues.apache.org/jira/browse/HADOOP-15080?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrew Wang updated HADOOP-15080: - Fix Version/s: (was: 3.0.1) > Aliyun OSS: update oss sdk from 2.8.1 to 2.8.3 to remove its dependency on > Cat-x "json-lib" > --- > > Key: HADOOP-15080 > URL: https://issues.apache.org/jira/browse/HADOOP-15080 > Project: Hadoop Common > Issue Type: Bug > Components: fs/oss >Affects Versions: 3.0.0-beta1 >Reporter: Chris Douglas >Assignee: SammiChen >Priority: Blocker > Fix For: 3.0.0, 3.1.0, 2.10.0, 2.9.1 > > Attachments: HADOOP-15080-branch-3.0.0.001.patch, > HADOOP-15080-branch-3.0.0.002.patch > > > Cat-X dependency on org.json via derived json-lib. OSS SDK has a dependency > on json-lib. In LEGAL-245, the org.json library (from which json-lib may be > derived) is released under a > [category-x|https://www.apache.org/legal/resolved.html#json] license. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-15059) 3.0 deployment cannot work with old version MR tar ball which breaks rolling upgrade
[ https://issues.apache.org/jira/browse/HADOOP-15059?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16284023#comment-16284023 ] Andrew Wang commented on HADOOP-15059: -- Thanks everyone for the great work on this issue! > 3.0 deployment cannot work with old version MR tar ball which breaks rolling > upgrade > > > Key: HADOOP-15059 > URL: https://issues.apache.org/jira/browse/HADOOP-15059 > Project: Hadoop Common > Issue Type: Bug > Components: security >Reporter: Junping Du >Assignee: Jason Lowe >Priority: Blocker > Fix For: 3.0.0 > > Attachments: HADOOP-15059.001.patch, HADOOP-15059.002.patch, > HADOOP-15059.003.patch, HADOOP-15059.004.patch, HADOOP-15059.005.patch, > HADOOP-15059.006.patch > > > I tried to deploy 3.0 cluster with 2.9 MR tar ball. The MR job is failed > because following error: > {noformat} > 2017-11-21 12:42:50,911 INFO [main] > org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Created MRAppMaster for > application appattempt_1511295641738_0003_01 > 2017-11-21 12:42:51,070 WARN [main] org.apache.hadoop.util.NativeCodeLoader: > Unable to load native-hadoop library for your platform... 
using builtin-java > classes where applicable > 2017-11-21 12:42:51,118 FATAL [main] > org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Error starting MRAppMaster > java.lang.RuntimeException: Unable to determine current user > at > org.apache.hadoop.conf.Configuration$Resource.getRestrictParserDefault(Configuration.java:254) > at > org.apache.hadoop.conf.Configuration$Resource.(Configuration.java:220) > at > org.apache.hadoop.conf.Configuration$Resource.(Configuration.java:212) > at > org.apache.hadoop.conf.Configuration.addResource(Configuration.java:888) > at > org.apache.hadoop.mapreduce.v2.app.MRAppMaster.main(MRAppMaster.java:1638) > Caused by: java.io.IOException: Exception reading > /tmp/nm-local-dir/usercache/jdu/appcache/application_1511295641738_0003/container_e03_1511295641738_0003_01_01/container_tokens > at > org.apache.hadoop.security.Credentials.readTokenStorageFile(Credentials.java:208) > at > org.apache.hadoop.security.UserGroupInformation.loginUserFromSubject(UserGroupInformation.java:907) > at > org.apache.hadoop.security.UserGroupInformation.getLoginUser(UserGroupInformation.java:820) > at > org.apache.hadoop.security.UserGroupInformation.getCurrentUser(UserGroupInformation.java:689) > at > org.apache.hadoop.conf.Configuration$Resource.getRestrictParserDefault(Configuration.java:252) > ... 4 more > Caused by: java.io.IOException: Unknown version 1 in token storage. > at > org.apache.hadoop.security.Credentials.readTokenStorageStream(Credentials.java:226) > at > org.apache.hadoop.security.Credentials.readTokenStorageFile(Credentials.java:205) > ... 8 more > 2017-11-21 12:42:51,122 INFO [main] org.apache.hadoop.util.ExitUtil: Exiting > with status 1: java.lang.RuntimeException: Unable to determine current user > {noformat} > I think it is due to token incompatiblity change between 2.9 and 3.0. 
As we > claim "rolling upgrade" is supported in Hadoop 3, we should fix this before > we ship 3.0 otherwise all MR running applications will get stuck during/after > upgrade. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Resolved] (HADOOP-15102) HADOOP-14831
[ https://issues.apache.org/jira/browse/HADOOP-15102?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Chris Douglas resolved HADOOP-15102. Resolution: Invalid > HADOOP-14831 > > > Key: HADOOP-15102 > URL: https://issues.apache.org/jira/browse/HADOOP-15102 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3, tools >Reporter: Steve Loughran > -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-15059) 3.0 deployment cannot work with old version MR tar ball which breaks rolling upgrade
[ https://issues.apache.org/jira/browse/HADOOP-15059?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16283820#comment-16283820 ] Hudson commented on HADOOP-15059: - SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13348 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/13348/]) HADOOP-15059. Undoing the switch of Credentials to PB format as default (vinodkv: rev f19638333b11da6dcab9a964e73a49947b8390fd) * (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/DelegationTokenFetcher.java * (edit) hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/token/TestDtUtilShell.java * (edit) hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/token/DtFileOperations.java * (edit) hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/Credentials.java > 3.0 deployment cannot work with old version MR tar ball which breaks rolling > upgrade
[jira] [Commented] (HADOOP-15104) AliyunOSS: change the default value of max error retry
[ https://issues.apache.org/jira/browse/HADOOP-15104?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16283800#comment-16283800 ] Hudson commented on HADOOP-15104: - SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13347 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/13347/]) HADOOP-15104. AliyunOSS: change the default value of max error retry. (zhengkai.zk: rev ce04340ec73617daff74378056a95c5d0cc0a790) * (edit) hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/Constants.java > AliyunOSS: change the default value of max error retry > -- > > Key: HADOOP-15104 > URL: https://issues.apache.org/jira/browse/HADOOP-15104 > Project: Hadoop Common > Issue Type: Improvement > Components: fs/oss >Affects Versions: 3.0.0-beta1 >Reporter: wujinhu >Assignee: wujinhu > Fix For: 3.0.0, 3.1.0 > > Attachments: HADOOP-15104.001.patch > > > Currently, default number of times we should retry errors is 20, however, > oss sdk retry delay is > {code:java} > long delay = (long)Math.pow(2, retries) * 0.3 > {code} > when one error occurs. So, if we retry 20 times, sleep time will be about > 3.64 days and it is unacceptable. So we should change the default behavior. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
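The "about 3.64 days" figure in the issue description follows directly from the quoted backoff formula, assuming the delay is in seconds. A quick check of the arithmetic:

```java
// Verifies the arithmetic behind HADOOP-15104: summing the OSS SDK backoff
// delay of (long) Math.pow(2, retries) * 0.3 over 20 retries (delay assumed
// to be in seconds) gives roughly 3.64 days of total sleep time.
public class RetryDelayDemo {
    public static void main(String[] args) {
        double totalSeconds = 0;
        for (int retries = 0; retries < 20; retries++) {
            totalSeconds += (long) Math.pow(2, retries) * 0.3;
        }
        // Sum of 2^0..2^19 is 2^20 - 1 = 1048575; times 0.3 s is ~314572 s.
        System.out.printf("total sleep: %.2f days%n", totalSeconds / 86400.0);
    }
}
```

The exponential growth means almost all of that time comes from the last few retries, which is why lowering the retry cap (rather than the base delay) is the effective change.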
[jira] [Updated] (HADOOP-15059) 3.0 deployment cannot work with old version MR tar ball which breaks rolling upgrade
[ https://issues.apache.org/jira/browse/HADOOP-15059?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vinod Kumar Vavilapalli updated HADOOP-15059: - Resolution: Fixed Fix Version/s: 3.0.0 Status: Resolved (was: Patch Available) I just committed 005 patch to trunk, branch-3.0 and branch-3.0.0. Thanks [~jlowe] for the patch and the quick turn-around! Thanks [~djp] for finding the issue, [~rchiang] for verifying the fix and [~daryn] for the reviews. > 3.0 deployment cannot work with old version MR tar ball which breaks rolling > upgrade
[jira] [Commented] (HADOOP-13974) S3a CLI to support list/purge of pending multipart commits
[ https://issues.apache.org/jira/browse/HADOOP-13974?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16283715#comment-16283715 ] Steve Loughran commented on HADOOP-13974: - Looks pretty good General * check the import ordering w.r.t the style rules * I can see that retry stuff is useful, but also it's going to have to be something we keep an eye on maintenance-wise. * need to move {{S3ATestUtils.listMultipartUploads()}} to this * there's some shiny new Java 8 code in S3AUtils, like applyLocatedFiles(), which work on RemoteIterator. These could be expanded to take any RemoteIterator/subclass thereof, maybe, which would actually be something to have in hadoop common for broader use. {{MultipartTestUtils.assertNoUploadsAt()}} would be an example use * MultipartUtils L210: can simplify to {{return batchIterator.hasNext();}}. S3AFileSystem L772. Good point. They used to be bonded to the destination path, but we've moved off that: we could just create a single instance here. Want to change it? ITestS3GuardToolLocal: * good Q. about using eventually() in listings. I don't know what happens there, but I'd hope that there's more list consistency here. (Fewer entries and you certainly need to be able to map on subsequent posts to the outstanding MPU * If uploadCommandAssertCount really is at risk of failing, the output should be logged or included in the fail(). Simplest to log > S3a CLI to support list/purge of pending multipart commits > -- > > Key: HADOOP-13974 > URL: https://issues.apache.org/jira/browse/HADOOP-13974 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Affects Versions: 3.0.0-beta1 >Reporter: Steve Loughran >Assignee: Aaron Fabbri > Attachments: HADOOP-13974.001.patch, HADOOP-13974.002.patch, > HADOOP-13974.003.patch, HADOOP-13974.004.patch > > > The S3A CLI will need to be able to list and delete pending multipart > commits. > We can do the cleanup already via fs.s3a properties. 
The CLI will let scripts > stat for outstanding data (have a different exit code) and permit batch jobs > to explicitly trigger cleanups. > This will become critical with the multipart committer, as there's a > significantly higher likelihood of commits remaining outstanding. > We may also want to be able to enumerate/cancel all pending commits in the FS > tree -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
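The review point about generalizing the Java 8 helpers in S3AUtils (such as applyLocatedFiles()) so they work over any RemoteIterator can be sketched as follows. RemoteIterator is stubbed here with a plain java.util.Iterator to keep the example self-contained; the helper name is hypothetical:

```java
import java.util.Arrays;
import java.util.Iterator;
import java.util.function.Consumer;

// Sketch of the review suggestion: a generic helper that applies an action
// to every element of an iterator-like source and returns the count, so
// assertions such as "no outstanding uploads" become a count check. Hadoop's
// real RemoteIterator additionally declares IOException on hasNext()/next(),
// which a hadoop-common version of this helper would have to propagate.
public class ApplyAll {
    static <T> long applyAll(Iterator<T> it, Consumer<T> action) {
        long count = 0;
        while (it.hasNext()) {
            action.accept(it.next());
            count++;
        }
        return count;
    }

    public static void main(String[] args) {
        long n = applyAll(Arrays.asList("a", "b").iterator(), System.out::println);
        System.out.println(n); // 2
    }
}
```

A helper like this, parameterized over the iterator type, is what would let MultipartTestUtils.assertNoUploadsAt() and similar callers share one traversal routine instead of duplicating the loop.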
[jira] [Updated] (HADOOP-15104) AliyunOSS: change the default value of max error retry
[ https://issues.apache.org/jira/browse/HADOOP-15104?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kai Zheng updated HADOOP-15104: --- Fix Version/s: 3.1.0 > AliyunOSS: change the default value of max error retry > -- > > Key: HADOOP-15104 > URL: https://issues.apache.org/jira/browse/HADOOP-15104 > Project: Hadoop Common > Issue Type: Improvement > Components: fs/oss >Affects Versions: 3.0.0-beta1 >Reporter: wujinhu >Assignee: wujinhu > Fix For: 3.0.0, 3.1.0 > > Attachments: HADOOP-15104.001.patch > > > Currently, the default number of times we retry errors is 20; however, the OSS SDK computes the retry delay as > {code:java} > long delay = (long)Math.pow(2, retries) * 0.3 > {code} > when an error occurs. So if we retry 20 times, the cumulative sleep time will be about > 3.64 days, which is unacceptable. We should change the default behavior.
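The 3.64-day figure quoted above can be verified by summing the exponential delays. The sketch below is an illustrative calculation based on the quoted formula, not the OSS SDK's actual retry code:

```java
public class OssRetryDelay {

    /**
     * Cumulative sleep, in seconds, if every one of maxRetries attempts fails,
     * using the quoted formula: delay = 2^retries * 0.3 seconds per attempt.
     */
    static double totalSleepSeconds(int maxRetries) {
        double total = 0;
        for (int retries = 0; retries < maxRetries; retries++) {
            total += Math.pow(2, retries) * 0.3;
        }
        return total;
    }

    public static void main(String[] args) {
        double total = totalSleepSeconds(20);
        // Geometric series: 0.3 * (2^20 - 1) = 314572.5 s, i.e. about 3.64 days
        System.out.printf("total sleep: %.1f s (~%.2f days)%n", total, total / 86400.0);
    }
}
```

The growth is dominated by the last few attempts, which is why capping the retry count (rather than the per-attempt delay) makes the biggest difference here.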
[jira] [Updated] (HADOOP-15104) AliyunOSS: change the default value of max error retry
[ https://issues.apache.org/jira/browse/HADOOP-15104?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kai Zheng updated HADOOP-15104: --- Fix Version/s: (was: 2.9.1)
[jira] [Resolved] (HADOOP-15104) AliyunOSS: change the default value of max error retry
[ https://issues.apache.org/jira/browse/HADOOP-15104?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kai Zheng resolved HADOOP-15104. Resolution: Fixed Committed to trunk and branch-3.0.0. Thanks Jinhu for the work.
[jira] [Created] (HADOOP-15105) add htrace context to HTTP requests as ? parameter
Steve Loughran created HADOOP-15105: --- Summary: add htrace context to HTTP requests as ? parameter Key: HADOOP-15105 URL: https://issues.apache.org/jira/browse/HADOOP-15105 Project: Hadoop Common Issue Type: Sub-task Components: fs/s3 Affects Versions: 3.0.0 Reporter: Steve Loughran Priority: Minor You can [add x-something query parameters to S3 REST calls|http://docs.aws.amazon.com/AmazonS3/latest/dev/LogFormat.html]; these then get included in the logs. If the htrace context were passed in this way, you could determine from the S3 logs which query/job an HTTP request ties back to.
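As a rough illustration of the idea (the parameter name `x-hadoop-trace` is invented here; this is not actual Hadoop or HTrace code), a trace identifier could be appended to a request URL so it shows up in the S3 server access logs:

```java
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;

public class TraceParam {

    /**
     * Append a hypothetical x- query parameter carrying a trace ID.
     * S3 records the full request URI in its access logs, so the
     * parameter survives into the log line for later correlation.
     */
    static String withTrace(String url, String traceId) {
        String sep = url.contains("?") ? "&" : "?";
        return url + sep + "x-hadoop-trace="
                + URLEncoder.encode(traceId, StandardCharsets.UTF_8);
    }

    public static void main(String[] args) {
        System.out.println(withTrace("https://bucket.s3.amazonaws.com/key", "query-42"));
    }
}
```

A real implementation would hook into the request-signing layer rather than string concatenation, since the parameter must be present on the signed request.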
[jira] [Updated] (HADOOP-15104) AliyunOSS: change the default value of max error retry
[ https://issues.apache.org/jira/browse/HADOOP-15104?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kai Zheng updated HADOOP-15104: --- Summary: AliyunOSS: change the default value of max error retry (was: AliyunOSS: change default max error retry)
[jira] [Work started] (HADOOP-15104) AliyunOSS: change default max error retry
[ https://issues.apache.org/jira/browse/HADOOP-15104?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Work on HADOOP-15104 started by wujinhu.
[jira] [Commented] (HADOOP-15104) AliyunOSS: change default max error retry
[ https://issues.apache.org/jira/browse/HADOOP-15104?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16283678#comment-16283678 ] Kai Zheng commented on HADOOP-15104: A minor change that makes sense. +1.
[jira] [Commented] (HADOOP-15104) AliyunOSS: change default max error retry
[ https://issues.apache.org/jira/browse/HADOOP-15104?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16283659#comment-16283659 ] wujinhu commented on HADOOP-15104: -- Attach for trunk
[jira] [Updated] (HADOOP-15104) AliyunOSS: change default max error retry
[ https://issues.apache.org/jira/browse/HADOOP-15104?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] wujinhu updated HADOOP-15104: - Attachment: HADOOP-15104.001.patch
[jira] [Updated] (HADOOP-15104) AliyunOSS: change default max error retry
[ https://issues.apache.org/jira/browse/HADOOP-15104?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] wujinhu updated HADOOP-15104: - Attachment: (was: HADOOP-15104.001.patch)
[jira] [Updated] (HADOOP-15104) AliyunOSS: change default max error retry
[ https://issues.apache.org/jira/browse/HADOOP-15104?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] wujinhu updated HADOOP-15104: - Attachment: HADOOP-15104.001.patch
[jira] [Updated] (HADOOP-15104) AliyunOSS: change default max error retry
[ https://issues.apache.org/jira/browse/HADOOP-15104?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] wujinhu updated HADOOP-15104: - Component/s: fs/oss
[jira] [Updated] (HADOOP-15104) AliyunOSS: change default max error retry
[ https://issues.apache.org/jira/browse/HADOOP-15104?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] wujinhu updated HADOOP-15104: - Fix Version/s: 3.0.0
[jira] [Updated] (HADOOP-15104) AliyunOSS: change default max error retry
[ https://issues.apache.org/jira/browse/HADOOP-15104?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] wujinhu updated HADOOP-15104: - Affects Version/s: 3.0.0-beta1
[jira] [Updated] (HADOOP-15104) AliyunOSS: change default max error retry
[ https://issues.apache.org/jira/browse/HADOOP-15104?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] wujinhu updated HADOOP-15104: - Fix Version/s: (was: 3.1.0)
[jira] [Updated] (HADOOP-15104) AliyunOSS: change default max error retry
[ https://issues.apache.org/jira/browse/HADOOP-15104?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] wujinhu updated HADOOP-15104: - Fix Version/s: 2.9.1 3.1.0
[jira] [Updated] (HADOOP-15104) AliyunOSS: change default max error retry
[ https://issues.apache.org/jira/browse/HADOOP-15104?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] wujinhu updated HADOOP-15104: - Description: Currently, the default number of times we retry errors is 20; however, the OSS SDK computes the retry delay as {code:java} long delay = (long)Math.pow(2, retries) * 0.3 {code} when an error occurs. So if we retry 20 times, the cumulative sleep time will be about 3.64 days, which is unacceptable. We should change the default behavior.
[jira] [Created] (HADOOP-15104) AliyunOSS: change default max error retry
wujinhu created HADOOP-15104: Summary: AliyunOSS: change default max error retry Key: HADOOP-15104 URL: https://issues.apache.org/jira/browse/HADOOP-15104 Project: Hadoop Common Issue Type: Improvement Reporter: wujinhu Assignee: wujinhu
[jira] [Commented] (HADOOP-15101) what testListStatusFile verified not consistent with listStatus declaration in FileSystem
[ https://issues.apache.org/jira/browse/HADOOP-15101?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16283541#comment-16283541 ] Steve Loughran commented on HADOOP-15101: - What you get should be what is defined in filesystem.md, which is taken from what HDFS does > what testListStatusFile verified not consistent with listStatus declaration > in FileSystem > --- > > Key: HADOOP-15101 > URL: https://issues.apache.org/jira/browse/HADOOP-15101 > Project: Hadoop Common > Issue Type: Bug > Components: fs, test >Affects Versions: 3.0.0-beta1 >Reporter: zhoutai.zt >Priority: Critical > > {code} > @Test > public void testListStatusFile() throws Throwable { > describe("test the listStatus(path) on a file"); > Path f = touchf("liststatusfile"); > verifyStatusArrayMatchesFile(f, getFileSystem().listStatus(f)); > } > {code} > In this case, first create a file _f_, then listStatus on _f_, expecting > listStatus to return an array of one FileStatus. But this is not consistent > with the declaration in FileSystem, i.e. > {code} > " > List the statuses of the files/directories in the given path if the path is a > directory. > Parameters: > f given path > Returns: > the statuses of the files/directories in the given patch > " > {code} > Which is expected? The behavior in the fs contract test or in FileSystem?
[jira] [Updated] (HADOOP-15101) what testListStatusFile verified not consistent with listStatus declaration in FileSystem
[ https://issues.apache.org/jira/browse/HADOOP-15101?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran updated HADOOP-15101: Description: {code} @Test public void testListStatusFile() throws Throwable { describe("test the listStatus(path) on a file"); Path f = touchf("liststatusfile"); verifyStatusArrayMatchesFile(f, getFileSystem().listStatus(f)); } {code} In this case, first create a file _f_, then listStatus on _f_, expecting listStatus to return an array of one FileStatus. But this is not consistent with the declaration in FileSystem, i.e. {code} " List the statuses of the files/directories in the given path if the path is a directory. Parameters: f given path Returns: the statuses of the files/directories in the given patch " {code} Which is expected? The behavior in the fs contract test or in FileSystem? was: @Test public void testListStatusFile() throws Throwable { describe("test the listStatus(path) on a file"); Path f = touchf("liststatusfile"); verifyStatusArrayMatchesFile(f, getFileSystem().listStatus(f)); } In this case, first create a file _f_, then listStatus on _f_,expect listStatus returns an array of one FileStatus. But this is not consistent with the declarations in FileSystem, i.e. " List the statuses of the files/directories in the given path if the path is a directory. Parameters: f given path Returns: the statuses of the files/directories in the given patch " Which is the expected? The behave in fs contract test or in FileSystem? 
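The contract behaviour the test encodes — listStatus on a plain file returns a one-element array holding that file's status, while on a directory it returns the statuses of the children — can be modelled with a toy in-memory filesystem. This is illustrative only, not Hadoop's implementation; "statuses" are just path strings here:

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;
import java.util.NoSuchElementException;

public class ToyFs {
    /** path -> true if directory, false if plain file */
    private final Map<String, Boolean> entries = new LinkedHashMap<>();

    void mkdir(String path) { entries.put(path, true); }
    void touch(String path) { entries.put(path, false); }

    /** Mirrors the documented contract: a file lists as itself; a directory lists its children. */
    String[] listStatus(String path) {
        Boolean isDir = entries.get(path);
        if (isDir == null) {
            throw new NoSuchElementException(path);  // analogue of FileNotFoundException
        }
        if (!isDir) {
            return new String[] { path };  // array of exactly one "status"
        }
        List<String> children = new ArrayList<>();
        String prefix = path.endsWith("/") ? path : path + "/";
        for (String p : entries.keySet()) {
            // direct children only: the remainder after the prefix has no further slash
            if (p.startsWith(prefix) && !p.substring(prefix.length()).contains("/")) {
                children.add(p);
            }
        }
        return children.toArray(new String[0]);
    }
}
```

This matches what the contract test asserts and what HDFS does; the FileSystem javadoc's "if the path is a directory" wording simply omits the file case.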
[jira] [Updated] (HADOOP-15024) AliyunOSS: support user agent configuration and include that & Hadoop version information to oss server
[ https://issues.apache.org/jira/browse/HADOOP-15024?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] SammiChen updated HADOOP-15024: --- Fix Version/s: 3.0.1 2.9.1 2.10.0 3.0.0 > AliyunOSS: support user agent configuration and include that & Hadoop version > information to oss server > --- > > Key: HADOOP-15024 > URL: https://issues.apache.org/jira/browse/HADOOP-15024 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs, fs/oss >Affects Versions: 3.0.0 >Reporter: SammiChen >Assignee: SammiChen > Fix For: 3.0.0, 3.1.0, 2.10.0, 2.9.1, 3.0.1 > > Attachments: HADOOP-15024.000.patch, HADOOP-15024.001.patch, > HADOOP-15024.002.patch > > > Provide oss client side Hadoop version to oss server, to help build access > statistic metrics.
[jira] [Commented] (HADOOP-14993) AliyunOSS: Override listFiles and listLocatedStatus
[ https://issues.apache.org/jira/browse/HADOOP-14993?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16283500#comment-16283500 ] SammiChen commented on HADOOP-14993: Hi [~uncleGen], the patch cannot be applied to branch-2. Would you please take a look and provide a new patch for branch-2? > AliyunOSS: Override listFiles and listLocatedStatus > > > Key: HADOOP-14993 > URL: https://issues.apache.org/jira/browse/HADOOP-14993 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/oss >Affects Versions: 3.0.0-beta1 >Reporter: Genmao Yu >Assignee: Genmao Yu > Fix For: 3.0.0, 3.1.0, 3.0.1 > > Attachments: HADOOP-14993.001.patch, HADOOP-14993.002.patch, > HADOOP-14993.003.patch > > > Do a bulk listing of all entries under a path in one single operation; there > is no need to recursively walk the directory tree. > Updates: > - override listFiles and listLocatedStatus by using bulk listing > - some minor updates in hadoop-aliyun index.md
[jira] [Updated] (HADOOP-15080) Aliyun OSS: update oss sdk from 2.8.1 to 2.8.3 to remove its dependency on Cat-x "json-lib"
[ https://issues.apache.org/jira/browse/HADOOP-15080?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] SammiChen updated HADOOP-15080: --- Fix Version/s: 3.0.1 > Aliyun OSS: update oss sdk from 2.8.1 to 2.8.3 to remove its dependency on > Cat-x "json-lib" > --- > > Key: HADOOP-15080 > URL: https://issues.apache.org/jira/browse/HADOOP-15080 > Project: Hadoop Common > Issue Type: Bug > Components: fs/oss >Affects Versions: 3.0.0-beta1 >Reporter: Chris Douglas >Assignee: SammiChen >Priority: Blocker > Fix For: 3.0.0, 3.1.0, 2.10.0, 2.9.1, 3.0.1 > > Attachments: HADOOP-15080-branch-3.0.0.001.patch, > HADOOP-15080-branch-3.0.0.002.patch > > > Cat-X dependency on org.json via derived json-lib. OSS SDK has a dependency > on json-lib. In LEGAL-245, the org.json library (from which json-lib may be > derived) is released under a > [category-x|https://www.apache.org/legal/resolved.html#json] license.
[jira] [Updated] (HADOOP-14993) AliyunOSS: Override listFiles and listLocatedStatus
[ https://issues.apache.org/jira/browse/HADOOP-14993?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] SammiChen updated HADOOP-14993: --- Fix Version/s: 3.0.1 3.0.0
[jira] [Updated] (HADOOP-14997) Add hadoop-aliyun as dependency of hadoop-cloud-storage
[ https://issues.apache.org/jira/browse/HADOOP-14997?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] SammiChen updated HADOOP-14997: --- Fix Version/s: 2.9.1 2.10.0 > Add hadoop-aliyun as dependency of hadoop-cloud-storage > > > Key: HADOOP-14997 > URL: https://issues.apache.org/jira/browse/HADOOP-14997 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/oss >Affects Versions: 3.0.0-beta1 >Reporter: Genmao Yu >Assignee: Genmao Yu >Priority: Minor > Fix For: 3.0.0, 3.1.0, 2.10.0, 2.9.1 > > Attachments: HADOOP-14997.001.patch > > > add {{hadoop-aliyun}} dependency in cloud storage modules
[jira] [Updated] (HADOOP-14997) Add hadoop-aliyun as dependency of hadoop-cloud-storage
[ https://issues.apache.org/jira/browse/HADOOP-14997?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] SammiChen updated HADOOP-14997: --- Fix Version/s: 3.1.0
[jira] [Created] (HADOOP-15103) SSLConnectionConfigurator should be created only if security is enabled
Lokesh Jain created HADOOP-15103: Summary: SSLConnectionConfigurator should be created only if security is enabled Key: HADOOP-15103 URL: https://issues.apache.org/jira/browse/HADOOP-15103 Project: Hadoop Common Issue Type: Bug Reporter: Lokesh Jain Assignee: Lokesh Jain Currently URLConnectionFactory#getSSLConnectionConfiguration attempts to create an SSL connection configurator even if security is not enabled. This raises the below false warning in the logs. {code:java} 17/12/08 10:12:03 WARN web.URLConnectionFactory: Cannot load customized ssl related configuration. Fallback to system-generic settings. java.io.FileNotFoundException: /etc/security/clientKeys/all.jks (No such file or directory) at java.io.FileInputStream.open0(Native Method) at java.io.FileInputStream.open(FileInputStream.java:195) at java.io.FileInputStream.<init>(FileInputStream.java:138) at org.apache.hadoop.security.ssl.ReloadingX509TrustManager.loadTrustManager(ReloadingX509TrustManager.java:169) at org.apache.hadoop.security.ssl.ReloadingX509TrustManager.<init>(ReloadingX509TrustManager.java:87) at org.apache.hadoop.security.ssl.FileBasedKeyStoresFactory.init(FileBasedKeyStoresFactory.java:219) at org.apache.hadoop.security.ssl.SSLFactory.init(SSLFactory.java:176) at org.apache.hadoop.hdfs.web.URLConnectionFactory.newSslConnConfigurator(URLConnectionFactory.java:164) at org.apache.hadoop.hdfs.web.URLConnectionFactory.getSSLConnectionConfiguration(URLConnectionFactory.java:106) at org.apache.hadoop.hdfs.web.URLConnectionFactory.newDefaultURLConnectionFactory(URLConnectionFactory.java:85) at org.apache.hadoop.hdfs.tools.DFSck.<init>(DFSck.java:136) at org.apache.hadoop.hdfs.tools.DFSck.<init>(DFSck.java:128) at org.apache.hadoop.hdfs.tools.DFSck.main(DFSck.java:396) {code}
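The proposed fix amounts to checking whether security is enabled before touching any keystore configuration. A minimal model of that guard follows; all names are invented for illustration, and this is not the actual URLConnectionFactory code:

```java
public class ConnFactoryModel {
    /** Counts how often the expensive SSL path was entered (for the demo only). */
    static int sslLoadAttempts = 0;

    /** Stand-in for the keystore-probing, warning-prone SSL configurator setup. */
    static String newSslConnConfigurator() {
        sslLoadAttempts++;
        return "ssl";
    }

    /**
     * Only attempt SSL setup when security is on; otherwise fall back
     * immediately, so no keystore file is probed and no spurious
     * "Cannot load customized ssl related configuration" warning is logged.
     */
    static String getConnectionConfigurator(boolean securityEnabled) {
        if (!securityEnabled) {
            return "default";
        }
        try {
            return newSslConnConfigurator();
        } catch (RuntimeException e) {
            return "default";  // genuine failures still degrade gracefully
        }
    }
}
```

The key property is that with security disabled the SSL path is never entered at all, rather than entered, failed, and warned about.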
[jira] [Commented] (HADOOP-13625) Document FileSystem actions that trigger update of modification time.
[ https://issues.apache.org/jira/browse/HADOOP-13625?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16283354#comment-16283354 ] Ewan Higgs commented on HADOOP-13625: - {quote}{code} FileSystem#createSnapshot {code}{quote} Obviously the mtime of the {{.snapshot/...}} directory should be its creation time, but should the mtime of the directory being snapshotted be updated as well? If it is, two consecutive snapshots will never produce a 'no differences' diff, because the mtimes will differ. > Document FileSystem actions that trigger update of modification time. > - > > Key: HADOOP-13625 > URL: https://issues.apache.org/jira/browse/HADOOP-13625 > Project: Hadoop Common > Issue Type: Improvement > Components: fs >Reporter: Chris Nauroth > > Hadoop users and developers of Hadoop-compatible file systems have sometimes > asked questions about which file system actions trigger an update of the > path's modification time. This issue proposes to document which actions do > and do not update modification time, so that the information is easy to find > without reading HDFS code or manually testing individual cases.
[jira] [Created] (HADOOP-15102) HADOOP-14831
Steve Loughran created HADOOP-15102: --- Summary: HADOOP-14831 Key: HADOOP-15102 URL: https://issues.apache.org/jira/browse/HADOOP-15102 Project: Hadoop Common Issue Type: Sub-task Reporter: Steve Loughran
[jira] [Updated] (HADOOP-14736) S3AInputStream to implement an efficient skip() call through seeking
[ https://issues.apache.org/jira/browse/HADOOP-14736?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran updated HADOOP-14736: Priority: Major (was: Minor) > S3AInputStream to implement an efficient skip() call through seeking > > > Key: HADOOP-14736 > URL: https://issues.apache.org/jira/browse/HADOOP-14736 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Reporter: Steve Loughran > > {{S3AInputStream}} implements skip() naively via the base class: reading > and discarding all the data. This is efficient on classic "sequential" reads, provided > the forward skip is <1MB. For larger skip values, or on random IO, seek() > should be used. > After range checks (clamping a past-EOF skip to a seek to EOF-1), a seek() > should handle the skip itself. > *there are no FS contract tests for skip semantics*
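The proposal above can be sketched in a few lines. This is not the real `S3AInputStream` code: the class, the field names, and the 1 MB threshold constant are illustrative stand-ins for whatever the eventual patch would use.

```java
// Sketch of skip() delegating to seek() once the distance crosses a
// threshold; short forward skips still read-and-discard so the existing
// sequential HTTP connection is kept open.
class SeekableStreamSketch {
    static final long SKIP_TO_SEEK_THRESHOLD = 1024 * 1024; // 1 MB, per the issue text

    long pos;                   // current read position
    final long contentLength;   // object size, i.e. EOF

    SeekableStreamSketch(long contentLength) {
        this.contentLength = contentLength;
    }

    long skip(long n) {
        if (n <= 0) {
            return 0;
        }
        // Clamp past-EOF skips so we never position beyond the object.
        long target = Math.min(pos + n, contentLength);
        long skipped = target - pos;
        if (skipped >= SKIP_TO_SEEK_THRESHOLD) {
            seek(target);             // efficient: reposition rather than drain
        } else {
            readAndDiscard(skipped);  // cheap for short sequential skips
            pos = target;
        }
        return skipped;
    }

    void seek(long target) {
        pos = target;                 // real code would abort/reopen the GET here
    }

    void readAndDiscard(long bytes) {
        // Stand-in for draining `bytes` from the wire.
    }
}
```

A contract test for skip semantics (noted as missing in the issue) would exercise exactly these boundaries: zero and negative skips, small forward skips, and skips past EOF.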
[jira] [Commented] (HADOOP-14747) S3AInputStream to implement CanUnbuffer
[ https://issues.apache.org/jira/browse/HADOOP-14747?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16283306#comment-16283306 ] Steve Loughran commented on HADOOP-14747: - + stream capabilities to cover this > S3AInputStream to implement CanUnbuffer > --- > > Key: HADOOP-14747 > URL: https://issues.apache.org/jira/browse/HADOOP-14747 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Affects Versions: 2.8.1 >Reporter: Steve Loughran > > HBase relies on FileSystems implementing {{CanUnbuffer.unbuffer()}} to force > input streams to free up remote connections (HBASE-9393). This works for > HDFS, but not elsewhere. > S3A input stream can implement {{CanUnbuffer.unbuffer()}} by closing the > input stream and relying on lazy seek to reopen it on demand. > Needs > * Contract specification of unbuffer. As in "who added a new feature to > filesystems but forgot to mention what it should do?" > * Contract test for filesystems which declare their support. > * S3AInputStream to call {{closeStream()}} on a call to {{unbuffer()}}. > * Test case
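The close-and-lazily-reopen idea can be sketched as below. This is a simplified illustration, not the real S3A code: only {{closeStream()}} comes from the issue text; the class and the other member names are hypothetical.

```java
// Sketch of CanUnbuffer via lazy seek: unbuffer() closes the wrapped
// remote stream without losing the logical position, and the next read()
// re-establishes the connection on demand.
class LazySeekStreamSketch {
    private boolean streamOpen;
    private long pos;

    // CanUnbuffer.unbuffer(): free the remote connection, keep the position.
    void unbuffer() {
        closeStream();
    }

    int read() {
        if (!streamOpen) {
            reopenAt(pos); // lazy seek: reopen the connection where we left off
        }
        pos++;
        return 0; // stand-in for a real data byte
    }

    void closeStream() {
        streamOpen = false; // real code would abort/close the HTTP stream here
    }

    private void reopenAt(long target) {
        streamOpen = true;  // real code would issue a ranged GET from `target`
    }

    boolean isStreamOpen() { return streamOpen; }
    long getPos() { return pos; }
}
```

This is exactly the pattern HBASE-9393 wants: long-lived readers can park without pinning a connection, at the cost of one reopen on the next read.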
[jira] [Commented] (HADOOP-13853) S3ADataBlocks.DiskBlock to lazy create dest file for faster 0-byte puts
[ https://issues.apache.org/jira/browse/HADOOP-13853?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16283305#comment-16283305 ] Steve Loughran commented on HADOOP-13853: - In HADOOP-13786 the success marker is non empty. However, there may still be some merit in this for any touch() operation > S3ADataBlocks.DiskBlock to lazy create dest file for faster 0-byte puts > --- > > Key: HADOOP-13853 > URL: https://issues.apache.org/jira/browse/HADOOP-13853 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Affects Versions: 2.8.0 >Reporter: Steve Loughran >Priority: Minor > > Looking at traces of work, there's invariably a PUT of a _SUCCESS at the end, > which, with disk output, adds the overhead of creating, writing to and then > reading a 0 byte file. > With a lazy create, the creation could be postponed until the first write, > with special handling in the {{startUpload()}} operation to return a null > stream, rather than reopen the file. Saves on some disk IO: create, read, > delete
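The lazy-create idea can be sketched as follows. This is an illustration only: only {{startUpload()}} is named in the issue; the class, the `write()` signature, and the in-memory buffer standing in for the on-disk file are all hypothetical simplifications of `S3ADataBlocks.DiskBlock`.

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;

// Sketch: defer creating the buffer "file" until the first write, and have
// startUpload() return an empty stream when nothing was ever written, so a
// 0-byte PUT (e.g. a marker file) never incurs disk create/read/delete.
class DiskBlockSketch {
    private ByteArrayOutputStream buffer; // stands in for the on-disk dest file

    void write(byte[] data) {
        if (buffer == null) {
            buffer = new ByteArrayOutputStream(); // "create the file" lazily
        }
        buffer.write(data, 0, data.length);
    }

    ByteArrayInputStream startUpload() {
        if (buffer == null) {
            // Nothing was written: upload zero bytes without touching disk.
            return new ByteArrayInputStream(new byte[0]);
        }
        return new ByteArrayInputStream(buffer.toByteArray());
    }
}
```

As the comment above notes, even with a non-empty _SUCCESS marker (HADOOP-13786), any touch()-style 0-byte write would still benefit from this path.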
[jira] [Commented] (HADOOP-15080) Aliyun OSS: update oss sdk from 2.8.1 to 2.8.3 to remove its dependency on Cat-x "json-lib"
[ https://issues.apache.org/jira/browse/HADOOP-15080?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16283298#comment-16283298 ] SammiChen commented on HADOOP-15080: Thanks [~mackrorysd] for the backport to branch-2 & branch-2.9. > Aliyun OSS: update oss sdk from 2.8.1 to 2.8.3 to remove its dependency on > Cat-x "json-lib" > --- > > Key: HADOOP-15080 > URL: https://issues.apache.org/jira/browse/HADOOP-15080 > Project: Hadoop Common > Issue Type: Bug > Components: fs/oss >Affects Versions: 3.0.0-beta1 >Reporter: Chris Douglas >Assignee: SammiChen >Priority: Blocker > Fix For: 3.0.0, 3.1.0, 2.10.0, 2.9.1 > > Attachments: HADOOP-15080-branch-3.0.0.001.patch, > HADOOP-15080-branch-3.0.0.002.patch > > > Cat-X dependency on org.json via derived json-lib. OSS SDK has a dependency > on json-lib. In LEGAL-245, the org.json library (from which json-lib may be > derived) is released under a > [category-x|https://www.apache.org/legal/resolved.html#json] license. -