[jira] [Commented] (HADOOP-14493) YARN distributed shell application fails, when RM failed over or Restarts
[ https://issues.apache.org/jira/browse/HADOOP-14493?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16038233#comment-16038233 ] Sathishkumar Manimoorthy commented on HADOOP-14493:
---
[~msingh] The container launch is actually calling the filesystem's rename function, and it fails because the source file path is not present. Basically, the container is trying to rename ExecScript under the DistributedShell directory on the filesystem.

> YARN distributed shell application fails, when RM failed over or Restarts
> -
>
> Key: HADOOP-14493
> URL: https://issues.apache.org/jira/browse/HADOOP-14493
> Project: Hadoop Common
> Issue Type: Bug
> Affects Versions: 2.7.0
> Reporter: Sathishkumar Manimoorthy
> Priority: Minor
> Labels: distributedshell, yarn
>
> YARN Distributed shell application fails when doing RM failover or RM restarts.
> Exception trace:
> 17/05/30 11:57:38 DEBUG security.UserGroupInformation: PrivilegedAction as:mapr (auth:SIMPLE) from:org.apache.hadoop.yarn.applications.distributedshell.ApplicationMaster.renameScriptFile(ApplicationMaster.java:1032)
> 17/05/30 11:57:38 DEBUG security.UserGroupInformation: PrivilegedActionException as:mapr (auth:SIMPLE) cause:java.io.IOException: Invalid source or target
> 17/05/30 11:57:38 ERROR distributedshell.ApplicationMaster: Not able to add suffix (.bat/.sh) to the shell script filename
> java.io.IOException: Invalid source or target
> at com.mapr.fs.MapRFileSystem.rename(MapRFileSystem.java:1132)
> at org.apache.hadoop.yarn.applications.distributedshell.ApplicationMaster$2.run(ApplicationMaster.java:1036)
> at org.apache.hadoop.yarn.applications.distributedshell.ApplicationMaster$2.run(ApplicationMaster.java:1032)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1595)
> at org.apache.hadoop.yarn.applications.distributedshell.ApplicationMaster.renameScriptFile(ApplicationMaster.java:1032)
> at org.apache.hadoop.yarn.applications.distributedshell.ApplicationMaster.access$1400(ApplicationMaster.java:167)
> at org.apache.hadoop.yarn.applications.distributedshell.ApplicationMaster$LaunchContainerRunnable.run(ApplicationMaster.java:953)
> at java.lang.Thread.run(Thread.java:748)
> The DS application is trying to launch an additional container, and it fails to rename ExecScript.sh because the file was already renamed by a previous container at that filesystem path.
> I will upload the logs and path details soon.

--
This message was sent by Atlassian JIRA
(v6.3.15#6346)
-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org
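The failure mode described in this comment (a re-launched container renaming a script that an earlier container already renamed) suggests one possible mitigation: make the rename idempotent. The snippet below is only a sketch of that idea, using java.nio.file in place of the Hadoop FileSystem API; the method name and paths are hypothetical, not the actual ApplicationMaster code or the eventual fix.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class IdempotentRename {

    // Sketch only: skip the rename when the destination already exists, so a
    // container re-launched after RM failover does not fail on a script that a
    // previous container already renamed. A real change would live in
    // ApplicationMaster.renameScriptFile and use org.apache.hadoop.fs.FileSystem.
    static void renameScriptIfNeeded(Path src, Path dst) throws IOException {
        if (Files.exists(dst)) {
            return; // already renamed by an earlier attempt; nothing to do
        }
        if (Files.exists(src)) {
            Files.move(src, dst); // first attempt: perform the rename
        }
    }
}
```

Calling renameScriptIfNeeded twice with the same arguments succeeds both times, which is the property the re-launched container would need.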
[jira] [Commented] (HADOOP-14493) YARN distributed shell application fails, when RM failed over or Restarts
[ https://issues.apache.org/jira/browse/HADOOP-14493?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16038230#comment-16038230 ] Mukul Kumar Singh commented on HADOOP-14493:
---
This seems related to MapRFileSystem; please post this to the MapR mailing list.
{code}
at com.mapr.fs.MapRFileSystem.rename(MapRFileSystem.java:1132)
{code}
[jira] [Updated] (HADOOP-14493) YARN distributed shell application fails, when RM failed over or Restarts
[ https://issues.apache.org/jira/browse/HADOOP-14493?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sathishkumar Manimoorthy updated HADOOP-14493: -- Description: YARN Distributed shell application fails when doing RM failover or RM restarts. Exception trace: 17/05/30 11:57:38 DEBUG security.UserGroupInformation: PrivilegedAction as:mapr (auth:SIMPLE) from:org.apache.hadoop.yarn.applications.distributedshell.ApplicationMaster.renameScriptFile(ApplicationMaster.java:1032) 17/05/30 11:57:38 DEBUG security.UserGroupInformation: PrivilegedActionException as:mapr (auth:SIMPLE) cause:java.io.IOException: Invalid source or target 17/05/30 11:57:38 ERROR distributedshell.ApplicationMaster: Not able to add suffix (.bat/.sh) to the shell script filename java.io.IOException: Invalid source or target at com.mapr.fs.MapRFileSystem.rename(MapRFileSystem.java:1132) at org.apache.hadoop.yarn.applications.distributedshell.ApplicationMaster$2.run(ApplicationMaster.java:1036) at org.apache.hadoop.yarn.applications.distributedshell.ApplicationMaster$2.run(ApplicationMaster.java:1032) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1595) at org.apache.hadoop.yarn.applications.distributedshell.ApplicationMaster.renameScriptFile(ApplicationMaster.java:1032) at org.apache.hadoop.yarn.applications.distributedshell.ApplicationMaster.access$1400(ApplicationMaster.java:167) at org.apache.hadoop.yarn.applications.distributedshell.ApplicationMaster$LaunchContainerRunnable.run(ApplicationMaster.java:953) at java.lang.Thread.run(Thread.java:748) DS application trying to launch the additional container and it is failing to rename the path Execscript.sh as it was already renamed by the previous containers in filesystem path. I will upload the logs and path details soon. 
was: YARN Distributed shell application fails when doing RM failover or RM restarts. Exception trace: 17/05/30 11:57:38 DEBUG security.UserGroupInformation: PrivilegedAction as:mapr (auth:SIMPLE) from:org.apache.hadoop.yarn.applications.distributedshell.ApplicationMaster.renameScriptFile(ApplicationMaster.java:1032) 17/05/30 11:57:38 DEBUG security.UserGroupInformation: PrivilegedActionException as:mapr (auth:SIMPLE) cause:java.io.IOException: Invalid source or target 17/05/30 11:57:38 ERROR distributedshell.ApplicationMaster: Not able to add suffix (.bat/.sh) to the shell script filename java.io.IOException: Invalid source or target at com.mapr.fs.MapRFileSystem.rename(MapRFileSystem.java:1132) at org.apache.hadoop.yarn.applications.distributedshell.ApplicationMaster$2.run(ApplicationMaster.java:1036) at org.apache.hadoop.yarn.applications.distributedshell.ApplicationMaster$2.run(ApplicationMaster.java:1032) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1595) at org.apache.hadoop.yarn.applications.distributedshell.ApplicationMaster.renameScriptFile(ApplicationMaster.java:1032) at org.apache.hadoop.yarn.applications.distributedshell.ApplicationMaster.access$1400(ApplicationMaster.java:167) at org.apache.hadoop.yarn.applications.distributedshell.ApplicationMaster$LaunchContainerRunnable.run(ApplicationMaster.java:953) at java.lang.Thread.run(Thread.java:748) DS application trying to lo launch the additional container and it is failing to rename the path Execscript.sh as it was already renamed by the previous containers in filesystem path. I will upload the logs and path details soon. 
[jira] [Updated] (HADOOP-14493) YARN distributed shell application fails, when RM failed over or Restarts
[ https://issues.apache.org/jira/browse/HADOOP-14493?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sathishkumar Manimoorthy updated HADOOP-14493:
---
Labels: distributedshell yarn (was: )
[jira] [Created] (HADOOP-14493) YARN distributed shell application fails, when RM failed over or Restarts
Sathishkumar Manimoorthy created HADOOP-14493:
---
Summary: YARN distributed shell application fails, when RM failed over or Restarts
Key: HADOOP-14493
URL: https://issues.apache.org/jira/browse/HADOOP-14493
Project: Hadoop Common
Issue Type: Bug
Affects Versions: 2.7.0
Reporter: Sathishkumar Manimoorthy
Priority: Minor

YARN Distributed shell application fails when doing RM failover or RM restarts.

Exception trace:

17/05/30 11:57:38 DEBUG security.UserGroupInformation: PrivilegedAction as:mapr (auth:SIMPLE) from:org.apache.hadoop.yarn.applications.distributedshell.ApplicationMaster.renameScriptFile(ApplicationMaster.java:1032)
17/05/30 11:57:38 DEBUG security.UserGroupInformation: PrivilegedActionException as:mapr (auth:SIMPLE) cause:java.io.IOException: Invalid source or target
17/05/30 11:57:38 ERROR distributedshell.ApplicationMaster: Not able to add suffix (.bat/.sh) to the shell script filename
java.io.IOException: Invalid source or target
at com.mapr.fs.MapRFileSystem.rename(MapRFileSystem.java:1132)
at org.apache.hadoop.yarn.applications.distributedshell.ApplicationMaster$2.run(ApplicationMaster.java:1036)
at org.apache.hadoop.yarn.applications.distributedshell.ApplicationMaster$2.run(ApplicationMaster.java:1032)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1595)
at org.apache.hadoop.yarn.applications.distributedshell.ApplicationMaster.renameScriptFile(ApplicationMaster.java:1032)
at org.apache.hadoop.yarn.applications.distributedshell.ApplicationMaster.access$1400(ApplicationMaster.java:167)
at org.apache.hadoop.yarn.applications.distributedshell.ApplicationMaster$LaunchContainerRunnable.run(ApplicationMaster.java:953)
at java.lang.Thread.run(Thread.java:748)

The DS application is trying to launch an additional container, and it fails to rename ExecScript.sh because the file was already renamed by a previous container at that filesystem path.

I will upload the logs and path details soon.
[jira] [Commented] (HADOOP-14491) Azure has messed doc structure
[ https://issues.apache.org/jira/browse/HADOOP-14491?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16038213#comment-16038213 ] Hadoop QA commented on HADOOP-14491: | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 14s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 12m 59s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 22s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 18s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 16s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 14m 29s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:14b5c93 | | JIRA Issue | HADOOP-14491 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12871379/HADOOP-14491.000.patch | | Optional Tests | asflicense mvnsite | | uname | Linux 71ddba5811ae 4.4.0-43-generic #63-Ubuntu SMP Wed Oct 12 13:48:03 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 66c6fd8 | | modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure | | Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/12449/console | | Powered by | Apache Yetus 0.5.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated.

> Azure has messed doc structure
> -
>
> Key: HADOOP-14491
> URL: https://issues.apache.org/jira/browse/HADOOP-14491
> Project: Hadoop Common
> Issue Type: Improvement
> Components: documentation, fs/azure
> Reporter: Mingliang Liu
> Assignee: Mingliang Liu
> Attachments: HADOOP-14491.000.patch, new.png, old.png
>
> # The _WASB Secure mode and configuration_ and _Authorization Support in WASB_ sections are missing in the navigation
> # _Authorization Support in WASB_ should be header level 3 instead of level 2
> # Some of the code format is not specified
> # Sample code indent not unified.
> Let's use the auto-generated navigation instead of manually updating it, just as other documents.
[jira] [Commented] (HADOOP-14476) make InconsistentAmazonS3Client usable in downstream tests
[ https://issues.apache.org/jira/browse/HADOOP-14476?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16038204#comment-16038204 ] Hadoop QA commented on HADOOP-14476: | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 13m 39s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 2 new or modified test files. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 49s{color} | {color:green} HADOOP-13345 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 22s{color} | {color:green} HADOOP-13345 passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 16s{color} | {color:green} HADOOP-13345 passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 27s{color} | {color:green} HADOOP-13345 passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 34s{color} | {color:green} HADOOP-13345 passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 16s{color} | {color:green} HADOOP-13345 passed {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 21s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 19s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 19s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} 
| {color:green} 0m 12s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 22s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 35s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 12s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 40s{color} | {color:green} hadoop-aws in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 22s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 38m 46s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:612578f | | JIRA Issue | HADOOP-14476 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12871477/HADOOP-14476-HADOOP-13345.001.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux fca1c74fae77 3.13.0-116-generic #163-Ubuntu SMP Fri Mar 31 14:13:22 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | HADOOP-13345 / 2d06842 | | Default Java | 1.8.0_131 | | findbugs | v3.0.0 | | Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/12448/testReport/ | | modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws | | Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/12448/console | | Powered by | Apache Yetus 0.5.0-SNAPSHOT http://yetus.apache.org | This message 
was automatically generated. > make InconsistentAmazonS3Client usable in downstream tests > -- > > Key: HADOOP-14476 > URL: https://issues.apache.org/jira/browse/HADOOP-14476 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3, test >Affects Versions: HADOOP-13345 >Reporter: Steve Loughran >Assignee: Aaron Fabbri > Attachments: HADOOP-14476-HADOOP-13345.001.patch > > > It's important for downstream apps to be able to verify that s3guard works by > making the AWS client inconsistent (so demonstrate problems), then turn > s3guard on to verify that they go away. > This can be done by exposing the {{InconsistentAmazonS3Client}} > # move the factory to the production source > # make delay configurable for
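The quoted description asks for {{InconsistentAmazonS3Client}} to be usable downstream, with a configurable delay. The toy model below is not the real hadoop-aws class and invents its own names; it only illustrates the behaviour the description asks for: a store wrapper that hides freshly written keys for a configurable window, so tests can first demonstrate eventual-consistency failures and then verify that S3Guard-style tracking makes them go away.

```java
import java.util.HashMap;
import java.util.Map;

// Toy sketch of the idea behind InconsistentAmazonS3Client (hypothetical names,
// not the hadoop-aws implementation): freshly written keys read back as missing
// for a configurable number of reads, simulating eventual consistency.
public class InconsistentStore {
    private final Map<String, String> data = new HashMap<>();
    private final Map<String, Integer> hideReads = new HashMap<>();
    private final int delayReads; // the configurable inconsistency window

    public InconsistentStore(int delayReads) {
        this.delayReads = delayReads;
    }

    public void put(String key, String value) {
        data.put(key, value);
        hideReads.put(key, delayReads); // freshly written keys start hidden
    }

    public String get(String key) {
        int remaining = hideReads.getOrDefault(key, 0);
        if (remaining > 0) {
            hideReads.put(key, remaining - 1);
            return null; // simulate a stale read: the key is not yet visible
        }
        return data.get(key);
    }
}
```

With a delay of 2, the first two reads after a write return null and the third returns the value, which is the kind of deterministic fault injection downstream tests would want.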
[jira] [Comment Edited] (HADOOP-14492) RpcDetailedMetrics and NameNodeMetrics use different rate metrics abstraction cause the Xavgtime confused
[ https://issues.apache.org/jira/browse/HADOOP-14492?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16038202#comment-16038202 ] Lantao Jin edited comment on HADOOP-14492 at 6/6/17 5:25 AM:
---
How about changing the {{NameNodeMetrics}} to use {{MutableRatesWithAggregation}} instead of {{MutableRates}}? Any idea? [~steve_l] [~zhz]

was (Author: cltlfcjin):
How about changing the {{NameNodeMetrics}} to use {{MutableRatesWithAggregation}} instead of {{MutableRates}}? Any idea? [~steve_l]

> RpcDetailedMetrics and NameNodeMetrics use different rate metrics abstraction cause the Xavgtime confused
> -
>
> Key: HADOOP-14492
> URL: https://issues.apache.org/jira/browse/HADOOP-14492
> Project: Hadoop Common
> Issue Type: Bug
> Components: metrics
> Affects Versions: 2.8.0, 2.7.4
> Reporter: Lantao Jin
> Priority: Minor
>
> For performance purposes, [HADOOP-13782|https://issues.apache.org/jira/browse/HADOOP-13782] changed the metrics behaviour in {{RpcDetailedMetrics}}.
> In 2.7.4:
> {code}
> public class RpcDetailedMetrics {
> @Metric MutableRatesWithAggregation rates;
> {code}
> In the old version:
> {code}
> public class RpcDetailedMetrics {
> @Metric MutableRates rates;
> {code}
> But {{NameNodeMetrics}} still uses {{MutableRate}} in both the new and old versions:
> {code}
> public class NameNodeMetrics {
> @Metric("Block report") MutableRate blockReport;
> {code}
> This makes the corresponding JMX metrics very different between them:
> {quote}
> name: "Hadoop:service=NameNode,name=RpcDetailedActivityForPort8030",
> modelerType: "RpcDetailedActivityForPort8030",
> tag.port: "8030",
> tag.Context: "rpcdetailed",
> ...
> BlockReportNumOps: 237634,
> BlockReportAvgTime: 1382,
> ...
> name: "Hadoop:service=NameNode,name=NameNodeActivity",
> modelerType: "NameNodeActivity",
> tag.ProcessName: "NameNode",
> ...
> BlockReportNumOps: 2592932,
> BlockReportAvgTime: 19.258064516129032,
> ...
> {quote}
> In the old version, they were consistent.

--
This message was sent by Atlassian JIRA
(v6.3.15#6346)
-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org
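As an arithmetic illustration of the gap quoted in the description (hypothetical numbers, not taken from the metrics internals, and only one way such a divergence can arise): an average computed over a single snapshot interval and an average accumulated over a much longer window need not agree at all, even for the same operation.

```java
public class AvgTimeDivergence {
    public static void main(String[] args) {
        // Hypothetical lifetime totals vs. a hypothetical last snapshot interval.
        long totalOps = 2_592_932, totalTimeMs = 49_935_000;
        long intervalOps = 10, intervalTimeMs = 13_820;

        double lifetimeAvg = (double) totalTimeMs / totalOps;       // roughly 19 ms/op
        double intervalAvg = (double) intervalTimeMs / intervalOps; // 1382 ms/op

        System.out.println(lifetimeAvg);
        System.out.println(intervalAvg);
    }
}
```

Two beans sampling the same operation with different rate abstractions can therefore legitimately report BlockReportAvgTime values orders of magnitude apart, which matches the confusion this issue reports.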
[jira] [Commented] (HADOOP-14492) RpcDetailedMetrics and NameNodeMetrics use different rate metrics abstraction cause the Xavgtime confused
[ https://issues.apache.org/jira/browse/HADOOP-14492?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16038202#comment-16038202 ] Lantao Jin commented on HADOOP-14492:
---
How about changing the {{NameNodeMetrics}} to use {{MutableRatesWithAggregation}} instead of {{MutableRates}}? Any idea? [~steve_l]
[jira] [Updated] (HADOOP-14491) Azure has messed doc structure
[ https://issues.apache.org/jira/browse/HADOOP-14491?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Akira Ajisaka updated HADOOP-14491:
---
Status: Patch Available (was: Open)
[jira] [Commented] (HADOOP-14486) TestSFTPFileSystem#testGetAccessTime test failure
[ https://issues.apache.org/jira/browse/HADOOP-14486?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16038184#comment-16038184 ] Hongyuan Li commented on HADOOP-14486:
---
Will update the patch soon.

> TestSFTPFileSystem#testGetAccessTime test failure
> -
>
> Key: HADOOP-14486
> URL: https://issues.apache.org/jira/browse/HADOOP-14486
> Project: Hadoop Common
> Issue Type: Bug
> Affects Versions: 3.0.0-alpha4
> Environment: Ubuntu 14.04
> x86, ppc64le
> $ java -version
> openjdk version "1.8.0_111"
> OpenJDK Runtime Environment (build 1.8.0_111-8u111-b14-3~14.04.1-b14)
> OpenJDK 64-Bit Server VM (build 25.111-b14, mixed mode)
> Reporter: Sonia Garudi
> Assignee: Hongyuan Li
>
> The TestSFTPFileSystem#testGetAccessTime test fails consistently with the error below:
> {code}
> java.lang.AssertionError: expected:<1496496040072> but was:<149649604>
> at org.junit.Assert.fail(Assert.java:88)
> at org.junit.Assert.failNotEquals(Assert.java:743)
> at org.junit.Assert.assertEquals(Assert.java:118)
> at org.junit.Assert.assertEquals(Assert.java:555)
> at org.junit.Assert.assertEquals(Assert.java:542)
> at org.apache.hadoop.fs.sftp.TestSFTPFileSystem.testGetAccessTime(TestSFTPFileSystem.java:319)
> {code}
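One plausible reading of the assertion failure above (an assumption on my part, not confirmed in this thread) is a granularity mismatch: the test compares a millisecond-precision expected time against a value the SFTP layer round-trips at coarser, second-level precision. A minimal sketch of that mismatch:

```java
public class AccessTimePrecision {
    public static void main(String[] args) {
        // Millisecond-precision expected value, as in the quoted assertion.
        long expectedMillis = 1496496040072L;

        // What a seconds-granularity protocol would round-trip: the
        // sub-second part is truncated away.
        long secondPrecision = (expectedMillis / 1000) * 1000;

        System.out.println(expectedMillis == secondPrecision); // prints false
    }
}
```

If this is the cause, the usual fix is to truncate the expected value to second precision (or compare with a tolerance) in the test, rather than comparing raw milliseconds.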
[jira] [Commented] (HADOOP-14431) ModifyTime of FileStatus returned by SFTPFileSystem's getFileStatus method is wrong
[ https://issues.apache.org/jira/browse/HADOOP-14431?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16038152#comment-16038152 ] Hudson commented on HADOOP-14431:
---
SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #11828 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/11828/])
HADOOP-14431. ModifyTime of FileStatus returned by SFTPFileSystem's (brahma: rev 66c6fd831497944f4f49c5ce42c69a302b7d7bc0)
* (edit) hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/sftp/SFTPFileSystem.java
* (edit) hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/sftp/TestSFTPFileSystem.java

> ModifyTime of FileStatus returned by SFTPFileSystem's getFileStatus method is wrong
> -
>
> Key: HADOOP-14431
> URL: https://issues.apache.org/jira/browse/HADOOP-14431
> Project: Hadoop Common
> Issue Type: Bug
> Components: fs
> Reporter: Hongyuan Li
> Assignee: Hongyuan Li
> Fix For: 2.9.0, 3.0.0-alpha4
>
> Attachments: HADOOP-14431-001.patch, HADOOP-14431-002.patch
>
> {{getFileStatus(ChannelSftp channel, LsEntry sftpFile, Path parentPath)}} gets the FileStatus as in the code below:
> {code}
> private FileStatus getFileStatus(ChannelSftp channel, LsEntry sftpFile,
> Path parentPath) throws IOException {
> SftpATTRS attr = sftpFile.getAttrs();
> ……
> long modTime = attr.getMTime() * 1000; // convert to milliseconds
> ……
> }
> {code}
> Here {{attr.getMTime}} returns an int, which means the computed modTime is wrong.
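To make the "{{attr.getMTime}} returns an int" point in the quoted description concrete: in Java, multiplying two ints overflows before any widening to long, so seconds-since-epoch times 1000 cannot be computed in int arithmetic. The sketch below illustrates the overflow with a sample timestamp; the actual committed fix is in the linked commit and is not reproduced here.

```java
public class MTimeOverflow {
    public static void main(String[] args) {
        // Hypothetical seconds-since-epoch value, as an SftpATTRS.getMTime()
        // style int would hold it.
        int mTimeSeconds = 1496496040;

        long wrong = mTimeSeconds * 1000;  // int * int overflows, THEN widens to long
        long right = mTimeSeconds * 1000L; // widened to long first: correct milliseconds

        System.out.println(wrong); // a garbage value, not the real timestamp
        System.out.println(right); // prints 1496496040000
    }
}
```

The one-character fix pattern, `* 1000L` instead of `* 1000`, forces the multiplication to be carried out in long arithmetic.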
[jira] [Updated] (HADOOP-14470) redundant ternary operator in create method of class CommandWithDestination
[ https://issues.apache.org/jira/browse/HADOOP-14470?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Hongyuan Li updated HADOOP-14470: - Summary: redundant ternary operator in create method of class CommandWithDestination (was: the ternary operator in create method in class CommandWithDestination is redundant) > redundant ternary operator in create method of class CommandWithDestination > > > Key: HADOOP-14470 > URL: https://issues.apache.org/jira/browse/HADOOP-14470 > Project: Hadoop Common > Issue Type: Improvement > Components: common >Affects Versions: 3.0.0-alpha3 >Reporter: Hongyuan Li >Assignee: Hongyuan Li >Priority: Trivial > Attachments: HADOOP-14470-001.patch > > > In the if statement, lazyPersist is always true, thus the ternary operator is > redundant: {{lazyPersist == true}} holds inside the if statement, so {{lazyPersist ? 1 : > getDefaultReplication(item.path)}} is redundant. > The related code, in > {{org.apache.hadoop.fs.shell.CommandWithDestination}} at line 504, is below: > {code:java} >FSDataOutputStream create(PathData item, boolean lazyPersist, > boolean direct) > throws IOException { > try { > if (lazyPersist) { // in the if statement, lazyPersist is always true > …… > return create(item.path, > FsPermission.getFileDefault().applyUMask( > FsPermission.getUMask(getConf())), > createFlags, > getConf().getInt(IO_FILE_BUFFER_SIZE_KEY, > IO_FILE_BUFFER_SIZE_DEFAULT), > lazyPersist ? 1 : getDefaultReplication(item.path), > // *this is redundant* > getDefaultBlockSize(), > null, > null); > } else { > return create(item.path, true); > } > } finally { // might have been created but stream was interrupted > if (!direct) { > deleteOnExit(item.path); > } > } > } > {code} -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
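The dead else-arm argued in the issue can be shown with a self-contained sketch; plain Java stands in for the Hadoop types, and `replication` is a hypothetical helper mirroring only the shape of the real create overload:

```java
public class TernaryDemo {
    // Inside if (lazyPersist), the condition is already known to be true,
    // so the ternary's else arm can never be taken.
    static int replication(boolean lazyPersist, int defaultReplication) {
        if (lazyPersist) {
            // Before: return lazyPersist ? 1 : defaultReplication;
            // The else arm is dead code here; the constant says the same thing:
            return 1;
        }
        return defaultReplication;
    }

    public static void main(String[] args) {
        System.out.println(replication(true, 3));  // 1
        System.out.println(replication(false, 3)); // 3
    }
}
```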
[jira] [Updated] (HADOOP-14476) make InconsistentAmazonS3Client usable in downstream tests
[ https://issues.apache.org/jira/browse/HADOOP-14476?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aaron Fabbri updated HADOOP-14476: -- Status: Patch Available (was: Open) > make InconsistentAmazonS3Client usable in downstream tests > -- > > Key: HADOOP-14476 > URL: https://issues.apache.org/jira/browse/HADOOP-14476 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3, test >Affects Versions: HADOOP-13345 >Reporter: Steve Loughran >Assignee: Aaron Fabbri > Attachments: HADOOP-14476-HADOOP-13345.001.patch > > > It's important for downstream apps to be able to verify that s3guard works by > making the AWS client inconsistent (to demonstrate problems), then turning > s3guard on to verify that they go away. > This can be done by exposing the {{InconsistentAmazonS3Client}} > # move the factory to the production source > # make delay configurable for when you want a really long delay > # have factory code log @ warn when a non-default factory is used. > # mention in s3a testing.md > I think we could look at the name of the option, > {{fs.s3a.s3.client.factory.impl}} too. I'd like something which has > "internal" in it, and without the duplication of s3a.s3 -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
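The fault-injection idea behind this issue, delaying the visibility of new keys so that reads observe stale state, can be sketched independently of the AWS SDK. Everything below (class and method names included) is a hypothetical illustration of the delegation pattern, not the actual InconsistentAmazonS3Client API:

```java
import java.util.HashMap;
import java.util.Map;

// Sketch: newly written keys stay invisible to lookups until a configurable
// number of reads has elapsed, simulating eventual consistency.
public class DelayedVisibilityStore {
    private final Map<String, String> data = new HashMap<>();
    private final Map<String, Integer> remainingDelay = new HashMap<>();
    private final int delayReads;

    public DelayedVisibilityStore(int delayReads) {
        this.delayReads = delayReads;
    }

    public void put(String key, String value) {
        data.put(key, value);
        remainingDelay.put(key, delayReads); // hide the key for a while
    }

    /** Returns null while the key is still "eventually consistent". */
    public String get(String key) {
        Integer left = remainingDelay.get(key);
        if (left != null && left > 0) {
            remainingDelay.put(key, left - 1);
            return null; // simulate a stale negative read
        }
        return data.get(key);
    }

    public static void main(String[] args) {
        DelayedVisibilityStore store = new DelayedVisibilityStore(2);
        store.put("a", "v");
        System.out.println(store.get("a")); // null
        System.out.println(store.get("a")); // null
        System.out.println(store.get("a")); // v
    }
}
```

A test that "fails without S3Guard and succeeds with it" works exactly because such a wrapper makes the inconsistency deterministic rather than hoping the real store misbehaves.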
[jira] [Comment Edited] (HADOOP-14470) the ternary operator in create method in class CommandWithDestination is redundant
[ https://issues.apache.org/jira/browse/HADOOP-14470?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16032293#comment-16032293 ] Hongyuan Li edited comment on HADOOP-14470 at 6/6/17 4:45 AM: -- ping [~ste...@apache.org]、 [~brahmareddy] 、[~yzhangal], Could you please give me a code review? was (Author: hongyuan li): ping [~ste...@apache.org]、 [~brahmareddy] 、[~yzhangal], can you please give me a code review? None of the findbugs/unit test warnings seem to be related to the change. > the ternary operator in create method in class CommandWithDestination is > redundant > --- > > Key: HADOOP-14470 > URL: https://issues.apache.org/jira/browse/HADOOP-14470 > Project: Hadoop Common > Issue Type: Improvement > Components: common >Affects Versions: 3.0.0-alpha3 >Reporter: Hongyuan Li >Assignee: Hongyuan Li >Priority: Trivial > Attachments: HADOOP-14470-001.patch > > > In the if statement, lazyPersist is always true, thus the ternary operator is > redundant: {{lazyPersist == true}} holds inside the if statement, so {{lazyPersist ? 1 : > getDefaultReplication(item.path)}} is redundant. > The related code, in > {{org.apache.hadoop.fs.shell.CommandWithDestination}} at line 504, is below: > {code:java} >FSDataOutputStream create(PathData item, boolean lazyPersist, > boolean direct) > throws IOException { > try { > if (lazyPersist) { // in the if statement, lazyPersist is always true > …… > return create(item.path, > FsPermission.getFileDefault().applyUMask( > FsPermission.getUMask(getConf())), > createFlags, > getConf().getInt(IO_FILE_BUFFER_SIZE_KEY, > IO_FILE_BUFFER_SIZE_DEFAULT), > lazyPersist ? 1 : getDefaultReplication(item.path), > // *this is redundant* > getDefaultBlockSize(), > null, > null); > } else { > return create(item.path, true); > } > } finally { // might have been created but stream was interrupted > if (!direct) { > deleteOnExit(item.path); > } > } > } > {code} -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Comment Edited] (HADOOP-14470) the ternary operator in create method in class CommandWithDestination is redundant
[ https://issues.apache.org/jira/browse/HADOOP-14470?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16032293#comment-16032293 ] Hongyuan Li edited comment on HADOOP-14470 at 6/6/17 4:46 AM: -- ping [~ste...@apache.org]、 [~brahmareddy] 、[~yzhangal], Could you please give me a code review? was (Author: hongyuan li): ping [~ste...@apache.org]、 [~brahmareddy] 、[~yzhangal], Couldyou please give me a code review? > the ternary operator in create method in class CommandWithDestination is > redundant > --- > > Key: HADOOP-14470 > URL: https://issues.apache.org/jira/browse/HADOOP-14470 > Project: Hadoop Common > Issue Type: Improvement > Components: common >Affects Versions: 3.0.0-alpha3 >Reporter: Hongyuan Li >Assignee: Hongyuan Li >Priority: Trivial > Attachments: HADOOP-14470-001.patch > > > In the if statement, lazyPersist is always true, thus the ternary operator is > redundant: {{lazyPersist == true}} holds inside the if statement, so {{lazyPersist ? 1 : > getDefaultReplication(item.path)}} is redundant. > The related code, in > {{org.apache.hadoop.fs.shell.CommandWithDestination}} at line 504, is below: > {code:java} >FSDataOutputStream create(PathData item, boolean lazyPersist, > boolean direct) > throws IOException { > try { > if (lazyPersist) { // in the if statement, lazyPersist is always true > …… > return create(item.path, > FsPermission.getFileDefault().applyUMask( > FsPermission.getUMask(getConf())), > createFlags, > getConf().getInt(IO_FILE_BUFFER_SIZE_KEY, > IO_FILE_BUFFER_SIZE_DEFAULT), > lazyPersist ? 1 : getDefaultReplication(item.path), > // *this is redundant* > getDefaultBlockSize(), > null, > null); > } else { > return create(item.path, true); > } > } finally { // might have been created but stream was interrupted > if (!direct) { > deleteOnExit(item.path); > } > } > } > {code} -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-14476) make InconsistentAmazonS3Client usable in downstream tests
[ https://issues.apache.org/jira/browse/HADOOP-14476?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aaron Fabbri updated HADOOP-14476: -- Attachment: HADOOP-14476-HADOOP-13345.001.patch Attaching v1 patch. Documentation included. I have not yet added the test case [~ste...@apache.org] suggested about getFileStatus() failing due to eventual consistency, but I can do that in the v2 patch. Ran all integration tests w/ and w/o s3guard (dynamo) in the us-west-2 region. Also ran ITestS3AContractGetFileStatus with all keys delayed; it failed without S3Guard and succeeded with it. > make InconsistentAmazonS3Client usable in downstream tests > -- > > Key: HADOOP-14476 > URL: https://issues.apache.org/jira/browse/HADOOP-14476 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3, test >Affects Versions: HADOOP-13345 >Reporter: Steve Loughran >Assignee: Aaron Fabbri > Attachments: HADOOP-14476-HADOOP-13345.001.patch > > > It's important for downstream apps to be able to verify that s3guard works by > making the AWS client inconsistent (to demonstrate problems), then turning > s3guard on to verify that they go away. > This can be done by exposing the {{InconsistentAmazonS3Client}} > # move the factory to the production source > # make delay configurable for when you want a really long delay > # have factory code log @ warn when a non-default factory is used. > # mention in s3a testing.md > I think we could look at the name of the option, > {{fs.s3a.s3.client.factory.impl}} too. I'd like something which has > "internal" in it, and without the duplication of s3a.s3 -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-14431) ModifyTime of FileStatus returned by SFTPFileSystem's getFileStatus method is wrong
[ https://issues.apache.org/jira/browse/HADOOP-14431?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16038135#comment-16038135 ] Hongyuan Li commented on HADOOP-14431: -- Ok, got it. That's ok. > ModifyTime of FileStatus returned by SFTPFileSystem's getFileStatus method is > wrong > --- > > Key: HADOOP-14431 > URL: https://issues.apache.org/jira/browse/HADOOP-14431 > Project: Hadoop Common > Issue Type: Bug > Components: fs >Reporter: Hongyuan Li >Assignee: Hongyuan Li > Fix For: 2.9.0, 3.0.0-alpha4 > > Attachments: HADOOP-14431-001.patch, HADOOP-14431-002.patch > > > {{getFileStatus(ChannelSftp channel, LsEntry sftpFile, Path parentPath)}} > gets the FileStatus as in the code below: > {code} > private FileStatus getFileStatus(ChannelSftp channel, LsEntry sftpFile, > Path parentPath) throws IOException { > SftpATTRS attr = sftpFile.getAttrs(); >…… > long modTime = attr.getMTime() * 1000; // convert to milliseconds >…… > } > {code} > {{attr.getMTime}} returns an int, which means the modTime is wrong -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-14431) ModifyTime of FileStatus returned by SFTPFileSystem's getFileStatus method is wrong
[ https://issues.apache.org/jira/browse/HADOOP-14431?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16038123#comment-16038123 ] Brahma Reddy Battula commented on HADOOP-14431: --- I pushed it. Thanks [~Hongyuan Li] for the reminder. > ModifyTime of FileStatus returned by SFTPFileSystem's getFileStatus method is > wrong > --- > > Key: HADOOP-14431 > URL: https://issues.apache.org/jira/browse/HADOOP-14431 > Project: Hadoop Common > Issue Type: Bug > Components: fs >Reporter: Hongyuan Li >Assignee: Hongyuan Li > Fix For: 2.9.0, 3.0.0-alpha4 > > Attachments: HADOOP-14431-001.patch, HADOOP-14431-002.patch > > > {{getFileStatus(ChannelSftp channel, LsEntry sftpFile, Path parentPath)}} > gets the FileStatus as in the code below: > {code} > private FileStatus getFileStatus(ChannelSftp channel, LsEntry sftpFile, > Path parentPath) throws IOException { > SftpATTRS attr = sftpFile.getAttrs(); >…… > long modTime = attr.getMTime() * 1000; // convert to milliseconds >…… > } > {code} > {{attr.getMTime}} returns an int, which means the modTime is wrong -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-14431) ModifyTime of FileStatus returned by SFTPFileSystem's getFileStatus method is wrong
[ https://issues.apache.org/jira/browse/HADOOP-14431?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Brahma Reddy Battula updated HADOOP-14431: -- Summary: ModifyTime of FileStatus returned by SFTPFileSystem's getFileStatus method is wrong (was: the modifyTime of FileStatus returned by SFTPFileSystem's getFileStatus method is wrong) > ModifyTime of FileStatus returned by SFTPFileSystem's getFileStatus method is > wrong > --- > > Key: HADOOP-14431 > URL: https://issues.apache.org/jira/browse/HADOOP-14431 > Project: Hadoop Common > Issue Type: Bug > Components: fs >Reporter: Hongyuan Li >Assignee: Hongyuan Li > Fix For: 2.9.0, 3.0.0-alpha4 > > Attachments: HADOOP-14431-001.patch, HADOOP-14431-002.patch > > > {{getFileStatus(ChannelSftp channel, LsEntry sftpFile, Path parentPath)}} > gets the FileStatus as in the code below: > {code} > private FileStatus getFileStatus(ChannelSftp channel, LsEntry sftpFile, > Path parentPath) throws IOException { > SftpATTRS attr = sftpFile.getAttrs(); >…… > long modTime = attr.getMTime() * 1000; // convert to milliseconds >…… > } > {code} > {{attr.getMTime}} returns an int, which means the modTime is wrong -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-14395) Provide Builder pattern for DistributedFileSystem.append
[ https://issues.apache.org/jira/browse/HADOOP-14395?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16038074#comment-16038074 ] Hadoop QA commented on HADOOP-14395: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 19m 30s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 4 new or modified test files. {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 1m 28s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 31s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m 8s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 56s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 46s{color} | {color:green} trunk passed {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 1m 45s{color} | {color:red} hadoop-common-project/hadoop-common in trunk has 19 extant Findbugs warnings. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 8s{color} | {color:green} trunk passed {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 16s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 4s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m 41s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 10m 41s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 2m 0s{color} | {color:orange} root: The patch generated 1 new + 257 unchanged - 0 fixed = 258 total (was 257) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 51s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 6m 19s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 19s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 8m 8s{color} | {color:red} hadoop-common in the patch failed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 24s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 89m 47s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 44s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black}209m 20s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.fs.TestHarFileSystem | | | hadoop.fs.TestFilterFileSystem | | | hadoop.net.TestClusterTopology | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure140 | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure160 | | | hadoop.hdfs.TestDistributedFileSystem | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:14b5c93 | | JIRA Issue | HADOOP-14395 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12871371/HADOOP-14395.01-trunk.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux 81784d964404 3.13.0-107-generic #154-Ubuntu SMP Tue Dec 20 09:57:27 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 6a28a2b | | Default Java | 1.8.0_131 | | findbugs | v3.1.0-RC1 | | findbugs | https://builds.apache.org/job/PreCommit-HADOOP-Build/12446/artifact/patchprocess/branch-findbugs-hadoop-common-project_hadoop-common-warnings.html | | checkstyle |
[jira] [Commented] (HADOOP-14431) the modifyTime of FileStatus returned by SFTPFileSystem's getFileStatus method is wrong
[ https://issues.apache.org/jira/browse/HADOOP-14431?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16038067#comment-16038067 ] Brahma Reddy Battula commented on HADOOP-14431: --- Looks like it's not pushed. I'll push it. > the modifyTime of FileStatus returned by SFTPFileSystem's getFileStatus > method is wrong > --- > > Key: HADOOP-14431 > URL: https://issues.apache.org/jira/browse/HADOOP-14431 > Project: Hadoop Common > Issue Type: Bug > Components: fs >Reporter: Hongyuan Li >Assignee: Hongyuan Li > Fix For: 2.9.0, 3.0.0-alpha4 > > Attachments: HADOOP-14431-001.patch, HADOOP-14431-002.patch > > > {{getFileStatus(ChannelSftp channel, LsEntry sftpFile, Path parentPath)}} > gets the FileStatus as in the code below: > {code} > private FileStatus getFileStatus(ChannelSftp channel, LsEntry sftpFile, > Path parentPath) throws IOException { > SftpATTRS attr = sftpFile.getAttrs(); >…… > long modTime = attr.getMTime() * 1000; // convert to milliseconds >…… > } > {code} > {{attr.getMTime}} returns an int, which means the modTime is wrong -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-14486) TestSFTPFileSystem#testGetAccessTime test failure
[ https://issues.apache.org/jira/browse/HADOOP-14486?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16038069#comment-16038069 ] Brahma Reddy Battula commented on HADOOP-14486: --- will check and update in HADOOP-14431. > TestSFTPFileSystem#testGetAccessTime test failure > - > > Key: HADOOP-14486 > URL: https://issues.apache.org/jira/browse/HADOOP-14486 > Project: Hadoop Common > Issue Type: Bug >Affects Versions: 3.0.0-alpha4 > Environment: Ubuntu 14.04 > x86, ppc64le > $ java -version > openjdk version "1.8.0_111" > OpenJDK Runtime Environment (build 1.8.0_111-8u111-b14-3~14.04.1-b14) > OpenJDK 64-Bit Server VM (build 25.111-b14, mixed mode) >Reporter: Sonia Garudi >Assignee: Hongyuan Li > > The TestSFTPFileSystem#testGetAccessTime test fails consistently with the > error below: > {code} > java.lang.AssertionError: expected:<1496496040072> but was:<149649604> > at org.junit.Assert.fail(Assert.java:88) > at org.junit.Assert.failNotEquals(Assert.java:743) > at org.junit.Assert.assertEquals(Assert.java:118) > at org.junit.Assert.assertEquals(Assert.java:555) > at org.junit.Assert.assertEquals(Assert.java:542) > at > org.apache.hadoop.fs.sftp.TestSFTPFileSystem.testGetAccessTime(TestSFTPFileSystem.java:319) > {code} -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Comment Edited] (HADOOP-14486) TestSFTPFileSystem#testGetAccessTime test failure
[ https://issues.apache.org/jira/browse/HADOOP-14486?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16038045#comment-16038045 ] Hongyuan Li edited comment on HADOOP-14486 at 6/6/17 3:22 AM: -- [~brahmareddy] [~ste...@apache.org] on trunk and branch-2, it seems the HADOOP-14431 patch has not been committed. What happened? was (Author: hongyuan li): [~brahmareddy] [~ste...@apache.org] on trunk and branch-2 ,seemed that HADOOP-14431 patch has not submitted. what happend? > TestSFTPFileSystem#testGetAccessTime test failure > - > > Key: HADOOP-14486 > URL: https://issues.apache.org/jira/browse/HADOOP-14486 > Project: Hadoop Common > Issue Type: Bug >Affects Versions: 3.0.0-alpha4 > Environment: Ubuntu 14.04 > x86, ppc64le > $ java -version > openjdk version "1.8.0_111" > OpenJDK Runtime Environment (build 1.8.0_111-8u111-b14-3~14.04.1-b14) > OpenJDK 64-Bit Server VM (build 25.111-b14, mixed mode) >Reporter: Sonia Garudi >Assignee: Hongyuan Li > > The TestSFTPFileSystem#testGetAccessTime test fails consistently with the > error below: > {code} > java.lang.AssertionError: expected:<1496496040072> but was:<149649604> > at org.junit.Assert.fail(Assert.java:88) > at org.junit.Assert.failNotEquals(Assert.java:743) > at org.junit.Assert.assertEquals(Assert.java:118) > at org.junit.Assert.assertEquals(Assert.java:555) > at org.junit.Assert.assertEquals(Assert.java:542) > at > org.apache.hadoop.fs.sftp.TestSFTPFileSystem.testGetAccessTime(TestSFTPFileSystem.java:319) > {code} -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Comment Edited] (HADOOP-14431) the modifyTime of FileStatus returned by SFTPFileSystem's getFileStatus method is wrong
[ https://issues.apache.org/jira/browse/HADOOP-14431?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16038044#comment-16038044 ] Hongyuan Li edited comment on HADOOP-14431 at 6/6/17 3:22 AM: -- [~brahmareddy] [~ste...@apache.org] on trunk and branch-2, it seems the HADOOP-14431 patch has not been committed. What happened? was (Author: hongyuan li): [~brahmareddy] [~ste...@apache.org] on trunk and branch-2 ,seemed that HADOOP-14431 patch has not submitted. what happend? > the modifyTime of FileStatus returned by SFTPFileSystem's getFileStatus > method is wrong > --- > > Key: HADOOP-14431 > URL: https://issues.apache.org/jira/browse/HADOOP-14431 > Project: Hadoop Common > Issue Type: Bug > Components: fs >Reporter: Hongyuan Li >Assignee: Hongyuan Li > Fix For: 2.9.0, 3.0.0-alpha4 > > Attachments: HADOOP-14431-001.patch, HADOOP-14431-002.patch > > > {{getFileStatus(ChannelSftp channel, LsEntry sftpFile, Path parentPath)}} > gets the FileStatus as in the code below: > {code} > private FileStatus getFileStatus(ChannelSftp channel, LsEntry sftpFile, > Path parentPath) throws IOException { > SftpATTRS attr = sftpFile.getAttrs(); >…… > long modTime = attr.getMTime() * 1000; // convert to milliseconds >…… > } > {code} > {{attr.getMTime}} returns an int, which means the modTime is wrong -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Comment Edited] (HADOOP-14431) the modifyTime of FileStatus returned by SFTPFileSystem's getFileStatus method is wrong
[ https://issues.apache.org/jira/browse/HADOOP-14431?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16038044#comment-16038044 ] Hongyuan Li edited comment on HADOOP-14431 at 6/6/17 3:21 AM: -- [~brahmareddy] [~ste...@apache.org] on trunk and branch-2, it seems the HADOOP-14431 patch has not been committed. What happened? was (Author: hongyuan li): [~brahmareddy] on trunk and branch-2, {{long modTime = attr.getMTime() * 1000; // convert to milliseconds}} is still wring, what happend? > the modifyTime of FileStatus returned by SFTPFileSystem's getFileStatus > method is wrong > --- > > Key: HADOOP-14431 > URL: https://issues.apache.org/jira/browse/HADOOP-14431 > Project: Hadoop Common > Issue Type: Bug > Components: fs >Reporter: Hongyuan Li >Assignee: Hongyuan Li > Fix For: 2.9.0, 3.0.0-alpha4 > > Attachments: HADOOP-14431-001.patch, HADOOP-14431-002.patch > > > {{getFileStatus(ChannelSftp channel, LsEntry sftpFile, Path parentPath)}} > gets the FileStatus as in the code below: > {code} > private FileStatus getFileStatus(ChannelSftp channel, LsEntry sftpFile, > Path parentPath) throws IOException { > SftpATTRS attr = sftpFile.getAttrs(); >…… > long modTime = attr.getMTime() * 1000; // convert to milliseconds >…… > } > {code} > {{attr.getMTime}} returns an int, which means the modTime is wrong -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Comment Edited] (HADOOP-14486) TestSFTPFileSystem#testGetAccessTime test failure
[ https://issues.apache.org/jira/browse/HADOOP-14486?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16038045#comment-16038045 ] Hongyuan Li edited comment on HADOOP-14486 at 6/6/17 3:21 AM: -- [~brahmareddy] [~ste...@apache.org] on trunk and branch-2, it seems the HADOOP-14431 patch has not been committed. What happened? was (Author: hongyuan li): [~brahmareddy] [~Steve Loughran] on trunk and branch-2 ,seemed that HADOOP-14431 patch has not submitted. what happend? > TestSFTPFileSystem#testGetAccessTime test failure > - > > Key: HADOOP-14486 > URL: https://issues.apache.org/jira/browse/HADOOP-14486 > Project: Hadoop Common > Issue Type: Bug >Affects Versions: 3.0.0-alpha4 > Environment: Ubuntu 14.04 > x86, ppc64le > $ java -version > openjdk version "1.8.0_111" > OpenJDK Runtime Environment (build 1.8.0_111-8u111-b14-3~14.04.1-b14) > OpenJDK 64-Bit Server VM (build 25.111-b14, mixed mode) >Reporter: Sonia Garudi >Assignee: Hongyuan Li > > The TestSFTPFileSystem#testGetAccessTime test fails consistently with the > error below: > {code} > java.lang.AssertionError: expected:<1496496040072> but was:<149649604> > at org.junit.Assert.fail(Assert.java:88) > at org.junit.Assert.failNotEquals(Assert.java:743) > at org.junit.Assert.assertEquals(Assert.java:118) > at org.junit.Assert.assertEquals(Assert.java:555) > at org.junit.Assert.assertEquals(Assert.java:542) > at > org.apache.hadoop.fs.sftp.TestSFTPFileSystem.testGetAccessTime(TestSFTPFileSystem.java:319) > {code} -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Comment Edited] (HADOOP-14486) TestSFTPFileSystem#testGetAccessTime test failure
[ https://issues.apache.org/jira/browse/HADOOP-14486?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16038045#comment-16038045 ] Hongyuan Li edited comment on HADOOP-14486 at 6/6/17 3:21 AM: -- [~brahmareddy] [~Steve Loughran] on trunk and branch-2, it seems the HADOOP-14431 patch has not been committed. What happened? was (Author: hongyuan li): [~brahmareddy] [~Steve Loughran] on trunk and branch-2, {{long modTime = attr.getMTime() * 1000; // convert to milliseconds}} is still wring, what happend? > TestSFTPFileSystem#testGetAccessTime test failure > - > > Key: HADOOP-14486 > URL: https://issues.apache.org/jira/browse/HADOOP-14486 > Project: Hadoop Common > Issue Type: Bug >Affects Versions: 3.0.0-alpha4 > Environment: Ubuntu 14.04 > x86, ppc64le > $ java -version > openjdk version "1.8.0_111" > OpenJDK Runtime Environment (build 1.8.0_111-8u111-b14-3~14.04.1-b14) > OpenJDK 64-Bit Server VM (build 25.111-b14, mixed mode) >Reporter: Sonia Garudi >Assignee: Hongyuan Li > > The TestSFTPFileSystem#testGetAccessTime test fails consistently with the > error below: > {code} > java.lang.AssertionError: expected:<1496496040072> but was:<149649604> > at org.junit.Assert.fail(Assert.java:88) > at org.junit.Assert.failNotEquals(Assert.java:743) > at org.junit.Assert.assertEquals(Assert.java:118) > at org.junit.Assert.assertEquals(Assert.java:555) > at org.junit.Assert.assertEquals(Assert.java:542) > at > org.apache.hadoop.fs.sftp.TestSFTPFileSystem.testGetAccessTime(TestSFTPFileSystem.java:319) > {code} -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-14486) TestSFTPFileSystem#testGetAccessTime test failure
[ https://issues.apache.org/jira/browse/HADOOP-14486?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16038045#comment-16038045 ] Hongyuan Li commented on HADOOP-14486: -- [~brahmareddy] [~Steve Loughran] on trunk and branch-2, {{long modTime = attr.getMTime() * 1000; // convert to milliseconds}} is still wrong. What happened? > TestSFTPFileSystem#testGetAccessTime test failure > - > > Key: HADOOP-14486 > URL: https://issues.apache.org/jira/browse/HADOOP-14486 > Project: Hadoop Common > Issue Type: Bug >Affects Versions: 3.0.0-alpha4 > Environment: Ubuntu 14.04 > x86, ppc64le > $ java -version > openjdk version "1.8.0_111" > OpenJDK Runtime Environment (build 1.8.0_111-8u111-b14-3~14.04.1-b14) > OpenJDK 64-Bit Server VM (build 25.111-b14, mixed mode) >Reporter: Sonia Garudi >Assignee: Hongyuan Li > > The TestSFTPFileSystem#testGetAccessTime test fails consistently with the > error below: > {code} > java.lang.AssertionError: expected:<1496496040072> but was:<149649604> > at org.junit.Assert.fail(Assert.java:88) > at org.junit.Assert.failNotEquals(Assert.java:743) > at org.junit.Assert.assertEquals(Assert.java:118) > at org.junit.Assert.assertEquals(Assert.java:555) > at org.junit.Assert.assertEquals(Assert.java:542) > at > org.apache.hadoop.fs.sftp.TestSFTPFileSystem.testGetAccessTime(TestSFTPFileSystem.java:319) > {code} -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-14486) TestSFTPFileSystem#testGetAccessTime test failure
[ https://issues.apache.org/jira/browse/HADOOP-14486?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Hongyuan Li updated HADOOP-14486: - Attachment: (was: HADOOP-14486-001.patch) > TestSFTPFileSystem#testGetAccessTime test failure > - > > Key: HADOOP-14486 > URL: https://issues.apache.org/jira/browse/HADOOP-14486 > Project: Hadoop Common > Issue Type: Bug >Affects Versions: 3.0.0-alpha4 > Environment: Ubuntu 14.04 > x86, ppc64le > $ java -version > openjdk version "1.8.0_111" > OpenJDK Runtime Environment (build 1.8.0_111-8u111-b14-3~14.04.1-b14) > OpenJDK 64-Bit Server VM (build 25.111-b14, mixed mode) >Reporter: Sonia Garudi >Assignee: Hongyuan Li > > The TestSFTPFileSystem#testGetAccessTime test fails consistently with the > error below: > {code} > java.lang.AssertionError: expected:<1496496040072> but was:<149649604> > at org.junit.Assert.fail(Assert.java:88) > at org.junit.Assert.failNotEquals(Assert.java:743) > at org.junit.Assert.assertEquals(Assert.java:118) > at org.junit.Assert.assertEquals(Assert.java:555) > at org.junit.Assert.assertEquals(Assert.java:542) > at > org.apache.hadoop.fs.sftp.TestSFTPFileSystem.testGetAccessTime(TestSFTPFileSystem.java:319) > {code} -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-14431) the modifyTime of FileStatus returned by SFTPFileSystem's getFileStatus method is wrong
[ https://issues.apache.org/jira/browse/HADOOP-14431?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16038044#comment-16038044 ] Hongyuan Li commented on HADOOP-14431: -- [~brahmareddy] On trunk and branch-2, {{long modTime = attr.getMTime() * 1000; // convert to milliseconds}} is still wrong. What happened? > the modifyTime of FileStatus returned by SFTPFileSystem's getFileStatus > method is wrong > --- > > Key: HADOOP-14431 > URL: https://issues.apache.org/jira/browse/HADOOP-14431 > Project: Hadoop Common > Issue Type: Bug > Components: fs >Reporter: Hongyuan Li >Assignee: Hongyuan Li > Fix For: 2.9.0, 3.0.0-alpha4 > > Attachments: HADOOP-14431-001.patch, HADOOP-14431-002.patch > > > {{getFileStatus(ChannelSftp channel, LsEntry sftpFile, Path parentPath)}} > gets the FileStatus with the code below: > {code} > private FileStatus getFileStatus(ChannelSftp channel, LsEntry sftpFile, > Path parentPath) throws IOException { > SftpATTRS attr = sftpFile.getAttrs(); >…… > long modTime = attr.getMTime() * 1000; // convert to milliseconds >…… > } > {code} > Here {{attr.getMTime}} returns an int, which means the modTime is wrong -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
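The overflow behind HADOOP-14431 is easy to reproduce in isolation. The sketch below is illustrative only — it does not use the real jsch {{SftpATTRS}} class, and {{mtimeSeconds}} stands in for {{attr.getMTime()}}. The point is that int-by-int multiplication is done in 32-bit arithmetic and overflows before the result is widened to long, while the {{1000L}} form forces 64-bit arithmetic first:

```java
// Illustrative sketch only (not Hadoop or jsch code): mtimeSeconds stands in
// for SftpATTRS.getMTime(), which returns the modification time in whole
// seconds as an int.
public class MTimeOverflow {
    // Buggy form: int * int is evaluated in 32-bit arithmetic, overflows
    // for any modern timestamp, and only then is widened to long.
    static long buggy(int mtimeSeconds) {
        return mtimeSeconds * 1000;
    }

    // Fixed form: the long literal promotes the multiplication to 64-bit
    // arithmetic before it happens, so no overflow occurs.
    static long fixed(int mtimeSeconds) {
        return mtimeSeconds * 1000L;
    }

    public static void main(String[] args) {
        int seconds = 1496496040; // roughly June 2017, like the test value above
        System.out.println(buggy(seconds)); // wrong: overflowed
        System.out.println(fixed(seconds)); // 1496496040000
    }
}
```

This matches the symptom in the linked test failure: the expected millisecond value has more digits than the actual one, because the overflowed product no longer represents milliseconds.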
[jira] [Commented] (HADOOP-14486) TestSFTPFileSystem#testGetAccessTime test failure
[ https://issues.apache.org/jira/browse/HADOOP-14486?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16038027#comment-16038027 ] Hadoop QA commented on HADOOP-14486: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s{color} | {color:blue} Docker mode activated. {color} | | {color:red}-1{color} | {color:red} patch {color} | {color:red} 0m 9s{color} | {color:red} HADOOP-14486 does not apply to trunk. Rebase required? Wrong Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} | \\ \\ || Subsystem || Report/Notes || | JIRA Issue | HADOOP-14486 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12871464/HADOOP-14486-001.patch | | Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/12447/console | | Powered by | Apache Yetus 0.5.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. 
> TestSFTPFileSystem#testGetAccessTime test failure > - > > Key: HADOOP-14486 > URL: https://issues.apache.org/jira/browse/HADOOP-14486 > Project: Hadoop Common > Issue Type: Bug >Affects Versions: 3.0.0-alpha4 > Environment: Ubuntu 14.04 > x86, ppc64le > $ java -version > openjdk version "1.8.0_111" > OpenJDK Runtime Environment (build 1.8.0_111-8u111-b14-3~14.04.1-b14) > OpenJDK 64-Bit Server VM (build 25.111-b14, mixed mode) >Reporter: Sonia Garudi >Assignee: Hongyuan Li > Attachments: HADOOP-14486-001.patch > > > The TestSFTPFileSystem#testGetAccessTime test fails consistently with the > error below: > {code} > java.lang.AssertionError: expected:<1496496040072> but was:<149649604> > at org.junit.Assert.fail(Assert.java:88) > at org.junit.Assert.failNotEquals(Assert.java:743) > at org.junit.Assert.assertEquals(Assert.java:118) > at org.junit.Assert.assertEquals(Assert.java:555) > at org.junit.Assert.assertEquals(Assert.java:542) > at > org.apache.hadoop.fs.sftp.TestSFTPFileSystem.testGetAccessTime(TestSFTPFileSystem.java:319) > {code} -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-14486) TestSFTPFileSystem#testGetAccessTime test failure
[ https://issues.apache.org/jira/browse/HADOOP-14486?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Hongyuan Li updated HADOOP-14486: - Status: Patch Available (was: Open) > TestSFTPFileSystem#testGetAccessTime test failure > - > > Key: HADOOP-14486 > URL: https://issues.apache.org/jira/browse/HADOOP-14486 > Project: Hadoop Common > Issue Type: Bug >Affects Versions: 3.0.0-alpha4 > Environment: Ubuntu 14.04 > x86, ppc64le > $ java -version > openjdk version "1.8.0_111" > OpenJDK Runtime Environment (build 1.8.0_111-8u111-b14-3~14.04.1-b14) > OpenJDK 64-Bit Server VM (build 25.111-b14, mixed mode) >Reporter: Sonia Garudi >Assignee: Hongyuan Li > Attachments: HADOOP-14486-001.patch > > > The TestSFTPFileSystem#testGetAccessTime test fails consistently with the > error below: > {code} > java.lang.AssertionError: expected:<1496496040072> but was:<149649604> > at org.junit.Assert.fail(Assert.java:88) > at org.junit.Assert.failNotEquals(Assert.java:743) > at org.junit.Assert.assertEquals(Assert.java:118) > at org.junit.Assert.assertEquals(Assert.java:555) > at org.junit.Assert.assertEquals(Assert.java:542) > at > org.apache.hadoop.fs.sftp.TestSFTPFileSystem.testGetAccessTime(TestSFTPFileSystem.java:319) > {code} -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-14486) TestSFTPFileSystem#testGetAccessTime test failure
[ https://issues.apache.org/jira/browse/HADOOP-14486?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Hongyuan Li updated HADOOP-14486: - Attachment: HADOOP-14486-001.patch > TestSFTPFileSystem#testGetAccessTime test failure > - > > Key: HADOOP-14486 > URL: https://issues.apache.org/jira/browse/HADOOP-14486 > Project: Hadoop Common > Issue Type: Bug >Affects Versions: 3.0.0-alpha4 > Environment: Ubuntu 14.04 > x86, ppc64le > $ java -version > openjdk version "1.8.0_111" > OpenJDK Runtime Environment (build 1.8.0_111-8u111-b14-3~14.04.1-b14) > OpenJDK 64-Bit Server VM (build 25.111-b14, mixed mode) >Reporter: Sonia Garudi >Assignee: Hongyuan Li > Attachments: HADOOP-14486-001.patch > > > The TestSFTPFileSystem#testGetAccessTime test fails consistently with the > error below: > {code} > java.lang.AssertionError: expected:<1496496040072> but was:<149649604> > at org.junit.Assert.fail(Assert.java:88) > at org.junit.Assert.failNotEquals(Assert.java:743) > at org.junit.Assert.assertEquals(Assert.java:118) > at org.junit.Assert.assertEquals(Assert.java:555) > at org.junit.Assert.assertEquals(Assert.java:542) > at > org.apache.hadoop.fs.sftp.TestSFTPFileSystem.testGetAccessTime(TestSFTPFileSystem.java:319) > {code} -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Comment Edited] (HADOOP-14486) TestSFTPFileSystem#testGetAccessTime test failure
[ https://issues.apache.org/jira/browse/HADOOP-14486?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16037996#comment-16037996 ] Hongyuan Li edited comment on HADOOP-14486 at 6/6/17 2:34 AM: -- Was this perhaps run on a Windows system? Some systems report access time in milliseconds, but most report it in seconds, and the SFTP API can only get the time in seconds. Sure, modifying the code can avoid this, [~ste...@apache.org]. I will fix this, including the modification-time unit test. was (Author: hongyuan li): Was this perhaps run on a Windows system? Some systems report access time in milliseconds, but most report it in seconds; the SFTP API can only get the time in seconds, [~ste...@apache.org]. I will fix this, including the modification-time unit test. > TestSFTPFileSystem#testGetAccessTime test failure > - > > Key: HADOOP-14486 > URL: https://issues.apache.org/jira/browse/HADOOP-14486 > Project: Hadoop Common > Issue Type: Bug >Affects Versions: 3.0.0-alpha4 > Environment: Ubuntu 14.04 > x86, ppc64le > $ java -version > openjdk version "1.8.0_111" > OpenJDK Runtime Environment (build 1.8.0_111-8u111-b14-3~14.04.1-b14) > OpenJDK 64-Bit Server VM (build 25.111-b14, mixed mode) >Reporter: Sonia Garudi >Assignee: Hongyuan Li > > The TestSFTPFileSystem#testGetAccessTime test fails consistently with the > error below: > {code} > java.lang.AssertionError: expected:<1496496040072> but was:<149649604> > at org.junit.Assert.fail(Assert.java:88) > at org.junit.Assert.failNotEquals(Assert.java:743) > at org.junit.Assert.assertEquals(Assert.java:118) > at org.junit.Assert.assertEquals(Assert.java:555) > at org.junit.Assert.assertEquals(Assert.java:542) > at > org.apache.hadoop.fs.sftp.TestSFTPFileSystem.testGetAccessTime(TestSFTPFileSystem.java:319) > {code} -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Comment Edited] (HADOOP-14486) TestSFTPFileSystem#testGetAccessTime test failure
[ https://issues.apache.org/jira/browse/HADOOP-14486?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16037996#comment-16037996 ] Hongyuan Li edited comment on HADOOP-14486 at 6/6/17 2:34 AM: -- Was this perhaps run on a Windows system? Some systems report access time in milliseconds, but most report it in seconds; the SFTP API can only get the time in seconds, [~ste...@apache.org]. I will fix this, including the modification-time unit test. was (Author: hongyuan li): Was this perhaps run on a Windows system? The SFTP API can only get the time in seconds, [~ste...@apache.org]. I will fix this, including the modification-time unit test. > TestSFTPFileSystem#testGetAccessTime test failure > - > > Key: HADOOP-14486 > URL: https://issues.apache.org/jira/browse/HADOOP-14486 > Project: Hadoop Common > Issue Type: Bug >Affects Versions: 3.0.0-alpha4 > Environment: Ubuntu 14.04 > x86, ppc64le > $ java -version > openjdk version "1.8.0_111" > OpenJDK Runtime Environment (build 1.8.0_111-8u111-b14-3~14.04.1-b14) > OpenJDK 64-Bit Server VM (build 25.111-b14, mixed mode) >Reporter: Sonia Garudi >Assignee: Hongyuan Li > > The TestSFTPFileSystem#testGetAccessTime test fails consistently with the > error below: > {code} > java.lang.AssertionError: expected:<1496496040072> but was:<149649604> > at org.junit.Assert.fail(Assert.java:88) > at org.junit.Assert.failNotEquals(Assert.java:743) > at org.junit.Assert.assertEquals(Assert.java:118) > at org.junit.Assert.assertEquals(Assert.java:555) > at org.junit.Assert.assertEquals(Assert.java:542) > at > org.apache.hadoop.fs.sftp.TestSFTPFileSystem.testGetAccessTime(TestSFTPFileSystem.java:319) > {code} -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-14492) RpcDetailedMetrics and NameNodeMetrics use different rate metrics abstraction cause the Xavgtime confused
[ https://issues.apache.org/jira/browse/HADOOP-14492?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Lantao Jin updated HADOOP-14492: Description: For performance purposes, [HADOOP-13782|https://issues.apache.org/jira/browse/HADOOP-13782] changed the metrics behaviour in {{RpcDetailedMetrics}}. In 2.7.4: {code} public class RpcDetailedMetrics { @Metric MutableRatesWithAggregation rates; {code} In the old version: {code} public class RpcDetailedMetrics { @Metric MutableRates rates; {code} But {{NameNodeMetrics}} still uses {{MutableRate}} in both the new and old versions: {code} public class NameNodeMetrics { @Metric("Block report") MutableRate blockReport; {code} This causes the JMX metrics to differ greatly between them. {quote} name: "Hadoop:service=NameNode,name=RpcDetailedActivityForPort8030", modelerType: "RpcDetailedActivityForPort8030", tag.port: "8030", tag.Context: "rpcdetailed", ... BlockReportNumOps: 237634, BlockReportAvgTime: 1382, ... name: "Hadoop:service=NameNode,name=NameNodeActivity", modelerType: "NameNodeActivity", tag.ProcessName: "NameNode", ... BlockReportNumOps: 2592932, BlockReportAvgTime: 19.258064516129032, ... {quote} In the old version they are correct. was: For performance purposes, [HADOOP-13782|https://issues.apache.org/jira/browse/HADOOP-13782] changed the metrics behaviour in {{RpcDetailedMetrics}}. In 2.7.4: {code} public class RpcDetailedMetrics { @Metric MutableRatesWithAggregation rates; {code} In the old version: {code} public class RpcDetailedMetrics { @Metric MutableRates rates; {code} But {{NameNodeMetrics}} still uses {{MutableRate}} in both the new and old versions: {code} public class NameNodeMetrics { @Metric("Block report") MutableRate blockReport; {code} This causes the JMX metrics to differ greatly between them. {quote} { name: "Hadoop:service=NameNode,name=RpcDetailedActivityForPort8030", modelerType: "RpcDetailedActivityForPort8030", tag.port: "8030", tag.Context: "rpcdetailed", ...
BlockReportNumOps: 237634, BlockReportAvgTime: 1382, ... } { name: "Hadoop:service=NameNode,name=NameNodeActivity", modelerType: "NameNodeActivity", tag.ProcessName: "NameNode", ... BlockReportNumOps: 2592932, BlockReportAvgTime: 19.258064516129032, ... } {quote} In the old version they are correct. > RpcDetailedMetrics and NameNodeMetrics use different rate metrics abstraction > cause the Xavgtime confused > - > > Key: HADOOP-14492 > URL: https://issues.apache.org/jira/browse/HADOOP-14492 > Project: Hadoop Common > Issue Type: Bug > Components: metrics >Affects Versions: 2.8.0, 2.7.4 >Reporter: Lantao Jin >Priority: Minor > > For performance purposes, > [HADOOP-13782|https://issues.apache.org/jira/browse/HADOOP-13782] changed the > metrics behaviour in {{RpcDetailedMetrics}}. > In 2.7.4: > {code} > public class RpcDetailedMetrics { > @Metric MutableRatesWithAggregation rates; > {code} > In the old version: > {code} > public class RpcDetailedMetrics { > @Metric MutableRates rates; > {code} > But {{NameNodeMetrics}} still uses {{MutableRate}} in both the new and old > versions: > {code} > public class NameNodeMetrics { > @Metric("Block report") MutableRate blockReport; > {code} > This causes the JMX metrics to differ greatly between them. > {quote} > name: "Hadoop:service=NameNode,name=RpcDetailedActivityForPort8030", > modelerType: "RpcDetailedActivityForPort8030", > tag.port: "8030", > tag.Context: "rpcdetailed", > ... > BlockReportNumOps: 237634, > BlockReportAvgTime: 1382, > ... > name: "Hadoop:service=NameNode,name=NameNodeActivity", > modelerType: "NameNodeActivity", > tag.ProcessName: "NameNode", > ... > BlockReportNumOps: 2592932, > BlockReportAvgTime: 19.258064516129032, > ... > {quote} > In the old version they are correct. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
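The JMX discrepancy described above — the same operation showing wildly different NumOps and AvgTime values — is the kind of artifact you get when one collector averages over its whole lifetime while another averages only over the last snapshot interval. The following is a generic illustration of those two averaging semantics; it is an assumption for exposition, not the actual {{MutableRate}} or {{MutableRatesWithAggregation}} implementation:

```java
// Generic illustration (assumed semantics, not Hadoop's metrics code):
// two ways of computing an "average time" over the same sample stream.
public class AvgSemantics {
    // Average over every sample ever recorded (lifetime aggregate).
    static double cumulativeAvg(long[] samples) {
        long sum = 0;
        for (long s : samples) sum += s;
        return (double) sum / samples.length;
    }

    // Average over only the samples recorded since the last snapshot.
    static double lastIntervalAvg(long[] samples, int intervalStart) {
        long sum = 0;
        for (int i = intervalStart; i < samples.length; i++) sum += samples[i];
        return (double) sum / (samples.length - intervalStart);
    }

    public static void main(String[] args) {
        long[] times = {5, 5, 5, 5, 2000}; // one slow operation in the last interval
        System.out.println(cumulativeAvg(times));      // 404.0
        System.out.println(lastIntervalAvg(times, 4)); // 2000.0
    }
}
```

With the same five samples, the two semantics already disagree by a factor of five; over millions of RPCs the reported averages can diverge as dramatically as the 1382 vs 19.25 figures quoted in the issue.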
[jira] [Created] (HADOOP-14492) RpcDetailedMetrics and NameNodeMetrics use different rate metrics abstraction cause the Xavgtime confused
Lantao Jin created HADOOP-14492: --- Summary: RpcDetailedMetrics and NameNodeMetrics use different rate metrics abstraction cause the Xavgtime confused Key: HADOOP-14492 URL: https://issues.apache.org/jira/browse/HADOOP-14492 Project: Hadoop Common Issue Type: Bug Components: metrics Affects Versions: 2.8.0, 2.7.4 Reporter: Lantao Jin Priority: Minor For performance purposes, [HADOOP-13782|https://issues.apache.org/jira/browse/HADOOP-13782] changed the metrics behaviour in {{RpcDetailedMetrics}}. In 2.7.4: {code} public class RpcDetailedMetrics { @Metric MutableRatesWithAggregation rates; {code} In the old version: {code} public class RpcDetailedMetrics { @Metric MutableRates rates; {code} But {{NameNodeMetrics}} still uses {{MutableRate}} in both the new and old versions: {code} public class NameNodeMetrics { @Metric("Block report") MutableRate blockReport; {code} This causes the JMX metrics to differ greatly between them. {quote} { name: "Hadoop:service=NameNode,name=RpcDetailedActivityForPort8030", modelerType: "RpcDetailedActivityForPort8030", tag.port: "8030", tag.Context: "rpcdetailed", ... BlockReportNumOps: 237634, BlockReportAvgTime: 1382, ... } { name: "Hadoop:service=NameNode,name=NameNodeActivity", modelerType: "NameNodeActivity", tag.ProcessName: "NameNode", ... BlockReportNumOps: 2592932, BlockReportAvgTime: 19.258064516129032, ... } {quote} In the old version they are correct. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Comment Edited] (HADOOP-14486) TestSFTPFileSystem#testGetAccessTime test failure
[ https://issues.apache.org/jira/browse/HADOOP-14486?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16037996#comment-16037996 ] Hongyuan Li edited comment on HADOOP-14486 at 6/6/17 2:19 AM: -- Was this perhaps run on a Windows system? The SFTP API can only get the time in seconds, [~ste...@apache.org]. I will fix this, including the modification-time unit test. was (Author: hongyuan li): Was this perhaps run on a Windows system? The SFTP API can only get the time in seconds, [~ste...@apache.org]. Shall I assign this to myself? > TestSFTPFileSystem#testGetAccessTime test failure > - > > Key: HADOOP-14486 > URL: https://issues.apache.org/jira/browse/HADOOP-14486 > Project: Hadoop Common > Issue Type: Bug >Affects Versions: 3.0.0-alpha4 > Environment: Ubuntu 14.04 > x86, ppc64le > $ java -version > openjdk version "1.8.0_111" > OpenJDK Runtime Environment (build 1.8.0_111-8u111-b14-3~14.04.1-b14) > OpenJDK 64-Bit Server VM (build 25.111-b14, mixed mode) >Reporter: Sonia Garudi >Assignee: Hongyuan Li > > The TestSFTPFileSystem#testGetAccessTime test fails consistently with the > error below: > {code} > java.lang.AssertionError: expected:<1496496040072> but was:<149649604> > at org.junit.Assert.fail(Assert.java:88) > at org.junit.Assert.failNotEquals(Assert.java:743) > at org.junit.Assert.assertEquals(Assert.java:118) > at org.junit.Assert.assertEquals(Assert.java:555) > at org.junit.Assert.assertEquals(Assert.java:542) > at > org.apache.hadoop.fs.sftp.TestSFTPFileSystem.testGetAccessTime(TestSFTPFileSystem.java:319) > {code} -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Assigned] (HADOOP-14486) TestSFTPFileSystem#testGetAccessTime test failure
[ https://issues.apache.org/jira/browse/HADOOP-14486?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Hongyuan Li reassigned HADOOP-14486: Assignee: Hongyuan Li > TestSFTPFileSystem#testGetAccessTime test failure > - > > Key: HADOOP-14486 > URL: https://issues.apache.org/jira/browse/HADOOP-14486 > Project: Hadoop Common > Issue Type: Bug >Affects Versions: 3.0.0-alpha4 > Environment: Ubuntu 14.04 > x86, ppc64le > $ java -version > openjdk version "1.8.0_111" > OpenJDK Runtime Environment (build 1.8.0_111-8u111-b14-3~14.04.1-b14) > OpenJDK 64-Bit Server VM (build 25.111-b14, mixed mode) >Reporter: Sonia Garudi >Assignee: Hongyuan Li > > The TestSFTPFileSystem#testGetAccessTime test fails consistently with the > error below: > {code} > java.lang.AssertionError: expected:<1496496040072> but was:<149649604> > at org.junit.Assert.fail(Assert.java:88) > at org.junit.Assert.failNotEquals(Assert.java:743) > at org.junit.Assert.assertEquals(Assert.java:118) > at org.junit.Assert.assertEquals(Assert.java:555) > at org.junit.Assert.assertEquals(Assert.java:542) > at > org.apache.hadoop.fs.sftp.TestSFTPFileSystem.testGetAccessTime(TestSFTPFileSystem.java:319) > {code} -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Comment Edited] (HADOOP-14486) TestSFTPFileSystem#testGetAccessTime test failure
[ https://issues.apache.org/jira/browse/HADOOP-14486?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16037996#comment-16037996 ] Hongyuan Li edited comment on HADOOP-14486 at 6/6/17 2:18 AM: -- Was this perhaps run on a Windows system? The SFTP API can only get the time in seconds, [~ste...@apache.org]. Shall I assign this to myself? was (Author: hongyuan li): Was this perhaps run on a Windows system? The SFTP API cannot get the time to seconds, [ > TestSFTPFileSystem#testGetAccessTime test failure > - > > Key: HADOOP-14486 > URL: https://issues.apache.org/jira/browse/HADOOP-14486 > Project: Hadoop Common > Issue Type: Bug >Affects Versions: 3.0.0-alpha4 > Environment: Ubuntu 14.04 > x86, ppc64le > $ java -version > openjdk version "1.8.0_111" > OpenJDK Runtime Environment (build 1.8.0_111-8u111-b14-3~14.04.1-b14) > OpenJDK 64-Bit Server VM (build 25.111-b14, mixed mode) >Reporter: Sonia Garudi > > The TestSFTPFileSystem#testGetAccessTime test fails consistently with the > error below: > {code} > java.lang.AssertionError: expected:<1496496040072> but was:<149649604> > at org.junit.Assert.fail(Assert.java:88) > at org.junit.Assert.failNotEquals(Assert.java:743) > at org.junit.Assert.assertEquals(Assert.java:118) > at org.junit.Assert.assertEquals(Assert.java:555) > at org.junit.Assert.assertEquals(Assert.java:542) > at > org.apache.hadoop.fs.sftp.TestSFTPFileSystem.testGetAccessTime(TestSFTPFileSystem.java:319) > {code} -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Comment Edited] (HADOOP-14486) TestSFTPFileSystem#testGetAccessTime test failure
[ https://issues.apache.org/jira/browse/HADOOP-14486?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16037996#comment-16037996 ] Hongyuan Li edited comment on HADOOP-14486 at 6/6/17 2:17 AM: -- Was this perhaps run on a Windows system? The SFTP API cannot get the time to seconds, [ was (Author: hongyuan li): This may have been run on a Windows system. > TestSFTPFileSystem#testGetAccessTime test failure > - > > Key: HADOOP-14486 > URL: https://issues.apache.org/jira/browse/HADOOP-14486 > Project: Hadoop Common > Issue Type: Bug >Affects Versions: 3.0.0-alpha4 > Environment: Ubuntu 14.04 > x86, ppc64le > $ java -version > openjdk version "1.8.0_111" > OpenJDK Runtime Environment (build 1.8.0_111-8u111-b14-3~14.04.1-b14) > OpenJDK 64-Bit Server VM (build 25.111-b14, mixed mode) >Reporter: Sonia Garudi > > The TestSFTPFileSystem#testGetAccessTime test fails consistently with the > error below: > {code} > java.lang.AssertionError: expected:<1496496040072> but was:<149649604> > at org.junit.Assert.fail(Assert.java:88) > at org.junit.Assert.failNotEquals(Assert.java:743) > at org.junit.Assert.assertEquals(Assert.java:118) > at org.junit.Assert.assertEquals(Assert.java:555) > at org.junit.Assert.assertEquals(Assert.java:542) > at > org.apache.hadoop.fs.sftp.TestSFTPFileSystem.testGetAccessTime(TestSFTPFileSystem.java:319) > {code} -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-14486) TestSFTPFileSystem#testGetAccessTime test failure
[ https://issues.apache.org/jira/browse/HADOOP-14486?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16037996#comment-16037996 ] Hongyuan Li commented on HADOOP-14486: -- This may have been run on a Windows system. > TestSFTPFileSystem#testGetAccessTime test failure > - > > Key: HADOOP-14486 > URL: https://issues.apache.org/jira/browse/HADOOP-14486 > Project: Hadoop Common > Issue Type: Bug >Affects Versions: 3.0.0-alpha4 > Environment: Ubuntu 14.04 > x86, ppc64le > $ java -version > openjdk version "1.8.0_111" > OpenJDK Runtime Environment (build 1.8.0_111-8u111-b14-3~14.04.1-b14) > OpenJDK 64-Bit Server VM (build 25.111-b14, mixed mode) >Reporter: Sonia Garudi > > The TestSFTPFileSystem#testGetAccessTime test fails consistently with the > error below: > {code} > java.lang.AssertionError: expected:<1496496040072> but was:<149649604> > at org.junit.Assert.fail(Assert.java:88) > at org.junit.Assert.failNotEquals(Assert.java:743) > at org.junit.Assert.assertEquals(Assert.java:118) > at org.junit.Assert.assertEquals(Assert.java:555) > at org.junit.Assert.assertEquals(Assert.java:542) > at > org.apache.hadoop.fs.sftp.TestSFTPFileSystem.testGetAccessTime(TestSFTPFileSystem.java:319) > {code} -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Issue Comment Deleted] (HADOOP-14436) Remove the redundant colon in ViewFs.md
[ https://issues.apache.org/jira/browse/HADOOP-14436?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] maobaolong updated HADOOP-14436: Comment: was deleted (was: [~brahma] Thanks for reviewing this patch again; Jenkins now builds this patch successfully, PTAL.) > Remove the redundant colon in ViewFs.md > --- > > Key: HADOOP-14436 > URL: https://issues.apache.org/jira/browse/HADOOP-14436 > Project: Hadoop Common > Issue Type: Bug > Components: documentation >Affects Versions: 2.7.1, 3.0.0-alpha2 >Reporter: maobaolong >Assignee: maobaolong > Fix For: 2.9.0, 3.0.0-alpha4 > > Attachments: HADOOP-14436.patch > > > A minor mistake can lead beginners the wrong way and drive them away from us. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-14491) Azure has messed doc structure
[ https://issues.apache.org/jira/browse/HADOOP-14491?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16037940#comment-16037940 ] Mingliang Liu commented on HADOOP-14491: Ping [~ajisakaa]. Thanks, > Azure has messed doc structure > -- > > Key: HADOOP-14491 > URL: https://issues.apache.org/jira/browse/HADOOP-14491 > Project: Hadoop Common > Issue Type: Improvement > Components: documentation, fs/azure >Reporter: Mingliang Liu >Assignee: Mingliang Liu > Attachments: HADOOP-14491.000.patch, new.png, old.png > > > # The _WASB Secure mode and configuration_ and _Authorization Support in > WASB_ sections are missing in the navigation > # _Authorization Support in WASB_ should be header level 3 instead of level 2 > # Some of the code format is not specified > # Sample code indent not unified. > Let's use the auto-generated navigation instead of manually updating it, just > as other documents. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-14491) Azure has messed doc structure
[ https://issues.apache.org/jira/browse/HADOOP-14491?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mingliang Liu updated HADOOP-14491: --- Attachment: HADOOP-14491.000.patch new.png old.png > Azure has messed doc structure > -- > > Key: HADOOP-14491 > URL: https://issues.apache.org/jira/browse/HADOOP-14491 > Project: Hadoop Common > Issue Type: Improvement > Components: documentation, fs/azure >Reporter: Mingliang Liu >Assignee: Mingliang Liu > Attachments: HADOOP-14491.000.patch, new.png, old.png > > > # The _WASB Secure mode and configuration_ and _Authorization Support in > WASB_ sections are missing in the navigation > # _Authorization Support in WASB_ should be header level 3 instead of level 2 > # Some of the code format is not specified > # Sample code indent not unified. > Let's use the auto-generated navigation instead of manually updating it, just > as other documents. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Created] (HADOOP-14491) Azure has messed doc structure
Mingliang Liu created HADOOP-14491: -- Summary: Azure has messed doc structure Key: HADOOP-14491 URL: https://issues.apache.org/jira/browse/HADOOP-14491 Project: Hadoop Common Issue Type: Improvement Components: documentation, fs/azure Reporter: Mingliang Liu Assignee: Mingliang Liu # The _WASB Secure mode and configuration_ and _Authorization Support in WASB_ sections are missing in the navigation # _Authorization Support in WASB_ should be header level 3 instead of level 2 # Some of the code format is not specified # Sample code indent not unified. Let's use the auto-generated navigation instead of manually updating it, just as other documents. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-14490) Upgrade azure-storage sdk version >5.2.0
[ https://issues.apache.org/jira/browse/HADOOP-14490?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mingliang Liu updated HADOOP-14490: --- Component/s: fs/azure > Upgrade azure-storage sdk version >5.2.0 > > > Key: HADOOP-14490 > URL: https://issues.apache.org/jira/browse/HADOOP-14490 > Project: Hadoop Common > Issue Type: Improvement > Components: fs/azure >Reporter: Mingliang Liu >Assignee: Rajesh Balamohan > > As required by [HADOOP-14478], we're expecting the {{BlobInputStream}} to > support advanced {{readFully()}} by taking hints of mark. This can only be > done by means of sdk version bump. > cc: [~rajesh.balamohan]. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-14490) Upgrade azure-storage sdk version >5.2.0
[ https://issues.apache.org/jira/browse/HADOOP-14490?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16037933#comment-16037933 ] Mingliang Liu commented on HADOOP-14490: As the related patch is mainly contributed by you, I assign this one to you. Feel free to un-assign, [~rajesh.balamohan]. > Upgrade azure-storage sdk version >5.2.0 > > > Key: HADOOP-14490 > URL: https://issues.apache.org/jira/browse/HADOOP-14490 > Project: Hadoop Common > Issue Type: Improvement >Reporter: Mingliang Liu >Assignee: Rajesh Balamohan > > As required by [HADOOP-14478], we're expecting the {{BlobInputStream}} to > support advanced {{readFully()}} by taking hints of mark. This can only be > done by means of sdk version bump. > cc: [~rajesh.balamohan]. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Assigned] (HADOOP-14490) Upgrade azure-storage sdk version >5.2.0
[ https://issues.apache.org/jira/browse/HADOOP-14490?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mingliang Liu reassigned HADOOP-14490: -- Assignee: Rajesh Balamohan
[jira] [Commented] (HADOOP-14478) Optimize NativeAzureFsInputStream for positional reads
[ https://issues.apache.org/jira/browse/HADOOP-14478?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16037914#comment-16037914 ] Rajesh Balamohan commented on HADOOP-14478: --- [~liuml07] - The perf improvement would be observed when {{BlobInputStream}} is fixed. Thanks for creating HADOOP-14490. > Optimize NativeAzureFsInputStream for positional reads > -- > > Key: HADOOP-14478 > URL: https://issues.apache.org/jira/browse/HADOOP-14478 > Project: Hadoop Common > Issue Type: Bug > Components: fs/azure >Reporter: Rajesh Balamohan >Assignee: Rajesh Balamohan > Fix For: 2.9.0, 3.0.0-alpha4 > > Attachments: HADOOP-14478.001.patch, HADOOP-14478.002.patch, > HADOOP-14478.003.patch > > > Azure's {{BlobInputStream}} internally buffers 4 MB of data irrespective of > the data length requested. This is beneficial for sequential reads. However, > for positional reads (seek to a specific location, read x bytes, seek back > to the original location) it may not be, and might even download a lot more > data that is never used. > It would be good to override {{readFully(long position, byte[] buffer, int > offset, int length)}} for {{NativeAzureFsInputStream}} and make use of > {{mark(readLimit)}} as a hint to Azure's BlobInputStream. > BlobInputStream reference: > https://github.com/Azure/azure-storage-java/blob/master/microsoft-azure-storage/src/com/microsoft/azure/storage/blob/BlobInputStream.java#L448 > BlobInputStream could use this hint later to determine the amount of data to > read ahead. Changes to BlobInputStream will not be addressed in this JIRA.
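The positional-read pattern described in HADOOP-14478 can be sketched in plain Java. Note this is an illustrative sketch only: {{SeekableStream}} below is a hypothetical stand-in for {{NativeAzureFsInputStream}}, not the actual Hadoop or Azure API. The point is the seek/read/seek-back dance, with {{mark(len)}} issued first as a read-ahead hint so a hint-aware blob stream need not buffer a full 4 MB.

```java
/** Illustrative sketch only; SeekableStream is a stand-in, not the Hadoop/Azure API. */
public class PositionalRead {

  /** Tiny in-memory seekable stream playing the role of NativeAzureFsInputStream. */
  public static class SeekableStream {
    private final byte[] data;
    private int pos;
    public SeekableStream(byte[] data) { this.data = data; }
    public long getPos() { return pos; }
    public void seek(long p) { pos = (int) p; }
    /** Read-ahead hint: a hint-aware blob stream could cap its buffering at readLimit. */
    public void mark(int readLimit) { }
    public int read(byte[] b, int off, int len) {
      if (pos >= data.length) return -1;
      int n = Math.min(len, data.length - pos);
      System.arraycopy(data, pos, b, off, n);
      pos += n;
      return n;
    }
  }

  /** Positional readFully: remember the offset, seek, read len bytes, seek back. */
  public static void readFully(SeekableStream in, long position,
                               byte[] buf, int off, int len) {
    long oldPos = in.getPos();
    try {
      in.mark(len);                 // hint: only len bytes are needed, not 4 MB
      in.seek(position);
      int n = 0;
      while (n < len) {
        int r = in.read(buf, off + n, len - n);
        if (r < 0) throw new RuntimeException("EOF before reading " + len + " bytes");
        n += r;
      }
    } finally {
      in.seek(oldPos);              // sequential callers see an untouched offset
    }
  }

  public static void main(String[] args) {
    SeekableStream s = new SeekableStream("abcdefgh".getBytes());
    s.seek(2);                      // caller is in the middle of a sequential read
    byte[] out = new byte[3];
    readFully(s, 5, out, 0, 3);     // positional read of bytes 5..7 ("fgh")
    System.out.println(new String(out) + " pos=" + s.getPos());
  }
}
```

Because the original offset is restored in a {{finally}} block, a caller interleaving sequential and positional reads never observes a moved stream position.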
[jira] [Closed] (HADOOP-14473) Optimize NativeAzureFileSystem::seek for forward seeks
[ https://issues.apache.org/jira/browse/HADOOP-14473?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mingliang Liu closed HADOOP-14473. -- Closing as nothing to be done for releases. > Optimize NativeAzureFileSystem::seek for forward seeks > -- > > Key: HADOOP-14473 > URL: https://issues.apache.org/jira/browse/HADOOP-14473 > Project: Hadoop Common > Issue Type: Bug > Components: fs/azure >Reporter: Rajesh Balamohan >Assignee: Rajesh Balamohan > Attachments: HADOOP-14473-001.patch > > > {{NativeAzureFileSystem::seek()}} closes and re-opens the input stream > irrespective of forward/backward seek. It would be beneficial to re-open the > stream only on backward seek. > https://github.com/apache/hadoop/blob/trunk/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azure/NativeAzureFileSystem.java#L889
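The optimization HADOOP-14473 proposed (and HADOOP-14478 subsumed) can be reduced to one branch: skip forward on the existing stream, and pay the close-and-reopen cost only when the target lies behind the current position. The sketch below is hypothetical; the method names and reopen counter are illustrative, not Hadoop's.

```java
/** Illustrative sketch; names are hypothetical, not the NativeAzureFileSystem API. */
public class ForwardSeek {

  /** Counts simulated close-and-reopen round trips to the blob store. */
  static int reopens = 0;

  /**
   * Seek that skips forward on the existing stream and only reopens
   * when the target lies behind the current position.
   */
  static long seek(long currentPos, long targetPos) {
    if (targetPos >= currentPos) {
      return targetPos;      // forward: skip the gap in place, no reopen
    }
    reopens++;               // backward: the remote stream cannot rewind
    return targetPos;
  }

  public static void main(String[] args) {
    long pos = 0;
    pos = seek(pos, 100);    // forward, cheap
    pos = seek(pos, 40);     // backward, one reopen
    pos = seek(pos, 60);     // forward again, cheap
    System.out.println("pos=" + pos + " reopens=" + reopens);
  }
}
```

Only the backward move in the example triggers a reopen; both forward moves stay on the already-open stream.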
[jira] [Updated] (HADOOP-14490) Upgrade azure-storage sdk version >5.2.0
[ https://issues.apache.org/jira/browse/HADOOP-14490?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mingliang Liu updated HADOOP-14490: --- Summary: Upgrade azure-storage sdk version >5.2.0 (was: Upgrade azure-storage sdk version)
[jira] [Commented] (HADOOP-14478) Optimize NativeAzureFsInputStream for positional reads
[ https://issues.apache.org/jira/browse/HADOOP-14478?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16037893#comment-16037893 ] Mingliang Liu commented on HADOOP-14478: [~rajesh.balamohan], we are expecting an SDK version bump after it's supported in {{BlobInputStream}}, right? I filed [HADOOP-14490] to track this. Correct me if I'm wrong.
[jira] [Created] (HADOOP-14490) Upgrade azure-storage sdk version
Mingliang Liu created HADOOP-14490: -- Summary: Upgrade azure-storage sdk version Key: HADOOP-14490 URL: https://issues.apache.org/jira/browse/HADOOP-14490 Project: Hadoop Common Issue Type: Improvement Reporter: Mingliang Liu As required by [HADOOP-14478], we're expecting the {{BlobInputStream}} to support advanced {{readFully()}} by taking hints of mark. This can only be done by means of sdk version bump. cc: [~rajesh.balamohan].
[jira] [Updated] (HADOOP-14473) Optimize NativeAzureFileSystem::seek for forward seeks
[ https://issues.apache.org/jira/browse/HADOOP-14473?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Rajesh Balamohan updated HADOOP-14473: -- Resolution: Resolved Status: Resolved (was: Patch Available) Closing this ticket, since HADOOP-14478 takes care of this.
[jira] [Commented] (HADOOP-14478) Optimize NativeAzureFsInputStream for positional reads
[ https://issues.apache.org/jira/browse/HADOOP-14478?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16037890#comment-16037890 ] Rajesh Balamohan commented on HADOOP-14478: --- Thanks [~liuml07], [~ste...@apache.org].
[jira] [Updated] (HADOOP-14395) Provide Builder pattern for DistributedFileSystem.append
[ https://issues.apache.org/jira/browse/HADOOP-14395?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Lei (Eddy) Xu updated HADOOP-14395: --- Attachment: HADOOP-14395.01-trunk.patch > Provide Builder pattern for DistributedFileSystem.append > > > Key: HADOOP-14395 > URL: https://issues.apache.org/jira/browse/HADOOP-14395 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs >Reporter: Lei (Eddy) Xu >Assignee: Lei (Eddy) Xu > Attachments: HADOOP-14395.00.patch, HADOOP-14395.00-trunk.patch, > HADOOP-14395.01.patch, HADOOP-14395.01-trunk.patch > > > Following HADOOP-14394, this should also provide a {{Builder}} API for > {{DistributedFileSystem#append}}.
[jira] [Updated] (HADOOP-14395) Provide Builder pattern for DistributedFileSystem.append
[ https://issues.apache.org/jira/browse/HADOOP-14395?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Lei (Eddy) Xu updated HADOOP-14395: --- Attachment: HADOOP-14395.01.patch Rebase against HADOOP-14394.
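The builder pattern HADOOP-14394/14395 introduce lets callers chain optional parameters and finish with a single terminal call. The sketch below is hypothetical: the class and method names are illustrative and do not reflect the actual {{DistributedFileSystem}} builder API; it only shows the fluent shape such an append builder takes.

```java
/** Hypothetical sketch of a fluent append builder; names are illustrative,
 *  not the actual DistributedFileSystem API. */
public class AppendBuilderDemo {

  public static class AppendBuilder {
    private final String path;
    private int bufferSize = 4096;      // optional parameter with a default
    private boolean createParent = false;

    public AppendBuilder(String path) { this.path = path; }
    public AppendBuilder bufferSize(int n) { this.bufferSize = n; return this; }
    public AppendBuilder createParent() { this.createParent = true; return this; }

    /** Terminal call: in real code this would open the file for append. */
    public String build() {
      return "append(" + path + ", buf=" + bufferSize
          + ", createParent=" + createParent + ")";
    }
  }

  public static void main(String[] args) {
    // Optional settings are chained; unset ones keep their defaults.
    String op = new AppendBuilder("/logs/app.log").bufferSize(8192).build();
    System.out.println(op);
  }
}
```

The appeal over a long positional-argument overload set is that new optional parameters can be added without breaking existing callers.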
[jira] [Commented] (HADOOP-13145) In DistCp, prevent unnecessary getFileStatus call when not preserving metadata.
[ https://issues.apache.org/jira/browse/HADOOP-13145?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16037880#comment-16037880 ] Adam Kramer commented on HADOOP-13145: -- Any chance of creating a patch/applying to 2.7 branch? > In DistCp, prevent unnecessary getFileStatus call when not preserving > metadata. > --- > > Key: HADOOP-13145 > URL: https://issues.apache.org/jira/browse/HADOOP-13145 > Project: Hadoop Common > Issue Type: Improvement > Components: tools/distcp >Reporter: Chris Nauroth >Assignee: Chris Nauroth > Fix For: 2.8.0, 3.0.0-alpha1 > > Attachments: HADOOP-13145.001.patch, HADOOP-13145.003.patch, > HADOOP-13145-branch-2.004.patch, HADOOP-13145-branch-2.8.004.patch > > > After DistCp copies a file, it calls {{getFileStatus}} to get the > {{FileStatus}} from the destination so that it can compare to the source and > update metadata if necessary. If the DistCp command was run without the > option to preserve metadata attributes, then this additional > {{getFileStatus}} call is wasteful.
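The HADOOP-13145 idea above is a simple guard: only issue the post-copy status RPC when there is metadata to preserve. The sketch below is illustrative; the helper names are hypothetical and do not mirror DistCp's internals.

```java
/** Sketch of the HADOOP-13145 idea; helper names are illustrative, not DistCp's. */
public class SkipStatusDemo {

  static int statusCalls = 0;   // counts simulated getFileStatus RPCs

  static String getFileStatus(String path) {
    statusCalls++;
    return "status(" + path + ")";
  }

  /** After a copy, fetch the destination status only if attributes must be preserved. */
  static void finalizeCopy(String dst, boolean preserveAttrs) {
    if (preserveAttrs) {
      String st = getFileStatus(dst);   // needed to compare and fix up metadata
      // ... compare with the source status, update permissions/times as needed ...
    }
    // not preserving: nothing to compare against, so the RPC is skipped
  }

  public static void main(String[] args) {
    finalizeCopy("/dst/a", false);      // no status call
    finalizeCopy("/dst/b", true);       // one status call
    System.out.println("statusCalls=" + statusCalls);
  }
}
```

Over millions of copied files, each skipped call is one fewer round trip to the destination filesystem's metadata service.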
[jira] [Commented] (HADOOP-13345) S3Guard: Improved Consistency for S3A
[ https://issues.apache.org/jira/browse/HADOOP-13345?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16037850#comment-16037850 ] Aaron Fabbri commented on HADOOP-13345: --- Sounds good to me [~liuml07]. Thank you for doing the merge. > S3Guard: Improved Consistency for S3A > - > > Key: HADOOP-13345 > URL: https://issues.apache.org/jira/browse/HADOOP-13345 > Project: Hadoop Common > Issue Type: New Feature > Components: fs/s3 >Reporter: Chris Nauroth >Assignee: Chris Nauroth > Attachments: HADOOP-13345.prototype1.patch, s3c.001.patch, > S3C-ConsistentListingonS3-Design.pdf, S3GuardImprovedConsistencyforS3A.pdf, > S3GuardImprovedConsistencyforS3AV2.pdf > > > This issue proposes S3Guard, a new feature of S3A, to provide an option for a > stronger consistency model than what is currently offered. The solution > coordinates with a strongly consistent external store to resolve > inconsistencies caused by the S3 eventual consistency model.
[jira] [Commented] (HADOOP-14478) Optimize NativeAzureFsInputStream for positional reads
[ https://issues.apache.org/jira/browse/HADOOP-14478?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16037830#comment-16037830 ] Hudson commented on HADOOP-14478: - SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #11826 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/11826/]) HADOOP-14478. Optimize NativeAzureFsInputStream for positional reads. (liuml07: rev 5fd9742c83fbeae96bf0913bdcdf77fafbf15b2f) * (edit) hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azure/AzureNativeFileSystemStore.java * (edit) hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azure/NativeAzureFileSystem.java
[jira] [Updated] (HADOOP-14478) Optimize NativeAzureFsInputStream for positional reads
[ https://issues.apache.org/jira/browse/HADOOP-14478?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mingliang Liu updated HADOOP-14478: --- Resolution: Fixed Hadoop Flags: Reviewed Fix Version/s: 3.0.0-alpha4 2.9.0 Status: Resolved (was: Patch Available) Committed to {{branch-2}} and {{trunk}} branches. Thanks for your contribution, [~rajesh.balamohan]. Thanks for your review [~ste...@apache.org].
[jira] [Commented] (HADOOP-12360) Create StatsD metrics2 sink
[ https://issues.apache.org/jira/browse/HADOOP-12360?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16037804#comment-16037804 ] Dave Marion commented on HADOOP-12360: -- [~michaelmoss] - Seems like we could invert the logic for the service name such that if it is not specified in the configuration, the process name is used. I'm not sure we should change context and hostname; I'm thinking they should be left alone. Looks like there are planned changes for this class in HADOOP-13048; you might be able to add this problem to that issue, or create a new one. > Create StatsD metrics2 sink > --- > > Key: HADOOP-12360 > URL: https://issues.apache.org/jira/browse/HADOOP-12360 > Project: Hadoop Common > Issue Type: New Feature > Components: metrics >Affects Versions: 2.7.1 >Reporter: Dave Marion >Assignee: Dave Marion >Priority: Minor > Fix For: 2.8.0, 3.0.0-alpha1 > > Attachments: HADOOP-12360.001.patch, HADOOP-12360.002.patch, > HADOOP-12360.003.patch, HADOOP-12360.004.patch, HADOOP-12360.005.patch, > HADOOP-12360.006.patch, HADOOP-12360.007.patch, HADOOP-12360.008.patch, > HADOOP-12360.009.patch, HADOOP-12360.010.patch > > > Create a metrics sink that pushes to a StatsD daemon.
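Wiring a metrics2 sink such as the StatsD sink above is done in hadoop-metrics2.properties. The fragment below is a sketch following the usual metrics2 sink conventions; treat the exact property names as an assumption and verify them against the StatsDSink javadoc before use.

```properties
# hadoop-metrics2.properties -- illustrative sketch; confirm key names
# against the StatsDSink javadoc for your Hadoop version.
*.sink.statsd.class=org.apache.hadoop.metrics2.sink.StatsDSink
namenode.sink.statsd.server.host=127.0.0.1
namenode.sink.statsd.server.port=8125
# Per the discussion above, service.name currently has to be set explicitly;
# defaulting it to the process name is the proposed inversion.
namenode.sink.statsd.service.name=NameNode
```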
[jira] [Commented] (HADOOP-13345) S3Guard: Improved Consistency for S3A
[ https://issues.apache.org/jira/browse/HADOOP-13345?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16037785#comment-16037785 ] Mingliang Liu commented on HADOOP-13345: Kinda clean integration tests. # Running w/o s3guard {{mvn -Dit.test='ITestS3A*' -Dtest=none -Dscale -q clean verify}}, all test cases pass. # Running with DynamoDB web service us-west-1 region, {{mvn -Dit.test='ITestS3A*,ITestS3Guard*,ITestDynamo*' -Dtest=none -Ds3guard -Ddynamo -q verify}}. Only one test failure, ITestS3AEncryptionSSEC. This has been identified and reported by [HADOOP-14448]. # Running with DynamoDB Local (in-memory DDB simulator for test), {{mvn -Dit.test='ITestS3A*,ITestS3Guard*,ITestDynamo*' -Dtest=none -Ds3guard -Ddynamodblocal -q verify}}. As above, only one test failure, ITestS3AEncryptionSSEC. This has been identified and reported by [HADOOP-14448]. # Running with Local mode (in-memory metadata store), {code} $ mvn -Dit.test='ITestS3A*,ITestS3Guard*,ITestDynamo*' -Dtest=none -Ds3guard -Dlocal -q verify Results : Tests run: 390, Failures: 0, Errors: 0, Skipped: 55 {code}
[jira] [Commented] (HADOOP-14433) ITestS3GuardConcurrentOps.testConcurrentTableCreations failing on local dynamo
[ https://issues.apache.org/jira/browse/HADOOP-14433?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16037682#comment-16037682 ] Mingliang Liu commented on HADOOP-14433: Thanks [~mackrorysd]. I think the patch makes sense. Can you also test in parallel mode? I'll fix the summary by deleting "on local dynamo" as the test failure appears on "dynamo" as well per [HADOOP-14489]. > ITestS3GuardConcurrentOps.testConcurrentTableCreations failing on local dynamo > -- > > Key: HADOOP-14433 > URL: https://issues.apache.org/jira/browse/HADOOP-14433 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3, test >Affects Versions: HADOOP-13345 >Reporter: Steve Loughran >Assignee: Sean Mackrory >Priority: Minor > Attachments: HADOOP-14433-HADOOP-13345.001.patch > > > test run with local dynamo {{-Dparallel-tests -DtestsThreadCount=8 > -Ddynamodblocal -Ds3guard}} failing > {code} > Running org.apache.hadoop.fs.s3a.s3guard.ITestS3GuardToolLocal Tests run: 1, > Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 11.401 sec - in > org.apache.hadoop.fs.s3a.ITestS3GuardEmptyDirs Tests run: 1, Failures: 0, > Errors: 1, Skipped: 0, Time elapsed: 10.264 sec <<< FAILURE! - in > org.apache.hadoop.fs.s3a.s3guard.ITestS3GuardConcurrentOps > testConcurrentTableCreations(org.apache.hadoop.fs.s3a.s3guard.ITestS3GuardConcurrentOps) > Time elapsed: 9.744 sec <<< ERROR! java.lang.IllegalArgumentException: No > DynamoDB table name configured! > at > com.google.common.base.Preconditions.checkArgument(Preconditions.java:122) > at > org.apache.hadoop.fs.s3a.s3guard.DynamoDBMetadataStore.initialize(DynamoDBMetadataStore.java:266) > at > org.apache.hadoop.fs.s3a.s3guard.ITestS3GuardConcurrentOps.testConcurrentTableCreations(ITestS3GuardConcurrentOps.java:81) > {code}
[jira] [Assigned] (HADOOP-14433) ITestS3GuardConcurrentOps.testConcurrentTableCreations failing on local dynamo
[ https://issues.apache.org/jira/browse/HADOOP-14433?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mingliang Liu reassigned HADOOP-14433: -- Assignee: Sean Mackrory (was: Mingliang Liu)
[jira] [Resolved] (HADOOP-14489) ITestS3GuardConcurrentOps requires explicit DynamoDB table name to be configured
[ https://issues.apache.org/jira/browse/HADOOP-14489?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sean Mackrory resolved HADOOP-14489. Resolution: Fixed Resolving as a duplicate. Thanks [~liuml07] > ITestS3GuardConcurrentOps requires explicit DynamoDB table name to be > configured > > > Key: HADOOP-14489 > URL: https://issues.apache.org/jira/browse/HADOOP-14489 > Project: Hadoop Common > Issue Type: Sub-task >Reporter: Sean Mackrory >Assignee: Sean Mackrory > > testConcurrentTableCreations fails with this: > {quote}java.lang.IllegalArgumentException: No DynamoDB table name > configured!{quote} > I don't think that's necessary - should be able to shuffle stuff around and > either use the bucket name by default (like other DynamoDB tests would) or > use the table name that's configured later in the test.
[jira] [Issue Comment Deleted] (HADOOP-14489) ITestS3GuardConcurrentOps requires explicit DynamoDB table name to be configured
[ https://issues.apache.org/jira/browse/HADOOP-14489?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sean Mackrory updated HADOOP-14489: --- Comment: was deleted (was: So this has always succeeded for me because my auth-keys.xml sets a specific table name. If I don't, it fails for me no matter how it's run. Attaching a patch that fixes it by initializing the DynamoDB client using the test file system (so it can use the bucket name as the default table name). Now it passes for me with and without a specific table name.)
[jira] [Updated] (HADOOP-14433) ITestS3GuardConcurrentOps.testConcurrentTableCreations failing on local dynamo
[ https://issues.apache.org/jira/browse/HADOOP-14433?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sean Mackrory updated HADOOP-14433: --- Attachment: HADOOP-14433-HADOOP-13345.001.patch So this has always succeeded for me because my auth-keys.xml sets a specific table name. If I don't, it fails for me no matter how it's run. Attaching a patch that fixes it by initializing the DynamoDB client using the test file system (so it can use the bucket name as the default table name). Now it passes for me with and without a specific table name.
[jira] [Commented] (HADOOP-14489) ITestS3GuardConcurrentOps requires explicit DynamoDB table name to be configured
[ https://issues.apache.org/jira/browse/HADOOP-14489?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16037640#comment-16037640 ] Sean Mackrory commented on HADOOP-14489: So this has always succeeded for me because my auth-keys.xml sets a specific table name. If I don't, it fails for me no matter how it's run. Attaching a patch that fixes it by initializing the DynamoDB client using the test file system (so it can use the bucket name as the default table name). Now it passes for me with and without a specific table name.
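The fallback proposed above reduces to one resolution rule: an explicitly configured table name wins, and otherwise the bucket name is used, as other DynamoDB tests do. This is an illustrative sketch with hypothetical names, not the actual S3Guard configuration code.

```java
/** Sketch of the proposed default: fall back to the bucket name when no
 *  DynamoDB table name is configured. Names are illustrative only. */
public class TableNameDefault {

  static String resolveTableName(String configuredName, String bucketName) {
    if (configuredName != null && !configuredName.isEmpty()) {
      return configuredName;   // an explicit setting always wins
    }
    return bucketName;         // default: one table per bucket, like other tests
  }

  public static void main(String[] args) {
    System.out.println(resolveTableName(null, "my-bucket"));       // falls back
    System.out.println(resolveTableName("my-table", "my-bucket")); // explicit wins
  }
}
```

With this rule in place, the "No DynamoDB table name configured!" precondition can never trip for a test that already has a filesystem (and hence a bucket) to work against.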
[jira] [Commented] (HADOOP-13345) S3Guard: Improved Consistency for S3A
[ https://issues.apache.org/jira/browse/HADOOP-13345?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16037592#comment-16037592 ] Mingliang Liu commented on HADOOP-13345: I'll do a new merge from {{trunk}} today. I merged on my local machine and have almost finished all the integration tests. Unless there are any objections or concerns, I'll push the merge after I post a clean test report by the end of the day. > S3Guard: Improved Consistency for S3A > - > > Key: HADOOP-13345 > URL: https://issues.apache.org/jira/browse/HADOOP-13345 > Project: Hadoop Common > Issue Type: New Feature > Components: fs/s3 >Reporter: Chris Nauroth >Assignee: Chris Nauroth > Attachments: HADOOP-13345.prototype1.patch, s3c.001.patch, > S3C-ConsistentListingonS3-Design.pdf, S3GuardImprovedConsistencyforS3A.pdf, > S3GuardImprovedConsistencyforS3AV2.pdf > > > This issue proposes S3Guard, a new feature of S3A, to provide an option for a > stronger consistency model than what is currently offered. The solution > coordinates with a strongly consistent external store to resolve > inconsistencies caused by the S3 eventual consistency model. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-14445) Delegation tokens are not shared between KMS instances
[ https://issues.apache.org/jira/browse/HADOOP-14445?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16037585#comment-16037585 ] Yongjun Zhang commented on HADOOP-14445: Hi [~daryn], Thanks for your detailed sharing. Would you please take a look at [~shahrs87]'s patch as you planned? thanks. > Delegation tokens are not shared between KMS instances > -- > > Key: HADOOP-14445 > URL: https://issues.apache.org/jira/browse/HADOOP-14445 > Project: Hadoop Common > Issue Type: Bug > Components: documentation, kms >Affects Versions: 2.8.0, 3.0.0-alpha1 >Reporter: Wei-Chiu Chuang >Assignee: Rushabh S Shah > Attachments: HADOOP-14445-branch-2.8.patch > > > As discovered in HADOOP-14441, KMS HA using LoadBalancingKMSClientProvider do > not share delegation tokens. (a client uses KMS address/port as the key for > delegation token) > {code:title=DelegationTokenAuthenticatedURL#openConnection} > if (!creds.getAllTokens().isEmpty()) { > InetSocketAddress serviceAddr = new InetSocketAddress(url.getHost(), > url.getPort()); > Text service = SecurityUtil.buildTokenService(serviceAddr); > dToken = creds.getToken(service); > {code} > But KMS doc states: > {quote} > Delegation Tokens > Similar to HTTP authentication, KMS uses Hadoop Authentication for delegation > tokens too. > Under HA, A KMS instance must verify the delegation token given by another > KMS instance, by checking the shared secret used to sign the delegation > token. To do this, all KMS instances must be able to retrieve the shared > secret from ZooKeeper. > {quote} > We should either update the KMS documentation, or fix this code to share > delegation tokens. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
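The {{DelegationTokenAuthenticatedURL#openConnection}} snippet quoted above keys the credential lookup on the URL's host and port. A minimal illustration of why that prevents token sharing under HA (the helper below is hypothetical; the real code builds the key via SecurityUtil.buildTokenService):

```java
// Hypothetical sketch: tokens are stored and looked up under a key derived
// from the KMS host and port, so a token issued via one KMS instance is never
// found when the client fails over to another instance of the same HA cluster.
public class TokenServiceKey {
    static String serviceKey(String host, int port) {
        return host + ":" + port;
    }
}
```

A token stored under `kms1.example.com:9600` is invisible to a lookup keyed on `kms2.example.com:9600`, even though both instances can verify it via the shared secret in ZooKeeper; that is the gap between the code and the documentation quoted above.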
[jira] [Updated] (HADOOP-14385) HttpExceptionUtils#validateResponse swallows exceptions
[ https://issues.apache.org/jira/browse/HADOOP-14385?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Wei-Chiu Chuang updated HADOOP-14385: - Attachment: HADOOP-14385.002.patch rev 002 patch to address Steve's comments. Thanks! > HttpExceptionUtils#validateResponse swallows exceptions > --- > > Key: HADOOP-14385 > URL: https://issues.apache.org/jira/browse/HADOOP-14385 > Project: Hadoop Common > Issue Type: Improvement >Reporter: Wei-Chiu Chuang >Assignee: Wei-Chiu Chuang >Priority: Trivial > Attachments: HADOOP-14385.001.patch, HADOOP-14385.002.patch > > > In the following code > {code:title=HttpExceptionUtils#validateResponse} > try { > es = conn.getErrorStream(); > ObjectMapper mapper = new ObjectMapper(); > Map json = mapper.readValue(es, Map.class); > json = (Map) json.get(ERROR_JSON); > String exClass = (String) json.get(ERROR_CLASSNAME_JSON); > String exMsg = (String) json.get(ERROR_MESSAGE_JSON); > if (exClass != null) { > try { > ClassLoader cl = HttpExceptionUtils.class.getClassLoader(); > Class klass = cl.loadClass(exClass); > Constructor constr = klass.getConstructor(String.class); > toThrow = (Exception) constr.newInstance(exMsg); > } catch (Exception ex) { > toThrow = new IOException(String.format( > "HTTP status [%d], exception [%s], message [%s] ", > conn.getResponseCode(), exClass, exMsg)); > } > } else { > String msg = (exMsg != null) ? exMsg : conn.getResponseMessage(); > toThrow = new IOException(String.format( > "HTTP status [%d], message [%s]", conn.getResponseCode(), msg)); > } > } catch (Exception ex) { > toThrow = new IOException(String.format( <-- here > "HTTP status [%d], message [%s]", conn.getResponseCode(), > conn.getResponseMessage())); > } > {code} > If the an exception is thrown within the try block, the initial exception is > swallowed, and it doesn't help debugging. > We had to cross reference this exception with the KMS server side to guess > what happened. 
> IMHO the IOException thrown should also carry the initial exception. It > should also print exClass and exMsg. It probably failed to instantiate an > exception class. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
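The remedy the description asks for, carrying the initial exception, amounts to passing the caught exception as the IOException's cause. A sketch of the idea (not the actual 002 patch):

```java
import java.io.IOException;

public class CauseChaining {
    // Sketch: build the same formatted message as the current code, but keep
    // the original failure attached as the cause so its stack trace survives
    // instead of being swallowed.
    static IOException wrap(int status, String message, Exception initial) {
        return new IOException(String.format(
            "HTTP status [%d], message [%s]", status, message), initial);
    }
}
```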
[jira] [Updated] (HADOOP-14385) HttpExceptionUtils#validateResponse swallows exceptions
[ https://issues.apache.org/jira/browse/HADOOP-14385?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Wei-Chiu Chuang updated HADOOP-14385: - Status: Patch Available (was: Open) > HttpExceptionUtils#validateResponse swallows exceptions > --- > > Key: HADOOP-14385 > URL: https://issues.apache.org/jira/browse/HADOOP-14385 > Project: Hadoop Common > Issue Type: Improvement >Reporter: Wei-Chiu Chuang >Assignee: Wei-Chiu Chuang >Priority: Trivial > Attachments: HADOOP-14385.001.patch, HADOOP-14385.002.patch > > > In the following code > {code:title=HttpExceptionUtils#validateResponse} > try { > es = conn.getErrorStream(); > ObjectMapper mapper = new ObjectMapper(); > Map json = mapper.readValue(es, Map.class); > json = (Map) json.get(ERROR_JSON); > String exClass = (String) json.get(ERROR_CLASSNAME_JSON); > String exMsg = (String) json.get(ERROR_MESSAGE_JSON); > if (exClass != null) { > try { > ClassLoader cl = HttpExceptionUtils.class.getClassLoader(); > Class klass = cl.loadClass(exClass); > Constructor constr = klass.getConstructor(String.class); > toThrow = (Exception) constr.newInstance(exMsg); > } catch (Exception ex) { > toThrow = new IOException(String.format( > "HTTP status [%d], exception [%s], message [%s] ", > conn.getResponseCode(), exClass, exMsg)); > } > } else { > String msg = (exMsg != null) ? exMsg : conn.getResponseMessage(); > toThrow = new IOException(String.format( > "HTTP status [%d], message [%s]", conn.getResponseCode(), msg)); > } > } catch (Exception ex) { > toThrow = new IOException(String.format( <-- here > "HTTP status [%d], message [%s]", conn.getResponseCode(), > conn.getResponseMessage())); > } > {code} > If the an exception is thrown within the try block, the initial exception is > swallowed, and it doesn't help debugging. > We had to cross reference this exception with the KMS server side to guess > what happened. > IMHO the IOException thrown should also carry the initial exception. 
It > should also print exClass and exMsg. It probably failed to instantiate an > exception class. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-14488) s34guard localdynamo listStatus fails after renaming file into directory
[ https://issues.apache.org/jira/browse/HADOOP-14488?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16037573#comment-16037573 ] Mingliang Liu commented on HADOOP-14488: What about running with the real dynamo web service? > s34guard localdynamo listStatus fails after renaming file into directory > > > Key: HADOOP-14488 > URL: https://issues.apache.org/jira/browse/HADOOP-14488 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Reporter: Steve Loughran >Priority: Blocker > > Running scala integration test with inconsistent s3 client & local DDB enabled > {code} > fs.rename("work/task-00/part-00", work) > fs.listStatus(work) > {code} > The list status work fails with a message about the childStatus not being a > child of the parent. > Hypothesis: rename isn't updating the child path entry -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-14487) DirListingMetadata precondition failure messages to include path at fault
[ https://issues.apache.org/jira/browse/HADOOP-14487?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16037570#comment-16037570 ] Mingliang Liu commented on HADOOP-14487: patch file name. > DirListingMetadata precondition failure messages to include path at fault > - > > Key: HADOOP-14487 > URL: https://issues.apache.org/jira/browse/HADOOP-14487 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Affects Versions: HADOOP-13345 >Reporter: Steve Loughran >Assignee: Steve Loughran >Priority: Minor > Attachments: HADOOP-14487-001.patch > > > I've done something wrong in my code and getting "" childPath must be a child > of path", which is all very well, but it doesn't include paths. > The precondition checks all need to include the relevant path info for users > to start working out what has gone wrong. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
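The change under review, putting the offending paths into the precondition failure message, might look roughly like this. The check below is an illustrative sketch using plain java.net.URI, not the actual DirListingMetadata code:

```java
import java.net.URI;

public class PathPrecondition {
    // Sketch: a precondition whose failure message names both the child and
    // the parent, so users can see which entry was at fault.
    static void checkChildPath(URI parent, URI child) {
        // URI.relativize returns its argument unchanged when it is not under
        // the parent path, which signals the precondition failure here.
        if (parent.relativize(child).equals(child)) {
            throw new IllegalArgumentException(String.format(
                "childPath %s must be a child of path %s", child, parent));
        }
    }
}
```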
[jira] [Commented] (HADOOP-14487) DirListingMetadata precondition failure messages to include path at fault
[ https://issues.apache.org/jira/browse/HADOOP-14487?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16037566#comment-16037566 ] Hadoop QA commented on HADOOP-14487: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s{color} | {color:blue} Docker mode activated. {color} | | {color:red}-1{color} | {color:red} patch {color} | {color:red} 0m 10s{color} | {color:red} HADOOP-14487 does not apply to trunk. Rebase required? Wrong Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} | \\ \\ || Subsystem || Report/Notes || | JIRA Issue | HADOOP-14487 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12871297/HADOOP-14487-001.patch | | Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/12444/console | | Powered by | Apache Yetus 0.5.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > DirListingMetadata precondition failure messages to include path at fault > - > > Key: HADOOP-14487 > URL: https://issues.apache.org/jira/browse/HADOOP-14487 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Affects Versions: HADOOP-13345 >Reporter: Steve Loughran >Assignee: Steve Loughran >Priority: Minor > Attachments: HADOOP-14487-001.patch > > > I've done something wrong in my code and getting "" childPath must be a child > of path", which is all very well, but it doesn't include paths. > The precondition checks all need to include the relevant path info for users > to start working out what has gone wrong. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Assigned] (HADOOP-14433) ITestS3GuardConcurrentOps.testConcurrentTableCreations failing on local dynamo
[ https://issues.apache.org/jira/browse/HADOOP-14433?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mingliang Liu reassigned HADOOP-14433: -- Assignee: Mingliang Liu > ITestS3GuardConcurrentOps.testConcurrentTableCreations failing on local dynamo > -- > > Key: HADOOP-14433 > URL: https://issues.apache.org/jira/browse/HADOOP-14433 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3, test >Affects Versions: HADOOP-13345 >Reporter: Steve Loughran >Assignee: Mingliang Liu >Priority: Minor > > test run with local dynamo {{-Dparallel-tests -DtestsThreadCount=8 > -Ddynamodblocal -Ds3guard}} failing > {code} > Running org.apache.hadoop.fs.s3a.s3guard.ITestS3GuardToolLocalTests run: 1, > Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 11.401 sec - in > org.apache.hadoop.fs.s3a.ITestS3GuardEmptyDirsTests run: 1, Failures: 0, > Errors: 1, Skipped: 0, Time elapsed: 10.264 sec <<< FAILURE! - in > org.apache.hadoop.fs.s3a.s3guard.ITestS3GuardConcurrentOpstestConcurrentTableCreations(org.apache.hadoop.fs.s3a.s3guard.ITestS3GuardConcurrentOps) > Time elapsed: 9.744 sec <<< ERROR! java.lang.IllegalArgumentException: No > DynamoDB table name configured! > at > com.google.common.base.Preconditions.checkArgument(Preconditions.java:122) > at > org.apache.hadoop.fs.s3a.s3guard.DynamoDBMetadataStore.initialize(DynamoDBMetadataStore.java:266) > at > org.apache.hadoop.fs.s3a.s3guard.ITestS3GuardConcurrentOps.testConcurrentTableCreations(ITestS3GuardConcurrentOps.java:81) > {code} -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-14489) ITestS3GuardConcurrentOps requires explicit DynamoDB table name to be configured
[ https://issues.apache.org/jira/browse/HADOOP-14489?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16037564#comment-16037564 ] Mingliang Liu commented on HADOOP-14489: I think this is related to [HADOOP-14433]. > ITestS3GuardConcurrentOps requires explicit DynamoDB table name to be > configured > > > Key: HADOOP-14489 > URL: https://issues.apache.org/jira/browse/HADOOP-14489 > Project: Hadoop Common > Issue Type: Sub-task >Reporter: Sean Mackrory >Assignee: Sean Mackrory > > testConcurrentTableCreations fails with this: > {quote}java.lang.IllegalArgumentException: No DynamoDB table name > configured!{quote} > I don't think that's necessary - should be able to shuffle stuff around and > either use the bucket name by default (like other DynamoDB tests would) or > use the table name that's configured later in the test. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-14487) DirListingMetadata precondition failure messages to include path at fault
[ https://issues.apache.org/jira/browse/HADOOP-14487?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16037561#comment-16037561 ] Mingliang Liu commented on HADOOP-14487: +1 pending on Jenkins. > DirListingMetadata precondition failure messages to include path at fault > - > > Key: HADOOP-14487 > URL: https://issues.apache.org/jira/browse/HADOOP-14487 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Affects Versions: HADOOP-13345 >Reporter: Steve Loughran >Assignee: Steve Loughran >Priority: Minor > Attachments: HADOOP-14487-001.patch > > > I've done something wrong in my code and getting "" childPath must be a child > of path", which is all very well, but it doesn't include paths. > The precondition checks all need to include the relevant path info for users > to start working out what has gone wrong. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-14487) DirListingMetadata precondition failure messages to include path at fault
[ https://issues.apache.org/jira/browse/HADOOP-14487?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran updated HADOOP-14487: Status: Patch Available (was: Open) > DirListingMetadata precondition failure messages to include path at fault > - > > Key: HADOOP-14487 > URL: https://issues.apache.org/jira/browse/HADOOP-14487 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Affects Versions: HADOOP-13345 >Reporter: Steve Loughran >Assignee: Steve Loughran >Priority: Minor > Attachments: HADOOP-14487-001.patch > > > I've done something wrong in my code and getting "" childPath must be a child > of path", which is all very well, but it doesn't include paths. > The precondition checks all need to include the relevant path info for users > to start working out what has gone wrong. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-14487) DirListingMetadata precondition failure messages to include path at fault
[ https://issues.apache.org/jira/browse/HADOOP-14487?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran updated HADOOP-14487: Attachment: HADOOP-14487-001.patch Patch 001. Stack trace of code with patch in appears in HADOOP-14488. No tests > DirListingMetadata precondition failure messages to include path at fault > - > > Key: HADOOP-14487 > URL: https://issues.apache.org/jira/browse/HADOOP-14487 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Affects Versions: HADOOP-13345 >Reporter: Steve Loughran >Assignee: Steve Loughran >Priority: Minor > Attachments: HADOOP-14487-001.patch > > > I've done something wrong in my code and getting "" childPath must be a child > of path", which is all very well, but it doesn't include paths. > The precondition checks all need to include the relevant path info for users > to start working out what has gone wrong. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-14488) s34guard localdynamo listStatus fails after renaming file into directory
[ https://issues.apache.org/jira/browse/HADOOP-14488?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran updated HADOOP-14488: Priority: Blocker (was: Major) > s34guard localdynamo listStatus fails after renaming file into directory > > > Key: HADOOP-14488 > URL: https://issues.apache.org/jira/browse/HADOOP-14488 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Reporter: Steve Loughran >Priority: Blocker > > Running scala integration test with inconsistent s3 client & local DDB enabled > {code} > fs.rename("work/task-00/part-00", work) > fs.listStatus(work) > {code} > The list status work fails with a message about the childStatus not being a > child of the parent. > Hypothesis: rename isn't updating the child path entry -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Assigned] (HADOOP-14487) DirListingMetadata precondition failure messages to include path at fault
[ https://issues.apache.org/jira/browse/HADOOP-14487?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran reassigned HADOOP-14487: --- Assignee: Steve Loughran > DirListingMetadata precondition failure messages to include path at fault > - > > Key: HADOOP-14487 > URL: https://issues.apache.org/jira/browse/HADOOP-14487 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Affects Versions: HADOOP-13345 >Reporter: Steve Loughran >Assignee: Steve Loughran >Priority: Minor > > I've done something wrong in my code and getting "" childPath must be a child > of path", which is all very well, but it doesn't include paths. > The precondition checks all need to include the relevant path info for users > to start working out what has gone wrong. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-14457) create() does not notify metadataStore of parent directories or ensure they're not existing files
[ https://issues.apache.org/jira/browse/HADOOP-14457?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16037533#comment-16037533 ] Steve Loughran commented on HADOOP-14457: - Having a check up the tree during a create is going to be OK for S3Guard, as the cost of the check is low (can we do the entire check as one batch? I don't know the DDB API). It's only against raw S3 that the cost becomes excessive enough to make it hard to justify. I'd prefer the do/while at line 1674 to have the while (fPart != null) at the top; probably just some personal preference, as I can't see a codepath where you'd get to FNFE and f.getParent() == null (as it would imply listing the root dir failed), but > create() does not notify metadataStore of parent directories or ensure > they're not existing files > - > > Key: HADOOP-14457 > URL: https://issues.apache.org/jira/browse/HADOOP-14457 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Reporter: Sean Mackrory >Assignee: Sean Mackrory > Attachments: HADOOP-14457-HADOOP-13345.001.patch, > HADOOP-14457-HADOOP-13345.002.patch, HADOOP-14457-HADOOP-13345.003.patch > > > Not a great test yet, but it at least reliably demonstrates the issue. > LocalMetadataStore will sometimes erroneously report that a directory is > empty with isAuthoritative = true when it *definitely* has children the > metadatastore should know about. It doesn't appear to happen if the children > are just directories. The fact that it's returning an empty listing is > concerning, but the fact that it says it's authoritative *might* be a second > bug. 
> {code} > diff --git > a/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java > > b/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java > index 78b3970..1821d19 100644 > --- > a/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java > +++ > b/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java > @@ -965,7 +965,7 @@ public boolean hasMetadataStore() { >} > >@VisibleForTesting > - MetadataStore getMetadataStore() { > + public MetadataStore getMetadataStore() { > return metadataStore; >} > > diff --git > a/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/contract/s3a/ITestS3AContractRename.java > > b/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/contract/s3a/ITestS3AContractRename.java > index 4339649..881bdc9 100644 > --- > a/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/contract/s3a/ITestS3AContractRename.java > +++ > b/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/contract/s3a/ITestS3AContractRename.java > @@ -23,6 +23,11 @@ > import org.apache.hadoop.fs.contract.AbstractFSContract; > import org.apache.hadoop.fs.FileSystem; > import org.apache.hadoop.fs.Path; > +import org.apache.hadoop.fs.s3a.S3AFileSystem; > +import org.apache.hadoop.fs.s3a.Tristate; > +import org.apache.hadoop.fs.s3a.s3guard.DirListingMetadata; > +import org.apache.hadoop.fs.s3a.s3guard.MetadataStore; > +import org.junit.Test; > > import static org.apache.hadoop.fs.contract.ContractTestUtils.dataset; > import static org.apache.hadoop.fs.contract.ContractTestUtils.writeDataset; > @@ -72,4 +77,24 @@ public void testRenameDirIntoExistingDir() throws > Throwable { > boolean rename = fs.rename(srcDir, destDir); > assertFalse("s3a doesn't support rename to non-empty directory", rename); >} > + > + @Test > + public void testMkdirPopulatesFileAncestors() throws Exception { > +final FileSystem fs = getFileSystem(); > +final MetadataStore ms 
= ((S3AFileSystem) fs).getMetadataStore(); > +final Path parent = path("testMkdirPopulatesFileAncestors/source"); > +try { > + fs.mkdirs(parent); > + final Path nestedFile = new Path(parent, "dir1/dir2/dir3/file4"); > + byte[] srcDataset = dataset(256, 'a', 'z'); > + writeDataset(fs, nestedFile, srcDataset, srcDataset.length, > + 1024, false); > + > + DirListingMetadata list = ms.listChildren(parent); > + assertTrue("MetadataStore falsely reports authoritative empty list", > + list.isEmpty() == Tristate.FALSE || !list.isAuthoritative()); > +} finally { > + fs.delete(parent, true); > +} > + } > } > {code} -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
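The "check up the tree" Steve discusses implies enumerating every ancestor of the file being created, so each can be verified (not an existing file) and announced to the metadata store. A sketch of that enumeration on plain string keys (an assumption-laden illustration, not the patch; the real code walks org.apache.hadoop.fs.Path objects):

```java
import java.util.ArrayList;
import java.util.List;

public class AncestorWalk {
    // Sketch: collect every ancestor directory key of an object key, nearest
    // first, so each can be checked and recorded in the metadata store.
    static List<String> ancestors(String key) {
        List<String> result = new ArrayList<>();
        int idx = key.lastIndexOf('/');
        while (idx > 0) {
            key = key.substring(0, idx);
            result.add(key);
            idx = key.lastIndexOf('/');
        }
        return result;
    }
}
```

Whether the per-ancestor lookups can be folded into a single batched DynamoDB request is exactly the open question in the comment above.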
[jira] [Commented] (HADOOP-14471) Upgrade Jetty to latest 9.3 version
[ https://issues.apache.org/jira/browse/HADOOP-14471?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16037499#comment-16037499 ] John Zhuge commented on HADOOP-14471: - Sure, I will report back the results. > Upgrade Jetty to latest 9.3 version > --- > > Key: HADOOP-14471 > URL: https://issues.apache.org/jira/browse/HADOOP-14471 > Project: Hadoop Common > Issue Type: Improvement >Affects Versions: 3.0.0-alpha4 >Reporter: John Zhuge >Assignee: John Zhuge > Attachments: HADOOP-14471.001.patch > > > The current Jetty version is {{9.3.11.v20160721}}. Should we upgrade it to > the latest 9.3.x which is {{9.3.19.v20170502}}? Or 9.4? > 9.3.x changes: > https://github.com/eclipse/jetty.project/blob/jetty-9.3.x/VERSION.txt > 9.4.x changes: > https://github.com/eclipse/jetty.project/blob/jetty-9.4.x/VERSION.txt -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-14471) Upgrade Jetty to latest 9.3 version
[ https://issues.apache.org/jira/browse/HADOOP-14471?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16037409#comment-16037409 ] Steve Loughran commented on HADOOP-14471: - you couldn't run your in house tests could you? I fear jetty updates > Upgrade Jetty to latest 9.3 version > --- > > Key: HADOOP-14471 > URL: https://issues.apache.org/jira/browse/HADOOP-14471 > Project: Hadoop Common > Issue Type: Improvement >Affects Versions: 3.0.0-alpha4 >Reporter: John Zhuge >Assignee: John Zhuge > Attachments: HADOOP-14471.001.patch > > > The current Jetty version is {{9.3.11.v20160721}}. Should we upgrade it to > the latest 9.3.x which is {{9.3.19.v20170502}}? Or 9.4? > 9.3.x changes: > https://github.com/eclipse/jetty.project/blob/jetty-9.3.x/VERSION.txt > 9.4.x changes: > https://github.com/eclipse/jetty.project/blob/jetty-9.4.x/VERSION.txt -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-14486) TestSFTPFileSystem#testGetAccessTime test failure
[ https://issues.apache.org/jira/browse/HADOOP-14486?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16037400#comment-16037400 ] Steve Loughran commented on HADOOP-14486: - I assume HADOOP-14430 caused this, it being the most recent change to SFTP. What's probably happening is that the remote FS has a different timestamp granularity than expected. Sonia, can you supply a patch which relaxes the equality somewhat? We could probably allow for +/-60s to avoid breaking against any FS I can imagine. If you can do that, along with a declaration of which OS/SFTP server you tested against (that's there as our due diligence check), I'll get the fix in. > TestSFTPFileSystem#testGetAccessTime test failure > - > > Key: HADOOP-14486 > URL: https://issues.apache.org/jira/browse/HADOOP-14486 > Project: Hadoop Common > Issue Type: Bug >Affects Versions: 3.0.0-alpha4 > Environment: Ubuntu 14.04 > x86, ppc64le > $ java -version > openjdk version "1.8.0_111" > OpenJDK Runtime Environment (build 1.8.0_111-8u111-b14-3~14.04.1-b14) > OpenJDK 64-Bit Server VM (build 25.111-b14, mixed mode) >Reporter: Sonia Garudi > > The TestSFTPFileSystem#testGetAccessTime test fails consistently with the > error below: > {code} > java.lang.AssertionError: expected:<1496496040072> but was:<149649604> > at org.junit.Assert.fail(Assert.java:88) > at org.junit.Assert.failNotEquals(Assert.java:743) > at org.junit.Assert.assertEquals(Assert.java:118) > at org.junit.Assert.assertEquals(Assert.java:555) > at org.junit.Assert.assertEquals(Assert.java:542) > at > org.apache.hadoop.fs.sftp.TestSFTPFileSystem.testGetAccessTime(TestSFTPFileSystem.java:319) > {code} -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
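Steve's suggested relaxation, tolerating coarse timestamp granularity rather than demanding exact equality, could be sketched like this (illustrative only; the +/-60s window is the figure from the comment, not a value from any committed fix):

```java
public class TimeTolerance {
    // Sketch: compare timestamps within a tolerance instead of by exact
    // equality, since SFTP servers commonly truncate mtime/atime to a
    // coarser unit than milliseconds.
    static final long TOLERANCE_MS = 60_000;

    static boolean closeEnough(long expectedMillis, long actualMillis) {
        return Math.abs(expectedMillis - actualMillis) <= TOLERANCE_MS;
    }
}
```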
[jira] [Commented] (HADOOP-14428) s3a: mkdir appears to be broken
[ https://issues.apache.org/jira/browse/HADOOP-14428?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16037388#comment-16037388 ] Hudson commented on HADOOP-14428: - SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #11823 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/11823/]) HADOOP-14428. s3a: mkdir appears to be broken. Contributed by Mingliang (liuml07: rev 6aeda55bb8f741d9dafd41f6dfbf1a88acdd4003) * (edit) hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java * (edit) hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/contract/AbstractContractMkdirTest.java > s3a: mkdir appears to be broken > --- > > Key: HADOOP-14428 > URL: https://issues.apache.org/jira/browse/HADOOP-14428 > Project: Hadoop Common > Issue Type: Bug > Components: fs/s3 >Affects Versions: 3.0.0-alpha2, HADOOP-13345 >Reporter: Aaron Fabbri >Assignee: Mingliang Liu >Priority: Blocker > Fix For: 2.9.0, 3.0.0-alpha4 > > Attachments: HADOOP-14428.000.patch, HADOOP-14428.001.patch > > > Reproduction is: > hadoop fs -mkdir s3a://my-bucket/dir/ > hadoop fs -ls s3a://my-bucket/dir/ > ls: `s3a://my-bucket/dir/': No such file or directory > I believe this is a regression from HADOOP-14255. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-13760) S3Guard: add delete tracking
[ https://issues.apache.org/jira/browse/HADOOP-13760?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16037348#comment-16037348 ] Sean Mackrory commented on HADOOP-13760: [~ste...@apache.org] Note that the exception message can be fixed with HADOOP-13760, as I had discussed with [~fabbri] in that patch that we could go back to much simpler logic in innerMkdirs and rely on what getFileStatus was also doing - so the original exception message is part of that change as well. And with the simpler logic, there's less that can be gained by using switch statements. Still not sure what you meant about delete javadocs, however, so please let me know if you still have a concern about those. > S3Guard: add delete tracking > > > Key: HADOOP-13760 > URL: https://issues.apache.org/jira/browse/HADOOP-13760 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Reporter: Aaron Fabbri >Assignee: Aaron Fabbri > Attachments: HADOOP-13760-HADOOP-13345.001.patch, > HADOOP-13760-HADOOP-13345.002.patch, HADOOP-13760-HADOOP-13345.003.patch, > HADOOP-13760-HADOOP-13345.004.patch, HADOOP-13760-HADOOP-13345.005.patch, > HADOOP-13760-HADOOP-13345.006.patch, HADOOP-13760-HADOOP-13345.007.patch, > HADOOP-13760-HADOOP-13345.008.patch, HADOOP-13760-HADOOP-13345.009.patch, > HADOOP-13760-HADOOP-13345.010.patch, HADOOP-13760-HADOOP-13345.011.patch, > HADOOP-13760-HADOOP-13345.012.patch, HADOOP-13760-HADOOP-13345.013.patch > > > Following the S3AFileSystem integration patch in HADOOP-13651, we need to add > delete tracking. > Current behavior on delete is to remove the metadata from the MetadataStore. > To make deletes consistent, we need to add a {{isDeleted}} flag to > {{PathMetadata}} and check it when returning results from functions like > {{getFileStatus()}} and {{listStatus()}}. In HADOOP-13651, I added TODO > comments in most of the places these new conditions are needed. The work > does not look too bad. 
-- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
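The description above proposes keeping an {{isDeleted}} "tombstone" flag on each metadata entry instead of removing the entry on delete, and filtering tombstones out of lookups. A minimal sketch of that idea is below; the class and method names (ToyMetadataStore, put/delete/get) are illustrative stand-ins, not the real S3Guard PathMetadata/MetadataStore API.

```java
import java.util.Map;
import java.util.Optional;
import java.util.concurrent.ConcurrentHashMap;

// Illustrative model of a metadata entry carrying an isDeleted tombstone flag.
class PathMetadata {
    final String path;
    final boolean isDeleted;
    PathMetadata(String path, boolean isDeleted) {
        this.path = path;
        this.isDeleted = isDeleted;
    }
}

class ToyMetadataStore {
    private final Map<String, PathMetadata> entries = new ConcurrentHashMap<>();

    void put(String path) {
        entries.put(path, new PathMetadata(path, false));
    }

    // Delete records a tombstone rather than forgetting the path entirely,
    // so a stale, eventually-consistent listing can still be corrected.
    void delete(String path) {
        entries.put(path, new PathMetadata(path, true));
    }

    // getFileStatus-style lookup: a tombstoned entry reads as "not found".
    Optional<PathMetadata> get(String path) {
        PathMetadata meta = entries.get(path);
        return (meta == null || meta.isDeleted)
                ? Optional.empty() : Optional.of(meta);
    }
}
```

The same tombstone check would apply to listStatus-style enumeration: skip any entry whose flag is set before returning results.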
[jira] [Updated] (HADOOP-14428) s3a: mkdir appears to be broken
[ https://issues.apache.org/jira/browse/HADOOP-14428?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mingliang Liu updated HADOOP-14428: --- Resolution: Fixed Hadoop Flags: Reviewed Fix Version/s: 3.0.0-alpha4 2.9.0 Status: Resolved (was: Patch Available) Committed to {{branch-2}} and {{trunk}} branches. Thanks for your report and review [~ajfabbri]. Thanks for your review and discussion, [~ste...@apache.org]. > s3a: mkdir appears to be broken > --- > > Key: HADOOP-14428 > URL: https://issues.apache.org/jira/browse/HADOOP-14428 > Project: Hadoop Common > Issue Type: Bug > Components: fs/s3 >Affects Versions: 3.0.0-alpha2, HADOOP-13345 >Reporter: Aaron Fabbri >Assignee: Mingliang Liu >Priority: Blocker > Fix For: 2.9.0, 3.0.0-alpha4 > > Attachments: HADOOP-14428.000.patch, HADOOP-14428.001.patch > > > Reproduction is: > hadoop fs -mkdir s3a://my-bucket/dir/ > hadoop fs -ls s3a://my-bucket/dir/ > ls: `s3a://my-bucket/dir/': No such file or directory > I believe this is a regression from HADOOP-14255.
[jira] [Commented] (HADOOP-14475) Metrics of S3A don't print out when enable it in Hadoop metrics property file
[ https://issues.apache.org/jira/browse/HADOOP-14475?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16037314#comment-16037314 ] Hadoop QA commented on HADOOP-14475: --- *-1 overall*
|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 0m 18s | Docker mode activated. |
| 0 | patch | 0m 8s | The patch file was not named according to hadoop's naming conventions. Please see https://wiki.apache.org/hadoop/HowToContribute for instructions. |
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| -1 | test4tests | 0m 0s | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. |
| +1 | mvninstall | 14m 35s | trunk passed |
| +1 | compile | 18m 7s | trunk passed |
| +1 | checkstyle | 2m 16s | trunk passed |
| +1 | mvnsite | 12m 14s | trunk passed |
| 0 | findbugs | 0m 0s | Skipped patched modules with no Java source: . |
| +1 | findbugs | 0m 0s | trunk passed |
| +1 | javadoc | 6m 7s | trunk passed |
| +1 | mvninstall | 16m 38s | the patch passed |
| +1 | compile | 13m 6s | the patch passed |
| +1 | javac | 13m 6s | the patch passed |
| +1 | checkstyle | 2m 18s | the patch passed |
| +1 | mvnsite | 12m 11s | the patch passed |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| 0 | findbugs | 0m 0s | Skipped patched modules with no Java source: . |
| +1 | findbugs | 0m 0s | the patch passed |
| +1 | javadoc | 4m 44s | the patch passed |
| -1 | unit | 109m 11s | root in the patch failed. |
| -1 | asflicense | 0m 41s | The patch generated 1 ASF License warnings. |
| | | 232m 28s | |
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
| | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure150 |
| | hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistReplicaRecovery |
|| Subsystem || Report/Notes ||
| Docker | Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HADOOP-14475 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12871238/s3a-metrics.patch1 |
| Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle |
| uname | Linux f4c1a2f512af 3.13.0-116-generic #163-Ubuntu SMP Fri Mar 31 14:13:22 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 46f7e91 |
| Default Java | 1.8.0_131 |
| unit | https://builds.apache.org/job/PreCommit-HADOOP-Build/12441/artifact/patchprocess/patch-unit-root.txt |
| Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/12441/testReport/ |
| asflicense | https://builds.apache.org/job/PreCommit-HADOOP-Build/12441/artifact/patchprocess/patch-asflicense-problems.txt |
| modules | C: . U: . |
| Console output
[jira] [Commented] (HADOOP-14428) s3a: mkdir appears to be broken
[ https://issues.apache.org/jira/browse/HADOOP-14428?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16037259#comment-16037259 ] Aaron Fabbri commented on HADOOP-14428: --- Yes, thank you for the detailed description [~liuml07] > s3a: mkdir appears to be broken > --- > > Key: HADOOP-14428 > URL: https://issues.apache.org/jira/browse/HADOOP-14428 > Project: Hadoop Common > Issue Type: Bug > Components: fs/s3 >Affects Versions: 3.0.0-alpha2, HADOOP-13345 >Reporter: Aaron Fabbri >Assignee: Mingliang Liu >Priority: Blocker > Attachments: HADOOP-14428.000.patch, HADOOP-14428.001.patch > > > Reproduction is: > hadoop fs -mkdir s3a://my-bucket/dir/ > hadoop fs -ls s3a://my-bucket/dir/ > ls: `s3a://my-bucket/dir/': No such file or directory > I believe this is a regression from HADOOP-14255.
[jira] [Updated] (HADOOP-14283) S3A may hang due to bug in AWS SDK 1.11.86
[ https://issues.apache.org/jira/browse/HADOOP-14283?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aaron Fabbri updated HADOOP-14283: -- Status: Patch Available (was: Open) > S3A may hang due to bug in AWS SDK 1.11.86 > -- > > Key: HADOOP-14283 > URL: https://issues.apache.org/jira/browse/HADOOP-14283 > Project: Hadoop Common > Issue Type: Bug > Components: fs/s3 >Affects Versions: 3.0.0-alpha2 >Reporter: Aaron Fabbri >Assignee: Aaron Fabbri >Priority: Critical > Attachments: HADOOP-14283.001.patch, ITestS3AConcurrentRename.java > > > We hit a hang bug when testing S3A with parallel renames. > I narrowed this down to the newer AWS Java SDK. It only happens under load, > and appears to be a failure to wake up a waiting thread on timeout/error. > I've created a github issue here: > https://github.com/aws/aws-sdk-java/issues/1102 > I can post a Hadoop scale test which reliably reproduces this after some > cleanup. I have posted an SDK-only test here which reproduces the issue > without Hadoop: > https://github.com/ajfabbri/awstest > I have a support ticket open and am working with Amazon on this bug so I'll > take this issue.
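The issue above describes a hang that only appears under concurrent rename load. A common pattern for surfacing that kind of bug, sketched below, is to run many renames in parallel and bound the wait, so a stuck thread shows up as a timeout rather than a silent hang. This sketch exercises the local filesystem via java.nio only as an illustration; the actual test attached to the JIRA (ITestS3AConcurrentRename.java) drives the S3A client, and the class and parameter names here are hypothetical.

```java
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

class ConcurrentRenameStress {
    // Returns true if all renames finished within the deadline; a hang in
    // any rename shows up as a false return instead of a stuck test run.
    static boolean run(Path dir, int threads, int renamesPerThread)
            throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(threads);
        for (int t = 0; t < threads; t++) {
            final int id = t;
            pool.submit(() -> {
                for (int i = 0; i < renamesPerThread; i++) {
                    Path src = dir.resolve("src-" + id + "-" + i);
                    Path dst = dir.resolve("dst-" + id + "-" + i);
                    try {
                        Files.createFile(src);
                        Files.move(src, dst);   // the operation under stress
                    } catch (IOException e) {
                        throw new UncheckedIOException(e);
                    }
                }
            });
        }
        pool.shutdown();
        // Bounded wait: a wake-up bug like the one reported against the AWS
        // SDK would leave tasks blocked and make this time out.
        return pool.awaitTermination(60, TimeUnit.SECONDS);
    }
}
```

Against S3A, the same shape applies: substitute the FileSystem's rename for Files.move and size the thread count high enough to create real contention.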
[jira] [Commented] (HADOOP-14440) Add metrics for connections dropped
[ https://issues.apache.org/jira/browse/HADOOP-14440?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16037198#comment-16037198 ] Hudson commented on HADOOP-14440: - SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #11822 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/11822/]) HADOOP-14440. Add metrics for connections dropped. Contributed by Eric (brahma: rev abdd609e51a80388493417126c3bc9b1badc0ac1) * (edit) hadoop-common-project/hadoop-common/src/site/markdown/Metrics.md * (edit) hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/ipc/TestIPC.java * (edit) hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Server.java * (edit) hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/metrics/RpcMetrics.java > Add metrics for connections dropped > --- > > Key: HADOOP-14440 > URL: https://issues.apache.org/jira/browse/HADOOP-14440 > Project: Hadoop Common > Issue Type: Improvement >Reporter: Eric Badger >Assignee: Eric Badger > Fix For: 2.9.0, 3.0.0-alpha4, 2.8.2 > > Attachments: HADOOP-14440.001.patch, HADOOP-14440.002.patch, > HADOOP-14440.003.patch > > > Will be useful to figure out when the NN is getting overloaded with more > connections than it can handle
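The commit above wires a dropped-connections counter into Server.java and RpcMetrics.java so that overload on the NameNode becomes visible to monitoring. A minimal, self-contained sketch of the idea follows; the ToyRpcServer and ConnectionMetrics names are illustrative inventions, not the actual Hadoop IPC classes, which expose the counter through the Metrics2 framework instead.

```java
import java.util.concurrent.atomic.AtomicLong;

// Illustrative stand-in for the counter the real patch adds to RpcMetrics.
class ConnectionMetrics {
    private final AtomicLong connectionsDropped = new AtomicLong();

    void incrDropped() { connectionsDropped.incrementAndGet(); }

    long getDropped() { return connectionsDropped.get(); }
}

class ToyRpcServer {
    private final int maxConnections;
    private int openConnections;
    final ConnectionMetrics metrics = new ConnectionMetrics();

    ToyRpcServer(int maxConnections) { this.maxConnections = maxConnections; }

    // Accepts a connection, or records a drop when the server is at
    // capacity; the drop rate is what the new metric surfaces.
    synchronized boolean accept() {
        if (openConnections >= maxConnections) {
            metrics.incrDropped();
            return false;
        }
        openConnections++;
        return true;
    }
}
```

A monitoring system sampling the counter over time can then alert when the drop rate rises, which is exactly the "NN getting overloaded" signal the issue description asks for.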
[jira] [Commented] (HADOOP-14428) s3a: mkdir appears to be broken
[ https://issues.apache.org/jira/browse/HADOOP-14428?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16037181#comment-16037181 ] Steve Loughran commented on HADOOP-14428: - good trackdown of the underlying problem. It may come back again somewhere, but at least now we have a test for all the filesystems. +1 > s3a: mkdir appears to be broken > --- > > Key: HADOOP-14428 > URL: https://issues.apache.org/jira/browse/HADOOP-14428 > Project: Hadoop Common > Issue Type: Bug > Components: fs/s3 >Affects Versions: 3.0.0-alpha2, HADOOP-13345 >Reporter: Aaron Fabbri >Assignee: Mingliang Liu >Priority: Blocker > Attachments: HADOOP-14428.000.patch, HADOOP-14428.001.patch > > > Reproduction is: > hadoop fs -mkdir s3a://my-bucket/dir/ > hadoop fs -ls s3a://my-bucket/dir/ > ls: `s3a://my-bucket/dir/': No such file or directory > I believe this is a regression from HADOOP-14255.
[jira] [Commented] (HADOOP-14440) Add metrics for connections dropped
[ https://issues.apache.org/jira/browse/HADOOP-14440?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16037180#comment-16037180 ] Kihwal Lee commented on HADOOP-14440: - Cherry-picked to branch-2.8. > Add metrics for connections dropped > --- > > Key: HADOOP-14440 > URL: https://issues.apache.org/jira/browse/HADOOP-14440 > Project: Hadoop Common > Issue Type: Improvement >Reporter: Eric Badger >Assignee: Eric Badger > Fix For: 2.9.0, 3.0.0-alpha4, 2.8.2 > > Attachments: HADOOP-14440.001.patch, HADOOP-14440.002.patch, > HADOOP-14440.003.patch > > > Will be useful to figure out when the NN is getting overloaded with more > connections than it can handle
[jira] [Updated] (HADOOP-14440) Add metrics for connections dropped
[ https://issues.apache.org/jira/browse/HADOOP-14440?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kihwal Lee updated HADOOP-14440: Fix Version/s: 2.8.2 > Add metrics for connections dropped > --- > > Key: HADOOP-14440 > URL: https://issues.apache.org/jira/browse/HADOOP-14440 > Project: Hadoop Common > Issue Type: Improvement >Reporter: Eric Badger >Assignee: Eric Badger > Fix For: 2.9.0, 3.0.0-alpha4, 2.8.2 > > Attachments: HADOOP-14440.001.patch, HADOOP-14440.002.patch, > HADOOP-14440.003.patch > > > Will be useful to figure out when the NN is getting overloaded with more > connections than it can handle
[jira] [Commented] (HADOOP-14440) Add metrics for connections dropped
[ https://issues.apache.org/jira/browse/HADOOP-14440?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16037170#comment-16037170 ] Eric Badger commented on HADOOP-14440: -- Thanks, [~brahmareddy]! > Add metrics for connections dropped > --- > > Key: HADOOP-14440 > URL: https://issues.apache.org/jira/browse/HADOOP-14440 > Project: Hadoop Common > Issue Type: Improvement >Reporter: Eric Badger >Assignee: Eric Badger > Fix For: 2.9.0, 3.0.0-alpha4 > > Attachments: HADOOP-14440.001.patch, HADOOP-14440.002.patch, > HADOOP-14440.003.patch > > > Will be useful to figure out when the NN is getting overloaded with more > connections than it can handle