[jira] [Created] (HADOOP-14896) Throttle will behave unexpectedly when GC time is longer than periodExtension
Zhizhen Hou created HADOOP-14896:
-

Summary: Throttle will behave unexpectedly when GC time is longer than periodExtension
Key: HADOOP-14896
URL: https://issues.apache.org/jira/browse/HADOOP-14896
Project: Hadoop Common
Issue Type: Bug
Components: common
Affects Versions: 2.7.2
Reporter: Zhizhen Hou

--
This message was sent by Atlassian JIRA (v6.4.14#64029)
-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org
[jira] [Created] (HADOOP-14895) Consider exposing SimpleCopyListing#computeSourceRootPath() for downstream project
Ted Yu created HADOOP-14895:
---

Summary: Consider exposing SimpleCopyListing#computeSourceRootPath() for downstream project
Key: HADOOP-14895
URL: https://issues.apache.org/jira/browse/HADOOP-14895
Project: Hadoop Common
Issue Type: Improvement
Reporter: Ted Yu

Over in HBASE-18843, [~vrodionov] needs to override SimpleCopyListing#computeSourceRootPath(). Since the method is private, some duplicated code appears in HBase. We should consider exposing SimpleCopyListing#computeSourceRootPath() so that its behavior can be overridden.
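A minimal sketch of what "exposing" the method would mean. This is illustrative only: the real SimpleCopyListing lives in the distcp module, its method takes and returns Path, and the class and override names below are made up. Widening the access modifier from private to protected lets a downstream subclass replace the behavior instead of copy-pasting the code.

```java
// Simplified stand-in for SimpleCopyListing: the helper was private,
// making it protected allows downstream overrides.
class CopyListing {
    // was effectively: private ... computeSourceRootPath(...)
    protected String computeSourceRootPath(String sourcePath) {
        return sourcePath; // upstream default behavior (simplified)
    }
}

// Hypothetical downstream (e.g. HBase) subclass: override, no duplication.
class HBaseCopyListing extends CopyListing {
    @Override
    protected String computeSourceRootPath(String sourcePath) {
        return "/hbase" + sourcePath; // illustrative downstream behavior
    }
}

class OverrideDemo {
    public static void main(String[] args) {
        // The downstream listing now computes its own root path.
        System.out.println(new HBaseCopyListing().computeSourceRootPath("/data"));
    }
}
```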
[jira] [Created] (HADOOP-14893) ProtobufRpcEngine should use Time.monotonicNow
Chetna Chaudhari created HADOOP-14893:
-

Summary: ProtobufRpcEngine should use Time.monotonicNow
Key: HADOOP-14893
URL: https://issues.apache.org/jira/browse/HADOOP-14893
Project: Hadoop Common
Issue Type: Sub-task
Reporter: Chetna Chaudhari
Priority: Minor
[jira] [Created] (HADOOP-14894) ReflectionUtils should use Time.monotonicNow to measure duration
Bharat Viswanadham created HADOOP-14894:
---

Summary: ReflectionUtils should use Time.monotonicNow to measure duration
Key: HADOOP-14894
URL: https://issues.apache.org/jira/browse/HADOOP-14894
Project: Hadoop Common
Issue Type: Sub-task
Reporter: Bharat Viswanadham

ReflectionUtils should use Time.monotonicNow to measure duration.
[jira] [Created] (HADOOP-14892) MetricsSystemImpl should use Time.monotonicNow for measuring durations
Chetna Chaudhari created HADOOP-14892:
-

Summary: MetricsSystemImpl should use Time.monotonicNow for measuring durations
Key: HADOOP-14892
URL: https://issues.apache.org/jira/browse/HADOOP-14892
Project: Hadoop Common
Issue Type: Sub-task
Reporter: Chetna Chaudhari
Priority: Minor
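The monotonicNow sub-tasks above share one motivation: System.currentTimeMillis() tracks the wall clock, so an NTP or manual clock adjustment in the middle of a measurement can make a computed duration negative or wildly wrong. A minimal sketch of the pattern; the monotonicNow() below mirrors what org.apache.hadoop.util.Time provides (essentially System.nanoTime() scaled to milliseconds), but the class and method names are this sketch's own:

```java
// Duration measurement with a monotonic clock: System.nanoTime() never
// jumps backwards when the wall clock is adjusted, so end - start is
// always a sane, non-negative elapsed time.
class MonotonicDuration {
    /** Analogue of Hadoop's Time.monotonicNow(): monotonic milliseconds. */
    static long monotonicNow() {
        return System.nanoTime() / 1_000_000L;
    }

    /** Measures how long task.run() takes, in milliseconds. */
    static long timeMillis(Runnable task) {
        long start = monotonicNow();    // not System.currentTimeMillis()
        task.run();
        return monotonicNow() - start;  // immune to wall-clock jumps
    }

    public static void main(String[] args) {
        long elapsed = timeMillis(() -> {
            try {
                Thread.sleep(50);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        System.out.println("slept for ~" + elapsed + " ms");
    }
}
```

The fix in each sub-task is the same one-line substitution: replace Time.now()/System.currentTimeMillis() pairs used for elapsed-time arithmetic with Time.monotonicNow().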
[jira] [Reopened] (HADOOP-14799) Update nimbus-jose-jwt to 4.41.1
[ https://issues.apache.org/jira/browse/HADOOP-14799?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Steve Loughran reopened HADOOP-14799:
-

> Update nimbus-jose-jwt to 4.41.1
>
> Key: HADOOP-14799
> URL: https://issues.apache.org/jira/browse/HADOOP-14799
> Project: Hadoop Common
> Issue Type: Sub-task
> Reporter: Ray Chiang
> Assignee: Ray Chiang
> Fix For: 3.0.0-beta1
>
> Attachments: HADOOP-14799.001.patch, HADOOP-14799.002.patch, HADOOP-14799.003.patch
>
> Update the dependency
> com.nimbusds:nimbus-jose-jwt:3.9
> to the latest (4.41.1)
[jira] [Created] (HADOOP-14891) Guava 21.0+ libraries not compatible with user jobs
Jonathan Eagles created HADOOP-14891:
-

Summary: Guava 21.0+ libraries not compatible with user jobs
Key: HADOOP-14891
URL: https://issues.apache.org/jira/browse/HADOOP-14891
Project: Hadoop Common
Issue Type: Bug
Affects Versions: 2.8.1
Reporter: Jonathan Eagles
Assignee: Jonathan Eagles

User provided a Guava 23.0 jar as part of the job submission.

{code}
2017-09-20 16:10:42,897 [INFO] [main] |service.AbstractService|: Service org.apache.tez.dag.app.DAGAppMaster failed in state STARTED; cause: org.apache.hadoop.service.ServiceStateException: java.lang.NoSuchMethodError: com.google.common.base.Objects.toStringHelper(Ljava/lang/Object;)Lcom/google/common/base/Objects$ToStringHelper;
org.apache.hadoop.service.ServiceStateException: java.lang.NoSuchMethodError: com.google.common.base.Objects.toStringHelper(Ljava/lang/Object;)Lcom/google/common/base/Objects$ToStringHelper;
	at org.apache.hadoop.service.ServiceStateException.convert(ServiceStateException.java:59)
	at org.apache.tez.dag.app.DAGAppMaster.startServices(DAGAppMaster.java:1989)
	at org.apache.tez.dag.app.DAGAppMaster.serviceStart(DAGAppMaster.java:2056)
	at org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
	at org.apache.tez.dag.app.DAGAppMaster$9.run(DAGAppMaster.java:2707)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:422)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1936)
	at org.apache.tez.dag.app.DAGAppMaster.initAndStartAppMaster(DAGAppMaster.java:2703)
	at org.apache.tez.dag.app.DAGAppMaster.main(DAGAppMaster.java:2508)
Caused by: java.lang.NoSuchMethodError: com.google.common.base.Objects.toStringHelper(Ljava/lang/Object;)Lcom/google/common/base/Objects$ToStringHelper;
	at org.apache.hadoop.metrics2.lib.MetricsRegistry.toString(MetricsRegistry.java:419)
	at java.lang.String.valueOf(String.java:2994)
	at java.lang.StringBuilder.append(StringBuilder.java:131)
	at org.apache.hadoop.ipc.metrics.RpcMetrics.<init>(RpcMetrics.java:74)
	at org.apache.hadoop.ipc.metrics.RpcMetrics.create(RpcMetrics.java:80)
	at org.apache.hadoop.ipc.Server.<init>(Server.java:2658)
	at org.apache.hadoop.ipc.RPC$Server.<init>(RPC.java:968)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Server.<init>(ProtobufRpcEngine.java:367)
	at org.apache.hadoop.ipc.ProtobufRpcEngine.getServer(ProtobufRpcEngine.java:342)
	at org.apache.hadoop.ipc.RPC$Builder.build(RPC.java:810)
	at org.apache.tez.dag.api.client.DAGClientServer.createServer(DAGClientServer.java:134)
	at org.apache.tez.dag.api.client.DAGClientServer.serviceStart(DAGClientServer.java:82)
	at org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
	at org.apache.tez.dag.app.DAGAppMaster$ServiceWithDependency.start(DAGAppMaster.java:1909)
	at org.apache.tez.dag.app.DAGAppMaster$ServiceThread.run(DAGAppMaster.java:1930)
2017-09-20 16:10:42,898 [ERROR] [main] |rm.TaskSchedulerManager|: Failed to do a clean initiateStop for Scheduler: [0:TezYarn]
{code}

Metrics2 has been relying on the deprecated toStringHelper for some time now, and it was finally removed in Guava 21.0. Removing the dependency on this method will free users to supply their own Guava jar again.
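For context on the fix: Objects.toStringHelper was deprecated in favor of MoreObjects.toStringHelper in Guava 18 and removed in 21.0, which is why a newer user-supplied Guava breaks MetricsRegistry.toString(). One way to avoid pinning any Guava version is to build the string with plain java.util.StringJoiner; a sketch with illustrative field names (not the actual MetricsRegistry fields):

```java
import java.util.StringJoiner;

// Zero-dependency toString() in the style of the removed
// Objects.toStringHelper: "ClassName{field=value, field=value}".
class RpcMetricsInfo {
    private final String name = "rpc"; // illustrative fields only
    private final int port = 8020;

    @Override
    public String toString() {
        // StringJoiner(delimiter, prefix, suffix) handles the braces
        // and comma separation that toStringHelper used to provide.
        return new StringJoiner(", ", getClass().getSimpleName() + "{", "}")
                .add("name=" + name)
                .add("port=" + port)
                .toString();
    }

    public static void main(String[] args) {
        System.out.println(new RpcMetricsInfo());
        // prints: RpcMetricsInfo{name=rpc, port=8020}
    }
}
```

If staying on Guava is acceptable, the mechanical migration is `com.google.common.base.Objects.toStringHelper(...)` to `com.google.common.base.MoreObjects.toStringHelper(...)`.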
[jira] [Created] (HADOOP-14890) Move up to AWS SDK 1.11.199
Steve Loughran created HADOOP-14890:
---

Summary: Move up to AWS SDK 1.11.199
Key: HADOOP-14890
URL: https://issues.apache.org/jira/browse/HADOOP-14890
Project: Hadoop Common
Issue Type: Sub-task
Components: build, fs/s3
Affects Versions: 3.0.0-beta1
Reporter: Steve Loughran
Assignee: Steve Loughran

The AWS SDK in Hadoop 3.0.0-beta1 prints a warning whenever you call abort() on a stream, which is what we need to do whenever doing long-distance seeks in a large file opened with fadvise=normal.

{code}
2017-09-20 17:51:50,459 [ScalaTest-main-running-S3ASeekReadSuite] INFO s3.S3ASeekReadSuite (Logging.scala:logInfo(54)) -
2017-09-20 17:51:50,460 [ScalaTest-main-running-S3ASeekReadSuite] INFO s3.S3ASeekReadSuite (Logging.scala:logInfo(54)) - Starting read() [pos = 45603305]
2017-09-20 17:51:50,461 [ScalaTest-main-running-S3ASeekReadSuite] WARN internal.S3AbortableInputStream (S3AbortableInputStream.java:close(163)) - Not all bytes were read from the S3ObjectInputStream, aborting HTTP connection. This is likely an error and may result in sub-optimal behavior. Request only the bytes you need via a ranged GET or drain the input stream after use.
2017-09-20 17:51:51,263 [ScalaTest-main-running-S3ASeekReadSuite] INFO s3.S3ASeekReadSuite (Logging.scala:logInfo(54)) - Duration of read() [pos = 45603305] = 803,650,637 nS
{code}

This goes away if we upgrade to the latest SDK, at least for the non-localdynamo bits.
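For readers unfamiliar with the trade-off behind that warning: draining the remaining bytes on close() lets the pooled HTTP connection be reused, while abort() discards the connection, which is cheaper when a long-distance seek leaves many megabytes unread. A purely illustrative sketch of such a decision rule; the class name and threshold are made up for this example, not the actual S3A logic:

```java
// Illustrative close-vs-abort decision for a ranged object stream:
// drain a short tail (connection reuse), abort a long one (avoid
// pulling megabytes we will never use).
class DrainOrAbort {
    // Assumed cutoff for this sketch; S3A's real policy differs.
    static final long DRAIN_THRESHOLD = 32 * 1024;

    /** true: drain the tail and reuse the connection; false: abort it. */
    static boolean shouldDrain(long bytesRemaining) {
        return bytesRemaining >= 0 && bytesRemaining <= DRAIN_THRESHOLD;
    }

    public static void main(String[] args) {
        System.out.println(shouldDrain(1_024));       // short tail: drain
        System.out.println(shouldDrain(45_000_000L)); // long seek: abort
    }
}
```

With fadvise=normal a backwards or long forward seek forces exactly the abort() path, hence the noisy SDK warning this issue upgrades away.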
[jira] [Resolved] (HADOOP-14770) S3A http connection in s3a driver not reused in Spark application
[ https://issues.apache.org/jira/browse/HADOOP-14770?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Yonger resolved HADOOP-14770.
-
Resolution: Duplicate

Thanks Steve. When we applied the random read input policy to our workload after upgrading to Hadoop 2.8.1, it worked as I expected: connections are reused rather than destroyed every time.

> S3A http connection in s3a driver not reused in Spark application
>
> Key: HADOOP-14770
> URL: https://issues.apache.org/jira/browse/HADOOP-14770
> Project: Hadoop Common
> Issue Type: Bug
> Components: fs/s3
> Affects Versions: 2.7.3
> Reporter: Yonger
> Assignee: Yonger
> Priority: Minor
>
> I print out connection stats every 2 s when running a Spark application against s3-compatible storage:
> {code}
> ESTAB 0      0      :::10.0.2.36:6      :::10.0.2.254:80
> ESTAB 0      0      :::10.0.2.36:44454  :::10.0.2.254:80
> ESTAB 0      0      :::10.0.2.36:44374  :::10.0.2.254:80
> ESTAB 159724 0      :::10.0.2.36:44436  :::10.0.2.254:80
> ESTAB 0      0      :::10.0.2.36:8      :::10.0.2.254:80
> ESTAB 0      0      :::10.0.2.36:44338  :::10.0.2.254:80
> ESTAB 0      0      :::10.0.2.36:44438  :::10.0.2.254:80
> ESTAB 0      0      :::10.0.2.36:44414  :::10.0.2.254:80
> ESTAB 0      480    :::10.0.2.36:44450  :::10.0.2.254:80 timer:(on,170ms,0)
> ESTAB 0      0      :::10.0.2.36:2      :::10.0.2.254:80
> ESTAB 0      0      :::10.0.2.36:44390  :::10.0.2.254:80
> ESTAB 0      0      :::10.0.2.36:44326  :::10.0.2.254:80
> ESTAB 0      0      :::10.0.2.36:44452  :::10.0.2.254:80
> ESTAB 0      0      :::10.0.2.36:44394  :::10.0.2.254:80
> ESTAB 0      0      :::10.0.2.36:4      :::10.0.2.254:80
> ESTAB 0      0      :::10.0.2.36:44456  :::10.0.2.254:80
> ==
> ESTAB 0      0      :::10.0.2.36:44508  :::10.0.2.254:80
> ESTAB 0      0      :::10.0.2.36:44476  :::10.0.2.254:80
> ESTAB 0      0      :::10.0.2.36:44524  :::10.0.2.254:80
> ESTAB 0      0      :::10.0.2.36:44374  :::10.0.2.254:80
> ESTAB 0      0      :::10.0.2.36:44500  :::10.0.2.254:80
> ESTAB 0      0      :::10.0.2.36:44504  :::10.0.2.254:80
> ESTAB 0      0      :::10.0.2.36:44512  :::10.0.2.254:80
> ESTAB 0      0      :::10.0.2.36:44506  :::10.0.2.254:80
> ESTAB 0      0      :::10.0.2.36:44464  :::10.0.2.254:80
> ESTAB 0      0      :::10.0.2.36:44518  :::10.0.2.254:80
> ESTAB 0      0      :::10.0.2.36:44510  :::10.0.2.254:80
> ESTAB 0      0      :::10.0.2.36:2      :::10.0.2.254:80
> ESTAB 0      0      :::10.0.2.36:44526  :::10.0.2.254:80
> ESTAB 0      0      :::10.0.2.36:44472  :::10.0.2.254:80
> ESTAB 0      0      :::10.0.2.36:44466  :::10.0.2.254:80
> {code}
> The connections above the "==" separator and those below it changed all the time, but we haven't seen this in MR applications.