[ https://issues.apache.org/jira/browse/HADOOP-15426?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16560335#comment-16560335 ]
Steve Loughran commented on HADOOP-15426:
-----------------------------------------

Attached: the latest screenshot of a full parallel iTest run of the forthcoming patch; 26 minutes to complete with a thread count of 12. The DDB scale tests' statistics dump of the FS shows that it is wired up; there's an assert for that. (FWIW, the count wasn't going into the StorageStatistics, just the metrics)...the kind of thing you only notice once you start generating throttle events.

{code}
2018-07-27 13:39:06,821 s3guard_metadatastore_put_path_request = 0
2018-07-27 13:39:06,821 s3guard_metadatastore_put_path_latency = 0
2018-07-27 13:39:06,821 s3guard_metadatastore_initialization = 0
2018-07-27 13:39:06,821 s3guard_metadatastore_retry = 267
2018-07-27 13:39:06,821 s3guard_metadatastore_throttled = 267
2018-07-27 13:39:06,821 s3guard_metadatastore_throttle_rate = 0
2018-07-27 13:39:06,821 store_io_throttled = 267
{code}

> Make S3guard client resilient to DDB throttle events and network failures
> -------------------------------------------------------------------------
>
>                 Key: HADOOP-15426
>                 URL: https://issues.apache.org/jira/browse/HADOOP-15426
>             Project: Hadoop Common
>          Issue Type: Sub-task
>          Components: fs/s3
>    Affects Versions: 3.1.0
>            Reporter: Steve Loughran
>            Assignee: Steve Loughran
>            Priority: Blocker
>         Attachments: HADOOP-15426-001.patch, HADOOP-15426-002.patch, Screen Shot 2018-07-24 at 15.16.46.png, Screen Shot 2018-07-25 at 16.22.10.png, Screen Shot 2018-07-25 at 16.28.53.png, Screen Shot 2018-07-27 at 14.07.38.png
>
> Managed to create this on a parallel test run:
> {code}
> org.apache.hadoop.fs.s3a.AWSServiceThrottledException: delete on s3a://hwdev-steve-ireland-new/fork-0005/test/existing-dir/existing-file: com.amazonaws.services.dynamodbv2.model.ProvisionedThroughputExceededException: The level of configured provisioned throughput for the table was exceeded. Consider increasing your provisioning level with the UpdateTable API. (Service: AmazonDynamoDBv2; Status Code: 400; Error Code: ProvisionedThroughputExceededException; Request ID: RDM3370REDBBJQ0SLCLOFC8G43VV4KQNSO5AEMVJF66Q9ASUAAJG): The level of configured provisioned throughput for the table was exceeded. Consider increasing your provisioning level with the UpdateTable API. (Service: AmazonDynamoDBv2; Status Code: 400; Error Code: ProvisionedThroughputExceededException; Request ID: RDM3370REDBBJQ0SLCLOFC8G43VV4KQNSO5AEMVJF66Q9ASUAAJG)
> 	at
> {code}
> We should be able to handle this. It is a 400 "bad things happened" error, though, not the 503 from S3.
> h3. We need a retry handler for DDB throttle operations

--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
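The "retry handler for DDB throttle operations" called for above could be sketched roughly as below. This is a hypothetical illustration only, not the actual HADOOP-15426 patch: the class name `ThrottleRetryer`, the injected throttle-classifier predicate, and the `throttledEvents` counter (mirroring the `s3guard_metadatastore_throttled` statistic in the dump) are all assumptions; real code would match the AWS SDK's `ProvisionedThroughputExceededException` class directly.

```java
// Hypothetical sketch of a retry handler for DynamoDB throttle events.
// Names (ThrottleRetryer, isThrottle, throttledEvents) are illustrative
// only; they are NOT the API of the actual HADOOP-15426 patch.
import java.util.concurrent.Callable;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.concurrent.atomic.AtomicLong;
import java.util.function.Predicate;

public class ThrottleRetryer {
    private final int maxRetries;          // extra attempts after the first call
    private final long baseDelayMs;        // initial backoff interval
    private final Predicate<Exception> isThrottle;
    // Counter analogous to the s3guard_metadatastore_throttled statistic.
    public final AtomicLong throttledEvents = new AtomicLong();

    public ThrottleRetryer(int maxRetries, long baseDelayMs,
                           Predicate<Exception> isThrottle) {
        this.maxRetries = maxRetries;
        this.baseDelayMs = baseDelayMs;
        this.isThrottle = isThrottle;
    }

    /** Run the operation, retrying with exponential backoff on throttling. */
    public <T> T retry(Callable<T> operation) throws Exception {
        Exception last = null;
        for (int attempt = 0; attempt <= maxRetries; attempt++) {
            try {
                return operation.call();
            } catch (Exception e) {
                if (!isThrottle.test(e)) {
                    throw e;               // non-throttle failures propagate
                }
                last = e;
                throttledEvents.incrementAndGet();
                Thread.sleep(baseDelayMs << attempt);  // exponential backoff
            }
        }
        throw last;                        // retries exhausted
    }

    public static void main(String[] args) throws Exception {
        // Classify the DDB 400 by message here for the sake of a
        // self-contained demo; real code would test for the SDK's
        // ProvisionedThroughputExceededException class.
        ThrottleRetryer retryer = new ThrottleRetryer(5, 10,
            e -> e.getMessage() != null
                 && e.getMessage().contains("ProvisionedThroughputExceeded"));
        AtomicInteger calls = new AtomicInteger();
        String result = retryer.retry(() -> {
            if (calls.incrementAndGet() < 3) {
                throw new RuntimeException(
                    "ProvisionedThroughputExceededException: table throttled");
            }
            return "ok";
        });
        System.out.println(result + " after " + calls.get() + " calls, "
            + retryer.throttledEvents.get() + " throttle events");
    }
}
```

Injecting the throttle classifier as a predicate keeps the backoff loop independent of the AWS SDK, which also makes the "wired up" metric above easy to assert on in tests.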