[
https://issues.apache.org/jira/browse/HADOOP-15604?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Steve Loughran resolved HADOOP-15604.
-------------------------------------
Resolution: Fixed
Fix Version/s: 3.3.0
Fixed in the HADOOP-15183 patch: a BulkOperation is initiated against the
metastore and passed in to all metadata operations; it is used to track
ancestor status and to avoid probing for or creating any ancestors already
found or created by other operations in the same bulk operation.
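
For illustration only, a minimal sketch of the idea: carry a per-bulk-operation
set of ancestor paths so that repeated metadata writes within the same commit do
not re-probe or re-create the same parent directories. The class and method
names below (AncestorState, markAncestorSeen, createDirectoryEntryIfMissing) are
hypothetical and are not the actual HADOOP-15183 API.

{code:java}
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

import org.apache.hadoop.fs.Path;

/**
 * Hypothetical per-bulk-operation state: remembers which ancestor
 * directories have already been found or created during this operation.
 */
class AncestorState {

  /** Ancestors already found or created in this bulk operation. */
  private final Set<Path> knownAncestors = ConcurrentHashMap.newKeySet();

  /**
   * Record an ancestor; returns true only the first time it is seen,
   * so callers can skip the metastore probe/create on later hits.
   */
  boolean markAncestorSeen(Path ancestor) {
    return knownAncestors.add(ancestor);
  }

  /**
   * Walk up from a file's parent, stopping as soon as an ancestor has
   * already been handled earlier in this bulk operation.
   */
  void ensureAncestors(Path path) {
    for (Path dir = path.getParent(); dir != null; dir = dir.getParent()) {
      if (!markAncestorSeen(dir)) {
        break;                            // already handled; no extra I/O
      }
      createDirectoryEntryIfMissing(dir); // hypothetical metastore call
    }
  }

  private void createDirectoryEntryIfMissing(Path dir) {
    // placeholder for a single metastore put of a directory entry
  }
}
{code}

Sharing one such state object across the ~50 per-file commit threads means each
ancestor directory is probed or created at most once per task commit, rather
than once per file.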
> Bulk commits of S3A MPUs place needless excessive load on S3 & S3Guard
> ----------------------------------------------------------------------
>
> Key: HADOOP-15604
> URL: https://issues.apache.org/jira/browse/HADOOP-15604
> Project: Hadoop Common
> Issue Type: Sub-task
> Components: fs/s3
> Affects Versions: 3.1.0
> Reporter: Gabor Bota
> Assignee: Steve Loughran
> Priority: Major
> Fix For: 3.3.0
>
>
> When ~50 files are being committed, each in its own thread from the commit
> pool, the DDB repo is probably being overloaded just from a single process
> doing task commit. We should be backing off more, especially given that
> failing on a write could leave the store inconsistent with the FS (renames,
> etc.).
> It would be nice to have some tests proving that the I/O thresholds are the
> reason for unprocessed items in the DynamoDB metadata store.
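
On the backoff point in the description above: DynamoDB batch writes can return
some items as unprocessed when throughput limits are hit, and the usual remedy
is to resubmit the leftovers with exponential backoff. The sketch below is
illustrative only and is not the S3Guard implementation; the retry limit and
base delay are made-up values.

{code:java}
import java.util.List;
import java.util.Map;

import com.amazonaws.services.dynamodbv2.AmazonDynamoDB;
import com.amazonaws.services.dynamodbv2.model.BatchWriteItemRequest;
import com.amazonaws.services.dynamodbv2.model.BatchWriteItemResult;
import com.amazonaws.services.dynamodbv2.model.WriteRequest;

public class BackoffBatchWriter {

  private static final int MAX_RETRIES = 9;       // hypothetical limit
  private static final long BASE_DELAY_MS = 100;  // hypothetical base delay

  /**
   * Submit a batch write, resubmitting any unprocessed items with
   * exponential backoff instead of hammering the table.
   */
  public static void writeWithBackoff(AmazonDynamoDB ddb,
      Map<String, List<WriteRequest>> items) throws InterruptedException {
    Map<String, List<WriteRequest>> pending = items;
    for (int attempt = 0; !pending.isEmpty(); attempt++) {
      BatchWriteItemResult result = ddb.batchWriteItem(
          new BatchWriteItemRequest().withRequestItems(pending));
      pending = result.getUnprocessedItems();
      if (pending.isEmpty()) {
        return;                                   // everything accepted
      }
      if (attempt >= MAX_RETRIES) {
        throw new IllegalStateException(
            "DynamoDB items still unprocessed after " + MAX_RETRIES + " retries");
      }
      Thread.sleep(BASE_DELAY_MS << attempt);     // 100ms, 200ms, 400ms, ...
    }
  }
}
{code}

A test that drives batch sizes past the table's provisioned write capacity
should then surface the unprocessed-items behaviour the description wants to
prove.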