[ https://issues.apache.org/jira/browse/HDFS-17085?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
farmmamba resolved HDFS-17085.
------------------------------
Resolution: Not A Problem
> Erasure coding: readTo is computed larger than actually needed during pread
> ---------------------------------------------------------------------------
>
> Key: HDFS-17085
> URL: https://issues.apache.org/jira/browse/HDFS-17085
> Project: Hadoop HDFS
> Issue Type: Improvement
> Components: erasure-coding
> Affects Versions: 3.4.0
> Reporter: farmmamba
> Assignee: farmmamba
> Priority: Major
>
> HDFS-16520 improved EC pread by introducing a readTo field.
> However, the way it is calculated still seems to have some room for improvement.
> Currently, it is calculated by the code below:
> {code:java}
> for (AlignedStripe stripe : stripes) {
>   readTo = Math.max(readTo, stripe.getOffsetInBlock() +
>       stripe.getSpanInBlock());
> } {code}
> But in the following code, the same maximum readTo is used to construct the
> StripeReader for every AlignedStripe object. I think this still wastes resources.
> {code:java}
> for (AlignedStripe stripe : stripes) {
>   // Parse group to get chosen DN location
>   StripeReader preader = new PositionStripeReader(stripe, ecPolicy, blks,
>       preaderInfos, corruptedBlocks, decoder, this);
>   preader.setReadTo(readTo);
>   try {
>     preader.readStripe();
>   } finally {
>     preader.close();
>   }
> } {code}
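> A minimal sketch of the per-stripe alternative this report seems to suggest
> (hypothetical; not a confirmed fix, and the issue was ultimately resolved as
> Not A Problem): bound each reader by the end offset of its own stripe instead
> of the maximum end offset across all stripes. Only identifiers already shown
> above are used; the local variable stripeReadTo is introduced here for
> illustration.
> {code:java}
> for (AlignedStripe stripe : stripes) {
>   // Hypothetical: limit this reader to its own stripe's end offset in the
>   // block, rather than the global maximum computed over all stripes.
>   long stripeReadTo = stripe.getOffsetInBlock() + stripe.getSpanInBlock();
>   StripeReader preader = new PositionStripeReader(stripe, ecPolicy, blks,
>       preaderInfos, corruptedBlocks, decoder, this);
>   preader.setReadTo(stripeReadTo);
>   try {
>     preader.readStripe();
>   } finally {
>     preader.close();
>   }
> } {code}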