[
https://issues.apache.org/jira/browse/HDFS-16430?focusedWorklogId=711858&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-711858
]
ASF GitHub Bot logged work on HDFS-16430:
-----------------------------------------
Author: ASF GitHub Bot
Created on: 20/Jan/22 03:56
Start Date: 20/Jan/22 03:56
Worklog Time Spent: 10m
Work Description: cndaimin commented on pull request #3899:
URL: https://github.com/apache/hadoop/pull/3899#issuecomment-1017095749
Thanks for your review! @ayushtkn
Issue Time Tracking
-------------------
Worklog Id: (was: 711858)
Time Spent: 0.5h (was: 20m)
> Validate maximum blocks in EC group when adding an EC policy
> ------------------------------------------------------------
>
> Key: HDFS-16430
> URL: https://issues.apache.org/jira/browse/HDFS-16430
> Project: Hadoop HDFS
> Issue Type: Improvement
> Components: ec, erasure-coding
> Affects Versions: 3.3.0, 3.3.1
> Reporter: daimin
> Assignee: daimin
> Priority: Minor
> Labels: pull-request-available
> Time Spent: 0.5h
> Remaining Estimate: 0h
>
> HDFS EC uses the last 4 bits of the block ID to store the block index within an EC
> block group. The maximum number of blocks in an EC block group is therefore
> 2^4 = 16, as defined by HdfsServerConstants#MAX_BLOCKS_IN_GROUP.
> Currently there is no validation or warning when adding a bad EC policy with
> numDataUnits + numParityUnits > 16. The problem only surfaces later as read/write
> errors on files that use the bad policy, which is not straightforward for users to diagnose.
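A minimal sketch of the kind of check the issue proposes, assuming the limit of 16 mirrors HdfsServerConstants#MAX_BLOCKS_IN_GROUP; the class and method names here (EcPolicyValidator, validateSchema) are hypothetical illustrations, not the actual patch in PR #3899:

```java
// Sketch only: a guard that could run when a new EC policy is added.
// The constant mirrors HdfsServerConstants#MAX_BLOCKS_IN_GROUP cited in the
// issue; the surrounding class and method are hypothetical.
final class EcPolicyValidator {

  // Block index is stored in the last 4 bits of the block ID, so at most
  // 2^4 = 16 blocks can belong to one EC block group.
  static final int MAX_BLOCKS_IN_GROUP = 16;

  static void validateSchema(int numDataUnits, int numParityUnits) {
    int totalUnits = numDataUnits + numParityUnits;
    if (totalUnits > MAX_BLOCKS_IN_GROUP) {
      // Fail fast at policy-add time instead of surfacing later as
      // read/write errors on files that use the bad policy.
      throw new IllegalArgumentException(
          "numDataUnits + numParityUnits = " + totalUnits
              + " exceeds the maximum of " + MAX_BLOCKS_IN_GROUP
              + " blocks in an EC block group");
    }
  }

  public static void main(String[] args) {
    validateSchema(6, 3);   // e.g. RS-6-3: 9 <= 16, accepted
    validateSchema(14, 4);  // 18 > 16: rejected with IllegalArgumentException
  }
}
```

With such a check, a bad policy is rejected when the administrator adds it, rather than being accepted silently and failing only when clients read or write EC files.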