[
https://issues.apache.org/jira/browse/HDFS-16430?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
ASF GitHub Bot updated HDFS-16430:
----------------------------------
Labels: pull-request-available (was: )
> Validate maximum blocks in EC group when adding an EC policy
> ------------------------------------------------------------
>
> Key: HDFS-16430
> URL: https://issues.apache.org/jira/browse/HDFS-16430
> Project: Hadoop HDFS
> Issue Type: Improvement
> Components: ec, erasure-coding
> Affects Versions: 3.3.0, 3.3.1
> Reporter: daimin
> Assignee: daimin
> Priority: Minor
> Labels: pull-request-available
> Time Spent: 10m
> Remaining Estimate: 0h
>
> HDFS EC uses the last 4 bits of the block ID to store the block index
> within an EC block group. The maximum number of blocks in an EC block
> group is therefore 2^4 = 16, as defined in
> HdfsServerConstants#MAX_BLOCKS_IN_GROUP.
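> As a quick illustration of that layout (a minimal sketch in Java; the
> helper name and the mask constant are illustrative, not the actual
> Hadoop API), the block index is recoverable from the low 4 bits of the
> block ID:
>
>     // Illustrative only: the low 4 bits of a striped block ID hold the
>     // index of the block within its EC block group, so a group can
>     // contain at most 2^4 = 16 blocks.
>     static final long BLOCK_GROUP_INDEX_MASK = 0xF;
>
>     static int blockIndexInGroup(long blockId) {
>         return (int) (blockId & BLOCK_GROUP_INDEX_MASK);
>     }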
> Currently there is no validation or warning when adding an invalid EC
> policy with numDataUnits + numParityUnits > 16. The problem only
> surfaces later as read/write errors on EC files that use the bad
> policy, which is not straightforward for users to diagnose.
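> A minimal sketch of the kind of check this issue proposes, assuming a
> hypothetical validation hook in the policy-addition path (the method
> name is illustrative; MAX_BLOCKS_IN_GROUP is the real constant named
> above, and the ECSchema accessors are used as I understand them):
>
>     import org.apache.hadoop.hdfs.server.common.HdfsServerConstants;
>     import org.apache.hadoop.io.erasurecode.ECSchema;
>
>     // Illustrative only: reject a policy whose total unit count cannot
>     // be encoded in the 4-bit block index of a striped block ID.
>     static void validateBlockGroupSize(ECSchema schema) {
>         int totalUnits = schema.getNumDataUnits() + schema.getNumParityUnits();
>         if (totalUnits > HdfsServerConstants.MAX_BLOCKS_IN_GROUP) {
>             throw new IllegalArgumentException(
>                 "numDataUnits + numParityUnits = " + totalUnits
>                 + " exceeds MAX_BLOCKS_IN_GROUP ("
>                 + HdfsServerConstants.MAX_BLOCKS_IN_GROUP + ")");
>         }
>     }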
--
This message was sent by Atlassian Jira
(v8.20.1#820001)
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]