[ 
https://issues.apache.org/jira/browse/HDFS-16430?focusedWorklogId=710303&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-710303
 ]

ASF GitHub Bot logged work on HDFS-16430:
-----------------------------------------

                Author: ASF GitHub Bot
            Created on: 18/Jan/22 07:35
            Start Date: 18/Jan/22 07:35
    Worklog Time Spent: 10m 
      Work Description: cndaimin opened a new pull request #3899:
URL: https://github.com/apache/hadoop/pull/3899


   HDFS EC uses the last 4 bits of the block ID to store the block index within 
an EC block group. The maximum number of blocks in an EC block group is 
therefore `2^4 = 16`, as defined in `HdfsServerConstants#MAX_BLOCKS_IN_GROUP`.
   
   Currently there is no limitation or warning when adding a bad EC policy with 
`numDataUnits + numParityUnits > 16`; the problem only surfaces later as 
read/write errors on files using that policy, which is not very straightforward 
for users.
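
   A minimal sketch of the kind of up-front check this PR proposes (class and 
method names here are illustrative, not the actual Hadoop patch; only the 
constant mirrors `HdfsServerConstants#MAX_BLOCKS_IN_GROUP`):

   ```java
   public class EcPolicyCheck {
       // Mirrors HdfsServerConstants#MAX_BLOCKS_IN_GROUP:
       // the block index occupies the last 4 bits of the block ID, so 2^4 = 16.
       static final int MAX_BLOCKS_IN_GROUP = 1 << 4;

       // Hypothetical validation: reject a policy whose total unit count
       // cannot be addressed by a 4-bit block index.
       static boolean isValidGroupSize(int numDataUnits, int numParityUnits) {
           return numDataUnits + numParityUnits <= MAX_BLOCKS_IN_GROUP;
       }

       public static void main(String[] args) {
           System.out.println(isValidGroupSize(6, 3));   // RS-6-3: within limit
           System.out.println(isValidGroupSize(10, 4));  // RS-10-4: within limit
           System.out.println(isValidGroupSize(14, 4));  // 18 > 16: should be rejected
       }
   }
   ```

   With such a check, an invalid policy would be rejected at add time instead 
of failing later during reads or writes.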


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]


Issue Time Tracking
-------------------

            Worklog Id:     (was: 710303)
    Remaining Estimate: 0h
            Time Spent: 10m

> Validate maximum blocks in EC group when adding an EC policy
> ------------------------------------------------------------
>
>                 Key: HDFS-16430
>                 URL: https://issues.apache.org/jira/browse/HDFS-16430
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>          Components: ec, erasure-coding
>    Affects Versions: 3.3.0, 3.3.1
>            Reporter: daimin
>            Assignee: daimin
>            Priority: Minor
>          Time Spent: 10m
>  Remaining Estimate: 0h
>
> HDFS EC uses the last 4 bits of the block ID to store the block index within 
> an EC block group. The maximum number of blocks in an EC block group is 
> therefore 2^4=16, as defined in HdfsServerConstants#MAX_BLOCKS_IN_GROUP.
> Currently there is no limitation or warning when adding a bad EC policy with 
> numDataUnits + numParityUnits > 16; the problem only surfaces later as 
> read/write errors on files using that policy, which is not very 
> straightforward for users.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)
