https://issues.apache.org/jira/browse/HDFS-15315?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17927894#comment-17927894

liuguanghua edited comment on HDFS-15315 at 2/18/25 2:08 AM:
-------------------------------------------------------------

Hello, Sir. I have some questions about the addForStriped condition. Currently, 
in the case of erasure coding blocks, the block is added only when there isn't 
any missing node. The function addExpectedReplicasToPending() adds the block 
into pendingReconstruction.
 
addForStriped = blkStriped.getRealTotalBlockNum() == expectedStorages.length
 
 Why should we decide based on addForStriped? Can we add all expectedStorages 
into pendingReconstruction, as is done for replicated blocks? 
In addition, with XOR-2-1-1024k, if we write only 1024K into an EC file, then at 
close(), blkStriped.getRealTotalBlockNum() = 1 while expectedStorages.length = 3. 
This leads the NameNode to first create an ECReconstruction task for this EC 
group, and then the later IBR (incremental block report) arrives.
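To make the scenario concrete, here is a minimal standalone sketch (not the 
actual HDFS code; the method and variable names only mirror the identifiers 
discussed above) showing why the check fails for a partially written 
XOR-2-1-1024k group:

```java
// Standalone illustration of the addForStriped condition described above.
// This is a hedged sketch, not BlockManager code: realTotalBlockNum and
// expectedStorages stand in for blkStriped.getRealTotalBlockNum() and
// expectedStorages.length.
public class AddForStripedDemo {

    // True only when every expected storage corresponds to a real internal
    // block, i.e. the striped group would be added to pendingReconstruction.
    static boolean addForStriped(int realTotalBlockNum, int expectedStorages) {
        return realTotalBlockNum == expectedStorages;
    }

    public static void main(String[] args) {
        // XOR-2-1-1024k with only 1024K written at close():
        // getRealTotalBlockNum() = 1, but 3 storages are expected.
        System.out.println(addForStriped(1, 3)); // false -> not added to pending

        // A fully written stripe: 3 real internal blocks, 3 expected storages.
        System.out.println(addForStriped(3, 3)); // true -> added to pending
    }
}
```

Because the partially written group is not added to pendingReconstruction, the 
NameNode can schedule an ECReconstruction task before the later IBR arrives, 
which is the behavior questioned above.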
 
[~weichiu] [~graypacket]  
 



> IOException on close() when using Erasure Coding
> ------------------------------------------------
>
>                 Key: HDFS-15315
>                 URL: https://issues.apache.org/jira/browse/HDFS-15315
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: ec, hdfs
>    Affects Versions: 3.1.1
>         Environment: XOR-2-1-1024k policy on hadoop 3.1.1 with 3 datanodes
>            Reporter: Anshuman Singh
>            Assignee: Zhao Yi Ming
>            Priority: Major
>
> When using Erasure Coding policy on a directory, the replication factor is 
> set to 1. Solr fails in indexing documents with error - _java.io.IOException: 
> Unable to close file because the last block does not have enough number of 
> replicas._ It works fine without EC (with replication factor as 3.) It seems 
> to be identical to this issue: 
> https://issues.apache.org/jira/browse/HDFS-11486



--
This message was sent by Atlassian Jira
(v8.20.10#820010)
