[ https://issues.apache.org/jira/browse/HDFS-13642?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16502852#comment-16502852 ]

SammiChen commented on HDFS-13642:
----------------------------------

Right, the current {{hasErasureCodingPolicy}} doesn't check whether the policy is the 
replication EC policy or a normal EC policy. We should improve the function to add 
that check.
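
For illustration, a minimal sketch of what the improved check could look like, assuming 
the effective policy can be resolved for the {{iip}} and inspected with something like 
{{isReplicationPolicy()}} (the resolver and method names below are assumptions, not the 
exact HDFS internals):
{code:java}
// Sketch only: treat a path as "erasure coded" only when it resolves to a
// real striped EC policy, not the special REPLICATION policy.
// getErasureCodingPolicy/isReplicationPolicy are assumed helpers here.
static boolean hasStripedErasureCodingPolicy(final FSNamesystem fsn,
    final INodesInPath iip) throws IOException {
  final ErasureCodingPolicy ecPolicy =
      FSDirErasureCodingOp.getErasureCodingPolicy(fsn, iip);
  // null or the replication policy means the file will be written with
  // plain replication, so the replication factor still needs verification.
  return ecPolicy != null && !ecPolicy.isReplicationPolicy();
}
{code}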

The original check should be kept. 
{code:java}
if (shouldReplicate ||
    (org.apache.commons.lang.StringUtils.isEmpty(ecPolicyName) &&
     !FSDirErasureCodingOp.hasErasureCodingPolicy(this, iip))) {
  blockManager.verifyReplication(src, replication, clientMachine);
}
{code}

When the file is a 3-replica file, {{blockManager.verifyReplication}} should be 
called to verify the replication factor. The value of {{shouldReplicate}} doesn't 
indicate whether the file is 3-replica or not; it only reflects whether 
{{CreateFlag.SHOULD_REPLICATE}} was explicitly set.
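
Putting the two together, the condition in the create path would then look roughly like 
the sketch below, so {{blockManager.verifyReplication}} runs whenever the file will 
actually be written with replication, whether or not the flag was set explicitly (again 
a sketch; {{hasStripedErasureCodingPolicy}} is the hypothetical helper from above):
{code:java}
// Sketch of the intended behaviour in the create path.
final boolean willBeReplicated = shouldReplicate
    || (org.apache.commons.lang.StringUtils.isEmpty(ecPolicyName)
        && !hasStripedErasureCodingPolicy(this, iip));
if (willBeReplicated) {
  // Only files that end up as plain replicated files need their
  // replication factor verified.
  blockManager.verifyReplication(src, replication, clientMachine);
}
{code}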

 

> Creating a file with block size smaller than EC policy's cell size should 
> throw
> -------------------------------------------------------------------------------
>
>                 Key: HDFS-13642
>                 URL: https://issues.apache.org/jira/browse/HDFS-13642
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: erasure-coding
>    Affects Versions: 3.0.0
>            Reporter: Xiao Chen
>            Assignee: Xiao Chen
>            Priority: Major
>         Attachments: HDFS-13642.01.patch, HDFS-13642.02.patch, 
> HDFS-13642.03.patch, editsStored
>
>
> The following command causes an exception:
> {noformat}
> hadoop fs -Ddfs.block.size=349696 -put -f lineitem_sixblocks.parquet 
> /test-warehouse/tmp123ec
> {noformat}
> {noformat}
> 18/05/25 16:00:59 WARN hdfs.DataStreamer: DataStreamer Exception
> java.io.IOException: BlockSize 349696 < lastByteOffsetInBlock, #0: 
> blk_-9223372036854574256_14634, packet seqno: 7 offsetInBlock: 349696 
> lastPacketInBlock: false lastByteOffsetInBlock: 350208
>   at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:729)
>   at 
> org.apache.hadoop.hdfs.StripedDataStreamer.run(StripedDataStreamer.java:46)
> 18/05/25 16:00:59 WARN hdfs.DFSOutputStream: Failed: offset=4096, length=512, 
> DFSStripedOutputStream:#0: failed, blk_-9223372036854574256_14634
> java.io.IOException: BlockSize 349696 < lastByteOffsetInBlock, #0: 
> blk_-9223372036854574256_14634, packet seqno: 7 offsetInBlock: 349696 
> lastPacketInBlock: false lastByteOffsetInBlock: 350208
>   at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:729)
>   at 
> org.apache.hadoop.hdfs.StripedDataStreamer.run(StripedDataStreamer.java:46)
> {noformat}
> Then the streamer is confused and hangs.
> The local file is under 6 MB; the HDFS file has an RS-3-2-1024k EC policy.
>  
> Credit to [~tarasbob] for reporting this issue.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
