[ https://issues.apache.org/jira/browse/HDFS-6641?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14060325#comment-14060325 ]

Brahma Reddy Battula commented on HDFS-6641:
--------------------------------------------

Hi [~cnauroth] 

{quote}
The concat destination file must still maintain the invariant that all blocks 
have the same length, except for possibly the last block, which may be 
partially filled. If this invariant were not maintained, then it could cause 
unpredictable behavior later when a client attempts to read that file.
I'm resolving this issue as Not a Problem, because I believe this is all 
working as designed.
{quote}

Do you mean the last block of the target must be full before concat is attempted (i.e. as a precondition)? I feel this could still be addressed; otherwise we should document the reason and then close this JIRA. Please correct me if I am wrong.
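For illustration, the precondition being discussed amounts to the check below. This is a standalone sketch that mirrors the logic quoted from FSNamesystem; the class and method names here are simplified stand-ins, not the actual HDFS code:

```java
// Sketch of the NameNode's concat precondition: the target file's last
// block must be exactly blockSize bytes, so that after concat every
// block except possibly the new last one has a uniform length.
public class ConcatPrecondition {

    static void checkLastBlockFull(String target, long blockSize, long lastBlockBytes) {
        if (blockSize != lastBlockBytes) {
            // Same shape as the HadoopIllegalArgumentException in the trace
            throw new IllegalArgumentException("The last block in " + target
                + " is not full; last block size = " + lastBlockBytes
                + " but file block size = " + blockSize);
        }
    }

    public static void main(String[] args) {
        long blockSize = 134217728L; // 128 MB, the default dfs.blocksize

        // A target whose last block is exactly full passes the check.
        checkLastBlockFull("/Full.txt", blockSize, blockSize);

        // A 14-byte last block (as in the reported trace) is rejected.
        try {
            checkLastBlockFull("/Test.txt", blockSize, 14L);
            throw new AssertionError("expected the precondition to fail");
        } catch (IllegalArgumentException expected) {
            System.out.println(expected.getMessage());
        }
    }
}
```

So a client hitting this exception would need to either fully pad the target's last block or pick a different target; the invariant exists so reads can compute block offsets from a single uniform block size.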

> [ HDFS- File Concat ] Concat will fail when block is not full
> -------------------------------------------------------------
>
>                 Key: HDFS-6641
>                 URL: https://issues.apache.org/jira/browse/HDFS-6641
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: namenode
>    Affects Versions: 2.4.1
>            Reporter: Brahma Reddy Battula
>
> Usually we can't ensure the last block is always full... please let me know the purpose of 
> the following check:
>     long blockSize = trgInode.getPreferredBlockSize();
>     // check the end block to be full
>     final BlockInfo last = trgInode.getLastBlock();
>     if(blockSize != last.getNumBytes()) {
>       throw new HadoopIllegalArgumentException("The last block in " + target
>           + " is not full; last block size = " + last.getNumBytes()
>           + " but file block size = " + blockSize);
>     }
> If it is an issue, I'll file a JIRA.
> Following is the trace:
> Exception in thread "main" 
> org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.HadoopIllegalArgumentException):
>  The last block in /Test.txt is not full; last block size = 14 but file block 
> size = 134217728
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.concatInternal(FSNamesystem.java:1887)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.concatInt(FSNamesystem.java:1833)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.concat(FSNamesystem.java:1795)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.concat(NameNodeRpcServer.java:704)
> at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.concat(ClientNamenodeProtocolServerSideTranslatorPB.java:512)
> at 
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)



--
This message was sent by Atlassian JIRA
(v6.2#6252)
