[ https://issues.apache.org/jira/browse/HDFS-9446?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15032684#comment-15032684 ]

Colin Patrick McCabe edited comment on HDFS-9446 at 1/29/16 11:51 PM:
----------------------------------------------------------------------

We should not change the type of {{tSize}}.  It would silently break everyone 
using libhdfs, causing crashes and memory corruption.  Instead, we should add a 
new API for creating files that takes a block size bigger than 32 bits.  The 
other uses of {{tSize}} are all places where 31 bits is enough (reading into 
and out of buffers, which can't be larger than 31 bits anyway).


was (Author: cmccabe):
Please do not change the type of {{tSize}}.  It would silently break everyone 
using libhdfs, causing crashes and memory corruption.  Instead, we are probably 
going to add a new API for creating files that takes a block size bigger than 
32 bits.  The other uses of {{tSize}} are all places where 31 bits is enough 
(reading into and out of buffers, which can't be larger than 31 bits anyway).

> tSize of libhdfs in hadoop-2.7.1 is still int32_t
> -------------------------------------------------
>
>                 Key: HDFS-9446
>                 URL: https://issues.apache.org/jira/browse/HDFS-9446
>             Project: Hadoop HDFS
>          Issue Type: Bug
>            Reporter: Glen Cao
>
> Issue (https://issues.apache.org/jira/browse/HDFS-466) says what I mentioned 
> in the title is fixed. However, I find that in the source 
> (hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/hdfs.h) of 
> hadoop-2.7.1, tSize is still typedef-ed as int32_t and I don't find any 
> compilation option about that.
> In hdfs.h:
> 75     typedef int32_t   tSize; /// size of data for read/write io ops



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
