[ 
https://issues.apache.org/jira/browse/HDFS-7630?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14294898#comment-14294898
 ] 

sam liu commented on HDFS-7630:
-------------------------------

Arpit,

Not all such hard-coded values cause failures, and the patch is mainly about 
removing the hard-coding. But sometimes the hard-coding does cause failures. For 
example, without patch HDFS-7585, the test TestEnhancedByteBufferAccess fails 
on the Power platform. In that test, BLOCK_SIZE is set to 4096, which happens 
to equal the default page size on x86 Linux, but on Power Linux the default 
page size is 65536. Since HDFS runs on top of the operating system, it would be 
better if the unit tests took such operating system differences into account.
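
One way to avoid the hard-coding is to query the operating system page size at 
runtime instead of assuming 4096. The sketch below is only an illustration, not 
the code from the patch: it uses sun.misc.Unsafe.pageSize() (a JDK-internal 
API) and a hypothetical class name, PageSize.

```java
import java.lang.reflect.Field;
import sun.misc.Unsafe;

public class PageSize {
    // Query the OS page size at runtime rather than hard-coding 4096.
    // Returns e.g. 4096 on x86 Linux and 65536 on Power Linux.
    static int osPageSize() throws Exception {
        Field f = Unsafe.class.getDeclaredField("theUnsafe");
        f.setAccessible(true);
        return ((Unsafe) f.get(null)).pageSize();
    }

    public static void main(String[] args) throws Exception {
        int pageSize = osPageSize();
        // A test could then derive its block size from this value
        // instead of a platform-specific constant.
        System.out.println("OS page size: " + pageSize);
    }
}
```

A test that sizes blocks from this value would pass on both x86 and Power 
without any platform-specific constants.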

Thanks!

> TestConnCache hardcode block size without considering native OS
> ---------------------------------------------------------------
>
>                 Key: HDFS-7630
>                 URL: https://issues.apache.org/jira/browse/HDFS-7630
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: test
>            Reporter: sam liu
>            Assignee: sam liu
>         Attachments: HDFS-7630.001.patch, HDFS-7630.002.patch
>
>
> TestConnCache hardcodes the block size with 'BLOCK_SIZE = 4096'; however, 
> that value is incorrect on some platforms. For example, on the Power 
> platform, the correct value is 65536.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)