[ https://issues.apache.org/jira/browse/HDFS-8001?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14392655#comment-14392655 ]

Hudson commented on HDFS-8001:
------------------------------

FAILURE: Integrated in Hadoop-Hdfs-trunk #2083 (See [https://builds.apache.org/job/Hadoop-Hdfs-trunk/2083/])
HDFS-8001 RpcProgramNfs3 : wrong parsing of dfs.blocksize. Contributed by Remi Catherinot (brandonli: rev 4d14816c269f110445e1ad3e03ac53b0c1cdb58b)
* hadoop-hdfs-project/hadoop-hdfs-nfs/src/main/java/org/apache/hadoop/hdfs/nfs/nfs3/RpcProgramNfs3.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


> RpcProgramNfs3 : wrong parsing of dfs.blocksize
> -----------------------------------------------
>
>                 Key: HDFS-8001
>                 URL: https://issues.apache.org/jira/browse/HDFS-8001
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: nfs
>    Affects Versions: 2.6.0, 2.5.2
>         Environment: any : windows, linux, etc.
>            Reporter: Remi Catherinot
>            Assignee: Remi Catherinot
>            Priority: Trivial
>              Labels: easyfix
>             Fix For: 2.7.0
>
>         Attachments: HDFS-8001.patch
>
>   Original Estimate: 2h
>  Remaining Estimate: 2h
>
> org/apache/hadoop/hdfs/nfs/nfs3/RpcProgramNfs3.java uses Configuration.getLong
> to read the dfs.blocksize value, but it should use getLongBytes so it can
> handle size syntax such as 64m in addition to plain numeric values (see the
> sketch below). The DataNode code and others all use getLongBytes.
> The call is at line 187 of the source file.
> Detected on version 2.5.2; version 2.6.0 was checked and still has the bug.
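>
> A minimal sketch of the difference between the two calls (not the actual
> patch; the standalone class name and the block-size values below are only
> illustrative):
>
>   import org.apache.hadoop.conf.Configuration;
>
>   public class BlockSizeParsingDemo {
>     public static void main(String[] args) {
>       Configuration conf = new Configuration();
>       // Users commonly configure the block size with a size suffix.
>       conf.set("dfs.blocksize", "64m");
>
>       // getLongBytes understands binary-prefix suffixes (k, m, g, ...)
>       // and returns 67108864 here.
>       System.out.println(conf.getLongBytes("dfs.blocksize", 134217728L));
>
>       // getLong only accepts plain numeric strings, so "64m" makes it
>       // throw a NumberFormatException instead of returning a value.
>       try {
>         System.out.println(conf.getLong("dfs.blocksize", 134217728L));
>       } catch (NumberFormatException e) {
>         System.out.println("getLong cannot parse 64m: " + e.getMessage());
>       }
>     }
>   }
>
> The attached patch simply switches the getLong call to getLongBytes at the
> spot noted above.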



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
