[
https://issues.apache.org/jira/browse/HDFS-8001?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Remi Catherinot updated HDFS-8001:
----------------------------------
Target Version/s: 2.6.1
Status: Patch Available (was: Open)
The affected line depends on the Hadoop version; here are the diffs for versions 2.5.2 and 2.6.0.
Hadoop 2.5.2 diff:
182c182
< blockSize = config.getLong(DFSConfigKeys.DFS_BLOCK_SIZE_KEY,
---
> blockSize = config.getLongBytes(DFSConfigKeys.DFS_BLOCK_SIZE_KEY,
Hadoop 2.6.0 diff:
187c187
< blockSize = config.getLong(DFSConfigKeys.DFS_BLOCK_SIZE_KEY,
---
> blockSize = config.getLongBytes(DFSConfigKeys.DFS_BLOCK_SIZE_KEY,
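For context, here is a minimal standalone sketch (not part of the patch; the class name is made up for illustration) of the behavior the replacement call relies on: Configuration.getLongBytes understands binary size suffixes, so a value like 64m parses to 67108864, while plain numeric strings still parse as before.
{code:java}
import org.apache.hadoop.conf.Configuration;

// Illustration only: shows what the patched line relies on.
public class BlockSizeParseSketch {
  public static void main(String[] args) {
    Configuration config = new Configuration();
    config.set("dfs.blocksize", "64m"); // human-readable size syntax

    // getLongBytes handles binary suffixes (k, m, g, ...):
    // "64m" -> 64 * 1024 * 1024 = 67108864
    long blockSize = config.getLongBytes("dfs.blocksize", 134217728L);
    System.out.println(blockSize); // 67108864

    // A plain numeric value still parses the same way.
    config.set("dfs.blocksize", "67108864");
    System.out.println(config.getLongBytes("dfs.blocksize", 134217728L));
  }
}
{code}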
> RpcProgramNfs3: wrong parsing of dfs.blocksize
> ----------------------------------------------
>
> Key: HDFS-8001
> URL: https://issues.apache.org/jira/browse/HDFS-8001
> Project: Hadoop HDFS
> Issue Type: Bug
> Components: nfs
> Affects Versions: 2.5.2, 2.6.0
> Environment: any: Windows, Linux, etc.
> Reporter: Remi Catherinot
> Priority: Trivial
> Labels: easyfix
> Original Estimate: 2h
> Remaining Estimate: 2h
>
> org/apache/hadoop/hdfs/nfs/nfs3/RpcProgramNfs3.java uses Configuration.getLong
> to read the dfs.blocksize value, but it should use getLongBytes so that it can
> handle syntax like 64m rather than only pure numeric values. The DataNode code
> and others all use getLongBytes.
> It is line 187 in the source code.
> Detected on version 2.5.2; checked version 2.6.0, which still has the bug.
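To illustrate the failure mode described above (a hedged sketch, not from the report): getLong ends up in Long.parseLong, which cannot handle the m suffix, so reading a suffixed dfs.blocksize should fail with a NumberFormatException.
{code:java}
import org.apache.hadoop.conf.Configuration;

// Illustration only: demonstrates the symptom of the unpatched code.
public class GetLongFailureSketch {
  public static void main(String[] args) {
    Configuration config = new Configuration();
    config.set("dfs.blocksize", "64m");

    try {
      // Unpatched path: getLong parses plain (or hex) numbers only.
      config.getLong("dfs.blocksize", 134217728L);
    } catch (NumberFormatException e) {
      System.out.println("getLong rejects \"64m\": " + e.getMessage());
    }
  }
}
{code}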
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)