[
https://issues.apache.org/jira/browse/HDFS-6151?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Karthik Kambatla updated HDFS-6151:
-----------------------------------
Target Version/s: 2.6.0 (was: 2.5.0)
> HDFS should refuse to cache blocks >=2GB
> ----------------------------------------
>
> Key: HDFS-6151
> URL: https://issues.apache.org/jira/browse/HDFS-6151
> Project: Hadoop HDFS
> Issue Type: Bug
> Components: caching, datanode
> Affects Versions: 2.4.0
> Reporter: Andrew Wang
> Assignee: Andrew Wang
>
> If you try to cache a block that's >=2GB, the DN will silently fail to cache
> it, since {{MappedByteBuffer}} represents its size with a signed int, capping
> a mappable region at {{Integer.MAX_VALUE}} bytes. Blocks this large are rare,
> but we should at least log a warning or surface an error to the user.
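A minimal sketch of the kind of guard the description suggests, not the actual DataNode code: since {{FileChannel#map}} rejects any length greater than {{Integer.MAX_VALUE}}, the DN could check the block length up front and log instead of failing silently. The class and method names here are hypothetical.

```java
public class CacheSizeCheck {
    // MappedByteBuffer capacity is a signed int, so the largest mappable
    // region is Integer.MAX_VALUE bytes (2 GiB minus 1 byte).
    static final long MAX_MAPPABLE = Integer.MAX_VALUE;

    /** Hypothetical helper: true if the block fits in one mmap'd region. */
    static boolean canCacheBlock(long blockLength) {
        if (blockLength > MAX_MAPPABLE) {
            // Instead of silently dropping the cache directive, log it.
            System.err.println("Refusing to cache block of length " + blockLength
                + ": exceeds mmap limit of " + MAX_MAPPABLE + " bytes");
            return false;
        }
        return true;
    }

    public static void main(String[] args) {
        System.out.println(canCacheBlock(1L << 30)); // 1 GiB: cacheable
        System.out.println(canCacheBlock(1L << 31)); // 2 GiB: refused
    }
}
```

The check uses {{long}} arithmetic throughout so the comparison itself cannot overflow before the limit is applied.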
--
This message was sent by Atlassian JIRA
(v6.2#6252)