[
https://issues.apache.org/jira/browse/HADOOP-9196?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13556673#comment-13556673
]
Surenkumar Nihalani commented on HADOOP-9196:
---------------------------------------------
How are you extracting each byte? Pulling it out bit-by-bit from the BitSet
seems like an unnecessary waste of time.
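The comment above contrasts two ways of getting bytes out of a `java.util.BitSet`: assembling each byte bit-by-bit with repeated `get()` calls, versus a bulk conversion. A minimal sketch of both follows; the `byteAt` helper is hypothetical (it is not Hadoop's actual code, which uses its own bit vector rather than `java.util.BitSet`):

```java
import java.util.BitSet;

public class ByteFromBitSet {
    // Hypothetical helper: pack bits [8*i, 8*i + 8) of a BitSet into one byte,
    // least-significant bit first (the layout BitSet.toByteArray() also uses).
    static byte byteAt(BitSet bits, int byteIndex) {
        byte b = 0;
        for (int bit = 0; bit < 8; bit++) {
            if (bits.get(byteIndex * 8 + bit)) {
                b |= (byte) (1 << bit);
            }
        }
        return b;
    }

    public static void main(String[] args) {
        BitSet bits = new BitSet();
        bits.set(0); // bit 0 -> first byte is 0x01
        bits.set(9); // bit 9 -> second byte is 0x02
        System.out.println(byteAt(bits, 0)); // 1
        System.out.println(byteAt(bits, 1)); // 2

        // Since Java 7, BitSet.toByteArray() does the same conversion in bulk,
        // but note it allocates the entire byte array at once.
        byte[] all = bits.toByteArray();
        System.out.println(all[0] + " " + all[1]); // 1 2
    }
}
```

The bulk `toByteArray()` call avoids the per-bit method-call overhead the comment objects to, but it reintroduces the full-array allocation this issue is about, which is the tension behind the question.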
> Modify BloomFilter read() and write() to address memory concerns
> ----------------------------------------------------------------
>
> Key: HADOOP-9196
> URL: https://issues.apache.org/jira/browse/HADOOP-9196
> Project: Hadoop Common
> Issue Type: Improvement
> Reporter: James
> Priority: Minor
>
> It appears that org.apache.hadoop.util.bloom.BloomFilter's write() method
> creates a byte array large enough to hold the entire bit vector in memory
> during serialization. This allocation is unnecessary and may cause
> out-of-memory errors if the bit vector is sufficiently large and memory is
> tight.
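The improvement the issue asks for can be sketched as a streaming write that emits the bit vector one byte at a time, using constant extra memory instead of a full-size temporary array. This is an illustrative sketch only; `writeBits` and its signature are assumptions, not Hadoop's actual BloomFilter API, and `java.util.BitSet` stands in for Hadoop's internal bit vector:

```java
import java.io.ByteArrayOutputStream;
import java.io.DataOutput;
import java.io.DataOutputStream;
import java.io.IOException;
import java.util.BitSet;

public class StreamingBloomWrite {
    // Sketch: serialize a bit vector of vectorSize bits to a DataOutput
    // one byte at a time, so peak extra memory is O(1) rather than O(n).
    static void writeBits(BitSet bits, int vectorSize, DataOutput out)
            throws IOException {
        int numBytes = (vectorSize + 7) / 8; // round up to whole bytes
        for (int i = 0; i < numBytes; i++) {
            byte b = 0;
            for (int bit = 0; bit < 8; bit++) {
                int idx = i * 8 + bit;
                if (idx < vectorSize && bits.get(idx)) {
                    b |= (byte) (1 << bit);
                }
            }
            out.writeByte(b); // stream each byte; no full-size buffer
        }
    }

    public static void main(String[] args) throws IOException {
        BitSet bits = new BitSet(16);
        bits.set(0);  // -> first byte 0x01
        bits.set(15); // -> second byte 0x80
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        writeBits(bits, 16, new DataOutputStream(buf));
        System.out.println(buf.toByteArray().length); // 2
    }
}
```

A symmetric read() would consume one byte at a time from a DataInput and set the corresponding bits, so neither direction of serialization needs a buffer proportional to the vector size.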
--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators.
For more information on JIRA, see: http://www.atlassian.com/software/jira