[ https://issues.apache.org/jira/browse/HADOOP-9196?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13556737#comment-13556737 ]

Surenkumar Nihalani commented on HADOOP-9196:
---------------------------------------------

Correction: have a method that takes a DataOutput instead.

So it would look something like this:
 {code:title=HadoopBitSet.java|borderStyle=solid}
import java.io.DataOutput;
import java.io.IOException;
import java.util.BitSet;

public class HadoopBitSet extends BitSet {
    public void writeTo(DataOutput stream) throws IOException {
        // Write the bit vector a word at a time instead of building one
        // large byte[] first. java.util.BitSet keeps its word array private,
        // so toLongArray() stands in here for direct access to data[i].
        for (long word : toLongArray()) {
            stream.writeLong(word);
        }
    }
}
 {code}
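
For illustration, a rough sketch of how BloomFilter.write(DataOutput) could then delegate to this method rather than copying the whole bit vector into a byte[] first. This assumes the filter's bits field is switched from java.util.BitSet to the proposed HadoopBitSet; the names are suggestions, not existing API:

 {code:title=BloomFilter.java|borderStyle=solid}
@Override
public void write(DataOutput out) throws IOException {
    super.write(out);   // existing Filter header: version, nbHash, hashType, vectorSize
    bits.writeTo(out);  // stream the bit vector word by word, no full byte[] copy
}
 {code}

The read side could mirror this by filling the set word by word from the DataInput instead of reading one big byte[] up front.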
                
> Modify BloomFilter read() and write() to address memory concerns
> ----------------------------------------------------------------
>
>                 Key: HADOOP-9196
>                 URL: https://issues.apache.org/jira/browse/HADOOP-9196
>             Project: Hadoop Common
>          Issue Type: Improvement
>            Reporter: James
>            Priority: Minor
>
> It appears that org.apache.hadoop.util.bloom.BloomFilter's write() method 
> creates a byte array large enough to fit the entire bit vector into memory 
> during serialization.  This is unnecessary and may cause out of memory issues 
> if the bit vector is sufficiently large and memory is tight.   

