[
https://issues.apache.org/jira/browse/CASSANDRA-1555?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Ryan King updated CASSANDRA-1555:
---------------------------------
Attachment: CASSANDRA-1555v3.patch.gz
New patch with several changes based on Stu's feedback:
* renamed BloomFilter to LegacyBloomFilter and BigBloomFilter to BloomFilter
* moved maxBucketsPerElement to BloomCalculations
* removed emptybuckets
* cleaned up formatting in SSTableReader and BigBloomFilter
Finally, I changed the serialization to read and write the long[] directly,
which saves a lot of space for small filters (the column filter for a 10-item row
goes from 120 bytes to 16).
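To make the size win concrete, here is a minimal sketch of writing a filter's backing long[] with a simple length prefix instead of serializing a bit set object. This is illustrative only, not the patch itself; the class and method names are made up:
{code}
import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;

public class LongArraySerializer
{
    // Write a word count followed by the raw 64-bit words.
    public static void serialize(long[] words, DataOutput out) throws IOException
    {
        out.writeInt(words.length);
        for (long word : words)
            out.writeLong(word);
    }

    // Read the words back; the caller rebuilds the filter's bit set around them.
    public static long[] deserialize(DataInput in) throws IOException
    {
        long[] words = new long[in.readInt()];
        for (int i = 0; i < words.length; i++)
            words[i] = in.readLong();
        return words;
    }
}
{code}
A one- or two-word filter then costs only the length prefix plus 8 bytes per word, rather than the fixed overhead of a fully serialized bit set object.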
> Considerations for larger bloom filters
> ---------------------------------------
>
> Key: CASSANDRA-1555
> URL: https://issues.apache.org/jira/browse/CASSANDRA-1555
> Project: Cassandra
> Issue Type: Improvement
> Components: Core
> Reporter: Stu Hood
> Assignee: Ryan King
> Fix For: 0.8
>
> Attachments: cassandra-1555.tgz, CASSANDRA-1555v2.patch,
> CASSANDRA-1555v3.patch.gz
>
>
> To (optimally) support SSTables larger than 143 million keys, we need to
> support bloom filters larger than 2^31 bits, which java.util.BitSet can't
> handle directly.
> A few options:
> * Switch to a BitSet class which supports 2^31 * 64 bits (Lucene's OpenBitSet)
> * Partition the java.util.BitSet behind our current BloomFilter
> ** Straightforward bit partitioning: bit N is in bitset N // 2^31 (see the sketch below)
> ** Separate equally sized complete bloom filters for member ranges, which can
> be used independently or OR'd together under memory pressure.
> All of these options require new approaches to serialization.
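> As an illustration of the straightforward-partitioning option above (a sketch under assumed names, not committed code), a long bit index can be mapped onto an array of java.util.BitSet pages of 2^31 bits each:
> {code}
> import java.util.BitSet;
>
> // Illustrative only: a long-indexed bit set backed by pages of 2^31 bits,
> // so bit N lives in page N / 2^31 at offset N % 2^31.
> public class PartitionedBitSet
> {
>     private static final long PAGE_BITS = 1L << 31;
>
>     private final BitSet[] pages;
>
>     public PartitionedBitSet(long numBits)
>     {
>         int pageCount = (int) ((numBits + PAGE_BITS - 1) / PAGE_BITS);
>         pages = new BitSet[pageCount];
>         for (int i = 0; i < pageCount; i++)
>             pages[i] = new BitSet();
>     }
>
>     public void set(long index)
>     {
>         pages[(int) (index / PAGE_BITS)].set((int) (index % PAGE_BITS));
>     }
>
>     public boolean get(long index)
>     {
>         return pages[(int) (index / PAGE_BITS)].get((int) (index % PAGE_BITS));
>     }
> }
> {code}
> Each page stays within java.util.BitSet's int-indexed limit, and the pages could be serialized one after another.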
--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.