ADARSH KUMAR created CASSANDRA-11920:
----------------------------------------

             Summary: Not able to set bloom_filter_fp_chance to .00001
                 Key: CASSANDRA-11920
                 URL: https://issues.apache.org/jira/browse/CASSANDRA-11920
             Project: Cassandra
          Issue Type: Bug
          Components: Lifecycle, Local Write-Read Paths
            Reporter: ADARSH KUMAR


Hi,

I was doing some benchmarking on bloom_filter_fp_chance values. Everything 
worked fine for the values .01 (the default for STCS), .001 and .0001. But 
when I set bloom_filter_fp_chance = .00001, I observed the following 
behaviour:

1). Reads and writes looked normal from cqlsh.
2). SSTables are never created.
3). It just creates two files (*-Data.db and *-Index.db) of size 0 KB.
4). nodetool flush does not work and produces the following exception:

java.lang.UnsupportedOperationException: Unable to satisfy 1.0E-5 with 20 buckets per element
        at org.apache.cassandra.utils.BloomCalculations.computeBloomSpec(BloomCalculations.java:150)
        .....
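
For reference, the failure can be reproduced directly against the class in 
the stack trace. A minimal sketch, assuming 
computeBloomSpec(int maxBucketsPerElement, double maxFalsePosProb) is 
accessible as it appears at BloomCalculations.java:150 (the class and the 
two values come from the trace and the table property above; everything 
else is illustrative):

    import org.apache.cassandra.utils.BloomCalculations;

    public class BloomSpecRepro {
        public static void main(String[] args) {
            try {
                // 20 buckets per element and 1.0E-5 are the values from
                // the exception message above.
                BloomCalculations.computeBloomSpec(20, 0.00001);
            } catch (UnsupportedOperationException e) {
                // Expected: "Unable to satisfy 1.0E-5 with 20 buckets per element"
                System.out.println(e.getMessage());
            }
        }
    }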


I checked the BloomCalculations class, and the following lines are 
responsible for this exception:

if (maxFalsePosProb < probs[maxBucketsPerElement][maxK]) {
    throw new UnsupportedOperationException(String.format("Unable to satisfy %s with %s buckets per element",
                                                          maxFalsePosProb, maxBucketsPerElement));
}
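
The limit itself is not arbitrary. With m/n buckets per element, the 
optimal number of hash functions is k = (m/n) * ln 2, and the best 
achievable false positive rate is (1/2)^k = 0.6185^(m/n); for 20 buckets 
per element that is roughly 6.7e-5, which is already above 1.0E-5. A 
minimal sketch of that calculation (plain Java, independent of the 
Cassandra classes):

    public class MinBloomFpChance {
        public static void main(String[] args) {
            int bucketsPerElement = 20;  // the m/n cap from the exception above
            double optimalK = bucketsPerElement * Math.log(2);  // ~13.86 hash functions
            double minFpChance = Math.pow(0.5, optimalK);       // (1/2)^k, ~6.71e-5

            System.out.printf("best fp chance with %d buckets/element: %.3e%n",
                              bucketsPerElement, minFpChance);
            System.out.println("is 1.0E-5 achievable? " + (1.0e-5 >= minFpChance)); // false
        }
    }

So any bloom_filter_fp_chance below roughly 6.7e-5 can never be satisfied 
under a cap of 20 buckets per element.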


From the code it looks like a hard-coded validation (unless we can change 
the number of buckets).
So, if this validation is hard-coded, then why is it even allowed to set a 
value of bloom_filter_fp_chance that can prevent SSTable generation?
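
A schema-time check could reject such values up front instead of failing 
later during flush. A hypothetical sketch of the kind of validation being 
asked for (validateBloomFilterFpChance and the hard-coded floor are 
illustrative, not Cassandra's actual API; the real bound should come from 
BloomCalculations' probs table):

    public final class BloomFilterOptionCheck {
        // Illustrative floor: the best fp chance reachable with 20 buckets
        // per element (see the calculation above).
        private static final double MIN_SUPPORTED_FP_CHANCE =
                Math.pow(0.5, 20 * Math.log(2)); // ~6.71e-5

        // Hypothetical hook: reject impossible values at CREATE/ALTER TABLE
        // time, instead of failing later in nodetool flush.
        public static void validateBloomFilterFpChance(double fpChance) {
            if (fpChance <= 0 || fpChance > 1)
                throw new IllegalArgumentException(
                        "bloom_filter_fp_chance must be in (0, 1]: " + fpChance);
            if (fpChance < MIN_SUPPORTED_FP_CHANCE)
                throw new IllegalArgumentException(String.format(
                        "bloom_filter_fp_chance %s is below the minimum supported value %s",
                        fpChance, MIN_SUPPORTED_FP_CHANCE));
        }
    }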

Please correct this issue.



