[jira] [Updated] (CASSANDRA-11920) bloom_filter_fp_chance needs to be validated up front
[ https://issues.apache.org/jira/browse/CASSANDRA-11920?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Tyler Hobbs updated CASSANDRA-11920:
        Resolution: Fixed
     Fix Version/s: 3.0.8
                    3.8
                    2.2.7
            Status: Resolved  (was: Patch Available)

The tests look good, so +1, committed as {{9e85e85bf259cc7839226a7c93475505d262946a}} to 2.2 and merged up to 3.0 and trunk. I don't think any documentation change is required here. There has always been a minimum supported bloom filter FP ratio; we just failed to enforce it at the right point. Thanks again for the patch!

> bloom_filter_fp_chance needs to be validated up front
> -----------------------------------------------------
>
>                 Key: CASSANDRA-11920
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-11920
>             Project: Cassandra
>          Issue Type: Bug
>          Components: Lifecycle, Local Write-Read Paths
>            Reporter: ADARSH KUMAR
>            Assignee: Arindam Gupta
>          Priority: Minor
>            Labels: lhf
>           Fix For: 2.2.7, 3.8, 3.0.8
>
>       Attachments: 11920-3.0.txt
>
> Hi,
>
> I was doing some benchmarking of bloom_filter_fp_chance values. Everything worked fine for the values .01 (the default for STCS), .001, and .0001. But when I set bloom_filter_fp_chance = .1, I observed the following behaviour:
>
> 1) Reads and writes looked normal from cqlsh.
> 2) SSTables are never created.
> 3) It just creates two files (*-Data.db and *-Index.db) of size 0 KB.
> 4) nodetool flush does not work and produces the following exception:
>
> java.lang.UnsupportedOperationException: Unable to satisfy 1.0E-5 with 20 buckets per element
>     at org.apache.cassandra.utils.BloomCalculations.computeBloomSpec(BloomCalculations.java:150)
>     ...
>
> I checked the BloomCalculations class, and the following lines are responsible for this exception:
>
> if (maxFalsePosProb < probs[maxBucketsPerElement][maxK]) {
>     throw new UnsupportedOperationException(String.format("Unable to satisfy %s with %s buckets per element",
>                                                           maxFalsePosProb, maxBucketsPerElement));
> }
>
> From the code this looks like a hard-coded validation (unless we can change the number of buckets). So if this validation is hard-coded, why is it even allowed to set a value of bloom_filter_fp_chance that can prevent SSTable generation?
>
> Please correct this issue.

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
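The report boils down to a limit inherent in the Bloom filter math: with at most 20 buckets per element, there is a smallest false-positive probability any filter can achieve, and asking for less than that cannot be satisfied. The fix moves that check to schema-definition time. The sketch below illustrates the idea with the standard Bloom filter formula p = (1 - e^(-k/b))^k; the class and method names are illustrative only, not Cassandra's actual API, and the constants are assumptions based on the error message in the report.

```java
// Illustrative sketch of up-front bloom_filter_fp_chance validation.
// Names and limits are hypothetical, not Cassandra's real implementation.
public class BloomFpValidation {
    static final int MAX_BUCKETS_PER_ELEMENT = 20; // per the ticket's error message
    static final int MAX_K = 20;                   // assumed cap on hash-function count

    // False-positive probability of a Bloom filter with b buckets per
    // element and k hash functions: (1 - e^(-k/b))^k
    static double fpChance(int bucketsPerElement, int k) {
        return Math.pow(1.0 - Math.exp(-(double) k / bucketsPerElement), k);
    }

    // Smallest FP chance achievable within the configured limits
    // (roughly 6.7e-5 for 20 buckets per element).
    static double minSupportedFpChance() {
        double min = 1.0;
        for (int k = 1; k <= MAX_K; k++)
            min = Math.min(min, fpChance(MAX_BUCKETS_PER_ELEMENT, k));
        return min;
    }

    // Up-front check, suitable for CREATE/ALTER TABLE time: reject values
    // the flush path could never satisfy, instead of failing later.
    static void validate(double requested) {
        double min = minSupportedFpChance();
        if (requested < min || requested > 1.0)
            throw new IllegalArgumentException(String.format(
                "bloom_filter_fp_chance must be in [%s, 1.0]: got %s", min, requested));
    }

    public static void main(String[] args) {
        validate(0.01); // fine: well above the minimum
        try {
            validate(1.0e-5); // too strict for 20 buckets per element
        } catch (IllegalArgumentException e) {
            System.out.println(e.getMessage());
        }
    }
}
```

Under these assumptions, 1.0E-5 (the value in the reported exception) falls below the achievable minimum, so rejecting it at schema time avoids the zero-size SSTable symptom entirely.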
[jira] [Updated] (CASSANDRA-11920) bloom_filter_fp_chance needs to be validated up front
[ https://issues.apache.org/jira/browse/CASSANDRA-11920?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Tyler Hobbs updated CASSANDRA-11920:
    Priority: Minor  (was: Major)
[jira] [Updated] (CASSANDRA-11920) bloom_filter_fp_chance needs to be validated up front
[ https://issues.apache.org/jira/browse/CASSANDRA-11920?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Tyler Hobbs updated CASSANDRA-11920:
    Assignee: Arindam Gupta
[jira] [Updated] (CASSANDRA-11920) bloom_filter_fp_chance needs to be validated up front
[ https://issues.apache.org/jira/browse/CASSANDRA-11920?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Arindam Gupta updated CASSANDRA-11920:
         Reviewer: Tyler Hobbs
    Reproduced In: 3.0.3
           Tester: ADARSH KUMAR
           Status: Patch Available  (was: Open)

Attached the patch with the fix. I have not added any unit tests, as I could not find corresponding unit test files for these classes; however, I tested the change with cqlsh.
[jira] [Updated] (CASSANDRA-11920) bloom_filter_fp_chance needs to be validated up front
[ https://issues.apache.org/jira/browse/CASSANDRA-11920?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Arindam Gupta updated CASSANDRA-11920:
    Attachment: 11920-3.0.txt

Patch fix for 11920.
[jira] [Updated] (CASSANDRA-11920) bloom_filter_fp_chance needs to be validated up front
[ https://issues.apache.org/jira/browse/CASSANDRA-11920?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Aleksey Yeschenko updated CASSANDRA-11920:
    Labels: lhf  (was: )
[jira] [Updated] (CASSANDRA-11920) bloom_filter_fp_chance needs to be validated up front
[ https://issues.apache.org/jira/browse/CASSANDRA-11920?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Tyler Hobbs updated CASSANDRA-11920:
    Summary: bloom_filter_fp_chance needs to be validated up front  (was: Not able to set bloom_filter_fp_chance as .1)