[jira] [Commented] (CASSANDRA-11920) bloom_filter_fp_chance needs to be validated up front

2016-06-19 Thread Arindam Gupta (JIRA)

[ https://issues.apache.org/jira/browse/CASSANDRA-11920?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15339015#comment-15339015 ]

Arindam Gupta commented on CASSANDRA-11920:
---

Thanks Tyler Hobbs for your help. Please let me know if any further actions are 
required from my side. 
One additional question: do we also need to update the documentation to reflect 
this change?

> bloom_filter_fp_chance needs to be validated up front
> -
>
> Key: CASSANDRA-11920
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11920
> Project: Cassandra
>  Issue Type: Bug
>  Components: Lifecycle, Local Write-Read Paths
>Reporter: ADARSH KUMAR
>Assignee: Arindam Gupta
>Priority: Minor
>  Labels: lhf
> Attachments: 11920-3.0.txt
>
>
> Hi,
> I was doing some benchmarking on bloom_filter_fp_chance values. Everything 
> worked fine for the values .01 (the default for STCS), .001, and .0001. But when I set 
> bloom_filter_fp_chance = .00001 I observed the following behaviour:
> 1) Reads and writes looked normal from cqlsh.
> 2) SSTables are never created.
> 3) It just creates two files (*-Data.db and *-Index.db) of size 0 KB.
> 4) nodetool flush does not work and produces the following exception:
> java.lang.UnsupportedOperationException: Unable to satisfy 1.0E-5 with 20 
> buckets per element
> at org.apache.cassandra.utils.BloomCalculations.computeBloomSpec(BloomCalculations.java:150)
> I checked the BloomCalculations class; the following lines are responsible for 
> this exception:
> if (maxFalsePosProb < probs[maxBucketsPerElement][maxK]) {
>     throw new UnsupportedOperationException(String.format("Unable to satisfy %s with %s buckets per element",
>                                                           maxFalsePosProb, maxBucketsPerElement));
> }
> From the code it looks like a hard-coded validation (unless we can change 
> the number of buckets).
> So, if this validation is hard-coded, why is it even allowed to set a value of 
> bloom_filter_fp_chance that can prevent SSTable generation?
> Please correct this issue.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-11920) bloom_filter_fp_chance needs to be validated up front

2016-06-17 Thread Tyler Hobbs (JIRA)

[ https://issues.apache.org/jira/browse/CASSANDRA-11920?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15336970#comment-15336970 ]

Tyler Hobbs commented on CASSANDRA-11920:
-

Thanks for the patch!

I've created a new unit test to exercise this.  I also backported the patch to 
2.2, since that's affected as well.

Here are the patches and pending CI test runs:
||branch||testall||dtest||
|[CASSANDRA-11920-2.2|https://github.com/thobbs/cassandra/tree/CASSANDRA-11920-2.2]|[testall|http://cassci.datastax.com/view/Dev/view/thobbs/job/thobbs-CASSANDRA-11920-2.2-testall]|[dtest|http://cassci.datastax.com/view/Dev/view/thobbs/job/thobbs-CASSANDRA-11920-2.2-dtest]|
|[CASSANDRA-11920-3.0|https://github.com/thobbs/cassandra/tree/CASSANDRA-11920-3.0]|[testall|http://cassci.datastax.com/view/Dev/view/thobbs/job/thobbs-CASSANDRA-11920-3.0-testall]|[dtest|http://cassci.datastax.com/view/Dev/view/thobbs/job/thobbs-CASSANDRA-11920-3.0-dtest]|
|[CASSANDRA-11920-trunk|https://github.com/thobbs/cassandra/tree/CASSANDRA-11920-trunk]|[testall|http://cassci.datastax.com/view/Dev/view/thobbs/job/thobbs-CASSANDRA-11920-trunk-testall]|[dtest|http://cassci.datastax.com/view/Dev/view/thobbs/job/thobbs-CASSANDRA-11920-trunk-dtest]|

If the tests look good, I'll commit this.



[jira] [Commented] (CASSANDRA-11920) bloom_filter_fp_chance needs to be validated up front

2016-06-17 Thread Arindam Gupta (JIRA)

[ https://issues.apache.org/jira/browse/CASSANDRA-11920?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15335836#comment-15335836 ]

Arindam Gupta commented on CASSANDRA-11920:
---

Please review the patch.



[jira] [Commented] (CASSANDRA-11920) bloom_filter_fp_chance needs to be validated up front

2016-06-14 Thread Tyler Hobbs (JIRA)

[ https://issues.apache.org/jira/browse/CASSANDRA-11920?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15329879#comment-15329879 ]

Tyler Hobbs commented on CASSANDRA-11920:
-

bq. I feel validation should go into TableParams validate() method

Yes, that's the correct place for this.

bq. BloomCalculations class is having default access modifier, so not 
accessible from TableParams

Feel free to make classes, methods, or fields public as needed to support this.
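
For illustration, a minimal sketch of what an up-front check in the spirit of TableParams validate() might look like. The class, method, and constant names here are assumed for the example, not Cassandra's actual API; in the real patch the minimum bound would come from BloomCalculations rather than a hard-coded literal:

```java
// Hypothetical sketch (names assumed, not Cassandra's actual API) of validating
// bloom_filter_fp_chance at schema-change time instead of failing during flush.
public class TableParamsSketch {
    // Assumed constant for this sketch; in Cassandra this bound would be exposed
    // by BloomCalculations (the smallest fp chance satisfiable with the maximum
    // number of buckets per element).
    static final double MIN_SUPPORTED_FP_CHANCE = 6.71e-5;

    // Rejects an unsatisfiable bloom_filter_fp_chance up front, so the schema
    // change fails immediately rather than breaking memtable flush later.
    static void validateBloomFilterFpChance(double fpChance) {
        if (fpChance < MIN_SUPPORTED_FP_CHANCE || fpChance > 1.0)
            throw new IllegalArgumentException(String.format(
                "bloom_filter_fp_chance must be in [%s, 1.0], got %s",
                MIN_SUPPORTED_FP_CHANCE, fpChance));
    }

    public static void main(String[] args) {
        validateBloomFilterFpChance(0.01); // default for STCS: accepted
        boolean rejected = false;
        try {
            validateBloomFilterFpChance(1.0e-5); // too low: rejected up front
        } catch (IllegalArgumentException e) {
            rejected = true;
        }
        System.out.println("1.0E-5 rejected up front: " + rejected);
    }
}
```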



[jira] [Commented] (CASSANDRA-11920) bloom_filter_fp_chance needs to be validated up front

2016-06-09 Thread Arindam Gupta (JIRA)

[ https://issues.apache.org/jira/browse/CASSANDRA-11920?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15322471#comment-15322471 ]

Arindam Gupta commented on CASSANDRA-11920:
---

I am looking into this issue. I am very new to C*; this is the first issue I am 
working on, so my knowledge of the codebase is very limited. Following Tyler's 
comment, I feel the validation should go into the TableParams validate() method, 
but the BloomCalculations class has default (package-private) access, so it is 
not accessible from TableParams. Is there any other place where the minimum 
supported value of bloom_filter_fp_chance is kept?



[jira] [Commented] (CASSANDRA-11920) bloom_filter_fp_chance needs to be validated up front

2016-05-31 Thread Tyler Hobbs (JIRA)

[ https://issues.apache.org/jira/browse/CASSANDRA-11920?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15308150#comment-15308150 ]

Tyler Hobbs commented on CASSANDRA-11920:
-

To clarify, what we really need to do here is check the 
{{bloom_filter_fp_chance}} against our minimum supported value (found in 
{{BloomCalculations}}) as part of the schema change validation, and reject the 
query if it's too low.
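
As background for why a value like 1.0E-5 is unsatisfiable: with the standard Bloom filter approximation p(k, c) = (1 - e^(-k/c))^k, where c is buckets (bits) per element and k the number of hash functions, the smallest reachable false-positive probability with c = 20 is on the order of 7e-5. The sketch below (illustrative only, not Cassandra's BloomCalculations code, which uses a precomputed probs table) computes that bound:

```java
// Illustrative sketch: estimate the smallest false-positive probability a Bloom
// filter can reach with a fixed number of buckets (bits) per element, using the
// standard approximation p(k, c) = (1 - e^(-k/c))^k.
public class BloomFpBound {
    // Smallest achievable fp probability with c buckets per element,
    // minimising over the number of hash functions k.
    static double minFalsePositive(int bucketsPerElement) {
        double best = 1.0;
        for (int k = 1; k <= bucketsPerElement; k++) {
            double p = Math.pow(1 - Math.exp(-(double) k / bucketsPerElement), k);
            best = Math.min(best, p);
        }
        return best;
    }

    public static void main(String[] args) {
        // 20 is the maximum buckets per element in BloomCalculations.
        double min20 = minFalsePositive(20);
        System.out.printf("min fp chance with 20 buckets/element ~= %.2e%n", min20);
        // A requested chance of 1.0E-5 is below this bound, which is why
        // computeBloomSpec throws UnsupportedOperationException.
    }
}
```

This is the bound a schema-time validation would compare bloom_filter_fp_chance against.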
