[jira] [Commented] (CASSANDRA-11978) StreamReader fails to write sstable if CF directory is symlink

2016-07-11 Thread Arindam Gupta (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11978?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15371171#comment-15371171
 ] 

Arindam Gupta commented on CASSANDRA-11978:
---

OK, thanks. Just wondering: can this be reproduced using CCM? I do not 
currently have a real cluster running to reproduce it.

> StreamReader fails to write sstable if CF directory is symlink
> --
>
> Key: CASSANDRA-11978
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11978
> Project: Cassandra
>  Issue Type: Bug
>  Components: Streaming and Messaging
>Reporter: Michael Frisch
>  Labels: lhf
>
> I'm using Cassandra v2.2.6. If the CF is stored as a symlink in the keyspace 
> directory on disk, then StreamReader.createWriter fails because 
> Descriptor.fromFilename is passed the actual path on disk instead of the 
> path with the symlink.
> Example:
> /path/to/data/dir/Keyspace/CFName -> /path/to/data/dir/AnotherDisk/CFName
> Descriptor.fromFilename is passed "/path/to/data/dir/AnotherDisk/CFName" 
> instead of "/path/to/data/dir/Keyspace/CFName", so it concludes that the 
> keyspace name is "AnotherDisk", which is erroneous. I've temporarily worked 
> around this by using cfs.keyspace.getName() to get the keyspace name and 
> cfs.name to get the CF name, as those are correct.
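For illustration, a minimal self-contained java.nio sketch of the mismatch
described above. The file name and the grandparent-directory parsing rule are
assumptions made for the example, not the actual Descriptor.fromFilename logic,
and the paths must exist on disk for toRealPath() to succeed:

    import java.nio.file.Path;
    import java.nio.file.Paths;

    public class SymlinkNameParsing
    {
        // Assume sstables live at <data>/<keyspace>/<cf>/<file> and a parser
        // takes the grandparent directory name as the keyspace name.
        static String keyspaceOf(Path sstable)
        {
            return sstable.getParent().getParent().getFileName().toString();
        }

        public static void main(String[] args) throws Exception
        {
            Path viaLink = Paths.get("/path/to/data/dir/Keyspace/CFName/example-Data.db");
            System.out.println(keyspaceOf(viaLink));              // "Keyspace"
            // If Keyspace/CFName is a symlink to AnotherDisk/CFName, resolving
            // the link first yields the wrong keyspace name, which is the
            // mismatch the reporter works around with cfs.keyspace.getName()
            // and cfs.name.
            System.out.println(keyspaceOf(viaLink.toRealPath())); // "AnotherDisk"
        }
    }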





[jira] [Commented] (CASSANDRA-11978) StreamReader fails to write sstable if CF directory is symlink

2016-07-08 Thread Arindam Gupta (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11978?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15367875#comment-15367875
 ] 

Arindam Gupta commented on CASSANDRA-11978:
---

So is this happening when you bootstrap a new node into a cluster, and also 
when you run "nodetool repair"?



[jira] [Commented] (CASSANDRA-11978) StreamReader fails to write sstable if CF directory is symlink

2016-07-08 Thread Arindam Gupta (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11978?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15367510#comment-15367510
 ] 

Arindam Gupta commented on CASSANDRA-11978:
---

Can you please provide some more information so I can proceed? I need to 
clarify the following points:

1) In your case the soft link name is "/path/to/data/dir/AnotherDisk/CFName" and 
the actual path, i.e. the target path, is "/path/to/data/dir/Keyspace/CFName", 
am I right?

2) Have you made any changes in cassandra.yaml or other config files for this 
scenario?

3) If I execute the "nodetool flush" command after inserting some data into a 
table, will I get this error immediately during the nodetool command execution?





[jira] [Commented] (CASSANDRA-11978) StreamReader fails to write sstable if CF directory is symlink

2016-07-05 Thread Arindam Gupta (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11978?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15362378#comment-15362378
 ] 

Arindam Gupta commented on CASSANDRA-11978:
---

Thanks, let me take a look.



[jira] [Commented] (CASSANDRA-11978) StreamReader fails to write sstable if CF directory is symlink

2016-06-28 Thread Arindam Gupta (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11978?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15353050#comment-15353050
 ] 

Arindam Gupta commented on CASSANDRA-11978:
---

Hi Michael Frisch, did you get a failure stack trace when this occurred? If 
you have one, can you share it?
Regards,
Arindam



[jira] [Commented] (CASSANDRA-11920) bloom_filter_fp_chance needs to be validated up front

2016-06-19 Thread Arindam Gupta (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11920?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15339015#comment-15339015
 ] 

Arindam Gupta commented on CASSANDRA-11920:
---

Thanks, Tyler Hobbs, for your help. Please let me know if any further action 
is required from my side.
One additional question: do we also need to update the documentation to 
reflect this change?

> bloom_filter_fp_chance needs to be validated up front
> -
>
> Key: CASSANDRA-11920
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11920
> Project: Cassandra
>  Issue Type: Bug
>  Components: Lifecycle, Local Write-Read Paths
>Reporter: ADARSH KUMAR
>Assignee: Arindam Gupta
>Priority: Minor
>  Labels: lhf
> Attachments: 11920-3.0.txt
>
>
> Hi,
> I was doing some benchmarking on bloom_filter_fp_chance values. Everything 
> worked fine for the values .01 (the default for STCS), .001, and .0001. But 
> when I set bloom_filter_fp_chance = .00001 I observed the following behaviour:
> 1) Reads and writes looked normal from cqlsh.
> 2) SSTables are never created.
> 3) It just creates two files (*-Data.db and *-Index.db) of size 0 KB.
> 4) nodetool flush does not work and produces the following exception:
> java.lang.UnsupportedOperationException: Unable to satisfy 1.0E-5 with 20 
> buckets per element
> at 
> org.apache.cassandra.utils.BloomCalculations.computeBloomSpec(BloomCalculations.java:150)
> I checked the BloomCalculations class, and the following lines are 
> responsible for this exception:
> if (maxFalsePosProb < probs[maxBucketsPerElement][maxK]) {
>     throw new UnsupportedOperationException(
>         String.format("Unable to satisfy %s with %s buckets per element",
>                       maxFalsePosProb, maxBucketsPerElement));
> }
> From the code it looks like a hard-coded validation (unless we can change 
> the number of buckets).
> So, if this validation is hard-coded, why is it even allowed to set a 
> value of bloom_filter_fp_chance that can prevent SSTable generation?
> Please correct this issue.
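Since the title asks for up-front validation, here is a minimal standalone
sketch of that idea. The class is hypothetical rather than the committed patch,
and the lower bound below is an assumed placeholder, not a value read from
BloomCalculations' probability table:

    // Reject an unsatisfiable bloom_filter_fp_chance at schema-change time,
    // so the failure surfaces immediately instead of at flush time.
    public final class BloomFpChanceCheck
    {
        // Assumed placeholder for the smallest false-positive probability the
        // bucket table can satisfy (probs[maxBucketsPerElement][maxK]).
        static final double MIN_SUPPORTED_FP_CHANCE = 6.71e-5;

        static void validateBloomFilterFpChance(double fpChance)
        {
            if (fpChance < MIN_SUPPORTED_FP_CHANCE || fpChance > 1.0)
                throw new IllegalArgumentException(String.format(
                    "bloom_filter_fp_chance must be in [%s, 1.0]: got %s",
                    MIN_SUPPORTED_FP_CHANCE, fpChance));
        }

        public static void main(String[] args)
        {
            validateBloomFilterFpChance(0.01);    // fine: the STCS default
            validateBloomFilterFpChance(0.00001); // throws: below the minimum
        }
    }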





[jira] [Comment Edited] (CASSANDRA-11920) bloom_filter_fp_chance needs to be validated up front

2016-06-17 Thread Arindam Gupta (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11920?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15335833#comment-15335833
 ] 

Arindam Gupta edited comment on CASSANDRA-11920 at 6/17/16 12:57 PM:
-

Attached a patch with the fix. I have not added any unit tests, as I could not 
find corresponding unit-test files for these classes; however, I tested it 
with cqlsh. Please review.


was (Author: arindamg):
attached the patch with the fix. I have not added any unit tests, as I could 
not find corresponding unit-test files for these classes; however, I tested it 
with cqlsh.



[jira] [Issue Comment Deleted] (CASSANDRA-11920) bloom_filter_fp_chance needs to be validated up front

2016-06-17 Thread Arindam Gupta (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11920?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arindam Gupta updated CASSANDRA-11920:
--
Comment: was deleted

(was: patch fix for 11920)



[jira] [Issue Comment Deleted] (CASSANDRA-11920) bloom_filter_fp_chance needs to be validated up front

2016-06-17 Thread Arindam Gupta (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11920?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arindam Gupta updated CASSANDRA-11920:
--
Comment: was deleted

(was: please review the patch.)



[jira] [Commented] (CASSANDRA-11920) bloom_filter_fp_chance needs to be validated up front

2016-06-17 Thread Arindam Gupta (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11920?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15335836#comment-15335836
 ] 

Arindam Gupta commented on CASSANDRA-11920:
---

Please review the patch.



[jira] [Updated] (CASSANDRA-11920) bloom_filter_fp_chance needs to be validated up front

2016-06-17 Thread Arindam Gupta (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11920?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arindam Gupta updated CASSANDRA-11920:
--
     Reviewer: Tyler Hobbs
Reproduced In: 3.0.3
       Tester: ADARSH KUMAR
       Status: Patch Available  (was: Open)

Attached the patch with the fix. I have not added any unit tests, as I could 
not find corresponding unit-test files for these classes; however, I tested it 
with cqlsh.



[jira] [Updated] (CASSANDRA-11920) bloom_filter_fp_chance needs to be validated up front

2016-06-17 Thread Arindam Gupta (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11920?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arindam Gupta updated CASSANDRA-11920:
--
Attachment: 11920-3.0.txt

patch fix for 11920



[jira] [Commented] (CASSANDRA-11920) bloom_filter_fp_chance needs to be validated up front

2016-06-09 Thread Arindam Gupta (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11920?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15322471#comment-15322471
 ] 

Arindam Gupta commented on CASSANDRA-11920:
---

I am looking into this issue. I am very new to C*, as this is the first issue 
I am looking at, so my knowledge of the codebase is very limited. Following 
Tyler's comment, I feel the validation should go into TableParams' validate() 
method, but the BloomCalculations class has the default access modifier, so it 
is not accessible from TableParams. Is there any other place where the minimum 
supported value of bloom_filter_fp_chance is kept?
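One possible way around the access-modifier problem raised above, sketched
under the assumption that widening the utils package's public API is
acceptable; the class and constant names are hypothetical, not the committed
fix:

    package org.apache.cassandra.utils;

    // Publish the minimum satisfiable fp chance from the package that owns
    // the package-private BloomCalculations, so schema code such as
    // TableParams.validate() can check bloom_filter_fp_chance without
    // needing access to BloomCalculations itself.
    public final class BloomFilterLimits
    {
        // Placeholder for probs[maxBucketsPerElement][maxK], the smallest
        // false-positive probability the bucket table can satisfy.
        public static final double MIN_SUPPORTED_FP_CHANCE = 6.71e-5; // assumed

        private BloomFilterLimits() {}
    }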




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)