[jira] [Commented] (CASSANDRA-12431) Getting null value for the field that has value when query result has many rows

2016-09-13 Thread Fei Fang (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12431?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15488949#comment-15488949
 ] 

Fei Fang commented on CASSANDRA-12431:
--

The guid means id (sorry, I renamed the schema but not the query).

I understand that I can insert null into the table. The issue is that when I do 
{code}select * from email_histogram where id='1';{code}, I get 20k rows back 
and the log shows score is null for email '2'. But when I then do {code}select * from 
email_histogram where id='1' and email='2';{code}, I do get a float number 
back.

At week 1, I do:
{code}
insert into email_histogram (id, email, score) values ('1','8', 2.1);
insert into email_histogram (id, email, score) values ('1','3', 3.1);
{code}
At week 3, I might do:
{code}
insert into email_histogram (id, email, score) values ('1','8', 2.3);
insert into email_histogram (id, email, score) values ('1','3', 3.3);
{code}

The emails between week 1 and week 3 mostly overlap, but there might be some 
emails in week 3 only or some in week 1 only.

So we don't tombstone the entire partition, only the columns of some 
clustering keys.

What do you mean "Is the partition only written once and never used again"?

Once a bad partition is found, it doesn't continue displaying the odd behavior. 
It seems random.

I can try reading at Quorum and writing at ALL. Do you recommend tracing on a 
staging server? I have tried tracing locally, but I'm not sure the server can 
handle that much logging.
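
Roughly what I'd run from cqlsh to test that (just a sketch, reusing the example key from above):
{noformat}
cqlsh> CONSISTENCY QUORUM
cqlsh> TRACING ON
cqlsh> select * from email_histogram where id='1';
cqlsh> select * from email_histogram where id='1' and email='2';
{noformat}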

I don't think un-repaired partitions could be the cause unless a single write 
could *partially* succeed; in other words, we insert values for ALL columns 
in each write. If the email is there, I expect the score to have some value. We 
never insert a null value for score. Is it possible for Cassandra to write the email 
but not the score from one write query?
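
Next time it happens I can also compare the write timestamps of the two cells for an affected row; something like this (just a sketch with the example key from above):
{code}
select email, score, writetime(score) from email_histogram where id='1' and email='2';
{code}
If the score cell were genuinely missing on the replica that answered, writetime(score) should come back null there.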

> Getting null value for the field that has value when query result has many 
> rows
> ---
>
> Key: CASSANDRA-12431
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12431
> Project: Cassandra
> Issue Type: Bug
> Reporter: Fei Fang
> Assignee: Edward Capriolo
> Fix For: 2.2.x
>
>
> Hi,
> We get a null value (not an older value, but null) for a float column (score) 
> in a query that returns 20k rows. However, when we fetch that specific 
> row directly, the column actually has a value.
> The table schema is like this:
> {code}
> CREATE TABLE IF NOT EXISTS email_histogram (
> id text,
> email text,
> score float,
> PRIMARY KEY (id, email)
> ) WITH bloom_filter_fp_chance = 0.01
> AND caching = 'KEYS_ONLY'
> AND comment = ''
> AND compaction =
> {'tombstone_threshold': '0.1', 'tombstone_compaction_interval': '300', 
> 'class': 'org.apache.cassandra.db.compaction.LeveledCompactionStrategy'}
> AND compression =
> {'sstable_compression': 'org.apache.cassandra.io.compress.SnappyCompressor'}
> AND dclocal_read_repair_chance = 0.1
> AND default_time_to_live = 864000
> AND gc_grace_seconds = 86400
> AND memtable_flush_period_in_ms = 0
> AND read_repair_chance = 0.0
> AND speculative_retry = '99.0PERCENTILE';
> {code}
> This is my read query: SELECT * FROM " + TABLE_NAME + " WHERE guid = ?
> I'm using consistency One when querying it and Quorum when updating it. If I 
> insert data, I insert all the columns, never only part of them. I 
> understand that I might get an out-of-date value since I'm using One to read, 
> but again, here I'm not getting an out-of-date value, just "null". 
> This is happening on our staging server which serves 20k users, and we see 
> this error happening 10+ times every day. I don't have an exact number of how 
> many times we run the query, but nodetool cfstats shows a local read count of 
> 85314 for this table over the last 18 hours, and we have 6 Cassandra nodes in 
> this cluster, so about 500k queries over 18 hours.
> We update the table every 3 weeks. The table has 20k rows for each key (guid) 
> I'm querying for. Out of the 20k rows, only a couple at most have a null score, 
> and they are not the same rows every time we query the same key.
> We are using C# driver version 3.0.1 and Cassandra version 2.2.6.44.





[jira] [Commented] (CASSANDRA-12431) Getting null value for the field that has value when query result has many rows

2016-09-13 Thread Fei Fang (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12431?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15488907#comment-15488907
 ] 

Fei Fang commented on CASSANDRA-12431:
--

@Alex
I tried the default page size first and then a page size of 500; both have this problem.

Yes, your understanding is correct.



[jira] [Commented] (CASSANDRA-12431) Getting null value for the field that has value when query result has many rows

2016-09-13 Thread Fei Fang (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12431?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15488899#comment-15488899
 ] 

Fei Fang commented on CASSANDRA-12431:
--

Yes, correct. 

For the Cassandra version, here is the current result (we upgraded to 2.2.7 recently).

{code}
cqlsh> select release_version from system.local;

 release_version
-----------------
  2.2.7-SNAPSHOT
{code}



[jira] [Commented] (CASSANDRA-12431) Getting null value for the field that has value when query result has many rows

2016-09-13 Thread Edward Capriolo (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12431?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15488467#comment-15488467
 ] 

Edward Capriolo commented on CASSANDRA-12431:
-

Thinking on this more. You can insert null values into this table. 

Are you absolutely sure you are not inserting null?

{noformat}
cqlsh:eventualtest> insert into email_histogram (id, email, score) values ('1','8', null);
cqlsh:eventualtest> select * from email_histogram where id='1';

 id | email | score
----+-------+-------
  1 |     1 |   3.3
  1 |     2 |   3.3
  1 |     3 |  null
  1 |     5 |   5.5
  1 |     7 |  null
  1 |     8 |  null
{noformat}
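
Note that binding an explicit null is different from leaving the column out of the INSERT entirely; a quick sketch (the '9' row here is purely illustrative, not from your data):
{code}
-- binding an explicit null writes a tombstone for score (the row then shows null):
insert into email_histogram (id, email, score) values ('1','9', null);

-- leaving score out of the statement does not touch any existing score cell:
insert into email_histogram (id, email) values ('1','9');
{code}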

{quote}
We update the table every 3 weeks. The table has 20k rows for each key (guid) 
I'm querying for
{quote}

In the schema you provided there are no guid columns. I am assuming you mean 
the text column.

Is a GUID ever reused? Every three weeks: 
* Do you tombstone the entire partition?
* Do you tombstone all the columns in the partition? (see the CQL sketch after this 
list for the distinction I mean)
* Is the partition only written once and never used again?
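
To make the distinction concrete, a rough CQL sketch using the example key from this issue (not statements from your actual workload):
{code}
-- tombstone the entire partition:
delete from email_histogram where id = '1';

-- tombstone a single row (one clustering key) within the partition:
delete from email_histogram where id = '1' and email = '8';

-- tombstone only the score cell of that row:
delete score from email_histogram where id = '1' and email = '8';
{code}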

Once a bad partition is found, does it continue displaying this odd behavior? Or 
can the problem not be replicated on the same partition?

Unfortunately, running load tests does not easily replicate this behavior. 
My suggestions are: 
* Run your reads at QUORUM (determine whether this issue still happens)
* Write the rows at ALL (it sounds like you are writing from a batch)
* Enable tracing and attempt to catch this condition
* This condition is fairly easy to detect: run the query twice and merge non-null 
results in code.

My suspicion is un-repaired partitions that are different across machines.  
Maybe someone can punch a hole in that theory.
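
If you want to rule that in or out, one option (a hedged suggestion, not something already tried here) would be to repair just this table on each node and re-check:
{noformat}
nodetool repair <keyspace> email_histogram    # replace <keyspace> with your actual keyspace name
{noformat}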



[jira] [Commented] (CASSANDRA-12431) Getting null value for the field that has value when query result has many rows

2016-09-13 Thread Edward Capriolo (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12431?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15488027#comment-15488027
 ] 

Edward Capriolo commented on CASSANDRA-12431:
-

Attempting to re-create the scenario here:

https://github.com/edwardcapriolo/ec/blob/master/src/test/java/Base/CASSANDRA_12431.java

A three-node cluster running locally for 10 minutes produced no null values. Feel 
free to play along.




[jira] [Commented] (CASSANDRA-12431) Getting null value for the field that has value when query result has many rows

2016-09-13 Thread Edward Capriolo (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12431?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15487916#comment-15487916
 ] 

Edward Capriolo commented on CASSANDRA-12431:
-

Possible culprits are:
* user error  (sorry have to look at that as well)
* paging
* funky repair entropy
* eventual consistency

[~feefeif...@gmail.com] Can you provide stats on dropped mutations and on 
digest mismatches from Cassandra? 

It would be nice to enable tracing and try to catch this event, but I understand 
that it is a rare occurrence. 
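
A rough pointer on where those numbers usually live on each node (hedged; the read-repair mismatch counters may also be exposed via JMX metrics):
{noformat}
nodetool tpstats     # the "Dropped" section at the bottom lists dropped MUTATION messages
nodetool netstats    # "Read Repair Statistics" shows attempted and mismatch counts
{noformat}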



[jira] [Commented] (CASSANDRA-12431) Getting null value for the field that has value when query result has many rows

2016-09-13 Thread Alex Petrov (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12431?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15487071#comment-15487071
 ] 

Alex Petrov commented on CASSANDRA-12431:
-

If it's 20K rows in a result set, this might be a paging problem. Are you 
using a custom page size or all defaults? 
If I understand correctly, sometimes the query goes through fine and sometimes it 
returns a null value, is that right?



[jira] [Commented] (CASSANDRA-12431) Getting null value for the field that has value when query result has many rows

2016-09-12 Thread Nate McCall (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12431?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15485818#comment-15485818
 ] 

Nate McCall commented on CASSANDRA-12431:
-

To clarify the above, are you saying that the following partition-level query 
returns the {{score}} column as null occasionally:
{noformat}
SELECT * FROM email_histogram WHERE id = ?
{noformat}

Whereas when queried by the full primary key, a row which had a null {{score}} 
above now has a value?
{noformat}
SELECT * FROM email_histogram WHERE id = ? and email = ?
{noformat}

bq. Cassandra version 2.2.6.44

Also, this looks like you might be running something other than a standard 
release internally. What is the specific release or GitHub SHA? 
