Mview in cassandra

2018-10-15 Thread rajasekhar kommineni
Hi,

I am seeing the below warning message in system.log after copying data using 
sstableloader. 

WARN  [CompactionExecutor:972] 2018-10-15 22:20:39,308 ViewBuilder.java:189 - 
Materialized View failed to complete, sleeping 5 minutes before restarting
org.apache.cassandra.exceptions.UnavailableException: Cannot achieve 
consistency level ONE
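
Per-host build progress for the view is tracked in the 
system_distributed.view_build_status table in 3.x; the keyspace and view names 
below are placeholders for the real ones:

SELECT * FROM system_distributed.view_build_status
 WHERE keyspace_name = 'my_ks' AND view_name = 'my_view';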

I tried to drop the Materialized View and recreate it, but the data is not 
getting populated on version 3.11.1.

I tried the same on version 3.11.2 on a single-node dev box, and there I can 
query the Materialized View with data. Does anybody have experience with MViews?
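
The drop/recreate is just the standard DDL, something like the following 
(keyspace, table, and column names here are placeholders, not the real schema):

DROP MATERIALIZED VIEW IF EXISTS my_ks.my_view;

CREATE MATERIALIZED VIEW my_ks.my_view AS
  SELECT * FROM my_ks.my_base_table
  WHERE part_key IS NOT NULL AND clust_key IS NOT NULL
  PRIMARY KEY (clust_key, part_key);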

Thanks,





Re: Cassandra: Inconsistent data on reads (LOCAL_QUORUM)

2018-10-15 Thread Naik, Ninad
Yup. Verified again that this table is being written, read, and replicated to 
just one data center.
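
For reference, the replication settings can be confirmed from cqlsh; the 
keyspace name below is a placeholder, and on 2.1 the schema tables can also be 
queried directly:

DESCRIBE KEYSPACE my_keyspace;

SELECT keyspace_name, strategy_class, strategy_options
  FROM system.schema_keyspaces;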


From: Naik, Ninad
Sent: Monday, October 15, 2018 1:18:43 PM
To: user@cassandra.apache.org
Subject: Re: Cassandra: Inconsistent data on reads (LOCAL_QUORUM)


Let me double check that.


From: Jeff Jirsa 
Sent: Monday, October 15, 2018 11:49:10 AM
To: user@cassandra.apache.org
Subject: Re: Cassandra: Inconsistent data on reads (LOCAL_QUORUM)



Are you SURE there are no writes to that table coming from another DC?



--
Jeff Jirsa


On Oct 15, 2018, at 5:34 PM, Naik, Ninad <ninad.n...@epsilon.com> wrote:


Thanks Jeff. We're not doing deletes, but I will take a look at this jira.


From: Jeff Jirsa <jji...@gmail.com>
Sent: Sunday, October 14, 2018 12:55:17 PM
To: user@cassandra.apache.org
Subject: Re: Cassandra: Inconsistent data on reads (LOCAL_QUORUM)



If this is 2.1 AND you do deletes AND you have a non-zero number of failed 
writes (timeouts), it’s possibly short reads.

3.0 fixes this (https://issues.apache.org/jira/browse/CASSANDRA-12872); it 
won’t be backported to 2.1 because it’s a significant change to how reads are 
executed.
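
Roughly the pattern being described, sketched in CQL against the table further 
down (placeholder key values; the delete stands in for any delete that only 
some replicas applied before the write timed out):

-- A delete times out and lands on only one of the three replicas, leaving a
-- tombstone the other replicas haven't seen.
DELETE FROM "MY_TABLE" WHERE key = 'k1' AND sub_key = 's1';

-- A later LIMIT-ed LOCAL_QUORUM read can then come back short: rows shadowed
-- by that tombstone are dropped during reconciliation, and on 2.1 the
-- coordinator may not go back for replacements (reworked in 3.0 per
-- CASSANDRA-12872).
SELECT sub_key, value FROM "MY_TABLE" WHERE key = 'k1' LIMIT 100;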


--
Jeff Jirsa


On Oct 13, 2018, at 7:24 PM, Naik, Ninad <ninad.n...@epsilon.com> wrote:


Thanks Maitrayee. I should have mentioned this as one of the things we 
verified. The clocks on cassandra nodes are in sync.


From: maitrayee shah <koolja...@yahoo.com.INVALID>
Sent: Friday, October 12, 2018 6:40:25 PM
To: user@cassandra.apache.org
Subject: Re: Cassandra: Inconsistent data on reads (LOCAL_QUORUM)



We have seen inconsistent reads if the clocks on the nodes are not in sync.
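
One way to check for skewed timestamps is to look at the cell write times from 
cqlsh and compare them to wall-clock time (placeholder key value):

-- Write timestamps come from the coordinator (or the client, if set there); if
-- they are far off wall-clock time, clock skew is a likely suspect.
SELECT sub_key, value, WRITETIME(value) FROM "MY_TABLE" WHERE key = 'k1';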


Thank you

Sent from my iPhone

On Oct 12, 2018, at 1:50 PM, Naik, Ninad <ninad.n...@epsilon.com> wrote:


Hello,

We're seeing inconsistent data while doing reads on cassandra. Here are the 
details:

It's a wide-column table. Columns can be added by multiple machines and read by 
multiple machines. The time between a write and a read is usually minutes, but 
can sometimes be seconds. Writes happen every 2 minutes.
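
In CQL terms (the schema is further down), "adding a column" to a row key is an 
insert of a new clustering row, roughly:

-- Placeholder values; each added "column" is a new (key, sub_key) row.
INSERT INTO "MY_TABLE" (key, sub_key, value) VALUES ('row-key-1', 'n', 'some-value');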

Now, while reading we're seeing the following cases of inconsistent reads:

  *   One column was added. If a read was done after the column was added (20 
secs to 2 minutes after the write), Cassandra returns no data. As if the key 
doesn't exist. If the application retries, it gets the data.
  *   A few columns exist for a row key. And a new column 'n' was added. Again, 
a read happens a few minutes after the write. This time, only the latest column 
'n' is returned. In this case the app doesn't know that the data is incomplete 
so it doesn't retry. If we manually retry, we see all the columns.
  *   A few columns exist for a row key. And a new column 'n' is added. When a 
read happens after the write, all columns but 'n' are returned.

Here's what we've verified:

  *   Both writes and reads are using 'LOCAL_QUORUM' consistency level.
  *   The replication is within local data center. No remote data center is 
involved in the read or write.
  *   During the inconsistent reads, none of the nodes are undergoing GC pauses.
  *   There are no errors in the Cassandra logs.
  *   Reads always happen after the writes.
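
When an inconsistent read shows up, one quick cross-check from cqlsh is to 
repeat the read at a stronger consistency level to see whether the data is 
simply missing from some replicas (placeholder row key):

CONSISTENCY LOCAL_QUORUM;
SELECT sub_key, value FROM "MY_TABLE" WHERE key = 'affected-row-key';

-- If the same read at ALL returns the missing columns, the write only reached
-- a minority of replicas.
CONSISTENCY ALL;
SELECT sub_key, value FROM "MY_TABLE" WHERE key = 'affected-row-key';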

A few other details: Cassandra version 2.1.9; DataStax Java driver version 
2.1.10.2; replication factor 3.

We don't see this problem in lower environments. We had seen it happen once or 
twice last year, but over the last few days it's been happening quite 
frequently: on average, 2 inconsistent reads every minute.

Here's what the table definition looks like:

CREATE TABLE "MY_TABLE" (
  key text,
  sub_key text,
  value text,
  PRIMARY KEY ((key), sub_key)
) WITH
  bloom_filter_fp_chance=0.01 AND
  caching='{"keys":"ALL", "rows_per_partition":"NONE"}' AND
  comment='' AND
  dclocal_read_repair_chance=0.10 AND
  gc_grace_seconds=864000 AND
  read_repair_chance=0.00 AND
  default_time_to_live=0 AND
  speculative_retry='ALWAYS' AND
  memtable_flush_period_in_ms=0 AND
  compaction={'class': 'SizeTieredCompactionStrategy'} AND
  compression={'sstable_compression': 'LZ4Compressor'};
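
For completeness, the keyspace replication is along these lines (a sketch with 
placeholder keyspace and data-center names, assuming NetworkTopologyStrategy):

-- Single local data center, replication factor 3.
CREATE KEYSPACE my_keyspace WITH replication = {
  'class': 'NetworkTopologyStrategy',
  'DC1': 3
};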


Please point us in the right direction. Thanks!




Re: Cassandra: Inconsistent data on reads (LOCAL_QUORUM)

2018-10-15 Thread Naik, Ninad
Let me double check that.



Re: Cassandra: Inconsistent data on reads (LOCAL_QUORUM)

2018-10-15 Thread Jeff Jirsa
Are you SURE there are no writes to that table coming from another DC?



-- 
Jeff Jirsa



Re: Cassandra: Inconsistent data on reads (LOCAL_QUORUM)

2018-10-15 Thread Naik, Ninad
Thanks Jeff. We're not doing deletes, but I will take a look at this jira.

