If you have successfully run a repair between the initial insert and the
first select, that should have ensured that all replicas have the data.
Are you sure your repairs are completing successfully?
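
One quick way to check (assuming the default log location - the exact wording
of the log lines varies a little between versions) is to look for the repair
command finishing without errors on the node you ran it from, e.g.:

$ grep -i "repair command" /var/log/cassandra/system.log
INFO  ... Repair command #42 finished in 25 minutes    <- #42 and the timing are just illustrative

and/or to check that nodetool repair exited with a zero status if you run it
from a script.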

To check whether some replicas are missing writes during the periods of high
load, you can monitor the dropped mutations metrics.
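
For example, nodetool tpstats on each node ends with a dropped-message
summary (the counts below are just illustrative):

$ nodetool tpstats
...
Message type           Dropped
READ                         0
MUTATION                  1234
HINT                         0
...

Non-zero MUTATION drops during your high-load window would mean some replicas
missed writes and will stay inconsistent until repair (or read repair)
catches them up.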

Cheers
Ben

---


Ben Slater | Chief Product Officer

<https://www.instaclustr.com/platform/>

<https://www.facebook.com/instaclustr>   <https://twitter.com/instaclustr>
<https://www.linkedin.com/company/instaclustr>

Read our latest technical blog posts here
<https://www.instaclustr.com/blog/>.

This email has been sent on behalf of Instaclustr Pty. Limited (Australia)
and Instaclustr Inc (USA).

This email and any attachments may contain confidential and legally
privileged information.  If you are not the intended recipient, do not copy
or disclose its content, but please reply to this email immediately and
highlight the error to the sender and then immediately delete the message.


On Tue, 30 Apr 2019 at 17:06, Marco Gasparini <
marco.gaspar...@competitoor.com> wrote:

> > My guess is the initial query was causing a read repair so, on
> subsequent queries, there were replicas of the data on every node and it
> still got returned at consistency one
> got it
>
> >There are a number of ways the data could have become inconsistent in the
> first place - eg  badly overloaded or down nodes, changes in topology
> without following proper procedure, etc
> I actually perform a repair every day (because I have a lot of deletes).
> The topology has not changed in months.
> I usually don't have down nodes, but I do have a high workload every night
> that lasts for about 2-3 hours. I'm monitoring Cassandra performance via
> Prometheus+Grafana and I noticed that reads are too slow, around 10-15
> seconds of latency, while writes are much faster, around 600-700 us. I'm
> using non-SSD drives on the nodes.
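>
> (Besides Grafana, I can also spot-check a node directly with, e.g.:
>
> $ nodetool proxyhistograms                          # coordinator read/write latency percentiles
> $ nodetool tablehistograms mkp_history mkp_lookup   # per-table latencies and sstables per read
>
> though the exact output columns depend on the Cassandra version.)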
>
> On Mon, 29 Apr 2019 at 22:36, Ben Slater <
> ben.sla...@instaclustr.com> wrote:
>
>> My guess is the initial query was causing a read repair so, on subsequent
>> queries, there were replicas of the data on every node and it still got
>> returned at consistency one.
>>
>> There are a number of ways the data could have become inconsistent in the
>> first place - eg  badly overloaded or down nodes, changes in topology
>> without following proper procedure, etc.
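>>
>> If you want to force the replicas for a particular partition back in sync
>> without waiting for a full repair, one option is to read it at a higher
>> consistency level so the coordinator compares all replicas and repairs any
>> mismatch in the foreground - roughly (using the table and key from your
>> example):
>>
>> cqlsh> CONSISTENCY ALL;
>> cqlsh> select * from mkp_history.mkp_lookup where id_url = 1455425 and url_type = 'mytype';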
>>
>> Cheers
>> Ben
>>
>>
>>
>> On Mon, 29 Apr 2019 at 19:50, Marco Gasparini <
>> marco.gaspar...@competitoor.com> wrote:
>>
>>> thank you Ben for the reply.
>>>
>>> > You haven’t said what consistency level you are using. CQLSH by
>>> default uses consistency level one which may be part of the issue - try
>>> using a higher level (eg CONSISTENCY QUORUM)
>>> yes, actually I used CQLSH so the consistency level was set to ONE.
>>> After I changed it I got the right results.
>>>
>>> >After results are returned correctly are they then returned correctly
>>> for all future runs?
>>> yes, it seems that once they have been returned I get them on every
>>> subsequent run of the same query, on whichever node I run it from.
>>>
>>> > When was the data inserted (relative to your attempt to query it)?
>>> about a day before the query
>>>
>>>
>>> Thanks
>>>
>>>
>>> On Mon, 29 Apr 2019 at 10:29, Ben Slater <
>>> ben.sla...@instaclustr.com> wrote:
>>>
>>>> You haven’t said what consistency level you are using. CQLSH by default
>>>> uses consistency level one which may be part of the issue - try using a
>>>> higher level (eg CONSISTENCY QUORUM).
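>>>>
>>>> In cqlsh that is just:
>>>>
>>>> cqlsh> CONSISTENCY QUORUM;
>>>> Consistency level set to QUORUM.
>>>>
>>>> (the confirmation line is roughly what cqlsh prints back) and then re-run
>>>> your select.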
>>>>
>>>> After results are returned correctly are they then returned correctly
>>>> for all future runs? When was the data inserted (relative to your attempt
>>>> to query it)?
>>>>
>>>> Cheers
>>>> Ben
>>>>
>>>>
>>>>
>>>> On Mon, 29 Apr 2019 at 17:57, Marco Gasparini <
>>>> marco.gaspar...@competitoor.com> wrote:
>>>>
>>>>> Hi all,
>>>>>
>>>>> I'm using Cassandra 3.11.3.5.
>>>>>
>>>>> I have just noticed that when I perform a query I get 0 results, but if
>>>>> I run that same query again a few seconds later I get the right results.
>>>>>
>>>>> I have traced the query:
>>>>>
>>>>> cqlsh> select event_datetime, id_url, uuid, num_pages from
>>>>> mkp_history.mkp_lookup where id_url= 1455425 and url_type='mytype' ;
>>>>>
>>>>>  event_datetime | id_url | uuid | num_pages
>>>>> ----------------+--------+------+-----------
>>>>>
>>>>> (0 rows)
>>>>>
>>>>> Tracing session: dda9d1a0-6a51-11e9-9e36-f54fe3235e69
>>>>>
>>>>>  activity | timestamp | source | source_elapsed | client
>>>>> ----------+-----------+--------+----------------+--------
>>>>>  Execute CQL3 query | 2019-04-29 09:39:05.530000 | 10.8.0.10 | 0 | 10.8.0.10
>>>>>  Parsing select event_datetime, id_url, uuid, num_pages from mkp_history.mkp_lookup where id_url= 1455425 and url_type='mytype'\n; [Native-Transport-Requests-2] | 2019-04-29 09:39:05.530000 | 10.8.0.10 | 238 | 10.8.0.10
>>>>>  Preparing statement [Native-Transport-Requests-2] | 2019-04-29 09:39:05.530000 | 10.8.0.10 | 361 | 10.8.0.10
>>>>>  reading data from /10.8.0.38 [Native-Transport-Requests-2] | 2019-04-29 09:39:05.531000 | 10.8.0.10 | 527 | 10.8.0.10
>>>>>  Sending READ message to /10.8.0.38 [MessagingService-Outgoing-/10.8.0.38-Small] | 2019-04-29 09:39:05.531000 | 10.8.0.10 | 620 | 10.8.0.10
>>>>>  READ message received from /10.8.0.10 [MessagingService-Incoming-/10.8.0.10] | 2019-04-29 09:39:05.535000 | 10.8.0.8 | 44 | 10.8.0.10
>>>>>  speculating read retry on /10.8.0.8 [Native-Transport-Requests-2] | 2019-04-29 09:39:05.535000 | 10.8.0.10 | 4913 | 10.8.0.10
>>>>>  Executing single-partition query on mkp_lookup [ReadStage-2] | 2019-04-29 09:39:05.535000 | 10.8.0.8 | 304 | 10.8.0.10
>>>>>  Sending READ message to /10.8.0.8 [MessagingService-Outgoing-/10.8.0.8-Small] | 2019-04-29 09:39:05.535000 | 10.8.0.10 | 4970 | 10.8.0.10
>>>>>  Acquiring sstable references [ReadStage-2] | 2019-04-29 09:39:05.536000 | 10.8.0.8 | 391 | 10.8.0.10
>>>>>  Bloom filter allows skipping sstable 1 [ReadStage-2] | 2019-04-29 09:39:05.536000 | 10.8.0.8 | 490 | 10.8.0.10
>>>>>  Skipped 0/1 non-slice-intersecting sstables, included 0 due to tombstones [ReadStage-2] | 2019-04-29 09:39:05.536000 | 10.8.0.8 | 549 | 10.8.0.10
>>>>>  Merged data from memtables and 0 sstables [ReadStage-2] | 2019-04-29 09:39:05.536000 | 10.8.0.8 | 697 | 10.8.0.10
>>>>>  Read 0 live rows and 0 tombstone cells [ReadStage-2] | 2019-04-29 09:39:05.536000 | 10.8.0.8 | 808 | 10.8.0.10
>>>>>  Enqueuing response to /10.8.0.10 [ReadStage-2] | 2019-04-29 09:39:05.536000 | 10.8.0.8 | 896 | 10.8.0.10
>>>>>  Sending REQUEST_RESPONSE message to /10.8.0.10 [MessagingService-Outgoing-/10.8.0.10-Small] | 2019-04-29 09:39:05.536000 | 10.8.0.8 | 1141 | 10.8.0.10
>>>>>  REQUEST_RESPONSE message received from /10.8.0.8 [MessagingService-Incoming-/10.8.0.8] | 2019-04-29 09:39:05.539000 | 10.8.0.10 | 8627 | 10.8.0.10
>>>>>  Processing response from /10.8.0.8 [RequestResponseStage-3] | 2019-04-29 09:39:05.539000 | 10.8.0.10 | 8739 | 10.8.0.10
>>>>>  Request complete | 2019-04-29 09:39:05.538823 | 10.8.0.10 | 8823 | 10.8.0.10
>>>>>
>>>>>
>>>>>
>>>>> And here I rerun the query just a few seconds later:
>>>>>
>>>>>
>>>>> cqlsh> select event_datetime, id_url, uuid, num_pages from
>>>>> mkp_history.mkp_lookup where id_url= 1455425 and url_type='mytype';
>>>>>
>>>>>  event_datetime                  | id_url  | uuid                                 | num_pages
>>>>> ---------------------------------+---------+--------------------------------------+-----------
>>>>>  2019-04-15 21:32:27.031000+0000 | 1455425 | 91114c7d-3dd3-4913-ac9c-0dfa12b4198b |         1
>>>>>  2019-04-14 21:34:23.630000+0000 | 1455425 | e97b160d-3901-4550-9ce6-36893a6dcd90 |         1
>>>>>  2019-04-11 21:57:23.025000+0000 | 1455425 | 1566cc7c-7893-43f0-bffe-caab47dec851 |         1
>>>>>
>>>>> (3 rows)
>>>>>
>>>>> Tracing session: f4b7eb20-6a51-11e9-9e36-f54fe3235e69
>>>>>
>>>>>  activity | timestamp | source | source_elapsed | client
>>>>> ----------+-----------+--------+----------------+--------
>>>>>  Execute CQL3 query | 2019-04-29 09:39:44.210000 | 10.8.0.10 | 0 | 10.8.0.10
>>>>>  Parsing select event_datetime, id_url, uuid, num_pages from mkp_history.mkp_lookup where id_url= 1455425 and url_type='mytype'; [Native-Transport-Requests-2] | 2019-04-29 09:39:44.210000 | 10.8.0.10 | 125 | 10.8.0.10
>>>>>  READ message received from /10.8.0.10 [MessagingService-Incoming-/10.8.0.10] | 2019-04-29 09:39:44.211000 | 10.8.0.8 | 27 | 10.8.0.10
>>>>>  Preparing statement [Native-Transport-Requests-2] | 2019-04-29 09:39:44.211000 | 10.8.0.10 | 261 | 10.8.0.10
>>>>>  Executing single-partition query on mkp_lookup [ReadStage-1] | 2019-04-29 09:39:44.211000 | 10.8.0.8 | 233 | 10.8.0.10
>>>>>  reading data from /10.8.0.8 [Native-Transport-Requests-2] | 2019-04-29 09:39:44.211000 | 10.8.0.10 | 422 | 10.8.0.10
>>>>>  Sending READ message to /10.8.0.8 [MessagingService-Outgoing-/10.8.0.8-Small] | 2019-04-29 09:39:44.211000 | 10.8.0.10 | 522 | 10.8.0.10
>>>>>  Acquiring sstable references [ReadStage-1] | 2019-04-29 09:39:44.212000 | 10.8.0.8 | 312 | 10.8.0.10
>>>>>  Bloom filter allows skipping sstable 1 [ReadStage-1] | 2019-04-29 09:39:44.212000 | 10.8.0.8 | 413 | 10.8.0.10
>>>>>  Skipped 0/1 non-slice-intersecting sstables, included 0 due to tombstones [ReadStage-1] | 2019-04-29 09:39:44.212000 | 10.8.0.8 | 473 | 10.8.0.10
>>>>>  Merged data from memtables and 0 sstables [ReadStage-1] | 2019-04-29 09:39:44.212000 | 10.8.0.8 | 676 | 10.8.0.10
>>>>>  Read 3 live rows and 0 tombstone cells [ReadStage-1] | 2019-04-29 09:39:44.212000 | 10.8.0.8 | 794 | 10.8.0.10
>>>>>  Enqueuing response to /10.8.0.10 [ReadStage-1] | 2019-04-29 09:39:44.212000 | 10.8.0.8 | 854 | 10.8.0.10
>>>>>  Sending REQUEST_RESPONSE message to /10.8.0.10 [MessagingService-Outgoing-/10.8.0.10-Small] | 2019-04-29 09:39:44.212001 | 10.8.0.8 | 1017 | 10.8.0.10
>>>>>  REQUEST_RESPONSE message received from /10.8.0.8 [MessagingService-Incoming-/10.8.0.8] | 2019-04-29 09:39:44.214000 | 10.8.0.10 | 4117 | 10.8.0.10
>>>>>  Processing response from /10.8.0.8 [RequestResponseStage-3] | 2019-04-29 09:39:44.214000 | 10.8.0.10 | 4191 | 10.8.0.10
>>>>>  Request complete | 2019-04-29 09:39:44.214349 | 10.8.0.10 | 4349 | 10.8.0.10
>>>>>
>>>>> What is the reason for this behaviour? How can I fix it?
>>>>>
>>>>> Thanks
>>>>> Marco
>>>>>
>>>>
