I was thinking about this issue. On the dev side, would it make sense to
build a utility for the unit tests that could enable tracing and then
assert that a given sequence of steps appeared in the trace?

Something like:

setup();
runQuery("SELECT * FROM X");
Assertion.assertTrace("Preparing statement").then("Row cache hit").then("Request complete");

This would be a pretty awesome way to verify things without mocks/Mockito.
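A minimal sketch of what such a utility could look like, assuming the trace is collected as a list of activity strings. The `TraceAssertion` class, its fluent `then()` chain, and the in-memory event list are all hypothetical illustrations, not an existing Cassandra test API:

```java
import java.util.Arrays;
import java.util.List;

// Hypothetical trace-assertion helper: verifies that expected trace steps
// occur in order within a captured list of trace activity strings.
public class TraceAssertion {
    private final List<String> events;
    private int cursor = 0; // position after the last matched event

    private TraceAssertion(List<String> events) {
        this.events = events;
    }

    // Entry point: assert the first expected step appears in the trace.
    public static TraceAssertion assertTrace(List<String> events, String firstStep) {
        return new TraceAssertion(events).then(firstStep);
    }

    // Assert that an event containing `step` occurs after all previously
    // matched steps; fails the test otherwise.
    public TraceAssertion then(String step) {
        for (int i = cursor; i < events.size(); i++) {
            if (events.get(i).contains(step)) {
                cursor = i + 1;
                return this;
            }
        }
        throw new AssertionError("Trace step not found in order: " + step);
    }

    public static void main(String[] args) {
        // In a real test, these strings would come from the query tracing
        // infrastructure after running the query under test.
        List<String> trace = Arrays.asList(
                "Preparing statement [SharedPool-Worker-1]",
                "Row cache hit [SharedPool-Worker-2]",
                "Request complete");
        assertTrace(trace, "Preparing statement").then("Row cache hit").then("Request complete");
        System.out.println("trace assertions passed");
    }
}
```

An out-of-order or missing step throws an `AssertionError`, so the same chain doubles as a readable specification of the expected read path.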



On Mon, Oct 3, 2016 at 2:35 PM, Abhinav Solan <abhinav.so...@gmail.com>
wrote:

> Hi, can anyone please help me with this?
>
> Thanks,
> Abhinav
>
> On Fri, Sep 30, 2016 at 6:20 PM Abhinav Solan <abhinav.so...@gmail.com>
> wrote:
>
>> Hi Everyone,
>>
>> My table looks like this -
>> CREATE TABLE test.reads (
>>     svc_pt_id bigint,
>>     meas_type_id bigint,
>>     flags bigint,
>>     read_time timestamp,
>>     value double,
>>     PRIMARY KEY ((svc_pt_id, meas_type_id))
>> ) WITH bloom_filter_fp_chance = 0.1
>>     AND caching = {'keys': 'ALL', 'rows_per_partition': '10'}
>>     AND comment = ''
>>     AND compaction = {'class': 'org.apache.cassandra.db.compaction.LeveledCompactionStrategy'}
>>     AND compression = {'chunk_length_in_kb': '64', 'class': 'org.apache.cassandra.io.compress.LZ4Compressor'}
>>     AND crc_check_chance = 1.0
>>     AND dclocal_read_repair_chance = 0.1
>>     AND default_time_to_live = 0
>>     AND gc_grace_seconds = 864000
>>     AND max_index_interval = 2048
>>     AND memtable_flush_period_in_ms = 0
>>     AND min_index_interval = 128
>>     AND read_repair_chance = 0.0
>>     AND speculative_retry = '99PERCENTILE';
>>
>> Have set up the C* nodes with
>> row_cache_size_in_mb: 1024
>> row_cache_save_period: 14400
>>
>> and I am making this query
>> select svc_pt_id, meas_type_id, read_time, value FROM
>> cts_svc_pt_latest_int_read where svc_pt_id = -9941235 and meas_type_id =
>> 146;
>>
>> With tracing on, it reports "Row cache miss" every time:
>>
>> activity | timestamp | source | source_elapsed
>> ---------+-----------+--------+---------------
>> Execute CQL3 query | 2016-09-30 18:15:00.446000 | 192.168.199.75 | 0
>> Parsing select svc_pt_id, meas_type_id, read_time, value FROM cts_svc_pt_latest_int_read where svc_pt_id = -9941235 and meas_type_id = 146; [SharedPool-Worker-1] | 2016-09-30 18:15:00.446000 | 192.168.199.75 | 111
>> Preparing statement [SharedPool-Worker-1] | 2016-09-30 18:15:00.446000 | 192.168.199.75 | 209
>> reading data from /192.168.170.186 [SharedPool-Worker-1] | 2016-09-30 18:15:00.446001 | 192.168.199.75 | 370
>> Sending READ message to /192.168.170.186 [MessagingService-Outgoing-/192.168.170.186] | 2016-09-30 18:15:00.446001 | 192.168.199.75 | 450
>> REQUEST_RESPONSE message received from /192.168.170.186 [MessagingService-Incoming-/192.168.170.186] | 2016-09-30 18:15:00.448000 | 192.168.199.75 | 2469
>> Processing response from /192.168.170.186 [SharedPool-Worker-8] | 2016-09-30 18:15:00.448000 | 192.168.199.75 | 2609
>> READ message received from /192.168.199.75 [MessagingService-Incoming-/192.168.199.75] | 2016-09-30 18:15:00.449000 | 192.168.170.186 | 75
>> Row cache miss [SharedPool-Worker-2] | 2016-09-30 18:15:00.449000 | 192.168.170.186 | 218
>> Fetching data but not populating cache as query does not query from the start of the partition [SharedPool-Worker-2] | 2016-09-30 18:15:00.449000 | 192.168.170.186 | 246
>> Executing single-partition query on cts_svc_pt_latest_int_read [SharedPool-Worker-2] | 2016-09-30 18:15:00.449000 | 192.168.170.186 | 259
>> Acquiring sstable references [SharedPool-Worker-2] | 2016-09-30 18:15:00.449001 | 192.168.170.186 | 281
>> Merging memtable contents [SharedPool-Worker-2] | 2016-09-30 18:15:00.449001 | 192.168.170.186 | 295
>> Merging data from sstable 8 [SharedPool-Worker-2] | 2016-09-30 18:15:00.449001 | 192.168.170.186 | 326
>> Key cache hit for sstable 8 [SharedPool-Worker-2] | 2016-09-30 18:15:00.449001 | 192.168.170.186 | 351
>> Merging data from sstable 7 [SharedPool-Worker-2] | 2016-09-30 18:15:00.449001 | 192.168.170.186 | 439
>> Key cache hit for sstable 7 [SharedPool-Worker-2] | 2016-09-30 18:15:00.449001 | 192.168.170.186 | 468
>> Read 1 live and 0 tombstone cells [SharedPool-Worker-2] | 2016-09-30 18:15:00.449001 | 192.168.170.186 | 615
>> Enqueuing response to /192.168.199.75 [SharedPool-Worker-2] | 2016-09-30 18:15:00.449002 | 192.168.170.186 | 766
>> Sending REQUEST_RESPONSE message to /192.168.199.75 [MessagingService-Outgoing-/192.168.199.75] | 2016-09-30 18:15:00.449002 | 192.168.170.186 | 897
>> Request complete | 2016-09-30 18:15:00.448888 | 192.168.199.75 | 2888
>>
>> Can anyone please tell me what I am doing wrong?
>>
>> Thanks,
>> Abhinav
>>
>
