I ran two queries: the first on an application table, the second on our counter table. I'm posting the results below (values removed for convenience). Coincidence or luck, both queries went to cassandra-05 for the replica read. It still does not make any sense to me.
cqlsh:Usergrid_Applications> select value from "Graph_Source_Node_Edges" limit 10;

Tracing session: d99c76c4-2382-11e7-a634-bf481230ee1f

 activity | timestamp | source | source_elapsed | client
----------+-----------+--------+----------------+--------
 Execute CQL3 query | 2017-04-17 18:30:56.173000 | cassandra-01 | 0 | cassandra-01
 Parsing select value from "Graph_Source_Node_Edges" limit 10; [SharedPool-Worker-1] | 2017-04-17 18:30:56.173000 | cassandra-01 | 157 | cassandra-01
 Preparing statement [SharedPool-Worker-1] | 2017-04-17 18:30:56.173000 | cassandra-01 | 363 | cassandra-01
 Computing ranges to query [SharedPool-Worker-1] | 2017-04-17 18:30:56.173000 | cassandra-01 | 925 | cassandra-01
 RANGE_SLICE message received from /cassandra-01 [MessagingService-Incoming-/cassandra-01] | 2017-04-17 18:30:56.174000 | cassandra-05 | 9 | cassandra-01
 Submitting range requests on 2561 ranges with a concurrency of 1 (82797.0 rows per range expected) [SharedPool-Worker-1] | 2017-04-17 18:30:56.174000 | cassandra-01 | 1546 | cassandra-01
 Executing seq scan across 3 sstables for (min(-9223372036854775808), max(-9173699490866503541)] [SharedPool-Worker-3] | 2017-04-17 18:30:56.174000 | cassandra-05 | 124 | cassandra-01
 Enqueuing request to /cassandra-05 [SharedPool-Worker-1] | 2017-04-17 18:30:56.174001 | cassandra-01 | 1674 | cassandra-01
 Submitted 1 concurrent range requests [SharedPool-Worker-1] | 2017-04-17 18:30:56.174001 | cassandra-01 | 1819 | cassandra-01
 Sending RANGE_SLICE message to /cassandra-05 [MessagingService-Outgoing-/cassandra-05] | 2017-04-17 18:30:56.174001 | cassandra-01 | 1840 | cassandra-01
 Read 10 live and 0 tombstone cells [SharedPool-Worker-3] | 2017-04-17 18:30:56.175000 | cassandra-05 | 964 | cassandra-01
 Enqueuing response to /cassandra-01 [SharedPool-Worker-3] | 2017-04-17 18:30:56.175000 | cassandra-05 | 1004 | cassandra-01
 Sending REQUEST_RESPONSE message to /cassandra-01 [MessagingService-Outgoing-/cassandra-01] | 2017-04-17 18:30:56.175000 | cassandra-05 | 1117 | cassandra-01
 REQUEST_RESPONSE message received from /cassandra-05 [MessagingService-Incoming-/cassandra-05] | 2017-04-17 18:30:56.176000 | cassandra-01 | 3571 | cassandra-01
 Processing response from /cassandra-05 [SharedPool-Worker-6] | 2017-04-17 18:30:56.176000 | cassandra-01 | 3623 | cassandra-01
 Request complete | 2017-04-17 18:30:56.176777 | cassandra-01 | 3777 | cassandra-01

cqlsh:counter_keyspace> select counter_value from counter table limit 10;

Tracing session: f9784963-2382-11e7-a634-bf481230ee1f

 activity | timestamp | source | source_elapsed | client
----------+-----------+--------+----------------+--------
 Execute CQL3 query | 2017-04-17 18:31:49.622000 | cassandra-01 | 0 | cassandra-01
 Parsing select counter_value from counter table limit 10; [SharedPool-Worker-4] | 2017-04-17 18:31:49.622000 | cassandra-01 | 142 | cassandra-01
 Preparing statement [SharedPool-Worker-4] | 2017-04-17 18:31:49.623000 | cassandra-01 | 217 | cassandra-01
 RANGE_SLICE message received from /cassandra-01 [MessagingService-Incoming-/cassandra-01] | 2017-04-17 18:31:49.623000 | cassandra-05 | 18 | cassandra-01
 Computing ranges to query [SharedPool-Worker-4] | 2017-04-17 18:31:49.623000 | cassandra-01 | 335 | cassandra-01
 Executing seq scan across 2 sstables for (min(-9223372036854775808), max(-9173699490866503541)] [SharedPool-Worker-2] | 2017-04-17 18:31:49.623000 | cassandra-05 | 141 | cassandra-01
 Submitting range requests on 2561 ranges with a concurrency of 1 (861.45 rows per range expected) [SharedPool-Worker-4] | 2017-04-17 18:31:49.623001 | cassandra-01 | 1060 | cassandra-01
 Enqueuing request to /cassandra-05 [SharedPool-Worker-4] | 2017-04-17 18:31:49.623001 | cassandra-01 | 1134 | cassandra-01
 Submitted 1 concurrent range requests [SharedPool-Worker-4] | 2017-04-17 18:31:49.624000 | cassandra-01 | 1225 | cassandra-01
 Sending RANGE_SLICE message to /cassandra-05 [MessagingService-Outgoing-/cassandra-05] | 2017-04-17 18:31:49.624000 | cassandra-01 | 1257 | cassandra-01
 Read 10 live and 0 tombstone cells [SharedPool-Worker-2] | 2017-04-17 18:31:49.627000 | cassandra-05 | 3350 | cassandra-01
 Enqueuing response to /cassandra-01 [SharedPool-Worker-2] | 2017-04-17 18:31:49.627000 | cassandra-05 | 3394 | cassandra-01
 Sending REQUEST_RESPONSE message to /cassandra-01 [MessagingService-Outgoing-/cassandra-01] | 2017-04-17 18:31:49.627000 | cassandra-05 | 3453 | cassandra-01
 REQUEST_RESPONSE message received from /cassandra-05 [MessagingService-Incoming-/cassandra-05] | 2017-04-17 18:31:49.628000 | cassandra-01 | 5250 | cassandra-01
 Processing response from /cassandra-05 [SharedPool-Worker-6] | 2017-04-17 18:31:49.628000 | cassandra-01 | 5319 | cassandra-01
 Request complete | 2017-04-17 18:31:49.628595 | cassandra-01 | 6595 | cassandra-01

From: benjamin roth [mailto:brs...@gmail.com]
Sent: Monday, April 17, 2017 6:17 PM
To: user@cassandra.apache.org
Subject: RE: Counter performance

Just run some queries on counter tables, and some on regular tables. Look at the traces and then compare. You don't need to do anything with the application code. You can also set the trace probability at a table level and then analyze the queries.

On 17.04.2017 at 17:07, "Eren Yilmaz" <eren.yil...@sebit.com.tr> wrote:

I can't add tracing using the driver – the Usergrid code is way too complex. When I look at logging slow queries on the C* side, it says the feature was added in version 3.10 (https://issues.apache.org/jira/browse/CASSANDRA-12403), and we use 3.7. Are there any other ways to log slow queries in this version? And what should we expect from this log output?
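The tracing suggestions above can be sketched roughly as follows. This is only an illustrative sketch: the table and keyspace names are placeholders, and note that `nodetool settraceprobability` is a node-wide sampling setting rather than a per-table one.

```shell
# Per-session tracing in cqlsh (TRACING is a cqlsh command, not CQL):
#   cqlsh> TRACING ON;
#   cqlsh> SELECT counter_value FROM counter_keyspace.some_counter_table LIMIT 10;

# Probabilistic tracing: sample 1% of requests on this node.
# Run on every node; the setting applies node-wide, not per table.
nodetool settraceprobability 0.01

# Sampled traces land in the system_traces keyspace and can be
# inspected later, e.g.:
#   cqlsh> SELECT session_id, duration, request FROM system_traces.sessions LIMIT 10;
```

Sampling at a low probability keeps the overhead of tracing negligible on a busy cluster while still collecting enough sessions to compare counter reads against regular reads.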
From: benjamin roth [mailto:brs...@gmail.com]
Sent: Monday, April 17, 2017 5:44 PM
To: user@cassandra.apache.org
Subject: RE: Counter performance

You could enable a slow query log and then trace single queries, couldn't you?

On 17.04.2017 at 16:31, "Eren Yilmaz" <eren.yil...@sebit.com.tr> wrote:

I can't trace selects on the application tables, unfortunately. The application is Usergrid, and it stores the data in binary. We have little control over Usergrid-created data.

From: benjamin roth [mailto:brs...@gmail.com]
Sent: Monday, April 17, 2017 4:12 PM
To: user@cassandra.apache.org
Subject: Re: Counter performance

Do you see a difference when tracing the selects?

2017-04-17 13:36 GMT+02:00 Eren Yilmaz <eren.yil...@sebit.com.tr>:

Application tables use LeveledCompactionStrategy. The counter tables were initially created with the default SizeTieredCompactionStrategy, but we later changed them to LeveledCompactionStrategy:

compaction = { 'class' : 'org.apache.cassandra.db.compaction.LeveledCompactionStrategy', 'sstable_size_in_mb' : 512 }

From: benjamin roth [mailto:brs...@gmail.com]
Sent: Monday, April 17, 2017 12:12 PM
To: user@cassandra.apache.org
Subject: Re: Counter performance

Do you have a different compaction strategy on the counter tables?

2017-04-17 10:07 GMT+02:00 Eren Yilmaz <eren.yil...@sebit.com.tr>:

We are using Cassandra (3.7) counter tables in our application, and there are about 10 counter tables. The counter tables are in a separate keyspace with RF=3 (10 nodes in total). The tables are read-heavy: for each web request to the application, we read at least 20 counter values. The counter reads are very slow compared to the other application data reads from Cassandra, and sometimes the reads put extra-heavy CPU load on some nodes. Are there any tips or best practices for increasing the performance of counter tables?
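For reference, the compaction change described earlier in the thread would look something like this. The table name is a placeholder; only the compaction map comes from the thread itself.

```shell
# Hypothetical counter table name; switch it from STCS to LCS with
# the sstable size quoted in the thread.
cqlsh -e "ALTER TABLE counter_keyspace.some_counter_table
          WITH compaction = {'class': 'LeveledCompactionStrategy',
                             'sstable_size_in_mb': 512};"

# Existing SSTables are only reorganized as compaction naturally runs;
# to force a rewrite under the new strategy on each node:
nodetool upgradesstables -a counter_keyspace some_counter_table
```

Changing the strategy alone does not immediately rewrite data already on disk, so read behavior may not change until the old SSTables have been recompacted.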