Can you provide more details?
E.g. the table structure, the app used for the query, the query itself, and the
error message.

Also, get the output of the following commands from your cluster nodes (note
that one command uses "." and the other a space between keyspace and table name):

nodetool -h <hostname> tablestats <keyspace>.<tablename>
nodetool -h <hostname> tablehistograms <keyspace> <tablename>
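
For example, with placeholder hostname/keyspace/table names:

nodetool -h node1.example.com tablestats my_keyspace.my_table
nodetool -h node1.example.com tablehistograms my_keyspace my_table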

Timeouts can happen at the client/application level (which can be tuned) and at
the coordinator node level (which can also be tuned).
But again, those timeouts are a symptom of something else.
They can happen on the client side because the connection pool queue is too full
(which is likely due to slow response times from the cluster/coordinator nodes).
And the issues on the cluster side could be due to several reasons.
E.g. your query has to scan through too many tombstones, or your query uses
filtering (ALLOW FILTERING), either of which causes the delay.
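
For reference, the coordinator-side timeouts and the tombstone thresholds live in
cassandra.yaml. The values below are just the common defaults (they can differ by
version), shown as a sketch of where to look rather than a tuning recommendation:

# cassandra.yaml
read_request_timeout_in_ms: 5000       # coordinator timeout for single-partition reads
range_request_timeout_in_ms: 10000     # coordinator timeout for range scans / filtering queries
tombstone_warn_threshold: 1000         # log a warning when a single read scans this many tombstones
tombstone_failure_threshold: 100000    # abort the read once it scans this many tombstones

The client/driver side has its own read timeout (configured in your application or
driver settings), so the two need to be looked at together.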

From: "ZAIDI, ASAD A" <az1...@att.com>
Date: Friday, July 7, 2017 at 9:45 AM
To: "user@cassandra.apache.org" <user@cassandra.apache.org>
Subject: RE: READ Queries timing out.

>> I analysed the GC logs and am not seeing any issues with major GCs
            If you don’t have issues with GC, then why do you want to [tune] GC
parameters?
Instead, focus on why the select queries are taking time. Maybe take a look at
their trace?
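
For example, a trace can be captured straight from cqlsh (the keyspace, table and
predicate below are made up for illustration); the trace events should show, among
other things, how many live rows and tombstone cells each read touched:

cqlsh> TRACING ON;
cqlsh> SELECT * FROM my_keyspace.my_table WHERE id = 1234;
cqlsh> TRACING OFF;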


From: Pranay akula [mailto:pranay.akula2...@gmail.com]
Sent: Friday, July 07, 2017 9:27 AM
To: user@cassandra.apache.org
Subject: READ Queries timing out.

Lately I am seeing some select queries timing out. The data modelling is to blame,
but I am not in a situation to redo it.

Will increasing the heap help?

I am currently using a 1GB new heap (new_heap). I analysed the GC logs and am not
seeing any issues with major GCs.

I am using G1GC; will increasing new_heap help?

I am currently using JVM_OPTS="$JVM_OPTS -XX:MaxGCPauseMillis=500". Even if I
increase the heap to, say, 2GB, is that effective? Because young GCs will kick in
more frequently so that they still complete within 500ms, right?


Thanks
Pranay.
