Thanks, Eric Stevens, for your reply!

We have the following JVM settings:
---------------------------------------------
memtable_offheap_space_in_mb: 15360  (found in cassandra.yaml)
MAX_HEAP_SIZE="16G"  (found in cassandra-env.sh)
---------------------------------------------

I also found big GCs in the log, but the dropped-message lines and the big
GCs were logged at different times in system.log. After reading your reply
I was expecting them to happen at the same time. I also triggered a GC
manually, but no messages were dropped.
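
In case it helps to reproduce what I checked: below is a rough Python sketch
(just an illustration, not Cassandra tooling) that pulls GCInspector pauses
and "messages were dropped" lines out of system.log so their timestamps can
be compared side by side. The log-line pattern assumes the default log
layout and may need adjusting for other versions.

    #!/usr/bin/env python3
    """Sketch: list GC pauses and dropped-message lines from system.log
    so their timestamps can be compared. Assumes the default Cassandra
    log layout (LEVEL [Thread] date time Source.java:line - message)."""
    import re
    import sys

    # Matches e.g. "INFO  [ScheduledTasks:1] 2016-06-14 06:27:39,498 MessagingService.java:929 - ..."
    LINE_RE = re.compile(
        r'^\w+\s+\[[^\]]+\]\s+'
        r'(\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2},\d{3})\s+(\S+)')

    def scan(path):
        events = []
        with open(path) as f:
            for line in f:
                m = LINE_RE.match(line)
                if not m:
                    continue
                timestamp, source = m.groups()
                if source.startswith('GCInspector'):
                    events.append((timestamp, 'GC pause', line.strip()))
                elif 'messages were dropped' in line:
                    events.append((timestamp, 'dropped ', line.strip()))
        return events

    if __name__ == '__main__':
        log_path = sys.argv[1] if len(sys.argv) > 1 else '/var/log/cassandra/system.log'
        for timestamp, kind, raw in scan(log_path):
            print(timestamp, kind, raw[:120])

Running it on our system.log is how I saw that the GC pauses and the dropped
TRACE messages did not line up in time.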

Is the TRACE message drop harmful, or is it okay to ignore it?

Thank you!!


On Wed, Jun 15, 2016 at 8:45 PM, Eric Stevens <migh...@gmail.com> wrote:

> This is better kept to the User groups.
>
> What are your JVM memory settings for Cassandra, and have you seen big
> GC's in your logs?
>
> The reason I ask is because that's a large number of column families,
> which produces memory pressure, and at first blush that strikes me as a
> likely cause.
>
>
> On Wed, Jun 15, 2016 at 3:23 AM Varun Barala <varunbaral...@gmail.com>
> wrote:
>
>> Hi all,
>>
>> Can anyone tell me what the possible reasons are for the log line below:
>>
>>
>> *"INFO  [ScheduledTasks:1] 2016-06-14 06:27:39,498
>> MessagingService.java:929 - _TRACE messages were dropped in last 5000 ms:
>> 928 for internal timeout and 0 for cross node timeout".*
>> I searched online and found some possible reasons:
>>
>> * Disk is not able to keep up with your ingest
>> * Resources are not able to support all parallel running tasks
>> * Large hint replay after other nodes have been down
>> * Heavy workload
>>
>> But in that case other kinds of messages (MUTATION, READ, WRITE, etc.)
>> should also be dropped by C*, and that doesn't happen here.
>>
>> -----------------------------
>> Cluster Specifications
>> ------------------------------
>> number of nodes = 1
>> total number of CF = 2000
>>
>> -----------------------------
>> Machine Specifications
>> ------------------------------
>> RAM 30 GB
>> hard disk SSD
>> ubuntu 14.04
>>
>>
>> Thanks in advance!!
>>
>> Regards,
>> Varun Barala
>>
>
