The repair results are as follows (we ran it on Friday): Cannot proceed on
repair because a neighbor (/192.168.61.201) is dead: session failed
But to be honest, the neighbor did not die. The repair seemed to trigger a series
of full GC events on the initiating node. The results from the logs are:
[2015-02-20
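For anyone digging into this: full GC pauses are visible in Cassandra's system
log via GCInspector, so you can correlate them with the repair. A minimal check
(assuming the default packaged log location; adjust the path for your install):

    # list the most recent GC pauses reported by the node
    grep GCInspector /var/log/cassandra/system.log | tail -n 20

A pause longer than the failure detector tolerates will make the other nodes
mark this one as down, which would produce exactly the "neighbor is dead"
error above.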
Hi,
2.1.3 is now the official latest release - I checked this morning and
got this nice surprise. Now it's update time - thanks to everyone
involved; if I meet any of you, one beer from me :-)
The changelist is rather long:
2) I did this. My tables are 99% write-only; it is an audit system.
3) Yes, I am using default values.
4) In both operations I am using LOCAL_QUORUM.
I am almost sure that the READ timeouts happen because of too many SSTables.
Anyway, first I would like to fix the too many pending compactions. I still
don't know how to speed them up.
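One way to confirm the "too many SSTables per read" theory (my suggestion,
not something mentioned in the thread; keyspace and table names below are
placeholders):

    # per-read SSTable counts and latency percentiles for one table
    nodetool cfhistograms my_keyspace my_table

If the SSTables column is high at the upper percentiles, every read is merging
many files, which matches the READ timeouts.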
On Wed, Feb 18, 2015 at 2:49 PM, Roni Balthazar ronibaltha...@gmail.com wrote:
I don't have problems with DC_B (the replica); only in DC_A (my system writes
only to it) do I have read timeouts.
I checked the SSTable count in OpsCenter and I have:
1) in DC_A roughly the same +-10% for the last week, with a small increase over
the last 24h (it is more than 15000-2 SSTables, depending on the node)
2) in DC_B last 24h
Hi,
Thanks for your tip; it looks like something changed, but I still don't know
if it is OK.
My nodes started to do more compaction, but it looks like some compactions
are really slow.
IO is idle and CPU is quite OK (30%-40%). We set the compaction throughput to
999, but I do not see a difference.
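For reference, this is the kind of invocation we used. Worth knowing: a value
set through nodetool takes effect immediately but is not persisted across a
restart; the cassandra.yaml setting compaction_throughput_mb_per_sec is the
persistent one.

    # raise the compaction throughput cap (in MB/s; the default is 16)
    nodetool setcompactionthroughput 999
    # confirm the running value
    nodetool getcompactionthroughput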
On Wed, Feb 18, 2015 at 2:49 PM, Roni Balthazar ronibaltha...@gmail.com wrote:
Are you running repairs within gc_grace_seconds? (default is 10 days)
http://www.datastax.com/documentation/cassandra/2.0/cassandra/operations/ops_repair_nodes_c.html
Double-check that you set cold_reads_to_omit to 0.0 on tables with STCS
that you do not read often.
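In case it saves someone a lookup: cold_reads_to_omit is a subproperty of the
size-tiered strategy, so it is set through the table's compaction options in
cqlsh. A sketch with placeholder keyspace/table names:

    -- 0.0 means no SSTables are skipped as "cold", so they all stay
    -- eligible for compaction
    ALTER TABLE my_keyspace.my_table
      WITH compaction = {'class': 'SizeTieredCompactionStrategy',
                         'cold_reads_to_omit': 0.0};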
Hi,
You can check if the number of SSTables is decreasing. Look for the
SSTable count information of your tables using nodetool cfstats.
The compaction history can be viewed using nodetool
compactionhistory.
About the timeouts, check this out:
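Concretely, the two checks look like this (placeholder names again):

    # per-table statistics; watch the "SSTable count" line over time
    nodetool cfstats my_keyspace.my_table | grep -i 'SSTable count'
    # recently finished compactions, with bytes before/after
    nodetool compactionhistory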
After some diagnostics (we haven't set cold_reads_to_omit yet): compactions
are running, but VERY slowly, with idle IO.
We had a lot of Data files in Cassandra. In DC_A it is about ~12
(only xxx-Data.db); DC_B has only ~4000.
I don't know if this changes anything, but:
1) in DC_A avg size of
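For anyone who wants to reproduce that count, it is just the number of Data.db
components on disk (assuming the default data directory; adjust the path to
your layout):

    # count SSTable data components across all keyspaces on this node
    find /var/lib/cassandra/data -name '*-Data.db' | wc -l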
I set setcompactionthroughput to 999 permanently and it doesn't change
anything. IO is still the same. CPU is idle.
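One way to verify the idle-IO observation (a generic Linux check, not from the
thread; iostat is part of the sysstat package):

    # extended device statistics every 5 seconds; %util near zero
    # while compactions are "running" confirms the disks are idle
    iostat -x 5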
*Environment*
1) Currently Cassandra 2.1.3, upgraded from 2.1.0 (suggested by Al
Tobey from DataStax)
2) not using vnodes
3) two data centres: 5 nodes in one DC (DC_A), 4 nodes in the second DC (DC_B)
4) each node is set up on a physical box with two 16-core HT Xeon
processors (E5-2660) and 64GB RAM
7) minimal reads (usually none, sometimes a few)
Those two points make me repeat an answer I already got. First, where did you
get 2.1.3 from? Maybe I missed it; I will have a look. But if it is
2.1.2
Hi, 100% in agreement with Roland.
The 2.1.x series is a pain! I would never recommend the current 2.1.x series
for production.
Clocks are a pain, and check your connectivity! Also check tpstats to see if
your threadpools are being overrun.
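For the archives, that check is just:

    # Active/Pending/Blocked counters per threadpool, plus dropped messages
    nodetool tpstats

Non-zero Blocked or steadily growing Pending counts (e.g. on
CompactionExecutor or MutationStage) are the usual sign of an overrun pool.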
Regards,
Carlos Juzarte Rolo
Cassandra Consultant
Pythian
One thing I do not understand: in my case compaction is running
permanently. Is there a way to check which compaction is pending? The only
information is about the total count.
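As far as I know (my understanding, not something confirmed in this thread),
the closest you can get is:

    # pending task count, plus keyspace/table and progress for each
    # compaction that is currently *running*
    nodetool compactionstats

Pending compactions are only exposed as a counter; the per-table detail exists
only for the running ones.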
On Monday, February 16, 2015, Ja Sam ptrstp...@gmail.com wrote:
Of course I made a mistake. I am using 2.1.2. Anyway, the nightly build is
available from
http://cassci.datastax.com/job/cassandra-2.1/
I read about cold_reads_to_omit. It looks promising. Should I also set the
compaction throughput?
P.S. I am really sad that I didn't read this before:
Hi,
You can run nodetool compactionstats to view statistics on compactions.
Setting cold_reads_to_omit to 0.0 can help to reduce the number of
SSTables when you use Size-Tiered compaction.
You can also create a cron job to increase the value of
setcompactionthroughput during the night or when
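A sketch of what such a cron job could look like (the schedule, values, and
file path are made up for illustration; /etc/cron.d entries need a user field,
and nodetool must be on cron's PATH):

    # /etc/cron.d/cassandra-compaction (hypothetical)
    # raise the throughput cap off-hours, restore the 16 MB/s default at 6am
    0 22 * * * cassandra nodetool setcompactionthroughput 999
    0 6  * * * cassandra nodetool setcompactionthroughput 16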