Hey guys,
The documentation for the "-pr" repair option says it repairs only the
first range returned by the partitioner. With vnodes, however, a node owns
many small ranges.
Does that mean that if I run a rolling "nodetool repair -pr" across the
cluster, a whole bunch of ranges will remain unrepaired?
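(For what it's worth, the reason a rolling "-pr" still covers everything with vnodes is that each token's primary range runs from the previous token on the ring to that token, so the primary ranges of all nodes together partition the full ring. A toy sketch of that invariant — not Cassandra code, ring size and token counts are made up:)

```python
# Toy model of a token ring: each node owns several vnode tokens, and a
# token's primary range runs from the previous token on the ring
# (exclusive) to that token (inclusive). All sizes here are hypothetical.
import random

RING = 2**16  # toy ring; real Murmur3 tokens span a 64-bit range

random.seed(7)
tokens = random.sample(range(RING), 24)          # 24 distinct vnode tokens
nodes = {f"node{i}": tokens[i * 8:(i + 1) * 8]   # 8 vnodes per node
         for i in range(3)}

def primary_ranges(tokens_by_node):
    """Map node -> list of (start, end] primary ranges on the ring."""
    owned = sorted((t, n) for n, ts in tokens_by_node.items() for t in ts)
    ranges = {n: [] for n in tokens_by_node}
    for i, (tok, node) in enumerate(owned):
        prev = owned[i - 1][0]  # i == 0 wraps around to the last token
        ranges[node].append((prev, tok))
    return ranges

def in_range(point, start, end):
    if start < end:
        return start < point <= end
    return point > start or point <= end  # wrapping range

ranges = primary_ranges(nodes)
# A rolling `-pr` repairs each node's primary ranges in turn:
repaired = [r for rs in ranges.values() for r in rs]

# Every point on the ring falls inside exactly one primary range,
# so nothing is left unrepaired once every node has run `-pr`.
assert all(sum(in_range(p, s, e) for s, e in repaired) == 1
           for p in range(0, RING, 101))
```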
> g, then I definitely suggest you give it a try.
>
> On Sep 10, 2015, at 1:43 PM, Robert Coli <rc...@eventbrite.com> wrote:
>
> On Thu, Sep 10, 2015 at 10:54 AM, Roman Tkachenko <ro...@mailgunhq.com>
> wrote:
>>
>> [5 second CMS GC] Is my best shot to pla
for GC in system.log to verify.
> > If it is GC, there are a myriad of issues that could cause it, but
> > at least you’ve narrowed it down.
> >
> > On Sep 9, 2015, at 11:05 PM, Roman Tkachenko <ro...@mailgunhq.com>
> wrote:
> >
> > > Hey guys,
> > >
Hey guys,
We've been having issues in the past couple of days with CPU usage / load
average suddenly skyrocketing on some nodes of the cluster, affecting
performance significantly so that the majority of requests start timing
out. It can go on for several hours, with CPU spiking through the roof then
5, 2015 at 1:40 PM, Roman Tkachenko ro...@mailgunhq.com
wrote:
Hey guys,
I have a table with RF=3 and LCS. Data model makes use of wide rows. A
certain query run against this table times out and tracing reveals the
following error on two out of three nodes:
*Scanned over 10 tombstones; query aborted (see
tombstone_failure_threshold)*
This basically
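(For context, the threshold the trace refers to lives in cassandra.yaml; the values below are the usual 2.x defaults, shown for illustration — check your own yaml before relying on them:)

```yaml
# cassandra.yaml -- usual 2.x defaults, shown for illustration
tombstone_warn_threshold: 1000       # log a warning after this many tombstones scanned
tombstone_failure_threshold: 100000  # abort the query (the error above) past this many
```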
Hey guys,
We're running Cassandra with two data directories, let's say
/data/sstables1 and /data/sstables2, which are in fact two separate (but
identical) disks. The problem is that the disk where sstables2 is mounted
is running out of space and large SSTables stored there cannot be compacted.
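(For reference, the two-directory layout described above corresponds to a cassandra.yaml setting like the sketch below; the paths are taken from the message, and how sstables are distributed across directories varies by version:)

```yaml
# cassandra.yaml -- multiple data directories; sstables are spread
# across them, so one disk can fill up while the other still has room
data_file_directories:
    - /data/sstables1
    - /data/sstables2
```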
Hi Dan,
Have you tried using nodetool getendpoints? It shows you the nodes that
currently own a specific key.
Roman
On Thu, Mar 26, 2015 at 1:21 PM, Dan Kinder dkin...@turnitin.com wrote:
Hey all,
In certain cases it would be useful for us to find out which node(s) have
the data for a given
Yep, good point: https://issues.apache.org/jira/browse/CASSANDRA-9045.
On Thu, Mar 26, 2015 at 4:23 PM, Robert Coli rc...@eventbrite.com wrote:
On Wed, Mar 25, 2015 at 6:53 PM, Roman Tkachenko ro...@mailgunhq.com
wrote:
Yup, I increased in_memory_compaction_limit_in_mb to 512MB so the row
AM, Roman Tkachenko ro...@mailgunhq.com
wrote:
Well, as I mentioned in my original email all machines running Cassandra
are running NTP. This was one of the first things I verified and I triple
checked that they all show the same time. Is this sufficient to ensure
clocks are synched between
at 1:57 PM, Roman Tkachenko ro...@mailgunhq.com
wrote:
Okay, so I'm positively going crazy :)
Increasing gc_grace + repair + decreasing gc_grace didn't help. The
columns still appear after the repair. I checked in cassandra-cli and
timestamps for these columns are old, not in the future, so
time delete happens?
Also, how do I find out the value to set gc_grace_seconds to?
Thanks.
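(For reference, gc_grace_seconds is a per-table setting; 864000 seconds — 10 days — is the default, and repairs need to complete within that window or deletes can be resurrected. A sketch, with "contacts" as a hypothetical table name:)

```sql
-- gc_grace_seconds is set per table; 864000 s (10 days) is the default.
-- "contacts" is a hypothetical table name used for illustration.
ALTER TABLE blackbook.contacts WITH gc_grace_seconds = 864000;
```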
On Tue, Mar 24, 2015 at 9:38 AM, Duncan Sands duncan.sa...@gmail.com
wrote:
Hi Roman,
On 24/03/15 17:32, Roman Tkachenko wrote:
Hey guys,
Has anyone seen anything like this behavior or has an explanation for it?
If not, I think I'm gonna file a bug report.
Thanks!
Roman
On Mon, Mar 23, 2015 at 4:45 PM, Roman Tkachenko ro...@mailgunhq.com
wrote:
Hey guys,
We're having a very strange issue: deleted columns get resurrected when
repair is run on a node.
Info about the setup: Cassandra 2.0.13, multi-datacenter with 12 nodes in
one datacenter and 6 nodes in the other. Schema:
cqlsh> DESCRIBE KEYSPACE blackbook;
CREATE KEYSPACE blackbook