Rob,
Repairs are inevitably needed to keep the consistency guarantees. Is it
worthwhile to consider RAID-0 as we add more storage? One can treat the loss of
a disk as the loss of a node, then rebuild the node and repair. Any other
suggestions are most welcome.
-Sri
Thanks for the reply Rob.
Date: Thu, 16 Oct 2014 11:46:52 -0700
Subject: Re: validation compaction
From: rc...@eventbrite.com
To: user@cassandra.apache.org
On Thu, Oct 16, 2014 at 6:41 AM, S C as...@outlook.com wrote:
Bob,
Bob is my father's name. Unless you need a gastrointestinal consult
From: as...@outlook.com
To: user@cassandra.apache.org
Subject: RE: validation compaction
Date: Tue, 14 Oct 2014 17:09:14 -0500
Thanks Rob.
Date: Mon, 13 Oct 2014 13:42:39 -0700
Subject: Re: validation compaction
From: rc...@eventbrite.com
To: user@cassandra.apache.org
On Mon, Oct 13, 2014 at 1:04 PM, S C as...@outlook.com wrote:
I have started repairing a 10 node cluster; one of the tables has 1 TB of
data. I notice that the validation compaction actually shows 3 TB in the
nodetool compactionstats bytes total. However, I have less than 1 TB of data on
the machine. If I take the 3 replicas into consideration, then 3 TB would make
sense.
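During a repair, each replica's range is hashed into a Merkle tree, so the bytes counted by validation compaction can cover more data than the node's own load. A hedged sketch for watching this (the host flag is an assumption for your setup):

```shell
# Watch the validation compactions an in-flight repair has spawned;
# rows of type "Validation" count bytes across every replica range
# being hashed, which is why "bytes total" can exceed local load.
nodetool -h localhost compactionstats
```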
I recently added nodes to an existing cluster and removed some. nodetool
gossipinfo doesn't show the removed nodes, but a thread dump on Cassandra
reveals it is still trying to write to the non-existent old node. I tried
restarting the cluster using -Dcassandra.load_ring_state=false on each node but
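For reference, a minimal sketch of the restart described above, assuming a package install with cassandra-env.sh in /etc/cassandra (paths and service name are assumptions, not taken from the thread):

```shell
# Append the flag so the JVM ignores the locally saved ring state,
# then bounce the node and check gossip for the departed peer.
echo 'JVM_OPTS="$JVM_OPTS -Dcassandra.load_ring_state=false"' | \
  sudo tee -a /etc/cassandra/cassandra-env.sh
sudo service cassandra restart
nodetool gossipinfo   # the removed node should no longer be listed
```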
When I run nodetool ring I see the ownership with different percentages.
However, the Load column does not show a huge deviation. Why is that? I am
using DataStax 3.0.
http://pastebin.com/EcWbZn26
Is it OK to delete files from the backups directory (hardlinks) once I have
them copied remotely? Any caution to take?
Thanks, Kumar
Despite storing a replica on the backup node, what is the guarantee that the
backup node has all the data? Unless you make consistency a priority over the
availability of your cluster.
I could think of another approach.
You can design your cluster with a topology such that your workload is split
off.
Otherwise they will start to use disk space as the live SSTables diverge from
the snapshots/incrementals.
-psanford
On Sat, Jun 14, 2014 at 10:17 AM, S C as...@outlook.com wrote:
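The cycle implied by that answer can be sketched as follows; the paths, host name, and keyspace/column-family names are placeholders, not taken from the thread:

```shell
# Copy the incremental-backup hardlinks offsite, then delete the
# local links so they stop pinning disk as live SSTables diverge.
BACKUPS=/var/lib/cassandra/data/MyKeyspace/MyCF/backups
rsync -av "$BACKUPS"/ backup-host:/archive/MyCF/ &&
  find "$BACKUPS" -type f -delete
# Deleting here is safe: these are hardlinks, so the live SSTables
# in the data directory keep their own links to the same inodes.
```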
, 2014 at 11:27 PM, S C as...@outlook.com wrote:
I am using Cassandra 1.1 (sorry, a bit old) and I am seeing a high pending
compaction count: pending tasks: 67, while active compaction tasks are no more
than 5. I have a 24-CPU machine. Shouldn't I be seeing more compactions? Is
this a pattern of high writes and compactions backing up?
In 1.1 you will OOM far before you hit the limit. In theory, though, the
compaction executor is a little special-cased and will actually throw an
exception (normally it would block).
Chris
On Jun 9, 2014, at 7:49 AM, S C as...@outlook.com wrote:
Thank you all
-Bill
On 06/08/2014 12:32 PM, Jake Luciani wrote:
On Sunday, June 8, 2014, S C as...@outlook.com wrote:
Thank you all for your valuable comments and information.
-SC
Date: Tue, 3 Sep 2013 12:01:59 -0400
From: chris.burrou...@gmail.com
To: user@cassandra.apache.org
CC: fsareshw...@quantcast.com
Subject: Re: row cache
On 09/01/2013 03:06 PM, Faraaz Sareshwala wrote:
Yes, that is correct.
It is my understanding that the row cache is in memory (not on disk). It could
live on heap or in native memory depending on the cache provider? Is that
right?
-SC
Date: Fri, 23 Aug 2013 18:58:07 +0100
From: b...@dehora.net
To: user@cassandra.apache.org
Subject: Re: row cache
, right? I thought that truncate, like drop table, created a snapshot (unless
that feature had been disabled in your yaml).
On Thu, Aug 29, 2013 at 6:51 PM, Robert Coli rc...@eventbrite.com wrote:
On Thu, Aug 29, 2013 at 3:48 PM, S C as...@outlook.com wrote:
Do we have to run nodetool repair
I see a high count of All time blocked for FlushWriter in nodetool tpstats.
Is it how many were blocked ever since the server came online? Can somebody
explain to me what it is? I really appreciate it.
http://pastebin.com/GAiu2q74
Thanks,SC
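As a sketch, the counter in question can be read straight from tpstats; it is cumulative for the life of the process:

```shell
# Print the header row plus the FlushWriter line; the "All time
# blocked" column is a running total since the JVM came up.
nodetool tpstats | awk 'NR==1 || /FlushWriter/'
```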
Thanks Rob. Will it contribute to any performance problems?
Thanks,SC
Date: Thu, 29 Aug 2013 10:57:30 -0700
Subject: Re: Flush writer all time blocked
From: rc...@eventbrite.com
To: user@cassandra.apache.org
On Thu, Aug 29, 2013 at 10:49 AM, S C as...@outlook.com wrote:
I see a high count of All time blocked for FlushWriter in nodetool tpstats.
Is it how many were blocked ever since the server came online? Can somebody
explain to me what it is? I really appreciate it.
Yes.
Flush Writer thread pool is the thread
Do we have to run nodetool repair or nodetool cleanup after Truncating a
Column Family?
Thanks,SC
Thanks,SC
Date: Tue, 25 Jun 2013 11:20:03 -0700
Subject: Re: copy data between clusters
From: rc...@eventbrite.com
To: user@cassandra.apache.org
On Mon, Jun 24, 2013 at 8:35 PM, S C as...@outlook.com wrote:
this time you merely have a connectivity issue e.g. a firewall blocking
traffic.
From: S C
Sent: Tuesday, June 25, 2013 5:28 PM
To: user@cassandra.apache.org
Subject: RE: copy data between clusters
Bob and Arthur - thanks for your inputs.
I tried sstableloader but ran into the issue below
I have a scenario here. I have cluster A and cluster B, both running Cassandra
1.1. I need to copy data from cluster A to cluster B. Cluster A has a few
keyspaces that I need to copy over to cluster B. What are my options?
Thanks,SC
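One hedged option for a 1.1-era migration is snapshot plus sstableloader. The keyspace, column family, and host names below are placeholders, and the exact sstableloader arguments vary by version (check sstableloader --help for yours):

```shell
# On each node of cluster A, snapshot the keyspace to get a
# consistent set of SSTables:
nodetool snapshot -t migrate MyKeyspace
# Stage the snapshot files in a MyKeyspace/MyCF/ directory, then
# stream them into cluster B:
sstableloader -d b-node1,b-node2 MyKeyspace/MyCF
```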
What version of Cassandra are you using? Did you look at whether Cassandra is
undergoing GC?
-SC
From: james@metaswitch.com
To: user@cassandra.apache.org
Subject: Cassandra periodically stops responding to write requests under load
Date: Fri, 14 Jun 2013 14:19:57 +
Hello,
I have been
How big is your HEAP?
From: as...@outlook.com
To: user@cassandra.apache.org
Subject: RE: Cassandra periodically stops responding to write requests under
load
Date: Fri, 14 Jun 2013 10:09:24 -0500
What was the node doing right before the ERROR? Can you post some more log?
Thanks,SC
Date: Fri, 31 May 2013 10:57:38 +0530
From: himanshu.jo...@orkash.com
To: user@cassandra.apache.org
Subject: java.lang.AssertionError on starting the node
Hi,
I have created a 2
I have added two nodes to the cluster running 1.1.9, and when I run nodetool
cleanup I see the following in the logs.
INFO [CompactionExecutor:7] 2013-05-28 22:41:58,480 CompactionManager.java
(line 531) Cleanup cannot run before a node has joined the ring
However, nodetool ring/gossip/info
I was in the middle of an upgrade to 1.1.9. I brought up one node on 1.1.9
while the others were running 1.1.5. Once that node was on 1.1.9, it no longer
recognized the other nodes in the ring.
On 192.168.56.10 and 11
192.168.56.10  DC1-Cass  RAC1  Up  Normal  28.06 GB
of the down nodes? Did you run upgradesstables? You need to run
upgradesstables when moving from 1.1.7 to 1.1.9.
On Apr 4, 2013, at 6:11 PM, S C as...@outlook.com wrote:
I am using Cassandra 1.1.5.
nodetool repair is not coming back on the command line. Did it run
successfully? Did it hang? How do you find out if the repair was successful? I
did not find anything in the logs. nodetool compactionstats and nodetool
netstats are clean.
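A hedged checklist for deciding whether a silent 1.1 repair is still alive (the log path assumes a package install):

```shell
nodetool compactionstats   # look for active Validation compactions
nodetool netstats          # look for open streaming sessions
tail -n 50 /var/log/cassandra/system.log | grep -i 'repair\|AntiEntropy'
# If all three stay quiet for a long time, the session has likely
# died; 1.1 repairs can hang silently and must be re-run.
```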
- Original Message -
From: S C as...@outlook.com
To: user@cassandra.apache.org
Sent: Monday, March 25, 2013 2:55:30 PM
Subject: nodetool repair hung?
Apparently max user processes was set very low on the machine.
How to check: ulimit -u
Set it to unlimited in /etc/security/limits.conf:
* soft nproc unlimited
* hard nproc unlimited
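A quick sanity check, runnable in any Linux shell, for the limit this fix targets:

```shell
# Show the current per-user process/thread limit; a low value here
# is what produces "unable to create new native thread".
ulimit -u
```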
From: as...@outlook.com
To: user@cassandra.apache.org
Subject: RE: java.lang.OutOfMemoryError: unable to create new
I have a Cassandra node that is going down frequently with
'java.lang.OutOfMemoryError: unable to create new native thread'. It's a 16 GB
VM, out of which 4 GB is set as Xmx, and there are no other processes running
on the VM. I have about 300 clients connecting to this node on average. I have
no
I think I figured out where the issue is. I will keep you posted soon.
From: as...@outlook.com
To: user@cassandra.apache.org
Subject: java.lang.OutOfMemoryError: unable to create new native thread
Date: Fri, 15 Mar 2013 17:54:25 -0500
I have a Cassandra node that is going down frequently with
-Aaron Morton
Freelance Cassandra Developer
New Zealand
@aaronmorton
http://www.thelastpickle.com
On 16/02/2013, at 4:41 AM, S C as...@outlook.com wrote:
I appreciate any advice or pointers on this.
Thanks in advance.
From: as...@outlook.com
To: user@cassandra.apache.org
Subject: Question on Cassandra Snapshot
Date: Thu, 14 Feb 2013 20:47:14 -0600
I have been looking at incremental backups and snapshots. I have done some
experimentation but could not come to a conclusion. Can somebody please help me
understand it right?
/data is my data partition.
With incremental_backups turned OFF in cassandra.yaml - are all SSTables
under
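To make the comparison concrete, a sketch of the two mechanisms (the keyspace name and data-directory layout are assumptions for a default install):

```shell
# An explicit snapshot hardlinks the current SSTables under
#   <data>/<keyspace>/<cf>/snapshots/<tag>/
nodetool snapshot -t before_change MyKeyspace
# With incremental_backups: true in cassandra.yaml, every newly
# flushed SSTable is additionally hardlinked into
#   <data>/<keyspace>/<cf>/backups/  as it reaches disk.
```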
I have some data in my keyspaces. When I increase the replication factor of an
existing keyspace, say from 2 to 3, will a nodetool repair create a new replica
on one of the other nodes in the cluster? Can somebody explain?
Thanks, SC
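On a 1.1 cluster the change itself can be sketched as below; the keyspace name is a placeholder, and repair must then run on every node so the new replicas are actually populated:

```shell
# Raise the replication factor with cassandra-cli...
cassandra-cli -h localhost <<'EOF'
update keyspace MyKeyspace
  with strategy_options = {replication_factor:3};
EOF
# ...then repair on each node; only repair builds the new replicas,
# existing data is not redistributed automatically.
nodetool repair MyKeyspace
```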
In one of the scenarios that I encountered, I needed to change the token on
the node. I added the new token and started the node with
-Dcassandra.load_ring_state=false, in anticipation that the node would not pick
up the locally persisted ring state. Is that the right way to do it? Or
At what point is it OK to move the incremental backup from the server to an
offsite location? Is it recommended to flush the node before doing this?
Thanks, SC
I am trying to change the token of the Cassandra node but it is using a saved
token. I have tried -Dcassandra.load_ring_state=false in the startup script
but did not find it useful. Any thoughts?
Thanks,SC
Cheers
-Aaron Morton
Freelance Cassandra Developer
New Zealand
@aaronmorton
http://www.thelastpickle.com
On 29/01/2013, at 6:29 AM, S C as...@outlook.com wrote:
One of our nodes in a 3-node cluster drifted by ~20-25 seconds. While I
figured this out pretty quickly, I had a few questions that I am looking for
answers to.
We can always be proactive in keeping the time in sync. But is there any way
to recover from a time drift (in a reactive manner)? Since it
Aaron,
Can this also be considered?
Connect to the node using cassandra-cli:
use system;
set LocationInfo[utf8('L')][utf8('ClusterName')]=utf8('new cluster name');
exit;
Run nodetool flush on the node.
Update the cassandra.yaml file with the new cluster_name.
Restart the node.
Thanks,SC
From: aa...@thelastpickle.com
I tried Helenos 1.3. It looks pretty good.
I created a test account with the role ROLE_USER. With this user, I am able to
create KS/CFs and drop them as well. Is this intended? I was expecting that a
user with the role ROLE_USER would be able to browse data but not create or
delete it.