Hello,
I have a cluster of 4 nodes and two of them are on a different schema. I tried to run the commands described in the FAQ section, but no luck (http://wiki.apache.org/cassandra/FAQ#schema_disagreement).
After running the commands, I get back to the same issue. Cannot afford to lose the data.
Robert Coli rc...@eventbrite.com wrote:
On Wed, May 8, 2013 at 5:40 PM, srmore comom...@gmail.com wrote:
After running the commands, I get back to the same issue. Cannot afford to lose the data, so I guess this is the only option for me. And unfortunately I am using 1.0.12 (cannot upgrade).
-
Aaron Morton
Freelance Cassandra Consultant
New Zealand
@aaronmorton
http://www.thelastpickle.com
On 10/05/2013, at 9:16 AM, srmore comom...@gmail.com wrote:
Thanks Rob!
Tried the steps; that did not work. However, I was able to resolve the problem by syncing the clocks.
Hello,
I am observing that my performance decreases drastically as my data size grows. I have a 3 node cluster with 64 GB of RAM and my data size is around 400GB on all the nodes. I also see that when I restart Cassandra the performance goes back to normal and then starts decreasing again.
http://www.datastax.com/dev/blog/performance-improvements-in-cassandra-1-2
On Wed, May 29, 2013 at 11:32 PM, srmore comom...@gmail.com wrote:
Hello,
I am observing that my performance is drastically decreasing when my data
size grows. I have a 3 node cluster with 64 GB of ram and my data size
/Is-it-safe-to-stop-a-read-repair-and-any-suggestion-on-speeding-up-repairs-td6607367.html
Thanks
On May 29, 2013, at 9:32 PM, srmore comom...@gmail.com wrote:
Hello,
I am observing that my performance is drastically decreasing when my data
size grows. I have a 3 node cluster with 64 GB of ram
I am a bit confused when using the consistency level for multi datacenter
setup. Following is my setup:
I have 4 nodes the way these are set up are
Node 1 DC 1 - N1DC1
Node 2 DC 1 - N2DC1
Node 1 DC 2 - N1DC2
Node 2 DC 2 - N2DC2
I set up a delay between the two datacenters (DC1 and DC2), around 1
With CL=TWO it appears that a node randomly picks the node from the other datacenter to get the data, i.e. one node in the datacenter consistently underperforms.
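For what it's worth, one way to keep reads entirely inside the local datacenter (assuming the keyspace uses NetworkTopologyStrategy and a DC-aware snitch) is a DC-local consistency level instead of CL=TWO. A sketch that only prints the cqlsh command rather than executing it; the keyspace and query are hypothetical:

```shell
# Sketch: LOCAL_QUORUM confines the read quorum to the coordinator's datacenter,
# so no replica in the remote DC is consulted on the read path.
# run() prints the command instead of executing it, for illustration only.
run() { echo "+ $*"; }

run cqlsh -e "CONSISTENCY LOCAL_QUORUM; SELECT * FROM myks.mytable WHERE id = 1;"
```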
On Mon, Jun 3, 2013 at 3:21 PM, Hiller, Dean dean.hil...@nrel.gov wrote:
What happens when you use CL=TWO?
Dean
From: srmore
DC nodes out of the list of nodes to read from for you as well. I need to circle back with my teammate to check whether he got his fix posted to the dev list.
Later,
Dean
From: srmore comom...@gmail.com
Reply-To: user@cassandra.apache.org
Yup, RF is 2 for both the datacenters.
On Mon, Jun 3, 2013 at 3:36 PM, Sylvain Lebresne sylv...@datastax.comwrote:
What's your replication factor? Do you have RF=2 on both datacenters?
On Mon, Jun 3, 2013 at 10:09 PM, srmore comom...@gmail.com wrote:
I am a bit confused when using…
On Mon, Jun 3, 2013 at 3:37 PM, Hiller, Dean dean.hil...@nrel.gov wrote:
Our badness threshold is currently 0.1 (just checked). Our website used to get slow during a slow-node period until we rolled out our own patch.
Dean
From: srmore comom...@gmail.com
Reply-To: user
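The threshold Dean mentions is presumably the dynamic snitch setting in cassandra.yaml; a sketch of the relevant fragment, with an illustrative value:

```yaml
# cassandra.yaml: dynamic snitch tuning (illustrative value).
# Roughly: how much worse (as a fraction) the preferred replica may score
# before reads are routed to a different replica.
dynamic_snitch_badness_threshold: 0.1
```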
Hello All,
We are thinking of going with Cassandra on an 8-core machine; are there any optimizations that can help us here?
I have seen that during the startup stage Cassandra uses only one core; is there a way we can speed up the startup process?
Thanks!
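Not sure about startup parallelism, but for steady-state throughput on a multi-core box the usual knobs are the concurrency settings in cassandra.yaml. A sketch; the values follow the commonly cited rules of thumb (roughly 16 per data disk for reads, 8 per core for writes) and are assumptions to benchmark, not recommendations:

```yaml
# cassandra.yaml: concurrency knobs for a multi-core machine (illustrative).
concurrent_reads: 32     # often sized ~16 x number of data disks
concurrent_writes: 64    # often sized ~8 x number of cores (8 cores here)
```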
I am seeing similar behavior; in my case I have 2 nodes in each datacenter and one node always has high latency (equal to the latency between the two datacenters). When one of the datacenters is shut down, the latency drops.
I am curious to know whether anyone else has these issues and if yes
I see an issue when I run high traffic to the Cassandra nodes: the heap fills to about 94% (which is expected), but the thing that confuses me is that the heap usage never goes down after the traffic is stopped (at least, it appears so). I kept the nodes up for a day after stopping the traffic
but I've got to work with this for now.
On Tue, Jun 18, 2013 at 12:13 PM, Robert Coli rc...@eventbrite.com wrote:
On Tue, Jun 18, 2013 at 8:25 AM, srmore comom...@gmail.com wrote:
I see an issues when I run high traffic to the Cassandra nodes, the heap
gets full to about 94% (which is expected)
Which
--
*From: *Robert Coli rc...@eventbrite.com
*To: *user@cassandra.apache.org
*Sent: *Tuesday, June 18, 2013 10:43:13 AM
*Subject: *Re: Heap is not released and streaming hangs at 0%
On Tue, Jun 18, 2013 at 10:33 AM, srmore comom...@gmail.com wrote:
But then shouldn't the JVM GC
, 2013 at 5:58 AM, srmore comom...@gmail.com wrote:
On Fri, Jun 21, 2013 at 2:53 AM, aaron morton
aa...@thelastpickle.comwrote:
nodetool -h localhost flush didn't do much good.
Do you have 100's of millions of rows?
If so, see recent discussions about reducing bloom_filter_fp_chance.
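For reference, bloom_filter_fp_chance is a per-column-family setting; on versions with CQL table properties it can be changed like this. A sketch that only prints the commands rather than running them; keyspace and table names are hypothetical, and the exact rebuild command varies by version:

```shell
# Sketch: raise the bloom filter false-positive chance to shrink the on-heap
# bloom filters, then rewrite SSTables so the change takes effect.
# run() prints instead of executing, for illustration only.
run() { echo "+ $*"; }

run cqlsh -e "ALTER TABLE myks.mytable WITH bloom_filter_fp_chance = 0.1;"
run nodetool upgradesstables myks mytable
```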
We are planning to move data from a 2 node cluster to a 3 node cluster. We plan to copy the data (snapshots) from the two nodes to the new 2 nodes and hope that Cassandra will sync it to the third node. Will this work? Are there any other commands to run after we are done migrating, like
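A sketch of the usual post-copy steps, printed rather than executed; the keyspace name is hypothetical. Note that the third node is not populated automatically, which is what the repair step is for:

```shell
# Sketch: after copying snapshot SSTables into the new nodes' data directories.
# run() prints instead of executing, for illustration only.
run() { echo "+ $*"; }

run nodetool refresh myks mytable   # load the copied SSTables without a restart
run nodetool repair myks            # sync replicas so the third node receives its data
run nodetool cleanup                # drop data each node no longer owns
```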
On Fri, Jul 5, 2013 at 6:08 PM, Robert Coli rc...@eventbrite.com wrote:
On Thu, Jul 4, 2013 at 10:03 AM, srmore comom...@gmail.com wrote:
We are planning to move data from a 2 node cluster to a 3 node cluster.
We are planning to copy the data from the two nodes (snapshot) to the new 2
nodes
Thanks Takenori,
Looks like the tool provides some good info that people can use. It would
be great if you can share it with the community.
On Thu, Jul 11, 2013 at 6:51 AM, Takenori Sato ts...@cloudian.com wrote:
Hi,
I think it is a common headache for users running a large Cassandra
All,
There are some operations that demand the use of a lock, and I was wondering whether Cassandra has a built-in locking mechanism. After hunting the web for a while it appears that the answer is no, although I found this outdated wiki page which describes the algorithm
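Cassandra itself has no general lock primitive in these versions; people typically reach for an external coordinator such as ZooKeeper, or, from Cassandra 2.0 on, lightweight transactions (Paxos-backed compare-and-set), which can emulate a lease-style lock. A sketch, printed rather than executed; the schema is hypothetical:

```shell
# Sketch: a 2.0+ lightweight transaction. The INSERT succeeds for exactly one
# contender, which is the basis of a simple lock/lease.
# run() prints instead of executing, for illustration only.
run() { echo "+ $*"; }

run cqlsh -e "INSERT INTO myks.locks (name, owner) VALUES ('job-42', 'node-a') IF NOT EXISTS;"
```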
On Mon, Aug 12, 2013 at 2:49 PM, Robert Coli rc...@eventbrite.com wrote:
On Mon, Aug 12, 2013 at 12:31 PM, srmore comom...@gmail.com wrote:
There are some operations that demand the use of a lock, and I was wondering whether Cassandra has a built-in locking mechanism. After hunting the web
I have a 5 node cluster with a load of around 300GB each. A node went down
and does not come up. I can see the following exception in the logs.
ERROR [main] 2013-09-09 21:50:56,117 AbstractCassandraDaemon.java (line
139) Fatal exception in thread Thread[main,5,main]
java.lang.OutOfMemoryError:
I would be interested to know that too; it would be great if anyone can share how they do (or do not) track or monitor cross-datacenter migrations.
Thanks!
On Wed, Sep 4, 2013 at 10:13 AM, Anand Somani meatfor...@gmail.com wrote:
Hi,
Scenario is a cluster spanning across datacenters and we
copies.
*From:* srmore [mailto:comom...@gmail.com]
*Sent:* Tuesday, September 10, 2013 6:16 AM
*To:* user@cassandra.apache.org
*Subject:* Error during startup - java.lang.OutOfMemoryError: unable to create new native thread
I have a 5 node cluster with a load of around 300GB
) unlimited
max user processes (-u) 515038
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited
Has anyone run into this ?
[1] http://www.datastax.com/docs/1.1/troubleshooting/index
On Wed, Sep 11, 2013 at 8:47 AM, srmore comom...@gmail.com
Was too fast on the send button, sorry.
The thing I wanted to add was
pending signals (-i) 515038
which looks odd to me; could that be related?
On Thu, Sep 19, 2013 at 4:53 PM, srmore comom...@gmail.com wrote:
I hit this issue again today and looks like changing -Xss
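"Unable to create new native thread" is usually a process/thread limit or per-thread stack reservation problem rather than heap exhaustion. A quick check, assuming a Linux-like shell:

```shell
# Check the limits that gate native thread creation.
echo "max user processes: $(ulimit -u)"   # threads count against this on Linux
echo "open files:         $(ulimit -n)"
# Each Java thread reserves one stack. A smaller -Xss (set via JVM_OPTS in
# cassandra-env.sh, e.g. -Xss256k) lets more threads fit in the same address space.
```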
Does anyone know what the heap size would roughly be for Cassandra with 1TB of data? We started with about 200 G, and now on one of the nodes we are already at 1 TB. We were using 8G of heap and that served us well up until we reached 700 G, where we started seeing failures and nodes flipping.
on? Essentially heap size is a function of the number of keys/metadata. In Cassandra 1.2 a lot of the metadata, like bloom filters, was moved off heap.
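For a rough baseline, cassandra-env.sh in the 1.x line computed the default max heap as max(min(RAM/2, 1G), min(RAM/4, 8G)), which is why 8G is such a common ceiling regardless of data size. A reproduction of that arithmetic:

```shell
# Reproduce the 1.x cassandra-env.sh default heap calculation (sizes in MB).
calc_heap() {
  local ram_mb=$1
  local half=$(( ram_mb / 2 ))
  if [ "$half" -gt 1024 ]; then half=1024; fi        # min(RAM/2, 1G)
  local quarter=$(( ram_mb / 4 ))
  if [ "$quarter" -gt 8192 ]; then quarter=8192; fi  # min(RAM/4, 8G)
  if [ "$half" -gt "$quarter" ]; then echo "$half"; else echo "$quarter"; fi
}

calc_heap 65536   # 64 GB of RAM -> prints 8192, i.e. the 8G ceiling
```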
On Tue, Oct 1, 2013 at 9:34 PM, srmore comom...@gmail.com wrote:
Does anyone know what would roughly be the heap size for cassandra with
1TB of data ? We started
I changed my index_interval from 128 to 512; does it make sense to increase it more than this?
On Wed, Oct 2, 2013 at 9:30 AM, cem cayiro...@gmail.com wrote:
Have a look at index_interval.
Cem.
On Wed, Oct 2, 2013 at 2:25 PM, srmore comom...@gmail.com wrote
512 is fine. Could you tell us more about your traffic characteristics?
Cem
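In the 1.x line, index_interval lives in cassandra.yaml (later versions move it to per-table min/max_index_interval properties); a sketch of the fragment:

```yaml
# cassandra.yaml: sample every Nth partition key into the on-heap index summary.
# Larger values reduce heap usage at the cost of slightly slower reads
# (the default is 128).
index_interval: 512
```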
I don't know whether this is possible but was just curious: can you query for the data in the remote datacenter with CL.ONE?
There could be a case where one might not have a QUORUM and would like to read the most recent data, which includes the data from the other datacenter. AFAIK, to reliably
Thanks Rob, that helps!
On Fri, Oct 25, 2013 at 7:34 PM, Robert Coli rc...@eventbrite.com wrote:
On Fri, Oct 25, 2013 at 2:47 PM, srmore comom...@gmail.com wrote:
I don't know whether this is possible but was just curious, can you query
for the data in the remote datacenter with a CL.ONE
We ran into similar heap issues a while ago on 1.0.11. I am not sure whether you have the luxury of upgrading to at least 1.2.9; we did not. After a lot of painful attempts and weeks of testing (just as in your case) the following settings worked (they did not completely relieve the heap
I recently upgraded to 1.2.9 and I am seeing a lot of REQUEST_RESPONSE and MUTATION messages being dropped.
This happens when I have multiple nodes in the cluster (about 3 nodes) and I send traffic to only one node. I don't think the traffic is that high; it is around 400 msg/sec with 100
The problem was the cross_node_timeout value: I had it set to true and my NTP clocks were not synchronized; as a result, some of the requests were dropped.
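For anyone hitting the same thing, the setting in question is in cassandra.yaml; a sketch of the fragment:

```yaml
# cassandra.yaml: when true, a replica computes how long a request has already
# been in flight from the coordinator's timestamp and drops it if the timeout
# has elapsed -- which misfires badly when node clocks are not NTP-synchronized.
cross_node_timeout: false
```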
Thanks,
Sandeep
On Sat, Nov 9, 2013 at 6:02 PM, srmore comom...@gmail.com wrote:
I recently upgraded to 1.2.9 and I am seeing a lot
I might be missing something obvious here; for some reason I cannot seem to get internode_compression = all to work. I am getting the following exception. I am using Cassandra 1.2.9 and have snappy-java-1.0.5.jar in my classpath. A Google search did not return any useful results; has anyone seen this?
Someone is having a similar issue:
http://mail-archives.apache.org/mod_mbox/cassandra-commits/201307.mbox/%3CJIRA.12616012.1352862646995.6820.1373083550278@arcas%3E
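For context, the setting being toggled is in cassandra.yaml; a sketch of the fragment:

```yaml
# cassandra.yaml: compress traffic between nodes. 'all' requires a working
# snappy-java native library on every node; 'dc' limits compression to
# cross-datacenter links; 'none' disables it.
internode_compression: all
```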
*From:* srmore [mailto:comom...@gmail.com]
*Sent:* 11 November 2013 21:32
*To:* user@cassandra.apache.org
*Subject
Hello,
We moved to Cassandra 1.2.9 from 1.0.11 to take advantage of the off-heap bloom filters and other improvements.
We see a lot of messages dropped under high load. We noticed that this happens when we do heavy reads AND writes simultaneously (we read first and check whether the key exists, and if not
We have a 3 node cluster running Cassandra 1.2.12; they are pretty big machines, 64G RAM with 16 cores, and the Cassandra heap is 8G.
The interesting observation is that when I send traffic to one node, its performance is 2x better than when I send traffic to all the nodes. We ran 1.0.11 on the same box and
-XX:MaxTenuringThreshold=2
-XX:CMSInitiatingOccupancyFraction=75
-XX:+UseCMSInitiatingOccupancyOnly
Yes, compactions/GCs could spike the CPU; I had similar behavior with my setup.
Were you able to get around it?
-VK
On Fri, Dec 6, 2013 at 7:40 PM, srmore comom...@gmail.com wrote:
We have a 3
The spikes are not that significant in our case and we are running the cluster with a 1.7 GB heap.
Are these spikes causing any issues at your end?
There are no big spikes; overall performance seems to be about 40% lower.
On Fri, Dec 6, 2013 at 9:10 PM, srmore comom...@gmail.com wrote
...@gmail.com wrote:
Hi srmore,
Perhaps you could use jconsole and connect to the JVM using JMX. Then, under the MBeans tab, start inspecting the GC metrics.
/Jason
On Fri, Dec 6, 2013 at 11:40 PM, srmore comom...@gmail.com wrote:
On Fri, Dec 6, 2013 at 9:32 AM, Vicky Kak vicky@gmail.com
), capacity 0 (bytes), 0 hits, 0 requests,
NaN recent hit rate, 0 save period in seconds
On Fri, Dec 6, 2013 at 11:15 AM, Vicky Kak vicky@gmail.com wrote:
How long has the server been up: hours, days, months?
On Fri, Dec 6, 2013 at 10:41 PM, srmore comom...@gmail.com wrote
-1-0-improved-memory-and-disk-space-management
Flushing 2.6 GB to disk might slow performance if it happens frequently; maybe you have lots of write operations going on.
On Fri, Dec 6, 2013 at 10:06 PM, srmore comom...@gmail.com wrote:
On Fri, Dec 6, 2013 at 9:59 AM, Vicky
.
-
Aaron Morton
New Zealand
@aaronmorton
Co-Founder Principal Consultant
Apache Cassandra Consulting
http://www.thelastpickle.com
On 7/12/2013, at 8:05 am, srmore comom...@gmail.com wrote:
Changed memtable_total_space_in_mb to 1024; still no luck.
On Fri, Dec 6, 2013 at 11
-
Aaron Morton
New Zealand
@aaronmorton
Co-Founder Principal Consultant
Apache Cassandra Consulting
http://www.thelastpickle.com
On 12/12/2013, at 3:39 pm, srmore comom...@gmail.com wrote:
Thanks Aaron
On Wed, Dec 11, 2013 at 8:15 PM, Aaron Morton aa...@thelastpickle.comwrote
to update the data from RandomPartitioner to Murmur3? (upgradesstables?)
On Fri, Dec 6, 2013 at 10:36 AM, srmore comom...@gmail.com wrote:
On Fri, Dec 6, 2013 at 9:59 AM, Vicky Kak vicky@gmail.com wrote:
You have passed the JVM configurations, not the Cassandra configurations.
What version of Cassandra are you running? I used to see them a lot with 1.2.9; I could correlate the dropped messages with heap usage almost every time, so check the logs to see whether you are getting GC'd. In this respect 1.2.12 appears to be more stable. Moving to 1.2.12 took care of this for
Hello Kyle,
For your first question, you need to create aliases to localhost, e.g. 127.0.0.2, 127.0.0.3, etc.; this should get you going.
About the logging issue: I think your instance is failing before it gets to log anything. As an example, you can start one instance and make sure it logs
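The loopback aliases can be sketched like this, printed rather than executed (the exact command differs between Linux and macOS, and the addresses are illustrative):

```shell
# Sketch: alias extra loopback addresses so several local Cassandra instances
# can each bind their own listen_address.
# run() prints instead of executing, for illustration only.
run() { echo "+ $*"; }

run sudo ip addr add 127.0.0.2/8 dev lo      # Linux
run sudo ifconfig lo0 alias 127.0.0.3 up     # macOS
```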
Sorry to hear that, Robert; I ran into a similar issue a while ago. I had an extremely heavy write and update load; as a result, Cassandra (1.2.9) was constantly flushing to disk and getting GC'd. I tried exactly the same steps you tried (tuning memtable_flush_writers (to 2) and
memtable_flush_queue_size
tl;dr: Decommissioning datacenters by running nodetool decommission on a node deletes the data on the decommissioned node; is this expected?
I am trying out some tests on my multi-datacenter setup. Somewhere in the docs I read that decommissioning a node will stream its data to other nodes but
).
On Thu, Aug 7, 2014 at 11:43 AM, Robert Coli rc...@eventbrite.com wrote:
On Thu, Aug 7, 2014 at 8:26 AM, srmore comom...@gmail.com wrote:
tl;dr: Decommissioning datacenters by running nodetool decommission on a
node deletes the data on the decommissioned node - is this expected ?
What does
On Thu, Aug 7, 2014 at 12:27 PM, Robert Coli rc...@eventbrite.com wrote:
On Thu, Aug 7, 2014 at 10:04 AM, srmore comom...@gmail.com wrote:
Sorry for being ambiguous. By deletes I mean that after running decommission I can no longer see any keyspaces owned by this node or replicated by other nodes
I tried using 'nodetool rebuild' after I add the datacenters; same outcome, and after I decommission, my keyspaces are getting wiped out. I don't understand this.
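For the record, the flow being attempted can be sketched like this, printed rather than executed (the datacenter name is hypothetical). The key distinction: rebuild pulls data into a new DC, while decommission streams the node's own ranges away before removing it:

```shell
# Sketch: populate a newly added datacenter, then retire a node.
# run() prints instead of executing, for illustration only.
run() { echo "+ $*"; }

run nodetool rebuild DC1     # on each new node: stream existing data from DC1
run nodetool decommission    # streams THIS node's ranges to the remaining replicas
```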
On Thu, Aug 7, 2014 at 1:54 PM, srmore comom...@gmail.com wrote:
Thanks for the detailed reply Ken, this really helps. I also