If you are only using one Availability Zone per region then you have only one
rack per DC and the NetworkTopologyStrategy will do the right thing.
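For example, with two such DCs the keyspace definition just names each DC and
its replication factor; the keyspace name, DC names and RFs below are only
placeholders:

  CREATE KEYSPACE my_keyspace
    WITH replication = {'class': 'NetworkTopologyStrategy',
                        'us-east': 3,
                        'eu-west': 3};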
So you mean this part doesn't need more testing? This will work for sure?
Did you already do it yourself?
Because you are going to replicate your
Hello Aron,
We have built our new cluster from scratch with version 1.2, using the Murmur3
partitioner. We are not using vnodes at all.
Actually the log is clean and nothing serious; we are still going through the
logs and will post soon if we find anything suspicious.
Our cluster is evenly partitioned
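For reference, that setup maps to cassandra.yaml settings roughly like the
following (the initial_token value is only a placeholder):

  partitioner: org.apache.cassandra.dht.Murmur3Partitioner
  # vnodes disabled: num_tokens left unset, one token assigned per node
  # num_tokens: 256
  initial_token: -9223372036854775808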
First of all thanks for the response. We're trying to copy existing data
into a keyspace with a different name on the same server. I'm not sure
why our operations team wants this.
We're also looking into the sstable copy approach you suggested and that
could work. Still, I think it's odd the
On Wed, 24 Apr 2013, aaron morton wrote:
EDIT: works after switching to testing against the latest
version of the Cassandra database (doh!), and also updating the syntax
per the notes below:
Hi!
I can't find the 1.1.11 package for Debian at
http://www.apache.org/dist/cassandra/debian/
The .deb package is there, but the Packages files still list only version 1.1.10.
Regards,
Patrik
Hello,
Since Sunday, we've been experiencing a really odd issue in our Cassandra
cluster. We recently started receiving errors that messages are being dropped.
But here is the odd part...
When looking in the AWS console, instead of seeing statistics being elevated
during this time, we
Hello,
Just to wrap up on my part of this thread, tuning the CMS initiating occupancy
threshold (-XX:CMSInitiatingOccupancyFraction) to 70 appears to have resolved my
issues with the memory warnings. However, I don't believe this would be a
solution to all the issues mentioned below. Although, it does make
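For reference, that threshold is normally set through the JVM options in
conf/cassandra-env.sh; a minimal sketch (the second flag is a common companion
setting, not something discussed in this thread):

  JVM_OPTS="$JVM_OPTS -XX:CMSInitiatingOccupancyFraction=70"
  JVM_OPTS="$JVM_OPTS -XX:+UseCMSInitiatingOccupancyOnly"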
Not really sure if it has something to do with the schema problems, but
I think the fact that node was down caused us to hit
https://issues.apache.org/jira/browse/CASSANDRA-5179 (a bit different
output on sender's side, but looks similar in general) - after checking
the logs at log level TRACE
Hi David,
We have adapted the bulk loading example provided by DataStax as below to write
SSTables for a column family that uses composite keys, and this is working fine
for us. Hope this will be of use to you.
List<AbstractType<?>> compositeList = new ArrayList<AbstractType<?>>();
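In case it helps, here is a rough, self-contained sketch of how that list feeds
into the bulk writer; the component types, path, keyspace and column family
names are placeholders, and it assumes the Cassandra 1.2
SSTableSimpleUnsortedWriter API:

  import java.io.File;
  import java.util.ArrayList;
  import java.util.List;

  import org.apache.cassandra.db.marshal.AbstractType;
  import org.apache.cassandra.db.marshal.CompositeType;
  import org.apache.cassandra.db.marshal.LongType;
  import org.apache.cassandra.db.marshal.UTF8Type;
  import org.apache.cassandra.dht.Murmur3Partitioner;
  import org.apache.cassandra.io.sstable.SSTableSimpleUnsortedWriter;

  public class CompositeBulkLoadSketch {
      public static void main(String[] args) throws Exception {
          // Component types of the composite comparator (placeholders).
          List<AbstractType<?>> compositeList = new ArrayList<AbstractType<?>>();
          compositeList.add(UTF8Type.instance);
          compositeList.add(LongType.instance);

          // Composite type used as the column family's comparator.
          CompositeType compositeType = CompositeType.getInstance(compositeList);

          // Writer that buffers rows in memory and flushes them to SSTables.
          SSTableSimpleUnsortedWriter writer = new SSTableSimpleUnsortedWriter(
                  new File("/tmp/MyKeyspace/MyColumnFamily"), // output dir (placeholder)
                  new Murmur3Partitioner(),
                  "MyKeyspace",
                  "MyColumnFamily",
                  compositeType, // comparator for composite column names
                  null,          // no sub-comparator (not a super column family)
                  64);           // in-memory buffer size in MB

          // writer.newRow(...) / writer.addColumn(...) calls go here.
          writer.close();
      }
  }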
Sorry, seems I screwed up somehow.
That should be fixed now however.
--
Sylvain
On Wed, Apr 24, 2013 at 12:57 PM, Patrik Modesto
patrik.mode...@gmail.com wrote:
Hi!
I can't find the 1.1.11 package for Debian at
http://www.apache.org/dist/cassandra/debian/
The .deb package is there but
It works now, thanks!
P.
On Wed, Apr 24, 2013 at 2:53 PM, Sylvain Lebresne sylv...@datastax.com wrote:
Sorry, seems I screwed up somehow.
That should be fixed now however.
Thrift and intra-cluster can be different, but what about geo?
As the listen address is used for intra-cluster communication, it must be
changed to a routable address so the other nodes can reach it. For example,
assuming you have an Ethernet interface with address 192.168.1.1, you would
change the
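To make that concrete, the relevant cassandra.yaml lines would look something
like the fragment below; rpc_address is included only as an illustration of the
client-facing counterpart and was not part of the original answer:

  listen_address: 192.168.1.1   # intra-cluster (gossip / storage) traffic
  rpc_address: 192.168.1.1      # client (Thrift) traffic; may differ from listen_address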
On Wed, Apr 24, 2013 at 5:03 AM, Michael Theroux mthero...@yahoo.com wrote:
Another related question. Once we see messages being dropped on one node,
our Cassandra client appears to see this, reporting errors. We use
LOCAL_QUORUM with an RF of 3 on all queries. Any idea why clients would
Good catch since that bug also would have shut us down.
The original problem is that prior to Cassandra 1.1.10 it looks like the
cassandra.yaml values
* thrift_framed_transport_size_in_mb
* thrift_max_message_length_in_mb
were ignored (in favor of effectively no limits). We went from 1.1.5 to
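For context, both settings live in cassandra.yaml; the values below are the
commonly shipped defaults of that era, quoted from memory rather than from this
cluster:

  thrift_framed_transport_size_in_mb: 15
  thrift_max_message_length_in_mb: 16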
I was wondering about the compaction throughput setting. I never see ours get
even close to 16 MB/sec, and I thought this is supposed to throttle compaction,
right? Ours is constantly less than 3 MB/sec from looking at our logs, or do I
have this totally wrong? How can I see the real throughput so that I can
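For reference, the 16 figure is the cassandra.yaml default, and the throttle can
be changed on a live node with nodetool (0 disables it); the values here are
illustrative:

  # cassandra.yaml default
  compaction_throughput_mb_per_sec: 16

  # change at runtime; 0 = unthrottled
  nodetool setcompactionthroughput 0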
On Wed, Apr 24, 2013 at 8:11 AM, Kanwar Sangha kan...@mavenir.com wrote:
What about a geo-link ? Can that be separated out ?
What does geo-link mean here? Cassandra only has two kinds of
communication - client<->servers and servers<->servers.
=Rob
I have noticed the same. I think in the real world your compaction
throughput is limited by other things. If I had to speculate I would say
that compaction can remove expired tombstones; however, doing this requires
bloom filter checks, etc.
I think that setting is more important with multi
On Wed, Apr 24, 2013 at 1:33 PM, Edward Capriolo edlinuxg...@gmail.com wrote:
I think that setting is more important with multi threaded compaction and/or
more compaction slots. In those cases it may actually throttle something.
Or if you're simultaneously doing a repair, which does a
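The settings being referred to there, in 1.1/1.2-era cassandra.yaml terms, are
roughly the following; the concurrent_compactors value is just an example:

  multithreaded_compaction: false   # when true, each compaction uses up to one thread per core plus one per SSTable
  concurrent_compactors: 2          # number of simultaneous compactions ("compaction slots")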
Thanks much!!! Better to hear at least one other person sees the same thing
;). Sometimes these posts just go silent.
Dean
From: Edward Capriolo edlinuxg...@gmail.com
Reply-To: user@cassandra.apache.org
I mean across 2 Data centres.
-Original Message-
From: Robert Coli [mailto:rc...@eventbrite.com]
Sent: 24 April 2013 14:56
To: user@cassandra.apache.org
Subject: Re: Networking
On Wed, Apr 24, 2013 at 8:11 AM, Kanwar Sangha kan...@mavenir.com wrote:
What about a geo-link ? Can that be
Same here. We disabled the throttling and our disk and CPU usage are both low
(< 10%), yet it still takes hours for LCS compaction to finish after a repair.
For this cluster we don't delete any data, so we can rule out tombstones. Not
sure what is holding compaction back. My observation is that for the