From: Alain RODRIGUEZ arodr...@gmail.com
To: user@cassandra.apache.org; Michael Theroux mthero...@yahoo.com
Sent: Tuesday, June 16, 2015 12:07 AM
Subject: Re: Nodetool ring and Replicas after 1.2 upgrade
Maybe check the system.log to see if there are any exceptions and/or errors? Check
as well if they are having
To: user@cassandra.apache.org; Michael Theroux mthero...@yahoo.com
Sent: Tuesday, June 16, 2015 4:43 PM
Subject: Re: Nodetool ring and Replicas after 1.2 upgrade
Hi Michael,
I can barely access the internet right now and was not able to check the outputs on my
computer, yet the first thing that comes
Hello,
We (finally) have just upgraded from Cassandra 1.1 to Cassandra 1.2.19.
Everything appears to be up and running normally; however, we have noticed
unusual output from nodetool ring. There is a new (to us) field, Replicas, in
the nodetool output, and this field, seemingly at random, is
Hello Alain,
We switched from EC2 to VPC a couple of years ago. The process for us was
long, slow, and multi-step for our (at the time) 6-node cluster.
In our case, we don't need to consider multi-DC. However, in our
infrastructure we were rapidly running out of IP addresses, and wished to move
to VPC to give us a nearly
We personally use the EC2Snitch; however, we don't have the multi-region
requirements you do,
-Mike
From: Alain RODRIGUEZ arodr...@gmail.com
To: user@cassandra.apache.org
Sent: Thursday, June 5, 2014 9:14 AM
Subject: Re: VPC AWS
I think you can define VPC
my memory on this may be a little fuzzy :)
-Mike
From: Aiman Parvaiz ai...@shift.com
To: user@cassandra.apache.org; Michael Theroux mthero...@yahoo.com
Sent: Thursday, June 5, 2014 12:55 PM
Subject: Re: VPC AWS
Michael,
Thanks for the response, I am about
From: Aiman Parvaiz ai...@shift.com
To: Michael Theroux mthero...@yahoo.com
Cc: user@cassandra.apache.org user@cassandra.apache.org
Sent: Thursday, June 5, 2014 2:39 PM
Subject: Re: VPC AWS
Thanks for this info, Michael. As far as restoring a node in the public VPC is
concerned, I was thinking
Hi Marcelo,
Cassandra provides an eventually consistent model for backups. You can do
staggered backups of data, with the idea that if you restore a node, and then
do a repair, your data will be once again consistent. Cassandra will not
automatically copy the data to other nodes (other than
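A minimal sketch of that staggered model (not the exact procedure from this thread; the keyspace name, snapshot tag, and data paths are illustrative defaults):
    # Take a snapshot on each node in turn (snapshots are hard links, so cheap):
    nodetool snapshot MyKeyspace -t nightly
    # Ship the snapshot off-node; the path assumes the default 1.1+ data layout:
    rsync -a /var/lib/cassandra/data/MyKeyspace/*/snapshots/nightly/ \
        backuphost:/backups/$(hostname)/
    # After restoring a node from such a backup, repair it so it converges
    # with its replicas before relying on its data:
    nodetool repair MyKeyspace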
One more note,
When we did this conversion, we were on Cassandra 1.1.X. You didn't mention
what version of Cassandra you were running,
Thanks,
-Mike
On Oct 23, 2013, at 10:05 AM, Michael Theroux wrote:
When we made a similar move, for an unknown reason (I didn't hear any
feedback from
A couple questions:
1) How did you determine that the record is deleted on only one node? Are you
looking for tombstones, or the original entry that was inserted? Note that when
an item is deleted, the original entry can still be in an SSTABLE somewhere,
and the tombstone can be in another
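One way to check that at the SSTable level is sstable2json, which ships with 1.x (a sketch; the data file name and hex-encoded row key are placeholders):
    # Dump one row from an sstable; deleted columns carry a "d" (tombstone)
    # marker in the JSON output:
    sstable2json /var/lib/cassandra/data/MyKS/MyCF/MyKS-MyCF-hf-42-Data.db \
        -k 6d796b6579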
Hello,
Quick question. Is there a tool that allows sstablesplit (reverse compaction)
against 1.1.11 sstables? I seem to recall a separate utility somewhere, but
I'm having difficulty locating it,
Thanks,
-Mike
Hello,
We've been undergoing a migration on Cassandra 1.1.9 where we are combining two
column families. We are incrementally moving data from one column family into
another, where the columns in a row in the source column family are being
appended to columns in a row in the target column
Hello,
We are experiencing an issue where nodes are temporarily slow due to I/O
contention anywhere from 10 minutes to 2 hours. I don't believe this slowdown
is Cassandra related, but factors outside of Cassandra. We run Cassandra
1.1.9. We run a 12 node cluster, with a replication factor of
Hello,
Quick question on Cassandra, TTLs, tombstones, and GC grace. If we have a
column family whose only mechanism of deleting columns is utilizing TTLs, is
repair really necessary to make tombstones consistent, and therefore would it
be safe to set the gc grace period of the column family
The only time information is removed from the filesystem is during compaction.
Compaction can remove tombstones after gc_grace_seconds, which could result in
reanimation of deleted data if the tombstone was never properly replicated to
other replicas. Repair will make sure tombstones are
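To make that interaction concrete, here is a sketch of the TTL-only pattern and the gc_grace knob it runs up against (later CQL3 syntax; names and values are examples only):
    # Every write carries a TTL, so columns expire into tombstones on their own:
    cqlsh -e "INSERT INTO my_ks.sessions (id, data) VALUES (1, 'abc') USING TTL 86400;"
    # gc_grace_seconds is the window in which tombstones must reach all
    # replicas (normally via repair) before compaction may drop them:
    cqlsh -e "ALTER TABLE my_ks.sessions WITH gc_grace_seconds = 432000;"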
Information is only deleted from Cassandra during a compaction. Using
SizeTieredCompaction, compaction only occurs when a number of similarly sized
sstables are combined into a new sstable.
When you perform a major compaction, all sstables are combined into one very
large sstable. As a
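For reference, the commands involved (keyspace and column family names are placeholders):
    # Force a major compaction of one column family (merges all of its
    # sstables into a single large one -- see the caveats in this thread):
    nodetool compact MyKeyspace MyColumnFamily
    # Watch it run, and inspect sstable counts before/after:
    nodetool compactionstats
    nodetool cfstats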
There has been a lot of discussion on the list recently concerning issues with
repair, runtime, etc.
We recently have had issues with this cassandra bug:
https://issues.apache.org/jira/browse/CASSANDRA-4905
Basically, if you do regular staggered repairs, and you have tombstones that
Hello,
Just wondering if I can get a quick clarification on some simple CQL. We
utilize Thrift CQL Queries to access our cassandra setup. As clarified in a
previous question I had, when using CQL and Thrift, timestamps on the Cassandra
column data are assigned by the server, not the client,
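For what it's worth, CQL does let a client pin the write time explicitly when server-assigned timestamps are a problem (a sketch; the table, columns, and microsecond value are made up):
    # Without USING TIMESTAMP the server picks the write time; with it, the
    # client-supplied value wins in last-write-wins conflict resolution:
    cqlsh -e "INSERT INTO my_ks.events (id, payload) VALUES (42, 'hello')
              USING TIMESTAMP 1370000000000000;"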
Other
availability zones, in the same region, have yet to show an issue.
It looks like I'm going to need to replace a third DB node today. Any advice
would be appreciated.
Thanks,
-Mike
On Apr 26, 2013, at 10:14 AM, Michael Theroux wrote:
Thanks.
We weren't monitoring this value when
, 2013, at 12:37 PM, Michael Theroux wrote:
Hello,
We've done some additional monitoring, and I think we have more information.
We've been collecting vmstat information every minute, attempting to catch a
node with issues.
So, it appears that the Cassandra node runs fine. Then suddenly
the
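A minimal sketch of that kind of collection, for anyone reproducing it (the interval and log path are assumptions):
    # Append one timestamped vmstat sample per minute to a log:
    vmstat 60 | while read line; do
        echo "$(date '+%F %T') $line"
    done >> /var/log/vmstat-cassandra.log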
64-bit millisecond value. Is that incorrect?
-Mike
jason
On Fri, Apr 26, 2013 at 9:54 AM, Michael Theroux mthero...@yahoo.com wrote:
Sorry, not sure what CPU steal is :)
I have AWS console with detailed monitoring enabled... things seem to track
close to the minute, so I can see the CPU load go to 0... then jump at about
the minute
, 2013 at 5:03 AM, Michael Theroux mthero...@yahoo.com wrote:
Another related question. Once we see messages being dropped on one node,
our cassandra client appears to see this, reporting errors. We use
LOCAL_QUORUM with a RF of 3 on all queries. Any idea why clients would see
an error
Hello,
Since Sunday, we've been experiencing a really odd issue in our Cassandra
cluster. We recently started receiving errors that messages are being dropped.
But here is the odd part...
When looking in the AWS console, instead of seeing statistics being elevated
during this time, we
9GB, we essentially only have 1GB of
free memory, so when compactions, cleanups, etc. take place this situation
starts happening. We are working to change our data model to try to resolve
this.
Ralph
On Apr 19, 2013, at 8:00 AM, Michael Theroux wrote:
Hello,
We've recently upgraded
I often drain and then shutdown, and copy the live data dir
rather than a snapshot dir.
Cheers
-
Aaron Morton
Freelance Cassandra Consultant
New Zealand
@aaronmorton
http://www.thelastpickle.com
On 19/04/2013, at 4:10 AM, Michael Theroux mthero...@yahoo.com wrote:
Hello,
We've recently upgraded from m1.large to m1.xlarge instances on AWS to handle
additional load, but to also relieve memory pressure. It appears to have
accomplished both, however, we are still getting a warning, 0-3 times a day, on
our database nodes:
WARN [ScheduledTasks:1] 2013-04-19
A lot more details on your use case and requirements would help. You need to
make specific considerations in cassandra when you have requirements around
ordering. Ordering can be achieved across columns. Ordering across rows is a
bit more tricky and may require the use of specific
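A sketch of that distinction (later CQL3 syntax; the schema is illustrative, not from the original thread):
    # Ordering within a row/partition comes free via clustering columns:
    cqlsh -e "CREATE TABLE my_ks.timeline (
                user_id  int,
                event_ts timestamp,
                payload  text,
                PRIMARY KEY (user_id, event_ts)
              ) WITH CLUSTERING ORDER BY (event_ts DESC);"
    # Ordering ACROSS rows is not guaranteed under the RandomPartitioner and
    # has to be modelled explicitly (e.g. by bucketing rows by time).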
This should work.
Another option is to follow a process similar to what we recently did. We
recently and successfully upgraded 12 instances from large to xlarge instances
in AWS. I chose not to replace nodes as restoring data from the ring would
have taken significant time and put the
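The per-node mechanics of that kind of in-place move, echoing the drain-and-copy tip earlier in this digest, look roughly like this (a sketch only; paths, hostnames, and service names are assumptions):
    nodetool drain                  # flush memtables; node stops taking writes
    sudo service cassandra stop
    # Copy the live data directory to the new, larger instance:
    rsync -a /var/lib/cassandra/ newhost:/var/lib/cassandra/
    # Start Cassandra on the new instance with the same token and address,
    # then confirm it rejoined the ring:
    nodetool ring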
Hello,
We are having an odd sporadic issue that I believe maybe due to time
synchronization. Without going into details on the issue right now, quick
question, from the documentation I see numerous references that Cassandra
utilizes timestamps generated by the clients to determine write
Hi Dean,
I saw the same behavior when we switched from STCS to LCS on a couple of our
tables. Not sure why it doesn't proceed immediately (I pinged the list, but
didn't get any feedback). However, running nodetool compact keyspace table
got things moving for me.
-Mike
On Mar 14, 2013, at
One more warning (which I'm sure you know, but in case others see this):
nodetool compact does a major compaction for STCS and is, in general, not
recommended for STCS. I only ran it on the tables we've converted to LCS.
-Mike
On Mar 14, 2013, at 11:26 AM, Michael Theroux wrote:
Hi Dean,
I
Hi Aaron,
If you have the chance, could you expand on m1.xlarge being the much better
choice? We are going to need to make a choice of expanding from a 12-node to a
24-node cluster using m1.large instances, vs. upgrading all instances to
m1.xlarge, soon, and the justifications would be helpful
on 1.1.9?
Those are the two issues I can find regarding this matter:
https://issues.apache.org/jira/browse/CASSANDRA-4876
https://issues.apache.org/jira/browse/CASSANDRA-5029
Looks like in 1.2, it defaults to 0.1; not sure about 1.1.X
-Wei
- Original Message -
From: Michael
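For reference, the false-positive ratio discussed just above is settable per table (later CQL3 syntax; the names are illustrative):
    # A higher bloom_filter_fp_chance uses less memory at the cost of more
    # false positives on reads:
    cqlsh -e "ALTER TABLE my_ks.my_table WITH bloom_filter_fp_chance = 0.1;"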
I've asked this myself in the past... fairly arbitrarily chose 10MB based on
Wei's experience,
-Mike
On Mar 8, 2013, at 1:50 PM, Hiller, Dean wrote:
+1 (I would love to know this info).
Dean
From: Wei Zhu wz1...@yahoo.com
Reply-To:
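A sketch of where that 10MB figure gets plugged in (shown in later CQL3 syntax; on 1.1 the same option was passed via the compaction strategy options; the table name is made up):
    cqlsh -e "ALTER TABLE my_ks.my_table
              WITH compaction = {'class': 'LeveledCompactionStrategy',
                                 'sstable_size_in_mb': 10};"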
Hello,
(Hopefully) Quick question.
We are running Cassandra 1.1.9.
I recently converted some tables from Size tiered to Leveled Compaction. The
amount of space for Bloom Filters on these tables went down tremendously (which
is expected, LCS in 1.1.9 does not use bloom filters).
However,
The way I've always thought about it is that -pr will make sure the information
that specific node originates is consistent with its replicas.
So, we know that a node is responsible for a specific token range, and the next
nodes in the ring will hold its replicas. The -pr will make sure that a
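In command form (the keyspace name is a placeholder):
    # -pr repairs only the range this node is primary for, so it must be run
    # on every node, staggered, for full coverage:
    nodetool repair -pr MyKeyspace
    # Without -pr, every range the node replicates is repaired, so a full
    # cluster pass repairs each range RF times:
    nodetool repair MyKeyspace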
BTW, when I say major compaction, I mean running the nodetool compact
command (which does a major compaction for Size Tiered Compaction). I didn't
see the distribution of SSTables I expected until I ran that command, in the
steps I described below.
-Mike
On Feb 14, 2013, at 3:51 PM, Wei
Hello,
We have an unusual situation that I believe I've reproduced, at least
temporarily, in a test environment. I also think I see where this issue is
occurring in the code.
We have a specific column family that is under heavy read and write load on a
nightly basis. For the purposes of
Hello,
We are running into an unusual situation that I'm wondering if anyone has any
insight on. We've been running a Cassandra cluster for some time, with
compression enabled on one column family in which text documents are stored.
We enabled compression on the column family, utilizing the
PM, Michael Theroux mthero...@yahoo.com wrote:
Thanks for the response.
We are on version 1.1.2. We don't see the MutationStage back up. The dump
from the messages dropped error doesn't show a backup, but also watching
nodetool tpstats doesn't show any backup there.
nodetool info also
Hello,
We have been noticing an issue where, about 50% of the time in which a node
fails or is restarted, secondary indexes appear to be partially lost or
corrupted. A drop and re-add of the index appears to correct the issue. There
are no errors in the cassandra logs that I see. Part of
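The drop-and-re-add workaround, sketched (index and table names are hypothetical):
    cqlsh -e "DROP INDEX users_email_idx;"
    cqlsh -e "CREATE INDEX users_email_idx ON my_ks.users (email);"
    # Or rebuild in place without dropping (index naming varies by version):
    nodetool rebuild_index my_ks users users_email_idx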
of days now.
Cool.
Cheers
-
Aaron Morton
Freelance Developer
@aaronmorton
http://www.thelastpickle.com
On 24/09/2012, at 6:09 AM, Michael Theroux mthero...@yahoo.com wrote:
There were no errors in the log (other than the messages dropped exception
pasted below
Hello,
While under load, we have occasionally been seeing messages dropped errors in
our cassandra log. Doing some research, I understand this is part of
Cassandra's design to shed load, and we should look at the tpstats-like output
to determine what should be done to resolve the situation.
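The usual first look (stock nodetool, no assumptions):
    # Per-stage active/pending/completed counts, plus a Dropped section at
    # the bottom broken down by message type (MUTATION, READ, ...):
    nodetool tpstats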
Hello,
A number of weeks ago, Amazon announced the availability of EBS Optimized
instances and Provisioned IOPs for Amazon EC2. Historically, I've read EBS is
not recommended for Cassandra due to the network contention that can quickly
result
be expressly aware of it.
On Sat, Jul 14, 2012 at 2:00 PM, Michael Theroux mthero...@yahoo.com wrote:
Hello,
I'm looking at nodetool repair with the -pr, vs. non -pr option.
Looking around, I'm seeing a lot of conflicting information out there.
Almost universally, the recommendation is to run
, is
that timestamp greater?
Thanks,
-Mike
On Jul 12, 2012, at 8:56 PM, Michael Theroux wrote:
Sounds a lot like a bug that I hit that was filed and fixed recently:
https://issues.apache.org/jira/browse/CASSANDRA-4432
-Mike
On Jul 12, 2012, at 8:16 PM, Edward Capriolo wrote:
Possibly the bug with nanotime causing Cassandra to think the change happened
in the past. Talked about on-list.
On 8/07/2012, at 4:05 PM, Michael Theroux wrote:
Hello,
We're in the process of trying to move a 6-node cluster from RF=1 to RF=3.
Once our replication factor was upped to 3, we ran nodetool repair, and
immediately hit an issue on the first node we ran repair on:
INFO 03:08:51,536
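For readers following along, the overall sequence is roughly (later CQL3 syntax for the keyspace change; names are placeholders):
    # 1. Raise the replication factor:
    cqlsh -e "ALTER KEYSPACE my_ks
              WITH replication = {'class': 'SimpleStrategy',
                                  'replication_factor': 3};"
    # 2. Repair every node, one at a time, so the new replicas get populated:
    nodetool repair my_ks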
Michael Theroux mthero...@yahoo.com
Hello,
We are currently running a web application utilizing Cassandra on EC2. Given
the recent outages experienced with Amazon, we want to consider expanding
Cassandra across availability zones sooner rather than later.
We are trying to determine