If you are reading and writing at quorum, then what you are seeing
shouldn't happen. You shouldn't be able to read N+1 until N+1 has
been committed to a quorum of servers. At that point you should no
longer be able to read N, since no read quorum can consist entirely
of replicas that still hold only N.
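To make the overlap argument concrete, here is a toy check of the R + W > N rule (my own sketch, not Cassandra code; the function name is mine):

```python
# Toy model of quorum overlap: with N replicas, any read quorum of size
# R and write quorum of size W with R + W > N must share at least one
# replica, so a quorum read always sees at least one committed copy.
from itertools import combinations

def quorums_overlap(n: int, r: int, w: int) -> bool:
    """Check exhaustively that every read quorum of size r intersects
    every write quorum of size w among n replicas."""
    replicas = range(n)
    return all(set(rq) & set(wq)
               for rq in combinations(replicas, r)
               for wq in combinations(replicas, w))

# QUORUM on RF=3 means r = w = 2, and 2 + 2 > 3: overlap is guaranteed.
assert quorums_overlap(3, 2, 2)
# With r = w = 1 on RF=3 (1 + 1 <= 3) a read can miss the write entirely.
assert not quorums_overlap(3, 1, 1)
```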
Dan - I think you are
Tyler, your answer seems to contradict this email by Jonathan Ellis
[1]. In it Jonathan says,
The important guarantee this gives you is that once one quorum read
sees the new value, all others will too. You can't see the newest
version, then see an older version on a subsequent write [sic, I
We're investigating Cassandra, and we are looking for a way to get Cassandra
to use more than 50% of its data disks. Is this possible?
For major compactions, it looks like we can use more than 50% of the disk if
we use multiple similarly sized column families. If we had 10 column
families of the
/CASSANDRA-579 for some
background here: I was just about to start working on this one, but it won't
make it in until 0.7.
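The headroom arithmetic behind that can be sketched as follows (back-of-the-envelope only, assuming equally sized column families majorly compacted one at a time, each temporarily needing free space equal to its own size):

```python
# Back-of-the-envelope: a major compaction can temporarily need free
# space equal to the column family being rewritten, so the largest
# column family sets the disk reserve you must keep free.
def usable_fraction(num_equal_cfs: int) -> float:
    """Fraction of the disk usable by data when it is split evenly
    across num_equal_cfs column families compacted one at a time.
    Solves: data + data/num_equal_cfs <= disk."""
    return num_equal_cfs / (num_equal_cfs + 1)

assert usable_fraction(1) == 0.5   # one big CF: only ~50% of the disk
assert usable_fraction(10) > 0.9   # ten equal CFs: roughly 90%
```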
-----Original Message-----
From: Sean Bridges sean.brid...@gmail.com
Sent: Wednesday, May 26, 2010 11:50am
To: user@cassandra.apache.org
Subject: using more than 50% of disk
at 2:26 PM, Sean Bridges sean.brid...@gmail.com wrote:
We were running a load test against a single 0.6.2 cassandra node. 24
hours into the test, Cassandra appeared to be nearly frozen for 10
minutes. Our write rate went to almost 0, and we had a large number
of write timeouts. We weren't
Hello,
We upgraded a cassandra cluster from 1.2.18 to 2.0.10, and it looks like
repair is significantly more expensive now. Is this expected?
We schedule rolling repairs through the cluster. With 1.2.18 a repair
would take 3 hours or so. The first repair after the upgrade has been
going on
replicas, because at least one replica in the
snapshot is not undergoing repair.
Sean
[1]
http://www.datastax.com/documentation/cassandra/2.0/cassandra/tools/toolsRepair.html
On Wed, Oct 15, 2014 at 5:36 PM, Robert Coli rc...@eventbrite.com wrote:
On Wed, Oct 15, 2014 at 4:54 PM, Sean Bridges
Hello,
I thought an sstable was immutable once written to disk. Before upgrading
from 1.2.18 to 2.0.10 we took a snapshot of our sstables. Now when I
compare the files in the snapshot dir and the original files, the Summary.db
files have a newer modified date, and the file sizes have changed.
important, they're primarily an
optimization for startup time.
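One way to confirm that only the Summary component was rewritten is to checksum everything else in both directories (a sketch; the helper name and paths are mine):

```python
import hashlib
import os

def component_checksums(directory: str) -> dict:
    """SHA-256 of every sstable component file except the Summary files,
    which Cassandra regenerates when it reopens an sstable."""
    digests = {}
    for name in sorted(os.listdir(directory)):
        if name.endswith('-Summary.db'):
            continue  # expected to differ; regenerated on open
        path = os.path.join(directory, name)
        with open(path, 'rb') as f:
            digests[name] = hashlib.sha256(f.read()).hexdigest()
    return digests

# If the immutable components are truly untouched, these should match:
# component_checksums('data/ks/cf') == \
#     component_checksums('data/ks/cf/snapshots/pre-upgrade')
```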
On Thu, Oct 16, 2014 at 12:20 PM, Sean Bridges sean.brid...@gmail.com
wrote:
Hello,
I thought an sstable was immutable once written to disk. Before
upgrading from 1.2.18 to 2.0.10 we took a snapshot of our sstables. Now
when I
repair takes 40
hours, with average IO around 27 MB/s. Should I file a jira?
Sean
On Wed, Oct 15, 2014 at 9:23 PM, Sean Bridges sean.brid...@gmail.com
wrote:
Thanks Robert. Does the switch from parallel to sequential repair explain
why IO increases? We see significantly higher IO with 2.0.10.
rc...@eventbrite.com wrote:
On Thu, Oct 23, 2014 at 9:33 AM, Sean Bridges sean.brid...@gmail.com
wrote:
The change from parallel to sequential is very dramatic. For a small
cluster with 3 nodes, using cassandra 2.0.10, a parallel repair takes 2
hours, and IO throughput peaks at 6 MB/s
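If the sequential default is what changed for you, 2.0's nodetool is supposed to still expose the old behaviour behind a flag (sketch only; the keyspace name is illustrative, and please verify the flags against your nodetool version):

```shell
# Cassandra 2.0 defaults to sequential (snapshot-based) repair;
# -par requests the 1.2-style parallel repair across replicas,
# and -pr limits each node to its primary ranges, as before.
nodetool repair -par -pr my_keyspace
```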
We are using lightweight transactions, two datacenters and DC_LOCAL
consistency level.
There is a comment in CASSANDRA-5797,
This would require manually truncating system.paxos when failing over.
Is that required? I don't see it documented anywhere else.
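For what it's worth, the manual step that comment describes would presumably look like this on each node in the surviving datacenter (illustrative only, not a recommendation; whether it is actually required is exactly my question):

```shell
# Clear locally persisted Paxos state after failing over, as the
# CASSANDRA-5797 comment suggests (run on each surviving node).
cqlsh -e "TRUNCATE system.paxos;"
```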
Thanks,
Sean
with incremental repair, which
is what -pr was intended to fix on full repair, by repairing each token range
only once instead of as many times as the replication factor.
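The counting works out like this (a toy model I wrote to convince myself; it assumes one token range per node and no vnodes):

```python
# Without -pr, a full repair on every node repairs every range that node
# replicates, so each range is repaired once per replica (RF times in
# total across the cluster); with -pr, each node repairs only its
# primary range, so each range is repaired exactly once.
def total_range_repairs(nodes: int, rf: int, primary_only: bool) -> int:
    ranges = nodes  # one primary token range per node, no vnodes
    return ranges if primary_only else ranges * rf

assert total_range_repairs(nodes=6, rf=3, primary_only=False) == 18
assert total_range_repairs(nodes=6, rf=3, primary_only=True) == 6
```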
Cheers,
On Mon, Oct 24, 2016 at 18:05, Sean Bridges
<sean.brid...@globalrelay.net<mailto:sean.brid...@globalrelay.net>> wrote:
to use -pr with incremental repairs?
Thanks,
Sean
[1]
https://docs.datastax.com/en/cassandra/3.x/cassandra/operations/opsRepairNodesManualRepair.html
--
Sean Bridges
senior systems architect
Global Relay
sean.bridges@globalrelay.net
866.484.6630
Hey,
We are upgrading from cassandra 2.1 to cassandra 2.2.
With cassandra 2.1 we would periodically repair all nodes, using the -pr
flag.
With cassandra 2.2, the same repair takes a very long time, as cassandra
does an anticompaction after the repair. This anticompaction causes
most
and others
marked as unrepaired, which will never be compacted together.
You might want to flag all sstables as unrepaired before moving on, if
you do not intend to switch to incremental repair for now.
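In case it helps anyone else: the marking can apparently be done with the sstablerepairedset tool that ships in tools/bin (sketch only; the paths are illustrative, and the node must be stopped first):

```shell
# With the node stopped, clear the repairedAt metadata on every sstable
# so none are treated as repaired (paths illustrative).
find /var/lib/cassandra/data/my_keyspace -name '*-Data.db' > /tmp/sstables.txt
sstablerepairedset --really-set --is-unrepaired -f /tmp/sstables.txt
```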
Cheers,
On Wed, Oct 19, 2016 at 6:31 PM Sean Bridges
<sean.brid...@globalrelay.