Rahul, none of that is true at all.
Each node stores schema locally in a non-replicated system table. Schema
changes are disseminated directly to live nodes (not the write path), and the
schema version is gossiped to other nodes. If a node misses a schema update, it
will figure this out whe
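A quick way to see whether the cluster agrees on schema (illustrative; output format varies by Cassandra version):

```shell
# List schema versions as seen via gossip. A healthy cluster shows a
# single UUID under "Schema versions"; a node that missed a schema
# update appears under a second UUID until it pulls the current
# schema from a peer.
nodetool describecluster
```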
You're going to have a problem doing this in a single query because you're
asking Cassandra to select a non-contiguous set of rows. Also, to my
knowledge, you can only use non-equality operators on clustering keys. The
best solution I could come up with would be to define your table like so:
CREATE TA
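The definition above got cut off; purely as an illustration of the clustering-key approach (every name here is made up, not from the original message), it might look something like:

```sql
-- Hypothetical schema: the partition key keeps related rows together,
-- and the clustering columns are the only place range predicates work.
CREATE TABLE events (
    bucket text,          -- partition key: groups the rows you want contiguously
    event_time timestamp, -- clustering key: non-equality predicates allowed
    event_id uuid,
    payload text,
    PRIMARY KEY (bucket, event_time, event_id)
);

-- A range query restricted to one partition:
SELECT * FROM events
WHERE bucket = 'day-2017-05-11'
  AND event_time >= '2017-05-11 00:00:00'
  AND event_time <  '2017-05-11 06:00:00';
```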
Hi Stefano,
Based on what I understood reading the docs, if the ratio of garbage-collectable
tombstones exceeds the "tombstone_threshold", C* should start
compacting and evicting.
If there are no other normal compaction tasks to be run, LCS will attempt to
compact the sstables it estimates it w
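For reference, these are the compaction subproperties involved; the table name is illustrative, and 0.2 is the documented default for tombstone_threshold:

```sql
-- Tune tombstone-driven compaction for an LCS table (names assumed).
ALTER TABLE my_ks.events WITH compaction = {
    'class': 'LeveledCompactionStrategy',
    'tombstone_threshold': '0.2',             -- ratio of GC-able tombstones that makes an sstable a candidate
    'unchecked_tombstone_compaction': 'true'  -- attempt the compaction even if overlap checks suggest it may not pay off
};
```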
OpsCenter 6.0 and up don't work with open-source Apache Cassandra; they only support DSE.
On May 11, 2017 at 12:31:08 PM, cass savy (casss...@gmail.com) wrote:
AWS Backup/Restore process/tools for C*/DSE C*:
Has anyone used Opscenter 6.1 backup tool to backup/restore data for larger
datasets online ?
If yes, did you run into issues us
ables? More interestingly, given
that a single partition might be split across different levels, and that some
range tombstones might be in L0 while all the rest of the data is in L1, are all
the tombstones prefetched from _all_ the involved SSTables before doing any
table scan?
Regards,
Stefano
That does sound troubling. You mentioned you're reading at local quorum. Did
you write these control records at quorum, or from the same dc at local quorum?
What CL/DC are the other records written at?
On May 17, 2017 at 10:16:42 AM, Dominic Chevalier (dccheval...@gmail.com) wrote:
Hi Folks,
Specifying a dc will only repair the data in that dc. If you leave out the dc
flag, it will repair data in both dcs. You probably shouldn't be restricting
repair to one dc without a good rationale for doing so.
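The two invocations look like this (keyspace and DC names are illustrative):

```shell
# Repair only the replicas in DC1:
nodetool repair -dc DC1 my_keyspace

# Omit the flag to repair the data across all datacenters:
nodetool repair my_keyspace
```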
On August 31, 2017 at 8:56:24 AM, Harper, Paul (paul.har...@aspect.com) wrote:
Hello
That's the value version. Gossip uses versioned values to work out which piece
of data is the most recent. Each node has its own highest version, so I don't
think it's unusual for that to differ between nodes. When you say
the node crashes, do you mean the process dies?
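You can inspect those per-value versions directly (sample output is illustrative and varies by version):

```shell
# Dump each node's gossiped application state.
nodetool gossipinfo
# Sample:
#   /10.0.0.1
#     STATUS:14:NORMAL,-9223372036854775808
#     SCHEMA:22:2207c2a9-...
# The number after each key (14, 22) is that value's version on that node.
```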
On August 2
If nodetool repair doesn't return an error, and doesn't hang, the repair
completed successfully.
On September 1, 2017 at 5:50:53 AM, Akshit Jain (akshit13...@iiitd.ac.in) wrote:
Hi,
I am performing repair on a Cassandra cluster.
After getting repair status as successful, How to figure out if it is
It will on 2.2 and higher, yes.
Also, just want to point out that it would be worth it for you to compare how
long incremental repairs take vs full repairs in your cluster. There are some
problems (which are fixed in 4.0) that can cause significant overstreaming when
using incremental repair.
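A simple way to make that comparison (keyspace name assumed; on 2.2+ a plain repair is incremental by default):

```shell
# Compare wall-clock time for the two repair modes on the same keyspace.
time nodetool repair my_keyspace          # incremental (the 2.2+ default)
time nodetool repair --full my_keyspace   # full repair
```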
Hi Hannu,
There are more than a few committers that don't think MVs are currently
suitable for production use. I'm not involved with MV development, so this may
not be 100% accurate, but the problems as I understand them are:
There's no way to determine if a view is out of sync with the base
Not really, no. There's a repaired % in nodetool tablestats if you're using
incremental repair (and you probably shouldn't be before 4.0 comes out), but I
wouldn't make any decisions based off its value.
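The metric in question shows up per table (names assumed):

```shell
# "Percent repaired" is only meaningful if incremental repair has run.
nodetool tablestats my_keyspace.my_table | grep -i "percent repaired"
```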
On October 4, 2017 at 8:05:44 AM, ZAIDI, ASAD A (az1...@att.com) wrote:
Hello folk,
I’
a regular repair on each node based on if this percentage is below
some threshold. It has been running fine since several months ago.
2017-10-04 12:46 GMT-03:00 Blake Eggleston :
Not really no. There's a repaired % in nodetool tablestats if you're using
incremental repair (and you p
Since the UUID is used as the ballot in a paxos instance, if it goes backwards
in time, it will be rejected by the other replicas (if there is a more recent
instance), and the proposal will fail. However, after the initial rejection,
the coordinator will try again with the most recently seen bal
I believe that’s just referencing a counter implementation detail. If I
remember correctly, there was a fairly large improvement of the implementation
of counters in 2.1, and the assignment of the id would basically be a format
migration.
> On Oct 20, 2017, at 9:57 AM, Paul Pollack wrote:
>
>
Hi user@,
Following a discussion on dev@, the materialized view feature is being
retroactively classified as experimental, and not recommended for new
production uses. The next patch releases of 3.0, 3.11, and 4.0 will include
CASSANDRA-13959, which will log warnings when materialized views are
Hey Aiman,
Assuming the situation is just "we accidentally ran incremental repair", you
shouldn't have to do anything. It's not going to hurt anything. Pre-4.0
incremental repair has some issues that can cause a lot of extra streaming, and
inconsistencies in some edge cases, but as long as you'
> Once you run incremental repair, your data is permanently marked as repaired
This is also the case for full repairs, if I'm not mistaken. I'll admit I'm not
as familiar with the quirks of repair in 2.2, but prior to 4.0/CASSANDRA-9143,
any global repair ends with an anticompaction that marks s
eed to mark
> sstables as unrepaired?
That's right, but he mentioned that he is using Reaper, which uses
subrange repair if I'm not mistaken, and subrange repair doesn't do
anticompaction. So in that case he should probably mark data as unrepaired
when no longer using incremental repair.
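Marking sstables unrepaired is done with the offline sstablerepairedset tool that ships with Cassandra; the node must be stopped first, and the paths below are illustrative:

```shell
# Run with the node down; flips the repairedAt metadata back to unrepaired.
sstablerepairedset --really-set --is-unrepaired \
    /var/lib/cassandra/data/my_ks/my_table-*/mc-*-big-Data.db
```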
Looks like a bug, could you open a jira?
> On Nov 2, 2017, at 2:08 AM, Mikhail Tsaplin wrote:
>
> Hi,
> I've upgraded Cassandra from 2.1.6 to 3.0.9 on three nodes cluster. After
> upgrade
> cqlsh shows following error when trying to run "use {keyspace};" command:
> 'ResponseFuture' object has
Because in theory, corruption of your repaired dataset is possible, which
incremental repair won’t fix.
In practice pre-4.0 incremental repair has some flaws that can bring deleted
data back to life in some cases, which this would address.
You should also evaluate whether pre-4.0 incremental
Hi,
I’ve been having a problem with 3 neighboring nodes in our cluster having their
read latencies jump up to 9000ms - 18000ms for a few minutes (as reported by
opscenter), then come back down.
We’re running a 6 node cluster, on AWS hi1.4xlarge instances, with cassandra
reading and writing to
too many
queries for Cassandra to handle. However, as I mentioned earlier, the spikes
aren't correlated to an increase in reads.
On Jan 5, 2014, at 3:28 PM, Blake Eggleston wrote:
> Hi,
>
> I’ve been having a problem with 3 neighboring nodes in our cluster having
> their read laten
Hi All,
We're having a problem with our Cassandra cluster and are at a loss as to the
cause.
We have what appear to be columns that disappear for a little while, then
reappear. The rest of the row is returned normally during this time. This is,
of course, very disturbing, and is wreaking havoc
Hi Jimmy,
Check out the token function:
http://www.datastax.com/docs/1.1/dml/using_cql#paging-through-non-ordered-partitioner-results
You can use it to page through your rows.
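The pattern from those docs, sketched with assumed table and column names:

```sql
-- First page:
SELECT key FROM my_table LIMIT 1000;

-- Next page: resume from the token of the last key seen.
SELECT key FROM my_table
WHERE token(key) > token('last_key_from_previous_page')
LIMIT 1000;
```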
Blake
On Jul 23, 2013, at 10:18 PM, Jimmy Lin wrote:
> hi,
> I want to fetch all the row keys of a table using CQL3: