Hi Nate,
Are you using incremental backups?
Extract from the documentation (
http://www.datastax.com/documentation/cassandra/2.1/cassandra/operations/ops_backup_incremental_t.html
):
"When incremental backups are enabled (disabled by default), Cassandra
hard-links each flushed SSTable to a
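For reference, incremental backups are toggled in cassandra.yaml (the hard links land in a backups/ directory under each table's data directory). A minimal fragment:

```yaml
# cassandra.yaml -- incremental backups are off by default.
# When true, Cassandra hard-links each flushed SSTable into a
# backups/ subdirectory of the table's data directory.
incremental_backups: true
```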
Hi Paul,
There is a JIRA ticket about this issue:
https://issues.apache.org/jira/browse/CASSANDRA-8696
I have seen these errors too the last time I ran nodetool repair.
I would also be interested to know the answer to the questions you were
asking:
Are these errors problematic? Should I just
sstablerepairedset to mark all the SSTables that
were created before you disabled compaction.
- Restart Cassandra
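For what it's worth, the documented 2.1 migration steps can be sketched as a small run-book. Everything below is a hedged sketch: keyspace, table, and data paths are hypothetical (not from this thread), and the run() helper lets you dry-run it before touching a node:

```shell
# Hypothetical sketch of the documented 2.1 migration to incremental
# repair for an LCS table. Set DRY_RUN=1 to print commands only.
run() { if [ -n "$DRY_RUN" ]; then echo "$@"; else "$@"; fi; }
DRY_RUN=1

run nodetool disableautocompaction my_keyspace my_lcs_table  # 1. stop compactions
run nodetool repair my_keyspace my_lcs_table                 # 2. run a full repair
run nodetool stopdaemon                                      # 3. stop the node
# 4. mark the SSTables created before compaction was disabled as repaired
run sstablerepairedset --really-set --is-repaired \
    /var/lib/cassandra/data/my_keyspace/my_lcs_table-*/*-Data.db
run sudo service cassandra start                             # 5. restart Cassandra
```

With DRY_RUN=1 the script only echoes each step, which makes it easy to review the exact commands before running them for real.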
I'd be glad if someone could answer my other questions in any case ;-).
Thanks in advance for your help
Reynald
On 18/11/2015 16:45, Reynald Bourtembourg wrote:
Hi,
We currently have a 3-node Cassandra cluster with RF = 3.
We are using Cassandra 2.1.7.
We would like to start using incremental repairs.
We have some tables using the LCS compaction strategy and others using
STCS.
Here is the procedure described in the documentation:
To migrate to
Done:
https://issues.apache.org/jira/browse/CASSANDRA-10904
On 18/12/2015 10:51, Sylvain Lebresne wrote:
On Fri, Dec 18, 2015 at 8:55 AM, Reynald Bourtembourg
<reynald.bourtembo...@esrf.fr>
wrote:
This does not seem to be explained in t
Hi,
Maybe your problem comes from the new role-based access control
introduced in Cassandra 2.2.
http://www.datastax.com/dev/blog/role-based-access-control-in-cassandra
The "Upgrading" section of this blog post specifies the following:
"For systems already using the
he matter.
Based on some research here and on IRC, recent versions of
Cassandra do not require anything specific when migrating to
incremental repairs beyond the -inc switch, even on LCS.
Any confirmation on the matter is more than welcome.
Regards,
Stefano
On Wed, Nov
Hi Paul,
I guess this might come from the incremental repairs...
The repair time is stored in the sstable (RepairedAt timestamp metadata).
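As a side note, that metadata can be read with the sstablemetadata tool shipped in Cassandra's tools/bin: its "Repaired at" field is, as far as I can tell, a Unix epoch in milliseconds (0 meaning unrepaired). A tiny sketch for converting it on a GNU/Linux box; the value below is made up for illustration:

```shell
# "Repaired at: 0" in sstablemetadata output means unrepaired; otherwise
# it is a Unix epoch in milliseconds. Hypothetical value for illustration:
repaired_at_ms=1464687780000
date -u -d "@$(( repaired_at_ms / 1000 ))" +%Y-%m-%d   # -> 2016-05-31
```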
Cheers,
Reynald
On 31/05/2016 11:03, Paul Dunkler wrote:
Hi there,
I am sometimes running into very strange errors while backing up
snapshots from a
Hi Paul,
If I understand correctly, you are making a tar file with all the
folders named "snapshots" (i.e. the folders under which all the snapshots
are created, so you have one "snapshots" folder per table).
If this is the case, when you execute "nodetool repair", Cassandra
will create
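One way to sidestep snapshots appearing mid-archive is to snapshot with an explicit tag and tar only that tag's directories, so anything repair creates concurrently never enters the tar. A minimal sketch with made-up paths standing in for /var/lib/cassandra/data:

```shell
# Hypothetical data layout, for illustration only.
mkdir -p /tmp/demo_data/ks/t1/snapshots/backup_tag
mkdir -p /tmp/demo_data/ks/t1/snapshots/repair_tmp   # e.g. created by a repair
touch /tmp/demo_data/ks/t1/snapshots/backup_tag/aa-Data.db

# On a real node you would first run: nodetool snapshot -t backup_tag ks
# Then archive only the tagged snapshot directories:
(cd /tmp/demo_data && find . -type d -name backup_tag | tar -cf /tmp/snap.tar -T -)
tar -tf /tmp/snap.tar   # lists only backup_tag paths, not repair_tmp
```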
Hi,
Maybe Ben was referring to this issue which has been mentioned recently
on this mailing list:
https://issues.apache.org/jira/browse/CASSANDRA-11887
Cheers,
Reynald
On 03/08/2016 18:09, Romain Hardouin wrote:
> Curious why the 2.2 to 3.x upgrade path is risky at best.
I guess that upgrade
, Reynald Bourtembourg
<reynald.bourtembo...@esrf.fr>
wrote:
Hi,
You can write with CL=EACH_QUORUM and read with CL=LOCAL_QUORUM to
get strong consistency.
Kind regards,
Reynald
On 28/09/2017 13:46, Peng Xiao wrote:
Even with CL=QUORUM, there is no guarantee to read the same
data in DC2, right?
Then multiple DCs make no sense?
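The quorum arithmetic behind the EACH_QUORUM advice can be sketched quickly (RF = 3 per DC is assumed here purely for illustration; the original question does not state the topology):

```shell
# Assuming RF = 3 replicas per DC (illustrative).
rf=3
local_quorum=$(( rf / 2 + 1 ))   # 2

# EACH_QUORUM write: at least 2 replicas ack in *every* DC.
# LOCAL_QUORUM read: 2 replicas contacted in the reader's DC.
# In that DC, 2 written + 2 read out of 3 replicas must overlap:
echo $(( local_quorum + local_quorum > rf ))   # 1 -> overlap guaranteed

# Plain QUORUM over 2 DCs (6 replicas, quorum = 4) gives no per-DC
# guarantee: a write may get 3 acks in DC1 and only 1 in DC2, so a
# LOCAL_QUORUM read of 2 in DC2 can miss the write entirely:
echo $(( 1 + 2 > 3 ))                          # 0 -> overlap not guaranteed
```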