Re: Cassandra Files Taking up Much More Space than CF

2014-12-09 Thread Reynald Bourtembourg
Hi Nate, Are you using incremental backups? Extract from the documentation ( http://www.datastax.com/documentation/cassandra/2.1/cassandra/operations/ops_backup_incremental_t.html ): When incremental backups are enabled (disabled by default), Cassandra hard-links each flushed SSTable to a
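A minimal sketch of the setting the documentation refers to, assuming default file locations (the keyspace/table path below is illustrative):

    # cassandra.yaml -- incremental backups are disabled by default:
    #   incremental_backups: true
    # Once enabled, each flushed SSTable is hard-linked into the table's
    # backups/ directory, e.g.:
    ls /var/lib/cassandra/data/<keyspace>/<table>/backups/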

Re: Repairing OpsCenter rollups60 Results in Snapshot Errors

2015-01-29 Thread Reynald Bourtembourg
Hi Paul, There is a JIRA ticket about this issue: https://issues.apache.org/jira/browse/CASSANDRA-8696 I have seen these errors too the last time I ran nodetool repair. I would also be interested to know the answer to the questions you were asking: Are these errors problematic? Should I just
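For reference, a repair scoped to the table named in the subject would look like the sketch below (hypothetical invocation; the keyspace and table names are the ones OpsCenter creates):

    nodetool repair OpsCenter rollups60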

Re: [Marketing Mail] Migrating to incremental repairs

2015-11-18 Thread Reynald Bourtembourg
sstablerepairedset to mark all the SSTables that were created before you disabled compaction. - Restart Cassandra I'd be glad if someone could answer my other questions in any case ;-). Thanks in advance for your help. Reynald On 18/11/2015 16:45, Reynald Bourtembourg wrote: Hi, We
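The marking step mentioned above is done with sstablerepairedset while the node is stopped; a sketch, assuming the SSTable list is collected into a file first:

    # the node must be down before rewriting SSTable metadata
    find /var/lib/cassandra/data/<keyspace> -name "*-Data.db" > sstables.txt
    sstablerepairedset --really-set --is-repaired -f sstables.txt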

Migrating to incremental repairs

2015-11-18 Thread Reynald Bourtembourg
Hi, We currently have a 3-node Cassandra cluster with RF = 3. We are using Cassandra 2.1.7. We would like to start using incremental repairs. We have some tables using the LCS compaction strategy and others using STCS. Here is the procedure written in the documentation: To migrate to
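The 2.1 documentation's per-node migration procedure reads roughly like the sketch below (a hedged summary; check the linked page for the exact steps for your version):

    nodetool disableautocompaction <keyspace>
    nodetool repair <keyspace>            # one full (non-incremental) repair
    # stop the node, mark the existing SSTables as repaired with
    # sstablerepairedset, then restart it and re-enable compaction
    nodetool enableautocompaction <keyspace>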

Re: [Marketing Mail] Re: [Marketing Mail] can't make any permissions change in 2.2.4

2015-12-18 Thread Reynald Bourtembourg
Done: https://issues.apache.org/jira/browse/CASSANDRA-10904 On 18/12/2015 10:51, Sylvain Lebresne wrote: On Fri, Dec 18, 2015 at 8:55 AM, Reynald Bourtembourg <reynald.bourtembo...@esrf.fr> wrote: This does not seem to be explained in t

Re: [Marketing Mail] can't make any permissions change in 2.2.4

2015-12-17 Thread Reynald Bourtembourg
Hi, Maybe your problem comes from the new role-based access control introduced in Cassandra 2.2. http://www.datastax.com/dev/blog/role-based-access-control-in-cassandra The Upgrading section of this blog post specifies the following: "For systems already using the
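After upgrading, the new role layer can be inspected from cqlsh to check that legacy users and permissions were converted (assuming the default superuser account is still usable):

    cqlsh -u cassandra -p cassandra -e "LIST ROLES;"
    cqlsh -u cassandra -p cassandra -e "LIST ALL PERMISSIONS;"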

Re: [Marketing Mail] Migrating to incremental repairs

2015-11-20 Thread Reynald Bourtembourg
the matter. Based on some research here and on IRC, recent versions of Cassandra do not require anything specific when migrating to incremental repairs but the -inc switch, even on LCS. Any confirmation on the matter is more than welcome. Regards, Stefano On Wed, Nov
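The switch referred to is passed to nodetool repair; a 2.1-era sketch (on 2.1, incremental repair is typically combined with parallel mode, and in later versions incremental became the default):

    nodetool repair -par -inc <keyspace>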

Re: [Marketing Mail] Cassandra 2.1: Snapshot data changing while transferring

2016-05-31 Thread Reynald Bourtembourg
Hi Paul, I guess this might come from the incremental repairs... The repair time is stored in the SSTable (RepairedAt timestamp metadata). Cheers, Reynald On 31/05/2016 11:03, Paul Dunkler wrote: Hi there, I am sometimes running into very strange errors while backing up snapshots from a
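The RepairedAt metadata Reynald mentions can be inspected directly with sstablemetadata (the path below is illustrative):

    sstablemetadata /var/lib/cassandra/data/<keyspace>/<table>/<sstable>-Data.db | grep -i "repaired at"
    # "Repaired at: 0" means unrepaired; a non-zero value is the repair timestamp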

Re: [Marketing Mail] Re: [Marketing Mail] Cassandra 2.1: Snapshot data changing while transferring

2016-06-01 Thread Reynald Bourtembourg
Hi Paul, If I understand correctly, you are making a tar file with all the folders named "snapshots" (i.e. the folder under which all the snapshots are created, so you have one snapshots folder per table). If this is the case, when you are executing "nodetool repair", Cassandra will create
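One way to keep repair-created snapshots out of the archive is to snapshot with an explicit tag and only tar the directories belonging to that tag; a sketch with an illustrative tag name:

    nodetool snapshot -t backup_20160601 <keyspace>
    find /var/lib/cassandra/data/<keyspace> -type d -path "*/snapshots/backup_20160601" \
        | tar czf backup_20160601.tar.gz -T -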

Re: [Marketing Mail] Re: Memory leak and lockup on our 2.2.7 Cassandra cluster.

2016-08-03 Thread Reynald Bourtembourg
Hi, Maybe Ben was referring to this issue which has been mentioned recently on this mailing list: https://issues.apache.org/jira/browse/CASSANDRA-11887 Cheers, Reynald On 03/08/2016 18:09, Romain Hardouin wrote: >Curious why the 2.2 to 3.x upgrade path is risky at best. I guess that upgrade

Re: 回复: data loss in different DC

2017-09-28 Thread Reynald Bourtembourg
, Reynald Bourtembourg <reynald.bourtembo...@esrf.fr> wrote: Hi, You can write with CL=EACH_QUORUM and read with CL=LOCAL_QUORUM to get strong consistency. Kind regards, Reynald On 28/09/2017 13:46, Peng Xiao wrote:

Re: 回复: data loss in different DC

2017-09-28 Thread Reynald Bourtembourg
Hi, You can write with CL=EACH_QUORUM and read with CL=LOCAL_QUORUM to get strong consistency. Kind regards, Reynald On 28/09/2017 13:46, Peng Xiao wrote: even with CL=QUORUM, there is no guarantee to read the same data in DC2, right? Then multiple DCs make no sense?
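In a cqlsh session, that combination looks roughly like the sketch below (ks.t is a hypothetical table):

    -- writer side
    CONSISTENCY EACH_QUORUM;
    INSERT INTO ks.t (id, val) VALUES (1, 'x');
    -- reader side (either DC)
    CONSISTENCY LOCAL_QUORUM;
    SELECT val FROM ks.t WHERE id = 1;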