Re: Documentation about TTL and tombstones

2024-03-16 Thread Sebastian Marsching
> That's not how gc_grace_seconds works. > gc_grace_seconds controls how much time must pass *after* a tombstone is created > before it can actually be deleted, in order to give you enough time to run repairs. > > Say you have data that is about to expire on March 16 8am, and > gc_grace_seconds is 10 days.

Re: Documentation about TTL and tombstones

2024-03-16 Thread Gil Ganz
That's not how gc_grace_seconds works. gc_grace_seconds controls how much time must pass *after* a tombstone is created before it can actually be deleted, in order to give you enough time to run repairs. Say you have data that is about to expire on March 16 8am, and gc_grace_seconds is 10 days. After Mar 16
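
To make the timing concrete, here is a minimal CQL sketch; the keyspace, table, and TTL value are made up for illustration. With gc_grace_seconds at 10 days, a cell expiring on March 16 8am produces a tombstone that only becomes eligible for purging during compaction from March 26 8am.

    -- Illustrative only: names and values are assumptions, not from the thread.
    CREATE TABLE IF NOT EXISTS demo.events (
        id   text,
        ts   timestamp,
        data text,
        PRIMARY KEY (id, ts)
    ) WITH default_time_to_live = 2592000  -- cells expire 30 days after the write
      AND gc_grace_seconds = 864000;       -- 10 days; an expired cell (now a tombstone)
                                           -- is purgeable only from expiry + 10 days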

Re: Documentation about TTL and tombstones

2024-03-14 Thread Sebastian Marsching
> by reading the documentation about TTL > https://cassandra.apache.org/doc/4.1/cassandra/operating/compaction/index.html#ttl > It mentions that it creates a tombstone when data expires; how is that > possible without writing the tombstone to the table? I thought TTL > doesn't create

Documentation about TTL and tombstones

2024-03-14 Thread Jean Carlo
Hello community, by reading the documentation about TTL https://cassandra.apache.org/doc/4.1/cassandra/operating/compaction/index.html#ttl It mentions that it creates a tombstone when data expires; how is that possible without writing the tombstone to the table? I thought TTL doesn't create
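
For context, a TTL travels with the written cell itself, so no separate tombstone write happens at expiry time; once the TTL elapses, the cell is simply treated as a tombstone during reads and compaction. A minimal CQL sketch, with hypothetical names and values:

    INSERT INTO demo.events (id, ts, data)
    VALUES ('sensor-1', toTimestamp(now()), 'payload')
    USING TTL 86400;   -- the cell carries its own expiry and becomes a
                       -- tombstone 24 hours later, without any extra write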

RE: SStables stored in directory with different table ID than the one found in system_schema.tables

2024-03-13 Thread Michalis Kotsiouros (EXT) via user
Hello everyone, The recovery was performed successfully some days ago. Finally, the problematic datacenter was removed and added back to the cluster. BR MK From: Michalis Kotsiouros (EXT) via user Sent: February 12, 2024 17:59 To: Sebastian Marsching ; user@cassandra.apache.org Cc:

Re: Question about commit consistency level for Lightweight Transactions in Paxos v2

2024-03-11 Thread Weng, Justin via user
So for upgrading Paxos to v2, the non-serial consistency level should be set to ANY or LOCAL_QUORUM, and the serial consistency level should still be SERIAL or LOCAL_SERIAL. Got it, thanks! From: Laxmikant Upadhyay Date: Tuesday, 12 March 2024 at 7:33 am To: user@cassandra.apache.org Cc:

Call for Presentations: Cassandra @ Community Over Code North America 2024

2024-03-11 Thread Paulo Motta
Hi, After a successful experience in ApacheCon 2022, the Cassandra track is back to Community Over Code North America 2024 to be held in Denver, Colorado, October 7-10, 2024. I will be facilitating this track and I would like to request abstract drafts in the following topics to be presented in

Re: Question about commit consistency level for Lightweight Transactions in Paxos v2

2024-03-11 Thread Laxmikant Upadhyay
You need to set both in the case of LWT. Your regular non-serial consistency level will only be applied during the commit phase of the LWT. On Wed, 6 Mar, 2024, 03:30 Weng, Justin via user, wrote: > Hi Cassandra Community, > > > > I’ve been investigating Cassandra Paxos v2 (as implemented in CEP-14 >
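
A short cqlsh sketch of that split, using hypothetical keyspace and table names: the serial consistency level governs the Paxos prepare/propose rounds of the LWT, while the regular consistency level only applies to its commit phase.

    CONSISTENCY LOCAL_QUORUM;          -- used for the commit phase
    SERIAL CONSISTENCY LOCAL_SERIAL;   -- used for the Paxos rounds
    UPDATE ks.t1 SET val = 'new' WHERE pk = 1 IF val = 'old';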

Question about commit consistency level for Lightweight Transactions in Paxos v2

2024-03-05 Thread Weng, Justin via user
Hi Cassandra Community, I’ve been investigating Cassandra Paxos v2 (as implemented in CEP-14) which improves the performance of lightweight transaction (LWT). But I’ve got a question about setting the commit

Streaming a working session with 5.0 - UCS

2024-03-05 Thread Jon Haddad
Hey everyone, Today starting at 10am PT I'm going to be streaming my session messing with 5.0, looking at UCS. I'm doing this with my easy-cass-lab and easy-cass-stress tools using a build of C* from last night. I'll also show some of the cool things you can do with my tools. I'll be running

Re: Best Practices for Managing Concurrent Client Connections in Cassandra

2024-02-29 Thread Andrew Weaver
We've used these settings in production with no issues. What has been more valuable to us though is limiting the rate of client connections via iptables. Often times users configure an aggressive reconnection policy that floods the cluster with connections in certain circumstances like a node
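
As a rough illustration of the iptables approach described above; the port, threshold, and list name are assumptions, not a recommendation for any particular cluster.

    # Drop sources that open more than 10 new CQL (port 9042) connections within 60 seconds.
    iptables -A INPUT -p tcp --dport 9042 --syn \
        -m recent --name cql_clients --update --seconds 60 --hitcount 10 -j DROP
    iptables -A INPUT -p tcp --dport 9042 --syn \
        -m recent --name cql_clients --set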

Re: Best Practices for Managing Concurrent Client Connections in Cassandra

2024-02-29 Thread Bowen Song via user
They are suitable for production use for protecting your Cassandra server, not the clients. The clients likely will experience an error when the limit is reached, and it needs to handle that error appropriately. What you really want to do probably are: 1. change the client's behaviour, limit

Best Practices for Managing Concurrent Client Connections in Cassandra

2024-02-29 Thread Naman kaushik
Hello Cassandra Community, We've been experiencing occasional spikes in the number of client connections to our Cassandra cluster, particularly during high-volume API request periods. We're using persistent connections, and we've noticed that the number of connections can increase significantly

RE: Check out new features in K8ssandra and Mission Control

2024-02-28 Thread Durity, Sean R via user
The k8ssandra requirement is a major blocker. Sean R. Durity INTERNAL USE From: Christopher Bradford Sent: Tuesday, February 27, 2024 9:49 PM To: user@cassandra.apache.org Cc: Christopher Bradford Subject: [EXTERNAL] Re: Check out new features in K8ssandra and Mission Control Hey Jon, *

Re: stress testing & lab provisioning tools

2024-02-28 Thread Alexander DEJANOVSKI
Hey Jon, It's awesome to see that you're reviving both these projects! I was eager to get my hands on an updated version of tlp-cluster with up-to-date AMIs. tlp-stress is by far the best Cassandra stress tool I've worked with, and I recommend everyone to test easy-cass-stress and build

Re: Check out new features in K8ssandra and Mission Control

2024-02-27 Thread Christopher Bradford
Hey Jon, * What aspects of Mission Control are dependent on using K8ssandra? > Mission Control bundles in K8ssandra for the core automation workflows (lifecycle management, cluster operations, medusa & reaper). In fact we include the K8ssandraSpec in the top-level MissionControlCluster resource

Re: Check out new features in K8ssandra and Mission Control

2024-02-27 Thread Jon Haddad
Hey Chris - this looks pretty interesting! It looks like there's a lot of functionality in here. * What aspects of Mission Control are dependent on using K8ssandra? * Can Mission Control work without K8ssandra? * Is mission control open source? * I'm not familiar with Vector - does it require an

Check out new features in K8ssandra and Mission Control

2024-02-27 Thread Christopher Bradford
Hey C* folks, I'm excited to share that the DataStax team has just released Mission Control, a new operations platform for running Apache Cassandra and DataStax Enterprise. Built around the open source core of K8ssandra, we've

stress testing & lab provisioning tools

2024-02-26 Thread Jon Haddad
Hey everyone, Over the last several months I've put a lot of work into 2 projects I started back at The Last Pickle, for stress testing Cassandra and for building labs in AWS. You may know them as tlp-stress and tlp-cluster. Since I haven't worked at TLP in almost half a decade, and am the

Re: Question Regarding Cassandra-19336

2024-02-25 Thread manish khandelwal
It looks like a critical bug for a multi-DC setup using a high number of vnodes and running full repair with the -pr option, since the number of parallel repair sessions can be as high as the number of vnodes. Thus it can fill up memory, causing an OOM or direct buffer memory OOM. It should get prioritized for

Apache Cassandra Contributor Meeting Next Tuesday

2024-02-23 Thread Constantia
Hi Cassandra community! Our next Apache Cassandra Contributor Meeting will feature *Jaydeepkumar Chovatia* and his work on “*CEP-37: The Cassandra Official Repair Solution*”. *When*: 10:00 PT / 13:00 ET / 17:00 UTC / 22:30 IST. *Register*:

Re: Cassandra 4.1 compaction thread no longer low priority (cpu nice)

2024-02-23 Thread Pierre Fersing
Hi, Thanks for your detailed answers. I understand the reason why using low priority compaction may not be a great idea in the general case (the example with too high CPU for reading). I’ll give the compaction throughput option a try, which I totally forgot exists. It may fix the

Re: Cassandra 4.1 compaction thread no longer low priority (cpu nice)

2024-02-22 Thread Dmitry Konstantinov
Thank you for highlighting this, it looks like I need to refresh my knowledge about IO schedulers :-) Cheers, Dmitry On Thu, 22 Feb 2024 at 22:18, Bowen Song via user wrote: > On the IO scheduler point, cfq WAS the only scheduler supporting IO > priorities (such as ionice) shipped by default

Re: Cassandra 4.1 compaction thread no longer low priority (cpu nice)

2024-02-22 Thread Bowen Song via user
On the IO scheduler point, cfq WAS the only scheduler supporting IO priorities (such as ionice) shipped by default with the Linux kernel, but that has changed since bfq and mq-deadline were added to the Linux kernel. Both bfq and mq-deadline support IO priority, as documented here:

Re: Cassandra 4.1 compaction thread no longer low priority (cpu nice)

2024-02-22 Thread Dmitry Konstantinov
Hi all, I was not participating in the changes but I analyzed the question some time ago from another side. There were also changes related to -XX:ThreadPriorityPolicy JVM option. When you set a thread priority for a Java thread it does not mean it is always propagated as a native OS thread

Re: Cassandra 4.1 compaction thread no longer low priority (cpu nice)

2024-02-22 Thread Bowen Song via user
Hi Pierre, Is there anything stopping you from using the compaction_throughput option in the cassandra.yaml file to manage the performance impact of compaction operations? With
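
For reference, the compaction throttle can also be adjusted at runtime without a restart; a small sketch, where the 16 MB/s value is just an example:

    nodetool setcompactionthroughput 16   # throttle compaction to ~16 MB/s (0 removes the throttle)
    nodetool getcompactionthroughput      # confirm the current setting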

Cassandra 4.1 compaction thread no longer low priority (cpu nice)

2024-02-22 Thread Pierre Fersing
Hello all, I've recently upgraded to Cassandra 4.1 and see a change in compaction behavior that seems unwanted: * With Cassandra 3.11 compaction was run by thread in low priority and thus using CPU nice (visible using top) (I believe Cassandra 4.0 also had this behavior) * With Cassandra

Re: Question Regarding Cassandra-19336

2024-02-21 Thread C. Scott Andreas
The “Since Version” for the ticket is set to 3.0.19, presumably based on C-14096 as the predecessor for this ticket. C-14096 was merged up into 3.11.x in the 3.11.5 release, so 3.11.5 would be the equivalent “since version” for that release series. The patch addressing this ticket is included

Question Regarding Cassandra-19336

2024-02-21 Thread ranju goel
Hi All, https://issues.apache.org/jira/browse/CASSANDRA-19336 Does the same issue mentioned in the above JIRA exist for version 3.11.x? Regards Ranju

Community Over Code Asia 2024 Travel Assistance Applications now open!

2024-02-20 Thread Gavin McDonald
Hello to all users, contributors and Committers! The Travel Assistance Committee (TAC) are pleased to announce that travel assistance applications for Community over Code Asia 2024 are now open! We will be supporting Community over Code Asia, Hangzhou, China July 26th - 28th, 2024. TAC exists

Re: Requesting Feedback for Cassandra as a backup solution.

2024-02-19 Thread Gowtham S
Thanks for your valuable reply, will check. Thanks and regards, Gowtham S On Mon, 19 Feb 2024 at 15:46, Bowen Song via user wrote: > You can have a read at > https://www.datastax.com/blog/cassandra-anti-patterns-queues-and-queue-datasets > > Your table schema does not include the most

Cassandra Town Hall this Thursday

2024-02-19 Thread Constantia
Hi Everyone, The next Apache Cassandra Town Hall is happening this *Thursday, 8AM PST* Apache Cassandra Catalyst, Sarma Pydipally, will be presenting “Prometheus-Grafana Dashboard for Apache Cassandra 5” You can register for the meetup here.

Re: Requesting Feedback for Cassandra as a backup solution.

2024-02-19 Thread Bowen Song via user
You can have a read at https://www.datastax.com/blog/cassandra-anti-patterns-queues-and-queue-datasets Your table schema does not include the most important piece of information - the partition keys (and clustering keys, if any). Keep in mind that you can only efficiently query Cassandra by

Re: Requesting Feedback for Cassandra as a backup solution.

2024-02-18 Thread Gowtham S
Hi Bowen, > which is a well documented anti-pattern. Can you please explain more on this? I'm not aware of it. It will be helpful to make decisions. Please find the table schema below. *Table schema* TopicName - text Partition - int MessageUUID - text Actual data - text OccurredTime - Timestamp

Re: Requesting Feedback for Cassandra as a backup solution.

2024-02-17 Thread Slater, Ben via user
TBH, this sounds to me like a very expensive (in terms of effort) way to deal with whatever Kafka unreliability you’re having. We have lots of both Kafka and Cassandra clusters under management and I have no doubt that Kafka is capable of being as reliable as Cassandra (and both are capable

Re: Requesting Feedback for Cassandra as a backup solution.

2024-02-17 Thread Bowen Song via user
Hi Gowtham, On the face of it, it sounds like you are planning to use Cassandra for a queue-like application, which is a well documented anti-pattern. If that's not the case, can you please show the table schema and some example queries? Cheers, Bowen On 17/02/2024 08:44, Gowtham S wrote:

Re: Requesting Feedback for Cassandra as a backup solution.

2024-02-17 Thread Gowtham S
Thanks for your suggestion Thanks and regards, Gowtham S On Sat, 17 Feb 2024 at 14:58, CPC wrote: > hi, > > We implemented same strategy in one of our customers. Since 2016 we had > one downtime in one DC because of high temperature(whole physical DC > shutdown). > > With that approach I

Re: Requesting Feedback for Cassandra as a backup solution.

2024-02-17 Thread CPC
hi, We implemented the same strategy for one of our customers. Since 2016 we had one downtime in one DC because of high temperature (whole physical DC shutdown). With that approach I assume you will use Cassandra as a queue. You have to be careful about modeling and should use multiple partitions may

Requesting Feedback for Cassandra as a backup solution.

2024-02-17 Thread Gowtham S
Dear Cassandra Community, I am reaching out to seek your valuable feedback and insights on a proposed solution we are considering for managing Kafka outages using Cassandra. At our organization, we heavily rely on Kafka for real-time data processing and messaging. However, like any technology,

Re: Switching to Incremental Repair

2024-02-15 Thread Chris Lohfink
I would recommend adding something to C* to be able to flip the repaired state on all sstables quickly (with default OSS you can turn nodes off one at a time and use sstablerepairedset). It's a life saver to be able to revert back to non-IR if a migration is going south. The same can be used to quickly switch
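
A rough sketch of that revert path with the stock tooling; the data path and file list are examples, and the node has to be stopped while sstablerepairedset runs.

    # Collect the table's data files, then mark them all as unrepaired again.
    find /var/lib/cassandra/data/my_ks/my_table-*/ -name '*-Data.db' > /tmp/sstables.txt
    sstablerepairedset --really-set --is-unrepaired -f /tmp/sstables.txt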

Re: Switching to Incremental Repair

2024-02-15 Thread Bowen Song via user
The gc_grace_seconds, which default to 10 days, is the maximal safe interval between repairs. How much data gets written during that period of time? Will your nodes run out of disk space because of the new data written during that time? If so, it sounds like your nodes are dangerously close to

Re: Switching to Incremental Repair

2024-02-15 Thread Kristijonas Zalys
Hi folks, One last question regarding incremental repair. What would be a safe approach to temporarily stop running incremental repair on a cluster (e.g.: during a Cassandra major version upgrade)? My understanding is that if we simply stop running incremental repair, the cluster's nodes can, in

[RELEASE] Apache Cassandra 4.1.4 released

2024-02-14 Thread Štefan Miklošovič
The Cassandra team is pleased to announce the release of Apache Cassandra version 4.1.4. Apache Cassandra is a fully distributed database. It is the right choice when you need scalability and high availability without compromising performance. https://cassandra.apache.org/ Downloads of source

RE: SStables stored in directory with different table ID than the one found in system_schema.tables

2024-02-12 Thread Michalis Kotsiouros (EXT) via user
Hello Sebastian and community, Thanks a lot for the post. It is really helpful. After some additional observations, I am more concerned about trying to rename/move the sstables directory. I have observed that my client processes complain about missing columns even though those columns appear

Re: SStables stored in directory with different table ID than the one found in system_schema.tables

2024-02-09 Thread Sebastian Marsching
You might find the following discussion from the mailing-list archive helpful: https://lists.apache.org/thread/6hnypp6vfxj1yc35ptp0xf15f11cx77d This thread discusses a similar situation and gives a few pointers on when it might be safe to simply move the SSTables around. > On 08.02.2024 at 13:06

Re: Regarding Cassandra 4 Support End time

2024-02-09 Thread Mukhesh Chowdary
Unsubscribe Regards V. Mukhesh Chowdary Architect. CA/2012/55397 On Fri, 9 Feb 2024 at 12:12, ranju goel wrote: > Hi All, > > As per the link (https://cassandra.apache.org/_/download.html) Cassandra > 4.0 is going to be maintained till release of 5.1. (July 2024 tentative). > Since Cassandra

Regarding Cassandra 4 Support End time

2024-02-08 Thread ranju goel
Hi All, As per the link (https://cassandra.apache.org/_/download.html) Cassandra 4.0 is going to be maintained till the release of 5.1 (July 2024, tentative). Since Cassandra 5 is yet to be released, can we expect Cassandra 4.0.x support to be extended? This information will help us in planning our

RE: SStables stored in directory with different table ID than the one found in system_schema.tables

2024-02-08 Thread Michalis Kotsiouros (EXT) via user
Hello everyone, I have found this post on-line and seems to be recent. Mismatch between Cassandra table uuid in linux file directory and system_schema.tables - Stack

SStables stored in directory with different table ID than the one found in system_schema.tables

2024-02-08 Thread Michalis Kotsiouros (EXT) via user
Hello community, I have a Cassandra server on 3.11.13 on SLES 12.5. I have noticed the following line in the logs: Datacenter A org.apache.cassandra.db.UnknownColumnFamilyException: Couldn't find table for cfId d8c1bea0-82ed-11ee-8ac8-1513e17b60b1. If a table was just created, this is likely

Full repair with -pr option getting stuck on Cassandra 4.0.10

2024-02-08 Thread manish khandelwal
In a two datacenter cluster (11 nodes each) we are seeing repair getting stuck. The issue is that when repair is triggered on a particular keyspace, the repair session is lost and Cassandra never returns for that particular session. There are no "WARN" or "ERROR" logs in Cassandra logs. No message dropped seen

Re: Switching to Incremental Repair

2024-02-07 Thread Bowen Song via user
The over-streaming is only problematic for the repaired SSTables, but it can be triggered by inconsistencies within the unrepaired SSTables during an incremental repair session. This is because although an incremental repair will only compare the unrepaired SSTables, it will stream both

Re: Switching to Incremental Repair

2024-02-07 Thread Sebastian Marsching
Thank you very much for your explanation. Streaming happens on the token range level, not the SSTable level, right? So, when running an incremental repair before the full repair, the problem that “some unrepaired SSTables are being marked as repaired on one node but not on another” should not

Re: Switching to Incremental Repair

2024-02-07 Thread Bowen Song via user
Unfortunately repair doesn't compare each partition individually. Instead, it groups multiple partitions together and calculate a hash of them, stores the hash in a leaf of a merkle tree, and then compares the merkle trees between replicas during a repair session. If any one of the partitions

Re: Switching to Incremental Repair

2024-02-07 Thread Sebastian Marsching
> Caution, using the method you described, the amount of data streamed at the > end with the full repair is not the amount of data written between stopping > the first node and the last node, but depends on the table size, the number > of partitions written, their distribution in the ring and

Re: Switching to Incremental Repair

2024-02-07 Thread Bowen Song via user
Caution, using the method you described, the amount of data streamed at the end with the full repair is not the amount of data written between stopping the first node and the last node, but depends on the table size, the number of partitions written, their distribution in the ring and the

Re: Switching to Incremental Repair

2024-02-07 Thread Sebastian Marsching
> That's a feature we need to implement in Reaper. I think disallowing the > start of the new incremental repair would be easier to manage than pausing > the full repair that's already running. It's also what I think I'd expect as > a user. > > I'll create an issue to track this. Thank you,

Re: Switching to Incremental Repair

2024-02-07 Thread Sebastian Marsching
> Full repair running for an entire week sounds excessively long. Even if > you've got 1 TB of data per node, 1 week means the repair speed is less than > 2 MB/s, that's very slow. Perhaps you should focus on finding the bottleneck > of the full repair speed and work on that instead. We store

Re: Switching to Incremental Repair

2024-02-07 Thread Bowen Song via user
Just one more thing. Make sure you run 'nodetool repair -full' instead of just 'nodetool repair'. That's because the command's default was changed in Cassandra 2.x. The default was full repair before that change, but the new default now is incremental repair. On 07/02/2024 10:28, Bowen Song
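
In other words, where the keyspace name is just an example:

    nodetool repair -full my_keyspace   # explicit full repair
    nodetool repair my_keyspace         # incremental repair, the default since that 2.x change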

Re: Switching to Incremental Repair

2024-02-07 Thread Bowen Song via user
Not disabling auto-compaction may result in repaired SSTables getting compacted together with unrepaired SSTables before the repair state is set on them, which leads to mismatch in the repaired data between nodes, and potentially very expensive over-streaming in a future full repair. You
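
A minimal sketch of that step, with hypothetical keyspace and table names:

    nodetool disableautocompaction my_ks my_table
    # ... run the incremental repair / migration steps for that table ...
    nodetool enableautocompaction my_ks my_table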

Re: Switching to Incremental Repair

2024-02-06 Thread Kristijonas Zalys
Hi folks, Thank you all for your insight, this has been very helpful. I was going through the migration process here and I’m not entirely sure why disabling autocompaction on the node is required?

Incremental repairs and materialized views

2024-02-05 Thread Panagiotis Melidis via user
Hi everyone, Could you please answer the following questions regarding materialized views or point me to the right direction in the documentation? We are currently using Cassandra v4.0.11. 1. Are incremental repairs supported for the base table of Materialized views? 2. Are incremental

Re: Switching to Incremental Repair

2024-02-04 Thread Alexander DEJANOVSKI
Hi Sebastian, That's a feature we need to implement in Reaper. I think disallowing the start of the new incremental repair would be easier to manage than pausing the full repair that's already running. It's also what I think I'd expect as a user. I'll create an issue to track this. Le sam. 3

Re: Switching to Incremental Repair

2024-02-03 Thread Bowen Song via user
Full repair running for an entire week sounds excessively long. Even if you've got 1 TB of data per node, 1 week means the repair speed is less than 2 MB/s, that's very slow. Perhaps you should focus on finding the bottleneck of the full repair speed and work on that instead. On 03/02/2024

Re: Switching to Incremental Repair

2024-02-03 Thread Sebastian Marsching
Hi, > 2. use an orchestration tool, such as Cassandra Reaper, to take care of that > for you. You will still need monitor and alert to ensure the repairs are run > successfully, but fixing a stuck or failed repair is not very time sensitive, > you can usually leave it till Monday morning if it

Re: Switching to Incremental Repair

2024-02-03 Thread Bowen Song via user
Hi Kristijonas, It is not possible to run two repairs, regardless of whether they are incremental or full, for the same token range and on the same table concurrently. You have two options: 1. create a schedule that doesn't overlap, e.g. run incremental repair daily except the 1st of each
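
One way to express such a non-overlapping schedule, sketched as crontab entries; the times and keyspace name are examples only:

    # incremental repair every day except the 1st of the month
    0 2 2-31 * *  nodetool repair my_keyspace
    # full repair on the 1st of the month
    0 2 1 * *     nodetool repair -full my_keyspace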

Community over Code EU 2024 Travel Assistance Applications now open!

2024-02-03 Thread Gavin McDonald
Hello to all users, contributors and Committers! The Travel Assistance Committee (TAC) are pleased to announce that travel assistance applications for Community over Code EU 2024 are now open! We will be supporting Community over Code EU, Bratislava, Slovakia, June 3rd - 5th, 2024. TAC exists

[no subject]

2024-02-03 Thread Gavin McDonald
Hello to all users, contributors and Committers! The Travel Assistance Committee (TAC) are pleased to announce that travel assistance applications for Community over Code EU 2024 are now open! We will be supporting Community over Code EU, Bratislava, Slovakia, June 3rd - 5th, 2024. TAC exists

Re: Switching to Incremental Repair

2024-02-02 Thread manish khandelwal
They(incremental and full repairs) are required to run separately at different times. You need to identify a schedule, for example, running incremental repairs every week for 3 weeks and then run full repair in the 4th week. Regards Manish On Sat, Feb 3, 2024 at 7:29 AM Kristijonas Zalys wrote:

Re: Switching to Incremental Repair

2024-02-02 Thread Kristijonas Zalys
Hi Bowen, Thank you for your help! So given that we would need to run both incremental and full repair for a given cluster, is it safe to have both types of repair running for the same token ranges at the same time? Would it not create a race condition? Thanks, Kristijonas On Fri, Feb 2, 2024

Re: Switching to Incremental Repair

2024-02-02 Thread Bowen Song via user
Hi Kristijonas, To answer your questions: 1. It's still necessary to run full repair on a cluster on which incremental repair is run periodically. The frequency of full repair is more of an art than science. Generally speaking, the less reliable the storage media, the more frequently full

Switching to Incremental Repair

2024-02-02 Thread Kristijonas Zalys
Hi folks, I am working on switching from full to incremental repair in Cassandra v4.0.6 (soon to be v4.1.3) and I have a few questions. 1. Is it necessary to run regular full repair on a cluster if I already run incremental repair? If yes, what frequency would you recommend for full

Have you run Cassandra 3.11.x on Alma9 or Alma8

2024-02-01 Thread Surbhi Gupta
Hi, Have any of you run Cassandra 3.11.x on Alma9 or Alma8? Any issues or concerns? We are going to upgrade from CentOS 7 to Alma8 or 9, and wanted to understand if there are any known issues. Thanks Surbhi

Re: Tests failing for ppc64le architecture.

2024-01-31 Thread C. Scott Andreas
Pardon me, that should read user-unsubscr...@cassandra.apache.org for this list. On Jan 31, 2024, at 10:07 AM, C. Scott Andreas wrote: Hi Matt, To unsubscribe from this list, send a blank email to dev-unsubscr...@cassandra.apache.org . All messages or replies to the list are distributed to all

Re: Tests failing for ppc64le architecture.

2024-01-31 Thread C. Scott Andreas
Hi Matt, To unsubscribe from this list, send a blank email to dev-unsubscr...@cassandra.apache.org . All messages or replies to the list are distributed to all subscribers of the list. As the project is volunteer-run, others are not able to take this action on behalf of subscribers. For more

Re: Tests failing for ppc64le architecture.

2024-01-31 Thread Matt F
unsubscribe On Wed, Jan 31, 2024 at 2:23 AM Sunidhi Gaonkar via user < user@cassandra.apache.org> wrote: > Hi Bowen, > > My bad, Thank you for pointing this out, I will try the Cassandra dev > mailing list. > > Regards, > > Sunidhi Gaonkar. > -- > *From:* Bowen Song

RE: Tests failing for ppc64le architecture.

2024-01-31 Thread Sunidhi Gaonkar via user
Hi Bowen, My bad, Thank you for pointing this out, I will try the Cassandra dev mailing list. Regards, Sunidhi Gaonkar. From: Bowen Song via user Sent: Tuesday, January 30, 2024 9:02 PM To: user@cassandra.apache.org Cc: Bowen Song Subject: [EXTERNAL] Re:

Re: Tests failing for ppc64le architecture.

2024-01-30 Thread Bowen Song via user
Hi Sunidhi, In case you haven't noticed, this is the Cassandra user mailing list, not the dev mailing list. Most people on this mailing list have never attempted to build Cassandra from the source code. IMHO you should try the Cassandra dev mailing list for this type of thing. Cheers,

Any Job opening for Cassandra Administrator in your team/company?

2024-01-30 Thread Daniel Johnson
Hi, Hope you all are well. Just wanted to understand if there is a Job opening in your company/team for Cassandra Administrator. I have around 10 years of experience. If you know anything then, Please email me. Thanks in Advance Regards Daniel

Re: Tests failing for ppc64le architecture.

2024-01-30 Thread Sunidhi Gaonkar via user
Hi team, any thoughts on this? Thank you and Regards, Sunidhi Gaonkar. From: Sunidhi Gaonkar Sent: Thursday, January 11, 2024 7:19 PM To: user@cassandra.apache.org Subject: Tests failing for ppc64le architecture. Hi Team, I am working on validating

Call for Papers now open: Azure Cosmos DB Conf 2024

2024-01-25 Thread German Eichberger via user
All, I got clarification that they are also interested in Cassandra 5.0 and Vector Search - ideally using Azure Managed Instances for Apache Cassandra. In any case, here is the link: https://devblogs.microsoft.com/cosmosdb/call-for-papers-now-open-azure-cosmos-db-conf-2024/ and submissions

Re: Over streaming in one node during repair.

2024-01-24 Thread manish khandelwal
Thanks Bowen and Sebastian. That is good information. I am still figuring out what could have gone wrong. Will update here once I get something. Regards Manish On Wed, Jan 24, 2024 at 3:40 PM Bowen Song via user <user@cassandra.apache.org> wrote: > Some common causes of over-streaming: > >

Re: Over streaming in one node during repair.

2024-01-24 Thread Bowen Song via user
Some common causes of over-streaming: * "repair_session_space" is too small (either manually specified, or heap size is small and data on disk is large) * Manually deleting SSTable files * Unexpected foreign (e.g. from a backup) SSTable files * Marking SSTable as repaired or unrepaired

Re: Over streaming in one node during repair.

2024-01-23 Thread Sebastian Marsching
I would check whether some SSTables are marked as repaired while others are not (by running sstablemetadata and checking the value of repairedAt). An inconsistency in the repaired state might explain the overstreaming. During repairs, data from repaired SSTables on one node is only compared
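
A quick sketch of that check; the data path is an example, and "Repaired at: 0" in the output means the SSTable is still marked as unrepaired.

    for f in /var/lib/cassandra/data/my_ks/my_table-*/*-Data.db; do
        echo "$f"
        sstablemetadata "$f" | grep -i 'repaired at'
    done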

Over streaming in one node during repair.

2024-01-23 Thread manish khandelwal
In one of our two-datacenter setups (3+3), one Cassandra node is getting a lot of data streamed from other nodes during repair, to the extent that it fills up and ends with a full disk. I am not able to understand what could be the reason that this node is misbehaving in the cluster. Cassandra version is

[RELEASE] Apache Cassandra 4.0.12 released

2024-01-23 Thread Štefan Miklošovič
The Cassandra team is pleased to announce the release of Apache Cassandra version 4.0.12. Apache Cassandra is a fully distributed database. It is the right choice when you need scalability and high availability without compromising performance. http://cassandra.apache.org/ Downloads of source

unsubscribe

2024-01-22 Thread Sachin Uplaonkar
-S.

unsubscribe

2024-01-22 Thread Stefano Clini
Regards, Stefano

Re: Race condition in QueryProcessor::prepare API

2024-01-20 Thread Jaydeep Chovatia
I think this is a regression introduced as part of CASSANDRA-17248; the following code got introduced in QueryProcessor.java since C* 3.0.26. //

Race condition in QueryProcessor::prepare API

2024-01-19 Thread Jaydeep Chovatia
Hi, Today, in our production, we came across the following scenario: 1. We have 100 nodes of the Cassandra cluster on 4.0.6, and our client uses PreparedStatement, say, "*SELECT * FROM T1 WHERE PK=?*" 2. We applied a schema change to add a *regular* column, "*ALTER TABLE T1 ADD

Unsubscribe

2024-01-19 Thread Ridwan Santoso
Unsubscribe

Unsubscribe

2024-01-19 Thread Ghazi Naceur
Unsubscribe

Unsubscribe

2024-01-19 Thread Lu Niu
Unsubscribe

Re: COMMERCIAL:Re: COMMERCIAL:Re: COMMERCIAL:Re: system_schema.tables id and table uuid on disk mismatch

2024-01-18 Thread Bowen Song via user
Without knowing the cause of the issue, it's hard to tell what the correct steps are to recover from it. I would recommend you have a look at the logs and figure out what the cause of the issue was, and then make a recovery plan and also put preventive measures in place to stop it from

Re: COMMERCIAL:Re: COMMERCIAL:Re: COMMERCIAL:Re: system_schema.tables id and table uuid on disk mismatch

2024-01-18 Thread ENES ATABERK
OK, thank you! What do you think about the following approach: 1. creating empty, correct table-id directories in the Linux filesystem according to the system_schema.tables id column 2. importing data with nodetool import from the incorrect directory 3. removing the incorrect directory
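
Step 2 of that outline, sketched with placeholder keyspace, table, and directory names (and bearing in mind the caution in the reply above about confirming the root cause first):

    nodetool import my_ks my_table /var/lib/cassandra/data/my_ks/my_table-<stale_table_id>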

Re: COMMERCIAL:Re: COMMERCIAL:Re: system_schema.tables id and table uuid on disk mismatch

2024-01-18 Thread Bowen Song via user
I know dropping a table and then creating a new table with the same name can lead to that result, which is expected. If that wasn't what happened, it may be a bug in Cassandra. If you can reproduce the behaviour, you should raise a Jira ticket for it. On 18/01/2024 14:44, ENES ATABERK wrote:

Re: COMMERCIAL:Re: COMMERCIAL:Re: system_schema.tables id and table uuid on disk mismatch

2024-01-18 Thread ENES ATABERK
It has the same mismatched id on all nodes, not just one node. From: Bowen Song via user Sent: Thursday, January 18, 2024 3:18:11 PM To: user@cassandra.apache.org Cc: Bowen Song Subject: COMMERCIAL:Re: COMMERCIAL:Re: system_schema.tables id and table uuid on disk

Re: COMMERCIAL:Re: system_schema.tables id and table uuid on disk mismatch

2024-01-18 Thread Bowen Song via user
Was the table ID mismatching only on one node or on all nodes? Mismatching on one node is usually the result of a race condition, but on all nodes isn't. The solution I mentioned earlier only applies to the one-node situation. On 18/01/2024 13:14, ENES ATABERK wrote: Hi all, Thanks for

Re: COMMERCIAL:Re: system_schema.tables id and table uuid on disk mismatch

2024-01-18 Thread ENES ATABERK
Hi all, Thanks for your responses. The version is Cassandra 4.1.3. After I restarted all the nodes one-by-one, Cassandra created the corrected-id folder and kept the incorrect one, as you said. But then I cannot see the data from cqlsh; it gives me no result. After I have imported the data from

Re: system_schema.tables id and table uuid on disk mismatch

2024-01-18 Thread Bowen Song via user
It sounds like you have done some concurrent table creation/deletion in the past (e.g. CREATE TABLE IF NOT EXISTS from multiple clients), which resulted in this mismatch. After you restarted the node, Cassandra corrected it by discarding the old table ID and any data associated with it. This

Re: system_schema.tables id and table uuid on disk mismatch

2024-01-18 Thread Štefan Miklošovič
Hi, what Cassandra version are you on? I do not see this behaviour in 5.0-beta. When I create a table, its id will match the id part of table directory on the disk. Regards On Thu, Jan 18, 2024 at 8:49 AM ENES ATABERK wrote: > Hi all, > > we have detected that table-uuid in linux file
