> >data would then start being siloed for recovery when the dc comes back
> Do we expect any degradation due to siloed data in the serving DC, such as 
> higher latency due to SSTables remaining disjoint? 

It's hard to say. Generally I'd expect it to be negligible, but there are 
scenarios where it could cause problems. For example, if there's a bunch of 
deleted data you can't purge because the data and the tombstones are in 
different silos. That said, the silos are meant to make recovery faster, but 
they aren't a hard requirement. If you have a use case where extended data 
siloing is causing problems, it could make sense to disable the recovery silo 
and just deal with repairs when the partition is healed.

> >I'm not sure why you'd do that, but it is possible
> A scenario in which an application wants strong consistency for a critical 
> dataset (Keyspace1) and relaxed consistency for others (Keyspace2,3,...)

Oh right, *that* makes sense. I meant my example where each keyspace is using a 
different DC for a satellite. That's a little harder to think of use cases for. 
Maybe applications where you have different groups of customers in different 
geographical areas?
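
To make the keyspace-level replication point concrete, here's a rough CQL 
sketch. The keyspace and DC names are made up for illustration, and it uses 
plain NetworkTopologyStrategy rather than any CEP-58-specific replication 
options, since that syntax isn't settled in this thread:

```cql
-- Replication is configured per keyspace, so each keyspace can target
-- a different set of datacenters. 'dc_east' and 'dc_west' are
-- hypothetical DC names.
CREATE KEYSPACE critical_ks
  WITH replication = {'class': 'NetworkTopologyStrategy',
                      'dc_east': 3, 'dc_west': 3};

-- A second keyspace in the same cluster can replicate to only one DC.
CREATE KEYSPACE relaxed_ks
  WITH replication = {'class': 'NetworkTopologyStrategy',
                      'dc_east': 3};
```

Nothing here is satellite-specific; it's just the existing mechanism that 
makes per-keyspace mixing possible.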

On Mon, Nov 17, 2025, at 6:49 PM, Jaydeep Chovatia wrote:
> >data would then start being siloed for recovery when the dc comes back
> Do we expect any degradation due to siloed data in the serving DC, such as 
> higher latency due to SSTables remaining disjoint? 
> 
> >I'm not sure why you'd do that, but it is possible
> 
> A scenario in which an application wants strong consistency for a critical 
> dataset (Keyspace1) and relaxed consistency for others (Keyspace2,3,...)
> 
> 
> Jaydeep
> 
> On Sat, Nov 15, 2025 at 8:40 PM Blake Eggleston <[email protected]> wrote:
>> 
>>>  1. With the "2 satellite configuration", the data will always remain fully 
>>> reconciled; in other words, there is no need to keep the data siloed. Is 
>>> that correct?
>> Not quite. With the 2 satellite configuration, you can switch primaries and 
>> there's no need to silo data because the cluster is still talking to all 
>> datacenters. However, if a DC is lost, or an outage lasts long enough, 
>> then you could disable the secondary satellite 
>> (https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=399278313#CEP58:SatelliteDatacenters-Secondary/SatelliteDisableProcess)
>>  and data would then start being siloed for recovery when the DC comes back. 
>> 
>>>  2. Can a Cassandra cluster have two tables coexist, one with a 
>>> primary/secondary concept and another active-active?
>> 
>> The replication is configured at the keyspace level, so two tables can't, 
>> but two keyspaces could. Since it's just a replication setting, there wouldn't 
>> be anything stopping you from doing things like having a DC that is a 
>> satellite for one keyspace, a primary for another, and a secondary for a 
>> third. I'm not sure why you'd do that, but it is possible.
