Are you asking whether writes are atomic at the partition level? If so, yes. If
you have N columns in a simple k/v schema and you send a write with X of those
N columns set, all X will be updated at the same time wherever that
write goes.
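Conceptually, that all-or-nothing behavior can be sketched like this (a toy illustration in Python, not actual storage-engine or driver code; `apply_write` is a hypothetical helper):

```python
def apply_write(row, updates):
    """Apply a partial-column update to a row all-or-nothing.

    Mimics how a single-partition write either lands all of its
    columns or none of them on a given replica: the mutation is
    staged and then swapped in as a whole, never half-applied.
    """
    staged = dict(row)       # stage the mutation off to the side
    staged.update(updates)   # set only the X columns present in the write
    return staged            # the whole mutation becomes visible at once

row = {"a": 1, "b": 2, "c": 3}
new_row = apply_write(row, {"a": 10, "c": 30})
# → {"a": 10, "b": 2, "c": 30}
```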
The CL thing is more about how tolerant you are of stale data; it depends on
the consistency level you are setting on write and on read.
What CL are you writing at, and what CL are you reading at?
The consistency level tells the coordinator when to send acknowledgement of a
write and whether to cross DCs to confirm a write. It also tells the
coordinator how many replicas must respond before an operation is considered
successful.
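The usual rule of thumb is that reads are guaranteed to overlap the latest write whenever W + R > RF. A minimal sketch of that arithmetic (illustrative only, covering just the common levels):

```python
def replicas_required(cl, rf):
    """Replica acks the coordinator waits for at a given consistency level.

    Covers only the common single-DC levels; illustrative, not driver code.
    """
    if cl == "ONE":
        return 1
    if cl == "QUORUM":
        return rf // 2 + 1
    if cl == "ALL":
        return rf
    raise ValueError(f"unknown CL: {cl}")

def overlapping(write_cl, read_cl, rf=3):
    """True if every read must touch at least one replica that acked the write."""
    return replicas_required(write_cl, rf) + replicas_required(read_cl, rf) > rf

print(overlapping("QUORUM", "QUORUM"))  # True:  2 + 2 > 3
print(overlapping("ONE", "ONE"))        # False: 1 + 1 <= 3, stale reads possible
```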
You may have encountered the same behavior we did going from
2.1 --> 2.2 a week or so ago.
We also have multiple data dirs. Hm.
In our case, we will purge the data of the big offending table.
How big are your nodes?
On Tue, May 7, 2019 at 1:40 AM Evgeny Inberg wrote:
> Still
(repair would be done after all the nodes with obviously deletable sstables
were deleted)
(we may then do a purge program anyway)
(this would seem to get rid of 60-90% of the purgeable data without
incurring a big round of tombstones and compaction)
On Tue, May 7, 2019 at 12:05 PM Carl Mueller wrote:
Hi there!
Could someone please explain how a column family would be replicated and
"visible / readable" in the following scenario? We have multiple
geo-distributed datacenters with significant latency (up to 100 ms RTT).
Let's name two of them A and B and consider the following 2 cases:
1.
The last time I googled this, some people were doing it back in the 2.0.x
days: you could do it if you brought a node down, removed the desired
sstable #'s artifacts (Data/Index/etc.), and then started back up. Probably
also with a clearing of the saved caches.
A decent-ish amount of data (256G) in a 2.1
Short answer is no, because missing consistency isn’t an error and there’s no
way to know you’ve missed data without reading at ALL, and if it were ok to
read at ALL you’d already be doing it (it’s not ok for most apps).
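A toy simulation of that point (hypothetical `read` helper, not driver code): a write that landed on only one replica is invisible to a partial read, and the miss looks exactly like genuinely empty data:

```python
import random

def read(replicas, r):
    """Read from r of the replicas and merge; a miss looks like 'no data'.

    Each replica holds either a value or None (never saw the write).
    There is no error path for 'I happened to pick stale replicas'.
    """
    sampled = random.sample(replicas, r)
    values = [v for v in sampled if v is not None]
    return max(values) if values else None  # None: empty or missed, can't tell

# 3 replicas; a write at CL ONE landed on only one of them
replicas = [None, None, "v1"]
# reading at ONE (r=1) may return None or "v1" -- no error either way;
# only reading at ALL (r=3) is guaranteed to see "v1"
assert read(replicas, 3) == "v1"
```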
> On May 7, 2019, at 8:05 AM, Fd Habash wrote:
>
> Typically, when a
Typically, when a read is submitted to C*, it may complete with …
1. No errors & returns expected data
2. Errors out with UnavailableException
3. No error & returns zero rows on the first attempt, but data is returned on
subsequent attempts.
The third scenario happens as a result of cluster entropy, especially
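Scenario 3 is often handled client-side with a bounded retry. A hedged sketch, assuming `query` is any zero-argument callable returning a list of rows (not a real driver API):

```python
import time

def read_with_retry(query, attempts=3, delay_s=0.1):
    """Retry a read that returns zero rows.

    An empty first result may just be replica entropy rather than
    truly missing data, so retry a bounded number of times before
    concluding the rows really aren't there.
    """
    for attempt in range(attempts):
        rows = query()
        if rows:
            return rows
        if attempt < attempts - 1:
            time.sleep(delay_s)
    return []

# toy query that is empty on the first call, populated afterwards
calls = {"n": 0}
def flaky_query():
    calls["n"] += 1
    return [] if calls["n"] == 1 else [{"id": 1}]

print(read_with_retry(flaky_query, delay_s=0))  # [{'id': 1}]
```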
Thx for the tips Jeff, I'm definitely going to start using table-level TTLs
(not sure why I didn't before), and I'll take a look at the tombstone
compaction subproperties.
On Mon, May 6, 2019 at 10:43 AM Jeff Jirsa wrote:
> Fwiw if you enable the tombstone compaction subproperties, you’ll
Roy, we spent a long time trying to fix it but didn't find a solution. It was
a test cluster, so we ended up rebuilding the cluster rather than spending
any more time trying to fix the corruption. We have worked out what had
caused it, so we were happy it wasn't going to occur in production. Sorry.
Still no resolution for this. Did anyone else encounter the same behavior?
On Thu, May 2, 2019 at 1:54 PM Evgeny Inberg wrote:
> Yes, sstable upgraded on each node.
>
> On Thu, 2 May 2019, 13:39 Nick Hatfield
> wrote:
>
>> Just curious but, did you make sure to run the sstable upgrade after you
>>
I can say that it happens now as well; currently no node has been
added/removed.
The corrupted sstables are usually the index files, and on some machines the
sstable does not even exist on the filesystem.
On one machine I was able to dump the sstable to a dump file without any
issue. Any idea how to