enode", The SSTables are streamed
> from the remaining live replica nodes (if RF>1), not the dead node.
> Because of that, the hints for the dead node is irrelevant. I hope that
> answers your question.
>
>
> Cheers,
>
> Bowen
>
> On 12/09/2021 16:45, Roy Burstein wrote:
Hi ,
In case of a dead node in the cluster (3.11) with hints:
Let's say that the node was dead for 4 hours (during this time hints are
kept for it) and now we are removing the node. Do all nodes stream the
hints in addition to the SSTables?
Thanks,
Roy
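For reference, a minimal sketch of the removal flow described above, assuming
the dead node is taken out with nodetool removenode (the host ID below is a
placeholder; the surviving replicas do the streaming, so progress shows up on
them, not on the dead node):

    # find the Host ID of the dead (DN) node
    nodetool status

    # trigger the removal; the remaining replicas stream their copies to restore RF
    nodetool removenode 7c3e4a1f-1111-2222-3333-444455556666

    # check progress from any live node
    nodetool removenode status
    nodetool netstats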
Hi ,
When creating a UDT for a table, does C* store the UDT definition for each
type saved on disk?
Thanks,
Roy
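For anyone wanting to check where those definitions live, a minimal sketch
using cqlsh (the keyspace and type names here are made up): in 3.x the UDT
definition is recorded in the system_schema.types table, which can be queried
directly.

    # create an example type in a hypothetical keyspace
    cqlsh -e "CREATE TYPE IF NOT EXISTS myks.address (street text, city text, zip text);"

    # the definition is stored per type in the schema tables and can be inspected here
    cqlsh -e "SELECT type_name, field_names, field_types FROM system_schema.types WHERE keyspace_name = 'myks';"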
>> Roy, we spent a long time trying to fix it, but didn't find a solution;
>> it was a test cluster, so we ended up rebuilding the cluster, rather than
>> spending any more time trying to fix the corruption. We have worked out what
>> had caused it, so we're happy it wasn't go
> Long shot: repair or any other operation conflicting. Would leave that to
> others.
>
> On Mon, May 6, 2019 at 3:50 PM Roy Burstein
> wrote:
>
>> It happens on the same column families and they have the same DDL (as
>> already posted). I did not check it after cleanup.
> On Mon, May 6, 2019 at 3:41 PM Roy Burstein
> wrote:
>
>> Yes.
>>
>> On Mon, May 6, 2019, 23:23 Nitan Kainth wrote:
>>
>>> Roy,
>>>
>>> You mean all nodes show corruption when you add a node to cluster??
>>>
>>>
>
Yes.
On Mon, May 6, 2019, 23:23 Nitan Kainth wrote:
> Roy,
>
> You mean all nodes show corruption when you add a node to cluster??
>
>
> Regards,
>
> Nitan
>
> Cell: 510 449 9629
>
> On May 6, 2019, at 2:48 PM, Roy Burstein wrote:
>
> It happened on a
AM, Nitan Kainth wrote:
>
> Did you try sstablescrub?
> If that doesn't work, you can delete all files of this sstable id and then
> run repair -pr on this node.
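A rough sketch of that scrub-then-repair path, with placeholder keyspace/table
names (sstablescrub is the offline tool, so stop Cassandra on the affected node
before running it; nodetool scrub is the online alternative):

    # offline scrub of the suspect table (node stopped)
    sstablescrub my_keyspace my_table

    # or scrub online while the node is running
    nodetool scrub my_keyspace my_table

    # then re-sync the node's primary ranges
    nodetool repair -pr my_keyspace my_table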
>
> On Mon, May 6, 2019 at 9:20 AM Roy Burstein
> wrote:
>
>> Hi ,
>> We are having issues with Cassandra
Hi ,
We are having issues with Cassandra 3.11.4: after adding a node to the
cluster we get many corrupted files across the cluster (almost all nodes);
this is reproducible in our env.
We have 69 nodes in the cluster, disk_access_mode: standard.
The stack trace:
WARN [ReadStage-4]
> compactions were running.
> I guess you have to report this Memory Leak issue to Reaper tool JIRA.
>
> Thanks,
> Bob
>
> On Mon, Jan 14, 2019 at 8:44 AM Roy Burstein
> wrote:
>
>> Hi ,
>>
>> We are testing C* 3.11.3 and we have a mapping issue
Hi ,
We are testing C* 3.11.3 and we have a mapping issue and possibly leaked
memory.
It might be related to our configuration; any ideas would be helpful.
Cassandra version: 3.11.3
OS: CentOS Linux release 7.4.1708 (Core)
Kernel: 3.10.0-957.1.3.el7.x86_64
JDK: jdk1.8.0_131
Heap: same errors
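If the "mapping issue" turns out to be the kernel's memory-map limit (an
assumption on my part, not something stated above, but a common suspect on
3.11 when index/data files are mmapped), a quick check on the affected host
would be:

    # Cassandra mmaps SSTable components; the kernel default of 65530 map areas is often too low
    sysctl vm.max_map_count

    # rough count of map areas currently used by the Cassandra process (PID is a placeholder)
    wc -l /proc/<cassandra-pid>/maps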
based on max timestamp per file, so they belong together:
>
>
> https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/db/compaction/TimeWindowCompactionStrategy.java#L247
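To check which window a given file falls into under that logic, the max
timestamp TWCS buckets on can be read from the SSTable metadata; a minimal
sketch with placeholder paths (3.11 "mc" format files):

    # print the min/max timestamps recorded in each SSTable's metadata
    sstablemetadata /var/lib/cassandra/data/my_keyspace/my_table-*/mc-*-big-Data.db | grep -i timestamp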
>
>
>
> On Sun, Dec 16, 2018 at 11:39 PM Roy Burstein
> wrote:
>
>> hey
understand what’s happening
> there.
>
> --
> Jeff Jirsa
>
>
> On Dec 13, 2018, at 10:26 PM, Roy Burstein wrote:
>
> Hi all ,
>
> My colleague opened a Jira ticket for the issue, but we have been struggling
> with this issue for a while and we have space issues:
>
Hi all ,
My colleague opened a Jira ticket for the issue, but we have been struggling
with this issue for a while and we have space issues:
https://issues.apache.org/jira/browse/CASSANDRA-14929
After removing a node from the cluster, a table that is defined as TWCS
has sstables from different time