Roy,
I have seen this exception before when a column had been dropped and then re-added
with the same name but a different type. In particular, we dropped a column and
re-created it as static, then got this exception from the old sstables created
prior to the DDL change. Not sure if this applies.
Could the disk have bad sectors? fsck or something similar can help.
Long shot: repair or some other operation conflicting. I would leave that to
others.
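On the bad-sector theory, a minimal sketch of how one might check, assuming a Linux host with smartmontools installed; the device paths are placeholders, and the fsck check should be run read-only against an unmounted partition:

```
# SMART health summary for the drive (device path is an example)
sudo smartctl -H /dev/sda

# Read-only filesystem check (-n: no repairs); unmount the partition first
sudo fsck -n /dev/sda1
```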
On Mon, May 6, 2019 at 3:50 PM Roy Burstein wrote:
It happens on the same column families and they have the same DDL (as
already posted). I did not check it after cleanup.
On Mon, May 6, 2019, 23:43 Nitan Kainth wrote:
This is strange, I have never seen this. Does it happen to the same column family?
Does it happen after cleanup?
On Mon, May 6, 2019 at 3:41 PM Roy Burstein wrote:
Yes.
On Mon, May 6, 2019, 23:23 Nitan Kainth wrote:
Roy,
You mean all nodes show corruption when you add a node to the cluster?
Regards,
Nitan
Cell: 510 449 9629
On May 6, 2019, at 2:48 PM, Roy Burstein wrote:
It happened on all the servers in the cluster, every time I added a node.
This is a new cluster, nothing was upgraded here; we have a similar cluster
running on C* 2.1.15 with no issues.
We are aware of the scrub utility, but this reproduces every time we add a
node to the cluster.
We have many
Before you scrub, from which version were you upgrading and can you post a(n
anonymized) schema?
--
Jeff Jirsa
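If it helps, a schema dump for anonymizing and posting can be pulled with cqlsh; the host and keyspace name here are placeholders:

```
cqlsh my-cassandra-host -e "DESCRIBE KEYSPACE my_keyspace" > schema.cql
```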
On May 6, 2019, at 11:37 AM, Nitan Kainth wrote:
Did you try sstablescrub?
If that doesn't work, you can delete all files of this sstable id and then
run repair -pr on this node.
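A rough sketch of that sequence, assuming a packaged install; the keyspace, table, data path, and sstable generation (md-1234) are placeholders:

```
# Offline scrub (node must be down; sstablescrub ships with Cassandra)
nodetool drain
sudo systemctl stop cassandra
sstablescrub my_keyspace my_table

# If scrub can't salvage it: remove every component of the corrupted
# sstable (all files sharing one generation id), then restart and repair
rm /var/lib/cassandra/data/my_keyspace/my_table-*/md-1234-big-*

sudo systemctl start cassandra
nodetool repair -pr my_keyspace my_table
```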
On Mon, May 6, 2019 at 9:20 AM Roy Burstein wrote:
Fwiw, if you enable the tombstone compaction subproperties, you'll compact away
most of the other data in those old sstables (but not the partition that's been
manually updated).
Also, table-level TTLs help catch this type of manual manipulation - consider
adding one if appropriate.
--
Jeff
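A sketch of what those two suggestions could look like under TWCS; the table name, thresholds, and 7-day TTL are illustrative, and note that ALTER TABLE replaces the whole compaction map, so the existing settings must be repeated:

```
-- Illustrative: tombstone-driven single-sstable compactions + 7-day default TTL
ALTER TABLE my_keyspace.my_table
WITH compaction = {'class': 'TimeWindowCompactionStrategy',
                   'compaction_window_size': '6',
                   'compaction_window_unit': 'HOURS',
                   'unchecked_tombstone_compaction': 'true',
                   'tombstone_threshold': '0.2',
                   'tombstone_compaction_interval': '86400'}
AND default_time_to_live = 604800;
```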
Compaction settings:
```
compaction = {'class':
'org.apache.cassandra.db.compaction.TimeWindowCompactionStrategy',
'compaction_window_size': '6', 'compaction_window_unit': 'HOURS',
'max_threshold': '32', 'min_threshold': '4'}
```
read_repair_chance is 0, and I don't do any repairs because
Hi,
We are having issues with Cassandra 3.11.4. After adding a node to the
cluster we get many corrupted files across the cluster (almost all nodes);
this is reproducible in our env.
We have 69 nodes in the cluster, disk_access_mode: standard.
The stack trace:
WARN [ReadStage-4]
Hello Shalom,
Someone already tried a rolling restart of Cassandra. I will probably try
rebooting the OS.
Repair seems to work if you do it a keyspace at a time.
Thanks for your input.
Rhys
On Sun, May 5, 2019 at 2:14 PM shalom sagges wrote:
> Hi Rhys,
>
> I encountered this error after