You need to see what's actually stored there. It could be:

1) A deletion with a timestamp in the future (viewable with SELECT
WRITETIME(column) ...). This could be caused by clock skew or by writing
timestamps at the wrong resolution (milliseconds vs. microseconds).
2) Some form of corruption, if you don't have compression + crc_check_chance
enabled. It's possible (but unlikely) to have a badly broken data file
that mimics a deletion marker. You may be able to find this with
sstable2json (older versions) or sstabledump (3.0+)
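The millis-vs-micros mixup in point 1 is easy to spot once you decode the
WRITETIME value. A quick sanity check (a sketch, not Cassandra tooling; it
just interprets the integer cqlsh prints, which is microseconds since the
epoch) might look like:

```python
from datetime import datetime, timezone

def classify_writetime(writetime):
    """Interpret a WRITETIME(column) value (microseconds since epoch)
    and flag the two common mixups: a client that sent milliseconds
    (decodes to ~1970) or one that sent nanoseconds (decodes to the
    far future, shadowing later writes until that date)."""
    seconds = writetime / 1_000_000
    try:
        ts = datetime.fromtimestamp(seconds, tz=timezone.utc)
    except (OverflowError, OSError, ValueError):
        # Value too large to be microseconds at all.
        return "suspicious: far future (nanoseconds written as microseconds?)"
    if ts.year < 1980:
        return "suspicious: probably milliseconds written as microseconds"
    if ts > datetime.now(tz=timezone.utc):
        return "suspicious: timestamp in the future (clock skew or wrong unit)"
    return "plausible: " + ts.isoformat()
```

A correct microsecond timestamp decodes to a recent date; anything in 1970
or past the current wall clock points at the unit or skew problems above.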

Run sstabledump against the data files that contain the key (nodetool
getendpoints to find the replicas, nodetool getsstables to find the files),
and look for anything unusual.
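In the sstabledump JSON, a tombstone for your key shows up as a
deletion_info object at the partition, row, range, or cell level. A small
sketch for scanning a dump follows; the deletion_info key name matches 3.x
sstabledump output, but treat the exact structure as an assumption and
compare against your own dump:

```python
import json

def find_tombstones(dump_text):
    """Walk an sstabledump JSON document and collect every object that
    carries a deletion_info marker, together with the path to it, so
    partition-, row-, and cell-level tombstones all surface."""
    hits = []

    def walk(node, path):
        if isinstance(node, dict):
            if "deletion_info" in node:
                hits.append((path, node["deletion_info"]))
            for key, value in node.items():
                walk(value, path + [key])
        elif isinstance(node, list):
            for i, value in enumerate(node):
                walk(value, path + [i])

    walk(json.loads(dump_text), [])
    return hits
```

Feed it the output of `sstabledump <file>` for each file nodetool
getsstables reported; an unexpected marked_deleted timestamp on your
partition or row is the smoking gun.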



On Mon, Mar 23, 2020 at 4:00 PM Oliver Herrmann <o.herrmann...@gmail.com>
wrote:

> Hello,
>
> we are facing a strange issue in one of our Cassandra clusters.
> We are using prepared statements to update a table with consistency local
> quorum. When updating some tables it happens very often that data values are
> not written to the database. When verifying the table using cqlsh (with
> consistency all) the row does not exist.
> When using the prepared statements we do not bind values to all
> placeholders for the data columns, but I think this should not be a problem,
> right?
>
> I checked system.log and debug.log for any hints but nothing is written
> into these log files.
> It's only happening in one specific cluster. When running the same
> software in other clusters everything is working fine.
>
> We are using Cassandra server version 3.11.1 and the DataStax cpp driver 2.13.0.
>
> Any idea how to analyze/fix this problem?
>
> Regards
> Oliver
>
>
