Check the logs on the Cassandra servers first. Many different problems can
produce this same timeout, so you will have to dig deeper to find the
true cause.
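As a concrete starting point, a filter like the one below can surface dropped COUNTER_MUTATION messages, GC pauses, and other warnings in system.log on each node. The WARN line in the sketch is an illustrative sample in the approximate format Cassandra uses, and the log path varies by install; this is a sketch, not the exact output you will see:

```shell
#!/bin/sh
# Illustration only: the WARN line below is a sample in the approximate
# format Cassandra logs when it drops counter mutations under load.
# On a real node you would run the same grep against the actual log,
# typically /var/log/cassandra/system.log (path varies by install).
log=$(mktemp)
cat > "$log" <<'EOF'
INFO  [main] 2021-09-14 23:40:00 CassandraDaemon.java - Startup complete
WARN  [ScheduledTasks:1] 2021-09-14 23:55:01 MessagingService.java - COUNTER_MUTATION messages were dropped in last 5000 ms: 12 internal and 0 cross node
EOF
grep -E 'ERROR|WARN|dropped|GCInspector' "$log"
rm -f "$log"
```

`nodetool tpstats` also reports per-type dropped message counts, which is often quicker than grepping the logs.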
On 14/09/2021 23:55, Joe Obernberger wrote:
I'm getting a lot of the following errors during ingest of data:
com.datastax.oss.driver.api.core.servererrors.WriteTimeoutException: Cassandra timeout during COUNTER write query at consistency ONE (1 replica were required but only 0 acknowledged the write)
        at com.datastax.oss.driver.api.core.servererrors.WriteTimeoutException.copy(WriteTimeoutException.java:96)
        at com.datastax.oss.driver.internal.core.util.concurrent.CompletableFutures.getUninterruptibly(CompletableFutures.java:149)
        at com.datastax.oss.driver.internal.core.cql.CqlRequestSyncProcessor.process(CqlRequestSyncProcessor.java:53)
        at com.datastax.oss.driver.internal.core.cql.CqlRequestSyncProcessor.process(CqlRequestSyncProcessor.java:30)
        at com.datastax.oss.driver.internal.core.session.DefaultSession.execute(DefaultSession.java:230)
        at com.datastax.oss.driver.api.core.cql.SyncCqlSession.execute(SyncCqlSession.java:54)
The CQL being executed is:
"update doc.seq set doccount=doccount+? where id=?"
Table is:
CREATE TABLE doc.seq (
id text PRIMARY KEY,
doccount counter
) WITH additional_write_policy = '99p'
AND bloom_filter_fp_chance = 0.01
AND caching = {'keys': 'ALL', 'rows_per_partition': 'NONE'}
AND cdc = false
AND comment = ''
AND compaction = {'class': 'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy', 'max_threshold': '32', 'min_threshold': '4'}
AND compression = {'chunk_length_in_kb': '16', 'class': 'org.apache.cassandra.io.compress.LZ4Compressor'}
AND crc_check_chance = 1.0
AND default_time_to_live = 0
AND extensions = {}
AND gc_grace_seconds = 864000
AND max_index_interval = 2048
AND memtable_flush_period_in_ms = 0
AND min_index_interval = 128
AND read_repair = 'BLOCKING'
AND speculative_retry = '99p';
The doc.seq table only has 356 rows in total. What could cause this
timeout error?
Thank you!
-Joe