[ https://issues.apache.org/jira/browse/KAFKA-9062?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16954893#comment-16954893 ]

Guozhang Wang commented on KAFKA-9062:
--------------------------------------

My current thought, which is a bit hacky, is that upon finishing restoration 
we could look into the RocksDB compaction stats, and if we find that compaction 
is still very hot and hence a write stall is likely on the first put, we can 
keep the task idle while continuously calling consumer.poll to stay in the 
group until the compaction cools down.

But this does not avoid write stalls in general; in the long run maybe we 
still need to consider having separate threads for polling from the consumer 
vs. threads processing records from a queue. Thoughts?
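
To make that second idea concrete, here is a minimal sketch under my own 
assumptions (the class and method names are made up, and a real implementation 
would pause partitions rather than block when the queue fills up): one thread 
only polls and enqueues, so a stalled put can never push it past 
max.poll.interval.ms, while a second thread drains the queue and may block on 
the store.

    import java.time.Duration;
    import java.util.concurrent.BlockingQueue;
    import java.util.concurrent.LinkedBlockingQueue;
    import org.apache.kafka.clients.consumer.Consumer;
    import org.apache.kafka.clients.consumer.ConsumerRecord;

    public final class DecoupledPolling {

        static <K, V> void run(final Consumer<K, V> consumer,
                               final java.util.function.Consumer<ConsumerRecord<K, V>> processor) {
            final BlockingQueue<ConsumerRecord<K, V>> queue = new LinkedBlockingQueue<>(10_000);

            // Polling thread: never touches RocksDB, so a write stall cannot
            // delay its poll calls. Note: if the queue fills up, put() blocks
            // the poller; a real implementation would pause partitions instead.
            final Thread poller = new Thread(() -> {
                while (!Thread.currentThread().isInterrupted()) {
                    for (final ConsumerRecord<K, V> record : consumer.poll(Duration.ofMillis(100))) {
                        try {
                            queue.put(record);
                        } catch (final InterruptedException e) {
                            Thread.currentThread().interrupt();
                            return;
                        }
                    }
                }
            }, "poller");

            // Processing thread: may block arbitrarily long on a stalled put
            // without affecting group membership.
            final Thread worker = new Thread(() -> {
                while (!Thread.currentThread().isInterrupted()) {
                    try {
                        processor.accept(queue.take());
                    } catch (final InterruptedException e) {
                        Thread.currentThread().interrupt();
                        return;
                    }
                }
            }, "worker");

            poller.start();
            worker.start();
        }
    }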

> Handle stalled writes to RocksDB
> --------------------------------
>
>                 Key: KAFKA-9062
>                 URL: https://issues.apache.org/jira/browse/KAFKA-9062
>             Project: Kafka
>          Issue Type: Bug
>          Components: streams
>            Reporter: Sophie Blee-Goldman
>            Priority: Major
>
> RocksDB may stall writes at times when background compactions or flushes are 
> having trouble keeping up. This means we can effectively end up blocking 
> indefinitely during a StateStore#put call within Streams, and may get kicked 
> from the group if the throttling does not ease up within the max poll 
> interval.
> Example: when restoring large amounts of state from scratch, we use the 
> strategy recommended by RocksDB of turning off automatic compactions and 
> dumping everything into L0. We do batch somewhat, but do not sort these small 
> batches before loading into the db, so we end up with a large number of 
> unsorted L0 files.
> When restoration is complete and we toggle the db back to normal (not bulk 
> loading) settings, a background compaction is triggered to merge all these 
> into the next level. This background compaction can take a long time to merge 
> unsorted keys, especially when the amount of data is quite large.
> Any new writes while the number of L0 files exceeds the max will be stalled 
> until the compaction can finish, and processing after restoring from scratch 
> can block beyond the polling interval.
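
For reference, the bulk-load toggle the ticket describes maps roughly onto the 
following RocksDB Java API options; this is a sketch only, not the actual 
Streams code, and the numeric values shown are illustrative RocksDB defaults 
rather than what Streams configures.

    import org.rocksdb.Options;
    import org.rocksdb.RocksDB;
    import org.rocksdb.RocksDBException;

    public final class BulkLoadToggle {

        // During restoration: automatic compactions are off, so every restored
        // batch lands in L0 as fast as possible.
        static Options bulkLoadOptions() {
            return new Options()
                    .setCreateIfMissing(true)
                    .setDisableAutoCompactions(true)
                    .setLevel0FileNumCompactionTrigger(1 << 30); // effectively never trigger
        }

        // After restoration: normal settings again. The backlog of unsorted L0
        // files now exceeds the triggers below, which is what kicks off the big
        // background compaction and, eventually, the write slowdown/stall.
        static Options normalOptions() {
            return new Options()
                    .setCreateIfMissing(true)
                    .setDisableAutoCompactions(false)
                    .setLevel0FileNumCompactionTrigger(4)   // default compaction trigger
                    .setLevel0SlowdownWritesTrigger(20)     // writes throttled past this
                    .setLevel0StopWritesTrigger(36);        // writes stall past this
        }

        static RocksDB reopenAfterRestore(final String path) throws RocksDBException {
            return RocksDB.open(normalOptions(), path);
        }
    }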



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
