@lindong28 You are right. Since we only have a log-level lock, protecting the segment reads that happen during compaction with the log lock would introduce non-negligible performance overhead, especially for produce requests. The race conditions we want to avoid are among:
1. log cleaner segment reads
2. log deletion segment deletes
3. log retention segment updates
4. log truncation segment writes

Basically, what we want to achieve is:
i. mutual exclusion for the above operations
ii. abort 1) if 2), 3) or 4) happens

We can achieve ii) in the current approach, but to address i), the assumption is 
that `abortAndPauseCleaning` acts as acquiring a lock and `resumeCleaning` as 
releasing it. However, any thread (i.e. one that is not holding the "lock") can 
potentially call `resumeCleaning`. If we don't want to introduce additional 
locking, we should add more semantics (probably in the LogCleaningState) to 
achieve i). What do you think?
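To make the idea concrete, here is a minimal sketch of what "more semantics" could look like: tie the paused state to the thread that paused it, so that only the owner may resume. This is illustrative only — the class and method names (`OwnedCleaningPause`, `pause`, `resume`) are hypothetical and not part of the Kafka code base, and the real fix would live inside LogCleaningState rather than a standalone class.

```java
import java.util.concurrent.atomic.AtomicReference;

// Hypothetical sketch: record which thread paused cleaning, and reject a
// resume from any other thread. Analogous to abortAndPauseCleaning/
// resumeCleaning, but with ownership semantics.
public class OwnedCleaningPause {
    // null means "not paused"; otherwise holds the owning thread.
    private final AtomicReference<Thread> owner = new AtomicReference<>();

    /** Acquire: returns true iff this thread successfully took ownership. */
    public boolean pause() {
        return owner.compareAndSet(null, Thread.currentThread());
    }

    /** Release: only the thread that paused may resume. */
    public void resume() {
        if (!owner.compareAndSet(Thread.currentThread(), null)) {
            throw new IllegalStateException(
                "resume called by a thread that does not hold the pause");
        }
    }

    public static void main(String[] args) throws InterruptedException {
        OwnedCleaningPause p = new OwnedCleaningPause();
        System.out.println(p.pause() ? "paused" : "pause-failed");

        // A different thread tries to resume and is rejected.
        Thread other = new Thread(() -> {
            try {
                p.resume();
                System.out.println("resumed-by-other");
            } catch (IllegalStateException e) {
                System.out.println("rejected");
            }
        });
        other.start();
        other.join();

        // The owning thread releases successfully.
        p.resume();
        System.out.println("released");
    }
}
```

With this shape, mutual exclusion (goal i) falls out of the compare-and-set on the owner reference, and the abort signal (goal ii) can still be delivered while paused.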

[ Full content available at: https://github.com/apache/kafka/pull/5591 ]