aleksraiden opened a new pull request, #3229: URL: https://github.com/apache/kvrocks/pull/3229
Bump RocksDB version to v10.6.2 (see: https://github.com/facebook/rocksdb/releases/tag/v10.6.2)

**Key changes**

- Small improvement to the CPU efficiency of compression using built-in algorithms, and a dramatic efficiency improvement for LZ4HC, based on reusing data structures between invocations
- When `allow_ingest_behind` is enabled, compaction no longer drops tombstones based on the absence of underlying data; tombstones are preserved so they can apply to ingested files
- Introduce `MultiScanArgs::io_coalesce_threshold` to allow a configurable IO coalescing threshold
- Introduce the column family option `cf_allow_ingest_behind`, which aims to replace `DBOptions::allow_ingest_behind` by enabling ingest behind at the per-CF level. `DBOptions::allow_ingest_behind` is deprecated
- Add a new option, `MultiScanArgs::max_prefetch_size`, that limits the memory usage of per-file pinning of prefetched blocks
- Add the `fail_if_no_udi_on_open` flag in `BlockBasedTableOptions` to control whether a missing user-defined index block in an SST file is a hard error
- Fix a race condition in FIFO size-based compaction where concurrent threads could select the same non-L0 file, causing assertion failures in debug builds or "Cannot delete table file from LSM tree" errors in release builds
- The minimum supported version of the LZ4 library is now 1.7.0
- A new `FileSystem::SyncFile` function is added for syncing a file that was already written, such as on file ingestion. The default implementation matches previous RocksDB behavior: re-open the file for read-write, sync it, and close it. Overriding is recommended for file systems that do not require syncing for crash recovery or that do not handle re-opening for writes well
- Bug fixes
