dependabot[bot] opened a new pull request, #6599: URL: https://github.com/apache/ozone/pull/6599
Bumps [org.rocksdb:rocksdbjni](https://github.com/facebook/rocksdb) from 7.7.3 to 7.10.2. <details> <summary>Release notes</summary> <p><em>Sourced from <a href="https://github.com/facebook/rocksdb/releases">org.rocksdb:rocksdbjni's releases</a>.</em></p> <blockquote> <h2>RocksDB 7.10.2</h2> <h2>7.10.2 (2023-02-10)</h2> <h3>Bug Fixes</h3> <ul> <li>Fixed a bug in DB open/recovery from a compressed WAL caused by incorrect handling of certain record fragments with the same offset within a WAL block.</li> </ul> <h2>7.10.1 (2023-02-01)</h2> <h3>Bug Fixes</h3> <ul> <li>Fixed a data race on <code>ColumnFamilyData::flush_reason</code> caused by concurrent flushes.</li> <li>Fixed <code>DisableManualCompaction()</code> and <code>CompactRangeOptions::canceled</code> to cancel compactions even when they are waiting on conflicting compactions to finish.</li> <li>Fixed a bug in which a successful <code>GetMergeOperands()</code> could transiently return <code>Status::MergeInProgress()</code>.</li> <li>Return the correct error (Status::NotSupported()) to the MultiGet caller when the ReadOptions::async_io flag is true and IO uring is not enabled. Previously, Status::Corruption() was returned when the actual failure was lack of async IO support.</li> </ul> <h2>7.10.0 (2023-01-23)</h2> <h3>Behavior changes</h3> <ul> <li>Make best-efforts recovery verify the SST unique ID before Version construction (<a href="https://redirect.github.com/facebook/rocksdb/issues/10962">#10962</a>)</li> <li>Introduce <code>epoch_number</code> and sort L0 files by <code>epoch_number</code> instead of <code>largest_seqno</code>. <code>epoch_number</code> represents the order in which a file was flushed or ingested/imported. A compaction output file is assigned the minimum <code>epoch_number</code> among its input files'. 
For L0, a larger <code>epoch_number</code> indicates a newer L0 file.</li> </ul> <h3>Bug Fixes</h3> <ul> <li>Fixed a regression in the iterator in which range tombstones after <code>iterate_upper_bound</code> were processed.</li> <li>Fixed a memory leak in MultiGet with the async_io read option, caused by IO errors during table file open.</li> <li>Fixed a bug in which multi-level FIFO compaction deletes a file in non-L0 even when <code>CompactionOptionsFIFO::max_table_files_size</code> is not exceeded, present since <a href="https://redirect.github.com/facebook/rocksdb/issues/10348">#10348</a> or 7.8.0.</li> <li>Fixed a bug caused by <code>DB::SyncWAL()</code> affecting <code>track_and_verify_wals_in_manifest</code>. Without the fix, an application may see "open error: Corruption: Missing WAL with log number" while trying to open the DB. The corruption is a false alarm but prevents DB open (<a href="https://redirect.github.com/facebook/rocksdb/issues/10892">#10892</a>).</li> <li>Fixed a BackupEngine bug in which RestoreDBFromLatestBackup would fail if the latest backup was deleted and another valid backup was available.</li> <li>Fix L0 file misorder corruption caused by ingesting files with seqnos overlapping memtable entries', through introducing <code>epoch_number</code>. Before the fix, <code>force_consistency_checks=true</code> may catch the corruption before it is exposed to readers, in which case writes returning <code>Status::Corruption</code> would be expected. 
Also replace the previous incomplete fix (<a href="https://redirect.github.com/facebook/rocksdb/issues/5958">#5958</a>) to the same corruption with this new and more complete fix.</li> <li>Fixed a bug in LockWAL() leading to re-locking the mutex (<a href="https://redirect.github.com/facebook/rocksdb/issues/11020">#11020</a>).</li> <li>Fixed a heap-use-after-free bug in async scan prefetching when the scan thread and another thread try to read and load the same seek block into cache.</li> <li>Fixed a heap-use-after-free in async scan prefetching when dictionary compression is enabled, in which case a sync read of the compression dictionary gets mixed with async prefetching.</li> <li>Fixed a data race bug in which <code>CompactRange()</code> under <code>change_level=true</code> acts on a range overlapping an ongoing file ingestion for level compaction. This either results in overlapping file-range corruption at a certain level, caught by <code>force_consistency_checks=true</code>, or potentially two identical keys both with seqno 0 in two different levels (i.e., new data ends up in a lower/older level). The latter is caught by an assertion in debug builds but passes silently and results in reads returning wrong results in release builds. 
This fix is general, so it also replaces previous fixes to a similar problem for <code>CompactFiles()</code> (<a href="https://redirect.github.com/facebook/rocksdb/issues/4665">#4665</a>), general <code>CompactRange()</code>, and auto compaction (commits 5c64fb6 and 87dfc1d).</li> <li>Fixed a bug in compaction output cutting in which small output files were produced because TTL file-cutting state was not being updated (<a href="https://redirect.github.com/facebook/rocksdb/issues/11075">#11075</a>).</li> </ul> <h3>New Features</h3> <ul> <li>When an SstPartitionerFactory is configured, CompactRange() now automatically selects for compaction any files overlapping a partition boundary that is in the compaction range, even if no actual entries are in the requested compaction range. With this feature, manual compaction can be used to (re-)establish SST partition points when the SstPartitioner changes, without a full compaction.</li> <li>Add a BackupEngine feature to exclude files from backup that are known to be backed up elsewhere, using <code>CreateBackupOptions::exclude_files_callback</code>. To restore the DB, the excluded files must be provided in alternative backup directories using <code>RestoreOptions::alternate_dirs</code>.</li> </ul> <h3>Public API Changes</h3> <ul> <li>Substantial changes have been made to the Cache class to support internal development goals. Direct use of Cache class members is discouraged and further breaking modifications are expected in the future. SecondaryCache has some related changes and implementations will need to be updated. (Unlike Cache, SecondaryCache is still intended to support user implementations, and disruptive changes will be avoided.) (<a href="https://redirect.github.com/facebook/rocksdb/issues/10975">#10975</a>)</li> <li>Add <code>MergeOperationOutput::op_failure_scope</code> for merge operator users to control the blast radius of merge operator failures. 
Existing merge operator users do not need to make any change to preserve the old behavior.</li> </ul> <h3>Performance Improvements</h3> <ul> <li>Updated xxHash source code, which should improve kXXH3 checksum speed, at least on ARM (<a href="https://redirect.github.com/facebook/rocksdb/issues/11098">#11098</a>).</li> <li>Improved CPU efficiency of DB reads, from block cache access improvements (<a href="https://redirect.github.com/facebook/rocksdb/issues/10975">#10975</a>).</li> </ul> <h2>RocksDB 7.9.2</h2> <h2>7.9.2 (2022-12-21)</h2> <h3>Bug Fixes</h3> <ul> <li>Fixed a heap-use-after-free bug in async scan prefetching when the scan thread and another thread try to read and load the same seek block into cache.</li> </ul> <h2>7.9.1 (2022-12-08)</h2> <h3>Bug Fixes</h3> <ul> <li>Fixed a regression in the iterator in which range tombstones after <code>iterate_upper_bound</code> were processed.</li> </ul> <!-- raw HTML omitted --> </blockquote> <p>... (truncated)</p> </details> <details> <summary>Changelog</summary> <p><em>Sourced from <a href="https://github.com/facebook/rocksdb/blob/v7.10.2/HISTORY.md">org.rocksdb:rocksdbjni's changelog</a>.</em></p> <blockquote> <h2>7.10.2 (02/10/2023)</h2> <h3>Bug Fixes</h3> <ul> <li>Fixed a bug in DB open/recovery from a compressed WAL caused by incorrect handling of certain record fragments with the same offset within a WAL block.</li> </ul> <h2>7.10.1 (02/01/2023)</h2> <h3>Bug Fixes</h3> <ul> <li>Fixed a data race on <code>ColumnFamilyData::flush_reason</code> caused by concurrent flushes.</li> <li>Fixed <code>DisableManualCompaction()</code> and <code>CompactRangeOptions::canceled</code> to cancel compactions even when they are waiting on conflicting compactions to finish.</li> <li>Fixed a bug in which a successful <code>GetMergeOperands()</code> could transiently return <code>Status::MergeInProgress()</code>.</li> <li>Return the correct error (Status::NotSupported()) to the MultiGet caller when the ReadOptions::async_io flag is true 
and IO uring is not enabled. Previously, Status::Corruption() was returned when the actual failure was lack of async IO support.</li> </ul> <h2>7.10.0 (01/23/2023)</h2> <h3>Behavior changes</h3> <ul> <li>Make best-efforts recovery verify the SST unique ID before Version construction (<a href="https://redirect.github.com/facebook/rocksdb/issues/10962">#10962</a>)</li> <li>Introduce <code>epoch_number</code> and sort L0 files by <code>epoch_number</code> instead of <code>largest_seqno</code>. <code>epoch_number</code> represents the order in which a file was flushed or ingested/imported. A compaction output file is assigned the minimum <code>epoch_number</code> among its input files'. For L0, a larger <code>epoch_number</code> indicates a newer L0 file.</li> </ul> <h3>Bug Fixes</h3> <ul> <li>Fixed a regression in the iterator in which range tombstones after <code>iterate_upper_bound</code> were processed.</li> <li>Fixed a memory leak in MultiGet with the async_io read option, caused by IO errors during table file open.</li> <li>Fixed a bug in which multi-level FIFO compaction deletes a file in non-L0 even when <code>CompactionOptionsFIFO::max_table_files_size</code> is not exceeded, present since <a href="https://redirect.github.com/facebook/rocksdb/issues/10348">#10348</a> or 7.8.0.</li> <li>Fixed a bug caused by <code>DB::SyncWAL()</code> affecting <code>track_and_verify_wals_in_manifest</code>. Without the fix, an application may see "open error: Corruption: Missing WAL with log number" while trying to open the DB. The corruption is a false alarm but prevents DB open (<a href="https://redirect.github.com/facebook/rocksdb/issues/10892">#10892</a>).</li> <li>Fixed a BackupEngine bug in which RestoreDBFromLatestBackup would fail if the latest backup was deleted and another valid backup was available.</li> <li>Fix L0 file misorder corruption caused by ingesting files with seqnos overlapping memtable entries', through introducing <code>epoch_number</code>. 
Before the fix, <code>force_consistency_checks=true</code> may catch the corruption before it is exposed to readers, in which case writes returning <code>Status::Corruption</code> would be expected. Also replace the previous incomplete fix (<a href="https://redirect.github.com/facebook/rocksdb/issues/5958">#5958</a>) to the same corruption with this new and more complete fix.</li> <li>Fixed a bug in LockWAL() leading to re-locking the mutex (<a href="https://redirect.github.com/facebook/rocksdb/issues/11020">#11020</a>).</li> <li>Fixed a heap-use-after-free bug in async scan prefetching when the scan thread and another thread try to read and load the same seek block into cache.</li> <li>Fixed a heap-use-after-free in async scan prefetching when dictionary compression is enabled, in which case a sync read of the compression dictionary gets mixed with async prefetching.</li> <li>Fixed a data race bug in which <code>CompactRange()</code> under <code>change_level=true</code> acts on a range overlapping an ongoing file ingestion for level compaction. This either results in overlapping file-range corruption at a certain level, caught by <code>force_consistency_checks=true</code>, or potentially two identical keys both with seqno 0 in two different levels (i.e., new data ends up in a lower/older level). The latter is caught by an assertion in debug builds but passes silently and results in reads returning wrong results in release builds. 
This fix is general, so it also replaces previous fixes to a similar problem for <code>CompactFiles()</code> (<a href="https://redirect.github.com/facebook/rocksdb/issues/4665">#4665</a>), general <code>CompactRange()</code>, and auto compaction (commits 5c64fb6 and 87dfc1d).</li> <li>Fixed a bug in compaction output cutting in which small output files were produced because TTL file-cutting state was not being updated (<a href="https://redirect.github.com/facebook/rocksdb/issues/11075">#11075</a>).</li> </ul> <h3>New Features</h3> <ul> <li>When an SstPartitionerFactory is configured, CompactRange() now automatically selects for compaction any files overlapping a partition boundary that is in the compaction range, even if no actual entries are in the requested compaction range. With this feature, manual compaction can be used to (re-)establish SST partition points when the SstPartitioner changes, without a full compaction.</li> <li>Add a BackupEngine feature to exclude files from backup that are known to be backed up elsewhere, using <code>CreateBackupOptions::exclude_files_callback</code>. To restore the DB, the excluded files must be provided in alternative backup directories using <code>RestoreOptions::alternate_dirs</code>.</li> </ul> <h3>Public API Changes</h3> <ul> <li>Substantial changes have been made to the Cache class to support internal development goals. Direct use of Cache class members is discouraged and further breaking modifications are expected in the future. SecondaryCache has some related changes and implementations will need to be updated. (Unlike Cache, SecondaryCache is still intended to support user implementations, and disruptive changes will be avoided.) (<a href="https://redirect.github.com/facebook/rocksdb/issues/10975">#10975</a>)</li> <li>Add <code>MergeOperationOutput::op_failure_scope</code> for merge operator users to control the blast radius of merge operator failures. 
Existing merge operator users do not need to make any change to preserve the old behavior.</li> </ul> <h3>Performance Improvements</h3> <ul> <li>Updated xxHash source code, which should improve kXXH3 checksum speed, at least on ARM (<a href="https://redirect.github.com/facebook/rocksdb/issues/11098">#11098</a>).</li> <li>Improved CPU efficiency of DB reads, from block cache access improvements (<a href="https://redirect.github.com/facebook/rocksdb/issues/10975">#10975</a>).</li> </ul> <h2>7.9.0 (11/21/2022)</h2> <h3>Performance Improvements</h3> <ul> <li>Fixed an iterator performance regression for delete-range users when scanning through a consecutive sequence of range tombstones (<a href="https://redirect.github.com/facebook/rocksdb/issues/10877">#10877</a>).</li> </ul> <h3>Bug Fixes</h3> <ul> <li>Fixed a memory corruption error in scans when async_io is enabled. Memory corruption could happen when an IOError while reading the data left an empty buffer and another buffer, already in progress of an async read, was submitted for reading again.</li> <li>Fixed a failed memtable flush retry bug that could cause wrongly ordered updates, which would surface to writers as <code>Status::Corruption</code> in case of <code>force_consistency_checks=true</code> (default). It affects use cases that enable both parallel flush (<code>max_background_flushes > 1</code> or <code>max_background_jobs >= 8</code>) and a non-default memtable count (<code>max_write_buffer_number > 2</code>).</li> <li>Fixed an issue where the <code>READ_NUM_MERGE_OPERANDS</code> ticker was not updated when the base key-value or tombstone was read from an SST file.</li> <li>Fixed a memory safety bug when using a SecondaryCache with <code>block_cache_compressed</code>. <code>block_cache_compressed</code> no longer attempts to use SecondaryCache features.</li> </ul> <!-- raw HTML omitted --> </blockquote> <p>... 
(truncated)</p> </details> <details> <summary>Commits</summary> <ul> <li><a href="https://github.com/facebook/rocksdb/commit/3258b5c3e2488464de0827343c8c27bc6499765e"><code>3258b5c</code></a> Update version.h to 7.10.2</li> <li><a href="https://github.com/facebook/rocksdb/commit/3a04cd558ed07b0f0dd1c0d2cc7df71208df0787"><code>3a04cd5</code></a> Update HISTORY.md for 7.10.2</li> <li><a href="https://github.com/facebook/rocksdb/commit/0ffa8db9b109d51cf22a90d6433c0a10c4bf1dfb"><code>0ffa8db</code></a> Fix bug in WAL streaming uncompression (<a href="https://redirect.github.com/facebook/rocksdb/issues/11198">#11198</a>)</li> <li><a href="https://github.com/facebook/rocksdb/commit/1185bb75ca458764d504f0717b578104691fb9f7"><code>1185bb7</code></a> Return any errors returned by ReadAsync to the MultiGet caller (<a href="https://redirect.github.com/facebook/rocksdb/issues/11171">#11171</a>)</li> <li><a href="https://github.com/facebook/rocksdb/commit/d57ec3f89659ed25b09fa336f6c0387342805ec5"><code>d57ec3f</code></a> update HISTORY.md and version.h for 7.10.1</li> <li><a href="https://github.com/facebook/rocksdb/commit/8a354a1197f6646a23e04d7fa526b458752b7095"><code>8a354a1</code></a> add release note for GetMergeOperands() fix</li> <li><a href="https://github.com/facebook/rocksdb/commit/fcb0580b0859c03ff536ab096acea8e6d09d449e"><code>fcb0580</code></a> Fix GetMergeOperands() returning MergeInProgress (<a href="https://redirect.github.com/facebook/rocksdb/issues/11136">#11136</a>)</li> <li><a href="https://github.com/facebook/rocksdb/commit/06765b5131efc773cecf97c36a7a842445999092"><code>06765b5</code></a> Allow canceling manual compaction while waiting for conflicting compaction (#...</li> <li><a href="https://github.com/facebook/rocksdb/commit/fa13962e0c5e79250eec2c4a68676c10bc8bb8d5"><code>fa13962</code></a> Fix data race on <code>ColumnFamilyData::flush_reason</code> by letting FlushRequest/Job...</li> <li><a 
href="https://github.com/facebook/rocksdb/commit/e5dcebf756960989bb5a750dffb94c851f22e7e9"><code>e5dcebf</code></a> Fix DelayWrite() calls for two_write_queues (<a href="https://redirect.github.com/facebook/rocksdb/issues/11130">#11130</a>)</li> <li>Additional commits viewable in <a href="https://github.com/facebook/rocksdb/compare/v7.7.3...v7.10.2">compare view</a></li> </ul> </details> <br /> [Dependabot compatibility score](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores) Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`. [//]: # (dependabot-automerge-start) [//]: # (dependabot-automerge-end) --- <details> <summary>Dependabot commands and options</summary> <br /> You can trigger Dependabot actions by commenting on this PR: - `@dependabot rebase` will rebase this PR - `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it - `@dependabot merge` will merge this PR after your CI passes on it - `@dependabot squash and merge` will squash and merge this PR after your CI passes on it - `@dependabot cancel merge` will cancel a previously requested merge and block automerging - `@dependabot reopen` will reopen this PR if it is closed - `@dependabot close` will close this PR and stop Dependabot recreating it. 
You can achieve the same result by closing it manually - `@dependabot show <dependency name> ignore conditions` will show all of the ignore conditions of the specified dependency - `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself) - `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself) - `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself) </details> -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: [email protected] For queries about this service, please contact Infrastructure at: [email protected]
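
For consumers tracking this bump, the coordinate change amounts to a one-line version edit in the build file. The fragment below is an illustrative Maven sketch, not Ozone's actual `pom.xml` (the real project may manage the version through a shared `<properties>` entry rather than an inline literal):

```xml
<!-- Illustrative only: shows the before/after coordinates of this bump -->
<dependency>
  <groupId>org.rocksdb</groupId>
  <artifactId>rocksdbjni</artifactId>
  <!-- bumped from 7.7.3 -->
  <version>7.10.2</version>
</dependency>
```

Because rocksdbjni bundles the native RocksDB library inside the jar, bumping this single artifact picks up all of the C++ fixes listed above without any other build changes.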
