dependabot[bot] opened a new pull request, #7783:
URL: https://github.com/apache/ignite-3/pull/7783

   Bumps [org.rocksdb:rocksdbjni](https://github.com/facebook/rocksdb) from 
10.2.1 to 10.5.1.
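   For reference, after this bump the dependency declaration in a Gradle build would look roughly like the following (a sketch assuming a Groovy DSL `build.gradle` and the `implementation` configuration; Ignite's actual build scripts may declare the coordinate and version differently, e.g. via a version catalog):

   ```groovy
   dependencies {
       // RocksDB JNI bindings, bumped from 10.2.1 to 10.5.1 by this PR
       implementation 'org.rocksdb:rocksdbjni:10.5.1'
   }
   ```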
   <details>
   <summary>Release notes</summary>
   <p><em>Sourced from <a 
href="https://github.com/facebook/rocksdb/releases">org.rocksdb:rocksdbjni's 
releases</a>.</em></p>
   <blockquote>
   <h2>v10.5.1</h2>
   <h2>10.5.1 (2025-08-04)</h2>
   <h3>Bug Fixes</h3>
   <ul>
   <li>Fixed a bug in remote compaction that may mistakenly delete live SST 
file(s) during the cleanup phase when no keys survive the compaction (all 
expired)</li>
   </ul>
   <h2>10.5.0 (2025-07-21)</h2>
   <h3>Public API Changes</h3>
   <ul>
   <li>DB option skip_checking_sst_file_sizes_on_db_open is deprecated in 
favor of validating file sizes in parallel in a thread pool when the DB is 
opened. With paranoid checks enabled, a file with the wrong size will fail the 
DB open. With paranoid checks disabled, the DB open will succeed, but the 
column family with the corrupted file cannot be read or written, while the 
other healthy column families can be read and written normally. When the 
max_open_files option is not set to -1, only a subset of the files is opened 
and checked at open time; the rest are opened and checked when they are 
accessed.</li>
   <li>GetTtl() API is now available in TTL DB</li>
   </ul>
   <h3>Behavior Changes</h3>
   <ul>
   <li>PessimisticTransaction::GetWaitingTxns now returns waiting transaction 
information even if the current transaction has timed out. This allows the 
information to be surfaced to users for debugging purposes once it is known 
that the timeout has occurred.</li>
   <li>A new API, GetFileSize, is added to the FSRandomAccessFile interface 
class. The POSIX implementation uses fstat rather than stat, which is more 
efficient, so callers can retrieve file sizes faster. This function might be 
required in the future for FileSystem implementations outside of the RocksDB 
code base.</li>
   <li>RocksDB now triggers eligible compactions every 12 hours when periodic 
compaction is configured. This solves a limitation of the compaction trigger 
mechanism, which would only trigger compaction after specific events like 
flush, compaction, or SetOptions.</li>
   </ul>
   <h3>Bug Fixes</h3>
   <ul>
   <li>Fix a bug in BackupEngine that can crash backup due to a null 
FSWritableFile passed to WritableFileWriter.</li>
   <li>Fix DB::NewMultiScan iterator to respect the scan upper bound specified 
in ScanOptions</li>
   </ul>
   <h3>Performance Improvements</h3>
   <ul>
   <li>Optimized MultiScan using BlockBasedTable to coalesce I/Os and prefetch 
all data blocks.</li>
   </ul>
   <h2>v10.4.2</h2>
   <h2>10.4.2 (2025-07-09)</h2>
   <h3>Bug Fixes</h3>
   <ul>
   <li>Fix a race condition between concurrent DB::Open sharing the same 
SstFileManager instance.</li>
   </ul>
   <h2>10.4.1 (2025-07-01)</h2>
   <h3>Behavior Changes</h3>
   <ul>
   <li>RocksDB now triggers eligible compactions every 12 hours when periodic 
compaction is configured. This solves a limitation of the compaction trigger 
mechanism, which would only trigger compaction after specific events like 
flush, compaction, or SetOptions.</li>
   </ul>
   <h3>Bug Fixes</h3>
   <ul>
   <li>Fix a bug in BackupEngine that can crash backup due to a null 
FSWritableFile passed to WritableFileWriter.</li>
   </ul>
   <h2>10.4.0 (2025-06-20)</h2>
   <h3>New Features</h3>
   <ul>
   <li>Add a new CF option <code>memtable_avg_op_scan_flush_trigger</code> that 
supports triggering memtable flush when an iterator scans through an expensive 
range of keys, with the average number of skipped keys from the active memtable 
exceeding the threshold.</li>
   <li>Vector based memtable now supports concurrent writers 
(DBOptions::allow_concurrent_memtable_write) <a 
href="https://redirect.github.com/facebook/rocksdb/issues/13675">#13675</a>.</li>
   <li>Add new experimental 
<code>TransactionOptions::large_txn_commit_optimize_byte_threshold</code> to 
enable optimizations for large transaction commit by transaction batch data 
size.</li>
   <li>Add a new option 
<code>CompactionOptionsUniversal::reduce_file_locking</code>; if it's true, 
auto universal compaction picking will adjust to minimize locking of input 
files when bottom-priority compactions are waiting to run. This can increase 
the likelihood of existing L0s being selected for compaction, thereby reducing 
write stalls and read regressions.</li>
   <li>Add new <code>format_version=7</code> to aid experimental support of 
custom compression algorithms with CompressionManager and block-based table. 
This format version includes changing the format of 
<code>TableProperties::compression_name</code>.</li>
   </ul>
   <h3>Public API Changes</h3>
   <ul>
   <li>Change NewExternalTableFactory to return a unique_ptr instead of 
shared_ptr.</li>
   <li>Add an optional min file size requirement for deletion triggered 
compaction. It can be specified when creating 
<code>CompactOnDeletionCollectorFactory</code>.</li>
   </ul>
   <h3>Behavior Changes</h3>
   <ul>
   <li><code>TransactionOptions::large_txn_commit_optimize_threshold</code> now 
has default value 0 for disabled. 
<code>TransactionDBOptions::txn_commit_bypass_memtable_threshold</code> now has 
no effect on transactions.</li>
   </ul>
   <h3>Bug Fixes</h3>
   <ul>
   <li>Fix a bug where CreateColumnFamilyWithImport() could miss the SST file 
for the memtable flush it triggered. The exported CF then may not contain the 
updates in the memtable when CreateColumnFamilyWithImport() is called.</li>
   </ul>
   <!-- raw HTML omitted -->
   </blockquote>
   <p>... (truncated)</p>
   </details>
   <details>
   <summary>Changelog</summary>
   <p><em>Sourced from <a 
href="https://github.com/facebook/rocksdb/blob/v10.5.1/HISTORY.md">org.rocksdb:rocksdbjni's 
changelog</a>.</em></p>
   <blockquote>
   <h2>10.5.1 (08/04/2025)</h2>
   <h3>Bug Fixes</h3>
   <ul>
   <li>Fixed a bug in remote compaction that may mistakenly delete live SST 
file(s) during the cleanup phase when no keys survive the compaction (all 
expired)</li>
   </ul>
   <h2>10.5.0 (07/21/2025)</h2>
   <h3>Public API Changes</h3>
   <ul>
   <li>DB option skip_checking_sst_file_sizes_on_db_open is deprecated in 
favor of validating file sizes in parallel in a thread pool when the DB is 
opened. With paranoid checks enabled, a file with the wrong size will fail the 
DB open. With paranoid checks disabled, the DB open will succeed, but the 
column family with the corrupted file cannot be read or written, while the 
other healthy column families can be read and written normally. When the 
max_open_files option is not set to -1, only a subset of the files is opened 
and checked at open time; the rest are opened and checked when they are 
accessed.</li>
   <li>GetTtl() API is now available in TTL DB</li>
   </ul>
   <h3>Behavior Changes</h3>
   <ul>
   <li>PessimisticTransaction::GetWaitingTxns now returns waiting transaction 
information even if the current transaction has timed out. This allows the 
information to be surfaced to users for debugging purposes once it is known 
that the timeout has occurred.</li>
   <li>A new API, GetFileSize, is added to the FSRandomAccessFile interface 
class. The POSIX implementation uses fstat rather than stat, which is more 
efficient, so callers can retrieve file sizes faster. This function might be 
required in the future for FileSystem implementations outside of the RocksDB 
code base.</li>
   <li>RocksDB now triggers eligible compactions every 12 hours when periodic 
compaction is configured. This solves a limitation of the compaction trigger 
mechanism, which would only trigger compaction after specific events like 
flush, compaction, or SetOptions.</li>
   </ul>
   <h3>Bug Fixes</h3>
   <ul>
   <li>Fix a bug in BackupEngine that can crash backup due to a null 
FSWritableFile passed to WritableFileWriter.</li>
   <li>Fix DB::NewMultiScan iterator to respect the scan upper bound specified 
in ScanOptions</li>
   </ul>
   <h3>Performance Improvements</h3>
   <ul>
   <li>Optimized MultiScan using BlockBasedTable to coalesce I/Os and prefetch 
all data blocks.</li>
   </ul>
   <h2>10.4.0 (06/20/2025)</h2>
   <h3>New Features</h3>
   <ul>
   <li>Add a new CF option <code>memtable_avg_op_scan_flush_trigger</code> that 
supports triggering memtable flush when an iterator scans through an expensive 
range of keys, with the average number of skipped keys from the active memtable 
exceeding the threshold.</li>
   <li>Vector based memtable now supports concurrent writers 
(DBOptions::allow_concurrent_memtable_write) <a 
href="https://redirect.github.com/facebook/rocksdb/issues/13675">#13675</a>.</li>
   <li>Add new experimental 
<code>TransactionOptions::large_txn_commit_optimize_byte_threshold</code> to 
enable optimizations for large transaction commit by transaction batch data 
size.</li>
   <li>Add a new option 
<code>CompactionOptionsUniversal::reduce_file_locking</code>; if it's true, 
auto universal compaction picking will adjust to minimize locking of input 
files when bottom-priority compactions are waiting to run. This can increase 
the likelihood of existing L0s being selected for compaction, thereby reducing 
write stalls and read regressions.</li>
   <li>Add new <code>format_version=7</code> to aid experimental support of 
custom compression algorithms with CompressionManager and block-based table. 
This format version includes changing the format of 
<code>TableProperties::compression_name</code>.</li>
   </ul>
   <h3>Public API Changes</h3>
   <ul>
   <li>Change NewExternalTableFactory to return a unique_ptr instead of 
shared_ptr.</li>
   <li>Add an optional min file size requirement for deletion triggered 
compaction. It can be specified when creating 
<code>CompactOnDeletionCollectorFactory</code>.</li>
   </ul>
   <h3>Behavior Changes</h3>
   <ul>
   <li><code>TransactionOptions::large_txn_commit_optimize_threshold</code> now 
has default value 0 for disabled. 
<code>TransactionDBOptions::txn_commit_bypass_memtable_threshold</code> now has 
no effect on transactions.</li>
   </ul>
   <h3>Bug Fixes</h3>
   <ul>
   <li>Fix a bug where CreateColumnFamilyWithImport() could miss the SST file 
for the memtable flush it triggered. The exported CF then may not contain the 
updates in the memtable when CreateColumnFamilyWithImport() is called.</li>
   <li>Fix iterator operations returning NotImplemented status if 
disallow_memtable_writes and paranoid_memory_checks CF options are both 
set.</li>
   <li>Fixed handling of file checksums in IngestExternalFile() to allow 
providing checksums computed with a recognized checksum function that is not 
necessarily the DB's preferred one, to ease migration between checksum 
functions.</li>
   </ul>
   <h2>10.3.0 (05/17/2025)</h2>
   <h3>New Features</h3>
   <ul>
   <li>Add new experimental 
<code>CompactionOptionsFIFO::allow_trivial_copy_when_change_temperature</code> 
along with <code>CompactionOptionsFIFO::trivial_copy_buffer_size</code> to 
allow optimizing FIFO compactions with tiering: when kChangeTemperature moves 
files from the source tier FileSystem to another tier FileSystem, the raw SST 
file is copied trivially and directly instead of reading through the contents 
of the SST file and rebuilding the table files.</li>
   <li>Add a new field to Compaction Stats in LOG files for the pre-compression 
size written to each level.</li>
   <li>Add new experimental 
<code>TransactionOptions::large_txn_commit_optimize_threshold</code> to enable 
optimizations for large transaction commit with per transaction threshold. 
<code>TransactionDBOptions::txn_commit_bypass_memtable_threshold</code> is 
deprecated in favor of this transaction option.</li>
   <li>[internal team use only] Allow an application-defined 
<code>request_id</code> to be passed to RocksDB and propagated to the 
filesystem via IODebugContext</li>
   </ul>
   <h3>Bug Fixes</h3>
   <ul>
   <li>Fix a bug where transaction lock upgrade can incorrectly fail with a 
Deadlock status. This happens when a transaction has a non-zero timeout and 
tries to upgrade a shared lock that is also held by another transaction.</li>
   </ul>
   <!-- raw HTML omitted -->
   </blockquote>
   <p>... (truncated)</p>
   </details>
   <details>
   <summary>Commits</summary>
   <ul>
   <li><a href="https://github.com/facebook/rocksdb/commit/c007dd1da63fadc6bd0f34c7b8314cc29e7a42bd"><code>c007dd1</code></a> Switch back to FSWritableFile in external sst file ingestion job (<a href="https://redirect.github.com/facebook/rocksdb/issues/13791">#13791</a>)</li>
   <li><a href="https://github.com/facebook/rocksdb/commit/0d7938619723bdf8a705ec4bf5d9c20fd0729407"><code>0d79386</code></a> Update version.h</li>
   <li><a href="https://github.com/facebook/rocksdb/commit/621dbd54fa28a93c4d2af3d7005c05b7e26687da"><code>621dbd5</code></a> Update HISTORY.md</li>
   <li><a href="https://github.com/facebook/rocksdb/commit/03cfaf3931d541d2adda07eb151dd1cca307c2a1"><code>03cfaf3</code></a> Make CompactionPicker::CompactFiles() take earliest_snapshot and snapshot_che...</li>
   <li><a href="https://github.com/facebook/rocksdb/commit/1061f74b7d4e6d137b39b61004dbbc131fcff29a"><code>1061f74</code></a> UnitTest for Remote Compaction Empty Result (<a href="https://redirect.github.com/facebook/rocksdb/issues/13812">#13812</a>)</li>
   <li><a href="https://github.com/facebook/rocksdb/commit/6a83adcea81435157b901e754621d3825344473c"><code>6a83adc</code></a> prevent data loss when all entries are expired in Remote Compaction (<a href="https://redirect.github.com/facebook/rocksdb/issues/13743">#13743</a>)</li>
   <li><a href="https://github.com/facebook/rocksdb/commit/ab42881ce799f221df0edc5eb2f8e35d8010f271"><code>ab42881</code></a> Update HISTORY.md</li>
   <li><a href="https://github.com/facebook/rocksdb/commit/27c544fb29ef833afa7c7a4c8c1c54ced8c18882"><code>27c544f</code></a> Expose GetTtl() API in TTL DB (<a href="https://redirect.github.com/facebook/rocksdb/issues/13790">#13790</a>)</li>
   <li><a href="https://github.com/facebook/rocksdb/commit/11a61ca109c544412c90c11f2555c0d536ebdf09"><code>11a61ca</code></a> Support recompress-with-CompressionManager in sst_dump (<a href="https://redirect.github.com/facebook/rocksdb/issues/13783">#13783</a>)</li>
   <li><a href="https://github.com/facebook/rocksdb/commit/462388a4348f0007a0669f5856024ae9a0785357"><code>462388a</code></a> Update HISTORY.md</li>
   <li>Additional commits viewable in <a href="https://github.com/facebook/rocksdb/compare/v10.2.1...v10.5.1">compare view</a></li>
   </ul>
   </details>
   <br />
   
   
   [![Dependabot compatibility 
score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=org.rocksdb:rocksdbjni&package-manager=gradle&previous-version=10.2.1&new-version=10.5.1)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)
   
   Dependabot will resolve any conflicts with this PR as long as you don't 
alter it yourself. You can also trigger a rebase manually by commenting 
`@dependabot rebase`.
   
   [//]: # (dependabot-automerge-start)
   [//]: # (dependabot-automerge-end)
   
   ---
   
   <details>
   <summary>Dependabot commands and options</summary>
   <br />
   
   You can trigger Dependabot actions by commenting on this PR:
   - `@dependabot rebase` will rebase this PR
   - `@dependabot recreate` will recreate this PR, overwriting any edits that 
have been made to it
   - `@dependabot show <dependency name> ignore conditions` will show all of 
the ignore conditions of the specified dependency
   - `@dependabot ignore this major version` will close this PR and stop 
Dependabot creating any more for this major version (unless you reopen the PR 
or upgrade to it yourself)
   - `@dependabot ignore this minor version` will close this PR and stop 
Dependabot creating any more for this minor version (unless you reopen the PR 
or upgrade to it yourself)
   - `@dependabot ignore this dependency` will close this PR and stop 
Dependabot creating any more for this dependency (unless you reopen the PR or 
upgrade to it yourself)
   
   
   </details>


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]
