dependabot[bot] opened a new pull request, #3654:
URL: https://github.com/apache/ignite-3/pull/3654

   Bumps [org.rocksdb:rocksdbjni](https://github.com/facebook/rocksdb) from 
8.11.3 to 9.1.1.
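   For reference, the corresponding build-script change would look roughly like the following sketch (coordinates and versions are taken from this PR; the declaration style assumes the Gradle Kotlin DSL and a plain `implementation` configuration, which may differ from Ignite's actual build scripts):

   ```gradle
   dependencies {
       // Bumped by this PR: RocksDB JNI bindings, 8.11.3 -> 9.1.1
       implementation("org.rocksdb:rocksdbjni:9.1.1")
   }
   ```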
   <details>
   <summary>Release notes</summary>
   <p><em>Sourced from <a 
href="https://github.com/facebook/rocksdb/releases">org.rocksdb:rocksdbjni's 
releases</a>.</em></p>
   <blockquote>
   <h2>RocksDB 9.1.1</h2>
   <h2>9.1.1 (2024-04-17)</h2>
   <h3>Bug Fixes</h3>
   <ul>
   <li>Fixed Java <code>SstFileMetaData</code> to prevent throwing 
<code>java.lang.NoSuchMethodError</code></li>
   <li>Fixed a regression when <code>ColumnFamilyOptions::max_successive_merges 
&gt; 0</code> where the CPU overhead for deciding whether to merge could have 
increased unless the user had set the option 
<code>ColumnFamilyOptions::strict_max_successive_merges</code></li>
   </ul>
   <h2>RocksDB 9.1.0</h2>
   <h2>9.1.0 (2024-03-22)</h2>
   <h3>New Features</h3>
   <ul>
   <li>Added an option, <code>GetMergeOperandsOptions::continue_cb</code>, to 
give users the ability to end <code>GetMergeOperands()</code>'s lookup process 
before all merge operands have been found.</li>
   <li>Add sanity checks for ingesting external files that currently check 
whether the user key comparator used to create the file is compatible with the 
column family's user key comparator.</li>
   <li>Support ingesting external files for column families that have 
user-defined timestamps in memtable only enabled.</li>
   <li>On file systems that support storage level data checksum and 
reconstruction, retry SST block reads for point lookups, scans, and flush and 
compaction if there's a checksum mismatch on the initial read.</li>
   <li>Some enhancements and fixes to experimental Temperature handling 
features, including new <code>default_write_temperature</code> CF option and 
opening an <code>SstFileWriter</code> with a temperature.</li>
   <li><code>WriteBatchWithIndex</code> now supports wide-column point lookups 
via the <code>GetEntityFromBatch</code> API. See the API comments for more 
details.</li>
   <li>Implemented experimental features: the 
<code>Iterator::GetProperty(&quot;rocksdb.iterator.write-time&quot;)</code> API 
to allow users to get a key's approximate write (unix) time, and the 
<code>WriteBatch::TimedPut</code> API to write data with a specific write 
time.</li>
   </ul>
   <h3>Public API Changes</h3>
   <ul>
   <li>Best-effort recovery (<code>best_efforts_recovery == true</code>) may 
now be used together with atomic flush (<code>atomic_flush == true</code>). The 
all-or-nothing recovery guarantee for atomically flushed data will be 
upheld.</li>
   <li>Remove the deprecated option <code>bottommost_temperature</code>, already 
replaced by <code>last_level_temperature</code>.</li>
   <li>Added new PerfContext counters for block cache bytes read: 
<code>block_cache_index_read_byte</code>, 
<code>block_cache_filter_read_byte</code>, 
<code>block_cache_compression_dict_read_byte</code>, and 
<code>block_cache_read_byte</code>.</li>
   <li>Deprecate the experimental Remote Compaction APIs StartV2() and 
WaitForCompleteV2() and introduce Schedule() and Wait(). The new APIs do 
essentially the same thing as the old ones; they additionally accept an 
externally generated unique id for waiting on a remote compaction to 
complete.</li>
   <li>For the API <code>WriteCommittedTransaction::GetForUpdate</code>, if the 
column family enables user-defined timestamps (UDTs), it was previously 
mandated that the argument <code>do_validate</code> cannot be false and that 
UDT-based validation be done with a user-set read timestamp. This has been 
updated to make UDT-based validation optional: if the user sets 
<code>do_validate</code> to false and does not set a read timestamp, 
<code>GetForUpdate</code> skips UDT-based validation, and it becomes the user's 
responsibility to enforce the UDT invariant. Do NOT skip this UDT-based 
validation if you have no way to enforce the UDT invariant; ways to enforce it 
on the user side include managing a monotonically increasing timestamp, 
committing transactions in a single thread, etc.</li>
   <li>Defined a new PerfLevel <code>kEnableWait</code> to measure time spent 
by user threads blocked in RocksDB for reasons other than mutex contention, 
such as a write thread waiting to be added to a write group, or a write thread 
that is delayed or stalled.</li>
   <li><code>RateLimiter</code>'s API no longer requires the burst size to be 
the refill size. Users of <code>NewGenericRateLimiter()</code> can now provide 
burst size in <code>single_burst_bytes</code>. Implementors of 
<code>RateLimiter::SetSingleBurstBytes()</code> need to adapt their 
implementations to match the changed API doc.</li>
   <li>Add <code>write_memtable_time</code> to the newly introduced PerfLevel 
<code>kEnableWait</code>.</li>
   </ul>
   <h3>Behavior Changes</h3>
   <ul>
   <li><code>RateLimiter</code>s created by 
<code>NewGenericRateLimiter()</code> no longer modify the refill period when 
<code>SetSingleBurstBytes()</code> is called.</li>
   <li>Merge writes will only keep merge operand count within 
<code>ColumnFamilyOptions::max_successive_merges</code> when the key's merge 
operands are all found in memory, unless 
<code>strict_max_successive_merges</code> is explicitly set.</li>
   </ul>
   <h3>Bug Fixes</h3>
   <ul>
   <li>Fixed <code>kBlockCacheTier</code> reads to return 
<code>Status::Incomplete</code> when I/O is needed to fetch a merge chain's 
base value from a blob file.</li>
   <li>Fixed <code>kBlockCacheTier</code> reads to return 
<code>Status::Incomplete</code> on table cache miss rather than incorrectly 
returning an empty value.</li>
   <li>Fixed a data race in WalManager that may affect how frequently 
PurgeObsoleteWALFiles() runs.</li>
   <li>Re-enable the recycle_log_file_num option in DBOptions for 
kPointInTimeRecovery WAL recovery mode, which was previously disabled due to a 
bug in the recovery logic. This option is incompatible with 
WriteOptions::disableWAL. A Status::InvalidArgument() will be returned if 
disableWAL is specified.</li>
   </ul>
   <h3>Performance Improvements</h3>
   <ul>
   <li>Java API <code>multiGet()</code> variants now take advantage of the 
underlying batched <code>multiGet()</code> performance improvements.</li>
   </ul>
   <p>Before</p>
   <pre><code>Benchmark                          (columnFamilyTestType) (keyCount) (keySize) (multiGetSize) (valueSize)  Mode  Cnt     Score    Error  Units
MultiGetBenchmarks.multiGetList10  no_column_family       10000      16        100            64           thrpt  25   6315.541 ±  8.106  ops/s
MultiGetBenchmarks.multiGetList10  no_column_family       10000      16        100            1024         thrpt  25   6975.468 ± 68.964  ops/s
   </code></pre>
   <p>After</p>
   <pre><code>Benchmark                          (columnFamilyTestType) (keyCount) (keySize) (multiGetSize) (valueSize)  Mode  Cnt     Score    Error  Units
MultiGetBenchmarks.multiGetList10  no_column_family       10000      16        100            64           thrpt  25   7046.739 ± 13.299  ops/s
MultiGetBenchmarks.multiGetList10  no_column_family       10000      16        100            1024         thrpt  25   7654.521 ± 60.121  ops/s
   </code></pre>
   </blockquote>
   <p>... (truncated)</p>
   </details>
   <details>
   <summary>Commits</summary>
   <ul>
   <li><a 
href="https://github.com/facebook/rocksdb/commit/6f7cabeac80a3a6150be2c8a8369fcecb107bf43"><code>6f7cabe</code></a>
 update version.h and HISTORY.md for 9.1.1</li>
   <li><a 
href="https://github.com/facebook/rocksdb/commit/adb9bf5179cff75942b4294abb245217351f8423"><code>adb9bf5</code></a>
 Fix <code>max_successive_merges</code> counting CPU overhead regression (<a 
href="https://redirect.github.com/facebook/rocksdb/issues/12546">#12546</a>)</li>
   <li><a 
href="https://github.com/facebook/rocksdb/commit/7dd5e91e393c18b2962f782f71b7b9d7b74036b5"><code>7dd5e91</code></a>
 12474 history entry</li>
   <li><a 
href="https://github.com/facebook/rocksdb/commit/e94141d8488ef02ec36280bc07baccabf6d36d6d"><code>e94141d</code></a>
 Fix exception on RocksDB.getColumnFamilyMetaData() (<a 
href="https://redirect.github.com/facebook/rocksdb/issues/12474">#12474</a>)</li>
   <li><a 
href="https://github.com/facebook/rocksdb/commit/bcf88d48ce8aa8b536aee4dd305533b3b83cf435"><code>bcf88d4</code></a>
 Skip io_uring feature test when building with fbcode (<a 
href="https://redirect.github.com/facebook/rocksdb/issues/12525">#12525</a>)</li>
   <li><a 
href="https://github.com/facebook/rocksdb/commit/f6d01f0f6e31a8be6a52592760345557e980a270"><code>f6d01f0</code></a>
 Don't swallow errors in BlockBasedTable::MultiGet (<a 
href="https://redirect.github.com/facebook/rocksdb/issues/12486">#12486</a>)</li>
   <li><a 
href="https://github.com/facebook/rocksdb/commit/e223cd46c618448797b8b9dd8a9fd1d7243124c4"><code>e223cd4</code></a>
 Branch cut 9.1.fb</li>
   <li><a 
href="https://github.com/facebook/rocksdb/commit/c449867236d7023a46b80e41608b5bde8ece0cb0"><code>c449867</code></a>
 MultiCfIterator Impl Follow up (<a 
href="https://redirect.github.com/facebook/rocksdb/issues/12465">#12465</a>)</li>
   <li><a 
href="https://github.com/facebook/rocksdb/commit/b515a5db3f8013d5a8b6c1deaf99b50b90bd5b81"><code>b515a5d</code></a>
 Replace ScopedArenaIterator with ScopedArenaPtr&lt;InternalIterator&gt; (<a 
href="https://redirect.github.com/facebook/rocksdb/issues/12470">#12470</a>)</li>
   <li><a 
href="https://github.com/facebook/rocksdb/commit/3b736c4aa3b997cc8ca05fda0f004f6e414a8812"><code>3b736c4</code></a>
 Fix heap use after free error on retry after checksum mismatch (<a 
href="https://redirect.github.com/facebook/rocksdb/issues/12464">#12464</a>)</li>
   <li>Additional commits viewable in <a 
href="https://github.com/facebook/rocksdb/compare/v8.11.3...v9.1.1">compare 
view</a></li>
   </details>
   <br />
   
   
   [![Dependabot compatibility 
score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=org.rocksdb:rocksdbjni&package-manager=gradle&previous-version=8.11.3&new-version=9.1.1)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)
   
   Dependabot will resolve any conflicts with this PR as long as you don't 
alter it yourself. You can also trigger a rebase manually by commenting 
`@dependabot rebase`.
   
   [//]: # (dependabot-automerge-start)
   [//]: # (dependabot-automerge-end)
   
   ---
   
   <details>
   <summary>Dependabot commands and options</summary>
   <br />
   
   You can trigger Dependabot actions by commenting on this PR:
   - `@dependabot rebase` will rebase this PR
   - `@dependabot recreate` will recreate this PR, overwriting any edits that 
have been made to it
   - `@dependabot merge` will merge this PR after your CI passes on it
   - `@dependabot squash and merge` will squash and merge this PR after your CI 
passes on it
   - `@dependabot cancel merge` will cancel a previously requested merge and 
block automerging
   - `@dependabot reopen` will reopen this PR if it is closed
   - `@dependabot close` will close this PR and stop Dependabot recreating it. 
You can achieve the same result by closing it manually
   - `@dependabot show <dependency name> ignore conditions` will show all of 
the ignore conditions of the specified dependency
   - `@dependabot ignore this major version` will close this PR and stop 
Dependabot creating any more for this major version (unless you reopen the PR 
or upgrade to it yourself)
   - `@dependabot ignore this minor version` will close this PR and stop 
Dependabot creating any more for this minor version (unless you reopen the PR 
or upgrade to it yourself)
   - `@dependabot ignore this dependency` will close this PR and stop 
Dependabot creating any more for this dependency (unless you reopen the PR or 
upgrade to it yourself)
   
   
   </details>


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]
