dependabot[bot] opened a new pull request, #8060:
URL: https://github.com/apache/storm/pull/8060

   Bumps [org.rocksdb:rocksdbjni](https://github.com/facebook/rocksdb) from 
9.10.0 to 10.1.3.
   <details>
   <summary>Release notes</summary>
   <p><em>Sourced from <a href="https://github.com/facebook/rocksdb/releases">org.rocksdb:rocksdbjni's releases</a>.</em></p>
   <blockquote>
   <h2>v10.1.3</h2>
   <h2>10.1.3 (2025-04-09)</h2>
   <h3>Bug Fixes</h3>
   <ul>
   <li>Fix a bug where a full_history_ts_low resurrected from a previous session with UDT enabled was used by a session with UDT disabled.</li>
   </ul>
   <h2>10.1.2 (2025-04-07)</h2>
   <h3>Bug Fixes</h3>
   <ul>
   <li>Fix a bug where the tail size of remote compaction output was not persisted in the primary DB's manifest.</li>
   </ul>
   <h2>10.1.0 (2025-03-24)</h2>
   <h3>New Features</h3>
   <ul>
   <li>Added a new <code>DBOptions.calculate_sst_write_lifetime_hint_set</code> setting that allows customizing which compaction styles SST write lifetime hint calculation is enabled for. Today RocksDB supports only two modes, <code>kCompactionStyleLevel</code> and <code>kCompactionStyleUniversal</code>.</li>
   <li>Add a new field <code>num_l0_files</code> to <code>CompactionJobInfo</code> recording the number of L0 files in the CF immediately before and after the compaction.</li>
   <li>Added per-key-placement feature in Remote Compaction</li>
   <li>Implemented API DB::GetPropertiesOfTablesByLevel that retrieves table 
properties for files in each LSM tree level</li>
   </ul>
   <h3>Public API Changes</h3>
   <ul>
   <li><code>GetAllKeyVersions()</code> now interprets empty slices literally as valid keys, and uses the new <code>OptSlice</code> type's default value for the extreme upper and lower range limits.</li>
   <li><code>DeleteFilesInRanges()</code> now takes <code>RangeOpt</code> which 
is based on <code>OptSlice</code>. The overload taking <code>RangePtr</code> is 
deprecated.</li>
   <li>Add an unordered map of name/value pairs, ReadOptions::property_bag, to 
pass opaque options through to an external table when creating an Iterator.</li>
   <li>Introduced CompactionServiceJobStatus::kAborted to allow handling 
aborted scenario in Schedule(), Wait() or OnInstallation() APIs in Remote 
Compactions.</li>
   <li>format_version &lt; 2 in BlockBasedTableOptions is no longer supported 
for writing new files. Support for reading such files is deprecated and might 
be removed in the future. 
<code>CompressedSecondaryCacheOptions::compress_format_version == 1</code> is 
also deprecated.</li>
   </ul>
   <h3>Behavior Changes</h3>
   <ul>
   <li><code>ldb</code> now returns an error if the specified 
<code>--compression_type</code> is not supported in the build.</li>
   <li>MultiGet with snapshot and ReadOptions::read_tier = kPersistedTier will 
now read a consistent view across CFs (instead of potentially reading some CF 
before and some CF after a flush).</li>
   <li>CreateColumnFamily() is no longer allowed on a read-only DB 
(OpenForReadOnly())</li>
   </ul>
   <h3>Bug Fixes</h3>
   <ul>
   <li>Fixed stats for Tiered Storage with preclude_last_level feature</li>
   </ul>
   <h2>RocksDB 10.0.1 Release</h2>
   <h2>10.0.1 (2025-03-05)</h2>
   <h3>Public API Changes</h3>
   <ul>
   <li>Add an unordered map of name/value pairs, ReadOptions::property_bag, to 
pass opaque options through to an external table when creating an Iterator.</li>
   <li>Introduced CompactionServiceJobStatus::kAborted to allow handling 
aborted scenario in Schedule(), Wait() or OnInstallation() APIs in Remote 
Compactions.</li>
   <li>Added a column family option disallow_memtable_writes to safely fail any 
attempts to write to a non-default column family. This can be used for column 
families that are ingest only.</li>
   </ul>
   <h2>10.0.0 (2025-02-21)</h2>
   <h3>New Features</h3>
   <ul>
   <li>Introduced new <code>auto_refresh_iterator_with_snapshot</code> opt-in 
knob that (when enabled) will periodically release obsolete memory and storage 
resources for as long as the iterator is making progress and its supplied 
<code>read_options.snapshot</code> was initialized with non-nullptr value.</li>
   <li>Added the ability to plug in a custom table reader implementation. See include/rocksdb/external_table_reader.h for more details.</li>
   <li>Experimental feature: RocksDB now supports FAISS inverted file based 
indices via the secondary indexing framework. Applications can use FAISS 
secondary indices to automatically quantize embeddings and perform 
K-nearest-neighbors similarity searches. See <code>FaissIVFIndex</code> and 
<code>SecondaryIndex</code> for more details. Note: the FAISS integration 
currently requires using the BUCK build.</li>
   <li>Add new DB property <code>num_running_compaction_sorted_runs</code> that 
tracks the number of sorted runs being processed by currently running 
compactions</li>
   <li>Experimental feature: added support for simple secondary indices that 
index the specified column as-is. See <code>SimpleSecondaryIndex</code> and 
<code>SecondaryIndex</code> for more details.</li>
   <li>Added new 
<code>TransactionDBOptions::txn_commit_bypass_memtable_threshold</code>, which 
enables optimized transaction commit (see 
<code>TransactionOptions::commit_bypass_memtable</code>) when the transaction 
size exceeds a configured threshold.</li>
   </ul>
   <h3>Public API Changes</h3>
   <ul>
   <li>Updated the query API of the experimental secondary indexing feature by 
removing the earlier <code>SecondaryIndex::NewIterator</code> virtual and 
adding a <code>SecondaryIndexIterator</code> class that can be utilized by 
applications to find the primary keys for a given search target.</li>
   <li>Added back the ability to leverage the primary key when building 
secondary index entries. This involved changes to the signatures of 
<code>SecondaryIndex::GetSecondary{KeyPrefix,Value}</code> as well as the 
addition of a new method 
<code>SecondaryIndex::FinalizeSecondaryKeyPrefix</code>. See the API comments 
for more details.</li>
   </ul>
   <!-- raw HTML omitted -->
   </blockquote>
   <p>... (truncated)</p>
   </details>
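   Since this is a major-version jump (9.10.0 to 10.1.3) with native-library changes, a minimal round-trip through the Java binding can sanity-check that the bumped `rocksdbjni` artifact still loads its bundled native library and serves basic reads and writes. This sketch is illustrative only and not part of this PR; the class name and DB path are hypothetical, and it uses only the standard RocksJava API (`RocksDB.loadLibrary`, `RocksDB.open`, `put`, `get`).

   ```java
   import org.rocksdb.Options;
   import org.rocksdb.RocksDB;
   import org.rocksdb.RocksDBException;

   // Hypothetical smoke test for the bumped rocksdbjni artifact: loads the
   // bundled native library, opens a throwaway DB, and round-trips one key.
   public class RocksdbJniSmokeTest {
       static String roundTrip(String dbPath) throws RocksDBException {
           RocksDB.loadLibrary(); // extracts and loads the packaged native library
           try (Options opts = new Options().setCreateIfMissing(true);
                RocksDB db = RocksDB.open(opts, dbPath)) {
               db.put("key".getBytes(), "value".getBytes());
               return new String(db.get("key".getBytes()));
           }
       }

       public static void main(String[] args) throws Exception {
           String path = java.nio.file.Files.createTempDirectory("rocks-smoke").toString();
           System.out.println(roundTrip(path));
       }
   }
   ```

   If the native library fails to load on any platform Storm supports, `RocksDB.loadLibrary()` throws immediately, which makes this a cheap pre-merge check alongside CI.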
   <details>
   <summary>Changelog</summary>
   <p><em>Sourced from <a href="https://github.com/facebook/rocksdb/blob/v10.1.3/HISTORY.md">org.rocksdb:rocksdbjni's changelog</a>.</em></p>
   <blockquote>
   <h2>10.1.3 (04/09/2025)</h2>
   <h3>Bug Fixes</h3>
   <ul>
   <li>Fix a bug where a full_history_ts_low resurrected from a previous session with UDT enabled was used by a session with UDT disabled.</li>
   </ul>
   <h2>10.1.2 (04/07/2025)</h2>
   <h3>Bug Fixes</h3>
   <ul>
   <li>Fix a bug where the tail size of remote compaction output was not persisted in the primary DB's manifest.</li>
   </ul>
   <h2>10.1.0 (03/24/2025)</h2>
   <h3>New Features</h3>
   <ul>
   <li>Added a new <code>DBOptions.calculate_sst_write_lifetime_hint_set</code> setting that allows customizing which compaction styles SST write lifetime hint calculation is enabled for. Today RocksDB supports only two modes, <code>kCompactionStyleLevel</code> and <code>kCompactionStyleUniversal</code>.</li>
   <li>Add a new field <code>num_l0_files</code> to <code>CompactionJobInfo</code> recording the number of L0 files in the CF immediately before and after the compaction.</li>
   <li>Added per-key-placement feature in Remote Compaction</li>
   <li>Implemented API DB::GetPropertiesOfTablesByLevel that retrieves table 
properties for files in each LSM tree level</li>
   </ul>
   <h3>Public API Changes</h3>
   <ul>
   <li><code>GetAllKeyVersions()</code> now interprets empty slices literally as valid keys, and uses the new <code>OptSlice</code> type's default value for the extreme upper and lower range limits.</li>
   <li><code>DeleteFilesInRanges()</code> now takes <code>RangeOpt</code> which 
is based on <code>OptSlice</code>. The overload taking <code>RangePtr</code> is 
deprecated.</li>
   <li>Add an unordered map of name/value pairs, ReadOptions::property_bag, to 
pass opaque options through to an external table when creating an Iterator.</li>
   <li>Introduced CompactionServiceJobStatus::kAborted to allow handling 
aborted scenario in Schedule(), Wait() or OnInstallation() APIs in Remote 
Compactions.</li>
   <li>format_version &lt; 2 in BlockBasedTableOptions is no longer supported 
for writing new files. Support for reading such files is deprecated and might 
be removed in the future. 
<code>CompressedSecondaryCacheOptions::compress_format_version == 1</code> is 
also deprecated.</li>
   </ul>
   <h3>Behavior Changes</h3>
   <ul>
   <li><code>ldb</code> now returns an error if the specified 
<code>--compression_type</code> is not supported in the build.</li>
   <li>MultiGet with snapshot and ReadOptions::read_tier = kPersistedTier will 
now read a consistent view across CFs (instead of potentially reading some CF 
before and some CF after a flush).</li>
   <li>CreateColumnFamily() is no longer allowed on a read-only DB 
(OpenForReadOnly())</li>
   </ul>
   <h3>Bug Fixes</h3>
   <ul>
   <li>Fixed stats for Tiered Storage with preclude_last_level feature</li>
   </ul>
   <h2>10.0.0 (02/21/2025)</h2>
   <h3>New Features</h3>
   <ul>
   <li>Introduced new <code>auto_refresh_iterator_with_snapshot</code> opt-in 
knob that (when enabled) will periodically release obsolete memory and storage 
resources for as long as the iterator is making progress and its supplied 
<code>read_options.snapshot</code> was initialized with non-nullptr value.</li>
   <li>Added the ability to plug in a custom table reader implementation. See include/rocksdb/external_table_reader.h for more details.</li>
   <li>Experimental feature: RocksDB now supports FAISS inverted file based 
indices via the secondary indexing framework. Applications can use FAISS 
secondary indices to automatically quantize embeddings and perform 
K-nearest-neighbors similarity searches. See <code>FaissIVFIndex</code> and 
<code>SecondaryIndex</code> for more details. Note: the FAISS integration 
currently requires using the BUCK build.</li>
   <li>Add new DB property <code>num_running_compaction_sorted_runs</code> that 
tracks the number of sorted runs being processed by currently running 
compactions</li>
   <li>Experimental feature: added support for simple secondary indices that 
index the specified column as-is. See <code>SimpleSecondaryIndex</code> and 
<code>SecondaryIndex</code> for more details.</li>
   <li>Added new 
<code>TransactionDBOptions::txn_commit_bypass_memtable_threshold</code>, which 
enables optimized transaction commit (see 
<code>TransactionOptions::commit_bypass_memtable</code>) when the transaction 
size exceeds a configured threshold.</li>
   </ul>
   <h3>Public API Changes</h3>
   <ul>
   <li>Updated the query API of the experimental secondary indexing feature by 
removing the earlier <code>SecondaryIndex::NewIterator</code> virtual and 
adding a <code>SecondaryIndexIterator</code> class that can be utilized by 
applications to find the primary keys for a given search target.</li>
   <li>Added back the ability to leverage the primary key when building 
secondary index entries. This involved changes to the signatures of 
<code>SecondaryIndex::GetSecondary{KeyPrefix,Value}</code> as well as the 
addition of a new method 
<code>SecondaryIndex::FinalizeSecondaryKeyPrefix</code>. See the API comments 
for more details.</li>
   <li>Minimum supported version of ZSTD is now 1.4.0, for code simplification. 
Obsolete <code>CompressionType</code> <code>kZSTDNotFinalCompression</code> is 
also removed.</li>
   </ul>
   <h3>Behavior Changes</h3>
   <ul>
   <li><code>VerifyBackup</code> in <code>verify_with_checksum</code>=<code>true</code> mode will now evaluate checksums in parallel. As a result, unlike the original implementation, the API won't bail out on the first corruption / mismatch and will instead iterate over all the backup files, logging success / <em>degree_of_failure</em> for each.</li>
   <li>Reversed the order of updates to the same key in WriteBatchWithIndex. 
This means if there are multiple updates to the same key, the most recent 
update is ordered first. This affects the output of WBWIIterator. When 
WriteBatchWithIndex is created with <code>overwrite_key=true</code>, this 
affects the output only if Merge is used (<a href="https://redirect.github.com/facebook/rocksdb/issues/13387">#13387</a>).</li>
   <li>Added support for Merge operations in transactions using option 
<code>TransactionOptions::commit_bypass_memtable</code>.</li>
   </ul>
   <h3>Bug Fixes</h3>
   <!-- raw HTML omitted -->
   </blockquote>
   <p>... (truncated)</p>
   </details>
   <details>
   <summary>Commits</summary>
   <ul>
   <li><a href="https://github.com/facebook/rocksdb/commit/5823cf08d69e4d9cba6953d51fb7d6996c72df94"><code>5823cf0</code></a> Bump version to 10.1.3</li>
   <li><a href="https://github.com/facebook/rocksdb/commit/d4d712226ca7829c50a3243216ced1de7c285a81"><code>d4d7122</code></a> Add safeguarding from resurrected cutoff UDT from previous session (<a href="https://redirect.github.com/facebook/rocksdb/issues/13521">#13521</a>)</li>
   <li><a href="https://github.com/facebook/rocksdb/commit/d4a273c57756605ea8719baf7d16471d0afaf128"><code>d4a273c</code></a> Bump version to 10.1.2</li>
   <li><a href="https://github.com/facebook/rocksdb/commit/e8a41213f85a764a9496a7415cb02472658dff28"><code>e8a4121</code></a> Persist tail size of remote compaction output file to manifest (<a href="https://redirect.github.com/facebook/rocksdb/issues/13522">#13522</a>)</li>
   <li><a href="https://github.com/facebook/rocksdb/commit/99112b2f5be466fb0e276e2446ee3a92c218cf97"><code>99112b2</code></a> Bump version to 10.1.1</li>
   <li><a href="https://github.com/facebook/rocksdb/commit/5b54eb82ffeec0dc04ab1ffb2a4fbf765861e800"><code>5b54eb8</code></a> Multi scan API (<a href="https://redirect.github.com/facebook/rocksdb/issues/13473">#13473</a>)</li>
   <li><a href="https://github.com/facebook/rocksdb/commit/5cfafd5e674037adece3d8337b1d470b127885a4"><code>5cfafd5</code></a> History change for 10.1</li>
   <li><a href="https://github.com/facebook/rocksdb/commit/82794e0a4f3116878db210bce7e3768a68c47173"><code>82794e0</code></a> Deprecate RangePtr, favor new RangeOpt and OptSlice (<a href="https://redirect.github.com/facebook/rocksdb/issues/13481">#13481</a>)</li>
   <li><a href="https://github.com/facebook/rocksdb/commit/934cf2d40dc77905ec565ffec92bb54689c3199c"><code>934cf2d</code></a> Implement the DB::GetPropertiesOfTablesForLevels API (<a href="https://redirect.github.com/facebook/rocksdb/issues/13469">#13469</a>)</li>
   <li><a href="https://github.com/facebook/rocksdb/commit/0b815cf3b35182d91f378f6a647f61289c60942f"><code>0b815cf</code></a> Add a CompactionJobStats.num_input_files_trivially_moved field (<a href="https://redirect.github.com/facebook/rocksdb/issues/13479">#13479</a>)</li>
   <li>Additional commits viewable in <a href="https://github.com/facebook/rocksdb/compare/v9.10.0...v10.1.3">compare view</a></li>
   </ul>
   </details>
   <br />
   
   
   [![Dependabot compatibility 
score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=org.rocksdb:rocksdbjni&package-manager=maven&previous-version=9.10.0&new-version=10.1.3)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)
   
   Dependabot will resolve any conflicts with this PR as long as you don't 
alter it yourself. You can also trigger a rebase manually by commenting 
`@dependabot rebase`.
   
   [//]: # (dependabot-automerge-start)
   [//]: # (dependabot-automerge-end)
   
   ---
   
   <details>
   <summary>Dependabot commands and options</summary>
   <br />
   
   You can trigger Dependabot actions by commenting on this PR:
   - `@dependabot rebase` will rebase this PR
   - `@dependabot recreate` will recreate this PR, overwriting any edits that 
have been made to it
   - `@dependabot merge` will merge this PR after your CI passes on it
   - `@dependabot squash and merge` will squash and merge this PR after your CI 
passes on it
   - `@dependabot cancel merge` will cancel a previously requested merge and 
block automerging
   - `@dependabot reopen` will reopen this PR if it is closed
   - `@dependabot close` will close this PR and stop Dependabot recreating it. 
You can achieve the same result by closing it manually
   - `@dependabot show <dependency name> ignore conditions` will show all of 
the ignore conditions of the specified dependency
   - `@dependabot ignore this major version` will close this PR and stop 
Dependabot creating any more for this major version (unless you reopen the PR 
or upgrade to it yourself)
   - `@dependabot ignore this minor version` will close this PR and stop 
Dependabot creating any more for this minor version (unless you reopen the PR 
or upgrade to it yourself)
   - `@dependabot ignore this dependency` will close this PR and stop 
Dependabot creating any more for this dependency (unless you reopen the PR or 
upgrade to it yourself)
   
   
   </details>


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

Reply via email to