dependabot[bot] opened a new pull request, #680:
URL: https://github.com/apache/opennlp/pull/680

   Bumps `onnxruntime.version` from 1.19.2 to 1.20.0.
   Updates `com.microsoft.onnxruntime:onnxruntime` from 1.19.2 to 1.20.0
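   As a quick orientation for reviewers, a bump of a shared version property such as `onnxruntime.version` usually amounts to a one-line change in the project's `pom.xml`. The sketch below is illustrative only (the exact property and dependency layout in this repository may differ); it assumes both `onnxruntime` and `onnxruntime_gpu` resolve their version from that single property:

   ```xml
   <!-- Illustrative sketch only; the real pom.xml in this repository may be laid out differently. -->
   <properties>
     <!-- Bumped by this PR from 1.19.2 to 1.20.0 -->
     <onnxruntime.version>1.20.0</onnxruntime.version>
   </properties>

   <dependencies>
     <!-- Both artifacts pick up the new runtime via the shared property. -->
     <dependency>
       <groupId>com.microsoft.onnxruntime</groupId>
       <artifactId>onnxruntime</artifactId>
       <version>${onnxruntime.version}</version>
     </dependency>
     <dependency>
       <groupId>com.microsoft.onnxruntime</groupId>
       <artifactId>onnxruntime_gpu</artifactId>
       <version>${onnxruntime.version}</version>
     </dependency>
   </dependencies>
   ```

   Keeping both artifacts on one property avoids mixing ONNX Runtime versions between the CPU and GPU classpaths.
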
   <details>
   <summary>Release notes</summary>
   <p><em>Sourced from <a href="https://github.com/microsoft/onnxruntime/releases">com.microsoft.onnxruntime:onnxruntime's releases</a>.</em></p>
   <blockquote>
   <h2>ONNX Runtime v1.20.0</h2>
   <p><strong>Release Manager: <a href="https://github.com/apsonawane"><code>@apsonawane</code></a></strong></p>
   <h1>Announcements</h1>
   <ul>
   <li><strong>All ONNX Runtime Training packages have been 
deprecated.</strong> ORT 1.19.2 was the last release for which 
onnxruntime-training (PyPI), onnxruntime-training-cpu (PyPI), 
Microsoft.ML.OnnxRuntime.Training (Nuget), onnxruntime-training-c (CocoaPods), 
onnxruntime-training-objc (CocoaPods), and onnxruntime-training-android (Maven 
Central) were published.</li>
   <li><strong>ONNX Runtime packages will stop supporting Python 3.8 and Python 
3.9.</strong> This decision aligns with NumPy Python version support. To 
continue using ORT with Python 3.8 and Python 3.9, you can use ORT 1.19.2 and 
earlier.</li>
   <li><strong>ONNX Runtime 1.20 CUDA packages will include new dependencies 
that were not required in 1.19 packages.</strong> The following dependencies 
are new: libcudnn_adv.so.9, libcudnn_cnn.so.9, 
libcudnn_engines_precompiled.so.9, libcudnn_engines_runtime_compiled.so.9, 
libcudnn_graph.so.9, libcudnn_heuristic.so.9, libcudnn_ops.so.9, 
libnvrtc.so.12, and libz.so.1.</li>
   </ul>
   <h1>Build System &amp; Packages</h1>
   <ul>
   <li>Python 3.13 support is included in PyPI packages.</li>
   <li>ONNX 1.17 support will be delayed until a future release, but the ONNX version used by ONNX Runtime has been patched to include a <a href="https://redirect.github.com/onnx/onnx/pull/6010">shape inference change to the Einsum op</a>.</li>
   <li>DLLs in the Maven build are now digitally signed (fix for issue reported <a href="https://redirect.github.com/microsoft/onnxruntime/issues/19204">here</a>).</li>
   <li>(Experimental) vcpkg support added for the CPU EP. The DML EP does not 
yet support vcpkg, and other EPs have not been tested.</li>
   </ul>
   <h1>Core</h1>
   <ul>
   <li>MultiLoRA support.</li>
   <li>Reduced memory utilization.
   <ul>
   <li>Fixed alignment that was causing mmap to fail for external weights.</li>
   <li>Eliminated double allocations when deserializing external weights.</li>
   <li>Added ability to serialize pre-packed weights so that they don’t cause 
an increase in memory utilization when the model is loaded.</li>
   </ul>
   </li>
   <li>Support for bfloat16 and float8 data types in the Python I/O binding API.</li>
   </ul>
   <h1>Performance</h1>
   <ul>
   <li>INT4 quantized embedding support on CPU and CUDA EPs.</li>
   <li>Miscellaneous performance improvements and bug fixes.</li>
   </ul>
   <h1>EPs</h1>
   <h2>CPU</h2>
   <ul>
   <li>FP16 support for MatMulNbits, Clip, and LayerNormalization ops.</li>
   </ul>
   <h2>CUDA</h2>
   <ul>
   <li>Added support for cuDNN Flash Attention and Lean Attention in the MultiHeadAttention op.</li>
   </ul>
   <h2>TensorRT</h2>
   <ul>
   <li>TensorRT <a href="https://github.com/NVIDIA/TensorRT/releases/tag/v10.4.0">10.4</a> and <a href="https://github.com/NVIDIA/TensorRT/releases/tag/v10.5.0">10.5</a> support.</li>
   </ul>
   <h2>QNN</h2>
   <ul>
   <li>QNN HTP support for weight sharing across multiple ORT inference sessions. (See <a href="https://onnxruntime.ai/docs/execution-providers/QNN-ExecutionProvider.html#qnn-ep-weight-sharing">ORT QNN EP documentation</a> for more information.)</li>
   <li>Support for QNN SDK 2.27.</li>
   </ul>
   <h2>OpenVINO</h2>
   <ul>
   <li>Added support for OpenVINO versions up to 2024.4.1.</li>
   <li>Compile-time memory optimizations.</li>
   <li>Enhanced the ORT EPContext session option to optimize first-inference latency.</li>
   <li>Added remote tensors to ensure direct memory access for inference on the NPU.</li>
   </ul>
   <h2>DirectML</h2>
   <ul>
   <li><a href="https://www.nuget.org/packages/Microsoft.AI.DirectML/1.15.2">DirectML 1.15.2</a> support.</li>
   </ul>
   <!-- raw HTML omitted -->
   </blockquote>
   <p>... (truncated)</p>
   </details>
   <details>
   <summary>Commits</summary>
   <ul>
   <li><a href="https://github.com/microsoft/onnxruntime/commit/c4fb724e810bb496165b9015c77f402727392933"><code>c4fb724</code></a> ORT 1.20.0 release preparation: Cherry pick round 2 (<a href="https://redirect.github.com/microsoft/onnxruntime/issues/22643">#22643</a>)</li>
   <li><a href="https://github.com/microsoft/onnxruntime/commit/2d00351d7b4975a4d03f6a437772b6976726a252"><code>2d00351</code></a> ORT 1.20.0 Release: Cherry pick round 1 (<a href="https://redirect.github.com/microsoft/onnxruntime/issues/22526">#22526</a>)</li>
   <li><a href="https://github.com/microsoft/onnxruntime/commit/f9e623e4d1cf0998e5499053d96ab5f77ddff6d0"><code>f9e623e</code></a> Update CMake to 3.31.0rc1 (<a href="https://redirect.github.com/microsoft/onnxruntime/issues/22433">#22433</a>)</li>
   <li><a href="https://github.com/microsoft/onnxruntime/commit/691de83892e72f38bffd39c47777980a6c362b97"><code>691de83</code></a> Enable BrowserStack tests (<a href="https://redirect.github.com/microsoft/onnxruntime/issues/22457">#22457</a>)</li>
   <li><a href="https://github.com/microsoft/onnxruntime/commit/bf604428aa4b9faf84ce99f735ad6402a6d6f886"><code>bf60442</code></a> [ROCm] Update ROCm Nuget pipeline to ROCm 6.2 (<a href="https://redirect.github.com/microsoft/onnxruntime/issues/22461">#22461</a>)</li>
   <li><a href="https://github.com/microsoft/onnxruntime/commit/2b8fc5529bce4d815d2e71e9c8258c29db377c87"><code>2b8fc55</code></a> Enable RunMatMulTest all test cases support FP16 (<a href="https://redirect.github.com/microsoft/onnxruntime/issues/22440">#22440</a>)</li>
   <li><a href="https://github.com/microsoft/onnxruntime/commit/af00a20f8aeba17a68ea8a7c5f41df2c27d96116"><code>af00a20</code></a> Change ORT nightly python packages' name (<a href="https://redirect.github.com/microsoft/onnxruntime/issues/22450">#22450</a>)</li>
   <li><a href="https://github.com/microsoft/onnxruntime/commit/a5e85a950c2fab5729c46e7362a60765caa4b999"><code>a5e85a9</code></a> Fix training artifacts for 2GB+ models and <code>MSELoss</code> (<a href="https://redirect.github.com/microsoft/onnxruntime/issues/22414">#22414</a>)</li>
   <li><a href="https://github.com/microsoft/onnxruntime/commit/6407d81b35436cfa0d74dd39f05a182d750b572e"><code>6407d81</code></a> Disable BrowserStack testing stage (<a href="https://redirect.github.com/microsoft/onnxruntime/issues/22438">#22438</a>)</li>
   <li><a href="https://github.com/microsoft/onnxruntime/commit/4c47bca8fefa654a6c7b043293bb92161492f6e6"><code>4c47bca</code></a> [MIGraphX EP] Add additional operators (<a href="https://redirect.github.com/microsoft/onnxruntime/issues/22446">#22446</a>)</li>
   <li>Additional commits viewable in <a href="https://github.com/microsoft/onnxruntime/compare/v1.19.2...v1.20.0">compare view</a></li>
   </ul>
   </details>
   <br />
   
   Updates `com.microsoft.onnxruntime:onnxruntime_gpu` from 1.19.2 to 1.20.0
   <details>
   <summary>Release notes</summary>
   <p><em>Sourced from <a href="https://github.com/microsoft/onnxruntime/releases">com.microsoft.onnxruntime:onnxruntime_gpu's releases</a>.</em></p>
   <blockquote>
   <h2>ONNX Runtime v1.20.0</h2>
   <p><strong>Release Manager: <a href="https://github.com/apsonawane"><code>@apsonawane</code></a></strong></p>
   <h1>Announcements</h1>
   <ul>
   <li><strong>All ONNX Runtime Training packages have been 
deprecated.</strong> ORT 1.19.2 was the last release for which 
onnxruntime-training (PyPI), onnxruntime-training-cpu (PyPI), 
Microsoft.ML.OnnxRuntime.Training (Nuget), onnxruntime-training-c (CocoaPods), 
onnxruntime-training-objc (CocoaPods), and onnxruntime-training-android (Maven 
Central) were published.</li>
   <li><strong>ONNX Runtime packages will stop supporting Python 3.8 and Python 
3.9.</strong> This decision aligns with NumPy Python version support. To 
continue using ORT with Python 3.8 and Python 3.9, you can use ORT 1.19.2 and 
earlier.</li>
   <li><strong>ONNX Runtime 1.20 CUDA packages will include new dependencies 
that were not required in 1.19 packages.</strong> The following dependencies 
are new: libcudnn_adv.so.9, libcudnn_cnn.so.9, 
libcudnn_engines_precompiled.so.9, libcudnn_engines_runtime_compiled.so.9, 
libcudnn_graph.so.9, libcudnn_heuristic.so.9, libcudnn_ops.so.9, 
libnvrtc.so.12, and libz.so.1.</li>
   </ul>
   <h1>Build System &amp; Packages</h1>
   <ul>
   <li>Python 3.13 support is included in PyPI packages.</li>
   <li>ONNX 1.17 support will be delayed until a future release, but the ONNX version used by ONNX Runtime has been patched to include a <a href="https://redirect.github.com/onnx/onnx/pull/6010">shape inference change to the Einsum op</a>.</li>
   <li>DLLs in the Maven build are now digitally signed (fix for issue reported <a href="https://redirect.github.com/microsoft/onnxruntime/issues/19204">here</a>).</li>
   <li>(Experimental) vcpkg support added for the CPU EP. The DML EP does not 
yet support vcpkg, and other EPs have not been tested.</li>
   </ul>
   <h1>Core</h1>
   <ul>
   <li>MultiLoRA support.</li>
   <li>Reduced memory utilization.
   <ul>
   <li>Fixed alignment that was causing mmap to fail for external weights.</li>
   <li>Eliminated double allocations when deserializing external weights.</li>
   <li>Added ability to serialize pre-packed weights so that they don’t cause 
an increase in memory utilization when the model is loaded.</li>
   </ul>
   </li>
   <li>Support for bfloat16 and float8 data types in the Python I/O binding API.</li>
   </ul>
   <h1>Performance</h1>
   <ul>
   <li>INT4 quantized embedding support on CPU and CUDA EPs.</li>
   <li>Miscellaneous performance improvements and bug fixes.</li>
   </ul>
   <h1>EPs</h1>
   <h2>CPU</h2>
   <ul>
   <li>FP16 support for MatMulNbits, Clip, and LayerNormalization ops.</li>
   </ul>
   <h2>CUDA</h2>
   <ul>
   <li>Added support for cuDNN Flash Attention and Lean Attention in the MultiHeadAttention op.</li>
   </ul>
   <h2>TensorRT</h2>
   <ul>
   <li>TensorRT <a href="https://github.com/NVIDIA/TensorRT/releases/tag/v10.4.0">10.4</a> and <a href="https://github.com/NVIDIA/TensorRT/releases/tag/v10.5.0">10.5</a> support.</li>
   </ul>
   <h2>QNN</h2>
   <ul>
   <li>QNN HTP support for weight sharing across multiple ORT inference sessions. (See <a href="https://onnxruntime.ai/docs/execution-providers/QNN-ExecutionProvider.html#qnn-ep-weight-sharing">ORT QNN EP documentation</a> for more information.)</li>
   <li>Support for QNN SDK 2.27.</li>
   </ul>
   <h2>OpenVINO</h2>
   <ul>
   <li>Added support for OpenVINO versions up to 2024.4.1.</li>
   <li>Compile-time memory optimizations.</li>
   <li>Enhanced the ORT EPContext session option to optimize first-inference latency.</li>
   <li>Added remote tensors to ensure direct memory access for inference on the NPU.</li>
   </ul>
   <h2>DirectML</h2>
   <ul>
   <li><a href="https://www.nuget.org/packages/Microsoft.AI.DirectML/1.15.2">DirectML 1.15.2</a> support.</li>
   </ul>
   <!-- raw HTML omitted -->
   </blockquote>
   <p>... (truncated)</p>
   </details>
   <details>
   <summary>Commits</summary>
   <ul>
   <li><a href="https://github.com/microsoft/onnxruntime/commit/c4fb724e810bb496165b9015c77f402727392933"><code>c4fb724</code></a> ORT 1.20.0 release preparation: Cherry pick round 2 (<a href="https://redirect.github.com/microsoft/onnxruntime/issues/22643">#22643</a>)</li>
   <li><a href="https://github.com/microsoft/onnxruntime/commit/2d00351d7b4975a4d03f6a437772b6976726a252"><code>2d00351</code></a> ORT 1.20.0 Release: Cherry pick round 1 (<a href="https://redirect.github.com/microsoft/onnxruntime/issues/22526">#22526</a>)</li>
   <li><a href="https://github.com/microsoft/onnxruntime/commit/f9e623e4d1cf0998e5499053d96ab5f77ddff6d0"><code>f9e623e</code></a> Update CMake to 3.31.0rc1 (<a href="https://redirect.github.com/microsoft/onnxruntime/issues/22433">#22433</a>)</li>
   <li><a href="https://github.com/microsoft/onnxruntime/commit/691de83892e72f38bffd39c47777980a6c362b97"><code>691de83</code></a> Enable BrowserStack tests (<a href="https://redirect.github.com/microsoft/onnxruntime/issues/22457">#22457</a>)</li>
   <li><a href="https://github.com/microsoft/onnxruntime/commit/bf604428aa4b9faf84ce99f735ad6402a6d6f886"><code>bf60442</code></a> [ROCm] Update ROCm Nuget pipeline to ROCm 6.2 (<a href="https://redirect.github.com/microsoft/onnxruntime/issues/22461">#22461</a>)</li>
   <li><a href="https://github.com/microsoft/onnxruntime/commit/2b8fc5529bce4d815d2e71e9c8258c29db377c87"><code>2b8fc55</code></a> Enable RunMatMulTest all test cases support FP16 (<a href="https://redirect.github.com/microsoft/onnxruntime/issues/22440">#22440</a>)</li>
   <li><a href="https://github.com/microsoft/onnxruntime/commit/af00a20f8aeba17a68ea8a7c5f41df2c27d96116"><code>af00a20</code></a> Change ORT nightly python packages' name (<a href="https://redirect.github.com/microsoft/onnxruntime/issues/22450">#22450</a>)</li>
   <li><a href="https://github.com/microsoft/onnxruntime/commit/a5e85a950c2fab5729c46e7362a60765caa4b999"><code>a5e85a9</code></a> Fix training artifacts for 2GB+ models and <code>MSELoss</code> (<a href="https://redirect.github.com/microsoft/onnxruntime/issues/22414">#22414</a>)</li>
   <li><a href="https://github.com/microsoft/onnxruntime/commit/6407d81b35436cfa0d74dd39f05a182d750b572e"><code>6407d81</code></a> Disable BrowserStack testing stage (<a href="https://redirect.github.com/microsoft/onnxruntime/issues/22438">#22438</a>)</li>
   <li><a href="https://github.com/microsoft/onnxruntime/commit/4c47bca8fefa654a6c7b043293bb92161492f6e6"><code>4c47bca</code></a> [MIGraphX EP] Add additional operators (<a href="https://redirect.github.com/microsoft/onnxruntime/issues/22446">#22446</a>)</li>
   <li>Additional commits viewable in <a href="https://github.com/microsoft/onnxruntime/compare/v1.19.2...v1.20.0">compare view</a></li>
   </ul>
   </details>
   <br />
   
   
   Dependabot will resolve any conflicts with this PR as long as you don't 
alter it yourself. You can also trigger a rebase manually by commenting 
`@dependabot rebase`.
   
   [//]: # (dependabot-automerge-start)
   [//]: # (dependabot-automerge-end)
   
   ---
   
   <details>
   <summary>Dependabot commands and options</summary>
   <br />
   
   You can trigger Dependabot actions by commenting on this PR:
   - `@dependabot rebase` will rebase this PR
   - `@dependabot recreate` will recreate this PR, overwriting any edits that 
have been made to it
   - `@dependabot merge` will merge this PR after your CI passes on it
   - `@dependabot squash and merge` will squash and merge this PR after your CI 
passes on it
   - `@dependabot cancel merge` will cancel a previously requested merge and 
block automerging
   - `@dependabot reopen` will reopen this PR if it is closed
   - `@dependabot close` will close this PR and stop Dependabot recreating it. 
You can achieve the same result by closing it manually
   - `@dependabot show <dependency name> ignore conditions` will show all of 
the ignore conditions of the specified dependency
   - `@dependabot ignore this major version` will close this PR and stop 
Dependabot creating any more for this major version (unless you reopen the PR 
or upgrade to it yourself)
   - `@dependabot ignore this minor version` will close this PR and stop 
Dependabot creating any more for this minor version (unless you reopen the PR 
or upgrade to it yourself)
   - `@dependabot ignore this dependency` will close this PR and stop 
Dependabot creating any more for this dependency (unless you reopen the PR or 
upgrade to it yourself)
   
   
   </details>


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: dev-unsubscr...@opennlp.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org
