dependabot[bot] opened a new pull request, #614:
URL: https://github.com/apache/opennlp/pull/614

   Bumps `onnxruntime.version` from 1.17.1 to 1.18.0.
   Updates `com.microsoft.onnxruntime:onnxruntime` from 1.17.1 to 1.18.0
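   In Maven terms, a bump like this typically amounts to changing a shared version property. The fragment below is a hypothetical illustration only: the property name comes from the PR title, but the surrounding `pom.xml` layout is an assumption, not taken from the OpenNLP repository.

```xml
<!-- Hypothetical pom.xml fragment; the actual OpenNLP build layout may differ. -->
<properties>
  <!-- Single property shared by the onnxruntime and onnxruntime_gpu artifacts -->
  <onnxruntime.version>1.18.0</onnxruntime.version>
</properties>
<dependencies>
  <dependency>
    <groupId>com.microsoft.onnxruntime</groupId>
    <artifactId>onnxruntime</artifactId>
    <version>${onnxruntime.version}</version>
  </dependency>
</dependencies>
```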
   <details>
   <summary>Release notes</summary>
   <p><em>Sourced from <a href="https://github.com/microsoft/onnxruntime/releases">com.microsoft.onnxruntime:onnxruntime's releases</a>.</em></p>
   <blockquote>
   <h2>ONNX Runtime v1.18.0</h2>
   <h2>Announcements</h2>
   <ul>
   <li><strong>Windows ARM32 support has been dropped at the source code 
level</strong>.</li>
   <li><strong>Python version &gt;=3.8 is now required for 
build.bat/build.sh</strong> (previously &gt;=3.7). <em>Note: If you have Python 
version &lt;3.8, you can bypass the tools and use CMake directly.</em></li>
   <li><strong>The <a href="https://mvnrepository.com/artifact/com.microsoft.onnxruntime/onnxruntime-mobile">onnxruntime-mobile</a> Android package and the onnxruntime-mobile-c/onnxruntime-mobile-objc iOS CocoaPods are being deprecated</strong>. Please use the <a href="https://mvnrepository.com/artifact/com.microsoft.onnxruntime/onnxruntime-android">onnxruntime-android</a> Android package and the onnxruntime-c/onnxruntime-objc CocoaPods, which support both ONNX and ORT format models and all operators and data types. <em>Note: If you require a smaller binary size, a custom build is required. See details on creating a custom Android or iOS package in <a href="https://onnxruntime.ai/docs/build/custom.html#custom-build-packages">Custom build | onnxruntime</a>.</em></li>
   </ul>
   <h2>Build System &amp; Packages</h2>
   <ul>
   <li>CoreML execution provider now depends on coremltools.</li>
   <li>Flatbuffers has been upgraded from 1.12.0 → 23.5.26.</li>
   <li>ONNX has been upgraded from 1.15 → 1.16.</li>
   <li>EMSDK has been upgraded from 3.1.51 → 3.1.57.</li>
   <li>Intel neural_speed library has been upgraded from v0.1.1 → v0.3 with 
several important bug fixes.</li>
   <li>There is a new onnxruntime_CUDA_MINIMAL CMake option for building the ONNX Runtime CUDA execution provider with no operators other than memcpy.</li>
   <li>Added Catalyst support for macOS builds.</li>
   <li>Added initial support for RISC-V and three new build options for it: 
<code>--rv64</code>, <code>--riscv_toolchain_root</code>, and 
<code>--riscv_qemu_path</code>.</li>
   <li>The TensorRT EP can now be built with protobuf-lite instead of the full protobuf.</li>
   <li>Some security-related compile/link flags have been moved from the default settings into a new build option: <code>--use_binskim_compliant_compile_flags</code>. <em>Note: All release binaries are built with this flag, but when building ONNX Runtime from source, it defaults to OFF.</em></li>
   <li>Windows ARM64 build now depends on PyTorch CPUINFO library.</li>
   <li>The Windows OneCore build now uses “reverse forwarding” API sets instead of “direct forwarding”, so onnxruntime.dll in the NuGet packages depends on kernel32.dll. <em>Note: Windows systems without kernel32.dll need reverse forwarders in place (see <a href="https://learn.microsoft.com/en-us/windows/win32/apiindex/api-set-loader-operation">API set loader operation - Win32 apps | Microsoft Learn</a> for more information).</em></li>
   </ul>
   <h2>Core</h2>
   <ul>
   <li>Added ONNX 1.16 support.</li>
   <li>Added additional optimizations related to Dynamo-exported models.</li>
   <li>Improved testing infrastructure for EPs developed as shared 
libraries.</li>
   <li>Exposed Reserve() in OrtAllocator to allow custom allocators to work 
when session.use_device_allocator_for_initializers is specified.</li>
   <li>Reduced lock contention caused by memory allocations.</li>
   <li>Improved session creation time (graph and graph transformer 
optimizations).</li>
   <li>Added new SessionOptions config entry to disable specific transformers 
and rules.</li>
   <li>[C# API] Exposed SessionOptions.DisablePerSessionThreads to allow 
sharing of threadpool between sessions.</li>
   <li>[Java API] Added CUDA 12 Java support.</li>
   </ul>
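   For the Java-facing items above, here is a minimal sketch of creating a session with the CUDA execution provider enabled via the `ai.onnxruntime` Java API. This is an illustration, not code from this PR: it assumes the `onnxruntime_gpu` artifact is on the classpath, `model.onnx` is a placeholder path, and device 0 is an arbitrary choice.

```java
import ai.onnxruntime.OrtEnvironment;
import ai.onnxruntime.OrtException;
import ai.onnxruntime.OrtSession;

public class CudaSessionSketch {
    public static void main(String[] args) throws OrtException {
        // The environment is a process-wide singleton in the Java API.
        OrtEnvironment env = OrtEnvironment.getEnvironment();
        try (OrtSession.SessionOptions opts = new OrtSession.SessionOptions()) {
            // Requires the onnxruntime_gpu artifact; device index 0 is an assumption.
            opts.addCUDA(0);
            try (OrtSession session = env.createSession("model.onnx", opts)) {
                System.out.println("Inputs: " + session.getInputNames());
            }
        }
    }
}
```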
   <h2>Performance</h2>
   <ul>
   <li>Improved 4-bit quantization support:
   <ul>
   <li>Added HQQ quantization support to improve accuracy.</li>
   <li>Implemented general GEMM kernel and improved GEMV kernel performance on 
GPU.</li>
   <li>Improved GEMM kernel quality and performance on x64.</li>
   <li>Implemented general GEMM kernel and improved GEMV performance on 
ARM64.</li>
   </ul>
   </li>
   <li>Improved MultiheadAttention performance on CPU.</li>
   </ul>
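   To make the 4-bit quantization items above concrete, the following toy Java sketch shows symmetric per-block 4-bit quantization: a per-block scale maps the largest magnitude to 7, and each value is rounded and clamped to the signed 4-bit range [-8, 7]. This illustrates the general technique only; it is not ONNX Runtime's kernel code.

```java
import java.util.Arrays;

public class FourBitQuantSketch {
    // Quantize one block; writes 4-bit codes into qOut and returns the scale.
    static double quantize(double[] block, byte[] qOut) {
        double maxAbs = 0.0;
        for (double x : block) maxAbs = Math.max(maxAbs, Math.abs(x));
        double scale = (maxAbs == 0.0) ? 1.0 : maxAbs / 7.0;
        for (int i = 0; i < block.length; i++) {
            long q = Math.round(block[i] / scale);
            qOut[i] = (byte) Math.max(-8, Math.min(7, q)); // clamp to signed 4-bit range
        }
        return scale;
    }

    // Reconstruct approximate floats from the 4-bit codes.
    static double[] dequantize(double scale, byte[] q) {
        double[] out = new double[q.length];
        for (int i = 0; i < q.length; i++) out[i] = scale * q[i];
        return out;
    }

    public static void main(String[] args) {
        double[] w = {0.12, -0.7, 0.33, 0.05};
        byte[] q = new byte[w.length];
        double scale = quantize(w, q);
        System.out.println("scale=" + scale + " q=" + Arrays.toString(q)
                + " approx=" + Arrays.toString(dequantize(scale, q)));
    }
}
```

   Each reconstructed value lands within half a quantization step of the original, which is why per-block scales (small blocks, each with its own scale) keep accuracy acceptable at 4 bits.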
   <h2>Execution Providers</h2>
   <ul>
   <li>
   <p>TensorRT</p>
   <ul>
   <li>Added support for TensorRT 10.</li>
   <li>Finalized support for DDS ops.</li>
   <li>Added Python support for a user-provided CUDA stream.</li>
   <li>Fixed various bugs.</li>
   </ul>
   </li>
   <li>
   <p>CUDA</p>
   <ul>
   <li>Added support for multiple CUDA graphs.</li>
   <li>Added a provider option to disable TF32.</li>
   <li>Added Python support for a user-provided CUDA stream.</li>
   </ul>
   </li>
   </ul>
   <!-- raw HTML omitted -->
   </blockquote>
   <p>... (truncated)</p>
   </details>
   <details>
   <summary>Commits</summary>
   <ul>
   <li><a href="https://github.com/microsoft/onnxruntime/commit/45737400a2f3015c11f005ed7603611eaed306a6"><code>4573740</code></a> [ORT 1.18.0 Release] Cherry pick 3rd/Final round (<a href="https://redirect.github.com/microsoft/onnxruntime/issues/20677">#20677</a>)</li>
   <li><a href="https://github.com/microsoft/onnxruntime/commit/ed349b9d9d6741fb984b12d45dee1979e4fb3bd1"><code>ed349b9</code></a> Mark end of version 17 and 18 C API (<a href="https://redirect.github.com/microsoft/onnxruntime/issues/20671">#20671</a>)</li>
   <li><a href="https://github.com/microsoft/onnxruntime/commit/d72b4767231c3bef689e3210ffc48b035ba46599"><code>d72b476</code></a> [ORT 1.18.0 Release] Cherry pick 2nd round (<a href="https://redirect.github.com/microsoft/onnxruntime/issues/20620">#20620</a>)</li>
   <li><a href="https://github.com/microsoft/onnxruntime/commit/65f3fbf1375cf81621ff2b555caa264e3452e519"><code>65f3fbf</code></a> [ORT 1.18.0 Release] Cherry pick 1st round (<a href="https://redirect.github.com/microsoft/onnxruntime/issues/20585">#20585</a>)</li>
   <li><a href="https://github.com/microsoft/onnxruntime/commit/204f1f59b9b351954a1106def4ce6ad9e840fa9f"><code>204f1f5</code></a> Run fuzz testing before the CG task cleans up the build directory (<a href="https://redirect.github.com/microsoft/onnxruntime/issues/20500">#20500</a>) (#...</li>
   <li><a href="https://github.com/microsoft/onnxruntime/commit/21b3cbc3af50aa4f77e1e477451d6b0cbc2b180d"><code>21b3cbc</code></a> [WIP][JS/WebGPU] Inputs Key and Value could be 4-dims. (<a href="https://redirect.github.com/microsoft/onnxruntime/issues/20470">#20470</a>)</li>
   <li><a href="https://github.com/microsoft/onnxruntime/commit/2c19db0af1f73d6276e3ca30b9cf15dcaf0be9e0"><code>2c19db0</code></a> Put x64 specific benchmark code into ifdefs. (<a href="https://redirect.github.com/microsoft/onnxruntime/issues/20456">#20456</a>)</li>
   <li><a href="https://github.com/microsoft/onnxruntime/commit/227c4419fcb3e3bbeb3fbc3c4d52922e9cfa2be7"><code>227c441</code></a> add bf16 support for few ops (<a href="https://redirect.github.com/microsoft/onnxruntime/issues/20385">#20385</a>)</li>
   <li><a href="https://github.com/microsoft/onnxruntime/commit/464f199b95bd062558d46c5fd592f87f5eb28c99"><code>464f199</code></a> Extend mac package jobs time out limit (<a href="https://redirect.github.com/microsoft/onnxruntime/issues/20459">#20459</a>)</li>
   <li><a href="https://github.com/microsoft/onnxruntime/commit/edffa2a180d4219e547d3c22af292ad071f4141f"><code>edffa2a</code></a> Optimize MlasComputeSoftmax with prefetch (<a href="https://redirect.github.com/microsoft/onnxruntime/issues/20393">#20393</a>)</li>
   <li>Additional commits viewable in <a href="https://github.com/microsoft/onnxruntime/compare/v1.17.1...v1.18.0">compare view</a></li>
   </ul>
   </details>
   <br />
   
   Updates `com.microsoft.onnxruntime:onnxruntime_gpu` from 1.17.1 to 1.18.0
   
   
   Dependabot will resolve any conflicts with this PR as long as you don't 
alter it yourself. You can also trigger a rebase manually by commenting 
`@dependabot rebase`.
   
   [//]: # (dependabot-automerge-start)
   [//]: # (dependabot-automerge-end)
   
   ---
   
   <details>
   <summary>Dependabot commands and options</summary>
   <br />
   
   You can trigger Dependabot actions by commenting on this PR:
   - `@dependabot rebase` will rebase this PR
   - `@dependabot recreate` will recreate this PR, overwriting any edits that 
have been made to it
   - `@dependabot merge` will merge this PR after your CI passes on it
   - `@dependabot squash and merge` will squash and merge this PR after your CI 
passes on it
   - `@dependabot cancel merge` will cancel a previously requested merge and 
block automerging
   - `@dependabot reopen` will reopen this PR if it is closed
   - `@dependabot close` will close this PR and stop Dependabot recreating it. 
You can achieve the same result by closing it manually
   - `@dependabot show <dependency name> ignore conditions` will show all of 
the ignore conditions of the specified dependency
   - `@dependabot ignore this major version` will close this PR and stop 
Dependabot creating any more for this major version (unless you reopen the PR 
or upgrade to it yourself)
   - `@dependabot ignore this minor version` will close this PR and stop 
Dependabot creating any more for this minor version (unless you reopen the PR 
or upgrade to it yourself)
   - `@dependabot ignore this dependency` will close this PR and stop 
Dependabot creating any more for this dependency (unless you reopen the PR or 
upgrade to it yourself)
   
   
   </details>


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: dev-unsubscr...@opennlp.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org
