dependabot[bot] opened a new pull request, #9060:
URL: https://github.com/apache/gravitino/pull/9060

   Bumps [llama-index](https://github.com/run-llama/llama_index) from 0.12.41 
to 0.14.7.
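
   This bump spans two minor versions (0.12.x to 0.14.x). As a minimal sketch (not part of this PR), the version ordering can be sanity-checked with plain tuple comparison, assuming simple "X.Y.Z" version strings with no pre-release segments:

   ```python
   # Minimal sanity check for the version bump; assumes plain "X.Y.Z"
   # version strings (no pre-release or local version segments).
   def parse_version(v: str) -> tuple[int, ...]:
       return tuple(int(part) for part in v.split("."))

   old, new = parse_version("0.12.41"), parse_version("0.14.7")
   assert new > old  # tuple comparison: (0, 14, 7) sorts after (0, 12, 41)
   ```

   For real requirement pins, a PEP 440-aware parser such as `packaging.version.Version` handles pre-releases correctly; the sketch above is only for illustration.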
   <details>
   <summary>Release notes</summary>
   <p><em>Sourced from <a href="https://github.com/run-llama/llama_index/releases">llama-index's releases</a>.</em></p>
   <blockquote>
   <h2>v0.14.7</h2>
   <h1>Release Notes</h1>
   <h2>[2025-10-30]</h2>
   <h3>llama-index-core [0.14.7]</h3>
   <ul>
   <li>Feat/serpex tool integration (<a href="https://redirect.github.com/run-llama/llama_index/pull/20141">#20141</a>)</li>
   <li>Fix outdated error message about setting LLM (<a href="https://redirect.github.com/run-llama/llama_index/pull/20157">#20157</a>)</li>
   <li>Fixing some recently failing tests (<a href="https://redirect.github.com/run-llama/llama_index/pull/20165">#20165</a>)</li>
   <li>Fix: update lock to latest workflow and fix issues (<a href="https://redirect.github.com/run-llama/llama_index/pull/20173">#20173</a>)</li>
   <li>fix: ensure full docstring is used in FunctionTool (<a href="https://redirect.github.com/run-llama/llama_index/pull/20175">#20175</a>)</li>
   <li>fix api docs build (<a href="https://redirect.github.com/run-llama/llama_index/pull/20180">#20180</a>)</li>
   </ul>
   <h3>llama-index-embeddings-voyageai [0.5.0]</h3>
   <ul>
   <li>Updating the VoyageAI integration (<a href="https://redirect.github.com/run-llama/llama_index/pull/20073">#20073</a>)</li>
   </ul>
   <h3>llama-index-llms-anthropic [0.10.0]</h3>
   <ul>
   <li>feat: integrate anthropic with tool call block (<a href="https://redirect.github.com/run-llama/llama_index/pull/20100">#20100</a>)</li>
   </ul>
   <h3>llama-index-llms-bedrock-converse [0.10.7]</h3>
   <ul>
   <li>feat: Add support for Bedrock Guardrails streamProcessingMode (<a href="https://redirect.github.com/run-llama/llama_index/pull/20150">#20150</a>)</li>
   <li>bedrock structured output optional force (<a href="https://redirect.github.com/run-llama/llama_index/pull/20158">#20158</a>)</li>
   </ul>
   <h3>llama-index-llms-fireworks [0.4.5]</h3>
   <ul>
   <li>Update FireworksAI models (<a href="https://redirect.github.com/run-llama/llama_index/pull/20169">#20169</a>)</li>
   </ul>
   <h3>llama-index-llms-mistralai [0.9.0]</h3>
   <ul>
   <li>feat: mistralai integration with tool call block (<a href="https://redirect.github.com/run-llama/llama_index/pull/20103">#20103</a>)</li>
   </ul>
   <h3>llama-index-llms-ollama [0.9.0]</h3>
   <ul>
   <li>feat: integrate ollama with tool call block (<a href="https://redirect.github.com/run-llama/llama_index/pull/20097">#20097</a>)</li>
   </ul>
   <h3>llama-index-llms-openai [0.6.6]</h3>
   <ul>
   <li>Allow setting temp of gpt-5-chat (<a href="https://redirect.github.com/run-llama/llama_index/pull/20156">#20156</a>)</li>
   </ul>
   <h3>llama-index-readers-confluence [0.5.0]</h3>
   <ul>
   <li>feat(confluence): make SVG processing optional to fix pycairo install… (<a href="https://redirect.github.com/run-llama/llama_index/pull/20115">#20115</a>)</li>
   </ul>
   <h3>llama-index-readers-github [0.9.0]</h3>
   <ul>
   <li>Add GitHub App authentication support (<a href="https://redirect.github.com/run-llama/llama_index/pull/20106">#20106</a>)</li>
   </ul>
   <!-- raw HTML omitted -->
   </blockquote>
   <p>... (truncated)</p>
   </details>
   <details>
   <summary>Changelog</summary>
   <p><em>Sourced from <a href="https://github.com/run-llama/llama_index/blob/main/CHANGELOG.md">llama-index's changelog</a>.</em></p>
   <blockquote>
   <h3>llama-index-core [0.14.7]</h3>
   <ul>
   <li>Feat/serpex tool integration (<a href="https://redirect.github.com/run-llama/llama_index/pull/20141">#20141</a>)</li>
   <li>Fix outdated error message about setting LLM (<a href="https://redirect.github.com/run-llama/llama_index/pull/20157">#20157</a>)</li>
   <li>Fixing some recently failing tests (<a href="https://redirect.github.com/run-llama/llama_index/pull/20165">#20165</a>)</li>
   <li>Fix: update lock to latest workflow and fix issues (<a href="https://redirect.github.com/run-llama/llama_index/pull/20173">#20173</a>)</li>
   <li>fix: ensure full docstring is used in FunctionTool (<a href="https://redirect.github.com/run-llama/llama_index/pull/20175">#20175</a>)</li>
   <li>fix api docs build (<a href="https://redirect.github.com/run-llama/llama_index/pull/20180">#20180</a>)</li>
   </ul>
   <h3>llama-index-embeddings-voyageai [0.5.0]</h3>
   <ul>
   <li>Updating the VoyageAI integration (<a href="https://redirect.github.com/run-llama/llama_index/pull/20073">#20073</a>)</li>
   </ul>
   <h3>llama-index-llms-anthropic [0.10.0]</h3>
   <ul>
   <li>feat: integrate anthropic with tool call block (<a href="https://redirect.github.com/run-llama/llama_index/pull/20100">#20100</a>)</li>
   </ul>
   <h3>llama-index-llms-bedrock-converse [0.10.7]</h3>
   <ul>
   <li>feat: Add support for Bedrock Guardrails streamProcessingMode (<a href="https://redirect.github.com/run-llama/llama_index/pull/20150">#20150</a>)</li>
   <li>bedrock structured output optional force (<a href="https://redirect.github.com/run-llama/llama_index/pull/20158">#20158</a>)</li>
   </ul>
   <h3>llama-index-llms-fireworks [0.4.5]</h3>
   <ul>
   <li>Update FireworksAI models (<a href="https://redirect.github.com/run-llama/llama_index/pull/20169">#20169</a>)</li>
   </ul>
   <h3>llama-index-llms-mistralai [0.9.0]</h3>
   <ul>
   <li>feat: mistralai integration with tool call block (<a href="https://redirect.github.com/run-llama/llama_index/pull/20103">#20103</a>)</li>
   </ul>
   <h3>llama-index-llms-ollama [0.9.0]</h3>
   <ul>
   <li>feat: integrate ollama with tool call block (<a href="https://redirect.github.com/run-llama/llama_index/pull/20097">#20097</a>)</li>
   </ul>
   <h3>llama-index-llms-openai [0.6.6]</h3>
   <ul>
   <li>Allow setting temp of gpt-5-chat (<a href="https://redirect.github.com/run-llama/llama_index/pull/20156">#20156</a>)</li>
   </ul>
   <h3>llama-index-readers-confluence [0.5.0]</h3>
   <ul>
   <li>feat(confluence): make SVG processing optional to fix pycairo install… (<a href="https://redirect.github.com/run-llama/llama_index/pull/20115">#20115</a>)</li>
   </ul>
   <h3>llama-index-readers-github [0.9.0]</h3>
   <ul>
   <li>Add GitHub App authentication support (<a href="https://redirect.github.com/run-llama/llama_index/pull/20106">#20106</a>)</li>
   </ul>
   <h3>llama-index-retrievers-bedrock [0.5.1]</h3>
   <ul>
   <li>Fixing some recently failing tests (<a href="https://redirect.github.com/run-llama/llama_index/pull/20165">#20165</a>)</li>
   </ul>
   <!-- raw HTML omitted -->
   </blockquote>
   <p>... (truncated)</p>
   </details>
   <details>
   <summary>Commits</summary>
   <ul>
   <li><a href="https://github.com/run-llama/llama_index/commit/74e5113c08c33ac0bcfc0d3ccd637f99f097f801"><code>74e5113</code></a> Remove unnecessary sanity check so that failed publish jobs can be retried (#...</li>
   <li><a href="https://github.com/run-llama/llama_index/commit/34b8244de42d08d242bd620b5acc2c722cb4d5ad"><code>34b8244</code></a> fix stray uv.lock (<a href="https://redirect.github.com/run-llama/llama_index/issues/20190">#20190</a>)</li>
   <li><a href="https://github.com/run-llama/llama_index/commit/b9e4ca1e6a1c46196f013a63a145031d30f4a908"><code>b9e4ca1</code></a> Release 0.14.7 (<a href="https://redirect.github.com/run-llama/llama_index/issues/20187">#20187</a>)</li>
   <li><a href="https://github.com/run-llama/llama_index/commit/96541903b5cbd78304c92e0e5d516a9d99e3f783"><code>9654190</code></a> make pre release pre commit not fail, just reformat (<a href="https://redirect.github.com/run-llama/llama_index/issues/20189">#20189</a>)</li>
   <li><a href="https://github.com/run-llama/llama_index/commit/cf5917eb568478b79925c44ec09805bc582b3181"><code>cf5917e</code></a> run all pre-commit when preparing (<a href="https://redirect.github.com/run-llama/llama_index/issues/20188">#20188</a>)</li>
   <li><a href="https://github.com/run-llama/llama_index/commit/13f1263f28e8dc86c631dc891b443e6d902195c1"><code>13f1263</code></a> add missing toml info (<a href="https://redirect.github.com/run-llama/llama_index/issues/20186">#20186</a>)</li>
   <li><a href="https://github.com/run-llama/llama_index/commit/6f2eb4d2c065212e18b05f9525f4f0d9fb83ebd2"><code>6f2eb4d</code></a> format updated docs (<a href="https://redirect.github.com/run-llama/llama_index/issues/20185">#20185</a>)</li>
   <li><a href="https://github.com/run-llama/llama_index/commit/3aa6842cca939ebbce572f6900a5bed3ace528ec"><code>3aa6842</code></a> fix: ensure full docstring is used in FunctionTool (<a href="https://redirect.github.com/run-llama/llama_index/issues/20175">#20175</a>)</li>
   <li><a href="https://github.com/run-llama/llama_index/commit/59313d7accb0f6dcbdd26ec612d087e156b0d498"><code>59313d7</code></a> Add Hyperscale and Composite Vector Indexes support for Couchbase vector-stor...</li>
   <li><a href="https://github.com/run-llama/llama_index/commit/4f7d86740aa5a9639e0ccc26c5aa16cd376f4b04"><code>4f7d867</code></a> chore(deps): bump starlette from 0.48.0 to 0.49.1 in /docs/api_reference in t...</li>
   <li>Additional commits viewable in <a href="https://github.com/run-llama/llama_index/compare/v0.12.41...v0.14.7">compare view</a></li>
   </ul>
   </details>
   <br />
   
   
   [![Dependabot compatibility 
score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=llama-index&package-manager=pip&previous-version=0.12.41&new-version=0.14.7)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)
   
   Dependabot will resolve any conflicts with this PR as long as you don't 
alter it yourself. You can also trigger a rebase manually by commenting 
`@dependabot rebase`.
   
   [//]: # (dependabot-automerge-start)
   [//]: # (dependabot-automerge-end)
   
   ---
   
   <details>
   <summary>Dependabot commands and options</summary>
   <br />
   
   You can trigger Dependabot actions by commenting on this PR:
   - `@dependabot rebase` will rebase this PR
   - `@dependabot recreate` will recreate this PR, overwriting any edits that 
have been made to it
   - `@dependabot merge` will merge this PR after your CI passes on it
   - `@dependabot squash and merge` will squash and merge this PR after your CI 
passes on it
   - `@dependabot cancel merge` will cancel a previously requested merge and 
block automerging
   - `@dependabot reopen` will reopen this PR if it is closed
   - `@dependabot close` will close this PR and stop Dependabot recreating it. 
You can achieve the same result by closing it manually
   - `@dependabot show <dependency name> ignore conditions` will show all of 
the ignore conditions of the specified dependency
   - `@dependabot ignore this major version` will close this PR and stop 
Dependabot creating any more for this major version (unless you reopen the PR 
or upgrade to it yourself)
   - `@dependabot ignore this minor version` will close this PR and stop 
Dependabot creating any more for this minor version (unless you reopen the PR 
or upgrade to it yourself)
   - `@dependabot ignore this dependency` will close this PR and stop 
Dependabot creating any more for this dependency (unless you reopen the PR or 
upgrade to it yourself)
   
   
   </details>


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]
