dependabot[bot] opened a new pull request, #9233:
URL: https://github.com/apache/gravitino/pull/9233
Bumps [llama-index](https://github.com/run-llama/llama_index) from 0.12.41 to 0.14.8.

<details>
<summary>Release notes</summary>

*Sourced from [llama-index's releases](https://github.com/run-llama/llama_index/releases).*

> ## v0.14.8
>
> # Release Notes
>
> ## [2025-11-10]
>
> ### llama-index-core [0.14.8]
>
> - Fix ReActOutputParser getting stuck when "Answer:" contains "Action:" ([#20098](https://redirect.github.com/run-llama/llama_index/pull/20098))
> - Add buffer to image, audio, video and document blocks ([#20153](https://redirect.github.com/run-llama/llama_index/pull/20153))
> - fix(agent): Handle multi-block ChatMessage in ReActAgent ([#20196](https://redirect.github.com/run-llama/llama_index/pull/20196))
> - Fix/20209 ([#20214](https://redirect.github.com/run-llama/llama_index/pull/20214))
> - Preserve Exception in ToolOutput ([#20231](https://redirect.github.com/run-llama/llama_index/pull/20231))
> - fix weird pydantic warning ([#20235](https://redirect.github.com/run-llama/llama_index/pull/20235))
>
> ### llama-index-embeddings-nvidia [0.4.2]
>
> - docs: Edit pass and update example model ([#20198](https://redirect.github.com/run-llama/llama_index/pull/20198))
>
> ### llama-index-embeddings-ollama [0.8.4]
>
> - Added a test case (no code) to check the embedding through an actual connection to an Ollama server (after checking that the Ollama server exists) ([#20230](https://redirect.github.com/run-llama/llama_index/pull/20230))
>
> ### llama-index-llms-anthropic [0.10.2]
>
> - feat(llms/anthropic): Add support for RawMessageDeltaEvent in streaming ([#20206](https://redirect.github.com/run-llama/llama_index/pull/20206))
> - chore: remove unsupported models ([#20211](https://redirect.github.com/run-llama/llama_index/pull/20211))
>
> ### llama-index-llms-bedrock-converse [0.11.1]
>
> - feat: integrate bedrock converse with tool call block ([#20099](https://redirect.github.com/run-llama/llama_index/pull/20099))
> - feat: Update model name extraction to include 'jp' region prefix and … ([#20233](https://redirect.github.com/run-llama/llama_index/pull/20233))
>
> ### llama-index-llms-google-genai [0.7.3]
>
> - feat: google genai integration with tool block ([#20096](https://redirect.github.com/run-llama/llama_index/pull/20096))
> - fix: non-streaming gemini tool calling ([#20207](https://redirect.github.com/run-llama/llama_index/pull/20207))
> - Add token usage information in GoogleGenAI chat additional_kwargs ([#20219](https://redirect.github.com/run-llama/llama_index/pull/20219))
> - bug fix google genai stream_complete ([#20220](https://redirect.github.com/run-llama/llama_index/pull/20220))
>
> ### llama-index-llms-nvidia [0.4.4]
>
> - docs: Edit pass and code example updates ([#20200](https://redirect.github.com/run-llama/llama_index/pull/20200))
>
> ### llama-index-llms-openai [0.6.8]
>
> - FixV2: Correct DocumentBlock type for OpenAI from 'input_file' to 'file' ([#20203](https://redirect.github.com/run-llama/llama_index/pull/20203))
> - OpenAI v2 sdk support ([#20234](https://redirect.github.com/run-llama/llama_index/pull/20234))
>
> ### llama-index-llms-upstage [0.6.5]

... (truncated)

</details>

<details>
<summary>Changelog</summary>

*Sourced from [llama-index's changelog](https://github.com/run-llama/llama_index/blob/main/CHANGELOG.md).*

> ### llama-index-core [0.14.8]
>
> - Fix ReActOutputParser getting stuck when "Answer:" contains "Action:" ([#20098](https://redirect.github.com/run-llama/llama_index/pull/20098))
> - Add buffer to image, audio, video and document blocks ([#20153](https://redirect.github.com/run-llama/llama_index/pull/20153))
> - fix(agent): Handle multi-block ChatMessage in ReActAgent ([#20196](https://redirect.github.com/run-llama/llama_index/pull/20196))
> - Fix/20209 ([#20214](https://redirect.github.com/run-llama/llama_index/pull/20214))
> - Preserve Exception in ToolOutput ([#20231](https://redirect.github.com/run-llama/llama_index/pull/20231))
> - fix weird pydantic warning ([#20235](https://redirect.github.com/run-llama/llama_index/pull/20235))
>
> ### llama-index-embeddings-nvidia [0.4.2]
>
> - docs: Edit pass and update example model ([#20198](https://redirect.github.com/run-llama/llama_index/pull/20198))
>
> ### llama-index-embeddings-ollama [0.8.4]
>
> - Added a test case (no code) to check the embedding through an actual connection to an Ollama server (after checking that the Ollama server exists) ([#20230](https://redirect.github.com/run-llama/llama_index/pull/20230))
>
> ### llama-index-llms-anthropic [0.10.2]
>
> - feat(llms/anthropic): Add support for RawMessageDeltaEvent in streaming ([#20206](https://redirect.github.com/run-llama/llama_index/pull/20206))
> - chore: remove unsupported models ([#20211](https://redirect.github.com/run-llama/llama_index/pull/20211))
>
> ### llama-index-llms-bedrock-converse [0.11.1]
>
> - feat: integrate bedrock converse with tool call block ([#20099](https://redirect.github.com/run-llama/llama_index/pull/20099))
> - feat: Update model name extraction to include 'jp' region prefix and … ([#20233](https://redirect.github.com/run-llama/llama_index/pull/20233))
>
> ### llama-index-llms-google-genai [0.7.3]
>
> - feat: google genai integration with tool block ([#20096](https://redirect.github.com/run-llama/llama_index/pull/20096))
> - fix: non-streaming gemini tool calling ([#20207](https://redirect.github.com/run-llama/llama_index/pull/20207))
> - Add token usage information in GoogleGenAI chat additional_kwargs ([#20219](https://redirect.github.com/run-llama/llama_index/pull/20219))
> - bug fix google genai stream_complete ([#20220](https://redirect.github.com/run-llama/llama_index/pull/20220))
>
> ### llama-index-llms-nvidia [0.4.4]
>
> - docs: Edit pass and code example updates ([#20200](https://redirect.github.com/run-llama/llama_index/pull/20200))
>
> ### llama-index-llms-openai [0.6.8]
>
> - FixV2: Correct DocumentBlock type for OpenAI from 'input_file' to 'file' ([#20203](https://redirect.github.com/run-llama/llama_index/pull/20203))
> - OpenAI v2 sdk support ([#20234](https://redirect.github.com/run-llama/llama_index/pull/20234))
>
> ### llama-index-llms-upstage [0.6.5]
>
> - OpenAI v2 sdk support ([#20234](https://redirect.github.com/run-llama/llama_index/pull/20234))
>
> ### llama-index-packs-streamlit-chatbot [0.5.2]
>
> - OpenAI v2 sdk support ([#20234](https://redirect.github.com/run-llama/llama_index/pull/20234))

... (truncated)

</details>

<details>
<summary>Commits</summary>

- [`bc52c85`](https://github.com/run-llama/llama_index/commit/bc52c85394d9022f7785573c380147e75da87dbd) Release 0.14.8 ([#20236](https://redirect.github.com/run-llama/llama_index/issues/20236))
- [`1a960be`](https://github.com/run-llama/llama_index/commit/1a960beb7eb6fc95a1adeb4ee22e8aa2141deb77) Preserve Exception in ToolOutput ([#20231](https://redirect.github.com/run-llama/llama_index/issues/20231))
- [`60d102d`](https://github.com/run-llama/llama_index/commit/60d102d026901678ffc0bcb9418c9806a4c5ae93) fix weird pydantic warning ([#20235](https://redirect.github.com/run-llama/llama_index/issues/20235))
- [`575a14c`](https://github.com/run-llama/llama_index/commit/575a14c3ec86c1ba98ad94b5b4ca491c3c5d0d91) Added a test case (no code) to check the embedding through an actual connecti...
- [`8d4680f`](https://github.com/run-llama/llama_index/commit/8d4680f820bfcb0b464a981ff3935815b4a8809e) Update Scrapy dependency to 2.13.3 ([#20228](https://redirect.github.com/run-llama/llama_index/issues/20228))
- [`1c8566b`](https://github.com/run-llama/llama_index/commit/1c8566b730dcbe8d83cbd18092d42d7e5b22131b) Update llama-index-core dependency to 0.12.45 ([#20227](https://redirect.github.com/run-llama/llama_index/issues/20227))
- [`aaf94f8`](https://github.com/run-llama/llama_index/commit/aaf94f8d898f7a09b12df66f66ef71339fb64a85) fix: Ensure schema creation only occurs if it doesn't already exist ([#20225](https://redirect.github.com/run-llama/llama_index/issues/20225))
- [`67b198f`](https://github.com/run-llama/llama_index/commit/67b198ff9fb174c57a58622cf8118edc52ce3f8d) feat: Update model name extraction to include 'jp' region prefix and … ([#20233](https://redirect.github.com/run-llama/llama_index/issues/20233))
- [`74c7204`](https://github.com/run-llama/llama_index/commit/74c7204af4e8c6199672144b6300cb86c80b0fcc) OpenAI v2 sdk support ([#20234](https://redirect.github.com/run-llama/llama_index/issues/20234))
- [`fdc676a`](https://github.com/run-llama/llama_index/commit/fdc676a714eddb7e801191a05156fda3bb595fa9) feat(llms/anthropic): Add support for RawMessageDeltaEvent in streaming ([#20206](https://redirect.github.com/run-llama/llama_index/issues/20206))
- Additional commits viewable in the [compare view](https://github.com/run-llama/llama_index/compare/v0.12.41...v0.14.8)

</details>

[Dependabot compatibility score](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)

Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`.
[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)

---

<details>
<summary>Dependabot commands and options</summary>

You can trigger Dependabot actions by commenting on this PR:

- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
- `@dependabot show <dependency name> ignore conditions` will show all of the ignore conditions of the specified dependency
- `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)

</details>

--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]
