dependabot[bot] opened a new pull request, #9848:
URL: https://github.com/apache/gravitino/pull/9848

   Bumps [llama-index](https://github.com/run-llama/llama_index) from 0.13.0 to 0.14.13.
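   
   For reviewers who want to sanity-check the bump locally, a minimal sketch (assuming only a local Python environment with the upgraded package installed; the target version below is taken from this PR and is not a project-wide requirement):
   
   ```python
   # Hedged sketch: confirm the locally installed llama-index distribution matches
   # the version this PR bumps to. Uses only the Python standard library.
   from importlib.metadata import PackageNotFoundError, version
   
   TARGET = "0.14.13"  # version proposed by this Dependabot PR
   
   try:
       installed = version("llama-index")
   except PackageNotFoundError:
       raise SystemExit("llama-index is not installed in this environment")
   
   print(f"installed llama-index == {installed}")
   if installed != TARGET:
       raise SystemExit(f"expected {TARGET}; run: pip install llama-index=={TARGET}")
   ```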
   <details>
   <summary>Release notes</summary>
   <p><em>Sourced from <a href="https://github.com/run-llama/llama_index/releases">llama-index's releases</a>.</em></p>
   <blockquote>
   <h2>v0.14.13</h2>
   <h1>Release Notes</h1>
   <h2>[2026-01-21]</h2>
   <h3>llama-index-core [0.14.13]</h3>
   <ul>
   <li>feat: add early_stopping_method parameter to agent workflows (<a href="https://redirect.github.com/run-llama/llama_index/pull/20389">#20389</a>)</li>
   <li>feat: Add token-based code splitting support to CodeSplitter (<a href="https://redirect.github.com/run-llama/llama_index/pull/20438">#20438</a>)</li>
   <li>Add RayIngestionPipeline integration for distributed data ingestion (<a href="https://redirect.github.com/run-llama/llama_index/pull/20443">#20443</a>)</li>
   <li>Added the multi-modal version of the Condensed Conversation &amp; Context… (<a href="https://redirect.github.com/run-llama/llama_index/pull/20446">#20446</a>)</li>
   <li>Replace ChatMemoryBuffer with Memory (<a href="https://redirect.github.com/run-llama/llama_index/pull/20458">#20458</a>)</li>
   <li>fix(bug):Raise value error on when input is empty list in mean_agg instead of returning float (<a href="https://redirect.github.com/run-llama/llama_index/pull/20466">#20466</a>)</li>
   <li>fix: The classmethod of ReActChatFormatter should use cls instead of the class name (<a href="https://redirect.github.com/run-llama/llama_index/pull/20475">#20475</a>)</li>
   <li>feat: add configurable empty response message to synthesizers (<a href="https://redirect.github.com/run-llama/llama_index/pull/20503">#20503</a>)</li>
   </ul>
   <h3>llama-index-embeddings-bedrock [0.7.3]</h3>
   <ul>
   <li>Enable use of ARNs for Bedrock Embedding Models (<a href="https://redirect.github.com/run-llama/llama_index/pull/20435">#20435</a>)</li>
   </ul>
   <h3>llama-index-embeddings-ollama [0.8.6]</h3>
   <ul>
   <li>Improved Ollama batch embedding (<a href="https://redirect.github.com/run-llama/llama_index/pull/20447">#20447</a>)</li>
   </ul>
   <h3>llama-index-embeddings-voyageai [0.5.3]</h3>
   <ul>
   <li>Adding voyage-4 models (<a href="https://redirect.github.com/run-llama/llama_index/pull/20497">#20497</a>)</li>
   </ul>
   <h3>llama-index-ingestion-ray [0.1.0]</h3>
   <ul>
   <li>Add RayIngestionPipeline integration for distributed data ingestion (<a href="https://redirect.github.com/run-llama/llama_index/pull/20443">#20443</a>)</li>
   </ul>
   <h3>llama-index-llms-anthropic [0.10.6]</h3>
   <ul>
   <li>feat: enhance structured predict methods for anthropic (<a href="https://redirect.github.com/run-llama/llama_index/pull/20440">#20440</a>)</li>
   <li>fix: preserve input_tokens in Anthropic stream_chat responses (<a href="https://redirect.github.com/run-llama/llama_index/pull/20512">#20512</a>)</li>
   </ul>
   <h3>llama-index-llms-apertis [0.1.0]</h3>
   <ul>
   <li>Add Apertis LLM integration with example notebook (<a href="https://redirect.github.com/run-llama/llama_index/pull/20436">#20436</a>)</li>
   </ul>
   <h3>llama-index-llms-bedrock-converse [0.12.4]</h3>
   <ul>
   <li>chore(bedrock-converse): Remove extraneous thinking_delta kwarg from ChatMessage (<a href="https://redirect.github.com/run-llama/llama_index/pull/20455">#20455</a>)</li>
   </ul>
   <h3>llama-index-llms-gemini [0.6.2]</h3>
   <ul>
   <li>chore: deprecate llama-index-llms-gemini (<a href="https://redirect.github.com/run-llama/llama_index/pull/20511">#20511</a>)</li>
   </ul>
   <h3>llama-index-llms-openai [0.6.13]</h3>
   <!-- raw HTML omitted -->
   </blockquote>
   <p>... (truncated)</p>
   </details>
   <details>
   <summary>Changelog</summary>
   <p><em>Sourced from <a href="https://github.com/run-llama/llama_index/blob/main/CHANGELOG.md">llama-index's changelog</a>.</em></p>
   <blockquote>
   <h3>llama-index-core [0.14.13]</h3>
   <ul>
   <li>feat: add early_stopping_method parameter to agent workflows (<a href="https://redirect.github.com/run-llama/llama_index/pull/20389">#20389</a>)</li>
   <li>feat: Add token-based code splitting support to CodeSplitter (<a href="https://redirect.github.com/run-llama/llama_index/pull/20438">#20438</a>)</li>
   <li>Add RayIngestionPipeline integration for distributed data ingestion (<a href="https://redirect.github.com/run-llama/llama_index/pull/20443">#20443</a>)</li>
   <li>Added the multi-modal version of the Condensed Conversation &amp; Context… (<a href="https://redirect.github.com/run-llama/llama_index/pull/20446">#20446</a>)</li>
   <li>Replace ChatMemoryBuffer with Memory (<a href="https://redirect.github.com/run-llama/llama_index/pull/20458">#20458</a>)</li>
   <li>fix(bug):Raise value error on when input is empty list in mean_agg instead of returning float (<a href="https://redirect.github.com/run-llama/llama_index/pull/20466">#20466</a>)</li>
   <li>fix: The classmethod of ReActChatFormatter should use cls instead of the class name (<a href="https://redirect.github.com/run-llama/llama_index/pull/20475">#20475</a>)</li>
   <li>feat: add configurable empty response message to synthesizers (<a href="https://redirect.github.com/run-llama/llama_index/pull/20503">#20503</a>)</li>
   </ul>
   <h3>llama-index-embeddings-bedrock [0.7.3]</h3>
   <ul>
   <li>Enable use of ARNs for Bedrock Embedding Models (<a href="https://redirect.github.com/run-llama/llama_index/pull/20435">#20435</a>)</li>
   </ul>
   <h3>llama-index-embeddings-ollama [0.8.6]</h3>
   <ul>
   <li>Improved Ollama batch embedding (<a href="https://redirect.github.com/run-llama/llama_index/pull/20447">#20447</a>)</li>
   </ul>
   <h3>llama-index-embeddings-voyageai [0.5.3]</h3>
   <ul>
   <li>Adding voyage-4 models (<a href="https://redirect.github.com/run-llama/llama_index/pull/20497">#20497</a>)</li>
   </ul>
   <h3>llama-index-ingestion-ray [0.1.0]</h3>
   <ul>
   <li>Add RayIngestionPipeline integration for distributed data ingestion (<a href="https://redirect.github.com/run-llama/llama_index/pull/20443">#20443</a>)</li>
   </ul>
   <h3>llama-index-llms-anthropic [0.10.6]</h3>
   <ul>
   <li>feat: enhance structured predict methods for anthropic (<a href="https://redirect.github.com/run-llama/llama_index/pull/20440">#20440</a>)</li>
   <li>fix: preserve input_tokens in Anthropic stream_chat responses (<a href="https://redirect.github.com/run-llama/llama_index/pull/20512">#20512</a>)</li>
   </ul>
   <h3>llama-index-llms-apertis [0.1.0]</h3>
   <ul>
   <li>Add Apertis LLM integration with example notebook (<a href="https://redirect.github.com/run-llama/llama_index/pull/20436">#20436</a>)</li>
   </ul>
   <h3>llama-index-llms-bedrock-converse [0.12.4]</h3>
   <ul>
   <li>chore(bedrock-converse): Remove extraneous thinking_delta kwarg from ChatMessage (<a href="https://redirect.github.com/run-llama/llama_index/pull/20455">#20455</a>)</li>
   </ul>
   <h3>llama-index-llms-gemini [0.6.2]</h3>
   <ul>
   <li>chore: deprecate llama-index-llms-gemini (<a href="https://redirect.github.com/run-llama/llama_index/pull/20511">#20511</a>)</li>
   </ul>
   <h3>llama-index-llms-openai [0.6.13]</h3>
   <ul>
   <li>Sanitize OpenAI structured output JSON schema name for generic Pydantic models (<a href="https://redirect.github.com/run-llama/llama_index/pull/20452">#20452</a>)</li>
   <li>chore: vbump openai (<a href="https://redirect.github.com/run-llama/llama_index/pull/20482">#20482</a>)</li>
   </ul>
   <h3>llama-index-llms-openrouter [0.4.3]</h3>
   <!-- raw HTML omitted -->
   </blockquote>
   <p>... (truncated)</p>
   </details>
   <details>
   <summary>Commits</summary>
   <ul>
   <li><a href="https://github.com/run-llama/llama_index/commit/99d7e055611f0f07020f20e5f325070e056f0bbd"><code>99d7e05</code></a> Release 0.14.13 (<a href="https://redirect.github.com/run-llama/llama_index/issues/20516">#20516</a>)</li>
   <li><a href="https://github.com/run-llama/llama_index/commit/ca6f652335b31d48ab946d39383741ad15270207"><code>ca6f652</code></a> Revamp YouRetriever integration (<a href="https://redirect.github.com/run-llama/llama_index/issues/20493">#20493</a>)</li>
   <li><a href="https://github.com/run-llama/llama_index/commit/788fb32a239d394725d7c57a9dc76c3c4d0563a1"><code>788fb32</code></a> fix: preserve input_tokens in Anthropic stream_chat responses (<a href="https://redirect.github.com/run-llama/llama_index/issues/20512">#20512</a>)</li>
   <li><a href="https://github.com/run-llama/llama_index/commit/416fa20a09e05a47eabc22905dc680eed1c241c3"><code>416fa20</code></a> chore: deprecate llama-index-llms-gemini (<a href="https://redirect.github.com/run-llama/llama_index/issues/20511">#20511</a>)</li>
   <li><a href="https://github.com/run-llama/llama_index/commit/4d8775362a30777556e38202424b1664222e7b11"><code>4d87753</code></a> feat(vertexaivectorsearch): add hybrid search support (<a href="https://redirect.github.com/run-llama/llama_index/issues/20487">#20487</a>)</li>
   <li><a href="https://github.com/run-llama/llama_index/commit/265a5550cd85aa0caedbb46e0d01ec4cdf514571"><code>265a555</code></a> feat: add configurable empty response message to synthesizers (<a href="https://redirect.github.com/run-llama/llama_index/issues/20503">#20503</a>)</li>
   <li><a href="https://github.com/run-llama/llama_index/commit/2277956bb24994342ea69759df8009370457d848"><code>2277956</code></a> Adding voyage-4 models (<a href="https://redirect.github.com/run-llama/llama_index/issues/20497">#20497</a>)</li>
   <li><a href="https://github.com/run-llama/llama_index/commit/dc6e7f18a6d0570fc0fd109dba9ce157295d1b49"><code>dc6e7f1</code></a> docs: update NVIDIA notebooks (<a href="https://redirect.github.com/run-llama/llama_index/issues/20490">#20490</a>)</li>
   <li><a href="https://github.com/run-llama/llama_index/commit/e7182dc41e3eb4ee5c667977062373ac5ebaf603"><code>e7182dc</code></a> feat: Volcengine MySQL vector store integration (<a href="https://redirect.github.com/run-llama/llama_index/issues/20404">#20404</a>)</li>
   <li><a href="https://github.com/run-llama/llama_index/commit/30ad263a18825af6cfaa8c734cab734b857065a7"><code>30ad263</code></a> Patentsview reader api changes (<a href="https://redirect.github.com/run-llama/llama_index/issues/20481">#20481</a>)</li>
   <li>Additional commits viewable in <a href="https://github.com/run-llama/llama_index/compare/v0.13.0...v0.14.13">compare view</a></li>
   </ul>
   </details>
   <br />
   
   
   [![Dependabot compatibility score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=llama-index&package-manager=pip&previous-version=0.13.0&new-version=0.14.13)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)
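   
   The compatibility score only reflects CI results from other repositories that took the same update, so a local smoke test may still be worthwhile. A hedged sketch, assuming the core package is the main surface exercised here (imports are from llama-index-core, run offline, and make no LLM or embedding calls):
   
   ```python
   # Hedged smoke test after upgrading to llama-index 0.14.13: exercise a couple
   # of core entry points without contacting any external service. Exact import
   # paths should be confirmed against the pinned version.
   from llama_index.core import Document
   from llama_index.core.node_parser import SentenceSplitter
   
   doc = Document(text="Gravitino manages metadata across heterogeneous data sources.")
   nodes = SentenceSplitter(chunk_size=64, chunk_overlap=0).get_nodes_from_documents([doc])
   print(f"parsed {len(nodes)} node(s) from 1 document")
   ```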
   
   Dependabot will resolve any conflicts with this PR as long as you don't 
alter it yourself. You can also trigger a rebase manually by commenting 
`@dependabot rebase`.
   
   [//]: # (dependabot-automerge-start)
   [//]: # (dependabot-automerge-end)
   
   ---
   
   <details>
   <summary>Dependabot commands and options</summary>
   <br />
   
   You can trigger Dependabot actions by commenting on this PR:
   - `@dependabot rebase` will rebase this PR
   - `@dependabot recreate` will recreate this PR, overwriting any edits that 
have been made to it
   - `@dependabot merge` will merge this PR after your CI passes on it
   - `@dependabot squash and merge` will squash and merge this PR after your CI 
passes on it
   - `@dependabot cancel merge` will cancel a previously requested merge and 
block automerging
   - `@dependabot reopen` will reopen this PR if it is closed
   - `@dependabot close` will close this PR and stop Dependabot recreating it. 
You can achieve the same result by closing it manually
   - `@dependabot show <dependency name> ignore conditions` will show all of 
the ignore conditions of the specified dependency
   - `@dependabot ignore this major version` will close this PR and stop 
Dependabot creating any more for this major version (unless you reopen the PR 
or upgrade to it yourself)
   - `@dependabot ignore this minor version` will close this PR and stop 
Dependabot creating any more for this minor version (unless you reopen the PR 
or upgrade to it yourself)
   - `@dependabot ignore this dependency` will close this PR and stop 
Dependabot creating any more for this dependency (unless you reopen the PR or 
upgrade to it yourself)
   
   
   </details>


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]
