dependabot[bot] opened a new pull request, #8551:
URL: https://github.com/apache/gravitino/pull/8551

   Bumps [llama-index](https://github.com/run-llama/llama_index) from 0.11.18 
to 0.12.41.
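
   For anyone applying this bump by hand instead of merging the PR, the change amounts to updating the version pin and reinstalling. A minimal sketch, assuming the dependency is pinned in a `requirements.txt`-style file (the path and file name below are assumptions for illustration, not taken from this repository):

   ```shell
   # Illustrate the pin change on a sample requirements file
   # (path is an assumption, not this repo's actual layout)
   printf 'llama-index==0.11.18\n' > /tmp/requirements.txt
   sed -i 's/^llama-index==0\.11\.18$/llama-index==0.12.41/' /tmp/requirements.txt
   cat /tmp/requirements.txt

   # Then reinstall from the updated file:
   #   pip install -r /tmp/requirements.txt
   ```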
   <details>
   <summary>Release notes</summary>
   <p><em>Sourced from <a href="https://github.com/run-llama/llama_index/releases">llama-index's releases</a>.</em></p>
   <blockquote>
   <h2>v0.12.41 (2025-06-07)</h2>
   <h1>Release Notes</h1>
   <h3><code>llama-index-core</code> [0.12.41]</h3>
   <ul>
   <li>feat: Add MutableMappingKVStore for easier caching (<a href="https://redirect.github.com/run-llama/llama_index/issues/18893">#18893</a>)</li>
   <li>fix: async functions in tool specs (<a href="https://redirect.github.com/run-llama/llama_index/issues/19000">#19000</a>)</li>
   <li>fix: properly apply file limit to SimpleDirectoryReader (<a href="https://redirect.github.com/run-llama/llama_index/issues/18983">#18983</a>)</li>
   <li>fix: overwriting of LLM callback manager from Settings (<a href="https://redirect.github.com/run-llama/llama_index/issues/18951">#18951</a>)</li>
   <li>fix: Adding warning in the docstring of JsonPickleSerializer for the user to deserialize only safe things, rename to PickleSerializer (<a href="https://redirect.github.com/run-llama/llama_index/issues/18943">#18943</a>)</li>
   <li>fix: ImageDocument path and url checking to ensure that the input is really an image (<a href="https://redirect.github.com/run-llama/llama_index/issues/18947">#18947</a>)</li>
   <li>chore: remove some unused utils from core (<a href="https://redirect.github.com/run-llama/llama_index/issues/18985">#18985</a>)</li>
   </ul>
   <h3><code>llama-index-embeddings-azure-openai</code> [0.3.8]</h3>
   <ul>
   <li>fix: Azure api-key and azure-endpoint resolution fixes (<a href="https://redirect.github.com/run-llama/llama_index/issues/18975">#18975</a>)</li>
   <li>fix: api_base vs azure_endpoint resolution fixes (<a href="https://redirect.github.com/run-llama/llama_index/issues/19002">#19002</a>)</li>
   </ul>
   <h3><code>llama-index-graph-stores-ApertureDB</code> [0.1.0]</h3>
   <ul>
   <li>feat: Aperturedb propertygraph (<a href="https://redirect.github.com/run-llama/llama_index/issues/18749">#18749</a>)</li>
   </ul>
   <h3><code>llama-index-indices-managed-llama-cloud</code> [0.7.4]</h3>
   <ul>
   <li>fix: resolve retriever llamacloud index (<a href="https://redirect.github.com/run-llama/llama_index/issues/18949">#18949</a>)</li>
   <li>chore: composite retrieval add ReRankConfig (<a href="https://redirect.github.com/run-llama/llama_index/issues/18973">#18973</a>)</li>
   </ul>
   <h3><code>llama-index-llms-azure-openai</code> [0.3.4]</h3>
   <ul>
   <li>fix: api_base vs azure_endpoint resolution fixes (<a href="https://redirect.github.com/run-llama/llama_index/issues/19002">#19002</a>)</li>
   </ul>
   <h3><code>llama-index-llms-bedrock-converse</code> [0.7.1]</h3>
   <ul>
   <li>fix: handle empty message content to prevent ValidationError (<a href="https://redirect.github.com/run-llama/llama_index/issues/18914">#18914</a>)</li>
   </ul>
   <h3><code>llama-index-llms-litellm</code> [0.5.1]</h3>
   <ul>
   <li>feat: Add DocumentBlock support to LiteLLM integration (<a href="https://redirect.github.com/run-llama/llama_index/issues/18955">#18955</a>)</li>
   </ul>
   <h3><code>llama-index-llms-ollama</code> [0.6.2]</h3>
   <ul>
   <li>feat: Add support for the new think feature in ollama (<a href="https://redirect.github.com/run-llama/llama_index/issues/18993">#18993</a>)</li>
   </ul>
   <h3><code>llama-index-llms-openai</code> [0.4.4]</h3>
   <ul>
   <li>feat: add OpenAI JSON Schema structured output support (<a href="https://redirect.github.com/run-llama/llama_index/issues/18897">#18897</a>)</li>
   <li>fix: skip tool description length check in openai response api (<a href="https://redirect.github.com/run-llama/llama_index/issues/18956">#18956</a>)</li>
   </ul>
   <h3><code>llama-index-packs-searchain</code> [0.1.0]</h3>
   <!-- raw HTML omitted -->
   </blockquote>
   <p>... (truncated)</p>
   </details>
   <details>
   <summary>Changelog</summary>
   <p><em>Sourced from <a href="https://github.com/run-llama/llama_index/blob/main/CHANGELOG.md">llama-index's changelog</a>.</em></p>
   <blockquote>
   <h3><code>llama-index-core</code> [0.12.41]</h3>
   <ul>
   <li>feat: Add MutableMappingKVStore for easier caching (<a href="https://redirect.github.com/run-llama/llama_index/issues/18893">#18893</a>)</li>
   <li>fix: async functions in tool specs (<a href="https://redirect.github.com/run-llama/llama_index/issues/19000">#19000</a>)</li>
   <li>fix: properly apply file limit to SimpleDirectoryReader (<a href="https://redirect.github.com/run-llama/llama_index/issues/18983">#18983</a>)</li>
   <li>fix: overwriting of LLM callback manager from Settings (<a href="https://redirect.github.com/run-llama/llama_index/issues/18951">#18951</a>)</li>
   <li>fix: Adding warning in the docstring of JsonPickleSerializer for the user to deserialize only safe things, rename to PickleSerializer (<a href="https://redirect.github.com/run-llama/llama_index/issues/18943">#18943</a>)</li>
   <li>fix: ImageDocument path and url checking to ensure that the input is really an image (<a href="https://redirect.github.com/run-llama/llama_index/issues/18947">#18947</a>)</li>
   <li>chore: remove some unused utils from core (<a href="https://redirect.github.com/run-llama/llama_index/issues/18985">#18985</a>)</li>
   </ul>
   <h3><code>llama-index-embeddings-azure-openai</code> [0.3.8]</h3>
   <ul>
   <li>fix: Azure api-key and azure-endpoint resolution fixes (<a href="https://redirect.github.com/run-llama/llama_index/issues/18975">#18975</a>)</li>
   <li>fix: api_base vs azure_endpoint resolution fixes (<a href="https://redirect.github.com/run-llama/llama_index/issues/19002">#19002</a>)</li>
   </ul>
   <h3><code>llama-index-graph-stores-ApertureDB</code> [0.1.0]</h3>
   <ul>
   <li>feat: Aperturedb propertygraph (<a href="https://redirect.github.com/run-llama/llama_index/issues/18749">#18749</a>)</li>
   </ul>
   <h3><code>llama-index-indices-managed-llama-cloud</code> [0.7.4]</h3>
   <ul>
   <li>fix: resolve retriever llamacloud index (<a href="https://redirect.github.com/run-llama/llama_index/issues/18949">#18949</a>)</li>
   <li>chore: composite retrieval add ReRankConfig (<a href="https://redirect.github.com/run-llama/llama_index/issues/18973">#18973</a>)</li>
   </ul>
   <h3><code>llama-index-llms-azure-openai</code> [0.3.4]</h3>
   <ul>
   <li>fix: api_base vs azure_endpoint resolution fixes (<a href="https://redirect.github.com/run-llama/llama_index/issues/19002">#19002</a>)</li>
   </ul>
   <h3><code>llama-index-llms-bedrock-converse</code> [0.7.1]</h3>
   <ul>
   <li>fix: handle empty message content to prevent ValidationError (<a href="https://redirect.github.com/run-llama/llama_index/issues/18914">#18914</a>)</li>
   </ul>
   <h3><code>llama-index-llms-litellm</code> [0.5.1]</h3>
   <ul>
   <li>feat: Add DocumentBlock support to LiteLLM integration (<a href="https://redirect.github.com/run-llama/llama_index/issues/18955">#18955</a>)</li>
   </ul>
   <h3><code>llama-index-llms-ollama</code> [0.6.2]</h3>
   <ul>
   <li>feat: Add support for the new think feature in ollama (<a href="https://redirect.github.com/run-llama/llama_index/issues/18993">#18993</a>)</li>
   </ul>
   <h3><code>llama-index-llms-openai</code> [0.4.4]</h3>
   <ul>
   <li>feat: add OpenAI JSON Schema structured output support (<a href="https://redirect.github.com/run-llama/llama_index/issues/18897">#18897</a>)</li>
   <li>fix: skip tool description length check in openai response api (<a href="https://redirect.github.com/run-llama/llama_index/issues/18956">#18956</a>)</li>
   </ul>
   <h3><code>llama-index-packs-searchain</code> [0.1.0]</h3>
   <ul>
   <li>feat: Add searchain package (<a href="https://redirect.github.com/run-llama/llama_index/issues/18929">#18929</a>)</li>
   </ul>
   <h3><code>llama-index-readers-docugami</code> [0.3.1]</h3>
   <!-- raw HTML omitted -->
   </blockquote>
   <p>... (truncated)</p>
   </details>
   <details>
   <summary>Commits</summary>
   <ul>
   <li><a href="https://github.com/run-llama/llama_index/commit/98739a603768e37a98c70275113d98e5d1f0979e"><code>98739a6</code></a> v0.12.41 (<a href="https://redirect.github.com/run-llama/llama_index/issues/19002">#19002</a>)</li>
   <li><a href="https://github.com/run-llama/llama_index/commit/aff1065c2d163d5674972babc8b2295d808df262"><code>aff1065</code></a> fix: async functions in tool specs (<a href="https://redirect.github.com/run-llama/llama_index/issues/19000">#19000</a>)</li>
   <li><a href="https://github.com/run-llama/llama_index/commit/fa00eb739872323d90579129651f1129c97404a5"><code>fa00eb7</code></a> actually format the args into a start event instance (<a href="https://redirect.github.com/run-llama/llama_index/issues/19001">#19001</a>)</li>
   <li><a href="https://github.com/run-llama/llama_index/commit/0fb1351498186645b35f26339be646783861f5d1"><code>0fb1351</code></a> Add support for the new think feature in ollama 0.9 (<a href="https://redirect.github.com/run-llama/llama_index/issues/18993">#18993</a>)</li>
   <li><a href="https://github.com/run-llama/llama_index/commit/c68be036516d3d3e301e2284831558e57fc68c5a"><code>c68be03</code></a> Update custom_multi_turn_memory.ipynb (<a href="https://redirect.github.com/run-llama/llama_index/issues/18994">#18994</a>)</li>
   <li><a href="https://github.com/run-llama/llama_index/commit/c1cd863c62c446ee55e0b9e04b37c08daa017efd"><code>c1cd863</code></a> ElevenLabs integration (<a href="https://redirect.github.com/run-llama/llama_index/issues/18967">#18967</a>)</li>
   <li><a href="https://github.com/run-llama/llama_index/commit/2b7dbf366ad5840750049fdc9f5db63567d3bdcd"><code>2b7dbf3</code></a> docs: update resources docs (<a href="https://redirect.github.com/run-llama/llama_index/issues/18991">#18991</a>)</li>
   <li><a href="https://github.com/run-llama/llama_index/commit/3093dab281860930b87607261407917f8766079c"><code>3093dab</code></a> Adding Docs for resources (<a href="https://redirect.github.com/run-llama/llama_index/issues/18980">#18980</a>)</li>
   <li><a href="https://github.com/run-llama/llama_index/commit/53614e2f7913c0e86b58add9470b3c900b6c60b2"><code>53614e2</code></a> Prevent SimpleDirectoryReader from excessive memory consumption (<a href="https://redirect.github.com/run-llama/llama_index/issues/18983">#18983</a>)</li>
   <li><a href="https://github.com/run-llama/llama_index/commit/5d8280cf5c2794e3c8756ffcfe1ca609c4e1083d"><code>5d8280c</code></a> llama-index-readers-gcs: Allow newer versions of gcsfs (<a href="https://redirect.github.com/run-llama/llama_index/issues/18987">#18987</a>)</li>
   <li>Additional commits viewable in <a href="https://github.com/run-llama/llama_index/compare/v0.11.18...v0.12.41">compare view</a></li>
   </ul>
   </details>
   <br />
   
   
   [![Dependabot compatibility 
score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=llama-index&package-manager=pip&previous-version=0.11.18&new-version=0.12.41)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)
   
   Dependabot will resolve any conflicts with this PR as long as you don't 
alter it yourself. You can also trigger a rebase manually by commenting 
`@dependabot rebase`.
   
   [//]: # (dependabot-automerge-start)
   [//]: # (dependabot-automerge-end)
   
   ---
   
   <details>
   <summary>Dependabot commands and options</summary>
   <br />
   
   You can trigger Dependabot actions by commenting on this PR:
   - `@dependabot rebase` will rebase this PR
   - `@dependabot recreate` will recreate this PR, overwriting any edits that 
have been made to it
   - `@dependabot merge` will merge this PR after your CI passes on it
   - `@dependabot squash and merge` will squash and merge this PR after your CI 
passes on it
   - `@dependabot cancel merge` will cancel a previously requested merge and 
block automerging
   - `@dependabot reopen` will reopen this PR if it is closed
   - `@dependabot close` will close this PR and stop Dependabot recreating it. 
You can achieve the same result by closing it manually
   - `@dependabot show <dependency name> ignore conditions` will show all of 
the ignore conditions of the specified dependency
   - `@dependabot ignore this major version` will close this PR and stop 
Dependabot creating any more for this major version (unless you reopen the PR 
or upgrade to it yourself)
   - `@dependabot ignore this minor version` will close this PR and stop 
Dependabot creating any more for this minor version (unless you reopen the PR 
or upgrade to it yourself)
   - `@dependabot ignore this dependency` will close this PR and stop 
Dependabot creating any more for this dependency (unless you reopen the PR or 
upgrade to it yourself)
   You can disable automated security fix PRs for this repo from the [Security 
Alerts page](https://github.com/apache/gravitino/network/alerts).
   
   </details>


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]
