dependabot[bot] opened a new pull request, #38228: URL: https://github.com/apache/beam/pull/38228
Bumps [transformers](https://github.com/huggingface/transformers) from 4.55.4 to 5.0.0rc3. <details> <summary>Release notes</summary> <p><em>Sourced from <a href="https://github.com/huggingface/transformers/releases">transformers's releases</a>.</em></p> <blockquote> <h1>Release candidate v5.0.0rc3</h1> <h2>New models:</h2> <ul> <li>[GLM-4.7] GLM-Lite Supoort by <a href="https://github.com/zRzRzRzRzRzRzR"><code>@zRzRzRzRzRzRzR</code></a> in <a href="https://redirect.github.com/huggingface/transformers/pull/43031">huggingface/transformers#43031</a></li> <li>[GLM-Image] AR Model Support for GLM-Image by <a href="https://github.com/zRzRzRzRzRzRzR"><code>@zRzRzRzRzRzRzR</code></a> in <a href="https://redirect.github.com/huggingface/transformers/pull/43100">huggingface/transformers#43100</a></li> <li>Add LWDetr model by <a href="https://github.com/sbucaille"><code>@sbucaille</code></a> in <a href="https://redirect.github.com/huggingface/transformers/pull/40991">huggingface/transformers#40991</a></li> <li>Add LightOnOCR model implementation by <a href="https://github.com/baptiste-aubertin"><code>@baptiste-aubertin</code></a> in <a href="https://redirect.github.com/huggingface/transformers/pull/41621">huggingface/transformers#41621</a></li> </ul> <h2>What's Changed</h2> <p>We are getting closer and closer to the official release! 
This RC is focused on removing more of the deprecated stuff, fixing some minor issues, and making doc updates.</p> <ul> <li>Update Japanese README to match English version by <a href="https://github.com/lilin-1"><code>@lilin-1</code></a> in <a href="https://redirect.github.com/huggingface/transformers/pull/43069">huggingface/transformers#43069</a></li> <li>[docs] Deploying by <a href="https://github.com/stevhliu"><code>@stevhliu</code></a> in <a href="https://redirect.github.com/huggingface/transformers/pull/42263">huggingface/transformers#42263</a></li> <li>[docs] inference engines by <a href="https://github.com/stevhliu"><code>@stevhliu</code></a> in <a href="https://redirect.github.com/huggingface/transformers/pull/42932">huggingface/transformers#42932</a></li> <li>Fix typos: Remove duplicate duplicate words words by <a href="https://github.com/efeecllk"><code>@efeecllk</code></a> in <a href="https://redirect.github.com/huggingface/transformers/pull/43040">huggingface/transformers#43040</a></li> <li>[style] Rework ruff rules and update all files by <a href="https://github.com/Cyrilvallez"><code>@Cyrilvallez</code></a> in <a href="https://redirect.github.com/huggingface/transformers/pull/43144">huggingface/transformers#43144</a></li> <li>[CB] Minor fix in kwargs by <a href="https://github.com/remi-or"><code>@remi-or</code></a> in <a href="https://redirect.github.com/huggingface/transformers/pull/43147">huggingface/transformers#43147</a></li> <li>[Bug] qwen2_5_omni: cap generation length to be less than the max_position_embedding in DiT by <a href="https://github.com/sniper35"><code>@sniper35</code></a> in <a href="https://redirect.github.com/huggingface/transformers/pull/43068">huggingface/transformers#43068</a></li> <li>Fix some deprecated practices in torch 2.9 by <a href="https://github.com/Cyrilvallez"><code>@Cyrilvallez</code></a> in <a href="https://redirect.github.com/huggingface/transformers/pull/43167">huggingface/transformers#43167</a></li> <li>Fix Fuyu processor 
width dimension bug in <code>_get_num_multimodal_tokens</code> by <a href="https://github.com/Abhinavexists"><code>@Abhinavexists</code></a> in <a href="https://redirect.github.com/huggingface/transformers/pull/43137">huggingface/transformers#43137</a></li> <li>Inherit from PreTrainedTokenizerBase by <a href="https://github.com/juliendenize"><code>@juliendenize</code></a> in <a href="https://redirect.github.com/huggingface/transformers/pull/43143">huggingface/transformers#43143</a></li> <li>Generation config boolean defaults by <a href="https://github.com/zucchini-nlp"><code>@zucchini-nlp</code></a> in <a href="https://redirect.github.com/huggingface/transformers/pull/43000">huggingface/transformers#43000</a></li> <li>Fix failing <code>BartModelIntegrationTest</code> by <a href="https://github.com/Sai-Suraj-27"><code>@Sai-Suraj-27</code></a> in <a href="https://redirect.github.com/huggingface/transformers/pull/43160">huggingface/transformers#43160</a></li> <li>fix failure of llava/pixtral by <a href="https://github.com/sywangyi"><code>@sywangyi</code></a> in <a href="https://redirect.github.com/huggingface/transformers/pull/42985">huggingface/transformers#42985</a></li> <li>GemmaTokenizer: remove redundant whitespace pre-tokenizer by <a href="https://github.com/vaibhav-research"><code>@vaibhav-research</code></a> in <a href="https://redirect.github.com/huggingface/transformers/pull/43106">huggingface/transformers#43106</a></li> <li>Support <code>auto_doctring</code> in Processors by <a href="https://github.com/yonigozlan"><code>@yonigozlan</code></a> in <a href="https://redirect.github.com/huggingface/transformers/pull/42101">huggingface/transformers#42101</a></li> <li>Fix failing <code>BitModelIntegrationTest</code> by <a href="https://github.com/Sai-Suraj-27"><code>@Sai-Suraj-27</code></a> in <a href="https://redirect.github.com/huggingface/transformers/pull/43164">huggingface/transformers#43164</a></li> <li>[<code>Fp8</code>] Fix experts by <a 
href="https://github.com/vasqu"><code>@vasqu</code></a> in <a href="https://redirect.github.com/huggingface/transformers/pull/43154">huggingface/transformers#43154</a></li> <li>Docs: improve wording for documentation build instructions by <a href="https://github.com/Sailnagale"><code>@Sailnagale</code></a> in <a href="https://redirect.github.com/huggingface/transformers/pull/43007">huggingface/transformers#43007</a></li> <li>[makefile] Cleanup and improve the rules by <a href="https://github.com/Cyrilvallez"><code>@Cyrilvallez</code></a> in <a href="https://redirect.github.com/huggingface/transformers/pull/43171">huggingface/transformers#43171</a></li> <li>Some new models added stuff that was already removed by <a href="https://github.com/Cyrilvallez"><code>@Cyrilvallez</code></a> in <a href="https://redirect.github.com/huggingface/transformers/pull/43179">huggingface/transformers#43179</a></li> <li>Fixes and compilation warning in torchao docs by <a href="https://github.com/merveenoyan"><code>@merveenoyan</code></a> in <a href="https://redirect.github.com/huggingface/transformers/pull/42909">huggingface/transformers#42909</a></li> <li>[cache] Remove all deprecated classes by <a href="https://github.com/Cyrilvallez"><code>@Cyrilvallez</code></a> in <a href="https://redirect.github.com/huggingface/transformers/pull/43168">huggingface/transformers#43168</a></li> <li>Bump huggingface_hub minimal version by <a href="https://github.com/Wauplin"><code>@Wauplin</code></a> in <a href="https://redirect.github.com/huggingface/transformers/pull/43188">huggingface/transformers#43188</a></li> <li>Rework check_config_attributes.py by <a href="https://github.com/Cyrilvallez"><code>@Cyrilvallez</code></a> in <a href="https://redirect.github.com/huggingface/transformers/pull/43191">huggingface/transformers#43191</a></li> <li>Fix generation config validation by <a href="https://github.com/zucchini-nlp"><code>@zucchini-nlp</code></a> in <a 
href="https://redirect.github.com/huggingface/transformers/pull/43175">huggingface/transformers#43175</a></li> <li>[style] Use 'x | y' syntax for processors as well by <a href="https://github.com/Wauplin"><code>@Wauplin</code></a> in <a href="https://redirect.github.com/huggingface/transformers/pull/43189">huggingface/transformers#43189</a></li> <li>Remove deprecated objects by <a href="https://github.com/Cyrilvallez"><code>@Cyrilvallez</code></a> in <a href="https://redirect.github.com/huggingface/transformers/pull/43170">huggingface/transformers#43170</a></li> <li>fix chunked prefill implementation issue-43082 by <a href="https://github.com/marcndo"><code>@marcndo</code></a> in <a href="https://redirect.github.com/huggingface/transformers/pull/43132">huggingface/transformers#43132</a></li> <li>Reduce add_dates verbosity by <a href="https://github.com/yonigozlan"><code>@yonigozlan</code></a> in <a href="https://redirect.github.com/huggingface/transformers/pull/43184">huggingface/transformers#43184</a></li> <li>Add support for MiniMax-M2 by <a href="https://github.com/rogeryoungh"><code>@rogeryoungh</code></a> in <a href="https://redirect.github.com/huggingface/transformers/pull/42028">huggingface/transformers#42028</a></li> <li>Fix failing <code>salesforce-ctrl</code>, <code>xlm</code> & <code>gpt-neo</code> model generation tests by <a href="https://github.com/Sai-Suraj-27"><code>@Sai-Suraj-27</code></a> in <a href="https://redirect.github.com/huggingface/transformers/pull/43180">huggingface/transformers#43180</a></li> <li>Less verbose library helpers by <a href="https://github.com/Cyrilvallez"><code>@Cyrilvallez</code></a> in <a href="https://redirect.github.com/huggingface/transformers/pull/43197">huggingface/transformers#43197</a></li> <li>run all test files on CircleCI by <a href="https://github.com/ydshieh"><code>@ydshieh</code></a> in <a href="https://redirect.github.com/huggingface/transformers/pull/43146">huggingface/transformers#43146</a></li> <li>Clamp 
temperature to >=1.0 for Dia generation by <a href="https://github.com/Haseebasif7"><code>@Haseebasif7</code></a> in <a href="https://redirect.github.com/huggingface/transformers/pull/43029">huggingface/transformers#43029</a></li> <li>Fix spelling typos in comments and code by <a href="https://github.com/raimbekovm"><code>@raimbekovm</code></a> in <a href="https://redirect.github.com/huggingface/transformers/pull/43046">huggingface/transformers#43046</a></li> <li>[docs] llama.cpp by <a href="https://github.com/stevhliu"><code>@stevhliu</code></a> in <a href="https://redirect.github.com/huggingface/transformers/pull/43185">huggingface/transformers#43185</a></li> </ul> <!-- raw HTML omitted --> </blockquote> <p>... (truncated)</p> </details> <details> <summary>Commits</summary> <ul> <li><a href="https://github.com/huggingface/transformers/commit/cb5079fa72456d8ce27fc2041389beb5e1357f48"><code>cb5079f</code></a> v5.0.0rc3</li> <li><a href="https://github.com/huggingface/transformers/commit/d1808f2c36c02faad537f9737a76165e49b041f9"><code>d1808f2</code></a> [ci] Fixing some failing tests for important models (<a href="https://redirect.github.com/huggingface/transformers/issues/43231">#43231</a>)</li> <li><a href="https://github.com/huggingface/transformers/commit/3d276453a2b7c74f3259b1c136db3dd79c51756b"><code>3d27645</code></a> Add LightOnOCR model implementation (<a href="https://redirect.github.com/huggingface/transformers/issues/41621">#41621</a>)</li> <li><a href="https://github.com/huggingface/transformers/commit/77146cc9088ec8fc1dd476b40b1c6cdb0792afe3"><code>77146cc</code></a> fix crash in when running FSDP2+TP (<a href="https://redirect.github.com/huggingface/transformers/issues/43226">#43226</a>)</li> <li><a href="https://github.com/huggingface/transformers/commit/61317f5ac78511a1c02b08c0e73012d9542183ed"><code>61317f5</code></a> [CB] Ensure parallel decoding test passes using FA (<a 
href="https://redirect.github.com/huggingface/transformers/issues/43277">#43277</a>)</li> <li><a href="https://github.com/huggingface/transformers/commit/1efe1a633a47628134a2ba6376512af99cc3c9df"><code>1efe1a6</code></a> Fix failing <code>PegasusX</code>, <code>Mvp</code> & <code>LED</code> model integration tests (<a href="https://redirect.github.com/huggingface/transformers/issues/43245">#43245</a>)</li> <li><a href="https://github.com/huggingface/transformers/commit/e8ae373133be1eff2254c5dd71fcd628445cb4a4"><code>e8ae373</code></a> [consistency] Ensure models are added to the <code>_toctree.yml</code> (<a href="https://redirect.github.com/huggingface/transformers/issues/43264">#43264</a>)</li> <li><a href="https://github.com/huggingface/transformers/commit/c85be9899355c72771b3237f2434c7c84748427a"><code>c85be98</code></a> [docs] tensorrt-llm (<a href="https://redirect.github.com/huggingface/transformers/issues/43176">#43176</a>)</li> <li><a href="https://github.com/huggingface/transformers/commit/38022fd891209fa1e386b9afb971a9d2d35ec175"><code>38022fd</code></a> [style] Fix init isort and align makefile and CI (<a href="https://redirect.github.com/huggingface/transformers/issues/43260">#43260</a>)</li> <li><a href="https://github.com/huggingface/transformers/commit/e977446e632670f9972fc4ff1432b414c8b813cb"><code>e977446</code></a> Fix failing <code>Hiera</code>, <code>SwiftFormer</code> & <code>LED</code> Model integration tests (<a href="https://redirect.github.com/huggingface/transformers/issues/43225">#43225</a>)</li> <li>Additional commits viewable in <a href="https://github.com/huggingface/transformers/compare/v4.55.4...v5.0.0rc3">compare view</a></li> </ul> </details> <br /> [Dependabot compatibility score](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores) Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. 
You can also trigger a rebase manually by commenting `@dependabot rebase`. [//]: # (dependabot-automerge-start) [//]: # (dependabot-automerge-end) --- <details> <summary>Dependabot commands and options</summary> <br /> You can trigger Dependabot actions by commenting on this PR: - `@dependabot rebase` will rebase this PR - `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it - `@dependabot show <dependency name> ignore conditions` will show all of the ignore conditions of the specified dependency - `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself) - `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself) - `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself) You can disable automated security fix PRs for this repo from the [Security Alerts page](https://github.com/apache/beam/network/alerts). </details> -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: [email protected] For queries about this service, please contact Infrastructure at: [email protected]
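For reviewers of this bump: it crosses a major version boundary (4.55.4 → 5.0.0rc3), and `5.0.0rc3` is a *pre-release*, which under PEP 440 sorts after every 4.x release but before the final `5.0.0`. That ordering matters when writing constraints in `setup.py` or `requirements.txt` (pip skips pre-releases by default unless `--pre` is passed or the specifier itself names one, as Dependabot's pin here does). In a real project you would use `packaging.version.Version`, which implements the full PEP 440 rules; the `parse` helper below is a hypothetical, dependency-free sketch that only handles the `X.Y.ZrcN` shapes appearing in this PR:

```python
# Minimal sketch of PEP 440-style ordering for the versions in this PR.
# Hypothetical helper; use packaging.version.Version in real code.
import re

def parse(v):
    # Split "5.0.0rc3" into ((5, 0, 0), 3); "4.55.4" into ((4, 55, 4), inf).
    m = re.fullmatch(r"(\d+)\.(\d+)\.(\d+)(?:rc(\d+))?", v)
    release = tuple(int(x) for x in m.group(1, 2, 3))
    # A release candidate sorts *before* the final release of the same
    # version, so encode "no rc suffix" as infinity.
    rc = int(m.group(4)) if m.group(4) else float("inf")
    return (release, rc)

assert parse("5.0.0rc3") > parse("4.55.4")   # the bump is an upgrade
assert parse("5.0.0rc3") < parse("5.0.0")    # but still below the final 5.0.0
```

The second assertion is why a cap such as `transformers<5.0.0` does not, on its own, keep an rc out once pre-releases are enabled: `5.0.0rc3` satisfies that specifier numerically, and only pip's default exclusion of pre-releases prevents it from being selected.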
