PsiACE opened a new pull request, #7082:
URL: https://github.com/apache/opendal/pull/7082
# Which issue does this PR close?
Closes #6591.
Ports #6618 with fixes.
# Rationale for this change
Adds support for passing a shared or custom semaphore to `ConcurrentLimitLayer`
without breaking existing APIs, so that multiple operators can draw from a
single concurrency budget.
# What changes are included in this PR?
- Introduce a `ConcurrentLimitSemaphore` abstraction (named after the layer) with a default implementation backed by `Arc<mea::Semaphore>`.
- Keep `ConcurrentLimitLayer::new(permits)` intact; add `with_semaphore(...)` for shared or custom semaphores (see the sketch after this list); align the HTTP setter naming.
- Add tests for shared and custom semaphore usage.
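
For context, a minimal sketch of how the two constructors are intended to compose. The exact `with_semaphore(...)` signature and the `mea::semaphore` module path are assumptions drawn from this description, not the merged API:

```rust
// Hypothetical usage sketch, not the merged API: `with_semaphore(...)` is
// assumed to be an associated constructor, and the `mea` semaphore is assumed
// to live at `mea::semaphore::Semaphore`.
use std::sync::Arc;

use opendal::layers::ConcurrentLimitLayer;
use opendal::services::Memory;
use opendal::{Operator, Result};

fn main() -> Result<()> {
    // Existing API, unchanged: this operator gets its own pool of 10 permits.
    let standalone = Operator::new(Memory::default())?
        .layer(ConcurrentLimitLayer::new(10))
        .finish();

    // New API (sketch): two operators draw from one shared budget of 10
    // permits, so their combined in-flight requests never exceed 10.
    let shared = Arc::new(mea::semaphore::Semaphore::new(10));
    let op_a = Operator::new(Memory::default())?
        .layer(ConcurrentLimitLayer::with_semaphore(shared.clone()))
        .finish();
    let op_b = Operator::new(Memory::default())?
        .layer(ConcurrentLimitLayer::with_semaphore(shared))
        .finish();

    let _ = (standalone, op_a, op_b);
    Ok(())
}
```

Keeping `new(permits)` as the default path means existing callers compile unchanged, while the shared-semaphore path stays opt-in.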
# Are there any user-facing changes?
Yes. `ConcurrentLimitLayer` can now accept a custom or shared semaphore via
`with_semaphore(...)`, while the existing `new(permits)` constructor keeps
working unchanged.
# AI Usage Statement
Implemented with OpenAI Codex (GPT-5) via Codex CLI.