+1 bucket4j
I would choose Bucket4j, because it is ready to use and its functions meet
our needs.
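
For reference, a minimal sketch (assuming a recent io.github.bucket4j version;
the class name TokenBucketSketch is only illustrative) of how the proposed
server.ratelimit.* parameters could map onto a Bucket4j bucket:

    import io.github.bucket4j.Bandwidth;
    import io.github.bucket4j.Bucket;
    import io.github.bucket4j.Refill;

    import java.time.Duration;

    public class TokenBucketSketch {
        public static void main(String[] args) {
            // server.ratelimit.bucket.token.max.num=100     -> bucket capacity
            // server.ratelimit.bucket.token.second.num=10   -> refill 10 tokens per second
            // server.ratelimit.bucket.token.initial.num=10  -> initial tokens
            Bandwidth limit = Bandwidth.classic(100, Refill.greedy(10, Duration.ofSeconds(1)))
                    .withInitialTokens(10);
            Bucket bucket = Bucket.builder().addLimit(limit).build();

            // the canPass() check in the SIP could simply delegate to tryConsume(1)
            boolean allowed = bucket.tryConsume(1);
            System.out.println("global begin allowed: " + allowed);
        }
    }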


Best Regards,
Jiangke Wu(xingfudeshi)


熊靖浏 <jingliu_xi...@foxmail.com> wrote on Tue, Jul 16, 2024, at 12:20:

> There is still a decision we need to make: should we use Apache Guava or Bucket4j?
>
>
> > There are currently two options for the flow limit control component. One
> > is to use RateLimiter from Apache Guava, and the second is Bucket4j. Both
> > have active communities and are easy to use and integrate. Here are the
> > advantages and disadvantages:
> >
> > Apache Guava: Its advantage is that it is already integrated into Seata
> > and will not increase the packaging volume. The disadvantage is that it
> > does not directly support setting the maximum and initial number of
> > tokens in the bucket; the RateLimiter is relatively simple and requires
> > extension before it can be used.
> >
> > Bucket4j: The advantage is that it can be used directly. The disadvantage
> > is that it increases the packaging volume.
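
Inline note for comparison: Guava's RateLimiter only exposes the
permits-per-second rate, so it would need to be wrapped or extended to honour
the proposed maximum and initial token counts. A minimal sketch using the
standard com.google.common.util.concurrent.RateLimiter API (the class name
GuavaLimiterSketch is only illustrative):

    import com.google.common.util.concurrent.RateLimiter;

    public class GuavaLimiterSketch {
        public static void main(String[] args) {
            // Only the permits-per-second rate is configurable directly; there is
            // no built-in way to set the bucket's maximum or initial token count,
            // so honouring server.ratelimit.bucket.token.max.num and
            // server.ratelimit.bucket.token.initial.num would require an extension.
            RateLimiter limiter = RateLimiter.create(10.0); // 10 permits per second
            boolean allowed = limiter.tryAcquire();         // non-blocking check
            System.out.println("allowed: " + allowed);
        }
    }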
>
>
>
> Jingliu Xiong, github id: xjlgod
> ------------------ Original Email ------------------
> From: "dev" <xingfude...@apache.org>;
> Date: Tuesday, July 16, 2024, 10:41 AM
> To: "dev" <dev@seata.apache.org>;
>
> Subject: Re: Start an SIP (Seata Improvement Proposals): support flow limit
> control for a single server.
>
>
>
> This feature looks great.
> Looking forward to its implementation.
>
> Best Regards,
> Jiangke Wu(xingfudeshi)
>
>
> 熊靖浏 <jingliu_xi...@foxmail.com> wrote on Mon, Jul 15, 2024, at 22:12:
>
> > Dear community,
> >
> > I want to start an SIP about "support flow limit control for a single
> > server". The SIP of this mail in Chinese can be seen at
> > https://www.yuque.com/cairusigoudenanpeijiao/izo0ob/pl4abtngc92icaf0?singleDoc#
> >
> >
> >
> > Motivation:
> >
> > The TC server side currently lacks flow limit control. When the volume of
> > transactions is too high, excessive pressure may cause the server to
> > crash. Therefore, it is necessary to provide flow limit control on the
> > server side to protect the server's traffic safety.
> >
> >
> >
> > Proposed Change:
> >
> > Add single-node flow control on the server. It is currently not
> > recommended to implement distributed flow control. The first version
> > should keep the functions simple and only consider the Ops perspective.
> > In the future, it is planned to add resourceId-level limiting to impose
> > flow limit control on transaction initiators.
> >
> >
> >
> > New or Changed Public Interfaces:
> >
> > 1. Interface design
> >
> > Configure on the server side. The flow control takes effect on a single
> > server. Use the following parameter configuration:
> >
> > # whether to enable flow limit control
> > server.ratelimit.enable=true
> > # limit token number of bucket per second
> > server.ratelimit.bucket.token.second.num=10
> > # limit token max number of bucket
> > server.ratelimit.bucket.token.max.num=100
> > # limit token initial number of bucket
> > server.ratelimit.bucket.token.initial.num=10
> > 2. Flow limit control design
> >
> > Flow limit control is based on global transactions. When a global
> > transaction starts, the flow limit applies to the global transaction
> > start request, while other types of requests are not limited. This
> > prevents a transaction from being limited midway and wasting resources
> > on rollback. The rate limiting uses the token bucket algorithm. Its
> > advantage is that it can better absorb sudden traffic bursts instead of
> > enforcing a strictly constant rate.
> >
> >
> >
> > There are currently two options for the flow limit control component. One
> > is to use RateLimiter from Apache Guava, and the second is Bucket4j. Both
> > have active communities and are easy to use and integrate. Here are the
> > advantages and disadvantages:
> >
> > Apache Guava: Its advantage is that it is already integrated into Seata
> > and will not increase the packaging volume. The disadvantage is that it
> > does not directly support setting the maximum and initial number of
> > tokens in the bucket; the RateLimiter is relatively simple and requires
> > extension before it can be used.
> >
> > Bucket4j: The advantage is that it can be used directly. The disadvantage
> > is that it increases the packaging volume.
> >
> > 3. Flow limit control position design
> >
> > Check the request type in DefaultCoordinator and enter the rate limiting
> > logic there:
> >
> > @Override
> > public AbstractResultMessage onRequest(AbstractMessage request, RpcContext context) {
> >     if (!(request instanceof AbstractTransactionRequestToTC)) {
> >         throw new IllegalArgumentException();
> >     }
> >     AbstractTransactionRequestToTC transactionRequest = (AbstractTransactionRequestToTC) request;
> >     transactionRequest.setTCInboundHandler(this);
> >     if (ENABLE_SERVER_RATELIMIT && isGlobalBegin(request)) {
> >         // start flow limit control
> >         if (resultMessage != null && !rateLimiter.canPass()) {
> >             // record metrics
> >             eventBus.post(new GlobalTransactionEvent(-1, GlobalTransactionEvent.ROLE_TC, null,
> >                 context.getApplicationId(), context.getTransactionServiceGroup(),
> >                 null, null, null, true));
> >             handleRateLimited(transactionRequest);
> >             return resultMessage;
> >         }
> >     }
> >     return transactionRequest.handle(context);
> > }
> >
> > Implementation points
> >
> > Flow limiting component: select a third-party flow limit component to
> > implement flow limit control.
> >
> > Processing flow limit control: flow limiting is handled at the top level
> > in DefaultCoordinator, and the checks are performed in the onRequest
> > method.
> >
> > Transaction ends: all transaction ends (such as the client end) need to
> > recognize the flow limit result and throw the corresponding exception
> > correctly.
> >
> > Metrics: extend GlobalTransactionEvent so that metrics record an event
> > when rate limiting is triggered.
> >
> >
> >
> > Migration Plan and Compatibility:
> >
> > This function will not make other Seata functions unusable; it is a newly
> > added function. A possible compatibility issue is how the client handles
> > the flow-limiting exception that is thrown. At present, it can be thrown
> > directly without modification.
> >
> > It is planned to be improved from both the Ops and the business
> > perspectives:
> >
> > Improvement from the Ops perspective: add flow limit control based on
> > resourceId, limiting the rate at which the transaction initiator (TM)
> > starts global transactions.
> >
> > Improvement from the business perspective: add a transaction allowlist
> > mechanism, allowing important business transactions to pass the flow
> > limit.
> >
> >
> >
> > Rejected Alternatives:
> >
> > There are two rejected alternatives. The first is strongly related to the
> > business and will not be considered in the first version. The second is
> > too costly, because limiting the flow of all RPC requests would cause
> > many transactions to be rolled back and waste resources.
> >
> > Option One:
> >
> > Add a flow limit control context to the global transaction. Different
> > priority levels correspond to different strategies:
> >
> > ● Priority 0: does not participate in the flow limit.
> >
> > ● Priority 1 (the default priority level): participates in the flow
> > limit.
> >
> > Option Two:
> >
> > ● Take the RPC request as the flow limit control unit, so all RPC
> > requests are flow-limited. If the current RPC request is limited, the
> > corresponding failure response for that request is returned directly.
> >
> >
> >
> >
> > Thanks for your attention.
> >
> > Jingliu Xiong, github id: xjlgod
