This feature looks great. Looking forward to its implementation.

Best Regards,
Jiangke Wu (xingfudeshi)
熊靖浏 <jingliu_xi...@foxmail.com> wrote on Monday, July 15, 2024 at 22:12:

Dear community,

I want to start a SIP about "support flow limit control for a single server". The Chinese version of this SIP can be found at https://www.yuque.com/cairusigoudenanpeijiao/izo0ob/pl4abtngc92icaf0?singleDoc#.

Motivation:

The TC server side currently lacks flow limit control. When the transaction load is excessive, the resulting pressure may cause the server to crash. Therefore, it is necessary to provide flow limit control on the server side to protect it against traffic overload.

Proposed Change:

Add single-node flow control on the server. Distributed flow control is not recommended for now. The first version should keep the functionality simple and consider it only from an Ops perspective. In the future, we plan to add resourceId-level limiting to impose flow limit control on transaction initiators.

New or Changed Public Interfaces:

1. Interface design

The feature is configured on the server side, and the flow control takes effect on a single server. The following parameters are used:

    # whether to enable flow limit control
    server.ratelimit.enable=true
    # number of tokens added to the bucket per second
    server.ratelimit.bucket.token.second.num=10
    # maximum number of tokens in the bucket
    server.ratelimit.bucket.token.max.num=100
    # initial number of tokens in the bucket
    server.ratelimit.bucket.token.initial.num=10

2. Flow limit control design

Flow limit control is based on global transactions. The limiter applies only to global transaction begin requests; other request types are not limited. This prevents a transaction from being limited midway, which would waste resources on a rollback. The limiting algorithm is the token bucket algorithm.
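As a rough illustration only (not the proposed Seata implementation, and all names below are hypothetical), the semantics implied by these four parameters can be sketched as a simple token bucket:

```java
// Minimal token-bucket sketch matching the semantics of the proposed
// parameters: refill rate per second, maximum capacity, initial tokens.
// Illustration only; not actual Seata code.
public class TokenBucketSketch {
    private final double refillPerSecond; // server.ratelimit.bucket.token.second.num
    private final double maxTokens;       // server.ratelimit.bucket.token.max.num
    private double tokens;                // starts at server.ratelimit.bucket.token.initial.num
    private long lastRefillNanos;

    public TokenBucketSketch(double refillPerSecond, double maxTokens, double initialTokens) {
        this.refillPerSecond = refillPerSecond;
        this.maxTokens = maxTokens;
        this.tokens = initialTokens;
        this.lastRefillNanos = System.nanoTime();
    }

    // Try to take one token; returns false when the request should be limited.
    public synchronized boolean canPass() {
        long now = System.nanoTime();
        double elapsedSeconds = (now - lastRefillNanos) / 1_000_000_000.0;
        // Refill lazily based on elapsed time, capped at the bucket maximum.
        tokens = Math.min(maxTokens, tokens + elapsedSeconds * refillPerSecond);
        lastRefillNanos = now;
        if (tokens >= 1.0) {
            tokens -= 1.0;
            return true;
        }
        return false;
    }
}
```

Because the bucket can hold up to maxTokens, a burst larger than the steady refill rate can still pass, which is the burst-tolerance property mentioned below.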
The advantage of this algorithm is that it tolerates sudden traffic bursts instead of enforcing a constant rate.

There are currently two candidate flow limit components: the RateLimiter in Guava, and Bucket4j. Both have active communities and are easy to use and integrate. Their advantages and disadvantages are:

Guava: Its advantage is that it is already integrated into Seata and will not increase the package size. The disadvantage is that its RateLimiter does not directly support setting the maximum and initial number of tokens in the bucket; it is relatively simple and needs to be extended before it can be used here.

Bucket4j: The advantage is that it can be used directly. The disadvantage is that it increases the package size.

3. Flow limit control position design

Check the request type in DefaultCoordinator and enter the limiting logic there:

    @Override
    public AbstractResultMessage onRequest(AbstractMessage request, RpcContext context) {
        if (!(request instanceof AbstractTransactionRequestToTC)) {
            throw new IllegalArgumentException();
        }
        AbstractTransactionRequestToTC transactionRequest = (AbstractTransactionRequestToTC) request;
        transactionRequest.setTCInboundHandler(this);
        if (ENABLE_SERVER_RATELIMIT && isGlobalBegin(request) && !rateLimiter.canPass()) {
            // flow limit triggered: record metrics
            eventBus.post(new GlobalTransactionEvent(-1, GlobalTransactionEvent.ROLE_TC,
                    null, context.getApplicationId(), context.getTransactionServiceGroup(),
                    null, null, null, true));
            // build and return the rate-limited failure response
            return handleRateLimited(transactionRequest);
        }
        return transactionRequest.handle(context);
    }

Implementation points:

Flow limiting component: select a third-party flow limit component to implement the control.
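Since the concrete component (a Guava-based extension or Bucket4j) is still an open choice, one option is to hide it behind a small internal interface so DefaultCoordinator only depends on a canPass() check. The sketch below is hypothetical; the interface name and the trivial fixed-budget implementation are illustrative, not actual Seata or library APIs:

```java
import java.util.concurrent.atomic.AtomicLong;

// Hypothetical abstraction so the concrete limiter (Guava-based or
// Bucket4j-based) can be swapped without touching DefaultCoordinator.
public interface ServerRateLimiter {
    boolean canPass();
}

// Trivial stand-in implementation used only for illustration:
// a fixed budget of permits with no refill.
class FixedBudgetLimiter implements ServerRateLimiter {
    private final AtomicLong permits;

    FixedBudgetLimiter(long budget) {
        this.permits = new AtomicLong(budget);
    }

    @Override
    public boolean canPass() {
        // Atomically decrement while positive; reject once exhausted.
        long before = permits.getAndUpdate(p -> p > 0 ? p - 1 : p);
        return before > 0;
    }
}
```

A real implementation would replace FixedBudgetLimiter with one backed by the chosen component, configured from the server.ratelimit.* parameters.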
Processing of flow limiting: the flow limit is applied at the top level in DefaultCoordinator; the limiting check is performed in the onRequest method.

Transaction ends: all transaction endpoints (such as the client side) need to recognize the flow limit response and throw the corresponding exception correctly.

Metrics: GlobalTransactionEvent is extended so that metrics record an event when flow limiting is triggered.

Migration Plan and Compatibility:

This is a newly added function and will not make any other Seata function unusable. A possible compatibility issue is how the client handles the flow-limiting exception thrown by the server; for now, it can simply be thrown as-is without modification.

It is planned to be improved from both the Ops and the business perspectives:

Improvement from Ops: add flow limit control based on resourceId, limiting the transaction initiator (TM) from starting global transactions.

Improvement from business: add a transaction allowlist mechanism, allowing important business transactions to bypass the flow limit.

Rejected Alternatives:

There are two rejected alternatives. The first is strongly tied to business concerns and will not be considered in the first version. The second is too costly: limiting the flow of all RPC requests would cause many transactions to be rolled back and waste resources.

Option One:

Add a flow limit control context to the global transaction. Different priority levels correspond to different strategies:

● Priority 0: does not participate in the flow limit.

● Priority 1 (the default): participates in the flow limit.

Option Two:

● Take the RPC request as the flow limit control unit, so that all RPC requests are flow-limited. If the current RPC request is limited, the corresponding failure response is returned directly.

Thanks for your attention.
Jingliu Xiong, GitHub id: xjlgod