GitHub user funky-eyes added a comment to the discussion: Proposal for 
global_table Cleanup Optimization in Seata Transaction Coordinator

> Thank you for your response! 👍🏻
> 
> I wasn’t aware that ShardingSphere internally uses Seata—appreciate the 
> insight. I also agree that introducing an additional component to solve this 
> issue might not be the best approach.
> 
> As you suggested, using different `global_table` names per `TC` node would 
> effectively result in sharding. However, I’m a bit concerned that this setup 
> could introduce issues in case of a TC node failure. If a node goes down, 
> **recovery from another TC node might not be possible**, which could lead to 
> `data loss`. It also seems that handling data imbalance across nodes would be 
> difficult in this setup.
> 
> Since this is a simpler approach, the trade-offs are quite clear. What are 
> your thoughts on these potential downsides?

If the goal is to leverage the computational power of all nodes, I believe we 
could consider a solution similar to a dispatcher or ticket-issuer model.

A leader node could be elected using either a distributedLockTable or a Raft 
cluster (where store.mode would still be db or redis).
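
For the lock-table route, here is a minimal sketch of how a node might take or 
renew the leader lease, assuming a `distributed_lock`-style table with 
`lock_key`, `lock_value`, and `expire` columns and a pre-inserted leader row; 
the class, the `tc_cleanup_leader` key, and the lease length are illustrative 
only, not part of Seata today:

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import javax.sql.DataSource;

// Hypothetical sketch of lock-table-based leader election.
// Assumes a row with lock_key = 'tc_cleanup_leader' was inserted once at startup.
public class LeaderElector {

    private static final long LEASE_MS = 30_000L;

    private final DataSource dataSource;
    private final String nodeId;

    public LeaderElector(DataSource dataSource, String nodeId) {
        this.dataSource = dataSource;
        this.nodeId = nodeId;
    }

    /** Returns true if this node holds (or just took over) the leader lease. */
    public boolean tryAcquireLeadership() throws Exception {
        long now = System.currentTimeMillis();
        // Take the lease if it is already ours (renewal) or if it has expired.
        String sql = "UPDATE distributed_lock SET lock_value = ?, expire = ? "
                   + "WHERE lock_key = 'tc_cleanup_leader' "
                   + "AND (lock_value = ? OR expire < ?)";
        try (Connection conn = dataSource.getConnection();
             PreparedStatement ps = conn.prepareStatement(sql)) {
            ps.setString(1, nodeId);
            ps.setLong(2, now + LEASE_MS);
            ps.setString(3, nodeId);
            ps.setLong(4, now);
            return ps.executeUpdate() == 1;   // only one node can win per lease period
        }
    }
}
```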

If leader election happens via a distributedLockTable, we could establish a 
task table. The leader node would scan the list of transactions currently in 
'rollbacking' and 'committing' states and publish tasks to this table. Other 
nodes would then continuously poll the task table, competing to claim these 
tasks. Each task would contain a batch of XIDs (transaction IDs), and the node 
acquiring the task would be responsible for executing the 'end' actions (e.g., 
commit or rollback) for these transactions.

Naturally, we would also need to account for the possibility of a node 
crashing after claiming a task. To address this, each task should have a 
timeout period. If a task's timeout is reached and it still exists (i.e., 
hasn't been completed), it would become available for other nodes to claim.
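
As a rough illustration of this claim-with-timeout idea, here is a hedged 
sketch assuming a hypothetical `tc_task` table with `id`, `xid_batch`, `owner`, 
and `expire_at` columns (none of which exist in Seata today). The 
select-then-conditional-update acts as a compare-and-set, so only one node can 
win a given task, and an expired lease makes the task claimable again:

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import javax.sql.DataSource;

// Hypothetical sketch only: table name, columns, and timeout are illustrative.
public class TaskClaimer {

    private static final long TASK_TIMEOUT_MS = 60_000L;

    private final DataSource dataSource;
    private final String nodeId;

    public TaskClaimer(DataSource dataSource, String nodeId) {
        this.dataSource = dataSource;
        this.nodeId = nodeId;
    }

    /** Try to claim one unowned or timed-out task; returns its XID batch, or null. */
    public String tryClaimTask() throws Exception {
        long now = System.currentTimeMillis();
        try (Connection conn = dataSource.getConnection()) {
            // 1. Pick a candidate task: never claimed, or whose lease has expired.
            long taskId;
            String xidBatch;
            String select = "SELECT id, xid_batch FROM tc_task "
                          + "WHERE owner IS NULL OR expire_at < ? ORDER BY id LIMIT 1";
            try (PreparedStatement ps = conn.prepareStatement(select)) {
                ps.setLong(1, now);
                try (ResultSet rs = ps.executeQuery()) {
                    if (!rs.next()) {
                        return null;           // nothing to do right now
                    }
                    taskId = rs.getLong("id");
                    xidBatch = rs.getString("xid_batch");
                }
            }
            // 2. Compare-and-set style claim: only one node's UPDATE can match.
            String claim = "UPDATE tc_task SET owner = ?, expire_at = ? "
                         + "WHERE id = ? AND (owner IS NULL OR expire_at < ?)";
            try (PreparedStatement ps = conn.prepareStatement(claim)) {
                ps.setString(1, nodeId);
                ps.setLong(2, now + TASK_TIMEOUT_MS);
                ps.setLong(3, taskId);
                ps.setLong(4, now);
                return ps.executeUpdate() == 1 ? xidBatch : null; // null = lost the race
            }
        }
    }
}
```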

Alternatively, if using a Raft cluster, the leader could directly dispatch 
tasks to a specific follower node within the cluster for execution. Once the 
follower completes the task, it would send a response back to the leader.
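
For the Raft variant, the leader-to-follower exchange could be expressed with 
something like the interfaces below; these types are purely hypothetical and 
are not part of Seata's Raft implementation:

```java
import java.util.List;
import java.util.concurrent.CompletableFuture;

// Hypothetical shapes only, to illustrate the leader-dispatch idea above.
interface EndTaskDispatcher {
    /** Leader side: push a batch of XIDs to a chosen follower and await its report. */
    CompletableFuture<EndTaskResult> dispatch(String followerId, List<String> xids);
}

interface EndTaskWorker {
    /** Follower side: execute the commit/rollback 'end' actions for the batch. */
    EndTaskResult execute(List<String> xids);
}

/** Result sent back to the leader once the follower finishes the batch. */
record EndTaskResult(List<String> succeededXids, List<String> failedXids) {}
```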

It's worth noting, however, that these proposals were previously rejected by 
the community. The reasoning was that they involve a distributed environment 
and were deemed too complex. Consequently, the prevailing recommendation was to 
opt for a multi-threaded processing solution instead.

GitHub link: 
https://github.com/apache/incubator-seata/discussions/7362#discussioncomment-13268722

----
This is an automatically sent email for dev@seata.apache.org.
To unsubscribe, please send an email to: dev-unsubscr...@seata.apache.org

