leapcat opened a new issue #13657:
URL: https://github.com/apache/shardingsphere/issues/13657
## Feature Request
**For English only**, other languages will not be accepted.
Please pay attention to the issues you submit, because we may need more details.
If there is no further response and we cannot make a decision based on the current
information, we will **close it**.
Please answer these questions before submitting your issue. Thanks!
### Is your feature request related to a problem?
Yes.
1. We are migrating our current database sharding framework to ShardingSphere.
2. We need to configure more than one sharding strategy for a logic table so that
we stay backward compatible with our current usage scenarios.
3. I get an "Only allowed 0 or 1 sharding strategy configuration" error
if I configure two strategies for the same table.
### Describe the feature you would like.
For example, I have a logic table (order) which has 4 * 128 actual tables.
1. Sometimes we want to use the Complex strategy with specified sharding columns.
2. Sometimes we need the Hint strategy to batch load or update records from one
of the actual tables, especially in a batch job, where we split sub-tasks that
each execute SQL against a single table concurrently (a sketch of such a
sub-task follows below).

Is there a feature that supports more than one strategy per logic table, or any
workaround?
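For the Hint use case in item 2, here is a minimal sketch of what one such sub-task could look like against the 4.x `HintManager` API, assuming the table were allowed to use a hint strategy; the class name, the shard indexes, and the executed SQL are illustrative assumptions, not code from our project:

```java
import org.apache.shardingsphere.api.hint.HintManager;

// Hypothetical sub-task: force all SQL in this unit of work to a single
// actual table (e.g. db_2.order_57) by pushing the shard indexes via hint.
public final class OrderBatchSubTask implements Runnable {

    private final int dbIndex;     // which database the sub-task should hit
    private final int tableIndex;  // which order table the sub-task should hit

    public OrderBatchSubTask(final int dbIndex, final int tableIndex) {
        this.dbIndex = dbIndex;
        this.tableIndex = tableIndex;
    }

    @Override
    public void run() {
        // HintManager is thread-bound and AutoCloseable in ShardingSphere 4.x,
        // so each concurrently running sub-task gets its own hint scope.
        try (HintManager hintManager = HintManager.getInstance()) {
            hintManager.addDatabaseShardingValue("order", dbIndex);
            hintManager.addTableShardingValue("order", tableIndex);
            // ... execute the batch load / update SQL for this single table here
        }
    }
}
```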
ShardingSphere Version: 4.1.1
The following is the sharding rule configuration for the logic table (order):
```yaml
order:
  actualDataNodes: "db_${(0..3)}.order_$->{(0..3)}"
  databaseStrategy:
    complex:
      shardingColumns: id, client_id
      algorithmClassName: com.*.OrderShardingAlgorithm$DbSharding
    # hint:
    #   algorithmClassName: com.*.OrderShardingAlgorithm$DbSharding
  tableStrategy:
    complex:
      shardingColumns: id, client_id
      algorithmClassName: com.*.OrderShardingAlgorithm$TableSharding
    # hint:
    #   algorithmClassName: com.*.OrderShardingAlgorithm$DbSharding
```
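For context, if the commented-out `hint:` blocks above were enabled, the classes they reference would need to implement ShardingSphere's `HintShardingAlgorithm` rather than `ComplexKeysShardingAlgorithm`. A minimal, hypothetical sketch against the 4.x API (the class name and the suffix-matching logic are assumptions, not our actual algorithm):

```java
import java.util.Collection;
import java.util.LinkedList;

import org.apache.shardingsphere.api.sharding.hint.HintShardingAlgorithm;
import org.apache.shardingsphere.api.sharding.hint.HintShardingValue;

// Illustrative hint algorithm: route to whichever actual tables end with one of
// the indexes pushed through HintManager.addTableShardingValue(...).
public final class OrderHintTableSharding implements HintShardingAlgorithm<Integer> {

    @Override
    public Collection<String> doSharding(final Collection<String> availableTargetNames,
                                         final HintShardingValue<Integer> shardingValue) {
        Collection<String> result = new LinkedList<>();
        for (String target : availableTargetNames) {
            for (Integer index : shardingValue.getValues()) {
                // "_" keeps order_1 from matching order_11, order_21, etc.
                if (target.endsWith("_" + index)) {
                    result.add(target);
                }
            }
        }
        return result;
    }
}
```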