774886347 opened a new issue, #32957:
URL: https://github.com/apache/shardingsphere/issues/32957
I have **50** project modules, **each of them** using sharding-jdbc, and I have to configure all tables for the dev, sit, uat, and prod environments:
```
spring:
  datasource:
    url: jdbc:shardingsphere:classpath:config/sharding-uat.yaml
    driver-class-name: org.apache.shardingsphere.driver.ShardingSphereDriver
    type: com.alibaba.druid.pool.DruidDataSource
    druid:
      filters: stat
      max-active: 50
      initial-size: 5
      max-wait: 60000
      min-idle: 5
      time-between-eviction-runs-millis: 60000
      min-evictable-idle-time-millis: 300000
      max-evictable-idle-time-millis: 1800000
      filter:
        config:
          enabled: false
      web-stat-filter:
        enabled: false
      stat-view-servlet:
        enabled: false
      keep-alive: true
      validation-query: select 1
      remove-abandoned: true
      test-while-idle: true
      test-on-borrow: false
      test-on-return: false
      pool-prepared-statements: true
      max-open-prepared-statements: 50
      remove-abandoned-timeout: 300
      log-abandoned: true
```
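Every module consumes the YAML through the `ShardingSphereDriver` JDBC URL, so the rule file is resolved from each module's own classpath at startup. A minimal sketch of that mechanism outside Spring, using only plain JDBC (the class name and query are mine, just for illustration):

```
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class ShardingUrlSmokeTest {
    public static void main(String[] args) throws Exception {
        // ShardingSphereDriver registers itself via the JDBC SPI; the URL
        // tells it which YAML file on the classpath holds the rule config.
        String url = "jdbc:shardingsphere:classpath:config/sharding-uat.yaml";
        try (Connection conn = DriverManager.getConnection(url);
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery("SELECT 1")) {
            while (rs.next()) {
                System.out.println(rs.getInt(1));
            }
        }
    }
}
```

This is why the duplication happens: each module's classpath must contain its own copy of the YAML unless the file comes from a shared artifact.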
**Here is the sharding-uat.yaml**:
```
dataSources:
  m1:
    dataSourceClassName: com.alibaba.druid.pool.DruidDataSource
    driverClassName: com.mysql.cj.jdbc.Driver
    url:
    username:
    password:
  m2:
    dataSourceClassName: com.alibaba.druid.pool.DruidDataSource
    driverClassName: com.mysql.cj.jdbc.Driver
    url:
    username:
    password:
rules:
- !SINGLE
  tables:
    - m2.my_table_without_sharding
    # ... <422 items>
- !SHARDING
  tables:
    # here is the sharding config
  shardingAlgorithms:
    # here is the sharding config
  keyGenerators:
    snowflake:
      type: custom_snowflake_id
props:
  sql-show: true
```
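The elided `!SHARDING` part follows the usual ShardingSphere YAML shape; a sketch with a hypothetical `t_order` table (the table, column, and algorithm names are placeholders, not my real config):

```
- !SHARDING
  tables:
    t_order:
      actualDataNodes: m${1..2}.t_order_${0..15}
      tableStrategy:
        standard:
          shardingColumn: order_id
          shardingAlgorithmName: t_order_inline
      keyGenerateStrategy:
        column: order_id
        keyGeneratorName: snowflake
  shardingAlgorithms:
    t_order_inline:
      type: INLINE
      props:
        algorithm-expression: t_order_${order_id % 16}
```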
When I add a new `- !SINGLE` table, I have to add this config for 4 environments (dev, sit, uat, prod), and all 50 project modules have to add the same table config, and they are all identical!
So how can I configure a common rule config for sharding-jdbc:5.5.0?
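One workaround I am considering (not sure it is the intended way): put the four YAML files into one shared jar that every module depends on, so the `classpath:` URL resolves to the same file in all 50 modules and a new `- !SINGLE` table is added in exactly one place per environment. A sketch, assuming a hypothetical shared Maven module named `sharding-config`:

```
sharding-config/
└── src/main/resources/
    └── config/
        ├── sharding-dev.yaml
        ├── sharding-sit.yaml
        ├── sharding-uat.yaml
        └── sharding-prod.yaml
```

Each module would keep `url: jdbc:shardingsphere:classpath:config/sharding-uat.yaml` (or the matching env file) and only add the `sharding-config` dependency. Is there a better, built-in way in 5.5.0 to share or merge common rule config?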