wsg96321 commented on issue #15727:
URL: https://github.com/apache/shardingsphere/issues/15727#issuecomment-1056724498


   Maybe we can do it in a batch? For example:
   ```
   ALTER SHARDING TABLE RULE hero1,hero2,hero3,hero4,hero5...hero100(
   DATANODES("ds_${1..49}.hero,ds_${51..100}.hero,ds_${500..549}.hero,ds_${551..600}.hero"),
   DATABASE_STRATEGY(TYPE=standard,SHARDING_COLUMN=id,SHARDING_ALGORITHM=hint_common)
   );

   or

   ALTER SHARDING TABLE RULE hero1,hero2,hero3,hero4,hero5...hero100(
   RESOURCES("ds_${1..49},ds_${51..100},ds_${500..549},ds_${551..600}"),
   DATABASE_STRATEGY(TYPE=standard,SHARDING_COLUMN=id,SHARDING_ALGORITHM=hint_common)
   );
   ```
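   For comparison, the current grammar needs one statement per table, so the same change would look roughly like this (a sketch reusing the names above; each statement triggers its own round of metadata loading):
   ```
   ALTER SHARDING TABLE RULE hero1(
   DATANODES("ds_${1..49}.hero1,ds_${51..100}.hero1,ds_${500..549}.hero1,ds_${551..600}.hero1"),
   DATABASE_STRATEGY(TYPE=standard,SHARDING_COLUMN=id,SHARDING_ALGORITHM=hint_common)
   );

   ALTER SHARDING TABLE RULE hero2(
   DATANODES("ds_${1..49}.hero2,ds_${51..100}.hero2,ds_${500..549}.hero2,ds_${551..600}.hero2"),
   DATABASE_STRATEGY(TYPE=standard,SHARDING_COLUMN=id,SHARDING_ALGORITHM=hint_common)
   );

   -- ... and so on through hero100
   ```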
   
   First, we have to support this grammar, which does not spell out the data nodes for each table one by one. The purpose is to reuse one connection to process the metadata of different tables, so that we only refresh metadata X times (the number of resources we have) instead of Y times (the Cartesian product of the number of resources and the number of tables).
   E.g. suppose we have 100 data sources (on different LANs) with 200 tables in each one: we would need to refresh 100 × 200 = 20,000 times. If every refresh takes 1 s, that costs us 20,000 s (about 5.6 hours). But if we can reuse the connection with a batch SQL, we only refresh 100 times, which costs about 100 s (about 1.7 minutes). In practice each batched refresh will take more than 1 s, since processing 200 tables is not the same as processing one, but I think this is a good direction for optimization.
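   To put the saving in a formula (with R the set of resources and T the set of tables; the 1 s per refresh is just the rough figure assumed above):
   ```
   \text{refreshes}_{\text{per table}} = |R| \cdot |T| = 100 \cdot 200 = 20000 \quad (\approx 20000\,\mathrm{s} \approx 5.6\,\mathrm{h})
   \text{refreshes}_{\text{batched}}   = |R| = 100 \quad (\approx 100\,\mathrm{s} \approx 1.7\,\mathrm{min})
   ```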
   

