dybizqq opened a new issue, #24999:
URL: https://github.com/apache/shardingsphere/issues/24999

   ## Binding snowflake ID and create_time column for sharding
   
   ### Scenario
   Currently my table is sharded by create_time into yyyyMM tables. At the same time, the ID is generated with a snowflake algorithm, so the program can also infer a timestamp from the ID. However, I am not sure whether these two timestamp-based columns carry exactly the same timestamp value, because they are not generated by the same code.
   
   #### I tried the following sharding configurations:
   
   1. Sharding by create_time: updating a record by ID generates SQL for all shards, so I must estimate the record's create_time and append it to the WHERE clause. To keep this correct when the estimated timestamp falls near a month boundary, I widen the range so that it scans 2 tables, e.g. `where id = ? and create_time >= '2023-03-31 23:59:59' and create_time < '2023-04-01 00:00:05'`. This is still better than scanning all 50+ tables.
   2. Sharding by the timestamp embedded in the snowflake ID (implementing my own StandardShardingAlgorithm to infer it): this works only for ID/key-based insert/update/select; when querying a range of dates (filtering on create_time), it generates SQL for all shards.
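   For reference, the timestamp inference behind approach 2 can be sketched as follows. This is a minimal sketch, assuming the common snowflake bit layout (41-bit millisecond timestamp shifted left by 22 bits over 10 worker-ID + 12 sequence bits); the epoch below is Twitter's classic one, used only as an example, and must be replaced by whatever epoch your own generator uses (ShardingSphere's built-in snowflake generator uses a different one):

   ```java
   import java.time.Instant;
   import java.time.ZoneOffset;
   import java.time.format.DateTimeFormatter;

   public final class SnowflakeShardSuffix {

       // Assumed layout: 41-bit millisecond timestamp << 22 (10 worker + 12 sequence bits).
       // EPOCH is Twitter's classic epoch, an example only; use your generator's epoch.
       static final long EPOCH = 1288834974657L;
       static final int TIMESTAMP_SHIFT = 22;

       private static final DateTimeFormatter YYYYMM =
               DateTimeFormatter.ofPattern("yyyyMM").withZone(ZoneOffset.UTC);

       /** Recover the generation time (epoch millis) embedded in a snowflake ID. */
       static long timestampOf(long snowflakeId) {
           return (snowflakeId >>> TIMESTAMP_SHIFT) + EPOCH;
       }

       /** Map a snowflake ID to the yyyyMM table suffix used by create_time sharding. */
       static String shardSuffixOf(long snowflakeId) {
           return YYYYMM.format(Instant.ofEpochMilli(timestampOf(snowflakeId)));
       }

       public static void main(String[] args) {
           long ts = 1680307200000L;                  // 2023-04-01 00:00:00 UTC
           long id = (ts - EPOCH) << TIMESTAMP_SHIFT; // worker/sequence bits left at zero
           System.out.println(shardSuffixOf(id));     // prints 202304
       }
   }
   ```

   A custom StandardShardingAlgorithm would apply this mapping inside its doSharding methods for both precise and range values.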
   
   ### Question
   Is it possible to mix the two approaches as follows, so that sharding works with either create_time or the snowflake ID column:
   
   1. Can we bind the snowflake ID to create_time, so that when the snowflake ID is generated, create_time is set to the same timestamp?
   2. When the snowflake ID is used for sharding, can the routing algorithm apply/suggest create_time automatically, so that the query is also limited to a single shard?
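   To make question 1 concrete, the binding can already be approximated in application code by deriving create_time from the snowflake ID itself, so the two columns can never disagree. A minimal sketch, with a hypothetical Order entity and the same bit-layout/epoch assumptions as above:

   ```java
   import java.sql.Timestamp;

   // Hypothetical entity illustrating question 1: create_time is derived from the
   // snowflake ID rather than read from the clock, so both columns share one timestamp.
   final class Order {

       // Assumed layout: 41-bit millisecond timestamp << 22; example epoch only.
       private static final long EPOCH = 1288834974657L;
       private static final int TIMESTAMP_SHIFT = 22;

       final long id;
       final Timestamp createTime;

       Order(long snowflakeId) {
           this.id = snowflakeId;
           this.createTime = new Timestamp((snowflakeId >>> TIMESTAMP_SHIFT) + EPOCH);
       }
   }
   ```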
    
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: 
[email protected]

For queries about this service, please contact Infrastructure at:
[email protected]
