azexcy opened a new issue, #20068:
URL: https://github.com/apache/shardingsphere/issues/20068

   ## Bug Report
   
   Environment:
   - ShardingSphere-Proxy: 2 instances
   - ZooKeeper: 1 instance
   - Sharding count: 2
   
   Related CI run:
   https://github.com/azexcy/shardingsphere/runs/7766763537?check_suite_focus=true
   
   Related code in `PipelineJobExecutor`:
   ```java
       private void processEvent(final DataChangedEvent event, final JobConfigurationPOJO jobConfigPOJO) {
           ......
           switch (event.getType()) {
               case ADDED:
               case UPDATED:
                   if (PipelineJobCenter.isJobExisting(jobConfigPOJO.getJobName())) {
                       log.info("{} added to executing jobs failed since it already exists", jobConfigPOJO.getJobName());
                   } else {
                       log.info("{} executing jobs", jobConfigPOJO.getJobName());
                       executor.execute(() -> execute(jobConfigPOJO));
                   }
                   break;
               default:
                   break;
           }
       }
   
       private void execute(final JobConfigurationPOJO jobConfigPOJO) {
           RuleAlteredJob job = new RuleAlteredJob();
           PipelineJobCenter.addJob(jobConfigPOJO.getJobName(), job);
           OneOffJobBootstrap oneOffJobBootstrap = new OneOffJobBootstrap(PipelineAPIFactory.getRegistryCenter(), job, jobConfigPOJO.toJobConfiguration());
           oneOffJobBootstrap.execute();
           job.setOneOffJobBootstrap(oneOffJobBootstrap);
       }
   ```
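To make the guard's behavior concrete, here is a minimal, self-contained sketch (not the real ShardingSphere classes — `GuardSketch`, its job-center map, and the trigger counter are stand-ins I made up): once a job name is present in the local job center, both `ADDED` and `UPDATED` events for it fall into the "already exists" branch, so any later event that should (re)trigger the one-off bootstrap is silently skipped.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical, simplified model of the guard in processEvent() above.
public final class GuardSketch {

    public enum Type { ADDED, UPDATED }

    private static final Map<String, Object> JOB_CENTER = new ConcurrentHashMap<>();

    private static final AtomicInteger TRIGGER_COUNT = new AtomicInteger();

    public static void processEvent(final Type type, final String jobName) {
        switch (type) {
            case ADDED:
            case UPDATED:
                if (JOB_CENTER.containsKey(jobName)) {
                    // corresponds to "added to executing jobs failed since it already exists"
                    System.out.println(jobName + " skipped, already exists");
                } else {
                    JOB_CENTER.put(jobName, new Object()); // stands in for PipelineJobCenter.addJob(...)
                    TRIGGER_COUNT.incrementAndGet();       // stands in for oneOffJobBootstrap.execute()
                }
                break;
            default:
                break;
        }
    }

    public static int triggerCount() {
        return TRIGGER_COUNT.get();
    }

    public static void main(final String[] args) {
        String jobId = "0130317c30317c3054317c7368617264696e675f6462";
        processEvent(Type.ADDED, jobId);   // first event triggers the bootstrap
        processEvent(Type.UPDATED, jobId); // second event is dropped by the guard
        System.out.println("triggered " + triggerCount() + " time(s)");
    }
}
```

Whether this guard is actually what drops `shardingItem=1` here is unconfirmed; the sketch only shows that repeated events for one job ID reach the bootstrap at most once per proxy.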
   
   The CI log:
   ```
   [INFO ] 2022-08-10 12:51:18.471 [docker-java-stream-1031373745] :ShardingSphere-Proxy - STDOUT: [INFO ] 2022-08-10 12:51:17.684 [Curator-SafeNotifyService-0] o.a.s.d.p.c.e.PipelineJobExecutor - 0130317c30317c3054317c7368617264696e675f6462 executing jobs
   [INFO ] 2022-08-10 12:51:18.471 [docker-java-stream-1031373745] :ShardingSphere-Proxy - STDOUT: [INFO ] 2022-08-10 12:51:17.688 [pool-16-thread-1] o.a.s.d.p.core.job.PipelineJobCenter - add job, jobId=0130317c30317c3054317c7368617264696e675f6462
   [INFO ] 2022-08-10 12:51:18.609 [docker-java-stream-1470981068] :ShardingSphere-Proxy - STDOUT: [INFO ] 2022-08-10 12:51:17.669 [Curator-SafeNotifyService-0] o.a.s.d.p.c.e.PipelineJobExecutor - ADDED job config: /scaling/0130317c30317c3054317c7368617264696e675f6462/config
   [INFO ] 2022-08-10 12:51:18.609 [docker-java-stream-1470981068] :ShardingSphere-Proxy - STDOUT: [INFO ] 2022-08-10 12:51:17.673 [Curator-SafeNotifyService-0] o.a.s.d.p.c.e.PipelineJobExecutor - 0130317c30317c3054317c7368617264696e675f6462 executing jobs
   [INFO ] 2022-08-10 12:51:18.609 [docker-java-stream-1470981068] :ShardingSphere-Proxy - STDOUT: [INFO ] 2022-08-10 12:51:17.685 [pool-16-thread-1] o.a.s.d.p.core.job.PipelineJobCenter - add job, jobId=0130317c30317c3054317c7368617264696e675f6462
   [INFO ] 2022-08-10 12:51:18.609 [docker-java-stream-1470981068] :ShardingSphere-Proxy - STDOUT: [INFO ] 2022-08-10 12:51:18.125 [0130317c30317c3054317c7368617264696e675f6462_Worker-1] o.a.s.d.p.s.r.RuleAlteredJob - Execute job 0130317c30317c3054317c7368617264696e675f6462-0
   [INFO ] 2022-08-10 12:51:18.609 [docker-java-stream-1470981068] :ShardingSphere-Proxy - STDOUT: [INFO ] 2022-08-10 12:51:18.292 [0130317c30317c3054317c7368617264696e675f6462_Worker-1] o.a.s.d.p.s.r.RuleAlteredJob - start RuleAlteredJobScheduler, jobId=0130317c30317c3054317c7368617264696e675f6462, shardingItem=0
   [INFO ] 2022-08-10 12:51:18.609 [docker-java-stream-1470981068] :ShardingSphere-Proxy - STDOUT: [INFO ] 2022-08-10 12:51:18.325 [0130317c30317c3054317c7368617264696e675f6462_Worker-1] o.a.s.d.p.c.j.p.p.PipelineJobProgressPersistService - Add job progress persist context, jobId=0130317c30317c3054317c7368617264696e675f6462, shardingItem=0
   ```
   
   From the logs we can see that the two sharding items were divided equally between the two proxies.
   However, only one item logged `start RuleAlteredJobScheduler, jobId=0130317c30317c3054317c7368617264696e675f6462, shardingItem=0`; no corresponding entry for `shardingItem=1` appears, so that item was never started.
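As an aside, the opaque job IDs in these logs appear to be hex-encoded, which makes them easier to read once decoded. A throwaway helper I wrote (the leading `01` marker and the exact field layout are my assumptions, not documented behavior) recovers the logical database name at the end of the ID:

```java
// Decodes the hex-encoded tail of a scaling job ID. Assumption: everything after
// the leading "01" marker is plain hex-encoded ASCII; treat decoded fields as a guess.
public final class JobIdDecoder {

    public static String decodeHex(final String hex) {
        StringBuilder result = new StringBuilder();
        for (int i = 0; i < hex.length(); i += 2) {
            result.append((char) Integer.parseInt(hex.substring(i, i + 2), 16));
        }
        return result.toString();
    }

    public static void main(final String[] args) {
        String jobId = "0130317c30317c3054317c7368617264696e675f6462";
        System.out.println(decodeHex(jobId.substring(2))); // 01|01|0T1|sharding_db
    }
}
```

The decoded suffix `sharding_db` matches the schema being scaled, which at least confirms both proxies are acting on the same job.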
   
   The ZooKeeper snapshot data is below:
   ```
   /scaling=,
   /scaling/_finished_check=org.apache.shardingsphere.data.pipeline.core.job.FinishedCheckJob,
   /scaling/_finished_check/sharding=,
   /scaling/_finished_check/sharding/0=,
   /scaling/_finished_check/sharding/0/instance=172.21.0.5@-@12,
   /scaling/_finished_check/servers=,
   /scaling/_finished_check/servers/172.21.0.5=ENABLED,
   /scaling/_finished_check/servers/172.21.0.4=ENABLED,
   /scaling/_finished_check/leader=,
   /scaling/_finished_check/leader/sharding=,
   /scaling/_finished_check/leader/election=,
   /scaling/_finished_check/leader/election/instance=172.21.0.4@-@14,
   /scaling/_finished_check/instances=,
   /scaling/_finished_check/instances/172.21.0.5@-@12=jobInstanceId: 172.21.0.5@-@12
   serverIp: 172.21.0.5,
   /scaling/_finished_check/instances/172.21.0.4@-@14=jobInstanceId: 172.21.0.4@-@14
   serverIp: 172.21.0.4,
   /scaling/0130317c30317c3054317c7368617264696e675f6462=org.apache.shardingsphere.data.pipeline.scenario.rulealtered.RuleAlteredJob,
   /scaling/0130317c30317c3054317c7368617264696e675f6462/trigger=,
   /scaling/0130317c30317c3054317c7368617264696e675f6462/sharding=,
   /scaling/0130317c30317c3054317c7368617264696e675f6462/sharding/1=,
   /scaling/0130317c30317c3054317c7368617264696e675f6462/sharding/1/instance=172.21.0.4@-@14,
   /scaling/0130317c30317c3054317c7368617264696e675f6462/sharding/0=,
   /scaling/0130317c30317c3054317c7368617264696e675f6462/sharding/0/instance=172.21.0.5@-@12,
   /scaling/0130317c30317c3054317c7368617264696e675f6462/servers=,
   /scaling/0130317c30317c3054317c7368617264696e675f6462/servers/172.21.0.5=ENABLED,
   /scaling/0130317c30317c3054317c7368617264696e675f6462/servers/172.21.0.4=ENABLED,
   /scaling/0130317c30317c3054317c7368617264696e675f6462/offset=,
   /scaling/0130317c30317c3054317c7368617264696e675f6462/offset/0=incremental:
     dataSourceName: ds_0
     delay:
       lastEventTimestamps: 0
       latestActiveTimeMillis: 1660135885794
     position: '24852248'
   inventory:
     finished:
     - ds_0.t_order_2#0
     - ds_0.t_order_0#0
   sourceDatabaseType: PostgreSQL
   status: EXECUTE_INCREMENTAL_TASK,
   /scaling/0130317c30317c3054317c7368617264696e675f6462/leader=,
   /scaling/0130317c30317c3054317c7368617264696e675f6462/leader/sharding=,
   /scaling/0130317c30317c3054317c7368617264696e675f6462/leader/election=,
   /scaling/0130317c30317c3054317c7368617264696e675f6462/leader/election/instance=172.21.0.5@-@12,
   /scaling/0130317c30317c3054317c7368617264696e675f6462/instances=,
   /scaling/0130317c30317c3054317c7368617264696e675f6462/instances/172.21.0.5@-@12=jobInstanceId: 172.21.0.5@-@12
   serverIp: 172.21.0.5,
   /scaling/0130317c30317c3054317c7368617264696e675f6462/instances/172.21.0.4@-@14=jobInstanceId: 172.21.0.4@-@14
   serverIp: 172.21.0.4
   ```
   
   ### Which version of ShardingSphere did you use?
   
   master
   
   ### Which project did you use? ShardingSphere-JDBC or ShardingSphere-Proxy?
   
   ShardingSphere-Proxy
   
   ### Expected behavior
   
   All sharding items should be triggered.
   
   ### Actual behavior
   
   Only one sharding item was triggered.
   
   ### Reason analyze (If you can)
   
   ### Steps to reproduce the behavior, such as: SQL to execute, sharding rule configuration, when exception occur etc.
   
   The problem is intermittent and does not always reproduce.
   
   ### Example codes for reproduce this issue (such as a github link).
   

