astorage opened a new issue, #2203: URL: https://github.com/apache/shardingsphere-elasticjob/issues/2203
## Question

Following the quick start (https://shardingsphere.apache.org/elasticjob/current/en/quick-start/elasticjob-cloud/), I run into this failure:

> status is: TASK_FAILED, message is: Executor terminated, source is: SOURCE_SLAVE

This is my app publish step:

```
curl -l -H "Content-type: application/json" -X POST -d '{"appName":"my_app","appURL":"/root/elastic_cloud/test-elasticjob-cloud.tar.gz","cpuCount":0.1,"memoryMB":64.0,"bootstrapScript":"bin/start.sh","appCacheEnable":true,"eventTraceSamplingCount":0}' http://192.168.18.78:8899/api/app
```

appURL is a local file: /root/elastic_cloud/test-elasticjob-cloud.tar.gz

The task code is extremely simple:

```
@Slf4j
public class Main {

    public static void main(String[] args) {
        JobBootstrap.execute(new MyJob());
        log.info("Hello world!");
    }
}
```

```
@Slf4j
public class MyJob implements SimpleJob {

    @Override
    public void execute(ShardingContext shardingContext) {
        switch (shardingContext.getShardingItem()) {
            case 0:
                // do something by sharding item 0
                log.info("Handling sharding item 0");
                break;
            case 1:
                // do something by sharding item 1
                log.info("Handling sharding item 1");
                break;
            case 2:
                // do something by sharding item 2
                log.info("Handling sharding item 2");
                break;
            // case n: ...
        }
    }
}
```

**For English only**; other languages will not be accepted.

Before asking a question, make sure you have:

- Googled your question.
- Searched open and closed [GitHub issues](https://github.com/apache/shardingsphere-elasticjob/issues).
- Read the documentation: [ElasticJob Doc](https://shardingsphere.apache.org/elasticjob/current/en/overview/).

Please pay attention to the issues you submitted, because we may need more details. If there is no further response and we cannot reproduce the problem from the current information, we will **close it**.
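For completeness, the quick start also has a job registration step that launches the job against the published app. The sketch below shows roughly how that call would look for this setup; the endpoint path, field names, job name, and cron expression are assumptions recalled from the quick start doc rather than an exact record of my request (they may differ by version), and shardingTotalCount is set to 3 only to match the sharding items handled in MyJob.

```
# Sketch of the quick start's job registration step (assumed endpoint and fields,
# not a verbatim record of the actual request).
curl -l -H "Content-type: application/json" -X POST -d '{"jobName":"test_job","appName":"my_app","jobExecutionType":"TRANSIENT","cron":"0/10 * * * * ?","shardingTotalCount":3,"cpuCount":0.1,"memoryMB":64.0}' http://192.168.18.78:8899/api/job/register
```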

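Since bootstrapScript points at bin/start.sh inside the tarball, here is a minimal sketch of what such a launcher script would contain; the lib/ jar layout and the unpackaged Main class name are assumptions about the packaging, not the verified contents of test-elasticjob-cloud.tar.gz. If a script like this fails to start the JVM on the Mesos agent, the executor exits and the task is reported as failed, which matches the message above.

```
#!/bin/bash
# Minimal sketch of a bin/start.sh bootstrap script. The lib/ jar layout and the
# unpackaged Main class are assumptions about how the tarball could be built,
# not the actual contents of test-elasticjob-cloud.tar.gz.
cd "$(dirname "$0")/.." || exit 1
exec java -classpath "lib/*:." Main
```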