mashanshan979 opened a new issue, #4203:
URL: https://github.com/apache/streampark/issues/4203

   ### Search before asking
   
   - [x] I had searched in the [issues](https://github.com/apache/streampark/issues?q=is%3Aissue+label%3A%22bug%22) and found no similar issues.
   
   
   ### Java Version
   
   17.0.12
   
   ### Scala Version
   
   2.12.x
   
   ### StreamPark Version
   
   2.1.5
   
   ### Flink Version
   
   1.19.1
   
   ### Deploy mode
   
   kubernetes-application
   
   ### What happened
   
   StreamPark fails to parse the program arguments correctly: a quoted argument value containing spaces is split apart.
   
   ### Error Exception
   
   ```log
   StreamPark parses Kafka properties containing spaces incorrectly: the quoted sasl.jaas.config value is split into multiple arguments.

   flink run parameter:
   ./bin/flink run -D pipeline.name=market_sync_paimon1 \
       -D jobmanager.memory.process.size=2048m \
       -D taskmanager.memory.process.size=4096m \
       -D taskmanager.numberOfTaskSlots=4 \
       -D taskmanager.memory.task.heap.size=2048m \
       -D taskmanager.memory.managed.size=1024m \
       -D execution.checkpointing.interval=300000 \
       -D execution.checkpointing.mode=EXACTLY_ONCE \
       -D execution.checkpointing.externalized-checkpoint-retention=RETAIN_ON_CANCELLATION \
       -D execution.checkpointing.max-concurrent-checkpoints=2 \
       -D execution.checkpointing.timeout=600000 \
       -D state.backend.type=filesystem \
       -D state.checkpoint-storage=filesystem \
       -D state.checkpoints.dir=hdfs:///checkpoint/flink/paimon/zmn_bigdata_market_paimon/big-data-flink-consumer-marketSyncPaimonApp1 \
       -m xxxx:32597 \
       /a/data/flinkjar/paimon-flink-action-1.0.0.jar \
       kafka_sync_database \
       --warehouse /user/hive/warehouse \
       --database ods3 \
       --table_prefix ods_ \
       --table_prefix_db biz_mpbf=ods_biz_mpbf_ \
       --table_prefix_db zmn_mct=ods_zmn_mct_ \
       --table_prefix_db biz_clue=ods_biz_clue_ \
       --partition_keys region \
       --type_mapping tinyint1-not-bool,bigint-unsigned-to-bigint \
       --kafka_conf properties.bootstrap.servers=xxxx:9092,xxxx:9092,xxx:9092 \
       --kafka_conf properties.security.protocol=SASL_PLAINTEXT \
       --kafka_conf properties.sasl.mechanism=PLAIN \
       --kafka_conf properties.sasl.jaas.config='org.apache.flink.kafka.shaded.org.apache.kafka.common.security.plain.PlainLoginModule required username="xxx" password="Ixx&c&Tv$d";' \
       --kafka_conf topic=zmn_bigdata_market_format \
       --kafka_conf properties.group.id=big-data-flink-consumer-marketSyncPaimonApp1 \
       --kafka_conf scan.startup.mode=latest-offset \
       --kafka_conf value.format=canal-json \
       --catalog_conf metastore=hive \
       --catalog_conf uri=thrift://xxx:9083 \
       --table_conf bucket=4 \
       --table_conf changelog-producer=input \
       --table_conf sink.parallelism=4 \
       --table_conf write-buffer-size=256mb \
       --table_conf write-buffer-spillable=true \
       --including_dbs "zmn_mct|biz_mpbf|biz_clue"

   streampark Program Args: kafka_sync_database --warehouse /user/hive/warehouse --database ods3 --table_prefix ods_ --table_prefix_db biz_mpbf=ods_biz_mpbf_ --table_prefix_db zmn_mct=ods_zmn_mct_ --table_prefix_db biz_clue=ods_biz_clue_ --partition_keys region --type_mapping tinyint1-not-bool,bigint-unsigned-to-bigint --kafka_conf properties.bootstrap.servers=xxx:9092,xxx:9092,xxx:9092 --kafka_conf properties.security.protocol=SASL_PLAINTEXT --kafka_conf properties.sasl.mechanism=PLAIN --kafka_conf properties.sasl.jaas.config='org.apache.flink.kafka.shaded.org.apache.kafka.common.security.plain.PlainLoginModule required username="xxx-kafka-admin" password="ssdsd&c&Tv$d";' --kafka_conf topic=zmn_bigdata_market_format --kafka_conf properties.group.id=big-data-flink-consumer-marketSyncPaimonApp1 --kafka_conf scan.startup.mode=latest-offset --kafka_conf value.format=canal-json --catalog_conf metastore=hive --catalog_conf uri=thrift://xxx:9083 --table_conf bucket=4 --table_conf changelog-producer=input --table_conf sink.parallelism=4 --table_conf write-buffer-size=256mb --table_conf write-buffer-spillable=true --including_dbs "zmn_mct|biz_mpbf|biz_clue"
   ```
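   For context, the failure mode can be reproduced outside StreamPark. The sketch below is hypothetical and is not StreamPark's actual parser: it contrasts a naive whitespace split, which breaks the quoted `sasl.jaas.config` value into fragments like `password="p`, with a quote-aware tokenizer that keeps single- or double-quoted spans intact.

   ```java
   import java.util.ArrayList;
   import java.util.List;

   public class ArgSplitDemo {

       // Naive split on whitespace: breaks quoted values such as sasl.jaas.config.
       static String[] naiveSplit(String args) {
           return args.trim().split("\\s+");
       }

       // Quote-aware tokenizer: a space only ends a token outside of quotes.
       static List<String> quoteAwareSplit(String args) {
           List<String> tokens = new ArrayList<>();
           StringBuilder cur = new StringBuilder();
           char quote = 0; // 0 means we are not inside a quoted span
           for (char c : args.toCharArray()) {
               if (quote != 0) {
                   if (c == quote) quote = 0;      // closing quote ends the span
                   else cur.append(c);             // keep everything inside, spaces included
               } else if (c == '\'' || c == '"') {
                   quote = c;                      // opening quote starts a span
               } else if (Character.isWhitespace(c)) {
                   if (cur.length() > 0) { tokens.add(cur.toString()); cur.setLength(0); }
               } else {
                   cur.append(c);
               }
           }
           if (cur.length() > 0) tokens.add(cur.toString());
           return tokens;
       }

       public static void main(String[] args) {
           // Shortened stand-in for the real JAAS config argument from the report.
           String programArgs = "--kafka_conf properties.sasl.jaas.config="
                   + "'PlainLoginModule required username=\"u\" password=\"p w\";'";
           System.out.println(naiveSplit(programArgs).length);      // 6: the value is split apart
           System.out.println(quoteAwareSplit(programArgs).size()); // 2: flag + intact value
       }
   }
   ```

   The symptom in the report matches the naive behavior: each space inside the quoted JAAS string becomes an argument boundary, so the module class, `required`, `username=...`, and `password=...` arrive as separate arguments.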
   
   ### Screenshots
   
   
![Image](https://github.com/user-attachments/assets/f5227c5a-3773-435c-ac1b-7243c4876fbc)
   
   ### Are you willing to submit PR?
   
   - [ ] Yes I am willing to submit a PR!
   
   ### Code of Conduct
   
   - [x] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
