charlielin opened a new issue, #15750: URL: https://github.com/apache/doris/issues/15750
### Search before asking

- [X] I had searched in the [issues](https://github.com/apache/doris/issues?q=is%3Aissue) and found no similar issues.

### Version

doris: 1.2.0
kafka: 2.8.0

### What's Wrong?

When I create a routine load job for Kafka with SASL_PLAINTEXT like the one below:

```sql
CREATE ROUTINE LOAD mysy2_eco.td_user_unique ON log_625_s_eco_td_user_unique
PROPERTIES
(
    "max_batch_interval" = "5",
    "max_batch_rows" = "300000",
    "max_batch_size" = "209715200",
    "format" = "json",
    "strip_outer_array" = "false"
)
FROM KAFKA
(
    "kafka_broker_list" = "ip:port",
    "kafka_topic" = "log-625-s-eco-td-user-cdc",
    "property.security.protocal" = "sasl_plaintext",
    "property.sasl.mechanisms" = "SCRAM-SHA-256",
    "property.sasl.username" = "admin",
    "property.sasl.password" = "password",
    "property.group.id" = "doris_625_s_eco_td2",
    "property.kafka_default_offsets" = "OFFSET_BEGINNING"
);
```

**The sasl.username `admin` is the super admin of Kafka.**

The job pauses and reports this error:

```
mysql> show routine load\G;
*************************** 1. row ***************************
                  Id: 61927
                Name: td_user_unique
          CreateTime: 2023-01-09 23:29:40
           PauseTime: 2023-01-09 23:29:49
             EndTime: NULL
              DbName: default_cluster:mysy2_eco
           TableName: log_625_s_eco_td_user_unique
               State: PAUSED
      DataSourceType: KAFKA
      CurrentTaskNum: 0
       JobProperties: {"timezone":"Asia/Shanghai","send_batch_parallelism":"1","load_to_single_tablet":"false","maxBatchSizeBytes":"209715200","exec_mem_limit":"2147483648","strict_mode":"false","jsonpaths":"","currentTaskConcurrentNum":"0","fuzzy_parse":"false","partitions":"*","columnToColumnExpr":"","maxBatchIntervalS":"5","whereExpr":"*","dataFormat":"json","precedingFilter":"*","mergeType":"APPEND","format":"json","json_root":"","deleteCondition":"*","desireTaskConcurrentNum":"5","maxErrorNum":"0","strip_outer_array":"false","execMemLimit":"2147483648","num_as_string":"false","maxBatchRows":"300000"}
DataSourceProperties: {"topic":"log-625-s-eco-td-user-cdc","currentKafkaPartitions":"","brokerList":"ip:port"}
    CustomProperties: {"security.protocal":"sasl_plaintext","sasl.username":"admin","kafka_default_offsets":"OFFSET_BEGINNING","group.id":"doris_625_s_eco_td2","sasl.mechanisms":"SCRAM-SHA-256","sasl.password":"password"}
           Statistic: {"receivedBytes":0,"runningTxns":[],"errorRows":0,"committedTaskNum":0,"loadedRows":0,"loadRowsRate":0,"abortedTaskNum":0,"errorRowsAfterResumed":0,"totalRows":0,"unselectedRows":0,"receivedBytesRate":0,"taskExecuteTimeMs":1}
            Progress: {}
                 Lag: {}
ReasonOfStateChanged: ErrorReason{code=errCode = 4, msg='Job failed to fetch all current partition with error errCode = 2, detailMessage = Failed to get all partitions of kafka topic: log-625-s-eco-td-user-cdc. error: Waited 5 seconds (plus 116834 nanoseconds delay) for io.grpc.stub.ClientCalls$GrpcFuture@4111d94[status=PENDING, info=[GrpcFuture{clientCall={delegate=ClientCallImpl{method=MethodDescriptor{fullMethodName=doris.PBackendService/get_info, type=UNARY, idempotent=false, safe=false, sampledToLocalTracing=true, requestMarshaller=io.grpc.protobuf.lite.ProtoLiteUtils$MessageMarshaller@34eb5d1d, responseMarshaller=io.grpc.protobuf.lite.ProtoLiteUtils$MessageMarshaller@6928bbd7, schemaDescriptor=org.apache.doris.proto.PBackendServiceGrpc$PBackendServiceMethodDescriptorSupplier@4439490f}}}}]]'}
        ErrorLogUrls:
            OtherMsg:
1 row in set (0.01 sec)
```

### What You Expected?

The routine load job should run successfully.

### How to Reproduce?

_No response_

### Anything Else?

_No response_

### Are you willing to submit PR?

- [ ] Yes I am willing to submit a PR!

### Code of Conduct

- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
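One thing worth checking before digging into Doris itself (an observation by the editor, not part of the original report): Doris forwards `property.*` settings from the routine load statement to librdkafka, and librdkafka's property is spelled `security.protocol`, whereas the statement and the job's `CustomProperties` both show `security.protocal`. If that property is silently dropped as unknown, the client would attempt an unauthenticated PLAINTEXT connection and the partition fetch would time out, matching the gRPC timeout above. A small self-contained sketch (the `check_properties` helper and the property list are illustrative, not a Doris API) that flags such near-miss property names:

```python
import difflib

# Subset of well-known librdkafka configuration property names.
KNOWN_PROPERTIES = {
    "security.protocol",
    "sasl.mechanisms",
    "sasl.username",
    "sasl.password",
    "group.id",
}

def check_properties(props):
    """Return {suspicious_name: suggested_name} for property names
    that closely resemble, but do not match, a known property."""
    suspicious = {}
    for name in props:
        # "kafka_default_offsets" is a Doris-level setting, not a librdkafka one.
        if name in KNOWN_PROPERTIES or name == "kafka_default_offsets":
            continue
        match = difflib.get_close_matches(name, KNOWN_PROPERTIES, n=1, cutoff=0.8)
        if match:
            suspicious[name] = match[0]
    return suspicious

# The CustomProperties of the paused job above.
custom = {
    "security.protocal": "sasl_plaintext",
    "sasl.username": "admin",
    "kafka_default_offsets": "OFFSET_BEGINNING",
    "group.id": "doris_625_s_eco_td2",
    "sasl.mechanisms": "SCRAM-SHA-256",
    "sasl.password": "password",
}

print(check_properties(custom))
# → {'security.protocal': 'security.protocol'}
```

If the check flags the property, recreating the job with `"property.security.protocol" = "sasl_plaintext"` would be the first thing to try.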
