zengqinchris opened a new issue, #10565:
URL: https://github.com/apache/seatunnel/issues/10565

   Tested versions: both 2.3.8 and 2.3.11 fail.
   
   The configuration file is as follows:
```hocon
env {
    "job.mode" = BATCH
    "job.name" = "SeaTunnel_Job"
    "savemode.execute.location" = CLUSTER
}
source {
    SftpFile {
        path = "/opt/module/qingyang"
        "file_format_type" = BINARY
        "xml_use_attr_format" = "false"
        "parse_partition_from_path" = "true"
        file_filter_pattern = "/opt/module/qingyang/.*\\.pdf"
        "date_format" = yyyy-MM-dd
        "datetime_format" = "yyyy-MM-dd HH:mm:ss"
        "time_format" = "HH:mm:ss"
        "compress_codec" = None
        "archive_compress_codec" = none
        parallelism = "1"
        "result_table_name" = Table20895870643936
        host = "0.0.0.0"
        port = "22"
        user = xx
        password = "xx"
        encoding = UTF-8
        "skip_header_row_number" = "0"
    }
}
transform {
}
sink {
    S3File {
        path = "/ods/red_story/pdf"
        "file_format_type" = BINARY
        "compress_codec" = NONE
        "enable_header_write" = "false"
        "parquet_avro_write_fixed_as_int96" = []
        "parquet_avro_write_timestamp_as_int96" = "false"
        "xml_use_attr_format" = null
        "custom_filename" = "false"
        "file_name_expression" = "${transactionId}"
        "filename_time_format" = "yyyy.MM.dd"
        "have_partition" = "false"
        "partition_dir_expression" = "${k0}=${v0}/${k1}=${v1}/.../${kn}=${vn}/"
        "is_partition_field_write_in_file" = "false"
        "is_enable_transaction" = "true"
        "tmp_path" = "/tmp/seatunnel"
        "multi_table_sink_replica" = "1"
        "schema_save_mode" = "CREATE_SCHEMA_WHEN_NOT_EXIST"
        "data_save_mode" = "APPEND_DATA"
        "source_table_name" = Table20895870643936
        "parse_partition_from_path" = "true"
        "hadoop_s3_properties" {
            "fs.s3a.attempts.maximum" = "3"
            "fs.s3a.connection.ssl.enabled" = "false"
            "fs.s3a.connection.establish.timeout" = "5000"
            "fs.s3a.path.style.access" = "true"
        }
        "fs.s3a.endpoint" = "http://0.0.0.0:000/"
        "secret_key" = 000
        "access_key" = xx
        "fs.s3a.aws.credentials.provider" = "org.apache.hadoop.fs.s3a.SimpleAWSCredentialsProvider"
        bucket = "s3a://xx"
    }
}
```
   
   The command executed is as follows:
```shell
[root@hadoop001 seatunnel-2.3.11]# ./bin/seatunnel.sh --config ./config/aa.config
```
   
   The error output is as follows:
   [root@hadoop001 seatunnel-2.3.11]# ./bin/seatunnel.sh --config ./config/aa.config
   2026-03-05 15:23:34,303 INFO  [c.h.i.c.AbstractConfigLocator ] [main] - 
Loading configuration '/opt/module/seatunnel-2.3.11/config/seatunnel.yaml' from 
System property 'seatunnel.config'
   2026-03-05 15:23:34,307 INFO  [c.h.i.c.AbstractConfigLocator ] [main] - 
Using configuration file at /opt/module/seatunnel-2.3.11/config/seatunnel.yaml
   2026-03-05 15:23:34,309 INFO  [o.a.s.e.c.c.SeaTunnelConfig   ] [main] - 
seatunnel.home is /opt/module/seatunnel-2.3.11
   2026-03-05 15:23:34,400 INFO  [amlSeaTunnelDomConfigProcessor] [main] - 
Dynamic slot is enabled, the schedule strategy is set to REJECT
   2026-03-05 15:23:34,401 INFO  [c.h.i.c.AbstractConfigLocator ] [main] - 
Loading configuration '/opt/module/seatunnel-2.3.11/config/hazelcast.yaml' from 
System property 'hazelcast.config'
   2026-03-05 15:23:34,401 INFO  [c.h.i.c.AbstractConfigLocator ] [main] - 
Using configuration file at /opt/module/seatunnel-2.3.11/config/hazelcast.yaml
   2026-03-05 15:23:34,688 INFO  [c.h.i.c.AbstractConfigLocator ] [main] - 
Loading configuration 
'/opt/module/seatunnel-2.3.11/config/hazelcast-client.yaml' from System 
property 'hazelcast.client.config'
   2026-03-05 15:23:34,688 INFO  [c.h.i.c.AbstractConfigLocator ] [main] - 
Using configuration file at 
/opt/module/seatunnel-2.3.11/config/hazelcast-client.yaml
   2026-03-05 15:23:34,957 INFO  [.c.i.s.ClientInvocationService] [main] - 
hz.client_1 [seatunnel] [5.1] Running with 2 response threads, dynamic=true
   2026-03-05 15:23:35,007 INFO  [c.h.c.LifecycleService        ] [main] - 
hz.client_1 [seatunnel] [5.1] HazelcastClient 5.1 (20220228 - 21f20e7) is 
STARTING
   2026-03-05 15:23:35,008 INFO  [c.h.c.LifecycleService        ] [main] - 
hz.client_1 [seatunnel] [5.1] HazelcastClient 5.1 (20220228 - 21f20e7) is 
STARTED
   2026-03-05 15:23:35,028 INFO  [.c.i.c.ClientConnectionManager] [main] - 
hz.client_1 [seatunnel] [5.1] Trying to connect to cluster: seatunnel
   2026-03-05 15:23:35,030 INFO  [.c.i.c.ClientConnectionManager] [main] - 
hz.client_1 [seatunnel] [5.1] Trying to connect to [10.60.0.61]:5801
   2026-03-05 15:23:35,058 INFO  [c.h.c.LifecycleService        ] [main] - 
hz.client_1 [seatunnel] [5.1] HazelcastClient 5.1 (20220228 - 21f20e7) is 
CLIENT_CONNECTED
   2026-03-05 15:23:35,059 INFO  [.c.i.c.ClientConnectionManager] [main] - 
hz.client_1 [seatunnel] [5.1] Authenticated with server 
[10.60.0.61]:5801:7d814c4a-8f35-4897-af9f-7a863d884047, server version: 5.1, 
local address: /10.60.0.60:27154
   2026-03-05 15:23:35,060 INFO  [c.h.i.d.Diagnostics           ] [main] - 
hz.client_1 [seatunnel] [5.1] Diagnostics disabled. To enable add 
-Dhazelcast.diagnostics.enabled=true to the JVM arguments.
   2026-03-05 15:23:35,069 INFO  [c.h.c.i.s.ClientClusterService] 
[hz.client_1.event-4] - hz.client_1 [seatunnel] [5.1] 
   
   Members [3] {
           Member [10.60.0.61]:5801 - 7d814c4a-8f35-4897-af9f-7a863d884047 
[master node]
           Member [10.60.0.60]:5801 - dee1105f-ccfa-4d78-8d81-9de620d80022 
[master node]
           Member [10.60.0.62]:5801 - 371b70af-940d-46f2-86e2-94ed5ed885ec 
[master node]
   }
   
   2026-03-05 15:23:35,090 INFO  [.c.i.c.ClientConnectionManager] [main] - 
hz.client_1 [seatunnel] [5.1] Authenticated with server 
[10.60.0.60]:5801:dee1105f-ccfa-4d78-8d81-9de620d80022, server version: 5.1, 
local address: /10.60.0.60:19770
   2026-03-05 15:23:35,092 INFO  [.c.i.c.ClientConnectionManager] [main] - 
hz.client_1 [seatunnel] [5.1] Authenticated with server 
[10.60.0.62]:5801:371b70af-940d-46f2-86e2-94ed5ed885ec, server version: 5.1, 
local address: /10.60.0.60:35391
   2026-03-05 15:23:35,117 INFO  [.c.i.s.ClientStatisticsService] [main] - 
Client statistics is enabled with period 5 seconds.
   2026-03-05 15:23:35,255 INFO  [o.a.s.c.s.u.ConfigBuilder     ] [main] - 
Loading config file from path: ./config/aa.config
   2026-03-05 15:23:35,372 INFO  [o.a.s.c.s.u.ConfigShadeUtils  ] [main] - Load 
config shade spi: [base64]
   2026-03-05 15:23:35,409 INFO  [o.a.s.c.s.u.ConfigBuilder     ] [main] - 
Parsed config file: 
   {
       "env" : {
           "job.mode" : "BATCH",
           "job.name" : "SeaTunnel_Job",
           "savemode.execute.location" : "CLUSTER"
       },
       "source" : [
           {
               "path" : "/opt/module/qingyang",
               "file_format_type" : "BINARY",
               "xml_use_attr_format" : "false",
               "parse_partition_from_path" : "true",
               "file_filter_pattern" : "/opt/module/qingyang/.*\\.pdf",
               "date_format" : "yyyy-MM-dd",
               "datetime_format" : "yyyy-MM-dd HH:mm:ss",
               "time_format" : "HH:mm:ss",
               "compress_codec" : "None",
               "archive_compress_codec" : "none",
               "parallelism" : "1",
               "result_table_name" : "Table20895870643936",
               "host" : "10.60.0.60",
               "port" : "22",
               "user" : "dama",
               "password" : "******",
               "encoding" : "UTF-8",
               "skip_header_row_number" : "0",
               "plugin_name" : "SftpFile"
           }
       ],
       "transform" : [],
       "sink" : [
           {
               "path" : "/ods/red_story/pdf",
               "file_format_type" : "BINARY",
               "compress_codec" : "NONE",
               "enable_header_write" : "false",
               "parquet_avro_write_fixed_as_int96" : [],
               "parquet_avro_write_timestamp_as_int96" : "false",
               "xml_use_attr_format" : null,
               "custom_filename" : "false",
               "file_name_expression" : "${transactionId}",
               "filename_time_format" : "yyyy.MM.dd",
               "have_partition" : "false",
            "partition_dir_expression" : "${k0}=${v0}/${k1}=${v1}/.../${kn}=${vn}/",
               "is_partition_field_write_in_file" : "false",
               "is_enable_transaction" : "true",
               "tmp_path" : "/tmp/seatunnel",
               "multi_table_sink_replica" : "1",
               "schema_save_mode" : "CREATE_SCHEMA_WHEN_NOT_EXIST",
               "data_save_mode" : "APPEND_DATA",
               "source_table_name" : "Table20895870643936",
               "parse_partition_from_path" : "true",
               "hadoop_s3_properties" : {
                   "fs.s3a.attempts.maximum" : "3",
                   "fs.s3a.connection.ssl.enabled" : "false",
                   "fs.s3a.connection.establish.timeout" : "5000",
                   "fs.s3a.path.style.access" : "true"
               },
            "fs.s3a.endpoint" : "http://10.60.0.60:9000/",
               "secret_key" : "******",
               "access_key" : "******",
            "fs.s3a.aws.credentials.provider" : "org.apache.hadoop.fs.s3a.SimpleAWSCredentialsProvider",
               "bucket" : "s3a://qingyang",
               "plugin_name" : "S3File"
           }
       ]
   }
   
   2026-03-05 15:23:35,416 INFO  [p.MultipleTableJobConfigParser] [main] - add 
common jar in plugins :[]
   2026-03-05 15:23:35,431 INFO  [.s.p.d.AbstractPluginDiscovery] [main] - Load 
SeaTunnelSink Plugin from /opt/module/seatunnel-2.3.11/connectors
   2026-03-05 15:23:35,434 INFO  [.s.p.d.AbstractPluginDiscovery] [main] - 
Discovery plugin jar for: PluginIdentifier{engineType='seatunnel', 
pluginType='source', pluginName='SftpFile'} at: 
file:/opt/module/seatunnel-2.3.11/connectors/connector-file-sftp-2.3.11.jar
   2026-03-05 15:23:35,441 INFO  [.s.p.d.AbstractPluginDiscovery] [main] - Load 
SeaTunnelSink Plugin from /opt/module/seatunnel-2.3.11/connectors
   2026-03-05 15:23:35,449 INFO  [.s.p.d.AbstractPluginDiscovery] [main] - Load 
SeaTunnelSink Plugin from /opt/module/seatunnel-2.3.11/connectors
   2026-03-05 15:23:35,450 INFO  [.s.p.d.AbstractPluginDiscovery] [main] - 
Discovery plugin jar for: PluginIdentifier{engineType='seatunnel', 
pluginType='sink', pluginName='S3File'} at: 
file:/opt/module/seatunnel-2.3.11/connectors/connector-file-s3-2.3.11.jar
   2026-03-05 15:23:35,453 WARN  [o.a.s.a.c.ReadonlyConfig      ] [main] - 
Please use the new key 'plugin_output' instead of the deprecated key 
'result_table_name'.
   2026-03-05 15:23:35,453 WARN  [o.a.s.a.c.ReadonlyConfig      ] [main] - 
Please use the new key 'plugin_input' instead of the deprecated key 
'source_table_name'.
   2026-03-05 15:23:35,454 WARN  [o.a.s.a.c.u.ConfigUtil        ] [main] - 
Option 'plugin_input' is a List, and it is recommended to configure it as 
["string1","string2"]; we will only use ',' to split the String into a list.
   2026-03-05 15:23:35,455 INFO  [p.MultipleTableJobConfigParser] [main] - 
start generating all sources.
   2026-03-05 15:23:35,455 WARN  [o.a.s.a.c.ReadonlyConfig      ] [main] - 
Please use the new key 'plugin_output' instead of the deprecated key 
'result_table_name'.
   2026-03-05 15:23:35,651 WARN  [o.a.h.u.NativeCodeLoader      ] [main] - 
Unable to load native-hadoop library for your platform... using builtin-java 
classes where applicable
   2026-03-05 15:23:38,444 INFO  [o.a.s.a.t.f.FactoryUtil       ] [main] - get 
the CatalogTable from source SftpFile: schema.default.default
   2026-03-05 15:23:38,455 INFO  [.s.p.d.AbstractPluginDiscovery] [main] - Load 
SeaTunnelSource Plugin from /opt/module/seatunnel-2.3.11/connectors
   2026-03-05 15:23:38,456 INFO  [.s.p.d.AbstractPluginDiscovery] [main] - 
Discovery plugin jar for: PluginIdentifier{engineType='seatunnel', 
pluginType='source', pluginName='SftpFile'} at: 
file:/opt/module/seatunnel-2.3.11/connectors/connector-file-sftp-2.3.11.jar
   2026-03-05 15:23:38,457 INFO  [p.MultipleTableJobConfigParser] [main] - 
start generating all transforms.
   2026-03-05 15:23:38,458 INFO  [p.MultipleTableJobConfigParser] [main] - 
start generating all sinks.
   2026-03-05 15:23:38,458 WARN  [o.a.s.a.c.ReadonlyConfig      ] [main] - 
Please use the new key 'plugin_input' instead of the deprecated key 
'source_table_name'.
   2026-03-05 15:23:38,459 WARN  [o.a.s.a.c.u.ConfigUtil        ] [main] - 
Option 'plugin_input' is a List, and it is recommended to configure it as 
["string1","string2"]; we will only use ',' to split the String into a list.
   2026-03-05 15:23:38,462 INFO  [.s.p.d.AbstractPluginDiscovery] [main] - Load 
SeaTunnelSink Plugin from /opt/module/seatunnel-2.3.11/connectors
   2026-03-05 15:23:38,463 INFO  [.s.p.d.AbstractPluginDiscovery] [main] - 
Discovery plugin jar for: PluginIdentifier{engineType='seatunnel', 
pluginType='sink', pluginName='S3File'} at: 
file:/opt/module/seatunnel-2.3.11/connectors/connector-file-s3-2.3.11.jar
   2026-03-05 15:23:38,488 INFO  [o.a.s.a.t.f.FactoryUtil       ] [main] - 
Create sink 'S3File' with upstream input catalog-table[database: default, 
schema: null, table: default]
   2026-03-05 15:23:38,546 INFO  [o.a.s.e.c.j.ClientJobProxy    ] [main] - 
Start submit job, job id: 1081839411257147395, with plugin jar 
[file:/opt/module/seatunnel-2.3.11/connectors/connector-file-s3-2.3.11.jar, 
file:/opt/module/seatunnel-2.3.11/connectors/connector-file-sftp-2.3.11.jar]
   2026-03-05 15:23:38,618 INFO  [o.a.s.e.c.j.ClientJobProxy    ] [main] - 
Submit job finished, job id: 1081839411257147395, job name: SeaTunnel_Job
   2026-03-05 15:23:38,625 INFO  [o.a.s.e.c.j.JobStatusRunner   ] 
[job-status-runner-1081839411257147395] - Job Id : 1081839411257147395 enter 
pending queue, current status:PENDING ,please wait task schedule
   2026-03-05 15:23:38,626 WARN  [o.a.s.e.c.j.JobMetricsRunner  ] 
[job-metrics-runner-1081839411257147395] - Failed to get job metrics summary, 
it maybe first-run
   2026-03-05 15:23:43,627 INFO  [o.a.s.e.c.j.JobStatusRunner   ] 
[job-status-runner-1081839411257147395] - Job ID: 1081839411257147395 has been 
scheduled and entered the next state. Current status: RUNNING
   2026-03-05 15:23:48,422 INFO  [o.a.s.e.c.j.ClientJobProxy    ] [main] - Job 
(1081839411257147395) end with state FAILED
   2026-03-05 15:23:48,422 INFO  [c.h.c.LifecycleService        ] [main] - 
hz.client_1 [seatunnel] [5.1] HazelcastClient 5.1 (20220228 - 21f20e7) is 
SHUTTING_DOWN
   2026-03-05 15:23:48,427 INFO  [.c.i.c.ClientConnectionManager] [main] - 
hz.client_1 [seatunnel] [5.1] Removed connection to endpoint: 
[10.60.0.62]:5801:371b70af-940d-46f2-86e2-94ed5ed885ec, connection: 
ClientConnection{alive=false, connectionId=3, 
channel=NioChannel{/10.60.0.60:35391->/10.60.0.62:5801}, 
remoteAddress=[10.60.0.62]:5801, lastReadTime=2026-03-05 15:23:45.029, 
lastWriteTime=2026-03-05 15:23:45.028, closedTime=2026-03-05 15:23:48.424, 
connected server version=5.1}
   2026-03-05 15:23:48,427 INFO  [.c.i.c.ClientConnectionManager] [main] - 
hz.client_1 [seatunnel] [5.1] Removed connection to endpoint: 
[10.60.0.60]:5801:dee1105f-ccfa-4d78-8d81-9de620d80022, connection: 
ClientConnection{alive=false, connectionId=2, 
channel=NioChannel{/10.60.0.60:19770->/10.60.0.60:5801}, 
remoteAddress=[10.60.0.60]:5801, lastReadTime=2026-03-05 15:23:45.124, 
lastWriteTime=2026-03-05 15:23:45.123, closedTime=2026-03-05 15:23:48.427, 
connected server version=5.1}
   2026-03-05 15:23:48,429 INFO  [.c.i.c.ClientConnectionManager] [main] - 
hz.client_1 [seatunnel] [5.1] Removed connection to endpoint: 
[10.60.0.61]:5801:7d814c4a-8f35-4897-af9f-7a863d884047, connection: 
ClientConnection{alive=false, connectionId=1, 
channel=NioChannel{/10.60.0.60:27154->/10.60.0.61:5801}, 
remoteAddress=[10.60.0.61]:5801, lastReadTime=2026-03-05 15:23:48.411, 
lastWriteTime=2026-03-05 15:23:43.625, closedTime=2026-03-05 15:23:48.428, 
connected server version=5.1}
   2026-03-05 15:23:48,429 INFO  [c.h.c.LifecycleService        ] [main] - 
hz.client_1 [seatunnel] [5.1] HazelcastClient 5.1 (20220228 - 21f20e7) is 
CLIENT_DISCONNECTED
   2026-03-05 15:23:48,431 INFO  [c.h.c.LifecycleService        ] [main] - 
hz.client_1 [seatunnel] [5.1] HazelcastClient 5.1 (20220228 - 21f20e7) is 
SHUTDOWN
   2026-03-05 15:23:48,431 INFO  [s.c.s.s.c.ClientExecuteCommand] [main] - 
Closed SeaTunnel client......
   2026-03-05 15:23:48,431 INFO  [s.c.s.s.c.ClientExecuteCommand] [main] - 
Closed metrics executor service ......
   2026-03-05 15:23:48,431 ERROR [o.a.s.c.s.SeaTunnel           ] [main] - 
   
   
===============================================================================
   
   
   2026-03-05 15:23:48,432 ERROR [o.a.s.c.s.SeaTunnel           ] [main] - 
Fatal Error, 
   
   2026-03-05 15:23:48,432 ERROR [o.a.s.c.s.SeaTunnel           ] [main] - 
Please submit bug report in https://github.com/apache/seatunnel/issues
   
   2026-03-05 15:23:48,432 ERROR [o.a.s.c.s.SeaTunnel           ] [main] - 
Reason:SeaTunnel job executed failed 
   
   2026-03-05 15:23:48,433 ERROR [o.a.s.c.s.SeaTunnel           ] [main] - 
Exception 
StackTrace:org.apache.seatunnel.core.starter.exception.CommandExecuteException: 
SeaTunnel job executed failed
           at 
org.apache.seatunnel.core.starter.seatunnel.command.ClientExecuteCommand.execute(ClientExecuteCommand.java:228)
           at org.apache.seatunnel.core.starter.SeaTunnel.run(SeaTunnel.java:40)
           at 
org.apache.seatunnel.core.starter.seatunnel.SeaTunnelClient.main(SeaTunnelClient.java:40)
   Caused by: 
org.apache.seatunnel.engine.common.exception.SeaTunnelEngineException: 
org.apache.seatunnel.engine.server.checkpoint.CheckpointException: 
CheckpointCoordinator inside have error.
           at 
org.apache.seatunnel.engine.server.checkpoint.CheckpointCoordinator.handleCoordinatorError(CheckpointCoordinator.java:282)
           at 
org.apache.seatunnel.engine.server.checkpoint.CheckpointCoordinator.handleCoordinatorError(CheckpointCoordinator.java:278)
           at 
org.apache.seatunnel.engine.server.checkpoint.CheckpointCoordinator.reportCheckpointErrorFromTask(CheckpointCoordinator.java:397)
           at 
org.apache.seatunnel.engine.server.checkpoint.CheckpointManager.reportCheckpointErrorFromTask(CheckpointManager.java:182)
           at 
org.apache.seatunnel.engine.server.checkpoint.operation.CheckpointErrorReportOperation.runInternal(CheckpointErrorReportOperation.java:48)
           at 
org.apache.seatunnel.engine.server.task.operation.TracingOperation.run(TracingOperation.java:42)
           at 
com.hazelcast.spi.impl.operationservice.Operation.call(Operation.java:189)
           at 
com.hazelcast.spi.impl.operationservice.impl.OperationRunnerImpl.call(OperationRunnerImpl.java:273)
           at 
com.hazelcast.spi.impl.operationservice.impl.OperationRunnerImpl.run(OperationRunnerImpl.java:248)
           at 
com.hazelcast.spi.impl.operationservice.impl.OperationRunnerImpl.run(OperationRunnerImpl.java:213)
           at 
com.hazelcast.spi.impl.operationexecutor.impl.OperationThread.process(OperationThread.java:175)
           at 
com.hazelcast.spi.impl.operationexecutor.impl.OperationThread.process(OperationThread.java:139)
           at 
com.hazelcast.spi.impl.operationexecutor.impl.OperationThread.executeRun(OperationThread.java:123)
           at 
com.hazelcast.internal.util.executor.HazelcastManagedThread.run(HazelcastManagedThread.java:102)
   Caused by: org.apache.seatunnel.common.utils.SeaTunnelException: 
org.apache.seatunnel.connectors.seatunnel.file.exception.FileConnectorException:
 ErrorCode:[FILE-07], ErrorDescription:[Format not support] - 
BinaryWriteStrategy only supports binary format, please read file with `BINARY` 
format, and do not change schema in the transform.
           at 
org.apache.seatunnel.connectors.seatunnel.file.sink.writer.BinaryWriteStrategy.setCatalogTable(BinaryWriteStrategy.java:57)
           at 
org.apache.seatunnel.connectors.seatunnel.file.sink.BaseMultipleTableFileSink.createWriteStrategy(BaseMultipleTableFileSink.java:127)
           at 
org.apache.seatunnel.connectors.seatunnel.file.sink.BaseMultipleTableFileSink.createWriter(BaseMultipleTableFileSink.java:106)
           at 
org.apache.seatunnel.connectors.seatunnel.file.sink.BaseMultipleTableFileSink.createWriter(BaseMultipleTableFileSink.java:52)
           at 
org.apache.seatunnel.api.sink.multitablesink.MultiTableSink.createWriter(MultiTableSink.java:82)
           at 
org.apache.seatunnel.engine.server.task.flow.SinkFlowLifeCycle.restoreState(SinkFlowLifeCycle.java:342)
           at 
org.apache.seatunnel.engine.server.task.SeaTunnelTask.lambda$restoreState$16(SeaTunnelTask.java:401)
           at 
java.util.stream.ForEachOps$ForEachOp$OfRef.accept(ForEachOps.java:183)
           at 
java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:193)
           at 
java.util.stream.ReferencePipeline$2$1.accept(ReferencePipeline.java:175)
           at 
java.util.ArrayList$ArrayListSpliterator.forEachRemaining(ArrayList.java:1384)
           at 
java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:482)
           at 
java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:472)
           at 
java.util.stream.ForEachOps$ForEachOp.evaluateSequential(ForEachOps.java:150)
           at 
java.util.stream.ForEachOps$ForEachOp$OfRef.evaluateSequential(ForEachOps.java:173)
           at 
java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
           at 
java.util.stream.ReferencePipeline.forEach(ReferencePipeline.java:485)
           at 
org.apache.seatunnel.engine.server.task.SeaTunnelTask.restoreState(SeaTunnelTask.java:398)
           at 
org.apache.seatunnel.engine.server.checkpoint.operation.NotifyTaskRestoreOperation.lambda$null$0(NotifyTaskRestoreOperation.java:107)
           at 
java.util.concurrent.CompletableFuture$AsyncRun.run(CompletableFuture.java:1640)
           at 
org.apache.seatunnel.api.tracing.MDCRunnable.run(MDCRunnable.java:43)
           at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
           at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
           at java.lang.Thread.run(Thread.java:748)
   
           ... 12 more
   
           at 
org.apache.seatunnel.core.starter.seatunnel.command.ClientExecuteCommand.execute(ClientExecuteCommand.java:220)
           ... 2 more
    
   2026-03-05 15:23:48,433 ERROR [o.a.s.c.s.SeaTunnel           ] [main] - 
   
===============================================================================
   
   
   
   Exception in thread "main" 
org.apache.seatunnel.core.starter.exception.CommandExecuteException: SeaTunnel 
job executed failed
           at 
org.apache.seatunnel.core.starter.seatunnel.command.ClientExecuteCommand.execute(ClientExecuteCommand.java:228)
           at org.apache.seatunnel.core.starter.SeaTunnel.run(SeaTunnel.java:40)
           at 
org.apache.seatunnel.core.starter.seatunnel.SeaTunnelClient.main(SeaTunnelClient.java:40)
   Caused by: 
org.apache.seatunnel.engine.common.exception.SeaTunnelEngineException: 
org.apache.seatunnel.engine.server.checkpoint.CheckpointException: 
CheckpointCoordinator inside have error.
           at 
org.apache.seatunnel.engine.server.checkpoint.CheckpointCoordinator.handleCoordinatorError(CheckpointCoordinator.java:282)
           at 
org.apache.seatunnel.engine.server.checkpoint.CheckpointCoordinator.handleCoordinatorError(CheckpointCoordinator.java:278)
           at 
org.apache.seatunnel.engine.server.checkpoint.CheckpointCoordinator.reportCheckpointErrorFromTask(CheckpointCoordinator.java:397)
           at 
org.apache.seatunnel.engine.server.checkpoint.CheckpointManager.reportCheckpointErrorFromTask(CheckpointManager.java:182)
           at 
org.apache.seatunnel.engine.server.checkpoint.operation.CheckpointErrorReportOperation.runInternal(CheckpointErrorReportOperation.java:48)
           at 
org.apache.seatunnel.engine.server.task.operation.TracingOperation.run(TracingOperation.java:42)
           at 
com.hazelcast.spi.impl.operationservice.Operation.call(Operation.java:189)
           at 
com.hazelcast.spi.impl.operationservice.impl.OperationRunnerImpl.call(OperationRunnerImpl.java:273)
           at 
com.hazelcast.spi.impl.operationservice.impl.OperationRunnerImpl.run(OperationRunnerImpl.java:248)
           at 
com.hazelcast.spi.impl.operationservice.impl.OperationRunnerImpl.run(OperationRunnerImpl.java:213)
           at 
com.hazelcast.spi.impl.operationexecutor.impl.OperationThread.process(OperationThread.java:175)
           at 
com.hazelcast.spi.impl.operationexecutor.impl.OperationThread.process(OperationThread.java:139)
           at 
com.hazelcast.spi.impl.operationexecutor.impl.OperationThread.executeRun(OperationThread.java:123)
           at 
com.hazelcast.internal.util.executor.HazelcastManagedThread.run(HazelcastManagedThread.java:102)
   Caused by: org.apache.seatunnel.common.utils.SeaTunnelException: 
org.apache.seatunnel.connectors.seatunnel.file.exception.FileConnectorException:
 ErrorCode:[FILE-07], ErrorDescription:[Format not support] - 
BinaryWriteStrategy only supports binary format, please read file with `BINARY` 
format, and do not change schema in the transform.
           at 
org.apache.seatunnel.connectors.seatunnel.file.sink.writer.BinaryWriteStrategy.setCatalogTable(BinaryWriteStrategy.java:57)
           at 
org.apache.seatunnel.connectors.seatunnel.file.sink.BaseMultipleTableFileSink.createWriteStrategy(BaseMultipleTableFileSink.java:127)
           at 
org.apache.seatunnel.connectors.seatunnel.file.sink.BaseMultipleTableFileSink.createWriter(BaseMultipleTableFileSink.java:106)
           at 
org.apache.seatunnel.connectors.seatunnel.file.sink.BaseMultipleTableFileSink.createWriter(BaseMultipleTableFileSink.java:52)
           at 
org.apache.seatunnel.api.sink.multitablesink.MultiTableSink.createWriter(MultiTableSink.java:82)
           at 
org.apache.seatunnel.engine.server.task.flow.SinkFlowLifeCycle.restoreState(SinkFlowLifeCycle.java:342)
           at 
org.apache.seatunnel.engine.server.task.SeaTunnelTask.lambda$restoreState$16(SeaTunnelTask.java:401)
           at 
java.util.stream.ForEachOps$ForEachOp$OfRef.accept(ForEachOps.java:183)
           at 
java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:193)
           at 
java.util.stream.ReferencePipeline$2$1.accept(ReferencePipeline.java:175)
           at 
java.util.ArrayList$ArrayListSpliterator.forEachRemaining(ArrayList.java:1384)
           at 
java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:482)
           at 
java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:472)
           at 
java.util.stream.ForEachOps$ForEachOp.evaluateSequential(ForEachOps.java:150)
           at 
java.util.stream.ForEachOps$ForEachOp$OfRef.evaluateSequential(ForEachOps.java:173)
           at 
java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
           at 
java.util.stream.ReferencePipeline.forEach(ReferencePipeline.java:485)
           at 
org.apache.seatunnel.engine.server.task.SeaTunnelTask.restoreState(SeaTunnelTask.java:398)
           at 
org.apache.seatunnel.engine.server.checkpoint.operation.NotifyTaskRestoreOperation.lambda$null$0(NotifyTaskRestoreOperation.java:107)
           at 
java.util.concurrent.CompletableFuture$AsyncRun.run(CompletableFuture.java:1640)
           at 
org.apache.seatunnel.api.tracing.MDCRunnable.run(MDCRunnable.java:43)
           at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
           at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
           at java.lang.Thread.run(Thread.java:748)
   
           ... 12 more
   
           at 
org.apache.seatunnel.core.starter.seatunnel.command.ClientExecuteCommand.execute(ClientExecuteCommand.java:220)
           ... 2 more
   2026-03-05 15:23:48,434 INFO  [s.c.s.s.c.ClientExecuteCommand] 
[SeaTunnel-CompletableFuture-Thread-0] - run shutdown hook because get close 
signal
   [root@hadoop001 seatunnel-2.3.11]#
   [root@hadoop001 seatunnel-2.3.11]# ./bin/seatunnel.sh --config ./config/aa.config
   
   If I remove the filter `file_filter_pattern = "/opt/module/qingyang/.*\\.pdf"`, the job collects the files successfully.
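
   For what it's worth, the symptom would be consistent with the connector matching `file_filter_pattern` against the bare file name rather than the full path (an assumption on my part; I have not confirmed this in the 2.3.x source). `java.util.regex.Pattern.matches` requires the entire candidate string to match, so a path-anchored pattern would then match zero files, and the sink would fall back to the default catalog table seen in the log (`schema.default.default`), triggering the `BinaryWriteStrategy` error. A minimal sketch of the matching semantics:

```java
import java.util.regex.Pattern;

// Demonstrates how a path-anchored filter pattern can match nothing if
// the candidate string is the bare file name (hypothetical example; the
// actual matching target inside the connector is an assumption here).
public class FilterPatternDemo {

    // Pattern.matches requires the ENTIRE candidate string to match.
    static boolean matches(String pattern, String candidate) {
        return Pattern.matches(pattern, candidate);
    }

    public static void main(String[] args) {
        String pathAnchored = "/opt/module/qingyang/.*\\.pdf";
        String nameOnly = ".*\\.pdf";

        // Against a bare file name, the path-anchored pattern never matches:
        System.out.println(matches(pathAnchored, "a.pdf"));                        // false
        System.out.println(matches(nameOnly, "a.pdf"));                            // true

        // Against the full path, both patterns match:
        System.out.println(matches(pathAnchored, "/opt/module/qingyang/a.pdf"));   // true
        System.out.println(matches(nameOnly, "/opt/module/qingyang/a.pdf"));       // true
    }
}
```

   If that is indeed the cause, a name-only pattern such as `file_filter_pattern = ".*\\.pdf"` might keep the filtering while letting the job succeed.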
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the URL above to go to the specific comment.