luzongzhu commented on PR #7246:
URL: https://github.com/apache/seatunnel/pull/7246#issuecomment-2275624912

   > Can we extend a kerberos test on our iceberg e2e?
   
   For reference, here is a log from a production run where the Iceberg sink authenticates against a kerberized Hive metastore using a keytab (see the `Start Kerberos authentication` and `Login successful` lines below):
   ```
   [LOG-PATH]: 
/opt/dolphinscheduler/logs/20240801/13920631639648_98-319758-319902.log, 
[HOST]:  Host{address='xxxxxx:1234', ip='xxxxxx', port=1234}
   [INFO] 2024-08-01 19:15:05.170 +0800 - Begin to pulling task
   [INFO] 2024-08-01 19:15:05.179 +0800 - Begin to initialize task
   [INFO] 2024-08-01 19:15:05.179 +0800 - Set task startTime: Thu Aug 01 
19:15:05 CST 2024
   [INFO] 2024-08-01 19:15:05.181 +0800 - Set task envFile: 
/opt/dolphinscheduler/conf/dolphinscheduler_env.sh
   [INFO] 2024-08-01 19:15:05.182 +0800 - Set task appId: 319758_319902
   [INFO] 2024-08-01 19:15:05.182 +0800 - End initialize task
   [INFO] 2024-08-01 19:15:05.210 +0800 - Set task status to 
TaskExecutionStatus{code=1, desc='running'}
   [INFO] 2024-08-01 19:15:05.285 +0800 - TenantCode:admin check success
   [INFO] 2024-08-01 19:15:05.289 +0800 - 
ProcessExecDir:/tmp/dolphinscheduler/exec/process/admin/10633993590912/13920631639648_98/319758/319902
 check success
   [INFO] 2024-08-01 19:15:05.289 +0800 - Resources:{} check success
   [INFO] 2024-08-01 19:15:05.632 +0800 - Task plugin: DATAINTEGRATION create 
success
   [INFO] 2024-08-01 19:15:07.681 +0800 - Success initialized task plugin 
instance success
   [INFO] 2024-08-01 19:15:07.681 +0800 - Success set taskVarPool: null
   [INFO] 2024-08-01 19:15:07.926 +0800 - tenantCode :admin, task 
dir:/tmp/dolphinscheduler/exec/process/admin/10633993590912/13920631639648_98/319758/319902
   [INFO] 2024-08-01 19:15:07.927 +0800 - generate script 
file:/tmp/dolphinscheduler/exec/process/admin/10633993590912/13920631639648_98/319758/319902/seatunnel_319758_319902.conf
   [INFO] 2024-08-01 19:15:07.957 +0800 - SeaTunnel task command: 
${SEATUNNEL_HOME}/bin/seatunnel.sh --config 
/tmp/dolphinscheduler/exec/process/admin/10633993590912/13920631639648_98/319758/319902/seatunnel_319758_319902.conf
   [INFO] 2024-08-01 19:15:07.957 +0800 - log path 
/opt/dolphinscheduler/logs/20240801/13920631639648_98-319758-319902.log,task 
log name taskAppId=TASK-20240801-13920631639648_98-319758-319902
   [INFO] 2024-08-01 19:15:07.959 +0800 - Begin to create command 
file:/tmp/dolphinscheduler/exec/process/admin/10633993590912/13920631639648_98/319758/319902/319758_319902.command
   [INFO] 2024-08-01 19:15:07.961 +0800 - Success create command file, command: 
#!/bin/bash
   BASEDIR=$(cd `dirname $0`; pwd)
   cd $BASEDIR
   source /opt/dolphinscheduler/conf/dolphinscheduler_env.sh
   ${SEATUNNEL_HOME}/bin/seatunnel.sh --config 
/tmp/dolphinscheduler/exec/process/admin/10633993590912/13920631639648_98/319758/319902/seatunnel_319758_319902.conf
   [INFO] 2024-08-01 19:15:07.973 +0800 - task run command: sudo -u admin -E 
bash 
/tmp/dolphinscheduler/exec/process/admin/10633993590912/13920631639648_98/319758/319902/319758_319902.command
   [INFO] 2024-08-01 19:15:07.977 +0800 - process start, process id is: 141
   [INFO] 2024-08-01 19:15:08.980 +0800 -  -> Aug 01, 2024 7:15:08 PM 
com.hazelcast.internal.config.AbstractConfigLocator
        INFO: Loading configuration '/opt/soft/seatunnel/config/seatunnel.yaml' 
from System property 'seatunnel.config'
        Aug 01, 2024 7:15:08 PM 
com.hazelcast.internal.config.AbstractConfigLocator
        INFO: Using configuration file at 
/opt/soft/seatunnel/config/seatunnel.yaml
        Aug 01, 2024 7:15:08 PM 
org.apache.seatunnel.engine.common.config.SeaTunnelConfig
        INFO: seatunnel.home is 
/tmp/dolphinscheduler/exec/process/admin/10633993590912/13920631639648_98/319758/319902
   [INFO] 2024-08-01 19:15:09.982 +0800 -  -> Aug 01, 2024 7:15:09 PM 
com.hazelcast.internal.config.AbstractConfigLocator
        INFO: Loading configuration '/opt/soft/seatunnel/config/hazelcast.yaml' 
from System property 'hazelcast.config'
        Aug 01, 2024 7:15:09 PM 
com.hazelcast.internal.config.AbstractConfigLocator
        INFO: Using configuration file at 
/opt/soft/seatunnel/config/hazelcast.yaml
        Aug 01, 2024 7:15:09 PM 
com.hazelcast.internal.config.AbstractConfigLocator
        INFO: Loading configuration 
'/opt/soft/seatunnel/config/hazelcast-client.yaml' from System property 
'hazelcast.client.config'
        Aug 01, 2024 7:15:09 PM 
com.hazelcast.internal.config.AbstractConfigLocator
        INFO: Using configuration file at 
/opt/soft/seatunnel/config/hazelcast-client.yaml
        2024-08-01 19:15:09,951 INFO  [.c.i.s.ClientInvocationService] [main] - 
hz.client_1 [seatunnel] [5.1] Running with 2 response threads, dynamic=true
   [INFO] 2024-08-01 19:15:10.989 +0800 -  -> 2024-08-01 19:15:10,043 INFO  
[c.h.c.LifecycleService        ] [main] - hz.client_1 [seatunnel] [5.1] 
HazelcastClient 5.1 (20220228 - 21f20e7) is STARTING
        2024-08-01 19:15:10,044 INFO  [c.h.c.LifecycleService        ] [main] - 
hz.client_1 [seatunnel] [5.1] HazelcastClient 5.1 (20220228 - 21f20e7) is 
STARTED
        2024-08-01 19:15:10,087 INFO  [.c.i.c.ClientConnectionManager] [main] - 
hz.client_1 [seatunnel] [5.1] Trying to connect to cluster: seatunnel
        2024-08-01 19:15:10,092 INFO  [.c.i.c.ClientConnectionManager] [main] - 
hz.client_1 [seatunnel] [5.1] Trying to connect to [10.109.xx.xx]:5801
        2024-08-01 19:15:10,201 INFO  [c.h.c.LifecycleService        ] [main] - 
hz.client_1 [seatunnel] [5.1] HazelcastClient 5.1 (20220228 - 21f20e7) is 
CLIENT_CONNECTED
        2024-08-01 19:15:10,201 INFO  [.c.i.c.ClientConnectionManager] [main] - 
hz.client_1 [seatunnel] [5.1] Authenticated with server 
[10.109.xx.xx]:5801:f85e791b-bcb5-45b0-a754-f7855e2b343b, server version: 5.1, 
local address: /xxxxxx:3105
        2024-08-01 19:15:10,203 INFO  [c.h.i.d.Diagnostics           ] [main] - 
hz.client_1 [seatunnel] [5.1] Diagnostics disabled. To enable add 
-Dhazelcast.diagnostics.enabled=true to the JVM arguments.
        2024-08-01 19:15:10,253 INFO  [c.h.c.i.s.ClientClusterService] 
[hz.client_1.event-3] - hz.client_1 [seatunnel] [5.1] 
        
        Members [1] {
                Member [10.109.xx.xx]:5801 - 
f85e791b-bcb5-45b0-a754-f7855e2b343b
        }
        
        2024-08-01 19:15:10,325 INFO  [.c.i.s.ClientStatisticsService] [main] - 
Client statistics is enabled with period 5 seconds.
        2024-08-01 19:15:10,579 INFO  [.ClientJobExecutionEnvironment] [main] - 
add common jar in plugins :[]
        2024-08-01 19:15:10,664 INFO  [o.a.s.c.s.u.ConfigBuilder     ] [main] - 
Loading config file from path: 
/tmp/dolphinscheduler/exec/process/admin/10633993590912/13920631639648_98/319758/319902/seatunnel_319758_319902.conf
        2024-08-01 19:15:10,718 INFO  [o.a.s.c.s.u.ConfigShadeUtils  ] [main] - 
Load config shade spi: [base64]
        2024-08-01 19:15:10,839 INFO  [o.a.s.c.s.u.ConfigBuilder     ] [main] - 
Parsed config file: 
        {
            "env" : {
                "job.name" : "1818968701584089089",
                "job.mode" : "BATCH",
                "execution.parallelism" : 1,
                "shade.identifier" : "base64"
            },
            "source" : [
                {
                    "password" : "123456",
                    "driver" : "org.postgresql.Driver",
                    "query" : "SELECT mandt,matnr,spras,maktx,maktg FROM 
bi_source.makt_kafka_source",
                    "result_table_name" : "ST_GLOBAL_VIEW_NAME",
                    "plugin_name" : "Jdbc",
                    "user" : "postgres",
                    "url" : "jdbc:postgresql://xxxxxx:5432/test"
                }
            ],
            "transform" : [
                {
                    "query" : "SELECT mandt as mandt,matnr as matnr,spras as 
spras,maktx as maktx,maktg as maktg,TO_DATE('2024-08-01 19:15:05', 'yyyy-MM-dd 
HH:mm:ss') as fdop_import_time FROM ST_GLOBAL_VIEW_NAME",
                    "source_table_name" : "ST_GLOBAL_VIEW_NAME",
                    "result_table_name" : "ST_GLOBAL_VIEW_NAME",
                    "plugin_name" : "Sql"
                }
            ],
            "sink" : [
                {
                    "catalog_name" : "iceberg",
                    "iceberg.table.write-props" : {
                        "write.format.default" : "parquet",
                        "write.target-file-size-bytes" : "536870912"
                    },
                    "iceberg.catalog.config" : {
                        "type" : "hive",
                        "uri" : "thrift://hdp002.test.io:9083",
                        "warehouse" : 
"hdfs://hdp002.test.io:8020/warehouse/tablespace/managed/hive"
                    },
                    "kerberos_principal" : "hive/[email protected]",
                    "case_sensitive" : true,
                    "namespace" : "tst",
                    "source_table_name" : "ST_GLOBAL_VIEW_NAME",
                    "kerberos_krb5_conf_path" : "/data/test/hdp/krb5.conf",
                    "plugin_name" : "Iceberg",
                    "iceberg.hadoop-conf-path" : "/data/test/hdp",
                    "kerberos_keytab_path" : 
"/data/test/hdp/hive.service.keytab",
                    "table" : "ods_makt_kafka_iceberg"
                }
            ]
        }
        
        2024-08-01 19:15:10,863 INFO  [.s.p.d.AbstractPluginDiscovery] [main] - 
Load SeaTunnelSink Plugin from /opt/soft/seatunnel/connectors
        2024-08-01 19:15:10,883 INFO  [.s.p.d.AbstractPluginDiscovery] [main] - 
Discovery plugin jar for: PluginIdentifier{engineType='seatunnel', 
pluginType='source', pluginName='Jdbc'} at: 
file:/opt/soft/seatunnel/connectors/connector-jdbc-2.3.5.jar
        2024-08-01 19:15:10,885 INFO  [.s.p.d.AbstractPluginDiscovery] [main] - 
Discovery plugin jar for: PluginIdentifier{engineType='seatunnel', 
pluginType='sink', pluginName='Iceberg'} at: 
file:/opt/soft/seatunnel/connectors/connector-iceberg-2.3.5.jar
        2024-08-01 19:15:10,889 WARN  [o.a.s.a.c.u.ConfigUtil        ] [main] - 
Option 'source_table_name' is a List, and it is recommended to configure it as 
["string1","string2"]; we will only use ',' to split the String into a list.
        2024-08-01 19:15:10,889 WARN  [o.a.s.a.c.u.ConfigUtil        ] [main] - 
Option 'source_table_name' is a List, and it is recommended to configure it as 
["string1","string2"]; we will only use ',' to split the String into a list.
        2024-08-01 19:15:10,895 INFO  [p.MultipleTableJobConfigParser] [main] - 
start generating all sources.
        2024-08-01 19:15:10,971 INFO  [o.a.s.c.s.j.u.JdbcCatalogUtils] [main] - 
Loading catalog tables for catalog : class 
org.apache.seatunnel.connectors.seatunnel.jdbc.catalog.psql.PostgresCatalog
   [INFO] 2024-08-01 19:15:11.990 +0800 -  -> SLF4J: Failed to load class 
"org.slf4j.impl.StaticLoggerBinder".
        SLF4J: Defaulting to no-operation (NOP) logger implementation
        SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for 
further details.
        2024-08-01 19:15:11,456 INFO  [.s.c.s.j.c.AbstractJdbcCatalog] [main] - 
Catalog Postgres established connection to jdbc:postgresql://xxxxxx:5432/test
        2024-08-01 19:15:11,498 INFO  [o.a.s.c.s.j.u.JdbcCatalogUtils] [main] - 
Loaded catalog table : default.default.default, 
JdbcSourceTable(tablePath=default.default.default, query=SELECT 
mandt,matnr,spras,maktx,maktg FROM bi_source.makt_kafka_source, 
partitionColumn=null, partitionNumber=10, partitionStart=null, 
partitionEnd=null, 
catalogTable=CatalogTable{tableId=jdbc_catalog.default.default.default, 
tableSchema=TableSchema(columns=[PhysicalColumn(super=Column(name=mandt, 
dataType=STRING, columnLength=12, scale=null, nullable=false, 
defaultValue=null, comment=null, sourceType=varchar(3), options=null, 
isUnsigned=false, isZeroFill=false, bitLen=96, longColumnLength=12)), 
PhysicalColumn(super=Column(name=matnr, dataType=STRING, columnLength=160, 
scale=null, nullable=false, defaultValue=null, comment=null, 
sourceType=varchar(40), options=null, isUnsigned=false, isZeroFill=false, 
bitLen=1280, longColumnLength=160)), PhysicalColumn(super=Column(name=spras, 
dataType=STRING, columnLength=
 4, scale=null, nullable=false, defaultValue=null, comment=null, 
sourceType=varchar(1), options=null, isUnsigned=false, isZeroFill=false, 
bitLen=32, longColumnLength=4)), PhysicalColumn(super=Column(name=maktx, 
dataType=STRING, columnLength=160, scale=null, nullable=true, 
defaultValue=null, comment=null, sourceType=varchar(40), options=null, 
isUnsigned=false, isZeroFill=false, bitLen=1280, longColumnLength=160)), 
PhysicalColumn(super=Column(name=maktg, dataType=STRING, columnLength=160, 
scale=null, nullable=true, defaultValue=null, comment=null, 
sourceType=varchar(40), options=null, isUnsigned=false, isZeroFill=false, 
bitLen=1280, longColumnLength=160))], primaryKey=null, constraintKeys=[]), 
options={}, partitionKeys=[], comment='', catalogName='jdbc_catalog'})
        2024-08-01 19:15:11,498 INFO  [o.a.s.c.s.j.u.JdbcCatalogUtils] [main] - 
Loaded 1 catalog tables for catalog : class 
org.apache.seatunnel.connectors.seatunnel.jdbc.catalog.psql.PostgresCatalog
        2024-08-01 19:15:11,499 INFO  [.s.c.s.j.c.AbstractJdbcCatalog] [main] - 
Catalog Postgres closing
        2024-08-01 19:15:11,500 INFO  [o.a.s.a.t.f.FactoryUtil       ] [main] - 
get the CatalogTable from source Jdbc: jdbc_catalog.default.default.default
        2024-08-01 19:15:11,512 INFO  [.s.p.d.AbstractPluginDiscovery] [main] - 
Load SeaTunnelSource Plugin from /opt/soft/seatunnel/connectors
        2024-08-01 19:15:11,517 INFO  [.s.p.d.AbstractPluginDiscovery] [main] - 
Discovery plugin jar for: PluginIdentifier{engineType='seatunnel', 
pluginType='source', pluginName='Jdbc'} at: 
file:/opt/soft/seatunnel/connectors/connector-jdbc-2.3.5.jar
        2024-08-01 19:15:11,519 INFO  [p.MultipleTableJobConfigParser] [main] - 
start generating all transforms.
        2024-08-01 19:15:11,522 INFO  [.s.p.d.AbstractPluginDiscovery] [main] - 
Load SeaTunnelTransform Plugin from /opt/soft/seatunnel/lib
        2024-08-01 19:15:11,522 WARN  [o.a.s.a.c.u.ConfigUtil        ] [main] - 
Option 'source_table_name' is a List, and it is recommended to configure it as 
["string1","string2"]; we will only use ',' to split the String into a list.
        2024-08-01 19:15:11,529 WARN  [o.a.s.a.c.u.ConfigUtil        ] [main] - 
Option 'source_table_name' is a List, and it is recommended to configure it as 
["string1","string2"]; we will only use ',' to split the String into a list.
        2024-08-01 19:15:11,599 INFO  [p.MultipleTableJobConfigParser] [main] - 
start generating all sinks.
        2024-08-01 19:15:11,600 WARN  [o.a.s.a.c.u.ConfigUtil        ] [main] - 
Option 'source_table_name' is a List, and it is recommended to configure it as 
["string1","string2"]; we will only use ',' to split the String into a list.
        2024-08-01 19:15:11,602 INFO  [.s.p.d.AbstractPluginDiscovery] [main] - 
Load SeaTunnelSink Plugin from /opt/soft/seatunnel/connectors
        2024-08-01 19:15:11,603 INFO  [.s.p.d.AbstractPluginDiscovery] [main] - 
Discovery plugin jar for: PluginIdentifier{engineType='seatunnel', 
pluginType='sink', pluginName='Iceberg'} at: 
file:/opt/soft/seatunnel/connectors/connector-iceberg-2.3.5.jar
        2024-08-01 19:15:11,872 WARN  [o.a.h.u.NativeCodeLoader      ] [main] - 
Unable to load native-hadoop library for your platform... using builtin-java 
classes where applicable
        2024-08-01 19:15:11,925 INFO  [a.s.c.s.i.IcebergCatalogLoader] [main] - 
Start Kerberos authentication using principal hive/[email protected] and 
keytab /data/test/hdp/hive.service.keytab
   [INFO] 2024-08-01 19:15:12.991 +0800 -  -> 2024-08-01 19:15:12,015 INFO  
[o.a.h.s.UserGroupInformation  ] [main] - Login successful for user 
hive/[email protected] using keytab file /data/test/hdp/hive.service.keytab
        2024-08-01 19:15:12,016 INFO  [a.s.c.s.i.IcebergCatalogLoader] [main] - 
Kerberos authentication successful,UGIhive/[email protected] 
(auth:KERBEROS)
        2024-08-01 19:15:12,016 INFO  [a.s.c.s.i.IcebergCatalogLoader] [main] - 
Hadoop config initialized: org.apache.hadoop.hdfs.HdfsConfiguration
        2024-08-01 19:15:12,184 INFO  [a.s.a.s.SaveModeExecuteWrapper] [main] - 
Executing save mode for table: tst.default.ods_makt_kafka_iceberg, with 
SchemaSaveMode: CREATE_SCHEMA_WHEN_NOT_EXIST, DataSaveMode: APPEND_DATA using 
Catalog: Iceberg
        2024-08-01 19:15:12,217 INFO  [o.a.h.h.c.HiveConf            ] [main] - 
Found configuration file null
        2024-08-01 19:15:12,422 WARN  [o.a.h.h.c.HiveConf            ] [main] - 
HiveConf of name hive.hook.proto.base-directory does not exist
        2024-08-01 19:15:12,423 WARN  [o.a.h.h.c.HiveConf            ] [main] - 
HiveConf of name hive.strict.managed.tables does not exist
        2024-08-01 19:15:12,423 WARN  [o.a.h.h.c.HiveConf            ] [main] - 
HiveConf of name hive.stats.fetch.partition.stats does not exist
        2024-08-01 19:15:12,424 WARN  [o.a.h.h.c.HiveConf            ] [main] - 
HiveConf of name hive.heapsize does not exist
        2024-08-01 19:15:12,480 INFO  [o.a.h.h.m.HiveMetaStoreClient ] [main] - 
Trying to connect to metastore with URI thrift://hdp002.test.io:9083
        2024-08-01 19:15:12,488 INFO  [o.a.h.h.m.HiveMetaStoreClient ] [main] - 
HMSC::open(): Could not find delegation token. Creating KERBEROS-based thrift 
connection.
        2024-08-01 19:15:12,696 INFO  [o.a.h.h.m.HiveMetaStoreClient ] [main] - 
Opened a connection to metastore, current connections: 1
        2024-08-01 19:15:12,697 INFO  [o.a.h.h.m.HiveMetaStoreClient ] [main] - 
Connected to metastore.
        2024-08-01 19:15:12,697 INFO  [.h.h.m.RetryingMetaStoreClient] [main] - 
RetryingMetaStoreClient proxy=class 
org.apache.hadoop.hive.metastore.HiveMetaStoreClient 
ugi=hive/[email protected] (auth:KERBEROS) retries=24 delay=5 lifetime=0
   [INFO] 2024-08-01 19:15:13.992 +0800 -  -> 2024-08-01 19:15:12,999 INFO  
[i.BaseMetastoreTableOperations] [main] - Refreshing table metadata from new 
version: 
hdfs://ns1/warehouse/tablespace/managed/hive/tst.db/ods_makt_kafka_iceberg/metadata/00000-0e52a973-aa2d-4542-8878-7c00c89a83f7.metadata.json
        2024-08-01 19:15:13,371 WARN  [o.a.h.h.s.DomainSocketFactory ] [main] - 
The short-circuit local reads feature cannot be used because libhadoop cannot 
be loaded.
   [INFO] 2024-08-01 19:15:14.993 +0800 -  -> 2024-08-01 19:15:14,148 INFO  
[o.a.i.BaseMetastoreCatalog    ] [main] - Table loaded by catalog: 
iceberg.tst.ods_makt_kafka_iceberg
        2024-08-01 19:15:14,206 INFO  [o.a.s.e.c.j.ClientJobProxy    ] [main] - 
Start submit job, job id: 871350342623690753, with plugin jar 
[file:/opt/soft/seatunnel/connectors/connector-jdbc-2.3.5.jar, 
file:/opt/soft/seatunnel/connectors/connector-iceberg-2.3.5.jar]
        2024-08-01 19:15:14,728 INFO  [o.a.s.e.c.j.ClientJobProxy    ] [main] - 
Submit job finished, job id: 871350342623690753, job name: 1818968701584089089
        2024-08-01 19:15:14,809 WARN  [o.a.s.e.c.j.JobMetricsRunner  ] 
[job-metrics-runner-871350342623690753] - Failed to get job metrics summary, it 
maybe first-run
   [INFO] 2024-08-01 19:15:18.994 +0800 -  -> 2024-08-01 19:15:18,949 INFO  
[o.a.s.e.c.j.ClientJobProxy    ] [main] - Job (871350342623690753) end with 
state FINISHED
   [INFO] 2024-08-01 19:15:19.104 +0800 - process has exited. execute 
path:/tmp/dolphinscheduler/exec/process/admin/10633993590912/13920631639648_98/319758/319902,
 processId:141 ,exitStatusCode:0 ,processWaitForStatus:true ,processExitValue:0
   [INFO] 2024-08-01 19:15:19.108 +0800 - Send task execute result to master, 
the current task status: TaskExecutionStatus{code=7, desc='success'}
   [INFO] 2024-08-01 19:15:19.109 +0800 - Remove the current task execute 
context from worker cache
   [INFO] 2024-08-01 19:15:19.109 +0800 - The current execute mode isn't 
develop mode, will clear the task execute file: 
/tmp/dolphinscheduler/exec/process/admin/10633993590912/13920631639648_98/319758/319902
   [INFO] 2024-08-01 19:15:19.119 +0800 - Success clear the task execute file: 
/tmp/dolphinscheduler/exec/process/admin/10633993590912/13920631639648_98/319758/319902
   [INFO] 2024-08-01 19:15:19.995 +0800 -  -> 2024-08-01 19:15:19,021 INFO  
[s.c.s.s.c.ClientExecuteCommand] [main] - 
        ***********************************************
                   Job Statistic Information
        ***********************************************
        Start Time                : 2024-08-01 19:15:10
        End Time                  : 2024-08-01 19:15:18
        Total Time(s)             :                   8
        Total Read Count          :                 179
        Total Write Count         :                 179
        Total Failed Count        :                   0
        ***********************************************
        
        2024-08-01 19:15:19,021 INFO  [c.h.c.LifecycleService        ] [main] - 
hz.client_1 [seatunnel] [5.1] HazelcastClient 5.1 (20220228 - 21f20e7) is 
SHUTTING_DOWN
        2024-08-01 19:15:19,055 INFO  [.c.i.c.ClientConnectionManager] [main] - 
hz.client_1 [seatunnel] [5.1] Removed connection to endpoint: 
[10.109.xx.xx]:5801:f85e791b-bcb5-45b0-a754-f7855e2b343b, connection: 
ClientConnection{alive=false, connectionId=1, 
channel=NioChannel{/xxxxxx:3105->/10.109.xx.xx:5801}, 
remoteAddress=[10.109.xx.xx]:5801, lastReadTime=2024-08-01 19:15:18.998, 
lastWriteTime=2024-08-01 19:15:18.949, closedTime=2024-08-01 19:15:19.025, 
connected server version=5.1}
        2024-08-01 19:15:19,055 INFO  [c.h.c.LifecycleService        ] [main] - 
hz.client_1 [seatunnel] [5.1] HazelcastClient 5.1 (20220228 - 21f20e7) is 
CLIENT_DISCONNECTED
        2024-08-01 19:15:19,058 INFO  [c.h.c.LifecycleService        ] [main] - 
hz.client_1 [seatunnel] [5.1] HazelcastClient 5.1 (20220228 - 21f20e7) is 
SHUTDOWN
        2024-08-01 19:15:19,058 INFO  [s.c.s.s.c.ClientExecuteCommand] [main] - 
Closed SeaTunnel client......
        2024-08-01 19:15:19,058 INFO  [s.c.s.s.c.ClientExecuteCommand] [main] - 
Closed metrics executor service ......
        2024-08-01 19:15:19,060 INFO  [s.c.s.s.c.ClientExecuteCommand] 
[ForkJoinPool.commonPool-worker-1] - run shutdown hook because get close signal
   [INFO] 2024-08-01 19:15:19.997 +0800 - FINALIZE_SESSION
   
   ```
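
A kerberos e2e case could reuse the same sink options shown in the parsed config above. Below is a minimal sketch of such a job config: the option names (`kerberos_principal`, `kerberos_keytab_path`, `kerberos_krb5_conf_path`, `iceberg.catalog.config`) are taken from the log, while the hostnames, principal, namespace/table names, and file paths are placeholders for whatever the e2e container environment would provide; `FakeSource` stands in for the production Jdbc source:

```
env {
  job.mode = "BATCH"
  execution.parallelism = 1
}

source {
  FakeSource {
    result_table_name = "kerberos_src"
    schema = {
      fields {
        id = "int"
        name = "string"
      }
    }
  }
}

sink {
  Iceberg {
    source_table_name = "kerberos_src"
    catalog_name = "iceberg"
    namespace = "tst"
    table = "iceberg_kerberos_sink"
    "iceberg.catalog.config" = {
      type = "hive"
      # placeholder kerberized metastore / warehouse for the e2e environment
      uri = "thrift://metastore.example.com:9083"
      warehouse = "hdfs://namenode.example.com:8020/warehouse"
    }
    kerberos_principal = "hive/[email protected]"
    kerberos_keytab_path = "/tmp/hive.service.keytab"
    kerberos_krb5_conf_path = "/tmp/krb5.conf"
  }
}
```

Asserting on the `Kerberos authentication successful` log line (or on a successful write with these options set) would cover the same code path the production log above exercises.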


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]
