[ 
https://issues.apache.org/jira/browse/FLINK-37873?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xin Chen updated FLINK-37873:
-----------------------------
    Description: 
If I enable buffer flushing, e.g. by setting 'sink.buffer-flush.max-rows'='1000', but the 'table-name' option points at a table that definitely does not exist in HBase, why does the Flink job not report an error? The job shows successful execution and there is no error log while the task is running; a stack trace is printed only when the sink is closed.
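For context, this deferred failure is consistent with how a buffered sink behaves: invoke() only appends to an in-memory buffer, and the backend is contacted only when the buffer reaches 'sink.buffer-flush.max-rows' or when the sink is closed. With fewer rows than the threshold, the close-time flush is the first call that can hit the missing table. A minimal sketch of that pattern (hypothetical class, not the actual HBase connector code):

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical illustration of a buffered sink: writes only reach the
// backend on flush(), so a missing table is not noticed in invoke().
public class BufferedSinkSketch {
    private final List<String> buffer = new ArrayList<>();
    private final int maxRows;

    public BufferedSinkSketch(int maxRows) {
        this.maxRows = maxRows;
    }

    // invoke() just buffers the row; no backend call happens here.
    public void invoke(String row) {
        buffer.add(row);
        if (buffer.size() >= maxRows) {
            flush(); // only here could a "table not found" error surface
        }
    }

    // close() flushes the remaining rows; with fewer rows than maxRows,
    // this is the first backend call, hence the stack trace only at close.
    public void close() {
        flush();
    }

    private void flush() {
        if (buffer.isEmpty()) {
            return;
        }
        // Stand-in for BufferedMutator.flush() against a nonexistent table.
        throw new IllegalStateException("Table 'mytable3' was not found");
    }
}
```

With maxRows=1000 and a single row, invoke() succeeds silently and only close() throws, matching the FINISHED-then-stack-trace behavior in the log below.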

 

The Flink SQL:

 
{code:java}
tbEnv.executeSql("CREATE TABLE IF NOT EXISTS hTable (\n"
        + "rowkey STRING,\n"
        + "family1 ROW<q1 STRING,q2 STRING>,\n"
        + "family2 ROW<q3 STRING,q4 STRING>,\n"
        + "PRIMARY KEY (rowkey) NOT ENFORCED\n"
        + ") WITH (\n"
        + "'connector' = 'hbase-1.4',\n"
        + "'table-name' = 'mytable3',\n" // mytable3 does not exist in HBase!!!
        + "'sink.buffer-flush.max-rows' = '1000',\n"
        + "'hbase.regionserver.kerberos.principal' = 'hbase/_h...@dahua.com',\n"
        + "'hbase.master.kerberos.principal' = 'hbase/_h...@dahua.com',\n"
        + "'hbase.security.authentication' = 'kerberos',\n"
        + "'hbase.ipc.client.fallback-to-simple-auth-allowed' = 'true',\n"
        + "'zookeeper.quorum' = 'hdp-zookeeper-hdp-zookeeper-0.hdp-zookeeper-hdp-zookeeper.cx-test.svc.cluster.local:2181'\n"
        + ")");

String sql3 = "insert into hTable values ('200', ROW('300','AAA'), ROW('400','BBB'))";

tbEnv.executeSql(sql3);{code}
Flink task state:

 

!image-2025-05-30-11-02-54-974.png!

Taskmanager.log:
{code:java}
2025-05-30 09:11:53,825 INFO  
org.apache.flink.runtime.taskexecutor.TaskExecutor           [] - Establish 
JobManager connection for job ffffffffb9bf812e0000000000000000.
2025-05-30 09:11:53,828 INFO  
org.apache.flink.runtime.taskexecutor.TaskExecutor           [] - Offer 
reserved slots to the leader of job ffffffffb9bf812e0000000000000000.
2025-05-30 09:11:53,850 INFO  
org.apache.flink.runtime.taskexecutor.slot.TaskSlotTableImpl [] - Activate slot 
4e07d1d05b97f8882c94c3372b11e540.
2025-05-30 09:11:53,855 INFO  
org.apache.flink.runtime.taskexecutor.slot.TaskSlotTableImpl [] - Activate slot 
4e07d1d05b97f8882c94c3372b11e540.
2025-05-30 09:11:53,885 INFO  
org.apache.flink.runtime.taskexecutor.TaskExecutor           [] - Received task 
Source: Values(tuples=[[{ 0 }]]) -> Calc(select=[_UTF-16LE'100' AS EXPR$0, 
(_UTF-16LE'300' ROW _UTF-16LE'AAA') AS EXPR$1, (_UTF-16LE'400' ROW 
_UTF-16LE'BBB') AS EXPR$2]) -> Sink: 
Sink(table=[default_catalog.default_database.hTable], fields=[EXPR$0, EXPR$1, 
EXPR$2]) (1/1)#0 (5675710f80e829cc299d53501cd9765f), deploy into slot with 
allocation id 4e07d1d05b97f8882c94c3372b11e540.
2025-05-30 09:11:53,895 INFO  org.apache.flink.runtime.taskmanager.Task         
           [] - Source: Values(tuples=[[{ 0 }]]) -> Calc(select=[_UTF-16LE'100' 
AS EXPR$0, (_UTF-16LE'300' ROW _UTF-16LE'AAA') AS EXPR$1, (_UTF-16LE'400' ROW 
_UTF-16LE'BBB') AS EXPR$2]) -> Sink: 
Sink(table=[default_catalog.default_database.hTable], fields=[EXPR$0, EXPR$1, 
EXPR$2]) (1/1)#0 (5675710f80e829cc299d53501cd9765f) switched from CREATED to 
DEPLOYING.
2025-05-30 09:11:53,898 INFO  org.apache.flink.runtime.taskmanager.Task         
           [] - Loading JAR files for task Source: Values(tuples=[[{ 0 }]]) -> 
Calc(select=[_UTF-16LE'100' AS EXPR$0, (_UTF-16LE'300' ROW _UTF-16LE'AAA') AS 
EXPR$1, (_UTF-16LE'400' ROW _UTF-16LE'BBB') AS EXPR$2]) -> Sink: 
Sink(table=[default_catalog.default_database.hTable], fields=[EXPR$0, EXPR$1, 
EXPR$2]) (1/1)#0 (5675710f80e829cc299d53501cd9765f) [DEPLOYING].
2025-05-30 09:11:54,060 INFO  org.apache.flink.runtime.taskmanager.Task         
           [] - Registering task at network: Source: Values(tuples=[[{ 0 }]]) 
-> Calc(select=[_UTF-16LE'100' AS EXPR$0, (_UTF-16LE'300' ROW _UTF-16LE'AAA') 
AS EXPR$1, (_UTF-16LE'400' ROW _UTF-16LE'BBB') AS EXPR$2]) -> Sink: 
Sink(table=[default_catalog.default_database.hTable], fields=[EXPR$0, EXPR$1, 
EXPR$2]) (1/1)#0 (5675710f80e829cc299d53501cd9765f) [DEPLOYING].
2025-05-30 09:11:54,084 INFO  
org.apache.flink.streaming.runtime.tasks.StreamTask          [] - No state 
backend has been configured, using default (Memory / JobManager) 
MemoryStateBackend (data in heap memory / checkpoints to JobManager) 
(checkpoints: 'null', savepoints: 'null', asynchronous: TRUE, maxStateSize: 
5242880)
2025-05-30 09:11:54,092 INFO  org.apache.flink.runtime.taskmanager.Task         
           [] - Source: Values(tuples=[[{ 0 }]]) -> Calc(select=[_UTF-16LE'100' 
AS EXPR$0, (_UTF-16LE'300' ROW _UTF-16LE'AAA') AS EXPR$1, (_UTF-16LE'400' ROW 
_UTF-16LE'BBB') AS EXPR$2]) -> Sink: 
Sink(table=[default_catalog.default_database.hTable], fields=[EXPR$0, EXPR$1, 
EXPR$2]) (1/1)#0 (5675710f80e829cc299d53501cd9765f) switched from DEPLOYING to 
RUNNING.
2025-05-30 09:11:54,178 WARN  org.apache.flink.metrics.MetricGroup              
           [] - The operator name Sink: 
Sink(table=[default_catalog.default_database.hTable], fields=[EXPR$0, EXPR$1, 
EXPR$2]) exceeded the 80 characters length limit and was truncated.
2025-05-30 09:11:54,461 WARN  org.apache.flink.metrics.MetricGroup              
           [] - The operator name Calc(select=[_UTF-16LE'100' AS EXPR$0, 
(_UTF-16LE'300' ROW _UTF-16LE'AAA') AS EXPR$1, (_UTF-16LE'400' ROW 
_UTF-16LE'BBB') AS EXPR$2]) exceeded the 80 characters length limit and was 
truncated.

=============================================================================================================================================================
2025-05-30 09:11:54,494 INFO  
org.apache.flink.connector.hbase.sink.HBaseSinkFunction      [] - start open ...
2025-05-30 09:11:54,525 WARN  
org.apache.flink.connector.hbase.util.HBaseConfigurationUtil [] - Could not 
find HBase configuration via any of the supported methods (Flink configuration, 
environment variables).
2025-05-30 09:11:54,678 INFO  
org.apache.flink.hbase.shaded.org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper
 [] - Process identifier=hconnection-0x1a4ac2a8 connecting to ZooKeeper 
ensemble=hdp-zookeeper-hdp-zookeeper-0.hdp-zookeeper-hdp-zookeeper.cx-test.svc.cluster.local:2181
2025-05-30 09:11:54,684 INFO  
org.apache.flink.hbase.shaded.org.apache.zookeeper.ZooKeeper [] - Client 
environment:zookeeper.version=3.4.14-HDP-22.10.2-efc83fdf3946366b6cd1191a5af00dd26735cde9,
 built on 09/06/2022 11:01 GMT
2025-05-30 09:11:54,684 INFO  
org.apache.flink.hbase.shaded.org.apache.zookeeper.ZooKeeper [] - Client 
environment:host.name=192.168.234.96
2025-05-30 09:11:54,684 INFO  
org.apache.flink.hbase.shaded.org.apache.zookeeper.ZooKeeper [] - Client 
environment:java.version=1.8.0_342
2025-05-30 09:11:54,684 INFO  
org.apache.flink.hbase.shaded.org.apache.zookeeper.ZooKeeper [] - Client 
environment:java.vendor=Bisheng
2025-05-30 09:11:54,684 INFO  
org.apache.flink.hbase.shaded.org.apache.zookeeper.ZooKeeper [] - Client 
environment:java.home=/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.342.b07-0.oe2203.x86_64/jre
2025-05-30 09:11:54,684 INFO  
org.apache.flink.hbase.shaded.org.apache.zookeeper.ZooKeeper [] - Client 
environment:java.class.path=:xxxx
2025-05-30 09:11:54,684 INFO  
org.apache.flink.hbase.shaded.org.apache.zookeeper.ZooKeeper [] - Client 
environment:java.library.path=/usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib
2025-05-30 09:11:54,684 INFO  
org.apache.flink.hbase.shaded.org.apache.zookeeper.ZooKeeper [] - Client 
environment:java.io.tmpdir=/tmp
2025-05-30 09:11:54,684 INFO  
org.apache.flink.hbase.shaded.org.apache.zookeeper.ZooKeeper [] - Client 
environment:java.compiler=<NA>
2025-05-30 09:11:54,684 INFO  
org.apache.flink.hbase.shaded.org.apache.zookeeper.ZooKeeper [] - Client 
environment:os.name=Linux
2025-05-30 09:11:54,684 INFO  
org.apache.flink.hbase.shaded.org.apache.zookeeper.ZooKeeper [] - Client 
environment:os.arch=amd64
2025-05-30 09:11:54,684 INFO  
org.apache.flink.hbase.shaded.org.apache.zookeeper.ZooKeeper [] - Client 
environment:os.version=5.10.0-60.66.0.91.oe2203.x86_64
2025-05-30 09:11:54,684 INFO  
org.apache.flink.hbase.shaded.org.apache.zookeeper.ZooKeeper [] - Client 
environment:user.name=root
2025-05-30 09:11:54,684 INFO  
org.apache.flink.hbase.shaded.org.apache.zookeeper.ZooKeeper [] - Client 
environment:user.home=/root
2025-05-30 09:11:54,684 INFO  
org.apache.flink.hbase.shaded.org.apache.zookeeper.ZooKeeper [] - Client 
environment:user.dir=/cloud/data/hadoop/yarn/nodemanager/local/usercache/hadoop/appcache/application_1747795725599_0302/container_1747795725599_0302_01_000002
2025-05-30 09:11:54,685 INFO  
org.apache.flink.hbase.shaded.org.apache.zookeeper.ZooKeeper [] - Initiating 
client connection, 
connectString=hdp-zookeeper-hdp-zookeeper-0.hdp-zookeeper-hdp-zookeeper.cx-test.svc.cluster.local:2181
 sessionTimeout=90000 
watcher=org.apache.flink.hbase.shaded.org.apache.hadoop.hbase.zookeeper.PendingWatcher@4e2b3acc
2025-05-30 09:11:54,702 INFO  
org.apache.flink.hbase.shaded.org.apache.zookeeper.Login     [] - Client 
successfully logged in.
2025-05-30 09:11:54,703 INFO  
org.apache.flink.hbase.shaded.org.apache.zookeeper.Login     [] - TGT refresh 
thread started.
2025-05-30 09:11:54,703 INFO  
org.apache.flink.hbase.shaded.org.apache.zookeeper.Login     [] - TGT valid 
starting at:        Fri May 30 09:11:54 CST 2025
2025-05-30 09:11:54,703 INFO  
org.apache.flink.hbase.shaded.org.apache.zookeeper.Login     [] - TGT expires:  
                Sat May 31 09:11:54 CST 2025
2025-05-30 09:11:54,704 INFO  
org.apache.flink.hbase.shaded.org.apache.zookeeper.Login     [] - TGT refresh 
sleeping until: Sat May 31 05:19:36 CST 2025
2025-05-30 09:11:54,705 INFO  
org.apache.flink.hbase.shaded.org.apache.zookeeper.client.ZooKeeperSaslClient 
[] - Client will use GSSAPI as SASL mechanism.
2025-05-30 09:11:54,706 INFO  
org.apache.flink.hbase.shaded.org.apache.zookeeper.ClientCnxn [] - Opening 
socket connection to server 
hdp-zookeeper-hdp-zookeeper-0.hdp-zookeeper-hdp-zookeeper.cx-test.svc.cluster.local/192.168.123.164:2181.
 Will attempt to SASL-authenticate using Login Context section 'Client'
2025-05-30 09:11:54,707 INFO  
org.apache.flink.hbase.shaded.org.apache.zookeeper.ClientCnxn [] - Socket 
connection established to 
hdp-zookeeper-hdp-zookeeper-0.hdp-zookeeper-hdp-zookeeper.cx-test.svc.cluster.local/192.168.123.164:2181,
 initiating session
2025-05-30 09:11:54,710 INFO  
org.apache.flink.hbase.shaded.org.apache.zookeeper.ClientCnxn [] - Session 
establishment complete on server 
hdp-zookeeper-hdp-zookeeper-0.hdp-zookeeper-hdp-zookeeper.cx-test.svc.cluster.local/192.168.123.164:2181,
 sessionid = 0x1008fd670fd20a8, negotiated timeout = 90000
2025-05-30 09:11:54,922 INFO  
org.apache.flink.connector.hbase.sink.HBaseSinkFunction      [] - end open.

=====================================================================================================================================================

2025-05-30 09:11:55,211 ERROR 
org.apache.flink.hbase.shaded.org.apache.hadoop.hbase.client.AsyncProcess [] - 
Failed to get region location 
org.apache.flink.hbase.shaded.org.apache.hadoop.hbase.TableNotFoundException: 
Table 'mytable3' was not found, got: mytable2.
        at 
org.apache.flink.hbase.shaded.org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.locateRegionInMeta(ConnectionManager.java:1345)
 ~[flink-sql-connector-hbase-1.4_1.12.2-HDP-25.04.3.jar:1.12.2-HDP-25.04.3]
        at 
org.apache.flink.hbase.shaded.org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.locateRegion(ConnectionManager.java:1221)
 ~[flink-sql-connector-hbase-1.4_1.12.2-HDP-25.04.3.jar:1.12.2-HDP-25.04.3]
        at 
org.apache.flink.hbase.shaded.org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:496)
 ~[flink-sql-connector-hbase-1.4_1.12.2-HDP-25.04.3.jar:1.12.2-HDP-25.04.3]
        at 
org.apache.flink.hbase.shaded.org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:436)
 ~[flink-sql-connector-hbase-1.4_1.12.2-HDP-25.04.3.jar:1.12.2-HDP-25.04.3]
        at 
org.apache.flink.hbase.shaded.org.apache.hadoop.hbase.client.BufferedMutatorImpl.backgroundFlushCommits(BufferedMutatorImpl.java:246)
 ~[flink-sql-connector-hbase-1.4_1.12.2-HDP-25.04.3.jar:1.12.2-HDP-25.04.3]
        at 
org.apache.flink.hbase.shaded.org.apache.hadoop.hbase.client.BufferedMutatorImpl.close(BufferedMutatorImpl.java:186)
 ~[flink-sql-connector-hbase-1.4_1.12.2-HDP-25.04.3.jar:1.12.2-HDP-25.04.3]
        at 
org.apache.flink.connector.hbase.sink.HBaseSinkFunction.close(HBaseSinkFunction.java:216)
 ~[flink-sql-connector-hbase-1.4_1.12.2-HDP-25.04.3.jar:1.12.2-HDP-25.04.3]
        at 
org.apache.flink.api.common.functions.util.FunctionUtils.closeFunction(FunctionUtils.java:41)
 ~[flink-dist_2.11-1.12.2-HDP-25.04.3.jar:1.12.2-HDP-25.04.3]
        at 
org.apache.flink.streaming.api.operators.AbstractUdfStreamOperator.close(AbstractUdfStreamOperator.java:109)
 ~[flink-dist_2.11-1.12.2-HDP-25.04.3.jar:1.12.2-HDP-25.04.3]
        at 
org.apache.flink.streaming.runtime.tasks.StreamOperatorWrapper.lambda$closeOperator$5(StreamOperatorWrapper.java:213)
 ~[flink-dist_2.11-1.12.2-HDP-25.04.3.jar:1.12.2-HDP-25.04.3]
        at 
org.apache.flink.streaming.runtime.tasks.StreamTaskActionExecutor$SynchronizedStreamTaskActionExecutor.runThrowing(StreamTaskActionExecutor.java:93)
 ~[flink-dist_2.11-1.12.2-HDP-25.04.3.jar:1.12.2-HDP-25.04.3]
        at 
org.apache.flink.streaming.runtime.tasks.StreamOperatorWrapper.closeOperator(StreamOperatorWrapper.java:210)
 ~[flink-dist_2.11-1.12.2-HDP-25.04.3.jar:1.12.2-HDP-25.04.3]
        at 
org.apache.flink.streaming.runtime.tasks.StreamOperatorWrapper.lambda$deferCloseOperatorToMailbox$3(StreamOperatorWrapper.java:185)
 ~[flink-dist_2.11-1.12.2-HDP-25.04.3.jar:1.12.2-HDP-25.04.3]
        at 
org.apache.flink.streaming.runtime.tasks.StreamTaskActionExecutor$SynchronizedStreamTaskActionExecutor.runThrowing(StreamTaskActionExecutor.java:93)
 [flink-dist_2.11-1.12.2-HDP-25.04.3.jar:1.12.2-HDP-25.04.3]
        at 
org.apache.flink.streaming.runtime.tasks.mailbox.Mail.run(Mail.java:90) 
[flink-dist_2.11-1.12.2-HDP-25.04.3.jar:1.12.2-HDP-25.04.3]
        at 
org.apache.flink.streaming.runtime.tasks.mailbox.MailboxExecutorImpl.tryYield(MailboxExecutorImpl.java:97)
 [flink-dist_2.11-1.12.2-HDP-25.04.3.jar:1.12.2-HDP-25.04.3]
        at 
org.apache.flink.streaming.runtime.tasks.StreamOperatorWrapper.quiesceTimeServiceAndCloseOperator(StreamOperatorWrapper.java:162)
 [flink-dist_2.11-1.12.2-HDP-25.04.3.jar:1.12.2-HDP-25.04.3]
        at 
org.apache.flink.streaming.runtime.tasks.StreamOperatorWrapper.close(StreamOperatorWrapper.java:131)
 [flink-dist_2.11-1.12.2-HDP-25.04.3.jar:1.12.2-HDP-25.04.3]
        at 
org.apache.flink.streaming.runtime.tasks.StreamOperatorWrapper.close(StreamOperatorWrapper.java:135)
 [flink-dist_2.11-1.12.2-HDP-25.04.3.jar:1.12.2-HDP-25.04.3]
        at 
org.apache.flink.streaming.runtime.tasks.StreamOperatorWrapper.close(StreamOperatorWrapper.java:135)
 [flink-dist_2.11-1.12.2-HDP-25.04.3.jar:1.12.2-HDP-25.04.3]
        at 
org.apache.flink.streaming.runtime.tasks.OperatorChain.closeOperators(OperatorChain.java:439)
 [flink-dist_2.11-1.12.2-HDP-25.04.3.jar:1.12.2-HDP-25.04.3]
        at 
org.apache.flink.streaming.runtime.tasks.StreamTask.afterInvoke(StreamTask.java:627)
 [flink-dist_2.11-1.12.2-HDP-25.04.3.jar:1.12.2-HDP-25.04.3]
        at 
org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:589) 
[flink-dist_2.11-1.12.2-HDP-25.04.3.jar:1.12.2-HDP-25.04.3]
        at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:755) 
[flink-dist_2.11-1.12.2-HDP-25.04.3.jar:1.12.2-HDP-25.04.3]
        at org.apache.flink.runtime.taskmanager.Task.run(Task.java:570) 
[flink-dist_2.11-1.12.2-HDP-25.04.3.jar:1.12.2-HDP-25.04.3]
        at java.lang.Thread.run(Thread.java:750) [?:1.8.0_342]
2025-05-30 09:11:55,220 INFO  
org.apache.flink.hbase.shaded.org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation
 [] - Closing zookeeper sessionid=0x1008fd670fd20a8
2025-05-30 09:11:55,221 INFO  
org.apache.flink.hbase.shaded.org.apache.zookeeper.ZooKeeper [] - Session: 
0x1008fd670fd20a8 closed
2025-05-30 09:11:55,221 INFO  
org.apache.flink.hbase.shaded.org.apache.zookeeper.ClientCnxn [] - EventThread 
shut down for session: 0x1008fd670fd20a8
2025-05-30 09:11:55,227 INFO  org.apache.flink.runtime.taskmanager.Task         
           [] - Source: Values(tuples=[[{ 0 }]]) -> Calc(select=[_UTF-16LE'100' 
AS EXPR$0, (_UTF-16LE'300' ROW _UTF-16LE'AAA') AS EXPR$1, (_UTF-16LE'400' ROW 
_UTF-16LE'BBB') AS EXPR$2]) -> Sink: 
Sink(table=[default_catalog.default_database.hTable], fields=[EXPR$0, EXPR$1, 
EXPR$2]) (1/1)#0 (5675710f80e829cc299d53501cd9765f) switched from RUNNING to 
FINISHED.
2025-05-30 09:11:55,227 INFO  org.apache.flink.runtime.taskmanager.Task         
           [] - Freeing task resources for Source: Values(tuples=[[{ 0 }]]) -> 
Calc(select=[_UTF-16LE'100' AS EXPR$0, (_UTF-16LE'300' ROW _UTF-16LE'AAA') AS 
EXPR$1, (_UTF-16LE'400' ROW _UTF-16LE'BBB') AS EXPR$2]) -> Sink: 
Sink(table=[default_catalog.default_database.hTable], fields=[EXPR$0, EXPR$1, 
EXPR$2]) (1/1)#0 (5675710f80e829cc299d53501cd9765f).
2025-05-30 09:11:55,228 INFO  
org.apache.flink.runtime.taskexecutor.TaskExecutor           [] - 
Un-registering task and sending final execution state FINISHED to JobManager 
for task Source: Values(tuples=[[{ 0 }]]) -> Calc(select=[_UTF-16LE'100' AS 
EXPR$0, (_UTF-16LE'300' ROW _UTF-16LE'AAA') AS EXPR$1, (_UTF-16LE'400' ROW 
_UTF-16LE'BBB') AS EXPR$2]) -> Sink: 
Sink(table=[default_catalog.default_database.hTable], fields=[EXPR$0, EXPR$1, 
EXPR$2]) (1/1)#0 5675710f80e829cc299d53501cd9765f.
2025-05-30 09:11:55,267 INFO  
org.apache.flink.runtime.taskexecutor.slot.TaskSlotTableImpl [] - Free slot 
TaskSlot(index:0, state:ACTIVE, resource profile: 
ResourceProfile{cpuCores=1.0000000000000000, taskHeapMemory=25.600mb (26843542 
bytes), taskOffHeapMemory=0 bytes, managedMemory=230.400mb (241591914 bytes), 
networkMemory=64.000mb (67108864 bytes)}, allocationId: 
4e07d1d05b97f8882c94c3372b11e540, jobId: ffffffffb9bf812e0000000000000000).
2025-05-30 09:11:55,271 INFO  
org.apache.flink.runtime.taskexecutor.DefaultJobLeaderService [] - Remove job 
ffffffffb9bf812e0000000000000000 from job leader monitoring. {code}
 

 

 

loud/service/hadoop-2.10.1/share/hadoop/hdfs/lib/netty-all-4.1.87.Final.jar:/cloud/service/hadoop-2.10.1/share/hadoop/hdfs/lib/netty-buffer-4.1.87.Final.jar:/cloud/service/hadoop-2.10.1/share/hadoop/hdfs/lib/netty-codec-4.1.87.Final.jar:/cloud/service/hadoop-2.10.1/share/hadoop/hdfs/lib/netty-codec-dns-4.1.87.Final.jar:/cloud/service/hadoop-2.10.1/share/hadoop/hdfs/lib/netty-codec-haproxy-4.1.87.Final.jar:/cloud/service/hadoop-2.10.1/share/hadoop/hdfs/lib/netty-codec-http-4.1.87.Final.jar:/cloud/service/hadoop-2.10.1/share/hadoop/hdfs/lib/netty-codec-http2-4.1.87.Final.jar:/cloud/service/hadoop-2.10.1/share/hadoop/hdfs/lib/netty-codec-memcache-4.1.87.Final.jar:/cloud/service/hadoop-2.10.1/share/hadoop/hdfs/lib/netty-codec-mqtt-4.1.87.Final.jar:/cloud/service/hadoop-2.10.1/share/hadoop/hdfs/lib/netty-codec-redis-4.1.87.Final.jar:/cloud/service/hadoop-2.10.1/share/hadoop/hdfs/lib/netty-codec-smtp-4.1.87.Final.jar:/cloud/service/hadoop-2.10.1/share/hadoop/hdfs/lib/netty-codec-socks-4.1.87.Final.jar:/cloud/service/hadoop-2.10.1/share/hadoop/hdfs/lib/netty-codec-stomp-4.1.87.Final.jar:/cloud/service/hadoop-2.10.1/share/hadoop/hdfs/lib/netty-codec-xml-4.1.87.Final.jar:/cloud/service/hadoop-2.10.1/share/hadoop/hdfs/lib/netty-common-4.1.87.Final.jar:/cloud/service/hadoop-2.10.1/share/hadoop/hdfs/lib/netty-handler-4.1.87.Final.jar:/cloud/service/hadoop-2.10.1/share/hadoop/hdfs/lib/netty-handler-proxy-4.1.87.Final.jar:/cloud/service/hadoop-2.10.1/share/hadoop/hdfs/lib/netty-handler-ssl-ocsp-4.1.87.Final.jar:/cloud/service/hadoop-2.10.1/share/hadoop/hdfs/lib/netty-resolver-4.1.87.Final.jar:/cloud/service/hadoop-2.10.1/share/hadoop/hdfs/lib/netty-resolver-dns-4.1.87.Final.jar:/cloud/service/hadoop-2.10.1/share/hadoop/hdfs/lib/netty-resolver-dns-classes-macos-4.1.87.Final.jar:/cloud/service/hadoop-2.10.1/share/hadoop/hdfs/lib/netty-resolver-dns-native-macos-4.1.87.Final-osx-aarch_64.jar:/cloud/service/hadoop-2.10.1/share/hadoop/hdfs/lib/netty-resolver-dns-native-macos-4.1.87.Fin
al-osx-x86_64.jar:/cloud/service/hadoop-2.10.1/share/hadoop/hdfs/lib/netty-transport-4.1.87.Final.jar:/cloud/service/hadoop-2.10.1/share/hadoop/hdfs/lib/netty-transport-classes-epoll-4.1.87.Final.jar:/cloud/service/hadoop-2.10.1/share/hadoop/hdfs/lib/netty-transport-classes-kqueue-4.1.87.Final.jar:/cloud/service/hadoop-2.10.1/share/hadoop/hdfs/lib/netty-transport-native-epoll-4.1.87.Final-linux-aarch_64.jar:/cloud/service/hadoop-2.10.1/share/hadoop/hdfs/lib/netty-transport-native-epoll-4.1.87.Final-linux-x86_64.jar:/cloud/service/hadoop-2.10.1/share/hadoop/hdfs/lib/netty-transport-native-kqueue-4.1.87.Final-osx-aarch_64.jar:/cloud/service/hadoop-2.10.1/share/hadoop/hdfs/lib/netty-transport-native-kqueue-4.1.87.Final-osx-x86_64.jar:/cloud/service/hadoop-2.10.1/share/hadoop/hdfs/lib/netty-transport-native-unix-common-4.1.87.Final.jar:/cloud/service/hadoop-2.10.1/share/hadoop/hdfs/lib/netty-transport-rxtx-4.1.87.Final.jar:/cloud/service/hadoop-2.10.1/share/hadoop/hdfs/lib/netty-transport-sctp-4.1.87.Final.jar:/cloud/service/hadoop-2.10.1/share/hadoop/hdfs/lib/netty-transport-udt-4.1.87.Final.jar:/cloud/service/hadoop-2.10.1/share/hadoop/hdfs/lib/okhttp-2.7.5.jar:/cloud/service/hadoop-2.10.1/share/hadoop/hdfs/lib/okio-1.17.2.jar:/cloud/service/hadoop-2.10.1/share/hadoop/hdfs/lib/protobuf-java-2.5.0.jar:/cloud/service/hadoop-2.10.1/share/hadoop/hdfs/lib/servlet-api-2.5.jar:/cloud/service/hadoop-2.10.1/share/hadoop/hdfs/lib/xercesImpl-2.12.0.jar:/cloud/service/hadoop-2.10.1/share/hadoop/hdfs/lib/xml-apis-1.4.01.jar:/cloud/service/hadoop-2.10.1/share/hadoop/hdfs/lib/xmlenc-0.52.jar:/cloud/service/hadoop-2.10.1/share/hadoop/yarn/hadoop-yarn-api-2.10.1-HDP-24.11.1.jar:/cloud/service/hadoop-2.10.1/share/hadoop/yarn/hadoop-yarn-applications-distributedshell-2.10.1-HDP-24.11.1.jar:/cloud/service/hadoop-2.10.1/share/hadoop/yarn/hadoop-yarn-applications-unmanaged-am-launcher-2.10.1-HDP-24.11.1.jar:/cloud/service/hadoop-2.10.1/share/hadoop/yarn/hadoop-yarn-client-2.10.1-HDP-24.11.
1.jar:/cloud/service/hadoop-2.10.1/share/hadoop/yarn/hadoop-yarn-common-2.10.1-HDP-24.11.1.jar:/cloud/service/hadoop-2.10.1/share/hadoop/yarn/hadoop-yarn-logaggregation-2.10.1-HDP-24.11.1.jar:/cloud/service/hadoop-2.10.1/share/hadoop/yarn/hadoop-yarn-registry-2.10.1-HDP-24.11.1.jar:/cloud/service/hadoop-2.10.1/share/hadoop/yarn/hadoop-yarn-server-applicationhistoryservice-2.10.1-HDP-24.11.1.jar:/cloud/service/hadoop-2.10.1/share/hadoop/yarn/hadoop-yarn-server-common-2.10.1-HDP-24.11.1.jar:/cloud/service/hadoop-2.10.1/share/hadoop/yarn/hadoop-yarn-server-initcontainer-2.10.1-HDP-24.11.1.jar:/cloud/service/hadoop-2.10.1/share/hadoop/yarn/hadoop-yarn-server-nodemanager-2.10.1-HDP-24.11.1.jar:/cloud/service/hadoop-2.10.1/share/hadoop/yarn/hadoop-yarn-server-resourcemanager-2.10.1-HDP-24.11.1.jar:/cloud/service/hadoop-2.10.1/share/hadoop/yarn/hadoop-yarn-server-router-2.10.1-HDP-24.11.1.jar:/cloud/service/hadoop-2.10.1/share/hadoop/yarn/hadoop-yarn-server-sharedcachemanager-2.10.1-HDP-24.11.1.jar:/cloud/service/hadoop-2.10.1/share/hadoop/yarn/hadoop-yarn-server-tests-2.10.1-HDP-24.11.1.jar:/cloud/service/hadoop-2.10.1/share/hadoop/yarn/hadoop-yarn-server-timeline-pluginstorage-2.10.1-HDP-24.11.1.jar:/cloud/service/hadoop-2.10.1/share/hadoop/yarn/hadoop-yarn-server-web-proxy-2.10.1-HDP-24.11.1.jar:/cloud/service/hadoop-2.10.1/share/hadoop/yarn/lib/HikariCP-java7-2.4.12.jar:/cloud/service/hadoop-2.10.1/share/hadoop/yarn/lib/activation-1.1.jar:/cloud/service/hadoop-2.10.1/share/hadoop/yarn/lib/aopalliance-1.0.jar:/cloud/service/hadoop-2.10.1/share/hadoop/yarn/lib/apacheds-i18n-2.0.0-M15.jar:/cloud/service/hadoop-2.10.1/share/hadoop/yarn/lib/apacheds-kerberos-codec-2.0.0-M15.jar:/cloud/service/hadoop-2.10.1/share/hadoop/yarn/lib/api-asn1-api-1.0.0-M20.jar:/cloud/service/hadoop-2.10.1/share/hadoop/yarn/lib/api-util-1.0.0-M20.jar:/cloud/service/hadoop-2.10.1/share/hadoop/yarn/lib/asm-3.2.jar:/cloud/service/hadoop-2.10.1/share/hadoop/yarn/lib/audience-annotations-0.5.0.jar:/clo
ud/service/hadoop-2.10.1/share/hadoop/yarn/lib/avro-1.7.7.jar:/cloud/service/hadoop-2.10.1/share/hadoop/yarn/lib/bcpkix-jdk15on-1.66.jar:/cloud/service/hadoop-2.10.1/share/hadoop/yarn/lib/bcprov-ext-jdk15on-1.66.jar:/cloud/service/hadoop-2.10.1/share/hadoop/yarn/lib/bcprov-jdk15on-1.66.jar:/cloud/service/hadoop-2.10.1/share/hadoop/yarn/lib/client-java-11.0.0.jar:/cloud/service/hadoop-2.10.1/share/hadoop/yarn/lib/client-java-api-11.0.0.jar:/cloud/service/hadoop-2.10.1/share/hadoop/yarn/lib/client-java-proto-11.0.0.jar:/cloud/service/hadoop-2.10.1/share/hadoop/yarn/lib/commons-beanutils-1.9.4.jar:/cloud/service/hadoop-2.10.1/share/hadoop/yarn/lib/commons-cli-1.2.jar:/cloud/service/hadoop-2.10.1/share/hadoop/yarn/lib/commons-codec-1.4.jar:/cloud/service/hadoop-2.10.1/share/hadoop/yarn/lib/commons-collections-3.2.2.jar:/cloud/service/hadoop-2.10.1/share/hadoop/yarn/lib/commons-collections4-4.4.jar:/cloud/service/hadoop-2.10.1/share/hadoop/yarn/lib/commons-compress-1.21.jar:/cloud/service/hadoop-2.10.1/share/hadoop/yarn/lib/commons-configuration-1.6.jar:/cloud/service/hadoop-2.10.1/share/hadoop/yarn/lib/commons-digester-1.8.jar:/cloud/service/hadoop-2.10.1/share/hadoop/yarn/lib/commons-io-2.4.jar:/cloud/service/hadoop-2.10.1/share/hadoop/yarn/lib/commons-lang-2.6.jar:/cloud/service/hadoop-2.10.1/share/hadoop/yarn/lib/commons-lang3-3.4.jar:/cloud/service/hadoop-2.10.1/share/hadoop/yarn/lib/commons-logging-1.1.3.jar:/cloud/service/hadoop-2.10.1/share/hadoop/yarn/lib/commons-math3-3.1.1.jar:/cloud/service/hadoop-2.10.1/share/hadoop/yarn/lib/commons-net-3.1.jar:/cloud/service/hadoop-2.10.1/share/hadoop/yarn/lib/cueue.dahuatech.com-1.0.jar:/cloud/service/hadoop-2.10.1/share/hadoop/yarn/lib/curator-client-2.13.0.jar:/cloud/service/hadoop-2.10.1/share/hadoop/yarn/lib/curator-framework-2.13.0.jar:/cloud/service/hadoop-2.10.1/share/hadoop/yarn/lib/curator-recipes-2.13.0.jar:/cloud/service/hadoop-2.10.1/share/hadoop/yarn/lib/ehcache-3.3.1.jar:/cloud/service/hadoop-2.10.1/share/had
oop/yarn/lib/fst-2.50.jar:/cloud/service/hadoop-2.10.1/share/hadoop/yarn/lib/geronimo-jcache_1.0_spec-1.0-alpha-1.jar:/cloud/service/hadoop-2.10.1/share/hadoop/yarn/lib/gson-2.9.0.jar:/cloud/service/hadoop-2.10.1/share/hadoop/yarn/lib/gson-fire-1.8.5.jar:/cloud/service/hadoop-2.10.1/share/hadoop/yarn/lib/guava-11.0.2.jar:/cloud/service/hadoop-2.10.1/share/hadoop/yarn/lib/guice-3.0.jar:/cloud/service/hadoop-2.10.1/share/hadoop/yarn/lib/guice-servlet-3.0.jar:/cloud/service/hadoop-2.10.1/share/hadoop/yarn/lib/htrace-core4-4.1.0-incubating.jar:/cloud/service/hadoop-2.10.1/share/hadoop/yarn/lib/httpclient-4.5.2.jar:/cloud/service/hadoop-2.10.1/share/hadoop/yarn/lib/httpcore-4.4.4.jar:/cloud/service/hadoop-2.10.1/share/hadoop/yarn/lib/jackson-core-asl-1.9.13.jar:/cloud/service/hadoop-2.10.1/share/hadoop/yarn/lib/jackson-jaxrs-1.9.13.jar:/cloud/service/hadoop-2.10.1/share/hadoop/yarn/lib/jackson-mapper-asl-1.9.13.jar:/cloud/service/hadoop-2.10.1/share/hadoop/yarn/lib/jackson-xc-1.9.13.jar:/cloud/service/hadoop-2.10.1/share/hadoop/yarn/lib/java-util-1.9.0.jar:/cloud/service/hadoop-2.10.1/share/hadoop/yarn/lib/java-xmlbuilder-0.4.jar:/cloud/service/hadoop-2.10.1/share/hadoop/yarn/lib/javax.annotation-api-1.3.2.jar:/cloud/service/hadoop-2.10.1/share/hadoop/yarn/lib/javax.inject-1.jar:/cloud/service/hadoop-2.10.1/share/hadoop/yarn/lib/jaxb-api-2.2.2.jar:/cloud/service/hadoop-2.10.1/share/hadoop/yarn/lib/jaxb-impl-2.2.3-1.jar:/cloud/service/hadoop-2.10.1/share/hadoop/yarn/lib/jcip-annotations-1.0-1.jar:/cloud/service/hadoop-2.10.1/share/hadoop/yarn/lib/jersey-client-1.9.jar:/cloud/service/hadoop-2.10.1/share/hadoop/yarn/lib/jersey-core-1.9.jar:/cloud/service/hadoop-2.10.1/share/hadoop/yarn/lib/jersey-guice-1.9.jar:/cloud/service/hadoop-2.10.1/share/hadoop/yarn/lib/jersey-json-1.9.jar:/cloud/service/hadoop-2.10.1/share/hadoop/yarn/lib/jersey-server-1.9.jar:/cloud/service/hadoop-2.10.1/share/hadoop/yarn/lib/jets3t-0.9.0.jar:/cloud/service/hadoop-2.10.1/share/hadoop/yarn/lib/jetti
son-1.5.3.jar:/cloud/service/hadoop-2.10.1/share/hadoop/yarn/lib/jetty-6.1.26.jar:/cloud/service/hadoop-2.10.1/share/hadoop/yarn/lib/jetty-sslengine-6.1.26.jar:/cloud/service/hadoop-2.10.1/share/hadoop/yarn/lib/jetty-util-6.1.26.jar:/cloud/service/hadoop-2.10.1/share/hadoop/yarn/lib/joda-convert-2.2.1.jar:/cloud/service/hadoop-2.10.1/share/hadoop/yarn/lib/joda-time-2.9.4.jar:/cloud/service/hadoop-2.10.1/share/hadoop/yarn/lib/jose4j-0.7.3.jar:/cloud/service/hadoop-2.10.1/share/hadoop/yarn/lib/jsch-0.1.55.jar:/cloud/service/hadoop-2.10.1/share/hadoop/yarn/lib/json-io-2.5.1.jar:/cloud/service/hadoop-2.10.1/share/hadoop/yarn/lib/json-smart-1.3.1.jar:/cloud/service/hadoop-2.10.1/share/hadoop/yarn/lib/jsp-api-2.1.jar:/cloud/service/hadoop-2.10.1/share/hadoop/yarn/lib/jsr305-3.0.2.jar:/cloud/service/hadoop-2.10.1/share/hadoop/yarn/lib/leveldbjni-all-1.8.jar:/cloud/service/hadoop-2.10.1/share/hadoop/yarn/lib/log4j-1.2.17.jar:/cloud/service/hadoop-2.10.1/share/hadoop/yarn/lib/logging-interceptor-3.14.9.jar:/cloud/service/hadoop-2.10.1/share/hadoop/yarn/lib/lombok-1.18.38.jar:/cloud/service/hadoop-2.10.1/share/hadoop/yarn/lib/metrics-core-3.0.1.jar:/cloud/service/hadoop-2.10.1/share/hadoop/yarn/lib/mssql-jdbc-6.2.1.jre7.jar:/cloud/service/hadoop-2.10.1/share/hadoop/yarn/lib/mysql-connector-java-8.0.16.jar:/cloud/service/hadoop-2.10.1/share/hadoop/yarn/lib/netty-3.10.6.Final.jar:/cloud/service/hadoop-2.10.1/share/hadoop/yarn/lib/nimbus-jose-jwt-7.9.jar:/cloud/service/hadoop-2.10.1/share/hadoop/yarn/lib/okhttp-3.14.9.jar:/cloud/service/hadoop-2.10.1/share/hadoop/yarn/lib/okio-1.17.2.jar:/cloud/service/hadoop-2.10.1/share/hadoop/yarn/lib/paranamer-2.3.jar:/cloud/service/hadoop-2.10.1/share/hadoop/yarn/lib/protobuf-java-2.5.0.jar:/cloud/service/hadoop-2.10.1/share/hadoop/yarn/lib/servlet-api-2.5.jar:/cloud/service/hadoop-2.10.1/share/hadoop/yarn/lib/simpleclient-0.9.0.jar:/cloud/service/hadoop-2.10.1/share/hadoop/yarn/lib/simpleclient_common-0.9.0.jar:/cloud/service/hadoop-2.10.1
/share/hadoop/yarn/lib/simpleclient_httpserver-0.9.0.jar:/cloud/service/hadoop-2.10.1/share/hadoop/yarn/lib/snakeyaml-1.27.jar:/cloud/service/hadoop-2.10.1/share/hadoop/yarn/lib/snappy-java-1.0.5.jar:/cloud/service/hadoop-2.10.1/share/hadoop/yarn/lib/spotbugs-annotations-3.1.9.jar:/cloud/service/hadoop-2.10.1/share/hadoop/yarn/lib/stax-api-1.0-2.jar:/cloud/service/hadoop-2.10.1/share/hadoop/yarn/lib/stax2-api-3.1.4.jar:/cloud/service/hadoop-2.10.1/share/hadoop/yarn/lib/swagger-annotations-1.6.2.jar:/cloud/service/hadoop-2.10.1/share/hadoop/yarn/lib/woodstox-core-5.0.3.jar:/cloud/service/hadoop-2.10.1/share/hadoop/yarn/lib/xmlenc-0.52.jar:/cloud/service/hadoop-2.10.1/share/hadoop/yarn/lib/zookeeper-3.4.14-HDP-22.10.2.jar:/cloud/service/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-app-2.10.1-HDP-24.11.1.jar:/cloud/service/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-common-2.10.1-HDP-24.11.1.jar:/cloud/service/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-core-2.10.1-HDP-24.11.1.jar:/cloud/service/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-2.10.1-HDP-24.11.1.jar:/cloud/service/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-plugins-2.10.1-HDP-24.11.1.jar:/cloud/service/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.10.1-HDP-24.11.1-tests.jar:/cloud/service/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.10.1-HDP-24.11.1.jar:/cloud/service/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-shuffle-2.10.1-HDP-24.11.1.jar:/cloud/service/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.10.1-HDP-24.11.1.jar:/cloud/service/hadoop/share/hadoop/mapreduce/lib/aopalliance-1.0.jar:/cloud/service/hadoop/share/hadoop/mapreduce/lib/asm-3.2.jar:/cloud/service/hadoop/share/hadoop/mapreduce/lib/avro-1.7.7.jar:/cloud/service/hadoop/share/hadoop/mapreduce/lib/commons-compress-1.21.jar:/cloud/service/hadoop/share/hadoop/mapreduce/lib/commons-io-2.4.jar:/cloud/service/hadoop/share/hadoop/mapr
educe/lib/guice-3.0.jar:/cloud/service/hadoop/share/hadoop/mapreduce/lib/guice-servlet-3.0.jar:/cloud/service/hadoop/share/hadoop/mapreduce/lib/hadoop-annotations-2.10.1-HDP-24.11.1.jar:/cloud/service/hadoop/share/hadoop/mapreduce/lib/hamcrest-core-1.3.jar:/cloud/service/hadoop/share/hadoop/mapreduce/lib/jackson-core-asl-1.9.13.jar:/cloud/service/hadoop/share/hadoop/mapreduce/lib/jackson-mapper-asl-1.9.13.jar:/cloud/service/hadoop/share/hadoop/mapreduce/lib/javax.inject-1.jar:/cloud/service/hadoop/share/hadoop/mapreduce/lib/jersey-core-1.9.jar:/cloud/service/hadoop/share/hadoop/mapreduce/lib/jersey-guice-1.9.jar:/cloud/service/hadoop/share/hadoop/mapreduce/lib/jersey-server-1.9.jar:/cloud/service/hadoop/share/hadoop/mapreduce/lib/junit-4.11.jar:/cloud/service/hadoop/share/hadoop/mapreduce/lib/leveldbjni-all-1.8.jar:/cloud/service/hadoop/share/hadoop/mapreduce/lib/log4j-1.2.17.jar:/cloud/service/hadoop/share/hadoop/mapreduce/lib/netty-3.10.6.Final.jar:/cloud/service/hadoop/share/hadoop/mapreduce/lib/paranamer-2.3.jar:/cloud/service/hadoop/share/hadoop/mapreduce/lib/protobuf-java-2.5.0.jar:/cloud/service/hadoop/share/hadoop/mapreduce/lib/snappy-java-1.0.5.jar
2025-05-30 09:11:54,684 INFO  
org.apache.flink.hbase.shaded.org.apache.zookeeper.ZooKeeper [] - Client 
environment:java.library.path=/usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib
2025-05-30 09:11:54,684 INFO  
org.apache.flink.hbase.shaded.org.apache.zookeeper.ZooKeeper [] - Client 
environment:java.io.tmpdir=/tmp
2025-05-30 09:11:54,684 INFO  
org.apache.flink.hbase.shaded.org.apache.zookeeper.ZooKeeper [] - Client 
environment:java.compiler=<NA>
2025-05-30 09:11:54,684 INFO  
org.apache.flink.hbase.shaded.org.apache.zookeeper.ZooKeeper [] - Client 
environment:os.name=Linux
2025-05-30 09:11:54,684 INFO  
org.apache.flink.hbase.shaded.org.apache.zookeeper.ZooKeeper [] - Client 
environment:os.arch=amd64
2025-05-30 09:11:54,684 INFO  
org.apache.flink.hbase.shaded.org.apache.zookeeper.ZooKeeper [] - Client 
environment:os.version=5.10.0-60.66.0.91.oe2203.x86_64
2025-05-30 09:11:54,684 INFO  
org.apache.flink.hbase.shaded.org.apache.zookeeper.ZooKeeper [] - Client 
environment:user.name=root
2025-05-30 09:11:54,684 INFO  
org.apache.flink.hbase.shaded.org.apache.zookeeper.ZooKeeper [] - Client 
environment:user.home=/root
2025-05-30 09:11:54,684 INFO  
org.apache.flink.hbase.shaded.org.apache.zookeeper.ZooKeeper [] - Client 
environment:user.dir=/cloud/data/hadoop/yarn/nodemanager/local/usercache/hadoop/appcache/application_1747795725599_0302/container_1747795725599_0302_01_000002
2025-05-30 09:11:54,685 INFO  
org.apache.flink.hbase.shaded.org.apache.zookeeper.ZooKeeper [] - Initiating 
client connection, 
connectString=hdp-zookeeper-hdp-zookeeper-0.hdp-zookeeper-hdp-zookeeper.cx-test.svc.cluster.local:2181
 sessionTimeout=90000 
watcher=org.apache.flink.hbase.shaded.org.apache.hadoop.hbase.zookeeper.PendingWatcher@4e2b3acc
2025-05-30 09:11:54,702 INFO  
org.apache.flink.hbase.shaded.org.apache.zookeeper.Login     [] - Client 
successfully logged in.
2025-05-30 09:11:54,703 INFO  
org.apache.flink.hbase.shaded.org.apache.zookeeper.Login     [] - TGT refresh 
thread started.
2025-05-30 09:11:54,703 INFO  
org.apache.flink.hbase.shaded.org.apache.zookeeper.Login     [] - TGT valid 
starting at:        Fri May 30 09:11:54 CST 2025
2025-05-30 09:11:54,703 INFO  
org.apache.flink.hbase.shaded.org.apache.zookeeper.Login     [] - TGT expires:  
                Sat May 31 09:11:54 CST 2025
2025-05-30 09:11:54,704 INFO  
org.apache.flink.hbase.shaded.org.apache.zookeeper.Login     [] - TGT refresh 
sleeping until: Sat May 31 05:19:36 CST 2025
2025-05-30 09:11:54,705 INFO  
org.apache.flink.hbase.shaded.org.apache.zookeeper.client.ZooKeeperSaslClient 
[] - Client will use GSSAPI as SASL mechanism.
2025-05-30 09:11:54,706 INFO  
org.apache.flink.hbase.shaded.org.apache.zookeeper.ClientCnxn [] - Opening 
socket connection to server 
hdp-zookeeper-hdp-zookeeper-0.hdp-zookeeper-hdp-zookeeper.cx-test.svc.cluster.local/192.168.123.164:2181.
 Will attempt to SASL-authenticate using Login Context section 'Client'
2025-05-30 09:11:54,707 INFO  
org.apache.flink.hbase.shaded.org.apache.zookeeper.ClientCnxn [] - Socket 
connection established to 
hdp-zookeeper-hdp-zookeeper-0.hdp-zookeeper-hdp-zookeeper.cx-test.svc.cluster.local/192.168.123.164:2181,
 initiating session
2025-05-30 09:11:54,710 INFO  
org.apache.flink.hbase.shaded.org.apache.zookeeper.ClientCnxn [] - Session 
establishment complete on server 
hdp-zookeeper-hdp-zookeeper-0.hdp-zookeeper-hdp-zookeeper.cx-test.svc.cluster.local/192.168.123.164:2181,
 sessionid = 0x1008fd670fd20a8, negotiated timeout = 90000
2025-05-30 09:11:54,922 INFO  
org.apache.flink.connector.hbase.sink.HBaseSinkFunction      [] - end open.
2025-05-30 09:11:55,211 ERROR 
org.apache.flink.hbase.shaded.org.apache.hadoop.hbase.client.AsyncProcess [] - 
Failed to get region location 
org.apache.flink.hbase.shaded.org.apache.hadoop.hbase.TableNotFoundException: 
Table 'mytable3' was not found, got: mytable2.
        at 
org.apache.flink.hbase.shaded.org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.locateRegionInMeta(ConnectionManager.java:1345)
 ~[flink-sql-connector-hbase-1.4_1.12.2-HDP-25.04.3.jar:1.12.2-HDP-25.04.3]
        at 
org.apache.flink.hbase.shaded.org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.locateRegion(ConnectionManager.java:1221)
 ~[flink-sql-connector-hbase-1.4_1.12.2-HDP-25.04.3.jar:1.12.2-HDP-25.04.3]
        at 
org.apache.flink.hbase.shaded.org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:496)
 ~[flink-sql-connector-hbase-1.4_1.12.2-HDP-25.04.3.jar:1.12.2-HDP-25.04.3]
        at 
org.apache.flink.hbase.shaded.org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:436)
 ~[flink-sql-connector-hbase-1.4_1.12.2-HDP-25.04.3.jar:1.12.2-HDP-25.04.3]
        at 
org.apache.flink.hbase.shaded.org.apache.hadoop.hbase.client.BufferedMutatorImpl.backgroundFlushCommits(BufferedMutatorImpl.java:246)
 ~[flink-sql-connector-hbase-1.4_1.12.2-HDP-25.04.3.jar:1.12.2-HDP-25.04.3]
        at 
org.apache.flink.hbase.shaded.org.apache.hadoop.hbase.client.BufferedMutatorImpl.close(BufferedMutatorImpl.java:186)
 ~[flink-sql-connector-hbase-1.4_1.12.2-HDP-25.04.3.jar:1.12.2-HDP-25.04.3]
        at 
org.apache.flink.connector.hbase.sink.HBaseSinkFunction.close(HBaseSinkFunction.java:216)
 ~[flink-sql-connector-hbase-1.4_1.12.2-HDP-25.04.3.jar:1.12.2-HDP-25.04.3]
        at 
org.apache.flink.api.common.functions.util.FunctionUtils.closeFunction(FunctionUtils.java:41)
 ~[flink-dist_2.11-1.12.2-HDP-25.04.3.jar:1.12.2-HDP-25.04.3]
        at 
org.apache.flink.streaming.api.operators.AbstractUdfStreamOperator.close(AbstractUdfStreamOperator.java:109)
 ~[flink-dist_2.11-1.12.2-HDP-25.04.3.jar:1.12.2-HDP-25.04.3]
        at 
org.apache.flink.streaming.runtime.tasks.StreamOperatorWrapper.lambda$closeOperator$5(StreamOperatorWrapper.java:213)
 ~[flink-dist_2.11-1.12.2-HDP-25.04.3.jar:1.12.2-HDP-25.04.3]
        at 
org.apache.flink.streaming.runtime.tasks.StreamTaskActionExecutor$SynchronizedStreamTaskActionExecutor.runThrowing(StreamTaskActionExecutor.java:93)
 ~[flink-dist_2.11-1.12.2-HDP-25.04.3.jar:1.12.2-HDP-25.04.3]
        at 
org.apache.flink.streaming.runtime.tasks.StreamOperatorWrapper.closeOperator(StreamOperatorWrapper.java:210)
 ~[flink-dist_2.11-1.12.2-HDP-25.04.3.jar:1.12.2-HDP-25.04.3]
        at 
org.apache.flink.streaming.runtime.tasks.StreamOperatorWrapper.lambda$deferCloseOperatorToMailbox$3(StreamOperatorWrapper.java:185)
 ~[flink-dist_2.11-1.12.2-HDP-25.04.3.jar:1.12.2-HDP-25.04.3]
        at 
org.apache.flink.streaming.runtime.tasks.StreamTaskActionExecutor$SynchronizedStreamTaskActionExecutor.runThrowing(StreamTaskActionExecutor.java:93)
 [flink-dist_2.11-1.12.2-HDP-25.04.3.jar:1.12.2-HDP-25.04.3]
        at 
org.apache.flink.streaming.runtime.tasks.mailbox.Mail.run(Mail.java:90) 
[flink-dist_2.11-1.12.2-HDP-25.04.3.jar:1.12.2-HDP-25.04.3]
        at 
org.apache.flink.streaming.runtime.tasks.mailbox.MailboxExecutorImpl.tryYield(MailboxExecutorImpl.java:97)
 [flink-dist_2.11-1.12.2-HDP-25.04.3.jar:1.12.2-HDP-25.04.3]
        at 
org.apache.flink.streaming.runtime.tasks.StreamOperatorWrapper.quiesceTimeServiceAndCloseOperator(StreamOperatorWrapper.java:162)
 [flink-dist_2.11-1.12.2-HDP-25.04.3.jar:1.12.2-HDP-25.04.3]
        at 
org.apache.flink.streaming.runtime.tasks.StreamOperatorWrapper.close(StreamOperatorWrapper.java:131)
 [flink-dist_2.11-1.12.2-HDP-25.04.3.jar:1.12.2-HDP-25.04.3]
        at 
org.apache.flink.streaming.runtime.tasks.StreamOperatorWrapper.close(StreamOperatorWrapper.java:135)
 [flink-dist_2.11-1.12.2-HDP-25.04.3.jar:1.12.2-HDP-25.04.3]
        at 
org.apache.flink.streaming.runtime.tasks.StreamOperatorWrapper.close(StreamOperatorWrapper.java:135)
 [flink-dist_2.11-1.12.2-HDP-25.04.3.jar:1.12.2-HDP-25.04.3]
        at 
org.apache.flink.streaming.runtime.tasks.OperatorChain.closeOperators(OperatorChain.java:439)
 [flink-dist_2.11-1.12.2-HDP-25.04.3.jar:1.12.2-HDP-25.04.3]
        at 
org.apache.flink.streaming.runtime.tasks.StreamTask.afterInvoke(StreamTask.java:627)
 [flink-dist_2.11-1.12.2-HDP-25.04.3.jar:1.12.2-HDP-25.04.3]
        at 
org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:589) 
[flink-dist_2.11-1.12.2-HDP-25.04.3.jar:1.12.2-HDP-25.04.3]
        at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:755) 
[flink-dist_2.11-1.12.2-HDP-25.04.3.jar:1.12.2-HDP-25.04.3]
        at org.apache.flink.runtime.taskmanager.Task.run(Task.java:570) 
[flink-dist_2.11-1.12.2-HDP-25.04.3.jar:1.12.2-HDP-25.04.3]
        at java.lang.Thread.run(Thread.java:750) [?:1.8.0_342]
2025-05-30 09:11:55,220 INFO  
org.apache.flink.hbase.shaded.org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation
 [] - Closing zookeeper sessionid=0x1008fd670fd20a8
2025-05-30 09:11:55,221 INFO  
org.apache.flink.hbase.shaded.org.apache.zookeeper.ZooKeeper [] - Session: 
0x1008fd670fd20a8 closed
2025-05-30 09:11:55,221 INFO  
org.apache.flink.hbase.shaded.org.apache.zookeeper.ClientCnxn [] - EventThread 
shut down for session: 0x1008fd670fd20a8
2025-05-30 09:11:55,227 INFO  org.apache.flink.runtime.taskmanager.Task         
           [] - Source: Values(tuples=[[{ 0 }]]) -> Calc(select=[_UTF-16LE'100' 
AS EXPR$0, (_UTF-16LE'300' ROW _UTF-16LE'AAA') AS EXPR$1, (_UTF-16LE'400' ROW 
_UTF-16LE'BBB') AS EXPR$2]) -> Sink: 
Sink(table=[default_catalog.default_database.hTable], fields=[EXPR$0, EXPR$1, 
EXPR$2]) (1/1)#0 (5675710f80e829cc299d53501cd9765f) switched from RUNNING to 
FINISHED.
2025-05-30 09:11:55,227 INFO  org.apache.flink.runtime.taskmanager.Task         
           [] - Freeing task resources for Source: Values(tuples=[[{ 0 }]]) -> 
Calc(select=[_UTF-16LE'100' AS EXPR$0, (_UTF-16LE'300' ROW _UTF-16LE'AAA') AS 
EXPR$1, (_UTF-16LE'400' ROW _UTF-16LE'BBB') AS EXPR$2]) -> Sink: 
Sink(table=[default_catalog.default_database.hTable], fields=[EXPR$0, EXPR$1, 
EXPR$2]) (1/1)#0 (5675710f80e829cc299d53501cd9765f).
2025-05-30 09:11:55,228 INFO  
org.apache.flink.runtime.taskexecutor.TaskExecutor           [] - 
Un-registering task and sending final execution state FINISHED to JobManager 
for task Source: Values(tuples=[[{ 0 }]]) -> Calc(select=[_UTF-16LE'100' AS 
EXPR$0, (_UTF-16LE'300' ROW _UTF-16LE'AAA') AS EXPR$1, (_UTF-16LE'400' ROW 
_UTF-16LE'BBB') AS EXPR$2]) -> Sink: 
Sink(table=[default_catalog.default_database.hTable], fields=[EXPR$0, EXPR$1, 
EXPR$2]) (1/1)#0 5675710f80e829cc299d53501cd9765f.
2025-05-30 09:11:55,267 INFO  
org.apache.flink.runtime.taskexecutor.slot.TaskSlotTableImpl [] - Free slot 
TaskSlot(index:0, state:ACTIVE, resource profile: 
ResourceProfile{cpuCores=1.0000000000000000, taskHeapMemory=25.600mb (26843542 
bytes), taskOffHeapMemory=0 bytes, managedMemory=230.400mb (241591914 bytes), 
networkMemory=64.000mb (67108864 bytes)}, allocationId: 
4e07d1d05b97f8882c94c3372b11e540, jobId: ffffffffb9bf812e0000000000000000).
2025-05-30 09:11:55,271 INFO  
org.apache.flink.runtime.taskexecutor.DefaultJobLeaderService [] - Remove job 
ffffffffb9bf812e0000000000000000 from job leader monitoring. {code}
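For context, the deferred failure can be reproduced without HBase at all. The following is a minimal, self-contained sketch (all class and method names are hypothetical, not the actual connector code): a buffered sink whose invoke() only appends to a local buffer, so the backend call that can fail (here standing in for the region lookup that throws TableNotFoundException) does not happen until the buffer is flushed, which for a small bounded job is deferred to close().

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of a buffered sink. With buffer flushing enabled,
// invoke() never contacts the backend, so a bad table name cannot fail there;
// the error can only surface from flush(), and with fewer rows than
// 'sink.buffer-flush.max-rows' that first happens inside close().
class BufferedSinkSketch {
    private final List<String> buffer = new ArrayList<>();
    private final int flushMaxRows;

    BufferedSinkSketch(int flushMaxRows) {
        this.flushMaxRows = flushMaxRows;
    }

    // Writing a record only buffers it locally; no backend interaction yet.
    void invoke(String row) {
        buffer.add(row);
        if (buffer.size() >= flushMaxRows) {
            flush();
        }
    }

    // flush() is the first point the backend is contacted, and therefore the
    // first point a missing table can be detected.
    void flush() {
        if (!buffer.isEmpty()) {
            throw new IllegalStateException("Table 'mytable3' was not found");
        }
    }

    // close() flushes whatever is still buffered; by the time this runs, a
    // bounded job's task may already be transitioning to FINISHED, so the
    // exception only shows up as a stack trace during shutdown.
    void close() {
        flush();
    }
}
```

With flushMaxRows = 1000 and a single inserted row, invoke() succeeds silently and the exception is raised only from close(), which matches the log above: the ERROR from AsyncProcess appears during BufferedMutatorImpl.close(), yet the task is still reported as switching from RUNNING to FINISHED.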
 

 

 


> [HBase connector] With buffer flush enabled, writing to a non-existent 
> table still reports the Flink task as successful
> ---------------------------------------------------------------------------------------------------------------------------------
>
>                 Key: FLINK-37873
>                 URL: https://issues.apache.org/jira/browse/FLINK-37873
>             Project: Flink
>          Issue Type: Bug
>            Reporter: Xin Chen
>            Priority: Major
>             Fix For: 1.16.2
>
>         Attachments: image-2025-05-30-11-02-54-974.png
>
>
>  If I enable buffer flushing, for example by setting 
> 'sink.buffer-flush.max-rows' = '1000', but point the 'table-name' option at 
> a table that definitely does not exist in HBase, why does the Flink task not 
> report an error? It reports successful execution and writes no error to the 
> task log; only when the task is closed is a stack trace printed.
>  
> The Flink SQL:
>  
> {code:java}
> tbEnv.executeSql("CREATE TABLE IF NOT EXISTS hTable (\n"
>         + "rowkey STRING,\n"
>         + "family1 ROW<q1 STRING,q2 STRING>,\n"
>         + "family2 ROW<q3 STRING,q4 STRING>,\n"
>         + "PRIMARY KEY (rowkey) NOT ENFORCED\n"
>         + ") WITH (\n"
>         + "'connector' = 'hbase-1.4',\n"
>         + "'table-name' = 'mytable3',\n" // mytable3 does not exist in 
> Hbase!!!
>         + "'sink.buffer-flush.max-rows' = '1000',\n"
>         + "'hbase.regionserver.kerberos.principal' = 
> 'hbase/_h...@dahua.com',\n"
>         + "'hbase.master.kerberos.principal' = 'hbase/_h...@dahua.com',\n"
>         + "'hbase.security.authentication' = 'kerberos',\n"
>         + "'hbase.ipc.client.fallback-to-simple-auth-allowed' = 'true',\n"
>         + "'zookeeper.quorum' = 
> 'hdp-zookeeper-hdp-zookeeper-0.hdp-zookeeper-hdp-zookeeper.cx-test.svc.cluster.local:2181'\n"
>         + "    )"); 
> String sql3 = "insert into hTable values ('200', ROW('300','AAA'), 
> ROW('400','BBB'))";
> tbEnv.executeSql(sql3);{code}
> Flink task state:
>  
> !image-2025-05-30-11-02-54-974.png!
> Taskmanager.log:
> {code:java}
> 2025-05-30 09:11:53,825 INFO  
> org.apache.flink.runtime.taskexecutor.TaskExecutor           [] - Establish 
> JobManager connection for job ffffffffb9bf812e0000000000000000.
> 2025-05-30 09:11:53,828 INFO  
> org.apache.flink.runtime.taskexecutor.TaskExecutor           [] - Offer 
> reserved slots to the leader of job ffffffffb9bf812e0000000000000000.
2025-05-30 09:11:53,850 INFO  org.apache.flink.runtime.taskexecutor.slot.TaskSlotTableImpl [] - Activate slot 4e07d1d05b97f8882c94c3372b11e540.
2025-05-30 09:11:53,855 INFO  org.apache.flink.runtime.taskexecutor.slot.TaskSlotTableImpl [] - Activate slot 4e07d1d05b97f8882c94c3372b11e540.
2025-05-30 09:11:53,885 INFO  org.apache.flink.runtime.taskexecutor.TaskExecutor           [] - Received task Source: Values(tuples=[[{ 0 }]]) -> Calc(select=[_UTF-16LE'100' AS EXPR$0, (_UTF-16LE'300' ROW _UTF-16LE'AAA') AS EXPR$1, (_UTF-16LE'400' ROW _UTF-16LE'BBB') AS EXPR$2]) -> Sink: Sink(table=[default_catalog.default_database.hTable], fields=[EXPR$0, EXPR$1, EXPR$2]) (1/1)#0 (5675710f80e829cc299d53501cd9765f), deploy into slot with allocation id 4e07d1d05b97f8882c94c3372b11e540.
2025-05-30 09:11:53,895 INFO  org.apache.flink.runtime.taskmanager.Task                    [] - Source: Values(tuples=[[{ 0 }]]) -> Calc(select=[_UTF-16LE'100' AS EXPR$0, (_UTF-16LE'300' ROW _UTF-16LE'AAA') AS EXPR$1, (_UTF-16LE'400' ROW _UTF-16LE'BBB') AS EXPR$2]) -> Sink: Sink(table=[default_catalog.default_database.hTable], fields=[EXPR$0, EXPR$1, EXPR$2]) (1/1)#0 (5675710f80e829cc299d53501cd9765f) switched from CREATED to DEPLOYING.
2025-05-30 09:11:53,898 INFO  org.apache.flink.runtime.taskmanager.Task                    [] - Loading JAR files for task Source: Values(tuples=[[{ 0 }]]) -> Calc(select=[_UTF-16LE'100' AS EXPR$0, (_UTF-16LE'300' ROW _UTF-16LE'AAA') AS EXPR$1, (_UTF-16LE'400' ROW _UTF-16LE'BBB') AS EXPR$2]) -> Sink: Sink(table=[default_catalog.default_database.hTable], fields=[EXPR$0, EXPR$1, EXPR$2]) (1/1)#0 (5675710f80e829cc299d53501cd9765f) [DEPLOYING].
2025-05-30 09:11:54,060 INFO  org.apache.flink.runtime.taskmanager.Task                    [] - Registering task at network: Source: Values(tuples=[[{ 0 }]]) -> Calc(select=[_UTF-16LE'100' AS EXPR$0, (_UTF-16LE'300' ROW _UTF-16LE'AAA') AS EXPR$1, (_UTF-16LE'400' ROW _UTF-16LE'BBB') AS EXPR$2]) -> Sink: Sink(table=[default_catalog.default_database.hTable], fields=[EXPR$0, EXPR$1, EXPR$2]) (1/1)#0 (5675710f80e829cc299d53501cd9765f) [DEPLOYING].
2025-05-30 09:11:54,084 INFO  org.apache.flink.streaming.runtime.tasks.StreamTask          [] - No state backend has been configured, using default (Memory / JobManager) MemoryStateBackend (data in heap memory / checkpoints to JobManager) (checkpoints: 'null', savepoints: 'null', asynchronous: TRUE, maxStateSize: 5242880)
2025-05-30 09:11:54,092 INFO  org.apache.flink.runtime.taskmanager.Task                    [] - Source: Values(tuples=[[{ 0 }]]) -> Calc(select=[_UTF-16LE'100' AS EXPR$0, (_UTF-16LE'300' ROW _UTF-16LE'AAA') AS EXPR$1, (_UTF-16LE'400' ROW _UTF-16LE'BBB') AS EXPR$2]) -> Sink: Sink(table=[default_catalog.default_database.hTable], fields=[EXPR$0, EXPR$1, EXPR$2]) (1/1)#0 (5675710f80e829cc299d53501cd9765f) switched from DEPLOYING to RUNNING.
2025-05-30 09:11:54,178 WARN  org.apache.flink.metrics.MetricGroup                         [] - The operator name Sink: Sink(table=[default_catalog.default_database.hTable], fields=[EXPR$0, EXPR$1, EXPR$2]) exceeded the 80 characters length limit and was truncated.
2025-05-30 09:11:54,461 WARN  org.apache.flink.metrics.MetricGroup                         [] - The operator name Calc(select=[_UTF-16LE'100' AS EXPR$0, (_UTF-16LE'300' ROW _UTF-16LE'AAA') AS EXPR$1, (_UTF-16LE'400' ROW _UTF-16LE'BBB') AS EXPR$2]) exceeded the 80 characters length limit and was truncated.
=============================================================================================================================================================
2025-05-30 09:11:54,494 INFO  org.apache.flink.connector.hbase.sink.HBaseSinkFunction      [] - start open ...
2025-05-30 09:11:54,525 WARN  org.apache.flink.connector.hbase.util.HBaseConfigurationUtil [] - Could not find HBase configuration via any of the supported methods (Flink configuration, environment variables).
2025-05-30 09:11:54,678 INFO  org.apache.flink.hbase.shaded.org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper [] - Process identifier=hconnection-0x1a4ac2a8 connecting to ZooKeeper ensemble=hdp-zookeeper-hdp-zookeeper-0.hdp-zookeeper-hdp-zookeeper.cx-test.svc.cluster.local:2181
2025-05-30 09:11:54,684 INFO  org.apache.flink.hbase.shaded.org.apache.zookeeper.ZooKeeper [] - Client environment:zookeeper.version=3.4.14-HDP-22.10.2-efc83fdf3946366b6cd1191a5af00dd26735cde9, built on 09/06/2022 11:01 GMT
2025-05-30 09:11:54,684 INFO  org.apache.flink.hbase.shaded.org.apache.zookeeper.ZooKeeper [] - Client environment:host.name=192.168.234.96
2025-05-30 09:11:54,684 INFO  org.apache.flink.hbase.shaded.org.apache.zookeeper.ZooKeeper [] - Client environment:java.version=1.8.0_342
2025-05-30 09:11:54,684 INFO  org.apache.flink.hbase.shaded.org.apache.zookeeper.ZooKeeper [] - Client environment:java.vendor=Bisheng
2025-05-30 09:11:54,684 INFO  org.apache.flink.hbase.shaded.org.apache.zookeeper.ZooKeeper [] - Client environment:java.home=/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.342.b07-0.oe2203.x86_64/jre
2025-05-30 09:11:54,684 INFO  org.apache.flink.hbase.shaded.org.apache.zookeeper.ZooKeeper [] - Client environment:java.class.path=:xxxx
2025-05-30 09:11:54,684 INFO  org.apache.flink.hbase.shaded.org.apache.zookeeper.ZooKeeper [] - Client environment:java.library.path=/usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib
2025-05-30 09:11:54,684 INFO  org.apache.flink.hbase.shaded.org.apache.zookeeper.ZooKeeper [] - Client environment:java.io.tmpdir=/tmp
2025-05-30 09:11:54,684 INFO  org.apache.flink.hbase.shaded.org.apache.zookeeper.ZooKeeper [] - Client environment:java.compiler=<NA>
2025-05-30 09:11:54,684 INFO  org.apache.flink.hbase.shaded.org.apache.zookeeper.ZooKeeper [] - Client environment:os.name=Linux
2025-05-30 09:11:54,684 INFO  org.apache.flink.hbase.shaded.org.apache.zookeeper.ZooKeeper [] - Client environment:os.arch=amd64
2025-05-30 09:11:54,684 INFO  org.apache.flink.hbase.shaded.org.apache.zookeeper.ZooKeeper [] - Client environment:os.version=5.10.0-60.66.0.91.oe2203.x86_64
2025-05-30 09:11:54,684 INFO  org.apache.flink.hbase.shaded.org.apache.zookeeper.ZooKeeper [] - Client environment:user.name=root
2025-05-30 09:11:54,684 INFO  org.apache.flink.hbase.shaded.org.apache.zookeeper.ZooKeeper [] - Client environment:user.home=/root
2025-05-30 09:11:54,684 INFO  org.apache.flink.hbase.shaded.org.apache.zookeeper.ZooKeeper [] - Client environment:user.dir=/cloud/data/hadoop/yarn/nodemanager/local/usercache/hadoop/appcache/application_1747795725599_0302/container_1747795725599_0302_01_000002
2025-05-30 09:11:54,685 INFO  org.apache.flink.hbase.shaded.org.apache.zookeeper.ZooKeeper [] - Initiating client connection, connectString=hdp-zookeeper-hdp-zookeeper-0.hdp-zookeeper-hdp-zookeeper.cx-test.svc.cluster.local:2181 sessionTimeout=90000 watcher=org.apache.flink.hbase.shaded.org.apache.hadoop.hbase.zookeeper.PendingWatcher@4e2b3acc
2025-05-30 09:11:54,702 INFO  org.apache.flink.hbase.shaded.org.apache.zookeeper.Login     [] - Client successfully logged in.
2025-05-30 09:11:54,703 INFO  org.apache.flink.hbase.shaded.org.apache.zookeeper.Login     [] - TGT refresh thread started.
2025-05-30 09:11:54,703 INFO  org.apache.flink.hbase.shaded.org.apache.zookeeper.Login     [] - TGT valid starting at:        Fri May 30 09:11:54 CST 2025
2025-05-30 09:11:54,703 INFO  org.apache.flink.hbase.shaded.org.apache.zookeeper.Login     [] - TGT expires:                  Sat May 31 09:11:54 CST 2025
2025-05-30 09:11:54,704 INFO  org.apache.flink.hbase.shaded.org.apache.zookeeper.Login     [] - TGT refresh sleeping until: Sat May 31 05:19:36 CST 2025
2025-05-30 09:11:54,705 INFO  org.apache.flink.hbase.shaded.org.apache.zookeeper.client.ZooKeeperSaslClient [] - Client will use GSSAPI as SASL mechanism.
2025-05-30 09:11:54,706 INFO  org.apache.flink.hbase.shaded.org.apache.zookeeper.ClientCnxn [] - Opening socket connection to server hdp-zookeeper-hdp-zookeeper-0.hdp-zookeeper-hdp-zookeeper.cx-test.svc.cluster.local/192.168.123.164:2181. Will attempt to SASL-authenticate using Login Context section 'Client'
2025-05-30 09:11:54,707 INFO  org.apache.flink.hbase.shaded.org.apache.zookeeper.ClientCnxn [] - Socket connection established to hdp-zookeeper-hdp-zookeeper-0.hdp-zookeeper-hdp-zookeeper.cx-test.svc.cluster.local/192.168.123.164:2181, initiating session
2025-05-30 09:11:54,710 INFO  org.apache.flink.hbase.shaded.org.apache.zookeeper.ClientCnxn [] - Session establishment complete on server hdp-zookeeper-hdp-zookeeper-0.hdp-zookeeper-hdp-zookeeper.cx-test.svc.cluster.local/192.168.123.164:2181, sessionid = 0x1008fd670fd20a8, negotiated timeout = 90000
2025-05-30 09:11:54,922 INFO  org.apache.flink.connector.hbase.sink.HBaseSinkFunction      [] - end open.
=====================================================================================================================================================
2025-05-30 09:11:55,211 ERROR org.apache.flink.hbase.shaded.org.apache.hadoop.hbase.client.AsyncProcess [] - Failed to get region location
org.apache.flink.hbase.shaded.org.apache.hadoop.hbase.TableNotFoundException: Table 'mytable3' was not found, got: mytable2.
	at org.apache.flink.hbase.shaded.org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.locateRegionInMeta(ConnectionManager.java:1345) ~[flink-sql-connector-hbase-1.4_1.12.2-HDP-25.04.3.jar:1.12.2-HDP-25.04.3]
	at org.apache.flink.hbase.shaded.org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.locateRegion(ConnectionManager.java:1221) ~[flink-sql-connector-hbase-1.4_1.12.2-HDP-25.04.3.jar:1.12.2-HDP-25.04.3]
	at org.apache.flink.hbase.shaded.org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:496) ~[flink-sql-connector-hbase-1.4_1.12.2-HDP-25.04.3.jar:1.12.2-HDP-25.04.3]
	at org.apache.flink.hbase.shaded.org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:436) ~[flink-sql-connector-hbase-1.4_1.12.2-HDP-25.04.3.jar:1.12.2-HDP-25.04.3]
	at org.apache.flink.hbase.shaded.org.apache.hadoop.hbase.client.BufferedMutatorImpl.backgroundFlushCommits(BufferedMutatorImpl.java:246) ~[flink-sql-connector-hbase-1.4_1.12.2-HDP-25.04.3.jar:1.12.2-HDP-25.04.3]
	at org.apache.flink.hbase.shaded.org.apache.hadoop.hbase.client.BufferedMutatorImpl.close(BufferedMutatorImpl.java:186) ~[flink-sql-connector-hbase-1.4_1.12.2-HDP-25.04.3.jar:1.12.2-HDP-25.04.3]
	at org.apache.flink.connector.hbase.sink.HBaseSinkFunction.close(HBaseSinkFunction.java:216) ~[flink-sql-connector-hbase-1.4_1.12.2-HDP-25.04.3.jar:1.12.2-HDP-25.04.3]
	at org.apache.flink.api.common.functions.util.FunctionUtils.closeFunction(FunctionUtils.java:41) ~[flink-dist_2.11-1.12.2-HDP-25.04.3.jar:1.12.2-HDP-25.04.3]
	at org.apache.flink.streaming.api.operators.AbstractUdfStreamOperator.close(AbstractUdfStreamOperator.java:109) ~[flink-dist_2.11-1.12.2-HDP-25.04.3.jar:1.12.2-HDP-25.04.3]
	at org.apache.flink.streaming.runtime.tasks.StreamOperatorWrapper.lambda$closeOperator$5(StreamOperatorWrapper.java:213) ~[flink-dist_2.11-1.12.2-HDP-25.04.3.jar:1.12.2-HDP-25.04.3]
	at org.apache.flink.streaming.runtime.tasks.StreamTaskActionExecutor$SynchronizedStreamTaskActionExecutor.runThrowing(StreamTaskActionExecutor.java:93) ~[flink-dist_2.11-1.12.2-HDP-25.04.3.jar:1.12.2-HDP-25.04.3]
	at org.apache.flink.streaming.runtime.tasks.StreamOperatorWrapper.closeOperator(StreamOperatorWrapper.java:210) ~[flink-dist_2.11-1.12.2-HDP-25.04.3.jar:1.12.2-HDP-25.04.3]
	at org.apache.flink.streaming.runtime.tasks.StreamOperatorWrapper.lambda$deferCloseOperatorToMailbox$3(StreamOperatorWrapper.java:185) ~[flink-dist_2.11-1.12.2-HDP-25.04.3.jar:1.12.2-HDP-25.04.3]
	at org.apache.flink.streaming.runtime.tasks.StreamTaskActionExecutor$SynchronizedStreamTaskActionExecutor.runThrowing(StreamTaskActionExecutor.java:93) [flink-dist_2.11-1.12.2-HDP-25.04.3.jar:1.12.2-HDP-25.04.3]
	at org.apache.flink.streaming.runtime.tasks.mailbox.Mail.run(Mail.java:90) [flink-dist_2.11-1.12.2-HDP-25.04.3.jar:1.12.2-HDP-25.04.3]
	at org.apache.flink.streaming.runtime.tasks.mailbox.MailboxExecutorImpl.tryYield(MailboxExecutorImpl.java:97) [flink-dist_2.11-1.12.2-HDP-25.04.3.jar:1.12.2-HDP-25.04.3]
	at org.apache.flink.streaming.runtime.tasks.StreamOperatorWrapper.quiesceTimeServiceAndCloseOperator(StreamOperatorWrapper.java:162) [flink-dist_2.11-1.12.2-HDP-25.04.3.jar:1.12.2-HDP-25.04.3]
	at org.apache.flink.streaming.runtime.tasks.StreamOperatorWrapper.close(StreamOperatorWrapper.java:131) [flink-dist_2.11-1.12.2-HDP-25.04.3.jar:1.12.2-HDP-25.04.3]
	at org.apache.flink.streaming.runtime.tasks.StreamOperatorWrapper.close(StreamOperatorWrapper.java:135) [flink-dist_2.11-1.12.2-HDP-25.04.3.jar:1.12.2-HDP-25.04.3]
	at org.apache.flink.streaming.runtime.tasks.StreamOperatorWrapper.close(StreamOperatorWrapper.java:135) [flink-dist_2.11-1.12.2-HDP-25.04.3.jar:1.12.2-HDP-25.04.3]
	at org.apache.flink.streaming.runtime.tasks.OperatorChain.closeOperators(OperatorChain.java:439) [flink-dist_2.11-1.12.2-HDP-25.04.3.jar:1.12.2-HDP-25.04.3]
	at org.apache.flink.streaming.runtime.tasks.StreamTask.afterInvoke(StreamTask.java:627) [flink-dist_2.11-1.12.2-HDP-25.04.3.jar:1.12.2-HDP-25.04.3]
	at org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:589) [flink-dist_2.11-1.12.2-HDP-25.04.3.jar:1.12.2-HDP-25.04.3]
	at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:755) [flink-dist_2.11-1.12.2-HDP-25.04.3.jar:1.12.2-HDP-25.04.3]
	at org.apache.flink.runtime.taskmanager.Task.run(Task.java:570) [flink-dist_2.11-1.12.2-HDP-25.04.3.jar:1.12.2-HDP-25.04.3]
	at java.lang.Thread.run(Thread.java:750) [?:1.8.0_342]
2025-05-30 09:11:55,220 INFO  org.apache.flink.hbase.shaded.org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation [] - Closing zookeeper sessionid=0x1008fd670fd20a8
2025-05-30 09:11:55,221 INFO  org.apache.flink.hbase.shaded.org.apache.zookeeper.ZooKeeper [] - Session: 0x1008fd670fd20a8 closed
2025-05-30 09:11:55,221 INFO  org.apache.flink.hbase.shaded.org.apache.zookeeper.ClientCnxn [] - EventThread shut down for session: 0x1008fd670fd20a8
2025-05-30 09:11:55,227 INFO  org.apache.flink.runtime.taskmanager.Task                    [] - Source: Values(tuples=[[{ 0 }]]) -> Calc(select=[_UTF-16LE'100' AS EXPR$0, (_UTF-16LE'300' ROW _UTF-16LE'AAA') AS EXPR$1, (_UTF-16LE'400' ROW _UTF-16LE'BBB') AS EXPR$2]) -> Sink: Sink(table=[default_catalog.default_database.hTable], fields=[EXPR$0, EXPR$1, EXPR$2]) (1/1)#0 (5675710f80e829cc299d53501cd9765f) switched from RUNNING to FINISHED.
2025-05-30 09:11:55,227 INFO  org.apache.flink.runtime.taskmanager.Task                    [] - Freeing task resources for Source: Values(tuples=[[{ 0 }]]) -> Calc(select=[_UTF-16LE'100' AS EXPR$0, (_UTF-16LE'300' ROW _UTF-16LE'AAA') AS EXPR$1, (_UTF-16LE'400' ROW _UTF-16LE'BBB') AS EXPR$2]) -> Sink: Sink(table=[default_catalog.default_database.hTable], fields=[EXPR$0, EXPR$1, EXPR$2]) (1/1)#0 (5675710f80e829cc299d53501cd9765f).
2025-05-30 09:11:55,228 INFO  org.apache.flink.runtime.taskexecutor.TaskExecutor           [] - Un-registering task and sending final execution state FINISHED to JobManager for task Source: Values(tuples=[[{ 0 }]]) -> Calc(select=[_UTF-16LE'100' AS EXPR$0, (_UTF-16LE'300' ROW _UTF-16LE'AAA') AS EXPR$1, (_UTF-16LE'400' ROW _UTF-16LE'BBB') AS EXPR$2]) -> Sink: Sink(table=[default_catalog.default_database.hTable], fields=[EXPR$0, EXPR$1, EXPR$2]) (1/1)#0 5675710f80e829cc299d53501cd9765f.
2025-05-30 09:11:55,267 INFO  org.apache.flink.runtime.taskexecutor.slot.TaskSlotTableImpl [] - Free slot TaskSlot(index:0, state:ACTIVE, resource profile: ResourceProfile{cpuCores=1.0000000000000000, taskHeapMemory=25.600mb (26843542 bytes), taskOffHeapMemory=0 bytes, managedMemory=230.400mb (241591914 bytes), networkMemory=64.000mb (67108864 bytes)}, allocationId: 4e07d1d05b97f8882c94c3372b11e540, jobId: ffffffffb9bf812e0000000000000000).
2025-05-30 09:11:55,271 INFO  org.apache.flink.runtime.taskexecutor.DefaultJobLeaderService [] - Remove job ffffffffb9bf812e0000000000000000 from job leader monitoring.
{code}
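The stack trace shows why the job reports success: the {{TableNotFoundException}} is only raised when {{BufferedMutatorImpl}} flushes the buffered mutations inside {{HBaseSinkFunction.close()}}, where it is logged as an ERROR but does not change the task's final state, so the task still switches to FINISHED. One possible direction is to validate the table when the sink opens, before any rows are buffered. A minimal sketch only (not the actual connector code; {{connection}} and {{hTableName}} stand in for the sink's existing HBase {{Connection}} and configured table name):
{code:java}
// Sketch: fail fast in HBaseSinkFunction#open() if the configured table
// is missing, instead of deferring the error to the buffered flush in close().
// 'connection' and 'hTableName' are assumed fields of the sink function.
try (Admin admin = connection.getAdmin()) {
    if (!admin.tableExists(TableName.valueOf(hTableName))) {
        throw new TableNotFoundException(
                "HBase table '" + hTableName + "' does not exist");
    }
}
{code}
With a check like this, the task would fail during {{open()}} with a clear {{TableNotFoundException}} instead of finishing successfully and only leaving a logged stack trace at close time.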
--
This message was sent by Atlassian Jira
(v8.20.10#820010)
