todd5167 opened a new issue #4469:
URL: https://github.com/apache/hudi/issues/4469
**Environment Description**

* Hudi version: 0.10.0
* Flink version: 1.13.3
* Storage (HDFS/S3/GCS..): AWS S3
* Running on Docker? (yes/no): yes (on Kubernetes)
**Additional context**
The job runs in Flink batch mode and uses the bulk_insert operation. After the data is written, the final commit metadata cannot be synchronized to `.hoodie/`, because by the time `handleEndInputEvent` is executed, the filesystem's `close()` method has already been called. When I remove the filesystem `close()` call, the commit file is written successfully.
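The ordering problem can be illustrated with a minimal sketch (this is not Hudi/Flink code; `SharedResource` is a hypothetical stand-in for the S3A filesystem and its HTTP connection pool). An async "end input" task is handed to an executor, but the shared resource is closed by the job shutdown before the task runs, so the commit fails with the same `IllegalStateException` seen in the stacktrace below:

```java
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class ShutdownRace {
    // Stand-in for the S3A FileSystem backed by an HTTP connection pool.
    static class SharedResource {
        private volatile boolean closed = false;
        void close() { closed = true; }
        void listFiles() {
            // Mimics org.apache.http.util.Asserts.check(...) after pool shutdown.
            if (closed) throw new IllegalStateException("Connection pool shut down");
        }
    }

    public static void main(String[] args) throws Exception {
        SharedResource fs = new SharedResource();
        ExecutorService coordinatorExecutor = Executors.newSingleThreadExecutor();

        // The job reaches FINISHED and shutdown closes the filesystem first...
        fs.close();

        // ...then the coordinator's handleEndInputEvent tries to finalize the commit.
        Future<?> commit = coordinatorExecutor.submit(fs::listFiles);
        try {
            commit.get();
            System.out.println("commit ok");
        } catch (ExecutionException e) {
            System.out.println("commit failed: " + e.getCause().getMessage());
        }
        coordinatorExecutor.shutdown();
    }
}
```

This matches the reported behavior: skipping the `close()` call lets the queued commit succeed, which suggests the coordinator's final commit must be sequenced before (or fenced against) filesystem teardown.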
**Stacktrace**
[2021-12-28T08:57:55.763Z] INFO
org.apache.hudi.sink.StreamWriteOperatorCoordinator [] - Executor
executes action [handle write metadata event for instant 20211228085644435]
success!
[2021-12-28T08:57:55.773Z] INFO
org.apache.flink.runtime.executiongraph.ExecutionGraph [] -
hoodie_bulk_insert_write -> Sink: dummy (1/2)
(c55eb46558109432bb0eb6e9a4a569e6) switched from RUNNING to FINISHED.
[2021-12-28T08:57:55.774Z] INFO
org.apache.flink.runtime.resourcemanager.slotmanager.DeclarativeSlotManager []
- Received resource requirements from job 00000000000000000000000000000000:
[ResourceRequirement{resourceProfile=ResourceProfile{UNKNOWN},
numberOfRequiredSlots=1}]
[2021-12-28T08:58:31.786Z] INFO
org.apache.hudi.client.AbstractHoodieWriteClient [] - Committing
20211228085644435 action deltacommit
[2021-12-28T08:58:31.786Z] INFO
org.apache.hudi.common.table.HoodieTableMetaClient [] - Loading
HoodieTableMetaClient from
s3a://shareit.tmp.us-east-1/Default/hudi_user_ac_account_mini/
[2021-12-28T08:58:31.801Z] INFO
org.apache.flink.runtime.executiongraph.ExecutionGraph [] -
hoodie_bulk_insert_write -> Sink: dummy (2/2)
(05e0347b9b33b4b199f3f45e68cc554b) switched from RUNNING to FINISHED.
[2021-12-28T08:58:31.802Z] INFO
org.apache.flink.runtime.executiongraph.ExecutionGraph [] - Job
insert-into_default_catalog.default_database.hudiSink
(00000000000000000000000000000000) switched from state RUNNING to FINISHED.
[2021-12-28T08:58:31.803Z] INFO
org.apache.flink.runtime.checkpoint.CheckpointCoordinator [] - Stopping
checkpoint coordinator for job 00000000000000000000000000000000.
[2021-12-28T08:58:31.805Z] INFO
org.apache.flink.runtime.checkpoint.DefaultCompletedCheckpointStore [] -
Shutting down
[2021-12-28T08:58:31.805Z] INFO
org.apache.flink.runtime.zookeeper.ZooKeeperStateHandleStore [] - Removing
/flink/testhudisinkbatch/checkpoints/00000000000000000000000000000000 from
ZooKeeper
[2021-12-28T08:58:31.805Z] INFO
org.apache.flink.runtime.resourcemanager.slotmanager.DeclarativeSlotManager []
- Clearing resource requirements of job 00000000000000000000000000000000
[2021-12-28T08:58:31.811Z] INFO
org.apache.flink.runtime.checkpoint.ZooKeeperCheckpointIDCounter [] - Shutting
down.
[2021-12-28T08:58:31.811Z] INFO
org.apache.flink.runtime.checkpoint.ZooKeeperCheckpointIDCounter [] - Removing
/checkpoint-counter/00000000000000000000000000000000 from ZooKeeper
[2021-12-28T08:58:31.825Z] INFO
org.apache.flink.runtime.dispatcher.StandaloneDispatcher [] - Job
00000000000000000000000000000000 reached terminal state FINISHED.
[2021-12-28T08:58:31.834Z] INFO
org.apache.flink.runtime.jobmaster.JobMaster [] - Stopping the
JobMaster for job
insert-into_default_catalog.default_database.hudiSink(00000000000000000000000000000000).
[2021-12-28T08:58:31.878Z] INFO
org.apache.hudi.common.table.HoodieTableConfig [] - Loading table
properties from
s3a://shareit.tmp.us-east-1/Default/hudi_user_ac_account_mini/.hoodie/hoodie.properties
[2021-12-28T08:58:31.919Z] INFO
org.apache.hudi.common.table.HoodieTableMetaClient [] - Finished
Loading Table of type MERGE_ON_READ(version=1, baseFileFormat=PARQUET) from
s3a://shareit.tmp.us-east-1/Default/hudi_user_ac_account_mini/
[2021-12-28T08:58:31.920Z] INFO
org.apache.hudi.common.table.HoodieTableMetaClient [] - Loading
Active commit timeline for
s3a://shareit.tmp.us-east-1/Default/hudi_user_ac_account_mini/
[2021-12-28T08:58:32.005Z] INFO
org.apache.hudi.common.table.timeline.HoodieActiveTimeline [] - Loaded
instants upto : Option{val=[==>20211228085644435__deltacommit__INFLIGHT]}
[2021-12-28T08:58:32.006Z] INFO
org.apache.hudi.common.table.view.FileSystemViewManager [] - Creating View
Manager with storage type :REMOTE_FIRST
[2021-12-28T08:58:32.006Z] INFO
org.apache.hudi.common.table.view.FileSystemViewManager [] - Creating
remote first table view
[2021-12-28T08:58:32.007Z] INFO org.apache.hudi.common.util.CommitUtils
[] - Creating metadata for null
numWriteStats:62numReplaceFileIds:0
[2021-12-28T08:58:32.008Z] INFO
org.apache.hudi.client.AbstractHoodieWriteClient [] - Committing
20211228085644435 action deltacommit
[2021-12-28T08:58:32.863Z] INFO
org.apache.flink.client.deployment.application.ApplicationDispatcherBootstrap
[] - Application completed SUCCESSFULLY
[2021-12-28T08:58:32.863Z] INFO
org.apache.flink.runtime.entrypoint.ClusterEntrypoint [] - Shutting
KubernetesApplicationClusterEntrypoint down with application status SUCCEEDED.
Diagnostics null.
[2021-12-28T08:58:32.864Z] INFO
org.apache.flink.runtime.jobmaster.MiniDispatcherRestEndpoint [] - Shutting
down rest endpoint.
[2021-12-28T08:58:32.889Z] INFO
org.apache.flink.runtime.jobmaster.MiniDispatcherRestEndpoint [] - Removing
cache directory /tmp/flink-web-6d81dcf8-0517-439a-aa8b-22709c087b00/flink-web-ui
[2021-12-28T08:58:32.897Z] INFO
org.apache.flink.runtime.leaderelection.DefaultLeaderElectionService [] -
Stopping DefaultLeaderElectionService.
[2021-12-28T08:58:32.897Z] INFO
org.apache.flink.runtime.leaderelection.ZooKeeperLeaderElectionDriver [] -
Closing ZooKeeperLeaderElectionDriver{leaderPath='/leader/rest_server_lock'}
[2021-12-28T08:58:32.898Z] INFO
org.apache.flink.runtime.jobmaster.MiniDispatcherRestEndpoint [] - Shut down
complete.
[2021-12-28T08:58:32.898Z] INFO
org.apache.flink.runtime.resourcemanager.active.ActiveResourceManager [] - Shut
down cluster because application is in SUCCEEDED, diagnostics null.
[2021-12-28T08:58:32.898Z] INFO
org.apache.flink.kubernetes.KubernetesResourceManagerDriver [] - Deregistering
Flink Kubernetes cluster, clusterId: testhudisinkbatch, diagnostics:
[2021-12-28T08:58:32.944Z] INFO
org.apache.flink.runtime.entrypoint.component.DispatcherResourceManagerComponent
[] - Closing components.
[2021-12-28T08:58:32.944Z] INFO
org.apache.flink.runtime.leaderretrieval.DefaultLeaderRetrievalService [] -
Stopping DefaultLeaderRetrievalService.
[2021-12-28T08:58:32.944Z] INFO
org.apache.flink.runtime.leaderretrieval.ZooKeeperLeaderRetrievalDriver [] -
Closing ZookeeperLeaderRetrievalDriver{retrievalPath='/leader/dispatcher_lock'}.
[2021-12-28T08:58:32.944Z] INFO
org.apache.flink.runtime.leaderretrieval.DefaultLeaderRetrievalService [] -
Stopping DefaultLeaderRetrievalService.
[2021-12-28T08:58:32.944Z] INFO
org.apache.flink.runtime.leaderretrieval.ZooKeeperLeaderRetrievalDriver [] -
Closing
ZookeeperLeaderRetrievalDriver{retrievalPath='/leader/resource_manager_lock'}.
[2021-12-28T08:58:32.944Z] INFO
org.apache.flink.runtime.leaderelection.DefaultLeaderElectionService [] -
Stopping DefaultLeaderElectionService.
[2021-12-28T08:58:32.944Z] INFO
org.apache.flink.runtime.leaderelection.ZooKeeperLeaderElectionDriver [] -
Closing ZooKeeperLeaderElectionDriver{leaderPath='/leader/dispatcher_lock'}
[2021-12-28T08:58:32.945Z] INFO
org.apache.flink.runtime.dispatcher.runner.SessionDispatcherLeaderProcess [] -
Stopping SessionDispatcherLeaderProcess.
[2021-12-28T08:58:32.945Z] INFO
org.apache.flink.runtime.dispatcher.StandaloneDispatcher [] - Stopping
dispatcher akka.tcp://[email protected]:6123/user/rpc/dispatcher_1.
[2021-12-28T08:58:32.945Z] INFO
org.apache.flink.runtime.dispatcher.StandaloneDispatcher [] - Stopping all
currently running jobs of dispatcher
akka.tcp://[email protected]:6123/user/rpc/dispatcher_1.
[2021-12-28T08:58:32.946Z] INFO
org.apache.flink.kubernetes.kubeclient.resources.KubernetesPodsWatcher [] - The
watcher is closing.
[2021-12-28T08:58:32.952Z] INFO
org.apache.flink.runtime.resourcemanager.slotmanager.DeclarativeSlotManager []
- Closing the slot manager.
[2021-12-28T08:58:32.952Z] INFO
org.apache.flink.runtime.resourcemanager.slotmanager.DeclarativeSlotManager []
- Suspending the slot manager.
[2021-12-28T08:58:32.952Z] INFO
org.apache.flink.runtime.leaderelection.DefaultLeaderElectionService [] -
Stopping DefaultLeaderElectionService.
[2021-12-28T08:58:32.952Z] INFO
org.apache.flink.runtime.leaderelection.ZooKeeperLeaderElectionDriver [] -
Closing
ZooKeeperLeaderElectionDriver{leaderPath='/leader/resource_manager_lock'}
[2021-12-28T08:58:32.955Z] INFO
org.apache.flink.runtime.leaderretrieval.DefaultLeaderRetrievalService [] -
Stopping DefaultLeaderRetrievalService.
[2021-12-28T08:58:32.955Z] INFO
org.apache.flink.runtime.leaderretrieval.ZooKeeperLeaderRetrievalDriver [] -
Closing
ZookeeperLeaderRetrievalDriver{retrievalPath='/leader/00000000000000000000000000000000/job_manager_lock'}.
[2021-12-28T08:58:33.137Z] INFO
org.apache.flink.runtime.entrypoint.ClusterEntrypoint [] - RECEIVED
SIGNAL 15: SIGTERM. Shutting down as requested.
[2021-12-28T08:58:33.139Z] INFO org.apache.flink.runtime.blob.BlobServer
[] - Stopped BLOB server at 0.0.0.0:6124
[2021-12-28T08:58:33.186Z] ERROR
org.apache.hudi.sink.StreamWriteOperatorCoordinator [] - Executor
executes action [handle write metadata event for instant 20211228085644435]
error
org.apache.hudi.exception.HoodieException:
org.apache.hudi.exception.HoodieException: Error occurs when executing flatMap
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native
Method) ~[?:1.8.0_202]
at
sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
~[?:1.8.0_202]
at
sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
~[?:1.8.0_202]
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
~[?:1.8.0_202]
at
java.util.concurrent.ForkJoinTask.getThrowableException(ForkJoinTask.java:593)
~[?:1.8.0_202]
at
java.util.concurrent.ForkJoinTask.reportException(ForkJoinTask.java:677)
~[?:1.8.0_202]
at java.util.concurrent.ForkJoinTask.invoke(ForkJoinTask.java:735)
~[?:1.8.0_202]
at
java.util.stream.ReduceOps$ReduceOp.evaluateParallel(ReduceOps.java:714)
~[?:1.8.0_202]
at
java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:233)
~[?:1.8.0_202]
at
java.util.stream.ReferencePipeline.collect(ReferencePipeline.java:499)
~[?:1.8.0_202]
at
org.apache.hudi.client.common.HoodieFlinkEngineContext.flatMap(HoodieFlinkEngineContext.java:136)
~[hudi-flink-bundle_2.12-0.10.0.jar:0.10.0]
at
org.apache.hudi.table.marker.DirectWriteMarkers.createdAndMergedDataPaths(DirectWriteMarkers.java:107)
~[hudi-flink-bundle_2.12-0.10.0.jar:0.10.0]
at
org.apache.hudi.table.HoodieTable.getInvalidDataPaths(HoodieTable.java:540)
~[hudi-flink-bundle_2.12-0.10.0.jar:0.10.0]
at
org.apache.hudi.table.HoodieTable.reconcileAgainstMarkers(HoodieTable.java:569)
~[hudi-flink-bundle_2.12-0.10.0.jar:0.10.0]
at
org.apache.hudi.table.HoodieTable.finalizeWrite(HoodieTable.java:511)
~[hudi-flink-bundle_2.12-0.10.0.jar:0.10.0]
at
org.apache.hudi.client.AbstractHoodieWriteClient.finalizeWrite(AbstractHoodieWriteClient.java:1131)
~[hudi-flink-bundle_2.12-0.10.0.jar:0.10.0]
at
org.apache.hudi.client.AbstractHoodieWriteClient.commit(AbstractHoodieWriteClient.java:224)
~[hudi-flink-bundle_2.12-0.10.0.jar:0.10.0]
at
org.apache.hudi.client.AbstractHoodieWriteClient.commitStats(AbstractHoodieWriteClient.java:197)
~[hudi-flink-bundle_2.12-0.10.0.jar:0.10.0]
at
org.apache.hudi.client.HoodieFlinkWriteClient.commit(HoodieFlinkWriteClient.java:112)
~[hudi-flink-bundle_2.12-0.10.0.jar:0.10.0]
at
org.apache.hudi.sink.StreamWriteOperatorCoordinator.doCommit(StreamWriteOperatorCoordinator.java:482)
~[hudi-flink-bundle_2.12-0.10.0.jar:0.10.0]
at
org.apache.hudi.sink.StreamWriteOperatorCoordinator.commitInstant(StreamWriteOperatorCoordinator.java:458)
~[hudi-flink-bundle_2.12-0.10.0.jar:0.10.0]
at
org.apache.hudi.sink.StreamWriteOperatorCoordinator.commitInstant(StreamWriteOperatorCoordinator.java:431)
~[hudi-flink-bundle_2.12-0.10.0.jar:0.10.0]
at
org.apache.hudi.sink.StreamWriteOperatorCoordinator.handleEndInputEvent(StreamWriteOperatorCoordinator.java:379)
~[hudi-flink-bundle_2.12-0.10.0.jar:0.10.0]
at
org.apache.hudi.sink.StreamWriteOperatorCoordinator.lambda$handleEventFromOperator$3(StreamWriteOperatorCoordinator.java:258)
~[hudi-flink-bundle_2.12-0.10.0.jar:0.10.0]
at
org.apache.hudi.sink.utils.NonThrownExecutor.lambda$execute$0(NonThrownExecutor.java:93)
~[hudi-flink-bundle_2.12-0.10.0.jar:0.10.0]
at
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
[?:1.8.0_202]
at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
[?:1.8.0_202]
at java.lang.Thread.run(Thread.java:748) [?:1.8.0_202]
Caused by: org.apache.hudi.exception.HoodieException: Error occurs when
executing flatMap
at
org.apache.hudi.common.function.FunctionWrapper.lambda$throwingFlatMapWrapper$1(FunctionWrapper.java:50)
~[hudi-flink-bundle_2.12-0.10.0.jar:0.10.0]
at
java.util.stream.ReferencePipeline$7$1.accept(ReferencePipeline.java:267)
~[?:1.8.0_202]
at
java.util.ArrayList$ArrayListSpliterator.forEachRemaining(ArrayList.java:1382)
~[?:1.8.0_202]
at
java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:481)
~[?:1.8.0_202]
at
java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:471)
~[?:1.8.0_202]
at java.util.stream.ReduceOps$ReduceTask.doLeaf(ReduceOps.java:747)
~[?:1.8.0_202]
at java.util.stream.ReduceOps$ReduceTask.doLeaf(ReduceOps.java:721)
~[?:1.8.0_202]
at java.util.stream.AbstractTask.compute(AbstractTask.java:316)
~[?:1.8.0_202]
at
java.util.concurrent.CountedCompleter.exec(CountedCompleter.java:731)
~[?:1.8.0_202]
at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:289)
~[?:1.8.0_202]
at
java.util.concurrent.ForkJoinPool$WorkQueue.execLocalTasks(ForkJoinPool.java:1040)
~[?:1.8.0_202]
at
java.util.concurrent.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1058)
~[?:1.8.0_202]
at
java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1692)
~[?:1.8.0_202]
at
java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:157)
~[?:1.8.0_202]
Caused by: java.lang.IllegalStateException: Connection pool shut down
at org.apache.http.util.Asserts.check(Asserts.java:34)
~[flink-s3-fs-hadoop-1.13.3.jar:1.13.3]
at
org.apache.http.pool.AbstractConnPool.lease(AbstractConnPool.java:189)
~[flink-s3-fs-hadoop-1.13.3.jar:1.13.3]
at
org.apache.http.impl.conn.PoolingHttpClientConnectionManager.requestConnection(PoolingHttpClientConnectionManager.java:268)
~[flink-s3-fs-hadoop-1.13.3.jar:1.13.3]
at sun.reflect.GeneratedMethodAccessor18.invoke(Unknown Source)
~[?:?]
at
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
~[?:1.8.0_202]
at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_202]
at
com.amazonaws.http.conn.ClientConnectionManagerFactory$Handler.invoke(ClientConnectionManagerFactory.java:76)
~[flink-s3-fs-hadoop-1.13.3.jar:1.13.3]
at com.amazonaws.http.conn.$Proxy26.requestConnection(Unknown
Source) ~[?:1.13.3]
at
org.apache.http.impl.execchain.MainClientExec.execute(MainClientExec.java:176)
~[flink-s3-fs-hadoop-1.13.3.jar:1.13.3]
at
org.apache.http.impl.execchain.ProtocolExec.execute(ProtocolExec.java:186)
~[flink-s3-fs-hadoop-1.13.3.jar:1.13.3]
at
org.apache.http.impl.client.InternalHttpClient.doExecute(InternalHttpClient.java:185)
~[flink-s3-fs-hadoop-1.13.3.jar:1.13.3]
at
org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:83)
~[flink-s3-fs-hadoop-1.13.3.jar:1.13.3]
at
org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:56)
~[flink-s3-fs-hadoop-1.13.3.jar:1.13.3]
at
com.amazonaws.http.apache.client.impl.SdkHttpClient.execute(SdkHttpClient.java:72)
~[flink-s3-fs-hadoop-1.13.3.jar:1.13.3]
at
com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeOneRequest(AmazonHttpClient.java:1330)
~[flink-s3-fs-hadoop-1.13.3.jar:1.13.3]
at
com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeHelper(AmazonHttpClient.java:1145)
~[flink-s3-fs-hadoop-1.13.3.jar:1.13.3]
at
com.amazonaws.http.AmazonHttpClient$RequestExecutor.doExecute(AmazonHttpClient.java:802)
~[flink-s3-fs-hadoop-1.13.3.jar:1.13.3]
at
com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeWithTimer(AmazonHttpClient.java:770)
~[flink-s3-fs-hadoop-1.13.3.jar:1.13.3]
at
com.amazonaws.http.AmazonHttpClient$RequestExecutor.execute(AmazonHttpClient.java:744)
~[flink-s3-fs-hadoop-1.13.3.jar:1.13.3]
at
com.amazonaws.http.AmazonHttpClient$RequestExecutor.access$500(AmazonHttpClient.java:704)
~[flink-s3-fs-hadoop-1.13.3.jar:1.13.3]
at
com.amazonaws.http.AmazonHttpClient$RequestExecutionBuilderImpl.execute(AmazonHttpClient.java:686)
~[flink-s3-fs-hadoop-1.13.3.jar:1.13.3]
at
com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:550)
~[flink-s3-fs-hadoop-1.13.3.jar:1.13.3]
at
com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:530)
~[flink-s3-fs-hadoop-1.13.3.jar:1.13.3]
at
com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:5062)
~[flink-s3-fs-hadoop-1.13.3.jar:1.13.3]
at
com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:5008)
~[flink-s3-fs-hadoop-1.13.3.jar:1.13.3]
at
com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:5002)
~[flink-s3-fs-hadoop-1.13.3.jar:1.13.3]
at
com.amazonaws.services.s3.AmazonS3Client.listObjectsV2(AmazonS3Client.java:941)
~[flink-s3-fs-hadoop-1.13.3.jar:1.13.3]
at
org.apache.hadoop.fs.s3a.S3AFileSystem.lambda$listObjects$5(S3AFileSystem.java:1262)
~[flink-s3-fs-hadoop-1.13.3.jar:1.13.3]
at
org.apache.hadoop.fs.s3a.Invoker.retryUntranslated(Invoker.java:317)
~[flink-s3-fs-hadoop-1.13.3.jar:1.13.3]
at
org.apache.hadoop.fs.s3a.Invoker.retryUntranslated(Invoker.java:280)
~[flink-s3-fs-hadoop-1.13.3.jar:1.13.3]
at
org.apache.hadoop.fs.s3a.S3AFileSystem.listObjects(S3AFileSystem.java:1255)
~[flink-s3-fs-hadoop-1.13.3.jar:1.13.3]
at
org.apache.hadoop.fs.s3a.Listing$ObjectListingIterator.<init>(Listing.java:558)
~[flink-s3-fs-hadoop-1.13.3.jar:1.13.3]
at
org.apache.hadoop.fs.s3a.Listing.createFileStatusListingIterator(Listing.java:118)
~[flink-s3-fs-hadoop-1.13.3.jar:1.13.3]
at
org.apache.hadoop.fs.s3a.S3AFileSystem.innerListFiles(S3AFileSystem.java:3126)
~[flink-s3-fs-hadoop-1.13.3.jar:1.13.3]
at
org.apache.hadoop.fs.s3a.S3AFileSystem.listFiles(S3AFileSystem.java:3068)
~[flink-s3-fs-hadoop-1.13.3.jar:1.13.3]
at
org.apache.hudi.table.marker.DirectWriteMarkers.lambda$createdAndMergedDataPaths$69cdea3b$1(DirectWriteMarkers.java:110)
~[hudi-flink-bundle_2.12-0.10.0.jar:0.10.0]
at
org.apache.hudi.common.function.FunctionWrapper.lambda$throwingFlatMapWrapper$1(FunctionWrapper.java:48)
~[hudi-flink-bundle_2.12-0.10.0.jar:0.10.0]
at
java.util.stream.ReferencePipeline$7$1.accept(ReferencePipeline.java:267)
~[?:1.8.0_202]
at
java.util.ArrayList$ArrayListSpliterator.forEachRemaining(ArrayList.java:1382)
~[?:1.8.0_202]
at
java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:481)
~[?:1.8.0_202]
at
java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:471)
~[?:1.8.0_202]
at java.util.stream.ReduceOps$ReduceTask.doLeaf(ReduceOps.java:747)
~[?:1.8.0_202]
at java.util.stream.ReduceOps$ReduceTask.doLeaf(ReduceOps.java:721)
~[?:1.8.0_202]
at java.util.stream.AbstractTask.compute(AbstractTask.java:316)
~[?:1.8.0_202]
at
java.util.concurrent.CountedCompleter.exec(CountedCompleter.java:731)
~[?:1.8.0_202]
at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:289)
~[?:1.8.0_202]
at
java.util.concurrent.ForkJoinPool$WorkQueue.execLocalTasks(ForkJoinPool.java:1040)
~[?:1.8.0_202]
at
java.util.concurrent.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1058)
~[?:1.8.0_202]
at
java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1692)
~[?:1.8.0_202]
at
java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:157)
~[?:1.8.0_202]
[2021-12-28T08:58:33.202Z] INFO org.apache.hudi.client.AbstractHoodieClient
[] - Stopping Timeline service !!
[2021-12-28T08:58:33.202Z] INFO
org.apache.hudi.client.embedded.EmbeddedTimelineService [] - Closing
Timeline server
[2021-12-28T08:58:33.202Z] INFO
org.apache.hudi.timeline.service.TimelineService [] - Closing
Timeline Service
[2021-12-28T08:58:33.202Z] INFO io.javalin.Javalin
[] - Stopping Javalin ...
[2021-12-28T08:58:33.213Z] INFO io.javalin.Javalin
[] - Javalin has stopped
[2021-12-28T08:58:33.213Z] INFO
org.apache.hudi.timeline.service.TimelineService [] - Closed
Timeline Service
[2021-12-28T08:58:33.213Z] INFO
org.apache.hudi.client.embedded.EmbeddedTimelineService [] - Closed
Timeline server
[2021-12-28T08:58:33.214Z] INFO
org.apache.flink.runtime.jobmaster.slotpool.DefaultDeclarativeSlotPool [] -
Releasing slot [340a93bd0c5e0c38c5c5c7a8a2ef0dff].
[2021-12-28T08:58:33.221Z] INFO
org.apache.flink.runtime.jobmaster.slotpool.DefaultDeclarativeSlotPool [] -
Releasing slot [6ac41974ba31d455839f10a9073a1696].
[2021-12-28T08:58:33.225Z] INFO
org.apache.flink.runtime.jobmaster.JobMaster [] - Close
ResourceManager connection 071211de95a2cc0e1b1be3ab5264db45: Stopping JobMaster
for job
insert-into_default_catalog.default_database.hudiSink(00000000000000000000000000000000)..
[2021-12-28T08:58:33.229Z] INFO
org.apache.flink.runtime.leaderretrieval.DefaultLeaderRetrievalService [] -
Stopping DefaultLeaderRetrievalService.
[2021-12-28T08:58:33.229Z] INFO
org.apache.flink.runtime.leaderretrieval.ZooKeeperLeaderRetrievalDriver [] -
Closing
ZookeeperLeaderRetrievalDriver{retrievalPath='/leader/resource_manager_lock'}.
[2021-12-28T08:58:33.236Z] INFO
org.apache.flink.runtime.leaderelection.DefaultLeaderElectionService [] -
Stopping DefaultLeaderElectionService.
[2021-12-28T08:58:33.236Z] INFO
org.apache.flink.runtime.leaderelection.ZooKeeperLeaderElectionDriver [] -
Closing
ZooKeeperLeaderElectionDriver{leaderPath='/leader/00000000000000000000000000000000/job_manager_lock'}
[2021-12-28T08:58:33.345Z] INFO
org.apache.flink.runtime.jobmanager.DefaultJobGraphStore [] - Removed job
graph 00000000000000000000000000000000 from
ZooKeeperStateHandleStore{namespace='flink/testhudisinkbatch/jobgraphs'}.
[2021-12-28T08:58:33.348Z] INFO
org.apache.flink.runtime.highavailability.zookeeper.ZooKeeperHaServices [] -
Clean up the high availability data for job 00000000000000000000000000000000.
[2021-12-28T08:58:33.361Z] INFO
org.apache.flink.runtime.highavailability.zookeeper.ZooKeeperHaServices [] -
Finished cleaning up the high availability data for job
00000000000000000000000000000000.
[2021-12-28T08:58:33.657Z] WARN akka.remote.transport.netty.NettyTransport
[] - Remote connection to [/172.32.130.206:35256] failed with
java.io.IOException: Connection reset by peer
[2021-12-28T08:58:33.669Z] WARN akka.remote.ReliableDeliverySupervisor
[] - Association with remote system
[akka.tcp://[email protected]:40101] has failed, address is now
gated for [50] ms. Reason: [Disassociated]
[2021-12-28T08:58:33.670Z] WARN akka.remote.ReliableDeliverySupervisor
[] - Association with remote system
[akka.tcp://[email protected]:6122] has failed, address is now gated for
[50] ms. Reason: [Disassociated]
[2021-12-28T08:58:33.672Z] WARN akka.remote.ReliableDeliverySupervisor
[] - Association with remote system
[akka.tcp://[email protected]:36867] has failed, address is now
gated for [50] ms. Reason: [Disassociated]
[2021-12-28T08:58:33.673Z] WARN akka.remote.transport.netty.NettyTransport
[] - Remote connection to [/172.32.131.105:40262] failed with
java.io.IOException: Connection reset by peer
[2021-12-28T08:58:33.673Z] WARN akka.remote.ReliableDeliverySupervisor
[] - Association with remote system
[akka.tcp://[email protected]:6122] has failed, address is now gated for
[50] ms. Reason: [Disassociated]
[2021-12-28T08:58:33.880Z] INFO
org.apache.flink.runtime.dispatcher.StandaloneDispatcher [] - Stopped
dispatcher akka.tcp://[email protected]:6123/user/rpc/dispatcher_1.
[2021-12-28T08:58:33.881Z] INFO
org.apache.flink.runtime.jobmanager.DefaultJobGraphStore [] - Stopping
DefaultJobGraphStore.
[2021-12-28T08:58:33.882Z] INFO
org.apache.flink.runtime.jobmanager.ZooKeeperJobGraphStoreWatcher [] - Stopping
ZooKeeperJobGraphStoreWatcher
[2021-12-28T08:58:33.882Z] INFO
org.apache.flink.runtime.highavailability.zookeeper.ZooKeeperHaServices [] -
Close and clean up all data for ZooKeeperHaServices.
[2021-12-28T08:58:33.911Z] INFO
org.apache.flink.shaded.curator4.org.apache.curator.framework.imps.CuratorFrameworkImpl
[] - backgroundOperationsLoop exiting
[2021-12-28T08:58:33.915Z] INFO
org.apache.flink.shaded.zookeeper3.org.apache.zookeeper.ZooKeeper [] - Session:
0x314b617f28f07ef closed
[2021-12-28T08:58:33.916Z] INFO
org.apache.flink.shaded.zookeeper3.org.apache.zookeeper.ClientCnxn [] -
EventThread shut down for session: 0x314b617f28f07ef
[2021-12-28T08:58:33.958Z] INFO
org.apache.flink.runtime.highavailability.zookeeper.ZooKeeperHaServices [] -
Finished cleaning up the high availability data.
[2021-12-28T08:58:33.960Z] INFO
org.apache.flink.runtime.rpc.akka.AkkaRpcService [] - Stopping Akka
RPC service.
[2021-12-28T08:58:33.964Z] INFO
org.apache.flink.runtime.rpc.akka.AkkaRpcService [] - Stopping Akka
RPC service.
[2021-12-28T08:58:33.992Z] INFO
akka.remote.RemoteActorRefProvider$RemotingTerminator [] - Shutting down
remote daemon.
[2021-12-28T08:58:33.993Z] INFO
akka.remote.RemoteActorRefProvider$RemotingTerminator [] - Remote daemon
shut down; proceeding with flushing remote transports.
[2021-12-28T08:58:34.030Z] INFO
org.apache.flink.runtime.rpc.akka.AkkaRpcService [] - Stopped Akka
RPC service.
[2021-12-28T08:58:34.048Z] INFO
akka.remote.RemoteActorRefProvider$RemotingTerminator [] - Shutting down
remote daemon.
[2021-12-28T08:58:34.048Z] INFO
akka.remote.RemoteActorRefProvider$RemotingTerminator [] - Remote daemon
shut down; proceeding with flushing remote transports.
[2021-12-28T08:58:34.050Z] INFO
akka.remote.RemoteActorRefProvider$RemotingTerminator [] - Remoting shut
down.
[2021-12-28T08:58:34.062Z] INFO
org.apache.flink.runtime.rpc.akka.AkkaRpcService [] - Stopped Akka
RPC service.
[2021-12-28T08:58:34.062Z] INFO
org.apache.flink.runtime.entrypoint.ClusterEntrypoint [] - Terminating
cluster entrypoint process KubernetesApplicationClusterEntrypoint with exit
code 0.