lucasberlang opened a new issue, #7185:
URL: https://github.com/apache/hudi/issues/7185

   **Description**
   
   I'm running Flink on Kubernetes and trying to write to an S3 bucket in Hudi
format, but it fails. Writing to the same bucket with another format (e.g. JSON) works correctly:
   ```sql
   CREATE TABLE t1_json (
     uid VARCHAR(20),
     sdata INT
   ) PARTITIONED BY (uid) WITH (
     'connector' = 'filesystem',
     'path' = 's3://${bucket_name}/json',
     'format' = 'json'
   );
   ```
   
   ```sql
   INSERT INTO t1_json VALUES
     ('id1',22),
     ('id2',23);
   ```
   So this looks like a bug in the Hudi integration rather than in the S3 setup itself.
   
   **To Reproduce**
   
   Create a table in hudi format:
   ```sql
   CREATE TABLE t1_hudi (
     uuid VARCHAR(20) PRIMARY KEY NOT ENFORCED,
     name VARCHAR(10),
     age INT,
     ts TIMESTAMP(3),
     `partition` VARCHAR(20)
   )
   PARTITIONED BY (`partition`)
   WITH (
     'connector' = 'hudi',
     'path' = 's3://${bucket_name}/hudi',
     'table.type' = 'MERGE_ON_READ'
   );
   ```
   
   Insert data into the table:
   ```sql
   INSERT INTO t1_hudi VALUES
     ('id1','Danny',23,TIMESTAMP '1970-01-01 00:00:01','par1'),
     ('id2','Stephen',33,TIMESTAMP '1970-01-01 00:00:02','par1'),
     ('id3','Julian',53,TIMESTAMP '1970-01-01 00:00:03','par2'),
     ('id4','Fabian',31,TIMESTAMP '1970-01-01 00:00:04','par2'),
     ('id5','Sophia',18,TIMESTAMP '1970-01-01 00:00:05','par3'),
     ('id6','Emma',20,TIMESTAMP '1970-01-01 00:00:06','par3'),
     ('id7','Bob',44,TIMESTAMP '1970-01-01 00:00:07','par4'),
     ('id8','Han',56,TIMESTAMP '1970-01-01 00:00:08','par4');
   ```
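   One thing that may be relevant (an assumption on my part, not something I have verified on this cluster): Flink's `flink-s3-fs-hadoop` plugin registers the `s3a://` scheme through its shaded Hadoop S3A filesystem, so declaring the table with an `s3a://` path might sidestep the `No FileSystem for scheme "s3"` lookup that fails in the stack trace below. A hypothetical variant of the table above:
   ```sql
   -- Sketch only: same table as t1_hudi, but with the s3a:// scheme
   -- that flink-s3-fs-hadoop maps to S3AFileSystem.
   CREATE TABLE t1_hudi_s3a (
     uuid VARCHAR(20) PRIMARY KEY NOT ENFORCED,
     name VARCHAR(10),
     age INT,
     ts TIMESTAMP(3),
     `partition` VARCHAR(20)
   )
   PARTITIONED BY (`partition`)
   WITH (
     'connector' = 'hudi',
     'path' = 's3a://${bucket_name}/hudi',
     'table.type' = 'MERGE_ON_READ'
   );
   ```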
   **Expected behavior**
   I should be able to write to S3 in Hudi format.
   
   **Environment Description**
   
   * Hudi version : 0.12.0
   
   * Flink version : 1.15.0
   
   * Spark version : N/A
   
   * Hive version : N/A

   * Hadoop version : N/A

   * Storage (HDFS/S3/GCS..) : S3
   
   * Running on Docker? (yes/no) : yes, GKE
   
   
   
   **Additional context**
   
   Properties in core-site.xml:
   ```xml
     <property>
       <name>fs.s3.awsAccessKeyId</name>
       <value></value>
     </property>
     <property>
       <name>fs.s3.awsSecretAccessKey</name>
       <value></value>
     </property>
     <property>
       <name>fs.s3.impl</name>
       <value>org.apache.hadoop.fs.s3a.S3AFileSystem</value>
     </property>
   ```
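   A possible mismatch worth noting (my assumption, based on the Hadoop S3A documentation rather than on testing here): `S3AFileSystem` reads credentials from `fs.s3a.access.key` / `fs.s3a.secret.key`, not from the legacy `fs.s3.awsAccessKeyId` / `fs.s3.awsSecretAccessKey` keys used above. A sketch of `core-site.xml` entries that line up with S3A (values elided, as in the original):
   ```xml
   <!-- Sketch only: key names assumed from the Hadoop S3A docs. -->
   <property>
     <name>fs.s3.impl</name>
     <value>org.apache.hadoop.fs.s3a.S3AFileSystem</value>
   </property>
   <property>
     <name>fs.s3a.access.key</name>
     <value></value>
   </property>
   <property>
     <name>fs.s3a.secret.key</name>
     <value></value>
   </property>
   ```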
   flink-conf.yaml:
   ```yaml
     flink-conf.yaml: |+
       jobmanager.rpc.address: flink-jobmanager-session
       taskmanager.numberOfTaskSlots: 2
       blob.server.port: 6124
       jobmanager.rpc.port: 6123
       taskmanager.rpc.port: 6122
       queryable-state.proxy.ports: 6125
       jobmanager.memory.process.size: 1600m
       taskmanager.memory.process.size: 1728m
       parallelism.default: 2
       execution.checkpointing.interval: 60s   
       metrics.reporter.prom.class: org.apache.flink.metrics.prometheus.PrometheusReporter
       metrics.reporters: prom
       metrics.reporter.prom.port: 9249
       s3a.access-key:
       s3a.secret-key:
   ```
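   The last two keys above use an `s3a.` prefix; if I read the Flink filesystem documentation correctly (an assumption, not verified here), Flink's documented option names use an `s3.` prefix, which Flink forwards to the S3 plugin. A sketch of what that would look like:
   ```yaml
   # Sketch, assuming Flink's documented S3 option names
   # (s3.* keys are forwarded to the flink-s3-fs-hadoop plugin).
   s3.access-key: <access-key>
   s3.secret-key: <secret-key>
   ```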
   Dockerfile used to run Flink with the S3 plugin:
   ```Dockerfile
   ARG FLINK_VERSION
   ARG SCALA_VERSION
   FROM flink:${FLINK_VERSION}-scala_${SCALA_VERSION}
   ARG FLINK_HADOOP_VERSION
   ARG GCS_CONNECTOR_VERSION
   
   RUN test -n "$FLINK_HADOOP_VERSION"
   RUN test -n "$GCS_CONNECTOR_VERSION"
   
   ARG HUDI_HADOOP_JAR_NAME=hudi-flink1.15-bundle-0.12.0.jar
   ARG HUDI_HADOOP_JAR_URI=https://repo.maven.apache.org/maven2/org/apache/hudi/hudi-flink1.15-bundle/0.12.0/hudi-flink1.15-bundle-0.12.0.jar
   
   RUN echo "Downloading ${HUDI_HADOOP_JAR_URI}" && \
     wget -q -O /opt/flink/lib/${HUDI_HADOOP_JAR_NAME} ${HUDI_HADOOP_JAR_URI}
   
   RUN mkdir -p /opt/flink/plugins/flink-s3-fs-hadoop/ && \
     cp /opt/flink/opt/flink-s3-fs-hadoop-1.15.0.jar /opt/flink/plugins/flink-s3-fs-hadoop/ && \
     cp /opt/flink/opt/flink-s3-fs-hadoop-1.15.0.jar /opt/flink/lib/
   
   ```
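   Another thing I am unsure about (an assumption from the Flink Docker/plugin docs, not tested here): the Dockerfile above copies `flink-s3-fs-hadoop` into both `plugins/` and `lib/`, which puts two copies of the same classes on the classpath, while Flink's plugin mechanism expects filesystems to live only under `plugins/` in an isolated classloader. If the official image supports it, a simpler sketch would be:
   ```Dockerfile
   # Sketch (assumes the official flink image's ENABLE_BUILT_IN_PLUGINS
   # support): load the S3 filesystem only as a plugin, never via lib/.
   FROM flink:1.15.0-scala_2.12
   ENV ENABLE_BUILT_IN_PLUGINS=flink-s3-fs-hadoop-1.15.0.jar
   ```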
   Build variables (Makefile):
   ```properties
   FLINK_VERSION ?= 1.15.0
   SCALA_VERSION ?= 2.12
   FLINK_HADOOP_VERSION ?= 2.8.3-9.0
   GCS_CONNECTOR_VERSION ?= hadoop3-2.2.6
   PYTHON_VERSION ?= 
   
   ```
   **Stacktrace**
   2022-11-11 12:31:48,211 INFO  
org.apache.flink.runtime.rpc.akka.FencedAkkaRpcActor         [] - The 
RpcEndpoint jobmanager_2 failed.
   org.apache.flink.runtime.rpc.akka.exceptions.AkkaRpcException: Could not 
start RpcEndpoint jobmanager_2.
        at 
org.apache.flink.runtime.rpc.akka.AkkaRpcActor$StoppedState.start(AkkaRpcActor.java:617)
 ~[flink-rpc-akka_630cbffc-5067-4638-8ad0-1010bc943afa.jar:1.15.0]
        at 
org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleControlMessage(AkkaRpcActor.java:185)
 ~[flink-rpc-akka_630cbffc-5067-4638-8ad0-1010bc943afa.jar:1.15.0]
        at akka.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:24) 
~[flink-rpc-akka_630cbffc-5067-4638-8ad0-1010bc943afa.jar:1.15.0]
        at akka.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:20) 
~[flink-rpc-akka_630cbffc-5067-4638-8ad0-1010bc943afa.jar:1.15.0]
        at scala.PartialFunction.applyOrElse(PartialFunction.scala:123) 
~[flink-rpc-akka_630cbffc-5067-4638-8ad0-1010bc943afa.jar:1.15.0]
        at scala.PartialFunction.applyOrElse$(PartialFunction.scala:122) 
~[flink-rpc-akka_630cbffc-5067-4638-8ad0-1010bc943afa.jar:1.15.0]
        at akka.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:20) 
~[flink-rpc-akka_630cbffc-5067-4638-8ad0-1010bc943afa.jar:1.15.0]
        at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:171) 
~[flink-rpc-akka_630cbffc-5067-4638-8ad0-1010bc943afa.jar:1.15.0]
        at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:172) 
~[flink-rpc-akka_630cbffc-5067-4638-8ad0-1010bc943afa.jar:1.15.0]
        at akka.actor.Actor.aroundReceive(Actor.scala:537) 
~[flink-rpc-akka_630cbffc-5067-4638-8ad0-1010bc943afa.jar:1.15.0]
        at akka.actor.Actor.aroundReceive$(Actor.scala:535) 
~[flink-rpc-akka_630cbffc-5067-4638-8ad0-1010bc943afa.jar:1.15.0]
        at akka.actor.AbstractActor.aroundReceive(AbstractActor.scala:220) 
~[flink-rpc-akka_630cbffc-5067-4638-8ad0-1010bc943afa.jar:1.15.0]
        at akka.actor.ActorCell.receiveMessage(ActorCell.scala:580) 
~[flink-rpc-akka_630cbffc-5067-4638-8ad0-1010bc943afa.jar:1.15.0]
        at akka.actor.ActorCell.invoke(ActorCell.scala:548) 
~[flink-rpc-akka_630cbffc-5067-4638-8ad0-1010bc943afa.jar:1.15.0]
        at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:270) 
[flink-rpc-akka_630cbffc-5067-4638-8ad0-1010bc943afa.jar:1.15.0]
        at akka.dispatch.Mailbox.run(Mailbox.scala:231) 
[flink-rpc-akka_630cbffc-5067-4638-8ad0-1010bc943afa.jar:1.15.0]
        at akka.dispatch.Mailbox.exec(Mailbox.scala:243) 
[flink-rpc-akka_630cbffc-5067-4638-8ad0-1010bc943afa.jar:1.15.0]
        at java.util.concurrent.ForkJoinTask.doExec(Unknown Source) [?:?]
        at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(Unknown 
Source) [?:?]
        at java.util.concurrent.ForkJoinPool.scan(Unknown Source) [?:?]
        at java.util.concurrent.ForkJoinPool.runWorker(Unknown Source) [?:?]
        at java.util.concurrent.ForkJoinWorkerThread.run(Unknown Source) [?:?]
   Caused by: org.apache.flink.runtime.jobmaster.JobMasterException: Could not 
start the JobMaster.
        at 
org.apache.flink.runtime.jobmaster.JobMaster.onStart(JobMaster.java:390) 
~[flink-dist-1.15.0.jar:1.15.0]
        at 
org.apache.flink.runtime.rpc.RpcEndpoint.internalCallOnStart(RpcEndpoint.java:181)
 ~[flink-dist-1.15.0.jar:1.15.0]
        at 
org.apache.flink.runtime.rpc.akka.AkkaRpcActor$StoppedState.lambda$start$0(AkkaRpcActor.java:612)
 ~[flink-rpc-akka_630cbffc-5067-4638-8ad0-1010bc943afa.jar:1.15.0]
        at 
org.apache.flink.runtime.concurrent.akka.ClassLoadingUtils.runWithContextClassLoader(ClassLoadingUtils.java:68)
 ~[flink-rpc-akka_630cbffc-5067-4638-8ad0-1010bc943afa.jar:1.15.0]
        at 
org.apache.flink.runtime.rpc.akka.AkkaRpcActor$StoppedState.start(AkkaRpcActor.java:611)
 ~[flink-rpc-akka_630cbffc-5067-4638-8ad0-1010bc943afa.jar:1.15.0]
        ... 21 more
   Caused by: org.apache.flink.util.FlinkRuntimeException: Failed to start the 
operator coordinators
        at 
org.apache.flink.runtime.scheduler.DefaultOperatorCoordinatorHandler.startOperatorCoordinators(DefaultOperatorCoordinatorHandler.java:169)
 ~[flink-dist-1.15.0.jar:1.15.0]
        at 
org.apache.flink.runtime.scheduler.DefaultOperatorCoordinatorHandler.startAllOperatorCoordinators(DefaultOperatorCoordinatorHandler.java:82)
 ~[flink-dist-1.15.0.jar:1.15.0]
        at 
org.apache.flink.runtime.scheduler.SchedulerBase.startScheduling(SchedulerBase.java:624)
 ~[flink-dist-1.15.0.jar:1.15.0]
        at 
org.apache.flink.runtime.jobmaster.JobMaster.startScheduling(JobMaster.java:1010)
 ~[flink-dist-1.15.0.jar:1.15.0]
        at 
org.apache.flink.runtime.jobmaster.JobMaster.startJobExecution(JobMaster.java:927)
 ~[flink-dist-1.15.0.jar:1.15.0]
        at 
org.apache.flink.runtime.jobmaster.JobMaster.onStart(JobMaster.java:388) 
~[flink-dist-1.15.0.jar:1.15.0]
        at 
org.apache.flink.runtime.rpc.RpcEndpoint.internalCallOnStart(RpcEndpoint.java:181)
 ~[flink-dist-1.15.0.jar:1.15.0]
        at 
org.apache.flink.runtime.rpc.akka.AkkaRpcActor$StoppedState.lambda$start$0(AkkaRpcActor.java:612)
 ~[flink-rpc-akka_630cbffc-5067-4638-8ad0-1010bc943afa.jar:1.15.0]
        at 
org.apache.flink.runtime.concurrent.akka.ClassLoadingUtils.runWithContextClassLoader(ClassLoadingUtils.java:68)
 ~[flink-rpc-akka_630cbffc-5067-4638-8ad0-1010bc943afa.jar:1.15.0]
        at 
org.apache.flink.runtime.rpc.akka.AkkaRpcActor$StoppedState.start(AkkaRpcActor.java:611)
 ~[flink-rpc-akka_630cbffc-5067-4638-8ad0-1010bc943afa.jar:1.15.0]
        ... 21 more
   Caused by: org.apache.hudi.exception.HoodieIOException: Failed to get 
instance of org.apache.hadoop.fs.FileSystem
        at org.apache.hudi.common.fs.FSUtils.getFs(FSUtils.java:109) 
~[hudi-flink1.15-bundle-0.12.0.jar:0.12.0]
        at org.apache.hudi.common.fs.FSUtils.getFs(FSUtils.java:100) 
~[hudi-flink1.15-bundle-0.12.0.jar:0.12.0]
        at org.apache.hudi.util.StreamerUtil.tableExists(StreamerUtil.java:338) 
~[hudi-flink1.15-bundle-0.12.0.jar:0.12.0]
        at 
org.apache.hudi.util.StreamerUtil.initTableIfNotExists(StreamerUtil.java:307) 
~[hudi-flink1.15-bundle-0.12.0.jar:0.12.0]
        at 
org.apache.hudi.util.StreamerUtil.initTableIfNotExists(StreamerUtil.java:294) 
~[hudi-flink1.15-bundle-0.12.0.jar:0.12.0]
        at 
org.apache.hudi.sink.StreamWriteOperatorCoordinator.start(StreamWriteOperatorCoordinator.java:179)
 ~[hudi-flink1.15-bundle-0.12.0.jar:0.12.0]
        at 
org.apache.flink.runtime.operators.coordination.OperatorCoordinatorHolder.start(OperatorCoordinatorHolder.java:194)
 ~[flink-dist-1.15.0.jar:1.15.0]
        at 
org.apache.flink.runtime.scheduler.DefaultOperatorCoordinatorHandler.startOperatorCoordinators(DefaultOperatorCoordinatorHandler.java:164)
 ~[flink-dist-1.15.0.jar:1.15.0]
        at 
org.apache.flink.runtime.scheduler.DefaultOperatorCoordinatorHandler.startAllOperatorCoordinators(DefaultOperatorCoordinatorHandler.java:82)
 ~[flink-dist-1.15.0.jar:1.15.0]
        at 
org.apache.flink.runtime.scheduler.SchedulerBase.startScheduling(SchedulerBase.java:624)
 ~[flink-dist-1.15.0.jar:1.15.0]
        at 
org.apache.flink.runtime.jobmaster.JobMaster.startScheduling(JobMaster.java:1010)
 ~[flink-dist-1.15.0.jar:1.15.0]
        at 
org.apache.flink.runtime.jobmaster.JobMaster.startJobExecution(JobMaster.java:927)
 ~[flink-dist-1.15.0.jar:1.15.0]
        at 
org.apache.flink.runtime.jobmaster.JobMaster.onStart(JobMaster.java:388) 
~[flink-dist-1.15.0.jar:1.15.0]
        at 
org.apache.flink.runtime.rpc.RpcEndpoint.internalCallOnStart(RpcEndpoint.java:181)
 ~[flink-dist-1.15.0.jar:1.15.0]
        at 
org.apache.flink.runtime.rpc.akka.AkkaRpcActor$StoppedState.lambda$start$0(AkkaRpcActor.java:612)
 ~[flink-rpc-akka_630cbffc-5067-4638-8ad0-1010bc943afa.jar:1.15.0]
        at 
org.apache.flink.runtime.concurrent.akka.ClassLoadingUtils.runWithContextClassLoader(ClassLoadingUtils.java:68)
 ~[flink-rpc-akka_630cbffc-5067-4638-8ad0-1010bc943afa.jar:1.15.0]
        at 
org.apache.flink.runtime.rpc.akka.AkkaRpcActor$StoppedState.start(AkkaRpcActor.java:611)
 ~[flink-rpc-akka_630cbffc-5067-4638-8ad0-1010bc943afa.jar:1.15.0]
        ... 21 more
   Caused by: org.apache.hadoop.fs.UnsupportedFileSystemException: No 
FileSystem for scheme "s3"
        at 
org.apache.hadoop.fs.FileSystem.getFileSystemClass(FileSystem.java:3353) 
~[flink-s3-fs-hadoop-1.15.0.jar:1.15.0]
        at 
org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3373) 
~[flink-s3-fs-hadoop-1.15.0.jar:1.15.0]
        at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:125) 
~[flink-s3-fs-hadoop-1.15.0.jar:1.15.0]
        at 
org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:3424) 
~[flink-s3-fs-hadoop-1.15.0.jar:1.15.0]
        at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:3392) 
~[flink-s3-fs-hadoop-1.15.0.jar:1.15.0]
        at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:485) 
~[flink-s3-fs-hadoop-1.15.0.jar:1.15.0]
        at org.apache.hadoop.fs.Path.getFileSystem(Path.java:365) 
~[flink-s3-fs-hadoop-1.15.0.jar:1.15.0]
        at org.apache.hudi.common.fs.FSUtils.getFs(FSUtils.java:107) 
~[hudi-flink1.15-bundle-0.12.0.jar:0.12.0]
        at org.apache.hudi.common.fs.FSUtils.getFs(FSUtils.java:100) 
~[hudi-flink1.15-bundle-0.12.0.jar:0.12.0]
        at org.apache.hudi.util.StreamerUtil.tableExists(StreamerUtil.java:338) 
~[hudi-flink1.15-bundle-0.12.0.jar:0.12.0]
        at 
org.apache.hudi.util.StreamerUtil.initTableIfNotExists(StreamerUtil.java:307) 
~[hudi-flink1.15-bundle-0.12.0.jar:0.12.0]
        at 
org.apache.hudi.util.StreamerUtil.initTableIfNotExists(StreamerUtil.java:294) 
~[hudi-flink1.15-bundle-0.12.0.jar:0.12.0]
        at 
org.apache.hudi.sink.StreamWriteOperatorCoordinator.start(StreamWriteOperatorCoordinator.java:179)
 ~[hudi-flink1.15-bundle-0.12.0.jar:0.12.0]
        at 
org.apache.flink.runtime.operators.coordination.OperatorCoordinatorHolder.start(OperatorCoordinatorHolder.java:194)
 ~[flink-dist-1.15.0.jar:1.15.0]
        at 
org.apache.flink.runtime.scheduler.DefaultOperatorCoordinatorHandler.startOperatorCoordinators(DefaultOperatorCoordinatorHandler.java:164)
 ~[flink-dist-1.15.0.jar:1.15.0]
        at 
org.apache.flink.runtime.scheduler.DefaultOperatorCoordinatorHandler.startAllOperatorCoordinators(DefaultOperatorCoordinatorHandler.java:82)
 ~[flink-dist-1.15.0.jar:1.15.0]
        at 
org.apache.flink.runtime.scheduler.SchedulerBase.startScheduling(SchedulerBase.java:624)
 ~[flink-dist-1.15.0.jar:1.15.0]
        at 
org.apache.flink.runtime.jobmaster.JobMaster.startScheduling(JobMaster.java:1010)
 ~[flink-dist-1.15.0.jar:1.15.0]
        at 
org.apache.flink.runtime.jobmaster.JobMaster.startJobExecution(JobMaster.java:927)
 ~[flink-dist-1.15.0.jar:1.15.0]
        at 
org.apache.flink.runtime.jobmaster.JobMaster.onStart(JobMaster.java:388) 
~[flink-dist-1.15.0.jar:1.15.0]
        at 
org.apache.flink.runtime.rpc.RpcEndpoint.internalCallOnStart(RpcEndpoint.java:181)
 ~[flink-dist-1.15.0.jar:1.15.0]
        at 
org.apache.flink.runtime.rpc.akka.AkkaRpcActor$StoppedState.lambda$start$0(AkkaRpcActor.java:612)
 ~[flink-rpc-akka_630cbffc-5067-4638-8ad0-1010bc943afa.jar:1.15.0]
        at 
org.apache.flink.runtime.concurrent.akka.ClassLoadingUtils.runWithContextClassLoader(ClassLoadingUtils.java:68)
 ~[flink-rpc-akka_630cbffc-5067-4638-8ad0-1010bc943afa.jar:1.15.0]
        at 
org.apache.flink.runtime.rpc.akka.AkkaRpcActor$StoppedState.start(AkkaRpcActor.java:611)
 ~[flink-rpc-akka_630cbffc-5067-4638-8ad0-1010bc943afa.jar:1.15.0]
        ... 21 more
   2022-11-11 12:31:48,216 WARN  
org.apache.flink.runtime.jobmaster.DefaultJobMasterServiceProcess [] - 
Unexpected termination of the JobMasterService for job 
c2572d50c86fb028d0c79ed9c55e0e31 under leader id 
00000000-0000-0000-0000-000000000000.
   2022-11-11 12:31:48,212 ERROR 
org.apache.flink.runtime.entrypoint.ClusterEntrypoint        [] - Fatal error 
occurred in the cluster entrypoint.
   org.apache.flink.util.FlinkException: JobMaster for job 
c2572d50c86fb028d0c79ed9c55e0e31 failed.
        at 
org.apache.flink.runtime.dispatcher.Dispatcher.jobMasterFailed(Dispatcher.java:1149)
 ~[flink-dist-1.15.0.jar:1.15.0]
        at 
org.apache.flink.runtime.dispatcher.Dispatcher.jobManagerRunnerFailed(Dispatcher.java:667)
 ~[flink-dist-1.15.0.jar:1.15.0]
        at 
org.apache.flink.runtime.dispatcher.Dispatcher.lambda$runJob$4(Dispatcher.java:616)
 ~[flink-dist-1.15.0.jar:1.15.0]
        at java.util.concurrent.CompletableFuture.uniHandle(Unknown Source) 
~[?:?]
        at java.util.concurrent.CompletableFuture$UniHandle.tryFire(Unknown 
Source) ~[?:?]
        at java.util.concurrent.CompletableFuture$Completion.run(Unknown 
Source) ~[?:?]
        at 
org.apache.flink.runtime.rpc.akka.AkkaRpcActor.lambda$handleRunAsync$4(AkkaRpcActor.java:443)
 ~[flink-rpc-akka_630cbffc-5067-4638-8ad0-1010bc943afa.jar:1.15.0]
        at 
org.apache.flink.runtime.concurrent.akka.ClassLoadingUtils.runWithContextClassLoader(ClassLoadingUtils.java:68)
 ~[flink-rpc-akka_630cbffc-5067-4638-8ad0-1010bc943afa.jar:1.15.0]
        at 
org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRunAsync(AkkaRpcActor.java:443)
 ~[flink-rpc-akka_630cbffc-5067-4638-8ad0-1010bc943afa.jar:1.15.0]
        at 
org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRpcMessage(AkkaRpcActor.java:213)
 ~[flink-rpc-akka_630cbffc-5067-4638-8ad0-1010bc943afa.jar:1.15.0]
        at 
org.apache.flink.runtime.rpc.akka.FencedAkkaRpcActor.handleRpcMessage(FencedAkkaRpcActor.java:78)
 ~[flink-rpc-akka_630cbffc-5067-4638-8ad0-1010bc943afa.jar:1.15.0]
        at 
org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleMessage(AkkaRpcActor.java:163)
 ~[flink-rpc-akka_630cbffc-5067-4638-8ad0-1010bc943afa.jar:1.15.0]
        at akka.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:24) 
[flink-rpc-akka_630cbffc-5067-4638-8ad0-1010bc943afa.jar:1.15.0]
        at akka.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:20) 
[flink-rpc-akka_630cbffc-5067-4638-8ad0-1010bc943afa.jar:1.15.0]
        at scala.PartialFunction.applyOrElse(PartialFunction.scala:123) 
[flink-rpc-akka_630cbffc-5067-4638-8ad0-1010bc943afa.jar:1.15.0]
        at scala.PartialFunction.applyOrElse$(PartialFunction.scala:122) 
[flink-rpc-akka_630cbffc-5067-4638-8ad0-1010bc943afa.jar:1.15.0]
        at akka.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:20) 
[flink-rpc-akka_630cbffc-5067-4638-8ad0-1010bc943afa.jar:1.15.0]
        at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:171) 
[flink-rpc-akka_630cbffc-5067-4638-8ad0-1010bc943afa.jar:1.15.0]
        at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:172) 
[flink-rpc-akka_630cbffc-5067-4638-8ad0-1010bc943afa.jar:1.15.0]
        at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:172) 
[flink-rpc-akka_630cbffc-5067-4638-8ad0-1010bc943afa.jar:1.15.0]
        at akka.actor.Actor.aroundReceive(Actor.scala:537) 
[flink-rpc-akka_630cbffc-5067-4638-8ad0-1010bc943afa.jar:1.15.0]
        at akka.actor.Actor.aroundReceive$(Actor.scala:535) 
[flink-rpc-akka_630cbffc-5067-4638-8ad0-1010bc943afa.jar:1.15.0]
        at akka.actor.AbstractActor.aroundReceive(AbstractActor.scala:220) 
[flink-rpc-akka_630cbffc-5067-4638-8ad0-1010bc943afa.jar:1.15.0]
        at akka.actor.ActorCell.receiveMessage(ActorCell.scala:580) 
[flink-rpc-akka_630cbffc-5067-4638-8ad0-1010bc943afa.jar:1.15.0]
        at akka.actor.ActorCell.invoke(ActorCell.scala:548) 
[flink-rpc-akka_630cbffc-5067-4638-8ad0-1010bc943afa.jar:1.15.0]
        at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:270) 
[flink-rpc-akka_630cbffc-5067-4638-8ad0-1010bc943afa.jar:1.15.0]
        at akka.dispatch.Mailbox.run(Mailbox.scala:231) 
[flink-rpc-akka_630cbffc-5067-4638-8ad0-1010bc943afa.jar:1.15.0]
        at akka.dispatch.Mailbox.exec(Mailbox.scala:243) 
[flink-rpc-akka_630cbffc-5067-4638-8ad0-1010bc943afa.jar:1.15.0]
        at java.util.concurrent.ForkJoinTask.doExec(Unknown Source) [?:?]
        at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(Unknown 
Source) [?:?]
        at java.util.concurrent.ForkJoinPool.scan(Unknown Source) [?:?]
        at java.util.concurrent.ForkJoinPool.runWorker(Unknown Source) [?:?]
        at java.util.concurrent.ForkJoinWorkerThread.run(Unknown Source) [?:?]
   Caused by: org.apache.flink.runtime.jobmaster.JobMasterException: Could not 
start the JobMaster.
        at 
org.apache.flink.runtime.jobmaster.JobMaster.onStart(JobMaster.java:390) 
~[flink-dist-1.15.0.jar:1.15.0]
        at 
org.apache.flink.runtime.rpc.RpcEndpoint.internalCallOnStart(RpcEndpoint.java:181)
 ~[flink-dist-1.15.0.jar:1.15.0]
        at 
org.apache.flink.runtime.rpc.akka.AkkaRpcActor$StoppedState.lambda$start$0(AkkaRpcActor.java:612)
 ~[flink-rpc-akka_630cbffc-5067-4638-8ad0-1010bc943afa.jar:1.15.0]
        at 
org.apache.flink.runtime.concurrent.akka.ClassLoadingUtils.runWithContextClassLoader(ClassLoadingUtils.java:68)
 ~[flink-rpc-akka_630cbffc-5067-4638-8ad0-1010bc943afa.jar:1.15.0]
        at 
org.apache.flink.runtime.rpc.akka.AkkaRpcActor$StoppedState.start(AkkaRpcActor.java:611)
 ~[flink-rpc-akka_630cbffc-5067-4638-8ad0-1010bc943afa.jar:1.15.0]
        at 
org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleControlMessage(AkkaRpcActor.java:185)
 ~[flink-rpc-akka_630cbffc-5067-4638-8ad0-1010bc943afa.jar:1.15.0]
        at akka.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:24) ~[?:?]
        at akka.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:20) ~[?:?]
        at scala.PartialFunction.applyOrElse(PartialFunction.scala:123) 
~[flink-scala_2.12-1.15.0.jar:1.15.0]
        at scala.PartialFunction.applyOrElse$(PartialFunction.scala:122) 
~[flink-scala_2.12-1.15.0.jar:1.15.0]
        at akka.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:20) 
~[?:?]
        at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:171) 
~[flink-scala_2.12-1.15.0.jar:1.15.0]
        ... 14 more
   Caused by: org.apache.flink.util.FlinkRuntimeException: Failed to start the 
operator coordinators
        at 
org.apache.flink.runtime.scheduler.DefaultOperatorCoordinatorHandler.startOperatorCoordinators(DefaultOperatorCoordinatorHandler.java:169)
 ~[flink-dist-1.15.0.jar:1.15.0]
        at 
org.apache.flink.runtime.scheduler.DefaultOperatorCoordinatorHandler.startAllOperatorCoordinators(DefaultOperatorCoordinatorHandler.java:82)
 ~[flink-dist-1.15.0.jar:1.15.0]
        at 
org.apache.flink.runtime.scheduler.SchedulerBase.startScheduling(SchedulerBase.java:624)
 ~[flink-dist-1.15.0.jar:1.15.0]
        at 
org.apache.flink.runtime.jobmaster.JobMaster.startScheduling(JobMaster.java:1010)
 ~[flink-dist-1.15.0.jar:1.15.0]
        at 
org.apache.flink.runtime.jobmaster.JobMaster.startJobExecution(JobMaster.java:927)
 ~[flink-dist-1.15.0.jar:1.15.0]
        at 
org.apache.flink.runtime.jobmaster.JobMaster.onStart(JobMaster.java:388) 
~[flink-dist-1.15.0.jar:1.15.0]
        at 
org.apache.flink.runtime.rpc.RpcEndpoint.internalCallOnStart(RpcEndpoint.java:181)
 ~[flink-dist-1.15.0.jar:1.15.0]
        at 
org.apache.flink.runtime.rpc.akka.AkkaRpcActor$StoppedState.lambda$start$0(AkkaRpcActor.java:612)
 ~[flink-rpc-akka_630cbffc-5067-4638-8ad0-1010bc943afa.jar:1.15.0]
        at 
org.apache.flink.runtime.concurrent.akka.ClassLoadingUtils.runWithContextClassLoader(ClassLoadingUtils.java:68)
 ~[flink-rpc-akka_630cbffc-5067-4638-8ad0-1010bc943afa.jar:1.15.0]
        at 
org.apache.flink.runtime.rpc.akka.AkkaRpcActor$StoppedState.start(AkkaRpcActor.java:611)
 ~[flink-rpc-akka_630cbffc-5067-4638-8ad0-1010bc943afa.jar:1.15.0]
        at 
org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleControlMessage(AkkaRpcActor.java:185)
 ~[flink-rpc-akka_630cbffc-5067-4638-8ad0-1010bc943afa.jar:1.15.0]
        at akka.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:24) ~[?:?]
        at akka.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:20) ~[?:?]
        at scala.PartialFunction.applyOrElse(PartialFunction.scala:123) 
~[flink-scala_2.12-1.15.0.jar:1.15.0]
        at scala.PartialFunction.applyOrElse$(PartialFunction.scala:122) 
~[flink-scala_2.12-1.15.0.jar:1.15.0]
        at akka.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:20) 
~[?:?]
        at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:171) 
~[flink-scala_2.12-1.15.0.jar:1.15.0]
        ... 14 more
   Caused by: org.apache.hudi.exception.HoodieIOException: Failed to get 
instance of org.apache.hadoop.fs.FileSystem
        at org.apache.hudi.common.fs.FSUtils.getFs(FSUtils.java:109) 
~[hudi-flink1.15-bundle-0.12.0.jar:0.12.0]
        at org.apache.hudi.common.fs.FSUtils.getFs(FSUtils.java:100) 
~[hudi-flink1.15-bundle-0.12.0.jar:0.12.0]
        at org.apache.hudi.util.StreamerUtil.tableExists(StreamerUtil.java:338) 
~[hudi-flink1.15-bundle-0.12.0.jar:0.12.0]
        at 
org.apache.hudi.util.StreamerUtil.initTableIfNotExists(StreamerUtil.java:307) 
~[hudi-flink1.15-bundle-0.12.0.jar:0.12.0]
        at 
org.apache.hudi.util.StreamerUtil.initTableIfNotExists(StreamerUtil.java:294) 
~[hudi-flink1.15-bundle-0.12.0.jar:0.12.0]
        at 
org.apache.hudi.sink.StreamWriteOperatorCoordinator.start(StreamWriteOperatorCoordinator.java:179)
 ~[hudi-flink1.15-bundle-0.12.0.jar:0.12.0]
        at 
org.apache.flink.runtime.operators.coordination.OperatorCoordinatorHolder.start(OperatorCoordinatorHolder.java:194)
 ~[flink-dist-1.15.0.jar:1.15.0]
        at 
org.apache.flink.runtime.scheduler.DefaultOperatorCoordinatorHandler.startOperatorCoordinators(DefaultOperatorCoordinatorHandler.java:164)
 ~[flink-dist-1.15.0.jar:1.15.0]
        at 
org.apache.flink.runtime.scheduler.DefaultOperatorCoordinatorHandler.startAllOperatorCoordinators(DefaultOperatorCoordinatorHandler.java:82)
 ~[flink-dist-1.15.0.jar:1.15.0]
        at 
org.apache.flink.runtime.scheduler.SchedulerBase.startScheduling(SchedulerBase.java:624)
 ~[flink-dist-1.15.0.jar:1.15.0]
        at 
org.apache.flink.runtime.jobmaster.JobMaster.startScheduling(JobMaster.java:1010)
 ~[flink-dist-1.15.0.jar:1.15.0]
        at 
org.apache.flink.runtime.jobmaster.JobMaster.startJobExecution(JobMaster.java:927)
 ~[flink-dist-1.15.0.jar:1.15.0]
        at 
org.apache.flink.runtime.jobmaster.JobMaster.onStart(JobMaster.java:388) 
~[flink-dist-1.15.0.jar:1.15.0]
        at 
org.apache.flink.runtime.rpc.RpcEndpoint.internalCallOnStart(RpcEndpoint.java:181)
 ~[flink-dist-1.15.0.jar:1.15.0]
        at 
org.apache.flink.runtime.rpc.akka.AkkaRpcActor$StoppedState.lambda$start$0(AkkaRpcActor.java:612)
 ~[flink-rpc-akka_630cbffc-5067-4638-8ad0-1010bc943afa.jar:1.15.0]
        at 
org.apache.flink.runtime.concurrent.akka.ClassLoadingUtils.runWithContextClassLoader(ClassLoadingUtils.java:68)
 ~[flink-rpc-akka_630cbffc-5067-4638-8ad0-1010bc943afa.jar:1.15.0]
        at 
org.apache.flink.runtime.rpc.akka.AkkaRpcActor$StoppedState.start(AkkaRpcActor.java:611)
 ~[flink-rpc-akka_630cbffc-5067-4638-8ad0-1010bc943afa.jar:1.15.0]
        at 
org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleControlMessage(AkkaRpcActor.java:185)
 ~[flink-rpc-akka_630cbffc-5067-4638-8ad0-1010bc943afa.jar:1.15.0]
        at akka.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:24) ~[?:?]
        at akka.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:20) ~[?:?]
        at scala.PartialFunction.applyOrElse(PartialFunction.scala:123) 
~[flink-scala_2.12-1.15.0.jar:1.15.0]
        at scala.PartialFunction.applyOrElse$(PartialFunction.scala:122) 
~[flink-scala_2.12-1.15.0.jar:1.15.0]
        at akka.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:20) 
~[?:?]
        at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:171) 
~[flink-scala_2.12-1.15.0.jar:1.15.0]
        ... 14 more
   Caused by: org.apache.hadoop.fs.UnsupportedFileSystemException: No 
FileSystem for scheme "s3"
        at 
org.apache.hadoop.fs.FileSystem.getFileSystemClass(FileSystem.java:3353) 
~[flink-s3-fs-hadoop-1.15.0.jar:1.15.0]
        at 
org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3373) 
~[flink-s3-fs-hadoop-1.15.0.jar:1.15.0]
        at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:125) 
~[flink-s3-fs-hadoop-1.15.0.jar:1.15.0]
        at 
org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:3424) 
~[flink-s3-fs-hadoop-1.15.0.jar:1.15.0]
        at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:3392) 
~[flink-s3-fs-hadoop-1.15.0.jar:1.15.0]
        at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:485) 
~[flink-s3-fs-hadoop-1.15.0.jar:1.15.0]
        at org.apache.hadoop.fs.Path.getFileSystem(Path.java:365) 
~[flink-s3-fs-hadoop-1.15.0.jar:1.15.0]
        at org.apache.hudi.common.fs.FSUtils.getFs(FSUtils.java:107) 
~[hudi-flink1.15-bundle-0.12.0.jar:0.12.0]
        at org.apache.hudi.common.fs.FSUtils.getFs(FSUtils.java:100) 
~[hudi-flink1.15-bundle-0.12.0.jar:0.12.0]
        at org.apache.hudi.util.StreamerUtil.tableExists(StreamerUtil.java:338) 
~[hudi-flink1.15-bundle-0.12.0.jar:0.12.0]
        at 
org.apache.hudi.util.StreamerUtil.initTableIfNotExists(StreamerUtil.java:307) 
~[hudi-flink1.15-bundle-0.12.0.jar:0.12.0]
        at 
org.apache.hudi.util.StreamerUtil.initTableIfNotExists(StreamerUtil.java:294) 
~[hudi-flink1.15-bundle-0.12.0.jar:0.12.0]
        at 
org.apache.hudi.sink.StreamWriteOperatorCoordinator.start(StreamWriteOperatorCoordinator.java:179)
 ~[hudi-flink1.15-bundle-0.12.0.jar:0.12.0]
        at 
org.apache.flink.runtime.operators.coordination.OperatorCoordinatorHolder.start(OperatorCoordinatorHolder.java:194)
 ~[flink-dist-1.15.0.jar:1.15.0]
        at 
org.apache.flink.runtime.scheduler.DefaultOperatorCoordinatorHandler.startOperatorCoordinators(DefaultOperatorCoordinatorHandler.java:164)
 ~[flink-dist-1.15.0.jar:1.15.0]
        at 
org.apache.flink.runtime.scheduler.DefaultOperatorCoordinatorHandler.startAllOperatorCoordinators(DefaultOperatorCoordinatorHandler.java:82)
 ~[flink-dist-1.15.0.jar:1.15.0]
        at 
org.apache.flink.runtime.scheduler.SchedulerBase.startScheduling(SchedulerBase.java:624)
 ~[flink-dist-1.15.0.jar:1.15.0]
        at 
org.apache.flink.runtime.jobmaster.JobMaster.startScheduling(JobMaster.java:1010)
 ~[flink-dist-1.15.0.jar:1.15.0]
        at 
org.apache.flink.runtime.jobmaster.JobMaster.startJobExecution(JobMaster.java:927)
 ~[flink-dist-1.15.0.jar:1.15.0]
        at 
org.apache.flink.runtime.jobmaster.JobMaster.onStart(JobMaster.java:388) 
~[flink-dist-1.15.0.jar:1.15.0]
        at 
org.apache.flink.runtime.rpc.RpcEndpoint.internalCallOnStart(RpcEndpoint.java:181)
 ~[flink-dist-1.15.0.jar:1.15.0]
        at 
org.apache.flink.runtime.rpc.akka.AkkaRpcActor$StoppedState.lambda$start$0(AkkaRpcActor.java:612)
 ~[flink-rpc-akka_630cbffc-5067-4638-8ad0-1010bc943afa.jar:1.15.0]
        at 
org.apache.flink.runtime.concurrent.akka.ClassLoadingUtils.runWithContextClassLoader(ClassLoadingUtils.java:68)
 ~[flink-rpc-akka_630cbffc-5067-4638-8ad0-1010bc943afa.jar:1.15.0]
        at 
org.apache.flink.runtime.rpc.akka.AkkaRpcActor$StoppedState.start(AkkaRpcActor.java:611)
 ~[flink-rpc-akka_630cbffc-5067-4638-8ad0-1010bc943afa.jar:1.15.0]
        at 
org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleControlMessage(AkkaRpcActor.java:185)
 ~[flink-rpc-akka_630cbffc-5067-4638-8ad0-1010bc943afa.jar:1.15.0]
        at akka.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:24) ~[?:?]
        at akka.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:20) ~[?:?]
        at scala.PartialFunction.applyOrElse(PartialFunction.scala:123) 
~[flink-scala_2.12-1.15.0.jar:1.15.0]
        at scala.PartialFunction.applyOrElse$(PartialFunction.scala:122) 
~[flink-scala_2.12-1.15.0.jar:1.15.0]
        at akka.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:20) 
~[?:?]
        at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:171) 
~[flink-scala_2.12-1.15.0.jar:1.15.0]
        ... 14 more
   
   

