ngk2009 opened a new issue #4297:
URL: https://github.com/apache/hudi/issues/4297


   **_Tips before filing an issue_**
   
   - Have you gone through our [FAQs](https://hudi.apache.org/learn/faq/)?
   yes
   
   **Describe the problem you faced**
   
   I am using Hudi 0.10.0 with Flink 1.13.2. Flink itself is successfully configured for S3 access, and Hudi storage backed by HDFS/Alluxio works normally, but any Hudi operation against S3 storage fails with the error below.
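   
   A minimal sketch of the kind of flink-conf.yaml S3 setup assumed in step 4 below; the endpoint and keys are placeholders, and the exact settings used are not shown in this report:
   
       s3.endpoint: http://<s3-endpoint>
       s3.access-key: <access-key>
       s3.secret-key: <secret-key>
       s3.path.style.access: true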
   
   **To Reproduce**
   
   Steps to reproduce the behavior:
   
   Flink setup:
   1. cd plugins/
   2. mkdir s3-fs-hadoop
   3. cd ..
   4. cp opt/flink-s3-fs-hadoop-1.13.2.jar plugins/s3-fs-hadoop/
   5. Run a Flink YARN session.
   6. In the Flink SQL Client, create the table:
   CREATE TABLE hudi_demo(
       id BIGINT PRIMARY KEY NOT ENFORCED,
       name STRING,
       birthday TIMESTAMP(3),
       ts TIMESTAMP(3),
       `partition` VARCHAR(20)
   ) PARTITIONED BY (`partition`) WITH (
       'connector' = 'hudi',
       'table.type' = 'MERGE_ON_READ',
       'path' = 's3://lakehouse/hudi/demo1/'
   );
   7. Run a query (an INSERT, sketched after these steps, fails the same way):
   select * from hudi_demo;
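   
   A minimal INSERT for the write path (sample values invented for illustration):
   
   INSERT INTO hudi_demo VALUES
       (1, 'Danny', TIMESTAMP '2021-12-01 00:00:01', TIMESTAMP '2021-12-01 00:00:01', 'par1');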
   
   **Expected behavior**
   
   A query or insert against the S3-backed table should succeed. Instead, neither can be executed; both fail with:
   org.apache.hadoop.fs.UnsupportedFileSystemException: No FileSystem for scheme "s3"
   
   **Environment Description**
   
   * Hudi version : 0.10.0
   
   * Spark version : N/A
   
   * Flink version : 1.13.2
   
   * Hive version : N/A
   
   * Hadoop version : 3.0.0-CDH6.3.0
   
   * Storage (HDFS/S3/GCS..) : S3
   
   * Running on Docker? (yes/no) : no
   
   **Stacktrace**
   
   org.apache.flink.runtime.rpc.akka.exceptions.AkkaRpcException: Could not start RpcEndpoint jobmanager_2.
        at org.apache.flink.runtime.rpc.akka.AkkaRpcActor$StoppedState.start(AkkaRpcActor.java:610) ~[flink-dist_2.11-1.13.2.jar:1.13.2]
        at org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleControlMessage(AkkaRpcActor.java:180) ~[flink-dist_2.11-1.13.2.jar:1.13.2]
        at akka.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:26) ~[flink-dist_2.11-1.13.2.jar:1.13.2]
        at akka.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:21) ~[flink-dist_2.11-1.13.2.jar:1.13.2]
        at scala.PartialFunction$class.applyOrElse(PartialFunction.scala:123) ~[flink-dist_2.11-1.13.2.jar:1.13.2]
        at akka.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:21) ~[flink-dist_2.11-1.13.2.jar:1.13.2]
        at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:170) ~[flink-dist_2.11-1.13.2.jar:1.13.2]
        at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:171) ~[flink-dist_2.11-1.13.2.jar:1.13.2]
        at akka.actor.Actor$class.aroundReceive(Actor.scala:517) ~[flink-dist_2.11-1.13.2.jar:1.13.2]
        at akka.actor.AbstractActor.aroundReceive(AbstractActor.scala:225) ~[flink-dist_2.11-1.13.2.jar:1.13.2]
        at akka.actor.ActorCell.receiveMessage(ActorCell.scala:592) ~[flink-dist_2.11-1.13.2.jar:1.13.2]
        at akka.actor.ActorCell.invoke(ActorCell.scala:561) ~[flink-dist_2.11-1.13.2.jar:1.13.2]
        at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:258) [flink-dist_2.11-1.13.2.jar:1.13.2]
        at akka.dispatch.Mailbox.run(Mailbox.scala:225) [flink-dist_2.11-1.13.2.jar:1.13.2]
        at akka.dispatch.Mailbox.exec(Mailbox.scala:235) [flink-dist_2.11-1.13.2.jar:1.13.2]
        at akka.dispatch.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260) [flink-dist_2.11-1.13.2.jar:1.13.2]
        at akka.dispatch.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339) [flink-dist_2.11-1.13.2.jar:1.13.2]
        at akka.dispatch.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979) [flink-dist_2.11-1.13.2.jar:1.13.2]
        at akka.dispatch.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107) [flink-dist_2.11-1.13.2.jar:1.13.2]
   Caused by: org.apache.flink.runtime.jobmaster.JobMasterException: Could not start the JobMaster.
        at org.apache.flink.runtime.jobmaster.JobMaster.onStart(JobMaster.java:385) ~[flink-dist_2.11-1.13.2.jar:1.13.2]
        at org.apache.flink.runtime.rpc.RpcEndpoint.internalCallOnStart(RpcEndpoint.java:181) ~[flink-dist_2.11-1.13.2.jar:1.13.2]
        at org.apache.flink.runtime.rpc.akka.AkkaRpcActor$StoppedState.start(AkkaRpcActor.java:605) ~[flink-dist_2.11-1.13.2.jar:1.13.2]
        ... 18 more
   Caused by: org.apache.flink.util.FlinkRuntimeException: Failed to start the operator coordinators
        at org.apache.flink.runtime.scheduler.DefaultOperatorCoordinatorHandler.startAllOperatorCoordinators(DefaultOperatorCoordinatorHandler.java:90) ~[flink-dist_2.11-1.13.2.jar:1.13.2]
        at org.apache.flink.runtime.scheduler.SchedulerBase.startScheduling(SchedulerBase.java:592) ~[flink-dist_2.11-1.13.2.jar:1.13.2]
        at org.apache.flink.runtime.jobmaster.JobMaster.startScheduling(JobMaster.java:955) ~[flink-dist_2.11-1.13.2.jar:1.13.2]
        at org.apache.flink.runtime.jobmaster.JobMaster.startJobExecution(JobMaster.java:873) ~[flink-dist_2.11-1.13.2.jar:1.13.2]
        at org.apache.flink.runtime.jobmaster.JobMaster.onStart(JobMaster.java:383) ~[flink-dist_2.11-1.13.2.jar:1.13.2]
        at org.apache.flink.runtime.rpc.RpcEndpoint.internalCallOnStart(RpcEndpoint.java:181) ~[flink-dist_2.11-1.13.2.jar:1.13.2]
        at org.apache.flink.runtime.rpc.akka.AkkaRpcActor$StoppedState.start(AkkaRpcActor.java:605) ~[flink-dist_2.11-1.13.2.jar:1.13.2]
        ... 18 more
   Caused by: org.apache.hudi.exception.HoodieIOException: Failed to get instance of org.apache.hadoop.fs.FileSystem
        at org.apache.hudi.common.fs.FSUtils.getFs(FSUtils.java:104) ~[hudi-flink-bundle_2.11-0.10.0.jar:0.10.0]
        at org.apache.hudi.util.StreamerUtil.tableExists(StreamerUtil.java:266) ~[hudi-flink-bundle_2.11-0.10.0.jar:0.10.0]
        at org.apache.hudi.util.StreamerUtil.initTableIfNotExists(StreamerUtil.java:236) ~[hudi-flink-bundle_2.11-0.10.0.jar:0.10.0]
        at org.apache.hudi.sink.StreamWriteOperatorCoordinator.start(StreamWriteOperatorCoordinator.java:164) ~[hudi-flink-bundle_2.11-0.10.0.jar:0.10.0]
        at org.apache.flink.runtime.operators.coordination.OperatorCoordinatorHolder.start(OperatorCoordinatorHolder.java:194) ~[flink-dist_2.11-1.13.2.jar:1.13.2]
        at org.apache.flink.runtime.scheduler.DefaultOperatorCoordinatorHandler.startAllOperatorCoordinators(DefaultOperatorCoordinatorHandler.java:85) ~[flink-dist_2.11-1.13.2.jar:1.13.2]
        at org.apache.flink.runtime.scheduler.SchedulerBase.startScheduling(SchedulerBase.java:592) ~[flink-dist_2.11-1.13.2.jar:1.13.2]
        at org.apache.flink.runtime.jobmaster.JobMaster.startScheduling(JobMaster.java:955) ~[flink-dist_2.11-1.13.2.jar:1.13.2]
        at org.apache.flink.runtime.jobmaster.JobMaster.startJobExecution(JobMaster.java:873) ~[flink-dist_2.11-1.13.2.jar:1.13.2]
        at org.apache.flink.runtime.jobmaster.JobMaster.onStart(JobMaster.java:383) ~[flink-dist_2.11-1.13.2.jar:1.13.2]
        at org.apache.flink.runtime.rpc.RpcEndpoint.internalCallOnStart(RpcEndpoint.java:181) ~[flink-dist_2.11-1.13.2.jar:1.13.2]
        at org.apache.flink.runtime.rpc.akka.AkkaRpcActor$StoppedState.start(AkkaRpcActor.java:605) ~[flink-dist_2.11-1.13.2.jar:1.13.2]
        ... 18 more
   Caused by: org.apache.hadoop.fs.UnsupportedFileSystemException: No FileSystem for scheme "s3"
        at org.apache.hadoop.fs.FileSystem.getFileSystemClass(FileSystem.java:3215) ~[hadoop-common-3.0.0-cdh6.3.0.jar:?]
        at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3235) ~[hadoop-common-3.0.0-cdh6.3.0.jar:?]
        at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:123) ~[hadoop-common-3.0.0-cdh6.3.0.jar:?]
        at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:3286) ~[hadoop-common-3.0.0-cdh6.3.0.jar:?]
        at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:3254) ~[hadoop-common-3.0.0-cdh6.3.0.jar:?]
        at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:478) ~[hadoop-common-3.0.0-cdh6.3.0.jar:?]
        at org.apache.hadoop.fs.Path.getFileSystem(Path.java:361) ~[hadoop-common-3.0.0-cdh6.3.0.jar:?]
        at org.apache.hudi.common.fs.FSUtils.getFs(FSUtils.java:102) ~[hudi-flink-bundle_2.11-0.10.0.jar:0.10.0]
        at org.apache.hudi.util.StreamerUtil.tableExists(StreamerUtil.java:266) ~[hudi-flink-bundle_2.11-0.10.0.jar:0.10.0]
        at org.apache.hudi.util.StreamerUtil.initTableIfNotExists(StreamerUtil.java:236) ~[hudi-flink-bundle_2.11-0.10.0.jar:0.10.0]
        at org.apache.hudi.sink.StreamWriteOperatorCoordinator.start(StreamWriteOperatorCoordinator.java:164) ~[hudi-flink-bundle_2.11-0.10.0.jar:0.10.0]
        at org.apache.flink.runtime.operators.coordination.OperatorCoordinatorHolder.start(OperatorCoordinatorHolder.java:194) ~[flink-dist_2.11-1.13.2.jar:1.13.2]
        at org.apache.flink.runtime.scheduler.DefaultOperatorCoordinatorHandler.startAllOperatorCoordinators(DefaultOperatorCoordinatorHandler.java:85) ~[flink-dist_2.11-1.13.2.jar:1.13.2]
        at org.apache.flink.runtime.scheduler.SchedulerBase.startScheduling(SchedulerBase.java:592) ~[flink-dist_2.11-1.13.2.jar:1.13.2]
        at org.apache.flink.runtime.jobmaster.JobMaster.startScheduling(JobMaster.java:955) ~[flink-dist_2.11-1.13.2.jar:1.13.2]
        at org.apache.flink.runtime.jobmaster.JobMaster.startJobExecution(JobMaster.java:873) ~[flink-dist_2.11-1.13.2.jar:1.13.2]
        at org.apache.flink.runtime.jobmaster.JobMaster.onStart(JobMaster.java:383) ~[flink-dist_2.11-1.13.2.jar:1.13.2]
        at org.apache.flink.runtime.rpc.RpcEndpoint.internalCallOnStart(RpcEndpoint.java:181) ~[flink-dist_2.11-1.13.2.jar:1.13.2]
        at org.apache.flink.runtime.rpc.akka.AkkaRpcActor$StoppedState.start(AkkaRpcActor.java:605) ~[flink-dist_2.11-1.13.2.jar:1.13.2]
   
   

