PINK97 opened a new issue, #6402:
URL: https://github.com/apache/seatunnel/issues/6402

   ### Search before asking
   
   - [X] I had searched in the 
[issues](https://github.com/apache/seatunnel/issues?q=is%3Aissue+label%3A%22bug%22)
 and found no similar issues.
   
   
   ### What happened
   
   The cluster's NameNode runs in HA mode. When I use HDFS as the checkpoint storage backend and test the built-in v2.streaming.conf.template, the following error is reported. The contents of my seatunnel.yaml configuration file are as follows:
   
![image](https://github.com/apache/seatunnel/assets/130653156/4114de8f-1099-4411-bc7d-fe0a89bfc359)
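   For reference, my understanding of the checkpoint-storage documentation is that an HDFS HA setup is configured roughly like the sketch below; the nameservice name and NameNode addresses here are placeholders, not values from my cluster:
   
   ```yaml
   seatunnel:
     engine:
       checkpoint:
         interval: 2000
         storage:
           type: hdfs
           max-retained: 3
           plugin-config:
             storage.type: hdfs
             # "my-nameservice", "nn1-host", and "nn2-host" are placeholders
             fs.defaultFS: hdfs://my-nameservice
             seatunnel.hadoop.dfs.nameservices: my-nameservice
             seatunnel.hadoop.dfs.ha.namenodes.my-nameservice: nn1,nn2
             seatunnel.hadoop.dfs.namenode.rpc-address.my-nameservice.nn1: nn1-host:8020
             seatunnel.hadoop.dfs.namenode.rpc-address.my-nameservice.nn2: nn2-host:8020
             seatunnel.hadoop.dfs.client.failover.proxy.provider.my-nameservice: org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider
   ```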
   
   
   ### SeaTunnel Version
   
   2.3.3
   
   ### SeaTunnel Config
   
   ```conf
   env {
     # You can set flink configuration here
     execution.parallelism = 2
     job.mode = "STREAMING"
     checkpoint.interval = 2000
     #execution.checkpoint.interval = 10000
     #execution.checkpoint.data-uri = "hdfs://localhost:9000/checkpoint"
   }
   
   source {
     # This is an example source plugin **only for testing and demonstrating the 
source plugin feature**
     FakeSource {
       parallelism = 2
       result_table_name = "fake"
       row.num = 16
       schema = {
         fields {
           name = "string"
           age = "int"
         }
       }
     }
   
     # If you would like to get more information about how to configure 
SeaTunnel and see full list of source plugins,
     # please go to https://seatunnel.apache.org/docs/category/source-v2
   }
   
   sink {
     Console {
     }
   
     # If you would like to get more information about how to configure 
SeaTunnel and see full list of sink plugins,
     # please go to https://seatunnel.apache.org/docs/category/sink-v2
   }
   ```
   
   
   ### Running Command
   
   ```shell
   ./bin/seatunnel.sh --config ./config/v2.streaming.conf.template
   ```
   
   
   ### Error Exception
   
   ```log
   Exception in thread "main" 
org.apache.seatunnel.core.starter.exception.CommandExecuteException: SeaTunnel 
job executed failed
           at 
org.apache.seatunnel.core.starter.seatunnel.command.ClientExecuteCommand.execute(ClientExecuteCommand.java:191)
           at org.apache.seatunnel.core.starter.SeaTunnel.run(SeaTunnel.java:40)
           at 
org.apache.seatunnel.core.starter.seatunnel.SeaTunnelClient.main(SeaTunnelClient.java:34)
   Caused by: java.util.concurrent.CompletionException: 
org.apache.seatunnel.engine.checkpoint.storage.exception.CheckpointStorageException:
 Failed to list files from names/seatunnel/checkpoint/815064978619891713
           at 
com.hazelcast.spi.impl.AbstractInvocationFuture.wrapInCompletionException(AbstractInvocationFuture.java:1347)
           at 
com.hazelcast.spi.impl.AbstractInvocationFuture.cascadeException(AbstractInvocationFuture.java:1340)
           at 
com.hazelcast.spi.impl.AbstractInvocationFuture.access$200(AbstractInvocationFuture.java:65)
           at 
com.hazelcast.spi.impl.AbstractInvocationFuture$ApplyNode.execute(AbstractInvocationFuture.java:1478)
           at 
com.hazelcast.spi.impl.AbstractInvocationFuture.unblockOtherNode(AbstractInvocationFuture.java:797)
           at 
com.hazelcast.spi.impl.AbstractInvocationFuture.unblockAll(AbstractInvocationFuture.java:759)
           at 
com.hazelcast.spi.impl.AbstractInvocationFuture.complete0(AbstractInvocationFuture.java:1235)
           at 
com.hazelcast.spi.impl.AbstractInvocationFuture.completeExceptionallyInternal(AbstractInvocationFuture.java:1223)
           at 
com.hazelcast.spi.impl.AbstractInvocationFuture.completeExceptionally(AbstractInvocationFuture.java:709)
           at 
com.hazelcast.client.impl.spi.impl.ClientInvocation.completeExceptionally(ClientInvocation.java:294)
           at 
com.hazelcast.client.impl.spi.impl.ClientInvocation.notifyExceptionWithOwnedPermission(ClientInvocation.java:321)
           at 
com.hazelcast.client.impl.spi.impl.ClientInvocation.notifyException(ClientInvocation.java:304)
           at 
com.hazelcast.client.impl.spi.impl.ClientResponseHandlerSupplier.handleResponse(ClientResponseHandlerSupplier.java:164)
           at 
com.hazelcast.client.impl.spi.impl.ClientResponseHandlerSupplier.process(ClientResponseHandlerSupplier.java:141)
           at 
com.hazelcast.client.impl.spi.impl.ClientResponseHandlerSupplier.access$300(ClientResponseHandlerSupplier.java:60)
           at 
com.hazelcast.client.impl.spi.impl.ClientResponseHandlerSupplier$DynamicResponseHandler.accept(ClientResponseHandlerSupplier.java:251)
           at 
com.hazelcast.client.impl.spi.impl.ClientResponseHandlerSupplier$DynamicResponseHandler.accept(ClientResponseHandlerSupplier.java:243)
           at 
com.hazelcast.client.impl.connection.tcp.TcpClientConnection.handleClientMessage(TcpClientConnection.java:245)
           at 
com.hazelcast.client.impl.protocol.util.ClientMessageDecoder.handleMessage(ClientMessageDecoder.java:135)
           at 
com.hazelcast.client.impl.protocol.util.ClientMessageDecoder.onRead(ClientMessageDecoder.java:89)
           at 
com.hazelcast.internal.networking.nio.NioInboundPipeline.process(NioInboundPipeline.java:136)
           at 
com.hazelcast.internal.networking.nio.NioThread.processSelectionKey(NioThread.java:383)
           at 
com.hazelcast.internal.networking.nio.NioThread.processSelectionKeys(NioThread.java:368)
           at 
com.hazelcast.internal.networking.nio.NioThread.selectLoop(NioThread.java:294)
           at 
com.hazelcast.internal.networking.nio.NioThread.executeRun(NioThread.java:249)
           at 
com.hazelcast.internal.util.executor.HazelcastManagedThread.run(HazelcastManagedThread.java:102)
   Caused by: 
org.apache.seatunnel.engine.checkpoint.storage.exception.CheckpointStorageException:
 Failed to list files from names/seatunnel/checkpoint/815064978619891713
           at 
org.apache.seatunnel.engine.checkpoint.storage.hdfs.HdfsStorage.getFileNames(HdfsStorage.java:346)
           at 
org.apache.seatunnel.engine.checkpoint.storage.hdfs.HdfsStorage.getLatestCheckpointByJobIdAndPipelineId(HdfsStorage.java:188)
           at 
org.apache.seatunnel.engine.server.checkpoint.CheckpointManager.lambda$new$0(CheckpointManager.java:117)
           at 
java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:193)
           at 
java.util.HashMap$ValueSpliterator.forEachRemaining(HashMap.java:1628)
           at 
java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:482)
           at 
java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:472)
           at java.util.stream.ReduceOps$ReduceTask.doLeaf(ReduceOps.java:747)
           at java.util.stream.ReduceOps$ReduceTask.doLeaf(ReduceOps.java:721)
           at java.util.stream.AbstractTask.compute(AbstractTask.java:316)
           at 
java.util.concurrent.CountedCompleter.exec(CountedCompleter.java:731)
           at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:289)
           at java.util.concurrent.ForkJoinTask.doInvoke(ForkJoinTask.java:401)
           at java.util.concurrent.ForkJoinTask.invoke(ForkJoinTask.java:734)
           at 
java.util.stream.ReduceOps$ReduceOp.evaluateParallel(ReduceOps.java:714)
           at 
java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:233)
           at 
java.util.stream.ReferencePipeline.collect(ReferencePipeline.java:499)
           at 
org.apache.seatunnel.engine.server.checkpoint.CheckpointManager.<init>(CheckpointManager.java:146)
           at 
org.apache.seatunnel.engine.server.master.JobMaster.initCheckPointManager(JobMaster.java:251)
           at 
org.apache.seatunnel.engine.server.master.JobMaster.init(JobMaster.java:234)
           at 
org.apache.seatunnel.engine.server.CoordinatorService.lambda$submitJob$5(CoordinatorService.java:461)
           at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
           at java.util.concurrent.FutureTask.run(FutureTask.java:266)
           at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
           at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
           at java.lang.Thread.run(Thread.java:748)
   Caused by: org.apache.hadoop.security.AccessControlException: 
com.hazelcast.client.UndefinedErrorCodeException: Class name: 
org.apache.hadoop.ipc.RemoteException, Message: SIMPLE authentication is not 
enabled.  Available:[TOKEN, KERBEROS]
           at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native 
Method)
           at 
sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
           at 
sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
           at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
           at 
org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:121)
           at 
org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:88)
           at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:1668)
           at 
org.apache.hadoop.hdfs.DistributedFileSystem$29.doCall(DistributedFileSystem.java:1582)
           at 
org.apache.hadoop.hdfs.DistributedFileSystem$29.doCall(DistributedFileSystem.java:1579)
           at 
org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
           at 
org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1594)
           at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:1683)
           at 
org.apache.seatunnel.engine.checkpoint.storage.hdfs.HdfsStorage.getFileNames(HdfsStorage.java:334)
           ... 25 more
   Caused by: com.hazelcast.client.UndefinedErrorCodeException: Class name: 
org.apache.hadoop.ipc.RemoteException, Message: SIMPLE authentication is not 
enabled.  Available:[TOKEN, KERBEROS]
           at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1562)
           at org.apache.hadoop.ipc.Client.call(Client.java:1508)
           at org.apache.hadoop.ipc.Client.call(Client.java:1405)
           at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:233)
           at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:118)
           at com.sun.proxy.$Proxy34.getFileInfo(Unknown Source)
           at 
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getFileInfo(ClientNamenodeProtocolTranslatorPB.java:904)
           at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
           at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
           at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
           at java.lang.reflect.Method.invoke(Method.java:498)
           at 
org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422)
           at 
org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165)
           at 
org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157)
           at 
org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95)
           at 
org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359)
           at com.sun.proxy.$Proxy35.getFileInfo(Unknown Source)
           at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:1666)
           ... 31 more
   ```
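   
   The root-cause line, `SIMPLE authentication is not enabled. Available:[TOKEN, KERBEROS]`, suggests the HDFS client is attempting simple authentication against a Kerberos-secured NameNode. If my reading of the 2.3.x checkpoint-storage documentation is correct, Kerberos credentials can be supplied through the storage plugin-config, roughly as below; the principal and keytab path are placeholders:
   
   ```yaml
   seatunnel:
     engine:
       checkpoint:
         storage:
           type: hdfs
           plugin-config:
             storage.type: hdfs
             fs.defaultFS: hdfs://my-nameservice   # placeholder nameservice
             # placeholder principal and keytab path; substitute real values
             kerberosPrincipal: seatunnel/[email protected]
             kerberosKeytabFilePath: /etc/security/keytabs/seatunnel.keytab
   ```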
   
   
   ### Zeta or Flink or Spark Version
   
   Zeta
   
   ### Java or Scala Version
   
   1.8
   
   ### Screenshots
   
   _No response_
   
   ### Are you willing to submit PR?
   
   - [ ] Yes I am willing to submit a PR!
   
   ### Code of Conduct
   
   - [X] I agree to follow this project's [Code of 
Conduct](https://www.apache.org/foundation/policies/conduct)
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]
