Leeeee16 commented on issue #6573:
URL: https://github.com/apache/seatunnel/issues/6573#issuecomment-2019852386

   I see the [doc](https://seatunnel.apache.org/docs/2.3.3/seatunnel-engine/checkpoint-storage#localfile):
   ```yaml
   seatunnel:
     engine:
       checkpoint:
         interval: 6000
         timeout: 7000
         storage:
           type: hdfs
           max-retained: 3
           plugin-config:
             storage.type: hdfs
             fs.defaultFS: file:/// # Ensure that the directory has write permission
   ```
   I changed my `seatunnel.yaml` accordingly:
   ```yaml
   seatunnel:
     engine:
       history-job-expire-minutes: 1440
       backup-count: 1
       queue-type: blockingqueue
       print-execution-info-interval: 60
       print-job-metrics-info-interval: 60
       slot-service:
         dynamic-slot: true
       checkpoint:
         interval: 120000
         timeout: 2147483647
         storage:
           type: hdfs
           max-retained: 3
           plugin-config:
             storage.type: HDFS
             fs.defaultFS: file:///apps/apache-seatunnel-2.3.3/tmp # Ensure that the directory has write permission
   ```
   Storing checkpoint states fails with the following logs:
   ```
   2024-03-26 16:33:42,257 ERROR org.apache.seatunnel.engine.server.checkpoint.CheckpointCoordinator - store checkpoint states failed.
   org.apache.seatunnel.engine.checkpoint.storage.exception.CheckpointStorageException: Failed to write checkpoint data, state: PipelineState(jobId=824923716566646785, pipelineId=1, checkpointId=1, states=[..............xxx................])
        at org.apache.seatunnel.engine.checkpoint.storage.hdfs.HdfsStorage.storeCheckPoint(HdfsStorage.java:110) ~[seatunnel-starter.jar:2.3.3]
        at org.apache.seatunnel.engine.server.checkpoint.CheckpointCoordinator.completePendingCheckpoint(CheckpointCoordinator.java:701) ~[seatunnel-starter.jar:2.3.3]
        at org.apache.seatunnel.engine.server.checkpoint.CheckpointCoordinator.lambda$null$7(CheckpointCoordinator.java:480) ~[seatunnel-starter.jar:2.3.3]
        at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:760) ~[?:1.8.0_202]
        at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:736) ~[?:1.8.0_202]
        at java.util.concurrent.CompletableFuture$Completion.run(CompletableFuture.java:442) ~[?:1.8.0_202]
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) ~[?:1.8.0_202]
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) ~[?:1.8.0_202]
        at java.lang.Thread.run(Thread.java:748) [?:1.8.0_202]
   Caused by: java.io.IOException: Mkdirs failed to create /seatunnel/checkpoint/824923716566646785 (exists=false, cwd=file:/apps/apache-seatunnel-2.3.3)
        at org.apache.hadoop.fs.ChecksumFileSystem.create(ChecksumFileSystem.java:458) ~[seatunnel-hadoop3-3.1.4-uber-2.3.3-optional.jar:2.3.3]
        at org.apache.hadoop.fs.ChecksumFileSystem.create(ChecksumFileSystem.java:443) ~[seatunnel-hadoop3-3.1.4-uber-2.3.3-optional.jar:2.3.3]
        at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:1118) ~[seatunnel-hadoop3-3.1.4-uber-2.3.3-optional.jar:2.3.3]
        at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:1098) ~[seatunnel-hadoop3-3.1.4-uber-2.3.3-optional.jar:2.3.3]
        at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:987) ~[seatunnel-hadoop3-3.1.4-uber-2.3.3-optional.jar:2.3.3]
        at org.apache.seatunnel.engine.checkpoint.storage.hdfs.HdfsStorage.storeCheckPoint(HdfsStorage.java:107) ~[seatunnel-starter.jar:2.3.3]
        ... 8 more
   ```
   Why does `Mkdirs failed to create ...` occur, even though I have already granted write permission? And how do I use localfile instead of hdfs?
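   Note that the `Caused by` line shows SeaTunnel trying to create `/seatunnel/checkpoint/...` at the filesystem root, not under the path embedded in `fs.defaultFS`, which suggests the checkpoint directory is controlled by a separate option. A hedged sketch of what I believe the intended localfile setup looks like, assuming the `namespace` key under `plugin-config` (shown elsewhere in the 2.3.3 checkpoint-storage doc) sets the checkpoint directory:
   ```yaml
   # Sketch only, not verified against my cluster: `namespace` is assumed to
   # relocate the checkpoint directory to a path the SeaTunnel process can write.
   seatunnel:
     engine:
       checkpoint:
         interval: 120000
         timeout: 7000
         storage:
           type: hdfs
           max-retained: 3
           plugin-config:
             namespace: /apps/apache-seatunnel-2.3.3/tmp/checkpoint  # writable local dir
             storage.type: hdfs       # the "hdfs" plugin with file:/// targets the local filesystem
             fs.defaultFS: file:///   # no path component here; the directory comes from namespace
   ```
   With this layout, `fs.defaultFS` only selects the filesystem scheme, and the directory that must be writable is the `namespace` path.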

