yongster opened a new issue, #4908:
URL: https://github.com/apache/seatunnel/issues/4908

   ### Search before asking
   
   - [X] I had searched in the 
[issues](https://github.com/apache/seatunnel/issues?q=is%3Aissue+label%3A%22bug%22)
 and found no similar issues.
   
   
   ### What happened
   
   When running SeaTunnel 2.1.2 on the Spark engine in cluster mode through DolphinScheduler, the job fails with an error saying the library files could not be loaded. This happens because DolphinScheduler invokes the script in the `bin` directory from a different working directory, and in cluster mode the script only looks for the `lib` directory relative to the current working directory, so the lookup fails.
   
   
![3976f3ff1a9aa36a5a72cd1bbdb7d76](https://github.com/apache/seatunnel/assets/76621090/e05f7ea9-b018-4bdd-8bff-da7092156629)
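   
   For context, the failure comes down to how the launcher script locates `lib`. Below is a minimal sketch of the usual fix, assuming a standard SeaTunnel layout with sibling `bin` and `lib` directories; it is illustrative only, not the actual contents of `start-seatunnel-spark.sh`:
   
   ```shell
   # Illustrative sketch only -- not the real start-seatunnel-spark.sh.
   # Resolve the lib directory relative to the script's own location
   # instead of the caller's current working directory.
   BIN_DIR=$(cd "$(dirname "$0")" && pwd)   # directory containing this script
   APP_DIR=$(dirname "$BIN_DIR")            # SeaTunnel home (parent of bin/)
   LIB_DIR="$APP_DIR/lib"
   
   if [ ! -d "$LIB_DIR" ]; then
     # With a CWD-relative lookup (e.g. a bare "lib"), this branch fires
     # whenever the script is launched from outside the bin directory,
     # which is exactly how DolphinScheduler runs it.
     echo "not found lib" >&2
     exit 1
   fi
   ```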
   
   
   ### SeaTunnel Version
   
   2.1.2
   
   ### SeaTunnel Config
   
   ```conf
   env {
     # You can set spark configuration here
     # see available properties defined by spark: https://spark.apache.org/docs/latest/configuration.html#available-properties
     spark.app.name = "SeaTunnel"
     spark.executor.instances = 2
     spark.executor.cores = 1
     spark.executor.memory = "1g"
   }
   
   source {
     # This is an example input plugin **only for testing and demonstrating the input plugin feature**
     Fake {
       result_table_name = "my_dataset"
     }
   
     # You can also use other input plugins, such as file
     # file {
     #   result_table_name = "accesslog"
     #   path = "hdfs://hadoop-cluster-01/nginx/accesslog"
     #   format = "json"
     # }
   
     # If you would like to get more information about how to configure seatunnel and see the full list of input plugins,
     # please go to https://seatunnel.apache.org/docs/spark/configuration/source-plugins/Fake
   }
   
   transform {
     # split data by specific delimiter
   
     # you can also use other filter plugins, such as sql
     # sql {
     #   sql = "select * from accesslog where request_time > 1000"
     # }
   
     # If you would like to get more information about how to configure seatunnel and see the full list of filter plugins,
     # please go to https://seatunnel.apache.org/docs/spark/configuration/transform-plugins/Sql
   }
   
   sink {
     # choose stdout output plugin to output data to console
     Console {}
   
     # you can also use other output plugins, such as hdfs
     # hdfs {
     #   path = "hdfs://hadoop-cluster-01/nginx/accesslog_processed"
     #   save_mode = "append"
     # }
   
     # If you would like to get more information about how to configure seatunnel and see the full list of output plugins,
     # please go to https://seatunnel.apache.org/docs/spark/configuration/sink-plugins/Console
   }
   ```
   
   
   ### Running Command
   
   ```shell
   ./start-seatunnel-spark.sh --config ../config/spark.batch.conf.template --deploy-mode cluster --master yarn
   ```
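   
   As a workaround until the script resolves paths relative to itself, changing into the `bin` directory before launching makes the CWD-relative `lib` lookup succeed. A sketch, where `/opt/seatunnel` is a hypothetical install path standing in for the actual deployment location:
   
   ```shell
   # Hypothetical workaround: run the script from its own bin directory so
   # the CWD-relative "lib" lookup resolves. /opt/seatunnel is an assumed
   # install location, not taken from the issue; adjust to your deployment.
   cd /opt/seatunnel/bin && ./start-seatunnel-spark.sh \
     --config ../config/spark.batch.conf.template \
     --deploy-mode cluster --master yarn
   ```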
   
   
   ### Error Exception
   
   ```log
   not found lib
   ```
   
   
   ### Flink or Spark Version
   
   spark 2.4.3
   
   ### Java or Scala Version
   
   java 1.8
   scala 2.11
   
   ### Screenshots
   
   _No response_
   
   ### Are you willing to submit PR?
   
   - [X] Yes I am willing to submit a PR!
   
   ### Code of Conduct
   
   - [X] I agree to follow this project's [Code of 
Conduct](https://www.apache.org/foundation/policies/conduct)
   

