Neilxzn opened a new issue, #9238:
URL: https://github.com/apache/seatunnel/issues/9238

   ### Search before asking
   
   - [x] I had searched in the [issues](https://github.com/apache/seatunnel/issues?q=is%3Aissue+label%3A%22bug%22) and found no similar issues.
   
   
   ### What happened
   
   The FieldMapper transform fails on the Spark 3.2.2 engine. The error log is in the Error Exception section below.
   
   ### SeaTunnel Version
   
   2.3.10
   
   ### SeaTunnel Config
   
   ```conf
   env {
     spark.app.name = "example"
     spark.sql.catalogImplementation = "hive"
     spark.executor.memory = "2g"
     spark.executor.instances = "2"
     spark.yarn.priority = "100"
     hive.exec.dynamic.partition.mode = "nonstrict"
     spark.dynamicAllocation.enabled = "false"
     spark.driver.extraClassPath = "xxxx"
   }

   source {
     hive {
       plugin_output = "fake"
       metastore_uri = "xxx"
       table_name = "mydatabase.students"
     }
   }

   transform {
     FieldMapper {
       plugin_input = "fake"
       plugin_output = "fake1"
       field_mapper = {
         name = name
       }
     }
   }

   sink {
     HdfsFile {
       plugin_input = "fake1"
       fs.defaultFS = "hdfs://hdfscluster"
       path = "/user/hadoop/st20"
       file_format = "text"
     }
   }
   ```
   
   ### Running Command
   
   ```shell
   apache-seatunnel-2.3.10/bin/start-seatunnel-spark-3-connector-v2.sh --master yarn --deploy-mode client --config hive2hdfstrans.conf
   ```
   
   ### Error Exception
   
   ```log
   Caused by: org.apache.spark.SparkException: Job aborted due to stage failure:
   Aborting TaskSet 0.0 because task 0 (partition 0)
   cannot run anywhere due to node and executor excludeOnFailure.
   Most recent failure:
   Lost task 0.5 in stage 0.0 (TID 5) (host1 executor 6): java.lang.ClassCastException: cannot assign instance of java.lang.invoke.SerializedLambda to field org.apache.spark.sql.execution.MapPartitionsExec.func of type scala.Function1 in instance of org.apache.spark.sql.execution.MapPartitionsExec
        at java.io.ObjectStreamClass$FieldReflector.setObjFieldValues(ObjectStreamClass.java:2233)
        at java.io.ObjectStreamClass.setObjFieldValues(ObjectStreamClass.java:1405)
        at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2288)
        at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2206)
        at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2064)
   ```
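
   For context (an editorial note, not part of the original report): this ClassCastException pattern typically appears when a serialized Java lambda is deserialized in a JVM whose classpath or classloader does not match the one that created it, as can happen on a Spark executor that is missing the jar containing the lambda's capturing class. The sketch below is a minimal illustration of the `java.lang.invoke.SerializedLambda` round trip in plain Java; the class and interface names are made up for the example and have nothing to do with SeaTunnel internals.

   ```java
   import java.io.ByteArrayInputStream;
   import java.io.ByteArrayOutputStream;
   import java.io.IOException;
   import java.io.ObjectInputStream;
   import java.io.ObjectOutputStream;
   import java.io.Serializable;
   import java.util.function.Function;

   public class LambdaSerDemo {

       // A Serializable functional interface, loosely analogous to the
       // serializable scala.Function1 that MapPartitionsExec.func expects.
       public interface SerFn extends Function<Integer, Integer>, Serializable {}

       // Plain Java serialization: a lambda behind a Serializable interface is
       // written out as a java.lang.invoke.SerializedLambda proxy object.
       public static byte[] serialize(Object o) throws IOException {
           ByteArrayOutputStream bos = new ByteArrayOutputStream();
           try (ObjectOutputStream oos = new ObjectOutputStream(bos)) {
               oos.writeObject(o);
           }
           return bos.toByteArray();
       }

       // On read, the proxy is resolved through the capturing class's
       // $deserializeLambda$ hook. If that class is absent or comes from a
       // different classloader (as on an executor without the right jars),
       // the raw SerializedLambda is left in place and assigning it to the
       // target field fails with the ClassCastException shown above.
       public static Object deserialize(byte[] bytes) throws IOException, ClassNotFoundException {
           try (ObjectInputStream ois = new ObjectInputStream(new ByteArrayInputStream(bytes))) {
               return ois.readObject();
           }
       }

       public static void main(String[] args) throws Exception {
           SerFn fn = x -> x + 1;
           Object back = deserialize(serialize(fn));
           // In a single JVM the capturing class is present, so the lambda
           // resolves and behaves normally.
           System.out.println(((SerFn) back).apply(41)); // prints 42
       }
   }
   ```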
   
   ### Zeta or Flink or Spark Version
   
   spark 3.2.2
   
   ### Java or Scala Version
   
   Java 1.8 / Scala 2.12
   
   ### Screenshots
   
   _No response_
   
   ### Are you willing to submit PR?
   
   - [x] Yes I am willing to submit a PR!
   
   ### Code of Conduct
   
   - [x] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
