michael1991 opened a new issue, #8048:
URL: https://github.com/apache/hudi/issues/8048
**To Reproduce**
Steps to reproduce the behavior:
```scala
val df = ...  // source DataFrame (elided)
df.persist()  // marks df for caching; nothing is materialized yet
df.filter(filter_condition_1).write.format("hudi").options(options).mode("append").save(path1)
df.filter(filter_condition_2).write.format("hudi").options(options).mode("append").save(path2)
df.unpersist()
```
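For context on what I would expect: `persist()` is lazy, so the cache is only populated by the first action, and subsequent actions should then read from the cache instead of recomputing the plan. A minimal pure-Scala sketch of that lazy-cache semantics (no Spark dependency; `ToyFrame` and all names here are illustrative stand-ins, not Spark's API):

```scala
// Toy illustration of lazy-cache semantics: persist() only *marks* for
// caching, and the first action fills the cache. All names are illustrative.
object LazyCacheDemo {
  var computeCount = 0 // how many times the "expensive" plan actually ran

  final class ToyFrame(plan: () => Seq[Int]) {
    private var cached: Option[Seq[Int]] = None
    private var persistRequested = false

    // Like DataFrame.persist(): request caching, compute nothing yet.
    def persist(): this.type = { persistRequested = true; this }

    // Like an action (count/save): materializes, filling the cache on first use.
    def collect(): Seq[Int] = cached match {
      case Some(rows) => rows // cache hit: no recompute
      case None =>
        val rows = plan()
        if (persistRequested) cached = Some(rows) // first action populates cache
        rows
    }
  }

  def main(args: Array[String]): Unit = {
    val df = new ToyFrame(() => { computeCount += 1; Seq(1, 2, 3) })
    df.persist()              // nothing computed yet
    assert(computeCount == 0)
    df.collect()              // first "write": computes and caches
    df.collect()              // second "write": served from cache
    assert(computeCount == 1) // the plan ran exactly once
    println(s"computeCount = $computeCount")
  }
}
```

Under this model I would expect the second Hudi write to hit the cache rather than re-run the full plan, which is why the duplicated stages surprised me.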
Checking the stages on the Spark History Server WebUI, I can see that `df` was computed twice. Is it possible to reuse the cached DataFrame in Spark?
**Expected behavior**
A cached DataFrame in Spark should not be recomputed until `unpersist()` is called.
**Environment Description**
* Hudi version : 0.12.0
* Spark version : 3.3.0
* Hive version : Not used
* Hadoop version : 3.3.3
* Storage (HDFS/S3/GCS..) : GCS
* Running on Docker? (yes/no) : no