Amar1404 opened a new issue, #11888: URL: https://github.com/apache/hudi/issues/11888
**Describe the problem you faced**

I am using the Bloom index. During file lookup, the index performs a repartition-and-sort step (visible in the Spark DAG view) that causes executor failures, even after increasing memory per executor to 8 GB for only 10 GB of data.

**To Reproduce**

Steps to reproduce the behavior:

1. Create a partition path based on year, month, day, and hour.
2. Enable the Bloom index and use HoodieDeltaStreamer to incrementally load data from the above path.
3. The data is 10 GB of Parquet, partitioned across three hours.
4. Try writing the data.

**Expected behavior**

The write should complete without executor failures.

**Environment Description**

* Hudi version : 0.12.3
* Spark version : 3.3
* Hive version :
* Hadoop version :
* Storage (HDFS/S3/GCS..) : S3
* Running on Docker? (yes/no) : no

Spark configuration:

* Num executors = 15
* Cores per executor = 6
* Driver memory = 6 GB
* Driver cores = 6
* Memory per executor = 8 GB

**Stacktrace**

_Not provided._
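The repartition-and-sort stage described above is the Bloom index lookup shuffle, and its memory pressure can often be reduced through index tuning. As a hedged sketch of DeltaStreamer property overrides (the property names are standard Hudi Bloom index options, but the values shown are illustrative assumptions, not verified settings for this workload):

```properties
# Illustrative Hudi property overrides -- values are assumptions, tune per workload
hoodie.index.type=BLOOM
# Raise the shuffle parallelism of the index lookup so each task holds less data
hoodie.bloom.index.parallelism=200
# Prune candidate files by key ranges before probing bloom filters
hoodie.bloom.index.prune.by.ranges=true
# Spread key checks for skewed files across more tasks
hoodie.bloom.index.bucketized.checking=true
```

These can be passed via the DeltaStreamer `--props` file or as `--hoodie-conf` arguments; raising `hoodie.bloom.index.parallelism` is usually the first lever to try when the lookup stage fails with executor memory errors.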
