lamber-ken commented on issue #1491: [SUPPORT] OutOfMemoryError during upsert of 53M records
URL: https://github.com/apache/incubator-hudi/issues/1491#issuecomment-611069603

Hi @vinothchandar @bvaradar, I think we can analyze this issue in parallel. Reproduce steps:

1. Download the CSV data with 5M records:

```
https://drive.google.com/open?id=1uwJ68_RrKMUTbEtsGl56_P5b_mNX3k2S
```

2. Start the Spark shell with the Hudi bundle:

```bash
export SPARK_HOME=/work/BigData/install/spark/spark-2.4.4-bin-hadoop2.7
${SPARK_HOME}/bin/spark-shell \
  --driver-memory 6G \
  --packages org.apache.hudi:hudi-spark-bundle_2.11:0.5.1-incubating,org.apache.spark:spark-avro_2.11:2.4.4 \
  --conf 'spark.serializer=org.apache.spark.serializer.KryoSerializer'
```

3. Run the demo commands in the shell:

```scala
import org.apache.spark.sql.functions._

val tableName = "hudi_mor_table"
val basePath = "file:///tmp/hudi_mor_table"

// Load the 5M-record CSV downloaded in step 1
val inputDF = spark.read.format("csv").option("header", "true").load("file:///work/hudi-debug/2.csv")

val hudiOptions = Map[String, String](
  "hoodie.insert.shuffle.parallelism" -> "10",
  "hoodie.upsert.shuffle.parallelism" -> "10",
  "hoodie.delete.shuffle.parallelism" -> "10",
  "hoodie.bulkinsert.shuffle.parallelism" -> "10",
  "hoodie.datasource.write.recordkey.field" -> "tds_cid",
  "hoodie.datasource.write.partitionpath.field" -> "hit_date",
  "hoodie.table.name" -> tableName,
  "hoodie.datasource.write.precombine.field" -> "hit_timestamp",
  "hoodie.datasource.write.operation" -> "upsert"
)

// Write the dataset as a Hudi table
inputDF.write.format("org.apache.hudi").
  options(hudiOptions).
  mode("Overwrite").
  save(basePath)

// Read back one partition and count the records
spark.read.format("org.apache.hudi").load(basePath + "/2020-03-19/*").count()
```
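One detail worth flagging: the `Overwrite` write above creates the table from scratch, so it does not by itself hit the merge-with-existing-files path that the issue title ("OutOfMemoryError during upsert") describes. Below is a minimal sketch of how one might exercise that path, assuming the same spark-shell session with `inputDF`, `hudiOptions`, and `basePath` still in scope; the use of `Append` mode and the copy count used to scale the input are my assumptions, not part of the original repro:

```scala
// Sketch (assumptions, not from the original comment): re-writing the same
// records in Append mode forces Hudi to locate and merge them into the
// existing file groups instead of doing a fresh insert.
inputDF.write.format("org.apache.hudi").
  options(hudiOptions).
  mode("Append").
  save(basePath)

// Optionally scale the input toward the reported 53M records by unioning
// copies of the 5M-row dataset. The record keys repeat across copies, so
// Hudi deduplicates on the precombine field and the table does not grow to
// 53M unique rows, but the shuffle and merge stages still process ~55M
// input rows, which should stress memory similarly.
val scaledDF = Seq.fill(11)(inputDF).reduce(_ union _)
scaledDF.write.format("org.apache.hudi").
  options(hudiOptions).
  mode("Append").
  save(basePath)
```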
