tverdokhlebd edited a comment on issue #1491: [SUPPORT] OutOfMemoryError during upsert 53M records
URL: https://github.com/apache/incubator-hudi/issues/1491#issuecomment-610564674
 
 
   Code:
   
   import java.util.Properties

   import org.apache.hudi.DataSourceWriteOptions
   import org.apache.hudi.common.model.HoodieCleaningPolicy
   import org.apache.hudi.config.HoodieWriteConfig
   import org.apache.hudi.keygen.ComplexKeyGenerator
   import org.apache.spark.sql.SaveMode
   import org.apache.spark.sql.functions.{col, substring}

   sparkSession
     .read
     // Partitioned JDBC read: the "partition" column is split into
     // partitionsCount ranges between lowerBound and upperBound.
     .jdbc(
       url = jdbcConfig.url,
       table = table,
       columnName = "partition",
       lowerBound = 0,
       upperBound = jdbcConfig.partitionsCount.toInt,
       numPartitions = jdbcConfig.partitionsCount.toInt,
       connectionProperties = new Properties() {
         put("driver", jdbcConfig.driver)
         put("user", jdbcConfig.user)
         put("password", jdbcConfig.password)
       }
     )
     // Derive Hive-style partition columns from the yyyy-MM-dd date column.
     .withColumn("year", substring(col(jdbcConfig.dateColumnName), 0, 4))
     .withColumn("month", substring(col(jdbcConfig.dateColumnName), 6, 2))
     .withColumn("day", substring(col(jdbcConfig.dateColumnName), 9, 2))
     .write
     .option(HoodieWriteConfig.TABLE_NAME, hudiConfig.tableName)
     .option(DataSourceWriteOptions.RECORDKEY_FIELD_OPT_KEY, hudiConfig.recordKey)
     .option(DataSourceWriteOptions.PRECOMBINE_FIELD_OPT_KEY, hudiConfig.precombineKey)
     .option(DataSourceWriteOptions.PARTITIONPATH_FIELD_OPT_KEY, hudiConfig.partitionPathKey)
     .option(DataSourceWriteOptions.KEYGENERATOR_CLASS_OPT_KEY, classOf[ComplexKeyGenerator].getName)
     .option(DataSourceWriteOptions.HIVE_STYLE_PARTITIONING_OPT_KEY, "true")
     .option("hoodie.datasource.write.operation", writeOperation)
     .option("hoodie.bulkinsert.shuffle.parallelism", hudiConfig.bulkInsertParallelism)
     .option("hoodie.insert.shuffle.parallelism", hudiConfig.parallelism)
     .option("hoodie.upsert.shuffle.parallelism", hudiConfig.parallelism)
     // Keep only the latest file version to minimize storage.
     .option("hoodie.cleaner.policy", HoodieCleaningPolicy.KEEP_LATEST_FILE_VERSIONS.name())
     .option("hoodie.cleaner.fileversions.retained", "1")
     .option("hoodie.metrics.graphite.host", hudiConfig.graphiteHost)
     .option("hoodie.metrics.graphite.port", hudiConfig.graphitePort)
     .option("hoodie.metrics.graphite.metric.prefix", hudiConfig.graphiteMetricPrefix)
     .format("org.apache.hudi")
     .mode(SaveMode.Append)
     .save(outputPath)
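
   A quick note on the year/month/day derivation above: Spark's substring is
   1-based (a position of 0 behaves like position 1), so on a yyyy-MM-dd string
   the three slices land exactly on the year, month, and day fields. A minimal
   standalone check of just that step (the sample date is made up; the column
   name hit_date comes from the submit command below):

   import org.apache.spark.sql.SparkSession
   import org.apache.spark.sql.functions.{col, substring}

   // Standalone check of the partition-column derivation, assuming the
   // date column holds yyyy-MM-dd strings such as "2020-04-07".
   val spark = SparkSession.builder().appName("substring-check").master("local[1]").getOrCreate()
   import spark.implicits._

   Seq("2020-04-07").toDF("hit_date")
     .withColumn("year", substring(col("hit_date"), 0, 4))   // "2020" (pos 0 acts as 1)
     .withColumn("month", substring(col("hit_date"), 6, 2))  // "04"
     .withColumn("day", substring(col("hit_date"), 9, 2))    // "07"
     .show()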
   
   This code is executed from Jenkins with the following parameters:
   
   docker run --rm -v ${PWD}:${PWD} -v /mnt/ml_data:/mnt/ml_data bde2020/spark-master:2.4.5-hadoop2.7 \
   bash ./spark/bin/spark-submit \
   --master "local[2]" \
   --packages org.apache.hudi:hudi-spark-bundle_2.11:0.5.2-incubating,org.apache.hadoop:hadoop-aws:2.7.3,org.apache.spark:spark-avro_2.11:2.4.4 \
   --conf spark.local.dir=/mnt/ml_data \
   --conf spark.ui.enabled=false \
   --conf spark.driver.memory=4g \
   --conf spark.driver.memoryOverhead=1024 \
   --conf spark.driver.maxResultSize=2g \
   --conf spark.kryoserializer.buffer.max=512m \
   --conf spark.serializer=org.apache.spark.serializer.KryoSerializer \
   --conf spark.rdd.compress=true \
   --conf spark.shuffle.service.enabled=true \
   --conf spark.sql.hive.convertMetastoreParquet=false \
   --conf spark.hadoop.fs.defaultFS=s3a://ir-mtu-ml-bucket/ml_hudi \
   --conf spark.hadoop.fs.s3a.impl=org.apache.hadoop.fs.s3a.S3AFileSystem \
   --conf spark.hadoop.fs.s3a.access.key=${AWS_ACCESS_KEY_ID} \
   --conf spark.hadoop.fs.s3a.secret.key=${AWS_SECRET_ACCESS_KEY} \
   --conf spark.executorEnv.period.startDate=${date} \
   --conf spark.executorEnv.period.numDays=${numDays} \
   --conf spark.executorEnv.jdbc.url=${VERTICA_URL} \
   --conf spark.executorEnv.jdbc.user=${VERTICA_USER} \
   --conf spark.executorEnv.jdbc.password=${VERTICA_PWD} \
   --conf spark.executorEnv.jdbc.driver=${VERTICA_DRIVER} \
   --conf spark.executorEnv.jdbc.schemaName=mtu_owner \
   --conf spark.executorEnv.jdbc.tableName=ext_ml_data \
   --conf spark.executorEnv.jdbc.dateColumnName=hit_date \
   --conf spark.executorEnv.jdbc.partitionColumnName=hit_timestamp \
   --conf spark.executorEnv.jdbc.partitionsCount=8 \
   --conf spark.executorEnv.hudi.outputPath=s3a://ir-mtu-ml-bucket/ml_hudi \
   --conf spark.executorEnv.hudi.tableName=ext_ml_data \
   --conf spark.executorEnv.hudi.recordKey=tds_cid \
   --conf spark.executorEnv.hudi.precombineKey=hit_timestamp \
   --conf spark.executorEnv.hudi.parallelism=8 \
   --conf spark.executorEnv.hudi.bulkInsertParallelism=8 \
   --class mtu.spark.analytics.ExtMLDataToS3 \
   ${PWD}/ml-vertica-to-s3-hudi.jar
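
   For context on where jdbcConfig and hudiConfig come from: the job presumably
   reads the values back from the spark.executorEnv.* entries passed above, since
   anything set with --conf is visible through the runtime config. A hypothetical
   sketch of that lookup (the JdbcConfig case class and loadJdbcConfig helper are
   illustrative names, not taken from the actual job):

   import org.apache.spark.sql.SparkSession

   // Hypothetical shape of jdbcConfig; the fields mirror the
   // spark.executorEnv.jdbc.* keys passed via --conf above.
   case class JdbcConfig(url: String, user: String, password: String,
                         driver: String, dateColumnName: String,
                         partitionsCount: String)

   def loadJdbcConfig(spark: SparkSession): JdbcConfig = {
     // spark.conf.get throws NoSuchElementException for a missing key,
     // so a misspelled --conf fails fast at startup.
     def get(key: String) = spark.conf.get(s"spark.executorEnv.jdbc.$key")
     JdbcConfig(
       url = get("url"),
       user = get("user"),
       password = get("password"),
       driver = get("driver"),
       dateColumnName = get("dateColumnName"),
       partitionsCount = get("partitionsCount") // .toInt where the reader needs it
     )
   }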
