alberttwong opened a new issue, #10725:
URL: https://github.com/apache/hudi/issues/10725

   related to https://github.com/apache/hudi/issues/10697
   
   Similar Scala code. However, even when I use the insert operation and give the job 2x more memory, I still get a process-killed error. I also split the 1.1 GB file into 3 smaller parquet files and used 24G of driver memory, and still got a process-killed error. (A rough sketch of that split is at the end of this issue.)
   
   ```
   import org.apache.spark.sql.functions._
   import org.apache.spark.sql.types._
   import org.apache.spark.sql.Row
   import org.apache.spark.sql.SaveMode._
   import org.apache.hudi.DataSourceReadOptions._
   import org.apache.hudi.DataSourceWriteOptions._
   import org.apache.hudi.config.HoodieWriteConfig._
   import scala.collection.JavaConversions._
   
   val df = spark.read.parquet("s3a://huditest/user_behavior_sample_data.parquet")
   
   val databaseName = "hudi_ecommerce"
   val tableName = "user_behavior"
   val basePath = "s3a://huditest/hudi_ecommerce"
   
   df.write.format("hudi").
     option(org.apache.hudi.config.HoodieWriteConfig.TABLE_NAME, tableName).
     option("hoodie.datasource.hive_sync.enable", "true").
     option("hoodie.datasource.hive_sync.mode", "hms").
     option("hoodie.datasource.hive_sync.database", databaseName).
     option("hoodie.datasource.hive_sync.table", tableName).
     option("hoodie.datasource.hive_sync.metastore.uris", 
"thrift://hive-metastore:9083").
     option("fs.defaultFS", "s3://huditest/").  
     mode(Overwrite).
     save(basePath)
   ```
   
   vs
   
   ```
   import org.apache.spark.sql.functions._
   import org.apache.spark.sql.types._
   import org.apache.spark.sql.Row
   import org.apache.spark.sql.SaveMode._
   import org.apache.hudi.DataSourceReadOptions._
   import org.apache.hudi.DataSourceWriteOptions._
   import org.apache.hudi.config.HoodieWriteConfig._
   import scala.collection.JavaConversions._
   
   val df = spark.read.parquet("s3a://huditest/user_behavior_sample_data.parquet")
   
   val databaseName = "hudi_ecommerce"
   val tableName = "user_behavior"
   val basePath = "s3a://huditest/hudi_ecommerce"
   
   df.write.format("hudi").
     option(org.apache.hudi.config.HoodieWriteConfig.TABLE_NAME, tableName).
     option("hoodie.datasource.write.operation", "insert").
     option("hoodie.datasource.hive_sync.enable", "true").
     option("hoodie.datasource.hive_sync.mode", "hms").
     option("hoodie.datasource.hive_sync.database", databaseName).
     option("hoodie.datasource.hive_sync.table", tableName).
     option("hoodie.datasource.hive_sync.metastore.uris", 
"thrift://hive-metastore:9083").
     option("fs.defaultFS", "s3://huditest/").  
     mode(Overwrite).
     save(basePath)
   ```
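   
   For reference, a minimal sketch of one way the 1.1 GB file can be split into 3 smaller parquet files before retrying the write. The output path and the `repartition(3)` call are illustrative, not the exact commands I ran; the shell itself was started with `--driver-memory 24g`.
   
   ```
   import org.apache.spark.sql.SparkSession
   
   val spark = SparkSession.builder().getOrCreate()
   
   // read the original single 1.1 GB parquet file
   val source = spark.read.parquet("s3a://huditest/user_behavior_sample_data.parquet")
   
   // rewrite it as 3 roughly equal parquet files (illustrative output path)
   source.repartition(3)
     .write
     .mode("overwrite")
     .parquet("s3a://huditest/user_behavior_sample_data_split/")
   ```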

