lamber-ken edited a comment on issue #1552:
URL: https://github.com/apache/incubator-hudi/issues/1552#issuecomment-619449248


   hi @harshi2506, here are the build and run steps:
   **1. Build Env**
   - JDK8
   - Unix
   
   **2. Commands**
   ```
   git clone https://github.com/apache/incubator-hudi.git
   mvn clean install -DskipTests -DskipITs -Dcheckstyle.skip=true -Drat.skip=true
   ```
   
   **3. Run Env**
   - Spark 2.4.4+
   - Avro 1.8.0
   ```
   // run in local env
   export SPARK_HOME=/work/BigData/install/spark/spark-2.4.4-bin-hadoop2.7
   ${SPARK_HOME}/bin/spark-shell \
     --driver-memory 6G \
     --packages org.apache.spark:spark-avro_2.11:2.4.4 \
     --jars `ls packaging/hudi-spark-bundle/target/hudi-spark-bundle_*.*-*.*.*-SNAPSHOT.jar` \
     --conf 'spark.serializer=org.apache.spark.serializer.KryoSerializer'
   
   // run in yarn env
   export SPARK_HOME=/BigData/install/spark-2.4.4-bin-hadoop2.7
   ${SPARK_HOME}/bin/spark-shell \
     --master yarn \
     --driver-memory 6G \
     --executor-memory 6G \
     --num-executors 5 \
     --executor-cores 5 \
     --queue root.default \
     --packages org.apache.spark:spark-avro_2.11:2.4.4 \
     --jars `ls packaging/hudi-spark-bundle/target/hudi-spark-bundle_*.*-*.*.*-SNAPSHOT.jar` \
     --conf 'spark.serializer=org.apache.spark.serializer.KryoSerializer'
   
   // scripts: paste the following into the spark-shell started above
   import org.apache.spark.sql.functions._
   
   val tableName = "hudi_mor_table"
   val basePath = "file:///tmp/hudi_mor_table"
   // val basePath = "hdfs:///hudi/test"
   
   val hudiOptions = Map[String,String](
     "hoodie.upsert.shuffle.parallelism" -> "10",               // parallelism of the upsert shuffle stage
     "hoodie.datasource.write.recordkey.field" -> "key",        // column used as the unique record key
     "hoodie.datasource.write.partitionpath.field" -> "dt",     // column used as the partition path
     "hoodie.table.name" -> tableName,
     "hoodie.datasource.write.precombine.field" -> "timestamp"  // field used to pick the latest record per key
   )
   
   // build a small sample DataFrame: six rows, one per daily (yyyy/MM/dd) partition
   val inputDF = spark.range(1, 7).
      withColumn("key", $"id").
      withColumn("data", lit("data")).
      withColumn("timestamp", unix_timestamp()).
      withColumn("dtstamp", unix_timestamp() + ($"id" * 24 * 3600)).
      withColumn("dt", from_unixtime($"dtstamp", "yyyy/MM/dd"))
   
   // initial bulk write; Overwrite mode recreates the table at basePath
   inputDF.write.format("org.apache.hudi").
     options(hudiOptions).
     mode("Overwrite").
     save(basePath)
   
   // read back; the three-level glob matches the yyyy/MM/dd partition layout
   spark.read.format("org.apache.hudi").load(basePath + "/*/*/*").show()
   ```

