tmac2100 opened a new issue #2806:
URL: https://github.com/apache/hudi/issues/2806


   **_Tips before filing an issue_**
   
   - Have you gone through our [FAQs](https://cwiki.apache.org/confluence/display/HUDI/FAQ)?
   
   - Join the mailing list to engage in conversations and get faster support at [email protected].
   
   - If you have triaged this as a bug, then file an [issue](https://issues.apache.org/jira/projects/HUDI/issues) directly.
   
   **Describe the problem you faced**
   
   While writing a COW table via Spark Structured Streaming, the per-micro-batch processing time increases gradually over the lifetime of the job.
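   
   For context, the job follows the usual Kafka → `foreachBatch` → Hudi upsert pattern. A minimal sketch of that loop is below; the broker list, topic, checkpoint path, and the `writeToHudi` helper are placeholders for illustration, not the actual job code:
   
   ```scala
   import org.apache.spark.sql.{DataFrame, SparkSession}
   
   val spark = SparkSession.builder().appName("ems.ErpAuspGroup").getOrCreate()
   
   // Stream the source topic from Kafka (brokers/topic are placeholders).
   val kafkaDf = spark.readStream
     .format("kafka")
     .option("kafka.bootstrap.servers", "broker1:9092")
     .option("subscribe", "erp_ausp")
     .load()
   
   // Upsert each micro-batch into the COW table using the Hudi
   // options shown in step 2 below (writeToHudi is a placeholder).
   val query = kafkaDf.writeStream
     .foreachBatch { (batchDf: DataFrame, batchId: Long) =>
       writeToHudi(batchDf)
     }
     .option("checkpointLocation", "/tmp/ckpt/erp_ausp")
     .start()
   
   query.awaitTermination()
   ```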
   
   **To Reproduce**
   
   Steps to reproduce the behavior:
   
   1. Read from Kafka. Here is the spark-submit configuration:
   
      ```shell
      /opt/spark-2.4.7-bin-hadoop2.6/bin/spark-submit --master yarn \
        --deploy-mode cluster \
        --driver-memory 8G \
        --executor-memory 4G \
        --num-executors 4 \
        --executor-cores 4 \
        --queue hikit \
        --conf spark.network.timeout=10000000 \
        --conf spark.yarn.appMasterEnv.JAVA_HOME=/opt/jdk1.8.0_73 \
        --conf spark.executorEnv.JAVA_HOME=/opt/jdk1.8.0_73 \
        --conf "spark.executor.extraJavaOptions=-XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=heapdump" \
        --conf spark.default.parallelism=80 \
        --conf spark.streaming.kafka.maxRatePerPartition=25000 \
        --class ems.ErpAuspGroup \
        /home/users/jars/hudiOdsKfkaWithDepend.jar \
        --hiveDataBase ods_erp \
        --groupId terpausp041201 \
        --tableName ods_hudi_pro.t_erp_ausp
      ```
   2. Hudi options used when writing to the HDFS Hudi table:
   
      ```scala
      df.write.format("org.apache.hudi")
        .option(TABLE_TYPE_OPT_KEY, hudiTableType)
        .option(OPERATION_OPT_KEY, "upsert")
        .options(getQuickstartWriteConfigs)
        .option(PRECOMBINE_FIELD_OPT_KEY, "comit_time") // commit-time column used for precombine
        .option(RECORDKEY_FIELD_OPT_KEY, "uuid") // uuid is the unique record key column
        .option(PARTITIONPATH_FIELD_OPT_KEY, "parti_path")
        .option(HIVE_SYNC_ENABLED_OPT_KEY, "true")
        .option(HIVE_DATABASE_OPT_KEY, hiveDataBase)
        .option(HIVE_TABLE_OPT_KEY, hiveTableName)
        .option(HIVE_USER_OPT_KEY, hiveUser)
        .option(HIVE_PASS_OPT_KEY, hivePass)
        .option(HoodieIndexConfig.BLOOM_INDEX_UPDATE_PARTITION_PATH, "true")
        .option(HIVE_URL_OPT_KEY, hiveUrl)
        .option(DataSourceWriteOptions.KEYGENERATOR_CLASS_OPT_KEY, "org.apache.hudi.keygen.ComplexKeyGenerator")
        .option(HIVE_PARTITION_FIELDS_OPT_KEY, "parti_path")
        .option(HIVE_PARTITION_EXTRACTOR_CLASS_OPT_KEY, classOf[MultiPartKeysValueExtractor].getName)
        .option(HoodieIndexConfig.INDEX_TYPE_PROP, HoodieIndex.IndexType.BLOOM.name())
        .option("hoodie.insert.shuffle.parallelism", "12")
        .option("hoodie.upsert.shuffle.parallelism", "12")
        .option(TABLE_NAME, tableName)
        .mode(Append)
        .save(baseLocation + checkOption)
      ```
   3. Watch the batch processing times in the Spark UI: they keep growing across micro-batches. (A listener-based way to log per-batch durations is sketched after this list.)
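   
   To put numbers on the slowdown outside the Spark UI, per-batch durations can be logged with Spark's `StreamingQueryListener` (standard Spark 2.4 API; the log format here is just an illustration):
   
   ```scala
   import org.apache.spark.sql.streaming.StreamingQueryListener
   import org.apache.spark.sql.streaming.StreamingQueryListener._
   
   // Register on the job's SparkSession; a steadily growing
   // "triggerExecution" duration confirms the reported slowdown.
   spark.streams.addListener(new StreamingQueryListener {
     override def onQueryStarted(event: QueryStartedEvent): Unit = {}
     override def onQueryTerminated(event: QueryTerminatedEvent): Unit = {}
     override def onQueryProgress(event: QueryProgressEvent): Unit = {
       val ms = event.progress.durationMs.get("triggerExecution")
       println(s"batch=${event.progress.batchId} triggerExecution=${ms}ms")
     }
   })
   ```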
   
   **Expected behavior**
   
   The per-micro-batch processing time should stay roughly constant as the stream runs, instead of growing over time.
   
   **Environment Description**
   
   * Hudi version : 0.6.0
   
   * Spark version : 2.4.4
   
   * Hive version : 2.1.1
   
   * Hadoop version : 3.2.1
   
   * Storage (HDFS/S3/GCS..) : HDFS
   
   * Running on Docker? (yes/no) : no
   
   
   **Additional context**
   
   The table has 183 columns.
   
   
   **Stacktrace**
   
   N/A. The job does not fail; each batch just takes longer to process over time.
   
   

