wosow opened a new issue #2409:
URL: https://github.com/apache/hudi/issues/2409


   Spark Structured Streaming writes to Hudi and syncs to Hive, but only the
   read-optimized table is created; the real-time table is never created. No
   errors are reported.
   
   **Environment Description**
   
   * Hudi version : 0.6.0
   
   * Spark version : 2.4.4
   
   * Hive version : 2.3.7
   
   * Hadoop version : 2.7.5
   
   * Storage (HDFS/S3/GCS..) : HDFS
   
   * Running on Docker? (yes/no) : no
   
   Code as follows:

       batchDF.write.format("org.apache.hudi")
         .option(DataSourceWriteOptions.TABLE_TYPE_OPT_KEY, "MERGE_ON_READ")
         .option(DataSourceWriteOptions.OPERATION_OPT_KEY, "upsert")
         .option(HoodieCompactionConfig.INLINE_COMPACT_NUM_DELTA_COMMITS_PROP, "10")
         .option("hoodie.datasource.compaction.async.enable", "true")
         .option(DataSourceWriteOptions.RECORDKEY_FIELD_OPT_KEY, "rec_id")
         .option(DataSourceWriteOptions.PRECOMBINE_FIELD_OPT_KEY, "modified")
         .option(DataSourceWriteOptions.HIVE_DATABASE_OPT_KEY, "ads")
         .option(DataSourceWriteOptions.HIVE_TABLE_OPT_KEY, hiveTableName)
         .option(DataSourceWriteOptions.PARTITIONPATH_FIELD_OPT_KEY, "dt")
         .option(DataSourceWriteOptions.HIVE_PARTITION_FIELDS_OPT_KEY, "dt")
         .option(DataSourceWriteOptions.HIVE_STYLE_PARTITIONING_OPT_KEY, "true")
         .option(HoodieWriteConfig.TABLE_NAME, hiveTableName)
         .option(HoodieIndexConfig.BLOOM_INDEX_UPDATE_PARTITION_PATH, "true")
         .option(DataSourceWriteOptions.HIVE_SYNC_ENABLED_OPT_KEY, "true")
         .option(DataSourceWriteOptions.HIVE_URL_OPT_KEY, "jdbc:hive2://0.0.0.0:10000")
         .option(DataSourceWriteOptions.HIVE_USER_OPT_KEY, "")
         .option(DataSourceWriteOptions.HIVE_PASS_OPT_KEY, "")
         .option(DataSourceWriteOptions.HIVE_PARTITION_EXTRACTOR_CLASS_OPT_KEY, classOf[MultiPartKeysValueExtractor].getName)
         .option(HoodieIndexConfig.INDEX_TYPE_PROP, HoodieIndex.IndexType.GLOBAL_BLOOM.name())
         .option("hoodie.insert.shuffle.parallelism", "10")
         .option("hoodie.upsert.shuffle.parallelism", "10")
         .mode("append")
         .save("/data/mor/user")
   
   Hive sync only creates `user_ro`; no `user_rt` table appears.
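
   For context on the expectation: for a MERGE_ON_READ table, Hudi's Hive sync
   is supposed to register two tables over the same dataset, `<table>_ro`
   (read-optimized, base files only) and `<table>_rt` (real-time, base files
   merged with log files). A minimal sketch for inspecting what actually got
   registered, assuming a `SparkSession` named `spark` built with
   `.enableHiveSupport()` against the same metastore (the database name `ads`
   is taken from the snippet above):

       // List the tables Hive sync registered in the target database.
       spark.sql("SHOW TABLES IN ads").show(false)

       // Expected for a healthy MOR sync (table names assumed from the report):
       //   user_ro   <- present here
       //   user_rt   <- missing here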
   
   

