jaehyeon-kim opened a new issue #4244:
URL: https://github.com/apache/hudi/issues/4244


   **_Tips before filing an issue_**
   
   - Have you gone through our 
[FAQs](https://cwiki.apache.org/confluence/display/HUDI/FAQ)?
   
   - Join the mailing list to engage in conversations and get faster support at 
[email protected].
   
   - If you have triaged this as a bug, then file an 
[issue](https://issues.apache.org/jira/projects/HUDI/issues) directly.
   
   **Describe the problem you faced**
   
   When submitting DeltaStreamer on EMR with the `cluster` deploy mode and the configuration below, the expected outcome is that
   
   * Hudi data is written to S3, and
   * a Glue table is created (e.g. `datalake.cdc_events_simple`).
   
   In practice, the data is written to S3 but the Glue table is not created. The issue does not occur when the deploy mode is set to `client`. A sketch for pulling the driver logs follows.
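   
   Since the driver runs on a YARN worker node in `cluster` deploy mode, its logs (where a hive-sync failure would be recorded) are not on the submitting host. A minimal sketch for retrieving them, assuming YARN log aggregation is enabled and `<application_id>` is a placeholder for the actual application id:
   
   ```
   # fetch the aggregated driver/executor logs and scan for hive-sync messages
   yarn logs -applicationId <application_id> | grep -i -B 2 -A 5 hive
   ```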
   
   **To Reproduce**
   
   Steps to reproduce the behavior:
   
   Submit DeltaStreamer app with `cluster` deploy mode.
   
   ```
   spark-submit --jars /usr/lib/spark/external/lib/spark-avro.jar,/usr/lib/hudi/hudi-utilities-bundle.jar \
       --master yarn \
       --deploy-mode cluster \
       --driver-memory 2g \
       --executor-memory 2g \
       --conf spark.sql.catalogImplementation=hive \
       --conf spark.serializer=org.apache.spark.serializer.KryoSerializer \
       --class org.apache.hudi.utilities.deltastreamer.HoodieDeltaStreamer /usr/lib/hudi/hudi-utilities-bundle.jar \
       --table-type COPY_ON_WRITE \
       --source-ordering-field __source_ts_ms \
       --props "s3://data-lake-demo-cevo/hudi/config/cdc_events_deltastreamer_s3-simple.properties" \
       --source-class org.apache.hudi.utilities.sources.JsonDFSSource \
       --target-base-path "s3://data-lake-demo-cevo/hudi/cdc-events-simple/" \
       --target-table datalake.cdc_events_simple \
       --schemaprovider-class org.apache.hudi.utilities.schema.FilebasedSchemaProvider \
       --enable-hive-sync \
       --min-sync-interval-seconds 5 \
       --continuous \
       --op UPSERT
   ```
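   
   Whether the Glue table was created can be checked directly with the AWS CLI, assuming credentials with read access to the Glue catalog:
   
   ```
   # look up the target table in the Glue catalog; this fails with an
   # EntityNotFoundException when hive sync did not run
   aws glue get-table --database-name datalake --name cdc_events_simple
   ```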
   
   `cdc_events_deltastreamer_s3-simple.properties`
   ```
   ## base properties
   hoodie.upsert.shuffle.parallelism=2
   hoodie.insert.shuffle.parallelism=2
   hoodie.delete.shuffle.parallelism=2
   hoodie.bulkinsert.shuffle.parallelism=2
   
   ## datasource properties
   hoodie.datasource.hive_sync.database=datalake
   hoodie.datasource.hive_sync.table=cdc_events_simple
   hoodie.datasource.hive_sync.partition_fields=customer_id,order_id
   hoodie.datasource.hive_sync.partition_extractor_class=org.apache.hudi.hive.MultiPartKeysValueExtractor
   hoodie.datasource.write.recordkey.field=order_id
   hoodie.datasource.write.partitionpath.field=customer_id,order_id
   hoodie.datasource.write.keygenerator.class=org.apache.hudi.keygen.ComplexKeyGenerator
   hoodie.datasource.write.hive_style_partitioning=true
   # only supported in Hudi 0.9.0+
   # hoodie.datasource.write.drop.partition.columns=true
   
   ## deltastreamer properties
   hoodie.deltastreamer.schemaprovider.source.schema.file=s3://data-lake-demo-cevo/hudi/config/schema-msk.datalake.cdc_events.avsc
   hoodie.deltastreamer.source.dfs.root=s3://data-lake-demo-cevo/cdc-events/customer_id=ALFKI/
   
   ## file properties
   # 1,024 * 1,024 * 128 = 134,217,728 (128 MB)
   hoodie.parquet.small.file.limit=134217728
   ```
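   
   With `hoodie.datasource.write.hive_style_partitioning=true` and the `customer_id,order_id` partition fields, the layout written under the target base path should look roughly like the listing below (the `order_id` values are illustrative):
   
   ```
   # illustrative listing; the order_id values are examples only
   aws s3 ls s3://data-lake-demo-cevo/hudi/cdc-events-simple/customer_id=ALFKI/
   #    PRE order_id=10643/
   #    PRE order_id=10692/
   ```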
   
   **Expected behavior**
   
   Hudi data is saved to S3 and a Glue table (e.g. `datalake.cdc_events_simple`) is created. However, the Glue table is not created even though the data is saved to S3.
   
   **Environment Description**
   
   * Hudi version : 0.8.0
   
   * Spark version : 3.0.1-amzn-0 (EMR 6.4.0)
   
   * Hive version : 3.1.2
   
   * Hadoop version : 3.2.1
   
   * Storage (HDFS/S3/GCS..) : S3
   
   * Running on Docker? (yes/no) : No
   
   
   

