ergophobiac opened a new issue, #9352:
URL: https://github.com/apache/hudi/issues/9352

   
   **Describe the problem you faced**
   
   We are running MOR tables using Hudi 0.12.1 on EMR 6.11.0 (Spark 3.3.1). We 
have a single writer, with metadata, async services and OCC enabled. 
   
   Writer reads from Kafka topic and uses Spark Structured Streaming to write 
to Hudi tables on S3. Micro-batch interval is 120 seconds.
   
   Our Hudi config:

   ```json
   {
     "hoodie.table.version": 5,
     "hoodie.datasource.write.hive_style_partitioning": true,
     "hoodie.datasource.meta.sync.enable": false,
     "hoodie.datasource.hive_sync.enable": true,
     "hoodie.datasource.hive_sync.skip_ro_suffix": true,
     "hoodie.datasource.hive_sync.partition_extractor_class": "org.apache.hudi.hive.MultiPartKeysValueExtractor",
     "hoodie.datasource.write.insert.drop.duplicates": true,
     "hoodie.compact.inline.trigger.strategy": "NUM_OR_TIME",
     "hoodie.compact.inline.max.delta.commits": 10,
     "hoodie.compact.inline.max.delta.seconds": 600,
     "hoodie.clean.async": true,
     "hoodie.parquet.compression.codec": "snappy",
     "hoodie.embed.timeline.server": true,
     "hoodie.embed.timeline.server.async": false,
     "hoodie.cleaner.policy.failed.writes": "LAZY",
     "hoodie.write.concurrency.mode": "OPTIMISTIC_CONCURRENCY_CONTROL",
     "hoodie.write.lock.provider": "org.apache.hudi.client.transaction.lock.FileSystemBasedLockProvider",
     "hoodie.index.type": "BLOOM",
     "hoodie.bloom.index.use.metadata": true,
     "hoodie.metadata.enable": true,
     "hoodie.metadata.index.async": true,
     "hoodie.metadata.clean.async": true,
     "hoodie.metadata.index.bloom.filter.enable": true,
     "hoodie.keep.max.commits": 50,
     "hoodie.archive.async": true,
     "hoodie.archive.merge.enable": false,
     "hoodie.archive.beyond.savepoint": true,
     "hoodie.cleaner.policy": "KEEP_LATEST_BY_HOURS",
     "hoodie.cleaner.hours.retained": 1
   }
   ```
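
   For context, the options above are passed to the Hudi sink per micro-batch, roughly as sketched below. This is a minimal illustration, not our exact job: the table name, key fields, and S3 paths are placeholders, and the actual `writeStream` call (which needs a live `SparkSession`) is shown only as a comment.

   ```python
   # Sketch of how the Hudi options are assembled for the Structured
   # Streaming writer. All names/paths below are placeholders.
   hudi_options = {
       "hoodie.table.name": "example_table",              # placeholder
       "hoodie.datasource.write.table.type": "MERGE_ON_READ",
       "hoodie.datasource.write.recordkey.field": "id",   # placeholder
       "hoodie.datasource.write.precombine.field": "ts",  # placeholder
       "hoodie.embed.timeline.server": "true",
       "hoodie.embed.timeline.server.async": "false",     # the flag under test
       "hoodie.clean.async": "true",
       "hoodie.cleaner.policy": "KEEP_LATEST_BY_HOURS",
       "hoodie.cleaner.hours.retained": "1",
   }

   # In the streaming job (requires a SparkSession; shown for context only):
   # (df.writeStream
   #     .format("hudi")
   #     .options(**hudi_options)
   #     .option("checkpointLocation", "s3://bucket/checkpoints/example_table")
   #     .trigger(processingTime="120 seconds")  # our micro-batch interval
   #     .start("s3://bucket/tables/example_table"))
   ```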
   
   We ran two tests on the same table (same topic, same configs) with only one 
change: **hoodie.embed.timeline.server.async=true**.
   
   We found a ton of parquet files in the table, so we looked at the timeline. 
We could see cleans happening, but they seemed to be skipping files/versions of 
the table, as if the cleaner were looking at a partial snapshot of the timeline 
while deciding which files to clean. File versions all the way back to the bulk 
insert stage were still present in the table.
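
   To see what the cleaner covered, we have been inspecting the instant files under `.hoodie/`. A small helper along these lines (illustrative only; the timestamps are made up) groups completed instants by action so skipped cleans stand out:

   ```python
   from collections import defaultdict

   def group_instants(timeline_files):
       """Group Hudi timeline file names (from .hoodie/) by action type.

       An instant file is named <timestamp>.<action>[.<state>], e.g.
       20230801120000000.deltacommit or 20230801120500000.clean.requested.
       Completed instants carry no state suffix.
       """
       actions = defaultdict(list)
       for name in timeline_files:
           parts = name.split(".")
           if len(parts) < 2 or not parts[0].isdigit():
               continue  # skip hoodie.properties, aux files, etc.
           timestamp, action = parts[0], parts[1]
           state = parts[2] if len(parts) > 2 else "completed"
           if state == "completed":
               actions[action].append(timestamp)
       return dict(actions)

   # Hypothetical listing of .hoodie/ contents:
   files = [
       "hoodie.properties",
       "20230801120000000.deltacommit",
       "20230801120200000.deltacommit",
       "20230801120500000.clean.requested",
       "20230801120500000.clean.inflight",
       "20230801120500000.clean",
   ]
   completed = group_instants(files)
   # completed["clean"] -> ["20230801120500000"]
   ```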
   
   We really want to run as many services as we can asynchronously, as our 
datasets are extremely update-heavy and we benefit a great deal from async 
services. What could be causing this?
   
   **To Reproduce**
   
   Steps to reproduce the behavior:
   
   We haven't tried to reliably reproduce the issue yet, instead choosing to 
run with `hoodie.embed.timeline.server=true` and 
`hoodie.embed.timeline.server.async=false`.
   
   But we'll try it and add a comment soon.
   
   
   **Expected behavior**
   
   All services run as before, with cleans occurring correctly.
   
   **Environment Description**
   
   * Hudi version : 0.12.1
   
   * Spark version : 3.3.1
   
   * Storage (HDFS/S3/GCS..) : S3
   
   * Running on Docker? (yes/no) : No
   
   
   **Additional context**
   
   The 
[docs](https://hudi.apache.org/docs/0.12.1/concurrency_control#enabling-multi-writing)
 mention that **FileSystemBasedLockProvider** doesn't work with cloud stores 
like S3. We were unaware of this when we started, but our writers worked fine 
from the start. Could this be causing skipped writes/bad locks?
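
   If **FileSystemBasedLockProvider** really is unsafe on S3, the documented AWS alternative would be the DynamoDB-based provider. A sketch of the swap (the table name, partition key, and region below are placeholders for illustration):

   ```json
   {
     "hoodie.write.lock.provider": "org.apache.hudi.aws.transaction.lock.DynamoDBBasedLockProvider",
     "hoodie.write.lock.dynamodb.table": "hudi-locks",
     "hoodie.write.lock.dynamodb.partition_key": "example_table",
     "hoodie.write.lock.dynamodb.region": "us-east-1",
     "hoodie.write.lock.dynamodb.billing_mode": "PAY_PER_REQUEST"
   }
   ```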
   
   

