gtwuser opened a new issue, #6925:
URL: https://github.com/apache/hudi/issues/6925

   Using the configs below, as mentioned in the documentation, we are writing multiple dataframes concurrently to Hudi tables via `concurrent.futures.ProcessPoolExecutor(max_workers=200) as executor`, but the DynamoDB table is not getting created. Please correct me here; I am trying to understand why the DynamoDB table is not getting created.
   ```python
   'hoodie.write.lock.provider': 'org.apache.hudi.aws.transaction.lock.DynamoDBBasedLockProvider',
   'hoodie.write.lock.dynamodb.table': 'hudi_db_lock',
   'hoodie.write.lock.dynamodb.endpoint_url': 'dynamodb.us-east-1.amazonaws.com',
   'hoodie.write.lock.dynamodb.partition_key': 'hudi_db_lock',
   ```
   
   **Full config:**
   ```python
   commonConfig = {
       'className': 'org.apache.hudi',
       'hoodie.datasource.hive_sync.use_jdbc': 'false',
       'hoodie.datasource.write.precombine.field': 'payload.recordedAt',
       'hoodie.datasource.write.recordkey.field': 'metadata.msgID,metadata.topic',
       'hoodie.write.lock.provider': 'org.apache.hudi.aws.transaction.lock.DynamoDBBasedLockProvider',
       'hoodie.write.lock.dynamodb.table': 'hudi_db_lock',
       'hoodie.write.lock.dynamodb.endpoint_url': 'dynamodb.us-east-1.amazonaws.com',
       'hoodie.write.lock.dynamodb.partition_key': 'hudi_db_lock',
       'hoodie.table.name': 'sse',
       # 'hoodie.consistency.check.enabled': 'true',
       'hoodie.datasource.hive_sync.database': args['database_name'],
       'hoodie.datasource.write.reconcile.schema': 'true',
       'hoodie.datasource.hive_sync.table': f'sse_{"_".join(prefix.split("/")[-7:-5])}'.lower(),
       'hoodie.datasource.hive_sync.enable': 'true',
       'path': 's3://' + args['curated_bucket'] + '/merged/sse-native/' + f'{prefix.split("/")[-7]}'.lower(),
       # 1,024 * 1,024 * 128 = 134,217,728 (134 MB)
       'hoodie.parquet.small.file.limit': '307200',
       'hoodie.parquet.max.file.size': '128000000'
   }
   ```
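   For context, the concurrent fan-out described above looks roughly like the sketch below. `write_one` is a hypothetical placeholder for the actual per-dataframe Spark/Hudi write (which passes `commonConfig` as writer options); here it only echoes its input so the executor pattern is visible.
   ```python
   import concurrent.futures

   def write_one(table_path):
       # Placeholder for the real Glue job step: in production this runs a
       # Hudi write for one dataframe using the commonConfig options above.
       return table_path

   def write_all(paths, max_workers=200):
       # Fan the per-dataframe writes out across worker processes, as in the job.
       with concurrent.futures.ProcessPoolExecutor(max_workers=max_workers) as executor:
           futures = [executor.submit(write_one, p) for p in paths]
           return [f.result() for f in concurrent.futures.as_completed(futures)]
   ```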
   
   **Environment Description**
   
   * Hudi version : 0.11.1
   
   * Spark version : 3.1
   
   * Storage (HDFS/S3/GCS..) : S3
   
   * Running on Docker? (yes/no) :  no
   
   
   **Additional context**
   
   
We are running the Hudi APIs via AWS Glue jobs.
   
@n3nash @nsivabalan @alexeykudinkin, could you please provide some pointers on this?


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]
