prashantwason opened a new pull request #3873:
URL: https://github.com/apache/hudi/pull/3873
## What is the purpose of the pull request
Metadata Table bootstrap for very large tables (1200+ partitions, 10 million+
files) does not complete in a reasonable amount of time, or leads to OOM on the
Spark driver node even with 16 GB of memory.
This patch addresses these scaling issues when bootstrapping very large
datasets.
## Brief change log
The following improvements are implemented:
1. Memory overhead reduction:
- The existing code caches a FileStatus for each file in memory.
- Added a new class, DirectoryInfo, which caches a directory's file
list with only the parts of the FileStatus that are needed (the filename and
the file length). This reduces the memory requirements.
2. Improved parallelism:
- The existing code collects all the listings on the driver and then creates
the HoodieRecords on the driver.
- This takes a long time for large tables (11 million HoodieRecords to be
created).
- Added a new function in SparkRDDWriteClient specifically for the bootstrap
commit. In it, HoodieRecord creation is parallelized across the executors so it
completes quickly.
3. Fixed the setting that limits the number of parallel listings:
- The existing code had a bug wherein 1500 executors were hardcoded to perform
the listing. This led to an exception due to the limit on Spark's result
memory.
- Corrected the code to use the config setting.
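A minimal sketch of the memory-reduction idea in point 1, assuming a hypothetical shape for the DirectoryInfo class (the field names and methods here are illustrative, not the patch's actual API): instead of holding a full FileStatus per file (path, owner, permissions, timestamps, block locations, ...), only the filename and file length are retained per directory.

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch: cache one small entry per file (name -> length)
// rather than a full FileStatus object per file.
class DirectoryInfo {
    private final String relativePath;
    // filename -> file length in bytes; far smaller than a FileStatus per file
    private final Map<String, Long> filenameToSize = new HashMap<>();

    DirectoryInfo(String relativePath) {
        this.relativePath = relativePath;
    }

    void addFile(String filename, long length) {
        filenameToSize.put(filename, length);
    }

    int getFileCount() {
        return filenameToSize.size();
    }

    long getTotalSize() {
        return filenameToSize.values().stream().mapToLong(Long::longValue).sum();
    }

    String getRelativePath() {
        return relativePath;
    }
}
```

With ~12 million files, keeping only a string and a long per file instead of a full FileStatus is what keeps the driver within its 16 GB heap.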
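For point 2, the real patch distributes HoodieRecord creation across Spark executors via SparkRDDWriteClient. As a self-contained analogy (the class and method names below are hypothetical, and `parallelStream()` merely stands in for the executors), the key change is building one record per partition in parallel rather than serially on the driver:

```java
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

// Hypothetical stand-in for a HoodieRecord carrying one partition's file listing.
class PartitionListingRecord {
    final String partition;
    final Map<String, Long> files; // filename -> length

    PartitionListingRecord(String partition, Map<String, Long> files) {
        this.partition = partition;
        this.files = files;
    }
}

class BootstrapRecordBuilder {
    // Build one record per partition in parallel rather than serially on the
    // driver; parallelStream() stands in here for the Spark executors used by
    // the actual bootstrap commit path.
    static List<PartitionListingRecord> buildRecords(Map<String, Map<String, Long>> listings) {
        return listings.entrySet().parallelStream()
                .map(e -> new PartitionListingRecord(e.getKey(), e.getValue()))
                .collect(Collectors.toList());
    }
}
```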
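For point 3, the shape of the fix can be sketched as follows (again hypothetical names, not the patch's code): the listing parallelism is taken from configuration and clamped to the number of partitions, instead of being hardcoded to 1500 tasks, which overflowed Spark's result memory limit.

```java
// Illustrative sketch of the parallelism fix: derive the listing parallelism
// from configuration, clamped to the number of partitions, instead of using a
// hardcoded value of 1500.
class ListingParallelism {
    static int resolve(int configuredParallelism, int numPartitions) {
        return Math.max(1, Math.min(configuredParallelism, numPartitions));
    }
}
```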
Result on a dataset with 1299 partitions and 12 million files:
- file listing time = 1.5 minutes
- HoodieRecord creation time = 13 seconds
- deltacommit duration = 2.6 minutes
## Verify this pull request
This pull request is already covered by the existing tests for the Hoodie
Metadata Table.
## Committer checklist
- [ ] Has a corresponding JIRA in PR title & commit
- [ ] Commit message is descriptive of the change
- [ ] CI is green
- [ ] Necessary doc changes done or have another open PR
- [ ] For large changes, please consider breaking it into sub-tasks under
an umbrella JIRA.