[ https://issues.apache.org/jira/browse/HUDI-1509?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Prashant Wason updated HUDI-1509:
---------------------------------
Description:
During in-house testing of the 0.5.x to 0.6.x release upgrade, I detected a
performance degradation for writes into HUDI. I have traced the issue to the
changes in the following commit:
[[HUDI-727]: Copy default values of fields if not present when rewriting
incoming record with new
schema|https://github.com/apache/hudi/commit/6d7ca2cf7e441ad19d32d7a25739e454f39ed253]
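For context on where the extra per-record work comes from, the change amounts to
rewriting every incoming record into the writer schema field by field, filling in
schema defaults for any field the record does not carry. A minimal sketch of that
pattern using plain Avro APIs (illustrative only, not the actual Hudi code path;
the class and method names here are hypothetical):
{code:java}
import org.apache.avro.Schema;
import org.apache.avro.generic.GenericRecord;
import org.apache.avro.generic.GenericRecordBuilder;

// Illustrative sketch only (not the actual Hudi code path): rewrite a record
// into the writer schema, letting GenericRecordBuilder fill schema defaults
// for any field the incoming record does not carry. This copy runs per record.
public class RecordRewriteSketch {
  public static GenericRecord rewriteWithDefaults(GenericRecord record, Schema newSchema) {
    GenericRecordBuilder builder = new GenericRecordBuilder(newSchema);
    for (Schema.Field field : newSchema.getFields()) {
      Schema.Field existing = record.getSchema().getField(field.name());
      if (existing != null && record.get(existing.pos()) != null) {
        builder.set(field, record.get(existing.pos()));
      }
      // Fields left unset fall back to their schema defaults in build();
      // resolving and copying those defaults is extra work on every record.
    }
    return builder.build();
  }
}
{code}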
I wrote a unit test to narrow the scope of testing, as follows:
# Take an existing parquet file from a production dataset (size=690MB, #records=960K)
# Read all the records from this parquet file into a JavaRDD (see the setup sketch after this list)
# Time the call to HoodieWriteClient.bulkInsertPrepped() with bulkInsertParallelism=1
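For reference, a hedged sketch of how the test data can be prepared (assumed class
and method names; converting the GenericRecords into prepped HoodieRecords and the
timed bulkInsertPrepped() call itself are dataset-specific and elided):
{code:java}
import java.util.ArrayList;
import java.util.List;

import org.apache.avro.generic.GenericRecord;
import org.apache.hadoop.fs.Path;
import org.apache.parquet.avro.AvroParquetReader;
import org.apache.parquet.hadoop.ParquetReader;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;

// Hedged sketch of the test setup, not the actual unit test: read every record
// from the source parquet file and hand it to Spark as a single partition
// (the equivalent of bulkInsertParallelism=1). The whole file is held in memory,
// which is acceptable for a test of this size.
public class BulkInsertTestSetup {
  public static JavaRDD<GenericRecord> readAsSinglePartition(JavaSparkContext jsc,
                                                             String parquetFile) throws Exception {
    List<GenericRecord> records = new ArrayList<>();
    try (ParquetReader<GenericRecord> reader =
             AvroParquetReader.<GenericRecord>builder(new Path(parquetFile)).build()) {
      GenericRecord record;
      while ((record = reader.read()) != null) {
        records.add(record);
      }
    }
    return jsc.parallelize(records, 1);  // one partition => one output parquet file
  }
}
{code}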
The above scenario is taken directly from our production pipelines, where each
executor ingests about a million records, creating a single parquet file in a
COW dataset. This is a bulk-insert-only dataset.
The time to complete the prepped bulk insert *decreased from 680 seconds to
380 seconds* when I reverted the above commit.
Schema details: this HUDI dataset uses a large schema with 51 fields per record.
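For scale: the ~300 second difference over ~960K records works out to roughly
0.3 ms of additional time per record, i.e. on the order of 6 microseconds per
field for this 51-field schema.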
> Major performance degradation due to rewriting records with default values
> --------------------------------------------------------------------------
>
> Key: HUDI-1509
> URL: https://issues.apache.org/jira/browse/HUDI-1509
> Project: Apache Hudi
> Issue Type: Bug
> Reporter: Prashant Wason
> Priority: Blocker
>
--
This message was sent by Atlassian Jira
(v8.3.4#803005)