[
https://issues.apache.org/jira/browse/HUDI-1509?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17259988#comment-17259988
]
Nishith Agarwal edited comment on HUDI-1509 at 1/6/21, 7:01 PM:
----------------------------------------------------------------
[~Pratyaksh] [~uditme] Since you folks implemented/reviewed this, did you notice such degradation?
It looks like the implementation ensures that the rewritten record has default values for any new fields in the new schema. If yes, can we just use this implementation ->
[https://github.com/apache/hudi/blob/master/hudi-common/src/main/java/org/apache/hudi/avro/HoodieAvroUtils.java#L307]
and revert this particular change?
# The above implementation ensures that the record is created from the new schema, so any fields not present in the current record's schema will simply get a default value in the new record.
# Since Hudi doesn't necessarily support deleting fields from a schema (going from a larger schema to a smaller schema), there wouldn't be a case where a field is present in the older schema but not in the new schema. Is that the use case you were looking to support via this PR?
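The rewrite-with-defaults behavior discussed above can be sketched in plain Java. This is a simplified, hypothetical illustration (the class name `RecordRewriteSketch`, the method `rewriteRecord`, and the use of plain `Map`s in place of Avro `GenericRecord`/`Schema` are all assumptions for the sketch, not Hudi's actual HoodieAvroUtils code): the output record is built from the *new* schema, copying values that exist in the incoming record and falling back to the field's default otherwise.

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class RecordRewriteSketch {

  // newSchemaDefaults maps every field in the new schema to its default value.
  // Hypothetical stand-in for rewriting an Avro record against a new schema.
  public static Map<String, Object> rewriteRecord(Map<String, Object> oldRecord,
                                                  Map<String, Object> newSchemaDefaults) {
    Map<String, Object> newRecord = new LinkedHashMap<>();
    for (Map.Entry<String, Object> field : newSchemaDefaults.entrySet()) {
      // Copy the incoming value when the field exists; otherwise use the default.
      newRecord.put(field.getKey(),
          oldRecord.containsKey(field.getKey()) ? oldRecord.get(field.getKey())
                                                : field.getValue());
    }
    return newRecord;
  }

  public static void main(String[] args) {
    Map<String, Object> old = new LinkedHashMap<>();
    old.put("id", 1);
    old.put("name", "row-1");

    Map<String, Object> defaults = new LinkedHashMap<>();
    defaults.put("id", null);
    defaults.put("name", null);
    defaults.put("new_field", "default"); // field added by schema evolution

    System.out.println(rewriteRecord(old, defaults));
  }
}
```

Note that this per-field copy runs for every record on the write path, which is why the overhead shows up at bulk-insert scale.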
was (Author: nishith29):
[~Pratyaksh] [~uditme] Since you folks implemented/reviewed this, did you notice such degradation?
> Major performance degradation due to rewriting records with default values
> --------------------------------------------------------------------------
>
> Key: HUDI-1509
> URL: https://issues.apache.org/jira/browse/HUDI-1509
> Project: Apache Hudi
> Issue Type: Bug
> Affects Versions: 0.6.0, 0.6.1, 0.7.0
> Reporter: Prashant Wason
> Priority: Blocker
> Fix For: 0.7.0
>
>
> During the in-house testing for the 0.5x to 0.6x release upgrade, I have detected
> a performance degradation for writes into HUDI. I have traced the issue to
> the changes in the following commit:
> [[HUDI-727]: Copy default values of fields if not present when rewriting
> incoming record with new
> schema|https://github.com/apache/hudi/commit/6d7ca2cf7e441ad19d32d7a25739e454f39ed253]
> I wrote a unit test to reduce the scope of testing as follows:
> # Take an existing parquet file from a production dataset (size=690MB,
> #records=960K)
> # Read all the records from this parquet file into a JavaRDD
> # Time the call to HoodieWriteClient.bulkInsertPrepped()
> (bulkInsertParallelism=1)
> The above scenario is directly taken from our production pipelines, where each
> executor will ingest about a million records, creating a single parquet file in
> a COW dataset. This is a bulk-insert-only dataset.
> The time to complete the bulk insert prepped *decreased from 680 seconds to
> 380 seconds* when I reverted the above commit.
> Schema details: This HUDI dataset uses a large schema with 51 fields in the
> record.
--
This message was sent by Atlassian Jira
(v8.3.4#803005)