[ https://issues.apache.org/jira/browse/HUDI-5363?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17645527#comment-17645527 ]

Alexey Kudinkin commented on HUDI-5363:
---------------------------------------

*Relationship of Parallelism and # of files created*

- Shuffle parallelism should be decoupled from the # of files created (at least by default; in some explicit, performance-oriented cases this coupling might be alright)

- If a single Spark partition writes into a single table partition, 1 file will be created (depending on the data size)

- If a single Spark partition writes into multiple physical partitions, the number of files created depends on the *ordering* of the records w/in the Spark partition (this is the case for Iceberg/Delta, but not Hudi, since Hudi keeps the file-writer open until done; for Hudi it could result in OOMs though)
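The ordering point above can be illustrated with a small, self-contained sketch (plain Python, not actual Hudi/Iceberg writer code; function names and the record layout are illustrative only). It contrasts a single-open-writer strategy, which rolls a new file every time the partition value changes, with a keep-all-writers-open strategy, which produces one file per partition at the cost of holding every writer in memory:

```python
# Illustrative sketch only: how record ordering within one Spark partition
# affects the number of files written under two writer strategies.

def count_files_single_open_writer(records):
    """Iceberg/Delta-style: one file-writer open at a time; a new file is
    started whenever the physical partition value changes."""
    files = 0
    current = object()  # sentinel: no partition seen yet
    for partition, _value in records:
        if partition != current:
            files += 1          # close previous writer, open a new file
            current = partition
    return files

def count_files_keep_writers_open(records):
    """Hudi-style: a writer stays open per partition until the task is done,
    so each partition yields exactly one file -- but holding many open
    writers is the potential OOM risk noted above."""
    return len({partition for partition, _value in records})

unsorted_recs = [("2022-12-01", 1), ("2022-12-02", 2), ("2022-12-01", 3)]
sorted_recs = sorted(unsorted_recs)

print(count_files_single_open_writer(unsorted_recs))  # 3 files: partition flips back
print(count_files_single_open_writer(sorted_recs))    # 2 files: one per partition
print(count_files_keep_writers_open(unsorted_recs))   # 2 files regardless of order
```

Sorting records by partition before writing makes the single-open-writer strategy match the one-file-per-partition minimum, which is why ordering matters for the Iceberg/Delta case.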

> Remove default parallelism values for all ops
> ---------------------------------------------
>
>                 Key: HUDI-5363
>                 URL: https://issues.apache.org/jira/browse/HUDI-5363
>             Project: Apache Hudi
>          Issue Type: Bug
>          Components: writer-core
>    Affects Versions: 0.12.1
>            Reporter: Alexey Kudinkin
>            Assignee: Alexey Kudinkin
>            Priority: Blocker
>             Fix For: 0.13.0
>
>
> Currently, we always override the parallelism of the incoming datasets:
>  # If the user specified shuffle parallelism explicitly, we'd use it to override 
> the original one
>  # If the user did NOT specify shuffle parallelism, we'd use the default value of 200
> The second case is problematic: we're blindly overriding the "natural" parallelism of 
> the data (determined by the source of the data) and replacing it with a 
> static, unrelated value.
> Instead, we should only be overriding the parallelism in the following case:
>  # The user provided an overriding value explicitly
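The proposed selection logic can be sketched as follows (plain Python; the function name and parameters are hypothetical, not actual Hudi config keys): override the dataset's "natural" parallelism only when the user supplied a value explicitly, and never fall back to a static default of 200.

```python
# Hypothetical sketch of the proposed behavior: an explicit user setting wins,
# otherwise the source-determined ("natural") parallelism is preserved.

def effective_parallelism(user_parallelism, natural_parallelism):
    """Return the shuffle parallelism to use for a write operation."""
    if user_parallelism is not None:
        return user_parallelism      # case 1: explicit user override
    return natural_parallelism       # default: keep the source's parallelism

print(effective_parallelism(None, 50))  # 50 -> natural parallelism preserved
print(effective_parallelism(400, 50))   # 400 -> explicit override applied
```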



--
This message was sent by Atlassian Jira
(v8.20.10#820010)
