[ https://issues.apache.org/jira/browse/SPARK-29248?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16938050#comment-16938050 ]

Ximo Guanter commented on SPARK-29248:
--------------------------------------

Thanks for the quick reply [~kabhwan]! The issue you linked to sounds like a 
very interesting feature, but I feel this issue is different from it. I'm not 
trying to impose a specific clustering or sorting requirement on the data. The 
requirement here is only about providing information to the writer, not 
changing the execution plan in any way. I don't think that is achievable with 
the RequiresClustering interface, since the information flows in the opposite 
direction in that case (the writer imposes a number of partitions on Spark).
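
To make the direction of the information flow concrete, here is a rough
sketch. WriteBuilder and Expression are the existing connector interfaces;
the two traits and their methods are hypothetical, made up only to illustrate
the two directions:

    import org.apache.spark.sql.connector.expressions.Expression
    import org.apache.spark.sql.connector.write.WriteBuilder

    // Hypothetical, Spark -> writer: the execution plan is unchanged; Spark
    // simply tells the writer how many partitions the planned write will use.
    trait PartitionCountAwareWriteBuilder extends WriteBuilder {
      def withNumPartitions(numPartitions: Int): WriteBuilder
    }

    // Requirement-style, writer -> Spark (the RequiresClustering direction):
    // the source dictates clustering and a partition count, which changes
    // the execution plan.
    trait RequiresClusteringSketch extends WriteBuilder {
      def requiredClustering: Array[Expression]
      def requiredNumPartitions: Int
    }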

> Pass in number of partitions to WriteBuilder
> -------------------------------------------
>
>                 Key: SPARK-29248
>                 URL: https://issues.apache.org/jira/browse/SPARK-29248
>             Project: Spark
>          Issue Type: Sub-task
>          Components: SQL
>    Affects Versions: 3.0.0
>            Reporter: Ximo Guanter
>            Priority: Major
>
> When implementing a ScanBuilder, we require the implementor to provide the 
> schema of the data and the number of partitions.
> However, when someone is implementing a WriteBuilder we only pass them the 
> schema, not the number of partitions. This is an asymmetrical developer 
> experience. Passing in the number of partitions to the WriteBuilder would 
> enable data sources to provision their write targets before starting to 
> write. For example, it could be used to provision a Kafka topic with a 
> specific number of partitions.
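
For illustration, a rough sketch of the Kafka provisioning example from the
description above, assuming a hypothetical numPartitions value that Spark
would hand to the data source before the write starts (no such parameter
exists in the current WriteBuilder API):

    import java.util.{Collections, Properties}
    import org.apache.kafka.clients.admin.{AdminClient, NewTopic}

    // numPartitions is the hypothetical value Spark would pass to the writer;
    // the source uses it to provision the target topic before writing begins.
    def provisionTopic(bootstrapServers: String, topic: String,
                       numPartitions: Int): Unit = {
      val props = new Properties()
      props.put("bootstrap.servers", bootstrapServers)
      val admin = AdminClient.create(props)
      try {
        // One Kafka partition per Spark write task; replication factor 1
        // for brevity.
        val newTopic = new NewTopic(topic, numPartitions, 1.toShort)
        admin.createTopics(Collections.singleton(newTopic)).all().get()
      } finally {
        admin.close()
      }
    }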



