cloud-fan commented on issue #26750: [SPARK-28948][SQL] Support passing all Table metadata in TableProvider
URL: https://github.com/apache/spark/pull/26750#issuecomment-563053185
 
 
> That way, Spark is responsible for determining things like whether schemas "match" and can use more context to make a reasonable choice.
   
We can do that too, e.g. first call `getTable(schema, partitioning, properties)` and then check that the returned table reports a schema and partitioning compatible with the ones passed in.
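
For illustration, a minimal sketch of that Spark-side check, assuming the three-argument `getTable(schema, partitioning, properties)` signature this PR works toward; `SchemaCheck.loadAndValidate` is a hypothetical helper, and plain equality stands in for whatever notion of "compatible" Spark would actually apply:

```scala
import java.util

import org.apache.spark.sql.connector.catalog.{Table, TableProvider}
import org.apache.spark.sql.connector.expressions.Transform
import org.apache.spark.sql.types.StructType

object SchemaCheck {
  // Call getTable with the user-specified metadata, then verify that the
  // returned table reports compatible values.
  def loadAndValidate(
      provider: TableProvider,
      userSchema: StructType,
      userPartitioning: Array[Transform],
      properties: util.Map[String, String]): Table = {
    val table = provider.getTable(userSchema, userPartitioning, properties)
    require(table.schema() == userSchema,
      s"Returned schema ${table.schema()} does not match the user-specified $userSchema")
    require(table.partitioning().sameElements(userPartitioning),
      "Returned partitioning does not match the user-specified one")
    table
  }
}
```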
   
Even if we had separate `inferSchema` and `inferPartitioning` methods, we would still require the `getTable` method to throw `IllegalArgumentException` to reject an incompatible schema/partitioning, e.g. when there is a user-provided schema and we pass it to `getTable` directly.
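
On the provider side, the rejection could look like the following sketch. `ExampleSource` is hypothetical (a source whose actual schema is fixed), again assuming the three-argument `getTable`:

```scala
import java.util

import org.apache.spark.sql.connector.catalog.{Table, TableCapability}
import org.apache.spark.sql.connector.expressions.Transform
import org.apache.spark.sql.types.{StringType, StructField, StructType}

// Hypothetical source: getTable rejects user-specified metadata it cannot
// honor up front, instead of failing later during scan planning.
object ExampleSource {
  private val actualSchema = StructType(Seq(StructField("value", StringType)))

  def getTable(
      schema: StructType,
      partitioning: Array[Transform],
      properties: util.Map[String, String]): Table = {
    if (schema != actualSchema) {
      throw new IllegalArgumentException(
        s"User-specified schema $schema does not match the actual schema $actualSchema")
    }
    // This source is unpartitioned, so any user-specified partitioning is incompatible.
    if (partitioning.nonEmpty) {
      throw new IllegalArgumentException(
        "This source does not support user-specified partitioning")
    }
    new Table {
      override def name(): String = "example"
      override def schema(): StructType = actualSchema
      override def capabilities(): util.Set[TableCapability] =
        util.Collections.emptySet[TableCapability]()
    }
  }
}
```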
   
My main point is that this PR is a natural extension of the existing API: if `TableProvider` accepts a user-specified schema, why not accept user-specified partitioning too? The refactor might be good, but it should be a separate story.
   
If we all agree that the existing API is wrong (i.e. the way it accepts a user-specified schema), then this PR should be rejected, as it extends a wrong API. But that does not seem to be the case here.
