Github user rdblue commented on the issue:

    https://github.com/apache/spark/pull/21306
  
    > Can we support column range partition predicates please?
    
    This proposal includes an "apply" transform for passing other functions 
through by name, so that may help if you have additional transforms that 
aren't committed to Spark yet.
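    
    To illustrate the intent, here is a rough sketch of that pass-through, 
assuming an Expressions-style helper along the lines of what this PR proposes; 
the package, class, and method names below are illustrative, not a committed 
API:
    
    ```scala
    // Hypothetical helper modeled on the proposed transform API; the names
    // here are illustrative assumptions, not the final Spark API.
    import org.apache.spark.sql.connector.expressions.{Expressions, Transform}
    
    // Transforms Spark validates because they are widely used and understood:
    val byBucket: Transform = Expressions.bucket(16, "id")
    val byDay: Transform = Expressions.days("ts")
    
    // A function Spark does not recognize is passed through by name with
    // apply(); the source decides whether it understands "zorder".
    val custom: Transform = Expressions.apply(
      "zorder", Expressions.column("x"), Expressions.column("y"))
    ```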
    
    As for range partitioning, can you be more specific about what you mean? 
What would that transform function look like? Part of the rationale for the 
existing proposal is that the transforms it includes are all widely used and 
understood. I want to make sure that as we expand the set of validated 
transforms, we aren't introducing confusion.
    
    Also, could you share the use case you have in mind for this? It would be 
great to hear about uses beyond Iceberg tables.

