ajantha-bhat commented on code in PR #4902:
URL: https://github.com/apache/iceberg/pull/4902#discussion_r884963861


##########
docs/spark/spark-procedures.md:
##########
@@ -268,6 +268,7 @@ Iceberg can compact data files in parallel using Spark with the `rewriteDataFile
 | `sort_order`  |    | string | Comma separated sort_order_column. Where sort_order_column is a space separated sort order info per column (ColumnName SortDirection NullOrder). <br/> SortDirection can be ASC or DESC. NullOrder can be NULLS FIRST or NULLS LAST |
 | `options`     | ️   | map<string, string> | Options to be used for actions|
 | `where`       | ️   | string | predicate as a string used for filtering the files. Note that all files that may contain data matching the filter will be selected for rewriting|
+| `z_order`     |    | string | Comma separated column names that are to be considered for z ordering |

Review Comment:
   I am not sure how to do that. Can you elaborate? How would z_order be specified without adding an extra argument?
   
   I understand that z-order extends the sort strategy itself, so at any given point in time only a sort order or a z-order can be in effect.
   But the `sort_order` argument carries per-column sort direction and nulls-first/last information, which is not needed for z-ordering.
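   For illustration, the two shapes being discussed might look like the following. This is only a sketch: the `zorder(...)` form embedded in `sort_order` is an assumed alternative, not a confirmed syntax, and `catalog_name`/`db.sample` are placeholder names.
   
   ```sql
   -- Assumed alternative: reuse sort_order to carry the z-order spec,
   -- so no extra procedure argument is needed.
   CALL catalog_name.system.rewrite_data_files(
     table => 'db.sample',
     strategy => 'sort',
     sort_order => 'zorder(c1, c2)'
   );
   
   -- The separate-argument form this patch documents:
   CALL catalog_name.system.rewrite_data_files(
     table => 'db.sample',
     strategy => 'sort',
     z_order => 'c1, c2'
   );
   ```
   
   The trade-off is that the embedded form keeps one entry point for both strategies, while the separate argument avoids overloading `sort_order` with a value that has no direction/null-order components.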



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

