rdblue commented on code in PR #4902: URL: https://github.com/apache/iceberg/pull/4902#discussion_r885058304
rdblue commented on code in PR #4902:
URL: https://github.com/apache/iceberg/pull/4902#discussion_r885058304


##########
docs/spark/spark-procedures.md:
##########
@@ -268,6 +268,7 @@ Iceberg can compact data files in parallel using Spark with the `rewriteDataFile
 | `sort_order`      | | string | Comma separated sort_order_column. Where sort_order_column is a space separated sort order info per column (ColumnName SortDirection NullOrder). <br/> SortDirection can be ASC or DESC. NullOrder can be NULLS FIRST or NULLS LAST |
 | `options`         | ️ | map<string, string> | Options to be used for actions |
 | `where`           | ️ | string | predicate as a string used for filtering the files. Note that all files that may contain data matching the filter will be selected for rewriting |
+| `z_order`         | | string | Comma separated column names that are to be considered for z ordering |

Review Comment:
   I think the intent is to make zorder look like a function: `zorder(col1, col2)`. +1 for unifying sort order and zorder.

--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]
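To illustrate the unification being discussed: instead of a separate `z_order` parameter, the function-style syntax suggested in the review comment would reuse the existing `sort_order` parameter of the `rewrite_data_files` procedure. The sketch below is a hedged example of that proposed call shape, not the merged API of this PR; the catalog, database, and column names are placeholders.

```sql
-- Hypothetical call using the suggested function-style zorder syntax,
-- passed through the existing sort_order parameter rather than a new one.
CALL my_catalog.system.rewrite_data_files(
  table      => 'db.sample',
  strategy   => 'sort',
  sort_order => 'zorder(col1, col2)'
);
```

Routing z-ordering through `sort_order` keeps a single entry point for all sort-based rewrites, which is the "+1 for unifying sort order and zorder" point in the comment above.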
