ajantha-bhat commented on code in PR #7028:
URL: https://github.com/apache/iceberg/pull/7028#discussion_r1126718080


##########
docs/spark-procedures.md:
##########
@@ -206,6 +206,7 @@ the `expire_snapshots` procedure will never remove files which are still require
 | `retain_last` |    | int       | Number of ancestor snapshots to preserve regardless of `older_than` (defaults to 1) |
 | `max_concurrent_deletes` |    | int       | Size of the thread pool used for delete file actions (by default, no thread pool is used) |
 | `stream_results` |    | boolean       | When true, deletion files will be sent to Spark driver by RDD partition (by default, all the files will be sent to Spark driver). This option is recommended to set to `true` to prevent Spark driver OOM from large file size |
+| `snapshot_ids` |   | array of long       | List of snapshot IDs to expire. |

Review Comment:
   It would be good if we also add an example below, after line 229.
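
   A sketch of what such an example might look like, following the `CALL` syntax used elsewhere on this docs page. The catalog name (`hive_prod`), table name (`db.sample`), and snapshot IDs here are placeholders, not values from the PR:

   ```sql
   -- Expire only the listed snapshots, regardless of their age
   -- (hypothetical catalog/table names and snapshot IDs for illustration)
   CALL hive_prod.system.expire_snapshots(table => 'db.sample', snapshot_ids => ARRAY(123456789, 987654321));
   ```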



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

