skiselkov commented on this pull request.
> +}
+
+/*
+ * Executes a zio_trim on a range tree holding freed extents in the metaslab.
+ * The set of extents is taken from the metaslab's ms_prev_ts. If there is
+ * another trim currently executing on that metaslab, this function blocks
+ * until that trim completes.
+ * The `auto_trim' argument signals whether the trim is being invoked on
+ * behalf of auto or manual trim. The differences are:
+ * 1) For auto trim the trimset is split up into subtrees, each containing no
+ * more than zfs_max_bytes_per_trim total bytes. Each subtree is then
+ * trimmed in one zio. This is done to limit the number of LBAs per
+ * trim command, as many devices perform suboptimally with large trim
+ * commands, even if they indicate support for them. Manual trim already
+ * applies this limit earlier by limiting the trimset size, so the
+ * whole trimset can be issued in a single zio.
Because autotrim uses a different mechanism for command pacing. It tries to
execute its trims as fast as it can, because more trims are likely pending for
the device to process from other metaslabs and more deletes may be incoming to
us. To that end, autotrim takes all the data it has accumulated and constructs
all the trim zios at once (multiple zios because of `zfs_max_bytes_per_trim`).
They aren't all dispatched simultaneously, though. Rather, they are placed in
the vdev_queue, which takes care of limiting the number in flight to
zfs_vdev_trim_{min,max}_active (and also dispatches them in LBA order,
although the utility of that is questionable).

It was easiest to modify this function to perform the tree splitting and
subzio execution for autotrim here. Since manual trim already implements its
own limiting to `zfs_max_bytes_per_trim` higher up the stack in order to
implement its rate control, I felt this was the most reasonable compromise:
it produces the least duplicate code while preserving all the rate control
and queueing logic.
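To make the splitting step concrete, here is a minimal standalone sketch of
carving an accumulated set of freed extents into chunks of no more than
`zfs_max_bytes_per_trim` total bytes, one chunk per would-be trim zio. The
types, the `split_trimset` helper, and the tunable's value are all
illustrative stand-ins, not the actual OpenZFS implementation (which operates
on range trees, not arrays):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Illustrative stand-in for the zfs_max_bytes_per_trim tunable. */
#define ZFS_MAX_BYTES_PER_TRIM	(64 * 1024)

/* Hypothetical flattened extent; the real code walks a range tree. */
typedef struct extent {
	uint64_t	ext_start;	/* offset of freed extent */
	uint64_t	ext_size;	/* length in bytes */
} extent_t;

/*
 * Split a list of freed extents into consecutive chunks, each holding
 * no more than max_bytes total, mirroring how autotrim carves a
 * metaslab's trimset into one zio per zfs_max_bytes_per_trim worth of
 * extents. Returns the number of chunks; chunk_lens[i] holds the
 * extent count of chunk i. An extent larger than max_bytes gets a
 * chunk to itself rather than being dropped.
 */
static size_t
split_trimset(const extent_t *exts, size_t n, uint64_t max_bytes,
    size_t *chunk_lens)
{
	size_t nchunks = 0, len = 0;
	uint64_t bytes = 0;

	for (size_t i = 0; i < n; i++) {
		if (len > 0 && bytes + exts[i].ext_size > max_bytes) {
			/* Current chunk is full; start a new one. */
			chunk_lens[nchunks++] = len;
			len = 0;
			bytes = 0;
		}
		len++;
		bytes += exts[i].ext_size;
	}
	if (len > 0)
		chunk_lens[nchunks++] = len;
	return (nchunks);
}
```

In the real code each chunk would become one trim zio handed to the
vdev_queue, which then enforces the zfs_vdev_trim_{min,max}_active limits on
how many are in flight at once.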
Reply to this email directly or view it on GitHub:
https://github.com/openzfs/openzfs/pull/172#discussion_r114458049