OK, an initial version can be found here:
http://blog.multiplay.co.uk/dropzone/freebsd/zz-zfs-trim-priority.patch
The key differences from Andriy's original:
1. Switches zio_trim to use ZIO_TYPE_FREE with the new ZIO_FREE_PHYS_PIPELINE,
so that only minimal changes are required for compatibility with the vdev_xxx
code and the I/O scheduler.
2. Prevents zio_vdev_child_io from passing ZIO_STAGE_VDEV_IO_DONE for I/Os
that don't require it according to the parent zio.
3. Drops the trim map lock while processing zio_trim because, with the new
I/O scheduler, a write may be executed instead of a free, which would
otherwise cause a panic due to recursion on a non-recursive lock.
Andriy / Pawel: Do you think #3 is the correct thing to do?
I was reluctant to flag the mutex as recursive, as this should be the only
case where recursion is allowed.
Testing with instrumentation at the CAM layer shows that the maximum number
of TRIM I/Os is correctly limited, so I believe the behaviour is now correct.
I'm currently testing to determine a good baseline value for
zfs_vdev_trim_max_active, as I suspect allowing multiple BIO_DELETEs to be
coalesced into a single TRIM / UNMAP I/O at the CAM layer will be beneficial
to performance.
Regards
Steve
----- Original Message -----
From: "Steven Hartland"
Found the issue: basically it's never hitting the queue code, as there are
various unobvious conditions scattered around. I'm working on a fix.
Regards
Steve
----- Original Message -----
From: "Steven Hartland"
I've just been doing some testing on this, as I suspected that the current
default of vfs.zfs.vdev.trim_max_active=1 could prevent TRIM coalescing
at the lower levels, but it seems that the number of requests submitted to
the vdev / device can exceed the trim_max_active value.
For example, I've seen 81 outstanding TRIM requests at the CAM layer.
From the comments in the code this sounds like it should never happen;
is that the case?
Regards
Steve
----- Original Message -----
From: "Matthew Ahrens" <[email protected]>
That looks fine. I don't have any experience with how devices respond to
concurrent TRIM commands, so I would defer to you on what trim_max_active
should be.
I could see arguments either way on whether TRIM should be higher or lower
priority than SCRUB, but in practice it doesn't matter; the priority only
comes into play once you have zfs_vdev_max_active total I/Os queued to the
device. By default this is larger than the sum of all queues' max_active
values, so unless zfs_vdev_max_active is lowered it is not possible for the
priority to matter.
--matt
On Sat, Nov 30, 2013 at 7:13 AM, Andriy Gapon <[email protected]> wrote:
[resending with CC fixed]
Matt,
as you most likely know, ZFS/FreeBSD already has the ability to use the TRIM
command with disks that support it.
As the command can be quite slow and it is a maintenance kind of command, it
makes sense to assign it a very low priority.
I've come up with the following change that introduces a new priority for the
TRIM zios:
http://people.freebsd.org/~avg/zfs-trim-priority.diff
To be honest, this change actually attempts to restore what we already had in
FreeBSD before I merged your write throttle & I/O scheduler performance work.
Could you please review the change?
I am not sure if I correctly translated my intent to the min_active and
max_active values. I will greatly appreciate your help with these.
Thank you!
--
Andriy Gapon
_______________________________________________
developer mailing list
[email protected]
http://lists.open-zfs.org/mailman/listinfo/developer