I think you're saying you have a zvol with HFS+ on top, and that when you
mount the HFS+ volume, it sends a lot of unmap requests to the zvol, which
is slow.

Before we get into complicated solutions, I have some stupid questions:

 - Why does it need to issue a zillion unmaps every time you mount?

 - Could you just ignore the UNMAPs? (The obvious answer is yes, but does it
hurt anything else?)

 - Do you have this unmap performance fix?

commit 0f6d88aded0d165f5954688a9b13bac76c38da84
Author: Alex Reece <[email protected]>
Date:   Sat Jul 26 13:40:04 2014 -0800

    4873 zvol unmap calls can take a very long time for larger datasets

--matt


On Sun, Dec 7, 2014 at 9:48 PM, Jorgen Lundman <[email protected]> wrote:

>
> Hello list,
>
> A request for comments (not an actual RFC).
>
> We have been looking for a way to reduce the time it takes to mount a
> zvol while it processes unmap requests.
>
> For example, a 400G zvol with HFS+ will process unmaps for about 10
> minutes at mount; only after that can you use it as expected. Possibly
> this is because XNU iterates the DISCARD list as part of its mount
> preflight, and the mount is not considered done until the list has been
> depleted.
>
> I am experimenting with a (temporary) fix to address this. It works as
> follows:
>
> In zvol_unmap:
>
>   * Obtain zfs_range_lock(offset, bytes) as expected [1]
>   * Place unmap request on a list to be processed.
>
> As part of the zvol_create_minor() creation of the "zv" zvol_state_t:
>
>   * Start zvol_unmap thread
>
> which will pop off unmap requests and call
>
>   * dmu_tx_commit() [2]
>   * zfs_range_unlock()
>   * zil_commit()
>
>
> The theory behind the patch is:
>
> * mount can quickly return (~30 seconds vs 10 mins)
> * holding the lock will block any write requests (one hopes)
> * delays are shifted from mount, to (possibly) unmount (waiting for list)
>
> The biggest concern is the possibility of a write request arriving for a
> range that has yet to be unmapped. The theory is that the range lock will
> block the write until the unmap has been processed. Usage so far appears
> to support this. But is it really safe?
>
> Possibly we could drop pending unmap requests on unmount, if it ends up
> being a problem (unmounting within 10 minutes of mounting). Or store them
> in the dataset (a ZAP object?). But is that worth doing for discard
> requests?
>
>
> The mount of the above 400GB zvol results in:
>
> kernel[0]: unmap_thr: 0xffffff823a7fb000 nodes cleared: 79394 (in list 0)
>
> ... from mount time.
>
> After mounting, normal usage of the zvol generally keeps the list small
> (<10 requests).
>
>
> So, any immediate concerns? Are there better ways to deal with unmaps? What
> solutions have other platforms gone with?
>
> Sincerely,
>
> Lund
>
>
>
>
> [1] (place request on list)
> https://github.com/openzfsonosx/zfs/blob/issue207/module/zfs/zvol.c#L2051
>
> [2] (finally process request)
> https://github.com/openzfsonosx/zfs/blob/issue207/module/zfs/zvol.c#L524
>
> Patch (diff):
> https://github.com/openzfsonosx/zfs/compare/issue207
>
> Original Issue tracker:
> https://github.com/openzfsonosx/zfs/issues/207
>
>
>
>
> --
> Jorgen Lundman       | <[email protected]>
> Unix Administrator   | +81 (0)3 -5456-2687 ext 1017 (work)
> Shibuya-ku, Tokyo    | +81 (0)90-5578-8500          (cell)
> Japan                | +81 (0)3 -3375-1767          (home)
> _______________________________________________
> developer mailing list
> [email protected]
> http://lists.open-zfs.org/mailman/listinfo/developer
>
