----- On Apr 12, 2016, at 6:21 PM, Dirk Steinberg [email protected] wrote:
> Hi,
>
> in order to improve long-term performance on consumer-grade SSDs,
> I would like to reserve a certain range of LBA addresses on a freshly
> TRIMmed SSD that is never written to. That can be done by slicing the
> disk and leaving one slice unused.
>
> OTOH I really like to use whole-disk vdev pools; putting pools
> on slices is unnecessarily complex and error-prone.
> Also, should one later decide that setting aside, say, 30% of the
> disk capacity as spare area was too much, shrinking that reservation
> afterwards in a slicing setup is a pain.
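>
> For reference, the slice-based approach would look roughly like this
> (a sketch with a hypothetical device name; slice 0 sized interactively
> in format(1M) to ~70% of the disk, the remainder left unallocated):
>
> format -e c1t0d0            # label the disk, size slice 0, leave the rest free
> zpool create tank c1t0d0s0  # pool lives on the slice, not the whole disk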
>
> Therefore my idea is to just reserve a certain number of SSD blocks
> in the zpool and never use them. They must be nailed to specific
> block addresses but must never be written to.
>
> A sparsely provisioned zvol does not do the trick, and neither does
> filling it thickly with zeros (then the blocks would have been written
> to, ignoring compression etc. for the moment).
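>
> For illustration, these are the two approaches I ruled out (a sketch,
> using the same hypothetical zvol name as below):
>
> zfs create -s -V 50G zones/SPARE                      # sparse: allocates nothing up front
> dd if=/dev/zero of=/dev/zvol/rdsk/zones/SPARE bs=1M   # allocates, but only by writing every block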
>
> What I need is more or less exactly what Joyent implemented for the
> multi_vdev_crash_dump support: just preallocate a range of blocks
> for a zvol. So I tried:
>
> zfs create -V 50G -o checksum=noparity zones/SPARE
>
> Looking at "zpool iostat" I see that nothing much happens at all.
> Also, I can see that no actual blocks are allocated:
> [root@nuc6 ~]# zfs get referenced zones/SPARE
> NAME         PROPERTY    VALUE  SOURCE
> zones/SPARE  referenced  9K     -
>
> So the magic apparently only happens when you actually
> activate dumping to that zvol:
>
> [root@nuc6 ~]# dumpadm -d /dev/zvol/dsk/zones/SPARE
> Dump content: kernel pages
> Dump device: /dev/zvol/dsk/zones/SPARE (dedicated)
>
> In "zpool iostat 1" I can see that about 200 MB of data is written:
>
>                capacity     operations    bandwidth
> pool        alloc   free   read  write   read  write
> ----------  -----  -----  -----  -----  -----  -----
> zones       22.6G   453G      0      0      0      0
> zones       27.3G   449G      0  7.54K      0  18.9M
> zones       49.5G   427G      0  33.6K      0  89.3M
> zones       71.7G   404G      0  34.6K      0  86.2M
> zones       72.6G   403G      0  1.39K      0  3.75M
> zones       72.6G   403G      0      0      0      0
>
> That must be the allocation metadata only, since this is much less than
> the 50G, but still a noticeable amount of data. And we can actually see
> that the full 50G have been pre-allocated:
>
> [root@nuc6 ~]# zfs get referenced zones/SPARE
> NAME         PROPERTY    VALUE  SOURCE
> zones/SPARE  referenced  50.0G  -
>
> Now I have exactly what I want: a nailed-down allocation of
> 50G of blocks that have never been written to.
> I’d like to keep that zvol in this state indefinitely.
> Only problem: as soon as I change dumpadm to dump
> to another device (or none), this goes away again.
>
> [root@nuc6 ~]# dumpadm -d none
> [root@nuc6 ~]# zfs get referenced zones/SPARE
> NAME         PROPERTY    VALUE  SOURCE
> zones/SPARE  referenced  9K     -
>
> Back to square one! BTW, the amount of data written for the de-allocation
> is much less:
>
>                capacity     operations    bandwidth
> pool        alloc   free   read  write   read  write
> ----------  -----  -----  -----  -----  -----  -----
> zones       72.6G   403G      0      0      0      0
> zones       72.6G   403G      0    101      0   149K
> zones       22.6G   453G      0    529      0  3.03M
> zones       22.6G   453G      0      0      0      0
>
> So my question is: can I somehow keep the zvol in the pre-allocated
> state, even when I do not use it as a dump device?
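>
> The only workaround I can think of so far is to leave dumpadm pointed
> at the zvol, so the allocation stays intact, and merely disable the
> automatic savecore on reboot (assuming -n is sufficient for that):
>
> dumpadm -n -d /dev/zvol/dsk/zones/SPARE   # keep the dump device, skip savecore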
>
> While we are at it: if I DID use it as a dump device, will a de-allocation
> and re-allocation occur on each reboot, or will the allocation remain intact?
> Can I somehow get a list of blocks allocated for the zvol via zdb?
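>
> Something like this is what I have in mind (a sketch, assuming the
> zvol data lives in object 1 of the dataset, so that -ddddd would dump
> its indirect blocks and their DVAs):
>
> zdb -ddddd zones/SPARE 1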
>
> Thanks.
>
> Cheers
> Dirk
Have you seen this?
https://www.thomas-krenn.com/en/wiki/SSD_Over-provisioning_using_hdparm
This seems like it would address your original goal; I'd love to know how
to do the same thing without having to first boot a Linux distribution.
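
For reference, the approach from that article boils down to clipping the
drive with a host protected area via hdparm (a sketch; device name and
sector count are placeholders):

hdparm -N /dev/sdb             # show current and native max sector count
hdparm -Np468862128 /dev/sdb   # permanently lower the max, over-provisioning the rest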