On June 23, 2017 4:13:52 PM GMT+02:00, Artyom Zhandarovsky <bardi...@gmail.com> wrote:
>disk errors: none
>
>---------
>CAP Alert
>---------
>
>Is there any way to decrease fragmentation of dr_tank ?
>
>------------------------------
>zpool list (Sum of RAW disk capacity without redundancy counted)
>------------------------------
>
>NAME      SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
>dr_slow  9.06T  77.6M  9.06T         -     0%     0%  1.00x  ONLINE  -
>dr_tank  48.9T  35.1T  13.9T         -    23%    71%  1.00x  ONLINE  -
>rpool     272G  42.1G   230G         -    10%    15%  1.00x  ONLINE  -
>
>Real Pool capacity from zfs list
>------------------------------
>
>NAME     USED   AVAIL  MOUNTPOINT  %AVAIL
>dr_slow  7.69T  1.26T  /dr_slow    14%!
>dr_tank  41.6T  6.33T  /dr_tank    13%!
>rpool    45.6G  218G   /rpool      83%

The issue with ZFS fragmentation is that at some point it becomes hard to find 
free space to write into, and to lay down large writes contiguously, so 
performance drops suddenly and noticeably. Reads can suffer as well, especially 
if atime=on is left at its default, since reads then generate access-time 
updates, i.e. extra small writes.
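
If access times are not needed on these datasets, turning atime off removes 
those extra writes. A minimal sketch, assuming the pool name from the report 
above (standard zfs commands):

  # see the current setting
  zfs get atime dr_tank

  # stop reads from generating small access-time writes
  zfs set atime=off dr_tank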

To recover from existing fragmentation you have to free up space: for example, 
zfs-send the datasets to another pool, destroy as much as you can on this one, 
and then send the data back so it lands as large contiguous writes.
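
Roughly like this; the scratch pool, dataset and snapshot names below are only 
illustrative, and doing one dataset at a time keeps plenty of free space on the 
pool while it receives:

  # snapshot and replicate a dataset to a temporary pool
  zfs snapshot -r dr_tank/data@migrate
  zfs send -R dr_tank/data@migrate | zfs receive -F scratch/data

  # free the space on the fragmented pool, then send the data back
  zfs destroy -r dr_tank/data
  zfs snapshot -r scratch/data@return
  zfs send -R scratch/data@return | zfs receive -F dr_tank/data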

To prevent it, a dedicated ZIL (log) device that absorbs writes (in particular 
sync ones, e.g. over NFS) can help. A DDR-based drive (or a mirror of them) with 
battery and flash protection against power loss is a good fit, since it does not 
wear out the way flash would. With such a device, however randomly the writes 
arrive, ZFS does not have to put them on the main media right away, so it can 
coalesce them into larger writes later. This also protects SSD arrays from 
excessive small writes and wear-out, though a badly sized ZIL device can itself 
become a bottleneck there.
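
Adding such a log device is a one-liner; a sketch with placeholder device names 
(use whatever the DDR/NVRAM cards enumerate as):

  # attach a mirrored log (SLOG) vdev to the pool
  zpool add dr_tank log mirror c2t0d0 c2t1d0

  # confirm it appears under a separate "logs" section
  zpool status dr_tank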

Hope this helps,
Jim
--
Typos courtesy of K-9 Mail on my Redmi Android
_______________________________________________
OmniOS-discuss mailing list
OmniOS-discuss@lists.omniti.com
http://lists.omniti.com/mailman/listinfo/omnios-discuss
