ZIL pre-allocates at the block level, so think along the lines of 12k or 132k.
— richard
> On Jun 23, 2017, at 11:30 AM, Günther Alka wrote:
>
> hello Richard
>
> I can follow that the Zil does not add more fragmentation to the free space
> but is this effect relevant?
hello Richard
I can follow that the ZIL does not add more fragmentation to the free
space, but is this effect relevant?
If a ZIL pre-allocates, say, 4G and the remaining fragmented pool size for
regular writes is 12T, does that matter in practice?
Gea
On 23.06.2017 at 19:30, Richard Elling wrote:
A slog helps fragmentation because the space for the ZIL is pre-allocated based
on a prediction of how big the write will be. The pre-allocated space includes
a physical-block-sized chain block for the ZIL. An 8k write can allocate 12k
for the ZIL entry, which is freed when the txg commits. Thus, a slog keeps
these short-lived allocations off the main pool entirely.
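A back-of-envelope sketch of that pre-allocation arithmetic. The 4k physical block size and the single chain block per entry are assumptions for illustration, chosen so the numbers line up with the 12k and 132k figures mentioned above:

```python
def zil_alloc_size(write_bytes, phys_block=4096):
    """Sketch: a ZIL entry pre-allocates the write rounded up to
    physical blocks, plus one physical-block-sized chain block
    (assumed layout, based on the description in the thread)."""
    blocks = -(-write_bytes // phys_block)  # ceiling division
    return (blocks + 1) * phys_block

print(zil_alloc_size(8 * 1024))    # 8k write  -> 12288  (12k)
print(zil_alloc_size(128 * 1024))  # 128k write -> 135168 (132k)
```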
A ZIL or, better, a dedicated Slog device will not help here, as it is not a
write cache but a log device. It is only there to commit every written
data block and put it onto stable storage. It is read only after a
crash, to redo a committed write that would otherwise be missing.
All writes, no matter whether sync or not, are collected in the RAM-based
write cache and flushed to the pool with the next transaction group.
On June 23, 2017 4:13:52 PM GMT+02:00, Artyom Zhandarovsky
wrote:
>Is there any way to decrease fragmentation of dr_tank ?
>
>zpool list (Sum of RAW disk capacity without redundancy counted)
Yes, but:
If you increase your pool by adding a new vdev, your current data are
not auto-rebalanced. This will only happen over time with new or
modified data.
If you want the best performance, you must copy over the current data,
e.g. by renaming a filesystem, replicating it to the former name, and
then destroying the renamed original.
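A minimal sketch of that rename-and-replicate step. The pool name `tank` and filesystem `tank/data` are placeholders, and this assumes no one is writing to the filesystem while it is copied:

```shell
# Sketch only: rebalance data onto new vdevs by rewriting it
zfs rename tank/data tank/data_old                        # move the original aside
zfs snapshot tank/data_old@migrate                        # snapshot to replicate from
zfs send tank/data_old@migrate | zfs receive tank/data    # rewrite under the old name
zfs destroy -r tank/data_old                              # drop the old, fragmented copy
```

The send/receive rewrites every block, so the new copy is allocated across all vdevs, including the freshly added one.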
So basically I just need to add more drives... ?
2017-06-23 18:09 GMT+03:00 Guenther Alka :
> The fragmentation info does not describe the fragmentation of the data on
> the pool but the fragmentation of the free space.
The fragmentation info does not describe the fragmentation of the data
on the pool but the fragmentation of the free space. A high fragmentation
value will result in high data fragmentation only when you write or
modify data.
https://utcc.utoronto.ca/~cks/space/blog/solaris/ZFSZpoolFragmentationM
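For reference, the value being discussed is the FRAG column of `zpool list`, which can be shown explicitly with the `-o` property list:

```shell
# FRAG reports free-space fragmentation, not data fragmentation
zpool list -o name,size,allocated,free,fragmentation,capacity
```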
> Is there any way to decrease fragmentation of dr_tank ?
>
> zpool list (Sum of RAW disk capacity without redundancy counted)
>
> NAME     SIZE  ALLOC  FREE  EXPANDSZ  FRAG  CAP  DEDUP  HEALTH  ALTROOT
>
> dr_slow  9.
disk errors: none
-
CAP Alert
-
Is there any way to decrease fragmentation of dr_tank ?
--
zpool list (Sum of RAW disk capacity without redundancy counted)
--
NAME     SIZE  ALLOC  FREE  EXPANDSZ  FRAG  CAP
With ESXi 6.0 I have had NFS problems as well. You should use at least
6.0U2 (or 6.5.0d with the 6.5 line).
Another problem may be timeouts. ZFS will wait longer for a failing disk than
ESXi will wait for NFS. What you can do is reduce the disk timeout in
/etc/system with set sd:sd_io_time=0xF (=15s; the default is 0x3C, i.e. 60s).
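The tuning above would look like this in /etc/system (a sketch of the setting already named in the thread; a reboot is required for it to take effect):

```
* Reduce the sd driver I/O timeout from the 60s default to 15s
set sd:sd_io_time=0xF
```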
Hi,
We have a hyperconverged setup.
ESXi 6.0 -> OmniOS VM -> storage passthrough. The 10Gb NICs are configured
in Active/Standby. For NFS I use a dedicated /24 VLAN.
I wonder if ZFS replication or snapshotting could be the reason for the
problems we are seeing. The system fails at night.