There is a ZFS ocf:heartbeat resource agent, which lets you import/export and
fail over a zpool with PCS.
https://github.com/ClusterLabs/resource-agents/blob/master/heartbeat/ZFS
A ZFS-based NAS guide would be a good reference, except that it explains
creating a NAS (exporting with NFS) rather than Lustre.
1. "zpool list" shows the zpool size as the sum of all physical drives.
When you create a raidz2 volume (equivalent to RAID 6) from 10 x 6 TB
drives, the zpool size comes to 60 TB (about 54 TiB), while the usable space
reported by "lfs df" is 8 x 6 TB = 48 TB (about 42 TiB); "lfs df -h" will
also show TiB sizes.
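The arithmetic behind those numbers can be sketched as follows. The drive
counts come from the thread; the ~1/32 "slop" reservation is an assumption
based on ZFS's default space accounting, not something stated above:

```shell
#!/bin/sh
# Pool layout from the thread: 10 x 6 TB drives in raidz2 (2 parity drives).
drives=10; size_tb=6; parity=2

raw_tb=$((drives * size_tb))                  # "zpool list" raw size: 60 TB
usable_tb=$(( (drives - parity) * size_tb ))  # data capacity: 48 TB

# Convert decimal TB to binary TiB (1 TiB = 2^40 bytes) and apply the
# default ~1/32 ZFS slop reservation (an assumption; see spa_slop_shift).
awk -v raw="$raw_tb" -v use="$usable_tb" 'BEGIN {
    tib = 2^40 / 1e12                         # ~1.0995 TB per TiB
    printf "raw:    %d TB (%.1f TiB)\n", raw, raw / tib
    printf "usable: %d TB (%.1f TiB)\n", use, use / tib
    printf "after slop: %.1f TiB\n", (use / tib) * 31 / 32
}'
```

This reproduces both rough figures above: 60 TB comes out near 54 TiB, and
48 TB lands near 42 TiB once the slop reservation is subtracted.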
Thank you Jeff. I got the answer for this: the variation comes from ZFS
rather than Lustre, because of the parity being counted.
But the metadata should occupy the 60% for inode creation, which is not
happening in ZFS compared with ext4 ldiskfs.
Thanks,
ANS
On Tue, Jan 1, 2019 at 1:05 PM
On Tue, Jan 01, 2019 at 01:05:22PM +0530, ANS wrote:
>So what could be the reason for this variation of the size.
With our ZFS 0.7.9 + Lustre 2.10.6, the "lfs df" numbers seem to be the
same as those from "zfs list" (not "zpool list"),
so I think your question is more about ZFS than Lustre.
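A rough sketch of the relationship between the two commands, assuming
"zpool list" reports raw raidz capacity and "zfs list" deflates it by the
data-to-total disk ratio (the 54.5 TiB input is illustrative, not from the
thread):

```shell
#!/bin/sh
# "zpool list" shows raw vdev capacity; "zfs list" (and "lfs df") show it
# after raidz parity deflation. Rough sketch for a 10-disk raidz2 vdev.
awk 'BEGIN {
    zpool_size_tib = 54.5      # illustrative "zpool list" SIZE
    data = 8; total = 10       # data disks vs total disks in the raidz2 vdev
    printf "expected zfs list size: %.1f TiB\n", zpool_size_tib * data / total
}'
```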
Thank you Jeff. I have created the Lustre filesystem on ZFS freshly and
nothing else has access to it. When it is mounted on a client, it shows
around 40 TB of variation from the actual space.
So what could be the reason for this variation of the size.
Thanks,
ANS
On Tue, Jan 1, 2019 at 12:21 PM Jeff
Very forward versions...especially on ZFS.
You build OST volumes in a pool. If no other volumes are defined in a pool,
then 100% of that pool will be available for the OST volume, but the way ZFS
works, the capacity doesn’t really belong to the OST volume until blocks are
allocated for writes.
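Jeff’s point about pooled free space can be illustrated with a toy
calculation (the dataset names and numbers here are made up): every dataset
in a pool reports the pool’s remaining free space as its AVAIL, so one
dataset’s writes shrink what the others report.

```shell
#!/bin/sh
# Toy model: datasets in one zpool share free space; each dataset's AVAIL is
# the pool's free space minus everyone's allocations. Numbers are invented.
awk 'BEGIN {
    pool_free = 48; ost0_used = 0; ost1_used = 0   # TB
    printf "ost0 avail: %d TB\n", pool_free - ost0_used - ost1_used
    ost1_used = 10                  # another dataset in the pool writes 10 TB
    printf "ost0 avail: %d TB\n", pool_free - ost0_used - ost1_used
}'
```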
Thanks Jeff. Currently I am using:
modinfo zfs | grep version
version:0.8.0-rc2
rhelversion:7.4
lfs --version
lfs 2.12.0
And this is a fresh install. So is there any way to show that the complete
zpool LUN has been allocated to Lustre alone?
Thanks,
ANS
On Tue, Jan 1,
ANS,
Lustre on top of ZFS has to estimate capacities, and the estimate is fairly
far off when the OSTs are new and empty. As objects are written to the OSTs
and capacity is consumed, the capacity estimate becomes more accurate. At
the beginning it’s so far off that it can look like an error.
What version are you running?