Yao,

Glad you have the correct modules working now. Can you explain why you are
employing the virtual disk driver with osd-zfs?

Capacities with Lustre-on-ZFS are estimates, since ZFS features such as
compression can change the effective capacity. As the ZFS datasets backing
the Lustre targets are used and filled, their capacities will be reported
more accurately as objects are allocated to storage.
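If it helps, this is one way to compare what ZFS itself reports against
what Lustre exports (a sketch; the pool/dataset names are taken from the
mkfs output in your mail below):

```
# On the OSS: raw pool capacity as ZFS sees it
zpool list lustre-ost0

# Space accounting for the dataset backing the OST, plus the
# compression ratio that makes Lustre's numbers estimates
zfs list -o name,used,available,referenced lustre-ost0/ost0
zfs get compressratio lustre-ost0/ost0
```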

Running `lfs df -h` from a Lustre client will give a better view of
Lustre capacities from the client's perspective.
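For example, from a client (assuming /mnt is your mount point as in your
df output):

```
# Per-target and filesystem-wide capacity as the client sees it
lfs df -h /mnt

# Inode accounting tells a similar story
lfs df -i /mnt
```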

Your use of /dev/vda may be adding obscurity, and I'm not sure why you
would be adding it.
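For what it's worth: with --backfstype=zfs, anything after the
pool/dataset argument is handed to "zpool create" as the vdev list, so
pointing the MGS, MDT, and OST all at /dev/vda2 asks three pools to own
the same partition. A hypothetical sketch with one device per pool
(/dev/vdb, /dev/vdc, /dev/vdd are placeholder names):

```
# One vdev per pool; device names below are placeholders.
mkfs.lustre --mgs --backfstype=zfs --fsname=lustre \
    lustre-mgs/mgs /dev/vdb
mkfs.lustre --mdt --index=0 --backfstype=zfs --fsname=lustre \
    --mgsnode=10.34.0.103@tcp lustre-mdt0/mdt0 /dev/vdc
mkfs.lustre --ost --index=0 --backfstype=zfs --fsname=lustre \
    --mgsnode=10.34.0.103@tcp lustre-ost0/ost0 /dev/vdd
```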

--Jeff


On Thu, Jun 29, 2023 at 10:33 PM Yao Weng <wengya...@gmail.com> wrote:

> Hi Jeff:
> Thank you very much! I installed lustre-zfs-dkms and the Lustre client can
> mount the Lustre filesystem. However, the sizes do not add up.
> I have
> - mgs
>
> sudo mkfs.lustre --mgs --reformat --backfstype=zfs --fsname=lustre lustre-mgs/mgs /dev/vda2
>
>
>    Permanent disk data:
>
> Target:     MGS
>
> Index:      unassigned
>
> Lustre FS:  lustre
>
> Mount type: zfs
>
> Flags:      0x64
>
>               (MGS first_time update )
>
> Persistent mount opts:
>
> Parameters:
>
> mkfs_cmd = zpool create -f -O canmount=off lustre-mgs /dev/vda2
>
> mkfs_cmd = zfs create -o canmount=off  lustre-mgs/mgs
>
>   xattr=sa
>
>   dnodesize=auto
>
> Writing lustre-mgs/mgs properties
>
>   lustre:version=1
>
>   lustre:flags=100
>
>   lustre:index=65535
>
>   lustre:fsname=lustre
>
>   lustre:svname=MGS
>
>
> - mdt
>
>  sudo mkfs.lustre --mdt --reformat --backfstype=zfs --fsname=lustre
> --index=0  --mgsnode=10.34.0.103@tcp0 lustre-mdt0/mdt0 /dev/vda2
>
>
>    Permanent disk data:
>
> Target:     lustre:MDT0000
>
> Index:      0
>
> Lustre FS:  lustre
>
> Mount type: zfs
>
> Flags:      0x61
>
>               (MDT first_time update )
>
> Persistent mount opts:
>
> Parameters: mgsnode=10.34.0.103@tcp
>
> mkfs_cmd = zpool create -f -O canmount=off lustre-mdt0 /dev/vda2
>
> mkfs_cmd = zfs create -o canmount=off  lustre-mdt0/mdt0
>
>   xattr=sa
>
>   dnodesize=auto
>
> Writing lustre-mdt0/mdt0 properties
>
>   lustre:mgsnode=10.34.0.103@tcp
>
>   lustre:version=1
>
>   lustre:flags=97
>
>   lustre:index=0
>
>   lustre:fsname=lustre
>
>   lustre:svname=lustre:MDT0000
>
>
> - ost
>
> sudo mkfs.lustre --ost --reformat --backfstype=zfs --fsname=lustre
> --index=0  --mgsnode=10.34.0.103@tcp0 lustre-ost0/ost0 /dev/vda2
>
>
>    Permanent disk data:
>
> Target:     lustre:OST0000
>
> Index:      0
>
> Lustre FS:  lustre
>
> Mount type: zfs
>
> Flags:      0x62
>
>               (OST first_time update )
>
> Persistent mount opts:
>
> Parameters: mgsnode=10.34.0.103@tcp
>
> mkfs_cmd = zpool create -f -O canmount=off lustre-ost0 /dev/vda2
>
> mkfs_cmd = zfs create -o canmount=off  lustre-ost0/ost0
>
>   xattr=sa
>
>   dnodesize=auto
>
>   recordsize=1M
>
> Writing lustre-ost0/ost0 properties
>
>   lustre:mgsnode=10.34.0.103@tcp
>
>   lustre:version=1
>
>   lustre:flags=98
>
>   lustre:index=0
>
>   lustre:fsname=lustre
>
>   lustre:svname=lustre:OST0000
>
>
>
> I have 51G for /dev/vda
>
> df -H /dev/vda2
>
> Filesystem      Size  Used Avail Use% Mounted on
>
> devtmpfs         51G     0   51G   0% /dev
>
> On my client node,
>
> sudo mount -t lustre 10.34.0.103@tcp0:/lustre /mnt
>
> However, the size is only 25M; shouldn't it be 51G?
>
> df -H /mnt
>
> Filesystem               Size  Used Avail Use% Mounted on
>
> 10.34.0.103@tcp:/lustre   25M  3.2M   19M  15% /mnt
>
> Thanks
> Yao
>
> On Wed, Jun 28, 2023 at 12:22 PM Jeff Johnson <
> jeff.john...@aeoncomputing.com> wrote:
>
>> Yao,
>>
>> Either add the required kernel-{devel,debuginfo} packages so the
>> osd-ldiskfs kernel modules can be built against your kernel, or remove the
>> lustre-all-dkms package, replace it with lustre-zfs-dkms, and build
>> ZFS-only Lustre modules.
>>
>> --Jeff
>>
>>
>> On Wed, Jun 28, 2023 at 8:45 AM Yao Weng <wengya...@gmail.com> wrote:
>>
>>> I have error when installing lustre-all-dkms-2.15.3-1.el8.noarch
>>>
>>> Loading new lustre-all-2.15.3 DKMS files...
>>>
>>> Deprecated feature: REMAKE_INITRD (/usr/src/lustre-all-2.15.3/dkms.conf)
>>>
>>> Building for 4.18.0-477.15.1.el8_8.x86_64
>>> 4.18.0-477.10.1.el8_lustre.x86_64
>>>
>>> Building initial module for 4.18.0-477.15.1.el8_8.x86_64
>>>
>>> Deprecated feature: REMAKE_INITRD
>>> (/var/lib/dkms/lustre-all/2.15.3/source/dkms.conf)
>>>
>>> realpath: /var/lib/dkms/spl/2.1.12/source: No such file or directory
>>>
>>> realpath: /var/lib/dkms/spl/kernel-4.18.0-477.15.1.el8_8.x86_64-x86_64:
>>> No such file or directory
>>>
>>> configure: WARNING:
>>>
>>>
>>> Disabling ldiskfs support because complete ext4 source does not exist.
>>>
>>>
>>> If you are building using kernel-devel packages and require ldiskfs
>>>
>>> server support then ensure that the matching kernel-debuginfo-common
>>>
>>> and kernel-debuginfo-common-<arch> packages are installed.
>>>
>>>
>>> awk: fatal: cannot open file
>>> `/var/lib/dkms/lustre-all/2.15.3/build/_lpb/Makefile.compile.lustre' for
>>> reading (No such file or directory)
>>>
>>> ./configure: line 53751: test: too many arguments
>>>
>>> ./configure: line 53755: test: too many arguments
>>>
>>> Error!  Build of osd_ldiskfs.ko failed for:
>>> 4.18.0-477.15.1.el8_8.x86_64 (x86_64)
>>>
>>> Make sure the name of the generated module is correct and at the root of
>>> the
>>>
>>> build directory, or consult make.log in the build directory
>>>
>>> /var/lib/dkms/lustre-all/2.15.3/build for more information.
>>>
>>> warning: %post(lustre-all-dkms-2.15.3-1.el8.noarch) scriptlet failed,
>>> exit status 7
>>>
>>>
>>> Error in POSTIN scriptlet in rpm package lustre-all-dkms
>>>
>>>
>>>
>>> On Wed, Jun 28, 2023 at 9:57 AM Yao Weng <wengya...@gmail.com> wrote:
>>>
>>>> Thanks, Jeff:
>>>> My installation steps are
>>>>
>>>> step 1: set up a local software repository (
>>>> https://wiki.lustre.org/Installing_the_Lustre_Software)
>>>> I download all rpm from
>>>>
>>>> https://downloads.whamcloud.com/public/lustre/lustre-2.15.3/el8.8/server
>>>>
>>>> https://downloads.whamcloud.com/public/lustre/lustre-2.15.3/el8.8/client
>>>>
>>>> https://downloads.whamcloud.com/public/e2fsprogs/1.47.0.wc2/el8
>>>>
>>>> step 2: Install the Lustre e2fsprogs distribution:
>>>>
>>>> sudo yum --nogpgcheck --disablerepo=* --enablerepo=e2fsprogs-wc install
>>>> e2fsprogs
>>>>
>>>>
>>>> step 3: Install EPEL repository support:
>>>>
>>>> sudo yum -y install epel-release
>>>>
>>>>
>>>> step 4: Follow the instructions from the ZFS on Linux project
>>>> <https://openzfs.github.io/openzfs-docs/Getting%20Started/RHEL-based%20distro/index.html>
>>>>  to
>>>> install the ZFS YUM repository definition. Use the DKMS package repository
>>>> (the default)
>>>>
>>>> sudo dnf install https://zfsonlinux.org/epel/zfs-release-2-3$(rpm --eval "%{dist}").noarch.rpm
>>>>
>>>>
>>>> step 5: Install the Lustre-patched kernel packages. Ensure that the
>>>> Lustre repository is picked for the kernel packages, by disabling the OS
>>>> repos:
>>>>
>>>>
>>>> sudo yum --nogpgcheck --disablerepo=base,extras,updates \
>>>>
>>>> --enablerepo=lustre-server install \
>>>>
>>>> kernel \
>>>>
>>>> kernel-devel \
>>>>
>>>> kernel-headers \
>>>>
>>>> kernel-tools \
>>>>
>>>> kernel-tools-libs \
>>>>
>>>> kernel-tools-libs-devel
>>>>
>>>>
>>>> step 6: Generate a persistent hostid on the machine, if one does not
>>>> already exist. This is needed to help protect ZFS zpools against
>>>> simultaneous imports on multiple servers. For example:
>>>>
>>>> hid=`[ -f /etc/hostid ] && od -An -tx /etc/hostid|sed 's/ //g'`
>>>>
>>>> [ "$hid" = `hostid` ] || genhostid
>>>>
>>>>
>>>> step 7: reboot
>>>>
>>>> step 8: install Lustre and ZFS
>>>> sudo yum --skip-broken --nogpgcheck --enablerepo=lustre-server install \
>>>>          lustre-dkms \
>>>>          lustre-osd-zfs-mount \
>>>>          lustre \
>>>>          lustre-resource-agents \
>>>>          zfs
>>>>
>>>> step 9: Load the Lustre and ZFS kernel modules to verify that the
>>>> software has installed correctly
>>>>
>>>> sudo modprobe -v zfs
>>>>
>>>> sudo modprobe -v lustre
>>>>
>>>> On Wed, Jun 28, 2023 at 1:04 AM Jeff Johnson <
>>>> jeff.john...@aeoncomputing.com> wrote:
>>>>
>>>>> Did you install the Lustre server RPMs?
>>>>> Your email lists both server and client repositories.
>>>>>
>>>>> Are you using DKMS? Did you install and build the lustre-zfs-dkms or
>>>>> lustre-all-dkms packages?
>>>>>
>>>>> It doesn't appear that you have any Lustre server kernel modules
>>>>> loaded, which makes me suspect you didn't install or build the server-side
>>>>> RPMs or DKMS trees.
>>>>>
>>>>>
>>>>>
>>>>> On Tue, Jun 27, 2023 at 21:41 Yao Weng via lustre-discuss <
>>>>> lustre-discuss@lists.lustre.org> wrote:
>>>>>
>>>>>> Hi:
>>>>>> I follow https://wiki.lustre.org/Installing_the_Lustre_Software to
>>>>>> install lustre.
>>>>>>
>>>>>> My kernel is
>>>>>>
>>>>>> $ uname -r
>>>>>>
>>>>>> 4.18.0-477.13.1.el8_8.x86_64
>>>>>>
>>>>>> I install
>>>>>>
>>>>>>
>>>>>> https://downloads.whamcloud.com/public/lustre/lustre-2.15.3/el8.8/server
>>>>>>
>>>>>>
>>>>>> https://downloads.whamcloud.com/public/lustre/lustre-2.15.3/el8.8/client
>>>>>>
>>>>>> https://downloads.whamcloud.com/public/e2fsprogs/1.47.0.wc2/el8
>>>>>>
>>>>>>
>>>>>> lsmod | grep lustre
>>>>>>
>>>>>> lustre               1048576  0
>>>>>>
>>>>>> lmv                   204800  1 lustre
>>>>>>
>>>>>> mdc                   282624  1 lustre
>>>>>>
>>>>>> lov                   344064  2 mdc,lustre
>>>>>>
>>>>>> ptlrpc               2490368  7 fld,osc,fid,lov,mdc,lmv,lustre
>>>>>>
>>>>>> obdclass             3633152  8 fld,osc,fid,ptlrpc,lov,mdc,lmv,lustre
>>>>>>
>>>>>> lnet                  704512  7 osc,obdclass,ptlrpc,ksocklnd,lmv,lustre
>>>>>>
>>>>>> libcfs                266240  11 fld,lnet,osc,fid,obdclass,ptlrpc,ksocklnd,lov,mdc,lmv,lustre
>>>>>>
>>>>>> lsmod | grep zfs
>>>>>>
>>>>>> zfs                  3887104  0
>>>>>>
>>>>>> zunicode              335872  1 zfs
>>>>>>
>>>>>> zzstd                 512000  1 zfs
>>>>>>
>>>>>> zlua                  176128  1 zfs
>>>>>>
>>>>>> zavl                   16384  1 zfs
>>>>>>
>>>>>> icp                   319488  1 zfs
>>>>>>
>>>>>> zcommon               102400  2 zfs,icp
>>>>>>
>>>>>> znvpair                90112  2 zfs,zcommon
>>>>>>
>>>>>> spl                   114688  6 zfs,icp,zzstd,znvpair,zcommon,zavl
>>>>>>
>>>>>>
>>>>>> I am able to create mgs/mdt/ost
>>>>>>
>>>>>> But when I try to mount
>>>>>>
>>>>>> sudo mount.lustre lustre-mgs/mgs /lustre/mnt
>>>>>>
>>>>>> mount.lustre: mount lustre-mgs/mgs at /lustre/mnt failed: No such
>>>>>> device
>>>>>>
>>>>>> Are the lustre modules loaded?
>>>>>>
>>>>>>  Check /etc/modprobe.conf and /proc/filesystems
>>>>>>
>>>>>> dmesg gives these error
>>>>>>
>>>>>> [76783.604090] LustreError: 158-c: Can't load module 'osd-zfs'
>>>>>>
>>>>>> [76783.606174] LustreError: 223535:0:(genops.c:361:class_newdev()) OBD: unknown type: osd-zfs
>>>>>>
>>>>>> [76783.607856] LustreError: 223535:0:(obd_config.c:620:class_attach()) Cannot create device MGS-osd of type osd-zfs : -19
>>>>>>
>>>>>> [76783.609805] LustreError: 223535:0:(obd_mount.c:195:lustre_start_simple()) MGS-osd attach error -19
>>>>>>
>>>>>> [76783.611426] LustreError: 223535:0:(obd_mount_server.c:1993:server_fill_super()) Unable to start osd on lustre-mgs/mgs: -19
>>>>>>
>>>>>> [76783.613457] LustreError: 223535:0:(super25.c:183:lustre_fill_super()) llite: Unable to mount <unknown>: rc = -19
>>>>>>
>>>>>>
>>>>>> _______________________________________________
>>>>>> lustre-discuss mailing list
>>>>>> lustre-discuss@lists.lustre.org
>>>>>> http://lists.lustre.org/listinfo.cgi/lustre-discuss-lustre.org
>>>>>>
>>>>> --
>>>>> ------------------------------
>>>>> Jeff Johnson
>>>>> Co-Founder
>>>>> Aeon Computing
>>>>>
>>>>> jeff.john...@aeoncomputing.com
>>>>> www.aeoncomputing.com
>>>>> t: 858-412-3810 x1001   f: 858-412-3845
>>>>> m: 619-204-9061
>>>>>
>>>>> 4170 Morena Boulevard, Suite C - San Diego, CA 92117
>>>>>
>>>>> High-Performance Computing / Lustre Filesystems / Scale-out Storage
>>>>>
>>>>
>>
>> --
>> ------------------------------
>> Jeff Johnson
>> Co-Founder
>> Aeon Computing
>>
>> jeff.john...@aeoncomputing.com
>> www.aeoncomputing.com
>> t: 858-412-3810 x1001   f: 858-412-3845
>> m: 619-204-9061
>>
>> 4170 Morena Boulevard, Suite C - San Diego, CA 92117
>>
>> High-Performance Computing / Lustre Filesystems / Scale-out Storage
>>
>

-- 
------------------------------
Jeff Johnson
Co-Founder
Aeon Computing

jeff.john...@aeoncomputing.com
www.aeoncomputing.com
t: 858-412-3810 x1001   f: 858-412-3845
m: 619-204-9061

4170 Morena Boulevard, Suite C - San Diego, CA 92117

High-Performance Computing / Lustre Filesystems / Scale-out Storage