Re: [lustre-discuss] dkms-2.8.6 breaks installation of lustre-zfs-dkms-2.12.7-1.el7.noarch
Thanks a lot, and thanks for the corrections. I never use more than one dkms module built for different kernel versions, and I usually build the Lustre dkms module against the currently running kernel; but yes, your fixes correctly address the issues you mentioned, and they are a more general approach.

On 10/21/21 11:38 AM, Franke, Knut wrote:
> Hi,
>
> On Wednesday, 13.10.2021 at 16:06 -0700, Riccardo Veraldi wrote:
>> This is my patch to make things work and build the lustre-dkms rpm
>
> Thank you! I just ran into the exact same problem. Two comments on the patch:
>
>> - ZFS_VERSION=$(dkms status -m zfs -k $3 -a $5 | awk -F', ' '{print $2; exit 0}' | grep -v ': added$')
>> + ZFS_VERSION=$(dkms status -m zfs | awk ' { print $1 } ' | sed -e 's/zfs\///' -e 's/,//')
>
> This produces an incorrect result if the dkms module is already built for multiple kernel versions. I would suggest picking the largest ZFS version for simplicity's sake:
>
> + ZFS_VERSION=$(dkms status -m zfs | awk ' { print $1 } ' | sed -e 's/zfs\///' -e 's/,//' | sort -V | tail -n1)
>
> Secondly,
>
>> + SERVER="--enable-server $LDISKFS \
>> +- --with-linux=$4 --with-linux-obj=$4 \
>> +- --with-spl=$6/spl-${ZFS_VERSION} \
>> +- --with-spl-obj=$7/spl/${ZFS_VERSION}/$3/$5 \
>> +- --with-zfs=$6/zfs-${ZFS_VERSION} \
>> +- --with-zfs-obj=$7/zfs/${ZFS_VERSION}/$3/$5"
>> ++ --with-zfs=/usr/src/zfs-${ZFS_VERSION} \
>> ++ --with-zfs-obj=/var/lib/dkms/zfs/${ZFS_VERSION}/$3/$5"
>
> This fails if we're building for a newly installed kernel we haven't rebooted into yet (or rather, for any kernel version other than the one that is currently booted). Also, we might want to keep open the possibility of building for non-x86 that was present in the original (though I don't know whether Lustre even supports non-x86). So:
>
> + --with-zfs-obj=/var/lib/dkms/zfs/${ZFS_VERSION}/$3/$5"
>
> To be honest, I don't understand why this second block of changes is necessary at all, but I currently don't have the time to do any more experiments.
Cheers,
Knut

___
lustre-discuss mailing list
lustre-discuss@lists.lustre.org
http://lists.lustre.org/listinfo.cgi/lustre-discuss-lustre.org
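[Editor's note: the version-picking pipeline discussed above can be checked in isolation. A minimal sketch, feeding simulated `dkms status -m zfs` output (the version and kernel strings are made up) through the suggested commands to show that `sort -V | tail -n1` selects the highest ZFS version when the module is built for several kernels:]

```shell
# Simulated `dkms status -m zfs` output for a module built against two
# kernels (version and kernel strings here are hypothetical):
dkms_output='zfs/2.0.7, 3.10.0-1160.25.1.el7.x86_64, x86_64: installed
zfs/2.1.1, 3.10.0-1160.42.2.el7.x86_64, x86_64: installed'

# The suggested pipeline: take the first field, strip the "zfs/" prefix
# and trailing comma, then pick the largest version via version-aware sort.
ZFS_VERSION=$(printf '%s\n' "$dkms_output" \
    | awk '{ print $1 }' \
    | sed -e 's/zfs\///' -e 's/,//' \
    | sort -V | tail -n1)
echo "$ZFS_VERSION"    # prints 2.1.1
```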
Re: [lustre-discuss] Lustre Client on Ubuntu 20.04
Hi,

Thanks for the info. Our server side will be upgraded to 2.12.7 LTS in a few months, so we are looking for a more modern OS than Red Hat 6.7! Does the 2.14 client work with the 2.12.7 LTS... and even with 2.7?

Many thanks,

Emyr James
Head of IT
CRG - Centre for Genomic Regulation
C/ Dr. Aiguader, 88
Edif. PRBB
08003 Barcelona, Spain
Phone Ext: #1098

From: Patrick Farrell
Sent: 21 October 2021 22:57
To: Emyr James; lustre-discuss@lists.lustre.org
Subject: Re: Lustre Client on Ubuntu 20.04

Emyr,

2.10.8 is a fairly old version, and I am pretty sure it won't build for Ubuntu 20.04. The most recent kernel it is known to build against is Linux 4.4.x, and Ubuntu 20.04 ships at least 5.4.x. The only version that officially supports Ubuntu 20.04 is Lustre 2.14 (and the upcoming 2.15).

For Ubuntu builds, the instructions here give the basic steps:
https://wiki.lustre.org/Compiling_Lustre

The other sections have instructions that will help you with adding in MOFED. Just bear in mind the comments about version compatibility.

-Patrick

From: lustre-discuss on behalf of Emyr James
Sent: Thursday, October 21, 2021 3:28 PM
To: lustre-discuss@lists.lustre.org
Subject: [lustre-discuss] Lustre Client on Ubuntu 20.04

Dear all,

We currently have a Lustre 2.7 system running on Red Hat 6.7. The 2.10.8 client has been tested on Red Hat 6.7 and works fine (using Mellanox OFED). I'd like to get a 2.10.8 client running on Ubuntu 20.04 that is compatible with Mellanox OFED. I haven't found any instructions on how to build just the client side on Ubuntu. Does anyone have instructions, or can anyone point me to a howto somewhere?

Many thanks,

Emyr James
Head of IT
CRG - Centre for Genomic Regulation
C/ Dr. Aiguader, 88
Edif. PRBB
08003 Barcelona, Spain
Phone Ext: #1098
[lustre-discuss] Lustre Client on Ubuntu 20.04
Dear all,

We currently have a Lustre 2.7 system running on Red Hat 6.7. The 2.10.8 client has been tested on Red Hat 6.7 and works fine (using Mellanox OFED). I'd like to get a 2.10.8 client running on Ubuntu 20.04 that is compatible with Mellanox OFED. I haven't found any instructions on how to build just the client side on Ubuntu. Does anyone have instructions, or can anyone point me to a howto somewhere?

Many thanks,

Emyr James
Head of IT
CRG - Centre for Genomic Regulation
C/ Dr. Aiguader, 88
Edif. PRBB
08003 Barcelona, Spain
Phone Ext: #1098
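[Editor's note: a client-only build on Ubuntu generally follows the wiki steps linked in the replies. The following is an untested sketch only; the package list and the checked-out tag are assumptions, and https://wiki.lustre.org/Compiling_Lustre remains the authoritative reference. Note that per the replies, only Lustre 2.14+ officially supports Ubuntu 20.04, so 2.10.8 is unlikely to build there.]

```shell
# Untested sketch of a client-only Lustre build on Ubuntu 20.04.
# Package names and the tag below are assumptions; see
# https://wiki.lustre.org/Compiling_Lustre for authoritative steps.
sudo apt-get install build-essential libtool pkg-config \
    linux-headers-$(uname -r) libyaml-dev zlib1g-dev libssl-dev
git clone git://git.whamcloud.com/fs/lustre-release.git
cd lustre-release
git checkout 2.14.0           # 2.14+ is the first release supporting 20.04
sh autogen.sh
./configure --disable-server  # build the client only
make debs                     # produces installable .deb packages
```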
[lustre-discuss] project quota way off
We have a weird problem with project quotas. We have a project contained in a specific directory. A du of that directory returns 71T; however, lfs project shows only 1.8 TB used.

Just to make sure everything was assigned correctly, we ran "lfs project -p -s -r" on the directory to try to recalculate. That did not change any of the numbers.

Any ideas as to why these numbers could be so different? Thanks in advance.

Heath
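[Editor's note: two checks that may help narrow down such a mismatch; a hedged sketch only, where the project ID (1001), paths, and mount point are hypothetical, and project quota support is assumed to be enabled on the filesystem:]

```shell
# Report files under the directory whose project ID does NOT match the
# expected one (-c = check consistency, -r = recursive, -p = project ID).
# ID 1001 and the paths are placeholders.
lfs project -c -r -p 1001 /lustre/projects/myproj

# Compare the directory walk against the quota accounting for the project:
du -sh /lustre/projects/myproj
lfs quota -h -p 1001 /lustre
```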
Re: [lustre-discuss] dkms-2.8.6 breaks installation of lustre-zfs-dkms-2.12.7-1.el7.noarch
Hi,

On Wednesday, 13.10.2021 at 16:06 -0700, Riccardo Veraldi wrote:
> This is my patch to make things work and build the lustre-dkms rpm

Thank you! I just ran into the exact same problem. Two comments on the patch:

> - ZFS_VERSION=$(dkms status -m zfs -k $3 -a $5 | awk -F', ' '{print $2; exit 0}' | grep -v ': added$')
> + ZFS_VERSION=$(dkms status -m zfs | awk ' { print $1 } ' | sed -e 's/zfs\///' -e 's/,//')

This produces an incorrect result if the dkms module is already built for multiple kernel versions. I would suggest picking the largest ZFS version for simplicity's sake:

+ ZFS_VERSION=$(dkms status -m zfs | awk ' { print $1 } ' | sed -e 's/zfs\///' -e 's/,//' | sort -V | tail -n1)

Secondly,

> + SERVER="--enable-server $LDISKFS \
> +- --with-linux=$4 --with-linux-obj=$4 \
> +- --with-spl=$6/spl-${ZFS_VERSION} \
> +- --with-spl-obj=$7/spl/${ZFS_VERSION}/$3/$5 \
> +- --with-zfs=$6/zfs-${ZFS_VERSION} \
> +- --with-zfs-obj=$7/zfs/${ZFS_VERSION}/$3/$5"
> ++ --with-zfs=/usr/src/zfs-${ZFS_VERSION} \
> ++ --with-zfs-obj=/var/lib/dkms/zfs/${ZFS_VERSION}/$3/$5"

This fails if we're building for a newly installed kernel we haven't rebooted into yet (or rather, for any kernel version other than the one that is currently booted). Also, we might want to keep open the possibility of building for non-x86 that was present in the original (though I don't know whether Lustre even supports non-x86). So:

+ --with-zfs-obj=/var/lib/dkms/zfs/${ZFS_VERSION}/$3/$5"

To be honest, I don't understand why this second block of changes is necessary at all, but I currently don't have the time to do any more experiments.

Cheers,
Knut