My system: Debian 11, kernel version 5.10.0-13-amd64; I have the
following source code:
# ll /usr/src/
total 117916
drwxr-xr-x 2 root root 4096 Aug 21 09:19 linux-config-5.10/
drwxr-xr-x 4 root root 4096 Jul 25  2022 linux-headers-5.10.0-12-amd64/
drwxr-xr-x 4 root root 4096
I have a Lustre 2.15.3 cluster and need to add Debian 11 and 12 clients. This
is what I've done so far:
# git clone git://git.whamcloud.com/fs/lustre-release.git
# git checkout 2.15.3
# sh autogen.sh
# ./configure --disable-server
But make fails:
...
make[3]: Entering directory
le-gss --disable-gss-keyring --disable-snmp --enable-modules
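One thing worth checking before digging into the make error: the listing above shows the running kernel is 5.10.0-13-amd64, but only linux-headers-5.10.0-12-amd64 is present under /usr/src, so the build may be picking up headers for the wrong kernel. A possible fix, sketched (package names assume stock Debian; `--with-linux` points configure at an explicit header tree):

```
# apt-get install linux-headers-$(uname -r)
# ./configure --disable-server --with-linux=/usr/src/linux-headers-$(uname -r)
# make debs
```

(`make debs` produces Debian packages in recent lustre-release trees; if that target is absent in your checkout, plain `make` should still build the modules.)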
I pushed Lustre fixes for Debian's 6.1 kernel (client):
LU-17161 build: Avoid fortify_memset in OBD_FREE_PTR
LU-17160 build: Use PyConfig_InitPythonConfig 3.11 and later
Regards,
Shaun
On 9/29/23, 11:00 PM, "lustre-discuss on behalf of Jan Andersen" wrote:
Hi Shaun,
Thank you for your reply. What you say about Debian 12 is a bit worrying:
"--disable-ldiskfs" - I use ldiskfs on the servers, I hope this isn't relevant
on the clients?
/jan
On 30/09/2023 09
Hi Rick,
Very strange - when I started the vm this morning, 'modprobe lnet'
didn't return an error - and it seems to have loaded the module:
[root@rocky8 ~]# lsmod | grep lnet
lnet 704512 0
libcfs                266240  1 lnet
sunrpc                577536  2 lnet
Looking at
,osd_ldiskfs,lquota,lfsck
sunrpc                577536  2 lnet
But it only listens on tcp6, which I don't use - is there a way to force
it to use tcp4?
[root@mds ~]# netstat -nap | grep 988
tcp6       0      0 :::988       :::*       LISTEN      -
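One possible approach to the tcp6-only listener (a sketch, not a confirmed fix; the interface name eth0 is an assumption - substitute your actual NIC) is to configure LNet explicitly on an IPv4 interface instead of letting it autodetect:

```
[root@mds ~]# lnetctl lnet configure
[root@mds ~]# lnetctl net add --net tcp --if eth0
[root@mds ~]# lnetctl net show
```

If the node uses /etc/modprobe.d options for lnet (e.g. `options lnet networks=tcp0(eth0)`), that setting takes effect at module load and should be kept consistent with the lnetctl configuration.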
/jan
On 27/09/2023 10:15, Jan Andersen wrote:
—Jeff
On Wed, Sep 27, 2023 at 05:35 Jan Andersen <j...@comind.io> wrote:
However, it is still timing out when I try to mount on the oss. This is
the kernel module:
[root@mds ~]# lsmod | grep lnet
lnet 704512 7 mgs,obdclass,osp,ptlrpc,mgc,kso
Hi,
I've built and installed lustre on two VirtualBoxes running Rocky 8.8
and formatted one as the MGS/MDS and the other as OSS, following a
presentation from Oak Ridge National Laboratory: "Creating a Lustre Test
System from Source with Virtual Machines" (sorry, no link; it was a
while ago).
Hi,
I've just successfully built the lustre 2.15.3 client on Debian 11 and need to
do the same on Debian 12; however, configure fails with:
checking if Linux kernel was built with CONFIG_FHANDLE in or as module... no
configure: error:
Lustre fid handling requires that CONFIG_FHANDLE is
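To tell whether the Debian 12 kernel genuinely lacks CONFIG_FHANDLE or whether configure is probing the wrong source tree, the running kernel's config can be checked directly (a sketch; the path assumes a stock Debian kernel with its config installed under /boot):

```
# grep CONFIG_FHANDLE /boot/config-$(uname -r)
```

Stock Debian kernels normally enable this option, so if the grep shows it set but configure still reports "no", the likelier culprit is configure testing against a header tree that does not match the running kernel (worth re-running with an explicit `--with-linux=` path).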
I have come a bit further with this problem - it seems the lnet module
can't load:
[root@rocky8 lustre-release]# depmod lnet
depmod: ERROR: Bad version passed lnet
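Note that depmod's positional argument is a kernel *version*, not a module name - `depmod lnet` passes the string "lnet" as a version, which is exactly why it reports "Bad version passed lnet". The usual invocation after installing new modules is simply:

```
[root@rocky8 ~]# depmod -a
[root@rocky8 ~]# modprobe lnet
```

`depmod -a` regenerates the module dependency map for the running kernel, after which modprobe can resolve lnet and its dependencies.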
I deleted the VMs and reinstalled Rocky 8.8, then built lustre 2.15.3
and installed it, everything without any error messages. I
I'm having some trouble installing lustre - this is on Rocky 8.8. I
downloaded the latest (?) source:
git clone git://git.whamcloud.com/fs/lustre-release.git
and I managed to compile and create the RPMs:
make rpms
I now have a directory full of rpm files:
[root@rocky8 lustre-release]# ls -1
I have a Lustre 2.15.3 cluster, and now I need to add a Debian-based client.
I downloaded and configured the source:
# sh autogen.sh
#
make[3]: Entering directory '/usr/src/linux-headers-5.10.0-13-amd64'
CC [M] /root/Downloads/linux/lustre-release/lnet/klnds/o2iblnd/o2iblnd.o
In file
I have installed Rocky 8.8 on a new server (Dell PowerEdge R640):
[root@mds 4.18.0-513.9.1.el8_9.x86_64]# cat /etc/*release*
Rocky Linux release 8.8 (Green Obsidian)
NAME="Rocky Linux"
VERSION="8.8 (Green Obsidian)"
ID="rocky"
ID_LIKE="rhel centos fedora"
VERSION_ID="8.8"
I have finally managed to build the lustre rpms, but when I try to install them
with:
dnf install ./*.rpm
I get a list of errors like
... nothing provides ldiskfsprogs >= 1.44.3.wc1 ...
In a previous communication I was advised that:
You may need to add ldiskfsprogs rpm repo and enable ha
that?
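For the `ldiskfsprogs >= 1.44.3.wc1` dependency, the Whamcloud e2fsprogs builds provide it on el8; one way to add the repo is a file along these lines (a sketch - the baseurl pattern is taken from downloads.whamcloud.com, but verify the exact path for your release before enabling it):

```
# /etc/yum.repos.d/e2fsprogs-wc.repo
[e2fsprogs-wc]
name=Whamcloud e2fsprogs
baseurl=https://downloads.whamcloud.com/public/e2fsprogs/latest/el8/
enabled=1
gpgcheck=0
```

After that, `dnf install ./*.rpm` should be able to resolve the ldiskfsprogs requirement from the new repo.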
/jan
On 03/01/2024 02:17, Xinliang Liu wrote:
On Wed, 3 Jan 2024 at 10:08, Xinliang Liu <xinliang@linaro.org> wrote:
Hi Jan,
On Tue, 2 Jan 2024 at 22:29, Jan Andersen <j...@comind.io> wrote:
I have installed Rocky 8.8 on a new server (Dell Po
I am running Rocky 8.9 (uname -r: 4.18.0-513.9.1.el8_9.x86_64) and have,
apparently successfully, built the lustre rpms:
[root@mds lustre-release]# ll *2.15.4-1.el8.x86_64.rpm
-rw-r--r--. 1 root root 4640828 Jan 10 09:19
kmod-lustre-2.15.4-1.el8.x86_64.rpm
-rw-r--r--. 1 root root 42524976 Jan
Are there any tools for coordinating the start and shutdown of lustre
filesystem, so that the OSS systems don't attempt to mount disks before the MGT
and MDT are online?
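There is no single official orchestrator for this; one common low-tech approach is to gate each OSS mount on the MGS answering an LNet ping before mounting the OST. A sketch (the MGS NID and device/mount paths are assumptions for illustration):

```
#!/bin/sh
# wait-for-mgs.sh - block until the MGS answers an LNet ping, then mount the OST
MGS_NID="192.168.1.10@tcp"
until lctl ping "$MGS_NID" >/dev/null 2>&1; do
    sleep 5
done
mount -t lustre /dev/sdb /ost
```

Pacemaker/corosync resource ordering is the heavier-weight alternative typically used in production HA Lustre deployments.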
___
lustre-discuss mailing list
lustre-discuss@lists.lustre.org
I have the beginnings of a lustre filesystem, with a server, mds,
hosting the MGS and MDS, and a storage node, oss1. The disks, /mgt and
/mdt on mds and /ost on oss1 mount, apparently without error.
I have set up a client, pxe, which mounts /lustre:
root@node080027eb24b8:~# mount -t lustre
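For reference, a typical client mount of such a filesystem looks like the following (a sketch; the MGS hostname `mds` and fsname `lustre` are taken from the setup described above, the mount point is an assumption):

```
root@node080027eb24b8:~# mount -t lustre mds@tcp:/lustre /lustre
```

The first argument is the MGS NID plus the filesystem name, and running `mount -t lustre` with no arguments then lists the active Lustre mounts.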