[lustre-discuss] Zabbix Lustre template

2023-09-27 Thread David Cohen via lustre-discuss
Hi, I'm looking for a Zabbix Lustre template but couldn't find one. Is anyone aware of such a template and able to share a link? Thanks, David
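No ready-made template surfaces in the thread. As a starting point, Lustre health and per-target stats can be scraped with lctl and exposed through Zabbix agent UserParameters; a minimal sketch (the key names and the config path are illustrative, not from the thread):

    # /etc/zabbix/zabbix_agentd.d/lustre.conf -- hypothetical key names
    # simple health item: returns "healthy" when all is well
    UserParameter=lustre.health,lctl get_param -n health_check
    # flexible item: free space on a given OST, e.g. lustre.ost.kbytesfree[local-OST0000]
    UserParameter=lustre.ost.kbytesfree[*],lctl get_param -n obdfilter.$1.kbytesfree

The same pattern extends to any lctl get_param counter you want to graph.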

[lustre-discuss] Project quota and project quota accounting

2022-11-29 Thread David Cohen via lustre-discuss
Hi, We are running Lustre 2.12.7 (ldiskfs) on both the servers and the clients. lctl get_param osd-*.*.quota_slave.info returns, for all OSTs and the MDT: quota enabled: ugp; space acct: ug. I tried setting a project ID on a client, with no success: chattr -p 1 /storage/test chattr: Operation
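The "space acct: ug" line is the telling part: the backing ldiskfs targets account for user and group usage but not project usage, so project operations fail even though the quota type is ugp. A hedged sketch of enabling project accounting on an existing ldiskfs target (device and mountpoint are placeholders; this assumes a recent enough e2fsprogs, and the target must be offline):

    umount /mnt/ost0                                 # take the target offline first
    tune2fs -O project -Q prjquota /dev/mapper/ost0  # enable project feature + accounting
    mount -t lustre /dev/mapper/ost0 /mnt/ost0
    lctl get_param osd-*.*.quota_slave.info          # "space acct" should now report ugp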

Re: [lustre-discuss] Unable to mount new OST

2021-07-06 Thread David Cohen
at 7:24 AM Jeff Johnson wrote: > What devices are underneath dm-21, and are there any errors in /var/log/messages for those devices (assuming /dev/sdX devices underneath)? Run `ls /sys/block/dm-21/slaves` to see what devices are beneath dm-21.

Re: [lustre-discuss] Unable to mount new OST

2021-07-06 Thread David Cohen
wrote: > Hello David, > On 6 Jul 2021, at 08:34, David Cohen wrote: > Jul 6 07:39:19 oss03 kernel: LDISKFS-fs (dm-21): warning: mounting fs with errors, running e2fsck is recommended. > It looks like the LDISKFS partition is in an inconsistent state
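Acting on that warning generally means taking the OST offline and letting e2fsck repair the ldiskfs metadata; a cautious sketch (device name taken from the log line, mountpoint is a placeholder; ideally use the Lustre-patched e2fsprogs build):

    umount /mnt/ost                # stop serving the OST behind dm-21
    e2fsck -fn /dev/dm-21          # -n: read-only pass, only report problems
    e2fsck -fp /dev/dm-21          # -p: safe automatic repair; -fy for a full fix-up
    mount -t lustre /dev/dm-21 /mnt/ost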

Re: [lustre-discuss] Unable to mount new OST

2021-07-05 Thread David Cohen
of that is a force reboot. Thanks, David. On Tue, Jul 6, 2021 at 8:07 AM Andreas Dilger wrote: > On Jul 5, 2021, at 09:05, David Cohen wrote: > Hi, I'm using Lustre 2.10.5 and recently tried to add a new OST. The OST was formatted with the command below, which other than

[lustre-discuss] Unable to mount new OST

2021-07-05 Thread David Cohen
Hi, I'm using Lustre 2.10.5 and recently tried to add a new OST. The OST was formatted with the command below, which apart from the index is exactly the same one used for all the other OSTs in the system: mkfs.lustre --reformat --mkfsoptions="-t ext4 -T huge" --ost --fsname=local --index=0051
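The archived command is cut off before the MGS NID and the target device; a hedged reconstruction of the usual full form (the mgsnode NID and device path below are placeholders, not from the thread):

    mkfs.lustre --reformat --ost --fsname=local --index=0051 \
        --mgsnode=mgs@tcp \
        --mkfsoptions="-t ext4 -T huge" \
        /dev/mapper/new_ost
    mount -t lustre /dev/mapper/new_ost /mnt/new_ost   # first mount registers the OST with the MGS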

Re: [lustre-discuss] [EXTERNAL] Re: Disk quota exceeded while quota is not filled

2020-08-26 Thread David Cohen

Re: [lustre-discuss] Disk quota exceeded while quota is not filled

2020-08-25 Thread David Cohen
Best, David. On Sun, Aug 16, 2020 at 8:41 AM David Cohen wrote: > Hi, adding some more information. A few months ago the data on the Lustre fs was migrated to new physical storage. After the successful migration, the old OSTs were marked as active=0 (lctl conf_param t
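The truncated command is the standard way to stop new allocations to retired OSTs; a sketch of the usual form (the fsname "technion" appears elsewhere in this thread, but the OST index here is a placeholder since the archive cuts it off):

    # on the MGS: stop new object allocation to a retired OST
    lctl conf_param technion-OST0000.osc.active=0
    # the follow-up raises whether such deactivated OSTs still figure
    # into the quota accounting that triggers "Disk quota exceeded"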

Re: [lustre-discuss] Disk quota exceeded while quota is not filled

2020-08-15 Thread David Cohen
11, 2020 at 7:35 AM David Cohen wrote: > Hi, I'm running Lustre 2.10.5 on the OSS and MDS, and 2.10.7 on the clients. While inode quota on the MDT has worked fine for a while now: lctl conf_param technion.quota.mdt=ugp. When, a few days ago, I turned on quota on the OSTs: lctl conf_param technion.quota.ost=ugp

[lustre-discuss] Disk quota exceeded while quota is not filled

2020-08-10 Thread David Cohen
Hi, I'm running Lustre 2.10.5 on the OSS and MDS, and 2.10.7 on the clients. Inode quota on the MDT has worked fine for a while now: lctl conf_param technion.quota.mdt=ugp. But when, a few days ago, I turned on quota on the OSTs: lctl conf_param technion.quota.ost=ugp, users started getting "Disk quota exceeded"
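When enforcement kicks in unexpectedly like this, the first step is usually to compare what the client reports against what the quota slaves have actually accounted; a sketch (username, group, and mountpoint are placeholders):

    lfs quota -u someuser /storage           # what the user sees: usage vs. limits
    lfs quota -g somegroup /storage          # same for the group quota
    lctl get_param osd-*.*.quota_slave.info  # per-target enforcement/accounting state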

Re: [lustre-discuss] MGS+MDT migration to a new storage using LVM tools

2020-07-21 Thread David Cohen
Jul 19, 2020, at 12:41 AM, David Cohen wrote: > Hi, We have a combined MGS+MDT and I'm looking for a migration to new storage with minimal disruption to the running jobs on the cluster. Can anyone find problems in the scenario below and/or

[lustre-discuss] MGS+MDT migration to a new storage using LVM tools

2020-07-19 Thread David Cohen
Hi, We have a combined MGS+MDT and I'm looking for a way to migrate it to new storage with minimal disruption to the running jobs on the cluster. Can anyone find problems with the scenario below and/or suggest another solution? I would also appreciate "no problems" replies to validate the scenario before
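As the subject suggests, the scenario rests on LVM tools, which allow moving a live logical volume between physical devices; a hedged sketch of that approach (volume group and device names are placeholders, and a backup of the MDT beforehand is prudent):

    pvcreate /dev/new_lun              # prepare the new storage
    vgextend mdt_vg /dev/new_lun       # add it to the MDT's volume group
    pvmove /dev/old_lun /dev/new_lun   # migrate extents; LVM permits this online
    vgreduce mdt_vg /dev/old_lun       # retire the old device
    pvremove /dev/old_lun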

Re: [lustre-discuss] frequent Connection lost, Connection restored to mdt

2019-12-23 Thread David Cohen
Is there any high load that could prevent your client from communicating with your server properly? Do you correlate that with some specific load running on your clients? Aurélien

[lustre-discuss] frequent Connection lost, Connection restored to mdt

2019-12-22 Thread David Cohen
Hi, We are running 2.10.5 on the servers and 2.10.8 on the clients. Every few minutes we see, on the client side: Dec 22 15:26:34 gftp kernel: Lustre: 439834:0:(client.c:2116:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1577021187/real 1577021187]
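For intermittent connection lost/restored cycles like this, a common first pass is to confirm raw LNet reachability and watch the import state on a client while the problem recurs; a sketch (the MDS NID is a placeholder):

    lctl ping 10.0.0.1@tcp       # raw LNet round-trip to the MDS NID
    lctl get_param mdc.*.state   # recent import state transitions on the client
    lctl get_param mdc.*.import  # current import details, including failover NIDs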

[lustre-discuss] Limit to the number of "--servicenode="

2018-09-29 Thread David Cohen
Hi, In all the manuals and examples there are only two "--servicenode=" options in the creation of the MGS nodes and OSSs. Is that a limitation, or can I create more service nodes? Is the maximum number of service nodes different for the MGS and OSS? David
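Syntactically, mkfs.lustre accepts the option repeatedly; as a sketch, a target declaring three potential servers would look like the following (fsname, NIDs, and device are placeholders, not from the thread):

    mkfs.lustre --ost --fsname=local --index=0 \
        --mgsnode=mgs@tcp \
        --servicenode=oss01@tcp \
        --servicenode=oss02@tcp \
        --servicenode=oss03@tcp \
        /dev/mapper/ost0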

Re: [lustre-discuss] Lustre 2.10.4 failover

2018-08-13 Thread David Cohen
,errors=remount-ro,acl' --param="mgsnode=oss03@tcp mgsnode=oss01@tcp servicenode=oss01@tcp servicenode=oss03@tcp" /dev/lustre_pool/MDT. On Mon, Aug 13, 2018 at 8:40 PM Mohr Jr, Richard Frank (Rick Mohr) <rm...@utk.edu> wrote: > On Aug 13, 2018, at 7:14 AM, David

[lustre-discuss] Lustre 2.10.4 failover

2018-08-13 Thread David Cohen
Hi, I installed a new 2.10.4 Lustre file system, running the MDS and OSS on the same servers. Failover wasn't configured at format time, and I'm trying to configure a failover node with tunefs without success: tunefs.lustre --writeconf --erase-params --param="ost.quota_type=ug" --mgsnode=oss03@tcp
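The truncated follow-up in this thread shows the form that puts the failover NIDs onto the MDT; a sketch reconstructed from that reply (run against the unmounted target; note that --erase-params wipes any existing persistent parameters, so everything needed must be restated):

    umount /mnt/mdt    # the target must not be mounted
    tunefs.lustre --writeconf --erase-params \
        --param="mgsnode=oss03@tcp mgsnode=oss01@tcp servicenode=oss01@tcp servicenode=oss03@tcp" \
        /dev/lustre_pool/MDT
    # --writeconf regenerates the configuration logs, after which the
    # filesystem's targets need to be remounted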

[lustre-discuss] How to support user_xattr in 2.10.4

2018-07-16 Thread David Cohen
Hi, I'm running a newly installed Lustre 2.10.4. The MDS is configured to support acl and user_xattr (Persistent mount opts: user_xattr,errors=remount-ro,acl), but when mounting (or remounting) the client with "-o remount,acl,user_xattr" and checking the mount, I get only: type lustre

[lustre-discuss] Mount options ignored?

2018-07-02 Thread David Cohen
Hi, I'm running a newly installed Lustre 2.10.4. The MDS is configured to support acl and user_xattr (Persistent mount opts: user_xattr,errors=remount-ro,acl), but when mounting (or remounting) the client with "-o remount,acl,user_xattr" and checking the mount, I get only: type lustre
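Since a remount may not renegotiate these flags, one thing worth trying (a sketch, not a resolution confirmed by the thread; the MGS NID and paths are placeholders) is a clean unmount and a fresh mount with the options stated up front, then checking what the kernel actually recorded:

    umount /mnt/lustre
    mount -t lustre -o acl,user_xattr mgs@tcp:/fsname /mnt/lustre
    mount | grep ' type lustre '   # verify which flags took effect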

Re: [lustre-discuss] Lustre Client in a container

2018-01-03 Thread David Cohen
> I'm not sure how much further that limited compatibility goes, though. From: Dilger, Andreas <andreas.dil...@intel.com>; Sent: Wednesday, January 3, 2018 4:20:56 AM; To: David Cohen; Cc: Patrick Farrell; lustre-discuss

Re: [lustre-discuss] Lustre Client in a container

2017-12-31 Thread David Cohen
you want to run multiple client versions on one node...? Clients are usually interoperable across a pretty generous set of server versions. - Patrick

[lustre-discuss] Lustre Client in a container

2017-12-30 Thread David Cohen
Hi, Is it possible to run the Lustre client in a container? The goal is to run two different client versions on the same node; can it be done? David
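Worth noting when weighing this: the Lustre client lives in a kernel module, and every container on a node shares the host kernel, so two client module versions on one node is not straightforward. The common pattern is instead to mount on the host and bind-mount into containers; a sketch (image name and paths are placeholders):

    # on the host: a single client version, loaded as a kernel module
    mount -t lustre mgs@tcp:/fsname /mnt/lustre
    # containers then see the mount without loading their own client
    docker run --rm -v /mnt/lustre:/mnt/lustre:rw myimage ls /mnt/lustre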

Re: [Lustre-discuss] delete an undeletable file

2013-03-08 Thread David Cohen
You can move the entire folder (mv) to another location, e.g. /lustre_fs/something/badfiles, recreate the folder, and mv back only the good files. If I run unlink .viminfo I get the same error: unlink: cannot unlink `.viminfo': Invalid argument. I can't stop the MDS/OSS to run lfsck or e2fsck
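A sketch of the quarantine workaround described above (paths are illustrative):

    mv /lustre_fs/home/user /lustre_fs/badfiles   # set the whole directory aside
    mkdir /lustre_fs/home/user                    # recreate it in place
    mv /lustre_fs/badfiles/* /lustre_fs/home/user/
    # '*' does not match dotfiles, so the stuck .viminfo conveniently stays
    # behind in the quarantine directory until the fs can be checked offline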

Re: [Lustre-discuss] MDS crashes daily at the same hour

2010-01-06 Thread David Cohen
On Monday 04 January 2010 20:42:12 Andreas Dilger wrote: On 2010-01-04, at 03:02, David Cohen wrote: I'm using a mixed environment of a 1.8.0.1 MDS and 1.6.6 OSSs (had a problem with qlogic drivers and rolled back to 1.6.6). My MDS gets unresponsive each day at 4-5 am local time, no kernel
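A crash window pinned to 4-5 am points at nightly cron activity; one classic culprit (an assumption here, not a conclusion visible in the archived snippet) is updatedb on every client walking the Lustre mount at the same hour and hammering the MDS. The standard exclusion is:

    # /etc/updatedb.conf on the clients -- prune Lustre from the nightly scan
    PRUNEFS="lustre nfs nfs4 tmpfs"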

[Lustre-discuss] MDS crashes daily at the same hour

2010-01-04 Thread David Cohen
1262579942 ref 1 fl Interpret:/0/0 rc 0/0 Jan 4 06:38:56 tech-mds kernel: LustreError: 6420:0:(mds_open.c:1665:mds_close()) Skipped 1923 previous similar messages -- David Cohen