Hi,
I'm looking for a Zabbix Lustre template, but couldn't find one.
Is anyone aware of such a template and can share a link?
Thanks,
David
Hi,
We are running Lustre 2.12.7 (ldiskfs) on both the servers and the clients.
lctl get_param osd-*.*.quota_slave.info returns, for all OSTs and the MDT:
quota enabled: ugp
space acct: ug
I tried setting a project ID on a client (to enable project quota), with no
success:
chattr -p 1 /storage/test
chattr: Operation [...]
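The "space acct: ug" line suggests project accounting was never enabled on
the ldiskfs targets themselves. A sketch of how that is typically turned on,
assuming an unmounted target, a placeholder device name, and a
project-quota-capable e2fsprogs:

tune2fs -O project -Q prjquota /dev/mapper/ost0   # placeholder device, run with the target unmounted
lctl get_param osd-*.*.quota_slave.info           # re-check: space acct should now show ugp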
[...] at 7:24 AM Jeff Johnson wrote:
> What devices are underneath dm-21 and are there any errors in
> /var/log/messages for those devices? (assuming /dev/sdX devices underneath)
>
> Run `ls /sys/block/dm-21/slaves` to see what devices are beneath dm-21
>
> On Tue, [...] wrote:
> Hello David,
>
> On 6 Jul 2021, at 08:34, David Cohen wrote:
>
> Jul 6 07:39:19 oss03 kernel: LDISKFS-fs (dm-21): warning: mounting fs
> with errors, running e2fsck is recommended
>
> It looks like the LDISKFS partition is in an inconsistent state [...]
[...] of that is a forced reboot.
Thanks,
David
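To make Jeff's suggestion concrete, a diagnostic sketch (dm-21 comes from the
log above; the e2fsck step assumes the OST can be unmounted, and ldiskfs
targets should be checked with the Lustre-patched e2fsprogs):

ls /sys/block/dm-21/slaves     # list the physical devices backing dm-21
dmesg | grep -i 'I/O error'    # look for errors on those underlying devices
e2fsck -fn /dev/dm-21          # read-only check first, with the OST unmounted
e2fsck -fp /dev/dm-21          # then an actual repair pass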
On Tue, Jul 6, 2021 at 8:07 AM Andreas Dilger wrote:
> On Jul 5, 2021, at 09:05, David Cohen wrote:
> [...]
Hi,
I'm using Lustre 2.10.5 and recently tried to add a new OST.
The OST was formatted with the command below, which, apart from the index, is
exactly the same one used for all the other OSTs in the system.
mkfs.lustre --reformat --mkfsoptions="-t ext4 -T huge" --ost --fsname=local --index=0051
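Not from the thread, but one way to verify what actually landed on the new
target is a read-only dry run (the device path is a placeholder):

tunefs.lustre --dryrun /dev/new_ost   # prints the stored fsname, index, and flags without changing anything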
Best,
David
On Sun, Aug 16, 2020 at 8:41 AM David Cohen wrote:
> Hi,
> Adding some more information.
> A few months ago the data on the Lustre fs was migrated to new physical
> storage.
> After the successful migration the old OSTs were marked as active=0
> (lctl conf_param t[...])
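For context, the usual form of that deactivation command is sketched below;
the fsname matches later messages in this thread, and the OST index is a
placeholder:

lctl conf_param technion-OST0001.osc.active=0   # stop allocating new objects on this OST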
Hi,
I'm running Lustre 2.10.5 on the OSS and MDS, and 2.10.7 on the clients.
Inode quota on the MDT has worked fine for a while now:
lctl conf_param technion.quota.mdt=ugp
But when, a few days ago, I turned on quota on the OSTs:
lctl conf_param technion.quota.ost=ugp
users started getting "Disk quota exceeded" errors.
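A first check in this situation might be to compare reported usage against
the configured limits; the username and mount point below are placeholders:

lfs quota -u someuser /storage            # usage and limits as the servers see them
lctl get_param osd-*.*.quota_slave.info   # per-target quota enforcement state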
Hi,
We have a combined MGS+MDT and I'm looking to migrate it to new storage
with minimal disruption to the running jobs on the cluster.
Can anyone find problems in the scenario below and/or suggest another
solution?
I would also appreciate "no problems" replies to confirm the scenario
before [...]
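The scenario itself is cut off above, but for reference, a device-level copy
is one common approach. This is only a sketch with placeholder device names;
it assumes the MDT can be stopped briefly (clients normally block rather than
fail while the MDT is down, but verify that for your workload):

umount /mnt/mdt                            # stop the MDT
dd if=/dev/old_mdt of=/dev/new_mdt bs=4M   # raw copy to the new storage
mount -t lustre /dev/new_mdt /mnt/mdt      # restart on the new device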
> [...]ts? Is there any high load that could prevent your
> client from communicating with your server properly?
>
> Do you correlate that with some specific load running on your clients?
>
> Aurélien
>
> From: lustre-discuss on behalf of David Cohen
Hi,
We are running 2.10.5 on the servers and 2.10.8 on the clients.
Every few minutes we see, on the client side:
Dec 22 15:26:34 gftp kernel: Lustre:
439834:0:(client.c:2116:ptlrpc_expire_one_request()) @@@ Request sent has
timed out for slow reply: [sent 1577021187/real 1577021187]
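When chasing these timeouts, it can help to first rule out basic LNet
connectivity and check the timeouts in effect; the NID below is a
placeholder:

lctl ping 10.0.0.1@tcp        # is the server NID reachable over LNet?
lctl get_param timeout        # base RPC timeout
lctl get_param at_max at_min  # adaptive timeout bounds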
Hi,
In all the manuals and examples there are only two "--servicenode=" options
in the creation of the MGS nodes and OSSes.
Is that a limitation, or can I create more service nodes?
Is the maximum number of service nodes different for the MGS and the OSS?
David
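For illustration, the option can simply be repeated at format time; whether
more than two NIDs is supported everywhere is an assumption to verify, and
all names below are placeholders:

mkfs.lustre --ost --fsname=local --index=0 \
  --mgsnode=mgs01@tcp \
  --servicenode=oss01@tcp --servicenode=oss02@tcp --servicenode=oss03@tcp \
  /dev/placeholder_ost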
[...],errors=remount-ro,acl'
--param="mgsnode=oss03@tcp mgsnode=oss01@tcp servicenode=oss01@tcp
servicenode=oss03@tcp" /dev/lustre_pool/MDT
On Mon, Aug 13, 2018 at 8:40 PM Mohr Jr, Richard Frank (Rick Mohr) <rm...@utk.edu> wrote:
> > On Aug 13, 2018, at 7:14 AM, David [...]
Hi,
I installed a new 2.10.4 Lustre file system, running the MDS and OSS on the
same servers.
Failover wasn't configured at format time, and I'm now trying to configure a
failover node with tunefs.lustre, without success:
tunefs.lustre --writeconf --erase-params --param="ost.quota_type=ug"
--mgsnode=oss03@tcp [...]
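For comparison, failover NIDs are normally added after the fact with repeated
--servicenode options; a sketch using the node names seen elsewhere in this
thread (the target must be unmounted, and --writeconf regenerates the
configuration logs):

tunefs.lustre --writeconf \
  --servicenode=oss01@tcp --servicenode=oss03@tcp \
  /dev/lustre_pool/MDT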
Hi,
I'm running a newly installed Lustre 2.10.4.
The MDS is configured to support acl and user_xattr:
Persistent mount opts: user_xattr,errors=remount-ro,acl
But when I mount (or remount) the client with "-o remount,acl,user_xattr"
and then check the mount, I only get:
type lustre [...]
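Regardless of what the mount options show, one way to test whether ACLs
actually work end to end (the file path is a placeholder; my understanding is
that ACL support is negotiated with the MDS rather than listed as a client
mount option, but verify that assumption):

setfacl -m u:nobody:r /mnt/lustre/testfile   # set an ACL entry
getfacl /mnt/lustre/testfile                 # should show user:nobody:r--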
> I'm not sure how much further that limited compatibility goes, though.
> --
> From: Dilger, Andreas <andreas.dil...@intel.com>
> Sent: Wednesday, January 3, 2018 4:20:56 AM
> To: David Cohen
> Cc: Patrick Farrell; lustre-discuss
> [...] you want to run multiple client versions on
> one node...? Clients are usually interoperable across a pretty generous
> set of server versions.
>
> - Patrick
>
> --
> From: lustre-discuss <lustre-discuss-boun...@lists.lustre.org> on [...]
Hi,
Is it possible to run the Lustre client in a container?
The goal is to run two different client versions on the same node; can that
be done?
David
You can move the entire folder (mv) to another location, e.g.
/lustre_fs/something/badfiles, recreate the folder, and mv back only the
good files.
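As a sketch of that suggestion, with placeholder directory names:

mv /lustre_fs/mydir /lustre_fs/badfiles          # set the whole directory aside
mkdir /lustre_fs/mydir                           # recreate it
mv /lustre_fs/badfiles/good* /lustre_fs/mydir/   # bring back only the good files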
If I run unlink .viminfo I get the same error:
unlink: cannot unlink `.viminfo': Invalid argument
I can't stop the MDS/OSS to do an lfsck or e2fsck.
On Monday 04 January 2010 20:42:12 Andreas Dilger wrote:
On 2010-01-04, at 03:02, David Cohen wrote:
I'm using a mixed environment of a 1.8.0.1 MDS and 1.6.6 OSSes (we had a
problem with the QLogic drivers and rolled back to 1.6.6).
My MDS gets unresponsive each day at 4-5 am local time, no kernel [...]
[...] 1262579942 ref 1 fl Interpret:/0/0 rc 0/0
Jan 4 06:38:56 tech-mds kernel: LustreError: 6420:0:(mds_open.c:1665:mds_close()) Skipped 1923 previous similar messages
--
David Cohen