Re: [lustre-discuss] Zfs backend file system level backup

2020-01-30 Thread Yong, Fan
It can be done at any time before unmounting the Lustre device, regardless of whether it is a newly formatted system or a live zfs-based system with data, as long as the version supports file-level backup. -- Cheers, Nasf -Original Message- From: lustre-discuss On Behalf Of BASDEN, ALASTAIR G. Sent:
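
A minimal sketch of the preparation step, assuming Lustre 2.11 or later where zfs-based targets support file-level backup; the parameter name, fsname, and paths are placeholders to be checked against the Lustre manual:

    # Enable OSD index backup on the target before unmounting, so a later
    # file-level restore can rebuild the object index (assumed parameter):
    lctl set_param osd-zfs.${fsname}-MDT0000.index_backup=ON
    # Then unmount the Lustre device and take the file-level backup:
    umount /mnt/mdt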

Re: [lustre-discuss] lfsck namespace doesn't stop and I cancel it

2019-06-26 Thread Yong, Fan
If you have no other way to stop the lfsck, then you can try re-mounting the MDT with the option "-o skip_lfsck". -- Cheers, Nasf -Original Message- From: lustre-discuss [mailto:lustre-discuss-boun...@lists.lustre.org] On Behalf Of Fernando Perez Sent: Wednesday, June 26, 2019 7:59 PM To:
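
For reference, a sketch of the suggested remount; the device and mount point are placeholders:

    # Remount the MDT, skipping the LFSCK that would otherwise resume:
    umount /mnt/mdt
    mount -t lustre -o skip_lfsck /dev/mdt_disk /mnt/mdt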

Re: [lustre-discuss] "ls" hangs for certain files AND lfsck_namespace gets stuck in scanning-phase1, same position

2019-06-16 Thread Yong, Fan
You may have lost the clues to the root cause. -- Cheers, Nasf -Original Message- From: Sternberg, Michael G. [mailto:sternb...@anl.gov] Sent: Sunday, June 16, 2019 2:19 PM To: Yong, Fan Cc: lustre-discuss@lists.lustre.org Subject: Re: [lustre-discuss] "ls" han

Re: [lustre-discuss] "ls" hangs for certain files AND lfsck_namespace gets stuck in scanning-phase1, same position

2019-06-14 Thread Yong, Fan
Hi Michael, How do you trigger the namespace LFSCK? "-t namespace" or "-t all" (the default)? The latter will start both the namespace LFSCK and the layout LFSCK. If the layout LFSCK is also triggered, then a hung layout LFSCK will also block the namespace LFSCK engine. Generally, LFSCK depends on the
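
A sketch of the two invocations being contrasted, with a placeholder MDT name:

    # Start only the namespace LFSCK:
    lctl lfsck_start -M lustre-MDT0000 -t namespace
    # "-t all" (the default) starts both components, so a hung layout
    # LFSCK can also stall the namespace engine:
    lctl lfsck_start -M lustre-MDT0000 -t all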

Re: [lustre-discuss] lfsck repair quota

2019-04-17 Thread Yong, Fan
For an ldiskfs-based backend, e2fsck will link ldiskfs orphan inodes into the backend /lost+found directory, which is invisible to the Lustre namespace. After that, you can run the namespace LFSCK, which will move the orphan inodes (with valid FIDs) from the backend /lost+found to the Lustre lost+found directory
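
A hedged sketch of that sequence, assuming an ldiskfs MDT at /dev/mdt_disk and fsname "lustre" (both placeholders):

    # 1. e2fsck links orphan inodes into the backend /lost+found:
    e2fsck -fy /dev/mdt_disk
    # 2. After remounting the MDT, namespace LFSCK moves orphans with
    #    valid FIDs into the Lustre-visible lost+found:
    lctl lfsck_start -M lustre-MDT0000 -t namespace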

Re: [lustre-discuss] Lustre/ZFS snapshots mount error

2018-09-11 Thread Yong, Fan
On 09/10/2018 02:57 PM, Yong, Fan wrote: It is suspected that there were some llogs still to be handled when the snapshot was being made. Then, when such a snapshot is mounted, some conditions trigger the llog cleanup/modification automatically. So it is not related to your actions when mounting the snapshot. Since

Re: [lustre-discuss] Lustre/ZFS snapshots mount error

2018-09-10 Thread Yong, Fan
Universität München Theresienstr. 37, 80333 München, Germany On 03.09.2018 at 08:16, Yong, Fan wrote: I would say that it is not your order of operations that caused the trouble. Instead, it is related to the snapshot mount logic. As mentioned in the former reply, we need some patch for the llog logic to avoid modi

Re: [lustre-discuss] Lustre/ZFS snapshots mount error

2018-08-27 Thread Yong, Fan
According to the stack trace, someone was trying to clean up old empty llogs while mounting the snapshot. We do NOT allow any modification during a snapshot mount; otherwise, it will trigger a ZFS backend BUG(). That is why we added the LASSERT() when starting the transaction. One possible solution is that we
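
For context, a sketch of the snapshot mount path being discussed, assuming the lctl snapshot commands introduced in Lustre 2.10 and placeholder names:

    # Snapshots are mounted read-only, which is why any llog
    # modification during the mount trips the LASSERT():
    lctl snapshot_mount -F myfs -n snap_20180827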

Re: [lustre-discuss] Is there a way to have faster lustre file system checker (lfsck)?

2018-05-03 Thread Yong, Fan
Inline comments. -- Cheers, Nasf From: lustre-discuss [mailto:lustre-discuss-boun...@lists.lustre.org] On Behalf Of 代栋 Sent: Thursday, May 3, 2018 1:22 AM To: Yong, Fan <fan.y...@intel.com> Cc: lustre-discuss@lists.lustre.org Subject: Re: [lustre-discuss] Is there a way to have faster

Re: [lustre-discuss] Is there a way to have faster lustre file system checker (lfsck)?

2018-05-02 Thread Yong, Fan
Inline comments. -- Cheers, Nasf > -Original Message- > From: 代栋 [mailto:daidon...@gmail.com] > Sent: Wednesday, May 2, 2018 4:06 PM > To: Yong, Fan <fan.y...@intel.com> > Cc: lustre-discuss@lists.lustre.org > Subject: Re: [lustre-discuss] Is there a way to have f

Re: [lustre-discuss] Is there a way to have faster lustre file system checker (lfsck)?

2018-05-01 Thread Yong, Fan
Inline comments. -- Cheers, Nasf > -Original Message- > From: lustre-discuss [mailto:lustre-discuss-boun...@lists.lustre.org] On > Behalf > Of 代栋 > Sent: Wednesday, May 2, 2018 5:36 AM > To: lustre-discuss@lists.lustre.org > Subject: [lustre-discuss] Is there a way to have faster

Re: [lustre-discuss] online LFSCK - detailed output?

2017-03-02 Thread Yong, Fan
Hi Steve, The online LFSCK has two main kinds of output: one is the LFSCK progress and statistics; the other is the LFSCK logs about what inconsistencies were found and fixed. The former depends on the LFSCK type and can be found via: 1. namespace LFSCK: "lctl get_param -n
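
A sketch of the first kind of output, with a placeholder fsname; the parameter names follow the mdd.*.lfsck_* pattern assumed here:

    # LFSCK progress and statistics, per component:
    lctl get_param -n mdd.lustre-MDT0000.lfsck_namespace
    lctl get_param -n mdd.lustre-MDT0000.lfsck_layout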

Re: [lustre-discuss] syntax/guidance on lfsck

2016-12-13 Thread Yong, Fan
The full functionality of layout LFSCK for handling orphan OST-objects has been available since Lustre-2.6. Regards, Nasf -Original Message- From: lustre-discuss [mailto:lustre-discuss-boun...@lists.lustre.org] On Behalf Of Dilger, Andreas Sent: Wednesday, December 14, 2016 7:03 AM To:
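
A hedged example of invoking that functionality on Lustre 2.6+, with a placeholder MDT name; "-o" asks the layout LFSCK to handle orphan OST-objects:

    lctl lfsck_start -M lustre-MDT0000 -t layout -o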

Re: [lustre-discuss] (LFSCK) LBUG: ASSERTION( get_current()->journal_info == ((void *)0) ) failed - (ungracefully) SOLVED

2016-09-25 Thread Yong, Fan
means that it can NOT replace the local e2fsck. -- Cheers, Nasf -Original Message- From: lustre-discuss [mailto:lustre-discuss-boun...@lists.lustre.org] On Behalf Of Cédric Dufour - Idiap Research Institute Sent: Monday, September 19, 2016 8:53 PM To: Yong, Fan <fan.y...@intel.com>

Re: [Lustre-discuss] Directory access triggers OI scrub

2014-06-16 Thread Yong, Fan
Have you tried it with a newly mounted client after the upgrade? -- Regards, Nasf -Original Message- From: Tommi T [mailto:tommi_...@yahoo.com] Sent: Thursday, June 12, 2014 8:58 PM To: Yong, Fan; lustre-discuss Subject: Re: [Lustre-discuss] Directory access triggers OI scrub Hi

Re: [Lustre-discuss] Directory access triggers OI scrub

2014-06-16 Thread Yong, Fan
Go to the upper-layer directory, then try "ls -ail ${parent}/progs" to see what happens. -- Best Regards, Nasf -Original Message- From: lustre-discuss-boun...@lists.lustre.org [mailto:lustre-discuss- boun...@lists.lustre.org] On Behalf Of Tommi T Sent: Wednesday, June 11, 2014 5:11 PM

Re: [Lustre-discuss] Directory access triggers OI scrub

2014-06-16 Thread Yong, Fan
'. -- Lucky, Nasf -Original Message- From: Tommi T [mailto:tommi_...@yahoo.com] Sent: Thursday, June 12, 2014 10:30 PM To: Yong, Fan; lustre-discuss Subject: Re: [Lustre-discuss] Directory access triggers OI scrub Yes, also with different client versions. el6: lustre-client

Re: [Lustre-discuss] ls -l command in the Lustre Filesystem

2008-07-27 Thread Yong Fan
from it. -- Fan Yong Thank you! On Thu, 2008-07-24, at 05:18 PM, Yong Fan [EMAIL PROTECTED] wrote: Johnlya wrote: Thank you! How to use the stat-ahead command? The stat-ahead feature is enabled by default in lustre 1.6.5 and later releases. You can disable it by "echo 0 > /proc/fs

Re: [Lustre-discuss] Questions about Lustre ACLs

2008-07-25 Thread Yong Fan
Kilian CAVALOTTI wrote: Hi all, I've got a couple of questions about ACLs in Lustre: 1. When they're enabled on the MDS, can a client mount the filesystem without them? It doesn't seem to be the case, but at the same time, the mount.lustre manpage mentions the noacl option in the
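
A sketch of the client-side option the thread refers to, with placeholder MGS node and mount point; whether the server honors it is the question under discussion:

    # Mount a Lustre client with ACLs disabled via the noacl option:
    mount -t lustre -o noacl mgsnode@tcp:/lustre /mnt/lustre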

Re: [Lustre-discuss] ls -l command in the Lustre Filesystem

2008-07-24 Thread Yong Fan
Johnlya wrote: Thank you! How to use the stat-ahead command? The stat-ahead feature is enabled by default in lustre 1.6.5 and later releases. You can disable it by "echo 0 > /proc/fs/lustre/llite/$client/statahead-max" -- Fan Yong On Thu, 2008-07-17, at 09:37 PM, Brian J. Murrell [EMAIL PROTECTED]
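
The same knob in sketch form; the /proc path is from the original mail, while the lctl form is an assumption based on later releases:

    # Disable statahead on a client (0 = off):
    echo 0 > /proc/fs/lustre/llite/$client/statahead-max
    # Equivalent via lctl on newer releases (assumed):
    lctl set_param llite.*.statahead_max=0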

Re: [Lustre-discuss] client randomly evicted

2008-05-16 Thread Yong Fan
with the big ones first :-) ah! a little issue with that, will fix it soon. Regards! -- Fan Yong cheers, robin -Aaron On May 15, 2008, at 4:36 AM, Yong Fan wrote: Robin Humble wrote: On Fri, May 02, 2008 at 03:16:31PM -0700, Andreas Dilger wrote: On Apr 30, 2008

Re: [Lustre-discuss] client randomly evicted

2008-05-15 Thread Yong Fan
Robin Humble wrote: On Fri, May 02, 2008 at 03:16:31PM -0700, Andreas Dilger wrote: On Apr 30, 2008 11:40 -0400, Aaron Knister wrote: Some more information that might be helpful. There is a particular code that one of our users runs. Personally, after the trouble this code has caused