Re: [lustre-discuss] [EXTERNAL] Re: Data recovery with lost MDT data

2023-09-22 Thread Vicker, Darby J. (JSC-EG111)[Jacobs Technology, Inc.] via lustre-discuss
I’m only showing you the last 10 directories below but there are about 30 or 40
directories with a pretty uniform distribution between 6/20 and now.  If it were
a situation where we had been rolled back to 6/20 but directories were starting
to be updated again, there should be a big gap with no updates.  The rollback
(when we deleted the “snapshot”) happened on Monday, 9/18.  We could do another
snapshot of the MDT, mount it read-only, and poke around in there if you think
that would help.  Actually, our backup process (which is running normally
again) is doing just that.  It takes quite a long time to complete, so there is
opportunity for me to investigate.
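Roughly, that inspection flow would be the following sketch.  The pool/dataset
name (mdtpool/lfs2-mdt0) and the mountpoint are placeholders, not our real names:

  # snapshot the MDT zvol, then clone it so the clone gets its own block device
  zfs snapshot mdtpool/lfs2-mdt0@inspect
  zfs clone mdtpool/lfs2-mdt0@inspect mdtpool/lfs2-mdt0-inspect
  # mount the clone read-only as ldiskfs; the namespace lives under ROOT/
  mkdir -p /mnt/mdt-ro
  mount -t ldiskfs -o ro /dev/zvol/mdtpool/lfs2-mdt0-inspect /mnt/mdt-ro
  ls -l /mnt/mdt-ro/ROOT
  # tear down when finished
  umount /mnt/mdt-ro
  zfs destroy mdtpool/lfs2-mdt0-inspect
  zfs destroy mdtpool/lfs2-mdt0@inspect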



From: Andreas Dilger 
Date: Friday, September 22, 2023 at 1:36 AM
To: "Vicker, Darby J. (JSC-EG111)[Jacobs Technology, Inc.]" 

Cc: "lustre-discuss@lists.lustre.org" 
Subject: Re: [EXTERNAL] Re: [lustre-discuss] Data recovery with lost MDT data


On Sep 21, 2023, at 16:06, Vicker, Darby J. (JSC-EG111)[Jacobs Technology, Inc.] <darby.vicke...@nasa.gov> wrote:

I knew an lfsck would identify the orphaned objects.  That’s great that it will 
move those objects to an area we can triage.  With ownership still intact (and 
I assume time stamps too), I think this will be helpful for at least some of 
the users to recover some of their data.  Thanks Andreas.

I do have another question.  Even with the MDT loss, the top level user 
directories on the file system are still showing current modification times.  I 
was a little surprised to see this – my expectation was that the most current 
time would be from the snapshot that we accidentally reverted to, 6/20/2023 in 
this case.  Does this make sense?

The timestamps of the directories are only stored on the MDT (unlike regular
files, which keep timestamps on both the MDT and OST).  Is it possible that
users (or possibly recovered clients with existing mountpoints) have started
to access the filesystem in the past few days since it was recovered, or that
an admin was doing something that would have caused the directories to be
modified?
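For regular files you can cross-check against the OST side.  A sketch, assuming
an ldiskfs OST (the object ID 123456 and the device name are placeholders):

  # on a client: find the OST index and object ID backing a file
  lfs getstripe -v /ephemeral/somefile
  # on the OSS for that OST: ldiskfs stores object N under O/<seq>/d<N % 32>/N,
  # so stat it read-only with debugfs
  debugfs -c -R "stat O/0/d$((123456 % 32))/123456" /dev/mapper/ost0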


Is it possible you have a newer copy of the MDT than you thought?


[dvicker@dvicker ~]$ ls -lrt /ephemeral/ | tail
  4 drwx------     2 abjuarez   abjuarez     4096 Sep 12 13:24 abjuarez/
  4 drwxr-x---     2 ksmith29   ksmith29     4096 Sep 13 15:37 ksmith29/
  4 drwxr-xr-x    55 bjjohn10   bjjohn10     4096 Sep 13 16:36 bjjohn10/
  4 drwxrwx---     3 cbrownsc   ccp_fast     4096 Sep 14 12:27 cbrownsc/
  4 drwx------     3 fgholiza   fgholiza     4096 Sep 18 06:41 fgholiza/
  4 drwx------     5 mtfoste2   mtfoste2     4096 Sep 19 11:35 mtfoste2/
  4 drwx------     4 abenini    abenini      4096 Sep 19 15:33 abenini/
  4 drwx------     9 pdetremp   pdetremp     4096 Sep 19 16:49 pdetremp/
[dvicker@dvicker ~]$



From: Andreas Dilger <adil...@whamcloud.com>
Date: Thursday, September 21, 2023 at 2:33 PM
To: "Vicker, Darby J. (JSC-EG111)[Jacobs Technology, Inc.]" <darby.vicke...@nasa.gov>
Cc: "lustre-discuss@lists.lustre.org" <lustre-discuss@lists.lustre.org>
Subject: [EXTERNAL] Re: [lustre-discuss] Data recovery with lost MDT data



In the absence of backups, you could try LFSCK to link all of the orphan OST 
objects into .lustre/lost+found (see lctl-lfsck_start.8 man page for details).
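Concretely, that is the layout LFSCK with orphan handling enabled; a minimal
sketch, run on the MDS:

  # link orphan OST objects into .lustre/lost+found/MDTxxxx
  lctl lfsck_start -t layout -o
  # monitor progress and completion status
  lctl get_param -n mdd.*.lfsck_layout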

The data is still in the objects, and they should have UID/GID/PRJID assigned 
(if used) but they have no filenames.  It would be up to you to make e.g. 
per-user lost+found directories in their home directories and move the files 
where they could access them and see if they want to keep or delete the files.
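A sketch of that triage, assuming the client mountpoint is /ephemeral and a
single MDT (the per-user directory layout is just one possible convention):

  # run as root on a client; recovered objects keep their UID/GID
  cd /ephemeral/.lustre/lost+found/MDT0000
  for f in *; do
      owner=$(stat -c %U "$f")              # owner name from the preserved UID
      dest="/ephemeral/$owner/lost+found"
      mkdir -p "$dest" && chown "$owner" "$dest"
      mv "$f" "$dest/"
  done
  # "file" gives a first hint at what each recovered object contains
  file /ephemeral/*/lost+found/* | less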

How easy/hard this is to do depends on whether the files have any content that 
can help identify them.

There was a Lustre hackathon project to save the Lustre JobID in a "user.job" 
xattr on every object, exactly to help identify the provenance of files after 
the fact (regardless of whether there is corruption), but it only just landed 
to master and will be in 2.16. That is cold comfort, but would help in the 
future.
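On a filesystem with that feature, reading the provenance back would be an
ordinary user-xattr lookup; a one-line sketch (the path is a placeholder):

  getfattr -n user.job /ephemeral/dvicker/lost+found/recovered-object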
Cheers, Andreas



On Sep 20, 2023, at 15:34, Vicker, Darby J. (JSC-EG111)[Jacobs Technology, Inc.] via lustre-discuss <lustre-discuss@lists.lustre.org> wrote:
Hello,

We have recently accidentally deleted some of our MDT data.  I think it's gone
for good, but I'm looking for advice to see if there is any way to recover.
Thoughts appreciated.

Re: [lustre-discuss] [EXTERNAL] Re: Data recovery with lost MDT data

2023-09-22 Thread Andreas Dilger via lustre-discuss
On Sep 21, 2023, at 16:06, Vicker, Darby J. (JSC-EG111)[Jacobs Technology, Inc.] <darby.vicke...@nasa.gov> wrote:

I knew an lfsck would identify the orphaned objects.  That’s great that it will 
move those objects to an area we can triage.  With ownership still intact (and 
I assume time stamps too), I think this will be helpful for at least some of 
the users to recover some of their data.  Thanks Andreas.

I do have another question.  Even with the MDT loss, the top level user 
directories on the file system are still showing current modification times.  I 
was a little surprised to see this – my expectation was that the most current 
time would be from the snapshot that we accidentally reverted to, 6/20/2023 in 
this case.  Does this make sense?

The timestamps of the directories are only stored on the MDT (unlike regular
files, which keep timestamps on both the MDT and OST).  Is it possible that
users (or possibly recovered clients with existing mountpoints) have started
to access the filesystem in the past few days since it was recovered, or that
an admin was doing something that would have caused the directories to be
modified?


Is it possible you have a newer copy of the MDT than you thought?

[dvicker@dvicker ~]$ ls -lrt /ephemeral/ | tail
  4 drwx------     2 abjuarez   abjuarez     4096 Sep 12 13:24 abjuarez/
  4 drwxr-x---     2 ksmith29   ksmith29     4096 Sep 13 15:37 ksmith29/
  4 drwxr-xr-x    55 bjjohn10   bjjohn10     4096 Sep 13 16:36 bjjohn10/
  4 drwxrwx---     3 cbrownsc   ccp_fast     4096 Sep 14 12:27 cbrownsc/
  4 drwx------     3 fgholiza   fgholiza     4096 Sep 18 06:41 fgholiza/
  4 drwx------     5 mtfoste2   mtfoste2     4096 Sep 19 11:35 mtfoste2/
  4 drwx------     4 abenini    abenini      4096 Sep 19 15:33 abenini/
  4 drwx------     9 pdetremp   pdetremp     4096 Sep 19 16:49 pdetremp/
[dvicker@dvicker ~]$



From: Andreas Dilger <adil...@whamcloud.com>
Date: Thursday, September 21, 2023 at 2:33 PM
To: "Vicker, Darby J. (JSC-EG111)[Jacobs Technology, Inc.]" <darby.vicke...@nasa.gov>
Cc: "lustre-discuss@lists.lustre.org" <lustre-discuss@lists.lustre.org>
Subject: [EXTERNAL] Re: [lustre-discuss] Data recovery with lost MDT data


In the absence of backups, you could try LFSCK to link all of the orphan OST 
objects into .lustre/lost+found (see lctl-lfsck_start.8 man page for details).

The data is still in the objects, and they should have UID/GID/PRJID assigned 
(if used) but they have no filenames.  It would be up to you to make e.g. 
per-user lost+found directories in their home directories and move the files 
where they could access them and see if they want to keep or delete the files.

How easy/hard this is to do depends on whether the files have any content that 
can help identify them.

There was a Lustre hackathon project to save the Lustre JobID in a "user.job" 
xattr on every object, exactly to help identify the provenance of files after 
the fact (regardless of whether there is corruption), but it only just landed 
to master and will be in 2.16. That is cold comfort, but would help in the 
future.
Cheers, Andreas


On Sep 20, 2023, at 15:34, Vicker, Darby J. (JSC-EG111)[Jacobs Technology, Inc.] via lustre-discuss <lustre-discuss@lists.lustre.org> wrote:
Hello,

We have recently accidentally deleted some of our MDT data.  I think it's gone
for good, but I'm looking for advice to see if there is any way to recover.
Thoughts appreciated.

We run two LFSs on the same set of hardware.  We didn’t set out to do this,
but it kind of evolved.  The original setup was only a single filesystem and
was all ZFS – MDT and OSTs.  Eventually, we had some small-file workflows that
we wanted to get better performance on.  To address this, we stood up another
filesystem on the same hardware and used an ldiskfs MDT.  However, since we
were already using ZFS, under the hood the storage device we build the ldiskfs
MDT on comes from ZFS.  That gets presented to the OS as /dev/zd0.  We do a
nightly backup of the MDT by cloning the ZFS dataset (this creates /dev/zd16,
for whatever reason), snapshotting the clone, mounting that as ldiskfs,
tarring up the data, and then destroying the snapshot and clone.  Well,
occasionally this process gets interrupted, leaving the ZFS snapshot and clone
hanging around.  This is where things go south.  Something happens that swaps
the clone with the primary dataset.  ZFS says you’re working with the primary
but it’s really the clone, and vice versa.  This happened about a year ago and
we caught it, and were able to “zfs promote” to swap them back and move on.

Re: [lustre-discuss] [EXTERNAL] Re: Data recovery with lost MDT data

2023-09-21 Thread Vicker, Darby J. (JSC-EG111)[Jacobs Technology, Inc.] via lustre-discuss
I knew an lfsck would identify the orphaned objects.  That’s great that it will 
move those objects to an area we can triage.  With ownership still intact (and 
I assume time stamps too), I think this will be helpful for at least some of 
the users to recover some of their data.  Thanks Andreas.

I do have another question.  Even with the MDT loss, the top level user 
directories on the file system are still showing current modification times.  I 
was a little surprised to see this – my expectation was that the most current 
time would be from the snapshot that we accidentally reverted to, 6/20/2023 in 
this case.  Does this make sense?

[dvicker@dvicker ~]$ ls -lrt /ephemeral/ | tail
  4 drwx------     2 abjuarez   abjuarez     4096 Sep 12 13:24 abjuarez/
  4 drwxr-x---     2 ksmith29   ksmith29     4096 Sep 13 15:37 ksmith29/
  4 drwxr-xr-x    55 bjjohn10   bjjohn10     4096 Sep 13 16:36 bjjohn10/
  4 drwxrwx---     3 cbrownsc   ccp_fast     4096 Sep 14 12:27 cbrownsc/
  4 drwx------     3 fgholiza   fgholiza     4096 Sep 18 06:41 fgholiza/
  4 drwx------     5 mtfoste2   mtfoste2     4096 Sep 19 11:35 mtfoste2/
  4 drwx------     4 abenini    abenini      4096 Sep 19 15:33 abenini/
  4 drwx------     9 pdetremp   pdetremp     4096 Sep 19 16:49 pdetremp/
256 drwxrwx---   199 ccp_fast_boeing_runner ccp_white  258048 Sep 19 22:03 ccp_fast_boeing_runner/
160 drwxr-x---  2070 ccp_fast_spacex_runner ccp_white  159744 Sep 19 22:04 ccp_fast_spacex_runner/
[dvicker@dvicker ~]$



From: Andreas Dilger 
Date: Thursday, September 21, 2023 at 2:33 PM
To: "Vicker, Darby J. (JSC-EG111)[Jacobs Technology, Inc.]" 

Cc: "lustre-discuss@lists.lustre.org" 
Subject: [EXTERNAL] Re: [lustre-discuss] Data recovery with lost MDT data


In the absence of backups, you could try LFSCK to link all of the orphan OST 
objects into .lustre/lost+found (see lctl-lfsck_start.8 man page for details).

The data is still in the objects, and they should have UID/GID/PRJID assigned 
(if used) but they have no filenames.  It would be up to you to make e.g. 
per-user lost+found directories in their home directories and move the files 
where they could access them and see if they want to keep or delete the files.

How easy/hard this is to do depends on whether the files have any content that 
can help identify them.

There was a Lustre hackathon project to save the Lustre JobID in a "user.job" 
xattr on every object, exactly to help identify the provenance of files after 
the fact (regardless of whether there is corruption), but it only just landed 
to master and will be in 2.16. That is cold comfort, but would help in the 
future.
Cheers, Andreas


On Sep 20, 2023, at 15:34, Vicker, Darby J. (JSC-EG111)[Jacobs Technology, Inc.] via lustre-discuss wrote:
Hello,

We have recently accidentally deleted some of our MDT data.  I think it's gone
for good, but I'm looking for advice to see if there is any way to recover.
Thoughts appreciated.

We run two LFSs on the same set of hardware.  We didn’t set out to do this,
but it kind of evolved.  The original setup was only a single filesystem and
was all ZFS – MDT and OSTs.  Eventually, we had some small-file workflows that
we wanted to get better performance on.  To address this, we stood up another
filesystem on the same hardware and used an ldiskfs MDT.  However, since we
were already using ZFS, under the hood the storage device we build the ldiskfs
MDT on comes from ZFS.  That gets presented to the OS as /dev/zd0.  We do a
nightly backup of the MDT by cloning the ZFS dataset (this creates /dev/zd16,
for whatever reason), snapshotting the clone, mounting that as ldiskfs,
tarring up the data, and then destroying the snapshot and clone.  Well,
occasionally this process gets interrupted, leaving the ZFS snapshot and clone
hanging around.  This is where things go south.  Something happens that swaps
the clone with the primary dataset.  ZFS says you’re working with the primary
but it’s really the clone, and vice versa.  This happened about a year ago and
we caught it, and were able to “zfs promote” to swap them back and move on.
More details in the ZFS discussion and on this mailing list here:

https://zfsonlinux.topicbox.com/groups/zfs-discuss/Tcb8a3ef663db0031-M5a79e71768b20b2389efc4a4

http://lists.lustre.org/pipermail/lustre-discuss-lustre.org/2022-June/018154.html
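For context, the nightly cycle described above, plus the “zfs promote” recovery
step, sketched with placeholder names (mdtpool/lfs2-mdt0; the extra snapshot of
the clone is omitted for brevity):

  # nightly: snapshot the primary and clone it; the clone shows up as its own
  # zvol (e.g. /dev/zd16), which gets mounted read-only as ldiskfs and tarred up
  zfs snapshot mdtpool/lfs2-mdt0@nightly
  zfs clone mdtpool/lfs2-mdt0@nightly mdtpool/lfs2-mdt0-clone
  mkdir -p /mnt/mdt-backup
  mount -t ldiskfs -o ro /dev/zvol/mdtpool/lfs2-mdt0-clone /mnt/mdt-backup
  tar --xattrs --xattrs-include="trusted.*" -czf /backup/mdt-$(date +%F).tgz -C /mnt/mdt-backup .
  # cleanup; if this part is interrupted, the snapshot and clone linger
  umount /mnt/mdt-backup
  zfs destroy mdtpool/lfs2-mdt0-clone
  zfs destroy mdtpool/lfs2-mdt0@nightly
  # recovery when the clone/origin relationship flips: promote the dataset
  # that should be primary, which swaps it back with its origin
  zfs promote mdtpool/lfs2-mdt0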

It happened again earlier this week, but we didn’t remember to check this and,
in an effort to get the backups going again, destroyed what we thought were the
snapshot and clone.  In reality, we destroyed the primary dataset.  Even more
unfortunately, the stale “snapshot” was about 3 months old.