On 2016/01/14, 09:27, "Bob Ball" <[email protected]> wrote:

>Thank you, Andreas.  As always, your answers are the best.

In this case, I think Marc's answer is better.  I didn't realize that you
were moving the OST over to new hardware rather than retiring it permanently,
in which case moving the existing filesystem is probably less effort, and
as an added bonus you get a backup of the filesystem (assuming you have
enough space to keep it).
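
If you go that route, a zfs send/receive of a snapshot is probably the
simplest way to move the dataset between servers.  A rough sketch (pool,
dataset, and host names here are only placeholders, adjust to your setup):

  # snapshot the OST dataset on the old server
  zfs snapshot oldpool/ost0@migrate
  # stream the snapshot to the new server and receive it into the new pool
  zfs send oldpool/ost0@migrate | ssh new-oss zfs receive newpool/ost0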

Cheers, Andreas

>On 1/13/2016 10:14 PM, Dilger, Andreas wrote:
>> On 2016/01/12, 12:50, "lustre-discuss on behalf of Bob Ball"
>> <[email protected] on behalf of [email protected]>
>> wrote:
>>
>>> I have a zfs OST that I need to drain and re-create.  This is lustre
>>> 2.7.x.  In the past, with ldiskfs OST, I did this a number of times
>>> following critical failures, and came up with the following items to be
>>> replaced on the file system once the mkfs.lustre had been run, where the
>>> new OST was mounted ldiskfs at /mnt/ost and the previous files had been
>>> saved off in a backup_directory.  This is also mostly documented in the
>>> Lustre user manual.
>>>
>>> cd backup_directory
>>> cp -fv mountdata /mnt/ost/CONFIGS
>>> cp last_rcvd /mnt/ost
>>> mkdir -p /mnt/ost/O/0
>>> chmod 700 /mnt/ost/O
>>> chmod 700 /mnt/ost/O/0
>>> cp -fv LAST_ID /mnt/ost/O/0
>>>
>>> Now, with the zfs OST, mounted as type zfs, I see this file structure
>>> drwxr-xr-x 2 root root    2 Dec 31  1969 CONFIGS
>>> -rw-rw-rw- 1 root root    2 Dec 31  1969 fld
>>> -rw-r--r-- 1 root root    0 Dec 31  1969 health_check
>>> -rw-r--r-- 1 root root 8576 Dec 31  1969 last_rcvd
>>> drw-r--r-- 2 root root    2 Dec 31  1969 LFSCK
>>> -rw-r--r-- 1 root root   64 Dec 31  1969 lfsck_bookmark
>>> -rw-r--r-- 1 root root  256 Dec 31  1969 lfsck_layout
>>> drwxr-xr-x 1 root root    2 Dec 31  1969 O
>>> drwxr-xr-x 1 root root    2 Dec 31  1969 oi.0
>>> ... with lots more oi.N
>>>
>>> [root@umdist10 ~]# ll /mnt/temp/CONFIGS
>>> total 258
>>> -rw-r--r-- 1 root root     0 Dec 31  1969 params
>>> -rw-r--r-- 1 root root 12104 Dec 31  1969 T3test-client
>>> -rw-r--r-- 1 root root  8880 Dec 31  1969 T3test-OST0000
>>> [root@umdist10 ~]# ll /mnt/temp/O
>>> total 17
>>> drwxr-xr-x 1 root root 2 Dec 31  1969 0
>>> [root@umdist10 ~]# ll /mnt/temp/O/0
>>> total 98784
>>> drwxr-xr-x 1 root root 2 Dec 31  1969 d0
>>> ... 31 more dN directories
>>>
>>> So, specifically, there is no LAST_ID file, and the CONFIGS directory
>>> does not contain a "mountdata" file.
>>>
>>> My question is, do I just not worry about those 2 files, copy back the
>>> last_rcvd file, remount the re-created OST, and continue on with life?
>>> Or, are there other files from this zfs directory structure that should
>>> also be saved and put back once the zpool is re-created?
>> For ZFS, the "mountdata" equivalent is stored as ZFS "lustre.*" dataset
>> parameters that can be read and modified via "zfs" in userspace.  That
>> said, if you haven't changed anything from defaults they should be
>> recreated by mkfs.lustre the same way again.
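>> For example, something along these lines should show them (the pool and
>> dataset name here is just a placeholder for your OST dataset):
>>
>>   zfs get all ostpool/ost0 | grep lustre:
>>
>> and "zfs set" can be used the same way if any of them ever need to be
>> changed by hand.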
>>
>> For replacing the OST, you should use the "--replace" option in addition
>> to specifying "--index=N", so that it doesn't try to re-register with the
>> MGS as a new OST.
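>> Something along these lines, for example (the fsname and index are taken
>> from your listing; the pool/dataset name and MGS NID are placeholders):
>>
>>   mkfs.lustre --ost --backfstype=zfs --replace --index=0 \
>>       --fsname=T3test --mgsnode=<mgs_nid> ostpool/ost0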
>>
>> Copying "last_rcvd" isn't strictly needed, but doesn't hurt either.
>>
>> As for LAST_ID, it seems that this object is present on ZFS OSTs, but
>> doesn't have a name.  It is only looked up by FID from the OI, though it
>> would make sense to also create this file in the namespace for reference.
>> With newer versions of Lustre (2.5.0 and later with the LU-14 fixes) the
>> LAST_ID file will be recreated based on info from the MDS, so this
>> shouldn't be an obstacle either.
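>> If you want to sanity-check this after the OST is remounted, the last
>> allocated object IDs can be read on the OSS with something like:
>>
>>   lctl get_param obdfilter.*.last_id
>>
>> (exact parameter name from memory, so treat it as a pointer rather than
>> gospel).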
>>
>> Bob, I'd suggest backing up the available files as you normally would, but
>> AFAIK you shouldn't actually need any of them anymore.  If there are gaps
>> in the user manual in this area, it would be good to file an LUDOC bug
>> and/or a patch to the manual to address the gaps.
>>
>> Cheers, Andreas
>
>


Cheers, Andreas
-- 
Andreas Dilger

Lustre Principal Architect
Intel High Performance Data Division


