Nathaniel Rutman wrote:

> You have to deactivate the OSCs that reference that OST.  From
> https://mail.clusterfs.com/wikis/lustre/MountConf:
>
> As of beta7, an OST can be permanently removed from a filesystem. Note
> that any files that have stripes on the removed OST will henceforth
> return EIO.
>
> mgs> lctl conf_param testfs-OST0001.osc.active=0

Thanks Nathaniel, I found it shortly after posting the message. However,
maybe I didn't do it right, because I still get error messages about this node.
The steps I took were:

1. unmounted /lustre from the frontend
2. unmounted /lustre-storage on node17
3. on frontend, ran lctl conf_param bcffs-OST0002.osc.active=0
4. on frontend, remounted bcffs with:

mount -o exclude=bcffs-OST0002 -t lustre [EMAIL PROTECTED]:/bcffs /lustre
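
For reference, here is the whole sequence consolidated (the MGS address
shows as [EMAIL PROTECTED] only because the archive redacted it, and the
lctl dl call at the end is just my guess at a quick way to see what state
the excluded OSC ends up in):

# on the frontend (client), unmount the filesystem
umount /lustre

# on node17, unmount the backing target of the OST being removed
umount /lustre-storage

# on the frontend (MGS/MDS), permanently deactivate the OSC for that OST
lctl conf_param bcffs-OST0002.osc.active=0

# remount the client, excluding the removed OST
mount -o exclude=bcffs-OST0002 -t lustre [EMAIL PROTECTED]:/bcffs /lustre

# list Lustre devices to see how bcffs-OST0002-osc is listed (sanity check)
lctl dl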

dmesg shows:

Lustre: 31233:0:(quota_master.c:1105:mds_quota_recovery()) Not all osts
are active, abort quota recovery
Lustre: MDS bcffs-MDT0000: bcffs-OST000c_UUID now active, resetting orphans
LustreError: 32542:0:(file.c:1012:ll_glimpse_size()) obd_enqueue returned
rc -5, returning -EIO
Lustre: client 000001017ac3fc00 umount complete
Lustre: 1257:0:(obd_mount.c:1675:lustre_check_exclusion()) Excluding
bcffs-OST0002-osc (on exclusion list)
Lustre: 1257:0:(recover.c:231:ptlrpc_set_import_active()) setting import
bcffs-OST0002_UUID INACTIVE by administrator request
Lustre: osc.: set active=0 to 0
LustreError: 1257:0:(lov_obd.c:139:lov_connect_obd()) not connecting OSC
bcffs-OST0002_UUID; administratively disabled
Lustre: Client bcffs-client has started
Lustre: 2484:0:(quota_master.c:1105:mds_quota_recovery()) Not all osts are
active, abort quota recovery
Lustre: MDS bcffs-MDT0000: bcffs-OST000d_UUID now active, resetting orphans
Lustre: MGS: haven't heard from client
454dd520-82b9-e3e6-8fcb-800a75807121 (at [EMAIL PROTECTED]) in 228
seconds. I think it's dead, and I am evicting it.
LustreError: 4834:0:(file.c:1012:ll_glimpse_size()) obd_enqueue returned
rc -5, returning -EIO

Are these normal error messages? I'm asking because I'm about to copy all
of the NCBI databases to the Lustre filesystem. I don't want to start the
copy, then have Lustre crash and have to rebuild everything all over again
minus this node.
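
My working assumption (based on the note above that files striped on the
removed OST will return EIO) is that the ll_glimpse_size -EIO lines come
from files that still have objects on bcffs-OST0002. If I have the lfs find
syntax right for this version, something like this should list them:

# list files that have stripes on the removed OST (syntax assumed)
lfs find --obd bcffs-OST0002_UUID /lustre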


-- 
Jeremy Mann
[EMAIL PROTECTED]

University of Texas Health Science Center
Bioinformatics Core Facility
http://www.bioinformatics.uthscsa.edu
Phone: (210) 567-2672
