Hi Peter. It all went OK, no problem at all. The metadb replica (on slice 7) was on the first cylinder, so I think that is why it went fine.
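In case it is useful to anyone reading the archive later, this is roughly how I had checked the drives beforehand. It is only a sketch: the set name TEST and the drive c1t2d0 are placeholders for my own layout.

    # list the state database replicas in the set and the slice they sit on
    metadb -s TEST -i

    # check the drive's partition table: slice 7 already started at
    # cylinder 0 and held the replica, which is the layout metaset(1M)
    # expects, so the drive was not repartitioned when it was moved
    prtvtoc /dev/rdsk/c1t2d0s2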
Thanks, Alfredo

On Thu, May 14, 2009 at 4:09 PM, Peter Dennis - Sustaining Engineer
<[email protected]> wrote:
>
> Fred wrote:
>
>> Hi Peter.
>> If I do so (metaclear), do you think I can move those devices to another
>> metaset without losing any data?
>
> In theory the answer is yes.
>
> When you add a disk into a diskset the disk will be repartitioned so
> that SVM can do 'stuff' to it (the stuff being reserving space for
> a replica on a slice). This partition layout is documented in the
> man page for metaset(1M).
>
> If you meet these requirements then the disk will not be repartitioned
> when you add the disk into the new set.
>
> You can then run the metainit commands to rebuild the required
> metadevices, being careful about the creation of mirrors - see
> the end of the man page for metainit(1M) for details.
>
> If you are using soft partitions then you will need to consider
> saving a copy of the configuration (metastat -p <softpartition>)
> so that you can rebuild them using the appropriate metainit command
> (again, look at the man page for metainit).
>
> In all cases take a copy of the data prior to trying any of this.
>
> Ta
> pete
>
>> On Thu, May 14, 2009 at 3:51 PM, Peter Dennis - Sustaining Engineer
>> <[email protected]> wrote:
>>
>>    Fred wrote:
>>
>>        Hi all.
>>        I have a diskset with 5 disks in it. I have 2 nodes in a cluster
>>        and it is all running OK. Now I'd like to create a new diskset
>>        with 2 disks taken from the first diskset.
>>
>>        What's the best way to do that?
>>
>>        I've already unmounted all the file systems and I tried
>>        metaset -s TEST -d <drivename>
>>
>>    Does the output of metastat -a show the devices in use by
>>    any metadevices? If so you will need to (meta)clear
>>    those first.
>>
>>        but I got the message
>>        <drivename> is in use...
>>
>>        Any idea?
>>
>>        Thanks
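For the archives, here is roughly the sequence Peter describes, as I understood and ran it. A sketch only: the set names OLD and NEW, the metadevice d10, the host names and the drive names are placeholders for my layout, so check metaset(1M), metaclear(1M) and metainit(1M) before reusing any of it.

    # keep a description of the existing metadevices (including any soft
    # partitions) so they can be rebuilt later with metainit
    metastat -s OLD -p > /var/tmp/OLD.md.tab

    # confirm nothing on the drives being moved is still part of a
    # metadevice, then clear the metadevices that used them
    metastat -s OLD
    metaclear -s OLD d10

    # release the drives from the old set and add them to the new one;
    # because the partition layout already matched what metaset expects,
    # the drives were not repartitioned
    metaset -s OLD -d c1t2d0 c1t3d0
    metaset -s NEW -a -h node1 node2
    metaset -s NEW -a c1t2d0 c1t3d0

    # rebuild the metadevice(s) in the new set, e.g. a simple concat
    metainit -s NEW d10 1 1 c1t2d0s0

And of course take a backup of the data first, as Peter says.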
