Re: [zfs-discuss] Migrating 512 byte block zfs root pool to 4k disks
On Wed, Jun 27, 2012 at 01:42:27AM +0300, Pasi Kärkkäinen wrote:
> On Fri, Jun 15, 2012 at 06:23:42PM -0500, Timothy Coalson wrote:
> > Sorry, if you meant distinguishing between true 512 and emulated
> > 512/4k, I don't know; it may be vendor-specific as to whether they
> > expose it through device commands at all.
>
> At least on Linux you can see the info from:
>
> /sys/block/<device>/queue/logical_block_size = 512
> /sys/block/<device>/queue/physical_block_size = 4096

Oh, and these methods also work on Linux:

# hdparm -I /dev/sdc | grep Sector
  Logical Sector size:     512 bytes
  Physical Sector size:    4096 bytes
  Logical Sector-0 offset: 512 bytes

And then there's the BLKPBSZGET ioctl. So I'd be surprised if that stuff isn't implemented on *solaris..

-- Pasi

> > Tim
> >
> > On Fri, Jun 15, 2012 at 6:02 PM, Timothy Coalson wrote:
> > > On Fri, Jun 15, 2012 at 5:35 PM, Jim Klimov wrote:
> > >> 2012-06-16 0:05, John Martin wrote:
> > >>> It's important to know whether the drive is really 4096p or 512e/4096p.
> > >>
> > >> BTW, is there a surefire way to learn that programmatically
> > >> from Solaris or its derivatives?
> > >
> > > prtvtoc should show the block size the OS thinks it has. Or
> > > you can use format, select the disk from a list that includes the
> > > model number and size, and use "verify".
> > >
> > > Tim

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
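For anyone who wants to script the Linux-side checks above, here is a minimal sketch. It assumes a modern Linux with the sysfs queue attributes shown in the message; the device name sdc and the ashift_for helper (which just takes log2 of the physical sector size, the value a ZFS pool's ashift should match) are illustrative additions, not from the thread.

```shell
#!/bin/sh
# ashift_for: log2 of a sector size, e.g. 512 -> 9, 4096 -> 12.
ashift_for() {
    bytes=$1
    ashift=0
    while [ "$bytes" -gt 1 ]; do
        bytes=$((bytes / 2))
        ashift=$((ashift + 1))
    done
    echo "$ashift"
}

# Ask the kernel what it thinks about a disk (guarded so this is a
# no-op on machines without such a device):
disk=sdc
if [ -r "/sys/block/$disk/queue/physical_block_size" ]; then
    lbs=$(cat "/sys/block/$disk/queue/logical_block_size")
    pbs=$(cat "/sys/block/$disk/queue/physical_block_size")
    echo "$disk: logical=$lbs physical=$pbs suggested ashift=$(ashift_for "$pbs")"
fi

# blockdev wraps the BLKSSZGET/BLKPBSZGET ioctls mentioned above:
#   blockdev --getss   /dev/sdc    # logical sector size
#   blockdev --getpbsz /dev/sdc    # physical sector size
```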
Re: [zfs-discuss] Migrating 512 byte block zfs root pool to 4k disks
On Fri, Jun 15, 2012 at 06:23:42PM -0500, Timothy Coalson wrote:
> Sorry, if you meant distinguishing between true 512 and emulated
> 512/4k, I don't know; it may be vendor-specific as to whether they
> expose it through device commands at all.

At least on Linux you can see the info from:

/sys/block/<device>/queue/logical_block_size = 512
/sys/block/<device>/queue/physical_block_size = 4096

-- Pasi

> Tim
>
> On Fri, Jun 15, 2012 at 6:02 PM, Timothy Coalson wrote:
> > On Fri, Jun 15, 2012 at 5:35 PM, Jim Klimov wrote:
> >> 2012-06-16 0:05, John Martin wrote:
> >>> It's important to know whether the drive is really 4096p or 512e/4096p.
> >>
> >> BTW, is there a surefire way to learn that programmatically
> >> from Solaris or its derivatives?
> >
> > prtvtoc should show the block size the OS thinks it has. Or
> > you can use format, select the disk from a list that includes the
> > model number and size, and use "verify".
> >
> > Tim
Re: [zfs-discuss] Migrating 512 byte block zfs root pool to 4k disks
I tried to use cylinder 0 for root on x86, back in the UFS days, and I lost the VTOC on both mirrored disks. The installer had selected cylinder 1 as the starting cylinder for the first disk, and I thought I should be able to use cylinder 0 as well, so for the mirror I partitioned it to start from 0. I then removed the first disk, changed its starting cylinder to 0, and added it back. When I later tried to reboot the system, both VTOCs were lost. I had to whip up a program that scanned the disk to find my UFS filesystems so that I could put a proper VTOC back, boot the system, and then change it back to start at cylinder 1. I have always left cylinder 0 alone since then.

Thomas

2012-06-16 18:23, Richard Elling wrote:
> On Jun 15, 2012, at 7:37 AM, Hung-Sheng Tsao Ph.D. wrote:
> > by the way, when you format, start with cylinder 1; do not use 0
>
> There is no requirement for skipping cylinder 0 for root on Solaris,
> and there never has been.
> -- richard
Re: [zfs-discuss] Migrating 512 byte block zfs root pool to 4k disks
That may be so, but in all x86 installations, if one chooses the defaults, or uses a VirtualBox image (S11, S11 Express, OI, S10u10), the ZFS rpool disk partition starts from cylinder 1. On this list I have even come across users with issues with zpool create when the disk partition started from cylinder 0. Better safe than sorry.

regards

On 6/16/2012 12:23 PM, Richard Elling wrote:
> On Jun 15, 2012, at 7:37 AM, Hung-Sheng Tsao Ph.D. wrote:
> > by the way, when you format, start with cylinder 1; do not use 0
>
> There is no requirement for skipping cylinder 0 for root on Solaris,
> and there never has been.
> -- richard
Re: [zfs-discuss] Migrating 512 byte block zfs root pool to 4k disks
On 06/16/12 12:23, Richard Elling wrote:
> On Jun 15, 2012, at 7:37 AM, Hung-Sheng Tsao Ph.D. wrote:
> > by the way, when you format, start with cylinder 1; do not use 0
>
> There is no requirement for skipping cylinder 0 for root on Solaris,
> and there never has been.

Maybe not for core Solaris, but it is still wise advice if you plan to use Oracle ASM. See section 3.3.1.4, 2c:
http://docs.oracle.com/cd/E11882_01/install.112/e24616/storage.htm#CACHGBAH
Re: [zfs-discuss] Migrating 512 byte block zfs root pool to 4k disks
On x86, cylinder 0 must be left out of ZFS root pools. Been there.

Sent from my Android phone

Richard Elling wrote:
> On Jun 15, 2012, at 7:37 AM, Hung-Sheng Tsao Ph.D. wrote:
> > by the way, when you format, start with cylinder 1; do not use 0
>
> There is no requirement for skipping cylinder 0 for root on Solaris,
> and there never has been.
> -- richard
> -- ZFS and performance consulting
> http://www.RichardElling.com
Re: [zfs-discuss] Migrating 512 byte block zfs root pool to 4k disks
On Jun 15, 2012, at 7:37 AM, Hung-Sheng Tsao Ph.D. wrote:
> by the way, when you format, start with cylinder 1; do not use 0

There is no requirement for skipping cylinder 0 for root on Solaris, and there never has been.

-- richard

--
ZFS and performance consulting
http://www.RichardElling.com
Re: [zfs-discuss] Migrating 512 byte block zfs root pool to 4k disks
I'm on OpenIndiana 151-a4.

Sent from my Android phone

Cindy Swearingen wrote:
> Hi Hans,
>
> It's important to identify your OS release to determine if booting
> from a 4k disk is supported.
>
> Thanks, Cindy
>
> On 06/15/12 06:14, Hans J Albertsson wrote:
> > I've got my root pool on a mirror on 2 512 byte blocksize disks.
> > I want to move the root pool to two 2 TB disks with 4k blocks.
> > The server only has room for two disks. I do have an esata connector,
> > though, and a suitable external cabinet for connecting one extra disk.
> >
> > How would I go about migrating/expanding the root pool to the larger
> > disks so I can then use the larger disks for booting?
> >
> > I have no extra machine to use.
> >
> > Sent from my Android phone
Re: [zfs-discuss] Migrating 512 byte block zfs root pool to 4k disks
Sorry, if you meant distinguishing between true 512 and emulated 512/4k, I don't know; it may be vendor-specific as to whether they expose it through device commands at all.

Tim

On Fri, Jun 15, 2012 at 6:02 PM, Timothy Coalson wrote:
> On Fri, Jun 15, 2012 at 5:35 PM, Jim Klimov wrote:
> > 2012-06-16 0:05, John Martin wrote:
> > > It's important to know whether the drive is really 4096p or 512e/4096p.
> >
> > BTW, is there a surefire way to learn that programmatically
> > from Solaris or its derivatives?
>
> prtvtoc should show the block size the OS thinks it has. Or
> you can use format, select the disk from a list that includes the
> model number and size, and use "verify".
>
> Tim
Re: [zfs-discuss] Migrating 512 byte block zfs root pool to 4k disks
On Fri, Jun 15, 2012 at 5:35 PM, Jim Klimov wrote:
> 2012-06-16 0:05, John Martin wrote:
> > It's important to know whether the drive is really 4096p or 512e/4096p.
>
> BTW, is there a surefire way to learn that programmatically
> from Solaris or its derivatives?

prtvtoc should show the block size the OS thinks it has. Or you can use format, select the disk from a list that includes the model number and size, and use "verify".

Tim
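As a concrete illustration of the prtvtoc route: the sector size shows up in prtvtoc's comment header, so a tiny awk filter can pull it out of saved output. The sample header lines in the comments are illustrative of the Solaris output format, and the helper name is made up for this sketch.

```shell
# prtvtoc prints a comment header along the lines of:
#   * /dev/rdsk/c0t0d0s2 partition map
#   * Dimensions:
#   *     512 bytes/sector
# sector_size_of extracts the bytes/sector number from saved output.
sector_size_of() {
    awk '/bytes\/sector/ { print $2; exit }' "$1"
}

# On a live Solaris system (not run here):
#   prtvtoc /dev/rdsk/c0t0d0s2 > /tmp/vtoc.out
#   sector_size_of /tmp/vtoc.out
```

Note this only reports the size the OS believes, which is exactly Jim's concern about 512e drives that lie.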
Re: [zfs-discuss] Migrating 512 byte block zfs root pool to 4k disks
2012-06-16 0:05, John Martin wrote:
> It's important to know whether the drive is really 4096p or 512e/4096p.

BTW, is there a surefire way to learn that programmatically from Solaris or its derivatives (i.e. from SCSI driver options, format/scsi/inquiry, SMART or some similar way)? Or, if the drive lies, saying its sectors are 512b while they are physically 4KB, is that undetectable except by reading vendor specs?

Thanks,
//Jim
Re: [zfs-discuss] Migrating 512 byte block zfs root pool to 4k disks
On 06/15/12 15:52, Cindy Swearingen wrote:
> It's important to identify your OS release to determine if booting
> from a 4k disk is supported.

In addition, whether the drive is really 4096p or 512e/4096p.
Re: [zfs-discuss] Migrating 512 byte block zfs root pool to 4k disks
Hi Hans,

It's important to identify your OS release to determine if booting from a 4k disk is supported.

Thanks,
Cindy

On 06/15/12 06:14, Hans J Albertsson wrote:
> I've got my root pool on a mirror on 2 512 byte blocksize disks.
> I want to move the root pool to two 2 TB disks with 4k blocks.
> The server only has room for two disks. I do have an esata connector,
> though, and a suitable external cabinet for connecting one extra disk.
>
> How would I go about migrating/expanding the root pool to the larger
> disks so I can then use the larger disks for booting?
>
> I have no extra machine to use.
>
> Sent from my Android phone
Re: [zfs-discuss] Migrating 512 byte block zfs root pool to 4k disks
Hi, what is the version of Solaris? uname -a output?

regards

On 6/15/2012 10:37 AM, Hung-Sheng Tsao Ph.D. wrote:
> By the way, when you format, start with cylinder 1; do not use 0.
> Depending on the version of Solaris, you may not be able to use 2TB
> as root.
>
> regards
>
> On 6/15/2012 9:53 AM, Hung-Sheng Tsao Ph.D. wrote:
> > Yes. Which version of Solaris or BSD are you using? For BSD I do not
> > know the steps for creating a new BE (boot environment). For S10,
> > OpenSolaris and Solaris Express (maybe other OpenSolaris forks), you
> > use Live Upgrade; for S11 you use beadm.
> >
> > regards
> >
> > On 6/15/2012 9:13 AM, Hans J Albertsson wrote:
> > > I suppose I must start by labelling the new disk properly, and
> > > give the s0 partition to zpool, so the new zpool can be booted?
> > >
> > > Sent from my Android phone
> > >
> > > "Hung-Sheng Tsao Ph.D." wrote:
> > > > One possible way:
> > > > 1) break the mirror
> > > > 2) install the new HDD, format the HDD
> > > > 3) create a new zpool on the new HDD with 4k blocks
> > > > 4) create a new BE on the new pool with the old root pool as the
> > > >    source (not sure which version of Solaris or OpenSolaris you
> > > >    are using; the procedure may differ depending on the version)
> > > > 5) activate the new BE
> > > > 6) boot the new BE
> > > > 7) destroy the old zpool
> > > > 8) replace the old HDD with the second new HDD
> > > > 9) format the HDD
> > > > 10) attach the HDD to the new root pool
> > > >
> > > > regards
> > > >
> > > > On 6/15/2012 8:14 AM, Hans J Albertsson wrote:
> > > > > I've got my root pool on a mirror on 2 512 byte blocksize disks.
> > > > > I want to move the root pool to two 2 TB disks with 4k blocks.
> > > > > The server only has room for two disks. I do have an esata
> > > > > connector, though, and a suitable external cabinet for
> > > > > connecting one extra disk.
> > > > >
> > > > > How would I go about migrating/expanding the root pool to the
> > > > > larger disks so I can then use the larger disks for booting?
> > > > >
> > > > > I have no extra machine to use.
> > > > >
> > > > > Sent from my Android phone
Re: [zfs-discuss] Migrating 512 byte block zfs root pool to 4k disks
By the way, when you format, start with cylinder 1; do not use 0. Depending on the version of Solaris, you may not be able to use 2TB as root.

regards

On 6/15/2012 9:53 AM, Hung-Sheng Tsao Ph.D. wrote:
> Yes. Which version of Solaris or BSD are you using? For BSD I do not
> know the steps for creating a new BE (boot environment). For S10,
> OpenSolaris and Solaris Express (maybe other OpenSolaris forks), you
> use Live Upgrade; for S11 you use beadm.
>
> regards
>
> On 6/15/2012 9:13 AM, Hans J Albertsson wrote:
> > I suppose I must start by labelling the new disk properly, and give
> > the s0 partition to zpool, so the new zpool can be booted?
> >
> > Sent from my Android phone
> >
> > "Hung-Sheng Tsao Ph.D." wrote:
> > > One possible way:
> > > 1) break the mirror
> > > 2) install the new HDD, format the HDD
> > > 3) create a new zpool on the new HDD with 4k blocks
> > > 4) create a new BE on the new pool with the old root pool as the
> > >    source (not sure which version of Solaris or OpenSolaris you
> > >    are using; the procedure may differ depending on the version)
> > > 5) activate the new BE
> > > 6) boot the new BE
> > > 7) destroy the old zpool
> > > 8) replace the old HDD with the second new HDD
> > > 9) format the HDD
> > > 10) attach the HDD to the new root pool
> > >
> > > regards
> > >
> > > On 6/15/2012 8:14 AM, Hans J Albertsson wrote:
> > > > I've got my root pool on a mirror on 2 512 byte blocksize disks.
> > > > I want to move the root pool to two 2 TB disks with 4k blocks.
> > > > The server only has room for two disks. I do have an esata
> > > > connector, though, and a suitable external cabinet for connecting
> > > > one extra disk.
> > > >
> > > > How would I go about migrating/expanding the root pool to the
> > > > larger disks so I can then use the larger disks for booting?
> > > >
> > > > I have no extra machine to use.
> > > >
> > > > Sent from my Android phone
Re: [zfs-discuss] Migrating 512 byte block zfs root pool to 4k disks
Yes. Which version of Solaris or BSD are you using? For BSD I do not know the steps for creating a new BE (boot environment). For S10, OpenSolaris and Solaris Express (maybe other OpenSolaris forks), you use Live Upgrade; for S11 you use beadm.

regards

On 6/15/2012 9:13 AM, Hans J Albertsson wrote:
> I suppose I must start by labelling the new disk properly, and give
> the s0 partition to zpool, so the new zpool can be booted?
>
> Sent from my Android phone
>
> "Hung-Sheng Tsao Ph.D." wrote:
> > One possible way:
> > 1) break the mirror
> > 2) install the new HDD, format the HDD
> > 3) create a new zpool on the new HDD with 4k blocks
> > 4) create a new BE on the new pool with the old root pool as the
> >    source (not sure which version of Solaris or OpenSolaris you
> >    are using; the procedure may differ depending on the version)
> > 5) activate the new BE
> > 6) boot the new BE
> > 7) destroy the old zpool
> > 8) replace the old HDD with the second new HDD
> > 9) format the HDD
> > 10) attach the HDD to the new root pool
> >
> > regards
> >
> > On 6/15/2012 8:14 AM, Hans J Albertsson wrote:
> > > I've got my root pool on a mirror on 2 512 byte blocksize disks.
> > > I want to move the root pool to two 2 TB disks with 4k blocks.
> > > The server only has room for two disks. I do have an esata
> > > connector, though, and a suitable external cabinet for connecting
> > > one extra disk.
> > >
> > > How would I go about migrating/expanding the root pool to the
> > > larger disks so I can then use the larger disks for booting?
> > >
> > > I have no extra machine to use.
> > >
> > > Sent from my Android phone
Re: [zfs-discuss] Migrating 512 byte block zfs root pool to 4k disks
On 06/15/2012 03:35 PM, Johannes Totz wrote:
> On 15/06/2012 13:22, Sašo Kiselkov wrote:
>> On 06/15/2012 02:14 PM, Hans J Albertsson wrote:
>>> I've got my root pool on a mirror on 2 512 byte blocksize disks. I
>>> want to move the root pool to two 2 TB disks with 4k blocks. The
>>> server only has room for two disks. I do have an esata connector,
>>> though, and a suitable external cabinet for connecting one extra disk.
>>>
>>> How would I go about migrating/expanding the root pool to the
>>> larger disks so I can then use the larger disks for booting?
>>> I have no extra machine to use.
>>
>> Suppose we call the disks like so:
>>
>> A, B: your old 512-block drives
>> X, Y: your new 2TB drives
>>
>> The easiest way would be to simply:
>>
>> 1) zpool set autoexpand=on rpool
>> 2) offline the A drive
>> 3) physically replace it with the X drive
>> 4) do a "zpool replace" on it and wait for it to resilver
>
> When sector size differs, attaching it is going to fail (at least on fbsd).
> You might not get around a send-receive cycle...

Jim Klimov has already posted a way better guide, which rebuilds the pool using the old one's data, so yeah, the replace route I recommended here is rendered moot.

-- Saso
Re: [zfs-discuss] Migrating 512 byte block zfs root pool to 4k disks
On 15/06/2012 13:22, Sašo Kiselkov wrote:
> On 06/15/2012 02:14 PM, Hans J Albertsson wrote:
>> I've got my root pool on a mirror on 2 512 byte blocksize disks. I
>> want to move the root pool to two 2 TB disks with 4k blocks. The
>> server only has room for two disks. I do have an esata connector,
>> though, and a suitable external cabinet for connecting one extra disk.
>>
>> How would I go about migrating/expanding the root pool to the
>> larger disks so I can then use the larger disks for booting?
>> I have no extra machine to use.
>
> Suppose we call the disks like so:
>
> A, B: your old 512-block drives
> X, Y: your new 2TB drives
>
> The easiest way would be to simply:
>
> 1) zpool set autoexpand=on rpool
> 2) offline the A drive
> 3) physically replace it with the X drive
> 4) do a "zpool replace" on it and wait for it to resilver

When sector size differs, attaching it is going to fail (at least on fbsd). You might not get around a send-receive cycle...

> 5) offline the B drive
> 6) physically replace it with the Y drive
> 7) do a "zpool replace" on it and wait for it to resilver
>
> At this point, you should have a 2TB rpool (thanks to the
> "autoexpand=on" in step 1). Unfortunately, to my knowledge, there is
> no way to convert an ashift=9 pool (512 byte sectors) to an ashift=12
> pool (4k sectors). Perhaps some great ZFS guru can shed more light on
> this.
>
> -- Saso
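Since a mismatched sector size can make a replace/attach fail, it helps to know what ashift the existing pool actually uses before trying. One commonly used way (an assumption here, not something from this thread) is to grep it out of zdb's cached-config dump; the function below is only defined, to be run on a system that actually has the pool.

```shell
# Report the first ashift value zdb prints for a pool's vdev config.
# ashift=9 means a 512-byte pool, ashift=12 a 4k pool.
pool_ashift() {
    zdb -C "$1" | awk '/ashift/ { print $NF; exit }'
}

# Usage on a live system:  pool_ashift rpool
```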
Re: [zfs-discuss] Migrating 512 byte block zfs root pool to 4k disks
2012-06-15 17:18, Jim Klimov wrote:
> 7) If you're on live media, try to rename the new "rpool2" to become
> "rpool", i.e.:
> # zpool export rpool2
> # zpool export rpool
> # zpool import -N rpool rpool2
> # zpool export rpool

Oops, bad typo in the third line; it should be:

# zpool export rpool2
# zpool export rpool
# zpool import -N rpool2 rpool
# zpool export rpool

Sorry,
//Jim
Re: [zfs-discuss] Migrating 512 byte block zfs root pool to 4k disks
2012-06-15 16:14, Hans J Albertsson wrote:
> I've got my root pool on a mirror on 2 512 byte blocksize disks.
> I want to move the root pool to two 2 TB disks with 4k blocks.
> The server only has room for two disks. I do have an esata connector,
> though, and a suitable external cabinet for connecting one extra disk.
>
> How would I go about migrating/expanding the root pool to the larger
> disks so I can then use the larger disks for booting?
>
> I have no extra machine to use.

I think this question was recently asked and discussed on another list; my suggestion would be more low-level than that suggested by others:

0) Boot from a LiveCD/LiveUSB so that your rpool's environment doesn't change during the migration, and so that you can ultimately rename your new rpool to its old name. It is not fatal if you don't use a LiveMedia environment, but it can be problematic to rename a running rpool, and some of your programs might depend on its known name as recorded in some config file or service properties.

1) Break the existing mirror, reducing it to a single-disk pool.

2) Install the new disk, slice it, create an "rpool2" on it.

NOTE that you might not want all 2TB to be the "rpool2"; rather, you might dedicate several tens of GBs to a root-pool partition or slice, and store the rest as a data pool - perhaps implemented with different choices on caching, dedup, etc.

NOTE also that you might need to apply some tricks to enforce that the new pool uses ashift=12 if that (4KB) is your hardware's native sector size. We had some info recently on the mailing lists and carried that over to the illumos wiki:
http://wiki.illumos.org/display/illumos/ZFS+and+Advanced+Format+disks

3) # zfs snapshot -r rpool@20120615-preMigration

4) # zfs send -R rpool@20120615-preMigration | \
     zfs recv -vFd rpool2

NOTE this assumes you do want the whole old rpool in rpool2. If you decide you want something on a data pool, i.e. the "/export/*" datasets, you'd have to make that pool and send those datasets there in a similar manner, and send the root pool datasets not in one recursive command, but in several sets, i.e. for rpool/ROOT, rpool/swap and rpool/dump in the default layout.

5) # zpool get all rpool
   # zpool get all rpool2

Compare the pool settings. Carry over the "local" changes with:

# zpool set property=value rpool2

You'll likely change bootfs, failmode, maybe some others.

6) installgrub onto the new disk so it becomes bootable.

7) If you're on live media, try to rename the new "rpool2" to become "rpool", i.e.:

# zpool export rpool2
# zpool export rpool
# zpool import -N rpool rpool2
# zpool export rpool

8) Reboot, disconnecting your remaining old disk, and hope that the new pool boots okay. It should ;)

When it's ok, attach the second new disk to the system and slice it similarly (prtvtoc | fmthard usually helps; google it). Then attach the new second disk's slices to your new rpool (and data pool if you've made one), installgrub onto the second disk - and you're done.

HTH,
//Jim Klimov
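Jim's steps above can be condensed into one sketch. Everything here that isn't in the post - the example device name, the bootfs value, the installgrub stage paths - is a placeholder to adapt; the sequence is wrapped in a function so nothing runs by accident, and the pool rename uses the command order from Jim's own follow-up correction.

```shell
# Sketch of the send/recv migration; adapt device and BE names.
migrate_rpool() {
    new_disk=c1t1d0s0                  # placeholder slice on the new disk
    snap=rpool@20120615-preMigration

    # Step 2: new pool on the new disk (ensure ashift=12 via the tricks
    # on the illumos wiki page linked above).
    zpool create -f rpool2 "$new_disk"

    # Steps 3-4: replicate everything recursively.
    zfs snapshot -r "$snap"
    zfs send -R "$snap" | zfs recv -vFd rpool2

    # Step 5: carry over local pool properties (BE name is a placeholder).
    zpool set bootfs=rpool2/ROOT/mybe rpool2

    # Step 6: make the new disk bootable.
    installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/"$new_disk"

    # Step 7 (from live media): rename rpool2 -> rpool.
    zpool export rpool2
    zpool export rpool
    zpool import -N rpool2 rpool
    zpool export rpool
}
```

The function is a map of the procedure, not a turnkey script; run the steps interactively and check each one before moving on.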
Re: [zfs-discuss] Migrating 512 byte block zfs root pool to 4k disks
I suppose I must start by labelling the new disk properly, and give the s0 partition to zpool, so the new zpool can be booted?

Sent from my Android phone

"Hung-Sheng Tsao Ph.D." wrote:
> One possible way:
> 1) break the mirror
> 2) install the new HDD, format the HDD
> 3) create a new zpool on the new HDD with 4k blocks
> 4) create a new BE on the new pool with the old root pool as the
>    source (not sure which version of Solaris or OpenSolaris you are
>    using; the procedure may differ depending on the version)
> 5) activate the new BE
> 6) boot the new BE
> 7) destroy the old zpool
> 8) replace the old HDD with the second new HDD
> 9) format the HDD
> 10) attach the HDD to the new root pool
>
> regards
>
> On 6/15/2012 8:14 AM, Hans J Albertsson wrote:
> > I've got my root pool on a mirror on 2 512 byte blocksize disks.
> > I want to move the root pool to two 2 TB disks with 4k blocks.
> > The server only has room for two disks. I do have an esata
> > connector, though, and a suitable external cabinet for connecting
> > one extra disk.
> >
> > How would I go about migrating/expanding the root pool to the
> > larger disks so I can then use the larger disks for booting?
> >
> > I have no extra machine to use.
> >
> > Sent from my Android phone
Re: [zfs-discuss] Migrating 512 byte block zfs root pool to 4k disks
One possible way:

1) break the mirror
2) install the new HDD, format the HDD
3) create a new zpool on the new HDD with 4k blocks
4) create a new BE on the new pool with the old root pool as the
   source (not sure which version of Solaris or OpenSolaris you are
   using; the procedure may differ depending on the version)
5) activate the new BE
6) boot the new BE
7) destroy the old zpool
8) replace the old HDD with the second new HDD
9) format the HDD
10) attach the HDD to the new root pool

regards

On 6/15/2012 8:14 AM, Hans J Albertsson wrote:
> I've got my root pool on a mirror on 2 512 byte blocksize disks.
> I want to move the root pool to two 2 TB disks with 4k blocks.
> The server only has room for two disks. I do have an esata connector,
> though, and a suitable external cabinet for connecting one extra disk.
>
> How would I go about migrating/expanding the root pool to the larger
> disks so I can then use the larger disks for booting?
>
> I have no extra machine to use.
>
> Sent from my Android phone
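On Solaris 11 specifically, steps 3-6 of this procedure map onto beadm, which can create a boot environment directly in another pool with -p. The device and BE names below are placeholders and this is an untested sketch (the function is defined, not run); on S10 the equivalent would go through Live Upgrade instead.

```shell
# Sketch of steps 3-6 using beadm (Solaris 11); placeholders throughout.
new_be_on_new_pool() {
    zpool create rpool2 c1t1d0s0       # step 3: new pool on the new HDD
    beadm create -p rpool2 newBE       # step 4: new BE in the new pool
    beadm activate newBE               # step 5: boot it by default
    # step 6: reboot into newBE; afterwards (steps 7-10):
    #   zpool destroy rpool
    #   zpool attach rpool2 c1t1d0s0 c1t2d0s0   # mirror onto the 2nd disk
}
```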
Re: [zfs-discuss] Migrating 512 byte block zfs root pool to 4k disks
On 06/15/2012 02:14 PM, Hans J Albertsson wrote:
> I've got my root pool on a mirror on 2 512 byte blocksize disks.
> I want to move the root pool to two 2 TB disks with 4k blocks.
> The server only has room for two disks. I do have an esata connector,
> though, and a suitable external cabinet for connecting one extra disk.
>
> How would I go about migrating/expanding the root pool to the larger
> disks so I can then use the larger disks for booting?
>
> I have no extra machine to use.

Suppose we call the disks like so:

A, B: your old 512-block drives
X, Y: your new 2TB drives

The easiest way would be to simply:

1) zpool set autoexpand=on rpool
2) offline the A drive
3) physically replace it with the X drive
4) do a "zpool replace" on it and wait for it to resilver
5) offline the B drive
6) physically replace it with the Y drive
7) do a "zpool replace" on it and wait for it to resilver

At this point, you should have a 2TB rpool (thanks to the "autoexpand=on" in step 1). Unfortunately, to my knowledge, there is no way to convert an ashift=9 pool (512 byte sectors) to an ashift=12 pool (4k sectors). Perhaps some great ZFS guru can shed more light on this.

-- Saso
[zfs-discuss] Migrating 512 byte block zfs root pool to 4k disks
I've got my root pool on a mirror on 2 512 byte blocksize disks. I want to move the root pool to two 2 TB disks with 4k blocks. The server only has room for two disks. I do have an esata connector, though, and a suitable external cabinet for connecting one extra disk.

How would I go about migrating/expanding the root pool to the larger disks so I can then use the larger disks for booting?

I have no extra machine to use.

Sent from my Android phone