Re: [zfs-discuss] Re-attaching zpools after machine termination [amazon ebs ec2]

2010-04-23 Thread Mark Musante
On 23 Apr, 2010, at 7.06, Phillip Oldham wrote: I've created an OpenSolaris 2009.06 x86_64 image with the zpool structure already defined. Starting an instance from this image, without attaching the EBS volume, shows the pool structure exists and that the pool state is UNAVAIL (as

Re: [zfs-discuss] Re-attaching zpools after machine termination [amazon ebs ec2]

2010-04-23 Thread Mark Musante
On 23 Apr, 2010, at 7.31, Phillip Oldham wrote: I'm not actually issuing any when starting up the new instance. None are needed; the instance is booted from an image which has the zpool configuration stored within, so simply starts and sees that the devices aren't available, which become

Re: [zfs-discuss] Re-attaching zpools after machine termination [amazon ebs ec2]

2010-04-23 Thread Mark Musante
On 23 Apr, 2010, at 8.38, Phillip Oldham wrote: The instances are ephemeral; once terminated they cease to exist, as do all their settings. Rebooting an image keeps any EBS volumes attached, but this isn't the case I'm dealing with - it's when the instance terminates unexpectedly. For
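
A rough sketch of the recovery steps being discussed, assuming a pool named tank backed by an EBS device seen as c7d1 (both names are placeholders, not from the thread): once the volume is attached to the new instance, the device can be brought back online and the errors cleared.

  zpool status tank          # pool shows UNAVAIL while the EBS device is missing
  # ...attach the EBS volume to the new instance, then:
  zpool online tank c7d1     # bring the re-attached device back into the pool
  zpool clear tank           # clear the errors accumulated while it was missing
  zpool status tank          # pool should return to ONLINE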

Re: [zfs-discuss] zpool vdev's

2010-05-28 Thread Mark Musante
On 28 May, 2010, at 17.21, Vadim Comanescu wrote: In a stripe zpool configuration (no redundancy) is a certain disk regarded as an individual vdev or do all the disks in the stripe represent a single vdev? In a raidz configuration I'm aware that every single group of raidz disks is
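
A minimal sketch of the distinction, using hypothetical disk names: in a plain stripe every disk is its own top-level vdev, whereas a raidz group forms a single top-level vdev.

  # plain stripe: three disks, three top-level vdevs
  zpool create tank c1t0d0 c1t1d0 c1t2d0
  # raidz: the same number of disks forms one top-level raidz vdev
  zpool create tank2 raidz c2t0d0 c2t1d0 c2t2d0
  zpool status tank tank2    # compare the vdev trees of the two pools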

Re: [zfs-discuss] ZFS and IBM SDD Vpaths

2010-05-29 Thread Mark Musante
Can you find the devices in /dev/rdsk? I see there is a path in /pseudo at least, but the zpool import command only looks in /dev. One thing you can try is doing this: # mkdir /tmpdev # ln -s /pseudo/vpat...@1:1 /tmpdev/vpath1a And then see if 'zpool import -d /tmpdev' finds the pool. On
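
The suggestion above, laid out as a sketch (the vpath device node is elided in the archive; substitute the real node under /pseudo):

  mkdir /tmpdev
  ln -s /pseudo/<vpath-node> /tmpdev/vpath1a   # give the pseudo device a /dev-style name
  zpool import -d /tmpdev                      # scan /tmpdev instead of /dev for pool labels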

Re: [zfs-discuss] Zpool import not working

2010-06-12 Thread Mark Musante
I'm guessing that the virtualbox VM is ignoring write cache flushes. See this for more info: http://forums.virtualbox.org/viewtopic.php?f=8&t=13661 On 12 Jun, 2010, at 5.30, zfsnoob4 wrote: Thanks, that works. But it only works when I do a proper export first. If I export the pool then I can
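
If ignored flushes are indeed the cause, the usual fix (per the VirtualBox documentation) is to tell the VM to honour flush requests; the VM name and IDE controller/LUN below are assumptions to adjust for your own setup:

  VBoxManage setextradata "MyOpenSolarisVM" \
      "VBoxInternal/Devices/piix3ide/0/LUN#[0]/Config/IgnoreFlush" 0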

Re: [zfs-discuss] Splitting root mirror to prep for re-install

2010-08-04 Thread Mark Musante
You can also use the zpool split command and save yourself having to do the zfs send|zfs recv step - all the data will be preserved. zpool split rpool preserve does essentially everything up to and including the zpool export preserve commands you listed in your original email. Just don't try
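
A sketch of that workflow, with "preserve" as the new pool name used in the original mail:

  zpool split rpool preserve       # detach one side of the mirror into a new, exported pool
  # ...re-install onto rpool...
  zpool import -R /mnt preserve    # later, import the preserved data under an alternate root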

Re: [zfs-discuss] How do I Import rpool to an alternate location?

2010-08-16 Thread Mark Musante
On 16 Aug 2010, at 22:30, Robert Hartzell wrote: cd /mnt ; ls bertha export var ls bertha boot etc where are the rest of the file systems and data? By default, root filesystems are not mounted. Try doing a zfs mount bertha/ROOT/snv_134
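
A sketch of the full sequence, using the pool and boot-environment names from the thread:

  zpool import -R /mnt bertha      # import under an alternate root so mounts land below /mnt
  zfs mount bertha/ROOT/snv_134    # root datasets are not mounted automatically; mount explicitly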

Re: [zfs-discuss] Can't detach spare device from pool

2010-08-18 Thread Mark Musante
You need to let the resilver complete before you can detach the spare. This is a known problem, CR 6909724. http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6909724 On 18 Aug 2010, at 14:02, Dr. Martin Mundschenk wrote: Hi! I had trouble with my raidz in such a way that some of
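
A sketch of the sequence once the resilver finishes, with pool and spare names as placeholders:

  zpool status tank          # wait until the scan line reports the resilver has completed
  zpool detach tank c0t4d0   # detaching the spare should now succeed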

Re: [zfs-discuss] ZPOOL_CONFIG_IS_HOLE

2010-10-15 Thread Mark Musante
You should only see a HOLE in your config if you removed a slog after having added more stripes. Nothing to do with bad sectors. On 14 Oct 2010, at 06:27, Matt Keenan wrote: Hi, Can someone shed some light on what this ZPOOL_CONFIG is exactly. At a guess is it a bad sector of the disk,
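
A hypothetical sequence that produces such a hole, with pool and device names as placeholders:

  zpool add tank log c3t0d0    # add a separate log device
  zpool add tank c4t0d0        # then grow the pool with another top-level vdev
  zpool remove tank c3t0d0     # removing the slog now leaves a HOLE in its vdev slot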

Re: [zfs-discuss] Zfs ignoring spares?

2010-12-05 Thread Mark Musante
On 5 Dec 2010, at 16:06, Roy Sigurd Karlsbakk r...@karlsbakk.net wrote: Hot spares are dedicated spares in the ZFS world. Until you replace the actual bad drives, you will be running in a degraded state. The idea is that spares are only used in an emergency. You are degraded until your
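
A sketch of clearing the degraded state, with placeholder disk names: replace the faulted drive, and once the resilver completes the hot spare goes back to being available.

  zpool replace tank c1t5d0 c1t9d0   # swap the faulted disk for a new one
  zpool status tank                  # after the resilver, the spare returns to AVAIL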

Re: [zfs-discuss] Another zfs issue

2011-06-01 Thread Mark Musante
Yeah, this is a known problem. The DTL on the toplevel shows an outage, and is preventing the removal of the spare even though removing the spare won't make the outage worse. Unfortunately, for opensolaris anyway, there is no workaround. You could try doing a full scrub, replacing any disks
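
A sketch of the suggested attempt, with placeholder names (no guarantee it clears the DTL):

  zpool scrub tank                   # re-read every block; may clear the recorded outage
  zpool status -v tank               # check for errors once the scrub completes
  zpool replace tank c0t3d0 c0t7d0   # replace any disk the scrub shows as failing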

Re: [zfs-discuss] [illumos-Developer] zfs refratio property

2011-06-06 Thread Mark Musante
minor quibble: compressratio uses a lowercase x for the description text whereas the new prop uses an uppercase X On 6 Jun 2011, at 21:10, Eric Schrock wrote: Webrev has been updated: http://dev1.illumos.org/~eschrock/cr/zfs-refratio/ - Eric -- Eric Schrock Delphix 275

Re: [zfs-discuss] Mirror Gone

2011-09-27 Thread Mark Musante
On 27 Sep 2011, at 18:29, Edward Ned Harvey wrote: From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- boun...@opensolaris.org] On Behalf Of Tony MacDoodle Now: mirror-0 ONLINE 0 0 0 c1t2d0 ONLINE 0 0 0 c1t3d0 ONLINE

Re: [zfs-discuss] Improving L1ARC cache efficiency with dedup

2011-12-08 Thread Mark Musante
You can see the original ARC case here: http://arc.opensolaris.org/caselog/PSARC/2009/557/20091013_lori.alt On 8 Dec 2011, at 16:41, Ian Collins wrote: On 12/ 9/11 12:39 AM, Darren J Moffat wrote: On 12/07/11 20:48, Mertol Ozyoney wrote: Unfortunetly the answer is no. Neither l1 nor l2