Re: [zfs-discuss] fwd: ZFS Clone Promotion [PSARC/2006/303 Timeout: 05/12/2006]

2006-05-11 Thread George Wilson
upgrade does so that it happens as part of the promotion. A best practice would be to keep the application data and the config/logging data separate. This would avoid the need for this feature. Thanks, George Darren J Moffat wrote: George Wilson wrote: Matt, This is really cool! One thing that I

Re: [zfs-discuss] How to best layout our filesystems

2006-07-28 Thread George Wilson
Robert, The patches will be available sometime late September. This may be a week or so before s10u3 actually releases. Thanks, George Robert Milkowski wrote: Hello eric, Thursday, July 27, 2006, 4:34:16 AM, you wrote: ek Robert Milkowski wrote: Hello George, Wednesday, July 26, 2006,

[zfs-discuss] Re: Canary is now running latest code and has a 3 disk raidz ZFS volume

2006-07-30 Thread George Wilson
that we have 800 servers, 30,000 users, 140 million lines of ASCII per day all fitting in a 2u T2000 box! thanks sean George Wilson wrote: Sean, Sorry for the delay getting back to you. You can do a 'zpool upgrade' to see what version of the on-disk format your pool is currently running

[zfs-discuss] Solaris 10 ZFS Update

2006-07-31 Thread George Wilson
We have putback a significant number of fixes and features from OpenSolaris into what will become Solaris 10 11/06. For reference here's the list: Features: PSARC 2006/223 ZFS Hot Spares 6405966 Hot Spare support in ZFS PSARC 2006/303 ZFS Clone Promotion 6276916 support for clone swap PSARC

Re: [zfs-discuss] Solaris 10 ZFS Update

2006-07-31 Thread George Wilson
Rainer, This will hopefully go into build 06 of s10u3. It's on my list... :-) Thanks, George Rainer Orth wrote: George Wilson [EMAIL PROTECTED] writes: We have putback a significant number of fixes and features from OpenSolaris into what will become Solaris 10 11/06. For reference here's

Re: [zfs-discuss] Solaris 10 ZFS Update

2006-07-31 Thread George Wilson
I forgot to highlight that RAIDZ2 (a.k.a RAID-6) is also in this wad: 6417978 double parity RAID-Z a.k.a. RAID6 Thanks, George George Wilson wrote: We have putback a significant number of fixes and features from OpenSolaris into what will become Solaris 10 11/06. For reference here's

Re: [zfs-discuss] Solaris 10 ZFS Update

2006-07-31 Thread George Wilson
Grant, Expect patches late September or so. Once available I'll post the patch information. Thanks, George grant beattie wrote: On Mon, Jul 31, 2006 at 11:51:09AM -0400, George Wilson wrote: We have putback a significant number of fixes and features from OpenSolaris into what will become

[zfs-discuss] Re: [Fwd: [zones-discuss] Zone boot problems after installing patches]

2006-08-02 Thread George Wilson
Dave, I'm copying the zfs-discuss alias on this as well... It's possible that not all necessary patches have been installed or they may be hitting CR# 6428258. If you reboot the zone, does it continue to end up in maintenance mode? Also, do you know if the necessary ZFS/Zones patches have been

Re: [zfs-discuss] Re: [Fwd: [zones-discuss] Zone boot problems after installing patches]

2006-08-03 Thread George Wilson
5.8_x86 5.9_x86 5.10_x86: Live Upgrade Patch Thanks, George George Wilson wrote: Dave, I'm copying the zfs-discuss alias on this as well... It's possible that not all necessary patches have been installed or they may be hitting CR# 6428258. If you reboot the zone, does it continue to end up

Re: [zfs-discuss] Re: SPEC SFS97 benchmark of ZFS,UFS,VxFS

2006-08-07 Thread George Wilson
Leon, Looking at the corefile doesn't really show much from the zfs side. It looks like you were having problems with your san though: /scsi_vhci/[EMAIL PROTECTED] (ssd5) offline /scsi_vhci/[EMAIL PROTECTED] (ssd5) multipath status: failed, path /[EMAIL PROTECTED],70/SUNW,[EMAIL

Re: [zfs-discuss] Querying ZFS version?

2006-08-08 Thread George Wilson
Luke, You can run 'zpool upgrade' to see what on-disk version you are capable of running. If you have the latest features then you should be running version 3: hadji-2# zpool upgrade This system is currently running ZFS version 3. Unfortunately this won't tell you if you are running the
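As a quick reference, the check described above looks like this (output varies by release):

```shell
# Print the ZFS version the installed software is capable of running,
# and flag any imported pools whose on-disk format is older:
zpool upgrade

# List every supported on-disk version and the features each one added:
zpool upgrade -v
```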

Re: [zfs-discuss] system unresponsive after issuing a zpool attach

2006-08-17 Thread George Wilson
I believe this is what you're hitting: 6456888 zpool attach leads to memory exhaustion and system hang We are currently looking at fixing this so stay tuned. Thanks, George Daniel Rock wrote: Joseph Mocker schrieb: Today I attempted to upgrade to S10_U2 and migrate some mirrored UFS SVM

Re: [zfs-discuss] SPEC SFS97 benchmark of ZFS,UFS,VxFS

2006-08-18 Thread George Wilson
Frank, The SC 3.2 beta may be closed, but I'm forwarding your request to Eric Redmond. Thanks, George Frank Cusack wrote: On August 10, 2006 6:04:38 PM -0700 eric kustarz [EMAIL PROTECTED] wrote: If you're doing HA-ZFS (which is SunCluster 3.2 - only available in beta right now), Is the

Re: [zfs-discuss] Re: SCSI synchronize cache cmd

2006-08-22 Thread George Wilson
Roch wrote: Dick Davies writes: On 22/08/06, Bill Moore [EMAIL PROTECTED] wrote: On Mon, Aug 21, 2006 at 02:40:40PM -0700, Anton B. Rang wrote: Yes, ZFS uses this command very frequently. However, it only does this if the whole disk is under the control of ZFS, I believe; so a

Re: [zfs-discuss] zpool hangs

2006-08-24 Thread George Wilson
Robert, One of your disks is not responding. I've been trying to track down why the scsi command is not being timed out but for now check out each of the devices to make sure they are healthy. BTW, if you capture a corefile let me know. Thanks, George Robert Milkowski wrote: Hi. S10U2 +

Re: [zfs-discuss] zpool hangs

2006-08-24 Thread George Wilson
Robert Milkowski wrote: Hello George, Thursday, August 24, 2006, 5:48:08 PM, you wrote: GW Robert, GW One of your disks is not responding. I've been trying to track down why GW the scsi command is not being timed out but for now check out each of GW the devices to make sure they are

Re: [zfs-discuss] Re: Re: zpool status panics server

2006-08-25 Thread George Wilson
Neal, This is not fixed yet. Your best bet is to run a replicated pool. Thanks, George Neal Miskin wrote: Hi Dana It is ZFS bug 6322646; a flaw. Is this fixed in a patch yet? nelly_bo This message posted from opensolaris.org ___

Re: [zfs-discuss] Re: Significant pauses during zfs writes

2006-08-28 Thread George Wilson
A fix for this should be integrated shortly. Thanks, George Michael Schuster - Sun Microsystems wrote: Robert Milkowski wrote: Hello Michael, Wednesday, August 23, 2006, 12:49:28 PM, you wrote: MSSM Roch wrote: MSSM I sent this output offline to Roch, here's the essential ones and

Re: [zfs-discuss] Re: Re: ZFS causes panic on import

2006-09-04 Thread George Wilson
Stuart, Can you send the output of 'zpool status -v' from both nodes? Thanks, George Stuart Low wrote: Nada. [EMAIL PROTECTED] ~]$ zpool export -f ax150s cannot open 'ax150s': no such pool [EMAIL PROTECTED] ~]$ I wonder if it's possible to force the pool to be marked as inactive? Ideally

Re: [zfs-discuss] Re: Re: Re: ZFS causes panic on import

2006-09-04 Thread George Wilson
Stuart, Given that the pool was imported on both nodes simultaneously may have corrupted it beyond repair. I'm assuming that the same problem is a system panic? If so, can you send the panic string from that node? Thanks, George Stuart Low wrote: I thought that might work too but having

Re: [zfs-discuss] Re: Re: Re: ZFS causes panic on import

2006-09-04 Thread George Wilson
Stuart, Issuing a 'zpool import' will show all the pools which are accessible for import and that's why you are seeing them. The fact that a forced import results in a panic is indicative of pool corruption that resulted from being imported on more than one host. Thanks, George

Re: [zfs-discuss] ZFS Corruption

2006-12-12 Thread George Wilson
Bill, If you want to find the file associated with the corruption you could do a find /u01 -inum 4741362 or use the output of zdb -d u01 to find the object associated with that id. Thanks, George Bill Casale wrote: Please reply directly to me. Seeing the message below. Is it possible
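As a sketch of the two approaches (the path and object id are the ones from the report quoted above):

```shell
# Map the object id from the ZFS error report back to a file name;
# find treats the ZFS object id as the inode number:
find /u01 -inum 4741362 -print

# Alternatively, dump the dataset's objects with zdb and look for
# the matching object id:
zdb -d u01
```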

Re: [zfs-discuss] using zpool attach/detach to migrate drives from one controller to another

2006-12-27 Thread George Wilson
Derek, I don't think 'zpool attach/detach' is what you want as it will always result in a complete resilver. Your best bet is to export and re-import the pool after moving devices. You might also try to 'zpool offline' the device, move it and then 'zpool online' it. This should force a
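A minimal sketch of both approaches (pool and device names are hypothetical):

```shell
# Preferred: export, move the cables, import. ZFS finds its devices
# by the labels written on them, not by controller path:
zpool export mypool
# ...physically move the drives to the new controller...
zpool import mypool

# Per-device alternative; onlining triggers only a quick catch-up
# resilver of the transactions missed while offline:
zpool offline mypool c1t53d0
# ...move that one drive...
zpool online mypool c1t53d0
```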

Re: [zfs-discuss] Saving scrub results before scrub completes

2006-12-27 Thread George Wilson
Siegfried, Can you provide the panic string that you are seeing? We should be able to pull out the persistent error log information from the corefile. You can take a look at spa_get_errlog() function as a starting point. Additionally, you can look at the corefile using mdb and take a look at

Re: [zfs-discuss] using zpool attach/detach to migrate drives from one controller to another

2006-12-27 Thread George Wilson
Derek, Have you tried doing a 'zpool replace poolname c1t53d0 c2t53d0'? I'm not sure if this will work but it's worth a shot. You may still end up with a complete resilver. Thanks, George Derek E. Lewis wrote: On Thu, 28 Dec 2006, George Wilson wrote: Your best bet is to export and re-import

[zfs-discuss] Solaris 10 11/06

2006-12-28 Thread George Wilson
Now that Solaris 10 11/06 is available, I wanted to post the complete list of ZFS features and bug fixes that were included in that release. I'm also including the necessary patches for anyone wanting to get all the ZFS features and fixes via patches (NOTE: later patch revision may already be

Re: [zfs-discuss] zfs services , label and packages

2006-12-28 Thread George Wilson
storage-disk wrote: Hi there I have 3 questions regarding zfs. 1. what are zfs packages? SUNWzfsr, SUNWzfskr, and SUNWzfsu. Note that ZFS has dependencies on other components of Solaris, so installing just the packages is not supported. 2. what services need to be started in order for

Re: [zfs-discuss] panic with zfs

2007-01-29 Thread George Wilson
Ihsan, If you are running Solaris 10 then you are probably hitting: 6456939 sd_send_scsi_SYNCHRONIZE_CACHE_biodone() can issue TUR which calls biowait()and deadlock/hangs host This was fixed in opensolaris (build 48) but a patch is not yet available for Solaris 10. Thanks, George Ihsan

Re: [zfs-discuss] Unbelievable. an other crashed zpool :(

2007-04-09 Thread George Wilson
Gino, Can you send me the corefile from the zpool command? This looks like a case where we can't open the device for some reason. Are you using a multi-pathing solution other than MPXIO? Thanks, George Gino wrote: Today we lost an other zpool! Fortunately it was only a backup repository.

Re: [zfs-discuss] other panic caused by ZFS

2007-04-09 Thread George Wilson
Gino, Were you able to recover by setting zfs_recover? Thanks, George Gino wrote: Hi All, here is an other kind of kernel panic caused by ZFS that we found. I have dumps if needed. #zpool import pool: zpool8 id: 7382567111495567914 state: ONLINE status: The pool is formatted using an

Re: [zfs-discuss] misleading zpool state and panic -- nevada b60 x86

2007-04-09 Thread George Wilson
William D. Hathaway wrote: I'm running Nevada build 60 inside VMWare, it is a test rig with no data of value. SunOS b60 5.11 snv_60 i86pc i386 i86pc I wanted to check out the FMA handling of a serious zpool error, so I did the following: 2007-04-07.08:46:31 zpool create tank mirror c0d1 c1d1

Re: [zfs-discuss] B62 AHCI and ZFS

2007-04-30 Thread George Wilson
Peter, Can you send the 'zpool status -x' output after your reboot. I suspect that the pool error is occurring early in the boot and later the devices are all available and the pool is brought into an online state. Take a look at: *6401126 ZFS DE should verify that diagnosis is still valid

Re: [zfs-discuss] Re: B62 AHCI and ZFS

2007-04-30 Thread George Wilson
Peter Goodman wrote: # zpool status -x pool: mtf state: UNAVAIL status: One or more devices could not be opened. There are insufficient replicas for the pool to continue functioning. action: Attach the missing device and online it using 'zpool online'. see:

Re: [zfs-discuss] zpool status -v: machine readable format?

2007-07-03 Thread George Wilson
David Smith wrote: I was wondering if anyone had a script to parse the zpool status -v output into a more machine readable format? Thanks, David
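No script was posted in this snippet, but as a sketch, the tabular part of 'zpool status -v' is regular enough to parse mechanically. The sample text and field layout below are assumptions based on typical output, not output quoted in this thread:

```python
SAMPLE = """\
  pool: tank
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        tank        ONLINE       0     0     0
          mirror    ONLINE       0     0     0
            c0d1    ONLINE       0     0     0
            c1d1    ONLINE       0     0     0

errors: No known data errors
"""

def parse_zpool_status(text):
    """Parse one pool's 'zpool status -v' output into a dict."""
    info = {"devices": []}
    in_config = False
    for line in text.splitlines():
        stripped = line.strip()
        if not stripped:
            continue
        if not in_config:
            # Header section: 'key: value' pairs until the config: marker.
            if stripped == "config:":
                in_config = True
            elif ":" in stripped:
                key, _, val = stripped.partition(":")
                info[key.strip()] = val.strip()
        elif stripped.startswith("errors:"):
            info["errors"] = stripped.partition(":")[2].strip()
            in_config = False
        elif not stripped.startswith("NAME"):
            # Device table row: NAME STATE READ WRITE CKSUM.
            fields = stripped.split()
            if len(fields) >= 5:
                info["devices"].append({
                    "name": fields[0], "state": fields[1],
                    "read": int(fields[2]), "write": int(fields[3]),
                    "cksum": int(fields[4])})
    return info

status = parse_zpool_status(SAMPLE)
print(status["pool"], status["state"], len(status["devices"]))
```

In practice one would capture the output of 'zpool status -v <pool>' via a subprocess and emit JSON from the resulting dict.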

Re: [zfs-discuss] ZFS pool fragmentation

2007-07-10 Thread George Wilson
This fix plus the fix for '6495013 Loops and recursion in metaslab_ff_alloc can kill performance, even on a pool with lots of free data' will greatly help your situation. Both of these fixes will be in Solaris 10 update 4. Thanks, George Łukasz wrote: I have a huge problem with ZFS pool

Re: [zfs-discuss] Again ZFS with expanding LUNs!

2007-08-06 Thread George Wilson
I'm planning on putting back the changes to ZFS into Opensolaris in upcoming weeks. This will still require a manual step as the changes required in the sd driver are still under development. The ultimate plan is to have the entire process totally automated. If you have more questions, feel

Re: [zfs-discuss] Opensolaris ZFS version Sol10U4 compatibility

2007-08-16 Thread George Wilson
The on-disk format for s10u4 will be version 4. This is equivalent to Opensolaris build 62. Thanks, George David Evans wrote: As the release date Solaris 10 Update 4 approaches (hope, hope), I was wondering if someone could comment on which versions of opensolaris ZFS will seamlessly work

[zfs-discuss] ZFS Solaris 10u5 Proposed Changes

2007-09-18 Thread George Wilson
ZFS Fans, Here's a list of features that we are proposing for Solaris 10u5. Keep in mind that this is subject to change. Features: PSARC 2007/142 zfs rename -r PSARC 2007/171 ZFS Separate Intent Log PSARC 2007/197 ZFS hotplug PSARC 2007/199 zfs {create,clone,rename} -p PSARC 2007/283 FMA for

Re: [zfs-discuss] zpool history not found

2007-09-18 Thread George Wilson
You need to install patch 120011-14. After you reboot you will be able to run 'zpool upgrade -a' to upgrade to the latest version. Thanks, George sunnie wrote: Hey, guys Since the current zfs software only supports ZFS pool version 3, what should I do to upgrade the zfs software or package?
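The sequence, as a sketch (SPARC patch id from the post; x86 systems would use 120012-14 instead):

```shell
# Install the kernel patch, reboot, then move every pool to the
# newest on-disk version the new bits support:
patchadd 120011-14
init 6
zpool upgrade -a
```

Note that once a pool is upgraded, older software can no longer import it.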

Re: [zfs-discuss] System hang caused by a bad snapshot

2007-09-18 Thread George Wilson
Ben, Much of this code has been revamped as a result of: 6514331 in-memory delete queue is not needed Although this may not fix your issue it would be good to try this test with more recent bits. Thanks, George Ben Miller wrote: Hate to re-open something from a year ago, but we just had

[zfs-discuss] ZFS Solaris 10 Update 4 Patches

2007-09-19 Thread George Wilson
The latest ZFS patches for Solaris 10 are now available: 120011-14 - SunOS 5.10: kernel patch 120012-14 - SunOS 5.10_x86: kernel patch ZFS Pool Version available with patches = 4 These patches will provide access to all of the latest features and bug fixes: Features: PSARC 2006/288 zpool

Re: [zfs-discuss] what patches are needed to enable zfs_nocacheflush

2007-09-19 Thread George Wilson
Bernhard, Here are the solaris 10 patches: 120011-14 - SunOS 5.10: kernel patch 120012-14 - SunOS 5.10_x86: kernel patch See http://www.opensolaris.org/jive/thread.jspa?threadID=39951tstart=0 for more info. Thanks, George Bernhard Holzer wrote: Hi, this parameter (zfs_nocacheflush) is

Re: [zfs-discuss] zpool question

2007-10-30 Thread George Wilson
Krzys wrote: hello folks, I am running Solaris 10 U3 and I have a small problem that I don't know how to fix... I had a pool of two drives: bash-3.00# zpool status pool: mypool state: ONLINE scrub: none requested config: NAME STATE READ WRITE CKSUM

Re: [zfs-discuss] [storage-discuss] Preventing zpool imports on boot

2008-02-15 Thread George Wilson
Mike Gerdts wrote: On Feb 15, 2008 2:31 PM, Dave [EMAIL PROTECTED] wrote: This is exactly what I want - Thanks! This isn't in the man pages for zfs or zpool in b81. Any idea when this feature was integrated? Interesting... it is in b76. I checked several other releases both

Re: [zfs-discuss] How to set ZFS metadata copies=3?

2008-02-15 Thread George Wilson
Vincent Fox wrote: Let's say you are paranoid and have built a pool with 40+ disks in a Thumper. Is there a way to set metadata copies=3 manually? After having built RAIDZ2 sets with 7-9 disks and then pooled these together, it just seems like a little bit of extra insurance to increase

Re: [zfs-discuss] zfs write cache enable on boot disks ?

2008-04-24 Thread George Wilson
Unfortunately not. Thanks, George Par Kansala wrote: Hi, Will the upcoming zfs boot capabilities also enable write cache on a boot disk like it does on regular data disks (when whole disks are used)? //Par

Re: [zfs-discuss] zfs write cache enable on boot disks ?

2008-04-24 Thread George Wilson
Just to clarify a bit, ZFS will not enable the write cache for the root pool. That said, there are disk drives which have the write cache enabled by default. That behavior remains unchanged. - George George Wilson wrote: Unfortunately not. Thanks, George Par Kansala wrote: Hi

Re: [zfs-discuss] [caiman-discuss] swap dump on ZFS volume

2008-07-02 Thread George Wilson
Kyle McDonald wrote: David Magda wrote: Quite often swap and dump are the same device, at least in the installs that I've worked with, and I think the default for Solaris is that if dump is not explicitly specified it defaults to swap, yes? Is there any reason why they should be

Re: [zfs-discuss] Repairing a Root Pool

2008-08-29 Thread George Wilson
Krister Joas wrote: Hello. I have a machine at home on which I have SXCE B96 installed on a root zpool mirror. It's been working great until yesterday. The root pool is a mirror with two identical 160GB disks. The other day I added a third disk to the mirror, a 250 GB disk. Soon

Re: [zfs-discuss] GSoC 09 zfs ideas?

2009-03-03 Thread George Wilson
Matthew Ahrens wrote: Blake wrote: zfs send is great for moving a filesystem with lots of tiny files, since it just handles the blocks :) I'd like to see: pool-shrinking (and an option to shrink disk A when i want disk B to become a mirror, but A is a few blocks bigger) I'm working on it.

Re: [zfs-discuss] GSoC 09 zfs ideas?

2009-03-03 Thread George Wilson
Richard Elling wrote: David Magda wrote: On Feb 27, 2009, at 18:23, C. Bergström wrote: Blake wrote: Care to share any of those in advance? It might be cool to see input from listees and generally get some wheels turning... raidz boot support in grub 2 is pretty high on my list to be

Re: [zfs-discuss] ZFS crashes on boot

2009-03-31 Thread George Wilson
Cyril Plisko wrote: On Thu, Mar 26, 2009 at 8:45 PM, Richard Elling richard.ell...@gmail.com wrote: assertion failures are bugs. Yup, I know that. Please file one at http://bugs.opensolaris.org Just did. Do you have a crash dump from this issue? - George You may

Re: [zfs-discuss] ZFS crashes on boot

2009-04-02 Thread George Wilson
Cyril Plisko wrote: On Tue, Mar 31, 2009 at 11:01 PM, George Wilson george.wil...@sun.com wrote: Cyril Plisko wrote: On Thu, Mar 26, 2009 at 8:45 PM, Richard Elling richard.ell...@gmail.com wrote: assertion failures are bugs. Yup, I know that. Please file one at http

Re: [zfs-discuss] ZFS hanging on import

2009-04-02 Thread George Wilson
Arne Schwabe wrote: Hi, I have a zpool in a degraded state: [19:15]{1}a...@charon:~% pfexec zpool import pool: npool id: 5258305162216370088 state: DEGRADED status: The pool is formatted using an older on-disk version. action: The pool can be imported despite missing or damaged

Re: [zfs-discuss] ZFS hanging on import

2009-04-03 Thread George Wilson
Arne Schwabe wrote: Am 03.04.2009 2:42 Uhr, schrieb George Wilson: Arne Schwabe wrote: Hi, I have a zpool in a degraded state: [19:15]{1}a...@charon:~% pfexec zpool import pool: npool id: 5258305162216370088 state: DEGRADED status: The pool is formatted using an older on-disk version

Re: [zfs-discuss] vdev_disk_io_start() sending NULL pointer in ldi_ioctl()

2009-04-10 Thread George Wilson
shyamali.chakrava...@sun.com wrote: Hi All, I have corefile where we see NULL pointer de-reference PANIC as we have sent (deliberately) NULL pointer for return value. vdev_disk_io_start() ... ... error = ldi_ioctl(dvd-vd_lh, zio-io_cmd,

Re: [zfs-discuss] LUN expansion

2009-06-04 Thread George Wilson
Leonid, I will be integrating this functionality within the next week: PSARC 2008/353 zpool autoexpand property 6475340 when lun expands, zfs should expand too Unfortunately, that won't help you until the changes get pushed to OpenSolaris. The problem you're facing is that the partition table needs

Re: [zfs-discuss] LUN expansion

2009-06-08 Thread George Wilson
Leonid Zamdborg wrote: George, Is there a reasonably straightforward way of doing this partition table edit with existing tools that won't clobber my data? I'm very new to ZFS, and didn't want to start experimenting with a live machine. Leonid, What you could do is to write a program

Re: [zfs-discuss] Issues with slightly different sized drives in raidz pool?

2009-06-08 Thread George Wilson
David Bryan wrote: Sorry if the question has been discussed before...did a pretty extensive search, but no luck... Preparing to build my first raidz pool. Plan to use 4 identical drives in a 3+1 configuration. My question is -- what happens if one drive dies, and when I replace it, design

Re: [zfs-discuss] Increase size of ZFS mirror

2009-06-24 Thread George Wilson
Ben wrote: Hi all, I have a ZFS mirror of two 500GB disks, I'd like to up these to 1TB disks, how can I do this? I must break the mirror as I don't have enough controller on my system board. My current mirror looks like this: [b]r...@beleg-ia:/share/media# zpool status share pool: share
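The usual recipe, sketched below, replaces one side of the mirror at a time so redundancy is reduced but never lost; the disk names are hypothetical and 'share' is the pool from the quoted status output:

```shell
# Swap in the first 1TB disk and let it resilver:
zpool replace share c1t0d0
zpool status share          # wait for the resilver to complete

# Then swap the second side:
zpool replace share c1t1d0
zpool status share
```

On builds with the autoexpand pool property, 'zpool set autoexpand=on share' grows the pool once both sides are large; on older bits an export/import picks up the new size.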

Re: [zfs-discuss] Another user looses his pool (10TB) in this case and 40 days work

2009-07-21 Thread George Wilson
Russel wrote: OK. So do we have an zpool import --xtg 56574 mypoolname or help to do it (script?) Russel We are working on the pool rollback mechanism and hope to have that soon. The ZFS team recognizes that not all hardware is created equal and thus the need for this mechanism. We are

Re: [zfs-discuss] Another user looses his pool (10TB) in this case and 40 days work

2009-07-22 Thread George Wilson
But how do we deal with that on older systems, which don't have the patch applied, once it is out? Thanks, Alexander On Tuesday, July 21, 2009, George Wilson george.wil...@sun.com wrote: Russel wrote: OK. So do we have an zpool import --xtg 56574 mypoolname or help to do it (script?) Russel We

Re: [zfs-discuss] zpool export taking hours

2009-07-27 Thread George Wilson
fyleow wrote: I have a raidz1 tank of 5x 640 GB hard drives on my newly installed OpenSolaris 2009.06 system. I did a zpool export tank and the process has been running for 3 hours now taking up 100% CPU usage. When I do a zfs list tank it's still shown as mounted. What's going on here?

Re: [zfs-discuss] zpool export taking hours

2009-07-29 Thread George Wilson
fyleow wrote: fyleow wrote: I have a raidz1 tank of 5x 640 GB hard drives on my newly installed OpenSolaris 2009.06 system. I did a zpool export tank and the process has been running for 3 hours now taking up 100% CPU usage. When I do a zfs list tank it's still shown as mounted. What's going

Re: [zfs-discuss] ZFS dedup issue

2009-11-03 Thread George Wilson
Eric Schrock wrote: On Nov 3, 2009, at 12:24 PM, Cyril Plisko wrote: I think I'm observing the same (with changeset 10936) ... # mkfile 2g /var/tmp/tank.img # zpool create tank /var/tmp/tank.img # zfs set dedup=on tank # zfs create tank/foobar This has to do with the fact that

Re: [zfs-discuss] dedupe question

2009-11-08 Thread George Wilson
Dennis Clarke wrote: On Sat, 2009-11-07 at 17:41 -0500, Dennis Clarke wrote: Does the dedupe functionality happen at the file level or a lower block level? it occurs at the block allocation level. I am writing a large number of files that have the fol structure : -- file begins 1024

Re: [zfs-discuss] CR# 6574286, remove slog device

2009-11-30 Thread George Wilson
Moshe Vainer wrote: I am sorry, i think i confused the matters a bit. I meant the bug that prevents importing with slog device missing, 6733267. I am aware that one can remove a slog device, but if you lose your rpool and the device goes missing while you rebuild, you will lose your pool in

[zfs-discuss] Heads-Up: Changes to the zpool(1m) command

2009-12-02 Thread George Wilson
Some new features have recently been integrated into ZFS which change the output of the zpool(1m) command. Here's a quick recap: 1) 6574286 removing a slog doesn't work This change added the concept of named top-level devices for the purpose of device removal. The named top-levels are constructed

Re: [zfs-discuss] dedup status

2010-05-20 Thread George Wilson
Roy Sigurd Karlsbakk wrote: Hi all I've been doing a lot of testing with dedup and concluded it's not really ready for production. If something fails, it can render the pool unusable for hours or maybe days, perhaps due to single-threaded stuff in zfs. There is also very little data

Re: [zfs-discuss] Scrub issues

2010-06-14 Thread George Wilson
Richard Elling wrote: On Jun 14, 2010, at 2:12 PM, Roy Sigurd Karlsbakk wrote: Hi all It seems zfs scrub is taking a big bit out of I/O when running. During a scrub, sync I/O, such as NFS and iSCSI is mostly useless. Attaching an SLOG and some L2ARC helps this, but still, the problem remains

Re: [zfs-discuss] zfs hangs with B141 when filebench runs

2010-07-15 Thread George Wilson
I don't recall seeing this issue before. Best thing to do is file a bug and include a pointer to the crash dump. - George zhihui Chen wrote: Looks that the txg_sync_thread for this pool has been blocked and never return, which leads to many other threads have been blocked. I have tried to

Re: [zfs-discuss] Fwd: zpool import despite missing log [PSARC/2010/292Self Review]

2010-07-30 Thread George Wilson
psarc-...@sun.com *CC:* zfs-t...@sun.com I am sponsoring the following case for George Wilson. Requested binding is micro/patch. Since this is a straight-forward addition of a command line option, I think it qualifies for self review

Re: [zfs-discuss] problem with zpool import - zil and cache drive are not displayed?

2010-08-03 Thread George Wilson
Darren, It looks like you've lost your log device. The newly integrated missing log support will help once it's available. In the meantime, you should run 'zdb -l' on your log device to make sure the label is still intact. Thanks, George Darren Taylor wrote: I'm at a loss, I've managed to
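Checking the label, as suggested (the device path is hypothetical):

```shell
# Dump the four ZFS label copies on the log device; a healthy device
# prints all four, an erased or damaged one reports unreadable labels:
zdb -l /dev/dsk/c2t0d0s0
```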

Re: [zfs-discuss] How do I Import rpool to an alternate location?

2010-08-16 Thread George Wilson
The root filesystem on the root pool is set to 'canmount=noauto' so you need to manually mount it first using 'zfs mount <dataset name>'. Then run 'zfs mount -a'. - George On 08/16/10 07:30 PM, Robert Hartzell wrote: I have a disk which is 1/2 of a boot disk mirror from a failed system that
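A sketch of the whole sequence (pool and boot-environment names are illustrative):

```shell
# Import the root pool under an alternate root so nothing collides
# with the running system:
zpool import -R /mnt rpool

# The root dataset is canmount=noauto, so mount it explicitly first:
zfs mount rpool/ROOT/<be-name>

# Then let the remaining datasets mount normally:
zfs mount -a
```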

Re: [zfs-discuss] How do I Import rpool to an alternate location?

2010-08-16 Thread George Wilson
Robert Hartzell wrote: On 08/16/10 07:47 PM, George Wilson wrote: The root filesystem on the root pool is set to 'canmount=noauto' so you need to manually mount it first using 'zfs mount dataset name'. Then run 'zfs mount -a'. - George mounting the dataset failed because the /mnt dir

Re: [zfs-discuss] ZFS offline ZIL corruption not detected

2010-08-26 Thread George Wilson
Edward Ned Harvey wrote: From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- boun...@opensolaris.org] On Behalf Of Neil Perrin This is a consequence of the design for performance of the ZIL code. Intent log blocks are dynamically allocated and chained together. When reading the

Re: [zfs-discuss] ZFS offline ZIL corruption not detected

2010-08-26 Thread George Wilson
David Magda wrote: On Wed, August 25, 2010 23:00, Neil Perrin wrote: Does a scrub go through the slog and/or L2ARC devices, or only the primary storage components? A scrub will go through slogs and primary storage devices. The L2ARC device is considered volatile and data loss is not possible

Re: [zfs-discuss] ZFS offline ZIL corruption not detected

2010-08-26 Thread George Wilson
Edward Ned Harvey wrote: Add to that: During scrubs, perform some reads on log devices (even if there's nothing to read). We do read from log devices if there is data stored on them. In fact, during scrubs, perform some reads on every device (even if it's actually empty.) Reading from the

Re: [zfs-discuss] Hang on zpool import (dedup related)

2010-09-12 Thread George Wilson
Chris Murray wrote: Another hang on zpool import thread, I'm afraid, because I don't seem to have observed any great successes in the others and I hope there's a way of saving my data ... In March, using OpenSolaris build 134, I created a zpool, some zfs filesystems, enabled dedup on them,

Re: [zfs-discuss] resilver that never finishes

2010-09-18 Thread George Wilson
Tom Bird wrote: On 18/09/10 09:02, Ian Collins wrote: In my case, other than an hourly snapshot, the data is not significantly changing. It'd be nice to see a response other than you're doing it wrong, rebuilding 5x the data on a drive relative to its capacity is clearly erratic

Re: [zfs-discuss] Resilver endlessly restarting at completion

2010-09-29 Thread George Wilson
Answers below... Tuomas Leikola wrote: The endless resilver problem still persists on OI b147. Restarts when it should complete. I see no other solution than to copy the data to safety and recreate the array. Any hints would be appreciated as that takes days unless i can stop or pause the

Re: [zfs-discuss] Is there any way to stop a resilver?

2010-09-29 Thread George Wilson
Can you post the output of 'zpool status'? Thanks, George LIC mesh wrote: Most likely an iSCSI timeout, but that was before my time here. Since then, there have been various individual drives lost along the way on the shelves, but never a whole LUN, so, theoretically, /except/ for iSCSI

Re: [zfs-discuss] Recovering from corrupt ZIL

2010-10-24 Thread George Wilson
If your pool is on version 19 then you should be able to import a pool with a missing log device by using the '-m' option to 'zpool import'. - George On Sat, Oct 23, 2010 at 10:03 PM, David Ehrmann ehrm...@gmail.com wrote: From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
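As a sketch (pool name hypothetical):

```shell
# On pool version 19 and later, import despite the missing log device:
zpool import -m tank

# The pool then runs without the log; the dead device can be taken
# out of the config with 'zpool remove' once the pool is imported.
```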

Re: [zfs-discuss] Recovering from corrupt ZIL

2010-10-24 Thread George Wilson
The guid is stored on the mirrored pair of the log and in the pool config. If your log device was not mirrored then you can only find it in the pool config. - George On Sun, Oct 24, 2010 at 9:34 AM, David Ehrmann ehrm...@gmail.com wrote: How does ZFS detect that there's a log device attached

Re: [zfs-discuss] Question about (delayed) block freeing

2010-10-29 Thread George Wilson
This value is hard-coded in. - George On Fri, Oct 29, 2010 at 9:58 AM, David Magda dma...@ee.ryerson.ca wrote: On Fri, October 29, 2010 10:00, Eric Schrock wrote: On Oct 29, 2010, at 9:21 AM, Jesus Cea wrote: When a file is deleted, its block are freed, and that situation is

Re: [zfs-discuss] zpool-poolname has 99 threads

2011-01-31 Thread George Wilson

Re: [zfs-discuss] Repairing Faulted ZFS pool when zbd doesn't recognize the pool as existing

2011-02-06 Thread George Wilson
and dive into this mess with me. J

Re: [zfs-discuss] Extremely slow zpool scrub performance

2011-05-16 Thread George Wilson
or things I should test and I will gladly look into them. -Don

Re: [zfs-discuss] Extremely slow zpool scrub performance

2011-05-16 Thread George Wilson
never seen before. Thanks, -Don

Re: [zfs-discuss] Extremely slow zpool scrub performance

2011-05-16 Thread George Wilson
abnormally high, especially when compared to the much smaller, better performing pool I have.

Re: [zfs-discuss] Extremely slow zpool scrub performance

2011-05-17 Thread George Wilson
Do you mind sharing more data on this? I would like to see the spa_scrub_* values I sent you earlier while you're running your test (in a loop so we can see the changes). What I'm looking for is to see how many inflight scrubs you have at the time of your run. Thanks, George

Re: [zfs-discuss] Extremely slow zpool scrub performance

2011-05-17 Thread George Wilson

Re: [zfs-discuss] Extremely slow zpool scrub performance

2011-05-17 Thread George Wilson
that are at the edge of the disk and some out a 1/3 of the way in. Then you want all the metaslabs which are a 1/3 of the way in and lower to get the bonus. This keeps the allocations towards the outer edges. - George

Re: [zfs-discuss] Extremely slow zpool scrub performance

2011-05-18 Thread George Wilson
to 100MB/s (in the brief time I looked at it). Is there a setting somewhere between slow and ludicrous speed? -Don