Re: [zfs-discuss] dedup status

2010-05-20 Thread George Wilson
Roy Sigurd Karlsbakk wrote: Hi all I've been doing a lot of testing with dedup and concluded it's not really ready for production. If something fails, it can render the pool unusable for hours or maybe days, perhaps due to single-threaded stuff in zfs. There is also very little data available

Re: [zfs-discuss] Scrub issues

2010-06-14 Thread George Wilson
Richard Elling wrote: On Jun 14, 2010, at 2:12 PM, Roy Sigurd Karlsbakk wrote: Hi all It seems zfs scrub is taking a big bit out of I/O when running. During a scrub, sync I/O, such as NFS and iSCSI is mostly useless. Attaching an SLOG and some L2ARC helps this, but still, the problem remains

Re: [zfs-discuss] zfs hangs with B141 when filebench runs

2010-07-15 Thread George Wilson
I don't recall seeing this issue before. Best thing to do is file a bug and include a pointer to the crash dump. - George zhihui Chen wrote: Looks like the txg_sync_thread for this pool has been blocked and never returned, which has caused many other threads to block. I have tried to chan

Re: [zfs-discuss] Fwd: zpool import despite missing log [PSARC/2010/292Self Review]

2010-07-30 Thread George Wilson
...@sun.com CC: zfs-t...@sun.com I am sponsoring the following case for George Wilson. Requested binding is micro/patch. Since this is a straight-forward addition of a command line option, I think it qualifies for

Re: [zfs-discuss] problem with zpool import - zil and cache drive are not displayed?

2010-08-03 Thread George Wilson
Darren, It looks like you've lost your log device. The newly integrated missing log support will help once it's available. In the meantime, you should run 'zdb -l' on your log device to make sure the label is still intact. Thanks, George Darren Taylor wrote: I'm at a loss, I've managed to ge
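A minimal sketch of the label check George suggests, assuming the log device sits at a path like /dev/dsk/c5t0d0s0 (substitute your own):

  # zdb -l /dev/dsk/c5t0d0s0

If all four label copies print with the pool and vdev GUIDs intact, the device label survived and the problem lies elsewhere in the pool config.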

Re: [zfs-discuss] ZFS dedup issue

2009-11-03 Thread George Wilson
Eric Schrock wrote: On Nov 3, 2009, at 12:24 PM, Cyril Plisko wrote: I think I'm observing the same (with changeset 10936) ... # mkfile 2g /var/tmp/tank.img # zpool create tank /var/tmp/tank.img # zfs set dedup=on tank # zfs create tank/foobar This has to do with the fact that dedu

Re: [zfs-discuss] dedupe question

2009-11-08 Thread George Wilson
Dennis Clarke wrote: On Sat, 2009-11-07 at 17:41 -0500, Dennis Clarke wrote: Does the dedupe functionality happen at the file level or a lower block level? it occurs at the block allocation level. I am writing a large number of files that have the fol structure : -- file begins 1024 line

Re: [zfs-discuss] CR# 6574286, remove slog device

2009-11-30 Thread George Wilson
Moshe Vainer wrote: I am sorry, I think I confused the matters a bit. I meant the bug that prevents importing with a slog device missing, 6733267. I am aware that one can remove a slog device, but if you lose your rpool and the device goes missing while you rebuild, you will lose your pool in its

[zfs-discuss] Heads-Up: Changes to the zpool(1m) command

2009-12-02 Thread George Wilson
Some new features have recently been integrated into ZFS which change the output of the zpool(1m) command. Here's a quick recap: 1) 6574286 removing a slog doesn't work This change added the concept of named top-level devices for the purpose of device removal. The named top-levels are constructed by

Re: [zfs-discuss] How do I Import rpool to an alternate location?

2010-08-16 Thread George Wilson
The root filesystem on the root pool is set to 'canmount=noauto' so you need to manually mount it first using 'zfs mount '. Then run 'zfs mount -a'. - George On 08/16/10 07:30 PM, Robert Hartzell wrote: I have a disk which is 1/2 of a boot disk mirror from a failed system that I would like
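A hedged sketch of that sequence, assuming a boot-environment dataset named rpool/ROOT/snv_134 (yours will differ):

  # zpool import -R /mnt rpool
  # zfs mount rpool/ROOT/snv_134
  # zfs mount -a

The -R altroot keeps the imported pool's mountpoints from colliding with the running system's.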

Re: [zfs-discuss] How do I Import rpool to an alternate location?

2010-08-16 Thread George Wilson
Robert Hartzell wrote: On 08/16/10 07:47 PM, George Wilson wrote: The root filesystem on the root pool is set to 'canmount=noauto' so you need to manually mount it first using 'zfs mount '. Then run 'zfs mount -a'. - George mounting the dataset failed because

Re: [zfs-discuss] ZFS offline ZIL corruption not detected

2010-08-26 Thread George Wilson
Edward Ned Harvey wrote: From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- boun...@opensolaris.org] On Behalf Of Neil Perrin This is a consequence of the design for performance of the ZIL code. Intent log blocks are dynamically allocated and chained together. When reading the intent

Re: [zfs-discuss] ZFS offline ZIL corruption not detected

2010-08-26 Thread George Wilson
David Magda wrote: On Wed, August 25, 2010 23:00, Neil Perrin wrote: Does a scrub go through the slog and/or L2ARC devices, or only the "primary" storage components? A scrub will go through slogs and primary storage devices. The L2ARC device is considered volatile and data loss is not possibl

Re: [zfs-discuss] ZFS offline ZIL corruption not detected

2010-08-26 Thread George Wilson
Edward Ned Harvey wrote: Add to that: During scrubs, perform some reads on log devices (even if there's nothing to read). We do read from log devices if there is data stored on them. In fact, during scrubs, perform some reads on every device (even if it's actually empty.) Reading from the d

Re: [zfs-discuss] ZFS offline ZIL corruption not detected

2010-08-27 Thread George Wilson
Bob Friesenhahn wrote: On Thu, 26 Aug 2010, George Wilson wrote: What gets "scrubbed" in the slog? The slog contains transient data which exists for only seconds at a time. The slog is quite likely to be empty at any given point in time. Bob Yes, the typical ZIL block never

Re: [zfs-discuss] what is zfs doing during a log resilver?

2010-09-06 Thread George Wilson
Arne Jansen wrote: Giovanni Tirloni wrote: On Thu, Sep 2, 2010 at 10:18 AM, Jeff Bacon > wrote: So, when you add a log device to a pool, it initiates a resilver. What is it actually doing, though? Isn't the slog a copy of the in-memory intent lo

Re: [zfs-discuss] Hang on zpool import (dedup related)

2010-09-12 Thread George Wilson
Chris Murray wrote: Another hang on zpool import thread, I'm afraid, because I don't seem to have observed any great successes in the others and I hope there's a way of saving my data ... In March, using OpenSolaris build 134, I created a zpool, some zfs filesystems, enabled dedup on them, mo

Re: [zfs-discuss] resilver that never finishes

2010-09-18 Thread George Wilson
Tom Bird wrote: On 18/09/10 09:02, Ian Collins wrote: In my case, other than an hourly snapshot, the data is not significantly changing. It'd be nice to see a response other than "you're doing it wrong", rebuilding 5x the data on a drive relative to its capacity is clearly erratic behaviou

Re: [zfs-discuss] Resilver endlessly restarting at completion

2010-09-29 Thread George Wilson
Answers below... Tuomas Leikola wrote: The endless resilver problem still persists on OI b147. Restarts when it should complete. I see no other solution than to copy the data to safety and recreate the array. Any hints would be appreciated as that takes days unless i can stop or pause the re

Re: [zfs-discuss] Is there any way to stop a resilver?

2010-09-29 Thread George Wilson
Can you post the output of 'zpool status'? Thanks, George LIC mesh wrote: Most likely an iSCSI timeout, but that was before my time here. Since then, there have been various individual drives lost along the way on the shelves, but never a whole LUN, so, theoretically, /except/ for iSCSI time

Re: [zfs-discuss] Recovering from corrupt ZIL

2010-10-23 Thread George Wilson
If your pool is on version > 19 then you should be able to import a pool with a missing log device by using the '-m' option to 'zpool import'. - George On Sat, Oct 23, 2010 at 10:03 PM, David Ehrmann wrote: > > > From: zfs-discuss-boun...@opensolaris.org > > [mailto:zfs-discuss- > > > boun...@o
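For reference, with a hypothetical pool name:

  # zpool import -m tank

On pool version 19 or later this brings the pool in despite the missing log; 'zpool status' will then show the absent log device so it can be removed or replaced.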

Re: [zfs-discuss] Recovering from corrupt ZIL

2010-10-24 Thread George Wilson
The guid is stored on the mirrored pair of the log and in the pool config. If your log device was not mirrored then you can only find it in the pool config. - George On Sun, Oct 24, 2010 at 9:34 AM, David Ehrmann wrote: > How does ZFS detect that there's a log device attached to a pool? I >

Re: [zfs-discuss] Question about (delayed) block freeing

2010-10-29 Thread George Wilson
This value is hard-coded in. - George On Fri, Oct 29, 2010 at 9:58 AM, David Magda wrote: > On Fri, October 29, 2010 10:00, Eric Schrock wrote: > > > > On Oct 29, 2010, at 9:21 AM, Jesus Cea wrote: > > > >> When a file is deleted, its block are freed, and that situation is > >> committed in the

Re: [zfs-discuss] zpool-poolname has 99 threads

2011-01-31 Thread George Wilson
ke to > rule it out before looking further at I/O performance. > > -- > -Gary Mills--Unix Group--Computer and Network Services-

Re: [zfs-discuss] Repairing Faulted ZFS pool when zbd doesn't recognize the pool as existing

2011-02-06 Thread George Wilson
-fFX -o ro -o > failmode=continue -R /mnt 13666181038508963033 > > Result: works, took 25 min, and all the vdevs are the proper /mytempdev > devices, not other ones. > > > > Tried: zpool clear -F tank > > Result: cannot clear errors for tank: I/O error > > >

Re: [zfs-discuss] Extremely slow zpool scrub performance

2011-05-15 Thread George Wilson
mbers make any sense to me. > > The array actually has 88 disks + 4 hot spares (1 each of two sizes > per controller channel) + 4 Intel X-25E 32GB SSD's (2 x 2 way mirror > split across controller channels). > > Any ideas or things I should test and I will gladly look into

Re: [zfs-discuss] Extremely slow zpool scrub performance

2011-05-16 Thread George Wilson
same theme- is there a good reference for all of the various > ZFS debugging commands and mdb options? > > I'd love to spend a lot of time just looking at the data available to > me but every time I turn around someone suggests a new and interesting > mdb query I've never s

Re: [zfs-discuss] Extremely slow zpool scrub performance

2011-05-16 Thread George Wilson
don't make any sense. And as I > mentioned- the busy percent and service times of these disks are never > abnormally high- especially when compared to the much smaller, better > performing pool I have.

Re: [zfs-discuss] Extremely slow zpool scrub performance

2011-05-17 Thread George Wilson
o we can see the changes). What I'm looking for is to see how many inflight scrubs you have at the time of your run. Thanks, George > -Don

Re: [zfs-discuss] Extremely slow zpool scrub performance

2011-05-17 Thread George Wilson

Re: [zfs-discuss] Extremely slow zpool scrub performance

2011-05-17 Thread George Wilson
as to spare the absent > whiteboard ,) No. Imagine if you started allocations on a disk and used the metaslabs that are at the edge of disk and some out a 1/3 of the way in. Then you want all the metaslabs which are a 1/3 of the way in and lower to get the bonus. This keeps the allocations tow

Re: [zfs-discuss] Extremely slow zpool scrub performance

2011-05-18 Thread George Wilson
 3   402M  31.8K > pool0       14.1T  25.3T  51.6K      0   405M  3.98K > pool0       14.1T  25.3T  52.0K      0   408M      0 > > Now the pool scrub rate climbs to 100MB/s (in the brief time I looked at it). > > Is there a setting somewhere between slow and ludicrous s

Re: [zfs-discuss] Repairing a Root Pool

2008-08-29 Thread George Wilson
Krister Joas wrote: > Hello. > > I have a machine at home on which I have SXCE B96 installed on a root > zpool mirror. It's been working great until yesterday. The root pool > is a mirror with two identical 160GB disks. The other day I added a > third disk to the mirror, a 250 GB disk. S

Re: [zfs-discuss] GSoC 09 zfs ideas?

2009-03-03 Thread George Wilson
Matthew Ahrens wrote: Blake wrote: zfs send is great for moving a filesystem with lots of tiny files, since it just handles the blocks :) I'd like to see: pool-shrinking (and an option to shrink disk A when i want disk B to become a mirror, but A is a few blocks bigger) I'm working on it.

Re: [zfs-discuss] GSoC 09 zfs ideas?

2009-03-03 Thread George Wilson
Richard Elling wrote: David Magda wrote: On Feb 27, 2009, at 18:23, C. Bergström wrote: Blake wrote: Care to share any of those in advance? It might be cool to see input from listees and generally get some wheels turning... raidz boot support in grub 2 is pretty high on my list to be hone

Re: [zfs-discuss] ZFS crashes on boot

2009-03-31 Thread George Wilson
Cyril Plisko wrote: On Thu, Mar 26, 2009 at 8:45 PM, Richard Elling wrote: assertion failures are bugs. Yup, I know that. Please file one at http://bugs.opensolaris.org Just did. Do you have a crash dump from this issue? - George You may need to try another vers

Re: [zfs-discuss] ZFS crashes on boot

2009-04-02 Thread George Wilson
Cyril Plisko wrote: On Tue, Mar 31, 2009 at 11:01 PM, George Wilson wrote: Cyril Plisko wrote: On Thu, Mar 26, 2009 at 8:45 PM, Richard Elling wrote: assertion failures are bugs. Yup, I know that. Please file one at http://bugs.opensolaris.org Just did. Do you have a crash dump

Re: [zfs-discuss] ZFS hanging on import

2009-04-02 Thread George Wilson
Arne Schwabe wrote: Hi, I have a zpool in a degraded state: [19:15]{1}a...@charon:~% pfexec zpool import pool: npool id: 5258305162216370088 state: DEGRADED status: The pool is formatted using an older on-disk version. action: The pool can be imported despite missing or damaged devices.

Re: [zfs-discuss] ZFS hanging on import

2009-04-03 Thread George Wilson
Arne Schwabe wrote: Am 03.04.2009 2:42 Uhr, schrieb George Wilson: Arne Schwabe wrote: Hi, I have a zpool in a degraded state: [19:15]{1}a...@charon:~% pfexec zpool import pool: npool id: 5258305162216370088 state: DEGRADED status: The pool is formatted using an older on-disk version

Re: [zfs-discuss] vdev_disk_io_start() sending NULL pointer in ldi_ioctl()

2009-04-10 Thread George Wilson
shyamali.chakrava...@sun.com wrote: Hi All, I have corefile where we see NULL pointer de-reference PANIC as we have sent (deliberately) NULL pointer for return value. vdev_disk_io_start() ... ... error = ldi_ioctl(dvd->vd_lh, zio->io_cmd, (uintptr_

Re: [zfs-discuss] CR# 6574286, remove slog device

2009-05-23 Thread George Wilson
Mike Gerdts wrote: On Tue, May 19, 2009 at 2:16 PM, Paul B. Henson > wrote: > > I was checking with Sun support regarding this issue, and they say "The CR > currently has a high priority and the fix is understood. However, there is > no eta, workaround, nor IDR." > > If

Re: [zfs-discuss] Replacing HDD with larger HDD..

2009-05-23 Thread George Wilson
Jorgen Lundman wrote: Rob Logan wrote: you meant to type zpool import -d /var/tmp grow Bah - of course, I can not just expect zpool to know what random directory to search. You Sir, are a genius. Works like a charm, and thank you. Lund I will be integrating some changes soon which wil

Re: [zfs-discuss] LUN expansion

2009-06-04 Thread George Wilson
Leonid, I will be integrating this functionality within the next week: PSARC 2008/353 zpool autoexpand property 6475340 when lun expands, zfs should expand too Unfortunately, they won't help you until they get pushed to OpenSolaris. The problem you're facing is that the partition table needs to
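Once that work lands, the manual steps should reduce to something like the following (pool and device names are illustrative):

  # zpool set autoexpand=on tank
  # zpool online -e tank c2t0d0    # expand an already-online LUN in place

Until then the partition table on the expanded LUN has to be grown by hand, as the follow-up message discusses.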

Re: [zfs-discuss] LUN expansion

2009-06-08 Thread George Wilson
Leonid Zamdborg wrote: George, Is there a reasonably straightforward way of doing this partition table edit with existing tools that won't clobber my data? I'm very new to ZFS, and didn't want to start experimenting with a live machine. Leonid, What you could do is to write a program whi

Re: [zfs-discuss] Issues with slightly different sized drives in raidz pool?

2009-06-08 Thread George Wilson
David Bryan wrote: Sorry if the question has been discussed before...did a pretty extensive search, but no luck... Preparing to build my first raidz pool. Plan to use 4 identical drives in a 3+1 configuration. My question is -- what happens if one drive dies, and when I replace it, design ha

Re: [zfs-discuss] Increase size of ZFS mirror

2009-06-24 Thread George Wilson
Ben wrote: Hi all, I have a ZFS mirror of two 500GB disks, I'd like to up these to 1TB disks, how can I do this? I must break the mirror as I don't have enough controller on my system board. My current mirror looks like this: [b]r...@beleg-ia:/share/media# zpool status share pool: share sta
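One hedged sketch with hypothetical device names: since there is no spare port for a third disk, each side of the mirror is detached, physically swapped for a 1TB drive, and re-attached in turn:

  # zpool detach share c1t2d0          # drop one half of the mirror
  (swap the 500GB disk for a 1TB disk)
  # zpool attach share c1t1d0 c1t2d0   # re-attach and wait for resilver
  # zpool status share                 # confirm the resilver finished, then repeat

The pool runs unmirrored during each swap, so a current backup is prudent. The extra space appears once both sides are 1TB (on older bits, after an export/import).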

Re: [zfs-discuss] Another user looses his pool (10TB) in this case and 40 days work

2009-07-21 Thread George Wilson
Russel wrote: OK. So do we have a zpool import --xtg 56574 mypoolname or help to do it (script?) Russel We are working on the pool rollback mechanism and hope to have that soon. The ZFS team recognizes that not all hardware is created equal and thus the need for this mechanism. We are usi

Re: [zfs-discuss] Another user looses his pool (10TB) in this case and 40 days work

2009-07-22 Thread George Wilson
how do we deal with that on older systems, which don't have the patch applied, once it is out? Thanks, Alexander On Tuesday, July 21, 2009, George Wilson wrote: Russel wrote: OK. So do we have a zpool import --xtg 56574 mypoolname or help to do it (script?) Russel We are working o

Re: [zfs-discuss] zpool export taking hours

2009-07-27 Thread George Wilson
fyleow wrote: I have a raidz1 tank of 5x 640 GB hard drives on my newly installed OpenSolaris 2009.06 system. I did a zpool export tank and the process has been running for 3 hours now taking up 100% CPU usage. When I do a zfs list tank it's still shown as mounted. What's going on here? Shoul

Re: [zfs-discuss] zpool export taking hours

2009-07-29 Thread George Wilson
fyleow wrote: fyleow wrote: I have a raidz1 tank of 5x 640 GB hard drives on my newly installed OpenSolaris 2009.06 system. I did a zpool export tank and the process has been running for 3 hours now taking up 100% CPU usage. When I do a zfs list tank it's still shown as mounted. What's going

Re: [zfs-discuss] panic with zfs

2007-01-29 Thread George Wilson
Ihsan, If you are running Solaris 10 then you are probably hitting: 6456939 sd_send_scsi_SYNCHRONIZE_CACHE_biodone() can issue TUR which calls biowait()and deadlock/hangs host This was fixed in opensolaris (build 48) but a patch is not yet available for Solaris 10. Thanks, George Ihsan Do

Re: [zfs-discuss] Unbelievable. an other crashed zpool :(

2007-04-09 Thread George Wilson
Gino, Can you send me the corefile from the zpool command? This looks like a case where we can't open the device for some reason. Are you using a multi-pathing solution other than MPXIO? Thanks, George Gino wrote: Today we lost an other zpool! Fortunately it was only a backup repository.

Re: [zfs-discuss] other panic caused by ZFS

2007-04-09 Thread George Wilson
Gino, Were you able to recover by setting zfs_recover? Thanks, George Gino wrote: Hi All, here is an other kind of kernel panic caused by ZFS that we found. I have dumps if needed. #zpool import pool: zpool8 id: 7382567111495567914 state: ONLINE status: The pool is formatted using an o

Re: [zfs-discuss] misleading zpool state and panic -- nevada b60 x86

2007-04-09 Thread George Wilson
William D. Hathaway wrote: I'm running Nevada build 60 inside VMWare, it is a test rig with no data of value. SunOS b60 5.11 snv_60 i86pc i386 i86pc I wanted to check out the FMA handling of a serious zpool error, so I did the following: 2007-04-07.08:46:31 zpool create tank mirror c0d1 c1d1 2

Re: [zfs-discuss] Permanently removing vdevs from a pool

2007-04-19 Thread George Wilson
This is a high priority for us and is actively being worked. Vague enough for you. :-) Sorry I can't give you anything more exact than that. - George Matty wrote: On 4/19/07, Mark J Musante <[EMAIL PROTECTED]> wrote: On Thu, 19 Apr 2007, Mario Goebbels wrote: > Is it possible to gracefully

Re: [zfs-discuss] B62 AHCI and ZFS

2007-04-30 Thread George Wilson
Peter, Can you send the 'zpool status -x' output after your reboot. I suspect that the pool error is occurring early in the boot and later the devices are all available and the pool is brought into an online state. Take a look at: *6401126 ZFS DE should verify that diagnosis is still valid b

Re: [zfs-discuss] Re: B62 AHCI and ZFS

2007-04-30 Thread George Wilson
Peter Goodman wrote: # zpool status -x pool: mtf state: UNAVAIL status: One or more devices could not be opened. There are insufficient replicas for the pool to continue functioning. action: Attach the missing device and online it using 'zpool online'. see: http://www.sun.com/msg/Z

Re: [zfs-discuss] zpool status -v: machine readable format?

2007-07-03 Thread George Wilson
David Smith wrote: > I was wondering if anyone had a script to parse the "zpool status -v" output > into a more machine readable format? > > Thanks, > > David

Re: [zfs-discuss] ZFS pool fragmentation

2007-07-10 Thread George Wilson
This fix plus the fix for '6495013 Loops and recursion in metaslab_ff_alloc can kill performance, even on a pool with lots of free data' will greatly help your situation. Both of these fixes will be in Solaris 10 update 4. Thanks, George Łukasz wrote: > I have a huge problem with ZFS pool frag

Re: [zfs-discuss] Again ZFS with expanding LUNs!

2007-08-06 Thread George Wilson
I'm planning on putting back the changes to ZFS into Opensolaris in upcoming weeks. This will still require a manual step as the changes required in the sd driver are still under development. The ultimate plan is to have the entire process totally automated. If you have more questions, feel fre

Re: [zfs-discuss] Opensolaris ZFS version & Sol10U4 compatibility

2007-08-16 Thread George Wilson
The on-disk format for s10u4 will be version 4. This is equivalent to Opensolaris build 62. Thanks, George David Evans wrote: > As the release date Solaris 10 Update 4 approaches (hope, hope), I was > wondering if someone could comment on which versions of opensolaris ZFS will > seamlessly wo

[zfs-discuss] ZFS Solaris 10u5 Proposed Changes

2007-09-18 Thread George Wilson
ZFS Fans, Here's a list of features that we are proposing for Solaris 10u5. Keep in mind that this is subject to change. Features: PSARC 2007/142 zfs rename -r PSARC 2007/171 ZFS Separate Intent Log PSARC 2007/197 ZFS hotplug PSARC 2007/199 zfs {create,clone,rename} -p PSARC 2007/283 FMA for ZFS

Re: [zfs-discuss] zpool history not found

2007-09-18 Thread George Wilson
You need to install patch 120011-14. After you reboot you will be able to run 'zpool upgrade -a' to upgrade to the latest version. Thanks, George sunnie wrote: > Hey, guys > Since the current zfs software only supports ZFS pool version 3, what should I > do to upgrade the zfs software or package
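The upgrade step itself, for reference:

  # zpool upgrade        # report the supported version and any pools below it
  # zpool upgrade -a     # upgrade all imported pools to the latest version

Note that the on-disk upgrade is one-way: older software cannot import a pool after its version has been bumped.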

Re: [zfs-discuss] System hang caused by a "bad" snapshot

2007-09-18 Thread George Wilson
Ben, Much of this code has been revamped as a result of: 6514331 in-memory delete queue is not needed Although this may not fix your issue it would be good to try this test with more recent bits. Thanks, George Ben Miller wrote: > Hate to re-open something from a year ago, but we just had th

[zfs-discuss] ZFS Solaris 10 Update 4 Patches

2007-09-18 Thread George Wilson
The latest ZFS patches for Solaris 10 are now available: 120011-14 - SunOS 5.10: kernel patch 120012-14 - SunOS 5.10_x86: kernel patch ZFS Pool Version available with patches = 4 These patches will provide access to all of the latest features and bug fixes: Features: PSARC 2006/288 zpool histo

Re: [zfs-discuss] what patches are needed to enable zfs_nocacheflush

2007-09-18 Thread George Wilson
Bernhard, Here are the solaris 10 patches: 120011-14 - SunOS 5.10: kernel patch 120012-14 - SunOS 5.10_x86: kernel patch See http://www.opensolaris.org/jive/thread.jspa?threadID=39951&tstart=0 for more info. Thanks, George Bernhard Holzer wrote: > Hi, > > this parameter (zfs_nocacheflush) is

Re: [zfs-discuss] import zpool error if use loop device as vdev

2007-09-18 Thread George Wilson
By default, 'zpool import' looks only in /dev/dsk. Since you are using /dev/lofi you will need to use 'zpool import -d /dev/lofi' to import your pool. Thanks, George sunnie wrote: > Hey, guys > I just do the test for use loop device as vdev for zpool > Procedures as followings: > 1) mkfi
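A sketch of the whole round trip with a file-backed lofi device (paths illustrative):

  # mkfile 1g /var/tmp/vdev.img
  # lofiadm -a /var/tmp/vdev.img        # prints e.g. /dev/lofi/1
  # zpool create testpool /dev/lofi/1
  # zpool export testpool
  # zpool import -d /dev/lofi testpool

Without -d, the import scan never leaves /dev/dsk and reports no pools available.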

Re: [zfs-discuss] ZFS Solaris 10 Update 4 Patches

2007-09-19 Thread George Wilson
son wrote: > >> Does simply installing that patch on a U3 machine get you all of the U4 >> zfs updates, or must a full OS upgrade be done? >> >> -Brian >> >> >> George Wilson wrote: >>> The latest ZFS patches for Solaris 10 are now available: >>

Re: [zfs-discuss] ZFS Solaris 10u5 Proposed Changes

2007-09-19 Thread George Wilson
Gzip support will be there but it was not a PSARC case. Once we have committed the content I will publish a complete list of all the CRs that are going into s10u5. Thanks, George Jesus Cea wrote: > -BEGIN PGP SIGNED MESSAGE- > Hash: SHA1 > > George Wilson wrote: >>

Re: [zfs-discuss] zpool question

2007-10-30 Thread George Wilson
Krzys wrote: > hello folks, I am running Solaris 10 U3 and I have a small problem that I don't > know how to fix... > > I had a pool of two drives: > > bash-3.00# zpool status > pool: mypool > state: ONLINE > scrub: none requested > config: > > NAME STATE READ WRITE CK

Re: [zfs-discuss] [storage-discuss] Preventing zpool imports on boot

2008-02-15 Thread George Wilson
Mike Gerdts wrote: > On Feb 15, 2008 2:31 PM, Dave <[EMAIL PROTECTED]> wrote: > >> This is exactly what I want - Thanks! >> >> This isn't in the man pages for zfs or zpool in b81. Any idea when this >> feature was integrated? >> > > Interesting... it is in b76. I checked several other rele

Re: [zfs-discuss] How to set ZFS metadata copies=3?

2008-02-15 Thread George Wilson
Vincent Fox wrote: > Let's say you are paranoid and have built a pool with 40+ disks in a Thumper. > > Is there a way to set metadata copies=3 manually? > > After having built RAIDZ2 sets with 7-9 disks and then pooled these together, > it just seems like a little bit of extra insurance to increas

Re: [zfs-discuss] zfs write cache enable on boot disks ?

2008-04-24 Thread George Wilson
Unfortunately not. Thanks, George Par Kansala wrote: > Hi, > > Will the upcoming zfs boot capabilities also enable write cache on a > boot disk > like it does on regular data disks (when whole disks are used) ? > > //Par > -- > -- > *Pär Känsälä* > OEM Engagement Architec

Re: [zfs-discuss] zfs write cache enable on boot disks ?

2008-04-24 Thread George Wilson
Just to clarify a bit, ZFS will not enable the write cache for the root pool. That said, there are disk drives which have the write cache enabled by default. That behavior remains unchanged. - George George Wilson wrote: > Unfortunately not. > > Thanks, > George > >
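For those who want to inspect or flip the cache themselves, format(1M) in expert mode exposes it; a hedged sketch, since menu wording varies by disk and driver:

  # format -e
  (select the boot disk)
  format> cache
  cache> write_cache
  write_cache> display
  write_cache> enable

ZFS deliberately leaves this alone on root pools, so enabling it by hand is at your own risk.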

Re: [zfs-discuss] [caiman-discuss] swap & dump on ZFS volume

2008-07-02 Thread George Wilson
Kyle McDonald wrote: > David Magda wrote: > >> Quite often swap and dump are the same device, at least in the >> installs that I've worked with, and I think the default for Solaris >> is that if dump is not explicitly specified it defaults to swap, yes? >> Is there any reason why they shou

Re: [zfs-discuss] fwd: ZFS Clone Promotion [PSARC/2006/303 Timeout: 05/12/2006]

2006-05-11 Thread George Wilson
Darren J Moffat wrote: How would you apply these diffs ? How do you select which files to apply and which not to ? For example you want the log files to be "merged" some how but you certainly don't want the binaries to be merged. This would have to be a decision by the user when the sync tak

Re: [zfs-discuss] How to monitor ZFS ?

2006-07-16 Thread George Wilson
ZFS will generate error events through FMA and they will be visible via syslog or fmdump. The hot spares facility will also listen for vdev failures to determine when to automatically kick in a hot spare. - George Dick Davies wrote: On 15/07/06, Torrey McMahon <[EMAIL PROTECTED]> wrote: eric
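The standard FMA tooling shows those events; briefly:

  # fmdump            # list fault diagnoses
  # fmdump -eV        # dump raw error events, including the ereport.fs.zfs.* ereports
  # fmadm faulty      # show currently faulted resources

fmdump -eV is usually the quickest way to see the ZFS checksum and I/O ereports mentioned above.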

Re: [zfs-discuss] How to best layout our filesystems

2006-07-25 Thread George Wilson
Karen and Sean, You mention ZFS version 6; do you mean that you are running s10u2_06? If so, then you definitely want to upgrade to the RR version of s10u2, which is s10u2_09a. Additionally, I've just putback the latest feature set and bugfixes which will be part of s10u3_03. There were some ad

Re: [zfs-discuss] How to best layout our filesystems

2006-07-28 Thread George Wilson
Robert, The patches will be available sometime late September. This may be a week or so before s10u3 actually releases. Thanks, George Robert Milkowski wrote: Hello eric, Thursday, July 27, 2006, 4:34:16 AM, you wrote: ek> Robert Milkowski wrote: Hello George, Wednesday, July 26, 2006,

[zfs-discuss] Re: Canary is now running latest code and has a 3 disk raidz ZFS volume

2006-07-30 Thread George Wilson
pretty amazing that we have 800 servers, 30,000 users, 140 million lines of ASCII per day all fitting in a 2u T2000 box! thanks sean George Wilson wrote: Sean, Sorry for the delay getting back to you. You can do a 'zpool upgrade' to see what version of the on-disk format you

Re: [zfs-discuss] 6424554

2006-07-30 Thread George Wilson
Robert, We are looking to try to get patches out by late September which will include this and many other fixes. I'll post all the changes in another thread. Thanks, George Robert Milkowski wrote: Hello Fred, Friday, July 28, 2006, 12:37:22 AM, you wrote: FZ> Hi Robert, FZ> The fix for 6

[zfs-discuss] Solaris 10 ZFS Update

2006-07-31 Thread George Wilson
We have putback a significant number of fixes and features from OpenSolaris into what will become Solaris 10 11/06. For reference here's the list: Features: PSARC 2006/223 ZFS Hot Spares 6405966 Hot Spare support in ZFS PSARC 2006/303 ZFS Clone Promotion 6276916 support for "clone swap" PSARC 2

Re: [zfs-discuss] Solaris 10 ZFS Update

2006-07-31 Thread George Wilson
Rainer, This will hopefully go into build 06 of s10u3. It's on my list... :-) Thanks, George Rainer Orth wrote: George Wilson <[EMAIL PROTECTED]> writes: We have putback a significant number of fixes and features from OpenSolaris into what will become Solaris 10 11/06. For refer

Re: [zfs-discuss] Solaris 10 ZFS Update

2006-07-31 Thread George Wilson
I forgot to highlight that RAIDZ2 (a.k.a RAID-6) is also in this wad: 6417978 double parity RAID-Z a.k.a. RAID6 Thanks, George George Wilson wrote: We have putback a significant number of fixes and features from OpenSolaris into what will become Solaris 10 11/06. For reference here's
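For anyone trying it, creation is a one-liner; device names here are purely illustrative:

  # zpool create tank raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0

A raidz2 vdev survives any two disk failures, at the cost of two disks' worth of parity.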

Re: [zfs-discuss] Solaris 10 ZFS Update

2006-07-31 Thread George Wilson
Grant, Expect patches late September or so. Once available I'll post the patch information. Thanks, George grant beattie wrote: On Mon, Jul 31, 2006 at 11:51:09AM -0400, George Wilson wrote: We have putback a significant number of fixes and features from OpenSolaris into what will b

[zfs-discuss] Re: [Fwd: [zones-discuss] Zone boot problems after installing patches]

2006-08-02 Thread George Wilson
Dave, I'm copying the zfs-discuss alias on this as well... It's possible that not all necessary patches have been installed or they may be hitting CR# 6428258. If you reboot the zone does it continue to end up in maintenance mode? Also do you know if the necessary ZFS/Zones patches have been u

Re: [zfs-discuss] Re: [Fwd: [zones-discuss] Zone boot problems after installing patches]

2006-08-03 Thread George Wilson
1-11 SunOS 5.8_x86 5.9_x86 5.10_x86: Live Upgrade Patch Thanks, George George Wilson wrote: Dave, I'm copying the zfs-discuss alias on this as well... It's possible that not all necessary patches have been installed or they maybe hitting CR# 6428258. If you reboot the zone does it

Re: [zfs-discuss] Re: SPEC SFS97 benchmark of ZFS,UFS,VxFS

2006-08-07 Thread George Wilson
Leon, Looking at the corefile doesn't really show much from the zfs side. It looks like you were having problems with your san though: /scsi_vhci/[EMAIL PROTECTED] (ssd5) offline /scsi_vhci/[EMAIL PROTECTED] (ssd5) multipath status: failed, path /[EMAIL PROTECTED],70/SUNW,[EMAIL PROTECTED

Re: [zfs-discuss] Querying ZFS version?

2006-08-08 Thread George Wilson
Luke, You can run 'zpool upgrade' to see what on-disk version you are capable of running. If you have the latest features then you should be running version 3: hadji-2# zpool upgrade This system is currently running ZFS version 3. Unfortunately this won't tell you if you are running the late

Re: [zfs-discuss] Re: System hangs on SCSI error

2006-08-09 Thread George Wilson
Brad, I'm investigating a similar issue and would like to get a coredump if you have one available. Thanks, George Brad Plecs wrote: I have similar problems ... I have a bunch of D1000 disk shelves attached via SCSI HBAs to a V880. If I do something as simple as unplug a drive in a raidz v

Re: [zfs-discuss] Re: System hangs on SCSI error

2006-08-10 Thread George Wilson
Brad, I have a suspicion about what you might be seeing and I want to confirm it. If it locks up again you can also collect a threadlist: echo "$<threadlist" | mdb -k. The core dump timed out (related to the SCSI bus reset?), so I don't have one. I can try it again, though, it's easy enough to reproduce. I was

Re: [zfs-discuss] system unresponsive after issuing a zpool attach

2006-08-16 Thread George Wilson
I believe this is what you're hitting: 6456888 zpool attach leads to memory exhaustion and system hang We are currently looking at fixing this so stay tuned. Thanks, George Daniel Rock wrote: Joseph Mocker schrieb: Today I attempted to upgrade to S10_U2 and migrate some mirrored UFS SVM part

Re: [zfs-discuss] SPEC SFS97 benchmark of ZFS,UFS,VxFS

2006-08-18 Thread George Wilson
Frank, The SC 3.2 beta may be closed but I'm forwarding your request to Eric Redmond. Thanks, George Frank Cusack wrote: On August 10, 2006 6:04:38 PM -0700 eric kustarz <[EMAIL PROTECTED]> wrote: If you're doing HA-ZFS (which is SunCluster 3.2 - only available in beta right now), Is the 3

Re: [zfs-discuss] ZFS questions with mirrors

2006-08-21 Thread George Wilson
Peter, Are you sure your customer is not hitting this: 6456939 sd_send_scsi_SYNCHRONIZE_CACHE_biodone() can issue TUR which calls biowait()and deadlock/hangs host I have a fix that you could have your customer try. Thanks, George Peter Wilk wrote: IHAC that is asking the following. any tho

Re: [zfs-discuss] Re: SCSI synchronize cache cmd

2006-08-22 Thread George Wilson
Roch wrote: Dick Davies writes: > On 22/08/06, Bill Moore <[EMAIL PROTECTED]> wrote: > > On Mon, Aug 21, 2006 at 02:40:40PM -0700, Anton B. Rang wrote: > > > Yes, ZFS uses this command very frequently. However, it only does this > > > if the whole disk is under the control of ZFS, I believe

Re: [zfs-discuss] zpool hangs

2006-08-24 Thread George Wilson
Robert, One of your disks is not responding. I've been trying to track down why the scsi command is not being timed out but for now check out each of the devices to make sure they are healthy. BTW, if you capture a corefile let me know. Thanks, George Robert Milkowski wrote: Hi. S10U2 +

Re: [zfs-discuss] zpool hangs

2006-08-24 Thread George Wilson
Robert Milkowski wrote: Hello George, Thursday, August 24, 2006, 5:48:08 PM, you wrote: GW> Robert, GW> One of your disks is not responding. I've been trying to track down why GW> the scsi command is not being timed out but for now check out each of GW> the devices to make sure they are heal

Re: [zfs-discuss] Re: Re: zpool status panics server

2006-08-25 Thread George Wilson
Neal, This is not fixed yet. Your best bet is to run a replicated pool. Thanks, George Neal Miskin wrote: Hi Dana It is ZFS bug 6322646; a flaw. Is this fixed in a patch yet? nelly_bo

Re: [zfs-discuss] Re: Significant "pauses" during zfs writes

2006-08-28 Thread George Wilson
A fix for this should be integrated shortly. Thanks, George Michael Schuster - Sun Microsystems wrote: Robert Milkowski wrote: Hello Michael, Wednesday, August 23, 2006, 12:49:28 PM, you wrote: MSSM> Roch wrote: MSSM> I sent this output offline to Roch, here's the essential ones and (firs

Re: [zfs-discuss] ZFS causes panic on import

2006-09-04 Thread George Wilson
Stuart, Is it possible that you ran 'zpool import' on node1 and then failed it over to the other node which ran 'zpool import' on node2? If so, then the pool configuration was automatically added to zpool.cache so that the pool could be automatically loaded upon reboot. This may result in the
