Re: [zfs-discuss] Zpool problems

2009-12-07 Thread Cindy Swearingen
Hi Michael, Whenever I see commands hanging, I would first rule out any hardware issues. I'm not sure how to do that on OS X. Cindy On 12/06/09 09:14, Michael Armstrong wrote: Hi, I'm using zfs version 6 on mac os x 10.5 using the old macosforge pkg. When I'm writing files to the fs they

Re: [zfs-discuss] SMC for ZFS administration in OpenSolaris 2009.06?

2009-12-07 Thread Cindy Swearingen
On 12/07/09 09:37, Cindy Swearingen wrote: Hi Xavier, Neither the SMC interface nor the ZFS webconsole is available in OpenSolaris releases. The SMC cannot be used for ZFS administration in any Solaris release. I'm not sure what the replacement plans are but you might check with the experts

Re: [zfs-discuss] Accidentally added disk instead of attaching

2009-12-07 Thread Cindy Swearingen
I agree that zpool attach and add look similar in their syntax, but if you attempt to add a disk to a redundant config, you'll see an error message similar to the following: # zpool status export pool: export state: ONLINE scrub: none requested config: NAMESTATE READ
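For contrast, a minimal sketch of the two commands (pool and device names here are hypothetical):
# zpool attach export c1t0d0 c1t1d0
# zpool add export c1t1d0
The first mirrors c1t1d0 onto the existing disk c1t0d0; the second adds c1t1d0 as a new top-level vdev, which a redundant pool refuses with a mismatched-replication warning unless you force it with -f.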

Re: [zfs-discuss] Problems while resilvering

2009-12-07 Thread Cindy Swearingen
Hi Matthias, I'm not sure I understand all the issues that are going on in this configuration, but I don't see that you used the zpool replace command to complete physical replacement of the failed disk, which would look like this: # zpool replace performance c1t3d0 Then run zpool clear to
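A minimal sketch of the two steps named above, using the pool name from the thread:
# zpool replace performance c1t3d0
# zpool clear performance
The single-argument replace form assumes the new disk went into the same physical location as the failed one; zpool clear then resets the error counters.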

Re: [zfs-discuss] Pool resize

2009-12-07 Thread Cindy Swearingen
Hi Alex, The SXCE Admin Guide is generally up-to-date on docs.sun.com. The section that covers the autoreplace property and default behavior is here: http://docs.sun.com/app/docs/doc/817-2271/gazgd?a=view Thanks, Cindy On 12/07/09 14:50, Alexandru Pirvulescu wrote: Thank you. That fixed the

Re: [zfs-discuss] zpool import - device names not always updated?

2009-12-04 Thread Cindy Swearingen
practices guide, here, for guidelines on creating ZFS storage pools: http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide Cindy On 12/03/09 15:26, Ragnar Sundblad wrote: Thank you Cindy for your reply! On 3 dec 2009, at 18.35, Cindy Swearingen wrote: A bug might exist but you

Re: [zfs-discuss] b128a available w/deduplication

2009-12-04 Thread Cindy Swearingen
Hi Dennis, Yes, sorry for the confusion. I just added the ZFS pool version (19-22) pages in the old OpenSolaris site to avoid any problems, but it doesn't look like the newer pages are redirecting correctly from the old site. I filed a bug to alert people that the version pages have moved due

Re: [zfs-discuss] Permanent errors on two files

2009-12-04 Thread Cindy Swearingen
Hi Gary, To answer your questions, the hardware read some data and ZFS detected a problem with the checksums in this dataset and reported this problem. ZFS can do this regardless of ZFS redundancy. I don't think a scrub will fix these permanent errors, but it depends on the corruption. If it's

Re: [zfs-discuss] USB sticks show on one set of devices in zpool, different devices in format

2009-12-04 Thread Cindy Swearingen
Hi Bill, I can't comment on why your USB device names are changing, but I have seen BIOS upgrades do similar things to device names. If you must run a root pool on USB sticks, then I think you would have to boot from the LiveCD before running the BIOS upgrade. Maybe someone can comment. On Sun

Re: [zfs-discuss] USB sticks show on one set of devices in zpool, different devices in format

2009-12-04 Thread Cindy Swearingen
in device names and the beadm activate will fail something like this: ERROR: Unable to determine the configuration of the current boot environment Cindy On 12/04/09 15:26, Cindy Swearingen wrote: Hi Bill, I can't comment on why your USB device names are changing, but I have seen BIOS upgrades do

Re: [zfs-discuss] zpool import - device names not always updated?

2009-12-03 Thread Cindy Swearingen
Hi Ragnar, A bug might exist but you are building a pool based on the ZFS volumes that are created in another pool. This configuration is not supported and possible deadlocks can occur. If you can retry this example without building a pool on another pool, like using files to create a pool and

Re: [zfs-discuss] Any recommendation: what FS in DomU?

2009-12-02 Thread Cindy Swearingen
I'm not sure we have any LDOMs experts on this list. You might try reposting this query on the LDOMs discuss list, which I think is this one: http://forums.sun.com/forum.jspa?forumID=894 Thanks, Cindy On 12/02/09 08:17, Andre Boegelsack wrote: Hi to all, I have a short question regarding

Re: [zfs-discuss] ZFS dedup issue

2009-12-02 Thread Cindy Swearingen
Hi Jim, Nevada build 128 had some problems, so it will not be released. The dedup space fixes should be available in build 129. Thanks, Cindy On 12/02/09 02:37, Jim Klimov wrote: Hello all Sorry for bumping an old thread, but now that snv_128 is due to appear as a public DVD download, I

Re: [zfs-discuss] Any recommendation: what FS in DomU?

2009-12-02 Thread Cindy Swearingen
Apparently, I don't know a DomU from a LDOM... I should have pointed you to the Xen discussion list, here: http://opensolaris.org/jive/forum.jspa?forumID=53 Cindy On 12/02/09 08:58, Cindy Swearingen wrote: I'm not sure we have any LDOMs experts on this list. You might try reposting

Re: [zfs-discuss] How many TLDs should you have?

2009-12-01 Thread Cindy Swearingen
Hi Chris, If you have 40 or so disks then you would create 5-6 RAIDZ virtual devices of 7-8 disks each, or possibly include two disks for the root pool, two disks as spares, and then 36 (4 RAIDZ vdevs of 6 disks) disks for a non-root pool. This configuration guide hasn't been updated for

Re: [zfs-discuss] How many TLDs should you have?

2009-12-01 Thread Cindy Swearingen
. Like I said, our storage group presents 15G LUNs to use -- so it'd be difficult to keep the TLDs under 9 and have a very large filesystem. Let me know what you think. Thanks! Chris On Tue, Dec 1, 2009 at 10:47 AM, Cindy Swearingen cindy.swearin...@sun.com

Re: [zfs-discuss] Adding drives to system - disk labels not consistent

2009-12-01 Thread Cindy Swearingen
I was able to reproduce this problem on the latest Nevada build: # zpool create tank raidz c1t2d0 c1t3d0 c1t4d0 # zpool add -n tank raidz c1t5d0 c1t6d0 c1t7d0 would update 'tank' to the following configuration: tank raidz1 c1t2d0 c1t3d0
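For reference, the dry-run and actual forms of the add (device names as in the reproduction):
# zpool add -n tank raidz c1t5d0 c1t6d0 c1t7d0
# zpool add tank raidz c1t5d0 c1t6d0 c1t7d0
The -n form only previews the resulting configuration, and it is that preview output that truncates the trailing d0; the actual add records the full device names.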

Re: [zfs-discuss] Adding drives to system - disk labels not consistent

2009-11-30 Thread Cindy Swearingen
Hi Stuart, Which Solaris release are you seeing this behavior in? I would like to reproduce it and file a bug, if necessary. Thanks, Cindy On 11/29/09 13:06, Stuart Reid wrote: Answered my own question... When using the -n switch the output is truncated i.e. the d0 is not printed. When actually

Re: [zfs-discuss] Help needed to find out where the problem is

2009-11-26 Thread Cindy Swearingen
Hi all, on a x4500 with a relatively well patched Sol10u8 # uname -a SunOS s13 5.10 Generic_141445-09 i86pc i386 i86pc I've started a scrub after about 2 weeks of operation and have a lot of checksum errors: s13:~# zpool status

Re: [zfs-discuss] Solaris10 10/09 ZFS shared via CIFS?

2009-11-23 Thread Cindy Swearingen
Thanks old friend. I was surprised to read in the S10 zfs man page that there was the option sharesmb=on. I thought I had missed the CIFS server making S10 whilst I

Re: [zfs-discuss] building zpools on device aliases

2009-11-17 Thread Cindy Swearingen
Hi Sean, I sympathize with your intentions but providing pseudo-names for these disks might cause more confusion than actual help. The c4t5... name isn't so bad. I've seen worse. :-) Here are the issues with using the aliases: - If a device fails on a J4200, an LED will indicate which disk has

Re: [zfs-discuss] permanent files error, unable to access pool

2009-11-16 Thread Cindy Swearingen
Hi Daniel, Unfortunately, the permanent errors are in this pool's metadata so it is unlikely that this pool can be recovered. Is this an external USB drive? These drives are not always well-behaved and it's possible that it didn't synchronize successfully. Is the data accessible? I don't know

Re: [zfs-discuss] permanent files error, unable to access pool

2009-11-16 Thread Cindy Swearingen
Hi Daniel, In some cases, when I/O is suspended, permanent errors are logged and you need to run a zpool scrub to clear the errors. Are you saying that a zpool scrub cleared the errors that were displayed in the zpool status output? Or, did you also use zpool clear? Metadata is duplicated even
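A hedged sketch of the sequence being discussed, with a hypothetical pool name:
# zpool status -v tank
# zpool scrub tank
# zpool clear tank
Here zpool status -v lists the objects with permanent errors, the scrub revalidates the pool, and zpool clear resets the error counters afterward.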

Re: [zfs-discuss] heads up on SXCE build 125 (LU + mirrored root pools)

2009-11-14 Thread Cindy Swearingen
Seems like upgrading from b126 to b127 will have the same problem. Yes, good point. I provided a blurb about this issue, here: http://www.solarisinternals.com/wiki/index.php/ZFS_Troubleshooting_Guide#Live_Upgrade_Problem_.28Starting_in_Nevada.2C_build_125.29 It's a good idea to review this

[zfs-discuss] [Fwd: [osol-announce] IMPT: Infrastructure upgrade this weekend, 11/13-15]

2009-11-13 Thread Cindy Swearingen
Original Message Subject: [osol-announce] IMPT: Infrastructure upgrade this weekend, 11/13-15 Date: Wed, 11 Nov 2009 12:37:19 -0800 From: Derek Cicero derek.cic...@sun.com Reply-To: mai...@opensolaris.org To: opensolaris-annou...@opensolaris.org All, Due to infrastructure

Re: [zfs-discuss] zpool not growing after drive upgrade

2009-11-12 Thread Cindy Swearingen
Hi Tim, In a pool with mixed disk sizes, ZFS can use only the amount of disk space that is equal to the smallest disk, and spares aren't included in pool size until they are used. In your RAIDZ-2 pool, this is equivalent to 10 500 GB disks, which should be about 5 TB. I think you are running a

Re: [zfs-discuss] zpool not growing after drive upgrade

2009-11-12 Thread Cindy Swearingen
Cook wrote: On Thu, Nov 12, 2009 at 4:05 PM, Cindy Swearingen cindy.swearin...@sun.com wrote: Hi Tim, In a pool with mixed disk sizes, ZFS can use only the amount of disk space that is equal to the smallest disk and spares aren't included in pool

Re: [zfs-discuss] zfs eradication

2009-11-11 Thread Cindy Swearingen
This feature is described in this RFE: http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=4930014 Secure delete option: erase blocks after they're freed cs On 11/11/09 09:17, Darren J Moffat wrote: Brian Kolaci wrote: Hi, I was discussing the common practice of disk eradication used

Re: [zfs-discuss] Odd sparing problem

2009-11-11 Thread Cindy Swearingen
Cook wrote: On Tue, Nov 10, 2009 at 4:38 PM, Cindy Swearingen cindy.swearin...@sun.com wrote: Hi Tim, I'm not sure I understand this output completely, but have you tried detaching the spare? Cindy Hey Cindy, Detaching did in fact solve

Re: [zfs-discuss] marvell88sx2 driver build126

2009-11-10 Thread Cindy Swearingen
Hi Orvar, Correct, I don't see any marvell88sx2 driver changes between b125-126. So far, only you and Tim are reporting these issues. Generally, we see bugs filed by the internal test teams if they see similar problems. I will try to reproduce the RAIDZ checksum errors separately from the

Re: [zfs-discuss] Odd sparing problem

2009-11-10 Thread Cindy Swearingen
Hi Tim, I'm not sure I understand this output completely, but have you tried detaching the spare? Cindy On 11/10/09 09:21, Tim Cook wrote: So, I currently have a pool with 12 disks raid-z2 (12+2). As you may have seen in the other thread, I've been having on and off issues with b126

Re: [zfs-discuss] marvell88sx2 driver build126

2009-11-09 Thread Cindy Swearingen
Hi, I can't find any bug-related issues with marvell88sx2 in b126. I looked over Dave Hollister's shoulder while he searched for marvell in his webrevs of this putback and nothing came up: driver change with build 126? not for the SATA framework, but for HBAs there is:

Re: [zfs-discuss] marvell88sx2 driver build126

2009-11-07 Thread Cindy Swearingen
Hi Tim and all, I believe you are saying that marvell88sx2 driver error messages started in build 126, along with new disk errors in RAIDZ pools. Is this correct? If so, please send me the following information: 1. Hardware you are running 2. If you are also seeing new disk errors in your

Re: [zfs-discuss] heads up on SXCE build 125 (LU + mirrored root pools)

2009-11-05 Thread Cindy Swearingen
Hi Rich, In build 125, the device naming changed for redundant pools. LU doesn't understand the new device naming if you have a mirrored root pool. I believe an upgrade from 121 to 126 will be okay. Any LU operation on your build 126 system will likely fail unless you follow Casper's steps for

Re: [zfs-discuss] MPxIO and removing physical devices

2009-11-04 Thread Cindy Swearingen
Hi Karl, Welcome to Solaris/ZFS land ... ZFS administration is pretty easy but our device administration is more difficult. I'll probably bungle this response because I don't have similar hardware and I hope some expert will correct me. I think you will have to experiment with various forms

Re: [zfs-discuss] Location of ZFS documentation (source)?

2009-11-03 Thread Cindy Swearingen
Alex, You can download the man page source files from this URL: http://dlc.sun.com/osol/man/downloads/current/ If you want a different version, you can navigate to the available source consolidations from the Downloads page on opensolaris.org. Thanks, Cindy On 11/02/09 16:39, Cindy

Re: [zfs-discuss] Solaris disk confusion ?

2009-11-03 Thread Cindy Swearingen
Hi David, This RFE is filed for this feature: http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6893282 Allow the zpool command to wipe labels from disks Cindy On 11/03/09 09:00, David Dyer-Bennet wrote: On Mon, November 2, 2009 20:23, Marion Hakanson wrote: You'll need to give

Re: [zfs-discuss] Location of ZFS documentation (source)?

2009-11-02 Thread Cindy Swearingen
Hi Alex, I'm checking with some folks on how we handled this handoff for the previous project. I'll get back to you shortly. Thanks, Cindy On 11/02/09 16:07, Alex Blewitt wrote: The man pages documentation from the old Apple port

Re: [zfs-discuss] adding new disk to pool

2009-10-29 Thread Cindy Swearingen
Hi Dan, Could you provide a bit more information, such as: 1. zpool status output for tank 2. the format entries for c0d0 and c1d1 Thanks, Cindy - Original Message - From: Daniel dan.lis...@gmail.com Date: Thursday, October 29, 2009 9:59 am Subject: [zfs-discuss] adding new disk to

Re: [zfs-discuss] adding new disk to pool

2009-10-29 Thread Cindy Swearingen
ONLINE 0 0 0 errors: No known data errors format current Current Disk = c1d1 ST315003- 6VS08NK-0001-16777215. /p...@0,0/pci-...@1f,2/i...@0/c...@1,0 On Thu, Oct 29, 2009 at 12:04 PM, Cindy Swearingen cindy.swearin...@sun.com wrote: Hi

Re: [zfs-discuss] adding new disk to pool

2009-10-29 Thread Cindy Swearingen
cannot create 'tank2': invalid argument for this pool operation Thanks for your help. On Thu, Oct 29, 2009 at 1:54 PM, Cindy Swearingen cindy.swearin...@sun.com wrote: I might need to see the format--partition output for both c0d0 and c1d1

Re: [zfs-discuss] zpool getting in a stuck state?

2009-10-27 Thread Cindy Swearingen
Jeremy, I generally suspect device failures in this case and if possible, review the contents of /var/adm/messages and fmdump -eV to see if the pool hang could be attributed to failed or failing devices. Cindy On 10/26/09 17:28, Jeremy Kitchen wrote: Cindy Swearingen wrote: Hi Jeremy, Can

Re: [zfs-discuss] resolve zfs properties default to actual value

2009-10-27 Thread Cindy Swearingen
Hi Frederik, In most cases, you can use the zfs get syntax below or you can use the zfs get all fs-name to review all current property settings. The checksum property is a bit different in that you need to review the zfs.1m man page checksum property description to determine the value of the
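For example, with a hypothetical dataset name:
# zfs get checksum tank/home
# zfs get all tank/home
A displayed value of on maps to a default algorithm, which is why the zfs(1M) checksum property description is needed to interpret it.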

Re: [zfs-discuss] zpool getting in a stuck state?

2009-10-27 Thread Cindy Swearingen
this device until it is replaced. If you have another device available, you might replace the suspect drive and see if that solves the pool hang problem. Cindy On 10/27/09 12:04, Jeremy Kitchen wrote: Cindy Swearingen wrote: Jeremy, I generally suspect device failures in this case

Re: [zfs-discuss] zpool getting in a stuck state?

2009-10-27 Thread Cindy Swearingen
, which would allow reads to continue in case of a device failure, might prevent the pool from hanging. If offlining the disk or replacing the disk doesn't help, let us know. Cindy On 10/27/09 13:13, Jeremy Kitchen wrote: Jeremy Kitchen wrote: Cindy Swearingen wrote: Jeremy, I generally suspect

Re: [zfs-discuss] Checksums

2009-10-26 Thread Cindy Swearingen
Hi Ross, The CR ID is 6740597: zfs fletcher-2 is losing its carries Integrated in Nevada build 114 and the Solaris 10 10/09 release. This CR didn't get a companion man page bug to update the docs so I'm working on that now. The opensolaris.org site seems to be in the middle of its migration

Re: [zfs-discuss] zpool getting in a stuck state?

2009-10-26 Thread Cindy Swearingen
Hi Jeremy, Can you use the command below and send me the output, please? Thanks, Cindy # mdb -k ::stacks -m zfs On 10/26/09 11:58, Jeremy Kitchen wrote: Jeremy Kitchen wrote: Hey folks! We're using zfs-based file servers for our backups and we've been having some issues as of late with

Re: [zfs-discuss] cryptic vdev name from fmdump

2009-10-23 Thread Cindy Swearingen
Hi Sean, A better way probably exists but I use fmdump -eV to identify the pool and the device information (vdev_path) that is listed like this: # fmdump -eV | more . . . pool = test pool_guid = 0x6de45047d7bde91d pool_context = 0 pool_failmode = wait
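To pull out just the pool and device fields, one possible filter (assuming the field names shown above appear on their own lines):
# fmdump -eV | egrep 'pool =|vdev_path'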

Re: [zfs-discuss] Zpool issue question

2009-10-23 Thread Cindy Swearingen
Hi Karim, All ZFS storage pools are going to use some amount of space for metadata and in this example it looks like 3 GB. This is what the difference between zpool list and zfs list is telling you. No other way exists to calculate the space that is consumed by metadata. pool space (199 GB)
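A side-by-side sketch, with a hypothetical pool name:
# zpool list tank
# zfs list tank
Per the explanation above, the difference between the two figures is the space consumed by pool metadata.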

Re: [zfs-discuss] cryptic vdev name from fmdump

2009-10-23 Thread Cindy Swearingen
I'm stumped too. Someone with more FM* experience needs to comment. Cindy On 10/23/09 14:52, sean walmsley wrote: Thanks for this information. We have a weekly scrub schedule, but I ran another just to be sure :-) It completed with 0 errors. Running fmdump -eV gives: TIME

Re: [zfs-discuss] heads up on SXCE build 125 (LU + mirrored root pools)

2009-10-23 Thread Cindy Swearingen
Probably if you try to use any LU operation after you have upgraded to build 125. cs On 10/23/09 16:18, Chris Du wrote: Sorry, do you mean luupgrade from previous versions or from 125 to future versions? I luupgrade from 124 to 125 with mirrored root pool and everything is working fine.

Re: [zfs-discuss] raidz ZFS Best Practices wiki inconsistency

2009-10-22 Thread Cindy Swearingen
Thanks for your comments, Frank. I will take a look at the inconsistencies. Cindy On 10/22/09 08:29, Frank Cusack wrote: http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide#RAID-Z_Configuration_Requirements_and_Recommendations says that the number of disks in a RAIDZ

Re: [zfs-discuss] SNV_125 MPT warning in logfile

2009-10-22 Thread Cindy Swearingen
Hi Bruno, I see some bugs associated with these messages (6694909) that point to an LSI firmware upgrade that causes these harmless errors to display. According to the 6694909 comments, this issue is documented in the release notes. As they are harmless, I wouldn't worry about them. Maybe

Re: [zfs-discuss] ZFS disk failure question

2009-10-22 Thread Cindy Swearingen
Hi Jason, Since spare replacement is an important process, I've rewritten this section to provide 3 main examples, here: http://docs.sun.com/app/docs/doc/817-2271/gcvcw?a=view Scroll down the section: Activating and Deactivating Hot Spares in Your Storage Pool Example 4–7 Manually Replacing

Re: [zfs-discuss] fault.fs.zfs.vdev.io

2009-10-21 Thread Cindy Swearingen
Hi Matthew, You can use various forms of fmdump to decode this output. It might be easier to use fmdump -eV and look for the device info in the vdev path entry, like the one below. Also see if the errors on these vdevs are reported in your zpool status output. Thanks, Cindy # fmdump -eV |

Re: [zfs-discuss] zvol used apparently greater than volsize for sparse volume

2009-10-21 Thread Cindy Swearingen
this is resolved, is there some documentation available that will let me calculate this by hand? I would like to know how large the current 3-4% meta data storage I am observing can potentially grow. Thanks. On Oct 20, 2009, at 8:57 AM, Cindy Swearingen wrote: Hi Stuart, The reason why used

Re: [zfs-discuss] Exported zpool cannot be imported or deleted.

2009-10-21 Thread Cindy Swearingen
Hi Stacy, Can you try to forcibly create a new pool using the devices from the corrupted pool, like this: # zpool create -f newpool disk1 disk2 ... Then, destroy this pool, which will release the devices. This CR has been filed to help resolve the pool cruft problem: 6893282 Allow the zpool
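The two steps as a sketch, with placeholder device names:
# zpool create -f newpool disk1 disk2
# zpool destroy newpool
Destroying newpool is what releases the devices for reuse.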

Re: [zfs-discuss] zvol used apparently greater than volsize for sparse volume

2009-10-20 Thread Cindy Swearingen
Hi Stuart, The reason why used is larger than the volsize is because we aren't accounting for metadata, which is covered by this CR: http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6429996 6429996 zvols don't reserve enough space for requisite meta data Metadata is usually only a

Re: [zfs-discuss] The ZFS FAQ needs an update

2009-10-19 Thread Cindy Swearingen
It's updated now. Thanks for mentioning it. Cindy On 10/18/09 10:19, Sriram Narayanan wrote: All: Given that the latest S10 update includes user quotas, the FAQ here [1] may need an update -- Sriram [1] http://opensolaris.org/os/community/zfs/faq/#zfsquotas

Re: [zfs-discuss] Numbered vdevs

2009-10-19 Thread Cindy Swearingen
Hi Markus, The numbered VDEVs listed in your zpool status output facilitate log device removal that integrated into build 125. Eventually, they will also be used for removal of redundant devices when device removal integrates. In build 125, if you create a pool with mirrored log devices, and

Re: [zfs-discuss] Interesting bug with picking labels when expanding a slice where a pool lives

2009-10-19 Thread Cindy Swearingen
Hi Tomas, I think you are saying that you are testing what happens when you increase a slice under a live ZFS storage pool and then reviewing the zdb output of the disk labels. Increasing a slice under a live ZFS storage pool isn't supported and might break your pool. I think you are seeing

Re: [zfs-discuss] Liveupgrade'd to U8 and now can't boot previous U6 BE :(

2009-10-19 Thread Cindy Swearingen
We are working on evaluating all the issues and will get problem descriptions and resolutions posted soon. I've asked some of you to contact us directly to provide feedback and hope those wheels are turning. So far, we have these issues: 1. Boot failure after LU with a separate var dataset.

Re: [zfs-discuss] Interesting bug with picking labels when expanding a slice where a pool lives

2009-10-19 Thread Cindy Swearingen
, Tomas Ögren wrote: On 19 October, 2009 - Cindy Swearingen sent me these 2,4K bytes: Hi Tomas, I think you are saying that you are testing what happens when you increase a slice under a live ZFS storage pool and then reviewing the zdb output of the disk labels. Increasing a slice under a live

[zfs-discuss] heads up on SXCE build 125 (LU + mirrored root pools)

2009-10-19 Thread Cindy Swearingen
Hi everyone, Currently, the device naming changes in build 125 mean that you cannot use Solaris Live Upgrade to upgrade or patch a ZFS root dataset in a mirrored root pool. If you are considering this release for the ZFS log device removal feature, then also consider that you will not be able

Re: [zfs-discuss] Stupid to have 2 disk raidz?

2009-10-15 Thread Cindy Swearingen
Hi Greg, With two disks, I would start with a mirror. Then, you could add two more disks for expansion. You can also detach disks in a mirrored configuration. Or, you could attach another disk to create a 3-way mirror. With a RAIDZ configuration, you would not be able to expand the two disks to
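A sketch of the expansion paths described, with hypothetical device names:
# zpool create tank mirror c0t0d0 c0t1d0
# zpool add tank mirror c0t2d0 c0t3d0
# zpool attach tank c0t0d0 c0t4d0
The add command expands the pool with a second mirrored pair; the attach command instead grows the existing mirror into a 3-way mirror.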

Re: [zfs-discuss] ZFS Swap Doesn't Survive Reboot - OSOL-124

2009-10-15 Thread Cindy Swearingen
Rodney, I added a second swap device to my OSOL 2009.06 laptop and my system running Nevada build 124. I can't reproduce this. Both swap devices appear after reboot. I would agree with Darren's comments that copies=2 is a better configuration for a one-disk pool. The fact that you can attach a

Re: [zfs-discuss] primarycache and secondarycache properties on Solaris 10 u8

2009-10-15 Thread Cindy Swearingen
Other than how to turn these features on and off, only so much performance-related info can be shoehorned into a man page. You might check out these blogs: http://blogs.sun.com/roch/entry/people_ask_where_are_we See the direct I/O section

Re: [zfs-discuss] ZFS Swap Doesn't Survive Reboot - OSOL-124

2009-10-14 Thread Cindy Swearingen
Hi Rodney, I've not seen this problem. Did you install using LiveCD or the automated installer? Here are some things to try/think about: 1. After a reboot with no swap or dump devices, run this command: # zfs volinit If this works, then this command isn't getting run on boot. Let me know

Re: [zfs-discuss] ZFS disk failure question

2009-10-14 Thread Cindy Swearingen
Hi Jason, I think you are asking how do you tell ZFS that you want to replace the failed disk c8t7d0 with the spare, c8t11d0? I just tried do this on my Nevada build 124 lab system, simulating a disk failure and using zpool replace to replace the failed disk with the spare. The spare is now
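A hedged sketch using the device names from the thread (the pool name is hypothetical, and this assumes the spare has not already kicked in automatically):
# zpool replace tank c8t7d0 c8t11d0
If the spare did activate on its own, detaching the failed disk (# zpool detach tank c8t7d0) makes the spare's takeover permanent, as the follow-ups in this thread describe.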

Re: [zfs-discuss] ZFS disk failure question

2009-10-14 Thread Cindy Swearingen
0 0 c0t5d0 ONLINE 0 0 0 c0t7d0 ONLINE 0 0 0 48.5K resilvered errors: No known data errors On 10/14/09 15:23, Eric Schrock wrote: On 10/14/09 14:17, Cindy Swearingen wrote: Hi Jason, I think you are asking how do you tell ZFS

Re: [zfs-discuss] ZFS disk failure question

2009-10-14 Thread Cindy Swearingen
searchable What to do when you have a zfs disk failure with lots of examples would be great. There are a lot of attempts out there, but nothing I've found is comprehensive. Jason On Wed, Oct 14, 2009 at 4:23 PM, Eric Schrock eric.schr...@sun.com wrote: On 10/14/09 14:17, Cindy Swearingen wrote: Hi

Re: [zfs-discuss] ZFS disk failure question

2009-10-14 Thread Cindy Swearingen
/09 16:02, Eric Schrock wrote: On 10/14/09 14:33, Cindy Swearingen wrote: Hi Eric, I tried that and found that I needed to detach and remove the spare before replacing the failed disk with the spare disk. You should just be able to detach 'c0t6d0' in the config below. The spare (c0t7d0

Re: [zfs-discuss] How to resize ZFS partion or add a new one?

2009-10-13 Thread Cindy Swearingen
Hi-- Unfortunately, you cannot change the partitioning underneath your pool. I don't see any way of resizing this partition except for backing up your data, repartitioning the disk, and reinstalling Opensolaris 2009.06. Maybe someone else has a better idea... Cindy On 10/13/09 06:32, Julio

Re: [zfs-discuss] How to resize ZFS partion or add a new one?

2009-10-13 Thread Cindy Swearingen
Except that you can't add a disk or partition to a root pool: # zpool add rpool c1t1d0s0 cannot add to 'rpool': root pool can not have multiple vdevs or separate logs He could try to attach the partition to his existing pool, I'm not sure how, and this would only create a mirrored root pool,
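If he does try the attach route, the form would presumably be something like the following, where the existing root device c0t0d0s0 is an assumption, and the new slice would also need an SMI label plus boot blocks (installgrub/installboot) to be bootable:
# zpool attach rpool c0t0d0s0 c1t1d0s0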

Re: [zfs-discuss] How to resize ZFS partion or add a new one?

2009-10-13 Thread Cindy Swearingen
to backup his pool, reconfigure and expand the solaris2 partition, and then reinstall OpenSolaris. Cindy On 10/13/09 10:47, Cindy Swearingen wrote: Except that you can't add a disk or partition to a root pool: # zpool add rpool c1t1d0s0 cannot add to 'rpool': root pool can not have multiple vdevs

Re: [zfs-discuss] use zpool directly w/o create zfs

2009-10-12 Thread Cindy Swearingen
Hua, The behavior below is described here: http://docs.sun.com/app/docs/doc/819-5461/setup-1?a=view The top-level /tank file system cannot be removed so it is less flexible than using descendant datasets. If you want to create a snapshot or clone and later promote the /tank clone, then it is

Re: [zfs-discuss] convert raidz from osx

2009-10-08 Thread Cindy Swearingen
Dirk, I'm not sure I'm following you exactly but this is what I think you are trying to do: You have a RAIDZ pool that is built with slices and you are trying to convert the slice configuration to whole disks. This isn't possible because you are trying to replace the same disk. This is what

Re: [zfs-discuss] Unable to import pool: invalid vdev configuration

2009-10-06 Thread Cindy Swearingen
Hi Osvald, If you physically replaced the failed disk with even a slightly smaller disk in a RAIDZ pool and ran the zpool replace command, you would have seen a message similar to the following: # zpool replace rescamp c0t6d0 c2t2d0 cannot replace c0t6d0 with c2t2d0: device is too small Did

Re: [zfs-discuss] How can I destroy an exported, corrupt zpool?

2009-10-06 Thread Cindy Swearingen
Hi Stacy, If you can't import the pool, then it is difficult to remove the disks. If the pool had enough redundancy, you could attempt to unconfigure the corrupted disks with cfgadm and then try to import the pool. Until we have a zpool clean feature, you could wipe the disk labels with dd in

Re: [zfs-discuss] Unable to import pool: invalid vdev configuration

2009-10-05 Thread Cindy Swearingen
Hi Osvald, Can you comment on how the disks shrank or how the labeling on these disks changed? We would like to track the issues that cause the hardware underneath a live pool to change so that we can figure out how to prevent pool failures in the future. Thanks, Cindy On 10/03/09 09:46,

Re: [zfs-discuss] Replacing a failed drive

2009-10-02 Thread Cindy Swearingen
Yes, you can use the zpool replace process with any kind of drive: failed, failing, or even healthy. cs On 10/02/09 12:15, Dan Transue wrote: Does the same thing apply for a failing drive? I have a drive that has not failed but by all indications, it's about to Can I do the same thing
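A minimal sketch of replacing a still-working but failing disk with a new one in a different slot (pool and device names are hypothetical):
# zpool replace tank c1t2d0 c1t3d0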

Re: [zfs-discuss] Best way to convert checksums

2009-10-02 Thread Cindy Swearingen
Ray, The checksums are set on the file systems not the pool. If a new checksum is set and *you* rewrite the data, then the rewritten data will contain the new checksum. If your pool has the space for you to duplicate the user data and new checksum is set, then the duplicated data will have

Re: [zfs-discuss] RAIDZ v. RAIDZ1

2009-10-01 Thread Cindy Swearingen
Hi David, Which Solaris release is this? Are you sure you are using the same ZFS command to review the sizes of the raidz1 and raidz pools? The zpool list and zfs list commands will display different values. See the output below of my tank pool created with raidz or raidz1 redundancy. The pool

Re: [zfs-discuss] Best way to convert checksums

2009-10-01 Thread Cindy Swearingen
You are correct. The zpool create -O option isn't available in a Solaris 10 release but will be soon. This will allow you to set the file system checksum property when the pool is created: # zpool create -O checksum=sha256 pool c1t1d0 # zfs get checksum pool NAME PROPERTY VALUE SOURCE

Re: [zfs-discuss] RAIDZ v. RAIDZ1

2009-10-01 Thread Cindy Swearingen
David, When you get back to the original system, it would be helpful if you could provide a side-by-side comparison of the zpool create syntax and the zfs list output of both pools. Thanks, Cindy On 10/01/09 13:48, David Stewart wrote: Cindy: I am not at the machine right now, but I

Re: [zfs-discuss] OS install question

2009-09-28 Thread Cindy Swearingen
Hi Ron, Any reason why you want to use slices except for the root pool? I would recommend a 4-disk configuration like this: mirrored root pool on c1t0d0s0 and c2t0d0s0 mirrored app pool on c1t1d0 and c2t1d0 Let the install use one big slice for each disk in the mirrored root pool, which is

Re: [zfs-discuss] Collecting hardware configurations (was Re: White box server for OpenSolaris)

2009-09-25 Thread Cindy Swearingen
The opensolaris.org site will be transitioning to a wiki-based site soon, as described here: http://www.opensolaris.org/os/about/faq/site-transition-faq/ I think it would be best to use the new site to collect this information because it will be much easier for community members to contribute.

Re: [zfs-discuss] selecting zfs BE from OBP

2009-09-25 Thread Cindy Swearingen
Hi Donour, You would use the boot -L syntax to select the ZFS BE to boot from, like this: ok boot -L Rebooting with command: boot -L Boot device: /p...@8,60/SUNW,q...@4/f...@0,0/d...@w2104cf7fa6c7,0:a File and args: -L 1 zfs1009BE 2 zfs10092BE Select environment to boot: [ 1 - 2 ]:

Re: [zfs-discuss] Which directories must be part of rpool?

2009-09-25 Thread Cindy Swearingen
Hi David, All system-related components should remain in the root pool, such as the components needed for booting and running the OS. If you have datasets like /export/home or other non-system-related datasets in the root pool, then feel free to move them out. Moving OS components out of the

Re: [zfs-discuss] Cloning Systems using zpool

2009-09-24 Thread Cindy Swearingen
Hi Karl, Manually cloning the root pool is difficult. We have a root pool recovery procedure that you might be able to apply as long as the systems are identical. I would not attempt this with LiveUpgrade and manual tweaking.

Re: [zfs-discuss] Cloning Systems using zpool

2009-09-24 Thread Cindy Swearingen
info stored in the root pool? Thanks Peter 2009/9/24 Cindy Swearingen cindy.swearin...@sun.com: Hi Karl, Manually cloning the root pool is difficult. We have a root pool recovery procedure that you might be able to apply as long as the systems are identical. I would not attempt

Re: [zfs-discuss] Cloning Systems using zpool

2009-09-24 Thread Cindy Swearingen
Karl, I'm not sure I'm following everything. If you can't swap the drives, then which pool would you import? If you install the new v210 with snv_115, then you would have a bootable root pool. You could then receive the snapshots from the old root pool into the root pool on the new v210. I

Re: [zfs-discuss] RAID-Z2 won't come online after replacing failed disk

2009-09-23 Thread Cindy Swearingen
Dustin, You didn't describe the process that you used to replace the disk so it's difficult to comment on what happened. In general, you physically replace the disk and then let ZFS know that the disk is replaced, like this: # zpool replace pool-name device-name This process is described

Re: [zfs-discuss] addendum: zpool UNAVAIL even though disk is online: another label issue?

2009-09-18 Thread Cindy Swearingen
Michael, ZFS handles EFI labels just fine, but you need an SMI label on the disk that you are booting from. Are you saying that localtank is your root pool? I believe the OSOL install creates a root pool called rpool. I don't remember if it's configurable. Changing labels or partitions

Re: [zfs-discuss] Crazy Phantom Zpools Again

2009-09-18 Thread Cindy Swearingen
Dave, I've searched opensolaris.org and our internal bug database. I don't see that anyone else has reported this problem. I asked someone from the OSOL install team and this behavior is a mystery. If you destroyed the phantom pools before you reinstalled, then they probably returned from the

Re: [zfs-discuss] RAIDZ versus mirrroed

2009-09-16 Thread Cindy . Swearingen
In addition, if you need the flexibility of moving disks around until the device removal CR integrates, then mirrored pools are more flexible. Detaching disks from a mirror isn't ideal but if you absolutely have to reuse a disk temporarily then go with mirrors. See the output below. You can

Re: [zfs-discuss] ZFS flar image.

2009-09-14 Thread Cindy . Swearingen
Hi RB, We have a draft of the ZFS/flar image support here: http://opensolaris.org/os/community/zfs/boot/flash/ Make sure you review the Solaris OS requirements. Thanks, Cindy On 09/14/09 11:45, RB wrote: Is it possible to create flar image of ZFS root filesystem to install it to other

Re: [zfs-discuss] b122 and fake checksum errors

2009-09-10 Thread Cindy . Swearingen
Hi Brian, I'm tracking this issue and expected resolution, here: http://www.solarisinternals.com/wiki/index.php/ZFS_Troubleshooting_Guide#RAID-Z_Checksum_Errors_in_Nevada_Builds.2C_120-123 Thanks, Cindy On 09/10/09 13:21, Brian Hechinger wrote: I've hit google and it looks like this is

Re: [zfs-discuss] Help with Scenerio

2009-09-08 Thread Cindy . Swearingen
Hi Jon, If the zpool import command shows the old rpool and associated disk (c1t1d0s0), then you might able to import it like this: # zpool import rpool rpool2 Which renames the original pool, rpool, to rpool2, upon import. If the disk c1t1d0s0 was overwritten in any way then I'm not sure

Re: [zfs-discuss] Archiving and Restoring Snapshots

2009-09-02 Thread Cindy . Swearingen
Hi Mike, I reviewed this doc and the only issue I have with it now is that it uses /var/tmp as an example of storing snapshots in long-term storage elsewhere. For short-term storage, storing a snapshot as a file is an acceptable solution as long as you verify that the snapshots as files are valid
