[zfs-discuss] LAST CALL: zfs-discuss is moving Sunday, March 24, 2013

2013-03-22 Thread Cindy Swearingen
I hope to see everyone on the other side... *** The ZFS discussion list is moving to java.net. This opensolaris/zfs discussion will not be available after March 24. There is no way to migrate the existing list to the new list. The solaris-zfs project is here

[zfs-discuss] Please join us on the new zfs discuss list on java.net

2013-03-20 Thread Cindy Swearingen
Hi Everyone, The ZFS discussion list is moving to java.net. This opensolaris/zfs discussion will not be available after March 24. There is no way to migrate the existing list to the new list. The solaris-zfs project is here: http://java.net/projects/solaris-zfs See the steps below to join the

Re: [zfs-discuss] This mailing list EOL???

2013-03-20 Thread Cindy Swearingen
Hi Ned, This list is migrating to java.net and will not be available in its current form after March 24, 2013. The archive of this list is available here: http://www.mail-archive.com/zfs-discuss@opensolaris.org/ I will provide an invitation to the new list shortly. Thanks for your patience.

Re: [zfs-discuss] What would be the best tutorial cum reference doc for ZFS

2013-03-19 Thread Cindy Swearingen
Hi Hans, Start with the ZFS Admin Guide, here: http://docs.oracle.com/cd/E26502_01/html/E29007/index.html Or, start with your specific questions. Thanks, Cindy On 03/19/13 03:30, Hans J. Albertsson wrote: as used on Illumos? I've seen a few tutorials written by people who obviously are very

Re: [zfs-discuss] partioned cache devices

2013-03-19 Thread Cindy Swearingen
Hi Andrew, Your original syntax was incorrect. A p* device is a larger container for the d* device or s* devices. In the case of a cache device, you need to specify a d* or s* device. That you can add p* devices to a pool is a bug. Adding different slices from c25t10d1 as both log and cache dev

Re: [zfs-discuss] zfs-discuss mailing list & opensolaris EOL

2013-02-18 Thread Cindy Swearingen
Hi Jim, We will be restaging the ZFS community info, most likely on OTN. The zfs discussion list archive cannot be migrated to the new list on java.net, but you can pick it up here: http://www.mail-archive.com/zfs-discuss@opensolaris.org/ We are looking at other ways to make the zfs discuss li

Re: [zfs-discuss] zfs-discuss mailing list & opensolaris EOL

2013-02-16 Thread cindy swearingen
Hey Ned and Everyone, This was new news to us too and we were just talking over some options yesterday afternoon, so please give us a chance to regroup and provide some alternatives. This list will be shut down but we can start a new one on java.net. There is a huge ecosystem around Solaris and ZFS,

Re: [zfs-discuss] Solaris 11 System Reboots Continuously Because of a ZFS-Related Panic (7191375)

2013-01-14 Thread Cindy Swearingen
I believe the bug.oraclecorp.com URL is accessible with a support contract, but it's difficult for me to test. I should have mentioned it. I apologize. cs On 01/14/13 14:02, Nico Williams wrote: On Mon, Jan 14, 2013 at 1:48 PM, Tomas Forsman wrote: https://bug.oraclecorp.com/pls/bug/webbug_pr

Re: [zfs-discuss] Solaris 11 System Reboots Continuously Because of a ZFS-Related Panic (7191375)

2013-01-14 Thread Cindy Swearingen
Hi Jamie, Yes, that is correct. The S11u1 version of this bug is: https://bug.oraclecorp.com/pls/bug/webbug_print.show?c_rptno=15852599 and has this notation which means Solaris 11.1 SRU 3.4: Changeset pushed to build 0.175.1.3.0.4.0 Thanks, Cindy On 01/11/13 19:10, Jamie Krier wrote: It

Re: [zfs-discuss] poor CIFS and NFS performance

2013-01-03 Thread Cindy Swearingen
Free advice is cheap... I personally don't see the advantage of caching reads and logging writes to the same devices. (Is this recommended?) If this pool is serving CIFS/NFS, I would recommend testing for best performance with a mirrored log device first without a separate cache device: # zpool
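
A minimal sketch of that suggested layout, with hypothetical device names (c1t2d0/c1t3d0 standing in for the log SSDs):

# zpool create tank mirror c1t0d0 c1t1d0     (data mirror; names are placeholders)
# zpool add tank log mirror c1t2d0 c1t3d0    (mirrored log device, no separate cache yet)
# zpool status tank                          (confirm the logs vdev appears)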

Re: [zfs-discuss] Solaris 11 System Reboots Continuously Because of a ZFS-Related Panic (7191375)

2012-12-30 Thread cindy swearingen
Existing Solaris 10 releases are not impacted. S10u11 isn't released yet so I think we can assume that this upcoming Solaris 10 release will include a preventative fix. Thanks, Cindy On Thu, Dec 27, 2012 at 11:11 PM, Andras Spitzer wrote: > Josh, > > You mention that Oracle is preparing patches

Re: [zfs-discuss] zfs receive options (was S11 vs illumos zfs compatiblity)

2012-12-21 Thread Cindy Swearingen
Hi Ned, Which man page are you referring to? I see the zfs receive -o syntax in the S11 man page. The bottom line is that not all properties can be set on the receiving side and the syntax is one property setting per -o option. See below for several examples. Thanks, Cindy I don't think ver
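
For example, a sketch of the one-property-per-option syntax, with hypothetical dataset names:

# zfs send tank/data@snap1 | zfs receive -o compression=on -o atime=off pool2/data

Each property needs its own -o option; combining several settings in a single -o fails.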

Re: [zfs-discuss] Pool performance when nearly full

2012-12-20 Thread Cindy Swearingen
Hi Sol, You can review the Solaris 11 ZFS best practices info, here: http://docs.oracle.com/cd/E26502_01/html/E29007/practice-1.html#scrolltoc The above section also provides info about the full pool performance penalty. For S11 releases, we're going to increase the 80% pool capacity recommend

Re: [zfs-discuss] Solaris 11 System Reboots Continuously Because of a ZFS-Related Panic (7191375)

2012-12-19 Thread Cindy Swearingen
impacted by this problem. If scrubbing the pool finds permanent metadata errors, then you should open an SR. B. If zdb doesn't complete successfully, open an SR. On 12/18/12 09:45, Cindy Swearingen wrote: Hi Sol, The appliance is affected as well. I apologize. The MOS article is for int

Re: [zfs-discuss] Solaris 11 System Reboots Continuously Because of a ZFS-Related Panic (7191375)

2012-12-18 Thread Cindy Swearingen
Hi Sol, The appliance is affected as well. I apologize. The MOS article is for internal diagnostics. I'll provide a set of steps to identify this problem as soon as I understand them better. Thanks, Cindy On 12/18/12 05:27, sol wrote: *From:* Cindy Swearingen No doubt. This

Re: [zfs-discuss] Solaris 11 System Reboots Continuously Because of a ZFS-Related Panic (7191375)

2012-12-17 Thread Cindy Swearingen
Hi Jamie, No doubt. This is a bad bug and we apologize. Below is a misconception that this bug is related to the VM2 project. It is not. It's related to a problem that was introduced in the ZFS ARC code. If you would send me your SR number privately, we can work with the support person to correc

Re: [zfs-discuss] The format command crashes on 3TB disk but zpool create ok

2012-12-14 Thread Cindy Swearingen
Hey Sol, Can you send me the core file, please? I would like to file a bug for this problem. Thanks, Cindy On 12/14/12 02:21, sol wrote: Here it is: # pstack core.format1 core 'core.format1' of 3351: format - lwp# 1 / thread# 1 0806de73 can_efi_disk_be_ex

Re: [zfs-discuss] VXFS to ZFS

2012-12-05 Thread Cindy Swearingen
Hi Morris, I hope someone has done this recently and can comment, but the process is mostly manual and it will depend on how much gear you have. For example, if you have some extra disks, you can build a minimal ZFS storage pool to hold the bulk of your data. Then, you can do a live migration of

Re: [zfs-discuss] Segfault running "zfs create -o readonly=off tank/test" on Solaris 11 Express 11/11

2012-10-23 Thread Cindy Swearingen
Hi Andreas, Which release is this... Can you provide the /etc/release info? It works fine for me on an S11 Express (b162) system: # zfs create -o readonly=off pond/amy # zfs get readonly pond/amy NAME PROPERTY VALUE SOURCE pond/amy readonly off local This is somewhat redundant sy

Re: [zfs-discuss] Sudden and Dramatic Performance Drop-off

2012-10-04 Thread Cindy Swearingen
Hi Charles, Yes, a faulty or failing disk can kill performance. I would see if FMA has generated any faults: # fmadm faulty Or, if any of the devices are collecting errors: # fmdump -eV | more Thanks, Cindy On 10/04/12 11:22, Knipe, Charles wrote: Hey guys, I’ve run into another ZFS perf

Re: [zfs-discuss] Missing disk space

2012-08-03 Thread Cindy Swearingen
You said you're new to ZFS, so you might consider using zpool list and zfs list rather than df -k to reconcile your disk space. In addition, your pool type (mirrored or RAIDZ) provides a different space perspective in zpool list that is not always easy to understand. http://docs.oracle.com/cd/E23824_01/ht
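
For instance, to compare the two views on a hypothetical pool named tank:

# zpool list tank     (raw pool capacity; includes parity space on RAIDZ)
# zfs list -r tank    (usable space as the file systems see it)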

Re: [zfs-discuss] online increase of zfs after LUN increase ?

2012-08-01 Thread Cindy Swearingen
size changes 6430818 Solaris needs mechanism of dynamically increasing LUN size -Original Message- From: Hung-Sheng Tsao (LaoTsao) Ph.D [mailto:laot...@gmail.com] Sent: 26 July 2012 14:49 To: Habony, Zsolt Cc: Cindy Swearingen; Sašo Kiselkov; zfs-discuss@opensolaris.org Subject: Re:

Re: [zfs-discuss] online increase of zfs after LUN increase ?

2012-07-25 Thread Cindy Swearingen
Hi-- I guess I can't begin to understand patching. Yes, you provided a whole disk to zpool create but it actually creates a part(ition) 0 as you can see in the output below. Part Tag Flag First Sector Size Last Sector 0 usr wm 256 19.99GB

Re: [zfs-discuss] online increase of zfs after LUN increase ?

2012-07-25 Thread Cindy Swearingen
Hi-- Patches are available to fix this so I would suggest that you request them from MOS support. This fix fell through the cracks and we tried really hard to get it in the current Solaris 10 release but sometimes things don't work in your favor. The patches are available though. Relabeling dis

Re: [zfs-discuss] Has anyone switched from IR -> IT firmware on the fly ? (existing zpool on LSI 9211-8i)

2012-07-18 Thread Cindy Swearingen
Here's a better link below. I have seen enough bad things happen to pool devices when hardware is changed or firmware is updated that I recommend exporting the pool first, even for an HBA firmware update. Either shutting down the system (where the pool is hosted) or exporting the pool should do it.
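
A sketch of that precaution, for a hypothetical pool named tank:

# zpool export tank    (quiesce the pool before touching the hardware)
  ... update the HBA firmware or swap the hardware ...
# zpool import tank    (bring the pool back after the change)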

Re: [zfs-discuss] [osol-discuss] Creating NFSv4/ZFS XATTR through dirfd through /proc not allowed?

2012-07-16 Thread Cindy Swearingen
02:33, Cindy Swearingen wrote: I don't think that xattrs were ever intended or designed for /proc content. I could file an RFE for you if you wish. So Oracle Newspeak now calls it an RFE if you want a real bug fixed, huh? ;-) This is a real bug in procfs. Problem is, procfs can'

Re: [zfs-discuss] [osol-discuss] Creating NFSv4/ZFS XATTR through dirfd through /proc not allowed?

2012-07-13 Thread Cindy Swearingen
I don't think that xattrs were ever intended or designed for /proc content. I could file an RFE for you if you wish. Thanks, Cindy On 07/13/12 14:00, ольга крыжановская wrote: Yes, accessing the files through runat works. I think /proc (and /dev/fd, which has the same trouble but only works

Re: [zfs-discuss] Understanding ZFS recovery

2012-07-12 Thread Cindy Swearingen
Hi Rich, I don't think anyone can say definitively how this problem was resolved, but I believe that the dd command overwrote some of the disk label, as you describe below. Your format output below looks like you relabeled the disk and maybe that was enough to resolve this problem. I have had succe

Re: [zfs-discuss] Migrating 512 byte block zfs root pool to 4k disks

2012-06-15 Thread Cindy Swearingen
Hi Hans, It's important to identify your OS release to determine if booting from a 4k disk is supported. Thanks, Cindy On 06/15/12 06:14, Hans J Albertsson wrote: I've got my root pool on a mirror on 2 512 byte blocksize disks. I want to move the root pool to two 2 TB disks with 4k blocks. T

Re: [zfs-discuss] Spare drive inherited cksum errors?

2012-05-29 Thread Cindy Swearingen
Hi-- You don't say what release this is, but I think the checksum error accumulation on the spare was a zpool status formatting bug that I have seen myself. This is fixed in a later Solaris release. Thanks, Cindy On 05/28/12 22:21, Stephan Budach wrote: Hi all, just to wrap this is

Re: [zfs-discuss] slow zfs send

2012-05-07 Thread Cindy Swearingen
Hi Karl, Someone sitting across the table from me (who saw my posting) informs me that CR 7060894 would not impact Solaris 10 releases, so I kindly withdraw my comment about CR 7060894. Thanks, Cindy On 5/7/12 11:35 AM, Cindy Swearingen wrote: Hi Karl, I'd like to verify that no dead or dying

Re: [zfs-discuss] slow zfs send

2012-05-07 Thread Cindy Swearingen
Hi Karl, I'd like to verify that no dead or dying disk is killing pool performance, and your zpool status looks good. Jim has replied with some ideas to check your individual device performance. Otherwise, you might be impacted by this CR: 7060894 zfs recv is excruciatingly slow This CR covers bo

Re: [zfs-discuss] Aaron Toponce: Install ZFS on Debian GNU/Linux

2012-04-18 Thread Cindy Swearingen
>Hmmm, how come they have encryption and we don't? As in Solaris releases, or some other "we"? http://docs.oracle.com/cd/E23824_01/html/821-1448/gkkih.html https://blogs.oracle.com/darren/entry/my_11_favourite_solaris_11 Thanks, Cindy On 04/18/12 05:43, Jim Klimov wrote: 2012-04-18 6:57, Dav
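
For reference, creating an encrypted file system on Solaris 11 looks roughly like this (dataset name hypothetical; by default you are prompted for a passphrase):

# zfs create -o encryption=on tank/secret
Enter passphrase for 'tank/secret':
Enter again: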

Re: [zfs-discuss] zpool split failing

2012-04-17 Thread Cindy Swearingen
n both mirror devices were online. Is this a known issue with ZFS? A bug? cheers Matt On 04/16/12 10:05 PM, Cindy Swearingen wrote: Hi Matt, I don't have a way to reproduce this issue and I don't know why this is failing. Maybe someone else does. I know someone who recently split

Re: [zfs-discuss] zpool split failing

2012-04-16 Thread Cindy Swearingen
Hi Matt, I don't have a way to reproduce this issue and I don't know why this is failing. Maybe someone else does. I know someone who recently split a root pool running the S11 FCS release without problems. I'm not a fan of root pools on external USB devices. I haven't tested these steps in a w

Re: [zfs-discuss] Replacing root pool disk

2012-04-12 Thread Cindy Swearingen
System Step 9 uses the format-->disk-->partition-->modify option and sets the free hog space to slice 0. Then, you press return for each existing slice to zero them out. This creates one large slice 0. cs On 04/12/12 11:48, Cindy Swearingen wrote: Hi Peter, The root pool disk labeling/par

Re: [zfs-discuss] Replacing root pool disk

2012-04-12 Thread Cindy Swearingen
Hi Peter, The root pool disk labeling/partitioning is not so easy. I don't know which OpenIndiana release this is but in a previous Solaris release we had a bug that caused the error message below and the workaround is exactly what you did, use the -f option. We don't yet have an easy way to cl

Re: [zfs-discuss] Accessing Data from a detached device.

2012-03-29 Thread Cindy Swearingen
Hi Matt, There is no easy way to access data from a detached device. You could try to force import it on another system or under a different name on the same system with the remaining device. The easiest way is to split the mirrored pool. See the steps below. Thanks, Cindy # zpool status po
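
A sketch of the split approach on a two-way mirror (pool and device names hypothetical):

# zpool split pond pond2 c1t1d0    (peel one side off into a new pool)
# zpool import pond2               (the copy is now independently importable)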

Re: [zfs-discuss] Advice for migrating ZFS configuration

2012-03-07 Thread Cindy Swearingen
> In theory, instead of this missing > disk approach I could create a two-disk raidz pool and later add the > third disk to it, right? No, you can't add a 3rd disk to an existing RAIDZ vdev of two disks. You would want to add another 2 disk RAIDZ vdev. See Example 4-2 in this section: http://do
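
So to grow such a pool you add a whole new top-level vdev, for example (device names hypothetical):

# zpool add tank raidz c2t0d0 c2t1d0    (a second 2-disk RAIDZ vdev; data stripes across both)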

Re: [zfs-discuss] Advice for migrating ZFS configuration

2012-03-07 Thread Cindy Swearingen
Hi Bob, Not many options because you can't attach disks to convert a non-redundant pool to a RAIDZ pool. To me, the best solution is to get one more disk (for a total of 4 disks) to create a mirrored pool. Mirrored pools provide more flexibility. See 1 below. See the options below. Thanks, Ci

Re: [zfs-discuss] Strange send failure

2012-02-09 Thread Cindy Swearingen
Hi Ian, This looks like CR 7097870. To resolve this problem, apply the latest s11 SRU to both systems. Thanks, Cindy On 02/08/12 17:55, Ian Collins wrote: Hello, I'm attempting to dry run the send the root data set of a zone from one Solaris 11 host to another: sudo zfs send -r rpool/zoneR

Re: [zfs-discuss] Disk failing? High asvc_t and %b.

2012-02-01 Thread Cindy Swearingen
Hi Jan, These commands will tell you if FMA faults are logged: # fmdump # fmadm faulty This command will tell you if errors are accumulating on this disk: # fmdump -eV | more Thanks, Cindy On 02/01/12 11:20, Jan Hellevik wrote: I suspect that something is wrong with one of my disks. This

Re: [zfs-discuss] Can I create a mirror for a root rpool?

2011-12-19 Thread Cindy Swearingen
that go in? Was it in sol10u9? Thanks, Andrew *From: *Cindy Swearingen <cindy.swearin...@oracle.com> *Subject: **Re: [zfs-discuss] Can I create a mirror for a root rpool?* *Date: *December 16, 2011 10:38:21 AM CST *To: *Tim Cook <t...@cook.ms> *Cc: *zfs-discuss@

Re: [zfs-discuss] Can I create a mirror for a root rpool?

2011-12-19 Thread Cindy Swearingen
wrote: On Thu, Dec 15, 2011 at 04:39:07PM -0700, Cindy Swearingen wrote: Hi Anon, The disk that you attach to the root pool will need an SMI label and a slice 0. The syntax to attach a disk to create a mirrored root pool is like this, for example: # zpool attach rpool c1t0d0s0 c1t1d0s0 BTW

Re: [zfs-discuss] Can I create a mirror for a root rpool?

2011-12-16 Thread Cindy Swearingen
Yep, well said, understood, point taken, I hear you, you're preaching to the choir. Have faith in Santa. A few comments: 1. I need more info on the x86 install issue. I haven't seen this problem myself. 2. We don't use slice2 for anything and it's not recommended. 3. The SMI disk is a long-stan

Re: [zfs-discuss] Can I create a mirror for a root rpool?

2011-12-16 Thread Cindy Swearingen
to do the partitioning by hand, which is just silly to fight with anyway. Gregg Sent from my iPhone On Dec 15, 2011, at 6:13 PM, Tim Cook <t...@cook.ms> wrote: Do you still need to do the grub install? On Dec 15, 2011 5:40 PM, "Cindy Swearingen" <cindy.swear

Re: [zfs-discuss] Can I create a mirror for a root rpool?

2011-12-16 Thread Cindy Swearingen
Hi Tim, No, in current Solaris releases the boot blocks are installed automatically with a zpool attach operation on a root pool. Thanks, Cindy On 12/15/11 17:13, Tim Cook wrote: Do you still need to do the grub install? On Dec 15, 2011 5:40 PM, "Cindy Swearingen" <cin

Re: [zfs-discuss] Can I create a mirror for a root rpool?

2011-12-15 Thread Cindy Swearingen
Hi Anon, The disk that you attach to the root pool will need an SMI label and a slice 0. The syntax to attach a disk to create a mirrored root pool is like this, for example: # zpool attach rpool c1t0d0s0 c1t1d0s0 Thanks, Cindy On 12/15/11 16:20, Anonymous Remailer (austria) wrote: On Sola

Re: [zfs-discuss] gaining access to var from a live cd

2011-11-30 Thread Cindy Swearingen
Hi Francois, A similar recovery process in OS11 is to just mount the BE, like this: # beadm mount s11_175 /mnt # ls /mnt/var adm cron inet logadm preserve tmp ai db info mail run tpm apache2 dhcp installadm nfs

Re: [zfs-discuss] ZFS smb/cifs shares in Solaris 11 (some observations)

2011-11-29 Thread Cindy Swearingen
Hi Sol, For 1) and several others, review the ZFS Admin Guide for a detailed description of the share changes, here: http://docs.oracle.com/cd/E23824_01/html/821-1448/gayne.html For 2-4), You can't rename a share. You would have to remove it and recreate it with the new name. For 6), I think y

Re: [zfs-discuss] bug moving files between two zfs filesystems (too many open files)

2011-11-29 Thread Cindy Swearingen
I think the "too many open files" is a generic error message about running out of file descriptors. You should check your shell ulimit information. On 11/29/11 09:28, sol wrote: Hello Has anyone else come across a bug moving files between two zfs file systems? I used "mv /my/zfs/filesystem/fi

Re: [zfs-discuss] weird bug with Seagate 3TB USB3 drive

2011-11-10 Thread Cindy Swearingen
Hi John, CR 7102272: ZFS storage pool created on a 3 TB USB 3.0 device has device label problems Let us know if this is still a problem in the OS11 FCS release. Thanks, Cindy On 11/10/11 08:55, John D Groenveld wrote: In message<4e9db04b.80...@oracle.com>, Cindy Swearingen

Re: [zfs-discuss] weird bug with Seagate 3TB USB3 drive

2011-10-18 Thread Cindy Swearingen
This is CR 7102272. cs On 10/18/11 10:50, John D Groenveld wrote: In message <4e9da8b1.7020...@oracle.com>, Cindy Swearingen writes: 1. If you re-create the pool on the whole disk, like this: # zpool create foo c1t0d0 Then, resend the prtvtoc output for c1t0d0s0. # zpool create

Re: [zfs-discuss] FS Reliability WAS: about btrfs and zfs

2011-10-18 Thread Cindy Swearingen
Hi Paul, Your 1-3 is very sensible advice and I must ask about this statement: >I have yet to have any data loss with ZFS. Maybe this goes without saying, but I think you are using ZFS redundancy. Thanks, Cindy On 10/18/11 08:52, Paul Kraus wrote: On Tue, Oct 18, 2011 at 9:38 AM, Gregory Sh

Re: [zfs-discuss] weird bug with Seagate 3TB USB3 drive

2011-10-18 Thread Cindy Swearingen
Yeah, okay, duh. I should have known that large sector size support is only available for a non-root ZFS file system. A couple more things if you're still interested: 1. If you re-create the pool on the whole disk, like this: # zpool create foo c1t0d0 Then, resend the prtvtoc output for c1t0d0

Re: [zfs-discuss] weird bug with Seagate 3TB USB3 drive

2011-10-18 Thread Cindy Swearingen
Hi John, I'm going to file a CR to get this issue reviewed by the USB team first, but if you could humor me with another test: Can you run newfs to create a UFS file system on this device and mount it? Thanks, Cindy On 10/18/11 08:18, John D Groenveld wrote: In message <201110150202.p9f22w2n

Re: [zfs-discuss] weird bug with Seagate 3TB USB3 drive

2011-10-13 Thread Cindy Swearingen
John, Any USB-related messages in /var/adm/messages for this device? Thanks, Cindy On 10/12/11 11:29, John D Groenveld wrote: In message <4e95cb2a.30...@oracle.com>, Cindy Swearingen writes: What is the error when you attempt to import this pool? "cannot import 'fo

Re: [zfs-discuss] weird bug with Seagate 3TB USB3 drive

2011-10-12 Thread Cindy Swearingen
In the steps below, you're missing a zpool import step. I would like to see the error message when the zpool import step fails. Thanks, Cindy On 10/12/11 11:29, John D Groenveld wrote: In message <4e95cb2a.30...@oracle.com>, Cindy Swearingen writes: What is the error when you

Re: [zfs-discuss] weird bug with Seagate 3TB USB3 drive

2011-10-12 Thread Cindy Swearingen
Hi John, What is the error when you attempt to import this pool? Thanks, Cindy On 10/11/11 18:17, John D Groenveld wrote: Banging my head against a Seagate 3TB USB3 drive. Its marketing name is: Seagate Expansion 3 TB USB 3.0 Desktop External Hard Drive STAY3000102 format(1M) shows it identif

Re: [zfs-discuss] zpool recovery import from dd images

2011-08-24 Thread Cindy Swearingen
Hi Kelsey, I haven't had to do this myself so someone who has done this before might have a better suggestion. I wonder if you need to make links from the original device name to the new device names. You can see from the zdb -l output below that the device path is pointing to the original devi

Re: [zfs-discuss] ZFS raidz on top of hardware raid0

2011-08-15 Thread Cindy Swearingen
d array in JBOD mode. http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide cs On 08/15/11 08:41, Cindy Swearingen wrote: Hi Tom, I think you should test this configuration with and without the underlying hardware RAID. If RAIDZ is the right redundancy level for your workload, yo

Re: [zfs-discuss] Space usage

2011-08-14 Thread Cindy Swearingen
Hi Ned, The difference is that for mirrored pools, zpool list displays the actual available space, so if you have a mirrored pool of two 30-GB disks, zpool list will display 30 GB, which should jibe with the zfs list output of available space for file systems. For RAIDZ pools, zpool list di

Re: [zfs-discuss] Recover data from detached ZFS mirror

2011-07-28 Thread Cindy Swearingen
Hi Judy, The disk label of the detached disk is intact but the pool info is no longer accessible, so no easy way exists to solve this problem. It's not simply a matter of getting the boot info back on there, it's the pool info. Your disk will have to meet up with its other half before it can be bo

Re: [zfs-discuss] Recover data from detached ZFS mirror

2011-07-28 Thread Cindy Swearingen
Hi Judy, Without much to go on, let's try the easier task first. Is it possible that you can re-attach the detached disk back to the original root mirror disk? Thanks, Cindy On 07/28/11 13:16, Judy Wheeler (QTSI) X7567 wrote: Does anyone know where the "Jeff Bonwick tool to recover a ZFS label"
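
If the disk is still present, the re-attach is a one-liner (device names hypothetical):

# zpool attach rpool c1t0d0s0 c1t1d0s0    (resilvers the detached half back into the mirror)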

Re: [zfs-discuss] recover zpool with a new installation

2011-07-26 Thread Cindy Swearingen
Hi Roberto, Yes, you can reinstall the OS on another disk and as long as the OS install doesn't touch the other pool's disks, your previous non-root pool should be intact. After the install is complete, just import the pool. Thanks, Cindy On 07/26/11 10:49, Roberto Scudeller wrote: Hi all,

Re: [zfs-discuss] Adding mirrors to an existing zfs-pool]

2011-07-26 Thread Cindy Swearingen
Subject: Re: [zfs-discuss] Adding mirrors to an existing zfs-pool Date: Tue, 26 Jul 2011 08:54:38 -0600 From: Cindy Swearingen To: Bernd W. Hennig References: <342994905.11311662049567.JavaMail.Twebapp@sf-app1> Hi Bernd, If you are talking about attaching 4 new disks to a non redundan

Re: [zfs-discuss] Each user has his own zfs filesystem??

2011-07-24 Thread Cindy Swearingen
That is correct. Those of us working in ZFS land recommend one file system per user, but the software has not yet caught up to that model. The wheels are turning though. When I get back to the office, I will send out some steps that might help during this transition. Thanks, Cindy

Re: [zfs-discuss] add device to mirror rpool in sol11exp

2011-07-23 Thread Cindy Swearingen
I believe in the OS 11 Express release (b151a), attaching a disk to the root pool with zpool attach applies the boot blocks automatically. Cindy

Re: [zfs-discuss] Replacing failed drive

2011-07-22 Thread Cindy Swearingen
st a spare? Thanks! Chris *From: *"Cindy Swearingen" *To: *"Chris Dunbar - Earthside, LLC" *Cc: *zfs-discuss@opensolaris.org *Sent: *Friday, July 22, 2011 3:57:00 PM *Subject: *Re: [zfs-discuss] Replacing

Re: [zfs-discuss] Replacing failed drive

2011-07-22 Thread Cindy Swearingen
Hi Chris, Which Solaris release is this? Depending on the Solaris release, you have a couple of different options. Here's one: 1. Physically replace original failed disk and detach the spare. A. If c10t0d0 was the disk that you physically replaced, issue this command: # zpool replace tank c
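
A sketch of that sequence, assuming c10t0d0 was physically replaced and c10t6d0 is the spare (the spare name here is hypothetical):

# zpool replace tank c10t0d0    (resilver onto the new disk in the same slot)
# zpool detach tank c10t6d0     (return the spare to the available spares list)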

Re: [zfs-discuss] How to recover -- LUNs go offline, now permanent errors?

2011-07-15 Thread Cindy Swearingen
Hi David, If the permanent error is in some kind of metadata, then it doesn't translate to a specific file name. You might try another zpool scrub and then a zpool clear to see if it clears this error. Thanks, Cindy On 07/13/11 12:47, David Smith wrote: I recently had an issue with my LUNs f
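
That is, something along these lines for a hypothetical pool tank:

# zpool scrub tank        (re-verify every block in the pool)
# zpool status -v tank    (wait for the scrub to finish, then re-check the error list)
# zpool clear tank        (clear the error counters if the metadata error is gone)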

Re: [zfs-discuss] Non-Global zone recovery

2011-07-07 Thread Cindy Swearingen
I missed that you're using s10 *without* Live Upgrade. If so, I don't know what the resulting zone behavior would be. Cindy On 07/07/11 14:09, Cindy Swearingen wrote: Okay, so which Solaris 10 release is this? It might also depend on how your zones are created. You can review t

Re: [zfs-discuss] Non-Global zone recovery

2011-07-07 Thread Cindy Swearingen
create Non-global zone and it is giving error bash-3.00# zonecfg -z Test Test: No such zone configured Use 'create' to begin configuring a new zone. zonecfg:Test> create -a /zones/Test invalid path to detached zone Thanks, Ram On Thu, Jul 7, 2011 at 3:14 PM, Cin

Re: [zfs-discuss] Non-Global zone recovery

2011-07-07 Thread Cindy Swearingen
Hi Ram, Which Solaris release is this and how was the OS re-imaged? If this is a recent Solaris 10 release and you used Live Upgrade, then the answer is yes. I'm not so sure about zone behavior in the Oracle Solaris 11 Express release. You should just be able to import testpool and boot your z

Re: [zfs-discuss] time-slider/plugin:zfs-send

2011-07-06 Thread Cindy Swearingen
Hi Adrian, I wonder if you have seen these setup instructions: http://www.oracle.com/technetwork/articles/servers-storage-dev/autosnapshots-397145.html If you have, let me know if you are still having trouble. Thanks, Cindy On 07/05/11 16:37, Adrian Carpenter wrote: I've been trying to figu

Re: [zfs-discuss] Trouble mirroring root pool onto larger disk

2011-07-01 Thread Cindy Swearingen
Hi Jiawen, Yes, the boot failure message would be very helpful. The first thing to rule out is: I think you need to be running a 64-bit kernel to boot from a 2 TB disk. Thanks, Cindy On 07/01/11 02:58, Jiawen Chen wrote: Hi, I have Solaris 11 Express with a root pool installed on a 500

Re: [zfs-discuss] Zpool metadata corruption from S10U9 to S11 express

2011-06-23 Thread Cindy Swearingen
Hi David, I see some inconsistencies between the mirrored pool tank info below and the device info that you included. 1. The zpool status for tank shows some remnants of log devices (?), here: tank FAULTED corrupted data logs Generally, the log devices are listed after the poo

Re: [zfs-discuss] Cannot format 2.5TB ext disk (EFI)

2011-06-23 Thread Cindy Swearingen
Hi Kitty, Try this: # zpool create test c5t0d0 Thanks, Cindy On 06/23/11 12:34, Kitty Tam wrote: It wouldn't let me # zpool create test_pool c5t0d0p0 cannot create 'test_pool': invalid argument for this pool operation Thanks, Kitty On 06/23/11 03:00, Roy Sigurd Karlsbakk wrote: I cannot

Re: [zfs-discuss] Server with 4 drives, how to configure ZFS?

2011-06-23 Thread Cindy Swearingen
Hi Dave, Consider the easiest configuration first and it will probably save you time and money in the long run, like this: 73g x 73g mirror (one large s0 on each disk) - rpool 73g x 73g mirror (use whole disks) - data pool Then, get yourself two replacement disks, a good backup strategy, and we
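
A sketch of the data-pool half of that layout (the rpool mirror on s0 slices is typically set up at install time; device names hypothetical):

# zpool create datapool mirror c0t2d0 c0t3d0    (whole disks, per the advice above)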

Re: [zfs-discuss] zfs directory inheritance question/issue/problem ?

2011-06-22 Thread Cindy Swearingen
Hi Ed, This is current Solaris SMB sharing behavior. CR 6582165 is filed to provide this feature. You will need to reshare your 3 descendent file systems. NFS sharing does this automatically. Thanks, Cindy On 06/22/11 09:46, Ed Fang wrote: Need a little help. I set up my zfs storage last

Re: [zfs-discuss] Zpool with data errors

2011-06-22 Thread Cindy Swearingen
Hi Todd, Yes, I have seen zpool scrub do some miracles but I think it depends on the amount of corruption. A few suggestions are: 1. Identify and resolve the corruption problems on the underlying hardware. No point in trying to clear the pool errors if this problem continues. The fmdump comman

Re: [zfs-discuss] Resizing ZFS partition, shrinking NTFS?

2011-06-16 Thread Cindy Swearingen
Hi Clive, What you are asking is neither recommended nor supported and could render your ZFS root pool unbootable. (I'm not saying that some expert couldn't do it, but it's risky, like data corruption risky.) ZFS expects the partition boundaries to remain the same unless you replace the original disk

Re: [zfs-discuss] # disks per vdev

2011-06-15 Thread Cindy Swearingen
Hi Lanky, If you created a mirrored pool instead of a RAIDZ pool, you could use the zpool split feature to split your mirrored pool into two identical pools. For example, if you had a 3-way mirrored pool, your primary pool will remain redundant with 2-way mirrors after the split. Then, you would h
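
Roughly, with hypothetical pool names:

# zpool split tank tank2    (tank keeps a 2-way mirror; tank2 gets the third disk)
# zpool import tank2        (or import the split-off pool on another system)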

Re: [zfs-discuss] changing vdev types

2011-06-01 Thread Cindy Swearingen
Hi Matt, You have several options in terms of migrating the data but I think the best approach is to do something like I have described below. Thanks, Cindy 1. Create snapshots of the file systems to be migrated. If you want to capture the file system properties, then see the zfs.1m man page f
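
A sketch of that flow, assuming hypothetical dataset tank/fs migrating to newpool:

# zfs snapshot -r tank/fs@migrate
# zfs send -R tank/fs@migrate | zfs receive -d newpool    (-R preserves properties and descendent file systems)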

Re: [zfs-discuss] not sure how to make filesystems

2011-05-30 Thread Cindy Swearingen
Hi Bill, I'm assuming you've already upgraded to a Solaris 10 release that supports a UFS to ZFS migration... I don't think Live Upgrade supports the operations below. The UFS to ZFS migration takes your existing UFS file systems and creates one ZFS BE in a root pool. An advantage to this is th
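
The Live Upgrade flavor of that migration looks roughly like this (BE and pool names hypothetical):

# zpool create rpool c1t0d0s0    (new root pool on a slice)
# lucreate -n zfsBE -p rpool     (copy the UFS BE into a ZFS BE in the root pool)
# luactivate zfsBE               (boot the ZFS BE on the next reboot)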

Re: [zfs-discuss] Same device node appearing twice in same mirror; one faulted, one not...

2011-05-24 Thread Cindy Swearingen
type: 'disk' id: 0 guid: 1717308203478351258 path: '/dev/dsk/c5t1d0s0' devid: 'id1,sd@SATA_WDC_WD5000AAKS-0_WD-WCAWF1939879/a' phys_path: '/pci@0,0/pci8086,2845@1c,3/pci1095,3132@0/disk@1,0:a&

Re: [zfs-discuss] Same device node appearing twice in same mirror; one faulted, one not...

2011-05-20 Thread Cindy Swearingen
Hi Alex, More scary than interesting to me. What kind of hardware and which Solaris release? Do you know what steps led up to this problem? Any recent hardware changes? This output should tell you which disks were in this pool originally: # zpool history tank If the history identifies tank's

Re: [zfs-discuss] bootfs ID on zfs root

2011-05-11 Thread Cindy Swearingen
Hi Ketan, What steps led up to this problem? I believe the boot failure messages below are related to a mismatch between the pool version and the installed OS version. If you're using the JumpStart installation method, then the root pool is re-created each time, I believe. Does it also instal

Re: [zfs-discuss] Spare drives sitting idle in raidz2 with failed drive

2011-04-26 Thread Cindy Swearingen
Hi-- I don't know why the spare isn't kicking in automatically; it should. A documented workaround is to outright replace the failed disk with one of the spares, like this: # zpool replace fwgpool0 c4t5000C5001128FE4Dd0 c4t5000C50014D70072d0 The autoreplace pool property has nothing to do with

Re: [zfs-discuss] Bootable root pool?

2011-04-18 Thread Cindy Swearingen
Hi Darren, Yes, a bootable root pool must be created on a disk slice. You can use a cache device, but not a log device, and the cache device must be a disk slice. See the output below. Thanks, Cindy # zpool add rpool log c0t2d0s0 cannot add to 'rpool': root pool can not have multiple vdevs o

Re: [zfs-discuss] zpool scrub on b123

2011-04-15 Thread Cindy Swearingen
Yes, the Solaris 10 9/10 release has the fix for RAIDZ checksum errors if you have ruled out any hardware problems. cs On 04/15/11 14:47, Karl Rossing wrote: Would moving the pool to a Solaris 10U9 server fix the random RAIDZ errors? On 04/15/2011 02:23 PM, Cindy Swearingen wrote: D'oh

Re: [zfs-discuss] zpool scrub on b123

2011-04-15 Thread Cindy Swearingen
em is resolved completely. If not, then see the recommendation below. Thanks, Cindy On 04/15/11 13:18, Cindy Swearingen wrote: Hi Karl... I just saw this same condition on another list. I think the poster resolved it by replacing the HBA. Drives go bad but they generally don't all go bad at

Re: [zfs-discuss] zpool scrub on b123

2011-04-15 Thread Cindy Swearingen
Hi Karl... I just saw this same condition on another list. I think the poster resolved it by replacing the HBA. Drives go bad but they generally don't all go bad at once, so I would suspect some common denominator like the HBA/controller, cables, and so on. See what FMA thinks by running fmdump

Re: [zfs-discuss] How to rename rpool. Is that recommended ?

2011-04-08 Thread Cindy Swearingen
Arjun, Yes, you can choose any name for the root pool, but an existing limitation is that you can't rename the root pool by using the zpool export/import with new name feature. Too much internal boot info is tied to the root pool name. What info are you changing? Instead, could you create a new

Re: [zfs-discuss] Zpool resize

2011-04-04 Thread Cindy Swearingen
Hi Albert, I didn't notice that you are running the Solaris 10 9/10 release. Although the autoexpand property is provided, the underlying driver changes to support the LUN expansion are not available in this release. I don't have the right storage to test, but a possible workaround is to create

Re: [zfs-discuss] Cannot remove zil device

2011-03-31 Thread Cindy Swearingen
You can add and remove mirrored or non-mirrored log devices. Jordan is probably running into CR 7000154: cannot remove log device Thanks, Cindy On 03/31/11 12:28, Roy Sigurd Karlsbakk wrote: http://pastebin.com/nD2r2qmh Here is zpool status and zpool version The only thing I wonder abo

Re: [zfs-discuss] detach configured log devices?

2011-03-16 Thread Cindy Swearingen
Hi Jim, Yes, the Solaris 10 9/10 release supports log device removal. http://download.oracle.com/docs/cd/E19253-01/819-5461/gazgw/index.html See Example 4-3 in this section. The ability to import a pool with a missing log device is not yet available in the Solaris 10 release. Thanks, Cindy
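
Removal itself is a single command (device name hypothetical; a mirrored log is removed by its mirror name, e.g. mirror-1):

# zpool remove tank c0t5d0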

Re: [zfs-discuss] cannot replace c10t0d0 with c10t0d0: device is too small

2011-03-04 Thread Cindy Swearingen
Robert, Which Solaris release is this? Thanks, Cindy On 03/04/11 11:10, Mark J Musante wrote: The fix for 6991788 would probably let the 40mb drive work, but it would depend on the asize of the pool. On Fri, 4 Mar 2011, Cindy Swearingen wrote: Hi Robert, We integrated some fixes that

Re: [zfs-discuss] cannot replace c10t0d0 with c10t0d0: device is too small

2011-03-04 Thread Cindy Swearingen
Hi Robert, We integrated some fixes that allowed you to replace disks of equivalent sizes, but 40 MB is probably beyond that window. Yes, you can do #2 below and the pool size will be adjusted down to the smaller size. Before you do this, I would check the sizes of both spares. If both spares a

Re: [zfs-discuss] Hung Hot Spare

2011-03-03 Thread Cindy Swearingen
Hi Paul, I've seen some spare stickiness too, and it's generally when I'm trying to simulate a drive failure (like you are below) without actually physically replacing the device. If I actually physically replace the failed drive, the spare is detached automatically after the new device is resilve
