Re: [zfs-discuss] [storage-discuss] multipath used inadvertantly?

2011-02-15 Thread Cindy Swearingen
Hi Ray, MPxIO is on by default for x86 systems that run the Solaris 10 9/10 release. On my Solaris 10 9/10 SPARC system, I see this: # stmsboot -L stmsboot: MPxIO is not enabled stmsboot: MPxIO disabled You can use the stmsboot CLI to disable multipathing. You are prompted to reboot the
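The stmsboot check/disable cycle described above can be sketched as follows; this is a minimal sketch assuming a Solaris 10 9/10 system, and the reboot prompt mentioned in the post follows the disable step:

```shell
# List the current MPxIO mapping state for supported controllers
stmsboot -L

# Disable MPxIO multipathing; stmsboot prompts for a reboot so
# that device paths can be updated
stmsboot -d

# Re-enable MPxIO later if needed (also prompts for a reboot)
stmsboot -e
```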

Re: [zfs-discuss] how to destroy a pool by id?

2011-02-14 Thread Cindy Swearingen
Hi Chris, Yes, this is a known problem and a CR is filed. I haven't tried these in a while, but consider one of the following workarounds below. #1 is the most drastic, so make sure you've got the right device name; the dd command does no sanity checking. Other experts can comment on a


Re: [zfs-discuss] ACL for .zfs directory

2011-02-14 Thread Cindy Swearingen
Hi Ian, You are correct. Previous Solaris releases displayed older POSIX ACL info on this directory. It was changed to the new ACL style from the integration of this CR: 6792884 Vista clients cannot access .zfs Thanks, Cindy On 02/13/11 19:30, Ian Collins wrote: While scanning filesystems

Re: [zfs-discuss] multiple disk failure (solved?)

2011-02-01 Thread Cindy Swearingen
wrote: On 1/31/2011 3:14 PM, Cindy Swearingen wrote: Hi Mike, Yes, this is looking much better. Some combination of removing corrupted files indicated in the zpool status -v output, running zpool scrub and then zpool clear should resolve the corruption, but it depends on how bad the corruption

Re: [zfs-discuss] fmadm faulty not showing faulty/offline disks?

2011-02-01 Thread Cindy Swearingen
Hi Krunal, It looks to me like FMA thinks that you removed the disk so you'll need to confirm whether the cable dropped or something else. I agree that we need to get email updates for failing devices. See if fmdump generated an error report using the commands below. Thanks, Cindy # fmdump

Re: [zfs-discuss] fmadm faulty not showing faulty/offline disks?

2011-02-01 Thread Cindy Swearingen
: On Tue, Feb 1, 2011 at 1:29 PM, Cindy Swearingen cindy.swearin...@oracle.com wrote: I agree that we need to get email updates for failing devices. Definitely! See if fmdump generated an error report using the commands below. Unfortunately not, see below: movax@megatron:/root# fmdump TIME

Re: [zfs-discuss] multiple disk failure (solved?)

2011-01-31 Thread Cindy Swearingen
Hi Mike, Yes, this is looking much better. Some combination of removing corrupted files indicated in the zpool status -v output, running zpool scrub and then zpool clear should resolve the corruption, but it depends on how bad the corruption is. First, I would try the least destructive method:

Re: [zfs-discuss] ZFS root clone problem

2011-01-28 Thread Cindy Swearingen
Hi Alex, Disks that are part of the root pool must contain a valid slice 0 (this is a boot restriction) and the disk names that you present to ZFS for the root pool must also specify the slice identifier (s0). For example, instead of this syntax: # zpool attach -f rpool c0t0d0 c0t2d0 try this
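Following the slice-identifier requirement described above, the corrected attach presumably looks like this (device names are carried over from the example; the s0 suffix is the point of the fix, and the installboot step is the usual SPARC follow-up after a root pool attach):

```shell
# Root pool disks must reference slice 0 explicitly
zpool attach -f rpool c0t0d0s0 c0t2d0s0

# After the resilver completes, install the boot blocks on the
# new disk (SPARC syntax shown; x86 uses installgrub instead)
installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk \
    /dev/rdsk/c0t2d0s0
```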

Re: [zfs-discuss] Shrinking a pool, Increasing hotspares

2011-01-25 Thread Cindy Swearingen
Hi Phillip, We don't yet support removing a mirrored pair, but you could reconfigure this pool by the process below. First, you would want to make sure that all the hardware is fully operational by reviewing fmdump -eV, iostat -En, /var/adm/messages and having a good backup. Yes, I'm paranoid

Re: [zfs-discuss] zfs create -p only creates the parent but not the child

2011-01-25 Thread Cindy Swearingen
You're mixing a mkdir operation with a zfs create operation and only the zfs create operation creates a file system that is mounted, which is why df -h doesn't show dir2 as mounted. dir2 is just a directory, not a file system. ZFS does two things with a default zfs create operation: o creates
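The distinction above can be sketched with a short session, assuming a hypothetical pool named tank:

```shell
# Both levels become real ZFS file systems, each mounted
zfs create -p tank/dir1/dir2
df -h /tank/dir1/dir2        # shows dir2 as a mounted file system

# By contrast, mkdir inside a file system creates only a directory
mkdir /tank/dir1/plaindir
zfs list tank/dir1/plaindir  # fails: plaindir is not a dataset
```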

Re: [zfs-discuss] sbdadm: unknown error (Solaris 11 Express)

2011-01-20 Thread Cindy Swearingen
My first inclination is 128k is too small for a pool component. You might try something more reasonable, like 1G, if you're just testing. Thanks, Cindy # zfs create -V 2g sanpool/vol1 # stmfadm create-lu /dev/zvol/rdsk/sanpool/vol1 Logical unit created: 600144F0C49A05004CC84BE20001 On

[zfs-discuss] [Fwd: [osol-discuss] Its Official...!! GA release on 14th Jan 2011]

2011-01-12 Thread Cindy Swearingen
Original Message Subject: [osol-discuss] Its Official...!! GA release on 14th Jan 2011 Date: Wed, 12 Jan 2011 05:34:18 PST From: darshin dars...@kqinfotech.com To: opensolaris-disc...@opensolaris.org Hi All, Happy New Year ! First of all, a big thanks to you all for the

Re: [zfs-discuss] pool metadata corrupted - any options?

2011-01-10 Thread Cindy Swearingen
Hi David, You might try importing this pool on an Oracle Solaris Express system, where a pool recovery feature is available that might be able to bring this pool back (it rolls back to a previous transaction). If that fails, you could import this pool by using the read-only option to at least

Re: [zfs-discuss] ZFS root backup/disaster recovery, and moving root pool

2011-01-10 Thread Cindy Swearingen
Hi Karl, I would keep your mirrored root pool separate on the smaller disks as you have set up now. You can move your root pool; it's easy enough. You can even replace or attach larger disks to the root pool and detach the smaller disks. You can't currently boot from snapshots; you must boot

Re: [zfs-discuss] zfs recv failing - invalid backup stream

2011-01-05 Thread Cindy Swearingen
Hi Brandon, I'm not the right person to evaluate your zstreamdump output, but I can't reproduce this error on my b152 system, which is as close as I could get to b151a. See below. Are the rpool and radar pool versions reasonably equivalent? In your follow-up, I think you are saying that

Re: [zfs-discuss] zfs recv failing - invalid backup stream

2011-01-05 Thread Cindy Swearingen
On 01/05/11 12:21, Brandon High wrote: On Wed, Jan 5, 2011 at 9:44 AM, Cindy Swearingen cindy.swearin...@oracle.com wrote: In your follow-up, I think you are saying that rp...@copy is a recursive snapshot and you are able to receive the individual rpool snapshots. You just can't receive

Re: [zfs-discuss] zfs recv failing - invalid backup stream

2011-01-05 Thread Cindy Swearingen
output for clues. Thanks, Cindy On 01/05/11 14:01, Brandon High wrote: On Wed, Jan 5, 2011 at 11:57 AM, Cindy Swearingen cindy.swearin...@oracle.com wrote: In the meantime, you could rule out a problem with zfs send/recv on your system if you could create another non-BE dataset with descendent

[zfs-discuss] Happy Holidays...

2010-12-23 Thread Cindy Swearingen
Hey ZFSers, This is a moderated discussion list and if you are not a member of this list, your postings are not posted until a moderator approves them. The list moderators will be on vacation and posting approvals will be delayed. If you are not a member of this list and you want to post to

Re: [zfs-discuss] A couple of quick questions

2010-12-22 Thread Cindy Swearingen
Hi Per, Disk devices are used to create ZFS storage pools. Then, you create file systems that can access all the available disk space in the storage pool. ZFS file systems are not constrained to any physical disk in the storage pool. Consider that you will need to backup your data regardless

Re: [zfs-discuss] A few questions

2010-12-17 Thread Cindy Swearingen
You should take a look at the ZFS best practices guide for RAIDZ and mirrored configuration recommendations: http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide It's easy for me to say because I don't have to buy storage, but mirrored storage pools are currently more flexible,

Re: [zfs-discuss] A few questions

2010-12-16 Thread Cindy Swearingen
Hi Lanky, Other follow-up posters have given you good advice. I don't see where you are getting the idea that you can combine pools with pools. You can't do this and I don't see that the southbrain tutorial illustrates this either. All of his examples for creating redundant pools are

Re: [zfs-discuss] Guide to COMSTAR iSCSI?

2010-12-16 Thread Cindy Swearingen
Hi Chris, I have attempted to document the steps to restrict LUN access, here: http://docs.sun.com/app/docs/doc/821-1459/gkgnr?l=en&a=view Please see if this info helps. If it doesn't, let me know the errors. Thanks, Cindy On 12/13/10 16:30, Chris Mosetick wrote: I have found this post from

Re: [zfs-discuss] zfs send receive problem/questions

2010-12-02 Thread Cindy Swearingen
Hi Don, I'm no snapshot expert but I think you will have to remove the previous receiving side snapshots, at least. I created a file system hierarchy that includes a lower-level snapshot, created a recursive snapshot of that hierarchy and sent it over to a backup pool. Then, did the same steps

Re: [zfs-discuss] How to zfs send current stage of fs from read-only pool?

2010-11-29 Thread Cindy Swearingen
Karel, You can't create snapshots in a read-only pool. You will have to use something else besides zfs snapshots, such as tar or cpio. You could have used zfs send if a snapshot already existed but you can't write anything to the pool when it is in read-only mode. Thanks, Cindy On 11/25/10

Re: [zfs-discuss] Strange behavior of b151a and .zfs directory

2010-11-29 Thread Cindy Swearingen
Hi Karel, Try /usr/bin/find instead of /usr/gnu/bin/find: # which find /usr/gnu/bin/find # zfs snapshot rpool/cin...@snap1 # cd /rpool/cindys/.zfs # /usr/bin/find . -type f ./snapshot/snap1/file.1 ./snapshot/snap1/file.2 Thanks, Cindy On 11/25/10 15:22, Karel Gardas wrote: Hello, after

Re: [zfs-discuss] ZFS Crypto in Oracle Solaris 11 Express

2010-11-24 Thread Cindy Swearingen
On 23/11/2010 21:01, StorageConcepts wrote: r...@solaris11:~# zfs list mypool/secret_received cannot open 'mypool/secret_received': dataset does not exist r...@solaris11:~# zfs send mypool/plaint...@test | zfs receive -o encryption=on mypool/secret_received cannot receive: cannot

Re: [zfs-discuss] RAID-Z/mirror hybrid allocator

2010-11-18 Thread Cindy Swearingen
Hi Markus, Jeff Bonwick integrated this feature so I'll let him describe it. In a nutshell: If you create a RAIDZ pool in OS 11 Express or if you are running at least build 129, some of the pool metadata is mirrored automatically. This is a performance feature that should increase read I/O

Re: [zfs-discuss] RAID-Z/mirror hybrid allocator

2010-11-18 Thread Cindy Swearingen
need to upgrade the pool version to use this feature. In this case, newly written metadata would be mirrored. Thanks, Cindy On 11/18/10 10:15, Cindy Swearingen wrote: Hi Markus, Jeff Bonwick integrated this feature so I'll let him describe it. In a nutshell: If you create a RAIDZ pool in OS 11

Re: [zfs-discuss] Pool versions

2010-11-16 Thread Cindy Swearingen
Hi Ian, The pool and file system version information is available in the ZFS Administration Guide, here: http://docs.sun.com/app/docs/doc/821-1448/appendixa-1?l=en&a=view The OpenSolaris version pages are up-to-date now also. Thanks, Cindy On 11/15/10 16:42, Ian Collins wrote: Is there an

Re: [zfs-discuss] Booting fails with `Can not read the pool label' error

2010-11-12 Thread Cindy Swearingen
Hi Rainer, I haven't seen this in a while, but I wonder if you just need to set the bootfs property on your new root pool and/or reapply the boot blocks. Can you boot from a LiveCD, import this pool, and review the bootfs property value? I would also install the boot blocks on the rpool2

Re: [zfs-discuss] [OpenIndiana-discuss] format dumps the core

2010-11-03 Thread Cindy Swearingen
about this since I'm sure others will also run into this problem at some point if they have a mixed Linux/Solaris environment. -Moazam On Tue, Nov 2, 2010 at 3:15 PM, Cindy Swearingen cindy.swearin...@oracle.com wrote: Hi Moazam, The initial diagnosis is that the LSI controller is reporting bogus

Re: [zfs-discuss] [OpenIndiana-discuss] format dumps the core

2010-11-02 Thread Cindy Swearingen
Hi Moazam, The initial diagnosis is that the LSI controller is reporting bogus information. It looks like Roy is using a similar controller. You might report this problem to LSI, but I will pass this issue along to the format folks. Thanks, Cindy On 11/02/10 15:26, Moazam Raja wrote: I'm

Re: [zfs-discuss] Mirroring a zpool

2010-10-28 Thread Cindy Swearingen
Hi SR, You can create a mirrored storage pool, but you can't mirror an existing raidz2 pool nor can you convert a raidz2 pool to a mirrored pool. You would need to copy the data from the existing pool, destroy the raidz2 pool, and create a mirrored storage pool. Cindy On 10/28/10 11:19, SR
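The copy/destroy/recreate cycle described above can be sketched like this; pool and device names are hypothetical, and zfs send/recv is one plausible way to do the copy step (tar or another backup tool would also work):

```shell
# 1. Snapshot the raidz2 pool and copy its data somewhere safe
zfs snapshot -r tank@migrate
zfs send -R tank@migrate | zfs receive -d backuppool

# 2. Destroy the raidz2 pool (point of no return; verify the backup first)
zpool destroy tank

# 3. Recreate the pool as mirrors and restore the data
zpool create tank mirror c1t0d0 c1t1d0 mirror c1t2d0 c1t3d0
zfs send -R backuppool/tank@migrate | zfs receive -dF tank
```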

Re: [zfs-discuss] How do you use 1 partition on x86?

2010-10-26 Thread Cindy Swearingen
Hi Bill, Do you have another equivalent sized disk available? If so, and assuming this is the root pool, create one large slice 0 (s0) with an SMI label on the replacement disk. Attach that disk to create a mirrored root pool. After the replacement disk has resilvered, install the bootblocks,

Re: [zfs-discuss] Basic ACL usage in group environment

2010-10-26 Thread Cindy Swearingen
Hi Andy, What is the setting for the aclinherit property? I think you want to set this property to passthrough. Thanks, Cindy On 10/26/10 07:25, Andy Graybeal wrote: Yes, if you set up the directory ACLs for inheritance (include :fd: when you specify the ACEs), the ACLs on copied files will

Re: [zfs-discuss] No ACL inheritance with aclmode=passthrough in onnv-134

2010-10-25 Thread Cindy Swearingen
Hi Frank, You can't simulate the aclmode-less world of the upcoming release by setting aclmode to discard in b134. The reason you are seeing the ACL being discarded is that aclmode applies to both chmod operations and file/dir create operations. It

Re: [zfs-discuss] When `zpool status' reports bad news

2010-10-21 Thread Cindy Swearingen
Hi Harry, Generally, you need to use zpool clear to clear the pool errors, but I can't reproduce the removed files reappearing in zpool status on my own system when I corrupt data so I'm not sure this will help. Some other larger problem is going on here... Did any hardware changes lead up to

Re: [zfs-discuss] Unknown Space Gain

2010-10-20 Thread Cindy Swearingen
Krunal, The file system size changes are probably caused when these snapshots are created and deleted automatically. The recurring messages below are driver related and probably have nothing to do with the snapshots. Thanks, Cindy On 10/20/10 10:50, Krunal Desai wrote: Argh, yes, lots of

Re: [zfs-discuss] rename zpool

2010-10-19 Thread Cindy Swearingen
Hi Sridhar, The answer to the first question is definitely no: No way exists to change a pool name without exporting and importing the pool. I thought we had an open CR that covered renaming pools but I can't find it. The underlying pool devices contain pool information and no easy way exists
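The export/import rename that the post refers to can be sketched as follows (pool names are hypothetical):

```shell
# There is no 'zpool rename'; the rename happens at import time
zpool export oldpool
zpool import oldpool newpool

# The pool's datasets now appear under the new name
zfs list -r newpool
```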

Re: [zfs-discuss] vdev failure - pool loss ?

2010-10-19 Thread Cindy Swearingen
On 10/19/10 14:33, Tuomas Leikola wrote: On Mon, Oct 18, 2010 at 8:18 PM, Simon Breden sbre...@gmail.com wrote: So are we all agreed then, that a vdev failure will cause pool loss ? -- unless you use copies=2 or 3, in which case your data is still safe for those datasets that have this

Re: [zfs-discuss] adding new disks and setting up a raidz2

2010-10-15 Thread Cindy Swearingen
Derek, The c0t5000C500268CFA6Bd0 disk has some kind of label problem. You might compare the label of this disk to the other disks. I agree with Richard that using whole disks (use the d0 device) is best. You could also relabel it manually by using format's fdisk option to delete the current

Re: [zfs-discuss] Online zpool expansion feature in Solaris 10 9/10

2010-10-13 Thread Cindy Swearingen
Hi James, I'm looking into this and will get back to you shortly. Thanks, Cindy On 10/13/10 00:14, James Patterson wrote: I'm testing the new online zpool expansion feature of Solaris 10 9/10. My zpool was created using the entire disk (i.e., no slice number was used). When I resize my LUN

Re: [zfs-discuss] raidz faulted with only one unavailable disk

2010-10-08 Thread Cindy Swearingen
Hi Hans-Christian, Can you provide the commands you used to create this pool? Are the pool devices actually files? If so, I don't see how you have a pool device that starts without a leading slash. I tried to create one and it failed. See the example below. By default, zpool import looks in

Re: [zfs-discuss] raidz faulted with only one unavailable disk

2010-10-08 Thread Cindy Swearingen
Hi Christian, Yes, with non-standard disks you will need to provide the path to zpool import. I don't think the force import of a degraded pool would cause the pool to be faulted. In general, the I/O error is caused when ZFS can't access the underlying devices. In this case, your non-standard

Re: [zfs-discuss] Finding corrupted files

2010-10-07 Thread Cindy Swearingen
I would not discount the performance issue... Depending on your workload, you might find that performance increases with ZFS on your hardware RAID in JBOD mode. Cindy On 10/07/10 06:26, Edward Ned Harvey wrote: From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-

Re: [zfs-discuss] moving newly created pool to alternate host

2010-10-07 Thread Cindy Swearingen
Hi Sridhar, Most of the answers to your questions are yes. If I have a mirrored pool mypool, like this: # zpool status mypool pool: mypool state: ONLINE scan: none requested config: NAME STATE READ WRITE CKSUM mypool ONLINE 0 0 0

Re: [zfs-discuss] Finding corrupted files

2010-10-06 Thread Cindy Swearingen
Budy, Your previous zpool status output shows a non-redundant pool with data corruption. You should use the fmdump -eV command to find out the underlying cause of this corruption. You can review the hardware-level monitoring tools, here:

Re: [zfs-discuss] moving newly created pool to alternate host

2010-10-05 Thread Cindy Swearingen
Hi Sridhar, After a zpool split operation, you can access the newly created pool by using the zpool import command. If the LUNs from mypool are available on host1 and host2, you should be able to import mypool_snap from host2. After mypool_snap is imported, it will be available for backups, but
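The split-then-import sequence described above can be sketched like this, using the pool names from the post:

```shell
# Split one side of each mirror off into a new pool
zpool split mypool mypool_snap

# On a host that can see those LUNs, import the new pool
zpool import mypool_snap

# mypool_snap is now an independent pool, usable for backups
zpool status mypool_snap
```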

Re: [zfs-discuss] hot spare remains in use

2010-10-04 Thread Cindy Swearingen
Hi Brian, You could manually detach the spare, like this: # zpool detach pool2 c10t22d0 Sometimes, you might need to clear the pool error but I don't see any residual errors in this output: # zpool clear pool2 I would use fmdump -eV to see what's going on with c10t11d0. Thanks, Cindy On

Re: [zfs-discuss] Can I upgrade a striped pool of vdevs to mirrored vdevs?

2010-10-04 Thread Cindy Swearingen
Hi-- Yes, you would use the zpool attach command to convert a non-redundant configuration into a mirrored pool configuration. http://docs.sun.com/app/docs/doc/819-5461/gcfhe?l=en&a=view See: Example 4-6 Converting a Nonredundant ZFS Storage Pool to a Mirrored ZFS Storage Pool If you have
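A minimal sketch of that conversion, with hypothetical device names:

```shell
# Start with a single-disk, non-redundant pool
zpool create tank c1t0d0

# Attaching a second disk to the existing top-level vdev
# converts it to a two-way mirror; ZFS resilvers automatically
zpool attach tank c1t0d0 c2t0d0
zpool status tank   # the vdev now shows as mirror-0
```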

Re: [zfs-discuss] Can I upgrade a striped pool of vdevs to mirrored vdevs?

2010-10-04 Thread Cindy Swearingen
errors Actually, this zpool consists of two FC raids and I think I created it simply by adding these two devs to the pool. Does this disqualify my zpool for upgrading? Thanks, budy Am 04.10.10 16:48, schrieb Cindy Swearingen: Hi-- Yes, you would use the zpool attach command to convert a non

Re: [zfs-discuss] Can I upgrade a striped pool of vdevs to mirrored vdevs?

2010-10-04 Thread Cindy Swearingen
for upgrading? Thanks, budy Am 04.10.10 16:48, schrieb Cindy Swearingen: Hi-- Yes, you would use the zpool attach command to convert a non-redundant configuration into a mirrored pool configuration. http://docs.sun.com/app/docs/doc/819-5461/gcfhe?l=en&a=view See: Example 4-6 Converting

Re: [zfs-discuss] hot spare remains in use

2010-10-04 Thread Cindy Swearingen
that there. I tried replace/remove. I guess the spare is actually a mirror of the disk and the spare disk and is treated as such. Thanks again, Brian On Oct 4, 2010, at 10:27 AM, Cindy Swearingen wrote: Hi Brian, You could manually detach the spare, like this: # zpool detach pool2 c10t22d0

Re: [zfs-discuss] Migrating to an aclmode-less world

2010-10-04 Thread Cindy Swearingen
Hi Simon, I don't think you will see much difference for these reasons: 1. The CIFS server ignores the aclinherit/aclmode properties. 2. Your aclinherit=passthrough setting overrides the aclmode property anyway. 3. The only difference is that if you use chmod on these files to manually change

Re: [zfs-discuss] Cannot destroy snapshots: dataset does not exist

2010-09-30 Thread Cindy Swearingen
Hi Ian, If this is a release prior to b122, you might be running into CR 6860996. Please see this thread for a possible resolution: http://opensolaris.org/jive/thread.jspa?messageID=493866#493866 Thanks, Cindy On 09/30/10 09:34, Ian Levesque wrote: Hello, I have a ZFS filesystem (zpool

Re: [zfs-discuss] rpool spare

2010-09-29 Thread Cindy Swearingen
Hi Tony, The current behavior is that you can add a spare to a root pool. If the spare kicks in automatically, you would need to apply the boot blocks manually before you could boot from the spared-in disk. A good alternative is to create a two-way or three-way mirrored root pool. We're

Re: [zfs-discuss] rpool spare

2010-09-29 Thread Cindy Swearingen
Tony, A brief follow-up is that the issue of applying the boot blocks automatically to a spare for a root pool is covered by this existing CR 6668666. See this URL for more details. http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6668666 Thanks, Cindy On 09/29/10 08:38, Cindy

Re: [zfs-discuss] ZFS flash issue

2010-09-28 Thread Cindy Swearingen
Hi Ketan, My flash archive experience is minimal, but... This error suggests that the disk components of this pool might have some SVM remnants. Is that possible? I would check with the metastat command, review /etc/vfstab, or /etc/lu/ICF.* to see if they are referencing metadevices. Thanks,

Re: [zfs-discuss] ZFS flash issue

2010-09-28 Thread Cindy Swearingen
Ketan, Someone with more flash archive experience than me says that you can't install a ZFS root flash archive with Live Upgrade at this time. Duh, I knew that. Sorry for the red herring... :-) Cindy On 09/28/10 08:30, Cindy Swearingen wrote: Hi Ketan, My flash archive experience

Re: [zfs-discuss] zfs proerty aclmode gone in 147?

2010-09-24 Thread Cindy Swearingen
Hi Stephan, Yes, the aclmode property was removed, but we're not sure how this change is impacting your users. Can you provide their existing ACL information and we'll take a look. Thanks, Cindy On 09/24/10 01:41, Stephan Budach wrote: Hi, I recently installed oi147 and I noticed that the

Re: [zfs-discuss] Growing a root ZFS mirror on b134?

2010-09-23 Thread Cindy Swearingen
On 23/09/2010 11:06 PM, casper@sun.com wrote: Ok, that doesn't seem to have worked so well ... I took one of the drives offline, rebooted and it just hangs at the splash screen after prompting for which BE to boot into. It gets to hostname: blah and just sits there.

Re: [zfs-discuss] What is l2cache setting?

2010-09-22 Thread Cindy Swearingen
On 9/22/10 1:40 PM, Peter Taps wrote: Neil, Thank you for your help. However, I don't see anything about l2cache under Cache devices man pages. To be clear, there are two different vdev types defined in zfs source code - cache and l2cache. I am familiar with cache devices. I am curious

Re: [zfs-discuss] zpool create using whole disk - do I add p0? E.g. c4t2d0 or c42d0p0

2010-09-09 Thread Cindy Swearingen
Hi-- It might help to review the disk component terminology description: c#t#d#p# represents the fdisk partition on x86 systems, where you can have up to 4 fdisk partitions, such as one for the Solaris OS or a Windows OS. An fdisk partition is the larger container of the disk or disk

Re: [zfs-discuss] zpool create using whole disk - do I add p0? E.g. c4t2d0 or c42d0p0

2010-09-07 Thread Cindy Swearingen
Hi Craig, D'oh. I kept wondering where those p0 examples were coming from. Don't use the p* devices for your storage pools. They represent the larger fdisk partition. Use the d* devices instead, like this example below: zpool create tank raidz c0t0d0 c0t1d0 c0t2d0 c0t3d0 c0t4d0 c0t5d0

Re: [zfs-discuss] Configuration questions for Home File Server (CPU cores, dedup, checksum)?

2010-09-07 Thread Cindy Swearingen
Craig, I'm sure the other home file server users will comment on your gear and any possible benefit of an L2ARC or separate log device... Use the default checksum, which is fletcher4 (I fixed the tuning guide reference), and skip dedup for now. Keep things as simple as possible. Thanks, Cindy On

Re: [zfs-discuss] possible ZFS-related panic?

2010-09-03 Thread Cindy Swearingen
Hi Marion, I'm not the right person to analyze your panic stack, but a quick search says the page_sub: bad arg(s): pp panic string might be associated with a bad CPU or a page locking problem. I would recommend running CPU/memory diagnostics on this system. Thanks, Cindy On 09/02/10 20:31,

Re: [zfs-discuss] What forum to use with a ZFS how-to question

2010-09-02 Thread Cindy Swearingen
This is the right forum, fire away... Feel free to review ZFS information in advance: http://hub.opensolaris.org/bin/view/Community+Group+zfs/docs ZFS Administration Guide (Solaris 10): http://docs.sun.com/app/docs/doc/819-5461 ZFS Best Practices Guide:

Re: [zfs-discuss] Information lost? Does zpool create erase volumes

2010-09-02 Thread Cindy Swearingen
Dominik, You overwrote your data when you recreated a pool with the same name and the same disks with zpool create. If I try to recreate a pool that already exists, at least an exported one, I will see a message similar to the following: # zpool create tank c3t3d0 invalid vdev specification use '-f'

Re: [zfs-discuss] Information lost? Does zpool create erase volumes

2010-09-02 Thread Cindy Swearingen
Yes, I did try to import the pool. However, the response of the command was no pools available to import. I'm not sure what happened to your pool, but I think it is possible that the pool information on these disks was removed accidentally. I'm not sure what the diskutil command does but if

Re: [zfs-discuss] zpool status and format/kernel disagree about root disk

2010-08-27 Thread Cindy Swearingen
Hi Rainer, I'm no device expert but we see this problem when firmware updates or other device/controller changes change the device ID associated with the devices in the pool. In general, ZFS can handle controller/device changes if the driver generates or fabricates device IDs. You can view

Re: [zfs-discuss] Narrow escape with FAULTED disks

2010-08-18 Thread Cindy Swearingen
It's hard to tell what caused the SMART predictive failure message; it could be something like a temp fluctuation. If ZFS noticed that a disk wasn't available yet, then I would expect a message to that effect. In any case, I think I would have a replacement disk available. The important thing is that you continue to

Re: [zfs-discuss] Narrow escape with FAULTED disks

2010-08-17 Thread Cindy Swearingen
Hi Mark, I would recheck with fmdump to see if you have any persistent errors on the second disk. The fmdump command will display faults and fmdump -eV will display errors (persistent faults that have turned into errors based on some criteria). If fmdump -eV doesn't show any activity for

Re: [zfs-discuss] NFS issue with ZFS

2010-08-13 Thread Cindy Swearingen
Hi Phillip, What's the error message? How did you share the ZFS file system? # zfs create tank/cindys # zfs set sharenfs=on tank/cindys # share - /tank/cindys rw # cp /usr/dict/words /tank/cindys/file.1 # cd /tank/cindys # chmod 666 file.1 # ls -l file.1 -rw-rw-rw- 1 root

Re: [zfs-discuss] NFS issue with ZFS

2010-08-13 Thread Cindy Swearingen
but since this is a ZFS filesystem being NFS over. Who knows!!! Phillip -Original Message- From: Cindy Swearingen [mailto:cindy.swearin...@oracle.com] Sent: Friday, August 13, 2010 12:59 PM To: Phillip Bruce (Mindsource) Cc: zfs-discuss@opensolaris.org Subject: Re: [zfs-discuss] NFS

Re: [zfs-discuss] autoreplace not kicking in

2010-08-11 Thread Cindy Swearingen
Hi Giovanni, The spare behavior and the autoreplace property behavior are separate but they should work pretty well in recent builds. You should not need to perform a zpool replace operation if the autoreplace property is set. If autoreplace is set and a replacement disk is inserted into the
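The autoreplace behavior described above amounts to setting a single pool property; a minimal sketch with a hypothetical pool name:

```shell
# With autoreplace on, a new disk inserted into the same physical
# slot as a failed disk is labeled and resilvered automatically,
# with no explicit 'zpool replace' needed
zpool set autoreplace=on tank
zpool get autoreplace tank
```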

Re: [zfs-discuss] Corrupt file without filename

2010-08-10 Thread Cindy Swearingen
You would look for the device name that might be a problem, like this: # fmdump -eV | grep c2t4d0 vdev_path = /dev/dsk/c2t4d0s0 vdev_path = /dev/dsk/c2t4d0s0 vdev_path = /dev/dsk/c2t4d0s0 vdev_path = /dev/dsk/c2t4d0s0 Then, review the file more closely for the details of these errors, such as

Re: [zfs-discuss] Global Spare for 2 pools

2010-08-10 Thread Cindy Swearingen
Yes, as long as the pools are on the same system, you can share a spare between two pools, but we are not recommending sharing spares at this time. We'll keep you posted. Thanks, Cindy On 08/10/10 07:39, Tony MacDoodle wrote: I have 2 ZFS pools all using the same drive type and size. The

Re: [zfs-discuss] ZFS with EMC PowerPath

2010-08-10 Thread Cindy Swearingen
Hi Brian, Is the pool exported before the update/upgrade of PowerPath software? This recommended practice might help the resulting devices to be more coherent. If the format utility sees the devices the same way as ZFS, then I don't see how ZFS can rename the devices. If the format utility

Re: [zfs-discuss] ZFS SCRUB

2010-08-10 Thread Cindy Swearingen
The ZFS best practices is here: http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide Run zpool scrub on a regular basis to identify data integrity problems. If you have consumer-quality drives, consider a weekly scrubbing schedule. If you have datacenter-quality drives,
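One way to put the scrub schedule above into practice is a root crontab entry; the pool names and times here are illustrative:

```shell
# Example root crontab entries (edit with 'crontab -e' as root):
# weekly scrub for consumer-quality drives,
# monthly scrub for datacenter-quality drives
0 3 * * 0 /usr/sbin/zpool scrub tank     # every Sunday at 03:00
0 3 1 * * /usr/sbin/zpool scrub dcpool   # 1st of each month at 03:00
```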

Re: [zfs-discuss] How to identify user-created zfs filesystems?

2010-08-04 Thread Cindy Swearingen
Hi Peter, I don't think we have any property that determines who created the file system. Would this work instead: # zfs list -r mypool NAME USED AVAIL REFER MOUNTPOINT mypool 172K 134G 33K /mypool mypool/cifs1 31K 134G 31K /mypool/cifs1 mypool/cifs2 31K

Re: [zfs-discuss] Corrupt file without filename

2010-08-04 Thread Cindy Swearingen
Because this is a non-redundant root pool, you should still check fmdump -eV to make sure the corrupted files aren't due to some ongoing disk problems. cs On 08/04/10 13:45, valrh...@gmail.com wrote: Oooh... Good call! I scrubbed the pool twice, then it showed a real filename from an old

Re: [zfs-discuss] Moved to new controller now degraded

2010-07-30 Thread Cindy Swearingen
In general, ZFS can detect device changes, but we recommend exporting the pool before you move hardware around. You might try exporting and importing this pool to see if ZFS recognizes this device again. Make sure you have a good backup of this data before you export it because it's hard to tell

Re: [zfs-discuss] ZFS acl and chmod

2010-07-29 Thread Cindy Swearingen
Which Solaris release is this and are you using /usr/bin/ls and /usr/bin/chmod? Thanks, Cindy On 07/29/10 02:44, . . wrote: Hi , while playing with ZFS acls I have noticed chmod strange behavior, it duplicates some acls , is it a bug or a feature :) ? For example scenario: #ls -dv ./2

Re: [zfs-discuss] zfs upgrade unmounts filesystems

2010-07-29 Thread Cindy Swearingen
individually, like this: # zfs upgrade space/direct # zfs upgrade space/dcc Thanks, Cindy On 07/29/10 09:48, Cindy Swearingen wrote: Hi Gary, This should just work without having to do anything. Looks like a bug but I haven't seen this problem before. Anything unusual about the mount points

Re: [zfs-discuss] ZFS acl and chmod

2010-07-29 Thread Cindy Swearingen
/synchronize:allow On 07/29/10 11:56, Cindy Swearingen wrote: Which Solaris release is this and are you using /usr/bin/ls and /usr/bin/chmod? Thanks, Cindy On 07/29/10 02:44, . . wrote: Hi , while playing with ZFS acls I have noticed chmod strange behavior, it duplicates some acls

Re: [zfs-discuss] root pool expansion

2010-07-28 Thread Cindy Swearingen
Hi Gary, If your root pool is getting full, you can replace the root pool disk with a larger disk. My recommendation is to attach the replacement disk, let the replacement disk resilver, install the boot blocks, and then detach the smaller disk. The system will see the expanded space
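The attach/resilver/detach sequence described above can be sketched as follows, with hypothetical device names (c0t0d0s0 is the small current disk, c0t1d0s0 the larger replacement); the installgrub step shown is for x86, while SPARC systems use installboot instead:

```shell
# Attach the larger disk to form a mirror with the current root disk:
zpool attach rpool c0t0d0s0 c0t1d0s0
# Watch resilvering; wait until it completes:
zpool status rpool
# Install the boot blocks on the new disk (x86 example):
installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c0t1d0s0
# Finally, detach the smaller disk:
zpool detach rpool c0t0d0s0
```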

Re: [zfs-discuss] zfs allow does not work for rpool

2010-07-28 Thread Cindy Swearingen
Hi Mark, A couple of things are causing this to fail: 1. The user needs permissions to the underlying mount point. 2. The user needs both create and mount permissions to create ZFS datasets. See the syntax below, which might vary depending on your Solaris release. Thanks, Cindy # chmod
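The two requirements above can be sketched as follows; the user name "marks" is hypothetical, and the exact syntax may vary by Solaris release:

```shell
# 1. Grant the user access to the underlying mount point:
chmod A+user:marks:add_subdirectory:fd:allow /rpool
# 2. Delegate both create and mount permissions on the pool:
zfs allow marks create,mount rpool
# The user should now be able to run:
zfs create rpool/marks
```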

Re: [zfs-discuss] zfs allow does not work for rpool

2010-07-28 Thread Cindy Swearingen
Mike, Did you also give the user permissions to the underlying mount point: # chmod A+user:user-name:add_subdirectory:fd:allow /rpool If so, please let me see the syntax and error messages. Thanks, Cindy On 07/28/10 12:23, Mike DeMarco wrote: Thanks, adding mount did allow me to create it

Re: [zfs-discuss] How can a mirror lose a file?

2010-07-28 Thread Cindy Swearingen
Hi Sol, What kind of disks? You should be able to use the fmdump -eV command to identify when the checksum errors occurred. Thanks, Cindy On 07/28/10 13:41, sol wrote: Hi, Having just done a scrub of a mirror I've lost a file, and I'm curious how this can happen in a mirror. Doesn't it
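The fmdump check suggested above can be run as follows; the grep filter is my addition and assumes the standard ZFS error-class names:

```shell
# One-line summary per fault-management error event, with timestamps:
fmdump -e
# Full detail for each event (device, error class, time):
fmdump -eV
# Narrow the summary to ZFS checksum events:
fmdump -e | grep zfs.checksum
```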

Re: [zfs-discuss] ZFSroot LiveUpgrade

2010-07-27 Thread Cindy Swearingen
Hi Ketan, The supported LU + zone configuration migration scenarios are described here: http://docs.sun.com/app/docs/doc/819-5461/gihfj?l=en&a=view I think the problem is that /zones is a mountpoint. You might have better results if /zones was just a directory. See the examples in this

Re: [zfs-discuss] Mirrored raidz

2010-07-26 Thread Cindy Swearingen
A small follow-up is that creating pools from components of other pools can cause system deadlocks. This approach is not recommended. Thanks, Cindy On 07/26/10 12:19, Saxon, Will wrote: -Original Message- From: zfs-discuss-boun...@opensolaris.org

Re: [zfs-discuss] Mirrored raidz

2010-07-26 Thread Cindy Swearingen
You might look at the zpool split feature, where you can split off the disks from a mirrored pool to create an identical pool, described here: http://hub.opensolaris.org/bin/view/Community+Group+zfs/docs ZFS Admin Guide, p. 87 Thanks, Cindy On 07/26/10 12:51, Dav Banks wrote: I wanted to
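The zpool split operation mentioned above works along these lines (the pool names "tank" and "tank2" are hypothetical):

```shell
# Split detaches one side of each mirror into a new,
# identical pool named "tank2":
zpool split tank tank2
# The new pool is left exported; import it to use it:
zpool import tank2
```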

Re: [zfs-discuss] zfs add -r output

2010-07-23 Thread Cindy Swearingen
Hi Ryan, You are seeing this CR: http://bugs.opensolaris.org/view_bug.do?bug_id=6916574 zpool add -n displays incorrect structure This is a display problem only. Thanks, Cindy On 07/22/10 15:54, Ryan Schwartz wrote: I've got a system running s10x_u7wos_08 with only half of the disks

Re: [zfs-discuss] Optimal Disk configuration

2010-07-21 Thread Cindy Swearingen
The answer depends on your goals: space, performance, or reliability. To me, optimal means best performance and reliability, so use: - JBOD - ZFS mirrored pool of 22x2 + 2 spares - Mirror the disk pairs across both controllers Let ZFS protect your data. Cindy On 07/21/10 15:10, John Andrunas
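A zpool create along the lines of the recommendation above might look like this; the device names are hypothetical, and the real command would repeat the mirror pairs 22 times:

```shell
# Mirror each pair across the two controllers (c1 and c2),
# then add hot spares; only three of the 22 pairs are shown.
zpool create tank \
  mirror c1t0d0 c2t0d0 \
  mirror c1t1d0 c2t1d0 \
  mirror c1t2d0 c2t2d0 \
  spare c1t22d0 c2t22d0
```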

Re: [zfs-discuss] Help identify failed drive

2010-07-19 Thread Cindy Swearingen
Hi-- I don't know what's up with iostat -En but I think I remember a problem where iostat does not correctly report drives running in legacy IDE mode. You might use the format utility to identify these devices. Thanks, Cindy On 07/18/10 14:15, Alxen4 wrote: This is a situation: I've got an

Re: [zfs-discuss] Help identify failed drive

2010-07-19 Thread Cindy Swearingen
. -Original Message- From: Cindy Swearingen [mailto:cindy.swearin...@oracle.com] Sent: Monday, July 19, 2010 9:16 AM To: Yuri Homchuk Cc: zfs-discuss@opensolaris.org Subject: Re: [zfs-discuss] Help identify failed drive Hi-- I don't know what's up with iostat -En but I think I remember

Re: [zfs-discuss] Help identify failed drive

2010-07-19 Thread Cindy Swearingen
that all 7 drives are Seagate Barracuda, which is definitely not correct. That is the reason for my original question. I need to know if c2t3d0 is Seagate or Western Digital. Thanks, -Original Message- From: Cindy Swearingen [mailto:cindy.swearin...@oracle.com] Sent: Monday, July 19, 2010 9:48

Re: [zfs-discuss] Help identify failed drive

2010-07-19 Thread Cindy Swearingen
-Original Message- From: Cindy Swearingen [mailto:cindy.swearin...@oracle.com] Sent: Monday, July 19, 2010 10:28 AM To: Yuri Homchuk Cc: zfs-discuss@opensolaris.org Subject: Re: [zfs-discuss] Help identify failed drive I think you are saying that even though format shows 9 devices (0-8

Re: [zfs-discuss] disaster recovery process (replace disks)

2010-07-17 Thread Cindy Swearingen
Hi Ned, One of the benefits of using a mirrored ZFS configuration is being able to replace each disk with a larger disk, in place, online, and so on... It's probably easiest to use zfs send -R (recursive) to do a recursive snapshot of your root pool. Check out the steps here:
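The recursive root-pool backup mentioned above generally follows this pattern; the snapshot name and the target pool "backup" are hypothetical:

```shell
# Snapshot the whole root pool hierarchy at once:
zfs snapshot -r rpool@backup
# Send the full hierarchy to another pool; -u avoids mounting the
# received datasets, -d preserves the sent dataset paths:
zfs send -R rpool@backup | zfs receive -Fdu backup
```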

Re: [zfs-discuss] preparing for future drive additions

2010-07-14 Thread Cindy Swearingen
Hi Daniel, No conversion from a mirrored to RAIDZ configuration is available yet. Mirrored pools are more flexible and generally provide good performance. You can easily create a mirrored pool of two disks and then add two more disks later. You can also replace each disk with larger disks if
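A minimal sketch of the growth path described above, with hypothetical device names:

```shell
# Start with a two-disk mirror:
zpool create tank mirror c1t0d0 c1t1d0
# Later, grow the pool by adding a second mirrored pair:
zpool add tank mirror c1t2d0 c1t3d0
# Or replace a disk in place with a larger one:
zpool replace tank c1t0d0 c1t4d0
```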
