Re: [zfs-discuss] zfs man pages on Open Solaris

2006-06-20 Thread Cindy Swearingen
Hi Ricardo, I just tested most of the links from the zones.5 man page on and they all seem to be working now. Outages occurred a couple of weeks ago, but everything is back to normal. Please let me know if you have any more problems with and I'll file a service

Re: [zfs-discuss] zfs man pages on Open Solaris

2006-06-20 Thread Cindy Swearingen
D'oh! Thanks for the tip. I was testing the Solaris Express version, which is working fine, here: If you're working with OpenSolaris features, then the Solaris Express docs will more closely correlate than the Solaris 10 man pages.

Re: [zfs-discuss]

2006-06-23 Thread Cindy Swearingen
Hi Chris, I fixed the immediate problem, which is to include a version/3 page. I think your other suggestion below is a good one. We'll work on that as well. Regards, Cindy Chris Gerhard wrote: having just upgraded to nv42 zpool status tells me I need to upgrade the on-disk version.

Re: [zfs-discuss] This may be a somewhat silly question ...

2006-06-28 Thread Cindy Swearingen
Dennis, You are absolutely correct that the doc needs a step to verify that the backup occurred. I'll work on getting this step added to the admin guide ASAP. Thanks for the feedback... Cindy Dennis Clarke wrote: Am I missing something here? [1] Dennis [1] I am fully prepared for RTFM

Re: [zfs-discuss] ZFS, block device and Xen?

2006-08-01 Thread Cindy Swearingen
Sorry, here's the correct URL: Cindy Al Hopper wrote: On Tue, 1 Aug 2006, Cindy Swearingen wrote: Hi Patrick, Here's a pointer to the volume section in the ZFS admin guide: http://docsview.sfbay/app/docs/doc/817-2271/6mhupg6gl

Re: [zfs-discuss] Re: Re: Re: zpool status panics server

2006-08-25 Thread Cindy Swearingen
Hi Neal, The ZFS administration class, available in the fall, I think, covers basically the same content as the ZFS admin guide only with extensive lab exercises. If you're an experienced admin, I think you can pick up most of the basic features from the ZFS Admin Guide. If you can't, please

Re: [zfs-discuss] new ZFS links page

2006-09-01 Thread Cindy Swearingen
James, I noticed your link to the ZFS Admin Guide is out of date because I appended the date in the pdf filename. This doesn't work because when I update the guide once a month or so, you wouldn't get the latest version. So, I simplified this by renaming it zfsadmin.pdf The month/year is

Re: [zfs-discuss] What's going to make it into 11/06?

2006-10-05 Thread Cindy Swearingen
Hi Brian, See the previous posting about this below. You can read about these features in the ZFS Admin Guide. Cheers, Cindy Subject: Solaris 10 ZFS Update From: George Wilson [EMAIL PROTECTED] Date: Mon, 31 Jul 2006 11:51:09 -0400 To: We have putback a

Re: [zfs-discuss] s10u3 query

2006-10-27 Thread Cindy Swearingen
Yes, hot spares are in the upcoming Solaris 10 release... You can read about hot spares in the Solaris Express docs, here: Essentially the same information will appear in the upcoming Solaris 10 version. Cindy ozan s. yigit

Re: [zfs-discuss] zfs+stripe detach

2006-11-09 Thread Cindy Swearingen
Hi-- ZFS stripes data across all pool configurations, but you can only detach a device from a mirrored storage pool. For more information, see this section: However, figuring out that this operation is only supported in a mirrored

Re: [zfs-discuss] # devices in raidz.

2006-11-13 Thread Cindy Swearingen
Hi Mike, Yes, outside of the hot-spares feature, you can detach, offline, and replace existing devices in a pool, but you can't remove devices, yet. This feature work is being tracked under this RFE: Cindy Mike Seda wrote:

Re: [zfs-discuss] Re: Production ZFS Server Death (06/06)

2006-11-29 Thread Cindy Swearingen
Hi Betsy, Yes, part of this is a documentation problem. I recently documented the find -inum scenario in the community version of the admin guide. Please see page 156, (well, for next time) here: We're working on the larger issue as well. Cindy

Re: [zfs-discuss] Re: Adding disk to a RAID-Z?

2007-01-11 Thread Cindy Swearingen
Hi Peter, I think you must be referring to this section in the ZFS admin guide: If you are creating a RAID-Z configuration with many disks, as in this example, a RAID-Z configuration with 14 disks is better split into two 7-disk

Re: [zfs-discuss] ZFS and HDLM 5.8 ... does that coexist well ?

2007-01-16 Thread Cindy Swearingen
Hi Torrey, The MD21 entries were removed from the /etc/format.dat file in the Solaris 10 release although the controller itself was EOL'd long before this release. However, the entries are not removed upon upgrade from a previous release, which is this bug:

Re: [zfs-discuss] Re: Adding my own compression to zfs

2007-01-29 Thread Cindy . Swearingen
See the following bug: Cindy roland wrote: is it planned to add some other compression algorithm to zfs ? lzjb is quite good and especially performing very well, but i`d like to have better compression (bzip2?) - no matter how worse

Re: [zfs-discuss] dumpadm and using dumpfile on zfs?

2007-01-29 Thread Cindy . Swearingen
Hi Peter, This operation isn't supported yet. See this bug: Both the zfs man page and the ZFS Admin Guide identify swap and dump limitations, here: Cindy Peter Buckingham

Re: [zfs-discuss] ZFS inode equivalent

2007-01-31 Thread Cindy . Swearingen
Final for the first draft. :-) Use the .../community/zfs/docs link to get to this doc link at the bottom of the page. The current version is indeed 0822. More updates are needed, but the dnode description is still applicable. Someone will correct me if I'm wrong. cs James Blackburn wrote: Or

Re: [zfs-discuss] Re: How to backup a slice ? - newbie

2007-02-20 Thread Cindy . Swearingen
Uwe, It was also unclear to me that legacy mounts were causing your troubles. The ZFS Admin Guide describes ZFS mounts and legacy mounts, here: Richard, I think we need some more basic troubleshooting info, such as this mount failure.

Re: [zfs-discuss] Replacing a drive using ZFS

2007-02-21 Thread Cindy . Swearingen
Matt, Generally, when a disk needs to be replaced, you replace the disk, use the zpool replace command, and you're done... This is only a little more complicated in your scenario below because of the sharing the disk between ZFS and UFS. Most disks are hot-pluggable so you generally don't need

Re: [zfs-discuss] ZFS with raidz

2007-03-19 Thread Cindy . Swearingen
Hi Kory, No, they don't have to be the same size. But the pool size will be constrained by the smallest disk and might not be the best use of your disk space. See the output below. I'd be better off mirroring the two 136-GB disks and using the 4-GB disk for something else. :-) Cindy c0t0d0 =

Re: [zfs-discuss] Re: Re: simple Raid-Z question

2007-04-09 Thread Cindy . Swearingen
Malachi, The section on adding devices to a ZFS storage pool in the ZFS Admin guide, here, provides an example of adding to a raidz configuration: http://docsview.sfbay/app/docs/doc/817-2271/6mhupg6ft?a=view I think I need to provide a summary of what you can do with both raidz and mirrored

Re: [zfs-discuss] Re: Re: simple Raid-Z question

2007-04-09 Thread Cindy . Swearingen
Here's the correct link: The same example exists on page 52 of the 817-2271 PDF posted on the opensolaris.../zfs/documentation page. Cindy Malachi de Ælfweald wrote: FYI That page is not publicly viewable. It was the 817-2271 pdf I

Re: [zfs-discuss] Add mirror to an existing Zpool

2007-04-10 Thread Cindy . Swearingen
Hi Martin, Yes, you can do this with the zpool attach command. See the output below. An example in the ZFS Admin Guide is here: Cindy # zpool create mpool c1t20d0 # zpool status mpool pool: mpool state: ONLINE scrub: none
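The attach step Cindy describes might continue like this (an illustrative sketch; the pool name follows her example, the second device name is hypothetical):

```
# zpool create mpool c1t20d0              # single-disk pool
# zpool attach mpool c1t20d0 c1t21d0      # attach a second disk -> two-way mirror
# zpool status mpool                      # resilver runs, then both disks show ONLINE
```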

Re: [zfs-discuss] # devices in raidz.

2007-04-11 Thread Cindy . Swearingen
will be implemented? Cindy Swearingen wrote: Hi Mike, Yes, outside of the hot-spares feature, you can detach, offline, and replace existing devices in a pool, but you can't remove devices, yet. This feature work is being tracked under this RFE:

Re: [zfs-discuss] zfs send/receive question

2007-04-16 Thread Cindy . Swearingen
Chris, Looks like you're not running a Solaris release that contains the zfs receive -F option. This option is in the current Solaris community release, build 48. Otherwise, you'll have to wait until an upcoming Solaris 10 release.

Re: [zfs-discuss] zfs send/receive question

2007-04-17 Thread Cindy . Swearingen
Chris, This option will be available in the upcoming Solaris 10 release, a few months from now. We'll send out a listing of the new ZFS features around that time. Cindy Krzys wrote: Ah, ok, not a problem, do you know Cindy when next Solaris Update is going to be released by SUN? Yes, I am

Re: [zfs-discuss] Permanently removing vdevs from a pool

2007-04-19 Thread Cindy . Swearingen
Mario, Until zpool remove is available, you don't have any options to remove a disk from a non-redundant pool. Currently, you can: - replace or detach a disk in a ZFS mirrored storage pool - replace a disk in a ZFS RAID-Z storage pool Please see the ZFS best practices site for more info about

Re: [zfs-discuss] Re: Re: Re: Re: How much do we really want zpool remove?

2007-04-30 Thread Cindy . Swearingen
Hi Rainer, This is a long thread and I wasn't commenting on your previous replies regarding mirror manipulation. If I was, I would have done so directly. :-) I saw the export-a-pool-to-remove-a-disk-solution described in a Sun doc. My point and (I agree with your points below) is that making a

Re: [zfs-discuss] Motley group of discs?

2007-05-07 Thread Cindy . Swearingen
Hi Lee, You can decide whether you want to use ZFS for a root file system now. You can find this info here: Consider this setup for your other disks, which are: 250, 200 and 160 GB drives, and an external USB 2.0 600 GB drive 250GB = disk1 200GB

Re: [zfs-discuss] Motley group of discs?

2007-05-07 Thread Cindy . Swearingen
Lee, Yes, the hot spare (disk4) should kick in if another disk in the pool fails and yes, the data is moved to disk4. You are correct: 160 GB (the smallest disk) * 3 + raidz parity info Here's the size of a raidz pool composed of three 136-GB disks: # zpool list NAMESIZE

Re: [zfs-discuss] how do I revert back from ZFS partitioned disk to original partitions

2007-05-24 Thread Cindy . Swearingen
Arif, You need to boot from {net | DVD} in single-user mode, like this: boot net -s or boot cdrom -s Then, when you get to a shell prompt, relabel the disk like this: # format -e select disk format label [0] SMI Label [1] EFI Label Specify Label type[0]: 0 Then, you should be able to
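The full recovery sequence sketched in that reply might look like this as a transcript (illustrative; the disk number is hypothetical):

```
ok boot cdrom -s                      # or: boot net -s
# format -e
Specify disk (enter its number): 1
selecting c0t1d0
format> label
[0] SMI Label
[1] EFI Label
Specify Label type[0]: 0              # 0 = SMI (VTOC) label
format> quit
```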

Re: [zfs-discuss] Is this storage model correct?

2007-06-19 Thread Cindy . Swearingen
Huitzi, Yes, you are correct. You can add more raidz devices in the future as your excellent graphic suggests. A similar zpool add example is described here: This new section describes what operations are supported for both raidz

Re: [zfs-discuss] Notes for Cindys and Goo

2007-06-20 Thread Cindy . Swearingen
Huitzi, Awesome graphics! Do we have your permission to use them? :-) I might need to recreate them in another format. Someone was kind enough to point out the error in this example yesterday and I fixed it in the opensolaris.../zfs version, found here:

Re: [zfs-discuss] legacy shared ZFS vs. ZFS NFS shares

2007-06-20 Thread Cindy . Swearingen
Hi Ed, This BP was added as a lesson learned for not mixing these models because it's too confusing to administer and no other reason. I'll update the BP to be clear about this. I'm sure someone else will answer your NFSv3 question. (I'd like to know too). Cindy Ed Ravin wrote: Looking over

Re: [zfs-discuss] New german white paper on ZFS

2007-06-27 Thread Cindy . Swearingen
Jens, Someone already added it to the ZFS links page, here: I just added a link to the links page from the zfs docs page so it is easier to find. Thanks, Cindy Jens Elkner wrote: On Tue, Jun 19, 2007 at 05:19:05PM +0200, Constantin Gonzalez

Re: [zfs-discuss] ZFS related articles in Japanese and Simplified Chinese

2007-07-10 Thread Cindy . Swearingen
Hi Young, I will link these versions on the ZFS community docs page. Thanks for the reminder. :-) Cindy Young Joo Pintaske wrote: Hi ZFS Community, Some time ago I posted a message that ZFS Administration Guide was translated (Russian and Brazilian Portuguese). There are several other

Re: [zfs-discuss] Changing a root vdev's config?

2007-07-26 Thread Cindy . Swearingen
Sean, This scenario is covered in the ZFS Admin Guide, found here: I provided an example below. Cindy # zpool create tank02 c0t0d0 # zpool status tank02 pool: tank02 state: ONLINE scrub: none requested config:

Re: [zfs-discuss] Privileges

2007-08-20 Thread Cindy . Swearingen
Marko, The ZFS Admin Guide has been updated to include the delegated administration feature. See Chapter 8, here: Cindy Matthew Ahrens wrote: Marko Milisavljevic wrote: Hmm.. my b69 installation understands zfs allow, but man zfs

Re: [zfs-discuss] Is ZFS efficient for large collections of small files?

2007-08-21 Thread Cindy . Swearingen
The OpenSolaris ZFS FAQ is here: Other resources are listed here: Cindy Brandorr wrote: P.S. - Is there a ZFS FAQ somewhere? ___ zfs-discuss

Re: [zfs-discuss] change uid/gid below 100

2007-09-17 Thread Cindy . Swearingen
Paul, Scroll down a bit in this section to the default passwd/group tables: Cindy Paul Kraus wrote: On 9/17/07, Darren J Moffat [EMAIL PROTECTED] wrote: Why not use the already assigned webservd/webserved 80/80 uid/gid pair ? Note

Re: [zfs-discuss] Zfs log device (zil) ever coming to Sol10?

2007-09-18 Thread Cindy . Swearingen
The log device feature integrated into snv_68. You can read about it here: And starting on page 18 of the ZFS Admin Guide, here: Albert Chin wrote: On Tue, Sep 18, 2007 at 12:59:02PM

Re: [zfs-discuss] zoneadm clone doesn't support ZFS snapshots in s10u4?

2007-09-21 Thread Cindy . Swearingen
Mike, Grant, I reported the zoneadm.1m man page problem to the man page group. I also added some stronger wording to the ZFS Admin Guide and the ZFS FAQ about not using ZFS for zone root paths for the Solaris 10 release and that upgrading or patching is not supported for either Solaris 10 or

Re: [zfs-discuss] ZFS drive replacement

2007-10-29 Thread Cindy . Swearingen
Hi Stephen, No, you can't replace one device with a raidz device, but you can create a mirror from one device by using zpool attach. See the output below. The other choice is to add to an existing raidz configuration. See the output below. I thought we had an RFE to expand an existing raidz

Re: [zfs-discuss] zpool question

2007-10-30 Thread Cindy Swearingen
Chris, I agree that your best bet is to replace the 128-MB device with another device, fix the emcpower2a manually, and then replace it back. I don't know these drives at all, so I'm unclear about the fix it manually step. Because your pool isn't redundant, you can't use zpool offline or detach.

Re: [zfs-discuss] What is the correct way to replace a good disk?

2007-11-02 Thread Cindy . Swearingen
Chris, You need to use the zpool replace command. I recently enhanced this section of the admin guide with more explicit instructions on page 68, here: If these are hot-swappable disks, for example, c0t1d0, then use this syntax: #

Re: [zfs-discuss] I screwed up my zpool

2007-12-03 Thread Cindy . Swearingen
Jonathan, Thanks for providing the zpool history output. :-) You probably missed the message after this command: # zpool add tank c4t0d0 invalid vdev specification use '-f' to override the following errors: mismatched replication level: pool uses raidz and new vdev is disk I provided some

Re: [zfs-discuss] Error in zpool man page?

2007-12-07 Thread Cindy . Swearingen
Jonathan, I think I remember seeing this error in an older Solaris release. The current zpool.1m man page doesn't have this error unless I'm missing it: In a current Solaris release, this command fails as expected: # zpool create mirror

Re: [zfs-discuss] reset a disk?

2007-12-12 Thread Cindy . Swearingen
Hi Doug, ZFS uses an EFI label so you need to use format -e to set it back to a VTOC label, like this: # format -e Specify disk (enter its number)[4]: 3 selecting c0t4d0 [disk formatted] format label [0] SMI Label [1] EFI Label Specify Label type[1]: 0 Warning: This disk has an EFI label.

Re: [zfs-discuss] mirror a slice

2007-12-13 Thread Cindy . Swearingen
Shawn, Using slices for ZFS pools is generally not recommended so I think we minimized any command examples with slices: # zpool create tank mirror c1t0d0s0 c1t1d0s0 Keep in mind that using the slices from the same disk for both UFS and ZFS makes administration more complex. Please see the ZFS

Re: [zfs-discuss] Break a ZFS mirror and concatenate the disks

2008-01-10 Thread Cindy . Swearingen
Hey Kory, I think you must mean can you detach one of the 73GB disks from moodle and then add it to another pool of 146GB and you want to save the data from the 73GB disk? You can't do this and save the data. By using zpool detach, you are removing any knowledge of ZFS from that disk. If you

Re: [zfs-discuss] Break a ZFS mirror and concatenate the disks

2008-01-10 Thread Cindy . Swearingen
Hi Kory, Yes, I get it now. You want to detach one of the disks and then re-add the same disk, but lose the redundancy of the mirror. Just as long as you realize you're losing the redundancy. I'm wondering if zpool add will complain. I don't have a system to try this at the moment. Cindy Kory

Re: [zfs-discuss] Replacing Devices in a Storage Pool

2008-01-24 Thread Cindy . Swearingen
Hi Kava, Your questions are hard for me to answer without seeing your syntax. Also, you don't need to futz with slices if you are using whole disks. I added some add'l information to the zpool replace section on page 74, here: Note

Re: [zfs-discuss] Replacing Devices in a Storage Pool

2008-01-24 Thread Cindy . Swearingen
Kava, Because of a recent bug, you need to export and import the pool to see the expanded space after you use zpool replace. Also, you don't need to detach first. The process would look like this: # zpool create test mirror 8gb-1 8gb-2 # zpool replace test 8gb-1 12gb-1 # zpool replace test
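The whole grow-a-mirror sequence, including the export/import workaround for the expansion bug, might look like this (a sketch; the 8gb-*/12gb-* names stand in for real device names):

```
# zpool create test mirror 8gb-1 8gb-2    # original mirror of two 8-GB disks
# zpool replace test 8gb-1 12gb-1         # replace one side with a 12-GB disk
# zpool replace test 8gb-2 12gb-2         # then the other side
# zpool export test                       # workaround: export and re-import
# zpool import test                       # the pool to see the expanded space
# zpool list test
```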

Re: [zfs-discuss] Drives of different size

2008-01-24 Thread Cindy . Swearingen
Sure you can, but it would be something like this: 300GB-1 = c0t0d0 300GB-2 = c0t1d0 500GB = c0t2d0s0 (300 GB slice is created on s0) # zpool create test raidz c0t0d0 c0t1d0 c0t2d0s0 However, if you are going to use the add'l 200 GB on the 500GB drive for something else, administration is

Re: [zfs-discuss] nfs exporting nested zfs

2008-02-07 Thread Cindy . Swearingen
Because of the mirror mount feature, which integrated into Solaris Express, build 77. You can read about it here on page 20 of the ZFS Admin Guide: Cindy Andrew Tefft wrote: Let's say I have a zfs called pool/backups and it contains

Re: [zfs-discuss] raidz in zfs questions

2008-03-05 Thread Cindy . Swearingen
Chris, You can replace the disks one at a time with larger disks. No problem. You can also add another raidz vdev, but you can't add disks to an existing raidz vdev. See the sample output below. This might not solve all your problems, but should give you some ideas... Cindy # zpool create
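A sketch of the two supported growth paths for raidz described above (hypothetical device names):

```
# zpool create tank raidz c1t0d0 c1t1d0 c1t2d0   # existing raidz vdev
# zpool add tank raidz c2t0d0 c2t1d0 c2t2d0      # OK: add a second raidz vdev
# zpool add tank c3t0d0                          # fails without -f: mismatched
                                                 # replication level; you can't add
                                                 # a disk to an existing raidz vdev
# zpool replace tank c1t0d0 c4t0d0               # OK: replace disks one at a time
                                                 # with larger ones
```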

Re: [zfs-discuss] raidz in zfs questions

2008-03-05 Thread Cindy . Swearingen
Chris, You would need to replace all the disks to see the expanded space. Otherwise, space on the 1-2 larger disks would be wasted. If you replace all the disks with larger disks, then yes, the disk space in the raidz config would be expanded. A ZFS mirrored config would be more flexible but it

Re: [zfs-discuss] Replacing a Lun (Raid 0) (Santricity)

2008-03-14 Thread Cindy . Swearingen
David, Try detaching the spare, like this: # zpool detach pool-name c10t600A0B80001139967CE145E80D4Dd0 Cindy David Smith wrote: Addtional information: It looks like perhaps the original drive is in use, and the hot spare is assigned but not in use see below about zpool iostat:

Re: [zfs-discuss] Snapshots silently eating user quota

2008-03-21 Thread Cindy . Swearingen
The file-system-only quotas and reservations feature description starts here: cs Eric Schrock wrote: On Thu, Mar 20, 2008 at 06:41:42PM -0500, [EMAIL PROTECTED] wrote: There was an change request put in to disable snaps affecting quota

Re: [zfs-discuss] Can not add ZFS LOG devices

2008-04-04 Thread Cindy . Swearingen
Hi Mertol, Log devices aren't supported in the Solaris 10 release yet. You would have to run a Solaris Express version to configure log devices, such as SXDE 9/07 or SXDE 1/08, described here: cs Mertol Ozyoney wrote: Hi All ; I

Re: [zfs-discuss] zfs concatenation to mirror

2008-04-11 Thread Cindy . Swearingen
Jeff, No easy way exists to convert this configuration to a mirrored configuration currently. If you had two more disks, you could use zpool attach to create a two-way, two-disk mirror. See the output below. A more complicated solution is to create two files that are the size of your existing

Re: [zfs-discuss] Periodic ZFS maintenance?

2008-04-21 Thread Cindy . Swearingen
Hi Sam, You might review the ZFS best practice site for maintenance recommendations, here: Cindy Sam wrote: I have a 10x500 disc file server with ZFS+, do I need to perform any sort of periodic maintenance to the

Re: [zfs-discuss] ? ZFS boot in nv88 on SPARC ?

2008-04-30 Thread Cindy . Swearingen
Hi Ulrich, The updated lucreate.1m man page integrated accidentally into build 88. If you review the build 88 instructions, here: You'll see that we're recommending patience until the install/upgrade support integrates. If you are running the

Re: [zfs-discuss] cp -r hanged copying a directory

2008-05-02 Thread Cindy . Swearingen
Simon, I think you should review the checksum error reports from the fmdump output (dated 4/30) that you supplied previously. You can get more details by using fmdump -ev. Use zpool status -v to identify checksum errors as well. Cindy Simon Breden wrote: Thanks Max, I have not been able

Re: [zfs-discuss] cp -r hanged copying a directory

2008-05-02 Thread Cindy . Swearingen
Okay, thanks. I wanted to rule out that the checksum errors reported on 4/30 were persistent enough to be picked up by zpool status. ZFS is generally quick to identify device problems. Since fmdump doesn't show any add'l recent errors either, then I think you can rule out hardware problems other

Re: [zfs-discuss] ZFS boot mirror

2008-05-21 Thread Cindy . Swearingen
Hi Tom, You need to use the zpool attach command, like this: # zpool attach pool-name disk1 disk2 Cindy Tom Buskey wrote: I've always done a disksuite mirror of the boot disk. It's been easry to do after the install in Solaris. WIth Linux I had do do it during the install. OpenSolaris
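On SPARC, mirroring a ZFS root pool also needs a boot block on the new disk; a sketch (pool and device names hypothetical, installboot -F zfs syntax per the ZFS Admin Guide):

```
# zpool attach rpool c0t0d0s0 c0t1d0s0    # attach a second disk to the root pool
# zpool status rpool                      # wait for the resilver to complete
# installboot -F zfs \
    /usr/platform/`uname -i`/lib/fs/zfs/bootblk /dev/rdsk/c0t1d0s0
```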

Re: [zfs-discuss] What is a vdev?

2008-05-30 Thread Cindy . Swearingen
Hi Orvar, This section describes the operations you can do with a mirrored storage pool: This section describes the operations you can do with a raidz storage pool: Go with mirrored

Re: [zfs-discuss] Get your SXCE on ZFS here!

2008-06-04 Thread Cindy Swearingen
Tim, Start at the zfs boot page, here: Review the information and follow the links to the docs. Cindy - Original Message - From: Tim [EMAIL PROTECTED] Date: Wednesday, June 4, 2008 4:29 pm Subject: Re: [zfs-discuss] Get your SXCE on

Re: [zfs-discuss] Get your SXCE on ZFS here!

2008-06-05 Thread Cindy . Swearingen
-Original Message- From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Cindy Swearingen Sent: Wednesday, June 04, 2008 6:50 PM To: Tim Cc: Subject: Re: [zfs-discuss] Get your SXCE on ZFS here! Tim, Start at the zfs boot page, here: http

Re: [zfs-discuss] Get your SXCE on ZFS here!

2008-06-05 Thread Cindy . Swearingen
Uwe, Please see pages 55-80 of the ZFS Admin Guide, here: Basically, the process is to upgrade from nv81 to nv90 by using the standard upgrade feature. Then, use lucreate to migrate your UFS root file system to a ZFS file system, like this: 1.

Re: [zfs-discuss] ZFS root finally here in SNV90

2008-06-05 Thread Cindy . Swearingen
Mike, As we discussed, you can't currently break out other datasets besides /var. I'll add this issue to the FAQ. Thanks, Cindy Ellis, Mike wrote: In addition to the standard containing the carnage arguments used to justify splitting /var/tmp, /var/mail, /var/adm (process accounting etc),

Re: [zfs-discuss] ZFS root boot failure?

2008-06-12 Thread Cindy . Swearingen
Vincent, I think you are running into some existing bugs, particularly this one: Please review the list of known issues here: Also check out the issues described on page 77 in this section:

Re: [zfs-discuss] ZFS root boot failure?

2008-06-13 Thread Cindy Swearingen
You want to install the zfs boot block, not the ufs bootblock. Check the syntax in the ZFS Admin Guide that is available from this location: Cindy - Original Message - From: Vincent Fox [EMAIL PROTECTED] Date: Friday, June 13, 2008 3:49 pm

Re: [zfs-discuss] SMC Webconsole 3.1 and ZFS Administration 1.0 - stacktraces in snv_b89

2008-06-17 Thread Cindy . Swearingen
Hi Dan, I filed a bug 6715550 to fix this issue. Thanks for reporting it-- Cindy Dan Reiland wrote: Yeah. The command line works fine. Thought it to be a bit curious that there was an issue with the HTTP interface. It's low priority I guess because it doesn't impact the functionality really.

Re: [zfs-discuss] mirroring zfs slice

2008-06-17 Thread Cindy . Swearingen
Sure. This operation can be done with whole disks too. The disk (new_device) should be the same size or larger than the existing disk (device). You can review some examples here: If the disks are of unequal size, then some disk space will

Re: [zfs-discuss] [SOLVED] Confusion with snapshot send-receive

2008-06-23 Thread Cindy . Swearingen
I modified the ZFS Admin Guide to show a simple zfs send | zfs recv example, then a more complex example using ssh to another system. Thanks for the feedback... Cindy Andrius wrote: James C. McPherson wrote: Andrius wrote: Boyd Adamson wrote: Andrius [EMAIL PROTECTED] writes: Hi,

Re: [zfs-discuss] Proper wayto do disk replacement in an A1000 storage array and raidz2.

2008-06-30 Thread Cindy . Swearingen
Hi-- You can replace the failed disk and then detach the spare using the general scenario described below. Some steps might be optional but I'm pretty cautious about disk replacement, even when it's this easy. Cindy 1. Physically replace the failed disk. 2. Let ZFS know that you replaced the
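A command-level sketch of that cautious replace-then-detach sequence (device names hypothetical):

```
# (physically replace the failed disk, e.g. c1t3d0)
# zpool replace tank c1t3d0        # tell ZFS the disk was replaced in place
# zpool status tank                # wait for the resilver to finish
# zpool detach tank c1t9d0         # return the hot spare to the spare pool
```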

Re: [zfs-discuss] Proper wayto do disk replacement in an A1000 storage array and raidz2.

2008-07-01 Thread Cindy . Swearingen
Hi-- I'm not quite sure about the exact sequence of events here, but it sounds like you had two spares and replaced the failed disk with one of the spares, which you can do manually with the zpool replace command. The remaining spare should drop back into the spare pool if you detached it. Check

Re: [zfs-discuss] HELP changing concat to a mirror

2008-07-02 Thread Cindy . Swearingen
Mark, If you don't want to backup the data, destroy the pool, and recreate the pool as a mirrored configuration, then another option is to attach two more disks to create 2 mirrors of 2 disks. See the output below. Cindy # zpool create zp01 c1t3d0 c1t4d0 # zpool status pool: zp01 state:

Re: [zfs-discuss] Using zfs boot with MPxIO on T2000

2008-07-09 Thread Cindy . Swearingen
ZFS uses EFI when a storage pool is created with whole disks. ZFS uses the old-style VTOC label when a storage pool is created with slices. To be able to boot from a ZFS root pool, the storage pool must be created with slices. This is a new requirement in ZFS land, and is described in the doc

Re: [zfs-discuss] zfs sparc boot Bad magic number in disk label

2008-07-17 Thread Cindy . Swearingen
Hi Joe, Is it possible that your c0t1d0s0 disk has an existing EFI label instead of a VTOC label? (You can tell by using format--disk--partition and seeing if the cylinder info is displayed. If there's no cylinder info, then it's an EFI label.) Relabel with a VTOC label, like this: # format -e select disk

Re: [zfs-discuss] Formatting Problem of ZFS Adm Guide (pdf)

2008-07-21 Thread Cindy . Swearingen
For the record, the source of the ZFS Admin Guide is created with a SGML editor that is not Framemaker. I agree that the evince PDF display problems are with the font changes only. Cindy Akhilesh Mritunjai wrote: Welcome to font hell :-(. For many years, Sun documentation was written in the

Re: [zfs-discuss] Cannot attach mirror to SPARC zfs root pool

2008-07-23 Thread Cindy . Swearingen
Rainer, Sorry for your trouble. I'm updating the installboot example in the ZFS Admin Guide with the -F zfs syntax now. We'll fix the installboot man page as well. Mark, I don't have an x86 system to test right now, can you send me the correct installgrub syntax for booting a ZFS file system?

Re: [zfs-discuss] ZFS boot - upgrade from UFS swap slices

2008-07-24 Thread Cindy . Swearingen
Hi Alan, ZFS doesn't swap to a slice in build 92. In this build, a ZFS root environment requires separate ZFS volumes for swap and dump devices. The ZFS boot/install project and information trail starts here: Cindy Alan Burlison wrote: I'm

Re: [zfs-discuss] ZFS boot - upgrade from UFS swap slices

2008-07-24 Thread Cindy . Swearingen
Alan, Just make sure you use dumpadm to point to a valid dump device and this setup should work fine. Please let us know if it doesn't. The ZFS strategy behind automatically creating separate swap and dump devices includes the following: o Eliminates the need to create separate slices o Enables
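With a ZFS root, the separately created swap and dump devices are ZFS volumes, and dumpadm points at the dump volume; a sketch with hypothetical sizes:

```
# zfs create -V 2G rpool/swap
# swap -a /dev/zvol/dsk/rpool/swap        # add the volume as a swap device
# zfs create -V 2G rpool/dump
# dumpadm -d /dev/zvol/dsk/rpool/dump     # point dumpadm at a valid dump device
```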

Re: [zfs-discuss] Errors in ZFS/NFSv4 ACL Documentation

2008-07-29 Thread Cindy . Swearingen
Mark, Thanks for your detailed review comments. I will check where the latest man pages are online and get back to you. In the meantime, I can file the bugs to get these issues fixed on your behalf. Thanks again, Cindy Marc Bevand wrote: I noticed some errors in ls(1), acl(5) and the ZFS

Re: [zfs-discuss] boot cdrom -w doesn't work

2008-07-31 Thread Cindy . Swearingen
Hi Ron, Try again by using this syntax: ok boot cdrom - text Make sure you have reviewed the ZFS boot/install chapter in the ZFS admin guide, here: Cindy Ron Halstead wrote: I have a Sun Blade 2500 running nv_88. I want to install nv_94 with a

Re: [zfs-discuss] Errors in ZFS/NFSv4 ACL Documentation

2008-07-31 Thread Cindy . Swearingen
Mark, I filed two bugs for these issues but they are not visible in the Opensolaris bug database yet: 6731639 More NFSv4 ACL changes for ls.1 (Nevada) 6731650 More NFSv4 ACL changes for acl.5 (Nevada) The current ls.1 man page can be displayed on, here:

Re: [zfs-discuss] Checksum error: which of my files have failed scrubbing?

2008-08-05 Thread Cindy . Swearingen
Soren, At this point, I'd like to know what fmdump -eV says about your disk so you can determine whether it should be replaced or not. Cindy soren wrote: soren wrote: ZFS has detected that my root filesystem has a small number of errors. Is there a way to tell which specific files have been

Re: [zfs-discuss] more ZFS recovery

2008-08-07 Thread Cindy . Swearingen
Hi Richard, Yes, sure. We can add that scenario. What's been on my todo list is a ZFS troubleshooting wiki. I've been collecting issues. Let's talk soon. Cindy Richard Elling wrote: Tom Bird wrote: Richard Elling wrote: I see no evidence that the data is or is not correct. What we

Re: [zfs-discuss] Setup idea

2008-08-19 Thread Cindy . Swearingen
Hi Ivan, If you are asking how you can make a ZFS root file system on a Solaris 10 system, then you'll need to wait a bit until that release is available. This feature is currently provided in the SXCE build 90 release, which provides similar support. You can read more about this support

Re: [zfs-discuss] Explaining ZFS message in FMA

2008-09-04 Thread Cindy . Swearingen
Alain, I think you want to use fmdump -eV to display the extended device information. See the output below. Cindy class = ereport.fs.zfs.checksum ena = 0x3242b9cdeac00401 detector = (embedded nvlist) nvlist version: 0 version = 0x0

Re: [zfs-discuss] ZFS poor performance on Areca 1231ML

2008-09-29 Thread Cindy . Swearingen
Ross, No need to apologize... Many of us work hard to make sure good ZFS information is available so a big thanks for bringing this wiki page to our attention. Playing with UFS on ZFS is one thing but even inexperienced admins need to know this kind of configuration will provide poor

Re: [zfs-discuss] Root pool mirror wasn't automatically configured during install

2008-10-03 Thread Cindy . Swearingen
Hi Eric, Are you saying that you selected two-disks for a mirrored root pool during the initial install and because you changed the default rpool name, the pool was created with just one disk? I netinstalled build 96, selected two disks for the root pool mirror, backspaced over rpool with mypool

Re: [zfs-discuss] Disabling auto-snapshot by default.

2008-10-24 Thread Cindy . Swearingen
Chris, Tim Foster sent out this syntax previously: zfs set com.sun:auto-snapshot=false dataset Unless I'm misunderstanding your questions, try this for the dataset on the removable media device. Let me know if you have any issues. I'm tracking the auto snapshot experience... Cindy Chris
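The property Tim Foster mentions can be set and then verified per dataset (the dataset name here is hypothetical):

```
# zfs set com.sun:auto-snapshot=false tank/removable
# zfs get com.sun:auto-snapshot tank/removable       # should report: false
```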

Re: [zfs-discuss] Boot from mirror

2008-10-27 Thread Cindy . Swearingen
Dick, Well, not at the same time. :-) If you are running a recent SXCE release and you have a mirrored ZFS root pool with two disks, for example, you can boot off either disk, as described in the ZFS Admin Guide, pages 81-85, here: If you create a

Re: [zfs-discuss] zfs boot / root in Nevada build 101

2008-10-29 Thread Cindy . Swearingen
Hi Peter, You need to select the text-mode install option to select a ZFS root file system. Other ZFS root installation tips are described here: I'll be attending Richard Elling's ZFS workshop at LISA08. Hope to see you. :-) Cindy

Re: [zfs-discuss] zfs boot / root in Nevada build 101

2008-10-29 Thread Cindy . Swearingen
Good point and we've tried to document this issue all over the place and will continue to publicize this fact. With the new ZFS boot and install features, it is a good idea to read the docs first. Tell your friends. I will send out a set of s10 10/08 doc pointers as soon as they are available.

Re: [zfs-discuss] Backup/Restore root pool : SPARC and x86/x64

2008-11-12 Thread Cindy . Swearingen
Hi Marlanne, Excellent question and thank you for asking... We have a set of instructions for creating root pool snapshots and root pool recovery, here: The zfs send and recv options used in this
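The general shape of those root pool backup instructions is a recursive snapshot streamed to a pool on another system (a sketch; host and pool names are hypothetical):

```
# zfs snapshot -r rpool@backup                  # recursive snapshot of the root pool
# zfs send -Rv rpool@backup | ssh remotehost \
    zfs recv -Fdu backuppool                    # store the replication stream remotely
```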

Re: [zfs-discuss] Easiest way to replace a boot disk with a larger one?

2008-12-11 Thread Cindy . Swearingen
Hi Alex, Not exactly. Just hadn't thought of that specific example yet, but it's a good one so I'll add it. In your case, ZFS might not see the expanded capacity of the larger disk automatically due to a recent bug. For non-root pools, the workaround to see the expanded space is to export and
