Re: [zfs-discuss] zfs mount -a kernel panic

2010-05-19 Thread Mark J Musante
Do you have a coredump? Or a stack trace of the panic? On Wed, 19 May 2010, John Andrunas wrote: Running ZFS on a Nexenta box, I had a mirror get broken and apparently the metadata is corrupt now. If I try and mount vol2 it works but if I try and mount -a or mount vol2/vm2 is instantly kerne

Re: [zfs-discuss] MPT issues strikes back

2010-04-27 Thread Mark Ogden
rror message. Since removing that drive, we have not encountered that issue. You might want to look at http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6894775 too. -Mark > Machine specs : > > Dell R710, 16 GB memory, 2 Intel Qua

Re: [zfs-discuss] Re-attaching zpools after machine termination [amazon ebs & ec2]

2010-04-23 Thread Mark Musante
On 23 Apr, 2010, at 8.38, Phillip Oldham wrote: > The instances are "ephemeral"; once terminated they cease to exist, as do all > their settings. Rebooting an image keeps any EBS volumes attached, but this > isn't the case I'm dealing with - its when the instance terminates > unexpectedly. For

Re: [zfs-discuss] Re-attaching zpools after machine termination [amazon ebs & ec2]

2010-04-23 Thread Mark Musante
On 23 Apr, 2010, at 7.31, Phillip Oldham wrote: > I'm not actually issuing any when starting up the new instance. None are > needed; the instance is booted from an image which has the zpool > configuration stored within, so simply starts and sees that the devices > aren't available, which beco

Re: [zfs-discuss] Re-attaching zpools after machine termination [amazon ebs & ec2]

2010-04-23 Thread Mark Musante
On 23 Apr, 2010, at 7.06, Phillip Oldham wrote: > > I've created an OpenSolaris 2009.06 x86_64 image with the zpool structure > already defined. Starting an instance from this image, without attaching the > EBS volume, shows the pool structure exists and that the pool state is > "UNAVAIL" (as

Re: [zfs-discuss] Making ZFS better: zfshistory

2010-04-21 Thread Mark Shellenbaum
On 04/21/10 08:45 AM, Edward Ned Harvey wrote: From: Mark Shellenbaum [mailto:mark.shellenb...@oracle.com] You can create/destroy/rename snapshots via mkdir, rmdir, mv inside the .zfs/snapshot directory, however, it will only work if you're running the command locally. It will not

Re: [zfs-discuss] Making ZFS better: zfshistory

2010-04-21 Thread Mark Shellenbaum
On 4/21/10 6:49 AM, Edward Ned Harvey wrote: From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- boun...@opensolaris.org] On Behalf Of Edward Ned Harvey From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- boun...@opensolaris.org] On Behalf Of Nicolas Williams And you can

Re: [zfs-discuss] zpool lists 2 controllers the same, how do I replace one?

2010-04-19 Thread Mark J Musante
On Sun, 18 Apr 2010, Michelle Bhaal wrote: zpool lists my pool as having 2 disks which have identical names. One is offline, the other is online. How do I tell zpool to replace the offline one? If you're lucky, the device will be marked as not being present, and then you can use the GUID.

Re: [zfs-discuss] Sun Flash Accelerator F20 numbers

2010-04-07 Thread Mark J Musante
On Wed, 7 Apr 2010, Neil Perrin wrote: There have previously been suggestions to read slogs periodically. I don't know if there's a CR raised for this though. Roch wrote up CR 6938883 "Need to exercise read from slog dynamically" Regards, markm ___

Re: [zfs-discuss] zpool split problem?

2010-04-01 Thread Mark J Musante
> It would be nice for Oracle/Sun to produce a separate > script which reset system/devices back to a install > like beginning so if you move a OS disk with current > password file and software from one system to > another, and have it rebuild the device tree on the > new system. You mean /usr/sbi

Re: [zfs-discuss] zpool split problem?

2010-04-01 Thread Mark J Musante
On Wed, 31 Mar 2010, Damon Atkins wrote: Why do we still need "/etc/zfs/zpool.cache" file??? The cache file contains a list of pools to import, not a list of pools that exist. If you do a "zpool export foo" and then reboot, we don't want foo to be imported after boot completes. Unfortunat
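[Editor's note: a minimal sketch of the behaviour described above, using the pool name from the message; this reflects my understanding of how the cache file works rather than anything stated in the thread.]
  # zpool export foo      (foo is removed from /etc/zfs/zpool.cache and stays exported across reboots)
  # zpool import foo      (devices are scanned and foo is written back into the cache file)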

Re: [zfs-discuss] Cannot replace a replacing device

2010-03-30 Thread Mark J Musante
On Mon, 29 Mar 2010, Jim wrote: Thanks for the suggestion, but I have tried detaching and it refuses, reporting no valid replicas. Capture below. Could you run: zdb -ddd tank | awk '/^Dirty/ {output=1} /^Dataset/ {output=0} {if (output) {print}}' This will print the dirty time log of the pool

Re: [zfs-discuss] zpool split problem?

2010-03-30 Thread Mark J Musante
OK, I see what the problem is: the /etc/zfs/zpool.cache file. When the pool was split, the zpool.cache file was also split - and the split happens prior to the config file being updated. So, after booting off the split side of the mirror, zfs attempts to mount rpool based on the information in

Re: [zfs-discuss] Cannot replace a replacing device

2010-03-29 Thread Mark J Musante
On Mon, 29 Mar 2010, Victor Latushkin wrote: On Mar 29, 2010, at 1:57 AM, Jim wrote: Yes - but it does nothing. The drive remains FAULTED. Try to detach one of the failed devices: zpool detach tank 4407623704004485413 As Victor says, the detach should work. This is a known issue and I'

Re: [zfs-discuss] zpool split problem?

2010-03-29 Thread Mark J Musante
On Sat, 27 Mar 2010, Frank Middleton wrote: Started with c0t1d0s0 running b132 (root pool is called rpool) Attached c0t0d0s0 and waited for it to resilver Rebooted from c0t0d0s0 zpool split rpool spool Rebooted from c0t0d0s0, both rpool and spool were mounted Rebooted from c0t1d0s0, only rpool

Re: [zfs-discuss] pool causes kernel panic, recursive mutex enter, 134

2010-03-15 Thread Mark
some screenshots that may help: pool: tank id: 5649976080828524375 state: ONLINE action: The pool can be imported using its name or numeric identifier. config: data ONLINE mirror-0 ONLINE c27t2d0 ONLINE c27t0d0 ONLINE m

[zfs-discuss] pool causes kernel panic, recursive mutex enter, 134

2010-03-15 Thread Mark
where the pools never got an error, same panic. ANY ideas of volume rescue are welcome - if i missed some important information, please tell me. regards, mark -- This message posted from opensolaris.org ___ zfs-discuss mailing list zfs-discuss

Re: [zfs-discuss] Convert from rz2 to rz1

2010-03-11 Thread Mark J Musante
On Thu, 11 Mar 2010, Lars-Gunnar Persson wrote: > Is it possible to convert a rz2 array to rz1 array? I have a pool with > to rz2 arrays. I would like to convert them to rz1. Would that be > possible? No, you'll have to create a second pool with raidz1 and do a "send | recv" operation to copy th
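[Editor's note: a rough sketch of the send/recv migration described above; pool names and devices are hypothetical, and it assumes a build with 'zfs send -R'.]
  # zpool create newpool raidz1 c2t0d0 c2t1d0 c2t2d0
  # zfs snapshot -r oldpool@migrate
  # zfs send -R oldpool@migrate | zfs recv -Fd newpool
After verifying the copy, the original raidz2 pool can be destroyed and its disks reused.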

Re: [zfs-discuss] Can you manually trigger spares?

2010-03-09 Thread Mark J Musante
On Mon, 8 Mar 2010, Tim Cook wrote: Is there a way to manually trigger a hot spare to kick in? Yes - just use 'zpool replace fserv 12589257915302950264 c3t6d0'. That's all the fma service does anyway. If you ever get your drive to come back online, the fma service should recognize that an
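[Editor's note: to make the spare swap permanent once the resilver finishes, the usual follow-up (pool name and GUID taken from the message; exact status wording may differ by build) is roughly:]
  # zpool status fserv                          (the spare shows as in use while resilvering)
  # zpool detach fserv 12589257915302950264     (drop the failed disk; the spare becomes a normal pool member)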

Re: [zfs-discuss] Thoughts pls. : Create 3 way rpool mirror and shelve one mirror as a backup

2010-03-08 Thread Mark J Musante
On Sat, 6 Mar 2010, Richard Elling wrote: On Mar 6, 2010, at 5:38 PM, tomwaters wrote: My though is this, I remove the 3rd mirror disk and offsite it as a backup. To do this either: 1. upgrade to a later version where the "zpool split" command is available 2. zfs send/receiv
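[Editor's note: a sketch of option 1 with hypothetical device names; 'zpool split' appeared around build 131.]
  # zpool attach rpool c0t0d0s0 c0t2d0s0    (add the third side and wait for the resilver to finish)
  # zpool split rpool rpool2 c0t2d0s0       (detach that disk as a separate, importable pool)
The split-off rpool2 can then be exported and shelved offsite.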

Re: [zfs-discuss] snv_133 mpt0 freezing machine

2010-03-05 Thread Mark Ogden
sage (shown in the /var/adm/messages) : > > scsi: [ID 107833 kern.warning] WARNING: > /p...@0,0/pci8086,3...@4/pci1028,1...@0 (mpt0): > > Does anyone has any tip in how to start to trace the problem ? > Have a look at Bug ID: 6894775 http://bugs.opensolaris.org/bugdatabase/view_

Re: [zfs-discuss] (FreeBSD) ZFS RAID: Disk fails while replacing another disk

2010-03-04 Thread Mark J Musante
It looks like you're running into a DTL issue. ZFS believes that ad16p2 has some data on it that hasn't been copied off yet, and it's not considering the fact that it's part of a raidz group and ad4p2. There is a CR on this, http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6909724 bu

Re: [zfs-discuss] Moving dataset to another zpool but same mount?

2010-02-25 Thread Mark J Musante
On Wed, 24 Feb 2010, Gregory Gee wrote: files files/home files/mail files/VM I want to move the files/VM to another zpool, but keep the same mount point. What would be the right steps to create the new zpool, move the data and mount in the same spot? Create the new pool, take a snapshot of
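[Editor's note: a minimal sketch of those steps; the new pool name and device are hypothetical, and it assumes the original mountpoint was /files/VM.]
  # zpool create tank2 c4t0d0
  # zfs snapshot files/VM@move
  # zfs send files/VM@move | zfs recv tank2/VM
  # zfs set mountpoint=none files/VM        (or destroy files/VM once the copy is verified)
  # zfs set mountpoint=/files/VM tank2/VM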

Re: [zfs-discuss] Import zpool from FreeBSD in OpenSolaris

2010-02-24 Thread Mark J Musante
On Tue, 23 Feb 2010, patrik wrote: I want to import my zpool's from FreeBSD 8.0 in OpenSolaris 2009.06. secure UNAVAIL insufficient replicas raidz1 UNAVAIL insufficient replicas c8t1d0p0 ONLINE c8t2d0s2 ONLINE c8t3d0s8 UNAVAIL c

Re: [zfs-discuss] Adding a zfs mirror drive to rpool - new drive formats to one cylinder less

2010-02-23 Thread Mark J Musante
On Mon, 22 Feb 2010, tomwaters wrote: I have just installed open solaris 2009.6 on my server using a 250G laptop drive (using the entire drive). So, 2009.06 was based on 111b. There was a fix that went into build 117 that allows you to mirror to smaller disks if the metaslabs in zfs are sti

Re: [zfs-discuss] Removing Cloned Snapshot

2010-02-12 Thread Mark J Musante
On Fri, 12 Feb 2010, Daniel Carosone wrote: You can use zfs promote to change around which dataset owns the base snapshot, and which is the dependant clone with a parent, so you can deletehe other - but if you want both datasets you will need to keep the snapshot they share. Right. The othe
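[Editor's note: a small illustration of the promote step; dataset names are hypothetical.]
  # zfs promote tank/clone        (the clone takes ownership of the shared base snapshot)
  # zfs destroy tank/original     (the former origin, now itself a clone, can be removed)
The shared snapshot still has to be kept for as long as either dataset depends on it.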

Re: [zfs-discuss] zfs import fails even though all disks are online

2010-02-11 Thread Mark J Musante
On Thu, 11 Feb 2010, Cindy Swearingen wrote: On 02/11/10 04:01, Marc Friesacher wrote: fr...@vault:~# zpool import pool: zedpool id: 10232199590840258590 state: ONLINE action: The pool can be imported using its name or numeric identifier. config: zedpool ONLINE

Re: [zfs-discuss] Detach ZFS Mirror

2010-02-11 Thread Mark J Musante
On Thu, 11 Feb 2010, Tony MacDoodle wrote: I have a 2-disk/2-way mirror and was wondering if I can remove 1/2 the mirror and plunk it in another system? Intact? Or as a new disk in the other system? If you want to break the mirror, and create a new pool on the disk, you can just do 'zpool d
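[Editor's note: a sketch of the two options; the device name is hypothetical, and 'zpool split' needs roughly build 131 or later.]
  # zpool detach tank c1t1d0          (breaks the mirror; the removed disk is treated as a blank disk elsewhere)
  # zpool split tank tank2 c1t1d0     (the removed disk becomes pool 'tank2', importable intact on another system)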

Re: [zfs-discuss] acl's and new dirs

2010-02-07 Thread Mark Shellenbaum
Thomas Burgess wrote: I've got a strange issue, If this is covered elsewhere, i apologize in advance for my newbness I've got a couple ZFS filesystems shared cifs and nfs, i've managed to get ACL's working the way i want, provided things are accessed via cifs and nfs. If i create a new dir

Re: [zfs-discuss] Pool disk replacing fails

2010-02-05 Thread Mark J Musante
On Fri, 5 Feb 2010, Alexander M. Stetsenko wrote: NAME STATE READ WRITE CKSUM mypool DEGRADED 0 0 0 mirror DEGRADED 0 0 0 c1t4d0 DEGRADED 0 0 28 too many errors c1t5d0 ONLINE 0 0 0 I

Re: [zfs-discuss] What would happen with a zpool if you 'mirrored' a disk...

2010-02-04 Thread Mark J Musante
On Thu, 4 Feb 2010, Karl Pielorz wrote: The reason for testing this is because of a weird RAID setup I have where if 'ad2' fails, and gets replaced - the RAID controller is going to mirror 'ad1' over to 'ad2' - and cannot be stopped. Does the raid controller not support a JBOD mode? Regards

Re: [zfs-discuss] Workaround for mpt timeouts in snv_127

2010-02-03 Thread Mark Nipper
Looks like I got the textbook response from Western Digital: --- Western Digital technical support only provides jumper configuration and physical installation support for hard drives used in systems running the Linux/Unix operating systems. For setup questions beyond physical installation of yo

Re: [zfs-discuss] Workaround for mpt timeouts in snv_127

2010-02-02 Thread Mark Nipper
> That's good to hear. Which revision are they: 00R6B0 > or 00P8B0? It's marked on the drive top. Interesting. I wonder if this is the issue too with the 01U1B0 2.0TB drives? I have 24 WD2002FYPS-01U1B0 drives under OpenSolaris with an LSI 1068E controller that have weird timeout issues and I

Re: [zfs-discuss] Is LSI SAS3081E-R suitable for a ZFS NAS ?

2010-02-01 Thread Mark Bennett
SI IT mode firmware changing the disk order so the bootable disk is no longer the one booted from with expanders? It boots with only two disks installed(bootable zfs mirror). Add some more and the target "boot disk" moves to one of them. Mark. -- This message posted from

Re: [zfs-discuss] Workaround for mpt timeouts in snv_127

2010-02-01 Thread Mark Bennett
luate alternative suppliers of low cost disks for low end high volume storage. Mark. -- This message posted from opensolaris.org ___ zfs-discuss mailing list zfs-discuss@opensolaris.org http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

Re: [zfs-discuss] Is LSI SAS3081E-R suitable for a ZFS NAS ?

2010-02-01 Thread Mark Bennett
to be the only option available. Mark. -- This message posted from opensolaris.org ___ zfs-discuss mailing list zfs-discuss@opensolaris.org http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

Re: [zfs-discuss] Is LSI SAS3081E-R suitable for a ZFS NAS ?

2010-02-01 Thread Mark Nipper
ceived for target 13. --- With no problems at all, that scrub I don't think takes nearly that long (I think it was less than 12 hours previously) and the percentage is barely moving, although it is increasing. Even still, the exported volumes still appear to be work

Re: [zfs-discuss] ZPOOL somehow got same physical drive assigned twice

2010-02-01 Thread Mark J Musante
On Thu, 28 Jan 2010, TheJay wrote: Attached the zpool history. Did the resilver ever complete on the first c6t1d0? I see a second replace here: 2010-01-27.20:41:15 zpool replace rzpool2 c6t1d0 c6t16d0 2010-01-28.07:57:27 zpool scrub rzpool2 2010-01-28.20:39:42 zpool clear rzpool2 c6t1d0 2

Re: [zfs-discuss] Home ZFS NAS - 2 drives or 3?

2010-01-31 Thread Mark
I thank each of you for all of your insights. I think if this was a production system I'd abandon the idea of 2 drives and get a more capable system, maybe a 2U box with lots of SAS drives so I could use RAIDZ configurations. But in this case, I think all I can do is try some things until I unde

Re: [zfs-discuss] Is LSI SAS3081E-R suitable for a ZFS NAS ?

2010-01-31 Thread Mark Bennett
256*512/4096=32 Mark. -- This message posted from opensolaris.org ___ zfs-discuss mailing list zfs-discuss@opensolaris.org http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

Re: [zfs-discuss] Is LSI SAS3081E-R suitable for a ZFS NAS ?

2010-01-30 Thread Mark Bennett
many scsi cards. Mark. -- This message posted from opensolaris.org ___ zfs-discuss mailing list zfs-discuss@opensolaris.org http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

[zfs-discuss] Home ZFS NAS - 2 drives or 3?

2010-01-30 Thread Mark
I have a 1U server that supports 2 SATA drives in the chassis. I have 2 750 GB SATA drives. When I install opensolaris, I assume it will want to use all or part of one of those drives for the install. That leaves me with the remaining part of disk 1, and all of disk 2. Question is, how do I be

Re: [zfs-discuss] Is LSI SAS3081E-R suitable for a ZFS NAS ?

2010-01-28 Thread Mark Grant
> Also, I noticed you're using 'EARS' series drives. > Again, I'm not sure if the WD10EARS drives suffer > from a problem mentioned in these posts, but it might > be worth looking into -- especially the last link: Aren't the EARS drives the first ones using 4k sectors? Does OpenSolaris support th

Re: [zfs-discuss] Is LSI SAS3081E-R suitable for a ZFS NAS ?

2010-01-28 Thread Mark Bennett
24 x WD10EARS in 6 disk vdev sets, 1 on 16 bay and 2 on 24 bay. Mark -- This message posted from opensolaris.org ___ zfs-discuss mailing list zfs-discuss@opensolaris.org http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

[zfs-discuss] ZFS and EMC Replication Software

2010-01-28 Thread Mark Woelfel
Hello, Can anybody tell me if EMC's Replication software is supported using ZFS, and if so is there any particular version of Solaris that it is supported with? thanks, -mark -- Mark Woelfel Storage TSC Backline Volume Products Sun Microsystems Work: 78

Re: [zfs-discuss] ZPOOL somehow got same physical drive assigned twice

2010-01-28 Thread Mark J Musante
On Wed, 27 Jan 2010, TheJay wrote: Guys, Need your help. My DEV131 OSOL build with my 21TB disk system somehow got really screwed: This is what my zpool status looks like: NAME STATE READ WRITE CKSUM rzpool2 DEGRADED 0 0 0 raidz2

Re: [zfs-discuss] Strange random errors getting automatically repaired

2010-01-27 Thread Mark Bennett
Hi Giovanni, I have seen these while testing the mpt timeout issue, and on other systems during resilvering of failed disks and while running a scrub. Once so far on this test scrub, and several on yesterdays. I checked the iostat errors, and they weren't that high on that device, compared to

Re: [zfs-discuss] Workaround for mpt timeouts in snv_127

2010-01-26 Thread Mark Bennett
ntroller and see if that helps. P.S. I have a client with a "suspect", nearly full, 20Tb zpool to try to scrub, so this is a big issue for me. A resilver of a 1Tb disk takes up to 40 hrs., so I expect a scrub to be a week (or two), and at present, would probably result in multiple dis

Re: [zfs-discuss] Workaround for mpt timeouts in snv_127

2010-01-26 Thread Mark Nipper
I would definitely be interested to see if the newer firmware fixes the problem for you. I have a very similar setup to yours, and finally forcing the firmware flash to 1.26.00 of my on-board LSI 1068E on a SuperMicro H8DI3+ running snv_131 seemed to address the issue. I'm still waiting to see

Re: [zfs-discuss] Is LSI SAS3081E-R suitable for a ZFS NAS ?

2010-01-26 Thread Mark Nipper
> It may depend on the firmware you're running. We've > got a SAS1068E based > card in Dell R710 at the moment, connected to an > external SAS JBOD, and > we did have problems with the as shipped firmware. Well, I may have misspoke. I just spent a good portion of yesterday upgrading to the lat

Re: [zfs-discuss] Workaround for mpt timeouts in snv_127

2010-01-25 Thread Mark Bennett
t or hot plug. The robustness of ZFS certainly helps keep things running. Mark. -- This message posted from opensolaris.org ___ zfs-discuss mailing list zfs-discuss@opensolaris.org http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

Re: [zfs-discuss] Is LSI SAS3081E-R suitable for a ZFS NAS ?

2010-01-25 Thread Mark Nipper
As in they work without any possibility of mpt timeout issues? I'm at my wits end with a machine right now that has an integrated 1068E and is dying almost hourly at this point. If I could spend three hundred dollars or so and have my problems magically go away, I'd love to pull the trigger on

Re: [zfs-discuss] Remove ZFS Mount Points

2010-01-22 Thread Mark J Musante
On Fri, 22 Jan 2010, Tony MacDoodle wrote: Can I move the below mounts under / ? rpool/export/export rpool/export/home /export/home Sure. Just copy the data out of the directory, do a zfs destroy on the two filesystems, and copy it back. For example: # mkdir /save # cp -r /expo
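[Editor's note: a rough completion of that recipe; paths are illustrative, and it assumes nothing else mounts under /export.]
  # mkdir /save
  # cp -rp /export /save
  # zfs destroy rpool/export/home
  # zfs destroy rpool/export
  # cp -rp /save/export /
/export and /export/home are then plain directories on the root filesystem rather than separate datasets.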

Re: [zfs-discuss] Invalid zpool argument in Solaris 10 (10/09)

2010-01-14 Thread Mark J Musante
On Thu, 14 Jan 2010, Josh Morris wrote: Hello List, I am porting a block device driver (for a PCIe NAND flash disk driver) from OpenSolaris to Solaris 10. On Solaris 10 (10/09) I'm having an issue creating a zpool with the disk. Apparently I have an 'invalid argument' somewhere: % pfexec z

Re: [zfs-discuss] unable to zfs destroy

2010-01-11 Thread Mark J Musante
On Fri, 8 Jan 2010, Rob Logan wrote: this one has me a little confused. ideas? j...@opensolaris:~# zpool import z cannot mount 'z/nukeme': mountpoint or dataset is busy cannot share 'z/cle2003-1': smb add share failed j...@opensolaris:~# zfs destroy z/nukeme internal error: Bad exchange descrip

Re: [zfs-discuss] abusing zfs boot disk for fun and DR

2010-01-09 Thread Mark Bennett
Ben, I have found that booting from cdrom and importing the pool on the new host, then boot the hard disk will prevent these issues. That will reconfigure the zfs to use the new disk device. When running, zpool detach the missing mirror device and attach a new one. Mark. -- This message posted

Re: [zfs-discuss] Understanding SAS/SATA Backplanes and Connectivity

2010-01-07 Thread Mark Bennett
uit needed in a few V240 PSU's. Much cheaper than replacing the whole psu due to poor fan lifespan. Mark. -- This message posted from opensolaris.org ___ zfs-discuss mailing list zfs-discuss@opensolaris.org http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

Re: [zfs-discuss] Understanding SAS/SATA Backplanes and Connectivity

2010-01-06 Thread Mark Bennett
Will, sorry for picking an old thread, but you mentioned a psu monitor to supplement the CSE-PTJBOD-CB1. I have two of these and am interested in your design. Oddly, the LSI backplane chipset supports 2 x i2c busses that Supermicro didn't make use of for monitoring the psu's. Mark

Re: [zfs-discuss] Supermicro AOC-SAT2-MV8 -- cfgadm won't create attach point (dsk/xxxx)

2010-01-06 Thread Mark Bennett
Check if your card has the latest firmware. Mark. -- This message posted from opensolaris.org ___ zfs-discuss mailing list zfs-discuss@opensolaris.org http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

Re: [zfs-discuss] how do i prevent changing device names? is this even a problem in ZFS

2010-01-06 Thread Mark Bennett
backplane price difference diminishes when you get to 24 bays. Mark. -- This message posted from opensolaris.org ___ zfs-discuss mailing list zfs-discuss@opensolaris.org http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

Re: [zfs-discuss] how do i prevent changing device names? is this even a problem in ZFS

2010-01-04 Thread Mark Bennett
I'd recommend a SAS non-raid controller (with sas backplane) over sata. It has better hot plug support. I use the Supermicro SC836E1 and a AOC-USAS-L4i with a UIO M/b. Mark. -- This message posted from opensolaris.org ___ zfs-discuss mailing lis

Re: [zfs-discuss] Supermicro AOC-USAS-L8i

2010-01-03 Thread Mark Bennett
signed for the UIO slot. The cards are a mirror image of a normal pci-e card and may overlap adjacent slots. They "may" work in other servers, but I have found some Supermicro non-UIO servers that wouldn't run them. Mark. -- This message po

[zfs-discuss] zpool import without mounting

2010-01-03 Thread Mark Bennett
Hi, Is it possible to import a zpool and stop it mounting the zfs file systems, or override the mount paths? Mark. -- This message posted from opensolaris.org ___ zfs-discuss mailing list zfs-discuss@opensolaris.org http://mail.opensolaris.org

Re: [zfs-discuss] hard drive choice, TLER/ERC/CCTL

2009-12-10 Thread Mark Grant
Thanks, sounds like it should handle all but the worst faults OK then; I believe the maximum retry timeout is typically set to about 60 seconds in consumer drives. -- This message posted from opensolaris.org ___ zfs-discuss mailing list zfs-discuss@ope

Re: [zfs-discuss] hard drive choice, TLER/ERC/CCTL

2009-12-10 Thread Mark Grant
>From what I remember the problem with the hardware RAID controller is that the >long delay before the drive responds causes the drive to be dropped from the >RAID and then if you get another error on a different drive while trying to >repair the RAID then that disk is also marked failed and you

Re: [zfs-discuss] hard drive choice, TLER/ERC/CCTL

2009-12-10 Thread Mark Grant
Yeah, this is my main concern with moving from my cheap Linux server with no redundancy to ZFS RAID on OpenSolaris; I don't really want to have to pay twice as much to buy the 'enterprise' disks which appear to be exactly the same drives with a flag set in the firmware to limit read retries, but

Re: [zfs-discuss] Pool resize

2009-12-07 Thread Mark J Musante
Did you set autoexpand on? Conversely, did you try doing a 'zpool online bigpool <disk>' for each disk after the replace completed? On Mon, 7 Dec 2009, Alexandru Pirvulescu wrote: Hi, I've read before regarding zpool size increase by replacing the vdevs. The initial pool was a raidz2 with 4 640
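[Editor's note: the two expansion paths being asked about, for reference; pool name is from the message, device names are hypothetical, and both autoexpand and 'zpool online -e' exist only on builds that have the expansion feature (roughly snv_117 / s10u8 onward).]
  # zpool set autoexpand=on bigpool
  # zpool online -e bigpool c1t0d0      (repeat for each replaced disk if autoexpand was off at replace time)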

Re: [zfs-discuss] Adding drives to system - disk labels not consistent

2009-12-01 Thread Mark J. Musante
This may be a dup of 6881631. Regards, markm On 1 Dec 2009, at 15:14, Cindy Swearingen wrote: I was able to reproduce this problem on the latest Nevada build: # zpool create tank raidz c1t2d0 c1t3d0 c1t4d0 # zpool add -n tank raidz c1t5d0 c1t6d0 c1t7d0 would update 'tank' to the follow

Re: [zfs-discuss] mpt errors on snv 127

2009-12-01 Thread Mark Johnson
Mark Johnson wrote: Chad Cantwell wrote: Hi, I was using for quite awhile OpenSolaris 2009.06 with the opensolaris-provided mpt driver to operate a zfs raidz2 pool of about ~20T and this worked perfectly fine (no issues or device errors logged for several months, no hanging). A few days

Re: [zfs-discuss] mpt errors on snv 127

2009-12-01 Thread Mark Johnson
Chad Cantwell wrote: Hi, I was using for quite awhile OpenSolaris 2009.06 with the opensolaris-provided mpt driver to operate a zfs raidz2 pool of about ~20T and this worked perfectly fine (no issues or device errors logged for several months, no hanging). A few days ago I decided to reinsta

Re: [zfs-discuss] mpt errors on snv 127

2009-12-01 Thread Mark Nipper
This is basically just a me too. I'm using different hardware but essentially the same problems. The relevant hardware I have is: --- SuperMicro MBD-H8Di3+-F-O motherboard with LSI 1068E onboard SuperMicro SC846E2-R900B 4U chassis with two LSI SASx36 expander chips on the backplane 24 Western D

Re: [zfs-discuss] Workaround for mpt timeouts in snv_127

2009-11-30 Thread Mark Johnson
card installed. The mptsas cards are not generally available yet (they're 2nd generation), so I would be surprised if you had one. No...I had set the other two variables after Mark contacted me offline to do some testing mainly to verify the problem was, indeed, not xVM specific. I had

Re: [zfs-discuss] Zpool hosed during testing

2009-11-11 Thread Mark J Musante
On 10 Nov, 2009, at 21.02, Ron Mexico wrote: This didn't occur on a production server, but I thought I'd post this anyway because it might be interesting. This is CR 6895446 and a fix for it should be going into build 129. Regards, markm ___ zf

Re: [zfs-discuss] zfs eradication

2009-11-10 Thread Mark A. Carlson
Typically this is called "Sanitization" and could be done as part of an evacuation of data from the disk in preparation for removal. You would want to specify the patterns to write and the number of passes. -- mark Brian Kolaci wrote: Hi, I was discussing the common practi

Re: [zfs-discuss] Strange problem with liveupgrade on zfs (10u7 and u8)

2009-10-21 Thread Mark
Ok. Thanks. Why does '/' show up in the newly created /BE/etc/vfstab but not in the current /etc/vfstab? Should '/' be in the /BE/etc/vfstab? btw, thank you for responding so quickly to this. Mark On Wed, Oct 21, 2009 at 12:49 PM, Enda O'Connor wrote: > Mark Horst

Re: [zfs-discuss] Strange problem with liveupgrade on zfs (10u7 and u8)

2009-10-21 Thread Mark Horstman
Then why the warning on the lucreate. It hasn't done that in the past. Mark On Oct 21, 2009, at 12:41 PM, "Enda O'Connor" wrote: Hi This will boot ok in my opinion, not seeing any issues there. Enda Mark Horstman wrote: more input: # lumount foobar /mnt /mnt #

Re: [zfs-discuss] Strange problem with liveupgrade on zfs (10u7 and u8)

2009-10-21 Thread Mark Horstman
more input: # lumount foobar /mnt /mnt # cat /mnt/etc/vfstab # cat /mnt/etc/vfstab #live-upgrade: updated boot environment #device device mount FS fsck mount mount #to mount to fsck point type pass at boot options # fd -

Re: [zfs-discuss] Strange problem with liveupgrade on zfs (10u7 and u8)

2009-10-21 Thread Mark Horstman
More input: # cat /etc/lu/ICF.1 sol10u8:-:/dev/zvol/dsk/rpool/swap:swap:67108864 sol10u8:/:rpool/ROOT/sol10u8:zfs:0 sol10u8:/appl:pool00/global/appl:zfs:0 sol10u8:/home:pool00/global/home:zfs:0 sol10u8:/rpool:rpool:zfs:0 sol10u8:/install:pool00/shared/install:zfs:0 sol10u8:/opt/local:pool00/shared

Re: [zfs-discuss] Strange problem with liveupgrade on zfs (10u7 and u8)

2009-10-21 Thread Mark Horstman
Neither the virgin SPARC sol10u8 nor the (update to date) patched SPARC sol10u7 have any local zones. -- This message posted from opensolaris.org ___ zfs-discuss mailing list zfs-discuss@opensolaris.org http://mail.opensolaris.org/mailman/listinfo/zfs-d

Re: [zfs-discuss] Strange problem with liveupgrade on zfs (10u7 and u8)

2009-10-21 Thread Mark Horstman
I'm seeing the same lucreate error on my fresh SPARC sol10u8 install (and my SPARC sol10u7 machine I keep patches up to date on), but I don't have a separate /var: # zfs list NAME USED AVAIL REFER MOUNTPOINT pool00 3.36G 532G 20K none pool00/global

Re: [zfs-discuss] Zpool without any redundancy

2009-10-20 Thread Mark J Musante
On Mon, 19 Oct 2009, Espen Martinsen wrote: Let's say I've chosen to live with a zpool without redundancy, (SAN disks, has actually raid5 in disk-cabinet) What benefit are you hoping zfs will provide in this situation? Examine your situation carefully and determine what filesystem works best

Re: [zfs-discuss] NFS sgid directory interoperability with Linux

2009-10-12 Thread Mark Shellenbaum
en inheriting an ACL? I just tried it locally and it appears to work. # ls -ld test.dir drwsr-sr-x 2 marks storage 4 Oct 12 16:45 test.dir my primary group is "staff" $ touch file $ ls -l file -rw-r--r-- 1 marks storage

Re: [zfs-discuss] Destroying zfs snapshot

2009-10-05 Thread Mark Horstman
Sorry. My environment: # uname -a SunOS xx 5.10 Generic_141414-10 sun4v sparc SUNW,SPARC-Enterprise-T5220 -- This message posted from opensolaris.org ___ zfs-discuss mailing list zfs-discuss@opensolaris.org http://mail.opensolaris.org/mailman/listin

[zfs-discuss] Destroying zfs snapshot

2009-10-05 Thread Mark Horstman
I have a snapshot that I'd like to destroy: # zfs list rpool/ROOT/be200909160...@200909160720 NAME USED AVAIL REFER MOUNTPOINT rpool/ROOT/be200909160...@200909160720 1.88G - 4.18G - But when I try it warns me of dependent clones: # zfs destroy rpool
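[Editor's note: the usual way out, sketched with hypothetical boot-environment names; after a promote, the snapshot moves under the promoted clone.]
  $ zfs list -o name,origin -r rpool/ROOT       (find which BE was cloned from the snapshot)
  # zfs promote rpool/ROOT/newBE
  # zfs destroy rpool/ROOT/oldBE                (the old BE, now the clone, if it is no longer needed)
  # zfs destroy rpool/ROOT/newBE@<snapname>     (the snapshot itself can then be destroyed)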

Re: [zfs-discuss] moving files from one fs to another, splittin/merging

2009-09-24 Thread Mark J Musante
On Thu, 24 Sep 2009, Paul Archer wrote: I may have missed something in the docs, but if I have a file in one FS, and want to move it to another FS (assuming both filesystems are on the same ZFS pool), is there a way to do it outside of the standard mv/cp/rsync commands? Not yet. CR 6483179

Re: [zfs-discuss] Checksum property change does not change pre-existing data - right?

2009-09-24 Thread Mark J Musante
On 23 Sep, 2009, at 21.54, Ray Clark wrote: My understanding is that if I use "zfs set checksum=<algorithm>" to change the algorithm, this will change the checksum algorithm for all FUTURE data blocks written, but does not in any way change the checksum for previously written data blocks. I need to
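[Editor's note: a small sketch of forcing existing data onto the new algorithm; the dataset and file names are hypothetical.]
  # zfs set checksum=sha256 tank/data                         (only blocks written from now on use sha256)
  # cp -p /tank/data/file /tank/data/file.new
  # mv /tank/data/file.new /tank/data/file                    (rewriting a file re-checksums its blocks)
Only rewriting the blocks (copying files in place, or send/recv into a dataset with the new setting) changes the checksum of previously written data.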

Re: [zfs-discuss] Which kind of ACLs does tmpfs support ?

2009-09-15 Thread Mark Shellenbaum
Roland Mainz wrote: Hi! Does anyone know out-of-the-head whether tmpfs supports ACLs - and if "yes" - which type(s) of ACLs (e.g. NFSv4/ZFS, old POSIX draft ACLs etc.) are supported by tmpfs ? tmpfs does not support ACLs see _PC_ACL_ENABLED in [f]pathconf(2). You can query the file sy

Re: [zfs-discuss] zfs send older version?

2009-09-15 Thread Mark J Musante
On Mon, 14 Sep 2009, Marty Scholes wrote: I really want to move back to 2009.06 and keep all of my files / snapshots. Is there a way somehow to zfs send an older stream that 2009.06 will read so that I can import that into 2009.06? Can I even create an older pool/dataset using 122? Ideall
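[Editor's note: one possibility, sketched with hypothetical names; my assumption is that 2009.06 uses pool version 14, which is worth confirming with 'zpool upgrade -v' on the 2009.06 system.]
  # zpool create -o version=14 tank c2t0d0
Datasets created in a version-14 pool, and streams sent from them, stand a better chance of being usable on 2009.06, though stream compatibility across builds isn't guaranteed.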

Re: [zfs-discuss] raidz replace issue

2009-09-14 Thread Mark J Musante
On Sat, 12 Sep 2009, Jeremy Kister wrote: scrub: resilver in progress, 0.12% done, 108h42m to go [...] raidz1 DEGRADED 0 0 0 c3t8d0 ONLINE 0 0 0 c5t8d0 ONLINE 0 0 0 c3t9d0 ONLINE 0 0 0

Re: [zfs-discuss] raidz replace issue

2009-09-13 Thread Mark J Musante
The device is listed with s0; did you try using c5t9d0s0 as the name? On 12 Sep, 2009, at 17.44, Jeremy Kister wrote: [sorry for the cross post to solarisx86] One of my disks died that i had in a raidz configuration on a Sun V40z with Solaris 10u5. I took the bad disk out, replaced the dis

Re: [zfs-discuss] ZFS Export, Import = Windows sees wrong groups in ACLs

2009-09-12 Thread Mark Shellenbaum
avies How are the parent and kids defined in the /etc/passwd file? What do the ACLs look like? Issues with the CIFS server are best served by asking on cifs-disc...@opensolaris.org -Mark ___ zfs-discuss mailing list zfs-discuss@opensolaris.org

Re: [zfs-discuss] Status/priority of 6761786

2009-08-28 Thread Mark J Musante
On Fri, 28 Aug 2009, Dave wrote: Thanks, Trevor. I understand the RFE/CR distinction. What I don't understand is how this is not a bug that should be fixed in all solaris versions. Just to get the terminology right: "CR" means Change Request, and can refer to Defects ("bugs") or RFE's. Defe

Re: [zfs-discuss] Problem booting with zpool

2009-08-27 Thread Mark J Musante
Hi Stephen, Have you got many zvols (or snapshots of zvols) in your pool? You could be running into CR 6761786 and/or 6693210. On Thu, 27 Aug 2009, Stephen Green wrote: I'm having trouble booting with one of my zpools. It looks like this: pool: tank state: ONLINE scrub: none requested c

[zfs-discuss] Identify cause when disk faulted

2009-08-25 Thread Mark Bennett
eroing and checking after a replace attempt , shows it is being partitioned during the replace attempt. obviously corrupt data suggests read/write integrity, but is there any way to get more detailed info or logs from zfs on what the reason for rejecting it ? e.g. sector that is failing Mark

Re: [zfs-discuss] *Almost* empty ZFS filesystem - 14GB?

2009-08-21 Thread Mark Shellenbaum
of an inconvenience, but it does make me wonder whether the 'used' figures on my other filesystems and zvols are correct. You could be running into an instance of 6792701 Removing large holey file does not free space A fix for this was

Re: [zfs-discuss] `zfs list -t filesystem` shouldn't return snapshots

2009-08-05 Thread Mark Shellenbaum
Robert Lawhead wrote: I recently tried to post this as a bug, and received an auto-ack, but can't tell whether its been accepted. Does this seem like a bug to anyone else? Default for zfs list is now to show only filesystems. However, a `zfs list` or `zfs list -t filesystem` shows filesystem

Re: [zfs-discuss] ZFS CIFS problem with Ubuntu, NFS as an alternative?

2009-08-05 Thread Mark Shellenbaum
nsolaris.org They will start out by asking you to run: http://opensolaris.org/os/project/cifs-server/files/cifs-gendiag -Mark ___ zfs-discuss mailing list zfs-discuss@opensolaris.org http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

[zfs-discuss] zpool causing boot to hang

2009-08-01 Thread Mark Johnson
I was wondering if this is a known problem.. I am running stock b118 bits. System has a UFS root and a single zpool (with multiple nfs, smb, and iscsi exports) Powered off my machine last night.. Powered it on this morning and it hung during boot. It hung when reading the zpool disks.. It wou

Re: [zfs-discuss] feature proposal

2009-07-29 Thread Mark J Musante
On Wed, 29 Jul 2009, David Magda wrote: Which makes me wonder: is there a programmatic way to determine if a path is on ZFS? Yes, if it's local. Just use df -n $path and it'll spit out the filesystem type. If it's mounted over NFS, it'll just say something like nfs or autofs, though. Reg
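[Editor's note: a quick illustration; the paths are hypothetical and the output format is approximate — Solaris df -n prints "mountpoint : fstype".]
  $ df -n /tank/data
  /tank/data          : zfs
  $ df -n /net/server/export
  /net/server/export  : nfs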

Re: [zfs-discuss] avail drops to 32.1T from 40.8T after create -o mountpoint

2009-07-29 Thread Mark J Musante
On Wed, 29 Jul 2009, Glen Gunselman wrote: Where would I see CR 6308817 my usual search tools aren't find it. http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6308817 Regards, markm ___ zfs-discuss mailing list zfs-discuss@opensolaris.org
