Re: [zfs-discuss] 3ware support

2008-02-12 Thread Nicolas Szalay
On Tuesday, 12 February 2008 at 07:22 +0100, Johan Kooijman wrote: Good morning all, can anyone confirm that 3ware RAID controllers are indeed not working under Solaris/OpenSolaris? I can't seem to find it in the HCL. Hi, I confirm they don't work. We're now using a 3Ware 9550SX as a S-ATA

Re: [zfs-discuss] 3ware support

2008-02-12 Thread Lida Horn
Jason J. W. Williams wrote: X4500 problems seconded. Still having issues with port resets due to the Marvell driver, though they seem considerably more transient and less likely to lock up the entire system in the most recent ( b72) OpenSolaris builds. Build 72 is pretty old. The build

Re: [zfs-discuss] 3ware support

2008-02-12 Thread Jason J. W. Williams
X4500 problems seconded. Still having issues with port resets due to the Marvell driver, though they seem considerably more transient and less likely to lock up the entire system in the most recent ( b72) OpenSolaris builds. -J On Feb 12, 2008 9:35 AM, Carson Gaspar [EMAIL PROTECTED] wrote:

Re: [zfs-discuss] Lost intermediate snapshot; incremental backup still possible?

2008-02-12 Thread Jeff Bonwick
I think so. On your backup pool, roll back to the last snapshot that was successfully received. Then you should be able to send an incremental between that one and the present. Jeff On Thu, Feb 07, 2008 at 08:38:38AM -0800, Ian wrote: I keep my system synchronized to a USB disk from time to
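The recovery Jeff describes can be sketched as follows, with made-up pool and snapshot names (the real names come from zfs list -t snapshot on each side; here tank is the source pool, backup is the backup pool, and @monday is the last snapshot that was received successfully):

```shell
# Sketch with hypothetical names -- not taken from the thread.

# 1. On the backup pool, discard anything after the last good snapshot:
zfs rollback -r backup/data@monday

# 2. Send an incremental from that snapshot up to the current one:
zfs send -i tank/data@monday tank/data@friday | zfs receive backup/data
```

The lost intermediate snapshot is never needed again; the incremental stream only requires that sender and receiver share one common snapshot.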

Re: [zfs-discuss] Need help with a dead disk

2008-02-12 Thread Marion Hakanson
[EMAIL PROTECTED] said: One thought I had was to unconfigure the bad disk with cfgadm. Would that force the system back into the 'offline' response? In my experience (X4100 internal drive), that will make ZFS stop trying to use it. It's also a good idea to do this before you hot-unplug the
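A minimal sketch of that sequence, with a hypothetical attachment point and device name (the real ones come from cfgadm -al on the affected system):

```shell
# Sketch with hypothetical names -- not taken from the thread.
cfgadm -al                            # list attachment points; locate the dead disk
cfgadm -c unconfigure c1::dsk/c1t3d0  # detach it so ZFS stops retrying the device
# ...hot-unplug the drive, insert the replacement, then:
cfgadm -c configure c1::dsk/c1t3d0
zpool replace tank c1t3d0             # resilver onto the new disk
```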

Re: [zfs-discuss] Computer usable output for zpool commands

2008-02-12 Thread eric kustarz
On Feb 1, 2008, at 7:17 AM, Nicolas Dorfsman wrote: Hi, I wrote a Hobbit script around lunmap/hbamap commands to monitor SAN health. I'd like to add detail on what is being hosted by those LUNs. With SVM, metastat -p is helpful. With ZFS, zpool status output is awful for scripting. Is
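Lacking a stable machine-parseable mode, one workable (if fragile) approach is to scrape zpool status with awk. A sketch, with a sample of the command's output embedded as a here-document so the parsing logic is visible on its own; the pool and device names are made up:

```shell
#!/bin/sh
# Sketch: extract per-device state from 'zpool status' output.
# The sample output below stands in for the real command; in a
# monitoring script you would pipe 'zpool status' directly.
zpool_status() {
cat <<'EOF'
  pool: tank
 state: ONLINE
config:

        NAME        STATE     READ WRITE CKSUM
        tank        ONLINE       0     0     0
          mirror    ONLINE       0     0     0
            c1t0d0  ONLINE       0     0     0
            c1t1d0  ONLINE       0     0     0
EOF
}

# Emit one "device state" pair per line for anything that looks like a disk:
zpool_status | awk '$1 ~ /^c[0-9]/ { print $1, $2 }'
```

This prints `c1t0d0 ONLINE` and `c1t1d0 ONLINE`. The pattern is brittle (it keys on cXtYdZ-style names), so treat it as a stopgap rather than an API.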

Re: [zfs-discuss] Need help with a dead disk

2008-02-12 Thread Brian H. Nelson
Here's a bit more info. The drive appears to have failed at 22:19 EST, but it wasn't until 1:30 EST the next day that the system finally decided it was bad. (Why?) Here's some relevant log stuff (with lots of repeated 'device not responding' errors removed). I don't know if it will be

[zfs-discuss] Need help with a dead disk (was: ZFS keeps trying to open a dead disk: lots of logging)

2008-02-12 Thread Brian H. Nelson
Ok. I think I answered my own question. ZFS _didn't_ realize that the disk was bad/stale. I power-cycled the failed drive (external) to see if it would come back up and/or run diagnostics on it. As soon as I did that, ZFS put the disk ONLINE and started using it again! Observe: bash-3.00#
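One way to keep ZFS from silently reusing a drive you believe is bad is to offline it explicitly before touching the hardware; a sketch with hypothetical pool and device names:

```shell
# Sketch with hypothetical names -- not taken from the thread.
zpool offline tank c2t0d0   # mark the suspect disk offline; ZFS won't use it
zpool status tank           # device should now show as OFFLINE
# Later, after replacing or verifying the drive:
zpool online tank c2t0d0
```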

Re: [zfs-discuss] 3ware support

2008-02-12 Thread Lida Horn
Carson Gaspar wrote: Tim wrote: A much cheaper (and probably the BEST supported) card is the Supermicro one based on the Marvell chipset. This is the same chipset that is used in the Thumper (X4500), so you know that the folks at Sun are doing their due diligence to make sure the

Re: [zfs-discuss] 3ware support

2008-02-12 Thread Rob Windsor
Johan Kooijman wrote: Good morning all, can anyone confirm that 3ware RAID controllers are indeed not working under Solaris/OpenSolaris? I can't seem to find it in the HCL. We're now using a 3Ware 9550SX as a S-ATA RAID controller. The original plan was to disable all its RAID functions

Re: [zfs-discuss] Need help with a dead disk

2008-02-12 Thread Ross
Hmm... this won't help you, but I think I'm having similar problems with an iSCSI target device. If I offline the target, ZFS hangs for just over 5 minutes before it realises the device is unavailable, and even then it doesn't report the problem until I repeat the zpool status command. What I

[zfs-discuss] Which DTrace provider to use

2008-02-12 Thread Jonathan Loran
Hi List, I'm wondering if one of you expert DTrace gurus can help me. I want to write a DTrace script to print out a histogram of how long IO requests sit in the service queue. I can output the results with the quantize method. I'm not sure which provider I should be using for this.
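One candidate is the stable io provider. A hedged sketch (untested here) that times each I/O from io:::start to io:::done and feeds the deltas to quantize(); note this measures total latency (queue wait plus service time), not queue residence alone, which would need driver-level probes:

```shell
# Sketch: per-I/O latency histogram via the io provider.
dtrace -n '
io:::start  { ts[arg0] = timestamp; }     /* key on the buf pointer */
io:::done   /ts[arg0]/ {
    @lat["I/O latency (ns)"] = quantize(timestamp - ts[arg0]);
    ts[arg0] = 0;                         /* release the key */
}'
```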

Re: [zfs-discuss] 3ware support

2008-02-12 Thread Carson Gaspar
Tim wrote: A much cheaper (and probably the BEST supported) card is the Supermicro one based on the Marvell chipset. This is the same chipset that is used in the Thumper (X4500), so you know that the folks at Sun are doing their due diligence to make sure the drivers are solid. Except the

[zfs-discuss] ZFS keeps trying to open a dead disk: lots of logging

2008-02-12 Thread Brian H. Nelson
This is Solaris 10U3 w/127111-05. It appears that one of the disks in my zpool died yesterday. I got several SCSI errors finally ending with 'device not responding to selection'. That seems to be all well and good. ZFS figured it out and the pool is degraded: maxwell /var/adm zpool status

Re: [zfs-discuss] scrub halts

2008-02-12 Thread Lida Horn
Will Murnane wrote: On Feb 12, 2008 4:45 AM, Lida Horn [EMAIL PROTECTED] wrote: The latest changes to the sata and marvell88sx modules have been put back to Solaris Nevada and should be available in the next build (build 84). Hopefully, those of you who use it will find the changes

Re: [zfs-discuss] 3ware support

2008-02-12 Thread Tim
On 2/12/08, Johan Kooijman [EMAIL PROTECTED] wrote: Goodmorning all, can anyone confirm that 3ware raid controllers are indeed not working under Solaris/OpenSolaris? I can't seem to find it in the HCL. We're now using a 3Ware 9550SX as a S-ATA RAID controller. The original plan was to

[zfs-discuss] We can't import pool zfs faulted

2008-02-12 Thread Stéphane Delmotte
Yesterday, we needed to stop our NFS file server. After having restarted, one ZFS pool was marked as faulted. We exported the faulted pool and tried to import it (even with option -f), but it can't; the message from Solaris is cannot import: one or more devices currently unavailable. What can I do? Is there a

Re: [zfs-discuss] Avoiding performance decrease when pool usage is over 80%

2008-02-12 Thread Thomas Liesner
bda wrote: I haven't noticed this behavior when ZFS has (as recommended) the full disk. Good to know, as I intended to use the whole disks anyway. Thanks, Tom This message posted from opensolaris.org ___ zfs-discuss mailing list

Re: [zfs-discuss] Avoiding performance decrease when pool usage is over 80%

2008-02-12 Thread Thomas Liesner
Ralf Ramge wrote: Quotas are applied to file systems, not pools, and as such are pretty independent of the pool size. I found it best to give every user his/her own filesystem and apply individual quotas afterwards. Does this mean that if I have a pool of 7TB with one filesystem for

Re: [zfs-discuss] Avoiding performance decrease when pool usage is over 80%

2008-02-12 Thread Bryan Allen
On 2008-02-12 02:40:33, Thomas Liesner wrote: Nobody out there who ever had problems with low disk space? Only in

Re: [zfs-discuss] Avoiding performance decrease when pool usage is over 80%

2008-02-12 Thread Thomas Liesner
Nobody out there who ever had problems with low disk space? Regards, Tom

Re: [zfs-discuss] 3ware support

2008-02-12 Thread James C. McPherson
Nicolas Szalay wrote: On Tuesday, 12 February 2008 at 07:22 +0100, Johan Kooijman wrote: Good morning all, can anyone confirm that 3ware RAID controllers are indeed not working under Solaris/OpenSolaris? I can't seem to find it in the HCL. Hi, I confirm they don't work. We're now

Re: [zfs-discuss] Avoiding performance decrease when pool usage is over 80%

2008-02-12 Thread Ralf Ramge
Thomas Liesner wrote: Nobody out there who ever had problems with low disk space? Okay, I found your original mail :-) Quotas are applied to file systems, not pools, and as such are pretty independent of the pool size. I found it best to give every user his/her own filesystem and applying

Re: [zfs-discuss] Avoiding performance decrease when pool usage is over 80%

2008-02-12 Thread Ralf Ramge
Thomas Liesner wrote: Does this mean that if I have a pool of 7TB with one filesystem for all users with a quota of 6TB I'd be alright? Yep. Although I *really* recommend creating individual file systems, e.g. if you have 1,000 users on your server, I'd create 1,000 file systems with a
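The per-user layout Ralf recommends is cheap to script, since ZFS file systems are lightweight; a sketch with made-up pool and user names:

```shell
# Sketch with hypothetical names: one file system per user, each with
# its own quota, instead of one big shared file system.
zfs create tank/home
for user in alice bob carol; do
    zfs create tank/home/$user
    zfs set quota=10G tank/home/$user
done
zfs get -r quota tank/home   # verify the quotas took effect
```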

Re: [zfs-discuss] We can't import pool zfs faulted

2008-02-12 Thread Thomas Liesner
If you can't use zpool status, you should probably check whether the system is right and some devices needed for this pool really are currently unavailable, e.g. with format... Regards, Tom
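A sketch of that check, with a hypothetical pool name: zpool import run with no arguments lists importable pools along with the state of each device, which usually identifies the missing one:

```shell
# Sketch with hypothetical names -- not taken from the thread.
zpool import          # list importable pools and per-device status
format                # confirm the OS actually sees all the disks
# Once the missing device is back (cabling, power, controller):
zpool import -f tank
```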