[zfs-discuss] ZFS, Kernel Panic on import

2008-08-29 Thread Mike Aldred
G'day, I've got an OpenSolaris server (b95) that I use for media serving. It uses a DQ35JOE motherboard, dual core, and I have my rpool mirrored on two 40GB IDE drives and my media mirrored on 2 x 500GB SATA drives. I've got a few CIFS shares on the media drive, and I'm using MediaTomb to

Re: [zfs-discuss] cannot delete file when fs 100% full

2008-08-29 Thread Tomas Ögren
On 15 August, 2008 - Tomas Ögren sent me these 0,4K bytes: On 14 August, 2008 - Paul Raines sent me these 2,9K bytes: This problem is becoming a real pain to us again and I was wondering if there has been in the past few months any known fix or workaround. Sun is sending me an IDR
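For readers who land here with the same problem: a commonly cited workaround (a sketch, not the IDR fix referenced above; the path and snapshot names are hypothetical, and the commands assume a live ZFS pool) is to truncate before deleting, since on a copy-on-write filesystem `rm` itself must allocate space:

```shell
# rm can fail with ENOSPC on a 100%-full ZFS filesystem because the
# copy-on-write delete needs to allocate new metadata blocks.
# Truncating the file in place frees its data blocks first:
cp /dev/null /tank/data/bigfile    # hypothetical path
rm /tank/data/bigfile

# If snapshots still reference the blocks, truncation frees nothing;
# destroying the oldest snapshot is then the usual way out:
zfs list -t snapshot
zfs destroy tank/data@oldest       # hypothetical snapshot name
```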

[zfs-discuss] Repairing a Root Pool

2008-08-29 Thread Krister Joas
Hello. I have a machine at home on which I have SXCE B96 installed on a root zpool mirror. It's been working great until yesterday. The root pool is a mirror with two identical 160GB disks. The other day I added a third disk to the mirror, a 250 GB disk. Soon after, the third disk
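For context on the operation being described, a sketch of attaching a third disk to an existing two-way root mirror (device names are hypothetical and the commands assume a live pool):

```shell
# Attaching a new disk to an existing mirror turns it into a three-way
# mirror and starts a resilver automatically:
zpool attach rpool c1t0d0s0 c1t2d0s0
zpool status rpool              # watch the resilver progress

# If the new disk misbehaves, it can be detached again without
# degrading the original two-way mirror:
zpool detach rpool c1t2d0s0
```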

Re: [zfs-discuss] cannot delete file when fs 100% full

2008-08-29 Thread Michael Schuster
On 08/29/08 04:09, Tomas Ögren wrote: On 15 August, 2008 - Tomas Ögren sent me these 0,4K bytes: On 14 August, 2008 - Paul Raines sent me these 2,9K bytes: This problem is becoming a real pain to us again and I was wondering if there has been in the past few months any known fix or

Re: [zfs-discuss] Repairing a Root Pool

2008-08-29 Thread Krister Joas
Here is the output from zpool import showing the configuration of the pool, in case it helps diagnose my problem. pool: rpool id: ... state: DEGRADED status: The pool was last accessed by another system. action: The pool can be imported despite missing or damaged devices. The
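The status shown ("last accessed by another system") is normally cleared with a forced import; a hedged sketch, assuming the pool is otherwise importable (for a root pool this typically has to be run from failsafe boot or install media):

```shell
# -f overrides the "last accessed by another system" check, which is
# set when a pool was not exported cleanly before being moved or rebooted:
zpool import -f rpool
# Then identify the missing or damaged device to repair or detach:
zpool status -v rpool
```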

Re: [zfs-discuss] ZFS Pools 1+TB

2008-08-29 Thread Kenny
To All... Problem solved. Operator error on my part. (but I did learn something!! grin) Thank you all very much! --Kenny

Re: [zfs-discuss] Upgrading my ZFS server

2008-08-29 Thread Joe S
Just an update to this thread with my results. To summarize, I have no problems with the nVidia 750a chipset. It's simply a newer version of the 5** series chipsets that have reportedly worked well. Also, at IDLE, this system uses 133 Watts: CPU - AMD Athlon X2 4850e Motherboard - XFX

[zfs-discuss] Proposed 2540 and ZFS configuration

2008-08-29 Thread Kenny
Hello again... Now that I've got my 2540 up and running, I'm considering which configuration is best. I have a proposed config and wanted your opinions and comments on it. Background: I have a requirement to host syslog data from approx 30 servers. Currently the data is about 3.5TB in

Re: [zfs-discuss] Availability: ZFS needs to handle disk removal / driver failure better

2008-08-29 Thread Nicolas Williams
On Thu, Aug 28, 2008 at 11:29:21AM -0500, Bob Friesenhahn wrote: Which of these do you prefer? o System waits substantial time for devices to (possibly) recover in order to ensure that subsequently written data has the least chance of being lost. o System immediately
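The wait-versus-fail-fast trade-off being polled here corresponds roughly to the pool-level `failmode` property that ZFS already exposes; a sketch with a hypothetical pool name:

```shell
# failmode controls what ZFS does on catastrophic device failure:
#   wait     - block I/O until the device recovers (the default)
#   continue - return EIO to new writes, keep serving reads where possible
#   panic    - panic the node (useful to trigger cluster failover)
zpool set failmode=continue tank
zpool get failmode tank
```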

Re: [zfs-discuss] Availability: ZFS needs to handle disk removal / driver failure better

2008-08-29 Thread Nicolas Williams
On Thu, Aug 28, 2008 at 01:05:54PM -0700, Eric Schrock wrote: As others have mentioned, things get more difficult with writes. If I issue a write to both halves of a mirror, should I return when the first one completes, or when both complete? One possibility is to expose this as a tunable,

Re: [zfs-discuss] Proposed 2540 and ZFS configuration

2008-08-29 Thread Bob Friesenhahn
On Fri, 29 Aug 2008, Kenny wrote: 1) I didn't do raidz2 because I didn't want to lose the space. Is this a bad idea?? Raidz2 is the most reliable vdev configuration other than triple-mirror. The pool is only as strong as its weakest vdev. In private email I suggested using all 12 drives in

Re: [zfs-discuss] Proposed 2540 and ZFS configuration

2008-08-29 Thread Bob Friesenhahn
On Fri, 29 Aug 2008, Bob Friesenhahn wrote: If you do use the two raidz2 vdevs, then if you pay attention to how MPxIO works, you can balance the load across your two fiber channel links for best performance. Each raidz2 vdev can be served (by default) by a different FC link. As a
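The layout under discussion (two six-disk raidz2 vdevs from the 2540's 12 drives, one per FC path) would look roughly like this; device names are hypothetical:

```shell
# Two raidz2 vdevs in one pool: each vdev tolerates two disk failures,
# and ZFS stripes reads and writes across both vdevs:
zpool create tank \
    raidz2 c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0 \
    raidz2 c3t0d0 c3t1d0 c3t2d0 c3t3d0 c3t4d0 c3t5d0
zpool status tank
```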

Re: [zfs-discuss] cannot delete file when fs 100% full

2008-08-29 Thread Shawn Ferry
On Aug 29, 2008, at 7:09 AM, Tomas Ögren wrote: On 15 August, 2008 - Tomas Ögren sent me these 0,4K bytes: On 14 August, 2008 - Paul Raines sent me these 2,9K bytes: This problem is becoming a real pain to us again and I was wondering if there has been in the past few months any known fix

Re: [zfs-discuss] Availability: ZFS needs to handle disk removal / driver failure better

2008-08-29 Thread Richard Elling
Nicolas Williams wrote: On Thu, Aug 28, 2008 at 11:29:21AM -0500, Bob Friesenhahn wrote: Which of these do you prefer? o System waits substantial time for devices to (possibly) recover in order to ensure that subsequently written data has the least chance of being lost.

Re: [zfs-discuss] Proposed 2540 and ZFS configuration

2008-08-29 Thread Bob Friesenhahn
On Fri, 29 Aug 2008, Kyle McDonald wrote: What would one look for to decide what vdev to place each LUN? All mine have the same Current Load Balance value: round robin. That is a good question and I will have to remind myself of the answer. The round robin is good because that means that
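The "Current Load Balance" value quoted above is reported per logical unit by mpathadm; a sketch (the logical-unit path is hypothetical):

```shell
# List multipathed logical units, then inspect one; the output includes
# a "Current Load Balance: round-robin" line plus the path states,
# which is what you'd compare when deciding vdev placement per LUN:
mpathadm list lu
mpathadm show lu /dev/rdsk/c4t...d0s2   # hypothetical LUN path
```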

Re: [zfs-discuss] Availability: ZFS needs to handle disk removal / driver failure better

2008-08-29 Thread Miles Nordin
es == Eric Schrock [EMAIL PROTECTED] writes: es The main problem with exposing tunables like this is that they es have a direct correlation to service actions, and es mis-diagnosing failures costs everybody (admin, companies, es Sun, etc) lots of time and money. Once you expose

[zfs-discuss] ZFS noob question

2008-08-29 Thread Krenz von Leiberman
Hi. I took a snapshot of a directory in which I hold PDF files related to math. I then added a 50MB PDF file from a CD (Oxford Math Reference; I strongly recommend this to any math enthusiast) and did zfs list to see the size of the snapshot (sheer curiosity). I don't have compression turned

Re: [zfs-discuss] ZFS noob question

2008-08-29 Thread Marion Hakanson
[EMAIL PROTECTED] said: I took a snapshot of a directory in which I hold PDF files related to math. I then added a 50MB PDF file from a CD (Oxford Math Reference; I strongly recommend this to any math enthusiast) and did zfs list to see the size of the snapshot (sheer curiosity). I don't have
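The behavior being asked about follows from copy-on-write: a snapshot's USED column only accounts for blocks that existed at snapshot time and have since been changed or freed, so a file added after the snapshot costs the snapshot nothing. A sketch with hypothetical dataset and file names:

```shell
zfs snapshot tank/math@before
cp /cdrom/bigref.pdf /tank/math/   # a new 50MB file, added after the snap
zfs list -t snapshot               # USED for @before stays near zero
rm /tank/math/old-notes.pdf        # delete a file that predates the snap
zfs list -t snapshot               # now USED grows: @before must keep
                                   # the deleted file's blocks alive
```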

Re: [zfs-discuss] Availability: ZFS needs to handle disk removal / driver failure better

2008-08-29 Thread Miles Nordin
re == Richard Elling [EMAIL PROTECTED] writes: re if you use Ethernet switches in the interconnect, you need to re disable STP on the ports used for interconnects or risk re unnecessary cluster reconfigurations. RSTP/802.1w plus setting the ports connected to Solaris as ``edge'' is

Re: [zfs-discuss] Repairing a Root Pool

2008-08-29 Thread George Wilson
Krister Joas wrote: Hello. I have a machine at home on which I have SXCE B96 installed on a root zpool mirror. It's been working great until yesterday. The root pool is a mirror with two identical 160GB disks. The other day I added a third disk to the mirror, a 250 GB disk. Soon

Re: [zfs-discuss] ZFS, Kernel Panic on import

2008-08-29 Thread Mike Aldred
Ok, I've managed to get around the kernel panic. [EMAIL PROTECTED]:~/Download$ pfexec mdb -kw Loading modules: [ unix genunix specfs dtrace cpu.generic uppc pcplusmp scsi_vhci zfs sd ip hook neti sctp arp usba uhci s1394 fctl md lofs random sppp ipc ptm fcip fcp cpc crypto logindmux ii nsctl
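For readers hitting a similar import panic: the mdb -kw session shown suggests the known last-resort technique of setting the undocumented kernel variables aok and zfs_recover so a failed assertion during import logs instead of panicking. Whether that is exactly what was done here is an inference; treat this as a sketch and use it only on a pool you are trying to rescue:

```shell
# Write 1 into the kernel variables with mdb in kernel read/write mode:
#   aok         - downgrade ASSERT failures from panic to log message
#   zfs_recover - relax some ZFS consistency checks during import
pfexec mdb -kw <<'EOF'
aok/W 1
zfs_recover/W 1
EOF
# Then retry the import that was panicking:
pfexec zpool import mediapool     # hypothetical pool name
```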

Re: [zfs-discuss] Availability: ZFS needs to handle disk removal / driver failure better

2008-08-29 Thread Richard Elling
Miles Nordin wrote: re == Richard Elling [EMAIL PROTECTED] writes: re if you use Ethernet switches in the interconnect, you need to re disable STP on the ports used for interconnects or risk re unnecessary cluster reconfigurations. RSTP/802.1w plus setting the

Re: [zfs-discuss] Repairing a Root Pool

2008-08-29 Thread Krister Joas
On Aug 30, 2008, at 8:45 AM, George Wilson wrote: Krister Joas wrote: Hello. I have a machine at home on which I have SXCE B96 installed on a root zpool mirror. It's been working great until yesterday. The root pool is a mirror with two identical 160GB disks. The other day I added

Re: [zfs-discuss] ZFS hangs/freezes after disk failure,

2008-08-29 Thread Todd H. Poole
Based on what I've experienced and read on the zfs-discuss list and elsewhere, I have the __feeling__ that we would have got into real trouble using Solaris (even the most recent release) on that system ... So if one asks me whether to run Solaris+ZFS on a production system, I usually say: definitely, but