Re: [zfs-discuss] Diagnosing Permanent Errors

2010-04-05 Thread Willard Korfhage
Looks like it was RAM. I ran memtest+ 4.00, and it found no problems. I removed 2 of the 3 sticks of RAM, ran a backup, and had no errors. I'm running more extensive tests, but it looks like that was it. A new motherboard, CPU and ECC RAM are on the way to me now.

[zfs-discuss] Tuning the ARC towards LRU

2010-04-05 Thread Peter Schuller
Hello, For desktop use, and presumably rapidly changing non-desktop uses, I find the ARC cache pretty annoying in its behavior. For example this morning I had to hit my launch-terminal key perhaps 50 times (roughly) before it would start completing without disk I/O. There are plenty of other

Re: [zfs-discuss] Problems with zfs and a STK RAID INT SAS HBA

2010-04-05 Thread Ragnar Sundblad
On 5 apr 2010, at 04.35, Edward Ned Harvey wrote: When running the card in copyback write cache mode, I got horrible performance (with zfs), much worse than with copyback disabled (which I believe should mean it does write-through), when tested with filebench. When I benchmark my disks, I

[zfs-discuss] Removing SSDs from pool

2010-04-05 Thread Andreas Höschler
Hi all, while setting up our X4140 I have - following suggestions - added two SSDs as log devices as follows:

    zpool add tank log c1t6d0 c1t7d0

I currently have:

    pool: rpool
    state: ONLINE
    scrub: none requested
    config:
          NAME    STATE     READ WRITE CKSUM
          rpool
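For reference, the mirror keyword is what separates a mirrored log from a pair of striped log devices, as later replies in this thread point out (same device names as above):

    # what was run -- adds two independent, striped log devices:
    zpool add tank log c1t6d0 c1t7d0

    # what adds a single mirrored log device instead:
    zpool add tank log mirror c1t6d0 c1t7d0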

Re: [zfs-discuss] ZFS getting slower over time

2010-04-05 Thread Edward Ned Harvey
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- boun...@opensolaris.org] On Behalf Of Marcus Wilhelmsson I have a problem with my zfs system, it's getting slower and slower over time. When the OpenSolaris machine is rebooted and just started I get about 30-35MB/s in read and

Re: [zfs-discuss] ZFS getting slower over time

2010-04-05 Thread Marcus Wilhelmsson
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- boun...@opensolaris.org] On Behalf Of Marcus Wilhelmsson I have a problem with my zfs system, it's getting slower and slower over time. When the OpenSolaris machine is rebooted and just started I get about 30-35MB/s in

Re: [zfs-discuss] Removing SSDs from pool

2010-04-05 Thread Edward Ned Harvey
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- boun...@opensolaris.org] On Behalf Of Andreas Höschler • I would like to remove the two SSDs as log devices from the pool and instead add them as a separate pool for sole use by the database to see how this enhances performance.

Re: [zfs-discuss] ZFS getting slower over time

2010-04-05 Thread Edward Ned Harvey
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- boun...@opensolaris.org] On Behalf Of Marcus Wilhelmsson

    pool: s1
    state: ONLINE
    scrub: none requested
    config:
          NAME    STATE     READ WRITE CKSUM
          s1      ONLINE       0     0     0

[zfs-discuss] no hot spare activation?

2010-04-05 Thread Garrett D'Amore
While testing a zpool with a different storage adapter using my blkdev device, I did a test which made a disk unavailable -- all attempts to read from it report EIO. I expected my configuration (which is a 3 disk test, with 2 disks in a RAIDZ and a hot spare) to work where the hot spare would
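A minimal sketch of this kind of test configuration, assuming hypothetical device names (the post does not give the actual ones):

    zpool create testpool raidz c0t0d0 c0t1d0 spare c0t2d0
    zpool status testpool    # the spare shows as AVAIL until a fault is diagnosed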

Re: [zfs-discuss] ZFS getting slower over time

2010-04-05 Thread Marcus Wilhelmsson
Alright, I've made the benchmarks and there isn't a difference worth mentioning, except that I only get about 30MB/s (to my Mac, which has an SSD as system disk). I've also tried copying to a RAM disk with slightly better results. Well, now that I've restarted the server I probably won't see the

Re: [zfs-discuss] Sun Flash Accelerator F20 numbers

2010-04-05 Thread Kyle McDonald
On 4/4/2010 11:04 PM, Edward Ned Harvey wrote: Actually, it's my experience that Sun (and other vendors) do exactly that for you when you buy their parts - at least for rotating drives; I have no experience with SSDs. The Sun disk label shipped on all the drives is set up to make the drive

Re: [zfs-discuss] Removing SSDs from pool

2010-04-05 Thread Andreas Höschler
Hi Edward, thanks a lot for your detailed response! From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- boun...@opensolaris.org] On Behalf Of Andreas Höschler • I would like to remove the two SSDs as log devices from the pool and instead add them as a separate pool for sole use by

Re: [zfs-discuss] mpxio load-balancing...it doesn't work??

2010-04-05 Thread Torrey McMahon
Not true. There are different ways that a storage array, and its controllers, connect to the host-visible front-end ports, which might be confusing the author, but I/O isn't duplicated as he suggests. On 4/4/2010 9:55 PM, Brad wrote: I had always thought that with mpxio, it load-balances IO

[zfs-discuss] Why does ARC grow above hard limit?

2010-04-05 Thread Mike Z
I would appreciate it if somebody can clarify a few points. I am doing some random WRITES (100% writes, 100% random) testing and observe that the ARC grows way beyond the hard limit during the test. The hard limit is set to 512 MB via /etc/system and I see the size going up to 1 GB - how come is it
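For context, a 512 MB limit like the one described is normally set with an /etc/system line such as the following; note that zfs_arc_max caps the ARC's target size (c) rather than acting as a hard ceiling, which is one reason the measured size can overshoot it transiently:

    set zfs:zfs_arc_max = 0x20000000

    # compare the target against the actual size at runtime:
    kstat -p zfs:0:arcstats:c zfs:0:arcstats:size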

Re: [zfs-discuss] Removing SSDs from pool

2010-04-05 Thread Khyron
Response below... 2010/4/5 Andreas Höschler ahoe...@smartsoft.de Hi Edward, thanks a lot for your detailed response! From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- boun...@opensolaris.org] On Behalf Of Andreas Höschler • I would like to remove the two SSDs as log

Re: [zfs-discuss] no hot spare activation?

2010-04-05 Thread Eric Schrock
On Apr 5, 2010, at 11:43 AM, Garrett D'Amore wrote: I see ereport.fs.zfs.io_failure, and ereport.fs.zfs.probe_failure. Also, ereport.io.service.lost and ereport.io.device.inval_state. There is indeed a fault.fs.zfs.device in the list as well. The ereports are not interesting, only the

Re: [zfs-discuss] mpxio load-balancing...it doesn't work??

2010-04-05 Thread Bob Friesenhahn
On Sun, 4 Apr 2010, Brad wrote: I had always thought that with mpxio, it load-balances IO requests across your storage ports but this article http://christianbilien.wordpress.com/2007/03/23/storage-array-bottlenecks/ has got me thinking it's not true. The available bandwidth is 2 or 4Gb/s
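For anyone checking their own mpxio behavior, the stock Solaris multipathing tools show the active paths and the load-balance policy (substitute your own logical-unit name below; the one shown is a placeholder):

    mpathadm list lu
    mpathadm show lu /dev/rdsk/cXtYd0s2    # per-path states and current load-balance policy

The policy itself is configured in /kernel/drv/scsi_vhci.conf, e.g. load-balance="round-robin";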

Re: [zfs-discuss] Removing SSDs from pool

2010-04-05 Thread Andreas Höschler
Hi Khyron, No, he did *not* say that a mirrored SLOG has no benefit, redundancy-wise. He said that YOU do *not* have a mirrored SLOG.  You have 2 SLOG devices which are striped.  And if this machine is running Solaris 10, then you cannot remove a log device because those updates have not

Re: [zfs-discuss] Tuning the ARC towards LRU

2010-04-05 Thread Bob Friesenhahn
On Mon, 5 Apr 2010, Peter Schuller wrote: For desktop use, and presumably rapidly changing non-desktop uses, I find the ARC cache pretty annoying in its behavior. For example this morning I had to hit my launch-terminal key perhaps 50 times (roughly) before it would start completing without

Re: [zfs-discuss] Tuning the ARC towards LRU

2010-04-05 Thread Peter Schuller
It sounds like you are complaining about how FreeBSD has implemented zfs in the system rather than about zfs in general.  These problems don't occur under Solaris.  Zfs and the kernel need to agree on how to allocate/free memory, and it seems that Solaris is more advanced than FreeBSD in this

[zfs-discuss] EON ZFS Storage 0.60.0 based on snv 130, Sun-set release!

2010-04-05 Thread Andre Lue
Embedded Operating system/Networking (EON), a RAM-based live ZFS NAS appliance, has been released on Genunix! This release marks the end of SXCE releases and Sun Microsystems as we know it! It is dubbed the Sun-set release! Many thanks to Al at Genunix.org for download hosting and serving the

[zfs-discuss] Are there (non-Sun/Oracle) vendors selling OpenSolaris/ZFS based NAS Hardware?

2010-04-05 Thread Kyle McDonald
I've seen the Nexenta and EON webpages, but I'm not looking to build my own. Is there anything out there I can just buy? -Kyle

Re: [zfs-discuss] Are there (non-Sun/Oracle) vendors selling OpenSolaris/ZFS based NAS Hardware?

2010-04-05 Thread Ahmed Kamal
Install Nexenta on a Dell PowerEdge? Or one of these: http://www.pogolinux.com/products/storage_director On Mon, Apr 5, 2010 at 9:48 PM, Kyle McDonald kmcdon...@egenera.com wrote: I've seen the Nexenta and EON webpages, but I'm not looking to build my own. Is there anything out there I can

Re: [zfs-discuss] Are there (non-Sun/Oracle) vendors selling OpenSolaris/ZFS based NAS Hardware?

2010-04-05 Thread Volker A. Brandt
Kyle McDonald writes: I've seen the Nexenta and EON webpages, but I'm not looking to build my own. Is there anything out there I can just buy? In Germany, someone sells preconfigured hardware based on Nexenta:

Re: [zfs-discuss] Tuning the ARC towards LRU

2010-04-05 Thread Bob Friesenhahn
On Mon, 5 Apr 2010, Peter Schuller wrote: It may be FreeBSD-specific, but note that I am not talking about the amount of memory dedicated to the ARC and how it balances with free memory on the system. I am talking about eviction policy. I could be wrong but I didn't think the ZFS port made

Re: [zfs-discuss] Are there (non-Sun/Oracle) vendors selling OpenSolaris/ZFS based NAS Hardware?

2010-04-05 Thread Roy Sigurd Karlsbakk
- Kyle McDonald kmcdon...@egenera.com wrote: I've seen the Nexenta and EON webpages, but I'm not looking to build my own. Is there anything out there I can just buy? I've set up a few systems with Supermicro hardware - works well and doesn't cost a whole lot. -- Roy Sigurd

Re: [zfs-discuss] Tuning the ARC towards LRU

2010-04-05 Thread Peter Schuller
The ARC is designed to use as much memory as is available up to a limit.  If the kernel allocator needs memory and there is none available, then the allocator requests memory back from the zfs ARC. Note that some systems have multiple memory allocators.  For example, there may be a memory

Re: [zfs-discuss] Tuning the ARC towards LRU

2010-04-05 Thread Richard Elling
On Apr 5, 2010, at 2:23 PM, Peter Schuller wrote: That's a very general statement. I am talking about specifics here. For example, you can have mountains of evidence that shows that a plain LRU is optimal (under some conditions). That doesn't change the fact that if I want to avoid a

Re: [zfs-discuss] Tuning the ARC towards LRU

2010-04-05 Thread Peter Schuller
In simple terms, the ARC is divided into an MRU and an MFU side:

    target size (c) = target MRU size (p) + target MFU size (c-p)

On Solaris, to get from the MRU to the MFU side, the block must be read at least once in 62.5 milliseconds. For pure read-once workloads, the data won't move to the
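Both targets are exposed directly in the ARC kstats, so the MRU/MFU balance described here can be watched live:

    # p = target MRU size, c = total target size, c - p = target MFU size
    kstat -p zfs:0:arcstats:p zfs:0:arcstats:c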

Re: [zfs-discuss] Tuning the ARC towards LRU

2010-04-05 Thread Bill Sommerfeld
On 04/05/10 15:24, Peter Schuller wrote: In the urxvt case, I am basing my claim on informal observations. I.e., hit terminal launch key, wait for disks to rattle, get my terminal. Repeat. Only by repeating it very many times in very rapid succession am I able to coerce it to be cached such that

Re: [zfs-discuss] Tuning the ARC towards LRU

2010-04-05 Thread Richard Elling
On Apr 5, 2010, at 3:24 PM, Peter Schuller wrote: I will have to look into it in better detail to understand the consequences. Is there a paper that describes the ARC as it is implemented in ZFS (since it clearly diverges from the IBM ARC)? There are various blogs, but perhaps the best

Re: [zfs-discuss] Removing SSDs from pool

2010-04-05 Thread Edward Ned Harvey
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- boun...@opensolaris.org] On Behalf Of Andreas Höschler Thanks for the clarification! This is very annoying. My intent was to create a log mirror. I used zpool add tank log c1t6d0 c1t7d0 and this was obviously wrong.
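Later pool versions (19 and up) added log-device removal, so on a sufficiently recent OpenSolaris build the recovery would look roughly like the sketch below; on the Solaris 10 releases current at the time it is not possible:

    zpool remove tank c1t6d0
    zpool remove tank c1t7d0
    zpool add tank log mirror c1t6d0 c1t7d0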

Re: [zfs-discuss] Sun Flash Accelerator F20 numbers

2010-04-05 Thread Edward Ned Harvey
From: Kyle McDonald [mailto:kmcdon...@egenera.com] So does your HBA have newer firmware now than it did when the first disk was connected? Maybe it's the HBA that is handling the new disks differently now, than it did when the first one was plugged in? Can you down rev the HBA FW? Do you

Re: [zfs-discuss] no hot spare activation?

2010-04-05 Thread Eric Schrock
On Apr 5, 2010, at 3:38 AM, Garrett D'Amore wrote: Am I missing something here? Under what conditions can I expect hot spares to be recruited? Hot spares are activated by the zfs-retire agent in response to a list.suspect event containing one of the following faults:
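The retire agent's view can be inspected with the standard FMA tools, for example:

    fmadm faulty    # currently diagnosed faults (results of list.suspect events)
    fmdump -v       # the fault log
    fmdump -eV      # raw ereports, as quoted elsewhere in this thread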

Re: [zfs-discuss] no hot spare activation?

2010-04-05 Thread Garrett D'Amore
On 04/05/10 05:28 AM, Eric Schrock wrote: On Apr 5, 2010, at 3:38 AM, Garrett D'Amore wrote: Am I missing something here? Under what conditions can I expect hot spares to be recruited? Hot spares are activated by the zfs-retire agent in response to a list.suspect event containing

Re: [zfs-discuss] Removing SSDs from pool

2010-04-05 Thread Neil Perrin
On 04/05/10 11:43, Andreas Höschler wrote: Hi Khyron, No, he did *not* say that a mirrored SLOG has no benefit, redundancy-wise. He said that YOU do *not* have a mirrored SLOG. You have 2 SLOG devices which are striped. And if this machine is running Solaris 10, then you cannot remove a

Re: [zfs-discuss] Diagnosing Permanent Errors

2010-04-05 Thread Daniel Carosone
On Sun, Apr 04, 2010 at 11:46:16PM -0700, Willard Korfhage wrote: Looks like it was RAM. I ran memtest+ 4.00, and it found no problems. Then why do you suspect the RAM? Especially with 12 disks, another likely candidate could be an overloaded power supply. While there may be problems showing

Re: [zfs-discuss] mpxio load-balancing...it doesn't work??

2010-04-05 Thread Brad
I'm wondering if the author is talking about cache mirroring, where the cache is mirrored between both controllers. If that is the case, is he saying that for every write to the active controller, a second write is issued on the passive controller to keep the cache mirrored?

Re: [zfs-discuss] ZFS: Raid and dedup

2010-04-05 Thread Learner Study
Hi Folks: I'm wondering what is the correct flow when both raid5 and de-dup are enabled on a storage volume. I think we should do de-dup first and then raid5 ... is that understanding correct? Thanks!

Re: [zfs-discuss] Removing SSDs from pool

2010-04-05 Thread Daniel Carosone
On Mon, Apr 05, 2010 at 07:43:26AM -0400, Edward Ned Harvey wrote: Is the database running locally on the machine? Or at the other end of something like nfs? You should have better performance using your present config than just about any other config ... By enabling the log devices, such as

Re: [zfs-discuss] ZFS: Raid and dedup

2010-04-05 Thread Daniel Carosone
On Mon, Apr 05, 2010 at 06:32:13PM -0700, Learner Study wrote: I'm wondering what is the correct flow when both raid5 and de-dup are enabled on a storage volume. I think we should do de-dup first and then raid5 ... is that understanding correct? Not really. Strictly speaking, ZFS
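A simplified sketch of the write path being discussed (ZFS's raidz takes the place of conventional raid5, and dedup happens well before any device layout is chosen):

    application write
      -> compression
      -> checksum
      -> dedup (DDT lookup; duplicate blocks never reach the allocator)
      -> block allocation
      -> raidz/mirror layout across the vdev's disks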

Re: [zfs-discuss] ZFS: Raid and dedup

2010-04-05 Thread Learner Study
Hi Jeff: I'm a bit confused... did you say Correct to my original email or to the reply from Daniel? Is there a doc that may explain it better? Thanks! On Mon, Apr 5, 2010 at 6:54 PM, jeff.bonw...@oracle.com wrote: Correct. Jeff Sent from my iPhone On Apr 5, 2010, at

Re: [zfs-discuss] mpxio load-balancing...it doesn't work??

2010-04-05 Thread Torrey McMahon
The author mentions multipathing software in the blog entry. Kind of hard to mix that up with cache mirroring if you ask me. On 4/5/2010 9:16 PM, Brad wrote: I'm wondering if the author is talking about cache mirroring where the cache is mirrored between both controllers. If that is the

Re: [zfs-discuss] mpxio load-balancing...it doesn't work??

2010-04-05 Thread Tim Cook
On Mon, Apr 5, 2010 at 8:16 PM, Brad bene...@yahoo.com wrote: I'm wondering if the author is talking about cache mirroring, where the cache is mirrored between both controllers. If that is the case, is he saying that for every write to the active controller, a second write is issued on the

Re: [zfs-discuss] ZFS: Raid and dedup

2010-04-05 Thread Daniel Carosone
On Mon, Apr 05, 2010 at 06:58:57PM -0700, Learner Study wrote: Hi Jeff: I'm a bit confused...did you say Correct to my orig email or the reply from Daniel... Jeff is replying to your mail, not mine. It looks like he's read your question a little differently. By that reading, you are

Re: [zfs-discuss] Diagnosing Permanent Errors

2010-04-05 Thread Willard Korfhage
It certainly has symptoms that match a marginal power supply, but I measured the power consumption some time ago and found it comfortably within the power supply's capacity. I've also wondered if the RAM is fine, but there is just some kind of flaky interaction with the RAM configuration I had

Re: [zfs-discuss] ZFS: Raid and dedup

2010-04-05 Thread Richard Elling
On Apr 5, 2010, at 6:32 PM, Learner Study wrote: Hi Folks: I'm wondering what is the correct flow when both raid5 and de-dup are enabled on a storage volume I think we should do de-dup first and then raid5 ... is that understanding correct? Yes. If you look at the (somewhat

Re: [zfs-discuss] Diagnosing Permanent Errors

2010-04-05 Thread Tim Cook
On Mon, Apr 5, 2010 at 9:39 PM, Willard Korfhage opensola...@familyk.org wrote: It certainly has symptoms that match a marginal power supply, but I measured the power consumption some time ago and found it comfortably within the power supply's capacity. I've also wondered if the RAM is fine,

Re: [zfs-discuss] Diagnosing Permanent Errors

2010-04-05 Thread Daniel Carosone
On Mon, Apr 05, 2010 at 09:46:58PM -0500, Tim Cook wrote: On Mon, Apr 5, 2010 at 9:39 PM, Willard Korfhage opensola...@familyk.org wrote: It certainly has symptoms that match a marginal power supply, but I measured the power consumption some time ago and found it comfortably within the

Re: [zfs-discuss] Diagnosing Permanent Errors

2010-04-05 Thread Willard Korfhage
Memtest didn't show any errors, but between Frank, early in the thread, saying that he had found memory errors that memtest didn't catch, and removal of DIMMs apparently fixing the problem, I jumped too soon to the conclusion that it was the memory. Certainly there are other explanations. I see

Re: [zfs-discuss] Diagnosing Permanent Errors

2010-04-05 Thread Daniel Carosone
On Mon, Apr 05, 2010 at 09:35:21PM -0700, Willard Korfhage wrote: By the way, I see that now one of the disks is listed as degraded - too many errors. Is there a good way to identify exactly which of the disks it is? It's hidden in iostat -E, of all places. -- Dan.
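For example, -E prints per-device error counters along with the vendor, product, and serial number of each disk, and adding -n reports devices by their cXtYdZ names, which helps map the degraded device to a physical drive:

    iostat -En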

Re: [zfs-discuss] Diagnosing Permanent Errors

2010-04-05 Thread Tim Cook
On Tue, Apr 6, 2010 at 12:24 AM, Daniel Carosone d...@geek.com.au wrote: On Mon, Apr 05, 2010 at 09:35:21PM -0700, Willard Korfhage wrote: By the way, I see that now one of the disks is listed as degraded - too many errors. Is there a good way to identify exactly which of the disks it is?

Re: [zfs-discuss] Diagnosing Permanent Errors

2010-04-05 Thread Daniel Carosone
On Tue, Apr 06, 2010 at 12:29:35AM -0500, Tim Cook wrote: On Tue, Apr 6, 2010 at 12:24 AM, Daniel Carosone d...@geek.com.au wrote: On Mon, Apr 05, 2010 at 09:35:21PM -0700, Willard Korfhage wrote: By the way, I see that now one of the disks is listed as degraded - too many errors. Is