Re: [zfs-discuss] Suggested RaidZ configuration...

2010-09-08 Thread Mattias Pantzare
On Wed, Sep 8, 2010 at 06:59, Edward Ned Harvey sh...@nedharvey.com wrote: On Tue, Sep 7, 2010 at 4:59 PM, Edward Ned Harvey sh...@nedharvey.com wrote: I think the value you can take from this is: Why does the BPG say that?  What is the reasoning behind it? Anything that is a rule of thumb

Re: [zfs-discuss] Suggested RaidZ configuration...

2010-09-08 Thread hatish
Rebuild time is not a concern for me. The concern with rebuilding was the stress it puts on the disks for an extended period of time (increasing the chances of another disk failure). The % of data used doesn't matter, as the system will try to get it done at max speed, thus creating the

Re: [zfs-discuss] Configuration questions for Home File Server (CPU cores, dedup, checksum)?

2010-09-08 Thread Roy Sigurd Karlsbakk
3. Should I consider using dedup if my server has only 8 GB of RAM? Or, will that not be enough to hold the DDT? In which case, should I add L2ARC / ZIL or am I better to just skip using dedup on a home file server? As Cindy said, skip dedup for now. It's not stable (enough). Try to destroy a
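As a rough sketch of the arithmetic behind that advice, using the ~320 bytes of ARC per unique block commonly cited on this list (the pool name "tank" is hypothetical):

    # 4 TiB of data / 128 KiB average block size = 33,554,432 unique blocks
    # 33,554,432 blocks * ~320 bytes = ~10 GiB of RAM for the DDT alone
    zdb -S tank    # simulate dedup on an existing pool and print the histogram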

Re: [zfs-discuss] Configuration questions for Home File Server (CPU cores, dedup, checksum)?

2010-09-08 Thread Darren J Moffat
On 08/09/2010 00:41, Scott Meilicke wrote: Craig, 3. I do not think you will get much dedupe on video, music and photos. I would not bother. If you really wanted to know at some later stage, you could create a new file system, enable dedupe, and copy your data (or a subset) into it just to
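A minimal sketch of that experiment (dataset and path names are hypothetical):

    zfs create -o dedup=on tank/ddtest      # new file system with dedup on
    cp -rp /tank/media/photos /tank/ddtest  # copy a subset of the data in
    zpool list tank                         # the DEDUP column shows the ratio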

[zfs-discuss] Solaris 10u9

2010-09-08 Thread David Magda
The 9/10 Update appears to have been released. Some of the more noticeable ZFS stuff that made it in: * Triple parity RAID-Z (raidz3) – In this release, a redundant RAID-Z configuration can now have either single-parity, double-parity, or triple-parity, which means that one, two, or three
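As a quick sketch of the new option (device names are hypothetical; a raidz3 vdev needs at least four devices, three of which go to parity):

    zpool create tank raidz3 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0 c1t6d0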

Re: [zfs-discuss] Suggested RaidZ configuration...

2010-09-08 Thread Edward Ned Harvey
From: pantz...@gmail.com [mailto:pantz...@gmail.com] On Behalf Of Mattias Pantzare It is about 1 vdev with 12 disks or 2 vdevs with 6 disks. If you have 2 vdevs you have to read half the data compared to 1 vdev to resilver a disk. Let's suppose you have 1T of data. You have 12-disk raidz2.
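For concreteness, the two layouts under discussion look like this (device names are hypothetical):

    # layout A: one 12-disk raidz2 vdev
    zpool create tank raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0 \
        c1t6d0 c1t7d0 c1t8d0 c1t9d0 c1t10d0 c1t11d0
    # layout B: two 6-disk raidz2 vdevs in one pool; with 1T of data,
    # each vdev holds roughly half, so a resilver reads about half as much
    zpool create tank raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0 \
                 raidz2 c1t6d0 c1t7d0 c1t8d0 c1t9d0 c1t10d0 c1t11d0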

Re: [zfs-discuss] Solaris 10u9

2010-09-08 Thread Edward Ned Harvey
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- boun...@opensolaris.org] On Behalf Of David Magda The 9/10 Update appears to have been released. Some of the more noticeable ZFS stuff that made it in: More at: http://docs.sun.com/app/docs/doc/821-1840/gijtg Awesome!

Re: [zfs-discuss] Solaris 10u9

2010-09-08 Thread Tomas Ögren
On 08 September, 2010 - Edward Ned Harvey sent me these 0,6K bytes: From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- boun...@opensolaris.org] On Behalf Of David Magda The 9/10 Update appears to have been released. Some of the more noticeable ZFS stuff that made it in:

Re: [zfs-discuss] Solaris 10u9

2010-09-08 Thread David Magda
On Wed, September 8, 2010 09:46, Tomas Ögren wrote: On 08 September, 2010 - Edward Ned Harvey sent me these 0,6K bytes: Now when is dedup going to be ready? ;-) It's not in U9 at least: ...
    16  stmf property support
    17  Triple-parity RAID-Z
    18  Snapshot user holds
    19  Log device
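For reference, the numbered entries quoted above match the pool-version listing; a quick way to check what a given build and pool support (the pool name "tank" is hypothetical):

    zpool upgrade -v        # list every on-disk format version this build knows
    zpool get version tank  # show the version a particular pool is running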

Re: [zfs-discuss] Suggested RaidZ configuration...

2010-09-08 Thread Mattias Pantzare
On Wed, Sep 8, 2010 at 15:27, Edward Ned Harvey sh...@nedharvey.com wrote: From: pantz...@gmail.com [mailto:pantz...@gmail.com] On Behalf Of Mattias Pantzare It is about 1 vdev with 12 disks or 2 vdevs with 6 disks. If you have 2 vdevs you have to read half the data compared to 1 vdev to

Re: [zfs-discuss] Solaris 10u9

2010-09-08 Thread Frank Cusack
On 9/8/10 9:32 AM -0400 Edward Ned Harvey wrote: From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- boun...@opensolaris.org] On Behalf Of David Magda The 9/10 Update appears to have been released. Some of the more noticeable ZFS stuff that made it in: More at:

Re: [zfs-discuss] Solaris 10u9

2010-09-08 Thread Deirdré Straughan
For those more audio-visually inclined, there's a series of short videos on http://blogs.sun.com/video/ with George Wilson discussing what's new. Frank Cusack wrote: On 9/8/10 9:32 AM -0400 Edward Ned Harvey wrote: From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-

[zfs-discuss] NFS performance issue

2010-09-08 Thread Dr. Martin Mundschenk
Hi! I searched the web for hours, trying to solve the NFS/ZFS low-performance issue on my freshly set up OSOL box (snv134). The problem is discussed in many threads but I've found no solution. On an NFS-shared volume, I get write performance of 3.5 MB/sec (!!) while read performance is about 50 MB/sec
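A quick way to separate pool performance from NFS semantics is to compare the same write locally and over the mount; a large gap usually means synchronous NFS writes are hitting the ZIL. Paths below are hypothetical:

    # baseline on the server itself
    dd if=/dev/zero of=/tank/share/ddtest bs=128k count=8192
    # identical write from an NFS client
    dd if=/dev/zero of=/mnt/share/ddtest bs=128k count=8192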

Re: [zfs-discuss] NFS performance issue

2010-09-08 Thread Ray Van Dolson
On Wed, Sep 08, 2010 at 01:20:58PM -0700, Dr. Martin Mundschenk wrote: Hi! I searched the web for hours, trying to solve the NFS/ZFS low-performance issue on my freshly set up OSOL box (snv134). The problem is discussed in many threads but I've found no solution. On an NFS-shared volume, I

[zfs-discuss] Forgot username

2010-09-08 Thread Rather not say
Hello - After waiting an hour or so for opensolaris, I had forgotten which username I chose, so I booted into Windows to see if I could find it; no luck. How can I figure it out?

Re: [zfs-discuss] Forgot username

2010-09-08 Thread Ian Collins
On 09/ 9/10 11:37 AM, Rather not say wrote: Hello - After waiting an hour or so for opensolaris, I had forgot what username I put so I booted into windows to see if I could find it, no luck. How can I figure it out? Not by asking here! The opensolaris-help list is more appropriate. Boot

[zfs-discuss] performance leakage when copy huge data

2010-09-08 Thread Fei Xu
Hi all: I'm a new guy who has only been using ZFS for half a year. We are using Nexenta in a corporate pilot environment. These days, while trying to move around 4 TB of data from an old pool (4*2TB raidz) to a new pool (11*2TB raidz2), it seems it will never finish successfully. 1. I used CP

Re: [zfs-discuss] performance leakage when copy huge data

2010-09-08 Thread Ian Collins
On 09/ 9/10 01:14 PM, Fei Xu wrote: Hi all: I'm a new guy who has only been using ZFS for half a year. We are using Nexenta in a corporate pilot environment. These days, while trying to move around 4 TB of data from an old pool (4*2TB raidz) to a new pool (11*2TB raidz2), it seems it will never

Re: [zfs-discuss] performance leakage when copy huge data

2010-09-08 Thread Fei Xu
Thank you, Ian. I've rebuilt the pool as 9*2TB raidz2 and started the zfs send command. Results will come out after about 3 hours. Thanks, fei
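For reference, a minimal sketch of this kind of pool-to-pool migration (pool and snapshot names are hypothetical):

    zfs snapshot -r oldpool@migrate                      # recursive snapshot
    zfs send -R oldpool@migrate | zfs recv -Fd newpool   # replicate datasets
    # -R sends all descendant file systems, snapshots, and properties;
    # -F rolls the target back, -d derives dataset names from the stream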

Re: [zfs-discuss] performance leakage when copy huge data

2010-09-08 Thread Fei Xu
Now it gets extremely slow at around 400G sent. The first iostat result was captured when the send operation started:

                   capacity     operations    bandwidth
    pool        alloc   free   read  write   read  write
    ----------  -----  -----  -----  -----  -----  -----
    sh001a      37.6G
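The listing above is a single sample; a repeating per-vdev view usually shows where the stall is. A minimal invocation using the pool name from the post:

    zpool iostat -v sh001a 5   # per-vdev activity, sampled every 5 seconds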

Re: [zfs-discuss] performance leakage when copy huge data

2010-09-08 Thread Ian Collins
On 09/ 9/10 02:42 PM, Fei Xu wrote: Now it gets extremely slow at around 400G sent. The first iostat result was captured when the send operation started:

                   capacity     operations    bandwidth
    pool        alloc   free   read  write   read  write
    ----------  -----  -----  -----

Re: [zfs-discuss] performance leakage when copy huge data

2010-09-08 Thread Fei Xu
Have you got dedup enabled? Note the read bandwidth is much higher. -- Ian. No, dedup is not enabled, since it's still not stable enough even for a test environment. Here is a JPG of the read/write indicator; the RED line is read and the GREEN line is write. You can see, because the destination pool

Re: [zfs-discuss] performance leakage when copy huge data

2010-09-08 Thread Fei Xu
I dug deeper into it and may have found some useful information. I attached an X25 SSD for the ZIL to see if it helps, but no luck. I ran iostat -xnz for more details and got an interesting result, as below (maybe too long). Some explanation: 1. c2d0 is the SSD for the ZIL 2. c0t3d0, c0t20d0, c0t21d0, c0t22d0 are
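For reference, the flags behind that listing: -x asks for extended statistics, -n uses logical device names, and -z suppresses lines for idle devices. A minimal invocation with a 5-second interval:

    iostat -xnz 5    # extended stats, logical names, skip all-zero rows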

Re: [zfs-discuss] zpool create using whole disk - do I add p0? E.g. c4t2d0 or c4t2d0p0

2010-09-08 Thread R.G. Keen
Hi Craig, Don't use the p* devices for your storage pools. They represent the larger fdisk partition. Use the d* devices instead, like this example below: Good advice, something I wondered about too. However, aside from my having guessed right once (I think...) I have no clue why this
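A minimal sketch of the advice, using the device names from the subject line (the pool name "tank" is hypothetical):

    # Give zpool the bare disk name; ZFS writes an EFI label and
    # uses the whole disk (and can then manage the disk's write cache).
    zpool create tank c4t2d0
    # Avoid the fdisk-partition node c4t2d0p0 and slice nodes such as
    # c4t2d0s0: p* devices are fdisk partitions, s* are Solaris slices.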

Re: [zfs-discuss] Suggested RaidZ configuration...

2010-09-08 Thread Freddie Cash
On Wed, Sep 8, 2010 at 6:27 AM, Edward Ned Harvey sh...@nedharvey.com wrote: Both of the above situations resilver in equal time, unless there is a bus bottleneck.  21 disks in a single raidz3 will resilver just as fast as 7 disks in a raidz1, as long as you are avoiding the bus bottleneck.