Re: [zfs-discuss] ZFS Pool, what happen when disk failure

2010-04-24 Thread Haudy Kazemi
aneip wrote: I am really new to ZFS and also RAID. I have 3 hard disks: 500GB, 1TB, and 1.5TB. On each HD I want to create a 150GB partition plus the remaining space. I want to create a raidz from the 3x150GB partitions. This is for my documents + photos. You should be able to create 150 GB slices on each drive, and
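The slicing-plus-raidz approach suggested here could be sketched roughly as follows. Device and slice names are purely illustrative assumptions, not from the thread; the 150GB slices would first be created on each drive with format(1M):

```shell
# Hypothetical sketch: each drive carries a 150GB slice (s0 here);
# the three slices together form one raidz vdev.
zpool create datapool raidz c1t0d0s0 c1t1d0s0 c1t2d0s0

# Confirm the pool layout and health
zpool status datapool
```

raidz capacity is bounded by the smallest member, so equal-sized slices keep the mismatched drives from wasting space in this pool.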

Re: [zfs-discuss] Making ZFS better: zfshistory

2010-04-24 Thread Brandon High
On Fri, Apr 23, 2010 at 7:17 PM, Edward Ned Harvey solar...@nedharvey.com wrote: As the thread unfolds, it appears, although netapp may sometimes have some problems with mv directories ... This is evidence that appears to be weakening ... Sometimes they do precisely what you would want them to

Re: [zfs-discuss] Benchmarking Methodologies

2010-04-24 Thread Robert Milkowski
On 21/04/2010 18:37, Ben Rockwood wrote: You've made an excellent case for benchmarking and where it's useful, but what I'm asking for on this thread is for folks to share the research they've done with as much specificity as possible, for research purposes. :) However you can also find

Re: [zfs-discuss] Making ZFS better: zfshistory

2010-04-24 Thread Edward Ned Harvey
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- boun...@opensolaris.org] On Behalf Of Edward Ned Harvey From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- boun...@opensolaris.org] On Behalf Of Edward Ned Harvey Actually, I find this very surprising:

Re: [zfs-discuss] ZFS Pool, what happen when disk failure

2010-04-24 Thread Edward Ned Harvey
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- boun...@opensolaris.org] On Behalf Of Haudy Kazemi Your remaining space can be configured as slices. These slices can be added directly to a second pool without any redundancy. If any drive fails, that whole non-redundant pool
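As a rough sketch of the non-redundant second pool described here (slice names are hypothetical; the leftover slices differ in size per drive, which is fine for a plain concatenation):

```shell
# The remaining space on each drive (slice s1 here) goes into a
# second pool with no redundancy. Any single drive failure
# destroys the contents of this whole pool.
zpool create scratchpool c1t0d0s1 c1t1d0s1 c1t2d0s1
```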

Re: [zfs-discuss] ZFS Pool, what happen when disk failure

2010-04-24 Thread Edward Ned Harvey
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- boun...@opensolaris.org] On Behalf Of aneip I am really new to ZFS and also RAID. I have 3 hard disks: 500GB, 1TB, and 1.5TB. On each HD I want to create a 150GB partition plus the remaining space. I want to create a raidz from the 3x150GB

Re: [zfs-discuss] Data movement across filesystems within a pool

2010-04-24 Thread devsk
This is really painful. My source was a backup of my folders, which I wanted as filesystems in the RAIDZ setup. So, I copied the source to the new pool and wanted to be able to move those folders to different filesystems within the RAIDZ. But it's turning out to be a brand new copy, and since it's

Re: [zfs-discuss] ZFS Pool, what happen when disk failure

2010-04-24 Thread Robert Milkowski
On 24/04/2010 13:51, Edward Ned Harvey wrote: But what you might not know: If any pool fails, the system will crash. This actually depends on the failmode property setting on your pools. The default is wait, but it can also be set to panic or continue - see the zpool(1M) man page for more details.
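A minimal sketch of inspecting and changing the property (the pool name "tank" is an assumption):

```shell
# Show the current failmode setting for the pool
zpool get failmode tank
# Valid values are wait (the default), continue, and panic
zpool set failmode=continue tank
```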

Re: [zfs-discuss] Data movement across filesystems within a pool

2010-04-24 Thread devsk
Is there anything anybody has to advise? Will I be better off copying each folder into its own FS from the source pool? How about removal of the stuff that's now in this FS? How long will the removal of 770GB of data containing 6 million files take? Cost1: copy folders into respective FS + remove

Re: [zfs-discuss] Data movement across filesystems within a pool

2010-04-24 Thread Richard Elling
Search the archives. This dead horse gets beaten about every 6-9 months or so. -- richard On Apr 24, 2010, at 7:37 AM, devsk wrote: Is there anything anybody has to advise? Will I be better off copying each folder into its own FS from the source pool? How about removal of the stuff that's now

Re: [zfs-discuss] Making ZFS better: zfshistory

2010-04-24 Thread Richard Elling
On Apr 24, 2010, at 5:27 AM, Edward Ned Harvey wrote: From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- boun...@opensolaris.org] On Behalf Of Edward Ned Harvey From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- boun...@opensolaris.org] On Behalf Of Edward Ned Harvey

Re: [zfs-discuss] Data movement across filesystems within a pool

2010-04-24 Thread Bob Friesenhahn
On Sat, 24 Apr 2010, devsk wrote: This is really painful. My source was a backup of my folders, which I wanted as filesystems in the RAIDZ setup. So, I copied the source to the new pool and wanted to be able to move those folders to different filesystems within the RAIDZ. But it's turning out
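Since each dataset is a separate filesystem, a cross-dataset mv degenerates into copy-then-delete. One workaround sketch (paths and dataset names are assumptions, as is the availability of rsync on the system):

```shell
# Create the target dataset, copy the folder into it, then remove
# the original. This is the same copy mv would do, made explicit.
zfs create tank/photos
rsync -a /tank/backup/photos/ /tank/photos/
rm -rf /tank/backup/photos
```

If a folder already sits at the root of its own dataset, zfs rename moves the whole dataset instantly without copying any data.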

[zfs-discuss] ZFS RAID-Z2 degraded vs RAID-Z1

2010-04-24 Thread Peter Tripp
Had an idea, could someone please tell me why it's wrong? (I feel like it has to be.) A RaidZ-2 pool with one missing disk offers the same failure resilience as a healthy RaidZ1 pool (no data loss when one disk fails). I had initially wanted to do a single-parity raidz pool (5-disk), but after a

Re: [zfs-discuss] ZFS Pool, what happen when disk failure

2010-04-24 Thread aneip
Thanks for all the answers, still trying to read slowly and understand. Pardon my English, as this is my 2nd language. I believe I owe some more explanation. The system is actually FreeNAS, which is installed on a separate disk. The 3 disks, 500GB, 1TB and 1.5TB, are for data only. The first pool will be

Re: [zfs-discuss] ZFS Pool, what happen when disk failure

2010-04-24 Thread Bob Friesenhahn
On Sat, 24 Apr 2010, aneip wrote: What I am trying to avoid is: if 1 of the disks fails, will I lose all of the data in the pool, even from the healthy drives? I am not sure whether I can simply pull out 1 drive and only the files located on the faulty drive will be lost. The files which are on the other drives

[zfs-discuss] not showing data in L2ARC or ZIL

2010-04-24 Thread Brad
I'm not showing any data being populated in the L2ARC or ZIL SSDs with a J4500 (48 x 500GB SATA drives).

# zpool iostat -v
               capacity     operations    bandwidth
pool         used  avail   read  write   read  write
-----------  ----  -----  -----  -----  -----  -----
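For reference, a sketch of how cache (L2ARC) and log (ZIL) devices are attached to a pool so that zpool iostat -v reports their activity separately (device and pool names are assumptions):

```shell
# Add one SSD as an L2ARC cache device and another as a log device
zpool add tank cache c2t0d0
zpool add tank log c2t1d0

# Report per-device capacity and I/O every 5 seconds
zpool iostat -v tank 5
```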

[zfs-discuss] Performance data for dedup?

2010-04-24 Thread Roy Sigurd Karlsbakk
Hi all I've been playing a little with dedup, and it seems it needs a truckload of memory, something I don't have on my test systems. Does anyone have performance data for large (20TB+) systems with dedup? roy ___ zfs-discuss mailing list
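One way to gauge the memory cost before committing: zdb can simulate the dedup table against existing data (pool and dataset names are assumptions):

```shell
# Simulate dedup on an existing pool; prints a block histogram and
# an estimated dedup ratio, from which the DDT's RAM footprint can
# be roughly sized.
zdb -S tank

# Dedup itself is enabled per dataset, not per pool:
zfs set dedup=on tank/data
```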

Re: [zfs-discuss] not showing data in L2ARC or ZIL

2010-04-24 Thread Bob Friesenhahn
On Sat, 24 Apr 2010, Brad wrote: We're running Solaris 10 10/09 with Oracle 10G - in our previous configs data was clearly shown in the L2ARC and ZIL, but then again we didn't have 48GB (16GB in previous tests) and a JBOD. Thoughts? Clearly this is a read-optimized system. Sweet! My

[zfs-discuss] Extremely slow raidz resilvering

2010-04-24 Thread Leandro Vanden Bosch
Hello everyone, As one of the steps of improving my ZFS home fileserver (snv_134) I wanted to replace a 1TB disk with a newer one of the same vendor/model/size because this new one has 64MB cache vs. 16MB in the previous one. The removed disk will be used for backups, so I thought it's better off

Re: [zfs-discuss] Extremely slow raidz resilvering

2010-04-24 Thread Roy Sigurd Karlsbakk
ZFS first does a scan of indices and such, which requires lots of seeks. After that, the resilvering starts. I guess if you give it an hour, it'll be done. roy - Leandro Vanden Bosch l.vbo...@gmail.com skrev: Hello everyone, As one of the steps of improving my ZFS home fileserver
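Progress can be watched while waiting (pool name is an assumption); once the initial scan is past, the resilver line in the status output shows percent complete and an estimated finish time:

```shell
zpool status -v tank
```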

Re: [zfs-discuss] Extremely slow raidz resilvering

2010-04-24 Thread Leandro Vanden Bosch
Thanks Roy for your reply. I actually waited a little more than an hour, but I'm still going to wait a little longer following your suggestion and a little hunch of mine. I just found out that this new WD10EARS is one of the new 4K-sector disks. I believed that only the 2TB models were 4K. See: BEFORE
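A quick way to check how the pool is aligned for sector size (the pool name "datos" is taken from later in this thread): zdb reports each vdev's ashift, where 9 means 512-byte alignment and 12 means 4K. A 4K-sector drive sitting in an ashift=9 vdev pays a read-modify-write penalty on every sub-4K write, which would explain a painfully slow resilver.

```shell
# Look for the ashift value in the pool's vdev configuration
zdb datos | grep ashift
```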

Re: [zfs-discuss] ZFS RAID-Z2 degraded vs RAID-Z1

2010-04-24 Thread Roy Sigurd Karlsbakk
- Peter Tripp petertr...@gmail.com skrev: Can someone with a stronger understanding of ZFS tell me why a degraded RaidZ2 (minus one disk) is less efficient than RaidZ1? (Besides the fact that your pools are always reported as degraded.) I guess the same would apply with RaidZ2 vs RaidZ3

Re: [zfs-discuss] Making ZFS better: zfshistory

2010-04-24 Thread Ragnar Sundblad
On 24 apr 2010, at 16.43, Richard Elling wrote: I do not recall reaching that conclusion. I think the definition of the problem is what you continue to miss. Me too then, I think. Can you please enlighten us about the definition of the problem? The .snapshot directories do precisely what

Re: [zfs-discuss] ZFS RAID-Z2 degraded vs RAID-Z1

2010-04-24 Thread Freddie Cash
On Sat, Apr 24, 2010 at 9:21 AM, Peter Tripp petertr...@gmail.com wrote: Can someone with a stronger understanding of ZFS tell me why a degraded RaidZ2 (minus one disk) is less efficient than RaidZ1? (Besides the fact that your pools are always reported as degraded.) I guess the same would
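The comparison can be demonstrated directly (device names are illustrative; five disks are used here to match the thread's 5-disk plan):

```shell
# Build a 5-disk raidz2, then take one disk offline: the pool
# reports DEGRADED but, like a healthy raidz1, still survives
# one further disk failure without data loss.
zpool create tank raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0
zpool offline tank c1t4d0
zpool status tank
```

The efficiency difference is that any read touching the offlined disk's columns must be reconstructed from parity on every access, work a healthy raidz1 never has to do.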

Re: [zfs-discuss] Extremely slow raidz resilvering

2010-04-24 Thread Leandro Vanden Bosch
Confirmed then that the issue was with the WD10EARS. I swapped it out with the old one and things look a lot better: pool: datos state: DEGRADED status: One or more devices is currently being resilvered. The pool will continue to function, possibly in a degraded state. action: Wait

Re: [zfs-discuss] not showing data in L2ARC or ZIL

2010-04-24 Thread Bob Friesenhahn
On Sat, 24 Apr 2010, Brad wrote: Hmm, so that means read requests are being fulfilled by the ARC? Am I correct in assuming that because the ARC is fulfilling read requests, the zpool and L2ARC are barely touched? That is the state of nirvana you are searching for, no? Bob --

Re: [zfs-discuss] terrible ZFS performance compared to UFS on ramdisk (70% drop)

2010-04-24 Thread Kyle McDonald
On 3/9/2010 1:55 PM, Matt Cowger wrote: That's a very good point - in this particular case, there is no option to change the blocksize for the application. I have no way of guessing the effects it would have, but is there a reason that the filesystem blocks can't be a multiple of the
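Where the application's block size cannot change, the usual lever is the dataset's recordsize instead (dataset name and the 8K size are assumptions for illustration):

```shell
# Match recordsize to the application's I/O size to avoid
# read-modify-write amplification on partial-record updates
zfs set recordsize=8k tank/db
zfs get recordsize tank/db
```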

Re: [zfs-discuss] not showing data in L2ARC or ZIL

2010-04-24 Thread Brad
thanks - :) -- This message posted from opensolaris.org