Re: [zfs-discuss] Another user loses his pool (10TB) in this case and 40 days work

2010-02-14 Thread Bruno Damour
Hello, I'm now thinking there is some _real_ bug in the way zfs handles filesystems created with the pool itself (i.e. the tank filesystem when the zpool is tank, usually mounted as /tank). My own experience shows that zfs is unable to send/receive recursively (snapshots, child fs) properly when the

Re: [zfs-discuss] SSD and ZFS

2010-02-14 Thread Tracey Bernath
OK, that was the magic incantation I was looking for: - changing the noprefetch option opened the floodgates to the L2ARC - changing the max queue depth relieved the wait time on the drives, although I may undo this again in the benchmarking since these drives all have NCQ. I went from all four
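The two knobs Tracey mentions map, on OpenSolaris builds of that era, to the `l2arc_noprefetch` and `zfs_vdev_max_pending` tunables (names assumed from the period's documentation; verify against your build). A sketch of setting them persistently:

```shell
# /etc/system entries -- tunable names assumed from OpenSolaris of this era;
# verify against your build before applying, then reboot.

# Let prefetched (streaming) reads be eligible for the L2ARC
set zfs:l2arc_noprefetch = 0

# Reduce the per-vdev I/O queue depth to cut per-request wait time
set zfs:zfs_vdev_max_pending = 10
```

For a live, non-persistent change, the same variables can be poked with the kernel debugger, e.g. `echo l2arc_noprefetch/W0 | mdb -kw`.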

[zfs-discuss] ZFS slowness under domU high load

2010-02-14 Thread Bogdan Ćulibrk
Greetings zfs-discuss@ I have been trying to narrow this down for quite some time. The problem resides on a couple of osol/sxce boxes that are used as dom0 hosts. Under high disk load on domU guests (a backup process, for example), domU performance is terrible. The worst thing is that iostat shows

Re: [zfs-discuss] ZFS slowness under domU high load

2010-02-14 Thread Richard Elling
On Feb 14, 2010, at 9:24 AM, Bogdan Ćulibrk wrote: Greetings zfs-discuss@ I have been trying to narrow this down for quite some time. The problem resides on a couple of osol/sxce boxes that are used as dom0 hosts. Under high disk load on domU guests (a backup process, for example), domU

[zfs-discuss] Painfully slow RAIDZ2 as fibre channel COMSTAR export

2010-02-14 Thread Dave Pooser
I'm trying to set up an OpenSolaris 2009.6 server as a Fibre Channel storage device, and I'm seeing painfully slow performance while copying large (6-50GB) files -- like 3-5 MB/second over 4Gb FC. However, if instead of creating a volume and exporting it via FC I create a standard filesystem and
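For context, the usual COMSTAR path for an export like Dave describes looks roughly like this (volume name and size are made up for illustration; the LU GUID is a placeholder printed by sbdadm):

```shell
# Create a zvol and expose it over FC via COMSTAR (sketch)
zfs create -V 500g tank/fcvol

# Register the zvol as a SCSI logical unit; sbdadm prints the LU GUID
sbdadm create-lu /dev/zvol/rdsk/tank/fcvol

# Make the LU visible to initiators (here: all hosts, all target ports)
stmfadm add-view <GUID-from-sbdadm-output>
```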

Re: [zfs-discuss] ZFS slowness under domU high load

2010-02-14 Thread Bogdan Ćulibrk
Richard, first of all thank you for your time looking into this; I appreciate it. What are my options from here? Move onto a zvol with a greater blocksize? 64k? 128k? Or will I get into other trouble going that way when I have small reads coming from the domU (ext3 with its default blocksize of 4k)?

Re: [zfs-discuss] ZFS slowness under domU high load

2010-02-14 Thread Kjetil Torgrim Homme
Bogdan Ćulibrk b...@default.rs writes: What are my options from here? Move onto a zvol with a greater blocksize? 64k? 128k? Or will I get into other trouble going that way when I have small reads coming from the domU (ext3 with its default blocksize of 4k)? yes, definitely. have you considered
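Worth noting: `volblocksize` is fixed at creation time, so the move Kjetil endorses means creating a new zvol and copying the guest image over. A sketch with made-up dataset names:

```shell
# volblocksize can only be set when the zvol is created
zfs create -V 100g -o volblocksize=64k tank/domU-new

# Copy the guest image across (the guest must be shut down first)
dd if=/dev/zvol/rdsk/tank/domU-old of=/dev/zvol/rdsk/tank/domU-new bs=1M
```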

Re: [zfs-discuss] Painfully slow RAIDZ2 as fibre channel COMSTAR export

2010-02-14 Thread Nigel Smith
Hi Dave, So which hard drives are connected to which controllers? And what device drivers are those controllers using? The output from 'format', 'cfgadm' and 'prtconf -D' may help us to understand. Strange that you say that there are two hard drives per controller, but three drives are showing
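The diagnostics Nigel asks for can be gathered in one pass on Solaris/OpenSolaris:

```shell
# List disks and their device paths without entering format's interactive menu
format </dev/null

# Show attachment points (controllers, disks) and their state
cfgadm -al

# Dump the device tree annotated with the driver bound to each node
prtconf -D
```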

Re: [zfs-discuss] Painfully slow RAIDZ2 as fibre channel COMSTAR export

2010-02-14 Thread Dave Pooser
So which hard drives are connected to which controllers? And what device drivers are those controllers using? 0. c7t0d0 DEFAULT cyl 7764 alt 2 hd 255 sec 63 /p...@0,0/pci8086,3...@3/pci1000,3...@0/s...@0,0 1. c7t1d0 ATA-Hitachi HDS72202-A20N-1.82TB

Re: [zfs-discuss] ZFS performance benchmarks in various configurations

2010-02-14 Thread Edward Ned Harvey
Never mind. I have no interest in performance tests for Solaris 10. The code is so old that it does not represent current ZFS at all. Whatever. Regardless of what you say, it does show: · Which is faster, raidz, or a stripe of mirrors? · How much does raidz2 hurt

Re: [zfs-discuss] ZFS performance benchmarks in various configurations

2010-02-14 Thread Edward Ned Harvey
iozone -m -t 8 -T -O -r 128k -o -s 12G Actually, it seems that this is more than sufficient: iozone -m -t 8 -T -r 128k -o -s 4G Good news, cuz I kicked off the first test earlier today, and it seems like it will run till Wednesday. ;-) The first run, on a single disk, took 6.5 hrs,
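An annotated version of the shorter benchmark command (flag meanings as documented for iozone; verify with `iozone -h` on your build):

```shell
# -m       use multiple internal buffers
# -t 8 -T  throughput mode with 8 workers, implemented as POSIX threads
# -r 128k  128 KB record size (matches the ZFS default recordsize)
# -o       open files O_SYNC so every write is synchronous
# -s 4G    4 GB test file per thread
iozone -m -t 8 -T -r 128k -o -s 4G
```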

Re: [zfs-discuss] Painfully slow RAIDZ2 as fibre channel COMSTAR export

2010-02-14 Thread Bob Friesenhahn
On Sun, 14 Feb 2010, Dave Pooser wrote: c7, c8 and c9 are LSI controllers using the MPT driver. The motherboard has 6 SATA ports which are presented as two controllers (presumably c10 and c11) one for ports 0-3 and one for ports 4 and 5; both currently use the PCI-IDE drivers. One should

Re: [zfs-discuss] ZFS performance benchmarks in various configurations

2010-02-14 Thread Thomas Burgess
Whatever. Regardless of what you say, it does show: · Which is faster, raidz, or a stripe of mirrors? · How much does raidz2 hurt performance compared to raidz? · Which is faster, raidz, or hardware raid 5? · Is a mirror twice as fast as a single disk for

Re: [zfs-discuss] ZFS performance benchmarks in various configurations

2010-02-14 Thread Bob Friesenhahn
On Sun, 14 Feb 2010, Edward Ned Harvey wrote: Never mind. I have no interest in performance tests for Solaris 10. The code is so old, that it does not represent current ZFS at all. Whatever.  Regardless of what you say, it does show: Since Richard abandoned Sun (in favor of gmail), he has

Re: [zfs-discuss] Painfully slow RAIDZ2 as fibre channel COMSTAR export

2010-02-14 Thread Thomas Burgess
c7, c8 and c9 are LSI controllers using the MPT driver. The motherboard has 6 SATA ports which are presented as two controllers (presumably c10 and c11) one for ports 0-3 and one for ports 4 and 5; both currently use the PCI-IDE drivers. On my motherboard, I can make the onboard sata ports

Re: [zfs-discuss] ZFS performance benchmarks in various configurations

2010-02-14 Thread Bob Friesenhahn
On Sun, 14 Feb 2010, Edward Ned Harvey wrote: iozone -m -t 8 -T -O -r 128k -o -s 12G Actually, it seems that this is more than sufficient: iozone -m -t 8 -T -r 128k -o -s 4G Good news, cuz I kicked off the first test earlier today, and it seems like it will run till Wednesday. ;-)

Re: [zfs-discuss] ZFS performance benchmarks in various configurations

2010-02-14 Thread Bob Friesenhahn
On Sun, 14 Feb 2010, Thomas Burgess wrote: Solaris 10 has a really old version of ZFS. I know there are some pretty big differences in zfs versions from my own non-scientific benchmarks. It would make sense that people wouldn't be as interested in benchmarks of Solaris 10 ZFS seeing as

Re: [zfs-discuss] Painfully slow RAIDZ2 as fibre channel COMSTAR export

2010-02-14 Thread Tim Cook
On Sun, Feb 14, 2010 at 8:49 PM, Thomas Burgess wonsl...@gmail.com wrote: c7, c8 and c9 are LSI controllers using the MPT driver. The motherboard has 6 SATA ports which are presented as two controllers (presumably c10 and c11) one for ports 0-3 and one for ports 4 and 5; both currently use

Re: [zfs-discuss] ZFS slowness under domU high load

2010-02-14 Thread zfs ml
On 2/14/10 4:12 PM, Kjetil Torgrim Homme wrote: Bogdan Ćulibrk b...@default.rs writes: What are my options from here? To move onto zvol with greater blocksize? 64k? 128k? Or I will get into another trouble going that way when I have small reads coming from domU (ext3 with default blocksize of
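The trade-off behind the small-reads worry can be put in numbers: every sub-block write to a zvol becomes a read-modify-write of a full volblocksize block. A quick back-of-envelope sketch:

```shell
# Rough write amplification for 4 KB guest writes at various volblocksizes
guest_io=4096
for vbs in 8192 65536 131072; do
  amp=$((vbs / guest_io))
  echo "volblocksize=${vbs}: ~${amp}x write amplification for 4 KB writes"
done
```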

Re: [zfs-discuss] ZFS slowness under domU high load

2010-02-14 Thread zfs ml
On 2/14/10 7:02 PM, zfs ml wrote: On 2/14/10 4:12 PM, Kjetil Torgrim Homme wrote: Bogdan Ćulibrk b...@default.rs writes: What are my options from here? To move onto zvol with greater blocksize? 64k? 128k? Or I will get into another trouble going that way when I have small reads coming from

Re: [zfs-discuss] ZFS ARC Hits By App

2010-02-14 Thread Sanjeev
Abdullah, On Thu, Feb 11, 2010 at 03:42:38PM -0500, Abdullah Al-Dahlawi wrote: Hi Sanjeev, linking the application to the ARCSTAT_BUMP(arcstat_hits) is not straightforward and is time consuming, especially if I am running many experiments. Brendan has commented on the post by providing an

Re: [zfs-discuss] Painfully slow RAIDZ2 as fibre channel COMSTAR export

2010-02-14 Thread Dave Pooser
On my motherboard, I can make the onboard sata ports show up as IDE or SATA, you may look into that. It would probably be something like AHCI mode. Yeah, I changed the motherboard setting from Enhanced to AHCI and now those ports show up as SATA. -- Dave Pooser, ACSA Manager of Information

Re: [zfs-discuss] Painfully slow RAIDZ2 as fibre channel COMSTAR export

2010-02-14 Thread Thomas Burgess
Oh, so I WAS right? Awesome. On Sun, Feb 14, 2010 at 10:45 PM, Dave Pooser dave@alfordmedia.com wrote: on my motherboard, i can make the onboard sata ports show up as IDE or SATA, you may look into that. It would probably be something like AHCI mode. Yeah, I changed the motherboard

Re: [zfs-discuss] Painfully slow RAIDZ2 as fibre channel COMSTAR export

2010-02-14 Thread Dave Pooser
I'm off to straighten out my controller distribution, check to see if I have write caching turned off on the motherboard ports, install the b132 build, and possibly grab some dinner while I'm about it. I'll report back to the list with any progress or lack thereof. OK, the issue seems to be
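Checking the on-disk write cache Dave mentions is done from format's expert menu (interactive; menu item names as commonly documented for Solaris, so treat this as a sketch):

```shell
# Expert mode exposes the cache submenu (interactive session, sketched)
format -e
#   > select the disk from the list
#   > cache
#   > write_cache
#   > display        # show the current write-cache state
#   > enable         # turn the write cache on
```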

Re: [zfs-discuss] ZFS performance benchmarks in various configurations

2010-02-14 Thread Richard Elling
On Feb 14, 2010, at 6:45 PM, Thomas Burgess wrote: Whatever. Regardless of what you say, it does show: · Which is faster, raidz, or a stripe of mirrors? · How much does raidz2 hurt performance compared to raidz? · Which is faster, raidz, or hardware raid 5?