Re: [zfs-discuss] Migrating 10TB of data from NTFS is there a simple way?

2009-07-04 Thread Eric D. Mudama
On Fri, Jul 3 at 16:34, Erik Trimble wrote: Ian Collins wrote: Ross wrote: [please keep some context for the email list] Quick question to the more experienced guys here - how much space would you end up with from 8 1.5TB drives in a raid-z array? Around 8-9TB? Bearing in mind

Re: [zfs-discuss] Why is Solaris 10 ZFS performance so terrible?

2009-07-04 Thread Phil Harman
ZFS doesn't mix well with mmap(2). This is because ZFS uses the ARC instead of the Solaris page cache. But mmap() uses the latter. So if anyone maps a file, ZFS has to keep the two caches in sync. cp(1) uses mmap(2). When you use cp(1) it brings pages of the files it copies into the
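One way to see this for yourself (a minimal sketch, assuming a Solaris system with truss(1); the file paths are hypothetical) is to trace the mapping and I/O calls cp(1) actually makes:

    # Trace the calls cp(1) makes while copying one file; on Solaris the
    # source is typically mmap'd and the destination written with write(2).
    truss -t open,mmap,munmap,write,close cp /tank/src/bigfile /tank/dst/bigfile

Seeing mmap() on the source file is what confirms that a plain cp is enough to pull ZFS file pages into the page cache.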

Re: [zfs-discuss] Why is Solaris 10 ZFS performance so terrible?

2009-07-04 Thread Joerg Schilling
Mattias Pantzare pant...@ludd.ltu.se wrote: Performance when copying 236 GB of files (each file is 5537792 bytes, with 20001 files per directory) from one directory to another: Copy Method / Data Rate

Re: [zfs-discuss] Why is Solaris 10 ZFS performance so terrible?

2009-07-04 Thread Joerg Schilling
Phil Harman phil.har...@sun.com wrote: ZFS doesn't mix well with mmap(2). This is because ZFS uses the ARC instead of the Solaris page cache. But mmap() uses the latter. So if anyone maps a file, ZFS has to keep the two caches in sync. cp(1) uses mmap(2). When you use cp(1) it brings

Re: [zfs-discuss] Why is Solaris 10 ZFS performance so terrible?

2009-07-04 Thread Bob Friesenhahn
On Sat, 4 Jul 2009, Jonathan Edwards wrote: somehow i don't think that reading the first 64MB (presumably) off a raw disk device 3 times and picking the middle value is going to give you much useful information on the overall state of the disks .. i believe this was more of a quick hack
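For reference, the quick hack being described amounts to something like the sketch below (device name hypothetical; assumes ptime(1) and dd(1M)), with the middle of the three elapsed times taken as the result:

    # Read the first 64MB of the raw device three times; keep the median time
    for i in 1 2 3; do
        ptime dd if=/dev/rdsk/c1t0d0s0 of=/dev/null bs=1024k count=64
    done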

Re: [zfs-discuss] Why is Solaris 10 ZFS performance so terrible?

2009-07-04 Thread Bob Friesenhahn
On Sat, 4 Jul 2009, Phil Harman wrote: ZFS doesn't mix well with mmap(2). This is because ZFS uses the ARC instead of the Solaris page cache. But mmap() uses the latter. So if anyone maps a file, ZFS has to keep the two caches in sync. cp(1) uses mmap(2). When you use cp(1) it brings pages

Re: [zfs-discuss] Why is Solaris 10 ZFS performance so terrible?

2009-07-04 Thread Bob Friesenhahn
On Sat, 4 Jul 2009, Phil Harman wrote: If you reboot, your cpio(1) tests will probably go fast again, until someone uses mmap(2) on the files again. I think tar(1) uses read(2), but from my iPod I can't be sure. It would be interesting to see how tar(1) performs if you run that test before
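An easy way to settle whether tar(1) uses read(2) or mmap(2) (a sketch; the directory name is hypothetical) is a syscall count under truss:

    # -c prints a summary count of the system calls tar makes;
    # look for read vs. mmap in the summary on stderr
    truss -c tar cf - /tank/testfiles > /dev/null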

Re: [zfs-discuss] surprisingly poor performance

2009-07-04 Thread Bob Friesenhahn
On Sat, 4 Jul 2009, James Lever wrote: Any insightful observations? Probably multiple slog devices are used to expand slog size and not used in parallel since that would require somehow knowing the order. The principal bottleneck is likely the update rate of the first device in the chain,

Re: [zfs-discuss] Why is Solaris 10 ZFS performance so terrible?

2009-07-04 Thread Gary Mills
On Sat, Jul 04, 2009 at 08:48:33AM +0100, Phil Harman wrote: ZFS doesn't mix well with mmap(2). This is because ZFS uses the ARC instead of the Solaris page cache. But mmap() uses the latter. So if anyone maps a file, ZFS has to keep the two caches in sync. That's the first I've heard of

Re: [zfs-discuss] Why is Solaris 10 ZFS performance so terrible?

2009-07-04 Thread Bob Friesenhahn
A tar pipeline still provides terrible file copy performance. Read bandwidth is only 26 MB/s. So I stopped the tar copy and re-tried the cpio copy. A second copy with cpio results in a read/write data rate of only 54.9 MB/s (vs. the 132 MB/s just experienced). Performance is reduced by
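The tar pipeline referred to is presumably the classic copy idiom, along these lines (directory names hypothetical):

    # Copy a tree by piping a tar archive into a second tar in the target
    cd /tank/from && tar cf - . | ( cd /tank/to && tar xf - )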

Re: [zfs-discuss] Why is Solaris 10 ZFS performance so terrible?

2009-07-04 Thread Joerg Schilling
Bob Friesenhahn bfrie...@simple.dallas.tx.us wrote: A tar pipeline still provides terrible file copy performance. Read bandwidth is only 26 MB/s. So I stopped the tar copy and re-tried the cpio copy. A second copy with cpio results in a read/write data rate of only 54.9 MB/s (vs. the

Re: [zfs-discuss] Migrating 10TB of data from NTFS is there a simple way?

2009-07-04 Thread Erik Trimble
Ian Collins wrote: Ross wrote: Is that accounting for ZFS overhead? I thought it was more than that (but of course, it's great news if not) :-) A raidz2 pool with 8 500G drives showed 2.67TB free. Same here. The ZFS overhead appears to be much smaller than on similar UFS filesystems. E.g.

Re: [zfs-discuss] Why is Solaris 10 ZFS performance so terrible?

2009-07-04 Thread Phil Harman
Joerg Schilling wrote: Phil Harman phil.har...@sun.com wrote: ZFS doesn't mix well with mmap(2). This is because ZFS uses the ARC instead of the Solaris page cache. But mmap() uses the latter. So if anyone maps a file, ZFS has to keep the two caches in sync. cp(1) uses mmap(2). When

Re: [zfs-discuss] Why is Solaris 10 ZFS performance so terrible?

2009-07-04 Thread Bob Friesenhahn
On Sat, 4 Jul 2009, Joerg Schilling wrote: by more than half. Based on yesterday's experience, that may diminish to only 33 MB/s. star -copy -no-fsync bs=8m fs=256m -C from-dir . to-dir is nearly 40% faster than find . | cpio -pdum to-dir. Did you try to use highly
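The two copy methods being compared look roughly like this (from-dir and to-dir are placeholders; ptime(1) added here just to capture elapsed times):

    # star copy, no per-file fsync, 8MB block size, 256MB FIFO
    ptime star -copy -no-fsync bs=8m fs=256m -C from-dir . to-dir
    # the find | cpio alternative (to-dir taken relative to from-dir here)
    ptime sh -c 'cd from-dir && find . | cpio -pdum to-dir'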

Re: [zfs-discuss] Why is Solaris 10 ZFS performance so terrible?

2009-07-04 Thread Phil Harman
Bob Friesenhahn wrote: On Sat, 4 Jul 2009, Phil Harman wrote: If you reboot, your cpio(1) tests will probably go fast again, until someone uses mmap(2) on the files again. I think tar(1) uses read(2), but from my iPod I can't be sure. It would be interesting to see how tar(1) performs if

Re: [zfs-discuss] Why is Solaris 10 ZFS performance so terrible?

2009-07-04 Thread Joerg Schilling
Bob Friesenhahn bfrie...@simple.dallas.tx.us wrote: On Sat, 4 Jul 2009, Joerg Schilling wrote: by more than half. Based on yesterday's experience, that may diminish to only 33 MB/s. star -copy -no-fsync bs=8m fs=256m -C from-dir . to-dir is nearly 40% faster than find .

Re: [zfs-discuss] Why is Solaris 10 ZFS performance so terrible?

2009-07-04 Thread Jonathan Edwards
On Jul 4, 2009, at 11:57 AM, Bob Friesenhahn wrote: This brings me to the absurd conclusion that the system must be rebooted immediately prior to each use. see Phil's later email .. an export/import of the pool or a remount of the filesystem should clear the page cache - with mmap'd files
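In other words, something like the following should get mmap'd file pages out of the page cache without a reboot (pool and dataset names hypothetical):

    # remount just the one filesystem ...
    zfs umount tank/data && zfs mount tank/data
    # ... or export and re-import the whole pool (more disruptive)
    zpool export tank && zpool import tank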

Re: [zfs-discuss] Why is Solaris 10 ZFS performance so terrible?

2009-07-04 Thread Phil Harman
Gary Mills wrote: On Sat, Jul 04, 2009 at 08:48:33AM +0100, Phil Harman wrote: ZFS doesn't mix well with mmap(2). This is because ZFS uses the ARC instead of the Solaris page cache. But mmap() uses the latter. So if anyone maps a file, ZFS has to keep the two caches in sync. That's

Re: [zfs-discuss] Why is Solaris 10 ZFS performance so terrible?

2009-07-04 Thread Bob Friesenhahn
OK, here is the scoop on the dire Solaris 10 (Generic_141415-03) performance bug on my Sun Ultra 40-M2 attached to a StorageTek 2540 with latest firmware. I rebooted the system, used cpio to send the input files to /dev/null, and then immediately used cpio a second time to send the input files
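That benchmark is essentially a read-only pass over the file set, along these lines (exact options unknown; the directory name is hypothetical):

    # First pass right after reboot, then an immediate second pass over the same files
    cd /tank/testfiles
    ptime sh -c 'find . -type f | cpio -o > /dev/null'
    ptime sh -c 'find . -type f | cpio -o > /dev/null'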

Re: [zfs-discuss] Why is Solaris 10 ZFS performance so terrible?

2009-07-04 Thread Bob Friesenhahn
On Sat, 4 Jul 2009, Phil Harman wrote: However, this is only part of the problem. The fundamental issue is that ZFS has its own ARC apart from the Solaris page cache, so whenever mmap() is used, all I/O to that file has to make sure that the two caches are in sync. Hence, a read(2) on a file

Re: [zfs-discuss] [storage-discuss] surprisingly poor performance

2009-07-04 Thread Miles Nordin
rw == Ross Walker rswwal...@gmail.com writes: rw Barriers are disabled by default on ext3 mounts... http://lwn.net/Articles/283161/ https://bugzilla.redhat.com/show_bug.cgi?id=458936 enabled by default on SLES. To enable on other distros: mount -t ext3 -o barrier=1 device mount

Re: [zfs-discuss] [storage-discuss] surprisingly poor performance

2009-07-04 Thread David Magda
On Jul 4, 2009, at 14:30, Miles Nordin wrote: yes, which is why it's worth suspecting knfsd as well. However I don't think you can sell a Solaris system that performs 1/3 as well on better hardware without a real test case showing the fast system's broken. It should be noted that RAID-0

Re: [zfs-discuss] Why is Solaris 10 ZFS performance so terrible?

2009-07-04 Thread Joerg Schilling
Phil Harman phil.har...@sun.com wrote: I think Solaris (if you count SunOS 4.0, which was part of Solaris 1.0) was the first UNIX to get a working implementation of mmap(2) for files (if I recall correctly, BSD 4.3 had a manpage but no implementation for files). From that we got a whole

Re: [zfs-discuss] Why is Solaris 10 ZFS performance so terrible?

2009-07-04 Thread Phil Harman
Bob Friesenhahn wrote: On Sat, 4 Jul 2009, Phil Harman wrote: However, this is only part of the problem. The fundamental issue is that ZFS has its own ARC apart from the Solaris page cache, so whenever mmap() is used, all I/O to that file has to make sure that the two caches are in sync.

Re: [zfs-discuss] Why is Solaris 10 ZFS performance so terrible?

2009-07-04 Thread Bob Friesenhahn
On Sat, 4 Jul 2009, Phil Harman wrote: However, it seems that memory mapping is not responsible for the problem I am seeing here. Memory mapping may make the problem seem worse, but it is clearly not the cause. mmap(2) is what brings ZFS files into the page cache. I think you've shown us

Re: [zfs-discuss] Why is Solaris 10 ZFS performance so terrible?

2009-07-04 Thread Bob Friesenhahn
On Sat, 4 Jul 2009, Jonathan Edwards wrote: this is only going to help if you've got problems in zfetch .. you'd probably see this better by looking for high lock contention in zfetch with lockstat. This is what lockstat says when performance is poor: Adaptive mutex spin: 477 events in
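A typical way to collect that sort of data while the slow copy is running (a sketch; the sampling window and the -D limit are arbitrary):

    # Record kernel lock contention for 30 seconds, showing the 20 hottest locks
    lockstat -D 20 sleep 30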

Re: [zfs-discuss] Why is Solaris 10 ZFS performance so terrible?

2009-07-04 Thread dick hoogendijk
On Sat, 4 Jul 2009 13:03:52 -0500 (CDT) Bob Friesenhahn bfrie...@simple.dallas.tx.us wrote: On Sat, 4 Jul 2009, Joerg Schilling wrote: Did you try to use highly performant software like star? No, because I don't want to tarnish your software's stellar reputation. I am focusing on

Re: [zfs-discuss] Why is Solaris 10 ZFS performance so terrible?

2009-07-04 Thread Phil Harman
Bob Friesenhahn wrote: On Sat, 4 Jul 2009, Phil Harman wrote: However, it seems that memory mapping is not responsible for the problem I am seeing here. Memory mapping may make the problem seem worse, but it is clearly not the cause. mmap(2) is what brings ZFS files into the page cache. I

Re: [zfs-discuss] Why is Solaris 10 ZFS performance so terrible?

2009-07-04 Thread Bob Friesenhahn
On Sat, 4 Jul 2009, Phil Harman wrote: This is not a new problem. It seems that I have been banging my head against this from the time I started using zfs. I'd like to see mpstat 1 for each case, on an otherwise idle system, but then there's probably a whole lot of dtrace I'd like to do
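For anyone wanting to gather the same data, something like this in a second terminal while each copy runs would do (one-second interval, as suggested):

    # Per-CPU utilisation, one-second samples
    mpstat 1
    # Per-device I/O statistics, skipping idle devices
    iostat -xnz 1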