Re: [zfs-discuss] Why is Solaris 10 ZFS performance so terrible?

2009-07-28 Thread Bob Friesenhahn
high-speed streams can be ramped up quickly while files belonging to a pool which has recently fed low-speed streams can be ramped up more conservatively (until proven otherwise) in order to not flood memory and starve the I/O needed by other streams. Bob -- Bob Friesenhahn bfrie

Re: [zfs-discuss] USF drive on S10u7

2009-07-28 Thread Bob Friesenhahn
e ones used for zfs. Bob -- Bob Friesenhahn bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/ GraphicsMagick Maintainer,http://www.GraphicsMagick.org/ ___ zfs-discuss mailing list zfs-discuss@opensolaris.org http://mail.ope

Re: [zfs-discuss] When writing to SLOG at full speed all disk IO is blocked

2009-07-28 Thread Bob Friesenhahn
uge synchronous writes should necessarily block reading (particularly if the reads are for unrelated blocks/files), but it is understandable if zfs focuses more on the writes. Bob

Re: [zfs-discuss] When writing to SLOG at full speed all disk IO is blocked

2009-07-27 Thread Bob Friesenhahn
riters of important updates may be blocked due to many readers trying to access those important updates. Bob

Re: [zfs-discuss] Help with setting up ZFS

2009-07-26 Thread Bob Friesenhahn
address, I supply the identification of the system via a script argument. The name could be obtained from `uname -n` or `hostname` instead. Bob

Re: [zfs-discuss] Another user looses his pool (10TB) in this case and 40 days work

2009-07-26 Thread Bob Friesenhahn
directly. Zfs removes the RAID obfuscation which exists in traditional RAID systems. Bob

Re: [zfs-discuss] Another user looses his pool (10TB) in this case and 40 days work

2009-07-25 Thread Bob Friesenhahn
t the disks write data in the most efficient order, but it absolutely must commit all of the data when requested so that the checkpoint is valid. Bob

Re: [zfs-discuss] The importance of ECC RAM for ZFS

2009-07-24 Thread Bob Friesenhahn
On Fri, 24 Jul 2009, Frank Middleton wrote: On 07/24/09 04:35 PM, Bob Friesenhahn wrote: Regardless, it [VirtualBox] has committed a crime. But ZFS is a journalled file system! Any hardware can lose a flush; From my understanding, ZFS is not a journalled file system. ZFS relies on

Re: [zfs-discuss] slog writing patterns vs SSD tech.

2009-07-24 Thread Bob Friesenhahn
then a time may come when the drive suddenly "hits the wall" and is no longer able to erase the data as fast as it comes in. Bob

Re: [zfs-discuss] The importance of ECC RAM for ZFS

2009-07-24 Thread Bob Friesenhahn
d a crime. Regardless, it has committed a crime. Bob

Re: [zfs-discuss] SSD's and ZFS...

2009-07-24 Thread Bob Friesenhahn
=1253.74 ops/sec Min xfer= 85589.00 ops Bob

Re: [zfs-discuss] SSD's and ZFS...

2009-07-24 Thread Bob Friesenhahn
On Fri, 24 Jul 2009, Bob Friesenhahn wrote: This seems like rather low random write performance. My 12-drive array of rotating rust obtains 3708.89 ops/sec. In order to be effective, it seems that a synchronous write log should perform considerably better than the backing store. Actually

Re: [zfs-discuss] SSD's and ZFS...

2009-07-24 Thread Bob Friesenhahn
5000 on reads. This seems like rather low random write performance. My 12-drive array of rotating rust obtains 3708.89 ops/sec. In order to be effective, it seems that a synchronous write log should perform considerably better than the backing store. Bob

Re: [zfs-discuss] L2ARC support in Solaris 10 (Update 8?)

2009-07-22 Thread Bob Friesenhahn
ggest maxing out your server RAM capacity before worrying about adding an L2ARC. The reason why is that RAM is full speed and contains the L1ARC. The only reason to do otherwise is if you can't afford it. Bob

Re: [zfs-discuss] Why is Solaris 10 ZFS performance so terrible?

2009-07-22 Thread Bob Friesenhahn
> /dev/null' 144000768 blocks
real 35m22.41s
user 0m4.40s
sys 1m14.22s
Notice that with 3X the files, the throughput is dramatically reduced and the time is the same for both cases. Bob

Re: [zfs-discuss] virtualization, alignment and zfs variation stripes

2009-07-22 Thread Bob Friesenhahn
only part of the block is updated. Bob

Re: [zfs-discuss] SSDs get faster and less expensive

2009-07-21 Thread Bob Friesenhahn
on a server and then saved off to bulk storage. The local disks on the server surely see massive amounts of use. Bob

Re: [zfs-discuss] SSDs get faster and less expensive

2009-07-21 Thread Bob Friesenhahn
theoretically results in Poof! Bob

Re: [zfs-discuss] SSDs get faster and less expensive

2009-07-21 Thread Bob Friesenhahn
e end of its lifetime. Unless drive ages are carefully staggered, or different types of drives are intentionally used, it might be that data redundancy does not help. Poof! Bob

Re: [zfs-discuss] SSDs get faster and less expensive

2009-07-21 Thread Bob Friesenhahn
This is pretty exciting stuff. We just have to switch to Windows 7. :-) I notice that the life of the drives depends considerably on write cycles. 20GB/day 24/7 does not support a data-intensive environment, which could easily write 10-20X that much. Bob

Re: [zfs-discuss] Why is Solaris 10 ZFS performance so terrible?

2009-07-20 Thread Bob Friesenhahn
thing to get right. Bob

Re: [zfs-discuss] Motherboard for home zfs/solaris file server

2009-07-20 Thread Bob Friesenhahn
K10 CPU caused any problem for Solaris. The chipset used on the motherboard is probably what you should pay attention to. Bob

Re: [zfs-discuss] triple-parity: RAID-Z3

2009-07-20 Thread Bob Friesenhahn
other parity disk to an existing raidz vdev but I don't know how much work that entails. Zfs development seems to be overwhelmed with marketing-driven requirements lately and it is time to get back to brass tacks and make sure that the parts already developed are truly enterprise-grade.

Re: [zfs-discuss] Another user looses his pool (10TB) in this case and 40 days work

2009-07-19 Thread Bob Friesenhahn
that the 10TB data lost to a failed pool was not lost due to lack of ECC. It was lost because VirtualBox intentionally broke the guest operating system. Bob

Re: [zfs-discuss] triple-parity: RAID-Z3

2009-07-19 Thread Bob Friesenhahn
nice. Before developers worry about such exotic features, I would rather that they attend to the gross performance issues so that zfs performs at least as well as Windows NTFS or Linux XFS in all common cases. Bob

Re: [zfs-discuss] Another user looses his pool (10TB) in this case and 40 days work

2009-07-19 Thread Bob Friesenhahn
ctim. I think that the standard disclaimer "Always use protection" applies here. Victims who do not use protection should assume substantial guilt for their subsequent woes. Bob

Re: [zfs-discuss] Another user looses his pool (10TB) in this case and 40 days work

2009-07-19 Thread Bob Friesenhahn
ed, and not used for any critical application. Bob

Re: [zfs-discuss] Another user looses his pool (10TB) in this case and 40 days work

2009-07-19 Thread Bob Friesenhahn
ware cache sync works and is respected. Without taking advantage of the drive caches, zfs would be considerably less performant. Bob

Re: [zfs-discuss] triple-parity: RAID-Z3

2009-07-19 Thread Bob Friesenhahn
easy. A RAID system with distributed parity (like raidz) does not have a "parity device". Instead, all disks are treated as equal. Without distributed parity you have a bottleneck and it becomes difficult to scale the array to different stripe sizes. Bob

Re: [zfs-discuss] Understanding SAS/SATA Backplanes and Connectivity

2009-07-16 Thread Bob Friesenhahn
Bob

Re: [zfs-discuss] Why is Solaris 10 ZFS performance so terrible?

2009-07-16 Thread Bob Friesenhahn
I have received email that Sun CR numbers 6861397 & 6859997 have been created to get this performance problem fixed. Bob

Re: [zfs-discuss] Why is Solaris 10 ZFS performance so terrible?

2009-07-15 Thread Bob Friesenhahn
[truncated iostat output: all-zero activity for c6t0d0, c2t202400A0B83A8A0Bd31, c3t202500A0B83A8A0Bd31, and freddy:vold(pid508)] Bob

Re: [zfs-discuss] Why is Solaris 10 ZFS performance so terrible?

2009-07-15 Thread Bob Friesenhahn
em with 40 fast SAS drives. This is the opposite situation from the zfs writes which periodically push the hardware to its limits. Bob

Re: [zfs-discuss] Why is Solaris 10 ZFS performance so terrible?

2009-07-15 Thread Bob Friesenhahn
laris improvement is not as much as was assumed. Bob

Re: [zfs-discuss] Why is Solaris 10 ZFS performance so terrible?

2009-07-15 Thread Bob Friesenhahn
ng's hypothesis (which is the same as my own). Bob

Re: [zfs-discuss] Why is Solaris 10 ZFS performance so terrible?

2009-07-14 Thread Bob Friesenhahn
improvement. This is demonstrated by Scott Lawson's little two-disk mirror almost producing the same performance as our much more exotic setups. Evidence suggests that SPARC systems are doing better than x86. Bob

Re: [zfs-discuss] Why is Solaris 10 ZFS performance so terrible?

2009-07-14 Thread Bob Friesenhahn
what are you running?): 4m7.13s This new one from Scott Lawson is incredible (but technically quite possible): SPARC Enterprise M3000, single SAS mirror pair: 3m25.13s Bob

Re: [zfs-discuss] Why is Solaris 10 ZFS performance so terrible?

2009-07-14 Thread Bob Friesenhahn
t your little two-disk mirror reads as fast as mega Sun systems with 38+ disks and striped vdevs to boot. Incredible! Does this have something to do with your well-managed power and cooling? :-) Bob

Re: [zfs-discuss] Why is Solaris 10 ZFS performance so terrible?

2009-07-14 Thread Bob Friesenhahn
Solaris kernel is throttling the read rate so that throwing more and faster hardware at the problem does not help. Bob

Re: [zfs-discuss] Why is Solaris 10 ZFS performance so terrible?

2009-07-14 Thread Bob Friesenhahn
has a syntax error when used for Solaris 10 U7. Bob

Re: [zfs-discuss] Why is Solaris 10 ZFS performance so terrible?

2009-07-14 Thread Bob Friesenhahn
ng to much better performance than we are seeing. Bob

Re: [zfs-discuss] Why is Solaris 10 ZFS performance so terrible?

2009-07-14 Thread Bob Friesenhahn
details, the script is updated to dump some system info and the pool configuration. Refresh from http://www.simplesystems.org/users/bfriesen/zfs-discuss/zfs-cache-test.ksh Bob

Re: [zfs-discuss] Why is Solaris 10 ZFS performance so terrible?

2009-07-14 Thread Bob Friesenhahn
ad 540MB/second from a huge file). Do the math and see if you think that zfs is giving you the read performance you expect based on your hardware. I think that we are encountering several bugs here. We also have a general read bottleneck. Bob

Re: [zfs-discuss] Why is Solaris 10 ZFS performance so terrible?

2009-07-13 Thread Bob Friesenhahn
rejected. If star is truly more efficient than cpio, it may make the difference even more obvious. What did you discover when you modified my test script to use 'star' instead? Bob

Re: [zfs-discuss] Why is Solaris 10 ZFS performance so terrible?

2009-07-13 Thread Bob Friesenhahn
been using ZFS (1-3/4 years). Sometimes it takes a while for me to wake up and smell the coffee. Meanwhile I have opened a formal service request (IBIS 71326296) with Sun Support. Bob

Re: [zfs-discuss] Why is Solaris 10 ZFS performance so terrible?

2009-07-13 Thread Bob Friesenhahn
, I have not filed a bug report yet. Any problem report to Sun's Service department seems to require at least one day's time. I was curious to see if recent OpenSolaris suffers from the same problem, but posted results (thus far) are not as conclusive as they are for Solaris 10.

Re: [zfs-discuss] Why is Solaris 10 ZFS performance so terrible?

2009-07-13 Thread Bob Friesenhahn
this testing mmap is not being used (cpio does not use mmap) so the page cache is not an issue. It does become an issue for 'cp -r' though, where we see the I/O substantially (and essentially permanently) reduced even further for impacted files until the filesystem is unmounted. Bob

Re: [zfs-discuss] Why is Solaris 10 ZFS performance so terrible?

2009-07-13 Thread Bob Friesenhahn
che. Then once it finally determines that there is no cached data after all, it issues a read request. Even the "better" read performance is 1/2 of what I would expect from my hardware and based on prior test results from 'iozone'. More prefetch would surely help. Bob

Re: [zfs-discuss] Why is Solaris 10 ZFS performance so terrible?

2009-07-13 Thread Bob Friesenhahn
when caching is disabled. I don't think that this is strictly a bug since it is what the database folks are looking for. Bob

Re: [zfs-discuss] Why is Solaris 10 ZFS performance so terrible?

2009-07-13 Thread Bob Friesenhahn
caused by purging old data from the ARC. If these delays were caused by purging data from the ARC, then 'zfs iostat' would start showing lower read performance once the ARC becomes full, but that is not the case. Bob

Re: [zfs-discuss] Why is Solaris 10 ZFS performance so terrible?

2009-07-13 Thread Bob Friesenhahn
portable USB drives that I use for backup because of ZFS. This is making me madder and madder by the minute. Bob

Re: [zfs-discuss] Why is Solaris 10 ZFS performance so terrible?

2009-07-13 Thread Bob Friesenhahn
is an OpenSolaris issue as well. It seems likely to be more evident with fast SAS disks or SAN devices rather than a few SATA disks since the SATA disks have more access latency. Pools composed of mirrors should offer less read latency as well. Bob

Re: [zfs-discuss] Why is Solaris 10 ZFS performance so terrible?

2009-07-13 Thread Bob Friesenhahn
On Mon, 13 Jul 2009, Alexander Skwar wrote: This is a M4000 with 32 GB RAM and two HDs in a mirror. I think that you should edit the script to increase the file count since your RAM size is big enough to cache most of the data. Bob

Re: [zfs-discuss] Why is Solaris 10 ZFS performance so terrible?

2009-07-12 Thread Bob Friesenhahn
/dev/null' 48000247 blocks
real 23m50.27s
user 2m41.81s
sys 9m46.76s
Feel free to clean up with 'zfs destroy rpool/zfscachetest'. I am interested to hear about systems which do not suffer from this bug. Bob

Re: [zfs-discuss] deduplication

2009-07-12 Thread Bob Friesenhahn
On Sun, 12 Jul 2009, Bob Friesenhahn wrote: This is the first I have heard about a ZFS deduplication project. Is there a public announcement (from Sun) somewhere that there is a ZFS deduplication project or are you just speculating that there might be such a project? Ahhh, I found some

Re: [zfs-discuss] deduplication

2009-07-12 Thread Bob Friesenhahn
times and zfs send. To me, getting existing features to operate in a stellar fashion is more important than adding new features. Rome was not built in a day and we all know how the Tower of Babel (http://en.wikipedia.org/wiki/Tower_of_Babel) turned out. Bob

Re: [zfs-discuss] zfs root, jumpstart and flash archives

2009-07-08 Thread Bob Friesenhahn
On Wed, 8 Jul 2009, Fredrich Maney wrote: Any idea what the Patch ID was? x86: 119535-15, SPARC: 119534. Description of change: "6690473 request to have flash support for ZFS root install". Bob

Re: [zfs-discuss] zfs root, jumpstart and flash archives

2009-07-08 Thread Bob Friesenhahn
Solaris 10 patch for supporting Flash archives on ZFS came out about a week ago. Bob

Re: [zfs-discuss] Why is Solaris 10 ZFS performance so terrible?

2009-07-07 Thread Bob Friesenhahn
tuned so it is likely that a single process won't see much benefit. Bob

Re: [zfs-discuss] Why is Solaris 10 ZFS performance so terrible?

2009-07-07 Thread Bob Friesenhahn
existing working madvise() functionality. ZFS seems to want to cache all read data in the ARC, period. Bob

Re: [zfs-discuss] Why is Solaris 10 ZFS performance so terrible?

2009-07-06 Thread Bob Friesenhahn
madvise() provides the ability to influence I/O scheduling, or to flush stale data from memory. In recent Solaris, it also includes provisions which allow applications to improve their performance on NUMA systems. Bob

Re: [zfs-discuss] Why is Solaris 10 ZFS performance so terrible?

2009-07-04 Thread Bob Friesenhahn
unbusy the CPU cores are when I/O is bad. Bob

Re: [zfs-discuss] Why is Solaris 10 ZFS performance so terrible?

2009-07-04 Thread Bob Friesenhahn
[truncated profiling output; samples dominated by cv_timedwait_sig] Bob

Re: [zfs-discuss] Why is Solaris 10 ZFS performance so terrible?

2009-07-04 Thread Bob Friesenhahn
wn us that once you've copied files with cp(1) - which does use mmap(2) - that anything that uses read(2) on the same files is impacted. The problem is observed with cpio, which does not use mmap. This is immediately after a reboot or unmount/mount of the filesystem. Bob

Re: [zfs-discuss] Why is Solaris 10 ZFS performance so terrible?

2009-07-04 Thread Bob Friesenhahn
which has at some time been mapped, will be impacted, even if the file is no longer mapped. However, it seems that memory mapping is not responsible for the problem I am seeing here. Memory mapping may make the problem seem worse, but it is clearly not the cause. Bob

Re: [zfs-discuss] Why is Solaris 10 ZFS performance so terrible?

2009-07-04 Thread Bob Friesenhahn
h, but I have attached output from 'hotkernel' while a subsequent cpio copy is taking place. It shows that the kernel is mostly sleeping. This is not a new problem. It seems that I have been banging my head against this from the time I started using zfs. Bob

Re: [zfs-discuss] Why is Solaris 10 ZFS performance so terrible?

2009-07-04 Thread Bob Friesenhahn
Did you try to use highly performant software like star? No, because I don't want to tarnish your software's stellar reputation. I am focusing on Solaris 10 bugs today. Bob

Re: [zfs-discuss] Why is Solaris 10 ZFS performance so terrible?

2009-07-04 Thread Bob Friesenhahn
ooted immediately prior to each use. /etc/system tunables are currently:
set zfs:zfs_arc_max = 0x28000
set zfs:zfs_write_limit_override = 0xea60
set zfs:zfs_vdev_max_pending = 5
Bob

Re: [zfs-discuss] surprisingly poor performance

2009-07-04 Thread Bob Friesenhahn
SSDs. Bob

Re: [zfs-discuss] Why is Solaris 10 ZFS performance so terrible?

2009-07-04 Thread Bob Friesenhahn
tools to improve application performance which are just not available via traditional I/O. Bob

Re: [zfs-discuss] Why is Solaris 10 ZFS performance so terrible?

2009-07-04 Thread Bob Friesenhahn
cycle is more on the order of one second in every five. Bob

Re: [zfs-discuss] Why is Solaris 10 ZFS performance so terrible?

2009-07-04 Thread Bob Friesenhahn
configure the various tunables to match after that. Yes, this comes off of a 2540. I used iozone for testing and see that through zfs, the hardware is able to write a 64GB file at 380 MB/s and read at 551 MB/s. Unfortunately, this does not seem to translate well to the actual task.

Re: [zfs-discuss] Why is Solaris 10 ZFS performance so terrible?

2009-07-03 Thread Bob Friesenhahn
On Fri, 3 Jul 2009, Bob Friesenhahn wrote:
Copy Method                           Data Rate
===============================================
cpio -pdum                            75 MB/s
cp -r                                 32 MB/s
tar -cf - . | (cd dest && tar -xf -)

[zfs-discuss] Why is Solaris 10 ZFS performance so terrible?

2009-07-03 Thread Bob Friesenhahn
erformance. If I am encountering this problem, then it is likely that many others are as well. Bob

Re: [zfs-discuss] Hangs when transferring ftp off of a ZFS filesystem (truss included)

2009-07-03 Thread Bob Friesenhahn
the end of the transfer or if the application waits for a response while the kernel is still waiting for more data from the application. It is necessary to remove TCP_CORK before writing the final data, and if the application guesses wrong, the connection will hang. Bob

Re: [zfs-discuss] ZFS write I/O stalls

2009-07-03 Thread Bob Friesenhahn
intervals results in large delays while the system syncs, followed by normal response times while the system buffers more input... I don't see any such problems unless compression is enabled. When compression is enabled, the TXG sync causes definite response time issues in the syste

Re: [zfs-discuss] ZFS write I/O stalls

2009-07-03 Thread Bob Friesenhahn
On Fri, 3 Jul 2009, Victor Latushkin wrote: On 02.07.09 22:05, Bob Friesenhahn wrote: On Thu, 2 Jul 2009, Zhu, Lejun wrote: Actually it seems to be 3/4: 3/4 is an awful lot. That would be 15 GB on my system, which explains why the "5 seconds to write" rule is dominant. 3/4

Re: [zfs-discuss] surprisingly poor performance

2009-07-03 Thread Bob Friesenhahn
huge variation in performance (and cost) with so-called "enterprise" SSDs. SSDs with capacitor-backed write caches seem to be fastest. Bob

Re: [zfs-discuss] Interposing on readdir and friends

2009-07-02 Thread Bob Friesenhahn
correct order. Ugly, but seems to have worked for many years.) Oops! The only solution is to add some code to sort the results. You can use qsort() for that. Depending on existing directory order is an error. Bob

Re: [zfs-discuss] ZFS write I/O stalls

2009-07-02 Thread Bob Friesenhahn
m that zfs is incapable of reading during all/much of the time it is syncing a TXG. Even if the TXG is written more often, readers will still block, resulting in a similar cumulative effect on performance. Bob

Re: [zfs-discuss] Q: zfs log device

2009-07-01 Thread Bob Friesenhahn
us is good - no issues or errors. any ideas? Try using direct I/O (the -D flag) in bonnie++. You'll need at least version 1.03e. If this -D flag uses the Solaris directio() function, then it will do nothing for ZFS. It only works for UFS and NFS. Bob

Re: [zfs-discuss] ZFS write I/O stalls

2009-07-01 Thread Bob Friesenhahn
This causes me to believe that the algorithm is not implemented as described in Solaris 10. Bob

Re: [zfs-discuss] ZFS write I/O stalls

2009-06-30 Thread Bob Friesenhahn
where the sawtooth occurs. More at 11. Bob

Re: [zfs-discuss] ZFS write I/O stalls

2009-06-30 Thread Bob Friesenhahn
issue does not apply at all to NFS service, database service, or any other usage which does synchronous writes. Bob

Re: [zfs-discuss] ZFS write I/O stalls

2009-06-30 Thread Bob Friesenhahn
.png Perfmeter display for 768 MB: http://www.simplesystems.org/users/bfriesen/zfs-discuss/perfmeter-768mb.png Bob

Re: [zfs-discuss] ZFS, power failures, and UPSes

2009-06-30 Thread Bob Friesenhahn
wise the outage is usually longer than the UPSes can stay up since the problem requires human attention. A standby generator is needed for any long outages. Bob

Re: [zfs-discuss] ZFS write I/O stalls

2009-06-30 Thread Bob Friesenhahn
g globally blocked and it is not just misbehavior of a single process. Bob

Re: [zfs-discuss] Useful Emulex tunable for i386

2009-06-30 Thread Bob Friesenhahn
On Sun, 28 Jun 2009, Bob Friesenhahn wrote: On Sun, 28 Jun 2009, Bob Friesenhahn wrote: Today I experimented with doubling this value to 688128 and was happy to see a large increase in sequential read performance from my ZFS pool which is based on six mirror vdevs. Sequential read

Re: [zfs-discuss] ZFS write I/O stalls

2009-06-29 Thread Bob Friesenhahn
ation issue and it will be resolved soon. Bob

Re: [zfs-discuss] slow ls or slow zfs

2009-06-29 Thread Bob Friesenhahn
ed below. Bob -- Bob Friesenhahn bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/ GraphicsMagick Maintainer,http://www.GraphicsMagick.org/ #!/bin/ksh # Date: Mon, 14 Apr 2008 15:49:41 -0700 # From: Jeff Bonwick # To: Henrik Hjort # Cc: zfs-discuss@opensolaris.or

Re: [zfs-discuss] Useful Emulex tunable for i386

2009-06-28 Thread Bob Friesenhahn
On Sun, 28 Jun 2009, Bob Friesenhahn wrote: Today I experimented with doubling this value to 688128 and was happy to see a large increase in sequential read performance from my ZFS pool, which is based on six mirror vdevs. Sequential read performance jumped from 552787 KB/s to 799626 KB/s

[zfs-discuss] Useful Emulex tunable for i386

2009-06-28 Thread Bob Friesenhahn
read performance by balancing the reads from the mirror devices. Now the read performance is almost 2X the write performance. Bob -- Bob Friesenhahn bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/ GraphicsMagick Maintainer,http://www.GraphicsMagic
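[Editor's note] The snippet above reasons that balanced mirror reads approach twice the write throughput, because writes must land on both sides of each mirror while reads can be served from either side. A minimal sketch of that arithmetic, with illustrative per-disk numbers (not figures from the thread):

```python
# Why mirror reads can approach 2x write throughput on a mirrored pool.
# Assumed numbers for illustration only.
per_disk_mb_s = 100   # sustained throughput of one disk (assumed)
mirrors = 6           # six 2-way mirror vdevs, as in the thread

# Writes hit both disks of each mirror, so the pool delivers only
# one disk's worth of bandwidth per mirror pair.
write_mb_s = per_disk_mb_s * mirrors

# Reads can be load-balanced across both sides of every mirror,
# so all twelve disks contribute.
read_mb_s = per_disk_mb_s * mirrors * 2

print(read_mb_s / write_mb_s)   # 2.0
```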

Re: [zfs-discuss] slow ls or slow zfs

2009-06-26 Thread Bob Friesenhahn
obvious since the disks will only be driven as hard as the slowest disk and so the slowest disk may not seem much slower. Bob -- Bob Friesenhahn bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/ GraphicsMagick Maintainer,http://www.GraphicsMagick.org

Re: [zfs-discuss] slow ls or slow zfs

2009-06-26 Thread Bob Friesenhahn
the slow disk a bit more difficult. Bob -- Bob Friesenhahn bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/ GraphicsMagick Maintainer,http://www.GraphicsMagick.org/

Re: [zfs-discuss] ZFS for iSCSI based SAN

2009-06-26 Thread Bob Friesenhahn
earlier zpool iostat data for iSCSI). Isn't this what we expect, because NFS does syncs, while iSCSI does not (assumed)? If iSCSI does not do syncs (presumably it should when a cache flush is requested) then NFS is safer in case the server crashes and reboots. Bob -- Bob Friesenhahn
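[Editor's note] The durability distinction drawn above can be illustrated with local-filesystem semantics: an NFS server must not acknowledge a COMMIT until data is on stable storage, which corresponds to fsync(); an iSCSI target offers the same guarantee only if it honors SCSI cache-flush commands. A hedged sketch of the synchronous-write pattern:

```python
# Sketch of a durable (synchronous) write: the write is not considered
# safe until fsync() returns, mirroring the guarantee an NFS server
# must provide before acknowledging a COMMIT.
import os
import tempfile

fd, path = tempfile.mkstemp()
try:
    os.write(fd, b"important update\n")
    os.fsync(fd)   # block until the data reaches stable storage
finally:
    os.close(fd)
    os.remove(path)
print("durable")
```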

Re: [zfs-discuss] ZFS write I/O stalls

2009-06-25 Thread Bob Friesenhahn
s internal buffers are much smaller than that, and fiber channel device drivers are not allowed to consume much memory either. To make matters worse, I am using ZFS mirrors so the amount of data written to the array in those five seconds is doubled to 3.6GB. Bob -- Bob Friese
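[Editor's note] The 3.6 GB figure above follows from simple arithmetic: a transaction group of about 1.8 GB (inferred from the thread), doubled by mirroring, flushed within the then-default 5-second txg interval. A sketch of that calculation:

```python
# Device-level write burst implied by a mirrored ZFS txg flush.
# txg size is inferred from the thread's 3.6 GB figure; the 5-second
# interval was the default txg sync interval at the time.
txg_bytes = 1.8 * 2**30   # data buffered in one transaction group
mirror_copies = 2         # every block is written to both sides of a mirror
txg_interval_s = 5        # txg sync interval (seconds)

device_bytes = txg_bytes * mirror_copies
throughput = device_bytes / txg_interval_s   # bytes/s the array must absorb

print(round(device_bytes / 2**30, 1))   # 3.6 (GB per flush)
print(round(throughput / 2**20))        # 737 (MB/s burst)
```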

Re: [zfs-discuss] ZFS write I/O stalls

2009-06-25 Thread Bob Friesenhahn
. Since Matt Ahrens has been working on it for almost a year, it must be almost fixed by now. :-) I am not sure how the queue depth is managed, but it seems possible to detect when reads are blocked by bulk writes and make some automatic adjustments to improve balance. Bob -- Bob Friesenhahn bfrie

Re: [zfs-discuss] ZFS write I/O stalls

2009-06-24 Thread Bob Friesenhahn
even though the large ZFS flushes are taking place. This proves that my application is seeing stalled reads rather than stalled writes. Bob -- Bob Friesenhahn bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/ GraphicsMagick Maintainer,http://www.Gra

Re: [zfs-discuss] Best controller card for 8 SATA drives ?

2009-06-24 Thread Bob Friesenhahn
250 MB/s. Bob -- Bob Friesenhahn bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/ GraphicsMagick Maintainer,http://www.GraphicsMagick.org/

Re: [zfs-discuss] ZFS write I/O stalls

2009-06-24 Thread Bob Friesenhahn
386M Sun_2540 460G 1.18T 142 792 15.8M 82.7M Sun_2540 460G 1.18T 375 0 46.9M 0 Here is an interesting discussion thread on another list that I had not seen before: http://opensolaris.org/jive/thread.jspa?messageID=347212 Bob -- Bob Friesenhahn bfri
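[Editor's note] The figures quoted above are `zpool iostat` output whose columns ran together in the archive. A minimal parsing sketch showing the column layout (pool, allocated, free, read/write operations, read/write bandwidth); the sample line is reconstructed from the snippet, with spacing assumed:

```python
# Column layout of a `zpool iostat` data line, reconstructed from the
# collapsed snippet above (field spacing is assumed).
line = "Sun_2540  460G  1.18T  142  792  15.8M  82.7M"

pool, alloc, free, ops_read, ops_write, bw_read, bw_write = line.split()
print(pool, ops_write, bw_write)   # Sun_2540 792 82.7M
```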
