Bob Friesenhahn wrote:
> On Sat, 26 Jul 2008, Bob Friesenhahn wrote:
>
>> I suspect that the maximum peak latencies have something to do with
>> zfs itself (or something in the test program) rather than the pool
>> configuration.
>
> As confirmation that the reported timings have virtually nothing to do
> with the pool configuration, I ran the program on a two-drive ZFS
> mirror pool consisting of two cheap 500MB USB drives. The average
> latency was not much worse. The peak latency values are often larger,
> but the maximum peak is still on the order of 9000 microseconds.
Is it doing buffered or sync writes? I'll try it later today or tomorrow...

> I then ran the test on a freshly created single-drive UFS filesystem
> (300GB 15K RPM SAS drive) and see that the average latency is somewhat
> lower, but the maximum peak for each interval is typically much higher
> (at least 1200 but often 4000). I even saw a measured peak as high as
> 22224.
>
> Based on the findings, it seems that using the 2540 is a complete
> waste if two cheap USB drives in a zfs mirror pool can almost obtain
> the same timings. UFS on the fast SAS drive performed worse.
>
> I did not run your program in a real-time scheduling class (see
> priocntl). Perhaps it would perform better using real-time
> scheduling. It might also do better in a fixed-priority class.

This might be more important. But a better solution is to assign a
processor set to run only the application -- a good idea any time you
need a predictable response.
 -- richard

_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss