On 09 October, 2009 - Brandon Hume sent me these 2.0K bytes:

> I've got a mail machine here that I built using ZFS boot/root.  It's
> been having some major I/O performance problems, which I posted once
> before... but that post seems to have disappeared.
> 
> Now I've managed to obtain another identical machine, and I've built
> it in the same way as the original.  Running Solaris 10 U6, I've got
> it fully patched as of 2009/10/06.  It's using a mirrored disk via the
> PERC (LSI Megaraid) controller.
> 
> The main problem seems to be ZFS.  If I do the following on a UFS filesystem:
> 
>  # /usr/bin/time dd if=/dev/zero of=whee.bin bs=1024000 count=<x>
> 
> ... then I get "real" times of the following:
> 
>  x     time
> 128  35. 4
> 256  1:01.8
> 512  2:19.8

Is this minutes:seconds.fraction? If so, you're looking at 3-4 MB/s..
I would say something is wrong.
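
Rough math (bs=1024000, so each block is roughly 1 MB):

 128 * 1024000 bytes / 35.4 s   ~= 3.7 MB/s
 256 * 1024000 bytes / 61.8 s   ~= 4.2 MB/s
 512 * 1024000 bytes / 139.8 s  ~= 3.8 MB/s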

> It's all very linear and fairly decent.

Decent?!

> However, if I then destroy that filesystem and recreate it using ZFS
> (no special options or kernel variables set) performance degrades
> substantially.  With the same dd, I get:
> 
> x      time
> 128  3:45.3
> 256  6:52.7
> 512  15:40.4

0.5 MB/s.. that's floppy speed :P
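
Same math for the ZFS runs:

 128 * 1024000 bytes / 225.3 s  ~= 0.58 MB/s
 256 * 1024000 bytes / 412.7 s  ~= 0.64 MB/s
 512 * 1024000 bytes / 940.4 s  ~= 0.56 MB/s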

> So basically a 6.5x loss across the board.  I realize that a simple
> 'dd' is an extremely weak test, but real-world use on these machines
> shows similar problems... long delays logging in, and running a
> command that isn't cached can take 20-30 seconds (even something as
> simple as 'psrinfo -vp').
> 
> Ironically, the machine works just fine for simple email, because the
> files are small and very transient and thus can exist quite easily
> just in memory.  But more complex things, like a local copy of our
> mailmaps, cripple the machine.

.. because something is messed up, and for whatever reason ZFS suffers
from it much more than UFS does..

> I'm about to rebuild the machine with the RAID controller in
> passthrough mode, and I'll see what that accomplishes.  Most of the
> machines here are Linux and use the hardware RAID1, so I was/am
> hesitant to "break standard" that way.  Does anyone have any
> experience or suggestions for trying to make ZFS boot+root work fine
> on this machine?

Check, for instance, 'iostat -xnzmp 1' while doing this and see if any
disk is behaving badly (high service times, etc.). Even your "speedy"
3-4 MB/s is nowhere near what you should be getting, unless you've
connected a bunch of floppy drives to your PERC..
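
For example (just a sketch; the count is arbitrary, and run it from the
filesystem you're testing), start the write test in one terminal:

 # /usr/bin/time dd if=/dev/zero of=whee.bin bs=1024000 count=512

and watch per-device statistics once a second in another:

 # iostat -xnzmp 1

Keep an eye on the asvc_t (average service time) and %b (busy) columns;
a disk sitting at 100% busy or showing service times in the hundreds of
milliseconds usually points at the culprit.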

/Tomas
-- 
Tomas Ögren, st...@acc.umu.se, http://www.acc.umu.se/~stric/
|- Student at Computing Science, University of Umeå
`- Sysadmin at {cs,acc}.umu.se