But can't this behavior be "tuned" (so to speak... I hate that word, but I can't think of something better) by increasing the recordsize?
For DSS applications, video streaming, etc. (apps that read very large files), I seem to remember, from some ZFS work many, many months ago, getting very good (excellent?) sequential read performance by tweaking recordsize up to 1MB. (I may be remembering this wrong... will recheck this.)
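If I'm remembering the knob correctly, it's just the per-dataset property, something along these lines (the pool/dataset name here is made up, and whether recordsize really goes above the documented 128K maximum is exactly the part I need to recheck):

    # check the current recordsize on the dataset
    zfs get recordsize tank/dbdata

    # raise it for large sequential I/O before loading the data;
    # the change only applies to files created afterward
    zfs set recordsize=128K tank/dbdata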
Thanks,
/jim
Anton B. Rang wrote:
If your database performance is dominated by sequential reads, ZFS may not be
the best solution from a performance perspective. Because ZFS uses a
write-anywhere layout, any database table which is being updated will quickly
become scattered on the disk, so that sequential read patterns become random
reads.
Of course, ZFS has other benefits, such as ease of use and protection from many
sources of data corruption; if you want to use ZFS in this application, though,
I'd expect that you will need substantially more raw I/O bandwidth than UFS or
QFS (which update in place) would require.
(If you have predictable access patterns to the tables, a QFS setup which ties
certain tables to particular LUNs using stripe groups might work well, as you
can guarantee that accesses to one table will not interfere with accesses to
another.)
As always, your application is the real test. ;-)
This message posted from opensolaris.org
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss