On Fri, Sep 25, 2009 at 5:47 PM, Marion Hakanson <hakan...@ohsu.edu> wrote:
> j...@jamver.id.au said:
>> For a predominantly NFS server purpose, it really looks like a case of the
>> slog has to outperform your main pool for continuous write speed as well as
>> an instant response time as the primary criterion. Which might as well be a
>> fast (or group of fast) SSDs or 15kRPM drives with some NVRAM in front of
>> them.
>
> I wonder if you ran Richard Elling's "zilstat" while running your
> workload.  That should tell you how much ZIL bandwidth is needed,
> and it would be interesting to see if its stats match with your
> other measurements of slog-device traffic.

Yes, but if the workload is coming in over NFS you can simply measure
the synchronous write traffic in MB/s and use that as a rough guideline.

The problem is that most SSD manufacturers quote sustained throughput
at large IO sizes, say 4MB, rather than 128K, so it is tricky to buy
an SSD that can actually sustain the throughput you need.
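As a back-of-the-envelope sketch of that sizing (all numbers here are hypothetical, not measurements from this thread): the slog has to absorb the synchronous-write portion of the workload at its typical IO size, which for ZFS ZIL traffic is often far smaller than the multi-megabyte transfers vendors quote.

```python
# Rough slog sizing sketch. Numbers are illustrative only; measure your
# actual sync-write workload (e.g. with zilstat) before buying hardware.

def required_slog_mb_per_s(sync_iops: float, io_size_kib: float) -> float:
    """Sustained write bandwidth (MB/s) the slog must handle for a
    workload of sync_iops synchronous writes/s at io_size_kib each."""
    return sync_iops * io_size_kib * 1024 / 1e6

# Hypothetical example: 2000 synchronous 128 KiB writes per second
# works out to roughly 262 MB/s of sustained small-block writes --
# a rate many SSDs only reach at much larger transfer sizes.
print(round(required_slog_mb_per_s(2000, 128), 1))
```

The point of the arithmetic is that a vendor's 4 MB-block sequential number tells you little about whether the drive can keep up at 128K.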

> I did some filebench and "tar extract over NFS" tests of J4400 (500GB,
> 7200RPM SATA drives), with and without slog, where slog was using the
> internal 2.5" 10kRPM SAS drives in an X4150.  These drives were behind
> the standard Sun/Adaptec internal RAID controller, 256MB battery-backed
> cache memory, all on Solaris-10U7.
>
> We saw slight differences on filebench oltp profile, and a huge speedup
> for the "tar extract over NFS" tests with the slog present.  Granted, the
> latter was with only one NFS client, so likely did not fill NVRAM.  Pretty
> good results for a poor-person's slog, though:
>        http://acc.ohsu.edu/~hakansom/j4400_bench.html

I did a similar test with a 512MB BBU controller and saw no difference
with or without the SSD slog, so I didn't end up using it.

Does your BBU controller ignore the ZFS flushes?
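For context (my assumption, not something established in this thread): if the controller's battery-backed cache already makes writes stable, ZFS's per-transaction cache-flush commands can be skipped. On Solaris this can be done globally with the documented `zfs_nocacheflush` tunable, which is only safe when every pool device sits behind non-volatile cache:

```
* /etc/system fragment -- disable ZFS cache-flush commands.
* Only safe when ALL pool devices have battery-backed (non-volatile)
* write cache; otherwise sync-write data can be lost on power failure.
set zfs:zfs_nocacheflush = 1
```

Some RAID controllers can instead be configured to ignore SYNCHRONIZE CACHE commands themselves, which achieves the same effect per-controller.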

> Just as an aside, and based on my experience as a user/admin of various
> NFS-server vendors, the old Prestoserve cards, and NetApp filers, seem
> to get very good improvements with relatively small amounts of NVRAM
> (128K, 1MB, 256MB, etc.).  None of the filers I've seen have ever had
> tens of GB of NVRAM.

They don't hold data in the cache for long, only as long as it
takes to write it all out to disk.

-Ross
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss