>>>>> "rvd" == Ray Van Dolson <rvandol...@esri.com> writes:
>>>>> "ak" == Andrey Kuzmin <andrey.v.kuz...@gmail.com> writes:

   rvd> I missed out on this thread.  How would these dropped flushed
   rvd> writes manifest themselves?  

Presumably as corrupted databases, lost mail, or strange NFS behavior
after the server reboots while the clients do not.  But the actual test
I referred to is benchmark-like and didn't observe any of those
things.  If you read my post, you'll see I gave Lutz's name and the
date he posted, and linked the msgid in my message's header, so go
read it for yourself!

A good point, though: drives with lying write caches are still okay if
your box reboots from a kernel panic, just not if it loses power, so
they're not worthless.

    ak> performance from anyone using (real) enterprise SSD (which now
    ak> spells STEC) as slog.

I wonder how the ACARD would do, too, since it's about 1/5th the cost,
and whether the Seagate Pulsar will behave correctly.  STEC coming in
more expensive than DRAM is like a sucker premium you pay because no
one else has their act together.  And according to the test Lutz did,
the X25-M (and probably also the -E?) is okay so long as you disable
the write cache, though you have to do that at every boot, and 'hdadm'
is not bundled.

It would also be nice to convince anandtech and friends to yank power
cords, too, to confirm that the write flushes issued in their tests
are actually obeyed, and to redo the IOPS test with the write cache
disabled if the device lies, so that we actually have comparable
numbers.  If they would do that, the $ value of a supercap would
become obvious.
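The power-pull test is simple enough to sketch.  This is a toy
diskchecker-style verifier (my own illustration, not Lutz's harness;
the path and record format are made up): write sequence-numbered
records, fsync after each, and note the highest acknowledged number.
Then you pull the plug mid-run, and after power returns you check
that every record acked before the cut is actually on disk.  If the
drive's write cache lies about flushes, the verify step fails.

```python
# Sketch of a write-barrier test: every record acked after fsync()
# must survive power loss.  Record layout (4-byte little-endian
# sequence numbers) is illustrative, not from any real tool.
import os
import struct

def write_acked(path, count):
    """Append sequence-numbered records, fsync'ing after each one.
    The number returned is the highest record acknowledged as durable."""
    fd = os.open(path, os.O_WRONLY | os.O_CREAT, 0o644)
    last_acked = -1
    try:
        for seq in range(count):
            os.write(fd, struct.pack("<I", seq))
            os.fsync(fd)       # drive must not ack until data is on media
            last_acked = seq   # this record should survive a power cut
    finally:
        os.close(fd)
    return last_acked

def verify(path, last_acked):
    """Run after power is restored: every acked record must be intact."""
    with open(path, "rb") as f:
        data = f.read()
    usable = len(data) - len(data) % 4      # ignore a torn trailing record
    records = [struct.unpack_from("<I", data, i)[0]
               for i in range(0, usable, 4)]
    return records[:last_acked + 1] == list(range(last_acked + 1))
```

In a real run you'd log `last_acked` over the network (so the log
survives the crash), yank the cord while `write_acked` loops, and call
`verify` on reboot; a drive with an honest cache passes, a lying one
comes up short.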


_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
