I see Sun has recently released part number XRA-ST1CH-32G2SSD, a 32GB SATA
SSD for the x4540 server.

We have five x4500s, purchased last year, that we are deploying to
provide file and web services to our users. One issue we have had is
horrible performance in the "single-threaded process creating lots of
small files over NFS" scenario. The bottleneck in that case is fairly
clear, and to verify it we temporarily disabled the ZIL on one of the
servers: extraction time for a large tarball into an NFSv4-mounted
filesystem dropped from 20 minutes to 2 minutes.
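For reference, this is roughly how we ran the test; the zil_disable tunable is the old system-wide switch from that era's documentation, and the paths are illustrative:

```shell
# Temporarily disable the ZIL system-wide (old zil_disable tunable;
# takes effect after a reboot).  For testing only -- do not run this
# way in production.
echo "set zfs:zil_disable = 1" >> /etc/system

# Then time the same extraction into the NFSv4-mounted filesystem:
time tar xf big-tarball.tar -C /mnt/nfs/testdir
```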

Obviously, running with the ZIL disabled is strongly discouraged, and we
don't particularly want to do so in production. However, for some of our
users, performance is simply unacceptable for various use cases (not
only tar extracts, but other common software development operations such
as svn checkouts).

As such, we have been investigating improving performance via a slog,
preferably on some type of NVRAM device or SSD. We hadn't really found
anything appropriate, and now Sun has officially released something that
may be exactly what we have been looking for.
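If the SSD does work, my understanding is that adding it as a dedicated log would be a one-line operation (pool and device names hypothetical):

```shell
# Add the SSD as a dedicated intent-log (slog) device to an existing pool
zpool add tank log c6t7d0
```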

My sales rep tells me the drive is only qualified for use in an x4540.
However, as a standard SATA SSD there is theoretically no reason it
would not work in an x4500; the two servers even share the exact same
drive sleds. I was told Sun simply didn't want to spend the time and
effort to qualify it for the older hardware (it's unfortunate that
servers we bought less than a year ago are already being left behind).
We are considering using them anyway; in the worst case, if Sun support
complains that they are installed and refuses to continue any diagnostic
efforts, presumably we could simply swap them out for standard hard
drives. slog devices can be replaced like any other ZFS vdev, correct?
Alternatively, what is the state of removing a slog device entirely and
reverting to the pool-embedded log?
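My assumption is that swapping the slog back out would look like an ordinary vdev replacement, something along these lines (pool and device names hypothetical):

```shell
# Replace the SSD slog with a standard hard drive, as with any other vdev
zpool replace tank c6t7d0 c6t8d0

# Verify the state of the log device afterwards
zpool status tank
```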

So, has anyone played with this new SSD in an x4500 and can comment on
whether it seemed to work okay? I can't imagine that no one inside Sun,
regardless of official support level, has tried it :). Feel free to post
anonymously or reply off-list if you don't want anything on the record
;).

The Sun hybrid storage documentation describes two different flash
devices: the "Logzilla", optimized for blindingly fast writes and
intended as a ZIL slog, and the "Cachezilla", optimized for fast reads
and intended for use as L2ARC. Is this new SSD one of those, or some
other device? If the latter, what are its technical read/write
performance characteristics?

We currently have all 48 drives allocated: 23 mirror pairs and two hot
spares. Is there any timeline for the ability to remove an active vdev
from a pool, which would let us swap out a couple of devices without
destroying and having to rebuild our pool?

What is the current behavior in the face of slog failure? In theory, if
a dedicated slog device failed, the pool could simply revert to logging
embedded in the pool itself. However, the last I heard, slog device
failure rendered the pool completely unusable and inaccessible. If that
is still the case and not expected to be resolved anytime soon, would we
presumably need two of the devices in a mirror?
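If mirroring is indeed required, I assume the pair would be added as a mirrored log vdev, something like this (pool and device names hypothetical):

```shell
# Add two SSDs as a mirrored slog so that a single device failure
# does not take the intent log -- or the pool -- with it
zpool add tank log mirror c6t7d0 c6t8d0
```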

Thanks for any info you might be able to provide.


-- 
Paul B. Henson  |  (909) 979-6361  |  http://www.csupomona.edu/~henson/
Operating Systems and Network Analyst  |  hen...@csupomona.edu
California State Polytechnic University  |  Pomona CA 91768
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss