It's a home setup, so the performance penalty from splitting the cache devices
is non-existent, and that workaround sounds like a pretty crazy amount of
overhead when I could instead just have a mirrored slog.
I'm less concerned about wasted space and more concerned about the number of
SAS ports I have available.
I understand that p0 refers to the whole disk... in the logs I pasted in I'm
not attempting to mount p0. I'm trying to work out why I'm getting an error
attempting to mount p2, after p1 has successfully mounted. Further, this has
been done before on other systems in the same hardware configuration in the
exact same fashion, and I've gone over the steps trying to make sure I haven't
missed something but can't see a fault.
I'm not keen on using Solaris slices because I don't understand what that does
to the pool's interoperability with other OSes.
From: Edward Ned Harvey (opensolarisisdeadlongliveopensolaris)
Sent: Friday, 15 March 2013 8:44 PM
To: Andrew Werchowiecki; email@example.com
Subject: RE: partitioned cache devices
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Andrew Werchowiecki
> muslimwookie@Pyzee:~$ sudo zpool add aggr0 cache c25t10d1p2
> cannot open '/dev/dsk/c25t10d1p2': I/O error
> I have two SSDs in the system, I've created an 8gb partition on each drive for
> use as a mirrored write cache. I also have the remainder of the drive
> partitioned for use as the read only cache. However, when attempting to add
> it I get the error above.
Sounds like you're probably running into confusion about how to partition the
drive. If you create fdisk partitions, they will be accessible as p1, p2, and
so on, but I think p0 unconditionally refers to the whole drive, so the first
fdisk partition is p1 and the second is p2.
If you create one big Solaris fdisk partition and then slice it with format's
"partition" subcommand, where s2 is typically the encompassing slice and
people usually use s0, s1, and s6 for the actual slices, then they will be
accessible as s0, s1, s6, and so on.
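Roughly like this (a sketch only; the slice numbers and sizes depend on how
you lay out the label in format's partition menu):

    sudo format                              # select c25t10d1, fdisk: one Solaris2
                                             # partition, then partition: s0 = 8GB,
                                             # s1 = the rest, label and quit
    sudo zpool add aggr0 log c25t10d1s0
    sudo zpool add aggr0 cache c25t10d1s1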
Generally speaking, it's inadvisable to split the slog/cache devices anyway.
If you're splitting them, evidently you're focusing on the wasted space:
buying an expensive 128G device when you couldn't possibly ever use more than
4G or 8G for the slog. But that's not what you should be focusing on. You
should be focusing on the speed (that's why you bought it in the first place).
The slog is write-only, and the cache is a mixture of reads and writes, where
it should hopefully be doing more reads than writes. But regardless of your
actual success with the cache device, your cache device will be busy most of
the time, and competing with the slog.
You have a mirror, you say. You should probably drop both the split cache and
the split log, and instead use one whole device for the cache and one whole
device for the log.
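Something like this, assuming the second SSD shows up as c25t9d1 (a
hypothetical name; substitute your actual device):

    sudo zpool add aggr0 cache c25t10d1      # whole SSD #1 as L2ARC
    sudo zpool add aggr0 log c25t9d1         # whole SSD #2 as slog

The only risk you'll run is this: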
Since a slog is write-only (except during mount, typically at boot), it's
possible to have a failure mode where you think you're writing to the log, but
the first time you go back and read, you discover an error and realize the
device has gone bad. In other words, without ever doing any reads, you might
not notice when/if the device goes bad. Fortunately, there's an easy
workaround. You could periodically (say, once a month) script the removal of
your log device, create a junk pool on it, write a bunch of data to it, scrub
it (thus verifying it was written correctly), and, in the absence of any scrub
errors, destroy the junk pool and re-add the device as a slog to the main
pool.
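A rough sketch of that monthly check, again assuming the slog device is
c25t9d1 (substitute whatever you actually used):

    sudo zpool remove aggr0 c25t9d1          # pull the slog out of the main pool
    sudo zpool create junk c25t9d1           # junk pool on the freed device
    sudo dd if=/dev/urandom of=/junk/testfile bs=1024k count=2048   # write ~2GB
    sudo zpool scrub junk
    sudo zpool status junk                   # wait for the scrub to finish, check for errors
    sudo zpool destroy junk
    sudo zpool add aggr0 log c25t9d1         # put the slog back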
I've never heard of anyone actually being that paranoid, and I've never heard
of anyone actually experiencing the aforementioned possible undetected device
failure mode. So this is all mostly theoretical.
Mirroring the slog device really isn't necessary in the modern age.