On Thu, 29 Nov 2012, Edward Ned Harvey (openindiana) wrote:

From: Sebastian Gabler [mailto:sequoiamo...@gmx.net]

I have bought and installed an Intel SSD 313 20 GB to use as a ZIL for one
or many pools. I am running OpenIndiana on an x86 platform, no SPARC. As
4 GB should suffice, I am considering partitioning the drive in order to
assign each partition to one pool (at the moment there are 2 pools on the
server, but I could expand that in the future).

Beware, the Intel 313 SSD seems to have no power-loss protection:

http://ark.intel.com/products/66290/Intel-SSD-313-Series-24GB-mSATA-3Gbs-25nm-SLC

The ZIL relies on this feature.


After some reading, I am still confused about slicing and partitioning.
What do I actually need to do to achieve the desired effect of having up to
4 partitions on that SSD?

Everybody seems to want to do this, and I can see why: if you look at the storage capacity of an 
SSD, you're like "Hey, this thing has 64G and I only need 1 or 2 or 4G, that leaves all the 
rest unused."  But you forget to think to yourself, "Hey, this thing has a 6Gbit bus, and 
I'm trying to use it all to boost the performance of some other pool."

The only situation where I think it's a good idea to slice the SSD and use it 
for more than one slog device is this: you have two pools, and you know you're 
not going to write to them simultaneously.  Say you have a job that reads from pool A 
and writes to pool B, then reads from pool B and writes to pool A, 
and so forth.  But this is highly contrived, and I seriously doubt it's what 
you're doing (until you say that's what you're doing).

The better thing is to swallow and accept 60G wasted on your slog device.  It's 
not there for storage capacity - you bought it for speed.  And there isn't 
excess speed going to waste.
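
For reference, going that route is a one-liner.  The pool name "tank" and the
device name c4t1d0 below are placeholders for this sketch only - check what
your SSD actually shows up as with format(1M):

  # attach the whole SSD as a dedicated log (slog) device for the pool
  zpool add tank log c4t1d0

  # it should now be listed under "logs"
  zpool status tank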

Well, it depends on usage. I sliced up a 120 GB Intel 320 into 8 GB for ZIL and 50 GB for L2ARC. On my "not so many users" multi-user system it is either heavy NFS write activity or heavy read activity most of the time, seldom both at once. ZIL slices for multiple pools may show the same behaviour.
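
A rough sketch of that kind of layout, assuming the SSD shows up as c4t1d0 and
the slices were already carved out with format(1M) - the slice numbers and the
pool name "tank" are placeholders only:

  # ~8 GB slice as slog, ~50 GB slice as L2ARC for the same pool
  zpool add tank log c4t1d0s0
  zpool add tank cache c4t1d0s1

  # both should now show up under "logs" and "cache"
  zpool status tank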

- Michael



_______________________________________________
OpenIndiana-discuss mailing list
OpenIndiana-discuss@openindiana.org
http://openindiana.org/mailman/listinfo/openindiana-discuss


