Re: [zfs-discuss] one ZIL SLOG per zpool?

2010-08-13 Thread Edward Ned Harvey
 From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
 boun...@opensolaris.org] On Behalf Of Chris Twa
 
 My plan now is to buy the ssd's and do extensive testing.  I want to
 focus my performance efforts on two zpools (7x146GB 15K U320 + 7x73GB
 10k U320).  I'd really like two ssd's for L2ARC (one ssd per zpool) and
 then slice the other two ssd's and then mirror the slices for SLOG (one
 mirrored slice per zpool).  I'm worried that the ZILs won't be
 significantly faster than writing to disk.  But I guess that's what
 testing is for.  If the ZIL in this arrangement isn't beneficial then I
 can have four disks for L2ARC instead of two (or my wife and I get
 ssd's for our laptops).

Remember that the ZIL is only used for sync writes.  So if you're not doing
sync writes, there is no benefit from a dedicated log device.

Also, for a lot of purposes, disabling the ZIL is actually viable.  It costs
nothing and gives the best possible sync-write performance on spindle disks.
Nothing is faster.  To quantify the risk, here's what you need to know:

In the event of an ungraceful crash, up to 30 sec of async writes are lost.
Period.  But as long as you have not disabled the ZIL, no sync writes are
lost.

If you have the ZIL disabled, then sync = async: up to 30 sec of all writes
are lost.  Period.

But there is no corruption, and no data is written out of order.  The end
result is as if you had halted the server at some point up to 30 sec before
the crash, flushed all the buffers to disk, and then powered off.
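For what it's worth, a rough sketch of what disabling the ZIL looks like in
practice (the dataset name is made up; the zil_disable tunable applies to
older builds, while the per-dataset sync property only appeared around build
snv_140):

```shell
# Older OpenSolaris builds: disable the ZIL globally via a tunable
# in /etc/system, then reboot for it to take effect.
echo 'set zfs:zil_disable = 1' >> /etc/system

# Newer builds: disable sync semantics per dataset instead, which is
# reversible on the fly without a reboot.
zfs set sync=disabled tank/scratch

# Re-enable standard sync behavior:
zfs set sync=standard tank/scratch
```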

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] one ZIL SLOG per zpool?

2010-08-13 Thread David Magda
On Fri, August 13, 2010 07:52, Edward Ned Harvey wrote:

 If you have ZIL disabled, then sync=async.  Up to 30sec of all writes are
 lost.  Period.

 But there is no corruption or data written out-of-order.  The end result
 is as-if you halted the server suddenly, flushed all the buffers to disk,
 and then powered off.

With the proviso that you should ideally be using a version of OpenSolaris
later than snv_128, which allows you to roll back to a previous uberblock/txg
in case the more recent one(s) are not viable:

  zpool recovery support
http://arc.opensolaris.org/caselog/PSARC/2009/479/

  need a way to rollback to an uberblock from a previous txg
http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6667683

  http://www.c0t0d0s0.org/archives/6067-PSARC-2009479-zpool-recovery-a.html

If you're at ZFSv20 or later, you're pretty much guaranteed to have this
functionality:

  http://hub.opensolaris.org/bin/view/Community+Group+zfs/20
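For reference, the recovery path those cases describe is exposed through
zpool import's -F flag (the pool name here is a placeholder):

```shell
# Recovery-mode import (PSARC 2009/479, snv_128+): if the newest
# uberblocks are damaged, discard the last few transactions and
# roll the pool back to an earlier txg.
zpool import -F tank

# Dry run: report whether the rollback would succeed, and roughly
# how much data would be discarded, without actually importing.
zpool import -Fn tank
```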

I'm hoping this, along with slog removal, is incorporated into Solaris proper soon:

  http://hub.opensolaris.org/bin/view/Community+Group+zfs/19




[zfs-discuss] one ZIL SLOG per zpool?

2010-08-12 Thread Chris Twa
I have three zpools on a server and want to add a mirrored pair of SSDs for 
the ZIL.  Can the same pair of SSDs be used for the ZIL of all three zpools, 
or is it one SLOG device per zpool?
-- 
This message posted from opensolaris.org


Re: [zfs-discuss] one ZIL SLOG per zpool?

2010-08-12 Thread Darren J Moffat

On 12/08/2010 07:27, Chris Twa wrote:

I have three zpools on a server and want to add a mirrored pair of ssd's for 
the ZIL.  Can the same pair of SSDs be used for the ZIL of all three zpools or 
is it one ZIL SLOG device per zpool?


Only if you partition them up and give slices to the pools.  However, I 
personally don't like giving parts of the same device to multiple pools 
if I can help it.


The only vdev types that can be shared between pools are spares; all 
others need to be per pool, or the physical devices partitioned up.


--
Darren J Moffat


Re: [zfs-discuss] one ZIL SLOG per zpool?

2010-08-12 Thread Edward Ned Harvey
 From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
 boun...@opensolaris.org] On Behalf Of Chris Twa
 
 I have three zpools on a server and want to add a mirrored pair of
 ssd's for the ZIL.  Can the same pair of SSDs be used for the ZIL of
 all three zpools or is it one ZIL SLOG device per zpool?

If you format, fdisk, and partition the disks, you can use the slices for
slogs.  (You can also implement it in some other ways.)  However, the point
of the slog device is *performance*, so you're defeating your purpose by
doing this.
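As a sketch of the slice approach (device and pool names are hypothetical):
label the two SSDs with format(1M) so each carries two slices, then give one
mirrored slice pair to each pool:

```shell
# Slice 0 of both SSDs becomes a mirrored log for the first pool,
# slice 1 of both becomes a mirrored log for the second.
zpool add tank1 log mirror c2t0d0s0 c2t1d0s0
zpool add tank2 log mirror c2t0d0s1 c2t1d0s1

# On pool version 19 or later, a log vdev added this way can be
# removed again (the vdev name is shown by zpool status):
zpool remove tank1 mirror-1
```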

People are always tempted to put more than one log onto an SSD because "Hey,
the system could never use more than 8G, but I've got a 32G drive!  What a
waste of money!"  Which has some truth in it.  But the line of thought you
should have is "Hey, the system will do its best to max out the 3Gbit/sec or
6Gbit/sec bus to the drive, so that disk is already fully utilized!"



Re: [zfs-discuss] one ZIL SLOG per zpool?

2010-08-12 Thread Roy Sigurd Karlsbakk
 People are always tempted to put more than one log onto an SSD because
 "Hey, the system could never use more than 8G, but I've got a 32G
 drive!  What a waste of money!"  Which has some truth in it.  But the
 line of thought you should have is "Hey, the system will do its best
 to max out the 3Gbit/sec or 6Gbit/sec bus to the drive, so that disk
 is already fully utilized!"

That depends on your workload. I would guess most pools aren't utilising their 
SLOGs 100%, since that would require pretty heavy sync-write usage, and 
typically about 90% of the I/O on most servers is reads.
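One way to check this on a live system is to watch per-vdev activity (the
pool name is a placeholder):

```shell
# Per-vdev statistics every 5 seconds; the "logs" section shows how
# busy the slog really is relative to the data vdevs.
zpool iostat -v tank 5

# Overall read/write mix for the whole pool:
zpool iostat tank 5
```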

Vennlige hilsener / Best regards

roy
--
Roy Sigurd Karlsbakk
(+47) 97542685
r...@karlsbakk.net
http://blogg.karlsbakk.net/
--
In all pedagogy it is essential that the curriculum be presented intelligibly. 
It is an elementary imperative for all pedagogues to avoid excessive 
application of idioms of foreign origin. In most cases, adequate and relevant 
synonyms exist in Norwegian.


Re: [zfs-discuss] one ZIL SLOG per zpool?

2010-08-12 Thread Chris Twa
Thank you everyone for your answers.

Cost is a factor, but the main obstacle is that the chassis will only support 
four SSDs (and that's with using the spare 5.25" bay for a 4x2.5" hot-swap bay).

My plan now is to buy the SSDs and do extensive testing.  I want to focus my 
performance efforts on two zpools (7x146GB 15K U320 + 7x73GB 10K U320).  I'd 
really like two SSDs for L2ARC (one SSD per zpool), and then slice the other 
two SSDs and mirror the slices for SLOG (one mirrored slice per zpool).  
I'm worried that the SLOGs won't be significantly faster than writing to disk.  
But I guess that's what testing is for.  If the SLOG in this arrangement isn't 
beneficial, then I can have four disks for L2ARC instead of two (or my wife and 
I get SSDs for our laptops).

Thank you again, everyone, for your quick responses.