Re: [zfs-discuss] ZIL devices and fragmentation

2012-07-30 Thread opensolarisisdeadlongliveopensolaris
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Jim Klimov
>
> Thanks... but doesn't your description imply that the sync writes
> would always be written twice?

That is correct, whether or not you have a slog.  With a slog, the sync write 
goes first to the dedicated device, and then to the pool.  Without a slog, it 
is written to dedicated, recyclable ZIL blocks in the main pool, and then 
written to the main pool again as part of the next TXG.

All of this is true except when the sync write is sufficiently large.  When 
it's larger than a configurable threshold (I forget which parameter, but I 
could look it up), it goes directly to the main pool and skips the ZIL 
completely.  I don't know exactly how that is implemented - maybe it goes into 
the next TXG and simply forces that TXG to flush immediately.
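
The threshold in question is probably the illumos tunable 
zfs_immediate_write_sz (default 32768 bytes) - treat that name as an educated 
guess, since it isn't confirmed above.  Assuming it, a minimal sketch for 
inspecting and adjusting it on a live illumos kernel:

    # print the current value as an 8-byte decimal
    echo zfs_immediate_write_sz/E | mdb -k

    # raise the threshold to 128 KB on the running kernel (reverts at reboot)
    echo 'zfs_immediate_write_sz/Z 0x20000' | mdb -kw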


Re: [zfs-discuss] Can the ZFS copies attribute substitute HW disk redundancy?

2012-07-30 Thread GREGG WONDERLY

On Jul 29, 2012, at 3:12 PM, opensolarisisdeadlongliveopensolaris 
opensolarisisdeadlongliveopensola...@nedharvey.com wrote:

> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Jim Klimov
>
>> I wondered if the copies attribute can be considered sort
>> of equivalent to the number of physical disks - limited to seek
>> times though. Namely, for the same amount of storage on a 4-HDD
>> box I could use raidz1 and 4*1tb@copies=1 or 4*2tb@copies=2 or
>> even 4*3tb@copies=3, for example.
>
> The first question - reliability...
>
> copies might be on the same disk.  So it's not guaranteed to help if you
> have a disk failure.

I thought I understood that copies would not be on the same disk; I guess I 
need to go read up on this again.

Gregg Wonderly


Re: [zfs-discuss] ZIL devices and fragmentation

2012-07-30 Thread Roy Sigurd Karlsbakk
>> For several times now I've seen statements on this list implying
>> that a dedicated ZIL/SLOG device catching sync writes for the log,
>> also allows for more streamlined writes to the pool during normal
>> healthy TXG syncs, than is the case with the default ZIL located
>> within the pool.
>
> After reading what some others have posted, I should remind that zfs
> always has a ZIL (unless it is specifically disabled for testing).
> If it does not have a dedicated ZIL, then it uses the disks in the
> main pool to construct the ZIL. Dedicating a device to the ZIL should
> not improve the pool storage layout because the pool already had a
> ZIL.

Also keep in mind that if you have an SLOG (ZIL on a separate device) and then 
lose it (disk crash, etc.), you will probably lose the pool. So if you want or 
need an SLOG, you probably want two of them in a mirror…
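
A minimal sketch of doing that on an existing pool (device names here are 
placeholders):

    # add two SSDs as a mirrored log vdev on pool 'tank'
    zpool add tank log mirror c4t0d0 c4t1d0

    # the mirror should now appear under the 'logs' section
    zpool status tank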

Vennlige hilsener / Best regards

roy
--
Roy Sigurd Karlsbakk
(+47) 98013356
r...@karlsbakk.net
http://blogg.karlsbakk.net/
GPG Public key: http://karlsbakk.net/roysigurdkarlsbakk.pubkey.txt
--
In all pedagogy it is essential that the curriculum be presented intelligibly. 
It is an elementary imperative for all pedagogues to avoid excessive use of 
idioms of xenotypic etymology. In most cases, adequate and relevant synonyms 
exist in Norwegian.


Re: [zfs-discuss] ZIL devices and fragmentation

2012-07-30 Thread Freddie Cash
On Mon, Jul 30, 2012 at 8:58 AM, Roy Sigurd Karlsbakk r...@karlsbakk.net 
wrote:
>>> For several times now I've seen statements on this list implying
>>> that a dedicated ZIL/SLOG device catching sync writes for the log,
>>> also allows for more streamlined writes to the pool during normal
>>> healthy TXG syncs, than is the case with the default ZIL located
>>> within the pool.
>>
>> After reading what some others have posted, I should remind that zfs
>> always has a ZIL (unless it is specifically disabled for testing).
>> If it does not have a dedicated ZIL, then it uses the disks in the
>> main pool to construct the ZIL. Dedicating a device to the ZIL should
>> not improve the pool storage layout because the pool already had a
>> ZIL.
>
> Also keep in mind that if you have an SLOG (ZIL on a separate device), and
> then lose this SLOG (disk crash etc), you will probably lose the pool. So if
> you want/need SLOG, you probably want two of them in a mirror…

That's only true on older versions of ZFS.  ZFSv19 (or 20?) includes
the ability to import a pool with a failed/missing log device.  You
lose any data that is in the log and not in the pool, but the pool is
importable.
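
A hedged sketch of that recovery path (pool name is a placeholder):

    # import despite a failed/missing log device, discarding any
    # log records that were never written to the main pool
    zpool import -m tank

    # then remove the dead log device from the pool configuration
    zpool remove tank <failed-log-device>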

-- 
Freddie Cash
fjwc...@gmail.com


[zfs-discuss] encfs on top of zfs

2012-07-30 Thread Tristan Klocke
Dear ZFS-Users,

I want to switch to ZFS, but still want to encrypt my data. Native
encryption for ZFS was added in ZFS pool version 30
(http://en.wikipedia.org/wiki/ZFS#Release_history), but I'm using ZFS on
FreeBSD with version 28. My question is: how would encfs (FUSE encryption)
affect ZFS-specific features like data integrity and deduplication?

Regards

Tristan


Re: [zfs-discuss] ZIL devices and fragmentation

2012-07-30 Thread Roy Sigurd Karlsbakk
>> Also keep in mind that if you have an SLOG (ZIL on a separate
>> device), and then lose this SLOG (disk crash etc), you will probably
>> lose the pool. So if you want/need SLOG, you probably want two of
>> them in a mirror…
>
> That's only true on older versions of ZFS. ZFSv19 (or 20?) includes
> the ability to import a pool with a failed/missing log device. You
> lose any data that is in the log and not in the pool, but the pool is
> importable.

Are you sure? I booted a system with a v28 pool a couple of months back, and 
found it didn't recognize its pool, apparently because of a missing SLOG. It 
turned out the cache shelf was disconnected; after re-connecting it, things 
worked as planned. I didn't try to force a new import, though - the system 
didn't boot up normally, and told me it couldn't import its pool for lack of 
SLOG devices.

Vennlige hilsener / Best regards

roy
--
Roy Sigurd Karlsbakk
(+47) 98013356
r...@karlsbakk.net
http://blogg.karlsbakk.net/
GPG Public key: http://karlsbakk.net/roysigurdkarlsbakk.pubkey.txt
--
In all pedagogy it is essential that the curriculum be presented intelligibly. 
It is an elementary imperative for all pedagogues to avoid excessive use of 
idioms of xenotypic etymology. In most cases, adequate and relevant synonyms 
exist in Norwegian.


Re: [zfs-discuss] ZIL devices and fragmentation

2012-07-30 Thread Freddie Cash
On Mon, Jul 30, 2012 at 9:38 AM, Roy Sigurd Karlsbakk r...@karlsbakk.net 
wrote:
>>> Also keep in mind that if you have an SLOG (ZIL on a separate
>>> device), and then lose this SLOG (disk crash etc), you will probably
>>> lose the pool. So if you want/need SLOG, you probably want two of
>>> them in a mirror…
>>
>> That's only true on older versions of ZFS. ZFSv19 (or 20?) includes
>> the ability to import a pool with a failed/missing log device. You
>> lose any data that is in the log and not in the pool, but the pool is
>> importable.
>
> Are you sure? I booted this v28 pool a couple of months back, and found it
> didn't recognize its pool, apparently because of a missing SLOG. It turned
> out the cache shelf was disconnected, after re-connecting it, things worked
> as planned. I didn't try to force a new import, though, but it didn't boot up
> normally, and told me it couldn't import its pool due to lack of SLOG devices.

Positive.  :)  I tested it with ZFSv28 on FreeBSD 9-STABLE a month or
two ago.  See the updated man page for zpool, especially the bit about
import -m.  :)

-- 
Freddie Cash
fjwc...@gmail.com


Re: [zfs-discuss] encfs on top of zfs

2012-07-30 Thread Freddie Cash
On Mon, Jul 30, 2012 at 5:20 AM, Tristan Klocke
tristan.klo...@googlemail.com wrote:
> I want to switch to ZFS, but still want to encrypt my data. Native
> Encryption for ZFS was added in ZFS Pool Version Number 30, but I'm using
> ZFS on FreeBSD with Version 28. My question is how would encfs (fuse
> encryption) affect zfs specific features like data Integrity and
> deduplication?

If you are using FreeBSD, why not use GELI to provide the block
devices used for the ZFS vdevs?  That's the standard way to get
encryption and ZFS working on FreeBSD.
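
A minimal sketch, assuming a single disk da1 (adjust device names and key 
handling for production use):

    # one-time: initialize GELI on the raw disk (prompts for a passphrase)
    geli init -s 4096 /dev/da1

    # each boot: attach the encrypted provider, then build the pool on it
    geli attach /dev/da1
    zpool create tank /dev/da1.eli

Because ZFS sits above the encrypted provider, checksumming and dedup see 
plaintext blocks, so both keep working as usual.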

-- 
Freddie Cash
fjwc...@gmail.com


Re: [zfs-discuss] Can the ZFS copies attribute substitute HW disk redundancy?

2012-07-30 Thread John Martin

On 07/29/12 14:52, Bob Friesenhahn wrote:

> My opinion is that complete hard drive failure and block-level media
> failure are two totally different things.

That would depend on the drive's recovery behavior for block-level media 
failure.  A drive whose firmware performs excessive retries of a bad sector 
(up to 2 minutes, by some reports) may be indistinguishable from a failed 
drive.  See previous discussions of the firmware differences between desktop 
and enterprise drives.
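
On drives that support SCT ERC, smartmontools can show and cap those retry 
times; a hedged sketch (values are tenths of a second, and many desktop 
drives ignore or forget the setting):

    # show the current error-recovery-control timeouts
    smartctl -l scterc /dev/da0

    # cap read/write recovery at 7 seconds, enterprise-style
    smartctl -l scterc,70,70 /dev/da0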


Re: [zfs-discuss] ZIL devices and fragmentation

2012-07-30 Thread Roy Sigurd Karlsbakk


----- Original message -----
> On Mon, Jul 30, 2012 at 9:38 AM, Roy Sigurd Karlsbakk
> r...@karlsbakk.net wrote:
>>>> Also keep in mind that if you have an SLOG (ZIL on a separate
>>>> device), and then lose this SLOG (disk crash etc), you will
>>>> probably lose the pool. So if you want/need SLOG, you probably
>>>> want two of them in a mirror…
>>>
>>> That's only true on older versions of ZFS. ZFSv19 (or 20?) includes
>>> the ability to import a pool with a failed/missing log device. You
>>> lose any data that is in the log and not in the pool, but the pool
>>> is importable.
>>
>> Are you sure? I booted this v28 pool a couple of months back, and
>> found it didn't recognize its pool, apparently because of a missing
>> SLOG. It turned out the cache shelf was disconnected, after
>> re-connecting it, things worked as planned. I didn't try to force a
>> new import, though, but it didn't boot up normally, and told me it
>> couldn't import its pool due to lack of SLOG devices.
>
> Positive. :) I tested it with ZFSv28 on FreeBSD 9-STABLE a month or
> two ago. See the updated man page for zpool, especially the bit about
> import -m. :)

On 151a2, the man page just says 'use this or that mountpoint' with import -m, 
but the fact is that zpool refused to import the pool at boot when 2 SLOG 
devices (mirrored) and 10 L2ARC devices were offline. Should OI/Illumos be able 
to boot cleanly, without manual action, with the SLOG devices gone?

Vennlige hilsener / Best regards

roy
--
Roy Sigurd Karlsbakk
(+47) 98013356
r...@karlsbakk.net
http://blogg.karlsbakk.net/
GPG Public key: http://karlsbakk.net/roysigurdkarlsbakk.pubkey.txt
--
In all pedagogy it is essential that the curriculum be presented intelligibly. 
It is an elementary imperative for all pedagogues to avoid excessive use of 
idioms of xenotypic etymology. In most cases, adequate and relevant synonyms 
exist in Norwegian.


Re: [zfs-discuss] ZIL devices and fragmentation

2012-07-30 Thread Freddie Cash
On Mon, Jul 30, 2012 at 10:20 AM, Roy Sigurd Karlsbakk
r...@karlsbakk.net wrote:
> On 151a2, man page just says 'use this or that mountpoint' with import -m,
> but the fact was zpool refused to import the pool at boot when 2 SLOG devices
> (mirrored) and 10 L2ARC devices were offline. Should OI/Illumos be able to
> boot cleanly without manual action with the SLOG devices gone?

From FreeBSD 9-STABLE, which includes ZFSv28:

 zpool import [-o mntopts] [-o property=value] ... [-d dir | -c cachefile]
 [-D] [-f] [-m] [-N] [-R root] [-F [-n]] -a

 Imports all pools found in the search directories. Identical to the
 previous command, except that all pools with a sufficient number of
 devices available are imported. Destroyed pools, pools that were
 previously destroyed with the zpool destroy command, will not be
 imported unless the -D option is specified.

 -o mntopts
 Comma-separated list of mount options to use when mounting
 datasets within the pool. See zfs(8) for a description of
 dataset properties and mount options.

 -o property=value
 Sets the specified property on the imported pool. See the
 Properties section for more information on the available
 pool properties.

 -c cachefile
 Reads configuration from the given cachefile that was created
 with the cachefile pool property. This cachefile is used
 instead of searching for devices.

 -d dir  Searches for devices or files in dir.  The -d option can be
 specified multiple times. This option is incompatible with
 the -c option.

 -D  Imports destroyed pools only. The -f option is also required.

 -f  Forces import, even if the pool appears to be potentially
 active.

 -m  Enables import with missing log devices.


-- 
Freddie Cash
fjwc...@gmail.com


Re: [zfs-discuss] ZIL devices and fragmentation

2012-07-30 Thread Bob Friesenhahn

On Mon, 30 Jul 2012, Roy Sigurd Karlsbakk wrote:

> Should OI/Illumos be able to boot cleanly without manual action with
> the SLOG devices gone?

If this is allowed, then data may be unnecessarily lost.

When the drives are not all in one chassis, it is not uncommon for one 
chassis not to come up immediately, or to be slow to come up, when 
recovering from a power failure.


Bob
--
Bob Friesenhahn
bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,http://www.GraphicsMagick.org/


Re: [zfs-discuss] ZIL devices and fragmentation

2012-07-30 Thread Richard Elling
On Jul 30, 2012, at 10:20 AM, Roy Sigurd Karlsbakk wrote:
> ----- Original message -----
>> On Mon, Jul 30, 2012 at 9:38 AM, Roy Sigurd Karlsbakk
>> r...@karlsbakk.net wrote:
>>>>> Also keep in mind that if you have an SLOG (ZIL on a separate
>>>>> device), and then lose this SLOG (disk crash etc), you will
>>>>> probably lose the pool. So if you want/need SLOG, you probably
>>>>> want two of them in a mirror…
>>>>
>>>> That's only true on older versions of ZFS. ZFSv19 (or 20?) includes
>>>> the ability to import a pool with a failed/missing log device. You
>>>> lose any data that is in the log and not in the pool, but the pool
>>>> is importable.
>>>
>>> Are you sure? I booted this v28 pool a couple of months back, and
>>> found it didn't recognize its pool, apparently because of a missing
>>> SLOG. It turned out the cache shelf was disconnected, after
>>> re-connecting it, things worked as planned. I didn't try to force a
>>> new import, though, but it didn't boot up normally, and told me it
>>> couldn't import its pool due to lack of SLOG devices.
>>
>> Positive. :) I tested it with ZFSv28 on FreeBSD 9-STABLE a month or
>> two ago. See the updated man page for zpool, especially the bit about
>> import -m. :)
>
> On 151a2, man page just says 'use this or that mountpoint' with import -m,
> but the fact was zpool refused to import the pool at boot when 2 SLOG devices
> (mirrored) and 10 L2ARC devices were offline. Should OI/Illumos be able to
> boot cleanly without manual action with the SLOG devices gone?

No. Missing slogs is a potential data-loss condition. Importing the pool without
slogs requires acceptance of that data loss -- human interaction.
 -- richard

--
ZFS Performance and Training
richard.ell...@richardelling.com
+1-760-896-4422


Re: [zfs-discuss] ZIL devices and fragmentation

2012-07-30 Thread Tim Cook
On Mon, Jul 30, 2012 at 12:44 PM, Richard Elling
richard.ell...@gmail.com wrote:

> On Jul 30, 2012, at 10:20 AM, Roy Sigurd Karlsbakk wrote:
>> ----- Original message -----
>>> On Mon, Jul 30, 2012 at 9:38 AM, Roy Sigurd Karlsbakk
>>> r...@karlsbakk.net wrote:
>>>>>> Also keep in mind that if you have an SLOG (ZIL on a separate
>>>>>> device), and then lose this SLOG (disk crash etc), you will
>>>>>> probably lose the pool. So if you want/need SLOG, you probably
>>>>>> want two of them in a mirror…
>>>>>
>>>>> That's only true on older versions of ZFS. ZFSv19 (or 20?) includes
>>>>> the ability to import a pool with a failed/missing log device. You
>>>>> lose any data that is in the log and not in the pool, but the pool
>>>>> is importable.
>>>>
>>>> Are you sure? I booted this v28 pool a couple of months back, and
>>>> found it didn't recognize its pool, apparently because of a missing
>>>> SLOG. It turned out the cache shelf was disconnected, after
>>>> re-connecting it, things worked as planned. I didn't try to force a
>>>> new import, though, but it didn't boot up normally, and told me it
>>>> couldn't import its pool due to lack of SLOG devices.
>>>
>>> Positive. :) I tested it with ZFSv28 on FreeBSD 9-STABLE a month or
>>> two ago. See the updated man page for zpool, especially the bit about
>>> import -m. :)
>>
>> On 151a2, man page just says 'use this or that mountpoint' with import -m,
>> but the fact was zpool refused to import the pool at boot when 2 SLOG
>> devices (mirrored) and 10 L2ARC devices were offline. Should OI/Illumos be
>> able to boot cleanly without manual action with the SLOG devices gone?
>
> No. Missing slogs is a potential data-loss condition. Importing the pool
> without slogs requires acceptance of the data-loss -- human interaction.
>  -- richard
>
> --
> ZFS Performance and Training
> richard.ell...@richardelling.com
> +1-760-896-4422


I would think a flag to allow you to automatically continue with a
disclaimer might be warranted (default behavior obviously requiring human
input).

--Tim


Re: [zfs-discuss] ZIL devices and fragmentation

2012-07-30 Thread Richard Elling
On Jul 30, 2012, at 12:25 PM, Tim Cook wrote:
> On Mon, Jul 30, 2012 at 12:44 PM, Richard Elling richard.ell...@gmail.com
> wrote:
>> On Jul 30, 2012, at 10:20 AM, Roy Sigurd Karlsbakk wrote:
>>> ----- Original message -----
>>>> On Mon, Jul 30, 2012 at 9:38 AM, Roy Sigurd Karlsbakk
>>>> r...@karlsbakk.net wrote:
>>>>>>> Also keep in mind that if you have an SLOG (ZIL on a separate
>>>>>>> device), and then lose this SLOG (disk crash etc), you will
>>>>>>> probably lose the pool. So if you want/need SLOG, you probably
>>>>>>> want two of them in a mirror…
>>>>>>
>>>>>> That's only true on older versions of ZFS. ZFSv19 (or 20?) includes
>>>>>> the ability to import a pool with a failed/missing log device. You
>>>>>> lose any data that is in the log and not in the pool, but the pool
>>>>>> is importable.
>>>>>
>>>>> Are you sure? I booted this v28 pool a couple of months back, and
>>>>> found it didn't recognize its pool, apparently because of a missing
>>>>> SLOG. It turned out the cache shelf was disconnected, after
>>>>> re-connecting it, things worked as planned. I didn't try to force a
>>>>> new import, though, but it didn't boot up normally, and told me it
>>>>> couldn't import its pool due to lack of SLOG devices.
>>>>
>>>> Positive. :) I tested it with ZFSv28 on FreeBSD 9-STABLE a month or
>>>> two ago. See the updated man page for zpool, especially the bit about
>>>> import -m. :)
>>>
>>> On 151a2, man page just says 'use this or that mountpoint' with import -m,
>>> but the fact was zpool refused to import the pool at boot when 2 SLOG
>>> devices (mirrored) and 10 L2ARC devices were offline. Should OI/Illumos be
>>> able to boot cleanly without manual action with the SLOG devices gone?
>>
>> No. Missing slogs is a potential data-loss condition. Importing the pool
>> without slogs requires acceptance of the data-loss -- human interaction.
>>  -- richard
>>
>> --
>> ZFS Performance and Training
>> richard.ell...@richardelling.com
>> +1-760-896-4422
>
> I would think a flag to allow you to automatically continue with a disclaimer
> might be warranted (default behavior obviously requiring human input).

Disagree; the appropriate action is to boot as far as possible.
The pool will not be imported, and the normal fault management
alerts will be generated.

For interactive use, the import will fail, and you can add the -m option.
 -- richard
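
On illumos, a hedged sketch of reviewing those alerts after such a boot:

    # list active faults, including pool/vdev problems
    fmadm faulty

    # browse the fault management log history
    fmdump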

--
ZFS Performance and Training
richard.ell...@richardelling.com
+1-760-896-4422


Re: [zfs-discuss] Can the ZFS copies attribute substitute HW disk redundancy?

2012-07-30 Thread Brandon High
On Mon, Jul 30, 2012 at 7:11 AM, GREGG WONDERLY gregg...@gmail.com wrote:
> I thought I understood that copies would not be on the same disk, I guess I
> need to go read up on this again.

ZFS attempts to put copies on separate devices, but there's no guarantee.
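
A hedged way to check where the copies of a given file actually landed (pool, 
dataset, and path are placeholders): zdb can dump the file's block pointers, 
and the leading number in each DVA is the vdev that copy was allocated on:

    # find the object number of the file
    ls -i /tank/data/file.bin

    # dump its block pointers; compare the DVA[0]/DVA[1] vdev ids
    zdb -ddddd tank/data <object-number>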

-B

-- 
Brandon High : bh...@freaks.com


Re: [zfs-discuss] Can the ZFS copies attribute substitute HW disk redundancy?

2012-07-30 Thread Nico Williams
The copies feature is really only for laptops, where the likelihood of
redundancy is very low (there are some high-end laptops with multiple
drives, but those are relatively rare) and where this idea is better
than nothing.  It's also nice that copies can be set on a per-dataset
basis (whereas RAID-Zn and mirroring provide pool-wide redundancy,
not per-dataset), so you could set it > 1 on home directories but not
on /.
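
For instance, a minimal sketch with placeholder dataset names:

    # keep two copies of everything under home, one copy elsewhere
    zfs set copies=2 tank/home
    zfs get -r copies tank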

Nico
--