Re: [zfs-discuss] ZFS + OpenSolaris for home NAS?

2010-01-15 Thread Charles Edge
To have Mac OS X connect via iSCSI:
http://krypted.com/mac-os-x/how-to-use-iscsi-on-mac-os-x/
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZIL to disk

2010-01-15 Thread Robert Milkowski

On 16/01/2010 00:09, Jeffry Molanus wrote:
>> -Original Message-
>> From: neil.per...@sun.com [mailto:neil.per...@sun.com]
>>
>> I think you misunderstand the function of the ZIL. It's not a journal,
>> and doesn't get transferred to the pool as of a txg. It's only ever
>> written, except after a crash it's read to do replay. See:
>>
>> http://blogs.sun.com/perrin/entry/the_lumberjack
>
> I also read another blog[1]; the part of interest here is this:
>
> The zil behaves differently for different size of writes that happens. For
> small writes, the data is stored as a part of the log record. For writes
> greater than zfs_immediate_write_sz (64KB), the ZIL does not store a copy of
> the write, but rather syncs the write to disk and only a pointer to the
> sync-ed data is stored in the log record.
>
> If I understand this right, writes <64KB get stored on the SSD devices.

If an application requests a synchronous write, it is committed to the
ZIL immediately; once that is done, the I/O is acknowledged to the application.
But the data written to the ZIL is still in memory as part of the currently
open txg and will be committed to the pool with no need to read anything back
from the ZIL. Then there is the optimization you quoted above, so the data
blocks don't necessarily need to be written to the log, just pointers to them.


Now it is slightly more complicated, as you need to take into account the
logbias property and the possibility that a dedicated log device is present.
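
For example, roughly (the dataset name here is just a placeholder):

    # with a slog present and the default logbias=latency, sync writes go
    # to the slog; logbias=throughput makes zfs skip the slog for a dataset
    zfs get logbias tank/fs
    zfs set logbias=throughput tank/fs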


As Neil wrote, ZFS will read from the ZIL only if, while importing a pool, it
detects that there is some data in the ZIL which hasn't been committed to the
pool yet, which could happen due to a system reset, power loss or devices
suddenly disappearing.


--
Robert Milkowski
http://milek.blogspot.com


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Zfs over iscsi bad status

2010-01-15 Thread Arnaud Brand
I was testing ZFS over iSCSI (with COMSTAR sharing a zvol) and got some errors.
Target and initiator are on the same host.

I've copy-pasted an excerpt of zpool status below.
The pool (tank) containing the iSCSI-shared zvol (tank/tsmvol) is healthy and
shows no errors.
But the zpool (tsmvol) on the initiator side shows errors.

The processes that were running to generate some I/O load/volume are stuck and
aren't killable.
zpool destroy tsmvol, zpool export tsmvol and zfs list all stay stuck.
I managed to open an ssh session by connecting as a user that has no homedir
and thus doesn't run quota (which is also stuck and unkillable).

My questions are the following:

1 - Why does the tsmvol pool show up as degraded when the only device it
contains is itself degraded?
Shouldn't it show as faulted?

2 - Why are my load-generating processes (plain old cat) unkillable?
Shouldn't they have been killed by I/O errors caused by the dead pool?
I guess this is what the failmode setting is for, so I'll set it for the rest
of my tests.
But I'm a bit puzzled about the implications.
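
Something along these lines is what I plan to try (untested so far, so please
correct me if I've misread the man page):

    # the default failmode=wait blocks I/O until the device comes back;
    # continue should return EIO to new writes instead of hanging everything
    zpool set failmode=continue tsmvol
    zpool get failmode tsmvol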

3 - How could there be errors in localhost transmission?
I followed the basic steps outlined in the COMSTAR how-to; are there some
specific settings needed for localhost iSCSI access?

Thanks for your help,
Arnaud

Excerpt of zpool status :

  pool: tank
 state: ONLINE
 scrub: none requested
config:

NAME   STATE READ WRITE CKSUM
tank   ONLINE   0 0 0
  raidz1-0 ONLINE   0 0 0
c10t0d0p0  ONLINE   0 0 0
c10t1d0p0  ONLINE   0 0 0
c10t2d0p0  ONLINE   0 0 0
c10t3d0p0  ONLINE   0 0 0
c10t4d0p0  ONLINE   0 0 0
c10t5d0p0  ONLINE   0 0 0
c10t6d0p0  ONLINE   0 0 0
  raidz1-1 ONLINE   0 0 0
c10t7d0p0  ONLINE   0 0 0
c11t1d0p0  ONLINE   0 0 0
c11t2d0p0  ONLINE   0 0 0
c11t4d0p0  ONLINE   0 0 0
c11t5d0p0  ONLINE   0 0 0
c11t6d0p0  ONLINE   0 0 0
c11t7d0p0  ONLINE   0 0 0
logs
  c11t0d0p2ONLINE   0 0 0
cache
  c11t0d0p3ONLINE   0 0 0

errors: No known data errors

  pool: tsmvol
 state: DEGRADED
status: One or more devices are faulted in response to IO failures.
action: Make sure the affected devices are connected, then run 'zpool clear'.
   see: http://www.sun.com/msg/ZFS-8000-HC
 scrub: none requested
config:

NAME STATE READ WRITE CKSUM
tsmvol   DEGRADED 3 24,0K 0
  c9t600144F05DF34C004B50D7D80001d0  DEGRADED 1 24,3K 1  
too many errors

errors: 24614 data errors, use '-v' for a list



___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] (snv_129, snv_130) can't import zfs pool

2010-01-15 Thread Victor Latushkin

LevT wrote:
> switched to another system, RAM 4Gb -> 16Gb
>
> the importing process lasts about 18hrs now
> the system is responsive
>
> if developers want it I may provide ssh access
> I have no critical data there, it is an acceptance test only

If it is still relevant, feel free to contact me offline to set up ssh
access.


regards,
victor

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZIL to disk

2010-01-15 Thread Jeffry Molanus

> -Original Message-
> From: neil.per...@sun.com [mailto:neil.per...@sun.com]


> I think you misunderstand the function of the ZIL. It's not a journal,
> and doesn't get transferred to the pool as of a txg. It's only ever
> written except
> after a crash it's read to do replay. See:
> 
> http://blogs.sun.com/perrin/entry/the_lumberjack

I also read another blog[1]; the part of interest here is this:

The zil behaves differently for different size of writes that happens. For 
small writes, the data is stored as a part of the log record. For writes 
greater than zfs_immediate_write_sz (64KB), the ZIL does not store a copy of 
the write, but rather syncs the write to disk and only a pointer to the sync-ed 
data is stored in the log record.

If I understand this right, writes <64KB get stored on the SSD devices. 
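
If anyone wants to double-check that threshold on a running box, I believe the
tunable can be read with mdb (untested by me):

    # print the current cut-over value, in bytes (needs root)
    echo zfs_immediate_write_sz/D | mdb -k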

[1] http://blogs.sun.com/realneel/entry/the_zfs_intent_log
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Finito: Glassfish V3 Pet Catalog sample DEMO in VM Template - Internal Download

2010-01-15 Thread emuls
Hi All,

As I promised, I finished it before leaving Sun:



http://blogs.sun.com/VirtualGuru/entry/virtual_applinaces_ovf_workshop

Feedback is welcome

Nice day
Rudolf
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZIL to disk

2010-01-15 Thread Al Hopper
On Fri, Jan 15, 2010 at 1:59 PM, Jeffry Molanus
 wrote:
>
>> Sometimes people get confused about the ZIL and separate logs. For
>> sizing purposes,
>> the ZIL is a write-only workload.  Data which is written to the ZIL is
>> later asynchronously
>> written to the pool when the txg is committed.
>
> Right; the txg needs time to transfer the ZIL.
>
>
>> The ZFS write performance for this configuration should consistently
>> be greater than 80 IOPS.  We've seen measurements in the 600 write
>> IOPS range.  Why?  Because ZFS writes tend to be contiguous. Also,
>> with the SATA disk write cache enabled, bursts of writes are handled
>> quite nicely.
>>  -- richard
>
> Is there a method to determine this value before pool configuration? Some
> sort of rule of thumb? It would be sad to configure the pool and have to
> reconfigure later on because you discover the pool can't handle the txg
> commits from SSD to disk fast enough. In other words: with Y as the expected
> load, you would require a minimum of X mirror vdevs or X raidz vdevs in
> order to have a pool with enough bandwidth/IO to flush the ZIL without
> stalling the system.
>
>

All I can tell you (echoing what's been said elsewhere in this thread) is
that a beautiful ZIL device will have two main characteristics: 1) IOPS - it
must be an IOPS "monster" - and 2) low latency.  On my workloads, adding a ZIL
based on a nice fast 15k RPM SAS disk to a pool of nice 7,200 RPM SATA drives
didn't provide the kick-in-the-ascii improvement I was looking for.  In fact,
the improvement was almost impossible for a typical user to be aware of.
Why?  a) not enough IOPS and b) high latency.

Your starting point, at a bare *minimum*, should be an Intel X25-M SSD drive
[1], going only *up* from that base point.  YMMV of course - this
is based on my personal experience on a relatively small ZFS system.

[1] According to Intel, the X25-M should last 5 years if you write 20GB
to it every day.  Of course they don't provide a 5-year warranty -
only a 3-year one.  Draw your own conclusions.
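
Back-of-envelope, assuming Intel really means 20GB of host writes per day:

    # lifetime writes over the rated 5 years
    echo $((20 * 365 * 5))    # 36500 GB, i.e. roughly 36TB total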

-- 
Al Hopper  Logical Approach Inc,Plano,TX a...@logical-approach.com
   Voice: 972.379.2133 Timezone: US CDT
OpenSolaris Governing Board (OGB) Member - Apr 2005 to Mar 2007
http://www.opensolaris.org/os/community/ogb/ogb_2005-2007/
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs fast mirror resync?

2010-01-15 Thread Daniel Carosone
On Fri, Jan 15, 2010 at 10:37:15AM -0500, Charles Menser wrote:
> Perhaps an ISCSI mirror for a laptop? Online it when you are back
> "home" to keep your backup current.

I do exactly this, but:
 - It's not the only thing I do for backup. 
 - The iscsi initiator is currently being a major PITA for me.
   http://opensolaris.org/jive/thread.jspa?threadID=121484

This kind of sometimes-attached mirror (whether via iscsi, usb disks,
or whatever else) is a useful kind of backup for some circumstances,
but it is not much good in others. If your need to recover falls into
one of those other circumstances, it can be a bit rough.

I like it, because:
 - It's easy to set up (just a few commands) and (at least ideally)
   will work basically by itself after that, no thinking required.  
   Those are important characteristics of a good backup! :)  
 - It's reasonably quick to resilver, and reasonably unintrusive; it
   can make things slower, but doesn't prevent normal use while going.
 - It can be used for repair, rather than recovery, in the case where
   the laptop disk develops bad sectors, and maybe therefore avoid
   more complicated restores and impact analysis.  For recently-written
   data that hasn't been mirrored, too bad - but it's more likely to
   happen to older sectors.  Scrub regularly overnight with the mirror
   attached.

However, it falls short of the ideal in a number of ways:
 - In practice you often need to take some action (plug in usb disk,
   online/offline the component, clear a fault) to kick it off, and/or
   to prevent hangs and timeouts when moving away.  Sometimes you get
   those anyway (e.g timing out iscsi during zfs import at boot).
 - ZFS lacks the ability to assign preference or weights to mirrors
   for read, so running with the mirror attached can often slow the
   system down, even when not resilvering.
 - You mirror everything, including all that scratch data that's
   really not worth backing up, the data you already have replicated
   elsewhere, and those delete actions you did by mistake.
 - Restores are trickier and can take multiple steps and thinking,
   especially if you just want some critical data now.  It's hard to
   know what's in each backup instance.
 - The size of the backup is tied to the size of the disk - it's more
   complicated to keep more/older backups than what is on the
   primary disk, and trickier to restore to a different-sized disk.

I keep snapshots of the backing volume so I can have older images (as
well as within the backed-up pool), and if I ever need to look inside
an image I clone it before importing on the server.  Remember to use
import -R!
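
For reference, when I do need to look inside an image the sequence is roughly
this (pool and dataset names are made up):

    # on the server holding the backing zvol for the iscsi target
    zfs snapshot tank/laptop-vol@inspect
    zfs clone tank/laptop-vol@inspect tank/laptop-inspect
    # import the pool inside the clone under an alternate root so its
    # mountpoints don't collide with anything on the server
    zpool import -d /dev/zvol/dsk/tank -R /mnt/inspect laptop-pool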

I also zfs send -R my pool to another zfs elsewhere, as a second
backup. I have some very Q&D hokey scripts for this (which I want to
rewrite and make smarter, now that we have snapshot holds), as do
other people, but there's not yet quite such an easy setup path for
this.

I do both things, in the hope that each makes up for the deficiencies
of the other.

--
Dan.


pgpagaxgDjGbh.pgp
Description: PGP signature
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Add disk to raidz pool

2010-01-15 Thread Bob Friesenhahn

On Fri, 15 Jan 2010, Kenny wrote:
> Can you add a disk (volume) to an existing raidz pool??  Or do you
> still need to create a new pool, copy data, destroy old pool in
> order to expand the pool size?


There is not really such a thing as a 'raidz pool'.  You are not able 
to add another disk to a raidz vdev, but you can add additional vdevs 
to your pool (typically requires more than one disk) in order to 
expand the pool size.
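
For example, something like this adds a second raidz vdev to an existing pool
(device names are placeholders, and note that adding a vdev is not reversible):

    # add a new 3-disk raidz top-level vdev alongside the existing one(s)
    zpool add tank raidz c2t0d0 c2t1d0 c2t2d0
    zpool status tank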


Bob
--
Bob Friesenhahn
bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,http://www.GraphicsMagick.org/
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZIL to disk

2010-01-15 Thread Scott Meilicke
I think Y is such a variable and complex number that it would be difficult to
give a rule of thumb, other than to 'test with your workload'.

My server, with three five-disk raidz vdevs (striped) and an Intel X25-E as a
log device, can fill my two gigabit Ethernet pipes over NFS (~200 MB/s) during
mostly sequential writes. That same server can only sustain about 22 MB/s with
an artificial load designed to simulate my VM activity (using Iometer). So it
varies greatly depending upon Y.

-Scott
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZIL to disk

2010-01-15 Thread Neil Perrin



On 01/15/10 12:59, Jeffry Molanus wrote:
>> Sometimes people get confused about the ZIL and separate logs. For
>> sizing purposes, the ZIL is a write-only workload.  Data which is
>> written to the ZIL is later asynchronously written to the pool when
>> the txg is committed.
>
> Right; the txg needs time to transfer the ZIL.

I think you misunderstand the function of the ZIL. It's not a journal,
and doesn't get transferred to the pool as of a txg. It's only ever
written, except after a crash it's read to do replay. See:

http://blogs.sun.com/perrin/entry/the_lumberjack

>> The ZFS write performance for this configuration should consistently
>> be greater than 80 IOPS.  We've seen measurements in the 600 write
>> IOPS range.  Why?  Because ZFS writes tend to be contiguous. Also,
>> with the SATA disk write cache enabled, bursts of writes are handled
>> quite nicely.
>>  -- richard
>
> Is there a method to determine this value before pool configuration? Some
> sort of rule of thumb? It would be sad to configure the pool and have to
> reconfigure later on because you discover the pool can't handle the txg
> commits from SSD to disk fast enough. In other words: with Y as the expected
> load, you would require a minimum of X mirror vdevs or X raidz vdevs in
> order to have a pool with enough bandwidth/IO to flush the ZIL without
> stalling the system.
>
> Jeffry


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss



Re: [zfs-discuss] Can I use a clone to split a filesystem?

2010-01-15 Thread A Darren Dunham
On Fri, Jan 15, 2010 at 02:07:40PM -0600, Gary Mills wrote:
> I have a ZFS filesystem that I wish to split into two
> ZFS filesystems at one of the subdirectories.  I understand that I
> first need to make a snapshot of the filesystem and then make a clone
> of the snapshot, with a different name.  Then, in the clone I can move
> everything in that subdirectory up to the top level, and in the
> original I can remove that subdirectory.  All of these operations
> should be quick, with no copying of data.
> 
> I understand also that I won't be able to destroy the snapshot because
> the clone is dependent on it.  Can I just promote the clone and then
> destroy the snapshot?  Does that remove all dependencies?

No.  A clone and the parent filesystem are always bound together through
that snapshot.  You can't remove it as long as both filesystems remain
in existence. 
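
For concreteness, the sequence you describe looks roughly like this (dataset
names are placeholders):

    zfs snapshot pool/fs@split
    zfs clone pool/fs@split pool/newfs
    zfs promote pool/newfs
    # after the promote the snapshot becomes pool/newfs@split and pool/fs
    # is now the dependent clone; the snapshot still can't be destroyed
    # while both filesystems exist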

> I suppose a simpler case, without the file shuffling, would be to
> create a second filesystem as a copy of a first.  All of the examples
> of this I've seen destroy the first filesystem but don't destroy the
> snapshot.  I want to do the opposite.  Is this possible?

Snapshots don't create copies.  If you want them to be truly
independent, you'd have to create the filesystem and then copy the data
manually.
-- 
Darren
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Backing up a ZFS pool

2010-01-15 Thread Bryan Allen
Have a simple rolling ZFS replication script:

http://dpaste.com/145790/
-- 
bda
cyberpunk is dead. long live cyberpunk.
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Backing up a ZFS pool

2010-01-15 Thread David Dyer-Bennet

On Fri, January 15, 2010 13:47, Kenny wrote:
> What is the best way to back up a zfs pool for recovery?  Recover entire
> pool or files from a pool...  Would you use snapshots and clones?
>
> I would like to move the "backup" to a different disk and not use tapes.
>
> suggestions??

What I'm trying to do is:

  1)  Make regular snapshots on the live filesystems.  So long as nothing
goes wrong, people can recover individual files from those easily.

  2)  Back up the live filesystems to one or more backup pools, with all
snapshots.  This can be restored to the live filesystem if there's a
total disaster, or mounted and individual files retrieved if necessary.

This does take up more space in the live filesystem; if one eliminated all
the old snapshots there, it would be smaller.  Since the big things in
this environment tend to stick around once they appear, I don't mind this
too much.

To accomplish 2, I'm trying to use zfs send/receive.  I'm not going to
archive the stream, just use it to create / update the backup filesystem. 
So far, I'm running into frequent problems.  I can't get incrementals to
work, and the last time I made a full backup, I couldn't export the pool
afterwards.
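
Roughly the pattern I'm attempting, in case anyone spots an obvious mistake
(dataset names simplified):

    # initial full replication of the filesystem and all its snapshots
    zfs snapshot -r tank/home@back1
    zfs send -R tank/home@back1 | zfs receive -F backup/home

    # later: incremental update from the previous backup snapshot
    zfs snapshot -r tank/home@back2
    zfs send -R -i @back1 tank/home@back2 | zfs receive -F backup/home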

I had a previous system using rsync working fine, but that didn't handle
ZFS ACLs properly, and when I went from Samba to cifs, that became an
issue.

-- 
David Dyer-Bennet, d...@dd-b.net; http://dd-b.net/
Snapshots: http://dd-b.net/dd-b/SnapshotAlbum/data/
Photos: http://dd-b.net/photography/gallery/
Dragaera: http://dragaera.info

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Can I use a clone to split a filesystem?

2010-01-15 Thread Gary Mills
I've been reading the zfs man page, but I'm confused about
dependencies.  I have a ZFS filesystem that I wish to split into two
ZFS filesystems at one of the subdirectories.  I understand that I
first need to make a snapshot of the filesystem and then make a clone
of the snapshot, with a different name.  Then, in the clone I can move
everything in that subdirectory up to the top level, and in the
original I can remove that subdirectory.  All of these operations
should be quick, with no copying of data.

I understand also that I won't be able to destroy the snapshot because
the clone is dependent on it.  Can I just promote the clone and then
destroy the snapshot?  Does that remove all dependencies?

I suppose a simpler case, without the file shuffling, would be to
create a second filesystem as a copy of a first.  All of the examples
of this I've seen destroy the first filesystem but don't destroy the
snapshot.  I want to do the opposite.  Is this possible?

-- 
-Gary Mills--Unix Group--Computer and Network Services-
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZIL to disk

2010-01-15 Thread Jeffry Molanus
 
> Sometimes people get confused about the ZIL and separate logs. For
> sizing purposes,
> the ZIL is a write-only workload.  Data which is written to the ZIL is
> later asynchronously
> written to the pool when the txg is committed.

Right; the txg needs time to transfer the ZIL.


> The ZFS write performance for this configuration should consistently
> be greater than 80 IOPS.  We've seen measurements in the 600 write
> IOPS range.  Why?  Because ZFS writes tend to be contiguous. Also,
> with the SATA disk write cache enabled, bursts of writes are handled
> quite nicely.
>  -- richard

Is there a method to determine this value before pool configuration? Some sort
of rule of thumb? It would be sad to configure the pool and have to
reconfigure later on because you discover the pool can't handle the txg
commits from SSD to disk fast enough. In other words: with Y as the expected
load, you would require a minimum of X mirror vdevs or X raidz vdevs in order
to have a pool with enough bandwidth/IO to flush the ZIL without stalling the
system.


Jeffry


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Add disk to raidz pool

2010-01-15 Thread Kenny
Can you add a disk (volume) to an existing raidz pool??  Or do you still need 
to create a new pool, copy data, destroy old pool in order to expand the pool 
size?

TIA   --Kenny
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Backing up a ZFS pool

2010-01-15 Thread Kenny
What is the best way to back up a zfs pool for recovery?  Recover entire pool 
or files from a pool...  Would you use snapshots and clones?

I would like to move the "backup" to a different disk and not use tapes.

suggestions??

TIA   --Kenny
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Is the disk a member of a zpool?

2010-01-15 Thread Victor Latushkin

Lutz Schumann wrote:
> The on-disk layout is shown here:
> http://hub.opensolaris.org/bin/download/Community+Group+zfs/docs/ondiskformat0822.pdf
>
> You can use the name-value pairs in the vdev label (I guess). Unfortunately
> I do not know of any scripts.


you can try

zdb -l /dev/rdsk/cXtYdZs0
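
and to script it across everything the server can see, something roughly like
this should work (untested; it just pulls the pool name out of the label):

    #!/bin/sh
    # report which pool, if any, each disk's vdev label claims membership of
    for d in /dev/rdsk/c*t*d*s0
    do
        pool=`zdb -l $d 2>/dev/null | grep " name: " | head -1 | awk -F"'" '{print $2}'`
        if [ -n "$pool" ]; then
            echo "$d: pool '$pool'"
        else
            echo "$d: no ZFS label found"
        fi
    done
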
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Is the disk a member of a zpool?

2010-01-15 Thread Lutz Schumann
The on-disk layout is shown here:
http://hub.opensolaris.org/bin/download/Community+Group+zfs/docs/ondiskformat0822.pdf

You can use the name-value pairs in the vdev label (I guess). Unfortunately I
do not know of any scripts.
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] adpu320 scsi timeouts only with ZFS

2010-01-15 Thread Marty Scholes
> To fix it, I swapped out the Adaptec controller and
> put in LSI Logic  
> and all the problems went away.

I'm using Sun's built-in LSI controller with (I presume) the original internal 
cable shipped by Sun.

Still, no joy for me at U320 speeds.  To be precise, when the controller is set 
at U320, it runs amazingly fast until it freezes, at which point it is quite 
slow.
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Is the disk a member of a zpool?

2010-01-15 Thread Morten-Christian Bernson
I was curious as to whether it's possible to know if a disk device (from the
SAN) is a member of any zpool.  The disks are shared to several servers, and
the zpool is exported/imported between servers.  I am writing a script to list
all the disks available from the SAN with some misc. information, and it would
be nice to list which pool each disk is a part of, and which disks are free,
to quickly find new disks when they are shared out from the SAN...

It would not be possible to get this from a "zpool status", since not all of
the disks will be present on the machine; some may be in use in a zpool
mounted on another machine.  I have noticed, though, that if you run "format"
on a disk, it will tell you if the disk is a member of a ZFS pool, and the
name of the pool in question.  This must mean that this information is stored
on the disk somehow, and the question is: is it possible to get this
information through a command in a script?
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs fast mirror resync?

2010-01-15 Thread Charles Menser
Perhaps an iSCSI mirror for a laptop? Online it when you are back
"home" to keep your backup current.

Charles

On Thu, Jan 14, 2010 at 7:04 PM, A Darren Dunham  wrote:
> On Thu, Jan 14, 2010 at 06:11:10PM -0500, Miles Nordin wrote:
>> zpool offline / zpool online of a mirror component will indeed
>> fast-resync, and I do it all the time.  zpool detach / attach will
>> not.
>
> Yes, but the offline device is still part of the pool.  What are you
> doing with the device when you take it offline?  (What's the reason
> you're offlining it?)
>
> --
> Darren
> ___
> zfs-discuss mailing list
> zfs-discuss@opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
>
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zpool fragmentation issues? (dovecot) [SEC=UNCLASSIFIED]

2010-01-15 Thread Michael Keller
> Got a link to this magic dbox format ?

http://wiki.dovecot.org/MailboxFormat
http://wiki.dovecot.org/MailboxFormat/dbox
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] zfs send/receive as backup - reliability?

2010-01-15 Thread Lassi Tuura
Hi,

I am considering building a modest sized storage system with zfs. Some of the 
data on this is quite valuable, some small subset to be backed up "forever", 
and I am evaluating back-up options with that in mind.

My understanding is that zfs send approximately captures the copy-on-write file 
system block-level dump, and zfs receive plays it back to rebuild the file 
system, and this can be used among other things for back-ups. I call the dump 
"stream" below.

How reliable is this? I don't mind the fact I would have to replay entire file 
system instead of individual files. My concern is that for whatever reason I'd 
lose ability to play the stream back, and would not be able to restore possibly 
years from now.

Is the format documented? Is it possible to interpret the data with independent 
tools, like it's possible with tar/pax/cpio archives? Even if no such tool 
exists now, could I for example write a user space tool using currently 
existing open source zfs user space library that was able to extract useful 
information from the data stream? I realise this could be somewhat complex, 
especially incremental dumps - but just how hard?

How exactly does the stream have to match the file system to restore? My
assumption is that zfs requires an exact match: you can only restore at the
exact point you backed up from. (Fine by me, just need to know.)

Follow-up: does one corrupted incremental back-up set invalidate all 
incremental back-up sets until the next full back-up point (or higher-level 
incremental point)?

Assuming the zfs send data stream hasn't been corrupted, have there been 
instances where it's not possible to restore file system by playing it back via 
zfs receive?

Have there been cases where some bug has caused "zfs send" data become 
corrupted so that restoration is no longer possible? (Either zfs on-disk file 
system bug, or something in zfs send not doing the right thing.)

Is it possible to make a dry-run restoration attempt at back-up time to verify
that the restoration would succeed?
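
(I am imagining something along the lines of the following, if zfs receive's
dry-run option is the right tool for that; the names are made up:)

    # parse the stream and report what would be received, without writing it
    zfs send -R tank/data@weekly | zfs receive -n -v scratch/restore-test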

Would you recommend zfs send/receive as a back-up strategy for highly valuable
data at this point in time? Ignoring the individual file vs. whole file system
aspect, how would you rank its reliability compared to tar/pax?

Regards,
Lassi
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Recovering a broken mirror

2010-01-15 Thread Jim Sloey
Never mind.
It looks like the controller is flakey. Neither disk in the mirror is clean.
Attempts to backup and recover the remaining disk produced I/O errors that were 
traced to the controller.
Thanks for your help Victor.
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] New ZFS Intent Log (ZIL) device available - Beta program now open!

2010-01-15 Thread Andrey Kuzmin
On Fri, Jan 15, 2010 at 2:07 AM, Christopher George
 wrote:
>> Why not enlighten EMC/NTAP on this then?
>
> On the basic chemistry and possible failure characteristics of Li-Ion
> batteries?
>
> I will agree, if I had system level control as in either example, one could
> definitely help mitigate said risks compared to selling a card based
> product where I have very little control over the thermal envelopes I am
> subjected.
>
>> Could you please elaborate on the last statement, provided you meant
>> anything beyond "UPS is a power-backup standard"?
>
> Although, I do think the discourse is healthy and relevant.  At this point, I
> am comfortable to agree to disagree.  I respect your point of view, and do

Same on my side. I don't object to your design decision; my objection
was to the negative advertising wrt the other design. Good luck with the
beta and beyond.

Regards,
Andrey

> agree strongly that Li-Ion batteries play a critical and highly valued role in
> many industries.

>
> Thanks,
>
> Christopher George
> Founder/CTO
> www.ddrdrive.com
> --
> This message posted from opensolaris.org
> ___
> zfs-discuss mailing list
> zfs-discuss@opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
>
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] New ZFS Intent Log (ZIL) device available - Beta program now open!

2010-01-15 Thread zfsml

On 1/13/10 9:51 AM, Christopher George wrote:

> The DDRdrive X1 OpenSolaris device driver is now complete,
> please join us in our first-ever ZFS Intent Log (ZIL) beta test
> program.  A select number of X1s are available for loan,
> preferred candidates would have a validation background
> and/or a true passion for torturing new hardware/driver :-)
>
> We are singularly focused on the ZIL device market, so a test
> environment bound by synchronous writes is required.  The
> beta program will provide extensive technical support and a
> unique opportunity to have direct interaction with the product
> designers.
>
> Would you like to take part in the advancement of Open
> Storage and explore the far-reaching potential of ZFS
> based Hybrid Storage Pools?
>
> If so, please send an inquiry to "zfs at ddrdrive dot com".
>
> The drive for speed,
>
> Christopher George
> Founder/CTO
> www.ddrdrive.com
>
> *** Special thanks goes out to SUN employees Garrett D'Amore and
> James McPherson for their exemplary help and support.  Well done!


I'd like to say thanks for putting a price point somewhere on your website;
so many people with new products make the mistake of thinking it is important
to withhold that info from potential customers.

I get vendors calling or wanting to meet without giving a price range.
I tell them: "We don't have to meet. If it costs $15,000 I don't want any,
if it is $1,000 I'll take one, if it is $100 I'll take 12..."

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] New ZFS Intent Log (ZIL) device available - Beta program now open!

2010-01-15 Thread Al Hopper
On Thu, Jan 14, 2010 at 5:07 PM, Christopher George
 wrote:
>> Why not enlighten EMC/NTAP on this then?
>
> On the basic chemistry and possible failure characteristics of Li-Ion
> batteries?
>
> I will agree, if I had system level control as in either example, one could
> definitely help mitigate said risks compared to selling a card based
> product where I have very little control over the thermal envelopes I am
> subjected.
>
>> Could you please elaborate on the last statement, provided you meant
>> anything beyond "UPS is a power-backup standard"?
>
> Although, I do think the discourse is healthy and relevant.  At this point, I
> am comfortable to agree to disagree.  I respect your point of view, and do
> agree strongly that Li-Ion batteries play a critical and highly valued role in
> many industries.
>
> Thanks,
>

Congratulations Christopher - great product - and I'm sure that this
will be the first of a family of products for you!  Personally I like
the simplicity of the design and the lack of a battery to worry about.

This is a great addition to any ZFS installation ...   :)

Regards,

-- 
Al Hopper  Logical Approach Inc,Plano,TX a...@logical-approach.com
   Voice: 972.379.2133 Timezone: US CDT
OpenSolaris Governing Board (OGB) Member - Apr 2005 to Mar 2007
http://www.opensolaris.org/os/community/ogb/ogb_2005-2007/
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Writing faster than storage can handle crashes system

2010-01-15 Thread Saso Kiselkov
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

I've run into yet another peculiarity while building my video storage
solution on top of ZFS: if I fill up the disk to 98%, writes slow down -
this is expected. However, what happens when the write load overwhelms
the storage system is puzzling to me. Instead of just blocking the
writer threads, the whole system's memory usage also starts to spiral
out of control (and not by my processes, their VM usage remains
constant, as they simply start rejecting input if the writer threads
stall for too long). This process completely exhausts all physical
memory and starts swapping memory pages in and out, essentially bringing
the whole system to a screeching halt.

Even if I SIGKILL all the writing processes, they remain around as
zombies and the memory load continues to increase for quite some time,
often times still overwhelming the system - I can still see the disks
writing like crazy in iostat.

Is this behavior normal? What subsystems could be causing it? Or is it
totally unusual and should I be looking for it somewhere in my app?

BR,
- --
Saso
-BEGIN PGP SIGNATURE-
Version: GnuPG v1.4.9 (GNU/Linux)
Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org

iEYEARECAAYFAktQOQkACgkQRO8UcfzpOHB4VgCeLUQFppKadZvuEhPRjz7zqaRR
BHIAoIZZkHIDpkbkXcfjaqv6Ku2B5DMp
=ErtL
-END PGP SIGNATURE-
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss