Re: [zfs-discuss] Pool import with failed ZIL device now possible ?

2010-05-04 Thread Robert Milkowski

On 16/02/2010 21:54, Jeff Bonwick wrote:

People used fastfs for years in specific environments (hopefully
understanding the risks), and disabling the ZIL is safer than fastfs.
Seems like it would be a useful ZFS dataset parameter.
 

We agree.  There's an open RFE for this:

6280630 zil synchronicity

No promise on date, but it will bubble to the top eventually.

   


So everyone knows - it has been integrated into snv_140 :)
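
A minimal sketch of the per-dataset control this integration adds, assuming
the sync property semantics described in the RFE (the dataset name below is
hypothetical):

    # Per-dataset ZIL control (6280630 "zil synchronicity"):
    # sync=standard (default) | sync=always | sync=disabled
    zfs set sync=disabled tank/scratch    # disable synchronous semantics for this dataset only
    zfs get sync tank/scratch             # verify the setting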

--
Robert Milkowski
http://milek.blogspot.com



Re: [zfs-discuss] Pool import with failed ZIL device now possible ?

2010-05-04 Thread Victor Latushkin

On May 4, 2010, at 2:02 PM, Robert Milkowski wrote:

 On 16/02/2010 21:54, Jeff Bonwick wrote:
 People used fastfs for years in specific environments (hopefully
 understanding the risks), and disabling the ZIL is safer than fastfs.
 Seems like it would be a useful ZFS dataset parameter.
 
 We agree.  There's an open RFE for this:
 
 6280630 zil synchronicity
 
 No promise on date, but it will bubble to the top eventually.
 
   
 
 So everyone knows - it has been integrated into snv_140 :)

Congratulations, Robert!


Re: [zfs-discuss] Pool import with failed ZIL device now possible ?

2010-02-25 Thread Robert Milkowski

On 17/02/2010 09:55, Robert Milkowski wrote:

On 16/02/2010 23:59, Christo Kutrovsky wrote:

On ZVOLs it appears the setting kicks in live. I've tested this by
turning it off/on and testing with iometer on an exported iSCSI 
device (iscsitgtd not comstar).
I haven't looked at zvol's code handling zil_disable, but with 
datasets I'm sure I'm right.





yes, on zvols zil_disable takes immediate effect.

--
Robert Milkowski
http://milek.blogspot.com



Re: [zfs-discuss] Pool import with failed ZIL device now possible ?

2010-02-25 Thread Robert Milkowski

On 25/02/2010 12:48, Robert Milkowski wrote:

On 17/02/2010 09:55, Robert Milkowski wrote:

On 16/02/2010 23:59, Christo Kutrovsky wrote:

On ZVOLs it appears the setting kicks in live. I've tested this by
turning it off/on and testing with iometer on an exported iSCSI 
device (iscsitgtd not comstar).
I haven't looked at zvol's code handling zil_disable, but with 
datasets I'm sure I'm right.





yes, on zvols zil_disable takes immediate effect.



In the meantime you might be interested in:

http://milek.blogspot.com/2010/02/zvols-write-cache.html

--
Robert Milkowski
http://milek.blogspot.com



Re: [zfs-discuss] Pool import with failed ZIL device now possible ?

2010-02-17 Thread Robert Milkowski

On 16/02/2010 23:59, Christo Kutrovsky wrote:

Robert,

That would be pretty cool, especially if it makes it into the 2010.02 release. I 
hope there are no weird special cases that pop up from this improvement.

   

I'm pretty sure it won't make 2010.03


Regarding the workaround.

That's not my experience, unless it behaves differently on ZVOLs and datasets.

On ZVOLs it appears the setting kicks in live. I've tested this by turning it
off/on and testing with iometer on an exported iSCSI device (iscsitgtd not 
comstar).
   
I haven't looked at zvol's code handling zil_disable, but with datasets 
I'm sure I'm right.



--
Robert Milkowski
http://milek.blogspot.com



Re: [zfs-discuss] Pool import with failed ZIL device now possible ?

2010-02-16 Thread Moshe Vainer
Eric, is this answer by George wrong?

http://opensolaris.org/jive/message.jspa?messageID=439187#439187

Are we to expect the fix soon or is there still no schedule?

Thanks, 
Moshe


Re: [zfs-discuss] Pool import with failed ZIL device now possible ?

2010-02-16 Thread Andrew Gabriel

Darren J Moffat wrote:
You have done a risk analysis and if you are happy that your NTFS 
filesystems could be corrupt on those ZFS ZVOLs if you lose data then 
you could consider turning off the ZIL.  Note though that it isn't just
those ZVOLs you are serving to Windows that lose access to a ZIL but
*ALL* datasets on *ALL* pools, and that includes your root pool.

For what it's worth I personally run with the ZIL disabled on my home 
NAS system which is serving over NFS and CIFS to various clients, but 
I wouldn't recommend it to anyone.  The reason I say never to turn off 
the ZIL is because in most environments outside of home usage it just 
isn't worth the risk to do so (not even for a small business).


People used fastfs for years in specific environments (hopefully 
understanding the risks), and disabling the ZIL is safer than fastfs. 
Seems like it would be a useful ZFS dataset parameter.


--
Andrew



Re: [zfs-discuss] Pool import with failed ZIL device now possible ?

2010-02-16 Thread Jeff Bonwick
 People used fastfs for years in specific environments (hopefully 
 understanding the risks), and disabling the ZIL is safer than fastfs. 
 Seems like it would be a useful ZFS dataset parameter.

We agree.  There's an open RFE for this:

6280630 zil synchronicity

No promise on date, but it will bubble to the top eventually.

Jeff


Re: [zfs-discuss] Pool import with failed ZIL device now possible ?

2010-02-16 Thread Christo Kutrovsky
Jeff, thanks for the link, looking forward to per-dataset control.

6280630 zil synchronicity 
(http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6280630)

It's been open for 5 years now :) Looking forward to not compromising my entire 
storage with a disabled ZIL when I only need it on a few devices.

I would like to get back to the NTFS corruption on a ZFS iSCSI device during 
power loss.

Think home server scenario. When power goes down, everything goes down. So 
having to restart the client for cache consistency - no problems.

The question is: can written data cause corruption due to write coalescing, 
out-of-order writing, etc.?

I'm looking to answer the following question for myself: 
Do I need to roll back all my NTFS volumes on iSCSI to the last available 
snapshot every time there's a power failure involving the ZFS storage server 
with a disabled ZIL?


Re: [zfs-discuss] Pool import with failed ZIL device now possible ?

2010-02-16 Thread Daniel Carosone
On Tue, Feb 16, 2010 at 02:53:18PM -0800, Christo Kutrovsky wrote:
 looking to answer myself the following question: 
 Do I need to rollback all my NTFS volumes on iSCSI to the last available 
 snapshot every time there's a power failure involving the ZFS storage server 
 with a disabled ZIL.

No, but not for the reasons you think.  If the issue you're concerned
about applies, it applies whether the txg is tagged with a snapshot
name or not, whether it is the most recent or not. 

I don't think the issue applies; write reordering might happen within
a txg, because it has the freedom to do so within the whole-txg commit
boundary.  Out-of-order writes to the disk won't be valid until the
txg commits, which is what makes them reachable.  If other boundaries
also apply (sync commitments via iSCSI commands) they will be respected,
at least at that granularity.

--
Dan.



Re: [zfs-discuss] Pool import with failed ZIL device now possible ?

2010-02-16 Thread Robert Milkowski

On 16/02/2010 22:53, Christo Kutrovsky wrote:

Jeff, thanks for the link, looking forward to per-dataset control.

6280630 zil synchronicity 
(http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6280630)

It's been open for 5 years now :) Looking forward to not compromising my entire 
storage with disabled ZIL when I only need it on a few devices.

   
I quickly looked at the code and it seems to be rather simple to 
implement.

I will try to do it in the next couple of weeks if I find enough time.


btw: zil_disable is taken into account each time a ZFS filesystem is 
being mounted, so as a workaround you may unmount all filesystems you 
want to disable the ZIL for, set zil_disable to 1, mount these filesystems, 
and set zil_disable back to 0. That way it will affect only the 
filesystems which were mounted while zil_disable=1. This is of course 
not a bullet-proof solution, as other filesystems might be 
created/mounted during that period, but it still might be a good enough 
workaround for you if you know no other filesystems are being mounted 
during that time.
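
A sketch of that workaround using the usual OpenSolaris-era tools, assuming a
single dataset named tank/iscsi (hypothetical); verify the exact steps on your
own build before relying on them:

    zfs umount tank/iscsi                # unmount the dataset(s) to be affected
    echo zil_disable/W0t1 | mdb -kw      # set the zil_disable tunable to 1 in the live kernel
    zfs mount tank/iscsi                 # remount; the dataset now runs with the ZIL disabled
    echo zil_disable/W0t0 | mdb -kw      # restore the default so later mounts keep the ZIL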


--
Robert Milkowski
http://milek.blogspot.com



Re: [zfs-discuss] Pool import with failed ZIL device now possible ?

2010-02-16 Thread Christo Kutrovsky
Ok, now that you explained it, it makes sense. Thanks for replying Daniel.

I feel better now :) Suddenly, that Gigabyte i-RAM is no longer a necessity but a 
nice-to-have.

What would be really good to have is that per-dataset ZIL control in 
2010.02. And perhaps add another mode, "sync no wait", where the sync is issued 
but the application doesn't wait for it. Similar to Oracle's "commit nowait" vs 
"commit batch nowait" (the current idea for "delayed").


Re: [zfs-discuss] Pool import with failed ZIL device now possible ?

2010-02-16 Thread Christo Kutrovsky
Robert,

That would be pretty cool, especially if it makes it into the 2010.02 release. I 
hope there are no weird special cases that pop up from this improvement.

Regarding the workaround. 

That's not my experience, unless it behaves differently on ZVOLs and datasets.

On ZVOLs it appears the setting kicks in live. I've tested this by turning it 
off/on and testing with iometer on an exported iSCSI device (iscsitgtd, not 
COMSTAR).


Re: [zfs-discuss] Pool import with failed ZIL device now possible ?

2010-02-13 Thread Charles Hedrick
I have a similar situation. I have a system that is used for backup copies of 
logs and other non-critical things, where the primary copy is on a Netapp. Data 
gets written in batches a few times a day. We use this system because storage 
on it is a lot less expensive than on the Netapp. It's only non-critical data 
that is sent via NFS. Critical data is sent to this server either by zfs send | 
receive, or by an rsync running on the server that reads from the Netapp over 
NFS. Thus the important data shouldn't go through the ZIL.

I am seriously considering turning off the ZIL, because NFS write performance 
is so lousy.

I'd use SSD, except that I can't find a reasonable way of doing so. I have a 
pair of servers with Sun Cluster, sharing a J4200 JBOD. If there's a failure, 
operations move to the other server. Thus a local SSD is no better than a 
disabled ZIL. I'd love to put an SSD in the J4200, but the claim that this was 
going to be supported seems to have vanished.

Someone once asked why I bother with redundant systems if I don't care about the 
data. The answer is that if the NFS mounts hang, my production services hang. 
Also, I do care about some of the data. It just happens not to go through the 
ZIL.


Re: [zfs-discuss] Pool import with failed ZIL device now possible ?

2010-02-08 Thread Miles Nordin
 ck == Christo Kutrovsky kutrov...@pythian.com writes:
 djm == Darren J Moffat darr...@opensolaris.org writes:
 kth == Kjetil Torgrim Homme kjeti...@linpro.no writes:

ck The never turn off the ZIL sounds scary, but if the only
ck consequences are 15 (even 45) seconds of data loss .. i am
ck willing to take this for my home environment.

   djm You have done a risk analysis and if you are happy that your
   djm NTFS filesystems could be corrupt on those ZFS ZVOLs if you
   djm lose data then you could consider turning off the ZIL.

yeah I wonder if this might have more to do with write coalescing and
reordering within the virtualizing package's userland, though?
Disabling ZIL-writing should still cause ZVOL's to recover to a
crash-consistent state: so long as the NTFS was stored on a single
zvol it should not become corrupt.  It just might be older than you
might like, right?  I'm not sure it's working as well as that, just
saying it's probably not disabling the ZIL that's causing whatever
problems people have with guest NTFS's, right?

also, you can always roll back the zvol to the latest snapshot and
uncorrupt the NTFS.  so this "never" is probably too strong.
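
for completeness, a minimal sketch of that rollback, assuming a zvol named
tank/ntfs-vol with a snapshot taken before the outage (both names hypothetical):

    # Roll the zvol back to the named snapshot; anything written after it is
    # discarded (the snapshot must be the most recent one, or use -r).
    zfs rollback tank/ntfs-vol@before-crash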

especially because ZFS recovers to txg's, the need for fsync() by
certain applications is actually less than it is on other filesystems
that lack that characteristic and need to use fsync() as a barrier.
seems silly not to exploit this.

 I mean, there is no guarantee writes will be executed in order,
 so in theory, one could corrupt its NTFS file system.

   kth I think you have that guarantee, actually.

+1, at least from ZFS I think you have it.  It'll recover to a txg
commit which is a crash-consistent point-in-time snapshot w.r.t.
when the writes were submitted to it.  so as long as they aren't
being reordered by something above ZFS...

   kth I think you need to reboot the client so that its RAM cache is
   kth cleared before any other writes are made.

yeah it needs to understand the filesystem was force-unmounted, and
the only way to tell it so is to yank the virtual cord.

   djm For what it's worth I personally run with the ZIL disabled on
   djm my home NAS system which is serving over NFS and CIFS to
   djm various clients, but I wouldn't recommend it to anyone.  The
   djm reason I say never to turn off the ZIL is because in most
   djm environments outside of home usage it just isn't worth the
   djm risk to do so (not even for a small business).

yeah ok but IMHO you are getting way too much up in other people's
business, assuming things about them, by saying this.  these dire
warnings of NEVER are probably what's led to this recurring myth that
disabling ZIL-writing can lead to pool corruption when it can't.




Re: [zfs-discuss] Pool import with failed ZIL device now possible ?

2010-02-07 Thread Eric Schrock

On Feb 6, 2010, at 11:30 PM, Christo Kutrovsky wrote:

 Eric, thanks for clarifying.
 
 Could you confirm the release for #1 ? As today can be misleading depending 
 on the user.

A long time (snv_96/s10u8).

 Is there a schedule/target for #2 ?

No.

 And just to confirm the alternative to turn off the ZIL globally is the 
 equivalent to always throwing away some committed data on a crash/reboot (as 
 if your dedicated ZIL went bad every time)?

If by "turn off the ZIL" you mean to tweak the private kernel tunables to 
disable it, yes.  There is no supported way to turn off the ZIL.  If you don't 
have a log device, or your log device fails, then the main pool devices are used.

 I've seen enhancement requests to make ZIL control per dataset/zvol; has this 
 been worked on? Is there a bug number to follow?

There already is (as of snv_122/s10u9) the ability to change the 'logbias' 
property.  This allows you to turn off slogs for arbitrary datasets.  There is 
no way to turn off the ZIL per dataset, nor are there any plans to allow this.
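
A small sketch of the logbias property Eric mentions; the dataset name is
hypothetical:

    # logbias=throughput steers this dataset's synchronous writes away from the
    # slog and into the main pool; logbias=latency is the default behaviour.
    zfs set logbias=throughput tank/db-logs
    zfs get logbias tank/db-logs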

- Eric

 
 Thanks again for your reply.

--
Eric Schrock, Fishworks    http://blogs.sun.com/eschrock





Re: [zfs-discuss] Pool import with failed ZIL device now possible ?

2010-02-07 Thread Christo Kutrovsky
Eric,

I am confused. What's the difference between:

- turning off slogs (via logbias)
vs
- turning off ZIL (via kernel tunable)

Isn't that similar, just one is more granular?


Re: [zfs-discuss] Pool import with failed ZIL device now possible ?

2010-02-07 Thread Christo Kutrovsky
Darren, thanks for reply.

Still not clear to me, though.

The only purpose of the slog is to serve the ZIL. There may be many ZILs on a 
single slog.

From Milek's blog:

logbias=latency - data written to the slog first
logbias=throughput - data written directly to the dataset.

Here's my problem. I have a raidz device with SATA drives. I use it to serve 
iSCSI volumes that are used as NTFS devices (bootable).

Windows is constantly writing something to the devices and all writes are 
synchronous. The result is that cache flushes happen so often that the NCQ 
queue depth hardly goes above 0.5, resulting in very poor read/write 
performance.

Disabling the ZIL (globally, unfortunately) yields huge performance benefits for 
me, as now my ZFS server is acting as a buffer and Windows is far more snappy. 
And now I see queue > 3 occasionally and write performance doesn't suck big 
time.

I am fine with losing 5-10, even 15, seconds of data in the event of a crash, 
as long as the data is consistent.

The "never turn off the ZIL" advice sounds scary, but if the only consequences 
are 15 (even 45) seconds of data loss... I am willing to take this for my home 
environment.

Opinions?


Re: [zfs-discuss] Pool import with failed ZIL device now possible ?

2010-02-07 Thread Darren J Moffat

On 07/02/2010 20:56, Christo Kutrovsky wrote:

Darren, thanks for reply.

Still not clear to me, though.


Based on what you wrote below you do understand it.


The only purpose of the slog is to serve the ZIL. There may be many ZILs on a 
single slog.


Correct, and correct.


 From Milek's blog:

logbias=latency - data written to slog first
logbias=throughput - data written directly to the dataset.


Roughly yes, but there is slightly more to it than that; those
are implementation details.


Here's my problem. I have raidz device with SATA drives. I use it to serve 
iSCSI that is used for NTFS devices (bootable).

Windows is constantly writing something to the devices and all writes are 
synchronous. The result is that cache flushes are so often that the NCQ 
(queue depth) hardly goes above 0.5, resulting in very poor read/write performance.

Disabling the ZIL (globally unfortunately) yields huge performance benefits for me 
as now my ZFS server is acting as a buffer, and Windows is far more snappy. And 
now I see queue > 3 occasionally and write performance doesn't suck big time.


That hints that an SSD that is fast to write to would be a good addition 
to your system.



I am fine with losing 5-10, even 15, seconds of data in the event of a crash, 
as long as the data is consistent.

The "never turn off the ZIL" advice sounds scary, but if the only consequences are 15 
(even 45) seconds of data loss... I am willing to take this for my home environment.

Opinions?


You have done a risk analysis and if you are happy that your NTFS 
filesystems could be corrupt on those ZFS ZVOLs if you lose data then 
you could consider turning off the ZIL.  Note though that it isn't just
those ZVOLs you are serving to Windows that lose access to a ZIL but
*ALL* datasets on *ALL* pools, and that includes your root pool.

For what it's worth I personally run with the ZIL disabled on my home 
NAS system which is serving over NFS and CIFS to various clients, but I 
wouldn't recommend it to anyone.  The reason I say never to turn off the 
ZIL is because in most environments outside of home usage it just isn't 
worth the risk to do so (not even for a small business).


--
Darren J Moffat


Re: [zfs-discuss] Pool import with failed ZIL device now possible ?

2010-02-07 Thread Christo Kutrovsky
Has anyone seen soft corruption in NTFS iSCSI ZVOLs after a power loss?

I mean, there is no guarantee writes will be executed in order, so in theory, 
one could corrupt its NTFS file system.

Would best practice be to roll back to the last snapshot before making those 
iSCSI volumes available again?


Re: [zfs-discuss] Pool import with failed ZIL device now possible ?

2010-02-07 Thread Kjetil Torgrim Homme
Christo Kutrovsky kutrov...@pythian.com writes:

 Has anyone seen soft corruption in NTFS iSCSI ZVOLs after a power
 loss?

this is not from experience, but I'll answer anyway.

 I mean, there is no guarantee writes will be executed in order, so in
 theory, one could corrupt its NTFS file system.

I think you have that guarantee, actually.

the problem is that the Windows client will think that block N has been
updated, since the iSCSI server told it it was committed to stable
storage.  however, when ZIL is disabled, that update may get lost during
power loss.  if block N contains, say, directory information, this could
cause weird behaviour.  it may look fine at first -- the problem won't
appear until NTFS has thrown block N out of its cache and it needs to
re-read it from the server.  when the re-read stale data is combined
with fresh data from RAM, it's panic time...

 Would best practice be to rollback the last snapshot before making
 those iSCSI available again?

I think you need to reboot the client so that its RAM cache is cleared
before any other writes are made.

a rollback shouldn't be necessary.

-- 
Kjetil T. Homme
Redpill Linpro AS - Changing the game



Re: [zfs-discuss] Pool import with failed ZIL device now possible ?

2010-02-06 Thread Christo Kutrovsky
Me too,  I would like to know the answer.

I am considering Gigabyte's i-RAM for the ZIL, but I don't want to worry about 
what happens if the battery dies after a system crash.


Re: [zfs-discuss] Pool import with failed ZIL device now possible ?

2010-02-06 Thread Eric Schrock

On Feb 6, 2010, at 10:18 PM, Christo Kutrovsky wrote:

 Me too,  I would like to know the answer.
 
 I am considering Gigabyte's i-RAM for ZIL, but I don't want to worry what 
 happens if the battery dies after a system crash.

There are two different things here:

1. Opening a pool with a missing or broken top-level slog
2. Importing a pool with a missing or broken top-level slog

#1 works today.  The pool goes into the faulted state and the administrator has 
the ability to consciously repair the fault (thereby throwing away some amount 
of committed data) or re-attach the device if it is indeed just missing.

#2 is being worked on, but also does not affect the standard reboot case.
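
Roughly what case #1 looks like from the admin side; this is only a sketch
(pool and device names are hypothetical, and the exact status output and
suggested action vary by build):

    zpool status -x tank        # the log device shows up as UNAVAIL/FAULTED
    zpool online tank c3t0d0    # if the slog was merely missing and is back, re-attach it
    zpool clear tank            # otherwise acknowledge the fault, accepting the loss of committed data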

- Eric

--
Eric Schrock, Fishworks    http://blogs.sun.com/eschrock





Re: [zfs-discuss] Pool import with failed ZIL device now possible ?

2010-02-06 Thread Christo Kutrovsky
Eric, thanks for clarifying.

Could you confirm the release for #1? As "today" can be misleading depending 
on the user.

Is there a schedule/target for #2 ?

And just to confirm: the alternative, turning off the ZIL globally, is 
equivalent to always throwing away some committed data on a crash/reboot (as if 
your dedicated ZIL went bad every time)?

I've seen enhancement requests to make ZIL control per dataset/zvol; has this 
been worked on? Is there a bug number to follow?

Thanks again for your reply.


[zfs-discuss] Pool import with failed ZIL device now possible ?

2010-01-03 Thread Robert Heinzmann (reg)

Hello list,

someone (actually Neil Perrin, CC'd) mentioned in this thread:

http://mail.opensolaris.org/pipermail/zfs-discuss/2009-December/034340.html

that it should be possible to import a pool with failed log devices 
(with or without data loss?).


  Has the following error no consequences?

    Bug ID     6538021
    Synopsis   Need a way to force pool startup when zil cannot be replayed
    State      3-Accepted (Yes, that is a problem)
    Link       http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6538021

Er that bug should probably be closed as a duplicate.
We now have this functionality.

Can someone clarify in which release this is fixed and which bug 
addresses this?


Are the situations then handled like this:

1) Machine + ZIL fails (e.g. ZIL device damaged during a system crash)
In this case max 5 seconds of data loss, but the pool is still correct.

2) Machine reboot / shutdown (e.g. ZIL device failure during transport of a machine)
In this case no data loss.

3) Removal of a ZIL device from a pool?
Is this possible?
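
Regarding question 3: later builds do allow removing a dedicated log device.
A sketch, assuming a pool named tank with a slog c2t0d0 (both hypothetical) and
a zpool version new enough to support log removal:

    zpool remove tank c2t0d0    # detach the dedicated log device from the pool
    zpool status tank           # the logs section should be gone afterwards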

Regards,
Robert
