[zfs-discuss] FreeBSD ZFS

2012-08-09 Thread Karl Wagner

Hi everyone,

I have a couple of questions regarding FreeBSD's ZFS support.

Firstly, I believe it currently stands at zpool v28. Is this correct? 
Will this be updated any time soon?


Also, looking at the Wikipedia page, the updates beyond this are:
29  Solaris Nevada b148 RAID-Z/mirror hybrid allocator.
30  Solaris Nevada b149 ZFS encryption.
31  Solaris Nevada b150 improved 'zfs list' performance.
32  Solaris Nevada b151 One MB block support
33  Solaris Nevada b163 Improved share support

I am not currently interested in encryption, but what are the 
advantages of the other improvements? If I were to use Solaris 11 11/11 
on a small file server (running 16GB RAM and 3TB storage in 2 mirrored 
pairs) would I see any improvement in upgrading from v28 created under 
FreeBSD 9?
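(For context, here is roughly how I am checking things today - a hedged
example assuming a pool named "tank"; these are just the stock zpool
commands, so they should behave the same on FreeBSD 9 and Solaris 11:)

# zpool get version tank   # version of the existing pool (currently 28 here)
# zpool upgrade -v         # zpool versions this build of the tools supports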


Thanks
Karl
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] FreeBSD ZFS

2012-08-09 Thread Jim Klimov

2012-08-09 13:57, Karl Wagner wrote:

Hi everyone,

I have a couple of questions regarding FreeBSD's ZFS support.

Firstly, I believe it currently stands at zpool v28. Is this correct?
Will this be updated any time soon?

Also, looking at the Wikipedia page, the updates beyond this are:
29 Solaris Nevada b148 RAID-Z/mirror hybrid allocator.
30 Solaris Nevada b149 ZFS encryption.
31 Solaris Nevada b150 improved 'zfs list' performance.
32 Solaris Nevada b151 One MB block support
33 Solaris Nevada b163 Improved share support

I am not currently interested in encryption, but what are the advantages
of the other improvements? If I were to use Solaris 11 11/11 on a small
file server (running 16GB RAM and 3TB storage in 2 mirrored pairs) would
I see any improvement in upgrading from v28 created under FreeBSD 9?


From what I gather, ZFS features v29 and beyond are proprietary
to Oracle, so unless their licensing changes and/or the code is
officially published as open source, it is unlikely that these
features will ever appear in non-Oracle ZFS
implementations.

There is even some FUD about whether reusing the same zpool version
numbers for open-source reimplementations of identical features
(so that open and proprietary pools remain compatible) might invite
lawsuits.

In the end, the open-sourced ZFS community got no public replies
from Oracle regarding collaboration or lack thereof, and decided
to part ways and implement things independently from Oracle.
AFAIK main ZFS development converges in illumos-gate, contributed
to by some OpenSolaris-derived distros and being the upstream for
FreeBSD port of ZFS (probably others too).

Lacking an authority to assign zpool version numbers to particular
features, they instead went for enumerable feature flags, which
report whether a particular zfs/zpool format feature is in use
on the pool and supported by the software trying to import it.
New features in the works include modernized compression and
checksum algorithms, among others. The nominal zpool version is 5000
for pools with feature flags enabled, and that is currently
supported by the oi_151a5 prebuilt distro (I don't know of other
builds with it - the feature was integrated into the code this summer).
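(A quick hedged illustration, assuming a pool named "tank" imported on
such a build: once feature flags are in use, the pool stops reporting a
classic version number and the individual format features show up as
pool properties.)

# zpool get version tank                # typically reports '-' for a feature-flags pool
# zpool get all tank | grep 'feature@'  # e.g. feature@async_destroy  enabled  local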

Regarding your other question, what the v29+ features provide,
here's my understanding:
29 Solaris Nevada b148 RAID-Z/mirror hybrid allocator.
Small metadata blocks are allocated as mirrored copies
instead of being spread across the RAID-Z stripe, which makes
metadata-heavy operations faster and probably reduces the
associated storage and processing overhead.

30 Solaris Nevada b149 ZFS encryption.
Encryption of datasets, pools and/or objects?

31 Solaris Nevada b150 improved 'zfs list' performance.
Probably a performance bump

32 Solaris Nevada b151 One MB block support
Should improve the efficiency of large-file storage, especially
on modern 4 KB-sectored disks, by reducing the relative metadata
overhead and fragmentation (more data is written sequentially,
so low-level prefetches win more). Writes on *very full* pools
might suffer, because it becomes less likely that a free block
of that size can be found quickly (see the sketch after this list).

33 Solaris Nevada b163 Improved share support
Probably a performance and/or interoperability bump
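(On the 1 MB block support specifically - a hedged sketch, assuming a
Solaris 11 pool already upgraded to v32 and a dataset named tank/media,
both placeholders: large blocks are opt-in per dataset through the
recordsize property.)

# zfs set recordsize=1M tank/media
# zfs get recordsize tank/media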

HTH,
//Jim Klimov

BTW, are you sure your intended use of Solaris 11 fits within the
free-usage license restrictions (development/proof-of-concept,
basically)? This is not a rhetorical question: I know some home
users who were uncertain whether they could use Sol11 as their
home-NAS OS, home desktop or small-office server and, just to be
on the safe side, switched to one of the other distros. I would
welcome enlightened comments on this part ;)

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] FreeBSD ZFS

2012-08-09 Thread Joerg Schilling
Jim Klimov jimkli...@cos.ru wrote:

 In the end, the open-sourced ZFS community got no public replies
 from Oracle regarding collaboration or lack thereof, and decided
 to part ways and implement things independently from Oracle.
 AFAIK main ZFS development converges in illumos-gate, contributed
 to by some OpenSolaris-derived distros and being the upstream for
 FreeBSD port of ZFS (probably others too).

To me it seems that the open-sourced ZFS community is not open, or could you 
point me to their mailing list archives?

Jörg

-- 
 EMail:jo...@schily.isdn.cs.tu-berlin.de (home) Jörg Schilling D-13353 Berlin
   j...@cs.tu-berlin.de(uni)  
   joerg.schill...@fokus.fraunhofer.de (work) Blog: 
http://schily.blogspot.com/
 URL:  http://cdrecord.berlios.de/private/ ftp://ftp.berlios.de/pub/schily
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] FreeBSD ZFS

2012-08-09 Thread Karl Wagner

On 2012-08-09 11:35, Jim Klimov wrote:

2012-08-09 13:57, Karl Wagner wrote:

Hi everyone,

I have a couple of questions regarding FreeBSD's ZFS support.

Firstly, I believe it currently stands at zpool v28. Is this 
correct?

Will this be updated any time soon?

Also, looking at the Wikipedia page, the updates beyond this are:
29 Solaris Nevada b148 RAID-Z/mirror hybrid allocator.
30 Solaris Nevada b149 ZFS encryption.
31 Solaris Nevada b150 improved 'zfs list' performance.
32 Solaris Nevada b151 One MB block support
33 Solaris Nevada b163 Improved share support

I am not currently interested in encryption, but what are the 
advantages
of the other improvements? If I were to use Solaris 11 11/11 on a 
small
file server (running 16GB RAM and 3TB storage in 2 mirrored pairs) 
would

I see any improvement in upgrading from v28 created under FreeBSD 9?


From what I gather, ZFS features v29 and beyond are proprietary
to Oracle, so unless their licensing changes and/or the code is
officially published as open source, it is unlikely that these
features will ever appear in non-Oracle ZFS
implementations.

There is even some FUD about whether reusing the same zpool version
numbers for open-source reimplementations of identical features
(so that open and proprietary pools remain compatible) might invite
lawsuits.

In the end, the open-sourced ZFS community got no public replies
from Oracle regarding collaboration or lack thereof, and decided
to part ways and implement things independently from Oracle.
AFAIK main ZFS development converges in illumos-gate, contributed
to by some OpenSolaris-derived distros and being the upstream for

Thank you for the info.

Looking at your responses, I believe I may gain some advantage from an
upgrade beyond v28, particularly from the hybrid allocator and 1MB blocks.
However, I don't think it is likely to be worth sacrificing 
compatibility with other solutions.


Regarding licensing, I am not 100% certain of this. I, personally, 
count my entire home network as a dev platform (much to the dismay of my 
other half), and use it to learn stuff for work and/or personal 
projects. I doubt, however, that this fits Oracle's definition of 
development. This is another good reason for me to maintain backwards 
compatibility, so even if I decide to try out Sol11 I doubt I will be 
upgrading the pool.
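(If I do build a pool under Solaris 11, I would probably pin it to v28
explicitly so it stays importable on FreeBSD 9 - a hedged example, with
placeholder disk names:)

# zpool create -o version=28 tank mirror c1t0d0 c1t1d0
# zpool get version tank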


Thanks
Karl


FreeBSD port of ZFS (probably others too).

Lacking an authority to assign zpool version numbers to particular
features, they instead went for enumerable feature flags, which
report whether a particular zfs/zpool format feature is in use
on the pool and supported by the software trying to import it.
New features in the works include modernized compression and
checksum algorithms, among others. The nominal zpool version is 5000
for pools with feature flags enabled, and that is currently
supported by the oi_151a5 prebuilt distro (I don't know of other
builds with it - the feature was integrated into the code this summer).

Regarding your other question, what the v29+ features provide,
here's my understanding:
29 Solaris Nevada b148 RAID-Z/mirror hybrid allocator.
Small metadata blocks are allocated as mirrored copies
instead of being spread across the RAID-Z stripe, which makes
metadata-heavy operations faster and probably reduces the
associated storage and processing overhead.

30 Solaris Nevada b149 ZFS encryption.
Encryption of datasets, pools and/or objects?

31 Solaris Nevada b150 improved 'zfs list' performance.
Probably a performance bump

32 Solaris Nevada b151 One MB block support
Should improve the efficiency of large-file storage, especially
on modern 4 KB-sectored disks, by reducing the relative metadata
overhead and fragmentation (more data is written sequentially,
so low-level prefetches win more). Writes on *very full* pools
might suffer, because it becomes less likely that a free block
of that size can be found quickly.

33 Solaris Nevada b163 Improved share support
Probably a performance and/or interoperability bump

HTH,
//Jim Klimov

BTW, are you sure your intended use of Solaris 11 fits within the
free-usage license restrictions (development/proof-of-concept,
basically)? This is not a rhetorical question: I know some home
users who were uncertain whether they could use Sol11 as their
home-NAS OS, home desktop or small-office server and, just to be
on the safe side, switched to one of the other distros. I would
welcome enlightened comments on this part ;)


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] FreeBSD ZFS

2012-08-09 Thread Sašo Kiselkov
On 08/09/2012 12:52 PM, Joerg Schilling wrote:
 Jim Klimov jimkli...@cos.ru wrote:
 
 In the end, the open-sourced ZFS community got no public replies
 from Oracle regarding collaboration or lack thereof, and decided
 to part ways and implement things independently from Oracle.
 AFAIK main ZFS development converges in illumos-gate, contributed
 to by some OpenSolaris-derived distros and being the upstream for
 FreeBSD port of ZFS (probably others too).
 
 To me it seems that the open-sourced ZFS community is not open, or could 
 you 
 point me to their mailing list archives?
 
 Jörg
 

z...@lists.illumos.org

Welcome.

--
Saso
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] FreeBSD ZFS

2012-08-09 Thread opensolarisisdeadlongliveopensolaris
 From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
 boun...@opensolaris.org] On Behalf Of Joerg Schilling
 
 Jim Klimov jimkli...@cos.ru wrote:
 
  In the end, the open-sourced ZFS community got no public replies
  from Oracle regarding collaboration or lack thereof, and decided
  to part ways and implement things independently from Oracle.
  AFAIK main ZFS development converges in illumos-gate, contributed
  to by some OpenSolaris-derived distros and being the upstream for
  FreeBSD port of ZFS (probably others too).
 
 To me it seems that the open-sourced ZFS community is not open, or could
 you
 point me to their mailing list archives?

The last zpool version publicly released under OpenSolaris, v28, is
incorporated into illumos / OpenIndiana / Nexenta / etc.  Jim is talking about
how Oracle went closed-source after v28 and never issued any public statement
confirming or denying that they were doing so.  They simply stopped releasing
source.  So the community spun off from the latest open source code, and there
it is...

Since that time, open-source ZFS development has continued, but not at the same
rate and not in the same direction as the Oracle closed-source version.

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] FreeBSD ZFS

2012-08-09 Thread Joerg Schilling
Sašo Kiselkov skiselkov...@gmail.com wrote:

  To me it seems that the open-sourced ZFS community is not open, or could 
  you 
  point me to their mailing list archives?
  
  Jörg
  

 z...@lists.illumos.org

Well, why then has there been a discussion about a closed zfs mailing list?
Is this no longer true?

Jörg

-- 
 EMail:jo...@schily.isdn.cs.tu-berlin.de (home) Jörg Schilling D-13353 Berlin
   j...@cs.tu-berlin.de(uni)  
   joerg.schill...@fokus.fraunhofer.de (work) Blog: 
http://schily.blogspot.com/
 URL:  http://cdrecord.berlios.de/private/ ftp://ftp.berlios.de/pub/schily
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] FreeBSD ZFS

2012-08-09 Thread Sašo Kiselkov
On 08/09/2012 01:05 PM, Joerg Schilling wrote:
 Sašo Kiselkov skiselkov...@gmail.com wrote:
 
 To me it seems that the open-sourced ZFS community is not open, or could 
 you 
 point me to their mailing list archives?

 Jörg


 z...@lists.illumos.org
 
 Well, why then has there been a discussion about a closed zfs mailing list?
 Is this no longer true?

Not that I know of. The above one is where I post my changes and Matt,
George, Garrett and all the others are lurking there.

Cheers,
--
Saso
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] FreeBSD ZFS

2012-08-09 Thread Joerg Schilling
Sašo Kiselkov skiselkov...@gmail.com wrote:

 On 08/09/2012 01:05 PM, Joerg Schilling wrote:
  Sašo Kiselkov skiselkov...@gmail.com wrote:
  
  To me it seems that the open-sourced ZFS community is not open, or 
  could you 
  point me to their mailing list archives?
 
  Jörg
 
 
  z...@lists.illumos.org
  
  Well, why then has there been a discussion about a closed zfs mailing 
  list?
  Is this no longer true?

 Not that I know of. The above one is where I post my changes and Matt,
 George, Garrett and all the others are lurking there.

So if you frequently read this list, can you tell me whether they discuss the 
on-disk format in this list?

Jörg

-- 
 EMail:jo...@schily.isdn.cs.tu-berlin.de (home) Jörg Schilling D-13353 Berlin
   j...@cs.tu-berlin.de(uni)  
   joerg.schill...@fokus.fraunhofer.de (work) Blog: 
http://schily.blogspot.com/
 URL:  http://cdrecord.berlios.de/private/ ftp://ftp.berlios.de/pub/schily
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] FreeBSD ZFS

2012-08-09 Thread Sašo Kiselkov
On 08/09/2012 01:11 PM, Joerg Schilling wrote:
 Sašo Kiselkov skiselkov...@gmail.com wrote:
 
 On 08/09/2012 01:05 PM, Joerg Schilling wrote:
 Sašo Kiselkov skiselkov...@gmail.com wrote:

 To me it seems that the open-sourced ZFS community is not open, or 
 could you 
 point me to their mailing list archives?

 Jörg


 z...@lists.illumos.org

 Well, why then has there been a discussion about a closed zfs mailing 
 list?
 Is this no longer true?

 Not that I know of. The above one is where I post my changes and Matt,
 George, Garrett and all the others are lurking there.
 
 So if you frequently read this list, can you tell me whether they discuss the 
 on-disk format in this list?

It's more of a list for development discussion and integration of
changes, not a list for general ZFS discussion like zfs-discuss is.

--
Saso
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] FreeBSD ZFS

2012-08-09 Thread opensolarisisdeadlongliveopensolaris
 From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
 boun...@opensolaris.org] On Behalf Of Joerg Schilling
 
 Well, why then has there been a discussion about a closed zfs mailing list?
 Is this no longer true?

Oracle can do anything internally they want.  I would presume they have an 
internal mailing list for zfs developers, but that's in relation to the 
closed-source zfs that they develop.  Not in relation to the open-source zfs 
that's used in illumos, etc.

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] FreeBSD ZFS

2012-08-09 Thread Joerg Schilling
opensolarisisdeadlongliveopensolaris 
opensolarisisdeadlongliveopensola...@nedharvey.com wrote:

  From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
  boun...@opensolaris.org] On Behalf Of Joerg Schilling
  
  Well, why then has there been a discussion about a closed zfs mailing 
  list?
  Is this no longer true?

 Oracle can do anything internally they want.  I would presume they have an 
 internal mailing list for zfs developers, but that's in relation to the 
 closed-source zfs that they develop.  Not in relation to the open-source zfs 
 that's used in illumos, etc.

I was talking about illumos...

Jörg

-- 
 EMail:jo...@schily.isdn.cs.tu-berlin.de (home) Jörg Schilling D-13353 Berlin
   j...@cs.tu-berlin.de(uni)  
   joerg.schill...@fokus.fraunhofer.de (work) Blog: 
http://schily.blogspot.com/
 URL:  http://cdrecord.berlios.de/private/ ftp://ftp.berlios.de/pub/schily
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] FreeBSD ZFS

2012-08-09 Thread opensolarisisdeadlongliveopensolaris
 From: Joerg Schilling [mailto:joerg.schill...@fokus.fraunhofer.de]
 Sent: Thursday, August 09, 2012 11:35 AM
 
   From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
   boun...@opensolaris.org] On Behalf Of Joerg Schilling
  
   Well, why then has there been a discussion about a closed zfs mailing
 list?
   Is this no longer true?
 
  Oracle can do anything internally they want.  I would presume they have an
 internal mailing list for zfs developers, but that's in relation to the 
 closed-
 source zfs that they develop.  Not in relation to the open-source zfs that's
 used in illumos, etc.
 
 I was talking about illumos...

Then why are you talking about a closed zfs mailing list?
Have you heard of an illumos closed-zfs mailing list that I haven't heard of?

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] FreeBSD ZFS

2012-08-09 Thread Peter Jeremy
On 2012-Aug-09 16:05:00 +0530, Jim Klimov jimkli...@cos.ru wrote:
2012-08-09 13:57, Karl Wagner wrote:
 Firstly, I believe it currently stands at zpool v28. Is this correct?

For FreeBSD 8.x and 9.x, yes.  FreeBSD-head includes feature flags
and com.delphix:async_destroy.
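(A hedged example of checking that on -head, assuming a pool named "tank":
the feature is exposed as a pool property.)

# zpool get feature@async_destroy tank   # reports disabled, enabled or active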

 Will this be updated any time soon?

I expect 8-stable and 9-stable will be updated to match -head once
FreeBSD 9.1 is released (i.e. 9.1 won't support feature flags but 9.2
and a potential 8.4 will).  In general, FreeBSD imports ZFS fixes and
enhancements, mostly from Illumos, as they become available.  The
Oracle v29 and later updates won't be available in FreeBSD unless they
are open-sourced by Oracle.

New features in the works include modernized compression and
checksum algorithms, among others. The nominal zpool version is 5000
for pools with feature flags enabled, and that is currently
supported by the oi_151a5 prebuilt distro (I don't know of other
builds with it - the feature was integrated into the code this summer).

FreeBSD-head does.

-- 
Peter Jeremy


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] FreeBSD ZFS

2012-08-09 Thread Richard Elling
On Aug 9, 2012, at 4:11 AM, joerg.schill...@fokus.fraunhofer.de (Joerg 
Schilling) wrote:

 Sašo Kiselkov skiselkov...@gmail.com wrote:
 
 On 08/09/2012 01:05 PM, Joerg Schilling wrote:
 Sašo Kiselkov skiselkov...@gmail.com wrote:
 
 To me it seems that the open-sourced ZFS community is not open, or 
 could you 
 point me to their mailing list archives?
 
 Jörg
 
 
 z...@lists.illumos.org
 
 Well, why then has there been a discussion about a closed zfs mailing 
 list?
 Is this no longer true?
 
 Not that I know of. The above one is where I post my changes and Matt,
 George, Garrett and all the others are lurking there.
 
 So if you frequently read this list, can you tell me whether they discuss the 
 on-disk format in this list?

Yes, but nobody has posted proposals for new on-disk format changes
since feature flags was first announced. 

NB, z...@lists.illumos.org is but one of the many discussion groups
where ZFS users can get questions answered. There are also active
Mac OS X, ZFS on Linux, and OTN lists. IMHO, zfs-discuss@opensolaris
is shrinking, not growing.
  -- richard

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] (FreeBSD) ZFS RAID: Disk fails while replacing another disk

2010-03-09 Thread Victor Latushkin

Christian Hessmann wrote:

Victor,


Btw, they affect some files referenced by snapshots as
'zpool status -v' suggests:

  tank/DVD:0x9cd
  tank/d...@2010025100:/Memento.m4v
  tank/d...@2010025100:/Payback.m4v
  tank/d...@2010025100:/TheManWhoWasntThere.m4v

In case of OpenSolaris it is not that difficult to work around this bug
without getting rid of the files (or the snapshots referencing them) with
errors, but I'm not sure how to do the same on FreeBSD.
But you always have the option of destroying the snapshots indicated above
(and maybe more).


I'm still reluctant to reboot the machine, so what I did now was as you
suggested destroy these snapshots (after deleting the files from the
current filesystem, of course).
I'm not so sure the result is good, though:

===
[r...@camelot /tank/DVD]# zpool status -v tank
  pool: tank
 state: DEGRADED
status: One or more devices has experienced an error resulting in data
corruption.  Applications may be affected.
action: Restore the file in question if possible.  Otherwise restore the
entire pool from backup.
   see: http://www.sun.com/msg/ZFS-8000-8A
 scrub: resilver completed after 10h42m with 136 errors on Tue Mar  2
07:55:05 2010
config:

NAME   STATE READ WRITE CKSUM
tank   DEGRADED   137 0 0
  raidz1   ONLINE   0 0 0
ad17p2 ONLINE   0 0 0
ad18p2 ONLINE   0 0 0
ad20p2 ONLINE   0 0 0
  raidz1   DEGRADED   326 0 0
replacing  DEGRADED 0 0 0
  ad16p2   OFFLINE  2  241K 6
  ad4p2ONLINE   0 0 0  839G resilvered
ad14p2 ONLINE   0 0 0  5.33G resilvered
ad15p2 ONLINE 418 0 0  5.33G resilvered

errors: Permanent errors have been detected in the following files:

tank/DVD:0x9cd
0x2064:0x25a4
0x20ae:0x503
0x20ae:0x9cd
===

Is any further information available on these hex messages?


This tells you that ZFS can no longer map the object numbers in the error log to
meaningful names, and this is expected, as you have destroyed them.


Now you need to rerun a scrub.
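(Concretely, something like the following for your pool:)

# zpool scrub tank
# zpool status -v tank   # re-check the error list once the scrub completes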

regards,
victor

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] (FreeBSD) ZFS RAID: Disk fails while replacing another disk

2010-03-05 Thread Victor Latushkin

Mark J Musante wrote:

It looks like you're running into a DTL issue.  ZFS believes that ad16p2 has
some data on it that hasn't been copied off yet, and it's not considering the
fact that it's part of a raidz group and that ad4p2 has already resilvered.

There is a CR on this,
http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6909724 but what's
viewable in the bug database is pretty minimal.

If you haven't made a backup yet (or at least done a complete snapshot and
generated a send stream from it), my advice would be to do that now.  Then
reboot and see if that clears the DTL enough to let you do the detach.


Actually besides the bug mentioned above, resilvering will not clear DTLs upon 
completion due to


6887372 DTLs not cleared after resilver if permanent errors present

as there are permanent errors present. Btw, they affect some files referenced by 
snapshots as 'zpool status -v' suggests:


 tank/DVD:0x9cd
 tank/d...@2010025100:/Memento.m4v
 tank/d...@2010025100:/Payback.m4v
 tank/d...@2010025100:/TheManWhoWasntThere.m4v

In case of OpenSolaris it is not that difficult to work around this bug without
getting rid of the files (or the snapshots referencing them) with errors, but
I'm not sure how to do the same on FreeBSD.


But you always have the option of destroying the snapshots indicated above (and
maybe more).

regards,
victor





On 3 Mar, 2010, at 18.46, Christian Heßmann wrote:


Hello guys,


I've already written this on the FreeBSD forums, but so far, the feedback
is not so great - seems FreeBSD guys aren't that keen on ZFS. I have some
hopes you'll be more experienced on these kind of errors:

I have a ZFS pool comprised of two 3-disk RAIDs which I've recently moved
from OS X to FreeBSD (8 stable).

One harddisk failed last weekend with lots of shouting, SMART messages and
even a kernel panic. I attached a new disk and started the replacement. 
Unfortunately, about 20% into the replacement, a second disk in the same
RAID showed signs of misbehaviour by giving me read errors. The resilvering
did finish, though, and it left me with only three broken files according
to zpool status:

[r...@camelot /]# zpool status -v tank
  pool: tank
 state: DEGRADED
status: One or more devices has experienced an error resulting in data
        corruption.  Applications may be affected.
action: Restore the file in question if possible.  Otherwise restore the
        entire pool from backup.
   see: http://www.sun.com/msg/ZFS-8000-8A
 scrub: resilver completed after 10h42m with 136 errors on Tue Mar  2 07:55:05 2010
config:

        NAME           STATE     READ WRITE CKSUM
        tank           DEGRADED   137     0     0
          raidz1       ONLINE       0     0     0
            ad17p2     ONLINE       0     0     0
            ad18p2     ONLINE       0     0     0
            ad20p2     ONLINE       0     0     0
          raidz1       DEGRADED   326     0     0
            replacing  DEGRADED     0     0     0
              ad16p2   OFFLINE      2  169K     6
              ad4p2    ONLINE       0     0     0  839G resilvered
            ad14p2     ONLINE       0     0     0  5.33G resilvered
            ad15p2     ONLINE     418     0     0  5.33G resilvered

errors: Permanent errors have been detected in the following files:

        tank/DVD:0x9cd
        tank/d...@2010025100:/Memento.m4v
        tank/d...@2010025100:/Payback.m4v
        tank/d...@2010025100:/TheManWhoWasntThere.m4v


I have the feeling the problems on ad15p2 are related to a cable issue,
since it doesn't have any SMART errors, is quite a new drive (3 months old)
and was IMHO sufficiently burned in by repeatedly filling it to the brim
and checking the contents (via ZFS). So I'd like to switch off the server,
replace the cable and do a scrub afterwards to make sure it doesn't produce
additional errors.

Unfortunately, although it says the resilvering completed, I can't detach
ad16p2 (the first faulted disk) from the system:

[r...@camelot /]# zpool detach tank ad16p2
cannot detach ad16p2: no valid replicas

To be honest, I don't know how to proceed now. It feels like my system is
in a very unstable state right now, with a replacement not yet finished and
errors on two drives in one RAID.Z1.

I deleted the files affected, but have about 20 snapshots of this
filesystem and think these files are in most of them since they're quite
old.

So, what should I do now? Delete all snapshots? Move all other files from
this filesystem to a new filesystem and destroy the old filesystem? Try to
export and import the pool? Is it even safe to reboot the machine right
now?

I got one response in the FreeBSD Forum telling me I should reboot the
machine and do a scrub afterwards, it should then detect that it doesn't
need the old disk anymore - I am a bit reluctant doing that, to be
honest...

Any help would be appreciated.

Thank you.

Christian
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] (FreeBSD) ZFS RAID: Disk fails while replacing another disk

2010-03-05 Thread Christian Hessmann
Victor,

 Btw, they affect some files referenced by snapshots as
 'zpool status -v' suggests:

   tank/DVD:0x9cd
   tank/d...@2010025100:/Memento.m4v
   tank/d...@2010025100:/Payback.m4v
   tank/d...@2010025100:/TheManWhoWasntThere.m4v

 In case of OpenSolaris it is not that difficult to work around this bug
 without getting rid of the files (or the snapshots referencing them) with
 errors, but I'm not sure how to do the same on FreeBSD.
 But you always have the option of destroying the snapshots indicated above
 (and maybe more).

I'm still reluctant to reboot the machine, so what I did now was, as you
suggested, destroy these snapshots (after deleting the files from the
current filesystem, of course).
I'm not so sure the result is good, though:

===
[r...@camelot /tank/DVD]# zpool status -v tank
  pool: tank
 state: DEGRADED
status: One or more devices has experienced an error resulting in data
corruption.  Applications may be affected.
action: Restore the file in question if possible.  Otherwise restore the
entire pool from backup.
   see: http://www.sun.com/msg/ZFS-8000-8A
 scrub: resilver completed after 10h42m with 136 errors on Tue Mar  2
07:55:05 2010
config:

NAME   STATE READ WRITE CKSUM
tank   DEGRADED   137 0 0
  raidz1   ONLINE   0 0 0
ad17p2 ONLINE   0 0 0
ad18p2 ONLINE   0 0 0
ad20p2 ONLINE   0 0 0
  raidz1   DEGRADED   326 0 0
replacing  DEGRADED 0 0 0
  ad16p2   OFFLINE  2  241K 6
  ad4p2ONLINE   0 0 0  839G resilvered
ad14p2 ONLINE   0 0 0  5.33G resilvered
ad15p2 ONLINE 418 0 0  5.33G resilvered

errors: Permanent errors have been detected in the following files:

tank/DVD:0x9cd
0x2064:0x25a4
0x20ae:0x503
0x20ae:0x9cd
===

Is any further information available on these hex messages?


Regards
Christian

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] (FreeBSD) ZFS RAID: Disk fails while replacing another disk

2010-03-04 Thread Mark J Musante
It looks like you're running into a DTL issue.  ZFS believes that ad16p2 has
some data on it that hasn't been copied off yet, and it's not considering the
fact that it's part of a raidz group and that ad4p2 has already resilvered.

There is a CR on this, 
http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6909724 but what's 
viewable in the bug database is pretty minimal.

If you haven't made a backup yet (or at least done a complete snapshot and 
generated a send stream from it), my advice would be to do that now.  Then 
reboot and see if that clears the DTL enough to let you do the detach.
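(A hedged sketch of that backup step, assuming /backup has enough room and
using a placeholder snapshot name; -R produces a recursive replication
stream of the pool and its snapshots:)

# zfs snapshot -r tank@pre-reboot
# zfs send -R tank@pre-reboot > /backup/tank-pre-reboot.zfs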


On 3 Mar, 2010, at 18.46, Christian Heßmann wrote:

 Hello guys,
 
 
 I've already written this on the FreeBSD forums, but so far, the feedback is 
 not so great - seems FreeBSD guys aren't that keen on ZFS. I have some hopes 
 you'll be more experienced on these kind of errors:
 
 I have a ZFS pool comprised of two 3-disk RAIDs which I've recently moved 
 from OS X to FreeBSD (8 stable).
 
 One harddisk failed last weekend with lots of shouting, SMART messages and 
 even a kernel panic.
 I attached a new disk and started the replacement.
 Unfortunately, about 20% into the replacement, a second disk in the same RAID 
 showed signs of misbehaviour by giving me read errors. The resilvering did 
 finish, though, and it left me with only three broken files according to 
 zpool status:
 
 [r...@camelot /]# zpool status -v tank
  pool: tank
 state: DEGRADED
 status: One or more devices has experienced an error resulting in data
corruption.  Applications may be affected.
 action: Restore the file in question if possible.  Otherwise restore the
entire pool from backup.
   see: http://www.sun.com/msg/ZFS-8000-8A
 scrub: resilver completed after 10h42m with 136 errors on Tue Mar  2 07:55:05 
 2010
 config:
 
NAME   STATE READ WRITE CKSUM
tank   DEGRADED   137 0 0
  raidz1   ONLINE   0 0 0
ad17p2 ONLINE   0 0 0
ad18p2 ONLINE   0 0 0
ad20p2 ONLINE   0 0 0
  raidz1   DEGRADED   326 0 0
replacing  DEGRADED 0 0 0
  ad16p2   OFFLINE  2  169K 6
  ad4p2ONLINE   0 0 0  839G resilvered
ad14p2 ONLINE   0 0 0  5.33G resilvered
ad15p2 ONLINE 418 0 0  5.33G resilvered
 
 errors: Permanent errors have been detected in the following files:
 
tank/DVD:0x9cd
tank/d...@2010025100:/Memento.m4v
tank/d...@2010025100:/Payback.m4v
tank/d...@2010025100:/TheManWhoWasntThere.m4v
 
 I have the feeling the problems on ad15p2 are related to a cable issue, since 
 it doesn't have any SMART errors, is quite a new drive (3 months old) and was 
 IMHO sufficiently burned in by repeatedly filling it to the brim and 
 checking the contents (via ZFS). So I'd like to switch off the server, 
 replace the cable and do a scrub afterwards to make sure it doesn't produce 
 additional errors.
 
 Unfortunately, although it says the resilvering completed, I can't detach 
 ad16p2 (the first faulted disk) from the system:
 
 [r...@camelot /]# zpool detach tank ad16p2
 cannot detach ad16p2: no valid replicas
 
 To be honest, I don't know how to proceed now. It feels like my system is in 
 a very unstable state right now, with a replacement not yet finished and 
 errors on two drives in one RAID.Z1.
 
 I deleted the files affected, but have about 20 snapshots of this filesystem 
 and think these files are in most of them since they're quite old.
 
 So, what should I do now? Delete all snapshots? Move all other files from 
 this filesystem to a new filesystem and destroy the old filesystem? Try to 
 export and import the pool? Is it even safe to reboot the machine right now?
 
 I got one response in the FreeBSD Forum telling me I should reboot the 
 machine and do a scrub afterwards, it should then detect that it doesn't need 
 the old disk anymore - I am a bit reluctant doing that, to be honest...
 
 Any help would be appreciated.
 
 Thank you.
 
 Christian
 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] (FreeBSD) ZFS RAID: Disk fails while replacing another disk

2010-03-03 Thread Christian Heßmann

Hello guys,


I've already written this on the FreeBSD forums, but so far, the
feedback is not so great - it seems the FreeBSD guys aren't that keen on ZFS.
I have some hope you'll be more experienced with this kind of error:


I have a ZFS pool comprised of two 3-disk RAIDs which I've recently  
moved from OS X to FreeBSD (8 stable).


One harddisk failed last weekend with lots of shouting, SMART messages  
and even a kernel panic.

I attached a new disk and started the replacement.
Unfortunately, about 20% into the replacement, a second disk in the  
same RAID showed signs of misbehaviour by giving me read errors. The  
resilvering did finish, though, and it left me with only three broken  
files according to zpool status:


[r...@camelot /]# zpool status -v tank
  pool: tank
 state: DEGRADED
status: One or more devices has experienced an error resulting in data
corruption.  Applications may be affected.
action: Restore the file in question if possible.  Otherwise restore the
entire pool from backup.
   see: http://www.sun.com/msg/ZFS-8000-8A
 scrub: resilver completed after 10h42m with 136 errors on Tue Mar  2  
07:55:05 2010

config:

NAME   STATE READ WRITE CKSUM
tank   DEGRADED   137 0 0
  raidz1   ONLINE   0 0 0
ad17p2 ONLINE   0 0 0
ad18p2 ONLINE   0 0 0
ad20p2 ONLINE   0 0 0
  raidz1   DEGRADED   326 0 0
replacing  DEGRADED 0 0 0
  ad16p2   OFFLINE  2  169K 6
  ad4p2ONLINE   0 0 0  839G resilvered
ad14p2 ONLINE   0 0 0  5.33G resilvered
ad15p2 ONLINE 418 0 0  5.33G resilvered

errors: Permanent errors have been detected in the following files:

tank/DVD:0x9cd
tank/d...@2010025100:/Memento.m4v
tank/d...@2010025100:/Payback.m4v
tank/d...@2010025100:/TheManWhoWasntThere.m4v

I have the feeling the problems on ad15p2 are related to a cable  
issue, since it doesn't have any SMART errors, is quite a new drive (3  
months old) and was IMHO sufficiently burned in by repeatedly  
filling it to the brim and checking the contents (via ZFS). So I'd  
like to switch off the server, replace the cable and do a scrub  
afterwards to make sure it doesn't produce additional errors.


Unfortunately, although it says the resilvering completed, I can't  
detach ad16p2 (the first faulted disk) from the system:


[r...@camelot /]# zpool detach tank ad16p2
cannot detach ad16p2: no valid replicas

To be honest, I don't know how to proceed now. It feels like my system  
is in a very unstable state right now, with a replacement not yet  
finished and errors on two drives in one RAID.Z1.


I deleted the files affected, but have about 20 snapshots of this  
filesystem and think these files are in most of them since they're  
quite old.


So, what should I do now? Delete all snapshots? Move all other files  
from this filesystem to a new filesystem and destroy the old  
filesystem? Try to export and import the pool? Is it even safe to  
reboot the machine right now?


I got one response in the FreeBSD Forum telling me I should reboot the  
machine and do a scrub afterwards, it should then detect that it  
doesn't need the old disk anymore - I am a bit reluctant doing that,  
to be honest...


Any help would be appreciated.

Thank you.

Christian
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] (FreeBSD) ZFS RAID: Disk fails while replacing another disk

2010-03-03 Thread Bob Friesenhahn

On Thu, 4 Mar 2010, Christian Heßmann wrote:


I've already written this on the FreeBSD forums, but so far, the feedback is 
not so great - seems FreeBSD guys aren't that keen on ZFS. I have some hopes


I see lots and lots of zfs traffic on the discussion list 
freebsd...@freebsd.org.  This is where the FreeBSD filesystem 
developers hang out.



          raidz1       DEGRADED   326     0     0
            replacing  DEGRADED     0     0     0
              ad16p2   OFFLINE      2  169K     6
              ad4p2    ONLINE       0     0     0  839G resilvered
            ad14p2     ONLINE       0     0     0  5.33G resilvered
            ad15p2     ONLINE     418     0     0  5.33G resilvered

Unfortunately, although it says the resilvering completed, I can't detach 
ad16p2 (the first faulted disk) from the system:


The zpool status you posted shows that ad16p2 is still in 'replacing' 
mode.  If this is still the case, then it could be a reason that the 
original disk can't yet be removed.


To be honest, I don't know how to proceed now. It feels like my system is in 
a very unstable state right now, with a replacement not yet finished and 
errors on two drives in one RAID.Z1.


If it is still in 'replacing' mode then it seems that the best policy 
is to just wait.  If there is no drive activity on ad4p2 then there 
may be something more wrong.
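(A hedged way to check for that activity, using the pool name from your
output: watch per-device I/O for a while.)

# zpool iostat -v tank 5   # repeats every 5 seconds; watch the ad4p2 row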


Cold booting a system can be one of the scariest things to do so it 
should be a means of last resort.  Maybe the system would not come 
back.


Bob
--
Bob Friesenhahn
bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,    http://www.GraphicsMagick.org/
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] (FreeBSD) ZFS RAID: Disk fails while replacing another disk

2010-03-03 Thread Freddie Cash
On Wed, Mar 3, 2010 at 5:57 PM, Bob Friesenhahn 
bfrie...@simple.dallas.tx.us wrote:

 On Thu, 4 Mar 2010, Christian Heßmann wrote:

 To be honest, I don't know how to proceed now. It feels like my system is
 in a very unstable state right now, with a replacement not yet finished and
 errors on two drives in one RAID.Z1.


 If it is still in 'replacing' mode then it seems that the best policy is to
 just wait.  If there is no drive activity on ad4p2 then there may be
 something more wrong.

 Cold booting a system can be one of the scariest things to do so it should
 be a means of last resort.  Maybe the system would not come back.


We've had this happen a couple of times on our FreeBSD-based storage
servers.  Rebooting and manually running a scrub has fixed the issue each
time.

24x 500 GB SATA drives in 3x raidz2 vdev of 8 drives each

-- 
Freddie Cash
fjwc...@gmail.com
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] (FreeBSD) ZFS RAID: Disk fails while replacing another disk

2010-03-03 Thread Christian Heßmann

On 04.03.2010, at 02:57, Bob Friesenhahn wrote:

I see lots and lots of zfs traffic on the discussion list freebsd...@freebsd.org.
This is where the FreeBSD filesystem developers hang out.


Thanks - I'll have a look there. As usual, the cool kids are in  
mailing lists... ;-)



The zpool status you posted shows that ad16p2 is still in  
'replacing' mode.  If this is still the case, then it could be a  
reason that the original disk can't yet be removed.

[...]
If it is still in 'replacing' mode then it seems that the best  
policy is to just wait.  If there is no drive activity on ad4p2 then  
there may be something more wrong.


It bothers me as well that it says replacing instead of replaced or  
whatever else it should say. Since the resilvering completed I don't  
have any activity on the drives anymore, so I presume it somehow  
thinks it's done.



Cold booting a system can be one of the scariest things to do so it  
should be a means of last resort.  Maybe the system would not come  
back.


That's my fear. Although from what I can gather from the feedback so  
far the FreeBSD users seem somewhat familiar with an error like that  
and recommend rebooting. I might take the majority advice, make a  
backup of the important parts of the pool and just go for a reboot.


I might repost this to the freebsd-fs list first, though,
so please bear with me if you have to read it again...


Thanks.

Christian
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss