Re: [zfs-discuss] ZFS, Oracle and Nexenta

2011-05-25 Thread a . smith

Still I wonder what Gartner means by Oracle monetizing ZFS...


It simply means that Oracle wants to make money from ZFS (as is normal  
for technology companies with their own technology). The reason this  
might cause uncertainty for ZFS is that maintaining, or helping to  
improve, the open source version of ZFS may be seen by Oracle as  
working against its ability to make money from it.
That said, what is already open source cannot be un-open sourced, as  
others have said...


cheers Andy.



___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS, Oracle and Nexenta

2011-05-25 Thread Joerg Schilling
Peter Jeremy peter.jer...@alcatel-lucent.com wrote:

 On 2011-May-25 03:49:43 +0800, Brandon High bh...@freaks.com wrote:
 ... unless Oracle's zpool v30 is different from Nexenta's v30.

 This would be unfortunate but no worse than the current situation
 with UFS - Solaris, *BSD and HP Tru64 all have native UFS filesystems,
 all of which are incompatible.

There are various media formats out there, but I know of only one format that 
defines an enhancement method that really allows enhancements from various 
vendors without problems and without the need for a common format committee: tar.

The current enhanced POSIX tar format defines an enhancement method, proposed 
by Sun, that works by defining a framework for introducing new features that 
all have names with company prefixes.

What Sun defined for ZFS enhancements, on the other hand, is based on ideas 
that are at least 30 years old and that try to prevent other entities from 
introducing features, so it is not useful in an OSS world.
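For the curious, the pax enhancement method referred to above stores each extension as a length-prefixed "keyword=value" record, where vendor-specific keywords carry a prefix such as SCHILY. (star) or SUN.; a minimal sketch of building such a record (illustrative code, not from the original post; ASCII values assumed, so character length equals byte length):

```python
def pax_record(keyword: str, value: str) -> str:
    """Build one pax extended-header record: "<len> <keyword>=<value>\n",
    where <len> is the total record length, including the digits of
    <len> itself."""
    body = f" {keyword}={value}\n"
    n = len(body)
    total = n + len(str(n))
    # counting the length field can itself add a digit
    if len(str(total)) != len(str(n)):
        total = n + len(str(total))
    return f"{total}{body}"

# A vendor-prefixed keyword, in the style star uses for extended attributes:
rec = pax_record("SCHILY.xattr.user.foo", "bar")
```

Because the keyword namespace is partitioned by vendor prefix, two vendors can add features independently without a committee arbitrating identifiers, which is the point Jörg is making.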

 I believe the various OSS projects that use ZFS have formed a working
 group to co-ordinate ZFS amongst themselves.  I don't know if Oracle
 was invited to join (though given the way Oracle has behaved in all
 the other OSS working groups it was a member of, having Oracle onboard
 might be a disadvantage).

I recently made a proposal for a way to handle vendor-specific enhancements, 
but nobody contacted me. Are you sure that such a group exists?

Jörg

-- 
 EMail: jo...@schily.isdn.cs.tu-berlin.de (home) Jörg Schilling D-13353 Berlin
        j...@cs.tu-berlin.de (uni)
        joerg.schill...@fokus.fraunhofer.de (work)
 Blog:  http://schily.blogspot.com/
 URL:   http://cdrecord.berlios.de/private/ ftp://ftp.berlios.de/pub/schily


Re: [zfs-discuss] ZFS, Oracle and Nexenta

2011-05-25 Thread Frank Van Damme
On 24-05-11 22:58, LaoTsao wrote:
 With various forks of open source projects,
 e.g. ZFS, OpenSolaris, OpenIndiana etc., they are all different.
 There is no guarantee they will be compatible.

I hope at least they'll try. Just in case I want to import/export zpools
between Nexenta and OpenIndiana.

-- 
No part of this copyright message may be reproduced, read or seen,
dead or alive or by any means, including but not limited to telepathy
without the benevolence of the author.


Re: [zfs-discuss] ZFS, Oracle and Nexenta

2011-05-25 Thread Erik Trimble

On 5/25/2011 4:37 AM, Frank Van Damme wrote:

On 24-05-11 22:58, LaoTsao wrote:

With various forks of open source projects,
e.g. ZFS, OpenSolaris, OpenIndiana etc., they are all different.
There is no guarantee they will be compatible.

I hope at least they'll try. Just in case I want to import/export zpools
between Nexenta and OpenIndiana.

Given the new versioning governing board, I think that's highly likely.

However, do remember that you might not be able to import a pool from 
another system, simply because your system can't support the 
featureset.  Ideally, it would be nice if you could just import the pool 
and use the features your current OS supports, but that's pretty darned 
dicey, and I'd be very happy if importing worked when both systems 
supported the same featureset.


--
Erik Trimble
Java System Support
Mailstop:  usca22-123
Phone:  x17195
Santa Clara, CA
Timezone: US/Pacific (GMT-0800)



Re: [zfs-discuss] ZFS, Oracle and Nexenta

2011-05-25 Thread Garrett D'Amore
This will absolutely remain possible -- as the party responsible for Nexenta's 
kernel, I can assure you that pool import/export compatibility is a key 
requirement for Nexenta's product.

  -- Garrett D'Amore

On May 25, 2011, at 3:39 PM, Frank Van Damme frank.vanda...@gmail.com wrote:

 On 24-05-11 22:58, LaoTsao wrote:
 With various forks of open source projects,
 e.g. ZFS, OpenSolaris, OpenIndiana etc., they are all different.
 There is no guarantee they will be compatible.
 
 I hope at least they'll try. Just in case I want to import/export zpools
 between Nexenta and OpenIndiana.
 


Re: [zfs-discuss] ZFS, Oracle and Nexenta

2011-05-25 Thread Casper . Dik

However, do remember that you might not be able to import a pool from 
another system, simply because your system can't support the 
featureset.  Ideally, it would be nice if you could just import the pool 
and use the features your current OS supports, but that's pretty darned 
dicey, and I'd be very happy if importing worked when both systems 
supported the same featureset.

You can use zpool create to set a specific pool version; this should allow
you to create a pool usable on a number of different systems.
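For example (pool and device names here are illustrative, on a system whose zpool supports the version property):

```shell
# Create a pool pinned to an older on-disk version so that systems which
# only support up to that version (e.g. zpool v28) can still import it:
zpool create -o version=28 tank mirror c0t0d0 c0t1d0

# Check what version a pool is at, and list what each version adds:
zpool get version tank
zpool upgrade -v
```

The trade-off is that features introduced after the pinned version stay unavailable until you explicitly run zpool upgrade, at which point older systems can no longer import the pool.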

Casper


Re: [zfs-discuss] ZFS, Oracle and Nexenta

2011-05-25 Thread Joerg Schilling
Garrett D'Amore garr...@nexenta.com wrote:

 I am sure that the group exists ... I am a part of it, as are many of the 
 former Oracle ZFS engineers and a number of other ZFS contributors.

 Whatever your proposal was, we have not seen it, but a solution has been 
 agreed upon widely already, and implementation should be starting on it.  
 Ultimately this solution is based on people with a huge amount of experience 
 in ZFS, and with an eye towards future ZFS features.

I tend to believe that a group that acts in secret does not exist.

Standardization nowadays is typically done in public.

Jörg



Re: [zfs-discuss] ZFS, Oracle and Nexenta

2011-05-25 Thread Garrett D'Amore
You are welcome to your beliefs.   There are many groups that do standards that 
do not meet in public.  In fact, I can't think of any standards bodies that 
*do* hold open meetings.

  -- Garrett D'Amore

On May 25, 2011, at 4:09 PM, Joerg Schilling 
joerg.schill...@fokus.fraunhofer.de wrote:

 
 I tend to believe that a group that acts in secret does not exist.
 
 Standardization nowadays is typically done in public.
 
 Jörg
 


Re: [zfs-discuss] ZFS, Oracle and Nexenta

2011-05-25 Thread C Bergström
On Wed, May 25, 2011 at 7:15 PM, Garrett D'Amore garr...@nexenta.com wrote:
 You are welcome to your beliefs.   There are many groups that do standards 
 that do not meet in public.  In fact, I can't think of any standards bodies 
 that *do* hold open meetings.


I think he may mean open to public application.  Not everyone will be
accepted or will partake in the meetings, but anyone can apply.  Right
now the group is secret - there is little or no information on
who/when/where or anything.  It's basically the ZFS Standards Mafia;
maybe you guys live by:

Rule #1 - Don't talk about ZFS club

;)

./C


Re: [zfs-discuss] ZFS, Oracle and Nexenta

2011-05-25 Thread joerg.moellenk...@sun.com
Well, for a start, ZFS development is not a standards body, and in the 
end everything has to be measured by compatibility with the Oracle ZFS 
implementation. Still, such a policy surely leaves a bad aftertaste: 
you can't complain about Oracle's position on open source and at the 
same time move the development of ZFS into a secret circle. But as I 
wrote a long time ago, a lot of things were done because of business 
considerations, not because open source is great.



On 25.05.2011 14:15, Garrett D'Amore wrote:



Re: [zfs-discuss] ZFS, Oracle and Nexenta

2011-05-25 Thread Joerg Schilling
Garrett D'Amore garr...@nexenta.com wrote:

 You are welcome to your beliefs.   There are many groups that do standards 
 that do not meet in public.  In fact, I can't think of any standards bodies 
 that *do* hold open meetings.

You probably don't know POSIX.

Jörg



Re: [zfs-discuss] ZFS, Oracle and Nexenta

2011-05-25 Thread Paul Kraus
On Wed, May 25, 2011 at 8:15 AM, Garrett D'Amore garr...@nexenta.com wrote:

 You are welcome to your beliefs.  There are many groups that do
 standards that do not meet in public.  In fact, I can't think of any
 standards bodies that *do* hold open meetings.

The standards committees I have observed (I have never been on
one) are generally in the audio space rather than computing, but while
they welcome guests, decisions are reserved for the committee members.
Committee membership is not open to just anyone who wants to be on the
committee, but to those with a degree of expertise in the area the
committee is addressing. Anything else leads to madness.

    I think it would help the 'ZFS Standards Committee' if its
existence, membership, goals, and decisions were more public. I am not
suggesting a high level of detail. For example, membership could be
identified as: Members include representatives from Oracle and
Nexenta; send an email to cont...@zfs-standard.org to reach a
committee member.

   Knowing that something is happening and that the right players are
at the table is important to having trust in the process and results.

-- 
{1-2-3-4-5-6-7-}
Paul Kraus
- Senior Systems Architect, Garnet River ( http://www.garnetriver.com/ )
- Sound Coordinator, Schenectady Light Opera Company (
http://www.sloctheater.org/ )
- Technical Advisor, RPI Players


Re: [zfs-discuss] ZFS, Oracle and Nexenta

2011-05-25 Thread Frank Van Damme
On 25-05-11 14:27, joerg.moellenk...@sun.com wrote:
 Well, at first ZFS development is no standard body and at the end
 everything has to be measured in compatibility to the Oracle ZFS
 implementation

Why? Given that ZFS is Solaris ZFS just as much as it is Nexenta ZFS
or illumos ZFS, for what reason is Oracle ZFS declared the standard or
reference? Because they wrote the first so-many lines, or because they
make the biggest sales on it (kind of hard to sell licenses to an open
source product)?




Re: [zfs-discuss] ZFS, Oracle and Nexenta

2011-05-25 Thread Bob Friesenhahn

On Wed, 25 May 2011, Garrett D'Amore wrote:

You are welcome to your beliefs.  There are many groups that do 
standards that do not meet in public.  In fact, I can't think of any 
standards bodies that *do* hold open meetings.


The IETF holds totally open meetings.  I hope that you are 
appreciative of that since they brought you the Internet and enabled 
us to send this email.  Clearly it works.


Bob
--
Bob Friesenhahn
bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,    http://www.GraphicsMagick.org/


Re: [zfs-discuss] ZFS, Oracle and Nexenta

2011-05-25 Thread Tim Cook
On Wed, May 25, 2011 at 8:53 AM, Frank Van Damme
frank.vanda...@gmail.comwrote:

 On 25-05-11 14:27, joerg.moellenk...@sun.com wrote:
  Well, for a start, ZFS development is not a standards body, and in
  the end everything has to be measured by compatibility with the
  Oracle ZFS implementation

 Why? Given that ZFS is Solaris ZFS just as much as it is Nexenta ZFS
 or illumos ZFS, for what reason is Oracle ZFS declared the standard or
 reference? Because they wrote the first so-many lines, or because they
 make the biggest sales on it (kind of hard to sell licenses to an open
 source product)?



Because they OWN the code, and the patents to protect the code.

--Tim


Re: [zfs-discuss] ZFS, Oracle and Nexenta

2011-05-25 Thread Bob Friesenhahn

On Wed, 25 May 2011, Paul Kraus wrote:


   The standards committees I have observed (I have never been on
one) are generally in the audio space and not the computer, but while
they welcome guests, the decisions are reserved for the committee
members. Committee membership is not open to anyone who wants to be on
the committee, but those with a degree of expertise in the area the
committee is addressing. Anything else leads to madness.


Not necessarily madness.  As I mentioned to Garrett, the IETF 
(http://en.wikipedia.org/wiki/IETF) holds totally open meetings and 
mailing lists.  Anyone who shows up can vote on whatever is discussed 
and all votes count as equal.  There is no need to pay for attendance, 
no need to apply for acceptance, no need to show an ID at the door, 
and anyone can just walk in, yet actions and demonstrated 
implementations speak louder than any words.  Anyone can write an RFC 
as long as it meets certain standards.  However, the IETF also has a 
working code requirement and demands several independent 
interoperable implementations before some new interface can be 
accepted for the standards track.


The method the IETF uses seems to be particularly immune to vendor 
interference.  Vendors who want to participate in defining an 
interoperable standard can achieve substantial success.  Vendors who 
only want their own way encounter deafening silence and isolation.


Bob


Re: [zfs-discuss] ZFS, Oracle and Nexenta

2011-05-25 Thread Paul Kraus
On Wed, May 25, 2011 at 10:27 AM, Bob Friesenhahn
bfrie...@simple.dallas.tx.us wrote:

 The method the IETF uses seems to be particularly immune to vendor
 interference.  Vendors who want to participate in defining an interoperable
 standard can achieve substantial success.  Vendors who only want their own
 way encounter deafening silence and isolation.

There have been a number of RFCs effectively written by one
vendor in order to be able to claim open standards compliance. The
biggest corporate offender in this regard, though clearly not the only
one, is Microsoft. The next time I run across one of these RFCs I'll
make sure to forward you a copy.

    The only one that comes to mind immediately was the change to the
specification of which characters are permissible in DNS records to
include the underscore ("_"). This was specifically to support
Microsoft's existing naming convention. I am NOT saying that was a bad
change, but it was a change driven by ONE vendor.



Re: [zfs-discuss] ZFS, Oracle and Nexenta

2011-05-25 Thread Tim Cook
On Wed, May 25, 2011 at 10:01 AM, Paul Kraus p...@kraus-haus.org wrote:


 The only one that comes to mind immediately was the change to the
 specification of which characters are permissible in DNS records to
 include the underscore ("_"). This was specifically to support
 Microsoft's existing naming convention. I am NOT saying that was a bad
 change, but it was a change driven by ONE vendor.




Except it wasn't just Microsoft at all.  There were three vendors on the
original RFC, and one of the authors was Paul Vixie... the author of BIND.
http://www.ietf.org/rfc/rfc2782.txt

You should probably do a bit of research before throwing out claims like
that to try to shoot someone down.

--Tim


Re: [zfs-discuss] ZFS, Oracle and Nexenta

2011-05-25 Thread Joerg Schilling
Paul Kraus p...@kraus-haus.org wrote:

 The only one that comes to mind immediately was the change to the
 specification of which characters are permissible in DNS records to
 include the underscore ("_"). This was specifically to support
 Microsoft's existing naming convention. I am NOT saying that was a bad
 change, but it was a change driven by ONE vendor.

In 2001, Microsoft first tried to standardize that characters may also be
16 bits wide, in order to make their UCS-2-based system POSIX compliant.
We were able to prevent this from happening.

A few weeks later, they tried to make ':' an illegal character in filenames,
in order to make foo:bar an extended attribute file bar located in file
foo. We were able to prevent this too.

The people who actively work in a standards committee decide by majority, 
and if your example with Microsoft was something that was not acceptable 
to the others, it did not happen.

BTW: I am not an Open Group member and I never paid anything. The POSIX 
standard (since 2001) nevertheless contains proposals from me, and my name 
is listed in the standard as a contributor/reviewer. All meetings are open 
(phone and IRC) and there is an open mailing list.

Jörg



Re: [zfs-discuss] ZFS, Oracle and Nexenta

2011-05-25 Thread Bob Friesenhahn

On Wed, 25 May 2011, Paul Kraus wrote:


   There have been a number of RFC's effectively written by one
vendor in order to be able to claim open standards compliance, the
biggest corporate offender in this regard, but clearly not the only
one, is Microsoft. The next time I run across one of these RFC's I'll
make sure to forward you a copy.


RFC means Request For Comments.  Unless an RFC has survived the 
grueling standards-track process, it is no more than a documented idea 
put out for public comment.  Indeed, the majority of RFCs fail this 
process, and many do not even try to enter it but simply exist to 
document an idea or a vendor's existing protocol.  I am impressed if 
Microsoft still produces new ideas worth putting in a document.


This sort of open RFC process would be good for ZFS because it 
provides ample paths to utter failure while winnowing out the good 
ideas which achieve rough consensus and interoperability.


Bob


Re: [zfs-discuss] ZFS, Oracle and Nexenta

2011-05-25 Thread Richard Elling
On May 25, 2011, at 7:27 AM, Bob Friesenhahn wrote:

 
 The method the IETF uses seems to be particularly immune to vendor 
 interference.  Vendors who want to participate in defining an interoperable 
 standard can achieve substantial success.  Vendors who only want their own 
 way encounter deafening silence and isolation.

Actually, this doesn't always work. There have been attempts to stack the deck
and force votes at IETF. One memorable meeting was more of a flashmob than a
standards meeting :-)

The key stakeholders and contributors of ZFS code are represented in the
ZFS Working Group. This is very similar to working groups in other
standards bodies and organizations.
 -- richard



Re: [zfs-discuss] ZFS, Oracle and Nexenta

2011-05-25 Thread Bob Friesenhahn

On Wed, 25 May 2011, Richard Elling wrote:




Actually, this doesn't always work. There have been attempts to stack the deck
and force votes at IETF. One memorable meeting was more of a flashmob than a
standards meeting :-)


I totally agree.  In fact a large fraction of efforts at the IETF 
fail, and failure can be a good thing.



The key stakeholders and contributors of ZFS code are represented in the
ZFS Working Group. This is very similar to working groups in other
standards bodies and organizations.


The error in the statement above is that most key stakeholders are not 
represented.  I consider myself a key stakeholder in that I have 
entrusted my precious data to ZFS.  A stakeholder is not necessarily a 
famous ZFS developer or someone who specifically invests money in a 
company doing ZFS development.  We are stakeholders too!


Bob


[zfs-discuss] ZFS working group and feature flags proposal

2011-05-25 Thread Matthew Ahrens
The community of developers working on ZFS continues to grow, as does
the diversity of companies betting big on ZFS.  We wanted a forum for
these developers to coordinate their efforts and exchange ideas.  The
ZFS working group was formed to coordinate these development efforts.
The working group encourages new membership.  In order to maintain the
group's focus on ZFS development, candidates should demonstrate
significant and ongoing contribution to ZFS.

The first product of the working group is the design for a ZFS on-disk
versioning method that will allow for distributed development of ZFS
on-disk format changes without further explicit coordination. This
method eliminates the problem of two developers both allocating
version number 31 to mean their own feature.

This feature flags versioning allows unknown versions to be
identified, and in many cases the ZFS pool or filesystem can be
accessed read-only even in the presence of unknown on-disk features.
My proposal covers versioning of the SPA/zpool, ZPL/zfs, send stream,
and allocation of compression and checksum identifiers (enum values).

We plan to implement the feature flags this summer, and aim to
integrate it into Illumos.  I welcome feedback on my proposal, and I'd
especially like to hear from people doing ZFS development -- what are
you working on?  Does this meet your needs?  If we implement it, will
you use it?

Thanks,
--matt
ZFS Feature Flags proposal, version 1.0, May 25th 2011

===
ON-DISK FORMAT CHANGES
===

for SPA/zpool versioning:
new pool version = SPA_VERSION_FEATURES = 1000
ZAP objects in MOS, pointed to by DMU_POOL_DIRECTORY_OBJECT = 1
features_for_read -> { feature name -> nonzero if in use }
features_for_write -> { feature name -> nonzero if in use }
feature_descriptions -> { feature name -> description }
Note that a pool can't be opened write-only, so the
features_for_read are always required.  A given feature should
be stored in either features_for_read or features_for_write, not
both.
Note that if a feature is promoted from a company-private
feature to part of a larger distribution (eg. illumos), this can
be handled in a variety of ways, all of which can be handled
with code added at that time, without changing the on-disk
format.

for ZPL/zfs versioning:
new zpl version = ZPL_VERSION_FEATURES = 1000
same 3 ZAP objects as above, but pointed to by MASTER_NODE_OBJ = 1
features_for_read -> { feature name -> nonzero if in use }
features_for_write -> { feature name -> nonzero if in use }
feature_descriptions -> { feature name -> description }
Note that the namespace for ZPL features is separate from SPA
features (like version numbers), so the same feature name can be
used for both (eg. for related SPA and ZPL features), but
compatibility-wise this is not treated specially.

for compression:
must be at pool version SPA_VERSION_FEATURES
ZAP object in MOS, pointed to by POOL_DIR_OBJ:
compression_algos -> { algo name -> enum value }
Existing enum values (0-14) must stay the same, but new
algorithms may have different enum values in different pools.
Note that this simply defines the enum value.  If a new algorithm
is in use, there must also be a corresponding feature in
features_for_read with a nonzero value.  For simplicity, all
algorithms, including legacy algorithms with fixed values (lzjb,
gzip, etc) should be stored here (pending evaluation of
prototype code -- this may be more trouble than it's worth).

for checksum:
must be at pool version SPA_VERSION_FEATURES
ZAP object in MOS, pointed to by POOL_DIR_OBJ:
checksum_algos -> { algo name -> enum value }
All notes for compression_algos apply here too.

Must also store a copy of what's needed to read the MOS in the label nvlist:
features_for_read -> { feature name -> nonzero if in use }
compression_algos -> { algo name -> enum value }
checksum_algos -> { algo name -> enum value }

ZPL information is never needed.
It's fine to store complete copies of these objects in the label.
However, space in the label is limited.  It's only *required* to
store the information needed to read the MOS, so we can get to the
definitive version of this information.  E.g., if we introduce a new
compression algo but it is never used in the MOS, we don't need to
add it to the label.  Legacy algos with fixed values may be
omitted from the label nvlist (eg. lzjb, fletcher4).

The values in the nvlist features_for_read map may be different
from the values in the MOS features_for_read.  However, they
must be the same when 
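
Putting the read-compatibility rule together, the open-time check can be sketched as follows (hypothetical feature names; the real check operates on the label nvlist in C):

```python
def can_open_pool(label_features_for_read, supported_features):
    """Per the proposal: refuse to open the pool if the label lists a
    feature we don't understand with a nonzero (in-use) value."""
    blocking = sorted(name for name, refs in label_features_for_read.items()
                      if refs != 0 and name not in supported_features)
    return (not blocking, blocking)

label = {"com.delphix:async_destroy": 1,   # in use, and we support it
         "com.example:fancy_raid": 0}      # enabled elsewhere, not in use
ok, why = can_open_pool(label, {"com.delphix:async_destroy"})
# ok is True: the unknown feature is not in use, so old code may proceed
```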

Re: [zfs-discuss] [illumos-Developer] ZFS working group and feature flags proposal

2011-05-25 Thread Deano
snip
Hi Matt,

That looks really good. I've been meaning to implement a ZFS compressor
(using two-pass LZ4 + arithmetic entropy coding), so it's nice to see a
route by which this can be done.

One question is the extensibility of RAID and other similar systems. My
quick perusal makes me think this is handled by simply asserting a new
feature using the extension mechanism, but perhaps I've missed something?
Do you see it being able to handle this situation?
It's of course a slightly tricky one, as it changes not only data but
potentially data layout as well...

Great work ZFS working group :) Nice to see ZFS's future coming together!

Bye,
Deano

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] [illumos-Developer] ZFS working group and feature flags proposal

2011-05-25 Thread Matthew Ahrens
On Wed, May 25, 2011 at 12:55 PM, Deano de...@rattie.demon.co.uk wrote:

 snip
 Hi Matt,

 That's looks really good, I've been meaning to implement a ZFS compressor
 (using a two pass, LZ4 + Arithmetic Entropy), so nice to see a route with
 which this can be done.


Cool!  New compression algorithms are definitely something we want to make
straightforward to implement.  I look forward to seeing your results.


 One question, is the extendibility of RAID and other similar systems, my
 quick perusal makes me thinks this is handled by simple asserting a new
 feature using the extension mechanism, but perhaps I've missed something?
 Do
 you see it being able to handle this situation?
 Its of course a slightly tricky one, as not only does it change data but
 potentially data layout as well...


Yes, a feature like RAIDZ3 could be implemented as a feature_for_read.  It
would be extra nice if the value was a count of RAIDZ3 devices.  That way
you could zpool upgrade, but if you didn't actually have any RAIDZ3
devices, systems that don't know about RAIDZ3 would still be able to read
it.
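
Matt's count-valued suggestion can be sketched like this (the raidz3 feature name is hypothetical; the value counts how many vdevs actually need the feature for read):

```python
def raidz3_refcount(vdev_parity_levels):
    """Value for a hypothetical org.example:raidz3 features_for_read
    entry: the number of triple-parity vdevs in the pool."""
    return sum(1 for p in vdev_parity_levels if p == 3)

# Pool upgraded to support raidz3 but containing only raidz1/raidz2 vdevs:
features_for_read = {"org.example:raidz3": raidz3_refcount([1, 2, 2])}
# The value is 0, so old code that treats zero as "not in use" can
# still read the pool even though the feature is enabled.
```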



 Great work ZFS working group :) Nice to see ZFS's future coming together!


Thank you!

--matt


[zfs-discuss] ZFS issues and the choice of platform

2011-05-25 Thread Roy Sigurd Karlsbakk
Hi all

I have a few servers running OpenIndiana 148, and they have been running rather 
well for some time. Lately, however, we've seen some hiccups that may be related 
to the platform rather than the hardware. The actual errors have varied. Some 
issues were due to Supermicro backplanes that tended to fail, causing drives to 
report massive I/O errors. But the really bad ones are issues where ZFS reports 
drives as bad even though iostat reports them as good. So far we haven't lost a 
pool; everything has been sorted out, but I still wonder what happens if I'm 
gone for a few weeks and something like that happens.

The systems where we have had issues are two 100TB boxes with some 160TB of raw 
storage each, so licensing them with NexentaStor would be rather expensive. What 
would you suggest? Will a Solaris Express install give us good support when the 
shit hits the fan?

-- 
Vennlige hilsener / Best regards

roy
--
Roy Sigurd Karlsbakk
(+47) 97542685
r...@karlsbakk.net
http://blogg.karlsbakk.net/
--
In all pedagogy it is essential that the curriculum be presented intelligibly. 
It is an elementary imperative for all pedagogues to avoid excessive use of 
idioms of foreign origin. In most cases adequate and relevant synonyms exist 
in Norwegian.


[zfs-discuss] DDT sync?

2011-05-25 Thread Edward Ned Harvey
I've finally returned to this dedup testing project, trying to get a handle
on why performance is so terrible.  At the moment I'm re-running tests and
monitoring memory_throttle_count, to see if maybe that's what's causing the
limit.  But while that's in progress and I'm still thinking...

 

I assume the DDT must be stored on disk, in the regular pool, and each
entry is stored independently of every other entry, right?  So whenever
you're performing new unique writes, you're creating new entries in the
tree, and every so often the tree will need to rebalance itself.  By any
chance, is DDT entry creation treated as a sync write?  If so, that could
be hurting me.  For every new unique block written, there might be a
significant number of small random writes necessary to support the actual
data write.  Anyone have any knowledge to share along these lines?

 

Thanks...



Re: [zfs-discuss] ZFS, Oracle and Nexenta

2011-05-25 Thread Ian Collins

 On 05/26/11 12:15 AM, Garrett D'Amore wrote:

You are welcome to your beliefs.   There are many groups that do standards that 
do not meet in public.  In fact, I can't think of any standards bodies that 
*do* hold open meetings.


ISO language standards committees may not hold public meetings, but all 
their materials are public and they invite discussion.  Why can't ZFS 
follow this model?  Whatever is happening now (apparently, because we 
don't know) to something born out of open source is appalling.


The only beneficiaries of all this secrecy are the nay-sayers.

--
Ian.



Re: [zfs-discuss] ZFS, Oracle and Nexenta

2011-05-25 Thread Ian Collins

 On 05/26/11 04:21 AM, Richard Elling wrote:

Actually, this doesn't always work. There have been attempts to stack the deck
and force votes at IETF. One memorable meeting was more of a flashmob than a
standards meeting :-)


Is there a video :)


The key stakeholders and contributors of ZFS code are represented in the ZFS 
Working
Group.

Does that include users?  If so by whom?  Or is that another secret?


This is very similar to working groups in other standards bodies and 
organizations.

Except for the secrecy.

Please, by all means hold private meetings, but do follow the ISO 
programming language standards committee model and publish minutes and 
other material.  Encourage feedback on open mail lists and engage with 
the user community.


--
Ian.



Re: [zfs-discuss] ZFS working group and feature flags proposal

2011-05-25 Thread Peter Jeremy
On 2011-May-26 03:02:04 +0800, Matthew Ahrens mahr...@delphix.com wrote:
The first product of the working group is the design for a ZFS on-disk
versioning method that will allow for distributed development of ZFS
on-disk format changes without further explicit coordination. This
method eliminates the problem of two developers both allocating
version number 31 to mean their own feature.

Looks good.

pool open (zpool import and implicit import from zpool.cache)
   If pool is at SPA_VERSION_FEATURES, we must check for feature
   compatibility.  First we will look through entries in the label
   nvlist's features_for_read.  If there is a feature listed there
   which we don't understand, and it has a nonzero value, then we
   can not open the pool.

Is it worth splitting the feature "used" value into "optional" and
"mandatory"?  (Possibly with the ability to have an optional read
feature be linked to a mandatory write feature.)

To use an existing example: dedupe (AFAIK) does not affect read code
and so could show up as an optional read feature but a mandatory write
feature (though I suspect this could equally be handled by just
listing it in features_for_write).

As a more theoretical example, consider OS-X resource forks?  The
presence of a resource fork matters for both read and write on OS-X
but nowhere else.  A (hypothetical) ZFS port to OS-X would want to
know whether the pool contained resource forks even if opened R/O
but this should not stop a different ZFS port from reading (and
maybe even writing to) the pool.

-- 
Peter Jeremy




Re: [zfs-discuss] ZFS working group and feature flags proposal

2011-05-25 Thread Matthew Ahrens
On Wed, May 25, 2011 at 3:08 PM, Peter Jeremy 
peter.jer...@alcatel-lucent.com wrote:

 On 2011-May-26 03:02:04 +0800, Matthew Ahrens mahr...@delphix.com wrote:

 Looks good.


Thanks for taking the time to look at this.  More comments inline below.


 pool open (zpool import and implicit import from zpool.cache)
If pool is at SPA_VERSION_FEATURES, we must check for feature
compatibility.  First we will look through entries in the label
nvlist's features_for_read.  If there is a feature listed there
which we don't understand, and it has a nonzero value, then we
can not open the pool.

 Is it worth splitting feature used value into optional and
 mandatory?  (Possibly with the ability to have an optional read
 feature be linked to a mandatory write feature).

 To use an existing example: dedupe (AFAIK) does not affect read code
 and so could show up as an optional read feature but a mandatory write
 feature (though I suspect this could equally be handled by just
 listing it in features_for_write).


I'm not sure I understand the optional idea.  How would an optional read
feature change the behavior, as opposed to just being listed in
features_for_write?   If dedup'd pools can be read by old code that
doesn't understand dedup, then the dedup feature should be listed in
features_for_write, and not features_for_read.


 As a more theoretical example, consider OS-X resource forks?  The
 presence of a resource fork matters for both read and write on OS-X
 but nowhere else.  A (hypothetical) ZFS port to OS-X would want to
 know whether the pool contained resource forks even if opened R/O
 but this should not stop a different ZFS port from reading (and
 maybe even writing to) the pool.


A hypothetical resource fork feature would probably be a ZPL (filesystem)
feature, rather than a pool feature, but that doesn't really change your
question.

If the presence of a resource fork doesn't preclude old code from reading or
writing it, but the MacOS code needs to know "are there any resource forks
in this filesystem?", then this can be handled in a way specific to the
resource fork code.  For example, with a new entry in the MASTER_NODE_OBJ,
which only the resource fork code would care about; other code would ignore
it.  So this can be handled seamlessly, outside the scope of the feature
flags proposal.

However, it's more likely that old code would not be able to safely write to
a filesystem with resource forks (for example, to know how to free the
resource fork when a file is removed).  In this case, resource forks would
be a feature for write.  The MacOS code could use the features_for_write
to determine the presence of resource forks, even if opening the filesystem
read-only.
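
That read-only distinction can be sketched as follows (hypothetical names, Python only for illustration of the check the proposal describes):

```python
def open_allowed(features_for_write, supported, readonly):
    """An unknown in-use entry in features_for_write blocks read-write
    opens, but a read-only open can ignore features_for_write entirely."""
    if readonly:
        return True
    return all(v == 0 or name in supported
               for name, v in features_for_write.items())

fs = {"org.example:resource_forks": 1}   # hypothetical MacOS-only feature
# Other ports may open the filesystem read-only, but not read-write:
assert open_allowed(fs, set(), readonly=True)
assert not open_allowed(fs, set(), readonly=False)
```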

--matt


Re: [zfs-discuss] DDT sync?

2011-05-25 Thread Matthew Ahrens
On Wed, May 25, 2011 at 2:23 PM, Edward Ned Harvey 
opensolarisisdeadlongliveopensola...@nedharvey.com wrote:

 I've finally returned to this dedup testing project, trying to get a handle
 on why performance is so terrible.  At the moment I'm re-running tests and
 monitoring memory_throttle_count, to see if maybe that's what's causing the
 limit.  But while that's in progress and I'm still thinking...



 I assume the DDT tree must be stored on disk, in the regular pool, and each
 entry is stored independently from each other entry, right?  So whenever
 you're performing new unique writes, that means you're creating new entries
 in the tree, and every so often the tree will need to rebalance itself.  By
 any chance, are DDT entry creation treated as sync writes?  If so, that
 could be hurting me.  For every new unique block written, there might be a
 significant amount of small random writes taking place that are necessary to
 support the actual data write.  Anyone have any knowledge to share along
 these lines?


The DDT is a ZAP object, so it is an on-disk hashtable, free of O(log(n))
rebalancing operations.  It is written asynchronously, from syncing context.
 That said, for each block written (unique or not), the DDT must be updated,
which means reading and then writing the block that contains that dedup
table entry, and the indirect blocks to get to it.  With a reasonably large
DDT, I would expect about 1 write to the DDT for every block written to the
pool (or written but actually dedup'd).

--matt


Re: [zfs-discuss] DDT sync?

2011-05-25 Thread Edward Ned Harvey
 From: Matthew Ahrens [mailto:mahr...@delphix.com]
 Sent: Wednesday, May 25, 2011 6:50 PM
 
 The DDT is a ZAP object, so it is an on-disk hashtable, free of O(log(n))
 rebalancing operations.  It is written asynchronously, from syncing
 context.  That said, for each block written (unique or not), the DDT must
be
 updated, which means reading and then writing the block that contains that
 dedup table entry, and the indirect blocks to get to it.  With a
reasonably
 large DDT, I would expect about 1 write to the DDT for every block written
to
 the pool (or written but actually dedup'd).

So ... If the DDT were already cached completely in ARC, and I write a new
unique block to a file, ideally I would hope (after write buffering because
all of this will be async) that one write will be completed to disk - It
would be the aggregate of the new block plus the new DDT entry, but because
of write aggregation it should literally be a single seek+latency penalty. 

Most likely in reality, additional writes will be necessary, to update the
parent block pointers or parent DDT branches and so forth, but hopefully
that's all managed well and kept to a minimum.  So maybe a single new write
ultimately yields a dozen times the disk access time...

I'm honing this in closer, but so far what I'm seeing is ... zpool iostat
indicates 1000 reads taking place for every 20 writes.  This is on a
literally 100% idle pool, where the only activity in the system is me
performing this write benchmark.  The only logical explanation I see for
this behavior is to conclude the DDT must not be cached in ARC.  So every
write yields a flurry of random reads...  50 or so...

Anyway, like I said, still exploring this.  No conclusions drawn yet.



Re: [zfs-discuss] DDT sync?

2011-05-25 Thread Daniel Carosone
On Wed, May 25, 2011 at 03:50:09PM -0700, Matthew Ahrens wrote:
  That said, for each block written (unique or not), the DDT must be updated,
 which means reading and then writing the block that contains that dedup
 table entry, and the indirect blocks to get to it.  With a reasonably large
 DDT, I would expect about 1 write to the DDT for every block written to the
 pool (or written but actually dedup'd).

That, right there, illustrates exactly why some people are
disappointed wrt performance expectations from dedup.

To paraphrase, and in general: 

 * for write, dedup may save bandwidth but will not save write iops.
 * dedup may amplify iops with more metadata reads 
 * dedup may turn larger sequential io into smaller random io patterns 
 * many systems will be iops bound before they are bandwidth or space
   bound (and l2arc only mitigates read iops)
 * any iops benefit will only come on later reads of dedup'd data, so
   is heavily dependent on access pattern.

Assessing whether these amortised costs are worth it for you can be
complex, especially when the above is not clearly understood.

To me, the thing that makes dedup most expensive in iops is the writes
for update when a file (or snapshot) is deleted.  These are additional
iops that dedup creates, not ones that it substitutes for others in
roughly equal number.  

This load is easily forgotten in a cursory analysis, and yet is always
there in a steady state with rolling auto-snapshots.  As I've written
before, I've had some success managing this load using deferred deletes
and snapshot holds, either to spread the load or to shift it to
otherwise-quiet times, as the case demanded.  I'd rather not have to. :-)

--
Dan.



Re: [zfs-discuss] ZFS issues and the choice of platform

2011-05-25 Thread Daniel Carosone
On Wed, May 25, 2011 at 10:59:19PM +0200, Roy Sigurd Karlsbakk wrote:
 The systems where we have had issues, are two 100TB boxes, with some
 160TB raw storage each, so licensing this with nexentastor will be
 rather expensive. What would you suggest? Will a solaris express
 install give us good support when the shit hits the fan? 

No more so than what you have now, without a support contract.

--
Dan.





[zfs-discuss] Compatibility between Sun-Oracle Fishworks appliance zfs and other zfs implementations

2011-05-25 Thread Matt Weatherford


Hi,

We have a Sun/Oracle Fishworks appliance that we have spent a good 
amount of $ on. This is a great box and we love it, although the EDU 
discounts that Sun used to provide for hardware and support contracts 
seem to have dried up so the cost of supporting it moving forward is 
still unknown.


My question is how compatible are the ZPOOL versions between the 
open-source ZFS implementations and the latest on-disk format from 
Fishworks' ZFS...


Here's what I see, having applied the very latest firmware (2010.Q3.2.1) 
to the box and applied the deferred updates (zpool upgrades):


pike# zpool get version internal
NAME  PROPERTY  VALUESOURCE
internal  version   28   default
pike# zpool get version external-J4400-12x1TB
NAME   PROPERTY  VALUESOURCE
external-J4400-12x1TB  version   28   default
pike#

Can I expect to move my JBOD over to a different OS such as FreeBSD, 
Illumos, or Solaris and still be able to get my data off?  (By this I 
mean perform a zpool import on another platform.)


The compatibility of ZFS was a major selling point for us getting into 
this appliance originally... we were swayed by the open storage 
marketing and whatnot... I guess I'm asking whether the situation has 
changed.  Apologies for the fuzzy question.


Matt



Re: [zfs-discuss] Compatibility between Sun-Oracle Fishworks appliance zfs and other zfs implementations

2011-05-25 Thread Matthew Ahrens
On Wed, May 25, 2011 at 8:01 PM, Matt Weatherford m...@u.washington.eduwrote:

 pike# zpool get version internal
 NAME  PROPERTY  VALUESOURCE
 internal  version   28   default
 pike# zpool get version external-J4400-12x1TB
 NAME   PROPERTY  VALUESOURCE
 external-J4400-12x1TB  version   28   default
 pike#

 Can I expect to move my JBOD over to a different OS such as FreeBSD,
 Illuminos, or Solaris  and be able to get my data off still?  (by this i
 mean perform a zpool import on another platform)


Yes, because zpool version 28 is supported in Illumos.  I'm sure Oracle
Solaris does or will soon support it too.  According to Wikipedia, the
9-current development branch [of FreeBSD] uses ZFS Pool version 28.

FYI, version 28 is the last Oracle-supplied open-source version, so
non-Oracle implementations will probably not support versions 29, 30, etc.

--matt


Re: [zfs-discuss] bug? ZFS crypto vs. scrub

2011-05-25 Thread Daniel Carosone
Just a ping for any further updates, as well as a crosspost to migrate
the thread to zfs-discuss (from -crypto-). 

Is there any further information I can provide?  What's going on with
that zpool history, and does it tell you anything about the chances
of recovering the actual key used?

On Thu, May 12, 2011 at 08:52:04PM +1000, Daniel Carosone wrote:
 On Thu, May 12, 2011 at 10:04:19AM +0100, Darren J Moffat wrote:
  There is a possible bug in that area too, and it is only for the
  keysource=passphrase case.
 
 Ok, sounds like it's not yet a known one.  If there's anything I can
 do to help track it down, let me know.  
 
  It isn't anything to do with the terminal.
 
 Heh, ok.. just a random WAG.
 
  More importantly, what are the prospects of correctly reproducing that
  key so as to get at data?  I still have the original staging pool, but
  some additions made since the transfer would be lost otherwise. It's
  not especially important data, but would be annoying to lose or have
  to reproduce.
 
  I'm not sure, can you send me the ouput of 'zpool history' on the pool  
  that the recv was done to.  I'll be able to determine from that if I can  
  fix up the problem or not.
 
 Can do - but it's odd.   Other than the initial create, and the most
 recent scrub, the history only contains a sequence of auto-snapshot
 creations and removals. None of the other commands I'd expect, like
 the filesystem creations and recv, the device replacements (as I
 described in my other post), previous scrubs, or anything else:
 
 dan@ventus:~# zpool history geek | grep -v auto-snap
 History for 'geek':
 2011-04-01.08:48:15 zpool create -f geek raidz2 /rpool1/stage/file0 
 /rpool1/stage/file1 /rpool1/stage/file2 /rpool1/stage/file3 
 /rpool1/stage/file4 /rpool1/stage/file5 /rpool1/stage/file6 
 /rpool1/stage/file7 /rpool1/stage/file8 c2t600144F0DED90A004D9590440001d0
 2011-05-10.14:03:34 zpool scrub geek
 
 If you want the rest, I'm happy to send it, but I don't expect it will
 tell you anything.  I do wonder why that is...
 
 --
 Dan.



