Re: [zfs-discuss] Opensolaris is apparently dead

2010-08-18 Thread Haudy Kazemi

Ross Walker wrote:

On Aug 18, 2010, at 10:43 AM, Bob Friesenhahn wrote:

On Wed, 18 Aug 2010, Joerg Schilling wrote:


Linus is right with his primary decision, but this also applies for static
linking. See Lawrence Rosen for more information, the GPL does not distinguish
between static and dynamic linking.
  

GPLv2 does not address linking at all and only makes vague references to the "program".  There is 
no insinuation that the program needs to occupy a single address space or mention of address spaces at all. 
The "program" could potentially be a composition of multiple cooperating executables (e.g. like 
GCC) or multiple modules.  As you say, everything depends on the definition of a "derived work".

If a shell script may be dependent on GNU 'cat', does that make the shell script a 
"derived work"?  Note that GNU 'cat' could be replaced with some other 'cat' 
since 'cat' has a well defined interface.  A very similar situation exists for loadable 
modules which have well defined interfaces (like 'cat').  Based on the argument used for 
'cat', the mere injection of a loadable module into an execution environment which 
includes GPL components should not require that module to be distributable under GPL.  
The module only needs to be distributable under GPL if it was developed in such a way 
that it specifically depends on GPL components.



This is how I see it as well.

The big problem is not the insmod'ing of the blob but how it is distributed.

As far as I know this can be circumvented by not including it in the main 
distribution but through a separate repo to be installed afterwards, ala Debian 
non-free.

-Ross
  


Various distros do the same thing with patent/license encumbered and 
binary-only pieces like some device drivers, applications, and 
multimedia codecs and playback components.  If a user wants that piece 
they click 'yes I still want it'.
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] 64-bit vs 32-bit applications

2010-08-18 Thread Peter Jeremy
On 2010-Aug-18 04:40:21 +0800, Joerg Schilling wrote:
>Ian Collins  wrote:
>> Some applications benefit from the extended register set and function 
>> call ABI, others suffer due to increased sizes impacting the cache.
>
>Well, please verify your claims, as they do not match my experience.

I would agree with Ian that it varies.  I have recently been
evaluating a number of different SHA256 implementations and have just
compared the 32-bit vs 64-bit performance on both x86 (P4 nocona using
gcc 4.2.1) and SPARC (US-IVa using Studio12).

Comparing the different implementations on each platform, the
differences between best and worst varied from 10% to 27% depending on
the platform (and the slowest algorithm on x86/64 was tied for fastest on
the other three platform/mode combinations).

Comparing the 32-bit vs 64-bit version of each implementation on
each platform, the difference between 32-bit and 64-bit varied from
-11% to +13% on SPARC, and from no change to +68% on x86.

My interpretation of those results is that you can't generalise: the
only way to determine whether your application is faster in 32-bit or
64-bit mode is to test it.  And your choice of algorithm is at least
as important as whether it's 32-bit or 64-bit.
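
For anyone who wants to reproduce that kind of comparison, a minimal
sketch (the source file name is hypothetical; -m32/-m64 are the usual
mode flags for gcc 4.x and are also accepted by Studio 12):

  # build the same code in both modes and time each run
  cc -O2 -m32 -o sha256_bench32 sha256_bench.c
  cc -O2 -m64 -o sha256_bench64 sha256_bench.c
  ptime ./sha256_bench32   # ptime reports elapsed/user/sys times
  ptime ./sha256_bench64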

-- 
Peter Jeremy


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] 64-bit vs 32-bit applications

2010-08-18 Thread Ian Collins

On 08/18/10 08:40 AM, Joerg Schilling wrote:

Ian Collins  wrote:
   

Some applications benefit from the extended register set and function
call ABI, others suffer due to increased sizes impacting the cache.
 

Well, please verify your claims, as they do not match my experience.

It may be that you are right if you don't compile with optimization.
I compile with a high level of optimization, and all my applications run at
least as fast as in 32-bit mode (as mentioned, this does not apply to SPARC).
BTW: this applies to Sun Studio.

   
A quick test with a C++ application I'm working on, which does a lot of 
string and container manipulation, shows it
runs about 10% slower in 64-bit mode on AMD64 and about the same in 32- 
or 64-bit mode on a Core i7. Built with -fast.


--
Ian.

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Opensolaris is apparently dead

2010-08-18 Thread Ross Walker
On Aug 18, 2010, at 10:43 AM, Bob Friesenhahn wrote:

> On Wed, 18 Aug 2010, Joerg Schilling wrote:
>> 
>> Linus is right with his primary decision, but this also applies for static
>> linking. See Lawrence Rosen for more information, the GPL does not distinguish
>> between static and dynamic linking.
> 
> GPLv2 does not address linking at all and only makes vague references to the 
> "program".  There is no insinuation that the program needs to occupy a single 
> address space or mention of address spaces at all. The "program" could 
> potentially be a composition of multiple cooperating executables (e.g. like 
> GCC) or multiple modules.  As you say, everything depends on the definition 
> of a "derived work".
> 
> If a shell script may be dependent on GNU 'cat', does that make the shell 
> script a "derived work"?  Note that GNU 'cat' could be replaced with some 
> other 'cat' since 'cat' has a well defined interface.  A very similar 
> situation exists for loadable modules which have well defined interfaces 
> (like 'cat').  Based on the argument used for 'cat', the mere injection of a 
> loadable module into an execution environment which includes GPL components 
> should not require that module to be distributable under GPL.  The module 
> only needs to be distributable under GPL if it was developed in such a way 
> that it specifically depends on GPL components.

This is how I see it as well.

The big problem is not the insmod'ing of the blob but how it is distributed.

As far as I know this can be circumvented by not including it in the main 
distribution but through a separate repo to be installed afterwards, ala Debian 
non-free.

-Ross

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS development moving behind closed doors

2010-08-18 Thread Frank Cusack

On 8/18/10 3:58 PM -0400 Linder, Doug wrote:

Erik Trimble wrote:


That said, stability vs new features has NOTHING to do with the OSS
development model.  It has everything to do with the RELEASE model.
[...]
All that said, using the OSS model for actual *development* of an
Operating System is considerably superior to using a closed model. For
reasons I outlined previously in a post to opensolaris-discuss.


I didn't mean to imply there was anything wrong with the OSS
release-early-and-often model.


I also didn't mean to imply Solaris was creaky or wrong or bad
compared to OpenSolaris.  It has different requirements.

But I did mean that folks who want the latest and greatest are not
the same folks that want stability.  So people using OpenSolaris
are not the same people using Solaris.  (Of course there are shops
where both are used to different ends, but one is not a gateway
to the other.)

I agree with Erik, there is an upgrade path, but that's just the
natural incorporation of OpenSolaris features into Solaris (same
as existed before, just "OpenSolaris" wasn't something available
publicly and widely).  That's not the same as migrating to OpenSolaris.
When today's features are in Solaris, OpenSolaris will have newer
shinier features.
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS Group Quotas

2010-08-18 Thread Jordan Schwartz
Well, I seem to have hit on that hot-button topic of NFSv4 (good
thing I didn't mention that we are running IPv4).

To get back to the topic: is anyone running ZFS group quotas on large
filesystems with lots of smaller files and thousands
of groups per filesystem, or have any quota-related experiences to share?
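
For reference, a minimal sketch of the group-quota operations in question
(pool, dataset and group names here are hypothetical):

  zfs set groupquota@webusers=50G tank/export    # set a quota for one group
  zfs get groupquota@webusers,groupused@webusers tank/export
  zfs groupspace tank/export                     # usage and quota for every group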

Thanks,

Jordan

On Tue, Aug 17, 2010 at 5:20 PM, Jordan Schwartz  wrote:
> ZFSfolk,
>
> Pardon the slightly offtopic post, but I figured this would be a good
> forum to get some feedback.
>
> I am looking at implementing zfs group quotas on some X4540s and
> X4140/J4400s, 64GB of RAM per server, running Solaris 10 Update 8
> servers with IDR143158-06.
>
> There is one large filesystem per server that is served via NFSv3 to
> linux based clients for web and email loads. There will be at least a
> few thousand group quotas per filesystem.
>
> Are there any scaling/performance issues with group based quotas?
>
> For the filesystems that  are already populated with thousands of
> groups and terabytes of data in relatively small files, will there be
> any performance impacts as the quotas are created?
>
> Also for the pre-populated filesystems will "zfs get groupsp...@$gid
> $zpool/$fs" return the total usage for the group?
>
> Thanks for any feedback,
>
> Jordan Schwartz
>
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS Group Quotas

2010-08-18 Thread Greg Mason
> 
> Also the linux NFSv4 client is bugged (as in hang-the-whole-machine bugged).
> I am deploying a new osol fileserver for home directories and I'm using NFSv3 
> + automounter (because I am also using one dataset per user, and thus I have 
> to mount each home dir separately).

We are also in the same boat here. I have about 125TB of ZFS storage in 
production currently, running OSOL, across 5 X4540s. We tried the NFSv4 route, 
and crawled back to NFSv3 and the linux automounter because NFSv4 on Linux is 
*that* broken. As in hung-disk-io-that-wedges-the-whole-box broken. We know 
that NFSv3 was never meant for the scale we're using it at, but we have no 
choice in the matter.

On the topic of Linux clients, NFS and ZFS: We've also found that Linux is bad 
at handling lots of mounts/umounts. We will occasionally find a client where 
the automounter requested a mount, but it never actually completed. It'll show 
as mounted in /proc/mounts, but won't *actually* be mounted. A umount -f for 
the affected filesystem fixes this. On ~250 clients in an HPC environment, 
we'll see such an error every week or so.

I'm hoping that recent versions of Linux (e.g. RHEL 6) are a bit better at 
NFSv4, but I'm not holding my breath.

--
Greg Mason
HPC Administrator
Michigan State University
Institute for Cyber Enabled Research
High Performance Computing Center

web: www.icer.msu.edu
email: gma...@msu.edu




___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS Group Quotas

2010-08-18 Thread Simone Caldana
On 18 Aug 2010, at 21:24, David Magda wrote:
> On Wed, August 18, 2010 15:14, Linder, Doug wrote:
>> I've noticed that every time someone mentions using NFS with ZFS here, they
>> always seem to be using NFSv3.  Is there a reason for this that I just
>> don't know about? 
> At $WORK it's generally namespace issues:
> 
>  http://blogs.sun.com/tdh/entry/linux_nfsv4_namespace_implementation_fools

Also the linux NFSv4 client is bugged (as in hang-the-whole-machine bugged).
I am deploying a new osol fileserver for home directories and I'm using NFSv3 + 
automounter (because I am also using one dataset per user, and thus I have to 
mount each home dir separately).
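
For what it's worth, the wildcard automounter map that makes the
one-dataset-per-user layout workable looks roughly like this (server and
path names are hypothetical):

  # /etc/auto_master (excerpt)
  /home   auto_home

  # /etc/auto_home -- each user's home is mounted from its own dataset
  *       fileserver:/export/home/&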

-- 
Simone Caldana

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Opensolaris is apparently dead

2010-08-18 Thread John D Groenveld
In message <4c6c4e30.7060...@ianshome.com>, Ian Collins writes:
>If you count Monday this week as lately, we have never had to wait more 
>than 24 hours for replacement drives for our 45x0 or 7000 series 

Same here, but two weeks ago for a failed drive in an X4150.

Last week SunSolve was sending my service order requests to
/dev/null, but someone manually entered it after I submitted
web feedback.

John
groenv...@acm.org

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Opensolaris is apparently dead

2010-08-18 Thread Ian Collins

On 08/19/10 03:44 AM, Ethan Erchinger wrote:

Have you dealt with Sun/Oracle support lately? lololol  It's a disaster.
We've had a failed disk in a fully supported Sun system for over 3 weeks,
Explorer data turned in, and been given the runaround forever.  The 7000
series support is no better, possibly worse.

   
If you count Monday this week as lately, we have never had to wait more 
than 24 hours for replacement drives for our 45x0 or 7000 series 
systems.  Even if the drive has only degraded by checksum errors, they 
still ship a replacement.


--
Ian.

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Opensolaris is apparently dead

2010-08-18 Thread Tim Cook
On Wed, Aug 18, 2010 at 1:34 PM, Miles Nordin  wrote:

> > "ee" == Ethan Erchinger  writes:
>
>ee> We've had a failed disk in a fully support Sun system for over
>ee> 3 weeks, Explorer data turned in, and been given the runaround
>ee> forever.
>
> that sucks.
>
> but while NetApp may replace your disk immediately, they are an
> abusive partner with their CEO waving his cock around on mailing lists
> presuming to ban all resale claiming first-sale doctrine does not
> apply to his magical ONTap software.  All their documentation is
> locked up behind a paywall, and they entice all their mouth-breather
> JFDI bank sysadmins to do their discussion of the product on the
> login-walled vendor-censored NOW forums.
>


>
> My choice would always be for the company that gives the option of not
> paying for support without their detonating some self-destructing
> DRMblob and destroying my entire business.  No matter how bad their
> support is when I choose to pay for it, I would always buy from them.
> Companies that try to make money by sticking a wrench into the gears
> of the market are necktie-strangled scammers and, I think, not a good
> fit for highly-technical customers.  It just doesn't pay in the long
> run, though if I'm honest I suppose the stories I've heard about
> ditching NetApp are about scaling problems as often as they are about
> abusive-relationship problems.
>
>

Holy uncalled-for, FUD-filled rant, Batman!  Great, we all see you hate
NetApp.  Not sure why you felt the need to interject it here.

PS: First sale doesn't apply, it has already held up in a court of law, and
Hitz isn't their CEO.  See: Davidson & Associates v. Internet Gateway Inc.
(2004)

--Tim
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] 2TB drive will not work on motherboard

2010-08-18 Thread seth keith
this is a 64 bit system, and I already used 2 of these drives in a raidz1 pool 
and they worked great, except I needed to use the SATA controller card and not 
the motherboard SATA. Any ideas?
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Fwd: Kernel panic on import / interrupted zfs destroy

2010-08-18 Thread Matthew Ellison
Hmm, zdb has been running since last night.  Anyone have any suggestions or advice 
on how to proceed with this issue?

Thanks,

Matthew Ellison

Begin forwarded message:

> From: Matthew Ellison 
> Date: August 18, 2010 3:15:39 AM EDT
> To: zfs-discuss@opensolaris.org
> Subject: Kernel panic on import / interrupted zfs destroy
> 
> I have a box running snv_134 that had a little boo-boo.
> 
> The problem first started a couple of weeks ago with some corruption on two 
> filesystems in an 11-disk 10TB raidz2 set.  I ran a couple of scrubs that 
> revealed a handful of corrupt files on my 2 de-duplicated zfs filesystems.  
> No biggie.
> 
> I thought that my problems had something to do with de-duplication in 134, so 
> I went about the process of creating new filesystems and copying over the 
> "good" files to another box.  Every time I touched the "bad" files I got a 
> filesystem error 5.  When trying to delete them manually, I got kernel panics 
> - which eventually turned into reboot loops.
> 
> I tried installing nexenta on another disk to see if that would allow me to 
> get past the reboot loop - which it did.  I finished moving the "good" 
> files over (using rsync, which skipped over the error 5 files, unlike cp or 
> mv), and destroyed one of the two filesystems.  Unfortunately, this caused a 
> kernel panic in the middle of the destroy operation, which then became 
> another panic / reboot loop.
> 
> I was able to get in with milestone=none and delete the zfs cache, but now I 
> have a new problem:  Any attempt to import the pool results in a panic.  I 
> have tried from my snv_134 install, from the live cd, and from nexenta.  I 
> have tried various zdb incantations (with aok=1 and zfs:zfs_recover=1), to no 
> avail - these error out after a few minutes.  I have even tried another 
> controller.
> 
> I have zdb -e -bcsvL running now from 134 (without aok=1) which has been 
> running for several hours.  Can zdb recover from this kind of situation (with 
> a half-destroyed filesystem that panics the kernel on import?)  What is the 
> impact of the above zdb operation without aok=1?  Is there any likelihood of 
> a recovery of non-affected filesystems?
> 
> Any suggestions?
> 
> Regards,
> 
> Matthew Ellison
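
For reference, a sketch of the recovery knobs and the zdb invocation
mentioned above (the pool name is hypothetical; these are best-effort,
last-resort aids rather than a guaranteed fix):

  # /etc/system additions, then reboot: relax assertions and enable
  # best-effort recovery on import
  set aok=1
  set zfs:zfs_recover=1

  # walk the exported pool's metadata without the space-leak check
  zdb -e -bcsvL mypool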

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] 2TB drive will not work on motherboard

2010-08-18 Thread Ian Collins

On 08/19/10 04:56 AM, seth keith wrote:

I had a perfectly working 7 drive raidz pool using some onboard SATA 
connectors and some on PCI SATA controller cards. My pool was using 500GB 
drives. I had the stupid idea to replace my 500GB drives with 2TB ( Mitsubishi 
) drives. This process resulted in me losing much of my data ( see my other 
post ). Now that I am picking up the pieces, I think I have tracked the problem 
down to some incompatibility with the drives and on board SATA. I can create 
pools on the controller card SATA slots, but not on the on board SATA. ( see 
below ). I can switch the two drives around and I can always create pools on 
the external (c11t0d0 ) SATA but never on the internal. However, with a 500GB 
drive it works fine on either one.

Does anyone know how to resolve this. Is there a bios update or some kind of 
patch or something? Please help.

my motherboard is an MSI N1996. I have two, so I tried the other one with the 
same result, so it's not a hardware failure. The other thing I notice is the 
drives look different to format. These are identical drives.

   

Is this a 32 bit system?  If so, you're out of luck with 2TB drives.

--
Ian.

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS development moving behind closed doors

2010-08-18 Thread Linder, Doug
Erik Trimble wrote: 

> That said, stability vs new features has NOTHING to do with the OSS
> development model.  It has everything to do with the RELEASE model.
> [...] 
> All that said, using the OSS model for actual *development* of an
> Operating System is considerably superior to using a closed model. For
> reasons I outlined previously in a post to opensolaris-discuss.

I didn't mean to imply there was anything wrong with the OSS 
release-early-and-often model.  On the contrary, I think it's excellent and I 
fully support it.  All my personal stuff is usually the very freshest code 
available that day.  I just meant to say that sometimes young OSS zealots 
get overconfident and think anyone who doesn't always upgrade business 
systems to the bleeding edge is "stodgy," "behind the times," or "stuck in the 
past" when in fact it's just "professionalism".

Doug Linder
--

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS development moving behind closed doors

2010-08-18 Thread Erik Trimble

 On 8/18/2010 12:24 PM, Linder, Doug wrote:

On Fri, Aug 13 at 19:06, Frank Cusack wrote:



OpenSolaris is for enthusiasts and great great folks like Nexenta.
Solaris lags so far behind it's not really an upgrade path.

It's often hard for OSS-minded people to believe, but there are an awful lot of places 
that actively DO NOT want the latest and greatest, and for good reason.  They let the 
pioneers get the arrows in the back.  Their main concern is stability over all else.  
Gee-whiz new features might seem great to someone who's used to patching their Fedora 
installation with 32 patches every morning, but for critical high-availability stuff that 
absolutely, positively, can NEVER go down, staying comfortably in the middle ground is the 
ideal strategy.  Sun's own white papers on patching advise that the best practice for 
patching is "do it when there's a specific reason".

Solaris isn't "so far behind".  It's right exactly where the market wants it.  
There are plenty of bleeding-edge operating systems out there for those who prefer to 
live on the edge.  As a Solaris sysadmin, would I like to use all the nifty geegaws on my 
production systems that I use on my desktop?  Sure, in a perfect world I'd be able to do 
that.  But that's not the reality, and I'm not risking the business or my job on anything 
less than ten thousand percent tested for years before adopting it.

"Newer" != "better".
--


Well,

Most of the systems people I know (myself included) also value stability and 
conformity to expectations (i.e. standards) over new features.


That said, stability vs new features has NOTHING to do with the OSS 
development model.  It has everything to do with the RELEASE model.


Also, to answer Frank's statement: yes, there *is* an upgrade path from 
Solaris 10 to OpenSolaris. There will likely be a *better* one when what 
was OpenSolaris is productized and turned into Solaris Express, soon to 
be Solaris Next (11).


Take a look at Fedora vs RedHat Enterprise.   This is the closest Linux 
analogy we've come up with for showing the (former) difference between 
OpenSolaris and Solaris 10/11/etc.


While there were certainly a few folks who ran OpenSolaris in production 
(who absolutely needed the new features and couldn't wait until they 
made it to Solaris 10), I'm going to say that 99.999% of people ran 
Solaris 10, for exactly the reasons you indicated above.


All that said, using the OSS model for actual *development* of an 
Operating System is considerably superior to using a closed model. For 
reasons I outlined previously in a post to opensolaris-discuss.


--
Erik Trimble
Java System Support
Mailstop:  usca22-123
Phone:  x17195
Santa Clara, CA

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS development moving behind closed doors

2010-08-18 Thread Linder, Doug
On Fri, Aug 13 at 19:06, Frank Cusack wrote:


> OpenSolaris is for enthusiasts and great great folks like Nexenta.
> Solaris lags so far behind it's not really an upgrade path.

It's often hard for OSS-minded people to believe, but there are an awful lot of 
places that actively DO NOT want the latest and greatest, and for good reason.  
They let the pioneers get the arrows in the back.  Their main concern is 
stability over all else.  Gee-whiz new features might seem great to someone 
who's used to patching their Fedora installation with 32 patches every 
morning, but for critical high-availability stuff that absolutely, positively, 
can NEVER go down, staying comfortably in the middle ground is the ideal 
strategy.  Sun's own white papers on patching advise that the best practice for 
patching is "do it when there's a specific reason".

Solaris isn't "so far behind".  It's right exactly where the market wants it.  
There are plenty of bleeding-edge operating systems out there for those who 
prefer to live on the edge.  As a Solaris sysadmin, would I like to use all the 
nifty geegaws on my production systems that I use on my desktop?  Sure, in a 
perfect world I'd be able to do that.  But that's not the reality, and I'm not 
risking the business or my job on anything less than ten thousand percent 
tested for years before adopting it.

"Newer" != "better".
--

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS Group Quotas

2010-08-18 Thread David Magda
On Wed, August 18, 2010 15:14, Linder, Doug wrote:

> I've noticed that every time someone mentions using NFS with ZFS here, they
> always seem to be using NFSv3.  Is there a reason for this that I just
> don't know about?  To me, using NFSv4 is a no-brainer.  ZFS supports it
> natively, it supports all the wonderful extra capabilities that the ZFS
> ACLs allow, has stronger security, a stateful protocol, and all kinds of
> other nifty stuff.  Why do people seem to be clinging so rabidly to the
> old version?  Is there some technical reason I'm missing?

At $WORK it's generally namespace issues:

  http://blogs.sun.com/tdh/entry/linux_nfsv4_namespace_implementation_fools

Haven't really found a use for the "extras" that NFSv4 adds, so it's not
worth the effort.


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS Group Quotas

2010-08-18 Thread Linder, Doug
Jordan Schwartz wrote:

> There is one large filesystem per server that is served via NFSv3 to

I've noticed that every time someone mentions using NFS with ZFS here, they 
always seem to be using NFSv3.  Is there a reason for this that I just don't 
know about?  To me, using NFSv4 is a no-brainer.  ZFS supports it natively, it 
supports all the wonderful extra capabilities that the ZFS ACLs allow, has 
stronger security, a stateful protocol, and all kinds of other nifty stuff.  Why 
do people seem to be clinging so rabidly to the old version?  Is there some 
technical reason I'm missing?

Doug Linder

I apologize for all the stupid cruft below that my company's mail server adds.


--
Learn more about Merchant Link at www.merchantlink.com.

THIS MESSAGE IS CONFIDENTIAL.  This e-mail message and any attachments are 
proprietary and confidential information intended only for the use of the 
recipient(s) named above.  If you are not the intended recipient, you may not 
print, distribute, or copy this message or any attachments.  If you have 
received this communication in error, please notify the sender by return e-mail 
and delete this message and any attachments from your computer.

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Opensolaris is apparently dead

2010-08-18 Thread Miles Nordin
> "ee" == Ethan Erchinger  writes:

ee> We've had a failed disk in a fully support Sun system for over
ee> 3 weeks, Explorer data turned in, and been given the runaround
ee> forever.

that sucks.  

but while NetApp may replace your disk immediately, they are an
abusive partner with their CEO waving his cock around on mailing lists
presuming to ban all resale claiming first-sale doctrine does not
apply to his magical ONTap software.  All their documentation is
locked up behind a paywall, and they entice all their mouth-breather
JFDI bank sysadmins to do their discussion of the product on the
login-walled vendor-censored NOW forums.

My choice would always be for the company that gives the option of not
paying for support without their detonating some self-destructing
DRMblob and destroying my entire business.  No matter how bad their
support is when I choose to pay for it, I would always buy from them.
Companies that try to make money by sticking a wrench into the gears
of the market are necktie-strangled scammers and, I think, not a good
fit for highly-technical customers.  It just doesn't pay in the long
run, though if I'm honest I suppose the stories I've heard about
ditching NetApp are about scaling problems as often as they are about
abusive-relationship problems.


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Please help destroy pool.

2010-08-18 Thread Alxen4
Thanks Cindy,

I just needed to delete all LUNs first:

sbdadm delete-lu 600144F00800270514BC4C1E29FB0001

itadm delete-target -f
iqn.1986-03.com.sun:02:f38e0b34-be30-ca29-dfbd-d1d28cd75502

And then I was able to destroy the ZFS system itself.
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Cant't detach spare device from pool

2010-08-18 Thread Mark Musante
You need to let the resilver complete before you can detach the spare.  This is 
a known problem, CR 6909724.

http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6909724
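
Until then, a minimal sketch of the workaround is simply to wait it out and
detach afterwards (pool and device names taken from the status output
quoted below; the polling interval is arbitrary):

  # wait for the resilver to finish, then detach the spare
  while zpool status tank | grep 'resilver in progress' > /dev/null
  do
          sleep 300
  done
  zpool detach tank c16t0d0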



On 18 Aug 2010, at 14:02, Dr. Martin Mundschenk wrote:

> Hi!
> 
> I had trouble with my raidz the other day, in that some of the block devices 
> were not found by the OSOL box, so the spare device was attached automatically.
> 
> After fixing the problem, the missing device came back online, but I am 
> unable to detach the spare device, even though all devices are online and 
> functional.
> 
> m...@iunis:~# zpool status tank
>   pool: tank
>  state: ONLINE
> status: One or more devices is currently being resilvered.  The pool will
> continue to function, possibly in a degraded state.
> action: Wait for the resilver to complete.
>  scrub: resilver in progress for 1h5m, 1,76% done, 61h12m to go
> config:
> 
> NAME   STATE READ WRITE CKSUM
> tank   ONLINE   0 0 0
>   raidz1-0 ONLINE   0 0 0
> c9t0d1 ONLINE   0 0 0
> c9t0d3 ONLINE   0 0 0  15K resilvered
> c9t0d0 ONLINE   0 0 0
> spare-3ONLINE   0 0 0
>   c9t0d2   ONLINE   0 0 0  37,5K resilvered
>   c16t0d0  ONLINE   0 0 0  14,1G resilvered
> cache
>   c18t0d0  ONLINE   0 0 0
> spares
>   c16t0d0  INUSE currently in use
> 
> errors: No known data errors
> 
> m...@iunis:~# zpool detach tank c16t0d0
> cannot detach c16t0d0: no valid replicas
> 
> How can I solve the Problem?
> 
> Martin
> 
> 
> ___
> zfs-discuss mailing list
> zfs-discuss@opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Cant't detach spare device from pool

2010-08-18 Thread Dr. Martin Mundschenk
Hi!

I had trouble with my raidz the other day, in that some of the block devices were 
not found by the OSOL box, so the spare device was attached automatically.

After fixing the problem, the missing device came back online, but I am unable 
to detach the spare device, even though all devices are online and functional.

m...@iunis:~# zpool status tank
  pool: tank
 state: ONLINE
status: One or more devices is currently being resilvered.  The pool will
continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
 scrub: resilver in progress for 1h5m, 1,76% done, 61h12m to go
config:

NAME   STATE READ WRITE CKSUM
tank   ONLINE   0 0 0
  raidz1-0 ONLINE   0 0 0
c9t0d1 ONLINE   0 0 0
c9t0d3 ONLINE   0 0 0  15K resilvered
c9t0d0 ONLINE   0 0 0
spare-3ONLINE   0 0 0
  c9t0d2   ONLINE   0 0 0  37,5K resilvered
  c16t0d0  ONLINE   0 0 0  14,1G resilvered
cache
  c18t0d0  ONLINE   0 0 0
spares
  c16t0d0  INUSE currently in use

errors: No known data errors

m...@iunis:~# zpool detach tank c16t0d0
cannot detach c16t0d0: no valid replicas

How can I solve the Problem?

Martin


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Opensolaris is apparently dead

2010-08-18 Thread Paul Kraus
On Wed, Aug 18, 2010 at 11:44 AM, Ethan Erchinger  wrote:
>
> Frank wrote:
>> Have you dealt with RedHat "Enterprise" support?  lol.
>
> Have you dealt with Sun/Oracle support lately? lololol  It's a disaster.
> We've had a failed disk in a fully supported Sun system for over 3 weeks,
> Explorer data turned in, and been given the runaround forever.  The 7000
> series support is no better, possibly worse.

We have seen virtually no degradation of Sun (Oracle) Support since
the takeover. Sun started enforcing the requirement of system serial
number before Oracle completed the acquisition, and that is the last
major change we've seen. On the other hand, much of our experience is
related to our local support staff, and they were well above Sun
average for over a decade.

>> The "enterprise" is going to continue to want Oracle on Solaris.
>
> The "enterprise" wants what they used to get from Sun, not what's
> currently being offered.

-- 
{1-2-3-4-5-6-7-}
Paul Kraus
-> Senior Systems Architect, Garnet River ( http://www.garnetriver.com/ )
-> Sound Coordinator, Schenectady Light Opera Company (
http://www.sloctheater.org/ )
-> Technical Advisor, RPI Players
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Opensolaris is apparently dead

2010-08-18 Thread Jacob Ritorto
+1: This thread is relevant and productive discourse that'll assist 
OpenSolaris orphans in pending migration choices.


On 08/18/10 12:27, Edward Ned Harvey wrote:

Compatibility of ZFS & Linux, as well as the future development of ZFS, and
the health and future of opensolaris / solaris, oracle & sun ... Are
definitely relevant to this list.

People are allowed to conjecture.

If you don't have interest in a thread, just ignore the thread.

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Please help destroy pool.

2010-08-18 Thread Cindy Swearingen

Hi Alxen4,

If /tank/macbook0-data is a ZFS volume that has been shared as an iSCSI
LUN, then you will need to unshare/remove those features before removing
it.

Thanks,

Cindy

On 08/18/10 00:10, Alxen4 wrote:

I have a pool with zvolume (Opensolaris b134)

When I try zpool destroy tank I get "pool is busy"

# zpool destroy -f tank
cannot destroy 'tank': pool is busy


When I try to destroy the zvolume first I get "dataset is busy"

# zfs destroy -f tank/macbook0-data
cannot destroy 'tank/macbook0-data': dataset is busy

zfs list
NAME  USED  AVAIL  REFER  MOUNTPOINT

tank/fs1134K  16.7T  44.7K  /tank/fs1
tank/fs2135K  16.7T  44.7K  /tank/fs2
tank/macbook0-data   4.13T  20.3T   522G  -
tank/fs3 145G  16.7T   145G  /tank/fs3


What next should I try ?

Please help.

Thanks in advance.

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Opensolaris is apparently dead

2010-08-18 Thread Frank Cusack

On 8/18/10 9:29 AM -0700 Ethan Erchinger wrote:

Edward wrote:

I have had wonderful support, up to and including recently, on my Sun
hardware.


I wish we had the same luck.  We've been handed off between 3 different
"technicians" at this point, each one asking for the same information.


Do they at least phrase it as "Can you verify the problem?", the way
that call center operators ask you for the information you've already
entered via the automated attendant? :)
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Opensolaris is apparently dead

2010-08-18 Thread Joerg Schilling
"Garrett D'Amore"  wrote:

> All of this is entirely legal conjecture, by people who aren't lawyers,
> for issues that have not been tested by court and are clearly subject to
> interpretation.  Since it no longer is relevant to the topic of the
> list, can we please either take the discussion offline, or agree to just
> let the topic die (on the basis that there cannot be an authoritative
> answer until there is some case law upon which to base it?)

Garrett, I replied to your mail because you quoted claims from people who 
aren't lawyers and who do not give evidence for their claims.

As I know that it makes no sense to discuss claims from non-lawyers, I replied
with quotes and statements I have read from lawyers.

I am no lawyer, but I talk with lawyers about copyright and licensing issues, and 
I try to ignore legal claims from anybody who does not give evidence for them.

Jörg

-- 
 EMail:jo...@schily.isdn.cs.tu-berlin.de (home) Jörg Schilling D-13353 Berlin
   j...@cs.tu-berlin.de(uni)  
   joerg.schill...@fokus.fraunhofer.de (work) Blog: 
http://schily.blogspot.com/
 URL:  http://cdrecord.berlios.de/private/ ftp://ftp.berlios.de/pub/schily
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] 2TB drive will not work on motherboard

2010-08-18 Thread seth keith
I had a perfectly working 7 drive raidz pool using some onboard SATA 
connectors and some on PCI SATA controller cards. My pool was using 500GB 
drives. I had the stupid idea to replace my 500GB drives with 2TB ( Mitsubishi 
) drives. This process resulted in me losing much of my data ( see my other 
post ). Now that I am picking up the pieces, I think I have tracked the problem 
down to some incompatibility with the drives and on board SATA. I can create 
pools on the controller card SATA slots, but not on the on board SATA. ( see 
below ). I can switch the two drives around and I can always create pools on 
the external (c11t0d0 ) SATA but never on the internal. However, with a 500GB 
drive it works fine on either one.

Does anyone know how to resolve this. Is there a bios update or some kind of 
patch or something? Please help.

my motherboard is an MSI N1996. I have two, so I tried the other one with the 
same result, so it's not a hardware failure. The other thing I notice is the 
drives look different to format. These are identical drives.

# format

AVAILABLE DISK SELECTIONS:
   0. c3d0 
  /p...@0,0/pci8086,3...@1c/pci-...@0/i...@1/c...@0,0
   1. c6d1 
  /p...@0,0/pci-...@1f,2/i...@1/c...@1,0
   2. c11t0d0 
  /p...@0,0/pci8086,3...@1c,2/pci1095,7...@0/d...@0,0
Specify disk (enter its number):
Specify disk (enter its number):
Specify disk (enter its number): zpool create test^C
#
# zpool destroy test3
# zpool create test3 c11t0d0
# zpool create test4 c6d1
invalid vdev specification
use '-f' to override the following errors:
/dev/dsk/c6d1s0 is part of exported or potentially active ZFS pool test2. 
Please see zpool(1M).
# zpool create -f test4 c6d1
cannot create 'test4': invalid argument for this pool operation
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Solaris startup script location

2010-08-18 Thread andrew . gabriel
What you say is true only on the system itself. On an NFS client system, 30 
seconds of lost data in the middle of a file (as per my earlier example) is a 
corrupt file.

-original message-
Subject: Re: [zfs-discuss] Solaris startup script location
From: Edward Ned Harvey 
Date: 18/08/2010 17:17

> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Alxen4
> 
> Disabling the ZIL converts all synchronous calls to asynchronous ones, which
> makes ZFS report data acknowledgment before it has actually been written
> to stable storage, which in turn improves performance but might cause
> data corruption in case of a server crash.
> 
> Is it correct ?

It is partially correct.

With the ZIL disabled, you could lose up to 30 sec of writes, but it won't
cause an inconsistent filesystem, or "corrupt" data.  If you make a
distinction between "corrupt" and "lost" data, then this is valuable for you
to know:

Disabling the ZIL can result in up to 30sec of lost data, but not corrupt
data.

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Networker & Dedup @ ZFS

2010-08-18 Thread Richard Elling
On Aug 18, 2010, at 5:11 AM, Paul Kraus wrote:

> On Wed, Aug 18, 2010 at 7:51 AM, Peter Tribble  
> wrote:
> 
>> I tried this with NetBackup, and decided against it pretty rapidly.
>> Basically, we
>> got hardly any dedup at all. (Something like 3%; compression gave us
>> much better results.) Tiny changes in block alignment completely ruin the
>> possibility of significant benefit.
> 
>We are using Netbackup with ZFS Disk Stage under Solaris 10U8,
> no dedupe but are getting 1.9x compression ratio :-)
> 
>> Using ZFS dedup is logically the wrong place to do this; you want a decent
>> backup system that doesn't generate significant amounts of duplicate data
>> in the first place.
> 
>The latest release of NBU (7.0) supports both client side and
> server side dedupe (at additional cost ;-). We are using it in test
> for backing up remote servers across slow WAN links with very good
> results.


It is always better to manipulate data closer to the consumer of said data. 
Ideally, applications replicate, compress, and dedup their own data.
 -- richard

-- 
Richard Elling
rich...@nexenta.com   +1-760-896-4422
Enterprise class storage for everyone
www.nexenta.com



___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Opensolaris is apparently dead

2010-08-18 Thread Ethan Erchinger
Edward wrote:
> That is really weird.  What are you calling "failed?"  If you're getting
> either a red blinking light, or a checksum failure on a device in a zpool...
> You should get your replacement with no trouble.

Yes, failed, with all the normal "failed" signs, cfgadm not finding it,
"FAULTED" in zpool output.

> I have had wonderful support, up to and including recently, on my Sun
> hardware.

I wish we had the same luck.  We've been handed off between 3 different
"technicians" at this point, each one asking for the same information.
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Opensolaris is apparently dead

2010-08-18 Thread Edward Ned Harvey
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Garrett D'Amore
> 
> interpretation.  Since it no longer is relevant to the topic of the
> list, can we please either take the discussion offline, or agree to
> just
> let the topic die (on the basis that there cannot be an authoritative
> answer until there is some case law upon which to base it?)

Compatibility of ZFS & Linux, as well as the future development of ZFS, and
the health and future of opensolaris / solaris, oracle & sun ... Are
definitely relevant to this list.

People are allowed to conjecture.

If you don't have interest in a thread, just ignore the thread.

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Opensolaris is apparently dead

2010-08-18 Thread Edward Ned Harvey
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Ethan Erchinger
> 
> We've had a failed disk in a fully supported Sun system for over 3 weeks,
> Explorer data turned in, and been given the runaround forever.  The
> 7000
> series support is no better, possibly worse.

That is really weird.  What are you calling "failed?"  If you're getting
either a red blinking light, or a checksum failure on a device in a zpool...
You should get your replacement with no trouble.

I have had wonderful support, up to and including recently, on my Sun
hardware.

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Solaris startup script location

2010-08-18 Thread Edward Ned Harvey
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Alxen4
> 
> For example I'm trying to use ramdisk as ZIL device (ramdiskadm )

Other people have already corrected you about ramdisk for log.
It's already been said, use SSD, or disable ZIL completely.

But this was not said:

In many cases, you can gain a large performance increase by enabling the
write-back cache on your ZFS server's RAID controller card.  You only want to
do this if you have a BBU on the card.  The performance gain is
*not* quite as good as using a nonvolatile log device, but it is certainly worth
checking anyway, because it's low cost and doesn't consume slots...

Also, if you get a log device, you want two of them, and mirror them.
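
A sketch of what that looks like, with hypothetical device names:

  # add a mirrored pair of SSDs as the dedicated log device
  zpool add tank log mirror c4t0d0 c4t1d0
  zpool status tank    # the pair shows up under a separate "logs" section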

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Solaris startup script location

2010-08-18 Thread Edward Ned Harvey
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Alxen4
> 
> Disabling the ZIL converts all synchronous calls to asynchronous ones, which
> makes ZFS report data acknowledgment before it has actually been written
> to stable storage, which in turn improves performance but might cause
> data corruption in case of a server crash.
> 
> Is it correct ?

It is partially correct.

With the ZIL disabled, you could lose up to 30 sec of writes, but it won't
cause an inconsistent filesystem, or "corrupt" data.  If you make a
distinction between "corrupt" and "lost" data, then this is valuable for you
to know:

Disabling the ZIL can result in up to 30sec of lost data, but not corrupt
data.
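
For completeness, the usual way to disable the ZIL on these builds is the
zil_disable tunable (a sketch only; it is global to the host and trades
away the synchronous guarantees discussed above):

  # /etc/system (takes effect at the next boot)
  set zfs:zil_disable=1

  # or on a live system, until the next reboot
  echo zil_disable/W0t1 | mdb -kw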

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Opensolaris is apparently dead

2010-08-18 Thread Garrett D'Amore
All of this is entirely legal conjecture, by people who aren't lawyers,
for issues that have not been tested by court and are clearly subject to
interpretation.  Since it no longer is relevant to the topic of the
list, can we please either take the discussion offline, or agree to just
let the topic die (on the basis that there cannot be an authoritative
answer until there is some case law upon which to base it?)

- Garrett


On Wed, 2010-08-18 at 09:43 -0500, Bob Friesenhahn wrote:
> On Wed, 18 Aug 2010, Joerg Schilling wrote:
> >
> > Linus is right with his primary decision, but this also applies for static
> > linking. See Lawrence Rosen for more information, the GPL does not distinguish
> > between static and dynamic linking.
> 
> GPLv2 does not address linking at all and only makes vague references 
> to the "program".  There is no insinuation that the program needs to 
> occupy a single address space or mention of address spaces at all. 
> The "program" could potentially be a composition of multiple 
> cooperating executables (e.g. like GCC) or multiple modules.  As you 
> say, everything depends on the definition of a "derived work".
> 
> If a shell script may be dependent on GNU 'cat', does that make the 
> shell script a "derived work"?  Note that GNU 'cat' could be replaced 
> with some other 'cat' since 'cat' has a well defined interface.  A 
> very similar situation exists for loadable modules which have well 
> defined interfaces (like 'cat').  Based on the argument used for 
> 'cat', the mere injection of a loadable module into an execution 
> environment which includes GPL components should not require that 
> module to be distributable under GPL.  The module only needs to be 
> distributable under GPL if it was developed in such a way that it 
> specifically depends on GPL components.
> 
> Bob


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Narrow escape with FAULTED disks

2010-08-18 Thread Cindy Swearingen

It's hard to tell what caused the SMART predictive-failure message;
it could have been something like a temp fluctuation. If ZFS noticed that
a disk wasn't available yet, then I would expect a message to that effect.

In any case, I think I would have a replacement disk available.

The important thing is that you continue to monitor your hardware
for failures.

We recommend using ZFS redundancy and always having backups of your
data.

Thanks,

Cindy


On 08/18/10 02:38, Mark Bennett wrote:

Hi Cindy,

Not very enlightening.
No previous errors for the disks.
I did replace one about a month earlier when it showed a rise in io errors, and 
before it reached a level where fault management would have failed it.

The disk mentioned is not one of those that went FAULTED.
Also, no more smart error events since.
The ZFS pool faulted on boot after a reboot command.

The scrub was eventually stopped at 75% due to the performance impact.
No errors were found up to that point.

One thing I see from the (attached) messages log is that the zfs error occurs 
before all the disks have been logged as enumerated.
This is probably the first reboot since at least 8, and maybe 16, extra disks 
were hot-plugged and added to the pool.

The Hardware is a Supermicro 3U plus 2 x 4U SAS storage chassis.
The SAS controller has 16 disks on one SAS port, and 32 in the other.




Aug 16 18:44:39.2154 02f57499-ae0a-c46c-b8f8-825205a8505d ZFS-8000-D3
  100%  fault.fs.zfs.device
Problem in: zfs://pool=drgvault/vdev=d79c5fc5b5c3b789
   Affects: zfs://pool=drgvault/vdev=d79c5fc5b5c3b789
   FRU: -
  Location: -
Aug 16 18:44:39.5569 25e0bdc2-0171-c4b5-b530-a268f8572bd1 ZFS-8000-D3
  100%  fault.fs.zfs.device
Problem in: zfs://pool=drgvault/vdev=e912d259d7829903
   Affects: zfs://pool=drgvault/vdev=e912d259d7829903
   FRU: -
  Location: -
Aug 16 18:44:39.8964 8e9cff35-8e9d-c0f1-cd5b-bd1d0276cda1 ZFS-8000-CS
  100%  fault.fs.zfs.pool
Problem in: zfs://pool=drgvault
   Affects: zfs://pool=drgvault
   FRU: -
  Location: -
Aug 16 18:45:47.2604 3848ba46-ee18-4aad-b632-9baf25b532ea DISK-8000-0X
  100%  fault.io.disk.predictive-failure
Problem in: 
hc://:product-id=LSILOGIC-SASX36-A.1:server-id=:chassis-id=50030480005a337f:serial=6XW15V2S:part=ST32000542AS-ST32000542AS:revision=CC34/ses-enclosure=1/bay=6/disk=0
   Affects: 
dev:///:devid=id1,s...@n5000c50021f4916f//p...@0,0/pci8086,4...@3/pci15d9,a...@0/s...@24,0
   FRU: 
hc://:product-id=LSILOGIC-SASX36-A.1:server-id=:chassis-id=50030480005a337f:serial=6XW15V2S:part=ST32000542AS-ST32000542AS:revision=CC34/ses-enclosure=1/bay=6/disk=0
  Location: 006



Mark.




___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Opensolaris is apparently dead

2010-08-18 Thread Ethan Erchinger

Frank wrote:
> Have you dealt with RedHat "Enterprise" support?  lol.

Have you dealt with Sun/Oracle support lately? lololol  It's a disaster.
We've had a failed disk in a fully supported Sun system for over 3 weeks,
Explorer data turned in, and been given the runaround forever.  The 7000
series support is no better, possibly worse.

> The "enterprise" is going to continue to want Oracle on Solaris.

The "enterprise" wants what they used to get from Sun, not what's
currently being offered.

Ethan
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Opensolaris is apparently dead

2010-08-18 Thread Bob Friesenhahn

On Wed, 18 Aug 2010, Joerg Schilling wrote:


Linus is right with his primary decision, but this also applies for static
linking. See Lawrence Rosen for more information, the GPL does not distinguish
between static and dynamic linking.


GPLv2 does not address linking at all and only makes vague references 
to the "program".  There is no insinuation that the program needs to 
occupy a single address space or mention of address spaces at all. 
The "program" could potentially be a composition of multiple 
cooperating executables (e.g. like GCC) or multiple modules.  As you 
say, everything depends on the definition of a "derived work".


If a shell script may be dependent on GNU 'cat', does that make the 
shell script a "derived work"?  Note that GNU 'cat' could be replaced 
with some other 'cat' since 'cat' has a well defined interface.  A 
very similar situation exists for loadable modules which have well 
defined interfaces (like 'cat').  Based on the argument used for 
'cat', the mere injection of a loadable module into an execution 
environment which includes GPL components should not require that 
module to be distributable under GPL.  The module only needs to be 
distributable under GPL if it was developed in such a way that it 
specifically depends on GPL components.


Bob
--
Bob Friesenhahn
bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,http://www.GraphicsMagick.org/
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Solaris startup script location

2010-08-18 Thread Gary Mills
On Wed, Aug 18, 2010 at 12:16:04AM -0700, Alxen4 wrote:
> Is there any way run start-up script before non-root pool is mounted ?
> 
> For example I'm trying to use ramdisk as ZIL device (ramdiskadm )
> So I need to create ramdisk before actual pool is mounted otherwise it 
> complains that log device is missing :)

Yes, it's actually quite easy.  You need to create an SMF manifest and
method.  The manifest should make the ZFS mount dependent on it with
the `dependent' and `/dependent' tag pair.  It also needs to be
dependent on the resources it needs, with the `dependency' and
`/dependency' pairs.  It should also specify a `single_instance/' and
`transient' service.  The method script can do whatever the mount
requires, such as creating the ramdisk.
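
A rough sketch of such a method script (the ramdisk name and size are
hypothetical; the manifest points its start/stop methods here and carries
the dependent/dependency blocks described above):

  #!/sbin/sh
  # create the ramdisk before the pool that uses it is mounted
  . /lib/svc/share/smf_include.sh

  case "$1" in
  start)
          /usr/sbin/ramdiskadm -a zilram 1g || exit $SMF_EXIT_ERR_FATAL
          ;;
  stop)
          /usr/sbin/ramdiskadm -d zilram
          ;;
  *)
          exit $SMF_EXIT_ERR_CONFIG
          ;;
  esac
  exit $SMF_EXIT_OK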

-- 
-Gary Mills--Unix Group--Computer and Network Services-
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Networker & Dedup @ ZFS

2010-08-18 Thread Paul Kraus
On Wed, Aug 18, 2010 at 7:51 AM, Peter Tribble  wrote:

> I tried this with NetBackup, and decided against it pretty rapidly.
> Basically, we
> got hardly any dedup at all. (Something like 3%; compression gave us
> much better results.) Tiny changes in block alignment completely ruin the
> possibility of significant benefit.

We are using Netbackup with ZFS Disk Stage under Solaris 10U8,
no dedupe but are getting 1.9x compression ratio :-)

> Using ZFS dedup is logically the wrong place to do this; you want a decent
> backup system that doesn't generate significant amounts of duplicate data
> in the first place.

The latest release of NBU (7.0) supports both client side and
server side dedupe (at additional cost ;-). We are using it in test
for backing up remote servers across slow WAN links with very good
results.

-- 
{1-2-3-4-5-6-7-}
Paul Kraus
-> Senior Systems Architect, Garnet River ( http://www.garnetriver.com/ )
-> Sound Coordinator, Schenectady Light Opera Company (
http://www.sloctheater.org/ )
-> Technical Advisor, RPI Players
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Networker & Dedup @ ZFS

2010-08-18 Thread Peter Tribble
On Wed, Aug 18, 2010 at 9:48 AM, Sigbjorn Lie  wrote:
> Hi,
>
> We are considering using ZFS-based storage as a staging disk for Networker.
> We're aiming at providing enough storage to keep 3 months' worth of backups
> on disk before they are moved to tape.
>
> To provide storage for 3 months of backups, we want to utilize the dedup
> functionality in ZFS.
>
> I've searched around for these topics and found no success stories; however,
> those who have tried did not mention whether they had attempted to change
> the recordsize to anything smaller than the default of 128k.
>
> Does anyone have any experience with this kind of setup?

I tried this with NetBackup, and decided against it pretty rapidly.
Basically, we
got hardly any dedup at all. (Something like 3%; compression gave us
much better results.) Tiny changes in block alignment completely ruin the
possibility of significant benefit.

Using ZFS dedup is logically the wrong place to do this; you want a decent
backup system that doesn't generate significant amounts of duplicate data
in the first place.

-- 
-Peter Tribble
http://www.petertribble.co.uk/ - http://ptribble.blogspot.com/
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Networker & Dedup @ ZFS

2010-08-18 Thread Hans Foertsch
Hello,

we use ZFS on Solaris 10u8 as a backup-to-disk solution with EMC Networker.

We use the standard 128k recordsize and ZFS compression.

We can't use dedup, because Solaris 10 doesn't offer it.

We are working on using more features and looking for further improvements,
but we are happy with this solution.

Hans
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Opensolaris is apparently dead

2010-08-18 Thread Joerg Schilling
Miles Nordin  wrote:

> > "gd" == Garrett D'Amore  writes:
>
>  >> Joerg is correct that CDDL code can legally live right
>  >> alongside the GPLv2 kernel code and run in the same program.
>
> gd> My understanding is that no, this is not possible.
>
> GPLv2 and CDDL are incompatible:
>
>  
> http://www.fsf.org/licensing/education/licenses/index_html/#GPLIncompatibleLicenses

This URL contains a claim from a layman - not a lawyer. It is based on a
questionable generalization and it lacks legal proof.

The GPL is in fact "incompatible" with any license but "public domain", and
the latter is not permitted in many jurisdictions (such as in Europe).

The GPL mentions something called a "derivative work". Such a work is created
when _you_ yourself make changes to an existing program. As you are the author
of these changes, you have the permission to put them under the GPL, as the
GPL requires.

Unfortunately, the FSF likes to convince you that the only legal way to make
any change to a GPLd program is by creating a so-called "derivative work".
This, however, cannot be done, for several reasons:

1) You would need to declare other people's code that you add to a GPLd work
to be your own changes, but this is in conflict with copyright law.

2) The GPL itself is in conflict with US Copyright law, see:
http://www.osscc.net/en/gpl.html
The papers from the lawyers Lawrence Rosen, Tom Gordon and Lothar Determan
explain why a license like the GPL is in conflict with US Copyright law, title
17 section 106:
http://www.copyright.gov/title17/92chap1.html#106
when it tries to redefine the statutory definition of a "derivative work".

Lawrence Rosen is the former legal counsel of the Open Source Initiative.
Tom Gordon is a US lawyer living in Berlin and working three rooms to my left.
Lothar Determan is a professor of law at the Freie Universität Berlin and at
the University of San Francisco. BTW: 30% of Lothar Determan's text consists
of legal proof and quotations.

3) All lawyers I am aware of who have published reviews of the GPL confirm that
the only way to combine code from independent works is to create a so-called
"collective work". This is even confirmed - for the specific case of a new
filesystem for Linux - by the FSF-friendly German lawyers who wrote the book
"Die GPL kommentiert und erklärt" and who work for Harald Welte
(gplviolations.org).


> however Linus's ``interpretation'' of the GPL considers that 'insmod'
> is ``mere aggregation'' and not ``linking'', but subject to rules of
> ``bad taste''.  Although this may sound ridiculous, there are blob
> drivers for wireless chips, video cards, and storage controllers that
> have relied on this ``interpretation'' for over a decade.  I think a ZFS
> porting project could do the same and end up emitting the same warning
> about a ``tainted'' kernel that proprietary modules do:

Linus is right with his primary decision, but this also applies to static
linking. See Lawrence Rosen for more information; the GPL does not distinguish
between static and dynamic linking.


>  http://lwn.net/Articles/147070/

A nice quote that illustrates the way Moglen acts in public. His claims do not
contain a single piece of legal proof; he has only made vague intimations that
leave open whether Linus is right or not. Moglen is a politician: he tries to
reach a certain goal, and he does not take the current legal situation into
account in his talks. Take this interview for what it is: a political
statement, not a legal claim.

People should be careful when listening to Moglen, as he e.g. claims that
people can rightfully relicense a BSD-licensed piece of code under the GPL
(without giving legal proof, as usual...). Let us check the legal situation:

Changing the license is a privileged act reserved to the copyright owner.
Unless you have an explicit permission to do so, you can't. The BSD license
does not contain such an explicit permission, so you cannot change the license
of other people's work distributed under the BSD license.

Changing the license would also require the right to sub-license, but the BSD
license (similar to the GPL) does not grant this right. As a result, every
user always gets his permissions directly from the original copyright holder
who put the code under the BSD license.

Given this background, even the BSD-licensed drivers in Linux can only be
legally used by Linux if they form a "collective work", which is clearly
permitted by the GPL. The same would apply to a CDDL-licensed driver in Linux.

And BTW: Moglen even confirmed to me in a private mail that the claims about
GPL/BSD and GPL/CDDL compatibility on the FSF website are wrong. As he has not
repeated this in public, that again shows the politician at work...

> the quickest link I found of Linus actually speaking about his
> ``interpretation''; his thoughts are IMHO completely muddled (which
> might be intentional):
>
>  http://lkml.org/lkml/2003/12/3/228

Nice to see that Linus seems to know the legal background ;-)

Jörg

-- 
 EMail

Re: [zfs-discuss] Solaris startup script location

2010-08-18 Thread Simone Caldana
On 18 Aug 2010, at 10:20, Alxen4 wrote:

> My NFS client is ESXi, so the major question is: is there a risk of
> corruption for VMware images if I disable the ZIL?

I use ZFS for the same purpose. I saw a huge improvement in performance by
using mirrors instead of raidz (roughly the difference sketched below). How
is your zpool configured?
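
To illustrate (pool and device names are just placeholders, not my actual
layout):

  # striped mirrors: every mirror vdev adds another set of random IOPS
  zpool create tank mirror c0t0d0 c0t1d0 mirror c0t2d0 c0t3d0 mirror c0t4d0 c0t5d0

  # one raidz vdev: small random I/O performs roughly like a single disk
  zpool create tank raidz c0t0d0 c0t1d0 c0t2d0 c0t3d0 c0t4d0 c0t5d0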


-- 
Simone Caldana
Senior Consultant
Critical Path
via Cuniberti 58, 10100 Torino, Italia
+39 011 4513811 (Direct)
+39 011 4513825 (Fax)
simone.cald...@criticalpath.net
http://www.cp.net/

Critical Path
A global leader in digital communications


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Solaris startup script location

2010-08-18 Thread Alxen4
Thanks. Everything is clear now.
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Networker & Dedup @ ZFS

2010-08-18 Thread Sigbjorn Lie
Hi,

We are considering using ZFS-based storage as a staging disk for Networker.
We're aiming at providing enough storage to keep 3 months' worth of backups on
disk before they are moved to tape.

To provide storage for 3 months of backups, we want to utilize the dedup
functionality in ZFS.

I've searched around for these topics and found no success stories; however,
those who have tried did not mention whether they had attempted to change the
recordsize to anything smaller than the default of 128k.

Does anyone have any experience with this kind of setup?
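
For reference, the knobs I have in mind are something like this (pool and
dataset names are just placeholders):

  zfs create tank/nsr-stage
  zfs set recordsize=32k tank/nsr-stage   # smaller than the 128k default
  zfs set dedup=on tank/nsr-stage
  zpool get dedupratio tank               # how much dedup actually achieves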


Regards,
Sigbjorn


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Solaris startup script location

2010-08-18 Thread Garrett D'Amore
On Wed, 2010-08-18 at 01:20 -0700, Alxen4 wrote:
> Thanks... Now I think I understand...
> 
> Let me summarize it and let me know if I'm wrong.
> 
> Disabling the ZIL converts all synchronous calls to asynchronous ones, which
> makes ZFS acknowledge data before it has actually been written to stable
> storage; that in turn improves performance but might cause data corruption
> in case of a server crash.
> 
> Is that correct?
> 
> In my case I'm having serious performance issues with NFS over ZFS.
> My NFS client is ESXi, so the major question is: is there a risk of
> corruption for VMware images if I disable the ZIL?

Yes.  If your server crashes, you can lose data.

- Garrett

> 
> 
> Thanks.


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Solaris startup script location

2010-08-18 Thread Andrew Gabriel

Alxen4 wrote:

Thanks... Now I think I understand...

Let me summarize it and let me know if I'm wrong.

Disabling the ZIL converts all synchronous calls to asynchronous ones, which
makes ZFS acknowledge data before it has actually been written to stable
storage; that in turn improves performance but might cause data corruption in
case of a server crash.

Is that correct?

In my case I'm having serious performance issues with NFS over ZFS.
  


You need a non-volatile slog, such as an SSD.
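
Adding one is a single command; the pool and device names below are only
examples:

  zpool add tank log c2t0d0
  # or, to protect the slog device itself:
  zpool add tank log mirror c2t0d0 c2t1d0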


My NFS client is ESXi, so the major question is: is there a risk of corruption
for VMware images if I disable the ZIL?
  


Yes.

If your NFS server takes an unexpected outage and comes back up again,
some writes which ESXi thinks succeeded will have been lost (typically the
5 to 30 seconds' worth of writes/updates immediately before the outage). So,
as an example, if you had an application writing a file sequentially, you
will likely find that an area of the file is corrupt because that data was
lost.


--
Andrew Gabriel
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Solaris startup script location

2010-08-18 Thread Alxen4
Thanks... Now I think I understand...

Let me summarize it and let me know if I'm wrong.

Disabling the ZIL converts all synchronous calls to asynchronous ones, which
makes ZFS acknowledge data before it has actually been written to stable
storage; that in turn improves performance but might cause data corruption in
case of a server crash.

Is that correct?

In my case I'm having serious performance issues with NFS over ZFS.
My NFS client is ESXi, so the major question is: is there a risk of corruption
for VMware images if I disable the ZIL?


Thanks.
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Solaris startup script location

2010-08-18 Thread Andrew Gabriel

Andrew Gabriel wrote:

Alxen4 wrote:

Is there any way to run a start-up script before a non-root pool is mounted?

For example, I'm trying to use a ramdisk as a ZIL device (ramdiskadm),
so I need to create the ramdisk before the actual pool is mounted;
otherwise it complains that the log device is missing :)


For sure I can manually remove and add it by script and put the
script in the regular rc2.d location... I'm just looking for a more
elegant way to do it.


Can you start by explaining what you're trying to do, because this may 
be completely misguided?


A ramdisk is volatile, so you'll lose it when the system goes down,
causing a failure to mount on reboot. Recreating a ramdisk on reboot
won't recreate the slog device you lost when the system went down. I
expect the zpool would fail to mount.


Furthermore, using a ramdisk as a ZIL is effectively just a very 
inefficient way to disable the ZIL.
A better way to do this is to "zfs set sync=disabled ..." on relevant 
filesystems.
I can't recall which build introduced this, but prior to that, you can 
set zfs://zil_disable=1 in /etc/system but that applies to all 
pools/filesystems.




The double-slash was brought to you by a bug in thunderbird. The 
original read: set zfs:zil_disable=1
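
For reference, the two forms side by side (the dataset name is just an
example):

  # per-dataset, on builds that have the sync property:
  zfs set sync=disabled tank/esxi-nfs

  # older builds: add the following to /etc/system and reboot
  # (this disables the ZIL for every pool and filesystem):
  set zfs:zil_disable=1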


--
Andrew Gabriel
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Solaris startup script location

2010-08-18 Thread Garrett D'Amore
On Wed, 2010-08-18 at 00:49 -0700, Alxen4 wrote:
> Any argument as to why?


Because a RAMDISK defeats the purpose of a ZIL, which is to provide
fast *stable storage* for data being written.  If you are using a
RAMDISK, you are not getting the non-volatility guarantees that the ZIL
is supposed to offer.  You may as well run without one.  (Which will go
fast, but at the expense of data integrity.)

- Garrett


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Solaris startup script location

2010-08-18 Thread Alxen4
Any argument as to why?
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Solaris startup script location

2010-08-18 Thread Andrew Gabriel

Alxen4 wrote:

Is there any way to run a start-up script before a non-root pool is mounted?

For example, I'm trying to use a ramdisk as a ZIL device (ramdiskadm),
so I need to create the ramdisk before the actual pool is mounted; otherwise
it complains that the log device is missing :)

For sure I can manually remove and add it by script and put the script in the
regular rc2.d location... I'm just looking for a more elegant way to do it.
  


Can you start by explaining what you're trying to do, because this may 
be completely misguided?


A ramdisk is volatile, so you'll lose it when the system goes down, causing
a failure to mount on reboot. Recreating a ramdisk on reboot won't recreate
the slog device you lost when the system went down. I expect the zpool would
fail to mount.


Furthermore, using a ramdisk as a ZIL is effectively just a very 
inefficient way to disable the ZIL.
A better way to do this is to "zfs set sync=disabled ..." on relevant 
filesystems.
I can't recall which build introduced this, but prior to that, you can 
set zfs://zil_disable=1 in /etc/system but that applies to all 
pools/filesystems.


--
Andrew Gabriel
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Solaris startup script location

2010-08-18 Thread Garrett D'Amore
On Wed, 2010-08-18 at 00:16 -0700, Alxen4 wrote:
> Is there any way to run a start-up script before a non-root pool is mounted?
> 
> For example, I'm trying to use a ramdisk as a ZIL device (ramdiskadm),
> so I need to create the ramdisk before the actual pool is mounted; otherwise
> it complains that the log device is missing :)
> 
> For sure I can manually remove and add it by script and put the script in
> the regular rc2.d location... I'm just looking for a more elegant way to do it.
> 
> 
> Thanks a lot.


You *really* don't want to use a ramdisk as your ZIL.  You'd be better
off just disabling the ZIL altogether.

- Garrett

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Kernel panic on import / interrupted zfs destroy

2010-08-18 Thread Matthew Ellison
I have a box running snv_134 that had a little boo-boo.

The problem first started a couple of weeks ago with some corruption on two
filesystems in an 11-disk 10TB raidz2 set.  I ran a couple of scrubs that
revealed a handful of corrupt files on my two de-duplicated ZFS filesystems.
No biggie.

I thought that my problems had something to do with de-duplication in 134, so
I went about creating new filesystems and copying the "good" files over to
another box.  Every time I touched the "bad" files I got a filesystem error 5
(EIO).  When trying to delete them manually, I got kernel panics - which
eventually turned into reboot loops.

I tried installing Nexenta on another disk to see if that would let me get
past the reboot loop - which it did.  I finished moving the "good" files over
(using rsync, which skipped the error 5 files, unlike cp or mv), and
destroyed one of the two filesystems.  Unfortunately, this caused a kernel
panic in the middle of the destroy operation, which then became another
panic/reboot loop.

I was able to get in with milestone=none and delete the ZFS cache, but now I
have a new problem: any attempt to import the pool results in a panic.  I have
tried from my snv_134 install, from the live CD, and from Nexenta.  I have
tried various zdb incantations (with aok=1 and zfs:zfs_recover=1 set), to no
avail - these error out after a few minutes.  I have even tried another
controller.

I now have zdb -e -bcsvL running from 134 (without aok=1); it has been running
for several hours.  Can zdb recover from this kind of situation (a
half-destroyed filesystem that panics the kernel on import)?  What is the
impact of the above zdb operation without aok=1?  Is there any likelihood of
recovering the non-affected filesystems?

Any suggestions?
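
For reference, this is roughly what I have been doing so far (the pool name
is a placeholder; the tunables are the ones mentioned above):

  # boot without importing pools: append -m milestone=none to the kernel line,
  # then move (or delete) the pool cache so nothing auto-imports on next boot
  mv /etc/zfs/zpool.cache /etc/zfs/zpool.cache.bad

  # recovery tunables added to /etc/system for the import attempts
  set aok=1
  set zfs:zfs_recover=1

  # offline walk of the (now uncached) pool's metadata, no leak checking
  zdb -e -bcsvL mypool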

Regards,

Matthew Ellison
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Solaris startup script location

2010-08-18 Thread Alxen4
Is there any way to run a start-up script before a non-root pool is mounted?

For example, I'm trying to use a ramdisk as a ZIL device (ramdiskadm),
so I need to create the ramdisk before the actual pool is mounted; otherwise
it complains that the log device is missing :)

For sure I can manually remove and add it by script and put the script in the
regular rc2.d location... I'm just looking for a more elegant way to do it.


Thanks a lot.
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss