Re: [zfs-discuss] Building an On-Site and Off-Site ZFS server, replication question

2012-10-05 Thread Frank Cusack
On Fri, Oct 5, 2012 at 1:28 PM, Ian Collins  wrote:

> I do have a lot of what would appear to be unnecessary filesystems, but
> after losing the WAN 3 days into a large transfer, a change of tactic was
> required!
>

I've recently (last year or so) gone the other way, and have made an effort
to combine filesystems.  I'm now thinking of remote replication so maybe
I'll break them up again.
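
Roughly, per-filesystem incremental replication would look something like this
(just a sketch; tank/proj, backup/proj, and the snapshot names are made up):

# zfs snapshot tank/proj@2012-10-05
# zfs send -i tank/proj@2012-10-04 tank/proj@2012-10-05 | \
    ssh remotehost zfs receive -F backup/proj

With finer-grained filesystems each send stays small, so losing the WAN
mid-transfer only costs you one filesystem's increment instead of the whole
stream.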
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Building an On-Site and Off-Site ZFS server, replication question

2012-10-05 Thread Frank Cusack
On Fri, Oct 5, 2012 at 3:17 AM, Ian Collins  wrote:

> I do have to suffer a slow, glitchy WAN to a remote server and rather than
> send stream files, I broke the data *on the remote server* into a more
> fine grained set of filesystems than I would do normally.  In this case, I
> made the directories under what would have been the leaf filesystems
> filesystems themselves.
>

Meaning you also broke the data on the LOCAL server into the same set of
more granular filesystems?  Or is it now possible to zfs send a
subdirectory of a filesystem?
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] S11 vs illumos zfs compatibility

2011-12-27 Thread Frank Cusack
So with a de facto fork (illumos) now in place, is it possible that two
zpools will report the same version yet be incompatible across
implementations?
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] S11 vs illumos zfs compatibility

2011-12-27 Thread Frank Cusack


If I "upgrade" ZFS to use the new features in Solaris 11 I will be unable
> to import my pool using the free ZFS implementation that is available in
> illumos based distributions
>

Is that accurate?  I understand if the S11 version is ahead of illumos, of
course I can't use the same pools in both places, but that is the same
problem as using an S11 pool on S10.  The author is implying a much worse
situation, that there are zfs "tracks" in addition to versions and that S11
is now on a different track and an S11 pool will not be usable elsewhere,
"ever".  I hope it's just a misrepresentation.
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Can I create a mirror for a root rpool?

2011-12-20 Thread Frank Cusack
Of course I meant 'zpool *' not 'zfs *' below.

On Tue, Dec 20, 2011 at 4:27 PM, Frank Cusack  wrote:

> On Tue, Dec 20, 2011 at 2:11 PM, Gregg Wonderly wrote:
>
>>  On 12/19/2011 8:51 PM, Frank Cusack wrote:
>>
>> If you don't detach the smaller drive, the pool size won't increase.
>> Even if the remaining smaller drive fails, that doesn't mean you have to
>> detach it.  So yes, the pool size might increase, but it won't be
>> "unexpectedly".  It will be because you detached all smaller drives.  Also,
>> even if a smaller drive is failed, it can still be attached.
>>
>> If you don't have a controller slot to connect the replacement drive
>> through, then you have to remove the smaller drive, physically.
>>
>
> Physically, yes.  By detach, I meant 'zfs detach', a logical operation.
>
>   You can, then attach the replacement drive, but will "replace" work
>> then, or must you remove and then add it because it is "the same disk"?
>>
>
> I was thinking that you leave the failed drive [logically] attached.  So,
> you don't 'zfs replace', you just 'zfs attach' your new drive.  Yes, this
> leaves the mirror in faulted condition.  You'd correct that later when you
> get a replacement smaller drive.
>
> But, as Fajar noted, just make sure autoexpand is off and you can still do
> a 'zfs replace' operation if you like (perhaps so your monitoring shuts up)
> and the pool size will not unexpectedly grow.
>
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Can I create a mirror for a root rpool?

2011-12-20 Thread Frank Cusack
On Tue, Dec 20, 2011 at 2:11 PM, Gregg Wonderly  wrote:

>  On 12/19/2011 8:51 PM, Frank Cusack wrote:
>
> If you don't detach the smaller drive, the pool size won't increase.  Even
> if the remaining smaller drive fails, that doesn't mean you have to detach
> it.  So yes, the pool size might increase, but it won't be "unexpectedly".
> It will be because you detached all smaller drives.  Also, even if a
> smaller drive is failed, it can still be attached.
>
> If you don't have a controller slot to connect the replacement drive
> through, then you have to remove the smaller drive, physically.
>

Physically, yes.  By detach, I meant 'zfs detach', a logical operation.

  You can, then attach the replacement drive, but will "replace" work then,
> or must you remove and then add it because it is "the same disk"?
>

I was thinking that you leave the failed drive [logically] attached.  So,
you don't 'zfs replace', you just 'zfs attach' your new drive.  Yes, this
leaves the mirror in faulted condition.  You'd correct that later when you
get a replacement smaller drive.

But, as Fajar noted, just make sure autoexpand is off and you can still do
a 'zfs replace' operation if you like (perhaps so your monitoring shuts up)
and the pool size will not unexpectedly grow.
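
Concretely (and these are of course zpool subcommands, as corrected above),
something like this -- a sketch with made-up device names:

# zpool get autoexpand rpool
# zpool set autoexpand=off rpool           (so the pool can't grow on its own)
# zpool attach rpool c1t0d0s0 c1t2d0s0     (attach the big drive alongside the survivor)

or, if you'd rather keep the monitoring quiet:

# zpool replace rpool c1t1d0s0 c1t2d0s0    (swap it in for the failed small drive)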
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Can I create a mirror for a root rpool?

2011-12-19 Thread Frank Cusack
If you don't detach the smaller drive, the pool size won't increase.  Even
if the remaining smaller drive fails, that doesn't mean you have to detach
it.  So yes, the pool size might increase, but it won't be "unexpectedly".
It will be because you detached all smaller drives.  Also, even if a
smaller drive is failed, it can still be attached.

It doesn't make sense for attach to do anything with partition tables, IMHO.

I *always* order the spare when I order the original drives, to have it on
hand, even for my home system.  Drive sizes change more frequently than
they fail, for me.  Sure, when I use the spare I may not be able to order a
new spare of the same size, but at least at that time I have time to
prepare and am not scrambling.

On Mon, Dec 19, 2011 at 3:55 PM, Gregg Wonderly  wrote:

>  That's why I'm asking.  I think it should always mirror the partition
> table and allocate exactly the same amount of space so that the pool
> doesn't suddenly change sizes unexpectedly and require a disk size that I
> don't have at hand, to put the mirror back up.
>
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Can I create a mirror for a root rpool?

2011-12-16 Thread Frank Cusack
You can just do fdisk to create a single large partition.  The attached
mirror doesn't have to be the same size as the first component.
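
In practice the whole dance is something like this (a sketch; c1t1d0 stands in
for the new 1.5TB disk):

# fdisk -B /dev/rdsk/c1t1d0p0      (one Solaris partition spanning the disk)
# format c1t1d0                    (interactive: SMI label, put the space you want in slice 0)
# zpool attach rpool c1t0d0s0 c1t1d0s0
# installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c1t1d0s0

The installgrub step answers Tim's question below: yes, the new side of the
mirror still needs boot blocks written to it.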

On Thu, Dec 15, 2011 at 11:27 PM, Gregg Wonderly  wrote:

> Cindy, will it ever be possible to just have attach mirror the surfaces,
> including the partition tables?  I spent an hour today trying to get a new
> mirror on my root pool.  There was a 250GB disk that failed.  I only had a
> 1.5TB handy as a replacement.  prtvtoc ... | fmthard does not work in this
> case and so you have to do the partitioning by hand, which is just silly to
> fight with anyway.
>
> Gregg
>
> Sent from my iPhone
>
> On Dec 15, 2011, at 6:13 PM, Tim Cook  wrote:
>
> Do you still need to do the grub install?
> On Dec 15, 2011 5:40 PM, "Cindy Swearingen" 
> wrote:
>
>> Hi Anon,
>>
>> The disk that you attach to the root pool will need an SMI label
>> and a slice 0.
>>
>> The syntax to attach a disk to create a mirrored root pool
>> is like this, for example:
>>
>> # zpool attach rpool c1t0d0s0 c1t1d0s0
>>
>> Thanks,
>>
>> Cindy
>>
>> On 12/15/11 16:20, Anonymous Remailer (austria) wrote:
>>
>>>
>>> On Solaris 10 If I install using ZFS root on only one drive is there a
>>> way
>>> to add another drive as a mirror later? Sorry if this was discussed
>>> already. I searched the archives and couldn't find the answer. Thank you.
>>> ___
>>> zfs-discuss mailing list
>>> zfs-discuss@opensolaris.org
>>> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
>>>
>> ___
>> zfs-discuss mailing list
>> zfs-discuss@opensolaris.org
>> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
>>
> ___
> zfs-discuss mailing list
> zfs-discuss@opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
>
>
> ___
> zfs-discuss mailing list
> zfs-discuss@opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
>
>
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Can I create a mirror for a root rpool?

2011-12-15 Thread Frank Cusack
It can still be done for USB, but you have to boot from alternate media to
attach the mirror.

On Thu, Dec 15, 2011 at 3:41 PM, Frank Cusack  wrote:

> Yes, except if your root pool is on a USB stick or removable media.
>
>
> On Thu, Dec 15, 2011 at 3:20 PM, Anonymous Remailer (austria) <
> mixmas...@remailer.privacy.at> wrote:
>
>>
>> On Solaris 10 If I install using ZFS root on only one drive is there a way
>> to add another drive as a mirror later? Sorry if this was discussed
>> already. I searched the archives and couldn't find the answer. Thank you.
>> ___
>> zfs-discuss mailing list
>> zfs-discuss@opensolaris.org
>> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
>>
>
>
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Can I create a mirror for a root rpool?

2011-12-15 Thread Frank Cusack
Yes, except if your root pool is on a USB stick or removable media.

On Thu, Dec 15, 2011 at 3:20 PM, Anonymous Remailer (austria) <
mixmas...@remailer.privacy.at> wrote:

>
> On Solaris 10 If I install using ZFS root on only one drive is there a way
> to add another drive as a mirror later? Sorry if this was discussed
> already. I searched the archives and couldn't find the answer. Thank you.
> ___
> zfs-discuss mailing list
> zfs-discuss@opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
>
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] does log device (ZIL) require a mirror setup?

2011-12-11 Thread Frank Cusack
Corruption?  Or just loss?

On Sun, Dec 11, 2011 at 1:27 PM, Matt Breitbach
wrote:

> I would say that it's a "highly recommended".  If you have a pool that
> needs
> to be imported and it has a faulted, unmirrored log device, you risk data
> corruption.
>
> -Matt Breitbach
>
> -Original Message-
> From: zfs-discuss-boun...@opensolaris.org
> [mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of Thomas Nau
> Sent: Sunday, December 11, 2011 1:28 PM
> To: zfs-discuss@opensolaris.org
> Subject: [zfs-discuss] does log device (ZIL) require a mirror setup?
>
> Dear all
> We use a STEC ZeusRAM as a log device for a 200TB RAID-Z2 pool.
> As they are supposed to be read only after a crash or when booting and
> those nice things are pretty expensive I'm wondering if mirroring
> the log devices is a "must / highly recommended"
>
> Thomas
> ___
> zfs-discuss mailing list
> zfs-discuss@opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
>
>
> ___
> zfs-discuss mailing list
> zfs-discuss@opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
>
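
For reference, either form is a one-liner (a sketch, with made-up device names):

# zpool add tank log mirror c4t0d0 c4t1d0    (create a mirrored log from the start)
# zpool attach tank c4t0d0 c4t1d0            (turn an existing single log device into a mirror)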
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] virtualbox rawdisk discrepancy

2011-11-29 Thread Frank Cusack
On Tue, Nov 29, 2011 at 10:39 PM, Fajar A. Nugraha  wrote:

> On Wed, Nov 30, 2011 at 1:25 PM, Frank Cusack  wrote:
> > I haven't been able to get this working.  To keep it simpler, next I am
> > going to try usbcopy of the live USB image in the VM, and see if I can
> boot
> > real hardware from the resultant live USB stick.
>
> To be clear, I'm talking about two things:
> (1) live USB, created from Live CD
> (2) solaris installed on USB
>

yup


>
> This first one works on real hardware, but not on a VM. The cause is
> simple: seems like a boot code somewhere searches ONLY removable media
> for live solaris image. Since you need to map the USB disk as regular
> disk (SATA/IDE/SCSI) in a VM to be able to boot from it, you won't be
> able to boot live usb on a VM.
>

yup


>
> The second one works on both real hardare and VM, BUT with a
> prequisite that you have to export-import rpool first on that
> particular system. Unless you already have solaris installed, this
> usually means you need to boot with a live cd/usb first.
>

yup.  I didn't quite do that; what I did was exit to a shell after installing
(from the install CD) onto the USB.  Then, in that shell from the install CD,
I did the zpool export.  The resultant USB stick is still unbootable for me on
real hardware.

During this install, the USB is seen as a SATA disk.  I tried to install
onto it as a pass through USB device, but a python script in the installer
that tries to label the disk fails.  This is likely because it has to
invoke 'format -e' instead of 'format' in order to see the USB disk in the
first place.  When you invoke the 'label' command, if you have invoked
'format' as 'format -e' you get prompted whether you want an SMI or EFI
label.  The python script doesn't know about this and wants to just do 'y'
or 'n'.

In S10, I have no problem installing on real hardware onto a USB stick
(seen as USB), so I imagine this is just a deficiency of the new S11
installer.

Anyway, the point of that story is that I tried to install onto it as a
USB device, instead of as a SATA device, in case something special happens
to make USB bootable that doesn't happen when the S11 installer thinks it's
a SATA device.  But I was unable to complete that test.


> I'm not sure what you mean by "usbcopy of the live USB image in the
> VM, and see if I can boot real hardware from the resultant live USB
> stick.". If you're trying to create (1), it'd be simpler to just use
> live cd on real hardware, and if necessary create live usb there (MUCH
> faster than on a VM). If you mean (2), then it won't work unless you
> boot with live cd/usb first.
>

I meant (1), because I think this is an easier case to try out than (2).
(1) should DEFINITELY work, IMHO.

I don't use live cd on real hardware because that doesn't meet my objective
of being able to create a removable boot drive, created in a VM, that I can
boot on real hardware if I wanted to.  I mean, I *could* do it that way,
but I want to be able to do this in a 100% VM environment.


>
> Oh and for reference, instead of usbcopy, I prefer using this method:
> http://blogs.oracle.com/jim/entry/how_to_create_a_usb
>

Thanks, I'll check it out.


>
> --
> Fajar
>
> >
> > On Tue, Nov 22, 2011 at 5:25 AM, Fajar A. Nugraha 
> wrote:
> >>
> >> On Tue, Nov 22, 2011 at 7:32 PM, Jim Klimov  wrote:
> >> >> Or maybe not.  I guess this was findroot() in sol10 but in sol11 this
> >> >> seems to have gone away.
> >> >
> >> > I haven't used sol11 yet, so I can't say for certain.
> >> > But it is possible that the default boot (without findroot)
> >> > would use the bootfs property of your root pool.
> >>
> >> Nope.
> >>
> >> S11's grub specifies bootfs for every stanza in menu.lst. bootfs pool
> >> property is no longer used.
> >>
> >> Anyway, after some testing, I found out you CAN use vbox-installed s11
> >> usb stick on real notebook (enough hardware difference there). The
> >> trick is you have to import-export the pool on the system you're going
> >> to boot the stick on. Meaning, you need to have S11 live cd/usb handy
> >> and boot with that first before booting using your disk.
> >>
> >> --
> >> Fajar
> >> ___
> >> zfs-discuss mailing list
> >> zfs-discuss@opensolaris.org
> >> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
> >
> >
>
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] virtualbox rawdisk discrepancy

2011-11-29 Thread Frank Cusack
I haven't been able to get this working.  To keep it simpler, next I am
going to try usbcopy of the live USB image in the VM, and see if I can boot
real hardware from the resultant live USB stick.

On Tue, Nov 22, 2011 at 5:25 AM, Fajar A. Nugraha  wrote:

> On Tue, Nov 22, 2011 at 7:32 PM, Jim Klimov  wrote:
> >> Or maybe not.  I guess this was findroot() in sol10 but in sol11 this
> >> seems to have gone away.
> >
> > I haven't used sol11 yet, so I can't say for certain.
> > But it is possible that the default boot (without findroot)
> > would use the bootfs property of your root pool.
>
> Nope.
>
> S11's grub specifies bootfs for every stanza in menu.lst. bootfs pool
> property is no longer used.
>
> Anyway, after some testing, I found out you CAN use vbox-installed s11
> usb stick on real notebook (enough hardware difference there). The
> trick is you have to import-export the pool on the system you're going
> to boot the stick on. Meaning, you need to have S11 live cd/usb handy
> and boot with that first before booting using your disk.
>
> --
> Fajar
> ___
> zfs-discuss mailing list
> zfs-discuss@opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
>
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] virtualbox rawdisk discrepancy

2011-11-21 Thread Frank Cusack
On Mon, Nov 21, 2011 at 10:06 PM, Frank Cusack  wrote:

> grub does need to have an idea of the device path, maybe in vbox it's seen
> as the 3rd disk (c0t2), so the boot device name written to grub.conf is
> "disk3" (whatever the terminology for that is in grub-speak), but when I
> boot on the Sun hardware it is seen as "disk0" and this just doesn't work.
> If it's that easy that'd be awesome, all I need is an alternate grub entry.
>

Or maybe not.  I guess this was findroot() in sol10 but in sol11 this seems
to have gone away.

Also, I was wrong about the disk target.  When I do the install I configure
the USB stick at disk0, seen by Solaris as c3t0, and no findroot() line
gets written to menu.lst.  Maybe it needs that line when it boots as a USB
stick on real hardware?

I'll try import/export and a reconfigure boot when I get a chance.
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] virtualbox rawdisk discrepancy

2011-11-21 Thread Frank Cusack
On Mon, Nov 21, 2011 at 9:59 PM, Fajar A. Nugraha  wrote:

> On Tue, Nov 22, 2011 at 12:53 PM, Frank Cusack  wrote:
> > On Mon, Nov 21, 2011 at 9:31 PM, Fajar A. Nugraha 
> wrote:
> >>
> >> On Tue, Nov 22, 2011 at 12:19 PM, Frank Cusack 
> wrote:
> >> >
> >> > If we ignore the vbox aspect of it, and assume real hardware with real
> >> > devices, of course you can install on one x86 hardware and move the
> >> > drive to
> >> > boot on another x86 hardware.  This is harder on SPARC (b/c hostid and
> >> > zfs
> >> > mount issues) but still possible.
> >>
> >> Have you tried? :D
> >
> > Yes, I do this all the time.  Between identical hardware, though.  It
> used
> > to be tricky when you had to know actual device paths and/or /dev/dsk/*
> > names but with zfs that issue has gone away and it doesn't matter if
> drives
> > show up at different locations when moving the boot drive around.
> >
>
> Ah, you're more experienced that I am then. In that case you might want to
> try:
> - boot with live CD on your sun box
> - plug your usb drive there
> - force-import then export your usb root pool (to eliminate any disk
> path or ID problem)
>

ah, good idea
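
Roughly, from the live CD shell, that would be (a sketch; assuming the stick's
pool is named rpool):

# zpool import -f rpool     (force past the foreign hostid / device paths)
# zpool export rpool        (write clean labels before pulling the stick)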


> - try boot from usb drive
> - if the above still doesn't work, try running installgrub:
>
> http://www.solarisinternals.com/wiki/index.php/ZFS_Troubleshooting_Guide#ZFS_Root_Pool_and_Boot_Issues
>

grub does need to have an idea of the device path, maybe in vbox it's seen
as the 3rd disk (c0t2), so the boot device name written to grub.conf is
"disk3" (whatever the terminology for that is in grub-speak), but when I
boot on the Sun hardware it is seen as "disk0" and this just doesn't work.
If it's that easy that'd be awesome, all I need is an alternate grub entry.

> I'm still trying to install sol11 on USB, but it's dreadfully slow on
> my system (not sure why)
>

Same here, sustained write to a USB stick is painfully slow.  Normal
operation is fine though.
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] virtualbox rawdisk discrepancy

2011-11-21 Thread Frank Cusack
On Mon, Nov 21, 2011 at 9:31 PM, Fajar A. Nugraha  wrote:

>
> On Tue, Nov 22, 2011 at 12:19 PM, Frank Cusack  wrote:
> >
> > If we ignore the vbox aspect of it, and assume real hardware with real
> > devices, of course you can install on one x86 hardware and move the
> drive to
> > boot on another x86 hardware.  This is harder on SPARC (b/c hostid and
> zfs
> > mount issues) but still possible.
>
> Have you tried? :D
>

Yes, I do this all the time.  Between identical hardware, though.  It used
to be tricky when you had to know actual device paths and/or /dev/dsk/*
names but with zfs that issue has gone away and it doesn't matter if drives
show up at different locations when moving the boot drive around.
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] virtualbox rawdisk discrepancy

2011-11-21 Thread Frank Cusack
On Mon, Nov 21, 2011 at 9:04 PM, Fajar A. Nugraha  wrote:

> So basically the question is if you install solaris on one machine,
> can you move the disk (in this case the usb stick) to another machine
> and boot it there, right?
>

Yes, but one of the machines is a virtual machine.

The answer, as far as I know, is NO, you can't. Of course, I could be
> wrong though (and in this case I'll be happy if I'm wrong :D ). IIRC
> the only supported way to move (or clone) solaris installation is by
> using flash archive (flar), which (now) should also work on zfs.
>

If we ignore the vbox aspect of it, and assume real hardware with real
devices, of course you can install on one x86 hardware and move the drive
to boot on another x86 hardware.  This is harder on SPARC (b/c hostid and
zfs mount issues) but still possible.

The weird thing here is that the install hardware is a virtual machine.
One thing I know is odd is that the USB drive is seen to the virtual
machine as a SATA drive but when moved to the real hardware it's seen as a
USB drive.  There may be something else going on here that someone more
familiar with vbox may know more about.

Since this works seamlessly when the zpool in question is just a data pool,
I'm wondering why it doesn't work when it's a boot drive.

One thing I noticed is that when mounting it as a data drive, the real
hardware sees the type of disk (between <...> in 'format' output) as
ATA-VBOX.  Clearly that info must have been written when the pool was
created on vbox, and maybe some hardware info was encoded that doesn't
match up when it's booted as a real USB stick.  This doesn't seem to matter when
it's a data pool but maybe this is tripping it up during boot.
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] virtualbox rawdisk discrepancy

2011-11-21 Thread Frank Cusack
I have a Sun machine running Solaris 10, and a Vbox instance running
Solaris 11 11/11.  The vbox machine has a virtual disk pointing to
/dev/disk1 (rawdisk), seen in sol11 as c0t2.

If I create a zpool on the Sun s10 machine, on a USB stick, I can take that
USB stick and access it through the vbox virtual disk.  Just as expected.

If I boot vbox from the s11 ISO, and install s11 onto USB stick (via the
virtual device), I can boot the Sun machine from it, which puts up the grub
menu but then fails to boot Solaris.  There's some kind of error which
might not be making it to /SP/console, but after grub it seems to hang for
a few seconds then reboot.

The vbox happens to be running on Mac OS 10.6.x.

This *should* work, yes?  Any thoughts as to why it doesn't?

Not that this should matter, but on the vbox machine, sol11 sees the USB
stick as a normal SATA hard drive, e.g. when I run 'format' it is in the
list of drives.  On the Sun machine, it is seen as a removable drive by
s10, e.g. I have to run 'format -e' to see the drive.
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS ... open source moving forward?

2010-12-18 Thread Frank Cusack

On 12/16/10 11:32 AM +0100 Joerg Schilling wrote:

 Note that while there exist
numerous papers from lawyers that consistently explain which parts of
the GPLv2 are violating US law and thus are void,


Can you elaborate?
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS ... open source moving forward?

2010-12-18 Thread Frank Cusack

On 12/16/10 9:11 AM -0500 Linder, Doug wrote:

The only thing I'll add is that, as I said, I really don't care at all
about licenses.


Then you have no room to complain or even suggest a specific license!


 When it comes to licenses, to me (and, I suspect, the
vast majority of other OSS users), "GPL" is synonymous with "open
source".  Is that correct?  No.  Am I aware that plenty of other licenses
exist?  Yes.  Is the issue important?  Sure.


Agreed.


 Do I have time or interest
to worry about niggly little details?  No.


Well the problem with licenses is that they are decidedly NOT niggly
little details.  You should consider re-evaluating what you have time
or interest for, if you care about the things you say (such as maximum
and flexible use of the products you are using).


 All I want is to be able to
use the best technology in the ways that are most useful to me without
artificial restrictions.  Anything that advances that, I'm for.


CDDL is close to that, much closer than GPL.

-frank
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS ... open source moving forward?

2010-12-18 Thread Frank Cusack

On 12/16/10 10:24 AM -0500 Linder, Doug wrote:

Tim Cook wrote:


"Claiming you'd start paying for Solaris if they gave you ZFS for free
in Linux is absolutely ridiculous."


*Start* paying?  You clearly have NO idea what it costs to run Solaris in
a production environment with support.


In my experience, it's less than RedHat.  Also TCO is less since Solaris
offers more to begin with.
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Solaris 10u9

2010-09-08 Thread Frank Cusack

On 9/8/10 9:32 AM -0400 Edward Ned Harvey wrote:

From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of David Magda

The 9/10 Update appears to have been released. Some of the more
noticeable
ZFS stuff that made it in:

More at:

http://docs.sun.com/app/docs/doc/821-1840/gijtg


Awesome!  Thank you.  :-)
Log device removal in particular, I feel is very important.  (Got bit by
that one.)


suh-weet.  Hopefully zpool attach on removable media (pen drives) works
in U9 as well.
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Opensolaris is apparently dead

2010-08-19 Thread Frank Cusack

On 8/19/10 10:48 AM +0200 Joerg Schilling wrote:

1) The OpenSource definition
http://www.opensource.org/docs/definition.php section 9 makes it very
clear that an OSS license must not restrict other software and must not
prevent bundling different works under different licenses on one medium.

2) given the fact that the GPL is an approved OSS license, it obviously
complies with the OSS definition.

3) as a result, any GPL interpretation that is based on the assumption
that a separate distribution would fix problems is wrong.


I don't disagree with you, but 1&2 do not lead to 3.  1 does not even
necessarily lead to 2.

OSI/OSS is not definitive.  A license is not open source because of
its approval by OSI and it is not not-open source because of its
absence in OSI.  For licenses that are approved, it's still possible
that OSI made a mistake (because licenses are complicated things
after all).

You cannot depend on OSI, which has no legal standing, to back up any
claim of what a given license must or must not support.  In absence
of case law, the only definitive measure of a license is the license
itself.

Even what the FSF may say about the GPL isn't necessarily the case.

In (3) you are talking about a GPL interpretation and trying to say
something definitive about it based on what is just someone else's
(OSI's) interpretation.
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS development moving behind closed doors

2010-08-18 Thread Frank Cusack

On 8/18/10 3:58 PM -0400 Linder, Doug wrote:

Erik Trimble wrote:


That said, stability vs new features has NOTHING to do with the OSS
development model.  It has everything to do with the RELEASE model.
[...]
All that said, using the OSS model for actual *development* of an
Operating System is considerably superior to using a closed model. For
reasons I outlined previously in a post to opensolaris-discuss.


I didn't mean to imply there was anything wrong with the OSS
release-early-and-often model.


I also didn't mean to imply Solaris was creaky or wrong or bad
compared to OpenSolaris.  It has different requirements.

But I did mean that folks who want the latest and greatest are not
the same folks that want stability.  So people using OpenSolaris
are not the same people using Solaris.  (Of course there are shops
where both are used to different ends, but one is not a gateway
to the other.)

I agree with Erik, there is an upgrade path, but that's just the
natural incorporation of OpenSolaris features into Solaris (same
as existed before, just "OpenSolaris" wasn't something available
publicly and widely).  That's not the same as migrating to OpenSolaris.
When today's features are in Solaris, OpenSolaris will have newer
shinier features.
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Opensolaris is apparently dead

2010-08-18 Thread Frank Cusack

On 8/18/10 9:29 AM -0700 Ethan Erchinger wrote:

Edward wrote:

I have had wonderful support, up to and including recently, on my Sun
hardware.


I wish we had the same luck.  We've been handed off between 3 different
"technicians" at this point, each one asking for the same information.


Do they at least phrase it as "Can you verify the problem?", the way
that call center operators ask you for the information you've already
entered via the automated attendant? :)
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Opensolaris is apparently dead

2010-08-17 Thread Frank Cusack

On 8/17/10 3:17 PM -0500 Tim Cook wrote:

If Oracle really wants to keep it out of Linux, that means it wants
to keep it out of FreeBSD also.  Either way, to keep it out it needs
to make it closed source, and as they say, the genie is already out
of the bottle.

I don't agree that there's a licensing problem, but that doesn't matter.
Distributions, which is how nearly EVERYONE uses Linux, are free to
include zfs on their own.  All the major distributions already patch
the kernel heavily.



FreeBSD has nowhere near the installed base  of Linux.  There is also
absolutely 0 "Enterprise" support for FreeBSD.  ZFS will not change that.
 It is not a threat to Oracle.


What I don't understand is why Linux is a threat to Oracle then.
Oracle runs on both Linux and Solaris.  The market for ZFS-backed
disk arrays is small, and again, that genie is already out of the
bottle anyway.

Have you dealt with RedHat "Enterprise" support?  lol.

The "enterprise" is going to continue to want Oracle on Solaris.
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Opensolaris is apparently dead

2010-08-17 Thread Frank Cusack

On 8/17/10 3:31 PM +0900 BM wrote:

On Tue, Aug 17, 2010 at 5:11 AM, Andrej Podzimek 
wrote:

Disclaimer: I use Reiser4


A "Killer FS"™. :-)


LOL
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Opensolaris is apparently dead

2010-08-17 Thread Frank Cusack

On 8/17/10 9:14 AM -0400 Ross Walker wrote:

On Aug 16, 2010, at 11:17 PM, Frank Cusack 
wrote:


On 8/16/10 9:57 AM -0400 Ross Walker wrote:

No, the only real issue is the license and I highly doubt Oracle will
re-release ZFS under GPL to dilute its competitive advantage.


You're saying Oracle wants to keep zfs out of Linux?


I would if I were them, wouldn't you?


I'm not sure either way.

If Oracle really wants to keep it out of Linux, that means it wants
to keep it out of FreeBSD also.  Either way, to keep it out it needs
to make it closed source, and as they say, the genie is already out
of the bottle.

I don't agree that there's a licensing problem, but that doesn't matter.
Distributions, which is how nearly EVERYONE uses Linux, are free to
include zfs on their own.  All the major distributions already patch
the kernel heavily.

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Opensolaris is apparently dead

2010-08-16 Thread Frank Cusack

On 8/16/10 9:57 AM -0400 Ross Walker wrote:

No, the only real issue is the license and I highly doubt Oracle will
re-release ZFS under GPL to dilute its competitive advantage.


You're saying Oracle wants to keep zfs out of Linux?
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS development moving behind closed doors

2010-08-15 Thread Frank Cusack

On 8/14/10 10:18 PM -0700 Richard Elling wrote:

On Aug 13, 2010, at 7:06 PM, Frank Cusack wrote:

Interesting POV, and I agree.  Most of the many "distributions" of
OpenSolaris had very little value-add.  Nexenta was the most interesting
and why should Oracle enable them to build a business at their expense?



Markets dictate behaviour. Oracle has clearly stated their goal of
focusing the Sun-acquired assets at the Fortune-500 market.  Nexenta has
a different  market -- the rest of the world. There is plenty of room for
both to be successful.  -- richard


Great point.

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS development moving behind closed doors

2010-08-14 Thread Frank Cusack

On 8/15/10 12:39 AM +0100 Kevin Walker wrote:

and Oracle are very, very greedy...


Let's not get all soft about OpenSolaris now ... all public companies
are very, very greedy.  They exist solely to make money.  It's awesome
that they make things that are useful, but it's just a way to meet
the main objective: make money and lots of it.  In fact, as much as
they possibly can.

Sun didn't open source Solaris out of the goodness of its heart or some
misguided CSR program.  They did it because they were desperate.  Sun's
business plan happened to be helped along by open sourcing Solaris, but
that doesn't make Sun less greedy.

Oracle: very, very greedy
Apple: very, very greedy
Microsoft: very, very greedy
Sun: [was] very, very greedy (just not good at it)
Fortune 1000: very, very greedy
...
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Opensolaris is apparently dead

2010-08-14 Thread Frank Cusack

On 8/14/10 7:58 AM -0500 Russ Price wrote:

My guess is that the theoretical Solaris Express 11 will be crippled by
any or all of: missing features, artificial limits on functionality, or a
restrictive license. I consider the latter most likely, much like the OTN


On 8/14/10 3:15 PM -0400 Dave Pooser wrote:

enterprise-grade ZFS. Speaking for myself, if Solaris 11 doesn't include
COMSTAR I'm going to have to take a serious look at another alternative
for our show storage towers


Wow, what leads you guys to even imagine that S11 wouldn't contain
comstar, etc.?  *Of course* it will contain most of the bits that
are current today in OpenSolaris.

Licensing, yes, I wouldn't trust Oracle in that department.  They don't
care so much about Solaris itself as they do about Oracle on Solaris.
Plenty of companies run Solaris/Oracle almost as an appliance, with
very little additional Solaris.  I'm sure Oracle is happy to continue
or even promote that, and clearly Solaris will now be even more of
a preferred platform for Oracle than ever.

On 8/14/10 7:58 AM -0500 Russ Price wrote:

For me, Solaris had zero mindshare since its beginning, on account of
being prohibitively expensive. When OpenSolaris came out, I basically


Very true, early on, but Solaris became free (for limited uses, but enough
to test it) quite a long time before OpenSolaris was ever even born.  Then
it became "very free", maybe a year or 2 before OpenSolaris was launched?
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Opensolaris is apparently dead

2010-08-14 Thread Frank Cusack

On 8/13/10 11:21 PM -0400 Edward Ned Harvey wrote:

From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Frank Cusack

I haven't met anyone who uses Solaris because of OpenSolaris.


What rock do you live under?

Very few people would bother paying for solaris/zfs if they couldn't try
it for free and get a good taste of what it's valuable for.


I also don't know anyone who pays for Solaris.  It's already free and you
can already try it for free.

What's your point?
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS development moving behind closed doors

2010-08-14 Thread Frank Cusack

On 8/13/10 8:56 PM -0600 Eric D. Mudama wrote:

On Fri, Aug 13 at 19:06, Frank Cusack wrote:

Interesting POV, and I agree.  Most of the many "distributions" of
OpenSolaris had very little value-add.  Nexenta was the most interesting
and why should Oracle enable them to build a business at their expense?


These distributions are, in theory, the "gateway drug" where people
can experiment inexpensively to try out new technologies (ZFS, dtrace,
crossbow, comstar, etc.) and eventually step up to Oracle's "big iron"
as their business grows.


I've never understood how OpenSolaris was supposed to get you to Solaris.
OpenSolaris is for enthusiasts and great great folks like Nexenta.
Solaris lags so far behind it's not really an upgrade path.
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS development moving behind closed doors

2010-08-13 Thread Frank Cusack

On 8/14/10 4:01 AM +0700 "C. Bergström" wrote:

Gary Mills wrote:

If this information is correct,

http://opensolaris.org/jive/thread.jspa?threadID=133043

further development of ZFS will take place behind closed doors.
Opensolaris will become the internal development version of Solaris
with no public distributions.  The community has been abandoned.


It was a community of system administrators and nearly no developers.
While this may make big news the real impact is probably pretty small.


I agree!


Source code updates will get tossed over the fence and developer partners
(Intel) will still have access to onnv-gate.

In a way i see this as a very good thing.  It will not *force* the

^^^
You must have meant "now"?


existing (small) community of companies and developers to band together
to actually work together.  From there the real open source momentum can
happen instead of everyone depending on Sun/Oracle to give them a free
lunch.  The first step that I've been adamant about is making it easier
for developers to play and get their hands on it..  If "we" can enable
that it'll swing things around regardless of what mega-corp does or
doesn't do...


Interesting POV, and I agree.  Most of the many "distributions" of
OpenSolaris had very little value-add.  Nexenta was the most interesting
and why should Oracle enable them to build a business at their expense?

-frank
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Opensolaris is apparently dead

2010-08-13 Thread Frank Cusack

On 8/13/10 3:39 PM -0500 Tim Cook wrote:

Quite frankly, I think there will be an even faster decline of Solaris
installed base after this move.  I know I have no interest in pushing it
anywhere after this mess.


I haven't met anyone who uses Solaris because of OpenSolaris.
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Who owns the dataset?

2010-07-16 Thread Frank Cusack

On 7/16/10 4:33 PM -0700 Johnson Earls wrote:

On 07/16/10 10:30 AM, Lori Alt wrote:

You can also run through the zones, doing 'zonecfg -z  info'
commands to look for datasets delegated to each zone.


That's not necessarily the current owner though, is it?
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Legality and the future of zfs...

2010-07-16 Thread Frank Cusack

On 7/16/10 3:07 PM -0500 David Dyer-Bennet wrote:


On Fri, July 16, 2010 14:07, Frank Cusack wrote:

On 7/16/10 12:02 PM -0500 David Dyer-Bennet wrote:

It would be nice to have applications request to be notified
before a snapshot is taken, and when those that have requested
notification have acknowledged that they're ready, the snapshot
would be taken; and then another notification sent that it was
taken.  Prior to indicating they were ready, the apps could
have achieved a logically consistent on disk state.  That
would eliminate the need for (for example) separate database
backups, if you could have a snapshot with the database on it
in a consistent state.


Any software dependent on cooperating with the filesystem to ensure that
the files are consistent in a snapshot fails the cord-yank test (which
is
equivalent to the "processor explodes" test and the "power supply bursts
into flames" test and the "disk drive shatters" test and so forth).  It
can't survive unavoidable physical-world events.


It can, if said software can roll back to the last consistent state.
That may or may not be "recent" wrt a snapshot.  If an application is
very active, it's possible that many snapshots may be taken, none of
which are actually in a state the application can use to recover from.
Rendering snapshots much less effective.


Wait, if the application can in fact survive the "cord pull" test then by
definition of "survive", all the snapshots are useful.


Useful, yes, but you missed my point about recency.  They may not be as
useful as they could be, and depending on how data changes older data or
transactions may be unrecoverable due to an inconsistent snapshot.


 They'll be
everything consistent that was committed to disk by the time of the yank
(or snapshot); which, it seems to me, is the very best that anybody could
hope for.


This is true only if transactions are journaled somehow, and thus a snapshot
could return the application to its current state -1.


Also, just administratively, and perhaps legally, it's highly desirable
to know that the time of a snapshot is the actual time that application
state can be recovered to or referenced to.


Maybe, but since that's not achievable for your core corporate asset (the
database), I think of it as a pipe dream rather than a goal.


Ah, because we can't achieve this ideal for some very critical application,
we shouldn't bother getting there for other applications.


Also, if an application cannot survive a cord-yank test, it might be
even more highly desirable that snapshots be a stable state from which
the application can be restarted.


If it cannot survive a cord-yank test, it should not be run, ever, by
anybody, for any purpose more important than playing a game.


Nice ideal world you live in ... wish I were there.

It's not as if a notification mechanism somehow makes things worse for
applications that don't use it.
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Legality and the future of zfs...

2010-07-16 Thread Frank Cusack

On 7/16/10 12:02 PM -0500 David Dyer-Bennet wrote:

It would be nice to have applications request to be notified
before a snapshot is taken, and when that have requested
notification have acknowledged that they're ready, the snapshot
would be taken; and then another notification sent that it was
taken.  Prior to indicating they were ready, the apps could
have achieved a logically consistent on disk state.  That
would eliminate the need for (for example) separate database
backups, if you could have a snapshot with the database on it
in a consistent state.


Any software dependent on cooperating with the filesystem to ensure that
the files are consistent in a snapshot fails the cord-yank test (which is
equivalent to the "processor explodes" test and the "power supply bursts
into flames" test and the "disk drive shatters" test and so forth).  It
can't survive unavoidable physical-world events.


It can, if said software can roll back to the last consistent state.
That may or may not be "recent" wrt a snapshot.  If an application is
very active, it's possible that many snapshots may be taken, none of
which are actually in a state the application can use to recover from.
Rendering snapshots much less effective.

Also, just administratively, and perhaps legally, it's highly desirable
to know that the time of a snapshot is the actual time that application
state can be recovered to or referenced to.

Also, if an application cannot survive a cord-yank test, it might be
even more highly desirable that snapshots be a stable state from which
the application can be restarted.

A notification mechanism is pretty desirable, IMHO.
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Legality and the future of zfs...

2010-07-15 Thread Frank Cusack

On 7/15/10 9:49 AM +0900 BM wrote:

On Thu, Jul 15, 2010 at 5:57 AM, Paul B. Henson  wrote:

ZFS is great. It's pretty much the only reason we're running Solaris.


Well, if this is the only reason, then run FreeBSD instead. I run
Solaris because of the kernel architecture and other things that Linux
or any BSD simply can not do. For example, running something on a port
below 1000, but as a true non-root (i.e. no privileges dropping, but
straight-forward run by a non-root).


Um, there's plenty of things Solaris can do that Linux and FreeBSD can't
do, but non-root privileged ports is not one of them.
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] OCZ Vertex 2 Pro performance numbers

2010-06-28 Thread Frank Cusack

On 6/26/10 9:47 AM -0400 David Magda wrote:

Crickey. Who's the genius who thinks of these URLs?


SEOs
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS pool - label missing on invalid

2010-06-18 Thread Frank Cusack

On 6/18/10 11:25 PM -0700 Cott Lang wrote:

By detach, do you mean that you ran 'zpool detach'?


Yes.


'zpool detach' clears the information from the disk that zfs needs to
reimport the disk.  If you have a late enough version of opensolaris
you should instead run 'zpool split'.  Otherwise, shut down as normal
(ie, don't tell zfs you are about to do anything different) and then
just boot with the one disk, now in degraded state but otherwise ok.

Like you, I learned this the hard way!
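
For anyone finding this later, the split looks something like this (a sketch;
pool and device names are made up):

# zpool split tank tank2 c0t1d0     (c0t1d0 leaves the mirror as its own importable pool "tank2")
# zpool import tank2                (later, here or on another box)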

-frank
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS pool - label missing on invalid

2010-06-18 Thread Frank Cusack

On 6/18/10 9:46 PM -0700 Cott Lang wrote:

I split a mirror to reconfigure and recopy it. I detached one drive,
reconfigured it ... all after unplugging the remaining pool drive during
a shutdown to verify no accidents could happen.


By detach, do you mean that you ran 'zpool detach'?
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] mount zfs boot disk on another server?

2010-06-16 Thread Frank Cusack

Should naming the root pool something unique (rpool-nodename) be a
best practice?
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Please trim posts

2010-06-11 Thread Frank Cusack

On 6/10/10 11:07 PM -0700 Dave Koelmeyer wrote:

I trimmed, and then got complained at by a mailing list user that the
context of what I was replying to was missing. Can't win :P


There's a big difference between trim and remove.

The worst is when people quote 3-4 paragraphs, respond inline to ONE of
the points, then leave the rest of a long email, signature and all,
quoted at the end.  ugh.

This list is the worst one that I am on for that kind of behavior.  Makes
me wonder how those folks can manage complex storage systems when they
cannot even organize their thoughts efficiently.
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Migrating to ZFS

2010-06-04 Thread Frank Cusack

On 6/4/10 11:46 AM -0700 Brandon High wrote:

Be aware that Solaris on x86 has two types of partitions. There are
fdisk partitions (c0t0d0p1, etc) which is what gparted, windows and
other tools will see. There are also Solaris partitions or slices
(c0t0d0s0). You can create or edit these with the 'format' command in
Solaris. These are created in an fdisk partition that is the SOLARIS2
type. So yeah, it's a partition table inside a partition table.


That's not correct, at least not technically.  Solaris *slices* within
the Solaris fdisk partition, are not also known as partitions.  They
are simply known as slices.  By calling them "Solaris partitions or
slices" you are just adding confusion.
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] one more time: pool size changes

2010-06-03 Thread Frank Cusack

On 6/3/10 12:06 AM -0400 Roman Naumenko wrote:

I think there is a difference. Just quickly checked netapp site:

Adding new disks to a RAID group If a volume has more than one RAID
group, you can specify the RAID group to which you are adding disks.


hmm that's a surprising feature to me.

I remember, and this was a few years back but I don't see why it would
be any different now, we were trying to add drives 1-2 at a time to
medium-sized arrays (don't buy the disks until we need them, to hold
onto cash), and the Netapp performance kept going down down down.  We
eventually had to borrow an array from Netapp to copy our data onto
to rebalance.  Netapp told us explicitly, make sure to add an entire
shelf at a time (and a new raid group, obviously, don't extend any
existing group).
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] one more time: pool size changes

2010-06-03 Thread Frank Cusack

On 6/3/10 8:45 AM +0200 Juergen Nickelsen wrote:

Richard Elling  writes:


And some time before I had suggested to a my buddy zfs for his new
home storage server, but he turned it down since there is no
expansion available for a pool.


Heck, let him buy a NetApp :-)


Definitely a possibility, given the availability and pricing of
oldish NetApp hardware on eBay.


Not really.  Software license is invalid on resale, and you can't replace
a failed drive with a generic drive so at some point you must buy an
Ontap license = $$$.
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] one more time: pool size changes

2010-06-03 Thread Frank Cusack

On 6/2/10 11:10 PM -0400 Roman Naumenko wrote:

Well, I explained it not very clearly. I meant the size of a raidz array
can't be changed.
For sure zpool add can do the job with a pool. Not with a raidz
configuration.


Well in that case it's invalid to compare against Netapp since they
can't do it either (seems to be the consensus on this list).  Neither
zfs nor Netapp (nor any product) is really designed to handle adding
one drive at a time.  Normally you have to add an entire shelf, and
if you're doing that it's better to add a new vdev to your pool.
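
For the archives, adding a shelf as a new vdev is just (a sketch with made-up
device names):

# zpool add tank raidz2 c5t0d0 c5t1d0 c5t2d0 c5t3d0 c5t4d0 c5t5d0

The existing raidz vdev keeps its width; the pool simply stripes across the
new one as well.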
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] one more time: pool size changes

2010-06-02 Thread Frank Cusack

On 6/2/10 3:54 PM -0700 Roman Naumenko wrote:

And some time before I had suggested to a my buddy zfs for his new home
storage server, but he turned it down since there is no expansion
available for a pool.


That's incorrect.  zfs pools can be expanded at any time.  AFAIK zfs has
always had this capability.


Nevertheless, NetApp appears to have such feature as I learned from my
co-worker. It works with some restrictions (you have to zero disks before
adding, and rebalance the aggregate after and still without perfect
distribution) - but Ontap is able to do aggregates expansion
nevertheless.


I wasn't aware that Netapp could rebalance.  Is that a true Netapp
feature, or is it a matter of copying the data "manually"?  zfs doesn't
have a cleaner process that rebalances, so for zfs you would have to
copy the data to rebalance the pool.  I certainly wouldn't make my
Netapp/zfs decision based on that (alone).

-frank
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] SMI label and EFI label in one disk?

2010-06-01 Thread Frank Cusack

On 6/1/10 4:35 AM -0700 Fred Liu wrote:

I just recalled a thread in this list and it said SMI label and EFI label
cannot be in one disk. Is it correct?


Correct.  But that was not your original question.


Let me describe my case.
I have a 160GB HDD -- saying c0t0d0. I use OpenSolaris installer to cut a
100GB slice -- c0t0d0s0 for rpool. And I want to use the remaining space
for cache device -- assuming c0t0d0s1. But when I use format command, I
cannot see the remaining space.


You probably created the SMI label within a partition that doesn't include
the entire disk.  I guess as Richard says this is a limitation of the
installer.

-frank
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Zfs mirror boot hang at boot

2010-05-30 Thread Frank Cusack

On 5/29/10 12:54 AM -0700 Matt Connolly wrote:

I'm running snv_134 on 64-bit x86 motherboard, with 2 SATA drives. The
zpool "rpool" uses whole disk of each drive.


Can't be.  zfs can't boot from a whole disk pool on x86 (maybe sparc too).
You have a single solaris partition with the root pool on it.  I am only
being pedantic because "whole disk" has a special meaning to zfs, distinct
from "a single partition using the entire disk".

...

If I detach a drive from the pool, then the system also correctly boots
off a single connected drive. However, reattaching the 2nd drive causes a
whole resilver to occur.


By "detach" do you mean running "zpool detach", or simply removing the
drive physically without running any command?  I suppose the former
because if you just remove it I'd think you'd have the same non-booting
problem.  If that's right, then that is the expected behavior.
"zpool detach" causes zfs to forget everything it knows about the device
being detached.
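
If the goal is to pull one side of the mirror temporarily without triggering
a full resilver when it comes back, offline/online is the gentler path; a
sketch (device names illustrative):

zpool offline rpool c0t1d0s0   # zfs remembers the device and what it missed
# ...swap cables, move the drive, whatever...
zpool online rpool c0t1d0s0    # only the blocks written while it was away resilver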

-frank
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Oracle to no longer support ZFS on OpenSolaris?

2010-05-14 Thread Frank Cusack

On 4/21/10 3:48 PM +0100 Bayard Bell wrote:

Oracle has a number of technologies that they've acquired that have
remained dual-licensed, and that includes acquiring InnoTech, which they
carried forward despite being able to use it as nearly an existential
threat to MySQL. In the case of their acquisition of Sleepycat, I'm aware
of open-source licensing terms becoming more generous after the Oracle
acquisition, where Oracle added a clear stipulation that redistribution
requiring commercial licensing had to involve third parties, where prior
to the acquisition Sleepycat had taken a more expansive
interpretation that covered just about any form of software distribution.


I'm no supporter of Oracle's business practices, but I am 90% sure that
Sleepycat changed their license before the Oracle acquisition.  Yes,
it was particularly onerous before they went to standard GPL.

-frank
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Sharing Issues

2010-02-21 Thread Frank Cusack

On 2/21/10 11:08 PM -0800 Tau wrote:

I am having a bit of an issue I have an opensolaris box setup as a
fileserver.  Running through CIFS to provide shares to some windows
machines.

Now lets call my zpool /tank1,


Let's not because '/' is an illegal character in a zpool name.


   when i create a zfs filesystem called
/test it gets shared as /test and i can see it as "test" on my windows
machines...  Now when i create a child system inside the test system
(lets call this /tank1/test/child) the child system gets shared as well
on its own as test_child as seen on the windows system.

I want to be able to create  nested filesystems, and not have the nested
systems shared through cifs  i want to access them through the root
system, and only have the root systems shared to the windows machines...


You're saying system as if it's a shorthand for filesystem.  It isn't.
And technically, for zfs you call them datasets but filesystem is ok.

Does simply setting sharesmb=off not work?  By default, descendant
filesystems inherit the properties of the parent, including share
properties.  So for each child filesystem you don't want to share,
you would have to override the default inherited sharesmb property.

What you should probably do is set an ACL to disallow access to the
child filesystems.  Because even if there is a sharesmb setting that
blocks sharing of a child, what happens then is that the client
accessing the parent can still write into the directory which holds
the mount point for the child, with the write going to the parent,
and on the fileserver you can't see data that the client has written
there because it is masked by the mounted child filesystem.  This
creates all sorts of problems.
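
A minimal sketch of both approaches, using the names from your mail (the
mode bits are illustrative):

# stop the child from being shared on its own
zfs set sharesmb=off tank1/test/child
# and keep clients that mount the parent from wandering into it
chmod 700 /tank1/test/child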

-frank
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Abysmal ISCSI / ZFS Performance

2010-02-10 Thread Frank Cusack

On 2/10/10 2:06 PM -0800 Brian E. Imhoff wrote:

I then, Create a zpool, using raidz2, using all 24 drives, 1 as a
hotspare: zpool create tank raidz2 c1t0d0 c1t1d0 [] c1t22d0 spare
c1t23d0


Well there's one problem anyway.  That's going to be horribly slow no
matter what.
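
Each raidz2 vdev gives you roughly one disk's worth of random IOPS, so
something along these lines (same disks, layout illustrative -- trade the
hot spare for a third vdev, or keep the spare and go a little narrower)
will behave far better:

zpool create tank \
  raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0 c1t6d0 c1t7d0 \
  raidz2 c1t8d0 c1t9d0 c1t10d0 c1t11d0 c1t12d0 c1t13d0 c1t14d0 c1t15d0 \
  raidz2 c1t16d0 c1t17d0 c1t18d0 c1t19d0 c1t20d0 c1t21d0 c1t22d0 c1t23d0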
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] verging OT: how to buy J4500 w/o overpriced drives

2010-02-09 Thread Frank Cusack

On 2/9/10 5:19 PM -0600 Tim Cook wrote:

On Tue, Feb 9, 2010 at 2:39 PM, Toby Thain 
wrote:



On 9-Feb-10, at 2:02 PM, Frank Cusack wrote:

 On 2/9/10 12:03 PM +1100 Daniel Carosone wrote:>



Snorcle wants to sell hardware.



LOL ... snorcle

But apparently they don't.  Have you seen the new website?   Seems like
a blatant attempt to kill the hardware business to me.




That's very sad. I love, love to spec the "rebooted" Bechtolsheim
hardware designs.

--Toby




How do you figure that?  There are 5 columns on the front page:

Database
Middleware
Applications
Server and Storage Systems
Industry

How much more focus were you hoping for beyond front page status?  Were
you expecting them to remove all references to that little database thing
that their entire business is founded upon?


I assume you are responding to my comment and not Toby's.  Did you try
to drill down past the front page?  To look at the specs for ANY server?
I just thought it was much more difficult to look at and compare specs
than it was on Sun's site.  Turns out you can go to shop.sun.com and
find the same tab interface as before though.

-frank
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] verging OT: how to buy J4500 w/o overpriced drives

2010-02-09 Thread Frank Cusack

On 2/9/10 12:03 PM +1100 Daniel Carosone wrote:>

Snorcle wants to sell hardware.


LOL ... snorcle

But apparently they don't.  Have you seen the new website?   Seems like a
blatant attempt to kill the hardware business to me.
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] verging OT: how to buy J4500 w/o overpriced drives

2010-02-07 Thread Frank Cusack

On 2/8/10 12:49 AM -0200 Giovanni Tirloni wrote:

I think the industry is in a sad state when you buy enterprise-level
drives and they don't work as expected (see that thread about TLER
settings on WD enterprise drives) that you have to spend extra on drives
that got reviewed by a third-party (Sun/EMC/etc). Just shows how bad the
disk vendors are.


Or how tough the hard drive market is.
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] verging OT: how to buy J4500 w/o overpriced drives

2010-02-06 Thread Frank Cusack

On 2/6/10 4:51 PM +0100 Kjetil Torgrim Homme wrote:

the pricing does look strange, and I think it would be better to raise
the price of the enclosure (which is silly cheap when empty IMHO) and
reduce the drive prices somewhat.  but that's just psychology, and
doesn't really matter for total cost.


better for whom? :)

if the total price is the same, it's "better" (for Sun) to charge as much
for the razor blades as the market will bear.
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS 'secure erase'

2010-02-05 Thread Frank Cusack

You might also want to note that with traditional filesystems, the
'shred' utility will securely erase data, but no tools like that
will work for zfs.
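
For example, on a filesystem that overwrites blocks in place (file name
illustrative):

# overwrite the file's blocks in place, then unlink it
shred -u /ufs/home/secret.dat
# on zfs the rewrites land on newly allocated blocks (copy-on-write),
# so the old data is left behind untouched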
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS 'secure erase'

2010-02-05 Thread Frank Cusack

On 2/5/10 5:08 PM -0500 c.hanover wrote:

 would it be possible to
create a 1GB file without writing any data to it, and then use a hex
editor to access the data stored on those blocks previously?


No, not over NFS and also not locally.  You'd be creating a sparse file,
which doesn't allocate space on disk for any filesystem (not just zfs).
So when you read it back, you get back all 0s.  The only way to actually
allocate the space on disk is to write to it, and then of course you
read back the data you wrote, not what was previously there.
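
Easy enough to demonstrate (path illustrative):

mkfile -n 1g /tank/nfsshare2/sparse   # size is recorded, no blocks allocated
du -h /tank/nfsshare2/sparse          # next to nothing on disk
# reading the file back returns zeros, never someone else's old data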
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS 'secure erase'

2010-02-05 Thread Frank Cusack

On 2/5/10 3:49 PM -0500 c.hanover wrote:

Two things, mostly related, that I'm trying to find answers to for our
security team.

Does this scenario make sense:
* Create a filesystem at /users/nfsshare1, user uses it for a while, asks
for the filesystem to be deleted * New user asks for a filesystem and is
given /users/nfsshare2.  What are the chances that they could use some
tool or other to read unallocated blocks to view the previous user's data?


Over NFS?  none.


Related to that, when files are deleted on a ZFS volume over an NFS
share, how are they wiped out?  Are they zeroed or anything.  Same
question for destroying ZFS filesystems, does the data lay about in any
way?  (That's largely answered by the first scenario.)


In both cases the data is still on disk.


If the data is retrievable in any way, is there a way to a) securely
destroy a filesystem, or b) securely erase empty space on a filesystem.


Someone else will have to answer that.

-frank
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] unionfs help

2010-02-04 Thread Frank Cusack

On 2/4/10 2:46 PM -0600 Nicolas Williams wrote:

In Frank's case, IIUC, the better solution is to avoid the need for
unionfs in the first place by not placing pkg content in directories
that one might want to be writable from zones.  If there's anything
about Perl5 (or anything else) that causes this need to arise, then I
suggest filing a bug.


Right, and thanks for chiming in.  Problem is that perl wants to install
add-on packages in places that coincide with the system install.
Most stuff is limited to the site_perl directory, which is easily
redirected, but it also has some other locations it likes to meddle with.
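
For reference, the site_perl redirect itself is trivial (paths illustrative;
the exact lib subdirectory varies with the perl version) -- it's the modules
that insist on touching the core locations that cause the grief:

# build add-on modules into a zone-writable prefix
perl Makefile.PL PREFIX=/opt/site_perl
make && make install
# and point perl at it
PERL5LIB=/opt/site_perl/lib/perl5/site_perl; export PERL5LIB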

-frank
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] How to get a list of changed files between two snapshots?

2010-02-04 Thread Frank Cusack

On 2/4/10 8:21 AM -0500 Ross Walker wrote:

Find -newer doesn't catch files added or removed; it assumes identical
trees.


This may be redundant in light of my earlier post, but yes it does.
Directory mtimes are updated when a file is added or removed, and
find -newer will detect that.

-frank
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] How to get a list of changed files between two snapshots?

2010-02-04 Thread Frank Cusack

On 2/4/10 8:00 AM +0100 Tomas Ögren wrote:

rsync by default compares metadata first, and only checks through every
byte if you add the -c (checksum) flag.

I would say rsync is the best tool here.


ah, i didn't know that was the default.  no wonder recently when i was
incremental-rsyncing a few TB of data between 2 hosts (not using zfs)
i didn't get any speedup from --size-only or whatever the flag is.


The "find -newer blah" suggested in other posts won't catch newer files
with an old timestamp (which could happen for various reasons, like
being copied with kept timestamps from somewhere else).


good point.  that is definitely a restriction with find -newer.  but if
you meet that restriction, and don't need to find added or deleted files,
it will be faster since only 1 directory tree has to be walked.

but in the general case it does sound like rsync is the best.  unless
bart can find added and missing files.  in which case bart is better
because it only has to walk 1 dir tree -- assuming you have a saved
manifest from a previous walk over the original dir tree.
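
it can -- bart compare reports added and deleted entries as well as changes.
a quick sketch (paths illustrative):

# earlier: save a manifest of the tree
bart create -R /tank/data > /var/tmp/data.0
# later: walk the tree once more and diff against the saved manifest
bart create -R /tank/data > /var/tmp/data.1
bart compare /var/tmp/data.0 /var/tmp/data.1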

-frank
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] unionfs help

2010-02-04 Thread Frank Cusack

BTW, I could just install everything in the global zone and use the
default "inheritance" of /usr into each local zone to see the data.
But then my zones are not independent portable entities; they would
depend on some non-default software installed in the global zone.

Just wanted to explain why this is valuable to me and not just some
crazy way to do something simple.

-frank
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] How to get a list of changed files between two snapshots?

2010-02-04 Thread Frank Cusack

On 2/4/10 12:39 AM -0500 Ross Walker wrote:

On Feb 3, 2010, at 8:59 PM, Frank Cusack 
wrote:

I think you misread the thread.  Either find or ddiff will do it and
either will be better than rsync.


Find can find files that have been added or removed between two directory
trees?

How?


When a file is added or removed in a directory, the directory's mtime
is updated.  So find -newer will locate those directories.  Then of
course you need to do a little bit more work to locate the files.
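
Roughly (paths illustrative):

# directories whose contents changed since the snapshot
find /file/system -type d -newer /file/system/.zfs/snapshot/snapname
# then diff each such directory's listing against the snapshot's copy
ls /file/system/.zfs/snapshot/snapname/somedir > /tmp/then
ls /file/system/somedir > /tmp/now
diff /tmp/then /tmp/now    # "<" lines were removed, ">" lines were added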

-frank
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] unionfs help

2010-02-04 Thread Frank Cusack
On February 4, 2010 12:12:04 PM +0100 dick hoogendijk  
wrote:

Why don't you just export that directory with NFS (rw) to your sparse zone
and mount it on /usr/perl5/mumble ? Or is this too simple a thought?


On February 4, 2010 1:41:20 PM +0100 Thomas Maier-Komor 
 wrote:

What about lofs? I thinks lofs is the equivalent for unionfs on Solaris.


The problem with both of those solutions is a) writes will overwrite the
original filesystem data and b) writes will be visible to everyone else.

Neither suggestion provides unionfs capability.

On February 4, 2010 12:12:18 PM + Peter Tribble 
 wrote:

The way I normally do this is to (in the global zone) symlink
/usr/perl5/mumble to somewhere that would be writable such as /opt, and
then put what you need into that location in the zone. Leaves a dangling
symlink in the global zone and other zones, but that's relatively
harmless.


The problem with that is you don't see the underlying data that exists
in the global zone.  I do use that technique for other data (e.g. the
entire /usr/local hierarchy), but it doesn't meet my desired needs in
this case.

I looked into clones (and at least now I understand them much better
than before) and they *almost* provide the functionality I want.  I
could mount a clone in the zoned version of /foo and it would see the
original /foo, and changes would go to the clone only, just like a real
unionfs.

What it's lacking though is that when the underlying filesystem changes
(in the global zone), those changes don't percolate up to the clone.
The clone's base view of files is from the snapshot it was generated
from, which cannot change.  It would be great if you could re-target
(or re-base?) a clone from a different snapshot than the one it was
originally generated from.  Since I don't need realtime updates, for
my purposes that would be a great equivalent to a true unionfs.
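
The clone part, for reference (this assumes /usr/perl5 lives in its own
dataset; names and mountpoints are illustrative):

# snapshot the global zone's copy and hang a writable clone off it
zfs snapshot rpool/usr_perl5@base
zfs clone rpool/usr_perl5@base rpool/perl5-myzone
zfs set mountpoint=/zones/myzone/root/usr/perl5 rpool/perl5-myzone
# writes land in the clone; the original is untouched, but later changes
# to the original are not reflected in the clone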

So the thread on zfs diff gave me an idea; I will use clones and will
write a 'zfs diff'-like tool.  When the original /usr/perl5/mumble
changes I will use that to pick out files that are different in the
clone and populate a new clone with them.

-frank
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] unionfs help

2010-02-03 Thread Frank Cusack

Is it possible to emulate a unionfs with zfs and zones somehow?  My zones
are sparse zones and I want to make part of /usr writable within a zone.
(/usr/perl5/mumble to be exact)

I can't just mount a writable directory on top of /usr/perl5 because then
it hides all the stuff in the global zone.  I could repopulate it in the
local zone but ugh that is unattractive.  I'm hoping for a better way.
Creating a full zone is not an option for me.

I don't think this is possible but maybe someone else knows better.  I
was thinking something with snapshots and clones?

-frank
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] How to get a list of changed files between two snapshots?

2010-02-03 Thread Frank Cusack
On February 3, 2010 6:46:57 PM -0500 Ross Walker  
wrote:

So was there a final consensus on the best way to find the difference
between two snapshots (files/directories added, files/directories deleted
and file/directories changed)?

Find won't do it, ddiff won't do it, I think the only real option is
rsync.


I think you misread the thread.  Either find or ddiff will do it and
either will be better than rsync.

-frank
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] How to get a list of changed files between two snapshots?

2010-02-03 Thread Frank Cusack
On February 3, 2010 12:19:50 PM -0500 Frank Cusack 
 wrote:

If you do need to know about deleted files, the find method still may
be faster depending on how ddiff determines whether or not to do a
file diff.  The docs don't explain the heuristics so I wouldn't want
to guess on that.


An improvement on finding deleted files with the find method would
be to not limit your find criteria to files.  Directories with
deleted files will be newer than in the snapshot so you only need
to look at those directories.  I think this would be faster than
ddiff in most cases.
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] How to get a list of changed files between two snapshots?

2010-02-03 Thread Frank Cusack
On February 3, 2010 6:02:52 PM +0100 Jens Elkner 
 wrote:

On Wed, Feb 03, 2010 at 10:29:18AM -0500, Frank Cusack wrote:

# newer files
find /file/system -newer /file/system/.zfs/snapshot/snapname -type f
# deleted files
cd /file/system/.zfs/snapshot/snapname
find . -type f -exec sh -c 'test -f "/file/system/$1" || echo "$1"' sh {} \;

The -newer test is standard POSIX find, so Solaris find or GNU find both
work, and obviously the above only finds regular files.  If you need
symlinks or directory names modify as appropriate.

The above is also obviously to compare a snapshot to the current
filesystem.  To compare two snapshots make the obvious modifications.


Perhaps http://iws.cs.uni-magdeburg.de/~elkner/ddiff/ wrt. dir2dir cmp
may help as well (should be faster).


If you don't need to know about deleted files, it wouldn't be.  It's hard
to be faster than walking through a single directory tree if ddiff has to
walk through 2 directory trees.

If you do need to know about deleted files, the find method still may
be faster depending on how ddiff determines whether or not to do a
file diff.  The docs don't explain the heuristics so I wouldn't want
to guess on that.

-frank
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] How to get a list of changed files between two snapshots?

2010-02-03 Thread Frank Cusack

On February 3, 2010 12:04:07 PM +0200 Henu  wrote:

Is there a possibility to get a list of changed files between two
snapshots? Currently I do this manually, using basic file system
functions offered by OS. I scan every byte in every file manually and it

  ^^^

On February 3, 2010 10:11:01 AM -0500 Ross Walker  
wrote:

Not a ZFS method, but you could use rsync with the dry run option to list
all changed files between two file systems.


That's exactly what the OP is already doing ...

-frank
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] How to get a list of changed files between two snapshots?

2010-02-03 Thread Frank Cusack

On February 3, 2010 12:04:07 PM +0200 Henu  wrote:

Is there a possibility to get a list of changed files between two
snapshots?


Great timing as I just looked this up last night, I wanted to verify
that an install program was only changing the files on disk that it
claimed to be changing.  So I have to say, "come on".  It took me but
one google search and the answer was one of the top 3 hits.



# newer files
find /file/system -newer /file/system/.zfs/snapshot/snapname -type f
# deleted files
cd /file/system/.zfs/snapshot/snapname
find . -type f -exec sh -c 'test -f "/file/system/$1" || echo "$1"' sh {} \;

The -newer test is standard POSIX find, so Solaris find or GNU find both
work, and obviously the above only finds regular files.  If you need
symlinks or directory names modify as appropriate.

The above is also obviously to compare a snapshot to the current
filesystem.  To compare two snapshots make the obvious modifications.
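
i.e. for two snapshots (names illustrative):

# files newer in snap2 than in snap1
find /file/system/.zfs/snapshot/snap2 -newer /file/system/.zfs/snapshot/snap1 -type f
# files present in snap1 but gone from snap2
cd /file/system/.zfs/snapshot/snap1
find . -type f -exec sh -c 'test -f "/file/system/.zfs/snapshot/snap2/$1" || echo "$1"' sh {} \;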

-frank
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] PCI-E CF adapter?

2010-02-02 Thread Frank Cusack
On January 14, 2010 1:08:56 PM -0500 Frank Cusack  
wrote:

I know this is slightly OT but folks discuss zfs compatible hardware
here all the time. :)

Has anyone used something like this combination?

<http://www.cdw.com/shop/products/default.aspx?EDC=1346664>
<http://www.cdw.com/shop/products/default.aspx?EDC=1854700>

It'd be nice to have externally accessible CF slots for my NAS.


I couldn't use the PCIe adapter as above, it is a full profile card.
I did find a low profile card for roughly twice as much, but no luck
in my server (x2270).  The BIOS does not recognize it as a device
I can boot from.  I didn't go any further than that.

In the X2270 it's not really useful anyway.  The latch mechanism
for the PCIe slot interferes with the extension of the ExpressCard
adapter.  It does just fit when you get the case all buttoned up,
but the EC retention mechanism is push-to-release and when you
insert the EC/CF card into the adapter, this pushes the EC card
adapter into the PCIe card, triggering its release.  Because
the PCIe latch on the server case interferes with the EC release,
you can't actually eject the EC/CF adapter enough to subsequently
push it back in to lock it.  I know that's all hard to understand
and it's probably not worth anyone's time to re-read it closely.
Sorry about that.

I don't have another machine with PCIe slots to try.  I'll just
stick with USB sticks I guess.  Since I'd avoid writing to the
CF anyway, it's not much different.

-frank
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] verging OT: how to buy J4500 w/o overpriced drives

2010-02-02 Thread Frank Cusack

On February 2, 2010 4:31:47 PM -0500 Miles Nordin  wrote:

 and FCoE is just dumb if you have IB, honestly.


by FCoE are you talking about iSCSI?
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] verging OT: how to buy J4500 w/o overpriced drives

2010-02-02 Thread Frank Cusack

On February 2, 2010 2:17:30 PM -0600 Tim Cook  wrote:

http://www3.sympatico.ca/n.rieck/docs/vms_vs_unix.html


interesting page, if somewhat dated.  e.g. maybe it wasn't true at the
time but don't we now know from the SCO lawsuit that SCO does indeed
own "UNIX"?

as long as we're OT. :)
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Best 1.5TB drives for consumer RAID?

2010-02-02 Thread Frank Cusack
On February 2, 2010 11:58:12 AM -0800 Simon Breden  
wrote:

IIRC the Black range are meant to be the 'performance' models and so are
a bit noisy. What's your opinion? And the 2TB models are not cheap either
for a home user. The 1TB seem a good price. And from what little I read,


It depends what you mean by cheap.  As we've recently learned, cheaper
is not necessarily cheaper. :)

What I mean is, it depends how much data you have.  If 2TB drives allow
you to use only 1 chassis, you save on power consumption.  Fewer spindles
also will save on power consumption.  However, w/ 2TB drives you may
need to add more parity (raidz2 vs raidz1, e.g.) to meet your reliability
requirements -- the time to resilver 2TB may not meet your MTTDL requirements.
So you have to include your reliability needs when you figure cost.

That said, I doubt 2TB drives represent good value for a home user.
They WILL fail more frequently and as a home user you aren't likely
to be keeping multiple spares on hand to avoid warranty replacement
time.

-frank
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] verging OT: how to buy J4500 w/o overpriced drives

2010-02-02 Thread Frank Cusack

On February 2, 2010 12:08:13 PM -0600 Tim Cook  wrote:

Not exactly unix, but there's still VMS clusters running around out there
with 100% uptime for over 20 years.  I wouldn't mind seeing it opened up.


Agreed, I'd love to see that opened up.  Might even give it new life.
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] verging OT: how to buy J4500 w/o overpriced drives

2010-02-02 Thread Frank Cusack

On February 2, 2010 11:58:17 AM -0600 Tim Cook  wrote:

On Tue, Feb 2, 2010 at 11:53 AM, Frank Cusack
wrote:


On February 2, 2010 8:57:32 AM -0800 Orvar Korvar <
knatte_fnatte_tja...@yahoo.com> wrote:


I love that Sun shares their products for free. Which other big Unix
vendor does that?



Who's left?



Pretty sure HP and IBM are still alive and well.


Yeah but who would want it, even for free. :P
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] verging OT: how to buy J4500 w/o overpriced drives

2010-02-02 Thread Frank Cusack
On February 2, 2010 8:57:32 AM -0800 Orvar Korvar 
 wrote:

I love that Sun shares their products for free. Which other big Unix
vendor does that?


Who's left?
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] verging OT: how to buy J4500 w/o overpriced drives

2010-02-01 Thread Frank Cusack

http://www.memoryx.net/5410456.html

I've bought sleds for X4150s and X2270s from them.


interesting mis-description on the web page.  thumper doesn't use SCA
drives.

-frank
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zpool status output confusing

2010-02-01 Thread Frank Cusack
On February 1, 2010 4:15:10 PM -0500 Frank Cusack 
 wrote:

On February 1, 2010 1:09:21 PM -0700 Cindy Swearingen
 wrote:

Whether disk swapping on the fly or a controller firmware update
renumbers the devices causes a problem really depends on the driver-->ZFS
interaction and we can't speak for all hardware.


With mpxio disks are known by multiple names.  zfs doesn't seem to have
a problem with that?


... known to the system by multiple names, but known to zfs by the single
"WWN" type identifier given by mpxio.  I guess.
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zpool status output confusing

2010-02-01 Thread Frank Cusack
On February 1, 2010 1:09:21 PM -0700 Cindy Swearingen 
 wrote:

Whether disk swapping on the fly or a controller firmware update
renumbers the devices causes a problem really depends on the driver-->ZFS
interaction and we can't speak for all hardware.


With mpxio disks are known by multiple names.  zfs doesn't seem to have
a problem with that?
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zpool status output confusing

2010-02-01 Thread Frank Cusack
On February 1, 2010 10:19:24 AM -0700 Cindy Swearingen 
 wrote:

ZFS has recommended ways for swapping disks so if the pool is exported,
the system shutdown and then disks are swapped, then the behavior is
unpredictable and ZFS is understandably confused about what happened. It
might work for some hardware, but in general, ZFS should be notified of
the device changes.


That's quite frequently difficult or impossible.  Can you elaborate as
to when this becomes a problem (you may have already done so in your
followup, but like you said, it's Monday :)) and how to notify ZFS of
the change?

I thought zfs wrote a unique ID into each member disk/slice of a pool
so that they could be reordered in any fashion at any time (even without
export) and no problem.  Long ago, but I've tested swapping scsi target
ids (not controller ids) and it worked fine on non-Sun hardware.
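
It does; you can see them in the on-disk labels (device name illustrative):

zdb -l /dev/dsk/c0t0d0s0
# each label carries the pool guid and the vdev guid, which is why the
# device path a disk happens to show up under doesn't matter at import time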

-frank
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Home ZFS NAS - 2 drives or 3?

2010-02-01 Thread Frank Cusack
On February 1, 2010 11:59:14 AM -0600 David Dyer-Bennet  
wrote:

One idea I seriously considered is to boot off a USB key.  No online
redundancy (but I'd keep a second loaded key, plus the files to quickly
reimage a new key, handy).


I've just built my first USB-booting zfs system.  I took the plunge
after playing with an x4275 and using the internal CF slot for it.

I boot off of a mirrored pair of USB sticks.

It works great and doesn't eat 2 disk bays.


Yes, logging and such will to some extent wear through the write capacity
of the USB key, but I expect it'd last several years, which is enough for
me to not worry about it.


I wouldn't worry so much about write wear (as I recently learned on this
list) as writes being dog slow.  It was easy to redirect most log files
to the spinning bits of rust.  Some logs (/var/svc/log, e.g.) however can't
be redirected, but all of those that I could find are very infrequently
updated.
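
The syslog part of that is a one-line (tab-separated) change per log in
/etc/syslog.conf, something like (destination path illustrative):

# in /etc/syslog.conf, then: svcadm refresh system-log
*.err;kern.debug;daemon.notice;mail.crit        /tank/logs/messages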

-frank
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs rpool mirror on non-equal drives

2010-01-31 Thread Frank Cusack
On January 30, 2010 10:27:41 AM -0800 Michelle Knight 
 wrote:

I did this as a test because I am aware that zpools don't like drives
switching controllers without being exported first.


They don't mind it at all.  It's one of the great things about zfs.

What they do mind is being remounted on a system with a different
hostid without having been exported first.
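
i.e. (pool name illustrative):

zpool export tank     # on the old host, before the disks move
zpool import tank     # on the new host
# if it wasn't exported first, zfs makes you admit it:
zpool import -f tank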

-frank
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Best 1.5TB drives for consumer RAID?

2010-01-25 Thread Frank Cusack

On January 24, 2010 12:20:55 PM -0800 "R.G. Keen"  wrote:

I do apologize for the snottier parts of my reply to your first note,
which I am editing. I did not get a chance to read this note from you
before responding.


Oh not at all.  Snotty is as snotty does.  um, what that is supposed
to mean is -- I deserved it.  :)

I'm sure I've said this 3 times already in different ways, but I just
thought you were generalizing that if you bought smaller drives, which
are cheaper, since the drives are cheaper that allows you to buy more
of them and thus have more parity.  This violated a primary assumption
of mine, that you always buy drives solely based on how much data
you need to store, and then you factor in the level of redundancy you
require.  By that assumption, you would always want to buy the drives
which are the least $/GB and which your data still fits into nicely.
(If you have 2.1TB of data, you wouldn't buy 2x2TB drives, and please
ignore the loss due to base 10 vs base 2 and filesystem overhead in
that statement.)

I see that your goals are completely different.

-frank
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Best 1.5TB drives for consumer RAID?

2010-01-24 Thread Frank Cusack

On January 24, 2010 8:26:07 AM -0800 "R.G. Keen"  wrote:

“Fewer/bigger versus more/smaller drives”: Tim and Frank have worked
this one over. I made the choice based on wanting to get a raidz3 setup,
for which more disks are needed than raidz or raidz2. This idea comes out
of the time-to-resilver versus time to failure line of reasoning.


Sorry I missed this part of your post before responding just a moment
ago.  If you want raidz3, you will spend more money on larger drives
if your data still fits into N smaller drives.  If you only have .75TB
of data, then of course it is a waste to get 1.5TB drives because you
still need 5 (1+3)+1 of them and you should definitely use the cheaper
.75TB drives.  But you'd do even better to use a triple mirror of the
smaller drives.

Once the size of your data exceeds the size of the smaller drive, and
you have to buy 2 of them just for the data part (not incl. the parity),
it's now more expensive to use the smaller drives.

In the above paragraph you haven't mentioned cost at all, but since
you did talk elsewhere about the cost of the smaller drives being cheaper,
I wanted to make it clear you are spending more money by using the smaller
drives.

-frank
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Best 1.5TB drives for consumer RAID?

2010-01-24 Thread Frank Cusack

On January 24, 2010 8:26:07 AM -0800 "R.G. Keen"  wrote:

 In my case, I got 0.75TB drives
for $58 each. The cost per bit is bigger than buying 1TB or 1.5TB drives,
all right, but I can buy more of them, and that lets me put another drive
on for the next level of error correction data.


That's the point I was arguing against.  You did not respond to my
argument, and you don't have to now, but as long as you keep stating
this without correcting me I will keep responding.

The size of the drive has nothing to do with letting you put another
drive on, for more redundancy.  If you want more redundancy, you *have*
to buy more drives, whether big or small.  If you're implying that
because of the lower cost you can afford to buy the additional drive,
that is also clearly incorrect as the cost per bit is more, so in fact
you spend more with the smaller drives PLUS the cost for the
additional redundancy.

Also, smaller drives require LESS redundancy for the same level of
availability, not more.  Of course, because drives are only available
in discrete sizes you may end up with the same raidz level (1,2 or 3)
anyway.

-frank
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Best 1.5TB drives for consumer RAID?

2010-01-24 Thread Frank Cusack
On January 24, 2010 8:41:00 AM -0800 Erik Trimble  
wrote:

an external JBOD chassis, not a server chassis.




___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Best 1.5TB drives for consumer RAID?

2010-01-24 Thread Frank Cusack
On January 24, 2010 11:45:57 AM +1100 Daniel Carosone  
wrote:

On Sat, Jan 23, 2010 at 06:39:25PM -0500, Frank Cusack wrote:

Smaller devices cost more $/GB; ie they are more expensive.


Usually, other than the very largest (most recent) drives, that are
still at a premium price.


Yes, I should have clarified that.

-frank
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Best 1.5TB drives for consumer RAID?

2010-01-24 Thread Frank Cusack

On January 23, 2010 8:23:08 PM -0600 Tim Cook  wrote:

I bet you'll get the same performance out of 3x1.5TB drives you get out of
6x500GB drives too.


Yup.  And if that's the case, probably you want to go with the 3 drives
because your operating costs (power consumption) will be less.


 Are you really trying to argue people should never
buy anything but the largest drives available?


No.  Are you really so dense that you extrapolate my argument to an
extremely broad catch-all?  There are other reasons besides cost that
people might want to buy smaller drives.  And, e.g., if your data set
isn't that large, don't spend money for space you don't need.

The post that I was responding to claimed smaller drives *allowed* him
to get to raidz3.  I challenged that as incorrect.  It's the larger
drives that *require* raidz3 because resilver time is longer.  So far
I've seen no argument to the contrary.  Just a side argument about
cost which I happen to disagree with.  And a followup side argument
about planning for redundancy which I also disagree with.

Let's say you need 3TB of storage.  That's a lot for most home uses.
The actual amount doesn't matter as the costs will scale.  So you
buy 5 1.5TB drives.  4 (2+2) in a raidz2 plus a hot spare.  For the
sake of this argument, let's say you've done the math and raidz2
meets your redundancy requirement, based on time to resilver.  More
likely, a home user has not done the math but that's besides the point.

Now let's do it with .5TB drives.  A quick survey shows me they come
in at about a 10% discount to the 1.5TB drives.  I'm being generous
because I can't even find .5TB drives, but I see that 320GB drives
are about 10% less.  If you want to get even "cheaper", 250GB drives
are about 50% less cost than 1.5TB drives (which by my argument, which
you refute, makes them 3x more expensive but whatever).

So with .5TB drives you need 6+3 drives -- because the smaller drives
"allow" you to get to raidz3, plus a hot spare.  That's twice as many
drives, however you are only paying 10% less per drive.  PLUS with this
many drives you now need a pretty big chassis.  Plus your power costs
are now quite a bit higher.

Please put together a scenario for me where smaller drives cost less.


I hope YOU aren't ever buying for MY company.


Rest assured, I won't be.

-frank
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] customizing "zfs list" with less typing

2010-01-23 Thread Frank Cusack
On January 23, 2010 4:33:59 PM -0800 "Richard L. Hamilton" 
 wrote:

It might be nice if "zfs list" would check an environment variable for
a default list of properties to show (same as the comma-separated list
used with the -o option).  If not set, it would use the current default
list; if set, it would use the value of that environment variable as the
list.

I find there are a lot of times I want to see the same one additional
property when using "zfs list"; an environment variable would mean a
one-time edit of .profile rather than typing the -o option with the
default list modified by whatever I want to add.

...

Both of those, esp. together, would make quickly checking or
familiarizing oneself with a server that much more civilized, IMO.


Just make 'zfs' an alias to your version of it.  A one-time edit
of .profile can update that alias.
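
Something like this in .profile does the trick (the property list is
illustrative):

# wrap zfs so a plain "zfs list" shows an extra property
zfs() {
        case "$1" in
        list) shift; /usr/sbin/zfs list -o name,used,available,referenced,mountpoint,compression "$@" ;;
        *)    /usr/sbin/zfs "$@" ;;
        esac
}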

-frank
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] nfs mounts don't follow child filesystems?

2010-01-23 Thread Frank Cusack
On January 23, 2010 6:53:26 PM -0600 Bob Friesenhahn 
 wrote:

On Sat, 23 Jan 2010, Frank Cusack wrote:

Notice that the referenced path is subordinate to the exported zfs
filesystem.


Well, assuming there is a single home zfs filesystem and not a
filesystem-per-user.  For filesystem-per-user your example simply
mounts the correct shared filesystem.  Even for a single home
filesystem, the above doesn't actually mount /export/home and
then also mount /export/home/USER, so it's not following the
zfs filesystem hierarchy.


I am using a filesystem-per-user on my system.  Are you saying that my
system is wrong?

These per-user filesystems are NFS exported due to the inheritance of zfs
properties from their parent directory.  The property is only set in one
place.


You have misunderstood the problem.

Of course, or rather I understand, that zfs child filesystems inherit
the sharenfs property from the parent similar to how they inherit other
properties.  (And even if they didn't, clients can still mount
subdirectories of the directory that is shared unless the server
explicitly disallows that option.  Regardless of underlying filesystem.)

With zfs filesystems, when you have a directory which is a subordinate
filesystem, as in filesystem-per-user, then if the NFS client mounts the
parent filesystem, when it crosses the child filesystem boundary it does
not see into the child filesystem as it would if it were local.

server:
export
export/home
export/home/user

client mounts server:/export/home on /home.  the client can see (e.g.)
/home/user, but as an empty directory.  when the client enters that
directory it is writing into the export/home filesystem on the server
(and BTW those writes are not visible on the server since they are
obscured by the child filesystem.)

NFS4 has a mechanism to follow and mount the child filesystem.

Your example doesn't do that, it simply mounts the child filesystem
directly.
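
On a client that implements it (Solaris calls these mirror mounts), the
behavior I'm after looks like this (names illustrative):

mount -o vers=4 server:/export/home /home
cd /home/user        # the client mounts export/home/user on the fly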

-frank
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Best 1.5TB drives for consumer RAID?

2010-01-23 Thread Frank Cusack

On January 23, 2010 6:09:49 PM -0600 Tim Cook  wrote:

When you've got a home system and X amount of dollars
to spend, $/GB means absolutely nothing when you need a certain number of
drives to have the redundancy you require.


Don't you generally need a certain amount of GB?  I know I plan my
storage based on how much data I have, even my home systems.  And THEN
add in the overhead for redundancy.  If we're talking about such a
small amount of storage ("home") that the $/GB is not a factor (ie,
even with the most expensive $/GB drives we won't exceed the budget and
we don't have better things to spend the money on anyway) then raidz3
seems unnecessary.  I mean, just do a triple mirror of the 1.5TB drives
rather than say (6) .5TB drives in a raidz3.

-frank
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] nfs mounts don't follow child filesystems?

2010-01-23 Thread Frank Cusack
On January 23, 2010 2:17:12 PM -0600 Bob Friesenhahn 
 wrote:

On Sat, 23 Jan 2010, Frank Cusack wrote:


I thought with NFS4 *on solaris* that clients would follow the zfs
filesystem hierarchy and mount sub-filesystems.  That doesn't seem
to be happening and I can't find any documentation on it (either way).

Did I only dream up this feature or does it actually exist?  I am
using s10_u8.


The Solaris 10 automounter should handle this for you:

% cat /etc/auto_home
# Home directory map for automounter
#
# +auto_home
*   myserver:/export/home/&

Notice that the referenced path is subordinate to the exported zfs
filesystem.


Well, assuming there is a single home zfs filesystem and not a
filesystem-per-user.  For filesystem-per-user your example simply
mounts the correct shared filesystem.  Even for a single home
filesystem, the above doesn't actually mount /export/home and
then also mount /export/home/USER, so it's not following the
zfs filesystem hierarchy.

So while your example doesn't demonstrate the behavior I'm asking
for, the automounter does indeed work as I want, at least for /net.

-frank
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Best 1.5TB drives for consumer RAID?

2010-01-23 Thread Frank Cusack

On January 23, 2010 1:20:13 PM -0800 Richard Elling

My theory is that drives cost $100.


Obviously you're not talking about Sun drives. :)

-frank
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Best 1.5TB drives for consumer RAID?

2010-01-23 Thread Frank Cusack

On January 23, 2010 5:17:16 PM -0600 Tim Cook  wrote:

Smaller devices get you to raid-z3 because they cost less money.
Therefore, you can afford to buy more of them.


I sure hope you aren't ever buying for my company! :) :)

Smaller devices cost more $/GB; ie they are more expensive.

-frank
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] zpool status -x not as documented

2010-01-23 Thread Frank Cusack

zpool status [-xv] [pool] ...

Displays the detailed health status for the given pools.
...
-x     Only display status for pools that are exhibiting
       errors or are otherwise unavailable.

 # zpool status -x
 all pools are healthy
 # zpool status -x rpool
 pool 'rpool' is healthy

-frank
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


  1   2   3   4   >