Re: [zfs-discuss] ZFS root boot failure?

2008-06-12 Thread Kurt Schreiner
On Thu, Jun 12, 2008 at 06:38:56AM +0200, Richard Elling wrote:
 Vincent Fox wrote:
  So I decided to test out failure modes of ZFS root mirrors.
 
  Installed on a V240 with nv90.  Worked great.
 
  Pulled out disk1, then replaced it and attached again, resilvered, all good.
 
  Now I pull out disk0 to simulate failure there.  OS up and running fine, 
  but lots of error messages about SYNC CACHE.
 
  Next I decided to init 0, and reinsert disk 0, and reboot.  Uh oh!
 
 
 This is actually very good.  It means that ZFS recognizes that there
 are two out-of-sync mirrors and you booted from the oldest version.
 What happens when you change the boot order?
  -- richard
Hm, but the steps taken, as I read it, were:

pull disk1
replace
*resilver*
pull disk0
...
So the 2 disks should be in sync (due to resilvering)? Or is there
another step needed to get the disks in sync?

Kurt


Re: [zfs-discuss] ZFS root boot failure?

2008-06-12 Thread Brian Hechinger
On Wed, Jun 11, 2008 at 10:43:26PM -0700, Richard Elling wrote:
 
 AFAIK, SVM will not handle this problem well.  ZFS and Solaris
 Cluster can detect this because the configuration metadata knows
 the time difference (ZFS can detect this by the latest txg).

Having been through this myself with SVM in the past, no, it does
not handle this well at all.  If I remember correctly Veritas handled
this a lot better than SVM did/does (please bear in mind I haven't
used either of those in quite some time).

 I predict that if you had booted from disk B, then it would have
 worked (but I don't have the hardware setup to test this tonight)

Unfortunately I thought of this after deleting his mail.  He said that
before pulling disk B he scrambled it with dd.  He broke the boot
sectors on disk B, which ZFS doesn't replicate as far as I can tell.
(See the section of the ZFS install docs on adding a mirror after the
fact: you need to install the boot sectors manually.)
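
For example, after attaching the second disk you currently have to do
something like this by hand (a rough sketch from memory; device names are
placeholders, SPARC first, x86 second):

  # attach the second disk to the root pool and let it resilver
  zpool attach rpool c0t0d0s0 c0t1d0s0

  # SPARC: install the ZFS boot block on the new half of the mirror
  installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk /dev/rdsk/c0t1d0s0

  # x86: install GRUB stage1/stage2 on the new half of the mirror
  installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c0t1d0s0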

IMHO, ZFS boot/root should really go out of its way to make sure the
boot sectors are up to date.  As most other mirroring solutions (hardware
or software) mirror raw volumes, they just do it automatically due to
the nature of how they work.  This is behavior that has come to be
expected, so it's a really good idea if ZFS could do it.

I think something else that might help is if ZFS were to boot, see that
the volume it booted from is older than the other one, print a message
to that effect and either halt the machine or issue a reboot pointing
at the other disk (probably easier with OF than the BIOS of a PC).
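
Today you can at least eyeball which half is stale by hand by dumping the
vdev labels and comparing the txg recorded on each disk; a rough sketch,
device names are just examples:

  # each ZFS label records the last txg written to that disk
  zdb -l /dev/rdsk/c0t0d0s0 | grep txg
  zdb -l /dev/rdsk/c0t1d0s0 | grep txg

The side reporting the lower txg is the one that missed the later writes.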

-brian
-- 
Coding in C is like sending a 3 year old to do groceries. You gotta
tell them exactly what you want or you'll end up with a cupboard full of
pop tarts and pancake mix. -- IRC User (http://www.bash.org/?841435)


[zfs-discuss] ZFS dependent clones question

2008-06-12 Thread Yiannis
Hi,

After managing to upgrade to snv_90 after a few failed attempts, I was left with
a ton of zfs datasets (see previous post), most of which I've managed to
destroy. However, there's something that stumps me:

NAME                                                                USED  AVAIL  REFER  MOUNTPOINT
rpool                                                              9.85G  24.6G    62K  /rpool
rpool/ROOT                                                         7.74G  24.6G    18K  /rpool/ROOT
rpool/ROOT/opensolaris                                             55.7M  24.6G  2.95G  legacy
rpool/ROOT/opensolaris-10                                          7.68G  24.6G  4.44G  legacy
rpool/ROOT/[EMAIL PROTECTED]:-:2008-06-01-08:08:11                 1.47G      -  2.95G  -
rpool/ROOT/opensolaris-10/opt                                      1.78G  24.6G  1.78G  /opt
rpool/ROOT/opensolaris-10/[EMAIL PROTECTED]:-:2008-06-01-08:08:11   138K      -   622M  -
rpool/ROOT/opensolaris/opt                                             0  24.6G   622M  /opt
rpool/export                                                       2.10G  24.6G    21K  /export
rpool/export/home                                                  2.10G  24.6G  2.10G  /export/home

-bash-3.2# zfs destroy rpool/ROOT/[EMAIL PROTECTED]:-:2008-06-01-08:08:11
cannot destroy 'rpool/ROOT/[EMAIL PROTECTED]:-:2008-06-01-08:08:11': snapshot 
has dependent clones
use '-R' to destroy the following datasets:
rpool/ROOT/opensolaris/opt
rpool/ROOT/opensolaris

and
-bash-3.2# zfs destroy rpool/ROOT/opensolaris-10/[EMAIL PROTECTED]:-:2008-06-01-08:08:11
cannot destroy 'rpool/ROOT/opensolaris-10/[EMAIL PROTECTED]:-:2008-06-01-08:08:11': snapshot has dependent clones
use '-R' to destroy the following datasets:
rpool/ROOT/opensolaris/opt


opensolaris-10 is the dataset I am currently operating under (or so I
presume!). Is it safe to destroy the other one (and update GRUB accordingly)?
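
For reference, a couple of read-only checks I can run before destroying
anything (a sketch, using the dataset names from the listing above):

  # confirm which BE the running system is actually using as /
  df -h /

  # show the origin snapshot (if any) of every dataset under rpool/ROOT,
  # which makes the clone/snapshot dependencies visible
  zfs list -o name,origin -r rpool/ROOT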

Thank you,

Yiannis.
 
 


[zfs-discuss] Boot from mirrored vdev

2008-06-12 Thread Rich Teer
Hi all,

Booting from a two-way mirrored metadevice created using SVM
can be a bit risky, especially when one of the drives fails
(not being able to form a quorum, the kernel will panic).
Is booting from a mirrored vdev created using ZFS similarly
iffy?  That is, if one disk in the vdev dies, will the machine
panic?

Cheers,

-- 
Rich Teer, SCSA, SCNA, SCSECA

CEO,
My Online Home Inventory

URLs: http://www.rite-group.com/rich
  http://www.linkedin.com/in/richteer
  http://www.myonlinehomeinventory.com


Re: [zfs-discuss] Boot from mirrored vdev

2008-06-12 Thread A Darren Dunham
On Thu, Jun 12, 2008 at 07:29:08AM -0700, Rich Teer wrote:
 Hi all,
 
 Booting from a two-way mirrored metadevice created using SVM
 can be a bit risky, especially when one of the drives fails
 (not being able to form a quorum, the kernel will panic).

SVM doesn't panic in that situation.  At boot time, root is mounted
read-only, so a panic is unnecessary to protect the filesystem.

Instead the boot process stalls and you get a shell that lets you
resolve the replica states manually (usually by deleting the replicas
from the dead drive).  
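
For example, from that shell (a sketch; the slice name is a placeholder for
wherever your state database replicas actually live):

  # list the replicas and their status flags
  metadb -i

  # delete the replicas that were on the failed disk, then continue the boot
  metadb -d c0t1d0s7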

Panic should only happen if you're already running and you then lose
more than 50% of replicas (uncommon in 2 disk setups).

 Is booting from mirrored vdev created by using ZFS similarly
 iffy?  That is, if one disk in the vdev dies, will the machine
 panic?

Good question.  SVM by default stalls the boot to ensure a strict
quorum.  VxVM continues the boot even though only 50% of its configuration
database copies are available.  I think this is because it uses a
timestamp/generation ID to resolve which copy is more up-to-date.

-- 
Darren


Re: [zfs-discuss] ZFS root boot failure?

2008-06-12 Thread A Darren Dunham
On Thu, Jun 12, 2008 at 07:28:23AM -0400, Brian Hechinger wrote:
 I think something else that might help is if ZFS were to boot, see that
 the volume it booted from is older than the other one, print a message
 to that effect and either halt the machine or issue a reboot pointing
 at the other disk (probably easier with OF than the BIOS of a PC).

That's the method taken by VxVM.  When it finally imports the booting
DG, it may find that the root volume isn't present on the disk that
booted.  It will stop the boot process at that point.

-- 
Darren


Re: [zfs-discuss] ZFS root boot failure?

2008-06-12 Thread Cindy . Swearingen
Vincent,

I think you are running into some existing bugs, particularly this one:

http://bugs.opensolaris.org/view_bug.do?bug_id=6668666

Please review the list of known issues here:

http://opensolaris.org/os/community/zfs/boot/

Also check out the issues described on page 77 in this section:

Booting From an Alternate Disk in a Mirrored ZFS Root Pool

http://opensolaris.org/os/community/zfs/docs/
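
For example, on SPARC you can point the OBP at the other half of the mirror
(the alias and property value below are just examples; use the aliases that
devalias shows on your system):

  # at the OBP prompt, boot the second disk once
  ok boot disk1

  # or, from a running Solaris, make it the preferred boot device persistently
  eeprom boot-device="disk1 disk"

On x86 you would pick the second disk in the BIOS boot order instead.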

Cindy

Vincent Fox wrote:
On Thu, Jun 12, 2008 at 06:38:56AM +0200, Richard Elling wrote:

  pull disk1
  replace
  *resilver*
  pull disk0
  ...
So the 2 disks should be in sync (due to
resilvering)? Or is there
another step needed to get the disks in sync?
 
 
 That is an accurate summary.  I thought I was all good with the resilver and 
 in fact ran a scrub and status to be certain of it.
 
 If boot sectors do not get installed by default onto disk1, I will have to 
 make this a part of the post-install script for JumpStart.   I will re-run 
 this experiment with a clean nv90 install onto a mirror set, and just pull 
 disk0 this time without messing with disk1.
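
Something along these lines in the finish script is what I have in mind
(an untested sketch; the second disk's device name is a placeholder and the
installed image is assumed to be mounted at /a as usual):

  # JumpStart finish-script fragment (SPARC): install the ZFS boot block
  # on the second half of the root mirror
  installboot -F zfs /a/usr/platform/`uname -i`/lib/fs/zfs/bootblk /dev/rdsk/c1t1d0s0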
  
  


Re: [zfs-discuss] ZFS root boot failure?

2008-06-12 Thread Richard Elling
Kurt Schreiner wrote:
 On Thu, Jun 12, 2008 at 06:38:56AM +0200, Richard Elling wrote:
   
 Vincent Fox wrote:
 
 So I decided to test out failure modes of ZFS root mirrors.

 Installed on a V240 with nv90.  Worked great.

 Pulled out disk1, then replaced it and attached again, resilvered, all good.

 Now I pull out disk0 to simulate failure there.  OS up and running fine, 
 but lots of error messages about SYNC CACHE.

 Next I decided to init 0, and reinsert disk 0, and reboot.  Uh oh!

   
 This is actually very good.  It means that ZFS recognizes that there
 are two out-of-sync mirrors and you booted from the oldest version.
 What happens when you change the boot order?
  -- richard
 
 Hm, but the steps taken, as I read it, were:

   pull disk1
   replace
   *resilver*
   pull disk0
   ...
 So the 2 disks should be in sync (due to resilvering)? Or is there
 another step needed to get the disks in sync?

   
The amnesia occurred later:
Now I pull out disk0 to simulate failure there. OS up and running 
fine, but lots of error messages about SYNC CACHE.
Next I decided to init 0, and reinsert disk 0, and reboot. Uh oh!


 -- richard



[zfs-discuss] zfs root / cannot activate new BE

2008-06-12 Thread Peter Lees
Hi folks,

I have set up a new BE on ZFS root, but it does not want to activate. The
server is build 90, x86 (64-bit).

I already have two other BEs on UFS/SVM.

When I try to activate the ZFS BE it seems OK, but on reboot no ZFS BE option
is shown in GRUB.

I have two disks: disk 1 has the two SVM metadevices on it, disk 2 has the new
ZFS pool.

In debugging, it seems that the Live Upgrade activate.sh script
(/etc/lu/DelayUpdate/activate.sh) is having a problem and not letting the BE
activation go ahead; here are the relevant pieces:

+ cd /etc/lu 
+ [ no = yes ] 
+ ./installgrub.findroot ./stage1.findroot ./stage2.findroot /dev/md/rdsk/d30 
floppy: cannot mount pcfs
invalid bios paramet block
+ [ 255 -ne 0 ] 
+ gettext installgrub failed for %s 
+ /etc/lib/lu/luprintf -Eelp2 installgrub failed for %s /dev/md/rdsk/d30 
ERROR: installgrub failed for /dev/md/rdsk/d30
+ /bin/touch /tmp/.lulib.luig.error.13449

[...]

+ cd /etc/lu
+ [ no = yes ]
+ ./installgrub.findroot ./stage1.findroot ./stage2.findroot /dev/md/rdsk/d0
cannot open /boot/grub/stage2 on pcfs
+ [ 255 -ne 0 ]
+ gettext installgrub failed for %s
+ /etc/lib/lu/luprintf -Eelp2 installgrub failed for %s /dev/md/rdsk/d0
ERROR: installgrub failed for /dev/md/rdsk/d0
+ /bin/touch /tmp/.lulib.luig.error.13449



As you can see, trying to run installgrub on the metadevices throws errors
(two different errors, strangely). Manually running installgrub on the raw
slices works OK; manually running the same commands on the metadevices gives
pretty much the same output:

host:/etc/lu# ./installgrub.findroot ./stage1.findroot ./stage2.findroot 
/dev/md/rdsk/d30
invalid bios paramet block
floppy: cannot mount pcfs
host:/etc/lu# ./installgrub.findroot ./stage1.findroot ./stage2.findroot 
/dev/md/rdsk/d0
mount: /dev/md/dsk/d0 is not a DOS filesystem.
cannot mount /dev/md/dsk/d0
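
As a stopgap I can presumably point installgrub at the slices underneath the
metadevices instead (a sketch; the slice names are placeholders and should be
taken from the metastat output for d0 and d30):

  # see which physical slices back the root metadevices
  metastat d0 d30

  # install GRUB on each underlying slice directly
  ./installgrub.findroot ./stage1.findroot ./stage2.findroot /dev/rdsk/c0t0d0s0
  ./installgrub.findroot ./stage1.findroot ./stage2.findroot /dev/rdsk/c0t1d0s0

but that doesn't explain why activate.sh fails on the metadevices in the
first place.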



Trying to activate one of the SVM-based BEs throws similar errors.

If anyone has seen anything similar I'd appreciate any suggestions; I'm worried
I might have screwed up the GRUB location a bit when juggling the disks around
to make space for ZFS, but I'm not sure where to look to work out what I might
have done.
 
 


Re: [zfs-discuss] ZFS root boot failure?

2008-06-12 Thread Kurt Schreiner
On Thu, Jun 12, 2008 at 07:31:49PM +0200, Richard Elling wrote:
 Kurt Schreiner wrote:
  On Thu, Jun 12, 2008 at 06:38:56AM +0200, Richard Elling wrote:
 
  Vincent Fox wrote:
 
  So I decided to test out failure modes of ZFS root mirrors.
 
  Installed on a V240 with nv90.  Worked great.
 
  Pulled out disk1, then replaced it and attached again, resilvered, all 
  good.
 
  Now I pull out disk0 to simulate failure there.  OS up and running fine, 
  but lots of error messages about SYNC CACHE.
 
  Next I decided to init 0, and reinsert disk 0, and reboot.  Uh oh!
 
 
  This is actually very good.  It means that ZFS recognizes that there
  are two out-of-sync mirrors and you booted from the oldest version.
  What happens when you change the boot order?
   -- richard
 
  Hm, but the steps taken, as I read it, were:
 
pull disk1
replace
*resilver*
pull disk0
...
  So the 2 disks should be in sync (due to resilvering)? Or is there
  another step needed to get the disks in sync?
 
 
 The amnesia occurred later:
 Now I pull out disk0 to simulate failure there. OS up and running
 fine, but lots of error messages about SYNC CACHE.
 Next I decided to init 0, and reinsert disk 0, and reboot. Uh oh!
Ah! Ok, got it now...

Thanks,
Kurt


Re: [zfs-discuss] Filesystem for each home dir - 10,000 users?

2008-06-12 Thread Chris Siebenmann
| Every time I've come across a usage scenario where the submitter asks
| for per-user quotas, it's usually a university-type scenario, where
| universities are notorious for providing lots of CPU horsepower (many,
| many servers) attached to a simply dismal amount of back-end storage.

 Speaking as one of those pesky university people (although we don't use
quotas): one of the reasons this happens is that servers are a lot less
expensive than disk space. With disk space you have to factor in the
cost of backups and ongoing maintenance, whereas another server is just N
thousand dollars in one-time costs and some rack space.

(This assumes that you are not rack space, heat, or power constrained,
which I think most university environments generally are not.)

 Or to put it another way: disk space is a permanent commitment,
servers are not.

- cks


Re: [zfs-discuss] Boot from mirrored vdev

2008-06-12 Thread Richard Elling
Rich Teer wrote:
 Hi all,

 Booting from a two-way mirrored metadevice created using SVM
 can be a bit risky, especially when one of the drives fails
 (not being able to form a quorum, the kernel will panic).
 Is booting from a mirrored vdev created using ZFS similarly
 iffy?  That is, if one disk in the vdev dies, will the machine
 panic?

   
The machine should not panic and should be bootable, if properly
configured.
 -- richard



[zfs-discuss] SXCE build 90 vs S10U6?

2008-06-12 Thread Paul B. Henson

How close is Solaris Express build 90 to what will be released as the
official Solaris 10 update 6?

We just bought five x4500 servers, but I don't really want to deploy in
production with U5. There are a number of features in U6 I'd like to have
(zfs allow for better integration with our local identity system, refquota
support to minimize user confusion, ZFS boot, ...)

On the other hand, I don't really want to let these five servers sit around
as insanely expensive and heavy paperweights all summer waiting for U6 to
hopefully be released by September.

My understanding is that SXCE maintains the same packaging system and
jumpstart installation procedure as Solaris 10 (as opposed to OpenSolaris,
which is completely different). If SXCE is close enough to what will become
Solaris 10U6, I could do my initial development and integration on top of
that, and be ready to go into production almost as soon as U6 is released,
rather than wait for it to be released and then have to spin my wheels
working with it.

Would it be feasible to develop a ZFS boot jumpstart configuration with
SXCE that would be mostly compatible with U6? Does SXCE have any particular
ZFS features above and beyond what will be included in U6 I should be sure
to avoid? Any other caveats I would want to take into consideration?
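
For concreteness, the sort of profile I have in mind looks something like this
(an untested sketch based on my reading of the ZFS boot JumpStart support in
SXCE; keywords and device names may not carry over to U6 exactly as shown):

  install_type  initial_install
  cluster       SUNWCXall
  # create a mirrored ZFS root pool and install into a named BE
  pool          rpool auto auto auto mirror c0t0d0s0 c0t1d0s0
  bootenv       installbe bename zfsBE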

Thanks much...


-- 
Paul B. Henson  |  (909) 979-6361  |  http://www.csupomona.edu/~henson/
Operating Systems and Network Analyst  |  [EMAIL PROTECTED]
California State Polytechnic University  |  Pomona CA 91768


Re: [zfs-discuss] SXCE build 90 vs S10U6?

2008-06-12 Thread Albert Lee

On Thu, 2008-06-12 at 17:52 -0700, Paul B. Henson wrote:
 How close is Solaris Express build 90 to what will be released as the
 official Solaris 10 update 6?
 
 We just bought five x4500 servers, but I don't really want to deploy in
 production with U5. There are a number of features in U6 I'd like to have
 (zfs allow for better integration with our local identity system, refquota
 support to minimize user confusion, ZFS boot, ...)
 
 On the other hand, I don't really want to let these five servers sit around
 as insanely expensive and heavy paperweights all summer waiting for U6 to
 hopefully be released by September.
 
 My understanding is that SXCE maintains the same packaging system and
 jumpstart installation procedure as Solaris 10 (as opposed to OpenSolaris,
 which is completely different). If SXCE is close enough to what will become
 Solaris 10U6, I could do my initial development and integration on top of
 that, and be ready to go into production almost as soon as U6 is released,
 rather than wait for it to be released and then have to spin my wheels
 working with it.


While the S10 updates include features backported from Nevada you can
only upgrade from S10 to Solaris Express, not the other way around
(which would technically be a downgrade).

(As you probably know Solaris 10 and Nevada are completely separate
lines of development. Solaris Express is built from Nevada, as are the
other OpenSolaris distributions.)

 
 Would it be feasible to develop a ZFS boot jumpstart configuration with
 SXCE that would be mostly compatible with U6? Does SXCE have any particular
 ZFS features above and beyond what will be included in U6 I should be sure
 to avoid? Any other caveats I would want to take into consideration?
 

I don't think there will be any spec changes for S10u6 from the ZFS boot
support currently available in SX, but the JumpStart configuration for
SX might not be compatible for other reasons (install-discuss may know
better).

-Albert



Re: [zfs-discuss] SXCE build 90 vs S10U6?

2008-06-12 Thread Mike Gerdts
On Thu, Jun 12, 2008 at 9:22 PM, Tim [EMAIL PROTECTED] wrote:
 They aren't even close to each other.  Things like in-kernel cifs will
 never be put back.

 My question is, what is holding you back from just deploying on sxce?
 Sun now offers support for it.

To the best of my knowledge, Sun has never provided support for sxce.
They have provided support for sxde, but that is winding down.

http://developers.sun.com/sxde/support.jsp

With the release of OpenSolaris 2008.05 we are pleased to
announce the availability of OpenSolaris Subscriptions
support as well as Sun Developer Expert Assistance for
OpenSolaris. This marks the end of the SXDE program. To
provide a smooth transition to OpenSolaris support, Sun
Developer Expert Assistance for SXDE 1/08 will remain
available through July of 2008. Thank you for your support
and participation. We look forward to seeing you at
opensolaris.com.

If you've been following the various lists related to OpenSolaris
2008.05, you will likely understand that the plans and mechanisms
around its support are not yet fully baked.  Currently, the only
installation mechanism requires a live CD and GUI console.  This
doesn't fit very well with my idea of what I want to run in production
in a data center.  That, combined with a relatively short
supported life (18 months), means it doesn't fit the bill for many data
centers.

I'm thinking that by the time that it is possible to have a private
repository (e.g. mirror of pkg.sun.com) and the current batch of
really fresh code from the Installation and Packaging community gets
burned in a bit, the 18 month cycle will not be such a big deal in
many cases.  It's shaping up that upgrading to the latest bits should
be easier and safer than patching is today.

-- 
Mike Gerdts
http://mgerdts.blogspot.com/


Re: [zfs-discuss] SXCE build 90 vs S10U6?

2008-06-12 Thread Tim
I guess I find the difference between b90 and opensolaris trivial
given we're supposed to be getting constant updates following the sxce
builds.



On 6/12/08, Mike Gerdts [EMAIL PROTECTED] wrote:
 On Thu, Jun 12, 2008 at 9:22 PM, Tim [EMAIL PROTECTED] wrote:
 They aren't even close to each other.  Things like in-kernel cifs will
 never be put back.

 My question is, what is holding you back from just deploying on sxce?
 Sun now offers support for it.

 To the best of my knowledge, Sun has never provided support for sxce.
 They have provided support for sxde, but that is winding down.

 http://developers.sun.com/sxde/support.jsp

 With the release of OpenSolaris 2008.05 we are pleased to
 announce the availability of OpenSolaris Subscriptions
 support as well as Sun Developer Expert Assistance for
 OpenSolaris. This marks the end of the SXDE program. To
 provide a smooth transition to OpenSolaris support, Sun
 Developer Expert Assistance for SXDE 1/08 will remain
 available through July of 2008. Thank you for your support
 and participation. We look forward to seeing you at
 opensolaris.com.

 If you've been following the various lists related to OpenSolaris
 2008.05, you will likely understand that the plans and mechanisms
 around its support are not yet fully baked.  Currently, the only
 installation mechanism requires a live CD and GUI console.  This
 doesn't fit very well with my idea of what I want to run in production
 in a data center.  That, combined with a relatively short
 supported life (18 months), means it doesn't fit the bill for many data
 centers.

 I'm thinking that by the time that it is possible to have a private
 repository (e.g. mirror of pkg.sun.com) and the current batch of
 really fresh code from the Installation and Packaging community gets
 burned in a bit, the 18 month cycle will not be such a big deal in
 many cases.  It's shaping up that upgrading to the latest bits should
 be easier and safer than patching is today.

 --
 Mike Gerdts
 http://mgerdts.blogspot.com/



Re: [zfs-discuss] SXCE build 90 vs S10U6?

2008-06-12 Thread Mike Gerdts
On Thu, Jun 12, 2008 at 10:12 PM, Tim [EMAIL PROTECTED] wrote:
 I guess I find the difference between b90 and opensolaris trivial
 given we're supposed to be getting constant updates following the sxce
 builds.

But the supported version of OpenSolaris will not be on the same
schedule as SXCE.  OpenSolaris 2008.05 is based on snv_86.  The
supported version will only have bug fixes until 2008.11.  That is, it
follows much the same type of schedule that SXDE did.

Additionally, OpenSolaris has completely redone the installation and
packaging bits.  When you are running a bunch of servers with
aggregate storage capacity of over 100 TB you are probably doing
something that is rather important to the company that shelled out
well over $100,000 for the hardware.  In most (not all) environments
that I have worked in this says that you don't want to be relying too
heavily on 1.0 software[1] or external web services[2] that the
maintainer has not shown a track record[3] of maintaining in a way
that meets typical enterprise-level requirements.


1. The non-live CD installer has not even made it into the unstable
Mercurial repository.  The pkg and beadm commands and associated
libraries have less than a month of existence in anything that any
vendor is claiming to support.
2. AFAIK, pkg.sun.com does not serve packages yet.
pkg.opensolaris.org serves up packages from snv_90 by default even
though snv_86 is the variant that is supposedly supported.
3. There were numerous complaints of repeated timeouts when the snv_90
packages were released resulting in having to restart the upgrade from
the start.

-- 
Mike Gerdts
http://mgerdts.blogspot.com/


Re: [zfs-discuss] SXCE build 90 vs S10U6?

2008-06-12 Thread Tim
...There was a post just this afternoon stating the OpenSolaris update
track would be back to following SXCE with b91, so I haven't a clue
what you're talking about.

As for the features/support they're looking for: if they wanted
infallible enterprise storage, a Thumper was the wrong choice on day 1.
I love the platform, but it's nowhere near the league of a filer, or
the universe of a USP/Sym.



On 6/12/08, Mike Gerdts [EMAIL PROTECTED] wrote:
 On Thu, Jun 12, 2008 at 10:12 PM, Tim [EMAIL PROTECTED] wrote:
 I guess I find the difference between b90 and opensolaris trivial
 given we're supposed to be getting constant updates following the sxce
 builds.

 But the supported version of OpenSolaris will not be on the same
 schedule as sxce.  Opensolaris 2008.05 is based on snv_86.  The
 supported version will only have bug fixes until 2008.11.  That is, it
 follows much the same type of schedule that sxde did.

 Additionally, OpenSolaris has completely redone the installation and
 packaging bits.  When you are running a bunch of servers with
 aggregate storage capacity of over 100 TB you are probably doing
 something that is rather important to the company that shelled out
 well over $100,000 for the hardware.  In most (not all) environments
 that I have worked in this says that you don't want to be relying too
 heavily on 1.0 software[1] or external web services[2] that the
 maintainer has not shown a track record[3] of maintaining in a way
 that meets typical enterprise-level requirements.


 1. The non-live CD installer has not even made it into the unstable
 Mercurial repository.  The pkg and beadm commands and associated
 libraries have less than a month of existence in anything that any
 vendor is claiming to support.
 2. AFAIK, pkg.sun.com does not serve packages yet.
 pkg.opensolaris.org serves up packages from snv_90 by default even
 though snv_86 is the variant that is supposedly supported.
 3. There were numerous complaints of repeated timeouts when the snv_90
 packages were released resulting in having to restart the upgrade from
 the start.

 --
 Mike Gerdts
 http://mgerdts.blogspot.com/



Re: [zfs-discuss] Filesystem for each home dir - 10,000 users?

2008-06-12 Thread Keith Bierman

On Jun 12, 2008, at 12:46 PM, Chris Siebenmann wrote:


  Or to put it another way: disk space is a permanent commitment,
 servers are not.


In the olden times (e.g. 1980s) on various CDC and Univac timesharing  
services, I recall there being two kinds of storage ... dayfiles  
and permanent files. The former could (and as a matter of policy did)  
be removed at the end of the day.

It was typically cheaper to move the fraction of one's dayfile output  
to tape, and have it rolled back in the next day ... but that was an  
optimization (or pessimization if the true costs were calculated).

I could easily imagine providing two tiers of storage for a  
university environment ... one which wasn't backed up, and doesn't  
come with any serious promises ... which could be pretty inexpensive  
and the second tier which has the kind of commitments you suggest are  
required.

Tier 2 should be better than storing things in /tmp, but could  
approach consumer pricing ... and still be good enough for a lot of  
uses.
-- 
Keith H. Bierman   [EMAIL PROTECTED]  | AIM kbiermank
5430 Nassau Circle East  |
Cherry Hills Village, CO 80113   | 303-997-2749
speaking for myself* Copyright 2008



