Re: [zfs-discuss] grrr, How to get rid of mis-touched file named `-c'

2011-11-28 Thread Jason King (Gmail)
Did you try rm -- filename ?
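
For example, either of these forms should work (assuming the files sit in
the current directory):

  rm -- -O -c -k      # '--' marks the end of options
  rm ./-O ./-c ./-k   # or give a path that doesn't begin with '-'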

Sent from my iPhone

On Nov 23, 2011, at 1:43 PM, Harry Putnam rea...@newsguy.com wrote:

 Somehow I touched some rather peculiar file names in ~.  Experimenting
 with something I've now forgotten I guess.
 
 Anyway I now have 3 zero length files with names -O, -c, -k.
 
 I've tried as many styles of escaping as I could come up with but all
 are rejected like this:
 
  rm \-c 
  rm: illegal option -- c
  usage: rm [-fiRr] file ...
 
 Ditto for:
 
  [\-]c
  '-c'
  *c
  '-'c
 \075c
 
 OK, I'm out of escapes.  or other tricks... other than using emacs but
 I haven't installed emacs as yet.
 
 I can just ignore them of course, until such time as I do get emacs
 installed, but by now I just want to know how it might be done from a
 shell prompt.
 
 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Native ZFS for Linux

2010-06-10 Thread Jason King
On Thu, Jun 10, 2010 at 11:32 PM, Erik Trimble erik.trim...@oracle.com wrote:
 On 6/10/2010 9:04 PM, Rodrigo E. De León Plicet wrote:

 On Tue, Jun 8, 2010 at 7:14 PM, Anurag Agarwalanu...@kqinfotech.com
  wrote:


 We at KQInfotech initially started on an independent port of ZFS to
 Linux.  When we posted our progress about the port last year, we came to
 know about the work on the LLNL port.  Since then we have been working to
 re-base our changes on top of Brian's changes.

 We are working on porting the ZPL on that code.  Our current status is
 that mount/unmount is working.  Most of the directory operations and
 read/write are also working.  There is still a lot more development work
 and testing that needs to go into this, but we are committed to making
 this happen, so please stay tuned.


 Good times ahead!


 I don't mean to be a PITA, but I'm assuming that someone lawyerly has had
 the appropriate discussions with the porting team about how linking against
 the GPL'd Linux kernel means your kernel module has to be GPL-compatible.
  It doesn't matter if you distribute it outside the general kernel source
 tarball, what matters is that you're linking against a GPL program, and the
 old GPL v2 doesn't allow for a non-GPL-compatibly-licensed module to do
 that.

 As a workaround, take a look at what nVidia did for their X driver - it uses
 a GPL'd kernel module as a shim, which their codebase can then call from
 userland. Which is essentially what the ZFS FUSE folks have been reduced to
 doing.

How does EMC get away with it with powerpath, or Symantec with VxVM
and VxFS? -- I don't recall any shim modules with either product on
Linux when I used them at a previous job, yet they're still there.


 If the new work is a whole new implementation of the ZFS *design* intended
 for the linux kernel, then Yea! Great!  (fortunately, it does sound like
 this is what's going on)  Otherwise, OpenSolaris CDDL'd code can't go into a
 Linux kernel, module or otherwise.

Well technically they could start with the GRUB zfs code, which is GPL
licensed, but I don't think that's the case.
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] useradd(1M) and ZFS dataset homedirs

2010-05-14 Thread Jason King
In the meantime, you can use autofs to do something close to this if
you like (sort of like the pam_mkhomedir module) -- you can have it
execute a script that returns the appropriate auto_user entry (given a
username as input).  I wrote one a long time ago that would do a zfs
create if the dataset didn't already exist (and assuming the homedir
for the user was /home/USERNAME from getent).
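
A minimal sketch of such an executable map script (the pool, the paths, and
the lofs entry format here are all illustrative):

  #!/bin/ksh
  # Executable autofs map: automountd invokes this with the username as $1
  # and uses whatever is printed on stdout as the map entry.
  user="$1"
  ds="rpool/export/home/$user"
  dir="/export/home/$user"
  if ! zfs list "$ds" > /dev/null 2>&1; then
      zfs create "$ds" || exit 1
      chown "$user" "$dir"
  fi
  echo "-fstype=lofs :$dir"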

On Fri, May 14, 2010 at 8:15 AM, David Magda dma...@ee.ryerson.ca wrote:
 I have a suggestion on modifying useradd(1M) and am not sure where to
 input it.

 Since individual ZFS file systems often make it easy to manage things,
 would it be possible to modify useradd(1M) so that if the 'base_dir' is in
 a zpool, a new dataset is created for the user's homedir?

 So if you specify -m, a regular directory is created, but if you specify
 (say) -z, a new dataset is created. Usermod(1M) would also probably have
 this option.


 GNU / Linux already has a -Z (capital-zed) option AFAICT:

       -Z, --selinux-user SEUSER
           The SELinux user for the user´s login. The default is
           to leave this field blank, which causes the system to
           select the default SELinux user.


 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS snapshot versus Netapp - Security and convenience

2010-05-03 Thread Jason King
If you're just wanting to do something like the netapp .snapshot
(where it's in every directory), I'd be curious if the CIFS shadow
copy support might already have done a lot of the heavy lifting for
this. That might be a good place to look

On Mon, May 3, 2010 at 7:25 PM, Peter Jeremy
peter.jer...@alcatel-lucent.com wrote:
 On 2010-Apr-30 21:56:46 +0800, Edward Ned Harvey solar...@nedharvey.com 
 wrote:
How many bytes long is an inode number?  I couldn't find that easily by
googling, so for the moment, I'll guess it's a fixed size, and I'll guess
64bits (8 bytes).

 Based on a rummage in some header files, it looks like it's 8 bytes.

How many bytes is that?  Would it be exceptionally difficult to extend
and/or make variable?

 Extending inodes increases the amount of metadata associated with a
 file, which increases overheads for small files.  It looks like a ZFS
 inode is currently 264 bytes, but is always stored with a dnode and
 currently has some free space.  ZFS code assumes that the physical
 dnode (dnode+znode+some free space) is a fixed size and making it
 variable is likely to be quite difficult.

One important consideration in that hypothetical scenario would be
fragmentation.  If every inode were fragmented in two, that would be a real
drag for performance.  Perhaps every inode could be extended (for example)
32 bytes to accommodate a list of up to 4 parent inodes, but whenever the
number of parents exceeds 4, the inode itself gets fragmented to store a
variable list of parents.

 ACLs already do something like this.  And having parent information
 stored away from the rest of the inode would not impact the normal
 inode access time since the parent information is not normally needed.

 On 2010-Apr-30 23:08:58 +0800, Edward Ned Harvey solar...@nedharvey.com 
 wrote:
Therefore, it should be very easy to implement proof of concept, by writing
a setuid root C program, similar to sudo which could then become root,
identify the absolute path of a directory by its inode number, and then
print that absolute path, only if the real UID has permission to ls that
path.

 It doesn't need to be setuid.  Check out
 http://minnie.tuhs.org/cgi-bin/utree.pl?file=V6/usr/source/s2/pwd.c
 http://minnie.tuhs.org/cgi-bin/utree.pl?file=V7/usr/src/cmd/pwd.c
 (The latter is somewhat more readable)

While not trivial, it's certainly possible to extend inodes of files, to
include parent pointers.

 This is a far more significant change and the utility is not clear.

Also not trivial, it's certainly possible to make all this information
available under proposed directories, .zfs/inodes or something similar.

 HP Tru64 already does something like this.

 --
 Peter Jeremy

 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS snapshot versus Netapp - Security and convenience

2010-05-03 Thread Jason King
Well, the GUI I think is just Windows; on the server side it's all just
APIs that are presented to Windows.

On Mon, May 3, 2010 at 10:16 PM, Edward Ned Harvey
solar...@nedharvey.com wrote:
 From: jason.brian.k...@gmail.com [mailto:jason.brian.k...@gmail.com] On
 Behalf Of Jason King

 If you're just wanting to do something like the netapp .snapshot
 (where it's in every directory), I'd be curious if the CIFS shadow
 copy support might already have done a lot of the heavy lifting for
 this. That might be a good place to look

 This is a wonderful suggestion.  Although I'm not happy with the GUI
 implementation of CIFS shadow copy, it certainly does seem that they would
 have to tackle a lot of the same issues.

 Heheheh.  Not that I have any clue how to start answering that question.
 ;-)


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Making ZFS better: zfshistory

2010-04-21 Thread Jason King
ISTR POSIX also doesn't allow a number of features that can be turned
on with zfs (even ignoring the current issues that prevent ZFS from
being fully POSIX compliant today).  I think an additional option for
the snapdir property ('directory' ?) that provides this behavior (with
suitable warnings about posix compliance) would be reasonable.
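
Something along these lines, where 'directory' is only the value being
proposed here, not an option that exists today:

  zfs set snapdir=directory tank/home   # hypothetical proposed value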

I believe it's sufficient that zfs provide the necessary options to
act in a posix compliant manner (much like you have to set $PATH
correctly to get POSIX conforming behavior, even though that might not
be the default), though I'm happy to be corrected about this.


On Wed, Apr 21, 2010 at 12:45 PM, Nicolas Williams
nicolas.willi...@oracle.com wrote:
 POSIX doesn't allow us to have special dot files/directories outside
 filesystem root directories.

 Nico
 --
 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Making ZFS better: zfshistory

2010-04-21 Thread Jason King
It still has the issue that the end user has to know where the root of
the filesystem is in the tree (assuming it's even accessible on the
system -- might not be for an NFS mount).


On Wed, Apr 21, 2010 at 6:01 PM, Brandon High bh...@freaks.com wrote:
 On Wed, Apr 21, 2010 at 10:38 AM, Edward Ned Harvey
 solar...@nedharvey.com wrote:
 At present, the workaround I have for zfs is:
        ln -s .zfs/snapshot snapshot
 This makes the snapshot directory plainly visible to all NFS and CIFS users.
 Easy to find every time, easy to remember.  Especially important for Mac
 cifs clients, because there's no addressbar to type in .zfs even if you
 knew that's what you want to do.

 You can also set  snapdir=visible for the datasets that you care
 about, which will make them show up for all users.

 -B

 --
 Brandon High : bh...@freaks.com
 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] benefits of zfs root over ufs root

2010-04-01 Thread Jason King
On Thu, Apr 1, 2010 at 9:06 AM, David Magda dma...@ee.ryerson.ca wrote:
 On Wed, March 31, 2010 21:25, Bart Smaalders wrote:

 ZFS root will be the supported root filesystem for Solaris Next; we've
 been using it for OpenSolaris for a couple of years.

 This is already supported:

 Starting in the Solaris 10 10/08 release, you can install and boot from a
 ZFS root file system in the following ways:

 http://docs.sun.com/app/docs/doc/819-5461/zfsboot-1

I suspect what was meant is that for new installs of Solaris Next, ZFS will
be the only supported option (i.e. you can upgrade with UFS root, but not do
new installs with UFS root).  I thought I saw comments to that effect in the
past (but I don't work, nor have I ever worked, for Sun/Oracle, so this is
just my recollection of past conversations).
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] benefits of zfs root over ufs root

2010-03-31 Thread Jason King
On Wed, Mar 31, 2010 at 7:53 PM, Erik Trimble erik.trim...@oracle.com wrote:
 Brett wrote:

 Hi Folks,

 I'm in a shop that's very resistant to change. The management here are
 looking for major justification of a move away from ufs to zfs for root file
 systems. Does anyone know if there are any whitepapers/blogs/discussions
 extolling the benefits of zfsroot over ufsroot?

 Regards in advance
 Rep


 I can't give you any links, but here's a short list of advantages:

 (1) all the standard ZFS advantages over UFS
 (2) LiveUpgrade/beadm related improvements
      (a)  much faster on ZFS
      (b)  don't need dedicated slice per OS instance, so it's far simpler to
 have N different OS installs
      (c)  very easy to keep track of which OS instance is installed where
 WITHOUT having to mount each one
      (d)  huge space savings (snapshots save lots of space on upgrades)
 (3) much more flexible swap space allocation (no hard-boundary slices)
 (4) simpler layout of filesystem partitions, and more flexible in changing
 directory size limits (e.g. /var )
 (5) mirroring a boot disk is simple under ZFS - much more complex under
 SVM/UFS
 (6) root-pool snapshots make backups trivially easy



 --
 Erik Trimble
 Java System Support
 Mailstop:  usca22-123
 Phone:  x17195
 Santa Clara, CA
 Timezone: US/Pacific (GMT-0800)

I don't think 2b is given enough emphasis.  The ability to quickly
clone your root filesystem, apply whatever change you need to (patch,
config change), reboot into the new environment, and be able to
provably back out to the prior state with ease is a life saver (yes,
you could do this with ufs, but it assumes you have enough free slices
on your direct attached disks, and it takes _far_ longer simply
because you must copy the entire boot environment first -- adding
probably a few hours, versus the ~1s to snapshot + clone).
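
For illustration, the workflow on a ZFS root looks roughly like this (the BE
name is an example; lucreate/luactivate are the LiveUpgrade equivalents):

  beadm create patched        # snapshot + clone the current root, ~1s
  beadm mount patched /mnt    # apply patches / config changes under /mnt
  beadm activate patched      # make it the default for the next boot
  init 6
  # to back out: boot the old BE from the GRUB menu, or
  # beadm activate <old-BE> && init 6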
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS with hundreds of millions of files

2010-02-24 Thread Jason King
Could also try /usr/gnu/bin/ls -U.

I'm working on improving the memory profile of /bin/ls (as it gets
somewhat excessive when dealing with large directories), which as a
side effect should also help with this.

Currently /bin/ls allocates a structure for every file and doesn't
output anything until it has finished reading the entire directory, so
even if it skips the sort, the sort is generally only a fraction of the
total time spent, and skipping it doesn't save you much.

The structure also contains some duplicative data (I'm guessing that at
the time -- a _long_ time ago -- the decision was made to precompute
some things rather than test the mode bits; probably premature
optimization, even then).

I'm trying to make it do only what's necessary and avoid duplicate work
(so, for example, if the output doesn't need to be sorted it can display
the entries as they are read -- though the situations where this can be
done are less common than you might think).
Hopefully once I'm done (I've been tied down with some other stuff),
I'll be able to post some results.
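
For reference, the two unsorted-listing variants mentioned in this thread
(the directory path is an example):

  /usr/gnu/bin/ls -U /huge/dir > /dev/null  # GNU ls, no sorting
  /bin/ls -f /huge/dir > /dev/null          # Solaris ls, no sorting, but it
                                            # still buffers every entry first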

On Wed, Feb 24, 2010 at 7:29 PM, Kjetil Torgrim Homme
kjeti...@linpro.no wrote:
 David Dyer-Bennet d...@dd-b.net writes:

 Which is bad enough if you say ls.  And there's no option to say
 don't sort that I know of, either.

 /bin/ls -f

 /bin/ls makes sure an alias for ls to ls -F or similar doesn't
 cause extra work.  you can also write \ls -f to ignore a potential
 alias.

 without an argument, GNU ls and SunOS ls behave the same.  if you write
 ls -f * you'll only get output for directories in SunOS, while GNU ls
 will list all files.

 (ls -f has been there since SunOS 4.0 at least)
 --
 Kjetil T. Homme
 Redpill Linpro AS - Changing the game

 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] chmod behavior with symbolic links

2010-02-22 Thread Jason King
If you're doing anything with ACLs, note that the GNU utilities have no
knowledge of ACLs, so GNU chmod will not modify them (nor will GNU ls
show ACLs); you need to use /bin/chmod and /bin/ls to manipulate them.

It does sound, though, as if GNU chmod is explicitly testing for and
skipping any entry that's a symlink (at least when -R is present).  If
lchmod isn't added, perhaps a flag to indicate 'don't touch symlinks'
would be useful to add to /bin/chmod (changing the default behavior
might be a bit more complicated, though it might be worth looking into).
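
Until something like that exists, one workaround is to let find skip the
symlinks and hand everything else to the Solaris chmod (path and ACL taken
from the example below; one chmod per entry, so it's slower but safe):

  find /array0/data ! -type l -exec /bin/chmod \
      A=group:admins:rwxpdDaARWcCos:fd-:allow,group:it:rwxpdDaARWcCos:fd-:allow {} \;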

On Mon, Feb 22, 2010 at 9:06 AM, Ryan  John john.r...@bsse.ethz.ch wrote:
 Hi,



 I know it’s documented in the manual, but I find it a bit strange behaviour
 that chmod -R changes the permissions of the target of a symbolic link.



 This just really messed up my system, where I have a data directory, with a
 backup of some Linux systems.

 Within these Linux systems, there are some absolute links like -
 /usr/bin/bash

 So, what I did, was set some NFSv4 ACLs recursively on this “data” directory
 like:

 chmod -R
 A=group:admins:rwxpdDaARWcCos:fd-:allow,group:it:rwxpdDaARWcCos:fd-:allow
 /array0/data



 What I then found is /usr/bin/bash and a whole lot of other files in
 /usr/lib /lib and /usr/sbin look like:

 --+  1 root    root  1019 Feb 22 14:31 bash

     group:admins:rwxpdDaARWcCos:--I:allow

     group:it:rwxpdDaARWcCos:--I:allow



 From here, I can only think of backing out the last update and updating
 again.



 I noticed the GNU chmod won’t change the ACL of the target.



 Is there any reason for this behaviour?

 Have I missed something?



 Regards

 John Ryan



 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Oracle Performance - ZFS vs UFS

2010-02-13 Thread Jason King
On Sat, Feb 13, 2010 at 9:58 AM, Jim Mauro james.ma...@sun.com wrote:
 Using ZFS for Oracle can be configured to deliver very good performance.
 Depending on what your priorities are in terms of critical metrics, keep in
 mind
 that the most performant solution is to use Oracle ASM on raw disk devices.
 That is not intended to imply anything negative about ZFS or UFS. The
 simple fact is that when you put your Oracle datafiles on any file system,
 there's a much longer code path involved in reading and writing files,
 along with the file system's use of memory that needs to be considered.
 ZFS offers enterprise-class features (the admin model, snapshots, etc.)
 that make it a great choice to deploy in production, but, from a pure
 performance point-of-view, it's not going to be the absolute fastest.
 Configured correctly, it can meet or exceed performance requirements.

 For Oracle, you need to:
 - Make sure you're on the latest Solaris 10 update release (update 8).
 - For the datafiles, set the recordsize to align with the db_block_size (8k)
 - Put the redo logs on a separate zpool, with the default 128k recordsize
 - Disable ZFS data caching (primarycache=metadata). Let Oracle cache the
 data
   in the SGA.
 - Watch your space in your zpools - don't run them at 90% full.

 Read the link Richard sent for some additional information.

There is of course the caveat of using raw devices with databases (it
becomes harder to track usage, especially as the number of LUNs
increases, and there is slightly less visibility into their usage
statistics at the OS level).  However, perhaps now someone can implement
the CR I filed a long time ago to add ASM support to libfstyp.so, which
would allow zfs, mkfs, format, etc. to identify ASM volumes =)
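
As a rough illustration of the ZFS-side settings described above (pool and
dataset names are made up):

  zfs set recordsize=8k dbpool/oradata          # match an 8k db_block_size
  zfs set primarycache=metadata dbpool/oradata  # let the SGA cache the data
  zpool create redopool c2t0d0 c2t1d0           # redo logs in their own pool,
  zfs create redopool/redo                      # default 128k recordsize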


 Thanks,
 /jim


 Tony MacDoodle wrote:

 Was wondering if anyone has had any performance issues with Oracle running
 on ZFS as compared to UFS?

 Thanks
 

 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Oracle Performance - ZFS vs UFS (Jason King)

2010-02-13 Thread Jason King
My problem is when you have 100+ LUNs divided between the OS and DB,
keeping track of what's for what can become problematic.   It becomes
even worse when you start adding LUNs -- the chance of accidentally
grabbing a DB LUN instead of one of the new ones is non-trivial (then
there's also the chance that your storage guy might make a mistake and
give you LUNs already mapped elsewhere by accident -- which I have
seen happen before).  And when you're forced to do it at 3am after
already working 12 hours that day, well, safeguards are a good
thing.


On Sat, Feb 13, 2010 at 2:13 PM, Allen Eastwood mi...@paconet.us wrote:

 There is of course the caveat of using raw devices with databases (it
 becomes harder to track usage, especially as the number of LUNs
 increases, slightly less visibility into their usage statistics at the
 OS level ).   However perhaps now someone can implement the CR I filed
 a long time ago to add ASM support to libfstyp.so that would allow
 zfs, mkfs, format, etc. to identify ASM volumes =)

 While that would be nice, I would submit that if using ASM, usage becomes 
 solely a DBA problem.  From the OS level, as a system admin, I don't really 
 care…I refer any questions back to the DBA.  They should have tools to deal 
 with all that.

 OTOH, with more things stacked on more servers (zones, etc.) I might care if 
 there's a chance of whatever Oracle is doing affecting performance elsewhere.

 Thoughts?
 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] /usr/bin/chgrp destroys ACL's?

2010-02-10 Thread Jason King
On Wed, Feb 10, 2010 at 6:45 PM, Paul B. Henson hen...@acm.org wrote:

 We have an open bug which results in new directories created over NFSv4
 from a linux client having the wrong group ownership. While waiting for a
 patch to resolve the issue, we have a script running hourly on the server
 which finds directories owned by the wrong group and fixes them.

 One of our users complained that the ACL's on some of their content were
 broken, and upon investigation we determined /usr/bin/chgrp is breaking
 them:

 drwxrws--x+  2 root     iit_webdev       2 Feb 10 16:29 testdir
            owner@:rwxpdDaARWcC--:-di---:allow
            owner@:rwxpdDaARWcC--:--:allow
            group@:rwxpdDaARWc---:-di---:allow
            group@:rwxpdDaARWc---:--:allow
    group:iit_webdev-admin:rwxpdDaARWcC--:-di---:allow
    group:iit_webdev-admin:rwxpdDaARWcC--:--:allow
         everyone@:--x---a-R-c---:-di---:allow
         everyone@:--x---a-R-c---:--:allow
            owner@:rwxpdDaARWcC--:f-i---:allow
            group@:rwxpdDaARWc---:f-i---:allow
    group:iit_webdev-admin:rwxpdDaARWcC--:f-i---:allow
         everyone@:--:f-i---:allow

 # chgrp iit testdir

 drwxrws--x+  2 root     iit            2 Feb 10 16:29 testdir
            owner@:rwxpdDaARWcC--:-di---:allow
            owner@:dDaARWcC--:--:allow
            group@:rwxpdDaARWc---:-di---:allow
            group@:dDaARWc---:--:allow
    group:iit_webdev-admin:rwxpdDaARWcC--:-di---:allow
    group:iit_webdev-admin:rwxpdDaARWcC--:--:allow
         everyone@:--x---a-R-c---:-di---:allow
         everyone@:--a-R-c---:--:allow
            owner@:rwxpdDaARWcC--:f-i---:allow
            group@:rwxpdDaARWc---:f-i---:allow
    group:iit_webdev-admin:rwxpdDaARWcC--:f-i---:allow
         everyone@:--:f-i---:allow
            owner@:--:--:deny
            owner@:rwxp---A-W-Co-:--:allow
            group@:--:--:deny
            group@:rwxp--:--:allow
         everyone@:rw-p---A-W-Co-:--:deny
         everyone@:--x---a-R-c--s:--:allow

 Sure enough, per truss:

 chmod(testdir, 02771)                         = 0

 Looking at the chgrp man page:

     Unless chgrp  is  invoked  by  a  process  with  appropriate
     privileges, the set-user-ID and set-group-ID bits of a regu-
     lar file will be cleared  upon  successful  completion;  the
     set-user-ID and set-group-ID bits of other file types may be
     cleared.

 Well, I'm running the chgrp as *root*, and it's not *clearing* the existing
 setgid bit on the directory, it's *adding* it when it's already there. Why?
 It seems completely unnecessary and **breaks the ACL**.

 This is yet another instance of the general problem I posted about
 yesterday:

        http://www.mail-archive.com/zfs-discuss@opensolaris.org/msg34588.html

 to which I have so far received no comments (Dozens of people can spend
 over a week arguing about the cost effectiveness of Sun branded storage ;),
 and not a single person is interested in an endemic ACL problem?).

 I was completely unsuccessful at getting samba under Solaris 10 to stop
 gratuitously chmod()'ing stuff, so I ended up preloading a shared library
 overriding the chmod call with a noop. Which works perfectly, and results
 in exactly the behavior I need. But it's not really feasible to run around
 and tweak every little binary around (preload a shared library to stop
 chgrp from breaking ACL's too?), which is why I think it would be an
 excellent feature to let the underlying operating system deal with it --
 hence aclmode ignore/deny...

I believe the problem is that chgrp is not ACL aware.  I suspect that
zfs is interpreting the group ACLs and adjusting the mode value
accordingly to try to indicate the 'preserve owner/group on new file'
semantics with the old permission bits; however, it sounds like it's not
a symmetric operation -- if chgrp sees a directory with suid or sgid
set, it does chmod(file, original_mode & ~(S_IFMT)), when it should
probably be more careful if ACLs are present.

I do think the default aclmode and aclinherit settings are unintuitive
and quite surprising (I'd almost argue flat out wrong).  I've found that
setting aclmode and aclinherit to passthrough saves what little hair I
have left.   If you haven't tried that already, you might want to see if
that helps any.
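
For example (the dataset name is just a placeholder):

  zfs set aclmode=passthrough tank/export/web
  zfs set aclinherit=passthrough tank/export/web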

<rant type=mini>
My experience (perhaps others will have different experiences) is that
due to the added complexity and administrative overhead, ACLs are used
only when absolutely necessary -- i.e. you have something that due to
its nature must have very explicit and precise access control.
Things like payroll, financial, or other HR data for example.  The
last thing I want is the system going behind my back and silently
modifying the permissions I'm trying to set, and leaving directories
and files with permissions other than what was set (which is what you
get today with the defaults).

Re: [zfs-discuss] Clearing a directory with more than 60 million files

2010-01-25 Thread Jason King
On Tue, Jan 19, 2010 at 9:25 PM, Matthew Ahrens matthew.ahr...@sun.com wrote:
 Michael Schuster wrote:

 Mike Gerdts wrote:

 On Tue, Jan 5, 2010 at 4:34 AM, Mikko Lammi mikko.la...@lmmz.net wrote:

 Hello,

 As a result of one badly designed application running loose for some
 time,
 we now seem to have over 60 million files in one directory. Good thing
 about ZFS is that it allows it without any issues. Unfortunatelly now
 that
 we need to get rid of them (because they eat 80% of disk space) it seems
 to be quite challenging.

 Traditional approaches like find ./ -exec rm {} \; seem to take forever
 - after running for several days, the directory size still stays the same.
 The only way I've been able to remove anything has been by running rm -rf
 on the problematic directory from the parent level. Running this command
 shows the directory size decreasing by 10,000 files/hour, but this would
 still mean close to ten months (over 250 days) to delete everything!

 I also tried to use the unlink command on the directory as root, as the
 user who created the directory, after changing the directory's owner to
 root and so forth, but all attempts gave a Not owner error.

 Any commands like ls -f or find will run for hours (or days) without
 actually listing anything from the directory, so I'm beginning to
 suspect
 that maybe the directory's data structure is somewhat damaged. Is there
 some diagnostics that I can run with e.g zdb to investigate and
 hopefully fix for a single directory within zfs dataset?

 In situations like this, ls will be exceptionally slow partially
 because it will sort the output.

 that's what '-f' was supposed to avoid, I'd guess.

 Yes, but unfortunately, the typical reason ls is slow with huge directories
 is that it requires a huge amount of memory.  Even when not sorting (with
 -f), it still allocates a huge amount of memory for each entry listed, and
 buffers the output until the directory is entirely read.  So typically -f
 doesn't help performance much.  Improving this would be a great small
 project for an OpenSolaris contributor!  I filed a couple of bugs for this
 several years ago, I can dig them up if anyone is interested.

 --matt

After a few days of deliberation, I've decided to start working on
this (in addition to adding the 256-color ls support Danek was
interested in, as well as addressing a number of other bugs).

I suspect it's going to require a significant overhaul of the existing
code to get it to a point where it behaves better with large directories
(though the current code could probably use the cleanup anyway).

If anyone's interested in testing once I've got it to a point worth
testing (will probably be a few weeks, depending on how much time I
can commit to it)... let me know...
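
For the original removal problem, batching the unlinks at least avoids a
fork/exec of rm per file (paths are examples; -delete needs GNU find), though
it won't change how fast ZFS itself can unlink:

  find /huge/dir -type f -exec rm -f {} +       # many names per rm invocation
  /usr/gnu/bin/find /huge/dir -type f -delete   # or unlink directly from find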
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] file concatenation with ZFS copy-on-write

2009-12-03 Thread Jason King
On Thu, Dec 3, 2009 at 9:58 AM, Bob Friesenhahn
bfrie...@simple.dallas.tx.us wrote:
 On Thu, 3 Dec 2009, Erik Ableson wrote:

 Much depends on the contents of the files. Fixed size binary blobs that
 align nicely with 16/32/64k boundaries, or variable sized text files.

 Note that the default zfs block size is 128K and so that will therefore be
 the default dedup block size.

 Most files are less than 128K and occupy a short tail block so concatenating
 them will not usually enjoy the benefits of deduplication.

 It is not wise to riddle zfs with many special-purpose features since zfs
 would then be encumbered by these many features, which tend to defeat future
 improvements.

Well, it could be done in a way such that it could be fs-agnostic
(perhaps extending /bin/cat with a new flag such as -o outputfile, or
detecting if stdout is a file vs. a tty, though corner cases might get
tricky).   If a particular fs supported such a feature, it could take
advantage of it, but if it didn't, it could fall back to doing a
read+append.  Sort of like how mv figures out if the source & target
are on the same or different filesystems and acts accordingly.
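
For illustration, the proposed usage might look like this (a hypothetical
flag, not something cat supports today):

  cat -o vmcore.full vmcore.0 vmcore.1   # concatenate via the fs if it can,
                                         # read+append otherwise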

There are a few use cases I've encountered where having this would
have been _very_ useful (usually when trying to get large crashdumps
to Sun quickly).  In general, it would allow one to manipulate very
large files by breaking them up into smaller subsets while still
having the end result be a single file.
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS + fsck

2009-11-08 Thread Jason King
On Sun, Nov 8, 2009 at 7:55 AM, Robert Milkowski mi...@task.gda.pl wrote:

 fyi

 Robert Milkowski wrote:

 XXX wrote:

 | Have you actually tried to roll-back to previous uberblocks when you
 | hit the issue?  I'm asking as I haven't yet heard about any case
 | of the issue which was not solved by rolling back to a previous
 | uberblock. The problem though was that the way to do it was hackish.

  Until recently I didn't even know that this was possible or a likely
 solution to 'pool panics system on import' and similar pool destruction,
 and I don't have any tools to do it. (Since we run Solaris 10, we won't
 have official support for it for quite some time.)


 I wouldn't be that surprised if this particular feature would actually be
 backported to S10 soon. At least you may raise a CR asking for it - maybe
 you will get an access to IDR first (I'm not saying there is or isn't
 already one).

  If there are (public) tools for doing this, I will give them a try
 the next time I get a test pool into this situation.


 IIRC someone send one to the zfs-discuss list some time ago.
 Then usually you will also need to poke with zdb.
 A sketchy and unsupported procedure was discussed on the list as well.
 Look at the archives.

 | The bugs which prevented importing a pool in some circumstances were
 | really annoying but lets face it - it was bound to happen and they
 | are just bugs which are getting fixed. ZFS is still young after all.
 | And when you google for data loss on other filesystems I'm sure you
 | will find lots of user testimonies - be it ufs, ext3, reiserfs or your
 | favourite one.

  The difference between ZFS and those other filesystems is that with
 a few exceptions (XFS, ReiserFS), which sysadmins in the field didn't
 like either, those filesystems didn't generally lose *all* your data
 when something went wrong. Their official repair tools could usually
 put things back together to at least some extent.


 Generally they didn't, although I've seen situations where entire ext2 and
 ufs filesystems were lost and fsck was not able to even get them mounted
 (kernel panics right after mounting them). On another occasion fsck was
 crashing the box; in yet another, fsck claimed everything was ok but then
 the system crashed while doing a backup (fsck can't really properly fix the
 filesystem state - it is more guesswork, and sometimes it goes terribly
 wrong).

 But I agree that generally with other file systems you can recover most or
 all data just fine.
 And generally that is the case with zfs - there were probably more bugs in
 ZFS as it is a much younger filesystem, but most of them were fixed very
 quickly. And the uberblock one - I 100% agree that when you hit the issue
 and didn't know about the manual recovery method it was very bad - but it
 has finally been fixed.

 (Just as importantly, when they couldn't put things back together you
 could honestly tell management and the users 'we ran the recovery tools
 and this is all they could get back'. At the moment, we would have
 to tell users and management 'well, there are no (official) recovery
 tools...', unless Sun Support came through for once.)


 But these tools are built into zfs and run automatically, with virtually
 100% confidence that if something can be fixed it is fixed correctly and
 that if something is wrong it will be detected - thanks to end-to-end
 checksumming of data and meta-data. The problem *was* that the one scenario
 where rolling back to a previous uberblock is required was not implemented
 and required a complicated and undocumented procedure to follow. It wasn't
 a high priority for Sun as it was very rare and wasn't affecting many
 enterprise customers, and although the procedure is complicated, there is
 one, and it was successfully used on many occasions even for non-paying
 customers, thanks to guys like Victor on the zfs mailing list who helped
 some people in such situations.

 But you didn't know about it and it seems like Sun's support service was
 no use for you - which is really a shame.
 In your case I would probably point that out to them and at least get a
 good deal as compensation or something...

 But what is most important is that a fully supported, built-in and easy to
 use procedure is finally available to recover from such situations. As time
 goes on and more bugs are fixed, ZFS will behave much better in many corner
 cases, as it already does in OpenSolaris - the last 6 months or so have
 been very productive in fixing many bugs like that.

 | However the whole point of the discussion is that zfs really doesn't
 | need a fsck tool.
 | All the problems encountered so far were bugs and most of them are
 | already fixed. One missing feature was a built-in support for
 | rolling-back uberblock which just has been integrated. But I'm sure
 | there are more bugs to be found..

  I disagree strongly. Fsck tools have multiple purposes; ZFS obsoletes
 some of them but not all. One thing fsck is there for is to 

Re: [zfs-discuss] s10u8: lots of fixes, any commentary?

2009-10-15 Thread Jason King
On Thu, Oct 15, 2009 at 2:57 AM, Ian Collins i...@ianshome.com wrote:
 Dale Ghent wrote:

 So looking at the README for patch 14144[45]-09, there are ton of ZFS
 fixes and feature adds.

 The big features are already described in the update 8 release docs, but
 would anyone in-the-know care to comment or point out any interesting CR
 fixes that might be substantial in the areas of stability or performance?

 A couple of my CRs (a panic and a hang) are fixed in there, but LU appears
 to be FUBAR for systems with ZFS root and zones, so I can't run any
 tests

Having tried to install about 5 patches on a system with ZFS root +
sparse zones (plus a delegated dataset), FUBAR is putting it mildly..
:)

I found upgrade on attach worked much better in that instance (just
meant I could only snapshot, not create a new BE).  But hopefully I
can get ahold of a box for more testing to get it to actually work.


 --
 Ian.

 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] s10u8: lots of fixes, any commentary?

2009-10-15 Thread Jason King
On Thu, Oct 15, 2009 at 9:25 AM, Enda O'Connor enda.ocon...@sun.com wrote:


 Jason King wrote:

 On Thu, Oct 15, 2009 at 2:57 AM, Ian Collins i...@ianshome.com wrote:

 Dale Ghent wrote:

 So looking at the README for patch 14144[45]-09, there are ton of ZFS
 fixes and feature adds.

 The big features are already described in the update 8 release docs, but
 would anyone in-the-know care to comment or point out any interesting CR
 fixes that might be substantial in the areas of stability or
 performance?

 A couple of my CRs (a panic and a hang) are fixed in there, but LU
 appears
 to be FUBAR for systems with ZFS root and zones, so I can't run any
 tests

 Having tried to install about 5 patches on a system with ZFS root +
 sparse zones (plus a delegated dataset), FUBAR is putting it mildly..
 :)

 if you have a separate /var dataset on zfs root, then LU in update 8 (or
 using the latest 121430-42/121431-43) is broken.
 this is covered in CR 6884728

 So you do need to apply 119255-70 before patching zfs root on x86 as well,
 to avoid another issue.

 what was the error/problem you ran into, output from patchadd + logs from
 /var/sadm/patch/*/log for failed patch, or /var/tmp/patchid.log.$$ if they
 exist, plus some data on the setup, ie zfs list and zonecfg to give an idea.

My problem was creating the new BE to patch -- lucreate worked, but
lumount was a disaster and left things so horribly messed up that even
after a luumount and ludelete of the new BE, a reboot was required
just to make the system sane again (thankfully this was all in a maint
window anyway).  But it caused a bunch of stale mnttab entries that
wouldn't go away, as well as a bunch of 'already mounted' errors when
you tried to do anything with the filesystems.  I punted, rebooted to
clear things up, did a zfs snapshot of everything, then patched the
live system (since I had a boot server I could use to mount the pool
and roll back if needed).



 Enda




 I found upgrade on attach worked much better in that instance (just
 meant I could only snapshot, not create a new BE).  But hopefully I
 can get ahold of a box for more testing to get it to actually work.

 --
 Ian.

 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

 --
 Enda O'Connor x19781  Software Product Engineering
 Patch System Test : Ireland : x19781/353-1-8199718

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] White box server for OpenSolaris

2009-09-25 Thread Jason King
It does seem to come up regularly... perhaps someone with access could
throw up a page under the ZFS community with the conclusions (and
periodic updates as appropriate)..

On Fri, Sep 25, 2009 at 3:32 AM, Erik Trimble erik.trim...@sun.com wrote:
 Nathan wrote:

 I am about to embark on building a home NAS box using OpenSolaris
 with ZFS.

 Currently I have a chassis that will hold 16 hard drives, although not in
 caddies - down time doesn't bother me if I need to switch a drive, probably
 could do it running anyways just a bit of a pain. :)

 I am after suggestions of motherboard, CPU and ram.  Basically I want ECC
 ram and at least two PCI-E x4 channels.  As I want to run 2 x AOC-USAS_L8i
 cards for 16 drives.

 I want something with a bit of guts but not over the top.  I know the HCL is
 there but I want to see what other people are using in their solutions.


 Go back and look through the archives for this list. We just had this
 discussion last month. Let's not rehash it again, as it seems to get redone
 way too often.



 --
 Erik Trimble
 Java System Support
 Mailstop:  usca22-123
 Phone:  x17195
 Santa Clara, CA

 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS, power failures, and UPSes

2009-06-30 Thread Jason King
On Tue, Jun 30, 2009 at 1:36 PM, Erik Trimbleerik.trim...@sun.com wrote:
 Bob Friesenhahn wrote:

 On Tue, 30 Jun 2009, Neal Pollack wrote:

 Actually, they do quite a bit more than that. They create jobs, generate
 revenue for battery manufacturers, and tech's that change batteries and do
 PM maintenance on the large units.  Let's not

 It sounds like this is a responsibility which should be moved to the US
 federal goverment since UPSs create jobs.

 Actually, I think UPS already employs some 410,000+ people, making it the
 3rd largest private employer in the USA. (5th overall, if you include the
 Federal Gov't and the US Postal Service).

 wink


 In the last 28 years of doing this stuff, I've found a few times that the
 UPS has actually worked and lasted as long as the outage.

 I have seen UPSs help quite a lot for short glitches lasting seconds, or a
 minute.  Otherwise the outage is usually longer than the UPSs can stay up
 since the problem required human attention.

 A standby generator is needed for any long outages.

 Bob
 --
 Bob Friesenhahn
 bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
 GraphicsMagick Maintainer,    http://www.GraphicsMagick.org/
 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

 As someone who has spent enough time doing data center work, I can attest to
 the fact that UPSes are really useful only as extremely-short-interval
 solutions. A dozen or so minutes, at best.

 The best design I've seen was for an old BBN (hey, remember them!) site just
 outside of Cambridge, MA.  It took in utility power, ran it through a
 conditioner setup, and then through this nice switch thing.  The switch took
 three inputs:  Utility, a local diesel generator, and a line of marine
 batteries.  The switch itself was internally redundant (which isn't hard to
 do, it's 50's tech), so you could draw power from any (or even all 3 at
 once).  Nothing really fancy; it was simple, with no semiconductor stuff to
 fail - just all 50-ish hardwired circuitry. I don't even think there was a
 transistor in the whole shebang. Lots of capacitors, though.   :-)


 The gist of the whole thing was that if utility power was out more than 5
 minutes, there was no good predictor of how long it would remain out - I
 saw a nice little graph that showed no real good prediction of outage time
 based on existing outage length (i.e. if the power has been out X minutes,
 you can expect it to be restored in Y minutes...).   I suspect it was
 something like 20 years of accumulated data or so...

 The end of this is simple:  UPSes should give you enough time to start the
 gen-pack.  If you are having problems with your gen-pack, you'll never have
 enough UPS time to fix it (and, it's not cost-effective to try to make it
 so), so FIX YOUR GEN PACK BEFORE the outage.  Which means - TEST it, and
 TEST it, and TEST it again!

Slight corollary -- just because you have a generator and test it
doesn't mean you can assume you can get fuel in a timely manner (so
still be prepared to shut down if needed).  I have seen places whose DR
plans rely completely on the assumption that there will never be any
problems refueling their generators.  However, last year after Ike
hit, one of AT&T's central offices lost power because it ran out of
fuel (and couldn't get refilled in time).



 For home use, I set my UPS to immediately shut down anything attached to it
 for /any/ service outage.  Large enough batteries to handle anything more
 than a couple of minutes are frankly a fire-hazard for the home, not to
 mention a maintenance PITA.

 --
 Erik Trimble
 Java System Support
 Mailstop:  usca22-123
 Phone:  x17195
 Santa Clara, CA

 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] cannot mount '/export' directory is not empty

2009-03-09 Thread Jason King
On Mon, Mar 9, 2009 at 5:31 PM, Jan Hlodan jan.hlo...@sun.com wrote:
 Hi Tomas,

 thanks for the answer.
 Unfortunately, it didn't help much.
 However I can mount all the file systems, but the system is broken - the
 desktop won't come up.

 Could not update ICEauthority file /.ICEauthority
 There is a problem with the configuration serve.
 (/usr/lib/gconf-check-2-exited with status 256)

 Then I can see wallpaper and cursor. That's it, nothing more.

There's a bug with mounting hierarchical mounts (i.e. trying to mount
/export/home before /export or such); you might be hitting that
(unfortunately the bugid escapes me at the moment).
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] posix acl migration to zfs acl's

2009-02-20 Thread Jason King
On Fri, Feb 20, 2009 at 2:59 PM, Darin Perusich
darin.perus...@cognigencorp.com wrote:
 Hello All,

 I'm in the process of migrating a file server from Solaris 9, where
 we're making extensive use of POSIX-ACLs, to ZFS and I have a question
 that I'm hoping someone can clear up for me. I'm using ufsrestore to
 restore the data to the ZFS file system so the ACLs are converted to
 NFSv4 style ACLs and everything looks good. But when I inspect the
 converted ZFS-ACLs it looks to me like there are additional and
 redundant ACLs, specifically those converted from the POSIX-ACL mask value.

 In the case I'm looking at the POSIX-ACL being converted on the
 directory is as follows:

 # file: test_dir1
 # owner: root
 # group: group_1
 user::rwx
 group::r-x  #effective:r-x
 group:group_2:r-x#effective:r-x
 mask:rwx
 other:---

 Once the directory is restored to the ZFS file system the ACLs have been
 converted to the following:

 drwxr-x---+  2 root group_1   2 Feb 20 15:00 test_dir1
owner@:rwxp-DaA--cC-s:--:allow
owner@:--:--:deny
group@:---A---C--:--:deny
group@:r-x---a---c--s:--:allow
  group:group_2:---A---C--:--:deny
  group:group_2:r-x---a---c--s:--:allow
group@:-w-p-D-A---C--:--:deny
  group:group_2:-w-p-D-A---C--:--:deny
 everyone@:--a---c--s:--:allow
 everyone@:rwxp-D-A---C--:--:deny

 The ACLs that I'm questioning the need for are:

group@:---A---C--:--:deny
group:group_2:---A---C--:--:deny

 Wouldn't these 2 ACLs be covered by the other group deny ACLs?

group@:---A---C--:--:deny
group@:-w-p-D-A---C--:--:deny
and
group:group_2:---A---C--:--:deny
group:group_2:-w-p-D-A---C--:--:deny

 It would seem to me that the converted POSIX-ACL mask are unnecessary.

 Regards,

 --
 Darin Perusich
 Unix Systems Administrator
 Cognigen Corporation
 395 Youngs Rd.
 Williamsville, NY 14221
 Phone: 716-633-3463
 Email: darin...@cognigencorp.com
 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Take a look at the aclmode and aclinherit properties of the filesystem
(they're in the zfs manpage).  I know I found the defaults to be
rather surprising (and was pulling what little hair I had out until I
discovered them when trying to get ACLs working on ZFS).
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Practical Application of ZFS

2009-01-07 Thread Jason King
On Wed, Jan 7, 2009 at 3:51 PM, Kees Nuyt k.n...@zonnet.nl wrote:
 On Tue, 6 Jan 2009 21:41:32 -0500, David Magda
 dma...@ee.ryerson.ca wrote:

On Jan 6, 2009, at 14:21, Rob wrote:

 Obviously ZFS is ideal for large databases served out via
 application level or web servers. But what other practical ways are
 there to integrate the use of ZFS into existing setups to experience
 it's benefits.

Remember that ZFS is made up of the ZPL and the DMU (amongst other
things). The ZPL is the POSIX compatibility layer that most of us use.
The DMU is the actual transactional object model that stores the
actual data objects (e.g. files).

It would technically be possible for (say) MySQL to create a database
engine on top of that transactional store.

 I wouldn't be surprised to see that happen,
 given that:

 - InnoDB used to be the only transactional
  storage engine in MySQL

 - Innobase, the creator of InnoDB, has been
  acquired by Oracle

 - MySQL desperately needs a replacement
  for the InnoDB storage engine

 - MySQL has been acquired by SUN

 - ZFS (ZPL,DMU) is by SUN.

 - performance of the MySQL/InnoDB/ZFS stack is sub-optimal.

 No, I don't have any inside information.

Well if you look at some of the diagrams from
http://www.opensolaris.org/os/community/zfs/docs/zfs_last.pdf  it's
obvious that it's been thought of already.

I actually thought a neat project would be to create a transactional
API that was more or less a thin layer on top of ZFS, and then create
a database using the hotspot jvm (so probably in java, but not
necessairly so) to handle the query parsing, optimization, etc.  The
thought was the query could be compiled to java bytecode (and possibly
to native machine language all without having to write a native
machine language compiler).  Of course it looks like derby does the
'compile to bytecode' stuff already.  But the backend userland
transactional api using ZFS might still be an interesting project if
anyone was interested.
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] J4500 device renumbering

2008-07-15 Thread Jason King
On Tue, Jul 15, 2008 at 4:17 AM, Ross [EMAIL PROTECTED] wrote:
 Well I haven't used a J4500, but when we had an x4500 (Thumper) on loan they 
 had Solaris pretty well integrated with the hardware.  When a disk failed, I 
 used cfgadm to offline it and as soon as I did that a bright blue Ready to 
 Remove LED lit up on the drive tray of the faulty disk, right next to the 
 handle you need to lift to remove the drive.

 There's also a bright red Fault LED as well as the standard green OK LED, 
 so spotting failed drives really should be a piece of cake.  Certainly in my 
 tests, so long as you followed the procedure in the manual it really is 
 impossible to get the wrong drive.


 This message posted from opensolaris.org
 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


http://blogs.sun.com/eschrock/entry/external_storage_enclosures_in_solaris
has a bit more info on some of this -- while I would expect Sun
products to integrate that well, it's nice to know the framework is
there for other vendors to do the same if they wish.
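
For reference, the cfgadm step described above typically looks something like
this (the attachment point is an example; list the real ones first):

  cfgadm -al                      # show attachment points and their state
  cfgadm -c unconfigure sata1/3   # offline the failed disk before pulling it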
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] [caiman-discuss] swap dump on ZFS volume

2008-07-01 Thread Jason King
On Tue, Jul 1, 2008 at 8:10 AM, Mike Gerdts [EMAIL PROTECTED] wrote:
 On Tue, Jul 1, 2008 at 7:31 AM, Darren J Moffat [EMAIL PROTECTED] wrote:
 Mike Gerdts wrote:

 On Tue, Jul 1, 2008 at 5:56 AM, Darren J Moffat [EMAIL PROTECTED]
 wrote:

 Instead we should take it completely out of their hands and do it all
 dynamically when it is needed.  Now that we can swap on a ZVOL and ZVOLs
 can be extended this is much easier to deal with and we don't lose the
 benefit of protected swap devices (in fact we have much more than we had
 with SVM).

 Are you suggesting that if I have a system that has 500 MB swap free
 and someone starts up another JVM with a 16 GB heap that swap should
 automatically grow by 16+ GB right at that time?  I have seen times
 where applications require X GB of RAM, make the reservation, then
 never dirty more than X/2 GB of pages.  In these cases dynamically
 growing swap to a certain point may be OK.

 Not at all, and I don't see how you could get that assumption from what I
 said.  I said dynamically when it is needed.

 I think I came off wrong in my initial message.  I've seen times when
 vmstat reports only megabytes of free swap while gigabytes of RAM were
 available.  That is, reservations far outstripped actual usage.  Do
 you have mechanisms in mind to be able to detect such circumstances
 and grow swap to a point that the system can handle more load without
 spiraling to a long slow death?

Having this be dynamic would be nice with Oracle.  10g at least will use
DISM in the preferred configuration Oracle is now preaching to DBAs.
I ran into this a few months ago on an upgrade (Solaris 8 -> 10,
Oracle 8 -> 10g, and a hw upgrade).  The side effect of using DISM is
that it reserves an amount equal to the SGA in swap, and will fail to
start up if swap is too small.  In practice, I don't see the space ever
being touched (I suspect it's mostly there as a requirement for
dynamic reconfiguration w/ DISM, but I didn't bother to dig that far).
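
For reference, adding swap on a ZVOL by hand looks roughly like this (volume
name and size are examples):

  zfs create -V 16g rpool/swap2
  swap -a /dev/zvol/dsk/rpool/swap2
  swap -l    # confirm the new device is in use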
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS and viewing ACLs / OpenSolaris 2008.05

2008-05-14 Thread Jason King
On Wed, May 14, 2008 at 6:42 PM, Dave Koelmeyer
[EMAIL PROTECTED] wrote:
 Hi All, first time caller here, so please be gentle...

 I'm on OpenSolaris 2008.05, and following the really useful guide here to 
 create a CIFs share in domain mode:

 http://blogs.sun.com/timthomas/entry/configuring_the_opensolaris_cifs_server


 Works like a charm. Now, I want to be able to view the ACLs of files and 
 directories
 created in my shared pool, a la:

 http://dlc.sun.com/osol/docs/content/ZFSADMIN/gbace.html

 That help page says I can view ACLs by doing ls -v file, but no matter what 
 I try I cannot view anything ACL related: What I get instead is:


 [EMAIL PROTECTED]:/tank/stuff# ls -l
 total 2
 drwxrwxrwx 3 2147483649 2147483650 3 2008-05-14 18:37 New Folder
 -rw-r--r-- 1 root   root   0 2008-05-14 18:35 test

 [EMAIL PROTECTED]:/tank/stuff# ls -v test
 test


 Tried capital V:


 [EMAIL PROTECTED]:/tank/stuff# ls -V test
 ls: invalid option -- V
 Try `ls --help' for more information.


 And trying to get the ACL on the the directory as well:

 [EMAIL PROTECTED]:/tank/stuff# ls -dv New\ Folder/
 New Folder/


 So my question is: what am I missing/doing wrong, and how do I simply view
 the ACLs for files/directories with a view to knowing what to modify...?


The GNU shell utilities do not support ACLs.  You must use the
traditional Solaris utilities instead -- either explicitly call
/bin/ls, /bin/chmod, or adjust your path/aliases accordingly.
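For example (a quick sketch using the file from above; the user name in the
chmod line is just an example):

    /bin/ls -v test
    /bin/ls -V test                                # compact ACL format, if your build has it
    /bin/chmod A+user:dave:read_data:allow test    # add an ACE, then re-check with ls -v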


Re: [zfs-discuss] News on Single Drive RaidZ Expansion?

2008-05-08 Thread Jason King
On Thu, May 8, 2008 at 8:59 PM, EchoB [EMAIL PROTECTED] wrote:
 I cannot recall if it was this (-discuss) or (-code) but a post a few
  months ago caught my attention.
  In it someone detailed having worked out the math and algorithms for a
  flexible expansion scheme for ZFS. Clearly this is very exciting to me,
  and most people who use ZFS on purpose.
  I am wondering if there is currently any work in progress to implement
  that - or any other method of accomplishing that task. It seems to be
  one of the most asked about features. I haven't heard anything in a
  while - so I figured I'd ask.

I suspect this might be what you're looking for:
http://blogs.sun.com/ahl/entry/expand_o_matic_raid_z

However it's dependent on the block pointer rewrite functionality
(which I believe is being worked on, but I cannot say for sure).


Re: [zfs-discuss] Fwd: zfs boot suddenly not working

2007-12-18 Thread Jason King
Edit the kernel$ line and add '-k' at the end.  That should drop you
into the kernel debugger after the panic (typing '$q' will exit the
debugger, and resume whatever it was doing -- in this case likely
rebooting).
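Using the menu.lst quoted below, the edited entry would look something like
this (just a sketch -- only the trailing -k is added):

    title Solaris ZFS snv_67 X86
    kernel$ /platform/i86pc/kernel/$ISADIR/unix -B $ZFS-BOOTFS -k
    module$ /platform/i86pc/$ISADIR/boot_archive

You can also press 'e' on the entry at the grub menu and append the -k for a
one-off boot instead of editing the file; once you're in kmdb, ::msgbuf
should show the panic text.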


On Dec 18, 2007 6:26 PM, Michael Hale [EMAIL PROTECTED] wrote:


 Begin forwarded message:

 From: Michael Hale [EMAIL PROTECTED]
 Date: December 18, 2007 6:15:12 PM CST
 To: zfs-discuss@opensolaris.org
 Subject: zfs boot suddenly not working

  We have a machine that is configured with zfs boot , Nevada v67- we have
 two pools, rootpool and datapool.  It has been working ok since June.  Today
 it kernel panicked and now when we try to boot it up, it gets to the grub
 screen, we select ZFS, and then there is a kernel panic that flashes by too
 quickly for us to see and then it reboots.

 If we boot to a Nevada v77 DVD, we can do a zpool
 import and mount the zfs pools successfully.  We scrubbed them and didn't
 find any errors.  From the Nevada v77 DVD we can see everything ok.

 Here is our grub menu.lst

 title Solaris ZFS snv_67 X86
 kernel$ /platform/i86pc/kernel/$ISADIR/unix -B $ZFS-BOOTFS
 module$ /platform/i86pc/$ISADIR/boot_archive

 First of all, is there a way to slow down that kernel panic so that we can
 see what it is?  Also, we suspect that maybe /platform/i86pc/boot_archive
 might have been damaged.  Is there a way to regenerate it?
 --
 Michael Hale
 [EMAIL PROTECTED]





 --
 Michael Hale
 [EMAIL PROTECTED]
 http://www.gift-culture.org







Re: [zfs-discuss] GRUB + zpool version mismatches

2007-10-18 Thread Jason King
Upon further thought, it was probably just defaulting to the first
thing grub could boot -- my laptop is partitioned (in order) xp,
recovery partition, solaris.   The grub menu though lists the options
in order of solaris b62 (zfs /), solaris b74 (also zfs /), recovery
partition, windows.  So my initial assumption is probably wrong -- if
it was actually reading the menu.lst, it probably would have done the
recovery partition.

Specifically, I was experimenting (so yes, highly unsupported), but
essentially on a b62 zfs root system, I did (a bit long):

download b74 dvd + assemble iso
zfs create -o mountpoint=legacy tank/b74
mount -F zfs tank/b74 /a
lofiadm -a b74.iso
mount -F hsfs /dev/lofi/1 /mnt

cp /mnt/Solaris_11/Product/.order /tmp/order
vi /tmp/order
cp /var/sadm/install/admin /tmp/admin
vi /tmp/admin (set everything to overwrite)

for dir in /dev /devices; do
mkdir -p /a${dir}
( cd ${dir} && find . -xdev | cpio -pd /a${dir} )
done

cat /tmp/order | while read pkg; do
pkgadd -a /tmp/admin -n -d /mnt/Solaris_11/Product -R /a ${pkg}
done

cp /etc/path_to_inst /a/etc/path_to_inst
devfsadm -r /a
echo etc/zfs/zpool.cache >> /boot/solaris/filelist.ramdisk
echo etc/zfs/zpool.cache >> /a/boot/solaris/filelist.ramdisk
bootadm update-archive -R /a

umount /a
umount /mnt
lofiadm -d /dev/lofi/1

mount -F zfs tank /mnt
vi /mnt/boot/grub/menu.lst
(add entry for tank/b74)

reboot

All works well; I see the message about the pool version being downrev, but
after a few boots and running it through its paces I think 'ok, it's
good' and do zpool upgrade tank... then keep using it for a few hours.

Then I rebooted, and it would go straight into windows -- no menu or anything.
Once I backtracked, I suspected it was the zpool upgrade that
broke things (since I never installed the b74 grub), so I burned a
b74 CD (thanks Jesse :P) from XP, booted off it, did a zpool import -f,
and reinstalled grub, and all is well.
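For the record, the recovery from the b74 CD boiled down to something like
this (the disk device below is just a placeholder):

    zpool import -f tank
    installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c0d0s0

i.e. putting the newer stage1/stage2 on the disk so grub can read the
upgraded pool.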

So I think it was barfing on the zpool version being higher than what
it knew about, but gave no indication that was the issue.


On 10/18/07, Lori Alt [EMAIL PROTECTED] wrote:
 I think this is an artifact of a manual setup.  Ordinarily, if
 booting from a zfs root pool, grub wouldn't even be able
 to read the menu.lst if it couldn't interpret the pool format.

 I'm not sure what the entire sequence of events is here,
 so I'm not sure if there's a bug.   Perhaps you could elaborate.

 Lori

 Jason King wrote:
  Apparently with zfs boot, if the zpool is a version grub doesn't
  recognize, it merely ignores any zfs entries in menu.lst, and
  apparently instead boots the first entry it thinks it can boot.  I ran
  into this myself due to some boneheaded mistakes while doing a very
  manual zfs / install at the summit.
 
  Shouldn't it at least spit out a warning?  If so, I have no issues
  filing a bug, but wanted to bounce it off those more knowledgeable in
  this area than I am.


[zfs-discuss] GRUB + zpool version mismatches

2007-10-17 Thread Jason King
Apparently with zfs boot, if the zpool is a version grub doesn't
recognize, it merely ignores any zfs entries in menu.lst, and
apparently instead boots the first entry it thinks it can boot.  I ran
into this myself due to some boneheaded mistakes while doing a very
manual zfs / install at the summit.

Shouldn't it at least spit out a warning?  If so, I have no issues
filing a bug, but wanted to bounce it off those more knowledgeable in
this area than I am.


Re: [zfs-discuss] device alias

2007-09-25 Thread Jason King
On 9/25/07, Gregory Shaw [EMAIL PROTECTED] wrote:



 On Sep 25, 2007, at 7:09 PM, Richard Elling wrote:

 Dale Ghent wrote:
 On Sep 25, 2007, at 7:48 PM, Richard Elling wrote:
 The problem with this is that wrong information is much worse than no
 information, there is no way to automatically validate the  information,
 and therefore people are involved.  If people were reliable, then even
 a text file would work.  If it can't be automatic and reliable, then
 it isn't worth doing.
 I dunno if I think we have to think this far into it.
 Consider the Clearview project and their implementation of vanity  names for
 network interfaces. Conceivably, such a feature is useful  to admins as they
 can set the name to be a particular vlan number, or  the switch/blade/port
 where the other end of the ethernet line is  terminated. Or their
 ex-girlfriend's name if it's a particular  troublesome interface. The point
 is, it allows arbitrary naming of  something so that the admin(s) can
 associate with it better as an  object. Most importantly, there's a
 distinction here. Solaris  provides the facility. It's up to the admin to
 maintain it. That's as  far as it should go.

 Actually, you can use the existing name space for this.  By default,
 ZFS uses /dev/dsk.  But everything in /dev is a symlink.  So you could
 setup your own space, say /dev/myknowndisks and use more descriptive
 names.  You might need to hack on the startup service to not look at
 /dev, but that shouldn't be too hard.  In other words, if the answer
 is let the sysadmin do it then it can be considered solved.  The
 stretch goal is to make some sort of reasonable name service.  At
 this point I'll note the the FC folks envisioned something like that,
 but never implemented it.

 # ramdiskadm -a BrownDiskWithWhiteHat 150m
 /dev/ramdisk/BrownDiskWithWhiteDot
 # zpool create zwimming /dev/ramdisk/BrownDiskWithWhiteDot
 # zpool status zwimming
   pool: zwimming
  state: ONLINE
  scrub: none requested
 config:

 NAME  STATE READ WRITE CKSUM
 zwimming  ONLINE   0 0 0
    /dev/ramdisk/BrownDiskWithWhiteDot  ONLINE   0 0 0

 errors: No known data errors
 # ls -l /dev/ramdisk/BrownDiskWithWhiteHat
 lrwxrwxrwx   1 root root  55 Sep 25 17:59 /dev/ramdisk/BrownDiskWithWhiteDot -> ../../devices/pseudo/[EMAIL PROTECTED]:BrownDiskWithWhiteDot
 # zpool export zwimming
 # mkdir /dev/whee
 # cd /dev/whee
 # ln -s ../../devices/pseudo/[EMAIL PROTECTED]:BrownDiskWithWhiteHat YellowDiskUnderPinkBox
 # zpool import -d /dev/whee zwimming
 # zpool status zwimming
   pool: zwimming
  state: ONLINE
  scrub: none requested
 config:

 NAMESTATE READ WRITE CKSUM
 zwimmingONLINE   0 0 0
    /dev/whee/YellowDiskUnderPinkBox  ONLINE   0 0 0

 errors: No known data errors


  -- richard

 But nobody would actually do this.  If the process can't be condensed into a
 single step (e.g. a single command), people won't bother.

 Besides, who would take the chance that a 'boot -r' would keep their
 elaborate symbolic link tree intact?   I wouldn't.

 I've learned that you can't count on anything in /dev remaining over 'boot
 -r', patches, driver updates, san events, etc.


 -
 Gregory Shaw, IT Architect
 IT CTO Group, Sun Microsystems Inc.
 Phone: (303)-272-8817 (x78817)
 500 Eldorado Blvd, UBRM02-157 [EMAIL PROTECTED] (work)
 Broomfield, CO 80021   [EMAIL PROTECTED] (home)
 When Microsoft writes an application for Linux, I've won. - Linus Torvalds








I've actually contemplated requesting such a feature for /dev
(creating vanity aliases that are persistent), as it would also be
useful for things like databases that use raw disks (e.g. Oracle ASM).


Re: [zfs-discuss] ZFS Root and upgrades

2007-09-13 Thread Jason King
On 9/13/07, Brian Hechinger [EMAIL PROTECTED] wrote:
 On Thu, Sep 13, 2007 at 10:54:41AM -0600, Lori Alt wrote:
  In-place upgrade of zfs datasets is not supported and probably
  never will  be (LiveUpgrade will be the way to go with zfs because
  the cloning features of zfs make it a natural).  But the LiveUpgrade
  changes aren't ready yet, so for the time being, you really need to
  reinstall in order to get a later version.

 Hi Lori,

 I know I'm a pain in your rear :), but is there an ETA for LU?

 -brian
 --
 Perl can be fast and elegant as much as J2EE can be fast and elegant.
 In the hands of a skilled artisan, it can and does happen; it's just
 that most of the shit out there is built by people who'd be better
 suited to making sure that my burger is cooked thoroughly.  -- Jonathan 
 Patschke


Or perhaps something less complex (possibly -- I don't know for sure): what
about manually taking the DVD ISO and putting the bits onto an empty zfs
filesystem (i.e. not worrying about converting or carrying over
existing configuration files)?  It seems like doing a pkgadd -R plus
bootadm update-archive -R almost works, but some piece is missing
(the kernel can't find the ramdisk).


[zfs-discuss] Requirements for a bootable zfs filesystem

2007-09-08 Thread Jason King
Just playing around a bit w/ zfs + zfs root (no particularly good
reason other than to just mess around a bit), and I hit an issue that
I suspect is simple to fix, but I cannot seem to figure out what that
is.

I wanted to try (essentially) doing a very manual install to an empty
zfs filesystem.
So I took a b72 ISO, mounted it, then did something similar to:

cat /mnt/Solaris_11/Product/.order | while read pkg; do
pkgadd -n -M -a admin -R /newfs -d /mnt/Solaris_11/Product $pkg
done

(where the ISO was mounted on /mnt and /newfs was the empty zfs filesystem)

After that completed, I made sure the filesystem mountpoint was set to
legacy, updated its vfstab accordingly, updated grub (the system was
already installed using the zfsrootkit from b62), copied
/etc/zfs/zpool.cache to /newfs/etc/zfs, created
/newfs/etc/boot/solaris/filelist.ramdisk, added 'etc/zfs/zpool.cache',
then did a 'bootadm update-archive -R /newfs'

When I reboot, it panics in rootconf() that it cannot mount /ramdisk:a

Now what's interesting is that if I clone the existing / and just
overwrite all the packages on the clone, that boots (though I'm not sure
how clean it would be).  So I suspect there must be some extra
piece on the filesystem that's missing that I am unaware of.  Any
ideas?


[zfs-discuss] ZFS boot: Now, how can I do a pseudo live upgrade?

2007-05-31 Thread Jason King

I've had at least some success (tried it once so far) doing a BFU to a cloned
filesystem from a b62 zfs root system; I could probably document that if
there is interest.

I have not tried taking a new ISO and installing the new packages on top of a
cloned filesystem, though.

On 5/31/07, Lori Alt [EMAIL PROTECTED] wrote:


zfs-boot crowd:

I said I'd try to come up with a procedure for liveupgrading
the netinstalled zfs-root setup, but I haven't found time to
do so yet (I'm focusing on getting this supported in install
for real).  So while I hate to retreat into the "I never said
you could upgrade this configuration" excuse, that's what
I'm going to do, at least for now.  I might get a chance
to work on a liveupgrade procedure in the next couple of
weeks.  In the meantime, if someone else wants to take
a shot at it and post the results, go ahead.

Lori

Malachi de Ælfweald wrote:
 No, I did mean 'snapshot -r' but I thought someone on the list said
 that the '-r' wouldn't work until b63... hmmm...

 Well, realistically, all of us new to this should probably know how to
 patch our system before we put any useful data on it anyway, right? :)

 Thanks,
 Mal

  On 5/25/07, *Constantin Gonzalez* [EMAIL PROTECTED] wrote:

 Hi Malachi,

 Malachi de Ælfweald wrote:
  I'm actually wondering the same thing because I have b62 w/ the
ZFS
  bits; but need the snapshot's -r functionality.

 you're lucky, it's already there. From my b62 machine's man zfs:

   zfs snapshot [-r] filesystem@snapname|volume@snapname

  Creates  a  snapshot  with  the  given  name.  See   the
  Snapshots section for details.

  -rRecursively create  snapshots  of  all  descendant
datasets.  Snapshots are taken atomically, so that
all recursive snapshots  correspond  to  the  same
moment in time.

 Or did you mean send -r?

 Best regards,
Constantin


 --
 Constantin GonzalezSun Microsystems
 GmbH, Germany
 Platform Technology Group, Global Systems
 Engineering   http://www.sun.de/
 Tel.: +49 89/4 60 08-25 91
  http://blogs.sun.com/constantin/

 Sitz d. Ges.: Sun Microsystems GmbH, Sonnenallee 1, 85551
 Kirchheim-Heimstetten
 Amtsgericht Muenchen: HRB 161028
 Geschaeftsfuehrer: Marcel Schneider, Wolfgang Engels, Dr. Roland
 Boemer
 Vorsitzender des Aufsichtsrates: Martin Haering


 



[zfs-discuss] Re: zfs boot image conversion kit is posted

2007-04-30 Thread Jason King
I tried it and it worked great.  Even cloned my boot environment, and BFU'd the 
clone and it seemed to work (minus a few unrelated annoyances I haven't tracked 
down yet).  I'm quite excited about the possibilities :)

I am wondering though, is it possible to skip the creation of the pool and have 
it install to an empty filesystem(s) in an existing pool (assume the pool is 
already set up w/ grub and the like)?   I'm thinking of installing new builds (no
upgrades), etc, as time goes on until the new installer is here.
 
 


[zfs-discuss] Thoughts on patching + zfs root

2006-11-08 Thread Jason King
Anxiously anticipating the ability to boot off zfs, I know there's been some 
talk about leveraging some of the snapshotting/cloning features in conjunction 
with upgrades and patches.

What I am really hoping for is the ability to clone /, patch the clone, then 
boot off the clone (by doing a clone swap).  This would minimize the downtime 
needed to patch (as compared to today) since the install could be done while 
the system is still up and running.  I suspect doing this might require some
interaction with things like patchadd, etc., so I am curious if such a feature 
is already in the works, or if perhaps an RFE should be filed.
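Concretely, I'm picturing something like the following (dataset names and
the patch location are made up, and the last step is exactly the piece that
doesn't exist yet):

    zfs snapshot rootpool/root@prepatch
    zfs clone rootpool/root@prepatch rootpool/root-patched
    zfs set mountpoint=/mnt/patched rootpool/root-patched
    patchadd -R /mnt/patched /var/tmp/123456-78    # hypothetical patch location
    # then make the clone the dataset we boot from and reboot;
    # zfs promote only reverses the clone/origin relationship so the old
    # root can eventually be destroyed -- the actual boot switch is what
    # needs support from the install/patch tools
    zfs promote rootpool/root-patched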

Assuming the plans are already there for this, I'd like to anticipate any 
gotchas with our standard install procedure, and was just wondering if any 
thought had been put in as to what the requirements would be.  Just off the top 
of my head, I would guess it'd mostly revolve around making sure stuff is 
separated properly into different filesystems, but was curious if anyone else 
has thought about this.
 
 