[zfs-discuss] Saving data across install

2011-08-03 Thread Nomen Nescio
I installed a Solaris 10 development box on a 500G root mirror and later I
received some smaller drives. I learned from this list that it's better to have
the root mirror on the smaller drives and then create another mirror on the
original 500G drives, so I copied everything that was on the small drives onto
the 500G mirror to free up the smaller drives for a new install.

After my install completes on the smaller mirror, how do I access the 500G
mirror where I saved my data? If I simply create a tank mirror using those
drives will it recognize there's data there and make it accessible? Or will
it destroy my data? Thanks.


Re: [zfs-discuss] Saving data across install

2011-08-03 Thread Fajar A. Nugraha
On Wed, Aug 3, 2011 at 7:02 AM, Nomen Nescio nob...@dizum.com wrote:
 I installed a Solaris 10 development box on a 500G root mirror and later I
 received some smaller drives. I learned from this list its better to have
 the root mirror on the smaller small drives and then create another mirror
 on the original 500G drives so I copied everything that was on the small
 drives onto the 500G mirror to free up the smaller drives for a new install.

 After my install completes on the smaller mirror, how do I access the 500G
 mirror where I saved my data? If I simply create a tank mirror using those
 drives will it recognize there's data there and make it accessible? Or will
 it destroy my data? Thanks.

CREATING a pool on those drives will definitely destroy the data.

IMHO the easiest way is to:
- boot using a live CD
- import the 500G pool, renaming it to something other than rpool
(e.g. zpool import rpool datapool)
- export the pool
- install on the new disks
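
A minimal sketch of that sequence from the live CD (rpool/datapool as above;
datapool is just an example name):

# zpool import                  # list importable pools; the old 500G rpool should show up
# zpool import rpool datapool   # import it under a new name so it can't clash with the new install's rpool
# zpool export datapool         # export it again so the installer leaves it untouched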

Also, there's actually a way to copy everything that was installed on
the old pool to the new pool WITHOUT having to reinstall from scratch
(e.g. so that your customizations stay the same), but depending on
your level of expertise it might be harder. See
http://www.solarisinternals.com/wiki/index.php/ZFS_Troubleshooting_Guide#ZFS_Root_Pool_Recovery

-- 
Fajar


Re: [zfs-discuss] Saving data across install

2011-08-03 Thread Fajar A. Nugraha
On Wed, Aug 3, 2011 at 1:10 PM, Fajar A. Nugraha l...@fajar.net wrote:
 After my install completes on the smaller mirror, how do I access the 500G
 mirror where I saved my data? If I simply create a tank mirror using those
 drives will it recognize there's data there and make it accessible? Or will
 it destroy my data? Thanks.

 CREATING a pool on the drive will definitely destroy the data.

 IMHO the easiest way is to:
 - boot using live CD
 - import the 500G pool, renaming it to something else other than rpool
 (e.g. zpool import rpool datapool)
 - export the pool
 - install on the new disk

... and just in case it wasn't obvious already: after the installation
is complete, you can simply do a zpool import datapool (or whatever
you renamed the old pool to) to access the old data.
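
For example (pool name as above, assuming its datasets' mountpoints don't
conflict with the newly installed system):

# zpool import datapool    # imports the renamed pool and mounts its datasets
# zfs list -r datapool     # verify the old filesystems and where they mounted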

-- 
Fajar


Re: [zfs-discuss] Wrong rpool used after reinstall!

2011-08-03 Thread Anonymous Remailer (austria)

You wrote:

 
  Hi Roy, things got alot worse since my first email. I don't know what
  happened but I can't import the old pool at all. It shows no errors but when
  I import it I get a kernel panic from assertion failed: zvol_get_stats(os,
  nv) which looks like is fixed by patch 6801926 which is applied in Solaris
  10u9. But I cannot boot update 9 on this box! I tried Solaris Express, none
  of those will run right either. They all go into maintenance mode. The last
  thing I can boot is update 8 and that is the one with the ZFS bug.
 
 If they go into maintenance mode but could recognize the disk, you
 should still be able to do zfs stuff (zpool import, etc). If you're
 lucky you'd only miss the GUI

Thank you for your comments. This is pretty frustrating.

Unfortunately I'm hitting another bug I saw on the net. Once the Express
Live CD or text installer falls back to maintenance mode, I keep getting a
message like "bash: /usr/bin/hostname: command not found" (from memory, may
not be exact). All ZFS commands fail with this message. I don't know what
causes this, and I'm surprised, since Solaris 10 update 8 works mostly fine
on the same hardware. I would expect hardware support to get broader, not
narrower, but the opposite seems to be happening, because I can't install
update 9 on this machine.

  I have 200G of files I deliberately copied to this new mirror and now I
  can't get at them! Any ideas?
 
 Another thing you can try (albeit more complex) is use another OS
 (install, or even use a linux Live CD), install virtualbox or similar
 on it, pass the disk as raw vmdk, and use solaris express on the VM.
 You should be able to at least import the pool and recover the data.
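
A rough sketch of that raw-disk passthrough idea on a Linux host (device and
file names are made up, and whoever runs VirtualBox needs read/write access
to the raw devices):

# VBoxManage internalcommands createrawvmdk -filename /root/old1.vmdk -rawdisk /dev/sdb
# VBoxManage internalcommands createrawvmdk -filename /root/old2.vmdk -rawdisk /dev/sdc

Then attach both .vmdk files to a Solaris Express VM and try the zpool import
from inside it.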

I didn't think I would be able to use raw disks with exported pools in a VM
but your comment is interesting. I will consider it. The host is bootable,
it just immediately panics upon trying to import the 2nd pool. Is there some
way I can force a normal boot where my root pool is mounted but it doesn't
mount the other pool? If so I could install VirtualBox and try your
suggestion without moving the drives to another box.
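
One approach I've seen suggested (I haven't verified it on this exact setup):
pools other than the root pool only get auto-imported at boot because they are
listed in /etc/zfs/zpool.cache, so from failsafe mode, with the root pool
mounted on /a, moving that file aside should let the box boot from rpool
without ever touching tank:

# mv /a/etc/zfs/zpool.cache /a/etc/zfs/zpool.cache.bad
# reboot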

Aren't there any ZFS recovery tools, or is ZFS just not expected to break?

I mentioned I can boot the update 9 installer, but it fails when trying to
read media from the DVD because of a lack of nvidia drivers (a documented
limitation). I wonder if I can do some kind of network or jumpstart install.
I have no other Solaris Intel boxes (and this post should explain some of the
reasons why), but I have several Solaris Sparc boxes. I haven't gone through
the documentation on net/jumpstart installs; there is a lot to read. But maybe
this would get update 9 on that box, and maybe it could then import the pool.

Are there any other Solaris-based live CDs or DVDs I could try that have
known-good ZFS support? I will need something from which I can scp/rcp/rsync
the data off that box, assuming that I can import the pool somehow.
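
If the pool does import somewhere with working networking, zfs send over ssh
is another option besides scp/rsync (host and pool names made up):

# zfs snapshot -r tank@rescue
# zfs send -R tank@rescue | ssh user@sparcbox "zfs receive -d backuppool"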

Thanks,
Jim


Re: [zfs-discuss] Saving data across install

2011-08-03 Thread Nomen Nescio
Please ignore this post. Bad things happened and now there is another thread
for it. Thank you.


[zfs-discuss] zpool import starves machine of memory

2011-08-03 Thread Paul Kraus
I am having a very odd problem, and so far the folks at Oracle
Support have not provided a working solution, so I am asking the crowd
here while still pursuing it via Oracle Support.

The system is a T2000 running 10U9 with CPU-2010-01 and two J4400 arrays
loaded with 1 TB SATA drives. There is one zpool on the J4400s (3 x 15-disk
vdevs + 3 hot spares). This system is the target for zfs send / recv
replication from our production server. The OS is UFS on local disk.

 While I was on vacation this T2000 hung with out of resource
errors. Other staff tried rebooting, which hung the box. Then they
rebooted off of an old BE (10U9 without CPU-2010-01). Oracle Support
had them apply a couple patches and an IDR to address zfs stability
and reliability problems as well as set the following in /etc/system

set zfs:zfs_arc_max = 0x700000000 (which is 28 GB)
set zfs:arc_meta_limit = 0x700000000 (which is 28 GB)

The system has 32 GB RAM and 32 (virtual) CPUs. They then tried
importing the zpool and the system hung (after many hours) with the
same out of resource error. At this point they left the problem for
me :-(

I removed the zpool.cache from the 10U9 + CPU 2010-10 BE and booted
from that. I then applied the IDR (IDR146118-12) and the zfs patch it
depended on (145788-03). I did not include the zfs arc and arc meta
limits as I did not think them relevant. A zpool import shows the
pool is OK and a sampling of the drives with zdb -l shows good labels.
I started importing the zpool and after many hours it hung the system
with out of resource errors. I had a number of tools running to see
what was going on. The only thing this system is doing is importing
the zpool.

ARC had climbed to about 8 GB and then declined to 3 GB by the time
the system hung. This tells me that there is something else consuming
RAM and the ARC is releasing it.

The hung TOP screen showed the largest user process only had 148 MB
allocated (and much less resident).

VMSTAT showed a scan rate of over 900,000 (NOT a typo) and almost 8 GB
of free swap (so whatever is using memory cannot be paged out).

So my guess is that there is a kernel module that is consuming all
(and more) of the RAM in the box. I am looking for a way to query how
much RAM each kernel module is using and script that in a loop (which
will hang when the box runs out of RAM next). I am very open to
suggestions here.
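
Here is the kind of crude sampling loop I have in mind (interval and log path
arbitrary); ::kmastat reports per-kmem-cache rather than strictly per-module
usage, but it should point at the consumer:

#!/bin/sh
# sample kernel memory usage until the box wedges
while true; do
    date
    echo "::memstat" | mdb -k       # overall breakdown: kernel / anon / page cache / free
    echo "::kmastat" | mdb -k       # per-kmem-cache usage; watch the "memory in use" column
    kstat -p zfs:0:arcstats:size    # ARC size, for correlation
    sleep 300
done >> /var/tmp/memwatch.log 2>&1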

   Since this is the recv end of replication, I assume there was a zfs
recv going on at the time the system initially hung. I know there was
a 3+ TB snapshot replicating (via a 100 Mbps WAN link) when I left for
vacation, that may have still been running. I also assume that any
partial snapshots (% instead of @) are being removed when the pool is
imported. But what could be causing a partial snapshot removal, even
of a very large snapshot, to run the system out of RAM ? What caused
the initial hang of the system (I assume due to out of RAM) ? I did
not think there was a limit to the size of either a snapshot or a zfs
recv.

Hung TOP screen:

load averages: 91.43, 33.48, 18.98             xxx-xxx1               18:45:34
84 processes:  69 sleeping, 12 running, 1 zombie, 2 on cpu
CPU states: 95.2% idle,  0.5% user,  4.4% kernel,  0.0% iowait,  0.0% swap
Memory: 31.9G real, 199M free, 267M swap in use, 7.7G swap free

   PID USERNAME THR PR NCE  SIZE   RES STATE    TIME FLTS    CPU COMMAND
   533 root      51 59   0  148M 30.6M run   520:21    0  9.77% java
  1210 yy         1  0   0 5248K 1048K cpu25   2:08    0  2.23% xload
 14720 yy         1 59   0 3248K 1256K cpu24   1:56    0  0.03% top
   154 root       1 59   0 4024K 1328K sleep   1:17    0  0.02% vmstat
  1268 yy         1 59   0 4248K 1568K sleep   1:26    0  0.01% iostat
...

VMSTAT:

kthr      memory            page            disk          faults      cpu
 r b w   swap  free  re  mf pi po fr de sr m0 m1 m2 m3   in   sy   cs us sy id
 0 0 112 8117096 211888 55 46 0 0 425 0 912684 0 0 0 0  976  166  836  0  2 98
 0 0 112 8117096 211936 53 51 6 0 394 0 926702 0 0 0 0  976  167  833  0  2 98

ARC size (B): 4065882656

-- 
{1-2-3-4-5-6-7-}
Paul Kraus
- Senior Systems Architect, Garnet River ( http://www.garnetriver.com/ )
- Sound Designer: Frankenstein, A New Musical
(http://www.facebook.com/event.php?eid=123170297765140)
- Sound Coordinator, Schenectady Light Opera Company (
http://www.sloctheater.org/ )
- Technical Advisor, RPI Players


[zfs-discuss] Expand ZFS storage.

2011-08-03 Thread Nix
Hi,

I have 4 disks of 1 TB each and I want to expand the ZFS pool size.

I have 2 more disks of 1 TB each.

Is it possible to expand the current RAIDZ array with the new disks?

Thanks,
Nix
-- 
This message posted from opensolaris.org


Re: [zfs-discuss] Wrong rpool used after reinstall!

2011-08-03 Thread Stuart James Whitefish
Thanks for your comments so far. I'll try to put everything I know into this
post now that I have signed up at the forums.

Solaris 10, update 8 Intel, 500G ZFS root mirror rpool.

I recently received two 320G drives and realized from reading this list that
it would have been better to have done the install on the small drives, but I
didn't have them at the time. I added the two 320G drives and created a tank
mirror.

I moved some data from other sources to the tank and then decided to go
ahead and do a new install. In preparation for that I moved all the data I
wanted to save onto the rpool mirror and then installed Solaris 10 update 8
again on the 320G drives.

When my system rebooted after the installation, I saw that for some reason it
used my tank pool as root. I realize now that since it was originally a root
pool and had boot blocks, this didn't help. Anyway, I shut down, changed the
boot order and then booted into my system. It panicked when trying to access
the tank and instantly rebooted. I had to go through this several times until
I caught a glimpse of one of the first messages:

assertion failed: zvol_get_stats(os, nv)

Here is what my system looks like when I boot into failsafe mode.


# zpool import
  pool: rpool
id: 16453600103421700325
 state: ONLINE
action: The pool can be imported using its name or numeric identifier.
config:

rpool      ONLINE
  mirror   ONLINE
    c0t2d0s0  ONLINE
    c0t3d0s0  ONLINE

  pool: tank
id: 12861119534757646169
 state: ONLINE
action: The pool can be imported using its name or numeric identifier.
config:

tank       ONLINE
  mirror   ONLINE
    c0t0d0s0  ONLINE
    c0t1d0s0  ONLINE

# zpool import tank
cannot import 'tank': pool may be in use from other system
use '-f' to import anyway

When I do

# zpool import -f tank

I get a kernel panic and the screen flashes and the machine quickly reboots
before I can see what's written, but I did see

   assertion failed: zvol_get_stats(os, nv)

and there seems to be a patch 6801926 associated with it. I tried the
following:

1. fmdump -ev. Either the command is not found when I bring my system up in
Solaris failsafe mode, or, when the root pool (which is OK) gets mounted on
/a, the command is still not found. I did chroot /a /bin/sh and then the
command is found, but I get error messages about certain files not being
available in /var, probably because I have /var on a separate filesystem. I
do not know how to mount things as if the system had come up normally (see
the sketch after this list for what I think is needed). If I boot in normal
mode, although it can import the root pool, the tank pool was also a root
pool at one time and the system seems to mount it, panic, and reboot. I think
it would be a lot easier to do whatever we can do if I could get the system
to boot into normal mode without importing tank. Can anyone please explain,
in very clear and simple directions, how to do that?

2. Solaris 10 update 9 does boot but won't install, presumably because of not
having a CD driver for my nvidia chipset; although the install does start, it
gives a message something like "the CD is not a Solaris OS CD". I tried the
single-user shell and it also panics when I zpool import -f tank. This is very
discouraging because according to the Solaris 10 update 9 patch list, the fix
for 6801926 is supposed to be preapplied. I don't understand why the update 8
DVD works on my system but update 9 doesn't. That is just weird.

3. Solaris 11 Express falls into maintenance mode and doesn't go any further. I 
get an error message something like bash: /usr/bin/hostname command not found, 
and I can't do anything useful.

4. OpenIndiana Server doesn't come all the way up. I get the Solaris 5.11 
message and then it does nothing.

5. Milax falls into maintenance mode like Sol 11 Express and then panics when I 
try to import tank. 
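
Regarding item 1 above: I think something along these lines is what is needed
in failsafe mode to get /var in place for fmdump, but the dataset name below
is a guess (check with zfs list) and I have not managed to verify it:

# zfs list -r rpool/ROOT                        # find the BE's /var dataset
# mount -F zfs rpool/ROOT/s10x_u8/var /a/var    # hypothetical dataset name
# chroot /a /usr/sbin/fmdump -ev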

I have Solaris on Sparc boxes available if it would help to do a net install
or jumpstart. I have never done those and it looks complicated, although I
think I may be able to get to the point in the u9 installer on my Intel box 
where it asks me whether I want to install from DVD etc. But I may be wrong, 
and anyway the single user shell in the u9 DVD also panics when I try to import 
tank so maybe that won't help.

I have only 4 sata ports on this Intel box so I have to keep pulling cables
to be able to boot from a DVD and then I won't have all my drives
available. I cannot move these drives to any other box because they are
consumer drives and my servers all have ultras.

I have about 200G of data on the tank pool, about 100G or so I don't have
anywhere else. I created this pool specifically to make a safe place to
store data that I had accumulated over several years and didn't have
organized yet. Can anyone PLEASE help me get this data back!

Thank you.

Jim
-- 
This message posted from opensolaris.org

Re: [zfs-discuss] zpool import starves machine of memory

2011-08-03 Thread Paul Kraus
An additional data point: when I try to do a zdb -e -d to find the
incomplete zfs recv snapshot, I get an error as follows:

# sudo zdb -e -d xxx-yy-01 | grep %
Could not open xxx-yy-01/aaa-bb-01/aaa-bb-01-01/%1309906801, error 16
#

Does anyone know what error 16 means from zdb, and how it might affect
importing this zpool?
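
My guess is it is a plain errno, which would make 16 EBUSY:

# grep EBUSY /usr/include/sys/errno.h    # errno 16 is EBUSY

but I don't know what would be holding that dataset busy during import.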

On Wed, Aug 3, 2011 at 9:19 AM, Paul Kraus p...@kraus-haus.org wrote:
    I am having a very odd problem, and so far the folks at Oracle
 Support have not provided a working solution, so I am asking the crowd
 here while still pursuing it via Oracle Support.

    The system is a T2000 running 10U9 with CPU-2010-01and two J4400
 loaded with 1 TB SATA drives. There is one zpool on the J4400 (3 x 15
 disk vdev + 3 hot spare). This system is the target for zfs send /
 recv replication from our production server.The OS is UFS on local
 disk.

     While I was on vacation this T2000 hung with out of resource
 errors. Other staff tried rebooting, which hung the box. Then they
 rebooted off of an old BE (10U9 without CPU-2010-01). Oracle Support
 had them apply a couple patches and an IDR to address zfs stability
 and reliability problems as well as set the following in /etc/system

 set zfs:zfs_arc_max = 0x7 (which is 28 GB)
 set zfs:arc_meta_limit = 0x7 (which is 28 GB)

    The system has 32 GB RAM and 32 (virtual) CPUs. They then tried
 importing the zpool and the system hung (after many hours) with the
 same out of resource error. At this point they left the problem for
 me :-(

    I removed the zfs.cache from the 10U9 + CPU 2010-10 BE and booted
 from that. I then applied the IDR (IDR146118-12 )and the zfs patch it
 depended on (145788-03). I did not include the zfs arc and zfs arc
 meta limits as I did not think they relevant. A zpool import shows the
 pool is OK and a sampling with zdb -l of the drives shows good labels.
 I started importing the zpool and after many hours it hung the system
 with out of resource errors. I had a number of tools running to see
 what was going on. The only thing this system is doing is importing
 the zpool.

 ARC had climbed to about 8 GB and then declined to 3 GB by the time
 the system hung. This tells me that there is something else consuming
 RAM and the ARC is releasing it.

 The hung TOP screen showed the largest user process only had 148 MB
 allocated (and much less resident).

 VMSTAT showed a scan rate of over 900,000 (NOT a typo) and almost 8 GB
 of free swap (so whatever is using memory cannot be paged out).

    So my guess is that there is a kernel module that is consuming all
 (and more) of the RAM in the box. I am looking for a way to query how
 much RAM each kernel module is using and script that in a loop (which
 will hang when the box runs out of RAM next). I am very open to
 suggestions here.

   Since this is the recv end of replication, I assume there was a zfs
 recv going on at the time the system initially hung. I know there was
 a 3+ TB snapshot replicating (via a 100 Mbps WAN link) when I left for
 vacation, that may have still been running. I also assume that any
 partial snapshots (% instead of @) are being removed when the pool is
 imported. But what could be causing a partial snapshot removal, even
 of a very large snapshot, to run the system out of RAM ? What caused
 the initial hang of the system (I assume due to out of RAM) ? I did
 not think there was a limit to the size of either a snapshot or a zfs
 recv.

 Hung TOP screen:

 load averages: 91.43, 33.48, 18.989             xxx-xxx1               
 18:45:34
 84 processes:  69 sleeping, 12 running, 1 zombie, 2 on cpu
 CPU states: 95.2% idle,  0.5% user,  4.4% kernel,  0.0% iowait,  0.0% swap
 Memory: 31.9G real, 199M free, 267M swap in use, 7.7G swap free

   PID USERNAME THR PR NCE  SIZE   RES STATE   TIME FLTS    CPU COMMAND
   533 root      51 59   0  148M 30.6M run   520:21    0  9.77% java
  1210 yy     1  0   0 5248K 1048K cpu25   2:08    0  2.23% xload
  14720 yy     1 59   0 3248K 1256K cpu24   1:56    0  0.03% top
   154 root       1 59   0 4024K 1328K sleep   1:17    0  0.02% vmstat
  1268 yy     1 59   0 4248K 1568K sleep   1:26    0  0.01% iostat
 ...

 VMSTAT:

 kthr      memory            page            disk          faults      cpu
  r b w   swap  free  re  mf pi po fr de sr m0 m1 m2 m3   in   sy   cs us sy id
  0 0 112 8117096 211888 55 46 0 0 425 0 912684 0 0 0 0  976  166  836  0  2 98
  0 0 112 8117096 211936 53 51 6 0 394 0 926702 0 0 0 0  976  167  833  0  2 98

 ARC size (B): 4065882656

 --
 {1-2-3-4-5-6-7-}
 Paul Kraus
 - Senior Systems Architect, Garnet River ( http://www.garnetriver.com/ )
 - Sound Designer: Frankenstein, A New Musical
 (http://www.facebook.com/event.php?eid=123170297765140)
 - Sound Coordinator, Schenectady Light Opera Company (
 http://www.sloctheater.org/ )
 - Technical Advisor, RPI Players




-- 

Re: [zfs-discuss] Expand ZFS storage.

2011-08-03 Thread LaoTsao
If you have 4 more HDDs, then expanding the pool is simple (add a second 4-HDD RAIDZ vdev alongside the existing one).

Sent from my iPad
Hung-Sheng Tsao ( LaoTsao) Ph.D

On Aug 3, 2011, at 3:02, Nix mithun.gaik...@gmail.com wrote:

 Hi,
 
 I have 4 disk with 1 TB of disk and I want to expand the zfs pool size.
 
 I have 2 more disk with 1 TB of size.
 
 Is it possible to expand the current RAIDz array with new disk?
 
 Thanks,
 Nix
 -- 
 This message posted from opensolaris.org
 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Expand ZFS storage.

2011-08-03 Thread Brandon High
On Wed, Aug 3, 2011 at 3:02 AM, Nix mithun.gaik...@gmail.com wrote:
 I have 4 disk with 1 TB of disk and I want to expand the zfs pool size.

 I have 2 more disk with 1 TB of size.

 Is it possible to expand the current RAIDz array with new disk?

You can't add the new drives to your current vdev. You can create
another vdev to add to your pool, though.

If you're adding another vdev, it should have the same geometry as
your current one (i.e. 4 drives). The zpool command will complain if you
try to add a vdev with a different geometry or redundancy, though you
can force it with -f.
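
As a sketch (pool and device names made up): adding a matching 4-disk raidz
vdev is the first command; with only the two new 1 TB disks, the closest
option is a mirror vdev, and zpool wants -f because the redundancy doesn't
match the existing raidz:

# zpool add tank raidz c1t4d0 c1t5d0 c1t6d0 c1t7d0
# zpool add -f tank mirror c1t4d0 c1t5d0

Either way the new vdev only adds capacity going forward; existing data is
not rebalanced onto it.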

-B

-- 
Brandon High : bh...@freaks.com


Re: [zfs-discuss] ZFS Fragmentation issue - examining the ZIL

2011-08-03 Thread Brandon High
On Mon, Aug 1, 2011 at 4:27 PM, Daniel Carosone d...@geek.com.au wrote:
 The other thing that can cause a storm of tiny IOs is dedup, and this
 effect can last long after space has been freed and/or dedup turned
 off, until all the blocks corresponding to DDT entries are rewritten.
 I wonder if this was involved here.

Using dedup on a pool that houses an Oracle DB is Doing It Wrong in so
many ways...

-B

-- 
Brandon High : bh...@freaks.com


Re: [zfs-discuss] Wrong rpool used after reinstall!

2011-08-03 Thread Ian Collins

 On 08/ 4/11 01:29 AM, Stuart James Whitefish wrote:

I have Solaris on Sparc boxes available if it would help to do a net install
or jumpstart. I have never done those and it looks complicated, although I
think I may be able to get to the point in the u9 installer on my Intel box 
where it asks me whether I want to install from DVD etc. But I may be wrong, 
and anyway the single user shell in the u9 DVD also panics when I try to import 
tank so maybe that won't help.


Put your old drive in a USB enclosure and connect it to another system in order 
to read back the data.



I have only 4 sata ports on this Intel box so I have to keep pulling cables
to be able to boot from a DVD and then I won't have all my drives
available. I cannot move these drives to any other box because they are
consumer drives and my servers all have ultras.


Most modern boards will boot from a live USB stick.

--
Ian.



Re: [zfs-discuss] ZFS Fragmentation issue - examining the ZIL

2011-08-03 Thread Daniel Carosone
On Wed, Aug 03, 2011 at 12:32:56PM -0700, Brandon High wrote:
 On Mon, Aug 1, 2011 at 4:27 PM, Daniel Carosone d...@geek.com.au wrote:
  The other thing that can cause a storm of tiny IOs is dedup, and this
  effect can last long after space has been freed and/or dedup turned
  off, until all the blocks corresponding to DDT entries are rewritten.
  I wonder if this was involved here.
 
 Using dedup on a pool that houses an Oracle DB is Doing It Wrong in so
 many ways...

Indeed, but alas people still Do It Wrong.  In particular, when a pool
is approaching full, turning on dedup might seem like an attractive
proposition to someone who doesn't understand the cost.

So I just wonder if they have, or had at some time in the past, enabled it.
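
A quick way to check (pool name made up; requires a pool version with dedup):

# zpool get dedupratio tank    # stays 1.00x if nothing is currently deduped
# zfs get -r dedup tank        # shows whether dedup is enabled on any dataset
# zdb -DD tank                 # prints a DDT histogram if dedup'd blocks remain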

--
Dan.



Re: [zfs-discuss] [illumos-Developer] revisiting aclmode options

2011-08-03 Thread Paul B. Henson

On 8/2/2011 7:07 AM, Gordon Ross wrote:


It seems consistent to me that a discard mode would simply never
present suid/sgid/sticky.  (It discards mode settings.) After all,
the suid/sgid/sticky bits don't have any counterpart in Windows
security descriptors, and Windows ACLs use inherited $CREATOR_OWNER
ACEs to do the equivalent of the sticky bit.


I see it somewhat differently; the purpose of discard is to prevent
any attempted change of the mode bits via chmod from affecting the ACL.
As you point out, there is no corresponding functionality in NFSv4 ACLs,
so by definition a change of the suid/sgid/sticky part of the mode bits
would not affect the ACL. And not allowing them to be changed would
result in lost functionality -- for example, setting the sgid bit on the
directory so the group owner is inherited on child directories, which is
actually quite valuable for the functionality of the group@ entry.

So I think the implementation of both a discard and deny aclmode
would need to incorporate the ability to modify the parts of the mode
bits that are not related to the ACL.
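
A small illustration of that sgid case (dataset and paths are made up; the
point is that the sgid bit lives only in the mode, with no ACE counterpart,
so even a discard/deny aclmode needs to let it be changed):

# zfs create -o aclmode=discard tank/proj
# chgrp staff /tank/proj; chmod g+s /tank/proj
# mkdir /tank/proj/sub
# ls -ld /tank/proj/sub        # group owner is staff, inherited via the sgid bit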


--
Paul B. Henson  |  (909) 979-6361  |  http://www.csupomona.edu/~henson/
Operating Systems and Network Analyst  |  hen...@csupomona.edu
California State Polytechnic University  |  Pomona CA 91768