Glenn Lagasse wrote:
> I'm looking to get input on some thoughts I have for configuring
> laptops with 2008.11.
>
> What I have in mind is changing some of the defaults that you get
> when you install 2008.11. I'm not advocating any changes to our
> installers, merely some manual tweaking that might be useful on at
> least my laptop, possibly others as well.
>
Well, others have advocated for installer changes; see bug 86. Doing
so would actually require performance PIT runs and so on, so I haven't
pursued it.

> ZFS
>
> Since most laptops (if not all) only ship with a single disk, the
> ZFS root pool we create contains a single vdev. ZFS by default
> stores multiple copies of metadata, but it requires more than one
> vdev to store multiple copies of data, unless of course you create
> your filesystems with the copies property set to 2 or more. It seems
> to me that setting copies=2 on our ZFS filesystems on laptops is a
> useful way to protect (at least as much as we can using a single
> vdev) against catastrophic data loss due to drive hardware issues.
>
> My question is: does copies=2 work the way I'm thinking, and more
> specifically, is there a reason we wouldn't want to do this (apart
> from using more space, but I'll get to that in a minute)?
>
> Also, does it make sense to set copies >1 on things like the dump
> and swap volumes? If you set the dump volume to have copies=2 then
> it takes up twice the space you create (so a 2G dump volume takes up
> 4G without a dump ever having been written). A related question
> about dump volumes: if we don't have some redundancy via copies=2,
> a) do we care, since dump should just be a copy of memory and only
> persists until the dump is saved, and b) can the system actually run
> in this fashion? By which I mean, if we only have one copy of *data*
> on the dump volume and the disk gets corrupted in the blocks where
> the dump volume lives, what's the failure mode? I don't really care
> if we just lose whatever dump was there (if there was one), but if
> the zpool becomes totally unusable because it can't correct the
> damage, that's a problem. The same sort of questions apply to swap,
> except I'd imagine we want copies=2 on swap so that we don't just
> die because ZFS couldn't recover from a data failure there.

This seems unlikely to be of substantial benefit to me, but I'd
suggest asking the ZFS and VM people.

> On laptop hard disks, it's probably a good idea to enable
> compression, partly because of their historically small size, but
> also because compression could help offset the cost of storing two
> copies of data. Disks are getting bigger, but that's neither here
> nor there for this discussion. I'm sure we don't want to compress
> the dump device (the exact reason escapes me), but things like
> rpool/ROOT/opensolaris and rpool/export seem like good candidates.
> Swap I'm not so sure about. Is there any benefit to compressing a
> ZFS volume like swap?

The dump data is compressed when written, so compressing the volume
would be redundant. I think compression is beneficial on newer,
multi-core systems with slow disks; older, slower CPUs will tend to
struggle. I ran my M5 laptop with an lzjb-compressed root and some
gzip-9-compressed data file systems for a long time before I wiped it
for 2008.11. I'd say performance was not noticeably different from
the non-compressed installation I have in place now.
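For what it's worth, everything above is just a handful of property
changes on the default datasets. Roughly what I ran on the M5, plus
the copies setting you propose (a sketch only; dataset names assume
the stock 2008.11 layout you mention, and note that copies applies
only to data written after the property is set, so do it as early as
possible):

  # second copy of user data; only affects newly written blocks
  zfs set copies=2 rpool/export

  # 'on' means lzjb, which is safe for the boot filesystem
  zfs set compression=on rpool/ROOT/opensolaris
  zfs set compression=gzip-9 rpool/export

  # afterwards, see what the compression is actually buying you
  zfs get compressratio rpool/export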
> Anyone know why I can't set the bootfs property on a zpool that has
> filesystems with compression=gzip and copies=2 set? When I try to
> set it, it says "cannot set property for 'rpool': operation not
> supported on this type of pool", and the GUI installer fails in
> _get_root_dataset: "Could not determine root dataset from vfstab
> ICT_EXPLICIT_BOOTFS_FAILED". I don't see this failure if I use the
> default lzjb compression instead of gzip, which is a shame, because
> lzjb gives you a 1.75x compression ratio whereas gzip (which is
> really gzip -6) gives 2.7x. Is gzip not supported on root pools?

The GRUB ZFS implementation doesn't have support for gzip. See CR
6538017.

> Not so much a question, but I'm also thinking that limiting the ZFS
> ARC to 1G (at least on systems with <= 4G of memory) is a good idea.
> I've noticed that interactivity in GNOME sometimes takes a nosedive
> if you let the ARC consume whatever it wants (regardless of whatever
> algorithm it uses for freeing things on demand).

Hard to respond without more description of the problems you've
encountered, but I haven't seen significant issues on any of my
systems, only one of which has more than 4 GB.

Dave
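P.S. If you do decide to cap the ARC, the usual knob is zfs_arc_max
in /etc/system. A sketch (the value is in bytes and takes effect at
the next boot; the 1 GB figure is your number, not my
recommendation):

  * Cap the ZFS ARC at 1 GB (0x40000000 bytes)
  set zfs:zfs_arc_max = 0x40000000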