Re: [zfs-discuss] Zvol vs zfs send/zfs receive

2012-09-15 Thread Bill Sommerfeld
On 09/14/12 22:39, Edward Ned Harvey (opensolarisisdeadlongliveopensolaris) wrote: From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of Dave Pooser: Unfortunately I did not realize that zvols require disk space sufficient to duplicate the zvol, and

Re: [zfs-discuss] Very poor small-block random write performance

2012-07-20 Thread Bill Sommerfeld
On 07/19/12 18:24, Traffanstead, Mike wrote: iozone doesn't vary the blocksize during the test, it's a very artificial test but it's useful for gauging performance under different scenarios. So for this test all of the writes would have been 64k blocks, 128k, etc. for that particular step.

Re: [zfs-discuss] New fast hash algorithm - is it needed?

2012-07-11 Thread Bill Sommerfeld
On 07/11/12 02:10, Sašo Kiselkov wrote: Oh jeez, I can't remember how many times this flame war has been going on on this list. Here's the gist: SHA-256 (or any good hash) produces a near uniform random distribution of output. Thus, the chances of getting a random hash collision are around
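
The uniform-distribution argument can be made concrete with the birthday bound. A rough sketch (the block count below is an illustrative assumption, not a figure from the thread):

```python
# Birthday-bound estimate of the chance of any SHA-256 collision in a pool.
# 2**35 blocks is an illustrative assumption: a ~4 PB pool of 128 KB records,
# far larger than most deployments.
def collision_probability(n_blocks: int, hash_bits: int = 256) -> float:
    """Approximate P(at least one collision) ~= n*(n-1) / 2**(hash_bits+1)."""
    return n_blocks * (n_blocks - 1) / 2 ** (hash_bits + 1)

p = collision_probability(2 ** 35)
# Even at this scale the probability is astronomically small (~1e-57),
# which is the point about a near-uniform random distribution of output.
```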

Re: [zfs-discuss] Advanced Format HDD's - are we there yet? (or - how to buy a drive that won't be teh sux0rs on zfs)

2012-05-28 Thread Bill Sommerfeld
On 05/28/12 17:13, Daniel Carosone wrote: There are two problems using ZFS on drives with 4k sectors: 1) if the drive lies and presents 512-byte sectors, and you don't manually force ashift=12, then the emulation can be slow (and possibly error prone). There is essentially an

Re: [zfs-discuss] zfs receive slowness - lots of systime spent in genunix`list_next ?

2011-12-05 Thread Bill Sommerfeld
On 12/05/11 10:47, Lachlan Mulcahy wrote:
zfs`lzjb_decompress          10  0.0%
unix`page_nextn              31  0.0%
genunix`fsflush_do_pages     37  0.0%
zfs`dbuf_free_range

Re: [zfs-discuss] zfs diff performance disappointing

2011-09-26 Thread Bill Sommerfeld
On 09/26/11 12:31, Nico Williams wrote: On Mon, Sep 26, 2011 at 1:55 PM, Jesus Cea <j...@jcea.es> wrote: Should I disable atime to improve zfs diff performance? (most data doesn't change, but atime of most files would change). atime has nothing to do with it. based on my experiences with

Re: [zfs-discuss] Encryption accelerator card recommendations.

2011-06-27 Thread Bill Sommerfeld
On 06/27/11 15:24, David Magda wrote: Given the amount of transistors that are available nowadays I think it'd be simpler to just create a series of SIMD instructions right in/on general CPUs, and skip the whole co-processor angle. see: http://en.wikipedia.org/wiki/AES_instruction_set Present

Re: [zfs-discuss] OpenIndiana | ZFS | scrub | network | awful slow

2011-06-16 Thread Bill Sommerfeld
On 06/16/11 15:36, Sven C. Merckens wrote: But is the L2ARC also important while writing to the device? Because the storages are used most of the time only for writing data on it, the Read-Cache (as I thought) isn't a performance-factor... Please correct me, if my thoughts are wrong. if

Re: [zfs-discuss] Disk replacement need to scan full pool ?

2011-06-14 Thread Bill Sommerfeld
On 06/14/11 04:15, Rasmus Fauske wrote: I want to replace some slow consumer drives with new edc re4 ones but when I do a replace it needs to scan the full pool and not only that disk set (or just the old drive). Is this normal? (the speed is always slow in the start so that's not what I am

Re: [zfs-discuss] Wired write performance problem

2011-06-08 Thread Bill Sommerfeld
On 06/08/11 01:05, Tomas Ögren wrote: And if pool usage is >90%, then there's another problem (change of finding free space algorithm). Another (less satisfying) workaround is to increase the amount of free space in the pool, either by reducing usage or adding more storage. Observed behavior

Re: [zfs-discuss] Available space confusion

2011-06-06 Thread Bill Sommerfeld
On 06/06/11 08:07, Cyril Plisko wrote: zpool reports space usage on disks, without taking into account RAIDZ overhead. zfs reports net capacity available, after RAIDZ overhead accounted for. Yup. Going back to the original numbers: nebol@filez:/$ zfs list tank2 NAME  USED  AVAIL  REFER
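
The gross-vs-net distinction can be sketched numerically; the disk counts below are hypothetical, not Cyril's actual pool:

```python
def raidz_usable(n_disks: int, parity: int, disk_size: float) -> float:
    """Net capacity of one raidz vdev: the parity disks' worth of space is
    overhead.  (Approximation only -- real pools also lose some space to
    padding and metadata.)"""
    assert 1 <= parity <= 3 and n_disks > parity
    return (n_disks - parity) * disk_size

# Hypothetical example: 8 x 1 TB disks in a raidz1 vdev.
gross = 8 * 1.0               # what `zpool list` reports (all disks)
net = raidz_usable(8, 1, 1.0) # roughly what `zfs list` shows as USED+AVAIL
```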

Re: [zfs-discuss] Is another drive worth anything?

2011-05-31 Thread Bill Sommerfeld
On 05/31/11 09:01, Anonymous wrote: Hi. I have a development system on Intel commodity hardware with a 500G ZFS root mirror. I have another 500G drive same as the other two. Is there any way to use this disk to good advantage in this box? I don't think I need any more redundancy, I would like

Re: [zfs-discuss] Format returning bogus controller info

2011-02-26 Thread Bill Sommerfeld
On 02/26/11 17:21, Dave Pooser wrote: While trying to add drives one at a time so I can identify them for later use, I noticed two interesting things: the controller information is unlike any I've seen before, and out of nine disks added after the boot drive all nine are attached to c12 -- and

Re: [zfs-discuss] ZFS send/recv initial data load

2011-02-16 Thread Bill Sommerfeld
On 02/16/11 07:38, white...@gmail.com wrote: Is it possible to use a portable drive to copy the initial zfs filesystem(s) to the remote location and then make the subsequent incrementals over the network? Yes. If so, what would I need to do to make sure it is an exact copy? Thank you,

Re: [zfs-discuss] Understanding directio, O_DSYNC and zfs_nocacheflush on ZFS

2011-02-07 Thread Bill Sommerfeld
On 02/07/11 11:49, Yi Zhang wrote: The reason why I tried that is to get the side effect of no buffering, which is my ultimate goal. ultimate = final. you must have a goal beyond the elimination of buffering in the filesystem. if the writes are made durable by zfs when you need them to be

Re: [zfs-discuss] Understanding directio, O_DSYNC and zfs_nocacheflush on ZFS

2011-02-07 Thread Bill Sommerfeld
On 02/07/11 12:49, Yi Zhang wrote: If buffering is on, the running time of my app doesn't reflect the actual I/O cost. My goal is to accurately measure the time of I/O. With buffering on, ZFS would batch up a bunch of writes and change both the original I/O activity and the time. if batching

Re: [zfs-discuss] ZFS advice for laptop

2011-01-04 Thread Bill Sommerfeld
On 01/04/11 18:40, Bob Friesenhahn wrote: Zfs will disable write caching if it sees that a partition is being used This is backwards. ZFS will enable write caching on a disk if a single pool believes it owns the whole disk. Otherwise, it will do nothing to caching. You can enable it

Re: [zfs-discuss] ZFS Crypto in Oracle Solaris 11 Express

2010-12-02 Thread Bill Sommerfeld
On 11/17/10 12:04, Miles Nordin wrote: black-box crypto is snake oil at any level, IMNSHO. Absolutely. Congrats again on finishing your project, but every other disk encryption framework I've seen taken remotely seriously has a detailed paper describing the algorithm, not just a list of

Re: [zfs-discuss] resilver = defrag?

2010-09-09 Thread Bill Sommerfeld
On 09/09/10 20:08, Edward Ned Harvey wrote: Scores so far: 2 No 1 Yes No. resilver does not re-layout your data or change what's in the block pointers on disk. if it was fragmented before, it will be fragmented after. C) Does zfs send | zfs receive mean it will defrag?

Re: [zfs-discuss] ZFS with Equallogic storage

2010-08-21 Thread Bill Sommerfeld
On 08/21/10 10:14, Ross Walker wrote: I am trying to figure out the best way to provide both performance and resiliency given the Equallogic provides the redundancy. (I have no specific experience with Equallogic; the following is just generic advice) Every bit stored in zfs is checksummed

Re: [zfs-discuss] Increase resilver priority

2010-07-23 Thread Bill Sommerfeld
On 07/23/10 02:31, Giovanni Tirloni wrote: We've seen some resilvers on idle servers that are taking ages. Is it possible to speed up resilver operations somehow? E.g. iostat shows <5MB/s writes on the replaced disks. What build of opensolaris are you running? There were some recent

Re: [zfs-discuss] L2ARC and ZIL on same SSD?

2010-07-22 Thread Bill Sommerfeld
On 07/22/10 04:00, Orvar Korvar wrote: Ok, so the bandwidth will be cut in half, and some people use this configuration. But, how bad is it to have the bandwidth cut in half? Will it hardly notice? For a home server, I doubt you'll notice. I've set up several systems (desktop home server) as

Re: [zfs-discuss] zpool throughput: snv 134 vs 138 vs 143

2010-07-20 Thread Bill Sommerfeld
On 07/20/10 14:10, Marcelo H Majczak wrote: It also seems to be issuing a lot more writing to rpool, though I can't tell what. In my case it causes a lot of read contention since my rpool is a USB flash device with no cache. iostat says something like up to 10w/20r per second. Up to 137 the

Re: [zfs-discuss] Dedup... still in beta status

2010-06-15 Thread Bill Sommerfeld
On 06/15/10 10:52, Erik Trimble wrote: Frankly, dedup isn't practical for anything but enterprise-class machines. It's certainly not practical for desktops or anything remotely low-end. We're certainly learning a lot about how zfs dedup behaves in practice. I've enabled dedup on two desktops

Re: [zfs-discuss] New SSD options

2010-05-20 Thread Bill Sommerfeld
On 05/20/10 12:26, Miles Nordin wrote: I don't know, though, what to do about these reports of devices that almost respect cache flushes but seem to lose exactly one transaction. AFAICT this should be a works/doesntwork situation, not a continuum. But there's so much brokenness out there.

Re: [zfs-discuss] ZFS root ARC memory usage on VxFS system...

2010-05-07 Thread Bill Sommerfeld
On 05/07/10 15:05, Kris Kasner wrote: Is ZFS swap cached in the ARC? I can't account for data in the ZFS filesystems to use as much ARC as is in use without the swap files being cached.. seems a bit redundant? There's nothing to explicitly disable caching just for swap; from zfs's point of

Re: [zfs-discuss] Single-disk pool corrupted after controller failure

2010-05-01 Thread Bill Sommerfeld
On 05/01/10 13:06, Diogo Franco wrote: After seeing that on some cases labels were corrupted, I tried running zdb -l on mine: ... (labels 0, 1 not there, labels 2, 3 are there). I'm looking for pointers on how to fix this situation, since the disk still has available metadata. there are two

Re: [zfs-discuss] Is it safe/possible to idle HD's in a ZFS Vdev to save wear/power?

2010-04-17 Thread Bill Sommerfeld
On 04/16/10 20:26, Joe wrote: I was just wondering if it is possible to spindown/idle/sleep hard disks that are part of a Vdev pool SAFELY? it's possible. my ultra24 desktop has this enabled by default (because it's a known desktop type). see the power.conf man page; I think you may need

Re: [zfs-discuss] SSD best practices

2010-04-17 Thread Bill Sommerfeld
On 04/17/10 07:59, Dave Vrona wrote: 1) Mirroring. Leaving cost out of it, should ZIL and/or L2ARC SSDs be mirrored ? L2ARC cannot be mirrored -- and doesn't need to be. The contents are checksummed; if the checksum doesn't match, it's treated as a cache miss and the block is re-read from
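
The treat-corruption-as-a-miss behavior described here can be sketched as follows (a toy model with invented names, not the actual ARC code):

```python
import hashlib

def checksum(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

class ToyL2ARC:
    """Toy model: a cache whose entries are verified on read.
    A corrupt entry is simply treated as a miss -- no mirror needed."""
    def __init__(self, pool: dict):
        self.pool = pool   # authoritative copies (the main pool vdevs)
        self.cache = {}    # block_id -> (data, checksum)

    def put(self, block_id, data: bytes):
        self.cache[block_id] = (data, checksum(data))

    def read(self, block_id) -> bytes:
        entry = self.cache.get(block_id)
        if entry is not None:
            data, cksum = entry
            if checksum(data) == cksum:
                return data               # verified cache hit
            del self.cache[block_id]      # corrupt: drop it, fall through
        return self.pool[block_id]        # miss: re-read from the pool

pool = {"b1": b"hello"}
cache = ToyL2ARC(pool)
cache.put("b1", b"hello")
cache.cache["b1"] = (b"XXXXX", cache.cache["b1"][1])  # simulate bit rot
```

A subsequent `cache.read("b1")` still returns the correct data from the pool, which is why mirroring the cache device buys nothing.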

Re: [zfs-discuss] Suggestions about current ZFS setup

2010-04-14 Thread Bill Sommerfeld
On 04/14/10 12:37, Christian Molson wrote: First I want to thank everyone for their input, It is greatly appreciated. To answer a few questions: Chassis I have: http://www.supermicro.com/products/chassis/4U/846/SC846E2-R900.cfm Motherboard:

Re: [zfs-discuss] dedup screwing up snapshot deletion

2010-04-14 Thread Bill Sommerfeld
On 04/14/10 19:51, Richard Jahnel wrote: This sounds like the known issue about the dedupe map not fitting in ram. Indeed, but this is not correct: When blocks are freed, dedupe scans the whole map to ensure each block is not in use before releasing it. That's not correct. dedup uses a
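
The correction (the reply is truncated above) is that dedup keeps a reference count per unique block, so a free is a counter decrement, not a scan of the whole map. A minimal sketch of that bookkeeping (invented names, not the on-disk dedup table format):

```python
import hashlib

class ToyDedupTable:
    """Each unique block carries a refcount; freeing decrements it and
    releases the block only when the count reaches zero -- no full scan."""
    def __init__(self):
        self.table = {}   # checksum -> [data, refcount]

    def write(self, data: bytes) -> bytes:
        key = hashlib.sha256(data).digest()
        if key in self.table:
            self.table[key][1] += 1       # duplicate: bump refcount only
        else:
            self.table[key] = [data, 1]   # first copy: allocate
        return key

    def free(self, key: bytes):
        entry = self.table[key]
        entry[1] -= 1
        if entry[1] == 0:
            del self.table[key]           # last reference gone: release block

ddt = ToyDedupTable()
k = ddt.write(b"block")
ddt.write(b"block")   # second logical copy, same physical block
ddt.free(k)           # O(1) decrement, block stays for the other reference
```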

Re: [zfs-discuss] Secure delete?

2010-04-11 Thread Bill Sommerfeld
On 04/11/10 10:19, Manoj Joseph wrote: Earlier writes to the file might have left older copies of the blocks lying around which could be recovered. Indeed; to be really sure you need to overwrite all the free space in the pool. If you limit yourself to worrying about data accessible via a

Re: [zfs-discuss] Secure delete?

2010-04-11 Thread Bill Sommerfeld
On 04/11/10 12:46, Volker A. Brandt wrote: The most paranoid will replace all the disks and then physically destroy the old ones. I thought the most paranoid will encrypt everything and then forget the key... :-) Actually, I hear that the most paranoid encrypt everything *and then* destroy

Re: [zfs-discuss] SSD sale on newegg

2010-04-06 Thread Bill Sommerfeld
On 04/06/10 17:17, Richard Elling wrote: You could probably live with an X25-M as something to use for all three, but of course you're making tradeoffs all over the place. That would be better than almost any HDD on the planet because the HDD tradeoffs result in much worse performance.

Re: [zfs-discuss] Tuning the ARC towards LRU

2010-04-05 Thread Bill Sommerfeld
On 04/05/10 15:24, Peter Schuller wrote: In the urxvt case, I am basing my claim on informal observations. I.e., hit terminal launch key, wait for disks to rattle, get my terminal. Repeat. Only by repeating it very many times in very rapid succession am I able to coerce it to be cached such that

Re: [zfs-discuss] Proposition of a new zpool property.

2010-03-22 Thread Bill Sommerfeld
On 03/22/10 11:02, Richard Elling wrote: Scrub tends to be a random workload dominated by IOPS, not bandwidth. you may want to look at this again post build 128; the addition of metadata prefetch to scrub/resilver in that build appears to have dramatically changed how it performs (largely for

Re: [zfs-discuss] sympathetic (or just multiple) drive failures

2010-03-20 Thread Bill Sommerfeld
On 03/19/10 19:07, zfs ml wrote: What are peoples' experiences with multiple drive failures? 1985-1986. DEC RA81 disks. Bad glue that degraded at the disk's operating temperature. Head crashes. No more need be said. - Bill

Re: [zfs-discuss] Scrub not completing?

2010-03-17 Thread Bill Sommerfeld
On 03/17/10 14:03, Ian Collins wrote: I ran a scrub on a Solaris 10 update 8 system yesterday and it is 100% done, but not complete: scrub: scrub in progress for 23h57m, 100.00% done, 0h0m to go. Don't panic. If zpool iostat still shows active reads from all disks in the pool, just step

Re: [zfs-discuss] Snapshot recycle freezes system activity

2010-03-08 Thread Bill Sommerfeld
On 03/08/10 12:43, Tomas Ögren wrote: So we tried adding 2x 4GB USB sticks (Kingston Data Traveller Mini Slim) as metadata L2ARC and that seems to have pushed the snapshot times down to about 30 seconds. Out of curiosity, how much physical memory does this system have?

Re: [zfs-discuss] terrible ZFS performance compared to UFS on ramdisk (70% drop)

2010-03-08 Thread Bill Sommerfeld
On 03/08/10 17:57, Matt Cowger wrote: Change zfs options to turn off checksumming (don't want it or need it), atime, compression, 4K block size (this is the applications native blocksize) etc. even when you disable checksums and compression through the zfs command, zfs will still compress

Re: [zfs-discuss] swap across multiple pools

2010-03-03 Thread Bill Sommerfeld
On 03/03/10 05:19, Matt Keenan wrote: In a multipool environment, would it make sense to add swap to a pool outside of the root pool, either as the sole swap dataset to be used or as extra swap? Yes. I do it routinely, primarily to preserve space on boot disks on large-memory systems.

Re: [zfs-discuss] Who is using ZFS ACL's in production?

2010-03-02 Thread Bill Sommerfeld
On 03/02/10 08:13, Fredrich Maney wrote: Why not do the same sort of thing and use that extra bit to flag a file, or directory, as being an ACL only file and will negate the rest of the mask? That accomplishes what Paul is looking for, without breaking the existing model for those that need/wish

Re: [zfs-discuss] compressed root pool at installation time with flash archive predeployment script

2010-03-02 Thread Bill Sommerfeld
On 03/02/10 12:57, Miles Nordin wrote: cc == chad campbell <chad.campb...@cummins.com> writes: cc I was trying to think of a way to set compression=on cc at the beginning of a jumpstart. are you sure grub/ofwboot/whatever can read compressed files? Grub and the sparc zfs boot

Re: [zfs-discuss] Who is using ZFS ACL's in production?

2010-03-01 Thread Bill Sommerfeld
On 03/01/10 13:50, Miles Nordin wrote: dd == David Dyer-Bennet <d...@dd-b.net> writes: dd Okay, but the argument goes the other way just as well -- when dd I run chmod 6400 foobar, I want the permissions set that dd specific way, and I don't want some magic background feature

Re: [zfs-discuss] ZFS compression and deduplication on root pool on SSD

2010-02-28 Thread Bill Sommerfeld
On 02/28/10 15:58, valrh...@gmail.com wrote: Also, I don't have the numbers to prove this, but it seems to me that the actual size of rpool/ROOT has grown substantially since I did a clean install of build 129a (I'm now at build 133). Without compression, either, that was around 24 GB, but

Re: [zfs-discuss] Who is using ZFS ACL's in production?

2010-02-26 Thread Bill Sommerfeld
On 02/26/10 10:45, Paul B. Henson wrote: I've already posited as to an approach that I think would make a pure-ACL deployment possible: http://mail.opensolaris.org/pipermail/zfs-discuss/2010-February/037206.html Via this concept or something else, there needs to be a way to configure

Re: [zfs-discuss] Freeing unused space in thin provisioned zvols

2010-02-26 Thread Bill Sommerfeld
On 02/26/10 11:42, Lutz Schumann wrote: Idea: - If the guest writes a block with 0's only, the block is freed again - if someone reads this block again - it will get the same 0's it would get if the 0's would be written - The checksum of an all-0 block can be hard-coded for SHA1 /
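
The proposal in the post can be sketched as follows; the zero-detection and the precomputed checksum are the poster's idea, while the code and names are hypothetical:

```python
import hashlib

BLOCK_SIZE = 4096
ZERO_BLOCK = b"\x00" * BLOCK_SIZE
# The checksum of the all-zero block can be computed once and compared cheaply.
ZERO_CHECKSUM = hashlib.sha256(ZERO_BLOCK).digest()

class ThinZvol:
    """Toy thin-provisioned volume: all-zero writes free the block instead
    of allocating it; reads of unallocated blocks return zeros."""
    def __init__(self):
        self.blocks = {}   # lba -> data (only non-zero blocks are stored)

    def write(self, lba: int, data: bytes):
        if hashlib.sha256(data).digest() == ZERO_CHECKSUM:
            self.blocks.pop(lba, None)   # free: the guest zeroed this block
        else:
            self.blocks[lba] = data

    def read(self, lba: int) -> bytes:
        return self.blocks.get(lba, ZERO_BLOCK)

vol = ThinZvol()
vol.write(0, b"\x01" * BLOCK_SIZE)
vol.write(0, ZERO_BLOCK)   # guest "deletes" by writing zeros
```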

Re: [zfs-discuss] Who is using ZFS ACL's in production?

2010-02-26 Thread Bill Sommerfeld
On 02/26/10 17:38, Paul B. Henson wrote: As I wrote in that new sub-thread, I see no option that isn't surprising in some way. My preference would be for what I labeled as option (b). And I think you absolutely should be able to configure your fileserver to implement your preference. Why

Re: [zfs-discuss] ZFS ZIL + L2ARC SSD Setup

2010-02-12 Thread Bill Sommerfeld
On 02/12/10 09:36, Felix Buenemann wrote: given I've got ~300GB L2ARC, I'd need about 7.2GB RAM, so upgrading to 8GB would be enough to satisfy the L2ARC. But that would only leave ~800MB free for everything else the server needs to do. - Bill
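
Felix's ~7.2GB figure follows from per-block ARC header overhead. The constants below (8 KB average block, ~200 bytes of in-core header per cached block) are rule-of-thumb assumptions that vary by release and workload:

```python
def l2arc_ram_overhead(l2arc_bytes: int, avg_block: int = 8192,
                       header_bytes: int = 200) -> int:
    """RAM consumed by in-core headers for L2ARC-resident blocks.
    avg_block and header_bytes are assumptions, not fixed ZFS constants."""
    return l2arc_bytes // avg_block * header_bytes

ram = l2arc_ram_overhead(300 * 2**30)   # ~300 GiB of L2ARC
# Comes out to roughly 7.3 GiB -- the same ballpark as the ~7.2GB above.
```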

Re: [zfs-discuss] Reading ZFS config for an extended period

2010-02-11 Thread Bill Sommerfeld
On 02/11/10 10:33, Lori Alt wrote: This bug is closed as a dup of another bug which is not readable from the opensolaris site, (I'm not clear what makes some bugs readable and some not). the other bug in question was opened yesterday and probably hasn't had time to propagate.

Re: [zfs-discuss] most of my space is gone

2010-02-06 Thread Bill Sommerfeld
On 02/06/10 08:38, Frank Middleton wrote: AFAIK there is no way to get around this. You can set a flag so that pkg tries to empty /var/pkg/downloads, but even though it looks empty, it won't actually become empty until you delete the snapshots, and IIRC you still have to manually delete the

Re: [zfs-discuss] server hang with compression on, ping timeouts from remote machine

2010-01-31 Thread Bill Sommerfeld
On 01/31/10 07:07, Christo Kutrovsky wrote: I've also experienced similar behavior (short freezes) when running zfs send|zfs receive with compression on LOCALLY on ZVOLs again. Has anyone else experienced this? Know of any bug? This is on snv117. you might also get better results after the

Re: [zfs-discuss] zvol being charged for double space

2010-01-27 Thread Bill Sommerfeld
On 01/27/10 21:17, Daniel Carosone wrote: This is as expected. Not expected is that: usedbyrefreservation = refreservation I would expect this to be 0, since all the reserved space has been allocated. This would be the case if the volume had no snapshots. As a result, used is over twice
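
Why `used` can exceed twice the volume size: a snapshot pins the existing copies of every block, while the refreservation must still guarantee room for a complete rewrite. A simplified accounting model with hypothetical numbers (the real `usedby*` breakdown has more terms):

```python
# Simplified model of zvol space accounting once a snapshot exists.
volsize = 100               # GB; refreservation defaults to volsize
usedbydataset = 100         # GB: live blocks, all also held by the snapshot
usedbysnapshots = 10        # GB of blocks unique to the snapshot (illustrative)

# The snapshot pins every live block, so a worst-case full rewrite needs a
# whole volume's worth of fresh space: the reservation stays fully charged,
# which is why usedbyrefreservation == refreservation rather than 0.
usedbyrefreservation = volsize

used = usedbydataset + usedbysnapshots + usedbyrefreservation
```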

Re: [zfs-discuss] Disks and caches

2010-01-07 Thread Bill Sommerfeld
On Thu, 2010-01-07 at 11:07 -0800, Anil wrote: There is talk about using those cheap disks for rpool. Isn't rpool also prone to a lot of writes, specifically when the /tmp is in a SSD? Huh? By default, solaris uses tmpfs for /tmp, /var/run, and /etc/svc/volatile; writes to those filesystems

Re: [zfs-discuss] zpool fragmentation issues?

2009-12-15 Thread Bill Sommerfeld
On Tue, 2009-12-15 at 17:28 -0800, Bill Sprouse wrote: After running for a while (couple of months) the zpool seems to get fragmented, backups take 72 hours and a scrub takes about 180 hours. Are there periodic snapshots being created in this pool? Can they run with atime turned

Re: [zfs-discuss] zfs on ssd

2009-12-11 Thread Bill Sommerfeld
On Fri, 2009-12-11 at 13:49 -0500, Miles Nordin wrote: sh == Seth Heeren s...@zfs-fuse.net writes: sh If you don't want/need log or cache, disable these? You might sh want to run your ZIL (slog) on ramdisk. seems quite silly. why would you do that instead of just disabling the

Re: [zfs-discuss] Resilver/scrub times?

2009-11-22 Thread Bill Sommerfeld
Yesterday's integration of 6678033 ("resilver code should prefetch") as part of changeset 74e8c05021f1 (which should be in build 129 when it comes out) may improve scrub times, particularly if you have a large number of small files and a large number of snapshots. I recently tested an early

Re: [zfs-discuss] zfs eradication

2009-11-11 Thread Bill Sommerfeld
On Wed, 2009-11-11 at 10:29 -0800, Darren J Moffat wrote: Joerg Moellenkamp wrote: Hi, Well ... i think Darren should implement this as a part of zfs-crypto. Secure Delete on SSD looks like quite a challenge, when wear leveling and bad block relocation kicks in ;) No I won't be doing

Re: [zfs-discuss] This is the scrub that never ends...

2009-11-10 Thread Bill Sommerfeld
On Fri, 2009-09-11 at 13:51 -0400, Will Murnane wrote: On Thu, Sep 10, 2009 at 13:06, Will Murnane <will.murn...@gmail.com> wrote: On Wed, Sep 9, 2009 at 21:29, Bill Sommerfeld <sommerf...@sun.com> wrote: Any suggestions? Let it run for another day. I'll let it keep running as long

Re: [zfs-discuss] dedupe question

2009-11-07 Thread Bill Sommerfeld
On Sat, 2009-11-07 at 17:41 -0500, Dennis Clarke wrote: Does the dedupe functionality happen at the file level or a lower block level? it occurs at the block allocation level. I am writing a large number of files that have the following structure: -- file begins 1024 lines of random ASCII
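
Since dedup works per allocated block, files sharing an identical preamble (like the 1024 lines described here) would share their leading blocks even though the files differ later. A toy illustration of block-level vs file-level granularity (record size and contents invented):

```python
import hashlib

RECORDSIZE = 128 * 1024

def unique_blocks(files, recordsize=RECORDSIZE):
    """Count physical blocks stored if identical blocks are shared."""
    seen = set()
    for data in files:
        for off in range(0, len(data), recordsize):
            seen.add(hashlib.sha256(data[off:off + recordsize]).digest())
    return len(seen)

shared_head = b"A" * RECORDSIZE                       # identical first block
file1 = shared_head + b"tail-one".ljust(RECORDSIZE, b"\x00")
file2 = shared_head + b"tail-two".ljust(RECORDSIZE, b"\x00")

# File-level dedup would store 4 blocks (the files differ as wholes);
# block-level dedup stores 3: the shared head is kept only once.
stored = unique_blocks([file1, file2])
```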

Re: [zfs-discuss] sched regularily writing a lots of MBs to the pool?

2009-11-04 Thread Bill Sommerfeld
zfs groups writes together into transaction groups; the physical writes to disk are generally initiated by kernel threads (which appear in dtrace as threads of the sched process). Changing the attribution is not going to be simple as a single physical write to the pool may contain data and

Re: [zfs-discuss] Resilvering, amount of data on disk, etc.

2009-10-26 Thread Bill Sommerfeld
On Mon, 2009-10-26 at 10:24 -0700, Brian wrote: Why does resilvering an entire disk, yield different amounts of data that was resilvered each time. I have read that ZFS only resilvers what it needs to, but in the case of replacing an entire disk with another formatted clean disk, you would

Re: [zfs-discuss] Which directories must be part of rpool?

2009-09-25 Thread Bill Sommerfeld
On Fri, 2009-09-25 at 14:39 -0600, Lori Alt wrote: The list of datasets in a root pool should look something like this: ... rpool/swap I've had success with putting swap into other pools. I believe others have, as well. - Bill

Re: [zfs-discuss] RAIDZ versus mirrroed

2009-09-18 Thread Bill Sommerfeld
On Wed, 2009-09-16 at 14:19 -0700, Richard Elling wrote: Actually, I had a ton of data on resilvering which shows mirrors and raidz equivalently bottlenecked on the media write bandwidth. However, there are other cases which are IOPS bound (or CR bound :-) which cover some of the postings

Re: [zfs-discuss] This is the scrub that never ends...

2009-09-09 Thread Bill Sommerfeld
On Wed, 2009-09-09 at 21:30 +0000, Will Murnane wrote: Some hours later, here I am again: scrub: scrub in progress for 18h24m, 100.00% done, 0h0m to go Any suggestions? Let it run for another day. A pool on a build server I manage takes about 75-100 hours to scrub, but typically starts

Re: [zfs-discuss] zfs kernel compilation issue

2009-08-29 Thread Bill Sommerfeld
On Fri, 2009-08-28 at 23:12 -0700, P. Anil Kumar wrote: I would like to know why its picking up amd64 config params from the Makefile, while uname -a clearly shows that its i386 ? it's behaving as designed. on solaris, uname -a always shows i386 regardless of whether the system is in 32-bit

Re: [zfs-discuss] avail drops to 32.1T from 40.8T after create -o mountpoint

2009-07-30 Thread Bill Sommerfeld
On Wed, 2009-07-29 at 06:50 -0700, Glen Gunselman wrote: There was a time when manufacturers know about base-2 but those days are long gone. Oh, they know all about base-2; it's just that disks seem bigger when you use base-10 units. Measure a disk's size in 10^(3n)-based KB/MB/GB/TB units,
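
The effect compounds with each power of 1000, which is why TB-class disks "lose" more than older MB-class ones did. A quick check (pure arithmetic, no assumptions):

```python
def decimal_vs_binary(power: int) -> float:
    """Ratio of a 10^(3n) marketing unit to the matching 2^(10n) unit.
    power=1 -> KB/KiB, 2 -> MB/MiB, 3 -> GB/GiB, 4 -> TB/TiB."""
    return 10 ** (3 * power) / 2 ** (10 * power)

# A "1 TB" disk is only ~0.909 TiB, while "1 KB" was ~0.977 KiB:
# the bigger the unit, the larger the apparent shrinkage.
ratios = [decimal_vs_binary(n) for n in (1, 2, 3, 4)]
```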

Re: [zfs-discuss] Speeding up resilver on x4500

2009-06-22 Thread Bill Sommerfeld
On Mon, 2009-06-22 at 06:06 -0700, Richard Elling wrote: Nevertheless, in my lab testing, I was not able to create a random-enough workload to not be write limited on the reconstructing drive. Anecdotal evidence shows that some systems are limited by the random reads. Systems I've run which

Re: [zfs-discuss] compression at zfs filesystem creation

2009-06-19 Thread Bill Sommerfeld
On Wed, 2009-06-17 at 12:35 +0200, casper@sun.com wrote: I still use disk swap because I have some bad experiences with ZFS swap. (ZFS appears to cache and that is very wrong) I'm experimenting with running zfs swap with the primarycache attribute set to metadata instead of the default

Re: [zfs-discuss] schedulers [was: zfs related google summer of code ideas - your vote]

2009-03-04 Thread Bill Sommerfeld
On Wed, 2009-03-04 at 12:49 -0800, Richard Elling wrote: But I'm curious as to why you would want to put both the slog and L2ARC on the same SSD? Reducing part count in a small system. For instance: adding L2ARC+slog to a laptop. I might only have one slot free to allocate to ssd. IMHO the

Re: [zfs-discuss] ZFS: unreliable for professional usage?

2009-02-12 Thread Bill Sommerfeld
On Thu, 2009-02-12 at 17:35 -0500, Blake wrote: That does look like the issue being discussed. It's a little alarming that the bug was reported against snv54 and is still not fixed :( bugs.opensolaris.org's information about this bug is out of date. It was fixed in snv_54: changeset:

Re: [zfs-discuss] Problems at 90% zpool capacity 2008.05

2009-01-07 Thread Bill Sommerfeld
On Tue, 2009-01-06 at 22:18 -0700, Neil Perrin wrote: I vaguely remember a time when UFS had limits to prevent ordinary users from consuming past a certain limit, allowing only the super-user to use it. Not that I'm advocating that approach for ZFS. looks to me like zfs already provides a

Re: [zfs-discuss] Setting per-file record size / querying fs/file record size?

2008-10-22 Thread Bill Sommerfeld
On Wed, 2008-10-22 at 10:30 +0100, Darren J Moffat wrote: I'm assuming this is local filesystem rather than ZFS backed NFS (which is what I have). Correct, on a laptop. What has setting the 32KB recordsize done for the rest of your home dir, or did you give the evolution directory its own

Re: [zfs-discuss] Disabling COMMIT at NFS level, or disabling ZIL on a per-filesystem basis

2008-10-22 Thread Bill Sommerfeld
On Wed, 2008-10-22 at 10:45 -0600, Neil Perrin wrote: Yes: 6280630 zil synchronicity Though personally I've been unhappy with the exposure that zil_disable has got. It was originally meant for debug purposes only. So providing an official way to make synchronous behaviour asynchronous is

Re: [zfs-discuss] Tool to figure out optimum ZFS recordsize for a Mail server Maildir tree?

2008-10-22 Thread Bill Sommerfeld
On Wed, 2008-10-22 at 09:46 -0700, Mika Borner wrote: If I turn zfs compression on, does the recordsize influence the compressratio in anyway? zfs conceptually chops the data into recordsize chunks, then compresses each chunk independently, allocating on disk only the space needed to store each
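
That chunk-then-compress behavior can be sketched with zlib standing in for zfs's compressors (recordsize and data are illustrative):

```python
import zlib

def compressed_allocation(data: bytes, recordsize: int = 128 * 1024) -> int:
    """Compress each recordsize chunk independently, as zfs conceptually
    does; an incompressible chunk is stored as-is (never larger than the
    recordsize chunk itself)."""
    total = 0
    for off in range(0, len(data), recordsize):
        chunk = data[off:off + recordsize]
        total += min(len(zlib.compress(chunk)), len(chunk))
    return total

data = b"the quick brown fox " * 50_000   # ~1 MB of repetitive text
on_disk = compressed_allocation(data)
# Highly repetitive data compresses well within each chunk,
# so on_disk is a small fraction of len(data).
```

Note the design consequence visible here: a smaller recordsize gives the compressor less context per chunk, which generally lowers the compression ratio.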

Re: [zfs-discuss] Setting per-file record size / querying fs/file record size?

2008-10-21 Thread Bill Sommerfeld
On Mon, 2008-10-20 at 16:57 -0500, Nicolas Williams wrote: I've a report that the mismatch between SQLite3's default block size and ZFS' causes some performance problems for Thunderbird users. I was seeing a severe performance problem with sqlite3 databases as used by evolution (not

Re: [zfs-discuss] Quantifying ZFS reliability

2008-10-01 Thread Bill Sommerfeld
On Wed, 2008-10-01 at 11:54 -0600, Robert Thurlow wrote: like they are not good enough though, because unless this broken router that Robert and Darren saw was doing NAT, yeah, it should not have touched the TCP/UDP checksum. NAT was not involved. I believe we proved that the problem bit

Re: [zfs-discuss] resilver speed.

2008-09-05 Thread Bill Sommerfeld
On Fri, 2008-09-05 at 09:41 -0700, Richard Elling wrote: Also does the resilver deliberately pause? Running iostat I see that it will pause for five to ten seconds where no IO is done at all, then it continues on at a more reasonable pace. I have not seen such behaviour during resilver

Re: [zfs-discuss] Sidebar to ZFS Availability discussion

2008-09-02 Thread Bill Sommerfeld
On Sun, 2008-08-31 at 12:00 -0700, Richard Elling wrote: 2. The algorithm *must* be computationally efficient. We are looking down the tunnel at I/O systems that can deliver on the order of 5 Million iops. We really won't have many (any?) spare cycles to play with.

Re: [zfs-discuss] Sidebar to ZFS Availability discussion

2008-09-02 Thread Bill Sommerfeld
On Sun, 2008-08-31 at 15:03 -0400, Miles Nordin wrote: It's sort of like network QoS, but not quite, because: (a) you don't know exactly how big the ``pipe'' is, only approximately, In an ip network, end nodes generally know no more than the pipe size of the first hop -- and in

Re: [zfs-discuss] Availability: ZFS needs to handle disk removal / driver failure better

2008-08-28 Thread Bill Sommerfeld
On Thu, 2008-08-28 at 13:05 -0700, Eric Schrock wrote: A better option would be to not use this to perform FMA diagnosis, but instead work into the mirror child selection code. This has already been alluded to before, but it would be cool to keep track of latency over time, and use this to

Re: [zfs-discuss] Best layout for 15 disks?

2008-08-22 Thread Bill Sommerfeld
On Thu, 2008-08-21 at 21:15 -0700, mike wrote: I've seen 5-6 disk zpools are the most recommended setup. This is incorrect. Much larger zpools built out of striped redundant vdevs (mirror, raidz1, raidz2) are recommended and also work well. raidz1 or raidz2 vdevs of more than a single-digit
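
The striped-redundant-vdevs tradeoff can be put in numbers with two hypothetical 15-disk layouts:

```python
def pool_usable(vdevs: int, disks_per_vdev: int, parity: int) -> int:
    """Disks' worth of usable space in a pool striped across identical
    raidz vdevs (ignores metadata and padding overhead)."""
    return vdevs * (disks_per_vdev - parity)

# Two hypothetical ways to lay out 15 disks:
wide = pool_usable(1, 15, 2)    # one 15-wide raidz2: 13 disks usable,
                                # but roughly single-vdev random IOPS
striped = pool_usable(3, 5, 1)  # 3 x 5-disk raidz1: 12 disks usable,
                                # ~3x the random IOPS of the single vdev
```

The striped layout trades one disk of capacity for much better small-random-I/O behavior, which is why wide single-digit-plus raidz vdevs are discouraged.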

Re: [zfs-discuss] more ZFS recovery

2008-08-07 Thread Bill Sommerfeld
On Thu, 2008-08-07 at 11:34 -0700, Richard Elling wrote: How would you describe the difference between the data recovery utility and ZFS's normal data recovery process? I'm not Anton but I think I see what he's getting at. Assume you have disks which once contained a pool but all of the

Re: [zfs-discuss] Checksum error: which of my files have failed scrubbing?

2008-08-05 Thread Bill Sommerfeld
On Tue, 2008-08-05 at 12:11 -0700, soren wrote: soren wrote: ZFS has detected that my root filesystem has a small number of errors. Is there a way to tell which specific files have been corrupted? After a scrub, zpool status -v should give you a list of files with unrecoverable

Re: [zfs-discuss] Block unification in ZFS

2008-08-05 Thread Bill Sommerfeld
See the long thread titled ZFS deduplication, last active approximately 2 weeks ago.

Re: [zfs-discuss] Can I trust ZFS?

2008-08-03 Thread Bill Sommerfeld
On Sun, 2008-08-03 at 11:42 -0500, Bob Friesenhahn wrote: Zfs makes human error really easy. For example $ zpool destroy mypool Note that zpool destroy can be undone by zpool import -D (if you get to it before the disks are overwritten).
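The destroy/undo sequence Bill describes, sketched with a hypothetical pool name:

```shell
# "zpool destroy" only marks the pool destroyed on disk; until the
# devices are reused, the pool can still be found and recovered.
zpool destroy mypool
zpool import -D           # lists pools marked as destroyed
zpool import -D mypool    # re-imports the pool, undoing the destroy
```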

Re: [zfs-discuss] checksum errors on root pool after upgrade to snv_94

2008-07-20 Thread Bill Sommerfeld
On Fri, 2008-07-18 at 10:28 -0700, Jürgen Keil wrote: I ran a scrub on a root pool after upgrading to snv_94, and got checksum errors: Hmm, after reading this, I started a zpool scrub on my mirrored pool, on a system that is running post snv_94 bits: It also found checksum errors #

[zfs-discuss] checksum errors on root pool after upgrade to snv_94

2008-07-17 Thread Bill Sommerfeld
I ran a scrub on a root pool after upgrading to snv_94, and got checksum errors: pool: r00t state: ONLINE status: One or more devices has experienced an unrecoverable error. An attempt was made to correct the error. Applications are unaffected. action: Determine if the device needs

Re: [zfs-discuss] J4500 device renumbering

2008-07-15 Thread Bill Sommerfeld
On Tue, 2008-07-15 at 15:32 -0500, Bob Friesenhahn wrote: On Tue, 15 Jul 2008, Ross Smith wrote: It sounds like you might be interested to read up on Eric Schrock's work. I read today about some of the stuff he's been doing to bring integrated fault management to Solaris:

Re: [zfs-discuss] [caiman-discuss] swap dump on ZFS volume

2008-06-24 Thread Bill Sommerfeld
On Tue, 2008-06-24 at 09:41 -0700, Richard Elling wrote: IMHO, you can make dump optional, with no dump being default. Before Sommerfeld pounces on me (again :-)) actually, in the case of virtual machines, doing the dump *in* the virtual machine into preallocated virtual disk blocks is silly.

Re: [zfs-discuss] Growing root pool ?

2008-06-11 Thread Bill Sommerfeld
On Wed, 2008-06-11 at 07:40 -0700, Richard L. Hamilton wrote: I'm not even trying to stripe it across multiple disks, I just want to add another partition (from the same physical disk) to the root pool. Perhaps that is a distinction without a difference, but my goal is to grow my root

Re: [zfs-discuss] disk names?

2008-06-05 Thread Bill Sommerfeld
On Wed, 2008-06-04 at 23:12 +, A Darren Dunham wrote: Best story I've heard is that it dates from the time before modifiable (or at least *easily* modifiable) slices existed. No hopping into 'format' or using 'fmthard'. Instead, your disk came with an entry in 'format.dat' with

Re: [zfs-discuss] ZFS root compressed ?

2008-06-05 Thread Bill Sommerfeld
On Thu, 2008-06-05 at 23:04 +0300, Cyril Plisko wrote: 1. Are there any reasons to *not* enable compression by default? Not exactly an answer: Most of the systems I'm running today on ZFS root have compression=on and copies=2 for rpool/ROOT 2. How can I do it? (I think I can run zfs set
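A sketch of setting the two properties mentioned above; the dataset name follows the rpool/ROOT convention from the message. Note that both properties only affect blocks written after they are set, so they are best enabled as early as possible:

```shell
# Enable compression and keep two copies of each block on the
# root dataset, then verify the settings and the achieved ratio.
zfs set compression=on rpool/ROOT
zfs set copies=2 rpool/ROOT
zfs get compression,copies,compressratio rpool/ROOT
```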

Re: [zfs-discuss] What is a vdev?

2008-05-23 Thread Bill Sommerfeld
On Fri, 2008-05-23 at 13:45 -0700, Orvar Korvar wrote: Ok, so I make one vdev out of 8 discs. And I combine all vdevs into one large zpool? Is it correct? I have an 8-port SATA card. I have 4 drives in one zpool. zpool create mypool raidz1 disk0 disk1 disk2 disk3 you have a pool consisting
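The 4-disk example from the thread, plus a sketch of how the pool grows by striping across a second vdev (disk names are placeholders):

```shell
# One pool -> one raidz1 vdev -> four disk leaves; "zpool status"
# displays exactly this tree.
zpool create mypool raidz1 disk0 disk1 disk2 disk3
zpool status mypool

# Adding a second raidz1 vdev later stripes the pool across both.
zpool add mypool raidz1 disk4 disk5 disk6 disk7
```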

Re: [zfs-discuss] ZFS ACLs/Samba integration

2008-03-17 Thread Bill Sommerfeld
On Fri, 2008-03-14 at 18:11 -0600, Mark Shellenbaum wrote: I think it is a misnomer to call the current implementation of ZFS a pure ACL system, as clearly the ACLs are heavily contaminated by legacy mode bits. Feel free to open an RFE. It may be a tough sell with PSARC, but maybe if

Re: [zfs-discuss] Can ZFS be event-driven or not?

2008-02-28 Thread Bill Sommerfeld
On Wed, 2008-02-27 at 13:43 -0500, Kyle McDonald wrote: How was it MVFS could do this without any changes to the shells or any other programs? In ClearCase I could 'grep FOO /dir1/dir2/file@@/main/*' to see which version of 'file' added FOO. (I think @@ was the special hidden key. It might

Re: [zfs-discuss] five megabytes per second with

2008-02-21 Thread Bill Sommerfeld
On Thu, 2008-02-21 at 11:06 -0800, John Tracy wrote: I've read that this behavior can be expected depending on how the LAG is set up, whether it divides or hashes up the data on a per-packet or per-source/destination basis, or other options. (this is a generic answer, not specific to zfs exported

Re: [zfs-discuss] [osol-code] /usr/bin and /usr/xpg4/bin differences

2007-12-18 Thread Bill Sommerfeld
On Sat, 2007-12-15 at 22:00 -0800, Sasidhar Kasturi wrote: If I want to make some modifications in the code, can I do it for the /usr/xpg4/bin commands, or should I do it for the /usr/bin commands? If possible (if there's no inherent conflict with either the applicable standards or existing practice)

Re: [zfs-discuss] What is the correct way to replace a good disk?

2007-11-02 Thread Bill Sommerfeld
On Fri, 2007-11-02 at 11:20 -0700, Chris Williams wrote: I have a 9-bay JBOD configured as a raidz2. One of the disks, which is on-line and fine, needs to be swapped out and replaced. I have been looking though the zfs admin guide and am confused on how I should go about swapping out. I
