Re: [zfs-discuss] ZFS Performance

2010-04-15 Thread Daniel Carosone
On Wed, Apr 14, 2010 at 09:58:50AM -0700, Richard Elling wrote: On Apr 14, 2010, at 8:57 AM, Yariv Graf wrote: From my experience dealing with 4TB you stop writing after 80% of zpool utilization YMMV. I have routinely completely filled zpools. There have been some improvements in

Re: [zfs-discuss] dedup causing problems with NFS?(was Re: snapshots taking too much space)

2010-04-14 Thread Daniel Carosone
On Wed, Apr 14, 2010 at 08:48:42AM -0500, Paul Archer wrote: So I turned deduplication on on my staging FS (the one that gets mounted on the database servers) yesterday, and since then I've been seeing the mount hang for short periods of time off and on. (It lights nagios up like a

Re: [zfs-discuss] dedup screwing up snapshot deletion

2010-04-14 Thread Daniel Carosone
On Wed, Apr 14, 2010 at 09:04:50PM -0500, Paul Archer wrote: I realize that I did things in the wrong order. I should have removed the oldest snapshot first, on to the newest, and then removed the data in the FS itself. For the problem in question, this is irrelevant. As discussed in the

Re: [zfs-discuss] How to Catch ZFS error with syslog ?

2010-04-12 Thread Daniel Carosone
On Mon, Apr 12, 2010 at 09:32:50AM -0600, Tim Haley wrote: Try explicitly enabling fmd to send to syslog in /usr/lib/fm/fmd/plugins/syslog-msgs.conf Wow, so useful, yet so well hidden I never even knew to look for it. Please can this be on by default? Please? -- Dan.
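
For example, after editing that file (the exact property name is in the shipped conf file, so check it there rather than trusting memory), restart the fault manager and confirm events are being recorded:

    # edit /usr/lib/fm/fmd/plugins/syslog-msgs.conf to enable syslog forwarding
    svcadm restart svc:/system/fmd:default
    fmdump | tail    # recent fault events, for cross-checking against syslog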

Re: [zfs-discuss] Create 1 pool from 3 existing pools in mirror configuration

2010-04-12 Thread Daniel Carosone
On Mon, Apr 12, 2010 at 06:17:47PM -0500, Harry Putnam wrote: But, I'm too unskilled in solaris and zfs admin to be risking a total melt down if I try that before gaining a more thorough understanding. Grab virtualbox or something similar and set yourself up a test environment. In general, and

Re: [zfs-discuss] ZFS RAID-Z1 Degraded Array won't import

2010-04-12 Thread Daniel Carosone
On Mon, Apr 12, 2010 at 08:01:27PM -0700, Peter Tripp wrote: So I decided I would attach the disks to 2nd system (with working fans) where I could backup the data to tape. So here's where I got dumb...I ran 'zpool export'. Of course, I never actually ended up attaching the disks to another

Re: [zfs-discuss] What happens when unmirrored ZIL log device is removed ungracefully

2010-04-11 Thread Daniel Carosone
On Sun, Apr 11, 2010 at 07:03:29PM -0400, Edward Ned Harvey wrote: Heck, even if the faulted pool spontaneously sent the server into an ungraceful reboot, even *that* would be an improvement. Please look at the pool property failmode. Both of the preferences you have expressed are available,
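
For example (assuming a pool named tank):

    zpool get failmode tank
    # wait (default, block I/O), continue (return EIO to new writes), or panic
    zpool set failmode=panic tank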

Re: [zfs-discuss] Sync Write - ZIL log performance - Feedback for ZFS developers?

2010-04-10 Thread Daniel Carosone
On Sat, Apr 10, 2010 at 11:50:05AM -0500, Bob Friesenhahn wrote: Huge synchronous bulk writes are pretty rare since usually the bottleneck is elsewhere, such as the ethernet. Also, large writes can go straight to the pool, and the zil only logs the intent to commit those blocks (ie, link them

Re: [zfs-discuss] ZFS RaidZ recommendation

2010-04-10 Thread Daniel Carosone
On Sat, Apr 10, 2010 at 12:56:04PM -0500, Tim Cook wrote: At that price, for the 5-in-3 at least, I'd go with supermicro. For $20 more, you get what appears to be a far more solid enclosure. My intent with that link was only to show an example, not make a recommendation. I'm glad others have

Re: [zfs-discuss] Create 1 pool from 3 existing pools in mirror configuration

2010-04-10 Thread Daniel Carosone
On Sat, Apr 10, 2010 at 02:51:45PM -0500, Harry Putnam wrote: [Note: This discussion started in another thread Subject: about backup and mirrored pools but the subject has been significantly changed so started a new thread] Bob Friesenhahn bfrie...@simple.dallas.tx.us writes:

Re: [zfs-discuss] Create 1 pool from 3 existing pools in mirror configuration

2010-04-10 Thread Daniel Carosone
On Sat, Apr 10, 2010 at 06:20:54PM -0500, Bob Friesenhahn wrote: Since he is already using mirrors, he already has enough free space since he can move one disk from each mirror to the main pool (which unfortunately, can't be the boot 'rpool' pool), send the data, and then move the second

Re: [zfs-discuss] ZFS RaidZ recommendation

2010-04-09 Thread Daniel Carosone
On Fri, Apr 09, 2010 at 10:21:08AM -0700, Eric Andersen wrote: If I could find a reasonable backup method that avoided external enclosures altogether, I would take that route. I'm tending to like bare drives. If you have the chassis space, there are 5-in-3 bays that don't need extra drive

Re: [zfs-discuss] vPool unavailable but RaidZ1 is online

2010-04-09 Thread Daniel Carosone
On Sun, Apr 04, 2010 at 07:13:58AM -0700, Kevin wrote: I am trying to recover a raid set, there are only three drives that are part of the set. I attached a disk and discovered it was bad. It was never part of the raid set. Are you able to tell us more precisely what you did with this disk?

Re: [zfs-discuss] ZFS RaidZ recommendation

2010-04-08 Thread Daniel Carosone
On Thu, Apr 08, 2010 at 12:14:55AM -0700, Erik Trimble wrote: Daniel Carosone wrote: Go with the 2x7 raidz2. When you start to really run out of space, replace the drives with bigger ones. While that's great in theory, there's getting to be a consensus that 1TB 7200RPM 3.5 Sata drives

Re: [zfs-discuss] ZFS RaidZ recommendation

2010-04-08 Thread Daniel Carosone
On Thu, Apr 08, 2010 at 03:48:54PM -0700, Erik Trimble wrote: Well To be clear, I don't disagree with you; in fact for a specific part of the market (at least) and a large part of your commentary, I agree. I just think you're overstating the case for the rest. The problem is (and this

Re: [zfs-discuss] ZFS RaidZ recommendation

2010-04-08 Thread Daniel Carosone
On Thu, Apr 08, 2010 at 08:36:43PM -0700, Richard Elling wrote: On Apr 8, 2010, at 6:19 PM, Daniel Carosone wrote: As for error rates, this is something zfs should not be afraid of. Indeed, many of us would be happy to get drives with less internal ECC overhead and complexity for greater

Re: [zfs-discuss] ZFS RaidZ recommendation

2010-04-07 Thread Daniel Carosone
Go with the 2x7 raidz2. When you start to really run out of space, replace the drives with bigger ones. You will run out of space eventually regardless; this way you can replace 7 at a time, not 14 at a time. With luck, each replacement will last you long enough that the next replacement will
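
A sketch of one such replacement round, with hypothetical device names; on builds that support it, autoexpand lets the pool grow once the last disk has been swapped:

    zpool set autoexpand=on tank
    zpool replace tank c1t0d0 c2t0d0   # one disk at a time
    zpool status tank                  # wait for resilver before the next swap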

Re: [zfs-discuss] refreservation and ZFS Volume

2010-04-06 Thread Daniel Carosone
On Tue, Apr 06, 2010 at 01:44:20PM -0400, Tony MacDoodle wrote: I am trying to understand how refreservation works with snapshots. If I have a 100G zfs pool I have 4 20G volume groups in that pool. refreservation = 20G on all volume groups. Now when I want to do a snapshot
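
One way to watch the interaction, assuming a hypothetical volume pool/vol1 with refreservation=20G; a snapshot has to leave room for the volume to diverge completely from it:

    zfs get refreservation,usedbydataset,usedbyrefreservation,usedbysnapshots pool/vol1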

Re: [zfs-discuss] ZFS on-disk DDT block arrangement

2010-04-06 Thread Daniel Carosone
On Wed, Apr 07, 2010 at 01:52:23AM +1000, taemun wrote: I was wondering if someone could explain why the DDT is seemingly (from empirical observation) kept in a huge number of individual blocks, randomly written across the pool, rather than just a large binary chunk somewhere. It's not really

Re: [zfs-discuss] refreservation and ZFS Volume

2010-04-06 Thread Daniel Carosone
On Wed, Apr 07, 2010 at 06:27:09AM +1000, Daniel Carosone wrote: You have reminded me.. I wrote some patches to the zfs manpage to help clarify this issue, while travelling, and never got around to posting them when I got back. I'll dig them up off my netbook later today. http

Re: [zfs-discuss] SSD sale on newegg

2010-04-06 Thread Daniel Carosone
On Tue, Apr 06, 2010 at 06:53:04PM -0700, Richard Elling wrote: Disagree. Swap is a perfectly fine workload for SSDs. Under ZFS, even more so. I'd really like to squash this rumour and thought we were making progress on that front :-( Today, there are millions or thousands of

Re: [zfs-discuss] Diagnosing Permanent Errors

2010-04-05 Thread Daniel Carosone
On Sun, Apr 04, 2010 at 11:46:16PM -0700, Willard Korfhage wrote: Looks like it was RAM. I ran memtest+ 4.00, and it found no problems. Then why do you suspect the ram? Especially with 12 disks, another likely candidate could be an overloaded power supply. While there may be problems showing

Re: [zfs-discuss] Removing SSDs from pool

2010-04-05 Thread Daniel Carosone
On Mon, Apr 05, 2010 at 07:43:26AM -0400, Edward Ned Harvey wrote: Is the database running locally on the machine? Or at the other end of something like nfs? You should have better performance using your present config than just about any other config ... By enabling the log devices, such as

Re: [zfs-discuss] ZFS: Raid and dedup

2010-04-05 Thread Daniel Carosone
On Mon, Apr 05, 2010 at 06:32:13PM -0700, Learner Study wrote: I'm wondering what is the correct flow when both raid5 and de-dup are enabled on a storage volume I think we should do de-dup first and then raid5 ... is that understanding correct? Not really. Strictly speaking, ZFS

Re: [zfs-discuss] ZFS: Raid and dedup

2010-04-05 Thread Daniel Carosone
On Mon, Apr 05, 2010 at 06:58:57PM -0700, Learner Study wrote: Hi Jeff: I'm a bit confused...did you say Correct to my orig email or the reply from Daniel... Jeff is replying to your mail, not mine. It looks like he's read your question a little differently. By that reading, you are

Re: [zfs-discuss] Diagnosing Permanent Errors

2010-04-05 Thread Daniel Carosone
On Mon, Apr 05, 2010 at 09:46:58PM -0500, Tim Cook wrote: On Mon, Apr 5, 2010 at 9:39 PM, Willard Korfhage opensola...@familyk.orgwrote: It certainly has symptoms that match a marginal power supply, but I measured the power consumption some time ago and found it comfortably within the

Re: [zfs-discuss] Diagnosing Permanent Errors

2010-04-05 Thread Daniel Carosone
On Mon, Apr 05, 2010 at 09:35:21PM -0700, Willard Korfhage wrote: By the way, I see that now one of the disks is listed as degraded - too many errors. Is there a good way to identify exactly which of the disks it is? It's hidden in iostat -E, of all places. -- Dan.
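
For example, to match the degraded vdev to a physical disk (hypothetical device name; vendor, model and serial number appear in each device's error block):

    iostat -En c7t3d0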

Re: [zfs-discuss] Diagnosing Permanent Errors

2010-04-05 Thread Daniel Carosone
On Tue, Apr 06, 2010 at 12:29:35AM -0500, Tim Cook wrote: On Tue, Apr 6, 2010 at 12:24 AM, Daniel Carosone d...@geek.com.au wrote: On Mon, Apr 05, 2010 at 09:35:21PM -0700, Willard Korfhage wrote: By the way, I see that now one of the disks is listed as degraded - too many errors

Re: [zfs-discuss] bit-flipping in RAM...

2010-03-31 Thread Daniel Carosone
On Thu, Apr 01, 2010 at 12:38:29AM +0100, Robert Milkowski wrote: So I wasn't saying that it can work or that it can work in all circumstances but rather I was trying to say that it probably shouldn't be dismissed on a performance argument alone as for some use cases It would be of great

Re: [zfs-discuss] zfs diff

2010-03-29 Thread Daniel Carosone
On Mon, Mar 29, 2010 at 06:38:47PM -0400, David Magda wrote: A new ARC case: I read this earlier this morning. Welcome news indeed! I have some concerns about the output format, having worked with similar requirements in the past. In particular: as part of the monotone VCS when reporting

Re: [zfs-discuss] zfs diff

2010-03-29 Thread Daniel Carosone
On Tue, Mar 30, 2010 at 12:37:15PM +1100, Daniel Carosone wrote: There will also need to be clear rules on output ordering, with respect to renames, where multiple changes have happened to renamed files. Separately, but relevant in particular to the above due to the potential for races: what

Re: [zfs-discuss] sharing a ssd between rpool and l2arc

2010-03-29 Thread Daniel Carosone
On Mon, Mar 29, 2010 at 01:10:22PM -0700, F. Wessels wrote: The caiman installer allows you to control the size of the partition on the boot disk but it doesn't allow you (at least I couldn't figure out how) to control the size of the slices. So you end with slice0 filling the entire

Re: [zfs-discuss] sharing a ssd between rpool and l2arc

2010-03-29 Thread Daniel Carosone
On Tue, Mar 30, 2010 at 03:13:45PM +1100, Daniel Carosone wrote: You can:
- install to a partition that's the size you want rpool
- expand the partition to the full disk
- expand the s2 slice to the full disk
- leave the s0 slice for rpool alone
- make another slice for l2arc
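
Once that extra slice exists, handing it to the data pool as L2ARC is a single command (hypothetical names):

    zpool add tank cache c4t0d0s1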

[zfs-discuss] on alignment and verification

2010-03-28 Thread Daniel Carosone
There's been some talk about alignment lately, both for flash and WD disks. What's missing, at least from my perspective, is a clear and unambiguous test so users can verify that their zfs pools are aligned correctly. This should be a test that sees through all the layers of BIOS and SMI/EFI and

Re: [zfs-discuss] on alignment and verification

2010-03-28 Thread Daniel Carosone
On Mon, Mar 29, 2010 at 12:21:39PM +1100, Daniel Carosone wrote: #1. Use xxd (or similar) to examine the contents of the raw disk This relies on knowing what to look for, and how that is aligned to the start of the partition and to metaslab addresses and offsets that determine the writes

Re: [zfs-discuss] on alignment and verification

2010-03-28 Thread Daniel Carosone
On Sun, Mar 28, 2010 at 09:32:02PM -0700, Richard Elling wrote: This is documented in the ZFS on disk format doc. Yep, I've been there in the meantime.. ;-) Use prtvtoc or format to see the beginning of the slice relative to the beginning of the partition. I dunno how you tell the start of
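
A minimal check along those lines, with a hypothetical device: the slice's starting sector (plus any fdisk partition offset) should be a multiple of 8 512-byte sectors for 4 KB alignment:

    prtvtoc /dev/rdsk/c0t0d0s2    # check the 'First Sector' column for each slice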

Re: [zfs-discuss] SSD As ARC

2010-03-27 Thread Daniel Carosone
On Sat, Mar 27, 2010 at 01:03:39AM -0700, Erik Trimble wrote: You can't share a device (either as ZIL or L2ARC) between multiple pools. Discussion here some weeks ago suggested that an L2ARC device was used for all ARC evictions, regardless of the pool. I'd very much like an

Re: [zfs-discuss] ZFS and 4kb sector Drives (All new western digital GREEN Drives?)

2010-03-27 Thread Daniel Carosone
On Fri, Mar 26, 2010 at 05:57:31PM -0700, Darren Mackay wrote: not sure if 32bit BSD supports 48bit LBA Solaris is the only otherwise-modern OS with this daft limitation. -- Dan.

Re: [zfs-discuss] ZFS and 4kb sector Drives (All new western digital GREEN Drives?)

2010-03-27 Thread Daniel Carosone
On Sat, Mar 27, 2010 at 08:47:26PM +1100, Daniel Carosone wrote: On Fri, Mar 26, 2010 at 05:57:31PM -0700, Darren Mackay wrote: not sure if 32bit BSD supports 48bit LBA Solaris is the only otherwise-modern OS with this daft limitation. Ok, it's not due to LBA48, but the 1Tb limitation

Re: [zfs-discuss] CR 6880994 and pkg fix

2010-03-24 Thread Daniel Carosone
On Tue, Mar 23, 2010 at 07:22:59PM -0400, Frank Middleton wrote: On 03/22/10 11:50 PM, Richard Elling wrote: Look again, the checksums are different. Whoops, you are correct, as usual. Just 6 bits out of 256 different... Look which bits are different - digits 24, 53-56 in both cases.

Re: [zfs-discuss] ZFS on a 11TB HW RAID-5 controller

2010-03-24 Thread Daniel Carosone
On Wed, Mar 24, 2010 at 08:02:06PM +0100, Svein Skogen wrote: Maybe someone should look at implementing the zfs code for the XScale range of io-processors (such as the IOP333)? NetBSD runs on (many of) those. NetBSD has an (in-progress, still-some-issues) ZFS port. Hopefully they will converge

Re: [zfs-discuss] pool use from network poor performance

2010-03-23 Thread Daniel Carosone
On Mon, Mar 22, 2010 at 10:58:05PM -0700, homerun wrote: if i access to datapool from network , smb , nfs , ftp , sftp , jne... i get only max 200 KB/s speeds compared to rpool that give XX MB/S speeds to and from network it is slow. Any ideas what reasons might be and how try to find

Re: [zfs-discuss] Intel SASUC8I - worth every penny

2010-03-21 Thread Daniel Carosone
On Sat, Mar 20, 2010 at 09:50:10PM -0700, Erik Trimble wrote: Nah, the 8x2.5-in-2 are $220, while the 5x3.5-in-3 are $120. And they have a sas expander inside, unlike every other variant of these I've seen so far. Cabling mess win. -- Dan.

Re: [zfs-discuss] ZFS+CIFS: Volume Shadow Services, or Simple Symlink?

2010-03-21 Thread Daniel Carosone
On Sun, Mar 21, 2010 at 08:59:29PM -0400, Edward Ned Harvey wrote: ln -s .zfs/snapshot snapshots Voila. All Windows or Mac or Linux or whatever users are able to easily access snapshots. Not being a CIFS user, could you clarify/confirm for me.. is this just a presentation issue, ie

Re: [zfs-discuss] Q : recommendations for zpool configuration

2010-03-19 Thread Daniel Carosone
On Fri, Mar 19, 2010 at 06:34:50PM +1100, taemun wrote: A pool with a 4-wide raidz2 is a completely nonsensical idea. No, it's not - not completely. It has the same amount of accessible storage as two striped mirrors. And would be slower in terms of IOPS, and be harder to upgrade in the

Re: [zfs-discuss] Q : recommendations for zpool configuration

2010-03-19 Thread Daniel Carosone
On Fri, Mar 19, 2010 at 12:59:39AM -0700, homerun wrote: Thanks for comments So possible choises are : 1) 2 2-way mirros 2) 4 disks raidz2 BTW , can raidz have spare ? so is there one posible choise more : 3 disks raidz with 1 spare ? raidz2 is basically this, with a pre-silvered
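
The two layouts under discussion, as commands (hypothetical devices):

    zpool create tank raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0
    zpool create tank raidz c1t0d0 c1t1d0 c1t2d0 spare c1t3d0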

Re: [zfs-discuss] ZFS Performance on SATA Drive

2010-03-18 Thread Daniel Carosone
On Thu, Mar 18, 2010 at 03:36:22AM -0700, Kashif Mumtaz wrote: I did another test on both machines, and write performance on ZFS is extraordinarily slow. In ZFS, data was being written at around 1037 kw/s while the disk remained busy

Re: [zfs-discuss] How to manage scrub priority or defer scrub?

2010-03-18 Thread Daniel Carosone
On Thu, Mar 18, 2010 at 05:21:17AM -0700, Tonmaus wrote: No, because the parity itself is not verified. Aha. Well, my understanding was that a scrub basically means reading all data, and compare with the parities, which means that these have to be re-computed. Is that correct? A scrub

Re: [zfs-discuss] dedupratio riddle

2010-03-18 Thread Daniel Carosone
As noted, the ratio calculation applies over the data on which dedup was attempted, not the whole pool. However, I saw a commit go by just in the last couple of days about the dedupratio calculation being misleading, though I didn't check the details. Presumably this will be reported differently from the

Re: [zfs-discuss] How to manage scrub priority or defer scrub?

2010-03-18 Thread Daniel Carosone
On Thu, Mar 18, 2010 at 09:54:28PM -0700, Tonmaus wrote: (and the details of how much and how low have changed a few times along the version trail). Is there any documentation about this, besides source code? There are change logs and release notes, and random blog postings along the way

Re: [zfs-discuss] ZFS Performance on SATA Drive

2010-03-17 Thread Daniel Carosone
On Wed, Mar 17, 2010 at 10:15:53AM -0500, Bob Friesenhahn wrote: Clearly there are many more reads per second occuring on the zfs filesystem than the ufs filesystem. yes Assuming that the application-level requests are really the same From the OP, the workload is a find /. So, ZFS makes

Re: [zfs-discuss] Thoughts on ZFS Pool Backup Strategies

2010-03-17 Thread Daniel Carosone
On Wed, Mar 17, 2010 at 08:43:13PM -0500, David Dyer-Bennet wrote: My own stuff is intended to be backed up by a short-cut combination -- zfs send/receive to an external drive, which I then rotate off-site (I have three of a suitable size). However, the only way that actually works so

Re: [zfs-discuss] zfs send and receive ... any ideas for FEC?

2010-03-11 Thread Daniel Carosone
On Wed, Mar 10, 2010 at 02:54:18PM +0100, Svein Skogen wrote: Are there any good options for encapsulating/decapsulating a zfs send stream inside FEC (Forward Error Correction)? This could prove very useful both for backup purposes, and for long-haul transmissions. I used par2 for this for
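
A sketch of that wrapping, assuming par2cmdline is installed and using hypothetical file names:

    zfs send tank/fs@backup > /backup/fs.zsend
    par2 create -r10 /backup/fs.zsend.par2 /backup/fs.zsend   # 10% recovery blocks
    par2 verify /backup/fs.zsend.par2                         # and 'par2 repair' if damaged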

Re: [zfs-discuss] zfs send and receive ... any ideas for FEC?

2010-03-11 Thread Daniel Carosone
On Thu, Mar 11, 2010 at 07:23:43PM +1100, Daniel Carosone wrote: You have reminded me to go back and look again, and either find that whatever issue was at fault last time was transient and now gone, or determine what it actually was and get it resolved. In case you want to: http

Re: [zfs-discuss] zfs send and receive ... any ideas for FEC?

2010-03-11 Thread Daniel Carosone
On Thu, Mar 11, 2010 at 02:00:41AM -0800, Svein Skogen wrote: I can't help but keep wondering if not some sort of FEC wrapper (optional of course) might solve both the backup and some of the long-distance-transfer (where retransmissions really isn't wanted) issues. Retransmissions aren't

Re: [zfs-discuss] What's the advantage of using multiple filesystems in a pool

2010-03-04 Thread Daniel Carosone
On Tue, Mar 02, 2010 at 03:14:04PM -0800, Richard Elling wrote: That is just a shorthand for snapshotting (snapshooting? :-) datasets. :-) There still is no pool snapshot feature. One could pick nits about zpool split .. -- Dan.

Re: [zfs-discuss] What's the advantage of using multiple filesystems in a

2010-03-04 Thread Daniel Carosone
In addition to all the other good advice in the thread, I will emphasise the benefit of having smaller snapshot granularity. I have found this to be one of the most valuable and compelling reasons when I have chosen to create a separate filesystem. If there's data that changes often and I

Re: [zfs-discuss] Expand zpool capacity

2010-03-02 Thread Daniel Carosone
For rpool, which has SMI labels and fdisk partitions, you need to expand the size of those, and then ZFS will notice (with or without autoexpand, depending on version). -- Dan.
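
A rough outline for a single-disk rpool on a hypothetical c0t0d0 (the fdisk and format steps are interactive, so this is only a sketch, not a recipe):

    fdisk /dev/rdsk/c0t0d0p0         # grow the Solaris fdisk partition first
    format                           # then grow slice 0 within it
    zpool online -e rpool c0t0d0s0   # ask ZFS to use the new space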

Re: [zfs-discuss] Expand zpool capacity

2010-03-02 Thread Daniel Carosone
On Tue, Mar 02, 2010 at 02:04:52PM -0800, Erik Trimble wrote: I don't believe that is true for VM installations like Vladimir's, though I certainly could be wrong. I think you are :-) Vladimir - I would say your best option is to simply back up your data from the OpenSolaris VM, and do

Re: [zfs-discuss] sizing for L2ARC and dedup...

2010-03-01 Thread Daniel Carosone
On Mon, Mar 01, 2010 at 09:22:38AM -0800, Richard Elling wrote: Once again, I'm assuming that each DDT entry corresponds to a record (slab), so to be exact, I would need to know the number of slabs (which doesn't currently seem possible). I'd be satisfied with a guesstimate based on

Re: [zfs-discuss] suggested ssd for zil

2010-02-28 Thread Daniel Carosone
Is there anything that is safe to use as a ZIL, faster than the Mtron but more appropriate for home than a Stec? ACARD ANS-9010, as mentioned several times here recently (also sold as hyperdrive5) -- Dan.

Re: [zfs-discuss] ZFS compression and deduplication on root pool on SSD

2010-02-28 Thread Daniel Carosone
On Sun, Feb 28, 2010 at 07:36:30PM -0800, Bill Sommerfeld wrote: To avoid this in the future, set PKG_CACHEDIR in your environment to point at a filesystem which isn't cloned by beadm -- something outside rpool/ROOT, for instance. +1 - I've just used a dataset mounted at /var/pkg/download,
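
For example (the dataset name is just an illustration):

    zfs create -o mountpoint=/var/pkg/download rpool/pkgcache   # a dataset at the default cache location, outside the BE clones
    export PKG_CACHEDIR=/export/pkgcache                        # or point the cache somewhere else entirely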

Re: [zfs-discuss] Recommendations required for home file server config

2010-02-25 Thread Daniel Carosone
On Wed, Feb 24, 2010 at 10:57:08AM +, li...@di.cx wrote: 2 x SuperMicro AOC-SAT2-MV8 SATA controllers (so 16 ports in total, plus 6 on the motherboard) What about case space for the disks? Disks: 3x40GB rpool mirror and spare on shelf. 3 way mirror if you really want and have the

Re: [zfs-discuss] SSDs with a SCSI SCA interface?

2010-02-23 Thread Daniel Carosone
On Tue, Feb 23, 2010 at 12:09:20PM -0800, Erik Trimble wrote: I've got stacks of both v20z/v40z hardware, plus a whole raft of IBM xSeries (/not/ System X) machines which really, really, really need an SSD for improved I/O. At this point, I'd kill for a parallel SCSI - SATA adapter

Re: [zfs-discuss] Lost disk geometry

2010-02-19 Thread Daniel Carosone
On Fri, Feb 19, 2010 at 01:15:17PM -0600, David Dyer-Bennet wrote: On Fri, February 19, 2010 13:09, David Dyer-Bennet wrote: Anybody know what the proper geometry is for a WD1600BEKT-6-1A13? It's not even in the data sheets any more! any such geometry has been entirely fictitious since

Re: [zfs-discuss] Poor ZIL SLC SSD performance

2010-02-19 Thread Daniel Carosone
On Fri, Feb 19, 2010 at 11:51:29PM +0100, Ragnar Sundblad wrote: On 19 feb 2010, at 23.40, Eugen Leitl wrote: On Fri, Feb 19, 2010 at 11:17:29PM +0100, Felix Buenemann wrote: I found the Hyperdrive 5/5M, which is a half-height drive bay sata ramdisk with battery backup and auto-backup to

Re: [zfs-discuss] Help with corrupted pool

2010-02-18 Thread Daniel Carosone
On Wed, Feb 17, 2010 at 11:37:54PM -0500, Ethan wrote: It seems to me that you could also use the approach of 'zpool replace' for That is true. It seems like it then have to rebuild from parity for every drive, though, which I think would take rather a long while, wouldn't it? No longer than

Re: [zfs-discuss] Help with corrupted pool

2010-02-18 Thread Daniel Carosone
On Thu, Feb 18, 2010 at 12:42:58PM -0500, Ethan wrote: On Thu, Feb 18, 2010 at 04:14, Daniel Carosone d...@geek.com.au wrote: Although I do notice that right now, it imports just fine using the p0 devices using just `zpool import q`, no longer having to use import -d with the directory

Re: [zfs-discuss] ZFS performance benchmarks in various configurations

2010-02-18 Thread Daniel Carosone
On Thu, Feb 18, 2010 at 10:39:48PM -0600, Bob Friesenhahn wrote: This sounds like an initial 'silver' rather than a 'resilver'. Yes, in particular it will be entirely sequential. ZFS resilver is in txg order and involves seeking. What I am interested in is the answer to these sort of

[zfs-discuss] getting tangled with recieved mountpoint properties

2010-02-17 Thread Daniel Carosone
I have a machine whose purpose is to be a backup server. It has a pool for holding backups from other machines, using zfs send|recv. Call the pool dpool. Inside there are datasets for hostname/poolname, for each of the received pools. All hosts have an rpool, some have other pools as well. So
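
A sketch of that layout and the replication step, with hypothetical host and snapshot names:

    zfs create dpool/hostA                 # one container per host, on the backup server
    # then, from hostA:
    zfs send -R rpool@backup-20100217 | ssh backupserver zfs recv -u -d dpool/hostA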

Re: [zfs-discuss] Help with corrupted pool

2010-02-17 Thread Daniel Carosone
On Wed, Feb 17, 2010 at 12:31:27AM -0500, Ethan wrote: And I just realized - yes, labels 2 and 3 are in the wrong place relative to the end of the drive; I did not take into account the overhead taken up by truecrypt when dd'ing the data. The raw drive is 1500301910016 bytes; the truecrypt

Re: [zfs-discuss] Help with corrupted pool

2010-02-17 Thread Daniel Carosone
On Wed, Feb 17, 2010 at 03:37:59PM -0500, Ethan wrote: On Wed, Feb 17, 2010 at 15:22, Daniel Carosone d...@geek.com.au wrote: I have not yet successfully imported. I can see two ways of making progress forward. One is forcing zpool to attempt to import using slice 2 for each disk rather than

Re: [zfs-discuss] Help with corrupted pool

2010-02-17 Thread Daniel Carosone
On Wed, Feb 17, 2010 at 04:48:23PM -0500, Ethan wrote: It looks like using p0 is exactly what I want, actually. Are s2 and p0 both the entire disk? No. s2 depends on there being a solaris partition table (Sun or EFI), and if there's also an fdisk partition table (disk shared with other OS), s2

Re: [zfs-discuss] false DEGRADED status based on cannot open device at boot.

2010-02-17 Thread Daniel Carosone
On Wed, Feb 17, 2010 at 05:28:03PM -0500, Dennis Clarke wrote: Good theory, however, this disk is fully external with its own power. It can still be commanded to offline state. -- Dan.

Re: [zfs-discuss] Help with corrupted pool

2010-02-17 Thread Daniel Carosone
On Wed, Feb 17, 2010 at 04:44:19PM -0500, Ethan wrote: There was no partitioning on the truecrypt disks. The truecrypt volumes occupied the whole raw disks (1500301910016 bytes each). The devices that I gave to the zpool on linux were the whole raw devices that truecrypt exposed (1500301647872

Re: [zfs-discuss] Help with corrupted pool

2010-02-17 Thread Daniel Carosone
On Wed, Feb 17, 2010 at 06:15:25PM -0500, Ethan wrote: Success! Awesome. Let that scrub finish before celebrating completely, but this looks like a good place to stop and consider what you want for an end state. -- Dan.

Re: [zfs-discuss] Proposed idea for enhancement - damage control

2010-02-17 Thread Daniel Carosone
On Wed, Feb 17, 2010 at 02:38:04PM -0500, Miles Nordin wrote: copies=2 has proven to be mostly useless in practice. I disagree. Perhaps my cases fit under the weasel-word mostly, but single-disk laptops are a pretty common use-case. If there were a real-world device that tended to randomly

Re: [zfs-discuss] SSD and ZFS

2010-02-16 Thread Daniel Carosone
On Mon, Feb 15, 2010 at 09:11:02PM -0600, Tracey Bernath wrote: On Mon, Feb 15, 2010 at 5:51 PM, Daniel Carosone d...@geek.com.au wrote: Just be clear: mirror ZIL by all means, but don't mirror l2arc, just add more devices and let them load-balance. This is especially true if you're

Re: [zfs-discuss] ZFS Mount Errors

2010-02-16 Thread Daniel Carosone
On Tue, Feb 16, 2010 at 06:20:05PM +0100, Juergen Nickelsen wrote: Tony MacDoodle tpsdoo...@gmail.com writes: Mounting ZFS filesystems: (1/6)cannot mount '/data/apache': directory is not empty (6/6) svc:/system/filesystem/local:default: WARNING: /usr/sbin/zfs mount -a failed: exit

Re: [zfs-discuss] Pool import with failed ZIL device now possible ?

2010-02-16 Thread Daniel Carosone
On Tue, Feb 16, 2010 at 02:53:18PM -0800, Christo Kutrovsky wrote: looking to answer myself the following question: Do I need to rollback all my NTFS volumes on iSCSI to the last available snapshot every time there's a power failure involving the ZFS storage server with a disabled ZIL. No,

Re: [zfs-discuss] Proposed idea for enhancement - damage control

2010-02-16 Thread Daniel Carosone
On Tue, Feb 16, 2010 at 06:28:05PM -0800, Richard Elling wrote: The problem is that MTBF measurements are only one part of the picture. Murphy's Law says something will go wrong, so also plan on backups. +n Imagine this scenario: You lost 2 disks, and unfortunately you lost the 2 sides of

Re: [zfs-discuss] Help with corrupted pool

2010-02-16 Thread Daniel Carosone
On Tue, Feb 16, 2010 at 10:06:13PM -0500, Ethan wrote: This is the current state of my pool: et...@save:~# zpool import pool: q id: 5055543090570728034 state: UNAVAIL status: One or more devices contains corrupted data. action: The pool cannot be imported due to damaged devices or

Re: [zfs-discuss] Help with corrupted pool

2010-02-16 Thread Daniel Carosone
On Wed, Feb 17, 2010 at 02:30:28PM +1100, Daniel Carosone wrote:
  c9t4d0s8  UNAVAIL  corrupted data
  c9t5d0s2  ONLINE
  c9t2d0s8  UNAVAIL  corrupted data
  c9t1d0s8  UNAVAIL  corrupted data
  c9t0d0s8  UNAVAIL  corrupted data
- zdb

Re: [zfs-discuss] Proposed idea for enhancement - damage control

2010-02-16 Thread Daniel Carosone
On Tue, Feb 16, 2010 at 04:47:11PM -0800, Christo Kutrovsky wrote: One of the ideas that sparkled is have a max devices property for each data set, and limit how many mirrored devices a given data set can be spread on. I mean if you don't need the performance, you can limit (minimize) the

Re: [zfs-discuss] Help with corrupted pool

2010-02-16 Thread Daniel Carosone
On Tue, Feb 16, 2010 at 11:39:39PM -0500, Ethan wrote: If slice 2 is the whole disk, why is zpool trying to using slice 8 for all but one disk? Because it's finding at least part of the labels for the pool member there. Please check the partition tables of all the disks, and use zdb -l on the
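
For example, to see which pool labels a given slice actually carries (hypothetical device):

    zdb -l /dev/rdsk/c9t1d0s8    # prints up to four labels: pool name, guid, vdev path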

Re: [zfs-discuss] Duplicating a system rpool

2010-02-16 Thread Daniel Carosone
On Tue, Feb 16, 2010 at 10:33:26PM -0600, David Dyer-Bennet wrote: Here's what I've started: I've created a mirrored pool called rp2 on the new disks, and I'm zfs send -R a current snapshot over to the new disks. In fact it just finished. I've got an altroot set, and obviously I gave

Re: [zfs-discuss] ZFS slowness under domU high load

2010-02-15 Thread Daniel Carosone
On Mon, Feb 15, 2010 at 01:45:57PM +0100, Bogdan ?ulibrk wrote: One more thing regarding SSD, will be useful to throw in additional SAS/SATA drive in to serve as L2ARC? I know SSD is the most logical thing to put as L2ARC, but will conventional drive be of *any* help in L2ARC? Only in

Re: [zfs-discuss] SSD and ZFS

2010-02-15 Thread Daniel Carosone
On Sun, Feb 14, 2010 at 11:08:52PM -0600, Tracey Bernath wrote: Now, to add the second SSD ZIL/L2ARC for a mirror. Just be clear: mirror ZIL by all means, but don't mirror l2arc, just add more devices and let them load-balance. This is especially true if you're sharing ssd writes with ZIL, as
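
In command form, with hypothetical SSD slices:

    zpool add tank log mirror c5t0d0s0 c5t1d0s0   # slog: mirror it
    zpool add tank cache c5t0d0s1 c5t1d0s1        # l2arc: just add both and let them load-balance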

Re: [zfs-discuss] Removing Cloned Snapshot

2010-02-12 Thread Daniel Carosone
On Fri, Feb 12, 2010 at 09:50:32AM -0500, Mark J Musante wrote: The other option is to zfs send the snapshot to create a copy instead of a clone. One day, in the future, I hope there might be a third option, somewhat as an optimisation. With dedup and bp-rewrite, a new operation could be
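
That second option, spelled out with hypothetical dataset names:

    zfs send pool/fs@snap | zfs recv pool/fs_copy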

Re: [zfs-discuss] ZFS ZIL + L2ARC SSD Setup

2010-02-12 Thread Daniel Carosone
On Fri, Feb 12, 2010 at 11:26:33AM -0800, Richard Elling wrote: Mathing around a bit, for a 300 GB L2ARC (apologies for the tab separation): size (GB) 300; size (sectors) 585937500; labels (sectors) 9232; available

Re: [zfs-discuss] Detach ZFS Mirror

2010-02-11 Thread Daniel Carosone
On Thu, Feb 11, 2010 at 02:50:06PM -0500, Tony MacDoodle wrote: I have a 2-disk/2-way mirror and was wondering if I can remove 1/2 the mirror and plunk it in another system? Yes. If you have a recent opensolaris, there is zpool split specifically to help this use case. Otherwise, you can
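
For example (hypothetical pool names):

    zpool split tank tank2   # detach one side of each mirror into a new, exported pool
    # then, on the other system:
    zpool import tank2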

Re: [zfs-discuss] Removing Cloned Snapshot

2010-02-11 Thread Daniel Carosone
On Thu, Feb 11, 2010 at 10:55:20PM -0500, Tony MacDoodle wrote: I am getting the following message when I try and remove a snapshot from a clone: bash-3.00# zfs destroy data/webser...@sys_unconfigd cannot destroy 'data/webser...@sys_unconfigd': snapshot has dependent clones use '-R' to
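
Spelled out with hypothetical names, -R takes the dependent clones down along with the snapshot, so be sure that is the intent (zfs promote on the clone is the alternative if the clone is the copy to keep):

    zfs destroy -R pool/fs@snap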

[zfs-discuss] lofi crypto pools and *cache properties

2010-02-10 Thread Daniel Carosone
Until zfs-crypto arrives, I am using a pool for sensitive data inside several files encrypted via lofi crypto. The data is also valuable, of course, so the pool is mirrored, with one file on each of several pools (laptop rpool, and a couple of usb devices, not always connected). These backing

Re: [zfs-discuss] verging OT: how to buy J4500 w/o overpriced

2010-02-10 Thread Daniel Carosone
On Wed, Feb 10, 2010 at 12:37:46PM -0500, rwali...@washdcmail.com wrote: I don't disagree with any of the facts you list, but I don't think the alternatives are fully described by Sun vs. much cheaper retail parts. We face exactly this same decision with buying RAM for our servers (maybe

Re: [zfs-discuss] ZFS replication primary secondary

2010-02-10 Thread Daniel Carosone
On Wed, Feb 10, 2010 at 05:36:10PM -0600, David Dyer-Bennet wrote: That's all about *ME* picking the suitable base snapshot, as I understand it. Correct. I understood the recent reference to be suggesting that I didn't have to, that zfs would figure it out for me. Which still appears to me

Re: [zfs-discuss] ZFS replication primary secondary

2010-02-10 Thread Daniel Carosone
On Wed, Feb 10, 2010 at 10:48:57PM -0600, David Dyer-Bennet wrote: But I see how it could indeed be useful in theory to send just a *little* extra if you weren't sure quite what was needed but could guess pretty closely. I think it's mostly for the benefit of retrying the same command, if

Re: [zfs-discuss] Dedup Questions.

2010-02-09 Thread Daniel Carosone
On Tue, Feb 09, 2010 at 08:26:42AM -0800, Richard Elling wrote: zdb -D poolname will provide details on the DDT size. FWIW, I have a pool with 52M DDT entries and the DDT is around 26GB. I wish -D was documented; I had forgotten about it and only found the (expensive) -S variant, which
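
For example (hypothetical pool name):

    zdb -D tank    # DDT entry counts and on-disk/in-core sizes
    zdb -DD tank   # adds the full dedup table histogram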

Re: [zfs-discuss] L2ARC in Cluster is picked up althought not part of the pool

2010-02-08 Thread Daniel Carosone
On Mon, Feb 01, 2010 at 12:22:55PM -0800, Lutz Schumann wrote: Created a pool on head1 containing just the cache device (c0t0d0). This is not possible, unless there is a bug. You cannot create a pool with only a cache device. I have verified this on b131: # zpool create

Re: [zfs-discuss] Intrusion Detection - powered by ZFS Checksumming ?

2010-02-08 Thread Daniel Carosone
On Mon, Feb 08, 2010 at 11:24:56AM -0800, Lutz Schumann wrote: Only with the zdb(1M) tool but note that the checksums are NOT of files but of the ZFS blocks. Thanks - bocks, right (doh) - thats what I was missing. Damn it would be so nice :( If you're comparing the current data to a

Re: [zfs-discuss] zpool list size

2010-02-08 Thread Daniel Carosone
On Mon, Feb 08, 2010 at 11:28:11PM +0100, Lasse Osterild wrote: Ok thanks I know that the amount of used space will vary, but what's the usefulness of the total size when ie in my pool above 4 x 1G (roughly, depending on recordsize) are reserved for parity, it's not like it's useable for
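
The practical distinction, for a hypothetical raidz pool named tank:

    zpool list tank   # raw vdev space, parity included
    zfs list tank     # space usable by datasets, parity excluded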
