Re: [zfs-discuss] ZFS, ESX, and NFS. oh my!

2009-06-19 Thread Moore, Joe
Scott Meilicke wrote: Obviously iSCSI and NFS are quite different at the storage level, and I actually like NFS for the flexibility over iSCSI (quotas, reservations, etc.) Another key difference between them is that with iSCSI, the VMFS filesystem (built on the zvol presented as a block
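A minimal sketch of that difference, with hypothetical pool and dataset names: quotas and reservations apply per dataset in the NFS case, while the iSCSI case hands ESX a fixed-size zvol that it formats with VMFS.
  # NFS datastore: quota/reservation live on the ZFS dataset
  zfs create tank/esx/nfs01
  zfs set quota=500g tank/esx/nfs01
  zfs set reservation=250g tank/esx/nfs01
  zfs set sharenfs=on tank/esx/nfs01
  # iSCSI datastore: a fixed-size zvol, formatted with VMFS by ESX
  zfs create -V 500g tank/esx/iscsi01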

Re: [zfs-discuss] Monitoring ZFS host memory use

2009-05-07 Thread Moore, Joe
Carson Gaspar wrote: Not true. The script is simply not intelligent enough. There are really 3 broad kinds of RAM usage: A) Unused B) Unfreeable by the kernel (normal process memory) C) Freeable by the kernel (buffer cache, ARC, etc.) Monitoring usually should focus on keeping (A+C)
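Two quick ways to see how much of the freeable category the ARC is actually holding on a Solaris host (a sketch; exact output varies by build):
  # current ARC size in bytes
  kstat -p zfs:0:arcstats:size
  # kernel-wide memory breakdown (run as root)
  echo ::memstat | mdb -k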

Re: [zfs-discuss] rename(2), atomicity, crashes and fsync()

2009-03-18 Thread Moore, Joe
Joerg Schilling wrote: James Andrewartha jam...@daa.com.au wrote: Recently there's been discussion [1] in the Linux community about how filesystems should deal with rename(2), particularly in the case of a crash. ext4 was found to truncate, after a crash, files that had been written with

Re: [zfs-discuss] Nexsan SATABeast and ZFS

2009-03-11 Thread Moore, Joe
Lars-Gunnar Persson wrote: I would like to go back to my question for a second: I checked with my Nexsan supplier and they confirmed that access to every single disk in SATABeast is not possible. The smallest entities I can create on the SATABeast are RAID 0 or 1 arrays. With RAID 1 I'll
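If the SATABeast can only export RAID 0/1 arrays as LUNs, ZFS-level redundancy can still be layered on top by grouping those LUNs into a raidz2 vdev; a sketch with hypothetical LUN device names:
  # six RAID-1 LUNs from the array, grouped into one raidz2 vdev
  zpool create tank raidz2 c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0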

Re: [zfs-discuss] Nexsan SATABeast and ZFS

2009-03-10 Thread Moore, Joe
Bob Friesenhahn wrote: Your idea to stripe two disks per LUN should work. Make sure to use raidz2 rather than plain raidz for the extra reliability. This solution is optimized for high data throughput from one user. Striping two disks per LUN (RAID0 on 2 disks) and then adding a ZFS form of

Re: [zfs-discuss] zfs streams data corruption

2009-02-25 Thread Moore, Joe
Miles Nordin wrote: that SQLite2 should be as tolerant of snapshot backups as it is of cord-yanking. The special backup features of databases, including "performing a checkpoint" or whatever, are for systems incapable of snapshots, which is most of them. Snapshots are not
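The snapshot-as-backup pattern being argued for looks roughly like this (dataset names hypothetical); the snapshot is exactly as crash-consistent as a cord-yank:
  zfs snapshot tank/db@nightly
  # stream the snapshot elsewhere, or clone it to verify the database opens
  zfs send tank/db@nightly | ssh backuphost "cat > /backups/db.nightly.zfs"
  zfs clone tank/db@nightly tank/db-verify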

Re: [zfs-discuss] ZFS: unreliable for professional usage?

2009-02-23 Thread Moore, Joe
Mario Goebbels wrote: One thing I'd like to see is an _easy_ option to fall back onto older uberblocks when the zpool went belly up for a silly reason. Something that doesn't involve esoteric parameters supplied to zdb. Between uberblock updates, there may be many write operations to a data
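For reference, the vdev labels (and the uberblock array they carry) can at least be inspected today with zdb; the device path below is illustrative, and the friendlier rollback (zpool import -F) only appeared in later builds:
  # dump the four vdev labels from one disk of the pool
  zdb -l /dev/rdsk/c0t1d0s0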

Re: [zfs-discuss] replace same sized disk fails with too small error

2009-01-20 Thread Moore, Joe
Ross wrote: The problem is they might publish these numbers, but we really have no way of controlling what number manufacturers will choose to use in the future. If for some reason future 500GB drives all turn out to be slightly smaller than the current ones you're going to

Re: [zfs-discuss] replace same sized disk fails with too small error

2009-01-20 Thread Moore, Joe
Miles Nordin wrote: Moore, Joe joe.mo...@siemens.com writes: For a ZFS pool, (until block pointer rewrite capability) this would have to be a pool-create-time parameter. naw. You can just make ZFS do it all the time, like the other storage vendors do
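One way to get the same effect by hand today is to build the pool on slices sized slightly under the full disk, so a marginally smaller replacement still fits; a sketch with hypothetical device names:
  # build the pool on s0 slices a little smaller than the whole disk
  zpool create tank mirror c1t2d0s0 c1t3d0s0
  # later, a replacement drive a few MB smaller still attaches
  zpool replace tank c1t2d0s0 c1t4d0s0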

Re: [zfs-discuss] zfs subdirectories to data set conversion

2009-01-12 Thread Moore, Joe
Nicolas Williams wrote: It'd be awesome to have a native directory-to-dataset conversion feature in ZFS. And, relatedly, fast moves of files across datasets in the same volume. These two RFEs have been discussed to death on the list; see the archives. This would be a nice feature to have.
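Until such a feature exists, the workaround is the usual create-and-copy dance (paths hypothetical), which is exactly what makes the RFE attractive:
  # turn an existing directory into its own dataset, the slow way
  mv /tank/home/alice /tank/home/alice.old
  zfs create tank/home/alice
  cd /tank/home/alice.old && find . | cpio -pdm /tank/home/alice
  rm -rf /tank/home/alice.old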

Re: [zfs-discuss] ZFS, Smashing Baby a fake???

2008-11-25 Thread Moore, Joe
Ross Smith wrote: My justification for this is that it seems to me that you can split disk behavior into two states: - returns data ok - doesn't return data ok And for the state where it's not returning data, you can again split that in two: - returns wrong data - doesn't return data

Re: [zfs-discuss] ZFS, Smashing Baby a fake???

2008-11-24 Thread Moore, Joe
C. Bergström wrote: Will Murnane wrote: On Mon, Nov 24, 2008 at 10:40, Scara Maccai [EMAIL PROTECTED] wrote: Still don't understand why even the one on http://www.opensolaris.com/, ZFS - A Smashing Hit, doesn't show the app running at the moment the HD is smashed... weird...

Re: [zfs-discuss] OpenSolaris, thumper and hd

2008-10-15 Thread Moore, Joe
Tommaso Boccali wrote: Hello, I have a thumper with OpenSolaris (snv_91) and 48 disks. I would like to try a new brand of HD, by replacing a spare disk with a new one and building a zfs pool on it. Unfortunately the official utility to map a disk to the physical position inside the
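When the official mapping utility isn't available, the disk serial numbers and SATA ports can at least be cross-referenced with stock tools; a rough sketch:
  # serial number and error counters per disk
  iostat -En
  # SATA port to disk mapping as seen by the controllers
  cfgadm -al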

Re: [zfs-discuss] An slog experiment (my NAS can beat up your NAS)

2008-10-08 Thread Moore, Joe
Brian Hechinger wrote: On Mon, Oct 06, 2008 at 10:47:04AM -0400, Moore, Joe wrote: I wonder if an AVS-replicated storage device on the backends would be appropriate? write -> ZFS-mirrored slog -> ramdisk -AVS-> physical disk \ +-> iscsi -> ramdisk -AVS
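The ramdisk half of that picture is easy to sketch (names hypothetical); the AVS replication to a physical disk is what would keep a ramdisk-backed slog from losing committed writes across a crash:
  # create a 1 GB ramdisk and add it to the pool as a log device
  ramdiskadm -a slog0 1g
  zpool add tank log /dev/ramdisk/slog0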

Re: [zfs-discuss] An slog experiment (my NAS can beat up your NAS)

2008-10-06 Thread Moore, Joe
Nicolas Williams wrote: There have been threads about adding a feature to support slow mirror devices that don't stay synced synchronously. At least IIRC. That would help. But then, if the pool is busy writing then your slow ZIL mirrors would generally be out of sync, thus being of no help

Re: [zfs-discuss] Quantifying ZFS reliability

2008-10-01 Thread Moore, Joe
Toby Thain wrote: ZFS allows the architectural option of separate storage without losing end-to-end protection, so the distinction is still important. Of course this means ZFS itself runs on the application server, but so what? The OP in question is not running his network clients on

Re: [zfs-discuss] Quantifying ZFS reliability

2008-10-01 Thread Moore, Joe
Ian Collins wrote: I think you'd be surprised how large an organisation can migrate most, if not all, of their application servers to zones on one or two Thumpers. Isn't that the reason for buying in server appliances? Assuming that the application servers can coexist in the only 16GB

Re: [zfs-discuss] Quantifying ZFS reliability

2008-10-01 Thread Moore, Joe
Darren J Moffat wrote: Moore, Joe wrote: Given the fact that NFS, as implemented in his client systems, provides no end-to-end reliability, the only data protection that ZFS has any control over is after the write() is issued by the NFS server process. NFS can provide on-the-wire

Re: [zfs-discuss] [storage-discuss] iscsi target problems on snv_97

2008-09-17 Thread Moore, Joe
I believe the problem you're seeing might be related to a deadlock condition (CR 6745310); if you run pstack on the iscsi target daemon you might find a bunch of zombie threads. The fix was putback in snv_99, give snv_99 a try. Yes, a pstack of the core I've generated from iscsitgtd does have
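For anyone else checking for the same symptom, a couple of ways to look at the target daemon's threads (core-file path hypothetical):
  # stack of every thread in the running daemon
  pstack $(pgrep -x iscsitgtd)
  # or against a saved core file
  pstack /var/cores/core.iscsitgtd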

[zfs-discuss] iscsi target problems on snv_97

2008-09-16 Thread Moore, Joe
I've recently upgraded my x4500 to Nevada build 97, and am having problems with the iscsi target. Background: this box is used to serve NFS underlying a VMware ESX environment (zfs filesystem-type datasets) and presents iSCSI targets (zfs zvol datasets) for a Windows host and to act as
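For context, the zvol-backed targets on a box like this can be listed with the stock tools (a sketch for the old iscsitgt daemon on builds of this vintage):
  # zvols that back the iSCSI LUNs
  zfs list -t volume
  # targets as seen by the iscsitgt daemon
  iscsitadm list target -v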

Re: [zfs-discuss] X4540

2008-07-11 Thread Moore, Joe
Bob Friesenhahn wrote: I expect that Sun is realizing that it is already undercutting much of the rest of its product line. These minor updates would allow the X4540 to compete against much more expensive StorageTek SAN hardware. Assuming, of course, that the requirements for the more expensive

Re: [zfs-discuss] proposal partial/relative paths for zfs(1)

2008-07-10 Thread Moore, Joe
Carson Gaspar wrote: Darren J Moffat wrote: $ pwd /cube/builds/darrenm/bugs $ zfs create -c 6724478 Why -c? -c for current directory; -p (partial) is already taken to mean create all non-existing parents, and -r (relative) is already used consistently as recurse in other zfs(1)
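In other words, the proposal would let the quoted session replace today's spelled-out form (same dataset as in the quoted example; -c is the proposed option, not an existing flag):
  # today: the full dataset path must be given
  zfs create cube/builds/darrenm/bugs/6724478
  # proposed: resolve the parent dataset from the current directory
  zfs create -c 6724478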

Re: [zfs-discuss] ZFS deduplication

2008-07-08 Thread Moore, Joe
Bob Friesenhahn wrote: Something else came to mind which is a negative regarding deduplication. When zfs writes new sequential files, it should try to allocate blocks in a way which minimizes fragmentation (disk seeks). It should, but because of its copy-on-write nature, fragmentation

Re: [zfs-discuss] Help! ZFS pool is UNAVAILABLE

2008-01-02 Thread Moore, Joe
I AM NOT A ZFS DEVELOPER. These suggestions should work, but there may be other people who have better ideas. Aaron Berland wrote: Basically, I have a 3-drive raidz array on internal Seagate drives, running Nevada build 64. I purchased 3 add'l USB drives with the intention of mirroring and then
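First steps worth trying before anything destructive, roughly (pool name hypothetical):
  # what does the kernel currently think of the pool?
  zpool status -xv
  # if it was exported or the devices moved, scan for importable pools
  zpool import
  zpool import tank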

[zfs-discuss] ZIL and snapshots

2007-12-13 Thread Moore, Joe
I'm using an x4500 as a large data store for our VMware environment. I have mirrored the first 2 disks, and created a ZFS pool of the other 46: 22 pairs of mirrors, and 2 spares (optimizing for random I/O performance rather than space). Datasets are shared to the VMware ESX servers via NFS. We
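A compressed sketch of that layout (controller/target names hypothetical, and only the first two mirror pairs shown):
  # 22 mirror pairs plus 2 spares; NFS share for the ESX hosts
  zpool create tank mirror c0t1d0 c1t1d0 mirror c0t2d0 c1t2d0 spare c6t7d0 c7t7d0
  zfs create tank/vmware
  zfs set sharenfs=on tank/vmware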

Re: [zfs-discuss] ZIL and snapshots

2007-12-13 Thread Moore, Joe
Have you thought of solid state cache for the ZIL? There's a 16GB battery backed PCI card out there, I don't know how much it costs, but the blog where I saw it mentioned a 20x improvement in performance for small random writes. Thought about it, looked in the Sun Store, couldn't find

Re: [zfs-discuss] ZFS + DB + fragments

2007-11-21 Thread Moore, Joe
BillTodd wrote: In order to be reasonably representative of a real-world situation, I'd suggest the following additions: Your suggestions (make the benchmark big enough so seek times are really noticed) are good. I'm hoping that over the holidays, I'll get to play with an extra server...

Re: [zfs-discuss] ZFS + DB + fragments

2007-11-20 Thread Moore, Joe
Louwtjie Burger wrote: Richard Elling wrote: - COW probably makes that conflict worse This needs to be proven with a reproducible, real-world workload before it makes sense to try to solve it. After all, if we cannot measure where we are, how can we prove that we've

Re: [zfs-discuss] HAMMER

2007-11-05 Thread Moore, Joe
Peter Tribble wrote: I'm not worried about the compression effect. Where I see problems is backing up millions/tens of millions of files in a single dataset. Backing up each file is essentially a random read (and this isn't helped by raidz, which gives you a single disk's worth of random read
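This is where zfs send is kinder than file-level backup over raidz: it streams the dataset in bulk rather than issuing one random read per file. A sketch with hypothetical names:
  zfs snapshot tank/maildir@monday
  zfs send tank/maildir@monday | ssh backuphost "zfs receive backup/maildir"
  # subsequent runs only ship the changed blocks
  zfs snapshot tank/maildir@tuesday
  zfs send -i @monday tank/maildir@tuesday | ssh backuphost "zfs receive backup/maildir"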

Re: [zfs-discuss] future ZFS Boot and ZFS copies

2007-10-03 Thread Moore, Joe
Jesus Cea wrote: Darren J Moffat wrote: Why would you do that when it would reduce your protection and ZFS boot can boot from a mirror anyway. I guess ditto blocks would be protection enough, since the data would be duplicated between both disks. Of course, backups are your friend.
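The ditto-block knob under discussion is the copies property (dataset name hypothetical); note it protects against bad blocks, not against losing a whole disk the way a mirror does:
  # keep two copies of every block, placed on different disks where possible
  zfs set copies=2 rpool/ROOT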

Re: [zfs-discuss] space allocation vs. thin provisioning

2007-09-14 Thread Moore, Joe
Mike Gerdts wrote: I'm curious as to how ZFS manages space (free and used) and how its usage interacts with thin provisioning provided by HDS arrays. Is there any effort to minimize the number of provisioned disk blocks that get writes so as to not negate any space benefits that thin
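On the ZFS side, the closest analogue is a sparse zvol, which reserves no space up front (names hypothetical); how well the array's thin pool stays thin underneath still depends on where ZFS chooses to write:
  # -s skips the reservation, so array blocks are only consumed as written
  zfs create -s -V 500g tank/thinvol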

Re: [zfs-discuss] Force ditto block on different vdev?

2007-08-10 Thread Moore, Joe
Frank Cusack wrote (Friday, August 10, 2007 7:26 AM, to Tuomas Leikola, Cc: zfs-discuss@opensolaris.org): On August 10, 2007 2:20:30 PM +0300 Tuomas Leikola [EMAIL

Re: [zfs-discuss] ZFS and powerpath

2007-07-23 Thread Moore, Joe
Brian Wilson wrote: On Jul 16, 2007, at 6:06 PM, Torrey McMahon wrote: Darren Dunham wrote: My previous experience with powerpath was that it rode below the Solaris device layer. So you couldn't cause trespass by using the wrong device. It would just go to powerpath which would

[zfs-discuss] ZFS mirroring vs. ditto blocks

2007-05-23 Thread Moore, Joe
Has anyone done a comparison of the reliability and performance of a mirrored zpool vs. a non-redundant zpool using ditto blocks? What about a gut-instinct about which will give better performance? Or do I have to wait until my Thumper arrives to find out for myself? Also, in selecting where a
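For concreteness, the two configurations being compared (device names hypothetical); the first survives the loss of a whole disk, the second only block-level damage:
  # either: a two-way mirror
  zpool create tank mirror c1t0d0 c2t0d0
  # or: a stripe of the same two disks, with two copies of each block
  zpool create tank c1t0d0 c2t0d0
  zfs set copies=2 tank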