[zfs-discuss] Borked zpool is now invulnerable

2006-05-18 Thread Jeremy Teo
Hello, while testing some code changes, I managed to fail an assertion while doing a zfs create. My zpool is now invulnerable to destruction. :(

bash-3.00# zpool destroy -f test_undo
internal error: unexpected error 0 at line 298 of ../common/libzfs_dataset.c
bash-3.00# zpool status
pool:

Re: [zfs-discuss] ZFS and Storage

2006-06-28 Thread Jeremy Teo
Hello, What I wanted to point out is Al's example: he wrote about damaged data. The data was damaged by the firmware, _not_ the disk surface! In such a case ZFS doesn't help. ZFS can detect (and repair) errors on the disk surface, bad cables, etc., but it cannot detect and repair errors in its own (ZFS) code. I

[zfs-discuss] Versioning in ZFS: Do we need it?

2006-10-05 Thread Jeremy Teo
What would versioning of files in ZFS buy us over a ZFS snapshots + cron solution? I can think of one: 1. The ability to get the prior version of anything at all (as richlowe puts it). Any others? -- Regards, Jeremy
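For comparison, the snapshots + cron approach can be sketched as follows. This is a minimal illustration, not an endorsed setup; the pool/dataset name and snapshot naming scheme are hypothetical:

```shell
# Hypothetical crontab entry: snapshot tank/home hourly, named by timestamp.
# (% must be escaped in crontab entries.)
# 0 * * * * /usr/sbin/zfs snapshot tank/home@auto-$(date +\%Y\%m\%d-\%H\%M)

# The same snapshot taken manually:
zfs snapshot tank/home@auto-20061005-1200

# Aged-out snapshots are reclaimed explicitly:
zfs destroy tank/home@auto-20061005-1200
```

The gap versioning would close is the window between snapshots: anything created and destroyed within one cron interval is unrecoverable.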

Re: [zfs-discuss] A versioning FS

2006-10-06 Thread Jeremy Teo
A couple of use cases I was considering offhand:
1. Oops, I truncated my file.
2. Oops, I saved over my file.
3. Oops, an app corrupted my file.
4. Oops, I rm -rf'ed the wrong directory.
All of which can be solved by periodic snapshots, but versioning gives us immediacy. So is immediacy worth it to you

[zfs-discuss] Self-tuning recordsize

2006-10-13 Thread Jeremy Teo
Would it be worthwhile to implement heuristics to auto-tune 'recordsize', or would that not be worth the effort? -- Regards, Jeremy ___ zfs-discuss mailing list zfs-discuss@opensolaris.org http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
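The manual tuning that such heuristics would replace looks like this (dataset name and value are illustrative; note that a recordsize change only affects files written after the change):

```shell
# Match recordsize to a database's 8K block size before loading data.
zfs set recordsize=8k tank/db

# Verify the property took effect.
zfs get recordsize tank/db
```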

Re: [zfs-discuss] Self-tuning recordsize

2006-10-17 Thread Jeremy Teo
Heya Roch, On 10/17/06, Roch [EMAIL PROTECTED] wrote: -snip- Oracle will typically create its files with 128K writes, not recordsize ones. Darn, that makes things difficult, doesn't it? :( Come to think of it, maybe we're approaching things from the wrong perspective. Databases such as Oracle

Re: [zfs-discuss] Re: Self-tuning recordsize

2006-10-17 Thread Jeremy Teo
Heya Anton, On 10/17/06, Anton B. Rang [EMAIL PROTECTED] wrote: No, the reason to try to match recordsize to the write size is so that a small write does not turn into a large read + a large write. In configurations where the disk is kept busy, multiplying 8K of data transfer up to 256K

Re: [zfs-discuss] zpool history integrated

2006-10-17 Thread Jeremy Teo
Kudos Eric! :) On 10/17/06, eric kustarz [EMAIL PROTECTED] wrote: Hi everybody, Yesterday I putback into nevada: PSARC 2006/288 zpool history 6343741 want to store a command history on disk This introduces a new subcommand to zpool(1m), namely 'zpool history'. Yes, team ZFS is tracking what

Re: [zfs-discuss] Re: Self-tuning recordsize

2006-10-22 Thread Jeremy Teo
Hello all, Isn't a large block size a simple case of prefetching? In other words, if we possessed an intelligent prefetch implementation, would there still be a need for large block sizes? (Thinking aloud) :) -- Regards, Jeremy

Re: [zfs-discuss] Changing number of disks in a RAID-Z?

2006-10-23 Thread Jeremy Teo
Hello, Shrinking the vdevs requires moving data. Once you move data, you've got to either invalidate the snapshots or update them. I think that will be one of the more difficult parts. Updating snapshots would be non-trivial, but doable. Perhaps some sort of reverse mapping or brute force

Re: [zfs-discuss] Re: copying a large file..

2006-10-30 Thread Jeremy Teo
This is the same problem described in 6343653: want to quickly copy a file from a snapshot. On 10/30/06, eric kustarz [EMAIL PROTECTED] wrote: Pavan Reddy wrote: This is the time it took to move the file: The machine is an Intel P4 - 512MB RAM.

bash-3.00# time mv ../share/pav.tar .
real
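Until 6343653 is addressed, copying out of a snapshot goes through the hidden .zfs directory and pays the full data-copy cost rather than sharing blocks. A sketch, with illustrative mountpoint and snapshot names:

```shell
# Snapshots are exposed read-only under .zfs/snapshot at the filesystem root.
ls /tank/share/.zfs/snapshot/

# Restore a single file from the (hypothetical) "monday" snapshot;
# this physically copies every block.
cp /tank/share/.zfs/snapshot/monday/pav.tar /tank/share/pav.tar
```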

Re: [zfs-discuss] Dead drives and ZFS

2006-11-14 Thread Jeremy Teo
On 11/14/06, Bill Sommerfeld [EMAIL PROTECTED] wrote: On Tue, 2006-11-14 at 03:50 -0600, Chris Csanady wrote: After examining the source, it clearly wipes the vdev label during a detach. I suppose it does this so that the machine can't get confused at a later date. It would be nice if the

Re: [zfs-discuss] replacing a drive in a raidz vdev

2006-12-05 Thread Jeremy Teo
On 12/5/06, Bill Sommerfeld [EMAIL PROTECTED] wrote: On Mon, 2006-12-04 at 13:56 -0500, Krzys wrote:

mypool2/[EMAIL PROTECTED]  34.4M     -   151G   -
mypool2/[EMAIL PROTECTED]   141K     -   189G   -
mypool2/d3                  492G  254G  11.5G   legacy

I am so confused with all of

Re: [zfs-discuss] Re: Production ZFS Server Death (06/06)

2006-12-07 Thread Jeremy Teo
The whole RAID does not fail -- we are talking about corruption here. If you lose some inodes, your whole partition is not gone. My ZFS pool could not be salvaged -- poof, the whole thing was gone (granted, it was a test pool and not a raidz or mirror yet). But still, for what happened, I cannot believe

Re: [zfs-discuss] replacing a drive in a raidz vdev

2006-12-09 Thread Jeremy Teo
Yes, but it's going to be a few months. I'll presume that we will get background disk scrubbing for free once you guys get bookmarking done. :) -- Regards, Jeremy

Re: [zfs-discuss] Instructions for ignoring ZFS write cache flushing on intelligent arrays

2006-12-15 Thread Jeremy Teo
The instructions will tell you how to configure the array to ignore SCSI cache flushes/syncs on Engenio arrays. If anyone has additional instructions for other arrays, please let me know and I'll be happy to add them! Wouldn't it be more appropriate to allow the administrator to disable ZFS
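On later OpenSolaris builds, a host-side alternative to reconfiguring the array is the zfs_nocacheflush tunable, which stops ZFS from issuing cache-flush requests at all. This is a sketch of that blunt instrument, safe only when every device behind every pool has battery-backed cache:

```shell
# /etc/system entry; takes effect after reboot and applies to ALL pools
# on the host, not per-pool.
echo "set zfs:zfs_nocacheflush = 1" >> /etc/system
```

This is exactly the per-zpool granularity problem raised in the thread: the knob is system-wide, so a pool on plain disks and a pool on an intelligent array cannot be treated differently.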

Re: [zfs-discuss] Instructions for ignoring ZFS write cache flushing on intelligent arrays

2006-12-15 Thread Jeremy Teo
On 12/16/06, Richard Elling [EMAIL PROTECTED] wrote: Jason J. W. Williams wrote: Hi Jeremy, It would be nice if you could tell ZFS to turn off fsync() for ZIL writes on a per-zpool basis. That being said, I'm not sure there's a consensus on that...and I'm sure not smart enough to be a ZFS

[zfs-discuss] How much do we really want zpool remove?

2007-01-18 Thread Jeremy Teo
On the issue of the ability to remove a device from a zpool, how useful/pressing is this feature? Or is this more along the lines of nice to have? -- Regards, Jeremy

[zfs-discuss] zpool split

2007-01-23 Thread Jeremy Teo
I'm defining zpool split as the ability to divide a pool into 2 separate pools, each with identical FSes. The typical use case would be to split a N disk mirrored pool into a N-1 pool and a 1 disk pool, and then transport the 1 disk pool to another machine. While contemplating zpool split
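Absent a real zpool split, the closest existing operation is detaching one side of each mirror — but as noted elsewhere on this list, detach wipes the vdev label, so the detached disk cannot simply be imported as a pool on another machine. A sketch of what split would shortcut (device names are illustrative):

```shell
# A two-way mirrored pool.
zpool status tank

# Today's only option: detach one side. The detached disk's label is
# erased, so the data on it is NOT importable elsewhere -- which is
# precisely why a first-class 'zpool split' is interesting.
zpool detach tank c1t1d0
```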

Re: [zfs-discuss] ZFS brings System to panic/freeze

2007-01-25 Thread Jeremy Teo
System specifications, please? On 1/25/07, ComCept Net GmbH Soliva [EMAIL PROTECTED] wrote: Hello, now I was configuring my system with RAID-Z and with spares (explained below). I would like to test the configuration; that is, after a successful config of ZFS I pulled out a disk of one of the

Re: [zfs-discuss] Re: Some questions I had while testing ZFS.

2007-01-25 Thread Jeremy Teo
This is 6456939: sd_send_scsi_SYNCHRONIZE_CACHE_biodone() can issue TUR which calls biowait() and deadlocks/hangs the host http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6456939 (Thanks to Tpenta for digging this up) -- Regards, Jeremy

Re: [zfs-discuss] Re: ZFS brings System to panic/freeze

2007-01-25 Thread Jeremy Teo
On 1/25/07, ComCept Net GmbH Andrea Soliva [EMAIL PROTECTED] wrote: Hi Jeremy, did I understand correctly that there is no workaround or patch available to solve this situation? Do not misunderstand me, but this issue (and it is not a small issue) dates from September 2006? Is this in work or..?

Re: [zfs-discuss] can I use zfs on just a partition?

2007-01-25 Thread Jeremy Teo
On 1/25/07, Tim Cook [EMAIL PROTECTED] wrote: Just want to verify: if I have, say, one 160GB disk, can I format it so that the first, say, 40GB is my main UFS partition with the base OS install, and then make the rest of the disk ZFS? Or even better yet, for testing purposes make two 60GB
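Yes — ZFS can be given an individual slice rather than a whole disk, though whole disks are preferred because ZFS can then safely enable the drive's write cache. A sketch with illustrative slice names:

```shell
# UFS root lives on s0; give ZFS a remaining slice of the same disk.
zpool create tank c0d0s4

# For testing only: a "mirror" across two slices of one disk
# (no real redundancy -- both halves die with the disk).
zpool create testpool mirror c0d0s5 c0d0s6
```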

Re: [zfs-discuss] restore pool from detached disk from mirror

2007-01-30 Thread Jeremy Teo
Hello, On 1/30/07, Robert Milkowski [EMAIL PROTECTED] wrote: Hello zfs-discuss, I had a pool with only two disks in a mirror. I detached one disk and later erased the first disk. Now I would really like to quickly get the data on the second disk available again. Other than detaching

Re: [zfs-discuss] ZFS checksums - block or file level

2007-02-01 Thread Jeremy Teo
On 2/1/07, Nathan Essex [EMAIL PROTECTED] wrote: I am trying to understand if zfs checksums apply at a file or a block level. We know that zfs provides end to end checksum integrity, and I assumed that when I write a file to a zfs filesystem, the checksum was calculated at a file level, as

Re: [zfs-discuss] Re: [osol-help] How to recover from rm *?

2007-02-19 Thread Jeremy Teo
Something similar was proposed here before and IIRC someone even has a working implementation. I don't know what happened to it. That would be me. AFAIK, no one really wanted it. The problem that it solves can be solved by putting snapshots in a cronjob. -- Regards, Jeremy

Re: [zfs-discuss] .zfs snapshot directory in all directories

2007-02-26 Thread Jeremy Teo
On 2/26/07, Thomas Garner [EMAIL PROTECTED] wrote: Since I have been unable to find the answer online, I thought I would ask here. Is there a knob to turn on a ZFS filesystem to put the .zfs snapshot directory into all of the child directories of the filesystem, like the .snapshot
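There is no knob to replicate .zfs into every child directory the way NetApp's .snapshot works; the only related control is the per-filesystem visibility of the single .zfs directory at the filesystem root (dataset name illustrative):

```shell
# Make the .zfs directory visible in directory listings (default: hidden).
zfs set snapdir=visible tank/home

# Even when hidden, the directory is still reachable by explicit path:
ls /tank/home/.zfs/snapshot
```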

Re: [zfs-discuss] Add mirror to an existing Zpool

2007-04-10 Thread Jeremy Teo
Read the man page for zpool. Specifically, zpool attach. On 4/10/07, Martin Girard [EMAIL PROTECTED] wrote: Hi, I have a zpool with only one disk. No mirror. I have some data in the file system. Is it possible to make my zpool redundant by adding a new disk in the pool and making it a mirror
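The attach operation in question turns a single-disk pool into a two-way mirror in place, resilvering the existing data onto the new disk (pool and device names are illustrative):

```shell
# Attach a second disk to the existing single-disk vdev, forming a mirror.
# Syntax: zpool attach <pool> <existing-device> <new-device>
zpool attach tank c0t0d0 c0t1d0

# Watch the resilver run to completion.
zpool status tank
```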