Philip Brown wrote:
> I've recently started down the road of production use for zfs, and am
> hitting my head on some paradigm shifts. I'd like to clarify whether my
> understanding is correct, and/or whether there are better ways of doing
> things.
> I have one question for replication, and one question for backups.
> These questions are all about the Solaris 10 production release (U5, I
> believe), not Solaris Express, etc.
First, zfs send/recv is not a backup/restore solution. You might be happier
using the available backup/restore solutions in the market.

> 1. For replication purposes: is it still true that the target filesystem
> has to be "offline" to receive even an incremental send? !!
> I find this difficult to understand; surely, it should be possible to
> "receive" to a snapshot at least?

sends are snapshots, so you are receiving a snapshot. Snapshots are
read-only, by definition. There are tricks you can play with clones, though
(rough sketch at the bottom of this mail).

> 2. For backup/restore purposes: a related question to the above, I suppose.
> Let's say that I had "major" damage to a filesystem which is an active NFS
> share, or something otherwise constantly in use.
> Either of the following behaviours would be really nice (and both would be
> better still :-)
>
> 2.1 do a receive of an earlier zfs send, to either a snapshot or a "child"
> filesystem, and then somehow "promote" some, or ALL, of the files to the
> main production filesystem, without interrupting the active NFS share too
> badly

I think you can do this, but it might need a staging file system. You might
find that scp or rsync is similarly effective (second sketch below).

> 2.2 do a receive of an earlier zfs send, to either a snapshot or a "child"
> filesystem, and be efficient about disk space used, i.e. have the receive
> understand, "hey, I have that file already, completely intact, so I'm not
> going to waste space by storing it again".

This question makes no sense to me. Perhaps you can rephrase?

> Related to the above: even if I HAVE 2x the disk space required by
> production, in order to have a "restoral staging area" on the machine...
> if it gets restored to a separate filesystem, I can't just do a quick
> "unshare /zfs/foo ; zfs rename /zfs/foo /zfs/foo.old; zfs rename
> /zfs/foo.restored /zfs/foo", because that will break all the client NFS
> handles, since it is a "new" filesystem, right? Or is that incorrect?

This should be possible, but you might find cp to be an alternative which
does not require blowing the NFS file handles (third sketch below).
 -- richard

> Suggestions on the above, and/or any related issues I haven't thought of,
> would be appreciated
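
On the clone trick: here is a rough, untested sketch of an incremental
send/recv followed by a clone to get a writable copy on the receive side.
The pool, dataset, and host names (tank/data as the source, backup/data on
a host called "barn") are made up for illustration, not from your setup.

    # initial full send, then a later incremental send
    zfs snapshot tank/data@monday
    zfs send tank/data@monday | ssh barn zfs recv backup/data
    zfs snapshot tank/data@tuesday
    zfs send -i monday tank/data@tuesday | ssh barn zfs recv -F backup/data

    # received snapshots are read-only; clone one to get a writable filesystem
    ssh barn zfs clone backup/data@tuesday backup/data_rw

The -F on the incremental receive rolls the target back to its most recent
snapshot before applying the stream, which is why the receiving filesystem
cannot also be a live, modified filesystem.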
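
On 2.1: a sketch of the staging-filesystem approach, with rsync to promote
only the files you need back into the live share. Again the names
(tank/data is the live NFS share, tank/restore a temporary staging
filesystem, @lastgood an earlier send you kept on the backup host) are
hypothetical.

    # receive the earlier send stream into a separate staging filesystem
    ssh barn zfs send backup/data@lastgood | zfs recv tank/restore

    # copy back only the damaged pieces; the live share stays mounted and shared
    rsync -a /tank/restore/proj/ /tank/data/proj/

    # clean up the staging area when you are done
    zfs destroy -r tank/restore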
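
On the last point: copying into the existing filesystem keeps the dataset
(and its fsid) the same, so NFS clients keep their file handles; the
unshare/rename shuffle presents the clients with a brand-new filesystem and
new handles. A minimal sketch, reusing your /zfs/foo paths; the subpaths
are made up:

    # /zfs/foo stays shared throughout; only file contents change
    cp -p /zfs/foo.restored/some/file /zfs/foo/some/file

    # or mirror the whole restored tree back over the live one
    rsync -a /zfs/foo.restored/ /zfs/foo/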