- A public interface to get the property state
That would come from libzfs. There are private interfaces just now that
are very likely what you need zfs_prop_get()/zfs_prop_set(). They aren't
documented or public though and are subject to change at any time.
mmm, as the state of the
Darren J Moffat darr...@opensolaris.org wrote:
You cannot get a single file out of the zfs send datastream.
I don't see that as part of the definition of a backup - you obviously
do - so we will just have to disagree on that.
If you need to set up a file server of the same size as the
Hi,
I agree 100% with Chris.
Notice the "on their own" part of the original post. Yes, nobody wants
to run zfs send or (s)tar by hand.
That's why Chris's script is so useful: you set it up, forget about it, and it
gets the job done for 80% of home users.
On another note, I was positively surprised by the
On Sat, March 20, 2010 07:31, Chris Gerhard wrote:
Up to a point. zfs send | zfs receive does make a very good backup scheme
for the home user with a moderate amount of storage, especially when the
entire backup will fit on a single drive, which I think would cover the
majority of home
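The scheme described above can be sketched as a short script. All the names here (tank/home, the backup pool, the snapshot labels) are illustrative assumptions, and DRY_RUN defaults to on so the sketch only prints the zfs commands it would run rather than claiming to be a drop-in tool:

```shell
# Sketch of the home scheme: snapshot, then send only the delta to a pool
# on the external drive. Dataset/pool/snapshot names are assumptions;
# DRY_RUN=1 (the default) prints the zfs commands instead of running them.
DRY_RUN=${DRY_RUN:-1}
SRC_FS="tank/home"
BACKUP_POOL="backup"
PREV="weekly-1"   # snapshot already present on both pools
CURR="weekly-2"   # snapshot to create now

run() {
  if [ "$DRY_RUN" = 1 ]; then echo "+ $*"; else eval "$*"; fi
}

run "zfs snapshot -r ${SRC_FS}@${CURR}"
# -R replicates descendant datasets and properties; -i sends only the
# changes between PREV and CURR; -F rolls the target back to the common
# snapshot before applying the delta.
run "zfs send -R -i @${PREV} ${SRC_FS}@${CURR} | zfs receive -Fdu ${BACKUP_POOL}"
```

After the receive completes, the external drive holds a browsable copy of every dataset and its snapshots, which is what makes single-file restores possible again.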
The only tool I'm aware of today that provides a copy of the data,
and all of the ZPL metadata and all the ZFS dataset properties is 'zfs
send'.
AFAIK, this is correct.
Further, the only type of tool that can backup a pool is a tool like
dd.
How is it different to backup a pool, versus
Responses inline below...
On Sat, Mar 20, 2010 at 00:57, Edward Ned Harvey solar...@nedharvey.com wrote:
1. NDMP for putting zfs send streams on tape over the network. So
Tell me if I missed something here. I don't think I did. I think this
sounds like crazy talk.
I used NDMP up till
I'll say it again: neither 'zfs send' nor (s)tar is an enterprise (or
even home) backup system on their own; one or both can be components of
the full solution.
Up to a point. zfs send | zfs receive does make a very good backup scheme for
the home user with a moderate amount of
On Fri, Mar 19, 2010 at 11:57 PM, Edward Ned Harvey
solar...@nedharvey.com wrote:
I used NDMP up till November, when we replaced
5+ years ago the variety of NDMP that was available with the
combination of NetApp's OnTap and Veritas NetBackup did backups at the
volume level. When I needed to go to tape to recover a file that was
no longer in snapshots, we had to find space on a NetApp to restore
the volume. It could
Ahhh, this has been...interesting...some real personalities involved in this
discussion. :p The following is long-ish, but I thought a re-cap was in order.
I'm sure we'll never finish this discussion, but I want to at least have a new
plateau or base from which to consider these questions.
I've
Now, NDMP doesn't do you much good for a locally attached tape drive,
as Darren and Svein pointed out. However, provided the software which is
installed on this fictional server can talk to the tape in an appropriate way,
then all you have to do is pipe zfs send into it. Right? What did I
Darren J Moffat darren.mof...@oracle.com wrote:
That assumes you are writing the 'zfs send' stream to a file or file-like
media. In many cases, people using 'zfs send' for their backup strategy
are writing it back out using 'zfs recv' into another pool. In those cases
the files
Mike Gerdts mger...@gmail.com wrote:
another server, where the data is immediately fed through zfs receive then
it's an entirely viable backup technique.
Richard Elling made an interesting observation that suggests that
storing a zfs send data stream on tape is a quite reasonable thing to
Damon (and others)
For those wanting the ability to perform file backups/restores along with all
metadata, without resorting to third-party applications: if you have a Sun
support contract, log a call asking that your organisation be added to the list
of users who want to see RFE #5004379
Darren J Moffat darr...@opensolaris.org wrote:
I'm curious: why isn't a 'zfs send' stream that is stored on a tape a backup,
yet the implication is that a tar archive stored on a tape is considered a
backup?
You cannot get a single file out of the zfs send datastream.
ZFS system attributes (as
On 19 March 2010, at 17:11, Joerg Schilling wrote:
You cannot get a single file out of the zfs send datastream.
zfs send is a block-level
Erik,
I don't think there was any confusion about the block nature of zfs send
vs. the file nature of star. I think what this discussion is coming down to
is the best ways to utilize zfs send as a backup, since (as Darren Moffat has
noted) it supports all the ZFS objects and metadata.
I see 2
ZFS+CIFS even provides Windows Volume Shadow Services so that Windows
users can do this on their own.
I'll need to look into that, when I get a moment. I'm not familiar with
Windows Volume Shadow Services, but having people at home able to do this
directly seems useful.
I'd like to spin
I would be pretty comfortable with a solution thusly designed:
#1 A small number of external disks, zfs send onto the disks and
I used NDMP up till November, when we replaced our NetApp with a Solaris Sun
box. In NDMP, to choose the source files, we had the
Damon Atkins damon_atk...@yahoo.com.au wrote:
I vote for zfs needing a backup and a restore command that work against a
snapshot. The backup command should output on stderr at least:
Full_Filename SizeBytes Modification_Date_1970secSigned
so backup software can build indexes, while stdout contains the data.
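The proposed split of index and data can be emulated today with ordinary tools. The sketch below is a hypothetical stand-in using find and tar (GNU stat syntax assumed), not the proposed zfs command, which would read directly from a snapshot:

```shell
# Hypothetical stand-in for the proposed interface: archive data on stdout,
# a "Full_Filename SizeBytes Modification_Date_1970sec" index on stderr so
# backup software can catalogue the stream as it is written.
# (GNU stat/tar assumed; a real zfs command would read a snapshot instead.)
backup_with_index() {
  dir="$1"
  find "$dir" -type f -exec stat -c '%n %s %Y' {} \; >&2   # index -> stderr
  tar -C "$dir" -cf - .                                    # data  -> stdout
}

# Usage: stream to tape or a file, index to the catalogue:
#   backup_with_index /tank/home/.zfs/snapshot/today > data.tar 2> index.txt
```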
Svein Skogen sv...@stillbilde.net wrote:
Please, don't compare proper backup drives to that rotating-head
non-standard catastrophe... DDS was (in)famous for being a delayed-fuse
tape-shredder.
DDS was a WOM (write-only memory) type device. It did not report write errors,
and it had many read
My own stuff is intended to be backed up by a short-cut combination --
zfs send/receive to an external drive, which I then rotate off-site (I
have three of a suitable size). However, the only way that actually
works so far is to destroy the pool (not just the filesystem) and
recreate it from
From what I've read so far, zfs send is a block-level API and thus cannot be
used for real backups. As a result of being block-level oriented, the
Weirdo. The above "cannot be used for real backups" is obviously
subjective, is incorrect, and has been widely discussed here, so I just say weirdo.
I'm tired
Edward Ned Harvey solar...@nedharvey.com wrote:
I invite everybody to join star development at:
We know, you have an axe to grind. Don't insult some other product just
because it's not the one you personally work on. Yours is better in some
ways, and zfs send is better in some ways.
If you
Hi all
On Thursday 18 March 2010 13:54:52 Joerg Schilling wrote:
If you have no technical issues to discuss, please stop insulting
people/products.
We are on OpenSolaris and we don't like this kind of discussion on the
mailing lists. Please act collaboratively.
May I suggest this to both
Carsten Aulbert carsten.aulb...@aei.mpg.de wrote:
In case of 'star' the blob coming out of it might also be useless if you
don't have star (or other tools) around for deciphering it - very unlikely,
but still possible ;)
I invite you to inform yourself about star and to test it yourself.
Darren J Moffat darren.mof...@oracle.com wrote:
So exactly what makes it unsuitable for backup ?
Is it the file format or the way the utility works ?
If it is the format what is wrong with it ?
If it is the utility what is needed to fix that ?
This has been discussed many
joerg.schill...@fokus.fraunhofer.de (Joerg Schilling) wrote:
This has been discussed many times in the past already.
If you archive the incremental zfs send data streams, you cannot
extract single files, and it seems that this cannot be fixed without
introducing a different archive format.
On Wed, Mar 17, 2010 at 9:15 AM, Edward Ned Harvey
solar...@nedharvey.com wrote:
I think what you're saying is: Why bother trying to backup with zfs send
when the recommended practice, fully supportable, is to use other tools
for backup, such as tar, star, Amanda, bacula, etc. Right?
The
A system with 100TB of data is 80% full, and a user asks their local system
admin to restore a directory with large files, as it was 30 days ago, with all
Windows/CIFS ACLs and NFSv4 ACLs etc.
If we used zfs send, we would need to go back to a zfs send from some 30 days
ago, and find 80TB of disk space
On 18/03/2010 12:54, joerg.schill...@fokus.fraunhofer.de wrote:
It has been widely discussed here already that the output of zfs send cannot be
used as a backup.
First define exactly what you mean by backup. Please don't confuse
backup and archival; they aren't the same thing.
It would also
On 18.03.2010 18:21, Darren J Moffat wrote:
As to your two questions above, I'll try to answer them from my limited
understanding of the issue.
The format: Isn't fault tolerant. In the least. One single bit wrong and
the entire stream is invalid.
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1
On 18.03.2010 18:28, Darren J Moffat wrote:
On 18/03/2010 17:26, Svein Skogen wrote:
The utility: Can't handle streams being split (in case of streams being
larger than a single backup media).
I think it should be possible to store the 'zfs send'
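The splitting itself works on any byte stream: a stream too large for one medium can be cut into media-sized pieces and reassembled losslessly. The piece size and file names below are illustrative (a real send stream would be piped the same way, e.g. zfs send tank/fs@snap | split -b 200g - piece.):

```shell
# Cut a byte stream into fixed-size pieces (one per tape/disk) and prove
# the reassembled stream is bit-identical. Sizes and names are illustrative;
# make_stream is a stand-in for 'zfs send'.
work=$(mktemp -d) && cd "$work"
make_stream() { head -c 3145728 /dev/urandom; }

make_stream > stream.bin
split -b 1m stream.bin piece.        # -> piece.aa piece.ab piece.ac
cat piece.?? > reassembled.bin       # the glob expands in sorted order
cmp -s stream.bin reassembled.bin && echo "streams identical"
```

The hard part the thread identifies is not the splitting but the robotics: something has to pause the pipe while the next tape is loaded, which is exactly what a real backup utility around the stream would provide.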
On 18/03/2010 17:34, Svein Skogen wrote:
How would NDMP help with this any more than running a local pipe
splitting the stream (and handling the robotics for feeding in the next
tape)?
Probably doesn't in that case.
I can see the point of NDMP when the tape library isn't physically connected
As to your two questions above, I'll try to answer them from my limited
understanding of the issue.
The format: Isn't fault tolerant. In the least. One single bit wrong and
the entire stream is invalid. A FEC wrapper would fix this.
I've logged CR# 6936195 "ZFS send stream while checksumed isn't fault tolerant" to keep track of that.
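Until that CR is addressed, the gap can at least be narrowed from "silent failure on restore" to "detected failure before restore" by checksumming the stream as it is written to media; actual repair would need an FEC wrapper around the stream (par2 is one assumed example, not something the thread prescribes). A minimal sketch, using a plain file as a stand-in for the stream:

```shell
# Detect-before-restore: checksum the stream when writing it to media and
# verify before piping it into 'zfs receive'. This only detects corruption;
# repairing it would need real FEC (e.g. a par2 wrapper - an assumption).
work=$(mktemp -d) && cd "$work"
head -c 65536 /dev/zero > stream.bin     # stand-in for a send stream
sha256sum stream.bin > stream.bin.sha256

sha256sum -c stream.bin.sha256 >/dev/null 2>&1 && echo "stream intact"

# A single flipped byte invalidates the whole stream, and the manifest
# catches it before any restore is attempted:
printf 'X' | dd of=stream.bin bs=1 seek=100 conv=notrunc 2>/dev/null
sha256sum -c stream.bin.sha256 >/dev/null 2>&1 || echo "corruption detected"
```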
On 18.03.2010 17:49, erik.ableson wrote:
Conceptually, think of a ZFS system as a SAN Box with built-in asynchronous
replication (free!) with block-level granularity. Then look at your other
backup requirements and attach whatever is required to
djm == Darren J Moffat darren.mof...@oracle.com writes:
djm I've logged CR# 6936195 ZFS send stream while checksumed
djm isn't fault tolerant to keep track of that.
Other tar/cpio-like tools are also able to:
* verify the checksums without extracting (like scrub)
* verify or even
c == Miles Nordin car...@ivy.net writes:
mg == Mike Gerdts mger...@gmail.com writes:
c are compatible with the goals of an archival tool:
sorry, obviously I meant ``not compatible''.
mg Richard Elling made an interesting observation that suggests
mg that storing a zfs send data
On 03/18/10 12:07 PM, Khyron wrote:
Ian,
When you say you spool to tape for off-site archival, what software do
you
use?
NetVault.
--
Ian.
On Mar 18, 2010, at 15:00, Miles Nordin wrote:
Admittedly the second bullet is hard to manage while still backing up
zvol's, pNFS / Lustre data-node datasets, windows ACL's, properties,
Some commercial backup products are able to parse VMware's VMDK files
to get file system information of
On 03/17/2010 08:28 PM, Khyron wrote:
The Best Practices Guide is also very clear about send and receive NOT
being designed explicitly for backup purposes. I find it odd that so many
people seem to want to force this point. ZFS appears to have been designed
to allow the use of well known tools
On 17.03.2010 12:28, Khyron wrote:
Note to readers: There are multiple topics discussed herein. Please
identify which idea(s) you are responding to, should you respond. Also
make sure to take in all of this before responding. Something you
On 17.03.2010 13:31, Svein Skogen wrote:
On 17.03.2010 12:28, Khyron wrote:
*SNIP*
How does backing up the NFSv4 ACLs help you back up a zvol (shared for
Stephen Bunn scb...@sbunn.org wrote:
between our machine's pools and our backup server pool. It would be
nice, however, if some sort of enterprise level backup solution in the
style of ufsdump was introduced to ZFS.
Star can do the same as ufsdump does, but independent of OS and filesystem.
Exactly!
This is what I meant, at least when it comes to backing up ZFS datasets.
There are tools available NOW, such as Star, which will backup ZFS datasets
due to the POSIX nature of those datasets. As well, Amanda, Bacula,
NetBackup, Networker, and probably some others I missed. Re-inventing
The one thing that I keep thinking, and which I have yet to see
discredited, is that ZFS file systems use POSIX semantics. So, unless you
are using specific features (notably ACLs, as Paul Henson is), you should
be able to backup those file systems using well known tools.
This is
To be sure, Ed, I'm not asking:
Why bother trying to backup with zfs send when there are fully supportable
and working options available right NOW?
Rather, I am asking:
Why do we want to adapt zfs send to do something it was never intended
to do, and probably won't be adapted to do (well, if at
I think what you're saying is: Why bother trying to backup with zfs send
when the recommended practice, fully supportable, is to use other tools
for backup, such as tar, star, Amanda, bacula, etc. Right?
The answer to this is very simple.
#1 ...
#2 ...
Oh, one more thing. zfs send
Why do we want to adapt zfs send to do something it was never intended
to do, and probably won't be adapted to do (well, if at all) anytime
soon, instead of optimizing existing technologies for this use case?
The only time I see or hear of anyone using zfs send in a way it wasn't
intended is
On 17.03.2010 16:19, Edward Ned Harvey wrote:
*snip
Still ... If you're in situation (b) then you want as many options available
to you as possible. I've helped many people and/or companies before, who
... Had backup media, but didn't have the
k == Khyron khyron4...@gmail.com writes:
k Star is probably perfect once it gets ZFS (e.g. NFS v4) ACL
nope, because snapshots are lost and clones are expanded wrt their
parents, and the original tree of snapshots/clones can never be
restored.
we are repeating, though. This is all in
la == Lori Alt lori@sun.com writes:
la This is no longer the case. The send stream format is now
la versioned in such a way that future versions of Solaris will
la be able to read send streams generated by earlier versions of
la Solaris.
Your memory of the thread is
On Wed, March 17, 2010 10:19, Edward Ned Harvey wrote:
However, removable disks are not very
reliable compared to tapes, and the disks are higher cost per GB, and
require more volume in the safe deposit box, so the external disk usage is
limited... Only going back for 2-4 weeks of
On 03/18/10 03:53 AM, David Dyer-Bennet wrote:
Anybody using the in-kernel CIFS is also concerned with the ACLs, and I
think that's the big issue.
Especially in a paranoid organisation with 100s of ACEs!
Also, snapshots. For my purposes, I find snapshots at some level a very
important
Ian,
When you say you spool to tape for off-site archival, what software do you
use?
On Wed, Mar 17, 2010 at 18:53, Ian Collins i...@ianshome.com wrote:
SNIP
I have been using a two stage backup process with my main client,
send/receive to a backup pool and spool to tape for off site
On 3/17/2010 17:53, Ian Collins wrote:
On 03/18/10 03:53 AM, David Dyer-Bennet wrote:
Also, snapshots. For my purposes, I find snapshots at some level a very
important part of the backup process. My old scheme was to rsync from
primary ZFS pool to backup ZFS pool, and snapshot both pools
The advantage of zfs providing the command is that